diff --git a/.travis.yml b/.travis.yml index 0b25cff718..e8d1e49c04 100644 --- a/.travis.yml +++ b/.travis.yml @@ -1,7 +1,7 @@ language: c script: - - sh ./scripts/check.sh - - ./scripts/badge.sh + - 'if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then sh ./scripts/check.sh; fi' + - 'if [ "$TRAVIS_PULL_REQUEST" = "false" ]; then sh ./scripts/badge.sh; fi' branches: only: - master diff --git a/published/20171202 Simulating the Altair.md b/published/20171202 Simulating the Altair.md new file mode 100644 index 0000000000..e59c3c913c --- /dev/null +++ b/published/20171202 Simulating the Altair.md @@ -0,0 +1,69 @@ +模拟 Altair 8800 计算机 +====== + +[Altair 8800][1] 是 1975 年发布的自建家用电脑套件。Altair 基本上是第一台个人电脑(PC),虽然 PC 这个名词是好几年之后才出现的。对 Dell、HP 或者 Macbook 而言它是亚当(或者夏娃)。 + +有些人认为为 Z80(与 Altair 的 Intel 8080 密切相关的处理器)编写仿真器真是太棒了,并认为它需要一个模拟 Altair 的控制面板。所以如果你想知道 1975 年使用电脑是什么感觉,你可以在你的 Macbook 上运行 Altair: + +![Altair 8800][2] + +### 安装它 + +你可以从[这里][3]的 FTP 服务器下载 Z80 包。你要查找最新的 Z80 包版本,例如 `z80pack-1.26.tgz`。 + +首先解压文件: + +``` +$ tar -xvf z80pack-1.26.tgz +``` + +进入解压目录: + +``` +$ cd z80pack-1.26 +``` + +控制面板模拟基于名为 `frontpanel` 的库。你必须先编译该库。如果你进入 `frontpanel` 目录,你会发现 `README` 文件列出了这个库自己的依赖项。你在这里的体会几乎肯定会与我的不同,但也许我的痛苦可以作为例子。我安装了依赖项,但是是通过 [Homebrew][4] 安装的。为了让库能够编译,我必须确保在 `Makefile.osx` 中将 `/usr/local/include` 添加到 Clang 的 include 路径中。 + +如果你觉得依赖没有问题,那么你应该就能编译这个库(我们现在位于 `z80pack-1.26/frontpanel`): + +``` +$ make -f Makefile.osx ... +$ make -f Makefile.osx clean +``` + +你应该会得到 `libfrontpanel.so`。我把它拷贝到了 `/usr/local/lib`。 + +Altair 模拟器位于 `z80pack-1.26/altairsim` 下。你现在需要编译模拟器本身。进入 `z80pack-1.26/altairsim/srcsim` 并再次运行 `make`: + +``` +$ make -f Makefile.osx ... +$ make -f Makefile.osx clean +``` + +该过程将在 `z80pack-1.26/altairsim` 中创建一个名为 `altairsim` 的可执行文件。运行该可执行文件,你应该会看到标志性的 Altair 控制面板! 
+ +如果你想要探究,请阅读原始的 [Altair 手册][5]。 + +如果你喜欢这篇文章,我们每两周更新一次!在 Twitter 上关注 [@TwoBitHistory][6] 或订阅 [RSS 源][7]了解什么时候有新文章。 + +-------------------------------------------------------------------------------- + +via: https://twobithistory.org/2017/12/02/simulating-the-altair.html + +作者:[Two-Bit History][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twobithistory.org +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Altair_8800 +[2]: https://www.autometer.de/unix4fun/z80pack/altair.png +[3]: http://www.autometer.de/unix4fun/z80pack/ftp/ +[4]: http://brew.sh/ +[5]: http://www.classiccmp.org/dunfield/altair/d/88opman.pdf +[6]: https://twitter.com/TwoBitHistory +[7]: https://twobithistory.org/feed.xml diff --git a/translated/tech/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md b/published/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md similarity index 69% rename from translated/tech/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md rename to published/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md index 898955242a..a34c575261 100644 --- a/translated/tech/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md +++ b/published/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md @@ -1,4 +1,4 @@ -Flameshot – 一个简洁但功能丰富的截图工具 +Flameshot:一个简洁但功能丰富的截图工具 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/09/Flameshot-720x340.png) @@ -10,11 +10,13 @@ Flameshot – 一个简洁但功能丰富的截图工具 **在 Arch Linux 上:** Flameshot 可以从 Arch LInux 的 [community] 仓库中获取。确保你已经启用了 community 仓库,然后就可以像下面展示的那样使用 pacman 来安装 Flameshot : + ``` $ sudo pacman -S flameshot ``` 它也可以从 [**AUR**][1] 中获取,所以你还可以使用任意一个 AUR 帮助程序(例如 [**Yay**][2])来在基于 Arch 的系统中安装它: + ``` $ yay -S flameshot-git 
``` @@ -26,6 +28,7 @@ $ sudo dnf install flameshot ``` 在 **Debian 10+** 和 **Ubuntu 18.04+** 中,可以使用 APT 包管理器来安装它: + ``` $ sudo apt install flameshot ``` @@ -35,97 +38,105 @@ $ sudo apt install flameshot ``` $ sudo zypper install flameshot ``` + 在其他的 Linux 发行版中,可以从源代码编译并安装它。编译过程中需要 **Qt version 5.3** 以及 **GCC 4.9.2** 或者它们的更高版本。 ### 使用 -可以从菜单或者应用启动器中启动 Flameshot。在 MATE 桌面环境,它通常可以在 **Applications - > Graphics** 下找到。 +可以从菜单或者应用启动器中启动 Flameshot。在 MATE 桌面环境,它通常可以在 “Applications -> Graphics” 下找到。 一旦打开了它,你就可以在系统面板中看到 Flameshot 的托盘图标。 **注意:** -假如你使用 Gnome 桌面环境,为了能够看到系统托盘图标,你需要安装 [TopIcons][3] 扩展。 +假如你使用 Gnome 桌面环境,为了能够看到系统托盘图标,你需要安装 [TopIcons][3] 扩展。 在 Flameshot 托盘图标上右击,你便会看到几个菜单项,例如打开配置窗口、信息窗口以及退出该应用。 -要进行截图,只需要点击托盘图标就可以了。接着你将看到如何使用 Flameshot 的帮助窗口。选择一个截图区域,然后敲 **ENTER** 键便可以截屏了,点击右键便可以看到颜色拾取器,再敲空格键便可以查看屏幕侧边的面板。你可以使用鼠标的滚轮来增加或者减少指针的宽度。 +要进行截图,只需要点击托盘图标就可以了。接着你将看到如何使用 Flameshot 的帮助窗口。选择一个截图区域,然后敲回车键便可以截屏了,点击右键便可以看到颜色拾取器,再敲空格键便可以查看屏幕侧边的面板。你可以使用鼠标的滚轮来增加或者减少指针的宽度。 Flameshot 自带一系列非常好的功能,例如: - * 可以进行手写 - * 可以划直线 - * 可以画长方形或者圆形框 - * 可以进行长方形区域选择 - * 可以画箭头 - * 可以对要点进行标注 - * 可以添加文本 - * 可以对图片或者文字进行模糊处理 - * 可以展示图片的尺寸大小 - * 在编辑图片是可以进行撤销和重做操作 - * 可以将选择的东西复制到剪贴板 - * 可以保存选择 - * 可以离开截屏 - * 可以选择另一个 app 来打开图片 - * 可以上传图片到 imgur 网站 - * 可以将图片固定到桌面上 +* 可以进行手写 +* 可以画直线 +* 可以画长方形或者圆形框 +* 可以进行长方形区域选择 +* 可以画箭头 +* 可以对要点进行标注 +* 可以添加文本 +* 可以对图片或者文字进行模糊处理 +* 可以展示图片的尺寸大小 +* 在编辑图片时可以进行撤销和重做操作 +* 可以将选择的东西复制到剪贴板 +* 可以保存选区 +* 可以离开截屏 +* 可以选择另一个 app 来打开图片 +* 可以上传图片到 imgur 网站 +* 可以将图片固定到桌面上 下面是一个示例的视频: -**快捷键** +### 快捷键 -Frameshot 也支持快捷键。在 Flameshot 的托盘图标上右击并点击 **Information** 窗口便可以看到在 GUI 模式下所有可用的快捷键。下面是在 GUI 模式下可用的快捷键清单: +Flameshot 也支持快捷键。在 Flameshot 的托盘图标上右击并点击 “Information” 窗口便可以看到在 GUI 模式下所有可用的快捷键。下面是在 GUI 模式下可用的快捷键清单: | 快捷键 | 描述 | |------------------------|------------------------------| -| ←, ↓, ↑, → | 移动选择区域 1px | -| Shift + ←, ↓, ↑, → | 将选择区域大小更改 1px | -| Esc | 退出截图 | -| Ctrl + C | 复制到粘贴板 | -| Ctrl + S | 将选择区域保存为文件 | -| Ctrl + Z | 撤销最近的一次操作 | -| Right Click | 展示颜色拾取器 | -| Mouse Wheel | 改变工具的宽度 | +| 
`←`、`↓`、`↑`、`→` | 移动选择区域 1px | +| `Shift` + `←`、`↓`、`↑`、`→` | 将选择区域大小更改 1px | +| `Esc` | 退出截图 | +| `Ctrl` + `C` | 复制到粘贴板 | +| `Ctrl` + `S` | 将选择区域保存为文件 | +| `Ctrl` + `Z` | 撤销最近的一次操作 | +| 鼠标右键 | 展示颜色拾取器 | +| 鼠标滚轮 | 改变工具的宽度 | -边按住 Shift 键并拖动选择区域的其中一个控制点将会对它相反方向的控制点做类似的拖放操作。 +边按住 `Shift` 键并拖动选择区域的其中一个控制点将会对它相反方向的控制点做类似的拖放操作。 -**命令行选项** +### 命令行选项 Flameshot 也支持一系列的命令行选项来延时截图和保存图片到自定义的路径。 要使用 Flameshot GUI 模式,运行: + ``` $ flameshot gui ``` 要使用 GUI 模式截屏并将你选取的区域保存到一个自定义的路径,运行: + ``` $ flameshot gui -p ~/myStuff/captures ``` 要延时 2 秒后打开 GUI 模式可以使用: + ``` $ flameshot gui -d 2000 ``` 要延时 2 秒并将截图保存到一个自定义的路径(无 GUI)可以使用: + ``` $ flameshot full -p ~/myStuff/captures -d 2000 ``` 要截图全屏并保存到自定义的路径和粘贴板中使用: + ``` $ flameshot full -c -p ~/myStuff/captures ``` -要在截屏中包含鼠标并将图片保存为 **PNG** 格式可以使用: +要在截屏中包含鼠标并将图片保存为 PNG 格式可以使用: + ``` $ flameshot screen -r ``` 要对屏幕 1 进行截屏并将截屏复制到粘贴板中可以运行: + ``` $ flameshot screen -n 1 -c ``` @@ -143,7 +154,7 @@ via: https://www.ostechnix.com/flameshot-a-simple-yet-powerful-feature-rich-scre 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20140607 Five things that make Go fast.md b/published/201810/20140607 Five things that make Go fast.md similarity index 100% rename from published/20140607 Five things that make Go fast.md rename to published/201810/20140607 Five things that make Go fast.md diff --git a/translated/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md b/published/201810/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md similarity index 92% rename from translated/tech/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md rename to published/201810/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md index 2b3a558191..0667575e63 100644 --- a/translated/tech/20161014 
Compiling Lisp to JavaScript From Scratch in 350 LOC.md +++ b/published/201810/20161014 Compiling Lisp to JavaScript From Scratch in 350 LOC.md @@ -1,28 +1,23 @@ -# 用 350 行代码从零开始,将 Lisp 编译成 JavaScript +用 350 行代码从零开始,将 Lisp 编译成 JavaScript +====== -我们将会在本篇文章中看到从零开始实现的编译器,将简单的类 LISP 计算语言编译成 JavaScript。完整的源代码在 [这里][7]. +我们将会在本篇文章中看到从零开始实现的编译器,将简单的类 LISP 计算语言编译成 JavaScript。完整的源代码在 [这里][7]。 我们将会: 1. 自定义语言,并用它编写一个简单的程序 - 2. 实现一个简单的解析器组合器 - 3. 为该语言实现一个解析器 - 4. 为该语言实现一个美观的打印器 - -5. 为我们的需求定义 JavaScript 的一个子集 - +5. 为我们的用途定义 JavaScript 的一个子集 6. 实现代码转译器,将代码转译成我们定义的 JavaScript 子集 - 7. 把所有东西整合在一起 开始吧! -### 1. 定义语言 +### 1、定义语言 -lisps 最迷人的地方在于,它们的语法就是树状表示的,这就是这门语言很容易解析的原因。我们很快就能接触到它。但首先让我们把自己的语言定义好。关于我们语言的语法的范式(BNF)描述如下: +Lisp 族语言最迷人的地方在于,它们的语法就是树状表示的,这就是这门语言很容易解析的原因。我们很快就能接触到它。但首先让我们把自己的语言定义好。关于我们语言的语法的范式(BNF)描述如下: ``` program ::= expr @@ -35,17 +30,17 @@ expr ::= | | ([]) 该语言中,我们保留一些内建的特殊形式,这样我们就能做一些更有意思的事情: -* let 表达式使我们可以在它的 body 环境中引入新的变量。语法如下: +* `let` 表达式使我们可以在它的 `body` 环境中引入新的变量。语法如下: -``` + ``` let ::= (let ([]) ) letargs ::= ( ) body ::= ``` -* lambda 表达式:也就是匿名函数定义。语法如下: +* `lambda` 表达式:也就是匿名函数定义。语法如下: -``` + ``` lambda ::= (lambda ([]) ) ``` @@ -94,12 +89,11 @@ data Atom 另一件你想做的事情可能是在语法中添加一些注释信息。比如定位:`Expr` 是来自哪个文件的,具体到这个文件的哪一行哪一列。你可以在后面的阶段中使用这一特性,打印出错误定位,即使它们不是处于解析阶段。 * _练习 1_:添加一个 `Program` 数据类型,可以按顺序包含多个 `Expr` - * _练习 2_:向语法树中添加一个定位注解。 -### 2. 
实现一个简单的解析器组合库 +### 2、实现一个简单的解析器组合库 -我们要做的第一件事情是定义一个嵌入式领域专用语言(Embedded Domain Specific Language 或者 EDSL),我们会用它来定义我们的语言解析器。这常常被称为解析器组合库。我们做这件事完全是出于学习的目的,Haskell 里有很好的解析库,在实际构建软件或者进行实验时,你应该使用它们。[megaparsec][8] 就是这样的一个库。 +我们要做的第一件事情是定义一个嵌入式领域专用语言Embedded Domain Specific Language(EDSL),我们会用它来定义我们的语言解析器。这常常被称为解析器组合库。我们做这件事完全是出于学习的目的,Haskell 里有很好的解析库,在实际构建软件或者进行实验时,你应该使用它们。[megaparsec][8] 就是这样的一个库。 首先我们来谈谈解析库的实现的思路。本质上,我们的解析器就是一个函数,接受一些输入,可能会读取输入的一些或全部内容,然后返回解析出来的值和无法解析的输入部分,或者在解析失败时抛出异常。我们把它写出来。 @@ -114,7 +108,6 @@ data ParseError = ParseError ParseString Error type Error = String - ``` 这里我们定义了三个主要的新类型。 @@ -124,9 +117,7 @@ type Error = String 第二个,`ParseString` 是我们的输入或携带的状态。它有三个重要的部分: * `Name`: 这是源的名字 - * `(Int, Int)`: 这是源的当前位置 - * `String`: 这是等待解析的字符串 第三个,`ParseError` 包含了解析器的当前状态和一个错误信息。 @@ -180,13 +171,11 @@ instance Monad Parser where Right (rs, rest) -> case f rs of Parser parser -> parser rest - ``` 接下来,让我们定义一种的方式,用于运行解析器和防止失败的助手函数: ``` - runParser :: String -> String -> Parser a -> Either ParseError (a, ParseString) runParser name str (Parser parser) = parser $ ParseString name (0,0) str @@ -237,7 +226,6 @@ many parser = go [] many1 :: Parser a -> Parser [a] many1 parser = (:) <$> parser <*> many parser - ``` 下面的这些解析器通过我们定义的组合器来实现一些特殊的解析器: @@ -273,14 +261,13 @@ sepBy sep parser = do frst <- optional parser rest <- many (sep *> parser) pure $ maybe rest (:rest) frst - ``` 现在为该门语言定义解析器所需要的所有东西都有了。 -* _练习_ :实现一个 EOF(end of file/input,即文件或输入终止符)解析器组合器。 +* _练习_ :实现一个 EOF(end of file/input,即文件或输入终止符)解析器组合器。 -### 3. 
为我们的语言实现解析器 +### 3、为我们的语言实现解析器 我们会用自顶而下的方法定义解析器。 @@ -296,7 +283,6 @@ parseAtom = parseSymbol <|> parseInt parseSymbol :: Parser Atom parseSymbol = fmap Symbol parseName - ``` 注意到这四个函数是在我们这门语言中属于高阶描述。这解释了为什么 Haskell 执行解析工作这么棒。在定义完高级部分后,我们还需要定义低级别的 `parseName` 和 `parseInt`。 @@ -311,7 +297,7 @@ parseName = do pure (c:cs) ``` -整数是一系列数字,数字前面可能有负号 ‘-’: +整数是一系列数字,数字前面可能有负号 `-`: ``` parseInt :: Parser Atom @@ -333,12 +319,10 @@ runExprParser name str = ``` * _练习 1_ :为第一节中定义的 `Program` 类型编写一个解析器 - * _练习 2_ :用 Applicative 的形式重写 `parseName` - * _练习 3_ :`parseInt` 可能出现溢出情况,找到处理它的方法,不要用 `read`。 -### 4. 为这门语言实现一个更好看的输出器 +### 4、为这门语言实现一个更好看的输出器 我们还想做一件事,将我们的程序以源代码的形式打印出来。这对完善错误信息很有用。 @@ -372,7 +356,7 @@ indent tabs e = concat (replicate tabs " ") ++ e 好,目前为止我们写了近 200 行代码,这些代码一般叫做编译器的前端。我们还要写大概 150 行代码,用来执行三个额外的任务:我们需要根据需求定义一个 JS 的子集,定义一个将我们的语言转译成这个子集的转译器,最后把所有东西整合在一起。开始吧。 -### 5. 根据需求定义 JavaScript 的子集 +### 5、根据需求定义 JavaScript 的子集 首先,我们要定义将要使用的 JavaScript 的子集: @@ -411,10 +395,9 @@ printJSExpr doindent tabs = \case ``` * _练习 1_ :添加 `JSProgram` 类型,它可以包含多个 `JSExpr` ,然后创建一个叫做 `printJSExprProgram` 的函数来生成代码。 - * _练习 2_ :添加 `JSExpr` 的新类型:`JSIf`,并为其生成代码。 -### 6. 
实现到我们定义的 JavaScript 子集的代码转译器 +### 6、实现到我们定义的 JavaScript 子集的代码转译器 我们快做完了。这一节将会创建函数,将 `Expr` 转译成 `JSExpr`。 @@ -437,7 +420,6 @@ translateList = \case f xs f:xs -> JSFunCall <$> translateToJS f <*> traverse translateToJS xs - ``` `builtins` 是一系列要转译的特例,就像 `lambada` 和 `let`。每一种情况都可以获得一系列参数,验证它是否合乎语法规范,然后将其转译成等效的 `JSExpr`。 @@ -456,7 +438,6 @@ builtins = ,("div", transBinOp "div" "/") ,("print", transPrint) ] - ``` 我们这种情况,会将内建的特殊形式当作特殊的、非第一类的进行对待,因此不可能将它们当作第一类函数。 @@ -480,10 +461,9 @@ transLambda = \case fromSymbol :: Expr -> Either String Name fromSymbol (ATOM (Symbol s)) = Right s fromSymbol e = Left $ "cannot bind value to non symbol type: " ++ show e - ``` -我们会将 let 转译成带有相关名字参数的函数定义,然后带上参数调用函数,因此会在这一作用域中引入变量: +我们会将 `let` 转译成带有相关名字参数的函数定义,然后带上参数调用函数,因此会在这一作用域中引入变量: ``` transLet :: [Expr] -> Either TransError JSExpr @@ -522,35 +502,27 @@ transBinOp _ f list = foldl1 (JSBinOp f) <$> traverse translateToJS list transPrint :: [Expr] -> Either TransError JSExpr transPrint [expr] = JSFunCall (JSSymbol "console.log") . (:[]) <$> translateToJS expr transPrint xs = Left $ "Syntax error. print expected 1 arguments, got: " ++ show (length xs) - ``` 注意,如果我们将这些代码当作 `Expr` 的特例进行解析,那我们就可能会跳过语法验证。 * _练习 1_ :将 `Program` 转译成 `JSProgram` - * _练习 2_ :为 `if Expr Expr Expr` 添加一个特例,并将它转译成你在上一次练习中实现的 `JSIf` 条件语句。 -### 7. 把所有东西整合到一起 +### 7、把所有东西整合到一起 最终,我们将会把所有东西整合到一起。我们会: 1. 读取文件 - 2. 将文件解析成 `Expr` - 3. 将文件转译成 `JSExpr` - 4. 
将 JavaScript 代码发送到标准输出流 我们还会启用一些用于测试的标志位: * `--e` 将进行解析并打印出表达式的抽象表示(`Expr`) - * `--pp` 将进行解析,美化输出 - * `--jse` 将进行解析、转译、并打印出生成的 JS 表达式(`JSExpr`)的抽象表示 - * `--ppc` 将进行解析,美化输出并进行编译 ``` @@ -616,10 +588,10 @@ undefined via: https://gilmi.me/blog/post/2016/10/14/lisp-to-js -作者:[ Gil Mizrahi ][a] +作者:[Gil Mizrahi][a] 选题:[oska874][b] 译者:[BriFuture](https://github.com/BriFuture) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20170810 How we built our first full-stack JavaScript web app in three weeks.md b/published/201810/20170810 How we built our first full-stack JavaScript web app in three weeks.md similarity index 100% rename from published/20170810 How we built our first full-stack JavaScript web app in three weeks.md rename to published/201810/20170810 How we built our first full-stack JavaScript web app in three weeks.md diff --git a/published/20170926 Managing users on Linux systems.md b/published/201810/20170926 Managing users on Linux systems.md similarity index 100% rename from published/20170926 Managing users on Linux systems.md rename to published/201810/20170926 Managing users on Linux systems.md diff --git a/published/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md b/published/201810/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md similarity index 100% rename from published/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md rename to published/201810/20171022 Review- Algorithms to Live By by Brian Christian - Tom Griffiths.md diff --git a/published/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md b/published/201810/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md similarity index 100% rename from published/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md 
rename to published/201810/20171129 How to Install and Use Wireshark on Debian and Ubuntu 16.04_17.10.md diff --git a/published/20171204 Improve your Bash scripts with Argbash.md b/published/201810/20171204 Improve your Bash scripts with Argbash.md similarity index 100% rename from published/20171204 Improve your Bash scripts with Argbash.md rename to published/201810/20171204 Improve your Bash scripts with Argbash.md diff --git a/published/201810/20171208 24 Must Have Essential Linux Applications In 2017.md b/published/201810/20171208 24 Must Have Essential Linux Applications In 2017.md new file mode 100644 index 0000000000..f098b09fd8 --- /dev/null +++ b/published/201810/20171208 24 Must Have Essential Linux Applications In 2017.md @@ -0,0 +1,258 @@ +24 个必备的 Linux 应用程序 +====== + +![](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/10/Essentials-applications-for-every-Linux-user.jpg) + +> 提要:Linux 上必备的应用程序是什么呢?这个答案具有主观性并取决于你使用 Linux 桌面的目的是什么。但确实存在一些必备的并且大部分 Linux 用户都会安装的应用程序。接下来我们会列举出那些在所有 Linux 发行版上你都会安装的最优秀的 Linux 应用程序。 + +在 Linux 的世界中,所有东西都由你选择。你要选择一个发行版?你能找到一大把。你想要找到一个称心的音乐播放器?同样会有许多选择。 + +但它们并非全部遵循相同的设计理念 —— 其中一些可能追求极致轻量化而另一些会提供数不清的特性。因此想要找到正中需求的应用程序会成为相当令人头疼的繁重任务。那就让我们来缓解你的头疼吧。 + +### 对于 Linux 用户来说最优秀的自由软件 + +接下来我将罗列一系列在不同应用场景下我偏爱的必备 Linux 自由软件。当然此处我并非在说它们是最好的,但确实是在特定类别下我尝试的一系列软件中最喜欢的。也同样欢迎你在评论区介绍你最喜欢的应用程序。 + +同时我们也制作了关于此次应用清单的[视频](https://youtu.be/awawJnkUbWs)。在 YouTube 上订阅我们的频道获取更多的 Linux 视频。 + +### 网页浏览器 + +![网页浏览器](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Web-Browser-1024x512.jpg) + +*网页浏览器* + +#### Google Chrome + +[Google Chrome][12] 是一个强大并且功能完善的浏览器解决方案,它拥有完美的同步功能以及丰富的扩展。如果你喜欢 Google 的生态系统那么 Google Chrome 毫无疑问会是你的菜。如果你想要更加开源的解决方案,你可以尝试 [Chromium][13],它是 Google Chrome 的上游项目。 + +#### Firefox + +如果你不是 Google Chrome 的粉丝,你可以尝试 [Firefox][14]。它一直以来都是一个非常稳定并且健壮的网页浏览器。 + +#### Vivaldi + +当然,如果你想要尝试点不同的新东西,你可以尝试 [Vivaldi][15]。Vivaldi 是一个完全重新设计的网页浏览器,它由 Opera 浏览器项目的前成员基于 Chromium 项目而创建。Vivaldi 
轻量并且可定制,虽然它还非常年轻并且在某些特性上仍不完善,但它仍能让你眼前一亮并且优雅地工作。 + +- [推荐阅读:[回顾] Otter 浏览器为 Opera 爱好者带来了希望][40] + +### 下载管理器 + +![下载管理器](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Download-Manager-1024x512.jpg) + +*下载管理器* + +#### uGet + +[uGet][16] 是我遇到过最棒的下载管理器,它是开源的并且能满足你对于一款下载管理器的一切期许。uGet 提供一系列便于管理下载的高级设置。你能够管理下载队列并且断点续传,针对大文件使用多连接下载,根据不同列表将文件下载至不同路径,等等。 + +#### XDM + +Xtreme 下载管理器([XDM][17])是一个 Java 开发的强大并且开源的下载工具。它拥有下载管理器的所有基本特性,包括视频抓取、智能计划任务以及浏览器集成。 + +- [推荐阅读:Linux 下最好的 4 个下载管理器][41] + +### BitTorrent 客户端 + +![BitTorrent 客户端](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-BitTorrent-Client-1024x512.jpg) + +*BitTorrent 客户端* + +#### Deluge + +[Deluge][18] 是一个拥有漂亮用户界面的开源 BitTorrent 客户端。如果你习惯在 Windows 上使用 uTorrent,那么 Deluge 的界面会让你倍感亲切。它拥有丰富的设置项和针对不同任务的插件支持。 + +#### Transmission + +[Transmission][19] 力求精简,它是用户界面最精简的 BitTorrent 客户端之一。Transmission 是许多 Linux 发行版的预装软件。 + +- [推荐阅读:Ubuntu Linux 上前 5 名的 Torrent 客户端][42] + +### 云存储 + +![云存储](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Cloud-Storage-1024x512.jpg) + +*云存储* + +#### Dropbox + +[Dropbox][20] 是目前最流行的云存储服务之一,它为新用户提供了 2GB 的免费存储空间,以及一个健壮并且易于使用的 Linux 客户端。 + +#### MEGA + +[MEGA][21] 提供了 50GB 的免费存储,但这还并不是它最大的优点,MEGA 还为你的文件提供了端到端的加密支持。MEGA 提供一个名为 MEGAsync 的可靠的 Linux 客户端。 + +- [推荐阅读:2017 年 Linux 上最优秀的免费云服务][43] + +### 通讯工具 + +![通讯工具](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Communication-1024x512.jpg) + +*通讯工具* + +#### Pidgin + +[Pidgin][22] 是一款开源的即时通讯工具,它支持许多聊天平台,包括 Google Talk、Yahoo 甚至 IRC。Pidgin 可通过第三方插件进行扩展,能提供许多附加功能。 + +你也可以使用 [Franz][23] 或 [Rambox][24] 来在一个应用中使用多个通讯服务。 + +#### Skype + +我们都知道 [Skype][25] 是最流行的视频聊天平台之一,它[发布了全新的 Linux 桌面客户端][26]。 + +- [推荐阅读:2017 年 Linux 平台上最优秀的 6 款消息应用][44] + +### 办公套件 + +![办公套件](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Office-Suite-1024x512.jpg) + +*办公套件* + +#### LibreOffice + +[LibreOffice][27] 是 Linux 平台上开发最为活跃的开源办公套件,主要包括 Writer、Calc、Impress、Draw、Math、Base 
六个主要模块,并且每个模块都提供广泛的文件格式支持。同时 LibreOffice 也支持第三方的扩展,以上优势使它成为许多 Linux 发行版的默认办公套件。 + +#### WPS Office + +如果你想要尝试除 LibreOffice 以外的办公套件,[WPS Office][28] 值得一试。WPS Office 套件包括了写作、演示和数据表格支持。 + +- [推荐阅读:Linux 平台 6 款最优秀的 Microsoft Office 替代品][45] + +### 音乐播放器 + +![音乐播放器](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Music-Player-1024x512.jpg) + +*音乐播放器* + +#### Lollypop + +[Lollypop][29] 是一款相对较新的开源音乐播放器,拥有漂亮又不失简洁的用户界面。它提供优秀的音乐管理、歌曲推荐、在线广播和派对模式支持。虽然它是一款不具有太多特性的简洁音乐播放器,但仍值得我们去尝试。 + +#### Rhythmbox + +[Rhythmbox][30] 是一款主要为 GNOME 桌面环境开发的音乐播放器,当然它也可以在其他桌面环境运行。它能完成所有作为一款音乐播放器的基础功能,包括 CD 抓取和烧制、乱序播放,等等。它也提供了 iPod 支持。 + +#### cmus + +如果你想要最轻量,并且喜欢命令行的话,[cmus][31] 适合你。个人来讲,我是它的粉丝用户。cmus 是一款面向类 Unix 平台的小巧、快速并且强大的命令音乐播放器。它包含所有基础的音乐播放器特性,并且你能够通过扩展和脚本来增强它的功能。 + +- [推荐阅读:如何在 Ubuntu 14.04 和 Linux Mint 17 上安装 Tomahawk 播放器][46] + +(LCTT 译注:好了好了,大家不要提醒我了,我这次补充上深受国内 Linux 和开源爱好者喜爱的[网易云音乐](https://music.163.com/#/download)。:D ) + +### 视频播放器 + +![视频播放器](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Video-Player-1024x512.jpg) + +*视频播放器* + +#### VLC + +[VLC][32] 是一款简洁、快速、轻量并且非常强大的开源媒体播放器,它能够直接播放几乎所有格式的媒体文件,同时也能够播放在线的流媒体。它也能够安装一些时髦的扩展来完成不同的任务,比如直接在播放器内下载字幕。 + +#### Kodi + +[Kodi][33] 是一款成熟并且开源的媒体中心,在它的用户群中非常受欢迎。它能够处理来自本地或者网络媒体存储的视频、音乐、图片、播客甚至游戏,更强大的是你还能用它来录制电视节目。Kodi 可由附加组件和皮肤进行定制。 + +- [推荐阅读:Linux 平台上的 4 款格式工厂替代品][47] + +### 照片编辑器 + +![照片编辑器](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Photo-Editor-1024x512.jpg) + +*照片编辑器* + +#### GIMP + +[GIMP][34] 是 Linux 平台上 Photoshop 的替代品,它是一款开源、全功能并且专业的照片编辑软件。它打包了各式各样的工具用来编辑图片,更强大的是,它包含丰富的自定义设置以及第三方插件来增强体验。 + +#### Krita + +[Krita][35] 主要是作为一款绘图工具,但也可以作为照片编辑软件。它是开源的并且打包了非常多复杂的高级工具。 + +- [推荐阅读:Linux 平台最好的照片应用][48] + +### 文字编辑器 + +每个 Linux 发行版都拥有自己的文字编缉器解决方案,当然大体上它们都非常简洁并且没有太多功能。但是也有一些文字编辑器具有更强大的功能。 + +![文字编辑器](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Text-Editor-1024x512.jpg) + +*文字编辑器* + +#### Atom + +[Atom][36] 是由 GitHub 
开发的一款现代高度可配置的文字编辑器,它是完全开源的并且能够提供所有你能想到的文字编辑器功能。你可以开箱即用,也可以将其配置成你想要的样子。并且你可以从它的社区安装大量的扩展和主题。 + +#### Sublime Text + +[Sublime Text][37] 是最受欢迎的文字编辑器之一,虽然它并不是免费的,但你可以无限地试用该款软件。Sublime Text 是一款特性丰富并且高度模块化的软件,当然它也提供插件和主题支持。 + +- [推荐阅读:Linux 平台最优秀的 4 款现代开源代码编辑器][49] + +(LCTT 译注:当然,我知道你们也忘记不了 [VSCode](https://code.visualstudio.com/download)。) + +### 启动器 + +![启动器](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Launcher-1024x512.jpg) + +*启动器* + +#### Albert + +[Albert][38] 是一款快速、可扩展、可定制的生产力工具,受 Alfred(Mac 平台上一个非常好的生产力工具)启发并且仍处于开发阶段,它的目标是“使所有触手可及”。它能够与你的 Linux 发行版非常好的集成,帮助你提高生产力。 + +#### Synapse + +[Synapse][39] 已经有些年头了,它是一个能够搜索和运行程序的简单启动器。它也同时能够加速一些工作流,譬如音乐控制、文件搜索、路径切换、书签、运行命令,等等。 + +正如 Abhishek 所考虑的,我们将根据读者的(也就是你的)反馈更新最佳 Linux 应用程序清单。那么,你最爱的 Linux 应用程序是什么呢?分享给我们或者为这个清单增加新的软件分类吧。 + +--- + +via: https://itsfoss.com/essential-linux-applications/ + +作者:[Munif Tanjim][a] +译者:[cycoe](https://github.com/cycoe) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/munif/ +[12]: https://www.google.com/chrome/browser +[13]: https://www.chromium.org/Home +[14]: https://www.mozilla.org/en-US/firefox +[15]: https://vivaldi.com +[16]: http://ugetdm.com/ +[17]: http://xdman.sourceforge.net/ +[18]: http://deluge-torrent.org/ +[19]: https://transmissionbt.com/ +[20]: https://www.dropbox.com +[21]: https://mega.nz/ +[22]: https://www.pidgin.im/ +[23]: https://itsfoss.com/franz-messaging-app/ +[24]: http://rambox.pro/ +[25]: https://www.skype.com +[26]: https://itsfoss.com/skpe-alpha-linux/ +[27]: https://www.libreoffice.org +[28]: https://www.wps.com +[29]: http://gnumdk.github.io/lollypop-web/ +[30]: https://wiki.gnome.org/Apps/Rhythmbox +[31]: https://cmus.github.io/ +[32]: http://www.videolan.org +[33]: https://kodi.tv +[34]: https://www.gimp.org/ +[35]: https://krita.org/en/ +[36]: https://atom.io/ +[37]: http://www.sublimetext.com/ +[38]: 
https://github.com/ManuelSchneid3r/albert +[39]: https://launchpad.net/synapse-project +[40]: https://itsfoss.com/otter-browser-review/ +[41]: https://itsfoss.com/4-best-download-managers-for-linux/ +[42]: https://itsfoss.com/best-torrent-ubuntu/ +[43]: https://itsfoss.com/cloud-services-linux/ +[44]: https://itsfoss.com/best-messaging-apps-linux/ +[45]: https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/ +[46]: https://itsfoss.com/install-tomahawk-ubuntu-1404-linux-mint-17/ +[47]: https://itsfoss.com/format-factory-alternative-linux/ +[48]: https://itsfoss.com/image-applications-ubuntu-linux/ +[49]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ diff --git a/published/201810/20171214 Peeking into your Linux packages.md b/published/201810/20171214 Peeking into your Linux packages.md new file mode 100644 index 0000000000..72b40c31cb --- /dev/null +++ b/published/201810/20171214 Peeking into your Linux packages.md @@ -0,0 +1,125 @@ +一窥你安装的 Linux 软件包 +====== +> 这些最有用的命令可以让你了解安装在你的 Debian 类的 Linux 系统上的包的情况。 + +![](https://images.idgesg.net/images/article/2017/12/christmas-packages-100744371-large.jpg) + +你有没有想过你的 Linux 系统上安装了几千个软件包? 是的,我说的是“千”。 即使是相当一般的 Linux 系统也可能安装了上千个软件包。 有很多方法可以获得这些包到底是什么包的详细信息。 + +首先,要在基于 Debian 的发行版(如 Ubuntu)上快速得到已安装的软件包数量,请使用 `apt list --installed`, 如下: + +``` +$ apt list --installed | wc -l +2067 +``` + +这个数字实际上多了一个,因为输出中包含了 “Listing ...” 作为它的第一行。 这个命令会更准确: + +``` +$ apt list --installed | grep -v "^Listing" | wc -l +2066 +``` + +要获得所有这些包的详细信息,请按以下方式浏览列表: + +``` +$ apt list --installed | more +Listing... 
+a11y-profile-manager-indicator/xenial,now 0.1.10-0ubuntu3 amd64 [installed] +account-plugin-aim/xenial,now 3.12.11-0ubuntu3 amd64 [installed] +account-plugin-facebook/xenial,xenial,now 0.12+16.04.20160126-0ubuntu1 all [installed] +account-plugin-flickr/xenial,xenial,now 0.12+16.04.20160126-0ubuntu1 all [installed] +account-plugin-google/xenial,xenial,now 0.12+16.04.20160126-0ubuntu1 all [installed] +account-plugin-jabber/xenial,now 3.12.11-0ubuntu3 amd64 [installed] +account-plugin-salut/xenial,now 3.12.11-0ubuntu3 amd64 [installed] + +``` + +这需要观察很多细节 —— 特别是让你的眼睛在所有 2000 多个文件中徘徊。 它包含包名称、版本等,以及更多但并不是以最易于我们人类解析的显示信息。 `dpkg-query` 使得描述更容易理解,但这些描述会塞满你的命令窗口,除非窗口非常宽。 因此,为了让此篇文章更容易阅读,下面的数据显示已经分成了左右两侧。 + +左侧: + +``` +$ dpkg-query -l | more +Desired=Unknown/Install/Remove/Purge/Hold +| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend +|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) +||/ Name Version ++++-==============================================-=================================- +ii a11y-profile-manager-indicator 0.1.10-0ubuntu3 +ii account-plugin-aim 3.12.11-0ubuntu3 +ii account-plugin-facebook 0.12+16.04.20160126-0ubuntu1 +ii account-plugin-flickr 0.12+16.04.20160126-0ubuntu1 +ii account-plugin-google 0.12+16.04.20160126-0ubuntu1 +ii account-plugin-jabber 3.12.11-0ubuntu3 +ii account-plugin-salut 3.12.11-0ubuntu3 +ii account-plugin-twitter 0.12+16.04.20160126-0ubuntu1 +rc account-plugin-windows-live 0.11+14.04.20140409.1-0ubuntu2 +``` + +右侧: + +``` +Architecture Description +============-===================================================================== +amd64 Accessibility Profile Manager - Unity desktop indicator +amd64 Messaging account plugin for AIM +all GNOME Control Center account plugin for single signon - facebook +all GNOME Control Center account plugin for single signon - flickr +all GNOME Control Center account plugin for single signon +amd64 Messaging account plugin for Jabber/XMPP +amd64 Messaging account 
plugin for Local XMPP (Salut) +all GNOME Control Center account plugin for single signon - twitter +all GNOME Control Center account plugin for single signon - windows live +``` + +每行开头的 `ii` 和 `rc` 名称(见上文“左侧”)是包状态指示符。 第一个字母表示包的预期状态: + +- `u` -- 未知 +- `i` -- 安装 +- `r` -- 移除/反安装 +- `p` -- 清除(也包括配置文件) +- `h` -- 保留 + +第二个代表包的当前状态: + +- `n` -- 未安装 +- `i` -- 已安装 +- `c` -- 配置文件(只安装了配置文件) +- `U` -- 未打包 +- `F` -- 半配置(出于某些原因配置失败) +- `h` -- 半安装(出于某些原因配置失败) +- `W` -- 等待触发(该包等待另外一个包的触发器) +- `t` -- 待定触发(该包被触发) + +在通常的双字符字段末尾添加的 `R` 表示需要重新安装。 你可能永远不会碰到这些。 + +快速查看整体包状态的一种简单方法是计算在不同状态中包含的包的数量: + +``` +$ dpkg-query -l | tail -n +6 | awk '{print $1}' | sort | uniq -c + 2066 ii + 134 rc +``` + +我从上面的 `dpkg-query` 输出中排除了前五行,因为这些是标题行,会混淆输出。 + +这两行基本上告诉我们,在这个系统上,应该安装了 2066 个软件包,而 134 个其他的软件包已被删除,但留下了配置文件。 你始终可以使用以下命令删除程序包的剩余配置文件: + +``` +$ sudo dpkg --purge xfont-mathml +``` + +请注意,如果程序包二进制文件和配置文件都已经安装了,则上面的命令将两者都删除。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3242808/linux/peeking-into-your-linux-packages.html + +作者:[Sandra Henry-Stocker][a] +译者:[Flowsnow](https://github.com/Flowsnow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ diff --git a/published/20180105 The Best Linux Distributions for 2018.md b/published/201810/20180105 The Best Linux Distributions for 2018.md similarity index 100% rename from published/20180105 The Best Linux Distributions for 2018.md rename to published/201810/20180105 The Best Linux Distributions for 2018.md diff --git a/published/20180117 How to get into DevOps.md b/published/201810/20180117 How to get into DevOps.md similarity index 100% rename from published/20180117 How to get into DevOps.md rename to published/201810/20180117 How to get into DevOps.md diff --git a/published/20180123 Moving to Linux from dated Windows 
machines.md b/published/201810/20180123 Moving to Linux from dated Windows machines.md similarity index 100% rename from published/20180123 Moving to Linux from dated Windows machines.md rename to published/201810/20180123 Moving to Linux from dated Windows machines.md diff --git a/published/20180201 Conditional Rendering in React using Ternaries and.md b/published/201810/20180201 Conditional Rendering in React using Ternaries and.md similarity index 100% rename from published/20180201 Conditional Rendering in React using Ternaries and.md rename to published/201810/20180201 Conditional Rendering in React using Ternaries and.md diff --git a/translated/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md b/published/201810/20180201 Rock Solid React.js Foundations A Beginners Guide.md similarity index 74% rename from translated/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md rename to published/201810/20180201 Rock Solid React.js Foundations A Beginners Guide.md index bdb2abca36..aefa43d072 100644 --- a/translated/tech/20180201 Rock Solid React.js Foundations A Beginners Guide.md +++ b/published/201810/20180201 Rock Solid React.js Foundations A Beginners Guide.md @@ -1,38 +1,36 @@ 坚实的 React 基础:初学者指南 -============================================================ +============ + ![](https://cdn-images-1.medium.com/max/1000/1*wj5ujzj5wPQIKb0mIWLgNQ.png) -React.js crash course + +*React.js crash course* 在过去的几个月里,我一直在使用 React 和 React-Native。我已经发布了两个作为产品的应用, [Kiven Aa][1](React)和 [Pollen Chat][2](React Native)。当我开始学习 React 时,我找了一些不仅仅是教我如何用 React 写应用的东西(一个博客,一个视频,一个课程,等等),我也想让它帮我做好面试准备。 我发现的大部分资料都集中在某一单一方面上。所以,这篇文章针对的是那些希望理论与实践完美结合的观众。我会告诉你一些理论,以便你了解幕后发生的事情,然后我会向你展示如何编写一些 React.js 代码。 -如果你更喜欢视频形式,我在YouTube上传了整个课程,请去看看。 - +如果你更喜欢视频形式,我在 [YouTube](https://youtu.be/WJ6PgzI16I4) 上传了整个课程,请去看看。 让我们开始...... 
> React.js 是一个用于构建用户界面的 JavaScript 库 -你可以构建各种单页应用程序。例如,你希望在用户界面上实时显示更改的聊天软件和电子商务门户。 +你可以构建各种单页应用程序。例如,你希望在用户界面上实时显示变化的聊天软件和电子商务门户。 ### 一切都是组件 -React 应用由组件组成,数量多且互相嵌套。你或许会问:”可什么是组件呢?“ +React 应用由组件组成,数量繁多且互相嵌套。你或许会问:”可什么是组件呢?“ 组件是可重用的代码段,它定义了某些功能在 UI 上的外观和行为。 比如,按钮就是一个组件。 -让我们看看下面的计算器,当你尝试计算2 + 2 = 4 -1 = 3(简单的数学题)时,你会在Google上看到这个计算器。 +让我们看看下面的计算器,当你尝试计算 2 + 2 = 4 -1 = 3(简单的数学题)时,你会在 Google 上看到这个计算器。 ![](https://cdn-images-1.medium.com/max/1000/1*NS9DykYDyYG7__UXJdysTA.png) -红色标记表示组件 - +*红色标记表示组件* 如上图所示,这个计算器有很多区域,比如展示窗口和数字键盘。所有这些都可以是许多单独的组件或一个巨大的组件。这取决于在 React 中分解和抽象出事物的程度。你为所有这些组件分别编写代码,然后合并这些组件到一个容器中,而这个容器又是一个 React 组件。这样你就可以创建可重用的组件,最终的应用将是一组协同工作的单独组件。 - - 以下是一个你践行了以上原则并可以用 React 编写计算器的方法。 ``` @@ -47,7 +45,6 @@ React 应用由组件组成,数量多且互相嵌套。你或许会问:” - ``` 没错!它看起来像HTML代码,然而并不是。我们将在后面的部分中详细探讨它。 @@ -56,7 +53,7 @@ React 应用由组件组成,数量多且互相嵌套。你或许会问:” 这篇教程专注于 React 的基础部分。它没有偏向 Web 或 React Native(开发移动应用)。所以,我们会用一个在线编辑器,这样可以在学习 React 能做什么之前避免 web 或 native 的具体配置。 -我已经为读者在 [codepen.io][4] 设置好了开发环境。只需点开这个链接并且阅读所有 HTML 和 JavaScript 注释。 +我已经为读者在 [codepen.io][4] 设置好了开发环境。只需点开[该链接][4]并且阅读 HTML 和 JavaScript 中的所有注释。 ### 控制组件 @@ -70,8 +67,6 @@ React 应用由组件组成,数量多且互相嵌套。你或许会问:” 在 React 中,一个函数式组件通过 `props` 对象使用你传递给它的任意数据。它返回一个对象,该对象描述了 React 应渲染的 UI。函数式组件也称为无状态组件。 - - 让我们编写第一个函数式组件。 ``` @@ -80,14 +75,12 @@ function Hello(props) { } ``` - - 就这么简单。我们只是将 `props` 作为参数传递给了一个普通的 JavaScript 函数并且有返回值。嗯?返回了什么?那个 `
<div>{props.name}</div>
`。它是 JSX(JavaScript Extended)。我们将在后面的部分中详细了解它。 -上面这个函数将在浏览器中渲染出以下HTML。 +上面这个函数将在浏览器中渲染出以下 HTML。 ``` - +
<div>rajat</div>
@@ -104,7 +97,7 @@ function Hello(props) { 属性 `name` 在上面的代码中变成了 `Hello` 组件里的 `props.name` ,属性 `age` 变成了 `props.age` 。 -> 记住! 你可以将一个React组件嵌套在其他React组件中。 +> 记住! 你可以将一个 React 组件嵌套在其他 React 组件中。 让我们在 codepen playground 使用 `Hello` 组件。用我们的 `Hello` 组件替换 `ReactDOM.render()` 中的 `div`,并在底部窗口中查看更改。 @@ -117,13 +110,15 @@ ReactDOM.render(, document.getElementById('root')); ``` -> 但是如果你的组件有一些内部状态怎么办?例如,像下面的计数器组件一样,它有一个内部计数变量,它在 + 和 - 键按下时发生变化。 +> 但是如果你的组件有一些内部状态怎么办?例如,像下面的计数器组件一样,它有一个内部计数变量,它在 `+` 和 `-` 键按下时发生变化。 -具有内部状态的 React 组件 +![](https://media.giphy.com/media/3ohs4xEtqjJIs4FJ9C/giphy.gif) + +*具有内部状态的 React 组件* #### b) 基于类的组件 -基于类的组件有一个额外属性 `state` ,你可以用它存放组件的私有数据。我们可以用 class 表示法重写我们的 `Hello` 。由于这些组件具有状态,因此这些组件也称为有状态组件。 +基于类的组件有一个额外属性 `state` ,你可以用它存放组件的私有数据。我们可以用 `class` 表示法重写我们的 `Hello` 。由于这些组件具有状态,因此这些组件也称为有状态组件。 ``` class Counter extends React.Component { @@ -138,9 +133,9 @@ class Counter extends React.Component { } ``` -我们继承了 React 库的 React.Component 类以在React中创建基于类的组件。在[这里][5]了解更多有关 JavaScript 类的东西。 +我们继承了 React 库的 `React.Component` 类以在 React 中创建基于类的组件。在[这里][5]了解更多有关 JavaScript 类的东西。 -`render()` 方法必须存在于你的类中,因为React会查找此方法,用以了解它应在屏幕上渲染的 UI。为了使用这种内部状态,我们首先要在组件 +`render()` 方法必须存在于你的类中,因为 React 会查找此方法,用以了解它应在屏幕上渲染的 UI。为了使用这种内部状态,我们首先要在组件 要使用这种内部状态,我们首先必须按以下方式初始化组件类的构造函数中的状态对象。 @@ -166,47 +161,47 @@ class Counter extends React.Component { // In your react app: ``` -类似地,可以使用 this.props 对象在我们基于类的组件内访问 props。 +类似地,可以使用 `this.props` 对象在我们基于类的组件内访问 `props`。 -要设置 state,请使用 `React.Component` 的 `setState()`。 在本教程的最后一部分中,我们将看到一个这样的例子。 +要设置 `state`,请使用 `React.Component` 的 `setState()`。 在本教程的最后一部分中,我们将看到一个这样的例子。 > 提示:永远不要在 `render()` 函数中调用 `setState()`,因为 `setState` 会导致组件重新渲染,这将导致无限循环。 ![](https://cdn-images-1.medium.com/max/1000/1*rPUhERO1Bnr5XdyzEwNOwg.png) -基于类的组件具有可选属性 “state”。 + +*基于类的组件具有可选属性 “state”。* 除了 `state` 以外,基于类的组件有一些声明周期方法比如 `componentWillMount()`。你可以利用这些去做初始化 `state`这样的事, 可是那将超出这篇文章的范畴。 ### JSX -JSX 是 JavaScript Extended 的一种简短形式,它是一种编写 React components 的方法。使用 JSX,你可以在类 
XML 标签中获得 JavaScript 的全部力量。 +JSX 是 JavaScript Extended 的缩写,它是一种编写 React 组件的方法。使用 JSX,你可以在类 XML 标签中获得 JavaScript 的全部力量。 -你把 JavaScript 表达式放在`{}`里。下面是一些有效的 JSX 例子。 +你把 JavaScript 表达式放在 `{}` 里。下面是一些有效的 JSX 例子。 ``` - ; -
- ``` -它的工作方式是你编写 JSX 来描述你的 UI 应该是什么样子。像 Babel 这样的转码器将这些代码转换为一堆 `React.createElement()`调用。然后,React 库使用这些 `React.createElement()`调用来构造 DOM 元素的树状结构。对于 React 的网页视图或 React Native 的 Native 视图,它将保存在内存中。 +它的工作方式是你编写 JSX 来描述你的 UI 应该是什么样子。像 Babel 这样的转码器将这些代码转换为一堆 `React.createElement()` 调用。然后,React 库使用这些 `React.createElement()` 调用来构造 DOM 元素的树状结构。对于 React 的网页视图或 React Native 的 Native 视图,它将保存在内存中。 -React 接着会计算它如何在存储展示给用户的 UI 的内存中有效地模仿这个树。此过程称为 [reconciliation][7]。完成计算后,React会对屏幕上的真正 UI 进行更改。 +React 接着会计算它如何在展示给用户的 UI 的内存中有效地模仿这个树。此过程称为 [reconciliation][7]。完成计算后,React 会对屏幕上的真正 UI 进行更改。 ![](https://cdn-images-1.medium.com/max/1000/1*ighKXxBnnSdDlaOr5-ZOPg.png) -React 如何将你的 JSX 转化为描述应用 UI 的树。 + +*React 如何将你的 JSX 转化为描述应用 UI 的树。* 你可以使用 [Babel 的在线 REPL][8] 查看当你写一些 JSX 的时候,React 的真正输出。 ![](https://cdn-images-1.medium.com/max/1000/1*NRuBKgzNh1nHwXn0JKHafg.png) -使用Babel REPL 转换 JSX 为普通 JavaScript + +*使用Babel REPL 转换 JSX 为普通 JavaScript* > 由于 JSX 只是 `React.createElement()` 调用的语法糖,因此可以在没有 JSX 的情况下使用 React。 -现在我们了解了所有的概念,所以我们已经准备好编写我们之前看到的作为GIF图的计数器组件。 +现在我们了解了所有的概念,所以我们已经准备好编写我们之前看到之前的 GIF 图中的计数器组件。 代码如下,我希望你已经知道了如何在我们的 playground 上渲染它。 @@ -249,20 +244,19 @@ class Counter extends React.Component { 以下是关于上述代码的一些重点。 1. JSX 使用 `驼峰命名` ,所以 `button` 的 属性是 `onClick`,不是我们在HTML中用的 `onclick`。 - 2. 
绑定 `this` 是必要的,以便在回调时工作。 请参阅上面代码中的第8行和第9行。 最终的交互式代码位于[此处][9]。 -有了这个,我们已经到了 React 速成课程的结束。我希望我已经阐明了 React 如何工作以及如何使用 React 来构建更大的应用程序,使用更小和可重用的组件。 +有了这个,我们已经到了 React 速成课程的结束。我希望我已经阐明了 React 如何工作,以及如何使用 React 来构建更大的应用程序,使用更小和可重用的组件。 -------------------------------------------------------------------------------- via: https://medium.freecodecamp.org/rock-solid-react-js-foundations-a-beginners-guide-c45c93f5a923 -作者:[Rajat Saxena ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +作者:[Rajat Saxena][a] +译者:[GraveAccent](https://github.com/GraveAccent) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180329 How to configure multiple websites with Apache web server.md b/published/201810/20180329 How to configure multiple websites with Apache web server.md similarity index 100% rename from published/20180329 How to configure multiple websites with Apache web server.md rename to published/201810/20180329 How to configure multiple websites with Apache web server.md diff --git a/published/20180412 A Desktop GUI Application For NPM.md b/published/201810/20180412 A Desktop GUI Application For NPM.md similarity index 100% rename from published/20180412 A Desktop GUI Application For NPM.md rename to published/201810/20180412 A Desktop GUI Application For NPM.md diff --git a/published/20180413 The df Command Tutorial With Examples For Beginners.md b/published/201810/20180413 The df Command Tutorial With Examples For Beginners.md similarity index 100% rename from published/20180413 The df Command Tutorial With Examples For Beginners.md rename to published/201810/20180413 The df Command Tutorial With Examples For Beginners.md diff --git a/published/20180522 Free Resources for Securing Your Open Source Code.md b/published/201810/20180522 Free Resources for Securing Your Open Source Code.md similarity index 100% rename from published/20180522 Free Resources 
for Securing Your Open Source Code.md rename to published/201810/20180522 Free Resources for Securing Your Open Source Code.md diff --git a/published/20180528 What is behavior-driven Python.md b/published/201810/20180528 What is behavior-driven Python.md similarity index 100% rename from published/20180528 What is behavior-driven Python.md rename to published/201810/20180528 What is behavior-driven Python.md diff --git a/published/20180531 How to create shortcuts in vi.md b/published/201810/20180531 How to create shortcuts in vi.md similarity index 100% rename from published/20180531 How to create shortcuts in vi.md rename to published/201810/20180531 How to create shortcuts in vi.md diff --git a/published/20180601 Download an OS with GNOME Boxes.md b/published/201810/20180601 Download an OS with GNOME Boxes.md similarity index 100% rename from published/20180601 Download an OS with GNOME Boxes.md rename to published/201810/20180601 Download an OS with GNOME Boxes.md diff --git a/translated/tech/20180615 How To Rename Multiple Files At Once In Linux.md b/published/201810/20180615 How To Rename Multiple Files At Once In Linux.md similarity index 51% rename from translated/tech/20180615 How To Rename Multiple Files At Once In Linux.md rename to published/201810/20180615 How To Rename Multiple Files At Once In Linux.md index 14f16b3eb6..05916fb914 100644 --- a/translated/tech/20180615 How To Rename Multiple Files At Once In Linux.md +++ b/published/201810/20180615 How To Rename Multiple Files At Once In Linux.md @@ -3,11 +3,11 @@ ![](https://www.ostechnix.com/wp-content/uploads/2018/06/Rename-Multiple-Files-720x340.png) -你可能已经知道,我们使用 mv 命令在类 Unix 操作系统中重命名或者移动文件和目录。 但是,mv 命令不支持一次重命名多个文件。 不用担心。 在本教程中,我们将学习使用 Linux 中的 “mmv” 命令一次重命名多个文件。 此命令用于在类 Unix 操作系统中使用标准通配符批量移动,复制,追加和重命名文件。 +你可能已经知道,我们使用 `mv` 命令在类 Unix 操作系统中重命名或者移动文件和目录。 但是,`mv` 命令不支持一次重命名多个文件。 不用担心。 在本教程中,我们将学习使用 Linux 中的 `mmv` 命令一次重命名多个文件。 此命令用于在类 Unix 操作系统中使用标准通配符批量移动、复制、追加和重命名文件。 ### 在 Linux 中一次重命名多个文件 -mmv 
程序可在基于 Debian 的系统的默认仓库中使用。 要想在 Debian,Ubuntu,Linux Mint 上安装它,请运行以下命令: +`mmv` 程序可在基于 Debian 的系统的默认仓库中使用。 要想在 Debian、Ubuntu、Linux Mint 上安装它,请运行以下命令: ``` $ sudo apt-get install mmv @@ -20,7 +20,7 @@ $ ls a1.txt a2.txt a3.txt ``` -现在,你想要将所有以字母 “a” 开头的文件重命名为以 “b” 开头的。 当然,你可以在几秒钟内手动执行此操作。 但是想想你是否有数百个文件想要重命名? 这是一个非常耗时的过程。 这时候 **mmv** 命令就很有帮助了。 +现在,你想要将所有以字母 “a” 开头的文件重命名为以 “b” 开头的。 当然,你可以在几秒钟内手动执行此操作。 但是想想你是否有数百个文件想要重命名? 这是一个非常耗时的过程。 这时候 `mmv` 命令就很有帮助了。 要将所有以字母 “a” 开头的文件重命名为以字母 “b” 开头的,只需要运行: @@ -33,22 +33,20 @@ $ mmv a\* b\#1 ``` $ ls b1.txt b2.txt b3.txt - ``` -如你所见,所有以字母 “a” 开头的文件(即 a1.txt,a2.txt,a3.txt)都重命名为 b1.txt,b2.txt,b3.txt。 +如你所见,所有以字母 “a” 开头的文件(即 `a1.txt`、`a2.txt`、`a3.txt`)都重命名为 `b1.txt`、`b2.txt`、`b3.txt`。 **解释** -在上面的例子中,第一个参数(a\\*)是 'from' 模式,第二个参数是 'to' 模式(b\\#1)。根据上面的例子,mmv 将查找任何以字母 'a' 开头的文件名,并根据第二个参数重命名匹配的文件,即 'to' 模式。我们使用通配符,例如用 '*','?' 和 '[]' 来匹配一个或多个任意字符。请注意,你必须避免使用通配符,否则它们将被 shell 扩展,mmv 将无法理解。 +在上面的例子中,第一个参数(`a\*`)是 “from” 模式,第二个参数是 “to” 模式(`b\#1`)。根据上面的例子,`mmv` 将查找任何以字母 “a” 开头的文件名,并根据第二个参数重命名匹配的文件,即 “to” 模式。我们可以使用通配符,例如用 `*`、`?` 和 `[]` 来匹配一个或多个任意字符。请注意,你必须转义使用通配符,否则它们将被 shell 扩展,`mmv` 将无法理解。 -'to' 模式中的 '#1' 是通配符索引。它匹配 'from' 模式中的第一个通配符。 'to' 模式中的 '#2' 将匹配第二个通配符,依此类推。在我们的例子中,我们只有一个通配符(星号),所以我们写了一个 #1。并且,哈希标志也应该被转义。此外,你也可以用引号括起模式。 +“to” 模式中的 `#1` 是通配符索引。它匹配 “from” 模式中的第一个通配符。 “to” 模式中的 `#2` 将匹配第二个通配符(如果有的话),依此类推。在我们的例子中,我们只有一个通配符(星号),所以我们写了一个 `#1`。并且,`#` 符号也应该被转义。此外,你也可以用引号括起模式。 -你甚至可以将具有特定扩展名的所有文件重命名为其他扩展名。例如,要将当前目录中的所有 **.txt** 文件重命名为 **.doc** 文件格式,只需运行: +你甚至可以将具有特定扩展名的所有文件重命名为其他扩展名。例如,要将当前目录中的所有 `.txt` 文件重命名为 `.doc` 文件格式,只需运行: ``` $ mmv \*.txt \#1.doc - ``` 这是另一个例子。 我们假设你有以下文件。 @@ -56,16 +54,14 @@ $ mmv \*.txt \#1.doc ``` $ ls abcd1.txt abcd2.txt abcd3.txt - ``` -你希望在当前目录下的所有文件中将第一次出现的 **abc** 替换为 **xyz**。 你会怎么做呢? +你希望在当前目录下的所有文件中将第一次出现的 “abc” 替换为 “xyz”。 你会怎么做呢? 很简单。 ``` $ mmv '*abc*' '#1xyz#2' - ``` 请注意,在上面的示例中,模式被单引号括起来了。 @@ -75,77 +71,74 @@ $ mmv '*abc*' '#1xyz#2' ``` $ ls xyzd1.txt xyzd2.txt xyzd3.txt - ``` -看到没? 
文件 **abcd1.txt**,**abcd2.txt** 和 **abcd3.txt** 已经重命名为 **xyzd1.txt**,**xyzd2.txt** 和 **xyzd3.txt**。 +看到没? 文件 `abcd1.txt`、`abcd2.txt` 和 `abcd3.txt` 已经重命名为 `xyzd1.txt`、`xyzd2.txt` 和 `xyzd3.txt`。 -mmv 命令的另一个值得注意的功能是你可以使用 **-n** 选项打印输出而不是重命名文件,如下所示。 +`mmv` 命令的另一个值得注意的功能是你可以使用 `-n` 选项打印输出而不是重命名文件,如下所示。 ``` $ mmv -n a\* b\#1 a1.txt -> b1.txt a2.txt -> b2.txt a3.txt -> b3.txt - ``` -这样,你可以在重命名文件之前简单地验证 mmv 命令实际执行的操作。 +这样,你可以在重命名文件之前简单地验证 `mmv` 命令实际执行的操作。 有关更多详细信息,请参阅 man 页面。 ``` $ man mmv - ``` -**更新:** +### 更新:Thunar 文件管理器 -**Thunar 文件管理器**默认具有内置**批量重命名**选项。 如果你正在使用thunar,那么重命名文件要比使用mmv命令容易得多。 +**Thunar 文件管理器**默认具有内置**批量重命名**选项。 如果你正在使用 Thunar,那么重命名文件要比使用 `mmv` 命令容易得多。 -Thunar在大多数Linux发行版的默认仓库库中都可用。 +Thunar 在大多数 Linux 发行版的默认仓库库中都可用。 -要在基于Arch的系统上安装它,请运行: +要在基于 Arch 的系统上安装它,请运行: ``` $ sudo pacman -S thunar ``` -在 RHEL,CentOS 上: +在 RHEL、CentOS 上: + ``` $ sudo yum install thunar ``` 在 Fedora 上: + ``` $ sudo dnf install thunar - ``` 在 openSUSE 上: + ``` $ sudo zypper install thunar - ``` -在 Debian,Ubuntu,Linux Mint 上: +在 Debian、Ubuntu、Linux Mint 上: + ``` $ sudo apt-get install thunar - ``` 安装后,你可以从菜单或应用程序启动器中启动批量重命名程序。 要从终端启动它,请使用以下命令: ``` $ thunar -B - ``` -批量重命名就是这么回事。 +批量重命名方式如下。 ![][1] -单击加号,然后选择要重命名的文件列表。 批量重命名可以重命名文件的名称,文件的后缀或者同事重命名文件的名称和后缀。 Thunar 目前支持以下批量重命名: +单击“+”,然后选择要重命名的文件列表。 批量重命名可以重命名文件的名称、文件的后缀或者同时重命名文件的名称和后缀。 Thunar 目前支持以下批量重命名: - 插入日期或时间 - 插入或覆盖 @@ -158,9 +151,9 @@ $ thunar -B ![][2] -选择条件后,单击**重命名文件**选项来重命名文件。 +选择条件后,单击“重命名文件”选项来重命名文件。 -你还可以通过选择两个或更多文件从 Thunar 中打开批量重命名器。 选择文件后,按F2或右键单击并选择**重命名**。 +你还可以通过选择两个或更多文件从 Thunar 中打开批量重命名器。 选择文件后,按 F2 或右键单击并选择“重命名”。 嗯,这就是本次的所有内容了。希望有所帮助。更多干货即将到来。敬请关注! 
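顺带一提,如果手头的机器一时装不上 `mmv`,也可以退而求其次,用 bash 自带的参数展开写一个循环,完成文中 `mmv a\* b\#1` 那样的简单重命名。下面是一个假设性的最小示例,仅演示原理,并非本文介绍的工具:

```shell
#!/usr/bin/env bash
# 在临时目录中演示:把 a1.txt、a2.txt、a3.txt 重命名为 b1.txt、b2.txt、b3.txt
# 效果等价于文中的 `mmv a\* b\#1`,但只依赖 bash 和 mv
set -e
dir=$(mktemp -d)
cd "$dir"
touch a1.txt a2.txt a3.txt

for f in a*.txt; do
    mv "$f" "b${f#a}"    # ${f#a} 表示去掉 $f 开头的字母 "a"
done

ls    # 此时目录下只剩 b1.txt b2.txt b3.txt
```

当然,这只是个笨办法:一旦模式复杂起来(比如文中的 `'*abc*' '#1xyz#2'`),`mmv` 的通配符索引就方便得多了。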
@@ -173,10 +166,10 @@ via: https://www.ostechnix.com/how-to-rename-multiple-files-at-once-in-linux/ 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[Flowsnow](https://github.com/Flowsnow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://www.ostechnix.com/author/sk/ [1]: http://www.ostechnix.com/wp-content/uploads/2018/06/bulk-rename.png -[2]: http://www.ostechnix.com/wp-content/uploads/2018/06/bulk-rename-1.png \ No newline at end of file +[2]: http://www.ostechnix.com/wp-content/uploads/2018/06/bulk-rename-1.png diff --git a/published/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md b/published/201810/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md similarity index 100% rename from published/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md rename to published/201810/20180703 Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server.md diff --git a/published/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md b/published/201810/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md similarity index 100% rename from published/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md rename to published/201810/20180704 Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS.md diff --git a/published/20180709 How To Configure SSH Key-based Authentication In Linux.md b/published/201810/20180709 How To Configure SSH Key-based Authentication In Linux.md similarity index 100% rename from published/20180709 How To Configure SSH Key-based Authentication In Linux.md rename to published/201810/20180709 How To Configure SSH Key-based Authentication In Linux.md diff --git a/translated/tech/20180715 Why is Python so slow.md b/published/201810/20180715 Why is Python so slow.md 
similarity index 57% rename from translated/tech/20180715 Why is Python so slow.md rename to published/201810/20180715 Why is Python so slow.md index 1e6227b9e3..ac83056ae8 100644 --- a/translated/tech/20180715 Why is Python so slow.md +++ b/published/201810/20180715 Why is Python so slow.md @@ -1,42 +1,39 @@ 为什么 Python 这么慢? -============================================================ +========== -Python 现在越来越火,已经迅速扩张到包括 DevOps、数据科学、web 开发、信息安全等各个领域当中。 +Python 现在越来越火,已经迅速扩张到包括 DevOps、数据科学、Web 开发、信息安全等各个领域当中。 然而,相比起 Python 扩张的速度,Python 代码的运行速度就显得有点逊色了。 - ![](https://cdn-images-1.medium.com/max/1200/0*M2qZQsVnDS-4i5zc.jpg) -> 在代码运行速度方面,Java、C、C++、C#和 Python 要如何进行比较呢?并没有一个放之四海而皆准的标准,因为具体结果很大程度上取决于运行的程序类型,而语言基准测试Computer Language Benchmarks Games可以作为[衡量的一个方面][5]。 +> 在代码运行速度方面,Java、C、C++、C# 和 Python 要如何进行比较呢?并没有一个放之四海而皆准的标准,因为具体结果很大程度上取决于运行的程序类型,而语言基准测试Computer Language Benchmarks Games可以作为[衡量的一个方面][5]。 -根据我这些年来进行语言基准测试的经验来看,Python 比很多语言运行起来都要慢。无论是使用 [JIT][7] 编译器的 C#、Java,还是使用 [AOT][8] 编译器的 C、C ++,又或者是 JavaScript 这些解释型语言,Python 都[比它们运行得慢][6]。 +根据我这些年来进行语言基准测试的经验来看,Python 比很多语言运行起来都要慢。无论是使用 [JIT][7] 编译器的 C#、Java,还是使用 [AOT][8] 编译器的 C、C++,又或者是 JavaScript 这些解释型语言,Python 都[比它们运行得慢][6]。 - 注意:对于文中的 Python ,一般指 CPython 这个官方的实现。当然我也会在本文中提到其它语言的 Python 实现。 +注意:对于文中的 “Python” ,一般指 CPython 这个官方的实现。当然我也会在本文中提到其它语言的 Python 实现。 > 我要回答的是这个问题:对于一个类似的程序,Python 要比其它语言慢 2 到 10 倍不等,这其中的原因是什么?又有没有改善的方法呢? 主流的说法有这些: * “是全局解释器锁Global Interpreter Lock(GIL)的原因” - * “是因为 Python 是解释型语言而不是编译型语言” - * “是因为 Python 是一种动态类型的语言” 哪一个才是是影响 Python 运行效率的主要原因呢? ### 是全局解释器锁的原因吗? 
-现在很多计算机都配备了具有多个核的 CPU ,有时甚至还会有多个处理器。为了更充分利用它们的处理能力,操作系统定义了一个称为线程的低级结构。某一个进程(例如 Chrome 浏览器)可以建立多个线程,在系统内执行不同的操作。在这种情况下,CPU 密集型进程就可以跨核心共享负载了,这样的做法可以大大提高应用程序的运行效率。 +现在很多计算机都配备了具有多个核的 CPU ,有时甚至还会有多个处理器。为了更充分利用它们的处理能力,操作系统定义了一个称为线程的低级结构。某一个进程(例如 Chrome 浏览器)可以建立多个线程,在系统内执行不同的操作。在这种情况下,CPU 密集型进程就可以跨核心分担负载了,这样的做法可以大大提高应用程序的运行效率。 -例如在我写这篇文章时,我的 Chrome 浏览器打开了 44 个线程。要知道的是,基于 POSIX 的操作系统(例如 Mac OS、Linux)和 Windows 操作系统的线程结构、API 都是不同的,因此操作系统还负责对各个线程的调度。 +例如在我写这篇文章时,我的 Chrome 浏览器打开了 44 个线程。需要提及的是,基于 POSIX 的操作系统(例如 Mac OS、Linux)和 Windows 操作系统的线程结构、API 都是不同的,因此操作系统还负责对各个线程的调度。 如果你还没有写过多线程执行的代码,你就需要了解一下线程锁的概念了。多线程进程比单线程进程更为复杂,是因为需要使用线程锁来确保同一个内存地址中的数据不会被多个线程同时访问或更改。 CPython 解释器在创建变量时,首先会分配内存,然后对该变量的引用进行计数,这称为引用计数reference counting。如果变量的引用数变为 0,这个变量就会从内存中释放掉。这就是在 for 循环代码块内创建临时变量不会增加内存消耗的原因。 -而当多个线程内共享一个变量时,CPython 锁定引用计数的关键就在于使用了 GIL,它会谨慎地控制线程的执行情况,无论同时存在多少个线程,每次只允许一个线程进行操作。 +而当多个线程内共享一个变量时,CPython 锁定引用计数的关键就在于使用了 GIL,它会谨慎地控制线程的执行情况,无论同时存在多少个线程,解释器每次只允许一个线程进行操作。 #### 这会对 Python 程序的性能有什么影响? @@ -45,9 +42,10 @@ CPython 解释器在创建变量时,首先会分配内存,然后对该变量 但如果你通过在单进程中使用多线程实现并发,并且是 IO 密集型(例如网络 IO 或磁盘 IO)的线程,GIL 竞争的效果就很明显了。 ![](https://cdn-images-1.medium.com/max/1600/0*S_iSksY5oM5H1Qf_.png) -由 David Beazley 提供的 GIL 竞争情况图[http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html][1] -对于一个 web 应用(例如 Django),同时还使用了 WSGI,那么对这个 web 应用的每一个请求都是一个单独的 Python 进程,而且每个请求只有一个锁。同时 Python 解释器的启动也比较慢,某些 WSGI 实现还具有“守护进程模式”,[就会导致 Python 进程非常繁忙][9]。 +*由 David Beazley 提供的 GIL 竞争情况图[http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html][1]* + +对于一个 web 应用(例如 Django),同时还使用了 WSGI,那么对这个 web 应用的每一个请求都运行一个**单独**的 Python 解释器,而且每个请求只有一个锁。同时因为 Python 解释器的启动比较慢,某些 WSGI 实现还具有“守护进程模式”,[可以使 Python 进程一直就绪][9]。 #### 其它的 Python 解释器表现如何? @@ -57,46 +55,43 @@ CPython 解释器在创建变量时,首先会分配内存,然后对该变量 #### JavaScript 在这方面又是怎样做的呢? 
-所有的 Javascript 引擎使用的都是 [mark-and-sweep 垃圾收集算法][12],而 GIL 使用的则是 CPython 的内存管理算法。因此 JavaScript 没有 GIL,而且它是单线程的,也不需要用到 GIL, JavaScript 的事件循环和 Promise/Callback 模式实现了以异步编程的方式代替并发。在 Python 当中也有一个类似的 asyncio 事件循环。 +所有的 Javascript 引擎使用的都是 [mark-and-sweep 垃圾收集算法][12],而 GIL 使用的则是 CPython 的内存管理算法。 +JavaScript 没有 GIL,而且它是单线程的,也不需要用到 GIL, JavaScript 的事件循环和 Promise/Callback 模式实现了以异步编程的方式代替并发。在 Python 当中也有一个类似的 asyncio 事件循环。 ### 是因为 Python 是解释型语言吗? -我经常会听到这个说法,但其实当终端上执行 `python myscript.py` 之后,CPython 会对代码进行一系列的读取、语法分析、解析、编译、解释和执行的操作。 +我经常会听到这个说法,但是这过于粗陋地简化了 Python 所实际做的工作了。其实当终端上执行 `python myscript.py` 之后,CPython 会对代码进行一系列的读取、语法分析、解析、编译、解释和执行的操作。 -如果你对这一系列过程感兴趣,也可以阅读一下我之前的文章: +如果你对这一系列过程感兴趣,也可以阅读一下我之前的文章:[在 6 分钟内修改 Python 语言][13] 。 -[在 6 分钟内修改 Python 语言][13] +`.pyc` 文件的创建是这个过程的重点。在代码编译阶段,Python 3 会将字节码序列写入 `__pycache__/` 下的文件中,而 Python 2 则会将字节码序列写入当前目录的 `.pyc` 文件中。对于你编写的脚本、导入的所有代码以及第三方模块都是如此。 -创建 `.pyc` 文件是这个过程的重点。在代码编译阶段,Python 3 会将字节码序列写入 `__pycache__/` 下的文件中,而 Python 2 则会将字节码序列写入当前目录的 `.pyc` 文件中。对于你编写的脚本、导入的所有代码以及第三方模块都是如此。 - -因此,绝大多数情况下(除非你的代码是一次性的……),Python 都会解释字节码并执行。与 Java、C#.NET 相比: +因此,绝大多数情况下(除非你的代码是一次性的……),Python 都会解释字节码并本地执行。与 Java、C#.NET 相比: > Java 代码会被编译为“中间语言”,由 Java 虚拟机读取字节码,并将其即时编译为机器码。.NET CIL 也是如此,.NET CLR(Common-Language-Runtime)将字节码即时编译为机器码。 -既然 Python 不像 Java 和 C# 那样使用虚拟机或某种字节码,为什么 Python 在基准测试中仍然比 Java 和 C# 慢得多呢?首要原因是,.NET 和 Java 都是 JIT 编译的。 +既然 Python 像 Java 和 C# 那样都使用虚拟机或某种字节码,为什么 Python 在基准测试中仍然比 Java 和 C# 慢得多呢?首要原因是,.NET 和 Java 都是 JIT 编译的。 -即时编译Just-in-time compilation(JIT)需要一种中间语言,以便将代码拆分为多个块(或多个帧)。而提前编译器ahead of time compiler(AOT)则需要确保 CPU 在任何交互发生之前理解每一行代码。 +即时Just-in-time(JIT)编译需要一种中间语言,以便将代码拆分为多个块(或多个帧)。而提前ahead of time(AOT)编译器则需要确保 CPU 在任何交互发生之前理解每一行代码。 -JIT 本身是不会让执行速度加快的,因为它执行的仍然是同样的字节码序列。但是 JIT 会允许运行时的优化。一个优秀的 JIT 优化器会分析出程序的哪些部分会被多次执行,这就是程序中的“热点”,然后,优化器会将这些热点编译得更为高效以实现优化。 +JIT 本身不会使执行速度加快,因为它执行的仍然是同样的字节码序列。但是 JIT 会允许在运行时进行优化。一个优秀的 JIT 优化器会分析出程序的哪些部分会被多次执行,这就是程序中的“热点”,然后优化器会将这些代码替换为更有效率的版本以实现优化。 -这就意味着如果你的程序是多次地重复相同的操作时,有可能会被优化器优化得更快。而且,Java 和 
C# 是强类型语言,因此优化器对代码的判断可以更为准确。 +这就意味着如果你的程序是多次重复相同的操作时,有可能会被优化器优化得更快。而且,Java 和 C# 是强类型语言,因此优化器对代码的判断可以更为准确。 -PyPy 使用了明显快于 CPython 的 JIT。更详细的结果可以在这篇性能基准测试文章中看到: - -[哪一个 Python 版本最快?][15] +PyPy 使用了明显快于 CPython 的 JIT。更详细的结果可以在这篇性能基准测试文章中看到:[哪一个 Python 版本最快?][15]。 #### 那为什么 CPython 不使用 JIT 呢? -JIT 也不是完美的,它的一个显著缺点就在于启动时间。 CPython 的启动时间已经相对比较慢,而 PyPy 比 CPython 启动还要慢 2 到 3 倍,所以 Java 虚拟机启动速度已经是出了名的慢了。.NET CLR则通过在系统启动时自启动来优化体验, 甚至还有专门运行 CLR 的操作系统。 +JIT 也不是完美的,它的一个显著缺点就在于启动时间。 CPython 的启动时间已经相对比较慢,而 PyPy 比 CPython 启动还要慢 2 到 3 倍。Java 虚拟机启动速度也是出了名的慢。.NET CLR 则通过在系统启动时启动来优化体验,而 CLR 的开发者也是在 CLR 上开发该操作系统。 -因此如果你的 Python 进程在一次启动后就长时间运行,JIT 就比较有意义了,因为代码里有“热点”可以优化。 +因此如果你有个长时间运行的单一 Python 进程,JIT 就比较有意义了,因为代码里有“热点”可以优化。 -尽管如此,CPython 仍然是通用的代码实现。设想如果使用 Python 开发命令行程序,但每次调用 CLI 时都必须等待 JIT 缓慢启动,这种体验就相当不好了。 +不过,CPython 是个通用的实现。设想如果使用 Python 开发命令行程序,但每次调用 CLI 时都必须等待 JIT 缓慢启动,这种体验就相当不好了。 -CPython 必须通过大量用例的测试,才有可能实现[将 JIT 插入到 CPython 中][17],但这个改进工作的进度基本处于停滞不前的状态。 +CPython 试图用于各种使用情况。有可能实现[将 JIT 插入到 CPython 中][17],但这个改进工作的进度基本处于停滞不前的状态。 -> 如果你想充分发挥 JIT 的优势,请使用PyPy。 +> 如果你想充分发挥 JIT 的优势,请使用 PyPy。 ### 是因为 Python 是一种动态类型的语言吗? 
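在展开讨论之前,可以先借助标准库的 `dis` 模块直观感受一下“类型检查发生在运行时”是什么意思:同一个 `a + b`,CPython 编译出的字节码里只有一条与类型无关的加法指令。下面是一个补充的示意小例子(并非原文作者的演示):

```python
import dis

def add(a, b):
    # 编译期只生成一条通用的加法字节码(BINARY_ADD,Python 3.11+ 里是 BINARY_OP),
    # 究竟是整数相加还是字符串拼接,要等到运行时检查操作数类型才能确定
    return a + b

dis.dis(add)            # 打印字节码,其中看不到任何类型信息
print(add(1, 2))        # 3
print(add("a", "b"))    # ab
```

这也正是动态类型给优化带来困难的一个缩影:解释器每次执行都要重新判断操作数的类型。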
@@ -113,11 +108,11 @@ a = "foo" Python 也实现了这样的转换,但用户看不到这些转换,也不需要关心这些转换。 -变量类型不固定并不是 Python 运行慢的原因,Python 通过巧妙的设计让用户可以让各种结构变得动态:可以在运行时更改对象上的方法,也可以在运行时让模块调用新声明的值,几乎可以做到任何事。 +不用必须声明类型并不是为了使 Python 运行慢,Python 的设计是让用户可以让各种东西变得动态:可以在运行时更改对象上的方法,也可以在运行时动态添加底层系统调用到值的声明上,几乎可以做到任何事。 -但也正是这种设计使得 Python 的优化难度变得很大。 +但也正是这种设计使得 Python 的优化异常的难。 -为了证明我的观点,我使用了一个 `dtrace` 这个 Mac OS 上的系统调用跟踪工具。CPython 中没有内置 dTrace,因此必须重新对 CPython 进行编译。以下使用 Python 3.6.6 进行为例: +为了证明我的观点,我使用了一个 Mac OS 上的系统调用跟踪工具 DTrace。CPython 发布版本中没有内置 DTrace,因此必须重新对 CPython 进行编译。以下以 Python 3.6.6 为例: ``` wget https://github.com/python/cpython/archive/v3.6.6.zip @@ -127,22 +122,19 @@ cd v3.6.6 make ``` -这样 `python.exe` 将使用 dtrace 追踪所有代码。[Paul Ross 也作过关于 dtrace 的闪电演讲][19]。你可以下载 Python 的 dtrace 启动文件来查看函数调用、系统调用、CPU 时间、执行时间,以及各种其它的内容。 +这样 `python.exe` 将使用 DTrace 追踪所有代码。[Paul Ross 也作过关于 DTrace 的闪电演讲][19]。你可以下载 Python 的 DTrace 启动文件来查看函数调用、执行时间、CPU 时间、系统调用,以及各种其它的内容。 -`sudo dtrace -s toolkit/.d -c ‘../cpython/python.exe script.py’` +``` +sudo dtrace -s toolkit/.d -c ‘../cpython/python.exe script.py’ +``` -`py_callflow` 追踪器显示了程序里调用的所有函数。 - - -![](https://cdn-images-1.medium.com/max/1600/1*Lz4UdUi4EwknJ0IcpSJ52g.gif) +`py_callflow` 追踪器[显示](https://cdn-images-1.medium.com/max/1600/1*Lz4UdUi4EwknJ0IcpSJ52g.gif)了程序里调用的所有函数。 那么,Python 的动态类型会让它变慢吗? 
-* 类型比较和类型转换消耗的资源是比较多的,每次读取、写入或引用变量时都会检查变量的类型 - -* Python 的动态程度让它难以被优化,因此很多 Python 的替代品都为了提升速度而在灵活性方面作出了妥协 - -* 而 [Cython][2] 结合了 C 的静态类型和 Python 来优化已知类型的代码,它可以将[性能提升][3] 84 倍。 +* 类型比较和类型转换消耗的资源是比较多的,每次读取、写入或引用变量时都会检查变量的类型 +* Python 的动态程度让它难以被优化,因此很多 Python 的替代品能够如此快都是为了提升速度而在灵活性方面作出了妥协 +* 而 [Cython][2] 结合了 C 的静态类型和 Python 来优化已知类型的代码,它[可以将][3]性能提升 **84 倍**。 ### 总结 @@ -158,7 +150,7 @@ make Jake VDP 的优秀文章(略微过时) [https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/][21] -Dave Beazley’s 关于 GIL 的演讲 [http://www.dabeaz.com/python/GIL.pdf][22] +Dave Beazley 关于 GIL 的演讲 [http://www.dabeaz.com/python/GIL.pdf][22] JIT 编译器的那些事 [https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/][23] @@ -169,7 +161,7 @@ via: https://hackernoon.com/why-is-python-so-slow-e5074b6fe55b 作者:[Anthony Shaw][a] 选题:[oska874][b] 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180724 75 Most Used Essential Linux Applications of 2018.md b/published/201810/20180724 75 Most Used Essential Linux Applications of 2018.md similarity index 100% rename from published/20180724 75 Most Used Essential Linux Applications of 2018.md rename to published/201810/20180724 75 Most Used Essential Linux Applications of 2018.md diff --git a/published/20180724 Building a network attached storage device with a Raspberry Pi.md b/published/201810/20180724 Building a network attached storage device with a Raspberry Pi.md similarity index 100% rename from published/20180724 Building a network attached storage device with a Raspberry Pi.md rename to published/201810/20180724 Building a network attached storage device with a Raspberry Pi.md diff --git a/published/20180803 5 Essential Tools for Linux Development.md b/published/201810/20180803 5 Essential Tools for Linux Development.md similarity index 100% rename from 
published/20180803 5 Essential Tools for Linux Development.md rename to published/201810/20180803 5 Essential Tools for Linux Development.md diff --git a/published/201810/20180810 How To Remove Or Disable Ubuntu Dock.md b/published/201810/20180810 How To Remove Or Disable Ubuntu Dock.md new file mode 100644 index 0000000000..c535b92a68 --- /dev/null +++ b/published/201810/20180810 How To Remove Or Disable Ubuntu Dock.md @@ -0,0 +1,143 @@ +如何移除或禁用 Ubuntu Dock +====== + +![](https://1.bp.blogspot.com/-pClnjEJfPQc/W21nHNzU2DI/AAAAAAAABV0/HGXuQOYGzokyrGYQtRFeF_hT3_3BKHupQCLcBGAs/s640/ubuntu-dock.png) + +> 如果你想用其它 dock(例如 Plank dock)或面板来替换 Ubuntu 18.04 中的 Dock,或者你想要移除或禁用 Ubuntu Dock,本文会告诉你如何做。 + +Ubuntu Dock - 屏幕左侧栏,可用于固定应用程序或访问已安装的应用程序。使用默认的 Ubuntu 会话时,[无法][1]使用 Gnome Tweaks 禁用它(禁用无效)。但是如果你需要,还是有几种方法来摆脱它的。下面我将列出 4 种方法可以移除或禁用 Ubuntu Dock,以及每个方法的缺点(如果有的话),还有如何撤销每个方法的更改。本文还包括在没有 Ubuntu Dock 的情况下访问活动概览Activities Overview和已安装应用程序列表的其它方法。 + +### 如何在没有 Ubuntu Dock 的情况下访问活动概览 + +如果没有 Ubuntu Dock,你可能无法访问活动的或已安装的应用程序列表(可以通过单击 Dock 底部的“显示应用程序”按钮从 Ubuntu Dock 访问)。例如,如果你想使用 Plank Dock 就是这样。 + +显然,如果你安装了 Dash to Panel 扩展来替代 Ubuntu Dock,那么还好。因为 Dash to Panel 提供了一个按钮来访问活动概览或已安装的应用程序。 + +根据你计划用来替代 Ubuntu Dock 的软件,如果无法访问活动概览,那么你可以启用“活动概览热角”选项,只需将鼠标移动到屏幕的左上角即可打开活动概览。访问已安装的应用程序列表的另一种方法是使用快捷键:`Super + A`。 + +如果要启用“活动概览热角”,使用以下命令: + +``` +gsettings set org.gnome.shell enable-hot-corners true +``` + +如果以后要撤销此操作并禁用该热角,那么你需要使用以下命令: + +``` +gsettings set org.gnome.shell enable-hot-corners false +``` + +你可以使用 Gnome Tweaks 应用程序(该选项位于 Gnome Tweaks 的 “Top Bar” 部分)启用或禁用“活动概览热角” 选项,可以使用以下命令进行安装它: + +``` +sudo apt install gnome-tweaks +``` + +### 如何移除或禁用 Ubuntu Dock + +下面你将找到 4 种摆脱 Ubuntu Dock 的方法,环境在 Ubuntu 18.04 下。 + +#### 方法 1: 移除 Gnome Shell Ubuntu Dock 包 + +摆脱 Ubuntu Dock 的最简单方法就是删除包。 + +这将会从你的系统中完全移除 Ubuntu Dock 扩展,但同时也移除了 `ubuntu-desktop` 元数据包。如果你移除 `ubuntu-desktop` 元数据包,不会马上出现问题,因为它本身没有任何作用。`ubuntu-desktop ` 元数据包依赖于组成 Ubuntu 桌面的大量包。它的依赖关系不会被删除,也不会被破坏。问题是如果你以后想升级到新的 Ubuntu 版本,那么将不会安装任何新的 
`ubuntu-desktop` 依赖项。 + +为了解决这个问题,你可以在升级到较新的 Ubuntu 版本之前安装 `ubuntu-desktop` 元数据包(例如,如果你想从 Ubuntu 18.04 升级到 18.10)。 + +如果你对此没有意见,并且想要从系统中删除 Ubuntu Dock 扩展包,使用以下命令: + +``` +sudo apt remove gnome-shell-extension-ubuntu-dock +``` + +如果以后要撤消更改,只需使用以下命令安装扩展: + +``` +sudo apt install gnome-shell-extension-ubuntu-dock +``` + +或者重新安装 `ubuntu-desktop` 元数据包(这将会安装你可能已删除的任何 `ubuntu-desktop` 依赖项,包括 Ubuntu Dock),你可以使用以下命令: + +``` +sudo apt install ubuntu-desktop +``` + +#### 方法 2:安装并使用 vanilla Gnome 会话而不是默认的 Ubuntu 会话 + +摆脱 Ubuntu Dock 的另一种方法是安装和使用原生 Gnome 会话。安装 原生 Gnome 会话还将安装此会话所依赖的其它软件包,如 Gnome 文档、地图、音乐、联系人、照片、跟踪器等。 + +通过安装原生 Gnome 会话,你还将获得默认 Gnome GDM 登录和锁定屏幕主题,而不是 Ubuntu 默认的 Adwaita Gtk 主题和图标。你可以使用 Gnome Tweaks 应用程序轻松更改 Gtk 和图标主题。 + +此外,默认情况下将禁用 AppIndicators 扩展(因此使用 AppIndicators 托盘的应用程序不会显示在顶部面板上),但你可以使用 Gnome Tweaks 启用此功能(在扩展中,启用 Ubuntu appindicators 扩展)。 + +同样,你也可以从原生 Gnome 会话启用或禁用 Ubuntu Dock,这在 Ubuntu 会话中是不可能的(使用 Ubuntu 会话时无法从 Gnome Tweaks 禁用 Ubuntu Dock)。 + +如果你不想安装原生 Gnome 会话所需的这些额外软件包,那么这个移除 Ubuntu Dock 的这个方法不适合你,请查看其它方法。 + +如果你对此没有意见,以下是你需要做的事情。要在 Ubuntu 中安装原生的 Gnome 会话,使用以下命令: + +``` +sudo apt install vanilla-gnome-desktop +``` + +安装完成后,重启系统。在登录屏幕上,单击用户名,单击 “Sign in” 按钮旁边的齿轮图标,然后选择 “GNOME” 而不是 “Ubuntu”,之后继续登录。 + +![](https://4.bp.blogspot.com/-mc-6H2MZ0VY/W21i_PIJ3pI/AAAAAAAABVo/96UvmRM1QJsbS2so1K8teMhsu7SdYh9zwCLcBGAs/s640/vanilla-gnome-session-ubuntu-login-screen.png) + +如果要撤销此操作并移除原生 Gnome 会话,可以使用以下命令清除原生 Gnome 软件包,然后删除它安装的依赖项(第二条命令): + +``` +sudo apt purge vanilla-gnome-desktop +sudo apt autoremove +``` + +然后重新启动,并以相同的方式从 GDM 登录屏幕中选择 Ubuntu。 + +#### 方法 3:从桌面上永久隐藏 Ubuntu Dock,而不是将其移除 + +如果你希望永久隐藏 Ubuntu Dock,不让它显示在桌面上,但不移除它或使用原生 Gnome 会话,你可以使用 Dconf 编辑器轻松完成此操作。这样做的缺点是 Ubuntu Dock 仍然会使用一些系统资源,即使你没有在桌面上使用它,但你也可以轻松恢复它而无需安装或移除任何包。 + +Ubuntu Dock 只对你的桌面隐藏,当你进入叠加模式(活动)时,你仍然可以看到并从那里使用 Ubuntu Dock。 + +要永久隐藏 Ubuntu Dock,使用 Dconf 编辑器导航到 `/org/gnome/shell/extensions/dash-to-dock` 并禁用以下选项(将它们设置为 `false`):`autohide`、`dock-fixed` 和 `intellihide`。 + +如果你愿意,可以从命令行实现此目的,运行以下命令: 
+ +``` +gsettings set org.gnome.shell.extensions.dash-to-dock autohide false +gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed false +gsettings set org.gnome.shell.extensions.dash-to-dock intellihide false +``` + +如果你改变主意了并想撤销此操作,你可以使用 Dconf 编辑器从 `/org/gnome/shell/extensions/dash-to-dock` 中启动 `autohide`、 `dock-fixed` 和 `intellihide`(将它们设置为 `true`),或者你可以使用以下这些命令: + +``` +gsettings set org.gnome.shell.extensions.dash-to-dock autohide true +gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed true +gsettings set org.gnome.shell.extensions.dash-to-dock intellihide true +``` + +#### 方法 4:使用 Dash to Panel 扩展 + +[Dash to Panel][2] 是 Gnome Shell 的一个高度可配置面板,是 Ubuntu Dock 或 Dash to Dock 的一个很好的替代品(Ubuntu Dock 是从 Dash to Dock 分叉而来的)。安装和启动 Dash to Panel 扩展会禁用 Ubuntu Dock,因此你无需执行其它任何操作。 + +你可以从 [extensions.gnome.org][3] 来安装 Dash to Panel。 + +如果你改变主意并希望重新使用 Ubuntu Dock,那么你可以使用 Gnome Tweaks 应用程序禁用 Dash to Panel,或者通过单击以下网址旁边的 X 按钮完全移除 Dash to Panel: https://extensions.gnome.org/local/ 。 + +-------------------------------------------------------------------------------- + +via: https://www.linuxuprising.com/2018/08/how-to-remove-or-disable-ubuntu-dock.html + +作者:[Logix][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[MjSeven](https://github.com/MjSeven) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/118280394805678839070 +[1]:https://bugs.launchpad.net/ubuntu/+source/gnome-tweak-tool/+bug/1713020 +[2]:https://www.linuxuprising.com/2018/05/gnome-shell-dash-to-panel-v14-brings.html +[3]:https://extensions.gnome.org/extension/1160/dash-to-panel/ diff --git a/published/20180813 5 of the Best Linux Educational Software and Games for Kids.md b/published/201810/20180813 5 of the Best Linux Educational Software and Games for Kids.md similarity index 100% rename from published/20180813 5 of the Best Linux Educational Software and Games for Kids.md 
rename to published/201810/20180813 5 of the Best Linux Educational Software and Games for Kids.md diff --git a/published/20180814 Automating backups on a Raspberry Pi NAS.md b/published/201810/20180814 Automating backups on a Raspberry Pi NAS.md similarity index 100% rename from published/20180814 Automating backups on a Raspberry Pi NAS.md rename to published/201810/20180814 Automating backups on a Raspberry Pi NAS.md diff --git a/published/20180815 How to Create M3U Playlists in Linux [Quick Tip].md b/published/201810/20180815 How to Create M3U Playlists in Linux [Quick Tip].md similarity index 100% rename from published/20180815 How to Create M3U Playlists in Linux [Quick Tip].md rename to published/201810/20180815 How to Create M3U Playlists in Linux [Quick Tip].md diff --git a/published/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md b/published/201810/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md similarity index 100% rename from published/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md rename to published/201810/20180816 Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension).md diff --git a/published/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md b/published/201810/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md similarity index 100% rename from published/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md rename to published/201810/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md diff --git a/published/20180821 A checklist for submitting your first Linux kernel patch.md b/published/201810/20180821 A checklist for submitting your first Linux kernel patch.md similarity index 100% rename from published/20180821 A checklist for submitting your first Linux kernel 
patch.md rename to published/201810/20180821 A checklist for submitting your first Linux kernel patch.md diff --git a/published/201810/20180823 CLI- improved.md b/published/201810/20180823 CLI- improved.md new file mode 100644 index 0000000000..71d3f24be6 --- /dev/null +++ b/published/201810/20180823 CLI- improved.md @@ -0,0 +1,298 @@ +命令行:增强版 +====== + +我不确定有多少 Web 开发者能完全避免使用命令行。就我来说,我从 1997 年上大学就开始使用命令行了,那时的 l33t-hacker 让我着迷,同时我也觉得它很难掌握。 + +过去这些年我的命令行本领在逐步加强,我经常会去搜寻工作中能用的更好的命令行工具。下面就是我现在使用的用于增强原有命令行工具的列表。 + +### 怎么忽略我所做的命令行增强 + +通常情况下我会用别名将新的增强的命令行工具覆盖原来的命令(如 `cat` 和 `ping`)。 + +如果我需要运行原来的命令的话(有时我确实需要这么做),我会像下面这样来运行未加修改的原始命令。(我用的是 Mac,你的用法可能不一样) + +``` +$ \cat # 忽略叫 "cat" 的别名 - 具体解释: https://stackoverflow.com/a/16506263/22617 +$ command cat # 忽略函数和别名 +``` + +### bat > cat + +`cat` 用于打印文件的内容,如果你平时用命令行很多的话,例如语法高亮之类的功能会非常有用。我首先发现了 [ccat][3] 这个有语法高亮功能的工具,然后我发现了 [bat][4],它的功能有语法高亮、分页、行号和 git 集成。 + +`bat` 命令也能让我在(多于一屏的)输出里使用 `/` 搜索(和用 `less` 搜索功能一样)。 + +![Simple bat output][5] + +我将别名 `cat` 指到了 `bat` 命令: + +``` +alias cat='bat' +``` + +- [安装指引][4] + +### prettyping > ping + +`ping` 非常有用,当我碰到“糟了,是不是 X 挂了?/我的网不通了?”这种情况下我最先想到的工具就是它了。但是 `prettyping`(“prettyping” 可不是指“pre typing”)在 `ping` 的基础上加了友好的输出,这可让我感觉命令行友好了很多呢。 + +![prettyping][6] + +我也将 `ping` 用别名链接到了 `prettyping` 命令: + +``` +alias ping='prettyping --nolegend' +``` + +- [安装指引][7] + +### fzf > ctrl+r + +在终端里,使用 `ctrl+r` 将允许你在命令历史里[反向搜索][8]使用过的命令,这是个挺好的小技巧,尽管它有点麻烦。 + +`fzf` 这个工具相比于 `ctrl+r` 有了**巨大的**进步。它能针对命令行历史进行模糊查询,并且提供了对可能的合格结果进行全面交互式预览。 + +![视频](https://player.vimeo.com/video/217497007) + +除了搜索命令历史,`fzf` 还能预览和打开文件,我在下面的视频里展示了这些功能。 + +![视频](https://player.vimeo.com/video/286345188) + +为了这个预览的效果,我创建了一个叫 `preview` 的别名,它将 `fzf` 和前文提到的 `bat` 组合起来完成预览功能,还给上面绑定了一个定制的热键 `ctrl+o` 来打开 VS Code: + +``` +alias preview="fzf --preview 'bat --color \"always\" {}'" +# 支持在 VS Code 里用 ctrl+o 来打开选择的文件 +export FZF_DEFAULT_OPTS="--bind='ctrl-o:execute(code {})+abort'" +``` + +- [安装指引][9] + +### htop > top + +`top` 是当我想快速诊断为什么机器上的 CPU 
跑的那么累或者风扇为什么突然呼呼大作的时候首先会想到的工具。我在生产环境也会使用这个工具。讨厌的是 Mac 上的 `top` 和 Linux 上的 `top` 有着极大的不同(恕我直言,应该是差得多)。 + +不过,`htop` 是对 Linux 上的 `top` 和 Mac 上蹩脚的 `top` 的极大改进。它增加了包括颜色输出,键盘热键绑定以及不同的视图输出,这对理解进程之间的父子关系有极大帮助。 + +一些很容易上手的热键: + +* `P` —— 按 CPU 使用率排序 +* `M` —— 按内存使用排序 +* `F4` —— 用字符串过滤进程(例如只看包括 node 的进程) +* `space` —— 锚定一个单独进程,这样我能观察它是否有尖峰状态 + +![htop output][10] + +在 Mac Sierra 上 htop 有个奇怪的 bug,不过这个 bug 可以通过以 root 运行来绕过(我实在记不清这个 bug 是什么,但是这个别名能搞定它,有点讨厌的是我得每次都输入 root 密码。): + +``` +alias top="sudo htop" # 给 top 加上别名并且绕过 Sierra 上的 bug +``` + +- [安装指引][11] + +### diff-so-fancy > diff + +我非常确定我是几年前从 Paul Irish 那儿学来的这个技巧,尽管我很少直接使用 `diff`,但我的 git 命令行会一直使用 `diff`。`diff-so-fancy` 给了我代码语法颜色和更改字符高亮的功能。 + +![diff so fancy][12] + +在我的 `~/.gitconfig` 文件里我用了下面的选项来打开 `git diff` 和 `git show` 的 `diff-so-fancy` 功能。 + +``` +[pager] + diff = diff-so-fancy | less --tabs=1,5 -RFX + show = diff-so-fancy | less --tabs=1,5 -RFX +``` + +- [安装指引][13] + +### fd > find + +尽管我使用 Mac,但我绝不是 Spotlight 的粉丝,我觉得它的性能很差,关键字也难记,加上更新它自己的数据库时会拖慢 CPU,简直一无是处。我经常使用 [Alfred][14],但是它的搜索功能也不是很好。 + +我倾向于在命令行中搜索文件,但是 `find` 的难用在于很难去记住那些合适的表达式来描述我想要的文件。(而且 Mac 上的 `find` 命令和非 Mac 的 `find` 命令还有些许不同,这更加深了我的失望。) + +`fd` 是一个很好的替代品(它的作者和 `bat` 的作者是同一个人)。它非常快而且对于我经常要搜索的命令非常好记。 + +几个上手的例子: + +``` +$ fd cli # 所有包含 "cli" 的文件名 +$ fd -e md # 所有以 .md 作为扩展名的文件 +$ fd cli -x wc -w # 搜索 "cli" 并且在每个搜索结果上运行 `wc -w` +``` + +![fd output][15] + +- [安装指引][16] + +### ncdu > du + +对我来说,知道当前磁盘空间被什么占用了非常重要。我用过 Mac 上的 [DaisyDisk][17],但是我觉得那个程序产生结果有点慢。 + +`du -sh` 命令是我经常会运行的命令(`-sh` 是指结果以“汇总”和“人类可读”的方式显示),我经常会想要深入挖掘那些占用了大量磁盘空间的目录,看看到底是什么在占用空间。 + +`ncdu` 是一个非常棒的替代品。它提供了一个交互式的界面并且允许快速的扫描那些占用了大量磁盘空间的目录和文件,它又快又准。(尽管不管在哪个工具的情况下,扫描我的 home 目录都要很长时间,它有 550G) + +一旦当我找到一个目录我想要“处理”一下(如删除,移动或压缩文件),我会使用 `cmd` + 点击 [iTerm2][18] 顶部的目录名字的方法在 Finder 中打开它。 + +![ncdu output][19] + +还有另外[一个叫 nnn 的替代选择][20],它提供了一个更漂亮的界面,它也提供文件尺寸和使用情况,实际上它更像一个全功能的文件管理器。 + +我的 `du` 是如下的别名: + +``` +alias du="ncdu --color dark -rr -x --exclude .git --exclude node_modules" +``` + +选项说明: + +* 
`--color dark` 使用颜色方案 +* `-rr` 只读模式(防止误删和运行新的 shell 程序) +* `--exclude` 忽略不想操作的目录 + +- [安装指引][21] + +### tldr > man + +几乎所有的命令行工具都有一个相伴的手册,它可以被 `man <命令名>` 来调出,但是在 `man` 的输出里找到东西可有点让人困惑,而且在一个包含了所有的技术细节的输出里找东西也挺可怕的。 + +这就是 TL;DR 项目(LCTT 译注:英文里“文档太长,没空去读”的缩写)创建的初衷。这是一个由社区驱动的文档系统,而且可以用在命令行上。就我现在使用的经验,我还没碰到过一个命令没有它相应的文档,你[也可以做贡献][22]。 + +![TLDR output for 'fd'][23] + +一个小技巧,我将 `tldr` 的别名链接到 `help`(这样输入会快一点……) + +``` +alias help='tldr' +``` + +- [安装指引][24] + +### ack || ag > grep + +`grep` 毫无疑问是一个强力的命令行工具,但是这些年来它已经被一些工具超越了,其中两个叫 `ack` 和 `ag`。 + +我个人对 `ack` 和 `ag` 都尝试过,而且没有非常明显的个人偏好,(也就是说它们都很棒,并且很相似)。我倾向于默认只使用 `ack`,因为这三个字符就在指尖,很好打。并且 `ack` 有大量的 `ack --bar` 参数可以使用!(你一定会体会到这一点。) + +`ack` 和 `ag` 默认都使用正则表达式来搜索,这非常契合我的工作,我能使用类似于 `--js` 或 `--html` 这种标识指定文件类型搜索。(尽管 `ag` 比 `ack` 在文件类型过滤器里包括了更多的文件类型。) + +两个工具都支持常见的 `grep` 选项,如 `-B` 和 `-A` 用于在搜索的上下文里指代“之前”和“之后”。 + +![ack in action][25] + +因为 `ack` 不支持 markdown(而我又恰好写了很多 markdown),我在我的 `~/.ackrc` 文件里加了以下定制语句: + +``` +--type-set=md=.md,.mkd,.markdown +--pager=less -FRX +``` + +- 安装指引:[ack][26],[ag][27] +- [关于 ack & ag 的更多信息][28] + +### jq > grep 及其它 + +我是 [jq][29] 的忠实粉丝之一。当然一开始我也在它的语法里苦苦挣扎,好在我对查询语言还算有些使用心得,现在我对 `jq` 可以说是每天都要用。(不过从前我要么使用 `grep` 或者使用一个叫 [json][30] 的工具,相比而言后者的功能就非常基础了。) + +我甚至开始撰写一个 `jq` 的教程系列(有 2500 字并且还在增加),我还发布了一个[网页工具][31]和一个 Mac 上的应用(这个还没有发布。) + +`jq` 允许我传入一个 JSON 并且能非常简单的将其转变为一个使用 JSON 格式的结果,这正是我想要的。下面这个例子允许我用一个命令更新我的所有 node 依赖。(为了阅读方便,我将其分成为多行。) + +``` +$ npm i $(echo $(\ + npm outdated --json | \ + jq -r 'to_entries | .[] | "\(.key)@\(.value.latest)"' \ +)) +``` + +上面的命令将使用 npm 的 JSON 输出格式来列出所有过期的 node 依赖,然后将下面的源 JSON 转换为: + +``` +{ + "node-jq": { + "current": "0.7.0", + "wanted": "0.7.0", + "latest": "1.2.0", + "location": "node_modules/node-jq" + }, + "uuid": { + "current": "3.1.0", + "wanted": "3.2.1", + "latest": "3.2.1", + "location": "node_modules/uuid" + } +} +``` + +转换结果为: + +``` +node-jq@1.2.0 +uuid@3.2.1 +``` + +上面的结果会被作为 `npm install` 的输入,你瞧,我的升级就这样全部搞定了。(当然,这里有点小题大做了。) + +### 很荣幸提及一些其它的工具 + 
+我也在开始尝试一些别的工具,但我还没有完全掌握它们。(除了 `ponysay`,当我打开一个新的终端会话时,它就会出现。) + +* [ponysay][32] > `cowsay` +* [csvkit][33] > `awk 及其它` +* [noti][34] > `display notification` +* [entr][35] > `watch` + +### 你有什么好点子吗? + +上面是我的命令行清单。你的呢?你有没有试着去增强一些你每天都会用到的命令呢?请告诉我,我非常乐意知道。 + +-------------------------------------------------------------------------------- + +via: https://remysharp.com/2018/08/23/cli-improved + +作者:[Remy Sharp][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[DavidChenLiang](https://github.com/DavidChenLiang) +校对:[pityonline](https://github.com/pityonline), [wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://remysharp.com +[1]: https://remysharp.com/images/terminal-600.jpg +[2]: https://training.leftlogic.com/buy/terminal/cli2?coupon=READERS-DISCOUNT&utm_source=blog&utm_medium=banner&utm_campaign=remysharp-discount +[3]: https://github.com/jingweno/ccat +[4]: https://github.com/sharkdp/bat +[5]: https://remysharp.com/images/cli-improved/bat.gif (Sample bat output) +[6]: https://remysharp.com/images/cli-improved/ping.gif (Sample ping output) +[7]: http://denilson.sa.nom.br/prettyping/ +[8]: https://lifehacker.com/278888/ctrl%252Br-to-search-and-other-terminal-history-tricks +[9]: https://github.com/junegunn/fzf +[10]: https://remysharp.com/images/cli-improved/htop.jpg (Sample htop output) +[11]: http://hisham.hm/htop/ +[12]: https://remysharp.com/images/cli-improved/diff-so-fancy.jpg (Sample diff output) +[13]: https://github.com/so-fancy/diff-so-fancy +[14]: https://www.alfredapp.com/ +[15]: https://remysharp.com/images/cli-improved/fd.png (Sample fd output) +[16]: https://github.com/sharkdp/fd/ +[17]: https://daisydiskapp.com/ +[18]: https://www.iterm2.com/ +[19]: https://remysharp.com/images/cli-improved/ncdu.png (Sample ncdu output) +[20]: https://github.com/jarun/nnn +[21]: https://dev.yorhel.nl/ncdu +[22]: https://github.com/tldr-pages/tldr#contributing +[23]: 
https://remysharp.com/images/cli-improved/tldr.png (Sample tldr output for fd) +[24]: http://tldr-pages.github.io/ +[25]: https://remysharp.com/images/cli-improved/ack.png (Sample ack output with grep args) +[26]: https://beyondgrep.com +[27]: https://github.com/ggreer/the_silver_searcher +[28]: http://conqueringthecommandline.com/book/ack_ag +[29]: https://stedolan.github.io/jq +[30]: http://trentm.com/json/ +[31]: https://jqterm.com +[32]: https://github.com/erkin/ponysay +[33]: https://csvkit.readthedocs.io/en/1.0.3/ +[34]: https://github.com/variadico/noti +[35]: http://www.entrproject.org/ diff --git a/published/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md b/published/201810/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md similarity index 100% rename from published/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md rename to published/201810/20180823 How To Easily And Safely Manage Cron Jobs In Linux.md diff --git a/published/20180824 5 cool music player apps.md b/published/201810/20180824 5 cool music player apps.md similarity index 100% rename from published/20180824 5 cool music player apps.md rename to published/201810/20180824 5 cool music player apps.md diff --git a/published/20180824 What Stable Kernel Should I Use.md b/published/201810/20180824 What Stable Kernel Should I Use.md similarity index 100% rename from published/20180824 What Stable Kernel Should I Use.md rename to published/201810/20180824 What Stable Kernel Should I Use.md diff --git a/published/20180827 4 tips for better tmux sessions.md b/published/201810/20180827 4 tips for better tmux sessions.md similarity index 100% rename from published/20180827 4 tips for better tmux sessions.md rename to published/201810/20180827 4 tips for better tmux sessions.md diff --git a/translated/tech/20180827 A sysadmin-s guide to containers.md b/published/201810/20180827 A sysadmin-s guide to containers.md similarity index 64% rename from 
translated/tech/20180827 A sysadmin-s guide to containers.md rename to published/201810/20180827 A sysadmin-s guide to containers.md index f1c27e41c4..6716fd3b82 100644 --- a/translated/tech/20180827 A sysadmin-s guide to containers.md +++ b/published/201810/20180827 A sysadmin-s guide to containers.md @@ -1,38 +1,39 @@ -写给系统管理员的容器手册 +面向系统管理员的容器手册 ====== +> 你所需了解的容器如何工作的知识。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP) -现在人们严重地过度使用“容器”这个术语。另外,对不同的人来说,它可能会有不同的含义,这取决于上下文。 +现在人们严重过度使用了“容器”这个术语。另外,对不同的人来说,它可能会有不同的含义,这取决于上下文。 -传统的 Linux 容器只是系统上普通的进程组成的进程组。进程组之间是相互隔离的,实现方法包括:资源限制(控制组 [cgoups])、Linux 安全限制(文件权限,基于 Capability 的安全模块,SELinux,AppArmor,seccomp 等)还有名字空间(进程 ID,网络,挂载等)。 +传统的 Linux 容器只是系统上普通的进程。一组进程与另外一组进程是相互隔离的,实现方法包括:资源限制(控制组 [cgroups])、Linux 安全限制(文件权限,基于 Capability 的安全模块、SELinux、AppArmor、seccomp 等)还有名字空间(进程 ID、网络、挂载等)。 -如果你启动一台现代 Linux 操作系统,使用 `cat /proc/PID/cgroup` 命令就可以看到该进程是属于一个控制组的。还可以从 `/proc/PID/status` 文件中查看进程的 Capability 信息,从 `/proc/self/attr/current` 文件中查看进程的 SELinux 标签信息,从 `/proc/PID/ns` 目录下的文件查看进程所属的名字空间。因此,如果把容器定义为带有资源限制、Linux 安全限制和名字空间的进程,那么按照这个定义,Linux 操作系统上的每一个进程都在容器里。因此我们常说 [Linux 就是容器,容器就是 Linux][1]。而**容器运行时**是这样一种工具,它调整上述资源限制、安全限制和名字空间,并启动容器。 +如果你启动一台现代 Linux 操作系统,使用 `cat /proc/PID/cgroup` 命令就可以看到该进程是属于一个控制组的。还可以从 `/proc/PID/status` 文件中查看进程的 Capability 信息,从 `/proc/self/attr/current` 文件中查看进程的 SELinux 标签信息,从 `/proc/PID/ns` 目录下的文件查看进程所属的名字空间。因此,如果把容器定义为带有资源限制、Linux 安全限制和名字空间的进程,那么按照这个定义,Linux 操作系统上的每一个进程都在一个容器里。因此我们常说 [Linux 就是容器,容器就是 Linux][1]。而**容器运行时**是这样一种工具,它调整上述资源限制、安全限制和名字空间,并启动容器。 Docker 引入了**容器镜像**的概念,镜像是一个普通的 TAR 包文件,包含了: - * **Rootfs(容器的根文件系统):**一个目录,看起来像是操作系统的普通根目录(/),例如,一个包含 `/usr`, `/var`, `/home` 等的目录。 - * **JSON 文件(容器的配置):**定义了如何运行 rootfs;例如,当容器启动的时候要在 rootfs 里运行什么 **command** 或者 **entrypoint**,给容器定义什么样的**环境变量**,容器的**工作目录**是哪个,以及其他一些设置。 +* **rootfs(容器的根文件系统)**:一个目录,看起来像是操作系统的普通根目录(`/`),例如,一个包含 `/usr`, `/var`, `/home` 等的目录。 +* **JSON 
文件(容器的配置)**:定义了如何运行 rootfs;例如,当容器启动的时候要在 rootfs 里运行什么命令(`CMD`)或者入口(`ENTRYPOINT `),给容器定义什么样的环境变量(`ENV`),容器的工作目录(`WORKDIR `)是哪个,以及其他一些设置。 Docker 把 rootfs 和 JSON 配置文件打包成**基础镜像**。你可以在这个基础之上,给 rootfs 安装更多东西,创建新的 JSON 配置文件,然后把相对于原始镜像的不同内容打包到新的镜像。这种方法创建出来的是**分层的镜像**。 -[Open Container Initiative(开放容器计划 OCI)][2] 标准组织最终把容器镜像的格式标准化了,也就是 [OCI Image Specification(OCI 镜像规范)][3]。 +[开放容器计划][2]Open Container Initiative(OCI)标准组织最终把容器镜像的格式标准化了,也就是 [镜像规范][3]OCI Image Specification(OCI)。 用来创建容器镜像的工具被称为**容器镜像构建器**。有时候容器引擎做这件事情,不过可以用一些独立的工具来构建容器镜像。 -Docker 把这些容器镜像(**tar 包**)托管到 web 服务中,并开发了一种协议来支持从 web 拉取镜像,这个 web 服务就叫**容器仓库**。 +Docker 把这些容器镜像(**tar 包**)托管到 web 服务中,并开发了一种协议来支持从 web 拉取镜像,这个 web 服务就叫容器仓库container registry。 **容器引擎**是能从镜像仓库拉取镜像并装载到**容器存储**上的程序。容器引擎还能启动**容器运行时**(见下图)。 ![](https://opensource.com/sites/default/files/linux_container_internals_2.0_-_hosts.png) -容器存储一般是**写入时复制**(COW)的分层文件系统。从容器仓库拉取一个镜像时,其中的 rootfs 首先被解压到磁盘。如果这个镜像是多层的,那么每一层都会被下载到 COW 文件系统的不同分层。 COW 文件系统保证了镜像的每一层独立存储,这最大化了多个分层镜像之间的文件共享程度。容器引擎通常支持多种容器存储类型,包括 `overlay`、`devicemapper`、`btrfs`、`aufs` 和 `zfs`。 +容器存储一般是写入时复制copy-on-write(COW)的分层文件系统。从容器仓库拉取一个镜像时,其中的 rootfs 首先被解压到磁盘。如果这个镜像是多层的,那么每一层都会被下载到 COW 文件系统的不同分层。 COW 文件系统保证了镜像的每一层独立存储,这最大化了多个分层镜像之间的文件共享程度。容器引擎通常支持多种容器存储类型,包括 `overlay`、`devicemapper`、`btrfs`、`aufs` 和 `zfs`。 容器引擎将容器镜像下载到容器存储中之后,需要创建一份**容器运行时配置**,这份配置是用户/调用者的输入和镜像配置的合并。例如,容器的调用者可能会调整安全设置,添加额外的环境变量或者挂载一些卷到容器中。 容器运行时配置的格式,和解压出来的 rootfs 也都被开放容器计划 OCI 标准组织做了标准化,称为 [OCI 运行时规范][4]。 -最终,容器引擎启动了一个**容器运行时**来读取运行时配置,修改 Linux 控制组、安全限制和名字空间,并执行容器命令来创建容器的 **PID 1**。至此,容器引擎已经可以把容器的标准输入/标准输出转给调用方,并控制容器了(例如,stop,start,attach)。 +最终,容器引擎启动了一个**容器运行时**来读取运行时配置,修改 Linux 控制组、安全限制和名字空间,并执行容器命令来创建容器的 **PID 1** 进程。至此,容器引擎已经可以把容器的标准输入/标准输出转给调用方,并控制容器了(例如,`stop`、`start`、`attach`)。 值得一提的是,现在出现了很多新的容器运行时,它们使用 Linux 的不同特性来隔离容器。可以使用 KVM 技术来隔离容器(想想迷你虚拟机),或者使用其他虚拟机监视器策略(例如拦截所有从容器内的进程发起的系统调用)。既然我们有了标准的运行时规范,这些工具都能被相同的容器引擎来启动。即使在 Windows 系统下,也可以使用 OCI 运行时规范来启动 Windows 容器。 @@ -45,7 +46,7 @@ via: 
https://opensource.com/article/18/8/sysadmins-guide-containers 作者:[Daniel J Walsh][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[belitex](https://github.com/belitex) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md b/published/201810/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md similarity index 100% rename from published/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md rename to published/201810/20180827 Solve -error- failed to commit transaction (conflicting files)- In Arch Linux.md diff --git a/published/20180830 6 places to host your git repository.md b/published/201810/20180830 6 places to host your git repository.md similarity index 100% rename from published/20180830 6 places to host your git repository.md rename to published/201810/20180830 6 places to host your git repository.md diff --git a/published/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md b/published/201810/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md similarity index 100% rename from published/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md rename to published/201810/20180901 5 Ways to Take Screenshot in Linux GUI and Terminal.md diff --git a/published/201810/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md b/published/201810/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md new file mode 100644 index 0000000000..a34c575261 --- /dev/null +++ b/published/201810/20180901 Flameshot - A Simple, Yet Powerful Feature-rich Screenshot Tool.md @@ -0,0 +1,164 @@ +Flameshot:一个简洁但功能丰富的截图工具 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/Flameshot-720x340.png) + 
+截图是我工作的一部分,我先前使用深度截图工具来截图,深度截图是一个简单、轻量级且非常简洁的截图工具。它自带许多功能,例如窗口识别、快捷键支持、图片编辑、延时截图、社交分享、智能存储以及图片清晰度调整等功能。今天我碰巧发现了另一个具备多种功能的截图工具,它就是 **Flameshot** ,一个简单但功能丰富的针对类 Unix 系统的截图工具。它简单易用,可定制并且有选项可以支持上传截图到在线图片分享网站 **imgur** 上。同时 Flameshot 有一个 CLI 版本,所以你也可以从命令行来进行截图。Flameshot 是一个完全免费且开源的工具。在本教程中,我们将看到如何安装 Flameshot 以及如何使用它来截图。 + +### 安装 Flameshot + +**在 Arch Linux 上:** + +Flameshot 可以从 Arch Linux 的 [community] 仓库中获取。确保你已经启用了 community 仓库,然后就可以像下面展示的那样使用 pacman 来安装 Flameshot : + +``` +$ sudo pacman -S flameshot +``` + +它也可以从 [**AUR**][1] 中获取,所以你还可以使用任意一个 AUR 帮助程序(例如 [**Yay**][2])来在基于 Arch 的系统中安装它: + +``` +$ yay -S flameshot-git +``` + +**在 Fedora 中:** + +``` +$ sudo dnf install flameshot +``` + +在 **Debian 10+** 和 **Ubuntu 18.04+** 中,可以使用 APT 包管理器来安装它: + +``` +$ sudo apt install flameshot +``` + +**在 openSUSE 上:** + +``` +$ sudo zypper install flameshot +``` + +在其他的 Linux 发行版中,可以从源代码编译并安装它。编译过程中需要 **Qt version 5.3** 以及 **GCC 4.9.2** 或者它们的更高版本。 + +### 使用 + +可以从菜单或者应用启动器中启动 Flameshot。在 MATE 桌面环境,它通常可以在 “Applications -> Graphics” 下找到。 + +一旦打开了它,你就可以在系统面板中看到 Flameshot 的托盘图标。 + +**注意:** + +假如你使用 Gnome 桌面环境,为了能够看到系统托盘图标,你需要安装 [TopIcons][3] 扩展。 + +在 Flameshot 托盘图标上右击,你便会看到几个菜单项,例如打开配置窗口、信息窗口以及退出该应用。 + +要进行截图,只需要点击托盘图标就可以了。接着你将看到如何使用 Flameshot 的帮助窗口。选择一个截图区域,然后敲回车键便可以截屏了,点击右键便可以看到颜色拾取器,再敲空格键便可以查看屏幕侧边的面板。你可以使用鼠标的滚轮来增加或者减少指针的宽度。 + +Flameshot 自带一系列非常好的功能,例如: + +* 可以进行手写 +* 可以划直线 +* 可以画长方形或者圆形框 +* 可以进行长方形区域选择 +* 可以画箭头 +* 可以对要点进行标注 +* 可以添加文本 +* 可以对图片或者文字进行模糊处理 +* 可以展示图片的尺寸大小 +* 在编辑图片时可以进行撤销和重做操作 +* 可以将选择的东西复制到剪贴板 +* 可以保存选区 +* 可以离开截屏 +* 可以选择另一个 app 来打开图片 +* 可以上传图片到 imgur 网站 +* 可以将图片固定到桌面上 + +下面是一个示例的视频: + + + +### 快捷键 + +Flameshot 也支持快捷键。在 Flameshot 的托盘图标上右击并点击 “Information” 窗口便可以看到在 GUI 模式下所有可用的快捷键。下面是在 GUI 模式下可用的快捷键清单: + +| 快捷键 | 描述 | +|------------------------|------------------------------| +| `←`、`↓`、`↑`、`→` | 移动选择区域 1px | +| `Shift` + `←`、`↓`、`↑`、`→` | 将选择区域大小更改 1px | +| `Esc` | 退出截图 | +| `Ctrl` + `C` | 复制到粘贴板 | +| `Ctrl` + `S` | 将选择区域保存为文件 | +| `Ctrl` + `Z` | 撤销最近的一次操作 | +| 鼠标右键 | 
展示颜色拾取器 | +| 鼠标滚轮 | 改变工具的宽度 | + +边按住 `Shift` 键并拖动选择区域的其中一个控制点将会对它相反方向的控制点做类似的拖放操作。 + +### 命令行选项 + +Flameshot 也支持一系列的命令行选项来延时截图和保存图片到自定义的路径。 + +要使用 Flameshot GUI 模式,运行: + +``` +$ flameshot gui +``` + +要使用 GUI 模式截屏并将你选取的区域保存到一个自定义的路径,运行: + +``` +$ flameshot gui -p ~/myStuff/captures +``` + +要延时 2 秒后打开 GUI 模式可以使用: + +``` +$ flameshot gui -d 2000 +``` + +要延时 2 秒并将截图保存到一个自定义的路径(无 GUI)可以使用: + +``` +$ flameshot full -p ~/myStuff/captures -d 2000 +``` + +要截图全屏并保存到自定义的路径和粘贴板中使用: + +``` +$ flameshot full -c -p ~/myStuff/captures +``` + +要在截屏中包含鼠标并将图片保存为 PNG 格式可以使用: + +``` +$ flameshot screen -r +``` + +要对屏幕 1 进行截屏并将截屏复制到粘贴板中可以运行: + +``` +$ flameshot screen -n 1 -c +``` + +你还需要什么功能呢?Flameshot 拥有几乎截屏的所有功能:添加注释、编辑图片、模糊处理或者对要点做高亮等等功能。我想:在我找到它的最佳替代品之前,我将一直使用 Flameshot 来作为我当前的截图工具。请尝试一下它,你不会失望的。 + +好了,这就是今天的全部内容了。后续将有更多精彩内容,请保持关注! + +Cheers! + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/flameshot-a-simple-yet-powerful-feature-rich-screenshot-tool/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[FSSlc](https://github.com/FSSlc) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[1]: https://aur.archlinux.org/packages/flameshot-git +[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[3]: https://extensions.gnome.org/extension/1031/topicons/ diff --git a/published/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md b/published/201810/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md similarity index 100% rename from published/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md rename to published/201810/20180906 How To Limit Network Bandwidth In Linux Using Wondershaper.md diff --git a/published/20180907 How to Use the Netplan Network Configuration Tool on Linux.md b/published/201810/20180907 
How to Use the Netplan Network Configuration Tool on Linux.md similarity index 100% rename from published/20180907 How to Use the Netplan Network Configuration Tool on Linux.md rename to published/201810/20180907 How to Use the Netplan Network Configuration Tool on Linux.md diff --git a/published/20180910 How To List An Available Package Groups In Linux.md b/published/201810/20180910 How To List An Available Package Groups In Linux.md similarity index 100% rename from published/20180910 How To List An Available Package Groups In Linux.md rename to published/201810/20180910 How To List An Available Package Groups In Linux.md diff --git a/translated/tech/20180912 How to build rpm packages.md b/published/201810/20180912 How to build rpm packages.md similarity index 58% rename from translated/tech/20180912 How to build rpm packages.md rename to published/201810/20180912 How to build rpm packages.md index 8506184294..16a04f80a6 100644 --- a/translated/tech/20180912 How to build rpm packages.md +++ b/published/201810/20180912 How to build rpm packages.md @@ -1,19 +1,19 @@ -如何构建rpm包 +如何构建 RPM 包 ====== -节省跨多个主机安装文件和脚本的时间和精力。 +> 节省跨多个主机安装文件和脚本的时间和精力。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1) -自20多年前我开始使用 Linux 以来,我已经使用过基于 rpm 的软件包管理器在 Red Hat 和 Fedora Linux系统上安装软件。我使用过 **rpm** 程序本身,还有 **yum** 和 **DNF** ,用于在我的 Linux 主机上安装和更新软件包,DNF 是 yum 的一个紧密后代。 yum 和 DNF 工具是 rpm 实用程序的包装器,它提供了其他功能,例如查找和安装包依赖项的功能。 +自20多年前我开始使用 Linux 以来,我已经使用过基于 rpm 的软件包管理器在 Red Hat 和 Fedora Linux 系统上安装软件。我使用过 `rpm` 程序本身,还有 `yum` 和 `dnf` ,用于在我的 Linux 主机上安装和更新软件包,`dnf` 是 `yum` 的一个近亲。 `yum` 和 `dnf` 工具是 `rpm` 实用程序的包装器,它提供了其他功能,例如查找和安装包依赖项的功能。 -多年来,我创建了许多 Bash 脚本,其中一些脚本具有单独的配置文件,我希望在大多数新计算机和虚拟机上安装这些脚本。这也能解决安装所有这些软件包需要花费大量时间的难题,因此我决定通过创建一个 rpm 软件包来自动执行该过程,我可以将其复制到目标主机并将所有这些文件安装在适当的位置。虽然 **rpm** 工具以前用于构建 rpm 包,但该功能已被删除,并且创建了一个新工具来构建新的 rpm。 +多年来,我创建了许多 Bash 
脚本,其中一些脚本具有单独的配置文件,我希望在大多数新计算机和虚拟机上安装这些脚本。这也能解决安装所有这些软件包需要花费大量时间的难题,因此我决定通过创建一个 rpm 软件包来自动执行该过程,我可以将其复制到目标主机并将所有这些文件安装在适当的位置。虽然 `rpm` 工具以前用于构建 rpm 包,但该功能已被删除,并且创建了一个新工具来构建新的 rpm。 -当我开始这个项目时,我发现很少有关于创建 rpm 包的信息,但我找到了一本书,名为《Maximum RPM》,这本书才帮我弄明白了。这本书现在已经过时了,我发现的绝大多数信息都是如此。它也已经绝版,使用复印件需要花费数百美元。[Maximum RPM][1] 的在线版本是免费提供的,并保持最新。 [RPM 网站][2]还有其他网站的链接,这些网站上有很多关于 rpm 的文档。其他的信息往往是简短的,显然都是假设你已经对该过程有了很多了解。 +当我开始这个项目时,我发现很少有关于创建 rpm 包的信息,但我找到了一本书,名为《Maximum RPM》,这本书才帮我弄明白了。这本书现在已经过时了,我发现的绝大多数信息都是如此。它也已经绝版,用过的副本也需要花费数百美元。[Maximum RPM][1] 的在线版本是免费提供的,并保持最新。该 [RPM 网站][2]还有其他网站的链接,这些网站上有很多关于 rpm 的文档。其他的信息往往是简短的,显然都是假设你已经对该过程有了很多了解。 -此外,我发现的每个文档都假定代码需要在开发环境中从源代码编译。我不是开发人员。我是一个系统管理员,我们系统管理员有不同的需求,因为我们不需要或者我们不应该为了管理任务而去编译代码;我们应该使用 shell 脚本。所以我们没有源代码,因为它需要被编译成二进制可执行文件。我们拥有的是一个也是可执行的源代码。 +此外,我发现的每个文档都假定代码需要在开发环境中从源代码编译。我不是开发人员。我是一个系统管理员,我们系统管理员有不同的需求,因为我们不需要或者我们不应该为了管理任务而去编译代码;我们应该使用 shell 脚本。所以我们没有源代码,因为它需要被编译成二进制可执行文件。我们拥有的源代码也应该是可执行的。 -在大多数情况下,此项目应作为非 root 用户执行。 Rpm 包永远不应该由 root 用户构建,而只能由非特权普通用户构建。我将指出哪些部分应该以 root 身份执行,哪些部分应由非 root,非特权用户执行。 +在大多数情况下,此项目应作为非 root 用户执行。 rpm 包永远不应该由 root 用户构建,而只能由非特权普通用户构建。我将指出哪些部分应该以 root 身份执行,哪些部分应由非 root,非特权用户执行。 ### 准备 @@ -37,7 +37,7 @@ passwd: all authentication tokens updated successfully. 
[root@testvm1 ~]# ``` -构建 rpm 包需要 `rpm-build` 包,该包可能尚未安装。 现在以 root 身份安装它。 请注意,此命令还将安装多个依赖项。 数量可能会有所不同,具体取决于主机上已安装的软件包; 它在我的测试虚拟机上总共安装了17个软件包,这是非常小的。 +构建 rpm 包需要 `rpm-build` 包,该包可能尚未安装。 现在以 root 身份安装它。 请注意,此命令还将安装多个依赖项。 数量可能会有所不同,具体取决于主机上已安装的软件包; 它在我的测试虚拟机上总共安装了 17 个软件包,这是非常小的。 ``` dnf install -y rpm-build @@ -49,15 +49,15 @@ dnf install -y rpm-build wget https://github.com/opensourceway/how-to-rpm/raw/master/utils.tar ``` -此 tar 包包含将由最终 rpm 程序安装的所有文件和 Bash 脚本。 还有一个完整的 spec 文件,你可以使用它来构建 rpm。 我们将详细介绍 spec 文件的每个部分。 +此 tar 包包含将由最终 `rpm` 程序安装的所有文件和 Bash 脚本。 还有一个完整的 spec 文件,你可以使用它来构建 rpm。 我们将详细介绍 spec 文件的每个部分。 -作为普通学生 student,使用你的家目录作为当前工作目录(pwd),解压缩 tar 包。 +作为普通学生 student,使用你的家目录作为当前工作目录(`pwd`),解压缩 tar 包。 ``` [student@testvm1 ~]$ cd ; tar -xvf utils.tar ``` -使用 `tree` 命令验证~/development 的目录结构和包含的文件,如下所示: +使用 `tree` 命令验证 `~/development` 的目录结构和包含的文件,如下所示: ``` [student@testvm1 ~]$ tree development/ @@ -77,13 +77,13 @@ development/ [student@testvm1 ~]$ ``` -`mymotd` 脚本创建一个发送到标准输出的“当日消息”数据流。 `create_motd` 脚本运行 `mymotd` 脚本并将输出重定向到 /etc/motd 文件。 此文件用于向使用SSH远程登录的用户显示每日消息。 +`mymotd` 脚本创建一个发送到标准输出的“当日消息”数据流。 `create_motd` 脚本运行 `mymotd` 脚本并将输出重定向到 `/etc/motd` 文件。 此文件用于向使用 SSH 远程登录的用户显示每日消息。 -`die` 脚本是我自己的脚本,它将 `kill` 命令包装在一些代码中,这些代码可以找到与指定字符串匹配的运行程序并将其终止。 它使用 `kill -9` 来确保kill命令一定会执行。 +`die` 脚本是我自己的脚本,它将 `kill` 命令包装在一些代码中,这些代码可以找到与指定字符串匹配的运行程序并将其终止。 它使用 `kill -9` 来确保 `kill` 命令一定会执行。 -`sysdata` 脚本可以显示有关计算机硬件,还有已安装的 Linux 版本,所有已安装的软件包以及硬盘驱动器元数据的数万行数据。 我用它来记录某个时间点的主机状态。 我以后可以用它作为参考。 我曾经这样做是为了维护我为客户安装的主机记录。 +`sysdata` 脚本可以显示有关计算机硬件,还有已安装的 Linux 版本,所有已安装的软件包以及硬盘驱动器元数据等数万行数据。 我用它来记录某个时间点的主机状态。 我以后可以用它作为参考。 我曾经这样做是为了维护我为客户安装的主机记录。 -你可能需要将这些文件和目录的所有权更改为 student:student 。 如有必要,使用以下命令执行此操作: +你可能需要将这些文件和目录的所有权更改为 `student:student` 。 如有必要,使用以下命令执行此操作: ``` chown -R student:student development @@ -104,11 +104,11 @@ chown -R student:student development     └── SRPMS ``` -我们不会创建 rpmbuild/RPMS/X86_64 目录,因为对于64位编译的二进制文件这是特定于体系结构的。 我们有 shell 脚本,不是特定于体系结构的。 实际上,我们也不会使用 SRPMS 目录,它将包含编译器的源文件。 +我们不会创建 
`rpmbuild/RPMS/X86_64` 目录,因为它是特定于体系结构编译的 64 位二进制文件。 我们有 shell 脚本,不是特定于体系结构的。 实际上,我们也不会使用 `SRPMS` 目录,它将包含编译器的源文件。 ### 检查 spec 文件 -每个 spec 文件都有许多部分,其中一些部分可能会被忽视或省略,取决于 rpm 构建的具体情况。 这个特定的 spec 文件不是工作所需的最小文件的示例,但它是一个很好的包含不需要编译的文件的中等复杂 spec 文件的例子。 如果需要编译,它将在`构建`部分中执行,该部分在此 spec 文件中省略掉了,因为它不是必需的。 +每个 spec 文件都有许多部分,其中一些部分可能会被忽视或省略,取决于 rpm 构建的具体情况。 这个特定的 spec 文件不是工作所需的最小文件的示例,但它是一个包含不需要编译的文件的中等复杂 spec 文件的很好例子。 如果需要编译,它将在 `%build` 部分中执行,该部分在此 spec 文件中省略掉了,因为它不是必需的。 #### 前言 @@ -139,40 +139,46 @@ BuildRoot: ~/rpmbuild/ # rpmbuild --target noarch -bb utils.spec ``` -`rpmbuild` 程序会忽略注释行。我总是喜欢在本节中添加注释,其中包含创建包所需的 `rpmbuild` 命令的确切语法。摘要标签是包的简短描述。 Name,Version 和 Release 标签用于创建 rpm 文件的名称,如utils-1.00-1.rpm 中所示。通过增加发行版号码和版本号,你可以创建 rpm 包去更新旧版本的。 +`rpmbuild` 程序会忽略注释行。我总是喜欢在本节中添加注释,其中包含创建包所需的 `rpmbuild` 命令的确切语法。 -许可证标签定义了发布包的许可证。我总是使用 GPL 的一个变体。指定许可证对于澄清包中包含的软件是开源的这一事实非常重要。这也是我将许可证和 GPL 语句包含在将要安装的文件中的原因。 +`Summary` 标签是包的简短描述。 -URL 通常是项目或项目所有者的网页。在这种情况下,它是我的个人网页。 +`Name`、`Version` 和 `Release` 标签用于创建 rpm 文件的名称,如 `utils-1.00-1.rpm`。通过增加发行版号码和版本号,你可以创建 rpm 包去更新旧版本的。 -Group 标签很有趣,通常用于 GUI 应用程序。 Group 标签的值决定了应用程序菜单中的哪一组图标将包含此包中可执行文件的图标。与 Icon 标签(我们此处未使用)一起使用时,Group 标签允许添加图标和所需信息用于将程序启动到应用程序菜单结构中。 +`License` 标签定义了发布包的许可证。我总是使用 GPL 的一个变体。指定许可证对于澄清包中包含的软件是开源的这一事实非常重要。这也是我将 `License` 和 `GPL` 语句包含在将要安装的文件中的原因。 -Packager 标签用于指定负责维护和创建包的人员或组织。 +`URL` 通常是项目或项目所有者的网页。在这种情况下,它是我的个人网页。 -Requires 语句定义此 rpm 包的依赖项。每个都是包名。如果其中一个指定的软件包不存在,DNF 安装实用程序将尝试在 /etc/yum.repos.d 中定义的某个已定义的存储库中找到它,如果存在则安装它。如果 DNF 找不到一个或多个所需的包,它将抛出一个错误,指出哪些包丢失并终止。 +`Group` 标签很有趣,通常用于 GUI 应用程序。 `Group` 标签的值决定了应用程序菜单中的哪一组图标将包含此包中可执行文件的图标。与 `Icon` 标签(我们此处未使用)一起使用时,`Group` 标签允许在应用程序菜单结构中添加用于启动程序的图标和所需信息。 -BuildRoot 行指定顶级目录,`rpmbuild` 工具将在其中找到 spec 文件,并在构建包时在其中创建临时目录。完成的包将存储在我们之前指定的noarch子目录中。注释显示了构建此程序包的命令语法,包括定义了目标体系结构的 `–target noarch` 选项。因为这些是Bash脚本,所以它们与特定的CPU架构无关。如果省略此选项,则构建将选用正在执行构建的CPU的体系结构。 +`Packager` 标签用于指定负责维护和创建包的人员或组织。 + +`Requires` 语句定义此 rpm 包的依赖项。每个都是包名。如果其中一个指定的软件包不存在,DNF 安装实用程序将尝试在 `/etc/yum.repos.d` 
中定义的某个已定义的存储库中找到它,如果存在则安装它。如果 DNF 找不到一个或多个所需的包,它将抛出一个错误,指出哪些包丢失并终止。 + +`BuildRoot` 行指定顶级目录,`rpmbuild` 工具将在其中找到 spec 文件,并在构建包时在其中创建临时目录。完成的包将存储在我们之前指定的 `noarch` 子目录中。 + +注释显示了构建此程序包的命令语法,包括定义了目标体系结构的 `–target noarch` 选项。因为这些是 Bash 脚本,所以它们与特定的 CPU 架构无关。如果省略此选项,则构建将选用正在执行构建的 CPU 的体系结构。 `rpmbuild` 程序可以针对许多不同的体系结构,并且使用 `--target` 选项允许我们在不同的体系结构主机上构建特定体系结构的包,其具有与执行构建的体系结构不同的体系结构。所以我可以在 x86_64 主机上构建一个用于 i686 架构的软件包,反之亦然。 如果你有自己的网站,请将打包者的名称更改为你自己的网站。 -#### 描述 +#### 描述部分(`%description`) -spec 文件的 `描述` 部分包含 rpm 包的描述。 它可以很短,也可以包含许多信息。 我们的 `描述` 部分相当简洁。 +spec 文件的 `%description` 部分包含 rpm 包的描述。 它可以很短,也可以包含许多信息。 我们的 `%description` 部分相当简洁。 ``` %description A collection of utility scripts for testing RPM creation. ``` -#### 准备 +#### 准备部分(`%prep`) -`准备` 部分是在构建过程中执行的第一个脚本。 在安装程序包期间不会执行此脚本。 +`%prep` 部分是在构建过程中执行的第一个脚本。 在安装程序包期间不会执行此脚本。 -这个脚本只是一个 Bash shell 脚本。 它准备构建目录,根据需要创建用于构建的目录,并将相应的文件复制到各自的目录中。 这将包括完整编译作为构建的一部分所需的源。 +这个脚本只是一个 Bash shell 脚本。 它准备构建目录,根据需要创建用于构建的目录,并将相应的文件复制到各自的目录中。 这将包括作为构建的一部分的完整编译所需的源代码。 -$RPM_BUILD_ROOT 目录表示已安装系统的根目录。 在 $RPM_BUILD_ROOT 目录中创建的目录是实时文件系统中的绝对路径,例如 /user/local/share/utils,/usr/local/bin 等。 +`$RPM_BUILD_ROOT` 目录表示已安装系统的根目录。 在 `$RPM_BUILD_ROOT` 目录中创建的目录是真实文件系统中的绝对路径,例如 `/user/local/share/utils`、`/usr/local/bin` 等。 对于我们的包,我们没有预编译源,因为我们的所有程序都是 Bash 脚本。 因此,我们只需将这些脚本和其他文件复制到已安装系统的目录中。 @@ -193,11 +199,11 @@ cp /home/student/development/utils/spec/* $RPM_BUILD_ROOT/usr/local/share/utils exit ``` -请注意,本节末尾的 exit 语句是必需的。 +请注意,本节末尾的 `exit` 语句是必需的。 -#### 文件 +#### 文件部分(`%files`) -spec 文件的这一部分定义了要安装的文件及其在目录树中的位置。 它还指定了要安装的每个文件的文件属性以及所有者和组所有者。 文件权限和所有权是可选的,但我建议明确设置它们以消除这些属性在安装时不正确或不明确的任何可能性。 如果目录尚不存在,则会在安装期间根据需要创建目录。 +spec 文件的 `%files` 这一部分定义了要安装的文件及其在目录树中的位置。 它还指定了要安装的每个文件的文件属性(`%attr`)以及所有者和组所有者。 文件权限和所有权是可选的,但我建议明确设置它们以消除这些属性在安装时不正确或不明确的任何可能性。 如果目录尚不存在,则会在安装期间根据需要创建目录。 ``` %files @@ -205,13 +211,13 @@ spec 文件的这一部分定义了要安装的文件及其在目录树中的位 %attr(0644, root, root) /usr/local/share/utils/* ``` -#### 安装前 +#### 安装前(`%pre`) -在我们的实验室项目的 spec 文件中,此部分为空。 这将放置那些需要 rpm 安装前执行的脚本。 
+在我们的实验室项目的 spec 文件中,此部分为空。 这应该放置那些需要 rpm 中的文件安装前执行的脚本。 -#### 安装后 +#### 安装后(`%post`) -spec 文件的这一部分是另一个 Bash 脚本。 这个在安装文件后运行。 此部分几乎可以是你需要或想要的任何内容,包括创建文件,运行系统命令以及重新启动服务以在进行配置更改后重新初始化它们。 我们的 rpm 包的 `安装后` 脚本执行其中一些任务。 +spec 文件的这一部分是另一个 Bash 脚本。 这个在文件安装后运行。 此部分几乎可以是你需要或想要的任何内容,包括创建文件、运行系统命令以及重新启动服务以在进行配置更改后重新初始化它们。 我们的 rpm 包的 `%post` 脚本执行其中一些任务。 ``` %post @@ -236,11 +242,11 @@ fi 此脚本中包含的注释应明确其用途。 -#### 卸载后 +#### 卸载后(`%postun`) -此部分包含将在卸载 rpm 软件包后运行的脚本。 使用 rpm 或 DNF 删除包会删除文件部分中列出的所有文件,但它不会删除安装后部分创建的文件或链接,因此我们需要在本节中处理。 +此部分包含将在卸载 rpm 软件包后运行的脚本。 使用 `rpm` 或 `dnf` 删除包会删除文件部分中列出的所有文件,但它不会删除安装后部分创建的文件或链接,因此我们需要在本节中处理。 -此脚本通常由清理任务组成,只是清除以前由rpm安装的文件,但rpm本身无法完成清除。 对于我们的包,它包括删除 `安装后` 脚本创建的链接并恢复 motd 文件的已保存原件。 +此脚本通常由清理任务组成,只是清除以前由 `rpm` 安装的文件,但 rpm 本身无法完成清除。 对于我们的包,它包括删除 `%post` 脚本创建的链接并恢复 motd 文件的已保存原件。 ``` %postun @@ -254,9 +260,9 @@ then fi ``` -#### 清理 +#### 清理(`%clean`) -这个 Bash 脚本在 rpm 构建过程之后开始清理。 下面 `清理` 部分中的两行删除了 `rpm-build` 命令创建的构建目录。 在许多情况下,可能还需要额外的清理。 +这个 Bash 脚本在 rpm 构建过程之后开始清理。 下面 `%clean` 部分中的两行删除了 `rpm-build` 命令创建的构建目录。 在许多情况下,可能还需要额外的清理。 ``` %clean @@ -264,9 +270,9 @@ rm -rf $RPM_BUILD_ROOT/usr/local/bin rm -rf $RPM_BUILD_ROOT/usr/local/share/utils ``` -#### 更新日志 +#### 变更日志(`%changelog`) -此可选的文本部分包含 rpm 及其包含的文件的更改列表。 最新的更改记录在本部分顶部。 +此可选的文本部分包含 rpm 及其包含的文件的变更列表。最新的变更记录在本部分顶部。 ``` %changelog @@ -280,20 +286,20 @@ rm -rf $RPM_BUILD_ROOT/usr/local/share/utils ### 构建 rpm -spec 文件必须位于 rpmbuild 目录树的 SPECS 目录中。 我发现最简单的方法是创建一个指向该目录中实际 spec 文件的链接,以便可以在开发目录中对其进行编辑,而无需将其复制到 SPECS 目录。 将 SPECS 目录设为当前工作目录,然后创建链接。 +spec 文件必须位于 `rpmbuild` 目录树的 `SPECS` 目录中。 我发现最简单的方法是创建一个指向该目录中实际 spec 文件的链接,以便可以在开发目录中对其进行编辑,而无需将其复制到 `SPECS` 目录。 将 `SPECS` 目录设为当前工作目录,然后创建链接。 ``` cd ~/rpmbuild/SPECS/ ln -s ~/development/spec/utils.spec ``` -运行以下命令以构建 rpm 。 如果没有错误发生,只需要花一点时间来创建 rpm 。 +运行以下命令以构建 rpm。 如果没有错误发生,只需要花一点时间来创建 rpm。 ``` rpmbuild --target noarch -bb utils.spec ``` -检查 ~/rpmbuild/RPMS/noarch 目录以验证新的 rpm 是否存在。 +检查 `~/rpmbuild/RPMS/noarch` 目录以验证新的 rpm 是否存在。 ``` [student@testvm1 ~]$ cd 
rpmbuild/RPMS/noarch/ @@ -305,7 +311,7 @@ total 24 ### 测试 rpm -以 root 用户身份安装 rpm 以验证它是否正确安装并且文件是否安装在正确的目录中。 rpm 的确切名称将取决于你在 Preamble 部分中标签的值,但如果你使用了示例中的值,则 rpm 名称将如下面的示例命令所示: +以 root 用户身份安装 rpm 以验证它是否正确安装并且文件是否安装在正确的目录中。 rpm 的确切名称将取决于你在前言部分中标签的值,但如果你使用了示例中的值,则 rpm 名称将如下面的示例命令所示: ``` [root@testvm1 ~]# cd /home/student/rpmbuild/RPMS/noarch/ @@ -318,9 +324,9 @@ Updating / installing...    1:utils-1.0.0-1                    ################################# [100%] ``` -检查 /usr/local/bin 以确保新文件存在。 你还应验证是否已创建 /etc/cron.daily 中的 create_motd 链接。 +检查 `/usr/local/bin` 以确保新文件存在。 你还应验证是否已创建 `/etc/cron.daily` 中的 `create_motd` 链接。 -使用 `rpm -q --changelog utils` 命令查看更改日志。 使用 `rpm -ql utils` 命令(在 `ql`中为小写 L )查看程序包安装的文件。 +使用 `rpm -q --changelog utils` 命令查看更改日志。 使用 `rpm -ql utils` 命令(在 `ql` 中为小写 `L` )查看程序包安装的文件。 ``` [root@testvm1 noarch]# rpm -q --changelog utils @@ -356,11 +362,11 @@ Requires: badrequire 构建包并尝试安装它。 显示什么消息? -我们使用 `rpm` 命令来安装和删除 `utils` 包。 尝试使用 yum 或 DNF 安装软件包。 你必须与程序包位于同一目录中,或指定程序包的完整路径才能使其正常工作。 +我们使用 `rpm` 命令来安装和删除 `utils` 包。 尝试使用 `yum` 或 `dnf` 安装软件包。 你必须与程序包位于同一目录中,或指定程序包的完整路径才能使其正常工作。 ### 总结 -在这里看一下创建 rpm 包的基础知识,我们没有涉及很多标签和很多部分。 下面列出的资源可以提供更多信息。 构建 rpm 包并不困难;你只需要正确的信息。 我希望这对你有所帮助——我花了几个月的时间来自己解决问题。 +在这篇对创建 rpm 包的基础知识的概览中,我们没有涉及很多标签和很多部分。 下面列出的资源可以提供更多信息。 构建 rpm 包并不困难;你只需要正确的信息。 我希望这对你有所帮助——我花了几个月的时间来自己解决问题。 我们没有涵盖源代码构建,但如果你是开发人员,那么从这一点开始应该是一个简单的步骤。 @@ -368,9 +374,9 @@ Requires: badrequire ### 资料 -- Edward C. Baily,Maximum RPM,Sams著,于2000年,ISBN 0-672-31105-4 -- Edward C. Baily,[Maximum RPM][1],更新在线版本 -- [RPM文档][4]:此网页列出了 rpm 的大多数可用在线文档。 它包括许多其他网站的链接和有关 rpm 的信息。 +- Edward C. Baily,《Maximum RPM》,Sams 出版于 2000 年,ISBN 0-672-31105-4 +- Edward C. 
Baily,《[Maximum RPM][1]》,更新在线版本 +- [RPM 文档][4]:此网页列出了 rpm 的大多数可用在线文档。 它包括许多其他网站的链接和有关 rpm 的信息。 -------------------------------------------------------------------------------- @@ -379,7 +385,7 @@ via: https://opensource.com/article/18/9/how-build-rpm-packages 作者:[David Both][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[Flowsnow](https://github.com/Flowsnow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -387,4 +393,4 @@ via: https://opensource.com/article/18/9/how-build-rpm-packages [1]: http://ftp.rpm.org/max-rpm/ [2]: http://rpm.org/index.html [3]: http://www.both.org/?p=960 -[4]: http://rpm.org/documentation.html \ No newline at end of file +[4]: http://rpm.org/documentation.html diff --git a/published/20180913 ScreenCloud- The Screenshot-- App.md b/published/201810/20180913 ScreenCloud- The Screenshot-- App.md similarity index 100% rename from published/20180913 ScreenCloud- The Screenshot-- App.md rename to published/201810/20180913 ScreenCloud- The Screenshot-- App.md diff --git a/published/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md b/published/201810/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md similarity index 100% rename from published/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md rename to published/201810/20180915 Backup Installed Packages And Restore Them On Freshly Installed Ubuntu.md diff --git a/published/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md b/published/201810/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md similarity index 100% rename from published/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md rename to published/201810/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md diff --git 
a/published/20180917 4 scanning tools for the Linux desktop.md b/published/201810/20180917 4 scanning tools for the Linux desktop.md similarity index 100% rename from published/20180917 4 scanning tools for the Linux desktop.md rename to published/201810/20180917 4 scanning tools for the Linux desktop.md diff --git a/published/20180917 Getting started with openmediavault- A home NAS solution.md b/published/201810/20180917 Getting started with openmediavault- A home NAS solution.md similarity index 100% rename from published/20180917 Getting started with openmediavault- A home NAS solution.md rename to published/201810/20180917 Getting started with openmediavault- A home NAS solution.md diff --git a/published/20180918 Linux firewalls- What you need to know about iptables and firewalld.md b/published/201810/20180918 Linux firewalls- What you need to know about iptables and firewalld.md similarity index 100% rename from published/20180918 Linux firewalls- What you need to know about iptables and firewalld.md rename to published/201810/20180918 Linux firewalls- What you need to know about iptables and firewalld.md diff --git a/published/20180918 Top 3 Python libraries for data science.md b/published/201810/20180918 Top 3 Python libraries for data science.md similarity index 100% rename from published/20180918 Top 3 Python libraries for data science.md rename to published/201810/20180918 Top 3 Python libraries for data science.md diff --git a/published/20180919 Host your own cloud with Raspberry Pi NAS.md b/published/201810/20180919 Host your own cloud with Raspberry Pi NAS.md similarity index 100% rename from published/20180919 Host your own cloud with Raspberry Pi NAS.md rename to published/201810/20180919 Host your own cloud with Raspberry Pi NAS.md diff --git a/published/20180919 How Writing Can Expand Your Skills and Grow Your Career.md b/published/201810/20180919 How Writing Can Expand Your Skills and Grow Your Career.md similarity index 100% rename from 
published/20180919 How Writing Can Expand Your Skills and Grow Your Career.md rename to published/201810/20180919 How Writing Can Expand Your Skills and Grow Your Career.md diff --git a/published/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md b/published/201810/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md similarity index 100% rename from published/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md rename to published/201810/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md diff --git a/published/20180920 8 Python packages that will simplify your life with Django.md b/published/201810/20180920 8 Python packages that will simplify your life with Django.md similarity index 100% rename from published/20180920 8 Python packages that will simplify your life with Django.md rename to published/201810/20180920 8 Python packages that will simplify your life with Django.md diff --git a/published/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md b/published/201810/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md similarity index 100% rename from published/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md rename to published/201810/20180920 WinWorld - A Large Collection Of Defunct OSs, Software And Games.md diff --git a/published/20180921 Clinews - Read News And Latest Headlines From Commandline.md b/published/201810/20180921 Clinews - Read News And Latest Headlines From Commandline.md similarity index 100% rename from published/20180921 Clinews - Read News And Latest Headlines From Commandline.md rename to published/201810/20180921 Clinews - Read News And Latest Headlines From Commandline.md diff --git a/published/201810/20180921 Control your data with Syncthing- An open source synchronization tool.md b/published/201810/20180921 Control your data with Syncthing- An open source synchronization 
tool.md new file mode 100644 index 0000000000..4f68ff5b0d --- /dev/null +++ b/published/201810/20180921 Control your data with Syncthing- An open source synchronization tool.md @@ -0,0 +1,107 @@ +使用开源同步工具 Syncthing 控制你的数据 +====== +> 决定如何存储和共享你的个人信息。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg) + +如今,我们的一些最重要的财产 —— 从家人和朋友的照片和视频到财务和医疗文件 —— 都是数据。即便云存储服务发展迅猛,我们仍然担忧对隐私和个人数据缺乏控制。从棱镜监控计划到谷歌[让 APP 开发者扫描你的个人邮件][1],这些新闻报道应该会让我们对个人信息的安全性有所顾虑。 + +[Syncthing][2] 可以让你放下心来。它是一款开源的点对点文件同步工具,可以运行在 Linux、Windows、Mac、Android 和其他平台上(抱歉,没有 iOS)。Syncthing 使用自定义的协议,叫[块交换协议][3]。简而言之,Syncthing 能让你无需拥有服务器就能跨设备同步数据。 + +在这篇文章中,我将解释如何在 Linux 电脑和安卓手机之间安装 Syncthing 并同步文件。 + +### Linux + +Syncthing 在大多数流行的发行版中都能下载到。Fedora 28 包含其最新版本。 + +要在 Fedora 上安装 Syncthing,你可以在软件中心搜索,或者执行以下命令: + +``` +sudo dnf install syncthing syncthing-gtk +``` + +安装好后,打开它。你将会看到一个助手帮你配置 Syncthing。点击 “Next” 直到它要求配置 WebUI。最安全的选项是选择 “Listen on localhost”(在本机监听),这将禁用对外的 Web 界面,阻止未经授权的用户访问。 + +![Syncthing in Setup WebUI dialog box][5] + +*Syncthing 安装时的 WebUI 对话框* + +关闭对话框。至此 Syncthing 已经安装完毕,可以分享文件夹、连接设备开始同步了。不过,我们先继续配置另一个客户端。 + +### Android + +Syncthing 在 Google Play 和 F-Droid 应用商店都能下载到。 + +![](https://opensource.com/sites/default/files/uploads/syncthing2.png) + +安装应用程序后,会显示欢迎界面。授予 Syncthing 访问你设备存储的权限。你可能会被要求为此应用程序禁用电池优化。这样做是安全的,因为我们会在后面优化该应用的设置,使其仅在插入电源并连接到无线网络时同步。 + +点击主菜单图标来到 “Settings”,然后是 “Run Conditions”(运行条件)。点击 “Always run in the background”(总是在后台运行)、“Run only when charging”(仅在充电时运行)和 “Run only on wifi”(仅在 WIFI 下运行)。现在你的安卓客户端已经准备好与你的设备交换文件了。 + +Syncthing 中有两个重要的概念需要记住:文件夹和设备。文件夹是你想要分享的内容,而分享必须以设备为对象。Syncthing 允许你与不同的设备分享各自独立的文件夹。设备之间通过交换设备 ID 来互相添加。设备 ID 是在 Syncthing 首次启动时创建的一个唯一的、密码学安全的标识符。 + +### 连接设备 + +现在让我们连接你的 Linux 机器和你的 Android 客户端。 + +在你的 Linux 计算机中,打开 Syncthing,单击 “Settings” 图标,然后单击 “Show ID”,就会显示一个二维码。 + +在你的安卓手机上,打开 Syncthing。在主界面上,点击 “Devices” 页后点击 “+”。在第一个区域内点击二维码符号来启动二维码扫描。 + +将你手机的摄像头对准电脑上的二维码。设备 ID 字段将由你的桌面客户端设备 ID
填充。起一个合适的名字并保存。由于添加设备需要双方确认,现在你需要在电脑客户端上确认你想要添加这台安卓手机。你的电脑客户端可能会花上几分钟才弹出请求确认的提示。出现提示时,点击 “Add”。 + +![](https://opensource.com/sites/default/files/uploads/syncthing6.png) + +在 “New Device” 窗口,你可以确认并配置一些关于该设备的选项,像是 “Device Name” 和 “Addresses”。如果你在地址那一栏选择 “dynamic”(动态),客户端将会自动探测设备的 IP 地址;但如果你想固定使用某一个 IP 地址,可以将该地址填进这一栏里。如果你已经创建了文件夹(或者之后创建),你也可以与这台新设备分享这个文件夹。 + +![](https://opensource.com/sites/default/files/uploads/syncthing7.png) + +你的电脑和安卓设备已经配对,可以交换文件了。(如果你有多台电脑或手机,只需重复这些步骤。) + +### 分享文件夹 + +既然你想要同步的设备之间已经连接,现在是时候共享一个文件夹了。你可以在电脑上共享一个文件夹,被添加到该文件夹的设备将获得一份副本。 + +若要共享文件夹,请转至 “Settings” 并单击 “Add Shared Folder”(添加共享文件夹): + +![](https://opensource.com/sites/default/files/uploads/syncthing8.png) + +在下一个窗口中,输入要共享的文件夹的信息: + +![](https://opensource.com/sites/default/files/uploads/syncthing9.png) + +你可以使用任何你想要的标签。“Folder ID” 将随机生成,用于在客户端之间识别这个文件夹。在 “Path” 里,点击 “Browse” 就能定位到你想要分享的文件夹。如果你想让 Syncthing 监控文件夹的变化(例如删除、新建文件等),点击 “Monitor filesystem for changes”(监控文件系统变化)。 + +记住,当你分享一个文件夹时,在其他客户端上的任何改动都将会反映到每一台设备上。这意味着如果你在其他电脑和手机设备之间分享了一个包含图片的文件夹,在这些客户端上的改动都会同步到每一台设备。如果这不是你想要的,你可以把你的文件夹设为 “Send Only”(只发送)给其他客户端,这样其他客户端上的改动就不会被同步回来。 + +完成后,转至 “Share with Devices”(与设备共享)页并选择要与之同步该文件夹的主机。 + +你选择的所有设备都需要接受共享请求;你将在设备上收到通知。 + +正如共享文件夹时一样,你必须配置新收到的共享文件夹: + +![](https://opensource.com/sites/default/files/uploads/syncthing12.png) + +同样,在这里你可以定义任何标签,但是 ID 必须与每个客户端相匹配。在文件夹选项中,选择存放该文件夹及其文件的位置。请记住,此文件夹中所做的任何更改都将反映到允许同步该文件夹的每个设备上。 + +这些就是连接设备、用 Syncthing 共享文件夹的步骤。开始复制可能需要几分钟时间,这取决于你的网络设置,以及你们是否处于同一网络中。 + +Syncthing 还提供了更多出色的功能和选项。试试看,把握住你数据的控制权。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/take-control-your-data-syncthing + +作者:[Michael Zamot][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[ypingcn](https://github.com/ypingcn) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mzamot +[1]:
https://gizmodo.com/google-says-it-doesnt-go-through-your-inbox-anymore-bu-1827299695 +[2]: https://syncthing.net/ +[3]: https://docs.syncthing.net/specs/bep-v1.html +[4]: /file/410191 +[5]: https://opensource.com/sites/default/files/uploads/syncthing1.png diff --git a/published/20180924 A Simple, Beautiful And Cross-platform Podcast App.md b/published/201810/20180924 A Simple, Beautiful And Cross-platform Podcast App.md similarity index 100% rename from published/20180924 A Simple, Beautiful And Cross-platform Podcast App.md rename to published/201810/20180924 A Simple, Beautiful And Cross-platform Podcast App.md diff --git a/published/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md b/published/201810/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md similarity index 100% rename from published/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md rename to published/201810/20180924 How To Find Out Which Port Number A Process Is Using In Linux.md diff --git a/published/20180924 Why Linux users should try Rust.md b/published/201810/20180924 Why Linux users should try Rust.md similarity index 100% rename from published/20180924 Why Linux users should try Rust.md rename to published/201810/20180924 Why Linux users should try Rust.md diff --git a/published/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/published/201810/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md similarity index 100% rename from published/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md rename to published/201810/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md diff --git a/published/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md b/published/201810/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md similarity index 100% 
rename from published/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md rename to published/201810/20180925 How to Boot Ubuntu 18.04 - Debian 9 Server in Rescue (Single User mode) - Emergency Mode.md diff --git a/published/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md b/published/201810/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md similarity index 100% rename from published/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md rename to published/201810/20180925 How to Replace one Linux Distro With Another in Dual Boot -Guide.md diff --git a/published/20180926 3 open source distributed tracing tools.md b/published/201810/20180926 3 open source distributed tracing tools.md similarity index 100% rename from published/20180926 3 open source distributed tracing tools.md rename to published/201810/20180926 3 open source distributed tracing tools.md diff --git a/published/20180926 An introduction to swap space on Linux systems.md b/published/201810/20180926 An introduction to swap space on Linux systems.md similarity index 100% rename from published/20180926 An introduction to swap space on Linux systems.md rename to published/201810/20180926 An introduction to swap space on Linux systems.md diff --git a/published/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md b/published/201810/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md similarity index 100% rename from published/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md rename to published/201810/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md diff --git a/published/20180926 How to use the Scikit-learn Python library for data science projects.md b/published/201810/20180926 How to use the Scikit-learn Python library for data science projects.md similarity index 100% rename from 
published/20180926 How to use the Scikit-learn Python library for data science projects.md rename to published/201810/20180926 How to use the Scikit-learn Python library for data science projects.md diff --git a/published/20180927 5 cool tiling window managers.md b/published/201810/20180927 5 cool tiling window managers.md similarity index 100% rename from published/20180927 5 cool tiling window managers.md rename to published/201810/20180927 5 cool tiling window managers.md diff --git a/published/20180927 How To Find And Delete Duplicate Files In Linux.md b/published/201810/20180927 How To Find And Delete Duplicate Files In Linux.md similarity index 100% rename from published/20180927 How To Find And Delete Duplicate Files In Linux.md rename to published/201810/20180927 How To Find And Delete Duplicate Files In Linux.md diff --git a/published/20180927 How to Use RAR files in Ubuntu Linux.md b/published/201810/20180927 How to Use RAR files in Ubuntu Linux.md similarity index 100% rename from published/20180927 How to Use RAR files in Ubuntu Linux.md rename to published/201810/20180927 How to Use RAR files in Ubuntu Linux.md diff --git a/published/20180928 10 handy Bash aliases for Linux.md b/published/201810/20180928 10 handy Bash aliases for Linux.md similarity index 100% rename from published/20180928 10 handy Bash aliases for Linux.md rename to published/201810/20180928 10 handy Bash aliases for Linux.md diff --git a/published/20180928 A Free And Secure Online PDF Conversion Suite.md b/published/201810/20180928 A Free And Secure Online PDF Conversion Suite.md similarity index 100% rename from published/20180928 A Free And Secure Online PDF Conversion Suite.md rename to published/201810/20180928 A Free And Secure Online PDF Conversion Suite.md diff --git a/published/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md b/published/201810/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md 
similarity index 100% rename from published/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md rename to published/201810/20180928 How to Install Popcorn Time on Ubuntu 18.04 and Other Linux Distributions.md diff --git a/published/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md b/published/201810/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md similarity index 100% rename from published/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md rename to published/201810/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md diff --git a/published/20181001 16 iptables tips and tricks for sysadmins.md b/published/201810/20181001 16 iptables tips and tricks for sysadmins.md similarity index 100% rename from published/20181001 16 iptables tips and tricks for sysadmins.md rename to published/201810/20181001 16 iptables tips and tricks for sysadmins.md diff --git a/published/20181001 How to Install Pip on Ubuntu.md b/published/201810/20181001 How to Install Pip on Ubuntu.md similarity index 100% rename from published/20181001 How to Install Pip on Ubuntu.md rename to published/201810/20181001 How to Install Pip on Ubuntu.md diff --git a/published/20181002 How use SSH and SFTP protocols on your home network.md b/published/201810/20181002 How use SSH and SFTP protocols on your home network.md similarity index 100% rename from published/20181002 How use SSH and SFTP protocols on your home network.md rename to published/201810/20181002 How use SSH and SFTP protocols on your home network.md diff --git a/published/20181003 Introducing Swift on Fedora.md b/published/201810/20181003 Introducing Swift on Fedora.md similarity index 100% rename from published/20181003 Introducing Swift on Fedora.md rename to published/201810/20181003 Introducing Swift on Fedora.md diff --git a/published/20181003 Tips for listing files with ls at the Linux command 
line.md b/published/201810/20181003 Tips for listing files with ls at the Linux command line.md similarity index 100% rename from published/20181003 Tips for listing files with ls at the Linux command line.md rename to published/201810/20181003 Tips for listing files with ls at the Linux command line.md diff --git a/published/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md b/published/201810/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md similarity index 100% rename from published/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md rename to published/201810/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md diff --git a/published/20181005 Open Source Logging Tools for Linux.md b/published/201810/20181005 Open Source Logging Tools for Linux.md similarity index 100% rename from published/20181005 Open Source Logging Tools for Linux.md rename to published/201810/20181005 Open Source Logging Tools for Linux.md diff --git a/published/20181008 Python at the pump- A script for filling your gas tank.md b/published/201810/20181008 Python at the pump- A script for filling your gas tank.md similarity index 100% rename from published/20181008 Python at the pump- A script for filling your gas tank.md rename to published/201810/20181008 Python at the pump- A script for filling your gas tank.md diff --git a/published/201810/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md b/published/201810/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md new file mode 100644 index 0000000000..b14c45ded7 --- /dev/null +++ b/published/201810/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md @@ -0,0 +1,277 @@ +重启和关闭 Linux 系统的 6 个终端命令 +====== + +在 Linux 管理员的日程当中,有很多需要执行的任务,其中就有系统的重启和关闭。 + +对于 Linux 管理员来说,重启和关闭系统是其诸多风险操作中的一例,有时候,由于某些原因,这些操作可能无法挽回,他们需要更多的时间来排查问题。 + +在 Linux 命令行模式下我们可以执行这些任务。很多时候,由于熟悉命令行,Linux 
管理员更倾向于在命令行下完成这些任务。 + +重启和关闭系统的 Linux 命令并不多,用户需要根据需要,选择合适的命令来完成任务。 + +以下所有命令都有其自身特点,可供 Linux 管理员按需使用。 + +**建议阅读:** + +- [查看系统/服务器正常运行时间的 11 个方法][1] +- [Tuptime 一款为 Linux 系统保存历史记录、统计运行时间工具][2] + +系统重启和关闭之始,会通知所有已登录的用户和进程。当然,如果使用了时间参数,系统将拒绝新的用户登入。 + +执行此类操作之前,我建议你再三复查,因为你只能得到很少的提示来确保这一切顺利。 + +下面陈列了一些步骤: + +* 确保你拥有一个可以处理故障的控制台,以防之后可能会发生的问题。VMWare 可以访问虚拟机,而 IPMI、iLO 和 iDRAC 可以访问物理服务器。 +* 你需要通过公司的流程,申请变更或故障处理的执行权,直到得到许可。 +* 为安全着想,备份重要的配置文件,并保存到其他服务器上。 +* 验证日志文件(提前检查)。 +* 和相关团队交流,比如数据库管理团队、应用团队等。 +* 通知数据库和应用服务人员关闭服务,并得到确定答复。 +* 使用适当的命令复盘操作,验证工作。 +* 最后,重启系统。 +* 验证日志文件,如果一切顺利,执行下一步操作;如果发现任何问题,对症排查。 +* 无论是回退版本还是运行程序,通知相关团队提出申请。 +* 对操作结果进行适当的观察,并将一切正常的结果反馈给相关团队。 + +使用下列命令执行这项任务。 + +* `shutdown`、`halt`、`poweroff`、`reboot` 命令:用来停机、重启或切断电源 +* `init` 命令:是 “initialization” 的简称,是系统启动的第一个进程。 +* `systemctl` 命令:systemd 是 Linux 系统和服务器的管理程序。 + +### 方案 1:如何使用 shutdown 命令关闭和重启 Linux 系统 + +`shutdown` 命令用于断电或重启本地和远程的 Linux 机器。它为高效完成作业提供多个选项。如果使用了时间参数,系统关闭的 5 分钟之前,会创建 `/run/nologin` 文件,以确保后续的登录会被拒绝。 + +通用语法如下: + +``` +# shutdown [OPTION] [TIME] [MESSAGE] +``` + +运行下面的命令来立即关闭 Linux 机器。它会立刻杀死所有进程,并关闭系统。 + +``` +# shutdown -h now +``` + +* `-h`:如果没有同时指定 `--halt` 选项,则等价于 `--poweroff` 选项。 + +另外我们可以使用带有 `--halt` 选项的 `shutdown` 命令来立即停止设备。 + +``` +# shutdown --halt now +或者 +# shutdown -H now +``` + +* `-H, --halt`:停止设备运行。 + +另外我们可以使用带有 `--poweroff` 选项的 `shutdown` 命令来立即关闭设备电源。 + +``` +# shutdown --poweroff now +或者 +# shutdown -P now +``` + +* `-P, --poweroff`:切断电源(默认)。 + +如果你运行下面这条不带时间参数的命令,它将会在一分钟后执行给出的操作。 + +``` +# shutdown -h +Shutdown scheduled for Mon 2018-10-08 06:42:31 EDT, use 'shutdown -c' to cancel. + +root@2daygeek.com# +Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:41:31 EDT): + +The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT! +``` + +其他已登录用户都能在终端上看到如下的广播消息: + +``` +[daygeek@2daygeek.com ~]$ +Broadcast message from root@2daygeek.com (Mon 2018-10-08 06:41:31 EDT): + +The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT!
+``` + +对于使用了 `--halt` 选项的情况: + +``` +# shutdown -H +Shutdown scheduled for Mon 2018-10-08 06:37:53 EDT, use 'shutdown -c' to cancel. + +root@2daygeek.com# +Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:36:53 EDT): + +The system is going down for system halt at Mon 2018-10-08 06:37:53 EDT! +``` + +对于使用了 `--poweroff` 选项的情况: + +``` +# shutdown -P +Shutdown scheduled for Mon 2018-10-08 06:40:07 EDT, use 'shutdown -c' to cancel. + +root@2daygeek.com# +Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:39:07 EDT): + +The system is going down for power-off at Mon 2018-10-08 06:40:07 EDT! +``` + +可以在终端上执行 `shutdown -c` 命令来取消上述操作。 + +``` +# shutdown -c + +Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:39:09 EDT): + +The system shutdown has been cancelled at Mon 2018-10-08 06:40:09 EDT! +``` + +其他已登录用户都能在终端上看到如下的广播消息: + +``` +[daygeek@2daygeek.com ~]$ +Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 06:41:35 EDT): + +The system shutdown has been cancelled at Mon 2018-10-08 06:42:35 EDT! +``` + +如果你想在 N 分钟之后执行关闭或重启操作,可以添加时间参数(`+m` 表示 m 分钟后)。你还可以在这里为所有登录用户添加自定义广播消息。例如,我们将在五分钟后重启设备。 + +``` +# shutdown -r +5 "To activate the latest Kernel" +Shutdown scheduled for Mon 2018-10-08 07:13:16 EDT, use 'shutdown -c' to cancel. + +[root@vps138235 ~]# +Broadcast message from root@vps.2daygeek.com (Mon 2018-10-08 07:08:16 EDT): + +To activate the latest Kernel +The system is going down for reboot at Mon 2018-10-08 07:13:16 EDT!
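+# 补充示例(非原文内容;以下用法为基于 shutdown(8) 手册的假定,适用于 systemd 版的 shutdown): +# 时间参数除了 now 和 +m(m 分钟后)之外,还接受 hh:mm 形式的当天绝对时间,例如: +# shutdown -r 23:30 "To activate the latest Kernel"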
+``` + +运行下面的命令立即重启 Linux 机器。它会立即杀死所有进程并且重新启动系统。 + +``` +# shutdown -r now +``` + +* `-r, --reboot`:重启设备。 + +### 方案 2:如何通过 reboot 命令关闭和重启 Linux 系统 + +`reboot` 命令用于关闭和重启本地或远程设备。`reboot` 命令拥有两个实用的选项。 + +它能够优雅地关闭和重启设备(就好像在系统菜单中点击重启选项一样简单)。 + +执行不带任何参数的 `reboot` 命令来重启 Linux 机器。 + +``` +# reboot +``` + +执行带 `-p` 参数的 `reboot` 命令来关闭 Linux 机器电源。 + +``` +# reboot -p +``` + +* `-p, --poweroff`:调用 `halt` 或 `poweroff` 命令,切断设备电源。 + +执行带 `-f` 参数的 `reboot` 命令来强制重启 Linux 设备(这类似于按压机器上的电源键)。 + +``` +# reboot -f +``` + +* `-f, --force`:立刻强制中断、切断电源或重启。 + +### 方案 3:如何通过 init 命令关闭和重启 Linux 系统 + +`init`(“initialization” 的简写)是系统启动的第一个进程。 + +它会检查 `/etc/inittab` 文件并决定 Linux 的运行级别。同时,它也允许用户在 Linux 设备上执行关机或重启操作。系统中存在从 `0` 到 `6` 共七个运行级别。 + +**建议阅读:** + +- [如何检查 Linux 上所有运行的服务][3] + +执行以下 `init` 命令关闭系统: + +``` +# init 0 +``` + +* `0`:停机 —— 关闭系统。 + +运行下面的 `init` 命令重启设备: + +``` +# init 6 +``` + +* `6`:重启 —— 重启设备。 + +### 方案 4:如何通过 halt 命令关闭和重启 Linux 系统 + +`halt` 命令用来切断电源或关闭远程 Linux 机器或本地主机。它会中断所有进程并停止 CPU。 + +``` +# halt +``` + +### 方案 5:如何通过 poweroff 命令关闭和重启 Linux 系统 + +`poweroff` 命令用来切断电源或关闭远程 Linux 机器或本地主机。`poweroff` 很像 `halt`,但是它还可以关闭设备自身的硬件(灯以及 PC 上的其它部件)。它会给主板发送 ACPI 指令,主板再发信号给电源,切断电源。 + +``` +# poweroff +``` + +### 方案 6:如何通过 systemctl 命令关闭和重启 Linux 系统 + +systemd 是一款适用于所有主流 Linux 发行版的全新 init 系统和系统管理器,用以替代传统的 SysV init 系统。 + +systemd 兼容 SysV 和 LSB 初始化脚本,能够完全替代 SysV init 系统。systemd 是内核启动的第一个进程,其 PID 为 1。 + +**建议阅读:** + +- [chkservice – 一款终端下系统单元管理工具][4] + +它是一切进程的父进程,Fedora 15 是第一个用 systemd 替代 upstart 的发行版。 + +`systemctl` 是命令行下管理 systemd 守护进程和服务的主要工具(如 `start`、`restart`、`stop`、`enable`、`disable`、`reload` 和 `status`)。 + +systemd 使用 `.service` 文件,而不是 SysV init 所使用的 bash 脚本。systemd 将所有守护进程归入各自的 cgroup(控制组)中,你可以查看 `/sys/fs/cgroup/systemd` 目录来浏览该系统层次结构。 + +``` +# systemctl halt +# systemctl poweroff +# systemctl reboot +# systemctl suspend +# systemctl hibernate +``` + +-------------------------------------------------------------------------------- + +via:
https://www.2daygeek.com/6-commands-to-shutdown-halt-poweroff-reboot-the-linux-system/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[cyleft](https://github.com/cyleft) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/ +[2]: https://www.2daygeek.com/tuptime-a-tool-to-report-the-historical-and-statistical-running-time-of-linux-system/ +[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/ +[4]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/ diff --git a/translated/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md b/published/201810/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md similarity index 76% rename from translated/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md rename to published/201810/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md index 46d3dc8885..1a732b0c9f 100644 --- a/translated/tech/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md +++ b/published/201810/20181009 Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool.md @@ -1,6 +1,7 @@ 用这个漂亮的工具将方程式截图迅速转换为 LaTeX ====== -**Mathpix 是一个漂亮的小工具,它允许你截取复杂数学方程式的截图,并立即将其转换为 LaTeX 可编辑文本。** + +> Mathpix 是一个漂亮的小工具,它允许你截取复杂数学方程式的截图,并立即将其转换为 LaTeX 可编辑文本。 ![Mathpix converts math equations images into LaTeX][1] @@ -10,15 +11,11 @@ [Mathpix][3] 是一个在这方面可以帮助你的小工具。 -假设你正在阅读带有数学方程式的文档。如果你想在[LaTeX 文档][4]中使用这些方程,你需要使用你的 LaTeX 技能和有充足的时间。 +假设你正在阅读带有数学方程式的文档。如果你想在 [LaTeX 文档][4]中使用这些方程,你需要使用你的 LaTeX 技能,并且得有充足的时间。 -但是 Mathpix 为您解决了这个问题。使用 Mathpix,你截取数学方程式的截图,它会立即为你提供 LaTeX 代码。然后,你可以在你[最喜欢的 LaTeX 
编辑器][2]中使用此代码。 -请参阅以下视频中的 Mathpix 使用: - - - -[视频来源][5]:Reddit 用户 [kaitlinmcunningham][6] +请参阅[该视频](https://itsfoss.com/wp-content/uploads/2018/10/mathpix.mp4)中的 Mathpix 使用方式。([视频来源][5]:Reddit 用户 [kaitlinmcunningham][6]) 不是超酷吗?我想编写 LaTeX 文档最困难的部分是那些复杂的方程式。对于像我这样的懒人,Mathpix 是天赐之物。 @@ -32,14 +29,13 @@ Mathpix 适用于 Linux、macOS、Windows 和 iOS。暂时还没有 Android 应 ``` sudo snap install mathpix-snipping-tool - ``` -使用 Mathpix 很简单。安装后,打开该工具。你会在顶部面板中找到它。你可以使用键盘快捷键 Ctrl+Alt+M 开始使用 Mathpix 截图。 +使用 Mathpix 很简单。安装后,打开该工具。你会在顶部面板中找到它。你可以使用键盘快捷键 `Ctrl+Alt+M` 开始使用 Mathpix 截图。 它会立即将方程图片转换为 LaTeX 代码。代码将被复制到剪贴板中,然后你可以将其粘贴到 LaTeX 编辑器中。 -Mathpix 的光学字符识别技术[正在被][9]许多公司像 [WolframAlpha][10]、微软、谷歌等公司用于在处理数学符号时提升工具的图像识别能力。 +Mathpix 的光学字符识别技术[正在被][9]像 [WolframAlpha][10]、微软、谷歌等许多公司用于在处理数学符号时提升工具的图像识别能力。 总而言之,它对学生和学者来说是一个很棒的工具。它是免费使用的,我非常希望它是一个开源工具。但我们无法在生活中得到一切,不是么? @@ -52,7 +48,7 @@ via: https://itsfoss.com/mathpix/ 作者:[Abhishek Prakash][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181009 How To Create And Maintain Your Own Man Pages.md b/published/201810/20181009 How To Create And Maintain Your Own Man Pages.md similarity index 100% rename from published/20181009 How To Create And Maintain Your Own Man Pages.md rename to published/201810/20181009 How To Create And Maintain Your Own Man Pages.md diff --git a/published/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/published/201810/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md similarity index 100% rename from published/20181010 Cloc - Count The Lines Of Source Code In Many
Programming Languages.md diff --git a/translated/tech/20181010 Design faster web pages, part 1- Image compression.md b/published/201810/20181010 Design faster web pages, part 1- Image compression.md similarity index 72% rename from translated/tech/20181010 Design faster web pages, part 1- Image compression.md rename to published/201810/20181010 Design faster web pages, part 1- Image compression.md index a34af65920..4a2933e67a 100644 --- a/translated/tech/20181010 Design faster web pages, part 1- Image compression.md +++ b/published/201810/20181010 Design faster web pages, part 1- Image compression.md @@ -1,13 +1,13 @@ -设计更快的网页——第一部分:图片压缩 +设计更快的网页(一):图片压缩 ====== ![](https://fedoramagazine.org/wp-content/uploads/2018/02/fasterwebsites1-816x345.jpg) -很多 Web 开发者都希望做出加载速度很快的网页。在移动设备浏览占比越来越大的背景下,使用响应式设计使得网站在小屏幕下看起来更漂亮只是其中一个方面。Browser Calories 可以展示网页的加载时间——这不单单关系到用户,还会影响到通过加载速度来进行评级的搜索引擎。这个系列的文章介绍了如何使用 Fedora 提供的工具来给网页“瘦身”。 +很多 Web 开发者都希望做出加载速度很快的网页。在移动设备浏览占比越来越大的背景下,使用响应式设计使得网站在小屏幕下看起来更漂亮只是其中一个方面。Browser Calories 可以展示网页的加载时间 —— 这不单单关系到用户,还会影响到通过加载速度来进行评级的搜索引擎。这个系列的文章介绍了如何使用 Fedora 提供的工具来给网页“瘦身”。 ### 准备工作 -在你开始缩减网页之前,你需要明确核心问题所在。为此,你可以使用 [Browserdiet][1]. 这是一个浏览器插件,适用于 Firefox, Opera, Chrome 和其它浏览器。它会对打开的网页进行性能分析,这样你就可以知道应该从哪里入手来缩减网页。 +在你开始缩减网页之前,你需要明确核心问题所在。为此,你可以使用 [Browserdiet][1]. 这是一个浏览器插件,适用于 Firefox、Opera、 Chrome 和其它浏览器。它会对打开的网页进行性能分析,这样你就可以知道应该从哪里入手来缩减网页。 然后,你需要一些用来处理的页面。下面的例子是针对 [getferoda.org][2] 的测试截图。一开始,它看起来非常简单,也符合响应式设计。 @@ -17,43 +17,39 @@ ### Web 优化 -网页中包含 281 KB 的 JavaScript 文件,203 KB 的 CSS 文件,还有 1.2 MB 的图片。我们先从最严重的问题——图片开始入手。为了解决问题,你需要的工具集有 GIMP, ImageMagick 和 optipng. 你可以使用如下命令轻松安装它们: +网页中包含 281 KB 的 JavaScript 文件、203 KB 的 CSS 文件,还有 1.2 MB 的图片。我们先从最严重的问题 —— 图片开始入手。为了解决问题,你需要的工具集有 GIMP、ImageMagick 和 optipng. 
你可以使用如下命令轻松安装它们: ``` sudo dnf install gimp imagemagick optipng - ``` 比如,我们先拿到这个 6.4 KB 的[文件][4]: ![][4] -首先,使用 file 命令来获取这张图片的一些基本信息: +首先,使用 `file` 命令来获取这张图片的一些基本信息: ``` $ file cinnamon.png cinnamon.png: PNG image data, 60 x 60, 8-bit/color RGBA, non-interlaced - ``` 这张只由白色和灰色构成的图片使用 8 位 / RGBA 模式来存储。这种方式并没有那么高效。 -使用 GIMP,你可以为这张图片设置一个更合适的颜色模式。在 GIMP 中打开 cinnamon.png. 然后,在“图片 > 模式”菜单中将其设置为“灰度模式”。将这张图片以 PNG 格式导出。导出时使用压缩因子 9,导出对话框中的其它配置均使用默认选项。 +使用 GIMP,你可以为这张图片设置一个更合适的颜色模式。在 GIMP 中打开 `cinnamon.png`。然后,在“图片 > 模式”菜单中将其设置为“灰度模式”。将这张图片以 PNG 格式导出。导出时使用压缩因子 9,导出对话框中的其它配置均使用默认选项。 ``` $ file cinnamon.png cinnamon.png: PNG image data, 60 x 60, 8-bit gray+alpha, non-interlaced - ``` -输出显示,现在这个文件现在处于 8 位 / 灰阶+aplha 模式。文件大小从 6.4 KB 缩小到了 2.8 KB. 这已经是原来大小的 43.75% 了。但是,我们能做的还有很多! +输出显示,这个文件现在处于 8 位 / 灰阶 + alpha 模式。文件大小从 6.4 KB 缩小到了 2.8 KB,这已经是原来大小的 43.75% 了。但是,我们能做的还有很多! 你可以使用 ImageMagick 工具来查看这张图片的更多信息。 ``` $ identify cinnamon2.png cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2831B 0.000u 0:00.000 - ``` 它告诉你,这个文件的大小为 2831 字节。我们回到 GIMP,重新导出文件。在导出对话框中,取消存储时间戳和 alpha 通道色值,来让文件更小一点。现在文件输出显示: @@ -61,12 +57,11 @@ cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2831B 0.000u 0:00.000 ``` $ identify cinnamon.png cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2798B 0.000u 0:00.000 - ``` -下面,用 optipng 来无损优化你的 PNG 图片。具有相似功能的工具有很多,包括 **advdef**(这是 advancecomp 的一部分),**pngquant** 和 **pngcrush**。 +下面,用 `optipng` 来无损优化你的 PNG 图片。具有相似功能的工具有很多,包括 `advdef`(这是 advancecomp 的一部分)、`pngquant` 和 `pngcrush`。 -对你的文件运行 optipng.
注意,这个操作会覆盖你的原文件: +对你的文件运行 `optipng`。 注意,这个操作会覆盖你的原文件: ``` $ optipng -o7 cinnamon.png @@ -85,25 +80,22 @@ Selecting parameters: Output IDAT size = 1920 bytes (800 bytes decrease) Output file size = 2012 bytes (800 bytes = 28.45% decrease) - ``` --o7 选项处理起来最慢,但最终效果最好。于是你又将文件缩小了 800 字节,现在它只有 2012 字节了。 +`-o7` 选项处理起来最慢,但最终效果最好。于是你又将文件缩小了 800 字节,现在它只有 2012 字节了。 要压缩文件夹下的所有 PNG,可以使用这个命令: ``` $ optipng -o7 -dir= *.png - ``` --dir 选项用来指定输出文件夹。如果不加这个选项,optipng 会覆盖原文件。 +`-dir` 选项用来指定输出文件夹。如果不加这个选项,`optipng` 会覆盖原文件。 ### 选择正确的文件格式 当涉及到在互联网中使用的图片时,你可以选择: - + [JPG 或 JPEG][9] + [GIF][10] + [PNG][11] @@ -112,27 +104,24 @@ $ optipng -o7 -dir= *.png + [JPG 2000 或 JP2][14] + [SVG][15] - JPG-LS 和 JPG 2000 没有得到广泛使用。只有一部分数码相机支持这些格式,所以我们可以忽略它们。aPNG 是动态的 PNG 格式,也没有广泛使用。 -可以通过更改压缩率或者使用其它文件格式来节省下更多字节。我们无法在 GIMP 中应用第一种方法,因为现在的图片已经使用了最高的压缩率了。因为我们的图片中不再包含 [aplha 通道][5],你可以使用 JPG 类型来替代 PNG. 现在,使用默认值:90% 质量——你可以将它减小至 85%,但这样会导致可见的叠影。这样又省下一些字节: +可以通过更改压缩率或者使用其它文件格式来节省下更多字节。我们无法在 GIMP 中应用第一种方法,因为现在的图片已经使用了最高的压缩率了。因为我们的图片中不再包含 [aplha 通道][5],你可以使用 JPG 类型来替代 PNG。 现在,使用默认值:90% 质量 —— 你可以将它减小至 85%,但这样会导致可见的叠影。这样又省下一些字节: ``` $ identify cinnamon.jpg cinnamon.jpg JPEG 60x60 60x60+0+0 8-bit sRGB 2676B 0.000u 0:00.000 - ``` 只将这张图转成正确的色域,并使用 JPG 作为文件格式,就可以将它从 23 KB 缩小到 12.3 KB,减少了近 50%. - #### PNG vs JPG: 质量和压缩率 那么,剩下的文件我们要怎么办呢?除了 Fedora “风味”图标和四个特性图标之外,此方法适用于所有其他图片。我们能够处理的图片都有一个白色的背景。 PNG 和 JPG 的一个主要区别在于,JPG 没有 alpha 通道。所以,它没有透明度选项。如果你使用 JPG 并为它添加白色背景,你可以将文件从 40.7 KB 缩小至 28.3 KB. -现在又有了四个可以处理的图片:背景图。对于灰色背景,你可以再次使用灰阶模式。对更大的图片,我们就可以节省下更多的空间。它从 216.2 KB 缩小到了 51 KB——基本上只有原图的 25% 了。整体下来,你把这些图片从 481.1 KB 缩小到了 191.5 KB——只有一开始的 39.8%. +现在又有了四个可以处理的图片:背景图。对于灰色背景,你可以再次使用灰阶模式。对更大的图片,我们就可以节省下更多的空间。它从 216.2 KB 缩小到了 51 KB —— 基本上只有原图的 25% 了。整体下来,你把这些图片从 481.1 KB 缩小到了 191.5 KB —— 只有一开始的 39.8%. #### 质量 vs 大小 @@ -144,7 +133,7 @@ PNG 和 JPG 的另外一个区别在于质量。PNG 是一种无损压缩光栅 ![][6] -你将一开始 1.2 MB 的图片体积缩小到了 488.9 KB. 只需通过 optipng 进行优化,就可以达到之前体积的三分之一。这可能使得页面更快地加载。不过,要是使用蜗牛到超音速来对比,这个速度还没到达赛车的速度呢! +你将一开始 1.2 MB 的图片体积缩小到了 488.9 KB. 
只需通过 `optipng` 进行优化,就可以达到之前体积的三分之一。这可能使得页面更快地加载。不过,要是使用蜗牛到超音速来对比,这个速度还没到达赛车的速度呢! 最后,你可以在 [Google Insights][7] 中查看结果,例如: @@ -160,7 +149,7 @@ via: https://fedoramagazine.org/design-faster-web-pages-part-1-image-compression 作者:[Sirko Kemter][a] 选题:[lujun9972][b] 译者:[StdioA](https://github.com/StdioA) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201810/20181010 How To List The Enabled-Active Repositories In Linux.md b/published/201810/20181010 How To List The Enabled-Active Repositories In Linux.md new file mode 100644 index 0000000000..727f1f7c54 --- /dev/null +++ b/published/201810/20181010 How To List The Enabled-Active Repositories In Linux.md @@ -0,0 +1,278 @@ +如何列出在 Linux 上已启用/激活的仓库 +====== + +有很多方法可以列出在 Linux 已启用的仓库。我们将在下面展示给你列出已激活仓库的简便方法。这有助于你知晓你的系统上都启用了哪些仓库。一旦你掌握了这些信息,你就可以添加任何之前还没有准备启用的仓库了。 + +举个例子,如果你想启用 epel 仓库,你需要先检查它是否已经启用了。这篇教程将会帮助你做这件事情。 + +### 什么是仓库? 
+
+存储特定程序软件包的中枢位置就是一个软件仓库。
+
+所有的 Linux 发行版都在维护自己的仓库,而且允许用户下载并安装这些软件包到他们的机器上。
+
+每个仓库提供者都提供了一套包管理工具,用以管理他们的仓库,比如搜索、安装、更新、升级、移除等等。
+
+大多数 Linux 发行版都是免费软件,但 RHEL 和 SUSE 例外,要访问它们的仓库你需要先购买订阅。
+
+**建议阅读:**
+
+- [在 Linux 上,如何通过 DNF/YUM 设置管理命令添加、启用、关闭一个仓库][1]
+- [在 Linux 上如何按大小列出已安装的包][2]
+- [在 Linux 上如何列出升级的包][3]
+- [在 Linux 上如何查看一个特定包安装/升级/更新/移除/清除的日期][4]
+- [在 Linux 上如何查看一个包的详细信息][5]
+- [在你的 Linux 发行版上如何查看一个包是否可用][6]
+- [在 Linux 如何列出可用的软件包组][7]
+- [Newbies corner —— 一个图形化的 Linux 包管理的前端工具][8]
+- [Linux 专家须知,命令行包管理 & 使用列表][9]
+
+### 在 RHEL/CentOS 上列出已启用的仓库
+
+RHEL 和 CentOS 系统使用的是 RPM 包管理,所以我们可以使用 Yum 包管理器查看这些信息。
+
+YUM 意即 “Yellowdog Updater,Modified”,它是一个开源的包管理器的命令行前端,用于基于 RPM 的系统上,例如 RHEL 和 CentOS。
+
+YUM 是获取、安装、删除、查询和管理来自发行版仓库和其他第三方库的 RPM 包的主要工具。
+
+**建议阅读:** [在 RHEL/CentOS 系统上用 YUM 命令管理包][10]
+
+基于 RHEL 的系统主要提供以下三个仓库。这些仓库是默认启用的。
+
+* **base**:它包含了所有的核心包和基础包。
+* **extras**:它向 CentOS 提供了不破坏上游兼容性或更新基本组件的额外功能。这是一个上游仓库,还有额外的 CentOS 包。
+* **updates**:它提供了 bug 修复包、安全包和增强包。
+
+```
+# yum repolist
+或者
+# yum repolist enabled
+```
+
+```
+Loaded plugins: fastestmirror
+Determining fastest mirrors
+ * epel: ewr.edge.kernel.org
+repo id repo name status
+!base/7/x86_64 CentOS-7 - Base 9,911
+!epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 12,687
+!extras/7/x86_64 CentOS-7 - Extras 403
+!updates/7/x86_64 CentOS-7 - Updates 1,348
+repolist: 24,349
+```
+
+### 如何列出 Fedora 上已启用的仓库
+
+DNF 意即 “Dandified yum”。我们可以说 DNF 是下一代的 yum 包管理器,使用了 hawkey/libsolv 作为后端。自从 Fedora 18 开始,Aleš Kozumplík 就开始开发 DNF,最终在 Fedora 22 上实现/发布。
+
+Fedora 22 及之后的系统上都使用 DNF 安装、升级、搜索和移除包。它可以自动解决依赖问题,并使包的安装过程平顺没有任何麻烦。
+
+因为 Yum 许多长时间未解决的问题,现在 Yum 已经被 DNF 所替代。你可能会问,为什么不直接给 Yum 打补丁呢?Aleš Kozumplík 解释说修补在技术上太困难了,而 YUM 团队无法立即承受这些变更,还有其他的问题:YUM 是 56k 行代码,而 DNF 是 29k 行代码。因此,除了分叉之外,别无选择。
+
+**建议阅读:** [在 Fedora 上使用 DNF 管理软件][11]
+
+Fedora 主要提供下面两个仓库。这些仓库默认启用。
+
+* **fedora**:它包括所有的核心包和基础包。
+* **updates**:它提供了来自稳定发行版的 bug 修复包、安全包和增强包。
+
+```
+# dnf repolist
+或者
+# dnf repolist enabled
+```
+
+```
+Last metadata
expiration check: 0:02:56 ago on Wed 10 Oct 2018 06:12:22 PM IST. +repo id repo name status +docker-ce-stable Docker CE Stable - x86_64 6 +*fedora Fedora 26 - x86_64 53,912 +home_mhogomchungu mhogomchungu's Home Project (Fedora_25) 19 +home_moritzmolch_gencfsm Gnome Encfs Manager (Fedora_25) 5 +mystro256-gnome-redshift Copr repo for gnome-redshift owned by mystro256 6 +nodesource Node.js Packages for Fedora Linux 26 - x86_64 83 +rabiny-albert Copr repo for albert owned by rabiny 3 +*rpmfusion-free RPM Fusion for Fedora 26 - Free 536 +*rpmfusion-free-updates RPM Fusion for Fedora 26 - Free - Updates 278 +*rpmfusion-nonfree RPM Fusion for Fedora 26 - Nonfree 202 +*rpmfusion-nonfree-updates RPM Fusion for Fedora 26 - Nonfree - Updates 95 +*updates Fedora 26 - x86_64 - Updates +``` + +### 如何列出 Debian/Ubuntu 上已启用的仓库 + +基于 Debian 的系统使用的是 APT/APT-GET 包管理,因此我们可以使用 APT/APT-GET 包管理器去获取该信息。 + +APT 意即 “Advanced Packaging Tool”,它取代了 `apt-get`,就像 DNF 取代 Yum 一样。 它具有丰富的命令行工具,在一个命令(`apt`)中包含了所有功能,如 `apt-cache`、`apt-search`、`dpkg`、`apt-cdrom`、`apt-config`、`apt-key` 等,还有其他几个独特的功能。 例如,我们可以通过 APT 轻松安装 .dpkg 软件包,而我们无法通过 APT-GET 获得和包含在 APT 命令中类似的功能。 由于 APT-GET 中未能解决的问题,APT 取代了 APT-GET。 + +apt-get 是一个强大的命令行工具,它用以自动下载和安装新的软件包、升级已存在的软件包、更新包索引列表、还有升级整个基于 Debian 的系统。 + +``` +# apt-cache policy +Package files: + 100 /var/lib/dpkg/status + release a=now + 500 http://ppa.launchpad.net/peek-developers/stable/ubuntu artful/main amd64 Packages + release v=17.10,o=LP-PPA-peek-developers-stable,a=artful,n=artful,l=Peek stable releases,c=main,b=amd64 + origin ppa.launchpad.net + 500 http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful/main amd64 Packages + release v=17.10,o=LP-PPA-notepadqq-team-notepadqq,a=artful,n=artful,l=Notepadqq,c=main,b=amd64 + origin ppa.launchpad.net + 500 http://dl.google.com/linux/chrome/deb stable/main amd64 Packages + release v=1.0,o=Google, Inc.,a=stable,n=stable,l=Google,c=main,b=amd64 + origin dl.google.com + 500 https://download.docker.com/linux/ubuntu 
artful/stable amd64 Packages + release o=Docker,a=artful,l=Docker CE,c=stable,b=amd64 + origin download.docker.com + 500 http://security.ubuntu.com/ubuntu artful-security/multiverse amd64 Packages + release v=17.10,o=Ubuntu,a=artful-security,n=artful,l=Ubuntu,c=multiverse,b=amd64 + origin security.ubuntu.com + 500 http://security.ubuntu.com/ubuntu artful-security/universe amd64 Packages + release v=17.10,o=Ubuntu,a=artful-security,n=artful,l=Ubuntu,c=universe,b=amd64 + origin security.ubuntu.com + 500 http://security.ubuntu.com/ubuntu artful-security/restricted i386 Packages + release v=17.10,o=Ubuntu,a=artful-security,n=artful,l=Ubuntu,c=restricted,b=i386 + origin security.ubuntu.com +. +. + origin in.archive.ubuntu.com + 500 http://in.archive.ubuntu.com/ubuntu artful/restricted amd64 Packages + release v=17.10,o=Ubuntu,a=artful,n=artful,l=Ubuntu,c=restricted,b=amd64 + origin in.archive.ubuntu.com + 500 http://in.archive.ubuntu.com/ubuntu artful/main i386 Packages + release v=17.10,o=Ubuntu,a=artful,n=artful,l=Ubuntu,c=main,b=i386 + origin in.archive.ubuntu.com + 500 http://in.archive.ubuntu.com/ubuntu artful/main amd64 Packages + release v=17.10,o=Ubuntu,a=artful,n=artful,l=Ubuntu,c=main,b=amd64 + origin in.archive.ubuntu.com +Pinned packages: + +``` + +### 如何在 openSUSE 上列出已启用的仓库 + +openSUSE 使用 zypper 包管理,因此我们可以使用 zypper 包管理获得更多信息。 + +Zypper 是 suse 和 openSUSE 发行版的命令行包管理。它用于安装、更新、搜索、移除包和管理仓库,执行各种查询等。Zypper 以 ZYpp 系统管理库(libzypp)作为后端。 + +**建议阅读:** [在 openSUSE 和 suse 系统上使用 Zypper 命令管理包][12] + +``` +# zypper repos + +# | Alias | Name | Enabled | GPG Check | Refresh +--+-----------------------+-----------------------------------------------------+---------+-----------+-------- +1 | packman-repository | packman-repository | Yes | (r ) Yes | Yes +2 | google-chrome | google-chrome | Yes | (r ) Yes | Yes +3 | home_lazka0_ql-stable | Stable Quod Libet / Ex Falso Builds (openSUSE_42.1) | Yes | (r ) Yes | No +4 | repo-non-oss | openSUSE-leap/42.1-Non-Oss | Yes | (r ) Yes | 
Yes +5 | repo-oss | openSUSE-leap/42.1-Oss | Yes | (r ) Yes | Yes +6 | repo-update | openSUSE-42.1-Update | Yes | (r ) Yes | Yes +7 | repo-update-non-oss | openSUSE-42.1-Update-Non-Oss | Yes | (r ) Yes | Yes +``` + +列出仓库及 URI。 + +``` +# zypper lr -u + +# | Alias | Name | Enabled | GPG Check | Refresh | URI +--+-----------------------+-----------------------------------------------------+---------+-----------+---------+--------------------------------------------------------------------------------- +1 | packman-repository | packman-repository | Yes | (r ) Yes | Yes | http://ftp.gwdg.de/pub/linux/packman/suse/openSUSE_Leap_42.1/ +2 | google-chrome | google-chrome | Yes | (r ) Yes | Yes | http://dl.google.com/linux/chrome/rpm/stable/x86_64 +3 | home_lazka0_ql-stable | Stable Quod Libet / Ex Falso Builds (openSUSE_42.1) | Yes | (r ) Yes | No | http://download.opensuse.org/repositories/home:/lazka0:/ql-stable/openSUSE_42.1/ +4 | repo-non-oss | openSUSE-leap/42.1-Non-Oss | Yes | (r ) Yes | Yes | http://download.opensuse.org/distribution/leap/42.1/repo/non-oss/ +5 | repo-oss | openSUSE-leap/42.1-Oss | Yes | (r ) Yes | Yes | http://download.opensuse.org/distribution/leap/42.1/repo/oss/ +6 | repo-update | openSUSE-42.1-Update | Yes | (r ) Yes | Yes | http://download.opensuse.org/update/leap/42.1/oss/ +7 | repo-update-non-oss | openSUSE-42.1-Update-Non-Oss | Yes | (r ) Yes | Yes | http://download.opensuse.org/update/leap/42.1/non-oss/ +``` + +通过优先级列出仓库。 + +``` +# zypper lr -p + +# | Alias | Name | Enabled | GPG Check | Refresh | Priority +--+-----------------------+-----------------------------------------------------+---------+-----------+---------+--------- +1 | packman-repository | packman-repository | Yes | (r ) Yes | Yes | 99 +2 | google-chrome | google-chrome | Yes | (r ) Yes | Yes | 99 +3 | home_lazka0_ql-stable | Stable Quod Libet / Ex Falso Builds (openSUSE_42.1) | Yes | (r ) Yes | No | 99 +4 | repo-non-oss | openSUSE-leap/42.1-Non-Oss | Yes | (r ) Yes | Yes | 99 
+5 | repo-oss | openSUSE-leap/42.1-Oss | Yes | (r ) Yes | Yes | 99
+6 | repo-update | openSUSE-42.1-Update | Yes | (r ) Yes | Yes | 99
+7 | repo-update-non-oss | openSUSE-42.1-Update-Non-Oss | Yes | (r ) Yes | Yes | 99
+```
+
+### 如何列出 Arch Linux 上已启用的仓库
+
+基于 Arch Linux 的系统使用 pacman 包管理器,因此我们可以使用 pacman 包管理器获取这些信息。
+
+pacman 意即 “package manager utility”。pacman 是一个命令行实用程序,用以安装、构建、移除和管理 Arch Linux 包。pacman 使用 libalpm(Arch Linux 包管理库)作为后端去进行这些操作。
+
+**建议阅读:** [在基于 Arch Linux 的系统上使用 Pacman 命令管理包][13]
+
+```
+# pacman -Syy
+:: Synchronizing package databases...
+ core 132.6 KiB 1524K/s 00:00 [############################################] 100%
+ extra 1859.0 KiB 750K/s 00:02 [############################################] 100%
+ community 3.5 MiB 149K/s 00:24 [############################################] 100%
+ multilib 182.7 KiB 1363K/s 00:00 [############################################] 100%
+```
+
+### 如何使用 INXI 工具列出 Linux 上已启用的仓库
+
+inxi 是 Linux 上检查硬件信息非常有用的工具,还提供很多的选项去获取 Linux 上的所有硬件信息,我从未在 Linux 上发现其他有如此效用的程序。它由 locsmif 分叉自古老而古怪的 infobash。
+
+inxi 是一个可以快速显示硬件信息、CPU、硬盘、Xorg、桌面、内核、GCC 版本、进程、内存使用和很多其他有用信息的程序,还可用于论坛技术支持和调试。
+
+这个实用程序将会显示所有发行版仓库的数据信息,例如 RHEL、CentOS、Fedora、Debian、Ubuntu、LinuxMint、ArchLinux、openSUSE、Manjaro 等。
+
+**建议阅读:** [inxi – 一个在 Linux 上检查硬件信息的好工具][14]
+
+```
+# inxi -r
+Repos: Active apt sources in file: /etc/apt/sources.list
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety main restricted
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates main restricted
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety universe
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates universe
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety multiverse
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates multiverse
+ deb http://in.archive.ubuntu.com/ubuntu/ yakkety-backports main restricted universe multiverse
+ deb http://security.ubuntu.com/ubuntu yakkety-security main restricted
+ deb http://security.ubuntu.com/ubuntu yakkety-security
universe + deb http://security.ubuntu.com/ubuntu yakkety-security multiverse + Active apt sources in file: /etc/apt/sources.list.d/arc-theme.list + deb http://download.opensuse.org/repositories/home:/Horst3180/xUbuntu_16.04/ / + Active apt sources in file: /etc/apt/sources.list.d/snwh-ubuntu-pulp-yakkety.list + deb http://ppa.launchpad.net/snwh/pulp/ubuntu yakkety main +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-list-the-enabled-active-repositories-in-linux/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/how-to-add-enable-disable-a-repository-dnf-yum-config-manager-on-linux/ +[2]: https://www.2daygeek.com/how-to-list-installed-packages-by-size-largest-on-linux/ +[3]: https://www.2daygeek.com/how-to-view-list-the-available-packages-updates-in-linux/ +[4]: https://www.2daygeek.com/how-to-view-a-particular-package-installed-updated-upgraded-removed-erased-date-on-linux/ +[5]: https://www.2daygeek.com/how-to-view-detailed-information-about-a-package-in-linux/ +[6]: https://www.2daygeek.com/how-to-search-if-a-package-is-available-on-your-linux-distribution-or-not/ +[7]: https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/ +[8]: https://www.2daygeek.com/list-of-graphical-frontend-tool-for-linux-package-manager/ +[9]: https://www.2daygeek.com/list-of-command-line-package-manager-for-linux/ +[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[11]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[12]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[13]: 
https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[14]: https://www.2daygeek.com/inxi-system-hardware-information-on-linux/ diff --git a/published/20181011 A Front-end For Popular Package Managers.md b/published/201810/20181011 A Front-end For Popular Package Managers.md similarity index 100% rename from published/20181011 A Front-end For Popular Package Managers.md rename to published/201810/20181011 A Front-end For Popular Package Managers.md diff --git a/published/20181011 Getting started with Minikube- Kubernetes on your laptop.md b/published/201810/20181011 Getting started with Minikube- Kubernetes on your laptop.md similarity index 100% rename from published/20181011 Getting started with Minikube- Kubernetes on your laptop.md rename to published/201810/20181011 Getting started with Minikube- Kubernetes on your laptop.md diff --git a/published/20181012 Command line quick tips- Reading files different ways.md b/published/201810/20181012 Command line quick tips- Reading files different ways.md similarity index 100% rename from published/20181012 Command line quick tips- Reading files different ways.md rename to published/201810/20181012 Command line quick tips- Reading files different ways.md diff --git a/published/20181012 Happy birthday, KDE- 11 applications you never knew existed.md b/published/201810/20181012 Happy birthday, KDE- 11 applications you never knew existed.md similarity index 100% rename from published/20181012 Happy birthday, KDE- 11 applications you never knew existed.md rename to published/201810/20181012 Happy birthday, KDE- 11 applications you never knew existed.md diff --git a/translated/tech/20181012 How To Lock Virtual Console Sessions On Linux.md b/published/201810/20181012 How To Lock Virtual Console Sessions On Linux.md similarity index 53% rename from translated/tech/20181012 How To Lock Virtual Console Sessions On Linux.md rename to published/201810/20181012 How To Lock Virtual Console Sessions On 
Linux.md index 5eb290442f..acfea7e185 100644 --- a/translated/tech/20181012 How To Lock Virtual Console Sessions On Linux.md +++ b/published/201810/20181012 How To Lock Virtual Console Sessions On Linux.md @@ -3,7 +3,7 @@ ![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-720x340.png) -当你在共享系统上工作时,你可能不希望其他用户在你的控制台中悄悄地看你在做什么。如果是这样,我知道有个简单的技巧来锁定自己的会话,同时仍然允许其他用户在其他虚拟控制台上使用该系统。要感谢 **Vlock**(**V** irtual Console **lock**),这是一个命令行程序,用于锁定 Linux 控制台上的一个或多个会话。如有必要,你可以锁定整个控制台并完全禁用虚拟控制台切换功能。Vlock 对于有多个用户访问控制台的共享 Linux 系统特别有用。 +当你在共享的系统上工作时,你可能不希望其他用户偷窥你的控制台中看你在做什么。如果是这样,我知道有个简单的技巧来锁定自己的会话,同时仍然允许其他用户在其他虚拟控制台上使用该系统。要感谢 **Vlock**(**V**irtual Console **lock**),这是一个命令行程序,用于锁定 Linux 控制台上的一个或多个会话。如有必要,你可以锁定整个控制台并完全禁用虚拟控制台切换功能。Vlock 对于有多个用户访问控制台的共享 Linux 系统特别有用。 ### 安装 Vlock @@ -12,96 +12,94 @@ 在 Debian、Ubuntu、Linux Mint 上,运行以下命令来安装 Vlock: ``` - $ sudo apt-get install vlock +$ sudo apt-get install vlock ``` 在 Fedora 上: ``` - $ sudo dnf install vlock +$ sudo dnf install vlock ``` 在 RHEL、CentOS 上: ``` - $ sudo yum install vlock +$ sudo yum install vlock ``` -### 在Linux上锁定虚拟控制台会话 +### 在 Linux 上锁定虚拟控制台会话 Vlock 的一般语法是: ``` - vlock [ -acnshv ] [ -t ] [ plugins... ] +vlock [ -acnshv ] [ -t ] [ plugins... ] ``` 这里: - * **a** – 锁定所有虚拟控制台会话, - * **c** – 锁定当前虚拟控制台会话, - * **n** – 在锁定所有会话之前切换到新的空控制台, - * **s** – 禁用 SysRq 键机制, - * **t** – 指定屏保插件的超时时间, - * **h** – 显示帮助, - * **v** – 显示版本。 - - +* `a` —— 锁定所有虚拟控制台会话, +* `c` —— 锁定当前虚拟控制台会话, +* `n` —— 在锁定所有会话之前切换到新的空控制台, +* `s` —— 禁用 SysRq 键机制, +* `t` —— 指定屏保插件的超时时间, +* `h` —— 显示帮助, +* `v` —— 显示版本。 让我举几个例子。 -**1\. 
锁定当前控制台会话** +#### 1、 锁定当前控制台会话 在没有任何参数的情况下运行 Vlock 时,它默认锁定当前控制台会话 (TYY)。要解锁会话,你需要输入当前用户的密码或 root 密码。 ``` - $ vlock +$ vlock ``` ![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-1-1.gif) -你还可以使用 **-c** 标志来锁定当前的控制台会话。 +你还可以使用 `-c` 标志来锁定当前的控制台会话。 ``` - $ vlock -c +$ vlock -c ``` -请注意,此命令仅锁定当前控制台。你可以按 **ALT+F2** 切换到其他控制台。有关在 TTY 之间切换的更多详细信息,请参阅以下指南。 +请注意,此命令仅锁定当前控制台。你可以按 `ALT+F2` 切换到其他控制台。有关在 TTY 之间切换的更多详细信息,请参阅以下指南。 此外,如果系统有多个用户,则其他用户仍可以访问其各自的 TTY。 -**2\. 锁定所有控制台会话** +#### 2、 锁定所有控制台会话 要同时锁定所有 TTY 并禁用虚拟控制台切换功能,请运行: ``` - $ vlock -a +$ vlock -a ``` 同样,要解锁控制台会话,只需按下回车键并输入当前用户的密码或 root 用户密码。 请记住,**root 用户可以随时解锁任何 vlock 会话**,除非在编译时禁用。 -**3\. 在锁定所有控制台之前切换到新的虚拟控制台** +#### 3、 在锁定所有控制台之前切换到新的虚拟控制台 -在锁定所有控制台之前,还可以使 Vlock 从 X 会话切换到新的空虚拟控制台。为此,请使用 **-n** 标志。 +在锁定所有控制台之前,还可以使 Vlock 从 X 会话切换到新的空虚拟控制台。为此,请使用 `-n` 标志。 ``` - $ vlock -n +$ vlock -n ``` -**4\. 禁用 SysRq 机制** +#### 4、 禁用 SysRq 机制 -你也许知道,魔术 SysRq 键机制允许用户在系统死机时执行某些操作。因此,用户可以使用 SysRq 解锁控制台。为了防止这种情况,请传递 **-s** 选项以禁用 SysRq 机制。请记住,这只适用于有 **-a** 选项的时候。 +你也许知道,魔术 SysRq 键机制允许用户在系统死机时执行某些操作。因此,用户可以使用 SysRq 解锁控制台。为了防止这种情况,请传递 `-s` 选项以禁用 SysRq 机制。请记住,这个选项只适用于有 `-a` 选项的时候。 ``` - $ vlock -sa +$ vlock -sa ``` 有关更多选项及其用法,请参阅帮助或手册页。 ``` - $ vlock -h - $ man vlock +$ vlock -h +$ man vlock ``` Vlock 可防止未经授权的用户获得控制台访问权限。如果你在为 Linux 寻找一个简单的控制台锁定机制,那么 Vlock 值得一试! @@ -111,7 +109,6 @@ Vlock 可防止未经授权的用户获得控制台访问权限。如果你在 干杯! 
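顺带一提,上面的几个选项也可以组合进一个小脚本里。下面是一个假设性的示例(其中的函数名 `lockall` 和 `DRY_RUN` 变量均为本文为演示而虚构,并非 vlock 自带的功能):

```shell
# lockall:锁定所有虚拟控制台的小封装;传入 -s 时同时禁用 SysRq。
# 注意:vlock 需要在真实的虚拟控制台(TTY)上运行。
lockall() {
    opts="-a"                     # 默认:锁定所有虚拟控制台
    if [ "$1" = "-s" ]; then
        opts="-sa"                # -s 只有与 -a 一起使用时才生效
    fi

    if [ -n "$DRY_RUN" ]; then    # 演示用开关:只打印将要执行的命令
        echo "vlock $opts"
        return 0
    fi

    if ! command -v vlock >/dev/null 2>&1; then
        echo "vlock 未安装,请先按上文的方法安装" >&2
        return 1
    fi

    vlock "$opts"
}
```

调用 `lockall` 相当于运行 `vlock -a`,而 `lockall -s` 相当于 `vlock -sa`。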
- -------------------------------------------------------------------------------- via: https://www.ostechnix.com/how-to-lock-virtual-console-sessions-on-linux/ @@ -119,7 +116,7 @@ via: https://www.ostechnix.com/how-to-lock-virtual-console-sessions-on-linux/ 作者:[SK][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181013 How to Install GRUB on Arch Linux (UEFI).md b/published/201810/20181013 How to Install GRUB on Arch Linux (UEFI).md similarity index 100% rename from published/20181013 How to Install GRUB on Arch Linux (UEFI).md rename to published/201810/20181013 How to Install GRUB on Arch Linux (UEFI).md diff --git a/published/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md b/published/201810/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md similarity index 100% rename from published/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md rename to published/201810/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md diff --git a/published/20181015 Running Linux containers as a non-root with Podman.md b/published/201810/20181015 Running Linux containers as a non-root with Podman.md similarity index 100% rename from published/20181015 Running Linux containers as a non-root with Podman.md rename to published/201810/20181015 Running Linux containers as a non-root with Podman.md diff --git a/published/20181016 Turn Your Old PC into a Retrogaming Console with Lakka Linux.md b/published/201810/20181016 Turn Your Old PC into a Retrogaming Console with Lakka Linux.md similarity index 100% rename from published/20181016 Turn Your Old PC into a Retrogaming Console with Lakka Linux.md rename to published/201810/20181016 Turn Your Old PC into a Retrogaming Console with Lakka Linux.md diff --git 
a/published/201810/20181018 MidnightBSD Hits 1.0- Checkout What-s New.md b/published/201810/20181018 MidnightBSD Hits 1.0- Checkout What-s New.md new file mode 100644 index 0000000000..4ff1d767e4 --- /dev/null +++ b/published/201810/20181018 MidnightBSD Hits 1.0- Checkout What-s New.md @@ -0,0 +1,71 @@ +MidnightBSD 发布 1.0! +====== + +几天前,Lucas Holt 宣布发布 MidnightBSD 1.0。让我们快速看一下这个新版本中包含的内容。 + +### 什么是 MidnightBSD? + +![MidnightBSD][1] + +[MidnightBSD][2] 是 FreeBSD 的一个分支。Lucas 创建了 MightnightBSD,这成为桌面用户和 BSD 新手的一个选择。他想创造一个能让人们快速体验 BSD 桌面的东西。他认为其他发行版过于关注服务器市场。 + +### MidnightBSD 1.0 中有什么? + +根据[发布说明][3]([视频](https://www.youtube.com/embed/-rlk2wFsjJ4)),1.0 中的大部分工作都是更新基础系统,改进包管理器和更新工具。新版本与 FreeBSD 10-Stable 兼容。 + +Mports(MidnightBSD 的包管理系统)已经升级支持使用一个命令安装多个包。`mport upgrade` 命令已经修复。Mports 现在会跟踪已弃用和过期的包。它还引入了新的包格式。 + +其他变化包括: + + * 现在支持 [ZFS][4] 作为启动文件系统。以前,ZFS 只能用于附加存储。 +  * 支持 NVME SSD。 +  * AMD Ryzen 和 Radeon 的支持得到了改善。 +  * Intel、Broadcom 和其他驱动程序已更新。 +  * 已从 FreeBSD 移植 bhyve 支持。 +  * 传感器框架已被删除,因为它导致锁定问题。 +  * 删除了 Sudo 并用 OpenBSD 中的 [doas][5] 替换。 +  * 增加了对 Microsoft hyper-v 的支持。 + +### 升级之前 + +如果你当前是 MidnightBSD 的用户或正在考虑尝试新版本,那么还是再等一会。Lucas 目前正在重建软件包以支持新的软件包格式和工具。他还计划在未来几个月内升级软件包和移植桌面环境。他目前正致力于移植 Firefox 52 ESR,因为它是最后一个不需要 Rust 的版本。他还希望将更新版本的 Chromium 移植到 MidnightBSD。我建议关注 MidnightBSD 的 [Twitter][6]。 + +### 0.9 怎么回事? 
+
+你可能注意到 MidnightBSD 的先前版本是 0.8.6。你现在可能想知道“为什么跳到 1.0”?根据 Lucas 的说法,他在开发 0.9 时遇到了几个问题。事实上,他重试了好几次。他最终采用了与 0.9 分支不同的方式,其成果就成了 1.0。有些软件包在 0.* 系列上也有问题。
+
+### 需要帮助
+
+目前,MidnightBSD 项目几乎是 Lucas Holt 一个人的作品。这是其发展缓慢的主要原因。如果你有兴趣帮忙,可以通过 [Twitter][6] 与他联系。
+
+在[发布公告视频][7]中,Lucas 说他遇到了上游项目接受补丁的问题。他们似乎认为 MidnightBSD 太小了。这通常意味着他必须从头开始移植应用。
+
+### 想法
+
+我向来偏爱处于弱势的项目,而在我接触过的所有 BSD 中,这个称呼最适合 MidnightBSD:它是一个人想要打造轻松桌面体验的努力。目前只有另一个 BSD 在尝试做类似的事情:Project Trident。我想这正是 BSD 取得成功的真正阻碍。Linux 成功是因为人们可以快速容易地安装它。希望 MidnightBSD 能为 BSD 做到这一点,但是还有很长的路要走。
+
+你有没有用过 MidnightBSD?如果没有,你最喜欢的 BSD 是什么?我们应该涵盖哪些其他 BSD 主题?请在下面的评论中告诉我们。
+
+如果你觉得这篇文章有趣,请花一点时间在社交媒体、Hacker News 或 [Reddit][8] 上分享它。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/midnightbsd-1-0-release/
+
+作者:[John Paul][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/john/
+[b]: https://github.com/lujun9972
+[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/midnightbsd-wallpaper.jpeg
+[2]: https://www.midnightbsd.org/
+[3]: https://www.midnightbsd.org/notes/
+[4]: https://itsfoss.com/what-is-zfs/
+[5]: https://man.openbsd.org/doas
+[6]: https://twitter.com/midnightbsd
+[7]: https://www.youtube.com/watch?v=-rlk2wFsjJ4
+[8]: http://reddit.com/r/linuxusersgroup
diff --git a/translated/tech/20181018 Understanding Linux Links- Part 1.md b/published/201810/20181018 Understanding Linux Links- Part 1.md
similarity index 50%
rename from translated/tech/20181018 Understanding Linux Links- Part 1.md
rename to published/201810/20181018 Understanding Linux Links- Part 1.md
index ab2433484e..ecfb777cd9 100644
--- a/translated/tech/20181018 Understanding Linux Links- Part 1.md
+++ b/published/201810/20181018 Understanding Linux Links- Part 1.md
@@ -1,57 +1,64 @@
-理解 Linux 链接:第一部分
+理解 Linux 链接(一)
 ======
+> 链接是可以将文件和目录放在你希望它们放在的位置的另一种方式。 ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-link-498708.jpg?itok=DyVEcEsc) -除了 `cp` 和 `mv` 这两个我们在[本系列的前一部分][1]中详细讨论过的,链接是另一种方式可以将文件和目录放在你希它们放在的位置。它的优点是可以让你同时在多个位置显示一个文件或目录。 +除了 `cp` 和 `mv` 这两个我们在[本系列的前一部分][1]中详细讨论过的,链接是可以将文件和目录放在你希望它们放在的位置的另一种方式。它的优点是可以让你同时在多个位置显示一个文件或目录。 -如前所述,在物理磁盘这个级别上,文件和目录之类的东西并不真正存在。文件系统为了方便人类使用,将它们虚构出来。但在磁盘级别上,有一个名为 _partition table_(分区表)的东西,它位于每个分区的开头,然后数据分散在磁盘的其余部分。 +如前所述,在物理磁盘这个级别上,文件和目录之类的东西并不真正存在。文件系统是为了方便人类使用,将它们虚构出来。但在磁盘级别上,有一个名为分区表partition table的东西,它位于每个分区的开头,然后数据分散在磁盘的其余部分。 -虽然有不同类型的分区表,但是在分区开头的表包含的数据将映射每个目录和文件的开始和结束位置。分区表的就像一个索引:当从磁盘加载文件时,操作系统会查找表中的条目,分区表会告诉文件在磁盘上的起始位置和结束位置。然后磁盘头移动到起点,读取数据,直到它到达终点,最后告诉 presto:这就是你的文件。 +虽然有不同类型的分区表,但是在分区开头的那个表包含的数据将映射每个目录和文件的开始和结束位置。分区表的就像一个索引:当从磁盘加载文件时,操作系统会查找表中的条目,分区表会告诉文件在磁盘上的起始位置和结束位置。然后磁盘头移动到起点,读取数据,直到它到达终点,您看:这就是你的文件。 ### 硬链接 硬链接只是分区表中的一个条目,它指向磁盘上的某个区域,表示该区域**已经被分配给文件**。换句话说,硬链接指向已经被另一个条目索引的数据。让我们看看它是如何工作的。 打开终端,创建一个实验目录并进入: + ``` mkdir test_dir cd test_dir ``` 使用 [touch][1] 创建一个文件: + ``` touch test.txt ``` -为了获得更多的体验(?),在文本编辑器中打开 _test.txt_ 并添加一些单词。 +为了获得更多的体验(?),在文本编辑器中打开 `test.txt` 并添加一些单词。 现在通过执行以下命令来建立硬链接: + ``` ln test.txt hardlink_test.txt ``` -运行 `ls`,你会看到你的目录现在包含两个文件,或者看起来如此。正如你之前读到的那样,你真正看到的是完全相同的文件的两个名称: _hardlink\_test.txt_ 包含相同的内容,没有填充磁盘中的任何更多空间(尝试使用大文件来测试),并与 _test.txt_ 使用相同的 inode: +运行 `ls`,你会看到你的目录现在包含两个文件,或者看起来如此。正如你之前读到的那样,你真正看到的是完全相同的文件的两个名称: `hardlink_test.txt` 包含相同的内容,没有填充磁盘中的任何更多空间(可以尝试使用大文件来测试),并与 `test.txt` 使用相同的 inode: + ``` $ ls -li *test* 16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt 16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt ``` -_ls_ 的 `-i` 选项显示一个文件的 _inode 数值_。_inode_ 是分区表中的信息块,它包含磁盘上文件或目录的位置,上次修改的时间以及其它数据。如果两个文件使用相同的 inode,那么无论它们在目录树中的位置如何,它们在实际效果上都是相同的文件。 +`ls` 的 `-i` 选项显示一个文件的 “inode 数值”。“inode” 是分区表中的信息块,它包含磁盘上文件或目录的位置、上次修改的时间以及其它数据。如果两个文件使用相同的 inode,那么无论它们在目录树中的位置如何,它们在实际上都是相同的文件。 ### 软链接 -软链接,也称为 _symlinks_(系统链接),它是不同的:软链接实际上是一个独立的文件,它有自己的 inode 
和它自己在磁盘上的小插槽。但它只包含一小段数据,将操作系统指向另一个文件或目录。 +软链接,也称为符号链接symlink,它与硬链接是不同的:软链接实际上是一个独立的文件,它有自己的 inode 和它自己在磁盘上的小块地方。但它只包含一小段数据,将操作系统指向另一个文件或目录。 你可以使用 `ln` 的 `-s` 选项来创建一个软链接: + ``` ln -s test.txt softlink_test.txt ``` -这将在当前目录中创建软链接 _softlink\_test.txt_,它指向 _test.txt_。 +这将在当前目录中创建软链接 `softlink_test.txt`,它指向 `test.txt`。 再次执行 `ls -li`,你可以看到两种链接的不同之处: + ``` $ ls -li total 8 @@ -60,48 +67,53 @@ total 8 16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt ``` -_hardlink\_test.txt_ 和 _test.txt_ 包含一些文本并占据相同的空格*字面*。它们使用相同的 inode 数值。与此同时,_softlink\_test.txt_ 占用少得多,并且具有不同的 inode 数值,将其标记为完全不同的文件。使用 _ls_ 的 `-l` 选项还会显示软链接指向的文件或目录。 +`hardlink_test.txt` 和 `test.txt` 包含一些文本并且*字面上*占据相同的空间。它们使用相同的 inode 数值。与此同时,`softlink_test.txt` 占用少得多,并且具有不同的 inode 数值,将其标记为完全不同的文件。使用 `ls` 的 `-l` 选项还会显示软链接指向的文件或目录。 ### 为什么要用链接? 它们适用于**带有自己环境的应用程序**。你的 Linux 发行版通常不会附带你需要应用程序的最新版本。以优秀的 [Blender 3D][2] 设计软件为例,Blender 允许你创建 3D 静态图像以及动画电影,人人都想在自己的机器上拥有它。问题是,当前版本的 Blender 至少比任何发行版中的自带的高一个版本。 -幸运的是,[Blender 提供下载][3]开箱即用。除了程序本身之外,这些软件包还包含了 Blender 需要运行的复杂的库和依赖框架。所有这些数据和块都在它们自己的目录层次中。 +幸运的是,[Blender 提供可以开箱即用的下载][3]。除了程序本身之外,这些软件包还包含了 Blender 需要运行的复杂的库和依赖框架。所有这些数据和块都在它们自己的目录层次中。 每次你想运行 Blender,你都可以 `cd` 到你下载它的文件夹并运行: + ``` ./blender ``` 但这很不方便。如果你可以从文件系统的任何地方,比如桌面命令启动器中运行 `blender` 命令会更好。 -这样做的方法是将 _blender_ 可执行文件链接到 _bin/_ 目录。在许多系统上,你可以通过将其链接到文件系统中的任何位置来使 `blender` 命令可用,就像这样。 +这样做的方法是将 `blender` 可执行文件链接到 `bin/` 目录。在许多系统上,你可以通过将其链接到文件系统中的任何位置来使 `blender` 命令可用,就像这样。 + ``` ln -s /path/to/blender_directory/blender /home//bin ``` -你需要链接的另一个情况是**软件需要过时的库**。如果你用 `ls -l` 列出你的 _/usr/lib_ 目录,你会看到许多软链接文件飞过。仔细看看,你会看到软链接通常与它们链接到的原始文件具有相似的名称。你可能会看到 _libblah_ 链接到 _libblah.so.2_,你甚至可能会注意到 _libblah.so.2_ 依次链接到原始文件 _libblah.so.2.1.0_。 +你需要链接的另一个情况是**软件需要过时的库**。如果你用 `ls -l` 列出你的 `/usr/lib` 目录,你会看到许多软链接文件一闪而过。仔细看看,你会看到软链接通常与它们链接到的原始文件具有相似的名称。你可能会看到 `libblah` 链接到 `libblah.so.2`,你甚至可能会注意到 `libblah.so.2` 相应链接到原始文件 `libblah.so.2.1.0`。 
-这是因为应用程序通常需要安装比已安装版本更老的库。问题是,即使新版本仍然与旧版本(通常是)兼容,如果程序找不到它正在寻找的版本,程序将会出现问题。为了解决这个问题,发行版通常会创建链接,以便挑剔的应用程序相信它找到了旧版本,实际上它只找到了一个链接并最终使用了更新的库版本。 +这是因为应用程序通常需要安装比已安装版本更老的库。问题是,即使新版本仍然与旧版本(通常是)兼容,如果程序找不到它正在寻找的版本,程序将会出现问题。为了解决这个问题,发行版通常会创建链接,以便挑剔的应用程序**相信**它找到了旧版本,实际上它只找到了一个链接并最终使用了更新的库版本。 + +有些是和**你自己从源代码编译的程序**相关。你自己编译的程序通常最终安装在 `/usr/local` 下,程序本身最终在 `/usr/local/bin` 中,它在 `/usr/local/bin` 目录中查找它需要的库。但假设你的新程序需要 `libblah`,但 `libblah` 在 `/usr/lib` 中,这就是所有其它程序都会寻找到它的地方。你可以通过执行以下操作将其链接到 `/usr/local/lib`: -有些是和**你自己从源代码编译的程序**相关。你自己编译的程序通常最终安装在 _/usr/local_ 下,程序本身最终在 _/usr/local/bin_ 中,它在 _/usr/local/bin_ 目录中查找它需要的库。但假设你的新程序需要 _libblah_,但 _libblah_ 在 _/usr/lib_ 中,这就是所有其它程序都会寻找到它的地方。你可以通过执行以下操作将其链接到 _/usr/local/lib_: ``` ln -s /usr/lib/libblah /usr/local/lib ``` -或者如果你愿意,可以 `cd` 到 _/usr/local/lib_: +或者如果你愿意,可以 `cd` 到 `/usr/local/lib`: + ``` cd /usr/local/lib ``` 然后使用链接: + ``` ln -s ../lib/libblah ``` 还有几十个案例证明软链接是有用的,当你使用 Linux 更熟练时,你肯定会发现它们,但这些是最常见的。下一次,我们将看一些你需要注意的链接怪异。 -通过 Linux 基金会和 edX 的免费 ["Linux 简介"][4]课程了解有关 Linux 的更多信息。 +通过 Linux 基金会和 edX 的免费 [“Linux 简介”][4]课程了解有关 Linux 的更多信息。 -------------------------------------------------------------------------------- @@ -111,7 +123,7 @@ via: https://www.linux.com/blog/intro-to-linux/2018/10/linux-links-part-1 作者:[Paul Brown][a] 选题:[lujun9972][b] 译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181019 How to use Pandoc to produce a research paper.md b/published/201810/20181019 How to use Pandoc to produce a research paper.md similarity index 51% rename from translated/tech/20181019 How to use Pandoc to produce a research paper.md rename to published/201810/20181019 How to use Pandoc to produce a research paper.md index 516ab8ba37..3ccbc8df1c 100644 --- a/translated/tech/20181019 How to use Pandoc to produce a research paper.md +++ 
b/published/201810/20181019 How to use Pandoc to produce a research paper.md
@@ -1,19 +1,21 @@
-用 Pandoc 做一篇调研论文
+用 Pandoc 生成一篇调研论文
 ======
-学习如何用 Markdown 管理引用、图像、表格、以及更多。
+
+> 学习如何用 Markdown 管理章节引用、图像、表格以及更多。
+
 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T)

-这篇文章对于使用 [Markdown][1] 语法做一篇调研论文进行了一个深度体验。覆盖了如何创建和引用、图像(用 Markdown 和 [LaTeX][2])和参考书目。我们也讨论了一些棘手的案例和为什么使用 LaTex 是一个正确的做法。
+这篇文章深度体验了使用 [Markdown][1] 语法撰写一篇调研论文的过程,覆盖了如何创建和引用章节、图像(用 Markdown 和 [LaTeX][2])以及参考书目。我们也讨论了一些棘手的案例,以及为什么使用 LaTeX 是一个正确的做法。

-### 调查
+### 调研

-调研论文一般包括引用、图像、表格和参考书目。[Pandoc][3] 本身并不能交叉引用这些,但是但是它能够利用 [pandoc-crossref][4] 过滤来完成自动编号和章节、图像、表格的交叉引用。
+调研论文一般包括对章节、图像、表格和参考书目的引用。[Pandoc][3] 本身并不能交叉引用这些,但是它能够利用 [pandoc-crossref][4] 过滤器来完成自动编号和章节、图像、表格的交叉引用。

-让我们开始正常的使用 LaTax 重写 [一个教育调研报告的例子][5],然后用 Markdown(和一些 LaTax)、Pandoc 和 Pandoc-crossref 再重写。
+让我们从一篇原本以 LaTeX 撰写的 [一个教育调研报告的例子][5] 开始,用 Markdown(和一些 LaTeX)、Pandoc 和 Pandoc-crossref 将其重写。

 #### 添加并引用章节

-要想章节被自动编号,必须使用 Markdown 标题 H1 编写。子章节使用子标题 H2-H4 编写(通常不需要更多的东西)。例如一个章节的标题是 “履行”,写作 `# 履行 {#sec: 履行}`,然后 Pandoc 会把它转化为 `3. 履行`(或者转换为相应的章节标号)。`履行` 这个标题使用了 H1 并且声明了一个 `{#sec: 履行}` 的标签,这是作者引用了该章节的标签。要想引用一个章节,在对应章节后面输入 `@` 符号并使用方括号括起来即可: `[@sec:履行]`
+要想章节被自动编号,必须使用 Markdown H1 标题编写。子章节使用 H2-H4 子标题编写(通常不需要更多级别了)。例如一个章节的标题是 “Implementation”,写作 `# Implementation {#sec:implementation}`,然后 Pandoc 会把它转化为 `3. Implementation`(或者转换为相应的章节编号)。`Implementation` 这个标题使用了 H1 并且声明了一个 `{#sec:implementation}` 的标签,这是作者用于引用该章节的标签。要想引用一个章节,输入 `@` 符号并跟上对应章节标签,使用方括号括起来即可:`[@sec:implementation]`

[在这篇论文中][5],我们发现了下面这个例子:

@@ -27,16 +29,17 @@

Pandoc 转换:

```
we lack experience (consistency between TAs, Section 4).
``` -章节被自动(这包含在文章最后的 `Makefile` 当中)标号。要创建无标号的章节,输入章节的标题并在最后添加 `{-}`。例如:`### 设计一个可维护的游戏 {-}` 就以标题 “设计一个可维护的游戏”,创建了一个无标号的章节。 +章节被自动编号(这在本文最后的 `Makefile` 当中说明)。要创建无编号的章节,输入章节的标题并在最后添加 `{-}`。例如:`### Designing a game for maintainability {-}` 就以标题 “Designing a game for maintainability”,创建了一个无标号的章节。 #### 添加并引用图像 -添加并引用一个图像,跟添加并引用一个章节和添加一个 Markdown 图片很相似: +添加并引用一个图像,跟添加并引用一个章节和添加一个 Markdown 图片很相似: ``` ![Scatterplot matrix](data/scatterplots/RScatterplotMatrix2.png){#fig:scatter-matrix} ``` -上面这一行是告诉 Pandoc,有一个标有 Scatterplot matrix 的图像以及这张图片路径是 `data/scatterplots/RScatterplotMatrix2.png`。`{#fig:scatter-matrix}` 表明了应该引用的图像的名字。 + +上面这一行是告诉 Pandoc,有一个标有 Scatterplot matrix 的图像以及这张图片路径是 `data/scatterplots/RScatterplotMatrix2.png`。`{#fig:scatter-matrix}` 表明了用于引用该图像的名字。 这里是从一篇论文中进行图像引用的例子: @@ -51,46 +54,47 @@ The boxes "Enjoy", "Grade" and "Motivation" (Fig. 1) ... ``` #### 添加及引用参考书目 -大多数调研报告都把引用放在一个 BibTeX 的数据库文件中。在这个例子中,该文件被命名为 [biblio.bib][6],它包含了论文中所有的引用。下面是这个文件的样子: + +大多数调研报告都把引用放在一个 BibTeX 的数据库文件中。在这个例子中,该文件被命名为 [biblio.bib][6],它包含了论文中所有的引用。下面是这个文件的样子: ``` @inproceedings{wrigstad2017mastery, -    Author =       {Wrigstad, Tobias and Castegren, Elias}, -    Booktitle =    {SPLASH-E}, -    Title =        {Mastery Learning-Like Teaching with Achievements}, -    Year =         2017 + Author = {Wrigstad, Tobias and Castegren, Elias}, + Booktitle = {SPLASH-E}, + Title = {Mastery Learning-Like Teaching with Achievements}, + Year = 2017 } @inproceedings{review-gamification-framework, -  Author =       {A. Mora and D. Riera and C. Gonzalez and J. 
Arnedo-Moreno}, -  Publisher =    {IEEE}, -  Booktitle =    {2015 7th International Conference on Games and Virtual Worlds -                  for Serious Applications (VS-Games)}, -  Doi =          {10.1109/VS-GAMES.2015.7295760}, -  Keywords =     {formal specification;serious games (computing);design -                  framework;formal design process;game components;game design -                  elements;gamification design frameworks;gamification-based -                  solutions;Bibliographies;Context;Design -                  methodology;Ethics;Games;Proposals}, -  Month =        {Sept}, -  Pages =        {1-8}, -  Title =        {A Literature Review of Gamification Design Frameworks}, -  Year =         2015, -  Bdsk-Url-1 =   {http://dx.doi.org/10.1109/VS-GAMES.2015.7295760} + Author = {A. Mora and D. Riera and C. Gonzalez and J. Arnedo-Moreno}, + Publisher = {IEEE}, + Booktitle = {2015 7th International Conference on Games and Virtual Worlds + for Serious Applications (VS-Games)}, + Doi = {10.1109/VS-GAMES.2015.7295760}, + Keywords = {formal specification;serious games (computing);design + framework;formal design process;game components;game design + elements;gamification design frameworks;gamification-based + solutions;Bibliographies;Context;Design + methodology;Ethics;Games;Proposals}, + Month = {Sept}, + Pages = {1-8}, + Title = {A Literature Review of Gamification Design Frameworks}, + Year = 2015, + Bdsk-Url-1 = {http://dx.doi.org/10.1109/VS-GAMES.2015.7295760} } ... 
``` -第一行的 `@inproceedings{wrigstad2017mastery,` 表明了出版物 (`inproceedings`) 的类型,以及用来指向那篇论文 (`wrigstad2017mastery`) 的标签。 +第一行的 `@inproceedings{wrigstad2017mastery,` 表明了出版物 的类型(`inproceedings`),以及用来指向那篇论文的标签(`wrigstad2017mastery`)。 -引用这篇题为 “Mastery Learning-Like Teaching with Achievements” 的论文, 输入: +引用这篇题为 “Mastery Learning-Like Teaching with Achievements” 的论文, 输入: ``` the achievement-driven learning methodology [@wrigstad2017mastery] ``` -Pandoc 将会输出: +Pandoc 将会输出: ``` the achievement- driven learning methodology [30] @@ -100,25 +104,23 @@ the achievement- driven learning methodology [30] ![](https://opensource.com/sites/default/files/uploads/bibliography-example_0.png) -引用文章的集合也很容易:只要引用使用分号 `;` 分隔开被标记的参考文献就可以了。如果一个引用有两个标签 —— 例如: `SEABORN201514` 和 `gamification-leaderboard-benefits`—— 像下面这样把它们放在一起引用: +引用文章的集合也很容易:只要引用使用分号 `;` 分隔开被标记的参考文献就可以了。如果一个引用有两个标签 —— 例如: `SEABORN201514` 和 `gamification-leaderboard-benefits`—— 像下面这样把它们放在一起引用: ``` Thus, the most important benefit is its potential to increase students' motivation - and engagement [@SEABORN201514;@gamification-leaderboard-benefits]. 
``` -Pandoc 将会产生: +Pandoc 将会产生: ``` Thus, the most important benefit is its potential to increase students’ motivation - and engagement [26, 28] ``` ### 问题案例 -一个常见的问题是项目与页面不匹配。不匹配的部分会自动移动到它们认为合适的地方,即便这些位置并不是读者期望看到的位置。因此在图像或者表格接近于它们被提及的地方时,我们需要调节一下它们在此处的元素组合,使得他们更加易于阅读。为了达到这个效果,我建议使用 `figure` 这个 LaTeX 环境参数,它可以让用户控制图像的位置。 +一个常见的问题是所需项目与页面不匹配。不匹配的部分会自动移动到它们认为合适的地方,即便这些位置并不是读者期望看到的位置。因此在图像或者表格接近于它们被提及的地方时,我们需要调节一下那些元素放置的位置,使得它们更加易于阅读。为了达到这个效果,我建议使用 `figure` 这个 LaTeX 环境参数,它可以让用户控制图像的位置。 我们看一个上面提到的图像的例子: @@ -126,7 +128,7 @@ and engagement [26, 28] ![Scatterplot matrix](data/scatterplots/RScatterplotMatrix2.png){#fig:scatter-matrix} ``` -然后使用 LaTeX 重写: +然后使用 LaTeX 重写: ``` \begin{figure}[t] @@ -139,17 +141,17 @@ and engagement [26, 28] ### 产生一篇论文 -到目前为止,我们讲了如何添加和引用(子)章节、图像和参考书目,现在让我们重温一下如何生产一篇 PDF 格式的论文,生成 PDF,我们将使用 Pandoc 生成一篇可以被构建成最终 PDF 的 LaTeX 文件。我们还会讨论如何以 LaTeX,使用一套自定义的模板和元信息文件生成一篇调研论文,以及如何构建 LaTeX 文档为最终的 PDF 格式。 +到目前为止,我们讲了如何添加和引用(子)章节、图像和参考书目,现在让我们重温一下如何生成一篇 PDF 格式的论文。要生成 PDF,我们将使用 Pandoc 生成一篇可以被构建成最终 PDF 的 LaTeX 文件。我们还会讨论如何以 LaTeX,使用一套自定义的模板和元信息文件生成一篇调研论文,以及如何将 LaTeX 文档编译为最终的 PDF 格式。 -很多会议都提供了一个 **.cls** 文件或者一套论文该有样子的模板; 例如,他们是否应该使用两列的格式以及其他的设计风格。在我们的例子中,会议提供了一个名为 **acmart.cls** 的文件。 +很多会议都提供了一个 .cls 文件或者一套论文应有样式的模板;例如,它们是否应该使用两列的格式以及其它的设计风格。在我们的例子中,会议提供了一个名为 `acmart.cls` 的文件。 -作者通常想要在他们的论文中包含他们所属的机构,然而,这个选项并没有包含在默认的 Pandoc 的 LaTeX 模板(注意,可以通过输入 `pandoc -D latex` 来查看 Pandoc 模板)当中。要包含这个内容,找一个 Pandoc 默认的 LaTeX 模板,并添加一些新的内容。将这个模板像下面这样复制进一个名为 `mytemplate.tex` 的文件中: +作者通常想要在他们的论文中包含他们所属的机构,然而,这个选项并没有包含在默认的 Pandoc 的 LaTeX 模板(注意,可以通过输入 `pandoc -D latex` 来查看 Pandoc 模板)当中。要包含这个内容,找一个 Pandoc 默认的 LaTeX 模板,并添加一些新的内容。将这个模板像下面这样复制进一个名为 `mytemplate.tex` 的文件中: ``` pandoc -D latex > mytemplate.tex ``` -默认的模板包含以下代码: +默认的模板包含以下代码: ``` $if(author)$ @@ -161,32 +163,30 @@ $if(institute)$ $endif$ ``` -因为这个模板应该包含作者的联系方式和电子邮件地址,在其他一些选项之间,我们可以添加以下内容(我们还做了一些其他的更改,但是因为文件的长度,就没有包含在此处)更新这个模板 +因为这个模板应该包含作者的联系方式和电子邮件地址,在其他一些选项之间,我们更新这个模板以添加以下内容(我们还做了一些其他的更改,但是因为文件的长度,就没有包含在此处): ``` latex 
$for(author)$ -    $if(author.name)$ -        \author{$author.name$} -        $if(author.affiliation)$ -            \affiliation{\institution{$author.affiliation$}} -        $endif$ -        $if(author.email)$ -            \email{$author.email$} -        $endif$ -    $else$ -        $author$ -    $endif$ + $if(author.name)$ + \author{$author.name$} + $if(author.affiliation)$ + \affiliation{\institution{$author.affiliation$}} + $endif$ + $if(author.email)$ + \email{$author.email$} + $endif$ + $else$ + $author$ + $endif$ $endfor$ ``` 要让这些更改起作用,我们还应该有下面的文件: - * `main.md` 包含调研论文 - * `biblio.bib` 包含参考书目数据库 - * `acmart.cls` 我们使用的文档的集合 - * `mytemplate.tex` 是我们使用的模板文件(代替默认的) - - +* `main.md` 包含调研论文 +* `biblio.bib` 包含参考书目数据库 +* `acmart.cls` 我们使用的文档的集合 +* `mytemplate.tex` 是我们使用的模板文件(代替默认的) 让我们添加论文的元信息到一个 `meta.yaml` 文件: @@ -211,7 +211,7 @@ abstract: |   An achievement-driven methodology strives to give students more control over their learning with enough flexibility to engage them in deeper learning. (more stuff continues) include-before: | -   \```{=latex} +   \` ``{=latex}   \copyrightyear{2018}   \acmYear{2018}   \setcopyright{acmlicensed} @@ -234,7 +234,7 @@ include-before: |   \ccsdesc[500]{Applied computing~Education}   \keywords{gamification, education, software design, UML} -   \``` +   \` `` figPrefix:   - "Fig."   - "Figs." 
@@ -246,23 +246,21 @@ secPrefix: 这个元信息文件使用 LaTeX 设置下列参数: - * `template` 指向使用的模板(’mytemplate.tex‘) - * `documentclass` 指向使用的 LaTeX 文档集合 (`acmart`) - * `classoption` 是在 `sigconf` 的案例中,指向这个类的选项 - * `title` 指定论文的标题 - * `author` 是一个包含例如 `name`, `affiliation`, 和 `email` 的地方 - * `bibliography` 指向包含参考书目的文件 (biblio.bib) - * `abstract` 包含论文的摘要 - * `include-before` 是这篇论文的真实内容之前应该被包含的信息;在 LaTeX 中被称为 [前言][8]。我在这里包含它去展示如何产生一篇计算机科学的论文,但是你可以选择跳过 - * `figPrefix` 指向如何引用文档中的图像,例如:当引用图像的 `[@fig:scatter-matrix]` 时应该显示什么。例如,当前的 `figPrefix` 在这个例子 `The boxes "Enjoy", "Grade" and "Motivation" ([@fig:scatter-matrix])`中,产生了这样的输出:`The boxes "Enjoy", "Grade" and "Motivation" (Fig. 3)`。如果这里有很多图像,目前的设置表明它应该在图像号码旁边显示 `Figs.`。 - * `secPrefix` 指定如何引用文档中其他地方提到的部分(类似之前的图像和概览) - - +* `template` 指向使用的模板(`mytemplate.tex`) +* `documentclass` 指向使用的 LaTeX 文档集合(`acmart`) +* `classoption` 是在 `sigconf` 的案例中,指向这个类的选项 +* `title` 指定论文的标题 +* `author` 是一个包含例如 `name`、`affiliation` 和 `email` 的地方 +* `bibliography` 指向包含参考书目的文件(`biblio.bib`) +* `abstract` 包含论文的摘要 +* `include-before` 是这篇论文的具体内容之前应该被包含的信息;在 LaTeX 中被称为 [前言][8]。我在这里包含它去展示如何产生一篇计算机科学的论文,但是你可以选择跳过 +* `figPrefix` 指向如何引用文档中的图像,例如:当引用图像的 `[@fig:scatter-matrix]` 时应该显示什么。例如,当前的 `figPrefix` 在这个例子 `The boxes "Enjoy", "Grade" and "Motivation" ([@fig:scatter-matrix])`中,产生了这样的输出:`The boxes "Enjoy", "Grade" and "Motivation" (Fig. 
3)`。如果这里有很多图像,目前的设置表明它应该在图像号码旁边显示 `Figs.`
+* `secPrefix` 指定如何引用文档中其他地方提到的部分(类似之前的图像和概览)

现在已经设置好了元信息,让我们来创建一个 `Makefile`,它会产生你想要的输出。`Makefile` 使用 Pandoc 产生 LaTeX 文件,`pandoc-crossref` 产生交叉引用,`pdflatex` 构建 LaTeX 为 PDF,`bibtex` 处理引用。

-`Makefile` 已经展示如下:
+`Makefile` 如下所示:

```
all: paper
@@ -281,18 +279,16 @@ clean:
.PHONY: all clean paper
```

-Pandoc 使用下面的标记:
+Pandoc 使用下面的标记:

- * `-s` 创建一个独立的 LaTeX 文档
- * `-F pandoc-crossref` 利用 `pandoc-crossref` 进行过滤
- * `--natbib` 用 `natbib` (你也可以选择 `--biblatex`)对参考书目进行渲染
- * `--template` 设置使用的模板文件
- * `-N` 为章节的标题编号
- * `-f` 和 `-t` 指定从哪个格式转换到哪个格式。`-t` 通常包含格式和 Pandoc 使用的扩展。在这个例子中,我们标明的 `raw_tex+tex_math_dollars+citations` 允许在 Markdown 中使用 `raw_tex` LaTeX。 `tex_math_dollars` 让我们能够像在 LaTeX 中一样输入数学符号,`citations` 让我们可以使用 [这个扩展][9].
+* `-s` 创建一个独立的 LaTeX 文档
+* `-F pandoc-crossref` 利用 `pandoc-crossref` 进行过滤
+* `--natbib` 用 `natbib`(你也可以选择 `--biblatex`)对参考书目进行渲染
+* `--template` 设置使用的模板文件
+* `-N` 为章节的标题编号
+* `-f` 和 `-t` 指定从哪个格式转换到哪个格式。`-t` 通常包含格式和 Pandoc 使用的扩展。在这个例子中,我们标明的 `raw_tex+tex_math_dollars+citations` 允许在 Markdown 中使用 `raw_tex` LaTeX。`tex_math_dollars` 让我们能够像在 LaTeX 中一样输入数学符号,`citations` 让我们可以使用 [这个扩展][9]。

-
-
-由 LaTeX 产生 PDF,接着引导行 [从 bibtex][10] 处理参考书目:
+要从 LaTeX 产生 PDF,按照 [bibtex][10] 的指导处理参考书目:

```
@pdflatex main.tex &> /dev/null
@@ -301,7 +297,7 @@ Pandoc 使用下面的标记:
@pdflatex main.tex &> /dev/null
```

-脚本用 `@` 忽略输出,并且重定向标准输出和错误到 `/dev/null` ,因此我们在使用这些命令的可执行文件时不会看到任何的输出。
+脚本用 `@` 忽略输出,并且重定向标准输出和错误到 `/dev/null`,因此我们在使用这些命令的可执行文件时不会看到任何的输出。

最终的结果展示如下。这篇文章的库可以在 [GitHub][11] 找到:

@@ -309,9 +305,9 @@ Pandoc 使用下面的标记:

### 结论

-在我看来,研究的重点是协作,思想的传播,以及在任何一个恰好存在的领域中改进现有的技术。许多计算机科学家和工程师使用 LaTeX 文档系统来写论文,它对数学提供了完美的支持。来自社会科学的调查员似乎更喜欢 DOCX 文档。
+在我看来,研究的重点是协作、思想的传播,以及在任何一个恰好存在的领域中改进现有的技术。许多计算机科学家和工程师使用 LaTeX 文档系统来写论文,它对数学提供了完美的支持。来自社会科学的研究人员似乎更喜欢 DOCX 文档。

-当身处不同社区的调查员一同写一篇论文时,他们首先应该讨论一下他们将要使用哪种格式。然而如果包含太多的数学符号,DOCX 对于工程师来说不会是最简便的选择,LaTeX 对于缺乏编程经验的调查员来说也有一些问题。就像这篇文章中展示的,Markdown 是一门工程师和社会科学家都很轻易能够使用的语言。 
+当身处不同社区的研究人员一同写一篇论文时,他们首先应该讨论一下他们将要使用哪种格式。然而如果包含太多的数学符号,DOCX 对于工程师来说不会是最简便的选择,LaTeX 对于缺乏编程经验的研究人员来说也有一些问题。就像这篇文章中展示的,Markdown 是一门工程师和社会科学家都很轻易能够使用的语言。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/pandoc-research-paper

作者:[Kiko Fernandez-Reyes][a]
选题:[lujun9972][b]
译者:[dianbanjiu](https://github.com/dianbanjiu)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

diff --git a/published/20181019 How to use Pandoc to produce a research paper.md b/published/20181019 How to use Pandoc to produce a research paper.md
new file mode 100644
index 0000000000..3ccbc8df1c
--- /dev/null
+++ b/published/20181019 How to use Pandoc to produce a research paper.md
@@ -0,0 +1,335 @@
+用 Pandoc 生成一篇调研论文
+======
+
+> 学习如何用 Markdown 管理章节引用、图像、表格以及更多。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T)
+
+这篇文章深入探讨了如何使用 [Markdown][1] 语法撰写一篇调研论文,涵盖了如何创建和引用章节、图像(用 Markdown 和 [LaTeX][2])以及参考书目。我们也讨论了一些棘手的案例,以及为什么使用 LaTeX 是一个正确的做法。
+
+### 调研
+
+调研论文一般包括对章节、图像、表格和参考书目的引用。[Pandoc][3] 本身并不能交叉引用这些,但是它能够利用 [pandoc-crossref][4] 过滤器来完成自动编号和章节、图像、表格的交叉引用。
+
+让我们从重写原本以 LaTeX 撰写的 [一个教育调研报告的例子][5] 开始,然后用 Markdown(和一些 LaTeX)、Pandoc 和 Pandoc-crossref 重写。
+
+#### 添加并引用章节
+
+要想章节被自动编号,必须使用 Markdown H1 标题编写。子章节使用 H2-H4 子标题编写(通常不需要更多级别了)。例如一个章节的标题是 “Implementation”,写作 `# Implementation {#sec:implementation}`,然后 Pandoc 会把它转化为 `3. Implementation`(或者转换为相应的章节编号)。`Implementation` 这个标题使用了 H1 并且声明了一个 `{#sec:implementation}` 的标签,这是作者用于引用该章节的标签。要想引用一个章节,输入 `@` 符号并跟上对应章节标签,使用方括号括起来即可:`[@sec:implementation]`
+
+[在这篇论文中][5],我们发现了下面这个例子:
+
+```
+we lack experience (consistency between TAs, [@sec:implementation]).
+```
+
+Pandoc 转换:
+
+```
+we lack experience (consistency between TAs, Section 4). 
+``` + +章节被自动编号(这在本文最后的 `Makefile` 当中说明)。要创建无编号的章节,输入章节的标题并在最后添加 `{-}`。例如:`### Designing a game for maintainability {-}` 就以标题 “Designing a game for maintainability”,创建了一个无标号的章节。 + +#### 添加并引用图像 + +添加并引用一个图像,跟添加并引用一个章节和添加一个 Markdown 图片很相似: + +``` +![Scatterplot matrix](data/scatterplots/RScatterplotMatrix2.png){#fig:scatter-matrix} +``` + +上面这一行是告诉 Pandoc,有一个标有 Scatterplot matrix 的图像以及这张图片路径是 `data/scatterplots/RScatterplotMatrix2.png`。`{#fig:scatter-matrix}` 表明了用于引用该图像的名字。 + +这里是从一篇论文中进行图像引用的例子: + +``` +The boxes "Enjoy", "Grade" and "Motivation" ([@fig:scatter-matrix]) ... +``` + +Pandoc 产生如下输出: + +``` +The boxes "Enjoy", "Grade" and "Motivation" (Fig. 1) ... +``` + +#### 添加及引用参考书目 + +大多数调研报告都把引用放在一个 BibTeX 的数据库文件中。在这个例子中,该文件被命名为 [biblio.bib][6],它包含了论文中所有的引用。下面是这个文件的样子: + +``` +@inproceedings{wrigstad2017mastery, + Author = {Wrigstad, Tobias and Castegren, Elias}, + Booktitle = {SPLASH-E}, + Title = {Mastery Learning-Like Teaching with Achievements}, + Year = 2017 +} + +@inproceedings{review-gamification-framework, + Author = {A. Mora and D. Riera and C. Gonzalez and J. Arnedo-Moreno}, + Publisher = {IEEE}, + Booktitle = {2015 7th International Conference on Games and Virtual Worlds + for Serious Applications (VS-Games)}, + Doi = {10.1109/VS-GAMES.2015.7295760}, + Keywords = {formal specification;serious games (computing);design + framework;formal design process;game components;game design + elements;gamification design frameworks;gamification-based + solutions;Bibliographies;Context;Design + methodology;Ethics;Games;Proposals}, + Month = {Sept}, + Pages = {1-8}, + Title = {A Literature Review of Gamification Design Frameworks}, + Year = 2015, + Bdsk-Url-1 = {http://dx.doi.org/10.1109/VS-GAMES.2015.7295760} +} + +... 
+```
+
+第一行的 `@inproceedings{wrigstad2017mastery,` 表明了出版物的类型(`inproceedings`),以及用来指向那篇论文的标签(`wrigstad2017mastery`)。
+
+引用这篇题为 “Mastery Learning-Like Teaching with Achievements” 的论文,输入:
+
+```
+the achievement-driven learning methodology [@wrigstad2017mastery]
+```
+
+Pandoc 将会输出:
+
+```
+the achievement- driven learning methodology [30]
+```
+
+这篇论文将会产生像下面这样被标号的参考书目:
+
+![](https://opensource.com/sites/default/files/uploads/bibliography-example_0.png)
+
+引用文章的集合也很容易:只要用分号 `;` 把被标记的参考文献分隔开就可以了。如果一个引用有两个标签 —— 例如:`SEABORN201514` 和 `gamification-leaderboard-benefits` —— 像下面这样把它们放在一起引用:
+
+```
+Thus, the most important benefit is its potential to increase students' motivation
+and engagement [@SEABORN201514;@gamification-leaderboard-benefits].
+```
+
+Pandoc 将会产生:
+
+```
+Thus, the most important benefit is its potential to increase students’ motivation
+and engagement [26, 28]
+```
+
+### 问题案例
+
+一个常见的问题是所需项目与页面不匹配。不匹配的部分会自动移动到它们认为合适的地方,即便这些位置并不是读者期望看到的位置。因此在图像或者表格接近于它们被提及的地方时,我们需要调节一下那些元素放置的位置,使得它们更加易于阅读。为了达到这个效果,我建议使用 `figure` 这个 LaTeX 环境参数,它可以让用户控制图像的位置。
+
+我们看一个上面提到的图像的例子:
+
+```
+![Scatterplot matrix](data/scatterplots/RScatterplotMatrix2.png){#fig:scatter-matrix}
+```
+
+然后使用 LaTeX 重写:
+
+```
+\begin{figure}[t]
+\includegraphics{data/scatterplots/RScatterplotMatrix2.png}
+\caption{\label{fig:matrix}Scatterplot matrix}
+\end{figure}
+```
+
+在 LaTeX 中,`figure` 环境参数中的 `[t]` 选项表示这张图应该位于该页的最顶部。有关更多选项,参阅 [LaTeX/Floats, Figures, and Captions][7] 这篇 Wikibooks 的文章。
+
+### 产生一篇论文
+
+到目前为止,我们讲了如何添加和引用(子)章节、图像和参考书目,现在让我们重温一下如何生成一篇 PDF 格式的论文。要生成 PDF,我们将使用 Pandoc 生成一篇可以被构建成最终 PDF 的 LaTeX 文件。我们还会讨论如何以 LaTeX,使用一套自定义的模板和元信息文件生成一篇调研论文,以及如何将 LaTeX 文档编译为最终的 PDF 格式。
+
+很多会议都提供了一个 .cls 文件或者一套论文应有样式的模板;例如,它们是否应该使用两列的格式以及其它的设计风格。在我们的例子中,会议提供了一个名为 `acmart.cls` 的文件。
+
+作者通常想要在他们的论文中包含他们所属的机构,然而,这个选项并没有包含在默认的 Pandoc 的 LaTeX 模板(注意,可以通过输入 `pandoc -D latex` 来查看 Pandoc 模板)当中。要包含这个内容,找一个 Pandoc 默认的 LaTeX 模板,并添加一些新的内容。将这个模板像下面这样复制进一个名为 `mytemplate.tex` 的文件中:
+
+```
+pandoc -D latex > 
mytemplate.tex +``` + +默认的模板包含以下代码: + +``` +$if(author)$ +\author{$for(author)$$author$$sep$ \and $endfor$} +$endif$ +$if(institute)$ +\providecommand{\institute}[1]{} +\institute{$for(institute)$$institute$$sep$ \and $endfor$} +$endif$ +``` + +因为这个模板应该包含作者的联系方式和电子邮件地址,在其他一些选项之间,我们更新这个模板以添加以下内容(我们还做了一些其他的更改,但是因为文件的长度,就没有包含在此处): + +``` +latex +$for(author)$ + $if(author.name)$ + \author{$author.name$} + $if(author.affiliation)$ + \affiliation{\institution{$author.affiliation$}} + $endif$ + $if(author.email)$ + \email{$author.email$} + $endif$ + $else$ + $author$ + $endif$ +$endfor$ +``` +要让这些更改起作用,我们还应该有下面的文件: + +* `main.md` 包含调研论文 +* `biblio.bib` 包含参考书目数据库 +* `acmart.cls` 我们使用的文档的集合 +* `mytemplate.tex` 是我们使用的模板文件(代替默认的) + +让我们添加论文的元信息到一个 `meta.yaml` 文件: + +``` +--- +template: 'mytemplate.tex' +documentclass: acmart +classoption: sigconf +title: The impact of opt-in gamification on `\\`{=latex} students' grades in a software design course +author: +- name: Kiko Fernandez-Reyes +  affiliation: Uppsala University +  email: kiko.fernandez@it.uu.se +- name: Dave Clarke +  affiliation: Uppsala University +  email: dave.clarke@it.uu.se +- name: Janina Hornbach +  affiliation: Uppsala University +  email: janina.hornbach@fek.uu.se +bibliography: biblio.bib +abstract: | +  An achievement-driven methodology strives to give students more control over their learning with enough flexibility to engage them in deeper learning. 
(more stuff continues) + +include-before: | +   \` ``{=latex} +   \copyrightyear{2018} +   \acmYear{2018} +   \setcopyright{acmlicensed} +   \acmConference[MODELS '18 Companion]{ACM/IEEE 21th International Conference on Model Driven Engineering Languages and Systems}{October 14--19, 2018}{Copenhagen, Denmark} +   \acmBooktitle{ACM/IEEE 21th International Conference on Model Driven Engineering Languages and Systems (MODELS '18 Companion), October 14--19, 2018, Copenhagen, Denmark} +   \acmPrice{XX.XX} +   \acmDOI{10.1145/3270112.3270118} +   \acmISBN{978-1-4503-5965-8/18/10} + +   \begin{CCSXML} +   +   +   10010405.10010489 +   Applied computing~Education +   500 +   +   +   \end{CCSXML} + +   \ccsdesc[500]{Applied computing~Education} + +   \keywords{gamification, education, software design, UML} +   \` `` +figPrefix: +  - "Fig." +  - "Figs." +secPrefix: +  - "Section" +  - "Sections" +... +``` + +这个元信息文件使用 LaTeX 设置下列参数: + +* `template` 指向使用的模板(`mytemplate.tex`) +* `documentclass` 指向使用的 LaTeX 文档集合(`acmart`) +* `classoption` 是在 `sigconf` 的案例中,指向这个类的选项 +* `title` 指定论文的标题 +* `author` 是一个包含例如 `name`、`affiliation` 和 `email` 的地方 +* `bibliography` 指向包含参考书目的文件(`biblio.bib`) +* `abstract` 包含论文的摘要 +* `include-before` 是这篇论文的具体内容之前应该被包含的信息;在 LaTeX 中被称为 [前言][8]。我在这里包含它去展示如何产生一篇计算机科学的论文,但是你可以选择跳过 +* `figPrefix` 指向如何引用文档中的图像,例如:当引用图像的 `[@fig:scatter-matrix]` 时应该显示什么。例如,当前的 `figPrefix` 在这个例子 `The boxes "Enjoy", "Grade" and "Motivation" ([@fig:scatter-matrix])`中,产生了这样的输出:`The boxes "Enjoy", "Grade" and "Motivation" (Fig. 
3)`。如果这里有很多图像,目前的设置表明它应该在图像号码旁边显示 `Figs.`
+* `secPrefix` 指定如何引用文档中其他地方提到的部分(类似之前的图像和概览)
+
+现在已经设置好了元信息,让我们来创建一个 `Makefile`,它会产生你想要的输出。`Makefile` 使用 Pandoc 产生 LaTeX 文件,`pandoc-crossref` 产生交叉引用,`pdflatex` 构建 LaTeX 为 PDF,`bibtex` 处理引用。
+
+`Makefile` 如下所示:
+
+```
+all: paper
+
+paper:
+        @pandoc -s -F pandoc-crossref --natbib meta.yaml --template=mytemplate.tex -N \
+         -f markdown -t latex+raw_tex+tex_math_dollars+citations -o main.tex main.md
+        @pdflatex main.tex &> /dev/null
+        @bibtex main &> /dev/null
+        @pdflatex main.tex &> /dev/null
+        @pdflatex main.tex &> /dev/null
+
+clean:
+        rm main.aux main.tex main.log main.bbl main.blg main.out
+
+.PHONY: all clean paper
+```
+
+Pandoc 使用下面的标记:
+
+* `-s` 创建一个独立的 LaTeX 文档
+* `-F pandoc-crossref` 利用 `pandoc-crossref` 进行过滤
+* `--natbib` 用 `natbib`(你也可以选择 `--biblatex`)对参考书目进行渲染
+* `--template` 设置使用的模板文件
+* `-N` 为章节的标题编号
+* `-f` 和 `-t` 指定从哪个格式转换到哪个格式。`-t` 通常包含格式和 Pandoc 使用的扩展。在这个例子中,我们标明的 `raw_tex+tex_math_dollars+citations` 允许在 Markdown 中使用 `raw_tex` LaTeX。`tex_math_dollars` 让我们能够像在 LaTeX 中一样输入数学符号,`citations` 让我们可以使用 [这个扩展][9]。
+
+要从 LaTeX 产生 PDF,按照 [bibtex][10] 的指导处理参考书目:
+
+```
+@pdflatex main.tex &> /dev/null
+@bibtex main &> /dev/null
+@pdflatex main.tex &> /dev/null
+@pdflatex main.tex &> /dev/null
+```
+
+脚本用 `@` 忽略输出,并且重定向标准输出和错误到 `/dev/null`,因此我们在使用这些命令的可执行文件时不会看到任何的输出。
+
+最终的结果展示如下。这篇文章的库可以在 [GitHub][11] 找到:
+
+![](https://opensource.com/sites/default/files/uploads/abstract-image.png)
+
+### 结论
+
+在我看来,研究的重点是协作、思想的传播,以及在任何一个恰好存在的领域中改进现有的技术。许多计算机科学家和工程师使用 LaTeX 文档系统来写论文,它对数学提供了完美的支持。来自社会科学的研究人员似乎更喜欢 DOCX 文档。
+
+当身处不同社区的研究人员一同写一篇论文时,他们首先应该讨论一下他们将要使用哪种格式。然而如果包含太多的数学符号,DOCX 对于工程师来说不会是最简便的选择,LaTeX 对于缺乏编程经验的研究人员来说也有一些问题。就像这篇文章中展示的,Markdown 是一门工程师和社会科学家都很轻易能够使用的语言。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/9/pandoc-research-paper
+
+作者:[Kiko Fernandez-Reyes][a]
+选题:[lujun9972][b] 
+译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/kikofernandez +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Markdown +[2]: https://www.latex-project.org/ +[3]: https://pandoc.org/ +[4]: http://lierdakil.github.io/pandoc-crossref/ +[5]: https://dl.acm.org/citation.cfm?id=3270118 +[6]: https://github.com/kikofernandez/pandoc-examples/blob/master/research-paper/biblio.bib +[7]: https://en.wikibooks.org/wiki/LaTeX/Floats,_Figures_and_Captions#Figures +[8]: https://www.sharelatex.com/learn/latex/Creating_a_document_in_LaTeX#The_preamble_of_a_document +[9]: http://pandoc.org/MANUAL.html#citations +[10]: http://www.bibtex.org/Using/ +[11]: https://github.com/kikofernandez/pandoc-examples/tree/master/research-paper diff --git a/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md b/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md deleted file mode 100644 index beb6f372b9..0000000000 --- a/sources/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md +++ /dev/null @@ -1,134 +0,0 @@ -20 questions DevOps job candidates should be prepared to answer Translating by FelixYFZ -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hire-job-career.png?itok=SrZo0QJ3) -Hiring the wrong person is [expensive][1]. Recruiting, hiring, and onboarding a new employee can cost a company as much as $240,000, according to Jörgen Sundberg, CEO of Link Humans. When you make the wrong hire: - - * You lose what they know. - * You lose who they know. - * Your team could go into the [storming][2] phase of group development. - * Your company risks disorganization. - - - -When you lose an employee, you lose a piece of the fabric of the company. 
It's also worth mentioning the pain on the other end. The person hired into the wrong job may experience stress, feelings of overall dissatisfaction, and even health issues. - -On the other hand, when you get it right, your new hire will: - - * Enhance the existing culture, making your organization an even a better place to work. Studies show that a positive work culture helps [drive long-term financial performance][3] and that if you work in a happy environment, you’re more likely to do better in life. - * Love working with your organization. When people love what they do, they tend to do it well. - - - -Hiring to fit or enhance your existing culture is essential in DevOps and agile teams. That means hiring someone who can encourage effective collaboration so that individual contributors from varying backgrounds, and teams with different goals and working styles, can work together productively. Your new hire should help teams collaborate to maximize their value while also increasing employee satisfaction and balancing conflicting organizational goals. He or she should be able to choose tools and workflows wisely to complement your organization. Culture is everything. - -As a follow-up to our November 2017 post, [20 questions DevOps hiring managers should be prepared to answer][4], this article will focus on how to hire for the best mutual fit. - -### Why hiring goes wrong - -The typical hiring strategy many companies use today is based on a talent surplus: - - * Post on job boards. - * Focus on candidates with the skills they need. - * Find as many candidates as possible. - * Interview to weed out the weak. - * Conduct formal interviews to do more weeding. - * Assess, vote, and select. - * Close on compensation. 
- -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/hiring_graphic.png?itok=1udGbkhB) - -Job boards were invented during the Great Depression when millions of people were out of work and there was a talent surplus. There is no talent surplus in today's job market, yet we’re still using a hiring strategy that's based on one. - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/732px-unemployed_men_queued_outside_a_depression_soup_kitchen_opened_in_chicago_by_al_capone_02-1931_-_nara_-_541927.jpg?itok=HSs4NjCN) - -### Hire for mutual fit: Use culture and emotions - -The idea behind the talent surplus hiring strategy is to design jobs and then slot people into them. - -Instead, do the opposite: Find talented people who will positively add to your business culture, then find the best fit for them in a job they’ll love. To do this, you must be open to creating jobs around their passions. - -**Who is looking for a job?** According to a 2016 survey of more than 50,000 U.S. developers, [85.7% of respondents][5] were either not interested in new opportunities or were not actively looking for them. And of those who were looking, a whopping [28.3% of job discoveries][5] came from referrals by friends. If you’re searching only for people who are looking for jobs, you’re missing out on top talent. - -**Use your team to find and vet potential recruits**. For example, if Diane is a developer on your team, chances are she has [been coding for years][6] and has met fellow developers along the way who also love what they do. Wouldn’t you think her chances of vetting potential recruits for skills, knowledge, and intelligence would be higher than having someone from HR find and vet potential recruits? 
And before asking Diane to share her knowledge of fellow recruits, inform her of the upcoming mission, explain your desire to hire a diverse team of passionate explorers, and describe some of the areas where help will be needed in the future. - -**What do employees want?** A comprehensive study comparing the wants and needs of Millennials, GenX’ers, and Baby Boomers shows that within two percentage points, we all [want the same things][7]: - - 1. To make a positive impact on the organization - 2. To help solve social and/or environmental challenges - 3. To work with a diverse group of people - - - -### The interview challenge - -The interview should be a two-way conversation for finding a mutual fit between the person hiring and the person interviewing. Focus your interview on CQ ([Cultural Quotient][7]) and EQ ([Emotional Quotient][8]): Will this person reinforce and add to your culture and love working with you? Can you help make them successful at their job? - -**For the hiring manager:** Every interview is an opportunity to learn how your organization could become more irresistible to prospective team members, and every positive interview can be your best opportunity to finding talent, even if you don’t hire that person. Everyone remembers being interviewed if it is a positive experience. Even if they don’t get hired, they will talk about the experience with their friends, and you may get a referral as a result. There is a big upside to this: If you’re not attracting this talent, you have the opportunity to learn the reason and fix it. - -**For the interviewee** : Each interview experience is an opportunity to unlock your passions. - -### 20 questions to help you unlock the passions of potential hires - - 1. What are you passionate about? - - 2. What makes you think, "I can't wait to get to work this morning!” - - 3. What is the most fun you’ve ever had? - - 4. What is your favorite example of a problem you’ve solved, and how did you solve it? - - 5. 
How do you feel about paired learning? - - 6. What’s at the top of your mind when you arrive at, and leave, the office? - - 7. If you could have changed one thing in your previous/current job, what would it be? - - 8. What are you excited to learn while working here? - - 9. What do you aspire to in life, and how are you pursuing it? - - 10. What do you want, or feel you need, to learn to achieve these aspirations? - - 11. What values do you hold? - - 12. How do you live those values? - - 13. What does balance mean in your life? - - 14. What work interactions are you are most proud of? Why? - - 15. What type of environment do you like to create? - - 16. How do you like to be treated? - - 17. What do you trust vs. verify? - - 18. Tell me about a recent learning you had when working on a project. - - 19. What else should we know about you? - - 20. If you were hiring me, what questions would you ask me? - - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/questions-devops-employees-should-answer - -作者:[Catherine Louis][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/catherinelouis -[1]:https://www.shrm.org/resourcesandtools/hr-topics/employee-relations/pages/cost-of-bad-hires.aspx -[2]:https://en.wikipedia.org/wiki/Tuckman%27s_stages_of_group_development -[3]:http://www.forbes.com/sites/johnkotter/2011/02/10/does-corporate-culture-drive-financial-performance/ -[4]:https://opensource.com/article/17/11/inclusive-workforce-takes-work -[5]:https://insights.stackoverflow.com/survey/2016#work-job-discovery -[6]:https://research.hackerrank.com/developer-skills/2018/ -[7]:http://www-935.ibm.com/services/us/gbs/thoughtleadership/millennialworkplace/ -[8]:https://en.wikipedia.org/wiki/Emotional_intelligence diff --git a/sources/talk/20180409 
5 steps to building a cloud that meets your users- needs.md b/sources/talk/20180409 5 steps to building a cloud that meets your users- needs.md index 9ba926c722..db17eca751 100644 --- a/sources/talk/20180409 5 steps to building a cloud that meets your users- needs.md +++ b/sources/talk/20180409 5 steps to building a cloud that meets your users- needs.md @@ -1,3 +1,4 @@ +Translating by FelixYFZ 5 steps to building a cloud that meets your users' needs ====== diff --git a/sources/talk/20180805 Where Vim Came From.md b/sources/talk/20180805 Where Vim Came From.md index d3cf1abe82..88cf579d00 100644 --- a/sources/talk/20180805 Where Vim Came From.md +++ b/sources/talk/20180805 Where Vim Came From.md @@ -1,3 +1,5 @@ +thecyanbird translating + Where Vim Came From ====== I recently stumbled across a file format known as Intel HEX. As far as I can gather, Intel HEX files (which use the `.hex` extension) are meant to make binary images less opaque by encoding them as lines of hexadecimal digits. Apparently they are used by people who program microcontrollers or need to burn data into ROM. In any case, when I opened up a HEX file in Vim for the first time, I discovered something shocking. Here was this file format that, at least to me, was deeply esoteric, but Vim already knew all about it. Each line of a HEX file is a record divided into different fields—Vim had gone ahead and colored each of the fields a different color. `set ft?` I asked, in awe. `filetype=hex`, Vim answered, triumphant. diff --git a/sources/talk/20181019 What is an SRE and how does it relate to DevOps.md b/sources/talk/20181019 What is an SRE and how does it relate to DevOps.md deleted file mode 100644 index 7093b36cd5..0000000000 --- a/sources/talk/20181019 What is an SRE and how does it relate to DevOps.md +++ /dev/null @@ -1,71 +0,0 @@ -translating by belitex - -What is an SRE and how does it relate to DevOps? -====== -The SRE role is common in large enterprises, but smaller businesses need it, too. 
- -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP) - -Even though the site reliability engineer (SRE) role has become prevalent in recent years, many people—even in the software industry—don't know what it is or does. This article aims to clear that up by explaining what an SRE is, how it relates to DevOps, and how an SRE works when your entire engineering organization can fit in a coffee shop. - -### What is site reliability engineering? - -[Site Reliability Engineering: How Google Runs Production Systems][1], written by a group of Google engineers, is considered the definitive book on site reliability engineering. Google vice president of engineering Ben Treynor Sloss [coined the term][2] back in the early 2000s. He defined it as: "It's what happens when you ask a software engineer to design an operations function." - -Sysadmins have been writing code for a long time, but for many of those years, a team of sysadmins managed many machines manually. Back then, "many" may have been dozens or hundreds, but when you scale to thousands or hundreds of thousands of hosts, you simply can't continue to throw people at the problem. When the number of machines gets that large, the obvious solution is to use code to manage hosts (and the software that runs on them). - -Also, until fairly recently, the operations team was completely separate from the developers. The skillsets for each job were considered completely different. The SRE role tries to bring both jobs together. - -Before we dig deeper into what makes an SRE and how SREs work with the development team, we need to understand how site reliability engineering works within the DevOps paradigm. - -### Site reliability engineering and DevOps - -At its core, site reliability engineering is an implementation of the DevOps paradigm. There seems to be a wide array of ways to [define DevOps][3]. 
The traditional model, where the development ("devs") and operations ("ops") teams were separated, led to the team that writes the code not being responsible for how it works when customers start using it. The development team would "throw the code over the wall" to the operations team to install and support. - -This situation can lead to a significant amount of dysfunction. The goals of the dev and ops teams are constantly at odds—a developer wants customers to use the "latest and greatest" piece of code, but the operations team wants a steady system with as little change as possible. Their premise is that any change can introduce instability, while a system with no changes should continue to behave in the same manner. (Noting that minimizing change on the software side is not the only factor in preventing instability is important. For example, if your web application stays exactly the same, but the number of customers grows by 10x, your application may break in many different ways.) - -The premise of DevOps is that by merging these two distinct jobs into one, you eliminate contention. If the "dev" wants to deploy new code all the time, they have to deal with any fallout the new code creates. As Amazon's [Werner Vogels said][4], "you build it, you run it" (in production). But developers already have a lot to worry about. They are continually pushed to develop new features for their employer's products. Asking them to understand the infrastructure, including how to deploy, configure, and monitor their service, may be asking a little too much from them. This is where an SRE steps in. - -When a web application is developed, there are often many people that contribute. There are user interface designers, graphic designers, frontend engineers, backend engineers, and a whole host of other specialties (depending on the technologies used). Requirements include how the code gets managed (e.g., deployed, configured, monitored)—which are the SRE's areas of specialty. 
But, just as an engineer developing a nice look and feel for an application benefits from knowledge of the backend-engineer's job (e.g., how data is fetched from a database), the SRE understands how the deployment system works and how to adapt it to the specific needs of that particular codebase or project. - -So, an SRE is not just "an ops person who codes." Rather, the SRE is another member of the development team with a different set of skills particularly around deployment, configuration management, monitoring, metrics, etc. But, just as an engineer developing a nice look and feel for an application must know how data is fetched from a data store, an SRE is not singly responsible for these areas. The entire team works together to deliver a product that can be easily updated, managed, and monitored. - -The need for an SRE naturally comes about when a team is implementing DevOps but realizes they are asking too much of the developers and need a specialist for what the ops team used to handle. - -### How the SRE works at a startup - -This is great when there are hundreds of employees (let alone when you are the size of Google or Facebook). Large companies have SRE teams that are split up and embedded into each development team. But a startup doesn't have those economies of scale, and engineers often wear many hats. So, where does the "SRE hat" sit in a small company? One approach is to fully adopt DevOps and have the developers be responsible for the typical tasks an SRE would perform at a larger company. On the other side of the spectrum, you hire specialists — a.k.a., SREs. - -The most obvious advantage of trying to put the SRE hat on a developer's head is it scales well as your team grows. Also, the developer will understand all the quirks of the application. But many startups use a wide variety of SaaS products to power their infrastructure. The most obvious is the infrastructure platform itself. 
Then you add in metrics systems, site monitoring, log analysis, containers, and more. While these technologies solve some problems, they create an additional complexity cost. The developer would need to understand all those technologies and services in addition to the core technologies (e.g., languages) the application uses. In the end, keeping on top of all of that technology can be overwhelming. - -The other option is to hire a specialist to handle the SRE job. Their responsibility would be to focus on deployment, configuration, monitoring, and metrics, freeing up the developer's time to write the application. The disadvantage is that the SRE would have to split their time between multiple, different applications (i.e., the SRE needs to support the breadth of applications throughout engineering). This likely means they may not have the time to gain any depth of knowledge of any of the applications; however, they would be in a position to see how all the different pieces fit together. This "30,000-foot view" can help prioritize the weak spots to fix in the system as a whole. - -There is one key piece of information I am ignoring: your other engineers. They may have a deep desire to understand how deployment works and how to use the metrics system to the best of their ability. Also, hiring an SRE is not an easy task. You are looking for a mix of sysadmin skills and software engineering skills. (I am specific about software engineers, vs. just "being able to code," because software engineering involves more than just writing code [e.g., writing good tests or documentation].) - -Therefore, in some cases, it may make more sense for the "SRE hat" to live on a developer's head. If so, keep an eye on the amount of complexity in both the code and the infrastructure (SaaS or internal). At some point, the complexity on either end will likely push toward more specialization. 
- -### Conclusion - -An SRE team is one of the most efficient ways to implement the DevOps paradigm in a startup. I have seen a couple of different approaches, but I believe that hiring a dedicated SRE (pretty early) at your startup will free up time for the developers to focus on their specific challenges. The SRE can focus on improving the tools (and processes) that make the developers more productive. Also, an SRE will focus on making sure your customers have a product that is reliable and secure. - -Craig Sebenik will present [SRE (and DevOps) at a Startup][5] at [LISA18][6], October 29-31 in Nashville, Tennessee. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/sre-startup - -作者:[Craig Sebenik][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/craig5 -[b]: https://github.com/lujun9972 -[1]: http://shop.oreilly.com/product/0636920041528.do -[2]: https://landing.google.com/sre/interview/ben-treynor.html -[3]: https://opensource.com/resources/devops -[4]: https://queue.acm.org/detail.cfm?id=1142065 -[5]: https://www.usenix.org/conference/lisa18/presentation/sebenik -[6]: https://www.usenix.org/conference/lisa18 diff --git a/sources/talk/20181025 What breaks our systems- A taxonomy of black swans.md b/sources/talk/20181025 What breaks our systems- A taxonomy of black swans.md new file mode 100644 index 0000000000..376809b08b --- /dev/null +++ b/sources/talk/20181025 What breaks our systems- A taxonomy of black swans.md @@ -0,0 +1,134 @@ +translating by belitex + +What breaks our systems: A taxonomy of black swans +====== + +Find and fix outlier events that create issues before they trigger severe production problems. 
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/black-swan-pair_0.png?itok=MkshwqVg)
+
+Black swans are a metaphor for outlier events that are severe in impact (like the 2008 financial crash). In production systems, these are the incidents that trigger problems that you didn't know you had, cause major visible impact, and can't be fixed quickly and easily by a rollback or some other standard response from your on-call playbook. They are the events you tell new engineers about years after the fact.
+
+Black swans, by definition, can't be predicted, but sometimes there are patterns we can find and use to create defenses against categories of related problems.
+
+For example, a large proportion of failures are a direct result of changes (code, environment, or configuration). Each bug triggered in this way is distinctive and unpredictable, but the common practice of canarying all changes is somewhat effective against this class of problems, and automated rollbacks have become a standard mitigation.
+
+As our profession continues to mature, other kinds of problems are becoming well-understood classes of hazards with generalized prevention strategies.
+
+### Black swans observed in the wild
+
+All technology organizations have production problems, but not all of them share their analyses. The organizations that publicly discuss incidents are doing us all a service. The following incidents each describe one class of problem and are by no means isolated instances. We all have black swans lurking in our systems; it's just that some of us don't know it yet.
+
+#### Hitting limits
+
+Running headlong into any sort of limit can produce very severe incidents. A canonical example of this was [Instapaper's outage in February 2017][1]. I challenge any engineer who has carried a pager to read the outage report without a chill running up their spine. 
Instapaper's production database was on a filesystem that, unknown to the team running the service, had a 2TB limit. With no warning, it stopped accepting writes. Full recovery took days and required migrating its database.
+
+Limits can strike in various ways. Sentry hit [limits on maximum transaction IDs in Postgres][2]. Platform.sh hit [size limits on a pipe buffer][3]. SparkPost [triggered AWS's DDoS protection][4]. Foursquare hit a performance cliff when one of its [datastores ran out of RAM][5].
+
+One way to get advance knowledge of system limits is to test periodically. Good load testing (on a production replica) ought to involve write transactions and should involve growing each datastore beyond its current production size. It's easy to forget to test things that aren't your main datastores (such as Zookeeper). If you hit limits during testing, you have time to fix the problems. Given that resolution of limits-related issues can involve major changes (like splitting a datastore), time is invaluable.
+
+When it comes to cloud services, if your service generates unusual loads or uses less widely used products or features (such as older or newer ones), you may be more at risk of hitting limits. It's worth load testing these, too. But warn your cloud provider first.
+
+Finally, where limits are known, add monitoring (with associated documentation) so you will know when your systems are approaching those ceilings. Don't rely on people still being around to remember.
+
+#### Spreading slowness
+
+> "The world is much more correlated than we give credit to. And so we see more of what Nassim Taleb calls 'black swan events'—rare events happen more often than they should because the world is more correlated."
+> —[Richard Thaler][6]
+
+HostedGraphite's postmortem on how an [AWS outage took down its load balancers][7] (which are not hosted on AWS) is a good example of just how much correlation exists in distributed computing systems. 
In this case, the load-balancer connection pools were saturated by slow connections from customers that were hosted in AWS. The same kinds of saturation can happen with application threads, locks, and database connections—any kind of resource monopolized by slow operations. + +HostedGraphite's incident is an example of externally imposed slowness, but often slowness can result from saturation somewhere in your own system creating a cascade and causing other parts of your system to slow down. An [incident at Spotify][8] demonstrates such spread—the streaming service's frontends became unhealthy due to saturation in a different microservice. Enforcing deadlines for all requests, as well as limiting the length of request queues, can prevent such spread. Your service will serve at least some traffic, and recovery will be easier because fewer parts of your system will be broken. + +Retries should be limited with exponential backoff and some jitter. An outage at Square, in which its [Redis datastore became overloaded][9] due to a piece of code that retried failed transactions up to 500 times with no backoff, demonstrates the potential severity of excessive retries. The [Circuit Breaker][10] design pattern can be helpful here, too. + +Dashboards should be designed to clearly show [utilization, saturation, and errors][11] for all resources so problems can be found quickly. + +#### Thundering herds + +Often, failure scenarios arise when a system is under unusually heavy load. This can arise organically from users, but often it arises from systems. A surge of cron jobs that starts at midnight is a venerable example. Mobile clients can also be a source of coordinated demand if they are programmed to fetch updates at the same time (of course, it is much better to jitter such requests). + +Events occurring at pre-configured times aren't the only source of thundering herds. 
Slack experienced [multiple outages][12] over a short time due to large numbers of clients being disconnected and immediately reconnecting, causing large spikes of load. CircleCI saw a [severe outage][13] when a GitLab outage ended, leading to a surge of builds queued in its database, which became saturated and very slow.
+
+Almost any service can be the target of a thundering herd. Planning for such eventualities—and testing that your plan works as intended—is therefore a must. Client backoff and [load shedding][14] are often core to such approaches.
+
+If your systems must constantly ingest data that can't be dropped, it's key to have a scalable way to buffer this data in a queue for later processing.
+
+#### Automation systems are complex systems
+
+> "Complex systems are intrinsically hazardous systems."
+> —[Richard Cook, MD][15]
+
+The trend for the past several years has been strongly towards more automation of software operations. Automation of anything that can reduce your system's capacity (e.g., erasing disks, decommissioning devices, taking down serving jobs) needs to be done with care. Accidents (due to bugs or incorrect invocations) with this kind of automation can take down your system very efficiently, potentially in ways that are hard to recover from. 
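One inexpensive defense against runaway destructive automation is to route every capacity-reducing operation through a guard that refuses to touch more than a small fraction of the fleet per time window. A minimal sketch of the idea (hypothetical names; a real implementation would need durable, centralized state rather than in-process bookkeeping):

```python
import time

class DestructionGuard:
    """Refuses destructive operations that would affect more than
    max_fraction of the fleet within a sliding time window."""

    def __init__(self, fleet_size, max_fraction=0.05, window_s=3600):
        self.fleet_size = fleet_size
        self.max_fraction = max_fraction
        self.window_s = window_s
        self.log = []  # (timestamp, host) of recently allowed destructive ops

    def allow(self, host, now=None):
        now = time.time() if now is None else now
        # Drop entries that have aged out of the sliding window.
        self.log = [(t, h) for (t, h) in self.log if now - t < self.window_s]
        if (len(self.log) + 1) / self.fleet_size > self.max_fraction:
            return False  # over budget: a human must approve further damage
        self.log.append((now, host))
        return True

guard = DestructionGuard(fleet_size=100, max_fraction=0.05)
allowed = [guard.allow(f"host{i}", now=1000.0) for i in range(10)]
print(allowed.count(True))  # prints 5: only 5% of the fleet can be erased per hour
```

A caller (a disk-erase script, say) would check `allow()` before each destructive step, so a buggy loop stalls after hitting the budget instead of wiping the fleet.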
+ +Christina Schulman and Etienne Perot of Google describe some examples in their talk [Help Protect Your Data Centers with Safety Constraints][16]. One incident sent Google's entire in-house content delivery network (CDN) to disk-erase. + +Schulman and Perot suggest using a central service to manage constraints, which limits the pace at which destructive automation can operate, and being aware of system conditions (for example, avoiding destructive operations if the service has recently had an alert). + +Automation systems can also cause havoc when they interact with operators (or with other automated systems). [Reddit][17] experienced a major outage when its automation restarted a system that operators had stopped for maintenance. Once you have multiple automation systems, their potential interactions become extremely complex and impossible to predict. + +It will help to deal with the inevitable surprises if all this automation writes logs to an easily searchable, central place. Automation systems should always have a mechanism to allow them to be quickly turned off (fully or only for a subset of operations or targets). + +### Defense against the dark swans + +These are not the only black swans that might be waiting to strike your systems. There are many other kinds of severe problem that can be avoided using techniques such as canarying, load testing, chaos engineering, disaster testing, and fuzz testing—and of course designing for redundancy and resiliency. Even with all that, at some point your system will fail. + +To ensure your organization can respond effectively, make sure your key technical staff and your leadership have a way to coordinate during an outage. For example, one unpleasant issue you might have to deal with is a complete outage of your network. It's important to have a fail-safe communications channel completely independent of your own infrastructure and its dependencies. 
For instance, if you run on AWS, using a service that also runs on AWS as your fail-safe communication method is not a good idea. A phone bridge or an IRC server that runs somewhere separate from your main systems is good. Make sure everyone knows what the communications platform is and practices using it.
+
+Another principle is to ensure that your monitoring and your operational tools rely on your production systems as little as possible. Separate your control and your data planes so you can make changes even when systems are not healthy. Don't use a single message queue for both data processing and config changes or monitoring, for example—use separate instances. In [SparkPost: The Day the DNS Died][4], Jeremy Blosser presents an example where critical tools relied on the production DNS setup, which failed.
+
+### The psychology of battling the black swan
+
+Dealing with major incidents in production can be stressful. It really helps to have a structured incident-management process in place for these situations. Many technology organizations ([including Google][18]) successfully use a version of FEMA's Incident Command System. There should be a clear way for any on-call individual to call for assistance in the event of a major problem they can't resolve alone.
+
+For long-running incidents, it's important to make sure people don't work for unreasonable lengths of time and get breaks to eat and sleep (uninterrupted by a pager). It's easy for exhausted engineers to make a mistake or overlook something that might resolve the incident faster. 
+ +### Learn more + +There are many other things that could be said about black (or formerly black) swans and strategies for dealing with them. If you'd like to learn more, I highly recommend these two books dealing with resilience and stability in production: Susan Fowler's [Production-Ready Microservices][19] and Michael T. Nygard's [Release It!][20]. + +Laura Nolan will present [What Breaks Our Systems: A Taxonomy of Black Swans][21] at [LISA18][22], October 29-31 in Nashville, Tennessee, USA. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/taxonomy-black-swans + +作者:[Laura Nolan][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lauranolan +[b]: https://github.com/lujun9972 +[1]: https://medium.com/making-instapaper/instapaper-outage-cause-recovery-3c32a7e9cc5f +[2]: https://blog.sentry.io/2015/07/23/transaction-id-wraparound-in-postgres.html +[3]: https://medium.com/@florian_7764/technical-post-mortem-of-the-august-incident-82ab4c3d6547 +[4]: https://www.usenix.org/conference/srecon18americas/presentation/blosser +[5]: https://groups.google.com/forum/#!topic/mongodb-user/UoqU8ofp134 +[6]: https://en.wikipedia.org/wiki/Richard_Thaler +[7]: https://blog.hostedgraphite.com/2018/03/01/spooky-action-at-a-distance-how-an-aws-outage-ate-our-load-balancer/ +[8]: https://labs.spotify.com/2013/06/04/incident-management-at-spotify/ +[9]: https://medium.com/square-corner-blog/incident-summary-2017-03-16-2f65be39297 +[10]: https://en.wikipedia.org/wiki/Circuit_breaker_design_pattern +[11]: http://www.brendangregg.com/usemethod.html +[12]: https://slackhq.com/this-was-not-normal-really +[13]: https://circleci.statuspage.io/incidents/hr0mm9xmm3x6 +[14]: https://www.youtube.com/watch?v=XNEIkivvaV4 +[15]: 
https://web.mit.edu/2.75/resources/random/How%20Complex%20Systems%20Fail.pdf +[16]: https://www.usenix.org/conference/srecon18americas/presentation/schulman +[17]: https://www.reddit.com/r/announcements/comments/4y0m56/why_reddit_was_down_on_aug_11/ +[18]: https://landing.google.com/sre/book/chapters/managing-incidents.html +[19]: http://shop.oreilly.com/product/0636920053675.do +[20]: https://www.oreilly.com/library/view/release-it/9781680500264/ +[21]: https://www.usenix.org/conference/lisa18/presentation/nolan +[22]: https://www.usenix.org/conference/lisa18 diff --git a/sources/talk/20181026 Directing traffic- Demystifying internet-scale load balancing.md b/sources/talk/20181026 Directing traffic- Demystifying internet-scale load balancing.md new file mode 100644 index 0000000000..6ebcba69e3 --- /dev/null +++ b/sources/talk/20181026 Directing traffic- Demystifying internet-scale load balancing.md @@ -0,0 +1,108 @@ +Directing traffic: Demystifying internet-scale load balancing +====== +Common techniques used to balance network traffic come with advantages and trade-offs. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/traffic-light-go.png?itok=nC_851ys) +Large, multi-site, internet-facing systems, including content-delivery networks (CDNs) and cloud providers, have several options for balancing traffic coming onto their networks. In this article, we'll describe common traffic-balancing designs, including techniques and trade-offs. + +If you were an early cloud computing provider, you could take a single customer web server, assign it an IP address, configure a domain name system (DNS) record to associate it with a human-readable name, and advertise the IP address via the border gateway protocol (BGP), the standard way of exchanging routing information between networks. 
+ +It wasn't load balancing per se, but there probably was load distribution across redundant network paths and networking technologies to increase availability by routing around unavailable infrastructure (giving rise to phenomena like [asymmetric routing][1]). + +### Doing simple DNS load balancing + +As traffic to your customer's service grows, the business' owners want higher availability. You add a second web server with its own publicly accessible IP address and update the DNS record to direct users to both web servers (hopefully somewhat evenly). This is OK for a while until one web server unexpectedly goes offline. Assuming you detect the failure quickly, you can update the DNS configuration (either manually or with software) to stop referencing the broken server. + +Unfortunately, because DNS records are cached, around 50% of requests to the service will likely fail until the record expires from the client caches and those of other nameservers in the DNS hierarchy. DNS records generally have a time to live (TTL) of several minutes or more, so this can create a significant impact on your system's availability. + +Worse, some proportion of clients ignore TTL entirely, so some requests will be directed to your offline web server for some time. Setting very short DNS TTLs is not a great idea either; it means higher load on DNS services plus increased latency because clients will have to perform DNS lookups more often. If your DNS service is unavailable for any reason, access to your service will degrade more quickly with a shorter TTL because fewer clients will have your service's IP address cached. + +### Adding network load balancing + +To work around this problem, you can add a redundant pair of [Layer 4][2] (L4) network load balancers that serve the same virtual IP (VIP) address. They could be hardware appliances or software balancers like [HAProxy][3]. This means the DNS record points only at the VIP and no longer does load balancing. 
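The failure mode that motivates the move to a shared VIP can be sketched with a toy simulation (hypothetical RFC 5737 addresses): each client keeps using whichever cached A record it holds until the TTL expires, so when one of two servers dies, roughly half of the traffic keeps hitting the dead address.

```python
import random

random.seed(1)

SERVERS = ["198.51.100.10", "198.51.100.11"]  # two cached A records
DEAD = "198.51.100.11"                        # this server just crashed

# Each client resolved the name earlier and cached one of the two
# records; until its TTL expires, it keeps sending requests there.
cached = [random.choice(SERVERS) for _ in range(10_000)]
failures = sum(addr == DEAD for addr in cached)
print(f"{failures / len(cached):.0%} of requests fail until the TTL expires")
```

With an L4 balancer pair fronting a single VIP, the same crash simply drops the dead backend from the pool after a failed health check, with no client-side cache to wait out.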
+ +![Layer 4 load balancers balance connections across webservers.][5] + +Layer 4 load balancers balance connections from users across two webservers. + +The L4 balancers load-balance traffic from the internet to the backend servers. This is generally done based on a hash (a mathematical function) of each IP packet's 5-tuple: the source and destination IP address and port plus the protocol (such as TCP or UDP). This is fast and efficient (and still maintains essential properties of TCP) and doesn't require the balancers to maintain state per connection. (For more information, [Google's paper on Maglev][6] discusses implementation of a software L4 balancer in significant detail.) + +The L4 balancers can do health-checking and send traffic only to web servers that pass checks. Unlike in DNS balancing, there is minimal delay in redirecting traffic to another web server if one crashes, although existing connections will be reset. + +L4 balancers can do weighted balancing, dealing with backends with varying capacity. L4 balancing gives significant power and flexibility to operators while being relatively inexpensive in terms of computing power. + +### Going multi-site + +The system continues to grow. Your customers want to stay up even if your data center goes down. You build a new data center with its own set of service backends and another cluster of L4 balancers, which serve the same VIP as before. The DNS setup doesn't change. + +The edge routers in both sites advertise address space, including the service VIP. Requests sent to that VIP can reach either site, depending on how each network between the end user and the system is connected and how their routing policies are configured. This is known as anycast. Most of the time, this works fine. If one site isn't operating, you can stop advertising the VIP for the service via BGP, and traffic will quickly move to the alternative site. 
+
+![Serving from multiple sites using anycast][8]
+
+Serving from multiple sites using anycast.
+
+This setup has several problems. Its worst failing is that you can't control where traffic flows or limit how much traffic is sent to a given site. You also don't have an explicit way to route users to the nearest site (in terms of network latency), but the network protocols and configurations that determine the routes should, in most cases, route requests to the nearest site.
+
+### Controlling inbound requests in a multi-site system
+
+To maintain stability, you need to be able to control how much traffic is served to each site. You can get that control by assigning a different VIP to each site and using DNS to balance across them with simple or weighted [round-robin][9].
+
+![Serving from multiple sites using a primary VIP][11]
+
+Serving from multiple sites using a primary VIP per site, backed up by secondary sites, with geo-aware DNS.
+
+You now have two new problems.
+
+First, using DNS balancing means you have cached records, which is not good if you need to redirect traffic quickly.
+
+Second, whenever users do a fresh DNS lookup, a VIP connects them to the service at an arbitrary site, which may not be the closest site to them. If your service runs on widely separated sites, individual users will experience wide variations in your system's responsiveness, depending upon the network latency between them and the instance of your service they are using.
+
+You can solve the first problem by having each site constantly advertise and serve the VIPs for all the other sites (and consequently the VIP for any faulty site). Networking tricks (such as advertising less-specific routes from the backups) can ensure that the VIP's primary site is preferred, as long as it is available. This is done via BGP, so we should see traffic move within a minute or two of updating BGP. 
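The weighted round-robin DNS balancing mentioned above can be sketched as a resolver that hands out each site's VIP in proportion to an integer weight (hypothetical VIPs from the RFC 5737 documentation ranges; real authoritative DNS servers implement this natively):

```python
import itertools

def weighted_round_robin(vips_with_weights):
    """Yield VIPs in proportion to their integer weights."""
    expanded = [vip for vip, weight in vips_with_weights for _ in range(weight)]
    return itertools.cycle(expanded)

# Site A gets three lookups for every one that site B gets.
rr = weighted_round_robin([("192.0.2.10", 3), ("198.51.100.10", 1)])
answers = [next(rr) for _ in range(8)]
print(answers.count("192.0.2.10"))  # prints 6: the heavier site gets 3/4 of answers
```

Adjusting a weight shifts the share of *new* lookups to a site; it does nothing for clients still holding cached records, which is exactly the caching problem noted above.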
+ +There isn't an elegant solution to the problem of serving users from sites other than the nearest healthy site with capacity. Many large internet-facing services use DNS services that attempt to return different results to users in different locations, with some degree of success. This approach is always somewhat [complex and error-prone][12], given that internet-addressing schemes are not organized geographically, blocks of addresses can change locations (e.g., when a company reorganizes its network), and many end users can be served from a single caching nameserver. + +### Adding Layer 7 load balancing + +Over time, your customers begin to ask for more advanced features. + +While L4 load balancers can efficiently distribute load among multiple web servers, they operate only on source and destination IP addresses, protocol, and ports. They don't know anything about the content of a request, so you can't implement many advanced features in an L4 balancer. Layer 7 (L7) load balancers are aware of the structure and contents of requests and can do far more. + +Some things that can be implemented in L7 load balancers are caching, rate limiting, fault injection, and cost-aware load balancing (some requests require much more server time to process). + +They can also balance based on a request's attributes (e.g., HTTP cookies), terminate SSL connections, and help defend against application layer denial-of-service (DoS) attacks. The downside of L7 balancers at scale is cost—they do more computation to process requests, and each active request consumes some system resources. Running L4 balancers in front of one or more pools of L7 balancers can help with scaling. + +### Conclusion + +Load balancing is a difficult and complex problem. 
In addition to the strategies described in this article, there are different [load-balancing algorithms][13], high-availability techniques used to implement load balancers, client load-balancing techniques, and the recent rise of service meshes.
+
+Core load-balancing patterns have evolved alongside the growth of cloud computing, and they will continue to develop as large web services work to improve the control and flexibility that load-balancing techniques offer.
+
+Laura Nolan and Murali Suriar will present [Keeping the Balance: Load Balancing Demystified][14] at [LISA18][15], October 29-31 in Nashville, Tennessee, USA.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/internet-scale-load-balancing
+
+作者:[Laura Nolan][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/lauranolan
+[b]: https://github.com/lujun9972
+[1]: https://www.noction.com/blog/bgp-and-asymmetric-routing
+[2]: https://en.wikipedia.org/wiki/Transport_layer
+[3]: https://www.haproxy.com/blog/failover-and-worst-case-management-with-haproxy/
+[4]: /file/412596
+[5]: https://opensource.com/sites/default/files/uploads/loadbalancing1_l4-network-loadbalancing.png (Layer 4 load balancers balance connections across webservers.) 
+[6]: https://ai.google/research/pubs/pub44824 +[7]: /file/412601 +[8]: https://opensource.com/sites/default/files/uploads/loadbalancing2_going-multisite.png (Serving from multiple sites using anycast) +[9]: https://en.wikipedia.org/wiki/Round-robin_scheduling +[10]: /file/412606 +[11]: https://opensource.com/sites/default/files/uploads/loadbalancing3_controlling-inbound-requests.png (Serving from multiple sites using a primary VIP) +[12]: https://landing.google.com/sre/book/chapters/load-balancing-frontend.html +[13]: https://medium.com/netflix-techblog/netflix-edge-load-balancing-695308b5548c +[14]: https://www.usenix.org/conference/lisa18/presentation/suriar +[15]: https://www.usenix.org/conference/lisa18 diff --git a/sources/talk/20181031 3 scary sysadmin stories.md b/sources/talk/20181031 3 scary sysadmin stories.md new file mode 100644 index 0000000000..6810012f57 --- /dev/null +++ b/sources/talk/20181031 3 scary sysadmin stories.md @@ -0,0 +1,124 @@ +3 scary sysadmin stories +====== + +Terrifying ghosts are hanging around every data center, just waiting to haunt the unsuspecting sysadmin. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/spooky_halloween_haunted_house.jpg?itok=UkRBeItZ) + +> "It's all just a bunch of hocus pocus!" — Max in [Hocus Pocus][1] + +Over my many years as a system administrator, I've heard many horror stories about the different ghosts that have haunted new admins due to their inexperience. + +Here are three of the stories that stand out to me the most in helping build my character as a good sysadmin. + +### The ghost of the failed restore + +In a well-known data center (whose name I do not want to remember), one cold October night we had a production outage in which thousands of web servers stopped responding due to downtime in the main database. 
The database administrator asked me, the rookie sysadmin, to recover the database's last full backup and restore it to bring the service back online. + +But, at the end of the process, the database was still broken. I didn't worry, because there were other full backup files in stock. However, even after doing the process several times, the result didn't change. + +With great fear, I asked the senior sysadmin what to do to fix this behavior. + +"You remember when I showed you, a few days ago, how the full backup script was running? Something about how important it was to validate the backup?" responded the sysadmin. + +"Of course! You told me that I had to stay a couple of extra hours to perform that task," I answered. + +"Exactly! But you preferred to leave early without finishing that task," he said. + +"Oh my! I thought it was optional!" I exclaimed. + +"It was, it was…" + +**Moral of the story:** Even with the best solution that promises to make the most thorough backups, the ghost of the failed restoration can appear, darkening our job skills, if we don't make a habit of validating the backup every time. + +### The dark window + +Once upon a night watch, reflecting I was, lonely and tired, +Looking at the file window on my screen. +Clicking randomly, nearly napping, suddenly came a beeping +From some server, sounding gently, sounding on my pager. +"It's just a warning," I muttered, "sounding on my pager— +Only this and nothing more." +Soon again I heard a beeping somewhat louder than before. +Opening my pager with great disdain, +There was the message from a server of the saintly days of yore: +"The legacy application, it's down, doesn't respond," and nothing more. +There were many stories of this server, +Incredibly, almost terrified, +I went down to the data center to review it. +I sat engaged in guessing, what would be the console to restart it +Without keyboard, mouse, or monitor? +"The task level up"—I think—"only this and nothing more." 
+Then, thinking, "In another rack, I saw a similar server, +I'll take its monitor and keyboard, nothing bad." +Suddenly, this server shut down, and my pager beeped again: +"The legacy application, it's down, doesn't respond", and nothing more. +Bemused, I sat down to call my sysadmin mentor: +"I wanted to use the console of another server, and now both are out." +"Did you follow my advice? Don't use the graphics console, the terminal is better." +Of course, I remember, it was last December; +I felt fear, a horror that I had never felt before; +"It is a tool of the past and nothing more." +With great shame I understood my mistake: +"Master," I said, "truly, your forgiveness I implore; +but the fact is I thought it was not used anymore. +A dark window and nothing more." +"Learn it well, little kid," he spoke. +"In the terminal you can trust, it's your friend and much, much more." +Step by step, my master showed me to connect with the terminal, +And restarting each one +With infinite patience, he taught me +That from that dark window I should not separate +Never, nevermore. + +**Moral of the story:** Fluency in the command-line terminal is a skill often abandoned and considered archaic by newer generations, but it improves your flexibility and productivity as a sysadmin in obvious and subtle ways. + +### Troll bridge + +I'd been a sysadmin for three or four years when one of my old mentors was removed from work. The older man was known for making fun of the new guys in the group—the ones who brought from the university the desire to improve processes with the newly released community operating system. My manager assigned me the older man's office, a small space under the access stairs to the data center—"Troll Bridge," they called it—and the few legacy servers he still managed. + +While reviewing those legacy servers, I realized most of them had many scripts that did practically all the work. 
I just had to check that they did not go offline due to an electrical failure. I started using those methods, adapting them so my own servers would work the same way, making my tasks more efficient and, at the same time, requiring less of my time to complete them. My day soon became surfing the internet, watching funny videos, and even participating in internet forums. + +A couple of years went by, and I maintained my work in the same way. When a new server arrived, I automated its tasks so I could free myself and continue with my usual participation in internet forums. One day, when I shared one of my scripts in the internet forum, a new admin told me I could simplify it using one novelty language, a new trend that was becoming popular among the new folks. + +"I am a sysadmin, not a programmer," I answered. "They will never be the same." + +From that day on, I dedicated myself to ridiculing the kids who told me I should program in the new languages. + +"You do not know, newbie," I answered every time, "this job will never change." + +A few years later, my responsibilities increased, and my manager wanted me to modify the code of the applications hosted on my server. + +"That's what the job is about now," said my manager. "Development and operations are joining; if you're not willing to do it, we'll bring in some guy who does." + +"I will never do it, it's not my role," I said. + +"Well then…" he said, looking at me harshly. + +I've been here ever since. Hiding. Waiting. Under my bridge. + +I watch from the shadows as the people pass: up the stairs, muttering, or talking about the things the new applications do. Sometimes people pause beneath my bridge, to talk, or share code, or make plans. And I watch them, but they don't see me. + +I'm just going to stay here, in the darkness under the bridge. I can hear you all out there, everything you say. + +Oh yes, I can hear you. +But I'm not coming out. 
+
+**Moral of the story:** "The lazy sysadmin is the best sysadmin" is a well-known phrase that means if we are proactive enough to automate all our processes properly, we will have a lot of free time. The best sysadmins never seem to be very busy; they prefer to be relaxed and let the system do the work for them. "Work smarter, not harder." However, if we don't use this free time productively, we can fall into obsolescence and become something we do not want. The best sysadmins reinvent themselves constantly; they are always researching and learning.
+
+Following these stories' morals—and continually learning from my mistakes—helped me improve my management skills and create the good habits necessary for the sysadmin job.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/3-scary-sysadmin-stories
+
+作者:[Alex Callejas][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/darkaxl
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Hocus_Pocus_(1993_film)
diff --git a/sources/talk/20181031 How open source hardware increases security.md b/sources/talk/20181031 How open source hardware increases security.md
new file mode 100644
index 0000000000..9e823436cf
--- /dev/null
+++ b/sources/talk/20181031 How open source hardware increases security.md
@@ -0,0 +1,84 @@
+How open source hardware increases security
+======
+Want to boost cybersecurity at your organization? Switch to open source hardware.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/esp8266_board_hardware.jpg?itok=OTmNpKV1)
+
+Hardware hacks are particularly scary because they trump any software security safeguards—for example, they can render all accounts on a server password-less.
+
+Fortunately, we can benefit from what the software industry has learned from decades of fighting prolific software hackers: Using open source techniques can, perhaps counterintuitively, [make a system more secure][1]. Open source hardware and distributed manufacturing can provide protection from future attacks.
+
+### Trust—but verify
+
+Imagine you are a 007 agent holding classified documents. Would you feel more secure locking them in a safe whose manufacturer keeps the workings of the locks secret, or in a safe whose design is published openly so that everyone (including thieves) can judge its quality—thus enabling you to rely exclusively on technical complexity for protection?
+
+The former approach might be perfectly secure—you simply don’t know. But why would you trust any manufacturer that could be compromised now or in the future? In contrast, the open system is almost certain to be secure, especially if enough time has passed for it to be tested by multiple companies, governments, and individuals.
+
+To a large degree, the software world has seen the benefits of moving to free and open source software. That's why open source runs on all [supercomputers][2], [90% of the cloud, 82% of the smartphone market, and 62% of the embedded systems market][3]. Open source appears poised to dominate the future, running over [70% of the IoT][4].
+
+In fact, security is one of the core benefits of [open source][5]. While open source is not inherently more secure, it allows you to verify security yourself (or pay someone more qualified to do so). With closed source programs, you must trust, without verification, that a program works properly. To quote President Reagan: "Trust—but verify." The bottom line is that open source allows users to make more informed choices about the security of a system—choices that are based on their own independent judgment.
+
+### Open source hardware
+
+This concept also holds true for electronic devices.
Most electronics customers have no idea what is in their products, and even technically sophisticated companies like Amazon may not know exactly what is in the hardware that runs their servers because they use proprietary products that are made by other companies.
+
+In one widely reported incident, Chinese spies recently used a tiny microchip, not much bigger than a grain of rice, to infiltrate hardware made by SuperMicro (the Microsoft of the hardware world). These chips enabled outside infiltrators to access the core server functions of some of America’s leading companies and government operations, including DOD data centers, CIA drone operations, and the onboard networks of Navy warships. Operatives from the People’s Liberation Army or similar groups could have reverse-engineered or made identical or disguised modules (in this case, the chips looked like signal-conditioning couplers, a common motherboard component, rather than the spy devices they were).
+
+Having the source available helps customers much more than hackers, as most customers do not have the resources to reverse-engineer the electronics they buy. Without the device's source, or design, it's difficult to determine whether or not hardware has been hacked.
+
+Enter [open source hardware][6]: hardware design that is publicly available so that anyone can study, modify, test, distribute, make, or sell it, or hardware based on it. The hardware’s source is available to everyone.
+
+### Distributed manufacturing for cybersecurity
+
+Open source hardware and distributed manufacturing could have prevented the Chinese hack that rightfully terrified the security world. Organizations that require tight security, such as military groups, could then check the product's code and bring production in-house if necessary.
+
+This open source future may not be far off.
Recently I co-authored, with Shane Oberloier, an [article][7] that discusses a low-cost open source benchtop device that enables anyone to make a wide range of open source electronic products. The number of open source electronics designs is proliferating on websites like [Hackaday][8], [Open Electronics][9], and the [Open Circuit Institute][10], as are communities based on specific products like [Arduino][11] and around companies like [Adafruit Industries][12] and [SparkFun Electronics][13]. + +Every level of manufacturing that users can do themselves increases the security of the device. Not long ago, you had to be an expert to make even a simple breadboard design. Now, with open source mills for boards and electronics repositories, small companies and even individuals can make reasonably sophisticated electronic devices. While most builders are still using black-box chips on their devices, this is also changing as [open source chips gain traction][14]. + +![](https://opensource.com/sites/default/files/uploads/800px-oscircuitmill.png) + +Creating electronics that are open source all the way down to the chip is certainly possible—and the more besieged we are by hardware hacks, perhaps it is even inevitable. Companies, governments, and other organizations that care about cybersecurity should strongly consider moving toward open source—perhaps first by establishing purchasing policies for software and hardware that makes the code accessible so they can test for security weaknesses. + +Although every customer and every manufacturer of an open source hardware product will have different standards of quality and security, this does not necessarily mean weaker security. Customers should choose whatever version of an open source product best meets their needs, just as users can choose their flavor of Linux. For example, do you run [Fedora][15] for free, or do you, like [90% of Fortune Global 500 companies][16], pay Red Hat for its version and support? 
+
+Red Hat makes billions of dollars a year for the service it provides, on top of a product that can ostensibly be downloaded for free. Open source hardware can follow the [same business model][17]; it is just a less mature field, lagging [open source software by about 15 years][18].
+
+The core source code for hardware devices would be controlled by their manufacturer, following the "[benevolent dictator for life][19]" model. Code of any kind (infected or not) is screened before it becomes part of the root. This is true for hardware, too. For example, Aleph Objects manufactures the popular [open source LulzBot brand of 3D printer][20], a commercial 3D printer that's essentially designed to be hacked. Users have made [dozens of modifications][21] (mods) to the printer, and while they are available, Aleph uses only the ones that meet its QC standards in each subsequent version of the printer. Sure, downloading a mod could mess up your own machine, but infecting the source code of the next LulzBot that way would be nearly impossible. Customers are also able to more easily check the security of the machines themselves.
+
+While [challenges certainly remain for the security of open source products][22], the open hardware model can help enhance cybersecurity—from the Pentagon to your living room.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/cybersecurity-demands-rapid-switch-open-source-hardware + +作者:[Joshua Pearce][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jmpearce +[b]: https://github.com/lujun9972 +[1]: https://dl.acm.org/citation.cfm?id=1188921 +[2]: https://www.zdnet.com/article/supercomputers-all-linux-all-the-time/ +[3]: https://www.serverwatch.com/server-news/linux-foundation-on-track-for-best-year-ever-as-open-source-dominates.html +[4]: https://www.itprotoday.com/iot/survey-shows-linux-top-operating-system-internet-things-devices +[5]: https://www.infoworld.com/article/2985242/linux/why-is-open-source-software-more-secure.html +[6]: https://www.oshwa.org/definition/ +[7]: https://www.mdpi.com/2411-5134/3/3/64/htm +[8]: https://hackaday.io/ +[9]: https://www.open-electronics.org/ +[10]: http://opencircuitinstitute.org/ +[11]: https://www.arduino.cc/ +[12]: http://www.adafruit.com/ +[13]: https://www.sparkfun.com/ +[14]: https://www.wired.com/story/using-open-source-designs-to-create-more-specialized-chips/ +[15]: https://getfedora.org/ +[16]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux +[17]: https://openhardware.metajnl.com/articles/10.5334/joh.4/ +[18]: https://www.mdpi.com/2411-5134/3/3/44/htm +[19]: https://www.theatlantic.com/technology/archive/2014/01/on-the-reign-of-benevolent-dictators-for-life-in-software/283139/ +[20]: https://www.lulzbot.com/ +[21]: https://forum.lulzbot.com/viewtopic.php?t=2378 +[22]: https://ieeexplore.ieee.org/abstract/document/8250205 diff --git a/sources/tech/20171002 Three Alternatives for Enabling Two Factor Authentication For SSH On Ubuntu 16.04 And Debian Jessie.md b/sources/tech/20171002 Three Alternatives for 
Enabling Two Factor Authentication For SSH On Ubuntu 16.04 And Debian Jessie.md index ff78b3f809..cbe5e1f9bd 100644 --- a/sources/tech/20171002 Three Alternatives for Enabling Two Factor Authentication For SSH On Ubuntu 16.04 And Debian Jessie.md +++ b/sources/tech/20171002 Three Alternatives for Enabling Two Factor Authentication For SSH On Ubuntu 16.04 And Debian Jessie.md @@ -1,3 +1,5 @@ +Translating by cielllll + Three Alternatives for Enabling Two Factor Authentication For SSH On Ubuntu 16.04 And Debian Jessie ====== Security is now more important than ever and securing your SSH server is one of the most important things that you can do as a systems administrator. Traditionally this has meant disabling password authentication and instead using SSH keys. Whilst this is absolutely the first thing you should do that doesn't mean that SSH can't be made even more secure. diff --git a/sources/tech/20171005 10 Games You Can Play on Linux with Wine.md b/sources/tech/20171005 10 Games You Can Play on Linux with Wine.md index 5bab9d8c65..2936591150 100644 --- a/sources/tech/20171005 10 Games You Can Play on Linux with Wine.md +++ b/sources/tech/20171005 10 Games You Can Play on Linux with Wine.md @@ -1,3 +1,4 @@ +### fuzheng1998 reapplying 10 Games You Can Play on Linux with Wine ====== ![](https://www.maketecheasier.com/assets/uploads/2017/09/wine-games-feat.jpg) diff --git a/sources/tech/20171012 7 Best eBook Readers for Linux.md b/sources/tech/20171012 7 Best eBook Readers for Linux.md index 128c667d88..5198f1bdc0 100644 --- a/sources/tech/20171012 7 Best eBook Readers for Linux.md +++ b/sources/tech/20171012 7 Best eBook Readers for Linux.md @@ -1,4 +1,3 @@ -Translating by bayar199468 7 Best eBook Readers for Linux ====== **Brief:** In this article, we are covering some of the best ebook readers for Linux. These apps give a better reading experience and some will even help in managing your ebooks. 
diff --git a/sources/tech/20171116 How to use a here documents to write data to a file in bash script.md b/sources/tech/20171116 How to use a here documents to write data to a file in bash script.md index 12d15af78f..5fe31f92cf 100644 --- a/sources/tech/20171116 How to use a here documents to write data to a file in bash script.md +++ b/sources/tech/20171116 How to use a here documents to write data to a file in bash script.md @@ -1,3 +1,5 @@ +translating by Flowsnow + How to use a here documents to write data to a file in bash script ====== diff --git a/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md b/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md index dbdebf63e3..80975288e4 100644 --- a/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md +++ b/sources/tech/20171202 Easily control delivery of your Python applications to millions of Linux users with Snapcraft.md @@ -1,3 +1,4 @@ +Translating by DavidChenLiang Python ============================================================ diff --git a/sources/tech/20171202 Simulating the Altair.md b/sources/tech/20171202 Simulating the Altair.md deleted file mode 100644 index 3de613cb9a..0000000000 --- a/sources/tech/20171202 Simulating the Altair.md +++ /dev/null @@ -1,70 +0,0 @@ -translating---geekpi - -Simulating the Altair -====== -The [Altair 8800][1] was a build-it-yourself home computer kit released in 1975. The Altair was basically the first personal computer, though it predated the advent of that term by several years. It is Adam (or Eve) to every Dell, HP, or Macbook out there. - -Some people thought it’d be awesome to write an emulator for the Z80—a processor closely related to the Altair’s Intel 8080—and then thought it needed a simulation of the Altair’s control panel on top of it. 
So if you’ve ever wondered what it was like to use a computer in 1975, you can run the Altair on your Macbook: - -![Altair 8800][2] - -### Installing it - -You can download Z80 pack from the FTP server available [here][3]. You’re looking for the latest Z80 pack release, something like `z80pack-1.26.tgz`. - -First unpack the file: - -``` -$ tar -xvf z80pack-1.26.tgz -``` - -Move into the unpacked directory: - -``` -$ cd z80pack-1.26 -``` - -The control panel simulation is based on a library called `frontpanel`. You’ll have to compile that library first. If you move into the `frontpanel` directory, you will find a `README` file listing the libraries own dependencies. Your experience here will almost certainly differ from mine, but perhaps my travails will be illustrative. I had the dependencies installed, but via [Homebrew][4]. To get the library to compile I just had to make sure that `/usr/local/include` was added to Clang’s include path in `Makefile.osx`. - -If you’ve satisfied the dependencies, you should be able to compile the library (we’re now in `z80pack-1.26/frontpanel`: - -``` -$ make -f Makefile.osx ... -$ make -f Makefile.osx clean -``` - -You should end up with `libfrontpanel.so`. I copied this to `/usr/local/lib`. - -The Altair simulator is under `z80pack-1.26/altairsim`. You now need to compile the simulator itself. Move into `z80pack-1.26/altairsim/srcsim` and run `make` once more: - -``` -$ make -f Makefile.osx ... -$ make -f Makefile.osx clean -``` - -That process will create an executable called `altairsim` one level up in `z80pack-1.26/altairsim`. Run that executable and you should see that iconic Altair control panel! - -And if you really want to nerd out, read through the original [Altair manual][5]. - -If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][6] on Twitter or subscribe to the [RSS feed][7] to make sure you know when a new post is out. 
- --------------------------------------------------------------------------------- - -via: https://twobithistory.org/2017/12/02/simulating-the-altair.html - -作者:[Two-Bit History][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twobithistory.org -[b]: https://github.com/lujun9972 -[1]: https://en.wikipedia.org/wiki/Altair_8800 -[2]: https://www.autometer.de/unix4fun/z80pack/altair.png -[3]: http://www.autometer.de/unix4fun/z80pack/ftp/ -[4]: http://brew.sh/ -[5]: http://www.classiccmp.org/dunfield/altair/d/88opman.pdf -[6]: https://twitter.com/TwoBitHistory -[7]: https://twobithistory.org/feed.xml diff --git a/sources/tech/20171208 24 Must Have Essential Linux Applications In 2017.md b/sources/tech/20171208 24 Must Have Essential Linux Applications In 2017.md deleted file mode 100644 index 7d576b8a73..0000000000 --- a/sources/tech/20171208 24 Must Have Essential Linux Applications In 2017.md +++ /dev/null @@ -1,253 +0,0 @@ -Translating by cycoe... -cycoe 翻译中 -24 Must Have Essential Linux Applications In 2017 -====== -Brief: What are the must have applications for Linux? The answer is subjective and it depends on for what purpose do you use your desktop Linux. But there are still some essentials Linux apps that are more likely to be used by most Linux user. We have listed such best Linux applications that you should have installed in every Linux distribution you use. - -The world of Linux, everything is full of alternatives. You have to choose a distro? You have got several dozens of them. Are you trying to find a decent music player? Alternatives are there too. - -But not all of them are built with the same thing in mind – some of them might target minimalism while others might offer tons of features. Finding the right application for your needs can be quite confusing and a tiresome task. 
Let’s make that a bit easier. - -### Best free applications for Linux users - -I’m putting together a list of essential free Linux applications I prefer to use in different categories. I’m not saying that they are the best, but I have tried lots of applications in each category and finally liked the listed ones better. So, you are more than welcome to mention your favorite applications in the comment section. - -We have also compiled a nice video of this list. Do subscribe to our YouTube channel for more such educational Linux videos: - -### Web Browser - -![Web Browsers](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Web-Browser-1024x512.jpg) -[Save][1]Web Browsers - -#### [Google Chrome][12] - -Google Chrome is a powerful and complete solution for a web browser. It comes with excellent syncing capabilities and offers a vast collection of extensions. If you are accustomed to Google eco-system Google Chrome is for you without any doubt. If you prefer a more open source solution, you may want to try out [Chromium][13], which is the project Google Chrome is based on. - -#### [Firefox][14] - -If you are not a fan of Google Chrome, you can try out Firefox. It’s been around for a long time and is a very stable and robust web browser. - -#### [Vivaldi][15] - -However, if you want something new and different, you can check out Vivaldi. Vivaldi takes a completely fresh approach towards web browser. It’s from former team members of Opera and built on top of the Chromium project. It’s lightweight and customizable. Though it is still quite new and still missing out some features, it feels amazingly refreshing and does a really decent job. 
- -[Suggested read[Review] Otter Browser Brings Hope To Opera Lovers][40] - -### Download Manager - -![Download Managers](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Download-Manager-1024x512.jpg) -[Save][2]Download Managers - -#### [uGet][16] - -uGet is the best download manager I have come across. It is open source and offers everything you can expect from a download manager. uGet offers advanced settings for managing downloads. It can queue and resume downloads, use multiple connections for downloading large files, download files to different directories according to categories and so on. - -#### [XDM][17] - -Xtreme Download Manager (XDM) is a powerful and open source tool developed with Java. It has all the basic features of a download manager, including – video grabber, smart scheduler and browser integration. - -[Suggested read4 Best Download Managers For Linux][41] - -### BitTorrent Client - -![BitTorrent Clients](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-BitTorrent-Client-1024x512.jpg) -[Save][3]BitTorrent Clients - -#### [Deluge][18] - -Deluge is a open source BitTorrent client. It has a beautiful user interface. If you are used to using uTorrent for Windows, Deluge interface will feel familiar. It has various configuration options as well as plugins support for various tasks. - -#### [Transmission][19] - -Transmission takes the minimal approach. It is an open source BitTorrent client with a minimal user interface. Transmission comes pre-installed with many Linux distributions. - -[Suggested readTop 5 Torrent Clients For Ubuntu Linux][42] - -### Cloud Storage - -![Cloud Storages](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Cloud-Storage-1024x512.jpg) -[Save][4]Cloud Storages - -#### [Dropbox][20] - -Dropbox is one of the most popular cloud storage service available out there. It gives you 2GB free storage to start with. Dropbox has a robust and straight-forward Linux client. 
- -#### [MEGA][21] - -MEGA offers 50GB of free storage. But that is not the best thing about it. The best thing about MEGA is that it has end-to-end encryption support for your files. MEGA has a solid Linux client named MEGAsync. - -[Suggested readBest Free Cloud Services For Linux in 2017][43] - -### Communication - -![Communication Apps](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Communication-1024x512.jpg) -[Save][5]Communication Apps - -#### [Pidgin][22] - -Pidgin is an open source instant messenger client. It supports many chatting platforms including – Google Talk, Yahoo and even IRC. Pidgin is extensible through third-party plugins, that can provide a lot of additional functionalities to Pidgin. - -You can also use [Franz][23] or [Rambox][24] to use several messaging services in one application. - -#### [Skype][25] - -We all know Skype, it is one of the most popular video chatting platforms. Recently it has [released a brand new desktop client][26] for Linux. - -[Suggested read6 Best Messaging Apps Available For Linux In 2017][44] - -### Office Suite - -![Office Suites](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Office-Suite-1024x512.jpg) -[Save][6]Office Suites - -#### [LibreOffice][27] - -LibreOffice is the most actively developed open source office suite for Linux. It has mainly six modules – Writer, Calc, Impress, Draw, Math and Base. And every one of them supports a wide range of file formats. LibreOffice also supports third-party extensions. It is the default office suite for many of the Linux distributions. - -#### [WPS Office][28] - -If you want to try out something other than LibreOffice, WPS Office might be your go-to. WPS Office suite includes writer, presentation and spreadsheets support. 
-
-[Suggested read6 Best Open Source Alternatives to Microsoft Office for Linux][45]
-
-### Music Player
-
-![Music Players](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Music-Player-1024x512.jpg)
-[Save][7]Music Players
-
-#### [Lollypop][29]
-
-This is a relatively new music player. Lollypop is open source and has a beautiful yet simple user interface. It offers a nice music organizer, scrobbling support, online radio and a party mode. Though it is a simple music player without many advanced features, it is worth giving it a try.
-
-#### [Rhythmbox][30]
-
-Rhythmbox is a music player mainly developed for the GNOME desktop environment, but it works on other desktop environments as well. It does all the basic tasks of a music player, including – CD ripping & burning, scrobbling etc. It also has support for iPod.
-
-#### [cmus][31]
-
-If you want minimalism and love your terminal window, cmus is for you. Personally, I’m a fan and user of this one. cmus is a small, fast and powerful console music player for Unix-like operating systems. It has all the basic music player features. And you can also extend its functionality with additional extensions and scripts.
-
-[Suggested readHow To Install Tomahawk Player In Ubuntu 14.04 And Linux Mint 17][46]
-
-### Video Player
-
-![Video Player](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Video-Player-1024x512.jpg)
-[Save][8]Video Players
-
-#### [VLC][32]
-
-VLC is an open source media player. It is simple, fast, lightweight and really powerful. VLC can play almost any media format you can throw at it out-of-the-box. It can also stream online media. And it has some nifty extensions for various tasks, like downloading subtitles right from the player.
-
-#### [Kodi][33]
-
-Kodi is a full-fledged media center. Kodi is open source and very popular among its user base. It can handle videos, music, pictures, podcasts and even games, from both local and network media storage. You can even record TV with it. The behavior of Kodi can be customized via add-ons and skins.
-
-[Suggested read4 Format Factory Alternative In Linux][47]
-
-### Photo Editor
-
-![Photo Editors](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Photo-Editor-1024x512.jpg)
-[Save][9]Photo Editors
-
-#### [GIMP][34]
-
-GIMP is the Photoshop alternative for Linux. It is open source, full-featured and professional photo editing software. It is packed with a wide range of tools for manipulating images. And on top of that, there are various customization options and third-party plugins for enhancing the experience.
-
-#### [Krita][35]
-
-Krita is mainly a painting tool but serves as a photo editing application as well. It is open source and packed with lots of sophisticated and advanced tools.
-
-[Suggested readBest Photo Applications For Linux][48]
-
-### Text Editor
-
-Every Linux distribution comes with its own solution for text editors. Generally, they are quite simple and without much functionality. But here are some text editors with enhanced capabilities.
-
-![Text Editors](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Text-Editor-1024x512.jpg)
-[Save][10]Text Editors
-
-#### [Atom][36]
-
-Atom is a modern and hackable text editor maintained by GitHub. It is completely open source and offers everything you can expect from a text editor. You can use it right out-of-the-box, or you can customize and tune it just the way you want. And it has a ton of extensions and themes from the community up for grabs.
-
-#### [Sublime Text][37]
-
-Sublime Text is one of the most popular text editors. Though it is not free, it allows you to use the software for evaluation without any time limit. Sublime Text is a feature-rich and sophisticated piece of software. And of course, it has plugin and theme support.
- -[Suggested read4 Best Modern Open Source Code Editors For Linux][49] - -### Launcher - -![Launchers](https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Launcher-1024x512.jpg) -[Save][11]Launchers - -#### [Albert][38] - -Albert is inspired by Alfred (a productivity application for Mac, which is totally kickass by-the-way) and still in the development phase. Albert is fast, extensible and customizable. The goal is to “Access everything with virtually zero effort”. It integrates with your Linux distribution nicely and helps you to boost your productivity. - -#### [Synapse][39] - -Synapse has been around for years. It’s a simple launcher that can search and run applications. It can also speed up various workflows like – controlling music, searching files, directories, bookmarks etc., running commands and such. - -As Abhishek advised, we will keep this list of best Linux software updated with our readers’ (i.e. yours) feedback. So, what are your favorite must have Linux applications? Share with us and do suggest more categories of software to add to this list. 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/essential-linux-applications/ - -作者:[Munif Tanjim][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/munif/ -[1]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Web-Browser-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Web%20Browsers -[2]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Download-Manager-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Download%20Managers -[3]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-BitTorrent-Client-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=BitTorrent%20Clients -[4]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Cloud-Storage-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Cloud%20Storages -[5]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Communication-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Communication%20Apps -[6]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Office-Suite-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Office%20Suites 
-[7]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Music-Player-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Music%20Players -[8]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Video-Player-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Video%20Player -[9]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Photo-Editor-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Photo%20Editors -[10]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Text-Editor-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Text%20Editors -[11]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/10/Essential-Linux-Apps-Launcher-1024x512.jpg&url=https://itsfoss.com/essential-linux-applications/&is_video=false&description=Launchers -[12]:https://www.google.com/chrome/browser -[13]:https://www.chromium.org/Home -[14]:https://www.mozilla.org/en-US/firefox -[15]:https://vivaldi.com -[16]:http://ugetdm.com/ -[17]:http://xdman.sourceforge.net/ -[18]:http://deluge-torrent.org/ -[19]:https://transmissionbt.com/ -[20]:https://www.dropbox.com -[21]:https://mega.nz/ -[22]:https://www.pidgin.im/ -[23]:https://itsfoss.com/franz-messaging-app/ -[24]:http://rambox.pro/ -[25]:https://www.skype.com -[26]:https://itsfoss.com/skpe-alpha-linux/ -[27]:https://www.libreoffice.org -[28]:https://www.wps.com -[29]:http://gnumdk.github.io/lollypop-web/ -[30]:https://wiki.gnome.org/Apps/Rhythmbox -[31]:https://cmus.github.io/ -[32]:http://www.videolan.org -[33]:https://kodi.tv -[34]:https://www.gimp.org/ 
-[35]:https://krita.org/en/ -[36]:https://atom.io/ -[37]:http://www.sublimetext.com/ -[38]:https://github.com/ManuelSchneid3r/albert -[39]:https://launchpad.net/synapse-project -[40]:https://itsfoss.com/otter-browser-review/ -[41]:https://itsfoss.com/4-best-download-managers-for-linux/ -[42]:https://itsfoss.com/best-torrent-ubuntu/ -[43]:https://itsfoss.com/cloud-services-linux/ -[44]:https://itsfoss.com/best-messaging-apps-linux/ -[45]:https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/ -[46]:https://itsfoss.com/install-tomahawk-ubuntu-1404-linux-mint-17/ -[47]:https://itsfoss.com/format-factory-alternative-linux/ -[48]:https://itsfoss.com/image-applications-ubuntu-linux/ -[49]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ diff --git a/sources/tech/20180110 Using Your Own Private Registry with Docker Enterprise Edition.md b/sources/tech/20180110 Using Your Own Private Registry with Docker Enterprise Edition.md index 88b1eb96e1..00507d2b9c 100644 --- a/sources/tech/20180110 Using Your Own Private Registry with Docker Enterprise Edition.md +++ b/sources/tech/20180110 Using Your Own Private Registry with Docker Enterprise Edition.md @@ -1,3 +1,5 @@ +fuowang 翻译中 + Using Your Own Private Registry with Docker Enterprise Edition ====== diff --git a/sources/tech/20180716 How To Find The Mounted Filesystem Type In Linux.md b/sources/tech/20180716 How To Find The Mounted Filesystem Type In Linux.md deleted file mode 100644 index 5005cf44f7..0000000000 --- a/sources/tech/20180716 How To Find The Mounted Filesystem Type In Linux.md +++ /dev/null @@ -1,259 +0,0 @@ -How To Find The Mounted Filesystem Type In Linux -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/07/filesystem-720x340.png) - -As you may already know, the Linux supports numerous filesystems, such as Ext4, ext3, ext2, sysfs, securityfs, FAT16, FAT32, NTFS, and many. The most commonly used filesystem is Ext4. 
Ever wondered what type of filesystem you are currently using in your Linux system? No? Worry not! We have got your back. This guide explains how to find the mounted filesystem type in Unix-like operating systems.
-
-### Find The Mounted Filesystem Type In Linux
-
-There are many ways to find the filesystem type in Linux. Here, I have given 8 different methods. Let us get started, shall we?
-
-#### Method 1 – Using findmnt command
-
-This is the most commonly used method to find out the type of a filesystem. The **findmnt** command will list all mounted filesystems or search for a filesystem. The findmnt command can search in **/etc/fstab** , **/etc/mtab** or **/proc/self/mountinfo**.
-
-The findmnt command comes pre-installed in most Linux distributions, because it is part of the package named **util-linux**. Just in case it is not available, simply install this package and you’re good to go. For instance, you can install the **util-linux** package in Debian-based systems using the command:
-```
-$ sudo apt install util-linux
-
-```
-
-Let us go ahead and see how to use the findmnt command to find out the mounted filesystems.
-
-If you run it without any arguments/options, it will list all mounted filesystems in a tree-like format as shown below.
-```
-$ findmnt
-
-```
-
-**Sample output:**
-
-![][2]
-
-As you can see, the findmnt command displays the target mount point (TARGET), source device (SOURCE), filesystem type (FSTYPE), and relevant mount options (OPTIONS), like whether the filesystem is read/write or read-only. In my case, my root (/) filesystem type is EXT4.
-
-If you don’t want to display the output in a tree-like format, use the **-l** flag to display it in a simple, plain format.
-```
-$ findmnt -l
-
-```
-
-![][3]
-
-You can also list a particular type of filesystem, for example **ext4** , using the **-t** option.
-``` -$ findmnt -t ext4 -TARGET SOURCE FSTYPE OPTIONS -/ /dev/sda2 ext4 rw,relatime,commit=360 -└─/boot /dev/sda1 ext4 rw,relatime,commit=360,data=ordered - -``` - -Findmnt can produce df style output as well. -``` -$ findmnt --df - -``` - -Or -``` -$ findmnt -D - -``` - -Sample output: -``` -SOURCE FSTYPE SIZE USED AVAIL USE% TARGET -dev devtmpfs 3.9G 0 3.9G 0% /dev -run tmpfs 3.9G 1.1M 3.9G 0% /run -/dev/sda2 ext4 456.3G 342.5G 90.6G 75% / -tmpfs tmpfs 3.9G 32.2M 3.8G 1% /dev/shm -tmpfs tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup -bpf bpf 0 0 0 - /sys/fs/bpf -tmpfs tmpfs 3.9G 8.4M 3.9G 0% /tmp -/dev/loop0 squashfs 82.1M 82.1M 0 100% /var/lib/snapd/snap/core/4327 -/dev/sda1 ext4 92.8M 55.7M 30.1M 60% /boot -tmpfs tmpfs 788.8M 32K 788.8M 0% /run/user/1000 -gvfsd-fuse fuse.gvfsd-fuse 0 0 0 - /run/user/1000/gvfs - -``` - -You can also display filesystems for a specific device, or mountpoint too. - -Search for a device: -``` -$ findmnt /dev/sda1 -TARGET SOURCE FSTYPE OPTIONS -/boot /dev/sda1 ext4 rw,relatime,commit=360,data=ordered - -``` - -Search for a mountpoint: -``` -$ findmnt / -TARGET SOURCE FSTYPE OPTIONS -/ /dev/sda2 ext4 rw,relatime,commit=360 - -``` - -You can even find filesystems with specific label: -``` -$ findmnt LABEL=Storage - -``` - -For more details, refer the man pages. -``` -$ man findmnt - -``` - -The findmnt command is just enough to find the type of a mounted filesystem in Linux. It is created for that specific purpose only. However, there are also few other ways available to find out the filesystem type. If you’re interested to know, read on. - -#### Method 2 – Using blkid command - -The **blkid** command is used locate and print block device attributes. It is also part of the util-linux package, so you don’t bother to install it. 
- -To find out the type of a filesystem using blkid command, run: -``` -$ blkid /dev/sda1 - -``` - -#### Method 3 – Using df command - -The **df** command is used to report filesystem disk space usage in Unix-like operating systems. To find the type of all mounted filesystems, simply run: -``` -$ df -T - -``` - -**Sample output:** - -![][4] - -For details about df command, refer the following guide. - -Also, check man pages. -``` -$ man df - -``` - -#### Method 4 – Using file command - -The **file** command determines the type of a specified file. It works just fine for files with no file extension. - -Run the following command to find the filesystem type of a partition: -``` -$ sudo file -sL /dev/sda1 -[sudo] password for sk: -/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=83a1dbbf-1e15-4b45-94fe-134d3872af96 (needs journal recovery) (extents) (large files) (huge files) - -``` - -Check man pages for more details: -``` -$ man file - -``` - -#### Method 5 – Using fsck command - -The **fsck** command is used to check the integrity of a filesystem or repair it. You can find the type of a filesystem by passing the partition as an argument like below. -``` -$ fsck -N /dev/sda1 -fsck from util-linux 2.32 -[/usr/bin/fsck.ext4 (1) -- /boot] fsck.ext4 /dev/sda1 - -``` - -For more details, refer man pages. -``` -$ man fsck - -``` - -#### Method 6 – Using fstab Command - -**fstab** is a file that contains static information about the filesystems. This file usually contains the mount point, filesystem type and mount options. - -To view the type of a filesystem, simply run: -``` -$ cat /etc/fstab - -``` - -![][5] - -For more details, refer man pages. -``` -$ man fstab - -``` - -#### Method 7 – Using lsblk command - -The **lsblk** command displays the information about devices. 
- -To display info about mounted filesystems, simply run: -``` -$ lsblk -f -NAME FSTYPE LABEL UUID MOUNTPOINT -loop0 squashfs /var/lib/snapd/snap/core/4327 -sda -├─sda1 ext4 83a1dbbf-1e15-4b45-94fe-134d3872af96 /boot -├─sda2 ext4 4d25ddb0-5b20-40b4-ae35-ef96376d6594 / -└─sda3 swap 1f8f5e2e-7c17-4f35-97e6-8bce7a4849cb [SWAP] -sr0 - -``` - -For more details, refer man pages. -``` -$ man lsblk - -``` - -#### Method 8 – Using mount command - -The **mount** command is used to mount a local or remote filesystems in Unix-like systems. - -To find out the type of a filesystem using mount command, do: -``` -$ mount | grep "^/dev" -/dev/sda2 on / type ext4 (rw,relatime,commit=360) -/dev/sda1 on /boot type ext4 (rw,relatime,commit=360,data=ordered) - -``` - -For more details, refer man pages. -``` -$ man mount - -``` - -And, that’s all for now folks. You now know 8 different Linux commands to find out the type of a mounted Linux filesystems. If you know any other methods, feel free to let me know in the comment section below. I will check and update this guide accordingly. - -More good stuffs to come. Stay tuned! 
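As a practical aside, if you check filesystem types often, the findmnt approach from Method 1 is easy to wrap in a tiny helper. The function name below is just an illustration, not something any of the tools above ship:

```shell
#!/bin/sh
# fstype: print only the filesystem type of a given mount point or device.
# Relies on findmnt's -n (no headings) and -o FSTYPE (output column) options.
fstype() {
    findmnt -no FSTYPE "$1"
}

# Example: the type of the root filesystem (prints e.g. "ext4").
fstype /
```

Drop the function into your `~/.bashrc` (or equivalent) and you can run `fstype /boot` or `fstype /dev/sda1` without remembering the flags.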
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-find-the-mounted-filesystem-type-in-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2018/07/findmnt-1.png -[3]:http://www.ostechnix.com/wp-content/uploads/2018/07/findmnt-2.png -[4]:http://www.ostechnix.com/wp-content/uploads/2018/07/df.png -[5]:http://www.ostechnix.com/wp-content/uploads/2018/07/fstab.png diff --git a/sources/tech/20180725 Build an interactive CLI with Node.js.md b/sources/tech/20180725 Build an interactive CLI with Node.js.md index 6ec13f1cfc..f240e51efd 100644 --- a/sources/tech/20180725 Build an interactive CLI with Node.js.md +++ b/sources/tech/20180725 Build an interactive CLI with Node.js.md @@ -1,3 +1,5 @@ +translating by Flowsnow + Build an interactive CLI with Node.js ====== diff --git a/sources/tech/20180810 How To Quickly Serve Files And Folders Over HTTP In Linux.md b/sources/tech/20180810 How To Quickly Serve Files And Folders Over HTTP In Linux.md index c4adc3ac07..025199d93c 100644 --- a/sources/tech/20180810 How To Quickly Serve Files And Folders Over HTTP In Linux.md +++ b/sources/tech/20180810 How To Quickly Serve Files And Folders Over HTTP In Linux.md @@ -1,3 +1,5 @@ +translating---geekpi + How To Quickly Serve Files And Folders Over HTTP In Linux ====== diff --git a/sources/tech/20180820 How To Disable Ads In Terminal Welcome Message In Ubuntu Server.md b/sources/tech/20180820 How To Disable Ads In Terminal Welcome Message In Ubuntu Server.md deleted file mode 100644 index 8f6ef80dbe..0000000000 --- a/sources/tech/20180820 How To Disable Ads 
In Terminal Welcome Message In Ubuntu Server.md
+++ /dev/null
@@ -1,120 +0,0 @@
-How To Disable Ads In Terminal Welcome Message In Ubuntu Server
-======
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/08/disable-ads-in-Terminal-welcome-message-in-Ubuntu-720x340.jpg)
-
-If you’re using any recent Ubuntu server edition, you might have noticed some promotional links in the welcome message which are not relevant to the Ubuntu server platform. As you might already know, **MOTD**, an abbreviation of **M**essage **O**f **T**he **D**ay, displays a welcome message at every login in Linux systems. Usually, the welcome message contains the version of your OS, basic system information, the official documentation link, and links to read about the latest security updates etc. This is what we usually see every time we log in, either via SSH or on the local machine. However, some additional links have started to appear in the terminal welcome message lately. I have already noticed these links a few times, but I didn’t care about them and never clicked them. Here is the terminal welcome message shown in my Ubuntu 18.04 LTS server.
-
-![](http://www.ostechnix.com/wp-content/uploads/2018/08/Ubuntu-Terminal-welcome-message.png)
-
-As you can see in the above screenshot, there is also a bit.ly link and an Ubuntu wiki link in the welcome message. Some of you may be surprised and wondering what this is. There is nothing to worry about with the links in the welcome message. They may look sort of ad-like, but those are not really commercial ads. The links actually point to the [**Ubuntu official blog**][1] and the [**Ubuntu wiki**][2]. As I said earlier, one of the links is not relevant and doesn’t have any details related to Ubuntu server. That’s why I called them ads in the first place.
-
-Even though most of us won’t visit bit.ly links, some people may visit them out of curiosity, only to end up disappointed realizing that they simply point to an external link.
You can use any URL unshortener service, such as unshorten.it, to see where they lead before visiting the actual link. Alternatively, you can just type a plus sign ( **+** ) at the end of the bit.ly link to see where it leads, along with some statistics about the link.
-
-![](http://www.ostechnix.com/wp-content/uploads/2018/08/shortlink.png)
-
-### What is MOTD and how does it work?
-
-Back in 2009, **Dustin Kirkland** from Canonical introduced the concept of MOTD in Ubuntu. It’s a flexible framework that enables administrators or distro packages to add executable scripts in the /etc/update-motd.d/* location to generate informative, interesting messages displayed at login. It was originally implemented for Landscape (a commercial service from Canonical), however other distribution maintainers found it useful and adopted this feature in their own distributions as well.
-
-If you look in the **/etc/update-motd.d/** location in your Ubuntu system, you’ll see a set of scripts. One prints the generic “welcome” banner. The next one prints 3 links showing where to find help for the OS. Another one counts and displays the number of package updates available for the local system. Yet another tells you if a reboot is required, and so on.
-
-From Ubuntu 17.04 onwards, the developers have added **/etc/update-motd.d/50-motd-news** , a script to include some additional information in the welcome message. The additional information includes:
-
- 1. Important critical information, such as ShellShock, Heartbleed etc.
-
- 2. End-of-Life (EOL) messages, new feature availability, etc.
-
- 3. Some fun and informative posts published in the Ubuntu official blog and other news about Ubuntu.
-
-Asynchronously, about 60 seconds after boot, a systemd timer runs the “/etc/update-motd.d/50-motd-news --force” script. It sources 3 config variables defined in the /etc/default/motd-news script. The default values are: ENABLED=1, URLS="https://motd.ubuntu.com", WAIT=5.
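You can peek at that timer yourself with systemd's own tooling. The unit name (`motd-news.timer`) is what a stock Ubuntu install ships; the guard below keeps the snippet harmless on systems without a running systemd, so treat it as a sketch rather than a required step:

```shell
#!/bin/sh
# Inspect the systemd timer that refreshes the MOTD news cache.
# The motd-news.timer unit name assumes a default Ubuntu (17.04+) install.
if command -v systemctl >/dev/null 2>&1 \
        && systemctl list-timers 'motd-news*' --all 2>/dev/null; then
    status="timer listed"
else
    status="no systemd timer information available on this system"
fi
echo "motd-news timer check: $status"
```

On Ubuntu you should see the timer's next and last trigger times, which line up with the 12-hour refresh cycle described below.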
- -Here is the contents of /etc/default/motd-news file: -``` -$ cat /etc/default/motd-news -# Enable/disable the dynamic MOTD news service -# This is a useful way to provide dynamic, informative -# information pertinent to the users and administrators -# of the local system -ENABLED=1 - -# Configure the source of dynamic MOTD news -# White space separated list of 0 to many news services -# For security reasons, these must be https -# and have a valid certificate -# Canonical runs a service at motd.ubuntu.com, and you -# can easily run one too -URLS="https://motd.ubuntu.com" - -# Specify the time in seconds, you're willing to wait for -# dynamic MOTD news -# Note that news messages are fetched in the background by -# a systemd timer, so this should never block boot or login -WAIT=5 - -``` - -Good thing is MOTD is fully customizable, so you can disable it entirely (ENABLED=0), change or add scripts as per your wish, and change the wait time in seconds - -If MOTD is enabled, that systemd timer job will loop over each of the URLS, trim them to 80 characters per line, and a maximum of 10 lines, and concatenate them to a cache file in /var/cache/motd-news. This systemd timer job will re-run and update the /var/cache/motd-news every 12 hours. Upon user login, the contents of /var/cache/motd-news is just printed to screen. This is how MOTD works. - -Also, a custom user-agent string is included in **/etc/update-motd.d/50-motd-news** file to report information about your computer. If you look into **/etc/update-motd.d/50-motd-news** file, you will see the following code. -``` -# Piece together the user agent -USER_AGENT="curl/$curl_ver $lsb $platform $cpu $uptime" - -``` - -That means, the MOTD retriever reports your **operating system release** , **hardware platform** , **CPU type** and **uptime** to Canonical. - -Hope you got the basic idea about MOTD. - -Let us now get back to the topic. I don’t want this feature. How do I disable it? 
If the promotional links in the welcome message still bother you and you want to disable them permanently, here is a quick way to do it.
-
-### Disable Ads In Terminal Welcome Message In Ubuntu Server
-
-To disable these ads, edit the file:
-```
-$ sudo vi /etc/default/motd-news
-
-```
-
-Find the following line and set its value to 0 (zero).
-```
-[...]
-ENABLED=0
-[...]
-
-```
-
-Save and close the file. Now, reboot your system and see if the welcome message is still showing the links from the Ubuntu blog.
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/08/Ubuntu-Terminal-welcome-message-1.png)
-
-See? There are no links from the Ubuntu blog and Ubuntu wiki now.
-
-And, that’s all for now. Hope this helps. More good stuffs to come. Stay tuned!
-
-Cheers!
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/how-to-disable-ads-in-terminal-welcome-message-in-ubuntu-server/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.ostechnix.com/author/sk/
-[1]:https://blog.ubuntu.com/
-[2]:https://wiki.ubuntu.com/
diff --git a/sources/tech/20180824 Joplin- Encrypted Open Source Note Taking And To-Do Application.md b/sources/tech/20180824 Joplin- Encrypted Open Source Note Taking And To-Do Application.md
index 5c520c8021..ae6a1f32d9 100644
--- a/sources/tech/20180824 Joplin- Encrypted Open Source Note Taking And To-Do Application.md
+++ b/sources/tech/20180824 Joplin- Encrypted Open Source Note Taking And To-Do Application.md
@@ -1,3 +1,5 @@
+translating---geekpi
+
 Joplin: Encrypted Open Source Note Taking And To-Do Application
 ======
 **[Joplin][1] is a free and open source note taking and to-do application available for Linux, Windows, macOS, Android and iOS.
Its key features include end-to-end encryption, Markdown support, and synchronization via third-party services like NextCloud, Dropbox, OneDrive or WebDAV.** diff --git a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md b/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md index 769f9ba420..c25239b7ba 100644 --- a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md +++ b/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md @@ -1,4 +1,3 @@ -Translating by z52527 Publishing Markdown to HTML with MDwiki ====== diff --git a/sources/tech/20180831 Test containers with Python and Conu.md b/sources/tech/20180831 Test containers with Python and Conu.md index e28ca4674e..9911901d51 100644 --- a/sources/tech/20180831 Test containers with Python and Conu.md +++ b/sources/tech/20180831 Test containers with Python and Conu.md @@ -1,4 +1,4 @@ -Test containers with Python and Conu +translating by GraveAccent Test containers with Python and Conu ====== ![](https://fedoramagazine.org/wp-content/uploads/2018/08/conu-816x345.jpg) diff --git a/sources/tech/20180903 A Cross-platform High-quality GIF Encoder.md b/sources/tech/20180903 A Cross-platform High-quality GIF Encoder.md deleted file mode 100644 index 7a7f79064b..0000000000 --- a/sources/tech/20180903 A Cross-platform High-quality GIF Encoder.md +++ /dev/null @@ -1,160 +0,0 @@ -A Cross-platform High-quality GIF Encoder -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/09/gifski-720x340.png) - -As a content writer, I needed to add images in my articles. Sometimes, it is better to add videos or gif images to explain the concept a bit easier. The readers can easily understand the guide much better by watching the output in video or gif format than the text. The day before, I have written about [**Flameshot**][1], a feature-rich and powerful screenshot tool for Linux. Today, I will show you how to make high quality gif images either from a video or set of images. 
Meet **Gifski**, a cross-platform, open source, command-line high-quality GIF encoder based on **Pngquant**.
-
-For those wondering, pngquant is a command-line lossy PNG image compressor. Trust me, pngquant is one of the best PNG compressors I have ever used. It compresses PNG images **up to 70%** while keeping the perceived quality largely intact, and it preserves full alpha transparency. The compressed images are compatible with all web browsers and operating systems. Since Gifski is based on Pngquant, it uses pngquant’s features for creating efficient GIF animations. Gifski is capable of creating animated GIFs that use thousands of colors per frame. Gifski also requires **ffmpeg** to convert video into PNG images.
-
-### **Installing Gifski**
-
-Make sure you have installed FFmpeg and Pngquant.
-
-FFmpeg is available in the default repositories of most Linux distributions, so you can install it using the default package manager. For installation instructions, refer to the following guide.
-
-Pngquant is available in the [**AUR**][2]. To install it in Arch-based systems, use any AUR helper program like [**Yay**][3].
-```
-$ yay -S pngquant
-
-```
-
-On Debian-based systems, run:
-```
-$ sudo apt install pngquant
-
-```
-
-If pngquant is not available for your distro, compile and install it from source. You will need the **`libpng-dev`** package installed with development headers.
-```
-$ git clone --recursive https://github.com/kornelski/pngquant.git
-
-$ make
-
-$ sudo make install
-
-```
-
-After installing the prerequisites, install Gifski. You can install it using **cargo** if you have installed the [**Rust**][4] programming language.
-```
-$ cargo install gifski
-
-```
-
-You can also get it with the [**Linuxbrew**][5] package manager.
-```
-$ brew install gifski
-
-```
-
-If you don’t want to install cargo or Linuxbrew, download the latest binary executables from the [**releases page**][6], or compile and install gifski manually.
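Before moving on, you can quickly confirm that the whole toolchain actually landed on your `$PATH`. The binary names below are the defaults the packages above install:

```shell
#!/bin/sh
# Verify that ffmpeg, pngquant and gifski are all installed and reachable.
missing=""
for tool in ffmpeg pngquant gifski; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done

if [ -z "$missing" ]; then
    echo "All tools found; ready to encode GIFs."
else
    echo "Still missing:$missing" >&2
fi
```

If anything shows up as missing, revisit the corresponding install step above before trying the encoding commands that follow.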
-
-### Create high-quality GIF animations using Gifski
-
-Go to the location where you have kept the PNG images and run the following command to create a GIF animation from the set of images:
-```
-$ gifski -o file.gif *.png
-
-```
-
-Here, file.gif is the final output GIF animation.
-
-Gifski also has some additional features, such as:
-
- * Create a GIF animation with specific dimensions
- * Show a specific number of animation frames per second
- * Encode with a specific quality
- * Encode faster
- * Encode images exactly in the order given, rather than sorted
-
-
-
-To create a GIF animation with specific dimensions, for example width=800 and height=400, use the following command:
-```
-$ gifski -o file.gif -W 800 -H 400 *.png
-
-```
-
-You can set how many animation frames per second you want in the GIF animation. The default value is **20**. To do so, run:
-```
-$ gifski -o file.gif --fps 1 *.png
-
-```
-
-In the above example, I have used one animation frame per second.
-
-We can encode with a specific quality on a scale of 1-100. Obviously, a lower quality gives a smaller file, and a higher quality gives a bigger GIF animation file.
-```
-$ gifski -o file.gif --quality 50 *.png
-
-```
-
-Gifski will take more time when you encode a large number of images. To make the encoding process 3 times faster than the usual speed, run:
-```
-$ gifski -o file.gif --fast *.png
-
-```
-
-Please note that it will reduce the quality by about 10% and create a bigger animation file.
-
-To encode images exactly in the order given (rather than sorted), use the **`--nosort`** option.
-```
-$ gifski -o file.gif --nosort *.png
-
-```
-
-If you do not want the GIF to loop, simply use the **`--once`** option.
-```
-$ gifski -o file.gif --once *.png
-
-```
-
-**Create a GIF animation from a video file**
-
-Sometimes you might want to create an animated GIF file from a video. That is also possible, and this is where FFmpeg comes to help. First, convert the video into PNG frames like below.
-```
-$ ffmpeg -i video.mp4 frame%04d.png
-
-```
-
-The above command extracts image files named “frame0001.png”, “frame0002.png”, “frame0003.png”…, etc. from video.mp4 (%04d zero-pads the frame number to four digits) and saves them in the current working directory.
-
-After converting the image files, simply run the following command to make the animated GIF file.
-```
-$ gifski -o file.gif *.png
-
-```
-
-For more details, refer to the help section.
-```
-$ gifski -h
-
-```
-
-Here is the sample animated file created using Gifski.
-
-As you can see, the quality of the GIF file is really great.
-
-And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
-
-Cheers!
-
-
--------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/gifski-a-cross-platform-high-quality-gif-encoder/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[1]: https://www.ostechnix.com/flameshot-a-simple-yet-powerful-feature-rich-screenshot-tool/
-[2]: https://aur.archlinux.org/packages/pngquant/
-[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
-[4]: https://www.ostechnix.com/install-rust-programming-language-in-linux/
-[5]: https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/
-[6]: https://github.com/ImageOptim/gifski/releases
diff --git a/sources/tech/20180905 How To Run MS-DOS Games And Programs In Linux.md b/sources/tech/20180905 How To Run MS-DOS Games And Programs In Linux.md
index 0552fb3d09..ffcdf9f47d 100644
--- a/sources/tech/20180905 How To Run MS-DOS Games And Programs In Linux.md
+++ b/sources/tech/20180905 How To Run MS-DOS Games And Programs In Linux.md
@@ -1,3 +1,5 @@
+Translating by way-ww
+
 How To Run MS-DOS Games And Programs In Linux
 ======
diff
--git a/sources/tech/20180907 6 open source tools for writing a book.md b/sources/tech/20180907 6 open source tools for writing a book.md deleted file mode 100644 index 8b8140bd61..0000000000 --- a/sources/tech/20180907 6 open source tools for writing a book.md +++ /dev/null @@ -1,67 +0,0 @@ -6 open source tools for writing a book -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-austen-writing-code.png?itok=XPxRMtQ4) - -I first used and contributed to free and open source software in 1993, and since then I've been an open source software developer and evangelist. I've written or contributed to dozens of open source software projects, although the one that I'll be remembered for is the [FreeDOS Project][1], an open source implementation of the DOS operating system. - -I recently wrote a book about FreeDOS. [_Using FreeDOS_][2] is my celebration of the 24th anniversary of FreeDOS. It is a collection of how-to's about installing and using FreeDOS, essays about my favorite DOS applications, and quick-reference guides to the DOS command line and DOS batch programming. I've been working on this book for the last few months, with the help of a great professional editor. - -_Using FreeDOS_ is available under the Creative Commons Attribution (cc-by) International Public License. You can download the EPUB and PDF versions at no charge from the [FreeDOS e-books][2] website. (I'm also planning a print version, for those who prefer a bound copy.) - -The book was produced almost entirely with open source software. I'd like to share a brief insight into the tools I used to create, edit, and produce _Using FreeDOS_. - -### Google Docs - -[Google Docs][3] is the only tool I used that isn't open source software. I uploaded my first drafts to Google Docs so my editor and I could collaborate. 
I'm sure there are open source collaboration tools, but Google Docs' ability to let two people edit the same document at the same time, make comments, suggest edits, and track changes—not to mention its use of paragraph styles and the ability to download the finished document—made it a valuable part of the editing process.
-
-### LibreOffice
-
-I started on [LibreOffice][4] 6.0, but I finished the book using LibreOffice 6.1. I love LibreOffice's rich support of styles. Paragraph styles made it easy to apply a style for titles, headers, body text, sample code, and other text. Character styles let me modify the appearance of text within a paragraph, such as inline sample code or a different style to indicate a filename. Graphics styles let me apply certain styling to screenshots and other images. And page styles allowed me to easily modify the layout and appearance of the page.
-
-### GIMP
-
-My book includes a lot of DOS program screenshots, website screenshots, and FreeDOS logos. I used [GIMP][5] to modify these images for the book. Usually, this was simply cropping or resizing an image, but as I prepare the print edition of the book, I'm using GIMP to create a few images that will be simpler for print layout.
-
-### Inkscape
-
-Most of the FreeDOS logos and fish mascots are in SVG format, and I used [Inkscape][6] for any image tweaking here. And in preparing the PDF version of the ebook, I wanted a simple blue banner at the top of the page, with the FreeDOS logo in the corner. After some experimenting, I found it easier to create an SVG image in Inkscape that looked like the banner I wanted, and I pasted that into the header.
-
-### ImageMagick
-
-While it's great to use GIMP to do the fine work, sometimes it's faster to run an [ImageMagick][7] command over a set of images, such as converting them to PNG format or resizing them.
-
-### Sigil
-
-LibreOffice can export directly to EPUB format, but it wasn't a great transfer.
I haven't tried creating an EPUB with LibreOffice 6.1, but LibreOffice 6.0 didn't include my images. It also added styles in a weird way. I used [Sigil][8] to tweak the EPUB file and make everything look right. Sigil even has a preview function so you can see what the EPUB will look like. - -### QEMU - -Because this book is about installing and running FreeDOS, I needed to actually run FreeDOS. You can boot FreeDOS inside any PC emulator, including VirtualBox, QEMU, GNOME Boxes, PCem, and Bochs. But I like the simplicity of [QEMU][9]. And the QEMU console lets you issue a screen dump in PPM format, which is ideal for grabbing screenshots to include in the book. - -Of course, I have to mention running [GNOME][10] on [Linux][11]. I use the [Fedora][12] distribution of Linux. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/writing-book-open-source-tools - -作者:[Jim Hall][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jim-hall -[1]: http://www.freedos.org/ -[2]: http://www.freedos.org/ebook/ -[3]: https://www.google.com/docs/about/ -[4]: https://www.libreoffice.org/ -[5]: https://www.gimp.org/ -[6]: https://inkscape.org/ -[7]: https://www.imagemagick.org/ -[8]: https://sigil-ebook.com/ -[9]: https://www.qemu.org/ -[10]: https://www.gnome.org/ -[11]: https://www.kernel.org/ -[12]: https://getfedora.org/ diff --git a/sources/tech/20180925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md b/sources/tech/20180925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md index f542b15808..ae353bf11f 100644 --- a/sources/tech/20180925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md +++ b/sources/tech/20180925 9 Easiest Ways To Find Out Process ID (PID) In Linux.md @@ -1,4 +1,3 @@ 
-【moria(knuth.fan at gmail.com)翻译中】 9 Easiest Ways To Find Out Process ID (PID) In Linux ====== Everybody knows about PID, Exactly what is PID? Why you want PID? What are you going to do using PID? Are you having the same questions on your mind? If so, you are in the right place to get all the details. diff --git a/sources/tech/20180928 Using Grails with jQuery and DataTables.md b/sources/tech/20180928 Using Grails with jQuery and DataTables.md deleted file mode 100644 index 0f02fabe8a..0000000000 --- a/sources/tech/20180928 Using Grails with jQuery and DataTables.md +++ /dev/null @@ -1,546 +0,0 @@ -[translating by jrg 20181014] - -Using Grails with jQuery and DataTables -====== - -Learn to build a Grails-based data browser that lets users visualize complex tabular data. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_container_block.png?itok=S8MbXEYw) - -I’m a huge fan of [Grails][1]. Granted, I’m mostly a data person who likes to explore and analyze data using command-line tools. But even data people sometimes need to _look at_ the data, and sometimes using data means having a great data browser. With Grails, [jQuery][2], and the [DataTables jQuery plugin][3], we can make really nice tabular data browsers. - -The [DataTables website][3] offers a lot of decent “recipe-style” documentation that shows how to put together some fine sample applications, and it includes the necessary JavaScript, HTML, and occasional [PHP][4] to accomplish some pretty spiffy stuff. But for those who would rather use Grails as their backend, a bit of interpretation is necessary. Also, the sample application data used is a single flat table of employees of a fictional company, so the complexity of dealing with table relations serves as an exercise for the reader. - -In this article, we’ll fill those two gaps by creating a Grails application with a slightly more complex data structure and a DataTables browser. 
In doing so, we’ll cover Grails criteria, which are [Groovy][5]-fied Java Hibernate criteria. I’ve put the code for the application on [GitHub][6], so this article is oriented toward explaining the nuances of the code.
-
-For prerequisites, you will need Java, Groovy, and Grails environments set up. With Grails, I tend to use a terminal window and [Vim][7], so that’s what’s used here. To get a modern Java, I suggest downloading and installing the [Open Java Development Kit][8] (OpenJDK) provided by your Linux distro (which should be Java 8, 9, 10 or 11; at the time of writing, I’m working with Java 8). From my point of view, the best way to get up-to-date Groovy and Grails is to use [SDKMAN!][9].
-
-Readers who have never tried Grails will probably need to do some background reading. As a starting point, I recommend [Creating Your First Grails Application][10].
-
-### Getting the employee browser application
-
-As mentioned above, I’ve put the source code for this sample employee browser application on [GitHub][6]. For further explanation, the application **embrow** was built using the following commands in a Linux terminal window:
-
-```
-cd Projects
-grails create-app com.nuevaconsulting.embrow
-cd embrow
-```
-
-The domain classes and unit tests are created as follows:
-
-```
-grails create-domain-class com.nuevaconsulting.embrow.Position
-grails create-domain-class com.nuevaconsulting.embrow.Office
-grails create-domain-class com.nuevaconsulting.embrow.Employee
-```
-
-The domain classes built this way have no attributes, so they must be edited as follows:
-
-The Position domain class:
-
-```
-package com.nuevaconsulting.embrow
-
-class Position {
-
-    String name
-    int starting
-
-    static constraints = {
-        name nullable: false, blank: false
-        starting nullable: false
-    }
-}
-```
-
-The Office domain class:
-
-```
-package com.nuevaconsulting.embrow
-
-class Office {
-
-    String name
-    String address
-    String city
-    String country
-
-    static constraints = {
-        name nullable: false, blank: false
-        address nullable: false, blank: false
-        city nullable: false, blank: false
-        country nullable: false, blank: false
-    }
-}
-```
-
-And the Employee domain class:
-
-```
-package com.nuevaconsulting.embrow
-
-class Employee {
-
-    String surname
-    String givenNames
-    Position position
-    Office office
-    int extension
-    Date hired
-    int salary
-    static constraints = {
-        surname nullable: false, blank: false
-        givenNames nullable: false, blank: false
-        position nullable: false
-        office nullable: false
-        extension nullable: false
-        hired nullable: false
-        salary nullable: false
-    }
-}
-```
-
-Note that whereas the Position and Office domain classes use predefined Groovy types String and int, the Employee domain class defines fields that are of type Position and Office (as well as the predefined Date). This causes the creation of the database table in which instances of Employee are stored to contain references, or foreign keys, to the tables in which instances of Position and Office are stored.
-
-Now you can generate the controllers, views, and various other test components:
-
-```
-grails generate-all com.nuevaconsulting.embrow.Position
-grails generate-all com.nuevaconsulting.embrow.Office
-grails generate-all com.nuevaconsulting.embrow.Employee
-```
-
-At this point, you have a basic create-read-update-delete (CRUD) application ready to go. I’ve included some base data in the **grails-app/init/com/nuevaconsulting/BootStrap.groovy** to populate the tables.
-
-If you run the application with the command:
-
-```
-grails run-app
-```
-
-you will see the following screen in the browser at ****
-
-![Embrow home screen][12]
-
-The Embrow application home screen
-
-Clicking on the link for the OfficeController gives you a screen that looks like this:
-
-![Office list][14]
-
-The office list
-
-Note that this list is generated by the **OfficeController index** method and displayed by the view `office/index.gsp`.
-
-Similarly, clicking on the **EmployeeController** gives a screen that looks like this:
-
-![Employee controller][16]
-
-The employee controller
-
-Ok, that’s pretty ugly—what’s with the Position and Office links?
-
-Well, the views generated by the `generate-all` commands above create an **index.gsp** file that uses the Grails tag that by default shows the class name ( **com.nuevaconsulting.embrow.Position** ) and the persistent instance identifier ( **30** ). This behavior can be customized to yield something better looking, and there is some pretty neat stuff with the autogenerated links, the autogenerated pagination, and the autogenerated sortable columns.
-
-But even when it's fully cleaned up, this employee browser offers limited functionality. For example, what if you want to find all employees whose position includes the text “dev”? What if you want to combine columns for sorting so that the primary sort key is a surname and the secondary sort key is an office name?
Or what if you want to export a sorted subset to a spreadsheet or PDF to email to someone who doesn’t have access to the browser? - -The jQuery DataTables plugin provides this kind of extra functionality and allows you to create a full-fledged tabular data browser. - -### Creating the employee browser view and controller methods - -In order to create an employee browser based on jQuery DataTables, you must complete two tasks: - - 1. Create a Grails view that incorporates the HTML and JavaScript required to enable the DataTables - - 2. Add a method to the Grails controller to handle the new view - - - - -#### The employee browser view - -In the directory **embrow/grails-app/views/employee** , start by making a copy of the **index.gsp** file, calling it **browser.gsp** : - -``` -cd Projects -cd embrow/grails-app/views/employee -cp gsp browser.gsp -``` - -At this point, you want to customize the new **browser.gsp** file to add the relevant jQuery DataTables code. - -As a rule, I like to grab my JavaScript and CSS from a content provider when feasible; to do so in this case, after the line: - -``` -<g:message code="default.list.label" args="[entityName]" /> -``` - -insert the following lines: - -``` - - - - - - - - - - - - -``` - -Next, remove the code that provided the data pagination in **index.gsp** : - -``` -
-<div id="list-employee" class="content scaffold-list" role="main">
-<h1><g:message code="default.list.label" args="[entityName]" /></h1>
-<g:if test="${flash.message}">
-<div class="message" role="status">${flash.message}</div>
-</g:if>
-<f:table collection="${employeeList}" />
-<div class="pagination">
-<g:paginate total="${employeeCount ?: 0}" />
-</div>
-</div>
-``` - -and insert the code that materializes the jQuery DataTables. - -The first part to insert is the HTML that creates the basic tabular structure of the browser. For the application where DataTables talks to a database backend, provide only the table headers and footers; the DataTables JavaScript takes care of the table contents. - -``` -
-<table id="employee_dt">
-<caption>Employee Browser</caption>
-<thead>
-<tr>
-<th>Surname</th>
-<th>Given name(s)</th>
-<th>Position</th>
-<th>Office</th>
-<th>Extension</th>
-<th>Hired</th>
-<th>Salary</th>
-</tr>
-</thead>
-<tfoot>
-<tr>
-<th>Surname</th>
-<th>Given name(s)</th>
-<th>Position</th>
-<th>Office</th>
-<th>Extension</th>
-<th>Hired</th>
-<th>Salary</th>
-</tr>
-</tfoot>
-</table>
-```
-
-Next, insert a JavaScript block, which serves three primary functions: It sets the size of the text boxes shown in the footer for column filtering, it establishes the DataTables table model, and it creates a handler to do the column filtering.
-
-The code below handles sizing the filter boxes at the bottoms of the table columns:
-
-```
-$('#employee_dt tfoot th').each( function() {
-var title = $(this).text();
-if (title == 'Extension' || title == 'Hired')
-$(this).html('');
-else
-$(this).html('');
-});
-```
-
-Next, define the table model. This is where all the table options are provided, including the scrolling, rather than paginated, nature of the interface, the cryptic decorations to be provided according to the dom string, the ability to export data to CSV and other formats, as well as where the Ajax connection to the server is established. Note that the URL is created with a Groovy GString call to the Grails **createLink()** method, referring to the **browserLister** action in the **EmployeeController**. Also of interest is the definition of the columns of the table. This information is sent across to the back end, which queries the database and returns the appropriate records.
-
-```
-var table = $('#employee_dt').DataTable( {
-"scrollY": 500,
-"deferRender": true,
-"scroller": true,
-"dom": "Brtip",
-"buttons": [ 'copy', 'csv', 'excel', 'pdf', 'print' ],
-"processing": true,
-"serverSide": true,
-"ajax": {
-"url": "${createLink(controller: 'employee', action: 'browserLister')}",
-"type": "POST",
-},
-"columns": [
-{ "data": "surname" },
-{ "data": "givenNames" },
-{ "data": "position" },
-{ "data": "office" },
-{ "data": "extension" },
-{ "data": "hired" },
-{ "data": "salary" }
-]
-});
-```
-
-Finally, monitor the filter columns for changes and use them to apply the filter(s).
-
-```
-table.columns().every(function() {
-var that = this;
-$('input', this.footer()).on('keyup change', function(e) {
-if (that.search() != this.value && 8 < e.keyCode && e.keyCode < 32)
-that.search(this.value).draw();
-});
-});
-```
-
-And that’s it for the JavaScript. This completes the changes to the view code.
-
-Here’s a screenshot of the UI this view creates:
-
-![](https://opensource.com/sites/default/files/uploads/screen_4.png)
-
-Here’s another screenshot showing the filtering and multi-column sorting at work (looking for employees whose positions include the characters “dev”, ordering first by office, then by surname):
-
-![](https://opensource.com/sites/default/files/uploads/screen_5.png)
-
-Here’s another screenshot, showing what happens when you click on the CSV button:
-
-![](https://opensource.com/sites/default/files/uploads/screen6.png)
-
-And finally, here’s a screenshot showing the CSV data opened in LibreOffice:
-
-![](https://opensource.com/sites/default/files/uploads/screen7.png)
-
-Ok, so the view part looked pretty straightforward; therefore, the controller action must do all the heavy lifting, right? Let’s see…
-
-#### The employee controller browserLister action
-
-Recall that we saw this string
-
-```
-"${createLink(controller: 'employee', action: 'browserLister')}"
-```
-
-as the URL used for the Ajax calls from the DataTables table model. [createLink() is the method][17] behind a Grails tag that is used to dynamically generate a link as the HTML is preprocessed on the Grails server. This ends up generating a link to the **EmployeeController**, located in
-
-```
-embrow/grails-app/controllers/com/nuevaconsulting/embrow/EmployeeController.groovy
-```
-
-and specifically to the controller method **browserLister()**. I’ve left some print statements in the code so that the intermediate results can be seen in the terminal window where the application is running.
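Before stepping through the controller, it helps to see the shape of the request DataTables sends: flat keys such as `columns[0][search][value]` that the controller folds into a nested map. Here is a sketch of that same reshaping in plain JavaScript, runnable under Node; the sample keys are illustrative, not taken from a real request:

```javascript
// Fold flat DataTables-style keys like "columns[0][search][value]"
// into a nested object, mirroring the Groovy parameter-parsing loop.
function nest(params) {
  const out = {};
  for (const [key, value] of Object.entries(params)) {
    // "columns[0][search][value]" -> ["columns", "0", "search", "value"]
    const fields = key.replace(/\]/g, '').split('[');
    let table = out;
    for (let f = 0; f < fields.length - 1; f++) {
      if (!(fields[f] in table)) table[fields[f]] = {};
      table = table[fields[f]];
    }
    table[fields[fields.length - 1]] = value;
  }
  return out;
}

const flat = { 'draw': '1', 'columns[0][data]': 'surname', 'columns[0][search][value]': 'dev' };
console.log(JSON.stringify(nest(flat)));
// → {"draw":"1","columns":{"0":{"data":"surname","search":{"value":"dev"}}}}
```

The Groovy version in the controller does exactly this with `params.each` and a walking map reference.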
- -``` -    def browserLister() { -        // Applies filters and sorting to return a list of desired employees -``` - -First, print out the parameters passed to **browserLister()**. I usually start building controller methods with this code so that I’m completely clear on what my controller is receiving. - -``` -      println "employee browserLister params $params" -        println() -``` - -Next, process those parameters to put them in a more usable shape. First, the jQuery DataTables parameters, a Groovy map called **jqdtParams** : - -``` -def jqdtParams = [:] -params.each { key, value -> - def keyFields = key.replace(']','').split(/\[/) - def table = jqdtParams - for (int f = 0; f < keyFields.size() - 1; f++) { - def keyField = keyFields[f] - if (!table.containsKey(keyField)) - table[keyField] = [:] - table = table[keyField] - } - table[keyFields[-1]] = value -} -println "employee dataTableParams $jqdtParams" -println() -``` - -Next, the column data, a Groovy map called **columnMap** : - -``` -def columnMap = jqdtParams.columns.collectEntries { k, v -> - def whereTerm = null - switch (v.data) { - case 'extension': - case 'hired': - case 'salary': - if (v.search.value ==~ /\d+(,\d+)*/) - whereTerm = v.search.value.split(',').collect { it as Integer } - break - default: - if (v.search.value ==~ /[A-Za-z0-9 ]+/) - whereTerm = "%${v.search.value}%" as String - break - } - [(v.data): [where: whereTerm]] -} -println "employee columnMap $columnMap" -println() -``` - -Next, a list of all column names, retrieved from **columnMap** , and a corresponding list of how those columns should be ordered in the view, Groovy lists called **allColumnList** and **orderList** , respectively: - -``` -def allColumnList = columnMap.keySet() as List -println "employee allColumnList $allColumnList" -def orderList = jqdtParams.order.collect { k, v -> [allColumnList[v.column as Integer], v.dir] } -println "employee orderList $orderList" -``` - -We’re going to use Grails’ implementation of 
Hibernate criteria to actually carry out the selection of elements to be displayed as well as their ordering and pagination. Criteria requires a filter closure; in most examples, this is given as part of the creation of the criteria instance itself, but here we define the filter closure beforehand. Note in this case the relatively complex interpretation of the “date hired” filter, which is treated as a year and applied to establish date ranges, and the use of **createAlias** to allow us to reach into related classes Position and Office: - -``` -def filterer = { - createAlias 'position', 'p' - createAlias 'office', 'o' - - if (columnMap.surname.where) ilike 'surname', columnMap.surname.where - if (columnMap.givenNames.where) ilike 'givenNames', columnMap.givenNames.where - if (columnMap.position.where) ilike 'p.name', columnMap.position.where - if (columnMap.office.where) ilike 'o.name', columnMap.office.where - if (columnMap.extension.where) inList 'extension', columnMap.extension.where - if (columnMap.salary.where) inList 'salary', columnMap.salary.where - if (columnMap.hired.where) { - if (columnMap.hired.where.size() > 1) { - or { - columnMap.hired.where.each { - between 'hired', Date.parse('yyyy/MM/dd',"${it}/01/01" as String), - Date.parse('yyyy/MM/dd',"${it}/12/31" as String) - } - } - } else { - between 'hired', Date.parse('yyyy/MM/dd',"${columnMap.hired.where[0]}/01/01" as String), - Date.parse('yyyy/MM/dd',"${columnMap.hired.where[0]}/12/31" as String) - } - } -} -``` - -At this point, it’s time to apply the foregoing. 
The first step is to get a total count of all the Employee instances, required by the pagination code:

```
        def recordsTotal = Employee.count()
        println "employee recordsTotal $recordsTotal"
```

Next, apply the filter to the Employee instances to get the count of filtered results, which will always be less than or equal to the total number (again, this is for the pagination code):

```
        def c = Employee.createCriteria()
        def recordsFiltered = c.count {
            filterer.delegate = delegate
            filterer()
        }
        println "employee recordsFiltered $recordsFiltered"

```

Once you have those two counts, you can get the actual filtered instances using the pagination and ordering information as well.

```
 def orderer = Employee.withCriteria {
 filterer.delegate = delegate
 filterer()
 orderList.each { oi ->
 switch (oi[0]) {
 case 'surname': order 'surname', oi[1]; break
 case 'givenNames': order 'givenNames', oi[1]; break
 case 'position': order 'p.name', oi[1]; break
 case 'office': order 'o.name', oi[1]; break
 case 'extension': order 'extension', oi[1]; break
 case 'hired': order 'hired', oi[1]; break
 case 'salary': order 'salary', oi[1]; break
 }
 }
 maxResults (jqdtParams.length as Integer)
 firstResult (jqdtParams.start as Integer)
 }
```

To be completely clear, the pagination code in DataTables manages three counts: the total number of records in the data set, the number resulting after the filters are applied, and the number to be displayed on the page (whether the display is scrolling or paginated). The ordering is applied to all the filtered records and the pagination is applied to chunks of those filtered records for display purposes.
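To make those three counts concrete, here is a tiny sketch with hypothetical numbers (not taken from the sample data); the variable names follow the controller code:

```javascript
// Hypothetical: 1000 employees in total, 42 match the current filters,
// and the client requested chunks of 10 rows.
const recordsTotal = 1000;     // what Employee.count() would return
const recordsFiltered = 42;    // the criteria count with filterer applied
const pageLength = 10;         // what jqdtParams.length would carry
const pages = Math.ceil(recordsFiltered / pageLength);
console.log(`${recordsFiltered} of ${recordsTotal} records match, shown in ${pages} chunks`);
// → 42 of 1000 records match, shown in 5 chunks
```

Ordering and `maxResults`/`firstResult` then operate only on the 42 filtered records, never on the full 1000.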
- -Next, process the results returned by the orderer, creating links to the Employee, Position, and Office instance in each row so the user can click on these links to get all the detail on the relevant instance: - -``` -        def dollarFormatter = new DecimalFormat('$##,###.##') -        def employees = orderer.collect { employee -> -            ['surname': "${employee.surname}", -                'givenNames': employee.givenNames, -                'position': "${employee.position?.name}", -                'office': "${employee.office?.name}", -                'extension': employee.extension, -                'hired': employee.hired.format('yyyy/MM/dd'), -                'salary': dollarFormatter.format(employee.salary)] -        } -``` - -And finally, create the result you want to return and give it back as JSON, which is what jQuery DataTables requires. - -``` - def result = [draw: jqdtParams.draw, recordsTotal: recordsTotal, recordsFiltered: recordsFiltered, data: employees] - render(result as JSON) - } -``` - -That’s it. - -If you’re familiar with Grails, this probably seems like more work than you might have originally thought, but there’s no rocket science here, just a lot of moving parts. However, if you haven’t had much exposure to Grails (or to Groovy), there’s a lot of new stuff to understand—closures, delegates, and builders, among other things. - -In that case, where to start? The best place is to learn about Groovy itself, especially [Groovy closures][18] and [Groovy delegates and builders][19]. Then go back to the reading suggested above on Grails and Hibernate criteria queries. - -### Conclusions - -jQuery DataTables make awesome tabular data browsers for Grails. Coding the view isn’t too tricky, but the PHP examples provided in the DataTables documentation take you only so far. 
In particular, they aren’t written with Grails programmers in mind, nor do they explore the finer details of using elements that are references to other classes (essentially lookup tables). - -I’ve used this approach to make a couple of data browsers that allow the user to select which columns to view and accumulate record counts, or just to browse the data. The performance is good even in million-row tables on a relatively modest VPS. - -One caveat: I have stumbled upon some problems with the various Hibernate criteria mechanisms exposed in Grails (see my other GitHub repositories), so care and experimentation is required. If all else fails, the alternative approach is to build SQL strings on the fly and execute them instead. As of this writing, I prefer to work with Grails criteria, unless I get into messy subqueries, but that may just reflect my relative lack of experience with subqueries in Hibernate. - -I hope you Grails programmers out there find this interesting. Please feel free to leave comments or suggestions below. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/using-grails-jquery-and-datatables - -作者:[Chris Hermansen][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clhermansen -[1]: https://grails.org/ -[2]: https://jquery.com/ -[3]: https://datatables.net/ -[4]: http://php.net/ -[5]: http://groovy-lang.org/ -[6]: https://github.com/monetschemist/grails-datatables -[7]: https://www.vim.org/ -[8]: http://openjdk.java.net/ -[9]: http://sdkman.io/ -[10]: http://guides.grails.org/creating-your-first-grails-app/guide/index.html -[11]: https://opensource.com/file/410061 -[12]: https://opensource.com/sites/default/files/uploads/screen_1.png (Embrow home screen) -[13]: https://opensource.com/file/410066 -[14]: https://opensource.com/sites/default/files/uploads/screen_2.png (Office list screenshot) -[15]: https://opensource.com/file/410071 -[16]: https://opensource.com/sites/default/files/uploads/screen3.png (Employee controller screenshot) -[17]: https://gsp.grails.org/latest/ref/Tags/createLink.html -[18]: http://groovy-lang.org/closures.html -[19]: http://groovy-lang.org/dsls.html diff --git a/sources/tech/20181002 4 open source invoicing tools for small businesses.md b/sources/tech/20181002 4 open source invoicing tools for small businesses.md deleted file mode 100644 index 546e0c289f..0000000000 --- a/sources/tech/20181002 4 open source invoicing tools for small businesses.md +++ /dev/null @@ -1,78 +0,0 @@ -fuowang 翻译中 - -4 open source invoicing tools for small businesses -====== -Manage your billing and get paid with easy-to-use, web-based invoicing software. 
- -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory2.png?itok=AvneLxFp) - -No matter what your reasons for starting a small business, the key to keeping that business going is getting paid. Getting paid usually means sending a client an invoice. - -It's easy enough to whip up an invoice using LibreOffice Writer or LibreOffice Calc, but sometimes you need a bit more. A more professional look. A way of keeping track of your invoices. Reminders about when to follow up on the invoices you've sent. - -There's a wide range of commercial and closed-source invoicing tools out there. But the offerings on the open source side of the fence are just as good, and maybe even more flexible, than their closed source counterparts. - -Let's take a look at four web-based open source invoicing tools that are great choices for freelancers and small businesses on a tight budget. I reviewed two of them in 2014, in an [earlier version][1] of this article. These four picks are easy to use and you can use them on just about any device. - -### Invoice Ninja - -I've never been a fan of the term ninja. Despite that, I like [Invoice Ninja][2]. A lot. It melds a simple interface with a set of features that let you create, manage, and send invoices to clients and customers. - -You can easily configure multiple clients, track payments and outstanding invoices, generate quotes, and email invoices. What sets Invoice Ninja apart from its competitors is its [integration with][3] over 40 online popular payment gateways, including PayPal, Stripe, WePay, and Apple Pay. - -[Download][4] a version that you can install on your own server or get an account with the [hosted version][5] of Invoice Ninja. There's a free version and a paid tier that will set you back US$ 8 a month. - -### InvoicePlane - -Once upon a time, there was a nifty open source invoicing tool called FusionInvoice. 
One day, its creators took the latest version of the code proprietary. That didn't end happily, as FusionInvoice's doors were shut for good in 2018. But that wasn't the end of the application. An old version of the code stayed open source and morphed into [InvoicePlane][6], which packs all of FusionInvoice's goodness. - -Creating an invoice takes just a couple of clicks. You can make them as minimal or detailed as you need. When you're ready, you can email your invoices or output them as PDFs. You can also create recurring invoices for clients or customers you regularly bill. - -InvoicePlane does more than generate and track invoices. You can also create quotes for jobs or goods, track products you sell, view and enter payments, and run reports on your invoices. - -[Grab the code][7] and install it on your web server. Or, if you're not quite ready to do that, [take the demo][8] for a spin. - -### OpenSourceBilling - -Described by its developer as "beautifully simple billing software," [OpenSourceBilling][9] lives up to the description. It has one of the cleanest interfaces I've seen, which makes configuring and using the tool a breeze. - -OpenSourceBilling stands out because of its dashboard, which tracks your current and past invoices, as well as any outstanding amounts. Your information is broken up into graphs and tables, which makes it easy to follow. - -You do much of the configuration on the invoice itself. You can add items, tax rates, clients, and even payment terms with a click and a few keystrokes. OpenSourceBilling saves that information across all of your invoices, both new and old. - -As with some of the other tools we've looked at, OpenSourceBilling has a [demo][10] you can try. - -### BambooInvoice - -When I was a full-time freelance writer and consultant, I used [BambooInvoice][11] to bill my clients. When its original developer stopped working on the software, I was a bit disappointed. But BambooInvoice is back, and it's as good as ever. 
- -What attracted me to BambooInvoice is its simplicity. It does one thing and does it well. You can create and edit invoices, and BambooInvoice keeps track of them by client and by the invoice numbers you assign to them. It also lets you know which invoices are open or overdue. You can email the invoices from within the application or generate PDFs. You can also run reports to keep tabs on your income. - -To [install][12] and use BambooInvoice, you'll need a web server running PHP 5 or newer as well as a MySQL database. Chances are you already have access to one, so you're good to go. - -Do you have a favorite open source invoicing tool? Feel free to share it by leaving a comment. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/open-source-invoicing-tools - -作者:[Scott Nesbitt][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/scottnesbitt -[1]: https://opensource.com/business/14/9/4-open-source-invoice-tools -[2]: https://www.invoiceninja.org/ -[3]: https://www.invoiceninja.com/integrations/ -[4]: https://github.com/invoiceninja/invoiceninja -[5]: https://www.invoiceninja.com/invoicing-pricing-plans/ -[6]: https://invoiceplane.com/ -[7]: https://wiki.invoiceplane.com/en/1.5/getting-started/installation -[8]: https://demo.invoiceplane.com/ -[9]: http://www.opensourcebilling.org/ -[10]: http://demo.opensourcebilling.org/ -[11]: https://www.bambooinvoice.net/ -[12]: https://sourceforge.net/projects/bambooinvoice/ diff --git a/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md b/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md index a9b20ac54d..443627f702 100644 --- a/sources/tech/20181008 KeeWeb - An Open Source, Cross 
Platform Password Manager.md
+++ b/sources/tech/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md
@@ -1,3 +1,5 @@
+Translating by jlztan
+
 KeeWeb – An Open Source, Cross Platform Password Manager
 ======
diff --git a/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md
index 27616a9f6e..71adf0112b 100644
--- a/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md
+++ b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md
@@ -1,3 +1,5 @@
+translating by cyleft
+
 Taking notes with Laverna, a web-based information organizer
 ======
diff --git a/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md b/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md
deleted file mode 100644
index c119f69ebf..0000000000
--- a/sources/tech/20181009 6 Commands To Shutdown And Reboot The Linux System From Terminal.md
+++ /dev/null
@@ -1,331 +0,0 @@
-translating---cyleft
-====
-
-6 Commands To Shutdown And Reboot The Linux System From Terminal
-======
-Linux administrators perform many tasks in their routine work, and shutting down or rebooting the system is among them.
-
-It’s one of the riskier tasks, because sometimes a machine doesn’t come back up, and the admin then has to spend extra time troubleshooting it.
-
-These tasks can be performed through the CLI in Linux. Most of the time, Linux administrators prefer to perform them via the CLI, because that is what they are familiar with.
-
-There are a few commands available in Linux for this purpose, and the user needs to choose the appropriate one for the task at hand.
-
-Each of these commands has its own features that a Linux admin can put to use.
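Before reaching for any of these commands, it helps to know who would be interrupted. A quick, safe pre-flight check with standard tools might look like this:

```shell
# List the interactive sessions a shutdown or reboot would interrupt
who

# Show how long the system has been up, plus its load averages --
# a rough hint of whether work is still running on the box
uptime
```

Both commands are read-only, so they are safe to run on any system before scheduling a shutdown.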
-
-**Suggested Read :**
-**(#)** [11 Methods To Find System/Server Uptime In Linux][1]
-**(#)** [Tuptime – A Tool To Report The Historical And Statistical Running Time Of Linux System][2]
-
-When a shutdown or reboot is initiated, all logged-in users and processes are notified. Also, no new logins are allowed if the time argument is used.
-
-I would suggest you double-check before performing this action, because there are a few prerequisites to work through to make sure everything goes smoothly.
-
-Those steps are listed below.
-
- * Make sure you have console access to troubleshoot in case any issues arise: VMware access for VMs and IPMI/iLO/iDRAC access for physical servers.
- * Create a ticket as per your company procedure, either an Incident or a Change ticket, and get approval.
- * Back up the important configuration files and copy them to other servers for safety.
- * Verify the log files (perform the pre-check).
- * Communicate your activity to dependent teams, such as the DBA and Application teams.
- * Ask them to bring down their database or application services, and get confirmation from them.
- * Validate the same from your end using the appropriate command, to double-confirm.
- * Finally, reboot the system.
- * Verify the log files (perform the post-check). If everything is good, move to the next step; if you find something wrong, troubleshoot accordingly.
- * Once it’s back up and running, ask the dependent teams to bring up their applications.
- * Monitor for some time, and let them know that everything is working as expected.
-
-
-
-This task can be performed using the following commands.
-
- * **`shutdown Command:`** shutdown command used to halt, power-off or reboot the machine.
- * **`halt Command:`** halt command used to halt, power-off or reboot the machine.
- * **`poweroff Command:`** poweroff command used to halt, power-off or reboot the machine.
- * **`reboot Command:`** reboot command used to halt, power-off or reboot the machine. - * **`init Command:`** init (short for initialization) is the first process started during booting of the computer system. - * **`systemctl Command:`** systemd is a system and service manager for Linux operating systems. - - - -### Method-1: How To Shutdown And Reboot The Linux System Using Shutdown Command - -shutdown command used to power-off or reboot a Linux remote machine or local host. It’s offering -multiple options to perform this task effectively. If the time argument is used, 5 minutes before the system goes down the /run/nologin file is created to ensure that further logins shall not be allowed. - -The general syntax is - -``` -# shutdown [OPTION] [TIME] [MESSAGE] - -``` - -Run the below command to shutdown a Linux machine immediately. It will kill all the processes immediately and will shutdown the system. - -``` -# shutdown -h now - -``` - - * **`-h:`** Equivalent to –poweroff, unless –halt is specified. - - - -Alternatively we can use the shutdown command with `halt` option to bring down the machine immediately. - -``` -# shutdown --halt now -or -# shutdown -H now - -``` - - * **`-H, --halt:`** Halt the machine. - - - -Alternatively we can use the shutdown command with `poweroff` option to bring down the machine immediately. - -``` -# shutdown --poweroff now -or -# shutdown -P now - -``` - - * **`-P, --poweroff:`** Power-off the machine (the default). - - - -Run the below command to shutdown a Linux machine immediately. It will kill all the processes immediately and will shutdown the system. - -``` -# shutdown -h now - -``` - - * **`-h:`** Equivalent to –poweroff, unless –halt is specified. - - - -If you run the below commands without time parameter, it will wait for a minute then execute the given command. - -``` -# shutdown -h -Shutdown scheduled for Mon 2018-10-08 06:42:31 EDT, use 'shutdown -c' to cancel. 
- -[email protected]# -Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT): - -The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT! - -``` - -All other logged in users can see a broadcast message in their terminal like below. - -``` -[[email protected] ~]$ -Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT): - -The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT! - -``` - -for Halt option. - -``` -# shutdown -H -Shutdown scheduled for Mon 2018-10-08 06:37:53 EDT, use 'shutdown -c' to cancel. - -[email protected]# -Broadcast message from [email protected] (Mon 2018-10-08 06:36:53 EDT): - -The system is going down for system halt at Mon 2018-10-08 06:37:53 EDT! - -``` - -for Poweroff option. - -``` -# shutdown -P -Shutdown scheduled for Mon 2018-10-08 06:40:07 EDT, use 'shutdown -c' to cancel. - -[email protected]# -Broadcast message from [email protected] (Mon 2018-10-08 06:39:07 EDT): - -The system is going down for power-off at Mon 2018-10-08 06:40:07 EDT! - -``` - -This can be cancelled by hitting `shutdown -c` option on your terminal. - -``` -# shutdown -c - -Broadcast message from [email protected] (Mon 2018-10-08 06:39:09 EDT): - -The system shutdown has been cancelled at Mon 2018-10-08 06:40:09 EDT! - -``` - -All other logged in users can see a broadcast message in their terminal like below. - -``` -[[email protected] ~]$ -Broadcast message from [email protected] (Mon 2018-10-08 06:41:35 EDT): - -The system shutdown has been cancelled at Mon 2018-10-08 06:42:35 EDT! - -``` - -Add a time parameter, if you want to perform shutdown or reboot in `N` seconds. Here you can add broadcast a custom message to logged-in users. In this example, we are rebooting the machine in another 5 minutes. - -``` -# shutdown -r +5 "To activate the latest Kernel" -Shutdown scheduled for Mon 2018-10-08 07:13:16 EDT, use 'shutdown -c' to cancel. 
- -[[email protected] ~]# -Broadcast message from [email protected] (Mon 2018-10-08 07:08:16 EDT): - -To activate the latest Kernel -The system is going down for reboot at Mon 2018-10-08 07:13:16 EDT! - -``` - -Run the below command to reboot a Linux machine immediately. It will kill all the processes immediately and will reboot the system. - -``` -# shutdown -r now - -``` - - * **`-r, --reboot:`** Reboot the machine. - - - -### Method-2: How To Shutdown And Reboot The Linux System Using reboot Command - -reboot command used to power-off or reboot a Linux remote machine or local host. Reboot command comes with two useful options. - -It will perform a graceful shutdown and restart of the machine (This is similar to your restart option which is available in your system menu). - -Run “reboot’ command without any option to reboot Linux machine. - -``` -# reboot - -``` - -Run the “reboot” command with `-p` option to power-off or shutdown the Linux machine. - -``` -# reboot -p - -``` - - * **`-p, --poweroff:`** Power-off the machine, either halt or poweroff commands is invoked. - - - -Run the “reboot” command with `-f` option to forcefully reboot the Linux machine (This is similar to pressing the power button on the CPU). - -``` -# reboot -f - -``` - - * **`-f, --force:`** Force immediate halt, power-off, or reboot. - - - -### Method-3: How To Shutdown And Reboot The Linux System Using init Command - -init (short for initialization) is the first process started during booting of the computer system. - -It will check the /etc/inittab file to decide the Linux run level. Also, allow users to perform shutdown and reboot the Linux machine. There are seven runlevels exist, from zero to six. - -**Suggested Read :** -**(#)** [How To Check All Running Services In Linux][3] - -Run the below init command to shutdown the system . - -``` -# init 0 - -``` - - * **`0:`** Halt – to shutdown the system. - - - -Run the below init command to reboot the system . 
- -``` -# init 6 - -``` - - * **`6:`** Reboot – to reboot the system. - - - -### Method-4: How To Shutdown The Linux System Using halt Command - -halt command used to power-off or shutdown a Linux remote machine or local host. -halt terminates all processes and shuts down the cpu. - -``` -# halt - -``` - -### Method-5: How To Shutdown The Linux System Using poweroff Command - -poweroff command used to power-off or shutdown a Linux remote machine or local host. Poweroff is exactly like halt, but it also turns off the unit itself (lights and everything on a PC). It sends an ACPI command to the board, then to the PSU, to cut the power. - -``` -# poweroff - -``` - -### Method-6: How To Shutdown And Reboot The Linux System Using systemctl Command - -Systemd is a new init system and system manager which was implemented/adapted into all the major Linux distributions over the traditional SysV init systems. - -systemd is compatible with SysV and LSB init scripts. It can work as a drop-in replacement for sysvinit system. systemd is the first process get started by kernel and holding PID 1. - -**Suggested Read :** -**(#)** [chkservice – A Tool For Managing Systemd Units From Linux Terminal][4] - -It’s a parent process for everything and Fedora 15 is the first distribution which was adapted systemd instead of upstart. - -systemctl is command line utility and primary tool to manage the systemd daemons/services such as (start, restart, stop, enable, disable, reload & status). - -systemd uses .service files Instead of bash scripts (SysVinit uses). systemd sorts all daemons into their own Linux cgroups and you can see the system hierarchy by exploring /cgroup/systemd file. 
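To make the `.service` files mentioned above concrete, here is a minimal, entirely hypothetical unit; the service name `myapp` and its binary path are placeholders, not part of any real package:

```
# /etc/systemd/system/myapp.service -- hypothetical example unit
[Unit]
Description=Example background service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Running `systemctl enable myapp` on such a unit creates a symlink under the target’s `.wants` directory, which is what makes the service start at boot.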
- -``` -# systemctl halt -# systemctl poweroff -# systemctl reboot -# systemctl suspend -# systemctl hibernate - -``` - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/6-commands-to-shutdown-halt-poweroff-reboot-the-linux-system/ - -作者:[Prakash Subramanian][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.2daygeek.com/author/prakash/ -[b]: https://github.com/lujun9972 -[1]: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/ -[2]: https://www.2daygeek.com/tuptime-a-tool-to-report-the-historical-and-statistical-running-time-of-linux-system/ -[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/ -[4]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/ diff --git a/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md b/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md index 8ee4f34897..f2885b177c 100644 --- a/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md +++ b/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md @@ -1,3 +1,4 @@ +translating by leemeans Exploring the Linux kernel: The secrets of Kconfig/kbuild ====== Dive into understanding how the Linux config/build system works. 
diff --git a/sources/tech/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md b/sources/tech/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md
deleted file mode 100644
index e1b66d4f6d..0000000000
--- a/sources/tech/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md
+++ /dev/null
@@ -1,248 +0,0 @@
-Translating by way-ww
-
-How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command
-======
-It’s an important topic for Linux admins (such a wonderful topic), so everyone should be aware of it and practice using these tools efficiently.
-
-In Linux, whenever we install a package that ships services or daemons, their “init & systemd” scripts are added by default, but they are not enabled.
-
-Hence, we need to enable or disable the service manually when required. There are three major init systems in Linux which are well known and still in use.
-
-### What is init System?
-
-In Linux/Unix based operating systems, init (short for initialization) is the first process started by the kernel during system boot.
-
-It holds process id (PID) 1. It keeps running in the background until the system is shut down.
-
-Init looks at the `/etc/inittab` file to decide the Linux run level, then starts all other processes and applications in the background according to that run level.
-
-The BIOS, MBR, GRUB and kernel stages all run before the init process as part of the Linux boot process.
-
-Below are the available run levels for Linux (there are seven runlevels, from zero to six).
-
- * **`0:`** halt
- * **`1:`** Single user mode
- * **`2:`** Multiuser, without NFS
- * **`3:`** Full multiuser mode
- * **`4:`** Unused
- * **`5:`** X11 (GUI – Graphical User Interface)
- * **`6:`** reboot
-
-
-
-The three init systems below are widely used in Linux.
-
- * System V (Sys V)
- * Upstart
- * systemd
-
-
-
-### What is System V (Sys V)?
-
-System V (Sys V) is one of the first and most traditional init systems for Unix-like operating systems. init is the first process started by the kernel during system boot, and it is the parent process for everything.
-
-Most Linux distributions initially used the traditional init system called System V (Sys V). Over the years, several replacement init systems were released to address its design limitations, such as launchd, the Service Management Facility, systemd and Upstart.
-
-But systemd has since been adopted by most major Linux distributions in place of the traditional SysV init system.
-
-### What is Upstart?
-
-Upstart is an event-based replacement for the /sbin/init daemon which handles starting tasks and services during boot, stopping them during shutdown and supervising them while the system is running.
-
-It was originally developed for the Ubuntu distribution, but is intended to be suitable for deployment in all Linux distributions as a replacement for the venerable System-V init.
-
-It was used in Ubuntu from 9.10 to Ubuntu 14.10 and in RHEL 6 based systems, after which it was replaced with systemd.
-
-### What is systemd?
-
-Systemd is a new init system and system manager that has been adopted by all the major Linux distributions in place of the traditional SysV init systems.
-
-systemd is compatible with SysV and LSB init scripts and can work as a drop-in replacement for the sysvinit system. systemd is the first process started by the kernel and holds PID 1.
-
-It is the parent process for everything, and Fedora 15 was the first distribution to adopt systemd instead of Upstart.
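For reference, the classic SysV runlevels correspond to systemd targets. The snippet below simply prints the usual mapping, so it is safe to run anywhere:

```shell
# Print the conventional runlevel-to-systemd-target correspondence.
cat <<'EOF'
0 -> poweroff.target
1 -> rescue.target
2 -> multi-user.target
3 -> multi-user.target
4 -> multi-user.target
5 -> graphical.target
6 -> reboot.target
EOF
```

On a running systemd machine, `systemctl get-default` shows which of these targets the system boots into by default.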
systemctl is command line utility and primary tool to manage the systemd daemons/services such as (start, restart, stop, enable, disable, reload & status). - -systemd uses .service files Instead of bash scripts (SysVinit uses). systemd sorts all daemons into their own Linux cgroups and you can see the system hierarchy by exploring `/cgroup/systemd` file. - -### How to Enable or Disable Services on Boot Using chkconfig Commmand? - -The chkconfig utility is a command-line tool that allows you to specify in which -runlevel to start a selected service, as well as to list all available services along with their current setting. - -Also, it will allows us to enable or disable a services from the boot. Make sure you must have superuser privileges (either root or sudo) to use this command. - -All the services script are located on `/etc/rd.d/init.d`. - -### How to list All Services in run-level - -The `-–list` parameter displays all the services along with their current status (What run-level the services are enabled or disabled). - -``` - # chkconfig --list - NetworkManager 0:off 1:off 2:on 3:on 4:on 5:on 6:off - abrt-ccpp 0:off 1:off 2:off 3:on 4:off 5:on 6:off - abrtd 0:off 1:off 2:off 3:on 4:off 5:on 6:off - acpid 0:off 1:off 2:on 3:on 4:on 5:on 6:off - atd 0:off 1:off 2:off 3:on 4:on 5:on 6:off - auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off - . - . -``` - -### How to check the Status of Specific Service - -If you would like to see a particular service status in run-level then use the following format and grep the required service. - -In this case, we are going to check the `auditd` service status in run-level. - -``` - # chkconfig --list| grep auditd - auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off -``` - -### How to Enable a Particular Service on Run Levels - -Use `--level` parameter to enable a service in the required run-level. In this case, we are going to enable `httpd` service on run-level 3 and 5. 
-
-```
- # chkconfig --level 35 httpd on
-```
-
-### How to Disable a Particular Service on Run Levels
-
-Use the `--level` parameter to disable a service in the required run-level. In this case, we are going to disable the `httpd` service on run-levels 3 and 5.
-
-```
- # chkconfig --level 35 httpd off
-```
-
-### How to Add a new Service to the Startup List
-
-The `--add` parameter allows us to add any new service to the startup list. By default, it will turn the service on for levels 2, 3, 4 and 5 automatically.
-
-```
- # chkconfig --add nagios
-```
-
-### How to Remove a Service from Startup List
-
-Use the `--del` parameter to remove a service from the startup list. Here, we are going to remove the Nagios service from the startup list.
-
-```
- # chkconfig --del nagios
-```
-
-### How to Enable or Disable Services on Boot Using systemctl Command?
-
-systemctl is a command line utility and the primary tool to manage systemd daemons/services: start, restart, stop, enable, disable, reload & status.
-
-All the created systemd unit files are located in `/etc/systemd/system/`.
-
-### How to list All Services
-
-Use the following command to list all the services, both enabled and disabled.
-
-```
- # systemctl list-unit-files --type=service
- UNIT FILE STATE
- arp-ethers.service disabled
- auditd.service enabled
- [email protected] enabled
- blk-availability.service disabled
- brandbot.service static
- [email protected] static
- chrony-wait.service disabled
- chronyd.service enabled
- cloud-config.service enabled
- cloud-final.service enabled
- cloud-init-local.service enabled
- cloud-init.service enabled
- console-getty.service disabled
- console-shell.service disabled
- [email protected] static
- cpupower.service disabled
- crond.service enabled
- .
- .
- 150 unit files listed.
-```
-
-If you would like to see the status of a particular service, use the following format and grep for the required service. In this case, we are going to check the `httpd` service status.
- -``` - # systemctl list-unit-files --type=service | grep httpd - httpd.service disabled -``` - -### How to Enable a Particular Service on boot - -Use the following systemctl command format to enable a particular service. To enable a service, it will create a symlink. The same can be found below. - -``` - # systemctl enable httpd - Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service. -``` - -Run the following command to double check whether the services is enabled or not on boot. - -``` - # systemctl is-enabled httpd - enabled -``` - -### How to Disable a Particular Service on boot - -Use the following systemctl command format to disable a particular service. When you run the command, it will remove a symlink which was created by you while enabling the service. The same can be found below. - -``` - # systemctl disable httpd - Removed symlink /etc/systemd/system/multi-user.target.wants/httpd.service. -``` - -Run the following command to double check whether the services is disabled or not on boot. - -``` - # systemctl is-enabled httpd - disabled -``` - -### How to Check the current run level - -Use the following systemctl command to verify which run-level you are in. Still “runlevel” command works with systemd, however runlevels is a legacy concept in systemd so, i would advise you to use systemctl command for all activity. - -We are in `run-level 3`, the same is showing below as `multi-user.target`. 
- -``` - # systemctl list-units --type=target - UNIT LOAD ACTIVE SUB DESCRIPTION - basic.target loaded active active Basic System - cloud-config.target loaded active active Cloud-config availability - cryptsetup.target loaded active active Local Encrypted Volumes - getty.target loaded active active Login Prompts - local-fs-pre.target loaded active active Local File Systems (Pre) - local-fs.target loaded active active Local File Systems - multi-user.target loaded active active Multi-User System - network-online.target loaded active active Network is Online - network-pre.target loaded active active Network (Pre) - network.target loaded active active Network - paths.target loaded active active Paths - remote-fs.target loaded active active Remote File Systems - slices.target loaded active active Slices - sockets.target loaded active active Sockets - swap.target loaded active active Swap - sysinit.target loaded active active System Initialization - timers.target loaded active active Timers -``` --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/how-to-enable-or-disable-services-on-boot-in-linux-using-chkconfig-and-systemctl-command/ - -作者:[Prakash Subramanian][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.2daygeek.com/author/prakash/ -[b]: https://github.com/lujun9972 diff --git a/sources/tech/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md b/sources/tech/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md deleted file mode 100644 index dbb5a54dde..0000000000 --- a/sources/tech/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md +++ /dev/null @@ -1,96 +0,0 @@ -Translating by MjSeven - - -Kali Linux: What You Must Know Before Using it – FOSS Post -====== 
-![](https://i1.wp.com/fosspost.org/wp-content/uploads/2018/10/kali-linux.png?fit=1237%2C527&ssl=1)
-
-Kali Linux is the industry’s leading Linux distribution for penetration testing and ethical hacking. It is a distribution that ships with tons and tons of hacking and penetration tools and software by default, and it is widely recognized in all parts of the world, even among Windows users who may not even know what Linux is.
-
-Because of the latter, many people try to get along with Kali Linux although they don’t even understand the basics of a Linux system. The reasons vary from having fun, to faking being a hacker to impress a girlfriend, to simply trying to hack the neighbors’ WiFi network for free Internet. None of these is a good reason to use Kali Linux.
-
-Here are some things you should know before even planning to use Kali Linux.
-
-### Kali Linux is Not for Beginners
-
-![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-000.png?resize=850%2C478&ssl=1)
-Kali Linux Default GNOME Desktop
-
-If you started using Linux just a few months ago, or if you don’t consider yourself to be above average in terms of knowledge, then Kali Linux is not for you. If you are going to ask things like “How do I install Steam on Kali? How do I make my printer work on Kali? How do I solve the APT sources error on Kali?”, then Kali Linux is not suitable for you.
-
-Kali Linux is mainly made for professionals wanting to run penetration testing suites and for people who want to learn ethical hacking and digital forensics. But even if you belong to the latter group, the average Kali Linux user should expect a lot of trouble when using Kali Linux for day-to-day work, and should take a very careful approach to how the tools and software are used; it’s not just “let’s install it and run everything”. Every tool must be used carefully, and every piece of software you install must be carefully examined.
- -**Good Read:** [What are the components of a Linux system?][1] - -Stuff which the average Linux user can’t do normally. A better approach would be to spend few weeks learning about Linux and its daemons, services, software, distributions and the way it works, and then watch few dozens of videos and courses about ethical hacking, and only then, try to use Kali to apply what you learned. - -### it Can Get You Hacked - -![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-001.png?resize=850%2C478&ssl=1) -Kali Linux Hacking & Testing Tools - -In a normal Linux system, there’s one account for normal user and one separate account for root. This is not the case in Kali Linux. Kali Linux uses the root account by default and doesn’t provide you with a normal user account. This is because almost all security tools available in Kali do require root privileges, and to avoid asking you for root password every minute, they designed it that way. - -Of course, you could simply create a normal user account and start using it. Well, it’s still not recommended because that’s not how the Kali Linux system design is meant to work. You’ll face a lot of problems then in using programs, opening ports, debugging software, discovering why this thing doesn’t work only to discover that it was a weird privilege bug. You will also be annoyed by all the tools that will require you to enter the password each time you try to do anything on your system. - -Now, since you are forced to use it in as a root user, all the software you run on your system will also run with root privileges. This is bad if you don’t know what you are doing, because if there’s a vulnerability in Firefox for example and you visit one of the infected dark web sites, the hacker will be able to get full root permissions on your PC and hack you, which would have been limited if you were using a normal user account. 
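If you do decide to add a normal account as described above, the commands are straightforward. This is a hedged sketch: `kaliuser` and the initial password are placeholders, and on a real Kali system you would pick your own and change the password immediately:

```shell
# Sketch: create an unprivileged everyday account (run as root).
# "kaliuser" and "ChangeMe_123" are placeholders only.
if [ "$(id -u)" -eq 0 ]; then
    useradd -m -s /bin/bash kaliuser || echo "user may already exist"
    echo 'kaliuser:ChangeMe_123' | chpasswd       # set an initial password
    # grant sudo only if this system actually has a "sudo" group
    usermod -aG sudo kaliuser 2>/dev/null || echo "note: no sudo group here"
else
    echo "Re-run as root on the Kali machine to create the account."
fi
```

After logging in as that user, tools that genuinely need root can still be run through `sudo`, which limits the damage a compromised browser or misbehaving tool can do.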
 Also, some tools that you may install and use can open ports and leak information without your knowledge, so if you are not extremely careful, people can hack you in the same way you may try to hack them.
-
-If you visit Facebook groups related to Kali Linux, you’ll notice that almost a quarter of the posts in these groups are people calling for help because someone hacked them.
-
-### it Can Get You in Jail
-
-Kali Linux provides the software as-is. How you use it is your responsibility alone.
-
-In most advanced countries around the world, using penetration testing tools against public WiFi networks or other people’s devices can easily land you in jail. And don’t think you can’t be tracked just because you are using Kali; many systems are configured with sophisticated logging to track whoever tries to listen in on or hack their networks, and if you stumble upon one of these, it can destroy your life.
-
-Never use Kali Linux tools against devices or networks that do not belong to you or that you have not been given explicit permission to test. Saying that you didn’t know what you were doing won’t be accepted as an excuse in court.
-
-### Modified Kernel and Software
-
-Kali is [based][2] on Debian (the Testing branch, which means that Kali Linux uses a rolling release model), so it uses most of the software architecture from there, and you will find most of its software just as it is in Debian.
-
-However, some packages were modified to harden security and fix some possible vulnerabilities. The Linux kernel that Kali uses, for example, is patched to allow wireless injection on various devices. These patches are not normally available in the vanilla kernel. Also, Kali Linux does not depend on Debian servers and mirrors, but builds the packages on its own servers.
Here are the default software sources in the latest release:
-
-```
- deb http://http.kali.org/kali kali-rolling main contrib non-free
- deb-src http://http.kali.org/kali kali-rolling main contrib non-free
-```
-
-That’s why, for some specific software, you will notice a different behaviour when using the same program in Kali Linux than when using it in Fedora, for example. You can see a full list of Kali Linux software at [git.kali.org][3]. You can also find our [own generated list of installed packages][4] on Kali Linux (GNOME).
-
-More importantly, the official Kali Linux documentation strongly recommends NOT adding any other third-party software repositories: because Kali Linux is a rolling release that depends on Debian Testing, adding a new repository source will most likely break your system through dependency conflicts and package hooks.
-
-### Don’t Install Kali Linux
-
-![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-002.png?resize=750%2C504&ssl=1)
-Running wpscan on fosspost.org using Kali Linux
-
-I use Kali Linux on rare occasions to test the software and servers I deploy. However, I would never dare to install it and use it as a primary system.
-
-If you are going to use it as a primary system, then you will have to keep your own personal files, passwords, data and everything else on it. You will also need to install tons of daily-use software in order to ease your life. But as we mentioned above, using Kali Linux is very risky and should be done very carefully, and if you get hacked, you will lose all your data and it may be exposed to a wider audience. Your personal information can also be used to track you if you are doing illegal things. You may even destroy your data yourself if you are not careful about how you use the tools.
-
-Even professional white hat hackers don’t recommend installing it as a primary system; rather, they run it from a USB drive to do their penetration testing work and then return to their normal Linux distribution.
-
-### The Bottom Line
-
-As you can see by now, using Kali is not a decision to take lightly. If you are planning to become a white hat hacker and you need Kali to learn, then go for it after learning the basics and spending a few months with a normal system. But be careful with what you are doing to avoid getting into trouble.
-
-If you are planning to use Kali or if you need any help, I’ll be happy to hear your thoughts in the comments.
-
--------------------------------------------------------------------------------
-
-via: https://fosspost.org/articles/must-know-before-using-kali-linux
-
-作者:[M.Hanny Sabbagh][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://fosspost.org/author/mhsabbagh
-[b]: https://github.com/lujun9972
-[1]: https://fosspost.org/articles/what-are-the-components-of-a-linux-distribution
-[2]: https://www.kali.org/news/kali-linux-rolling-edition-2016-1/
-[3]: http://git.kali.org
-[4]: https://paste.ubuntu.com/p/bctSVWwpVw/
diff --git a/sources/tech/20181016 Lab 4- Preemptive Multitasking.md b/sources/tech/20181016 Lab 4- Preemptive Multitasking.md
deleted file mode 100644
index de68cd7f39..0000000000
--- a/sources/tech/20181016 Lab 4- Preemptive Multitasking.md
+++ /dev/null
@@ -1,596 +0,0 @@
-Translating by qhwdw
-Lab 4: Preemptive Multitasking
-======
-### Lab 4: Preemptive Multitasking
-
-**Part A due Thursday, October 18, 2018
-Part B due Thursday, October 25, 2018
-Part C due Thursday, November 1, 2018**
-
-#### Introduction
-
-In this lab you will implement preemptive multitasking among multiple simultaneously active user-mode environments.
- -In part A you will add multiprocessor support to JOS, implement round-robin scheduling, and add basic environment management system calls (calls that create and destroy environments, and allocate/map memory). - -In part B, you will implement a Unix-like `fork()`, which allows a user-mode environment to create copies of itself. - -Finally, in part C you will add support for inter-process communication (IPC), allowing different user-mode environments to communicate and synchronize with each other explicitly. You will also add support for hardware clock interrupts and preemption. - -##### Getting Started - -Use Git to commit your Lab 3 source, fetch the latest version of the course repository, and then create a local branch called `lab4` based on our lab4 branch, `origin/lab4`: - -``` - athena% cd ~/6.828/lab - athena% add git - athena% git pull - Already up-to-date. - athena% git checkout -b lab4 origin/lab4 - Branch lab4 set up to track remote branch refs/remotes/origin/lab4. - Switched to a new branch "lab4" - athena% git merge lab3 - Merge made by recursive. - ... - athena% -``` - -Lab 4 contains a number of new source files, some of which you should browse before you start: -| kern/cpu.h | Kernel-private definitions for multiprocessor support | -| kern/mpconfig.c | Code to read the multiprocessor configuration | -| kern/lapic.c | Kernel code driving the local APIC unit in each processor | -| kern/mpentry.S | Assembly-language entry code for non-boot CPUs | -| kern/spinlock.h | Kernel-private definitions for spin locks, including the big kernel lock | -| kern/spinlock.c | Kernel code implementing spin locks | -| kern/sched.c | Code skeleton of the scheduler that you are about to implement | - -##### Lab Requirements - -This lab is divided into three parts, A, B, and C. We have allocated one week in the schedule for each part. - -As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. 
(You do not need to do one challenge problem per part, just one for the whole lab.) Additionally, you will need to write up a brief description of the challenge problem that you implemented. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab4.txt` in the top level of your `lab` directory before handing in your work. - -#### Part A: Multiprocessor Support and Cooperative Multitasking - -In the first part of this lab, you will first extend JOS to run on a multiprocessor system, and then implement some new JOS kernel system calls to allow user-level environments to create additional new environments. You will also implement _cooperative_ round-robin scheduling, allowing the kernel to switch from one environment to another when the current environment voluntarily relinquishes the CPU (or exits). Later in part C you will implement _preemptive_ scheduling, which allows the kernel to re-take control of the CPU from an environment after a certain time has passed even if the environment does not cooperate. - -##### Multiprocessor Support - -We are going to make JOS support "symmetric multiprocessing" (SMP), a multiprocessor model in which all CPUs have equivalent access to system resources such as memory and I/O buses. While all CPUs are functionally identical in SMP, during the boot process they can be classified into two types: the bootstrap processor (BSP) is responsible for initializing the system and for booting the operating system; and the application processors (APs) are activated by the BSP only after the operating system is up and running. Which processor is the BSP is determined by the hardware and the BIOS. Up to this point, all your existing JOS code has been running on the BSP. - -In an SMP system, each CPU has an accompanying local APIC (LAPIC) unit. 
The LAPIC units are responsible for delivering interrupts throughout the system. The LAPIC also provides its connected CPU with a unique identifier. In this lab, we make use of the following basic functionality of the LAPIC unit (in `kern/lapic.c`): - - * Reading the LAPIC identifier (APIC ID) to tell which CPU our code is currently running on (see `cpunum()`). - * Sending the `STARTUP` interprocessor interrupt (IPI) from the BSP to the APs to bring up other CPUs (see `lapic_startap()`). - * In part C, we program LAPIC's built-in timer to trigger clock interrupts to support preemptive multitasking (see `apic_init()`). - - - -A processor accesses its LAPIC using memory-mapped I/O (MMIO). In MMIO, a portion of _physical_ memory is hardwired to the registers of some I/O devices, so the same load/store instructions typically used to access memory can be used to access device registers. You've already seen one IO hole at physical address `0xA0000` (we use this to write to the VGA display buffer). The LAPIC lives in a hole starting at physical address `0xFE000000` (32MB short of 4GB), so it's too high for us to access using our usual direct map at KERNBASE. The JOS virtual memory map leaves a 4MB gap at `MMIOBASE` so we have a place to map devices like this. Since later labs introduce more MMIO regions, you'll write a simple function to allocate space from this region and map device memory to it. - -``` -Exercise 1. Implement `mmio_map_region` in `kern/pmap.c`. To see how this is used, look at the beginning of `lapic_init` in `kern/lapic.c`. You'll have to do the next exercise, too, before the tests for `mmio_map_region` will run. -``` - -###### Application Processor Bootstrap - -Before booting up APs, the BSP should first collect information about the multiprocessor system, such as the total number of CPUs, their APIC IDs and the MMIO address of the LAPIC unit. 
The `mp_init()` function in `kern/mpconfig.c` retrieves this information by reading the MP configuration table that resides in the BIOS's region of memory. - -The `boot_aps()` function (in `kern/init.c`) drives the AP bootstrap process. APs start in real mode, much like how the bootloader started in `boot/boot.S`, so `boot_aps()` copies the AP entry code (`kern/mpentry.S`) to a memory location that is addressable in the real mode. Unlike with the bootloader, we have some control over where the AP will start executing code; we copy the entry code to `0x7000` (`MPENTRY_PADDR`), but any unused, page-aligned physical address below 640KB would work. - -After that, `boot_aps()` activates APs one after another, by sending `STARTUP` IPIs to the LAPIC unit of the corresponding AP, along with an initial `CS:IP` address at which the AP should start running its entry code (`MPENTRY_PADDR` in our case). The entry code in `kern/mpentry.S` is quite similar to that of `boot/boot.S`. After some brief setup, it puts the AP into protected mode with paging enabled, and then calls the C setup routine `mp_main()` (also in `kern/init.c`). `boot_aps()` waits for the AP to signal a `CPU_STARTED` flag in `cpu_status` field of its `struct CpuInfo` before going on to wake up the next one. - -``` -Exercise 2. Read `boot_aps()` and `mp_main()` in `kern/init.c`, and the assembly code in `kern/mpentry.S`. Make sure you understand the control flow transfer during the bootstrap of APs. Then modify your implementation of `page_init()` in `kern/pmap.c` to avoid adding the page at `MPENTRY_PADDR` to the free list, so that we can safely copy and run AP bootstrap code at that physical address. Your code should pass the updated `check_page_free_list()` test (but might fail the updated `check_kern_pgdir()` test, which we will fix soon). -``` - -``` -Question - - 1. Compare `kern/mpentry.S` side by side with `boot/boot.S`. 
Bearing in mind that `kern/mpentry.S` is compiled and linked to run above `KERNBASE` just like everything else in the kernel, what is the purpose of macro `MPBOOTPHYS`? Why is it necessary in `kern/mpentry.S` but not in `boot/boot.S`? In other words, what could go wrong if it were omitted in `kern/mpentry.S`? -Hint: recall the differences between the link address and the load address that we have discussed in Lab 1. -``` - - -###### Per-CPU State and Initialization - -When writing a multiprocessor OS, it is important to distinguish between per-CPU state that is private to each processor, and global state that the whole system shares. `kern/cpu.h` defines most of the per-CPU state, including `struct CpuInfo`, which stores per-CPU variables. `cpunum()` always returns the ID of the CPU that calls it, which can be used as an index into arrays like `cpus`. Alternatively, the macro `thiscpu` is shorthand for the current CPU's `struct CpuInfo`. - -Here is the per-CPU state you should be aware of: - - * **Per-CPU kernel stack**. -Because multiple CPUs can trap into the kernel simultaneously, we need a separate kernel stack for each processor to prevent them from interfering with each other's execution. The array `percpu_kstacks[NCPU][KSTKSIZE]` reserves space for NCPU's worth of kernel stacks. - -In Lab 2, you mapped the physical memory that `bootstack` refers to as the BSP's kernel stack just below `KSTACKTOP`. Similarly, in this lab, you will map each CPU's kernel stack into this region with guard pages acting as a buffer between them. CPU 0's stack will still grow down from `KSTACKTOP`; CPU 1's stack will start `KSTKGAP` bytes below the bottom of CPU 0's stack, and so on. `inc/memlayout.h` shows the mapping layout. - - * **Per-CPU TSS and TSS descriptor**. -A per-CPU task state segment (TSS) is also needed in order to specify where each CPU's kernel stack lives. 
The TSS for CPU _i_ is stored in `cpus[i].cpu_ts`, and the corresponding TSS descriptor is defined in the GDT entry `gdt[(GD_TSS0 >> 3) + i]`. The global `ts` variable defined in `kern/trap.c` will no longer be useful.
-
-  * **Per-CPU current environment pointer**.
-Since each CPU can run a different user process simultaneously, we redefined the symbol `curenv` to refer to `cpus[cpunum()].cpu_env` (or `thiscpu->cpu_env`), which points to the environment _currently_ executing on the _current_ CPU (the CPU on which the code is running).
-
-  * **Per-CPU system registers**.
-All registers, including system registers, are private to a CPU. Therefore, instructions that initialize these registers, such as `lcr3()`, `ltr()`, `lgdt()`, `lidt()`, etc., must be executed once on each CPU. Functions `env_init_percpu()` and `trap_init_percpu()` are defined for this purpose.
-
-
-
-```
-Exercise 3. Modify `mem_init_mp()` (in `kern/pmap.c`) to map per-CPU stacks starting at `KSTACKTOP`, as shown in `inc/memlayout.h`. The size of each stack is `KSTKSIZE` bytes plus `KSTKGAP` bytes of unmapped guard pages. Your code should pass the new check in `check_kern_pgdir()`.
-```
-
-```
-Exercise 4. The code in `trap_init_percpu()` (`kern/trap.c`) initializes the TSS and TSS descriptor for the BSP. It worked in Lab 3, but is incorrect when running on other CPUs. Change the code so that it can work on all CPUs. (Note: your new code should not use the global `ts` variable any more.)
-```
-
-When you finish the above exercises, run JOS in QEMU with 4 CPUs using make qemu CPUS=4 (or make qemu-nox CPUS=4); you should see output like this:
-
-```
- ...
- Physical memory: 66556K available, base = 640K, extended = 65532K
- check_page_alloc() succeeded!
- check_page() succeeded!
- check_kern_pgdir() succeeded!
- check_page_installed_pgdir() succeeded!
- SMP: CPU 0 found 4 CPU(s) - enabled interrupts: 1 2 - SMP: CPU 1 starting - SMP: CPU 2 starting - SMP: CPU 3 starting -``` - -###### Locking - -Our current code spins after initializing the AP in `mp_main()`. Before letting the AP get any further, we need to first address race conditions when multiple CPUs run kernel code simultaneously. The simplest way to achieve this is to use a _big kernel lock_. The big kernel lock is a single global lock that is held whenever an environment enters kernel mode, and is released when the environment returns to user mode. In this model, environments in user mode can run concurrently on any available CPUs, but no more than one environment can run in kernel mode; any other environments that try to enter kernel mode are forced to wait. - -`kern/spinlock.h` declares the big kernel lock, namely `kernel_lock`. It also provides `lock_kernel()` and `unlock_kernel()`, shortcuts to acquire and release the lock. You should apply the big kernel lock at four locations: - - * In `i386_init()`, acquire the lock before the BSP wakes up the other CPUs. - * In `mp_main()`, acquire the lock after initializing the AP, and then call `sched_yield()` to start running environments on this AP. - * In `trap()`, acquire the lock when trapped from user mode. To determine whether a trap happened in user mode or in kernel mode, check the low bits of the `tf_cs`. - * In `env_run()`, release the lock _right before_ switching to user mode. Do not do that too early or too late, otherwise you will experience races or deadlocks. - - -``` -Exercise 5. Apply the big kernel lock as described above, by calling `lock_kernel()` and `unlock_kernel()` at the proper locations. -``` - -How to test if your locking is correct? You can't at this moment! But you will be able to after you implement the scheduler in the next exercise. - -``` -Question - - 2. It seems that using the big kernel lock guarantees that only one CPU can run the kernel code at a time. 
Why do we still need separate kernel stacks for each CPU? Describe a scenario in which using a shared kernel stack will go wrong, even with the protection of the big kernel lock. -``` - -``` -Challenge! The big kernel lock is simple and easy to use. Nevertheless, it eliminates all concurrency in kernel mode. Most modern operating systems use different locks to protect different parts of their shared state, an approach called _fine-grained locking_. Fine-grained locking can increase performance significantly, but is more difficult to implement and error-prone. If you are brave enough, drop the big kernel lock and embrace concurrency in JOS! - -It is up to you to decide the locking granularity (the amount of data that a lock protects). As a hint, you may consider using spin locks to ensure exclusive access to these shared components in the JOS kernel: - - * The page allocator. - * The console driver. - * The scheduler. - * The inter-process communication (IPC) state that you will implement in the part C. -``` - - -##### Round-Robin Scheduling - -Your next task in this lab is to change the JOS kernel so that it can alternate between multiple environments in "round-robin" fashion. Round-robin scheduling in JOS works as follows: - - * The function `sched_yield()` in the new `kern/sched.c` is responsible for selecting a new environment to run. It searches sequentially through the `envs[]` array in circular fashion, starting just after the previously running environment (or at the beginning of the array if there was no previously running environment), picks the first environment it finds with a status of `ENV_RUNNABLE` (see `inc/env.h`), and calls `env_run()` to jump into that environment. - * `sched_yield()` must never run the same environment on two CPUs at the same time. It can tell that an environment is currently running on some CPU (possibly the current CPU) because that environment's status will be `ENV_RUNNING`. 
- * We have implemented a new system call for you, `sys_yield()`, which user environments can call to invoke the kernel's `sched_yield()` function and thereby voluntarily give up the CPU to a different environment. - - - -``` -Exercise 6. Implement round-robin scheduling in `sched_yield()` as described above. Don't forget to modify `syscall()` to dispatch `sys_yield()`. - -Make sure to invoke `sched_yield()` in `mp_main`. - -Modify `kern/init.c` to create three (or more!) environments that all run the program `user/yield.c`. - -Run make qemu. You should see the environments switch back and forth between each other five times before terminating, like below. - -Test also with several CPUS: make qemu CPUS=2. - - ... - Hello, I am environment 00001000. - Hello, I am environment 00001001. - Hello, I am environment 00001002. - Back in environment 00001000, iteration 0. - Back in environment 00001001, iteration 0. - Back in environment 00001002, iteration 0. - Back in environment 00001000, iteration 1. - Back in environment 00001001, iteration 1. - Back in environment 00001002, iteration 1. - ... - -After the `yield` programs exit, there will be no runnable environment in the system, the scheduler should invoke the JOS kernel monitor. If any of this does not happen, then fix your code before proceeding. -``` - -``` -Question - - 3. In your implementation of `env_run()` you should have called `lcr3()`. Before and after the call to `lcr3()`, your code makes references (at least it should) to the variable `e`, the argument to `env_run`. Upon loading the `%cr3` register, the addressing context used by the MMU is instantly changed. But a virtual address (namely `e`) has meaning relative to a given address context--the address context specifies the physical address to which the virtual address maps. Why can the pointer `e` be dereferenced both before and after the addressing switch? - 4. 
Whenever the kernel switches from one environment to another, it must ensure the old environment's registers are saved so they can be restored properly later. Why? Where does this happen? -``` - -``` -Challenge! Add a less trivial scheduling policy to the kernel, such as a fixed-priority scheduler that allows each environment to be assigned a priority and ensures that higher-priority environments are always chosen in preference to lower-priority environments. If you're feeling really adventurous, try implementing a Unix-style adjustable-priority scheduler or even a lottery or stride scheduler. (Look up "lottery scheduling" and "stride scheduling" in Google.) - -Write a test program or two that verifies that your scheduling algorithm is working correctly (i.e., the right environments get run in the right order). It may be easier to write these test programs once you have implemented `fork()` and IPC in parts B and C of this lab. -``` - -``` -Challenge! The JOS kernel currently does not allow applications to use the x86 processor's x87 floating-point unit (FPU), MMX instructions, or Streaming SIMD Extensions (SSE). Extend the `Env` structure to provide a save area for the processor's floating point state, and extend the context switching code to save and restore this state properly when switching from one environment to another. The `FXSAVE` and `FXRSTOR` instructions may be useful, but note that these are not in the old i386 user's manual because they were introduced in more recent processors. Write a user-level test program that does something cool with floating-point. -``` - -##### System Calls for Environment Creation - -Although your kernel is now capable of running and switching between multiple user-level environments, it is still limited to running environments that the _kernel_ initially set up. You will now implement the necessary JOS system calls to allow _user_ environments to create and start other new user environments. 
-
-Unix provides the `fork()` system call as its process creation primitive. Unix `fork()` copies the entire address space of the calling process (the parent) to create a new process (the child). The only differences between the two observable from user space are their process IDs and parent process IDs (as returned by `getpid` and `getppid`). In the parent, `fork()` returns the child's process ID, while in the child, `fork()` returns 0. By default, each process gets its own private address space, and neither process's modifications to memory are visible to the other.
-
-You will provide a different, more primitive set of JOS system calls for creating new user-mode environments. With these system calls you will be able to implement a Unix-like `fork()` entirely in user space, in addition to other styles of environment creation. The new system calls you will write for JOS are as follows:
-
-  * `sys_exofork`:
-This system call creates a new environment with an almost blank slate: nothing is mapped in the user portion of its address space, and it is not runnable. The new environment will have the same register state as the parent environment at the time of the `sys_exofork` call. In the parent, `sys_exofork` will return the `envid_t` of the newly created environment (or a negative error code if the environment allocation failed). In the child, however, it will return 0. (Since the child starts out marked as not runnable, `sys_exofork` will not actually return in the child until the parent has explicitly allowed this by marking the child runnable using....)
-  * `sys_env_set_status`:
-Sets the status of a specified environment to `ENV_RUNNABLE` or `ENV_NOT_RUNNABLE`. This system call is typically used to mark a new environment ready to run, once its address space and register state have been fully initialized.
-  * `sys_page_alloc`:
-Allocates a page of physical memory and maps it at a given virtual address in a given environment's address space.
- * `sys_page_map`: -Copy a page mapping ( _not_ the contents of a page!) from one environment's address space to another, leaving a memory sharing arrangement in place so that the new and the old mappings both refer to the same page of physical memory. - * `sys_page_unmap`: -Unmap a page mapped at a given virtual address in a given environment. - - - -For all of the system calls above that accept environment IDs, the JOS kernel supports the convention that a value of 0 means "the current environment." This convention is implemented by `envid2env()` in `kern/env.c`. - -We have provided a very primitive implementation of a Unix-like `fork()` in the test program `user/dumbfork.c`. This test program uses the above system calls to create and run a child environment with a copy of its own address space. The two environments then switch back and forth using `sys_yield` as in the previous exercise. The parent exits after 10 iterations, whereas the child exits after 20. - -``` -Exercise 7. Implement the system calls described above in `kern/syscall.c` and make sure `syscall()` calls them. You will need to use various functions in `kern/pmap.c` and `kern/env.c`, particularly `envid2env()`. For now, whenever you call `envid2env()`, pass 1 in the `checkperm` parameter. Be sure you check for any invalid system call arguments, returning `-E_INVAL` in that case. Test your JOS kernel with `user/dumbfork` and make sure it works before proceeding. -``` - -``` -Challenge! Add the additional system calls necessary to _read_ all of the vital state of an existing environment as well as set it up. Then implement a user mode program that forks off a child environment, runs it for a while (e.g., a few iterations of `sys_yield()`), then takes a complete snapshot or _checkpoint_ of the child environment, runs the child for a while longer, and finally restores the child environment to the state it was in at the checkpoint and continues it from there. 
Thus, you are effectively "replaying" the execution of the child environment from an intermediate state. Make the child environment perform some interaction with the user using `sys_cgetc()` or `readline()` so that the user can view and mutate its internal state, and verify that with your checkpoint/restart you can give the child environment a case of selective amnesia, making it "forget" everything that happened beyond a certain point.
-```
-
-This completes Part A of the lab; make sure it passes all of the Part A tests when you run make grade, and hand it in using make handin as usual. If you are trying to figure out why a particular test case is failing, run ./grade-lab4 -v, which will show you the output of the kernel builds and QEMU runs for each test, until a test fails. When a test fails, the script will stop, and then you can inspect `jos.out` to see what the kernel actually printed.
-
-#### Part B: Copy-on-Write Fork
-
-As mentioned earlier, Unix provides the `fork()` system call as its primary process creation primitive. The `fork()` system call copies the address space of the calling process (the parent) to create a new process (the child).
-
-xv6 Unix implements `fork()` by copying all data from the parent's pages into new pages allocated for the child. This is essentially the same approach that `dumbfork()` takes. The copying of the parent's address space into the child is the most expensive part of the `fork()` operation.
-
-However, a call to `fork()` is frequently followed almost immediately by a call to `exec()` in the child process, which replaces the child's memory with a new program. This is what the shell typically does, for example. In this case, the time spent copying the parent's address space is largely wasted, because the child process will use very little of its memory before calling `exec()`.
- -For this reason, later versions of Unix took advantage of virtual memory hardware to allow the parent and child to _share_ the memory mapped into their respective address spaces until one of the processes actually modifies it. This technique is known as _copy-on-write_. To do this, on `fork()` the kernel would copy the address space _mappings_ from the parent to the child instead of the contents of the mapped pages, and at the same time mark the now-shared pages read-only. When one of the two processes tries to write to one of these shared pages, the process takes a page fault. At this point, the Unix kernel realizes that the page was really a "virtual" or "copy-on-write" copy, and so it makes a new, private, writable copy of the page for the faulting process. In this way, the contents of individual pages aren't actually copied until they are actually written to. This optimization makes a `fork()` followed by an `exec()` in the child much cheaper: the child will probably only need to copy one page (the current page of its stack) before it calls `exec()`. - -In the next piece of this lab, you will implement a "proper" Unix-like `fork()` with copy-on-write, as a user space library routine. Implementing `fork()` and copy-on-write support in user space has the benefit that the kernel remains much simpler and thus more likely to be correct. It also lets individual user-mode programs define their own semantics for `fork()`. A program that wants a slightly different implementation (for example, the expensive always-copy version like `dumbfork()`, or one in which the parent and child actually share memory afterward) can easily provide its own. - -##### User-level page fault handling - -A user-level copy-on-write `fork()` needs to know about page faults on write-protected pages, so that's what you'll implement first. Copy-on-write is only one of many possible uses for user-level page fault handling. 
- -It's common to set up an address space so that page faults indicate when some action needs to take place. For example, most Unix kernels initially map only a single page in a new process's stack region, and allocate and map additional stack pages later "on demand" as the process's stack consumption increases and causes page faults on stack addresses that are not yet mapped. A typical Unix kernel must keep track of what action to take when a page fault occurs in each region of a process's space. For example, a fault in the stack region will typically allocate and map new page of physical memory. A fault in the program's BSS region will typically allocate a new page, fill it with zeroes, and map it. In systems with demand-paged executables, a fault in the text region will read the corresponding page of the binary off of disk and then map it. - -This is a lot of information for the kernel to keep track of. Instead of taking the traditional Unix approach, you will decide what to do about each page fault in user space, where bugs are less damaging. This design has the added benefit of allowing programs great flexibility in defining their memory regions; you'll use user-level page fault handling later for mapping and accessing files on a disk-based file system. - -###### Setting the Page Fault Handler - -In order to handle its own page faults, a user environment will need to register a _page fault handler entrypoint_ with the JOS kernel. The user environment registers its page fault entrypoint via the new `sys_env_set_pgfault_upcall` system call. We have added a new member to the `Env` structure, `env_pgfault_upcall`, to record this information. - -``` -Exercise 8. Implement the `sys_env_set_pgfault_upcall` system call. Be sure to enable permission checking when looking up the environment ID of the target environment, since this is a "dangerous" system call. 
-``` - -###### Normal and Exception Stacks in User Environments - -During normal execution, a user environment in JOS will run on the _normal_ user stack: its `ESP` register starts out pointing at `USTACKTOP`, and the stack data it pushes resides on the page between `USTACKTOP-PGSIZE` and `USTACKTOP-1` inclusive. When a page fault occurs in user mode, however, the kernel will restart the user environment running a designated user-level page fault handler on a different stack, namely the _user exception_ stack. In essence, we will make the JOS kernel implement automatic "stack switching" on behalf of the user environment, in much the same way that the x86 _processor_ already implements stack switching on behalf of JOS when transferring from user mode to kernel mode! - -The JOS user exception stack is also one page in size, and its top is defined to be at virtual address `UXSTACKTOP`, so the valid bytes of the user exception stack are from `UXSTACKTOP-PGSIZE` through `UXSTACKTOP-1` inclusive. While running on this exception stack, the user-level page fault handler can use JOS's regular system calls to map new pages or adjust mappings so as to fix whatever problem originally caused the page fault. Then the user-level page fault handler returns, via an assembly language stub, to the faulting code on the original stack. - -Each user environment that wants to support user-level page fault handling will need to allocate memory for its own exception stack, using the `sys_page_alloc()` system call introduced in part A. - -###### Invoking the User Page Fault Handler - -You will now need to change the page fault handling code in `kern/trap.c` to handle page faults from user mode as follows. We will call the state of the user environment at the time of the fault the _trap-time_ state. - -If there is no page fault handler registered, the JOS kernel destroys the user environment with a message as before. 
Otherwise, the kernel sets up a trap frame on the exception stack that looks like a `struct UTrapframe` from `inc/trap.h`: - -``` - <-- UXSTACKTOP - trap-time esp - trap-time eflags - trap-time eip - trap-time eax start of struct PushRegs - trap-time ecx - trap-time edx - trap-time ebx - trap-time esp - trap-time ebp - trap-time esi - trap-time edi end of struct PushRegs - tf_err (error code) - fault_va <-- %esp when handler is run - -``` - -The kernel then arranges for the user environment to resume execution with the page fault handler running on the exception stack with this stack frame; you must figure out how to make this happen. The `fault_va` is the virtual address that caused the page fault. - -If the user environment is _already_ running on the user exception stack when an exception occurs, then the page fault handler itself has faulted. In this case, you should start the new stack frame just under the current `tf->tf_esp` rather than at `UXSTACKTOP`. You should first push an empty 32-bit word, then a `struct UTrapframe`. - -To test whether `tf->tf_esp` is already on the user exception stack, check whether it is in the range between `UXSTACKTOP-PGSIZE` and `UXSTACKTOP-1`, inclusive. - -``` -Exercise 9. Implement the code in `page_fault_handler` in `kern/trap.c` required to dispatch page faults to the user-mode handler. Be sure to take appropriate precautions when writing into the exception stack. (What happens if the user environment runs out of space on the exception stack?) -``` - -###### User-mode Page Fault Entrypoint - -Next, you need to implement the assembly routine that will take care of calling the C page fault handler and resume execution at the original faulting instruction. This assembly routine is the handler that will be registered with the kernel using `sys_env_set_pgfault_upcall()`. - -``` -Exercise 10. Implement the `_pgfault_upcall` routine in `lib/pfentry.S`. 
The interesting part is returning to the original point in the user code that caused the page fault. You'll return directly there, without going back through the kernel. The hard part is simultaneously switching stacks and re-loading the EIP. -``` - -Finally, you need to implement the C user library side of the user-level page fault handling mechanism. - -``` -Exercise 11. Finish `set_pgfault_handler()` in `lib/pgfault.c`. -``` - -###### Testing - -Run `user/faultread` (make run-faultread). You should see: - -``` - ... - [00000000] new env 00001000 - [00001000] user fault va 00000000 ip 0080003a - TRAP frame ... - [00001000] free env 00001000 -``` - -Run `user/faultdie`. You should see: - -``` - ... - [00000000] new env 00001000 - i faulted at va deadbeef, err 6 - [00001000] exiting gracefully - [00001000] free env 00001000 -``` - -Run `user/faultalloc`. You should see: - -``` - ... - [00000000] new env 00001000 - fault deadbeef - this string was faulted in at deadbeef - fault cafebffe - fault cafec000 - this string was faulted in at cafebffe - [00001000] exiting gracefully - [00001000] free env 00001000 -``` - -If you see only the first "this string" line, it means you are not handling recursive page faults properly. - -Run `user/faultallocbad`. You should see: - -``` - ... - [00000000] new env 00001000 - [00001000] user_mem_check assertion failure for va deadbeef - [00001000] free env 00001000 -``` - -Make sure you understand why `user/faultalloc` and `user/faultallocbad` behave differently. - -``` -Challenge! Extend your kernel so that not only page faults, but _all_ types of processor exceptions that code running in user space can generate, can be redirected to a user-mode exception handler. Write user-mode test programs to test user-mode handling of various exceptions such as divide-by-zero, general protection fault, and illegal opcode. 
-``` - -##### Implementing Copy-on-Write Fork - -You now have the kernel facilities to implement copy-on-write `fork()` entirely in user space. - -We have provided a skeleton for your `fork()` in `lib/fork.c`. Like `dumbfork()`, `fork()` should create a new environment, then scan through the parent environment's entire address space and set up corresponding page mappings in the child. The key difference is that, while `dumbfork()` copied _pages_ , `fork()` will initially only copy page _mappings_. `fork()` will copy each page only when one of the environments tries to write it. - -The basic control flow for `fork()` is as follows: - - 1. The parent installs `pgfault()` as the C-level page fault handler, using the `set_pgfault_handler()` function you implemented above. - - 2. The parent calls `sys_exofork()` to create a child environment. - - 3. For each writable or copy-on-write page in its address space below UTOP, the parent calls `duppage`, which should map the page copy-on-write into the address space of the child and then _remap_ the page copy-on-write in its own address space. [ Note: The ordering here (i.e., marking a page as COW in the child before marking it in the parent) actually matters! Can you see why? Try to think of a specific case where reversing the order could cause trouble. ] `duppage` sets both PTEs so that the page is not writeable, and to contain `PTE_COW` in the "avail" field to distinguish copy-on-write pages from genuine read-only pages. - -The exception stack is _not_ remapped this way, however. Instead you need to allocate a fresh page in the child for the exception stack. Since the page fault handler will be doing the actual copying and the page fault handler runs on the exception stack, the exception stack cannot be made copy-on-write: who would copy it? - -`fork()` also needs to handle pages that are present, but not writable or copy-on-write. - - 4. The parent sets the user page fault entrypoint for the child to look like its own. 
-
- 5. The child is now ready to run, so the parent marks it runnable.
-
-
-
-
-Each time one of the environments writes a copy-on-write page that it hasn't yet written, it will take a page fault. Here's the control flow for the user page fault handler:
-
- 1. The kernel propagates the page fault to `_pgfault_upcall`, which calls `fork()`'s `pgfault()` handler.
- 2. `pgfault()` checks that the fault is a write (check for `FEC_WR` in the error code) and that the PTE for the page is marked `PTE_COW`. If not, panic.
- 3. `pgfault()` allocates a new page mapped at a temporary location and copies the contents of the faulting page into it. Then the fault handler maps the new page at the appropriate address with read/write permissions, in place of the old read-only mapping.
-
-
-
-The user-level `lib/fork.c` code must consult the environment's page tables for several of the operations above (e.g., that the PTE for a page is marked `PTE_COW`). The kernel maps the environment's page tables at `UVPT` exactly for this purpose. It uses a [clever mapping trick][1] to make it easy to look up PTEs for user code. `lib/entry.S` sets up `uvpt` and `uvpd` so that you can easily look up page-table information in `lib/fork.c`.
-
-```
-Exercise 12. Implement `fork`, `duppage` and `pgfault` in `lib/fork.c`.
-
-Test your code with the `forktree` program. It should produce the following messages, with interspersed 'new env', 'free env', and 'exiting gracefully' messages. The messages may not appear in this order, and the environment IDs may be different.
-
- 1000: I am ''
- 1001: I am '0'
- 2000: I am '00'
- 2001: I am '000'
- 1002: I am '1'
- 3000: I am '11'
- 3001: I am '10'
- 4000: I am '100'
- 1003: I am '01'
- 5000: I am '010'
- 4001: I am '011'
- 2002: I am '110'
- 1004: I am '001'
- 1005: I am '111'
- 1006: I am '101'
-```
-
-```
-Challenge! Implement a shared-memory `fork()` called `sfork()`. 
This version should have the parent and child _share_ all their memory pages (so writes in one environment appear in the other) except for pages in the stack area, which should be treated in the usual copy-on-write manner. Modify `user/forktree.c` to use `sfork()` instead of regular `fork()`. Also, once you have finished implementing IPC in part C, use your `sfork()` to run `user/pingpongs`. You will have to find a new way to provide the functionality of the global `thisenv` pointer. -``` - -``` -Challenge! Your implementation of `fork` makes a huge number of system calls. On the x86, switching into the kernel using interrupts has non-trivial cost. Augment the system call interface so that it is possible to send a batch of system calls at once. Then change `fork` to use this interface. - -How much faster is your new `fork`? - -You can answer this (roughly) by using analytical arguments to estimate how much of an improvement batching system calls will make to the performance of your `fork`: How expensive is an `int 0x30` instruction? How many times do you execute `int 0x30` in your `fork`? Is accessing the `TSS` stack switch also expensive? And so on... - -Alternatively, you can boot your kernel on real hardware and _really_ benchmark your code. See the `RDTSC` (read time-stamp counter) instruction, defined in the IA32 manual, which counts the number of clock cycles that have elapsed since the last processor reset. QEMU doesn't emulate this instruction faithfully (it can either count the number of virtual instructions executed or use the host TSC, neither of which reflects the number of cycles a real CPU would require). -``` - -This ends part B. Make sure you pass all of the Part B tests when you run make grade. As usual, you can hand in your submission with make handin. 
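Before moving on to part C, one way to double-check your part B understanding: the permission decision inside `duppage` is a pure function of the page's PTE bits. This is a sketch of one reasonable reading of the lab text (JOS defines `PTE_COW` as `0x800`, one of the PTE bits the hardware leaves available to software), not the only valid implementation:

```c
#define PTE_P   0x001   /* present */
#define PTE_W   0x002   /* writeable */
#define PTE_U   0x004   /* user-accessible */
#define PTE_COW 0x800   /* copy-on-write marker in the PTE "avail" bits */

/* Permission that duppage installs in BOTH the parent's and the child's
 * mapping: writable or already-COW pages become read-only and PTE_COW;
 * genuinely read-only pages keep their permissions unchanged. */
int dup_perm(int perm) {
    if (perm & (PTE_W | PTE_COW))
        return (perm & ~PTE_W) | PTE_COW;
    return perm;
}
```

With this rule, a page that is already copy-on-write stays copy-on-write across a second `fork()` rather than becoming writable — which is why `duppage` must check `PTE_COW` as well as `PTE_W`.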
- -#### Part C: Preemptive Multitasking and Inter-Process communication (IPC) - -In the final part of lab 4 you will modify the kernel to preempt uncooperative environments and to allow environments to pass messages to each other explicitly. - -##### Clock Interrupts and Preemption - -Run the `user/spin` test program. This test program forks off a child environment, which simply spins forever in a tight loop once it receives control of the CPU. Neither the parent environment nor the kernel ever regains the CPU. This is obviously not an ideal situation in terms of protecting the system from bugs or malicious code in user-mode environments, because any user-mode environment can bring the whole system to a halt simply by getting into an infinite loop and never giving back the CPU. In order to allow the kernel to _preempt_ a running environment, forcefully retaking control of the CPU from it, we must extend the JOS kernel to support external hardware interrupts from the clock hardware. - -###### Interrupt discipline - -External interrupts (i.e., device interrupts) are referred to as IRQs. There are 16 possible IRQs, numbered 0 through 15. The mapping from IRQ number to IDT entry is not fixed. `pic_init` in `picirq.c` maps IRQs 0-15 to IDT entries `IRQ_OFFSET` through `IRQ_OFFSET+15`. - -In `inc/trap.h`, `IRQ_OFFSET` is defined to be decimal 32. Thus the IDT entries 32-47 correspond to the IRQs 0-15. For example, the clock interrupt is IRQ 0. Thus, IDT[IRQ_OFFSET+0] (i.e., IDT[32]) contains the address of the clock's interrupt handler routine in the kernel. This `IRQ_OFFSET` is chosen so that the device interrupts do not overlap with the processor exceptions, which could obviously cause confusion. (In fact, in the early days of PCs running MS-DOS, the `IRQ_OFFSET` effectively _was_ zero, which indeed caused massive confusion between handling hardware interrupts and handling processor exceptions!) - -In JOS, we make a key simplification compared to xv6 Unix. 
External device interrupts are _always_ disabled when in the kernel (and, like xv6, enabled when in user space). External interrupts are controlled by the `FL_IF` flag bit of the `%eflags` register (see `inc/mmu.h`). When this bit is set, external interrupts are enabled. While the bit can be modified in several ways, because of our simplification, we will handle it solely through the process of saving and restoring `%eflags` register as we enter and leave user mode. - -You will have to ensure that the `FL_IF` flag is set in user environments when they run so that when an interrupt arrives, it gets passed through to the processor and handled by your interrupt code. Otherwise, interrupts are _masked_ , or ignored until interrupts are re-enabled. We masked interrupts with the very first instruction of the bootloader, and so far we have never gotten around to re-enabling them. - -``` -Exercise 13. Modify `kern/trapentry.S` and `kern/trap.c` to initialize the appropriate entries in the IDT and provide handlers for IRQs 0 through 15. Then modify the code in `env_alloc()` in `kern/env.c` to ensure that user environments are always run with interrupts enabled. - -Also uncomment the `sti` instruction in `sched_halt()` so that idle CPUs unmask interrupts. - -The processor never pushes an error code when invoking a hardware interrupt handler. You might want to re-read section 9.2 of the [80386 Reference Manual][2], or section 5.8 of the [IA-32 Intel Architecture Software Developer's Manual, Volume 3][3], at this time. - -After doing this exercise, if you run your kernel with any test program that runs for a non-trivial length of time (e.g., `spin`), you should see the kernel print trap frames for hardware interrupts. While interrupts are now enabled in the processor, JOS isn't yet handling them, so you should see it misattribute each interrupt to the currently running user environment and destroy it. 
Eventually it should run out of environments to destroy and drop into the monitor. -``` - -###### Handling Clock Interrupts - -In the `user/spin` program, after the child environment was first run, it just spun in a loop, and the kernel never got control back. We need to program the hardware to generate clock interrupts periodically, which will force control back to the kernel where we can switch control to a different user environment. - -The calls to `lapic_init` and `pic_init` (from `i386_init` in `init.c`), which we have written for you, set up the clock and the interrupt controller to generate interrupts. You now need to write the code to handle these interrupts. - -``` -Exercise 14. Modify the kernel's `trap_dispatch()` function so that it calls `sched_yield()` to find and run a different environment whenever a clock interrupt takes place. - -You should now be able to get the `user/spin` test to work: the parent environment should fork off the child, `sys_yield()` to it a couple times but in each case regain control of the CPU after one time slice, and finally kill the child environment and terminate gracefully. -``` - -This is a great time to do some _regression testing_. Make sure that you haven't broken any earlier part of that lab that used to work (e.g. `forktree`) by enabling interrupts. Also, try running with multiple CPUs using make CPUS=2 _target_. You should also be able to pass `stresssched` now. Run make grade to see for sure. You should now get a total score of 65/80 points on this lab. - -##### Inter-Process communication (IPC) - -(Technically in JOS this is "inter-environment communication" or "IEC", but everyone else calls it IPC, so we'll use the standard term.) - -We've been focusing on the isolation aspects of the operating system, the ways it provides the illusion that each program has a machine all to itself. Another important service of an operating system is to allow programs to communicate with each other when they want to. 
It can be quite powerful to let programs interact with other programs. The Unix pipe model is the canonical example. - -There are many models for interprocess communication. Even today there are still debates about which models are best. We won't get into that debate. Instead, we'll implement a simple IPC mechanism and then try it out. - -###### IPC in JOS - -You will implement a few additional JOS kernel system calls that collectively provide a simple interprocess communication mechanism. You will implement two system calls, `sys_ipc_recv` and `sys_ipc_try_send`. Then you will implement two library wrappers `ipc_recv` and `ipc_send`. - -The "messages" that user environments can send to each other using JOS's IPC mechanism consist of two components: a single 32-bit value, and optionally a single page mapping. Allowing environments to pass page mappings in messages provides an efficient way to transfer more data than will fit into a single 32-bit integer, and also allows environments to set up shared memory arrangements easily. - -###### Sending and Receiving Messages - -To receive a message, an environment calls `sys_ipc_recv`. This system call de-schedules the current environment and does not run it again until a message has been received. When an environment is waiting to receive a message, _any_ other environment can send it a message - not just a particular environment, and not just environments that have a parent/child arrangement with the receiving environment. In other words, the permission checking that you implemented in Part A will not apply to IPC, because the IPC system calls are carefully designed so as to be "safe": an environment cannot cause another environment to malfunction simply by sending it messages (unless the target environment is also buggy). - -To try to send a value, an environment calls `sys_ipc_try_send` with both the receiver's environment id and the value to be sent. 
If the named environment is actually receiving (it has called `sys_ipc_recv` and not gotten a value yet), then the send delivers the message and returns 0. Otherwise the send returns `-E_IPC_NOT_RECV` to indicate that the target environment is not currently expecting to receive a value. - -A library function `ipc_recv` in user space will take care of calling `sys_ipc_recv` and then looking up the information about the received values in the current environment's `struct Env`. - -Similarly, a library function `ipc_send` will take care of repeatedly calling `sys_ipc_try_send` until the send succeeds. - -###### Transferring Pages - -When an environment calls `sys_ipc_recv` with a valid `dstva` parameter (below `UTOP`), the environment is stating that it is willing to receive a page mapping. If the sender sends a page, then that page should be mapped at `dstva` in the receiver's address space. If the receiver already had a page mapped at `dstva`, then that previous page is unmapped. - -When an environment calls `sys_ipc_try_send` with a valid `srcva` (below `UTOP`), it means the sender wants to send the page currently mapped at `srcva` to the receiver, with permissions `perm`. After a successful IPC, the sender keeps its original mapping for the page at `srcva` in its address space, but the receiver also obtains a mapping for this same physical page at the `dstva` originally specified by the receiver, in the receiver's address space. As a result this page becomes shared between the sender and receiver. - -If either the sender or the receiver does not indicate that a page should be transferred, then no page is transferred. After any IPC the kernel sets the new field `env_ipc_perm` in the receiver's `Env` structure to the permissions of the page received, or zero if no page was received. - -###### Implementing IPC - -``` -Exercise 15. Implement `sys_ipc_recv` and `sys_ipc_try_send` in `kern/syscall.c`. 
Read the comments on both before implementing them, since they have to work together. When you call `envid2env` in these routines, you should set the `checkperm` flag to 0, meaning that any environment is allowed to send IPC messages to any other environment, and the kernel does no special permission checking other than verifying that the target envid is valid. - -Then implement the `ipc_recv` and `ipc_send` functions in `lib/ipc.c`. - -Use the `user/pingpong` and `user/primes` functions to test your IPC mechanism. `user/primes` will generate for each prime number a new environment until JOS runs out of environments. You might find it interesting to read `user/primes.c` to see all the forking and IPC going on behind the scenes. -``` - -``` -Challenge! Why does `ipc_send` have to loop? Change the system call interface so it doesn't have to. Make sure you can handle multiple environments trying to send to one environment at the same time. -``` - -``` -Challenge! The prime sieve is only one neat use of message passing between a large number of concurrent programs. Read C. A. R. Hoare, ``Communicating Sequential Processes,'' _Communications of the ACM_ 21(8) (August 1978), 666-667, and implement the matrix multiplication example. -``` - -``` -Challenge! One of the most impressive examples of the power of message passing is Doug McIlroy's power series calculator, described in [M. Douglas McIlroy, ``Squinting at Power Series,'' _Software--Practice and Experience_ , 20(7) (July 1990), 661-683][4]. Implement his power series calculator and compute the power series for _sin_ ( _x_ + _x_ ^3). -``` - -``` -Challenge! Make JOS's IPC mechanism more efficient by applying some of the techniques from Liedtke's paper, [Improving IPC by Kernel Design][5], or any other tricks you may think of. Feel free to modify the kernel's system call API for this purpose, as long as your code is backwards compatible with what our grading scripts expect. 
-
-```
-
-**This ends part C.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab4.txt`.
-
-Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab4.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 4', then make handin and follow the directions.
-
---------------------------------------------------------------------------------
-
-via: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/
-
-作者:[csail.mit][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://pdos.csail.mit.edu
-[b]: https://github.com/lujun9972
-[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/uvpt.html
-[2]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/i386/toc.htm
-[3]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/ia32/IA32-3A.pdf
-[4]: https://swtch.com/~rsc/thread/squint.pdf
-[5]: http://dl.acm.org/citation.cfm?id=168633
diff --git a/sources/tech/20181018 MidnightBSD Hits 1.0- Checkout What-s New.md b/sources/tech/20181018 MidnightBSD Hits 1.0- Checkout What-s New.md
deleted file mode 100644
index 60350e676e..0000000000
--- a/sources/tech/20181018 MidnightBSD Hits 1.0- Checkout What-s New.md
+++ /dev/null
@@ -1,76 +0,0 @@
-translating---geekpi
-
-MidnightBSD Hits 1.0! Checkout What’s New
-======
-A couple of days ago, Lucas Holt announced the release of MidnightBSD 1.0. Let’s take a quick look at what is included in this new release.
-
-### What is MidnightBSD?
-
-![MidnightBSD][1]
-
-[MidnightBSD][2] is a fork of FreeBSD. Lucas created MidnightBSD to be an option for desktop users and for BSD newbies. He wanted to create something that would allow people to quickly get a desktop experience on BSD. 
He believed that other options had too much of a focus on the server market.
-
-### What is in MidnightBSD 1.0?
-
-According to the [release notes][3], most of the work in 1.0 went towards updating the base system, improving the package manager and updating tools. The new release is compatible with FreeBSD 10-Stable.
-
-Mports (MidnightBSD’s package management system) has been upgraded to support installing multiple packages with one command. The `mport upgrade` command has been fixed. Mports now tracks deprecated and expired packages. A new package format was also introduced.
-
-
-
-Other changes include:
-
- * [ZFS][4] is now supported as a boot file system. Previously, ZFS could only be used for additional storage.
- * Support for NVMe SSDs.
- * AMD Ryzen and Radeon support have been improved.
- * Intel, Broadcom, and other drivers have been updated.
- * bhyve support has been ported from FreeBSD.
- * The sensors framework was removed because it was causing locking issues.
- * Sudo was removed and replaced with [doas][5] from OpenBSD.
- * Added support for Microsoft Hyper-V.
-
-
-
-### Before you upgrade…
-
-If you are a current MidnightBSD user or are thinking of trying out the new release, it would be a good idea to wait. Lucas is currently rebuilding packages to support the new package format and tooling. He also plans to upgrade packages and ports for the desktop environment over the next couple of months. He is currently working on porting Firefox 52 ESR because it is the last release that does not require Rust. He also hopes to get a newer version of Chromium ported to MidnightBSD. I would recommend keeping an eye on the MidnightBSD [Twitter][6] feed.
-
-### What happened to 0.9?
-
-You might notice that the previous release of MidnightBSD was 0.8.6. Now, you might be wondering “Why the jump to 1.0”? According to Lucas, he ran into several issues while developing 0.9. In fact, he restarted it several times. 
He ended up taking CURRENT in a different direction than the 0.9 branch and it became 1.0. Some packages also had an issue with the 0.* numbering system.
-
-### Help Needed
-
-Currently, the MidnightBSD project is the work of pretty much one guy, Lucas Holt. This is the main reason why development has been slow. If you are interested in helping out, you can contact him on [Twitter][6].
-
-In the [release announcement video][7], Lucas said that he had encountered problems with upstream projects accepting patches. They seem to think that MidnightBSD is too small. This often means that he has to port an application from scratch.
-
-### Thoughts
-
-I have a thing for the underdog. Of all the BSDs that I have interacted with, that moniker fits MidnightBSD the most: one guy trying to create an easy desktop experience. Currently, there is only one other BSD trying to do something similar: Project Trident. I think that this is a real barrier to BSD's success. Linux succeeds because people can quickly and easily install it. Hopefully, MidnightBSD does that for BSD, but right now it has a long way to go.
-
-Have you ever used MidnightBSD? If not, what is your favorite BSD? What other BSD topics should we cover? Let us know in the comments below.
-
-If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][8].
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/midnightbsd-1-0-release/
-
-作者:[John Paul][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/john/
-[b]: https://github.com/lujun9972
-[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/midnightbsd-wallpaper.jpeg
-[2]: https://www.midnightbsd.org/
-[3]: https://www.midnightbsd.org/notes/
-[4]: https://itsfoss.com/what-is-zfs/
-[5]: https://man.openbsd.org/doas
-[6]: https://twitter.com/midnightbsd
-[7]: https://www.youtube.com/watch?v=-rlk2wFsjJ4
-[8]: http://reddit.com/r/linuxusersgroup
diff --git a/sources/tech/20181023 How to Check HP iLO Firmware Version from Linux Command Line.md b/sources/tech/20181023 How to Check HP iLO Firmware Version from Linux Command Line.md
new file mode 100644
index 0000000000..5dc19ed73c
--- /dev/null
+++ b/sources/tech/20181023 How to Check HP iLO Firmware Version from Linux Command Line.md
@@ -0,0 +1,131 @@
+How to Check HP iLO Firmware Version from Linux Command Line
+======
+There are many utilities available in Linux to get [hardware information][1].
+
+Each tool has its own unique features, which help us gather the required information.
+
+We have already written many articles about this; the hardware tools are Dmidecode, hwinfo, lshw, inxi, lspci, lsscsi, lsusb, lsblk, Neofetch, ScreenFetch, etc.
+
+Today we are going to discuss the same topic: I will show you how to check the HP iLO firmware version from the Linux command line.
+
+Also read the following articles related to Linux hardware.
+
+**Suggested Read :**
+**(#)** [LSHW (Hardware Lister) – A Nifty Tool To Get A Hardware Information On Linux][2]
+**(#)** [inxi – A Great Tool to Check Hardware Information on Linux][3]
+**(#)** [Dmidecode – Easy Way To Get Linux System Hardware Information][4]
+**(#)** [Neofetch – Shows Linux System Information With ASCII Distribution Logo][5]
+**(#)** [ScreenFetch – Fetch Linux System Information on Terminal with Distribution ASCII art logo][6]
+**(#)** [16 Methods To Check If A Linux System Is Physical or Virtual Machine][7]
+**(#)** [hwinfo (Hardware Info) – A Nifty Tool To Detect System Hardware Information On Linux][8]
+**(#)** [How To Find WWN, WWNN and WWPN Number Of HBA Card In Linux][9]
+**(#)** [How To Check System Hardware Manufacturer, Model And Serial Number In Linux][1]
+**(#)** [How To Use lspci, lsscsi, lsusb, And lsblk To Get Linux System Devices Information][10]
+
+### What is iLO?
+
+iLO stands for Integrated Lights-Out; it is a proprietary embedded server management technology by Hewlett-Packard which provides out-of-band management facilities.
+
+To put it in simple terms, it’s a dedicated device management channel which allows users to manage and monitor the device remotely, regardless of whether the machine is powered on or whether an operating system is installed or functional.
+
+It allows a system administrator to monitor all devices such as the CPU, RAM, hardware RAID, fan speed, power voltages, chassis intrusion and firmware (BIOS or UEFI), and also to manage remote terminals (KVM over IP), remote reboot, shutdown, power-on, etc.
+
+Below is a list of the lights-out management (LOM) technologies offered by other vendors.
+
+ * **`iLO:`** Integrated Lights-Out by HP
+ * **`IMM:`** Integrated Management Module by IBM
+ * **`iDRAC:`** Integrated Dell Remote Access Controllers by Dell
+ * **`IPMI:`** Intelligent Platform Management Interface – a general standard, used on Supermicro hardware
+ * **`AMT:`** Intel Active Management Technology by Intel
+ * **`CIMC:`** Cisco Integrated Management Controller by Cisco
+
+
+
+The list below gives the details about the iLO versions and the hardware they support.
+
+ * **`iLO:`** ProLiant G2, G3, G4, and G6 servers, model numbers under 300
+ * **`iLO 2:`** ProLiant G5 and G6 servers, model numbers 300 and higher
+ * **`iLO 3:`** ProLiant G7 servers
+ * **`iLO 4:`** ProLiant Gen8 and Gen9 servers
+ * **`iLO 5:`** ProLiant Gen10 servers
+
+
+
+There are three easy ways to check the HP iLO firmware version in Linux; here we are going to show them one by one.
+
+### Method-1: Using Dmidecode Command
+
+[Dmidecode][4] is a tool which reads a computer’s DMI (Desktop Management Interface; some say SMBIOS, for System Management BIOS) table contents and displays system hardware information in a human-readable format.
+
+This table contains a description of the system’s hardware components, as well as other useful information such as the serial number, manufacturer information, release date, and BIOS revision.
+
+The DMI table doesn’t only describe what the system is currently made of; it can also report possible evolutions (such as the fastest supported CPU or the maximal amount of memory supported). This will help you analyze your hardware capability, for example whether it supports the latest application versions or not.
+
+As you run it, dmidecode will try to locate the DMI table. If it succeeds, it will then parse this table and display the records you are looking for.
+
+First, learn about the DMI types and their keywords, so that we can query the table without any trouble.
+
+```
+# dmidecode | grep "Firmware Revision"
+ Firmware Revision: 2.40
+```
+
+### Method-2: Using the HPONCFG Utility
+
+HPONCFG is an online configuration tool used to set up and reconfigure iLO without requiring a reboot of the server operating system. The utility runs in command-line mode and must be executed from an operating system command line on the local server. HPONCFG enables you to initially configure features exposed through the RBSU or iLO.
+
+Before using HPONCFG, the iLO Management Interface Driver must be loaded on the server. HPONCFG displays a warning if the driver is not installed.
+
+To install it, visit the [HP website][11] and get the latest hponcfg package by searching for the appropriate keyword (for iLO 4, for example, search for "HPE Integrated Lights-Out 4 (iLO 4)"). On the results page, click "HP Lights-Out Online Configuration Utility for Linux (AMD64/EM64T)" and download the package.
+
+```
+# rpm -ivh /tmp/hponcfg-5.3.0-0.x86_64.rpm
+```
+
+Use the hponcfg command to get the information.
+
+```
+# hponcfg | grep Firmware
+Firmware Revision = 2.40 Device type = iLO 4 Driver name = hpilo
+```
+
+### Method-3: Using the cURL Command
+
+We can use the cURL command to get some of this information in XML format, for HP iLO, iLO 2, iLO 3, iLO 4, and iLO 5.
+
+With cURL we can get the iLO firmware version without having to log in to the server or console.
+
+Make sure to use your own iLO management IP address in place of the one shown below. I have removed all the unnecessary details from the output below for clarity.
+ +``` +# curl -k https://10.2.0.101/xmldata?item=All + +ProLiant DL380p G8 +Integrated Lights-Out 4 (iLO 4) +2.40 +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-check-hp-ilo-firmware-version-from-linux-command-line/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/how-to-check-system-hardware-manufacturer-model-and-serial-number-in-linux/ +[2]: https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/ +[3]: https://www.2daygeek.com/inxi-system-hardware-information-on-linux/ +[4]: https://www.2daygeek.com/dmidecode-get-print-display-check-linux-system-hardware-information/ +[5]: https://www.2daygeek.com/neofetch-display-linux-systems-information-ascii-distribution-logo-terminal/ +[6]: https://www.2daygeek.com/install-screenfetch-to-fetch-linux-system-information-on-terminal-with-distribution-ascii-art-logo/ +[7]: https://www.2daygeek.com/check-linux-system-physical-virtual-machine-virtualization-technology/ +[8]: https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/ +[9]: https://www.2daygeek.com/how-to-find-wwn-wwnn-and-wwpn-number-of-hba-card-in-linux/ +[10]: https://www.2daygeek.com/check-system-hardware-devices-bus-information-lspci-lsscsi-lsusb-lsblk-linux/ +[11]: https://support.hpe.com/hpesc/public/home diff --git a/sources/tech/20181024 4 cool new projects to try in COPR for October 2018.md b/sources/tech/20181024 4 cool new projects to try in COPR for October 2018.md index 465c6b2f50..25a1c29f68 100644 --- a/sources/tech/20181024 4 cool new projects to try in COPR for October 2018.md +++ b/sources/tech/20181024 4 cool new projects to try 
in COPR for October 2018.md @@ -1,3 +1,5 @@ +translating---geekpi + 4 cool new projects to try in COPR for October 2018 ====== diff --git a/sources/tech/20181024 Get organized at the Linux command line with Calcurse.md b/sources/tech/20181024 Get organized at the Linux command line with Calcurse.md deleted file mode 100644 index 9f67503f2e..0000000000 --- a/sources/tech/20181024 Get organized at the Linux command line with Calcurse.md +++ /dev/null @@ -1,87 +0,0 @@ -translating---geekpi - -Get organized at the Linux command line with Calcurse -====== - -Keep up with your calendar and to-do list with Calcurse. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT) - -Do you need complex, feature-packed graphical or web applications to get and stay organized? I don't think so. The right command line tool can do the job and do it well. - -Of course, uttering the words command and line together can strike fear into the hearts of some Linux users. The command line, to them, is terra incognita. - -Organizing yourself at the command line is easy with [Calcurse][1]. Calcurse brings a graphical look and feel to a text-based interface. You get the simplicity and focus of the command line married to ease of use and navigation. - -Let's take a closer look at Calcurse, which is open sourced under the BSD License. - -### Getting the software - -If compiling code is your thing (it's not mine, generally), you can grab the source code from the [Calcurse website][1]. Otherwise, get the [binary installer][2] for your Linux distribution. You might even be able to get Calcurse from your Linux distro's package manager. It never hurts to check. - -Compile or install Calcurse (neither takes all that long), and you're ready to go. - -### Using Calcurse - -Crack open a terminal window and type **calcurse**. 
- -![](https://opensource.com/sites/default/files/uploads/calcurse-main.png) - -Calcurse's interface consists of three panels: - - * Appointments (the left side of the screen) - * Calendar (the top right) - * To-do list (the bottom right) - - - -Move between the panels by pressing the Tab key on your keyboard. To add a new item to a panel, press **a**. Calcurse walks you through what you need to do to add the item. - -One interesting quirk is that the Appointment and Calendar panels work together. You add an appointment by tabbing to the Calendar panel. There, you choose the date for your appointment. Once you do that, you tab back to the Appointments panel. I know … - -Press **a** to set a start time, a duration (in minutes), and a description of the appointment. The start time and duration are optional. Calcurse displays appointments on the day they're due. - -![](https://opensource.com/sites/default/files/uploads/calcurse-appointment.png) - -Here's what a day's appointments look like: - -![](https://opensource.com/sites/default/files/uploads/calcurse-appt-list.png) - -The to-do list works on its own. Tab to the ToDo panel and (again) press **a**. Type a description of the task, then set a priority (1 is the highest and 9 is the lowest). Calcurse lists your uncompleted tasks in the ToDo panel. - -![](https://opensource.com/sites/default/files/uploads/calcurse-todo.png) - -If your task has a long description, Calcurse truncates it. You can view long descriptions by navigating to the task using the up or down arrow keys on your keyboard, then pressing **v**. - -![](https://opensource.com/sites/default/files/uploads/calcurse-view-todo.png) - -Calcurse saves its information in text files in a hidden folder called **.calcurse** in your home directory—for example, **/home/scott/.calcurse**. If Calcurse stops working, it's easy to find your information. - -### Other useful features - -Other Calcurse features include the ability to set recurring appointments. 
To do that, find the appointment you want to repeat and press **r** in the Appointments panel. You'll be asked to set the frequency (for example, daily or weekly) and how long you want the appointment to repeat. - -You can also import calendars in [ICAL][3] format or export your data in either ICAL or [PCAL][4] format. With ICAL, you can share your data with other calendar applications. With PCAL, you can generate a Postscript version of your calendar. - -There are also a number of command line arguments you can pass to Calcurse. You can read about them [in the documentation][5]. - -While simple, Calcurse does a solid job of helping you keep organized. You'll need to be a bit more mindful of your tasks and appointments, but you'll be able to focus better on what you need to do and where you need to be. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/calcurse - -作者:[Scott Nesbitt][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/scottnesbitt -[b]: https://github.com/lujun9972 -[1]: http://www.calcurse.org/ -[2]: http://www.calcurse.org/downloads/#packages -[3]: https://tools.ietf.org/html/rfc2445 -[4]: http://pcal.sourceforge.net/ -[5]: http://www.calcurse.org/files/manual.chunked/ar01s04.html#_invocation diff --git a/sources/tech/20181025 How to write your favorite R functions in Python.md b/sources/tech/20181025 How to write your favorite R functions in Python.md new file mode 100644 index 0000000000..a06d3557b9 --- /dev/null +++ b/sources/tech/20181025 How to write your favorite R functions in Python.md @@ -0,0 +1,153 @@ +How to write your favorite R functions in Python +====== +R or Python? This Python script mimics convenient R-style functions for doing statistics nice and easy. 
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0)
+
+One of the great modern battles of data science and machine learning is "Python vs. R." There is no doubt that both have gained enormous ground in recent years to become top programming languages for data science, predictive analytics, and machine learning. In fact, according to a recent IEEE article, Python overtook C++ as the [top programming language][1] and R firmly secured its spot in the top 10.
+
+However, there are some fundamental differences between these two. [R was developed primarily][2] as a tool for statistical analysis and quick prototyping of a data analysis problem. Python, on the other hand, was developed as a general-purpose, modern object-oriented language in the same vein as C++ or Java but with a simpler learning curve and more flexible demeanor. Consequently, R continues to be extremely popular among statisticians, quantitative biologists, physicists, and economists, whereas Python has slowly emerged as the top language for day-to-day scripting, automation, backend web development, analytics, and general machine learning frameworks, backed by an extensive support base and an active open source development community.
+
+### Mimicking functional programming in a Python environment
+
+[R's nature as a functional programming language][3] provides users with an extremely simple and compact interface for quick calculations of probabilities and essential descriptive/inferential statistics for a data analysis problem. For example, wouldn't it be great to be able to solve the following problems with just a single, compact function call?
+
+  * How to calculate the mean/median/mode of a data vector.
+  * How to calculate the cumulative probability of some event following a normal distribution. What if the distribution is Poisson?
+  * How to calculate the inter-quartile range of a series of data points.
+  * How to generate a few random numbers following a Student's t-distribution.
+
+The R programming environment can do all of these.
+
+On the other hand, Python's scripting ability allows analysts to use those statistics in a wide variety of analytics pipelines with limitless sophistication and creativity.
+
+To combine the advantages of both worlds, you just need a simple Python-based wrapper library that contains the most commonly used functions pertaining to probability distributions and descriptive statistics, defined in R style. This enables you to call those functions really fast without having to go to the proper Python statistical libraries and figure out the whole list of methods and arguments.
+
+### Python wrapper script for most convenient R-functions
+
+[I wrote a Python script][4] to define the most convenient and widely used R-functions for simple statistical analysis in Python. After importing this script, you will be able to use those R-functions naturally, just like in an R programming environment.
+
+The goal of this script is to provide simple Python subroutines mimicking R-style statistical functions for quickly calculating density/point estimates, cumulative distributions, and quantiles and generating random variates for important probability distributions.
+
+To maintain the spirit of R styling, the script uses no class hierarchy; only raw functions are defined in the file. Therefore, a user can import this one Python script and use all the functions whenever they're needed with a single name call.
+
+Note that I use the word mimic. Under no circumstance am I claiming to emulate R's true functional programming paradigm, which consists of a deep environmental setup and complex relationships between those environments and objects. This script allows me (and I hope countless other Python users) to quickly fire up a Python program or Jupyter notebook, import the script, and start doing simple descriptive statistics in no time.
That's the goal, nothing more, nothing less.
+
+If you've coded in R (maybe in grad school) and are just starting to learn and use Python for data analysis, you will be happy to see and use some of the same well-known functions in your Jupyter notebook in a manner similar to how you use them in your R environment.
+
+Whatever your reason, using this script is fun.
+
+### Simple examples
+
+To start, just import the script and start working with lists of numbers as if they were data vectors in R.
+
+```
+from R_functions import *
+lst=[20,12,16,32,27,65,44,45,22,18]
+```
+
+Say you want to calculate the [Tukey five-number][5] summary from a vector of data points. You just call one simple function, **fivenum**, and pass it the vector. It will return the five-number summary in a NumPy array.
+
+```
+lst=[20,12,16,32,27,65,44,45,22,18]
+fivenum(lst)
+> array([12. , 18.5, 24.5, 41. , 65. ])
+```
+
+Maybe you want to know the answer to the following question:
+
+Suppose a machine outputs 10 finished goods per hour on average with a standard deviation of 2. The output pattern follows a near-normal distribution. What is the probability that the machine will output at least 7 but no more than 12 units in the next hour?
+
+The answer is essentially this:
+
+![](https://opensource.com/sites/default/files/uploads/r-functions-in-python_1.png)
+
+You can obtain the answer with just one line of code using **pnorm**:
+
+```
+pnorm(12,10,2)-pnorm(7,10,2)
+> 0.7745375447996848
+```
+
+Or maybe you need to answer the following:
+
+Suppose you have a loaded coin that comes up heads 60% of the time you toss it. You are playing a game of 10 tosses. How do you plot and map out the chances of each possible number of wins (from 0 to 10) with this coin?
+
+You can obtain a nice bar chart with just a few lines of code using just one function, **dbinom**:
+
+```
+import matplotlib.pyplot as plt
+
+probs=[]
+for i in range(11):
+    probs.append(dbinom(i,10,0.6))
+plt.bar(range(11),height=probs)
+plt.grid(True)
+plt.show()
+```
+
+![](https://opensource.com/sites/default/files/uploads/r-functions-in-python_2.png)
+
+### Simple interface for probability calculations
+
+R offers an extremely simple and intuitive interface for quick calculations from essential probability distributions. The interface goes like this:
+
+  * **d** {distribution} gives the density function value at a point **x**
+  * **p** {distribution} gives the cumulative value at a point **x**
+  * **q** {distribution} gives the quantile function value at a probability **p**
+  * **r** {distribution} generates one or multiple random variates
+
+In our implementation, we stick to this interface and its associated argument list so you can execute these functions exactly like you would in an R environment.
+
+### Currently implemented functions
+
+The following R-style functions are implemented in the script for fast calling.
+
+  * Mean, median, variance, standard deviation
+  * Tukey five-number summary, IQR
+  * Covariance of a matrix or between two vectors
+  * Density, cumulative probability, quantile function, and random variate generation for the following distributions: normal, uniform, binomial, Poisson, F, Student's t, chi-square, beta, and gamma.
+
+### Work in progress
+
+Obviously, this is a work in progress, and I plan to add some other convenient R-functions to this script. For example, in R, a single command, **lm**, can get you an ordinary least-squares fitted model for a numerical dataset with all the necessary inferential statistics (P-values, standard error, etc.). This is powerfully brief and compact!
On the other hand, standard linear regression problems in Python are often tackled using [Scikit-learn][6], which needs a bit more scripting, so I plan to incorporate this single-function linear-model-fitting feature using Python's [statsmodels][7] backend.
+
+If you like and use this script in your work, please help others find it by starring or forking its [GitHub repository][8]. Also, you can check my other [GitHub repos][9] for fun code snippets in Python, R, or MATLAB and some machine learning resources.
+
+If you have any questions or ideas to share, please contact me at [tirthajyoti[AT]gmail.com][10]. If you are, like me, passionate about machine learning and data science, please [add me on LinkedIn][11] or [follow me on Twitter][12].
+
+Originally published on [Towards Data Science][13]. Reposted under [CC BY-SA 4.0][14].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/write-favorite-r-functions-python
+
+作者:[Tirthajyoti Sarkar][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/tirthajyoti
+[b]: https://github.com/lujun9972
+[1]: https://spectrum.ieee.org/at-work/innovation/the-2018-top-programming-languages
+[2]: https://www.coursera.org/lecture/r-programming/overview-and-history-of-r-pAbaE
+[3]: http://adv-r.had.co.nz/Functional-programming.html
+[4]: https://github.com/tirthajyoti/StatsUsingPython/blob/master/R_Functions.py
+[5]: https://en.wikipedia.org/wiki/Five-number_summary
+[6]: http://scikit-learn.org/stable/
+[7]: https://www.statsmodels.org/stable/index.html
+[8]: https://github.com/tirthajyoti/StatsUsingPython
+[9]: https://github.com/tirthajyoti?tab=repositories
+[10]: mailto:tirthajyoti@gmail.com
+[11]: https://www.linkedin.com/in/tirthajyoti-sarkar-2127aa7/
+[12]: 
https://twitter.com/tirthajyotiS
+[13]: https://towardsdatascience.com/how-to-write-your-favorite-r-functions-in-python-11e1e9c29089
+[14]: https://creativecommons.org/licenses/by-sa/4.0/
diff --git a/sources/tech/20181025 Understanding Linux Links- Part 2.md b/sources/tech/20181025 Understanding Linux Links- Part 2.md
new file mode 100644
index 0000000000..925138f038
--- /dev/null
+++ b/sources/tech/20181025 Understanding Linux Links- Part 2.md
@@ -0,0 +1,98 @@
+Understanding Linux Links: Part 2
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/links-fikri-rasyid-7853.jpg?itok=0jBT_1M2)
+
+In the [first part of this series][1], we looked at hard links and soft links and discussed some of the various ways that linking can be useful. Linking may seem straightforward, but there are some non-obvious quirks you have to be aware of. That's what we'll be looking at here. Consider, for example, the way we created the link to _libblah_ in the previous article. Notice how we linked from within the destination folder:
+
+```
+cd /usr/local/lib
+
+ln -s /usr/lib/libblah
+```
+
+That will work. But this:
+
+```
+cd /usr/lib
+
+ln -s libblah /usr/local/lib
+```
+
+That is, linking from within the original folder to the destination folder, will not work.
+
+The reason is that _ln_ will think you are linking from inside _/usr/local/lib_ to _/usr/local/lib_ and will create a linked file from _libblah_ in _/usr/local/lib_ to _libblah_ also in _/usr/local/lib_. This is because all the link file gets is the name of the file ( _libblah_ ), not the path to the file. The end result is a very broken link.
+
+However, this:
+
+```
+cd /usr/lib
+
+ln -s /usr/lib/libblah /usr/local/lib
+```
+
+will work. Then again, it would work regardless of where in the filesystem you executed the instruction. 
Using absolute paths, that is, spelling out the whole path from root (/) down to the file or directory itself, is just best practice.
+
+Another thing to note is that, as long as both _/usr/lib_ and _/usr/local/lib_ are on the same partition, making a hard link like this:
+
+```
+cd /usr/lib
+
+ln libblah /usr/local/lib
+```
+
+will also work, because hard links don't rely on a path to locate the data they point to.
+
+Where hard links will not work is if you want to link across partitions. Say you have _fileA_ on partition A and the partition is mounted at _/path/to/partitionA/directory_. If you want to link _fileA_ to _/path/to/partitionB/directory_ that is on partition B, this will not work:
+
+```
+ln /path/to/partitionA/directory/fileA /path/to/partitionB/directory
+```
+
+As we saw previously, hard links are entries in a partition's file table that point to data on the *same partition*. You can't have an entry in the table of one partition pointing to data on another partition. Your only choice here would be to use a soft link:
+
+```
+ln -s /path/to/partitionA/directory/fileA /path/to/partitionB/directory
+```
+
+Another thing that soft links can do and hard links cannot is link to whole directories:
+
+```
+ln -s /path/to/some/directory /path/to/some/other/directory
+```
+
+will create a link to _/path/to/some/directory_ within _/path/to/some/other/directory_ without a hitch.
+
+Trying to do the same by hard linking will show you an error saying that you are not allowed to do that. And the reason for that is unending recursion: if you have directory B inside directory A and you then link A inside B, you have a problem, because then A contains B, which contains A, which contains B, and so on ad infinitum.
+
+You can create such recursion using soft links, but why would you do that to yourself?
+
+### Should I use a hard or a soft link?
+
+In general you can use soft links everywhere and for everything. 
In fact, there are situations in which you can only use soft links. That said, hard links are slightly more efficient: they take up less space on disk and are faster to access. On most machines you will not notice the difference, though: the difference in space and speed will be negligible given today's massive and speedy hard disks. However, if you are using Linux on an embedded system with small storage and a low-powered processor, you may want to give hard links some consideration.
+
+Another reason to use hard links is that a hard link is much more difficult to break. If you have a soft link and you accidentally move or delete the file it is pointing to, your soft link will be broken and point to... nothing. There is no danger of this happening with a hard link, since the hard link points directly to the data on the disk. Indeed, the space on the disk will not be flagged as free until the last hard link pointing to it is erased from the file system.
+
+Soft links, on the other hand, can do more than hard links: they can point to anything, be it file or directory, and they can point to items that are on different partitions. These two things alone often make them the only choice.
+
+### Next Time
+
+Now that we have covered files and directories and the basic tools to manipulate them, you are ready to move on to the tools that let you explore the directory hierarchy, find data within files, and examine the contents. That's what we'll be dealing with in the next installment. See you then!
+
+Learn more about Linux through the free ["Introduction to Linux"][2] course from The Linux Foundation and edX.
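The break-versus-survive behavior described above is easy to verify for yourself; here is a small scratch sketch (the file names are arbitrary, and everything happens in a throwaway directory, so it is safe to run):

```shell
# Scratch demonstration: a hard link keeps the data alive after the
# original name is deleted, while a soft link breaks.
tmp=$(mktemp -d)

echo "hello" > "$tmp/original"
ln "$tmp/original" "$tmp/hardlink"    # hard link: a second name for the same data
ln -s original "$tmp/softlink"        # soft link: a tiny file storing the path

rm "$tmp/original"                    # delete the original name

cat "$tmp/hardlink"                   # prints "hello": the data is still there
cat "$tmp/softlink" 2>/dev/null || echo "softlink is broken"

rm -rf "$tmp"
```

The second `cat` fails because the soft link now points at a name that no longer exists, even though the data itself is still reachable through the hard link.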
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2018/10/understanding-linux-links-part-2 + +作者:[Paul Brown][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/bro66 +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/blog/intro-to-linux/2018/10/linux-links-part-1 +[2]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20181026 An Overview of Android Pie.md b/sources/tech/20181026 An Overview of Android Pie.md new file mode 100644 index 0000000000..9fa365327f --- /dev/null +++ b/sources/tech/20181026 An Overview of Android Pie.md @@ -0,0 +1,138 @@ +An Overview of Android Pie +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/android-pie.jpg?itok=Sx4rbOWY) + +Let’s talk about Android for a moment. Yes, I know it’s only Linux by way of a modified kernel, but what isn’t these days? And seeing as how the developers of Android have released what many (including yours truly) believe to be the most significant evolution of the platform to date, there’s plenty to talk about. Of course, before we get into that, it does need to be mentioned (and most of you will already know this) that the whole of Android isn’t open source. Although much of it is, when you get into the bits that connect to Google services, things start to close up. One major service is the Google Play Store, a functionality that is very much proprietary. But this isn’t about how much of Android is open or closed, this is about Pie. +Delicious, nutritious … efficient and battery-saving Pie. + +I’ve been working with Android Pie on my Essential PH-1 daily driver (a phone that I really love, but understand how shaky the ground is under the company). 
After using Android Pie for a while now, I can safely say you want it. It's that good. But what is it about the ninth release of Android that makes it so special? Let's dig in and find out. Our focus will be on the aspects that affect users, not developers, so I won't dive deep into the underlying workings.
+
+### Gesture-Based Navigation
+
+Much has been made about Android's new gesture-based navigation—much of it not good. To be honest, this was a feature that aroused all of my curiosity. When it was first announced, no one really had much of an idea what it would be like. Would users be working with multi-touch gestures to navigate around the Android interface? Or would this be something completely different?
+
+![Android Pie][2]
+
+Figure 1: The Android Pie recent apps overview.
+
+[Used with permission][3]
+
+The reality is, gesture-based navigation is much more subtle and simple than most assumed. And it all boils down to the Home button. With gesture-based navigation enabled, the Home button and the Recents button have been combined into a single feature. This means, in order to gain access to your recent apps, you can't simply tap that square Recents button. Instead, the Recent apps overview (Figure 1) is opened with a short swipe up from the Home button.
+
+Another change is how the App Drawer is accessed. In similar fashion to opening the Recents overview, the App Drawer is opened via a long swipe up from the Home button.
+
+As for the back button? It's not been removed. Instead, what you'll find is that it appears (on the left side of the home screen dock) when an app calls for it. Sometimes that back button will appear even if an app includes its own back button.
+
+Thing is, however, if you don't like gesture-based navigation, you can disable it. To do so, follow these steps:
+
+  1. Open Settings
+
+  2. Scroll down and tap System > Gestures
+
+  3. Tap Swipe up on Home button
+
+  4. 
Tap the On/Off slider (Figure 2) until it's in the Off position
+
+### Battery Life
+
+AI has become a crucial factor in Android. In fact, it is AI that has helped to greatly improve battery life in Android. This new feature is called Adaptive Battery and works by prioritizing battery power for the apps and services you use most. By using AI, Android learns how you use your apps and, after a short period, can then shut down unused apps, so they aren't draining your battery while waiting in memory.
+
+The only caveat to Adaptive Battery is, should the AI pick up "bad habits" and your battery start to prematurely drain, the only way to reset the function is by way of a factory reset. Even with that small oversight, the improvement in battery life from Android Oreo to Pie is significant.
+
+### Changes to Split Screen
+
+Split Screen has been available to Android for some time. However, with Android Pie, how it's used has slightly changed. This change only affects those who have gesture-based navigation enabled (otherwise, it remains the same). In order to work with Split Screen on Android 9.0, follow these steps:
+
+![Adding an app][5]
+
+Figure 3: Adding an app to split screen mode in Android Pie.
+
+[Used with permission][3]
+
+  1. Swipe upward from the Home button to open the Recent apps overview.
+
+  2. Locate the app you want to place in the top portion of the screen.
+
+  3. Long-press the app's circle icon (located at the top of the app card) to reveal a new popup menu (Figure 3).
+
+  4. Tap Split Screen and the app will open in the top half of the screen.
+
+  5. Locate the second app you want to open and tap it to add it to the bottom half of the screen.
+
+Using Split Screen and closing apps with the feature remains the same as it was.
+
+![Actions][7]
+
+Figure 4: Android App Actions in action. 
+
+[Used with permission][3]
+
+### App Actions
+
+This is another feature that was introduced some time ago but was given some serious attention for the release of Android Pie. App Actions make it such that you can do certain things with an app directly from the app's launcher.
+
+For instance, if you long-press the Gmail launcher, you can choose to reply to a recent email or compose a new email. Back in Android Oreo, that feature came in the form of a popup list of actions. With Android Pie, the feature now better fits with the Material Design scheme of things (Figure 4).
+
+![Sound control][9]
+
+Figure 5: Sound control in Android Pie.
+
+[Used with permission][3]
+
+### Sound Controls
+
+Ah, the ever-changing world of sound controls on Android. Android Oreo had an outstanding method of controlling your sound, by way of minor tweaks to the Do Not Disturb feature. With Android Pie, that feature finds itself in a continued state of evolution.
+
+What Android Pie nailed is the quick-access buttons for controlling sound on a device. Now, if you press either the volume up or down button, you'll see a new popup menu that allows you to control whether your device is silenced and/or vibrations are muted. By tapping the top icon in that popup menu (Figure 5), you can cycle through silence, mute, or full sound.
+
+### Screenshots
+
+Because I write about Android, I tend to take a lot of screenshots. With Android Pie came one of my favorite improvements: sharing screenshots. Instead of having to open Google Photos, locate the screenshot to be shared, open the image, and share the image, Pie gives you a pop-up menu (after you take a screenshot) that allows you to share, edit, or delete the image in question.
+
+![Sharing ][11]
+
+Figure 6: Sharing screenshots just got a whole lot easier.
+
+[Used with permission][3]
+
+If you want to share the screenshot, take it, wait for the menu to pop up, tap Share (Figure 6), and then share it from the standard Android sharing menu. 
+ +### A More Satisfying Android Experience + +The ninth iteration of Android has brought about a far more satisfying user experience. What I’ve illustrated only scratches the surface of what Android Pie brings to the table. For more information, check out Google’s official [Android Pie website][12]. And if your device has yet to receive the upgrade, have a bit of patience. Pie is well worth the wait. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/2018/10/overview-android-pie + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: /files/images/pie1png +[2]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_1.png?itok=BsSe8kqS (Android Pie) +[3]: /licenses/category/used-permission +[4]: /files/images/pie3png +[5]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_3.png?itok=F-NB1dqI (Adding an app) +[6]: /files/images/pie4png +[7]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_4.png?itok=Ex-NzYSo (Actions) +[8]: /files/images/pie5png +[9]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_5.png?itok=NMW2vIlL (Sound control) +[10]: /files/images/pie6png +[11]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_6.png?itok=7Ik8_4jC (Sharing ) +[12]: https://www.android.com/versions/pie-9-0/ diff --git a/sources/tech/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md b/sources/tech/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md new file mode 100644 index 0000000000..fda7de542e --- /dev/null +++ b/sources/tech/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md @@ -0,0 
+1,84 @@ +Ultimate Plumber – Writing Linux Pipes With Instant Live Preview +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/Ultimate-Plumber-720x340.jpg) + +As you may already know, a **pipe** is used to send the output of one command/program/process to another command/program/process for further processing in Unix-like operating systems. Using pipes, we can combine two or more commands and redirect the standard output of one command to the standard input of another easily and quickly. A pipe is represented by a vertical bar character (**|**) between two or more Linux commands. The general syntax of a pipeline is given below. + +``` +Command-1 | Command-2 | Command-3 | … | Command-N +``` + +If you use pipes often, I have good news for you. Now, you can preview the results of Linux pipes instantly while writing them. Say hello to **“Ultimate Plumber”**, or **UP** for short, a command line tool for writing Linux pipes with instant live preview. It is used to build complex pipelines quickly and easily, with an instant, scrollable preview of the command results. The UP tool is quite handy if you often need to tweak and rerun piped commands to get the desired result. + +In this brief guide, I will show you how to install UP and build complex Linux pipelines easily. + +**Important warning:** + +Please be careful when using this tool in production! It could be dangerous, and you might inadvertently delete important data. You must be particularly careful when using the “rm” or “dd” commands with the UP tool. You have been warned! + +### Writing Linux Pipes With Instant Live Preview Using Ultimate Plumber + +Here is a simple example to understand the underlying concept of UP. For example, let us pipe the output of the **lshw** command into UP. To do so, type the following command in your Terminal and press ENTER: + +``` +$ lshw |& up +``` + +You will see an input box at the top of the screen as shown in the below screenshot.
+![](https://www.ostechnix.com/wp-content/uploads/2018/10/Ultimate-Plumber.png) + +In the input box, start typing a pipeline and press the ENTER key to execute the command you just typed. The Ultimate Plumber utility will immediately show you the output of the pipeline in the **scrollable window** below. You can browse through the results using the **PgUp/PgDn** or **Ctrl+ ** keys. + +Once you’re satisfied with the result, press **Ctrl-X** to exit UP. The Linux pipeline you just built will be saved in a file named **up1.sh** in the current working directory. If this file already exists, an additional file named **up2.sh** will be created to save the result, and so on, up to 1000 files. If you don’t want to save the output, just press **Ctrl-C**. + +You can view the contents of the upX.sh file with the cat command. Here is the output of my **up2.sh** file: + +``` +$ cat up2.sh +#!/bin/bash +grep network -A5 | grep : | cut -d: -f2- | paste - - +``` + +If the command you piped into UP is long-running, you will see a **~** (tilde) character in the top-left corner of the window. It means that UP is still waiting for input. In such cases, you may need to temporarily freeze UP’s input buffer by pressing **Ctrl-S**. To unfreeze it, simply press **Ctrl-Q**. The current input buffer size of Ultimate Plumber is **40 MB**. Once you reach this limit, you will see a **+** (plus) sign in the top-left corner of the screen. + +Here is a short demo of the UP tool in action: +![](https://www.ostechnix.com/wp-content/uploads/2018/10/up.gif)
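The file UP saves is an ordinary shell script that filters standard input, so you can keep reusing it outside the interactive session. As a rough sketch, here is what the kind of pipeline saved in the up2.sh above does, using two made-up input lines in place of real `lshw` output (the `grep network -A5` context step is omitted for brevity):

```shell
# Made-up stand-in for a fragment of lshw output.
# Keep the text after each colon, then join pairs of lines with paste.
printf 'description: Ethernet interface\nlogical name: eth0\n' \
  | grep : | cut -d: -f2- | paste - -
```

Piping real data through a saved script works the same way, for example `lshw |& bash up1.sh` (the file name depends on how many scripts UP has already saved).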
+ +``` +$ sudo wget -O /usr/local/bin/up https://github.com/akavel/up/releases/download/v0.2.1/up +``` + +Then, make the UP binary executable with the command: + +``` +$ sudo chmod a+x /usr/local/bin/up +``` + +Done! Start building Linux pipelines as described above! + +And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned! + +Cheers! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/ultimate-plumber-writing-linux-pipes-with-instant-live-preview/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://github.com/akavel/up/releases diff --git a/sources/tech/20181027 Design faster web pages, part 3- Font and CSS tweaks.md b/sources/tech/20181027 Design faster web pages, part 3- Font and CSS tweaks.md new file mode 100644 index 0000000000..dc5b7cfbf2 --- /dev/null +++ b/sources/tech/20181027 Design faster web pages, part 3- Font and CSS tweaks.md @@ -0,0 +1,75 @@ +Translating by StdioA + +Design faster web pages, part 3: Font and CSS tweaks +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/10/designfaster3-816x345.jpg) + +Welcome back to this series of articles on designing faster web pages. [Part 1][1] and [part 2][2] of this series covered how to lose browser fat through optimizing and replacing images. This part looks at how to lose additional fat in CSS ([Cascading Style Sheets][3]) and fonts. + +### Tweaking CSS + +First things first: let’s look at where the problem originates. CSS was once a huge step forward. You can use it to style several pages from a central style sheet. Nowadays, many web developers use frameworks like Bootstrap.
+ +While these frameworks are certainly helpful, many people simply copy and paste the whole framework. Bootstrap is huge; the “minimal” version of 4.0 is currently 144.9 KB. Perhaps in the era of terabytes of data, this isn’t much. But as the saying goes, even small cattle make a mess. + +Look back at the [getfedora.org][4] example. Recall from [part 1][1] that the first analysis showed the CSS files used nearly ten times more space than the HTML itself. Here’s a display of the stylesheets used: + +![][5] + +That’s nine different stylesheets. Many of the styles in them are unused on the page. + +#### Remove, merge, and compress/minify + +The font-awesome CSS inhabits the extreme end of included, unused styles. Only three glyphs of the font are used on the page. In KB terms, the font-awesome CSS used at getfedora.org is originally 25.2 KB. After cleaning out all unused styles, it’s only 1.3 KB. This is only about 4% of its original size! For the Bootstrap CSS, the difference is 118.3 KB original, and 13.2 KB after removing unused styles. + +The next question is, must there be a bootstrap.css and a font-awesome.css? Or can they be combined? Yes, they can. That doesn’t save much file space, but the browser now requests fewer files to successfully render the page. + +Finally, after merging the CSS files, try to remove unused styles and minify them. In this way, you save 10.1 KB for a final size of 4.3 KB. + +Unfortunately, there’s no packaged “minifier” tool in Fedora’s repositories yet. However, there are hundreds of online services to do that for you. Or you can use [CSS-HTML-JS Minify][6], which is written in Python and therefore easy to install. There’s no packaged tool to purify CSS either, but there are web services like [UnCSS][7]. + +### Font improvement + +[CSS3][8] came with something a lot of web developers like: they can define fonts that the browser downloads in the background to render the page.
Since then, a lot of web designers have been very happy, especially after they discovered the usage of icon fonts for web design. Font sets like [Font Awesome][9] are quite popular today and widely used. Here’s the size of that content: + +``` +current free version 912 glyphs/icons, smallest set ttf 30.9KB, woff 14.7KB, woff2 12.2KB, svg 107.2KB, eot 31.2KB +``` + +So the question is, do you need all the glyphs? In all probability, no. You can get rid of them with [FontForge][10], but that’s a lot of work. You could also use [Fontello][11]. Use the public instance, or set up your own, as it’s free software and available on [GitHub][12]. + +The downside of such customized font sets is that you must host the font yourself. You can’t rely on other online font services to provide updates. But this may not really be a downside, compared to the faster performance. + +### Conclusion + +Now you’ve done everything you can to the content itself to minimize what the browser loads and interprets. From now on, only tricks with the administration of the server can help. + +One easy step, which many people nevertheless get wrong, is to decide on some intelligent caching. For instance, a CSS or picture file can be cached for a week. Whatever you do, if you use a proxy service like Cloudflare or build your own proxy, minify the pages first. Users like fast-loading pages. They’ll (silently) thank you for it, and the server will have a smaller load, too.
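The remove-merge-minify workflow described above can be sketched in plain shell. This is only a toy illustration with two stand-in stylesheets; a real minifier (and a purifier like UnCSS for the unused rules) does far more:

```shell
# Two tiny stand-in stylesheets (in practice: bootstrap.css, font-awesome.css).
printf 'body { color: red; }  /* demo comment */\n' > a.css
printf 'a { color: blue; }\n' > b.css

# Merge, strip /* ... */ comments (naive: breaks on comments containing "*"),
# and collapse runs of whitespace into single spaces.
cat a.css b.css \
  | sed 's,/\*[^*]*\*/,,g' \
  | tr -s '[:space:]' ' ' > site.min.css

cat site.min.css
```

The result is one smaller file, so the browser makes a single request instead of two.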
+ + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/design-faster-web-pages-part-3-font-css-tweaks/ + +作者:[Sirko Kemter][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/gnokii/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/design-faster-web-pages-part-1-image-compression/ +[2]: https://fedoramagazine.org/design-faster-web-pages-part-2-image-replacement/ +[3]: https://en.wikipedia.org/wiki/Cascading_Style_Sheets +[4]: https://getfedora.org +[5]: https://fedoramagazine.org/wp-content/uploads/2018/02/CSS_delivery_tool_-_Examine_how_a_page_uses_CSS_-_2018-02-24_15.00.46.png +[6]: https://github.com/juancarlospaco/css-html-js-minify +[7]: https://uncss-online.com/ +[8]: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS3 +[9]: https://fontawesome.com/ +[10]: https://fontforge.github.io/en-US/ +[11]: http://fontello.com/ +[12]: https://github.com/fontello/fontello diff --git a/sources/tech/20181029 Create animated, scalable vector graphic images with MacSVG.md b/sources/tech/20181029 Create animated, scalable vector graphic images with MacSVG.md new file mode 100644 index 0000000000..df990db3bc --- /dev/null +++ b/sources/tech/20181029 Create animated, scalable vector graphic images with MacSVG.md @@ -0,0 +1,69 @@ +Create animated, scalable vector graphic images with MacSVG +====== + +Open source SVG: The writing is on the wall + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_design_paper_plane_2_0.jpg?itok=xKdP-GWE) + +The Neo-Babylonian regent [Belshazzar][1] did not heed [the writing on the wall][2] that magically appeared during his great feast. 
However, if he had had a laptop and a good internet connection in 539 BC, he might have staved off those pesky Persians by reading the SVG in his browser. + +Animating text and objects on web pages is a great way to build user interest and engagement. There are several ways to achieve this, such as a video embed, an animated GIF, or a slideshow—but you can also use [scalable vector graphics][3] (SVG). + +An SVG image is different from, say, a JPG, because it is scalable without losing its resolution. A vector image is created by points, not dots, so no matter how large it gets, it will not lose its resolution or pixelate. An example of a good use of scalable, static images would be logos on websites. + +### Move it, move it + +You can create SVG images with several drawing programs, including open source [Inkscape][4] and Adobe Illustrator. Getting your images to “do something” requires a bit more effort. Fortunately, there are open source solutions that would get even Belshazzar’s attention. + +[MacSVG][5] is one tool that will get your images moving. You can find the source code on [GitHub][6]. + +Developed by Douglas Ward of Conway, Arkansas, MacSVG is an “open source Mac OS app for designing HTML5 SVG art and animation,” according to its [website][5]. + +I was interested in using MacSVG to create an animated signature. I admit that I found the process a bit confusing and failed at my first attempts to create an actual animated SVG image. + +![](https://opensource.com/sites/default/files/uploads/macsvg-screen.png) + +It is important to first learn what makes “the writing on the wall” actually write. + +The attribute behind the animated writing is [stroke-dasharray][7]. Breaking the term into three words helps explain what is happening: Stroke refers to the line or stroke you would make with a pen, whether physical or digital. Dash means breaking the stroke down into a series of dashes. Array means collecting those dashes into an array.
That’s a simple overview, but it helped me understand what was supposed to happen and why. + +With MacSVG, you can import a graphic (.PNG) and use the pen tool to trace the path of the writing. I used a cursive representation of my first name. Then it was just a matter of applying the attributes to animate the writing, increase and decrease the thickness of the stroke, change its color, and so on. Once completed, the animated writing was exported as an .SVG file and was ready for use on the web. MacSVG can be used for many different types of SVG animation in addition to handwriting. + +### The writing is on the WordPress + +I was ready to upload and share my SVG example on my [WordPress][8] site, but I discovered that WordPress does not allow SVG media imports. Fortunately, I found a handy plugin: Benbodhi’s [SVG Support][9] allowed a quick, easy import of my SVG the same way I would import a JPG to my Media Library. I was able to showcase my [writing on the wall][10] to Babylonians everywhere. + +I opened the source code of my SVG in [Brackets][11], and here are the results (with a short placeholder curve standing in for my signature’s much longer path data): + +``` +<svg xmlns="http://www.w3.org/2000/svg" width="500" height="200"> +  <title>Path animation with stroke-dasharray</title> +  <desc>This example demonstrates the use of a path element, an animate +  element, and the stroke-dasharray attribute to simulate drawing.</desc> +  <path d="M20,120 C120,20 280,220 480,80" fill="none" stroke="black" +        stroke-width="4" stroke-dasharray="700" stroke-dashoffset="700"> +    <animate attributeName="stroke-dashoffset" from="700" to="0" +             dur="4s" fill="freeze"/> +  </path> +</svg> +``` + +What would you use MacSVG for?
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/macsvg-open-source-tool-animation + +作者:[Jeff Macharyas][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rikki-endsley +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Belshazzar +[2]: https://en.wikipedia.org/wiki/Belshazzar%27s_feast +[3]: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics +[4]: https://inkscape.org/ +[5]: https://macsvg.org/ +[6]: https://github.com/dsward2/macSVG +[7]: https://gist.github.com/mbostock/5649592 +[8]: https://macharyas.com/ +[9]: https://wordpress.org/plugins/svg-support/ +[10]: https://macharyas.com/index.php/2018/10/14/open-source-svg/ +[11]: http://brackets.io/ diff --git a/sources/tech/20181029 DF-SHOW - A Terminal File Manager Based On An Old DOS Application.md b/sources/tech/20181029 DF-SHOW - A Terminal File Manager Based On An Old DOS Application.md new file mode 100644 index 0000000000..f250cca056 --- /dev/null +++ b/sources/tech/20181029 DF-SHOW - A Terminal File Manager Based On An Old DOS Application.md @@ -0,0 +1,162 @@ +DF-SHOW – A Terminal File Manager Based On An Old DOS Application +====== +![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-720x340.png) + +If you have worked on good old MS-DOS, you might have used or heard about **DF-EDIT**. DF-EDIT, which stands for **D**irectory **F**ile **Edit**or, is an obscure DOS file manager, originally written by **Larry Kroeker** for MS-DOS and PC-DOS systems. It is used to display the contents of a given directory or file. Today, I stumbled upon a similar utility named **DF-SHOW** (**D**irectory **F**ile **S**how), a terminal file manager for Unix-like operating systems.
It is a Unix rewrite of the obscure DF-EDIT file manager and is based on the DF-EDIT 2.3d release from 1986. DF-SHOW is completely free, open source, and released under GPLv3. + +With DF-SHOW, you can: + + * List the contents of a directory, + * View files, + * Edit files using your default file editor, + * Copy files to/from different locations, + * Rename files, + * Delete files, + * Create new directories from within the DF-SHOW interface, + * Update file permissions, owners and groups, + * Search for files matching a search term, + * Launch executable files. + + + +### DF-SHOW Usage + +DF-SHOW consists of two programs, namely **“show”** and **“sf”**. + +**Show command** + +The “show” program (similar to the `ls` command) is used to display the contents of a directory, create new directories, rename and delete files/folders, update permissions, search for files, and so on. + +To view the list of contents in a directory, use the following command: + +``` +$ show <directory> +``` + +Example: + +``` +$ show dfshow +``` + +Here, dfshow is a directory. If you invoke the “show” command without specifying a directory path, it will display the contents of the current directory. + +Here is how the DF-SHOW default interface looks. + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-1.png) + +As you can see, the DF-SHOW interface is self-explanatory. + +On the top bar, you see the list of available options, such as Copy, Delete, Edit, Modify, etc. + +The complete list of available options is given below: + + * **C**opy, + * **D**elete, + * **E**dit, + * **H**idden, + * **M**odify, + * **Q**uit, + * **R**ename, + * **S**how, + * h**U**nt, + * e**X**ec, + * **R**un command, + * **E**dit file, + * **H**elp, + * **M**ake dir, + * **Q**uit, + * **S**how dir + + + +In each option, one letter is capitalized and marked in bold. Just press the capitalized letter to perform the respective operation.
For example, to rename a file, just press **R**, type the new name, and hit ENTER to rename the selected item. + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-2.png) + +To display all options or cancel an operation, just press the **ESC** key. + +Also, you will see a bunch of function keys at the bottom of the DF-SHOW interface to navigate through the contents of a directory. + + * **UP/DOWN** arrows or **F1/F2** – Move up and down (one line at a time), + + * **PgUp/PgDn** – Move one page at a time, + + * **F3/F4** – Instantly go to the top or bottom of the list, + + * **F5** – Refresh, + + * **F6** – Mark/Unmark files (marked files are indicated with an `*` in front of them), + + * **F7/F8** – Mark/Unmark all files at once, + + * **F9** – Sort the list by date & time, name, or size. + + + +Press **h** to learn more details about the **show** command and its options. + +To exit DF-SHOW, simply press **q**. + +**SF Command** + +The “sf” (show files) command is used to display the contents of a file. + +``` +$ sf <file> +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/dfshow-3.png) + +Press **h** to learn more about the “sf” command and its options. To quit, press **q**. + +Want to give it a try? Great! Go ahead and install DF-SHOW on your Linux system as described below. + +### Installing DF-SHOW + +DF-SHOW is available in the [**AUR**][1], so you can install it on any Arch-based system using AUR programs such as [**Yay**][2]. + +``` +$ yay -S dfshow +``` + +On Ubuntu and its derivatives: + +``` +$ sudo add-apt-repository ppa:ian-hawdon/dfshow + +$ sudo apt-get update + +$ sudo apt-get install dfshow +``` + +On other Linux distributions, you can compile and build it from source as shown below. + +``` +$ git clone https://github.com/roberthawdon/dfshow +$ cd dfshow +$ ./bootstrap +$ ./configure +$ make +$ sudo make install +``` + +The author of the DF-SHOW project has so far rewritten only some of the applications of the DF-EDIT utility.
Since the source code is freely available on GitHub, you can add more features, improve the code, and submit or fix bugs (if there are any). It is still in the alpha stage, but fully functional. + +Have you tried it already? If so, how’d it go? Tell us your experience in the comments section below. + +And, that’s all for now. Hope this was useful. More good stuff to come. + +Stay tuned! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/df-show-a-terminal-file-manager-based-on-an-old-dos-application/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://aur.archlinux.org/packages/dfshow/ +[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ diff --git a/sources/tech/20181029 Machine learning with Python- Essential hacks and tricks.md b/sources/tech/20181029 Machine learning with Python- Essential hacks and tricks.md new file mode 100644 index 0000000000..a3896df3f0 --- /dev/null +++ b/sources/tech/20181029 Machine learning with Python- Essential hacks and tricks.md @@ -0,0 +1,112 @@ +Machine learning with Python: Essential hacks and tricks +====== +Master machine learning, AI, and deep learning with Python. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S) + +It's never been easier to get started with machine learning. In addition to structured massive open online courses (MOOCs), there are a huge number of incredible, free resources available around the web. Here are a few that have helped me. + + 1.
Learn to clearly differentiate between the buzzwords—for example, machine learning, artificial intelligence, deep learning, data science, computer vision, and robotics. Read or listen to talks by experts on each of them. Watch this [amazing video by Brandon Rohrer][1], an influential data scientist. Or watch this video about the [clear differences between various roles][2] associated with data science. + + + 2. Clearly set a goal for what you want to learn. Then go and take [that Coursera course][3]. Or take the one [from the University of Washington][4], which is pretty good too. + + + 3. If you are enthusiastic about taking online courses, check out this article for guidance on [choosing the right MOOC][5]. + + + 4. Most of all, develop a feel for it. Join some good social forums, but resist the temptation to latch onto sensationalized headlines and news. Do your own reading to understand what it is and what it is not, where it might go, and what possibilities it can open up. Then sit back and think about how you can apply machine learning or data science principles in your daily work. Build a simple regression model to predict the cost of your next lunch or download your electricity usage data from your energy provider and do a simple time-series plot in Excel to discover some pattern of usage. And after you are thoroughly enamored with machine learning, you can watch this video. + + + +### Is Python a good language for machine learning/AI? + +Familiarity with and moderate expertise in at least one high-level programming language are useful for beginners in machine learning. Unless you are a Ph.D. researcher working on a purely theoretical proof of some complex algorithm, you are expected to mostly use existing machine learning algorithms and apply them in solving novel problems. This requires you to put on a programming hat. + +There's a lot of talk about the best language for data science.
While the debate rages, grab a coffee and read this insightful FreeCodeCamp article to learn about [data science languages][6]. Or, check out this post on KDnuggets to dive directly into the [Python vs. R debate][7]. + +For now, it's widely believed that Python helps developers be more productive from development to deployment and maintenance. Python's syntax is simpler and at a higher level when compared to Java, C, and C++. It has a vibrant community, open source culture, hundreds of high-quality libraries focused on machine learning, and a huge support base from big names in the industry (e.g., Google, Dropbox, Airbnb, etc.). + +### Fundamental Python libraries + +Assuming you go with the widespread opinion that Python is the best language for machine learning, there are a few core Python packages and libraries you need to master. + +#### NumPy + +Short for [Numerical Python][8], NumPy is the fundamental package required for high-performance scientific computing and data analysis in the Python ecosystem. It's the foundation on which nearly all of the higher-level tools, such as [Pandas][9] and [scikit-learn][10], are built. [TensorFlow][11] uses NumPy arrays as the fundamental building blocks underpinning Tensor objects and graphflow for deep learning tasks. Many NumPy operations are implemented in C, making them super fast. For data science and modern machine learning tasks, this is an invaluable advantage. + +![](https://opensource.com/sites/default/files/uploads/machine-learning-python_numpy-cheat-sheet.jpeg) + +#### Pandas + +Pandas is the most popular library in the scientific Python ecosystem for doing general-purpose data analysis.
Pandas is built upon NumPy arrays, thereby preserving fast execution speed and offering many data engineering features, including: + + * Reading/writing many different data formats + * Selecting subsets of data + * Calculating across rows and down columns + * Finding and filling missing data + * Applying operations to independent groups within the data + * Reshaping data into different forms + * Combining multiple datasets + * Advanced time-series functionality + * Visualization through Matplotlib and Seaborn + +![](https://opensource.com/sites/default/files/uploads/pandas_cheat_sheet_github.png) + +#### Matplotlib and Seaborn + +Data visualization and storytelling with data are essential skills for every data scientist because it's critical to be able to communicate insights from analyses to any audience effectively. This is an equally critical part of your machine learning pipeline, as you often have to perform an exploratory analysis of a dataset before deciding to apply a particular machine learning algorithm. + +[Matplotlib][12] is the most widely used 2D Python visualization library. It's equipped with a dazzling array of commands and interfaces for producing publication-quality graphics from your data. This amazingly detailed and rich article will help you [get started with Matplotlib][13]. + +![](https://opensource.com/sites/default/files/uploads/matplotlib_gallery_-1.png) +[Seaborn][14] is another great visualization library focused on statistical plotting. It provides an API (with flexible choices for plot style and color defaults) on top of Matplotlib, defines simple high-level functions for common statistical plot types, and integrates with functionality provided by Pandas. You can start with this great tutorial on [Seaborn for beginners][15]. + +![](https://opensource.com/sites/default/files/uploads/machine-learning-python_seaborn.png) + +#### Scikit-learn + +Scikit-learn is the most important general machine learning Python package to master.
It features various [classification][16], [regression][17], and [clustering][18] algorithms, including [support vector machines][19], [random forests][20], [gradient boosting][21], [k-means][22], and [DBSCAN][23], and is designed to interoperate with the Python numerical and scientific libraries NumPy and [SciPy][24]. It provides a range of supervised and unsupervised learning algorithms via a consistent interface. The library has a level of robustness and support required for use in production systems. This means it has a deep focus on concerns such as ease of use, code quality, collaboration, documentation, and performance. Look at this [gentle introduction to machine learning vocabulary][25] used in the Scikit-learn universe or this article demonstrating [a simple machine learning pipeline][26] method using Scikit-learn. + +This article was originally published on [Heartbeat][27] under [CC BY-SA 4.0][28]. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/machine-learning-python-essential-hacks-and-tricks + +作者:[Tirthajyoti Sarkar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/tirthajyoti +[b]: https://github.com/lujun9972 +[1]: https://www.youtube.com/watch?v=tKa0zDDDaQk +[2]: https://www.youtube.com/watch?v=Ura_ioOcpQI +[3]: https://www.coursera.org/learn/machine-learning +[4]: https://www.coursera.org/specializations/machine-learning +[5]: https://towardsdatascience.com/how-to-choose-effective-moocs-for-machine-learning-and-data-science-8681700ed83f +[6]: https://medium.freecodecamp.org/which-languages-should-you-learn-for-data-science-e806ba55a81f +[7]: https://www.kdnuggets.com/2017/09/python-vs-r-data-science-machine-learning.html +[8]: http://numpy.org/ +[9]: https://pandas.pydata.org/ +[10]: 
http://scikit-learn.org/ +[11]: https://www.tensorflow.org/ +[12]: https://matplotlib.org/ +[13]: https://realpython.com/python-matplotlib-guide/ +[14]: https://seaborn.pydata.org/ +[15]: https://www.datacamp.com/community/tutorials/seaborn-python-tutorial +[16]: https://en.wikipedia.org/wiki/Statistical_classification +[17]: https://en.wikipedia.org/wiki/Regression_analysis +[18]: https://en.wikipedia.org/wiki/Cluster_analysis +[19]: https://en.wikipedia.org/wiki/Support_vector_machine +[20]: https://en.wikipedia.org/wiki/Random_forests +[21]: https://en.wikipedia.org/wiki/Gradient_boosting +[22]: https://en.wikipedia.org/wiki/K-means_clustering +[23]: https://en.wikipedia.org/wiki/DBSCAN +[24]: https://en.wikipedia.org/wiki/SciPy +[25]: http://scikit-learn.org/stable/tutorial/basic/tutorial.html +[26]: https://towardsdatascience.com/machine-learning-with-python-easy-and-robust-method-to-fit-nonlinear-data-19e8a1ddbd49 +[27]: https://heartbeat.fritz.ai/some-essential-hacks-and-tricks-for-machine-learning-with-python-5478bc6593f2 +[28]: https://creativecommons.org/licenses/by-sa/4.0/ diff --git a/sources/tech/20181030 Podman- A more secure way to run containers.md b/sources/tech/20181030 Podman- A more secure way to run containers.md new file mode 100644 index 0000000000..a6252d87cc --- /dev/null +++ b/sources/tech/20181030 Podman- A more secure way to run containers.md @@ -0,0 +1,130 @@ +Podman: A more secure way to run containers +====== +Podman uses a traditional fork/exec model (vs. a client/server model) for running containers. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-cloud-safe.png?itok=yj2TFPzq) + +Before I get into the main topic of this article, [Podman][1] and containers, I need to get a little technical about the Linux audit feature. + +### What is audit? + +The Linux kernel has an interesting security feature called **audit**. 
It allows administrators to watch for security events on a system and have them logged to the audit.log, which can be stored locally or remotely on another machine to prevent a hacker from trying to cover his tracks.
+
+The **/etc/shadow** file is a common security file to watch, since adding a record to it could allow an attacker to get return access to the system. Administrators want to know if any process modified the file. You can do this by executing the command:
+
+```
+# auditctl -w /etc/shadow
+```
+
+Now let's see what happens if I modify the /etc/shadow file:
+
+```
+# touch /etc/shadow
+# ausearch -f /etc/shadow -i -ts recent
+
+type=PROCTITLE msg=audit(10/10/2018 09:46:03.042:4108) : proctitle=touch /etc/shadow type=SYSCALL msg=audit(10/10/2018 09:46:03.042:4108) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7ffdb17f6704 a2=O_WRONLY|O_CREAT|O_NOCTTY| O_NONBLOCK a3=0x1b6 items=2 ppid=2712 pid=3727 auid=dwalsh uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts1 ses=3 comm=touch exe=/usr/bin/touch subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
+```
+
+There's a lot of information in the audit record, but I highlighted that it recorded that root modified the /etc/shadow file and that the audit UID (**auid**) of the process' owner was **dwalsh**.
+
+Did the kernel do that?
+
+#### Tracking the login UID
+
+There is a field called **loginuid**, stored in **/proc/self/loginuid**, that is part of the proc struct of every process on the system. This field can be set only once; after it is set, the kernel will not allow any process to reset it.
+
+When I log into the system, the login program sets the loginuid field for my login process.
+
+My UID, dwalsh, is 3267.
+
+```
+$ cat /proc/self/loginuid
+3267
+```
+
+Now, even if I become root, my login UID stays the same.
+
+```
+$ sudo cat /proc/self/loginuid
+3267
+```
+
+Note that every process that's forked and executed from the initial login process automatically inherits the loginuid. This is how the kernel knew that the person who logged in was dwalsh.
+
+### Containers
+
+Now let's look at containers.
+
+```
+sudo podman run fedora cat /proc/self/loginuid
+3267
+```
+
+Even the container process retains my loginuid. Now let's try with Docker.
+
+```
+sudo docker run fedora cat /proc/self/loginuid
+4294967295
+```
+
+### Why the difference?
+
+Podman uses a traditional fork/exec model for the container, so the container process is an offspring of the Podman process. Docker uses a client/server model. The **docker** command I executed is the Docker client tool, and it communicates with the Docker daemon via a client/server operation. Then the Docker daemon creates the container and handles communications of stdin/stdout back to the Docker client tool.
+
+The default loginuid of processes (before their loginuid is set) is 4294967295. Since the container is an offspring of the Docker daemon and the Docker daemon is a child of the init system, we see that systemd, the Docker daemon, and the container processes all have the same loginuid, 4294967295, which audit refers to as the unset audit UID.
+
+```
+cat /proc/1/loginuid
+4294967295
+```
+
+### How can this be abused?
+
+Let's look at what would happen if a container process launched by Docker modifies the /etc/shadow file.
+ +``` +$ sudo docker run --privileged -v /:/host fedora touch /host/etc/shadow +$ sudo ausearch -f /etc/shadow -i type=PROCTITLE msg=audit(10/10/2018 10:27:20.055:4569) : proctitle=/usr/bin/coreutils --coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow type=SYSCALL msg=audit(10/10/2018 10:27:20.055:4569) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7ffdb6973f50 a2=O_WRONLY|O_CREAT|O_NOCTTY| O_NONBLOCK a3=0x1b6 items=2 ppid=11863 pid=11882 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=touch exe=/usr/bin/coreutils subj=system_u:system_r:spc_t:s0 key=(null) +``` + +In the Docker case, the auid is unset (4294967295); this means the security officer might know that a process modified the /etc/shadow file but the identity was lost. + +If that attacker then removed the Docker container, there would be no trace on the system of who modified the /etc/shadow file. + +Now let's look at the exact same scenario with Podman. + +``` +$ sudo podman run --privileged -v /:/host fedora touch /host/etc/shadow +$ sudo ausearch -f /etc/shadow -i type=PROCTITLE msg=audit(10/10/2018 10:23:41.659:4530) : proctitle=/usr/bin/coreutils --coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow type=SYSCALL msg=audit(10/10/2018 10:23:41.659:4530) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7fffdffd0f34 a2=O_WRONLY|O_CREAT|O_NOCTTY| O_NONBLOCK a3=0x1b6 items=2 ppid=11671 pid=11683 auid=dwalsh uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=3 comm=touch exe=/usr/bin/coreutils subj=unconfined_u:system_r:spc_t:s0 key=(null) +``` + +Everything is recorded correctly with Podman since it uses traditional fork/exec. + +This was just a simple example of watching the /etc/shadow file, but the auditing system is very powerful for watching what processes do on a system. 
Using a fork/exec container runtime for launching containers (instead of a client/server container runtime) allows you to maintain better security through audit logging.
+
+### Final thoughts
+
+There are many other nice features of the fork/exec model versus the client/server model when launching containers. For example, systemd features include:
+
+  * **SD_NOTIFY:** If you put a Podman command into a systemd unit file, the container process can return notice up the stack through Podman that the service is ready to receive tasks. This is something that can't be done in client/server mode.
+  * **Socket activation:** You can pass down connected sockets from systemd to Podman and onto the container process to use them. This is impossible in the client/server model.
+
+The nicest feature, in my opinion, is **running Podman and containers as a non-root user**. This means you never have to give a user root privileges on the host, while in the client/server model (like the one Docker employs), you must open a socket to a privileged daemon running as root to launch the containers. There you are at the mercy of the security mechanisms implemented in the daemon versus the security mechanisms implemented in the host operating systems—a dangerous proposition.
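The loginuid inheritance at the heart of this comparison can be checked on any Linux host, no container runtime required. A minimal sketch (assumes a Linux host with procfs mounted; the value reads 4294967295 when the session never set a login UID, e.g. in a CI sandbox):

```shell
# A child created via fork/exec reports the same loginuid as its parent,
# which is why Podman's fork/exec model preserves the audit trail.
parent=$(cat /proc/self/loginuid)
child=$(sh -c 'cat /proc/self/loginuid')   # fork/exec a subshell
if [ "$parent" = "$child" ]; then
    echo "loginuid inherited: $parent"
fi
```

Swapping the subshell for a `podman run` of the same `cat` command should print the same value, while the Docker client/server path would not.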
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/podman-more-secure-way-run-containers + +作者:[Daniel J Walsh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rhatdan +[b]: https://github.com/lujun9972 +[1]: https://podman.io diff --git a/sources/tech/20181031 8 creepy commands that haunt the terminal - Opensource.com.md b/sources/tech/20181031 8 creepy commands that haunt the terminal - Opensource.com.md new file mode 100644 index 0000000000..a2e9f1aa2b --- /dev/null +++ b/sources/tech/20181031 8 creepy commands that haunt the terminal - Opensource.com.md @@ -0,0 +1,60 @@ +8 creepy commands that haunt the terminal | Opensource.com +====== + +Welcome to the spookier side of Linux. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/halloween_bag_bat_diy.jpg?itok=24M0lX25) + +It’s that time of year again: The weather gets chilly, the leaves change colors, and kids everywhere transform into tiny ghosts, goblins, and zombies. But did you know that Unix (and Linux) and its various offshoots are also chock-full of creepy crawly things? Let’s take a quick look at some of the spookier aspects of the operating system we all know and love. + +### daemon + +Unix just wouldn’t be the same without all the various daemons that haunt the system. A `daemon` is a process that runs in the background and provides useful services to both the user and the operating system itself. Think SSH, FTP, HTTP, etc. + +### zombie + +Every now and then a zombie, a process that has been killed but refuses to go away, shows up. When this happens, you have no choice but to dispatch it using whatever tools you have available. 
A zombie usually indicates that something is wrong with the process that spawned it.
+
+### kill
+
+Not only can you use the `kill` command to dispatch a zombie, but you can also use it to kill any process that’s adversely affecting your system. Have a process that’s using too much RAM or CPU cycles? Dispatch it with the `kill` command.
+
+### cat
+
+The `cat` command has nothing to do with felines and everything to do with combining files: `cat` is short for "concatenate." You can even use this handy command to view the contents of a file.
+
+### tail
+
+The `tail` command is useful when you want to see the last *n* lines of a file. It’s also great when you want to monitor a file.
+
+### which
+
+No, not that kind of witch, but the command that prints the location of the files associated with any command passed to it. `which python`, for example, will print the locations of every version of Python on your system.
+
+### crypt
+
+The `crypt` command, known these days as `mcrypt`, is handy when you want to scramble (encrypt) the contents of a file so that no one but you can read it. Like most Unix commands, you can use `crypt` standalone or within a system script.
+
+### shred
+
+The `shred` command is handy when you not only want to delete a file but also want to ensure that no one will ever be able to recover it. Using the `rm` command to delete a file isn’t enough; you also need to overwrite the space that the file previously occupied. That’s where `shred` comes in.
+
+These are just a few of the spooky things you’ll find hiding inside Unix. Do you know more creepy commands? Feel free to let me know.
+
+Happy Halloween!
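For the road, here is a tiny sketch of the `shred` behavior mentioned above (assumes GNU coreutils; the file is a throwaway created just for the demonstration):

```shell
# create a throwaway secret, then overwrite and unlink it with shred
secret_file=$(mktemp)
echo "phantom passwords" > "$secret_file"
shred -u "$secret_file"     # -u: deallocate and remove the file after overwriting
[ ! -e "$secret_file" ] && echo "shredded"
```

Note that on journaling or copy-on-write filesystems, `shred`'s overwrite guarantee is weaker, as its own man page warns.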
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/spookier-side-unix-linux + +作者:[Patrick H.Mullins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/pmullins +[b]: https://github.com/lujun9972 diff --git a/sources/tech/20181031 Working with data streams on the Linux command line.md b/sources/tech/20181031 Working with data streams on the Linux command line.md new file mode 100644 index 0000000000..87403558d7 --- /dev/null +++ b/sources/tech/20181031 Working with data streams on the Linux command line.md @@ -0,0 +1,302 @@ +Working with data streams on the Linux command line +====== +Learn to connect data streams from one utility to another using STDIO. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pipe-pipeline-grid.png?itok=kkpzKxKg) + +**Author’s note:** Much of the content in this article is excerpted, with some significant edits to fit the Opensource.com article format, from Chapter 3: Data Streams, of my new book, [The Linux Philosophy for SysAdmins][1]. + +Everything in Linux revolves around streams of data—particularly text streams. Data streams are the raw materials upon which the [GNU Utilities][2], the Linux core utilities, and many other command-line tools perform their work. + +As its name implies, a data stream is a stream of data—especially text data—being passed from one file, device, or program to another using STDIO. This chapter introduces the use of pipes to connect streams of data from one utility program to another using STDIO. You will learn that the function of these programs is to transform the data in some manner. You will also learn about the use of redirection to redirect the data to a file. 
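As a taste of what's to come, here is a small pipeline of transformer programs whose result is redirected to a file (the sample words and file name are invented for the example):

```shell
# generate a stream, transform it through a pipeline, redirect the result
# to a file: count word occurrences, most frequent first
printf 'apple\nbanana\napple\ncherry\nbanana\napple\n' |
    sort | uniq -c | sort -rn > fruit-counts.txt
cat fruit-counts.txt
```

Each stage reads the previous stage's output and transforms it; the final `>` sends the stream to a file instead of the terminal, which is exactly the pattern explored below.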
+ +I use the term “transform” in conjunction with these programs because the primary task of each is to transform the incoming data from STDIO in a specific way as intended by the sysadmin and to send the transformed data to STDOUT for possible use by another transformer program or redirection to a file. + +The standard term, “filters,” implies something with which I don’t agree. By definition, a filter is a device or a tool that removes something, such as an air filter removes airborne contaminants so that the internal combustion engine of your automobile does not grind itself to death on those particulates. In my high school and college chemistry classes, filter paper was used to remove particulates from a liquid. The air filter in my home HVAC system removes particulates that I don’t want to breathe. + +Although they do sometimes filter out unwanted data from a stream, I much prefer the term “transformers” because these utilities do so much more. They can add data to a stream, modify the data in some amazing ways, sort it, rearrange the data in each line, perform operations based on the contents of the data stream, and so much more. Feel free to use whichever term you prefer, but I prefer transformers. I expect that I am alone in this. + +Data streams can be manipulated by inserting transformers into the stream using pipes. Each transformer program is used by the sysadmin to perform some operation on the data in the stream, thus changing its contents in some manner. Redirection can then be used at the end of the pipeline to direct the data stream to a file. As mentioned, that file could be an actual data file on the hard drive, or a device file such as a drive partition, a printer, a terminal, a pseudo-terminal, or any other device connected to a computer. + +The ability to manipulate these data streams using these small yet powerful transformer programs is central to the power of the Linux command-line interface. 
Many of the core utilities are transformer programs and use STDIO. + +In the Unix and Linux worlds, a stream is a flow of text data that originates at some source; the stream may flow to one or more programs that transform it in some way, and then it may be stored in a file or displayed in a terminal session. As a sysadmin, your job is intimately associated with manipulating the creation and flow of these data streams. In this post, we will explore data streams—what they are, how to create them, and a little bit about how to use them. + +### Text streams—a universal interface + +The use of Standard Input/Output (STDIO) for program input and output is a key foundation of the Linux way of doing things. STDIO was first developed for Unix and has found its way into most other operating systems since then, including DOS, Windows, and Linux. + +> “This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.” +> +> — Doug McIlroy, Basics of the Unix Philosophy + +### STDIO + +STDIO was developed by Ken Thompson as a part of the infrastructure required to implement pipes on early versions of Unix. Programs that implement STDIO use standardized file handles for input and output rather than files that are stored on a disk or other recording media. STDIO is best described as a buffered data stream, and its primary function is to stream data from the output of one program, file, or device to the input of another program, file, or device. + +There are three STDIO data streams, each of which is automatically opened as a file at the startup of a program—well, those programs that use STDIO. Each STDIO data stream is associated with a file handle, which is just a set of metadata that describes the attributes of the file. File handles 0, 1, and 2 are explicitly defined by convention and long practice as STDIN, STDOUT, and STDERR, respectively. 
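The three handles can be exercised directly from the shell. In this sketch (file names invented), STDOUT and STDERR are redirected separately, and STDIN is then fed from one of the resulting files:

```shell
# fd 1 (STDOUT) and fd 2 (STDERR) are distinct streams and can be
# redirected independently
{ echo "data stream"; echo "error stream" >&2; } > out.txt 2> err.txt

cat out.txt                # data stream
cat err.txt                # error stream

# fd 0 (STDIN) redirected from a file instead of the keyboard
tr 'a-z' 'A-Z' < out.txt   # DATA STREAM
```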
+
+**STDIN, File handle 0**, is standard input, which is usually input from the keyboard. STDIN can be redirected from any file, including device files, instead of the keyboard. It is not common to need to redirect STDIN, but it can be done.
+
+**STDOUT, File handle 1**, is standard output, which sends the data stream to the display by default. It is common to redirect STDOUT to a file or to pipe it to another program for further processing.
+
+**STDERR, File handle 2**, is standard error. The data stream for STDERR is also usually sent to the display.
+
+If STDOUT is redirected to a file, STDERR continues to be displayed on the screen. This ensures that when the data stream itself is not displayed on the terminal, STDERR is, thus ensuring that the user will see any errors resulting from execution of the program. STDERR can also be redirected to the same file, or passed on to the next transformer program in a pipeline.
+
+STDIO is implemented as a C library, **stdio.h**, which can be included in the source code of programs so that it can be compiled into the resulting executable.
+
+### Simple streams
+
+You can perform the following experiments safely in the **/tmp** directory of your Linux host. As the root user, make **/tmp** the PWD, create a test directory, and then make the new directory the PWD.
+
+```
+# cd /tmp ; mkdir test ; cd test
+```
+
+Enter and run the following command line program to create some files with content on the drive. We use the `dmesg` command simply to provide data for the files to contain. The contents don’t matter as much as just the fact that each file has some content.
+
+```
+# for I in 0 1 2 3 4 5 6 7 8 9 ; do dmesg > file$I.txt ; done
+```
+
+Verify that there are now at least 10 files in **/tmp/test** with the names **file0.txt** through **file9.txt**.
+ +``` +# ll +total 1320 +-rw-r--r-- 1 root root 131402 Oct 17 15:50 file0.txt +-rw-r--r-- 1 root root 131402 Oct 17 15:50 file1.txt +-rw-r--r-- 1 root root 131402 Oct 17 15:50 file2.txt +-rw-r--r-- 1 root root 131402 Oct 17 15:50 file3.txt +-rw-r--r-- 1 root root 131402 Oct 17 15:50 file4.txt +-rw-r--r-- 1 root root 131402 Oct 17 15:50 file5.txt +-rw-r--r-- 1 root root 131402 Oct 17 15:50 file6.txt +-rw-r--r-- 1 root root 131402 Oct 17 15:50 file7.txt +-rw-r--r-- 1 root root 131402 Oct 17 15:50 file8.txt +-rw-r--r-- 1 root root 131402 Oct 17 15:50 file9.txt +``` + +We have generated data streams using the `dmesg` command, which was redirected to a series of files. Most of the core utilities use STDIO as their output stream and those that generate data streams, rather than acting to transform the data stream in some way, can be used to create the data streams that we will use for our experiments. Data streams can be as short as one line or even a single character, and as long as needed. + +### Exploring the hard drive + +It is now time to do a little exploring. In this experiment, we will look at some of the filesystem structures. + +Let’s start with something simple. You should be at least somewhat familiar with the `dd` command. Officially known as “disk dump,” many sysadmins call it “disk destroyer” for good reason. Many of us have inadvertently destroyed the contents of an entire hard drive or partition using the `dd` command. That is why we will hang out in the **/tmp/test** directory to perform some of these experiments. + +Despite its reputation, `dd` can be quite useful in exploring various types of storage media, hard drives, and partitions. We will also use it as a tool to explore other aspects of Linux. + +Log into a terminal session as root if you are not already. We first need to determine the device special file for your hard drive using the `lsblk` command. 
+ +``` +[root@studentvm1 test]# lsblk -i +NAME                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT +sda                                    8:0    0   60G  0 disk +|-sda1                                 8:1    0    1G  0 part /boot +`-sda2                                 8:2    0   59G  0 part +  |-fedora_studentvm1-pool00_tmeta   253:0    0    4M  0 lvm   +  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm   +  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  / +  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm   +  |-fedora_studentvm1-pool00_tdata   253:1    0    2G  0 lvm   +  | `-fedora_studentvm1-pool00-tpool 253:2    0    2G  0 lvm   +  |   |-fedora_studentvm1-root       253:3    0    2G  0 lvm  / +  |   `-fedora_studentvm1-pool00     253:6    0    2G  0 lvm   +  |-fedora_studentvm1-swap           253:4    0   10G  0 lvm  [SWAP] +  |-fedora_studentvm1-usr            253:5    0   15G  0 lvm  /usr +  |-fedora_studentvm1-home           253:7    0    2G  0 lvm  /home +  |-fedora_studentvm1-var            253:8    0   10G  0 lvm  /var +  `-fedora_studentvm1-tmp            253:9    0    5G  0 lvm  /tmp +sr0                                   11:0    1 1024M  0 rom +``` + +We can see from this that there is only one hard drive on this host, that the device special file associated with it is **/dev/sda** , and that it has two partitions. The **/dev/sda1** partition is the boot partition, and the **/dev/sda2** partition contains a volume group on which the rest of the host’s logical volumes have been created. + +As root in the terminal session, use the `dd` command to view the boot record of the hard drive, assuming it is assigned to the **/dev/sda** device. The `bs=` argument is not what you might think; it simply specifies the block size, and the `count=` argument specifies the number of blocks to dump to STDIO. The `if=` argument specifies the source of the data stream, in this case, the **/dev/sda** device. 
Notice that we are not looking at the first block of the partition; we are looking at the very first block of the hard drive.
+
+```
+[root@studentvm1 test]# dd if=/dev/sda bs=512 count=1
+�c�#�м���؎���|�#�#���!#��8#u
+                            ��#���u��#�#�#�|���t#�L#�#�|���#�����€t��pt#���y|1��؎м ��d|<�t#��R�|1��D#@�D��D#�##f�#\|f�f�#`|f�\
+                                      �D#p�B�#r�p�#�K`#�#��1��������#a`���#f��u#����f1�f�TCPAf�#f�#a�&Z|�#}�#�.}�4�3}�.�#��GRUB GeomHard DiskRead Error
+�#��#�
+```
+
+The greater-than ( > ) character, aka “gt”, is the syntactical symbol for redirection of STDOUT.
+
+Redirecting the STDOUT of a command can be used to create a file containing the results from that command.
+
+```
+[student@studentvm1 ~]$ df -h > diskusage.txt
+```
+
+There is no output to the terminal from this command unless there is an error. This is because the STDOUT data stream is redirected to the file and STDERR is still directed to the STDOUT device, which is the display. You can view the contents of the file you just created using this next command:
+
+```
+[student@studentvm1 test]# cat diskusage.txt
+Filesystem                          Size  Used Avail Use% Mounted on
+devtmpfs                            2.0G     0  2.0G   0% /dev
+tmpfs                               2.0G     0  2.0G   0% /dev/shm
+tmpfs                               2.0G  1.2M  2.0G   1% /run
+tmpfs                               2.0G     0  2.0G   0% /sys/fs/cgroup
+/dev/mapper/fedora_studentvm1-root  2.0G   50M  1.8G   3% /
+/dev/mapper/fedora_studentvm1-usr    15G  4.5G  9.5G  33% /usr
+/dev/mapper/fedora_studentvm1-var   9.8G  1.1G  8.2G  12% /var
+/dev/mapper/fedora_studentvm1-tmp   4.9G   21M  4.6G   1% /tmp
+/dev/mapper/fedora_studentvm1-home  2.0G  7.2M  1.8G   1% /home
+/dev/sda1                           976M  221M  689M  25% /boot
+tmpfs                               395M     0  395M   0% /run/user/0
+tmpfs                               395M   12K  395M   1% /run/user/1000
+```
+
+When using the >
symbol to redirect the data stream, the specified file is created if it does not already exist. If it does exist, the contents are overwritten by the data stream from the command. You can use double greater-than symbols, >>, to append the new data stream to any existing content in the file. + +``` +[student@studentvm1 ~]$ df -h >> diskusage.txt +``` + +You can use `cat` and/or `less` to view the **diskusage.txt** file in order to verify that the new data was appended to the end of the file. + +The < (less than) symbol redirects data to the STDIN of the program. You might want to use this method to input data from a file to STDIN of a command that does not take a filename as an argument but that does use STDIN. Although input sources can be redirected to STDIN, such as a file that is used as input to grep, it is generally not necessary as grep also takes a filename as an argument to specify the input source. Most other commands also take a filename as an argument for their input source. + +### Just grep’ing around + +The `grep` command is used to select lines that match a specified pattern from a stream of data. `grep` is one of the most commonly used transformer utilities and can be used in some very creative and interesting ways. The `grep` command is one of the few that can correctly be called a filter because it does filter out all the lines of the data stream that you do not want; it leaves only the lines that you do want in the remaining data stream. + +If the PWD is not the **/tmp/test** directory, make it so. Let’s first create a stream of random data to store in a file. In this case, we want somewhat less random data that would be limited to printable characters. A good password generator program can do this. The following program (you may have to install `pwgen` if it is not already) creates a file that contains 50,000 passwords that are 80 characters long using every printable character. 
Try it without redirecting to the **random.txt** file first to see what that looks like, and then do it once redirecting the output data stream to the file. + +``` +$ pwgen -sy 80 50000 > random.txt +``` + +Considering that there are so many passwords, it is very likely that some character strings in them are the same. First, `cat` the **random.txt** file, then use the `grep` command to locate some short, randomly selected strings from the last ten passwords on the screen. I saw the word “see” in one of those ten passwords, so my command looked like this: `grep see random.txt`, and you can try that, but you should also pick some strings of your own to check. Short strings of two to four characters work best. + +``` +$ grep see random.txt +        R=p)'s/~0}wr~2(OqaL.S7DNyxlmO69`"12u]h@rp[D2%3}1b87+>Vk,;4a0hX]d7see;1%9|wMp6Yl. +        bSM_mt_hPy|YZ1NU@[;zV2-see)>(BSK~n5mmb9~h)yx{a&$_e +        cjR1QWZwEgl48[3i-(^x9D=v)seeYT2R#M:>wDh?Tn$]HZU7}j!7bIiIr^cI.DI)W0D"'vZU@.Kxd1E1 +        z=tXcjVv^G\nW`,y=bED]d|7%s6iYT^a^Bvsee:v\UmWT02|P|nq%A*;+Ng[$S%*s)-ls"dUfo|0P5+n +``` + +### Summary + +It is the use of pipes and redirection that allows many of the amazing and powerful tasks that can be performed with data streams on the Linux command line. It is pipes that transport STDIO data streams from one program or file to another. The ability to pipe streams of data through one or more transformer programs supports powerful and flexible manipulation of data in those streams. + +Each of the programs in the pipelines demonstrated in the experiments is small, and each does one thing well. They are also transformers; that is, they take Standard Input, process it in some way, and then send the result to Standard Output. Implementation of these programs as transformers to send processed data streams from their own Standard Output to the Standard Input of the other programs is complementary to, and necessary for, the implementation of pipes as a Linux tool. 
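The transformer pattern from the summary can be shown in one line: each stage is a small program that reads STDIN, transforms the stream, and writes STDOUT (the sample words are invented):

```shell
# grep filters the stream, keeping only matching lines;
# wc -l counts what survives the filter
printf 'daemon\nzombie\ndata\n' | grep '^da' | wc -l   # 2
```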
+ +STDIO is nothing more than streams of data. This data can be almost anything from the output of a command to list the files in a directory, or an unending stream of data from a special device like **/dev/urandom** , or even a stream that contains all of the raw data from a hard drive or a partition. + +Any device on a Linux computer can be treated like a data stream. You can use ordinary tools like `dd` and `cat` to dump data from a device into a STDIO data stream that can be processed using other ordinary Linux tools. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/linux-data-streams + +作者:[David Both][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[b]: https://github.com/lujun9972 +[1]: https://www.apress.com/us/book/9781484237298 +[2]: https://www.gnu.org/software/coreutils/coreutils.html +[3]: https://www.princeton.edu/~hos/mike/transcripts/mcilroy.htm diff --git a/translated/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md b/translated/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md new file mode 100644 index 0000000000..c236b5fef4 --- /dev/null +++ b/translated/talk/20180308 20 questions DevOps job candidates should be prepared to answer.md @@ -0,0 +1,92 @@ +DevOps应聘者应该准备回答的20个问题 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hire-job-career.png?itok=SrZo0QJ3) +聘请一个不合适的人代价是很高的。根据Link人力资源的首席执行官Jörgen Sundberg的统计,招聘,雇佣一名新员工将会花费公司$240,000之多,当你进行了一次不合适的招聘: + * 你失去了他们所知道的。 + * 你失去了他们认识的人 + * 你的团队将可能进入到一个组织发展的震荡阶段 + * 你的公司将会面临组织破裂的风险 + +当你失去一名员工的时候,你就像丢失了公司图谱中的一块。同样值得一提的是另一端的疼痛。应聘到一个错误工作岗位的员工会感受到很大的压力以及整个身心的不满意,甚至是健康问题。 +另外一方面,当你招聘到合适的人时,新的员工将会: + * 
丰富公司现有的文化,使你的组织成为一个更好的工作场所。研究表明,积极的工作文化有助于带来更长久的财务业绩,而且如果你在一个愉快的环境中工作,你更有可能在生活中做得更好。
+ * 热爱和你的组织在一起工作。当人们热爱他们所做的事情,他们会趋向于做得更好。
+
+在 DevOps 和敏捷团队中,招聘适合现有文化或者能加强现有文化的人是必不可少的。也就是说,要雇佣能够鼓励积极合作的人,以便来自不同背景、有着不同目标和工作方式的团队能够在一起有效地工作。你新雇佣的员工应该能够帮助团队通力合作,充分发挥并放大他们的价值,同时也能够提高员工的满意度,平衡相互冲突的组织目标。他或者她应该能够通过明智地选择工具和工作流来促进你的组织发展,文化就是一切。
+
+继我们 2017 年 11 月发布的文章《[DevOps 的招聘经理应该准备回答的 20 个问题][4]》之后,本文将会重点关注如何招聘最合适的人。
+### 为什么招聘走错了方向
+很多公司现在在用的典型的雇佣策略是建立在人才过剩的基础上的:
+
+ * 发布到职位公告栏。
+ * 关注和所需才能相符的应聘者。
+ * 尽可能多地寻找候选者。
+ * 通过面试淘汰弱者。
+ * 通过正式的面试淘汰更多的弱者。
+ * 评估、投票、选择。
+ * 商定薪酬。
+
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/hiring_graphic.png?itok=1udGbkhB)
+
+职位公告栏是在经济大萧条时期发明的,当时有成千上万的失业者,人才过剩。而在今天的求职市场上已经没有人才过剩了,然而我们仍然在使用基于人才过剩的招聘策略。
+![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/732px-unemployed_men_queued_outside_a_depression_soup_kitchen_opened_in_chicago_by_al_capone_02-1931_-_nara_-_541927.jpg?itok=HSs4NjCN)
+
+### 雇佣最合适的人员:运用文化和情感
+人才过剩雇佣策略背后的思想是先设计好工作岗位,然后将人员安排进去。
+相反,应该做相反的事情:先寻找将会积极融入你的商业文化的人才,然后为他们寻找他们热爱的最合适的岗位。要想实现这一点,你必须能够围绕他们的热情为他们创造工作岗位。
+**谁正在寻找一份工作?** 一份 2016 年对美国 50,000 名开发者的调查显示,[85.7% 的受访对象][5]要么对新的机会不感兴趣,要么没有在积极寻找新工作。而在寻找工作的那部分人中,有将近 [28.3% 的求职者][5]来自于朋友的推荐。如果你只是在那些正在找工作的人中寻找人才,你将会错过顶尖的人才。
+**运用团队的力量去发现和寻找有潜力的雇员**。例如,戴安娜是你的团队中的一名开发者,她从事编程已经很多年,期间结识了很多热爱自己工作的人。难道你不认为她所推荐的潜在员工在技能、知识和智慧上要比 HR 所寻找的更优秀吗?在请戴安娜推荐她的同伴之前,告知她即将开展的任务,向她阐明你要雇佣有探索精神的团队成员,并描述将来会需要的知识领域。
+**雇员想要什么?** 一份对比千禧一代和婴儿潮一代雇员的综合性研究显示,20% 的人所想要的是相同的:
+ 1. 对组织产生积极的影响
+ 2. 帮助解决社会或者环境上的挑战
+ 3. 和一群有动力的人一起工作
+
+### 面试的挑战
+面试应该是招聘者和应聘者为了寻找最合适的人选而进行的一次双向对话。将面试聚焦在企业文化和情感这两个问题上:这个应聘者将会丰富你的企业文化并且会热爱和你在一起工作吗?你能够在工作中帮助他们取得成功吗?
+**对于招聘经理来说:** 每一次面试都是一个机会,让你学习如何使自己的组织对未来的团队成员更有吸引力,而且每次积极的面试都可能是你发现人才(即使你不会雇佣他们)的机会。每个人都会记得积极有效的面试经历。即使他们不被雇佣,他们也会和朋友谈论这次经历,你会因此得到更多被推荐的人选。这有很大的好处:如果你无法吸引到这个人才,你也将会从中吸取经验并加以改善。
+**对面试者来说**:每次面试都是你释放激情的机会。
+
+### 助你释放潜在雇员激情的 20 个问题
+ 1. 你热爱什么?
+ 2. "今天早晨我已经迫不及待地要去工作了。"你怎么看待这句话?
+ 3. 你曾经最快乐的是什么时候?
+ 4. 你解决问题的最典型的例子是什么?你是如何解决的?
+ 5. 你如何看待结对学习?
+ 6. 你到达办公室时和离开办公室时,心里最先想到的是什么?
+ 7. 如果你有一次机会去改变你之前或者现在工作中的一件事,那将会是什么事?
+ 8. 在这里工作时,你最想学习什么?
+ 9. 你的梦想是什么?你打算如何去实现它?
+ 10. 在学习如何实现你的追求时,你想要或者需要什么?
+ 11. 你的价值观是什么?
+ 12. 你是如何坚守自己的价值观的?
+ 13. 平衡在你的生活中意味着什么?
+ 14. 你最引以为傲的工作交流方式是什么?为什么?
+ 15. 你最喜欢营造什么样的环境?
+ 16. 你喜欢别人怎样对待你?
+ 17. 你在什么方面信任我们?如何验证?
+ 18. 告诉我们你在最近的一个项目中学习到了什么。
+ 19. 我们还能了解你的哪些其他方面?
+ 20. 如果你正在雇佣我,你将会问我什么问题?
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/3/questions-devops-employees-should-answer
+
+作者:[Catherine Louis][a]
+译者:[FelixYFZ](https://github.com/FelixYFZ)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://opensource.com/users/catherinelouis
+[1]:https://www.shrm.org/resourcesandtools/hr-topics/employee-relations/pages/cost-of-bad-hires.aspx
+[2]:https://en.wikipedia.org/wiki/Tuckman%27s_stages_of_group_development
+[3]:http://www.forbes.com/sites/johnkotter/2011/02/10/does-corporate-culture-drive-financial-performance/
+[4]:https://opensource.com/article/17/11/inclusive-workforce-takes-work
+[5]:https://insights.stackoverflow.com/survey/2016#work-job-discovery
+[6]:https://research.hackerrank.com/developer-skills/2018/
+[7]:http://www-935.ibm.com/services/us/gbs/thoughtleadership/millennialworkplace/
+[8]:https://en.wikipedia.org/wiki/Emotional_intelligence
diff --git a/translated/talk/20181019 What is an SRE and how does it relate to DevOps.md b/translated/talk/20181019 What is an SRE and how does it relate to DevOps.md
new file mode 100644
index 0000000000..80700d6fb9
--- /dev/null
+++ b/translated/talk/20181019 What is an SRE and how does it relate to DevOps.md
@@ -0,0 +1,68 @@
+什么是 SRE?它和 DevOps 是怎么关联的?
+===== + +大型企业里 SRE 角色比较常见,不过小公司也需要 SRE。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP) + +虽然站点可靠性工程师(SRE)角色在近几年变得流行起来,但是很多人 —— 甚至是软件行业里的 —— 还不知道 SRE 是什么或者 SRE 都干些什么。为了搞清楚这些问题,这篇文章解释了 SRE 的含义,还有 SRE 怎样关联 DevOps,以及在工程师团队规模不大的组织里 SRE 该如何工作。 + +### 什么是站点可靠性工程? + +谷歌的几个工程师写的《 [SRE:谷歌运维解密][1]》被认为是站点可靠性工程的权威书籍。谷歌的工程副总裁 Ben Treynor Sloss 在二十一世纪初[创造了这个术语][2]。他是这样定义的:“当你让软件工程师设计运维功能时,SRE 就产生了。” + +虽然系统管理员从很久之前就在写代码,但是过去的很多时候系统管理团队是手动管理机器的。当时他们管理的机器可能有几十台或者上百台,不过当这个数字涨到了几千甚至几十万的时候,就不能简单的靠人去解决问题了。规模如此大的情况下,很明显应该用代码去管理机器(以及机器上运行的软件)。 + +另外,一直到近几年,运维团队和开发团队都还是完全独立的。两个岗位的技能要求也被认为是完全不同的。SRE 的角色想尝试把这两份工作结合起来。 + +在深入探讨什么是 SRE 以及 SRE 如何和开发团队协作之前,我们需要先了解一下 SRE 在 DevOps 范例中是怎么工作的。 + +### SRE 和 DevOps + +站点可靠性工程的核心,就是对 DevOps 范例的实践。[DevOps 的定义][3]有很多种方式。开发团队(“devs”)和运维(“ops”)团队相互分离的传统模式下,写代码的团队在服务交付给用户使用之后就不再对服务状态负责了。开发团队“把代码扔到墙那边”让运维团队去部署和支持。 + +这种情况会导致大量失衡。开发和运维的目标总是不一致 —— 开发希望用户体验到“最新最棒”的代码,但是运维想要的是变更尽量少的稳定系统。运维是这样假定的,任何变更都可能引发不稳定,而不做任何变更的系统可以一直保持稳定。(减少软件的变更次数并不是避免故障的唯一因素,认识到这一点很重要。例如,虽然你的 web 应用保持不变,但是当用户数量涨到十倍时,服务可能就会以各种方式出问题。) + +DevOps 理念认为通过合并这两个岗位就能够消灭争论。如果开发团队时刻都想把新代码部署上线,那么他们也必须对新代码引起的故障负责。就像亚马逊的 [Werner Vogels 说的][4]那样,“谁开发,谁运维”(生产环境)。但是开发人员已经有一大堆问题了。他们不断的被推动着去开发老板要的产品功能。再让他们去了解基础设施,包括如何部署、配置还有监控服务,这对他们的要求有点太多了。所以就需要 SRE 了。 + +开发一个 web 应用的时候经常是很多人一起参与。有用户界面设计师,图形设计师,前端工程师,后端工程师,还有许多其他工种(视技术选型的具体情况而定)。如何管理写好的代码也是需求之一(例如部署,配置,监控)—— 这是 SRE 的专业领域。但是,就像前端工程师受益于后端领域的知识一样(例如从数据库获取数据的方法),SRE 理解部署系统的工作原理,知道如何满足特定的代码或者项目的具体需求。 + +所以 SRE 不仅仅是“写代码的运维工程师”。相反,SRE 是开发团队的成员,他们有着不同的技能,特别是在发布部署、配置管理、监控、指标等方面。但是,就像前端工程师必须知道如何从数据库中获取数据一样,SRE 也不是只负责这些领域。为了提供更容易升级、管理和监控的产品,整个团队共同努力。 + +当一个团队在做 DevOps 实践,但是他们意识到对开发的要求太多了,过去由运维团队做的事情,现在需要一个专家来专门处理。这个时候,对 SRE 的需求很自然地就出现了。 + +### SRE 在初创公司怎么工作 + +如果你们公司有好几百位员工,那是非常好的(如果到了 Google 和 Facebook 的规模就更不用说了)。大公司的 SRE 团队分散在各个开发团队里。但是一个初创公司没有这种规模经济,工程师经常身兼数职。那么小公司该让谁做 SRE 呢?其中一种方案是完全践行 DevOps,那些大公司里属于 SRE 的典型任务,在小公司就让开发者去负责。另一种方案,则是聘请专家 —— 也就是 SRE。 + +让开发人员做 
SRE 最显著的优点是,团队规模变大的时候也能很好的扩展。而且,开发人员将会全面地了解应用的特性。但是,许多初创公司的基础设施包含了各种各样的 SaaS 产品,这种多样性在基础设施上体现的最明显,因为连基础设施本身也是多种多样。然后你们在某个基础设施上引入指标系统、站点监控、日志分析、容器等等。这些技术解决了一部分问题,也增加了复杂度。开发人员除了要了解应用程序的核心技术(比如开发语言),还要了解上述所有技术和服务。最终,掌握所有的这些技术让人无法承受。 + +另一种方案是聘请专家专职做 SRE。他们专注于发布部署、配置管理、监控和指标,可以节省开发人员的时间。这种方案的缺点是,SRE 的时间必须分配给多个不同的应用(就是说 SRE 需要贯穿整个工程部门)。 这可能意味着 SRE 没时间对任何应用深入学习,然而他们可以站在一个能看到服务全貌的高度,知道各个部分是怎么组合在一起的。 这个“ 三万英尺高的视角”可以帮助 SRE 从系统整体上考虑,哪些薄弱环节需要优先修复。 + +有一个关键信息我还没提到:其他的工程师。他们可能很渴望了解发布部署的原理,也很想尽全力学会使用指标系统。而且,雇一个 SRE 可不是一件简单的事儿。因为你要找的是一个既懂系统管理又懂软件工程的人。(我之所以明确地说软件工程而不是说“能写代码”,是因为除了写代码之外软件工程还包括很多东西,比如编写良好的测试或文档。) + +因此,在某些情况下让开发人员做 SRE 可能更合理一些。如果这样做了,得同时关注代码和基础设施(购买 SaaS 或内部自建)的复杂程度。这两边的复杂性,有时候能促进专业化。 + +### 总结 + +在初创公司做 DevOps 实践最有效的方式是组建 SRE 小组。我见过一些不同的方案,但是我相信初创公司(尽早)招聘专职 SRE 可以解放开发人员,让开发人员专注于特定的挑战。SRE 可以把精力放在改善工具(流程)上,以提高开发人员的生产力。不仅如此,SRE 还专注于确保交付给客户的产品是可靠且安全的。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/sre-startup + +作者:[Craig Sebenik][a] +选题:[lujun9972][b] +译者:[BeliteX](https://github.com/belitex) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/craig5 +[b]: https://github.com/lujun9972 +[1]: http://shop.oreilly.com/product/0636920041528.do +[2]: https://landing.google.com/sre/interview/ben-treynor.html +[3]: https://opensource.com/resources/devops +[4]: https://queue.acm.org/detail.cfm?id=1142065 +[5]: https://www.usenix.org/conference/lisa18/presentation/sebenik +[6]: https://www.usenix.org/conference/lisa18 diff --git a/translated/tech/20171214 Peeking into your Linux packages.md b/translated/tech/20171214 Peeking into your Linux packages.md deleted file mode 100644 index ed7dca5ce1..0000000000 --- a/translated/tech/20171214 Peeking into your Linux packages.md +++ /dev/null @@ -1,130 +0,0 @@ -探秘你的Linux软件包 -====== -你有没有想过你的 Linux 系统上安装了多少千个软件包? 
是的,我说的是“千”。 即使是相当一般的 Linux 系统也可能安装了超过一千个软件包。 有很多方法可以获得这些包到底是什么包的详细信息。 - -首先,要在基于 Debian 的发行版(如 Ubuntu)上快速得到已安装的软件包数量,请使用 **apt list --installed**, 如下: - -``` -$ apt list --installed | wc -l -2067 - -``` - -这个数字实际上多了一个,因为输出中包含了 “Listing ...” 作为它的第一行。 这个命令会更准确: - -``` -$ apt list --installed | grep -v "^Listing" | wc -l -2066 - -``` - -要获得所有这些包的详细信息,请按以下方式浏览列表: - -``` -$ apt list --installed | more -Listing... -a11y-profile-manager-indicator/xenial,now 0.1.10-0ubuntu3 amd64 [installed] -account-plugin-aim/xenial,now 3.12.11-0ubuntu3 amd64 [installed] -account-plugin-facebook/xenial,xenial,now 0.12+16.04.20160126-0ubuntu1 all [installed] -account-plugin-flickr/xenial,xenial,now 0.12+16.04.20160126-0ubuntu1 all [installed] -account-plugin-google/xenial,xenial,now 0.12+16.04.20160126-0ubuntu1 all [installed] -account-plugin-jabber/xenial,now 3.12.11-0ubuntu3 amd64 [installed] -account-plugin-salut/xenial,now 3.12.11-0ubuntu3 amd64 [installed] - -``` - -这需要观察很多细节--特别是让你的眼睛在所有 2000 多个文件中徘徊。 它包含包名称,版本等,但不是我们人类解析的最简单的信息显示。 dpkg-query 使得描述更容易理解,但这些描述塞满你的命令窗口,除非窗口非常宽。 因此,为了让此篇文章更容易阅读,下面的数据显示已经分成了左右两侧。 - -左侧: -``` -$ dpkg-query -l | more -Desired=Unknown/Install/Remove/Purge/Hold -| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend -|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) -||/ Name Version -+++-==============================================-=================================- -ii a11y-profile-manager-indicator 0.1.10-0ubuntu3 -ii account-plugin-aim 3.12.11-0ubuntu3 -ii account-plugin-facebook 0.12+16.04.20160126-0ubuntu1 -ii account-plugin-flickr 0.12+16.04.20160126-0ubuntu1 -ii account-plugin-google 0.12+16.04.20160126-0ubuntu1 -ii account-plugin-jabber 3.12.11-0ubuntu3 -ii account-plugin-salut 3.12.11-0ubuntu3 -ii account-plugin-twitter 0.12+16.04.20160126-0ubuntu1 -rc account-plugin-windows-live 0.11+14.04.20140409.1-0ubuntu2 - -``` - -右侧: -``` -Architecture Description 
-============-===================================================================== -amd64 Accessibility Profile Manager - Unity desktop indicator -amd64 Messaging account plugin for AIM -all GNOME Control Center account plugin for single signon - facebook -all GNOME Control Center account plugin for single signon - flickr -all GNOME Control Center account plugin for single signon -amd64 Messaging account plugin for Jabber/XMPP -amd64 Messaging account plugin for Local XMPP (Salut) -all GNOME Control Center account plugin for single signon - twitter -all GNOME Control Center account plugin for single signon - windows live - -``` - -每行开头的 “ii” 和 “rc” 名称(见上文“左侧”)是包状态指示符。 第一个字母表示包的理想状态: - -``` -u -- unknown -i -- install -r -- remove/deinstall -p -- purge (remove including config files) -h -- hold - -``` - -第二个代表包的当前状态: - -``` -n -- not-installed -i -- installed -c -- config-files (only the config files are installed) -U -- unpacked -F -- half-configured (the configuration failed for some reason) -h -- half-installed (installation failed for some reason) -W -- triggers-awaited (the package is waiting for a trigger from another package) -t -- triggers-pending (the package has been triggered) - -``` - -在通常的双字符字段末尾添加的 “R” 表示需要重新安装。 你可能永远不会碰到这些。 - -快速查看整体包状态的一种简单方法是计算在不同状态中包含的包的数量: - -``` -$ dpkg-query -l | tail -n +6 | awk '{print $1}' | sort | uniq -c - 2066 ii - 134 rc - -``` - -我从上面的 dpkg-query 输出中排除了前五行,因为这些是标题行,会混淆输出。 - -这两行基本上告诉我们,在这个系统上,应该安装了 2066 个软件包,而 134 个其他的软件包已被删除,但已经留下了配置文件。 你始终可以使用以下命令删除程序包的剩余配置文件: - -``` -$ sudo dpkg --purge xfont-mathml -``` - -请注意,如果程序包二进制文件和配置文件都已经安装了,则上面的命令将两者都删除。 - --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3242808/linux/peeking-into-your-linux-packages.html - -作者:[Sandra Henry-Stocker][a] -译者:[Flowsnow](https://github.com/Flowsnow) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ \ No newline at end of file diff --git a/translated/tech/20180716 How To Find The Mounted Filesystem Type In Linux.md b/translated/tech/20180716 How To Find The Mounted Filesystem Type In Linux.md new file mode 100644 index 0000000000..481a48ea3b --- /dev/null +++ b/translated/tech/20180716 How To Find The Mounted Filesystem Type In Linux.md @@ -0,0 +1,236 @@ +如何在 Linux 中查看已挂载的文件系统类型 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/07/filesystem-720x340.png) + +如你所知,Linux 支持非常多的文件系统,例如 Ext4、ext3、ext2、sysfs、securityfs、FAT16、FAT32、NTFS 等等,当前被使用最多的文件系统是 Ext4。你曾经疑惑过你的 Linux 系统使用的是什么类型的文件系统吗?没有疑惑过?不用担心!我们将帮助你。本指南将解释如何在类 Unix 的操作系统中查看已挂载的文件系统类型。 + +### 在 Linux 中查看已挂载的文件系统类型 + +有很多种方法可以在 Linux 中查看已挂载的文件系统类型,下面我将给出 8 种不同的方法。那现在就让我们开始吧! + +#### 方法 1 – 使用 `findmnt` 命令 + +这是查出文件系统类型最常使用的方法。**findmnt** 命令将列出所有已挂载的文件系统或者搜索出某个文件系统。`findmnt` 命令能够在 `/etc/fstab`、`/etc/mtab` 或 `/proc/self/mountinfo` 这几个文件中进行搜索。 + +`findmnt` 预装在大多数的 Linux 发行版中,因为它是 **util-linux** 包的一部分。为了防止 `findmnt` 命令不可用,你可以安装这个软件包。例如,你可以使用下面的命令在基于 Debian 的系统中安装 **util-linux** 包: +``` +$ sudo apt install util-linux +``` + +下面让我们继续看看如何使用 `findmnt` 来找出已挂载的文件系统。 + +假如你只敲 `findmnt` 命令而不带任何的参数或选项,它将像下面展示的那样以树状图形式列举出所有已挂载的文件系统。 +``` +$ findmnt +``` + +**示例输出:** + +![][2] + +正如你看到的那样,`findmnt` 展示出了目标挂载点(TARGET)、源设备(SOURCE)、文件系统类型(FSTYPE)以及相关的挂载选项(OPTIONS),例如文件系统是否是可读可写或者只读的。以我的系统为例,我的根(`/`)文件系统的类型是 EXT4 。 + +假如你不想以树状图的形式来展示输出,可以使用 **-l** 选项来以简单平凡的形式来展示输出: +``` +$ findmnt -l +``` + +![][3] + +你还可以使用 **-t** 选项来列举出特定类型的文件系统,例如下面展示的 **ext4** 文件系统类型: +``` +$ findmnt -t ext4 +TARGET SOURCE FSTYPE OPTIONS +/ /dev/sda2 ext4 rw,relatime,commit=360 +└─/boot /dev/sda1 ext4 rw,relatime,commit=360,data=ordered +``` + +`findmnt` 还可以生成 `df` 类型的输出,使用命令 +``` +$ findmnt --df +``` +或 +``` +$ findmnt -D +``` + +**示例输出:** + +``` +SOURCE FSTYPE SIZE USED AVAIL USE% TARGET +dev devtmpfs 3.9G 0 3.9G 0% /dev +run tmpfs 3.9G 
1.1M 3.9G 0% /run +/dev/sda2 ext4 456.3G 342.5G 90.6G 75% / +tmpfs tmpfs 3.9G 32.2M 3.8G 1% /dev/shm +tmpfs tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup +bpf bpf 0 0 0 - /sys/fs/bpf +tmpfs tmpfs 3.9G 8.4M 3.9G 0% /tmp +/dev/loop0 squashfs 82.1M 82.1M 0 100% /var/lib/snapd/snap/core/4327 +/dev/sda1 ext4 92.8M 55.7M 30.1M 60% /boot +tmpfs tmpfs 788.8M 32K 788.8M 0% /run/user/1000 +gvfsd-fuse fuse.gvfsd-fuse 0 0 0 - /run/user/1000/gvfs +``` + +你还可以展示某个特定设备或者挂载点的文件系统类型。 + +查看某个特定的设备: +``` +$ findmnt /dev/sda1 +TARGET SOURCE FSTYPE OPTIONS +/boot /dev/sda1 ext4 rw,relatime,commit=360,data=ordered +``` + +查看某个特定的挂载点: +``` +$ findmnt / +TARGET SOURCE FSTYPE OPTIONS +/ /dev/sda2 ext4 rw,relatime,commit=360 +``` + +你甚至还可以查看某个特定标签的文件系统的类型: +``` +$ findmnt LABEL=Storage +``` + +更多详情,请参考其 man 手册。 +``` +$ man findmnt +``` + +`findmnt` 命令已足够完成在 Linux 中查看已挂载文件系统类型的任务,这个命令就是为了这个特定任务而生的。然而,还存在其他方法来查看文件系统的类型,假如你感兴趣的话,请接着让下看。 + +#### 方法 2 – 使用 `blkid` 命令 + +**blkid** 命令被用来查找和打印块设备的属性。它也是 **util-linux** 包的一部分,所以你不必再安装它。 + +为了使用 `blkid` 命令来查看某个文件系统的类型,可以运行: +``` +$ blkid /dev/sda1 +``` + +#### 方法 3 – 使用 `df` 命令 + +在类 Unix 的操作系统中, **df** 命令被用来报告文件系统的磁盘空间使用情况。为了查看所有已挂载文件系统的类型,只需要运行: +``` +$ df -T +``` + +**示例输出:** + +![][4] + +关于 `df` 命令的更多细节,可以参考下面的指南。 + +- [针对新手的 df 命令教程](https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/) + +同样也可以参考其 man 手册: +``` +$ man df +``` + +#### 方法 4 – 使用 `file` 命令 + +**file** 命令可以判读出某个特定文件的类型,即便该文件没有文件后缀名也同样适用。 + +运行下面的命令来找出某个特定分区的文件系统类型: +``` +$ sudo file -sL /dev/sda1 +[sudo] password for sk: +/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=83a1dbbf-1e15-4b45-94fe-134d3872af96 (needs journal recovery) (extents) (large files) (huge files) +``` + +查看其 man 手册可以知晓更多细节: +``` +$ man file +``` + +#### 方法 5 – 使用 `fsck` 命令 + +**fsck** 命令被用来检查某个文件系统是否健全或者修复它。你可以像下面那样通过将分区名字作为 `fsck` 的参数来查看该分区的文件系统类型: + +``` +$ fsck -N /dev/sda1 +fsck from util-linux 2.32 +[/usr/bin/fsck.ext4 (1) -- /boot] fsck.ext4 /dev/sda1 +``` + +如果想知道更多的内容,请查看其 man 手册: +``` 
+$ man fsck
+```
+
+#### 方法 6 – 查看 `fstab` 文件
+
+**fstab** 是一个包含文件系统静态信息的文件。这个文件通常包含了挂载点、文件系统类型和挂载选项等信息。
+
+要查看某个文件系统的类型,只需要运行:
+```
+$ cat /etc/fstab
+```
+
+![][5]
+
+更多详情,请查看其 man 手册:
+```
+$ man fstab
+```
+
+#### 方法 7 – 使用 `lsblk` 命令
+
+**lsblk** 命令可以展示设备的信息。
+
+要展示已挂载文件系统的信息,只需运行:
+```
+$ lsblk -f
+NAME FSTYPE LABEL UUID MOUNTPOINT
+loop0 squashfs /var/lib/snapd/snap/core/4327
+sda
+├─sda1 ext4 83a1dbbf-1e15-4b45-94fe-134d3872af96 /boot
+├─sda2 ext4 4d25ddb0-5b20-40b4-ae35-ef96376d6594 /
+└─sda3 swap 1f8f5e2e-7c17-4f35-97e6-8bce7a4849cb [SWAP]
+sr0
+```
+
+更多细节,可以参考它的 man 手册:
+```
+$ man lsblk
+```
+
+#### 方法 8 – 使用 `mount` 命令
+
+**mount** 被用来在类 Unix 系统中挂载本地或远程的文件系统。
+
+要使用 `mount` 命令查看文件系统的类型,可以像下面这样做:
+```
+$ mount | grep "^/dev"
+/dev/sda2 on / type ext4 (rw,relatime,commit=360)
+/dev/sda1 on /boot type ext4 (rw,relatime,commit=360,data=ordered)
+```
+
+更多详情,请参考其 man 手册的内容:
+```
+$ man mount
+```
+
+好了,上面便是今天的全部内容了。现在你知道了 8 种不同的方法来查看已挂载的 Linux 文件系统的类型。假如你知道其他的命令可以完成同样的任务,请在下面的评论部分让我们知晓,我将确认并相应地更新本教程。
+
+更多精彩内容即将呈现,请保持关注!
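作为上面 8 种方法的补充,这一思路也可以写成一个不依赖外部工具的小脚本。下面是一个示例(其中的函数名 `fstype_of` 是我为演示自拟的),它直接解析 `/proc/self/mounts` 来查询给定挂载点的文件系统类型,原理上与 `findmnt`、`mount` 等命令读取的是同一份数据:

```shell
#!/bin/sh
# fstype_of:打印指定挂载点的文件系统类型
# /proc/self/mounts 每行的格式为:设备 挂载点 类型 挂载选项 ...
fstype_of() {
    # 匹配第二列(挂载点)等于参数的行,打印第三列(文件系统类型)
    awk -v mp="$1" '$2 == mp { print $3; exit }' /proc/self/mounts
}

# 用法示例:查询根文件系统的类型
fstype_of /
```

注意 `/proc/self/mounts` 中的挂载点路径是经过转义的(例如空格会显示为 `\040`),这个演示函数没有处理这种情况,实际使用时优先选择 `findmnt`。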
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-find-the-mounted-filesystem-type-in-linux/ + +作者:[SK][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.ostechnix.com/author/sk/ +[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]:http://www.ostechnix.com/wp-content/uploads/2018/07/findmnt-1.png +[3]:http://www.ostechnix.com/wp-content/uploads/2018/07/findmnt-2.png +[4]:http://www.ostechnix.com/wp-content/uploads/2018/07/df.png +[5]:http://www.ostechnix.com/wp-content/uploads/2018/07/fstab.png diff --git a/translated/tech/20180810 How To Remove Or Disable Ubuntu Dock.md b/translated/tech/20180810 How To Remove Or Disable Ubuntu Dock.md deleted file mode 100644 index 0ea7e841af..0000000000 --- a/translated/tech/20180810 How To Remove Or Disable Ubuntu Dock.md +++ /dev/null @@ -1,143 +0,0 @@ -如何移除或禁用 Ubuntu Dock -====== - -![](https://1.bp.blogspot.com/-pClnjEJfPQc/W21nHNzU2DI/AAAAAAAABV0/HGXuQOYGzokyrGYQtRFeF_hT3_3BKHupQCLcBGAs/s640/ubuntu-dock.png) - -**如果你想用其它 dock(例如 Plank dock)或面板来替换 Ubuntu 18.04 中的 Dock,或者你想要移除或禁用 Ubuntu Dock,本文会告诉你如何做。** - -Ubuntu Dock - 屏幕左侧栏,可用于固定应用程序或访问已安装的应用程序。使用默认的 Ubuntu 会话时,[无法][1]使用 Gnome Tweaks 禁用它。如果你需要,还是有几种方法来摆脱它的。下面我将列出 4 种方法可以移除或禁用 Ubuntu Dock,以及每个方法的缺点(如果有的话),还有如何撤销每个方法的更改。本文还包括在没有 Ubuntu Dock 的情况下访问多任务视图和已安装应用程序列表的其它方法。 -(to 校正:Activities Overview 在本文翻译为多任务视图,如有不妥,请改正) -### 如何在没有 Ubuntu Dock 的情况下访问多任务试图 - -如果没有 Ubuntu Dock,你可能无法访问活动的或已安装的应用程序列表(但是可以通过单击 Dock 底部的“显示应用程序”按钮从 Ubuntu Dock 访问)。例如,如果你想使用 Plank Dock。(to 校正:这里是什么意思呢) - -显然,如果你安装了 Dash to Panel 扩展来使用它而不是 Ubuntu Dock,那么情况并非如此。因为 Dash to Panel 提供了一个按钮来访问多任务视图或已安装的应用程序。 - -根据你计划使用的 Dock 而不是 Ubuntu Dock,如果无法访问多任务视图,那么你可以启用 Activities Overview Hot Corner 选项,只需将鼠标移动到屏幕的左上角即可打开 
Activities。访问已安装的应用程序列表的另一种方法是使用快捷键:`Super + A`。 - -如果要启用 Activities Overview hot corner,使用以下命令: -``` -gsettings set org.gnome.shell enable-hot-corners true - -``` - -如果以后要撤销此操作并禁用 hot corners,那么你需要使用以下命令: -``` -gsettings set org.gnome.shell enable-hot-corners false - -``` - -你可以使用 Gnome Tweaks 应用程序(该选项位于 Gnome Tweaks 的 `Top Bar` 部分)启用或禁用 Activities Overview Hot Corner 选项,可以使用以下命令进行安装: -``` -sudo apt install gnome-tweaks - -``` - -### 如何移除或禁用 Ubuntu Dock - -下面你将找到 4 种摆脱 Ubuntu Dock 的方法,环境在 Ubuntu 18.04 下。 - -**方法 1: 移除 Gnome Shell Ubuntu Dock 包。** - -摆脱 Ubuntu Dock 的最简单方法就是删除包。 - -这将会从你的系统中完全移除 Ubuntu Dock 扩展,但同时也移除了 `ubuntu-desktop` 元数据包。如果你移除 `ubuntu-desktop` 元数据包,不会马上出现问题,因为它本身没有任何作用。`ubuntu-meta` 包依赖于组成 Ubuntu 桌面的大量包。它的依赖关系不会被删除,也不会被破坏。问题是如果你以后想升级到新的 Ubuntu 版本,那么将不会安装任何新的 `ubuntu-desktop` 依赖项。 - -为了解决这个问题,你可以在升级到较新的 Ubuntu 版本之前安装 `ubuntu-desktop` 元包(例如,如果你想从 Ubuntu 18.04 升级到 18.10)。 - -如果你对此没有意见,并且想要从系统中删除 Ubuntu Dock 扩展包,使用以下命令: -``` -sudo apt remove gnome-shell-extension-ubuntu-dock - -``` - -如果以后要撤消更改,只需使用以下命令安装扩展: -``` -sudo apt install gnome-shell-extension-ubuntu-dock - -``` - -或者重新安装 `ubuntu-desktop` 元数据包(这将会安装你可能已删除的任何 ubuntu-desktop 依赖项,包括 Ubuntu Dock),你可以使用以下命令: -``` -sudo apt install ubuntu-desktop - -``` - -**选项2:安装并使用 vanilla Gnome 会话而不是默认的 Ubuntu 会话。** - -摆脱 Ubuntu Dock 的另一种方法是安装和使用 vanilla Gnome 会话。安装 vanilla Gnome 会话还将安装此会话所依赖的其它软件包,如 Gnome 文档,地图,音乐,联系人,照片,跟踪器等。 - -通过安装 vanilla Gnome 会话,你还将获得默认 Gnome GDM 登录和锁定屏幕主题,而不是 Ubuntu 默认值,另外还有 Adwaita Gtk 主题和图标。你可以使用 Gnome Tweaks 应用程序轻松更改 Gtk 和图标主题。 - -此外,默认情况下将禁用 AppIndicators 扩展(因此使用 AppIndicators 托盘的应用程序不会显示在顶部面板上),但你可以使用 Gnome Tweaks 启用此功能(在扩展中,启用 Ubuntu appindicators 扩展)。 - -同样,你也可以从 vanilla Gnome 会话启用或禁用 Ubuntu Dock,这在 Ubuntu 会话中是不可能的(使用 Ubuntu 会话时无法从 Gnome Tweaks 禁用 Ubuntu Dock)。 - -如果你不想安装 vanilla Gnome 会话所需的这些额外软件包,那么这个移除 Ubuntu Dock 的这个选项不适合你,请查看其它选项。 - -如果你对此没有意见,以下是你需要做的事情。要在 Ubuntu 中安装普通的 Gnome 会话,使用以下命令: -``` -sudo apt install vanilla-gnome-desktop - -``` - -安装完成后,重启系统。在登录屏幕上,单击用户名,单击 `Sign 
in` 按钮旁边的齿轮图标,然后选择 `GNOME` 而不是 `Ubuntu`,之后继续登录。 - -![](https://4.bp.blogspot.com/-mc-6H2MZ0VY/W21i_PIJ3pI/AAAAAAAABVo/96UvmRM1QJsbS2so1K8teMhsu7SdYh9zwCLcBGAs/s640/vanilla-gnome-session-ubuntu-login-screen.png) - -如果要撤销此操作并移除 vanilla Gnome 会话,可以使用以下命令清除 vanilla Gnome 软件包,然后删除它安装的依赖项(第二条命令): -``` -sudo apt purge vanilla-gnome-desktop -sudo apt autoremove - -``` - -然后重新启动,并以相同的方式从 GDM 登录屏幕中选择 Ubuntu。 - -**选项 3:从桌面上永久隐藏 Ubuntu Dock,而不是将其移除。** - -如果你希望永久隐藏 Ubuntu Dock,不让它显示在桌面上,但不移除它或使用 vanilla Gnome 会话,你可以使用 Dconf 编辑器轻松完成此操作。这样做的缺点是 Ubuntu Dock 仍然会使用一些系统资源,即使你没有在桌面上使用它,但你也可以轻松恢复它而无需安装或移除任何包。 - -Ubuntu Dock 只对你的桌面隐藏,当你进入叠加模式(Activities)时,你仍然可以看到并从那里使用 Ubuntu Dock。 - -要永久隐藏 Ubuntu Dock,使用 Dconf 编辑器导航到 `/org/gnome/shell/extensions/dash-to-dock` 并禁用以下选项(将它们设置为 false):`autohide`, `dock-fixed` 和 `intellihide`。 - -如果你愿意,可以从命令行实现此目的,运行以下命令: -``` -gsettings set org.gnome.shell.extensions.dash-to-dock autohide false -gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed false -gsettings set org.gnome.shell.extensions.dash-to-dock intellihide false - -``` - -如果你改变主意了并想撤销此操作,你可以使用 Dconf 编辑器从 `/org/gnome/shell/extensions/dash-to-dock` 中启动 `autohide`, `dock-fixed` 和 `intellihide`(将它们设置为 true),或者你可以使用以下这些命令: -``` -gsettings set org.gnome.shell.extensions.dash-to-dock autohide true -gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed true -gsettings set org.gnome.shell.extensions.dash-to-dock intellihide true - -``` - -**选项 4:使用 Dash to Panel 扩展。** - -[Dash to Panel][2] 是 Gnome Shell 的一个高度可配置面板,是 Ubuntu Dock 或 Dash to Dock 的一个很好的替代品(Ubuntu Dock 是从 Dash to Dock 克隆而来的)。安装和启动 Dash to Panel 扩展会禁用 Ubuntu Dock,因此你无需执行其它任何操作。 - -你可以从 [extensions.gnome.org][3] 来安装 Dash to Panel。 - -如果你改变主意并希望重新使用 Ubuntu Dock,那么你可以使用 Gnome Tweaks 应用程序禁用 Dash to Panel,或者通过单击以下网址旁边的 X 按钮完全移除 Dash to Panel: https://extensions.gnome.org/local/。 - --------------------------------------------------------------------------------- - -via: 
https://www.linuxuprising.com/2018/08/how-to-remove-or-disable-ubuntu-dock.html
-
-作者:[Logix][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[MjSeven](https://github.com/MjSeven)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://plus.google.com/118280394805678839070
-[1]:https://bugs.launchpad.net/ubuntu/+source/gnome-tweak-tool/+bug/1713020
-[2]:https://www.linuxuprising.com/2018/05/gnome-shell-dash-to-panel-v14-brings.html
-[3]:https://extensions.gnome.org/extension/1160/dash-to-panel/
diff --git a/translated/tech/20180820 How To Disable Ads In Terminal Welcome Message In Ubuntu Server.md b/translated/tech/20180820 How To Disable Ads In Terminal Welcome Message In Ubuntu Server.md
new file mode 100644
index 0000000000..48a556d29a
--- /dev/null
+++ b/translated/tech/20180820 How To Disable Ads In Terminal Welcome Message In Ubuntu Server.md
@@ -0,0 +1,110 @@
+如何在 Ubuntu 服务器中禁用终端欢迎消息中的广告
+======
+
+如果你正在使用最新的 Ubuntu 服务器版本,你可能已经注意到欢迎消息中有一些与 Ubuntu 服务器平台无关的促销链接。你可能已经知道 **MOTD**,即 **M**essage **O**f **T**he **D**ay 的首字母缩写,Linux 系统每次登录时都会显示这条欢迎信息。通常,欢迎消息包含操作系统版本、基本系统信息、官方文档链接以及有关最新安全更新等的链接。这些是我们每次通过 SSH 或本地登录时通常会看到的内容。但是,最近终端欢迎消息中出现了一些其他链接。我已经几次注意到这些链接,但我并不在意,也从未点击过。以下是我的 Ubuntu 18.04 LTS 服务器上显示的终端欢迎消息。
+
+![](http://www.ostechnix.com/wp-content/uploads/2018/08/Ubuntu-Terminal-welcome-message.png)
+
+正如你在上面截图中所看到的,欢迎消息中有一个 bit.ly 链接和一个 Ubuntu wiki 链接。有些人可能会感到惊讶,想知道这是什么。其实欢迎信息中的链接无需担心。它可能看起来像广告,但并不是商业广告。这些链接实际上指向 [**Ubuntu 官方博客**][1] 和 [**Ubuntu wiki**][2]。正如我之前所说,这些链接与 Ubuntu 服务器平台并不相关,没有任何与 Ubuntu 服务器相关的细节,这就是为什么我在开头称它们为广告。
+
+虽然我们大多数人都不会访问 bit.ly 链接,但是有些人可能出于好奇去访问这些链接,结果失望地发现它只是指向一个外部页面。你可以使用任何短网址还原服务(例如 unshorten.it),在实际访问之前查看它究竟指向哪里。或者,你只需在 bit.ly 链接的末尾输入加号(**+**)即可查看它们的实际地址以及有关链接的一些统计信息。
+
+![](http://www.ostechnix.com/wp-content/uploads/2018/08/shortlink.png)
+
+### 什么是 MOTD 以及它是如何工作的?
+
+2009 年,来自 Canonical 的 **Dustin Kirkland** 在 Ubuntu 中引入了 MOTD 的概念。它是一个灵活的框架,使管理员或发行版软件包能够在 /etc/update-motd.d/* 位置添加可执行脚本,在登录时生成有益、有趣的消息。它最初是为 Landscape(Canonical 的商业服务)实现的,但是其它发行版维护者发现它很有用,并且在他们自己的发行版中也采用了这个特性。
+
+如果你查看 Ubuntu 系统中的 **/etc/update-motd.d/** 目录,你会看到一组脚本。其中一个会打印通用的 “Welcome” 横幅;下一个打印 3 个链接,显示在哪里可以找到操作系统的帮助;另一个计算并显示本地系统可以更新的软件包数量;还有一个脚本告诉你是否需要重新启动,等等。
+
+从 Ubuntu 17.04 起,开发人员添加了 **/etc/update-motd.d/50-motd-news**,这个脚本用来在欢迎消息中加入一些附加信息。这些附加信息包括:
+
+ 1. 重要的关键信息,例如 ShellShock、Heartbleed 等
+
+ 2. 生命周期结束(EOL)消息、新功能可用性等
+
+ 3. 在 Ubuntu 官方博客和其他 Ubuntu 相关渠道中发布的一些有趣且有益的帖子
+
+另一个特点是它以异步方式工作:启动后约 60 秒,systemd 计时器会运行 `/etc/update-motd.d/50-motd-news --force` 脚本。该脚本会读取 /etc/default/motd-news 文件中定义的 3 个配置变量,默认值为:`ENABLED=1`、`URLS="https://motd.ubuntu.com"`、`WAIT="5"`。
+
+以下是 /etc/default/motd-news 文件的内容:
+```
+$ cat /etc/default/motd-news
+# Enable/disable the dynamic MOTD news service
+# This is a useful way to provide dynamic, informative
+# information pertinent to the users and administrators
+# of the local system
+ENABLED=1
+
+# Configure the source of dynamic MOTD news
+# White space separated list of 0 to many news services
+# For security reasons, these must be https
+# and have a valid certificate
+# Canonical runs a service at motd.ubuntu.com, and you
+# can easily run one too
+URLS="https://motd.ubuntu.com"

+# Specify the time in seconds, you're willing to wait for
+# dynamic MOTD news
+# Note that news messages are fetched in the background by
+# a systemd timer, so this should never block boot or login
+WAIT=5

+```
+
+好在 MOTD 是完全可定制的,所以你可以彻底禁用它(ENABLED=0),根据你的意愿更改或添加脚本,并以秒为单位更改等待时间。
+
+如果启用了 MOTD,那么 systemd 计时器作业将循环遍历每个 URL,将它们缩减到每行 80 个字符、最多 10 行,并将结果拼接到缓存文件 /var/cache/motd-news 中。此 systemd 计时器作业每隔 12 小时运行一次并更新 /var/cache/motd-news。用户登录后,/var/cache/motd-news 的内容会打印到屏幕上。这就是 MOTD 的工作原理。
+
+此外,**/etc/update-motd.d/50-motd-news** 文件中包含自定义的用户代理字符串,用来报告有关这台计算机的信息。如果你查看 **/etc/update-motd.d/50-motd-news** 文件,你会看到
+```
+# Piece together the user agent
+USER_AGENT="curl/$curl_ver $lsb $platform $cpu $uptime"
+```
+
+这意味着,MOTD 检索器将向 Canonical 报告你的**操作系统版本**、**硬件平台**、**CPU 类型**和**正常运行时间**。
+
+到这里,希望你对 MOTD 有了一个基本的了解。
+
+现在让我们回到主题:我并不想要这个功能,我该如何禁用它?如果欢迎消息中的促销链接仍然困扰你,并且你想永久禁用它们,可以通过以下方法快速实现。
+
+### 在 Ubuntu 服务器中禁用终端欢迎消息中的广告
+
+要禁用这些广告,编辑文件:
+```
+$ sudo vi /etc/default/motd-news
+```
+
+找到以下行并将其值设置为 0(零)。
+```
+[...]
+ENABLED=0
+[...]
+```
+
+保存并关闭文件。现在,重新启动系统,看看欢迎消息中是否仍然显示来自 Ubuntu 博客的链接。
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/08/Ubuntu-Terminal-welcome-message-1.png)
+
+看到没?现在没有来自 Ubuntu 博客和 Ubuntu wiki 的链接了。
+
+这就是全部内容了。希望这对你有所帮助。更多好东西要来了,敬请关注!
+
+顺祝时祺!
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-disable-ads-in-terminal-welcome-message-in-ubuntu-server/
+
+作者:[SK][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.ostechnix.com/author/sk/
+[1]:https://blog.ubuntu.com/
+[2]:https://wiki.ubuntu.com/
diff --git a/translated/tech/20180823 CLI- improved.md b/translated/tech/20180823 CLI- improved.md
deleted file mode 100644
index 05ffb2318e..0000000000
--- a/translated/tech/20180823 CLI- improved.md
+++ /dev/null
@@ -1,350 +0,0 @@
-
-命令行:增强版
-======
-
-我不确定有多少Web 开发者能完全逃避使用命令行。就我来说,我从1997年上大学就开始使用命令行了,那时的l33t-hacker 让我着迷,同时我也觉得它很难掌握。
-
-过去这些年我的命令行本领在逐步加强,我经常会去搜寻在我工作中能使用的更好的命令行工具。下面就是我现在使用的用于增强原有命令行工具的列表。
-
-
-### 怎么忽略我所做的命令行增强
-
-通常情况下我会用别名将新的或者增强的命令行工具链接到原来的命令行(如`cat`和`ping`)。
-
-
-如果我需要运行原来的命令的话(有时我确实需要这么做),我会像下面这样来运行未加修改的原来的命令行。(我用的是Mac,你的输出可能不一样)
-
-
-```
-$ \cat # 忽略叫 "cat" 的别名 - 具体解释: https://stackoverflow.com/a/16506263/22617
-$ command cat # 忽略函数和别名
-
-```
-
-### bat > cat
-
-`cat`用于打印文件的内容,如果你在命令行上要花很多时间的话,例如语法高亮之类的功能会非常有用。我首先发现了[ccat][3]这个有语法高亮功能的的工具,然后我发现了[bat][4],它的功能有语法高亮,分页,行号和git集成。
-
-
-`bat`命令也能让我在输出里(只要输出比屏幕的高度长)
-使用`/`关键字绑定来搜索(和用`less`搜索功能一样)。 - - -![Simple bat output][5] - -我将别名`cat`链接到了`bat`命令: - - - -``` -alias cat='bat' - -``` - -💾 [Installation directions][4] - -### prettyping > ping - -`ping`非常有用,当我碰到“糟了,是不是什么服务挂了?/我的网不通了?”这种情况下我最先想到的工具就是它了。但是`prettyping`(“prettyping” 可不是指"pre typing")(译注:英文字面意思是'预打印')在`ping`上加上了友好的输出,这可让我感觉命令行友好了很多呢。 - - -![/images/cli-improved/ping.gif][6] - -我也将`ping`用别名链接到了`prettyping`命令: - - -``` -alias ping='prettyping --nolegend' - -``` - -💾 [Installation directions][7] - -### fzf > ctrl+r - -在命令行上使用`ctrl+r`将允许你在命令历史里[反向搜索][8]使用过的命令,这是个挺好的小技巧,但是它需要你给出非常精确的输入才能正常运行。 - -`fzf`这个工具相比于`ctrl+r`有了**巨大的**进步。它能针对命令行历史进行模糊查询,并且提供了对可能的合格结果进行全面交互式预览。 - - -除了搜索命令历史,`fzf`还能预览和打开文件,我在下面的视频里展示了这些功能。 - - -为了这个预览的效果,我创建了一个叫`preview`的别名,它将`fzf`和前文提到的`bat`组合起来完成预览功能,还给上面绑定了一个定制的热键Ctrl+o来打开 VS Code: - - -``` -alias preview="fzf --preview 'bat --color \"always\" {}'" -# 支持在 VS Code 里用ctrl+o 来打开选择的文件 -export FZF_DEFAULT_OPTS="--bind='ctrl-o:execute(code {})+abort'" - -``` - -💾 [Installation directions][9] - -### htop > top - -`top`是当我想快速诊断为什么机器上的CPU跑的那么累或者风扇为什么突然呼呼大做的时候首先会想到的工具。我在产品环境也会使用这个工具。讨厌的是Mac上的`top`和 Linux 上的`top`有着极大的不同(恕我直言,应该是差的多)。 - - -不过,`htop`是对 Linux 上的`top`和 Mac 上蹩脚的`top`的极大改进。它增加了包括颜色输出编码,键盘热键绑定以及不同的视图输出,这极大的帮助了我来理解进程之间的父子关系。 - - -方便的热键绑定包括: - - * P - CPU使用率排序 - * M - 内存使用排序 - * F4 - 用字符串过滤进程(例如只看包括"node"的进程) - * space - 锚定一个单独进程,这样我能观察它是否有尖峰状态 - - -![htop output][10] - -在Mac Sieera 上htop 有个奇怪的bug,不过这个bug可以通过以root运行来绕过(我实在记不清这个bug 是什么,但是这个别名能搞定它,有点讨厌的是我得每次都输入root密码。): - - -``` -alias top="sudo htop" # 给top加上别名并且绕过 Sieera 上的bug -``` - -💾 [Installation directions][11] - -### diff-so-fancy > diff - -我非常确定我是一些年前从 Paul Irish 那儿学来的这个技巧,尽管我很少直接使用`diff`,但我的git命令行会一直使用`diff`。`diff-so-fancy`给了我代码语法颜色和更改字符高亮的功能。 - - -![diff so fancy][12] - -在我的`~/.gitconfig`文件里我有下面的选项来打开`git diff`和`git show`的`diff-so-fancy`功能。 - - -``` -[pager] - diff = diff-so-fancy | less --tabs=1,5 -RFX - show = diff-so-fancy | less --tabs=1,5 -RFX - -``` - -💾 [Installation directions][13] 
- -### fd > find - -尽管我使用 Mac, 但我从来不是一个Spotlight的拥趸,我觉得它的性能很差,关键字也难记,加上更新它自己的数据库时会拖慢CPU,简直一无是处。我经常使用[Alfred][14],但是它的搜索功能也工作的不是很好。 - - -我倾向于在命令行中搜索文件,但是`find`的难用在于很难去记住那些合适的表达式来描述我想要的文件。(而且 Mac 上的 find 命令和非Mac的find命令还有些许不同,这更加深了我的失望。) - -`fd`是一个很好的替代品(它的作者和`bat`的作者是同一个人)。它非常快而且对于我经常要搜索的命令非常好记。 - - - -几个使用方便的例子: - -``` -$ fd cli # 所有包含"cli"的文件名 -$ fd -e md # 所有以.md作为扩展名的文件 -$ fd cli -x wc -w # 搜索"cli"并且在每个搜索结果上运行`wc -w` - - -``` - -![fd output][15] - -💾 [Installation directions][16] - -### ncdu > du - -对我来说,知道当前的磁盘空间使用是非常重要的任务。我用过 Mac 上的[Dish Daisy][17],但是我觉得那个程序产生结果有点慢。 - - -`du -sh`命令是我经常会跑的命令(`-sh`是指结果以`总结`和`人类可读`的方式显示),我经常会想要深入挖掘那些占用了大量磁盘空间的目录,看看到底是什么在占用空间。 - -`ncdu`是一个非常棒的替代品。它提供了一个交互式的界面并且允许快速的扫描那些占用了大量磁盘空间的目录和文件,它又快又准。(尽管不管在哪个工具的情况下,扫描我的home目录都要很长时间,它有550G) - - -一旦当我找到一个目录我想要“处理”一下(如删除,移动或压缩文件),我都会使用命令+点击屏幕[iTerm2][18]上部的目录名字来对那个目录执行搜索。 - - -![ncdu output][19] - -还有另外一个选择[一个叫nnn的另外选择][20],它提供了一个更漂亮的界面,它也提供文件尺寸和使用情况,实际上它更像一个全功能的文件管理器。 - - -我的`ncdu`使用下面的别名链接: - -``` -alias du="ncdu --color dark -rr -x --exclude .git --exclude node_modules" - -``` - - -选项有: - - * `--color dark` 使用颜色方案 - * `-rr` 只读模式(防止误删和运行新的登陆程序) - * `--exclude` 忽略不想操作的目录 - - - -💾 [Installation directions][21] - -### tldr > man - -几乎所有的单独命令行工具都有一个相伴的手册,其可以被`man <命令名>`来调出,但是在`man`的输出里找到东西可有点让人困惑,而且在一个包含了所有的技术细节的输出里找东西也挺可怕的。 - - -这就是TL;DR(译注:英文里`文档太长,没空去读`的缩写)项目创建的初衷。这是一个由社区驱动的文档系统,而且针对的是命令行。就我现在用下来,我还没碰到过一个命令它没有相应的文档,你[也可以做贡献][22]。 - - -![TLDR output for 'fd'][23] - -作为一个小技巧,我将`tldr`的别名链接到`help`(这样输入会快一点。。。) - -``` -alias help='tldr' - -``` - -💾 [Installation directions][24] - -### ack || ag > grep - -`grep`毫无疑问是一个命令行上的强力工具,但是这些年来它已经被一些工具超越了,其中两个叫`ack`和`ag`。 - - -我个人对`ack`和`ag`都尝试过,而且没有非常明显的个人偏好,(那也就是说他们都很棒,并且很相似)。我倾向于默认只使用`ack`,因为这三个字符就在指尖,很好打。并且,`ack`有大量的`ack --`参数可以使用,(你一定会体会到这一点。) - - -`ack`和`ag`都将使用正则表达式来表达搜索,这非常契合我的工作,我能指定搜索的文件类型而不用使用类似于`--js`或`--html`的文件标识(尽管`ag`比`ack`在文件类型过滤器里包括了更多的文件类型。) - - -两个工具都支持常见的`grep`选项,如`-B`和`-A`用于在搜索的上下文里指代`之前`和`之后`。 - - -![ack in action][25] - 
-因为`ack`不支持markdown(而我又恰好写了很多markdown), 我在我的`~/.ackrc`文件里放了如下的定制语句: - - - -``` ---type-set=md=.md,.mkd,.markdown ---pager=less -FRX - -``` - -💾 Installation directions: [ack][26], [ag][27] - -[Futher reading on ack & ag][28] - -### jq > grep et al - -我是[jq][29]的粉丝之一。当然一开始我也在它的语法里苦苦挣扎,好在我对查询语言还算有些使用心得,现在我对`jq`可以说是每天都要用。(不过从前我要么使用grep 或者使用一个叫[json][30]的工具,相比而言后者的功能就非常基础了。) - - -我甚至开始撰写一个`jq`的教程系列(有2500字并且还在增加),我还发布了一个[web tool][31]和一个Mac 上的应用(这个还没有发布。) - - -`jq`允许我传入一个 JSON 并且能非常简单的将其转变为一个 使用JSON格式的结果,这正是我想要的。下面这个例子允许我用一个命令更新我的所有节点依赖(为了阅读方便,我将其分成为多行。) - - -``` -$ npm i $(echo $(\ - npm outdated --json | \ - jq -r 'to_entries | .[] | "\(.key)@\(.value.latest)"' \ -)) - -``` -上面的命令将使用npm 的 JSON 输出格式来列出所有的过期节点依赖,然后将下面的源JSON转换为: - - -``` -{ - "node-jq": { - "current": "0.7.0", - "wanted": "0.7.0", - "latest": "1.2.0", - "location": "node_modules/node-jq" - }, - "uuid": { - "current": "3.1.0", - "wanted": "3.2.1", - "latest": "3.2.1", - "location": "node_modules/uuid" - } -} - -``` - -转换结果为:(译注:原文此处并未给出结果) - -上面的结果会被作为`npm install`的输入,你瞧,我的升级就这样全部搞定了。(当然,这里有点小题大做了。) - - -### 很荣幸提及一些其他的工具 - -我也在开始尝试一些别的工具,但我还没有完全掌握他们。(除了`ponysay`,当我新启动一个命令行会话时,它就会出现。) - - - * [ponysay][32] > cowsay - * [csvkit][33] > awk et al - * [noti][34] > `display notification` - * [entr][35] > watch - - - -### 你有什么好点子吗? 
- - -上面是我的命令行清单。能告诉我们你的吗?你有没有试着去增强一些你每天都会用到的命令呢?请告诉我,我非常乐意知道。 - - - --------------------------------------------------------------------------------- - -via: https://remysharp.com/2018/08/23/cli-improved - -作者:[Remy Sharp][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:DavidChenLiang(https://github.com/DavidChenLiang) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://remysharp.com -[1]: https://remysharp.com/images/terminal-600.jpg -[2]: https://training.leftlogic.com/buy/terminal/cli2?coupon=READERS-DISCOUNT&utm_source=blog&utm_medium=banner&utm_campaign=remysharp-discount -[3]: https://github.com/jingweno/ccat -[4]: https://github.com/sharkdp/bat -[5]: https://remysharp.com/images/cli-improved/bat.gif (Sample bat output) -[6]: https://remysharp.com/images/cli-improved/ping.gif (Sample ping output) -[7]: http://denilson.sa.nom.br/prettyping/ -[8]: https://lifehacker.com/278888/ctrl%252Br-to-search-and-other-terminal-history-tricks -[9]: https://github.com/junegunn/fzf -[10]: https://remysharp.com/images/cli-improved/htop.jpg (Sample htop output) -[11]: http://hisham.hm/htop/ -[12]: https://remysharp.com/images/cli-improved/diff-so-fancy.jpg (Sample diff output) -[13]: https://github.com/so-fancy/diff-so-fancy -[14]: https://www.alfredapp.com/ -[15]: https://remysharp.com/images/cli-improved/fd.png (Sample fd output) -[16]: https://github.com/sharkdp/fd/ -[17]: https://daisydiskapp.com/ -[18]: https://www.iterm2.com/ -[19]: https://remysharp.com/images/cli-improved/ncdu.png (Sample ncdu output) -[20]: https://github.com/jarun/nnn -[21]: https://dev.yorhel.nl/ncdu -[22]: https://github.com/tldr-pages/tldr#contributing -[23]: https://remysharp.com/images/cli-improved/tldr.png (Sample tldr output for 'fd') -[24]: http://tldr-pages.github.io/ -[25]: https://remysharp.com/images/cli-improved/ack.png (Sample ack output with grep args) -[26]: https://beyondgrep.com 
-[27]: https://github.com/ggreer/the_silver_searcher -[28]: http://conqueringthecommandline.com/book/ack_ag -[29]: https://stedolan.github.io/jq -[30]: http://trentm.com/json/ -[31]: https://jqterm.com -[32]: https://github.com/erkin/ponysay -[33]: https://csvkit.readthedocs.io/en/1.0.3/ -[34]: https://github.com/variadico/noti -[35]: http://www.entrproject.org/ diff --git a/translated/tech/20180903 A Cross-platform High-quality GIF Encoder.md b/translated/tech/20180903 A Cross-platform High-quality GIF Encoder.md new file mode 100644 index 0000000000..314797a174 --- /dev/null +++ b/translated/tech/20180903 A Cross-platform High-quality GIF Encoder.md @@ -0,0 +1,147 @@ +Gifski – 一个跨平台的高质量 GIF 编码器 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/09/gifski-720x340.png) + +作为一名文字工作者,我需要在我的文章中添加图片。有时为了更容易讲清楚某个概念,我还会添加视频或者 gif 动图,相比于文字,通过视频或者 gif 格式的输出,读者可以更容易地理解我的指导。前些天,我已经写了篇文章来介绍针对 Linux 的功能丰富的强大截屏工具 [**Flameshot**][1]。今天,我将向你展示如何从一段视频或者一些图片来制作高质量的 gif 动图。这个工具就是 **Gifski**,一个跨平台、开源、基于 **Pngquant** 的高质量命令行 GIF 编码器。 + +对于那些好奇 pngquant 是什么的读者,简单来说 pngquant 是一个针对 PNG 图片的无损压缩命令行工具。相信我,pngquant 是我使用过的最好的 PNG 无损压缩工具。它可以将 PNG 图片最高压缩 **70%** 而不会损失图片的原有质量并保存了所有的阿尔法透明度。经过压缩的图片可以在所有的网络浏览器和系统中使用。而 Gifski 是基于 Pngquant 的,它使用 pngquant 的功能来创建高质量的 GIF 动图。Gifski 能够创建每帧包含上千种颜色的 GIF 动图。Gifski 也需要 **ffmpeg** 来将视频转换为 PNG 图片。 + +### **安装 Gifski** + +首先需要确保你安装了 FFMpeg 和 Pngquant。 + +FFmpeg 在大多数的 Linux 发行版的默认软件仓库中都可以获取到,所以你可以使用默认的包管理器来安装它。具体的安装过程,请参考下面链接中的指导。 + +- [在 Linux 中如何安装 FFmpeg](https://www.ostechnix.com/install-ffmpeg-linux/) + +Pngquant 可以从 [**AUR**][2] 中获取到。要在基于 Arch 的系统安装它,使用任意一个 AUR 帮助程序即可,例如下面示例中的 [**Yay**][3]: +``` +$ yay -S pngquant +``` + +在基于 Debian 的系统中,运行: +``` +$ sudo apt install pngquant +``` + +假如在你使用的发行版中没有 pngquant,你可以从源码编译并安装它。为此你还需要安装 **`libpng-dev`** 包。 +``` +$ git clone --recursive https://github.com/kornelski/pngquant.git + +$ make + +$ sudo make install +``` + +安装完上述依赖后,再安装 Gifski。假如你已经安装了 [**Rust**][4] 编程语言,你可以使用 **cargo** 来安装它: +``` +$ cargo 
install gifski +``` + +另外,你还可以使用 [**Linuxbrew**][5] 包管理器来安装它: +``` +$ brew install gifski +``` + +假如你不想安装 cargo 或 Linuxbrew,可以从它的 [发布页面][6] 下载最新的二进制程序,或者手动从源码编译并安装 gifski 。 + +### 使用 Gifski 来创建高质量的 GIF 动图 + +进入你保存 PNG 图片的目录,然后运行下面的命令来从这些图片创建 GIF 动图: +``` +$ gifski -o file.gif *.png +``` + +上面的 `file.gif` 为最后输出的 gif 动图。 + +Gifski 还有其他的特性,例如: + + * 创建特定大小的 GIF 动图 + * 在每秒钟展示特定数目的动图 + * 以特定的质量编码 + * 更快速度的编码 + * 以给定顺序来编码图片,而不是以排序的结果来编码 + +为了创建特定大小的 GIF 动图,例如宽为 800,高为 400,可以使用下面的命令: +``` +$ gifski -o file.gif -W 800 -H 400 *.png + +``` + +你可以设定 GIF 动图在每秒钟展示多少帧,默认值是 **20**。为此,可以运行下面的命令: +``` +$ gifski -o file.gif --fps 1 *.png +``` + +在上面的例子中,我指定每秒钟展示 1 帧。 + +我们还能够以特定质量(1-100 范围内)来编码。显然,更低的质量将生成更小的文件,更高的质量将生成更大的 GIF 动图文件。 +``` +$ gifski -o file.gif --quality 50 *.png +``` + +当需要编码大量图片时,Gifski 将会花费更多时间。如果想要编码过程加快到通常速度的 3 倍左右,可以运行: +``` +$ gifski -o file.gif --fast *.png +``` + +请注意上面的命令产生的 GIF 动图文件将减少 10% 的质量并且文件大小也会更大。 + +如果想让图片以某个给定的顺序(而不是通过排序)精确地被编码,可以使用 **`--nosort`** 选项。 +``` +$ gifski -o file.gif --nosort *.png +``` + +假如你不想让 GIF 循环播放,只需要使用 **`--once`** 选项即可: +``` +$ gifski -o file.gif --once *.png +``` + +**从视频创建 GIF 动图** + +有时或许你想从一个视频创建 GIF 动图。这也是可以做到的,这时候 FFmpeg 便能提供帮助。首先像下面这样,将视频转换成一系列的 PNG 图片: +``` +$ ffmpeg -i video.mp4 frame%04d.png +``` + +上面的命令将会从 `video.mp4` 这个视频文件创建名为“frame0001.png”、“frame0002.png”、“frame0003.png”等等形式的图片(其中的 `%04d` 代表帧数),然后将这些图片保存在当前的工作目录。 + +转换好图片后,只需要运行下面的命令便可以制作 GIF 动图了: +``` +$ gifski -o file.gif *.png +``` + +想知晓更多的细节,请参考它的帮助部分: +``` +$ gifski -h +``` + +下面是使用 Gifski 创建的示例 GIF 动图文件。 + +![](https://gif.ski/jazz-chromecast-ultra.gif) + +正如你看到的那样,GIF 动图的质量看起来是非常好的。 + +好了,这就是全部内容了。希望这篇指南对你有所帮助。更多精彩内容即将呈现,请保持关注! + +干杯吧! 
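文中“先用 ffmpeg 拆帧、再用 gifski 编码”的两步流程可以合并成一个小脚本。下面是一个最简单的示意(假设 ffmpeg 和 gifski 均已安装并在 PATH 中;函数名 `video2gif` 是为演示虚构的,并非这两个工具自带):

```shell
# 一个把视频一步转换为 GIF 的简单示意函数。
# 假设:ffmpeg 与 gifski 均已安装;video2gif 这个名字是虚构的,仅作演示。
video2gif() {
    input=$1        # 输入视频,例如 video.mp4
    output=$2       # 输出 GIF,例如 file.gif
    fps=${3:-20}    # 每秒帧数,gifski 的默认值是 20

    tmpdir=$(mktemp -d) || return 1
    # 第一步:用 ffmpeg 把视频拆成一系列 PNG 帧
    # 第二步:用 gifski 把这些帧编码为高质量 GIF
    ffmpeg -i "$input" "$tmpdir/frame%04d.png" &&
        gifski -o "$output" --fps "$fps" "$tmpdir"/frame*.png
    status=$?
    rm -rf "$tmpdir"    # 清理临时帧文件,不在当前目录留下垃圾
    return $status
}

# 用法示例:video2gif video.mp4 file.gif 10
```

这样中间生成的 PNG 帧都放在临时目录里,转换结束后会被自动清理。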
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/gifski-a-cross-platform-high-quality-gif-encoder/
+
+作者:[SK][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[FSSlc](https://github.com/FSSlc)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[1]: https://www.ostechnix.com/flameshot-a-simple-yet-powerful-feature-rich-screenshot-tool/
+[2]: https://aur.archlinux.org/packages/pngquant/
+[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
+[4]: https://www.ostechnix.com/install-rust-programming-language-in-linux/
+[5]: https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/
+[6]: https://github.com/ImageOptim/gifski/releases
diff --git a/translated/tech/20180907 6 open source tools for writing a book.md b/translated/tech/20180907 6 open source tools for writing a book.md
new file mode 100644
index 0000000000..ef1edd8cff
--- /dev/null
+++ b/translated/tech/20180907 6 open source tools for writing a book.md
@@ -0,0 +1,67 @@
+6 个用于写书的开源工具
+======
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-austen-writing-code.png?itok=XPxRMtQ4)
+
+我在 1993 年首次使用自由和开源软件并为其做出贡献,从那时起我一直是一名开源软件开发人员和传播者。尽管我最为人熟知的项目是 [FreeDOS 项目][1],一个 DOS 操作系统的开源实现,但我编写或贡献过数十个开源软件项目。
+
+我最近写了一本关于 FreeDOS 的书。[_使用 FreeDOS_][2] 是我为庆祝 FreeDOS 诞生 24 周年而写的。它是一部文集,内容涉及 FreeDOS 的安装和使用、我最喜欢的一些 DOS 程序,以及 DOS 命令行和 DOS 批处理编程的快速参考指南。在一位出色的专业编辑的帮助下,我在过去的几个月里一直在编写这本书。
+
+_使用 FreeDOS_ 以知识共享署名(cc-by)国际公共许可证发布。你可以从 [FreeDOS 电子书][2]网站免费下载 EPUB 和 PDF 版本。(我也计划为喜欢纸质书的读者提供印刷版本。)
+
+这本书几乎完全是用开源软件制作的。我想分享一下我对用来创建、编辑和生成 _使用 FreeDOS_ 的工具的看法。
+
+### Google 文档
+
+[Google 文档][3]是我使用的唯一不是开源软件的工具。我将我的第一份草稿上传到 Google 文档,这样我就能与编辑协作。我确信有开源的协作工具,但 Google 文档能让两个人同时编辑同一个文档、发表评论、提出修改建议并跟踪更改,更不用说它还支持段落样式、能够下载完成的文档,这使其成为编辑过程中很有价值的一部分。
+
+### LibreOffice
+
+我开始使用 [LibreOffice][4] 6.0,但我最终使用 LibreOffice 6.1 完成了这本书。我喜欢 LibreOffice 对样式的丰富支持。段落样式可以轻松地为标题、页眉、正文、示例代码和其他文本应用样式。字符样式允许我修改段落中文本的外观,例如内联示例代码或用不同的样式代表文件名。图形样式让我可以将某些样式应用于截图和其他图像。页面样式允许我轻松修改页面的布局和外观。 + +### GIMP + +我的书包括很多 DOS 程序截图,网站截图和 FreeDOS logo。我用 [GIMP][5] 修改了这本书的图像。通常,只是裁剪或调整图像大小,但在我准备本书的印刷版时,我使用 GIMP 创建了一些更易于打印布局的图像。 + +### Inkscape + +大多数 FreeDOS logo 和小鱼吉祥物都是 SVG 格式,我使用 [Inkscape][6]来调整它们。在准备电子书的 PDF 版本时,我想在页面顶部放置一个简单的蓝色横幅,角落里有 FreeDOS logo。实验后,我发现在 Inkscape 中创建一个我想要的横幅 SVG 图案更容易,然后我将其粘贴到页眉中。 + +### ImageMagick + +虽然使用 GIMP 来完成这项工作也很好,但有时在一组图像上运行 [ImageMagick][7] 命令会更快,例如转换为 PNG 格式或调整图像大小。 + +### Sigil + +LibreOffice 可以直接导出到 EPUB 格式,但它不是个好的转换器。我没有尝试使用 LibreOffice 6.1 创建 EPUB,但 LibreOffice 6.0 没有包含我的图像。它还以奇怪的方式添加了样式。我使用 [Sigil][8] 来调整 EPUB 并使一切看起来正常。Sigil 甚至还有预览功能,因此你可以看到 EPUB 的样子。 + +### QEMU + +因为本书是关于安装和运行 FreeDOS 的,所以我需要实际运行 FreeDOS。你可以在任何 PC 模拟器中启动 FreeDOS,包括 VirtualBox、QEMU、GNOME Boxes、PCem 和 Bochs。但我喜欢 [QEMU] [9] 的简单性。QEMU 控制台允许你以 PPM 转储屏幕,这非常适合抓取截图来包含在书中。 + +当然,我不得不提到在 [Linux][11] 上运行 [GNOME][10]。我使用 Linux 的 [Fedora][12] 发行版。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/writing-book-open-source-tools + +作者:[Jim Hall][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jim-hall +[1]: http://www.freedos.org/ +[2]: http://www.freedos.org/ebook/ +[3]: https://www.google.com/docs/about/ +[4]: https://www.libreoffice.org/ +[5]: https://www.gimp.org/ +[6]: https://inkscape.org/ +[7]: https://www.imagemagick.org/ +[8]: https://sigil-ebook.com/ +[9]: https://www.qemu.org/ +[10]: https://www.gnome.org/ +[11]: https://www.kernel.org/ +[12]: https://getfedora.org/ diff --git a/translated/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md 
b/translated/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md deleted file mode 100644 index 869e596f89..0000000000 --- a/translated/tech/20180921 Control your data with Syncthing- An open source synchronization tool.md +++ /dev/null @@ -1,111 +0,0 @@ -使用 Syncthing —— 一个开源同步工具来把握你数据的控制权 - -决定如何存储和共享您的个人信息。 - -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg) - -如今,我们的一些最重要的财产——从家人和朋友的照片和视频到财务和医疗文件——都是数据。 -即便是云存储服务的迅猛发展,我们仍有对隐私和个人数据缺乏控制的担忧。从 PRISM 的监控计划到谷歌[让 APP 开发者扫描你的个人邮件][1],这些新闻的报道应该会让我们对我们个人信息的安全性有所顾虑。 - -[Syncthing][2] 可以让你放下心来。它是一款开源点对点的文件同步工具,可以运行在Linux、Windows、Mac、Android和其他 (抱歉,没有iOS)。Syncthing 使用自定的协议,叫[块交换协议](3)。简而言之,Syncting 能让你无需拥有服务器来跨设备同步数据,。 - -### Linux - -在这篇文章中,我将解释如何在 Linux 电脑和安卓手机之间安装和同步文件。 - -Syncting 在大多数流行的发行版都能下载。Fedora 28 包含其最新版本。 - -要在 Fedora 上安装 Syncthing,你能在软件中心搜索,或者执行以下命令: - -``` -sudo dnf install syncthing syncthing-gtk -``` - -一旦安装好后,打开它。你将会看到一个助手帮你配置 Syncthing。点击 **下一步** 直到它要求配置 WebUI。最安全的选项是选择**监听本地地址**。那将会禁止 Web 接口并且阻止未经授权的用户。 - -![Syncthing in Setup WebUI dialog box][5] - -Syncthing 安装时的 WebUI 对话框 - -关闭对话框。现在 Syncthing 安装好了。是时间分享一个文件夹,连接一台设备开始同步了。但是,让我们用你其他的客户端继续。 - -### Android - -Syncthing 在 Google Play 和 F-Droid 应用商店都能下载 - -![](https://opensource.com/sites/default/files/uploads/syncthing2.png) - -安装应用程序后,会显示欢迎界面。给 Syncthing 授予你设备存储的权限。 -你可能会被要求为了此应用程序而禁用电池优化。这样做是安全的,因为我们将优化应用程序,使其仅在插入并连接到无线网络时同步。 - -点击主菜单图标来到**设置**,然后是**运行条件**。点击**总是在后台运行**, **仅在充电时运行**和**仅在 WIFI 下运行**。现在你的安卓客户端已经准备好与你的设备交换文件。 - -Syncting 中有两个重要的概念需要记住:文件夹和设备。文件夹是你想要分享的,但是你必须有一台设备来分享。 Syncthing 允许你用不同的设备分享独立的文件夹。设备是通过交换设备 ID 来添加的。设备ID是在 Syncting 首次启动时创建的一个唯一的密码安全标识符。 - -### 连接设备 - -现在让我们连接你的Linux机器和你的Android客户端。 - -在您的Linux计算机中,打开 Syncting,单击 **设置** 图标,然后单击 **显示ID** ,就会显示一个二维码。 - -在你的安卓手机上,打开 Syncthing。在主界面上,点击 **设备** 页后点击 **+** 。在第一个区域内点击二维码符号来启动二维码扫描。 - -将你手机的摄像头对准电脑上的二维码。设备ID字段将由您的桌面客户端设备 ID 
填充。起一个适合的名字并保存。因为添加设备有两种方式,现在你需要在电脑客户端上确认你想要添加安卓手机。你的电脑客户端可能会花上好几分钟来请求确认。当提示确认时,点击**添加**。 - -![](https://opensource.com/sites/default/files/uploads/syncthing6.png) - -在 **新设备** 窗口,你能确认并配置一些关于你设备的选项,像是**设备名** 和 **地址**。如果你在地址那一栏选择 dynamic (动态),客户端将会自动探测设备的 IP 地址,但是你想要保持住某一个 IP 地址,你能将该地址填进这一栏里。如果你已经创建了文件夹(或者在这之后),你也能与新设备分享这个文件夹。 - -![](https://opensource.com/sites/default/files/uploads/syncthing7.png) - -你的电脑和安卓设备已经配对,可以交换文件了。(如果你有多台电脑或手机,只需重复这些步骤。) - -### 分享文件夹 - -既然您想要同步的设备之间已经连接,现在是时候共享一个文件夹了。您可以在电脑上共享文件夹,添加了该文件夹中的设备将获得一份副本。 - -若要共享文件夹,请转至**设置**并单击**添加共享文件夹**: - -![](https://opensource.com/sites/default/files/uploads/syncthing8.png) - -在下一个窗口中,输入要共享的文件夹的信息: - -![](https://opensource.com/sites/default/files/uploads/syncthing9.png) - -你可以使用任何你想要的标签。**文件夹ID **将随机生成,用于识别客户端之间的文件夹。在**路径**里,点击**浏览**就能定位到你想要分享的文件夹。如果你想 Syncthing 监控文件夹的变化(例如删除,新建文件等),点击** 监控文件系统变化** - -记住,当你分享一个文件夹,在其他客户端的任何改动都将会反映到每一台设备上。这意味着如果你在其他电脑和手机设备之间分享了一个包含图片的文件夹,在这些客户端上的改动都会同步到每一台设备。如果这不是你想要的,你能让你的文件夹“只是发送"给其他客户端,但是其他客户端的改动都不会被同步。 - -完成后,转至**与设备共享**页并选择要与之同步文件夹的主机: - -您选择的所有设备都需要接受共享请求;您将在设备上收到通知。 - -正如共享文件夹时一样,您必须配置新的共享文件夹: - -![](https://opensource.com/sites/default/files/uploads/syncthing12.png) - -同样,在这里您可以定义任何标签,但是 ID 必须匹配每个客户端。在文件夹选项中,选择文件夹及其文件的位置。请记住,此文件夹中所做的任何更改都将反映到文件夹所允许同步的每个设备上。 - -这些是连接设备和与 Syncting 共享文件夹的步骤。开始复制可能需要几分钟时间,这取决于您的网络设置或您是否不在同一网络上。 - -Syncting 提供了更多出色的功能和选项。试试看,并把握你数据的控制权。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/take-control-your-data-syncthing - -作者:[Michael Zamot][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[ypingcn](https://github.com/ypingcn) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mzamot -[1]: https://gizmodo.com/google-says-it-doesnt-go-through-your-inbox-anymore-bu-1827299695 -[2]: https://syncthing.net/ -[3]: 
https://docs.syncthing.net/specs/bep-v1.html
-[4]: /file/410191
-[5]: https://opensource.com/sites/default/files/uploads/syncthing1.png "Syncthing in Setup WebUI dialog box"
diff --git a/translated/tech/20180928 Using Grails with jQuery and DataTables.md b/translated/tech/20180928 Using Grails with jQuery and DataTables.md
new file mode 100644
index 0000000000..99df42dc91
--- /dev/null
+++ b/translated/tech/20180928 Using Grails with jQuery and DataTables.md
@@ -0,0 +1,538 @@
+将 Grails 与 jQuery 和 DataTables 一起使用
+======
+
+本文介绍如何构建一个基于 Grails 的数据浏览器来可视化复杂的表格数据。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_container_block.png?itok=S8MbXEYw)
+
+我是 [Grails][1] 的忠实粉丝。当然,我主要是热衷于利用命令行工具来探索和分析数据的数据人。数据人经常需要_查看_数据,这也意味着他们通常拥有优秀的数据浏览器。利用 Grails、[jQuery][2],以及 [DataTables jQuery 插件][3],我们可以制作出非常友好的表格数据浏览器。
+
+[DataTables 网站][3]提供了许多“食谱风格”的教程文档,展示了如何组合一些优秀的示例应用程序,这些程序包含了完成一些非常漂亮的东西所必要的 JavaScript、HTML,以及偶尔出现的 [PHP][4]。但对于那些宁愿使用 Grails 作为后端的人来说,有必要进行一些说明讲解。此外,示例程序中使用的数据是虚构公司员工的单个平面表数据,因此处理复杂的表关系可以作为留给读者的一个练习项目。
+
+本文中,我们将创建一个具有略微复杂的数据结构和 DataTables 浏览器的 Grails 应用程序。我们将介绍 Grails 的条件查询(criteria),它是 [Groovy][5] 化的 Java Hibernate 条件查询。我已将代码托管在 [GitHub][6] 上方便大家访问,因此本文主要是对代码细节的解读。
+
+首先,你需要配置 Java、Groovy、Grails 的使用环境。对于 Grails,我倾向于使用终端窗口和 [Vim][7],本文也使用它们。为获得现代的 Java,建议下载并安装 Linux 发行版提供的 [Open Java Development Kit][8](OpenJDK)(应该是 Java 8、9、10 或 11,撰写本文时,我正在使用 Java 8)。从我的角度来看,获取最新的 Groovy 和 Grails 的最佳方法是使用 [SDKMAN!][9]。
+
+从未尝试过 Grails 的读者可能需要做一些背景资料阅读。作为初学者,推荐文章[创建你的第一个 Grails 应用程序][10]。
+
+### 获取员工信息浏览器应用程序
+
+正如上文所提,我将本文中员工信息浏览器的源代码托管在 [GitHub][6] 上。进一步讲,应用程序 **embrow** 是在 Linux 终端中用如下命令构建的:
+
+```
+cd Projects
+grails create-app com.nuevaconsulting.embrow
+```
+
+域类和单元测试创建如下:
+
+```
+grails create-domain-class com.nuevaconsulting.embrow.Position
+grails create-domain-class com.nuevaconsulting.embrow.Office
+grails create-domain-class com.nuevaconsulting.embrow.Employee
+```
+
+这种方式构建的域类没有属性,因此必须按如下方式编辑它们:
+
+Position 域类:
+
+```
+package com.nuevaconsulting.embrow
+ 
+class Position {
+
+    String name
+    int starting
+
+    static constraints = {
+        name nullable: false, blank: false
+        starting nullable: false
+    }
+}
+```
+
+Office 域类:
+
+```
+package com.nuevaconsulting.embrow
+ 
+class Office {
+
+    String name
+    String address
+    String city
+    String country
+
+    static constraints = {
+        name nullable: false, blank: false
+        address nullable: false, blank: false
+        city nullable: false, blank: false
+        country nullable: false, blank: false
+    }
+}
+```
+
+Employee 域类:
+
+```
+package com.nuevaconsulting.embrow
+ 
+class Employee {
+
+    String surname
+    String givenNames
+    Position position
+    Office office
+    int extension
+    Date hired
+    int salary
+    static constraints = {
+        surname nullable: false, blank: false
+        givenNames nullable: false, blank: false
+        position nullable: false
+        office nullable: false
+        extension nullable: false
+        hired nullable: false
+        salary nullable: false
+    }
+}
+```
+
+请注意,虽然 Position 和 Office 域类使用了预定义的 Groovy 类型 String 以及 int,但 Employee 域类定义了 Position 和 Office 类型的字段(以及预定义的 Date)。这会导致创建数据库表时,存储 Employee 实例的表中包含了指向存储 Position 和 Office 实例的表的引用或者外键。
+
+现在你可以生成控制器、视图,以及其他各种测试组件:
+
+```
+grails generate-all com.nuevaconsulting.embrow.Position
+grails generate-all com.nuevaconsulting.embrow.Office
+grails generate-all com.nuevaconsulting.embrow.Employee
+``` + +此时,你已经准备好基本的 create-read-update-delete(CRUD)应用程序。我在**grails-app/init/com/nuevaconsulting/BootStrap.groovy**中包含了一些基础数据来填充表格。 + +如果你用如下命令来启动应用程序: + +``` +grails run-app +``` + +在浏览器输入****,你将会看到如下界面: + +![Embrow home screen][12] + +Embrow 应用程序主界面。 + +单击 OfficeController,会跳转到如下界面: + +![Office list][14] + +Office 列表 + +注意,此表由 **OfficeController index** 生成,并由视图 `office/index.gsp` 显示。 + +同样,单击 **EmployeeController** 跳转到如下界面: + +![Employee controller][16] + +employee controller + +好吧,这很丑陋: Position 和 Office 链接是什么? + +上面的命令 `generate-all` 生成的视图创建了一个叫 **index.gsp** 的文件,它使用 Grails 标签,该标签默认会显示类名(**com.nuevaconsulting.embrow.Position**)和持久化示例标识符(**30**)。这个操作可以自定义用来产生更好看的东西,并且自动生成链接,自动生成分页以及自动生成可拍序列的一些非常简洁直观的东西。 + +但该员工信息浏览器功能也是有限的。例如,如果想查找 position 信息中包含 “dev” 的员工该怎么办?如果要组合排序,以姓氏为主排序关键字,office 为辅助排序关键字,该怎么办?或者,你需要将已排序的数据导出到电子表格或 PDF 文档以便通过电子邮件发送给无法访问浏览器的人,该怎么办? + +jQuery DataTables 插件提供了这些所需的功能。允许你创建一个完成的表格数据浏览器。 + +### 创建员工信息浏览器视图和控制器的方法 + +要基于 jQuery DataTables 创建员工信息浏览器,你必须先完成以下两个任务: + 1. 创建 Grails 视图,其中包含启用 DataTable 所需的 HTML 和 JavaScript + + +#### 员工信息浏览器视图 + +在目录 **embrow/grails-app/views/employee** 中,首先复制 **index.gsp** 文件,重命名为 **browser.gsp**: + +``` +cd Projects +cd embrow/grails-app/views/employee +cp gsp browser.gsp +``` + +此刻,你自定义新的 **browser.gsp** 文件来添加相关的 jQuery DataTables 代码。 + +通常,在可能的时候,我喜欢从内容提供商处获得 JavaScript 和 CSS;在下面这行后面: + +``` +<g:message code="default.list.label" args="[entityName]" /> +``` + +插入如下代码: + +``` + + + + + + + + + + + + +``` + +然后删除 **index.gsp** 中提供数据分页的代码: + +``` +
+<a href="#list-employee" class="skip" tabindex="-1"><g:message code="default.link.skip.label" default="Skip to content&hellip;"/></a>
+<div id="list-employee" class="content scaffold-list" role="main">
+<h1><g:message code="default.list.label" args="[entityName]" /></h1>
+<g:if test="${flash.message}">
+<div class="message" role="status">${flash.message}</div>
+</g:if>
+<f:table collection="${employeeList}" />
+<div class="pagination">
+<g:paginate total="${employeeCount ?: 0}" />
+</div>
+</div>
+``` + +并插入实现 jQuery DataTables 的代码。 + +要插入的第一部分是 HTML,它将创建浏览器的基本表格结构。DataTables 与后端通信的应用程序来说,它们只提供表格页眉和页脚;DataTables JavaScript 则负责表中内容。 + +``` +
+<div id="content" role="main">
+<h1>Employee Browser</h1>
+<table id="employee_dt" class="display compact" style="width:99%;">
+<thead>
+<tr>
+<th>Surname</th><th>Given name(s)</th><th>Position</th><th>Office</th><th>Extension</th><th>Hired</th><th>Salary</th>
+</tr>
+</thead>
+<tfoot>
+<tr>
+<th>Surname</th><th>Given name(s)</th><th>Position</th><th>Office</th><th>Extension</th><th>Hired</th><th>Salary</th>
+</tr>
+</tfoot>
+</table>
+</div>
+```
+
+接下来,插入一个 JavaScript 块,它主要提供三个功能:设置页脚中用于列过滤的文本框的大小,建立 DataTables 表模型,并创建一个处理列过滤的处理程序。
+
+```
+$('#employee_dt tfoot th').each( function() {
+```
+
+下面的代码处理表格列底部的过滤器框的大小:
+
+```
+var title = $(this).text();
+if (title == 'Extension' || title == 'Hired')
+$(this).html('<input type="text" size="5" placeholder="" />');
+else
+$(this).html('<input type="text" size="15" placeholder="" />');
+});
+```
+
+接下来,定义表模型。这是提供所有表选项的地方,包括界面的滚动(而不是分页)、按照 dom 字符串提供的装饰、将数据导出为 CSV 和其他格式的能力,以及建立与服务器的 Ajax 连接。请注意,URL 是使用 Groovy GString 调用 Grails 的 **createLink()** 方法创建的,指向 **EmployeeController** 中的 **browserLister** 操作。同样有趣的是表格列的定义。此信息将发送到后端,后端查询数据库并返回相应的记录。
+
+```
+var table = $('#employee_dt').DataTable( {
+"scrollY": 500,
+"deferRender": true,
+"scroller": true,
+"dom": "Brtip",
+"buttons": [ 'copy', 'csv', 'excel', 'pdf', 'print' ],
+"processing": true,
+"serverSide": true,
+"ajax": {
+"url": "${createLink(controller: 'employee', action: 'browserLister')}",
+"type": "POST",
+},
+"columns": [
+{ "data": "surname" },
+{ "data": "givenNames" },
+{ "data": "position" },
+{ "data": "office" },
+{ "data": "extension" },
+{ "data": "hired" },
+{ "data": "salary" }
+]
+});
+```
+
+最后,监视过滤器列的变化,并用它们来应用过滤器。
+
+```
+table.columns().every(function() {
+var that = this;
+$('input', this.footer()).on('keyup change', function(e) {
+if (that.search() != this.value && 8 < e.keyCode && e.keyCode < 32)
+that.search(this.value).draw();
+});
+```
+
+这就是 JavaScript。下面这行补上 every() 调用的右括号,这样就完成了对视图代码的更改。
+
+```
+});
+```
+
+以下是此视图创建的 UI 的屏幕截图:
+
+![](https://opensource.com/sites/default/files/uploads/screen_4.png)
+
+这是另一个屏幕截图,显示了过滤和多列排序(寻找 position 中包含字符 “dev” 的员工,先按 office 排序,再按姓氏排序):
+
+![](https://opensource.com/sites/default/files/uploads/screen_5.png)
+
+这是另一个屏幕截图,显示单击 CSV 按钮时会发生什么:
+
+![](https://opensource.com/sites/default/files/uploads/screen6.png)
+
+最后,这是一个截图,显示在 LibreOffice 中打开的 CSV 数据:
+
+![](https://opensource.com/sites/default/files/uploads/screen7.png)
+
+好的,视图部分看起来非常简单;因此,控制器必须做所有繁重的工作,对吧?
让我们来看看… + +#### 控制器 browserLister 操作 + +回想一下,我们看到过这个字符串 + +``` +"${createLink(controller: 'employee', action: 'browserLister')}" +``` + +对于从 DataTables 模型中调用 Ajax 的 URL,是在 Grails 服务器上动态创建 HTML 链接,其 Grails 标记背后通过调用 [createLink()][17] 的方法实现的。这会最终产生一个指向 **EmployeeController** 的链接,位于: + +``` +embrow/grails-app/controllers/com/nuevaconsulting/embrow/EmployeeController.groovy +``` + +特别是控制器方法 **browserLister()**。我在代码中留了一些 print 语句,以便在运行时能够在终端看到中间结果。 + +``` +    def browserLister() { +        // Applies filters and sorting to return a list of desired employees +``` + +首先,打印出传递给 **browserLister()** 的参数。我通常使用此代码开始构建控制器方法,以便我完全清楚我的控制器正在接收什么。 + +``` +      println "employee browserLister params $params" +        println() +``` + +接下来,处理这些参数以使它们更加有用。首先,jQuery DataTables 参数,一个名为 **jqdtParams**的 Groovy 映射: + +``` +def jqdtParams = [:] +params.each { key, value -> + def keyFields = key.replace(']','').split(/\[/) + def table = jqdtParams + for (int f = 0; f < keyFields.size() - 1; f++) { + def keyField = keyFields[f] + if (!table.containsKey(keyField)) + table[keyField] = [:] + table = table[keyField] + } + table[keyFields[-1]] = value +} +println "employee dataTableParams $jqdtParams" +println() +``` + +接下来,列数据,一个名为 **columnMap**的 Groovy 映射: + +``` +def columnMap = jqdtParams.columns.collectEntries { k, v -> + def whereTerm = null + switch (v.data) { + case 'extension': + case 'hired': + case 'salary': + if (v.search.value ==~ /\d+(,\d+)*/) + whereTerm = v.search.value.split(',').collect { it as Integer } + break + default: + if (v.search.value ==~ /[A-Za-z0-9 ]+/) + whereTerm = "%${v.search.value}%" as String + break + } + [(v.data): [where: whereTerm]] +} +println "employee columnMap $columnMap" +println() +``` + +接下来,从 **columnMap** 中检索的所有列表,以及在视图中应如何排序这些列表,Groovy 列表分别称为 **allColumnList**和 **orderList**: + +``` +def allColumnList = columnMap.keySet() as List +println "employee allColumnList $allColumnList" +def orderList = jqdtParams.order.collect { k, v -> 
[allColumnList[v.column as Integer], v.dir] } +println "employee orderList $orderList" +``` + +我们将使用 Grails 的 Hibernate 标准实现来实际选择要显示的元素以及它们的排序和分页。标准要求过滤器关闭; 在大多数示例中,这是作为标准实例本身的创建的一部分给出的,但是在这里我们预先定义过滤器闭包。请注意,在这种情况下,“date hired” 过滤器的相对复杂的解释被视为一年并应用于建立日期范围,并使用 **createAlias** 以允许我们进入相关类别 Position 和 Office: + +``` +def filterer = { + createAlias 'position', 'p' + createAlias 'office', 'o' + + if (columnMap.surname.where) ilike 'surname', columnMap.surname.where + if (columnMap.givenNames.where) ilike 'givenNames', columnMap.givenNames.where + if (columnMap.position.where) ilike 'p.name', columnMap.position.where + if (columnMap.office.where) ilike 'o.name', columnMap.office.where + if (columnMap.extension.where) inList 'extension', columnMap.extension.where + if (columnMap.salary.where) inList 'salary', columnMap.salary.where + if (columnMap.hired.where) { + if (columnMap.hired.where.size() > 1) { + or { + columnMap.hired.where.each { + between 'hired', Date.parse('yyyy/MM/dd',"${it}/01/01" as String), + Date.parse('yyyy/MM/dd',"${it}/12/31" as String) + } + } + } else { + between 'hired', Date.parse('yyyy/MM/dd',"${columnMap.hired.where[0]}/01/01" as String), + Date.parse('yyyy/MM/dd',"${columnMap.hired.where[0]}/12/31" as String) + } + } +} +``` + +是时候应用上述内容了。第一步是获取分页代码所需的所有 Employee 实例的总数: + +``` +        def recordsTotal = Employee.count() +        println "employee recordsTotal $recordsTotal" +``` + +接下来,将过滤器应用于 Employee 实例以获取过滤结果的计数,该结果将始终小于或等于总数(同样,这是针对分页代码): + +``` +        def c = Employee.createCriteria() +        def recordsFiltered = c.count { +            filterer.delegate = delegate +            filterer() +        } +        println "employee recordsFiltered $recordsFiltered" + +``` + +获得这两个计数后,你还可以使用分页和排序信息获取实际过滤的实例。 + +``` + def orderer = Employee.withCriteria { + filterer.delegate = delegate + filterer() + orderList.each { oi -> + switch (oi[0]) { + case 'surname': order 'surname', oi[1]; break + case 'givenNames': order 'givenNames', oi[1]; break + 
case 'position': order 'p.name', oi[1]; break + case 'office': order 'o.name', oi[1]; break + case 'extension': order 'extension', oi[1]; break + case 'hired': order 'hired', oi[1]; break + case 'salary': order 'salary', oi[1]; break + } + } + maxResults (jqdtParams.length as Integer) + firstResult (jqdtParams.start as Integer) + } +``` + +要完全清楚,JTable 中的分页代码管理三个计数:数据集中的记录总数,应用过滤器后得到的数字,以及要在页面上显示的数字(显示是滚动还是分页)。 排序应用于所有过滤的记录,并且分页应用于那些过滤的记录的块以用于显示目的。 + +接下来,处理命令返回的结果,在每行中创建指向 Employee,Position 和 Office 实例的链接,以便用户可以单击这些链接以获取相关实例的所有详细信息: + +``` +        def dollarFormatter = new DecimalFormat('$##,###.##') +        def employees = orderer.collect { employee -> +            ['surname': "${employee.surname}", +                'givenNames': employee.givenNames, +                'position': "${employee.position?.name}", +                'office': "${employee.office?.name}", +                'extension': employee.extension, +                'hired': employee.hired.format('yyyy/MM/dd'), +                'salary': dollarFormatter.format(employee.salary)] +        } +``` + +最后,创建要返回的结果并将其作为 JSON 返回,这是 jQuery DataTables 所需要的。 + +``` + def result = [draw: jqdtParams.draw, recordsTotal: recordsTotal, recordsFiltered: recordsFiltered, data: employees] + render(result as JSON) + } +``` + +大功告成 +如果你熟悉 Grails,这可能看起来比你原先想象的要多,但这里没有火箭式的一步到位方法,只是很多分散的操作步骤。但是,如果你没有太多接触 Grails(或 Groovy),那么需要了解很多新东西 - 闭包,代理和构建器等等。 + +在那种情况下,从哪里开始? 
最好的地方是了解 Groovy 本身,尤其是 [Groovy closures][18] 和 [Groovy delegates and builders][19]。然后再去阅读上面关于 Grails 和 Hibernate 条件查询的建议阅读文章。 + +### 结语 + +jQuery DataTables 为 Grails 制作了很棒的表格数据浏览器。对视图进行编码并不是太棘手,但DataTables 文档中提供的 PHP 示例提供的功能仅到此位置。特别是,它们不是用 Grails 程序员编写的,也不包含探索使用引用其他类(实质上是查找表)的元素的更精细的细节。 + +我使用这种方法制作了几个数据浏览器,允许用户选择要查看和累积记录计数的列,或者只是浏览数据。即使在相对适度的 VPS 上的百万行表中,性能也很好。 + +一个警告:我偶然发现了 Grails 中暴露的各种 Hibernate 标准机制的一些问题(请参阅我的其他 GitHub 代码库),因此需要谨慎和实验。如果所有其他方法都失败了,另一种方法是动态构建 SQL 字符串并执行它们。在撰写本文时,我更喜欢使用 Grails 标准,除非我遇到杂乱的子查询,但这可能只反映了我在 Hibernate 中对子查询的相对缺乏经验。 + +我希望 Grails 程序员发现本文的有趣性。请随时在下面留下评论或建议。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/using-grails-jquery-and-datatables + +作者:[Chris Hermansen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[jrg](https://github.com/jrglinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen +[1]: https://grails.org/ +[2]: https://jquery.com/ +[3]: https://datatables.net/ +[4]: http://php.net/ +[5]: http://groovy-lang.org/ +[6]: https://github.com/monetschemist/grails-datatables +[7]: https://www.vim.org/ +[8]: http://openjdk.java.net/ +[9]: http://sdkman.io/ +[10]: http://guides.grails.org/creating-your-first-grails-app/guide/index.html +[11]: https://opensource.com/file/410061 +[12]: https://opensource.com/sites/default/files/uploads/screen_1.png "Embrow home screen" +[13]: https://opensource.com/file/410066 +[14]: https://opensource.com/sites/default/files/uploads/screen_2.png "Office list screenshot" +[15]: https://opensource.com/file/410071 +[16]: https://opensource.com/sites/default/files/uploads/screen3.png "Employee controller screenshot" +[17]: https://gsp.grails.org/latest/ref/Tags/createLink.html +[18]: http://groovy-lang.org/closures.html +[19]: http://groovy-lang.org/dsls.html diff --git 
a/translated/tech/20181002 4 open source invoicing tools for small businesses.md b/translated/tech/20181002 4 open source invoicing tools for small businesses.md new file mode 100644 index 0000000000..f333c318bc --- /dev/null +++ b/translated/tech/20181002 4 open source invoicing tools for small businesses.md @@ -0,0 +1,76 @@ +适用于小型企业的 4 个开源发票工具 +====== +用基于 web 的发票软件管理你的账单,完成收款,十分简单。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_lovemoneyglory2.png?itok=AvneLxFp) + +无论您开办小型企业的原因是什么,保持业务发展的关键是可以盈利。收款也就意味着向客户提供发票。 + +使用 LibreOffice Writer 或 LibreOffice Calc 提供发票很容易,但有时候你需要的不止这些。从更专业的角度看。一种跟进发票的方法。提醒你何时跟进你发出的发票。 + +在这里有各种各样的商业闭源发票管理工具。但是开源界的产品和相对应的闭源商业工具比起来,并不差,没准还更灵活。 + +让我们一起了解这 4 款基于 web 的开源发票工具,它们很适用于预算紧张的自由职业者和小型企业。2014 年,我在本文的[早期版本][1]中提到了其中两个工具。这 4 个工具用起来都很简单,并且你可以在任何设备上使用它们。 + +### Invoice Ninja + +我不是很喜欢 ninja 这个词。尽管如此,我喜欢 [Invoice Ninja][2]。非常喜欢。它将功能融合在一个简单的界面,其中包含一组功能,可让创建,管理和向客户、消费者发送发票。 + +您可以轻松配置多个客户端,跟进付款和未结清的发票,生成报价并用电子邮件发送发票。Invoice Ninja 与其竞争对手不同,它[集成][3]了超过 40 个流行支付方式,包括 PayPal,Stripe,WePay 以及 Apple Pay。 + +[下载][4]一个可以安装到自己服务器上的版本,或者获取一个[托管版][5]的账户,都可以使用 Invoice Ninja。它有免费版,也有每月 8 美元的收费版。 + +### InvoicePlane + +以前,有一个叫做 FusionInvoice 的漂亮的开源发票工具。有一天,FusionInvoice 的开发者将最新版本的代码设为了专有。这件事结局并不完美,因为 FusionInvoice 从 2018 年起再也不开源了。但这不代表这个工具完蛋了。它旧版本的代码依然是开源的,并且再次开发为包括 FusionInvoice 所有优点的新工具 [InvoicePlane][6]。 + +只需点几下鼠标即可制作发票。你可以根据需要将它们设为最简或者最详细。一切准备就绪时,你可以用电子邮件发送发票或者输出为 PDF 文件。你还可以为经常开发票的客户或消费者制作定期发票。 + +InvoicePlane 不仅可以生成或跟进发票。你还可以为任务或商品创制报价,跟进你销售的产品,查看确认付款,并在发票上生成报告。 + +[获取代码][7]并将其安装在你的 Web 服务器上。或者,如果你还没准备好安装它,可以[拿小样][8]试用以下。 + +### OpenSourceBilling + +[OpenSourceBilling][9] 被它的开发者称赞为“非常简单的计费软件”,当之无愧。它拥有最简洁的交互界面,配置使用起来轻而易举。 + +OpenSourceBilling 因它的商业智能仪表盘脱颖而出,它可以跟进跟进你当前和以前的发票,以及任何没有支付的款项。它以图表的形式整理信息,使之很容易阅读。 + +你可以在发票上配置很多信息。只需点几下鼠标按几下键盘,即可添加项目、税率、客户名称以及付款条件。OpenSourceBilling 将这些信息保存在你所有的发票当中,不管新发票还是旧发票。 + +与我们之前讨论过的工具一样,OpenSourceBilling 也有可以试用的[程序小样][10]。 + +### BambooInvoice + +当我是一个全职自由作家和顾问时,我通过 
[BambooInvoice][11] 向客户收费。当它最初的开发者停止维护此软件时,我有点失望。但是 BambooInvoice 又回来了,并一如既往的好。 + +BambooInvoice 的简洁很吸引我。它只做一件事并做的很好。你可以创建并修改发票,BambooInvoice 会根据客户和分配的发票编号负责跟进。它会告诉你哪些发票是开放的或过期的。你可以在程序中通过电子邮件发送发票或者导出为 PDF 文件。你还可以生成报告密切关注收入。 + +要[安装][12]并使用 BambooInvoice,你需要一个运行 PHP 5 或更高版本的 web 服务器,并运行 MySQL 数据库。机会就在你面前,所以你很乐意去用它。 + +你又最喜欢的开源发票工具吗?请自由分享评论。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/open-source-invoicing-tools + +作者:[Scott Nesbitt][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[fuowang](https://github.com/fuowang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[1]: https://opensource.com/business/14/9/4-open-source-invoice-tools +[2]: https://www.invoiceninja.org/ +[3]: https://www.invoiceninja.com/integrations/ +[4]: https://github.com/invoiceninja/invoiceninja +[5]: https://www.invoiceninja.com/invoicing-pricing-plans/ +[6]: https://invoiceplane.com/ +[7]: https://wiki.invoiceplane.com/en/1.5/getting-started/installation +[8]: https://demo.invoiceplane.com/ +[9]: http://www.opensourcebilling.org/ +[10]: http://demo.opensourcebilling.org/ +[11]: https://www.bambooinvoice.net/ +[12]: https://sourceforge.net/projects/bambooinvoice/ diff --git a/translated/tech/20181010 How To List The Enabled-Active Repositories In Linux.md b/translated/tech/20181010 How To List The Enabled-Active Repositories In Linux.md deleted file mode 100644 index d78084e5e0..0000000000 --- a/translated/tech/20181010 How To List The Enabled-Active Repositories In Linux.md +++ /dev/null @@ -1,289 +0,0 @@ -列出在 Linux 上已开启/激活的仓库 -====== -这里有很多方法可以列出在 Linux 已开启的仓库。 - -我们将在下面展示给你列出已激活仓库的简便方法。 - -这有助于你知晓你的系统上都开启了哪些仓库。 - -一旦你掌握了这些信息,你就可以添加任何之前还没有准备开启的仓库了。 - -举个例子,如果你想开启 `epel repository` ,你需要先检查 epel repository 是否已经开启了。这篇教程将会帮助你做这件事情。 - -### 什么是仓库? 
- -存储特定程序软件包的中枢位置就是一个软件仓库。 - -所有的 Linux 发行版都开发了他们自己的仓库,而且允许用户下载并安装这些软件包到他们的机器上。 - -每个供应商都提供了一套包管理工具,用以管理他们的仓库,比如搜索、安装、更新、升级、移除等等。 - -大多数 Linux 发行版都作为免费软件,除了 RHEL 和 SUSE。接收他们的仓库你需要先购买订阅。 - -**建议阅读:** -**(#)** [在 Linux 上,如何通过 DNF/YUM 设置管理命令添加、开启、关闭一个仓库][1] -**(#)** [在 Linux 上如何以尺寸列出已安装的包][2] -**(#)** [在 Linux 上如何列出升级的包][3] -**(#)** [在 Linux 上如何查看一个特定包已安装/已升级/已更新/已移除/已清除的数据][4] -**(#)** [在 Linux 上如何查看一个包的详细信息][5] -**(#)** [在你的 Linux 发行版上如何查看一个包是否可用][6] -**(#)** [在 Linux 如何列出可用的软件包组][7] -**(#)** [Newbies corner - 一个图形化的 Linux 包管理的前端工具][8] -**(#)** [Linux 专家须知,命令行包管理 & 使用列表][9] - -### 在 RHEL/CentOS上列出已开启的库 - -RHEL 和 CentOS 系统使用的是 RPM 包管理,所以我们可以使用 `Yum 包管理` 查看这些信息。 - -YUM 代表的是 `Yellowdog Updater,Modified`,它是一个包管理的开源前端,作用在基于 RPM 的系统上,例如 RHEL 和 CentOS。 - -YUM 是获取、安装、删除、查询和管理来自发行版仓库和其他第三方库的 RPM 包的主要工具。 - -**建议阅读:Suggested Read :** [在 RHEL/CentOS 系统上用 YUM 命令管理包][10] - -基于 RHEL 的系统主要提供以下三个主要的仓库。这些仓库是默认开启的。 - - * **`base:`** 它包含了所有的核心包和基础包。 - * **`extras:`** 它向 CentOS 提供不破坏上游兼容性或更新基本组件的额外功能。这是一个上游仓库,还有额外的 CentOS 包。 - * **`updates:`** 它提供了 bug 修复包、安全性包和增强包。 - - - -``` -# yum repolist -或者 -# yum repolist enabled - -Loaded plugins: fastestmirror -Determining fastest mirrors -epel: ewr.edge.kernel.org -repo id repo name status -!base/7/x86_64 CentOS-7 - Base 9,911 -!epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 12,687 -!extras/7/x86_64 CentOS-7 - Extras 403 -!updates/7/x86_64 CentOS-7 - Updates 1,348 -repolist: 24,349 - -``` - -### 如何列出 Fedora 上已开启的包 - -DNF 代表 Dandified yum。我们可以说 DNF 是下一代的 yum 包管理,使用了 hawkey/libsolv 作为后端。自从 Fedroa 18 开始,Aleš Kozumplík 就开始研究 DNF 最终在 Fedora 22 上实现。 - -Fedora 22 及之后的系统上都使用 Dnf 安装、升级、搜索和移除包。它可以自动解决依赖问题,并使包的安装过程平顺没有任何麻烦。 - -因为 Yum 许多未解决的问题,现在 Yum 已经被 DNF 所替代。你问为什么?他没有给 Yum 打补丁。Aleš Kozumplík 解释说修补在技术上太困难了,YUM 团队无法立即承受这些变更,还有其他的问题,YUM 是 56k 行,而 DNF 是 29k 行。因此,除了 fork 之外,别无选择。 - -**建议阅读:** [在 Fedora 上使用 DNF(Fork 自 YUM)管理软件][11] - -Fedora 主要提供下面两个主仓库。这些库将被默认开启。 - - * **`fedora:`** 它包括所有的核心包和基础包。 - * **`updates:`** 它提供了来自稳定发行版的 bug 
修复包、安全性包和增强包 - - - -``` -# dnf repolist -或者 -# dnf repolist enabled - -Last metadata expiration check: 0:02:56 ago on Wed 10 Oct 2018 06:12:22 PM IST. -repo id repo name status -docker-ce-stable Docker CE Stable - x86_64 6 -*fedora Fedora 26 - x86_64 53,912 -home_mhogomchungu mhogomchungu's Home Project (Fedora_25) 19 -home_moritzmolch_gencfsm Gnome Encfs Manager (Fedora_25) 5 -mystro256-gnome-redshift Copr repo for gnome-redshift owned by mystro256 6 -nodesource Node.js Packages for Fedora Linux 26 - x86_64 83 -rabiny-albert Copr repo for albert owned by rabiny 3 -*rpmfusion-free RPM Fusion for Fedora 26 - Free 536 -*rpmfusion-free-updates RPM Fusion for Fedora 26 - Free - Updates 278 -*rpmfusion-nonfree RPM Fusion for Fedora 26 - Nonfree 202 -*rpmfusion-nonfree-updates RPM Fusion for Fedora 26 - Nonfree - Updates 95 -*updates Fedora 26 - x86_64 - Updates 14,595 - -``` - -### 如何列出 Debian/Ubuntu 上已开启的仓库 - -基于 Debian 的系统使用的是 APT/APT-GET 包管理,因此我们可以使用 `APT/APT-GET 包管理` 去获取更多的信息。 - -APT 代表 Advanced Packaging Tool,它取代了 apt-get,就像 DNF 取代 Yum一样。 它具有丰富的命令行工具,在一个命令(APT)中包含了所有,如 apt-cache,apt-search,dpkg,apt-cdrom,apt-config,apt-key等。 还有其他几个独特的功能。 例如,我们可以通过 APT 轻松安装 .dpkg 软件包,而我们无法通过 Apt-Get 获得和包含在 APT 命令中类似的更多功能。 由于未能解决的 apt-get 问题,用 APT 取代了 APT-GET 的锁定。 - -APT_GET 代表 Advanced Packaging Tool。apt-get 是一个强大的命令行工具,它用以自动下载和安装新的软件包、升级已存在的软件包、更新包索引列表、还有升级整个基于 Debian 的系统。 - -``` -# apt-cache policy -Package files: - 100 /var/lib/dpkg/status - release a=now - 500 http://ppa.launchpad.net/peek-developers/stable/ubuntu artful/main amd64 Packages - release v=17.10,o=LP-PPA-peek-developers-stable,a=artful,n=artful,l=Peek stable releases,c=main,b=amd64 - origin ppa.launchpad.net - 500 http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful/main amd64 Packages - release v=17.10,o=LP-PPA-notepadqq-team-notepadqq,a=artful,n=artful,l=Notepadqq,c=main,b=amd64 - origin ppa.launchpad.net - 500 http://dl.google.com/linux/chrome/deb stable/main amd64 Packages - release v=1.0,o=Google, 
Inc.,a=stable,n=stable,l=Google,c=main,b=amd64 - origin dl.google.com - 500 https://download.docker.com/linux/ubuntu artful/stable amd64 Packages - release o=Docker,a=artful,l=Docker CE,c=stable,b=amd64 - origin download.docker.com - 500 http://security.ubuntu.com/ubuntu artful-security/multiverse amd64 Packages - release v=17.10,o=Ubuntu,a=artful-security,n=artful,l=Ubuntu,c=multiverse,b=amd64 - origin security.ubuntu.com - 500 http://security.ubuntu.com/ubuntu artful-security/universe amd64 Packages - release v=17.10,o=Ubuntu,a=artful-security,n=artful,l=Ubuntu,c=universe,b=amd64 - origin security.ubuntu.com - 500 http://security.ubuntu.com/ubuntu artful-security/restricted i386 Packages - release v=17.10,o=Ubuntu,a=artful-security,n=artful,l=Ubuntu,c=restricted,b=i386 - origin security.ubuntu.com -. -. - origin in.archive.ubuntu.com - 500 http://in.archive.ubuntu.com/ubuntu artful/restricted amd64 Packages - release v=17.10,o=Ubuntu,a=artful,n=artful,l=Ubuntu,c=restricted,b=amd64 - origin in.archive.ubuntu.com - 500 http://in.archive.ubuntu.com/ubuntu artful/main i386 Packages - release v=17.10,o=Ubuntu,a=artful,n=artful,l=Ubuntu,c=main,b=i386 - origin in.archive.ubuntu.com - 500 http://in.archive.ubuntu.com/ubuntu artful/main amd64 Packages - release v=17.10,o=Ubuntu,a=artful,n=artful,l=Ubuntu,c=main,b=amd64 - origin in.archive.ubuntu.com -Pinned packages: - -``` - -### 如何在 openSUSE 上列出已开启的仓库 - -openSUSE 使用 zypper 包管理,因此我们可以使用 zypper 包管理获得更多信息。 - -Zypper 是 suse 和 openSUSE 发行版的命令行包管理。它用于安装、更新、搜索、移除包和管理仓库,执行各种查询等。Zypper 以 libzypp(ZYpp 系统管理库)作为后端。 - -**建议阅读:** [在 openSUSE 和 suse 系统上使用 Zypper 命令管理包][12] - -``` -# zypper repos - -# | Alias | Name | Enabled | GPG Check | Refresh ---+-----------------------+-----------------------------------------------------+---------+-----------+-------- -1 | packman-repository | packman-repository | Yes | (r ) Yes | Yes -2 | google-chrome | google-chrome | Yes | (r ) Yes | Yes -3 | home_lazka0_ql-stable | Stable Quod Libet / Ex 
Falso Builds (openSUSE_42.1) | Yes | (r ) Yes | No -4 | repo-non-oss | openSUSE-leap/42.1-Non-Oss | Yes | (r ) Yes | Yes -5 | repo-oss | openSUSE-leap/42.1-Oss | Yes | (r ) Yes | Yes -6 | repo-update | openSUSE-42.1-Update | Yes | (r ) Yes | Yes -7 | repo-update-non-oss | openSUSE-42.1-Update-Non-Oss | Yes | (r ) Yes | Yes - -``` - -以 URI 列出仓库。 - -``` -# zypper lr -u - -# | Alias | Name | Enabled | GPG Check | Refresh | URI ---+-----------------------+-----------------------------------------------------+---------+-----------+---------+--------------------------------------------------------------------------------- -1 | packman-repository | packman-repository | Yes | (r ) Yes | Yes | http://ftp.gwdg.de/pub/linux/packman/suse/openSUSE_Leap_42.1/ -2 | google-chrome | google-chrome | Yes | (r ) Yes | Yes | http://dl.google.com/linux/chrome/rpm/stable/x86_64 -3 | home_lazka0_ql-stable | Stable Quod Libet / Ex Falso Builds (openSUSE_42.1) | Yes | (r ) Yes | No | http://download.opensuse.org/repositories/home:/lazka0:/ql-stable/openSUSE_42.1/ -4 | repo-non-oss | openSUSE-leap/42.1-Non-Oss | Yes | (r ) Yes | Yes | http://download.opensuse.org/distribution/leap/42.1/repo/non-oss/ -5 | repo-oss | openSUSE-leap/42.1-Oss | Yes | (r ) Yes | Yes | http://download.opensuse.org/distribution/leap/42.1/repo/oss/ -6 | repo-update | openSUSE-42.1-Update | Yes | (r ) Yes | Yes | http://download.opensuse.org/update/leap/42.1/oss/ -7 | repo-update-non-oss | openSUSE-42.1-Update-Non-Oss | Yes | (r ) Yes | Yes | http://download.opensuse.org/update/leap/42.1/non-oss/ - -``` - -通过优先级列出仓库。 - -``` -# zypper lr -p - -# | Alias | Name | Enabled | GPG Check | Refresh | Priority ---+-----------------------+-----------------------------------------------------+---------+-----------+---------+--------- -1 | packman-repository | packman-repository | Yes | (r ) Yes | Yes | 99 -2 | google-chrome | google-chrome | Yes | (r ) Yes | Yes | 99 -3 | home_lazka0_ql-stable | Stable Quod Libet / Ex Falso 
Builds (openSUSE_42.1) | Yes | (r ) Yes | No | 99 -4 | repo-non-oss | openSUSE-leap/42.1-Non-Oss | Yes | (r ) Yes | Yes | 99 -5 | repo-oss | openSUSE-leap/42.1-Oss | Yes | (r ) Yes | Yes | 99 -6 | repo-update | openSUSE-42.1-Update | Yes | (r ) Yes | Yes | 99 -7 | repo-update-non-oss | openSUSE-42.1-Update-Non-Oss | Yes | (r ) Yes | Yes | 99 - -``` - -### 如何列出 Arch Linux 上已开启的仓库 - -基于 Arch Linux 的系统使用 pacman 包管理,因此我们可以使用 pacman 包管理获取这些信息。 - -pacman 代表 package manager utility。pacman 是一个命令行实用程序,用以安装、构建、移除和管理 Arch Linux 包。pacman 使用 libalpm (Arch Linux包管理库)作为后端去进行这些操作。 - -**建议阅读:** [在基于 Arch Linux的系统上使用 Pacman命令管理包][13] - -``` -# pacman -Syy -:: Synchronizing package databases... - core 132.6 KiB 1524K/s 00:00 [############################################] 100% - extra 1859.0 KiB 750K/s 00:02 [############################################] 100% - community 3.5 MiB 149K/s 00:24 [############################################] 100% - multilib 182.7 KiB 1363K/s 00:00 [############################################] 100% - -``` - -### 如何使用 INXI Utility 列出 Linux 上已开启的仓库 - -inix 是 Linux 上检查硬件信息非常有用的工具,还提供很多的选项去获取 Linux 上的所有硬件信息,我从未在 Linux 上发现其他有如此效用的程序。它由 locsmif fork 自 ingenius infobash。 - -inix 是一个可以快速显示硬件信息、CPU、硬盘、Xorg、桌面、内核、GCC 版本、进程、内存使用和很多其他有用信息的程序,还使用于论坛技术支持和调试工具上。 - -这个实用程序将会显示所有发行版仓库的数据信息,例如 RHEL、CentOS、Fedora、Debain、Ubuntu、LinuxMint、ArchLinux、openSUSE、Manjaro等。 - -**建议阅读:** [inxi – 一个在 Linux 上检查硬件信息的好工具][14] - -``` -# inxi -r -Repos: Active apt sources in file: /etc/apt/sources.list - deb http://in.archive.ubuntu.com/ubuntu/ yakkety main restricted - deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates main restricted - deb http://in.archive.ubuntu.com/ubuntu/ yakkety universe - deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates universe - deb http://in.archive.ubuntu.com/ubuntu/ yakkety multiverse - deb http://in.archive.ubuntu.com/ubuntu/ yakkety-updates multiverse - deb http://in.archive.ubuntu.com/ubuntu/ yakkety-backports main restricted universe 
multiverse - deb http://security.ubuntu.com/ubuntu yakkety-security main restricted - deb http://security.ubuntu.com/ubuntu yakkety-security universe - deb http://security.ubuntu.com/ubuntu yakkety-security multiverse - Active apt sources in file: /etc/apt/sources.list.d/arc-theme.list - deb http://download.opensuse.org/repositories/home:/Horst3180/xUbuntu_16.04/ / - Active apt sources in file: /etc/apt/sources.list.d/snwh-ubuntu-pulp-yakkety.list - deb http://ppa.launchpad.net/snwh/pulp/ubuntu yakkety main - -``` - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/how-to-list-the-enabled-active-repositories-in-linux/ - -作者:[Prakash Subramanian][a] -选题:[lujun9972][b] -译者:[dianbanjiu](https://github.com/dianbanjiu) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.2daygeek.com/author/prakash/ -[b]: https://github.com/lujun9972 -[1]: https://www.2daygeek.com/how-to-add-enable-disable-a-repository-dnf-yum-config-manager-on-linux/ -[2]: https://www.2daygeek.com/how-to-list-installed-packages-by-size-largest-on-linux/ -[3]: https://www.2daygeek.com/how-to-view-list-the-available-packages-updates-in-linux/ -[4]: https://www.2daygeek.com/how-to-view-a-particular-package-installed-updated-upgraded-removed-erased-date-on-linux/ -[5]: https://www.2daygeek.com/how-to-view-detailed-information-about-a-package-in-linux/ -[6]: https://www.2daygeek.com/how-to-search-if-a-package-is-available-on-your-linux-distribution-or-not/ -[7]: https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/ -[8]: https://www.2daygeek.com/list-of-graphical-frontend-tool-for-linux-package-manager/ -[9]: https://www.2daygeek.com/list-of-command-line-package-manager-for-linux/ -[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ -[11]: 
https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
-[12]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
-[13]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
-[14]: https://www.2daygeek.com/inxi-system-hardware-information-on-linux/
diff --git a/translated/tech/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md b/translated/tech/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md
new file mode 100644
index 0000000000..8184021df9
--- /dev/null
+++ b/translated/tech/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md
@@ -0,0 +1,485 @@
+如何使用 chkconfig 和 systemctl 命令启用或禁用 Linux 服务
+======
+
+对于 Linux 管理员来说,这是一个重要的话题,每个人都必须了解并练习,才能更高效地使用它们。
+
+
+
+在 Linux 中,无论何时,当你安装任何带有服务和守护进程的软件包时,系统默认会把这些进程添加到 “init & systemd” 脚本中,不过此时它们并没有被启动。
+
+
+
+我们需要手动地开启或者关闭那些服务。Linux 中有三个著名的且一直在被使用的 init 系统。
+
+
+
+### 什么是 init 系统?
+
+
+
+在以 Linux/Unix 为基础的操作系统上,init(初始化的简称)是内核引导系统启动过程中第一个启动的进程。
+
+
+
+init 的进程 ID(PID)是 1,除非系统关机,否则它将会一直在后台运行。
+
+
+
+Init 首先根据 `/etc/inittab` 文件决定 Linux 的运行级别,然后根据运行级别在后台启动所有其他进程和应用程序。
+
+
+
+BIOS、MBR、GRUB 和内核程序在启动 init 之前,就作为 Linux 引导程序的一部分开始工作了。
+
+
+
+下面是 Linux 中可以使用的运行级别(从 0 到 6 总共七个运行级别):
+
+
+
+ * **`0:`** 关机
+
+ * **`1:`** 单用户模式
+
+ * **`2:`** 多用户模式(没有 NFS)
+
+ * **`3:`** 完全的多用户模式
+
+ * **`4:`** 系统未使用
+
+ * **`5:`** 图形界面模式
+
+ * **`6:`** 重启
+
+
+
+
+
+下面是 Linux 系统中最常用的三个 init 系统:
+
+
+
+ * System V (Sys V)
+
+ * Upstart
+
+ * systemd
+
+
+
+
+
+### 什么是 System V (Sys V)?
+
+
+
+System V(Sys V)是类 Unix 系统最早的传统 init 系统之一。init 是内核引导系统启动过程中第一个启动的程序,它是所有程序的父进程。
+
+
+
+大部分 Linux 发行版最开始使用的是叫作 System V(Sys V)的传统 init 系统。在过去的几年中,已经发布了好几个 init 系统,用于解决标准版本中的设计限制,例如:launchd、Service Management Facility、systemd 和 Upstart。
+
+
+
+与传统的 SysV init 系统相比,systemd 已经被几个主要的 Linux 发行版所采用。
+
+
+
+### 什么是 Upstart?
+ + + +Upstart 是一个基于事件的/sbin/init守护进程的替代品,它在系统启动过程中处理任务和服务的启动,在系统运行期间监视它们,在系统关机的时候关闭它们。 + + + +它最初是为Ubuntu而设计,但是它也能够完美的部署在其他所有Linux系统中,用来代替古老的System-V。 + + + +Upstart被用于Ubuntu 从 9.10 到 Ubuntu 14.10和基于RHEL 6的系统,之后它被systemd取代。 + + + +### 什么是 systemd? + + + +Systemd是一个新的init系统和系统管理器, 和传统的SysV相比,它可以用于所有主要的Linux发行版。 + + + +systemd 兼容 SysV 和 LSB init脚本。 它可以直接替代Sys V init系统。systemd是被内核启动的第一支程序,它的PID 是1。 + + + +systemd是所有程序的父进程,Fedora 15 是第一个用systemd取代upstart的发行版。systemctl用于命令行,它是管理systemd的守护进程/服务的主要工具,例如:(开启,重启,关闭,启用,禁用,重载和状态) + + + +systemd 使用.service 文件而不是bash脚本 (SysVinit 使用的). systemd将所有守护进程添加到cgroups中排序,你可以通过浏览`/cgroup/systemd` 文件查看系统等级。 + + + +### 如何使用chkconfig命令启用或禁用引导服务? + + + +chkconfig实用程序是一个命令行工具,允许你在指定运行级别下启动所选服务,以及列出所有可用服务及其当前设置。 + + + +此外,它还允许我们从启动中启用或禁用服务。前提是你有超级管理员权限(root或者sudo)运行这个命令。 + + + +所有的服务脚本位于 `/etc/rd.d/init.d`文件中 + + + +### 如何列出运行级别中所有的服务 + + + + `--list` 参数会展示所有的服务及其当前状态 (启用或禁用服务的运行级别) + + + +``` + + # chkconfig --list + + NetworkManager 0:off 1:off 2:on 3:on 4:on 5:on 6:off + + abrt-ccpp 0:off 1:off 2:off 3:on 4:off 5:on 6:off + + abrtd 0:off 1:off 2:off 3:on 4:off 5:on 6:off + + acpid 0:off 1:off 2:on 3:on 4:on 5:on 6:off + + atd 0:off 1:off 2:off 3:on 4:on 5:on 6:off + + auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off + + . + + . 
+ +``` + + + +### 如何查看指定服务的状态 + + + +如果你想查看运行级别下某个服务的状态,你可以使用下面的格式匹配出需要的服务。 + + + +比如说我想查看运行级别中`auditd`服务的状态 + + + +``` + + # chkconfig --list| grep auditd + + auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off + +``` + + + +### 如何在指定运行级别中启用服务 + + + +使用`--level`参数启用指定运行级别下的某个服务,下面展示如何在运行级别3和运行级别5下启用 `httpd` 服务。 + + + +``` + + # chkconfig --level 35 httpd on + +``` + + + +### 如何在指定运行级别下禁用服务 + + + +同样使用 `--level`参数禁用指定运行级别下的服务,下面展示的是在运行级别3和运行级别5中禁用`httpd`服务。 + + + +``` + + # chkconfig --level 35 httpd off + +``` + + + +### 如何将一个新服务添加到启动列表中 + + + +`-–add`参数允许我们添加任何信服务到启动列表中, 默认情况下,新添加的服务会在运行级别2,3,4,5下自动开启。 + + + +``` + + # chkconfig --add nagios + +``` + + + +### 如何从启动列表中删除服务 + + + +可以使用 `--del` 参数从启动列表中删除服务,下面展示的事如何从启动列表中删除Nagios服务。 + + + +``` + + # chkconfig --del nagios + +``` + + + +### 如何使用systemctl命令启用或禁用开机自启服务? + + + +systemctl用于命令行,它是一个基础工具用来管理systemd的守护进程/服务,例如:(开启,重启,关闭,启用,禁用,重载和状态) + + + +所有服务创建的unit文件位与`/etc/systemd/system/`. + + + +### 如何列出全部的服务 + + + +使用下面的命令列出全部的服务(包括启用的和禁用的) + + + +``` + + # systemctl list-unit-files --type=service + + UNIT FILE STATE + + arp-ethers.service disabled + + auditd.service enabled + + [email protected] enabled + + blk-availability.service disabled + + brandbot.service static + + [email protected] static + + chrony-wait.service disabled + + chronyd.service enabled + + cloud-config.service enabled + + cloud-final.service enabled + + cloud-init-local.service enabled + + cloud-init.service enabled + + console-getty.service disabled + + console-shell.service disabled + + [email protected] static + + cpupower.service disabled + + crond.service enabled + + . + + . + + 150 unit files listed. 
+ +``` + + + +使用下面的格式通过正则表达式匹配出你想要查看的服务的当前状态。下面是使用systemctl命令查看`httpd` 服务的状态。 + + + +``` + + # systemctl list-unit-files --type=service | grep httpd + + httpd.service disabled + +``` + + + +### 如何让指定的服务开机自启 + + + +使用下面格式的systemctl命令启用一个指定的服务。启用服务将会创建一个符号链接,如下可见 + + + +``` + + # systemctl enable httpd + + Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service. + +``` + + + +运行下列命令再次确认服务是否被启用。 + + + +``` + + # systemctl is-enabled httpd + + enabled + +``` + + + +### 如何禁用指定的服务 + + + +运行下面的命令禁用服务将会移除你启用服务时所创建的 + + + +``` + + # systemctl disable httpd + + Removed symlink /etc/systemd/system/multi-user.target.wants/httpd.service. + +``` + + + +运行下面的命令再次确认服务是否被禁用 + + + +``` + + # systemctl is-enabled httpd + + disabled + +``` + + + +### 如何查看系统当前的运行级别 + + + +使用systemctl命令确认你系统当前的运行级别,'运行级'别仍然由systemd管理,不过,运行级别对于systemd来说是一个历史遗留的概念。所以我建议你全部使用systemctl命令。 + + + +我们当前处于`运行级别3`, 下面显示的是`multi-user.target`。 + + + +``` + + # systemctl list-units --type=target + + UNIT LOAD ACTIVE SUB DESCRIPTION + + basic.target loaded active active Basic System + + cloud-config.target loaded active active Cloud-config availability + + cryptsetup.target loaded active active Local Encrypted Volumes + + getty.target loaded active active Login Prompts + + local-fs-pre.target loaded active active Local File Systems (Pre) + + local-fs.target loaded active active Local File Systems + + multi-user.target loaded active active Multi-User System + + network-online.target loaded active active Network is Online + + network-pre.target loaded active active Network (Pre) + + network.target loaded active active Network + + paths.target loaded active active Paths + + remote-fs.target loaded active active Remote File Systems + + slices.target loaded active active Slices + + sockets.target loaded active active Sockets + + swap.target loaded active active Swap + + sysinit.target loaded active active System Initialization + + timers.target loaded active 
active Timers
+
+```
+
+--------------------------------------------------------------------------------
+
+
+
+via: https://www.2daygeek.com/how-to-enable-or-disable-services-on-boot-in-linux-using-chkconfig-and-systemctl-command/
+
+
+
+作者:[Prakash Subramanian][a]
+
+选题:[lujun9972][b]
+
+译者:[way-ww](https://github.com/way-ww)
+
+校对:[校对者ID](https://github.com/校对者ID)
+
+
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+
+
+[a]: https://www.2daygeek.com/author/prakash/
+
+[b]: https://github.com/lujun9972
+
diff --git a/translated/tech/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md b/translated/tech/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md
new file mode 100644
index 0000000000..4f01447600
--- /dev/null
+++ b/translated/tech/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md
@@ -0,0 +1,95 @@
+Kali Linux:在开始使用之前你必须知道的 – FOSS Post
+======
+
+![](https://i1.wp.com/fosspost.org/wp-content/uploads/2018/10/kali-linux.png?fit=1237%2C527&ssl=1)
+
+Kali Linux 在渗透测试和白帽子方面,是业界领先的 Linux 发行版。默认情况下,该发行版附带了大量的黑客和渗透测试工具与软件,并且在全世界都得到了广泛认可。即使在那些甚至可能不知道 Linux 是什么的 Windows 用户中也是如此。
+
+由于后者的原因,许多人都试图单独使用 Kali Linux,尽管他们甚至不了解 Linux 系统的基础知识。原因可能各不相同,有的为了玩乐,有的是为了取悦女友而伪装成黑客,有的仅仅是试图破解邻居的 WiFi 网络以免费上网。如果你打算使用 Kali Linux,所有的这些都是不好的事情。
+
+在计划使用 Kali Linux 之前,你应该了解一些提示。
+
+### Kali Linux 不适合初学者
+
+![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-000.png?resize=850%2C478&ssl=1)
+Kali Linux 默认 GNOME 桌面
+
+如果你是几个月前刚开始使用 Linux 的人,或者你认为自己的知识水平低于平均水平,那么 Kali Linux 就不适合你。如果你打算问“如何在 Kali 上安装 Steam?如何让我的打印机在 Kali 上工作?如何解决 Kali 上的 APT 源错误?”这些问题,那么 Kali Linux 并不适合你。
+
+Kali Linux 主要面向想要运行渗透测试的专家,或想要学习白帽子技术和数字取证的人。但即使你属于后者,普通的 Kali Linux 用户在日常使用时也会遇到很多麻烦,而且还需要以非常谨慎的方式使用工具和软件,而不仅仅是“让我们安装并运行一切”。每一个工具必须小心使用,你安装的每一个软件都必须仔细检查。
+
+**建议阅读:** [Linux 系统的组件是什么?][1]
+
+以一个普通 Linux 用户的水平,是无法正常驾驭它的。一个更好的方法是花几周时间学习 Linux 及其守护进程、服务、软件、发行版及其工作方式,然后观看几十个关于白帽子攻击的视频和课程,然后再尝试使用
Kali 来应用你学习到的东西。
+
+### 它会让你被黑客攻击
+
+![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-001.png?resize=850%2C478&ssl=1)
+Kali Linux 入侵和测试工具
+
+在普通的 Linux 系统中,普通用户有一个账户,而 root 用户也有一个单独的账号。但在 Kali Linux 中并非如此。Kali Linux 默认使用 root 账户,不提供普通用户账户。这是因为 Kali 中几乎所有可用的安全工具都需要 root 权限,并且为了避免每分钟都要求你输入 root 密码,所以这样设计。
+
+当然,你可以简单地创建一个普通用户账户并开始使用它。但是,这种方式仍然不推荐,因为这不是 Kali Linux 系统设计的工作方式。这样,你在使用程序、打开端口、调试软件时会遇到很多问题,你会纳闷为什么这个东西不起作用,最终却发现它是一个奇怪的权限错误。另外,每次在系统上做任何事情时,你都会因为运行工具时反复要求输入密码而烦恼。
+
+现在,由于你被迫以 root 用户身份使用它,因此你在系统上运行的所有软件也将以 root 权限运行。如果你不知道自己在做什么,那么这很糟糕,因为如果 Firefox 中存在漏洞,并且你访问了一个受感染的网站,那么黑客能够在你的 PC 上获得全部 root 权限并入侵你。如果你使用的是普通用户账户,则会受到限制。此外,你安装和使用的某些工具可能会在你不知情的情况下打开端口并泄露信息,因此如果你不是非常小心,人们可能会以你尝试入侵他们的方式入侵你。
+
+如果你访问过一些与 Kali Linux 相关的 Facebook 群组,你会发现这些群组中几乎有四分之一的帖子是人们在寻求帮助,因为有人入侵了他们。
+
+### 它可以让你入狱
+
+Kali Linux 仅提供软件。那么,如何使用它们完全是你自己的责任。
+
+在世界上大多数发达国家,使用针对公共 WiFi 网络或其他设备的渗透测试工具很容易让你入狱。现在不要以为你使用了 Kali 就无法被跟踪,许多系统都配置了复杂的日志记录设备,可以轻松地跟踪试图监听或入侵其网络的人,你可能无意间成为其中的一个,那么它会毁掉你的生活。
+
+永远不要将 Kali Linux 系统用于那些不属于你、也没有明确授权你进行入侵测试的设备或网络。如果你说你不知道你在做什么,在法庭上这不会被当作借口来接受。
+
+### 修改了内核和软件
+
+Kali [基于][2] Debian(“测试”分支,这意味着 Kali Linux 使用滚动发布模型),因此它使用了 Debian 的大部分软件体系结构,你会发现 Kali Linux 中的大部分软件跟 Debian 中的没什么区别。
+
+但是,Kali 修改了一些包来加强安全性并修复了一些可能的漏洞。例如,Kali 使用的 Linux 内核被打了补丁,允许在各种设备上进行无线注入。这些补丁通常在普通内核中不可用。此外,Kali Linux 不依赖于 Debian 服务器和镜像,而是通过自己的服务器构建软件包。以下是最新版本中的默认软件源:
+```
+ deb http://http.kali.org/kali kali-rolling main contrib non-free
+ deb-src http://http.kali.org/kali kali-rolling main contrib non-free
+```
+
+这就是为什么对于某些特定的软件,当你在 Kali Linux 和 Fedora 中使用同一个程序时,会发现不同的行为。你可以从 [git.kali.org][3] 中查看 Kali Linux 软件的完整列表。你还可以找到我们在 Kali Linux(GNOME)上[自己生成的已安装包列表][4]。
+
+更重要的是,Kali Linux 官方文档极力建议不要添加任何其他第三方软件仓库,因为 Kali Linux 是一个滚动发行版,并且依赖于 Debian 测试分支,由于依赖关系冲突和包钩子,所以你很可能只是添加一个新的仓库源就会破坏系统。
+
+### 不要安装 Kali Linux
+
+![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-002.png?resize=750%2C504&ssl=1)
+
+使用 Kali Linux 在 fosspost.org 上运行 wpscan
+
+我在极少数情况下使用 Kali Linux
来测试我部署的软件和服务器。但是,我永远不敢安装它并将其用作主系统。
+
+如果你要将其用作主系统,那么你必须保留自己的个人文件、密码、数据以及系统上的所有内容。你还需要安装大量日常使用的软件,来让日常使用更方便。但正如我们上面提到的,使用 Kali Linux 是非常危险的,应该非常小心地进行,如果你被入侵了,你将丢失所有数据,并且可能会暴露给更多的人。如果你在做一些不合法的事情,你的个人信息也可用于跟踪你。如果你不小心使用这些工具,那么你甚至可能会毁掉自己的数据。
+
+即使是专业的白帽子也不建议将其作为主系统安装,而是通过 USB 使用它来进行渗透测试工作,然后再回到普通的 Linux 发行版。
+
+### 底线
+
+正如你现在所看到的,使用 Kali 并不是一个轻松的决定。如果你打算成为一个白帽子,你需要使用 Kali 来学习,那么在学习了基础知识并花了几个月的时间使用普通 Linux 系统之后再来学习 Kali。但是小心你正在做的事情,以避免遇到麻烦。
+
+如果你打算使用 Kali,或者你需要任何帮助,我很乐意在评论中听到你的想法。
+
+
+--------------------------------------------------------------------------------
+
+via: https://fosspost.org/articles/must-know-before-using-kali-linux
+
+作者:[M.Hanny Sabbagh][a]
+选题:[lujun9972][b]
+译者:[MjSeven](https://github.com/MjSeven)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fosspost.org/author/mhsabbagh
+[b]: https://github.com/lujun9972
+[1]: https://fosspost.org/articles/what-are-the-components-of-a-linux-distribution
+[2]: https://www.kali.org/news/kali-linux-rolling-edition-2016-1/
+[3]: http://git.kali.org
+[4]: https://paste.ubuntu.com/p/bctSVWwpVw/
diff --git a/translated/tech/20181016 Lab 4- Preemptive Multitasking.md b/translated/tech/20181016 Lab 4- Preemptive Multitasking.md
new file mode 100644
index 0000000000..9302b7288a
--- /dev/null
+++ b/translated/tech/20181016 Lab 4- Preemptive Multitasking.md
@@ -0,0 +1,590 @@
+实验 4:抢占式多任务处理
+======
+### 实验 4:抢占式多任务处理
+
+#### 简介
+
+在本实验中,你将在多个同时活跃的用户模式环境之间实现抢占式多任务处理。
+
+在 Part A 中,你将在 JOS 中添加对多处理器的支持,以实现循环调度。并且添加基本的环境管理方面的系统调用(创建和销毁环境的系统调用、以及分配/映射内存)。
+
+在 Part B 中,你将要实现一个类 Unix 的 `fork()`,它将允许一个用户模式中的环境去创建一个它自己的副本。
+
+最后,在 Part C 中,你将在 JOS 中添加对进程间通讯(IPC)的支持,以允许不同用户模式环境之间进行显式通讯和同步。你也将要去添加对硬件时钟中断和抢占的支持。
+
+##### 预备知识
+
+使用 git 去提交你的实验 3 的源代码,并获取课程仓库的最新版本,然后创建一个名为 `lab4` 的本地分支,它跟踪我们的名为 `origin/lab4` 的远程 `lab4` 分支:
+
+```markdown
+ athena% cd ~/6.828/lab
+ athena% add git
+ athena% git pull
+ Already up-to-date.
athena% git checkout -b lab4 origin/lab4
+ Branch lab4 set up to track remote branch refs/remotes/origin/lab4.
+ Switched to a new branch "lab4"
+ athena% git merge lab3
+ Merge made by recursive.
+ ...
+ athena%
+```
+
+实验 4 包含了一些新的源文件,在开始之前你应该去浏览一遍:
+```markdown
+kern/cpu.h Kernel-private definitions for multiprocessor support
+kern/mpconfig.c Code to read the multiprocessor configuration
+kern/lapic.c Kernel code driving the local APIC unit in each processor
+kern/mpentry.S Assembly-language entry code for non-boot CPUs
+kern/spinlock.h Kernel-private definitions for spin locks, including the big kernel lock
+kern/spinlock.c Kernel code implementing spin locks
+kern/sched.c Code skeleton of the scheduler that you are about to implement
+```
+
+##### 实验要求
+
+本实验分为三部分:Part A、Part B 和 Part C。我们计划为每个部分分配一周的时间。
+
+和以前一样,你需要完成实验中出现的所有常规练习和至少一个挑战问题。(不是每个部分做一个挑战问题,是整个实验做一个挑战问题即可。)另外,你还要写出你实现的挑战问题的详细描述。如果你实现了多个挑战问题,你只需写出其中一个即可,虽然我们的课程欢迎你完成越多的挑战越好。在提交实验之前,请将你的挑战问题的答案写在一个名为 `answers-lab4.txt` 的文件中,并把它放在你的 `lab` 目录的根下。
+
+#### Part A:多处理器支持和协作式多任务处理
+
+在本实验的第一部分,你将扩展你的 JOS 内核,以便于它能够在一个多处理器的系统上运行,并且要在 JOS 内核中实现一些新的系统调用,以允许用户级环境创建附加的新环境。你也要去实现协作式的循环调度,在当前的环境自愿放弃 CPU(或退出)时,允许内核将一个环境切换到另一个环境。稍后在 Part C 中,你将要实现抢占式调度,它允许内核在环境占有 CPU 一段时间后,从这个环境上重新取回对 CPU 的控制,哪怕是在那个环境不配合的情况下。
+
+##### 多处理器支持
+
+我们接下来要让 JOS 支持“对称多处理器”(SMP),在这种多处理器模型中,所有 CPU 都有平等访问系统资源(如内存和 I/O 总线)的权利。虽然在 SMP 中所有 CPU 的功能都相同,但是在引导过程中,它们被分成两种类型:引导程序处理器(BSP)负责初始化系统和引导操作系统;而在操作系统启动并正常运行后,应用程序处理器(AP)将被 BSP 激活。哪个处理器做 BSP 是由硬件和 BIOS 来决定的。到目前为止,你所有的已存在的 JOS 代码都是运行在 BSP 上的。
+
+在一个 SMP 系统上,每个 CPU 都伴有一个本地 APIC(LAPIC)单元。这个 LAPIC 单元负责传递系统中的中断。LAPIC 还为它所连接的 CPU 提供一个唯一的标识符。在本实验中,我们将使用 LAPIC 单元(它在 `kern/lapic.c` 中)中的下列基本功能:
+
+ * 读取 LAPIC 标识符(APIC ID),以确定我们的代码当前运行在哪个 CPU 上(查看 `cpunum()`)。
+ * 从 BSP 到 AP 之间发送处理器间中断(IPI) `STARTUP`,以启动其它 CPU(查看 `lapic_startap()`)。
+ * 在 Part C 中,我们设置 LAPIC 的内置定时器去触发时钟中断,以便于支持抢占式多任务处理(查看 `apic_init()`)。
+
+
+
+一个处理器使用内存映射的 I/O(MMIO)来访问它的 LAPIC。在 MMIO 中,一部分物理内存是硬编码到一些 I/O
设备的寄存器中,因此,访问内存时一般可以使用相同的 `load/store` 指令去访问设备的寄存器。正如你所看到的,在物理地址 `0xA0000` 处就是一个 IO 入口(就是我们写入 VGA 缓冲区的入口)。LAPIC 就在那里,它从物理地址 `0xFE000000` 处(4GB 减去 32MB 处)开始,这个地址对于我们在 KERNBASE 处使用直接映射访问来说太高了。JOS 虚拟内存映射在 `MMIOBASE` 处,留下一个 4MB 的空隙,以便于我们有一个地方,能像这样去映射设备。由于在后面的实验中,我们将介绍更多的 MMIO 区域,你将要写一个简单的函数,从这个区域中去分配空间,并将设备的内存映射到那里。 + +```markdown +练习 1、实现 `kern/pmap.c` 中的 `mmio_map_region`。去看一下它是如何使用的,从 `kern/lapic.c` 中的 `lapic_init` 开始看起。在 `mmio_map_region` 的测试运行之前,你还要做下一个练习。 +``` + +###### 引导应用程序处理器 + +在引导应用程序处理器之前,引导程序处理器应该会首先去收集关于多处理器系统的信息,比如总的 CPU 数、它们的 APIC ID 以及 LAPIC 单元的 MMIO 地址。在 `kern/mpconfig.c` 中的 `mp_init()` 函数,通过读取内存中位于 BIOS 区域里的 MP 配置表来获得这些信息。 + +`boot_aps()` 函数(在 `kern/init.c` 中)驱动 AP 的引导过程。AP 们在实模式中开始,与 `boot/boot.S` 中启动引导加载程序非常相似。因此,`boot_aps()` 将 AP 入口代码(`kern/mpentry.S`)复制到实模式中的那个可寻址内存地址上。不像使用引导加载程序那样,我们可以控制 AP 将从哪里开始运行代码;我们复制入口代码到 `0x7000`(`MPENTRY_PADDR`)处,但是复制到任何低于 640KB 的、未使用的、页对齐的物理地址上都是可以运行的。 + +在那之后,通过发送 IPI `STARTUP` 到相关 AP 的 LAPIC 单元,以及一个初始的 `CS:IP` 地址(AP 将从那儿开始运行它的入口代码,在我们的案例中是 `MPENTRY_PADDR` ),`boot_aps()` 将一个接一个地激活 AP。在 `kern/mpentry.S` 中的入口代码非常类似于 `boot/boot.S`。在一些简短的设置之后,它启用分页,使 AP 进入保护模式,然后调用 C 设置程序 `mp_main()`(它也在 `kern/init.c` 中)。在继续唤醒下一个 AP 之前, `boot_aps()` 将等待这个 AP 去传递一个 `CPU_STARTED` 标志到它的 `struct CpuInfo` 中的 `cpu_status` 字段中。 + +```markdown +练习 2、阅读 `kern/init.c` 中的 `boot_aps()` 和 `mp_main()`,以及在 `kern/mpentry.S` 中的汇编代码。确保你理解了在 AP 引导过程中的控制流转移。然后修改在 `kern/pmap.c` 中的、你自己的 `page_init()`,实现避免在 `MPENTRY_PADDR` 处添加页到空闲列表上,以便于我们能够在物理地址上安全地复制和运行 AP 引导程序代码。你的代码应该会通过更新后的 `check_page_free_list()` 的测试(但可能会在更新后的 `check_kern_pgdir()` 上测试失败,我们在后面会修复它)。 +``` + +```markdown +问题 + 1、比较 `kern/mpentry.S` 和 `boot/boot.S`。记住,那个 `kern/mpentry.S` 是编译和链接后的,运行在 `KERNBASE` 上面的,就像内核中的其它程序一样,宏 `MPBOOTPHYS` 的作用是什么?为什么它需要在 `kern/mpentry.S` 中,而不是在 `boot/boot.S` 中?换句话说,如果在 `kern/mpentry.S` 中删掉它,会发生什么错误? 
+提示:回顾链接地址和加载地址的区别,我们在实验 1 中讨论过它们。 +``` + + +###### 每个 CPU 的状态和初始化 + +当写一个多处理器操作系统时,区分每个 CPU 的状态是非常重要的,而每个 CPU 的状态对其它处理器是不公开的,而全局状态是整个系统共享的。`kern/cpu.h` 定义了大部分每个 CPU 的状态,包括 `struct CpuInfo`,它保存了每个 CPU 的变量。`cpunum()` 总是返回调用它的那个 CPU 的 ID,它可以被用作是数组的索引,比如 `cpus`。或者,宏 `thiscpu` 是当前 CPU 的 `struct CpuInfo` 缩略表示。 + +下面是你应该知道的每个 CPU 的状态: + + * **每个 CPU 的内核栈** +因为内核能够同时捕获多个 CPU,因此,我们需要为每个 CPU 准备一个单独的内核栈,以防止它们运行的程序之间产生相互干扰。数组 `percpu_kstacks[NCPU][KSTKSIZE]` 为 NCPU 的内核栈资产保留了空间。 + +在实验 2 中,你映射的 `bootstack` 所引用的物理内存,就作为 `KSTACKTOP` 以下的 BSP 的内核栈。同样,在本实验中,你将每个 CPU 的内核栈映射到这个区域,而使用保护页做为它们之间的缓冲区。CPU 0 的栈将从 `KSTACKTOP` 处向下增长;CPU 1 的栈将从 CPU 0 的栈底部的 `KSTKGAP` 字节处开始,依次类推。在 `inc/memlayout.h` 中展示了这个映射布局。 + + * **每个 CPU 的 TSS 和 TSS 描述符** +为了指定每个 CPU 的内核栈在哪里,也需要有一个每个 CPU 的任务状态描述符(TSS)。CPU _i_ 的任务状态描述符是保存在 `cpus[i].cpu_ts` 中,而对应的 TSS 描述符是定义在 GDT 条目 `gdt[(GD_TSS0 >> 3) + i]` 中。在 `kern/trap.c` 中定义的全局变量 `ts` 将不再被使用。 + + * **每个 CPU 当前的环境指针** +由于每个 CPU 都能同时运行不同的用户进程,所以我们重新定义了符号 `curenv`,让它指向到 `cpus[cpunum()].cpu_env`(或 `thiscpu->cpu_env`),它指向到当前 CPU(代码正在运行的那个 CPU)上当前正在运行的环境上。 + + * **每个 CPU 的系统寄存器** +所有的寄存器,包括系统寄存器,都是一个 CPU 私有的。所以,初始化这些寄存器的指令,比如 `lcr3()`、`ltr()`、`lgdt()`、`lidt()`、等待,必须在每个 CPU 上运行一次。函数 `env_init_percpu()` 和 `trap_init_percpu()` 就是为此目的而定义的。 + + + +```markdown +练习 3、修改 `mem_init_mp()`(在 `kern/pmap.c` 中)去映射每个 CPU 的栈从 `KSTACKTOP` 处开始,就像在 `inc/memlayout.h` 中展示的那样。每个栈的大小是 `KSTKSIZE` 字节加上未映射的保护页 `KSTKGAP` 的字节。你的代码应该会通过在 `check_kern_pgdir()` 中的新的检查。 +``` + +```markdown +练习 4、在 `trap_init_percpu()`(在 `kern/trap.c` 文件中)的代码为 BSP 初始化 TSS 和 TSS 描述符。在实验 3 中它就运行过,但是当它运行在其它的 CPU 上就会出错。修改这些代码以便它能在所有 CPU 上都正常运行。(注意:你的新代码应该还不能使用全局变量 `ts`) +``` + +在你完成上述练习后,在 QEMU 中使用 4 个 CPU(使用 `make qemu CPUS=4` 或 `make qemu-nox CPUS=4`)来运行 JOS,你应该看到类似下面的输出: + +```c + ... + Physical memory: 66556K available, base = 640K, extended = 65532K + check_page_alloc() succeeded! + check_page() succeeded! + check_kern_pgdir() succeeded! + check_page_installed_pgdir() succeeded! 
+ SMP: CPU 0 found 4 CPU(s) + enabled interrupts: 1 2 + SMP: CPU 1 starting + SMP: CPU 2 starting + SMP: CPU 3 starting +``` + +###### 锁定 + +在 `mp_main()` 中初始化 AP 后我们的代码快速运行起来。在你更进一步增强 AP 之前,我们需要首先去处理多个 CPU 同时运行内核代码的争用状况。达到这一目标的最简单的方法是使用大内核锁。大内核锁是一个单个的全局锁,当一个环境进入内核模式时,它将被加锁,而这个环境返回到用户模式时它将释放锁。在这种模型中,在用户模式中运行的环境可以同时运行在任何可用的 CPU 上,但是只有一个环境能够运行在内核模式中;而任何尝试进入内核模式的其它环境都被强制等待。 + +`kern/spinlock.h` 中声明大内核锁,即 `kernel_lock`。它也提供 `lock_kernel()` 和 `unlock_kernel()`,快捷地去获取/释放锁。你应该在以下的四个位置应用大内核锁: + + * 在 `i386_init()` 时,在 BSP 唤醒其它 CPU 之前获取锁。 + * 在 `mp_main()` 时,在初始化 AP 之后获取锁,然后调用 `sched_yield()` 在这个 AP 上开始运行环境。 + * 在 `trap()` 时,当从用户模式中捕获一个陷阱trap时获取锁。在检查 `tf_cs` 的低位比特,以确定一个陷阱是发生在用户模式还是内核模式时。 + * 在 `env_run()` 中,在切换到用户模式之前释放锁。不能太早也不能太晚,否则你将可能会产生争用或死锁。 + + +```markdown +练习 5、在上面所描述的情况中,通过在合适的位置调用 `lock_kernel()` 和 `unlock_kernel()` 应用大内核锁。 +``` + +如果你的锁定是正确的,如何去测试它?实际上,到目前为止,还无法测试!但是在下一个练习中,你实现了调度之后,就可以测试了。 + +``` +问题 + 2、看上去使用一个大内核锁,可以保证在一个时间中只有一个 CPU 能够运行内核代码。为什么每个 CPU 仍然需要单独的内核栈?描述一下使用一个共享内核栈出现错误的场景,即便是在它使用了大内核锁保护的情况下。 +``` + +``` +小挑战!大内核锁很简单,也易于使用。尽管如此,它消除了内核模式的所有并发。大多数现代操作系统使用不同的锁,一种称之为细粒度锁定的方法,去保护它们的共享的栈的不同部分。细粒度锁能够大幅提升性能,但是实现起来更困难并且易出错。如果你有足够的勇气,在 JOS 中删除大内核锁,去拥抱并发吧! 
+ +由你来决定锁的粒度(一个锁保护的数据量)。给你一个提示,你可以考虑在 JOS 内核中使用一个自旋锁去确保你独占访问这些共享的组件: + + * 页分配器 + * 控制台驱动 + * 调度器 + * 你将在 Part C 中实现的进程间通讯(IPC)的状态 +``` + + +##### 循环调度 + +本实验中,你的下一个任务是去修改 JOS 内核,以使它能够在多个环境之间以“循环”的方式去交替。JOS 中的循环调度工作方式如下: + + * 在新的 `kern/sched.c` 中的 `sched_yield()` 函数负责去选择一个新环境来运行。它按顺序以循环的方式在数组 `envs[]` 中进行搜索,在前一个运行的环境之后开始(或如果之前没有运行的环境,就从数组起点开始),选择状态为 `ENV_RUNNABLE` 的第一个环境(查看 `inc/env.h`),并调用 `env_run()` 去跳转到那个环境。 + * `sched_yield()` 必须做到,同一个时间在两个 CPU 上绝对不能运行相同的环境。它可以判断出一个环境正运行在一些 CPU(可能是当前 CPU)上,因为,那个正在运行的环境的状态将是 `ENV_RUNNING`。 + * 我们已经为你实现了一个新的系统调用 `sys_yield()`,用户环境调用它去调用内核的 `sched_yield()` 函数,并因此将自愿把对 CPU 的控制禅让给另外的一个环境。 + + + +```c +练习 6、像上面描述的那样,在 `sched_yield()` 中实现循环调度。不要忘了去修改 `syscall()` 以派发 `sys_yield()`。 + +确保在 `mp_main` 中调用了 `sched_yield()`。 + +修改 `kern/init.c` 去创建三个(或更多个!)运行程序 `user/yield.c`的环境。 + +运行 `make qemu`。在它终止之前,你应该会看到像下面这样,在环境之间来回切换了五次。 + +也可以使用几个 CPU 来测试:make qemu CPUS=2。 + + ... + Hello, I am environment 00001000. + Hello, I am environment 00001001. + Hello, I am environment 00001002. + Back in environment 00001000, iteration 0. + Back in environment 00001001, iteration 0. + Back in environment 00001002, iteration 0. + Back in environment 00001000, iteration 1. + Back in environment 00001001, iteration 1. + Back in environment 00001002, iteration 1. + ... + +在程序 `yield` 退出之后,系统中将没有可运行的环境,调度器应该会调用 JOS 内核监视器。如果它什么也没有发生,那么你应该在继续之前修复你的代码。 +``` + +```c +问题 + 3、在你实现的 `env_run()` 中,你应该会调用 `lcr3()`。在调用 `lcr3()` 的之前和之后,你的代码引用(至少它应该会)变量 `e`,它是 `env_run` 的参数。在加载 `%cr3` 寄存器时,MMU 使用的地址上下文将马上被改变。但一个虚拟地址(即 `e`)相对一个给定的地址上下文是有意义的 —— 地址上下文指定了物理地址到那个虚拟地址的映射。为什么指针 `e` 在地址切换之前和之后被解除引用? + 4、无论何时,内核从一个环境切换到另一个环境,它必须要确保旧环境的寄存器内容已经被保存,以便于它们稍后能够正确地还原。为什么?这种事件发生在什么地方? 
+``` + +```c +小挑战!给内核添加一个小小的调度策略,比如一个固定优先级的调度器,它将会给每个环境分配一个优先级,并且在执行中,较高优先级的环境总是比低优先级的环境优先被选定。如果你想去冒险一下,尝试实现一个类 Unix 的、优先级可调整的调度器,或者甚至是一个彩票调度器或跨步调度器。(可以在 Google 中查找“彩票调度”和“跨步调度”的相关资料) + +写一个或两个测试程序,去测试你的调度算法是否工作正常(即,正确的算法能够按正确的次序运行)。如果你实现了本实验的 Part B 和 Part C 部分的 `fork()` 和 IPC,写这些测试程序可能会更容易。 +``` + +```markdown +小挑战!目前的 JOS 内核还不能应用到使用了 x87 协处理器、MMX 指令集、或流式 SIMD 扩展(SSE)的 x86 处理器上。扩展数据结构 `Env` 去提供一个能够保存处理器的浮点状态的地方,并且扩展上下文切换代码,当从一个环境切换到另一个环境时,能够保存和还原正确的状态。`FXSAVE` 和 `FXRSTOR` 指令或许对你有帮助,但是需要注意的是,这些指令在旧的 x86 用户手册上没有,因为它是在较新的处理器上引入的。写一个用户级的测试程序,让它使用浮点做一些很酷的事情。 +``` + +##### 创建环境的系统调用 + +虽然你的内核现在已经有了在多个用户级环境之间切换的功能,但是由于内核初始化设置的原因,它在运行环境时仍然是受限的。现在,你需要去实现必需的 JOS 系统调用,以允许用户环境去创建和启动其它的新用户环境。 + +Unix 提供了 `fork()` 系统调用作为它的进程创建原语。Unix 的 `fork()` 通过复制调用进程(父进程)的整个地址空间去创建一个新进程(子进程)。从用户空间中能够观察到它们之间的仅有的两个差别是,它们的进程 ID 和父进程 ID(由 `getpid` 和 `getppid` 返回)。在父进程中,`fork()` 返回子进程 ID,而在子进程中,`fork()` 返回 0。默认情况下,每个进程得到它自己的私有地址空间,一个进程对内存的修改对另一个进程都是不可见的。 + +为创建一个用户模式下的新的环境,你将要提供一个不同的、更原始的 JOS 系统调用集。使用这些系统调用,除了其它类型的环境创建之外,你可以在用户空间中实现一个完整的类 Unix 的 `fork()`。你将要为 JOS 编写的新的系统调用如下: + + * `sys_exofork`: +这个系统调用创建一个新的空白的环境:在它的地址空间的用户部分什么都没有映射,并且它也不能运行。这个新的环境与 `sys_exofork` 调用时创建它的父环境的寄存器状态完全相同。在父进程中,`sys_exofork` 将返回新创建进程的 `envid_t`(如果环境分配失败的话,返回的是一个负的错误代码)。在子进程中,它将返回 0。(因为子进程从一开始就被标记为不可运行,在子进程中,`sys_exofork` 将并不真的返回,直到它的父进程使用 .... 
显式地将子进程标记为可运行之前。) + * `sys_env_set_status`: +设置指定的环境状态为 `ENV_RUNNABLE` 或 `ENV_NOT_RUNNABLE`。这个系统调用一般是在,一个新环境的地址空间和寄存器状态已经完全初始化完成之后,用于去标记一个准备去运行的新环境。 + * `sys_page_alloc`: +分配一个物理内存页,并映射它到一个给定的环境地址空间中、给定的一个虚拟地址上。 + * `sys_page_map`: +从一个环境的地址空间中复制一个页映射(不是页内容!)到另一个环境的地址空间中,保持一个内存共享,以便于新的和旧的映射共同指向到同一个物理内存页。 + * `sys_page_unmap`: +在一个给定的环境中,取消映射一个给定的已映射的虚拟地址。 + + + +上面所有的系统调用都接受环境 ID 作为参数,JOS 内核支持一个约定,那就是用值 “0” 来表示“当前环境”。这个约定在 `kern/env.c` 中的 `envid2env()` 中实现的。 + +在我们的 `user/dumbfork.c` 中的测试程序里,提供了一个类 Unix 的 `fork()` 的非常原始的实现。这个测试程序使用了上面的系统调用,去创建和运行一个复制了它自己地址空间的子环境。然后,这两个环境像前面的练习那样使用 `sys_yield` 来回切换,父进程在迭代 10 次后退出,而子进程在迭代 20 次后退出。 + +```c +练习 7、在 `kern/syscall.c` 中实现上面描述的系统调用,并确保 `syscall()` 能调用它们。你将需要使用 `kern/pmap.c` 和 `kern/env.c` 中的多个函数,尤其是要用到 `envid2env()`。目前,每当你调用 `envid2env()` 时,在 `checkperm` 中传递参数 1。你务必要做检查任何无效的系统调用参数,在那个案例中,就返回了 `-E_INVAL`。使用 `user/dumbfork` 测试你的 JOS 内核,并在继续之前确保它运行正常。 +``` + +```c +小挑战!添加另外的系统调用,必须能够读取已存在的、所有的、环境的重要状态,以及设置它们。然后实现一个能够 fork 出子环境的用户模式程序,运行它一小会(即,迭代几次 `sys_yield()`),然后取得几张屏幕截图或子环境的检查点,然后运行子环境一段时间,然后还原子环境到检查点时的状态,然后从这里继续开始。这样,你就可以有效地从一个中间状态“回放”了子环境的运行。确保子环境与用户使用 `sys_cgetc()` 或 `readline()` 执行了一些交互,这样,那个用户就能够查看和突变它的内部状态,并且你可以通过给子环境给定一个选择性遗忘的状况,来验证你的检查点/重启动的有效性,使它“遗忘”了在某些点之前发生的事情。 +``` + +到此为止,已经完成了本实验的 Part A 部分;在你运行 `make grade` 之前确保它通过了所有的 Part A 的测试,并且和以往一样,使用 `make handin` 去提交它。如果你想尝试找出为什么一些特定的测试是失败的,可以运行 `run ./grade-lab4 -v`,它将向你展示内核构建的输出,和测试失败时的 QEMU 运行情况。当测试失败时,这个脚本将停止运行,然后你可以去检查 `jos.out` 的内容,去查看内核真实的输出内容。 + +#### Part B:写时复制 Fork + +正如在前面提到过的,Unix 提供 `fork()` 系统调用作为它主要的进程创建原语。`fork()` 系统调用通过复制调用进程(父进程)的地址空间来创建一个新进程(子进程)。 + +xv6 Unix 的 `fork()` 从父进程的页上复制所有数据,然后将它分配到子进程的新页上。从本质上看,它与 `dumbfork()` 所采取的方法是相同的。复制父进程的地址空间到子进程,是 `fork()` 操作中代价最高的部分。 + +但是,一个对 `fork()` 的调用后,经常是紧接着几乎立即在子进程中有一个到 `exec()` 的调用,它使用一个新程序来替换子进程的内存。这是 shell 默认去做的事,在这种情况下,在复制父进程地址空间上花费的时间是非常浪费的,因为在调用 `exec()` 之前,子进程使用的内存非常少。 + +基于这个原因,Unix 的最新版本利用了虚拟内存硬件的优势,允许父进程和子进程去共享映射到它们各自地址空间上的内存,直到其中一个进程真实地修改了它们为止。这个技术就是众所周知的“写时复制”。为实现这一点,在 `fork()` 
时,内核将复制从父进程到子进程的地址空间的映射,而不是所映射的页的内容,并且同时设置正在共享中的页为只读。当两个进程中的其中一个尝试去写入到它们共享的页上时,进程将产生一个页故障。在这时,Unix 内核才意识到那个页实际上是“虚拟的”或“写时复制”的副本,然后它生成一个新的、私有的、那个发生页故障的进程可写的、页的副本。在这种方式中,个人的页的内容并不进行真实地复制,直到它们真正进行写入时才进行复制。这种优化使得一个`fork()` 后在子进程中跟随一个 `exec()` 变得代价很低了:子进程在调用 `exec()` 时或许仅需要复制一个页(它的栈的当前页)。 + +在本实验的下一段中,你将实现一个带有“写时复制”的“真正的”类 Unix 的 `fork()`,来作为一个常规的用户空间库。在用户空间中实现 `fork()` 和写时复制有一个好处就是,让内核始终保持简单,并且因此更不易出错。它也让个别的用户模式程序在 `fork()` 上定义了它们自己的语义。一个有略微不同实现的程序(例如,代价昂贵的、总是复制的 `dumbfork()` 版本,或父子进程真实共享内存的后面的那一个),它自己可以很容易提供。 + +##### 用户级页故障处理 + +一个用户级写时复制 `fork()` 需要知道关于在写保护页上的页故障相关的信息,因此,这是你首先需要去实现的东西。对用户级页故障处理来说,写时复制仅是众多可能的用途之一。 + +它通常是配置一个地址空间,因此在一些动作需要时,那个页故障将指示去处。例如,主流的 Unix 内核在一个新进程的栈区域中,初始的映射仅是单个页,并且在后面“按需”分配和映射额外的栈页,因此,进程的栈消费是逐渐增加的,并因此导致在尚未映射的栈地址上发生页故障。在每个进程空间的区域上发生一个页故障时,一个典型的 Unix 内核必须对它的动作保持跟踪。例如,在栈区域中的一个页故障,一般情况下将分配和映射新的物理内存页。一个在程序的 BSS 区域中的页故障,一般情况下将分配一个新页,然后用 0 填充它并映射它。在一个按需分页的系统上的一个可执行文件中,在文本区域中的页故障将从磁盘上读取相应的二进制页并映射它。 + +内核跟踪有大量的信息,与传统的 Unix 方法不同,你将决定在每个用户空间中关于每个页故障应该做的事。用户空间中的 bug 危害都较小。这种设计带来了额外的好处,那就是允许程序员在定义它们的内存区域时,会有很好的灵活性;对于映射和访问基于磁盘文件系统上的文件时,你应该使用后面的用户级页故障处理。 + +###### 设置页故障服务程序 + +为了处理它自己的页故障,一个用户环境将需要在 JOS 内核上注册一个页故障服务程序入口。用户环境通过新的 `sys_env_set_pgfault_upcall` 系统调用来注册它的页故障入口。我们给结构 `Env` 增加了一个新的成员 `env_pgfault_upcall`,让它去记录这个信息。 + +```markdown +练习 8、实现 `sys_env_set_pgfault_upcall` 系统调用。当查找目标环境的环境 ID 时,一定要确认启用了权限检查,因为这是一个“危险的”系统调用。 +``` + +###### 在用户环境中的正常和异常栈 + +在正常运行期间,JOS 中的一个用户环境运行在正常的用户栈上:它的 `ESP` 寄存器开始指向到 `USTACKTOP`,而它所推送的栈数据将驻留在 `USTACKTOP-PGSIZE` 和 `USTACKTOP-1`(含)之间的页上。但是,当在用户模式中发生页故障时,内核将在一个不同的栈上重新启动用户环境,运行一个用户级页故障指定的服务程序,即用户异常栈。其它,我们将让 JOS 内核为用户环境实现自动的“栈切换”,当从用户模式转换到内核模式时,x86 处理器就以大致相同的方式为 JOS 实现了栈切换。 + +JOS 用户异常栈也是一个页的大小,并且它的顶部被定义在虚拟地址 `UXSTACKTOP` 处,因此用户异常栈的有效字节数是从 `UXSTACKTOP-PGSIZE` 到 `UXSTACKTOP-1`(含)。尽管运行在异常栈上,用户页故障服务程序能够使用 JOS 的普通系统调用去映射新页或调整映射,以便于去修复最初导致页故障发生的各种问题。然后用户级页故障服务程序通过汇编语言 `stub` 返回到原始栈上的故障代码。 + +每个想去支持用户级页故障处理的用户环境,都需要为它自己的异常栈使用在 Part A 中介绍的 `sys_page_alloc()` 系统调用去分配内存。 + +###### 调用用户页故障服务程序 + +现在,你需要去修改 `kern/trap.c` 
中的页故障处理代码,以能够处理接下来在用户模式中发生的页故障。我们将故障发生时用户环境的状态称之为捕获时状态。 + +如果这里没有注册页故障服务程序,JOS 内核将像前面那样,使用一个消息来销毁用户环境。否则,内核将在异常栈上设置一个陷阱帧,它看起来就像是来自 `inc/trap.h` 文件中的一个 `struct UTrapframe` 一样: + +```assembly + <-- UXSTACKTOP + trap-time esp + trap-time eflags + trap-time eip + trap-time eax start of struct PushRegs + trap-time ecx + trap-time edx + trap-time ebx + trap-time esp + trap-time ebp + trap-time esi + trap-time edi end of struct PushRegs + tf_err (error code) + fault_va <-- %esp when handler is run + +``` + +然后,内核安排这个用户环境重新运行,使用这个栈帧在异常栈上运行页故障服务程序;你必须搞清楚为什么发生这种情况。`fault_va` 是引发页故障的虚拟地址。 + +如果在一个异常发生时,用户环境已经在用户异常栈上运行,那么页故障服务程序自身将会失败。在这种情况下,你应该在当前的 `tf->tf_esp` 下,而不是在 `UXSTACKTOP` 下启动一个新的栈帧。 + +去测试 `tf->tf_esp` 是否已经在用户异常栈上准备好,可以去检查它是否在 `UXSTACKTOP-PGSIZE` 和 `UXSTACKTOP-1`(含)的范围内。 + +```markdown +练习 9、实现在 `kern/trap.c` 中的 `page_fault_handler` 的代码,要求派发页故障到用户模式故障服务程序上。在写入到异常栈时,一定要采取适当的预防措施。(如果用户环境运行时溢出了异常栈,会发生什么事情?) +``` + +###### 用户模式页故障入口点 + +接下来,你需要去实现汇编程序,它将调用 C 页故障服务程序,并在原始的故障指令处恢复程序运行。这个汇编程序是一个故障服务程序,它由内核使用 `sys_env_set_pgfault_upcall()` 来注册。 + +```markdown +练习 10、实现在 `lib/pfentry.S` 中的 `_pgfault_upcall` 程序。最有趣的部分是返回到用户代码中产生页故障的原始位置。你将要直接返回到那里,不能通过内核返回。最难的部分是同时切换栈和重新加载 EIP。 +``` + +最后,你需要去实现用户级页故障处理机制的 C 用户库。 + +```c +练习 11、完成 `lib/pgfault.c` 中的 `set_pgfault_handler()`。 +``` + +###### 测试 + +运行 `user/faultread`(make run-faultread)你应该会看到: + +```c + ... + [00000000] new env 00001000 + [00001000] user fault va 00000000 ip 0080003a + TRAP frame ... + [00001000] free env 00001000 +``` + +运行 `user/faultdie` 你应该会看到: + +```c + ... + [00000000] new env 00001000 + i faulted at va deadbeef, err 6 + [00001000] exiting gracefully + [00001000] free env 00001000 +``` + +运行 `user/faultalloc` 你应该会看到: + +```c + ... 
+ [00000000] new env 00001000 + fault deadbeef + this string was faulted in at deadbeef + fault cafebffe + fault cafec000 + this string was faulted in at cafebffe + [00001000] exiting gracefully + [00001000] free env 00001000 +``` + +如果你只看到第一个 "this string” 行,意味着你没有正确地处理递归页故障。 + +运行 `user/faultallocbad` 你应该会看到: + +```c + ... + [00000000] new env 00001000 + [00001000] user_mem_check assertion failure for va deadbeef + [00001000] free env 00001000 +``` + +确保你理解了为什么 `user/faultalloc` 和 `user/faultallocbad` 的行为是不一样的。 + +```markdown +小挑战!扩展你的内核,让它不仅是页故障,而是在用户空间中运行的代码能够产生的所有类型的处理器异常,都能够被重定向到一个用户模式中的异常服务程序上。写出用户模式测试程序,去测试各种各样的用户模式异常处理,比如除零错误、一般保护故障、以及非法操作码。 +``` + +##### 实现写时复制 Fork + +现在,你有个内核功能要去实现,那就是在用户空间中完整地实现写时复制 `fork()`。 + +我们在 `lib/fork.c` 中为你的 `fork()` 提供了一个框架。像 `dumbfork()`、`fork()` 应该会创建一个新环境,然后通过扫描父环境的整个地址空间,并在子环境中设置相关的页映射。重要的差别在于,`dumbfork()` 复制了页,而 `fork()` 开始只是复制了页映射。`fork()` 仅当在其中一个环境尝试去写入它时才复制每个页。 + +`fork()` 的基本控制流如下: + + 1. 父环境使用你在上面实现的 `set_pgfault_handler()` 函数,安装 `pgfault()` 作为 C 级页故障服务程序。 + + 2. 父环境调用 `sys_exofork()` 去创建一个子环境。 + + 3. 在它的地址空间中,低于 UTOP 位置的、每个可写入页、或写时复制页上,父环境调用 `duppage` 后,它应该会映射页写时复制到子环境的地址空间中,然后在它自己的地址空间中重新映射页写时复制。[ 注意:这里的顺序很重要(即,在父环境中标记之前,先在子环境中标记该页为 COW)!你能明白是为什么吗?尝试去想一个具体的案例,将顺序颠倒一下会发生什么样的问题。] `duppage` 把两个 PTE 都设置了,致使那个页不可写入,并且在 "avail” 字段中通过包含 `PTE_COW` 来从真正的只读页中区分写时复制页。 + +然而异常栈是不能通过这种方式重映射的。对于异常栈,你需要在子环境中分配一个新页。因为页故障服务程序不能做真实的复制,并且页故障服务程序是运行在异常栈上的,异常栈不能进行写时复制:那么谁来复制它呢? + +`fork()` 也需要去处理存在的页,但不能写入或写时复制。 + + 4. 父环境为子环境设置了用户页故障入口点,让它看起来像它自己的一样。 + + 5. 现在,子环境准备去运行,所以父环境标记它为可运行。 + + + + +每次其中一个环境写一个还没有写入的写时复制页时,它将产生一个页故障。下面是用户页故障服务程序的控制流: + + 1. 内核传递页故障到 `_pgfault_upcall`,它调用 `fork()` 的 `pgfault()` 服务程序。 + 2. `pgfault()` 检测到那个故障是一个写入(在错误代码中检查 `FEC_WR`),然后将那个页的 PTE 标记为 `PTE_COW`。如果不是一个写入,则崩溃。 + 3. 
`pgfault()` 在一个临时位置分配一个映射的新页,并将故障页的内容复制进去。然后,故障服务程序以读取/写入权限映射新页到合适的地址,替换旧的只读映射。 + + + +对于上面的几个操作,用户级 `lib/fork.c` 代码必须查询环境的页表(即,那个页的 PTE 是否标记为 `PET_COW`)。为此,内核在 `UVPT` 位置精确地映射环境的页表。它使用一个 [聪明的映射技巧][1] 去标记它,以使用户代码查找 PTE 时更容易。`lib/entry.S` 设置 `uvpt` 和 `uvpd`,以便于你能够在 `lib/fork.c` 中轻松查找页表信息。 + +```c +练习 12、在 `lib/fork.c` 中实现 `fork`、`duppage` 和 `pgfault`。 + +使用 `forktree` 程序测试你的代码。它应该会产生下列的信息,在信息中会有 'new env'、'free env'、和 'exiting gracefully' 这样的字眼。信息可能不是按如下的顺序出现的,并且环境 ID 也可能不一样。 + + 1000: I am '' + 1001: I am '0' + 2000: I am '00' + 2001: I am '000' + 1002: I am '1' + 3000: I am '11' + 3001: I am '10' + 4000: I am '100' + 1003: I am '01' + 5000: I am '010' + 4001: I am '011' + 2002: I am '110' + 1004: I am '001' + 1005: I am '111' + 1006: I am '101' +``` + +```c +小挑战!实现一个名为 `sfork()` 的共享内存的 `fork()`。这个版本的 `sfork()` 中,父子环境共享所有的内存页(因此,一个环境中对内存写入,就会改变另一个环境数据),除了在栈区域中的页以外,它应该使用写时复制来处理这些页。修改 `user/forktree.c` 去使用 `sfork()` 而是不常见的 `fork()`。另外,你在 Part C 中实现了 IPC 之后,使用你的 `sfork()` 去运行 `user/pingpongs`。你将找到提供全局指针 `thisenv` 功能的一个新方式。 +``` + +```markdown +小挑战!你实现的 `fork` 将产生大量的系统调用。在 x86 上,使用中断切换到内核模式将产生较高的代价。增加系统调用接口,以便于它能够一次发送批量的系统调用。然后修改 `fork` 去使用这个接口。 + +你的新的 `fork` 有多快? + +你可以用一个分析来论证,批量提交对你的 `fork` 的性能改变,以它来(粗略地)回答这个问题:使用一个 `int 0x30` 指令的代价有多高?在你的 `fork` 中运行了多少次 `int 0x30` 指令?访问 `TSS` 栈切换的代价高吗?等待 ... 
+ +或者,你可以在真实的硬件上引导你的内核,并且真实地对你的代码做基准测试。查看 `RDTSC`(读取时间戳计数器)指令,它的定义在 IA32 手册中,它计数自上一次处理器重置以来流逝的时钟周期数。QEMU 并不能真实地模拟这个指令(它能够计数运行的虚拟指令数量,或使用主机的 TSC,但是这两种方式都不能反映真实的 CPU 周期数)。 +``` + +到此为止,Part B 部分结束了。在你运行 `make grade` 之前,确保你通过了所有的 Part B 部分的测试。和以前一样,你可以使用 `make handin` 去提交你的实验。 + +#### Part C:抢占式多任务处理和进程间通讯(IPC) + +在实验 4 的最后部分,你将修改内核去抢占不配合的环境,并允许环境之间显式地传递消息。 + +##### 时钟中断和抢占 + +运行测试程序 `user/spin`。这个测试程序 fork 出一个子环境,它控制了 CPU 之后,就永不停歇地运转起来。无论是父环境还是内核都不能回收对 CPU 的控制。从用户模式环境中保护系统免受 bug 或恶意代码攻击的角度来看,这显然不是个理想的状态,因为任何用户模式环境都能够通过简单的无限循环,并永不归还 CPU 控制权的方式,让整个系统处于暂停状态。为了允许内核去抢占一个运行中的环境,从其中夺回对 CPU 的控制权,我们必须去扩展 JOS 内核,以支持来自硬件时钟的外部硬件中断。 + +###### 中断规则 + +外部中断(即:设备中断)被称为 IRQ。现在有 16 个可能出现的 IRQ,编号 0 到 15。从 IRQ 号到 IDT 条目的映射是不固定的。在 `picirq.c` 中的 `pic_init` 映射 IRQ 0 - 15 到 IDT 条目 `IRQ_OFFSET` 到 `IRQ_OFFSET+15`。 + +在 `inc/trap.h` 中,`IRQ_OFFSET` 被定义为十进制的 32。所以,IDT 条目 32 - 47 对应 IRQ 0 - 15。例如,时钟中断是 IRQ 0,所以 IDT[IRQ_OFFSET+0](即:IDT[32])包含了内核中时钟中断服务程序的地址。这里选择 `IRQ_OFFSET` 是为了处理器异常不会覆盖设备中断,因为它会引起显而易见的混淆。(事实上,在早期运行 MS-DOS 的 PC 上, `IRQ_OFFSET` 事实上是 0,它确实导致了硬件中断服务程序和处理器异常处理之间的混淆!) 
+
+在 JOS 中,相比 xv6 Unix 我们做了一个重要的简化。当处于内核模式时,外部设备中断总是被关闭(并且,像 xv6 一样,当处于用户空间时,再打开外部设备的中断)。外部中断由 `%eflags` 寄存器的 `FL_IF` 标志位来控制(查看 `inc/mmu.h`)。当这个标志位被设置时,外部中断被打开。虽然这个标志位可以使用几种方式来修改,但是为了简化,我们只在进入和离开用户模式时,通过保存和恢复进程的 `%eflags` 寄存器值来处理它。
+
+处于用户环境中时,你将要确保 `FL_IF` 标志被设置,以便于出现一个中断时,它能够通过处理器来传递,让你的中断代码来处理。否则,中断将被屏蔽或忽略,直到中断被重新打开为止。我们在引导加载程序的第一条指令中就屏蔽了中断,并且到目前为止,还没有去重新打开它们。
+
+```markdown
+练习 13、修改 `kern/trapentry.S` 和 `kern/trap.c` 去初始化 IDT 中的相关条目,并为 IRQ 0 到 15 提供服务程序。然后修改 `kern/env.c` 中的 `env_alloc()` 的代码,以确保在用户环境中,中断总是打开的。
+
+另外,在 `sched_halt()` 中取消注释 `sti` 指令,以便于空闲的 CPU 取消屏蔽中断。
+
+当调用一个硬件中断服务程序时,处理器不会推送错误代码。在这个时候,你可能需要重新阅读 [80386 参考手册][2] 的 9.2 节,或 [IA-32 Intel 架构软件开发者手册 卷 3][3] 的 5.8 节。
+
+在完成这个练习后,如果你在你的内核上持续运行任意一个测试程序(如 `spin`),你应该会看到内核为硬件中断打印出捕获帧。虽然处理器上已经打开了中断,但是 JOS 并不能处理它们,因此,你应该会看到它把每个中断都错误地归因于当前运行的用户环境并销毁该环境,最终销毁完所有环境,并进入到监视器中。
+```
+
+###### 处理时钟中断
+
+在 `user/spin` 程序中,子环境首先运行之后,它只是进入一个高速循环中,并且内核再无法取得 CPU 控制权。我们需要对硬件编程,使它定期产生时钟中断,这将强制把 CPU 控制权返还给内核,这样我们就能够将控制权切换到另外的用户环境中。
+
+我们已经为你写好了对 `lapic_init` 和 `pic_init`(来自 `init.c` 中的 `i386_init`)的调用,它将设置时钟和中断控制器去产生中断。现在,你需要去写代码来处理这些中断。
+
+```markdown
+练习 14、修改内核的 `trap_dispatch()` 函数,以便于在时钟中断发生时,它能够调用 `sched_yield()` 去查找和运行另外一个环境。
+
+现在,你应该能够用 `user/spin` 去做测试了:父环境应该会 fork 出子环境,对它 `sys_yield()` 几次,但每次切换之后都会重新获得对 CPU 的控制权,最后杀死子环境后优雅地终止。
+```
+
+这是做回归测试的好机会。确保你没有弄坏本实验的前面部分,确保打开中断后它们仍能正常工作(如 `forktree`)。另外,尝试使用 `make CPUS=2 target` 在多个 CPU 上运行它。现在,你应该能够通过 `stresssched` 测试。可以运行 `make grade` 去确认。现在,你的得分应该是 65 分了(总分为 80)。
+
+##### 进程间通讯(IPC)
+
+(严格来说,在 JOS 中这是“环境间通讯”或 “IEC”,但所有人都称它为 IPC,因此我们使用这个标准的术语。)
+
+我们一直专注于操作系统的隔离部分,这就产生了一种错觉,好像每个程序都有一台机器完整地为它服务。操作系统的另一个重要服务是,在程序需要时允许它们之间相互通讯。让程序与其它程序交互可以让它的功能更加强大。Unix 的管道模型就是一个经典的示例。
+
+进程间通讯有许多模型。关于哪个模型最好的争论从来没有停止过。我们不去参与这种争论。相反,我们将要实现一个简单的 IPC 机制,然后尝试使用它。
+
+###### JOS 中的 IPC
+
+你将要去实现另外几个 JOS 内核的系统调用,由它们共同来提供一个简单的进程间通讯机制。你将要实现两个系统调用 `sys_ipc_recv` 和 `sys_ipc_try_send`,然后实现两个库函数 `ipc_recv` 和 `ipc_send` 来封装它们。
+
+用户环境可以使用 JOS 的 IPC 机制相互发送“消息”,这些消息由两部分组成:一个单独的 32
位值,和可选的一个单个页映射。允许环境在消息中传递页映射,提供了一个高效的方式,传输比一个仅适合单个的 32 位整数更多的数据,并且也允许环境去轻松地设置安排共享内存。 + +###### 发送和接收消息 + +一个环境通过调用 `sys_ipc_recv` 去接收消息。这个系统调用将取消对当前环境的调度,并且不会再次去运行它,直到消息被接收为止。当一个环境正在等待接收一个消息时,任何其它环境都能够给它发送一个消息 — 而不仅是一个特定的环境,而且不仅是与接收环境有父子关系的环境。换句话说,你在 Part A 中实现的权限检查将不会应用到 IPC 上,因为 IPC 系统调用是经过慎重设计的,因此可以认为它是“安全的”:一个环境并不能通过给它发送消息导致另一个环境发生故障(除非目标环境也存在 Bug)。 + +尝试去发送一个值时,一个环境使用接收者的 ID 和要发送的值去调用 `sys_ipc_try_send` 来发送。如果指定的环境正在接收(它调用了 `sys_ipc_recv`,但尚未收到值),那么这个环境将去发送消息并返回 0。否则将返回 `-E_IPC_NOT_RECV` 来表示目标环境当前不希望来接收值。 + +在用户空间中的一个库函数 `ipc_recv` 将去调用 `sys_ipc_recv`,然后,在当前环境的 `struct Env` 中查找关于接收到的值的相关信息。 + +同样,一个库函数 `ipc_send` 将去不停地调用 `sys_ipc_try_send` 来发送消息,直到发送成功为止。 + +###### 转移页 + +当一个环境使用一个有效的 `dstva` 参数(低于 `UTOP`)去调用 `sys_ipc_recv` 时,环境将声明愿意去接收一个页映射。如果发送方发送一个页,那么那个页应该会被映射到接收者地址空间的 `dstva` 处。如果接收者在 `dstva` 已经有了一个页映射,那么已存在的那个页映射将被取消映射。 + +当一个环境使用一个有效的 `srcva` 参数(低于 `UTOP`)去调用 `sys_ipc_try_send` 时,意味着发送方希望使用 `perm` 权限去发送当前映射在 `srcva` 处的页给接收方。在 IPC 成功之后,发送方在它的地址空间中,保留了它最初映射到 `srcva` 位置的页。而接收方也获得了最初由它指定的、在它的地址空间中的 `dstva` 处的、映射到相同物理页的映射。最后的结果是,这个页成为发送方和接收方共享的页。 + +如果发送方和接收方都没有表示要转移这个页,那么就不会有页被转移。在任何 IPC 之后,内核将在接收方的 `Env` 结构上设置新的 `env_ipc_perm` 字段,以允许接收页,或者将它设置为 0,表示不再接收。 + +###### 实现 IPC + +```markdown +练习 15、实现 `kern/syscall.c` 中的 `sys_ipc_recv` 和 `sys_ipc_try_send`。在实现它们之前一起阅读它们的注释信息,因为它们要一起工作。当你在这些程序中调用 `envid2env` 时,你应该去设置 `checkperm` 的标志为 0,这意味着允许任何环境去发送 IPC 消息到另外的环境,并且内核除了验证目标 envid 是否有效外,不做特别的权限检查。 + +接着实现 `lib/ipc.c` 中的 `ipc_recv` 和 `ipc_send` 函数。 + +使用 `user/pingpong` 和 `user/primes` 函数去测试你的 IPC 机制。`user/primes` 将为每个质数生成一个新环境,直到 JOS 耗尽环境为止。你可能会发现,阅读 `user/primes.c` 非常有趣,你将看到所有的 fork 和 IPC 都是在幕后进行。 +``` + +``` +小挑战!为什么 `ipc_send` 要循环调用?修改系统调用接口,让它不去循环。确保你能处理多个环境尝试同时发送消息到一个环境上的情况。 +``` + +```markdown +小挑战!质数筛选是在大规模并发程序中传递消息的一个很巧妙的用法。阅读 C. A. R. Hoare 写的 《Communicating Sequential Processes》,Communications of the ACM_ 21(8) (August 1978), 666-667,并去实现矩阵乘法示例。 +``` + +```markdown +小挑战!控制消息传递的最令人印象深刻的一个例子是,Doug McIlroy 的幂序列计算器,它在 [M. 
Douglas McIlroy,《Squinting at Power Series》,Software--Practice and Experience, 20(7) (July 1990),661-683][4] 中做了详细描述。实现了它的幂序列计算器,并且计算了 _sin_ ( _x_ + _x_ ^3) 的幂序列。 +``` + +```markdown +小挑战!通过应用 Liedtke 的论文([通过内核设计改善 IPC 性能][5])中的一些技术、或你可以想到的其它技巧,来让 JOS 的 IPC 机制更高效。为此,你可以随意修改内核的系统调用 API,只要你的代码向后兼容我们的评级脚本就行。 +``` + +**Part C 到此结束了。**确保你通过了所有的评级测试,并且不要忘了将你的小挑战的答案写入到 `answers-lab4.txt` 中。 + +在动手实验之前, 使用 `git status` 和 `git diff` 去检查你的更改,并且不要忘了去使用 `git add answers-lab4.txt` 添加你的小挑战的答案。在你全部完成后,使用 `git commit -am 'my solutions to lab 4’` 提交你的更改,然后 `make handin` 并关注它的动向。 + +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/ + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://pdos.csail.mit.edu +[b]: https://github.com/lujun9972 +[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/uvpt.html +[2]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/i386/toc.htm +[3]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/ia32/IA32-3A.pdf +[4]: https://swtch.com/~rsc/thread/squint.pdf +[5]: http://dl.acm.org/citation.cfm?id=168633 \ No newline at end of file diff --git a/translated/tech/20181017 Design faster web pages, part 2- Image replacement.md b/translated/tech/20181017 Design faster web pages, part 2- Image replacement.md new file mode 100644 index 0000000000..55631b4713 --- /dev/null +++ b/translated/tech/20181017 Design faster web pages, part 2- Image replacement.md @@ -0,0 +1,177 @@ +设计更快的网页(二):图片替换 +====== +![](https://fedoramagazine.org/wp-content/uploads/2018/03/fasterwebsites2-816x345.jpg) + + +欢迎回到我们为了构建更快网页所写的系列文章。上一篇[文章][1]讨论了只通过图片压缩实现这个目标的方法。这个例子从一开始有 1.2MB 的“浏览器脂肪”,然后它减轻到了 488.9KB 的大小。但这还不够快!那么本文继续来给浏览器“减肥”。你可能在这个过程中会认为我们所做的事情有点疯狂,但一旦完成,你就会明白为什么要这么做了。 + +### 准备工作 + +本文再次从对网页的分析开始。使用 Firefox 
内置的截图功能来对整个页面进行截图。你还需要[用 sudo][2] 来安装 Inkscape: + +``` +$ sudo dnf install inkscape +``` + +如果你想了解 Inkscape 的用法,Fedora 杂志上有几篇现成的[文章][3]。本文仅会介绍一些基本的 SVG 优化方法以供 Web 使用。 + +### 分析 + +我们再来用 [getfedora.org][4] 的网页来举例。 + +![Getfedora 的页面,对其中的图片做了标记][5] + +这次分析更好地以图形方式完成,这也就是它从屏幕截图开始的原因。上面的截图标记了页面中的所有图形元素。Fedora 网站团队已经针对两种情况措施(也有可能是四种,这样更好)来替换图像了。社交媒体的图标变成了字体的字形,而语言选择器变成了 SVG. + +我们有几个可以替换的选择: + + ++ CSS3 ++ 字体 ++ SVG ++ HTML5 Canvas + + +#### HTML5 Canvas + +简单来说,HTML5 Canvas 是一种 HTML 元素,它允许你借助脚本语言(通常是 JavaScript)在上面绘图,不过它现在还没有被广泛使用。因为它可以使用脚本语言来绘制,所以这个元素也可以用来做动画。这里有一些使用 HTML Canvas 实现的实例,比如[三角形模式][6]、[动态波浪][7]和[字体动画][8]。不过,在这种情况下,似乎这也不是最好的选择。 + +#### CSS3 + +使用层叠式样式表,你可以绘制图形,甚至可以让它们动起来。CSS 常被用来绘制按钮等元素。然而,使用 CSS 绘制的更复杂的图形通常只能在技术演示页面中看到。这是因为使用视觉来制作图形依然要比使用代码来的更快一些。 + +#### 字体 + +另外一种方式是使用字体来装饰网页,[Fontawesome][9] 在这方面很流行。比如,在这个例子中你可以使用字体来替换“风味”和“旋转”的图标。这种方法有一个负面影响,但解决起来很容易,我们会在本系列的下一部分中来介绍。 + +#### SVG + +这种图形格式已经存在了很长时间,而且它总是在浏览器中被使用。有很长一段时间并非所有浏览器都支持它,不过现在这已经成为历史了。所以,本例中图形替换的最佳方法是使用 SVG. + +### 为网页优化 SVG + +优化 SVG 以供互联网使用,需要几个步骤。 + +SVG 是一种 XML 方言。它用节点来描述圆形、矩形或文本路径等组件。每个节点都是一个 XML 元素。为了保证代码简洁,SVG 应该包含尽可能少的元素。 + +我们选用的 SVG 实例是带有一个咖啡杯的圆形图标。你有三种选项来用 SVG 描述它。 + +#### 一个圆形元素,上面有一个咖啡杯 + +``` + +``` + +#### 一个圆形路径,上面有一个咖啡杯 + +``` + +``` + +#### 单一路径 + +``` + +``` + +你应该可以看出,代码变得越来越复杂,需要更多的字符来描述它。当然,文件中包含更多的字符,就会导致更大的尺寸。 + +#### 节点清理 + +如果你在 Inkscape 中打开了实例 SVG 按下 F2,就会激活一个节点工具。你应该看到这样的界面: + +![Inkscape - 激活节点工具][10] + +这个例子中有五个不必要的节点——就是直线中间的那些。要删除它们,你可以使用已激活的节点工具依次选中它们,并按下 **Del** 键。然后,选中这条线的定义节点,并使用工具栏的工具把它们重新做成角。 + +![Inkscape - 将节点变成角的工具][11] + +如果不修复这些角,我们还有方法可以定义这条曲线,这条曲线会被保存,也就会增加文件体积。你可以手动清理这些节点,因为它无法有效的自动完成。现在,你已经为下一阶段做好了准备。 + +使用_另存为_功能,并选择_优化的 SVG_。这会弹出一个窗口,你可以在里面选择移除或保留哪些成分。 + +![Inkscape - “另存为”“优化的 SVG”][12] + +虽然这个 SVG 实例很小,但它还是从 3.2KB 减小到了 920 字节,不到原有的三分之一。 + +回到 getfedora 的页面:页面主要部分的背景中的灰色沃罗诺伊图,在经过本系列第一篇文章中的优化处理之后,从原先的 211.12 KB 减小到了 164.1 KB. 
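
上面三种描述方式的差别,可以用一个去掉咖啡杯图形、只保留圆形的极简示意来体会(坐标与颜色均为虚构的示例值):

```
<!-- 方式一:直接用 circle 元素描述圆形 -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="#aaa"/>
</svg>

<!-- 方式二:用 path 的圆弧命令描述同一个圆,所需字符明显更多 -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <path d="M 10,50 a 40,40 0 1,0 80,0 a 40,40 0 1,0 -80,0" fill="#aaa"/>
</svg>
```

如果再把咖啡杯的轮廓合并进来形成单一路径,`d` 属性还会进一步膨胀,这正是上文所说“代码越来越复杂、文件越来越大”的原因。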
+
+页面中导出的原始 SVG 有 1.9 MB 大小。经过这些 SVG 优化步骤后,它只有 500.4 KB 了。还是太大了?好吧,现在的蓝色背景的体积是 564.98 KB,SVG 和 PNG 之间只有很小的差别。
+
+#### 压缩文件
+
+```
+$ ls -lh
+insgesamt 928K
+-rw-r--r--. 1 user user 161K 19. Feb 19:44 grey-pattern.png
+-rw-rw-r--. 1 user user 160K 18. Feb 12:23 grey-pattern.png.gz
+-rw-r--r--. 1 user user 489K 19. Feb 19:43 greyscale-pattern-opti.svg
+-rw-rw-r--. 1 user user 112K 19. Feb 19:05 greyscale-pattern-opti.svg.gz
+```
+
+这是我为可视化这个主题所做的一个小测试的输出。你应该可以看到,光栅图形(PNG)已经被压缩过了,无法再被进一步压缩。而 SVG 正相反,它是 XML 文本文件,可以被压缩至原有体积的四分之一不到。因此,现在它的体积要比 PNG 小 50 KB 左右。
+
+现代浏览器可以以原生方式处理压缩文件。所以,许多 Web 服务器都打开了 mod_deflate(Apache)和 gzip(Nginx)模块。这样我们就可以在传输过程中节省空间。你可以在[这儿][13]检查你的服务器是不是启用了它。
+
+### 生产工具
+
+首先,没有人希望每次都要用 Inkscape 来优化 SVG。你可以在命令行中脱离 GUI 来运行 Inkscape,但你找不到将 Inkscape SVG 转换成优化的 SVG 的选项,用这种方式只能导出光栅图像。好在我们还有替代品:
+
+ * SVGO(看起来开发过程已经不活跃了)
+ * Scour
+
+
+
+本例中我们使用 scour 来进行优化。先来安装它:
+
+```
+$ sudo dnf install scour
+```
+
+要想自动优化 SVG 文件,请运行 scour,就像这样:
+
+```
+[user@localhost ]$ scour INPUT.svg OUTPUT.svg -p 3 --create-groups --renderer-workaround --strip-xml-prolog --remove-descriptive-elements --enable-comment-stripping --disable-embed-rasters --no-line-breaks --enable-id-stripping --shorten-ids
+```
+
+这就是第二部分的结尾了。在这部分中,你应该学会了如何将光栅图像替换成 SVG,并对它进行优化以供 Web 使用。请继续关注 Fedora 杂志,第三篇即将出炉。
+
+
+--------------------------------------------------------------------------------
+
+via: https://fedoramagazine.org/design-faster-web-pages-part-2-image-replacement/
+
+作者:[Sirko Kemter][a]
+选题:[lujun9972][b]
+译者:[StdioA](https://github.com/StdioA)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/gnokii/
+[b]: https://github.com/lujun9972
+[1]: https://wp.me/p3XX0v-5fJ
+[2]: https://fedoramagazine.org/howto-use-sudo/
+[3]: https://fedoramagazine.org/?s=Inkscape
+[4]: https://getfedora.org
+[5]: https://fedoramagazine.org/wp-content/uploads/2018/02/getfedora_mag.png
+[6]: https://codepen.io/Cthulahoop/pen/umcvo
+[7]: https://codepen.io/jackrugile/pen/BvLHg
+[8]: https://codepen.io/tholman/pen/lDLhk
+[9]: https://fontawesome.com/
+[10]: https://fedoramagazine.org/wp-content/uploads/2018/02/svg-optimization-nodes.png
+[11]: https://fedoramagazine.org/wp-content/uploads/2018/02/node_cleaning.png
+[12]: https://fedoramagazine.org/wp-content/uploads/2018/02/svg-optimizing-dialog.png
+[13]: https://checkgzipcompression.com/?url=http%3A%2F%2Fgetfedora.org
diff --git a/translated/tech/20181024 Get organized at the Linux command line with Calcurse.md b/translated/tech/20181024 Get organized at the Linux command line with Calcurse.md
new file mode 100644
index 0000000000..6b6622dc5a
--- /dev/null
+++ b/translated/tech/20181024 Get organized at the Linux command line with Calcurse.md
@@ -0,0 +1,86 @@
+使用 Calcurse 在 Linux 命令行中组织任务
+======
+
+使用 Calcurse 掌握你的日历和待办事项列表。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT)
+
+你是否需要复杂、功能丰富的图形或 Web 程序才能保持井井有条?我不这么认为。正确的命令行工具可以完成工作并且做得很好。
+
+当然,说出命令行这个词可能会让一些 Linux 用户感到害怕。对他们来说,命令行是未知领域。
+
+使用 [Calcurse][1],你可以轻松地在命令行中组织任务。Calcurse 给基于文本的界面带来了图形化的外观,它把命令行的简洁与易用性和方便的导航结合在了一起。
+
+让我们仔细看看 Calcurse,它是在 BSD 许可证下开源的。
+
+### 获取软件
+
+如果你喜欢编译代码(我通常不喜欢),你可以从 [Calcurse 网站][1] 获取源码。否则,根据你的 Linux 发行版获取[二进制安装程序][2]。你甚至可以从 Linux 发行版的软件包管理器中获取 Calcurse。检查一下不会有错的。
+
+编译或安装 Calcurse 后(两者都不用太长时间),你就可以开始使用了。
+
+### 使用 Calcurse
+
+打开终端并输入 **calcurse**。
+
+![](https://opensource.com/sites/default/files/uploads/calcurse-main.png)
+
+Calcurse 的界面由三个面板组成:
+
+  * 预约(屏幕左侧)
+  * 日历(右上角)
+  * 待办事项清单(右下角)
+
+
+
+
+按键盘上的 Tab 键在面板之间移动。要在面板中添加新项目,请按下 **a**。Calcurse 将指导你完成添加项目所需的操作。
+
+一个有趣的地方是,预约面板和日历面板是联动的。你选中日历面板并添加一个预约:在那里,你选择预约的日期;完成后,你会回到预约面板。我知道……
+
+按下 **a** 来设置开始时间、持续时间(以分钟为单位)和预约说明。开始时间和持续时间是可选的。Calcurse 会在预约到期的那天显示它。
+
+![](https://opensource.com/sites/default/files/uploads/calcurse-appointment.png)
+
+一天的预约看起来像:
 
+
+![](https://opensource.com/sites/default/files/uploads/calcurse-appt-list.png)
+
+待办事项列表独立运作。选中待办面板并(再次)按下 **a**。输入任务的描述,然后设置优先级(1 表示最高,9 表示最低)。Calcurse 会在待办事项面板中列出未完成的任务。
+
+![](https://opensource.com/sites/default/files/uploads/calcurse-todo.png)
+
+如果你的任务描述很长,那么 Calcurse 会截断它。你可以使用键盘上的向上或向下箭头键浏览任务,然后按下 **v** 查看描述。
+
+![](https://opensource.com/sites/default/files/uploads/calcurse-view-todo.png)
+
+Calcurse 将其信息以文本形式保存在你的主目录下名为 **.calcurse** 的隐藏文件夹中,例如 **/home/scott/.calcurse**。如果 Calcurse 停止工作,也很容易找到你的信息。
+
+### 其他有用的功能
+
+Calcurse 的其他功能包括设置重复预约。要执行此操作,找出要重复的预约,然后在预约面板中按下 **r**。系统会要求你设置频率(例如,每天或每周)以及你希望预约重复多长时间。
+
+你还可以导入 [ICAL][3] 格式的日历,或以 ICAL 或 [PCAL][4] 格式导出数据。使用 ICAL,你可以与其他日历程序共享数据;使用 PCAL,你可以生成日历的 PostScript 版本。
+
+你还可以将许多命令行参数传递给 Calcurse,你可以[在文档中][5]阅读它们。
+
+虽然很简单,但 Calcurse 可以帮助你保持井井有条。你需要对自己的任务和预约多花一点心思,但你将能够更清楚自己需要做什么,以及需要去哪里。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/calcurse
+
+作者:[Scott Nesbitt][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/scottnesbitt
+[b]: https://github.com/lujun9972
+[1]: http://www.calcurse.org/
+[2]: http://www.calcurse.org/downloads/#packages
+[3]: https://tools.ietf.org/html/rfc2445
+[4]: http://pcal.sourceforge.net/
+[5]: http://www.calcurse.org/files/manual.chunked/ar01s04.html#_invocation
diff --git a/translated/tech/20181029 4 open source Android email clients.md b/translated/tech/20181029 4 open source Android email clients.md
new file mode 100644
index 0000000000..285b472234
--- /dev/null
+++ b/translated/tech/20181029 4 open source Android email clients.md
@@ -0,0 +1,77 @@
+四个开源的 Android 邮件客户端
+======
+Email 现在还没有绝迹,而且现在大部分邮件都来自于移动设备。
 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_mail_box_envelope_send_blue.jpg?itok=6Epj47H6) + +现在一些年轻人正将邮件称之为“老年人的交流方式”,然而事实却是邮件绝对还没有消亡。虽然[协作工具][1],社交媒体,和短信很常用,但是它们还没做好取代邮件这种必要的商业(和社交)通信工具。 + +考虑到邮件还没有消失,并且(很多研究表明)人们都是在移动设备上阅读邮件,拥有一个好的移动邮件客户端就变得很关键。如果你是一个想使用开源的邮件客户端的 Android 用户,事情就变得有点棘手了。 + +我们提供了四个开源的 Andorid 邮件客户端供选择。其中两个可以通过 Andorid 官方应用商店 [Google Play][2] 下载。你也可以在 [Fossdroid][3] 或者 [F-Droid][4] 这些开源 Android 应用库中找到他们。(下方有每个应用的具体下载方式。) +### K-9 Mail + +[K-9 Mail][5] 拥有几乎和 Android 一样长的历史——它起源于 Android 1.0 邮件客户端的一个补丁。它支持 IMAP 和 WebDAV、多用户、附件、emojis 和其他经典的邮件客户端功能。它的[用户文档][6]提供了关于安装、启动、安全、阅读和发送邮件等等的帮助。 + +K-9 基于 [Apache 2.0][7] 协议开源,[源码][8]可以从 GitHub 上获得. 应用可以从 [Google Play][9]、[Amazon][10] 和 [F-Droid][11] 上下载。 + +### p≡p + +正如它的全称,”Pretty Easy Privacy”说的那样,[p≡p][12] 主要关注于隐私和安全通信。它提供自动的、端到端的邮件和附件加密(但要求你的收件人也要能够加密邮件——否则,p≡p会警告你的邮件将不加密发出)。 + +你可以从 GitLab 获得[源码][13](基于 [GPLv3][14] 协议),并且可以从应用的官网上找到相应的[文档][15]。应用可以在 [Fossdroid][16] 上免费下载或者在 [Google Play][17] 上支付一点儿象征性的费用下载。 + +### InboxPager + +[InboxPager][18] 允许你通过 SSL/TLS 协议收发邮件信息,这也表明如果你的邮件提供商(比如 Gmail )没有默认开启这个功能的话,你可能要做一些设置。(幸运的是, InboxPager 提供了 Gmail的[设置教程][19]。)它同时也支持通过 OpenKeychain 应用进行 OpenPGP 机密。 + +InboxPager 基于 [GPLv3][20] 协议,其源码可从 GitHub 获得,并且应用可以从 [F-Droid][21] 下载。 + +### FairEmail + +[FairEmail][22] 是一个极简的邮件客户端,它的功能集中于读写信息,没有任何多余的可能拖慢客户端的功能。它支持多个帐号和用户,消息线程,加密等等。 + +它基于 [GPLv3][23] 协议开源,[源码][24]可以从GitHub上获得。你可以在 [Fossdroid][25] 上下载 FairEamil; 对 Google Play 版本感兴趣的人可以从 [testing the software][26] 获得应用。 + +肯定还有更多的开源 Android 客户端(或者上述软件的加强版本)——活跃的开发者们可以关注一下。如果你知道还有哪些优秀的应用,可以在评论里和我们分享。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/open-source-android-email-clients + +作者:[Opensource.com][a] +选题:[lujun9972][b] +译者:[zianglei][c] +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: 
https://opensource.com +[b]: https://github.com/lujun9972 +[c]: https://github.com/zianglei +[1]: https://opensource.com/alternatives/trello +[2]: https://play.google.com/store +[3]: https://fossdroid.com/ +[4]: https://f-droid.org/ +[5]: https://k9mail.github.io/ +[6]: https://k9mail.github.io/documentation.html +[7]: http://www.apache.org/licenses/LICENSE-2.0 +[8]: https://github.com/k9mail/k-9 +[9]: https://play.google.com/store/apps/details?id=com.fsck.k9 +[10]: https://www.amazon.com/K-9-Dog-Walkers-Mail/dp/B004JK61K0/ +[11]: https://f-droid.org/packages/com.fsck.k9/ +[12]: https://www.pep.security/android.html.en +[13]: https://pep-security.lu/gitlab/android/pep +[14]: https://pep-security.lu/gitlab/android/pep/blob/feature/material/LICENSE +[15]: https://www.pep.security/docs/ +[16]: https://fossdroid.com/a/p%E2%89%A1p.html +[17]: https://play.google.com/store/apps/details?id=security.pEp +[18]: https://github.com/itprojects/InboxPager +[19]: https://github.com/itprojects/InboxPager/blob/HEAD/README.md#gmail-configuration +[20]: https://github.com/itprojects/InboxPager/blob/c5641a6d644d001bd4cec520b5a96d7e588cb6ad/LICENSE +[21]: https://f-droid.org/en/packages/net.inbox.pager/ +[22]: https://email.faircode.eu/ +[23]: https://github.com/M66B/open-source-email/blob/master/LICENSE +[24]: https://github.com/M66B/open-source-email +[25]: https://fossdroid.com/a/fairemail.html +[26]: https://play.google.com/apps/testing/eu.faircode.email diff --git a/translated/tech/20181030 How To Analyze And Explore The Contents Of Docker Images.md b/translated/tech/20181030 How To Analyze And Explore The Contents Of Docker Images.md new file mode 100644 index 0000000000..8b0021bf26 --- /dev/null +++ b/translated/tech/20181030 How To Analyze And Explore The Contents Of Docker Images.md @@ -0,0 +1,94 @@ +如何分析并探索 Docker 容器镜像的内容 +====== +![](https://www.ostechnix.com/wp-content/uploads/2018/10/dive-tool-720x340.png) + +或许你已经了解到 Docker 
容器镜像是一个轻量、独立、含有运行某个应用所需全部软件的可执行包,这也是为什么容器镜像会经常被开发者用于构建和分发应用。假如你很好奇一个 Docker 镜像里面包含了什么东西,那么这篇简要的指南或许会帮助到你。今天,我们将学会使用一个名为 **Dive** 的工具来分析和探索 Docker 镜像每层的内容。通过分析 Docker 镜像,我们可以发现在各个层之间可能重复的文件并通过移除它们来减小 Docker 镜像的大小。Dive 工具不仅仅是一个 Docker 镜像分析工具,它还可以帮助我们来构建镜像。Dive 是一个用 Go 编程语言编写的免费开源工具。 + +### 安装 Dive + +首先从该项目的 [**发布页**][1] 下载最新版本,然后像下面展示的那样根据你所使用的发行版来安装它。 + +假如你正在使用 **Debian** 或者 **Ubuntu**,那么可以运行下面的命令来下载并安装它。 +``` +$ wget https://github.com/wagoodman/dive/releases/download/v0.0.8/dive_0.0.8_linux_amd64.deb +``` +``` +$ sudo apt install ./dive_0.0.8_linux_amd64.deb +``` + +**在 RHEL 或 CentOS 系统中** +``` +$ wget https://github.com/wagoodman/dive/releases/download/v0.0.8/dive_0.0.8_linux_amd64.rpm +``` +``` +$ sudo rpm -i dive_0.0.8_linux_amd64.rpm +``` + +Dive 也可以使用 [**Linuxbrew**][2] 包管理器来安装。 +``` +$ brew tap wagoodman/dive +``` +``` +$ brew install dive +``` + +至于其他的安装方法,请参考 [Dive 项目的 GitHub 网页][3]。 + +### 分析并探索 Docker 镜像的内容 + +要分析一个 Docker 镜像,只需要运行加上 Docker 镜像 ID的 dive 命令就可以了。你可以使用 `sudo docker images` 来得到 Docker 镜像的 ID。 +``` +$ sudo dive ea4c82dcd15a +``` + +上面命令中的 **ea4c82dcd15a** 是某个镜像的 id。 + +然后 Dive 命令将快速地分析给定 Docker 镜像的内容并将它在终端中展示出来。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/10/Dive-1.png) + +正如你在上面的截图中看到的那样,在终端的左边一栏列出了给定 Docker 镜像的各个层及其详细内容,浪费的空间大小等信息。右边一栏则给出了给定 Docker 镜像每一层的内容。你可以使用 **Ctrl+SPACEBAR** 来在左右栏之间切换,使用 **UP/DOWN** 上下键来在目录树中进行浏览。 + +下面是 `Dive` 的快捷键列表: + * **Ctrl+Spacebar** – 在左右栏之间切换 + * **Spacebar** – 展开或收起目录树 + * **Ctrl+A** – 文件树视图:展示或隐藏增加的文件 + * **Ctrl+R** – 文件树视图:展示或隐藏被移除的文件 + * **Ctrl+M** – 文件树视图:展示或隐藏被修改的文件 + * **Ctrl+U** – 文件树视图:展示或隐藏未修改的文件 + * **Ctrl+L** – 层视图:展示当前层的变化 + * **Ctrl+A** – 层视图:展示总的变化 + * **Ctrl+/** – 筛选文件 + * **Ctrl+C** – 退出 + +在上面的例子中,我使用了 `sudo` 权限,这是因为我的 Docker 镜像存储在 **/var/lib/docker/** 目录中。假如你的镜像保存在你的家目录 `$HOME`或者在其他不属于 `root` 用户的目录,你就没有必要使用 `sudo` 命令。 + +你还可以使用下面的单个命令来构建一个 Docker 镜像并立刻分析该镜像: +``` +$ dive build -t +``` + +Dive 工具仍处于 beta 阶段,所以可能会存在 bug。假如你遇到了 bug,请在该项目的 GitHub 主页上进行报告。 + 
+好了,这就是今天的全部内容。现在你知道如何使用 Dive 工具来探索和分析 Docker 容器镜像的内容以及利用它构建镜像。希望本文对你有所帮助。 + +更多精彩内容即将呈现,请保持关注! + +干杯! + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-analyze-and-explore-the-contents-of-docker-images/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://github.com/wagoodman/dive/releases +[2]: https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/ +[3]: https://github.com/wagoodman/dive \ No newline at end of file