pull from LCTT
This commit is contained in:
cycoe 2018-10-27 13:18:23 +08:00
commit 82fdd6f04b
77 changed files with 4770 additions and 3651 deletions

3
.gitmodules vendored
View File

@ -1,3 +0,0 @@
[submodule "comic"]
path = comic
url = https://wxy@github.com/LCTT/comic.git

View File

@ -1,3 +1,18 @@
language: c
script:
- sh ./scripts/check.sh
- ./scripts/badge.sh
branches:
only:
- master
except:
- gh-pages
git:
submodules: false
deploy:
provider: pages
skip_cleanup: true
github_token: $GITHUB_TOKEN
local_dir: build
on:
branch: master

View File

@ -1,14 +1,19 @@
简介
-------------------------------
![待翻译](https://lctt.github.io/TranslateProject/badge/sources.svg)
![翻译中](https://lctt.github.io/TranslateProject/badge/translating.svg)
![待校正](https://lctt.github.io/TranslateProject/badge/translated.svg)
![已发布](https://lctt.github.io/TranslateProject/badge/published.svg)
LCTT 是“Linux中国”[https://linux.cn/](https://linux.cn/))的翻译组,负责从国外优秀媒体翻译 Linux 相关的技术、资讯、杂文等内容。
[![Travis (.org)](https://img.shields.io/travis/LCTT/TranslateProject.svg)](https://travis-ci.org/LCTT/TranslateProject)
[![GitHub contributors](https://img.shields.io/github/contributors/LCTT/TranslateProject.svg)](https://github.com/LCTT/TranslateProject/graphs/contributors)
[![GitHub closed pull requests](https://img.shields.io/github/issues-pr-closed/LCTT/TranslateProject.svg)](https://github.com/LCTT/TranslateProject/pulls?q=is%3Apr+is%3Aclosed)
LCTT 已经拥有几百名活跃成员并欢迎更多的Linux志愿者加入我们的团队。
简介
-------------------------------
[LCTT](https://linux.cn/lctt/) 是“Linux中国”([https://linux.cn/](https://linux.cn/))的翻译组,负责从国外优秀媒体翻译 Linux 相关的技术、资讯、杂文等内容。
LCTT 已经拥有几百名活跃成员,并欢迎更多的 Linux 志愿者加入我们的团队。
![logo](https://linux.cn/static/image/common/lctt_logo.png)
@ -70,6 +75,10 @@ LCTT 的组成
* 2018/01/11 提升 lujun9972 成为核心成员,并加入选题组。
* 2018/02/20 遭遇 DMCA 仓库被封。
* 2018/05/15 提升 MjSeven 为核心成员。
* 2018/08/01 [发布 Linux 中国通证LCCN](https://linux.cn/article-9886-1.html)。
* 2018/08/17 提升 pityonline 为核心成员,担任校对,并接受他的建议采用 PR 审核模式。
* 2018/09/10 [LCTT 五周年](https://linux.cn/article-9999-1.html)。
* 2018/10/25 重构了 CI感谢 vizv、lujun9972、bestony。
核心成员
-------------------------------
@ -78,13 +87,16 @@ LCTT 的组成
- 组长 @wxy,
- 选题 @oska874,
- 选题 @lujun9972,
- 技术 @bestony,
- 校对 @jasminepeng,
- 校对 @pityonline,
- 钻石译者 @geekpi,
- 钻石译者 @qhwdw,
- 钻石译者 @GOLinux,
- 钻石译者 @ictlyh,
- 技术组长 @bestony,
- 漫画组长 @GHLandy,
- LFS 组长 @martin2011qi,
- 核心成员 @GHLandy,
- 核心成员 @martin2011qi,
- 核心成员 @ictlyh,
- 核心成员 @strugglingyouth,
- 核心成员 @FSSlc,
- 核心成员 @zpl1025,
@ -96,8 +108,6 @@ LCTT 的组成
- 核心成员 @Locez,
- 核心成员 @ucasFL,
- 核心成员 @rusking,
- 核心成员 @qhwdw,
- 核心成员 @lujun9972
- 核心成员 @MjSeven
- 前任选题 @DeadFire,
- 前任校对 @reinoir222,

1
comic

@ -1 +0,0 @@
Subproject commit e5db5b880dac1302ee0571ecaaa1f8ea7cf61901

View File

@ -1,51 +1,49 @@
# [使用 Argbash 来改进你的 Bash 脚本][1]
使用 Argbash 来改进你的 Bash 脚本
======
![](https://fedoramagazine.org/wp-content/uploads/2017/11/argbash-1-945x400.png)
你编写或维护过有意义的 bash 脚本吗如果回答是那么你可能希望它们以标准且健壮的方式接收命令行参数。Fedora 最近得到了[一个很好的附加组件][2],它可以帮助你生成更好的脚本。不用担心,它不会花费你很多时间或精力。
### 为什么 Argbash?
### 为什么需要 Argbash?
Bash 是一种解释性的命令行语言,没有标准库。因此,如果你编写 bash 脚本并希望命令行界面符合 [POSIX][3] 和 [GNU CLI][4] 标准,那么你只需习惯两个选项
Bash 是一种解释性的命令行语言,没有标准库。因此,如果你编写 bash 脚本并希望命令行界面符合 [POSIX][3] 和 [GNU CLI][4] 标准,那么你一般只有两种选择
1. 直接编写为脚本量身定制的参数解析功能(可使用内置的 `getopts`)。
2. 使用外部 bash 模块。
第一个选项看起来非常愚蠢,因为正确实现接口并非易事。但是,从 [Stack Overflow][5] 到 [Bash Hackers][6] wiki 的各种站点上,它被认为是最佳选择。
第一个选项看起来非常愚蠢,因为正确实现接口并非易事。但是,从 [Stack Overflow][5] 到 [Bash Hackers][6] wiki 的各种站点上,它被认为是最佳选择。
第二个选项看起来更聪明,但使用模块有它自己的问题。最大的问题是你必须将其代码与脚本捆绑在一起。这可能意味着:
* 你将库作为单独的文件分发
* 要么,你将库作为单独的文件分发
* 或者,在脚本的开头包含库代码
* 在脚本的开头包含库代码
有两个文件而不是一个是愚蠢的;但采用一个文件的话,会让一堆上千行的复杂代码污染了你的脚本。
有两个文件而不是一个是愚蠢的但一个文件会使用一串超过千行的复杂代码去污染你的脚本。to 校正:这句话原文不知该如何理解)
这是 Argbash [项目诞生][7]的主要原因。Argbash 是一个代码生成器,它为你的脚本生成一个量身定制的解析库。与其他 bash 模块的通用代码不同,它生成脚本所需的最少代码。此外,如果你不需要 100% 符合这些 CLI 标准,你可以生成更简单的代码。
这是 Argbash [项目诞生][7]的主要原因。Argbash 是一个代码生成器,它为你的脚本生成一个量身定制的解析库。与其他 bash 模块的通用代码不同,它生成你的脚本所需的最少代码。此外,如果你不需要 100% 符合那些 CLI 标准的话,你可以生成更简单的代码。
### 示例
### 分析
#### 分析
假设你要实现一个脚本,它可以在终端窗口中[绘制条形图][8],你可以通过多次重复选择一个字符来做到这一点。这意味着你需要从命令行获取以下信息:
假设你要实现一个脚本,它可以在终端窗口中[绘制条形图][8],你可以通过重复一个字符选定的次数来做到这一点。这意味着你需要从命令行获取以下信息:
* _这个字符是直线的元素。如果未指定使用破折号。_ 在命令行上,这将是单值位置参数 _character_,其默认值为 -。
* _哪个字符是组成该行的元素。如果未指定,使用破折号 `-`。_ 在命令行上,这是个单值定位参数 `character`,其默认值为 `-`。LCTT 译注:定位参数是指确定位置的参数,此处 `character` 需是命令行的第一个参数)
* _直线的长度。如果未指定会选择 `80`。_ 这是一个单值可选参数 `length`,默认值为 `80`
* _Verbose 模式用于调试。_ 这是一个布尔型参数 `verbose`,默认情况下关闭。
* _直线的长度。如果未指定会选择 80。_ 这是一个单值可选参数 _-length_,默认值为 80。
由于脚本的主体非常简单因此本文主要关注从命令行获取用户的输入到合适的脚本变量。Argbash 生成的代码会将参数解析结果保存到 shell 变量 `_arg_character`、`_arg_length` 和 `_arg_verbose` 当中
* _Verbose 模式用于调试。_ 这是一个布尔型参数 _verbose_,默认情况下关闭。
#### 执行
由于脚本的主体非常简单因此本文主要关注从命令行获取用户的输入到合适的脚本变量。Argbash 生成的代码将解析结果保存到 shell 变量 _arg\_character_, _arg\_length_ 和 _arg\_verbose_。
接下来,你还需要 `argbash-init``argbash` bash 脚本,它们是 argbash 包的一部分。因此,运行以下命令:
### 执行
要继续下去,你还需要 _argbash-init__argbash_ bash 脚本,它们是 _argbash_ 包的一部分。因此,运行以下命令:
```
sudo dnf install argbash
```
然后,使用 _argbash-init_ 来为 _argbash_ 生成模板,它会生成可执行脚本。你需要三个参数:一个名为 _character_ 的位置参数,一个可选的 _length_ 参数以及一个可选的布尔 _verbose_。将这些传递给 _argbash-init_,然后将输出传递给 _argbash_ :
然后,使用 `argbash-init` 来为 `argbash` 生成模板,它会生成可执行脚本。你需要三个参数:一个名为 `character` 的定位参数,一个可选的 `length` 参数以及一个可选的布尔 `verbose`。将这些传递给 `argbash-init`,然后将输出传递给 `argbash` :
```
argbash-init --pos character --opt length --opt-bool verbose script-template.sh
argbash script-template.sh -o script
@ -53,6 +51,7 @@ argbash script-template.sh -o script
```
看到帮助信息了吗?看起来该脚本不知道字符参数的默认选项。因此,看一下 [Argbash API][9],然后通过编辑脚本的模板部分来解决问题:
```
# ...
# ARG_OPTIONAL_SINGLE([length],[l],[Length of the line],[80])
@ -62,7 +61,8 @@ argbash script-template.sh -o script
# ...
```
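比如,假设 `ARG_POSITIONAL_SINGLE` 宏像 `ARG_OPTIONAL_SINGLE` 一样,把最后一个参数当作默认值(这只是一个示意写法,具体请以 Argbash 的 API 文档为准),补上 `character` 默认值之后的模板部分大致如下:
```
# ...
# ARG_POSITIONAL_SINGLE([character],[The element of the line],[-])
# ARG_OPTIONAL_SINGLE([length],[l],[Length of the line],[80])
# ...
```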
Argbash 非常智能,它试图让每个生成的脚本都成为自己的模板,这意味着你不必担心存储源模版以供进一步使用。你不应该丢失生成的 bash 脚本。现在尝试重新生成将来的线条绘图以按预期工作to 校正:这里不清楚)
Argbash 非常智能,它试图让每个生成的脚本都成为自己的模板,这意味着你不需要存储源模版以供进一步使用,只是不要丢掉生成的 bash 脚本。现在,试着重新生成这个线条绘图脚本,让它像预期那样工作:
```
argbash script -o script
./script
@ -72,24 +72,24 @@ argbash script -o script
### 结论
你可能会发现包含解析代码的部分很长,但考虑到它允许你调用 _./script.sh x -Vl50_,它将被理解为与 _./script -V -l 50 x_ 相同的方式。确实需要一些代码才能做到这一点。
你可能会发现包含解析代码的部分很长,但考虑到它允许你`./script.sh x -Vl50` 的方式调用,并且能像 `./script -V -l 50 x` 一样工作。确实需要一些代码才能做到这一点。
但是,通过调用 _argbash-init_ 并将参数 _-mode_ 设置为 _minimal_,你可以将生成的代码复杂度和解析能力之间的平衡转向更简单的代码。这个选项将脚本的大小减少了大约 20 行,这相当于生成的解析代码大小减少了大约 25%。另一方面,_full_ 选项使脚本更加智能。
但是,通过调用 `argbash-init` 并将参数 `--mode` 设置为 `minimal`,你可以平衡生成的代码复杂度和解析能力,而转向更简单的代码。这个选项将脚本的大小减少了大约 20 行,这相当于生成的解析代码大小减少了大约 25%。另一方面,`full` 模式使脚本更加智能。
如果你想要检查生成的代码,请给 _argbash_ 提供参数 _-commented_,它会将注释放入解析代码中,从而揭示各个部分背后的意图。与其他参数解析库相比较,如 [shflags][10], [argsparse][11] 或 [bash-modules/arguments][12],你将看到 Argbash 强大的简单性。如果出现了严重的错误你需要快速修复解析功能中的一个故障Argbash 也允许你这样做。
如果你想要检查生成的代码,请给 `argbash` 提供参数 `--commented`,它会将注释放入解析代码中,从而揭示各个部分背后的意图。与其他参数解析库相比较,如 [shflags][10]、[argsparse][11] 或 [bash-modules/arguments][12],你将看到 Argbash 强大的简单性。如果出现了严重的错误,你需要快速修复解析功能中的一个故障,Argbash 也允许你这样做。
由于你很有可能是 Fedora 用户,因此你可以享受从官方仓库安装命令行 Argbash 的便利。然而,在你的服务中还有一个[在线解析代码生成器][13]。此外,如果你在服务器上使用 Docker 工作,你可以试试 [Argbash Docker 镜像][14]。
由于你很有可能是 Fedora 用户,因此你可以享受从官方仓库安装命令行 Argbash 的便利。不过,也有一个[在线解析代码生成器][13]服务可以使用。此外,如果你在服务器上使用 Docker 工作,你可以试试 [Argbash Docker 镜像][14]。
因此,请享受并确保你的脚本具有令用户满意的命令行界面。Argbash 随时为你提供帮助,你只需付出很少的努力。
这样你可以让你的脚本具有令用户满意的命令行界面。Argbash 随时为你提供帮助,你只需付出很少的努力。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/improve-bash-scripts-argbash/
作者:[Matěj Týč ][a]
作者:[Matěj Týč][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,125 @@
一窥你安装的 Linux 软件包
======
> 这些最有用的命令可以让你了解安装在你的 Debian 类的 Linux 系统上的包的情况。
![](https://images.idgesg.net/images/article/2017/12/christmas-packages-100744371-large.jpg)
你有没有想过你的 Linux 系统上安装了几千个软件包? 是的,我说的是“千”。 即使是相当一般的 Linux 系统也可能安装了上千个软件包。 有很多方法可以获得这些包到底是什么包的详细信息。
首先,要在基于 Debian 的发行版(如 Ubuntu上快速得到已安装的软件包数量请使用 `apt list --installed` 如下:
```
$ apt list --installed | wc -l
2067
```
这个数字实际上多了一个,因为输出中包含了 “Listing ...” 作为它的第一行。 这个命令会更准确:
```
$ apt list --installed | grep -v "^Listing" | wc -l
2066
```
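如果想换一种不依赖 `apt` 输出格式的方式来核对这个数字,也可以直接查询 dpkg 数据库,统计状态为 `ii`(已正常安装)的软件包(示意命令):
```
$ dpkg-query -W -f '${db:Status-Abbrev} ${binary:Package}\n' | grep -c '^ii'
```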
要获得所有这些包的详细信息,请按以下方式浏览列表:
```
$ apt list --installed | more
Listing...
a11y-profile-manager-indicator/xenial,now 0.1.10-0ubuntu3 amd64 [installed]
account-plugin-aim/xenial,now 3.12.11-0ubuntu3 amd64 [installed]
account-plugin-facebook/xenial,xenial,now 0.12+16.04.20160126-0ubuntu1 all [installed]
account-plugin-flickr/xenial,xenial,now 0.12+16.04.20160126-0ubuntu1 all [installed]
account-plugin-google/xenial,xenial,now 0.12+16.04.20160126-0ubuntu1 all [installed]
account-plugin-jabber/xenial,now 3.12.11-0ubuntu3 amd64 [installed]
account-plugin-salut/xenial,now 3.12.11-0ubuntu3 amd64 [installed]
```
这需要观察很多细节 —— 特别是当你的眼睛要扫过全部 2000 多个软件包时。它包含包名称、版本等信息,但对我们人类来说,这并不是最容易解读的显示方式。`dpkg-query` 可以让描述更容易理解,但这些描述会塞满你的命令窗口,除非窗口非常宽。因此,为了让此篇文章更容易阅读,下面的数据显示已经分成了左右两侧。
左侧:
```
$ dpkg-query -l | more
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version
+++-==============================================-=================================-
ii a11y-profile-manager-indicator 0.1.10-0ubuntu3
ii account-plugin-aim 3.12.11-0ubuntu3
ii account-plugin-facebook 0.12+16.04.20160126-0ubuntu1
ii account-plugin-flickr 0.12+16.04.20160126-0ubuntu1
ii account-plugin-google 0.12+16.04.20160126-0ubuntu1
ii account-plugin-jabber 3.12.11-0ubuntu3
ii account-plugin-salut 3.12.11-0ubuntu3
ii account-plugin-twitter 0.12+16.04.20160126-0ubuntu1
rc account-plugin-windows-live 0.11+14.04.20140409.1-0ubuntu2
```
右侧:
```
Architecture Description
============-=====================================================================
amd64 Accessibility Profile Manager - Unity desktop indicator
amd64 Messaging account plugin for AIM
all GNOME Control Center account plugin for single signon - facebook
all GNOME Control Center account plugin for single signon - flickr
all GNOME Control Center account plugin for single signon
amd64 Messaging account plugin for Jabber/XMPP
amd64 Messaging account plugin for Local XMPP (Salut)
all GNOME Control Center account plugin for single signon - twitter
all GNOME Control Center account plugin for single signon - windows live
```
每行开头的 `ii``rc` 名称(见上文“左侧”)是包状态指示符。 第一个字母表示包的预期状态:
- `u` -- 未知
- `i` -- 安装
- `r` -- 移除/反安装
- `p` -- 清除(也包括配置文件)
- `h` -- 保留
第二个代表包的当前状态:
- `n` -- 未安装
- `i` -- 已安装
- `c` -- 配置文件(只安装了配置文件)
- `U` -- 未打包
- `F` -- 半配置(出于某些原因配置失败)
- `h` -- 半安装(出于某些原因配置失败)
- `W` -- 等待触发(该包等待另外一个包的触发器)
- `t` -- 待定触发(该包被触发)
在通常的双字符字段末尾添加的 `R` 表示需要重新安装。 你可能永远不会碰到这些。
快速查看整体包状态的一种简单方法是计算在不同状态中包含的包的数量:
```
$ dpkg-query -l | tail -n +6 | awk '{print $1}' | sort | uniq -c
2066 ii
134 rc
```
我从上面的 `dpkg-query` 输出中排除了前五行,因为这些是标题行,会混淆输出。
这两行基本上告诉我们,在这个系统上,应该安装了 2066 个软件包,而 134 个其他的软件包已被删除,但留下了配置文件。 你始终可以使用以下命令删除程序包的剩余配置文件:
```
$ sudo dpkg --purge xfont-mathml
```
请注意,如果程序包二进制文件和配置文件都已经安装了,则上面的命令将两者都删除。
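如果想一次性清理所有处于 “rc” 状态(已删除但保留了配置文件)的软件包,下面是一个常见的做法(示意命令;执行前最好先单独运行管道的前半部分,确认列出的包确实是你想清理的):
```
$ dpkg -l | awk '/^rc/ {print $2}' | xargs -r sudo dpkg --purge
```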
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3242808/linux/peeking-into-your-linux-packages.html
作者:[Sandra Henry-Stocker][a]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/

View File

@ -1,12 +1,13 @@
如何使用 Apache Web 服务器配置多个站点
=====
> 如何在流行而强大的 Apache Web 服务器上托管两个或多个站点。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/apache-feathers.jpg?itok=fnrpsu3G)
在我的[上一篇文章][1]中,我解释了如何为单个站点配置 Apache Web 服务器,事实证明这很容易。在这篇文章中,我将向你展示如何使用单个 Apache 实例来服务多个站点。
注意:我写这篇文章的环境是 Fedora 27 虚拟机,配置了 Apache 2.4.29。如果你有另一个 Fedora 的发行版,那么你使用的命令以及配置文件的位置和内容可能会有所不同。
注意:我写这篇文章的环境是 Fedora 27 虚拟机,配置了 Apache 2.4.29。如果你用另一个发行版或不同的 Fedora 版本,那么你使用的命令以及配置文件的位置和内容可能会有所不同。
正如我之前的文章中提到的Apache 的所有配置文件都位于 `/etc/httpd/conf``/etc/httpd/conf.d`。默认情况下,站点的数据位于 `/var/www` 中。对于多个站点,你需要提供多个位置,每个位置对应托管的站点。
@ -14,113 +15,93 @@
使用基于名称的虚拟主机,你可以为多个站点使用一个 IP 地址。现代 Web 服务器,包括 Apache使用指定 URL 的 `hostname` 部分来确定哪个虚拟 Web 主机响应页面请求。这仅仅需要比一个站点更多的配置。
即使你只从单个站点开始,我也建议你将其设置为虚拟主机,这样可以在以后更轻松地添加更多站点。在本文中,我将在上一篇文章中介绍我们停止的位置,因此你需要设置原始站点,即基于名称的虚拟站点。
即使你只从单个站点开始,我也建议你将其设置为虚拟主机,这样可以在以后更轻松地添加更多站点。在本文中,我将从上一篇文章中我们停止的地方开始,因此你需要设置原来的站点,即基于名称的虚拟站点。
### 准备原站点
### 准备原来的站点
在设置第二个站点之前,你需要为现有网站提供基于名称的虚拟主机。如果你现在没有站,[请返回并立即创建一个][1]。
在设置第二个站点之前,你需要为现有网站提供基于名称的虚拟主机。如果你现在没有站[请返回并立即创建一个][1]。
一旦你有了站点,将以下内容添加到 `/etc/httpd/conf/httpd.conf` 配置文件的底部(添加此内容是你需要对 `httpd.conf` 文件进行的唯一更改):
```
<VirtualHost 127.0.0.1:80>
    DocumentRoot /var/www/html
    ServerName www.site1.org
</VirtualHost>
```
这将是第一个虚拟主机节to 校正:这里虚拟主机节不太清除),它应该保持为第一个,以使其成为默认定义。这意味着通过 IP 地址或解析为此 IP 地址但没有特定命名主机配置节的其它名称对服务器的 HTTP 访问将定向到此虚拟主机。所有其它虚拟主机配置节都应遵循此节
这将是第一个虚拟主机配置节,它应该保持为第一个,以使其成为默认定义。这意味着,无论是直接通过 IP 地址访问服务器,还是通过解析到该 IP 地址、却没有对应命名主机配置节的其它名称访问服务器,这些 HTTP 请求都将被定向到此虚拟主机。所有其它虚拟主机配置节都应跟在此节之后。
你还需要在 `/etc/hosts` 中添加条目,为你的网站提供名称解析。上次,我们只使用了 `localhost` 的 IP 地址。通常,这可以由你所使用的任何名称服务来完成,例如 Google 或 Godaddy。对于你的测试网站,可以通过在 `/etc/hosts` 的 `localhost` 行中添加一个新名称来完成。把两个网站的条目都加上,方便你以后不需再次编辑此文件。结果如下:
```
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 www.site1.org www.site2.org
```
让我们将 `/var/www/html/index.html` 文件改变得更加明显一点。它应该看起来像这样(带有一些额外的文本来识别这是站点 1
```
<h1>Hello World</h1>
Web site 1.
```
重新启动 HTTPD 服务器,以启用对 `httpd` 配置的更改。然后,你可以在命令行中使用 Lynx 文本模式浏览器查看网站。
```
[root@testvm1 ~]# systemctl restart httpd
[root@testvm1 ~]# lynx www.site1.org
                                              Hello World
  Web site 1.
Hello World
Web site 1.
<snip>
Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back.
Arrow keys: Up and Down to move.  Right to follow a link; Left to go back.
Arrow keys: Up and Down to move. Right to follow a link; Left to go back.
H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list
```
你可以看到原始网站的修改内容,没有明显的错误,先按下 "Q" 键,然后按 "Y" 退出 Lynx Web 浏览器。
你可以看到原始网站的修改内容,没有明显的错误,先按下 `Q` 键,然后按 `Y` 退出 Lynx Web 浏览器。
### 配置第二个站点
现在你已经准备好建立第二个网站。使用以下命令创建新的网站目录结构:
```
[root@testvm1 html]# mkdir -p /var/www/html2
```
注意,第二个站点只是第二个 `html` 目录,与第一个站点位于同一 `/var/www` 目录下。
现在创建一个新的索引文件 `/var/www/html2/index.html`,其中包含以下内容(此索引文件稍有不同,以区别于原始网站):
现在创建一个新的索引文件 `/var/www/html2/index.html`,其中包含以下内容(此索引文件稍有不同,以区别于原来的网站):
```
<h1>Hello World -- Again</h1>
Web site 2.
```
`httpd.conf` 中为第二个站点创建一个新的配置节,并将其放在上一个虚拟主机节下面(这两个应该看起来非常相似)。此节告诉 Web 服务器在哪里可以找到第二个站点的 HTML 文件。
`httpd.conf` 中为第二个站点创建一个新的配置节,并将其放在上一个虚拟主机配置节下面(这两个应该看起来非常相似)。此节告诉 Web 服务器在哪里可以找到第二个站点的 HTML 文件。
```
<VirtualHost 127.0.0.1:80>
    DocumentRoot /var/www/html2
    ServerName www.site2.org
</VirtualHost>
```
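在重启之前,可以先用 Apache 自带的工具检查一下配置语法,并列出它实际识别到的虚拟主机(示意命令,输出的具体格式因版本而异):
```
[root@testvm1 httpd]# apachectl configtest
[root@testvm1 httpd]# httpd -S
```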
重启 HTTPD并使用 Lynx 来查看结果。
```
[root@testvm1 httpd]# systemctl restart httpd
[root@testvm1 httpd]# lynx www.site2.org
Hello World -- Again
                                    Hello World -- Again
   Web site 2.
Web site 2.
<snip>
Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back.
Arrow keys: Up and Down to move.  Right to follow a link; Left to go back.
Arrow keys: Up and Down to move. Right to follow a link; Left to go back.
H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list
```
@ -144,10 +125,10 @@ via: https://opensource.com/article/18/3/configuring-multiple-web-sites-apache
作者:[David Both][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/article/18/2/how-configure-apache-web-server
[1]:https://linux.cn/article-9506-1.html
[2]:https://httpd.apache.org/docs/2.4/

View File

@ -1,13 +1,13 @@
用 GNOME Boxes 下载一个镜像
用 GNOME Boxes 下载一个操作系统镜像
======
![](https://fedoramagazine.org/wp-content/uploads/2018/06/boxes-install-os-816x345.jpg)
Boxes 是 GNOME 上的虚拟机应用。最近 Boxes 添加了一个新的特性,使得它在运行不同的 Linux 发行版时更加容易。你现在可以在 Boxes 中自动安装列表中这些发行版。该列表甚至包括红帽企业 Linux。红帽开发人员计划包括[免费订阅红帽企业版 Linux][1]。 使用[红帽开发者][2]帐户Boxes 可以自动设置一个名为 Developer Suite 订阅的 RHEL 虚拟机。 下面是它的工作原理。
Boxes 是 GNOME 上的虚拟机应用。最近 Boxes 添加了一个新的特性,使得它在运行不同的 Linux 发行版时更加容易。你现在可以在 Boxes 中自动安装那些发行版以及像 FreeBSD 和 FreeDOS 这样的操作系统,甚至还包括红帽企业 Linux。红帽开发者计划包括了一个[红帽企业版 Linux 的免费订阅][1]。使用[红帽开发者][2]帐户Boxes 可以自动设置一个名为 Developer Suite 订阅的 RHEL 虚拟机。下面是它的工作原理。
### 红帽企业版 Linux
### 红帽企业版 Linux
要创建一个红帽企业版 Linux 的虚拟机,启动 Boxes点击新建。从源选择列表中选择下载一个镜像。在顶部点击红帽企业版 Linux。这将会打开网址为 [developers.redhat.com][2] 的一个网络表单。使用已有的红帽开发者账号登录,或是新建一个。
要创建一个红帽企业版 Linux 的虚拟机,启动 Boxes点击“新建”。从源选择列表中选择“下载一个镜像”。在顶部,点击“红帽企业版 Linux”。这将会打开网址为 [developers.redhat.com][2] 的一个 Web 表单。使用已有的红帽开发者账号登录,或是新建一个。
![][3]
@ -15,11 +15,11 @@ Boxes 是 GNOME 上的虚拟机应用。最近 Boxes 添加了一个新的特性
![][5]
点击提交,然后就会开始下载安装磁盘镜像。下载需要的时间取决于你的网络状况。在这期间你可以去喝杯茶或者咖啡歇息一下。
点击提交,然后就会开始下载安装磁盘镜像。下载需要的时间取决于你的网络状况。在这期间你可以去喝杯茶或者咖啡歇息一下。
![][6]
媒体下载完成(一般位于 ~/Downloads Boxes 会有一个快速安装的显示。填入账号和密码然后点击继续,当你确认了虚拟机的信息之后点击创建。快速安装会自动完成接下来的整个安装!(现在你可以去享受你的第二杯茶或者咖啡了)
介质下载完成后(一般位于 `~/Downloads`Boxes 会有一个“快速安装”的显示。填入账号和密码,然后点击“继续”,当你确认了虚拟机的信息之后点击“创建”。“快速安装”会自动完成接下来的整个安装!(现在你可以去享受你的第二杯茶或者咖啡了)
![][7]
@ -27,7 +27,7 @@ Boxes 是 GNOME 上的虚拟机应用。最近 Boxes 添加了一个新的特性
![][9]
等到安装结束,虚拟机会直接重启并登录到桌面。在虚拟机里,在应用菜单的系统工具一栏启动红帽订阅管理。这一步需要输入管理员密码。
等到安装结束,虚拟机会直接重启并登录到桌面。在虚拟机里,在应用菜单的“系统工具”一栏启动“红帽订阅管理”。这一步需要输入 root 密码。
![][10]
@ -37,13 +37,13 @@ Boxes 是 GNOME 上的虚拟机应用。最近 Boxes 添加了一个新的特性
![][12]
现在你可以通过任何一种更新方法,像是 yum 或是 GNOME Software 进行下载和更新了。
现在你可以通过任何一种更新方法,像是 `yum` 或是 GNOME Software 进行下载和更新了。
![][13]
### FreeDOS 或是其他
Boxes 可以安装很多的 Linux 发行版,而不仅仅只是红帽企业版。 作为 KVM 和 qemu 的前端Boxes 支持各种操作系统。 使用 [libosinfo][14]Boxes 可以自动下载(在某些情况下安装)相当多不同操作系统。
Boxes 可以安装很多操作系统,而不仅仅只是红帽企业版。 作为 KVM 和 qemu 的前端Boxes 支持各种操作系统。使用 [libosinfo][14]Boxes 可以自动下载(在某些情况下安装)相当多不同操作系统。
![][15]
@ -53,13 +53,23 @@ Boxes 可以安装很多的 Linux 发行版,而不仅仅只是红帽企业版
![][17]
### 在 Boxes 上受欢迎的操作系统
### Boxes 上流行的操作系统
这里仅仅是一些目前在它上面比较受欢迎的选择。
![][18]![][19]![][20]![][21]![][22]![][23]
![][18]
Fedora 会定期更新它的操作系统信息数据库。确保你会经常检查是否有新的操作系统选项。
![][19]
![][20]
![][21]
![][22]
![][23]
Fedora 会定期更新它的操作系统信息数据库osinfo-db。确保你会经常检查是否有新的操作系统选项。
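如果你想从命令行查看(或更新)这份数据库,可以参考下面的示意命令。这里假设系统中已经安装了提供 `osinfo-query` 的 libosinfo 相关软件包,并且 osinfo-db 以同名软件包发布:
```
$ sudo dnf update osinfo-db
$ osinfo-query os | less
```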
--------------------------------------------------------------------------------
@ -69,7 +79,7 @@ via: https://fedoramagazine.org/download-os-gnome-boxes/
作者:[Link Dupont][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[dianbanjiu](https://github.com/dianbanjiu)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,42 +1,39 @@
为什么 Python 这么慢?
============================================================
==========
Python 现在越来越火,已经迅速扩张到包括 DevOps、数据科学、web 开发、信息安全等各个领域当中。
Python 现在越来越火,已经迅速扩张到包括 DevOps、数据科学、Web 开发、信息安全等各个领域当中。
然而,相比起 Python 扩张的速度Python 代码的运行速度就显得有点逊色了。
![](https://cdn-images-1.medium.com/max/1200/0*M2qZQsVnDS-4i5zc.jpg)
> 在代码运行速度方面Java、C、C++、C和 Python 要如何进行比较呢?并没有一个放之四海而皆准的标准,因为具体结果很大程度上取决于运行的程序类型,而<ruby>语言基准测试<rt>Computer Language Benchmarks Games</rt></ruby>可以作为[衡量的一个方面][5]。
> 在代码运行速度方面Java、C、C++、C# 和 Python 要如何进行比较呢?并没有一个放之四海而皆准的标准,因为具体结果很大程度上取决于运行的程序类型,而<ruby>语言基准测试<rt>Computer Language Benchmarks Games</rt></ruby>可以作为[衡量的一个方面][5]。
根据我这些年来进行语言基准测试的经验来看Python 比很多语言运行起来都要慢。无论是使用 [JIT][7] 编译器的 C、Java还是使用 [AOT][8] 编译器的 C、C ++,又或者是 JavaScript 这些解释型语言Python 都[比它们运行得慢][6]。
根据我这些年来进行语言基准测试的经验来看Python 比很多语言运行起来都要慢。无论是使用 [JIT][7] 编译器的 C、Java还是使用 [AOT][8] 编译器的 C、C++,又或者是 JavaScript 这些解释型语言Python 都[比它们运行得慢][6]。
注意:对于文中的 Python ,一般指 CPython 这个官方的实现。当然我也会在本文中提到其它语言的 Python 实现。
注意:对于文中的 Python ,一般指 CPython 这个官方的实现。当然我也会在本文中提到其它语言的 Python 实现。
> 我要回答的是这个问题对于一个类似的程序Python 要比其它语言慢 2 到 10 倍不等,这其中的原因是什么?又有没有改善的方法呢?
主流的说法有这些:
* “是<ruby>全局解释器锁<rt>Global Interpreter Lock</rt></ruby>GIL的原因”
* “是因为 Python 是解释型语言而不是编译型语言”
* “是因为 Python 是一种动态类型的语言”
哪一个才是影响 Python 运行效率的主要原因呢?
### 是全局解释器锁的原因吗?
现在很多计算机都配备了具有多个核的 CPU ,有时甚至还会有多个处理器。为了更充分利用它们的处理能力,操作系统定义了一个称为线程的低级结构。某一个进程(例如 Chrome 浏览器可以建立多个线程在系统内执行不同的操作。在这种情况下CPU 密集型进程就可以跨核心共享负载了,这样的做法可以大大提高应用程序的运行效率。
现在很多计算机都配备了具有多个核的 CPU ,有时甚至还会有多个处理器。为了更充分利用它们的处理能力,操作系统定义了一个称为线程的低级结构。某一个进程(例如 Chrome 浏览器可以建立多个线程在系统内执行不同的操作。在这种情况下CPU 密集型进程就可以跨核心分担负载了,这样的做法可以大大提高应用程序的运行效率。
例如在我写这篇文章时,我的 Chrome 浏览器打开了 44 个线程。要知道的是,基于 POSIX 的操作系统(例如 Mac OS、Linux和 Windows 操作系统的线程结构、API 都是不同的,因此操作系统还负责对各个线程的调度。
例如在我写这篇文章时,我的 Chrome 浏览器打开了 44 个线程。需要提及的是,基于 POSIX 的操作系统(例如 Mac OS、Linux和 Windows 操作系统的线程结构、API 都是不同的,因此操作系统还负责对各个线程的调度。
如果你还没有写过多线程执行的代码,你就需要了解一下线程锁的概念了。多线程进程比单线程进程更为复杂,是因为需要使用线程锁来确保同一个内存地址中的数据不会被多个线程同时访问或更改。
CPython 解释器在创建变量时,首先会分配内存,然后对该变量的引用进行计数,这称为<ruby>引用计数<rt>reference counting</rt></ruby>。如果变量的引用数变为 0这个变量就会从内存中释放掉。这就是在 for 循环代码块内创建临时变量不会增加内存消耗的原因。
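可以用一条命令直观地看到引用计数(示意例子;注意 `sys.getrefcount()` 的返回值会把传给它的那个临时引用也算进去,所以会比你预期的多 1
```
$ python3 -c 'import sys; a = []; b = a; print(sys.getrefcount(a))'
3
```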
而当多个线程内共享一个变量时CPython 锁定引用计数的关键就在于使用了 GIL它会谨慎地控制线程的执行情况无论同时存在多少个线程每次只允许一个线程进行操作。
而当多个线程内共享一个变量时CPython 锁定引用计数的关键就在于使用了 GIL它会谨慎地控制线程的执行情况无论同时存在多少个线程解释器每次只允许一个线程进行操作。
#### 这会对 Python 程序的性能有什么影响?
@ -45,9 +42,10 @@ CPython 解释器在创建变量时,首先会分配内存,然后对该变量
但如果你通过在单进程中使用多线程实现并发,并且是 IO 密集型(例如网络 IO 或磁盘 IO的线程GIL 竞争的效果就很明显了。
![](https://cdn-images-1.medium.com/max/1600/0*S_iSksY5oM5H1Qf_.png)
由 David Beazley 提供的 GIL 竞争情况图[http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html][1]
对于一个 web 应用(例如 Django同时还使用了 WSGI那么对这个 web 应用的每一个请求都是一个单独的 Python 进程,而且每个请求只有一个锁。同时 Python 解释器的启动也比较慢,某些 WSGI 实现还具有“守护进程模式”,[就会导致 Python 进程非常繁忙][9]。
*由 David Beazley 提供的 GIL 竞争情况图[http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html][1]*
对于一个 web 应用(例如 Django同时还使用了 WSGI那么对这个 web 应用的每一个请求都运行一个**单独**的 Python 解释器,而且每个请求只有一个锁。同时,因为 Python 解释器的启动比较慢,某些 WSGI 实现还具有“守护进程模式”,[可以使 Python 进程一直就绪][9]。
#### 其它的 Python 解释器表现如何?
@ -57,46 +55,43 @@ CPython 解释器在创建变量时,首先会分配内存,然后对该变量
#### JavaScript 在这方面又是怎样做的呢?
所有的 Javascript 引擎使用的都是 [mark-and-sweep 垃圾收集算法][12],而 GIL 使用的则是 CPython 的内存管理算法。因此 JavaScript 没有 GIL而且它是单线程的也不需要用到 GIL JavaScript 的事件循环和 Promise/Callback 模式实现了以异步编程的方式代替并发。在 Python 当中也有一个类似的 asyncio 事件循环。
所有的 Javascript 引擎使用的都是 [mark-and-sweep 垃圾收集算法][12],而 GIL 使用的则是 CPython 的内存管理算法。
JavaScript 没有 GIL而且它是单线程的也不需要用到 GIL JavaScript 的事件循环和 Promise/Callback 模式实现了以异步编程的方式代替并发。在 Python 当中也有一个类似的 asyncio 事件循环。
### 是因为 Python 是解释型语言吗?
我经常会听到这个说法,但其实当终端上执行 `python myscript.py` 之后CPython 会对代码进行一系列的读取、语法分析、解析、编译、解释和执行的操作。
我经常会听到这个说法,但是这过于粗陋地简化了 Python 所实际做的工作了。其实当终端上执行 `python myscript.py` 之后CPython 会对代码进行一系列的读取、语法分析、解析、编译、解释和执行的操作。
如果你对这一系列过程感兴趣,也可以阅读一下我之前的文章:
如果你对这一系列过程感兴趣,也可以阅读一下我之前的文章:[在 6 分钟内修改 Python 语言][13] 。
[在 6 分钟内修改 Python 语言][13]
`.pyc` 文件的创建是这个过程的重点。在代码编译阶段Python 3 会将字节码序列写入 `__pycache__/` 下的文件中,而 Python 2 则会将字节码序列写入当前目录的 `.pyc` 文件中。对于你编写的脚本、导入的所有代码以及第三方模块都是如此。
创建 `.pyc` 文件是这个过程的重点。在代码编译阶段Python 3 会将字节码序列写入 `__pycache__/` 下的文件中,而 Python 2 则会将字节码序列写入当前目录的 `.pyc` 文件中。对于你编写的脚本、导入的所有代码以及第三方模块都是如此。
因此绝大多数情况下除非你的代码是一次性的……Python 都会解释字节码并执行。与 Java、C#.NET 相比:
因此绝大多数情况下除非你的代码是一次性的……Python 都会解释字节码并本地执行。与 Java、C#.NET 相比:
> Java 代码会被编译为“中间语言”,由 Java 虚拟机读取字节码,并将其即时编译为机器码。.NET CIL 也是如此,.NET CLRCommon-Language-Runtime将字节码即时编译为机器码。
既然 Python 像 Java 和 C# 那样使用虚拟机或某种字节码,为什么 Python 在基准测试中仍然比 Java 和 C# 慢得多呢?首要原因是,.NET 和 Java 都是 JIT 编译的。
既然 Python 像 Java 和 C# 那样使用虚拟机或某种字节码,为什么 Python 在基准测试中仍然比 Java 和 C# 慢得多呢?首要原因是,.NET 和 Java 都是 JIT 编译的。
<ruby>即时编译<rt>Just-in-time compilation</rt></ruby>JIT需要一种中间语言以便将代码拆分为多个块或多个帧。而<ruby>提前编译器<rt>ahead of time compiler</rt></ruby>AOT则需要确保 CPU 在任何交互发生之前理解每一行代码。
<ruby>即时<rt>Just-in-time</rt></ruby>JIT编译需要一种中间语言,以便将代码拆分为多个块(或多个帧)。而<ruby>提前<rt>ahead of time</rt></ruby>AOT编译器则需要确保 CPU 在任何交互发生之前理解每一行代码。
JIT 本身是不会让执行速度加快的,因为它执行的仍然是同样的字节码序列。但是 JIT 会允许运行时的优化。一个优秀的 JIT 优化器会分析出程序的哪些部分会被多次执行,这就是程序中的“热点”,然后,优化器会将这些热点编译得更为高效以实现优化。
JIT 本身不会使执行速度加快,因为它执行的仍然是同样的字节码序列。但是 JIT 会允许在运行时进行优化。一个优秀的 JIT 优化器会分析出程序的哪些部分会被多次执行,这就是程序中的“热点”,然后优化器会将这些代码替换为更有效率的版本以实现优化。
这就意味着如果你的程序是多次重复相同的操作时有可能会被优化器优化得更快。而且Java 和 C# 是强类型语言,因此优化器对代码的判断可以更为准确。
这就意味着如果你的程序是多次重复相同的操作时有可能会被优化器优化得更快。而且Java 和 C# 是强类型语言,因此优化器对代码的判断可以更为准确。
PyPy 使用了明显快于 CPython 的 JIT。更详细的结果可以在这篇性能基准测试文章中看到
[哪一个 Python 版本最快?][15]
PyPy 使用了明显快于 CPython 的 JIT。更详细的结果可以在这篇性能基准测试文章中看到[哪一个 Python 版本最快?][15]。
#### 那为什么 CPython 不使用 JIT 呢?
JIT 也不是完美的,它的一个显著缺点就在于启动时间。 CPython 的启动时间已经相对比较慢,而 PyPy 比 CPython 启动还要慢 2 到 3 倍,所以 Java 虚拟机启动速度已经是出了名的慢了。.NET CLR则通过在系统启动时自启动来优化体验 甚至还有专门运行 CLR 的操作系统。
JIT 也不是完美的,它的一个显著缺点就在于启动时间。 CPython 的启动时间已经相对比较慢,而 PyPy 比 CPython 启动还要慢 2 到 3 倍。Java 虚拟机启动速度也是出了名的慢。.NET CLR 则通过在系统启动时启动来优化体验,而 CLR 的开发者也是在 CLR 上开发该操作系统。
因此如果你的 Python 进程在一次启动后就长时间运行JIT 就比较有意义了,因为代码里有“热点”可以优化。
因此如果你有个长时间运行单一 Python 进程JIT 就比较有意义了,因为代码里有“热点”可以优化。
尽管如此CPython 仍然是通用的代码实现。设想如果使用 Python 开发命令行程序,但每次调用 CLI 时都必须等待 JIT 缓慢启动,这种体验就相当不好了。
不过CPython 是个通用的实现。设想如果使用 Python 开发命令行程序,但每次调用 CLI 时都必须等待 JIT 缓慢启动,这种体验就相当不好了。
CPython 必须通过大量用例的测试,才有可能实现[将 JIT 插入到 CPython 中][17],但这个改进工作的进度基本处于停滞不前的状态。
CPython 试图用于各种使用情况。有可能实现[将 JIT 插入到 CPython 中][17],但这个改进工作的进度基本处于停滞不前的状态。
> 如果你想充分发挥 JIT 的优势请使用PyPy。
> 如果你想充分发挥 JIT 的优势,请使用 PyPy。
### 是因为 Python 是一种动态类型的语言吗?
@ -113,11 +108,11 @@ a = "foo"
Python 也实现了这样的转换,但用户看不到这些转换,也不需要关心这些转换。
变量类型不固定并不是 Python 运行慢的原因Python 通过巧妙的设计让用户可以让各种结构变得动态:可以在运行时更改对象上的方法,也可以在运行时让模块调用新声明的值,几乎可以做到任何事。
不需要声明类型并不是 Python 运行慢的原因Python 的设计是让用户可以让各种东西变得动态:可以在运行时更改对象上的方法,也可以在运行时动态添加底层系统调用到值的声明上,几乎可以做到任何事。
但也正是这种设计使得 Python 的优化难度变得很大
但也正是这种设计使得 Python 的优化异常困难。
为了证明我的观点,我使用了一个 `dtrace` 这个 Mac OS 上的系统调用跟踪工具。CPython 中没有内置 dTrace因此必须重新对 CPython 进行编译。以下使用 Python 3.6.6 进行为例:
为了证明我的观点,我使用了一个 Mac OS 上的系统调用跟踪工具 DTrace。CPython 发布版本中没有内置 DTrace因此必须重新对 CPython 进行编译。以下以 Python 3.6.6 为例:
```
wget https://github.com/python/cpython/archive/v3.6.6.zip
@ -127,22 +122,19 @@ cd v3.6.6
make
```
这样 `python.exe` 将使用 dtrace 追踪所有代码。[Paul Ross 也作过关于 dtrace 的闪电演讲][19]。你可以下载 Python 的 dtrace 启动文件来查看函数调用、系统调用、CPU 时间、执行时间,以及各种其它的内容。
这样 `python.exe` 将使用 DTrace 追踪所有代码。[Paul Ross 也作过关于 DTrace 的闪电演讲][19]。你可以下载 Python 的 DTrace 启动文件来查看函数调用、执行时间、CPU 时间、系统调用,以及各种其它的内容。
`sudo dtrace -s toolkit/<tracer>.d -c ../cpython/python.exe script.py`
```
sudo dtrace -s toolkit/<tracer>.d -c ../cpython/python.exe script.py
```
`py_callflow` 追踪器显示了程序里调用的所有函数。
![](https://cdn-images-1.medium.com/max/1600/1*Lz4UdUi4EwknJ0IcpSJ52g.gif)
`py_callflow` 追踪器[显示](https://cdn-images-1.medium.com/max/1600/1*Lz4UdUi4EwknJ0IcpSJ52g.gif)了程序里调用的所有函数。
那么Python 的动态类型会让它变慢吗?
* 类型比较和类型转换消耗的资源是比较多的,每次读取、写入或引用变量时都会检查变量的类型
* Python 的动态程度让它难以被优化,因此很多 Python 的替代品都为了提升速度而在灵活性方面作出了妥协
* 而 [Cython][2] 结合了 C 的静态类型和 Python 来优化已知类型的代码,它可以将[性能提升][3] 84 倍。
* 类型比较和类型转换消耗的资源是比较多的,每次读取、写入或引用变量时都会检查变量的类型
* Python 的动态程度让它难以被优化,因此很多 Python 的替代品能够如此快都是为了提升速度而在灵活性方面作出了妥协
* 而 [Cython][2] 结合了 C 的静态类型和 Python 来优化已知类型的代码,它[可以将][3]性能提升 **84 倍**
### 总结
@ -158,7 +150,7 @@ make
Jake VDP 的优秀文章(略微过时) [https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/][21]
Dave Beazleys 关于 GIL 的演讲 [http://www.dabeaz.com/python/GIL.pdf][22]
Dave Beazley 关于 GIL 的演讲 [http://www.dabeaz.com/python/GIL.pdf][22]
JIT 编译器的那些事 [https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/][23]
@ -169,7 +161,7 @@ via: https://hackernoon.com/why-is-python-so-slow-e5074b6fe55b
作者:[Anthony Shaw][a]
选题:[oska874][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,59 @@
6 个托管 git 仓库的地方
======
> GitHub 被收购导致一些用户去寻找这个流行的代码仓库的替代品。这里有一些你可以考虑一下。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL)
也许你是少数一些没有注意到的人之一,就在之前,[微软收购了 GitHub][1]。两家公司达成了共识。微软在近些年已经变成了开源的有力支持者,而 GitHub 从成立起,就已经成为了大量的开源项目的实际代码库。
然而,最近发生的这次收购可能会带给你一些苦恼。毕竟公司的收购让你意识到了你的开源代码放在了一个商业平台上。可能你现在还没准备好迁移到其他的平台上去,但是至少这可以给你提供一些可选项。让我们找找网上现在都有哪些可用的平台。
### 选择之一: GitHub
严格来说,这是一个合格的选项。[GitHub][2] 历史上没有什么失信的地方,而且微软后来也一直笑对开源。把你的项目继续放在 GitHub 上,保持观望没有什么不可以。它现在依然是最大的软件开发的网络社区,同时还有许多对于问题追踪、代码审查、持续集成、通用的代码管理等很有用的工具。而且它还是基于 Git 的,这是每个人都喜欢的开源版本控制系统。你的代码还是你的代码。如果没有出现什么问题,那保持原状是没错的。
### 选择之二: GitLab
[GitLab][3] 是考虑替代代码库平台时的主要竞争者。它是完全开源的。你可以像在 GitHub 一样把你的代码托管在 GitLab但你也可以选择在你自己的服务器上自行托管自己的 GitLab 实例并完全控制谁可以访问那里的所有内容以及如何访问和管理。GitLab 与 GitHub 功能几乎相同,有些人甚至可能会说它的持续集成和测试工具更优越。尽管 GitLab 上的开发者社区肯定比 GitHub 上的开发者社区要小,但这并没有什么。你可能会在那里的人群中找到更多志同道合的开发者。
### 选择之三: Bitbucket
[Bitbucket][4] 已经存在很多年了。在某些方面,它可以作为 GitHub 未来的一面镜子。Bitbucket 八年前被一家大公司Atlassian收购并且已经经历了一些变化。它仍然是一个像 GitHub 这样的商业平台但它远不是一个创业公司而且从组织上说它的基础相当稳定。Bitbucket 具有 GitHub 和 GitLab 上的大部分功能,以及它自己的一些新功能,如对 [Mercurial][5] 仓库的原生支持。
### 选择之四: SourceForge
[SourceForge][6] 是开源代码库的鼻祖。如果你曾经有一个开源项目Sourceforge 就是那个托管你的代码并向其他人分享你的发布版本的地方。它迁移到 Git 版本控制用了一段时间它有一些商业收购和再次收购的历史以及一些对某些开源项目糟糕的捆绑决策。也就是说SourceForge 从那时起似乎已经恢复,该网站仍然是一个有着不少开源项目的地方。然而,很多人仍然感到有点受伤,而且有些人并不是很支持它的平台货币化的各种尝试,所以一定要睁大眼睛。
### 选择之五: 自己管理
如果你想自己掌握自己项目的命运除了你自己没人可以指责你那么一切都由自己来做可能对你来说是最佳的选择。无论对于大项目还是小项目都是好的选择。Git 是开源的,所以自己托管也很容易。如果你想要问题追踪和代码审查功能,你可以运行一个 GitLab 或者 [Phabricator][7] 的实例。对于持续集成,你可以设置自己的 [Jenkins][8] 自动化服务实例。是的,你需要对自己的基础架构开销和相关的安全要求负责。但是,这个设置过程并不是很困难。所以如果你不想自己的代码被其他人的平台所吞没,这就是一种很好的方法。
### 选择之六:以上全部
以下是所有这些的美妙之处:尽管这些平台上有一些专有的选项,但它们仍然建立在坚实的开源技术之上。而且不仅仅是开源,而是明确设计为分布在大型网络(如互联网)上的多个节点上。你不需要只使用一个。你可以使用一对……或者全部。使用 GitLab 将你自己的设施作为保证的基础,并在 GitHub 和 Bitbucket 上安装克隆存储库,以进行问题跟踪和持续集成。将你的主代码库保留在 GitHub 上,但是出于你自己的考虑,可以在 GitLab 上安装“备份”克隆。
关键在于你可以选择。我们能有这么多选择,都是得益于那些非常有用而强大的项目之上的开源许可证。未来一片光明。
当然,在这个列表中我肯定忽略了一些开源平台。方便的话请补充给我们。你是否使用了多个平台?哪个是你最喜欢的?你都可以在这里说出来!
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/github-alternatives
作者:[Jason van Gumster][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[dianbanjiu](https://github.com/dianbanjiu)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mairin
[1]: https://www.theverge.com/2018/6/4/17422788/microsoft-github-acquisition-official-deal
[2]: https://github.com/
[3]: https://gitlab.com
[4]: https://bitbucket.org
[5]: https://www.mercurial-scm.org/wiki/Repository
[6]: https://sourceforge.net
[7]: https://phacility.com/phabricator/
[8]: https://jenkins.io

View File

@ -0,0 +1,166 @@
为什么 Linux 用户应该试一试 Rust
======
> 在 Linux 系统上安装 Rust 编程语言可能是你近年来所做的最有价值的事情之一。
![](https://images.idgesg.net/images/article/2018/09/rust-rusted-metal-100773678-large.jpg)
Rust 是一种相当年轻和现代的编程语言,具有许多使其非常灵活而极其安全的功能。数据显示它正在变得非常受欢迎,连续三年([2016][1]、[2017][2] 和 [2018][3])在 Stack Overflow 开发者调查中获得“最受喜爱的编程语言”的第一名。
Rust 也是开源语言的一种,它具有一系列特殊的功能,使得它可以适应许多不同的编程项目。它最初源于 2006 年 Mozilla 员工的个人项目,几年后(2009 年)被 Mozilla 选为特别项目,然后在 2010 年宣布供公众使用。
Rust 程序运行速度极快可防止段错误并能保证线程安全。这些属性使该语言极大地吸引了专注于应用程序安全性的开发人员。Rust 也是一种非常易读的语言,可用于从简单程序到非常大而复杂的项目。
Rust 优点:
* 内存安全 —— Rust 不会受到悬空指针、缓冲区溢出或其他与内存相关的错误的影响。它提供内存安全,无回收垃圾。
* 通用 —— Rust 是适用于任何类型编程的语言
* 快速 —— Rust 在性能上与 C / C++ 相当,但具有更好的安全功能。
* 高效 —— Rust 是为了便于并发编程而构建的。
* 面向项目 —— Rust 具有内置的依赖关系和构建管理系统 Cargo。
* 得到很好的支持 —— Rust 有一个令人印象深刻的[支持社区][4]。
Rust 还强制执行 RAII<ruby>资源获取初始化<rt>Resource Acquisition Is Initialization</rt></ruby>)。这意味着当一个对象超出范围时,将调用其析构函数并释放其资源,从而提供防止资源泄漏的屏蔽。它提供了功能抽象和一个很棒的[类型系统][5],并具有速度和数学健全性。
简而言之Rust 是一种令人印象深刻的系统编程语言,具有其它大多数语言所缺乏的功能,使其成为 C、C++ 和 Objective-C 等多年来一直被使用的语言的有力竞争者。
### 安装 Rust
安装 Rust 是一个相当简单的过程。
```
$ curl https://sh.rustup.rs -sSf | sh
```
安装 Rust 后,可以用 `rustc --version` 显示版本信息,或用 `which` 命令查看它的安装位置。
```
$ which rustc
/home/shs/.cargo/bin/rustc
$ rustc --version
rustc 1.27.2 (58cc626de 2018-07-18)
```
### Rust 入门
即使是最简单的 Rust 代码,看起来也与你之前使用过的语言完全不同。
```
$ cat hello.rs
fn main() {
// Print a greeting
println!("Hello, world!");
}
```
在这些行中,我们设置了一个函数(`main`),添加了一个描述该函数的注释,并使用 `println!` 宏来产生输出。你可以使用下面显示的命令编译然后运行程序。
```
$ rustc hello.rs
$ ./hello
Hello, world!
```
另外,你也可以创建一个“项目”(通常仅用于比这个更复杂的程序!)来保持代码的有序性。
```
$ mkdir ~/projects
$ cd ~/projects
$ mkdir hello_world
$ cd hello_world
```
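顺带一提,如果你打算用 Rust 自带的 Cargo 来管理项目,也可以让它直接生成并运行这个项目骨架,效果与上面手动建目录类似(示意):
```
$ cargo new hello_world     # 生成 hello_world/、Cargo.toml 和 src/main.rs
$ cd hello_world
$ cargo run                 # 编译并运行,默认同样会打印 "Hello, world!"
```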
请注意,即使是简单的程序,一旦编译,就会变成相当大的可执行文件。
```
$ ./hello
Hello, world!
$ ls -l hello*
-rwxrwxr-x 1 shs shs 5486784 Sep 23 19:02 hello <== executable
-rw-rw-r-- 1 shs shs 68 Sep 23 15:25 hello.rs
```
当然,这只是一个开始,是那个传统的 “Hello, world!” 程序。Rust 语言具有一系列可帮助你快速进入高级编程技能的功能。
### 学习 Rust
![rust programming language book cover][6]
*No Starch Press*
Steve Klabnik 和 Carol Nichols 的《[Rust 编程语言][7]》2018 年)一书提供了学习 Rust 的最佳方法之一。这本书由核心开发团队的两名成员撰写,可从 [No Starch Press][7] 出版社获得纸质书,或者从 [rust-lang.org][8] 获得电子书。它已经成为 Rust 开发者社区中的参考书。
在所涉及的众多主题中,你将了解这些高级主题:
* 所有权和借用borrowing
* 安全保障
* 测试和错误处理
* 智能指针和多线程
* 高级模式匹配
* 使用 Cargo内置包管理器
* 使用 Rust 的高级编译器
#### 目录
- 前言Nicholas Matsakis 和 Aaron Turon 编写)
- 致谢
- 介绍
- 第 1 章:新手入门
- 第 2 章:猜谜游戏
- 第 3 章:通用编程概念
- 第 4 章:了解所有权
- 第 5 章:结构
- 第 6 章:枚举和模式匹配
- 第 7 章:模块
- 第 8 章:常见集合
- 第 9 章:错误处理
- 第 10 章:通用类型、特征和生命周期
- 第 11 章:测试
- 第 12 章:输入/输出项目
- 第 13 章:迭代器和闭包
- 第 14 章:关于 Cargo 和 Crates.io 的更多信息
- 第 15 章:智能指针
- 第 16 章:并发
- 第 17 章Rust 是面向对象的吗?
- 第 18 章:模式
- 第 19 章:关于生命周期的更多信息
- 第 20 章:高级类型系统功能
- 附录 A关键字
- 附录 B运算符和符号
- 附录 C可衍生的特征
- 附录 D
- 索引
《[Rust 编程语言][7]》将你从基本安装和语言语法带到复杂的主题,例如模块、错误处理、crate与其他语言中的 “library” 或 “package” 同义)、模块(允许你在 crate 内部划分你的代码)、生命周期等。
可能最重要的是,本书可以让您从基本的编程技巧转向构建和编译复杂、安全且非常有用的程序。
### 结束
如果你已经准备好用一种非常值得花时间和精力学习并且越来越受欢迎的语言进行一些严肃的编程,那么 Rust 是一个不错的选择!
加入 [Facebook][9] 和 [LinkedIn][10] 上的 Network World 社区,评论最重要的话题。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3308162/linux/why-you-should-try-rust.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[way-ww](https://github.com/way-ww)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]: https://insights.stackoverflow.com/survey/2016#technology-most-loved-dreaded-and-wanted
[2]: https://insights.stackoverflow.com/survey/2017#technology-most-loved-dreaded-and-wanted-languages
[3]: https://insights.stackoverflow.com/survey/2018#technology-most-loved-dreaded-and-wanted-languages
[4]: https://www.rust-lang.org/en-US/community.html
[5]: https://doc.rust-lang.org/reference/type-system.html
[6]: https://images.idgesg.net/images/article/2018/09/rust-programming-language_book-cover-100773679-small.jpg
[7]: https://nostarch.com/Rust
[8]: https://doc.rust-lang.org/book/2018-edition/index.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -1,25 +1,25 @@
CPU 电源管理工具 - Linux 系统中 CPU 主频的控制和管理
CPU 电源管理器:Linux 系统中 CPU 主频的控制和管理
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Manage-CPU-Frequency-720x340.jpeg)
你使用笔记本的话,可能知道 Linux 系统的电源管理做的很不好。虽然有 **TLP**、[**Laptop Mode Tools** 和 **powertop**][1] 这些工具来辅助减少电量消耗,但跟 Windows 和 Mac OS 系统比较起来,电池的整个使用周期还是不尽如意。此外,还有一种降低功耗的办法就是限制 CPU 的频率。这是可行的,然而却需要编写很复杂的终端命令来设置,所以使用起来不太方便。幸好,有一款名为 **CPU Power Manager** 的 GNOME 扩展插件,可以很容易的就设置和管理你的 CPU 主频。GNOME 桌面系统中CPU Power Manager 使用名为 **intel_pstate**功率驱动程序(几乎所有的 Intel CPU 都支持)来控制和管理 CPU 主频。
你使用笔记本的话,可能知道 Linux 系统的电源管理做的很不好。虽然有 **TLP**、[**Laptop Mode Tools** 和 **powertop**][1] 这些工具来辅助减少电量消耗,但跟 Windows 和 Mac OS 系统比较起来,电池的整个使用周期还是不尽如意。此外,还有一种降低功耗的办法就是限制 CPU 的频率。这是可行的,然而却需要编写很复杂的终端命令来设置,所以使用起来不太方便。幸好,有一款名为 **CPU Power Manager** 的 GNOME 扩展插件,可以很容易的就设置和管理你的 CPU 主频。GNOME 桌面系统中CPU Power Manager 使用名为 **intel_pstate**频率调整驱动程序(几乎所有的 Intel CPU 都支持)来控制和管理 CPU 主频。
使用这个扩展插件的另一个原因是可以减少系统的发热量,因为很多系统在正常使用中的发热量总让人不舒服,限制 CPU 的主频就可以减低发热量。它还可以减少 CPU 和其他组件的磨损。
### 安装 CPU Power Manager
首先,进入[**扩展插件主页面**][2],安装此扩展插件。
首先,进入[扩展插件主页面][2],安装此扩展插件。
安装好插件后,在 GNOME 顶部栏的右侧会出现一个 CPU 图标。点击图标,会出现一个安装此扩展的选项提示,如下所示:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-icon.png)
点击**“尝试安装”**按纽,会弹出输入密码确认框。插件需要 root 权限来添加 policykit 规则,进而控制 CPU 主频。下面是弹出的提示框样子:
点击“尝试安装”按钮,会弹出输入密码确认框。插件需要 root 权限来添加 policykit 规则,进而控制 CPU 主频。下面是弹出的提示框的样子:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-1.png)
输入密码,点击**“认证”**按纽,完成安装。最后在 **/usr/share/polkit-1/actions** 目录下添加了一个名为 **mko.cpupower.setcpufreq.policy** 的 policykit 文件。
输入密码,点击“认证”按钮,完成安装。最后在 `/usr/share/polkit-1/actions` 目录下添加了一个名为 `mko.cpupower.setcpufreq.policy` 的 policykit 文件。
都安装完成后,如果点击右上角的 CPU 图标,会出现如下所示的内容:
@ -27,12 +27,10 @@ CPU 电源管理工具 - Linux 系统中 CPU 主频的控制和管理
### 功能特性
* **查看 CPU 主频:** 显然,你可以通过这个提示窗口看到 CPU 的当前运行频率。
* **设置最大最小主频:** 使用此扩展你可以根据列出的最大、最小频率百分比进度条来分别设置其频率限制。一旦设置CPU 将会严格按照此设置范围运行。
* **开/关 Turbo Boost:** 这是我最喜欢的功能特性。大多数 Intel CPU 都有 “Turbo Boost” 特性,为了提高额外性能,其中的一个内核为自动进行超频。此功能虽然可以使系统获得更高的性能,但也大大增加功耗。所以,如果不做 CPU 密集运行的话,为节约电能,最好关闭 Turbo Boost 功能。事实上,在我电脑上,我大部分时间是把 Turbo Boost 关闭的。
* **生成配置文件:** 可以生成最大和最小频率的配置文件,就可以很轻松打开/关闭,而不是每次手工调整设置。
* **查看 CPU 主频:** 显然,你可以通过这个提示窗口看到 CPU 的当前运行频率。
* **设置最大、最小主频:** 使用此扩展你可以根据列出的最大、最小频率百分比进度条来分别设置其频率限制。一旦设置CPU 将会严格按照此设置范围运行。
* **开/关 Turbo Boost** 这是我最喜欢的功能特性。大多数 Intel CPU 都有 “Turbo Boost” 特性,为了提高额外性能,其中的一个核心会自动进行超频。此功能虽然可以使系统获得更高的性能,但也大大增加功耗。所以,如果不做 CPU 密集运行的话,为节约电能,最好关闭 Turbo Boost 功能(这个列表后面给出了从命令行查看相关状态的示意命令)。事实上,在我电脑上,我大部分时间是把 Turbo Boost 关闭的。
* **生成配置文件:** 可以生成最大和最小频率的配置文件,就可以很轻松打开/关闭,而不是每次手工调整设置。
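顺带一提,这个扩展背后使用的就是 intel_pstate 驱动在 sysfs 里暴露的接口。假设你的系统确实在使用 intel_pstate可以用下面的示意命令直接查看当前频率和 Turbo Boost 状态:
```
# 查看各核心当前频率
$ grep "MHz" /proc/cpuinfo
# intel_pstate 的 Turbo Boost 开关1 表示已关闭0 表示开启
$ cat /sys/devices/system/cpu/intel_pstate/no_turbo
# 允许的最大/最小性能百分比
$ cat /sys/devices/system/cpu/intel_pstate/max_perf_pct
$ cat /sys/devices/system/cpu/intel_pstate/min_perf_pct
```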
### 偏好设置
@ -40,24 +38,23 @@ CPU 电源管理工具 - Linux 系统中 CPU 主频的控制和管理
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-preferences.png)
如你所见,你可以设置是否显示 CPU 主频,也可以设置是否以 **Ghz** 来代替 **Mhz** 显示。
如你所见,你可以设置是否显示 CPU 主频,也可以设置是否以 **GHz** 来代替 **MHz** 显示。
你也可以编辑和创建/删除配置:
你也可以编辑和创建/删除配置文件:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-preferences-1.png)
可以为每个配置分别设置最大、最小主频及开/关 Turbo boost。
可以为每个配置文件分别设置最大、最小主频及开/关 Turbo boost。
### 结论
正如我在开始时所说的Linux 系统的电源管理并不是最好的,许多人总是希望他们的 Linux 笔记本电脑电池能多用几分钟。如果你也是其中一员,就试试此扩展插件吧。为了省电,虽然这是非常规的做法,但有效果。我确实喜欢这个插件,到现在已经使用了好几个月了。
What do you think about this extension? Put your thoughts in the comments below!你对此插件有何看法呢?请把你的观点留在下面的评论区吧。
你对此插件有何看法呢?请把你的观点留在下面的评论区吧。
祝贺!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/cpu-power-manager-control-and-manage-cpu-frequency-in-linux/
@ -65,7 +62,7 @@ via: https://www.ostechnix.com/cpu-power-manager-control-and-manage-cpu-frequenc
作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,27 +1,27 @@
万维网的创建者正在创建一个新的分布式网络
万维网的创建者正在创建一个新的去中心化网络
======
**万维网的创建者 Tim Berners-Lee 公布了他计划创建一个新的分布式网络,网络中的数据将由用户控制**
> 万维网WWW的创建者 Tim Berners-Lee 公布了他计划创建一个新的去中心化网络,该网络中的数据将由用户控制。
[Tim Berners-Lee] [1]以创建万维网而闻名万维网就是你现在所知的互联网。二十多年之后Tim 致力于将互联网从企业巨头的掌控中解放出来,并通过分布式网络将权力交回给人们。
[Tim Berners-Lee][1] 以创建万维网而闻名万维网就是你现在所知的互联网。二十多年之后Tim 致力于将互联网从企业巨头的掌控中解放出来,并通过<ruby>去中心化网络<rt>Decentralized Web</rt></ruby>将权力交回给人们。
Berners-Lee 对互联网“强权”们处理用户数据的方式感到不满。所以他[开始致力于他自己的开源项目][2] Solid “来将在网络上的权力归还给人们”
Berners-Lee 对互联网“强权”们处理用户数据的方式感到不满。所以他[开始致力于他自己的开源项目][2] Solid “来将在网络上的权力归还给人们”
> Solid 改变了当前用户必须将个人数据交给数字巨头以换取可感知价值的模型。正如我们都已发现的那样这不符合我们的最佳利益。Solid 是我们如何驱动网络进化以恢复平衡——以一种革命性的方式,让我们每个人完全地控制数据,无论数据是否是个人数据。
> Solid 改变了当前用户必须将个人数据交给数字巨头以换取可感知价值的模型。正如我们都已发现的那样这不符合我们的最佳利益。Solid 是我们如何驱动网络进化以恢复平衡 —— 以一种革命性的方式,让我们每个人完全地控制数据,无论数据是否是个人数据。
![Tim Berners-Lee is creating a decentralized web with open source project Solid][3]
基本上,[Solid][4]是一个使用现有网络构建的平台,在这里你可以创建自己的 “pods” (个人数据存储)。你决定这个 “pods” 将被托管在哪里,谁将访问哪些数据元素以及数据将如何通过这个 pod 分享。
基本上,[Solid][4] 是一个使用现有网络构建的平台,在这里你可以创建自己的 “pod” (个人数据存储)。你决定这个 “pod” 将被托管在哪里,谁将访问哪些数据元素以及数据将如何通过这个 pod 分享。
Berners-Lee 相信 Solid "将以一种全新的方式,授权个人、开发者和企业来构思、构建和寻找创新、可信和有益的应用和服务。"
Berners-Lee 相信 Solid 将以一种全新的方式,授权个人、开发者和企业来构思、构建和寻找创新、可信和有益的应用和服务。
开发人员需要将 Solid 集成进他们的应用程序和网站中。 Solid 仍在早期阶段,所以目前没有相关的应用程序。但是项目网站宣称“第一批 Solid 应用程序正在开发当中”。
Berners-Lee 已经创立一家名为[Inrupt][5] 的初创公司,并已从麻省理工学院休假来全职工作在 Solid来将其”从少部分人的愿景带到多数人的现实“。
Berners-Lee 已经创立了一家名为 [Inrupt][5] 的初创公司,并已从麻省理工学院休学术假,全职投入 Solid 的工作,来将其“从少部分人的愿景带到多数人的现实”。
如果你对 Solid 感兴趣,[学习如何开发应用程序][6]或者以自己的方式[给项目做贡献][7]。当然,建立和推动 Solid 的广泛采用将需要大量的努力,所以每一点的贡献都将有助于分布式网络的成功。
如果你对 Solid 感兴趣,可以[学习如何开发应用程序][6]或者以自己的方式[给项目做贡献][7]。当然,建立和推动 Solid 的广泛采用将需要大量的努力,所以每一点的贡献都将有助于去中心化网络的成功。
你认为[分布式网络][8]会成为现实吗?你是如何看待分布式网络,特别是 Solid 项目的?
你认为[去中心化网络][8]会成为现实吗?你是如何看待去中心化网络,特别是 Solid 项目的?
--------------------------------------------------------------------------------
@ -30,7 +30,7 @@ via: https://itsfoss.com/solid-decentralized-web/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[ypingcn](https://github.com/ypingcn)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,64 +3,64 @@
![](https://fedoramagazine.org/wp-content/uploads/2018/10/podman-816x345.jpg)
Linux容器是有由 Linux 内核提供的具有特定隔离功能的进程 - 包括文件系统、进程和网络隔离。容器有助于实现可移植性 - 应用可以在容器镜像中与其依赖项一起分发,并可在几乎任何有容器运行时的 Linux 系统上运行。
Linux 容器是由 Linux 内核所提供的具有特定隔离功能的进程 —— 包括文件系统、进程和网络的隔离。容器有助于实现可移植性 —— 应用可以在容器镜像中与其依赖项一起分发,并可在几乎任何有容器运行时环境的 Linux 系统上运行。
虽然容器技术存在了很长时间,但 Linux 容器是由 Docker 广泛推广。 “Docker” 这个词可以指几个不同的东西,包括容器技术和工具,周围的社区,或者 Docker Inc. 公司。但是,在本文中,我将用来指管理 Linux 容器的技术和工具。
虽然容器技术存在了很长时间,但 Linux 容器是因 Docker 而得到了广泛推广。“Docker” 这个词可以指几个不同的东西,包括容器技术和工具、其周边的社区,或者 Docker Inc. 公司。但是,在本文中,我将用它来指管理 Linux 容器的技术和工具。
### 什么是 Docker
[Docker][1] 是一个以 root 身份在你的系统上运行的守护程序,它利用 Linux 内核的功能来管理正在运行的容器。除了运行容器之外,它还可以轻松管理容器镜像 - 与容器托管交互、存储映像、管理容器版本等。它基本上支持运行单个容器所需的所有操作。
[Docker][1] 是一个以 root 身份在你的系统上运行的守护程序,它利用 Linux 内核的功能来管理正在运行的容器。除了运行容器之外,它还可以轻松管理容器镜像 —— 与容器注册库交互、存储映像、管理容器版本等。它基本上支持运行单个容器所需的所有操作。
但即使 Docker 是管理 Linux 容器的一个非常方便的工具,它也有两个缺点:它是一个需要在你的系统上运行的守护进程,并且需要以 root 权限运行这可能有一定的安全隐患。然而Podman 在解决这两个问题。
### Podman 介绍
[Podman][2] 是一个容器运行时,提供与 Docker 非常相似的功能。正如已经提示的那样,它不需要在你的系统上运行任何守护进程,并且它也可以在没有 root 权限的情况下运行。让我们看看使用 Podman 运行 Linux 容器的一些示例。
[Podman][2] 是一个容器运行时环境,提供与 Docker 非常相似的功能。正如已经提示的那样,它不需要在你的系统上运行任何守护进程,并且它也可以在没有 root 权限的情况下运行。让我们看看使用 Podman 运行 Linux 容器的一些示例。
#### 使用 Podman 运行容器
其中一个最简单的例子可能是运行 Fedora 容器,在命令行中打印 “Hello world!”:
```
$ podman run --rm -it fedora:28 echo "Hello world!"
$ podman run --rm -it fedora:28 echo "Hello world!"
```
使用通用 Dockerfile 构建镜像的方式与 Docker 相同:
```
$ cat Dockerfile
FROM fedora:28
RUN dnf -y install cowsay
$ cat Dockerfile
FROM fedora:28
RUN dnf -y install cowsay
$ podman build . -t hello-world
... output omitted ...
$ podman build . -t hello-world
... output omitted ...
$ podman run --rm -it hello-world cowsay "Hello!"
$ podman run --rm -it hello-world cowsay "Hello!"
```
为了构建容器Podman 在后台调用另一个名为 Buildah 的工具。你可以阅读最近一篇[关于使用 Buildah 构建容器镜像的文章][3] - 它不仅仅是使用典型的 Dockerfile。
为了构建容器Podman 在后台调用另一个名为 Buildah 的工具。你可以阅读最近一篇[关于使用 Buildah 构建容器镜像的文章][3] —— 它不仅仅是使用典型的 Dockerfile。
除了构建和运行容器外Podman 还可以与容器托管进行交互。要登录容器托管,例如广泛使用的 Docker Hub请运行
除了构建和运行容器外Podman 还可以与容器注册库进行交互。要登录容器注册库,例如广泛使用的 Docker Hub请运行
```
$ podman login docker.io
$ podman login docker.io
```
为了推送我刚刚构建的镜像,我只需打上标记来代表特定的容器托管,然后直接推送它。
为了推送我刚刚构建的镜像,我只需打上标记来代表特定的容器注册库,然后直接推送它。
```
$ podman -t hello-world docker.io/asamalik/hello-world
$ podman push docker.io/asamalik/hello-world
$ podman tag hello-world docker.io/asamalik/hello-world
$ podman push docker.io/asamalik/hello-world
```
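推送完成后,还可以用下面的命令确认本地已有的镜像和容器(示意):
```
$ podman images
$ podman ps -a
```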
顺便说一下,你是否注意到我如何以非 root 用户身份运行所有内容?此外,我的系统上没有运行大的守护进程!
顺便说一下,你是否注意到我如何以非 root 用户身份运行所有内容?此外,我的系统上没有运行又大又重的守护进程!
#### 安装 Podman
Podman 默认在 [Silverblue][4] 上提供 - 一个基于容器的工作流的新一代 Linux 工作站。要在任何 Fedora 版本上安装它,只需运行:
Podman 默认在 [Silverblue][4] 上提供 —— 一个基于容器的工作流的新一代 Linux 工作站。要在任何 Fedora 版本上安装它,只需运行:
```
$ sudo dnf install podman
$ sudo dnf install podman
```
--------------------------------------------------------------------------------
@ -70,7 +70,7 @@ via: https://fedoramagazine.org/running-containers-with-podman/
作者:[Adam Šamalík][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,6 +1,6 @@
使用 Lakka Linux 将你的旧 PC 变成复古游戏主机
======
**如果你有一台吃灰的旧计算机,你可以用 Lakka Linux 将它变成像 PlayStation 那样的复古游戏主机。**
> 如果你有一台吃灰的旧计算机,你可以用 Lakka Linux 将它变成像 PlayStation 那样的复古游戏主机。
你可能已经了解[专门用于复活旧计算机的 Linux 发行版][1]。但是你知道有个 Linux 发行版专门是为了将旧电脑变成复古游戏主机创建的么?
@ -12,8 +12,7 @@
Lakka 提供类似的界面和类似的体验。我稍后会谈到“体验”。先看一下界面。
<https://itsfoss.com/wp-content/uploads/2018/10/lakka-linux-gaming-console.webm>
Lakka 复古游戏界面
[Lakka 复古游戏界面](https://itsfoss.com/wp-content/uploads/2018/10/lakka-linux-gaming-console.webm)
### Lakka为复古游戏而生的 Linux 发行版
@ -27,20 +26,18 @@ Lakka 是轻量级的,你可以将它安装在大多数老系统或单板计
它支持大量的模拟器。你只需要在系统上下载 ROMLakka 将从这些 ROM 运行游戏。你可以在[这里][6]找到支持的模拟器和硬件列表。
它通过顺滑的图形界面让你能够在许多计算机和主机上运行经典游戏。设置也是统一的,因此可以一劳永逸地完成配置。
它通过顺滑的图形界面让你能够在许多计算机和主机上运行经典游戏。设置也是统一的,因此可以一劳永逸地完成配置。
让我总结一下 Lakka 的主要特点:
* RetroArch 中与 PlayStation 类似的界面
  * 支持许多复古游戏模拟器
  * 支持最多 5 名玩家在同一系统上玩游戏
  * 存档允许你随时保存游戏中的进度
  * 你可以使用各种图形过滤器改善旧游戏的外表
  * 你可以通过网络加入多人游戏
  * 开箱即用支持 XBOX360、Dualshock 3 和 8bitdo 等多种游戏手柄
  * 连接到 [RetroAchievements] [7] 获取奖杯和徽章
* RetroArch 中与 PlayStation 类似的界面
* 支持许多复古游戏模拟器
* 支持最多 5 名玩家在同一系统上玩游戏
* 存档允许你随时保存游戏中的进度
* 你可以使用各种图形过滤器改善旧游戏的外表
* 你可以通过网络加入多人游戏
* 开箱即用支持 XBOX360、Dualshock 3 和 8bitdo 等多种游戏手柄
* 连接到 [RetroAchievements] [7] 获取奖杯和徽章
### 获取 Lakka
@ -50,7 +47,7 @@ Lakka 是轻量级的,你可以将它安装在大多数老系统或单板计
[项目的 FAQ 部分][8]回答了常见的疑问,所以如有任何其他的问题,请参考它。
[获取 Lakka][9]
- [获取 Lakka][9]
你喜欢复古游戏吗?你使用什么模拟器?你以前用过 Lakka 吗?在评论区与我们分享你的观点。
@ -61,7 +58,7 @@ via: https://itsfoss.com/lakka-retrogaming-linux/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -75,4 +72,4 @@ via: https://itsfoss.com/lakka-retrogaming-linux/
[6]: http://www.lakka.tv/powerful/
[7]: https://retroachievements.org/
[8]: http://www.lakka.tv/doc/FAQ/
[9]; http://www.lakka.tv/disclaimer/
[9]: http://www.lakka.tv/disclaimer/

10
scripts/badge.sh Executable file
View File

@ -0,0 +1,10 @@
#!/usr/bin/env bash
# 重新生成badge
set -o errexit
SCRIPTS_DIR=$(cd $(dirname "$0") && pwd)
BUILD_DIR=$(cd $SCRIPTS_DIR/.. && pwd)/build
mkdir -p ${BUILD_DIR}/badge
for catalog in published translated translating sources;do
${SCRIPTS_DIR}/badge/show_status.sh -s ${catalog} > ${BUILD_DIR}/badge/${catalog}.svg
done

95
scripts/badge/show_status.sh Executable file
View File

@ -0,0 +1,95 @@
#!/usr/bin/env bash
set -e
function help()
{
cat <<EOF
Usage: ${0##*/} [+-s] [published] [translated] [translating] [sources]
显示已发布、已翻译、正在翻译和待翻译的数量
-s 输出为svg格式
EOF
}
while getopts :s OPT; do
case $OPT in
s|+s)
show_format="svg"
;;
*)
help
exit 2
esac
done
shift $(( OPTIND - 1 ))
OPTIND=1
declare -A catalog_comment_dict
declare -A catalog_color_dict
catalog_comment_dict=([sources]="待翻译" [translating]="翻译中" [translated]="待校对" [published]="已发布")
catalog_color_dict=([sources]="#97CA00" [translating]="#00BCD5" [translated]="#FF9800" [published]="#FF5722")
function count_files_under_dir()
{
local dir=$1
local pattern=$2
find ${dir} -name "${pattern}" -type f |wc -l
}
cd "$(dirname $0)/../.." # 进入TP root
for catalog in "$@";do
case "${catalog}" in
published)
num=$(count_files_under_dir "${catalog}" "[0-9]*.md")
;;
translated)
num=$(count_files_under_dir "${catalog}" "[0-9]*.md")
;;
translating)
num=$(git grep -niE "translat|fanyi|翻译" sources/*.md |awk -F ":" '{if ($2<=3) print $1}' |wc -l)
;;
sources)
total=$(count_files_under_dir "${catalog}" "[0-9]*.md")
translating_num=$(git grep -niE "translat|fanyi|翻译" sources/*.md |awk -F ":" '{if ($2<=3) print $1}' |wc -l)
num=$((${total} - ${translating_num}))
;;
*)
help
exit 2
esac
comment=${catalog_comment_dict[${catalog}]}
color=${catalog_color_dict[${catalog}]}
if [[ "${show_format}" == "svg" ]];then
cat <<EOF
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="20">
<linearGradient id="b" x2="0" y2="100%">
<stop offset="0" stop-color="#bbb" stop-opacity=".1" />
<stop offset="1" stop-opacity=".1" />
</linearGradient>
<mask id="a">
<rect width="100" height="20" rx="3" fill="#fff" />
</mask>
<g mask="url(#a)">
<path fill="#555" d="M0 0 h60 v20 H0 z" />
<path fill="${color}" d="M60 0 h40 v20 H60 z" />
<path fill="url(#b)" d="M0 0 h100 v20 H0 z" />
</g>
<g fill="#fff" font-family="DejaVu Sans" font-size="11">
<text x="12" y="15" fill="#010101" fill-opacity=".3">${comment}</text>
<text x="12" y="14">${comment}</text>
<text x="70" y="15" fill="#010101" fill-opacity=".3">${num}</text>
<text x="70" y="14">${num}</text>
</g>
</svg>
EOF
else
cat<<EOF
${comment}: ${num}
EOF
fi
done

22
sign.md
View File

@ -1,22 +0,0 @@
---
via来源链接
作者:[作者名][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,
[Linux中国](https://linux.cn/) 荣誉推出
[a]:作者链接
[1]:文内链接
[2]:
[3]:
[4]:
[5]:
[6]:
[7]:
[8]:
[9]:

View File

@ -0,0 +1,93 @@
The Rise and Rise of JSON
======
JSON has taken over the world. Today, when any two applications communicate with each other across the internet, odds are they do so using JSON. It has been adopted by all the big players: Of the ten most popular web APIs, a list consisting mostly of APIs offered by major companies like Google, Facebook, and Twitter, only one API exposes data in XML rather than JSON. Twitter, to take an illustrative example from that list, supported XML until 2013, when it released a new version of its API that dropped XML in favor of using JSON exclusively. JSON has also been widely adopted by the programming rank and file: According to Stack Overflow, a question and answer site for programmers, more questions are now asked about JSON than about any other data interchange format.
![][1]
XML still survives in many places. It is used across the web for SVGs and for RSS and Atom feeds. When Android developers want to declare that their app requires a permission from the user, they do so in their apps manifest, which is written in XML. XML also isnt the only alternative to JSON—some people now use technologies like YAML or Googles Protocol Buffers. But these are nowhere near as popular as JSON. For the time being, JSON appears to be the go-to format for communicating with other programs over the internet.
JSONs dominance is surprising when you consider that as recently as 2005 the web world was salivating over the potential of “Asynchronous JavaScript and XML” and not “Asynchronous JavaScript and JSON.” It is of course possible that this had nothing to do with the relative popularity of the two formats at the time and reflects only that “AJAX” must have seemed a more appealing acronym than “AJAJ.” But even if some people were already using JSON instead of XML in 2005 (and in fact not many people were yet), one still wonders how XMLs fortunes could have declined so precipitously that a mere decade or so later “Asynchronous JavaScript and XML” has become an ironic misnomer. What happened in that decade? How did JSON supersede XML in so many applications? And who came up with this data format now depended on by engineers and systems all over the world?
### The Birth of JSON
The first JSON message was sent in April of 2001. Since this was a historically significant moment in computing, the message was sent from a computer in a Bay-Area garage. Douglas Crockford and Chip Morningstar, co-founders of a technology consulting company called State Software, had gathered in Morningstars garage to test out an idea.
Crockford and Morningstar were trying to build AJAX applications well before the term “AJAX” had been coined. Browser support for what they were attempting was not good. They wanted to pass data to their application after the initial page load, but they had not found a way to do this that would work across all the browsers they were targeting.
Though its hard to believe today, Internet Explorer represented the bleeding edge of web browsing in 2001. As early as 1999, Internet Explorer 5 supported a primordial form of XMLHttpRequest, which programmers could access using a framework called ActiveX. Crockford and Morningstar could have used this technology to fetch data for their application, but they could not have used the same solution in Netscape 4, another browser that they sought to support. So Crockford and Morningstar had to use a different system that worked in both browsers.
The first JSON message looked like this:
```
<html><head><script>
document.domain = 'fudco';
parent.session.receive(
{ to: "session", do: "test",
text: "Hello world" }
)
</script></head></html>
```
Only a small part of the message resembles JSON as we know it today. The message itself is actually an HTML document containing some JavaScript. The part that resembles JSON is just a JavaScript object literal being passed to a function called `receive()`.
Crockford and Morningstar had decided that they could abuse an HTML frame to send themselves data. They could point a frame at a URL that would return an HTML document like the one above. When the HTML was received, the JavaScript would be run, passing the object literal back to the application. This worked as long as you were careful to sidestep browser protections preventing a sub-window from accessing its parent; you can see that Crockford and Morningstar did that by explicitly setting the document domain. (This frame-based technique, sometimes called the hidden frame technique, was commonly used in the late 90s before the widespread implementation of XMLHttpRequest.)
The amazing thing about the first JSON message is that its not obviously the first usage of a new kind of data format at all. Its just JavaScript! In fact the idea of using JavaScript this way is so straightforward that Crockford himself has said that he wasnt the first person to do it—he claims that somebody at Netscape was using JavaScript array literals to communicate information as early as 1996. Since the message is just JavaScript, it doesnt require any kind of special parsing. The JavaScript interpreter can do it all.
The first ever JSON message actually ran afoul of the JavaScript interpreter. JavaScript reserves an enormous number of words—there are 64 reserved words as of ECMAScript 6—and Crockford and Morningstar had unwittingly used one in their message. They had used `do` as a key, but `do` is reserved. Since JavaScript has so many reserved words, Crockford decided that, rather than avoid using all those reserved words, he would just mandate that all JSON keys be quoted. A quoted key would be treated as a string by the JavaScript interpreter, meaning that reserved words could be used safely. This is why JSON keys are quoted to this day.
Crockford and Morningstar realized they had something that could be used in all sorts of applications. They wanted to name their format “JSML”, for JavaScript Markup Language, but found that the acronym was already being used for something called Java Speech Markup Language. So they decided to go with “JavaScript Object Notation”, or JSON. They began pitching it to clients but soon found that clients were unwilling to take a chance on an unknown technology that lacked an official specification. So Crockford decided he would write one.
In 2002, Crockford bought the domain [JSON.org][2] and put up the JSON grammar and an example implementation of a parser. The website is still up, though it now includes a prominent link to the JSON ECMA standard ratified in 2013. After putting up the website, Crockford did little more to promote JSON, but soon found that lots of people were submitting JSON parser implementations in all sorts of different programming languages. JSONs lineage clearly tied it to JavaScript, but it became apparent that JSON was well-suited to data interchange between arbitrary pairs of languages.
### Doing AJAX Wrong
JSON got a big boost in 2005. That year, a web designer and developer named Jesse James Garrett coined the term “AJAX” in a blog post. He was careful to stress that AJAX wasnt any one new technology, but rather “several technologies, each flourishing in its own right, coming together in powerful new ways.” AJAX was the name that Garrett was giving to a new approach to web application development that he had noticed gaining favor. His blog post went on to describe how developers could leverage JavaScript and XMLHttpRequest to build new kinds of applications that were more responsive and stateful than the typical web page. He pointed to Gmail and Flickr as examples of websites already relying on AJAX techniques.
The “X” in “AJAX” stood for XML, of course. But in a follow-up Q&A post, Garrett pointed to JSON as an entirely acceptable alternative to XML. He wrote that “XML is the most fully-developed means of getting data in and out of an AJAX client, but theres no reason you couldnt accomplish the same effects using a technology like JavaScript Object Notation or any similar means of structuring data.”
Developers indeed found that they could easily use JSON to build AJAX applications and many came to prefer it to XML. And so, ironically, the interest in AJAX led to an explosion in JSONs popularity. It was around this time that JSON drew the attention of the blogosphere.
In 2006, Dave Winer, a prolific blogger and the engineer behind a number of XML-based technologies such as RSS and XML-RPC, complained that JSON was reinventing XML for no good reason. Though one might think that a contest between data interchange formats would be unlikely to engender death threats, Winer wrote:
> No doubt I can write a routine to parse [JSON], but look at how deep they went to re-invent, XML itself wasnt good enough for them, for some reason (Id love to hear the reason). Who did this travesty? Lets find a tree and string them up. Now.
Its easy to understand Winers frustration. XML has never been widely loved. Even Winer has said that he does not love XML. But XML was designed to be a system that could be used by everyone for almost anything imaginable. To that end, XML is actually a meta-language that allows you to define domain-specific languages for individual applications—RSS, the web feed technology, and SOAP (Simple Object Access Protocol) are examples. Winer felt that it was important to work toward consensus because of all the benefits a common interchange format could bring. He felt that XMLs flexibility should be able to accommodate everybodys needs. And yet here was JSON, a format offering no benefits over XML except those enabled by throwing out the cruft that made XML so flexible.
Crockford saw Winers blog post and left a comment on it. In response to the charge that JSON was reinventing XML, Crockford wrote, “The good thing about reinventing the wheel is that you can get a round one.”
### JSON vs XML
By 2014, JSON had been officially specified by both an ECMA standard and an RFC. It had its own MIME type. JSON had made it to the big leagues.
Why did JSON become so much more popular than XML?
On [JSON.org][2], Crockford summarizes some of JSONs advantages over XML. He writes that JSON is easier for both humans and machines to understand, since its syntax is minimal and its structure is predictable. Other bloggers have focused on XMLs verbosity and “the angle bracket tax.” Each opening tag in XML must be matched with a closing tag, meaning that an XML document contains a lot of redundant information. This can make an XML document much larger than an equivalent JSON document when uncompressed, but, perhaps more importantly, it also makes an XML document harder to read.
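To make the angle-bracket tax concrete, consider the same record serialized both ways (a rough sketch in Python; the record and its field names are invented for illustration):
```
import json

# The same record, once as JSON and once as equivalent XML.
person_json = json.dumps({"name": "Ada", "email": "ada@example.org"})
person_xml = ("<person><name>Ada</name>"
              "<email>ada@example.org</email></person>")

print(person_json)
print(person_xml)
# Every opening tag needs a matching closing tag, so the XML version
# carries noticeably more characters for the same information.
print(len(person_json), len(person_xml))
```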
Crockford has also claimed that another enormous advantage for JSON is that JSON was designed as a data interchange format. It was meant to carry structured information between programs from the very beginning. XML, though it has been used for the same purpose, was originally designed as a document markup language. It evolved from SGML (Standard Generalized Markup Language), which in turn evolved from a markup language called Scribe, intended as a word processing system similar to LaTeX. In XML, a tag can contain what is called “mixed content,” or text with inline tags surrounding words or phrases. This recalls the image of an editor marking up a manuscript with a red or blue pen, which is arguably the central metaphor of a markup language. JSON, on the other hand, does not support a clear analogue to mixed content, but that means that its structure can be simpler. A document is best modeled as a tree, but by throwing out the document idea Crockford could limit JSON to dictionaries and arrays, the basic and familiar elements all programmers use to build their programs.
Finally, my own hunch is that people disliked XML because it was confusing, and it was confusing because it seemed to come in so many different flavors. At first blush, its not obvious where the line is between XML proper and its sub-languages like RSS, ATOM, SOAP, or SVG. The first lines of a typical XML document establish the XML version and then the particular sub-language the XML document should conform to. That is a lot of variation to account for already, especially when compared to JSON, which is so straightforward that no new version of the JSON specification is ever expected to be written. The designers of XML, in their attempt to make XML the one data interchange format to rule them all, fell victim to that classic programmers pitfall: over-engineering. XML was so generalized that it was hard to use for something simple.
In 2000, a campaign was launched to get HTML to conform to the XML standard. A specification was published for XML-compliant HTML, thereafter known as XHTML. Some browser vendors immediately started supporting the new standard, but it quickly became obvious that the vast HTML-producing public were unwilling to revise their habits. The new standard called for stricter validation of XHTML than had been the norm for HTML, but too many websites depended on HTMLs forgiving rules. By 2009, an attempt to write a second version of the XHTML standard was aborted when it became clear that the future of HTML was going to be HTML5, a standard that did not insist on XML compliance.
If the XHTML effort had succeeded, then maybe XML would have become the common data format that its designers hoped it would be. Imagine a world in which HTML documents and API responses had the exact same structure. In such a world, JSON might not have become as ubiquitous as it is today. But I read the failure of XHTML as a kind of moral defeat for the XML camp. If XML wasnt the best tool for HTML, then maybe there were better tools out there for other applications also. In that world, our world, it is easy to see how a format as simple and narrowly tailored as JSON could find great success.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][3] on Twitter or subscribe to the [RSS feed][4] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/09/21/the-rise-and-rise-of-json.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://twobithistory.org/images/json.svg
[2]: http://JSON.org
[3]: https://twitter.com/TwoBitHistory
[4]: https://twobithistory.org/feed.xml

View File

@ -0,0 +1,50 @@
The Most Important Database You've Never Heard of
======
In 1962, JFK challenged Americans to send a man to the moon by the end of the decade, inspiring a heroic engineering effort that culminated in Neil Armstrongs first steps on the lunar surface. Many of the fruits of this engineering effort were highly visible and sexy—there were new spacecraft, new spacesuits, and moon buggies. But the Apollo Program was so staggeringly complex that new technologies had to be invented even to do the mundane things. One of these technologies was IBMs Information Management System (IMS).
IMS is a database management system. NASA needed one in order to keep track of all the parts that went into building a Saturn V rocket, which—because there were two million of them—was expected to be a challenge. Databases were a new idea in the 1960s and there werent any already available for NASA to use, so, in 1965, NASA asked IBM to work with North American Aviation and Caterpillar Tractor to create one. By 1968, IBM had installed a working version of IMS at NASA, though at the time it was called ICS/DL/I for “Informational Control System and Data Language/Interface.” (IBM seems to have gone through a brief, unfortunate infatuation with the slash; see [PL/I][1].) Two years later, IBM rebranded ICS/DL/I as “IMS” and began selling it to other customers. It was one of the first commercially available database management systems.
The incredible thing about IMS is that it is still in use today. And not just on a small scale: Banks, insurance companies, hospitals, and government agencies still use IMS for all sorts of critical tasks. Over 95% of Fortune 1000 companies use IMS in some capacity, as do all of the top five US banks. Whenever you withdraw cash from an ATM, the odds are exceedingly good that you are interacting with IMS at some point in the course of your transaction. In a world where the relational database is an old workhorse increasingly in competition with trendy new NoSQL databases, IMS is a freaking dinosaur. It is a relic from an era before the relational database was even invented, which didnt happen until 1970. And yet it seems to be the database system in charge of all the important stuff.
I think this makes IMS pretty interesting. Depending on how you feel about relational databases, it either offers insight into how the relational model improved on its predecessors or else exemplifies an alternative model better suited to certain problems.
IMS works according to a hierarchical model, meaning that, instead of thinking about data as tables that can be brought together using JOIN operations, IMS thinks about data as trees. Each kind of record you store can have other kinds of records as children; these child record types represent additional information that you might be interested in given a record of the parent type.
To take an example, say that you want to store information about bank customers. You might have one type of record to represent customers and another type of record to represent accounts. Like in a relational database, where each table has columns, these records will have different fields; we might want to have a first name field, a last name field, and a city field for each customer. We must then decide whether we are likely to first lookup a customer and then information about that customers account, or whether we are likely to first lookup an account and then information about that accounts owner. Assuming we decide that we will access customers first, then we will make our account record type a child of our customer record type. Diagrammed, our database model would look something like this:
![][2]
And an actual database might look like:
![][3]
By modeling our data this way, we are hewing close to the reality of how our data is stored. Each parent record includes pointers to its children, meaning that moving down our tree from the root node is efficient. (Actually, each parent basically stores just one pointer to the first of its children. The children in turn contain pointers to their siblings. This ensures that the size of a record does not vary with the number of children it has.) This efficiency can make data accesses very fast, provided that we are accessing our data in ways that we anticipated when we first structured our database. According to IBM, an IMS instance can process over 100,000 transactions a second, which is probably a large part of why IMS is still used, particularly at banks. But the downside is that we have lost a lot of flexibility. If we want to access our data in ways we did not anticipate, we will have a hard time.
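That pointer scheme can be sketched with ordinary data structures (a toy illustration in Python rather than IMS itself; the record contents are invented):
```
# Each record points to its first child; children are chained through
# sibling pointers, so a parent's size does not grow with its number
# of children.
class Record:
    def __init__(self, data):
        self.data = data
        self.first_child = None
        self.next_sibling = None

    def children(self):
        child = self.first_child
        while child is not None:
            yield child
            child = child.next_sibling

customer = Record({"first_name": "Jane", "last_name": "Doe", "city": "Peoria"})
checking = Record({"account_no": "1001", "balance": 500})
savings = Record({"account_no": "1002", "balance": 2500})
customer.first_child = checking
checking.next_sibling = savings

# Walking down the tree from the customer to its accounts is cheap...
for account in customer.children():
    print(account.data["account_no"])

# ...but reaching an account without first finding its customer would
# need a second hierarchy or a separate index, as described below.
accounts_index = {a.data["account_no"]: a for a in customer.children()}
print(accounts_index["1002"].data["balance"])
```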
To illustrate this, consider what might happen if we decide that we would like to access accounts before customers. Perhaps customers are calling in to update their addresses, and we would like them to uniquely identify themselves using their account numbers. So we want to use an account number to find an account, and then from there find the accounts owner. But since all accesses start at the root of our tree, theres no way for us to get to an account efficiently without first deciding on a customer. To fix this problem, we could introduce a second tree or hierarchy starting with account records; these account records would then have customer records as children. This would let us access accounts and then customers efficiently. But it would involve duplicating information that we already have stored in our database—we would have two trees storing the same information in different orders. Another option would be to establish an index of accounts that could point us to the right account record given an account number. That would work too, but it would entail extra work during insert and update operations in the future.
It was precisely this inflexibility and the problem of duplicated information that pushed E. F. Codd to propose the relational model. In his 1970 paper, A Relational Model of Data for Large Shared Data Banks, he states at the outset that he intends to present a model for data storage that can protect users from having to know anything about how their data is stored. Looked at one way, the hierarchical model is entirely an artifact of how the designers of IMS chose to store data. It is a bottom-up model, the implication of a physical reality. The relational model, on the other hand, is an abstract model based on relational algebra, and is top-down in that the data storage scheme can be anything provided it accommodates the model. The relational models great advantage is that, just because youve made decisions that have caused the database to store your data in a particular way, you wont find yourself effectively unable to make certain queries.
All that said, the relational model is an abstraction, and we all know abstractions arent free. Banks and large institutions have stuck with IMS partly because of the performance benefits, though its hard to say if those benefits would be enough to keep them from switching to a modern database if they werent also trying to avoid rewriting mission-critical legacy code. However, todays popular NoSQL databases demonstrate that there are people willing to drop the conveniences of the relational model in return for better performance. Something like MongoDB, which encourages its users to store data in a denormalized form, isnt all that different from IMS. If you choose to store some entity inside of another JSON record, then in effect you have created something like the IMS hierarchy, and you have constrained your ability to query for that data in the future. But perhaps thats a tradeoff youre willing to make. So, even if IMS hadnt predated E. F. Codds relational model by several years, there are still reasons why IMS creators might not have adopted the relational model wholesale.
Unfortunately, IMS isnt something that you can download and take for a spin on your own computer. First of all, IMS is not free, so you would have to buy it from IBM. But the bigger problem is that IMS only runs on IBM mainframes like the IBM z13. Thats a shame, because it would be a joy to play around with IMS and get a sense for exactly how it differs from something like MySQL. But even without that opportunity, its interesting to think about software systems that work in ways we dont expect or arent used to. And its especially interesting when those systems, alien as they are, turn out to undergird your local hospital, the entire financial sector, and even the federal government.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][4] on Twitter or subscribe to the [RSS feed][5] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/10/07/the-most-important-database.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/PL/I
[2]: https://twobithistory.org/images/hierarchical-model.png
[3]: https://twobithistory.org/images/hierarchical-db.png
[4]: https://twitter.com/TwoBitHistory
[5]: https://twobithistory.org/feed.xml

View File

@ -0,0 +1,84 @@
The Ruby Story
======
Ruby has always been one of my favorite languages, though Ive sometimes found it hard to express why that is. The best Ive been able to do is this musical analogy: Whereas Python feels to me like punk rock—its simple, predictable, but rigid—Ruby feels like jazz. Ruby gives programmers a radical freedom to express themselves, though that comes at the cost of added complexity and can lead to programmers writing programs that dont make immediate sense to other people.
I've always been aware that freedom of expression is a core value of the Ruby community. But what I didn't appreciate is how deeply important it was to the development and popularization of Ruby in the first place. One might create a programming language in pursuit of better performance, or perhaps timesaving abstractions—the Ruby story is interesting because instead the goal was, from the very beginning, nothing more or less than the happiness of the programmer.
### Yukihiro Matsumoto
Yukihiro Matsumoto, also known as “Matz,” graduated from the University of Tsukuba in 1990. Tsukuba is a small town just northeast of Tokyo, known as a center for scientific research and technological development. The University of Tsukuba is particularly well-regarded for its STEM programs. Matsumoto studied Information Science, with a focus on programming languages. For a time he worked in a programming language lab run by Ikuo Nakata.
Matsumoto started working on Ruby in 1993, only a few years after graduating. He began working on Ruby because he was looking for a scripting language with features that no existing scripting language could provide. He was using Perl at the time, but felt that it was too much of a “toy language.” Python also fell short; in his own words:
> I knew Python then. But I didnt like it, because I didnt think it was a true object-oriented language—OO features appeared to be an add-on to the language. As a language maniac and OO fan for 15 years, I really wanted a genuine object-oriented, easy-to-use scripting language. I looked for one, but couldnt find one.
So one way of understanding Matsumotos motivations in creating Ruby is that he was trying to create a better, object-oriented version of Perl.
But at other times, Matsumoto has said that his primary motivation in creating Ruby was simply to make himself and others happier. Toward the end of a Google tech talk that Matsumoto gave in 2008, he showed the following slide:
![][1]
He told his audience,
> I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of the Ruby language.
Matsumoto goes on to joke that he created Ruby for selfish reasons, because he was so underwhelmed by other languages that he just wanted to create something that would make him happy.
The slide epitomizes Matsumotos humble style. Matsumoto, it turns out, is a practicing Mormon, and Ive wondered whether his religious commitments have any bearing on his legendary kindness. In any case, this kindness is so well known that the Ruby community has a principle known as MINASWAN, or “Matz Is Nice And So We Are Nice.” The slide must have struck the audience at Google as an unusual one—I imagine that any random slide drawn from a Google tech talk is dense with code samples and metrics showing how one engineering solution is faster or more efficient than another. Few, I suspect, come close to stating nobler goals more simply.
Ruby was influenced primarily by Perl. Perl was created by Larry Wall in the late 1980s as a means of processing and transforming text-based reports. It became well-known for its text processing and regular expression capabilities. A Perl program contains many syntactic elements that would be familiar to a Ruby programmer—there are `$` signs, `@` signs, and even `elsif`s, which I'd always thought were one of Ruby's less felicitous idiosyncrasies. On a deeper level, Ruby borrows much of Perl's regular expression handling and standard library.
But Perl was by no means the only influence on Ruby. Prior to beginning work on Ruby, Matsumoto worked on a mail client written entirely in Emacs Lisp. The experience taught him a lot about the inner workings of Emacs and the Lisp language, which Matsumoto has said influenced the underlying object model of Ruby. On top of that he added a Smalltalk-style message passing system which forms the basis for any behavior relying on Ruby's `#method_missing`. Matsumoto has also claimed Ada and Eiffel as influences on Ruby.
When it came time to decide on a name for Ruby, Matsumoto and a colleague, Keiju Ishitsuka, considered several alternatives. They were looking for something that suggested Rubys relationship to Perl and also to shell scripting. In an [instant message exchange][2] that is well-worth reading, Ishitsuka and Matsumoto probably spend too much time thinking about the relationship between shells, clams, oysters, and pearls and get close to calling the Ruby language “Coral” or “Bisque” instead. Thankfully, they decided to go with “Ruby”, the idea being that it was, like “pearl”, the name of a valuable jewel. It also turns out that the birthstone for June is a pearl while the birthstone for July is a ruby, meaning that the name “Ruby” is another tongue-in-cheek “incremental improvement” name like C++ or C#.
### Ruby Goes West
Ruby grew popular in Japan very quickly. Soon after its initial release in 1995, Matz was hired by a Japanese software consulting group called Netlab (also known as Network Applied Communication Laboratory) to work on Ruby full-time. By 2000, only five years after it was initially released, Ruby was more popular in Japan than Python. But it was only just beginning to make its way to English-speaking countries. There had been a Japanese-language mailing list for Ruby discussion since almost the very beginning of Rubys existence, but the English-language mailing list wasnt started until 1998. Initially, the English-language mailing list was used by Japanese Rubyists writing in English, but this gradually changed as awareness of Ruby grew.
In 2000, Dave Thomas published Programming Ruby, the first English-language book to cover Ruby. The book became known as the “pickaxe” book for the pickaxe it featured on its cover. It introduced Ruby to many programmers in the West for the first time. Like it had in Japan, Ruby spread quickly, and by 2002 the English-language Ruby mailing list had more traffic than the original Japanese-language mailing list.
By 2005, Ruby had become more popular, but it was still not a mainstream programming language. That changed with the release of Ruby on Rails. Ruby on Rails was the “killer app” for Ruby, and it did more than any other project to popularize Ruby. After the release of Ruby on Rails, interest in Ruby shot up across the board, as measured by the TIOBE language index:
![][3]
Its sometimes joked that the only programs anybody writes in Ruby are Ruby-on-Rails web applications. That makes it sound as if Ruby on Rails completely took over the Ruby community, which is only partly true. While Ruby has certainly come to be known as that language people write Rails apps in, Rails owes as much to Ruby as Ruby owes to Rails.
The Ruby philosophy heavily informed the design and implementation of Rails. David Heinemeier Hansson, who created Rails, often talks about how his first contact with Ruby was an almost religious experience. He has said that the encounter was so transformative that it “imbued him with a calling to do missionary work in service of Matzs creation.” For Hansson, Rubys no-shackles approach was a politically courageous rebellion against the top-down impositions made by languages like Python and Java. He appreciated that the language trusted him and empowered him to make his own judgements about how best to express his programs.
Like Matsumoto, Hansson claims that he created Rails out of a frustration with the status quo and a desire to make things better for himself. He, like Matsumoto, prioritized programmer happiness above all else, evaluating additions to Rails by what he calls “The Principle of The Bigger Smile.” Whatever made Hansson smile more was what made it into the Rails codebase. As a result, Rails would come to include unorthodox features like the “Inflector” class (which tries to map singular class names to plural database table names automatically) and Rails `Time` extensions (allowing programmers to write cute expressions like `2.days.ago`). To some, these features were truly weird, but the success of Rails is testament to the number of people who found it made their lives much easier.
And so, while it might seem that Rails was an incidental application of Ruby that happened to become extremely popular, Rails in fact embodies many of Ruby's core principles. Furthermore, it's hard to see how Rails could have been built in any other language, given its dependence on Ruby's macro-like class method calls to implement things like model associations. Some people might take the fact that so much of Ruby development revolves around Ruby on Rails as a sign of an unhealthy ecosystem, but there are good reasons that Ruby and Ruby on Rails are so intertwined.
### The Future of Ruby
People seem to have an inordinate amount of interest in whether or not Ruby (and Ruby on Rails) are dying. Since as early as 2011, it seems that Stack Overflow and Quora have been full of programmers asking whether or not they should bother learning Ruby if it will no longer be around in the next few years. These concerns are not unjustified; according to the TIOBE index and to Stack Overflow trends, Ruby and Ruby on Rails have been shrinking in popularity. Though Ruby on Rails was once the hot new thing, it has since been eclipsed by hotter and newer frameworks.
One theory for why this has happened is that programmers are abandoning dynamically typed languages for statically typed ones. Analysts at the TIOBE index figure that a rise in quality requirements has made runtime exceptions increasingly unacceptable. They cite TypeScript as an example of this trend—a whole new version of JavaScript was created just to ensure that client-side code could be written with the benefit of compile-time safety guarantees.
A more likely answer, I think, is just that Ruby on Rails now has many more competitors than it once did. When Rails was first introduced in 2005, there weren't that many ways to create web applications—the main alternative was Java. Today, you can create web applications using great frameworks built for Go, JavaScript, or Python, to name only the most popular options. The web world also seems to be moving toward a more distributed architecture for applications, meaning that, rather than having one codebase responsible for everything from database access to view rendering, responsibilities are split between different components that focus on doing one thing well. Rails feels overbroad and bloated for something as focused as a JSON API that talks to a JavaScript frontend.
All that said, there are reasons to be optimistic about Rubys future. Both Rails and Ruby continue to be actively developed. Matsumoto and others are working hard on Rubys third major release, which they aim to make three times faster than the existing version of Ruby, possibly alleviating the performance concerns that have always dogged Ruby. And even if the world of web frameworks has become more diverse since 2005, that doesnt mean that there wont always be room for Ruby on Rails. It is now a mature tool with an enormous amount of built-in power that will always be a good choice for certain kinds of applications.
But even if Ruby and Rails go the way of the dinosaurs, one thing that seems certain to survive is the Ruby ethos of programmer happiness. Ruby has had a profound influence on the design of many new programming languages, which have adopted many of its best ideas. Other new languages have tried to be “more modern” interpretations of Ruby: Elixir, for example, is a version of Ruby that emphasizes the functional programming paradigm, while Crystal, which is still in development, aims to be a statically typed version of Ruby. Many programmers around the world have fallen in love with Ruby and its syntax, so we can count on its influence persisting for a long while to come.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][4] on Twitter or subscribe to the [RSS feed][5] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/11/19/the-ruby-story.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://twobithistory.org/images/matz.png
[2]: http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/88819
[3]: https://twobithistory.org/images/tiobe_ruby.png
[4]: https://twitter.com/TwoBitHistory
[5]: https://twobithistory.org/feed.xml

View File

@ -0,0 +1,44 @@
Important Papers: Codd and the Relational Model
======
Its hard to believe today, but the relational database was once the cool new kid on the block. In 2017, the relational model competes with all sorts of cutting-edge NoSQL technologies that make relational database systems seem old-fashioned and boring. Yet, 50 years ago, none of the dominant database systems were relational. Nobody had thought to structure their data that way. When the relational model did come along, it was a radical new idea that revolutionized the database world and spawned a multi-billion dollar industry.
The relational model was introduced in 1970. Edgar F. Codd, a researcher at IBM, published a [paper][1] called “A Relational Model of Data for Large Shared Data Banks.” The paper was a rewrite of a paper he had circulated internally at IBM a year earlier. The paper is unassuming; Codd does not announce in his abstract that he has discovered a brilliant new approach to storing data. He only claims to have employed a novel tool (the mathematical notion of a “relation”) to address some of the inadequacies of the prevailing database models.
In 1970, there were two schools of thought about how to structure a database: the hierarchical model and the network model. The hierarchical model was used by IBMs Information Management System (IMS), the dominant database system at the time. The network model had been specified by a standards committee called CODASYL (which also—random tidbit—specified COBOL) and implemented by several other database system vendors. The two models were not really that different; both could be called “navigational” models. They persisted tree or graph data structures to disk using pointers to preserve the links between the data. Retrieving a record stored toward the bottom of the tree would involve first navigating through all of its ancestor records. These databases were fast (IMS is still used by many financial institutions partly for this reason, see [this excellent blog post][2]) but inflexible. Woe unto those database administrators who suddenly found themselves needing to query records from the bottom of the tree without having an obvious place to start at the top.
Codd saw this inflexibility as a symptom of a larger problem. Programs using a hierarchical or network database had to know about how the stored data was structured. Programs had to know this because they were responsible for navigating down this structure to find the information they needed. This was so true that when Charles Bachman, a major pioneer of the network model, received a Turing Award for his work in 1973, he gave a speech titled “[The Programmer as Navigator][3].” Of course, if programs were saddled with this responsibility, then they would immediately break if the structure of the database ever changed. In the introduction to his 1970 paper, Codd motivates the search for a better model by arguing that we need “data independence,” which he defines as “the independence of application programs and terminal activities from growth in data types and changes in data representation.” The relational model, he argues, “appears to be superior in several respects to the graph or network model presently in vogue,” partly because, among other benefits, the relational model “provides a means of describing data with its natural structure only.” By this he meant that programs could safely ignore any artificial structures (like trees) imposed upon the data for storage and retrieval purposes only.
To further illustrate the problem with the navigational models, Codd devotes the first section of his paper to an example data set involving machine parts and assembly projects. This dataset, he says, could be represented in existing systems in at least five different ways. Any program that is developed assuming one of five structures will fail when run against at least three of the other structures. The program could instead try to figure out ahead of time which of the structures it might be dealing with, but it would be difficult to do so in this specific case and practically impossible in the general case. So, as long as the program needs to know about how the data is structured, we cannot switch to an alternative structure without breaking the program. This is a real bummer because (and this is from the abstract) “changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.”
Codd then introduces his relational model. This model would be refined and expanded in subsequent papers: In 1971, Codd wrote about ALPHA, a SQL-like query language he created; in another 1971 paper, he introduced the first three normal forms we know and love today; and in 1972, he further developed relational algebra and relational calculus, the mathematically rigorous underpinnings of the relational model. But Codds 1970 paper contains the kernel of the relational idea:
> The term relation is used here in its accepted mathematical sense. Given sets *S1*, *S2*, …, *Sn* (not necessarily distinct), *R* is a relation on these *n* sets if it is a set of *n*-tuples each of which has its first element from *S1*, its second element from *S2*, and so on. We shall refer to *Sj* as the *j*th domain of *R*. As defined above, *R* is said to have degree *n*. Relations of degree 1 are often called unary, degree 2 binary, degree 3 ternary, and degree *n* n-ary.
Today, we call a relation a table, and a domain an attribute or a column. The word “table” actually appears nowhere in the paper, though Codds visual representations of relations (which he calls “arrays”) do resemble tables. Codd defines several more terms, some of which we continue to use and others we have replaced. He explains primary and foreign keys, as well as what he calls the “active domain,” which is the set of all distinct values that actually appear in a given domain or column. He then spends some time distinguishing between a “simple” and a “nonsimple” domain. A simple domain contains “atomic” or “nondecomposable” values, like integers. A nonsimple domain has relations as elements. The example Codd gives here is that of an employee with a salary history. The salary history is not one salary but a collection of salaries each associated with a date. So a salary history cannot be represented by a single number or string.
It's not obvious how one could store a nonsimple domain in a multi-dimensional array, AKA a table. The temptation might be to denote the nonsimple relationship using some kind of pointer, but then we would be repeating the mistakes of the navigational models. Instead, Codd introduces normalization, which at least in the 1970 paper involves nothing more than turning nonsimple domains into simple ones. This is done by expanding the child relation so that it includes the primary key of the parent. Each tuple of the child relation references its parent using simple domains, eliminating the need for a nonsimple domain in the parent. Normalization means no pointers, sidestepping all the problems they cause in the navigational models.
At this point, anyone reading Codds paper would have several questions, such as “Okay, how would I actually query such a system?” Codd mentions the possibility of creating a universal sublanguage for querying relational databases from other programs, but declines to define such a language in this particular paper. He does explain, in mathematical terms, many of the fundamental operations such a language would have to support, like joins, “projection” (`SELECT` in SQL), and “restriction” (`WHERE`). The amazing thing about Codds 1970 paper is that, really, all the ideas are there—weve been writing `SELECT` statements and joins for almost half a century now.
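Both points, the normalized salary history and the basic operations, can be sketched with Python's built-in `sqlite3` module (the table and column names here are invented for illustration):
```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Normalization: the salary history is not nested inside the
    -- employee record; each history row simply carries the parent's
    -- primary key, so every domain stays simple.
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE salary_history (
        employee_id INTEGER REFERENCES employee(id),
        effective_date TEXT,
        salary INTEGER
    );
    INSERT INTO employee VALUES (1, 'Ada');
    INSERT INTO salary_history VALUES (1, '2016-01-01', 70000);
    INSERT INTO salary_history VALUES (1, '2018-01-01', 85000);
""")

# A join brings the relations back together; the SELECT list is the
# "projection" and the WHERE clause is the "restriction".
rows = conn.execute("""
    SELECT employee.name, salary_history.salary
    FROM employee
    JOIN salary_history ON salary_history.employee_id = employee.id
    WHERE salary_history.effective_date >= '2017-01-01'
""").fetchall()
print(rows)  # [('Ada', 85000)]
```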
Codd wraps up the paper by discussing ways in which a normalized relational database, on top of its other benefits, can reduce redundancy and improve consistency in data storage. Altogether, the paper is only 11 pages long and not that difficult of a read. I encourage you to look through it yourself. It would be another ten years before Codds ideas were properly implemented in a functioning system, but, when they finally were, those systems were so obviously better than previous systems that they took the world by storm.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][4] on Twitter or subscribe to the [RSS feed][5] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/12/29/codd-relational-model.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://cs.uwaterloo.ca/~david/cs848s14/codd-relational.pdf
[2]: https://twobithistory.org/2017/10/07/the-most-important-database.html
[3]: https://pdfs.semanticscholar.org/f371/d196bf0e7b43df6dcbbc44de461925a21709.pdf
[4]: https://twitter.com/TwoBitHistory
[5]: https://twobithistory.org/feed.xml

View File

@ -1,4 +1,4 @@
How writing can change your career for the better, even if you don't identify as a writer Translating by FelixYFZ
How writing can change your career for the better, even if you don't identify as a writer
======
Have you read Marie Kondo's book [The Life-Changing Magic of Tidying Up][1]? Or did you, like me, buy it and read a little bit and then add it to the pile of clutter next to your bed?

View File

@ -0,0 +1,106 @@
Whatever Happened to the Semantic Web?
======
In 2001, Tim Berners-Lee, inventor of the World Wide Web, published an article in Scientific American. Berners-Lee, along with two other researchers, Ora Lassila and James Hendler, wanted to give the world a preview of the revolutionary new changes they saw coming to the web. Since its introduction only a decade before, the web had fast become the worlds best means for sharing documents with other people. Now, the authors promised, the web would evolve to encompass not just documents but every kind of data one could imagine.
They called this new web the Semantic Web. The great promise of the Semantic Web was that it would be readable not just by humans but also by machines. Pages on the web would be meaningful to software programs—they would have semantics—allowing programs to interact with the web the same way that people do. Programs could exchange data across the Semantic Web without having to be explicitly engineered to talk to each other. According to Berners-Lee, Lassila, and Hendler, a typical day living with the myriad conveniences of the Semantic Web might look something like this:
> The entertainment system was belting out the Beatles “We Can Work It Out” when the phone rang. When Pete answered, his phone turned the sound down by sending a message to all the other local devices that had a volume control. His sister, Lucy, was on the line from the doctors office: “Mom needs to see a specialist and then has to have a series of physical therapy sessions. Biweekly or something. Im going to have my agent set up the appointments.” Pete immediately agreed to share the chauffeuring. At the doctors office, Lucy instructed her Semantic Web agent through her handheld Web browser. The agent promptly retrieved the information about Moms prescribed treatment within a 20-mile radius of her home and with a rating of excellent or very good on trusted rating services. It then began trying to find a match between available appointment times (supplied by the agents of individual providers through their Web sites) and Petes and Lucys busy schedules.
The vision was that the Semantic Web would become a playground for intelligent “agents.” These agents would automate much of the work that the world had only just learned to do on the web.
![][1]
For a while, this vision enticed a lot of people. After new technologies such as AJAX led to the rise of what Silicon Valley called Web 2.0, Berners-Lee began referring to the Semantic Web as Web 3.0. Many thought that the Semantic Web was indeed the inevitable next step. A New York Times article published in 2006 quotes a speech Berners-Lee gave at a conference in which he said that the extant web would, twenty years in the future, be seen as only the “embryonic” form of something far greater. A venture capitalist, also quoted in the article, claimed that the Semantic Web would be “profound,” and ultimately “as obvious as the web seems obvious to us today.”
Of course, the Semantic Web we were promised has yet to be delivered. In 2018, we have “agents” like Siri that can do certain tasks for us. But Siri can only do what it can because engineers at Apple have manually hooked it up to a medley of web services each capable of answering only a narrow category of questions. An important consequence is that, without being large and important enough for Apple to care, you cannot advertise your services directly to Siri from your own website. Unlike the physical therapists that Berners-Lee and his co-authors imagined would be able to hang out their shingles on the web, today we are stuck with giant, centralized repositories of information. Todays physical therapists must enter information about their practice into Google or Yelp, because those are the only services that the smartphone agents know how to use and the only ones human beings will bother to check. The key difference between our current reality and the promised Semantic future is best captured by this throwaway aside in the excerpt above: “…appointment times (supplied by the agents of individual providers through **their** Web sites)…”
In fact, over the last decade, the web has not only failed to become the Semantic Web but also threatened to recede as an idea altogether. We now hardly ever talk about “the web” and instead talk about “the internet,” which as of 2016 has become such a common term that newspapers no longer capitalize it. (To be fair, they stopped capitalizing “web” too.) Some might still protest that the web and the internet are two different things, but the distinction gets less clear all the time. The web we have today is slowly becoming a glorified app store, just the easiest way among many to download software that communicates with distant servers using closed protocols and schemas, making it functionally identical to the software ecosystem that existed before the web. How did we get here? If the effort to build a Semantic Web had succeeded, would the web have looked different today? Or have there been so many forces working against a decentralized web for so long that the Semantic Web was always going to be stillborn?
### Semweb Hucksters and Their Metacrap
To some more practically minded engineers, the Semantic Web was, from the outset, a utopian dream.
The basic idea behind the Semantic Web was that everyone would use a new set of standards to annotate their webpages with little bits of XML. These little bits of XML would have no effect on the presentation of the webpage, but they could be read by software programs to divine meaning that otherwise would only be available to humans.
The bits of XML were a way of expressing metadata about the webpage. We are all familiar with metadata in the context of a file system: When we look at a file on our computers, we can see when it was created, when it was last updated, and whom it was originally created by. Likewise, webpages on the Semantic Web would be able to tell your browser who authored the page and perhaps even where that person went to school, or where that person is currently employed. In theory, this information would allow Semantic Web browsers to answer queries across a large collection of webpages. In their article for Scientific American, Berners-Lee and his co-authors explain that you could, for example, use the Semantic Web to look up a person you met at a conference whose name you only partially remember.
Cory Doctorow, a blogger and digital rights activist, published an influential essay in 2001 that pointed out the many problems with depending on voluntarily supplied metadata. A world of “exhaustive, reliable” metadata would be wonderful, he argued, but such a world was “a pipe-dream, founded on self-delusion, nerd hubris, and hysterically inflated market opportunities.” Doctorow had found himself in a series of debates over the Semantic Web at tech conferences and wanted to catalog the serious issues that the Semantic Web enthusiasts (Doctorow calls them “semweb hucksters”) were overlooking. The essay, titled “Metacrap,” identifies seven problems, among them the obvious fact that most web users were likely to provide either no metadata at all or else lots of misleading metadata meant to draw clicks. Even if users were universally diligent and well-intentioned, in order for the metadata to be robust and reliable, users would all have to agree on a single representation for each important concept. Doctorow argued that in some cases a single representation might not be appropriate, desirable, or fair to all users.
Indeed, the web had already seen people abusing the HTML `<meta>` tag (introduced at least as early as HTML 4) in an attempt to improve the visibility of their webpages in search results. In a 2004 paper, Ben Munat, then an academic at Evergreen State College, explains how search engines once experimented with using keywords supplied via the `<meta>` tag to index results, but soon discovered that unscrupulous webpage authors were including tags unrelated to the actual content of their webpage. As a result, search engines came to ignore the `<meta>` tag in favor of using complex algorithms to analyze the actual content of a webpage. Munat concludes that a general-purpose Semantic Web is unworkable, and that the focus should be on specific domains within medicine and science.
Others have also seen the Semantic Web project as tragically flawed, though they have located the flaw elsewhere. Aaron Swartz, the famous programmer and another digital rights activist, wrote in an unfinished book about the Semantic Web published after his death that Doctorow was “attacking a strawman.” Nobody expected that metadata on the web would be thoroughly accurate and reliable, but the Semantic Web, or at least a more realistically scoped version of it, remained possible. The problem, in Swartz view, was the “formalizing mindset of mathematics and the institutional structure of academics” that the “semantic Webheads” brought to bear on the challenge. In forums like the World Wide Web Consortium (W3C), a huge amount of effort and discussion went into creating standards before there were any applications out there to standardize. And the standards that emerged from these “Talmudic debates” were so abstract that few of them ever saw widespread adoption. The few that did, like XML, were “uniformly scourges on the planet, offenses against hardworking programmers that have pushed out sensible formats (like JSON) in favor of overly-complicated hairballs with no basis in reality.” The Semantic Web might have thrived if, like the original web, its standards were eagerly adopted by everyone. But that never happened because—as [has been discussed][2] on this blog before—the putative benefits of something like XML are not easy to sell to a programmer when the alternatives are both entirely sufficient and much easier to understand.
### Building the Semantic Web
If the Semantic Web was not an outright impossibility, it was always going to require the contributions of lots of clever people working in concert.
The long effort to build the Semantic Web has been said to consist of four phases. The first phase, which lasted from 2001 to 2005, was the golden age of Semantic Web activity. Between 2001 and 2005, the W3C issued a slew of new standards laying out the foundational technologies of the Semantic future.
The most important of these was the Resource Description Framework (RDF). The W3C issued the first version of the RDF standard in 2004, but RDF had been floating around since 1997, when a W3C working group introduced it in a draft specification. RDF was originally conceived of as a tool for modeling metadata and was partly based on earlier attempts by Ramanathan Guha, an Apple engineer, to develop a metadata system for files stored on Apple computers. The Semantic Web working groups at W3C repurposed RDF to represent arbitrary kinds of general knowledge.
RDF would be the grammar in which Semantic webpages expressed information. The grammar is a simple one: Facts about the world are expressed in RDF as triplets of subject, predicate, and object. Tim Bray, who worked with Ramanathan Guha on an early version of RDF, gives the following example, describing TV shows and movies:
```
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex: <http://www.example.org/> .
ex:vincent_donofrio ex:starred_in ex:law_and_order_ci .
ex:law_and_order_ci rdf:type ex:tv_show .
ex:the_thirteenth_floor ex:similar_plot_as ex:the_matrix .
```
The syntax is not important, especially since RDF can be represented in a number of formats, including XML and JSON. This example is in a format called Turtle, which expresses RDF triplets as straightforward sentences terminated by periods. The three essential sentences, which appear above after the `@prefix` preamble, state three facts: Vincent Donofrio starred in Law and Order, Law and Order is a type of TV Show, and the movie The Thirteenth Floor has a similar plot as The Matrix. (If you dont know who Vincent Donofrio is and have never seen The Thirteenth Floor, I, too, was watching Nickelodeon and sipping Capri Suns in 1999.)
Other specifications finalized and drafted during this first era of Semantic Web development describe all the ways in which RDF can be used. RDF in Attributes (RDFa) defines how RDF can be embedded in HTML so that browsers, search engines, and other programs can glean meaning from a webpage. RDF Schema and another standard called OWL allow RDF authors to demarcate the boundary between valid and invalid RDF statements in their RDF documents. RDF Schema and OWL, in other words, are tools for creating what are known as ontologies, explicit specifications of what can and cannot be said within a specific domain. An ontology might include a rule, for example, expressing that no person can be the mother of another person without also being a parent of that person. The hope was that these ontologies would be widely used not only to check the accuracy of RDF found in the wild but also to make inferences about omitted information.
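The kind of inference these ontologies were meant to enable can be sketched without any RDF tooling at all (a toy Python illustration in the spirit of `rdfs:subPropertyOf`; the names and the rule encoding are invented for this example):
```
# A toy "ontology" rule: every mother_of statement also implies a
# parent_of statement.
triples = {("alice", "mother_of", "bob")}
sub_property_of = {"mother_of": "parent_of"}

# Apply the rule to infer triples that were never stated explicitly.
inferred = {(s, sub_property_of[p], o)
            for (s, p, o) in triples if p in sub_property_of}
print(triples | inferred)  # includes ('alice', 'parent_of', 'bob')
```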
In 2006, Tim Berners-Lee posted a short article in which he argued that the existing work on Semantic Web standards needed to be supplemented by a concerted effort to make semantic data available on the web. Furthermore, once on the web, it was important that semantic data link to other kinds of semantic data, ensuring the rise of a data-based web as interconnected as the existing web. Berners-Lee used the term “linked data” to describe this ideal scenario. Though “linked data” was in one sense just a recapitulation of the original vision for the Semantic Web, it became a term that people could rally around and thus amounted to a rebranding of the Semantic Web project.
Berners-Lees article launched the second phase of the Semantic Webs development, where the focus shifted from setting standards and building toy examples to creating and popularizing large RDF datasets. Perhaps the most successful of these datasets was [DBpedia][3], a giant repository of RDF triplets extracted from Wikipedia articles. DBpedia, which made heavy use of the Semantic Web standards that had been developed in the first half of the 2000s, was a standout example of what could be accomplished using the W3Cs new formats. Today DBpedia describes 4.58 million entities and is used by organizations like the NY Times, BBC, and IBM, which employed DBpedia as a knowledge source for IBM Watson, the Jeopardy-winning artificial intelligence system.
![][4]
The third phase of the Semantic Web's development involved adapting the W3C's standards to fit the actual practices and preferences of web developers. By 2008, JSON had begun its meteoric rise to popularity. Whereas XML came packaged with a bunch of associated technologies of indeterminate purpose (XSLT, XPath, XQuery, XLink), JSON was just JSON. It was less verbose and more readable. Manu Sporny, an entrepreneur and member of the W3C, had already started using JSON at his company and wanted to find an easy way for RDFa and JSON to work together. The result would be JSON-LD, which in essence was RDF reimagined for a world that had chosen JSON over XML. Sporny, together with his CTO, Dave Longley, issued a draft specification of JSON-LD in 2010. For the next few years, JSON-LD and an updated RDF specification would be the primary focus of Semantic Web work at the W3C. JSON-LD could be used on its own or it could be embedded within a `<script>` tag on an HTML page, making it an alternative to both RDF and RDFa.
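To give a flavor of that reimagining, the first fact from the Turtle example above might be written in JSON-LD roughly as follows (a sketch only; the JSON-LD specification defines the authoritative syntax):
```
import json

# "@context" maps the "ex" prefix to a full IRI, and "@id" identifies
# the nodes being described, mirroring the subject and object of the
# Turtle triple.
fact = {
    "@context": {"ex": "http://www.example.org/"},
    "@id": "ex:vincent_donofrio",
    "ex:starred_in": {"@id": "ex:law_and_order_ci"},
}
print(json.dumps(fact, indent=2))
```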
Work on JSON-LD coincided with the development of [schema.org][5], a centralized collection of simple schemas for describing things that might exist on the web. schema.org was started by Google, Bing, and Yahoo with the express purpose of delivering better search results by agreeing to a common set of vocabularies. schema.org vocabularies, together with JSON-LD, are now used to drive features like Googles Knowledge Graph. The approach was a more practical and less abstract one, where immediate applications in search results were the focus. The schema.org team are careful to state on their website that they are not attempting to create a “universal ontology.”
Today, work on the Semantic Web seems to have petered out. The W3C still does some work on the Semantic Web under the heading of “Data Activity,” which might charitably be called the fourth phase of the Semantic Web project. But its telling that the most recent “Data Activity” project is a study of what the W3C must do to improve its standardization process. Even the W3C now appears to recognize that few of its Semantic Web standards have been widely adopted and that simpler standards would have been more successful. The attitude at the W3C seems to be one of retrenchment and introspection, perhaps in the hope of being better prepared when the Semantic Web looks promising again.
### A Lingering Legacy
And so the Semantic Web, as colorfully described by one person, is “as dead as last years roadkill.” At least, the version of the Semantic Web originally proposed by Tim Berners-Lee, which once seemed to be the imminent future of the web, is unlikely to emerge soon. That said, many of the technologies and ideas that were developed amid the push to create the Semantic Web have been repurposed and live on in various applications. As already mentioned, Google relies on Semantic Web technologies—now primarily JSON-LD—to generate useful conceptual summaries next to search results. schema.org maintains a list of “vocabularies” that web developers can use to publish easily understood data for a wide audience—it is a new, more practical imagining of what a public, shared ontology might look like. And to some degree, the many REST APIs now available constitute a diminished Semantic Web. What wasnt possible in 2001 now is: You can easily build applications that make use of data from across the web. The difference is that you must sign up for each API one by one beforehand, which in addition to being wearisome also gives whoever hosts the API enormous control over how you access their data.
Another modern application of Semantic Web technologies, perhaps the most popular and successful in recent years outside of Google, is Facebooks [OpenGraph][6] protocol. The OpenGraph protocol defines a schema that web developers can use (via RDFa) to determine how a web page is displayed when shared in a social media application. For example, a web developer working at the New York Times might use OpenGraph to specify the title and thumbnail that should appear when a New York Times article is shared in Facebook. In one sense, this is an application of Semantic Web technologies true to the Semantic Webs origins in research on metadata. Tagging a webpage with extra information about who wrote it and what it is about is exactly the kind of metadata authoring the Semantic Web was going to depend on. But in another sense, OpenGraph is an application of Semantic Web technologies to further a purpose somewhat at odds with the philosophy of the web. The metadata isnt meant to be general-purpose, after all. People tag their webpages using OpenGraph because they want links to their content to unfurl properly in Facebook. And the more information Facebook knows about your website, the closer Facebook gets to simply reproducing your entire website within Facebook, portending a future where the open web is a mythical land beyond Facebooks towering blue walls.
What's fascinating about JSON-LD and OpenGraph is that you can use them without knowing anything about subject-predicate-object triplets, RDF, RDF Schema, ontologies, OWL, or really any other Semantic Web technologies—you don't even have to know XML. Manu Sporny has even said that the JSON-LD working group at W3C made a special effort to avoid references to RDF in the JSON-LD specification. This is almost certainly why these technologies have succeeded and continue to be popular. Nobody wants to use a tool that can only be fully understood by reading a whole family of specifications.
It's interesting to consider what might have happened if simple formats like JSON-LD had appeared earlier. The Semantic Web could have sidestepped its fatal association with XML. More people might have been tempted to mark up their websites with RDF, but even that may not have saved the Semantic Web. Sean B. Palmer, an Internet Person who has scrubbed all biographical information about himself from the internet but who claims to have worked in the Semantic Web world for a while in the 2000s, posits that the real problem was the lack of a truly decentralized infrastructure to host the Semantic Web on. To host your own website, you need to buy a domain name from ICANN, configure it correctly using DNS, and then pay someone to host your content if you don't already have a server of your own. We shouldn't be surprised if the average person finds it easier to enter their information into a giant, corporate data repository. And in a web of giant, corporate data repositories, there are no compelling use cases for Semantic Web technologies.
So the problems that confronted the Semantic Web were more numerous and profound than just “XML sucks.” All the same, it's hard to believe that the Semantic Web is truly dead and gone. Some of the particular technologies that the W3C dreamed up in the early 2000s may not have a future, but the decentralized vision of the web that Tim Berners-Lee and his fellow researchers described in Scientific American is too compelling to simply disappear. Imagine a web where, rather than filling out the same tedious form every time you register for a service, you were somehow able to authorize services to get that information from your own website. Imagine a Facebook that keeps your list of friends, hosted on your own website, up-to-date, rather than vice-versa. Basically, the Semantic Web was going to be a web where everyone gets to have their own personal REST API, whether they know the first thing about computers or not. Conceived of that way, it's easy to see why the Semantic Web hasn't yet been realized. There are so many engineering and security issues to sort out between here and there. But it's also easy to see why the dream of the Semantic Web seduced so many people.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][7] on Twitter or subscribe to the [RSS feed][8] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2018/05/27/semantic-web.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://twobithistory.org/images/scientific_american_cover.jpg
[2]: https://twobithistory.org/2017/09/21/the-rise-and-rise-of-json.html
[3]: http://wiki.dbpedia.org/
[4]: https://twobithistory.org/images/linked_data.png
[5]: http://schema.org/
[6]: http://ogp.me/
[7]: https://twitter.com/TwoBitHistory
[8]: https://twobithistory.org/feed.xml

View File

@ -1,3 +1,5 @@
thecyanbird translating
Where Vim Came From
======
I recently stumbled across a file format known as Intel HEX. As far as I can gather, Intel HEX files (which use the `.hex` extension) are meant to make binary images less opaque by encoding them as lines of hexadecimal digits. Apparently they are used by people who program microcontrollers or need to burn data into ROM. In any case, when I opened up a HEX file in Vim for the first time, I discovered something shocking. Here was this file format that, at least to me, was deeply esoteric, but Vim already knew all about it. Each line of a HEX file is a record divided into different fields—Vim had gone ahead and colored each of the fields a different color. `set ft?` I asked, in awe. `filetype=hex`, Vim answered, triumphant.

View File

@ -1,110 +0,0 @@
HankChow translating
3 areas to drive DevOps change
======
Driving large-scale organizational change is painful, but when it comes to DevOps, the payoff is worth the pain.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-inclusion-transformation-change_20180927.png?itok=2E-g10hJ)
Pain avoidance is a powerful motivator. Some studies hint that even [plants experience a type of pain][1] and take steps to defend themselves. Yet we have plenty of examples of humans enduring pain on purpose—exercise often hurts, but we still do it. When we believe the payoff is worth the pain, we'll endure almost anything.
The truth is that driving large-scale organizational change is painful. It hurts for those having to change their values and behaviors, it hurts for leadership, and it hurts for the people just trying to do their jobs. In the case of DevOps, though, I can tell you the pain is worth it.
I've seen firsthand how teams learn they must spend time improving their technical processes, take ownership of their automation pipelines, and become masters of their fate. They gain the tools they need to be successful.
![Improvements after DevOps transformation][3]
Image by Lee Eason. CC BY-SA 4.0
This chart shows the value of that change. In a company where I directed a DevOps transformation, its 60+ teams submitted more than 900 requests per month to release management. If you add up the time those tickets stayed open, it came to more than 350 days per month. What could your company do with an extra 350 person-days per month? In addition to the improvements seen above, they went from 100 to 9,000 deployments per month, a 24% decrease in high-severity bugs, happier engineers, and improved net promoter scores (NPS). The biggest NPS improvements link to the teams furthest along on their DevOps journey, as the [Puppet State of DevOps][4] report predicted. The bottom line is that investments into technical process improvement translate into better business outcomes.
DevOps leaders must focus on three main areas to drive this change: executives, culture, and team health.
### Executives
The larger your organization, the greater the distance (and opportunities for misunderstanding) between business leadership and the individuals delivering services to your customers. To make things worse, the landscape of tools and practices in technology is changing at an accelerating rate. This makes it practically impossible for business leaders to understand on their own how transformations like DevOps or agile work.
DevOps leaders must help executives come along for the ride. Educating leaders gives them options when they're making decisions and makes it more likely they'll choose paths that help your company.
For example, let's say your executives believe DevOps is going to improve how you deploy your products into production, but they don't understand how. You've been working with a software team to help automate their deployment. When an executive hears about a deploy failure (and there will be failures), they will want to understand how it occurred. When they learn the software team did the deployment rather than the release management team, they may try to protect the business by decreeing all production releases must go through traditional change controls. You will lose credibility, and teams will be far less likely to trust you and accept further changes.
It takes longer to rebuild trust with executives and get their support after an incident than it would have taken to educate them in the first place. Put the time in upfront to build alignment, and it will pay off as you implement tactical changes.
Two pieces of advice when building that alignment:
* First, **don't ignore any constraints** they raise. If they have worries about contracts or security, make the heads of legal and security your new best friends. By partnering with them, you'll build their trust and avoid making costly mistakes.
* Second, **use metrics to build a bridge** between what your delivery teams are doing and your executives' concerns. If the business has a goal to reduce customer churn, and you know from research that many customers leave because of unplanned downtime, reinforce that your teams are committed to tracking and improving Mean Time To Detection and Resolution (MTTD and MTTR). You can use those key metrics to show meaningful progress that teams and executives understand and get behind.
### Culture
DevOps is a culture of continuous improvement focused on code, build, deploy, and operational processes. Culture describes the organization's values and behaviors. Essentially, we're talking about changing how people behave, which is never easy.
I recommend reading [The Wolf in CIO's Clothing][5]. Spend time thinking about psychology and motivation. Read [Drive][6] or at least watch Daniel Pink's excellent [TED Talk][7]. Read [The Hero with a Thousand Faces][8] and learn to identify the different journeys everyone is on. If none of these things sound interesting, you are not the right person to drive change in your company. Otherwise, read on!
Most rational people behave according to their values. Most organizations don't have explicit values everyone understands and lives by. Therefore, you'll need to identify the organization's values that have led to the behaviors that have led to the current state. You also need to make sure you can tell the story about how those values came to be and how they led to where you are. When you tell that story, be careful not to demonize those values—they aren't immoral or evil. People did the best they could at the time, given what they knew and what resources they had.
Explain that the company and its organizational goals are changing, and the team must alter its values. It's helpful to express this in terms of contrast. For example, your company may have historically valued cost savings above all else. That value is there for a reason—the company was cash-strapped. To get new products out, the infrastructure group had to tightly couple services by sharing database clusters or servers. Over time, those practices created a real mess that became hard to maintain. Simple changes started breaking things in unexpected ways. This led to tight change-control processes that were painful for delivery teams, so they stopped changing things.
Play that movie for five years, and you end up with little to no innovation, legacy technology, attraction and retention problems, and poor-quality products. You've grown the company, but you've hit a ceiling, and you can't continue to grow with those same values and behaviors. Now you must put engineering efficiency above cost saving. If one option will help teams maintain their service easier, but the other option is cheaper in the short term, you go with the first option.
You must tell this story again and again. Then you must celebrate any time a team expresses the new value through their behavior—even if they make a mistake. When a team has a deploy failure, congratulate them for taking the risk and encourage them to keep learning. Explain how their behavior is leading to the right outcome and support them. Over time, teams will see the message is real, and they'll feel safe altering their behavior.
### Team health
Have you ever been in a planning meeting and heard something like this: "We can't really estimate that story until John gets back from vacation. He's the only one who knows that area of the code well enough." Or: "We can't get this task done because it's got a cross-team dependency on network engineering, and the guy that set up the firewall is out sick." Or: "John knows that system best; if he estimated the story at a 3, then let's just go with that." When the team works on that story, who will most likely do the work? That's right, John will, and the cycle will continue.
For a long time, we've accepted that this is just the nature of software development. If we don't solve for it, we perpetuate the cycle.
Entropy will always drive teams naturally towards disorder and bad health. Our job as team members and leaders is to intentionally manage against that entropy and keep our teams healthy. Transformations like DevOps, agile, moving to the cloud, or refactoring a legacy application all amplify and accelerate that entropy. That's because transformations add new skills and expertise needed for the team to take on that new type of work.
Let's look at an example of a product team refactoring its legacy monolith. As usual, they build those new services in AWS. The legacy monolith was deployed to the data center, monitored, and backed up by IT. IT made sure the application's infosec requirements were met at the infrastructure layer. They conducted disaster recovery tests, patched the servers, and installed and configured required intrusion detection and antivirus agents. And they kept change control records, required for the annual audit process, of everything that was done to the application's infrastructure.
I often see product teams make the fatal mistake of thinking IT is all cost and bottleneck. They're hungry to shed the skin of IT and use the public cloud, but they never stop to appreciate the critical services IT provides. Moving to the cloud means you implement these things differently; they don't go away. AWS is still a data center, and any team utilizing it accepts the related responsibilities.
In practice, this means product teams must learn how to do those IT services when they move to the cloud. So, when our fictional product team starts refactoring its legacy application and putting new services in the cloud, it will need a vastly expanded skillset to be successful. Those skills don't magically appear—they're learned or hired—and team leaders and managers must actively manage the process.
I built [Tekata.io][9] because I couldn't find any tools to support me as I helped my teams evolve. Tekata is free and easy to use, but the tool is not as important as the people and process. Make sure you build continuous learning into your cadence and keep track of your team's weak spots. Those weak spots affect your ability to deliver, and filling them usually involves learning new things, so there's a wonderful synergy here. In fact, 76% of millennials think professional development opportunities are [one of the most important elements][10] of company culture.
### Proof is in the payoff
DevOps transformations involve altering the behavior, and therefore the culture, of your teams. That must be done with executive support and understanding. At the same time, those behavior changes mean learning new skills, and that process must also be managed carefully. But the payoff for pulling this off is more productive teams, happier and more engaged team members, higher quality products, and happier customers.
Lee Eason will present [Tales From A DevOps Transformation][11] at [All Things Open][12], October 21-23 in Raleigh, N.C.
Disclaimer: All opinions and statements in this article are exclusively those of Lee Eason and are not representative of Ipreo or IHS Markit.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/tales-devops-transformation
作者:[Lee Eason][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/leeeason
[b]: https://github.com/lujun9972
[1]: https://link.springer.com/article/10.1007%2Fs00442-014-2995-6
[2]: /file/411061
[3]: https://opensource.com/sites/default/files/uploads/devops-delays.png (Improvements after DevOps transformation)
[4]: https://puppet.com/resources/whitepaper/state-of-devops-report
[5]: https://www.gartner.com/en/publications/wolf-cio
[6]: https://en.wikipedia.org/wiki/Drive:_The_Surprising_Truth_About_What_Motivates_Us
[7]: https://www.ted.com/talks/dan_pink_on_motivation?language=en#t-2094
[8]: https://en.wikipedia.org/wiki/The_Hero_with_a_Thousand_Faces
[9]: https://tekata.io/
[10]: https://www.execu-search.com/~/media/Resources/pdf/2017_Hiring_Outlook_eBook
[11]: https://allthingsopen.org/talk/tales-from-a-devops-transformation/
[12]: https://allthingsopen.org/

View File

@ -1,3 +1,5 @@
Northurland Translating
How Lisp Became God's Own Programming Language
======
When programmers discuss the relative merits of different programming languages, they often talk about them in prosaic terms as if they were so many tools in a tool belt—one might be more appropriate for systems programming, another might be more appropriate for gluing together other programs to accomplish some ad hoc task. This is as it should be. Languages have different strengths and claiming that a language is better than other languages without reference to a specific use case only invites an unproductive and vitriolic debate.

View File

@ -0,0 +1,60 @@
5 tips for facilitators of agile meetings
======
Boost your team's productivity and motivation with these agile principles.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-women-meeting-team.png?itok=BdDKxT1w)
As an Agile practitioner, I often hear that the best way to have business meetings is to avoid more meetings, or to cancel them altogether.
Do your meetings fail to keep attendees engaged or run longer than they should? Perhaps you have mixed feelings about participating in meetings—but don't want to be excluded?
If all this sounds familiar, read on.
### How do we fix meetings?
Meetings are an integral part of work culture, so improving them can bring important benefits. But improving how meetings are structured requires a change in how the entire organization is led and managed. This is where the agile mindset comes into play.
An agile mindset is an _attitude that equates failure and problems with opportunities for learning, and a belief that we can all improve over time._ Meetings can bring great value to an organization, as long as they are not pointless. The best way to eliminate pointless meetings is to have a meeting facilitator with an agile mindset. The key attribute of agile-driven facilitation is to focus on problem-solving.
Agile meeting facilitators confronting a complex problem start by breaking the meeting agenda down into modules. They also place more value on adapting to change than sticking to a plan. They work with meeting attendees to develop a solution based on feedback loops. This assures audience engagement and makes the meetings productive. The result is an integrated, agreed-upon solution that comprises a set of coherent action items aligned on a goal.
### What are the skills of an agile meeting facilitator?
An agile meeting facilitator is able to quickly adapt to changing circumstances. He or she integrates all stakeholders and encourages them to share knowledge and skills.
To succeed in this role, you must understand that agile is not something that you do, but something that you can become. As the [Manifesto for Agile Software Development][1] notes, tools and processes are important, but it is more important to have competent people working together effectively.
### 5 tips for agile meeting facilitation
1. **Start with the problem in mind.** Identify the purpose of the meeting and narrow the agenda items to those that are most important. Stay tuned in and focused.
  2. **Make sure that a senior leader doesn't run the meeting.** Many senior leaders tend to create an environment in which the team expects to be told what to do. Instead, create an environment in which diverse ideas are the norm. Encourage open discussion in which leaders share where—but not how—innovation is needed. This reduces the layer of control and approval, increases the time focused on decision-making, and boosts the team's motivation.
3. **Identify bottlenecks early.** Bureaucratic procedures or lack of collaboration between team members leads to meeting meltdowns and poor results. Anticipate how things might go wrong and be prepared to offer suggestions, not dictate solutions.
  4. **Show, don't tell.** Share the meeting goals and create the meeting agenda in advance. Allow time to adjust the agenda items and their order to achieve the best flow. Make sure that the meeting's agenda is clear and visible to all attendees.
5. **Know when to wait.** Map out a clear timeline for the meeting and help keep the meeting on track. Understand when you should allow an item to go long versus when you should table a discussion. This will go a long way toward helping you stay on track.
The ultimate goal is to create a work environment that encourages contribution and empowers the team. Improving how meetings are run will help your organization transition from a traditional hierarchy to a more agile enterprise.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/agile-culture-5-tips-meeting-facilitators
作者:[Dominika Bula][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dominika
[b]: https://github.com/lujun9972
[1]: http://agilemanifesto.org/

View File

@ -0,0 +1,74 @@
Why it matters that Microsoft released old versions of MS-DOS as open source
======
Microsoft's release of MS-DOS 1.25 and 2.0 on GitHub adopts an open source license that's compatible with GNU GPL.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_business_sign_store.jpg?itok=g4QibRqg)
One open source software project I work on is the FreeDOS Project. It's a complete, free, DOS-compatible operating system that you can use to play classic DOS games, run legacy business software, or develop embedded systems. Any program that works on MS-DOS should also run on FreeDOS.
So I took notice when Microsoft recently released the source code to MS-DOS 1.25 and 2.0 via a [GitHub repository][1]. This is a huge step for Microsoft, and I'd like to briefly explain why it is significant.
### MS-DOS as open source software
Some open source fans may recall that this is not the first time Microsoft has officially released the MS-DOS source code. On March 25, 2014, Microsoft posted the source code to MS-DOS 1.1 and 2.0 via the [Computer History Museum][2]. Unfortunately, this source code was released under a “look but do not touch” license that limited what you could do with it. According to the license from the 2014 source code release, users were barred from re-using it in other projects and could use it “[solely for non-commercial research, experimentation, and educational purposes.][3]”
The museum license wasn't friendly to open source software, and as a result, the MS-DOS source code was ignored. On the FreeDOS Project, we interpreted the “look but do not touch” license as a potential risk to FreeDOS, so we decided developers who had viewed the MS-DOS source code could not contribute to FreeDOS.
But Microsoft's recent MS-DOS source code release represents a significant change. This MS-DOS source code uses the MIT License (also called the Expat License). Quoting Microsoft's [LICENSE.md][4] file on GitHub:
> ## MS-DOS v1.25 and v2.0 Source Code
>
> Copyright © Microsoft Corporation.
>
> All rights reserved.
>
> MIT License.
>
> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
>
> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
>
> THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
If that text looks familiar to you, it is because that's the same text as the MIT License recognized by the [Open Source Initiative][5]. It's also the same as the Expat License recognized by the [Free Software Foundation][6].
The Free Software Foundation (via GNU) says the Expat License is compatible with the [GNU General Public License][7]. Specifically, GNU describes the Expat License as “a lax, permissive non-copyleft free software license, compatible with the GNU GPL. It is sometimes ambiguously referred to as the MIT License.” Also according to GNU, when they say a license is [compatible with the GNU GPL][8], “you can combine code released under the other license [MIT/Expat License] with code released under the GNU GPL in one larger program.”
Microsoft's use of the MIT/Expat License for the original MS-DOS source code is significant because the license is not only open source software but free software.
### What does it mean?
This is great, but there's a practical side to the source code release. You might think, “If Microsoft has released the MS-DOS source code under a license compatible with the GNU GPL, will that help FreeDOS?”
Not really. Here's why: FreeDOS started from an original source code base, independent from MS-DOS. Certain functions and behaviors of MS-DOS were identified and documented in the comprehensive [Interrupt List by Ralf Brown][9], and we provided MS-DOS compatibility in FreeDOS by referencing the Interrupt List. But many significant fundamental technical differences remain between FreeDOS and MS-DOS. For example, FreeDOS uses a completely different memory structure and memory layout. You can't simply forklift MS-DOS source code into FreeDOS and expect it to work. The code assumptions are quite different.
There's also the simple matter that these are very old versions of MS-DOS. For example, MS-DOS 2.0 was the first version to support directories and redirection. But these versions of MS-DOS did not yet include more advanced features, including networking, CDROM support, and 386 support such as EMM386. These features have been standard in FreeDOS for a long time.
So the MS-DOS source code release is interesting, but FreeDOS would not be able to reuse this code for any modern features anyway. FreeDOS has already surpassed these versions of MS-DOS in functionality and features.
### Congratulations
Still, it's important to recognize the big step that Microsoft has taken in releasing these versions of MS-DOS as open source software. The new MS-DOS source code release on GitHub does away with the restrictive license from 2014 and adopts a recognized open source software license that is compatible with the GNU GPL. Congratulations to Microsoft for releasing MS-DOS 1.25 and 2.0 under an open source license!
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/microsoft-open-source-old-versions-ms-dos
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://github.com/Microsoft/MS-DOS
[2]: http://www.computerhistory.org/press/ms-source-code.html
[3]: http://www.computerhistory.org/atchm/microsoft-research-license-agreement-msdos-v1-1-v2-0/
[4]: https://github.com/Microsoft/MS-DOS/blob/master/LICENSE.md
[5]: https://opensource.org/licenses/MIT
[6]: https://directory.fsf.org/wiki/License:Expat
[7]: https://www.gnu.org/licenses/license-list.en.html#Expat
[8]: https://www.gnu.org/licenses/gpl-faq.html#WhatDoesCompatMean
[9]: http://www.cs.cmu.edu/~ralf/files.html

View File

@ -1,516 +0,0 @@
fuowang 翻译中
9 Best Free Video Editing Software for Linux In 2017
======
**Brief: Here are the best video editors for Linux, their features, pros and cons, and how to install them on your Linux distribution.**
![Best Video editors for Linux][2]
We have discussed the [best photo management applications for Linux][3] and the [best code editors for Linux][4] in similar articles in the past. Today we shall see the **best video editing software for Linux**.
When asked about free video editing software, Windows Movie Maker and iMovie are what most people often suggest.
Unfortunately, neither of them is available for GNU/Linux. But you don't need to worry about it; we have pooled together a list of the **best free video editors** for you.
## Best Video Editors for Linux
Let's have a look at the best free video editing software for Linux below. Here's a quick summary if you think the article is too long to read:
| Video Editor | Main Usage | Type |
| --- | --- | --- |
| Kdenlive | General purpose video editing | Free and Open Source |
| OpenShot | General purpose video editing | Free and Open Source |
| Shotcut | General purpose video editing | Free and Open Source |
| Flowblade | General purpose video editing | Free and Open Source |
| Lightworks | Professional grade video editing | Freemium |
| Blender | Professional grade 3D editing | Free and Open Source |
| Cinelerra | General purpose video editing | Free and Open Source |
| DaVinci Resolve | Professional grade video editing | Freemium |
| VidCutter | Simple video split and merge | Free and Open Source |
### 1\. Kdenlive
![Kdenlive-free video editor on ubuntu][5]
[Kdenlive][6] is free and [open source][7] video editing software from [KDE][8] that provides support for dual video monitors, a multi-track timeline, a clip list, customizable layouts, basic effects, and basic transitions.
It supports a wide variety of file formats from a wide range of camcorders and cameras, including low-resolution camcorder formats (raw and AVI DV editing), MPEG-2, MPEG-4, and H.264 AVCHD (small cameras and camcorders), high-resolution camcorder files (including HDV and AVCHD camcorders), and professional camcorder formats, including XDCAM-HD™ streams, IMX™ (D10) streams, DVCAM (D10), DVCAM, DVCPRO™, DVCPRO50™ streams, and DNxHD™ streams.
If you are looking for an iMovie alternative for Linux, Kdenlive would be your best bet.
#### Kdenlive features
* Multi-track video editing
* A wide range of audio and video formats
* Configurable interface and shortcuts
* Easily create titles using text or images
* Plenty of effects and transitions
* Audio and video scopes make sure the footage is correctly balanced
* Proxy editing
* Automatic save
* Wide hardware support
* Keyframeable effects
#### Pros
* All-purpose video editor
* Not too complicated for those who are familiar with video editing
#### Cons
* It may still be confusing if you are looking for something extremely simple
* KDE applications are infamous for being bloated
#### Installing Kdenlive
Kdenlive is available for all major Linux distributions. You can simply search for it in your software center. Various packages are available in the [download section of Kdenlive website][9].
Command line enthusiasts can install it from the terminal by running the following command in Debian and Ubuntu-based Linux distributions:
```
sudo apt install kdenlive
```
### 2\. OpenShot
![Openshot-free-video-editor-on-ubuntu][10]
[OpenShot][11] is another multi-purpose video editor for Linux. OpenShot can help you create videos with transitions and effects. You can also adjust audio levels. Of course, it supports most formats and codecs.
You can also export your film to DVD, upload it to YouTube, Vimeo, or Xbox 360, and save it in many common video formats. OpenShot is a tad bit simpler than Kdenlive. So if you need a video editor with a simple UI, OpenShot is a good choice.
There is also neat documentation to [get you started with OpenShot][12].
#### OpenShot features
* Cross-platform, available on Linux, macOS, and Windows
* Support for a wide range of video, audio, and image formats
* Powerful curve-based Keyframe animations
* Desktop integration with drag and drop support
* Unlimited tracks or layers
* Clip resizing, scaling, trimming, snapping, rotation, and cutting
* Video transitions with real-time previews
* Compositing, image overlays and watermarks
* Title templates, title creation, sub-titles
* Support for 2D animation via image sequences
* 3D animated titles and effects
* SVG friendly for creating and including vector titles and credits
* Scrolling motion picture credits
* Frame accuracy (step through each frame of video)
* Time-mapping and speed changes on clips
* Audio mixing and editing
* Digital video effects, including brightness, gamma, hue, greyscale, chroma key etc
#### Pros
* All-purpose video editor for average video editing needs
* Available on Windows and macOS along with Linux
#### Cons
* It may be simple, but if you are extremely new to video editing, there is definitely a learning curve involved here
* It may still not be up to the mark of professional-grade, movie-making editing software
#### Installing OpenShot
OpenShot is also available in the repository of all major Linux distributions. You can simply search for it in your software center. You can also get it from its [official website][13].
My favorite way is to use the following command in Debian and Ubuntu-based Linux distributions:
```
sudo apt install openshot
```
### 3\. Shotcut
![Shotcut Linux video editor][14]
[Shotcut][15] is another video editor for Linux that can be put in the same league as Kdenlive and OpenShot. While it does provide features similar to the other two discussed above, Shotcut is a bit more advanced, with support for 4K videos.
Support for a number of audio and video formats, transitions, and effects is among the numerous features of Shotcut. External monitors are also supported.
There is a collection of video tutorials to [get you started with Shotcut][16]. It is also available for Windows and macOS so you can use your learning on other operating systems as well.
#### Shotcut features
* Cross-platform, available on Linux, macOS, and Windows
* Support for a wide range of video, audio, and image formats
* Native timeline editing
* Mix and match resolutions and frame rates within a project
* Audio filters, mixing and effects
* Video transitions and filters
* Multitrack timeline with thumbnails and waveforms
* Unlimited undo and redo for playlist edits including a history view
* Clip resizing, scaling, trimming, snapping, rotation, and cutting
* Trimming on source clip player or timeline with ripple option
* External monitoring on an extra system display/monitor
* Hardware support
You can read about more features [here][17].
#### Pros
* All-purpose video editor for common video editing needs
* Support for 4K videos
* Available on Windows and macOS along with Linux
#### Cons
* Too many features reduce the simplicity of the software
#### Installing Shotcut
Shotcut is available in [Snap][18] format. You can find it in Ubuntu Software Center. For other distributions, you can get the executable file from its [download page][19].
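If you prefer the terminal, the Snap package can also be installed from the command line. This assumes the snap is published under the name `shotcut` and uses classic confinement:
```
sudo snap install shotcut --classic
```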
### 4\. Flowblade
![Flowblade movie editor on ubuntu][20]
[Flowblade][21] is a multitrack non-linear video editor for Linux. Like the above-discussed ones, this too is free and open source software. It comes with a stylish and modern user interface.
Written in Python, it is designed to be fast and precise. Flowblade has focused on providing the best possible experience on Linux and other free platforms, so there is no Windows or OS X version for now. Feels good to be a Linux exclusive.
You also get decent [documentation][22] to help you use all of its features.
#### Flowblade features
* Lightweight application
* Provides a simple interface for simple tasks like split, merge, and overwrite
* Plenty of audio and video effects and filters
* Supports [proxy editing][23]
* Drag and drop support
* Support for a wide range of video, audio, and image formats
* Batch rendering
* Watermarks
* Video transitions and filters
* Multitrack timeline with thumbnails and waveforms
You can read about more [Flowblade features][24] here.
#### Pros
* Lightweight
* Good for general purpose video editing
#### Cons
* Not available on other platforms
#### Installing Flowblade
Flowblade should be available in the repositories of all major Linux distributions. You can install it from the software center. More information is available on its [download page][25].
Alternatively, you can install Flowblade in Ubuntu and other Ubuntu based systems, using the command below:
```
sudo apt install flowblade
```
### 5\. Lightworks
![Lightworks running on ubuntu 16.04][26]
If you are looking for video editing software that has more features, this is the answer. [Lightworks][27] is a cross-platform professional video editor, available for Linux, Mac OS X and Windows.
It is an award-winning professional [non-linear editing][28] (NLE) software that supports resolutions up to 4K as well as video in SD and HD formats.
Lightworks is available for Linux, however, it is not open source.
This application has two versions:
* Lightworks Free
* Lightworks Pro
The Pro version has more features, such as higher resolution support, 4K, and Blu-ray support.
Extensive documentation is available on its [website][29]. You can also refer to the videos on the [Lightworks video tutorials page][30].
#### Lightworks features
* Cross-platform
* Simple & intuitive User Interface
* Easy timeline editing & trimming
* Real-time ready to use audio & video FX
* Access amazing royalty-free audio & video content
* Lo-Res Proxy workflows for 4K
* Export video for YouTube/Vimeo, SD/HD, up to 4K
* Drag and drop support
* Wide variety of audio and video effects and filters
#### Pros
* Professional, feature-rich video editor
#### Cons
* Limited free version
#### Installing Lightworks
Lightworks provides DEB packages for Debian and Ubuntu-based Linux distributions and RPM packages for Fedora-based Linux distributions. You can find the packages on its [download page][31].
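Once you have downloaded the DEB package, you can install it from the terminal with `dpkg`. The file name below is only a placeholder; use the name of the package you actually downloaded:
```
# the file name is a placeholder for the version you downloaded
sudo dpkg -i lwks-14.0.0-amd64.deb
# install any missing dependencies reported by dpkg
sudo apt-get install -f
```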
### 6\. Blender
![Blender running on Ubuntu 16.04][32]
[Blender][33] is a professional, industry-grade, open source, cross-platform video editor. It is popular for 3D work. Blender has been used in several Hollywood movies, including the Spider-Man series.
Although originally designed for 3D modeling, it can also be used for video editing, and it accepts input in a variety of formats.
#### Blender features
* Live preview, luma waveform, chroma vectorscope and histogram displays
* Audio mixing, syncing, scrubbing and waveform visualization
* Up to 32 slots for adding video, images, audio, scenes, masks and effects
* Speed control, adjustment layers, transitions, keyframes, filters and more
You can read about more features [here][34].
#### Pros
* Cross-platform
* Professional grade editing
#### Cons
* Complicated
* Mainly for 3D animation, not focused on regular video editing
#### Installing Blender
The latest version of Blender can be downloaded from its [download page][35].
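If you do not need the very latest release, Blender is also available in the repositories of most distributions, though the packaged version may lag behind the official download. On Debian and Ubuntu-based systems, the following should work:
```
sudo apt install blender
```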
### 7\. Cinelerra
![Cinelerra video editor for Linux][36]
[Cinelerra][37] has been available since 1998 and has been downloaded over 5 million times. It was the first video editor to provide non-linear editing on 64-bit systems back in 2003. It was a go-to video editor for Linux users at that time but it lost its sheen afterward as some developers abandoned the project.
The good thing is that it's back on track and is being actively developed again.
There is an [interesting backstory][38] about how and why Cinelerra was started, if you care to read it.
#### Cinelerra features
* Non-linear editing
* Support for HD videos
* Built-in frame renderer
* Various video effects
* Unlimited layers
* Split pane editing
#### Pros
* All-purpose video editor
#### Cons
* Not suitable for beginners
* No packages available
#### Installing Cinelerra
You can download the source code from [SourceForge][39]. More information on its [download page][40].
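Building from source generally follows the usual unpack-configure-make routine. The sketch below uses the archive name from the SourceForge link above, but the exact steps and required dependencies depend on the release, so check the README that ships with the source first:
```
# unpack the source archive downloaded from SourceForge
tar xf cinelerra-6-src.tar.xz
cd cinelerra*

# typical source build; consult the bundled README for the required dependencies
./configure
make
sudo make install
```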
### 8\. DaVinci Resolve
![DaVinci Resolve video editor][41]
If you want Hollywood level video editing, use the tool the professionals use in Hollywood. [DaVinci Resolve][42] from Blackmagic is what professionals are using for editing movies and tv shows.
DaVinci Resolve is not your regular video editor. It's a full-fledged editing tool that provides editing, color correction and professional audio post-production in a single application.
DaVinci Resolve is not open source. Like LightWorks, it too provides a free version for Linux. The pro version costs $300.
#### DaVinci Resolve features
* High-performance playback engine
* All kinds of edit types, such as overwrite, insert, ripple overwrite, replace, fit to fill, and append at end
* Advanced Trimming
* Audio Overlays
* Multicam Editing allows editing footage from multiple cameras in real-time
* Transition and filter-effects
* Speed effects
* Timeline curve editor
* Non-linear editing for VFX
#### Pros
* Cross-platform
* Professional grade video editor
#### Cons
* Not suitable for average editing
* Not open source
* Some features are not available in the free version
#### Installing DaVinci Resolve
You can download DaVinci Resolve for Linux from [its website][42]. You'll have to register, even for the free version.
### 9\. VidCutter
![VidCutter video editor for Linux][43]
Unlike all the other video editors discussed here, [VidCutter][44] is utterly simple. It doesn't do much except splitting videos and merging. But at times you just need this and VidCutter gives you just that.
#### VidCutter features
* Cross-platform app available for Linux, Windows and MacOS
* Supports most of the common video formats such as: AVI, MP4, MPEG 1/2, WMV, MP3, MOV, 3GP, FLV etc
* Simple interface
* Trims and merges the videos, nothing more than that
#### Pros
* Cross-platform
* Good for simple split and merge
#### Cons
* Not suitable for regular video editing
* Crashes often
#### Installing VidCutter
If you are using Ubuntu-based Linux distributions, you can use the official PPA:
```
sudo add-apt-repository ppa:ozmartian/apps
sudo apt-get update
sudo apt-get install vidcutter
```
It is available in AUR so Arch Linux users can also install it easily. For other Linux distributions, you can find the installation files on its [GitHub page][45].
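For Arch users, a typical install from the AUR looks like this, assuming the package is named `vidcutter`; you can also use an AUR helper such as `yay` instead of building manually:
```
# build and install from the AUR manually
git clone https://aur.archlinux.org/vidcutter.git
cd vidcutter
makepkg -si

# or, with an AUR helper
yay -S vidcutter
```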
### Which is the best video editing software for Linux?
A number of video editors mentioned here use [FFmpeg][46]. You can use FFmpeg on your own as well. It's a command line only tool so I didn't include it in the main list but it would have been unfair to not mention it at all.
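To give an idea of what FFmpeg can do for simple cut-and-join jobs, trimming and concatenating without re-encoding look roughly like this; adjust the timestamps and file names to your own footage:
```
# cut 30 seconds starting at the one-minute mark, copying streams without re-encoding
ffmpeg -ss 00:01:00 -i input.mp4 -t 30 -c copy part1.mp4

# join the clips listed in parts.txt (one line per clip: file 'part1.mp4')
ffmpeg -f concat -safe 0 -i parts.txt -c copy joined.mp4
```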
If you need an editor for simply cutting and joining videos, go with VidCutter.
If you need something more than that, **OpenShot** or **Kdenlive** is a good choice. These are suitable for beginners and a system with standard specification.
If you have a high-end computer and need advanced features, you can go with **Lightworks** or **DaVinci Resolve**. If you are looking for more advanced features for 3D work, **Blender** has got your back.
So that's all I can write about the **best video editing software for Linux** distributions such as Ubuntu, Linux Mint, Elementary, and others. Share with us which video editor you like the most.
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-video-editing-software-linux/
作者:[It'S Foss Team][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/itsfoss/
[2]:https://itsfoss.com/wp-content/uploads/2016/06/best-Video-editors-Linux-800x450.png
[3]:https://itsfoss.com/linux-photo-management-software/
[4]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[5]:https://itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg
[6]:https://kdenlive.org/
[7]:https://itsfoss.com/tag/open-source/
[8]:https://www.kde.org/
[9]:https://kdenlive.org/download/
[10]:https://itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg
[11]:http://www.openshot.org/
[12]:http://www.openshot.org/user-guide/
[13]:http://www.openshot.org/download/
[14]:https://itsfoss.com/wp-content/uploads/2016/06/shotcut-video-editor-linux-800x503.jpg
[15]:https://www.shotcut.org/
[16]:https://www.shotcut.org/tutorials/
[17]:https://www.shotcut.org/features/
[18]:https://itsfoss.com/use-snap-packages-ubuntu-16-04/
[19]:https://www.shotcut.org/download/
[20]:https://itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg
[21]:http://jliljebl.github.io/flowblade/
[22]:https://jliljebl.github.io/flowblade/webhelp/help.html
[23]:https://jliljebl.github.io/flowblade/webhelp/proxy.html
[24]:https://jliljebl.github.io/flowblade/features.html
[25]:https://jliljebl.github.io/flowblade/download.html
[26]:https://itsfoss.com/wp-content/uploads/2016/06/lightworks-running-on-ubuntu-16.04.jpg
[27]:https://www.lwks.com/
[28]:https://en.wikipedia.org/wiki/Non-linear_editing_system
[29]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206&tab=4
[30]:https://www.lwks.com/videotutorials
[31]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206&tab=1
[32]:https://itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg
[33]:https://www.blender.org/
[34]:https://www.blender.org/features/
[35]:https://www.blender.org/download/
[36]:https://itsfoss.com/wp-content/uploads/2016/06/cinelerra-screenshot.jpeg
[37]:http://cinelerra.org/
[38]:http://cinelerra.org/our-story
[39]:https://sourceforge.net/projects/heroines/files/cinelerra-6-src.tar.xz/download
[40]:http://cinelerra.org/download
[41]:https://itsfoss.com/wp-content/uploads/2016/06/davinci-resolve-vdeo-editor-800x450.jpg
[42]:https://www.blackmagicdesign.com/products/davinciresolve/
[43]:https://itsfoss.com/wp-content/uploads/2016/06/vidcutter-screenshot-800x585.jpeg
[44]:https://itsfoss.com/vidcutter-video-editor-linux/
[45]:https://github.com/ozmartian/vidcutter/releases
[46]:https://www.ffmpeg.org/

View File

@ -0,0 +1,113 @@
The Lineage of Man
======
I've always found man pages fascinating. Formatted as strangely as they are and accessible primarily through the terminal, they have always felt to me like relics of an ancient past. Some man pages probably are ancient: I'd love to know how many times the man page for `cat` or say `tee` has been revised since the early days of Unix, but I'm willing to bet it's not many. Man pages are mysterious—it's not obvious where they come from, where they live on your computer, or what kind of file they might be stored in—and yet it's hard to believe that something so fundamental and so obviously governed by rigid conventions could remain so inscrutable. Where did the man page conventions come from? Where are they codified? If I wanted to write my own man page, where would I even begin?
The story of `man` is inextricably tied to the story of Unix. The very first version of Unix, completed in 1971 but only available internally at Bell Labs, did not provide a `man` command. But Douglas McIlroy, who at the time was head of the Computing Techniques Research Department and managed the Unix project, insisted that some kind of documentation be made available. He pushed Ken Thompson and Dennis Ritchie, the two programmers commonly credited with creating Unix, to write some. The result was the [first edition][1] of the Unix Programmer's Manual.
The first edition of the Unix Programmer's Manual consisted of (physical!) pages collected together in a single binder. It documented only 61 different commands, along with a couple dozen system calls and a few library routines. Though the `man` command itself was not to come until later, the first edition of the Unix Programmer's Manual established many of the conventions that man pages adhere to today, even in the absence of an official specification. The documentation for each command included the well-known NAME, SYNOPSIS, DESCRIPTION, and SEE ALSO headings. Optional flags were enclosed in square brackets and meta-arguments (for example, “file” where a file path is expected) were underlined. The manual also established the canonical manual sections such as Section 1 for General Commands, Section 2 for System Calls, and so on; these sections were, at the time, simply sections of a very long printed document. Thompson and Ritchie could not have known that they were establishing a tradition that would survive for decades and decades, but that is what they did.
McIlroy later speculated about why the man page format has survived as long as it has. In a technical report about the conceptual development of Unix, he noted that the original man pages were written in a “terse, yet informal, prose style” that together with the alphabetical ordering of information “encouraged accurate on-line documentation.” In a nod to an experience with man pages that all programmers have had at one time or another, he added that the man page format “was popular with initiates who needed to look up facts, albeit sometimes frustrating for beginners who didn't know what facts to look for.” McIlroy was highlighting the sometimes-overlooked distinction between tutorial and reference; man pages may not be much use for the former, but they are perfect for the latter.
The `man` command was a part of Unix by the time the [second edition][2] of the Unix Programmer's Manual was printed. It made the entire manual available “on-line”, meaning interactively, which was seen as enormously useful. The `man` command has its own manual page in the second edition (this page is the original `man man`), which explains that `man` can be used to “run off a particular section of this manual.” Among the original Unix wizards, the term “run off” referred to the physical act of printing a document but also to the program they used to typeset documents, `roff`. The `roff` program had been used to typeset both the first and second editions of the Unix Programmer's Manual before they were printed, but it was now also used by `man` to process man pages before they were displayed. The man pages themselves were stored on every Unix system in a file format meant to be read by `roff`.
`roff` was the first in a long lineage of typesetting programs that have always been used to format man pages. Its own development can be traced back to a program called `RUNOFF` that was written in the mid-60s. At Bell Labs, `roff` spawned several successors including `nroff` (en-roff) and `troff` (tee-roff). `nroff` was designed to improve on `roff` and better output text to terminals, while `troff` tackled the problem of printing using a CAT phototypesetter. (If you don't know what phototypesetting is, as I did not, I refer you to [this][3] eminently watchable film.) All of these programs were based on a kind of markup language consisting of two-letter commands inserted at the beginning of every line in a document. These commands could control such things as font size, text positioning, line spacing, and so on. Today, the most common implementation of the `roff` system is `groff`, a part of the GNU project.
It's easy to get a sense of what `roff` input files look like by just taking a gander at some of the man pages stored on your own computer. At least on a BSD-derived system like MacOS, you can use the `--path` argument to `man` to find out where the man page for a particular command is stored. Typically this will be under `/usr/share/man` or `/usr/local/share/man`. Using `man` this way, you can find the path for the `man` man page itself and then open it in a text editor. It will not look anything like what you're used to looking at with `man`. On my system, the first couple dozen lines are:
```
.TH man 1 "September 19, 2005"
.LO 1
.SH NAME
man \- format and display the on-line manual pages
.SH SYNOPSIS
.B man
.RB [ \-acdfFhkKtwW ]
.RB [ --path ]
.RB [ \-m
.IR system ]
.RB [ \-p
.IR string ]
.RB [ \-C
.IR config_file ]
.RB [ \-M
.IR pathlist ]
.RB [ \-P
.IR pager ]
.RB [ \-B
.IR browser ]
.RB [ \-H
.IR htmlpager ]
.RB [ \-S
.IR section_list ]
.RI [ section ]
.I "name ..."
.SH DESCRIPTION
.B man
formats and displays the on-line manual pages. If you specify
.IR section ,
.B man
only looks in that section of the manual.
.I name
is normally the name of the manual page, which is typically the name
of a command, function, or file.
However, if
.I name
contains a slash
.RB ( / )
then
.B man
interprets it as a file specification, so that you can do
.B "man ./foo.5"
or even
.B "man /cd/foo/bar.1.gz\fR.\fP"
.PP
See below for a description of where
.B man
looks for the manual page files.
```
You can make out, for example, that all of the section headings are preceded by `.SH`, and everything that would appear in bold is preceded by `.B`. These commands are `roff` macros specifically designed for writing man pages. The macros used here are part of a package called `man`, but there are other packages such as `mdoc` that you can use for the same purpose. The macros make writing man pages much simpler than it would otherwise be. They also enforce consistency by always compiling down to the same set of lower-level `roff` commands. The `man` and `mdoc` packages are now documented under [GROFF_MAN(7)][4] and [GROFF_MDOC(7)][5] respectively.
The entire `roff` system is reminiscent of LaTeX, another typesetting tool that today enjoys much more popularity. LaTeX is essentially a big bucket of macros built on top of the core TeX system designed by Donald Knuth. Like with `roff`, there are many other macro packages that you can incorporate into your LaTeX documents. These macro packages mean that you almost never have to write raw TeX yourself. LaTeX has superseded `roff` in many domains, but it is poorly suited to formatting text for a terminal, so nobody uses it to write man pages.
If you were to write a man page today, in 2017, how would you go about it? You certainly could write a man page using a `roff` macro package like `man` or `mdoc`. The syntax is unfamiliar and unwieldy. But the macros abstract away so much of the complexity that you can write a reasonably complete man page without learning very many commands. That said, there are now other options worth considering.
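For a sense of scale, here is a sketch of a complete (if minimal) page for a hypothetical `hello` command written with the `man` macros; everything in it is made up for illustration:
```
.TH HELLO 1 "September 2017" "hello 1.0" "User Commands"
.SH NAME
hello \- print a friendly greeting
.SH SYNOPSIS
.B hello
.RB [ \-n ]
.I name
.SH DESCRIPTION
.B hello
writes a greeting for
.I name
to standard output.
.SH SEE ALSO
.BR echo (1)
```
Save that as `hello.1` and you can view it with `man ./hello.1`, since, as the man page quoted above explains, a name containing a slash is interpreted as a file.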
[Pandoc][6] is a widely used software tool for converting documents from one format to another. You can use Pandoc to convert Markdown files into `man`-macro-based man pages, meaning that you can now write your man pages in something as straightforward as Markdown. Pandoc supports many more Markdown constructs than most Markdown converters, giving you lots of ways to format your man page. While this convenience comes at the cost of some control, it's unlikely that you will ever need something that would warrant dropping down to the `roff` macro level. If you're curious about what these Markdown files might look like, I've written [a few of my own][7] to document a tool I created for keeping notes on how to use different command-line utilities. NPM's [documentation][8] is also written in Markdown and converted to a `roff` man format later, though they use a JavaScript package called [marked-man][9] instead of Pandoc to do the conversion.
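For example, converting a Markdown source into a `man`-macro man page with Pandoc and previewing the result might look something like this (the file names here are hypothetical):

```
$ pandoc --standalone --to man um.1.md -o um.1
$ man ./um.1
```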
So there are now plenty of ways to write man pages, giving you lots of freedom to choose the tool you think best. That said, you'd better ensure that your man page reads like every other man page that has ever been written. Even though there is now so much flexibility in tooling, the man page conventions are as strong as ever. And you might be tempted to skip writing a man page altogether—after all, you probably have documentation on the web, or maybe you just want to rely on the `--help` flag—but you're forgoing the patina of respectability a man page can provide. The man page is an institution that doesn't seem likely to disappear or evolve soon, which is fascinating, because there are so many ways in which we could do man pages better. XML didn't come off well in my [last post][10], but it would be the perfect format here, and it would allow us to do something like query `man` about an option:
```
$ man grep -v
Selected lines are those not matching any of the specified patterns.
```
Imagine that! But it seems that we're all too used to man pages the way they are. In a field where rapid change is the norm, maybe some stability—particularly in a documentation system we all turn to in moments of ignorance and confusion—is a good thing.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][11] on Twitter or subscribe to the [RSS feed][12] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/09/28/the-lineage-of-man.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://www.bell-labs.com/usr/dmr/www/1stEdman.html
[2]: http://bitsavers.informatik.uni-stuttgart.de/pdf/att/unix/2nd_Edition/UNIX_Programmers_Manual_2ed_Jun72.pdf
[3]: https://vimeo.com/127605644
[4]: http://man7.org/linux/man-pages/man7/groff_man.7.html
[5]: http://man7.org/linux/man-pages/man7/groff_mdoc.7.html
[6]: http://pandoc.org/
[7]: https://github.com/sinclairtarget/um/tree/02365bd0c0a229efb936b3d6234294e512e8a218/doc
[8]: https://github.com/npm/npm/blob/20589f4b028d3e8a617800ac6289d27f39e548e8/doc/cli/npm.md
[9]: https://www.npmjs.com/package/marked-man
[10]: https://twobithistory.org/2017/09/21/the-rise-and-rise-of-json.html
[11]: https://twitter.com/TwoBitHistory
[12]: https://twobithistory.org/feed.xml

View File

@ -1,3 +1,4 @@
### fuzheng1998 reapplying
10 Games You Can Play on Linux with Wine
======
![](https://www.maketecheasier.com/assets/uploads/2017/09/wine-games-feat.jpg)

View File

@ -0,0 +1,70 @@
translating---geekpi
Simulating the Altair
======
The [Altair 8800][1] was a build-it-yourself home computer kit released in 1975. The Altair was basically the first personal computer, though it predated the advent of that term by several years. It is Adam (or Eve) to every Dell, HP, or Macbook out there.
Some people thought it'd be awesome to write an emulator for the Z80—a processor closely related to the Altair's Intel 8080—and then thought it needed a simulation of the Altair's control panel on top of it. So if you've ever wondered what it was like to use a computer in 1975, you can run the Altair on your Macbook:
![Altair 8800][2]
### Installing it
You can download Z80 pack from the FTP server available [here][3]. You're looking for the latest Z80 pack release, something like `z80pack-1.26.tgz`.
First unpack the file:
```
$ tar -xvf z80pack-1.26.tgz
```
Move into the unpacked directory:
```
$ cd z80pack-1.26
```
The control panel simulation is based on a library called `frontpanel`. You'll have to compile that library first. If you move into the `frontpanel` directory, you will find a `README` file listing the library's own dependencies. Your experience here will almost certainly differ from mine, but perhaps my travails will be illustrative. I had the dependencies installed, but via [Homebrew][4]. To get the library to compile I just had to make sure that `/usr/local/include` was added to Clang's include path in `Makefile.osx`.
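That amounted to a one-line tweak along the following lines (the exact variable name in your copy of `Makefile.osx` may differ):

```
# Hypothetical example: add Homebrew's header directory to the include path
CFLAGS += -I/usr/local/include
```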
If you've satisfied the dependencies, you should be able to compile the library (we're now in `z80pack-1.26/frontpanel`):
```
$ make -f Makefile.osx
...
$ make -f Makefile.osx clean
```
You should end up with `libfrontpanel.so`. I copied this to `/usr/local/lib`.
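Something like the following should do it (you may need `sudo` depending on your permissions):

```
$ cp libfrontpanel.so /usr/local/lib
```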
The Altair simulator is under `z80pack-1.26/altairsim`. You now need to compile the simulator itself. Move into `z80pack-1.26/altairsim/srcsim` and run `make` once more:
```
$ make -f Makefile.osx
...
$ make -f Makefile.osx clean
```
That process will create an executable called `altairsim` one level up in `z80pack-1.26/altairsim`. Run that executable and you should see that iconic Altair control panel!
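From the `z80pack-1.26/altairsim` directory, that's just:

```
$ ./altairsim
```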
And if you really want to nerd out, read through the original [Altair manual][5].
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][6] on Twitter or subscribe to the [RSS feed][7] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/12/02/simulating-the-altair.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Altair_8800
[2]: https://www.autometer.de/unix4fun/z80pack/altair.png
[3]: http://www.autometer.de/unix4fun/z80pack/ftp/
[4]: http://brew.sh/
[5]: http://www.classiccmp.org/dunfield/altair/d/88opman.pdf
[6]: https://twitter.com/TwoBitHistory
[7]: https://twobithistory.org/feed.xml

View File

@ -1,248 +0,0 @@
Manage Your Games Using Lutris In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-720x340.jpg)
Let us start the first day of 2018 with Games! Today, we are going to talk about **Lutris** , an open source gaming platform for Linux. You can install, remove, configure, launch and manage your games using Lutris. It helps you to manage your Linux games, Windows games, emulated console games and browser games, in a single interface. It also includes community-written installer scripts to make a game's installation process a lot easier.
Lutris comes with more than 20 emulators installed automatically (or you can install them in a single click) that provide support for most gaming systems from the late 70's to the present day. The currently supported gaming platforms are:
* Native Linux
* Windows
* Steam (Linux and Windows)
* MS-DOS
* Arcade machines
* Amiga computers
* Atari 8 and 16bit computers and consoles
* Browsers (Flash or HTML5 games)
  * Commodore 8 bit computers
* SCUMM based games and other point and click adventure games
* Magnavox Odyssey², Videopac+
* Mattel Intellivision
* NEC PC-Engine Turbographx 16, Supergraphx, PC-FX
* Nintendo NES, SNES, Game Boy, Game Boy Advance, DS
* Game Cube and Wii
  * Sega Master System, Game Gear, Genesis, Dreamcast
* SNK Neo Geo, Neo Geo Pocket
* Sony PlayStation
* Sony PlayStation 2
* Sony PSP
* Z-Machine games like Zork
* And more yet to come.
### Installing Lutris
Like Steam, Lutris consists of two parts: the website and the client application. From the website you can browse the available games, add your favorite games to your personal library, and install them using the installation link.
First, let us install Lutris client. It currently supports Arch Linux, Debian, Fedora, Gentoo, openSUSE, and Ubuntu.
For Arch Linux and its derivatives like Antergos, Manjaro Linux, it is available in [**AUR**][1]. So you can install it using any AUR helper programs.
Using [**Pacaur**][2]:
```
pacaur -S lutris
```
Using **[Packer][3]** :
```
packer -S lutris
```
Using [**Yaourt**][4]:
```
yaourt -S lutris
```
Using [**Yay**][5]:
```
yay -S lutris
```
**On Debian:**
On **Debian 9.0** run the following commands as **root** :
```
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_9.0/ /' > /etc/apt/sources.list.d/lutris.list
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_9.0/Release.key -O Release.key
apt-key add - < Release.key
apt-get update
apt-get install lutris
```
On **Debian 8.0** run the following as **root** :
```
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_8.0/ /' > /etc/apt/sources.list.d/lutris.list
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_8.0/Release.key -O Release.key
apt-key add - < Release.key
apt-get update
apt-get install lutris
```
On **Fedora 27** run the following as **root** :
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_27/home:strycore.repo
dnf install lutris
```
On **Fedora 26** run the following as **root** :
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_26/home:strycore.repo
dnf install lutris
```
On **openSUSE Tumbleweed** run the following as **root** :
```
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Tumbleweed/home:strycore.repo
zypper refresh
zypper install lutris
```
On **openSUSE Leap 42.3** run the following as **root** :
```
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Leap_42.3/home:strycore.repo
zypper refresh
zypper install lutris
```
On **Ubuntu 17.10**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.10/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.10/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
On **Ubuntu 17.04**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.04/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.04/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
On **Ubuntu 16.10**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.10/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.10/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
On **Ubuntu 16.04**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.04/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.04/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
For other platforms, refer to the [**Lutris download link**][6].
### Manage Your Games Using Lutris
Once installed, open Lutris from your Menu or Application launcher. At first launch, the default interface of Lutris will look like below.
[![][7]][8]
**Connecting to your Lutris.net account**
Next, you need to connect your Lutris.net account to your client in order to sync the games from your personal library. To do so, [**register a new account**][9] if you don't have one already. Then, click on the **"Connecting to your Lutris.net account to sync your library"** link in the Lutris client.
Enter your user credentials and click **Connect**.
[![][7]][10]
Now you're connected to your Lutris.net account.
[![][7]][11]

**Browse Games**
To search for any game, click on the Browse icon (the game controller icon) in the tool bar. It will automatically take you to the Games page of the Lutris website. You can see all available games there in alphabetical order. The Lutris website has a lot of games, and more are being added constantly.
[![][7]][12]
Choose any games of your choice and add them to your library.
[![][7]][13]
Then, go back to your Lutris client and click **Menu -> Lutris -> Synchronize library**. Now you will see all games in your library in your local Lutris client interface.
[![][7]][14]
If you don't see the games, just restart Lutris client once.
**Installing Games**
To install a game, just right click on it and click **Install** button. For example, I am going to install [**2048 game**][15] in my system. As you see in the below screenshot, it asks me to choose the version to install. Since it has only one version (i.e online), it was selected automatically. Click **Continue**.
[![][7]][16]

Click Install:
[![][7]][17]
After installation completed, you can either launch the newly installed game or simply close the window and continue installing other games in your library.
**Import Steam library**
You can also import your Steam library. To do so, go to your Lutris profile and click the **"Sign in through Steam"** button. You will then be redirected to Steam and will be asked to enter your user credentials. Once you authorize it, your Steam account will be connected with your Lutris account. Please be mindful that your Steam account should be public in order to sync the games from the library. You can switch it back to private after the sync is completed.
**Adding games manually**
Lutris has the option to add games manually. To do so, click the plus (+) sign on the toolbar.
[![][7]][18]
In the next window, enter the game's name, and choose the runner in the Game info tab. The runners are programs such as Wine, Steam for Linux, etc., that help you launch a game. You can install runners from Menu -> Manage runners.
[![][7]][19]
Then go to the next tab and choose the game's main executable or ISO. Finally, click Save. The good thing is you can add multiple versions of the same game.
**Removing games**
To remove any installed game, just right click on it in the local library of your Lutris client application. Choose "Remove" and then "Apply".
[![][7]][20]
Lutris is just like Steam. Just add the games to your library in the website and the client will install them for you!
And, that's all for today, folks! We will be posting more good and useful stuffs in this year. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/manage-games-using-lutris-linux/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://aur.archlinux.org/packages/lutris/
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
[4]:https://www.ostechnix.com/install-yaourt-arch-linux/
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]:https://lutris.net/downloads/
[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-1.png ()
[9]:https://lutris.net/user/register/
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-2.png ()
[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-3.png ()
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-15-1.png ()
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-16.png ()
[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-6.png ()
[15]:https://www.ostechnix.com/let-us-play-2048-game-terminal/
[16]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-12.png ()
[17]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-13.png ()
[18]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-18-1.png ()
[19]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-19.png ()
[20]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-14-1.png ()

View File

@ -1,135 +0,0 @@
Getting started with Python for data science
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_open_data_520x292.jpg?itok=R8rBrlk7)
Whether you're a budding data science enthusiast with a math or computer science background or an expert in an unrelated field, the possibilities data science offers are within your reach. And you don't need expensive, highly specialized enterprise software—the open source tools discussed in this article are all you need to get started.
[Python][1], its machine-learning and data science libraries ([pandas][2], [Keras][3], [TensorFlow][4], [scikit-learn][5], [SciPy][6], [NumPy][7], etc.), and its extensive list of visualization libraries ([Matplotlib][8], [pyplot][9], [Plotly][10], etc.) are excellent FOSS tools for beginners and experts alike. Easy to learn, popular enough to offer community support, and armed with the latest emerging techniques and algorithms developed for data science, these comprise one of the best toolsets you can acquire when starting out.
Many of these Python libraries are built on top of each other (known as dependencies), and the basis is the [NumPy][7] library. Designed specifically for data science, NumPy is often used to store relevant portions of datasets in its ndarray datatype, which is a convenient datatype for storing records from relational tables as `csv` files or in any other format, and vice-versa. It is particularly convenient when scikit functions are applied to multidimensional arrays. SQL is great for querying databases, but to perform complex and resource-intensive data science operations, storing data in ndarray boosts efficiency and speed (but make sure you have ample RAM when dealing with large datasets). When you get to using pandas for knowledge extraction and analysis, the almost seamless conversion between DataFrame datatype in pandas and ndarray in NumPy creates a powerful combination for extraction and compute-intensive operations, respectively.
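As a tiny illustration of that round trip (the array values here are made up for the example):

```
>>> import numpy as np
>>> import pandas as pd
>>> arr = np.array([[1, 2], [3, 4]])
>>> df = pd.DataFrame(arr, columns=['a', 'b'])  # ndarray -> DataFrame
>>> df.values                                   # DataFrame -> ndarray
array([[1, 2],
       [3, 4]])
```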
For a quick demonstration, let's fire up the Python shell and load an open dataset on crime statistics from the city of Baltimore into a pandas DataFrame variable, and view a portion of the loaded frame:
```
>>> import pandas as pd
>>> crime_stats = pd.read_csv('BPD_Arrests.csv')
>>> crime_stats.head()
```
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/crime_stats_chart.jpg?itok=_rPXJYHz)
We can now perform most of the queries on this pandas DataFrame that we can with SQL in databases. For instance, to get all the unique values of the "Description" attribute, the SQL query is:
```
$ SELECT DISTINCT "Description" FROM crime_stats;
```
This same query written for a pandas DataFrame looks like this:
```
>>> crime_stats['Description'].unique()
['COMMON ASSAULT' 'LARCENY' 'ROBBERY - STREET' 'AGG. ASSAULT'
 'LARCENY FROM AUTO' 'HOMICIDE' 'BURGLARY' 'AUTO THEFT'
 'ROBBERY - RESIDENCE' 'ROBBERY - COMMERCIAL' 'ROBBERY - CARJACKING'
 'ASSAULT BY THREAT' 'SHOOTING' 'RAPE' 'ARSON']
```
which returns a NumPy array (ndarray):
```
>>> type(crime_stats['Description'].unique())
<class 'numpy.ndarray'>
```
Next lets feed this data into a neural network to see how accurately it can predict the type of weapon used, given data such as the time the crime was committed, the type of crime, and the neighborhood in which it happened:
```
>>> from sklearn.neural_network import MLPClassifier
>>> import numpy as np
>>>
>>> prediction = crime_stats[['Weapon']]
>>> predictors = crime_stats[['CrimeTime', 'CrimeCode', 'Neighborhood']]
>>>
>>> nn_model = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
>>>
>>> predict_weapon = nn_model.fit(predictors, prediction)
Now that the learning model is ready, we can perform several tests to determine its quality and reliability. For starters, let's feed it some data from the training set (the portion of the original dataset used to train the model):
```
>>> predict_weapon.predict(training_set_weapons)
array([4, 4, 4, ..., 0, 4, 4])
```
As you can see, it returns a list, with each number predicting the weapon for each of the records in the training set. We see numbers rather than weapon names, as most classification algorithms are optimized with numerical data. For categorical data, there are techniques that can reliably convert attributes into numerical representations. In this case, the technique used is Label Encoding, using the LabelEncoder function in the sklearn preprocessing library: `preprocessing.LabelEncoder()`. It has a function to transform and inverse transform data and their numerical representations. In this example, we can use the `inverse_transform` function of LabelEncoder() to see what Weapons 0 and 4 are:
```
>>> preprocessing.LabelEncoder().inverse_transform(encoded_weapons)
array(['HANDS', 'FIREARM', 'HANDS', ..., 'FIREARM', 'FIREARM', 'FIREARM'])
```
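As a rough sketch of where a variable like `encoded_weapons` could come from (assuming the `crime_stats` frame loaded earlier), the encoder is fitted on the raw labels and can then map values in both directions:

```
>>> from sklearn import preprocessing
>>> le = preprocessing.LabelEncoder()
>>> # fit_transform learns the label <-> integer mapping and encodes in one step
>>> encoded_weapons = le.fit_transform(crime_stats['Weapon'].astype(str))
>>> # inverse_transform maps the integers back to the original weapon names
>>> decoded = le.inverse_transform(encoded_weapons)
```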
This is fun to see, but to get an idea of how accurate this model is, let's calculate several scores as percentages:
```
>>> nn_model.score(X, y)
0.81999999999999995
```
This shows that our neural network model is ~82% accurate. That result seems impressive, but it is important to check its effectiveness when used on a different crime dataset. There are other tests, like correlations, confusion matrices, etc., to do this. Although our model has high accuracy, it is not very useful for general crime datasets as this particular dataset has a disproportionate number of rows that list FIREARM as the weapon used. Unless it is re-trained, our classifier is most likely to predict FIREARM, even if the input dataset has a different distribution.
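One quick sanity check along these lines, sketched here with assumed numerically encoded features and labels (`X_encoded` and `y_encoded` are hypothetical names), is to score the model on a held-out split rather than on the data it was trained on:

```
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(
...     X_encoded, y_encoded, test_size=0.2, random_state=1)
>>> nn_model.fit(X_train, y_train)
>>> nn_model.score(X_test, y_test)
```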
It is important to clean the data and remove outliers and aberrations before we classify it. The better the preprocessing, the better the accuracy of our insights. Also, feeding the model/classifier with too much data to get higher accuracy (generally over ~90%) is a bad idea because it looks accurate but is not useful due to [overfitting][11].
[Jupyter notebooks][12] are a great interactive alternative to the command line. While the CLI is fine for most things, Jupyter shines when you want to run snippets on the go to generate visualizations. It also formats data better than the terminal.
[This article][13] has a list of some of the best free resources for machine learning, but plenty of additional guidance and tutorials are available. You will also find many open datasets available to use, based on your interests and inclinations. As a starting point, the datasets maintained by [Kaggle][14], and those available at state government websites are excellent resources.
Payal Singh will be presenting at SCaLE16x this year, March 8-11 in Pasadena, California. To attend and get 50% off your ticket, [register][15] using promo code **OSDC**.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/getting-started-data-science
作者:[Payal Singh][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/payalsingh
[1]:https://www.python.org/
[2]:https://pandas.pydata.org/
[3]:https://keras.io/
[4]:https://www.tensorflow.org/
[5]:http://scikit-learn.org/stable/
[6]:https://www.scipy.org/
[7]:http://www.numpy.org/
[8]:https://matplotlib.org/
[9]:https://matplotlib.org/api/pyplot_api.html
[10]:https://plot.ly/
[11]:https://www.kdnuggets.com/2014/06/cardinal-sin-data-mining-data-science.html
[12]:http://jupyter.org/
[13]:https://machinelearningmastery.com/best-machine-learning-resources-for-getting-started/
[14]:https://www.kaggle.com/
[15]:https://register.socallinuxexpo.org/reg6/

View File

@ -1,3 +1,5 @@
FSSlc translating
How To Find The Mounted Filesystem Type In Linux
======

View File

@ -1,166 +0,0 @@
LuuMing translating
Setting Up a Timer with systemd in Linux
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/clock-650753_1920.jpg?itok=RiRyCbAP)
Previously, we saw how to enable and disable systemd services [by hand][1], [at boot time and on power down][2], [when a certain device is activated][3], and [when something changes in the filesystem][4].
Timers add yet another way of starting services, based on... well, time. Although similar to cron jobs, systemd timers are slightly more flexible. Let's see how they work.
### "Run when"
Let's expand the [Minetest][5] [service you set up][1] in [the first two articles of this series][2] as our first example on how to use timer units. If you haven't read those articles yet, you may want to go and give them a look now.
So you will "improve" your Minetest set up by creating a timer that will run the game's server 1 minute after boot up has finished instead of right away. The reason for this could be that your service needs to do other stuff first, like send emails to the players telling them the game is available, so you want to make sure other services (like the network) are fully up and running before doing anything fancy.
Jumping in at the deep end, your _minetest.timer_ unit will look like this:
```
# minetest.timer
[Unit]
Description=Runs the minetest.service 1 minute after boot up
[Timer]
OnBootSec=1 m
Unit=minetest.service
[Install]
WantedBy=basic.target
```
Not hard at all.
As usual, you have a `[Unit]` section with a description of what the unit does. Nothing new there. The `[Timer]` section is new, but it is pretty self-explanatory: it contains information on when the service will be triggered and the service to trigger. In this case, the `OnBootSec` is the directive you need to tell systemd to run the service after boot has finished.
Other directives you could use are:
* `OnActiveSec=`, which tells systemd how long to wait after the timer itself is activated before starting the service.
* `OnStartupSec=`, on the other hand, tells systemd how long to wait after systemd was started before starting the service.
* `OnUnitActiveSec=` tells systemd how long to wait after the service the timer is activating was last activated.
* `OnUnitInactiveSec=` tells systemd how long to wait after the service the timer is activating was last deactivated.
Continuing down the _minetest.timer_ unit, the `basic.target` is usually used as a synchronization point for late boot services. This means it makes _minetest.timer_ wait until local mount points and swap devices are mounted, sockets, timers, path units and other basic initialization processes are running before letting _minetest.timer_ start. As we explained in [the second article on systemd units][2], _targets_ are like the old run levels and can be used to put your machine into one state or another, or, like here, to tell your service to wait until a certain state has been reached.
The _minetest.service_ you developed in the first two articles [ended up][2] looking like this:
```
# minetest.service
[Unit]
Description= Minetest server
Documentation= https://wiki.minetest.net/Main_Page
[Service]
Type= simple
User= <username>
ExecStart= /usr/games/minetest --server
ExecStartPost= /home/<username>/bin/mtsendmail.sh "Ready to rumble?" "Minetest Starting up"
TimeoutStopSec= 180
ExecStop= /home/<username>/bin/mtsendmail.sh "Off to bed. Nightie night!" "Minetest Stopping in 2 minutes"
ExecStop= /bin/sleep 120
ExecStop= /bin/kill -2 $MAINPID
[Install]
WantedBy= multi-user.target
```
There's nothing you need to change here. But you do have to change _mtsendmail.sh_ (your email sending script) from this:
```
#!/bin/bash
# mtsendmail
sleep 20
echo $1 | mutt -F /home/<username>/.muttrc -s "$2" my_minetest@mailing_list.com
sleep 10
```
to this:
```
#!/bin/bash
# mtsendmail.sh
echo $1 | mutt -F /home/paul/.muttrc -s "$2" pbrown@mykolab.com
```
What you are doing is stripping out those hacky pauses in the Bash script. Systemd does the waiting now.
### Making it work
To make sure things work, disable _minetest.service_ :
```
sudo systemctl disable minetest
```
so it doesn't get started when the system starts; and, instead, enable _minetest.timer_ :
```
sudo systemctl enable minetest.timer
```
Now you can reboot your server machine and, when you run `sudo journalctl -u minetest.*`, you will see how first the _minetest.timer_ unit gets executed and then the _minetest.service_ starts up after a minute... more or less.
![minetest timer][7]
Figure 1: The minetest.service gets started one minute after the minetest.timer... more or less.
[Used with permission][8]
### A Matter of Time
A couple of clarifications about why the _minetest.timer_ entry in systemd's Journal shows its start time as 09:08:33, while the _minetest.service_ starts at 09:09:18, that is, less than a minute later: First, remember we said that the `OnBootSec=` directive calculates when to start a service from when boot is complete. By the time _minetest.timer_ comes along, boot finished a few seconds earlier.
The other thing is that systemd gives itself a margin of error (by default, 1 minute) to run stuff. This helps distribute the load when several resource-intensive processes are running at the same time: by giving itself a minute, systemd can wait for some processes to power down. This also means that _minetest.service_ will start somewhere between the 1 minute and 2 minute mark after boot is completed, but when exactly within that range is anybody's guess.
For the record, [you can change the margin of error with `AccuracySec=` directive][9].
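For example, adding a single `AccuracySec=` line to the `[Timer]` section of _minetest.timer_ would shrink that window to one second:

```
# minetest.timer (excerpt)
[Timer]
OnBootSec=1 m
AccuracySec=1 s
Unit=minetest.service
```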
Another thing you can do is check when all the timers on your system are scheduled to run or the last time they ran:
```
systemctl list-timers --all
```
![check timer][11]
Figure 2: Check when your timers are scheduled to fire or when they fired last.
[Used with permission][8]
The final thing to take into consideration is the format you should use to express the periods of time. Systemd is very flexible in that respect: `2 h`, `2 hours` or `2hr` will all work to express a 2 hour delay. For seconds, you can use `seconds`, `second`, `sec`, and `s`, the same way as for minutes you can use `minutes`, `minute`, `min`, and `m`. You can see a full list of time units systemd understands by checking `man systemd.time`.
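For instance, the following directives (shown only for illustration; you would use just one) all express the same 90-second delay:

```
OnBootSec=90 s
OnBootSec=1 m 30 s
OnBootSec=1.5 minutes
```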
### Next Time
You'll see how to use calendar dates and times to run services at regular intervals and how to combine timers and device units to run services at a defined point in time after you plug in some hardware.
See you then!
Learn more about Linux through the free ["Introduction to Linux"][12] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/7/setting-timer-systemd-linux
作者:[Paul Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.linux.com/blog/learn/intro-to-linux/2018/5/writing-systemd-services-fun-and-profit
[2]:https://www.linux.com/blog/learn/2018/5/systemd-services-beyond-starting-and-stopping
[3]:https://www.linux.com/blog/intro-to-linux/2018/6/systemd-services-reacting-change
[4]:https://www.linux.com/blog/learn/intro-to-linux/2018/6/systemd-services-monitoring-files-and-directories
[5]:https://www.minetest.net/
[6]:/files/images/minetest-timer-1png
[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minetest-timer-1.png?itok=TG0xJvYM (minetest timer)
[8]:/licenses/category/used-permission
[9]:https://www.freedesktop.org/software/systemd/man/systemd.timer.html#AccuracySec=
[10]:/files/images/minetest-timer-2png
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minetest-timer-2.png?itok=pYxyVx8- (check timer)
[12]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,145 +0,0 @@
Translating by MjSeven
How To Remove Or Disable Ubuntu Dock
======
![](https://1.bp.blogspot.com/-pClnjEJfPQc/W21nHNzU2DI/AAAAAAAABV0/HGXuQOYGzokyrGYQtRFeF_hT3_3BKHupQCLcBGAs/s640/ubuntu-dock.png)
**If you want to replace the Ubuntu Dock in Ubuntu 18.04 with some other dock (like Plank dock for example) or panel, and you want to remove or disable the Ubuntu Dock, here's what you can do and how.**
Ubuntu Dock - the bar on the left-hand side of the screen which can be used to pin applications and access installed applications - can't simply be switched off from Gnome Tweaks when you use the default Ubuntu session (disabling it there does nothing, as explained below), so removing or hiding it takes one of the approaches described in this article.
### How to access the Activities Overview without Ubuntu Dock
Without Ubuntu Dock, you may have no way of accessing the Activities / installed application list (which can be accessed from Ubuntu Dock by clicking on Show Applications button at the bottom of the dock). For example if you want to use Plank dock.
Obviously, that's not the case if you install the Dash to Panel extension to use it instead of Ubuntu Dock, because Dash to Panel provides a button to access the Activities Overview / installed applications.
Depending on what you plan to use instead of Ubuntu Dock, if there's no way of accessing the Activities Overview, you can enable the Activities Overview Hot Corner option and simply move your mouse to the upper left corner of the screen to open the Activities. Another way of accessing the installed application list is using a keyboard shortcut: `Super + A` .
If you want to enable the Activities Overview hot corner, use this command:
```
gsettings set org.gnome.shell enable-hot-corners true
```
If later you want to undo this and disable the hot corners, you need to use this command:
```
gsettings set org.gnome.shell enable-hot-corners false
```
You can also enable or disable the Activities Overview Hot Corner option by using the Gnome Tweaks application (the option is in the `Top Bar` section of Gnome Tweaks), which can be installed by using this command:
```
sudo apt install gnome-tweaks
```
### How to remove or disable Ubuntu Dock
Below you'll find 4 ways of getting rid of Ubuntu Dock which work in Ubuntu 18.04.
**Option 1: Remove the Gnome Shell Ubuntu Dock package.**
The easiest way of getting rid of the Ubuntu Dock is to remove the package.
This completely removes the Ubuntu Dock extension from your system, but it also removes the `ubuntu-desktop` meta package. There's no immediate issue if you remove the `ubuntu-desktop` meta package because it does nothing by itself. The meta package depends on a large number of packages which make up the Ubuntu Desktop. Its dependencies won't be removed and nothing will break. The issue is that if you want to upgrade to a newer Ubuntu version, any new `ubuntu-desktop` dependencies won't be installed.
As a way around this, you can simply install the `ubuntu-desktop` meta package before upgrading to a newer Ubuntu version (for example if you want to upgrade from Ubuntu 18.04 to 18.10).
If you're ok with this and want to remove the Ubuntu Dock extension package from your system, use the following command:
```
sudo apt remove gnome-shell-extension-ubuntu-dock
```
If later you want to undo the changes, simply install the extension back using this command:
```
sudo apt install gnome-shell-extension-ubuntu-dock
```
Or to install the `ubuntu-desktop` meta package back (this will install any ubuntu-desktop dependencies you may have removed, including Ubuntu Dock), you can use this command:
```
sudo apt install ubuntu-desktop
```
**Option 2: Install and use the vanilla Gnome session instead of the default Ubuntu session.**
Another way to get rid of Ubuntu Dock is to install and use the vanilla Gnome session. Installing the vanilla Gnome session will also install other packages this session depends on, like Gnome Documents, Maps, Music, Contacts, Photos, Tracker and more.
By installing the vanilla Gnome session, you'll also get the default Gnome GDM login / lock screen theme instead of the Ubuntu defaults as well as Adwaita Gtk theme and icons. You can easily change the Gtk and icon theme though, by using the Gnome Tweaks application.
Furthermore, the AppIndicators extension will be disabled by default (so applications that make use of the AppIndicators tray won't show up on the top panel), but you can enable this by using Gnome Tweaks (under Extensions, enable the Ubuntu appindicators extension).
In the same way, you can also enable or disable Ubuntu Dock from the vanilla Gnome session, which is not possible if you use the Ubuntu session (disabling Ubuntu Dock from Gnome Tweaks when using the Ubuntu session does nothing).
If you don't want to install these extra packages required by the vanilla Gnome session, this option of removing Ubuntu Dock is not for you so check out the other options.
If you are ok with this though, here's what you need to do. To install the vanilla Gnome session in Ubuntu, use this command:
```
sudo apt install vanilla-gnome-desktop
```
After the installation finishes, reboot your system and on the login screen, after you click on your username, click the gear icon next to the `Sign in` button, and select `GNOME` instead of `Ubuntu` , then proceed to login:
![](https://4.bp.blogspot.com/-mc-6H2MZ0VY/W21i_PIJ3pI/AAAAAAAABVo/96UvmRM1QJsbS2so1K8teMhsu7SdYh9zwCLcBGAs/s640/vanilla-gnome-session-ubuntu-login-screen.png)
In case you want to undo this and remove the vanilla Gnome session, you can purge the vanilla Gnome package and then remove the dependencies it installed (second command) using the following commands:
```
sudo apt purge vanilla-gnome-desktop
sudo apt autoremove
```
Then reboot and select Ubuntu in the same way, from the GDM login screen.
**Option 3: Permanently hide the Ubuntu Dock from your desktop instead of removing it.**
If you prefer to permanently hide the Ubuntu Dock from showing up on your desktop instead of uninstalling it or using the vanilla Gnome session, you can easily do this using Dconf Editor. The drawback is that Ubuntu Dock will still use some system resources even though you're not using it on your desktop, but you'll also be able to easily revert this without installing or removing any packages.
Ubuntu Dock is only hidden from your desktop though. When you go in overlay mode (Activities), you'll still see and be able to use Ubuntu Dock from there.
To permanently hide Ubuntu Dock, use Dconf Editor to navigate to `/org/gnome/shell/extensions/dash-to-dock` and disable (set them to false) the following options: `autohide` , `dock-fixed` and `intellihide` .
You can achieve this from the command line if you wish, by running the commands below:
```
gsettings set org.gnome.shell.extensions.dash-to-dock autohide false
gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed false
gsettings set org.gnome.shell.extensions.dash-to-dock intellihide false
```
In case you change your mind and you want to undo this, you can either use Dconf Editor and re-enable (set them to true) autohide, dock-fixed and intellihide from `/org/gnome/shell/extensions/dash-to-dock` , or you can use these commands:
```
gsettings set org.gnome.shell.extensions.dash-to-dock autohide true
gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed true
gsettings set org.gnome.shell.extensions.dash-to-dock intellihide true
```
**Option 4: Use Dash to Panel extension.**
You can install Dash to Panel from the [Gnome Shell extensions website][3].
If you change your mind and you want Ubuntu Dock back, you can either disable Dash to Panel using the Gnome Tweaks app, or completely remove Dash to Panel by clicking the X button next to it on that same extensions page.
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/08/how-to-remove-or-disable-ubuntu-dock.html
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://bugs.launchpad.net/ubuntu/+source/gnome-tweak-tool/+bug/1713020
[2]:https://www.linuxuprising.com/2018/05/gnome-shell-dash-to-panel-v14-brings.html
[3]:https://extensions.gnome.org/extension/1160/dash-to-panel/

View File

@ -1,96 +0,0 @@
Translateing By DavidChenLiang
Top Linux developers' recommended programming books
======
Without question, Linux was created by brilliant programmers who employed good computer science knowledge. Let the Linux programmers whose names you know share the books that got them started and the technology references they recommend for today's developers. How many of them have you read?
Linux is, arguably, the operating system of the 21st century. While Linus Torvalds made a lot of good business and community decisions in building the open source community, the primary reason networking professionals and developers adopted Linux is the quality of its code and its usefulness. While Torvalds is a programming genius, he has been assisted by many other brilliant developers.
I asked Torvalds and other top Linux developers which books helped them on their road to programming excellence. This is what they told me.
### By shining C
Linux was developed in the 1990s, as were other fundamental open source applications. As a result, the tools and languages the developers used reflected the times, which meant a lot of C programming language. While [C is no longer as popular][1], for many established developers it was their first serious language, which is reflected in their choice of influential books.
“You shouldn't start programming with the languages I started with or the way I did,” says Torvalds. He started with BASIC, moved on to machine code (“not even assembly language, actual just numbers machine code,” he explains), then assembly language and C.
“None of those languages are what anybody should begin with anymore,” Torvalds says. “Some of them make no sense at all today (BASIC and machine code). And while C is still a major language, I don't think you should begin with it.”
It's not that he dislikes C. After all, Linux is written in [GNU C][2]. "I still think C is a great language with a pretty simple syntax and is very good for many things,” he says. But the effort to get started with it is much too high for it to be a good beginner language by today's standards. “I suspect you'd just get frustrated. Going from your first Hello World program to something you might actually use is just too big of a step."
From that era, the only programming book that stood out for Torvalds is Brian W. Kernighan and Dennis M. Ritchie's [C Programming Language][3], known in serious programming circles as K&R. “It was small, clear, concise,” he explains. “But you need to already have a programming background to appreciate it."
Torvalds is not the only open source developer to recommend K&R. Several others cite their well-thumbed copies as influential references, among them Wim Coekaerts, senior vice president for Linux and virtualization development at Oracle; Linux developer Alan Cox; Google Cloud CTO Brian Stevens; and Pete Graner, Canonical's vice president of technical operations.
If you want to tackle C today, Jeremy Allison, co-founder of Samba, recommends [21st Century C][4]. Then, Allison suggests, follow it up with the older but still thorough [Expert C Programming][5] as well as the 20-year-old [Programming with POSIX Threads][6].
### If not C, what?
Linux developers' recommendations for current programming books naturally are an offshoot of the tools and languages they think are most suitable for today's development projects. They also reflect the developers' personal preferences. For example, Allison thinks young developers would be well served by learning Go with the help of [The Go Programming Language][7] and Rust with [Programming Rust][8].
But it may make sense to think beyond programming languages (and thus books to teach you their techniques). To do something meaningful today, “start from some environment with a toolkit that does 99 percent of the obscure details for you, so that you can script things around it," Torvalds recommends.
"Honestly, the language itself isn't nearly as important as the infrastructure around it,” he continues. “Maybe you'd start with Java or Kotlin—not because of those languages per se, but because you want to write an app for your phone and the Android SDK ends up making those better choices. Or, maybe you're interested in games, so you start with one of the game engines, which often have some scripting language of their own."
That infrastructure includes programming books specific to the operating system itself. Graner followed K&R by reading W. Richard Stevens' [Unix Network Programming][10] books. In particular, Stevens' [TCP/IP Illustrated, Volume 1: The Protocols][11] is considered still relevant even though it's almost 30 years old. Because Linux development is largely [relevant to networking infrastructure][12], Graner also recommends the many O'Reilly books on [Sendmail][13], [Bash][14], [DNS][15], and [IMAP/POP][16].
Coekaerts is also fond of Maurice Bach's [The Design of the Unix Operating System][17]. So is James Bottomley, a Linux kernel developer who used Bach's tome to pull apart Linux when the OS was new.
### Design knowledge never goes stale
But even that may be too tech-specific. "All developers should start with design before syntax,” says Stevens. “[The Design of Everyday Things][18] is one of my favorites.”
Coekaerts likes Kernighan and Rob Pike's [The Practice of Programming][19]. The design-practice book wasn't around when Coekaerts was in school, “but I recommend it to everyone to read," he says.
Whenever you ask serious long-term developers about their favorite books, sooner or later someone's going to mention Donald Knuth's [The Art of Computer Programming][20]. Dirk Hohndel, VMware's chief open source officer, considers it timeless though, admittedly, "not necessarily super-useful today."
### Read code. Lots of code
While programming books can teach you a lot, don't miss another opportunity that is unique to the open source community: [reading the code][21]. There are untold megabytes of examples of how to solve a given programming problem—and how you can get in trouble, too. Stevens says his No. 1 "book" for honing programming skills is having access to the Unix source code.
Don't overlook the opportunity to learn in person, too. "I learned BASIC by being in a computer club with other people all learning together," says Cox. "In my opinion, that is still by far the best way to learn." He learned machine code from [Mastering Machine Code on Your ZX81][22] and the Honeywell L66 B compiler manuals, but working with other developers made a big difference.
“I still think the way to learn best remains to be with a group of people having fun and trying to solve a problem you care about together,” says Cox. “It doesn't matter if you are 5 or 55."
What struck me the most about these recommendations is how often the top Linux developers started at a low level—not just C or assembly language but machine language. Obviously, it's been very useful in helping developers understand how computing works at a very basic level.
So, ready to give hard-core Linux development a try? Greg Kroah-Hartman, the Linux stable branch kernel maintainer, recommends Steve Oualline's [Practical C Programming][23] and Samuel Harbison and Guy Steele's [C: A Reference Manual][24]. Next, read "[HOWTO do Linux kernel development][25]." Then, says Kroah-Hartman, you'll be ready to start.
In the meantime, study hard, program lots, and best of luck to you in following the footsteps of Linux's top programmers.
--------------------------------------------------------------------------------
via: https://www.hpe.com/us/en/insights/articles/top-linux-developers-recommended-programming-books-1808.html
作者:[Steven Vaughan-Nichols][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.hpe.com/us/en/insights/contributors/steven-j-vaughan-nichols.html
[1]:https://www.codingdojo.com/blog/7-most-in-demand-programming-languages-of-2018/
[2]:https://www.gnu.org/software/gnu-c-manual/
[3]:https://amzn.to/2nhyjEO
[4]:https://amzn.to/2vsL8k9
[5]:https://amzn.to/2KBbWn9
[6]:https://amzn.to/2M0rfeR
[7]:https://amzn.to/2nhyrnMe
[8]:http://shop.oreilly.com/product/0636920040385.do
[9]:https://www.hpe.com/us/en/resources/storage/containers-for-dummies.html?jumpid=in_510384402_linuxbooks_containerebook0818
[10]:https://amzn.to/2MfpbyC
[11]:https://amzn.to/2MpgrTn
[12]:https://www.hpe.com/us/en/insights/articles/how-to-see-whats-going-on-with-your-linux-system-right-now-1807.html
[13]:http://shop.oreilly.com/product/9780596510299.do
[14]:http://shop.oreilly.com/product/9780596009656.do
[15]:http://shop.oreilly.com/product/9780596100575.do
[16]:http://shop.oreilly.com/product/9780596000127.do
[17]:https://amzn.to/2vsCJgF
[18]:https://amzn.to/2APzt3Z
[19]:https://www.amazon.com/Practice-Programming-Addison-Wesley-Professional-Computing/dp/020161586X/ref=as_li_ss_tl?ie=UTF8&linkCode=sl1&tag=thegroovycorpora&linkId=e6bbdb1ca2182487069bf9089fc8107e&language=en_US
[20]:https://amzn.to/2OknFsJ
[21]:https://amzn.to/2M4VVL3
[22]:https://amzn.to/2OjccJA
[23]:http://shop.oreilly.com/product/9781565923065.do
[24]:https://amzn.to/2OjzgrT
[25]:https://www.kernel.org/doc/html/v4.16/process/howto.html

View File

@ -1,3 +1,5 @@
translating by Flowsnow
Quiet log noise with Python and machine learning
======

View File

@ -1,3 +1,5 @@
fuowang 翻译中
4 open source invoicing tools for small businesses
======
Manage your billing and get paid with easy-to-use, web-based invoicing software.

View File

@ -1,525 +0,0 @@
Translating by qhwdw
Lab 3: User Environments
======
### Lab 3: User Environments
#### Introduction
In this lab you will implement the basic kernel facilities required to get a protected user-mode environment (i.e., "process") running. You will enhance the JOS kernel to set up the data structures to keep track of user environments, create a single user environment, load a program image into it, and start it running. You will also make the JOS kernel capable of handling any system calls the user environment makes and handling any other exceptions it causes.
**Note:** In this lab, the terms _environment_ and _process_ are interchangeable - both refer to an abstraction that allows you to run a program. We introduce the term "environment" instead of the traditional term "process" in order to stress the point that JOS environments and UNIX processes provide different interfaces, and do not provide the same semantics.
##### Getting Started
Use Git to commit your changes after your Lab 2 submission (if any), fetch the latest version of the course repository, and then create a local branch called `lab3` based on our lab3 branch, `origin/lab3`:
```
athena% cd ~/6.828/lab
athena% add git
athena% git commit -am 'changes to lab2 after handin'
Created commit 734fab7: changes to lab2 after handin
4 files changed, 42 insertions(+), 9 deletions(-)
athena% git pull
Already up-to-date.
athena% git checkout -b lab3 origin/lab3
Branch lab3 set up to track remote branch refs/remotes/origin/lab3.
Switched to a new branch "lab3"
athena% git merge lab2
Merge made by recursive.
kern/pmap.c | 42 +++++++++++++++++++
1 files changed, 42 insertions(+), 0 deletions(-)
athena%
```
Lab 3 contains a number of new source files, which you should browse:
```
inc/ env.h Public definitions for user-mode environments
trap.h Public definitions for trap handling
syscall.h Public definitions for system calls from user environments to the kernel
lib.h Public definitions for the user-mode support library
kern/ env.h Kernel-private definitions for user-mode environments
env.c Kernel code implementing user-mode environments
trap.h Kernel-private trap handling definitions
trap.c Trap handling code
trapentry.S Assembly-language trap handler entry-points
syscall.h Kernel-private definitions for system call handling
syscall.c System call implementation code
lib/ Makefrag Makefile fragment to build user-mode library, obj/lib/libjos.a
entry.S Assembly-language entry-point for user environments
libmain.c User-mode library setup code called from entry.S
syscall.c User-mode system call stub functions
console.c User-mode implementations of putchar and getchar, providing console I/O
exit.c User-mode implementation of exit
panic.c User-mode implementation of panic
user/ * Various test programs to check kernel lab 3 code
```
In addition, a number of the source files we handed out for lab2 are modified in lab3. To see the differences, you can type:
```
$ git diff lab2
```
You may also want to take another look at the [lab tools guide][1], as it includes information on debugging user code that becomes relevant in this lab.
##### Lab Requirements
This lab is divided into two parts, A and B. Part A is due a week after this lab was assigned; you should commit your changes and make handin your lab before the Part A deadline, making sure your code passes all of the Part A tests (it is okay if your code does not pass the Part B tests yet). You only need to have the Part B tests passing by the Part B deadline at the end of the second week.
As in lab 2, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem (for the entire lab, not for each part). Write up brief answers to the questions posed in the lab and a one or two paragraph description of what you did to solve your chosen challenge problem in a file called `answers-lab3.txt` in the top level of your `lab` directory. (If you implement more than one challenge problem, you only need to describe one of them in the write-up.) Do not forget to include the answer file in your submission with git add answers-lab3.txt.
##### Inline Assembly
In this lab you may find GCC's inline assembly language feature useful, although it is also possible to complete the lab without using it. At the very least, you will need to be able to understand the fragments of inline assembly language ("`asm`" statements) that already exist in the source code we gave you. You can find several sources of information on GCC inline assembly language on the class [reference materials][2] page.
#### Part A: User Environments and Exception Handling
The new include file `inc/env.h` contains basic definitions for user environments in JOS. Read it now. The kernel uses the `Env` data structure to keep track of each user environment. In this lab you will initially create just one environment, but you will need to design the JOS kernel to support multiple environments; lab 4 will take advantage of this feature by allowing a user environment to `fork` other environments.
As you can see in `kern/env.c`, the kernel maintains three main global variables pertaining to environments:
```
struct Env *envs = NULL; // All environments
struct Env *curenv = NULL; // The current env
static struct Env *env_free_list; // Free environment list
```
Once JOS gets up and running, the `envs` pointer points to an array of `Env` structures representing all the environments in the system. In our design, the JOS kernel will support a maximum of `NENV` simultaneously active environments, although there will typically be far fewer running environments at any given time. (`NENV` is a constant `#define`'d in `inc/env.h`.) Once it is allocated, the `envs` array will contain a single instance of the `Env` data structure for each of the `NENV` possible environments.
The JOS kernel keeps all of the inactive `Env` structures on the `env_free_list`. This design allows easy allocation and deallocation of environments, as they merely have to be added to or removed from the free list.
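Conceptually (this is only a sketch of the idea, not the lab's actual `env_alloc()`/`env_free()` code), allocating an environment is just popping the head of the list, and freeing one is pushing it back on, via the `env_link` field described below:

```
struct Env *e = env_free_list;   // allocate: take the first free Env, if any
if (e)
        env_free_list = e->env_link;
/* ... later, when the environment is destroyed ... */
e->env_link = env_free_list;     // deallocate: push the Env back on the list
env_free_list = e;
```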
The kernel uses the `curenv` symbol to keep track of the _currently executing_ environment at any given time. During boot up, before the first environment is run, `curenv` is initially set to `NULL`.
##### Environment State
The `Env` structure is defined in `inc/env.h` as follows (although more fields will be added in future labs):
```
struct Env {
struct Trapframe env_tf; // Saved registers
struct Env *env_link; // Next free Env
envid_t env_id; // Unique environment identifier
envid_t env_parent_id; // env_id of this env's parent
enum EnvType env_type; // Indicates special system environments
unsigned env_status; // Status of the environment
uint32_t env_runs; // Number of times environment has run
// Address space
pde_t *env_pgdir; // Kernel virtual address of page dir
};
```
Here's what the `Env` fields are for:
* **env_tf** :
This structure, defined in `inc/trap.h`, holds the saved register values for the environment while that environment is _not_ running: i.e., when the kernel or a different environment is running. The kernel saves these when switching from user to kernel mode, so that the environment can later be resumed where it left off.
* **env_link** :
This is a link to the next `Env` on the `env_free_list`. `env_free_list` points to the first free environment on the list.
* **env_id** :
The kernel stores here a value that uniquely identifies the environment currently using this `Env` structure (i.e., using this particular slot in the `envs` array). After a user environment terminates, the kernel may re-allocate the same `Env` structure to a different environment - but the new environment will have a different `env_id` from the old one even though the new environment is re-using the same slot in the `envs` array.
* **env_parent_id** :
The kernel stores here the `env_id` of the environment that created this environment. In this way the environments can form a “family tree,” which will be useful for making security decisions about which environments are allowed to do what to whom.
* **env_type** :
This is used to distinguish special environments. For most environments, it will be `ENV_TYPE_USER`. We'll introduce a few more types for special system service environments in later labs.
* **env_status** :
This variable holds one of the following values:
* `ENV_FREE`:
Indicates that the `Env` structure is inactive, and therefore on the `env_free_list`.
* `ENV_RUNNABLE`:
Indicates that the `Env` structure represents an environment that is waiting to run on the processor.
* `ENV_RUNNING`:
Indicates that the `Env` structure represents the currently running environment.
* `ENV_NOT_RUNNABLE`:
Indicates that the `Env` structure represents a currently active environment, but it is not currently ready to run: for example, because it is waiting for an interprocess communication (IPC) from another environment.
* `ENV_DYING`:
Indicates that the `Env` structure represents a zombie environment. A zombie environment will be freed the next time it traps to the kernel. We will not use this flag until Lab 4.
* **env_pgdir** :
This variable holds the kernel _virtual address_ of this environment's page directory.
Like a Unix process, a JOS environment couples the concepts of "thread" and "address space". The thread is defined primarily by the saved registers (the `env_tf` field), and the address space is defined by the page directory and page tables pointed to by `env_pgdir`. To run an environment, the kernel must set up the CPU with _both_ the saved registers and the appropriate address space.
Our `struct Env` is analogous to `struct proc` in xv6. Both structures hold the environment's (i.e., process's) user-mode register state in a `Trapframe` structure. In JOS, individual environments do not have their own kernel stacks as processes do in xv6. There can be only one JOS environment active in the kernel at a time, so JOS needs only a _single_ kernel stack.
##### Allocating the Environments Array
In lab 2, you allocated memory in `mem_init()` for the `pages[]` array, which is a table the kernel uses to keep track of which pages are free and which are not. You will now need to modify `mem_init()` further to allocate a similar array of `Env` structures, called `envs`.
```
Exercise 1. Modify `mem_init()` in `kern/pmap.c` to allocate and map the `envs` array. This array consists of exactly `NENV` instances of the `Env` structure allocated much like how you allocated the `pages` array. Also like the `pages` array, the memory backing `envs` should also be mapped user read-only at `UENVS` (defined in `inc/memlayout.h`) so user processes can read from this array.
```
You should run your code and make sure `check_kern_pgdir()` succeeds.
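If it helps to see the shape of that change, here is a hedged sketch that reuses the lab 2 helpers `boot_alloc()` and `boot_map_region()` (the exact size rounding and permission bits are yours to verify):
```
// Hedged sketch of the new mem_init() lines: allocate envs[] and expose a
// user read-only alias of it at UENVS, mirroring how pages[] was handled.
envs = (struct Env *) boot_alloc(NENV * sizeof(struct Env));
memset(envs, 0, NENV * sizeof(struct Env));

// The kernel writes envs through the boot_alloc() pointer (above KERNBASE);
// the UENVS mapping only needs to grant user read permission.
boot_map_region(kern_pgdir, UENVS,
		ROUNDUP(NENV * sizeof(struct Env), PGSIZE),
		PADDR(envs), PTE_U);
```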
##### Creating and Running Environments
You will now write the code in `kern/env.c` necessary to run a user environment. Because we do not yet have a filesystem, we will set up the kernel to load a static binary image that is _embedded within the kernel itself_. JOS embeds this binary in the kernel as an ELF executable image.
The Lab 3 `GNUmakefile` generates a number of binary images in the `obj/user/` directory. If you look at `kern/Makefrag`, you will notice some magic that "links" these binaries directly into the kernel executable as if they were `.o` files. The `-b binary` option on the linker command line causes these files to be linked in as "raw" uninterpreted binary files rather than as regular `.o` files produced by the compiler. (As far as the linker is concerned, these files do not have to be ELF images at all - they could be anything, such as text files or pictures!) If you look at `obj/kern/kernel.sym` after building the kernel, you will notice that the linker has "magically" produced a number of funny symbols with obscure names like `_binary_obj_user_hello_start`, `_binary_obj_user_hello_end`, and `_binary_obj_user_hello_size`. The linker generates these symbol names by mangling the file names of the binary files; the symbols provide the regular kernel code with a way to reference the embedded binary files.
In `i386_init()` in `kern/init.c` you'll see code to run one of these binary images in an environment. However, the critical functions to set up user environments are not complete; you will need to fill them in.
```
Exercise 2. In the file `env.c`, finish coding the following functions:
* `env_init()`
Initialize all of the `Env` structures in the `envs` array and add them to the `env_free_list`. Also calls `env_init_percpu`, which configures the segmentation hardware with separate segments for privilege level 0 (kernel) and privilege level 3 (user).
* `env_setup_vm()`
Allocate a page directory for a new environment and initialize the kernel portion of the new environment's address space.
* `region_alloc()`
Allocates and maps physical memory for an environment
* `load_icode()`
You will need to parse an ELF binary image, much like the boot loader already does, and load its contents into the user address space of a new environment.
* `env_create()`
Allocate an environment with `env_alloc` and call `load_icode` to load an ELF binary into it.
* `env_run()`
Start a given environment running in user mode.
As you write these functions, you might find the new cprintf verb `%e` useful -- it prints a description corresponding to an error code. For example,
r = -E_NO_MEM;
panic("env_alloc: %e", r);
will panic with the message "env_alloc: out of memory".
```
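Of these functions, `env_run()` is the one that finally ties everything together: it is essentially a one-way context switch into user space. A hedged sketch of its core logic, with error handling omitted (`lcr3()`, `PADDR()` and `env_pop_tf()` are existing JOS helpers):
```
// Hedged sketch of env_run(): make e the current environment, switch to its
// address space, then restore its saved registers. env_pop_tf() ends with
// iret and therefore never returns.
void
env_run(struct Env *e)
{
	if (curenv && curenv->env_status == ENV_RUNNING)
		curenv->env_status = ENV_RUNNABLE;

	curenv = e;
	e->env_status = ENV_RUNNING;
	e->env_runs++;

	lcr3(PADDR(e->env_pgdir));	// load this environment's page directory
	env_pop_tf(&e->env_tf);		// restore registers and drop to user mode
}
```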
Below is a call graph of the code up to the point where the user code is invoked. Make sure you understand the purpose of each step.
* `start` (`kern/entry.S`)
* `i386_init` (`kern/init.c`)
* `cons_init`
* `mem_init`
* `env_init`
* `trap_init` (still incomplete at this point)
* `env_create`
* `env_run`
* `env_pop_tf`
Once you are done you should compile your kernel and run it under QEMU. If all goes well, your system should enter user space and execute the `hello` binary until it makes a system call with the `int` instruction. At that point there will be trouble, since JOS has not set up the hardware to allow any kind of transition from user space into the kernel. When the CPU discovers that it is not set up to handle this system call interrupt, it will generate a general protection exception, find that it can't handle that, generate a double fault exception, find that it can't handle that either, and finally give up with what's known as a "triple fault". Usually, you would then see the CPU reset and the system reboot. While this is important for legacy applications (see [this blog post][3] for an explanation of why), it's a pain for kernel development, so with the 6.828 patched QEMU you'll instead see a register dump and a "Triple fault." message.
We'll address this problem shortly, but for now we can use the debugger to check that we're entering user mode. Use make qemu-gdb and set a GDB breakpoint at `env_pop_tf`, which should be the last function you hit before actually entering user mode. Single step through this function using si; the processor should enter user mode after the `iret` instruction. You should then see the first instruction in the user environment's executable, which is the `cmpl` instruction at the label `start` in `lib/entry.S`. Now use b *0x... to set a breakpoint at the `int $0x30` in `sys_cputs()` in `hello` (see `obj/user/hello.asm` for the user-space address). This `int` is the system call to display a character to the console. If you cannot execute as far as the `int`, then something is wrong with your address space setup or program loading code; go back and fix it before continuing.
##### Handling Interrupts and Exceptions
At this point, the first `int $0x30` system call instruction in user space is a dead end: once the processor gets into user mode, there is no way to get back out. You will now need to implement basic exception and system call handling, so that it is possible for the kernel to recover control of the processor from user-mode code. The first thing you should do is thoroughly familiarize yourself with the x86 interrupt and exception mechanism.
```
Exercise 3. Read Chapter 9, Exceptions and Interrupts in the 80386 Programmer's Manual (or Chapter 5 of the IA-32 Developer's Manual), if you haven't already.
```
In this lab we generally follow Intel's terminology for interrupts, exceptions, and the like. However, terms such as exception, trap, interrupt, fault and abort have no standard meaning across architectures or operating systems, and are often used without regard to the subtle distinctions between them on a particular architecture such as the x86. When you see these terms outside of this lab, the meanings might be slightly different.
##### Basics of Protected Control Transfer
Exceptions and interrupts are both "protected control transfers," which cause the processor to switch from user to kernel mode (CPL=0) without giving the user-mode code any opportunity to interfere with the functioning of the kernel or other environments. In Intel's terminology, an _interrupt_ is a protected control transfer that is caused by an asynchronous event usually external to the processor, such as notification of external device I/O activity. An _exception_ , in contrast, is a protected control transfer caused synchronously by the currently running code, for example due to a divide by zero or an invalid memory access.
In order to ensure that these protected control transfers are actually _protected_ , the processor's interrupt/exception mechanism is designed so that the code currently running when the interrupt or exception occurs _does not get to choose arbitrarily where the kernel is entered or how_. Instead, the processor ensures that the kernel can be entered only under carefully controlled conditions. On the x86, two mechanisms work together to provide this protection:
1. **The Interrupt Descriptor Table.** The processor ensures that interrupts and exceptions can only cause the kernel to be entered at a few specific, well-defined entry-points _determined by the kernel itself_ , and not by the code running when the interrupt or exception is taken.
The x86 allows up to 256 different interrupt or exception entry points into the kernel, each with a different _interrupt vector_. A vector is a number between 0 and 255. An interrupt's vector is determined by the source of the interrupt: different devices, error conditions, and application requests to the kernel generate interrupts with different vectors. The CPU uses the vector as an index into the processor's _interrupt descriptor table_ (IDT), which the kernel sets up in kernel-private memory, much like the GDT. From the appropriate entry in this table the processor loads:
* the value to load into the instruction pointer (`EIP`) register, pointing to the kernel code designated to handle that type of exception.
* the value to load into the code segment (`CS`) register, which includes in bits 0-1 the privilege level at which the exception handler is to run. (In JOS, all exceptions are handled in kernel mode, privilege level 0.)
2. **The Task State Segment.** The processor needs a place to save the _old_ processor state before the interrupt or exception occurred, such as the original values of `EIP` and `CS` before the processor invoked the exception handler, so that the exception handler can later restore that old state and resume the interrupted code from where it left off. But this save area for the old processor state must in turn be protected from unprivileged user-mode code; otherwise buggy or malicious user code could compromise the kernel.
For this reason, when an x86 processor takes an interrupt or trap that causes a privilege level change from user to kernel mode, it also switches to a stack in the kernel's memory. A structure called the _task state segment_ (TSS) specifies the segment selector and address where this stack lives. The processor pushes (on this new stack) `SS`, `ESP`, `EFLAGS`, `CS`, `EIP`, and an optional error code. Then it loads the `CS` and `EIP` from the interrupt descriptor, and sets the `ESP` and `SS` to refer to the new stack.
Although the TSS is large and can potentially serve a variety of purposes, JOS only uses it to define the kernel stack that the processor should switch to when it transfers from user to kernel mode. Since "kernel mode" in JOS is privilege level 0 on the x86, the processor uses the `ESP0` and `SS0` fields of the TSS to define the kernel stack when entering kernel mode. JOS doesn't use any other TSS fields.
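Concretely, pointing the processor at the kernel stack amounts to two assignments when the trap machinery is initialized; a hedged sketch, using the lab's names for the TSS variable and the kernel data segment selector:
```
// Hedged sketch: tell the CPU which stack to switch to on a user-to-kernel
// transition. ts is the kernel's TSS; GD_KD selects the kernel data segment.
ts.ts_esp0 = KSTACKTOP;	// stack pointer to load when entering privilege level 0
ts.ts_ss0  = GD_KD;	// stack segment to load when entering privilege level 0
```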
##### Types of Exceptions and Interrupts
All of the synchronous exceptions that the x86 processor can generate internally use interrupt vectors between 0 and 31, and therefore map to IDT entries 0-31. For example, a page fault always causes an exception through vector 14. Interrupt vectors greater than 31 are only used by _software interrupts_ , which can be generated by the `int` instruction, or asynchronous _hardware interrupts_ , caused by external devices when they need attention.
In this section we will extend JOS to handle the internally generated x86 exceptions in vectors 0-31. In the next section we will make JOS handle software interrupt vector 48 (0x30), which JOS (fairly arbitrarily) uses as its system call interrupt vector. In Lab 4 we will extend JOS to handle externally generated hardware interrupts such as the clock interrupt.
##### An Example
Let's put these pieces together and trace through an example. Let's say the processor is executing code in a user environment and encounters a divide instruction that attempts to divide by zero.
1. The processor switches to the stack defined by the `SS0` and `ESP0` fields of the TSS, which in JOS will hold the values `GD_KD` and `KSTACKTOP`, respectively.
2. The processor pushes the exception parameters on the kernel stack, starting at address `KSTACKTOP`:
```
+--------------------+ KSTACKTOP
| 0x00000 | old SS   |     " - 4
| old ESP            |     " - 8
| old EFLAGS         |     " - 12
| 0x00000 | old CS   |     " - 16
| old EIP            |     " - 20 <---- ESP
+--------------------+
```
3. Because we're handling a divide error, which is interrupt vector 0 on the x86, the processor reads IDT entry 0 and sets `CS:EIP` to point to the handler function described by the entry.
4. The handler function takes control and handles the exception, for example by terminating the user environment.
For certain types of x86 exceptions, in addition to the "standard" five words above, the processor pushes onto the stack another word containing an _error code_. The page fault exception, number 14, is an important example. See the 80386 manual to determine for which exception numbers the processor pushes an error code, and what the error code means in that case. When the processor pushes an error code, the stack would look as follows at the beginning of the exception handler when coming in from user mode:
```
+--------------------+ KSTACKTOP
| 0x00000 | old SS   |     " - 4
| old ESP            |     " - 8
| old EFLAGS         |     " - 12
| 0x00000 | old CS   |     " - 16
| old EIP            |     " - 20
| error code         |     " - 24 <---- ESP
+--------------------+
```
##### Nested Exceptions and Interrupts
The processor can take exceptions and interrupts both from kernel and user mode. It is only when entering the kernel from user mode, however, that the x86 processor automatically switches stacks before pushing its old register state onto the stack and invoking the appropriate exception handler through the IDT. If the processor is _already_ in kernel mode when the interrupt or exception occurs (the low 2 bits of the `CS` register are already zero), then the CPU just pushes more values on the same kernel stack. In this way, the kernel can gracefully handle _nested exceptions_ caused by code within the kernel itself. This capability is an important tool in implementing protection, as we will see later in the section on system calls.
If the processor is already in kernel mode and takes a nested exception, since it does not need to switch stacks, it does not save the old `SS` or `ESP` registers. For exception types that do not push an error code, the kernel stack therefore looks like the following on entry to the exception handler:
```
+--------------------+ <---- old ESP
| old EFLAGS         |     " - 4
| 0x00000 | old CS   |     " - 8
| old EIP            |     " - 12
+--------------------+
```
For exception types that push an error code, the processor pushes the error code immediately after the old `EIP`, as before.
There is one important caveat to the processor's nested exception capability. If the processor takes an exception while already in kernel mode, and _cannot push its old state onto the kernel stack_ for any reason such as lack of stack space, then there is nothing the processor can do to recover, so it simply resets itself. Needless to say, the kernel should be designed so that this can't happen.
##### Setting Up the IDT
You should now have the basic information you need in order to set up the IDT and handle exceptions in JOS. For now, you will set up the IDT to handle interrupt vectors 0-31 (the processor exceptions). We'll handle system call interrupts later in this lab and add interrupts 32-47 (the device IRQs) in a later lab.
The header files `inc/trap.h` and `kern/trap.h` contain important definitions related to interrupts and exceptions that you will need to become familiar with. The file `kern/trap.h` contains definitions that are strictly private to the kernel, while `inc/trap.h` contains definitions that may also be useful to user-level programs and libraries.
Note: Some of the exceptions in the range 0-31 are defined by Intel to be reserved. Since they will never be generated by the processor, it doesn't really matter how you handle them. Do whatever you think is cleanest.
The overall flow of control that you should achieve is depicted below:
```
       IDT                   trapentry.S             trap.c

+----------------+
|   &handler1    |---------> handler1:               trap (struct Trapframe *tf)
|                |             // do stuff           {
|                |             call trap               // handle the exception/interrupt
|                |             // ...                }
+----------------+
|   &handler2    |---------> handler2:
|                |             // do stuff
|                |             call trap
|                |             // ...
+----------------+
        .
        .
        .
+----------------+
|   &handlerX    |---------> handlerX:
|                |             // do stuff
|                |             call trap
|                |             // ...
+----------------+
```
Each exception or interrupt should have its own handler in `trapentry.S` and `trap_init()` should initialize the IDT with the addresses of these handlers. Each of the handlers should build a `struct Trapframe` (see `inc/trap.h`) on the stack and call `trap()` (in `trap.c`) with a pointer to the Trapframe. `trap()` then handles the exception/interrupt or dispatches to a specific handler function.
```
Exercise 4. Edit `trapentry.S` and `trap.c` and implement the features described above. The macros `TRAPHANDLER` and `TRAPHANDLER_NOEC` in `trapentry.S` should help you, as well as the T_* defines in `inc/trap.h`. You will need to add an entry point in `trapentry.S` (using those macros) for each trap defined in `inc/trap.h`, and you'll have to provide `_alltraps` which the `TRAPHANDLER` macros refer to. You will also need to modify `trap_init()` to initialize the `idt` to point to each of these entry points defined in `trapentry.S`; the `SETGATE` macro will be helpful here.
Your `_alltraps` should:
1. push values to make the stack look like a struct Trapframe
2. load `GD_KD` into `%ds` and `%es`
3. `pushl %esp` to pass a pointer to the Trapframe as an argument to trap()
4. `call trap` (can `trap` ever return?)
Consider using the `pushal` instruction; it fits nicely with the layout of the `struct Trapframe`.
Test your trap handling code using some of the test programs in the `user` directory that cause exceptions before making any system calls, such as `user/divzero`. You should be able to get make grade to succeed on the `divzero`, `softint`, and `badsegment` tests at this point.
```
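On the C side, `trap_init()` mostly consists of one `SETGATE` call per entry point declared in `trapentry.S`. A hedged sketch follows; the handler symbols (`t_divide`, `t_pgflt`) are hypothetical names standing in for whatever your macros generate, and the descriptor privilege level argument depends on who should be allowed to raise each vector with `int`:
```
// Hedged sketch of trap_init(): install the trapentry.S entry points into
// the IDT. SETGATE(gate, istrap, sel, off, dpl) is defined in inc/mmu.h.
void t_divide(void);	// hypothetical entry-point names declared in trapentry.S
void t_pgflt(void);

void
trap_init(void)
{
	extern struct Segdesc gdt[];

	SETGATE(idt[T_DIVIDE], 0, GD_KT, t_divide, 0);	// divide error, vector 0
	SETGATE(idt[T_PGFLT],  0, GD_KT, t_pgflt,  0);	// page fault, vector 14
	// ... one SETGATE per vector you handle ...

	// Per-CPU IDT/TSS initialization, provided by the lab.
	trap_init_percpu();
}
```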
```
Challenge! You probably have a lot of very similar code right now, between the lists of `TRAPHANDLER` in `trapentry.S` and their installations in `trap.c`. Clean this up. Change the macros in `trapentry.S` to automatically generate a table for `trap.c` to use. Note that you can switch between laying down code and data in the assembler by using the directives `.text` and `.data`.
```
```
Questions
Answer the following questions in your `answers-lab3.txt`:
1. What is the purpose of having an individual handler function for each exception/interrupt? (i.e., if all exceptions/interrupts were delivered to the same handler, what feature that exists in the current implementation could not be provided?)
2. Did you have to do anything to make the `user/softint` program behave correctly? The grade script expects it to produce a general protection fault (trap 13), but `softint`'s code says `int $14`. _Why_ should this produce interrupt vector 13? What happens if the kernel actually allows `softint`'s `int $14` instruction to invoke the kernel's page fault handler (which is interrupt vector 14)?
```
This concludes part A of the lab. Don't forget to add `answers-lab3.txt`, commit your changes, and run make handin before the part A deadline.
#### Part B: Page Faults, Breakpoints Exceptions, and System Calls
Now that your kernel has basic exception handling capabilities, you will refine it to provide important operating system primitives that depend on exception handling.
##### Handling Page Faults
The page fault exception, interrupt vector 14 (`T_PGFLT`), is a particularly important one that we will exercise heavily throughout this lab and the next. When the processor takes a page fault, it stores the linear (i.e., virtual) address that caused the fault in a special processor control register, `CR2`. In `trap.c` we have provided the beginnings of a special function, `page_fault_handler()`, to handle page fault exceptions.
```
Exercise 5. Modify `trap_dispatch()` to dispatch page fault exceptions to `page_fault_handler()`. You should now be able to get make grade to succeed on the `faultread`, `faultreadkernel`, `faultwrite`, and `faultwritekernel` tests. If any of them don't work, figure out why and fix them. Remember that you can boot JOS into a particular user program using make run-_x_ or make run-_x_-nox. For instance, make run-hello-nox runs the _hello_ user program.
```
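The dispatch itself can be as small as a test on the saved trap number; a hedged sketch of the new lines in `trap_dispatch()` (the default case that destroys the faulting environment is already in the handed-out code):
```
// Hedged sketch: route page faults to the dedicated handler before
// falling through to trap_dispatch()'s default case.
if (tf->tf_trapno == T_PGFLT) {
	page_fault_handler(tf);
	return;
}
```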
You will further refine the kernel's page fault handling below, as you implement system calls.
##### The Breakpoint Exception
The breakpoint exception, interrupt vector 3 (`T_BRKPT`), is normally used to allow debuggers to insert breakpoints in a program's code by temporarily replacing the relevant program instruction with the special 1-byte `int3` software interrupt instruction. In JOS we will abuse this exception slightly by turning it into a primitive pseudo-system call that any user environment can use to invoke the JOS kernel monitor. This usage is actually somewhat appropriate if we think of the JOS kernel monitor as a primitive debugger. The user-mode implementation of `panic()` in `lib/panic.c`, for example, performs an `int3` after displaying its panic message.
```
Exercise 6. Modify `trap_dispatch()` to make breakpoint exceptions invoke the kernel monitor. You should now be able to get make grade to succeed on the `breakpoint` test.
```
```
Challenge! Modify the JOS kernel monitor so that you can 'continue' execution from the current location (e.g., after the `int3`, if the kernel monitor was invoked via the breakpoint exception), and so that you can single-step one instruction at a time. You will need to understand certain bits of the `EFLAGS` register in order to implement single-stepping.
Optional: If you're feeling really adventurous, find some x86 disassembler source code - e.g., by ripping it out of QEMU, or out of GNU binutils, or just write it yourself - and extend the JOS kernel monitor to be able to disassemble and display instructions as you are stepping through them. Combined with the symbol table loading from lab 1, this is the stuff of which real kernel debuggers are made.
```
```
Questions
3. The break point test case will either generate a break point exception or a general protection fault depending on how you initialized the break point entry in the IDT (i.e., your call to `SETGATE` from `trap_init`). Why? How do you need to set it up in order to get the breakpoint exception to work as specified above and what incorrect setup would cause it to trigger a general protection fault?
4. What do you think is the point of these mechanisms, particularly in light of what the `user/softint` test program does?
```
##### System calls
User processes ask the kernel to do things for them by invoking system calls. When the user process invokes a system call, the processor enters kernel mode, the processor and the kernel cooperate to save the user process's state, the kernel executes appropriate code in order to carry out the system call, and then resumes the user process. The exact details of how the user process gets the kernel's attention and how it specifies which call it wants to execute vary from system to system.
In the JOS kernel, we will use the `int` instruction, which causes a processor interrupt. In particular, we will use `int $0x30` as the system call interrupt. We have defined the constant `T_SYSCALL` to 48 (0x30) for you. You will have to set up the interrupt descriptor to allow user processes to cause that interrupt. Note that interrupt 0x30 cannot be generated by hardware, so there is no ambiguity caused by allowing user code to generate it.
The application will pass the system call number and the system call arguments in registers. This way, the kernel won't need to grub around in the user environment's stack or instruction stream. The system call number will go in `%eax`, and the arguments (up to five of them) will go in `%edx`, `%ecx`, `%ebx`, `%edi`, and `%esi`, respectively. The kernel passes the return value back in `%eax`. The assembly code to invoke a system call has been written for you, in `syscall()` in `lib/syscall.c`. You should read through it and make sure you understand what is going on.
```
Exercise 7. Add a handler in the kernel for interrupt vector `T_SYSCALL`. You will have to edit `kern/trapentry.S` and `kern/trap.c`'s `trap_init()`. You also need to change `trap_dispatch()` to handle the system call interrupt by calling `syscall()` (defined in `kern/syscall.c`) with the appropriate arguments, and then arranging for the return value to be passed back to the user process in `%eax`. Finally, you need to implement `syscall()` in `kern/syscall.c`. Make sure `syscall()` returns `-E_INVAL` if the system call number is invalid. You should read and understand `lib/syscall.c` (especially the inline assembly routine) in order to confirm your understanding of the system call interface. Handle all the system calls listed in `inc/syscall.h` by invoking the corresponding kernel function for each call.
Run the `user/hello` program under your kernel (make run-hello). It should print "`hello, world`" on the console and then cause a page fault in user mode. If this does not happen, it probably means your system call handler isn't quite right. You should also now be able to get make grade to succeed on the `testbss` test.
```
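The kernel-side `syscall()` is essentially a switch on the call number that `trap_dispatch()` pulled out of the saved `%eax`. A hedged sketch (the individual `sys_*` helpers are the ones already stubbed out in `kern/syscall.c`; the argument counts per call are yours to check against `inc/syscall.h`):
```
// Hedged sketch of kern/syscall.c's dispatcher. trap_dispatch() passes the
// saved register values and writes the return value back into the
// trapframe's eax field.
int32_t
syscall(uint32_t syscallno, uint32_t a1, uint32_t a2, uint32_t a3,
	uint32_t a4, uint32_t a5)
{
	switch (syscallno) {
	case SYS_cputs:
		sys_cputs((const char *) a1, a2);
		return 0;
	case SYS_cgetc:
		return sys_cgetc();
	case SYS_getenvid:
		return sys_getenvid();
	case SYS_env_destroy:
		return sys_env_destroy(a1);
	default:
		return -E_INVAL;	// unknown system call number
	}
}
```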
```
Challenge! Implement system calls using the `sysenter` and `sysexit` instructions instead of using `int 0x30` and `iret`.
The `sysenter/sysexit` instructions were designed by Intel to be faster than `int/iret`. They do this by using registers instead of the stack and by making assumptions about how the segmentation registers are used. The exact details of these instructions can be found in Volume 2B of the Intel reference manuals.
The easiest way to add support for these instructions in JOS is to add a `sysenter_handler` in `kern/trapentry.S` that saves enough information about the user environment to return to it, sets up the kernel environment, pushes the arguments to `syscall()` and calls `syscall()` directly. Once `syscall()` returns, set everything up for and execute the `sysexit` instruction. You will also need to add code to `kern/init.c` to set up the necessary model specific registers (MSRs). Section 6.1.2 in Volume 2 of the AMD Architecture Programmer's Manual and the reference on SYSENTER in Volume 2B of the Intel reference manuals give good descriptions of the relevant MSRs. You can find an implementation of `wrmsr` to add to `inc/x86.h` for writing to these MSRs [here][4].
Finally, `lib/syscall.c` must be changed to support making a system call with `sysenter`. Here is a possible register layout for the `sysenter` instruction:
eax - syscall number
edx, ecx, ebx, edi - arg1, arg2, arg3, arg4
esi - return pc
ebp - return esp
esp - trashed by sysenter
GCC's inline assembler will automatically save registers that you tell it to load values directly into. Don't forget to either save (push) and restore (pop) other registers that you clobber, or tell the inline assembler that you're clobbering them. The inline assembler doesn't support saving `%ebp`, so you will need to add code to save and restore it yourself. The return address can be put into `%esi` by using an instruction like `leal after_sysenter_label, %%esi`.
Note that this only supports 4 arguments, so you will need to leave the old method of doing system calls around to support 5 argument system calls. Furthermore, because this fast path doesn't update the current environment's trap frame, it won't be suitable for some of the system calls we add in later labs.
You may have to revisit your code once we enable asynchronous interrupts in the next lab. Specifically, you'll need to enable interrupts when returning to the user process, which `sysexit` doesn't do for you.
```
##### User-mode startup
A user program starts running at the top of `lib/entry.S`. After some setup, this code calls `libmain()`, in `lib/libmain.c`. You should modify `libmain()` to initialize the global pointer `thisenv` to point at this environment's `struct Env` in the `envs[]` array. (Note that `lib/entry.S` has already defined `envs` to point at the `UENVS` mapping you set up in Part A.) Hint: look in `inc/env.h` and use `sys_getenvid`.
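Following the hint, the change in `libmain()` boils down to a single lookup; a hedged sketch (`ENVX()` extracts the `envs[]` index from an environment id):
```
// Hedged sketch for lib/libmain.c: find our own Env in the read-only envs[]
// array using the id returned by the sys_getenvid() system call.
envid_t id = sys_getenvid();
thisenv = &envs[ENVX(id)];
```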
`libmain()` then calls `umain`, which, in the case of the hello program, is in `user/hello.c`. Note that after printing "`hello, world`", it tries to access `thisenv->env_id`. This is why it faulted earlier. Now that you've initialized `thisenv` properly, it should not fault. If it still faults, you probably haven't mapped the `UENVS` area user-readable (back in Part A in `pmap.c`; this is the first time we've actually used the `UENVS` area).
```
Exercise 8. Add the required code to the user library, then boot your kernel. You should see `user/hello` print "`hello, world`" and then print "`i am environment 00001000`". `user/hello` then attempts to "exit" by calling `sys_env_destroy()` (see `lib/libmain.c` and `lib/exit.c`). Since the kernel currently only supports one user environment, it should report that it has destroyed the only environment and then drop into the kernel monitor. You should be able to get make grade to succeed on the `hello` test.
```
##### Page faults and memory protection
Memory protection is a crucial feature of an operating system, ensuring that bugs in one program cannot corrupt other programs or corrupt the operating system itself.
Operating systems usually rely on hardware support to implement memory protection. The OS keeps the hardware informed about which virtual addresses are valid and which are not. When a program tries to access an invalid address or one for which it has no permissions, the processor stops the program at the instruction causing the fault and then traps into the kernel with information about the attempted operation. If the fault is fixable, the kernel can fix it and let the program continue running. If the fault is not fixable, then the program cannot continue, since it will never get past the instruction causing the fault.
As an example of a fixable fault, consider an automatically extended stack. In many systems the kernel initially allocates a single stack page, and then if a program faults accessing pages further down the stack, the kernel will allocate those pages automatically and let the program continue. By doing this, the kernel only allocates as much stack memory as the program needs, but the program can work under the illusion that it has an arbitrarily large stack.
System calls present an interesting problem for memory protection. Most system call interfaces let user programs pass pointers to the kernel. These pointers point at user buffers to be read or written. The kernel then dereferences these pointers while carrying out the system call. There are two problems with this:
1. A page fault in the kernel is potentially a lot more serious than a page fault in a user program. If the kernel page-faults while manipulating its own data structures, that's a kernel bug, and the fault handler should panic the kernel (and hence the whole system). But when the kernel is dereferencing pointers given to it by the user program, it needs a way to remember that any page faults these dereferences cause are actually on behalf of the user program.
2. The kernel typically has more memory permissions than the user program. The user program might pass a pointer to a system call that points to memory that the kernel can read or write but that the program cannot. The kernel must be careful not to be tricked into dereferencing such a pointer, since that might reveal private information or destroy the integrity of the kernel.
For both of these reasons the kernel must be extremely careful when handling pointers presented by user programs.
You will now solve these two problems with a single mechanism that scrutinizes all pointers passed from userspace into the kernel. When a program passes the kernel a pointer, the kernel will check that the address is in the user part of the address space, and that the page table would allow the memory operation.
Thus, the kernel will never suffer a page fault due to dereferencing a user-supplied pointer. If the kernel does page fault, it should panic and terminate.
```
Exercise 9. Change `kern/trap.c` to panic if a page fault happens in kernel mode.
Hint: to determine whether a fault happened in user mode or in kernel mode, check the low bits of the `tf_cs`.
Read `user_mem_assert` in `kern/pmap.c` and implement `user_mem_check` in that same file.
Change `kern/syscall.c` to sanity check arguments to system calls.
Boot your kernel, running `user/buggyhello`. The environment should be destroyed, and the kernel should _not_ panic. You should see:
[00001000] user_mem_check assertion failure for va 00000001
[00001000] free env 00001000
Destroyed the only environment - nothing more to do!
Finally, change `debuginfo_eip` in `kern/kdebug.c` to call `user_mem_check` on `usd`, `stabs`, and `stabstr`. If you now run `user/breakpoint`, you should be able to run backtrace from the kernel monitor and see the backtrace traverse into `lib/libmain.c` before the kernel panics with a page fault. What causes this page fault? You don't need to fix it, but you should understand why it happens.
```
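One way to structure `user_mem_check()` is a page-by-page walk over the requested range; a hedged sketch (`pgdir_walk()` is the lab 2 helper, and `user_mem_check_addr` is the global that `user_mem_assert()` reports):
```
// Hedged sketch of user_mem_check(): every page in [va, va+len) must lie
// below ULIM and be mapped with at least (perm | PTE_P) in env's page table.
int
user_mem_check(struct Env *env, const void *va, size_t len, int perm)
{
	uintptr_t a   = ROUNDDOWN((uintptr_t) va, PGSIZE);
	uintptr_t end = ROUNDUP((uintptr_t) va + len, PGSIZE);

	for (; a < end; a += PGSIZE) {
		pte_t *pte = pgdir_walk(env->env_pgdir, (void *) a, 0);
		if (a >= ULIM || pte == NULL ||
		    (*pte & (perm | PTE_P)) != (perm | PTE_P)) {
			// Report the first offending address, but never one
			// below the caller's original pointer.
			user_mem_check_addr = (a < (uintptr_t) va) ? (uintptr_t) va : a;
			return -E_FAULT;
		}
	}
	return 0;
}
```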
Note that the same mechanism you just implemented also works for malicious user applications (such as `user/evilhello`).
```
Exercise 10. Boot your kernel, running `user/evilhello`. The environment should be destroyed, and the kernel should not panic. You should see:
[00000000] new env 00001000
...
[00001000] user_mem_check assertion failure for va f010000c
[00001000] free env 00001000
```
**This completes the lab.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab3.txt`. Commit your changes and type make handin in the `lab` directory to submit your work.
Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab3.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 3', then make handin and follow the directions.
--------------------------------------------------------------------------------
via: https://pdos.csail.mit.edu/6.828/2018/labs/lab3/
作者:[csail.mit][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: https://pdos.csail.mit.edu/6.828/2018/labs/labguide.html
[2]: https://pdos.csail.mit.edu/6.828/2018/labs/reference.html
[3]: http://blogs.msdn.com/larryosterman/archive/2005/02/08/369243.aspx
[4]: http://ftp.kh.edu.tw/Linux/SuSE/people/garloff/linux/k6mod.c

View File

@ -1,3 +1,5 @@
HackChow translating
5 alerting and visualization tools for sysadmins
======
These open source tools help users understand system behavior and output, and provide alerts for potential problems.

View File

@ -1,3 +1,4 @@
translating by leemeans
Exploring the Linux kernel: The secrets of Kconfig/kbuild
======
Dive into understanding how the Linux config/build system works.

View File

@ -1,131 +0,0 @@
translating---geekpi
How To Lock Virtual Console Sessions On Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-720x340.png)
When you’re working on a shared system, you might not want other users to sneak a peek at your console to see what you’re actually doing. If so, here is a simple trick to lock your own session while still allowing other users to use the system on other virtual consoles. It relies on **Vlock** (short for **V**irtual Console **lock**), a command line program to lock one or more sessions on the Linux console. If necessary, you can lock the entire console and disable the virtual console switching functionality altogether. Vlock is especially useful for shared Linux systems that have multiple users with access to the console.
### Installing Vlock
On Arch-based systems, Vlock is provided by the **kbd** package, which is preinstalled by default, so you need not bother with installation.
On Debian, Ubuntu, Linux Mint, run the following command to install Vlock:
```
$ sudo apt-get install vlock
```
On Fedora:
```
$ sudo dnf install vlock
```
On RHEL, CentOS:
```
$ sudo yum install vlock
```
### Lock Virtual Console Sessions On Linux
The general syntax for Vlock is:
```
vlock [ -acnshv ] [ -t <timeout> ] [ plugins... ]
```
Where,
* **-a** – Lock all virtual console sessions,
* **-c** – Lock the current virtual console session,
* **-n** – Switch to a new empty console before locking all sessions,
* **-s** – Disable the SysRq key mechanism,
* **-t** – Specify the timeout for the screensaver plugins,
* **-h** – Display the help section,
* **-v** – Display the version.
Let me show you some examples.
**1\. Lock current console session**
When you run Vlock without any arguments, it locks the current console session (TTY) by default. To unlock the session, you need to enter either the current user’s password or the root password.
```
$ vlock
```
![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-1-1.gif)
You can also use **-c** flag to lock the current console session.
```
$ vlock -c
```
Please note that this command will only lock the current console. You can still switch to other consoles by pressing **ALT+F2**. For more details about switching between TTYs, refer to the following guide.
Also, if the system has multiple users, the other users can still access their respective TTYs.
**2\. Lock all console sessions**
To lock all TTYs at the same time and also disable the virtual console switching functionality, run:
```
$ vlock -a
```
Again, to unlock the console sessions, just press the ENTER key and type your current user’s password or the root user’s password.
Please keep in mind that the **root user can always unlock any vlock session** at any time, unless disabled at compile time.
**3\. Switch to a new virtual console before locking all consoles**
It is also possible to make Vlock switch to a new, empty virtual console from the X session before locking all consoles. To do so, use the **-n** flag.
```
$ vlock -n
```
**4\. Disable the SysRq mechanism**
As you may know, the Magic SysRq key mechanism allows users to perform certain operations when the system freezes. So users could unlock the consoles using SysRq. In order to prevent this, pass the **-s** option to disable the SysRq mechanism. Please remember, this only works if the **-a** option is given.
```
$ vlock -sa
```
For more options and usage details, refer to the help section or the man page.
```
$ vlock -h
$ man vlock
```
Vlock prevents unauthorized users from gaining console access. If you’re looking for a simple console locking mechanism for your Linux machine, Vlock is worth checking out!
And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-lock-virtual-console-sessions-on-linux/
作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972

View File

@ -1,3 +1,6 @@
Translating by MjSeven
Kali Linux: What You Must Know Before Using it FOSS Post
======
![](https://i1.wp.com/fosspost.org/wp-content/uploads/2018/10/kali-linux.png?fit=1237%2C527&ssl=1)

View File

@ -1,80 +0,0 @@
Browsing the web with Min, a minimalist open source web browser
======
Not every web browser needs to carry every single feature. Min puts a minimalist spin on the everyday web browser.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG)
Does the world need another web browser? Even though the days of having a multiplicity of browsers to choose from are long gone, there still are folks out there developing new applications that help us use the web.
One of those new-fangled browsers is [Min][1]. As its name suggests (well, suggests to me, anyway), Min is a minimalist browser. That doesn't mean it's deficient in any significant way, and its open source, Apache 2.0 license piques my interest.
But is Min worth a look? Let's find out.
### Getting going
Min is one of many applications written using a development framework called [Electron][2]. (It's the same framework that brought us the [Atom text editor][3].) You can [get installers][4] for Linux, MacOS, and Windows. You can also grab the [source code from GitHub][5] and compile it if you're inclined.
I run Manjaro Linux, and there isn't an installer for that distro. Luckily, I was able to install Min from Manjaro's package manager.
Once that was done, I fired up Min by pressing Alt+F2, typing **min** in the run-application box, and pressing Enter, and I was ready to go.
![](https://opensource.com/sites/default/files/uploads/min-main.png)
Min is billed as a smarter, faster web browser. It definitely is fast—at the risk of drawing the ire of denizens of certain places on the web, I'll say that it starts faster than Firefox and Chrome on the laptops with which I tried it.
Browsing with Min is like browsing with Firefox or Chrome. Type a URL in the address bar, press Enter, and away you go.
### Min's features
While Min doesn't pack everything you'd find in browsers like Firefox or Chrome, it doesn't do too badly.
Like any other browser these days, Min supports multiple tabs. It also has a feature called Tasks, which lets you group your open tabs.
Min's default search engine is [DuckDuckGo][6]. I really like that touch because DuckDuckGo is one of my search engines of choice. If DuckDuckGo isn't your thing, you can set another search engine as the default in Min's preferences.
Instead of using tools like AdBlock to filter out content you don't want, Min has a built-in ad blocker. It uses the [EasyList filters][7], which were created for AdBlock. You can block scripts and images, and Min also has a built-in tracking blocker.
Like Firefox, Min has a reading mode called Reading List. Flipping the Reading List switch (well, clicking the icon in the address bar) removes most of the cruft from a page so you can focus on the words you're reading. Pages stay in the Reading List for 30 days.
![](https://opensource.com/sites/default/files/uploads/min-reading-list.png)
Speaking of focus, Min also has a Focus Mode that hides your other tabs and prevents you from opening new ones. So, if you're working in a web application, you'll need to click a few times if you feel like procrastinating.
Of course, Min has a number of keyboard shortcuts that can make using it a lot faster. You can find a reference for those shortcuts [on GitHub][8]. You can also change a number of them in Min's preferences.
I was pleasantly surprised to find Min can play videos on YouTube, Vimeo, Dailymotion, and similar sites. I also played sample tracks at music retailer 7Digital. I didn't try playing music on popular sites like Spotify or Last.fm (because I don't have accounts with them).
![](https://opensource.com/sites/default/files/uploads/min-video.png)
### What's not there
The features that Min doesn't pack are as noticeable as the ones it does. There doesn't seem to be a way to bookmark sites. You either have to rely on Min's search history to find your favorite links, or you'll have to rely on a bookmarking service.
On top of that, Min doesn't support plugins. That's not a deal breaker for me—not having plugins is undoubtedly one of the reasons the browser starts and runs so quickly. I know a number of people who are … well, I wouldn't go so far to say junkies, but they really like their plugins. Min wouldn't cut it for them.
### Final thoughts
Min isn't a bad browser. It's light and fast enough to appeal to the minimalists out there. That said, it lacks features that hardcore web browser users clamor for.
If you want a zippy browser that isn't weighed down by all the features of so-called modern web browsers, I suggest giving Min a serious look.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/min-web-browser
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://minbrowser.github.io/min/
[2]: http://electron.atom.io/apps/
[3]: https://opensource.com/article/17/5/atom-text-editor-packages-writers
[4]: https://github.com/minbrowser/min/releases/
[5]: https://github.com/minbrowser/min
[6]: http://duckduckgo.com
[7]: https://easylist.to/
[8]: https://github.com/minbrowser/min/wiki

View File

@ -1,3 +1,4 @@
(translating by runningwater)
How To Determine Which System Manager Is Running On Linux System
======
We have all heard this term many times, but only a few of us know what it actually is. We will show you how to identify the system manager.
@ -164,7 +165,7 @@ via: https://www.2daygeek.com/how-to-determine-which-init-system-manager-is-runn
作者:[Prakash Subramanian][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,133 +0,0 @@
Understanding Linux Links: Part 1
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-link-498708.jpg?itok=DyVEcEsc)
Along with `cp` and `mv`, both of which we talked about at length in [the previous installment of this series][1], links are another way of putting files and directories where you want them to be. The advantage is that links let you have one file or directory show up in several places at the same time.
As noted previously, at the physical disk level, things like files and directories don't really exist. A filesystem conjures them up for our human convenience. But at the disk level, there is something called a _partition table_ , which lives at the beginning of every partition, and then the data scattered over the rest of the disk.
Although there are different types of partition tables, the ones at the beginning of a partition containing your data will map where each directory and file starts and ends. The partition table acts like an index: When you load a file from your disk, your operating system looks up the entry on the table and the table says where the file starts on the disk and where it finishes. The disk head moves to the start point, reads the data until it reaches the end point and, hey presto: here's your file.
### Hard Links
A hard link is simply an entry in the partition table that points to an area on a disk that **has already been assigned to a file**. In other words, a hard link points to data that has already been indexed by another entry. Let's see how this works.
Open a terminal, create a directory for tests and move into it:
```
mkdir test_dir
cd test_dir
```
Create a file by [touching][1] it:
```
touch test.txt
```
For extra excitement (?), open _test.txt_ in a text editor and add a few words to it.
Now make a hard link by executing:
```
ln test.txt hardlink_test.txt
```
Run `ls`, and you'll see your directory now contains two files... Or so it would seem. As you read before, really what you are seeing is two names for the exact same file: _hardlink_test.txt_ contains the same content, has not filled any more space in the disk (try with a large file to test this), and shares the same inode as _test.txt_ :
```
$ ls -li *test*
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt
```
_ls_ 's `-i` option shows the _inode number_ of a file. The _inode_ is the chunk of information in the partition table that contains the location of the file or directory on the disk, the last time it was modified, and other data. If two files share the same inode, they are, to all practical effects, the same file, regardless of where they are located in the directory tree.
### Fluffy Links
Soft links, also known as _symlinks_ , are different: a soft link is really an independent file, it has its own inode and its own little slot on the disk. But it only contains a snippet of data that points the operating system to another file or directory.
You can create a soft link using `ln` with the `-s` option:
```
ln -s test.txt softlink_test.txt
```
This will create the soft link _softlink_test.txt_ to _test.txt_ in the current directory.
By running `ls -li` again, you can see the difference between the two different kinds of links:
```
$ ls -li
total 8
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt
16515855 lrwxrwxrwx 1 paul paul 8 oct 12 09:50 softlink_test.txt -> test.txt
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt
```
_hardlink_test.txt_ and _test.txt_ contain some text and take up the same space *literally*. They also share the same inode number. Meanwhile, _softlink_test.txt_ occupies much less and has a different inode number, marking it as a different file altogether. Using the _ls_ 's `-l` option also shows the file or directory your soft link points to.
### Why Use Links?
They are good for **applications that come with their own environment**. It often happens that your Linux distro does not come with the latest version of an application you need. Take the case of the fabulous [Blender 3D][2] design software. Blender allows you to create 3D still images as well as animated films, and who wouldn't want to have that on their machine? The problem is that the current version of Blender is always at least one version ahead of that found in any distribution.
Fortunately, [Blender provides downloads][3] that run out of the box. These packages contain, apart from the program itself, a complex framework of libraries and dependencies that Blender needs to work. All these bits and pieces come within their own hierarchy of directories.
Every time you want to run Blender, you could `cd` into the folder you downloaded it to and run:
```
./blender
```
But that is inconvenient. It would be better if you could run the `blender` command from anywhere in your file system, as well as from your desktop command launchers.
The way to do that is to link the _blender_ executable into a _bin/_ directory. On many systems, you can make the `blender` command available from anywhere in the file system by linking to it like this:
```
ln -s /path/to/blender_directory/blender /home/<username>/bin
```
Another case in which you will need links is for **software that needs outdated libraries**. If you list your _/usr/lib_ directory with `ls -l,` you will see a lot of soft-linked files fly by. Take a closer look, and you will see that the links usually have similar names to the original files they are linking to. You may see _libblah_ linking to _libblah.so.2_ , and then, you may even notice that _libblah.so.2_ links in turn to _libblah.so.2.1.0_ , the original file.
This is because applications often require older versions of a library than what is installed. The problem is that, even if the more modern versions are still compatible with the older versions (and usually they are), the program will bork if it doesn't find the version it is looking for. To solve this problem, distributions often create links so that the picky application believes it has found the older version, when, in reality, it has only found a link and ends up using the more up to date version of the library.
Somewhat related is what happens with **programs you compile yourself from the source code**. Programs you compile yourself often end up installed under _/usr/local_ : the program itself ends up in _/usr/local/bin_ and it looks for the libraries it needs in the _/usr/local/lib_ directory. But say that your new program needs _libblah_ , but _libblah_ lives in _/usr/lib_ and that's where all your other programs look for it. You can link it to _/usr/local/lib_ by doing:
```
ln -s /usr/lib/libblah /usr/local/lib
```
Or, if you prefer, by `cd`ing into _/usr/local/lib_...
```
cd /usr/local/lib
```
... and then linking with:
```
ln -s ../lib/libblah
```
There are dozens more cases in which linking proves useful, and you will undoubtedly discover them as you become more proficient in using Linux, but these are the most common. Next time, well look at some linking quirks you need to be aware of.
Learn more about Linux through the free ["Introduction to Linux"][4] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/intro-to-linux/2018/10/linux-links-part-1
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/blog/2018/8/linux-beginners-moving-things-around
[2]: https://www.blender.org/
[3]: https://www.blender.org/download/
[4]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,55 +0,0 @@
translating----geekpi
Edit your videos with Pitivi on Fedora
======
![](https://fedoramagazine.org/wp-content/uploads/2018/10/pitivi-816x346.png)
Looking to produce a video of your adventures this weekend? There are many different options for editing videos out there. However, if you are looking for a video editor that is simple to pick up, and also available in the official Fedora Repositories, give [Pitivi][1] a go.
Pitivi is an open source, non-linear video editor that uses the GStreamer framework. Out of the box on Fedora, Pitivi supports OGG Video, WebM, and a range of other formats. Additionally, support for more video formats is available via GStreamer plugins. Pitivi is also tightly integrated with the GNOME Desktop, so the UI will feel at home among the other newer applications on Fedora Workstation.
### Installing Pitivi on Fedora
Pitivi is available in the Fedora Repositories. On Fedora Workstation, simply search and install Pitivi from the Software application.
![][2]
Alternatively, install Pitivi using the following command in the Terminal:
```
sudo dnf install pitivi
```
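If you later hit a clip format Pitivi cannot open, extra GStreamer plugins from the repositories usually fill the gap. The package names below are an assumption and vary between Fedora releases, so check `dnf search gstreamer1` for what is actually available:
```
sudo dnf install gstreamer1-plugins-good gstreamer1-plugins-bad-free
```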
### Basic Editing
Pitivi has a wide range of tools built in to allow quick and effective editing of your clips. Simply import videos, audio, and images into the Pitivi media library, then drag them onto the timeline. Pitivi also lets you easily split, trim, and group parts of clips together, and apply simple fade transitions on the timeline.
![][3]
### Transitions and Effects
In addition to a basic fade between two clips, Pitivi also features a range of different transitions and wipes. Additionally, there are over a hundred effects that can be applied to either video or audio to change how the media elements are played or displayed in your final presentation.
![][4]
Pitivi also features a range of other great features, so be sure to check out the [tour][5] on their website for a full description of the features of the awesome Pitivi.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/edit-your-videos-with-pitivi-on-fedora/
作者:[Ryan Lerch][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/introducing-flatpak/
[b]: https://github.com/lujun9972
[1]: http://www.pitivi.org/
[2]: https://fedoramagazine.org/wp-content/uploads/2018/10/Screenshot-from-2018-10-19-14-46-12.png
[3]: https://fedoramagazine.org/wp-content/uploads/2018/10/Screenshot-from-2018-10-19-15-37-29.png
[4]: http://www.pitivi.org/i/screenshots/archive/0.94.jpg
[5]: http://www.pitivi.org/?go=tour

View File

@ -1,341 +0,0 @@
How to use Pandoc to produce a research paper
======
Learn how to manage section references, figures, tables, and more in Markdown.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T)
This article takes a deep dive into how to produce a research paper using (mostly) [Markdown][1] syntax. We'll cover how to create and reference sections, figures (in Markdown and [LaTeX][2]) and bibliographies. We'll also discuss troublesome cases and why writing them in LaTeX is the right approach.
### Research
Research papers usually contain references to sections, figures, tables, and a bibliography. [Pandoc][3] by itself cannot easily cross-reference these, but it can leverage the [pandoc-crossref][4] filter to do the automatic numbering and cross-referencing of sections, figures, and tables.
Let's start with [an example of an educational research paper][5] originally written in LaTeX and rewrite it in Markdown (and some LaTeX) with Pandoc and pandoc-crossref.
#### Adding and referencing sections
Sections are automatically numbered and must be written using the Markdown heading H1. Subsections are written with subheadings H2-H4 (it is uncommon to need more than that). For example, to write a section titled “Implementation”, write `# Implementation {#sec:implementation}`, and Pandoc produces `3. Implementation` (or the corresponding numbered section). The title “Implementation” uses heading H1 and declares a label `{#sec:implementation}` that authors can use to refer to that section. To reference a section, type the `@` symbol followed by the label of the section and enclose it in square brackets: `[@sec:implementation]`.
[In this paper][5], we find the following example:
```
we lack experience (consistency between TAs, [@sec:implementation]).
```
Pandoc produces:
```
we lack experience (consistency between TAs, Section 4).
```
Sections are numbered automatically (this is covered in the `Makefile` at the end of the article). To create unnumbered sections, type the title of the section, followed by `{-}`. For example, `### Designing a game for maintainability {-}` creates an unnumbered subsection with the title “Designing a game for maintainability”.
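Putting both conventions side by side, a small fragment like the following (a sketch, not taken from the example paper) gives you one numbered, referenceable section and one unnumbered subsection:
```
# Implementation {#sec:implementation}

### Designing a game for maintainability {-}

As discussed in [@sec:implementation], ...
```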
#### Adding and referencing figures
Adding and referencing a figure is similar to referencing a section and adding a Markdown image:
```
![Scatterplot matrix](data/scatterplots/RScatterplotMatrix2.png){#fig:scatter-matrix}
```
The line above tells Pandoc that there is a figure with the caption Scatterplot matrix and the path to the image is `data/scatterplots/RScatterplotMatrix2.png`. `{#fig:scatter-matrix}` declares the name that should be used to reference the figure.
Here is an example of a figure reference from the example paper:
```
The boxes "Enjoy", "Grade" and "Motivation" ([@fig:scatter-matrix]) ...
```
Pandoc produces the following output:
```
The boxes "Enjoy", "Grade" and "Motivation" (Fig. 1) ...
```
#### Adding and referencing a bibliography
Most research papers keep references in a BibTeX database file. In this example, this file is named [biblio.bib][6] and it contains all the references of the paper. Here is what this file looks like:
```
@inproceedings{wrigstad2017mastery,
    Author =       {Wrigstad, Tobias and Castegren, Elias},
    Booktitle =    {SPLASH-E},
    Title =        {Mastery Learning-Like Teaching with Achievements},
    Year =         2017
}
@inproceedings{review-gamification-framework,
  Author =       {A. Mora and D. Riera and C. Gonzalez and J. Arnedo-Moreno},
  Publisher =    {IEEE},
  Booktitle =    {2015 7th International Conference on Games and Virtual Worlds
                  for Serious Applications (VS-Games)},
  Doi =          {10.1109/VS-GAMES.2015.7295760},
  Keywords =     {formal specification;serious games (computing);design
                  framework;formal design process;game components;game design
                  elements;gamification design frameworks;gamification-based
                  solutions;Bibliographies;Context;Design
                  methodology;Ethics;Games;Proposals},
  Month =        {Sept},
  Pages =        {1-8},
  Title =        {A Literature Review of Gamification Design Frameworks},
  Year =         2015,
  Bdsk-Url-1 =   {http://dx.doi.org/10.1109/VS-GAMES.2015.7295760}
}
...
```
The first line, `@inproceedings{wrigstad2017mastery,`, declares the type of publication (`inproceedings`) and the label used to refer to that paper (`wrigstad2017mastery`).
To cite the paper with its title, Mastery Learning-Like Teaching with Achievements, type:
```
the achievement-driven learning methodology [@wrigstad2017mastery]
```
Pandoc will output:
```
the achievement- driven learning methodology [30]
```
The paper we will produce includes a bibliography section with numbered references like these:
![](https://opensource.com/sites/default/files/uploads/bibliography-example_0.png)
Citing a collection of articles is easy: Simply cite each article, separating the labeled references using a semi-colon: `;`. If there are two labeled references—i.e., `SEABORN201514` and `gamification-leaderboard-benefits`—cite them together, like this:
```
Thus, the most important benefit is its potential to increase students' motivation
and engagement [@SEABORN201514;@gamification-leaderboard-benefits].
```
Pandoc will produce:
```
Thus, the most important benefit is its potential to increase students motivation
and engagement [26, 28]
```
### Problematic cases
A common problem involves objects that do not fit in the page. They then float to wherever they fit best, even if that position is not where the reader expects to see it. Since papers are easier to read when figures or tables appear close to where they are mentioned, we need to have some control over where these elements are placed. For this reason, I recommend the use of the `figure` LaTeX environment, which enables users to control the positioning of figures.
Let's take the figure example shown above:
```
![Scatterplot matrix](data/scatterplots/RScatterplotMatrix2.png){#fig:scatter-matrix}
```
And rewrite it in LaTeX:
```
\begin{figure}[t]
\includegraphics{data/scatterplots/RScatterplotMatrix2.png}
\caption{\label{fig:matrix}Scatterplot matrix}
\end{figure}
```
In LaTeX, the `[t]` option in the `figure` environment declares that the image should be placed at the top of the page. For more options, refer to the Wikibooks article [LaTex/Floats, Figures, and Captions][7].
### Producing the paper
So far, we've covered how to add and reference (sub-)sections and figures and cite the bibliography—now let's review how to produce the research paper in PDF format. To generate the PDF, we will use Pandoc to generate a LaTeX file that can be compiled to the final PDF. We will also discuss how to generate the research paper in LaTeX using a customized template and a meta-information file, and how to compile the LaTeX document into its final PDF form.
Most conferences provide a **.cls** file or a template that specifies how papers should look; for example, whether they should use a two-column format and other design treatments. In our example, the conference provided a file named **acmart.cls**.
Authors are generally expected to include the institution to which they belong in their papers. However, this option was not included in the default Pandoc LaTeX template (note that the Pandoc template can be inspected by typing `pandoc -D latex`). To include the affiliation, take the default Pandoc LaTeX template and add a new field. The Pandoc template was copied into a file named `mytemplate.tex` as follows:
```
pandoc -D latex > mytemplate.tex
```
The default template contains the following code:
```
$if(author)$
\author{$for(author)$$author$$sep$ \and $endfor$}
$endif$
$if(institute)$
\providecommand{\institute}[1]{}
\institute{$for(institute)$$institute$$sep$ \and $endfor$}
$endif$
```
Because the template should include the author's affiliation and email address, among other things, we updated it to include these fields (we made other changes as well but did not include them here due to the file length):
```
latex
$for(author)$
    $if(author.name)$
        \author{$author.name$}
        $if(author.affiliation)$
            \affiliation{\institution{$author.affiliation$}}
        $endif$
        $if(author.email)$
            \email{$author.email$}
        $endif$
    $else$
        $author$
    $endif$
$endfor$
```
With these changes in place, we should have the following files:
* `main.md` contains the research paper
* `biblio.bib` contains the bibliographic database
* `acmart.cls` is the class of the document that we should use
* `mytemplate.tex` is the template file to use (instead of the default)
Let's add the meta-information of the paper in a `meta.yaml` file:
```
---
template: 'mytemplate.tex'
documentclass: acmart
classoption: sigconf
title: The impact of opt-in gamification on `\\`{=latex} students' grades in a software design course
author:
- name: Kiko Fernandez-Reyes
  affiliation: Uppsala University
  email: kiko.fernandez@it.uu.se
- name: Dave Clarke
  affiliation: Uppsala University
  email: dave.clarke@it.uu.se
- name: Janina Hornbach
  affiliation: Uppsala University
  email: janina.hornbach@fek.uu.se
bibliography: biblio.bib
abstract: |
  An achievement-driven methodology strives to give students more control over their learning with enough flexibility to engage them in deeper learning. (more stuff continues)
include-before: |
  \```{=latex}
  \copyrightyear{2018}
  \acmYear{2018}
  \setcopyright{acmlicensed}
  \acmConference[MODELS '18 Companion]{ACM/IEEE 21th International Conference on Model Driven Engineering Languages and Systems}{October 14--19, 2018}{Copenhagen, Denmark}
  \acmBooktitle{ACM/IEEE 21th International Conference on Model Driven Engineering Languages and Systems (MODELS '18 Companion), October 14--19, 2018, Copenhagen, Denmark}
  \acmPrice{XX.XX}
  \acmDOI{10.1145/3270112.3270118}
  \acmISBN{978-1-4503-5965-8/18/10}
  \begin{CCSXML}
  <ccs2012>
  <concept>
  <concept_id>10010405.10010489</concept_id>
  <concept_desc>Applied computing~Education</concept_desc>
  <concept_significance>500</concept_significance>
  </concept>
  </ccs2012>
  \end{CCSXML}
  \ccsdesc[500]{Applied computing~Education}
  \keywords{gamification, education, software design, UML}
  \```
figPrefix:
  - "Fig."
  - "Figs."
secPrefix:
  - "Section"
  - "Sections"
...
```
This meta-information file sets the following variables in LaTeX:
* `template` refers to the template to use (mytemplate.tex)
* `documentclass` refers to the LaTeX document class to use (`acmart`)
* `classoption` refers to the options of the class, in this case `sigconf`
* `title` specifies the title of the paper
* `author` is an object that contains other fields, such as `name`, `affiliation`, and `email`.
* `bibliography` refers to the file that contains the bibliography (biblio.bib)
* `abstract` contains the abstract of the paper
* `include-before` is information that should be included before the actual content of the paper; this is known as the [preamble][8] in LaTeX. I have included it here to show how to generate a computer science paper, but you may choose to skip it
* `figPrefix` specifies how to refer to figures in the document, i.e., what should be displayed when one refers to the figure `[@fig:scatter-matrix]`. For example, the current `figPrefix` produces in the example `The boxes "Enjoy", "Grade" and "Motivation" ([@fig:scatter-matrix])` this output: `The boxes "Enjoy", "Grade" and "Motivation" (Fig. 3)`. If there are multiple figures, the current setup declares that it should instead display `Figs.` next to the figure numbers.
* `secPrefix` specifies how to refer to sections mentioned elsewhere in the document (similar to figures, described above)
Now that the meta-information is set, let's create a `Makefile` that produces the desired output. This `Makefile` uses Pandoc to produce the LaTeX file, `pandoc-crossref` to produce the cross-references, `pdflatex` to compile the LaTeX to PDF, and `bibtex` to process the references.
The `Makefile` is shown below:
```
all: paper
paper:
        @pandoc -s -F pandoc-crossref --natbib meta.yaml --template=mytemplate.tex -N \
         -f markdown -t latex+raw_tex+tex_math_dollars+citations -o main.tex main.md
        @pdflatex main.tex &> /dev/null
        @bibtex main &> /dev/null
        @pdflatex main.tex &> /dev/null
        @pdflatex main.tex &> /dev/null
clean:
        rm main.aux main.tex main.log main.bbl main.blg main.out
.PHONY: all clean paper
```
Pandoc uses the following flags:
* `-s` to create a standalone LaTeX document
* `-F pandoc-crossref` to make use of the filter `pandoc-crossref`
* `--natbib` to render the bibliography with `natbib` (you can also choose `--biblatex`)
* `--template` sets the template file to use
* `-N` to number the section headings
* `-f` and `-t` specify the conversion from and to which format. `-t` usually contains the format and is followed by the Pandoc extensions used. In the example, we declared `raw_tex+tex_math_dollars+citations` to allow use of `raw_tex` LaTeX in the middle of the Markdown file. `tex_math_dollars` enables us to type math formulas as in LaTeX, and `citations` enables us to use [this extension][9].
To generate a PDF from LaTeX, follow the guidelines [from bibtex][10] to process the bibliography:
```
@pdflatex main.tex &> /dev/null
@bibtex main &> /dev/null
@pdflatex main.tex &> /dev/null
@pdflatex main.tex &> /dev/null
```
The script contains `@` to ignore the output, and we redirect the file handle of the standard output and error to `/dev/null` so that we don't see the output generated from the execution of these commands.
The final result is shown below. The repository for the article can be found [on GitHub][11]:
![](https://opensource.com/sites/default/files/uploads/abstract-image.png)
### Conclusion
In my opinion, research is all about collaboration, dissemination of ideas, and improving the state of the art in whatever field one happens to be in. Most computer scientists and engineers write papers using the LaTeX document system, which provides excellent support for math. Researchers from the social sciences seem to stick to DOCX documents.
When researchers from different communities write papers together, they should first discuss which format they will use. While DOCX may not be convenient for engineers if there is math involved, LaTeX may be troublesome for researchers who lack a programming background. As this article shows, Markdown is an easy-to-use language that can be used by both engineers and social scientists.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/pandoc-research-paper
作者:[Kiko Fernandez-Reyes][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kikofernandez
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Markdown
[2]: https://www.latex-project.org/
[3]: https://pandoc.org/
[4]: http://lierdakil.github.io/pandoc-crossref/
[5]: https://dl.acm.org/citation.cfm?id=3270118
[6]: https://github.com/kikofernandez/pandoc-examples/blob/master/research-paper/biblio.bib
[7]: https://en.wikibooks.org/wiki/LaTeX/Floats,_Figures_and_Captions#Figures
[8]: https://www.sharelatex.com/learn/latex/Creating_a_document_in_LaTeX#The_preamble_of_a_document
[9]: http://pandoc.org/MANUAL.html#citations
[10]: http://www.bibtex.org/Using/
[11]: https://github.com/kikofernandez/pandoc-examples/tree/master/research-paper

View File

@ -1,79 +0,0 @@
5 tips for choosing the right open source database
======
When selecting a mission-critical application, you can't afford to make mistakes.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8)
So, your company has a directive to adopt more open source database technologies, and they've recruited you to select the right direction. Whether you are an open source technology veteran or a newcomer, this is a daunting and overwhelming task.
Over the past several years, open source technology adoption has steadily increased in the enterprise space. With its popularity comes a crowded marketplace with open source software companies promising that their solution will solve every problem and fit every workload. Be wary of these promises. Choosing the right open source technology—especially a database—is an important and difficult decision you can't make lightly.
In my experience as an IT professional at [Percona][1] and other companies, I've been fortunate to work hands-on in adopting open source technologies and guiding others in making the right decisions. There are many important factors to consider; hopefully, this article will shine a light on a few.
### 1. Have a goal.
This may seem simple, but based on my many conversations with people exploring MySQL, MongoDB, or PostgreSQL, it is top of the list in importance.
To avoid getting overwhelmed by the unlimited combinations of open source database software in the market, have a specific goal in mind. Maybe your goal is to provide your internal developers with a standardized, open source database backend that is managed by your internal database team. Perhaps your goal is to rip and replace the entire functionality of a legacy application and database backend with new open source technology.
Once you have defined a goal, you can focus your efforts. This will lead to better conversations internally as well as externally with open source database software vendors and advocates.
### 2. Understand your workload.
Despite the increasing ability of database technologies to wear many hats, each specializes in certain areas, e.g., MongoDB is now transactional, MySQL now has JSON storage. A growing trend in open source databases involves providing check boxes claiming certain features are available. One of the biggest mistakes is not using the right tool for the right job. Something leads a company down the wrong path—perhaps an overzealous developer or a manager with tunnel vision. The unfortunate thing is that the wrong tool can work fine for smaller volumes of transactions and data, but later there will be bottlenecks that can be solved only by using a different tool.
If you want a data analytics warehouse, an open source relational database is probably not the right choice. If you want a transaction-processing app with rigid data integrity and consistency, NoSQL options may not be the right option.
### 3. Don't reinvent the wheel.
Open source database technologies have rapidly grown, expanded, and hardened over the past several decades. We've seen a transformation from new, questionably production-ready databases to proven, enterprise-grade database backends. It's no longer necessary to be a bleeding edge, early adopter to choose open source database technologies. Organizations have grown around these communities to provide production support and tooling in the open source database space for a growing number of startups, midsized businesses, and Fortune 500 companies.
Battery Ventures, a tech-focused investment firm, recently introduced its [BOSS Index][2] for tracking the most popular open source projects. It's not perfect, but it provides great insight into some of the most widely adopted and active open source projects. Not surprisingly, database technologies dominate the list, comprising five of the top 10 technologies. This is a great starting point for someone new to the open source database space. A lot of times, vendors have already produced suitable architectures for solving specific problems.
My point is that someone has probably already done what you are trying to do. Learn from their successes and failures. Even if it is not a perfect fit, a solution can likely be modified to suit your needs. For example, Amazon provides a [CloudFormation script][3] for deploying MongoDB in its EC2 environment.
If you are a bleeding-edge early adopter, that doesn't mean you can't explore. If you have a unique challenge or workload that seems to fit a new open source database technology, go for it. Keep in mind that there are inherent risks (and rewards!) to being an early adopter.
### 4. Start simple.
How many [nines][4] does your database truly need? "Achieving high availability" is often a nebulous goal for many companies. Of course, the most common answer is "it's mission-critical, and we cannot afford any downtime."
The more complicated your database environment, the more difficult and costly it is to manage. You can theoretically achieve higher uptime, but the tradeoffs will be the feasibility of management and performance. When in doubt, start simple. There are always options to scale out when the need arises.
For example, Booking.com is a widely known travel reservation site. It might be less widely known that it uses MySQL as a database backend. Nicolai Plum, a Booking.com senior systems architect, gave [a talk][5] outlining the evolution of the company's MySQL database. One of the takeaways was that the database started simple. It had to evolve over time, but in the beginning, simple master-replica architecture sufficed. As the workload and dataset increased, it introduced load balancers, multiple read replicas, archiving to Hadoop for analytics, etc. However, the early architecture was extremely simple.
![](https://opensource.com/sites/default/files/uploads/internet_app_barrett_chambers.png)
### 5. When in doubt, ask an expert.
If you're unsure whether a database would be a good fit, reach out on forums, websites, or to vendors and strike up a conversation. This can be exciting as you research which database technologies meet your requirements and which do not. Often there are suitable alternatives that you haven't considered. The open source community is all about sharing knowledge.
There is one important thing to be aware of when reaching out to open source software and services vendors. Many have open-core business models that incentivize adopting their database software. Take their advice or guidance with a grain of salt and use your own ability to research, create proofs of concept, and explore alternatives.
### Conclusion
Choosing the right open source database is an important decision. Start by asking the right questions. All too often, people put the cart before the horse, making decisions before really understanding their needs.
Barrett Chambers will present [Choosing the Right Open Source Database][6] at [All Things Open][7], October 21-23 in Raleigh, N.C.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/tips-choosing-right-open-source-database
作者:[Barrett Chambers][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/barrettc
[b]: https://github.com/lujun9972
[1]: https://www.percona.com/
[2]: https://techcrunch.com/2017/04/07/tracking-the-explosive-growth-of-open-source-software/
[3]: https://docs.aws.amazon.com/quickstart/latest/mongodb/welcome.html
[4]: https://en.wikipedia.org/wiki/Five_nines
[5]: https://www.percona.com/live/mysql-conference-2015/sessions/bookingcom-evolution-mysql-system-design
[6]: https://allthingsopen.org/talk/choosing-the-right-open-source-database/
[7]: https://allthingsopen.org/

View File

@ -1,282 +0,0 @@
translating by dianbanjiu
How to set up WordPress on a Raspberry Pi
======
Run your WordPress website on your Raspberry Pi with this simple tutorial.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_raspberry-pi-classroom_lead.png?itok=KIyhmR8W)
WordPress is a popular open source blogging platform and content management system (CMS). It's easy to set up and has a thriving community of developers building websites and creating themes and plugins for others to use.
Although getting hosting packages with a "one-click WordPress setup" is easy, it's also simple to set up your own on a Linux server with only command-line access, and the [Raspberry Pi][1] is a perfect way to try it out and learn something along the way.
The four components of a commonly used web stack are Linux, Apache, MySQL, and PHP. Here's what you need to know about each.
### Linux
The Raspberry Pi runs Raspbian, which is a Linux distribution based on Debian and optimized to run well on Raspberry Pi hardware. It comes with two options to start: Desktop or Lite. The Desktop version boots to a familiar-looking desktop and comes with lots of educational software and programming tools, as well as the LibreOffice suite, Minecraft, and a web browser. The Lite version has no desktop environment, so it's command-line only and comes with only the essential software.
This tutorial will work with either version, but if you use the Lite version you'll have to use another computer to access your website.
### Apache
Apache is a popular web server application you can install on the Raspberry Pi to serve web pages. On its own, Apache can serve static HTML files over HTTP. With additional modules, it can serve dynamic web pages using scripting languages such as PHP.
Installing Apache is very simple. Open a terminal window and type the following command:
```
sudo apt install apache2 -y
```
By default, Apache puts a test HTML file in a web folder you can view from your Pi or another computer on your network. Just open the web browser and enter the address **<http://localhost>**. Alternatively (particularly if you're using Raspbian Lite), enter the Pi's IP address instead of **localhost**. You should see this in your browser window:
![](https://opensource.com/sites/default/files/uploads/apache-it-works.png)
This means you have Apache working!
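If you installed Raspbian Lite, or simply prefer the terminal, you can run the same check with curl (install it first with `sudo apt install curl` if it is missing):
```
curl -I http://localhost
```
A healthy server answers with an `HTTP/1.1 200 OK` status line.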
This default webpage is just an HTML file on the filesystem. It is located at **/var/www/html/index.html**. You can try replacing this file with some HTML of your own using the [Leafpad][2] text editor:
```
cd /var/www/html/
sudo leafpad index.html
```
Save and close Leafpad, then refresh the browser to see your changes.
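If you just want something quick to paste in, a minimal page such as this works (the content is entirely made up):
```
<!DOCTYPE html>
<html>
  <head>
    <title>My Raspberry Pi</title>
  </head>
  <body>
    <h1>Hello from my Pi!</h1>
  </body>
</html>
```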
### MySQL
MySQL (pronounced "my S-Q-L" or "my sequel") is a popular database engine. Like PHP, it's widely used on web servers, which is why projects like WordPress use it and why those projects are so popular.
Install MySQL Server by entering the following command into the terminal window:
```
sudo apt-get install mysql-server -y
```
WordPress uses MySQL to store posts, pages, user data, and lots of other content.
### PHP
PHP is a preprocessor: it's code that runs when the server receives a request for a web page via a web browser. It works out what needs to be shown on the page, then sends that page to the browser. Unlike static HTML, PHP can show different content under different circumstances. PHP is a very popular language on the web; huge projects like Facebook and Wikipedia are written in PHP.
Install PHP and the MySQL extension:
```
sudo apt-get install php php-mysql -y
```
Delete the **index.html** file and create **index.php** :
```
sudo rm index.html
sudo leafpad index.php
```
Add the following line:
```
<?php phpinfo(); ?>
```
Save, exit, and refresh your browser. You'll see the PHP status page:
![](https://opensource.com/sites/default/files/uploads/phpinfo.png)
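To see the "dynamic" part in action, you could swap `phpinfo()` for a snippet that changes on every request, for example printing the current server time. This is just a toy example and is not needed for the WordPress setup:
```
<?php echo "The time on this Pi is " . date('H:i:s'); ?>
```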
### WordPress
You can download WordPress from [wordpress.org][3] using the **wget** command. Helpfully, the latest version of WordPress is always available at [wordpress.org/latest.tar.gz][4], so you can grab it without having to look it up on the website. As I'm writing, this is version 4.9.8.
Make sure you're in **/var/www/html** and delete everything in it:
```
cd /var/www/html/
sudo rm *
```
Download WordPress using **wget** , then extract the contents and move the WordPress files to the **html** directory:
```
sudo wget http://wordpress.org/latest.tar.gz
sudo tar xzf latest.tar.gz
sudo mv wordpress/* .
```
Tidy up by removing the tarball and the now-empty **wordpress** directory:
```
sudo rm -rf wordpress latest.tar.gz
```
Running the **ls** or **tree -L 1** command will show the contents of a WordPress project:
```
.
├── index.php
├── license.txt
├── readme.html
├── wp-activate.php
├── wp-admin
├── wp-blog-header.php
├── wp-comments-post.php
├── wp-config-sample.php
├── wp-content
├── wp-cron.php
├── wp-includes
├── wp-links-opml.php
├── wp-load.php
├── wp-login.php
├── wp-mail.php
├── wp-settings.php
├── wp-signup.php
├── wp-trackback.php
└── xmlrpc.php
3 directories, 16 files
```
This is the source of a default WordPress installation. The files you edit to customize your installation belong in the **wp-content** folder.
You should now change the ownership of all these files to the Apache user:
```
sudo chown -R www-data: .
```
### WordPress database
To get your WordPress site set up, you need a database. This is where MySQL comes in!
Run the MySQL secure installation command in the terminal window:
```
sudo mysql_secure_installation
```
You will be asked a series of questions. There's no password set up initially, but you should set one in the second step. Make sure you enter a password you will remember, as you'll need it to connect to WordPress. Press Enter to say Yes to each question that follows.
When it's complete, you will see the messages "All done!" and "Thanks for using MariaDB!"
Run **mysql** in the terminal window:
```
sudo mysql -uroot -p
```
Enter the root password you created. You will be greeted by the message "Welcome to the MariaDB monitor." Create the database for your WordPress installation at the **MariaDB [(none)] >** prompt using:
```
create database wordpress;
```
Note the semicolon at the end of the statement. If the command is successful, you should see this:
```
Query OK, 1 row affected (0.00 sec)
```
Grant database privileges to the root user, entering your password at the end of the statement:
```
GRANT ALL PRIVILEGES ON wordpress.* TO 'root'@'localhost' IDENTIFIED BY 'YOURPASSWORD';
```
For the changes to take effect, you will need to flush the database privileges:
```
FLUSH PRIVILEGES;
```
Exit the MariaDB prompt with **Ctrl+D** to return to the Bash shell.
### WordPress configuration
Open the web browser on your Raspberry Pi and open **<http://localhost>**. You should see a WordPress page asking you to pick your language. Select your language and click **Continue**. You will be presented with the WordPress welcome screen. Click the **Let's go!** button.
Fill out the basic site information as follows:
```
Database Name:      wordpress
User Name:          root
Password:           <YOUR PASSWORD>
Database Host:      localhost
Table Prefix:       wp_
```
Click **Submit** to proceed, then click **Run the install**.
![](https://opensource.com/sites/default/files/uploads/wp-info.png)
Fill in the form: Give your site a title, create a username and password, and enter your email address. Hit the **Install WordPress** button, then log in using the account you just created. Now that you're logged in and your site is set up, you can see your website by visiting **<http://localhost/wp-admin>**.
### Permalinks
It's a good idea to change your permalink settings to make your URLs more friendly.
To do this, log into WordPress and go to the dashboard. Go to **Settings** , then **Permalinks**. Select the **Post name** option and click **Save Changes**. You'll need to enable Apache's **rewrite** module:
```
sudo a2enmod rewrite
```
You'll also need to tell the virtual host serving the site to allow requests to be overwritten. Edit the Apache configuration file for your virtual host:
```
sudo leafpad /etc/apache2/sites-available/000-default.conf
```
Add the following lines after line 1:
```
<Directory "/var/www/html">
    AllowOverride All
</Directory>
```
Ensure it's within the **<VirtualHost *:80>** block, like so:
```
<VirtualHost *:80>
    <Directory "/var/www/html">
        AllowOverride All
    </Directory>
    ...
```
Save the file and exit, then restart Apache:
```
sudo systemctl restart apache2
```
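If you want to confirm the configuration is still valid, Debian-based systems such as Raspbian ship the `apache2ctl` helper; this extra check is optional:
```
sudo apache2ctl configtest
```
It should report `Syntax OK` if the `<Directory>` block was added correctly.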
### What's next?
WordPress is very customizable. By clicking your site name in the WordPress banner at the top of the page (when you're logged in), you'll be taken to the Dashboard. From there, you can change the theme, add pages and posts, edit the menu, add plugins, and do lots more.
Here are some interesting things you can try on the Raspberry Pi's web server.
* Add pages and posts to your website
* Install different themes from the Appearance menu
* Customize your website's theme or create your own
* Use your web server to display useful information for people on your network
Don't forget, the Raspberry Pi is a Linux computer. You can also follow these instructions to install WordPress on a server running Debian or Ubuntu.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/setting-wordpress-raspberry-pi
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sitewide-search?search_api_views_fulltext=raspberry%20pi
[2]: https://en.wikipedia.org/wiki/Leafpad
[3]: http://wordpress.org/
[4]: https://wordpress.org/latest.tar.gz

View File

@ -0,0 +1,85 @@
4 cool new projects to try in COPR for October 2018
======
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
COPR is a [collection][1] of personal repositories for software that isn't carried in the standard Fedora repositories. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the standard set of Fedora packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
Here's a set of new and interesting projects in COPR.
### GitKraken
[GitKraken][2] is a useful git client for people who prefer a graphical interface over command-line, providing all the features you expect. Additionally, GitKraken can create repositories and files, and has a built-in editor. A useful feature of GitKraken is the ability to stage lines or hunks of files, and to switch between branches fast. However, in some cases, you may experience performance issues with larger projects.
![][3]
#### Installation instructions
The repo currently provides GitKraken for Fedora 27, 28, 29 and Rawhide, and for OpenSUSE Tumbleweed. To install GitKraken, use these commands:
```
sudo dnf copr enable elken/gitkraken
sudo dnf install gitkraken
```
### Music On Console
[Music On Console][4] player, or mocp, is a simple console audio player. It has an interface similar to the Midnight Commander and is easy to use. You simply navigate to a directory with music files and select a file or directory to play. In addition, mocp provides a set of commands, allowing it to be controlled directly from the command line.
![][5]
#### Installation instructions
The repo currently provides Music On Console player for Fedora 28 and 29. To install mocp, use these commands:
```
sudo dnf copr enable Krzystof/Moc
sudo dnf install moc
```
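As a rough illustration of that command-line control, a running player can be driven like this (flag names are taken from the mocp manual, so verify with `man mocp` on your version):
```
mocp --play            # start playing the current playlist
mocp --next            # skip to the next track
mocp --toggle-pause    # pause or resume playback
```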
### cnping
[Cnping][6] is a small graphical ping tool for IPv4, useful for visualization of changes in round-trip time. It offers an option to control the time period between each packet as well as the size of data sent. In addition to the graph shown, cnping provides basic statistics on round-trip times and packet loss.

![][7]
#### Installation instructions
The repo currently provides cnping for Fedora 27, 28, 29 and Rawhide. To install cnping, use these commands:
```
sudo dnf copr enable dreua/cnping
sudo dnf install cnping
```
### Pdfsandwich
[Pdfsandwich][8] is a tool for adding text to PDF files which contain text in an image form — such as scanned books. It uses optical character recognition (OCR) to create an additional layer with the recognized text behind the original page. This can be useful for copying and working with the text.
#### Installation instructions
The repo currently provides pdfsandwich for Fedora 27, 28, 29 and Rawhide, and for EPEL 7. To install pdfsandwich, use these commands:
```
sudo dnf copr enable merlinm/pdfsandwich
sudo dnf install pdfsandwich
```
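Once installed, a typical run just names the input file; by default the OCRed copy is written alongside it with an `_ocr` suffix. The file name below is made up, and the `-lang` option (which selects the Tesseract language data) is best double-checked against the man page:
```
pdfsandwich -lang eng scanned_book.pdf    # produces scanned_book_ocr.pdf
```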
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-try-copr-october-2018/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org
[b]: https://github.com/lujun9972
[1]: https://copr.fedorainfracloud.org/
[2]: https://www.gitkraken.com/git-client
[3]: https://fedoramagazine.org/wp-content/uploads/2018/10/copr-gitkraken.png
[4]: http://moc.daper.net/
[5]: https://fedoramagazine.org/wp-content/uploads/2018/10/copr-mocp.png
[6]: https://github.com/cnlohr/cnping
[7]: https://fedoramagazine.org/wp-content/uploads/2018/10/copr-cnping.png
[8]: http://www.tobias-elze.de/pdfsandwich/

View File

@ -0,0 +1,87 @@
translating---geekpi
Get organized at the Linux command line with Calcurse
======
Keep up with your calendar and to-do list with Calcurse.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT)
Do you need complex, feature-packed graphical or web applications to get and stay organized? I don't think so. The right command line tool can do the job and do it well.
Of course, uttering the words command and line together can strike fear into the hearts of some Linux users. The command line, to them, is terra incognita.
Organizing yourself at the command line is easy with [Calcurse][1]. Calcurse brings a graphical look and feel to a text-based interface. You get the simplicity and focus of the command line married to ease of use and navigation.
Let's take a closer look at Calcurse, which is open sourced under the BSD License.
### Getting the software
If compiling code is your thing (it's not mine, generally), you can grab the source code from the [Calcurse website][1]. Otherwise, get the [binary installer][2] for your Linux distribution. You might even be able to get Calcurse from your Linux distro's package manager. It never hurts to check.
Compile or install Calcurse (neither takes all that long), and you're ready to go.
### Using Calcurse
Crack open a terminal window and type **calcurse**.
![](https://opensource.com/sites/default/files/uploads/calcurse-main.png)
Calcurse's interface consists of three panels:
* Appointments (the left side of the screen)
* Calendar (the top right)
* To-do list (the bottom right)
Move between the panels by pressing the Tab key on your keyboard. To add a new item to a panel, press **a**. Calcurse walks you through what you need to do to add the item.
One interesting quirk is that the Appointment and Calendar panels work together. You add an appointment by tabbing to the Calendar panel. There, you choose the date for your appointment. Once you do that, you tab back to the Appointments panel. I know …
Press **a** to set a start time, a duration (in minutes), and a description of the appointment. The start time and duration are optional. Calcurse displays appointments on the day they're due.
![](https://opensource.com/sites/default/files/uploads/calcurse-appointment.png)
Here's what a day's appointments look like:
![](https://opensource.com/sites/default/files/uploads/calcurse-appt-list.png)
The to-do list works on its own. Tab to the ToDo panel and (again) press **a**. Type a description of the task, then set a priority (1 is the highest and 9 is the lowest). Calcurse lists your uncompleted tasks in the ToDo panel.
![](https://opensource.com/sites/default/files/uploads/calcurse-todo.png)
If your task has a long description, Calcurse truncates it. You can view long descriptions by navigating to the task using the up or down arrow keys on your keyboard, then pressing **v**.
![](https://opensource.com/sites/default/files/uploads/calcurse-view-todo.png)
Calcurse saves its information in text files in a hidden folder called **.calcurse** in your home directory—for example, **/home/scott/.calcurse**. If Calcurse stops working, it's easy to find your information.
### Other useful features
Other Calcurse features include the ability to set recurring appointments. To do that, find the appointment you want to repeat and press **r** in the Appointments panel. You'll be asked to set the frequency (for example, daily or weekly) and how long you want the appointment to repeat.
You can also import calendars in [ICAL][3] format or export your data in either ICAL or [PCAL][4] format. With ICAL, you can share your data with other calendar applications. With PCAL, you can generate a Postscript version of your calendar.
There are also a number of command line arguments you can pass to Calcurse. You can read about them [in the documentation][5].
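A few of the non-interactive flags are handy for a quick glance without opening the interface. The behavior below is summarized from the Calcurse manual, so confirm with `calcurse --help` on your version:
```
calcurse -a      # appointments and events for today
calcurse -d 7    # appointments for the next seven days
calcurse -t      # the to-do list
```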
While simple, Calcurse does a solid job of helping you keep organized. You'll need to be a bit more mindful of your tasks and appointments, but you'll be able to focus better on what you need to do and where you need to be.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/calcurse
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: http://www.calcurse.org/
[2]: http://www.calcurse.org/downloads/#packages
[3]: https://tools.ietf.org/html/rfc2445
[4]: http://pcal.sourceforge.net/
[5]: http://www.calcurse.org/files/manual.chunked/ar01s04.html#_invocation

View File

@ -0,0 +1,82 @@
Monitoring database health and behavior: Which metrics matter?
======
Monitoring your database can be overwhelming or seem not important. Here's how to do it right.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D)
We don't talk about our databases enough. In this age of instrumentation, we monitor our applications, our infrastructure, and even our users, but we sometimes forget that our database deserves monitoring, too. That's largely because most databases do their job so well that we simply trust them to do it. Trust is great, but confirmation of our assumptions is even better.
![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image1_-_bffs.png?itok=BZQM_Fos)
### Why monitor your databases?
There are plenty of reasons to monitor your databases, most of which are the same reasons you'd monitor any other part of your systems: Knowing what's going on in the various components of your applications makes you a better-informed developer who makes smarter decisions.
![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image5_fire.png?itok=wsip2Fa4)
More specifically, databases are great indicators of system health and behavior. Odd behavior in the database can point to problem areas in your applications. Alternately, when there's odd behavior in your application, you can use database metrics to help expedite the debugging process.
### The problem
The slightest investigation reveals one problem with monitoring databases: Databases have a lot of metrics. "A lot" is an understatement—if you were Scrooge McDuck, you could swim through all of the metrics available. If this were Wrestlemania, the metrics would be folding chairs. Monitoring them all doesn't seem practical, so how do you decide which metrics to monitor?
![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image2_db_metrics.png?itok=Jd9NY1bt)
### The solution
The best way to start monitoring databases is to identify some foundational, database-agnostic metrics. These metrics create a great start to understanding the lives of your databases.
### Throughput: How much is the database doing?
The easiest way to start monitoring a database is to track the number of requests the database receives. We have high expectations for our databases; we expect them to store data reliably and handle all of the queries we throw at them, which could be one massive query a day or millions of queries from users all day long. Throughput can tell you which of those is true.
You can also group requests by type (reads, writes, server-side, client-side, etc.) to begin analyzing the traffic.
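What this looks like in practice depends on the engine. As one concrete illustration (assuming MySQL here), the built-in status counters already expose raw throughput numbers; sampling them over time gives you requests per second. Adjust credentials to your setup:
```
# Total statements executed, plus a rough read/write split (MySQL example)
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Questions';"
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Com_select';"
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Com_insert';"
```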
### Execution time: How long does it take the database to do its job?
This metric seems obvious, but it often gets overlooked. You don't just want to know how many requests the database received, but also how long the database spent on each request. It's important to approach execution time with context, though: What's slow for a time-series database like InfluxDB isn't the same as what's slow for a relational database like MySQL. Slow in InfluxDB might mean milliseconds, whereas MySQL's default value for its `long_query_time` variable is ten seconds.
![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image4_slow_is_relative.png?itok=9RkuzUi8)
Monitoring execution time is not the same thing as improving execution time, so beware of the temptation to spend time on optimizations if you have other problems in your app to fix.
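Sticking with MySQL as the stand-in example, two quick checks show the slow-query threshold and how many statements have crossed it (a sketch, not a full monitoring setup):
```
# Slow-query threshold in seconds, and the running count of slow queries (MySQL example)
mysql -u root -p -e "SHOW VARIABLES LIKE 'long_query_time';"
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Slow_queries';"
```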
### Concurrency: How many jobs is the database doing at the same time?
Once you know how many requests the database is handling and how long each one takes, you need to add a layer of complexity to start getting real value from these metrics.
If the database receives ten requests and each one takes ten seconds to complete, is the database busy for 100 seconds, ten seconds—or somewhere in between? The number of concurrent tasks changes the way the database's resources are used. When you consider things like the number of connections and threads, you'll start to get a fuller picture of your database metrics.
Concurrency can also affect latency, which includes not only the time it takes for the task to be completed (execution time) but also the time the task needs to wait before it's handled.
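Again using MySQL as a stand-in, two status counters give a first approximation of concurrency: open connections versus threads actively executing a statement.
```
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads_running';"
```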
### Utilization: What percentage of the time was the database busy?
Utilization is a culmination of throughput, execution time, and concurrency to determine how often the database was available—or alternatively, how often the database was too busy to respond to a request.
![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image6_telephone.png?itok=YzdpwUQP)
This metric is particularly useful for determining the overall health and performance of your database. If it's available to respond to requests only 80% of the time, you can reallocate resources, work on optimization, or otherwise make changes to get closer to high availability.
### The good news
It can seem overwhelming to monitor and analyze, especially because most of us aren't database experts and we may not have time to devote to understanding these metrics. But the good news is that most of this work is already done for us. Many databases have an internal performance database (Postgres: pg_stats, CouchDB: Runtime_Statistics, InfluxDB: _internal, etc.), which is designed by database engineers to monitor the metrics that matter for that particular database. You can see things as broad as the number of slow queries or as detailed as the average microseconds each event in the database takes.
### Conclusion
Databases create enough metrics to keep us all busy for a long time, and while the internal performance databases are full of useful information, it's not always clear which metrics you should care about. Start with throughput, execution time, concurrency, and utilization, which provide enough information for you to start understanding the patterns in your database.
![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image3_3_hearts.png?itok=iHF-OSwx)
Are you monitoring your databases? Which metrics have you found to be useful? Tell me about it!
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/database-metrics-matter
作者:[Katy Farmer][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thekatertot
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,99 @@
推动 DevOps 变革的三个方面
======
推动大规模的组织变革是一个痛苦的过程。对于 DevOps 来说,尽管也有阵痛,但变革带来的价值则相当可观。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-inclusion-transformation-change_20180927.png?itok=2E-g10hJ)
避免痛苦是一种强大的动力。一些研究表明,[植物也会通过遭受疼痛的过程][1]以采取措施来保护自己。我们人类有时也会刻意让自己受苦——在剧烈运动之后,身体可能会发生酸痛,但我们仍然坚持运动。那是因为当人认为整个过程利大于弊时,几乎可以忍受任何事情。
推动大规模的组织变革的过程确实是痛苦的。有人可能会因难以改变价值观和行为而感到痛苦,有人可能会因难以带领团队而感到痛苦,也有人可能会因难以开展工作而感到痛苦。但就 DevOps 而言,我可以说这些痛苦都是值得的。
我也曾经关注过一个团队耗费大量时间优化技术流程的过程,在这个过程中,团队逐渐将流程进行自动化改造,并最终获得了成功。
![Improvements after DevOps transformation][3]
图片来源Lee Eason. CC BY-SA 4.0
这张图表充分表明了变革的价值。一家公司在我主导实行了 DevOps 转型之后60 多个团队每月提交了超过 900 个发布请求。这些工作量的原耗时高达每个月 350 天,而这么多的工作量对于任何公司来说都是不可忽视的。除此以外,他们每月的部署次数从 100 次增加到了 9000 次,高危 bug 减少了 24%,工程师们更轻松了,<ruby>净推荐值<rt>Net Promoter Score</rt></ruby>NPS也提高了而 NPS 提高反过来也让团队的 DevOps 转型更加顺利。正如 [Puppet 发布的 DevOps 报告][4]所预测的,用在技术流程改进上的投资可以在业务成果上明显地体现出来。
而 DevOps 主导者在推动变革时必须关注这三个方面:团队管理、团队文化和团队活力。
### 团队管理
组织架构越大,业务领导与一线员工之间的距离就会越大,当然发生误解的可能性也会越大。而且各种技术工具和实际应用都在以日新月异的速度变化,这就导致业务领导几乎不可能对 DevOps 或敏捷开发的转型方向有一个亲身的了解。
DevOps 主导者必须和管理层密切合作,在进行决策的时候给出相关的意见,以帮助他们做出正确的决策。
公司的管理层只是知道 DevOps 会对产品部署的方式进行改进,而并不了解其中的具体过程。当管理层发现你在和软件团队执行自动化部署失败时,就会想要了解这件事情的细节。如果管理层了解到进行部署的是软件团队而不是专门的发布管理团队,就可能会坚持使用传统的变更流程来保证业务的正常运作。你可能会失去团队的信任,团队也可能不愿意作出进一步的改变。
如果没有和管理层做好心理上的预期,一旦发生意外的生产事件,都会对你和管理层之间的信任造成难以消除的影响。所以,最好事先和管理层之间在各方面协调好,这会让你在后续的工作中避免很多麻烦。
对于和管理层之间的协调,这里有两条建议:
* 一是**重视所有规章制度**。如果管理层对合同、安全等各方面有任何疑问,你都可以向法务或安全负责人咨询,这样做可以避免犯下后果严重的错误。
* 二是**将管理层的重点关注的方面输出为量化指标**。举个例子,如果公司的目标是减少客户流失,而你调查得出计划外的停机是造成客户流失的主要原因,那么就可以让团队对故障的<ruby>平均检测时间<rt>Mean Time To Detection</rt></ruby>MTTD<ruby>平均解决时间<rt>Mean Time To Resolution</rt></ruby>MTTR实行重点优化。你可以使用这些关键指标来量化团队的工作成果而管理层对此也可以有一个直观的了解。
### 团队文化
DevOps 是一种专注于持续改进代码、构建、部署和操作流程的文化,而团队文化代表了团队的价值观和行为。从本质上说,团队文化是要塑造团队成员的行为方式,而这并不是一件容易的事。
我推荐一本叫做《[披着狼皮的 CIO][5]》的书。另外,研究心理学、阅读《[Drive][6]》、观看 Daniel Pink 的 [TED 演讲][7]、阅读《[千面英雄][7]》、了解每个人的心路历程,以上这些都是你推动公司技术变革所应该尝试去做的事情。
理性的人大多都按照自己的价值观工作,然而团队通常没有让每个人都能达成共识的明确价值观。因此,你需要明确团队目前的价值观,包括价值观的形成过程和价值观的目标导向。也不能将这些价值观强加到团队成员身上,只需要让团队成员在目前的硬件条件下力所能及地做到最好就可以了。
同时需要向团队成员阐明,公司正在发生组织上的变化,团队的价值观也随之改变,最好也厘清整个过程中将会作出什么变化。例如,公司以往或许是由于资金有限,一直将节约成本的原则放在首位,在研发新产品的时候,基础架构团队不得不通过共享数据库集群或服务器,从而导致了服务之间的紧密耦合。然而随着时间的推移,这种做法会产生难以维护的混乱,即使是一个小小的变化也可能造成无法预料的后果。这就导致交付团队难以执行变更控制流程,进而令变更停滞不前。
如果这种状况持续多年,最终的结果将会是毫无创新、技术老旧、问题繁多以及产品品质低下,公司的发展到达了瓶颈,原本的价值观已经不再适用。所以,工作效率的优先级必须高于节约成本。
你必须强调团队的价值观。每当团队按照价值观取得了一定的工作进展,都应该对团队作出激励。在团队部署出现失败时,鼓励他们承担风险、继续学习,同时指导团队如何改进他们的工作并表示支持。长此下来,团队成员就会对你产生信任,并逐渐切合团队的价值观。
### 团队活力
你有没有在会议上听过类似这样的话?“在张三度假回来之前,我们无法对这件事情做出评估。他是唯一一个了解代码的人”,或者是“我们完成不了这项任务,它在网络上需要跨团队合作,而防火墙管理员刚好请病假了”,又或者是“张三最清楚这个系统最好,他说是怎么样,通常就是怎么样”。那么如果团队在处理工作时,谁才是主力?就是张三。而且也一直会是他。
我们一直都认为这就是软件开发的本质。但是如果我们不作出改变,这种循环就会一直保持下去。
熵的存在会让团队自发地变得混乱和缺乏活力团队的成员和主导者的都有责任控制这个熵并保持团队的活力。DevOps、敏捷开发、上云、代码重构这些行为都会令熵增加速这是因为转型让团队需要学习更多新技能和专业知识以开展新工作。
我们来看一个产品团队重构遗留代码的例子。像往常一样,他们在 AWS 上构建新的服务。而传统的系统则在数据中心部署,并由 IT 部门进行监控和备份。IT 部门会确保在基础架构的层面上满足应用的安全需求、进行灾难恢复测试、系统补丁、安装配置了入侵检测和防病毒代理,而且 IT 部门还保留了年度审计流程所需的变更控制记录。
产品团队经常会犯一个致命的错误,就是认为 IT 部门是需要突破的瓶颈。他们希望脱离已有的 IT 部门并使用公有云,但实际上是他们忽视了 IT 部门提供的关键服务。迁移到云上只是以不同的方式实现这些关键服务,因为 AWS 也是一个数据中心,团队即使使用 AWS 也需要完成 IT 运维任务。
实际上,产品团队在迁移到云时候也必须学习如何使用这些 IT 服务。因此,当产品团队开始重构遗留的代码并部署到云上时,也需要学习大量的技能才能正常运作。这些技能不会无师自通,必须自行学习或者聘用相关的人员,团队的主导者也必须积极进行管理。
在带领团队时,我找不到任何适合我的工具,因此我建立了 [Tekata.io][9] 这个项目。Tekata 免费而且容易使用。但相比起来,把注意力集中在人员和流程上更为重要,你需要不断学习,持续关注团队的弱项,因为它们会影响团队的交付能力,而修补这些弱项往往需要学习大量的新知识,这就需要团队成员之间有一个很好的协作。因此 76% 的年轻人都认为个人发展机会是公司文化[最重要的一环][10]。
### 效果就是最好的证明
DevOps 转型会改变团队的工作方式和文化,这需要得到管理层的支持和理解。同时,工作方式的改变意味着新技术的引入,所以在管理上也必须谨慎。但转型的最终结果是团队变得更高效、成员变得更积极、产品变得更优质,客户也变得更快乐。
免责声明:本文中的内容仅为 Lee Eason 的个人立场,不代表 Ipreo 或 IHS Markit。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/tales-devops-transformation
作者:[Lee Eason][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/leeeason
[b]: https://github.com/lujun9972
[1]: https://link.springer.com/article/10.1007%2Fs00442-014-2995-6
[2]: /file/411061
[3]: https://opensource.com/sites/default/files/uploads/devops-delays.png "Improvements after DevOps transformation"
[4]: https://puppet.com/resources/whitepaper/state-of-devops-report
[5]: https://www.gartner.com/en/publications/wolf-cio
[6]: https://en.wikipedia.org/wiki/Drive:_The_Surprising_Truth_About_What_Motivates_Us
[7]: https://www.ted.com/talks/dan_pink_on_motivation?language=en#t-2094
[8]: https://en.wikipedia.org/wiki/The_Hero_with_a_Thousand_Faces
[9]: https://tekata.io/
[10]: https://www.execu-search.com/~/media/Resources/pdf/2017_Hiring_Outlook_eBook
[11]: https://allthingsopen.org/talk/tales-from-a-devops-transformation/
[12]: https://allthingsopen.org/

View File

@ -12,7 +12,7 @@
在本实验及后面的实验中,你将逐步构建你的内核。我们将会为你提供一些附加的资源。使用 Git 去获取这些资源、提交自实验 1 以来的改变(如有需要的话)、获取课程仓库的最新版本、以及在我们的实验 2 origin/lab2的基础上创建一个称为 lab2 的本地分支:
```
```c
athena% cd ~/6.828/lab
athena% add git
athena% git pull
@ -23,9 +23,11 @@ Switched to a new branch "lab2"
athena%
```
上面的 `git checkout -b` 命令其实做了两件事情:首先它创建了一个本地分支 `lab2`,它跟踪给我们提供课程内容的远程分支 `origin/lab2`;其次,它把你的 `lab` 目录的内容更改为 `lab2` 分支上存储的文件。Git 允许你在已存在的两个分支之间使用 `git checkout *branch-name*` 命令去切换,但是在你切换到另一个分支之前,你应该先提交你在当前分支上所做的任何尚未提交的变更。
现在,你需要将你在 lab1 分支中的改变合并到 lab2 分支中,命令如下:
```
```c
athena% git merge lab1
Merge made by recursive.
kern/kdebug.c | 11 +++++++++--
@ -35,6 +37,8 @@ Merge made by recursive.
athena%
```
在一些情况下Git 或许并不知道如何将你的更改与新的实验任务合并(例如,你在第二个实验任务中修改了一些本次会被变更的代码)。在那种情况下,`git merge` 会告诉你哪个文件发生了冲突,你必须首先解决冲突(通过编辑冲突的文件),然后使用 `git commit -a` 重新提交文件。
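下面是一个假设性的冲突处理过程示意(这里用 `kern/pmap.c` 代表发生冲突的文件,具体文件名请以 Git 的实际提示为准):

```
athena% git merge lab1
Auto-merging kern/pmap.c
CONFLICT (content): Merge conflict in kern/pmap.c
athena% nano kern/pmap.c        # 手动编辑,删除 <<<<<<<、=======、>>>>>>> 冲突标记
athena% git add kern/pmap.c
athena% git commit -a -m "merge lab1, resolve conflicts"
```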
实验 2 包含如下的新源代码,后面你将遍历它们:
- inc/memlayout.h
@ -53,13 +57,15 @@ athena%
在你准备进行实验和写代码之前,先添加你的 `answers-lab2.txt` 文件到 Git 仓库,提交你的改变然后去运行 `make handin`
```
```c
athena% git add answers-lab2.txt
athena% git commit -am "my answer to lab2"
[lab2 a823de9] my answer to lab2 4 files changed, 87 insertions(+), 10 deletions(-)
athena% make handin
```
正如前面所说的,我们将使用一个评级程序来分级你的解决方案,你可以在 `lab` 目录下运行 `make grade`,使用评级程序来测试你的内核。为了完成你的实验,你可以改变任何你需要的内核源代码和头文件。但毫无疑问的是,你不能以任何形式去改变或破坏评级代码。
### 第 1 部分:物理页面管理
操作系统必须跟踪物理内存页是否使用的状态。JOS 以页为最小粒度来管理 PC 的物理内存,以便于它使用 MMU 去映射和保护每个已分配的内存片段。
@ -98,6 +104,8 @@ athena% make handin
![屏幕快照 2018-09-04 11.22.20](https://ws1.sinaimg.cn/large/0069RVTdly1fuxgrc398jj30gx04bgm1.jpg)
一个 C 指针是虚拟地址的“偏移量”部分。在 `boot/boot.S` 中我们安装了一个全局描述符表GDT它通过设置所有的段基址为 0并且限制为 `0xffffffff` 来有效地禁用段转换。因此“段选择器”并不会生效,而线性地址总是等于虚拟地址的偏移量。在实验 3 中,为了设置权限级别,我们将与段有更多的交互。但是对于内存转换,我们将在整个 JOS 实验中忽略段,只专注于页转换。
回顾实验 1 中的第 3 部分,我们安装了一个简单的页表,这样内核就可以在 0xf0100000 链接的地址上运行,尽管它实际上是加载在 0x00100000 处的 ROM BIOS 的物理内存上。这个页表仅映射了 4MB 的内存。在实验中,你将要为 JOS 去设置虚拟内存布局,我们将从虚拟地址 0xf0000000 处开始扩展它,首先将物理内存扩展到 256MB并映射许多其它区域的虚拟内存。
> 练习 3
@ -165,9 +173,9 @@ JOS 分割处理器的 32 位线性地址空间为两部分:用户环境(进
你可以在 `inc/memlayout.h` 中找到一个图表,它有助于你去理解 JOS 内存布局,这在本实验和后面的实验中都会用到。
### 权限和缺页隔离
### 权限和故障隔离
由于内核和用户的内存都存在于它们各自环境的地址空间中,因此我们需要在 x86 的页表中使用权限位去允许用户代码只能访问用户所属地址空间的部分。否则的话,用户代码中的 bug 可能会覆写内核数据,导致系统崩溃或者发生各种莫名其妙的的故障;用户代码也可能会偷窥其它环境的私有数据。
由于内核和用户的内存都存在于它们各自环境的地址空间中,因此我们需要在 x86 的页表中使用权限位去允许用户代码只能访问用户所属地址空间的部分。否则,用户代码中的 bug 可能会覆写内核数据,导致系统崩溃或者发生各种莫名其妙的的故障;用户代码也可能会偷窥其它环境的私有数据。
对于 ULIM 以上部分的内存,用户环境没有任何权限,只有内核才可以读取和写入这部分内存。对于 [UTOP,ULIM] 地址范围,内核和用户都有相同的权限:它们可以读取但不能写入这个地址范围。这个地址范围是用于向用户环境暴露某些只读的内核数据结构。最后,低于 UTOP 的地址空间是为用户环境所使用的;用户环境将为访问这些内核设置权限。

View File

@ -0,0 +1,520 @@
2017 年 Linux 上最好的 9 个免费视频编辑软件
======
**概要:这里介绍 Linux 上几个最好的视频编辑器,介绍他们的特性、利与弊,以及如何在你的 Linux 发行版上安装它们。**
![Linux 上最好的视频编辑器][2]
我们曾经在短文中讨论过 [Linux 上最好的照片管理应用][3] 和 [Linux 上最好的代码编辑器][4]。今天我们将讨论 **Linux 上最好的视频编辑软件**。
当谈到免费视频编辑软件Windows Movie Maker 和 iMovie 是大部分人经常推荐的。
很不幸,上述两者在 GNU/Linux 上都不可用。但是不必担心,我们为你汇集了一个**最好的视频编辑器**清单。
## Linux 上最好的视频编辑器
接下来让我们一起看看这些最好的视频编辑软件。如果你觉得文章读起来太长,这里有一个快速摘要。你可以点击链接跳转到文章的相关章节:
| 视频编辑器 | 主要用途 | 类型 |
| --- | --- | --- |
| Kdenlive | 通用视频编辑 | 免费开源 |
| OpenShot | 通用视频编辑 | 免费开源 |
| Shotcut | 通用视频编辑 | 免费开源 |
| Flowblade | 通用视频编辑 | 免费开源 |
| Lightworks | 专业级视频编辑 | 免费增值 |
| Blender | 专业级三维编辑 | 免费开源 |
| Cinelerra | 通用视频编辑 | 免费开源 |
| DaVinci | 专业级视频处理编辑 | 免费增值 |
| VidCutter | 简单视频拆分合并 | 免费开源 |
### 1\. Kdenlive
![Kdenlive - Ubuntu 上的免费视频编辑器][5]
[Kdenlive][6] 是 [KDE][8] 上的一个免费且[开源][7]的视频编辑软件,支持双视频监控,多轨时间线,剪辑列表,支持自定义布局,基本效果,以及基本过渡。
它支持多种文件格式和多种摄像机、相机包括低分辨率摄像机Raw 和 AVI DV 编辑Mpeg2mpeg4 和 h264 AVCHD小型相机和便携式摄像机高分辨率摄像机文件包括 HDV 和 AVCHD 摄像机,专业摄像机,包括 XDCAM-HD™ 流, IMX™ (D10) 流DVCAM (D10)DVCAMDVCPRO™DVCPRO50™ 流以及 DNxHD™ 流。
如果你正寻找 Linux 上 iMovie 的替代品Kdenlive 会是你最好的选择。
#### Kdenlive 特性
* 多轨视频编辑
* 多种音视频格式支持
* 可配置的界面和快捷方式
* 使用文本或图像轻松创建切片
* 丰富的效果和过渡
* 音频和视频示波器可确保镜头绝对平衡
* 代理编辑
* 自动保存
* 广泛的硬件支持
* 关键帧效果
#### 优点
* 通用视频编辑器
* 对于那些熟悉视频编辑的人来说并不太复杂
#### 缺点
* 如果你想找的是极致简单的编辑软件,它可能还是令你有些困惑
* KDE 应用程序以臃肿而臭名昭著
#### 安装 Kdenlive
Kdenlive 适用于所有主要的 Linux 发行版。你只需在软件中心搜索即可。[Kdenlive 网站的下载部分][9]提供了各种软件包。
命令行爱好者可以通过在 Debian 和基于 Ubuntu 的 Linux 发行版中运行以下命令从终端安装它:
```
sudo apt install kdenlive
```
### 2\. OpenShot
![Openshot - ubuntu 上的免费视频编辑器][10]
[OpenShot][11] 是 Linux 上的另一个多用途视频编辑器。OpenShot 可以帮助你创建具有过渡和效果的视频。你还可以调整声音大小。当然,它支持大多数格式和编解码器。
你还可以将视频导出至 DVD上传至 YouTubeVimeoXbox 360 以及许多常见的视频格式。OpenShot 比 Kdenlive 要简单一些。因此如果你需要界面简单的视频编辑器OpenShot 是一个不错的选择。
它还有个简洁的[开始使用 Openshot][12] 文档。
#### OpenShot 特性
* 跨平台:可在 Linux、macOS 和 Windows 上使用
* 支持多种视频,音频和图像格式
* 强大的基于曲线的关键帧动画
* 桌面集成与拖放支持
* 不受限制的音视频轨道或图层
* 可剪辑调整大小,缩放,修剪,捕捉,旋转和剪切
* 视频转换可实时预览
* 合成,图像层叠和水印
* 标题模板,标题创建,子标题
* 利用图像序列支持2D动画
* 3D 动画标题和效果
* 支持 SVG 格式,可用于创建矢量标题和片尾字幕credits
* 滚动动态图片
* 帧精度(逐步浏览每一帧视频)
* 剪辑的时间映射和速度更改
* 音频混合和编辑
* 数字视频效果,包括亮度,伽玛,色调,灰度,色度键等
#### 优点
* 用于一般视频编辑需求的通用视频编辑器
* 可在 Windows 和 macOS 以及 Linux 上使用
#### 缺点
* 软件用起来可能很简单,但如果你对视频编辑非常陌生,那么肯定需要一个曲折学习的过程
* 你可能仍然没有达到专业级电影制作编辑软件的水准
#### 安装 OpenShot
OpenShot 也可以在所有主流 Linux 发行版的软件仓库中使用。你只需在软件中心搜索即可。你也可以从[官方页面][13]中获取它。
在 Debian 和基于 Ubuntu 的 Linux 发行版中,我最喜欢运行以下命令来安装它:
```
sudo apt install openshot
```
### 3\. Shotcut
![Shotcut Linux 视频编辑器][14]
[Shotcut][15] 是 Linux 上的另一个编辑器,可以和 Kdenlive 与 OpenShot 归为同一联盟。虽然它确实与上面讨论的其他两个软件有类似的功能,但 Shotcut 更先进的地方是支持4K视频。
支持许多音频,视频格式,过渡和效果是 Shotcut 的众多功能中的一部分。它也支持外部监视器。
这里有一系列视频教程让你[轻松上手 Shotcut][16]。它也可在 Windows 和 macOS 上使用,因此你也可以在其他操作系统上学习。
#### Shotcut 特性
* 跨平台,可在 Linux、macOS 和 Windows 上使用
* 支持各种视频,音频和图像格式
* 原生时间线编辑
* 混合并匹配项目中的分辨率和帧速率
* 音频滤波,混音和效果
* 视频转换和过滤
* 具有缩略图和波形的多轨时间轴
* 无限制撤消和重做播放列表编辑,包括历史记录视图
* 剪辑调整大小,缩放,修剪,捕捉,旋转和剪切
* 使用纹波选项修剪源剪辑播放器或时间轴
* 在额外系统显示/监视器上的外部监察
* 硬件支持
你可以在[这里][17]阅读它的更多特性。
#### 优点
* 用于常见视频编辑需求的通用视频编辑器
* 支持 4K 视频
* 可在 WindowsmacOS 以及 Linux 上使用
#### 缺点
* 功能太多降低了软件的易用性
#### 安装 Shotcut
Shotcut 以 [Snap][18] 格式提供。你可以在 Ubuntu 软件中心找到它。对于其他发行版,你可以从此[下载页面][19]获取可执行文件来安装。
### 4\. Flowblade
![Flowblade ubuntu 上的视频编辑器][20]
[Flowblade][21] 是 Linux 上的一个多轨非线性视频编辑器。与上面讨论的一样,这也是一个免费开源的软件。它具有时尚和现代化的用户界面。
它用 Python 编写,设计初衷是快速且准确。Flowblade 专注于在 Linux 和其他自由平台上提供最佳体验,所以它没有 Windows 和 OS X 版本。不过作为 Linux 用户的专属工具,感觉其实也不错。
你也可以查看这个不错的[文档][22]来帮助你使用它的所有功能。
#### Flowblade 特性
* 轻量级应用
* 为简单的任务提供简单的界面,如拆分,合并,覆盖等
* 大量的音视频效果和过滤器
* 支持[代理编辑][23]
* 支持拖拽
* 支持多种视频、音频和图像格式
* 批量渲染
* 水印
* 视频转换和过滤器
* 具有缩略图和波形的多轨时间轴
你可以在 [Flowblade 特性][24]里阅读关于它的更多信息。
#### 优点
* 轻量
* 适用于通用视频编辑
#### 缺点
* 不支持其他平台
#### 安装 Flowblade
Flowblade 应当在所有主流 Linux 发行版的软件仓库中都可以找到。你可以从软件中心安装它。也可以在[下载页面][25]查看更多信息。
另外,你可以在 Ubuntu 和基于 Ubuntu 的系统中使用以下命令安装 Flowblade
```
sudo apt install flowblade
```
### 5\. Lightworks
![Lightworks 运行在 ubuntu 16.04][26]
如果你在寻找一个具有更多特性的视频编辑器,这就是你想要的。[Lightworks][27] 是一个跨平台的专业视频编辑器,可以在 LinuxMac OS X 以及 Windows上使用。
它是一款屡获殊荣的专业[非线性编辑][28]NLE软件支持高达 4K 的分辨率以及 SD 和 HD 格式的视频。
Lightworks 可以在 Linux 上使用,然而它不开源。
Lightwokrs 有两个版本:
* Lightworks 免费版
* Lightworks 专业版
专业版有更多功能,比如支持更高的分辨率,支持 4K 和 蓝光视频等。
这个[页面][29]有广泛的可用文档。你也可以参考 [Lightworks 视频向导页][30]的视频。
#### Lightworks 特性
* 跨平台
* 简单直观的用户界面
* 简明的时间线编辑和修剪
* 实时可用的音频和视频FX
* 可用精彩的免版税音频和视频内容
* 适用于 4K 的 Lo-Res 代理工作流程
* 支持导出到 YouTube/Vimeo支持 SD/HD 视频,最高可达 4K
* 支持拖拽
* 各种音频和视频效果和滤镜
#### 优点
* 专业,功能丰富的视频编辑器
#### 缺点
* 免费版有使用限制
#### 安装 Lightworks
Lightworks 为 Debian 和基于 Ubuntu 的 Linux 提供了 DEB 安装包,为基于 Fedora 的 Linux 发行版提供了RPM 安装包。你可以在[下载页面][31]找到安装包。
### 6\. Blender
![Blender 运行在 Ubuntu 16.04][32]
[Blender][33] 是一个专业的,工业级的开源跨平台视频编辑器。它在制作 3D 作品的工具当中较为流行。Blender 已被用于制作多部好莱坞电影,包括蜘蛛侠系列。
虽然最初设计用于制作 3D 模型,但它也具有多种格式视频的编辑功能。
#### Blender 特性
* 实时预览,亮度波形,色度矢量显示和直方图显示
* 音频混合,同步,擦洗和波形可视化
* 最多 32 个轨道,可用于添加视频、图像、音频、场景、遮罩和效果
* 速度控制,调整图层,过渡,关键帧,过滤器等
你可以在[这里][34]阅读更多相关特性。
#### 优点
* 跨平台
* 专业级视频编辑
#### 缺点
* 复杂
* 主要用于制作 3D 动画,不专门用于常规视频编辑
#### 安装 Blender
Blender 的最新版本可以从[下载页面][35]下载。
### 7\. Cinelerra
![Cinelerra Linux 上的视频编辑器][36]
[Cinelerra][37] 从 1998 年发布以来已被下载超过 500 万次。它是 2003 年第一个在 64 位系统上提供非线性编辑的视频编辑器,当时是 Linux 用户的首选视频编辑器,但随后一些开发人员放弃了此项目,它也随之失去了光彩。
好消息是它正回到正轨并且良好地再次发展。
如果你想了解关于 Cinelerra 项目是如何开始的,这里有些[有趣的背景故事][38]。
#### Cinelerra 特性
* 非线性编辑
* 支持 HD 视频
* 内置框架渲染器
* 各种视频效果
* 不受限制的图层数量
* 拆分窗格编辑
#### 优点
* 通用视频编辑器
#### 缺点
* 不适用于新手
* 没有可用的安装包
#### 安装 Cinelerra
你可以从 [SourceForge][39] 下载源码。更多相关信息请看[下载页面][40]。
### 8\. DaVinci Resolve
![DaVinci Resolve 视频编辑器][41]
如果你想要好莱坞级别的视频编辑器,那就用好莱坞正在使用的专业工具。来自 Blackmagic 公司的 [DaVinci Resolve][42] 就是专业人士用于编辑电影和电视节目的专业工具。
DaVinci Resolve 不是常规的视频编辑器。它是一个成熟的编辑工具,在这一个应用程序中提供编辑,色彩校正和专业音频后期制作功能。
DaVinci Resolve 不开源。类似于 LightWorks它也为 Linux 提供一个免费版本。专业版售价是 $300。
#### DaVinci Resolve 特性
* 高性能播放引擎
* 支持所有类型的编辑类型,如覆盖,插入,波纹覆盖,替换,适合填充,末尾追加
* 高级修剪
* 音频叠加
* Multicam Editing 可实现实时编辑来自多个摄像机的镜头
* 过渡和过滤效果
* 速度效果
* 时间轴曲线编辑器
* VFX 的非线性编辑
#### 优点
* 跨平台
* 专业级视频编辑器
#### 缺点
* 不适用于通用视频编辑
* 不开源
* 免费版本中有些功能无法使用
#### 安装 DaVinci Resolve
你可以从[这个页面][42]下载 DaVinci Resolve。你需要注册哪怕仅仅下载免费版。
### 9\. VidCutter
![VidCutter Linux 上的视频编辑器][43]
不像这篇文章讨论的其他视频编辑器,[VidCutter][44] 非常简单。除了分割和合并视频之外,它没有其他太多功能。但有时你正好需要 VidCutter 提供的这些功能。
#### VidCutter 特性
* 适用于 Linux、Windows 和 macOS 的跨平台应用程序
* 支持绝大多数常见视频格式例如AVI、MP4、MPEG 1/2、WMV、MP3、MOV、3GP、FLV 等等
* 界面简单
* 修剪和合并视频,仅此而已
#### 优点
* 跨平台
* 很适合做简单的视频分割和合并
#### 缺点
* 不适合用于通用视频编辑
* 经常崩溃
#### 安装 VidCutter
如果你使用的是基于 Ubuntu 的 Linux 发行版,你可以使用这个官方 PPA译者注PPA 即<ruby>个人软件包档案<rt>Personal Package Archives</rt></ruby>
```
sudo add-apt-repository ppa:ozmartian/apps
sudo apt-get update
sudo apt-get install vidcutter
```
Arch Linux 用户可以轻松的使用 AUR 安装它。对于其他 Linux 发行版的用户,你可以从这个 [Github 页面][45]上查找安装文件。
### 哪个是 Linux 上最好的视频编辑软件?
文章里提到的一些视频编辑器使用 [FFmpeg][46]。你也可以自己使用 FFmpeg。它只是一个命令行工具所以我没有在上文的列表中提到但一句也不提又不公平。
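举一个最简单的例子(文件名和时间点均为假设),用 FFmpeg 从一段视频中剪出一个片段而不重新编码:

```
# 从 input.mp4 的第 10 秒开始截取 30 秒,-c copy 表示直接复制音视频流、不重新编码
ffmpeg -i input.mp4 -ss 00:00:10 -t 30 -c copy clip.mp4
```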
如果你需要一个视频编辑器来简单的剪切和拼接视频,请使用 VidCutter。
如果你需要的不止这些,**OpenShot** 或者 **Kdenlive** 是不错的选择。他们有规格标准的系统,适用于初学者。
如果你拥有一台高端计算机并且需要高级功能,可以使用 **Lightworks** 或者 **DaVinci Resolve**。如果你需要更高级的工具来制作 3D 作品,那么 **Blender** 可以满足你。
这就是关于 **Linux 上最好的视频编辑软件**的全部内容了,它们适用于 Ubuntu、Linux Mint、Elementary 以及其他 Linux 发行版。向我们分享你最喜欢的视频编辑器吧。
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-video-editing-software-linux/
作者:[It'S Foss Team][a]
译者:[fuowang](https://github.com/fuowang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/itsfoss/
[2]:https://itsfoss.com/wp-content/uploads/2016/06/best-Video-editors-Linux-800x450.png
[3]:https://itsfoss.com/linux-photo-management-software/
[4]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[5]:https://itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg
[6]:https://kdenlive.org/
[7]:https://itsfoss.com/tag/open-source/
[8]:https://www.kde.org/
[9]:https://kdenlive.org/download/
[10]:https://itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg
[11]:http://www.openshot.org/
[12]:http://www.openshot.org/user-guide/
[13]:http://www.openshot.org/download/
[14]:https://itsfoss.com/wp-content/uploads/2016/06/shotcut-video-editor-linux-800x503.jpg
[15]:https://www.shotcut.org/
[16]:https://www.shotcut.org/tutorials/
[17]:https://www.shotcut.org/features/
[18]:https://itsfoss.com/use-snap-packages-ubuntu-16-04/
[19]:https://www.shotcut.org/download/
[20]:https://itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg
[21]:http://jliljebl.github.io/flowblade/
[22]:https://jliljebl.github.io/flowblade/webhelp/help.html
[23]:https://jliljebl.github.io/flowblade/webhelp/proxy.html
[24]:https://jliljebl.github.io/flowblade/features.html
[25]:https://jliljebl.github.io/flowblade/download.html
[26]:https://itsfoss.com/wp-content/uploads/2016/06/lightworks-running-on-ubuntu-16.04.jpg
[27]:https://www.lwks.com/
[28]:https://en.wikipedia.org/wiki/Non-linear_editing_system
[29]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206&tab=4
[30]:https://www.lwks.com/videotutorials
[31]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206&tab=1
[32]:https://itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg
[33]:https://www.blender.org/
[34]:https://www.blender.org/features/
[35]:https://www.blender.org/download/
[36]:https://itsfoss.com/wp-content/uploads/2016/06/cinelerra-screenshot.jpeg
[37]:http://cinelerra.org/
[38]:http://cinelerra.org/our-story
[39]:https://sourceforge.net/projects/heroines/files/cinelerra-6-src.tar.xz/download
[40]:http://cinelerra.org/download
[41]:https://itsfoss.com/wp-content/uploads/2016/06/davinci-resolve-vdeo-editor-800x450.jpg
[42]:https://www.blackmagicdesign.com/products/davinciresolve/
[43]:https://itsfoss.com/wp-content/uploads/2016/06/vidcutter-screenshot-800x585.jpeg
[44]:https://itsfoss.com/vidcutter-video-editor-linux/
[45]:https://github.com/ozmartian/vidcutter/releases
[46]:https://www.ffmpeg.org/

View File

@ -1,130 +0,0 @@
探秘你的Linux软件包
======
你有没有想过你的 Linux 系统上安装了几千个软件包?是的,我说的是“千”。即使是相当普通的 Linux 系统,也可能安装了超过一千个软件包。有很多方法可以了解这些软件包究竟是什么的详细信息。
首先,要在基于 Debian 的发行版(如 Ubuntu上快速得到已安装的软件包数量请使用 **apt list --installed** 如下:
```
$ apt list --installed | wc -l
2067
```
这个数字实际上多了一个,因为输出中包含了 “Listing ...” 作为它的第一行。 这个命令会更准确:
```
$ apt list --installed | grep -v "^Listing" | wc -l
2066
```
要获得所有这些包的详细信息,请按以下方式浏览列表:
```
$ apt list --installed | more
Listing...
a11y-profile-manager-indicator/xenial,now 0.1.10-0ubuntu3 amd64 [installed]
account-plugin-aim/xenial,now 3.12.11-0ubuntu3 amd64 [installed]
account-plugin-facebook/xenial,xenial,now 0.12+16.04.20160126-0ubuntu1 all [installed]
account-plugin-flickr/xenial,xenial,now 0.12+16.04.20160126-0ubuntu1 all [installed]
account-plugin-google/xenial,xenial,now 0.12+16.04.20160126-0ubuntu1 all [installed]
account-plugin-jabber/xenial,now 3.12.11-0ubuntu3 amd64 [installed]
account-plugin-salut/xenial,now 3.12.11-0ubuntu3 amd64 [installed]
```
这里的细节非常多,特别是要让你的眼睛扫过全部 2000 多个软件包。它包含包名称、版本等信息,但对我们来说并不是最容易解读的显示方式。dpkg-query 可以让描述更容易理解,但这些描述会塞满你的命令窗口,除非窗口非常宽。因此,为了让本文更容易阅读,下面的数据显示已经分成了左右两侧。
左侧:
```
$ dpkg-query -l | more
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version
+++-==============================================-=================================-
ii a11y-profile-manager-indicator 0.1.10-0ubuntu3
ii account-plugin-aim 3.12.11-0ubuntu3
ii account-plugin-facebook 0.12+16.04.20160126-0ubuntu1
ii account-plugin-flickr 0.12+16.04.20160126-0ubuntu1
ii account-plugin-google 0.12+16.04.20160126-0ubuntu1
ii account-plugin-jabber 3.12.11-0ubuntu3
ii account-plugin-salut 3.12.11-0ubuntu3
ii account-plugin-twitter 0.12+16.04.20160126-0ubuntu1
rc account-plugin-windows-live 0.11+14.04.20140409.1-0ubuntu2
```
右侧:
```
Architecture Description
============-=====================================================================
amd64 Accessibility Profile Manager - Unity desktop indicator
amd64 Messaging account plugin for AIM
all GNOME Control Center account plugin for single signon - facebook
all GNOME Control Center account plugin for single signon - flickr
all GNOME Control Center account plugin for single signon
amd64 Messaging account plugin for Jabber/XMPP
amd64 Messaging account plugin for Local XMPP (Salut)
all GNOME Control Center account plugin for single signon - twitter
all GNOME Control Center account plugin for single signon - windows live
```
每行开头的 “ii” 和 “rc” 名称(见上文“左侧”)是包状态指示符。 第一个字母表示包的理想状态:
```
u -- unknown
i -- install
r -- remove/deinstall
p -- purge (remove including config files)
h -- hold
```
第二个代表包的当前状态:
```
n -- not-installed
i -- installed
c -- config-files (only the config files are installed)
U -- unpacked
F -- half-configured (the configuration failed for some reason)
h -- half-installed (installation failed for some reason)
W -- triggers-awaited (the package is waiting for a trigger from another package)
t -- triggers-pending (the package has been triggered)
```
在通常的双字符字段末尾添加的 “R” 表示需要重新安装。 你可能永远不会碰到这些。
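另外,如果你觉得 dpkg-query -l 的默认输出太宽,也可以用它的自定义输出格式只打印你关心的字段,比如下面这个只列出包名和版本的示例:

```
$ dpkg-query -W -f='${Package}  ${Version}\n' | head
```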
快速查看整体包状态的一种简单方法是计算在不同状态中包含的包的数量:
```
$ dpkg-query -l | tail -n +6 | awk '{print $1}' | sort | uniq -c
2066 ii
134 rc
```
我从上面的 dpkg-query 输出中排除了前五行,因为这些是标题行,会混淆输出。
这两行基本上告诉我们,在这个系统上,应该安装了 2066 个软件包,而 134 个其他的软件包已被删除,但已经留下了配置文件。 你始终可以使用以下命令删除程序包的剩余配置文件:
```
$ sudo dpkg --purge xfont-mathml
```
请注意,如果程序包二进制文件和配置文件都已经安装了,则上面的命令将两者都删除。
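如果这类只剩下配置文件的软件包很多,也可以批量清除。下面是一个示例草稿(执行前请先单独运行管道的前半部分,确认要清除的列表):

```
$ dpkg-query -l | awk '/^rc/ {print $2}' | xargs -r sudo dpkg --purge
```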
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3242808/linux/peeking-into-your-linux-packages.html
作者:[Sandra Henry-Stocker][a]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/

View File

@ -0,0 +1,250 @@
在 Linux 上使用 Lutris 管理你的游戏
======
![](https://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-720x340.jpg)
让我们用游戏开始 2018 的第一天吧!今天我们要讨论的是 **Lutris**,一个 Linux 上的开源游戏平台。你可以使用 Lutris 安装、移除、配置、启动和管理你的游戏。它可以在一个界面中帮你管理你的 Linux 游戏、Windows 游戏、模拟器主机游戏和浏览器游戏。它还包含社区编写的安装脚本,使得游戏的安装过程更加简单。
Lutris 可以自动安装(或者让你一键安装)超过 20 个模拟器,覆盖了从七十年代到现在的大多数游戏系统。目前支持的游戏系统如下:
* Native Linux
* Windows
* Steam (Linux and Windows)
* MS-DOS
* 街机
* Amiga 电脑
* Atari 8 和 16 位计算机和控制器
* 浏览器 (Flash 或者 HTML5 游戏)
* Commodore 8 位计算机
* 基于 SCUMM 的游戏和其他点击冒险游戏
* Magnavox Odyssey², Videopac+
* Mattel Intellivision
* NEC PC-Engine Turbographx 16, Supergraphx, PC-FX
* Nintendo NES, SNES, Game Boy, Game Boy Advance, DS
* Game Cube and Wii
* Sega Master System、Game Gear、Genesis、Dreamcast
* SNK Neo Geo, Neo Geo Pocket
* Sony PlayStation
* Sony PlayStation 2
* Sony PSP
* 像 Zork 这样的 Z-Machine 游戏
* 还有更多
### 安装 Lutris
就像 Steam 一样Lutris 包含两部分:网站和客户端程序。在网站上你可以浏览可用的游戏,把喜欢的游戏添加到个人库,以及使用安装链接安装它们。
首先,我们还是来安装客户端。它目前支持 Arch Linux、Debian、Fedora、Gentoo、openSUSE 和 Ubuntu。
对于 Arch Linux 和它的衍生版本,像是 Antergos, Manjaro Linux都可以在 [**AUR**][1] 中找到。因此,你可以使用 AUR 帮助程序安装它。
使用 [**Pacaur**][2]:
```
pacaur -S lutris
```
使用 **[Packer][3]** :
```
packer -S lutris
```
使用 [**Yaourt**][4]:
```
yaourt -S lutris
```
使用 [**Yay**][5]:
```
yay -S lutris
```
**Debian:**
**Debian 9.0** 上以 **root** 身份运行以下命令:
```
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_9.0/ /' > /etc/apt/sources.list.d/lutris.list
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_9.0/Release.key -O Release.key
apt-key add - < Release.key
apt-get update
apt-get install lutris
```
**Debian 8.0** 上以 **root** 身份运行以下命令:
```
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_8.0/ /' > /etc/apt/sources.list.d/lutris.list
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_8.0/Release.key -O Release.key
apt-key add - < Release.key
apt-get update
apt-get install lutris
```
**Fedora 27** 上以 **root** 身份运行以下命令:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_27/home:strycore.repo
dnf install lutris
```
**Fedora 26** 上以 **root** 身份运行以下命令:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_26/home:strycore.repo
dnf install lutris
```
**openSUSE Tumbleweed** 上以 **root** 身份运行以下命令:
```
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Tumbleweed/home:strycore.repo
zypper refresh
zypper install lutris
```
**openSUSE Leap 42.3** 上以 **root** 身份运行以下命令:
```
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Leap_42.3/home:strycore.repo
zypper refresh
zypper install lutris
```
**Ubuntu 17.10**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.10/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.10/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
**Ubuntu 17.04**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.04/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.04/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
**Ubuntu 16.10**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.10/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.10/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
**Ubuntu 16.04**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.04/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.04/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
对于其他平台,参考 [**Lutris 下载链接**][6].
### 使用 Lutris 管理你的游戏
安装完成后,从菜单或者应用启动器里打开 Lutris。首次启动时Lutris 的默认界面像下面这样:
[![][7]][8]
**登录你的 Lutris.net 账号**
为了能同步你个人库中的游戏,下一步你需要在客户端中登录你的 Lutris.net 账号。如果你没有,先 [**注册一个新的账号**][9]。然后点击 **“连接到你的 Lutris.net 账号同步你的库”**,连接到 Lutris 客户端。
输入你的账号信息然后点击 **继续**
[![][7]][10]
现在你已经连接到你的 Lutris.net 账号了。
[![][7]][11]
**浏览游戏**
点击工具栏里的浏览图标(游戏控制器图标)可以搜索任何游戏。它会自动定向到 Lutris 网站的游戏页。你可以按字母顺序查看所有可用的游戏。Lutris 现在已经有了很多游戏,而且还有更多的不断添加进来。
[![][7]][12]
任选一个游戏,添加到你的库中。
[![][7]][13]
然后返回到你的 Lutris 客户端,点击 **菜单 -> Lutris -> 同步库**。现在你可以在本地的 Lutris 客户端中看到所有在库中的游戏了。
[![][7]][14]
如果你没有看到游戏,只需要重启一次。
**安装游戏**
安装游戏,只需要点击游戏,然后点击 **安装** 按钮。例如,我想在我的系统安装 [**2048**][15],就像你在底下的截图中看到的,它要求我选择一个版本去安装。因为它只有一个版本(例如,在线),它就会自动选择这个版本。点击 **继续**
[![][7]][16]
点击安装:
[![][7]][17]
安装完成之后,你可以启动新安装的游戏或是关闭这个窗口,继续从你的库中安装其他游戏。
**导入 Steam 库**
你也可以导入你的 Steam 库。在你的头像处点击 **“通过 Steam 登录”** 按钮。接下来你将被重定向到 Steam输入你的账号信息。填写正确后你的 Steam 账号将被连接到 Lutris 账号。请注意,为了同步库中的游戏,这里你的 Steam 账号将被设为公开。你可以在同步完成之后将其重新设为私密状态。
**手动添加游戏**
Lutris 也有手动添加游戏的选项。在工具栏中点击 + 号即可。
[![][7]][18]
在下一个窗口,输入游戏名,在游戏信息栏选择一个运行器。运行器是指 Linux 上类似 Wine、Steam 之类的程序,它们可以帮助你启动这个游戏。你可以从 菜单 -> 管理运行器 中安装运行器。
[![][7]][19]
然后在下一栏中选择可执行文件或者 ISO。最后点击保存。有一个好消息是你可以添加一个游戏的多个版本。
**移除游戏**
要移除任何已安装的游戏,只需在 Lutris 客户端的本地库中点击对应的游戏,选择 **移除** 然后 **应用**。
[![][7]][20]
Lutris 用起来就像 Steam你在网站上向自己的库中添加游戏然后在客户端中安装它们。
各位,这就是今天所有的内容了。我们将会在今年发表更多好的和有用的文章。敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/manage-games-using-lutris-linux/
作者:[SK][a]
译者:[dianbanjiu](https://github.com/dianbanjiu)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://aur.archlinux.org/packages/lutris/
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
[4]:https://www.ostechnix.com/install-yaourt-arch-linux/
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]:https://lutris.net/downloads/
[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-1.png ()
[9]:https://lutris.net/user/register/
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-2.png ()
[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-3.png ()
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-15-1.png ()
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-16.png ()
[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-6.png ()
[15]:https://www.ostechnix.com/let-us-play-2048-game-terminal/
[16]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-12.png ()
[17]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-13.png ()
[18]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-18-1.png ()
[19]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-19.png ()
[20]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-14-1.png ()

View File

@ -0,0 +1,134 @@
Python 数据科学入门
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_open_data_520x292.jpg?itok=R8rBrlk7)
无论你是一个具有数学或计算机科学背景的数据科学爱好者,还是一个其它领域的专家,数据科学提供的可能性都在你力所能及的范围内,而且你不需要昂贵的,高度专业化的企业软件。本文中讨论的开源工具就是你入门时所需的全部内容。
[Python][1],其机器学习和数据科学库([pandas][2]、[Keras][3]、[TensorFlow][4]、[scikit-learn][5]、[SciPy][6]、[NumPy][7] 等),以及大量可视化库([Matplotlib][8]、[pyplot][9]、[Plotly][10] 等)对于初学者和专家来说都是优秀的 FOSS译注全称为 Free and Open Source Software即自由开源软件工具。它们易于学习很受欢迎且受到社区支持并拥有为数据科学开发的最新技术和算法。它们是你在开始学习时可以获得的最佳工具集之一。
许多 Python 库都是建立在彼此之上的(称为依赖项),其基础是 [NumPy][7] 库。NumPy 专门为数据科学设计,经常用于在其 ndarray 数据类型中存储数据集的相关部分。ndarray 是一种方便的数据类型,用于将关系表中的记录存储为 `cvs` 文件或其它任何格式,反之亦然。将 scikit 功能应用于多维数组时它特别方便。SQL 非常适合查询数据库,但是对于执行复杂和资源密集型的数据科学操作,在 ndarray 中存储数据可以提高效率和速度(确保在处理大量数据集时有足够的 RAM。当你使用 pandas 进行知识提取和分析时pandas 中的 DataFrame 数据类型和 NumPy 中的 ndarray 之间的无缝转换分别为提取和计算密集型操作创建了一个强大的组合。
为了快速演示,让我们启动 Python shell并在 pandas DataFrame 变量中加载来自巴尔的摩Baltimore的犯罪统计开放数据集然后查看加载后的 DataFrame 的一部分:
```
>>>  import pandas as pd
>>>  crime_stats = pd.read_csv('BPD_Arrests.csv')
>>>  crime_stats.head()
```
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/crime_stats_chart.jpg?itok=_rPXJYHz)
我们现在可以在这个 pandas DataFrame 上执行大多数查询就像我们可以在数据库中使用 SQL。例如要获取 "Description"属性的所有唯一值SQL 查询是:
```
$ SELECT unique(“Description”) from crime_stats;
```
利用 pandas DataFrame 编写相同的查询如下所示:
```
>>>  crime_stats['Description'].unique()
['COMMON   ASSAULT'   'LARCENY'   'ROBBERY   - STREET'   'AGG.   ASSAULT'
'LARCENY   FROM   AUTO'   'HOMICIDE'   'BURGLARY'   'AUTO   THEFT'
'ROBBERY   - RESIDENCE'   'ROBBERY   - COMMERCIAL'   'ROBBERY   - CARJACKING'
'ASSAULT   BY  THREAT'   'SHOOTING'   'RAPE'   'ARSON']
```
它返回的是一个 NumPy 数组ndarray 类型):
```
>>>  type(crime_stats['Description'].unique())
<class 'numpy.ndarray'>
```
接下来让我们将这些数据输入神经网络,看看它能多准确地预测使用的武器类型,给出的数据包括犯罪事件,犯罪类型以及发生的地点:
```
>>>  from sklearn.neural_network import MLPClassifier
>>>  import numpy as np
>>>
>>>  prediction = crime_stats[['Weapon']]
>>>  predictors = crime_stats[['CrimeTime', 'CrimeCode', 'Neighborhood']]
>>>
>>>  nn_model = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5,2), random_state=1)
>>>
>>>predict_weapon = nn_model.fit(prediction, predictors)
```
现在学习模型准备就绪,我们可以执行一些测试来确定其质量和可靠性。对于初学者,让我们输入一个训练集数据(用于训练模型的原始数据集的一部分,不包括在创建模型中):
```
>>>  predict_weapon.predict(training_set_weapons)
array([4, 4, 4, ..., 0, 4, 4])
```
如你所见,它返回一个列表,每个数字预测训练集中每个记录的武器。我们之所以看到的是数字而不是武器名称,是因为大多数分类算法都是用数字优化的。对于分类数据,有一些技术可以将属性转换为数字表示。在这种情况下,使用的技术是 Label Encoder使用 sklearn 预处理库中的 LabelEncoder 函数:`preprocessing.LabelEncoder()`。它能够对一个数据和其对应的数值表示来进行变换和逆变换。在这个例子中,我们可以使用 LabelEncoder() 的 `inverse_transform` 函数来查看武器 0 和 4 是什么:
```
>>>  preprocessing.LabelEncoder().inverse_transform(encoded_weapons)
array(['HANDS',   'FIREARM',   'HANDS',   ..., 'FIREARM',   'FIREARM',   'FIREARM']
```
这很有趣,但为了了解这个模型的准确程度,我们将几个分数计算为百分比:
```
>>>  nn_model.score(X, y)
0.81999999999999995
```
这表明我们的神经网络模型准确度约为 82%。这个结果似乎令人印象深刻,但用于不同的犯罪数据集时,检查其有效性非常重要。还有其它测试可以做到这一点,如相关性分析、混淆矩阵等。尽管我们的模型有很高的准确率,但它对于一般犯罪数据集并不是非常有用,因为这个特定数据集中列出 FIREARM 作为所用武器的行数不成比例地多。除非重新训练,否则我们的分类器最有可能预测 FIREARM即使输入数据集有不同的分布。
在对数据进行分类之前清洗数据并删除异常值和畸形数据非常重要。预处理越好,我们的见解准确性就越高。此外,为模型或分类器提供过多数据(通常超过 90%)以获得更高的准确度是一个坏主意,因为它看起来准确但由于[过度拟合][11]而无效。
[Jupyter notebooks][12] 相对于命令行来说是一个很好的交互式替代品。虽然 CLI 对大多数事情都很好但是当你想要运行代码片段以生成可视化时Jupyter 会很出色。它比终端更好地格式化数据。
[这篇文章][13] 列出了一些最好的机器学习免费资源,但是还有很多其它的指导和教程。根据你的兴趣和爱好,你还会发现许多开放数据集可供使用。作为起点,由 [Kaggle][14] 维护的数据集,以及在州政府网站上提供的数据集是极好的资源。
Payal Singh 将出席今年 3 月 8 日至 11 日在 California加利福尼亚的 Pasadena帕萨迪纳举行的 SCaLE16x。要参加并获得 50% 的门票优惠,[注册][15]使用优惠码**OSDC**。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/getting-started-data-science
作者:[Payal Singh][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/payalsingh
[1]:https://www.python.org/
[2]:https://pandas.pydata.org/
[3]:https://keras.io/
[4]:https://www.tensorflow.org/
[5]:http://scikit-learn.org/stable/
[6]:https://www.scipy.org/
[7]:http://www.numpy.org/
[8]:https://matplotlib.org/
[9]:https://matplotlib.org/api/pyplot_api.html
[10]:https://plot.ly/
[11]:https://www.kdnuggets.com/2014/06/cardinal-sin-data-mining-data-science.html
[12]:http://jupyter.org/
[13]:https://machinelearningmastery.com/best-machine-learning-resources-for-getting-started/
[14]:https://www.kaggle.com/
[15]:https://register.socallinuxexpo.org/reg6/

View File

@ -0,0 +1,167 @@
在 Linux 上使用 systemd 设置定时器
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/clock-650753_1920.jpg?itok=RiRyCbAP)
之前,我们看到了如何[手动的][1]、[在开机与关机时][2]、[在启用某个设备时][3]、[在文件系统发生改变时][4]启用与禁用 systemd 服务。
定时器增加了另一种启动服务的方式,基于...时间。尽管与定时任务很相似,但 systemd 定时器稍微地灵活一些。让我们看看它是怎么工作的。
### “定时运行”
让我们以[本系列前两篇文章][2]中[你所设置的][1] [Minetest][5] 服务器为例,来看看如何使用定时器单元。如果你还没有读过那几篇文章,可以现在去看看。
你将通过创建一个定时器来改进 Minetest 服务器,使得在定时器启动 1 分钟后运行游戏服务器而不是立即运行。这样做的原因可能是,在启动之前可能会用到其他的服务,例如发邮件给其他玩家告诉他们游戏已经准备就绪,你要确保其他的服务(例如网络)在开始前完全启动并运行。
直奔主题,你的 `_minetest.timer_` 单元看起来就像这样:
```
# minetest.timer
[Unit]
Description=Runs the minetest.service 1 minute after boot up
[Timer]
OnBootSec=1 m
Unit=minetest.service
[Install]
WantedBy=basic.target
```
一点也不难吧。
通常,开头是 `[Unit]` 和一段描述单元作用的信息,这儿没什么新东西。`[Timer]` 这一节是新出现的,但它的作用不言自明:它包含了何时启动服务,启动哪个服务的信息。在这个例子当中,`OnBootSec` 是告诉 systemd 在系统启动后运行服务的指令。
其他的指令有:
* `OnActiveSec=`,告诉 systemd 在定时器启动后多长时间运行服务。
* `OnStartupSec=`,同样的,它告诉 systemd 在 systemd 进程启动后多长时间运行服务。
* `OnUnitActiveSec=`,告诉 systemd 在上次由定时器激活的服务启动后多长时间运行服务(这个列表之后给出了一个使用该指令的示例草稿)。
* `OnUnitInactiveSec=`,告诉 systemd 在上次由定时器激活的服务停用后多长时间运行服务。
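作为示意,下面是一个假设性的例子:用一个名为 `mybackup.timer` 的定时器,让(假设已经存在的)`mybackup.service` 在启动 5 分钟后运行,此后每隔 30 分钟再次运行,单元名和时间间隔都只是举例:

```
# 写入定时器单元文件(路径与内容均为示例)
sudo tee /etc/systemd/system/mybackup.timer > /dev/null <<'EOF'
[Unit]
Description=Run mybackup.service periodically

[Timer]
OnBootSec=5 min
OnUnitActiveSec=30 min
Unit=mybackup.service

[Install]
WantedBy=basic.target
EOF

# 让 systemd 重新加载配置并启用这个定时器
sudo systemctl daemon-reload
sudo systemctl enable --now mybackup.timer
```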
继续 `_minetest.timer_` 单元,`basic.target` 通常用作<ruby>后期引导服务<rt>late boot services</rt></ruby><ruby>同步点<rt>synchronization point</rt></ruby>。这就意味着它可以让 `_minetest.timer_` 单元运行在安装完<ruby>本地挂载点<rt>local mount points</rt></ruby>或交换设备,套接字、定时器、路径单元和其他基本的初始化进程之后。就像在[第二篇文章中 systemd 单元][2]里解释的那样,`_targets_` 就像<ruby>旧的运行等级<rt>old run levels</rt></ruby>,可以将你的计算机置于某个状态,或像这样告诉你的服务在达到某个状态后开始运行。
在前两篇文章中你配置的`_minetest.service_`文件[最终][2]看起来就像这样:
```
# minetest.service
[Unit]
Description= Minetest server
Documentation= https://wiki.minetest.net/Main_Page
[Service]
Type= simple
User=
ExecStart= /usr/games/minetest --server
ExecStartPost= /home//bin/mtsendmail.sh "Ready to rumble?" "Minetest Starting up"
TimeoutStopSec= 180
ExecStop= /home//bin/mtsendmail.sh "Off to bed. Nightie night!" "Minetest Stopping in 2 minutes"
ExecStop= /bin/sleep 120
ExecStop= /bin/kill -2 $MAINPID
[Install]
WantedBy= multi-user.target
```
这儿没什么需要修改的。但是你需要将 `_mtsendmail.sh_`(发送你的 email 的脚本)从:
```
#!/bin/bash
# mtsendmail
sleep 20
echo $1 | mutt -F /home/<username>/.muttrc -s "$2" my_minetest@mailing_list.com
sleep 10
```
改成:
```
#!/bin/bash
# mtsendmail.sh
echo $1 | mutt -F /home/paul/.muttrc -s "$2" pbrown@mykolab.com
```
你所做的事情是去掉 Bash 脚本中那些蹩脚的停顿,现在改由 systemd 来完成等待。
### 让它运行起来
确保一切运作正常,禁用 `_minetest.service_`
```
sudo systemctl disable minetest
```
这使得系统启动时它不会一同启动;然后,相反地,启用 `_minetest.timer_`
```
sudo systemctl enable minetest.timer
```
现在你就可以重启服务器了,当运行`sudo journalctl -u minetest.*`后,你就会看到 `_minetest.timer_` 单元执行后大约一分钟,`_minetest.service_` 单元开始运行。
![minetest timer][7]
图 1minetest.timer 运行大约 1 分钟后 minetest.service 开始运行
[经许可使用][8]
### 时间的问题
`_minetest.timer_` 在 systemd 的日志里显示的启动时间为 09:08:33 而 `_minetest.service` 启动时间是 09:09:18它们之间少于 1 分钟,关于这件事有几点需要说明一下:首先,请记住我们说过 `OnBootSec=` 指令是从引导完成后开始计算服务启动的时间。当 `_minetest.timer_` 的时间到来时,引导已经在几秒之前完成了。
另一件事情是 systemd 给自己设置了一个<ruby>误差幅度<rt>margin of error</rt></ruby>(默认是 1 分钟)来运行东西。这有助于在多个<ruby>资源密集型进程<rt>resource-intensive processes</rt></ruby>同时运行时分配负载:通过分配 1 分钟的时间systemd 可以等待某些进程关闭。这也意味着 `_minetest.service_`会在引导完成后的 1~2 分钟之间启动。但精确的时间谁也不知道。
作为记录,你可以用 `AccuracySec=` 指令[修改误差幅度][9]。
你也可以检查系统上所有的定时器何时运行或是上次运行的时间:
```
systemctl list-timers --all
```
![check timer][11]
图 2检查定时器何时运行或上次运行的时间
[经许可使用][8]
最后一件值得思考的事就是你应该用怎样的格式去表示一段时间。Systemd 在这方面非常灵活:`2 h``2 hours` 或 `2hr` 都可以用来表示 2 个小时。对于“秒”,你可以用 `seconds``second``sec` 和 `s`。“分”也是同样的方式:`minutes``minute``min` 和 `m`。你可以检查 `man systemd.time` 来查看 systemd 能够理解的所有时间单元。
### 下一次
下次你会看到如何使用日历中的日期和时间来定期运行服务,以及如何通过组合定时器与设备单元在插入某些硬件时运行服务。
回头见!
在 Linux 基金会和 edx 上通过免费课程 [“Introduction to Linux”][12] 学习更多关于 Linux 的知识。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/7/setting-timer-systemd-linux
作者:[Paul Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[LuuMing](https://github.com/LuuMing)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.linux.com/blog/learn/intro-to-linux/2018/5/writing-systemd-services-fun-and-profit
[2]:https://www.linux.com/blog/learn/2018/5/systemd-services-beyond-starting-and-stopping
[3]:https://www.linux.com/blog/intro-to-linux/2018/6/systemd-services-reacting-change
[4]:https://www.linux.com/blog/learn/intro-to-linux/2018/6/systemd-services-monitoring-files-and-directories
[5]:https://www.minetest.net/
[6]:/files/images/minetest-timer-1png
[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minetest-timer-1.png?itok=TG0xJvYM (minetest timer)
[8]:/licenses/category/used-permission
[9]:https://www.freedesktop.org/software/systemd/man/systemd.timer.html#AccuracySec=
[10]:/files/images/minetest-timer-2png
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minetest-timer-2.png?itok=pYxyVx8- (check timer)
[12]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,143 @@
如何移除或禁用 Ubuntu Dock
======
![](https://1.bp.blogspot.com/-pClnjEJfPQc/W21nHNzU2DI/AAAAAAAABV0/HGXuQOYGzokyrGYQtRFeF_hT3_3BKHupQCLcBGAs/s640/ubuntu-dock.png)
**如果你想用其它 dock例如 Plank dock或面板来替换 Ubuntu 18.04 中的 Dock或者你想要移除或禁用 Ubuntu Dock本文会告诉你如何做。**
Ubuntu Dock - 屏幕左侧栏,可用于固定应用程序或访问已安装的应用程序。使用默认的 Ubuntu 会话时,[无法][1]使用 Gnome Tweaks 禁用它。如果你需要,还是有几种方法来摆脱它的。下面我将列出 4 种方法可以移除或禁用 Ubuntu Dock以及每个方法的缺点如果有的话还有如何撤销每个方法的更改。本文还包括在没有 Ubuntu Dock 的情况下访问多任务视图和已安装应用程序列表的其它方法。
to 校正Activities Overview 在本文翻译为多任务视图,如有不妥,请改正)
### 如何在没有 Ubuntu Dock 的情况下访问多任务试图
如果没有 Ubuntu Dock除非你换用其他的 Dock例如 Plank Dock否则你可能无法方便地访问多任务视图或已安装的应用程序列表后者原本可以通过单击 Ubuntu Dock 底部的“显示应用程序”按钮访问)。
显然,如果你安装了 Dash to Panel 扩展来使用它而不是 Ubuntu Dock那么情况并非如此。因为 Dash to Panel 提供了一个按钮来访问多任务视图或已安装的应用程序。
根据你计划使用的 Dock 而不是 Ubuntu Dock如果无法访问多任务视图那么你可以启用 Activities Overview Hot Corner 选项,只需将鼠标移动到屏幕的左上角即可打开 Activities。访问已安装的应用程序列表的另一种方法是使用快捷键`Super + A`。
如果要启用 Activities Overview hot corner使用以下命令
```
gsettings set org.gnome.shell enable-hot-corners true
```
如果以后要撤销此操作并禁用 hot corners那么你需要使用以下命令
```
gsettings set org.gnome.shell enable-hot-corners false
```
你可以使用 Gnome Tweaks 应用程序(该选项位于 Gnome Tweaks 的 `Top Bar` 部分)启用或禁用 Activities Overview Hot Corner 选项,可以使用以下命令进行安装:
```
sudo apt install gnome-tweaks
```
### 如何移除或禁用 Ubuntu Dock
下面你将找到 4 种摆脱 Ubuntu Dock 的方法,环境在 Ubuntu 18.04 下。
**方法 1: 移除 Gnome Shell Ubuntu Dock 包。**
摆脱 Ubuntu Dock 的最简单方法就是删除包。
这将会从你的系统中完全移除 Ubuntu Dock 扩展,但同时也移除了 `ubuntu-desktop` 元数据包。如果你移除 `ubuntu-desktop` 元数据包,不会马上出现问题,因为它本身没有任何作用。`ubuntu-meta` 包依赖于组成 Ubuntu 桌面的大量包。它的依赖关系不会被删除,也不会被破坏。问题是如果你以后想升级到新的 Ubuntu 版本,那么将不会安装任何新的 `ubuntu-desktop` 依赖项。
为了解决这个问题,你可以在升级到较新的 Ubuntu 版本之前安装 `ubuntu-desktop` 元包(例如,如果你想从 Ubuntu 18.04 升级到 18.10)。
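一个假设性的检查方法:在升级之前先确认 `ubuntu-desktop` 元包是否还在,若不在就补装回来:

```
# 若 dpkg 查询不到 ubuntu-desktop则重新安装它示例
dpkg -s ubuntu-desktop > /dev/null 2>&1 || sudo apt install ubuntu-desktop
```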
如果你对此没有意见,并且想要从系统中删除 Ubuntu Dock 扩展包,使用以下命令:
```
sudo apt remove gnome-shell-extension-ubuntu-dock
```
如果以后要撤消更改,只需使用以下命令安装扩展:
```
sudo apt install gnome-shell-extension-ubuntu-dock
```
或者重新安装 `ubuntu-desktop` 元数据包(这将会安装你可能已删除的任何 ubuntu-desktop 依赖项,包括 Ubuntu Dock你可以使用以下命令
```
sudo apt install ubuntu-desktop
```
**方法 2安装并使用 vanilla Gnome 会话而不是默认的 Ubuntu 会话。**
摆脱 Ubuntu Dock 的另一种方法是安装和使用 vanilla Gnome 会话。安装 vanilla Gnome 会话还将安装此会话所依赖的其它软件包,如 Gnome 文档,地图,音乐,联系人,照片,跟踪器等。
通过安装 vanilla Gnome 会话,你还将获得默认 Gnome GDM 登录和锁定屏幕主题,而不是 Ubuntu 默认值,另外还有 Adwaita Gtk 主题和图标。你可以使用 Gnome Tweaks 应用程序轻松更改 Gtk 和图标主题。
此外,默认情况下将禁用 AppIndicators 扩展(因此使用 AppIndicators 托盘的应用程序不会显示在顶部面板上),但你可以使用 Gnome Tweaks 启用此功能(在扩展中,启用 Ubuntu appindicators 扩展)。
同样,你也可以从 vanilla Gnome 会话启用或禁用 Ubuntu Dock这在 Ubuntu 会话中是不可能的(使用 Ubuntu 会话时无法从 Gnome Tweaks 禁用 Ubuntu Dock
如果你不想安装 vanilla Gnome 会话所需的这些额外软件包,那么这个移除 Ubuntu Dock 的方法不适合你,请查看其它方法。
如果你对此没有意见,以下是你需要做的事情。要在 Ubuntu 中安装普通的 Gnome 会话,使用以下命令:
```
sudo apt install vanilla-gnome-desktop
```
安装完成后,重启系统。在登录屏幕上,单击用户名,单击 `Sign in` 按钮旁边的齿轮图标,然后选择 `GNOME` 而不是 `Ubuntu`,之后继续登录。
![](https://4.bp.blogspot.com/-mc-6H2MZ0VY/W21i_PIJ3pI/AAAAAAAABVo/96UvmRM1QJsbS2so1K8teMhsu7SdYh9zwCLcBGAs/s640/vanilla-gnome-session-ubuntu-login-screen.png)
如果要撤销此操作并移除 vanilla Gnome 会话,可以使用以下命令清除 vanilla Gnome 软件包,然后删除它安装的依赖项(第二条命令):
```
sudo apt purge vanilla-gnome-desktop
sudo apt autoremove
```
然后重新启动,并以相同的方式从 GDM 登录屏幕中选择 Ubuntu。
**方法 3从桌面上永久隐藏 Ubuntu Dock而不是将其移除。**
如果你希望永久隐藏 Ubuntu Dock不让它显示在桌面上但不移除它或使用 vanilla Gnome 会话,你可以使用 Dconf 编辑器轻松完成此操作。这样做的缺点是 Ubuntu Dock 仍然会使用一些系统资源,即使你没有在桌面上使用它,但你也可以轻松恢复它而无需安装或移除任何包。
Ubuntu Dock 只对你的桌面隐藏当你进入叠加模式Activities你仍然可以看到并从那里使用 Ubuntu Dock。
要永久隐藏 Ubuntu Dock使用 Dconf 编辑器导航到 `/org/gnome/shell/extensions/dash-to-dock` 并禁用以下选项(将它们设置为 false`autohide`, `dock-fixed``intellihide`
如果你愿意,可以从命令行实现此目的,运行以下命令:
```
gsettings set org.gnome.shell.extensions.dash-to-dock autohide false
gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed false
gsettings set org.gnome.shell.extensions.dash-to-dock intellihide false
```
如果你改变主意了并想撤销此操作,你可以使用 Dconf 编辑器在 `/org/gnome/shell/extensions/dash-to-dock` 中重新启用 `autohide`、`dock-fixed` 和 `intellihide`(将它们设置为 true或者你可以使用以下这些命令
```
gsettings set org.gnome.shell.extensions.dash-to-dock autohide true
gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed true
gsettings set org.gnome.shell.extensions.dash-to-dock intellihide true
```
**方法 4使用 Dash to Panel 扩展。**
[Dash to Panel][2] 是 Gnome Shell 的一个高度可配置面板,是 Ubuntu Dock 或 Dash to Dock 的一个很好的替代品Ubuntu Dock 是从 Dash to Dock 克隆而来的)。安装和启动 Dash to Panel 扩展会禁用 Ubuntu Dock因此你无需执行其它任何操作。
你可以从 [extensions.gnome.org][3] 来安装 Dash to Panel。
如果你改变主意并希望重新使用 Ubuntu Dock那么你可以使用 Gnome Tweaks 应用程序禁用 Dash to Panel或者通过单击以下网址旁边的 X 按钮完全移除 Dash to Panel: https://extensions.gnome.org/local/。
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/08/how-to-remove-or-disable-ubuntu-dock.html
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://bugs.launchpad.net/ubuntu/+source/gnome-tweak-tool/+bug/1713020
[2]:https://www.linuxuprising.com/2018/05/gnome-shell-dash-to-panel-v14-brings.html
[3]:https://extensions.gnome.org/extension/1160/dash-to-panel/

View File

@ -0,0 +1,110 @@
顶级 Linux 开发者推荐的编程书籍
======
毫无疑问Linux 是由那些拥有深厚计算机知识背景而且才华横溢的程序员发明的。让那些大名鼎鼎的 Linux 程序员向今日的开发者分享一些曾经带领他们登堂入室的好书和技术参考吧,你会不会也读过其中几本呢?
Linux 毫无争议地属于 21 世纪的操作系统。虽然 Linus Torvalds 在建立开源社区这件事上做了很多工作并主导了社区决策,不过那些网络专家和开发者愿意接受 Linux 的原因,还是因为它卓越的代码质量和高可用性。Torvalds 是个编程天才,同时必须承认,他还得到了很多其他同样极具才华的开发者的无私帮助。
就此我咨询了Torvalds 和其他一些顶级Linux开发者有哪些书籍帮助他们走上了成为顶级开发者的道路下面请听我一一道来。
### 熠熠生辉的 C语言
Linux 是在 20 世纪 90 年代初开发出来的,与它一起问世的还有其他一些完成基础功能的开源软件。与此相应,那时的开发者使用的工具和语言反映了那个时代的印记。可能 [C 语言不再流行了][1],可对于很多已经建功立业的开发者来说C 语言是他们第一个在实际开发中使用的语言,这一点也在他们推选的对他们有着深远影响的书单中反映出来。
Torvalds 说“你不应该再选用我那个时代使用的语言或者开发方式”他的开发道路始于BASIC然后转向机器码“甚至都不是汇编语言而是真真正正的二进制机器码”他解释道再然后转向汇编语言和 C 语言。
“任何人都不应该再从这些语言开始进入开发这条路了”,他补充道。“这些语言中的一些今天已经没有什么意义(如 BASIC 和机器语言)。尽管 C 还是一个主流语言,我也不推荐你从它开始你的开发工作”。
并不是他不喜欢 C。不管怎样Linux 是用[<ruby>C语言<rt>GNU C</rt></ruby>][2]写就的。“我始终认为 C 是一个伟大的语言,它有着非常简单的语法,对于很多方向的开发都很合适,但是我怀疑你会挫折重重,从你的第一个'Hello World'程序开始到你真正能开发出能用的东西当中有很大一步要走”。他认为,如果用现在的标准,如果作为现在的入门语言的话,从 C语言开始的代价太大。
在他那个时代Torvalds 的唯一选择的书就只能是Brian W. Kernighan 和Dennis M. Ritchie 合著的[<ruby>C 编程语言<rt>C Programming Language, 2nd Edition</rt></ruby>][3]在编程圈内也被尊称为K&R。“这本书简单精炼但是你要先有编程的背景才能欣赏它”。Torvalds 说到。
Torvalds 并不是唯一一个推荐 K&R 的开源开发者。以下几位也同样引用了这本他们认为值得推荐的书他们是Linux 和 Oracle 虚拟化开发副总裁 Wim Coekaerts、Linux 开发者 Alan Cox、Google 云 CTO Brian Stevens、Canonical 技术运营部副总裁 Pete Graner。
如果你今日还想同 C 语言较量一番的话Jeremy AllisonSamba 的共同发起人,推荐[<ruby>21世纪的 C 语言<rt>21st Century C: C Tips from the New School</rt></ruby>][4]。他还建议,同时也去阅读一本比较旧但是写的更详细的[<ruby>C专家编程<rt>Expert C Programming: Deep C Secrets</rt></ruby>][5]和有着20年历史的[<ruby>UNIX POSIX多线程编程<rt>Programming with POSIX Threads</rt></ruby>][6]。
### 如果不选C 语言, 那选什么?
Linux 开发者推荐的书籍自然都是他们认为适合今时今日的开发项目的语言工具。这也折射了开发者自身的个人偏好。例如, Allison认为年轻的开发者应该在[<ruby>Go 编程语言<rt>The Go Programming Language </rt></ruby>][7]和[<ruby>Rust 编程<rt>Rust with Programming Rust</rt></ruby>][8]的帮助下去学习 Go 语言和 Rust 语言。
但是超越编程语言来考虑问题也不无道理(尽管这些书传授了你编程技巧)。今日要做些有意义的开发工作的话,"要从那些已经完成了99%显而易见工作的框架开始,然后你就能围绕着它开始写脚本了" Torvalds 推荐了这种做法。
“坦率来说,语言本身远远没有围绕着它的基础架构重要”,他继续道,“可能你会从 Java 或者Kotlin 开始,但那是因为你想为自己的手机开发一个应用,因此安卓 SDK 成为了最佳的选择,又或者,你对游戏开发感兴趣,你选择了一个游戏开发引擎来开始,而通常它们有着自己的脚本语言”。
这里提及的基础架构包括那些和操作系统本身相关的编程书籍。
Graner 在读完了大名鼎鼎的 K&R 后,又拜读了 W. Richard Stevens 的[<ruby>Unix 网络编程<rt>Unix: Network Programming</rt></ruby>][10]。特别的是Stevens 的[<ruby>TCP/IP 详解卷1协议<rt>TCP/IP Illustrated, Volume 1: The Protocols</rt></ruby>][11]在出版 30 年之后仍然被认为是必读书。因为 Linux 开发很大程度上[和网络基础架构有关][12]Graner 也推荐了很多 OReilly 的书,包括 [Sendmail][13]、[Bash][14]、[DNS][15] 以及 [IMAP/POP][16]。
Coekaerts也是Maurice Bach的[<ruby>UNIX操作系统设计<rt>The Design of the Unix Operation System</rt></ruby>][17]的书迷之一。James Bottomley 也是这本书的推崇者,作为一个 Linux 内核开发者,当 Linux 刚刚问世时James就用Bach 的这本书所传授的知识将它研究了个底朝天。
### 软件设计知识永不过时
尽管这样说有点太局限在技术领域。Stevens 还是说到,“所有的开发者都应该在开始钻研语法前先研究如何设计,[<ruby>日常物品的设计<rt>The Design of Everyday Things</rt></ruby>][18]是我的最爱”。
Coekaerts 喜欢Kernighan 和 Rob Pike合著的[<ruby>程序设计实践<rt>The Practic of Programming</rt></ruby>][19]。这本关于设计实践的书当 Coekaerts 还在学校念书的时候还未出版,他说道,“但是我把它推荐给每一个人”。
不管何时,当你问一个长期认真对待开发工作的开发者他最喜欢的计算机书籍时,你迟早会听到一个名字和一本书:
Donald Knuth 和他所著的[<ruby>计算机程序设计艺术1-4A<rt>The Art of Computer Programming, Volumes 1-4A</rt></ruby>][20]。Dirk HohndelVMware 首席开源官)认为这本书尽管有永恒的价值,但他也承认它“今时今日并非极其有用”。(译注:不代表译者观点)
### 读代码。大量的读。
编程书籍能教会你很多,也请别错过另外一个在开源社区特有的学习机会:[<ruby>如何阅读代码<rt>Code Reading: The Open Source Perspective</rt></ruby>][21]。那里有不可计数的代码例子阐述如何解决编程问题(以及如何让你陷入麻烦...。Stevens 说,谈到磨炼编程技巧,在他的书单里排名第一的“书”是 Unix 的源代码。
"也请不要忽略从他人身上学习的各种机会。", Cox道“我是在一个计算机俱乐部里和其他人一起学的 BASIC在我看来这仍然是一个学习的最好办法”他从[<ruby>精通 ZX81机器码<rt>Mastering machine code on your ZX81</rt></ruby>][22]这本书和 Honeywell L66 B 编译器手册里学习到了如何编写机器码,但是学习技术这点来说,单纯阅读和与其他开发者在工作中共同学习仍然有着很大的不同。
Cox 说,“我始终认为最好的学习方法是和一群人一起试图去解决你们共同关心的一些问题并从中找到快乐这和你是5岁还是55岁无关”。
最让我吃惊的是这些顶级 Linux 开发者都是在非常底层级别开始他们的开发之旅的,甚至不是从汇编语言或 C 语言,而是从机器码开始开发。毫无疑问,这对帮助开发者理解计算机在非常微观的底层级别是怎么工作的起了非常大的作用。
那么现在你准备好尝试一下硬核 Linux 开发了吗Greg Kroah-Hartman这位 Linux 内核稳定分支的维护者,推荐了 Steve Oualline 的[<ruby>实用 C 语言编程<rt>Practical C Programming</rt></ruby>][23]和 Samuel Harbison 与 Guy Steele 合著的[<ruby>C 语言参考手册<rt>C: A Reference Manual</rt></ruby>][24]。接下来请阅读“[<ruby>如何进行 Linux 内核开发<rt>HOWTO do Linux kernel development</rt></ruby>][25]”,到这时,就像 Kroah-Hartman 所说,你已经准备好启程了。
于此同时,还请你刻苦学习并大量编码,最后祝你在跟随顶级 Linux 开发者脚步的道路上好运相随。
--------------------------------------------------------------------------------
via: https://www.hpe.com/us/en/insights/articles/top-linux-developers-recommended-programming-books-1808.html
作者:[Steven Vaughan-Nichols][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[DavidChenLiang](https://github.com/DavidChenLiang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.hpe.com/us/en/insights/contributors/steven-j-vaughan-nichols.html
[1]:https://www.codingdojo.com/blog/7-most-in-demand-programming-languages-of-2018/
[2]:https://www.gnu.org/software/gnu-c-manual/
[3]:https://amzn.to/2nhyjEO
[4]:https://amzn.to/2vsL8k9
[5]:https://amzn.to/2KBbWn9
[6]:https://amzn.to/2M0rfeR
[7]:https://amzn.to/2nhyrnMe
[8]:http://shop.oreilly.com/product/0636920040385.do
[9]:https://www.hpe.com/us/en/resources/storage/containers-for-dummies.html?jumpid=in_510384402_linuxbooks_containerebook0818
[10]:https://amzn.to/2MfpbyC
[11]:https://amzn.to/2MpgrTn
[12]:https://www.hpe.com/us/en/insights/articles/how-to-see-whats-going-on-with-your-linux-system-right-now-1807.html
[13]:http://shop.oreilly.com/product/9780596510299.do
[14]:http://shop.oreilly.com/product/9780596009656.do
[15]:http://shop.oreilly.com/product/9780596100575.do
[16]:http://shop.oreilly.com/product/9780596000127.do
[17]:https://amzn.to/2vsCJgF
[18]:https://amzn.to/2APzt3Z
[19]:https://www.amazon.com/Practice-Programming-Addison-Wesley-Professional-Computing/dp/020161586X/ref=as_li_ss_tl?ie=UTF8&linkCode=sl1&tag=thegroovycorpora&linkId=e6bbdb1ca2182487069bf9089fc8107e&language=en_US
[20]:https://amzn.to/2OknFsJ
[21]:https://amzn.to/2M4VVL3
[22]:https://amzn.to/2OjccJA
[23]:http://shop.oreilly.com/product/9781565923065.do
[24]:https://amzn.to/2OjzgrT
[25]:https://www.kernel.org/doc/html/v4.16/process/howto.html

View File

@ -1,59 +0,0 @@
6 个托管你 git 仓库的地方
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL)
也许你是少数几个没有注意到的人之一:就在几周前,[微软收购了 GitHub][1],两家公司达成了共识。微软在近些年已经变成了开源的有力支持者,而 GitHub 从成立起,就已经成为了许多开源项目的实际代码库。
然而,最近的购买可能会带给你一些烦躁。毕竟公司的收购让你意识到了你的开源代码放在了一个商业平台上。可能你现在还没准备好迁移到其他的平台上去,但是至少这可以给你提供一些可选项。让我们找找网上现在都有哪些可用的平台。
### 选择之一: GitHub
严格来说,这是一个合格的选项。[GitHub][2] 历史上没有什么糟糕的失败,而且微软最近也确实发展了不少开源项目。把你的项目继续放在 GitHub 上,继续保持观望没有什么不可以。它现在依然是最大的软件开发的网络社区,同时还有许多对于问题追踪、代码复查、持续集成、通用的代码管理很有用的工具。而且它还是基于 Git 的Git 是每个人都喜欢的开源版本控制系统。你的代码还是你的代码。
### 选择之二: GitLab
[GitLab][3] 是代码库平台主要的竞争者。它是完全开源的。你可以像在 GitHub 一样把你的代码托管在 GitLab但你也可以选择在你自己的服务器上自行托管你自己的 GitLab 实例,并完全控制谁可以访问那里的所有内容以及如何访问、管理。 GitLab 与 GitHub 功能几乎相同,有些人甚至可能会说它的持续集成和测试工具更优越。尽管 GitLab 上的开发者社区肯定比 GitHub 上的开发者社区要小,但它仍然没有什么可以被指责的。你可能会在那里的人群中找到更多志同道合的开发者。
### 选择之三: Bitbucket
[Bitbucket][4] 已经存在很多年了。在某些方面,它可以作为 GitHub 未来的一面镜子。 Bitbucket 八年前被一家大公司Atlassian收购并且已经经历了一些转换过程。 它仍然是一个像 GitHub 这样的商业平台,但它远不是一个创业公司,而且从组织上说它的基础相当稳定。 Bitbucket 分享了 GitHub 和 GitLab 上的大部分功能,以及它自己的一些新功能,如对 [Mercurial][5] 存储库的本机支持。
### 选择之四: SourceForge
[SourceForge][6] 是开源代码库的鼻祖。如果你曾经有一个开源项目Sourceforge 是一个托管你的代码和向他人分享你的发行版的地方。迁移到 Git 进行版本控制需要一段时间它有自己的商业收购和重新组构的事件以及一些开源项目的一些不幸的捆绑决策。也就是说SourceForge 从那时起似乎已经恢复,该网站仍然是一个有着不少开源项目的地方。 然而,很多人仍然感到有点受伤,而且有些人并不是各种尝试通过平台货币化的忠实粉丝,所以一定要睁大眼睛。
### 选择之五: 自己管理
如果你想自己掌握自己项目的命运(出了问题也只能怪你自己),那么一切都由自己来做,对你来说可能是最佳的选择,无论对于大项目还是小项目。Git 是开源的,所以自己托管也很容易。如果你需要问题追踪和代码审查,你可以运行一个 GitLab 或者 [Phabricator][7] 的实例。对于持续集成,你可以设置自己的 [Jenkins][8] 自动化服务的实例。是的,你需要对自己的基础架构开销和相关的安全要求负责。但是,这个设置过程并不是很困难。所以如果你不想自己的代码被其他人的平台所吞没,这就是一种很好的方法。
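举一个最小的示例(服务器地址和路径均为假设):自行托管一个 Git 仓库,只需要在自己的服务器上建一个裸仓库,再把它添加为远程仓库即可:

```
# 在你自己的服务器上创建一个裸仓库(路径仅作示例)
ssh user@git.example.com 'git init --bare /srv/git/myproject.git'

# 在本地仓库中添加这个远程仓库并推送
git remote add origin ssh://user@git.example.com/srv/git/myproject.git
git push -u origin master
```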
### 选择之六:以上全部
以下是所有这些的美妙之处:尽管这些平台上有一些专有的选项,但它们仍然建立在坚实的开源技术之上。 而且不仅仅是开源,而是明确设计为分布在大型网络(如互联网)上的多个节点上。 你不需要只使用一个。 你可以使用一对......或者全部。 使用 GitLab 将你自己的设置作为保证的基础,并在 GitHub 和 Bitbucket 上安装克隆存储库,以进行问题跟踪和持续集成。 将你的主代码库保留在 GitHub 上,但是为了你自己的想法,可以在 GitLab 上安装“备份”克隆。
关键在于你的选择是什么。我们能有这么多选择,都是得益于那些非常有用的项目上的开源协议。未来一片光明。
当然,在这个列表中我肯定忽略了一些开源平台。你是否使用了很多的平台?哪个是你最喜欢的?你都可以在这里说出来!
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/github-alternatives
作者:[Jason van Gumster][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[dianbanjiu](https://github.com/dianbanjiu)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mairin
[1]: https://www.theverge.com/2018/6/4/17422788/microsoft-github-acquisition-official-deal
[2]: https://github.com/
[3]: https://gitlab.com
[4]: https://bitbucket.org
[5]: https://www.mercurial-scm.org/wiki/Repository
[6]: https://sourceforge.net
[7]: https://phacility.com/phabricator/
[8]: https://jenkins.io

View File

@ -1,177 +0,0 @@
为什么 Linux 用户应该尝试 Rust
======
![](https://images.idgesg.net/images/article/2018/09/rust-rusted-metal-100773678-large.jpg)
Rust 是一种相当年轻和现代的编程语言,它具有许多功能,所以非常灵活而且非常安全。数据显示它正在变得非常受欢迎,连续三年在 Stack Overflow 开发者调查中获得“最受喜爱的编程语言”的第一名:[2016][1]、[2017][2] 和 [2018][3]。
Rust 也是一种开源语言它具有一系列特性使得它可以适应许多不同的编程项目。它最初源于 2006 年 Mozilla 一位员工的个人项目,几年后2009 年)被 Mozilla 接手作为特别项目,然后在 2010 年宣布供公众使用。
Rust 程序运行速度极快可防止段错误并保证线程安全。这些属性使该语言极大地吸引了专注于应用程序安全性的开发人员。Rust 也是一种非常易读的语言,可用于从简单程序到非常大而复杂的项目。
Rust 优点:
* 内存安全Rust 不会受到悬空指针、缓冲区溢出或其他与内存相关的错误的影响,它不需要垃圾回收即可提供内存安全。
* 通用Rust 是适用于任何类型编程的语言。
* 快速Rust 在性能上与 C/C++ 相当,但具有更好的安全特性。
* 高效Rust 是为了便于并发编程而构建的。
* 面向项目Rust 具有内置的依赖管理和构建系统 Cargo。
* 得到很好的支持Rust 有一个令人印象深刻的[支持社区][4]。
Rust 还强制执行 RAII<ruby>资源获取即初始化<rt>Resource Acquisition Is Initialization</rt></ruby>)。这意味着当一个对象超出作用域时,将调用其析构函数并释放其资源,从而提供防止资源泄漏的保护。它提供了功能抽象和一个伟大的[类型系统][5],以及速度和数学上的健全性。
简而言之Rust 是一种令人印象深刻的系统编程语言,具有其它大多数语言所缺乏的功能,使其成为 C、C++ 和 Objective-C 等多年来一直被使用的语言的有力竞争者。
### 安装 Rust
安装Rust是一个相当简单的过程。
```
$ curl https://sh.rustup.rs -sSf | sh
```
安装 Rust 后,可以使用 `which rustc` 查看安装位置,并使用 `rustc --version` 显示版本信息。
```
$ which rustc
rustc 1.27.2 (58cc626de 2018-07-18)
$ rustc --version
rustc 1.27.2 (58cc626de 2018-07-18)
```
### Rust入门
即使是最简单的 Rust 代码,写法也与你之前使用过的语言完全不同。
```
$ cat hello.rs
fn main() {
// Print a greeting
println!("Hello, world!");
}
```
在这几行中,我们定义了一个函数 `main`,添加了一条描述该函数的注释,并使用 `println!` 语句来产生输出。你可以使用下面显示的命令编译并运行这样的程序。
```
$ rustc hello.rs
$ ./hello
Hello, world!
```
你可以创建一个“项目”(通常仅用于比这个更复杂的程序!)来保持代码的有序性。
```
$ mkdir ~/projects
$ cd ~/projects
$ mkdir hello_world
$ cd hello_world
```
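顺带一提,用前面的 rustup 脚本安装工具链时通常会一并装上 Cargo本文稍后还会提到它你也可以用它来创建并运行项目。下面是一个示例草稿项目名仅作举例

```
$ cargo new hello_world
$ cd hello_world
$ cargo run
```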
请注意,即使是简单的程序,一旦编译,就会变成相当大的可执行文件。
```
$ ./hello
Hello, world!
$ ls -l hello*
-rwxrwxr-x 1 shs shs 5486784 Sep 23 19:02 hello <== executable
-rw-rw-r-- 1 shs shs 68 Sep 23 15:25 hello.rs
```
当然,这只是一个开始,一个传统的“Hello, world!”程序。Rust 语言具有一系列功能,可帮助你快速进阶到高级编程技能。
### 学习 Rust
![rust programming language book cover][6]
No Starch Press
Steve Klabnik 和 Carol Nichols 合著的《Rust Programming Language》2018是学习 Rust 的最佳途径之一。这本书由核心开发团队的两名成员撰写,可从 [No Starch Press][7] 出版社获得纸质书,或者从 [rust-lang.org][8] 获得电子书。它已经成为 Rust 开发者社区中的参考书。
在所涉及的众多主题中,你将了解这些高级主题:
* 所有权和借用
* 安全保障
* 测试和错误处理
* 智能指针和多线程
* 高级模式匹配
* 使用Cargo内置包管理器
* 使用Rust的高级编译器
#### 目录
```
前言Nicholas Matsakis和Aaron Turon编写
致谢
介绍
第1章新手入门
第2章猜谜游戏
第3章通用编程概念
第4章了解所有权
第五章:结构
第6章枚举和模式匹配
第7章模块
第8章常见集合
第9章错误处理
第10章通用类型特征和生命周期
第11章测试
第12章输入/输出项目
第13章迭代器和闭包
第14章关于 Cargo 和 Crates.io 的更多信息
第15章智能指针
第16章并发
第17章Rust面向对象
第18章模式
第19章关于生命周期的更多信息
第20章高级类型系统功能
附录A关键字
附录B运算符和符号
附录C可衍生的特征
附录D
索引
```
《[Rust 编程语言][7]》将带你从基本安装和语言语法走向复杂的主题例如模块、错误处理、crate与其他语言中的“library”或“package”同义、模块允许你在 crate 内部对代码进行划分)、生命周期等等。
可能最重要的是,本书可以让您从基本的编程技巧转向构建和编译复杂,安全且非常有用的程序。
### 结束
如果你已经准备好用一种非常值得花时间和精力学习并且越来越受欢迎的语言进行一些严肃的编程那么Rust是一个不错的选择
加入 [Facebook][9] 和 [LinkedIn][10] 上的 Network World 社区,就最重要的话题发表评论。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3308162/linux/why-you-should-try-rust.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[way-ww](https://github.com/way-ww)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]: https://insights.stackoverflow.com/survey/2016#technology-most-loved-dreaded-and-wanted
[2]: https://insights.stackoverflow.com/survey/2017#technology-most-loved-dreaded-and-wanted-languages
[3]: https://insights.stackoverflow.com/survey/2018#technology-most-loved-dreaded-and-wanted-languages
[4]: https://www.rust-lang.org/en-US/community.html
[5]: https://doc.rust-lang.org/reference/type-system.html
[6]: https://images.idgesg.net/images/article/2018/09/rust-programming-language_book-cover-100773679-small.jpg
[7]: https://nostarch.com/Rust
[8]: https://doc.rust-lang.org/book/2018-edition/index.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,524 @@
实验 3用户环境
======
### 实验 3用户环境
#### 简介
在本实验中,你将要实现一个基本的内核功能,要求它能够保护运行的用户模式环境(即:进程)。你将去增强这个 JOS 内核,去配置数据结构以便于保持对用户环境的跟踪、创建一个单一用户环境、将程序镜像加载到用户环境中、并将它启动运行。你也要写出一些 JOS 内核的函数,用来处理任何用户环境生成的系统调用,以及处理由用户环境引进的各种异常。
**注意:** 在本实验中,术语**_“环境”_** 和**_“进程”_** 是可互换的 —— 它们都表示同一个抽象概念,那就是允许你去运行的程序。我在介绍中使用术语**“环境”**而不是使用传统术语**“进程”**的目的是为了强调一点,那就是 JOS 的环境和 UNIX 的进程提供了不同的接口,并且它们的语义也不相同。
##### 预备知识
使用 Git 去提交你自实验 2 以后的更改(如果有的话),获取课程仓库的最新版本,以及创建一个命名为 `lab3` 的本地分支,指向到我们的 lab3 分支上 `origin/lab3`
```
athena% cd ~/6.828/lab
athena% add git
athena% git commit -am 'changes to lab2 after handin'
Created commit 734fab7: changes to lab2 after handin
4 files changed, 42 insertions(+), 9 deletions(-)
athena% git pull
Already up-to-date.
athena% git checkout -b lab3 origin/lab3
Branch lab3 set up to track remote branch refs/remotes/origin/lab3.
Switched to a new branch "lab3"
athena% git merge lab2
Merge made by recursive.
kern/pmap.c | 42 +++++++++++++++++++
1 files changed, 42 insertions(+), 0 deletions(-)
athena%
```
实验 3 包含一些你将探索的新源文件:
```c
inc/ env.h Public definitions for user-mode environments
trap.h Public definitions for trap handling
syscall.h Public definitions for system calls from user environments to the kernel
lib.h Public definitions for the user-mode support library
kern/ env.h Kernel-private definitions for user-mode environments
env.c Kernel code implementing user-mode environments
trap.h Kernel-private trap handling definitions
trap.c Trap handling code
trapentry.S Assembly-language trap handler entry-points
syscall.h Kernel-private definitions for system call handling
syscall.c System call implementation code
lib/ Makefrag Makefile fragment to build user-mode library, obj/lib/libjos.a
entry.S Assembly-language entry-point for user environments
libmain.c User-mode library setup code called from entry.S
syscall.c User-mode system call stub functions
console.c User-mode implementations of putchar and getchar, providing console I/O
exit.c User-mode implementation of exit
panic.c User-mode implementation of panic
user/ * Various test programs to check kernel lab 3 code
```
另外,一些在实验 2 中的源文件在实验 3 中将被修改。如果想去查看有什么更改,可以运行:
```
$ git diff lab2
```
你也可以另外去看一下 [实验工具指南][1],它包含了与本实验有关的调试用户代码方面的信息。
##### 实验要求
本实验分为两部分Part A 和 Part B。Part A 在本实验完成后一周内提交;你将要提交你的更改和完成的动手实验,在提交之前要确保你的代码通过了 Part A 的所有检查(如果你的代码未通过 Part B 的检查也可以提交)。只需要在第二周提交 Part B 的期限之前代码检查通过即可。
和实验 2 一样,你需要完成实验中描述的所有常规练习,并且至少完成一个挑战问题(挑战是针对整个实验而言,不是每个部分)。写出实验中所提问题的详细答案,以及一到两个段落的关于你如何解决所选挑战问题的描述,并将它们放在一个名为 `answers-lab3.txt` 的文件中,将这个文件放在你的 `lab` 目录的根目录下。(如果你做了多个挑战问题,你仅需要在提交中描述其中一个即可)不要忘记使用 `git add answers-lab3.txt` 提交这个文件。
##### 行内汇编语言
在本实验中,你可能会发现 GCC 的行内汇编语言特性很有用,虽然不使用它也可以完成实验。但至少你需要能够理解那些已经存在于我们提供给你的源代码中的行内汇编语言片段(“`asm`” 语句)。你可以在课程[参考资料][2]的页面上找到 GCC 行内汇编语言有关的信息。
#### Part A用户环境和异常处理
新文件 `inc/env.h` 中包含了在 JOS 中关于用户环境的基本定义。现在就去阅读它。内核使用数据结构 `Env` 去保持对每个用户环境的跟踪。在本实验的开始,你将只创建一个环境,但你需要去设计 JOS 内核支持多环境;实验 4 将带来这个高级特性,允许用户环境去 `fork` 其它环境。
正如你在 `kern/env.c` 中所看到的,内核维护了与环境相关的三个全局变量:
```
struct Env *envs = NULL; // All environments
struct Env *curenv = NULL; // The current env
static struct Env *env_free_list; // Free environment list
```
一旦 JOS 启动并运行,`envs` 指针指向到一个数组,即数据结构 `Env`它保存了系统中全部的环境。在我们的设计中JOS 内核将同时支持最大值为 `NENV` 个的活动的环境,虽然在一般情况下,任何给定时刻运行的环境很少。(`NENV` 是在 `inc/env.h` 中用 `#define` 定义的一个常量)一旦它被分配,对于每个 `NENV` 可能的环境,`envs` 数组将包含一个数据结构 `Env` 的单个实例。
JOS 内核在 `env_free_list` 上用数据结构 `Env` 保存了所有不活动的环境。这样的设计使得环境的分配和回收很容易,因为这只不过是添加或删除空闲列表的问题而已。
内核使用符号 `curenv` 来保持对任意给定时刻的 _当前正在运行的环境_ 进行跟踪。在系统引导期间,在第一个环境运行之前,`curenv` 被初始化为 `NULL`
##### 环境状态
数据结构 `Env` 被定义在文件 `inc/env.h` 中,内容如下:(在后面的实验中将添加更多的字段):
```c
struct Env {
struct Trapframe env_tf; // Saved registers
struct Env *env_link; // Next free Env
envid_t env_id; // Unique environment identifier
envid_t env_parent_id; // env_id of this env's parent
enum EnvType env_type; // Indicates special system environments
unsigned env_status; // Status of the environment
uint32_t env_runs; // Number of times environment has run
// Address space
pde_t *env_pgdir; // Kernel virtual address of page dir
};
```
以下是数据结构 `Env` 中的字段简介:
* **env_tf**
这个结构定义在 `inc/trap.h` 中,它用于在那个环境不运行时保持它保存在寄存器中的值,即:当内核或一个不同的环境在运行时。当从用户模式切换到内核模式时,内核将保存这些东西,以便于那个环境能够在稍后重新运行时回到中断运行的地方。
* **env_link**
这是一个链接,它链接到在 `env_free_list` 上的下一个 `Env` 上。`env_free_list` 指向到列表上第一个空闲的环境。
* **env_id**
内核在数据结构 `Env` 中保存了一个唯一标识当前环境的值(即:使用数组 `envs` 中的特定槽位)。在一个用户环境终止之后,内核可能给另外的环境重新分配相同的数据结构 `Env` —— 但是新的环境将有一个与已终止的旧的环境不同的 `env_id`,即便是新的环境在数组 `envs` 中复用了同一个槽位。
* **env_parent_id**
内核使用它来保存创建这个环境的父级环境的 `env_id`。通过这种方式,环境就可以形成一个“家族树”,这对于做出“哪个环境可以对谁做什么”这样的安全决策非常有用。
* **env_type**
它用于去区分特定的环境。对于大多数环境,它将是 `ENV_TYPE_USER` 的。在稍后的实验中,针对特定的系统服务环境,我们将引入更多的几种类型。
* **env_status**
这个变量持有以下几个值之一:
* `ENV_FREE`
表示那个 `Env` 结构是非活动的,并且因此它还在 `env_free_list` 上。
* `ENV_RUNNABLE`:
表示那个 `Env` 结构所代表的环境正等待被调度到处理器上去运行。
* `ENV_RUNNING`:
表示那个 `Env` 结构所代表的环境当前正在运行中。
* `ENV_NOT_RUNNABLE`:
表示那个 `Env` 结构所代表的是一个当前活动的环境但不是当前准备去运行的例如因为它正在因为一个来自其它环境的进程间通讯IPC而处于等待状态。
* `ENV_DYING`
表示那个 `Env` 结构所表示的是一个僵尸环境。一个僵尸环境将在下一次被内核捕获后被释放。我们在实验 4 之前不会去使用这个标志。
* **env_pgdir**
这个变量持有这个环境的页目录的内核虚拟地址。
就像一个 Unix 进程一样,一个 JOS 环境耦合了“线程”和“地址空间”的概念。线程主要由保存的寄存器来定义(`env_tf` 字段),而地址空间由页目录和 `env_pgdir` 所指向的页表所定义。为运行一个环境,内核必须使用保存的寄存器值和相关的地址空间去设置 CPU。
我们的 `struct Env` 与 xv6 中的 `struct proc` 类似。它们都在一个 `Trapframe` 结构中持有环境(即进程)的用户模式寄存器状态。在 JOS 中,单个的环境并不能像 xv6 中的进程那样拥有它们自己的内核栈。在这里,内核中任意时间只能有一个 JOS 环境处于活动中因此JOS 仅需要一个单个的内核栈。
##### 为环境分配数组
在实验 2 的 `mem_init()` 中,你为数组 `pages[]` 分配了内存,它是内核用于对页面分配与否的状态进行跟踪的一个表。你现在将需要去修改 `mem_init()`,以便于后面使用它分配一个与结构 `Env` 类似的数组,这个数组被称为 `envs`
```markdown
练习 1、修改 `kern/pmap.c` 中的 `mem_init()`,分配并映射 `envs` 数组。这个数组由 `NENV``Env` 结构的实例组成,分配方式与你分配 `pages` 数组的方式类似。与 `pages` 数组一样,存放 `envs` 数组的内存也应该在 `UENVS`(它的定义在 `inc/memlayout.h` 文件中)处被映射为用户只读,以便用户进程能够读取这个数组。
```
你应该去运行你的代码,并确保 `check_kern_pgdir()` 是没有问题的。
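下面给出 `mem_init()` 中这部分改动的一种可能写法,仅作思路示意,并非标准答案;它假设沿用实验 2 中提供的 `boot_alloc()`、`boot_map_region()`、`PADDR()` 等接口,具体的权限位和映射大小请以 `inc/memlayout.h` 中的注释为准:
```c
// 示意代码(非标准答案):在 mem_init() 中分配并映射 envs 数组

// 1. 分配 NENV 个 struct Env并清零
envs = (struct Env *) boot_alloc(NENV * sizeof(struct Env));
memset(envs, 0, NENV * sizeof(struct Env));

// 2. 把存放 envs 的物理内存映射到 UENVS 处,
//    映射为用户只读(内核仍可通过其内核虚拟地址读写)
boot_map_region(kern_pgdir, UENVS, PTSIZE, PADDR(envs), PTE_U);
```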
##### 创建和运行环境
现在,你将在 `kern/env.c` 中写一些必需的代码去运行一个用户环境。因为我们并没有做一个文件系统因此我们将设置内核去加载一个嵌入到内核中的静态的二进制镜像。JOS 内核以一个 ELF 可运行镜像的方式将这个二进制镜像嵌入到内核中。
在实验 3 中,`GNUmakefile` 将在 `obj/user/` 目录中生成一些二进制镜像。如果你查看 `kern/Makefrag`,你会注意到一些"魔法",它们把这些二进制文件像 `.o` 文件一样直接"链接"进内核可执行文件中。链接器命令行上的 `-b binary` 选项使这些文件被当作"原生的"、未经解释的二进制文件来链接,而不是由编译器产生的普通 `.o` 文件。(就链接器而言,这些文件完全不必是 ELF 镜像文件 —— 它们可以是任何东西,比如,一个文本文件或图片!)如果你在内核构建之后查看 `obj/kern/kernel.sym`,你会注意到链接器"魔法般地"生成了一些命名很有趣的符号,比如 `_binary_obj_user_hello_start`、`_binary_obj_user_hello_end` 以及 `_binary_obj_user_hello_size`。链接器通过改编二进制文件的文件名来生成这些符号;这些符号为普通的内核代码提供了一种引用嵌入的二进制文件的方法。
`kern/init.c``i386_init()` 中,你将写一些代码在环境中运行这些二进制镜像中的一种。但是,设置用户环境的关键函数还没有实现;将需要你去完成它们。
```markdown
练习 2、在文件 `env.c` 中,写完以下函数的代码:
* `env_init()`
初始化 `envs` 数组中所有的 `Env` 结构,然后把它们添加到 `env_free_list` 中。它还会调用 `env_init_percpu`,后者通过配置分段硬件,为权限级别 0内核和权限级别 3用户分别使用单独的段。
* `env_setup_vm()`
为一个新环境分配一个页目录,并初始化新环境的地址空间的内核部分。
* `region_alloc()`
为一个新环境分配和映射物理内存
* `load_icode()`
你将需要去解析一个 ELF 二进制镜像,就像引导加载器那样,然后加载它的内容到一个新环境的用户地址空间中。
* `env_create()`
使用 `env_alloc` 去分配一个环境,并调用 `load_icode` 去加载一个 ELF 二进制
* `env_run()`
在用户模式中开始运行一个给定的环境
在你写这些函数时,你可能会发现新的 cprintf 动词 `%e` 非常有用 -- 它可以输出一个错误代码的相关描述。比如:
r = -E_NO_MEM;
panic("env_alloc: %e", r);
将会 panic 并输出消息 "env_alloc: out of memory"。
```
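作为参考,下面是其中 `env_init()` 的一种可能思路的示意(非标准答案)。注意空闲链表的顺序:`envs[0]` 应该位于链表头部,因此这里采用倒序遍历:
```c
// 示意代码(非标准答案):初始化 envs 数组并构建空闲链表
void
env_init(void)
{
	int i;

	env_free_list = NULL;
	// 倒序遍历,保证 env_free_list 的第一个元素是 envs[0]
	for (i = NENV - 1; i >= 0; i--) {
		envs[i].env_id = 0;
		envs[i].env_status = ENV_FREE;
		envs[i].env_link = env_free_list;
		env_free_list = &envs[i];
	}

	// 配置分段硬件(该函数由实验框架提供)
	env_init_percpu();
}
```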
下面是用户代码相关的调用图。确保你理解了每一步的用途。
* `start` (`kern/entry.S`)
* `i386_init` (`kern/init.c`)
* `cons_init`
* `mem_init`
* `env_init`
* `trap_init`(到目前为止还未完成)
* `env_create`
* `env_run`
* `env_pop_tf`
在完成以上函数后,你应该编译内核并在 QEMU 下运行它。如果一切正常,你的系统将进入用户空间并运行 `hello` 二进制程序,直到它用 `int` 指令发起一个系统调用为止。在那个时刻会出现问题,因为 JOS 还没有设置硬件以允许从用户空间到内核空间的任何形式的转换。当 CPU 发现自己没有被设置为能处理这个系统调用中断时,它将产生一个一般保护异常,接着发现自己无法处理这个异常,于是又产生一个双重故障异常,然后发现同样无法处理,最后以所谓的"三重故障"放弃。通常情况下,随后你会看到 CPU 复位以及系统重新引导。虽然这种行为对传统应用程序来说很重要(原因参见 [这篇博客文章][3]),但对内核开发来说却是一个痛苦的过程,因此,在打了 6.828 补丁的 QEMU 上,你将可以看到转储的寄存器内容和一个"三重故障"的信息。
我们马上就会去处理这些问题,但是现在,我们可以使用调试器去检查我们是否进入了用户模式。使用 `make qemu-gdb` 并在 `env_pop_tf` 处设置一个 GDB 断点,它是你进入用户模式之前到达的最后一个函数。使用 `si` 单步进入这个函数;处理器将在 `iret` 指令之后进入用户模式。然后你将会看到在用户环境运行的第一个指令,它将是在 `lib/entry.S` 中的标签 `start` 的第一个指令 `cmpl`。现在,在 `hello` 中的 `sys_cputs()``int $0x30` 处使用 `b *0x...`(关于用户空间的地址,请查看 `obj/user/hello.asm` )设置断点。这个指令 `int` 是系统调用去显示一个字符到控制台。如果到 `int` 还没有运行,那么可能在你的地址空间设置或程序加载代码时发生了错误;返回去找到问题并解决后重新运行。
##### 处理中断和异常
到目前为止,用户空间中的第一条系统调用指令 `int $0x30` 是一条死胡同:一旦处理器进入用户模式,就无法再回到内核。因此,现在你需要实现基本的异常和系统调用处理,让内核有可能从用户模式代码中重新取得对处理器的控制。你要做的第一件事情,就是彻底熟悉 x86 的中断和异常机制。
```
练习 3、如果你对中断和异常机制不熟悉的话阅读 80386 程序员手册的第 9 章(或 IA-32 开发者手册的第 5 章)。
```
在这个实验中,对于中断、异常、以其它类似的东西,我们将遵循 Intel 的术语习惯。由于如<ruby>异常<rt>exception</rt></ruby><ruby>陷阱<rt>trap</rt></ruby><ruby>中断<rt>interrupt</rt></ruby><ruby>故障<rt>fault</rt></ruby><ruby>中止<rt>abort</rt></ruby>这些术语在不同的架构和操作系统上并没有一个统一的标准,我们经常在特定的架构下(如 x86并不去考虑它们之间的细微差别。当你在本实验以外的地方看到这些术语时它们的含义可能有细微的差别。
##### 受保护的控制转移基础
异常和中断都是“受保护的控制转移”它将导致处理器从用户模式切换到内核模式CPL=0而不会让用户模式的代码干扰到内核的其它函数或其它的环境。在 Intel 的术语中,一个中断就是一个“受保护的控制转移”,它是由于处理器以外的外部异步事件所引发的,比如外部设备 I/O 活动通知。而异常正好与之相反,它是由当前正在运行的代码所引发的同步的、受保护的控制转移,比如由于发生了一个除零错误或对无效内存的访问。
为了确保这些受保护的控制转移是真正地受到保护,处理器的中断/异常机制设计是:当中断/异常发生时,当前运行的代码不能随意选择进入内核的位置和方式。而是,处理器在确保内核能够严格控制的条件下才能进入内核。在 x86 上,有两种机制协同来提供这种保护:
1. **中断描述符表** 处理器确保中断和异常仅能够导致内核进入几个特定的、由内核本身定义好的、明确的入口点,而不是去运行中断或异常发生时的代码。
x86 允许最多有 256 个不同的中断或异常入口点进入内核,每个入口点都使用一个不同的中断向量。向量是一个介于 0 和 255 之间的数字。中断的向量由中断源决定:不同的设备、错误条件以及对内核的应用程序请求会以不同的向量产生中断。CPU 使用向量作为处理器的中断描述符表IDT的索引这个表由内核设置放在内核私有的内存中GDT 也一样)。处理器从这个表中相应的条目加载:
* 将值加载到指令指针寄存器EIP指向内核代码设计好的用于处理这种异常的服务程序。
* 将值加载到代码段寄存器CS它的低两位0—1 位)包含了异常服务程序将要运行的权限级别。(在 JOS 中,所有的异常处理程序都运行在内核模式中,即权限级别 0。
2. **任务状态段** 处理器在中断或异常发生时,需要一个地方来保存旧的处理器状态,比如处理器在调用异常服务程序之前的 `EIP``CS` 的原始值,这样异常服务程序稍后就能够还原旧的状态,回到中断发生时的代码位置。但是保存下来的旧处理器状态必须被保护起来,不能被无权限的用户模式代码访问;否则有 bug 的代码或恶意用户代码就可能危及内核。
基于这个原因,当 x86 处理器产生一个导致权限级别从用户模式变为内核模式的中断或陷阱时,它也会切换到内核内存中的一个栈。一个被称为任务状态段TSS的结构规定了这个栈所在的段选择子和地址。处理器会在这个新栈上推入 `SS`、`ESP`、`EFLAGS`、`CS`、`EIP` 以及一个可选的错误代码,然后从中断描述符中加载 `CS``EIP` 的值,并把 `ESP``SS` 设置为指向新栈。
虽然 TSS 很大,并且可能用于多种用途,但 JOS 仅用它来定义当从用户模式向内核模式转移时处理器应该切换到的内核栈。由于 JOS 中的"内核模式"就是 x86 的权限级别 0当进入内核模式时处理器使用 TSS 中的 `ESP0``SS0` 字段来确定内核栈。JOS 不使用 TSS 的任何其它字段。
##### 异常和中断的类型
所有的 x86 处理器上的同步异常都能够产生一个内部使用的、介于 0 到 31 之间的中断向量,因此它映射到 IDT 就是条目 0-31。例如一个页故障总是通过向量 14 引发一个异常。大于 31 的中断向量仅用于软件中断,它由 `int` 指令生成,或异步硬件中断,当需要时,它们由外部设备产生。
在这一节中,我们将扩展 JOS 去处理向量为 0-31 的、由处理器内部产生的 x86 异常。在下一节中,我们将让 JOS 处理软件中断向量 480x30JOS 将(随意选定地)使用它作为系统调用中断向量。在实验 4 中,我们将扩展 JOS 去处理外部产生的硬件中断,比如时钟中断。
##### 一个示例
我们把这些片断综合到一起,通过一个示例来巩固一下。我们假设处理器在用户环境下运行代码,遇到一个除零问题。
1. 处理器去切换到由 TSS 中的 `SS0``ESP0` 定义的栈,在 JOS 中,它们各自保存着值 `GD_KD``KSTACKTOP`
2. 处理器在内核栈上推入异常参数,起始地址为 `KSTACKTOP`
```
+--------------------+ KSTACKTOP
| 0x00000 | old SS | " - 4
| old ESP | " - 8
| old EFLAGS | " - 12
| 0x00000 | old CS | " - 16
| old EIP | " - 20 <---- ESP
+--------------------+
```
3. 由于我们要处理一个除零错误,它将在 x86 上产生一个中断向量 0处理器读取 IDT 的条目 0然后设置 `CS:EIP` 去指向由条目描述的处理函数。
4. 处理服务程序函数将接管控制权并处理异常,例如中止用户环境。
对于某些类型的 x86 异常,除了上述五个"标准的"值之外,处理器还会把包含错误代码的另一个字推入栈中。页故障异常(向量 14就是一个重要的例子。请查阅 80386 手册,确定哪些异常会推入错误代码,以及在那种情况下错误代码的含义。当处理器推入错误代码时,从用户模式进入内核模式的异常服务程序开始执行时,栈看起来如下所示:
```
+--------------------+ KSTACKTOP
| 0x00000 | old SS | " - 4
| old ESP | " - 8
| old EFLAGS | " - 12
| 0x00000 | old CS | " - 16
| old EIP | " - 20
| error code | " - 24 <---- ESP
+--------------------+
```
##### 嵌套的异常和中断
处理器既能处理来自用户模式的异常和中断,也能处理来自内核模式的。只有在从用户模式进入内核时x86 处理器才会在把旧的寄存器状态压入栈中、并通过 IDT 调用相应的异常服务程序之前自动切换栈。如果中断或异常发生时处理器已经处于内核模式(`CS` 寄存器的低两位为 0那么 CPU 只是把更多的值推入同一个内核栈。通过这种方式,内核可以优雅地处理由内核自身代码引发的嵌套异常。这种能力是实现保护的重要工具,我们将在稍后的系统调用一节中看到它。
如果处理器已经处于内核模式中,并且发生了一个嵌套的异常,由于它并不需要切换栈,它也就不需要去保存旧的 `SS``ESP` 寄存器。对于不推入错误代码的异常类型,在进入到异常服务程序时,它的内核栈看起来应该如下图:
```
+--------------------+ <---- old ESP
| old EFLAGS | " - 4
| 0x00000 | old CS | " - 8
| old EIP | " - 12
+--------------------+
```
对于需要推入一个错误代码的异常类型,处理器将在旧的 `EIP` 之后,立即推入一个错误代码,就和前面一样。
关于处理器的异常嵌套能力,这里有一个重要的警告。如果处理器在内核模式下发生异常,而又因为某种原因(比如栈空间不足)无法把旧的状态推入栈中,那么处理器将无法做任何恢复,只能简单地复位。毫无疑问,内核的设计应该保证这种情况不会发生。
##### 设置 IDT
到目前为止,你应该有了在 JOS 中为了设置 IDT 和处理异常所需的基本信息。现在,我们去设置 IDT 以处理中断向量 0-31处理器异常。我们将在本实验的稍后部分处理系统调用然后在后面的实验中增加中断 32-47设备 IRQ
头文件 `inc/trap.h``kern/trap.h` 中包含了与中断和异常相关的重要定义,你需要熟悉它们。`kern/trap.h` 中包含的是内核严格私有的定义,而 `inc/trap.h` 中的定义也可以被用户级程序和库使用。
注意:在范围 0-31 中的一些异常是被 Intel 定义为保留。因为在它们的处理器上从未产生过,你如何处理它们都不会有大问题。你想如何做它都是可以的。
你将要实现的完整的控制流如下图所描述:
```c
IDT trapentry.S trap.c
+----------------+
| &handler1 |---------> handler1: trap (struct Trapframe *tf)
| | // do stuff {
| | call trap // handle the exception/interrupt
| | // ... }
+----------------+
| &handler2 |--------> handler2:
| | // do stuff
| | call trap
| | // ...
+----------------+
.
.
.
+----------------+
| &handlerX |--------> handlerX:
| | // do stuff
| | call trap
| | // ...
+----------------+
```
每个异常或中断都应该在 `trapentry.S` 中有它自己的处理程序,并且 `trap_init()` 应该使用这些处理程序的地址去初始化 IDT。每个处理程序都应该在栈上构建一个 `struct Trapframe`(查看 `inc/trap.h`),然后使用一个指针调用 `trap()`(在 `trap.c` 中)到 `Trapframe`。`trap()` 接着处理异常/中断或派发给一个特定的处理函数。
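为了更具体地说明上图中的对应关系,下面以除零异常(向量 0`T_DIVIDE`)为例给出一个示意片段,仅作思路参考,并非标准答案;其中入口点名字 `th_divide` 是为举例而假设的,实际名字由你自己决定:
```c
// trapentry.S 中(示意):用宏生成入口点,该异常不推入错误代码
//     TRAPHANDLER_NOEC(th_divide, T_DIVIDE)
//
// kern/trap.c 的 trap_init() 中(示意):

void th_divide();	// 声明 trapentry.S 中定义的入口点

void
trap_init(void)
{
	// SETGATE(gate, istrap, sel, off, dpl)
	// sel = GD_KT处理程序位于内核代码段dpl = 0不允许
	// 用户态直接用 int 指令触发该向量
	SETGATE(idt[T_DIVIDE], 0, GD_KT, th_divide, 0);

	// ……对其余向量做类似处理。像断点T_BRKPT、系统调用T_SYSCALL
	// 这类需要允许用户态主动触发的向量,应把 dpl 设为 3。

	// 加载 IDT 与 TSS该函数由实验框架提供
	trap_init_percpu();
}
```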
```markdown
练习 4、编辑 `trapentry.S``trap.c`,实现上面所描述的功能。`trapentry.S` 中的宏 `TRAPHANDLER``TRAPHANDLER_NOEC`,以及 `inc/trap.h` 中的 `T_*` 定义会对你有帮助。你需要在 `trapentry.S` 中为 `inc/trap.h` 中定义的每个陷阱添加一个入口点(使用这些宏),你还需要提供 `_alltraps``TRAPHANDLER` 宏会引用它。你也需要修改 `trap_init()` 来初始化 `idt`,使它指向 `trapentry.S` 中定义的每一个入口点;宏 `SETGATE` 会对你有帮助。
你的 `_alltraps` 应该:
1. 推送值以使栈看上去像一个结构 Trapframe
2. 加载 `GD_KD``%ds``%es`
3. `pushl %esp` 去传递一个指针到 Trapframe 以作为一个 trap() 的参数
4. `call trap` `trap` 能够返回吗?)
考虑使用 `pushal` 指令;它非常适合 `struct Trapframe` 的布局。
使用 `user` 目录中的一些测试程序来测试你的陷阱处理代码,这些测试程序会在发起任何系统调用之前引发异常,比如 `user/divzero`。此时,你应该能够成功通过 `divzero`、`softint` 以及 `badsegment` 测试。
```
```markdown
小挑战!你可能会觉得,在 `trapentry.S` 中为各个 `TRAPHANDLER` 写一大堆几乎相同的代码,又要在 `trap.c` 中把它们的安装再列举一遍,非常乏味。清理它们!修改 `trapentry.S` 中的宏,自动为 `trap.c` 生成一张表。注意,你可以在汇编器中使用 `.text``.data` 指令在代码段和数据段之间切换。
```
```markdown
问题
在你的 `answers-lab3.txt` 中回答下列问题:
1. 为每个异常/中断设置一个独立的服务程序函数的目的是什么?(即:如果所有的异常/中断都传递给同一个服务程序,在我们的当前实现中能否提供这样的特性?)
2. 你需要做什么事情才能让 `user/softint` 程序正常运行评级脚本预计将会产生一个一般保护故障trap 13但是 `softint` 的代码显示为 `int $14`。为什么它产生的中断向量是 13如果内核允许 `softint``int $14` 指令去调用内核页故障的服务程序(它的中断向量是 14会发生什么事情
```
本实验的 Part A 部分结束了。不要忘了去添加 `answers-lab3.txt` 文件,提交你的变更,然后在 Part A 作业的提交截止日期之前运行 `make handin`
#### Part B页故障、断点异常、和系统调用
现在,你的内核已经有了最基本的异常处理能力,你将要去继续改进它,来提供依赖异常服务程序的操作系统原语。
##### 处理页故障
页故障异常,中断向量为 14`T_PGFLT`),它是一个非常重要的东西,我们将通过本实验和接下来的实验来大量练习它。当处理器产生一个页故障时,处理器将在它的一个特定的控制寄存器(`CR2`)中保存导致这个故障的线性地址(即:虚拟地址)。在 `trap.c` 中我们提供了一个专门处理它的函数的一个雏形,它就是 `page_fault_handler()`,我们将用它来处理页故障异常。
```markdown
练习 5、修改 `trap_dispatch()`,把页故障异常派发给 `page_fault_handler()`。你现在应该能够成功通过 `faultread`、`faultreadkernel`、`faultwrite``faultwritekernel` 测试了。如果它们中的任何一个不能正常工作,找出问题并修复它。记住,你可以使用 `make run-x``make run-x-nox` 重新引导 JOS 并进入一个特定的用户程序。比如,你可以运行 `make run-hello-nox` 来运行 `hello` 用户程序。
```
下面,你将进一步细化内核的页故障服务程序,因为你要实现系统调用了。
##### 断点异常
断点异常,中断向量为 3`T_BRKPT`),它通常用于调试:在程序代码中插入断点,即用特殊的 1 字节软件中断指令 `int3` 临时替换相应的程序指令。在 JOS 中,我们将稍微"滥用"一下这个异常,把它变成一个任何用户环境都可以用来调用 JOS 内核监视器的伪系统调用原语。如果我们把 JOS 内核监视器当作一个简陋的调试器,那么这种用法还算合适。例如,`lib/panic.c` 中用户模式的 `panic()` 实现,就在显示它的 `panic` 消息之后执行一条 `int3` 指令。
```markdown
练习 6、修改 `trap_dispatch()`,使断点异常会调用内核监视器。你现在应该能够成功通过 `breakpoint` 测试。
```
```markdown
小挑战!修改 JOS 内核监视器,以便于你能够从当前位置(即:在 `int3` 之后,断点异常调用了内核监视器) '继续' 异常,并且因此你就可以一次运行一个单步指令。为了实现单步运行,你需要去理解 `EFLAGS` 寄存器中的某些比特的意义。
可选:如果你富有冒险精神,找一些 x86 反汇编的代码 —— 即通过从 QEMU 中、或从 GNU 二进制工具中分离、或你自己编写 —— 然后扩展 JOS 内核监视器,以使它能够反汇编,显示你的每步的指令。结合实验 1 中的符号表,这将是你写的一个真正的内核调试器。
```
```markdown
问题
3. 在断点测试案例中,根据你在 IDT 中如何初始化断点条目(即:你在 `trap_init` 中对 `SETGATE` 的调用),既有可能产生断点异常,也有可能产生一般保护故障。为什么?要让它像上面描述的那样工作,你需要如何设置?而什么样的错误设置会导致产生一般保护故障?
4. 你认为这些机制的意义是什么?尤其是要考虑 `user/softint` 测试程序的工作原理。
```
##### 系统调用
用户进程请求内核为它做事情就是通过系统调用来实现的。当用户进程请求一个系统调用时,处理器首先进入内核模式,处理器和内核配合去保存用户进程的状态,内核为了完成系统调用会运行有关的代码,然后重新回到用户进程。用户进程如何获得内核的关注以及它如何指定它需要的系统调用的具体细节,这在不同的系统上是不同的。
在 JOS 内核中,我们使用 `int` 指令,它将导致产生一个处理器中断。尤其是,我们使用 `int $0x30` 作为系统调用中断。我们定义常量 `T_SYSCALL` 为 480x30。你将需要去设置中断描述符表以允许用户进程去触发那个中断。注意那个中断 0x30 并不是由硬件生成的,因此允许用户代码去产生它并不会引起歧义。
应用程序将在寄存器中传递系统调用号和系统调用参数。通过这种方式,内核就不需要去遍历用户环境的栈或指令流。系统调用号将放在 `%eax` 中,而参数(最多五个)将分别放在 `%edx`、`%ecx`、`%ebx`、`%edi`、和 `%esi` 中。内核将在 `%eax` 中传递返回值。在 `lib/syscall.c` 中的 `syscall()` 中已为你编写了使用一个系统调用的汇编代码。你可以通过阅读它来确保你已经理解了它们都做了什么。
```markdown
练习 7、在内核中为中断向量 `T_SYSCALL` 添加一个服务程序。你需要编辑 `kern/trapentry.S``kern/trap.c` 中的 `trap_init()`。还需要修改 `trap_dispatch()`,使它通过调用 `syscall()`(定义在 `kern/syscall.c` 中)并传入适当的参数来处理系统调用中断,然后把系统调用的返回值放在 `%eax` 中传回给用户进程。最后,你需要实现 `kern/syscall.c` 中的 `syscall()`。如果系统调用号无效,确保 `syscall()` 返回 `-E_INVAL`。你应该阅读并理解 `lib/syscall.c`(尤其是行内汇编部分),以确保你理解了系统调用的接口,然后通过调用相应的内核函数来处理 `inc/syscall.h` 中列出的每一个系统调用。
在你的内核中运行 `user/hello` 程序make run-hello。它应该在控制台上输出 "`hello, world`",然后在用户模式中产生一个页故障。如果没有产生页故障,可能意味着你的系统调用服务程序不太正确。现在,你应该有能力成功通过 `testbss` 测试。
```
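下面是 `kern/syscall.c``syscall()` 分发逻辑的一种可能写法,仅作思路示意,并非标准答案;它假设使用实验框架在 `inc/syscall.h` 中定义的系统调用号以及 `sys_cputs()` 等已有的内核函数:
```c
// 示意代码(非标准答案):根据系统调用号分发到相应的内核函数
int32_t
syscall(uint32_t syscallno, uint32_t a1, uint32_t a2, uint32_t a3,
	uint32_t a4, uint32_t a5)
{
	switch (syscallno) {
	case SYS_cputs:
		sys_cputs((const char *) a1, a2);
		return 0;
	case SYS_cgetc:
		return sys_cgetc();
	case SYS_getenvid:
		return sys_getenvid();
	case SYS_env_destroy:
		return sys_env_destroy(a1);
	default:
		return -E_INVAL;	// 无效的系统调用号
	}
}
```
`trap_dispatch()` 一侧,则可以从 `tf->tf_regs` 中取出 `reg_eax`、`reg_edx` 等寄存器的值作为参数传入,并把返回值写回 `tf->tf_regs.reg_eax`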
```markdown
小挑战!使用 `sysenter``sysexit` 指令而不是使用 `int 0x30``iret` 来实现系统调用。
`sysenter/sysexit` 指令是由 Intel 设计的,它的运行速度要比 `int/iret` 指令快。它使用寄存器而不是栈来做到这一点,并且通过假定了分段寄存器是如何使用的。关于这些指令的详细内容可以在 Intel 参考手册 2B 卷中找到。
在 JOS 中添加对这些指令支持的最简单的方法是,在 `kern/trapentry.S` 中添加一个 `sysenter_handler`,它保存足够的用户环境信息以便之后返回、建立内核环境、把参数压栈传给 `syscall()`,然后直接调用 `syscall()`。一旦 `syscall()` 返回,它再设置好运行 `sysexit` 指令所需的一切。你还需要在 `kern/init.c` 中添加一些代码来设置模型特定寄存器MSR。AMD 架构程序员手册第 2 卷的 6.1.2 节,以及 Intel 参考手册 2B 卷中关于 SYSENTER 的部分,对 MSR 都有详细描述。对于如何写 MSR你可以在[这里][4]找到一个可以添加到 `inc/x86.h` 中的 `wrmsr` 实现。
最后,`lib/syscall.c` 必须要修改,以便于支持用 `sysenter` 来生成一个系统调用。下面是 `sysenter` 指令的一种可能的寄存器布局:
eax - syscall number
edx, ecx, ebx, edi - arg1, arg2, arg3, arg4
esi - return pc
ebp - return esp
esp - trashed by sysenter
GCC 的内联汇编器将自动保存你告诉它的直接加载进寄存器的值。不要忘了同时去保存push和恢复pop你使用的其它寄存器或告诉内联汇编器你正在使用它们。内联汇编器不支持保存 `%ebp`,因此你需要自己去增加一些代码来保存和恢复它们,返回地址可以使用一个像 `leal after_sysenter_label, %%esi` 的指令置入到 `%esi` 中。
注意,它仅支持 4 个参数,因此你需要保留支持 5 个参数的系统调用的旧方法。而且,因为这个快速路径并不更新当前环境的 trap 帧,因此,在我们添加到后续实验中的一些系统调用上,它并不适合。
在接下来的实验中我们启用了异步中断,你需要再次去评估一下你的代码。尤其是,当返回到用户进程时,你需要去启用中断,而 `sysexit` 指令并不会为你去做这一动作。
```
##### 启动用户模式
一个用户程序从 `lib/entry.S` 的顶部开始运行。在进行一些设置之后,这些代码会调用 `lib/libmain.c` 中的 `libmain()`。你应该修改 `libmain()`,初始化全局指针 `thisenv`,使它指向这个环境在 `envs[]` 数组中的 `struct Env`。(注意,`lib/entry.S` 中已经把 `envs` 定义为指向你在 Part A 中设置的 `UENVS` 映射。)提示:查看 `inc/env.h` 并使用 `sys_getenvid`
`libmain()` 接下来调用 `umain`,对于 hello 程序来说,`umain``user/hello.c` 中。注意,它在输出 "`hello, world`" 之后,会尝试访问 `thisenv->env_id`。这就是它前面发生故障的原因。现在你已经正确地初始化了 `thisenv`,它应该不会再发生故障了。如果仍然发生故障,或许是因为你没有把 `UENVS` 区域映射为用户可读(回到 Part A 中检查 `pmap.c`;这是我们第一次真正使用 `UENVS` 区域)。
```markdown
练习 8、添加要求的代码到用户库然后引导你的内核。你应该能够看到 `user/hello` 程序会输出 "`hello, world`" 然后输出 "`i am environment 00001000`"。`user/hello` 接下来会通过调用 `sys_env_destroy()`(查看`lib/libmain.c` 和 `lib/exit.c`)尝试去"退出"。由于内核目前仅支持一个用户环境,它应该会报告它毁坏了唯一的环境,然后进入到内核监视器中。现在你应该能够成功通过 `hello` 的测试。
```
##### 页故障和内存保护
内存保护是一个操作系统中最重要的特性,通过它来保证一个程序中的 bug 不会破坏其它程序或操作系统本身。
操作系统通常依靠硬件的支持来实现内存保护。操作系统告诉硬件哪些虚拟地址是有效的,哪些是无效的。当一个程序尝试访问一个无效地址或它没有权限访问的地址时,处理器会在引发故障的指令处停止程序,然后带着所尝试操作的相关信息陷入内核。如果故障是可修复的,内核可以修复它并让程序继续运行;如果故障不可修复,那么程序就无法继续,因为它永远无法越过引发故障的那条指令。
作为可修复故障的一个例子,考虑一个自动扩展的栈。在许多系统上,内核最初只分配一个栈页,之后如果程序因为访问这个栈页下面的页而发生故障,内核会自动分配这些页并让程序继续运行。通过这种方式,内核只分配程序实际需要的栈内存,而程序却可以运行在一个拥有任意大的栈的假象中。
对于内存保护,系统调用中有一个非常有趣的问题。许多系统调用接口让用户程序把指针传递给内核。这些指针指向需要读取或写入的用户缓冲区。内核在执行系统调用时会解引用这些指针。这样就有两个问题:
1. 内核中的页故障可能比用户程序中的页故障严重得多。如果内核在操作它自己的数据结构时发生页故障,那就是一个内核 bug故障处理程序应该让整个内核乃至整个系统panic。但是当内核在解引用由用户程序传递给它的指针时它需要一种方式来记住由这些解引用引起的任何页故障实际上是代表用户程序发生的。
2. 一般情况下内核拥有比用户程序更多的权限。用户程序可以传递一个指针到系统调用,而指针指向的区域有可能是内核可以读取或写入而用户程序不可访问的区域。内核必须要非常小心,不能被废弃的这种指针欺骗,因为这可能导致泄露私有信息或破坏内核的完整性。
由于以上的原因,内核在处理由用户程序提供的指针时必须格外小心。
现在,你将通过一个简单的机制来仔细检查所有从用户空间传入内核的指针,从而解决这个问题。当一个程序向内核传递指针时,内核会检查这个地址是否位于地址空间的用户部分,并且检查页表是否允许所要求的内存操作。
这样,内核在解引用一个用户提供的指针时就绝不会发生页故障。如果内核真的发生了这种页故障,它应该 panic 并终止。
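下面给出这一检查机制的一个示意性实现思路,对应稍后练习中的 `user_mem_check()`,仅作参考而非标准答案;它假设可以复用实验 2 中的 `pgdir_walk()`,以及 `pmap.c` 中已有的全局变量 `user_mem_check_addr`
```c
// 示意代码(非标准答案):检查 [va, va+len) 中的每一页是否
// 位于 ULIM 之下、已映射、且页表允许 perm 所要求的访问权限
int
user_mem_check(struct Env *env, const void *va, size_t len, int perm)
{
	uintptr_t start = ROUNDDOWN((uintptr_t) va, PGSIZE);
	uintptr_t end = ROUNDUP((uintptr_t) va + len, PGSIZE);
	uintptr_t cur;
	pte_t *pte;

	for (cur = start; cur < end; cur += PGSIZE) {
		pte = pgdir_walk(env->env_pgdir, (void *) cur, 0);
		if (cur >= ULIM || pte == NULL ||
		    (*pte & (perm | PTE_P)) != (perm | PTE_P)) {
			// 记录第一个出错的地址,供 user_mem_assert() 打印
			user_mem_check_addr =
				(cur < (uintptr_t) va) ? (uintptr_t) va : cur;
			return -E_FAULT;
		}
	}
	return 0;
}
```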
```markdown
练习 9、修改 `kern/trap.c`,如果在内核模式中发生页故障,则 panic。
提示:判断一个页故障是发生在用户模式还是内核模式,去检查 `tf_cs` 的低位比特即可。
阅读 `kern/pmap.c` 中的 `user_mem_assert` 并在那个文件中实现 `user_mem_check`
修改 `kern/syscall.c`,对传递给系统调用的参数进行合理性检查。
引导你的内核,运行 `user/buggyhello`。环境将被毁坏,而内核将不会崩溃。你将会看到:
[00001000] user_mem_check assertion failure for va 00000001
[00001000] free env 00001000
Destroyed the only environment - nothing more to do!
最后,修改在 `kern/kdebug.c` 中的 `debuginfo_eip`,在 `usd`、`stabs`、和 `stabstr` 上调用 `user_mem_check`。如果你现在运行 `user/breakpoint`,你应该能够从内核监视器中运行回溯,然后在内核因页故障崩溃前看到回溯进入到 `lib/libmain.c`。是什么导致了这个页故障?你不需要去修复它,但是你应该明白它是如何发生的。
```
注意,刚才实现的这些机制也同样适用于恶意用户程序(比如 `user/evilhello`)。
```
练习 10、引导你的内核运行 `user/evilhello`。环境应该被毁坏,并且内核不会崩溃。你应该能看到:
[00000000] new env 00001000
...
[00001000] user_mem_check assertion failure for va f010000c
[00001000] free env 00001000
```
**本实验到此结束。**确保你通过了所有的评分测试,并且不要忘记把问题的答案写在 `answers-lab3.txt` 中,并详细描述你的挑战练习的解决方案。提交你的变更,然后在 `lab` 目录下运行 `make handin` 来提交你的工作。
在提交之前,使用 `git status``git diff` 检查你的变更,并且不要忘记 `git add answers-lab3.txt`。完成后,使用 `git commit -am 'my solutions to lab 3'` 提交你的变更,然后运行 `make handin` 并按照提示操作。
--------------------------------------------------------------------------------
via: https://pdos.csail.mit.edu/6.828/2018/labs/lab3/
作者:[csail.mit][a]
选题:[lujun9972][b]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: https://pdos.csail.mit.edu/6.828/2018/labs/labguide.html
[2]: https://pdos.csail.mit.edu/6.828/2018/labs/reference.html
[3]: http://blogs.msdn.com/larryosterman/archive/2005/02/08/369243.aspx
[4]: http://ftp.kh.edu.tw/Linux/SuSE/people/garloff/linux/k6mod.c

View File

@ -0,0 +1,127 @@
如何在 Linux 上锁定虚拟控制台会话
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-720x340.png)
当你在共享系统上工作时,你可能不希望其他用户在你的控制台中悄悄地看你在做什么。如果是这样,我知道有个简单的技巧可以锁定自己的会话,同时仍然允许其他用户在其他虚拟控制台上使用该系统。这要感谢 **Vlock****V**irtual Console **lock**),这是一个命令行程序,用于锁定 Linux 控制台上的一个或多个会话。如有必要你可以锁定整个控制台并完全禁用虚拟控制台切换功能。Vlock 对于有多个用户访问控制台的共享 Linux 系统特别有用。
### 安装 Vlock
在基于 Arch 的系统上Vlock 由默认预装的 **kbd** 包提供,因此你无需为安装烦恼。
在 Debian、Ubuntu、Linux Mint 上,运行以下命令来安装 Vlock
```
$ sudo apt-get install vlock
```
在 Fedora 上:
```
$ sudo dnf install vlock
```
在 RHEL、CentOS 上:
```
$ sudo yum install vlock
```
### 在 Linux 上锁定虚拟控制台会话
Vlock 的一般语法是:
```
vlock [ -acnshv ] [ -t <timeout> ] [ plugins... ]
```
这里:
* **-a** 锁定所有虚拟控制台会话,
* **-c** 锁定当前虚拟控制台会话,
* **-n** 在锁定所有会话之前切换到新的空控制台,
* **-s** 禁用 SysRq 键机制,
* **-t** 指定屏保插件的超时时间,
* **-h** 显示帮助,
* **-v** 显示版本。
让我举几个例子。
**1\. 锁定当前控制台会话**
在没有任何参数的情况下运行 Vlock 时它默认锁定当前控制台会话TTY。要解锁会话你需要输入当前用户的密码或 root 密码。
```
$ vlock
```
![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-1-1.gif)
你还可以使用 **-c** 标志来锁定当前的控制台会话。
```
$ vlock -c
```
请注意,此命令仅锁定当前控制台。你可以按 **ALT+F2** 切换到其他控制台。有关在 TTY 之间切换的更多详细信息,请参阅以下指南。
此外,如果系统有多个用户,则其他用户仍可以访问其各自的 TTY。
**2\. 锁定所有控制台会话**
要同时锁定所有 TTY 并禁用虚拟控制台切换功能,请运行:
```
$ vlock -a
```
同样,要解锁控制台会话,只需按下回车键并输入当前用户的密码或 root 用户密码。
请记住,**root 用户可以随时解锁任何 vlock 会话**,除非在编译时禁用。
**3\. 在锁定所有控制台之前切换到新的虚拟控制台**
在锁定所有控制台之前,还可以使 Vlock 从 X 会话切换到新的空虚拟控制台。为此,请使用 **-n** 标志。
```
$ vlock -n
```
**4\. 禁用 SysRq 机制**
你也许知道,魔术 SysRq 键机制允许用户在系统死机时执行某些操作。因此,用户可以使用 SysRq 解锁控制台。为了防止这种情况,请传递 **-s** 选项以禁用 SysRq 机制。请记住,这只适用于有 **-a** 选项的时候。
```
$ vlock -sa
```
有关更多选项及其用法,请参阅帮助或手册页。
```
$ vlock -h
$ man vlock
```
Vlock 可防止未经授权的用户获得控制台访问权限。如果你在为 Linux 寻找一个简单的控制台锁定机制,那么 Vlock 值得一试!
就是这些了。希望这篇文章有用。还有更多好东西。敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-lock-virtual-console-sessions-on-linux/
作者:[SK][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,82 @@
使用极简浏览器 Min 浏览网页
======
> 并非所有 web 浏览器都要做到无所不能Min 就是一个极简主义风格的浏览器。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG)
现在还有开发新的网络浏览器的需要吗?即使现在浏览器领域已经成为了寡头市场,但仍然不断涌现出各种前所未有的浏览器产品。
[Min][1] 就是其中一个。顾名思义Min 是一个小的浏览器,也是一个极简主义的浏览器。但它麻雀虽小五脏俱全,而且还是一个开源的浏览器,它的 Apache 2.0 许可证引起了我的注意。
让我们来看看 Min 有什么值得关注的方面。
### 开始
Min 基于 [Electron][2] 框架开发,值得一提的是,[Atom 文本编辑器][3]也是基于这个框架开发的。它提供 Linux、MacOS 和 Windows 的[安装程序][4],当然也可以[从 GitHub 获取它的源代码][5]自行编译安装。
我使用的 Linux 发行版是 Manjaro但没有完全匹配这个发行版的安装程序。还好我通过 Manjaro 的包管理器也能安装 Min。
安装完成后,在终端就可以直接启动 Min。
![](https://opensource.com/sites/default/files/uploads/min-main.png)
Min 号称是更智能、更快速的浏览器。经过尝试以后,我觉得它比我在其它电脑上使用过的 Firefox 和 Chrome 浏览器启动得更快。
而使用 Min 浏览网页的过程则和 Firefox 或 Chrome 一样,只要在地址栏输入 URL回车就好了。
### Min 的功能
尽管 Min 不可能拥有 Firefox 或 Chrome 等浏览器的所有功能,但它也有可取之处。
Min 和其它浏览器一样,支持页面选项卡。它还有一个称为 Tasks 的功能,可以对打开的选项卡进行分组。
[DuckDuckGo][6]是我最喜欢的搜索引擎,而 Min 的默认搜索引擎恰好就是它,这正合我意。当然,如果你喜欢另一个搜索引擎,也可以在 Min 的偏好设置中配置你喜欢的搜索引擎作为默认搜索引擎。
Min 没有使用类似 AdBlock 这样的插件来过滤你不想看到的内容,而是使用了一个名为 [EasyList][7] 的内置的广告拦截器,你可以使用它来屏蔽脚本和图片。另外 Min 还带有一个内置的防跟踪软件。
与 Firefox 类似Min 有一个名为 Reading List阅读列表的阅读模式。只需点击地址栏中的对应图标就可以去除页面中的大部分无关内容让你专注于正在阅读的内容。网页在阅读列表中可以保留 30 天。
![](https://opensource.com/sites/default/files/uploads/min-reading-list.png)
Min 还有一个专注模式,可以隐藏其它选项卡并阻止你打开新的选项卡。在专注模式下,如果你正在某个 web 页面中工作而又需要打开新的页面,就得多点击好几次。
Min 也有很多快捷键,让你快速使用某个功能。你可以[在 GitHub 上][8]找到这些快捷键的参考文档,也可以在 Min 的偏好设置中进行更改。
我发现 Min 可以在 YouTube、Vimeo、Dailymotion 等视频网站上播放视频,还可以在音乐网站 7Digital 上播放音乐。但由于我没有账号,所以没法测试是否能在 Spotify 或 Last.fm 等这些网站上播放音乐。
![](https://opensource.com/sites/default/files/uploads/min-video.png)
### Min 的弱点
Min 确实也有自己的缺点,例如它无法将网站添加为书签。替代方案要么是查看 Min 的搜索历史来找回你需要的链接,要么是使用一个第三方的书签服务。
最大的缺点是 Min 不支持插件。这对我来说不是一件坏事因为浏览器启动速度和运行速度快的主要原因就在于此。当然也有一些人非常喜欢使用浏览器插件Min 就不是他们的选择。
### 总结
Min 算是一个中规中矩的浏览器它可以凭借轻量、快速的优点吸引很多极简主义的用户。但是对于追求多功能的用户来说Min 就显得相当捉襟见肘了。
所以,如果你想摆脱当今多功能浏览器的束缚,我觉得可以试用一下 Min。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/min-web-browser
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://minbrowser.github.io/min/
[2]: http://electron.atom.io/apps/
[3]: https://opensource.com/article/17/5/atom-text-editor-packages-writers
[4]: https://github.com/minbrowser/min/releases/
[5]: https://github.com/minbrowser/min
[6]: http://duckduckgo.com
[7]: https://easylist.to/
[8]: https://github.com/minbrowser/min/wiki

View File

@ -0,0 +1,123 @@
理解 Linux 链接:第一部分
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-link-498708.jpg?itok=DyVEcEsc)
除了 `cp``mv` 这两个我们在[本系列的前一部分][1]中详细讨论过的,链接是另一种方式可以将文件和目录放在你希它们放在的位置。它的优点是可以让你同时在多个位置显示一个文件或目录。
如前所述,在物理磁盘这个级别上,文件和目录之类的东西并不真正存在。文件系统为了方便人类使用,将它们虚构出来。但在磁盘级别上,有一个名为 _partition table_(分区表)的东西,它位于每个分区的开头,然后数据分散在磁盘的其余部分。
虽然分区表有不同的类型,但位于分区开头的这个表所包含的数据,映射了每个目录和文件的起始和结束位置。分区表就像一个索引:当从磁盘加载文件时,操作系统查找表中的条目,分区表告诉它文件在磁盘上从哪里开始、到哪里结束。然后磁盘头移动到起点,读取数据,直到到达终点。瞧!这就是你的文件。
### 硬链接
硬链接只是分区表中的一个条目,它指向磁盘上的某个区域,表示该区域**已经被分配给文件**。换句话说,硬链接指向已经被另一个条目索引的数据。让我们看看它是如何工作的。
打开终端,创建一个实验目录并进入:
```
mkdir test_dir
cd test_dir
```
使用 [touch][1] 创建一个文件:
```
touch test.txt
```
为了让实验更有意思,用文本编辑器打开 _test.txt_ 并添加一些单词。
现在通过执行以下命令来建立硬链接:
```
ln test.txt hardlink_test.txt
```
运行 `ls`,你会看到你的目录现在包含两个文件,或者看起来如此。正如你之前读到的那样,你真正看到的是完全相同的文件的两个名称: _hardlink\_test.txt_ 包含相同的内容,没有填充磁盘中的任何更多空间(尝试使用大文件来测试),并与 _test.txt_ 使用相同的 inode
```
$ ls -li *test*
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt
```
_ls_ 的 `-i` 选项显示一个文件的 _inode 数值_。_inode_ 是分区表中的信息块,它包含磁盘上文件或目录的位置,上次修改的时间以及其它数据。如果两个文件使用相同的 inode那么无论它们在目录树中的位置如何它们在实际效果上都是相同的文件。
### 软链接
软链接,也称为 _symlinks_(符号链接),它是不同的:软链接实际上是一个独立的文件,它有自己的 inode 和自己在磁盘上的一小块空间。但它只包含一小段数据,把操作系统指向另一个文件或目录。
你可以使用 `ln``-s` 选项来创建一个软链接:
```
ln -s test.txt softlink_test.txt
```
这将在当前目录中创建软链接 _softlink\_test.txt_它指向 _test.txt_
再次执行 `ls -li`,你可以看到两种链接的不同之处:
```
$ ls -li
total 8
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt
16515855 lrwxrwxrwx 1 paul paul 8 oct 12 09:50 softlink_test.txt -> test.txt
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt
```
_hardlink\_test.txt__test.txt_ 包含一些文本,并且*毫不夸张地*占用同一块空间。它们使用相同的 inode 数值。与此同时_softlink\_test.txt_ 占用的空间少得多,并且有不同的 inode 数值,这把它标记为一个完全不同的文件。使用 _ls_`-l` 选项还会显示软链接指向的文件或目录。
### 为什么要用链接?
它们适用于**自带运行环境的应用程序**。你的 Linux 发行版通常不会附带你需要的应用程序的最新版本。以优秀的 [Blender 3D][2] 设计软件为例Blender 允许你创建 3D 静态图像以及动画电影,人人都想在自己的机器上拥有它。问题是,当前版本的 Blender 总是比任何发行版自带的版本至少新一个版本。
幸运的是,[Blender 提供了][3]开箱即用的下载包。这些软件包除了程序本身之外,还包含了 Blender 运行所需的各种复杂的库和依赖框架。所有这些文件和组件都放在它们自己的目录层次中。
每次你想运行 Blender你都可以 `cd` 到你下载它的文件夹并运行:
```
./blender
```
但这很不方便。如果你可以从文件系统的任何地方,比如桌面命令启动器中运行 `blender` 命令会更好。
这样做的方法是将 _blender_ 可执行文件链接到 _bin/_ 目录。在许多系统上,你可以通过将其链接到文件系统中的任何位置来使 `blender` 命令可用,就像这样。
```
ln -s /path/to/blender_directory/blender /home/<username>/bin
```
另一个需要链接的情况是**软件需要过时的库**。如果你用 `ls -l` 列出 _/usr/lib_ 目录,你会看到大量的软链接文件。仔细看看,你会发现软链接通常与它们所链接的原始文件名字相似。你可能会看到 _libblah_ 链接到 _libblah.so.2_,而 _libblah.so.2_ 又链接到原始文件 _libblah.so.2.1.0_
这是因为应用程序通常需要安装比已安装版本更老的库。问题是,即使新版本仍然与旧版本(通常是)兼容,如果程序找不到它正在寻找的版本,程序将会出现问题。为了解决这个问题,发行版通常会创建链接,以便挑剔的应用程序相信它找到了旧版本,实际上它只找到了一个链接并最终使用了更新的库版本。
还有一种情况与**你自己从源代码编译的程序**有关。你自己编译的程序通常最终安装在 _/usr/local_ 下:程序本身在 _/usr/local/bin_ 中,并在 _/usr/local/lib_ 目录中查找它需要的库。但假设你的新程序需要 _libblah_,而 _libblah_ 位于 _/usr/lib_ 中,也就是所有其它程序寻找库的地方。你可以通过执行以下操作把它链接到 _/usr/local/lib_
```
ln -s /usr/lib/libblah /usr/local/lib
```
或者如果你愿意,可以 `cd`_/usr/local/lib_
```
cd /usr/local/lib
```
然后使用链接:
```
ln -s ../lib/libblah
```
还有几十种场合可以证明软链接的用处,当你更熟练地使用 Linux 时,你肯定会发现它们,但上面这些是最常见的。下一次,我们将看看一些你需要注意的链接的怪异之处。
通过 Linux 基金会和 edX 的免费 ["Linux 简介"][4]课程了解有关 Linux 的更多信息。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/intro-to-linux/2018/10/linux-links-part-1
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/blog/2018/8/linux-beginners-moving-things-around
[2]: https://www.blender.org/
[3]: https://www.blender.org/download/
[4]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,53 @@
在 Fedora 上使用 Pitivi 编辑你的视频
======
![](https://fedoramagazine.org/wp-content/uploads/2018/10/pitivi-816x346.png)
想制作一部你本周末冒险的视频吗?视频编辑有很多选择。但是,如果你在寻找一个容易上手的视频编辑器,并且也可以在官方 Fedora 仓库中找到,请尝试一下[Pitivi][1]。
Pitivi 是一个使用 GStreamer 框架的开源非线性视频编辑器。在 Fedora 下开箱即用Pitivi 支持 OGG、WebM 和一系列其他格式。此外,通过 gstreamer 插件可以获得更多视频格式支持。Pitivi 也与 GNOME 桌面紧密集成,因此相比其他新的程序,它的 UI 在 Fedora Workstation 上会感觉很熟悉。
### 在 Fedora 上安装 Pitivi
Pitivi 可以在 Fedora 仓库中找到。在 Fedora Workstation 上,只需在应用中心搜索并安装 Pitivi。
![][2]
或者,使用以下命令在终端中安装 Pitivi
```
sudo dnf install pitivi
```
### 基本编辑
Pitivi 内置了多种工具,可以快速有效地编辑剪辑。只需将视频、音频和图像导入 Pitivi 媒体库然后将它们拖到时间线上即可。此外除了时间线上的简单淡入淡出过渡之外pitivi 还允许你轻松地将剪辑的各个部分分割、修剪和分组。
![][3]
### 过渡和效果
除了两个剪辑之间的基本淡入淡出外Pitivi 还具有一系列不同的过渡和擦除功能。此外,有超过一百种效果可应用于视频或音频,以更改媒体元素在最终演示中的播放或显示方式。
![][4]
Pitivi 还具有一系列其他强大功能,因此请务必查看其网站上的[教程][5]来获得 Pitivi 功能的完整描述。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/edit-your-videos-with-pitivi-on-fedora/
作者:[Ryan Lerch][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/introducing-flatpak/
[b]: https://github.com/lujun9972
[1]: http://www.pitivi.org/
[2]: https://fedoramagazine.org/wp-content/uploads/2018/10/Screenshot-from-2018-10-19-14-46-12.png
[3]: https://fedoramagazine.org/wp-content/uploads/2018/10/Screenshot-from-2018-10-19-15-37-29.png
[4]: http://www.pitivi.org/i/screenshots/archive/0.94.jpg
[5]: http://www.pitivi.org/?go=tour

View File

@ -0,0 +1,339 @@
用 Pandoc 做一篇调研论文
======
学习如何用 Markdown 管理引用、图像、表格、以及更多。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T)
这篇文章深入介绍了如何用 [Markdown][1] 语法撰写一篇调研论文,涵盖了如何创建并引用章节、图像(用 Markdown 和 [LaTeX][2])以及参考书目。我们还讨论了一些棘手的案例,以及为什么在这些情况下使用 LaTeX 是正确的做法。
### 调查
调研论文一般包括对章节、图像、表格的引用以及参考书目。[Pandoc][3] 本身并不能交叉引用这些内容,但它可以借助 [pandoc-crossref][4] 过滤器来完成自动编号以及章节、图像和表格的交叉引用。
让我们以[一篇原本用 LaTeX 写成的教育调研论文][5]为例,看看如何用 Markdown加上一些 LaTeX、Pandoc 和 pandoc-crossref 把它重写出来。
#### 添加并引用章节
要想让章节被自动编号,章节必须使用 Markdown 的 H1 标题编写。子章节使用 H2-H4 子标题编写(通常不需要更多层级)。例如,一个标题为 "Implementation"(实现)的章节写作 `# Implementation {#sec:implementation}`,然后 Pandoc 会把它转化为 `3. Implementation`(或者相应的章节编号)。这个标题使用了 H1并声明了一个 `{#sec:implementation}` 标签,作者可以用这个标签来引用该章节。要引用一个章节,在章节标签前加上 `@` 符号并用方括号括起来即可:`[@sec:implementation]`
[在这篇论文中][5], 我们发现了下面这个例子:
```
we lack experience (consistency between TAs, [@sec:implementation]).
```
Pandoc 转换:
```
we lack experience (consistency between TAs, Section 4).
```
章节会被自动编号(这一步由文章最后的 `Makefile` 完成)。要创建不编号的章节,输入章节标题并在最后加上 `{-}`。例如:`### 设计一个可维护的游戏 {-}` 就会创建一个标题为"设计一个可维护的游戏"的不编号章节。
#### 添加并引用图像
添加并引用一个图像,跟添加并引用一个章节和添加一个 Markdown 图片很相似:
```
![Scatterplot matrix](data/scatterplots/RScatterplotMatrix2.png){#fig:scatter-matrix}
```
上面这一行是告诉 Pandoc有一个标有 Scatterplot matrix 的图像以及这张图片路径是 `data/scatterplots/RScatterplotMatrix2.png`。`{#fig:scatter-matrix}` 表明了应该引用的图像的名字。
这里是从一篇论文中进行图像引用的例子:
```
The boxes "Enjoy", "Grade" and "Motivation" ([@fig:scatter-matrix]) ...
```
Pandoc 产生如下输出:
```
The boxes "Enjoy", "Grade" and "Motivation" (Fig. 1) ...
```
#### 添加及引用参考书目
大多数调研报告都把引用放在一个 BibTeX 的数据库文件中。在这个例子中,该文件被命名为 [biblio.bib][6],它包含了论文中所有的引用。下面是这个文件的样子:
```
@inproceedings{wrigstad2017mastery,
    Author =       {Wrigstad, Tobias and Castegren, Elias},
    Booktitle =    {SPLASH-E},
    Title =        {Mastery Learning-Like Teaching with Achievements},
    Year =         2017
}
@inproceedings{review-gamification-framework,
  Author =       {A. Mora and D. Riera and C. Gonzalez and J. Arnedo-Moreno},
  Publisher =    {IEEE},
  Booktitle =    {2015 7th International Conference on Games and Virtual Worlds
                  for Serious Applications (VS-Games)},
  Doi =          {10.1109/VS-GAMES.2015.7295760},
  Keywords =     {formal specification;serious games (computing);design
                  framework;formal design process;game components;game design
                  elements;gamification design frameworks;gamification-based
                  solutions;Bibliographies;Context;Design
                  methodology;Ethics;Games;Proposals},
  Month =        {Sept},
  Pages =        {1-8},
  Title =        {A Literature Review of Gamification Design Frameworks},
  Year =         2015,
  Bdsk-Url-1 =   {http://dx.doi.org/10.1109/VS-GAMES.2015.7295760}
}
...
```
第一行的 `@inproceedings{wrigstad2017mastery,` 表明了出版物 (`inproceedings`) 的类型,以及用来指向那篇论文 (`wrigstad2017mastery`) 的标签。
引用这篇题为 “Mastery Learning-Like Teaching with Achievements” 的论文, 输入:
```
the achievement-driven learning methodology [@wrigstad2017mastery]
```
Pandoc 将会输出:
```
the achievement- driven learning methodology [30]
```
这篇论文将会产生像下面这样被标号的参考书目:
![](https://opensource.com/sites/default/files/uploads/bibliography-example_0.png)
要引用一组文章也很容易:只要把各个引用标签用分号 `;` 分隔开就可以了。如果要同时引用两个标签(例如 `SEABORN201514``gamification-leaderboard-benefits`),可以像下面这样把它们放在一起引用:
```
Thus, the most important benefit is its potential to increase students' motivation
and engagement [@SEABORN201514;@gamification-leaderboard-benefits].
```
Pandoc 将会产生:
```
Thus, the most important benefit is its potential to increase students motivation
and engagement [26, 28]
```
### 问题案例
一个常见的问题是图表等条目与页面位置不匹配。这类浮动内容会被自动移动到排版系统认为合适的地方,即便这些位置并不是读者期望看到的位置。因此,为了让文章更易读,我们希望图像或表格出现在离提及它们的文字较近的地方,这就需要做一些调整。为了达到这个效果,我建议使用 LaTeX 的 `figure` 环境,它可以让用户控制图像的位置。
我们看一个上面提到的图像的例子:
```
![Scatterplot matrix](data/scatterplots/RScatterplotMatrix2.png){#fig:scatter-matrix}
```
然后使用 LaTeX 重写:
```
\begin{figure}[t]
\includegraphics{data/scatterplots/RScatterplotMatrix2.png}
\caption{\label{fig:matrix}Scatterplot matrix}
\end{figure}
```
在 LaTeX 中,`figure` 环境的 `[t]` 选项表示这张图应该位于该页的顶部。有关更多选项,参阅 Wikibooks 的文章 [LaTeX/Floats, Figures, and Captions][7]
### 产生一篇论文
到目前为止,我们讲了如何添加和引用(子)章节、图像和参考书目,现在让我们看看如何生成这篇论文的 PDF。要生成 PDF我们先用 Pandoc 生成一个 LaTeX 文件,然后再把这个 LaTeX 文档构建成最终的 PDF。我们还会讨论如何使用一套自定义模板和元信息文件来生成 LaTeX 格式的调研论文,以及如何把 LaTeX 文档编译成最终的 PDF。
很多会议都提供了一个 **.cls** 文件或者一套论文该有样子的模板; 例如,他们是否应该使用两列的格式以及其他的设计风格。在我们的例子中,会议提供了一个名为 **acmart.cls** 的文件。
作者通常想要在他们的论文中包含他们所属的机构,然而,这个选项并没有包含在默认的 Pandoc 的 LaTeX 模板(注意,可以通过输入 `pandoc -D latex` 来查看 Pandoc 模板)当中。要包含这个内容,找一个 Pandoc 默认的 LaTeX 模板,并添加一些新的内容。将这个模板像下面这样复制进一个名为 `mytemplate.tex` 的文件中:
```
pandoc -D latex > mytemplate.tex
```
默认的模板包含以下代码:
```
$if(author)$
\author{$for(author)$$author$$sep$ \and $endfor$}
$endif$
$if(institute)$
\providecommand{\institute}[1]{}
\institute{$for(institute)$$institute$$sep$ \and $endfor$}
$endif$
```
由于这个模板应该能够包含作者的所属机构和电子邮件地址等信息,我们可以添加以下内容来更新这个模板(我们还做了一些其他更改,但由于文件较长,这里没有全部列出):
```latex
$for(author)$
    $if(author.name)$
        \author{$author.name$}
        $if(author.affiliation)$
            \affiliation{\institution{$author.affiliation$}}
        $endif$
        $if(author.email)$
            \email{$author.email$}
        $endif$
    $else$
        $author$
    $endif$
$endfor$
```
要让这些更改起作用,我们还应该有下面的文件:
* `main.md` 包含调研论文
* `biblio.bib` 包含参考书目数据库
* `acmart.cls` 我们使用的文档的集合
* `mytemplate.tex` 是我们使用的模板文件(代替默认的)
让我们添加论文的元信息到一个 `meta.yaml` 文件:
```
---
template: 'mytemplate.tex'
documentclass: acmart
classoption: sigconf
title: The impact of opt-in gamification on `\\`{=latex} students' grades in a software design course
author:
- name: Kiko Fernandez-Reyes
  affiliation: Uppsala University
  email: kiko.fernandez@it.uu.se
- name: Dave Clarke
  affiliation: Uppsala University
  email: dave.clarke@it.uu.se
- name: Janina Hornbach
  affiliation: Uppsala University
  email: janina.hornbach@fek.uu.se
bibliography: biblio.bib
abstract: |
  An achievement-driven methodology strives to give students more control over their learning with enough flexibility to engage them in deeper learning. (more stuff continues)
include-before: |
  \```{=latex}
  \copyrightyear{2018}
  \acmYear{2018}
  \setcopyright{acmlicensed}
  \acmConference[MODELS '18 Companion]{ACM/IEEE 21th International Conference on Model Driven Engineering Languages and Systems}{October 14--19, 2018}{Copenhagen, Denmark}
  \acmBooktitle{ACM/IEEE 21th International Conference on Model Driven Engineering Languages and Systems (MODELS '18 Companion), October 14--19, 2018, Copenhagen, Denmark}
  \acmPrice{XX.XX}
  \acmDOI{10.1145/3270112.3270118}
  \acmISBN{978-1-4503-5965-8/18/10}
  \begin{CCSXML}
  <ccs2012>
  <concept>
  <concept_id>10010405.10010489</concept_id>
  <concept_desc>Applied computing~Education</concept_desc>
  <concept_significance>500</concept_significance>
  </concept>
  </ccs2012>
  \end{CCSXML}
  \ccsdesc[500]{Applied computing~Education}
  \keywords{gamification, education, software design, UML}
  \```
figPrefix:
  - "Fig."
  - "Figs."
secPrefix:
  - "Section"
  - "Sections"
...
```
这个元信息文件使用 LaTeX 设置下列参数:
* `template` 指向使用的模板mytemplate.tex
* `documentclass` 指向使用的 LaTeX 文档集合 (`acmart`)
* `classoption` 指向该文档类的选项,在这个例子中是 `sigconf`
* `title` 指定论文的标题
* `author` 是一个列表,其中每一项包含 `name`、`affiliation`、`email` 等字段
* `bibliography` 指向包含参考书目的文件 (biblio.bib)
* `abstract` 包含论文的摘要
* `include-before` 是这篇论文的真实内容之前应该被包含的信息;在 LaTeX 中被称为 [前言][8]。我在这里包含它去展示如何产生一篇计算机科学的论文,但是你可以选择跳过
* `figPrefix` 指向如何引用文档中的图像,例如:当引用图像的 `[@fig:scatter-matrix]` 时应该显示什么。例如,当前的 `figPrefix` 在这个例子 `The boxes "Enjoy", "Grade" and "Motivation" ([@fig:scatter-matrix])`中,产生了这样的输出:`The boxes "Enjoy", "Grade" and "Motivation" (Fig. 3)`。如果这里有很多图像,目前的设置表明它应该在图像号码旁边显示 `Figs.`
* `secPrefix` 指定如何引用文档中其他地方提到的部分(类似之前的图像和概览)
现在已经设置好了元信息,让我们来创建一个 `Makefile`,它会产生你想要的输出。`Makefile` 使用 Pandoc 产生 LaTeX 文件,`pandoc-crossref` 产生交叉引用,`pdflatex` 构建 LaTeX 为 PDF`bibtex ` 处理引用。
`Makefile` 已经展示如下:
```
all: paper
paper:
        @pandoc -s -F pandoc-crossref --natbib meta.yaml --template=mytemplate.tex -N \
         -f markdown -t latex+raw_tex+tex_math_dollars+citations -o main.tex main.md
        @pdflatex main.tex &> /dev/null
        @bibtex main &> /dev/null
        @pdflatex main.tex &> /dev/null
        @pdflatex main.tex &> /dev/null
clean:
        rm main.aux main.tex main.log main.bbl main.blg main.out
.PHONY: all clean paper
```
Pandoc 使用下面的标记:
* `-s` 创建一个独立的 LaTeX 文档
* `-F pandoc-crossref` 利用 `pandoc-crossref` 进行过滤
* `--natbib``natbib` (你也可以选择 `--biblatex`)对参考书目进行渲染
* `--template` 设置使用的模板文件
* `-N` 为章节的标题编号
* `-f``-t` 指定从哪个格式转换到哪个格式。`-t` 通常包含格式和 Pandoc 使用的扩展。在这个例子中,我们标明的 `raw_tex+tex_math_dollars+citations` 允许在 Markdown 中使用 `raw_tex` LaTeX。 `tex_math_dollars` 让我们能够像在 LaTeX 中一样输入数学符号,`citations` 让我们可以使用 [这个扩展][9].
PDF 由 LaTeX 生成,并按照 [bibtex][10] 的方式处理参考书目:
```
@pdflatex main.tex &> /dev/null
@bibtex main &> /dev/null
@pdflatex main.tex &> /dev/null
@pdflatex main.tex &> /dev/null
```
脚本用 `@` 忽略输出,并且重定向标准输出和错误到 `/dev/null` ,因此我们在使用这些命令的可执行文件时不会看到任何的输出。
最终的结果展示如下。这篇文章的库可以在 [GitHub][11] 找到:
![](https://opensource.com/sites/default/files/uploads/abstract-image.png)
### 结论
在我看来,研究的重点是协作、思想的传播,以及在自己所处的任何领域中推进现有技术的发展。许多计算机科学家和工程师使用 LaTeX 文档系统来写论文,它对数学公式提供了出色的支持。来自社会科学的研究人员则似乎更喜欢 DOCX 文档。
当身处不同社区的研究人员一同写一篇论文时,他们首先应该讨论一下将要使用哪种格式。然而,如果包含太多的数学符号DOCX 对于工程师来说不是最简便的选择,而 LaTeX 对于缺乏编程经验的研究人员来说也有一些问题。就像这篇文章中展示的Markdown 是一门工程师和社会科学家都能轻松使用的语言。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/pandoc-research-paper
作者:[Kiko Fernandez-Reyes][a]
选题:[lujun9972][b]
译者:[dianbanjiu](https://github.com/dianbanjiu)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kikofernandez
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Markdown
[2]: https://www.latex-project.org/
[3]: https://pandoc.org/
[4]: http://lierdakil.github.io/pandoc-crossref/
[5]: https://dl.acm.org/citation.cfm?id=3270118
[6]: https://github.com/kikofernandez/pandoc-examples/blob/master/research-paper/biblio.bib
[7]: https://en.wikibooks.org/wiki/LaTeX/Floats,_Figures_and_Captions#Figures
[8]: https://www.sharelatex.com/learn/latex/Creating_a_document_in_LaTeX#The_preamble_of_a_document
[9]: http://pandoc.org/MANUAL.html#citations
[10]: http://www.bibtex.org/Using/
[11]: https://github.com/kikofernandez/pandoc-examples/tree/master/research-paper

View File

@ -0,0 +1,78 @@
正确选择开源数据库的 5 个技巧
======
> 对关键应用的选择不容许丝毫错误。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8)
你或许会遇到需要选择适合的开源数据库的情况。但这无论对于开源方面的老手或是新手,都是一项艰巨的任务。
在过去的几年中,采用开源技术的企业越来越多。面对这样的趋势,众多开源应用公司都纷纷承诺自己提供的解决方案能够解决各种问题、适应各种负载。但这些承诺不能轻信,在开源应用上的选择是重要而艰难的,尤其是数据库这种关键的应用。
凭借我在 [Percona][1] 和其它公司担任 IT 专家的经验,我很幸运能够指导其他人在开源技术的选择上做出正确的决策,因为需要考虑的重要因素太多了。希望通过这篇文章能够向大家分享这方面的一些技巧。
### 有一个明确的目标
这一点看似简单,但在和很多人聊过 MySQL、MongoDB、PostgreSQL 之后,我觉得这一点才是最重要的。
面对繁杂的开源数据库,更需要明确自己的目标。无论这个数据库是作为开发用的标准化数据库后端,抑或是用于替换遗留代码中的原有数据库,这都是一个明确的目标。
目标一旦确定,就可以集中精力与开源软件的提供方商讨更多细节了。
### 了解你的工作负载
尽管开源数据库技术的功能越来越丰富,但这些新加入的功能都不太具有普适性。譬如 MongoDB 新增了事务的支持、MySQL 新增了 JSON 存储的功能等等。目前开源数据库的普遍趋势是不断加入新的功能,但很多人的误区却在于没有选择最适合的工具来完成自己的工作——这样的人或许是一个自大的开发者,又或许是一个视野狭窄的主管——最终导致公司业务上的损失。最致命的是,在业务初期,使用了不适合的工具往往也可以顺利地完成任务,但随着业务的增长,很快就会到达瓶颈,尽管这个时候还可以替换更合适的工具,但成本就比较高了。
例如,如果你需要的是数据分析仓库,关系数据库可能不是一个适合的选择;如果你处理事务的应用要求严格的数据完整性和一致性,就不要考虑 NoSQL 了。
### 不要重新发明轮子
在过去的数十年,开源数据库技术迅速发展壮大。开源数据库从新生,到受到质疑,再到受到认可,现在已经成为很多企业生产环境的数据库。企业不再需要担心选择开源数据库技术会产生风险,因为开源数据库通常都有活跃的社区,可以为越来越多的初创公司、中型企业甚至 500 强公司提供开源数据库领域的支持和第三方工具。
Battery Ventures 是一家专注于技术的投资公司,最近推出了一个用于跟踪最受欢迎开源项目的 [BOSS 指数][2] 。它提供了对一些被广泛采用的开源项目和活跃的开源项目的详细情况。其中,数据库技术毫无悬念地占据了榜单的主导地位,在前十位之中占了一半。这个 BOSS 指数对于刚接触开源数据库领域的人来说,这是一个很好的切入点。当然,开源技术的提供者也会针对很多常见的典型问题给出对应的解决方案。
我认为,你想要做的事情很可能已经有人解决过了。即使这些先行者的解决方案不一定完全契合你的需求,但也可以从他们成功或失败案例中根据你自己的需求修改得出合适的解决方案。
如果你采用了一个最前沿的技术,这就是你探索的好机会了。如果你的工作负载刚好适合新的开源数据库技术,放胆去尝试吧。第一个吃螃蟹的人总是会得到意外的挑战和收获。
### 先从简单开始
你的数据库实际上需要达到多少个 [9][4] 的可用性?对许多公司来说,“实现高可用性”仅仅只是一个模糊的目标。当然,最常见的答案都会是“它是关键应用,我们无论多短的停机时间都是无法忍受的”。
数据库环境越复杂,管理的难度就越大,成本也会越高。理论上你总可以将数据库的可用性提得更高,但代价将会是大大增加的管理难度和性能下降。所以,先从简单开始,直到有需要时再逐步扩展。
例如Booking.com 是一个有名的旅游预订网站。但少有人知的是,它使用 MySQL 作为数据库后端。 Booking.com 高级系统架构师 Nicolai Plum 曾经发表过一次[演讲][5],讲述了他们公司使用 MySQL 数据库的历程。其中一个重点就是,在初始阶段数据库可以被配置得很简单,然后逐渐变得复杂。对于早期的数据库需求,一个简单的主从架构就足够了,但随着工作负载和数据量的增加,数据库引入了负载均衡、多个读取副本,还使用 Hadoop 进行分析。尽管如此,早期的架构仍然是非常简单的。
![](https://opensource.com/sites/default/files/uploads/internet_app_barrett_chambers.png)
### 有疑问,找专家
如果你仍然不确定数据库选择得是否合适,可以在论坛、网站或者与软件的提供者处商讨。研究各种开源数据库是否满足自己的需求是一件很有意义的事,因为总会发现你从不知道的技术。而开源社区就是分享这些信息的地方。
当你接触到开源软件和软件提供者时,有一件重要的事情需要注意。很多公司都有开放的核心业务模式,鼓励采用他们的数据库软件。你可以只接受他们的部分建议和指导,然后用你自己的能力去研究和探索替代方案。
### 总结
选择正确的开源数据库是一个重要的过程。很多时候,人们都会在真正理解需求之前就做出决定,这是本末倒置的。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/tips-choosing-right-open-source-database
作者:[Barrett Chambers][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/barrettc
[b]: https://github.com/lujun9972
[1]: https://www.percona.com/
[2]: https://techcrunch.com/2017/04/07/tracking-the-explosive-growth-of-open-source-software/
[3]: https://docs.aws.amazon.com/quickstart/latest/mongodb/welcome.html
[4]: https://en.wikipedia.org/wiki/Five_nines
[5]: https://www.percona.com/live/mysql-conference-2015/sessions/bookingcom-evolution-mysql-system-design
[6]: https://allthingsopen.org/talk/choosing-the-right-open-source-database/
[7]: https://allthingsopen.org/

View File

@ -0,0 +1,275 @@
如何在 Raspberry Pi 上搭建 WordPress
======
这篇简单的教程可以让你在 Raspberry Pi 上运行你的 WordPress 网站。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_raspberry-pi-classroom_lead.png?itok=KIyhmR8W)
WordPress 是一个非常受欢迎的开源博客平台和内容管理平台CMS)。它很容易搭建,而且还有一个活跃的开发者社区构建网站、创建主题和插件供其他人使用。
虽然通过一键式 WordPress 设置很容易获得一个托管方案,但在有命令行访问权限的 Linux 服务器上自己搭建也很简单,而 Raspberry Pi 是一种用来尝试它并顺便学习一些东西的绝佳途径。
这个 web 堆栈的四个组成部分是 Linux、Apache、MySQL 和 PHP。下面是关于它们每一个你需要了解的内容。
### Linux
Raspberry Pi 上运行的系统是 Raspbian这是一个基于 Debian、为在 Raspberry Pi 硬件上良好运行而优化的 Linux 发行版。你有两个选择:桌面版或是精简版。桌面版有一个熟悉的桌面环境,还带有很多教育软件和编程工具,像是 LibreOffice 套件、Minecraft还有一个 web 浏览器。精简版没有桌面环境,因此它只有命令行以及一些必要的软件。
这篇教程在两个版本上都可以使用,但是如果你使用的是精简版,你必须要有另外一台电脑去访问你的站点。
### Apache
Apache 是一个受欢迎的 web 服务器应用,你可以安装在你的 Raspberry Pi 上伺服你的 web 页面。就其自身而言Apache 可以通过 HTTP 提供静态 HTML 文件。使用额外的模块,它也可以使用像是 PHP 的脚本语言提供动态网页。
安装 Apache 非常简单。打开一个终端窗口,然后输入下面的命令:
```
sudo apt install apache2 -y
```
Apache 默认放了一个测试文件在一个 web 目录中,你可以从你的电脑或是你网络中的其他计算机进行访问。只需要打开 web 浏览器,然后输入地址 **<http://localhost>**。或者(特别是你使用的是 Raspbian Lite 的话)输入你的 Pi 的 IP 地址代替 **localhost**。你应该会在你的浏览器窗口中看到这样的内容:
![](https://opensource.com/sites/default/files/uploads/apache-it-works.png)
这意味着你的 Apache 已经开始工作了!
这个默认的网页仅仅是你文件系统里的一个文件,它位于本地的 **/var/www/html/index.html**。你可以使用 [Leafpad][2] 文本编辑器写一些 HTML 去替换这个文件的内容。
```
cd /var/www/html/
sudo leafpad index.html
```
保存并关闭 Leafpad 然后刷新网页,查看你的更改。
### MySQL
MySQL读作 "my S-Q-L" 或者 "my sequel")是一个很受欢迎的数据库引擎。就像 PHP它被非常广泛地应用于网页服务这也是为什么像 WordPress 一样的项目选择了它,以及这些项目为何如此受欢迎。
在一个终端窗口中输入以下命令安装 MySQL 服务:
```
sudo apt-get install mysql-server -y
```
WordPress 使用 MySQL 存储文章、页面、用户数据、还有许多其他的内容。
### PHP
PHP 是一个预处理器:它是在服务器通过网络浏览器接收到网页请求时运行的代码。它计算出需要在页面上展示的内容,然后把网页发送给浏览器。不像静态的 HTMLPHP 能在不同的情况下展示不同的内容。PHP 是一个在 web 上非常受欢迎的语言;很多像 Facebook 和 Wikipedia 这样的大型项目都是用 PHP 编写的。
安装 PHP 和 MySQL 的插件:
```
sudo apt-get install php php-mysql -y
```
删除 **index.html**,然后创建 **index.php**
```
sudo rm index.html
sudo leafpad index.php
```
在里面添加以下内容:
```
<?php phpinfo(); ?>
```
保存、退出、刷新你的网页。你将会看到 PHP 状态页:
![](https://opensource.com/sites/default/files/uploads/phpinfo.png)
### WordPress
你可以使用 **wget** 命令从 [wordpress.org][3] 下载 WordPress。最新的 WordPress 总是使用 [wordpress.org/latest.tar.gz][4] 这个网址,所以你可以直接抓取这些文件,而无需到网页里面查看,现在的版本是 4.9.8。
确保你在 **/var/www/html** 目录中,然后删除里面的所有内容:
```
cd /var/www/html/
sudo rm *
```
使用 **wget** 下载 WordPress然后解压并把解压出的 WordPress 目录中的内容移动到 **html** 目录下:
```
sudo wget http://wordpress.org/latest.tar.gz
sudo tar xzf latest.tar.gz
sudo mv wordpress/* .
```
现在可以删除压缩包和空的 **wordpress** 目录:
```
sudo rm -rf wordpress latest.tar.gz
```
运行 **ls** 或者 **tree -L 1** 命令显示 WordPress 项目下包含的内容:
```
.
├── index.php
├── license.txt
├── readme.html
├── wp-activate.php
├── wp-admin
├── wp-blog-header.php
├── wp-comments-post.php
├── wp-config-sample.php
├── wp-content
├── wp-cron.php
├── wp-includes
├── wp-links-opml.php
├── wp-load.php
├── wp-login.php
├── wp-mail.php
├── wp-settings.php
├── wp-signup.php
├── wp-trackback.php
└── xmlrpc.php
3 directories, 16 files
```
这是 WordPress 的默认安装源。在 **wp-content** 目录中,你可以编辑你的自定义安装。
你现在应该把所有文件的所有权改为 Apache 用户:
```
sudo chown -R www-data: .
```
### WordPress 数据库
为了搭建你的 WordPress 站点,你需要一个数据库。这里使用的是 MySQL。
在终端窗口运行 MySQL 的安全安装命令:
```
sudo mysql_secure_installation
```
你将会被问到一系列的问题。这里原来没有设置密码,但是在下一步你应该设置一个。确保你记住了你输入的密码,后面你需要使用它去连接你的 WordPress。按回车确认下面的所有问题。
当它完成之后,你将会看到 "All done!" 和 "Thanks for using MariaDB!" 的信息。
在终端窗口运行 **mysql** 命令:
```
sudo mysql -uroot -p
```
输入你创建的 root 密码。你将看到 “Welcome to the MariaDB monitor.” 的欢迎信息。在 **MariaDB [(none)] >** 提示处使用以下命令,为你 WordPress 的安装创建一个数据库:
```
create database wordpress;
```
注意语句末尾的分号,如果命令执行成功,你将看到下面的提示:
```
Query OK, 1 row affected (0.00 sec)
```
把数据库权限授予 root 用户,在语句的末尾输入你的密码:
```
GRANT ALL PRIVILEGES ON wordpress.* TO 'root'@'localhost' IDENTIFIED BY 'YOURPASSWORD';
```
为了让更改生效,你需要刷新数据库权限:
```
FLUSH PRIVILEGES;
```
**Ctrl+D** 退出 MariaDB 提示,返回到 Bash shell。
### WordPress 配置
在你的 Raspberry Pi 打开网页浏览器,地址栏输入 **<http://localhost>**。选择一个你想要在 WordPress 使用的语言,然后点击 **继续**。你将会看到 WordPress 的欢迎界面。点击 **让我们开始吧** 按钮。
按照下面这样填写基本的站点信息:
```
Database Name:      wordpress
User Name:          root
Password:           <YOUR PASSWORD>
Database Host:      localhost
Table Prefix:       wp_
```
点击 **提交** 继续,然后点击 **运行安装**
![](https://opensource.com/sites/default/files/uploads/wp-info.png)
按下面的格式填写:为你的站点设置一个标题、创建一个用户名和密码、输入你的 email 地址。点击 **安装 WordPress** 按钮,然后使用你刚刚创建的账号登录,你现在已经登录,而且你的站点已经设置好了,你可以在浏览器地址栏输入 **<http://localhost/wp-admin>** 查看你的网站。
### 永久链接
更改你的永久链接,使得你的 URLs 更加友好是一个很好的想法。
要这样做,首先登录你的 WordPress ,进入仪表盘。进入 **设置****永久链接**。选择 **文章名** 选项,然后点击 **保存更改**。接着你需要开启 Apache 的 **改写** 模块。
```
sudo a2enmod rewrite
```
你还需要告诉虚拟托管服务,站点允许改写请求。为你的虚拟主机编辑 Apache 配置文件
```
sudo leafpad /etc/apache2/sites-available/000-default.conf
```
在第一行后添加下面的内容:
```
<Directory "/var/www/html">
    AllowOverride All
</Directory>
```
确保这段内容位于 **<VirtualHost \*:80>** 之内,像这样:
```
<VirtualHost *:80>
    <Directory "/var/www/html">
        AllowOverride All
    </Directory>
    ...
```
保存这个文件,然后退出,重启 Apache
```
sudo systemctl restart apache2
```
### 下一步?
WordPress 是可以高度自定义的。在网站顶部横幅处点击你的站点名,就会进入仪表盘。在这里你可以修改主题、添加页面和文章、编辑菜单、添加插件,以及许多其他的事情。
这里有一些你可以在 Raspberry Pi 的网页服务上尝试的有趣的事情:
* 添加页面和文章到你的网站
* 从外观菜单安装不同的主题
* 自定义你的网站主题或是创建你自己的
* 使用你的网站服务向你的网络上的其他人显示有用的信息
不要忘记Raspberry Pi 是一台 Linux 电脑。你也可以按照相同的步骤在运行 Debian 或者 Ubuntu 的服务器上安装 WordPress。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/setting-wordpress-raspberry-pi
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[dianbanjiu](https://github.com/dianbanjiu)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sitewide-search?search_api_views_fulltext=raspberry%20pi
[2]: https://en.wikipedia.org/wiki/Leafpad
[3]: http://wordpress.org/
[4]: https://wordpress.org/latest.tar.gz

View File

@ -0,0 +1,98 @@
使用 Python 的 toolz 库开始函数式编程
======
toolz 库允许你操作函数,使代码更容易理解,也更容易测试。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy)
在这个由两部分组成的系列文章的第二部分中,我们将继续探索如何将函数式编程方法中的好想法引入到 Python中以实现两全其美。
在上一篇文章中,我们介绍了[不可变数据结构][1]。 这些数据结构使得我们可以编写“纯”函数,或者说是没有副作用的函数,仅仅接受一些参数并返回结果,同时保持良好的性能。
在这篇文章中,我们使用 toolz 库来构建。 这个库具有操作此类函数的函数,并且它们在纯函数中表现得特别好。 在函数式编程世界中,它们通常被称为“高阶函数”,因为它们将函数作为参数,将函数作为结果返回。
让我们从这里开始:
```
def add_one_word(words, word):
    return words.set(word, words.get(word, 0) + 1)
```
这个函数假设它的第一个参数是一个不可变的、类似字典的对象,它返回一个新的类似字典的对象,其中对应单词的计数加一:这就是一个简单的频率计数器。
但是,只有把它应用到一个单词流上并做归约时,它才真正有用。我们可以使用内置模块 `functools` 中的归约函数 `functools.reduce(function, stream, initializer)`
我们想要一个这样的函数:把它应用到一个流上,就能返回频率计数。
我们首先使用 `toolz.curry` 函数:
```
add_all_words = curry(functools.reduce, add_one_word)
```
使用这个版本时,我们还需要提供初始值。但是,我们不能直接把 `pyrsistent.m()` 传给这个柯里化后的函数,因为参数的顺序不对。
```
add_all_words_flipped = flip(add_all_words)
```
`flip` 这个高阶函数返回一个调用原始函数的函数,并且翻转参数顺序。
```
get_all_words = add_all_words_flipped(pyrsistent.m())
```
我们利用 `flip` 自动调整其参数的特性给它一个初始值:一个空字典。
现在我们可以执行 `get_all_words(word_stream)` 来获取频率字典。但是,我们如何获得一个单词流呢Python 的文件对象就是行的流。
```
def to_words(lines):
    for line in lines:
        yield from line.split()
```
在单独测试每个函数后,我们可以将它们组合在一起:
```
words_from_file = toolz.compose(get_all_words, to_words)
```
在这个例子中,只组合两个函数还是很容易读的:先把文件的行流交给 `to_words`,再把 `get_all_words` 应用到 `to_words` 的结果上。不过,这段文字的叙述顺序与代码的书写顺序正好相反。
当我们开始认真对待可组合性时,这很重要。 有时可以将代码编写为一个单元序列,单独测试每个单元,最后将它们全部组合。 如果有几个组合元素时,组合的顺序可能就很难理解。
`toolz` 库借用了 Unix 命令行的做法,并使用 `pipe` 作为执行相同操作的函数,但顺序相反。
```
words_from_file = toolz.pipe(to_words, get_all_words)
```
现在读起来更直观了:将输入传递到 `to_words`,并将结果传递给 `get_all_words`。 在命令行上,等效写法如下所示:
```
$ cat files | to_words | get_all_words
```
`toolz` 库允许我们操作函数,切片,分割和组合,以使我们的代码更容易理解和测试。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/functional-programming-python-toolz
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures