Merge remote-tracking branch 'upstream/master'

This commit is contained in:
David Chen 2018-10-25 14:04:07 +08:00
commit 27173b0d05
33 changed files with 2661 additions and 1998 deletions

3
.gitmodules vendored
View File

@ -1,3 +0,0 @@
[submodule "comic"]
path = comic
url = https://wxy@github.com/LCTT/comic.git

View File

@ -1,3 +1,18 @@
language: c
script:
  - sh ./scripts/check.sh
  - ./scripts/badge.sh
branches:
  only:
    - master
  except:
    - gh-pages
git:
  submodules: false
deploy:
  provider: pages
  skip_cleanup: true
  github_token: $GITHUB_TOKEN
  local_dir: build
  on:
    branch: master

View File

@ -6,9 +6,9 @@
![Awaiting proofreading](https://lctt.github.io/TranslateProject/badge/translated.svg)
![Published](https://lctt.github.io/TranslateProject/badge/published.svg)
LCTT is the translation team of "Linux中国" ([https://linux.cn/](https://linux.cn/)), responsible for translating Linux-related technical articles, news and essays from excellent foreign media.
[LCTT](https://linux.cn/lctt/) is the translation team of "Linux中国" ([https://linux.cn/](https://linux.cn/)), responsible for translating Linux-related technical articles, news and essays from excellent foreign media.
LCTT already has several hundred active members and welcomes more Linux volunteers to join our team.
LCTT already has several hundred active members, and welcomes more Linux volunteers to join our team.
![logo](https://linux.cn/static/image/common/lctt_logo.png)
@ -70,6 +70,10 @@ The composition of LCTT
* 2018/01/11 Promoted lujun9972 to core member and added him to the topic selection team.
* 2018/02/20 Hit by a DMCA complaint and the repository was taken down.
* 2018/05/15 Promoted MjSeven to core member.
* 2018/08/01 [Released the Linux.cn token (LCCN)](https://linux.cn/article-9886-1.html).
* 2018/08/17 Promoted pityonline to core member as a proofreader, and adopted the PR review workflow at his suggestion.
* 2018/09/10 [LCTT's fifth anniversary](https://linux.cn/article-9999-1.html).
* 2018/10/25 Rebuilt the CI; thanks to vizv, lujun9972 and bestony.
Core members
-------------------------------
@ -78,13 +82,16 @@ The composition of LCTT
- Team leader @wxy,
- Topic selection @oska874,
- Topic selection @lujun9972,
- Tech @bestony,
- Proofreading @jasminepeng,
- Proofreading @pityonline,
- Diamond translator @geekpi,
- Diamond translator @qhwdw,
- Diamond translator @GOLinux,
- Diamond translator @ictlyh,
- Tech team leader @bestony,
- Comic team leader @GHLandy,
- LFS team leader @martin2011qi,
- Core member @GHLandy,
- Core member @martin2011qi,
- Core member @ictlyh,
- Core member @strugglingyouth,
- Core member @FSSlc,
- Core member @zpl1025,
@ -96,8 +103,6 @@ The composition of LCTT
- Core member @Locez,
- Core member @ucasFL,
- Core member @rusking,
- Core member @qhwdw,
- Core member @lujun9972
- Core member @MjSeven
- Former topic selection @DeadFire,
- Former proofreading @reinoir222,

1
comic

@ -1 +0,0 @@
Subproject commit e5db5b880dac1302ee0571ecaaa1f8ea7cf61901

View File

@ -1,25 +1,25 @@
CPU Power Management Tool - Control and Manage CPU Frequency in Linux
CPU Power Manager: Control and Manage CPU Frequency in Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Manage-CPU-Frequency-720x340.jpeg)
If you use a laptop, you probably know that power management on Linux is not great. Although there are tools such as **TLP**, [**Laptop Mode Tools** and **powertop**][1] to help reduce power consumption, battery life still falls short of Windows and Mac OS. Another way to reduce power consumption is to limit the CPU frequency. This has always been possible, but it requires rather complicated terminal commands, which makes it inconvenient. Fortunately, there is a GNOME extension called **CPU Power Manager** that makes it easy to set and manage your CPU frequency. On the GNOME desktop, CPU Power Manager uses the **intel_pstate** power driver (supported by almost every Intel CPU) to control and manage CPU frequency.
If you use a laptop, you probably know that power management on Linux is not great. Although there are tools such as **TLP**, [**Laptop Mode Tools** and **powertop**][1] to help reduce power consumption, battery life still falls short of Windows and Mac OS. Another way to reduce power consumption is to limit the CPU frequency. This has always been possible, but it requires rather complicated terminal commands, which makes it inconvenient. Fortunately, there is a GNOME extension called **CPU Power Manager** that makes it easy to set and manage your CPU frequency. On the GNOME desktop, CPU Power Manager uses the **intel_pstate** frequency scaling driver (supported by almost every Intel CPU) to control and manage CPU frequency.
Another reason to use this extension is to reduce heat: many systems get uncomfortably hot under normal use, and limiting the CPU frequency lowers the heat output. It also reduces wear on the CPU and other components.
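If you are curious what the extension is actually adjusting under the hood, the values live in sysfs. Below is a small Python sketch (not part of the extension) that reads them; the paths assume a Linux system using the intel_pstate driver and may not exist with other CPUs or drivers.
```
from pathlib import Path

# Sysfs knobs exposed by intel_pstate (assumed paths; Linux + intel_pstate only).
PSTATE = Path("/sys/devices/system/cpu/intel_pstate")
CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(path: Path) -> str:
    return path.read_text().strip() if path.exists() else "n/a"

print("min_perf_pct :", read(PSTATE / "min_perf_pct"))      # lower frequency limit, in percent
print("max_perf_pct :", read(PSTATE / "max_perf_pct"))      # upper frequency limit, in percent
print("no_turbo     :", read(PSTATE / "no_turbo"))          # 1 means Turbo Boost is disabled
print("cpu0 freq    :", read(CPUFREQ / "scaling_cur_freq")) # current frequency of CPU 0, in kHz
```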
### Install CPU Power Manager
First, go to the [**extension's home page**][2] and install the extension.
First, go to the [extension's home page][2] and install the extension.
After the extension is installed, a CPU icon appears on the right side of the GNOME top bar. Click the icon and you will see an option to install the extension, as shown below:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-icon.png)
Click the **"Attempt installation"** button, and a password confirmation dialog pops up. The extension needs root privileges to add a policykit rule for controlling the CPU frequency. This is what the prompt looks like:
Click the "Attempt installation" button, and a password confirmation dialog pops up. The extension needs root privileges to add a policykit rule for controlling the CPU frequency. This is what the prompt looks like:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-1.png)
Enter your password and click the **"Authenticate"** button to finish the installation. This adds a policykit file named **mko.cpupower.setcpufreq.policy** to the **/usr/share/polkit-1/actions** directory.
Enter your password and click the "Authenticate" button to finish the installation. This adds a policykit file named `mko.cpupower.setcpufreq.policy` to the `/usr/share/polkit-1/actions` directory.
Once everything is installed, clicking the CPU icon in the top-right corner shows the following:
@ -27,12 +27,10 @@ CPU Power Management Tool - Control and Manage CPU Frequency in Linux
### Features
* **See the CPU frequency:** Obviously, you can use this window to see the current frequency the CPU is running at.
* **Set the maximum and minimum frequency:** With this extension, you can set upper and lower frequency limits as percentages using the sliders shown. Once set, the CPU will operate strictly within that range.
* **Turn Turbo Boost on and off:** This is my favorite feature. Most Intel CPUs have a "Turbo Boost" feature that automatically overclocks one of the cores for extra performance. Although it makes the system faster, it also greatly increases power consumption. So if you are not doing anything CPU-intensive, it is better to turn Turbo Boost off to save power. In fact, I keep Turbo Boost off most of the time on my machine.
* **Profiles:** You can create profiles with maximum and minimum frequencies and then switch between them easily, instead of adjusting the settings by hand every time.
* **See the CPU frequency:** Obviously, you can use this window to see the current frequency the CPU is running at.
* **Set the maximum and minimum frequency:** With this extension, you can set upper and lower frequency limits as percentages using the sliders shown. Once set, the CPU will operate strictly within that range.
* **Turn Turbo Boost on and off:** This is my favorite feature. Most Intel CPUs have a "Turbo Boost" feature that automatically overclocks one of the cores for extra performance. Although it makes the system faster, it also greatly increases power consumption. So if you are not doing anything CPU-intensive, it is better to turn Turbo Boost off to save power. In fact, I keep Turbo Boost off most of the time on my machine.
* **Profiles:** You can create profiles with maximum and minimum frequencies and then switch between them easily, instead of adjusting the settings by hand every time.
### Preferences
@ -40,24 +38,23 @@ CPU Power Management Tool - Control and Manage CPU Frequency in Linux
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-preferences.png)
As you can see, you can choose whether to show the current CPU frequency, and whether to display it in **GHz** instead of **MHz**.
As you can see, you can choose whether to show the current CPU frequency, and whether to display it in **GHz** instead of **MHz**.
You can also edit and create/delete profiles:
You can also edit and create/delete profiles:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-preferences-1.png)
For each profile, you can set the maximum and minimum frequency separately and turn Turbo Boost on or off.
For each profile, you can set the maximum and minimum frequency separately and turn Turbo Boost on or off.
### Conclusion
As I said at the beginning, power management on Linux is not the best, and many people keep hoping for a few more minutes of battery life from their Linux laptops. If you are one of them, give this extension a try. It is an unconventional way to save power, but it works. I really like this extension and have been using it for several months now.
What do you think about this extension? Put your thoughts in the comments below! What do you think of this extension? Please share your views in the comments below.
What do you think of this extension? Please share your views in the comments below.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/cpu-power-manager-control-and-manage-cpu-frequency-in-linux/
@ -65,7 +62,7 @@ via: https://www.ostechnix.com/cpu-power-manager-control-and-manage-cpu-frequenc
Author: [EDITOR][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [runningwater](https://github.com/runningwater)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

View File

@ -1,27 +1,27 @@
The Creator of the World Wide Web Is Creating a New Distributed Web
The Creator of the World Wide Web Is Creating a New Decentralized Web
======
**Tim Berners-Lee, the creator of the World Wide Web, has announced his plan to build a new distributed web in which the data is controlled by its users**
> Tim Berners-Lee, the creator of the World Wide Web (WWW), has announced his plan to build a new decentralized web in which the data is controlled by its users.
[Tim Berners-Lee] [1] is famous for creating the World Wide Web, the internet as you know it today. More than two decades later, Tim is working to free the internet from the grip of corporate giants and, through a distributed web, hand power back to the people.
[Tim Berners-Lee][1] is famous for creating the World Wide Web, the internet as you know it today. More than two decades later, Tim is working to free the internet from the grip of corporate giants and, through the <ruby>decentralized web<rt>Decentralized Web</rt></ruby>, hand power back to the people.
Berners-Lee is unhappy with the way the internet "powers that be" handle user data, so he has [started working on his own open source project][2], Solid, "to return power on the web to the people"
Berners-Lee is unhappy with the way the internet "powers that be" handle user data, so he has [started working on his own open source project][2], Solid, "to return power on the web to the people".
> Solid changes the current model, in which users have to hand over their personal data to digital giants in exchange for perceived value. As we have all discovered, this is not in our best interest. Solid is how we evolve the web in order to restore balance, in a revolutionary way that gives each of us complete control over our data, whether it is personal data or not.
> Solid changes the current model, in which users have to hand over their personal data to digital giants in exchange for perceived value. As we have all discovered, this is not in our best interest. Solid is how we evolve the web in order to restore balance, in a revolutionary way that gives each of us complete control over our data, whether it is personal data or not.
![Tim Berners-Lee is creating a decentralized web with open source project Solid][3]
Basically, [Solid][4] is a platform built on the existing web, where you can create your own "pods" (personal data stores). You decide where these "pods" are hosted, who gets access to which pieces of data, and how the data is shared through the pod.
Basically, [Solid][4] is a platform built on the existing web, where you can create your own "pod" (personal data store). You decide where this "pod" is hosted, who gets access to which pieces of data, and how the data is shared through the pod.
Berners-Lee believes Solid "will, in an entirely new way, empower individuals, developers and businesses to conceive, build and find innovative, trusted and beneficial applications and services."
Berners-Lee believes Solid will, in an entirely new way, empower individuals, developers and businesses to conceive, build and find innovative, trusted and beneficial applications and services.
Developers need to integrate Solid into their apps and websites. Solid is still at an early stage, so there are no Solid apps yet, but the project website says that "the first Solid apps are under development."
Berners-Lee has founded a startup called [Inrupt][5] and has taken a leave of absence from MIT to work full-time on Solid, to take it "from the vision of a few to the reality of many".
Berners-Lee has founded a startup called [Inrupt][5] and has taken a sabbatical from MIT to work full-time on Solid, to take it "from the vision of a few to the reality of many".
If you are interested in Solid, [learn how to build apps][6] or [contribute to the project][7] in your own way. Of course, building Solid and driving its wide adoption will take a lot of effort, so every bit of contribution will help the distributed web succeed.
If you are interested in Solid, you can [learn how to build apps][6] or [contribute to the project][7] in your own way. Of course, building Solid and driving its wide adoption will take a lot of effort, so every bit of contribution will help the decentralized web succeed.
Do you think the [distributed web][8] will ever become a reality? What do you think of the distributed web, and of the Solid project in particular?
Do you think the [decentralized web][8] will ever become a reality? What do you think of the decentralized web, and of the Solid project in particular?
--------------------------------------------------------------------------------
@ -30,7 +30,7 @@ via: https://itsfoss.com/solid-decentralized-web/
Author: [Abhishek Prakash][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [ypingcn](https://github.com/ypingcn)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

10
scripts/badge.sh Executable file
View File

@ -0,0 +1,10 @@
#!/usr/bin/env bash
# Regenerate the badges
set -o errexit
SCRIPTS_DIR=$(cd $(dirname "$0") && pwd)
BUILD_DIR=$(cd $SCRIPTS_DIR/.. && pwd)/build
mkdir -p ${BUILD_DIR}/badge
for catalog in published translated translating sources; do
    ${SCRIPTS_DIR}/badge/show_status.sh -s ${catalog} > ${BUILD_DIR}/badge/${catalog}.svg
done

92
scripts/badge/show_status.sh Executable file
View File

@ -0,0 +1,92 @@
#!/usr/bin/env bash
set -e

function help()
{
    cat <<EOF
Usage: ${0##*/} [+-s] [published] [translated] [translating] [sources]
Show the number of published, translated, translating and to-be-translated articles
-s  output in SVG format
EOF
}

while getopts :s OPT; do
    case $OPT in
        s|+s)
            show_format="svg"
            ;;
        *)
            help
            exit 2
    esac
done
shift $(( OPTIND - 1 ))
OPTIND=1

declare -A catalog_comment_dict
catalog_comment_dict=([translated]="待校对" [published]="已发布" [translating]="翻译中" [sources]="待翻译")

function count_files_under_dir()
{
    local dir=$1
    local pattern=$2
    find ${dir} -name "${pattern}" -type f |wc -l
}

cd "$(dirname $0)/../.." # enter the TranslateProject root

for catalog in "$@";do
    case "${catalog}" in
        published)
            num=$(count_files_under_dir "${catalog}" "[0-9]*.md")
            ;;
        translated)
            num=$(count_files_under_dir "${catalog}" "[0-9]*.md")
            ;;
        translating)
            num=$(git grep -niE "translat|fanyi|翻译" sources/*.md |awk -F ":" '{if ($2<=3) print $1}' |wc -l)
            ;;
        sources)
            total=$(count_files_under_dir "${catalog}" "[0-9]*.md")
            translating_num=$(git grep -niE "translat|fanyi|翻译" sources/*.md |awk -F ":" '{if ($2<=3) print $1}' |wc -l)
            num=$((${total} - ${translating_num}))
            ;;
        *)
            help
            exit 2
    esac
    comment=${catalog_comment_dict[${catalog}]}
    if [[ "${show_format}" == "svg" ]];then
        cat <<EOF
<svg xmlns="http://www.w3.org/2000/svg" width="120" height="20">
<linearGradient id="b" x2="0" y2="100%">
<stop offset="0" stop-color="#bbb" stop-opacity=".1" />
<stop offset="1" stop-opacity=".1" />
</linearGradient>
<mask id="a">
<rect width="132.53125" height="20" rx="3" fill="#fff" />
</mask>
<g mask="url(#a)">
<path fill="#555" d="M0 0 h70.53125 v20 H0 z" />
<path fill="#97CA00" d="M70.53125 0 h62.0 v20 H70.53125 z" />
<path fill="url(#b)" d="M0 0 h132.53125 v20 H0 z" />
</g>
<g fill="#fff" font-family="DejaVu Sans" font-size="11">
<text x="6" y="15" fill="#010101" fill-opacity=".3">${comment}</text>
<text x="6" y="14">${comment}</text>
<text x="74.53125" y="15" fill="#010101" fill-opacity=".3">${num}</text>
<text x="74.53125" y="14">${num}</text>
</g>
</svg>
EOF
    else
        cat<<EOF
${comment}: ${num}
EOF
    fi
done

View File

@ -0,0 +1,93 @@
The Rise and Rise of JSON
======
JSON has taken over the world. Today, when any two applications communicate with each other across the internet, odds are they do so using JSON. It has been adopted by all the big players: Of the ten most popular web APIs, a list consisting mostly of APIs offered by major companies like Google, Facebook, and Twitter, only one API exposes data in XML rather than JSON. Twitter, to take an illustrative example from that list, supported XML until 2013, when it released a new version of its API that dropped XML in favor of using JSON exclusively. JSON has also been widely adopted by the programming rank and file: According to Stack Overflow, a question and answer site for programmers, more questions are now asked about JSON than about any other data interchange format.
![][1]
XML still survives in many places. It is used across the web for SVGs and for RSS and Atom feeds. When Android developers want to declare that their app requires a permission from the user, they do so in their apps manifest, which is written in XML. XML also isnt the only alternative to JSON—some people now use technologies like YAML or Googles Protocol Buffers. But these are nowhere near as popular as JSON. For the time being, JSON appears to be the go-to format for communicating with other programs over the internet.
JSONs dominance is surprising when you consider that as recently as 2005 the web world was salivating over the potential of “Asynchronous JavaScript and XML” and not “Asynchronous JavaScript and JSON.” It is of course possible that this had nothing to do with the relative popularity of the two formats at the time and reflects only that “AJAX” must have seemed a more appealing acronym than “AJAJ.” But even if some people were already using JSON instead of XML in 2005 (and in fact not many people were yet), one still wonders how XMLs fortunes could have declined so precipitously that a mere decade or so later “Asynchronous JavaScript and XML” has become an ironic misnomer. What happened in that decade? How did JSON supersede XML in so many applications? And who came up with this data format now depended on by engineers and systems all over the world?
### The Birth of JSON
The first JSON message was sent in April of 2001. Since this was a historically significant moment in computing, the message was sent from a computer in a Bay-Area garage. Douglas Crockford and Chip Morningstar, co-founders of a technology consulting company called State Software, had gathered in Morningstars garage to test out an idea.
Crockford and Morningstar were trying to build AJAX applications well before the term “AJAX” had been coined. Browser support for what they were attempting was not good. They wanted to pass data to their application after the initial page load, but they had not found a way to do this that would work across all the browsers they were targeting.
Though its hard to believe today, Internet Explorer represented the bleeding edge of web browsing in 2001. As early as 1999, Internet Explorer 5 supported a primordial form of XMLHttpRequest, which programmers could access using a framework called ActiveX. Crockford and Morningstar could have used this technology to fetch data for their application, but they could not have used the same solution in Netscape 4, another browser that they sought to support. So Crockford and Morningstar had to use a different system that worked in both browsers.
The first JSON message looked like this:
```
<html><head><script>
document.domain = 'fudco';
parent.session.receive(
{ to: "session", do: "test",
text: "Hello world" }
)
</script></head></html>
```
Only a small part of the message resembles JSON as we know it today. The message itself is actually an HTML document containing some JavaScript. The part that resembles JSON is just a JavaScript object literal being passed to a function called `receive()`.
Crockford and Morningstar had decided that they could abuse an HTML frame to send themselves data. They could point a frame at a URL that would return an HTML document like the one above. When the HTML was received, the JavaScript would be run, passing the object literal back to the application. This worked as long as you were careful to sidestep browser protections preventing a sub-window from accessing its parent; you can see that Crockford and Morningstar did that by explicitly setting the document domain. (This frame-based technique, sometimes called the hidden frame technique, was commonly used in the late 1990s before the widespread implementation of XMLHttpRequest.)
The amazing thing about the first JSON message is that its not obviously the first usage of a new kind of data format at all. Its just JavaScript! In fact the idea of using JavaScript this way is so straightforward that Crockford himself has said that he wasnt the first person to do it—he claims that somebody at Netscape was using JavaScript array literals to communicate information as early as 1996. Since the message is just JavaScript, it doesnt require any kind of special parsing. The JavaScript interpreter can do it all.
The first ever JSON message actually ran afoul of the JavaScript interpreter. JavaScript reserves an enormous number of words—there are 64 reserved words as of ECMAScript 6—and Crockford and Morningstar had unwittingly used one in their message. They had used `do` as a key, but `do` is reserved. Since JavaScript has so many reserved words, Crockford decided that, rather than avoid using all those reserved words, he would just mandate that all JSON keys be quoted. A quoted key would be treated as a string by the JavaScript interpreter, meaning that reserved words could be used safely. This is why JSON keys are quoted to this day.
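The effect of that rule is easy to see with any JSON parser. Here is a tiny hypothetical sketch in Python (the original exchange was, of course, JavaScript): because every key is a quoted string, a key like `do` causes no trouble.
```
import json

# A hypothetical message echoing the one above; "do" is a reserved word in
# JavaScript, but as a quoted JSON key it is just a string.
message = '{ "to": "session", "do": "test", "text": "Hello world" }'
payload = json.loads(message)
print(payload["do"])        # test
print(json.dumps(payload))  # {"to": "session", "do": "test", "text": "Hello world"}
```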
Crockford and Morningstar realized they had something that could be used in all sorts of applications. They wanted to name their format “JSML”, for JavaScript Markup Language, but found that the acronym was already being used for something called Java Speech Markup Language. So they decided to go with “JavaScript Object Notation”, or JSON. They began pitching it to clients but soon found that clients were unwilling to take a chance on an unknown technology that lacked an official specification. So Crockford decided he would write one.
In 2002, Crockford bought the domain [JSON.org][2] and put up the JSON grammar and an example implementation of a parser. The website is still up, though it now includes a prominent link to the JSON ECMA standard ratified in 2013. After putting up the website, Crockford did little more to promote JSON, but soon found that lots of people were submitting JSON parser implementations in all sorts of different programming languages. JSONs lineage clearly tied it to JavaScript, but it became apparent that JSON was well-suited to data interchange between arbitrary pairs of languages.
### Doing AJAX Wrong
JSON got a big boost in 2005. That year, a web designer and developer named Jesse James Garrett coined the term “AJAX” in a blog post. He was careful to stress that AJAX wasnt any one new technology, but rather “several technologies, each flourishing in its own right, coming together in powerful new ways.” AJAX was the name that Garrett was giving to a new approach to web application development that he had noticed gaining favor. His blog post went on to describe how developers could leverage JavaScript and XMLHttpRequest to build new kinds of applications that were more responsive and stateful than the typical web page. He pointed to Gmail and Flickr as examples of websites already relying on AJAX techniques.
The “X” in “AJAX” stood for XML, of course. But in a follow-up Q&A post, Garrett pointed to JSON as an entirely acceptable alternative to XML. He wrote that “XML is the most fully-developed means of getting data in and out of an AJAX client, but theres no reason you couldnt accomplish the same effects using a technology like JavaScript Object Notation or any similar means of structuring data.”
Developers indeed found that they could easily use JSON to build AJAX applications and many came to prefer it to XML. And so, ironically, the interest in AJAX led to an explosion in JSONs popularity. It was around this time that JSON drew the attention of the blogosphere.
In 2006, Dave Winer, a prolific blogger and the engineer behind a number of XML-based technologies such as RSS and XML-RPC, complained that JSON was reinventing XML for no good reason. Though one might think that a contest between data interchange formats would be unlikely to engender death threats, Winer wrote:
> No doubt I can write a routine to parse [JSON], but look at how deep they went to re-invent, XML itself wasnt good enough for them, for some reason (Id love to hear the reason). Who did this travesty? Lets find a tree and string them up. Now.
Its easy to understand Winers frustration. XML has never been widely loved. Even Winer has said that he does not love XML. But XML was designed to be a system that could be used by everyone for almost anything imaginable. To that end, XML is actually a meta-language that allows you to define domain-specific languages for individual applications—RSS, the web feed technology, and SOAP (Simple Object Access Protocol) are examples. Winer felt that it was important to work toward consensus because of all the benefits a common interchange format could bring. He felt that XMLs flexibility should be able to accommodate everybodys needs. And yet here was JSON, a format offering no benefits over XML except those enabled by throwing out the cruft that made XML so flexible.
Crockford saw Winers blog post and left a comment on it. In response to the charge that JSON was reinventing XML, Crockford wrote, “The good thing about reinventing the wheel is that you can get a round one.”
### JSON vs XML
By 2014, JSON had been officially specified by both an ECMA standard and an RFC. It had its own MIME type. JSON had made it to the big leagues.
Why did JSON become so much more popular than XML?
On [JSON.org][2], Crockford summarizes some of JSONs advantages over XML. He writes that JSON is easier for both humans and machines to understand, since its syntax is minimal and its structure is predictable. Other bloggers have focused on XMLs verbosity and “the angle bracket tax.” Each opening tag in XML must be matched with a closing tag, meaning that an XML document contains a lot of redundant information. This can make an XML document much larger than an equivalent JSON document when uncompressed, but, perhaps more importantly, it also makes an XML document harder to read.
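To make the angle bracket tax concrete, here is a small hypothetical sketch in Python, serializing the same made-up record both ways with standard-library modules.
```
import json
import xml.etree.ElementTree as ET

record = {"id": "42", "name": "Ada Lovelace", "email": "ada@example.com"}

# JSON: each field name appears once.
as_json = json.dumps(record)

# XML: every field needs an opening and a closing tag.
root = ET.Element("person")
for key, value in record.items():
    ET.SubElement(root, key).text = value
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)  # {"id": "42", "name": "Ada Lovelace", "email": "ada@example.com"}
print(as_xml)   # <person><id>42</id><name>Ada Lovelace</name><email>ada@example.com</email></person>
print(len(as_json), "characters of JSON vs", len(as_xml), "of XML")
```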
Crockford has also claimed that another enormous advantage for JSON is that JSON was designed as a data interchange format. It was meant to carry structured information between programs from the very beginning. XML, though it has been used for the same purpose, was originally designed as a document markup language. It evolved from SGML (Standard Generalized Markup Language), which in turn evolved from a markup language called Scribe, intended as a word processing system similar to LaTeX. In XML, a tag can contain what is called “mixed content,” or text with inline tags surrounding words or phrases. This recalls the image of an editor marking up a manuscript with a red or blue pen, which is arguably the central metaphor of a markup language. JSON, on the other hand, does not support a clear analogue to mixed content, but that means that its structure can be simpler. A document is best modeled as a tree, but by throwing out the document idea Crockford could limit JSON to dictionaries and arrays, the basic and familiar elements all programmers use to build their programs.
Finally, my own hunch is that people disliked XML because it was confusing, and it was confusing because it seemed to come in so many different flavors. At first blush, its not obvious where the line is between XML proper and its sub-languages like RSS, ATOM, SOAP, or SVG. The first lines of a typical XML document establish the XML version and then the particular sub-language the XML document should conform to. That is a lot of variation to account for already, especially when compared to JSON, which is so straightforward that no new version of the JSON specification is ever expected to be written. The designers of XML, in their attempt to make XML the one data interchange format to rule them all, fell victim to that classic programmers pitfall: over-engineering. XML was so generalized that it was hard to use for something simple.
In 2000, a campaign was launched to get HTML to conform to the XML standard. A specification was published for XML-compliant HTML, thereafter known as XHTML. Some browser vendors immediately started supporting the new standard, but it quickly became obvious that the vast HTML-producing public were unwilling to revise their habits. The new standard called for stricter validation of XHTML than had been the norm for HTML, but too many websites depended on HTMLs forgiving rules. By 2009, an attempt to write a second version of the XHTML standard was aborted when it became clear that the future of HTML was going to be HTML5, a standard that did not insist on XML compliance.
If the XHTML effort had succeeded, then maybe XML would have become the common data format that its designers hoped it would be. Imagine a world in which HTML documents and API responses had the exact same structure. In such a world, JSON might not have become as ubiquitous as it is today. But I read the failure of XHTML as a kind of moral defeat for the XML camp. If XML wasnt the best tool for HTML, then maybe there were better tools out there for other applications also. In that world, our world, it is easy to see how a format as simple and narrowly tailored as JSON could find great success.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][3] on Twitter or subscribe to the [RSS feed][4] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/09/21/the-rise-and-rise-of-json.html
Author: [Two-Bit History][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://twobithistory.org/images/json.svg
[2]: http://JSON.org
[3]: https://twitter.com/TwoBitHistory
[4]: https://twobithistory.org/feed.xml

View File

@ -0,0 +1,50 @@
The Most Important Database You've Never Heard of
======
In 1962, JFK challenged Americans to send a man to the moon by the end of the decade, inspiring a heroic engineering effort that culminated in Neil Armstrongs first steps on the lunar surface. Many of the fruits of this engineering effort were highly visible and sexy—there were new spacecraft, new spacesuits, and moon buggies. But the Apollo Program was so staggeringly complex that new technologies had to be invented even to do the mundane things. One of these technologies was IBMs Information Management System (IMS).
IMS is a database management system. NASA needed one in order to keep track of all the parts that went into building a Saturn V rocket, which—because there were two million of them—was expected to be a challenge. Databases were a new idea in the 1960s and there werent any already available for NASA to use, so, in 1965, NASA asked IBM to work with North American Aviation and Caterpillar Tractor to create one. By 1968, IBM had installed a working version of IMS at NASA, though at the time it was called ICS/DL/I for “Informational Control System and Data Language/Interface.” (IBM seems to have gone through a brief, unfortunate infatuation with the slash; see [PL/I][1].) Two years later, IBM rebranded ICS/DL/I as “IMS” and began selling it to other customers. It was one of the first commercially available database management systems.
The incredible thing about IMS is that it is still in use today. And not just on a small scale: Banks, insurance companies, hospitals, and government agencies still use IMS for all sorts of critical tasks. Over 95% of Fortune 1000 companies use IMS in some capacity, as do all of the top five US banks. Whenever you withdraw cash from an ATM, the odds are exceedingly good that you are interacting with IMS at some point in the course of your transaction. In a world where the relational database is an old workhorse increasingly in competition with trendy new NoSQL databases, IMS is a freaking dinosaur. It is a relic from an era before the relational database was even invented, which didnt happen until 1970. And yet it seems to be the database system in charge of all the important stuff.
I think this makes IMS pretty interesting. Depending on how you feel about relational databases, it either offers insight into how the relational model improved on its predecessors or else exemplifies an alternative model better suited to certain problems.
IMS works according to a hierarchical model, meaning that, instead of thinking about data as tables that can be brought together using JOIN operations, IMS thinks about data as trees. Each kind of record you store can have other kinds of records as children; these child record types represent additional information that you might be interested in given a record of the parent type.
To take an example, say that you want to store information about bank customers. You might have one type of record to represent customers and another type of record to represent accounts. Like in a relational database, where each table has columns, these records will have different fields; we might want to have a first name field, a last name field, and a city field for each customer. We must then decide whether we are likely to first lookup a customer and then information about that customers account, or whether we are likely to first lookup an account and then information about that accounts owner. Assuming we decide that we will access customers first, then we will make our account record type a child of our customer record type. Diagrammed, our database model would look something like this:
![][2]
And an actual database might look like:
![][3]
By modeling our data this way, we are hewing close to the reality of how our data is stored. Each parent record includes pointers to its children, meaning that moving down our tree from the root node is efficient. (Actually, each parent basically stores just one pointer to the first of its children. The children in turn contain pointers to their siblings. This ensures that the size of a record does not vary with the number of children it has.) This efficiency can make data accesses very fast, provided that we are accessing our data in ways that we anticipated when we first structured our database. According to IBM, an IMS instance can process over 100,000 transactions a second, which is probably a large part of why IMS is still used, particularly at banks. But the downside is that we have lost a lot of flexibility. If we want to access our data in ways we did not anticipate, we will have a hard time.
To illustrate this, consider what might happen if we decide that we would like to access accounts before customers. Perhaps customers are calling in to update their addresses, and we would like them to uniquely identify themselves using their account numbers. So we want to use an account number to find an account, and then from there find the accounts owner. But since all accesses start at the root of our tree, theres no way for us to get to an account efficiently without first deciding on a customer. To fix this problem, we could introduce a second tree or hierarchy starting with account records; these account records would then have customer records as children. This would let us access accounts and then customers efficiently. But it would involve duplicating information that we already have stored in our database—we would have two trees storing the same information in different orders. Another option would be to establish an index of accounts that could point us to the right account record given an account number. That would work too, but it would entail extra work during insert and update operations in the future.
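A toy sketch (in Python, with made-up field names, and no claim to resemble IMS internals) shows the asymmetry: following the hierarchy downward is a direct lookup, while starting from an account means scanning every customer unless we duplicate the tree or maintain an index.
```
# A toy, in-memory stand-in for the customer/account hierarchy described above.
customers = {
    "cust-1": {"name": "Ada", "city": "London",
               "accounts": {"acct-100": {"balance": 12.50},
                            "acct-101": {"balance": 99.00}}},
    "cust-2": {"name": "Grace", "city": "Arlington",
               "accounts": {"acct-200": {"balance": 7.00}}},
}

def account_via_customer(cust_id: str, acct_id: str) -> dict:
    # With the grain of the hierarchy: customer first, then account. Cheap.
    return customers[cust_id]["accounts"][acct_id]

def customer_via_account(acct_id: str) -> str:
    # Against the grain: there is no entry point, so scan every customer.
    for cust_id, record in customers.items():
        if acct_id in record["accounts"]:
            return cust_id
    raise KeyError(acct_id)

print(account_via_customer("cust-1", "acct-101"))  # {'balance': 99.0}
print(customer_via_account("acct-200"))            # cust-2
```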
It was precisely this inflexibility and the problem of duplicated information that pushed E. F. Codd to propose the relational model. In his 1970 paper, A Relational Model of Data for Large Shared Data Banks, he states at the outset that he intends to present a model for data storage that can protect users from having to know anything about how their data is stored. Looked at one way, the hierarchical model is entirely an artifact of how the designers of IMS chose to store data. It is a bottom-up model, the implication of a physical reality. The relational model, on the other hand, is an abstract model based on relational algebra, and is top-down in that the data storage scheme can be anything provided it accommodates the model. The relational models great advantage is that, just because youve made decisions that have caused the database to store your data in a particular way, you wont find yourself effectively unable to make certain queries.
All that said, the relational model is an abstraction, and we all know abstractions arent free. Banks and large institutions have stuck with IMS partly because of the performance benefits, though its hard to say if those benefits would be enough to keep them from switching to a modern database if they werent also trying to avoid rewriting mission-critical legacy code. However, todays popular NoSQL databases demonstrate that there are people willing to drop the conveniences of the relational model in return for better performance. Something like MongoDB, which encourages its users to store data in a denormalized form, isnt all that different from IMS. If you choose to store some entity inside of another JSON record, then in effect you have created something like the IMS hierarchy, and you have constrained your ability to query for that data in the future. But perhaps thats a tradeoff youre willing to make. So, even if IMS hadnt predated E. F. Codds relational model by several years, there are still reasons why IMS creators might not have adopted the relational model wholesale.
Unfortunately, IMS isnt something that you can download and take for a spin on your own computer. First of all, IMS is not free, so you would have to buy it from IBM. But the bigger problem is that IMS only runs on IBM mainframes like the IBM z13. Thats a shame, because it would be a joy to play around with IMS and get a sense for exactly how it differs from something like MySQL. But even without that opportunity, its interesting to think about software systems that work in ways we dont expect or arent used to. And its especially interesting when those systems, alien as they are, turn out to undergird your local hospital, the entire financial sector, and even the federal government.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][4] on Twitter or subscribe to the [RSS feed][5] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/10/07/the-most-important-database.html
Author: [Two-Bit History][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/PL/I
[2]: https://twobithistory.org/images/hierarchical-model.png
[3]: https://twobithistory.org/images/hierarchical-db.png
[4]: https://twitter.com/TwoBitHistory
[5]: https://twobithistory.org/feed.xml

View File

@ -0,0 +1,84 @@
The Ruby Story
======
Ruby has always been one of my favorite languages, though Ive sometimes found it hard to express why that is. The best Ive been able to do is this musical analogy: Whereas Python feels to me like punk rock—its simple, predictable, but rigid—Ruby feels like jazz. Ruby gives programmers a radical freedom to express themselves, though that comes at the cost of added complexity and can lead to programmers writing programs that dont make immediate sense to other people.
I've always been aware that freedom of expression is a core value of the Ruby community. But what I didn't appreciate is how deeply important it was to the development and popularization of Ruby in the first place. One might create a programming language in pursuit of better performance, or perhaps timesaving abstractions—the Ruby story is interesting because instead the goal was, from the very beginning, nothing more or less than the happiness of the programmer.
### Yukihiro Matsumoto
Yukihiro Matsumoto, also known as “Matz,” graduated from the University of Tsukuba in 1990. Tsukuba is a small town just northeast of Tokyo, known as a center for scientific research and technological development. The University of Tsukuba is particularly well-regarded for its STEM programs. Matsumoto studied Information Science, with a focus on programming languages. For a time he worked in a programming language lab run by Ikuo Nakata.
Matsumoto started working on Ruby in 1993, only a few years after graduating. He began working on Ruby because he was looking for a scripting language with features that no existing scripting language could provide. He was using Perl at the time, but felt that it was too much of a “toy language.” Python also fell short; in his own words:
> I knew Python then. But I didnt like it, because I didnt think it was a true object-oriented language—OO features appeared to be an add-on to the language. As a language maniac and OO fan for 15 years, I really wanted a genuine object-oriented, easy-to-use scripting language. I looked for one, but couldnt find one.
So one way of understanding Matsumotos motivations in creating Ruby is that he was trying to create a better, object-oriented version of Perl.
But at other times, Matsumoto has said that his primary motivation in creating Ruby was simply to make himself and others happier. Toward the end of a Google tech talk that Matsumoto gave in 2008, he showed the following slide:
![][1]
He told his audience,
> I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of the Ruby language.
Matsumoto goes on to joke that he created Ruby for selfish reasons, because he was so underwhelmed by other languages that he just wanted to create something that would make him happy.
The slide epitomizes Matsumotos humble style. Matsumoto, it turns out, is a practicing Mormon, and Ive wondered whether his religious commitments have any bearing on his legendary kindness. In any case, this kindness is so well known that the Ruby community has a principle known as MINASWAN, or “Matz Is Nice And So We Are Nice.” The slide must have struck the audience at Google as an unusual one—I imagine that any random slide drawn from a Google tech talk is dense with code samples and metrics showing how one engineering solution is faster or more efficient than another. Few, I suspect, come close to stating nobler goals more simply.
Ruby was influenced primarily by Perl. Perl was created by Larry Wall in the late 1980s as a means of processing and transforming text-based reports. It became well-known for its text processing and regular expression capabilities. A Perl program contains many syntactic elements that would be familiar to a Ruby programmer—there are `$` signs, `@` signs, and even `elsif`s, which I'd always thought were one of Ruby's less felicitous idiosyncrasies. On a deeper level, Ruby borrows much of Perl's regular expression handling and standard library.
But Perl was by no means the only influence on Ruby. Prior to beginning work on Ruby, Matsumoto worked on a mail client written entirely in Emacs Lisp. The experience taught him a lot about the inner workings of Emacs and the Lisp language, which Matsumoto has said influenced the underlying object model of Ruby. On top of that he added a Smalltalk-style message passing system which forms the basis for any behavior relying on Ruby's `#method_missing`. Matsumoto has also claimed Ada and Eiffel as influences on Ruby.
When it came time to decide on a name for Ruby, Matsumoto and a colleague, Keiju Ishitsuka, considered several alternatives. They were looking for something that suggested Rubys relationship to Perl and also to shell scripting. In an [instant message exchange][2] that is well-worth reading, Ishitsuka and Matsumoto probably spend too much time thinking about the relationship between shells, clams, oysters, and pearls and get close to calling the Ruby language “Coral” or “Bisque” instead. Thankfully, they decided to go with “Ruby”, the idea being that it was, like “pearl”, the name of a valuable jewel. It also turns out that the birthstone for June is a pearl while the birthstone for July is a ruby, meaning that the name “Ruby” is another tongue-in-cheek “incremental improvement” name like C++ or C#.
### Ruby Goes West
Ruby grew popular in Japan very quickly. Soon after its initial release in 1995, Matz was hired by a Japanese software consulting group called Netlab (also known as Network Applied Communication Laboratory) to work on Ruby full-time. By 2000, only five years after it was initially released, Ruby was more popular in Japan than Python. But it was only just beginning to make its way to English-speaking countries. There had been a Japanese-language mailing list for Ruby discussion since almost the very beginning of Rubys existence, but the English-language mailing list wasnt started until 1998. Initially, the English-language mailing list was used by Japanese Rubyists writing in English, but this gradually changed as awareness of Ruby grew.
In 2000, Dave Thomas published Programming Ruby, the first English-language book to cover Ruby. The book became known as the “pickaxe” book for the pickaxe it featured on its cover. It introduced Ruby to many programmers in the West for the first time. Like it had in Japan, Ruby spread quickly, and by 2002 the English-language Ruby mailing list had more traffic than the original Japanese-language mailing list.
By 2005, Ruby had become more popular, but it was still not a mainstream programming language. That changed with the release of Ruby on Rails. Ruby on Rails was the “killer app” for Ruby, and it did more than any other project to popularize Ruby. After the release of Ruby on Rails, interest in Ruby shot up across the board, as measured by the TIOBE language index:
![][3]
Its sometimes joked that the only programs anybody writes in Ruby are Ruby-on-Rails web applications. That makes it sound as if Ruby on Rails completely took over the Ruby community, which is only partly true. While Ruby has certainly come to be known as that language people write Rails apps in, Rails owes as much to Ruby as Ruby owes to Rails.
The Ruby philosophy heavily informed the design and implementation of Rails. David Heinemeier Hansson, who created Rails, often talks about how his first contact with Ruby was an almost religious experience. He has said that the encounter was so transformative that it “imbued him with a calling to do missionary work in service of Matzs creation.” For Hansson, Rubys no-shackles approach was a politically courageous rebellion against the top-down impositions made by languages like Python and Java. He appreciated that the language trusted him and empowered him to make his own judgements about how best to express his programs.
Like Matsumoto, Hansson claims that he created Rails out of a frustration with the status quo and a desire to make things better for himself. He, like Matsumoto, prioritized programmer happiness above all else, evaluating additions to Rails by what he calls “The Principle of The Bigger Smile.” Whatever made Hansson smile more was what made it into the Rails codebase. As a result, Rails would come to include unorthodox features like the “Inflector” class (which tries to map singular class names to plural database table names automatically) and Rails `Time` extensions (allowing programmers to write cute expressions like `2.days.ago`). To some, these features were truly weird, but the success of Rails is testament to the number of people who found it made their lives much easier.
And so, while it might seem that Rails was an incidental application of Ruby that happened to become extremely popular, Rails in fact embodies many of Ruby's core principles. Furthermore, it's hard to see how Rails could have been built in any other language, given its dependence on Ruby's macro-like class method calls to implement things like model associations. Some people might take the fact that so much of Ruby development revolves around Ruby on Rails as a sign of an unhealthy ecosystem, but there are good reasons that Ruby and Ruby on Rails are so intertwined.
### The Future of Ruby
People seem to have an inordinate amount of interest in whether or not Ruby (and Ruby on Rails) are dying. Since as early as 2011, it seems that Stack Overflow and Quora have been full of programmers asking whether or not they should bother learning Ruby if it will no longer be around in the next few years. These concerns are not unjustified; according to the TIOBE index and to Stack Overflow trends, Ruby and Ruby on Rails have been shrinking in popularity. Though Ruby on Rails was once the hot new thing, it has since been eclipsed by hotter and newer frameworks.
One theory for why this has happened is that programmers are abandoning dynamically typed languages for statically typed ones. Analysts at TIOBE index figure that a rise in quality requirements have made runtime exceptions increasingly unacceptable. They cite TypeScript as an example of this trend—a whole new version of JavaScript was created just to ensure that client-side code could be written with the benefit of compile-time safety guarantees.
A more likely answer, I think, is just that Ruby on Rails now has many more competitors than it once did. When Rails was first introduced in 2005, there weren't that many ways to create web applications—the main alternative was Java. Today, you can create web applications using great frameworks built for Go, JavaScript, or Python, to name only the most popular options. The web world also seems to be moving toward a more distributed architecture for applications, meaning that, rather than having one codebase responsible for everything from database access to view rendering, responsibilities are split between different components that focus on doing one thing well. Rails feels overbroad and bloated for something as focused as a JSON API that talks to a JavaScript frontend.
All that said, there are reasons to be optimistic about Rubys future. Both Rails and Ruby continue to be actively developed. Matsumoto and others are working hard on Rubys third major release, which they aim to make three times faster than the existing version of Ruby, possibly alleviating the performance concerns that have always dogged Ruby. And even if the world of web frameworks has become more diverse since 2005, that doesnt mean that there wont always be room for Ruby on Rails. It is now a mature tool with an enormous amount of built-in power that will always be a good choice for certain kinds of applications.
But even if Ruby and Rails go the way of the dinosaurs, one thing that seems certain to survive is the Ruby ethos of programmer happiness. Ruby has had a profound influence on the design of many new programming languages, which have adopted many of its best ideas. Other new languages have tried to be “more modern” interpretations of Ruby: Elixir, for example, is a version of Ruby that emphasizes the functional programming paradigm, while Crystal, which is still in development, aims to be a statically typed version of Ruby. Many programmers around the world have fallen in love with Ruby and its syntax, so we can count on its influence persisting for a long while to come.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][4] on Twitter or subscribe to the [RSS feed][5] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/11/19/the-ruby-story.html
Author: [Two-Bit History][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://twobithistory.org/images/matz.png
[2]: http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/88819
[3]: https://twobithistory.org/images/tiobe_ruby.png
[4]: https://twitter.com/TwoBitHistory
[5]: https://twobithistory.org/feed.xml

View File

@ -0,0 +1,44 @@
Important Papers: Codd and the Relational Model
======
Its hard to believe today, but the relational database was once the cool new kid on the block. In 2017, the relational model competes with all sorts of cutting-edge NoSQL technologies that make relational database systems seem old-fashioned and boring. Yet, 50 years ago, none of the dominant database systems were relational. Nobody had thought to structure their data that way. When the relational model did come along, it was a radical new idea that revolutionized the database world and spawned a multi-billion dollar industry.
The relational model was introduced in 1970. Edgar F. Codd, a researcher at IBM, published a [paper][1] called “A Relational Model of Data for Large Shared Data Banks.” The paper was a rewrite of a paper he had circulated internally at IBM a year earlier. The paper is unassuming; Codd does not announce in his abstract that he has discovered a brilliant new approach to storing data. He only claims to have employed a novel tool (the mathematical notion of a “relation”) to address some of the inadequacies of the prevailing database models.
In 1970, there were two schools of thought about how to structure a database: the hierarchical model and the network model. The hierarchical model was used by IBMs Information Management System (IMS), the dominant database system at the time. The network model had been specified by a standards committee called CODASYL (which also—random tidbit—specified COBOL) and implemented by several other database system vendors. The two models were not really that different; both could be called “navigational” models. They persisted tree or graph data structures to disk using pointers to preserve the links between the data. Retrieving a record stored toward the bottom of the tree would involve first navigating through all of its ancestor records. These databases were fast (IMS is still used by many financial institutions partly for this reason, see [this excellent blog post][2]) but inflexible. Woe unto those database administrators who suddenly found themselves needing to query records from the bottom of the tree without having an obvious place to start at the top.
Codd saw this inflexibility as a symptom of a larger problem. Programs using a hierarchical or network database had to know about how the stored data was structured. Programs had to know this because they were responsible for navigating down this structure to find the information they needed. This was so true that when Charles Bachman, a major pioneer of the network model, received a Turing Award for his work in 1973, he gave a speech titled “[The Programmer as Navigator][3].” Of course, if programs were saddled with this responsibility, then they would immediately break if the structure of the database ever changed. In the introduction to his 1970 paper, Codd motivates the search for a better model by arguing that we need “data independence,” which he defines as “the independence of application programs and terminal activities from growth in data types and changes in data representation.” The relational model, he argues, “appears to be superior in several respects to the graph or network model presently in vogue,” partly because, among other benefits, the relational model “provides a means of describing data with its natural structure only.” By this he meant that programs could safely ignore any artificial structures (like trees) imposed upon the data for storage and retrieval purposes only.
To further illustrate the problem with the navigational models, Codd devotes the first section of his paper to an example data set involving machine parts and assembly projects. This dataset, he says, could be represented in existing systems in at least five different ways. Any program that is developed assuming one of five structures will fail when run against at least three of the other structures. The program could instead try to figure out ahead of time which of the structures it might be dealing with, but it would be difficult to do so in this specific case and practically impossible in the general case. So, as long as the program needs to know about how the data is structured, we cannot switch to an alternative structure without breaking the program. This is a real bummer because (and this is from the abstract) “changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.”
Codd then introduces his relational model. This model would be refined and expanded in subsequent papers: In 1971, Codd wrote about ALPHA, a SQL-like query language he created; in another 1971 paper, he introduced the first three normal forms we know and love today; and in 1972, he further developed relational algebra and relational calculus, the mathematically rigorous underpinnings of the relational model. But Codds 1970 paper contains the kernel of the relational idea:
> The term relation is used here in its accepted mathematical sense. Given sets S1, S2, ..., Sn (not necessarily distinct), R is a relation on these n sets if it is a set of n-tuples each of which has its first element from S1, its second element from S2, and so on. We shall refer to Sj as the jth domain of R. As defined above, R is said to have degree n. Relations of degree 1 are often called unary, degree 2 binary, degree 3 ternary, and degree n n-ary.
Today, we call a relation a table, and a domain an attribute or a column. The word “table” actually appears nowhere in the paper, though Codds visual representations of relations (which he calls “arrays”) do resemble tables. Codd defines several more terms, some of which we continue to use and others we have replaced. He explains primary and foreign keys, as well as what he calls the “active domain,” which is the set of all distinct values that actually appear in a given domain or column. He then spends some time distinguishing between a “simple” and a “nonsimple” domain. A simple domain contains “atomic” or “nondecomposable” values, like integers. A nonsimple domain has relations as elements. The example Codd gives here is that of an employee with a salary history. The salary history is not one salary but a collection of salaries each associated with a date. So a salary history cannot be represented by a single number or string.
It's not obvious how one could store a nonsimple domain in a multi-dimensional array, AKA a table. The temptation might be to denote the nonsimple relationship using some kind of pointer, but then we would be repeating the mistakes of the navigational models. Instead, Codd introduces normalization, which at least in the 1970 paper involves nothing more than turning nonsimple domains into simple ones. This is done by expanding the child relation so that it includes the primary key of the parent. Each tuple of the child relation references its parent using simple domains, eliminating the need for a nonsimple domain in the parent. Normalization means no pointers, sidestepping all the problems they cause in the navigational models.
At this point, anyone reading Codds paper would have several questions, such as “Okay, how would I actually query such a system?” Codd mentions the possibility of creating a universal sublanguage for querying relational databases from other programs, but declines to define such a language in this particular paper. He does explain, in mathematical terms, many of the fundamental operations such a language would have to support, like joins, “projection” (`SELECT` in SQL), and “restriction” (`WHERE`). The amazing thing about Codds 1970 paper is that, really, all the ideas are there—weve been writing `SELECT` statements and joins for almost half a century now.
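As a rough modern-day sketch of those ideas (using SQL and Python's sqlite3 module, neither of which existed in 1970, and made-up table names), the salary-history example normalizes into two relations, and a single query combines a join, a projection (the `SELECT` list), and a restriction (the `WHERE` clause).
```
import sqlite3

# Normalization as described above: the child relation (salary_history) carries
# the parent's primary key instead of living inside a "nonsimple domain".
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE salary_history (
        emp_id INTEGER REFERENCES employee(emp_id),
        effective_date TEXT,
        salary INTEGER
    );
    INSERT INTO employee VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO salary_history VALUES
        (1, '1970-01-01', 1000),
        (1, '1972-01-01', 1500),
        (2, '1971-06-01', 1200);
""")

# One query: join, projection (SELECT list), restriction (WHERE clause).
rows = db.execute("""
    SELECT e.name, s.effective_date, s.salary
    FROM employee AS e JOIN salary_history AS s ON s.emp_id = e.emp_id
    WHERE s.salary > 1100
    ORDER BY s.effective_date
""").fetchall()
print(rows)  # [('Grace', '1971-06-01', 1200), ('Ada', '1972-01-01', 1500)]
```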
Codd wraps up the paper by discussing ways in which a normalized relational database, on top of its other benefits, can reduce redundancy and improve consistency in data storage. Altogether, the paper is only 11 pages long and not that difficult of a read. I encourage you to look through it yourself. It would be another ten years before Codds ideas were properly implemented in a functioning system, but, when they finally were, those systems were so obviously better than previous systems that they took the world by storm.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][4] on Twitter or subscribe to the [RSS feed][5] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/12/29/codd-relational-model.html
Author: [Two-Bit History][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://cs.uwaterloo.ca/~david/cs848s14/codd-relational.pdf
[2]: https://twobithistory.org/2017/10/07/the-most-important-database.html
[3]: https://pdfs.semanticscholar.org/f371/d196bf0e7b43df6dcbbc44de461925a21709.pdf
[4]: https://twitter.com/TwoBitHistory
[5]: https://twobithistory.org/feed.xml

View File

@ -1,4 +1,4 @@
How writing can change your career for the better, even if you don't identify as a writer Translating by FelixYFZ
How writing can change your career for the better, even if you don't identify as a writer
======
Have you read Marie Kondo's book [The Life-Changing Magic of Tidying Up][1]? Or did you, like me, buy it and read a little bit and then add it to the pile of clutter next to your bed?

View File

@ -0,0 +1,106 @@
Whatever Happened to the Semantic Web?
======
In 2001, Tim Berners-Lee, inventor of the World Wide Web, published an article in Scientific American. Berners-Lee, along with two other researchers, Ora Lassila and James Hendler, wanted to give the world a preview of the revolutionary new changes they saw coming to the web. Since its introduction only a decade before, the web had fast become the worlds best means for sharing documents with other people. Now, the authors promised, the web would evolve to encompass not just documents but every kind of data one could imagine.
They called this new web the Semantic Web. The great promise of the Semantic Web was that it would be readable not just by humans but also by machines. Pages on the web would be meaningful to software programs—they would have semantics—allowing programs to interact with the web the same way that people do. Programs could exchange data across the Semantic Web without having to be explicitly engineered to talk to each other. According to Berners-Lee, Lassila, and Hendler, a typical day living with the myriad conveniences of the Semantic Web might look something like this:
> The entertainment system was belting out the Beatles “We Can Work It Out” when the phone rang. When Pete answered, his phone turned the sound down by sending a message to all the other local devices that had a volume control. His sister, Lucy, was on the line from the doctors office: “Mom needs to see a specialist and then has to have a series of physical therapy sessions. Biweekly or something. Im going to have my agent set up the appointments.” Pete immediately agreed to share the chauffeuring. At the doctors office, Lucy instructed her Semantic Web agent through her handheld Web browser. The agent promptly retrieved the information about Moms prescribed treatment within a 20-mile radius of her home and with a rating of excellent or very good on trusted rating services. It then began trying to find a match between available appointment times (supplied by the agents of individual providers through their Web sites) and Petes and Lucys busy schedules.
The vision was that the Semantic Web would become a playground for intelligent “agents.” These agents would automate much of the work that the world had only just learned to do on the web.
![][1]
For a while, this vision enticed a lot of people. After new technologies such as AJAX led to the rise of what Silicon Valley called Web 2.0, Berners-Lee began referring to the Semantic Web as Web 3.0. Many thought that the Semantic Web was indeed the inevitable next step. A New York Times article published in 2006 quotes a speech Berners-Lee gave at a conference in which he said that the extant web would, twenty years in the future, be seen as only the “embryonic” form of something far greater. A venture capitalist, also quoted in the article, claimed that the Semantic Web would be “profound,” and ultimately “as obvious as the web seems obvious to us today.”
Of course, the Semantic Web we were promised has yet to be delivered. In 2018, we have “agents” like Siri that can do certain tasks for us. But Siri can only do what it can because engineers at Apple have manually hooked it up to a medley of web services each capable of answering only a narrow category of questions. An important consequence is that, without being large and important enough for Apple to care, you cannot advertise your services directly to Siri from your own website. Unlike the physical therapists that Berners-Lee and his co-authors imagined would be able to hang out their shingles on the web, today we are stuck with giant, centralized repositories of information. Todays physical therapists must enter information about their practice into Google or Yelp, because those are the only services that the smartphone agents know how to use and the only ones human beings will bother to check. The key difference between our current reality and the promised Semantic future is best captured by this throwaway aside in the excerpt above: “…appointment times (supplied by the agents of individual providers through **their** Web sites)…”
In fact, over the last decade, the web has not only failed to become the Semantic Web but also threatened to recede as an idea altogether. We now hardly ever talk about “the web” and instead talk about “the internet,” which as of 2016 has become such a common term that newspapers no longer capitalize it. (To be fair, they stopped capitalizing “web” too.) Some might still protest that the web and the internet are two different things, but the distinction gets less clear all the time. The web we have today is slowly becoming a glorified app store, just the easiest way among many to download software that communicates with distant servers using closed protocols and schemas, making it functionally identical to the software ecosystem that existed before the web. How did we get here? If the effort to build a Semantic Web had succeeded, would the web have looked different today? Or have there been so many forces working against a decentralized web for so long that the Semantic Web was always going to be stillborn?
### Semweb Hucksters and Their Metacrap
To some more practically minded engineers, the Semantic Web was, from the outset, a utopian dream.
The basic idea behind the Semantic Web was that everyone would use a new set of standards to annotate their webpages with little bits of XML. These little bits of XML would have no effect on the presentation of the webpage, but they could be read by software programs to divine meaning that otherwise would only be available to humans.
The bits of XML were a way of expressing metadata about the webpage. We are all familiar with metadata in the context of a file system: When we look at a file on our computers, we can see when it was created, when it was last updated, and whom it was originally created by. Likewise, webpages on the Semantic Web would be able to tell your browser who authored the page and perhaps even where that person went to school, or where that person is currently employed. In theory, this information would allow Semantic Web browsers to answer queries across a large collection of webpages. In their article for Scientific American, Berners-Lee and his co-authors explain that you could, for example, use the Semantic Web to look up a person you met at a conference whose name you only partially remember.
Cory Doctorow, a blogger and digital rights activist, published an influential essay in 2001 that pointed out the many problems with depending on voluntarily supplied metadata. A world of “exhaustive, reliable” metadata would be wonderful, he argued, but such a world was “a pipe-dream, founded on self-delusion, nerd hubris, and hysterically inflated market opportunities.” Doctorow had found himself in a series of debates over the Semantic Web at tech conferences and wanted to catalog the serious issues that the Semantic Web enthusiasts (Doctorow calls them “semweb hucksters”) were overlooking. The essay, titled “Metacrap,” identifies seven problems, among them the obvious fact that most web users were likely to provide either no metadata at all or else lots of misleading metadata meant to draw clicks. Even if users were universally diligent and well-intentioned, in order for the metadata to be robust and reliable, users would all have to agree on a single representation for each important concept. Doctorow argued that in some cases a single representation might not be appropriate, desirable, or fair to all users.
Indeed, the web had already seen people abusing the HTML `<meta>` tag (introduced at least as early as HTML 4) in an attempt to improve the visibility of their webpages in search results. In a 2004 paper, Ben Munat, then an academic at Evergreen State College, explains how search engines once experimented with using keywords supplied via the `<meta>` tag to index results, but soon discovered that unscrupulous webpage authors were including tags unrelated to the actual content of their webpage. As a result, search engines came to ignore the `<meta>` tag in favor of using complex algorithms to analyze the actual content of a webpage. Munat concludes that a general-purpose Semantic Web is unworkable, and that the focus should be on specific domains within medicine and science.
Others have also seen the Semantic Web project as tragically flawed, though they have located the flaw elsewhere. Aaron Swartz, the famous programmer and another digital rights activist, wrote in an unfinished book about the Semantic Web published after his death that Doctorow was “attacking a strawman.” Nobody expected that metadata on the web would be thoroughly accurate and reliable, but the Semantic Web, or at least a more realistically scoped version of it, remained possible. The problem, in Swartz’s view, was the “formalizing mindset of mathematics and the institutional structure of academics” that the “semantic Webheads” brought to bear on the challenge. In forums like the World Wide Web Consortium (W3C), a huge amount of effort and discussion went into creating standards before there were any applications out there to standardize. And the standards that emerged from these “Talmudic debates” were so abstract that few of them ever saw widespread adoption. The few that did, like XML, were “uniformly scourges on the planet, offenses against hardworking programmers that have pushed out sensible formats (like JSON) in favor of overly-complicated hairballs with no basis in reality.” The Semantic Web might have thrived if, like the original web, its standards were eagerly adopted by everyone. But that never happened because—as [has been discussed][2] on this blog before—the putative benefits of something like XML are not easy to sell to a programmer when the alternatives are both entirely sufficient and much easier to understand.
### Building the Semantic Web
If the Semantic Web was not an outright impossibility, it was always going to require the contributions of lots of clever people working in concert.
The long effort to build the Semantic Web has been said to consist of four phases. The first phase, which lasted from 2001 to 2005, was the golden age of Semantic Web activity. Between 2001 and 2005, the W3C issued a slew of new standards laying out the foundational technologies of the Semantic future.
The most important of these was the Resource Description Framework (RDF). The W3C issued the first version of the RDF standard in 2004, but RDF had been floating around since 1997, when a W3C working group introduced it in a draft specification. RDF was originally conceived of as a tool for modeling metadata and was partly based on earlier attempts by Ramanathan Guha, an Apple engineer, to develop a metadata system for files stored on Apple computers. The Semantic Web working groups at W3C repurposed RDF to represent arbitrary kinds of general knowledge.
RDF would be the grammar in which Semantic webpages expressed information. The grammar is a simple one: Facts about the world are expressed in RDF as triplets of subject, predicate, and object. Tim Bray, who worked with Ramanathan Guha on an early version of RDF, gives the following example, describing TV shows and movies:
```
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex: <http://www.example.org/> .
ex:vincent_donofrio ex:starred_in ex:law_and_order_ci .
ex:law_and_order_ci rdf:type ex:tv_show .
ex:the_thirteenth_floor ex:similar_plot_as ex:the_matrix .
```
The syntax is not important, especially since RDF can be represented in a number of formats, including XML and JSON. This example is in a format called Turtle, which expresses RDF triplets as straightforward sentences terminated by periods. The three essential sentences, which appear above after the `@prefix` preamble, state three facts: Vincent Donofrio starred in Law and Order, Law and Order is a type of TV Show, and the movie The Thirteenth Floor has a similar plot as The Matrix. (If you dont know who Vincent Donofrio is and have never seen The Thirteenth Floor, I, too, was watching Nickelodeon and sipping Capri Suns in 1999.)
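If it helps to see the triplet model outside of Turtle, the short sketch below stores the same three facts as plain Python tuples and answers one question over them; no real RDF library is involved, so treat the representation as purely illustrative.

```python
# The same three facts as (subject, predicate, object) tuples. This only
# illustrates the triplet model; it is not a real RDF toolchain.
triples = {
    ("vincent_donofrio", "starred_in", "law_and_order_ci"),
    ("law_and_order_ci", "type", "tv_show"),
    ("the_thirteenth_floor", "similar_plot_as", "the_matrix"),
}

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is asserted."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}

# "Which TV shows did Vincent D'Onofrio star in?"
shows = {
    o for o in objects("vincent_donofrio", "starred_in")
    if "tv_show" in objects(o, "type")
}
print(shows)  # {'law_and_order_ci'}
```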
Other specifications finalized and drafted during this first era of Semantic Web development describe all the ways in which RDF can be used. RDF in Attributes (RDFa) defines how RDF can be embedded in HTML so that browsers, search engines, and other programs can glean meaning from a webpage. RDF Schema and another standard called OWL allow RDF authors to demarcate the boundary between valid and invalid RDF statements in their RDF documents. RDF Schema and OWL, in other words, are tools for creating what are known as ontologies, explicit specifications of what can and cannot be said within a specific domain. An ontology might include a rule, for example, expressing that no person can be the mother of another person without also being a parent of that person. The hope was that these ontologies would be widely used not only to check the accuracy of RDF found in the wild but also to make inferences about omitted information.
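The sort of inference such a rule enables can be sketched in a few lines. The rule below (anyone’s mother is also their parent) mirrors the example in the text; the facts and predicate names are invented.

```python
# A toy forward-chaining step for one ontology-style rule:
# (x, mother_of, y) implies (x, parent_of, y).
facts = {("marge", "mother_of", "bart")}

def apply_mother_rule(facts):
    inferred = {(s, "parent_of", o) for (s, p, o) in facts if p == "mother_of"}
    return facts | inferred

print(apply_mother_rule(facts))
# {('marge', 'mother_of', 'bart'), ('marge', 'parent_of', 'bart')}
```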
In 2006, Tim Berners-Lee posted a short article in which he argued that the existing work on Semantic Web standards needed to be supplemented by a concerted effort to make semantic data available on the web. Furthermore, once on the web, it was important that semantic data link to other kinds of semantic data, ensuring the rise of a data-based web as interconnected as the existing web. Berners-Lee used the term “linked data” to describe this ideal scenario. Though “linked data” was in one sense just a recapitulation of the original vision for the Semantic Web, it became a term that people could rally around and thus amounted to a rebranding of the Semantic Web project.
Berners-Lees article launched the second phase of the Semantic Webs development, where the focus shifted from setting standards and building toy examples to creating and popularizing large RDF datasets. Perhaps the most successful of these datasets was [DBpedia][3], a giant repository of RDF triplets extracted from Wikipedia articles. DBpedia, which made heavy use of the Semantic Web standards that had been developed in the first half of the 2000s, was a standout example of what could be accomplished using the W3Cs new formats. Today DBpedia describes 4.58 million entities and is used by organizations like the NY Times, BBC, and IBM, which employed DBpedia as a knowledge source for IBM Watson, the Jeopardy-winning artificial intelligence system.
![][4]
The third phase of the Semantic Web’s development involved adapting the W3C’s standards to fit the actual practices and preferences of web developers. By 2008, JSON had begun its meteoric rise to popularity. Whereas XML came packaged with a bunch of associated technologies of indeterminate purpose (XSLT, XPath, XQuery, XLink), JSON was just JSON. It was less verbose and more readable. Manu Sporny, an entrepreneur and member of the W3C, had already started using JSON at his company and wanted to find an easy way for RDFa and JSON to work together. The result would be JSON-LD, which in essence was RDF reimagined for a world that had chosen JSON over XML. Sporny, together with his CTO, Dave Longley, issued a draft specification of JSON-LD in 2010. For the next few years, JSON-LD and an updated RDF specification would be the primary focus of Semantic Web work at the W3C. JSON-LD could be used on its own or it could be embedded within a `<script>` tag on an HTML page, making it an alternative to both RDF and RDFa.
Work on JSON-LD coincided with the development of [schema.org][5], a centralized collection of simple schemas for describing things that might exist on the web. schema.org was started by Google, Bing, and Yahoo with the express purpose of delivering better search results by agreeing to a common set of vocabularies. schema.org vocabularies, together with JSON-LD, are now used to drive features like Googles Knowledge Graph. The approach was a more practical and less abstract one, where immediate applications in search results were the focus. The schema.org team are careful to state on their website that they are not attempting to create a “universal ontology.”
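To give a feel for what this looks like in practice, here is a small, hedged sketch that builds a JSON-LD document with schema.org terms; the person and property values are invented, and on a real page the JSON would typically sit inside a `<script type="application/ld+json">` tag.

```python
import json

# An illustrative JSON-LD document describing a person with schema.org terms.
# The person and values are made up; only the shape of the document matters.
doc = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Physical Therapist",
    "url": "https://example.com/jane",
}

print(json.dumps(doc, indent=2))
```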
Today, work on the Semantic Web seems to have petered out. The W3C still does some work on the Semantic Web under the heading of “Data Activity,” which might charitably be called the fourth phase of the Semantic Web project. But its telling that the most recent “Data Activity” project is a study of what the W3C must do to improve its standardization process. Even the W3C now appears to recognize that few of its Semantic Web standards have been widely adopted and that simpler standards would have been more successful. The attitude at the W3C seems to be one of retrenchment and introspection, perhaps in the hope of being better prepared when the Semantic Web looks promising again.
### A Lingering Legacy
And so the Semantic Web, as colorfully described by one person, is “as dead as last years roadkill.” At least, the version of the Semantic Web originally proposed by Tim Berners-Lee, which once seemed to be the imminent future of the web, is unlikely to emerge soon. That said, many of the technologies and ideas that were developed amid the push to create the Semantic Web have been repurposed and live on in various applications. As already mentioned, Google relies on Semantic Web technologies—now primarily JSON-LD—to generate useful conceptual summaries next to search results. schema.org maintains a list of “vocabularies” that web developers can use to publish easily understood data for a wide audience—it is a new, more practical imagining of what a public, shared ontology might look like. And to some degree, the many REST APIs now available constitute a diminished Semantic Web. What wasnt possible in 2001 now is: You can easily build applications that make use of data from across the web. The difference is that you must sign up for each API one by one beforehand, which in addition to being wearisome also gives whoever hosts the API enormous control over how you access their data.
Another modern application of Semantic Web technologies, perhaps the most popular and successful in recent years outside of Google, is Facebooks [OpenGraph][6] protocol. The OpenGraph protocol defines a schema that web developers can use (via RDFa) to determine how a web page is displayed when shared in a social media application. For example, a web developer working at the New York Times might use OpenGraph to specify the title and thumbnail that should appear when a New York Times article is shared in Facebook. In one sense, this is an application of Semantic Web technologies true to the Semantic Webs origins in research on metadata. Tagging a webpage with extra information about who wrote it and what it is about is exactly the kind of metadata authoring the Semantic Web was going to depend on. But in another sense, OpenGraph is an application of Semantic Web technologies to further a purpose somewhat at odds with the philosophy of the web. The metadata isnt meant to be general-purpose, after all. People tag their webpages using OpenGraph because they want links to their content to unfurl properly in Facebook. And the more information Facebook knows about your website, the closer Facebook gets to simply reproducing your entire website within Facebook, portending a future where the open web is a mythical land beyond Facebooks towering blue walls.
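As a rough illustration, the sketch below renders the four basic OpenGraph properties as RDFa-style `<meta>` tags for a made-up article page; ogp.me documents the full vocabulary.

```python
# Illustrative only: render the four basic OpenGraph properties (og:title,
# og:type, og:url, og:image) as RDFa-style <meta> tags for an invented page.
page = {
    "og:title": "An Example Article",
    "og:type": "article",
    "og:url": "https://example.com/an-example-article",
    "og:image": "https://example.com/thumbnail.jpg",
}

tags = "\n".join(
    f'<meta property="{prop}" content="{content}" />'
    for prop, content in page.items()
)
print(tags)
```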
Whats fascinating about JSON-LD and OpenGraph is that you can use them without knowing anything about subject-predicate-object triplets, RDF, RDF Schema, ontologies, OWL, or really any other Semantic Web technologies—you dont even have to know XML. Manu Sporny has even said that the JSON-LD working group at W3C made a special effort to avoid references to RDF in the JSON-LD specification. This is almost certainly why these technologies have succeeded and continue to be popular. Nobody wants to use a tool that can only be fully understood by reading a whole family of specifications.
Its interesting to consider what might have happened if simple formats like JSON-LD had appeared earlier. The Semantic Web could have sidestepped its fatal association with XML. More people might have been tempted to mark up their websites with RDF, but even that may not have saved the Semantic Web. Sean B. Palmer, an Internet Person that has scrubbed all biographical information about himself from the internet but who claims to have worked in the Semantic Web world for a while in the 2000s, posits that the real problem was the lack of a truly decentralized infrastructure to host the Semantic Web on. To host your own website, you need to buy a domain name from ICANN, configure it correctly using DNS, and then pay someone to host your content if you dont already have a server of your own. We shouldnt be surprised if the average person finds it easier to enter their information into a giant, corporate data repository. And in a web of giant, corporate data repositories, there are no compelling use cases for Semantic Web technologies.
So the problems that confronted the Semantic Web were more numerous and profound than just “XML sucks.” All the same, it’s hard to believe that the Semantic Web is truly dead and gone. Some of the particular technologies that the W3C dreamed up in the early 2000s may not have a future, but the decentralized vision of the web that Tim Berners-Lee and his fellow researchers described in Scientific American is too compelling to simply disappear. Imagine a web where, rather than filling out the same tedious form every time you register for a service, you were somehow able to authorize services to get that information from your own website. Imagine a Facebook that keeps your list of friends, hosted on your own website, up-to-date, rather than vice-versa. Basically, the Semantic Web was going to be a web where everyone gets to have their own personal REST API, whether they know the first thing about computers or not. Conceived of that way, it’s easy to see why the Semantic Web hasn’t yet been realized. There are so many engineering and security issues to sort out between here and there. But it’s also easy to see why the dream of the Semantic Web seduced so many people.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][7] on Twitter or subscribe to the [RSS feed][8] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2018/05/27/semantic-web.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://twobithistory.org/images/scientific_american_cover.jpg
[2]: https://twobithistory.org/2017/09/21/the-rise-and-rise-of-json.html
[3]: http://wiki.dbpedia.org/
[4]: https://twobithistory.org/images/linked_data.png
[5]: http://schema.org/
[6]: http://ogp.me/
[7]: https://twitter.com/TwoBitHistory
[8]: https://twobithistory.org/feed.xml

View File

@ -1,110 +0,0 @@
HankChow translating
3 areas to drive DevOps change
======
Driving large-scale organizational change is painful, but when it comes to DevOps, the payoff is worth the pain.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-inclusion-transformation-change_20180927.png?itok=2E-g10hJ)
Pain avoidance is a powerful motivator. Some studies hint that even [plants experience a type of pain][1] and take steps to defend themselves. Yet we have plenty of examples of humans enduring pain on purpose—exercise often hurts, but we still do it. When we believe the payoff is worth the pain, we'll endure almost anything.
The truth is that driving large-scale organizational change is painful. It hurts for those having to change their values and behaviors, it hurts for leadership, and it hurts for the people just trying to do their jobs. In the case of DevOps, though, I can tell you the pain is worth it.
I've seen firsthand how teams learn they must spend time improving their technical processes, take ownership of their automation pipelines, and become masters of their fate. They gain the tools they need to be successful.
![Improvements after DevOps transformation][3]
Image by Lee Eason. CC BY-SA 4.0
This chart shows the value of that change. In a company where I directed a DevOps transformation, its 60+ teams submitted more than 900 requests per month to release management. If you add up the time those tickets stayed open, it came to more than 350 days per month. What could your company do with an extra 350 person-days per month? In addition to the improvements seen above, they went from 100 to 9,000 deployments per month, a 24% decrease in high-severity bugs, happier engineers, and improved net promoter scores (NPS). The biggest NPS improvements link to the teams furthest along on their DevOps journey, as the [Puppet State of DevOps][4] report predicted. The bottom line is that investments into technical process improvement translate into better business outcomes.
DevOps leaders must focus on three main areas to drive this change: executives, culture, and team health.
### Executives
The larger your organization, the greater the distance (and opportunities for misunderstanding) between business leadership and the individuals delivering services to your customers. To make things worse, the landscape of tools and practices in technology is changing at an accelerating rate. This makes it practically impossible for business leaders to understand on their own how transformations like DevOps or agile work.
DevOps leaders must help executives come along for the ride. Educating leaders gives them options when they're making decisions and makes it more likely they'll choose paths that help your company.
For example, let's say your executives believe DevOps is going to improve how you deploy your products into production, but they don't understand how. You've been working with a software team to help automate their deployment. When an executive hears about a deploy failure (and there will be failures), they will want to understand how it occurred. When they learn the software team did the deployment rather than the release management team, they may try to protect the business by decreeing all production releases must go through traditional change controls. You will lose credibility, and teams will be far less likely to trust you and accept further changes.
It takes longer to rebuild trust with executives and get their support after an incident than it would have taken to educate them in the first place. Put the time in upfront to build alignment, and it will pay off as you implement tactical changes.
Two pieces of advice when building that alignment:
* First, **don't ignore any constraints** they raise. If they have worries about contracts or security, make the heads of legal and security your new best friends. By partnering with them, you'll build their trust and avoid making costly mistakes.
* Second, **use metrics to build a bridge** between what your delivery teams are doing and your executives' concerns. If the business has a goal to reduce customer churn, and you know from research that many customers leave because of unplanned downtime, reinforce that your teams are committed to tracking and improving Mean Time To Detection and Resolution (MTTD and MTTR). You can use those key metrics to show meaningful progress that teams and executives understand and get behind, as the short sketch after this list illustrates.
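Here is that sketch: a minimal, hedged example of computing MTTD and MTTR from a couple of invented incident records. Real numbers would come from your incident tracker, and teams draw the start and end points of these metrics differently.

```python
from datetime import datetime

# Invented incident records; the field names and timestamps are illustrative.
incidents = [
    {"started": "2018-10-01 02:00", "detected": "2018-10-01 02:25", "resolved": "2018-10-01 04:05"},
    {"started": "2018-10-09 14:10", "detected": "2018-10-09 14:18", "resolved": "2018-10-09 15:00"},
]

def minutes_between(a, b):
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).total_seconds() / 60

# MTTD: average time from start to detection.
# MTTR: measured here from detection to resolution; definitions vary by team.
mttd = sum(minutes_between(i["started"], i["detected"]) for i in incidents) / len(incidents)
mttr = sum(minutes_between(i["detected"], i["resolved"]) for i in incidents) / len(incidents)

print(f"MTTD: {mttd:.1f} minutes, MTTR: {mttr:.1f} minutes")
```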
### Culture
DevOps is a culture of continuous improvement focused on code, build, deploy, and operational processes. Culture describes the organization's values and behaviors. Essentially, we're talking about changing how people behave, which is never easy.
I recommend reading [The Wolf in CIO's Clothing][5]. Spend time thinking about psychology and motivation. Read [Drive][6] or at least watch Daniel Pink's excellent [TED Talk][7]. Read [The Hero with a Thousand Faces][8] and learn to identify the different journeys everyone is on. If none of these things sound interesting, you are not the right person to drive change in your company. Otherwise, read on!
Most rational people behave according to their values. Most organizations don't have explicit values everyone understands and lives by. Therefore, you'll need to identify the organization's values that have led to the behaviors that have led to the current state. You also need to make sure you can tell the story about how those values came to be and how they led to where you are. When you tell that story, be careful not to demonize those values—they aren't immoral or evil. People did the best they could at the time, given what they knew and what resources they had.
Explain that the company and its organizational goals are changing, and the team must alter its values. It's helpful to express this in terms of contrast. For example, your company may have historically valued cost savings above all else. That value is there for a reason—the company was cash-strapped. To get new products out, the infrastructure group had to tightly couple services by sharing database clusters or servers. Over time, those practices created a real mess that became hard to maintain. Simple changes started breaking things in unexpected ways. This led to tight change-control processes that were painful for delivery teams, so they stopped changing things.
Play that movie for five years, and you end up with little to no innovation, legacy technology, attraction and retention problems, and poor-quality products. You've grown the company, but you've hit a ceiling, and you can't continue to grow with those same values and behaviors. Now you must put engineering efficiency above cost saving. If one option will help teams maintain their service easier, but the other option is cheaper in the short term, you go with the first option.
You must tell this story again and again. Then you must celebrate any time a team expresses the new value through their behavior—even if they make a mistake. When a team has a deploy failure, congratulate them for taking the risk and encourage them to keep learning. Explain how their behavior is leading to the right outcome and support them. Over time, teams will see the message is real, and they'll feel safe altering their behavior.
### Team health
Have you ever been in a planning meeting and heard something like this: "We can't really estimate that story until John gets back from vacation. He's the only one who knows that area of the code well enough." Or: "We can't get this task done because it's got a cross-team dependency on network engineering, and the guy that set up the firewall is out sick." Or: "John knows that system best; if he estimated the story at a 3, then let's just go with that." When the team works on that story, who will most likely do the work? That's right, John will, and the cycle will continue.
For a long time, we've accepted that this is just the nature of software development. If we don't solve for it, we perpetuate the cycle.
Entropy will always drive teams naturally towards disorder and bad health. Our job as team members and leaders is to intentionally manage against that entropy and keep our teams healthy. Transformations like DevOps, agile, moving to the cloud, or refactoring a legacy application all amplify and accelerate that entropy. That's because transformations add new skills and expertise needed for the team to take on that new type of work.
Let's look at an example of a product team refactoring its legacy monolith. As usual, they build those new services in AWS. The legacy monolith was deployed to the data center, monitored, and backed up by IT. IT made sure the application's infosec requirements were met at the infrastructure layer. They conducted disaster recovery tests, patched the servers, and installed and configured required intrusion detection and antivirus agents. And they kept change control records, required for the annual audit process, of everything that was done to the application's infrastructure.
I often see product teams make the fatal mistake of thinking IT is all cost and bottleneck. They're hungry to shed the skin of IT and use the public cloud, but they never stop to appreciate the critical services IT provides. Moving to the cloud means you implement these things differently; they don't go away. AWS is still a data center, and any team utilizing it accepts the related responsibilities.
In practice, this means product teams must learn how to do those IT services when they move to the cloud. So, when our fictional product team starts refactoring its legacy application and putting new services in the cloud, it will need a vastly expanded skillset to be successful. Those skills don't magically appear—they're learned or hired—and team leaders and managers must actively manage the process.
I built [Tekata.io][9] because I couldn't find any tools to support me as I helped my teams evolve. Tekata is free and easy to use, but the tool is not as important as the people and process. Make sure you build continuous learning into your cadence and keep track of your team's weak spots. Those weak spots affect your ability to deliver, and filling them usually involves learning new things, so there's a wonderful synergy here. In fact, 76% of millennials think professional development opportunities are [one of the most important elements][10] of company culture.
### Proof is in the payoff
DevOps transformations involve altering the behavior, and therefore the culture, of your teams. That must be done with executive support and understanding. At the same time, those behavior changes mean learning new skills, and that process must also be managed carefully. But the payoff for pulling this off is more productive teams, happier and more engaged team members, higher quality products, and happier customers.
Lee Eason will present [Tales From A DevOps Transformation][11] at [All Things Open][12], October 21-23 in Raleigh, N.C.
Disclaimer: All opinions and statements in this article are exclusively those of Lee Eason and are not representative of Ipreo or IHS Markit.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/tales-devops-transformation
作者:[Lee Eason][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/leeeason
[b]: https://github.com/lujun9972
[1]: https://link.springer.com/article/10.1007%2Fs00442-014-2995-6
[2]: /file/411061
[3]: https://opensource.com/sites/default/files/uploads/devops-delays.png (Improvements after DevOps transformation)
[4]: https://puppet.com/resources/whitepaper/state-of-devops-report
[5]: https://www.gartner.com/en/publications/wolf-cio
[6]: https://en.wikipedia.org/wiki/Drive:_The_Surprising_Truth_About_What_Motivates_Us
[7]: https://www.ted.com/talks/dan_pink_on_motivation?language=en#t-2094
[8]: https://en.wikipedia.org/wiki/The_Hero_with_a_Thousand_Faces
[9]: https://tekata.io/
[10]: https://www.execu-search.com/~/media/Resources/pdf/2017_Hiring_Outlook_eBook
[11]: https://allthingsopen.org/talk/tales-from-a-devops-transformation/
[12]: https://allthingsopen.org/

View File

@ -1,516 +0,0 @@
fuowang translating
9 Best Free Video Editing Software for Linux In 2017
======
**Brief: Here are best video editors for Linux, their feature, pros and cons and how to install them on your Linux distributions.**
![Best Video editors for Linux][2]
We have discussed [best photo management applications for Linux][3], [best code editors for Linux][4] in similar articles in the past. Today we shall see the **best video editing software for Linux**.
When asked about free video editing software, Windows Movie Maker and iMovie are what most people often suggest.
Unfortunately, neither of them is available for GNU/Linux. But you don't need to worry about it: we have pooled together a list of the **best free video editors** for you.
## Best Video Editors for Linux
Let's have a look at the best free video editing software for Linux below. Here's a quick summary if you think the article is too long to read. You can click on the links to jump to the relevant section of the article:
| Video Editors | Main Usage | Type |
| --- | --- | --- |
| Kdenlive | General purpose video editing | Free and Open Source |
| OpenShot | General purpose video editing | Free and Open Source |
| Shotcut | General purpose video editing | Free and Open Source |
| Flowblade | General purpose video editing | Free and Open Source |
| Lightworks | Professional grade video editing | Freemium |
| Blender | Professional grade 3D editing | Free and Open Source |
| Cinelerra | General purpose video editing | Free and Open Source |
| DaVinci Resolve | Professional grade video editing | Freemium |
| VidCutter | Simple video split and merge | Free and Open Source |
### 1\. Kdenlive
![Kdenlive-free video editor on ubuntu][5]
[Kdenlive][6] is a free and [open source][7] video editing software from [KDE][8] that provides support for dual video monitors, a multi-track timeline, clip list, customizable layout support, basic effects, and basic transitions.
It supports a wide variety of file formats and a wide range of camcorders and cameras, including low-resolution camcorders (raw and AVI DV editing); MPEG-2, MPEG-4 and H.264 AVCHD (small cameras and camcorders); high-resolution camcorder files, including HDV and AVCHD camcorders; and professional camcorders, including XDCAM-HD™ streams, IMX™ (D10) streams, DVCAM (D10), DVCAM, DVCPRO™ and DVCPRO50™ streams, and DNxHD™ streams.
If you are looking for an iMovie alternative for Linux, Kdenlive would be your best bet.
#### Kdenlive features
* Multi-track video editing
* A wide range of audio and video formats
* Configurable interface and shortcuts
* Easily create titles using text or images
* Plenty of effects and transitions
* Audio and video scopes make sure the footage is correctly balanced
* Proxy editing
* Automatic save
* Wide hardware support
* Keyframeable effects
#### Pros
* All-purpose video editor
* Not too complicated for those who are familiar with video editing
#### Cons
* It may still be confusing if you are looking for something extremely simple
* KDE applications are infamous for being bloated
#### Installing Kdenlive
Kdenlive is available for all major Linux distributions. You can simply search for it in your software center. Various packages are available in the [download section of Kdenlive website][9].
Command line enthusiasts can install it from the terminal by running the following command in Debian and Ubuntu-based Linux distributions:
```
sudo apt install kdenlive
```
### 2\. OpenShot
![Openshot-free-video-editor-on-ubuntu][10]
[OpenShot][11] is another multi-purpose video editor for Linux. OpenShot can help you create videos with transitions and effects. You can also adjust audio levels. Of course, it supports most formats and codecs.
You can also export your film to DVD, upload it to YouTube, Vimeo, or Xbox 360, and save it in many common video formats. OpenShot is a tad simpler than Kdenlive. So if you need a video editor with a simple UI, OpenShot is a good choice.
There is also neat documentation to [get you started with OpenShot][12].
#### OpenShot features
* Cross-platform, available on Linux, macOS, and Windows
* Support for a wide range of video, audio, and image formats
* Powerful curve-based Keyframe animations
* Desktop integration with drag and drop support
* Unlimited tracks or layers
* Clip resizing, scaling, trimming, snapping, rotation, and cutting
* Video transitions with real-time previews
* Compositing, image overlays and watermarks
* Title templates, title creation, sub-titles
* Support for 2D animation via image sequences
* 3D animated titles and effects
* SVG friendly for creating and including vector titles and credits
* Scrolling motion picture credits
* Frame accuracy (step through each frame of video)
* Time-mapping and speed changes on clips
* Audio mixing and editing
* Digital video effects, including brightness, gamma, hue, greyscale, chroma key etc
#### Pros
* All-purpose video editor for average video editing needs
* Available on Windows and macOS along with Linux
#### Cons
* It may be simple but if you are extremely new to video editing, there is definitely a learning curve involved here
* You may still not find up to the mark of a professional-grade, movie making editing software
#### Installing OpenShot
OpenShot is also available in the repository of all major Linux distributions. You can simply search for it in your software center. You can also get it from its [official website][13].
My favorite way is to use the following command in Debian and Ubuntu-based Linux distributions:
```
sudo apt install openshot
```
### 3\. Shotcut
![Shotcut Linux video editor][14]
[Shotcut][15] is another video editor for Linux that can be put in the same league as Kdenlive and OpenShot. While it does provide similar features as the other two discussed above, Shotcut is a bit advanced with support for 4K videos.
Support for a number of audio and video formats, transitions, and effects is among the numerous features of Shotcut. External monitors are also supported.
There is a collection of video tutorials to [get you started with Shotcut][16]. It is also available for Windows and macOS so you can use your learning on other operating systems as well.
#### Shotcut features
* Cross-platform, available on Linux, macOS, and Windows
* Support for a wide range of video, audio, and image formats
* Native timeline editing
* Mix and match resolutions and frame rates within a project
* Audio filters, mixing and effects
* Video transitions and filters
* Multitrack timeline with thumbnails and waveforms
* Unlimited undo and redo for playlist edits including a history view
* Clip resizing, scaling, trimming, snapping, rotation, and cutting
* Trimming on source clip player or timeline with ripple option
* External monitoring on an extra system display/monitor
* Hardware support
You can read about more features [here][17].
#### Pros
* All-purpose video editor for common video editing needs
* Support for 4K videos
* Available on Windows and macOS along with Linux
#### Cons
* Too many features reduce the simplicity of the software
#### Installing Shotcut
Shotcut is available in [Snap][18] format. You can find it in Ubuntu Software Center. For other distributions, you can get the executable file from its [download page][19].
### 4\. Flowblade
![Flowblade movie editor on ubuntu][20]
[Flowblade][21] is a multitrack non-linear video editor for Linux. Like the above-discussed ones, this too is a free and open source software. It comes with a stylish and modern user interface.
Written in Python, it is designed to be fast and precise. Flowblade has focused on providing the best possible experience on Linux and other free platforms, so there is no Windows or OS X version for now. Feels good to be a Linux exclusive.
You also get a decent [documentation][22] to help you use all of its features.
#### Flowblade features
* Lightweight application
* Provide simple interface for simple tasks like split, merge, overwrite etc
* Plenty of audio and video effects and filters
* Supports [proxy editing][23]
* Drag and drop support
* Support for a wide range of video, audio, and image formats
* Batch rendering
* Watermarks
* Video transitions and filters
* Multitrack timeline with thumbnails and waveforms
You can read about more [Flowblade features][24] here.
#### Pros
* Lightweight
* Good for general purpose video editing
#### Cons
* Not available on other platforms
#### Installing Flowblade
Flowblade should be available in the repositories of all major Linux distributions. You can install it from the software center. More information is available on its [download page][25].
Alternatively, you can install Flowblade in Ubuntu and other Ubuntu based systems, using the command below:
```
sudo apt install flowblade
```
### 5\. Lightworks
![Lightworks running on ubuntu 16.04][26]
If you are looking for video editing software with more features, this is the answer. [Lightworks][27] is a cross-platform professional video editor, available for Linux, Mac OS X and Windows.
It is an award-winning professional [non-linear editing][28] (NLE) software that supports resolutions up to 4K as well as video in SD and HD formats.
Lightworks is available for Linux, however, it is not open source.
This application has two versions:
* Lightworks Free
* Lightworks Pro
The Pro version has more features, such as higher resolution support, 4K and Blu-ray support, etc.
Extensive documentation is available on its [website][29]. You can also refer to the videos on the [Lightworks video tutorials page][30].
#### Lightworks features
* Cross-platform
* Simple & intuitive User Interface
* Easy timeline editing & trimming
* Real-time ready to use audio & video FX
* Access amazing royalty-free audio & video content
* Lo-Res Proxy workflows for 4K
* Export video for YouTube/Vimeo, SD/HD, up to 4K
* Drag and drop support
* Wide variety of audio and video effects and filters
#### Pros
* Professional, feature-rich video editor
#### Cons
* Limited free version
#### Installing Lightworks
Lightworks provides DEB packages for Debian and Ubuntu-based Linux distributions and RPM packages for Fedora-based Linux distributions. You can find the packages on its [download page][31].
### 6\. Blender
![Blender running on Ubuntu 16.04][32]
[Blender][33] is a professional, industry-grade, open source, cross-platform video editor. It is popular for 3D work and has been used in several Hollywood movies, including the Spider-Man series.
Although it was originally designed for 3D modeling, it can also be used for video editing and supports input in a variety of formats.
#### Blender features
* Live preview, luma waveform, chroma vectorscope and histogram displays
* Audio mixing, syncing, scrubbing and waveform visualization
* Up to 32 slots for adding video, images, audio, scenes, masks and effects
* Speed control, adjustment layers, transitions, keyframes, filters and more
You can read about more features [here][34].
#### Pros
* Cross-platform
* Professional grade editing
#### Cons
* Complicated
* Mainly for 3D animation, not focused on regular video editing
#### Installing Blender
The latest version of Blender can be downloaded from its [download page][35].
### 7\. Cinelerra
![Cinelerra video editor for Linux][36]
[Cinelerra][37] has been available since 1998 and has been downloaded over 5 million times. It was the first video editor to provide non-linear editing on 64-bit systems back in 2003. It was a go-to video editor for Linux users at that time but it lost its sheen afterward as some developers abandoned the project.
The good thing is that it's back on track and is being actively developed again.
There is some [interesting backdrop story][38] about how and why Cinelerra was started if you care to read.
#### Cinelerra features
* Non-linear editing
* Support for HD videos
* Built-in frame renderer
* Various video effects
* Unlimited layers
* Split pane editing
#### Pros
* All-purpose video editor
#### Cons
* Not suitable for beginners
* No packages available
#### Installing Cinelerra
You can download the source code from [SourceForge][39]. More information on its [download page][40].
### 8\. DaVinci Resolve
![DaVinci Resolve video editor][41]
If you want Hollywood level video editing, use the tool the professionals use in Hollywood. [DaVinci Resolve][42] from Blackmagic is what professionals are using for editing movies and tv shows.
DaVinci Resolve is not your regular video editor. It's a full-fledged editing tool that provides editing, color correction and professional audio post-production in a single application.
DaVinci Resolve is not open source. Like Lightworks, it too provides a free version for Linux. The Pro version costs $300.
#### DaVinci Resolve features
* High-performance playback engine
* All kinds of edit types, such as overwrite, insert, ripple overwrite, replace, fit to fill, and append at end
* Advanced Trimming
* Audio Overlays
* Multicam Editing allows editing footage from multiple cameras in real-time
* Transition and filter-effects
* Speed effects
* Timeline curve editor
* Non-linear editing for VFX
#### Pros
* Cross-platform
* Professional grade video editor
#### Cons
* Not suitable for average editing
* Not open source
* Some features are not available in the free version
#### Installing DaVinci Resolve
You can download DaVinci Resolve for Linux from [its website][42]. You'll have to register, even for the free version.
### 9\. VidCutter
![VidCutter video editor for Linux][43]
Unlike all the other video editors discussed here, [VidCutter][44] is utterly simple. It doesn't do much except splitting videos and merging. But at times you just need this and VidCutter gives you just that.
#### VidCutter features
* Cross-platform app available for Linux, Windows and MacOS
* Supports most of the common video formats such as: AVI, MP4, MPEG 1/2, WMV, MP3, MOV, 3GP, FLV etc
* Simple interface
* Trims and merges the videos, nothing more than that
#### Pros
* Cross-platform
* Good for simple split and merge
#### Cons
* Not suitable for regular video editing
* Crashes often
#### Installing VidCutter
If you are using Ubuntu-based Linux distributions, you can use the official PPA:
```
sudo add-apt-repository ppa:ozmartian/apps
sudo apt-get update
sudo apt-get install vidcutter
```
It is available in AUR so Arch Linux users can also install it easily. For other Linux distributions, you can find the installation files on its [GitHub page][45].
### Which is the best video editing software for Linux?
A number of video editors mentioned here use [FFmpeg][46]. You can use FFmpeg on your own as well. It's a command line only tool so I didn't include it in the main list but it would have been unfair to not mention it at all.
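If the simple split-and-merge case is all you need and you want to try the FFmpeg route, here is a hedged sketch that drives the `ffmpeg` binary from Python; the file names are placeholders and it assumes `ffmpeg` is already on your PATH.

```python
import subprocess

# Cut a 30-second clip starting at 00:01:00 without re-encoding (-c copy).
# File names are placeholders; this assumes ffmpeg is installed.
subprocess.run(
    ["ffmpeg", "-ss", "00:01:00", "-i", "input.mp4",
     "-t", "30", "-c", "copy", "clip.mp4"],
    check=True,
)

# Merge clips listed in a text file using the concat demuxer.
with open("clips.txt", "w") as f:
    f.write("file 'clip.mp4'\nfile 'another_clip.mp4'\n")

subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "clips.txt",
     "-c", "copy", "merged.mp4"],
    check=True,
)
```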
If you need an editor for simply cutting and joining videos, go with VidCutter.
If you need something more than that, **OpenShot** or **Kdenlive** is a good choice. These are suitable for beginners and a system with standard specification.
If you have a high-end computer and need advanced features, you can go with **Lightworks** or **DaVinci Resolve**. If you are looking for more advanced features for 3D work, **Blender** has got your back.
So that's all I can write about the **best video editing software for Linux** distributions such as Ubuntu, Linux Mint, Elementary, and others. Share with us which video editor you like the most.
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-video-editing-software-linux/
作者:[It'S Foss Team][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/itsfoss/
[1]:data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs=
[2]:https://itsfoss.com/wp-content/uploads/2016/06/best-Video-editors-Linux-800x450.png
[3]:https://itsfoss.com/linux-photo-management-software/
[4]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[5]:https://itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg
[6]:https://kdenlive.org/
[7]:https://itsfoss.com/tag/open-source/
[8]:https://www.kde.org/
[9]:https://kdenlive.org/download/
[10]:https://itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg
[11]:http://www.openshot.org/
[12]:http://www.openshot.org/user-guide/
[13]:http://www.openshot.org/download/
[14]:https://itsfoss.com/wp-content/uploads/2016/06/shotcut-video-editor-linux-800x503.jpg
[15]:https://www.shotcut.org/
[16]:https://www.shotcut.org/tutorials/
[17]:https://www.shotcut.org/features/
[18]:https://itsfoss.com/use-snap-packages-ubuntu-16-04/
[19]:https://www.shotcut.org/download/
[20]:https://itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg
[21]:http://jliljebl.github.io/flowblade/
[22]:https://jliljebl.github.io/flowblade/webhelp/help.html
[23]:https://jliljebl.github.io/flowblade/webhelp/proxy.html
[24]:https://jliljebl.github.io/flowblade/features.html
[25]:https://jliljebl.github.io/flowblade/download.html
[26]:https://itsfoss.com/wp-content/uploads/2016/06/lightworks-running-on-ubuntu-16.04.jpg
[27]:https://www.lwks.com/
[28]:https://en.wikipedia.org/wiki/Non-linear_editing_system
[29]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206&tab=4
[30]:https://www.lwks.com/videotutorials
[31]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206&tab=1
[32]:https://itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg
[33]:https://www.blender.org/
[34]:https://www.blender.org/features/
[35]:https://www.blender.org/download/
[36]:https://itsfoss.com/wp-content/uploads/2016/06/cinelerra-screenshot.jpeg
[37]:http://cinelerra.org/
[38]:http://cinelerra.org/our-story
[39]:https://sourceforge.net/projects/heroines/files/cinelerra-6-src.tar.xz/download
[40]:http://cinelerra.org/download
[41]:https://itsfoss.com/wp-content/uploads/2016/06/davinci-resolve-vdeo-editor-800x450.jpg
[42]:https://www.blackmagicdesign.com/products/davinciresolve/
[43]:https://itsfoss.com/wp-content/uploads/2016/06/vidcutter-screenshot-800x585.jpeg
[44]:https://itsfoss.com/vidcutter-video-editor-linux/
[45]:https://github.com/ozmartian/vidcutter/releases
[46]:https://www.ffmpeg.org/

View File

@ -0,0 +1,113 @@
The Lineage of Man
======
I've always found man pages fascinating. Formatted as strangely as they are and accessible primarily through the terminal, they have always felt to me like relics of an ancient past. Some man pages probably are ancient: I'd love to know how many times the man page for `cat` or say `tee` has been revised since the early days of Unix, but I'm willing to bet it's not many. Man pages are mysterious—it's not obvious where they come from, where they live on your computer, or what kind of file they might be stored in—and yet it's hard to believe that something so fundamental and so obviously governed by rigid conventions could remain so inscrutable. Where did the man page conventions come from? Where are they codified? If I wanted to write my own man page, where would I even begin?
The story of `man` is inextricably tied to the story of Unix. The very first version of Unix, completed in 1971 but only available internally at Bell Labs, did not provide a `man` command. But Douglas McIlroy, who at the time was head of the Computing Techniques Research Department and managed the Unix project, insisted that some kind of documentation be made available. He pushed Ken Thompson and Dennis Ritchie, the two programmers commonly credited with creating Unix, to write some. The result was the [first edition][1] of the Unix Programmer's Manual.
The first edition of the Unix Programmer's Manual consisted of (physical!) pages collected together in a single binder. It documented only 61 different commands, along with a couple dozen system calls and a few library routines. Though the `man` command itself was not to come until later, the first edition of the Unix Programmer's Manual established many of the conventions that man pages adhere to today, even in the absence of an official specification. The documentation for each command included the well-known NAME, SYNOPSIS, DESCRIPTION, and SEE ALSO headings. Optional flags were enclosed in square brackets and meta-arguments (for example, “file” where a file path is expected) were underlined. The manual also established the canonical manual sections such as Section 1 for General Commands, Section 2 for System Calls, and so on; these sections were, at the time, simply sections of a very long printed document. Thompson and Ritchie could not have known that they were establishing a tradition that would survive for decades and decades, but that is what they did.
McIlroy later speculated about why the man page format has survived as long as it has. In a technical report about the conceptual development of Unix, he noted that the original man pages were written in a “terse, yet informal, prose style” that together with the alphabetical ordering of information “encouraged accurate on-line documentation.” In a nod to an experience with man pages that all programmers have had at one time or another, he added that the man page format “was popular with initiates who needed to look up facts, albeit sometimes frustrating for beginners who didn't know what facts to look for.” McIlroy was highlighting the sometimes-overlooked distinction between tutorial and reference; man pages may not be much use for the former, but they are perfect for the latter.
The `man` command was a part of Unix by the time the [second edition][2] of the Unix Programmer's Manual was printed. It made the entire manual available "on-line", meaning interactively, which was seen as enormously useful. The `man` command has its own manual page in the second edition (this page is the original `man man`), which explains that `man` can be used to "run off a particular section of this manual." Among the original Unix wizards, the term "run off" referred to the physical act of printing a document but also to the program they used to typeset documents, `roff`. The `roff` program had been used to typeset both the first and second editions of the Unix Programmer's Manual before they were printed, but it was now also used by `man` to process man pages before they were displayed. The man pages themselves were stored on every Unix system in a file format meant to be read by `roff`.
`roff` was the first in a long lineage of typesetting programs that have always been used to format man pages. Its own development can be traced back to a program called `RUNOFF` that was written in the mid-60s. At Bell Labs, `roff` spawned several successors including `nroff` (en-roff) and `troff` (tee-roff). `nroff` was designed to improve on `roff` and better output text to terminals, while `troff` tackled the problem of printing using a CAT phototypesetter. (If you don't know what phototypesetting is, as I did not, I refer you to [this][3] eminently watchable film.) All of these programs were based on a kind of markup language consisting of two-letter commands inserted at the beginning of every line in a document. These commands could control such things as font size, text positioning, line spacing, and so on. Today, the most common implementation of the `roff` system is `groff`, a part of the GNU project.
It's easy to get a sense of what `roff` input files look like by just taking a gander at some of the man pages stored on your own computer. At least on a BSD-derived system like MacOS, you can use the `--path` argument to `man` to find out where the man page for a particular command is stored. Typically this will be under `/usr/share/man` or `/usr/local/share/man`. Using `man` this way, you can find the path for the `man` man page itself and then open it in a text editor. It will not look anything like what you're used to looking at with `man`. On my system, the first couple dozen lines are:
```
.TH man 1 "September 19, 2005"
.LO 1
.SH NAME
man \- format and display the on-line manual pages
.SH SYNOPSIS
.B man
.RB [ \-acdfFhkKtwW ]
.RB [ --path ]
.RB [ \-m
.IR system ]
.RB [ \-p
.IR string ]
.RB [ \-C
.IR config_file ]
.RB [ \-M
.IR pathlist ]
.RB [ \-P
.IR pager ]
.RB [ \-B
.IR browser ]
.RB [ \-H
.IR htmlpager ]
.RB [ \-S
.IR section_list ]
.RI [ section ]
.I "name ..."
.SH DESCRIPTION
.B man
formats and displays the on-line manual pages. If you specify
.IR section ,
.B man
only looks in that section of the manual.
.I name
is normally the name of the manual page, which is typically the name
of a command, function, or file.
However, if
.I name
contains a slash
.RB ( / )
then
.B man
interprets it as a file specification, so that you can do
.B "man ./foo.5"
or even
.B "man /cd/foo/bar.1.gz\fR.\fP"
.PP
See below for a description of where
.B man
looks for the manual page files.
```
You can make out, for example, that all of the section headings are preceded by `.SH`, and everything that would appear in bold is preceded by `.B`. These commands are `roff` macros specifically designed for writing man pages. The macros used here are part of a package called `man`, but there are other packages such as `mdoc` that you can use for the same purpose. The macros make writing man pages much simpler than it would otherwise be. They also enforce consistency by always compiling down to the same set of lower-level `roff` commands. The `man` and `mdoc` packages are now documented under [GROFF_MAN(7)][4] and [GROFF_MDOC(7)][5] respectively.
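If you want to poke at this pipeline yourself, the macros and their documentation are easy to inspect from the shell. A rough sketch follows; paths and availability vary by system, and on many Linux distributions the stored page is gzipped, in which case decompress it first:
```
$ man --path man                                 # print where the roff source of man(1) lives
$ groff -man -Tutf8 "$(man --path man)" | less   # run that source through the man macro package yourself
$ man 7 groff_man                                # the man macros' own documentation, if groff's manuals are installed
```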
The entire `roff` system is reminiscent of LaTeX, another typesetting tool that today enjoys much more popularity. LaTeX is essentially a big bucket of macros built on top of the core TeX system designed by Donald Knuth. Like with `roff`, there are many other macro packages that you can incorporate into your LaTeX documents. These macro packages mean that you almost never have to write raw TeX yourself. LaTeX has superseded `roff` in many domains, but it is poorly suited to formatting text for a terminal, so nobody uses it to write man pages.
If you were to write a man page today, in 2017, how would you go about it? You certainly could write a man page using a `roff` macro package like `man` or `mdoc`. The syntax is unfamiliar and unwieldy. But the macros abstract away so much of the complexity that you can write a reasonably complete man page without learning very many commands. That said, there are now other options worth considering.
[Pandoc][6] is a widely used software tool for converting documents from one format to another. You can use Pandoc to convert Markdown files into `man`-macro-based man pages, meaning that you can now write your man pages in something as straightforward as Markdown. Pandoc supports many more Markdown constructs than most Markdown converters, giving you lots of ways to format your man page. While this convenience comes at the cost of some control, it's unlikely that you will ever need something that would warrant dropping down to the `roff` macro level. If you're curious about what these Markdown files might look like, I've written [a few of my own][7] to document a tool I created for keeping notes on how to use different command-line utilities. NPM's [documentation][8] is also written in Markdown and converted to a `roff` man format later, though they use a JavaScript package called [marked-man][9] instead of Pandoc to do the conversion.
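To make the Pandoc route concrete, here is a minimal sketch; the file name, program name, and page contents are invented for the example, and the title line follows Pandoc's `% NAME(SECTION)` convention for man output:
```
$ cat um.1.md
% UM(1) um 0.1
# NAME
um - an imaginary note-keeping tool, used here only as an example
# SYNOPSIS
**um** [*topic*]
# DESCRIPTION
Prints your saved notes for *topic*.
$ pandoc --standalone --to man um.1.md -o um.1   # emit man-macro roff
$ man ./um.1                                     # preview the generated page locally
```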
So there are now plenty of ways to write man pages, giving you lots of freedom to choose the tool you think best. That said, you'd better ensure that your man page reads like every other man page that has ever been written. Even though there is now so much flexibility in tooling, the man page conventions are as strong as ever. And you might be tempted to skip writing a man page altogether—after all, you probably have documentation on the web, or maybe you just want to rely on the `--help` flag—but you're forgoing the patina of respectability a man page can provide. The man page is an institution that doesn't seem likely to disappear or evolve soon, which is fascinating, because there are so many ways in which we could do man pages better. XML didn't come off well in my [last post][10], but it would be the perfect format here, and it would allow us to do something like query `man` about an option:
```
$ man grep -v
Selected lines are those not matching any of the specified patterns.
```
Imagine that! But it seems that we're all too used to man pages the way they are. In a field where rapid change is the norm, maybe some stability—particularly in a documentation system we all turn to in moments of ignorance and confusion—is a good thing.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][11] on Twitter or subscribe to the [RSS feed][12] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/09/28/the-lineage-of-man.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://www.bell-labs.com/usr/dmr/www/1stEdman.html
[2]: http://bitsavers.informatik.uni-stuttgart.de/pdf/att/unix/2nd_Edition/UNIX_Programmers_Manual_2ed_Jun72.pdf
[3]: https://vimeo.com/127605644
[4]: http://man7.org/linux/man-pages/man7/groff_man.7.html
[5]: http://man7.org/linux/man-pages/man7/groff_mdoc.7.html
[6]: http://pandoc.org/
[7]: https://github.com/sinclairtarget/um/tree/02365bd0c0a229efb936b3d6234294e512e8a218/doc
[8]: https://github.com/npm/npm/blob/20589f4b028d3e8a617800ac6289d27f39e548e8/doc/cli/npm.md
[9]: https://www.npmjs.com/package/marked-man
[10]: https://twobithistory.org/2017/09/21/the-rise-and-rise-of-json.html
[11]: https://twitter.com/TwoBitHistory
[12]: https://twobithistory.org/feed.xml

View File

@ -0,0 +1,70 @@
translating---geekpi
Simulating the Altair
======
The [Altair 8800][1] was a build-it-yourself home computer kit released in 1975. The Altair was basically the first personal computer, though it predated the advent of that term by several years. It is Adam (or Eve) to every Dell, HP, or Macbook out there.
Some people thought it'd be awesome to write an emulator for the Z80—a processor closely related to the Altair's Intel 8080—and then thought it needed a simulation of the Altair's control panel on top of it. So if you've ever wondered what it was like to use a computer in 1975, you can run the Altair on your Macbook:
![Altair 8800][2]
### Installing it
You can download Z80 pack from the FTP server available [here][3]. You're looking for the latest Z80 pack release, something like `z80pack-1.26.tgz`.
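For instance, fetching the tarball from the shell might look like this (assuming the release file sits directly in that FTP directory and the version number hasn't moved on):
```
$ curl -O http://www.autometer.de/unix4fun/z80pack/ftp/z80pack-1.26.tgz
```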
First unpack the file:
```
$ tar -xvf z80pack-1.26.tgz
```
Move into the unpacked directory:
```
$ cd z80pack-1.26
```
The control panel simulation is based on a library called `frontpanel`. You'll have to compile that library first. If you move into the `frontpanel` directory, you will find a `README` file listing the library's own dependencies. Your experience here will almost certainly differ from mine, but perhaps my travails will be illustrative. I had the dependencies installed, but via [Homebrew][4]. To get the library to compile I just had to make sure that `/usr/local/include` was added to Clang's include path in `Makefile.osx`.
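Concretely, that tweak amounts to appending `-I/usr/local/include` to the compiler flags in `Makefile.osx`. One possible way to do it from the shell, assuming the flags variable is actually called `CFLAGS` (check the Makefile first):
```
$ sed -i.bak 's|^CFLAGS[[:space:]]*=.*|& -I/usr/local/include|' Makefile.osx   # keeps a .bak copy of the original
$ grep CFLAGS Makefile.osx                                                     # confirm the flag was appended
```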
If you've satisfied the dependencies, you should be able to compile the library (we're now in `z80pack-1.26/frontpanel`):
```
$ make -f Makefile.osx ...
$ make -f Makefile.osx clean
```
You should end up with `libfrontpanel.so`. I copied this to `/usr/local/lib`.
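One way to do that copy (a plain `cp`; adjust the destination if you keep local libraries elsewhere):
```
$ sudo cp libfrontpanel.so /usr/local/lib/
```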
The Altair simulator is under `z80pack-1.26/altairsim`. You now need to compile the simulator itself. Move into `z80pack-1.26/altairsim/srcsim` and run `make` once more:
```
$ make -f Makefile.osx ...
$ make -f Makefile.osx clean
```
That process will create an executable called `altairsim` one level up in `z80pack-1.26/altairsim`. Run that executable and you should see that iconic Altair control panel!
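In other words, the final step is simply:
```
$ cd ..          # from srcsim back up to z80pack-1.26/altairsim
$ ./altairsim    # the front panel window should appear
```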
And if you really want to nerd out, read through the original [Altair manual][5].
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][6] on Twitter or subscribe to the [RSS feed][7] to make sure you know when a new post is out.
--------------------------------------------------------------------------------
via: https://twobithistory.org/2017/12/02/simulating-the-altair.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Altair_8800
[2]: https://www.autometer.de/unix4fun/z80pack/altair.png
[3]: http://www.autometer.de/unix4fun/z80pack/ftp/
[4]: http://brew.sh/
[5]: http://www.classiccmp.org/dunfield/altair/d/88opman.pdf
[6]: https://twitter.com/TwoBitHistory
[7]: https://twobithistory.org/feed.xml

View File

@ -1,3 +1,5 @@
Translating by cycoe...
cycoe 翻译中
24 Must Have Essential Linux Applications In 2017
======
Brief: What are the must-have applications for Linux? The answer is subjective, and it depends on what you use your desktop Linux for. But there are still some essential Linux apps that are more likely to be used by most Linux users. Here we have listed the best Linux applications that you should have installed on every Linux distribution you use.

View File

@ -1,248 +0,0 @@
Manage Your Games Using Lutris In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-720x340.jpg)
Let us start the first day of 2018 with Games! Today, we are going to talk about **Lutris** , an open source gaming platform for Linux. You can install, remove, configure, launch and manage your games using Lutris. It helps you to manage your Linux games, Windows games, emulated console games and browser games, in a single interface. It also includes community-written installer scripts to make a game's installation process a lot easier.
Lutris comes with more than 20 emulators installed automatically (or you can install them with a single click), providing support for most gaming systems from the late '70s to the present day. The currently supported gaming platforms are:
* Native Linux
* Windows
* Steam (Linux and Windows)
* MS-DOS
* Arcade machines
* Amiga computers
* Atari 8 and 16bit computers and consoles
* Browsers (Flash or HTML5 games)
  * Commodore 8 bit computers
* SCUMM based games and other point and click adventure games
* Magnavox Odyssey², Videopac+
* Mattel Intellivision
* NEC PC-Engine Turbographx 16, Supergraphx, PC-FX
* Nintendo NES, SNES, Game Boy, Game Boy Advance, DS
* Game Cube and Wii
  * Sega Master System, Game Gear, Genesis, Dreamcast
* SNK Neo Geo, Neo Geo Pocket
* Sony PlayStation
* Sony PlayStation 2
* Sony PSP
* Z-Machine games like Zork
* And more yet to come.
### Installing Lutris
Like Steam, Lutris consists of two parts: the website and the client application. From the website you can browse the available games, add your favorite games to your personal library, and install them using the installation link.
First, let us install the Lutris client. It currently supports Arch Linux, Debian, Fedora, Gentoo, openSUSE, and Ubuntu.
For Arch Linux and its derivatives like Antergos and Manjaro Linux, it is available in the [**AUR**][1]. So you can install it using any AUR helper program.
Using [**Pacaur**][2]:
```
pacaur -S lutris
```
Using **[Packer][3]** :
```
packer -S lutris
```
Using [**Yaourt**][4]:
```
yaourt -S lutris
```
Using [**Yay**][5]:
```
yay -S lutris
```
**On Debian:**
On **Debian 9.0** run the following commands as **root** :
```
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_9.0/ /' > /etc/apt/sources.list.d/lutris.list
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_9.0/Release.key -O Release.key
apt-key add - < Release.key
apt-get update
apt-get install lutris
```
On **Debian 8.0** run the following as **root** :
```
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_8.0/ /' > /etc/apt/sources.list.d/lutris.list
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_8.0/Release.key -O Release.key
apt-key add - < Release.key
apt-get update
apt-get install lutris
```
On **Fedora 27** run the following as **root** :
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_27/home:strycore.repo
dnf install lutris
```
On **Fedora 26** run the following as **root** :
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_26/home:strycore.repo
dnf install lutris
```
On **openSUSE Tumbleweed** run the following as **root** :
```
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Tumbleweed/home:strycore.repo
zypper refresh
zypper install lutris
```
On **openSUSE Leap 42.3** run the following as **root** :
```
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Leap_42.3/home:strycore.repo
zypper refresh
zypper install lutris
```
On **Ubuntu 17.10**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.10/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.10/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
On **Ubuntu 17.04**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.04/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.04/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
On **Ubuntu 16.10**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.10/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.10/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
On **Ubuntu 16.04**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.04/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.04/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
For other platforms, refer to the [**Lutris download link**][6].
### Manage Your Games Using Lutris
Once installed, open Lutris from your menu or application launcher. At first launch, the default interface of Lutris looks like the one below.
[![][7]][8]
**Connecting to your Lutris.net account**
Next, you need to connect your Lutris.net account to your client in order to sync the games from your personal library. To do so, [**register a new account**][9] if you don't have one already. Then, click the **"Connecting to your Lutris.net account to sync your library"** link in the Lutris client.
Enter your user credentials and click **Connect**.
[![][7]][10]
Now you're connected to your Lutris.net account.
[![][7]][11]
**Browse Games**
To search for games, click the Browse icon (the game controller icon) in the toolbar. It will automatically take you to the Games page of the Lutris website, where you can see all available games in alphabetical order. The Lutris website has a lot of games, and more are being added constantly.
[![][7]][12]
Choose the games you want and add them to your library.
[![][7]][13]
Then, go back to your Lutris client and click **Menu -> Lutris -> Synchronize library**. Now you will see all the games in your library in the local Lutris client interface.
[![][7]][14]
If you don't see the games, just restart the Lutris client once.
**Installing Games**
To install a game, just right-click on it and click the **Install** button. For example, I am going to install the [**2048 game**][15] on my system. As you can see in the screenshot below, it asks me to choose the version to install. Since it has only one version (i.e., online), it was selected automatically. Click **Continue**.
[![][7]][16]
Click **Install**:
[![][7]][17]
After the installation is completed, you can either launch the newly installed game or simply close the window and continue installing other games in your library.
**Import Steam library**
You can also import your Steam library. To do so, go to your Lutris profile and click the **"Sign in through Steam"** button. You will then be redirected to Steam and asked to enter your user credentials. Once you have authorized it, your Steam account will be connected with your Lutris account. Please be mindful that your Steam account should be public in order to sync the games from the library. You can switch it back to private after the sync is completed.
**Adding games manually**
Lutris has the option to add games manually. To do so, click the plus (+) sign on the toolbar.
[![][7]][18]
In the next window, enter the game's name and choose a runner in the Game info tab. Runners are programs such as Wine, Steam for Linux, etc., that help you launch a game. You can install runners from Menu -> Manage runners.
[![][7]][19]
Then go to the next tab and choose the game's main executable or ISO. Finally, click Save. The good thing is that you can add multiple versions of the same game.
**Removing games**
To remove any installed game, just right click on it in the local library of your Lutris client application. Choose "Remove" and then "Apply".
[![][7]][20]
Lutris is just like Steam: just add the games to your library on the website and the client will install them for you!
And that's all for today, folks! We will be posting more good and useful stuff this year. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/manage-games-using-lutris-linux/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://aur.archlinux.org/packages/lutris/
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
[4]:https://www.ostechnix.com/install-yaourt-arch-linux/
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]:https://lutris.net/downloads/
[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-1.png
[9]:https://lutris.net/user/register/
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-2.png
[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-3.png
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-15-1.png
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-16.png
[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-6.png
[15]:https://www.ostechnix.com/let-us-play-2048-game-terminal/
[16]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-12.png
[17]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-13.png
[18]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-18-1.png
[19]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-19.png
[20]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-14-1.png

View File

@ -1,3 +1,6 @@
Translating by MjSeven
Getting started with Python for data science
======

View File

@ -1,145 +0,0 @@
Translating by MjSeven
How To Remove Or Disable Ubuntu Dock
======
![](https://1.bp.blogspot.com/-pClnjEJfPQc/W21nHNzU2DI/AAAAAAAABV0/HGXuQOYGzokyrGYQtRFeF_hT3_3BKHupQCLcBGAs/s640/ubuntu-dock.png)
**If you want to replace the Ubuntu Dock in Ubuntu 18.04 with some other dock (like Plank dock for example) or panel, and you want to remove or disable the Ubuntu Dock, here's what you can do and how.**
Ubuntu Dock - the bar on the left-hand side of the screen which can be used to pin applications and access installed applications - can't simply be turned off from the regular system settings when using the default Ubuntu session, so below you'll find a few ways to remove, disable or replace it.
### How to access the Activities Overview without Ubuntu Dock
Without Ubuntu Dock, you may have no way of accessing the Activities / installed application list (which can be accessed from Ubuntu Dock by clicking on the Show Applications button at the bottom of the dock), for example if you want to use the Plank dock instead.
Obviously, that's not the case if you install the Dash to Panel extension to use instead of Ubuntu Dock, because Dash to Panel provides a button to access the Activities Overview / installed applications.
Depending on what you plan to use instead of Ubuntu Dock, if there's no way of accessing the Activities Overview, you can enable the Activities Overview Hot Corner option and simply move your mouse to the upper left corner of the screen to open the Activities. Another way of accessing the installed application list is using a keyboard shortcut: `Super + A` .
If you want to enable the Activities Overview hot corner, use this command:
```
gsettings set org.gnome.shell enable-hot-corners true
```
If later you want to undo this and disable the hot corners, you need to use this command:
```
gsettings set org.gnome.shell enable-hot-corners false
```
You can also enable or disable the Activities Overview Hot Corner option by using the Gnome Tweaks application (the option is in the `Top Bar` section of Gnome Tweaks), which can be installed by using this command:
```
sudo apt install gnome-tweaks
```
### How to remove or disable Ubuntu Dock
Below you'll find 4 ways of getting rid of Ubuntu Dock which work in Ubuntu 18.04.
**Option 1: Remove the Gnome Shell Ubuntu Dock package.**
The easiest way of getting rid of the Ubuntu Dock is to remove the package.
This completely removes the Ubuntu Dock extension from your system, but it also removes the `ubuntu-desktop` meta package. There's no immediate issue if you remove the `ubuntu-desktop` meta package, because it does nothing by itself. The `ubuntu-desktop` meta package depends on a large number of packages which make up the Ubuntu Desktop; its dependencies won't be removed and nothing will break. The issue is that if you want to upgrade to a newer Ubuntu version, any new `ubuntu-desktop` dependencies won't be installed.
As a way around this, you can simply install the `ubuntu-desktop` meta package before upgrading to a newer Ubuntu version (for example if you want to upgrade from Ubuntu 18.04 to 18.10).
If you're ok with this and want to remove the Ubuntu Dock extension package from your system, use the following command:
```
sudo apt remove gnome-shell-extension-ubuntu-dock
```
If later you want to undo the changes, simply install the extension back using this command:
```
sudo apt install gnome-shell-extension-ubuntu-dock
```
Or to install the `ubuntu-desktop` meta package back (this will install any ubuntu-desktop dependencies you may have removed, including Ubuntu Dock), you can use this command:
```
sudo apt install ubuntu-desktop
```
**Option 2: Install and use the vanilla Gnome session instead of the default Ubuntu session.**
Another way to get rid of Ubuntu Dock is to install and use the vanilla Gnome session. Installing the vanilla Gnome session will also install other packages this session depends on, like Gnome Documents, Maps, Music, Contacts, Photos, Tracker and more.
By installing the vanilla Gnome session, you'll also get the default Gnome GDM login / lock screen theme instead of the Ubuntu defaults as well as Adwaita Gtk theme and icons. You can easily change the Gtk and icon theme though, by using the Gnome Tweaks application.
Furthermore, the AppIndicators extension will be disabled by default (so applications that make use of the AppIndicators tray won't show up on the top panel), but you can enable this by using Gnome Tweaks (under Extensions, enable the Ubuntu appindicators extension).
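If you prefer the command line over Gnome Tweaks for that, the same toggle can be done with `gnome-shell-extension-tool`; the UUID below is the one the Ubuntu package normally installs under `/usr/share/gnome-shell/extensions/`, so double-check it on your system first:
```
gnome-shell-extension-tool --enable-extension=ubuntu-appindicators@ubuntu.com
gnome-shell-extension-tool --disable-extension=ubuntu-appindicators@ubuntu.com
```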
In the same way, you can also enable or disable Ubuntu Dock from the vanilla Gnome session, which is not possible if you use the Ubuntu session (disabling Ubuntu Dock from Gnome Tweaks when using the Ubuntu session does nothing).
If you don't want to install these extra packages required by the vanilla Gnome session, this option of removing Ubuntu Dock is not for you so check out the other options.
If you are ok with this though, here's what you need to do. To install the vanilla Gnome session in Ubuntu, use this command:
```
sudo apt install vanilla-gnome-desktop
```
After the installation finishes, reboot your system and on the login screen, after you click on your username, click the gear icon next to the `Sign in` button, and select `GNOME` instead of `Ubuntu` , then proceed to login:
![](https://4.bp.blogspot.com/-mc-6H2MZ0VY/W21i_PIJ3pI/AAAAAAAABVo/96UvmRM1QJsbS2so1K8teMhsu7SdYh9zwCLcBGAs/s640/vanilla-gnome-session-ubuntu-login-screen.png)
In case you want to undo this and remove the vanilla Gnome session, you can purge the vanilla Gnome package and then remove the dependencies it installed (second command) using the following commands:
```
sudo apt purge vanilla-gnome-desktop
sudo apt autoremove
```
Then reboot and select Ubuntu in the same way, from the GDM login screen.
**Option 3: Permanently hide the Ubuntu Dock from your desktop instead of removing it.**
If you prefer to permanently hide the Ubuntu Dock from showing up on your desktop instead of uninstalling it or using the vanilla Gnome session, you can easily do this using Dconf Editor. The drawback to this is that Ubuntu Dock will still use some system resources even though you're not using it on your desktop, but you'll also be able to easily revert this without installing or removing any packages.
Ubuntu Dock is only hidden from your desktop though. When you go into overlay mode (Activities), you'll still see and be able to use Ubuntu Dock from there.
To permanently hide Ubuntu Dock, use Dconf Editor to navigate to `/org/gnome/shell/extensions/dash-to-dock` and disable (set them to false) the following options: `autohide` , `dock-fixed` and `intellihide` .
You can achieve this from the command line if you wish, by running the commands below:
```
gsettings set org.gnome.shell.extensions.dash-to-dock autohide false
gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed false
gsettings set org.gnome.shell.extensions.dash-to-dock intellihide false
```
In case you change your mind and you want to undo this, you can either use Dconf Editor and re-enable (set them to true) autohide, dock-fixed and intellihide from `/org/gnome/shell/extensions/dash-to-dock` , or you can use these commands:
```
gsettings set org.gnome.shell.extensions.dash-to-dock autohide true
gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed true
gsettings set org.gnome.shell.extensions.dash-to-dock intellihide true
```
**Option 4: Use Dash to Panel extension.**
You can install Dash to Panel from the [Gnome Shell extensions website][3].
If you change your mind and you want Ubuntu Dock back, you can either disable Dash to Panel by using the Gnome Tweaks app, or completely remove Dash to Panel by clicking the X button next to it on the extensions website, under your installed extensions.
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/08/how-to-remove-or-disable-ubuntu-dock.html
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://bugs.launchpad.net/ubuntu/+source/gnome-tweak-tool/+bug/1713020
[2]:https://www.linuxuprising.com/2018/05/gnome-shell-dash-to-panel-v14-brings.html
[3]:https://extensions.gnome.org/extension/1160/dash-to-panel/

View File

@ -1,525 +0,0 @@
Translating by qhwdw
Lab 3: User Environments
======
### Lab 3: User Environments
#### Introduction
In this lab you will implement the basic kernel facilities required to get a protected user-mode environment (i.e., "process") running. You will enhance the JOS kernel to set up the data structures to keep track of user environments, create a single user environment, load a program image into it, and start it running. You will also make the JOS kernel capable of handling any system calls the user environment makes and handling any other exceptions it causes.
**Note:** In this lab, the terms _environment_ and _process_ are interchangeable - both refer to an abstraction that allows you to run a program. We introduce the term "environment" instead of the traditional term "process" in order to stress the point that JOS environments and UNIX processes provide different interfaces, and do not provide the same semantics.
##### Getting Started
Use Git to commit your changes after your Lab 2 submission (if any), fetch the latest version of the course repository, and then create a local branch called `lab3` based on our lab3 branch, `origin/lab3`:
```
athena% cd ~/6.828/lab
athena% add git
athena% git commit -am 'changes to lab2 after handin'
Created commit 734fab7: changes to lab2 after handin
4 files changed, 42 insertions(+), 9 deletions(-)
athena% git pull
Already up-to-date.
athena% git checkout -b lab3 origin/lab3
Branch lab3 set up to track remote branch refs/remotes/origin/lab3.
Switched to a new branch "lab3"
athena% git merge lab2
Merge made by recursive.
kern/pmap.c | 42 +++++++++++++++++++
1 files changed, 42 insertions(+), 0 deletions(-)
athena%
```
Lab 3 contains a number of new source files, which you should browse:
```
inc/ env.h Public definitions for user-mode environments
trap.h Public definitions for trap handling
syscall.h Public definitions for system calls from user environments to the kernel
lib.h Public definitions for the user-mode support library
kern/ env.h Kernel-private definitions for user-mode environments
env.c Kernel code implementing user-mode environments
trap.h Kernel-private trap handling definitions
trap.c Trap handling code
trapentry.S Assembly-language trap handler entry-points
syscall.h Kernel-private definitions for system call handling
syscall.c System call implementation code
lib/ Makefrag Makefile fragment to build user-mode library, obj/lib/libjos.a
entry.S Assembly-language entry-point for user environments
libmain.c User-mode library setup code called from entry.S
syscall.c User-mode system call stub functions
console.c User-mode implementations of putchar and getchar, providing console I/O
exit.c User-mode implementation of exit
panic.c User-mode implementation of panic
user/ * Various test programs to check kernel lab 3 code
```
In addition, a number of the source files we handed out for lab2 are modified in lab3. To see the differences, you can type:
```
$ git diff lab2
```
You may also want to take another look at the [lab tools guide][1], as it includes information on debugging user code that becomes relevant in this lab.
##### Lab Requirements
This lab is divided into two parts, A and B. Part A is due a week after this lab was assigned; you should commit your changes and make handin your lab before the Part A deadline, making sure your code passes all of the Part A tests (it is okay if your code does not pass the Part B tests yet). You only need to have the Part B tests passing by the Part B deadline at the end of the second week.
As in lab 2, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem (for the entire lab, not for each part). Write up brief answers to the questions posed in the lab and a one or two paragraph description of what you did to solve your chosen challenge problem in a file called `answers-lab3.txt` in the top level of your `lab` directory. (If you implement more than one challenge problem, you only need to describe one of them in the write-up.) Do not forget to include the answer file in your submission with git add answers-lab3.txt.
##### Inline Assembly
In this lab you may find GCC's inline assembly language feature useful, although it is also possible to complete the lab without using it. At the very least, you will need to be able to understand the fragments of inline assembly language ("`asm`" statements) that already exist in the source code we gave you. You can find several sources of information on GCC inline assembly language on the class [reference materials][2] page.
#### Part A: User Environments and Exception Handling
The new include file `inc/env.h` contains basic definitions for user environments in JOS. Read it now. The kernel uses the `Env` data structure to keep track of each user environment. In this lab you will initially create just one environment, but you will need to design the JOS kernel to support multiple environments; lab 4 will take advantage of this feature by allowing a user environment to `fork` other environments.
As you can see in `kern/env.c`, the kernel maintains three main global variables pertaining to environments:
```
struct Env *envs = NULL; // All environments
struct Env *curenv = NULL; // The current env
static struct Env *env_free_list; // Free environment list
```
Once JOS gets up and running, the `envs` pointer points to an array of `Env` structures representing all the environments in the system. In our design, the JOS kernel will support a maximum of `NENV` simultaneously active environments, although there will typically be far fewer running environments at any given time. (`NENV` is a constant `#define`'d in `inc/env.h`.) Once it is allocated, the `envs` array will contain a single instance of the `Env` data structure for each of the `NENV` possible environments.
The JOS kernel keeps all of the inactive `Env` structures on the `env_free_list`. This design allows easy allocation and deallocation of environments, as they merely have to be added to or removed from the free list.
The kernel uses the `curenv` symbol to keep track of the _currently executing_ environment at any given time. During boot up, before the first environment is run, `curenv` is initially set to `NULL`.
##### Environment State
The `Env` structure is defined in `inc/env.h` as follows (although more fields will be added in future labs):
```
struct Env {
struct Trapframe env_tf; // Saved registers
struct Env *env_link; // Next free Env
envid_t env_id; // Unique environment identifier
envid_t env_parent_id; // env_id of this env's parent
enum EnvType env_type; // Indicates special system environments
unsigned env_status; // Status of the environment
uint32_t env_runs; // Number of times environment has run
// Address space
pde_t *env_pgdir; // Kernel virtual address of page dir
};
```
Here's what the `Env` fields are for:
* **env_tf** :
This structure, defined in `inc/trap.h`, holds the saved register values for the environment while that environment is _not_ running: i.e., when the kernel or a different environment is running. The kernel saves these when switching from user to kernel mode, so that the environment can later be resumed where it left off.
* **env_link** :
This is a link to the next `Env` on the `env_free_list`. `env_free_list` points to the first free environment on the list.
* **env_id** :
The kernel stores here a value that uniquely identifies the environment currently using this `Env` structure (i.e., using this particular slot in the `envs` array). After a user environment terminates, the kernel may re-allocate the same `Env` structure to a different environment - but the new environment will have a different `env_id` from the old one even though the new environment is re-using the same slot in the `envs` array.
* **env_parent_id** :
The kernel stores here the `env_id` of the environment that created this environment. In this way the environments can form a “family tree,” which will be useful for making security decisions about which environments are allowed to do what to whom.
* **env_type** :
This is used to distinguish special environments. For most environments, it will be `ENV_TYPE_USER`. We'll introduce a few more types for special system service environments in later labs.
* **env_status** :
This variable holds one of the following values:
* `ENV_FREE`:
Indicates that the `Env` structure is inactive, and therefore on the `env_free_list`.
* `ENV_RUNNABLE`:
Indicates that the `Env` structure represents an environment that is waiting to run on the processor.
* `ENV_RUNNING`:
Indicates that the `Env` structure represents the currently running environment.
* `ENV_NOT_RUNNABLE`:
Indicates that the `Env` structure represents a currently active environment, but it is not currently ready to run: for example, because it is waiting for an interprocess communication (IPC) from another environment.
* `ENV_DYING`:
Indicates that the `Env` structure represents a zombie environment. A zombie environment will be freed the next time it traps to the kernel. We will not use this flag until Lab 4.
* **env_pgdir** :
This variable holds the kernel _virtual address_ of this environment's page directory.
Like a Unix process, a JOS environment couples the concepts of "thread" and "address space". The thread is defined primarily by the saved registers (the `env_tf` field), and the address space is defined by the page directory and page tables pointed to by `env_pgdir`. To run an environment, the kernel must set up the CPU with _both_ the saved registers and the appropriate address space.
Our `struct Env` is analogous to `struct proc` in xv6. Both structures hold the environment's (i.e., process's) user-mode register state in a `Trapframe` structure. In JOS, individual environments do not have their own kernel stacks as processes do in xv6. There can be only one JOS environment active in the kernel at a time, so JOS needs only a _single_ kernel stack.
##### Allocating the Environments Array
In lab 2, you allocated memory in `mem_init()` for the `pages[]` array, which is a table the kernel uses to keep track of which pages are free and which are not. You will now need to modify `mem_init()` further to allocate a similar array of `Env` structures, called `envs`.
```
Exercise 1. Modify `mem_init()` in `kern/pmap.c` to allocate and map the `envs` array. This array consists of exactly `NENV` instances of the `Env` structure allocated much like how you allocated the `pages` array. Also like the `pages` array, the memory backing `envs` should also be mapped user read-only at `UENVS` (defined in `inc/memlayout.h`) so user processes can read from this array.
```
You should run your code and make sure `check_kern_pgdir()` succeeds.
##### Creating and Running Environments
You will now write the code in `kern/env.c` necessary to run a user environment. Because we do not yet have a filesystem, we will set up the kernel to load a static binary image that is _embedded within the kernel itself_. JOS embeds this binary in the kernel as an ELF executable image.
The Lab 3 `GNUmakefile` generates a number of binary images in the `obj/user/` directory. If you look at `kern/Makefrag`, you will notice some magic that "links" these binaries directly into the kernel executable as if they were `.o` files. The `-b binary` option on the linker command line causes these files to be linked in as "raw" uninterpreted binary files rather than as regular `.o` files produced by the compiler. (As far as the linker is concerned, these files do not have to be ELF images at all - they could be anything, such as text files or pictures!) If you look at `obj/kern/kernel.sym` after building the kernel, you will notice that the linker has "magically" produced a number of funny symbols with obscure names like `_binary_obj_user_hello_start`, `_binary_obj_user_hello_end`, and `_binary_obj_user_hello_size`. The linker generates these symbol names by mangling the file names of the binary files; the symbols provide the regular kernel code with a way to reference the embedded binary files.
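You can list these generated symbols for yourself after a build; the exact names depend on which user binaries were linked in:
```
$ grep _binary_obj_user_hello obj/kern/kernel.sym   # the _start, _end and _size symbols for the embedded hello binary
```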
In `i386_init()` in `kern/init.c` you'll see code to run one of these binary images in an environment. However, the critical functions to set up user environments are not complete; you will need to fill them in.
```
Exercise 2. In the file `env.c`, finish coding the following functions:
* `env_init()`
Initialize all of the `Env` structures in the `envs` array and add them to the `env_free_list`. Also calls `env_init_percpu`, which configures the segmentation hardware with separate segments for privilege level 0 (kernel) and privilege level 3 (user).
* `env_setup_vm()`
Allocate a page directory for a new environment and initialize the kernel portion of the new environment's address space.
* `region_alloc()`
Allocates and maps physical memory for an environment
* `load_icode()`
You will need to parse an ELF binary image, much like the boot loader already does, and load its contents into the user address space of a new environment.
* `env_create()`
Allocate an environment with `env_alloc` and call `load_icode` to load an ELF binary into it.
* `env_run()`
Start a given environment running in user mode.
As you write these functions, you might find the new cprintf verb `%e` useful -- it prints a description corresponding to an error code. For example,
r = -E_NO_MEM;
panic("env_alloc: %e", r);
will panic with the message "env_alloc: out of memory".
```
Below is a call graph of the code up to the point where the user code is invoked. Make sure you understand the purpose of each step.
* `start` (`kern/entry.S`)
* `i386_init` (`kern/init.c`)
* `cons_init`
* `mem_init`
* `env_init`
* `trap_init` (still incomplete at this point)
* `env_create`
* `env_run`
* `env_pop_tf`
Once you are done you should compile your kernel and run it under QEMU. If all goes well, your system should enter user space and execute the `hello` binary until it makes a system call with the `int` instruction. At that point there will be trouble, since JOS has not set up the hardware to allow any kind of transition from user space into the kernel. When the CPU discovers that it is not set up to handle this system call interrupt, it will generate a general protection exception, find that it can't handle that, generate a double fault exception, find that it can't handle that either, and finally give up with what's known as a "triple fault". Usually, you would then see the CPU reset and the system reboot. While this is important for legacy applications (see [this blog post][3] for an explanation of why), it's a pain for kernel development, so with the 6.828 patched QEMU you'll instead see a register dump and a "Triple fault." message.
We'll address this problem shortly, but for now we can use the debugger to check that we're entering user mode. Use make qemu-gdb and set a GDB breakpoint at `env_pop_tf`, which should be the last function you hit before actually entering user mode. Single step through this function using si; the processor should enter user mode after the `iret` instruction. You should then see the first instruction in the user environment's executable, which is the `cmpl` instruction at the label `start` in `lib/entry.S`. Now use b *0x... to set a breakpoint at the `int $0x30` in `sys_cputs()` in `hello` (see `obj/user/hello.asm` for the user-space address). This `int` is the system call to display a character to the console. If you cannot execute as far as the `int`, then something is wrong with your address space setup or program loading code; go back and fix it before continuing.
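A typical session for this check looks roughly like the following (two terminals; run GDB from the lab directory so it picks up the lab's `.gdbinit`, and take the breakpoint address from your own `obj/user/hello.asm`):
```
$ make qemu-gdb        # terminal 1: boot JOS under QEMU, waiting for the debugger
$ gdb                  # terminal 2
(gdb) b env_pop_tf
(gdb) c
(gdb) si               # keep stepping; after the iret you are executing user code
(gdb) b *0x...         # fill in the address of the `int $0x30` from obj/user/hello.asm
(gdb) c
```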
##### Handling Interrupts and Exceptions
At this point, the first `int $0x30` system call instruction in user space is a dead end: once the processor gets into user mode, there is no way to get back out. You will now need to implement basic exception and system call handling, so that it is possible for the kernel to recover control of the processor from user-mode code. The first thing you should do is thoroughly familiarize yourself with the x86 interrupt and exception mechanism.
```
Exercise 3. Read Chapter 9, Exceptions and Interrupts in the 80386 Programmer's Manual (or Chapter 5 of the IA-32 Developer's Manual), if you haven't already.
```
In this lab we generally follow Intel's terminology for interrupts, exceptions, and the like. However, terms such as exception, trap, interrupt, fault and abort have no standard meaning across architectures or operating systems, and are often used without regard to the subtle distinctions between them on a particular architecture such as the x86. When you see these terms outside of this lab, the meanings might be slightly different.
##### Basics of Protected Control Transfer
Exceptions and interrupts are both "protected control transfers," which cause the processor to switch from user to kernel mode (CPL=0) without giving the user-mode code any opportunity to interfere with the functioning of the kernel or other environments. In Intel's terminology, an _interrupt_ is a protected control transfer that is caused by an asynchronous event usually external to the processor, such as notification of external device I/O activity. An _exception_ , in contrast, is a protected control transfer caused synchronously by the currently running code, for example due to a divide by zero or an invalid memory access.
In order to ensure that these protected control transfers are actually _protected_ , the processor's interrupt/exception mechanism is designed so that the code currently running when the interrupt or exception occurs _does not get to choose arbitrarily where the kernel is entered or how_. Instead, the processor ensures that the kernel can be entered only under carefully controlled conditions. On the x86, two mechanisms work together to provide this protection:
1. **The Interrupt Descriptor Table.** The processor ensures that interrupts and exceptions can only cause the kernel to be entered at a few specific, well-defined entry-points _determined by the kernel itself_ , and not by the code running when the interrupt or exception is taken.
The x86 allows up to 256 different interrupt or exception entry points into the kernel, each with a different _interrupt vector_. A vector is a number between 0 and 255. An interrupt's vector is determined by the source of the interrupt: different devices, error conditions, and application requests to the kernel generate interrupts with different vectors. The CPU uses the vector as an index into the processor's _interrupt descriptor table_ (IDT), which the kernel sets up in kernel-private memory, much like the GDT. From the appropriate entry in this table the processor loads:
* the value to load into the instruction pointer (`EIP`) register, pointing to the kernel code designated to handle that type of exception.
* the value to load into the code segment (`CS`) register, which includes in bits 0-1 the privilege level at which the exception handler is to run. (In JOS, all exceptions are handled in kernel mode, privilege level 0.)
2. **The Task State Segment.** The processor needs a place to save the _old_ processor state before the interrupt or exception occurred, such as the original values of `EIP` and `CS` before the processor invoked the exception handler, so that the exception handler can later restore that old state and resume the interrupted code from where it left off. But this save area for the old processor state must in turn be protected from unprivileged user-mode code; otherwise buggy or malicious user code could compromise the kernel.
For this reason, when an x86 processor takes an interrupt or trap that causes a privilege level change from user to kernel mode, it also switches to a stack in the kernel's memory. A structure called the _task state segment_ (TSS) specifies the segment selector and address where this stack lives. The processor pushes (on this new stack) `SS`, `ESP`, `EFLAGS`, `CS`, `EIP`, and an optional error code. Then it loads the `CS` and `EIP` from the interrupt descriptor, and sets the `ESP` and `SS` to refer to the new stack.
Although the TSS is large and can potentially serve a variety of purposes, JOS only uses it to define the kernel stack that the processor should switch to when it transfers from user to kernel mode. Since "kernel mode" in JOS is privilege level 0 on the x86, the processor uses the `ESP0` and `SS0` fields of the TSS to define the kernel stack when entering kernel mode. JOS doesn't use any other TSS fields.
##### Types of Exceptions and Interrupts
All of the synchronous exceptions that the x86 processor can generate internally use interrupt vectors between 0 and 31, and therefore map to IDT entries 0-31. For example, a page fault always causes an exception through vector 14. Interrupt vectors greater than 31 are only used by _software interrupts_ , which can be generated by the `int` instruction, or asynchronous _hardware interrupts_ , caused by external devices when they need attention.
In this section we will extend JOS to handle the internally generated x86 exceptions in vectors 0-31. In the next section we will make JOS handle software interrupt vector 48 (0x30), which JOS (fairly arbitrarily) uses as its system call interrupt vector. In Lab 4 we will extend JOS to handle externally generated hardware interrupts such as the clock interrupt.
##### An Example
Let's put these pieces together and trace through an example. Let's say the processor is executing code in a user environment and encounters a divide instruction that attempts to divide by zero.
1. The processor switches to the stack defined by the `SS0` and `ESP0` fields of the TSS, which in JOS will hold the values `GD_KD` and `KSTACKTOP`, respectively.
2. The processor pushes the exception parameters on the kernel stack, starting at address `KSTACKTOP`:
```
+--------------------+ KSTACKTOP
| 0x00000 | old SS | " - 4
| old ESP | " - 8
| old EFLAGS | " - 12
| 0x00000 | old CS | " - 16
| old EIP | " - 20 <---- ESP
+--------------------+
```
3. Because we're handling a divide error, which is interrupt vector 0 on the x86, the processor reads IDT entry 0 and sets `CS:EIP` to point to the handler function described by the entry.
4. The handler function takes control and handles the exception, for example by terminating the user environment.
For certain types of x86 exceptions, in addition to the "standard" five words above, the processor pushes onto the stack another word containing an _error code_. The page fault exception, number 14, is an important example. See the 80386 manual to determine for which exception numbers the processor pushes an error code, and what the error code means in that case. When the processor pushes an error code, the stack would look as follows at the beginning of the exception handler when coming in from user mode:
```
+--------------------+ KSTACKTOP
| 0x00000 | old SS | " - 4
| old ESP | " - 8
| old EFLAGS | " - 12
| 0x00000 | old CS | " - 16
| old EIP | " - 20
| error code | " - 24 <---- ESP
+--------------------+
```
##### Nested Exceptions and Interrupts
The processor can take exceptions and interrupts both from kernel and user mode. It is only when entering the kernel from user mode, however, that the x86 processor automatically switches stacks before pushing its old register state onto the stack and invoking the appropriate exception handler through the IDT. If the processor is _already_ in kernel mode when the interrupt or exception occurs (the low 2 bits of the `CS` register are already zero), then the CPU just pushes more values on the same kernel stack. In this way, the kernel can gracefully handle _nested exceptions_ caused by code within the kernel itself. This capability is an important tool in implementing protection, as we will see later in the section on system calls.
If the processor is already in kernel mode and takes a nested exception, since it does not need to switch stacks, it does not save the old `SS` or `ESP` registers. For exception types that do not push an error code, the kernel stack therefore looks like the following on entry to the exception handler:
```
+--------------------+ <---- old ESP
| old EFLAGS | " - 4
| 0x00000 | old CS | " - 8
| old EIP | " - 12
+--------------------+
```
For exception types that push an error code, the processor pushes the error code immediately after the old `EIP`, as before.
There is one important caveat to the processor's nested exception capability. If the processor takes an exception while already in kernel mode, and _cannot push its old state onto the kernel stack_ for any reason such as lack of stack space, then there is nothing the processor can do to recover, so it simply resets itself. Needless to say, the kernel should be designed so that this can't happen.
##### Setting Up the IDT
You should now have the basic information you need in order to set up the IDT and handle exceptions in JOS. For now, you will set up the IDT to handle interrupt vectors 0-31 (the processor exceptions). We'll handle system call interrupts later in this lab and add interrupts 32-47 (the device IRQs) in a later lab.
The header files `inc/trap.h` and `kern/trap.h` contain important definitions related to interrupts and exceptions that you will need to become familiar with. The file `kern/trap.h` contains definitions that are strictly private to the kernel, while `inc/trap.h` contains definitions that may also be useful to user-level programs and libraries.
Note: Some of the exceptions in the range 0-31 are defined by Intel to be reserved. Since they will never be generated by the processor, it doesn't really matter how you handle them. Do whatever you think is cleanest.
The overall flow of control that you should achieve is depicted below:
```
IDT trapentry.S trap.c
+----------------+
| &handler1 |---------> handler1: trap (struct Trapframe *tf)
| | // do stuff {
| | call trap // handle the exception/interrupt
| | // ... }
+----------------+
| &handler2 |--------> handler2:
| | // do stuff
| | call trap
| | // ...
+----------------+
.
.
.
+----------------+
| &handlerX |--------> handlerX:
| | // do stuff
| | call trap
| | // ...
+----------------+
```
Each exception or interrupt should have its own handler in `trapentry.S` and `trap_init()` should initialize the IDT with the addresses of these handlers. Each of the handlers should build a `struct Trapframe` (see `inc/trap.h`) on the stack and call `trap()` (in `trap.c`) with a pointer to the Trapframe. `trap()` then handles the exception/interrupt or dispatches to a specific handler function.
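For concreteness, here is a minimal sketch of what the `trap_init()` side of this might look like once the entry points exist. The names `t_divide`, `t_brkpt` and `t_syscall` are hypothetical, use whatever names your `TRAPHANDLER`/`TRAPHANDLER_NOEC` invocations generate; `SETGATE`, `idt`, the `T_*` vector numbers and `GD_KT` are existing JOS definitions.
```
// Sketch only: entry points generated in trapentry.S (names are examples).
void t_divide(void);
void t_brkpt(void);
void t_syscall(void);

void
trap_init(void)
{
	// Processor exceptions get DPL 0 so user code cannot raise them
	// directly with `int`; the breakpoint and system-call vectors get
	// DPL 3 so `int3` and `int $0x30` work from user mode.
	SETGATE(idt[T_DIVIDE],  0, GD_KT, t_divide,  0);
	SETGATE(idt[T_BRKPT],   0, GD_KT, t_brkpt,   3);
	SETGATE(idt[T_SYSCALL], 0, GD_KT, t_syscall, 3);
	// ... one SETGATE per remaining vector in 0-31 ...

	// Per-CPU setup (loading the IDT and TSS) is done by the provided code.
	trap_init_percpu();
}
```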
```
Exercise 4. Edit `trapentry.S` and `trap.c` and implement the features described above. The macros `TRAPHANDLER` and `TRAPHANDLER_NOEC` in `trapentry.S` should help you, as well as the T_* defines in `inc/trap.h`. You will need to add an entry point in `trapentry.S` (using those macros) for each trap defined in `inc/trap.h`, and you'll have to provide `_alltraps` which the `TRAPHANDLER` macros refer to. You will also need to modify `trap_init()` to initialize the `idt` to point to each of these entry points defined in `trapentry.S`; the `SETGATE` macro will be helpful here.
Your `_alltraps` should:
1. push values to make the stack look like a struct Trapframe
2. load `GD_KD` into `%ds` and `%es`
3. `pushl %esp` to pass a pointer to the Trapframe as an argument to trap()
4. `call trap` (can `trap` ever return?)
Consider using the `pushal` instruction; it fits nicely with the layout of the `struct Trapframe`.
Test your trap handling code using some of the test programs in the `user` directory that cause exceptions before making any system calls, such as `user/divzero`. You should be able to get make grade to succeed on the `divzero`, `softint`, and `badsegment` tests at this point.
```
```
Challenge! You probably have a lot of very similar code right now, between the lists of `TRAPHANDLER` in `trapentry.S` and their installations in `trap.c`. Clean this up. Change the macros in `trapentry.S` to automatically generate a table for `trap.c` to use. Note that you can switch between laying down code and data in the assembler by using the directives `.text` and `.data`.
```
```
Questions
Answer the following questions in your `answers-lab3.txt`:
1. What is the purpose of having an individual handler function for each exception/interrupt? (i.e., if all exceptions/interrupts were delivered to the same handler, what feature that exists in the current implementation could not be provided?)
2. Did you have to do anything to make the `user/softint` program behave correctly? The grade script expects it to produce a general protection fault (trap 13), but `softint`'s code says `int $14`. _Why_ should this produce interrupt vector 13? What happens if the kernel actually allows `softint`'s `int $14` instruction to invoke the kernel's page fault handler (which is interrupt vector 14)?
```
This concludes part A of the lab. Don't forget to add `answers-lab3.txt`, commit your changes, and run make handin before the part A deadline.
#### Part B: Page Faults, Breakpoint Exceptions, and System Calls
Now that your kernel has basic exception handling capabilities, you will refine it to provide important operating system primitives that depend on exception handling.
##### Handling Page Faults
The page fault exception, interrupt vector 14 (`T_PGFLT`), is a particularly important one that we will exercise heavily throughout this lab and the next. When the processor takes a page fault, it stores the linear (i.e., virtual) address that caused the fault in a special processor control register, `CR2`. In `trap.c` we have provided the beginnings of a special function, `page_fault_handler()`, to handle page fault exceptions.
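If it helps to see the shape of the dispatch code, here is one possible form of `trap_dispatch()` with the page-fault case added; the `default` path shown is essentially the unexpected-trap logic already provided in `kern/trap.c`.
```
static void
trap_dispatch(struct Trapframe *tf)
{
	switch (tf->tf_trapno) {
	case T_PGFLT:
		// page_fault_handler() reads the faulting address from CR2
		// (via rcr2()) and decides what to do with the environment.
		page_fault_handler(tf);
		return;
	default:
		// Unexpected trap: the user process or the kernel has a bug.
		print_trapframe(tf);
		if (tf->tf_cs == GD_KT)
			panic("unhandled trap in kernel");
		env_destroy(curenv);
		return;
	}
}
```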
```
Exercise 5. Modify `trap_dispatch()` to dispatch page fault exceptions to `page_fault_handler()`. You should now be able to get make grade to succeed on the `faultread`, `faultreadkernel`, `faultwrite`, and `faultwritekernel` tests. If any of them don't work, figure out why and fix them. Remember that you can boot JOS into a particular user program using make run- _x_ or make run- _x_ -nox. For instance, make run-hello-nox runs the _hello_ user program.
```
You will further refine the kernel's page fault handling below, as you implement system calls.
##### The Breakpoint Exception
The breakpoint exception, interrupt vector 3 (`T_BRKPT`), is normally used to allow debuggers to insert breakpoints in a program's code by temporarily replacing the relevant program instruction with the special 1-byte `int3` software interrupt instruction. In JOS we will abuse this exception slightly by turning it into a primitive pseudo-system call that any user environment can use to invoke the JOS kernel monitor. This usage is actually somewhat appropriate if we think of the JOS kernel monitor as a primitive debugger. The user-mode implementation of `panic()` in `lib/panic.c`, for example, performs an `int3` after displaying its panic message.
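Continuing the `trap_dispatch()` sketch above, routing breakpoints to the monitor can be as small as one more `case` in that switch; `monitor()` is declared in `kern/monitor.h` and only returns when the user leaves the monitor.
```
	case T_BRKPT:
		// Treat int3 from user code as a request to enter the kernel
		// monitor, handing it the saved register state for inspection.
		monitor(tf);
		return;
```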
```
Exercise 6. Modify `trap_dispatch()` to make breakpoint exceptions invoke the kernel monitor. You should now be able to get make grade to succeed on the `breakpoint` test.
```
```
Challenge! Modify the JOS kernel monitor so that you can 'continue' execution from the current location (e.g., after the `int3`, if the kernel monitor was invoked via the breakpoint exception), and so that you can single-step one instruction at a time. You will need to understand certain bits of the `EFLAGS` register in order to implement single-stepping.
Optional: If you're feeling really adventurous, find some x86 disassembler source code - e.g., by ripping it out of QEMU, or out of GNU binutils, or just write it yourself - and extend the JOS kernel monitor to be able to disassemble and display instructions as you are stepping through them. Combined with the symbol table loading from lab 1, this is the stuff of which real kernel debuggers are made.
```
```
Questions
3. The break point test case will either generate a break point exception or a general protection fault depending on how you initialized the break point entry in the IDT (i.e., your call to `SETGATE` from `trap_init`). Why? How do you need to set it up in order to get the breakpoint exception to work as specified above and what incorrect setup would cause it to trigger a general protection fault?
4. What do you think is the point of these mechanisms, particularly in light of what the `user/softint` test program does?
```
##### System calls
User processes ask the kernel to do things for them by invoking system calls. When the user process invokes a system call, the processor enters kernel mode, the processor and the kernel cooperate to save the user process's state, the kernel executes appropriate code in order to carry out the system call, and then resumes the user process. The exact details of how the user process gets the kernel's attention and how it specifies which call it wants to execute vary from system to system.
In the JOS kernel, we will use the `int` instruction, which causes a processor interrupt. In particular, we will use `int $0x30` as the system call interrupt. We have defined the constant `T_SYSCALL` to 48 (0x30) for you. You will have to set up the interrupt descriptor to allow user processes to cause that interrupt. Note that interrupt 0x30 cannot be generated by hardware, so there is no ambiguity caused by allowing user code to generate it.
The application will pass the system call number and the system call arguments in registers. This way, the kernel won't need to grub around in the user environment's stack or instruction stream. The system call number will go in `%eax`, and the arguments (up to five of them) will go in `%edx`, `%ecx`, `%ebx`, `%edi`, and `%esi`, respectively. The kernel passes the return value back in `%eax`. The assembly code to invoke a system call has been written for you, in `syscall()` in `lib/syscall.c`. You should read through it and make sure you understand what is going on.
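On the kernel side this usually amounts to one more `case` in the `trap_dispatch()` sketch above: pull the same registers back out of the saved trap frame and store the result where the user's `%eax` will be restored from. The field names below follow `struct PushRegs`/`struct Trapframe` in `inc/trap.h`.
```
	case T_SYSCALL:
		// Arguments were loaded into registers by lib/syscall.c; the
		// return value goes into the saved %eax so the user process
		// sees it after the trap frame is popped and iret runs.
		tf->tf_regs.reg_eax = syscall(tf->tf_regs.reg_eax,
		                              tf->tf_regs.reg_edx,
		                              tf->tf_regs.reg_ecx,
		                              tf->tf_regs.reg_ebx,
		                              tf->tf_regs.reg_edi,
		                              tf->tf_regs.reg_esi);
		return;
```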
```
Exercise 7. Add a handler in the kernel for interrupt vector `T_SYSCALL`. You will have to edit `kern/trapentry.S` and `kern/trap.c`'s `trap_init()`. You also need to change `trap_dispatch()` to handle the system call interrupt by calling `syscall()` (defined in `kern/syscall.c`) with the appropriate arguments, and then arranging for the return value to be passed back to the user process in `%eax`. Finally, you need to implement `syscall()` in `kern/syscall.c`. Make sure `syscall()` returns `-E_INVAL` if the system call number is invalid. You should read and understand `lib/syscall.c` (especially the inline assembly routine) in order to confirm your understanding of the system call interface. Handle all the system calls listed in `inc/syscall.h` by invoking the corresponding kernel function for each call.
Run the `user/hello` program under your kernel (make run-hello). It should print "`hello, world`" on the console and then cause a page fault in user mode. If this does not happen, it probably means your system call handler isn't quite right. You should also now be able to get make grade to succeed on the `testbss` test.
```
```
Challenge! Implement system calls using the `sysenter` and `sysexit` instructions instead of using `int 0x30` and `iret`.
The `sysenter/sysexit` instructions were designed by Intel to be faster than `int/iret`. They do this by using registers instead of the stack and by making assumptions about how the segmentation registers are used. The exact details of these instructions can be found in Volume 2B of the Intel reference manuals.
The easiest way to add support for these instructions in JOS is to add a `sysenter_handler` in `kern/trapentry.S` that saves enough information about the user environment to return to it, sets up the kernel environment, pushes the arguments to `syscall()` and calls `syscall()` directly. Once `syscall()` returns, set everything up for and execute the `sysexit` instruction. You will also need to add code to `kern/init.c` to set up the necessary model specific registers (MSRs). Section 6.1.2 in Volume 2 of the AMD Architecture Programmer's Manual and the reference on SYSENTER in Volume 2B of the Intel reference manuals give good descriptions of the relevant MSRs. You can find an implementation of `wrmsr` to add to `inc/x86.h` for writing to these MSRs [here][4].
Finally, `lib/syscall.c` must be changed to support making a system call with `sysenter`. Here is a possible register layout for the `sysenter` instruction:
eax - syscall number
edx, ecx, ebx, edi - arg1, arg2, arg3, arg4
esi - return pc
ebp - return esp
esp - trashed by sysenter
GCC's inline assembler will automatically save registers that you tell it to load values directly into. Don't forget to either save (push) and restore (pop) other registers that you clobber, or tell the inline assembler that you're clobbering them. The inline assembler doesn't support saving `%ebp`, so you will need to add code to save and restore it yourself. The return address can be put into `%esi` by using an instruction like `leal after_sysenter_label, %%esi`.
Note that this only supports 4 arguments, so you will need to leave the old method of doing system calls around to support 5 argument system calls. Furthermore, because this fast path doesn't update the current environment's trap frame, it won't be suitable for some of the system calls we add in later labs.
You may have to revisit your code once we enable asynchronous interrupts in the next lab. Specifically, you'll need to enable interrupts when returning to the user process, which `sysexit` doesn't do for you.
```
##### User-mode startup
A user program starts running at the top of `lib/entry.S`. After some setup, this code calls `libmain()`, in `lib/libmain.c`. You should modify `libmain()` to initialize the global pointer `thisenv` to point at this environment's `struct Env` in the `envs[]` array. (Note that `lib/entry.S` has already defined `envs` to point at the `UENVS` mapping you set up in Part A.) Hint: look in `inc/env.h` and use `sys_getenvid`.
`libmain()` then calls `umain`, which, in the case of the hello program, is in `user/hello.c`. Note that after printing "`hello, world`", it tries to access `thisenv->env_id`. This is why it faulted earlier. Now that you've initialized `thisenv` properly, it should not fault. If it still faults, you probably haven't mapped the `UENVS` area user-readable (back in Part A in `pmap.c`; this is the first time we've actually used the `UENVS` area).
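As a sketch, the modified `libmain()` might look like the following; only the `thisenv` line is new relative to the provided `lib/libmain.c`, and `ENVX()` and `sys_getenvid()` are existing JOS definitions.
```
void
libmain(int argc, char **argv)
{
	// Look up our own struct Env in envs[], which lib/entry.S defines
	// to point at the read-only UENVS mapping set up in Part A.
	thisenv = &envs[ENVX(sys_getenvid())];

	// Save the program name so that panic() can report it.
	if (argc > 0)
		binaryname = argv[0];

	// Call the user's main routine, then exit gracefully.
	umain(argc, argv);
	exit();
}
```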
```
Exercise 8. Add the required code to the user library, then boot your kernel. You should see `user/hello` print "`hello, world`" and then print "`i am environment 00001000`". `user/hello` then attempts to "exit" by calling `sys_env_destroy()` (see `lib/libmain.c` and `lib/exit.c`). Since the kernel currently only supports one user environment, it should report that it has destroyed the only environment and then drop into the kernel monitor. You should be able to get make grade to succeed on the `hello` test.
```
##### Page faults and memory protection
Memory protection is a crucial feature of an operating system, ensuring that bugs in one program cannot corrupt other programs or corrupt the operating system itself.
Operating systems usually rely on hardware support to implement memory protection. The OS keeps the hardware informed about which virtual addresses are valid and which are not. When a program tries to access an invalid address or one for which it has no permissions, the processor stops the program at the instruction causing the fault and then traps into the kernel with information about the attempted operation. If the fault is fixable, the kernel can fix it and let the program continue running. If the fault is not fixable, then the program cannot continue, since it will never get past the instruction causing the fault.
As an example of a fixable fault, consider an automatically extended stack. In many systems the kernel initially allocates a single stack page, and then if a program faults accessing pages further down the stack, the kernel will allocate those pages automatically and let the program continue. By doing this, the kernel only allocates as much stack memory as the program needs, but the program can work under the illusion that it has an arbitrarily large stack.
System calls present an interesting problem for memory protection. Most system call interfaces let user programs pass pointers to the kernel. These pointers point at user buffers to be read or written. The kernel then dereferences these pointers while carrying out the system call. There are two problems with this:
1. A page fault in the kernel is potentially a lot more serious than a page fault in a user program. If the kernel page-faults while manipulating its own data structures, that's a kernel bug, and the fault handler should panic the kernel (and hence the whole system). But when the kernel is dereferencing pointers given to it by the user program, it needs a way to remember that any page faults these dereferences cause are actually on behalf of the user program.
2. The kernel typically has more memory permissions than the user program. The user program might pass a pointer to a system call that points to memory that the kernel can read or write but that the program cannot. The kernel must be careful not to be tricked into dereferencing such a pointer, since that might reveal private information or destroy the integrity of the kernel.
For both of these reasons the kernel must be extremely careful when handling pointers presented by user programs.
You will now solve these two problems with a single mechanism that scrutinizes all pointers passed from userspace into the kernel. When a program passes the kernel a pointer, the kernel will check that the address is in the user part of the address space, and that the page table would allow the memory operation.
Thus, the kernel will never suffer a page fault due to dereferencing a user-supplied pointer. If the kernel does page fault, it should panic and terminate.
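One reasonable shape for `user_mem_check()` (among several) walks the requested range page by page, checking both the `ULIM` boundary and the page-table permissions; `pgdir_walk()`, `user_mem_check_addr`, `PGSIZE`, `ROUNDDOWN`/`ROUNDUP` and `-E_FAULT` are existing JOS definitions, and the exact reporting convention is up to you.
```
int
user_mem_check(struct Env *env, const void *va, size_t len, int perm)
{
	uintptr_t start = ROUNDDOWN((uintptr_t) va, PGSIZE);
	uintptr_t end = ROUNDUP((uintptr_t) va + len, PGSIZE);
	uintptr_t cur;

	for (cur = start; cur < end; cur += PGSIZE) {
		pte_t *pte = pgdir_walk(env->env_pgdir, (void *) cur, 0);
		// Every page must lie below ULIM and be mapped with PTE_P
		// plus whatever permission bits the caller asked for.
		if (cur >= ULIM || pte == NULL ||
		    (*pte & (perm | PTE_P)) != (perm | PTE_P)) {
			// Report the first faulting address; for the first
			// page that is the original, unrounded pointer.
			user_mem_check_addr = (cur < (uintptr_t) va)
			        ? (uintptr_t) va : cur;
			return -E_FAULT;
		}
	}
	return 0;
}
```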
```
Exercise 9. Change `kern/trap.c` to panic if a page fault happens in kernel mode.
Hint: to determine whether a fault happened in user mode or in kernel mode, check the low bits of the `tf_cs`.
Read `user_mem_assert` in `kern/pmap.c` and implement `user_mem_check` in that same file.
Change `kern/syscall.c` to sanity check arguments to system calls.
Boot your kernel, running `user/buggyhello`. The environment should be destroyed, and the kernel should _not_ panic. You should see:
[00001000] user_mem_check assertion failure for va 00000001
[00001000] free env 00001000
Destroyed the only environment - nothing more to do!
Finally, change `debuginfo_eip` in `kern/kdebug.c` to call `user_mem_check` on `usd`, `stabs`, and `stabstr`. If you now run `user/breakpoint`, you should be able to run backtrace from the kernel monitor and see the backtrace traverse into `lib/libmain.c` before the kernel panics with a page fault. What causes this page fault? You don't need to fix it, but you should understand why it happens.
```
Note that the same mechanism you just implemented also works for malicious user applications (such as `user/evilhello`).
```
Exercise 10. Boot your kernel, running `user/evilhello`. The environment should be destroyed, and the kernel should not panic. You should see:
[00000000] new env 00001000
...
[00001000] user_mem_check assertion failure for va f010000c
[00001000] free env 00001000
```
**This completes the lab.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab3.txt`. Commit your changes and type make handin in the `lab` directory to submit your work.
Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab3.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 3', then make handin and follow the directions.
--------------------------------------------------------------------------------
via: https://pdos.csail.mit.edu/6.828/2018/labs/lab3/
作者:[csail.mit][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: https://pdos.csail.mit.edu/6.828/2018/labs/labguide.html
[2]: https://pdos.csail.mit.edu/6.828/2018/labs/reference.html
[3]: http://blogs.msdn.com/larryosterman/archive/2005/02/08/369243.aspx
[4]: http://ftp.kh.edu.tw/Linux/SuSE/people/garloff/linux/k6mod.c

View File

@ -1,131 +0,0 @@
translating---geekpi
How To Lock Virtual Console Sessions On Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-720x340.png)
When you're working on a shared system, you might not want other users to sneak a peek at your console to see what you're actually doing. If so, I know a simple trick to lock your own session while still allowing other users to use the system on other virtual consoles. Thanks to **Vlock**, which stands for **V**irtual Console **lock**, a command line program to lock one or more sessions on the Linux console. If necessary, you can lock the entire console and disable the virtual console switching functionality altogether. Vlock is especially useful for shared Linux systems which have multiple users with access to the console.
### Installing Vlock
On Arch-based systems, the standalone Vlock package has been replaced by the **kbd** package, which is preinstalled by default, so you need not bother with installation.
On Debian, Ubuntu, Linux Mint, run the following command to install Vlock:
```
$ sudo apt-get install vlock
```
On Fedora:
```
$ sudo dnf install vlock
```
On RHEL, CentOS:
```
$ sudo yum install vlock
```
### Lock Virtual Console Sessions On Linux
The general syntax for Vlock is:
```
vlock [ -acnshv ] [ -t <timeout> ] [ plugins... ]
```
Where,
  * **-a** Lock all virtual console sessions,
  * **-c** Lock the current virtual console session,
  * **-n** Switch to a new empty console before locking all sessions,
  * **-s** Disable the SysRq key mechanism,
  * **-t** Specify the timeout for the screensaver plugins,
  * **-h** Display the help section,
  * **-v** Display the version.
Let me show you some examples.
**1\. Lock current console session**
When running Vlock without any arguments, it locks the current console session (TTY) by default. To unlock the session, you need to enter either the current user's password or the root password.
```
$ vlock
```
![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-1-1.gif)
You can also use **-c** flag to lock the current console session.
```
$ vlock -c
```
Please note that this command will only lock the current console. You can switch to other consoles by pressing **ALT+F2**. For more details about switching between TTYs, refer to the following guide.
Also, if the system has multiple users, the other users can still access their respective TTYs.
**2\. Lock all console sessions**
To lock all TTYs at the same time and also disable the virtual console switching functionality, run:
```
$ vlock -a
```
Again, to unlock the console sessions, just press the ENTER key and type your current user's password or the root user's password.
Please keep in mind that the **root user can always unlock any vlock session** at any time, unless disabled at compile time.
**3. Switch to a new virtual console before locking all consoles**

It is also possible to make Vlock switch to a new empty virtual console from the X session before locking all consoles. To do so, use the **-n** flag.
```
$ vlock -n
```
**4. Disable the SysRq mechanism**

As you may know, the Magic SysRq key mechanism allows users to perform some operations when the system freezes. So users could unlock the consoles using SysRq. In order to prevent this, pass the **-s** option to disable the SysRq mechanism. Please remember, this only works together with the **-a** option.
```
$ vlock -sa
```
For more options and their usage, refer to the help section or the man pages.
```
$ vlock -h
$ man vlock
```
Vlock prevents unauthorized users from gaining console access. If you're looking for a simple console locking mechanism for your Linux machine, Vlock is worth checking out!

And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-lock-virtual-console-sessions-on-linux/
作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972

View File

@ -1,3 +1,5 @@
HankChow translating
5 tips for choosing the right open source database
======
When selecting a mission-critical application, you can't afford to make mistakes.

View File

@ -1,282 +0,0 @@
translating by dianbanjiu
How to set up WordPress on a Raspberry Pi
======
Run your WordPress website on your Raspberry Pi with this simple tutorial.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_raspberry-pi-classroom_lead.png?itok=KIyhmR8W)
WordPress is a popular open source blogging platform and content management system (CMS). It's easy to set up and has a thriving community of developers building websites and creating themes and plugins for others to use.
Although getting hosting packages with a "one-click WordPress setup" is easy, it's also simple to set up your own on a Linux server with only command-line access, and the [Raspberry Pi][1] is a perfect way to try it out and learn something along the way.
The four components of a commonly used web stack are Linux, Apache, MySQL, and PHP. Here's what you need to know about each.
### Linux
The Raspberry Pi runs Raspbian, which is a Linux distribution based on Debian and optimized to run well on Raspberry Pi hardware. It comes with two options to start: Desktop or Lite. The Desktop version boots to a familiar-looking desktop and comes with lots of educational software and programming tools, as well as the LibreOffice suite, Minecraft, and a web browser. The Lite version has no desktop environment, so it's command-line only and comes with only the essential software.
This tutorial will work with either version, but if you use the Lite version you'll have to use another computer to access your website.
### Apache
Apache is a popular web server application you can install on the Raspberry Pi to serve web pages. On its own, Apache can serve static HTML files over HTTP. With additional modules, it can serve dynamic web pages using scripting languages such as PHP.
Installing Apache is very simple. Open a terminal window and type the following command:
```
sudo apt install apache2 -y
```
By default, Apache puts a test HTML file in a web folder you can view from your Pi or another computer on your network. Just open the web browser and enter the address **<http://localhost>**. Alternatively (particularly if you're using Raspbian Lite), enter the Pi's IP address instead of **localhost**. You should see this in your browser window:
![](https://opensource.com/sites/default/files/uploads/apache-it-works.png)
This means you have Apache working!
This default webpage is just an HTML file on the filesystem. It is located at **/var/www/html/index.html**. You can try replacing this file with some HTML of your own using the [Leafpad][2] text editor:
```
cd /var/www/html/
sudo leafpad index.html
```
Save and close Leafpad then refresh the browser to see your changes.
### MySQL
MySQL (pronounced "my S-Q-L" or "my sequel") is a popular database engine. Like PHP, it's widely used on web servers, which is why projects like WordPress use it and why those projects are so popular.
Install MySQL Server by entering the following command into the terminal window:
```
sudo apt-get install mysql-server -y
```
WordPress uses MySQL to store posts, pages, user data, and lots of other content.
### PHP
PHP is a preprocessor: it's code that runs when the server receives a request for a web page via a web browser. It works out what needs to be shown on the page, then sends that page to the browser. Unlike static HTML, PHP can show different content under different circumstances. PHP is a very popular language on the web; huge projects like Facebook and Wikipedia are written in PHP.
Install PHP and the MySQL extension:
```
sudo apt-get install php php-mysql -y
```
Delete the **index.html** file and create **index.php** :
```
sudo rm index.html
sudo leafpad index.php
```
Add the following line:
```
<?php phpinfo(); ?>
```
Save, exit, and refresh your browser. You'll see the PHP status page:
![](https://opensource.com/sites/default/files/uploads/phpinfo.png)
### WordPress
You can download WordPress from [wordpress.org][3] using the **wget** command. Helpfully, the latest version of WordPress is always available at [wordpress.org/latest.tar.gz][4], so you can grab it without having to look it up on the website. As I'm writing, this is version 4.9.8.
Make sure you're in **/var/www/html** and delete everything in it:
```
cd /var/www/html/
sudo rm *
```
Download WordPress using **wget** , then extract the contents and move the WordPress files to the **html** directory:
```
sudo wget http://wordpress.org/latest.tar.gz
sudo tar xzf latest.tar.gz
sudo mv wordpress/* .
```
Tidy up by removing the tarball and the now-empty **wordpress** directory:
```
sudo rm -rf wordpress latest.tar.gz
```
Running the **ls** or **tree -L 1** command will show the contents of a WordPress project:
```
.
├── index.php
├── license.txt
├── readme.html
├── wp-activate.php
├── wp-admin
├── wp-blog-header.php
├── wp-comments-post.php
├── wp-config-sample.php
├── wp-content
├── wp-cron.php
├── wp-includes
├── wp-links-opml.php
├── wp-load.php
├── wp-login.php
├── wp-mail.php
├── wp-settings.php
├── wp-signup.php
├── wp-trackback.php
└── xmlrpc.php
3 directories, 16 files
```
This is the source of a default WordPress installation. The files you edit to customize your installation belong in the **wp-content** folder.
You should now change the ownership of all these files to the Apache user:
```
sudo chown -R www-data: .
```
### WordPress database
To get your WordPress site set up, you need a database. This is where MySQL comes in!
Run the MySQL secure installation command in the terminal window:
```
sudo mysql_secure_installation
```
You will be asked a series of questions. There's no password set up initially, but you should set one in the second step. Make sure you enter a password you will remember, as you'll need it to connect to WordPress. Press Enter to say Yes to each question that follows.
When it's complete, you will see the messages "All done!" and "Thanks for using MariaDB!"
Run **mysql** in the terminal window:
```
sudo mysql -uroot -p
```
Enter the root password you created. You will be greeted by the message "Welcome to the MariaDB monitor." Create the database for your WordPress installation at the **MariaDB [(none)] >** prompt using:
```
create database wordpress;
```
Note the semicolon at the end of the statement. If the command is successful, you should see this:
```
Query OK, 1 row affected (0.00 sec)
```
Grant database privileges to the root user, entering your password at the end of the statement:
```
GRANT ALL PRIVILEGES ON wordpress.* TO 'root'@'localhost' IDENTIFIED BY 'YOURPASSWORD';
```
For the changes to take effect, you will need to flush the database privileges:
```
FLUSH PRIVILEGES;
```
Exit the MariaDB prompt with **Ctrl+D** to return to the Bash shell.
### WordPress configuration
Open the web browser on your Raspberry Pi and open **<http://localhost>**. You should see a WordPress page asking you to pick your language. Select your language and click **Continue**. You will be presented with the WordPress welcome screen. Click the **Let's go!** button.
Fill out the basic site information as follows:
```
Database Name:      wordpress
User Name:          root
Password:           <YOUR PASSWORD>
Database Host:      localhost
Table Prefix:       wp_
```
Click **Submit** to proceed, then click **Run the install**.
![](https://opensource.com/sites/default/files/uploads/wp-info.png)
Fill in the form: Give your site a title, create a username and password, and enter your email address. Hit the **Install WordPress** button, then log in using the account you just created. Now that you're logged in and your site is set up, you can see your website by visiting **<http://localhost/wp-admin>**.
### Permalinks
It's a good idea to change your permalink settings to make your URLs more friendly.
To do this, log into WordPress and go to the dashboard. Go to **Settings** , then **Permalinks**. Select the **Post name** option and click **Save Changes**. You'll need to enable Apache's **rewrite** module:
```
sudo a2enmod rewrite
```
You'll also need to tell the virtual host serving the site to allow requests to be overwritten. Edit the Apache configuration file for your virtual host:
```
sudo leafpad /etc/apache2/sites-available/000-default.conf
```
Add the following lines after line 1:
```
<Directory "/var/www/html">
    AllowOverride All
</Directory>
```
Ensure it's within the `<VirtualHost *:80>` block, like so:
```
<VirtualHost *:80>
    <Directory "/var/www/html">
        AllowOverride All
    </Directory>
    ...
```
Save the file and exit, then restart Apache:
```
sudo systemctl restart apache2
```
### What's next?
WordPress is very customizable. By clicking your site name in the WordPress banner at the top of the page (when you're logged in), you'll be taken to the Dashboard. From there, you can change the theme, add pages and posts, edit the menu, add plugins, and do lots more.
Here are some interesting things you can try on the Raspberry Pi's web server.
* Add pages and posts to your website
* Install different themes from the Appearance menu
* Customize your website's theme or create your own
* Use your web server to display useful information for people on your network
Don't forget, the Raspberry Pi is a Linux computer. You can also follow these instructions to install WordPress on a server running Debian or Ubuntu.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/setting-wordpress-raspberry-pi
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sitewide-search?search_api_views_fulltext=raspberry%20pi
[2]: https://en.wikipedia.org/wiki/Leafpad
[3]: http://wordpress.org/
[4]: https://wordpress.org/latest.tar.gz

View File

@ -0,0 +1,99 @@
推动 DevOps 变革的三个方面
======
推动大规模的组织变革是一个痛苦的过程。对于 DevOps 来说,尽管也有阵痛,但变革带来的价值则相当可观。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-inclusion-transformation-change_20180927.png?itok=2E-g10hJ)
避免痛苦是一种强大的动力。一些研究表明,[植物也会通过遭受疼痛的过程][1]以采取措施来保护自己。我们人类有时也会刻意让自己受苦——在剧烈运动之后,身体可能会发生酸痛,但我们仍然坚持运动。那是因为当人认为整个过程利大于弊时,几乎可以忍受任何事情。
推动大规模的组织变革的过程确实是痛苦的。有人可能会因难以改变价值观和行为而感到痛苦,有人可能会因难以带领团队而感到痛苦,也有人可能会因难以开展工作而感到痛苦。但就 DevOps 而言,我可以说这些痛苦都是值得的。
我也曾经关注过一个团队耗费大量时间优化技术流程的过程,在这个过程中,团队逐渐将流程进行自动化改造,并最终获得了成功。
![Improvements after DevOps transformation][3]
图片来源Lee Eason. CC BY-SA 4.0
这张图表充分表明了变革的价值。一家公司在我主导实行了 DevOps 转型之后60 多个团队每月提交了超过 900 个发布请求。这些工作量的原耗时高达每个月 350 天,而这么多的工作量对于任何公司来说都是不可忽视的。除此以外,他们每月的部署次数从 100 次增加到了 9000 次,高危 bug 减少了 24%,工程师们更轻松了,<ruby>净推荐值<rt>Net Promoter Score</rt></ruby>NPS也提高了而 NPS 提高反过来也让团队的 DevOps 转型更加顺利。正如 [Puppet 发布的 DevOps 报告][4]所预测的,用在技术流程改进上的投资可以在业务成果上明显地体现出来。
而 DevOps 主导者在推动变革时必须关注这三个方面:团队管理、团队文化和团队活力。
### 团队管理
组织架构越大,业务领导与一线员工之间的距离就会越大,当然发生误解的可能性也会越大。而且各种技术工具和实际应用都在以日新月异的速度变化,这就导致业务领导几乎不可能对 DevOps 或敏捷开发的转型方向有一个亲身的了解。
DevOps 主导者必须和管理层密切合作,在进行决策的时候给出相关的意见,以帮助他们做出正确的决策。
公司的管理层只是知道 DevOps 会对产品部署的方式进行改进,而并不了解其中的具体过程。当管理层发现你在和软件团队执行自动化部署失败时,就会想要了解这件事情的细节。如果管理层了解到进行部署的是软件团队而不是专门的发布管理团队,就可能会坚持使用传统的变更流程来保证业务的正常运作。你可能会失去团队的信任,团队也可能不愿意作出进一步的改变。
如果没有和管理层做好心理上的预期,一旦发生意外的生产事件,都会对你和管理层之间的信任造成难以消除的影响。所以,最好事先和管理层之间在各方面协调好,这会让你在后续的工作中避免很多麻烦。
对于和管理层之间的协调,这里有两条建议:
* 一是**重视所有规章制度**。如果管理层对合同、安全等各方面有任何疑问,你都可以向法务或安全负责人咨询,这样做可以避免犯下后果严重的错误。
* 二是**将管理层的重点关注的方面输出为量化指标**。举个例子,如果公司的目标是减少客户流失,而你调查得出计划外的停机是造成客户流失的主要原因,那么就可以让团队对故障的<ruby>平均检测时间<rt>Mean Time To Detection</rt></ruby>MTTD<ruby>平均解决时间<rt>Mean Time To Resolution</rt></ruby>MTTR实行重点优化。你可以使用这些关键指标来量化团队的工作成果而管理层对此也可以有一个直观的了解。
### 团队文化
DevOps 是一种专注于持续改进代码、构建、部署和操作流程的文化,而团队文化代表了团队的价值观和行为。从本质上说,团队文化是要塑造团队成员的行为方式,而这并不是一件容易的事。
我推荐一本叫做《[披着狼皮的 CIO][5]》的书。另外,研究心理学、阅读《[Drive][6]》、观看 Daniel Pink 的 [TED 演讲][7]、阅读《[千面英雄][8]》、了解每个人的心路历程,以上这些都是你推动公司技术变革所应该尝试去做的事情。
理性的人大多都按照自己的价值观工作,然而团队通常没有让每个人都能达成共识的明确价值观。因此,你需要明确团队目前的价值观,包括价值观的形成过程和价值观的目标导向。也不能将这些价值观强加到团队成员身上,只需要让团队成员在目前的条件下力所能及地做到最好就可以了。
同时需要向团队成员阐明,公司正在发生组织上的变化,团队的价值观也随之改变,最好也厘清整个过程中将会作出什么变化。例如,公司以往或许是由于资金有限,一直将节约成本的原则放在首位,在研发新产品的时候,基础架构团队不得不通过共享数据库集群或服务器,从而导致了服务之间的紧密耦合。然而随着时间的推移,这种做法会产生难以维护的混乱,即使是一个小小的变化也可能造成无法预料的后果。这就导致交付团队难以执行变更控制流程,进而令变更停滞不前。
如果这种状况持续多年,最终的结果将会是毫无创新、技术老旧、问题繁多以及产品品质低下,公司的发展到达了瓶颈,原本的价值观已经不再适用。所以,工作效率的优先级必须高于节约成本。
你必须强调团队的价值观。每当团队按照价值观取得了一定的工作进展,都应该对团队作出激励。在团队部署出现失败时,鼓励他们承担风险、继续学习,同时指导团队如何改进他们的工作并表示支持。长此下来,团队成员就会对你产生信任,并逐渐切合团队的价值观。
### 团队活力
你有没有在会议上听过类似这样的话?“在张三度假回来之前,我们无法对这件事情做出评估。他是唯一一个了解代码的人”,或者是“我们完成不了这项任务,它在网络上需要跨团队合作,而防火墙管理员刚好请病假了”,又或者是“张三最清楚这个系统最好,他说是怎么样,通常就是怎么样”。那么如果团队在处理工作时,谁才是主力?就是张三。而且也一直会是他。
我们一直都认为这就是软件开发的本质。但是如果我们不作出改变,这种循环就会一直保持下去。
熵的存在会让团队自发地变得混乱和缺乏活力,团队的成员和主导者都有责任控制这个熵并保持团队的活力。DevOps、敏捷开发、上云、代码重构这些行为都会令熵增加速这是因为转型让团队需要学习更多新技能和专业知识以开展新工作。
我们来看一个产品团队重构遗留代码的例子。像往常一样,他们在 AWS 上构建新的服务。而传统的系统则在数据中心部署,并由 IT 部门进行监控和备份。IT 部门会确保在基础架构的层面上满足应用的安全需求、进行灾难恢复测试、系统补丁、安装配置了入侵检测和防病毒代理,而且 IT 部门还保留了年度审计流程所需的变更控制记录。
产品团队经常会犯一个致命的错误,就是认为 IT 部门是需要突破的瓶颈。他们希望脱离已有的 IT 部门并使用公有云,但实际上是他们忽视了 IT 部门提供的关键服务。迁移到云上只是以不同的方式实现这些关键服务,因为 AWS 也是一个数据中心,团队即使使用 AWS 也需要完成 IT 运维任务。
实际上,产品团队在迁移到云时候也必须学习如何使用这些 IT 服务。因此,当产品团队开始重构遗留的代码并部署到云上时,也需要学习大量的技能才能正常运作。这些技能不会无师自通,必须自行学习或者聘用相关的人员,团队的主导者也必须积极进行管理。
在带领团队时,我找不到任何适合我的工具,因此我建立了 [Tekata.io][9] 这个项目。Tekata 免费而且容易使用。但相比起来,把注意力集中在人员和流程上更为重要:你需要不断学习,持续关注团队的弱项,因为它们会影响团队的交付能力,而修补这些弱项往往需要学习大量的新知识,这就需要团队成员之间有一个很好的协作。因此 76% 的年轻人都认为个人发展机会是公司文化[最重要的一环][10]。
### 效果就是最好的证明
DevOps 转型会改变团队的工作方式和文化,这需要得到管理层的支持和理解。同时,工作方式的改变意味着新技术的引入,所以在管理上也必须谨慎。但转型的最终结果是团队变得更高效、成员变得更积极、产品变得更优质,客户也变得更快乐。
免责声明:本文中的内容仅为 Lee Eason 的个人立场,不代表 Ipreo 或 IHS Markit。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/tales-devops-transformation
作者:[Lee Eason][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/leeeason
[b]: https://github.com/lujun9972
[1]: https://link.springer.com/article/10.1007%2Fs00442-014-2995-6
[2]: /file/411061
[3]: https://opensource.com/sites/default/files/uploads/devops-delays.png "Improvements after DevOps transformation"
[4]: https://puppet.com/resources/whitepaper/state-of-devops-report
[5]: https://www.gartner.com/en/publications/wolf-cio
[6]: https://en.wikipedia.org/wiki/Drive:_The_Surprising_Truth_About_What_Motivates_Us
[7]: https://www.ted.com/talks/dan_pink_on_motivation?language=en#t-2094
[8]: https://en.wikipedia.org/wiki/The_Hero_with_a_Thousand_Faces
[9]: https://tekata.io/
[10]: https://www.execu-search.com/~/media/Resources/pdf/2017_Hiring_Outlook_eBook
[11]: https://allthingsopen.org/talk/tales-from-a-devops-transformation/
[12]: https://allthingsopen.org/

View File

@ -0,0 +1,520 @@
2017 年 Linux 上最好的 9 个免费视频编辑软件
======
**概要:这里介绍 Linux 上几个最好的视频编辑器,介绍它们的特性、利与弊,以及如何在你的 Linux 发行版上安装它们。**
![Linux 上最好的视频编辑器][2]
我们曾经在短文中讨论过 [Linux 上最好的照片管理应用][3] 和 [Linux 上最好的代码编辑器][4]。今天我们将讨论 **Linux 上最好的视频编辑软件**。
当谈到免费视频编辑软件Windows Movie Maker 和 iMovie 是大部分人经常推荐的。
很不幸,上述两者在 GNU/Linux 上都不可用。但是不必担心,我们为你汇集了一个**最好的视频编辑器**清单。
## Linux 上最好的视频编辑器
接下来让我们一起看看这些最好的视频编辑软件。如果你觉得文章读起来太长,这里有一个快速摘要。你可以点击链接跳转到文章的相关章节:
| 视频编辑器 | 主要用途 | 类型 |
| --- | --- | --- |
| Kdenlive | 通用视频编辑 | 免费开源 |
| OpenShot | 通用视频编辑 | 免费开源 |
| Shotcut | 通用视频编辑 | 免费开源 |
| Flowblade | 通用视频编辑 | 免费开源 |
| Lightworks | 专业级视频编辑 | 免费增值 |
| Blender | 专业级三维编辑 | 免费开源 |
| Cinelerra | 通用视频编辑 | 免费开源 |
| DaVinci | 专业级视频处理编辑 | 免费增值 |
| VidCutter | 简单视频拆分合并 | 免费开源 |
### 1\. Kdenlive
![Kdenlive - Ubuntu 上的免费视频编辑器][5]
[Kdenlive][6] 是 [KDE][8] 上的一个免费且[开源][7]的视频编辑软件,支持双视频监控,多轨时间线,剪辑列表,支持自定义布局,基本效果,以及基本过渡。
它支持多种文件格式和多种摄像机、相机包括低分辨率摄像机Raw 和 AVI DV 编辑、Mpeg2、mpeg4 和 h264 AVCHD小型相机和便携式摄像机、高分辨率摄像机文件包括 HDV 和 AVCHD 摄像机),以及专业摄像机(包括 XDCAM-HD™ 流、IMX™ (D10) 流、DVCAM (D10)、DVCAM、DVCPRO™、DVCPRO50™ 流以及 DNxHD™ 流)。
如果你正寻找 Linux 上 iMovie 的替代品Kdenlive 会是你最好的选择。
#### Kdenlive 特性
* 多轨视频编辑
* 多种音视频格式支持
* 可配置的界面和快捷方式
* 使用文本或图像轻松创建切片
* 丰富的效果和过渡
* 音频和视频示波器可确保镜头绝对平衡
* 代理编辑
* 自动保存
* 广泛的硬件支持
* 关键帧效果
#### 优点
* 通用视频编辑器
* 对于那些熟悉视频编辑的人来说并不太复杂
#### 缺点
* 如果你想找的是极致简单的编辑软件,它可能还是令你有些困惑
* KDE 应用程序以臃肿而臭名昭著
#### 安装 Kdenlive
Kdenlive 适用于所有主要的 Linux 发行版。你只需在软件中心搜索即可。[Kdenlive 网站的下载部分][9]提供了各种软件包。
命令行爱好者可以通过在 Debian 和基于 Ubuntu 的 Linux 发行版中运行以下命令从终端安装它:
```
sudo apt install kdenlive
```
### 2\. OpenShot
![Openshot - ubuntu 上的免费视频编辑器][10]
[OpenShot][11] 是 Linux 上的另一个多用途视频编辑器。OpenShot 可以帮助你创建具有过渡和效果的视频。你还可以调整声音大小。当然,它支持大多数格式和编解码器。
你还可以将视频导出至 DVD上传至 YouTubeVimeoXbox 360 以及许多常见的视频格式。OpenShot 比 Kdenlive 要简单一些。因此如果你需要界面简单的视频编辑器OpenShot 是一个不错的选择。
它还有个简洁的[开始使用 Openshot][12] 文档。
#### OpenShot 特性
* 跨平台可在LinuxmacOS 和 Windows 上使用
* 支持多种视频,音频和图像格式
* 强大的基于曲线的关键帧动画
* 桌面集成与拖放支持
* 不受限制的音视频轨道或图层
* 可剪辑调整大小,缩放,修剪,捕捉,旋转和剪切
* 视频转换可实时预览
* 合成,图像层叠和水印
* 标题模板,标题创建,子标题
* 利用图像序列支持2D动画
* 3D 动画标题和效果
  * 支持 SVG 格式可创建和包含矢量标题和片尾字幕credits
* 滚动动态图片
* 帧精度(逐步浏览每一帧视频)
* 剪辑的时间映射和速度更改
* 音频混合和编辑
* 数字视频效果,包括亮度,伽玛,色调,灰度,色度键等
#### 优点
* 用于一般视频编辑需求的通用视频编辑器
* 可在 Windows 和 macOS 以及 Linux 上使用
#### 缺点
  * 软件用起来可能很简单,但如果你对视频编辑非常陌生,那么肯定需要一个曲折的学习过程
* 你可能仍然没有达到专业级电影制作编辑软件的水准
#### 安装 OpenShot
OpenShot 也可以在所有主流 Linux 发行版的软件仓库中使用。你只需在软件中心搜索即可。你也可以从[官方页面][13]中获取它。
在 Debian 和基于 Ubuntu 的 Linux 发行版中,我最喜欢运行以下命令来安装它:
```
sudo apt install openshot
```
### 3\. Shotcut
![Shotcut Linux 视频编辑器][14]
[Shotcut][15] 是 Linux 上的另一个编辑器,可以和 Kdenlive 与 OpenShot 归为同一联盟。虽然它确实与上面讨论的其他两个软件有类似的功能,但 Shotcut 更先进的地方是支持4K视频。
支持许多音频,视频格式,过渡和效果是 Shotcut 的众多功能中的一部分。它也支持外部监视器。
这里有一系列视频教程让你[轻松上手 Shotcut][16]。它也可在 Windows 和 macOS 上使用,因此你也可以在其他操作系统上学习。
#### Shotcut 特性
* 跨平台,可在 LinuxmacOS 和 Windows 上使用
* 支持各种视频,音频和图像格式
* 原生时间线编辑
* 混合并匹配项目中的分辨率和帧速率
* 音频滤波,混音和效果
* 视频转换和过滤
* 具有缩略图和波形的多轨时间轴
* 无限制撤消和重做播放列表编辑,包括历史记录视图
* 剪辑调整大小,缩放,修剪,捕捉,旋转和剪切
* 使用纹波选项修剪源剪辑播放器或时间轴
  * 支持在额外的系统显示器/监视器上进行外部监看
* 硬件支持
你可以在[这里][17]阅读它的更多特性。
#### 优点
* 用于常见视频编辑需求的通用视频编辑器
* 支持 4K 视频
* 可在 WindowsmacOS 以及 Linux 上使用
#### 缺点
* 功能太多降低了软件的易用性
#### 安装 Shotcut
Shotcut 以 [Snap][18] 格式提供。你可以在 Ubuntu 软件中心找到它。对于其他发行版,你可以从此[下载页面][19]获取可执行文件来安装。
### 4\. Flowblade
![Flowblade ubuntu 上的视频编辑器][20]
[Flowblade][21] 是 Linux 上的一个多轨非线性视频编辑器。与上面讨论的一样,这也是一个免费开源的软件。它具有时尚和现代化的用户界面。
它用 Python 编写,设计初衷是快速且准确。Flowblade 专注于在 Linux 和其他免费平台上提供最佳体验,所以它没有在 Windows 和 OS X 上运行的版本。不过 Linux 用户专享,其实感觉也不错。
你也可以查看这个不错的[文档][22]来帮助你使用它的所有功能。
#### Flowblade 特性
* 轻量级应用
* 为简单的任务提供简单的界面,如拆分,合并,覆盖等
* 大量的音视频效果和过滤器
* 支持[代理编辑][23]
* 支持拖拽
* 支持多种视频、音频和图像格式
* 批量渲染
* 水印
* 视频转换和过滤器
* 具有缩略图和波形的多轨时间轴
你可以在 [Flowblade 特性][24]里阅读关于它的更多信息。
#### 优点
* 轻量
* 适用于通用视频编辑
#### 缺点
* 不支持其他平台
#### 安装 Flowblade
Flowblade 应当在所有主流 Linux 发行版的软件仓库中都可以找到。你可以从软件中心安装它。也可以在[下载页面][25]查看更多信息。
另外,你可以在 Ubuntu 和基于 Ubuntu 的系统中使用一下命令安装 Flowblade
```
sudo apt install flowblade
```
### 5\. Lightworks
![Lightworks 运行在 ubuntu 16.04][26]
如果你在寻找一个具有更多特性的视频编辑器,这就是你想要的。[Lightworks][27] 是一个跨平台的专业视频编辑器,可以在 LinuxMac OS X 以及 Windows上使用。
它是一款屡获殊荣的专业[非线性编辑][28]NLE软件支持高达 4K 的分辨率以及 SD 和 HD 格式的视频。
Lightworks 可以在 Linux 上使用,然而它不开源。
Lightwokrs 有两个版本:
* Lightworks 免费版
* Lightworks 专业版
专业版有更多功能,比如支持更高的分辨率,支持 4K 和 蓝光视频等。
这个[页面][29]有广泛的可用文档。你也可以参考 [Lightworks 视频向导页][30]的视频。
#### Lightworks 特性
* 跨平台
* 简单直观的用户界面
* 简明的时间线编辑和修剪
  * 实时可用的音频和视频特效FX
  * 提供精彩的免版税音频和视频内容
* 适用于 4K 的 Lo-Res 代理工作流程
* 支持导出 YouTube/VimeoSD/HD视频最高可达4K
* 支持拖拽
* 各种音频和视频效果和滤镜
#### 优点
* 专业,功能丰富的视频编辑器
#### 缺点
* 免费版有使用限制
#### 安装 Lightworks
Lightworks 为 Debian 和基于 Ubuntu 的 Linux 提供了 DEB 安装包,为基于 Fedora 的 Linux 发行版提供了RPM 安装包。你可以在[下载页面][31]找到安装包。
### 6\. Blender
![Blender 运行在 Ubuntu 16.04][32]
[Blender][33] 是一个专业的,工业级的开源跨平台视频编辑器。它在制作 3D 作品的工具当中较为流行。Blender 已被用于制作多部好莱坞电影,包括蜘蛛侠系列。
虽然最初设计用于制作 3D 模型,但它也具有多种格式视频的编辑功能。
#### Blender 特性
* 实时预览,亮度波形,色度矢量显示和直方图显示
* 音频混合,同步,擦洗和波形可视化
  * 最多 32 个轨道,用于添加视频、图像、音频、场景、遮罩和效果
* 速度控制,调整图层,过渡,关键帧,过滤器等
你可以在[这里][34]阅读更多相关特性。
#### 优点
* 跨平台
* 专业级视频编辑
#### 缺点
* 复杂
* 主要用于制作 3D 动画,不专门用于常规视频编辑
#### 安装 Blender
Blender 的最新版本可以从[下载页面][35]下载。
### 7\. Cinelerra
![Cinelerra Linux 上的视频编辑器][36]
[Cinelerra][37] 从 1998 年发布以来已被下载超过 500 万次。它是 2003 年第一个在 64 位系统上提供非线性编辑的视频编辑器。当时它是 Linux 用户的首选视频编辑器,但随后一些开发人员放弃了此项目,它也随之失去了光彩。
好消息是它正回到正轨并且良好地再次发展。
如果你想了解关于 Cinelerra 项目是如何开始的,这里有些[有趣的背景故事][38]。
#### Cinelerra 特性
* 非线性编辑
* 支持 HD 视频
* 内置框架渲染器
* 各种视频效果
* 不受限制的图层数量
* 拆分窗格编辑
#### 优点
* 通用视频编辑器
#### 缺点
* 不适用于新手
* 没有可用的安装包
#### 安装 Cinelerra
你可以从 [SourceForge][39] 下载源码。更多相关信息请看[下载页面][40]。
### 8\. DaVinci Resolve
![DaVinci Resolve 视频编辑器][41]
如果你想要好莱坞级别的视频编辑器,那就用好莱坞正在使用的专业工具。来自 Blackmagic 公司的 [DaVinci Resolve][42] 就是专业人士用于编辑电影和电视节目的专业工具。
DaVinci Resolve 不是常规的视频编辑器。它是一个成熟的编辑工具,在这一个应用程序中提供编辑,色彩校正和专业音频后期制作功能。
DaVinci Resolve 不开源。类似于 LightWorks它也为 Linux 提供一个免费版本。专业版售价是 $300。
#### DaVinci Resolve 特性
* 高性能播放引擎
* 支持所有类型的编辑类型,如覆盖,插入,波纹覆盖,替换,适合填充,末尾追加
* 高级修剪
* 音频叠加
* Multicam Editing 可实现实时编辑来自多个摄像机的镜头
* 过渡和过滤效果
* 速度效果
* 时间轴曲线编辑器
* VFX 的非线性编辑
#### 优点
* 跨平台
* 专业级视频编辑器
#### 缺点
* 不适用于通用视频编辑
* 不开源
* 免费版本中有些功能无法使用
#### 安装 DaVinci Resolve
你可以从[这个页面][42]下载 DaVinci Resolve。你需要注册哪怕仅仅下载免费版。
### 9\. VidCutter
![VidCutter Linux 上的视频编辑器][43]
不像这篇文章讨论的其他视频编辑器,[VidCutter][44] 非常简单。除了分割和合并视频之外,它没有其他太多功能。但有时你正好需要 VidCutter 提供的这些功能。
#### VidCutter 特性
  * 适用于 Linux、Windows 和 macOS 的跨平台应用程序
* 支持绝大多数常见视频格式例如AVIMP4MPEG 1/2WMVMP3MOV3GPFLV 等等
* 界面简单
* 修剪和合并视频,仅此而已
#### 优点
* 跨平台
* 很适合做简单的视频分割和合并
#### 缺点
* 不适合用于通用视频编辑
* 经常崩溃
#### 安装 VidCutter
如果你使用的是基于 Ubuntu 的 Linux 发行版,你可以使用这个官方 PPA译者注PPA个人软件包档案Personal Package Archives
```
sudo add-apt-repository ppa:ozmartian/apps
sudo apt-get update
sudo apt-get install vidcutter
```
Arch Linux 用户可以轻松的使用 AUR 安装它。对于其他 Linux 发行版的用户,你可以从这个 [Github 页面][45]上查找安装文件。
### 哪个是 Linux 上最好的视频编辑软件?
文章里提到的一些视频编辑器使用 [FFmpeg][46]。你也可以自己使用 FFmpeg。它只是一个命令行工具所以我没有在上文的列表中提到但一句也不提又不公平。
如果你需要一个视频编辑器来简单的剪切和拼接视频,请使用 VidCutter。
如果你需要的不止这些,**OpenShot** 或者 **Kdenlive** 是不错的选择。他们有规格标准的系统,适用于初学者。
如果你拥有一台高端计算机并且需要高级功能,可以使用 **Lightworks** 或者 **DaVinci Resolve**。如果你在寻找用于制作 3D 作品的更高级工具,**Blender** 可以满足你的需求。
这就是关于 **Linux 上最好的视频编辑软件**的全部内容它们适用于 Ubuntu、Linux Mint、Elementary 以及其他 Linux 发行版。欢迎向我们分享你最喜欢的视频编辑器。
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-video-editing-software-linux/
作者:[It'S Foss Team][a]
译者:[fuowang](https://github.com/fuowang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/itsfoss/
[2]:https://itsfoss.com/wp-content/uploads/2016/06/best-Video-editors-Linux-800x450.png
[3]:https://itsfoss.com/linux-photo-management-software/
[4]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[5]:https://itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg
[6]:https://kdenlive.org/
[7]:https://itsfoss.com/tag/open-source/
[8]:https://www.kde.org/
[9]:https://kdenlive.org/download/
[10]:https://itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg
[11]:http://www.openshot.org/
[12]:http://www.openshot.org/user-guide/
[13]:http://www.openshot.org/download/
[14]:https://itsfoss.com/wp-content/uploads/2016/06/shotcut-video-editor-linux-800x503.jpg
[15]:https://www.shotcut.org/
[16]:https://www.shotcut.org/tutorials/
[17]:https://www.shotcut.org/features/
[18]:https://itsfoss.com/use-snap-packages-ubuntu-16-04/
[19]:https://www.shotcut.org/download/
[20]:https://itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg
[21]:http://jliljebl.github.io/flowblade/
[22]:https://jliljebl.github.io/flowblade/webhelp/help.html
[23]:https://jliljebl.github.io/flowblade/webhelp/proxy.html
[24]:https://jliljebl.github.io/flowblade/features.html
[25]:https://jliljebl.github.io/flowblade/download.html
[26]:https://itsfoss.com/wp-content/uploads/2016/06/lightworks-running-on-ubuntu-16.04.jpg
[27]:https://www.lwks.com/
[28]:https://en.wikipedia.org/wiki/Non-linear_editing_system
[29]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206&tab=4
[30]:https://www.lwks.com/videotutorials
[31]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206&tab=1
[32]:https://itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg
[33]:https://www.blender.org/
[34]:https://www.blender.org/features/
[35]:https://www.blender.org/download/
[36]:https://itsfoss.com/wp-content/uploads/2016/06/cinelerra-screenshot.jpeg
[37]:http://cinelerra.org/
[38]:http://cinelerra.org/our-story
[39]:https://sourceforge.net/projects/heroines/files/cinelerra-6-src.tar.xz/download
[40]:http://cinelerra.org/download
[41]:https://itsfoss.com/wp-content/uploads/2016/06/davinci-resolve-vdeo-editor-800x450.jpg
[42]:https://www.blackmagicdesign.com/products/davinciresolve/
[43]:https://itsfoss.com/wp-content/uploads/2016/06/vidcutter-screenshot-800x585.jpeg
[44]:https://itsfoss.com/vidcutter-video-editor-linux/
[45]:https://github.com/ozmartian/vidcutter/releases
[46]:https://www.ffmpeg.org/

View File

@ -0,0 +1,250 @@
在 Linux 上使用 Lutris 管理你的游戏
======
![](https://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-720x340.jpg)
让我们用游戏开始 2018 的第一天吧!今天我们要讨论的是 **Lutris**,一个 Linux 上的开源游戏平台。你可以使用 Lutris 安装、移除、配置、启动和管理你的游戏。它可以在一个界面里帮你管理你的 Linux 游戏、Windows 游戏、仿真控制台游戏和浏览器游戏。它还包含社区编写的安装脚本,使得游戏的安装过程更加简单。
Lutris 自动安装(或者只需单击即可安装)了超过 20 个模拟器,覆盖了从七十年代到现在的大多数游戏系统。目前支持的游戏系统如下:
* Native Linux
* Windows
* Steam (Linux and Windows)
* MS-DOS
* 街机
* Amiga 电脑
* Atari 8 和 16 位计算机和控制器
* 浏览器 (Flash 或者 HTML5 游戏)
* Commmodore 8 位计算机
* 基于 SCUMM 的游戏和其他点击冒险游戏
* Magnavox Odyssey², Videopac+
* Mattel Intellivision
* NEC PC-Engine Turbographx 16, Supergraphx, PC-FX
* Nintendo NES, SNES, Game Boy, Game Boy Advance, DS
* Game Cube and Wii
* Sega Master Sytem, Game Gear, Genesis, Dreamcast
* SNK Neo Geo, Neo Geo Pocket
* Sony PlayStation
* Sony PlayStation 2
* Sony PSP
* 像 Zork 这样的 Z-Machine 游戏
* 还有更多
### 安装 Lutris
就像 Steam 一样Lutris 包含两部分:网站和客户端程序。在网站上你可以浏览可用的游戏,把最喜欢的游戏添加到个人库,以及使用安装链接安装它们。
首先,我们还是来安装客户端。它目前支持 Arch Linux、Debian、Fedroa、Gentoo、openSUSE 和 Ubuntu。
对于 Arch Linux 和它的衍生版本,像是 Antergos, Manjaro Linux都可以在 [**AUR**][1] 中找到。因此,你可以使用 AUR 帮助程序安装它。
使用 [**Pacaur**][2]:
```
pacaur -S lutris
```
使用 **[Packer][3]** :
```
packer -S lutris
```
使用 [**Yaourt**][4]:
```
yaourt -S lutris
```
使用 [**Yay**][5]:
```
yay -S lutris
```
**Debian:**
**Debian 9.0** 上以 **root** 身份运行以下命令:
```
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_9.0/ /' > /etc/apt/sources.list.d/lutris.list
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_9.0/Release.key -O Release.key
apt-key add - < Release.key
apt-get update
apt-get install lutris
```
**Debian 8.0** 上以 **root** 身份运行以下命令:
```
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_8.0/ /' > /etc/apt/sources.list.d/lutris.list
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_8.0/Release.key -O Release.key
apt-key add - < Release.key
apt-get update
apt-get install lutris
```
**Fedora 27** 上以 **root** 身份运行以下命令:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_27/home:strycore.repo
dnf install lutris
```
**Fedora 26** 上以 **root** 身份运行以下命令:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_26/home:strycore.repo
dnf install lutris
```
**openSUSE Tumbleweed** 上以 **root** 身份运行以下命令:
```
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Tumbleweed/home:strycore.repo
zypper refresh
zypper install lutris
```
**openSUSE Leap 42.3** 上以 **root** 身份运行以下命令:
```
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Leap_42.3/home:strycore.repo
zypper refresh
zypper install lutris
```
**Ubuntu 17.10**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.10/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.10/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
**Ubuntu 17.04**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.04/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.04/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
**Ubuntu 16.10**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.10/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.10/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
**Ubuntu 16.04**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.04/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.04/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```
对于其他平台,请参考 [**Lutris 下载链接**][6]。
### 使用 Lutris 管理你的游戏
安装完成后,从菜单或者应用启动器里打开 Lutris。首次启动时Lutris 的默认界面像下面这样:
![][8]
**登录你的 Lutris.net 账号**
为了能同步你个人库中的游戏,下一步你需要在客户端中登录你的 Lutris.net 账号。如果你没有,先[**注册一个新的账号**][9]。然后点击 **“连接到你的 Lutris.net 账号同步你的库”**,连接到 Lutris 客户端。
输入你的账号信息然后点击 **继续**
![][10]
现在你已经连接到你的 Lutris.net 账号了。
![][11]

**浏览游戏**
点击工具栏里的浏览图标(游戏控制器图标)可以搜索任何游戏。它会自动跳转到 Lutris 网站的游戏页面。你可以按字母顺序查看所有可用的游戏。Lutris 现在已经有了很多游戏,而且还有更多的游戏在不断添加进来。
![][12]
任选一个游戏,添加到你的库中。
![][13]
然后返回到你的 Lutris 客户端,点击 **菜单 -> Lutris -> 同步库**。现在你可以在本地的 Lutris 客户端中看到库中的所有游戏了。
![][14]
如果你没有看到游戏,只需要重启一次。
**安装游戏**
安装游戏,只需要点击游戏,然后点击 **安装** 按钮。例如,我想在我的系统上安装 [**2048**][15],就像你在下面的截图中看到的,它会要求我选择一个版本去安装。因为它只有一个版本(即在线版),它就会自动选择这个版本。点击 **继续**
![][16]

点击“安装”:
![][17]
安装完成之后,你可以启动新安装的游戏或是关闭这个窗口,继续从你的库中安装其他游戏。
**导入 Steam 库**
你也可以导入你的 Steam 库。在你的头像处点击 **“通过 Steam 登录”** 按钮。接下来你将被重定向到 Steam输入你的账号信息。填写正确后你的 Steam 账号将被连接到 Lutris 账号。请注意,为了同步库中的游戏,这里你的 Steam 账号将被设为公开。你可以在同步完成之后将其重新设为私密状态。
**手动添加游戏**
Lutris 提供了手动添加游戏的选项。要这样做,在工具栏中点击 + 号即可。
![][18]
在下一个窗口,输入游戏名,并在游戏信息栏选择一个运行器。运行器是指 Linux 上类似 Wine、Steam 之类的程序,它们可以帮助你启动这个游戏。你可以从 菜单 -> 管理运行器 中安装运行器。
![][19]
然后在下一栏中选择可执行文件或者 ISO。最后点击保存。有一个好消息是你可以添加一个游戏的多个版本。
**移除游戏**
要移除任何已安装的游戏,只需在 Lutris 客户端的本地库中点击对应的游戏,然后选择 **移除** 再点击 **应用**。
![][20]
Lutris 就像 Steam 一样:从网站向你的库中添加游戏,然后在客户端中安装它们。
各位,这就是今天所有的内容了。我们将会在今年发表更多好的和有用的文章。敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/manage-games-using-lutris-linux/
作者:[SK][a]
译者:[dianbanjiu](https://github.com/dianbanjiu)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://aur.archlinux.org/packages/lutris/
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
[4]:https://www.ostechnix.com/install-yaourt-arch-linux/
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]:https://lutris.net/downloads/
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-1.png ()
[9]:https://lutris.net/user/register/
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-2.png ()
[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-3.png ()
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-15-1.png ()
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-16.png ()
[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-6.png ()
[15]:https://www.ostechnix.com/let-us-play-2048-game-terminal/
[16]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-12.png ()
[17]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-13.png ()
[18]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-18-1.png ()
[19]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-19.png ()
[20]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-14-1.png ()

View File

@ -0,0 +1,143 @@
如何移除或禁用 Ubuntu Dock
======
![](https://1.bp.blogspot.com/-pClnjEJfPQc/W21nHNzU2DI/AAAAAAAABV0/HGXuQOYGzokyrGYQtRFeF_hT3_3BKHupQCLcBGAs/s640/ubuntu-dock.png)
**如果你想用其它 dock例如 Plank dock或面板来替换 Ubuntu 18.04 中的 Dock或者你想要移除或禁用 Ubuntu Dock本文会告诉你如何做。**
Ubuntu Dock 是屏幕左侧的侧边栏,可用于固定应用程序或访问已安装的应用程序。使用默认的 Ubuntu 会话时,[无法][1]使用 Gnome Tweaks 禁用它。不过如果你需要,还是有几种方法可以摆脱它。下面我将列出 4 种移除或禁用 Ubuntu Dock 的方法、每种方法的缺点(如果有的话),以及如何撤销每种方法的更改。本文还会介绍在没有 Ubuntu Dock 的情况下访问多任务视图Activities Overview和已安装应用程序列表的其它方法。
### 如何在没有 Ubuntu Dock 的情况下访问多任务视图
取决于你打算用哪个 Dock 来替代 Ubuntu Dock比如 Plank Dock移除它之后你可能就没有入口去访问多任务视图或已安装应用程序列表了原本可以通过点击 Ubuntu Dock 底部的“显示应用程序”按钮访问)。
当然,如果你安装的是 Dash to Panel 扩展来替代 Ubuntu Dock就不存在这个问题因为 Dash to Panel 本身提供了访问多任务视图和已安装应用程序的按钮。
如果你用来替代 Ubuntu Dock 的 Dock 没有提供访问多任务视图的入口那么你可以启用“热角”Activities Overview Hot Corner选项只需将鼠标移动到屏幕左上角即可打开多任务视图。另一种访问已安装应用程序列表的方法是使用快捷键 `Super + A`
如果要启用 Activities Overview hot corner使用以下命令
```
gsettings set org.gnome.shell enable-hot-corners true
```
如果以后要撤销此操作并禁用 hot corners那么你需要使用以下命令
```
gsettings set org.gnome.shell enable-hot-corners false
```
你可以使用 Gnome Tweaks 应用程序(该选项位于 Gnome Tweaks 的 `Top Bar` 部分)启用或禁用 Activities Overview Hot Corner 选项,可以使用以下命令进行安装:
```
sudo apt install gnome-tweaks
```
### 如何移除或禁用 Ubuntu Dock
下面你将找到 4 种摆脱 Ubuntu Dock 的方法,环境在 Ubuntu 18.04 下。
**选项 1移除 Gnome Shell Ubuntu Dock 包。**
摆脱 Ubuntu Dock 的最简单方法就是删除包。
这会从你的系统中完全移除 Ubuntu Dock 扩展,但同时也会移除 `ubuntu-desktop` 元包。移除 `ubuntu-desktop` 元包不会马上带来问题,因为它本身没有任何实际内容,只是依赖于组成 Ubuntu 桌面的大量软件包;这些依赖既不会被删除,也不会被破坏。问题在于,如果你以后想升级到新的 Ubuntu 版本,那么任何新加入 `ubuntu-desktop` 的依赖项都不会被自动安装。
为了解决这个问题,你可以在升级到较新的 Ubuntu 版本之前安装 `ubuntu-desktop` 元包(例如,如果你想从 Ubuntu 18.04 升级到 18.10)。
如果你对此没有意见,并且想要从系统中删除 Ubuntu Dock 扩展包,使用以下命令:
```
sudo apt remove gnome-shell-extension-ubuntu-dock
```
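顺便一提,如前所述,移除这个扩展会连带移除 `ubuntu-desktop` 元包。如果你想确认这两个包当前的安装状态,可以用下面的命令查看(这只是一个查询示例,不会做任何更改):
```
dpkg -l gnome-shell-extension-ubuntu-dock ubuntu-desktop
```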
如果以后要撤消更改,只需使用以下命令安装扩展:
```
sudo apt install gnome-shell-extension-ubuntu-dock
```
或者重新安装 `ubuntu-desktop` 元包(这将会安装你可能已删除的任何 ubuntu-desktop 依赖项,包括 Ubuntu Dock你可以使用以下命令
```
sudo apt install ubuntu-desktop
```
**选项 2安装并使用 vanilla Gnome 会话,而不是默认的 Ubuntu 会话。**
摆脱 Ubuntu Dock 的另一种方法是安装和使用 vanilla Gnome 会话。安装 vanilla Gnome 会话还将安装此会话所依赖的其它软件包,如 Gnome 文档,地图,音乐,联系人,照片,跟踪器等。
通过安装 vanilla Gnome 会话,你还将获得默认 Gnome GDM 登录和锁定屏幕主题,而不是 Ubuntu 默认值,另外还有 Adwaita Gtk 主题和图标。你可以使用 Gnome Tweaks 应用程序轻松更改 Gtk 和图标主题。
此外,默认情况下将禁用 AppIndicators 扩展(因此使用 AppIndicators 托盘的应用程序不会显示在顶部面板上),但你可以使用 Gnome Tweaks 启用此功能(在扩展中,启用 Ubuntu appindicators 扩展)。
同样,你也可以从 vanilla Gnome 会话启用或禁用 Ubuntu Dock这在 Ubuntu 会话中是不可能的(使用 Ubuntu 会话时无法从 Gnome Tweaks 禁用 Ubuntu Dock
如果你不想安装 vanilla Gnome 会话所需的这些额外软件包,那么这个移除 Ubuntu Dock 的选项就不适合你,请查看其它选项。
如果你对此没有意见,以下是你需要做的事情。要在 Ubuntu 中安装普通的 Gnome 会话,使用以下命令:
```
sudo apt install vanilla-gnome-desktop
```
安装完成后,重启系统。在登录屏幕上,单击用户名,单击 `Sign in` 按钮旁边的齿轮图标,然后选择 `GNOME` 而不是 `Ubuntu`,之后继续登录。
![](https://4.bp.blogspot.com/-mc-6H2MZ0VY/W21i_PIJ3pI/AAAAAAAABVo/96UvmRM1QJsbS2so1K8teMhsu7SdYh9zwCLcBGAs/s640/vanilla-gnome-session-ubuntu-login-screen.png)
如果要撤销此操作并移除 vanilla Gnome 会话,可以使用以下命令清除 vanilla Gnome 软件包,然后删除它安装的依赖项(第二条命令):
```
sudo apt purge vanilla-gnome-desktop
sudo apt autoremove
```
然后重新启动,并以相同的方式从 GDM 登录屏幕中选择 Ubuntu。
**选项 3从桌面上永久隐藏 Ubuntu Dock而不是将其移除。**
如果你希望永久隐藏 Ubuntu Dock不让它显示在桌面上但不移除它或使用 vanilla Gnome 会话,你可以使用 Dconf 编辑器轻松完成此操作。这样做的缺点是 Ubuntu Dock 仍然会使用一些系统资源,即使你没有在桌面上使用它,但你也可以轻松恢复它而无需安装或移除任何包。
Ubuntu Dock 只是在桌面上被隐藏当你进入多任务视图Activities你仍然可以看到并在那里使用 Ubuntu Dock。
要永久隐藏 Ubuntu Dock使用 Dconf 编辑器导航到 `/org/gnome/shell/extensions/dash-to-dock`,并禁用以下选项(将它们设置为 false`autohide`、`dock-fixed``intellihide`
如果你愿意,可以从命令行实现此目的,运行以下命令:
```
gsettings set org.gnome.shell.extensions.dash-to-dock autohide false
gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed false
gsettings set org.gnome.shell.extensions.dash-to-dock intellihide false
```
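如果你想确认这三个键的当前取值,可以用 `gsettings get` 逐一查询(这只是一个查询示例):
```
for key in autohide dock-fixed intellihide; do
    gsettings get org.gnome.shell.extensions.dash-to-dock $key
done
```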
如果你改变主意并想撤销此操作,你可以使用 Dconf 编辑器在 `/org/gnome/shell/extensions/dash-to-dock` 中重新启用 `autohide`、`dock-fixed``intellihide`(将它们设置为 true或者使用以下这些命令
```
gsettings set org.gnome.shell.extensions.dash-to-dock autohide true
gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed true
gsettings set org.gnome.shell.extensions.dash-to-dock intellihide true
```
**选项 4使用 Dash to Panel 扩展。**
[Dash to Panel][2] 是 Gnome Shell 的一个高度可配置的面板,是 Ubuntu Dock 或 Dash to Dock 的一个很好的替代品Ubuntu Dock 是从 Dash to Dock 克隆而来的。安装并启用 Dash to Panel 扩展会自动禁用 Ubuntu Dock因此你无需再执行其它任何操作。
你可以从 [extensions.gnome.org][3] 来安装 Dash to Panel。
如果你改变主意并希望重新使用 Ubuntu Dock那么你可以使用 Gnome Tweaks 应用程序禁用 Dash to Panel或者访问 https://extensions.gnome.org/local/ ,点击 Dash to Panel 旁边的 X 按钮将其完全移除。
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/08/how-to-remove-or-disable-ubuntu-dock.html
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://bugs.launchpad.net/ubuntu/+source/gnome-tweak-tool/+bug/1713020
[2]:https://www.linuxuprising.com/2018/05/gnome-shell-dash-to-panel-v14-brings.html
[3]:https://extensions.gnome.org/extension/1160/dash-to-panel/

View File

@ -0,0 +1,524 @@
实验 3用户环境
======
### 实验 3用户环境
#### 简介
在本实验中,你将要实现保护用户模式环境(即:进程)运行所需的基本内核功能。你将增强 JOS 内核,配置数据结构以跟踪用户环境、创建一个用户环境、将程序镜像加载到其中,并让它开始运行。你还要写出一些 JOS 内核函数,用来处理用户环境产生的系统调用,以及处理由它引发的各种异常。
**注意:** 在本实验中,术语**_“环境”_** 和**_“进程”_** 是可互换的 —— 它们都表示同一个抽象概念,那就是允许你去运行的程序。我在介绍中使用术语**“环境”**而不是使用传统术语**“进程”**的目的是为了强调一点,那就是 JOS 的环境和 UNIX 的进程提供了不同的接口,并且它们的语义也不相同。
##### 预备知识
使用 Git 去提交你自实验 2 以后的更改(如果有的话),获取课程仓库的最新版本,以及创建一个命名为 `lab3` 的本地分支,指向到我们的 lab3 分支上 `origin/lab3`
```
athena% cd ~/6.828/lab
athena% add git
athena% git commit -am 'changes to lab2 after handin'
Created commit 734fab7: changes to lab2 after handin
4 files changed, 42 insertions(+), 9 deletions(-)
athena% git pull
Already up-to-date.
athena% git checkout -b lab3 origin/lab3
Branch lab3 set up to track remote branch refs/remotes/origin/lab3.
Switched to a new branch "lab3"
athena% git merge lab2
Merge made by recursive.
kern/pmap.c | 42 +++++++++++++++++++
1 files changed, 42 insertions(+), 0 deletions(-)
athena%
```
实验 3 包含一些你将探索的新源文件:
```c
inc/ env.h Public definitions for user-mode environments
trap.h Public definitions for trap handling
syscall.h Public definitions for system calls from user environments to the kernel
lib.h Public definitions for the user-mode support library
kern/ env.h Kernel-private definitions for user-mode environments
env.c Kernel code implementing user-mode environments
trap.h Kernel-private trap handling definitions
trap.c Trap handling code
trapentry.S Assembly-language trap handler entry-points
syscall.h Kernel-private definitions for system call handling
syscall.c System call implementation code
lib/ Makefrag Makefile fragment to build user-mode library, obj/lib/libjos.a
entry.S Assembly-language entry-point for user environments
libmain.c User-mode library setup code called from entry.S
syscall.c User-mode system call stub functions
console.c User-mode implementations of putchar and getchar, providing console I/O
exit.c User-mode implementation of exit
panic.c User-mode implementation of panic
user/ * Various test programs to check kernel lab 3 code
```
另外,一些在实验 2 中的源文件在实验 3 中将被修改。如果想去查看有什么更改,可以运行:
```
$ git diff lab2
```
你也可以另外去看一下 [实验工具指南][1],它包含了与本实验有关的调试用户代码方面的信息。
##### 实验要求
本实验分为两部分Part A 和 Part B。Part A 的截止时间是本实验发布后一周;你需要在截止时间之前提交你的更改和作业,并确保你的代码通过了 Part A 的所有检查(如果你的代码尚未通过 Part B 的检查也可以提交);你只需要在第二周 Part B 的截止时间之前通过 Part B 的检查即可。
与实验 2 一样,你需要完成实验中描述的所有常规练习,以及至少一个挑战问题(是针对整个实验,而不是每个部分)。写出实验中所提问题的简要答案,以及一到两个段落的关于你如何解决所选挑战问题的描述,将它们放在一个名为 `answers-lab3.txt` 的文件中,并把这个文件放在你的 `lab` 目录的根目录下。(如果你完成了多个挑战问题,你只需要在文件中描述其中一个即可。)不要忘记使用 `git add answers-lab3.txt` 提交这个文件。
##### 行内汇编语言
在本实验中,你可能会发现 GCC 的行内汇编语言特性很有用,虽然不使用它也可以完成实验。但至少你需要能看懂这些行内汇编语言片段,它们("`asm`" 语句)已经存在于提供给你的源代码中。你可以在课程 [参考资料][2] 的页面上找到 GCC 行内汇编语言的有关信息。
#### Part A用户环境和异常处理
新文件 `inc/env.h` 中包含了在 JOS 中关于用户环境的基本定义。现在就去阅读它。内核使用数据结构 `Env` 去保持对每个用户环境的跟踪。在本实验的开始,你将只创建一个环境,但你需要去设计 JOS 内核支持多环境;实验 4 将带来这个高级特性,允许用户环境去 `fork` 其它环境。
正如你在 `kern/env.c` 中所看到的,内核维护了与环境相关的三个全局变量:
```
struct Env *envs = NULL; // All environments
struct Env *curenv = NULL; // The current env
static struct Env *env_free_list; // Free environment list
```
一旦 JOS 启动并运行,`envs` 指针就指向一个 `Env` 结构的数组它代表系统中全部的环境。在我们的设计中JOS 内核最多支持 `NENV` 个同时处于活动状态的环境,虽然在一般情况下,任何给定时刻正在运行的环境要少得多。(`NENV` 是在 `inc/env.h` 中用 `#define` 定义的一个常量)一旦分配完成,`envs` 数组中将为 `NENV` 个可能的环境各保存一个 `Env` 结构实例。
JOS 内核把所有处于非活动状态的 `Env` 结构保存在 `env_free_list` 链表上。这样的设计使得环境的分配和回收很容易,因为这只不过是在空闲列表上添加或删除元素的问题而已。
内核使用符号 `curenv` 来保持对任意给定时刻的 _当前正在运行的环境_ 进行跟踪。在系统引导期间,在第一个环境运行之前,`curenv` 被初始化为 `NULL`
##### 环境状态
数据结构 `Env` 被定义在文件 `inc/env.h` 中,内容如下:(在后面的实验中将添加更多的字段):
```c
struct Env {
struct Trapframe env_tf; // Saved registers
struct Env *env_link; // Next free Env
envid_t env_id; // Unique environment identifier
envid_t env_parent_id; // env_id of this env's parent
enum EnvType env_type; // Indicates special system environments
unsigned env_status; // Status of the environment
uint32_t env_runs; // Number of times environment has run
// Address space
pde_t *env_pgdir; // Kernel virtual address of page dir
};
```
以下是数据结构 `Env` 中的字段简介:
* **env_tf**
这个结构定义在 `inc/trap.h` 中,它用于在那个环境不运行时保持它保存在寄存器中的值,即:当内核或一个不同的环境在运行时。当从用户模式切换到内核模式时,内核将保存这些东西,以便于那个环境能够在稍后重新运行时回到中断运行的地方。
* **env_link**
这是一个链接,它链接到在 `env_free_list` 上的下一个 `Env` 上。`env_free_list` 指向到列表上第一个空闲的环境。
* **env_id**
内核在数据结构 `Env` 中保存了一个唯一标识当前环境的值(即:使用数组 `envs` 中的特定槽位)。在一个用户环境终止之后,内核可能给另外的环境重新分配相同的数据结构 `Env` —— 但是新的环境将有一个与已终止的旧的环境不同的 `env_id`,即便是新的环境在数组 `envs` 中复用了同一个槽位。
* **env_parent_id**
内核使用它来保存创建这个环境的父级环境的 `env_id`。通过这种方式,环境就可以形成一个“家族树”,这对于做出“哪个环境可以对谁做什么”这样的安全决策非常有用。
* **env_type**
它用于去区分特定的环境。对于大多数环境,它将是 `ENV_TYPE_USER` 的。在稍后的实验中,针对特定的系统服务环境,我们将引入更多的几种类型。
* **env_status**
这个变量持有以下几个值之一:
* `ENV_FREE`
表示那个 `Env` 结构是非活动的,并且因此它还在 `env_free_list` 上。
* `ENV_RUNNABLE`:
表示那个 `Env` 结构所代表的环境正等待被调度到处理器上去运行。
* `ENV_RUNNING`:
表示那个 `Env` 结构所代表的环境当前正在运行中。
* `ENV_NOT_RUNNABLE`:
表示那个 `Env` 结构所代表的是一个当前活动的环境但它目前还没有准备好运行例如它正在等待来自其它环境的进程间通讯IPC
* `ENV_DYING`
表示那个 `Env` 结构所表示的是一个僵尸环境。僵尸环境将在下一次陷入内核时被释放。在实验 4 之前我们不会使用这个标志。
* **env_pgdir**
这个变量持有这个环境的内核虚拟地址的页目录。
就像一个 Unix 进程一样,一个 JOS 环境结合了“线程”和“地址空间”的概念。线程主要由保存的寄存器来定义(`env_tf` 字段),而地址空间由 `env_pgdir` 所指向的页目录和页表来定义。为了运行一个环境,内核必须使用保存的寄存器值和相应的地址空间去设置 CPU。
我们的 `struct Env` 与 xv6 中的 `struct proc` 类似。它们都在一个 `Trapframe` 结构中持有环境(即进程)的用户模式寄存器状态。在 JOS 中,单个的环境并不能像 xv6 中的进程那样拥有它们自己的内核栈。在这里,内核中任意时间只能有一个 JOS 环境处于活动中因此JOS 仅需要一个单个的内核栈。
##### 为环境分配数组
在实验 2 的 `mem_init()` 中,你为数组 `pages[]` 分配了内存,内核用这个表来跟踪每个页面是否被分配。你现在需要进一步修改 `mem_init()`,分配一个类似的数组,这个数组由 `Env` 结构组成,称为 `envs`
```markdown
练习 1、修改 `kern/pmap.c` 中的 `mem_init()`,分配并映射 `envs` 数组。这个数组由 `NENV``Env` 结构的实例组成,分配方式与你分配 `pages` 数组的方式类似。与 `pages` 数组一样,`envs` 所占用的内存也应该以用户只读的方式映射到 `UENVS`(定义在 `inc/memlayout.h` 文件中)处,以便用户进程能够读取这个数组。
```
你应该去运行你的代码,并确保 `check_kern_pgdir()` 是没有问题的。
##### 创建和运行环境
现在,你将在 `kern/env.c` 中写一些运行用户环境所必需的代码。因为我们还没有文件系统,所以我们将让内核加载一个嵌入在内核里的静态二进制程序镜像。JOS 以 ELF 可执行镜像的形式把这个二进制程序嵌入到内核中。
在实验 3 中,`GNUmakefile` 会在 `obj/user/` 目录中生成一些二进制镜像。如果你查看 `kern/Makefrag`,将会注意到一些奇怪的东西:它们把这些二进制文件直接“链接”进了内核可执行文件中,就像链接 `.o` 文件一样。链接器命令行上的 `-b binary` 选项会把这些文件作为“原始的”、不做解析的二进制文件链接进来,而不是当作由编译器产生的普通 `.o` 文件。(就链接器而言,这些文件压根就不必是 ELF 镜像文件 —— 它们可以是任何东西,比如一个文本文件或图片!)如果你在内核构建之后查看 `obj/kern/kernel.sym`,你会注意到链接器生成了一些名字很古怪的符号,比如 `_binary_obj_user_hello_start`、`_binary_obj_user_hello_end`、以及 `_binary_obj_user_hello_size`。链接器通过改编二进制文件的文件名来生成这些符号名;这些符号为普通的内核代码提供了一种引用嵌入的二进制文件的方法。
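如果你想亲眼看看这些由链接器生成的符号,可以在构建内核之后在 `obj/kern/kernel.sym` 中搜索一下(这只是一个查看示例,输出的具体地址取决于你的构建结果):
```
$ grep _binary_obj_user_hello obj/kern/kernel.sym
```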
`kern/init.c``i386_init()` 中,你将写一些代码在环境中运行这些二进制镜像中的一种。但是,设置用户环境的关键函数还没有实现;将需要你去完成它们。
```markdown
练习 2、在文件 `env.c` 中,写完以下函数的代码:
* `env_init()`
初始化 `envs` 数组中所有的 `Env` 结构,并把它们添加到 `env_free_list` 中。它还会调用 `env_init_percpu`,后者通过配置分段硬件,为 level 0内核权限和 level 3用户权限分别设置单独的段。
* `env_setup_vm()`
为一个新环境分配一个页目录,并初始化新环境的地址空间的内核部分。
* `region_alloc()`
为一个新环境分配和映射物理内存
* `load_icode()`
你将需要去解析一个 ELF 二进制镜像,就像引导加载器那样,然后加载它的内容到一个新环境的用户地址空间中。
* `env_create()`
使用 `env_alloc` 去分配一个环境,并调用 `load_icode` 去加载一个 ELF 二进制
* `env_run()`
在用户模式中开始运行一个给定的环境
在你写这些函数时,你可能会发现新的 cprintf 动词 `%e` 非常有用 -- 它可以输出一个错误代码的相关描述。比如:
r = -E_NO_MEM;
panic("env_alloc: %e", r);
这里的 panic 将输出消息 "env_alloc: out of memory"。
```
下面是用户代码相关的调用图。确保你理解了每一步的用途。
* `start` (`kern/entry.S`)
* `i386_init` (`kern/init.c`)
* `cons_init`
* `mem_init`
* `env_init`
* `trap_init`(到目前为止还未完成)
* `env_create`
* `env_run`
* `env_pop_tf`
在完成以上函数后,你应该编译内核并在 QEMU 下运行它。如果一切正常,你的系统将进入用户空间并运行 `hello` 二进制程序,直到它用 `int` 指令发出一个系统调用为止。在那一时刻会出问题,因为 JOS 还没有设置硬件来允许从用户空间到内核的任何形式的转换。当 CPU 发现自己没有被设置好去处理这个系统调用中断时,它会产生一个一般保护异常,接着发现自己无法处理这个异常,于是又产生一个双重故障异常,然后发现同样无法处理,最后以所谓的“三重故障”放弃。通常情况下,随后你会看到 CPU 复位、系统重新引导。虽然这种行为对于遗留的应用程序来说很重要(原因参见 [这篇博客文章][3] 的解释),但对于内核开发来说它是一个痛苦,因此,在打了 6.828 补丁的 QEMU 上,你将会看到转储的寄存器内容和一个“三重故障”的信息。
我们马上就会去处理这个问题,但是现在,我们可以使用调试器去检查我们是否进入了用户模式。使用 `make qemu-gdb` 并在 `env_pop_tf` 处设置一个 GDB 断点,它应该是你进入用户模式之前到达的最后一个函数。使用 `si` 单步跟踪进这个函数;处理器会在 `iret` 指令之后进入用户模式。然后你会看到用户环境运行的第一条指令,它是 `lib/entry.S` 中标签 `start` 处的第一条指令 `cmpl`。现在,用 `b *0x...`(关于用户空间的地址,请查看 `obj/user/hello.asm`)在 `hello``sys_cputs()` 里的 `int $0x30` 处设置断点。这条 `int` 指令就是向控制台显示一个字符的系统调用。如果你的程序没能运行到这条 `int` 指令,说明你的地址空间设置或程序加载代码可能有错误;返回去找到问题并解决后重新运行。
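下面是上述调试过程的一个大致操作草图(假设你已经按照实验工具指南的方式启动并连接了 GDB其中的断点地址只是占位示例实际地址请从你自己的 `obj/user/hello.asm` 中查出):
```
$ make qemu-gdb            # 在第一个终端中启动 QEMU 并等待 GDB 连接
$ gdb                      # 在第二个终端的 lab 目录下启动 GDB按实验工具指南连接到 QEMU
(gdb) break env_pop_tf     # 进入用户模式前到达的最后一个函数
(gdb) continue
(gdb) si                   # 单步执行,越过 iret 之后即进入用户模式
(gdb) break *0x800b44      # 占位示例地址:请换成你在 hello.asm 中找到的 int $0x30 的地址
(gdb) continue
```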
##### 处理中断和异常
到目前为止,在用户空间中的第一个系统调用指令 `int $0x30` 已正式寿终正寝了:一旦处理器进入用户模式,将无法返回。因此,现在,你需要去实现基本的异常和系统调用服务程序,因为那样才有可能让内核从用户模式代码中恢复对处理器的控制。你所做的第一件事情就是彻底地掌握 x86 的中断和异常机制的使用。
```
练习 3、如果你对中断和异常机制不熟悉的话阅读 80386 程序员手册的第 9 章(或 IA-32 开发者手册的第 5 章)。
```
在这个实验中,对于中断、异常、以其它类似的东西,我们将遵循 Intel 的术语习惯。由于如<ruby>异常<rt>exception</rt></ruby><ruby>陷阱<rt>trap</rt></ruby><ruby>中断<rt>interrupt</rt></ruby><ruby>故障<rt>fault</rt></ruby><ruby>中止<rt>abort</rt></ruby>这些术语在不同的架构和操作系统上并没有一个统一的标准,我们经常在特定的架构下(如 x86并不去考虑它们之间的细微差别。当你在本实验以外的地方看到这些术语时它们的含义可能有细微的差别。
##### 受保护的控制转移基础
异常和中断都是“受保护的控制转移”它将导致处理器从用户模式切换到内核模式CPL=0而不会让用户模式的代码干扰到内核的其它函数或其它的环境。在 Intel 的术语中,一个中断就是一个“受保护的控制转移”,它是由于处理器以外的外部异步事件所引发的,比如外部设备 I/O 活动通知。而异常正好与之相反,它是由当前正在运行的代码所引发的同步的、受保护的控制转移,比如由于发生了一个除零错误或对无效内存的访问。
为了确保这些受保护的控制转移是真正地受到保护,处理器的中断/异常机制设计是:当中断/异常发生时,当前运行的代码不能随意选择进入内核的位置和方式。而是,处理器在确保内核能够严格控制的条件下才能进入内核。在 x86 上,有两种机制协同来提供这种保护:
1. **中断描述符表** 处理器确保中断和异常仅能够导致内核进入几个特定的、由内核本身定义好的、明确的入口点,而不是去运行中断或异常发生时的代码。
x86 允许最多有 256 个不同的中断或异常入口点去进入内核,每个入口点都使用一个不同的中断向量。一个向量是一个介于 0 和 255 之间的数字。一个中断向量是由中断源确定的不同的设备、错误条件、以及应用程序去请求内核使用不同的向量生成中断。CPU 使用向量作为进入处理器的中断描述符表IDT的索引它是内核设置的内核私有内存GDT 也是。从这个表中的适当的条目中,处理器将加载:
* 将值加载到指令指针寄存器EIP指向内核代码设计好的用于处理这种异常的服务程序。
* 将值加载到代码段寄存器CS它的低两位0—1 位)包含了异常服务程序将要运行的权限级别。(在 JOS 中,所有的异常处理程序都运行在内核模式中,权限级别为 0。
2. **任务状态段TSS** 处理器在中断或异常发生时,需要一个地方去保存旧的处理器状态,比如,处理器在调用异常服务程序之前 `EIP``CS` 的原始值,这样异常服务程序稍后就能够还原旧的状态,回到中断发生时的代码位置。但是这个保存下来的处理器旧状态必须被保护起来,不能被无权限的用户模式代码访问;否则有 bug 的代码或恶意用户代码将会危及内核。
基于这个原因,当一个 x86 处理器处理从用户模式切换到内核模式的中断或陷阱时,它也会切换到内核内存中的一个栈。一个称为任务状态段TSS的结构指定了这个栈所在的段选择子和地址。处理器会在这个新栈上压入 `SS`、`ESP`、`EFLAGS`、`CS`、`EIP` 以及一个可选的错误代码,然后从中断描述符中加载 `CS``EIP` 的值,并把 `ESP``SS` 设置为指向新的栈。
虽然 TSS 很大,并且可能用于多种用途,但是 JOS 仅用它来定义当从用户模式向内核模式的转移发生时,处理器要切换到的内核栈。因为 JOS 中的“内核模式”就是 x86 的 level 0 权限级别,所以处理器在进入内核模式时使用 TSS 中的 `ESP0``SS0` 字段来确定内核栈。JOS 不使用 TSS 的任何其它字段。
##### 异常和中断的类型
所有的 x86 处理器上的同步异常都能够产生一个内部使用的、介于 0 到 31 之间的中断向量,因此它映射到 IDT 就是条目 0-31。例如一个页故障总是通过向量 14 引发一个异常。大于 31 的中断向量仅用于软件中断,它由 `int` 指令生成,或异步硬件中断,当需要时,它们由外部设备产生。
在这一节中,我们将扩展 JOS 去处理向量为 0-31 的、由处理器内部产生的 x86 异常。在下一节中,我们将让 JOS 处理 480x30号软件中断向量JOS 将(随意选择地)把它用作系统调用中断向量。在实验 4 中,我们将扩展 JOS 去处理外部产生的硬件中断,比如时钟中断。
##### 一个示例
我们把这些片断综合到一起,通过一个示例来巩固一下。我们假设处理器在用户环境下运行代码,遇到一个除零问题。
1. 处理器去切换到由 TSS 中的 `SS0``ESP0` 定义的栈,在 JOS 中,它们各自保存着值 `GD_KD``KSTACKTOP`
2. 处理器在内核栈上推入异常参数,起始地址为 `KSTACKTOP`
```
+--------------------+ KSTACKTOP
| 0x00000 | old SS | " - 4
| old ESP | " - 8
| old EFLAGS | " - 12
| 0x00000 | old CS | " - 16
| old EIP | " - 20 <---- ESP
+--------------------+
```
3. 由于我们要处理一个除零错误,它将在 x86 上产生一个中断向量 0处理器读取 IDT 的条目 0然后设置 `CS:EIP` 去指向由条目描述的处理函数。
4. 处理服务程序函数将接管控制权并处理异常,例如中止用户环境。
对于某些类型的 x86 异常,除了以上五个“标准的”值外,处理器还会把一个包含错误代码的值推入栈中。向量号为 14 的页故障异常就是一个重要的例子。查看 80386 手册,确定哪些异常会推入错误代码,以及错误代码在那种情况下的含义。当处理器推入错误代码后,从用户模式进入内核模式时,异常服务程序开始执行时的栈看起来应该如下所示:
```
+--------------------+ KSTACKTOP
| 0x00000 | old SS | " - 4
| old ESP | " - 8
| old EFLAGS | " - 12
| 0x00000 | old CS | " - 16
| old EIP | " - 20
| error code | " - 24 <---- ESP
+--------------------+
```
##### 嵌套的异常和中断
处理器既能够处理来自用户模式的异常和中断,也能够处理来自内核模式的。只有当从用户模式进入内核时x86 处理器才会在把旧的寄存器状态压入栈中、并通过 IDT 调用相应的异常服务程序之前自动切换栈。如果中断或异常发生时处理器已经处于内核模式(`CS` 寄存器的低两位为 0那么 CPU 只是把更多的值压入同一个内核栈。通过这种方式,内核可以优雅地处理嵌套的异常,这类异常通常由内核自身的代码所引发。这种能力是实现保护的重要工具,我们将在稍后的系统调用一节中看到它。
如果处理器已经处于内核模式中,并且发生了一个嵌套的异常,由于它并不需要切换栈,它也就不需要去保存旧的 `SS``ESP` 寄存器。对于不推入错误代码的异常类型,在进入到异常服务程序时,它的内核栈看起来应该如下图:
```
+--------------------+ <---- old ESP
| old EFLAGS | " - 4
| 0x00000 | old CS | " - 8
| old EIP | " - 12
+--------------------+
```
对于需要推入一个错误代码的异常类型,处理器将在旧的 `EIP` 之后,立即推入一个错误代码,就和前面一样。
关于处理器的异常嵌套功能,这里有一个重要的警告。如果处理器在已经处于内核模式时发生了异常,并且由于某种原因(比如内核栈空间不足)无法把它的旧状态压入栈中,那么处理器将无法进行任何恢复,它只能复位自己。毫无疑问,内核的设计必须避免发生这种情况。
##### 设置 IDT
到目前为止,你应该有了在 JOS 中为了设置 IDT 和处理异常所需的基本信息。现在,我们去设置 IDT 以处理中断向量 0-31处理器异常。我们将在本实验的稍后部分处理系统调用然后在后面的实验中增加中断 32-47设备 IRQ
头文件 `inc/trap.h``kern/trap.h` 中包含了与中断和异常相关的重要定义,你需要熟悉它们。文件 `kern/trap.h` 中包含的是严格属于内核私有的定义,而 `inc/trap.h` 中的定义也可以被用户级程序和库使用。
注意:范围 0-31 中的一些异常被 Intel 定义为保留。因为处理器永远不会产生它们,所以你如何处理它们并不重要,你觉得怎样处理干净就怎样做。
你将要实现的完整的控制流如下图所描述:
```c
IDT trapentry.S trap.c
+----------------+
| &handler1 |---------> handler1: trap (struct Trapframe *tf)
| | // do stuff {
| | call trap // handle the exception/interrupt
| | // ... }
+----------------+
| &handler2 |--------> handler2:
| | // do stuff
| | call trap
| | // ...
+----------------+
.
.
.
+----------------+
| &handlerX |--------> handlerX:
| | // do stuff
| | call trap
| | // ...
+----------------+
```
每个异常或中断都应该在 `trapentry.S` 中有它自己的处理程序,并且 `trap_init()` 应该使用这些处理程序的地址去初始化 IDT。每个处理程序都应该在栈上构建一个 `struct Trapframe`(查看 `inc/trap.h`),然后使用一个指针调用 `trap()`(在 `trap.c` 中)到 `Trapframe`。`trap()` 接着处理异常/中断或派发给一个特定的处理函数。
```markdown
练习 4、编辑 `trapentry.S``trap.c`,然后实现上面所描述的功能。`trapentry.S` 中的宏 `TRAPHANDLER``TRAPHANDLER_NOEC`,以及 `inc/trap.h` 中的 `T_*` 定义将会帮到你。你需要在 `trapentry.S` 中为每个定义在 `inc/trap.h` 中的陷阱添加一个入口点(使用这些宏),还需要提供 `TRAPHANDLER` 宏所引用的 `_alltraps`。你也需要修改 `trap_init()` 来初始化 `idt`,使它指向每个在 `trapentry.S` 中定义的入口点;宏 `SETGATE` 将有助于你实现它。
你的 `_alltraps` 应该:
1. 推送值以使栈看上去像一个结构 Trapframe
2. 将 `GD_KD` 加载到 `%ds``%es`
3. `pushl %esp` 去传递一个指针到 Trapframe 以作为一个 trap() 的参数
4. `call trap` `trap` 能够返回吗?)
考虑使用 `pushal` 指令;它非常适合 `struct Trapframe` 的布局。
使用 `user` 目录中的一些测试程序来测试你的陷阱处理代码,这些测试程序会在产生任何系统调用之前引发异常,比如 `user/divzero`。此时,你应该能够成功通过 `divzero`、`softint` 以及 `badsegment` 测试。
```
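完成练习 4 之后,可以用类似下面的命令逐个运行这些测试程序,或者直接运行评分脚本来检查当前的进度(具体的目标名以你的 GNUmakefile 为准):
```
$ make run-divzero-nox      # 运行 user/divzero应触发除零异常并由内核妥善处理
$ make run-softint-nox
$ make run-badsegment-nox
$ make grade                # 运行评分脚本,查看目前能通过哪些测试
```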
```markdown
小挑战!目前,`trapentry.S` 中罗列的 `TRAPHANDLER` 与它们在 `trap.c` 中的安装代码之间,可能存在大量非常相似的代码。清理它们。修改 `trapentry.S` 中的宏,自动为 `trap.c` 生成一个表。注意,你可以在汇编器中用 `.text``.data` 指示符来切换代码和数据的放置位置。
```
```markdown
问题
在你的 `answers-lab3.txt` 中回答下列问题:
1. 为每个异常/中断设置一个独立的服务程序函数的目的是什么?(即:如果所有的异常/中断都传递给同一个服务程序,在我们的当前实现中能否提供这样的特性?)
2. 你需要做什么事情才能让 `user/softint` 程序正常运行评级脚本预计将会产生一个一般保护故障trap 13但是 `softint` 的代码显示为 `int $14`。为什么它产生的中断向量是 13如果内核允许 `softint``int $14` 指令去调用内核页故障的服务程序(它的中断向量是 14会发生什么事情
```
本实验的 Part A 部分结束了。不要忘了去添加 `answers-lab3.txt` 文件,提交你的变更,然后在 Part A 作业的提交截止日期之前运行 `make handin`
#### Part B页故障、断点异常、和系统调用
现在,你的内核已经有了最基本的异常处理能力,你将要去继续改进它,来提供依赖异常服务程序的操作系统原语。
##### 处理页故障
页故障异常,中断向量为 14`T_PGFLT`),它是一个非常重要的东西,我们将通过本实验和接下来的实验来大量练习它。当处理器产生一个页故障时,处理器将在它的一个特定的控制寄存器(`CR2`)中保存导致这个故障的线性地址(即:虚拟地址)。在 `trap.c` 中我们提供了一个专门处理它的函数的一个雏形,它就是 `page_fault_handler()`,我们将用它来处理页故障异常。
```markdown
练习 5、修改 `trap_dispatch()`,将页故障异常派发给 `page_fault_handler()`。你现在应该能够成功通过 `faultread`、`faultreadkernel`、`faultwrite``faultwritekernel` 测试了。如果它们中的任何一个不能正常工作,找出问题并修复它。记住,你可以使用 `make run-x``make run-x-nox` 重新引导 JOS 并进入一个特定的用户程序,比如,你可以运行 `make run-hello-nox` 去运行 `hello` 用户程序。
```
下面,你将进一步细化内核的页故障服务程序,因为你要实现系统调用了。
##### 断点异常
断点异常中断向量为 3`T_BRKPT`),通常用于让调试器在程序代码中插入断点:用特殊的 1 字节 `int3` 软件中断指令临时替换相应的程序指令。在 JOS 中,我们将稍微“滥用”一下这个异常,把它变成一个任何用户环境都可以用来调用 JOS 内核监视器的伪系统调用原语。如果我们把 JOS 内核监视器看作一个原始的调试器,那么这种用法还算合适。例如,`lib/panic.c` 中实现的用户模式 `panic()`,就是在显示它的 panic 消息后执行一条 `int3` 指令。
```markdown
练习 6、修改 `trap_dispatch()`,让断点异常调用内核监视器。你现在应该能够成功通过 `breakpoint` 测试。
```
```markdown
小挑战!修改 JOS 内核监视器,以便于你能够从当前位置(即:在 `int3` 之后,断点异常调用了内核监视器) '继续' 异常,并且因此你就可以一次运行一个单步指令。为了实现单步运行,你需要去理解 `EFLAGS` 寄存器中的某些比特的意义。
可选:如果你富有冒险精神,找一些 x86 反汇编的代码 —— 即通过从 QEMU 中、或从 GNU 二进制工具中分离、或你自己编写 —— 然后扩展 JOS 内核监视器,以使它能够反汇编,显示你的每步的指令。结合实验 1 中的符号表,这将是你写的一个真正的内核调试器。
```
```markdown
问题
3. 在断点测试案例中,取决于你在 IDT 中如何初始化断点的条目(即:你在 `trap_init` 中对 `SETGATE` 的调用),既有可能产生一个断点异常,也有可能产生一个一般保护故障。为什么?为了像上面的案例那样工作,你需要如何设置它?什么样的错误设置会触发一般保护故障?
4. 你认为这些机制的意义是什么?尤其是要考虑 `user/softint` 测试程序的工作原理。
```
##### 系统调用
用户进程请求内核为它做事情就是通过系统调用来实现的。当用户进程请求一个系统调用时,处理器首先进入内核模式,处理器和内核配合去保存用户进程的状态,内核为了完成系统调用会运行有关的代码,然后重新回到用户进程。用户进程如何获得内核的关注以及它如何指定它需要的系统调用的具体细节,这在不同的系统上是不同的。
在 JOS 内核中,我们使用 `int` 指令,它会产生一个处理器中断。具体来说,我们使用 `int $0x30` 作为系统调用中断。我们把常量 `T_SYSCALL` 定义为 480x30。你将需要设置中断描述符表允许用户进程触发这个中断。注意中断 0x30 不可能由硬件产生,因此允许用户代码去产生它并不会引起歧义。
应用程序将在寄存器中传递系统调用号和系统调用参数。通过这种方式,内核就不需要去遍历用户环境的栈或指令流。系统调用号将放在 `%eax` 中,而参数(最多五个)将分别放在 `%edx`、`%ecx`、`%ebx`、`%edi`、和 `%esi` 中。内核将在 `%eax` 中传递返回值。在 `lib/syscall.c` 中的 `syscall()` 中已为你编写了使用一个系统调用的汇编代码。你可以通过阅读它来确保你已经理解了它们都做了什么。
```markdown
练习 7、在内核中为中断向量 `T_SYSCALL` 添加一个服务程序。你将需要编辑 `kern/trapentry.S``kern/trap.c` 中的 `trap_init()`。还需要修改 `trap_dispatch()`,让它通过适当的参数调用 `syscall()`(定义在 `kern/syscall.c` 中)来处理系统调用中断,并把系统调用的返回值放在 `%eax` 中传递给用户进程。最后,你需要实现 `kern/syscall.c` 中的 `syscall()`。如果系统调用号无效,确保 `syscall()` 返回 `-E_INVAL`。你应该阅读并理解 `lib/syscall.c`(尤其是行内汇编部分),以确保你理解了系统调用的接口。对于 `inc/syscall.h` 中列出的每个系统调用,都要通过调用相应的内核函数来处理。
在你的内核中运行 `user/hello` 程序make run-hello。它应该在控制台上输出 "`hello, world`",然后在用户模式中产生一个页故障。如果没有产生页故障,可能意味着你的系统调用服务程序不太正确。现在,你应该有能力成功通过 `testbss` 测试。
```
```markdown
小挑战!使用 `sysenter``sysexit` 指令而不是使用 `int 0x30``iret` 来实现系统调用。
`sysenter/sysexit` 指令是由 Intel 设计的,它的运行速度要比 `int/iret` 指令快。它使用寄存器而不是栈来做到这一点,并且通过假定了分段寄存器是如何使用的。关于这些指令的详细内容可以在 Intel 参考手册 2B 卷中找到。
在 JOS 中添加对这些指令支持的最容易的方法是,在 `kern/trapentry.S` 中添加一个 `sysenter_handler`,在其中保存足够多的返回用户环境所需的信息、设置好内核环境、把参数推送给 `syscall()` 并直接调用它。一旦 `syscall()` 返回,它就设置好运行 `sysexit` 指令所需的一切。你还需要在 `kern/init.c` 中添加一些代码来设置模型特定寄存器MSR。AMD 架构程序员手册第 2 卷的 6.1.2 节和 Intel 参考手册 2B 卷的 SYSENTER 部分都对 MSR 有详细描述。对于如何写入 MSR你可以在[这里][4]找到一个可以添加到 `inc/x86.h` 中的 `wrmsr` 实现。
最后,`lib/syscall.c` 必须要修改,以便于支持用 `sysenter` 来生成一个系统调用。下面是 `sysenter` 指令的一种可能的寄存器布局:
eax - syscall number
edx, ecx, ebx, edi - arg1, arg2, arg3, arg4
esi - return pc
ebp - return esp
esp - trashed by sysenter
GCC 的内联汇编器将自动保存你告诉它的直接加载进寄存器的值。不要忘了同时去保存push和恢复pop你使用的其它寄存器或告诉内联汇编器你正在使用它们。内联汇编器不支持保存 `%ebp`,因此你需要自己去增加一些代码来保存和恢复它们,返回地址可以使用一个像 `leal after_sysenter_label, %%esi` 的指令置入到 `%esi` 中。
注意,它仅支持 4 个参数,因此你需要保留支持 5 个参数的系统调用的旧方法。而且,因为这个快速路径并不更新当前环境的 trap 帧,因此,在我们添加到后续实验中的一些系统调用上,它并不适合。
在接下来的实验中我们启用了异步中断,你需要再次去评估一下你的代码。尤其是,当返回到用户进程时,你需要去启用中断,而 `sysexit` 指令并不会为你去做这一动作。
```
##### 启动用户模式
一个用户程序是从 `lib/entry.S` 的顶部开始运行的。在做了一些设置之后,这段代码会调用 `lib/libmain.c` 中的 `libmain()`。你应该修改 `libmain()` 以初始化全局指针 `thisenv`,使它指向这个环境在 `envs[]` 数组中的 `struct Env`。(注意,`lib/entry.S` 中已经把 `envs` 定义为指向你在 Part A 中建立的 UENVS 映射。)提示:查看 `inc/env.h` 并使用 `sys_getenvid`
`libmain()` 接下来调用 `umain`,对于 hello 程序来说,`umain` 位于 `user/hello.c` 中。注意,它在输出 "`hello, world`" 之后,会尝试访问 `thisenv->env_id`,这就是它前面发生故障的原因。现在你已经正确地初始化了 `thisenv`,它应该不会再发生故障了。如果仍然发生故障,或许是因为你没有把 `UENVS` 区域映射为用户可读取(回到前面 Part A 中查看 `pmap.c`;这是我们第一次真正使用 `UENVS` 区域)。
```markdown
练习 8、添加要求的代码到用户库然后引导你的内核。你应该能够看到 `user/hello` 程序会输出 "`hello, world`" 然后输出 "`i am environment 00001000`"。`user/hello` 接下来会通过调用 `sys_env_destroy()`(查看`lib/libmain.c` 和 `lib/exit.c`)尝试去"退出"。由于内核目前仅支持一个用户环境,它应该会报告它毁坏了唯一的环境,然后进入到内核监视器中。现在你应该能够成功通过 `hello` 的测试。
```
##### 页故障和内存保护
内存保护是一个操作系统中最重要的特性,通过它来保证一个程序中的 bug 不会破坏其它程序或操作系统本身。
操作系统一般依靠硬件的支持来实现内存保护。操作系统告诉硬件哪些虚拟地址是有效的,哪些是无效的。当一个程序尝试访问一个无效地址或它没有权限访问的地址时,处理器会在故障发生的那条指令处停止程序运行,然后带着这次尝试操作的相关信息陷入内核。如果故障是可修复的,内核可以修复它并让程序继续运行。如果故障不可修复,那么程序就无法继续,因为它永远越不过那条导致故障的指令。
作为一个可修复故障的示例,设想一个可以自动扩展的栈。在许多系统上,内核最初只分配一个栈页,如果程序访问这个栈页下面的页而发生故障,内核会自动分配这些页并让程序继续运行。通过这种方式,内核只分配程序实际需要的栈内存,而程序却可以在一个仿佛拥有任意大的栈的假象中运行。
对于内存保护,系统调用接口带来一个有趣的问题。许多系统调用接口让用户程序把指针传递给内核,这些指针指向需要读取或写入的用户缓冲区。内核在执行系统调用的过程中会解引用这些指针。这里有两个问题:
1. 内核中的页故障可能比用户程序中的页故障严重得多。如果内核在操作它自己的数据结构时发生页故障,那就是一个内核 bug故障处理程序应该让整个内核也就是整个系统panic。但是当内核解引用用户程序传给它的指针时它需要一种方式来记住此时产生的任何页故障实际上是代表用户程序发生的。
2. 内核通常比用户程序拥有更多的内存访问权限。用户程序可能把一个指针传给系统调用,而这个指针指向的内存是内核可以读写、但用户程序不可访问的区域。内核必须非常小心,绝不能被欺骗去解引用这样的指针,否则可能泄露私有信息或破坏内核的完整性。
由于以上的原因,内核在处理由用户程序提供的指针时必须格外小心。
现在,你将通过一个简单的机制来解决这个问题:仔细检查所有从用户空间传递给内核的指针。当一个程序把指针传给内核时,内核会检查这个地址是否位于地址空间的用户部分,以及页表是否允许这次内存操作。
这样,内核在解引用一个用户提供的指针时就绝不会发生页故障。如果内核真的发生了这种页故障,它应该 panic 并终止。
```markdown
练习 9、修改 `kern/trap.c`,使得当页故障发生在内核模式中时,内核 panic。
提示:判断一个页故障是发生在用户模式还是内核模式,去检查 `tf_cs` 的低位比特即可。
阅读 `kern/pmap.c` 中的 `user_mem_assert` 并在那个文件中实现 `user_mem_check`
修改 `kern/syscall.c`,对传递给系统调用的参数进行合法性检查。
引导你的内核,运行 `user/buggyhello`。这个环境应该被销毁,而内核不应该 panic。你将会看到
[00001000] user_mem_check assertion failure for va 00000001
[00001000] free env 00001000
Destroyed the only environment - nothing more to do!
最后,修改在 `kern/kdebug.c` 中的 `debuginfo_eip`,在 `usd`、`stabs`、和 `stabstr` 上调用 `user_mem_check`。如果你现在运行 `user/breakpoint`,你应该能够从内核监视器中运行回溯,然后在内核因页故障崩溃前看到回溯进入到 `lib/libmain.c`。是什么导致了这个页故障?你不需要去修复它,但是你应该明白它是如何发生的。
```
注意,刚才实现的这些机制也同样适用于恶意用户程序(比如 `user/evilhello`)。
```
练习 10、引导你的内核运行 `user/evilhello`。环境应该被毁坏,并且内核不会崩溃。你应该能看到:
[00000000] new env 00001000
...
[00001000] user_mem_check assertion failure for va f010000c
[00001000] free env 00001000
```
**本实验到此结束。**确保你通过了所有的评分测试,并且不要忘记把问题的答案以及挑战练习解决方案的描述写进 `answers-lab3.txt` 中。提交你的变更,然后在 `lab` 目录下运行 `make handin` 提交你的工作。
在提交之前,使用 `git status``git diff` 检查你的变更,并且不要忘记 `git add answers-lab3.txt`。当一切完成后,使用 `git commit -am 'my solutions to lab 3'` 提交你的变更,然后运行 `make handin` 并按照提示操作。
--------------------------------------------------------------------------------
via: https://pdos.csail.mit.edu/6.828/2018/labs/lab3/
作者:[csail.mit][a]
选题:[lujun9972][b]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: https://pdos.csail.mit.edu/6.828/2018/labs/labguide.html
[2]: https://pdos.csail.mit.edu/6.828/2018/labs/reference.html
[3]: http://blogs.msdn.com/larryosterman/archive/2005/02/08/369243.aspx
[4]: http://ftp.kh.edu.tw/Linux/SuSE/people/garloff/linux/k6mod.c

View File

@ -0,0 +1,127 @@
如何在 Linux 上锁定虚拟控制台会话
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-720x340.png)
当你在共享系统上工作时,你可能不希望其他用户在你的控制台中悄悄地看你在做什么。如果是这样,我知道有个简单的技巧来锁定自己的会话,同时仍然允许其他用户在其他虚拟控制台上使用该系统。这要感谢 **Vlock****V**irtual Console **lock**),它是一个命令行程序,用于锁定 Linux 控制台上的一个或多个会话。如有必要你可以锁定整个控制台并完全禁用虚拟控制台切换功能。Vlock 对于有多个用户访问控制台的共享 Linux 系统特别有用。
### 安装 Vlock
在基于 Arch 的系统上Vlock 已被默认预装的 **kbd** 包所取代,因此你无需为安装烦恼。
在 Debian、Ubuntu、Linux Mint 上,运行以下命令来安装 Vlock
```
$ sudo apt-get install vlock
```
在 Fedora 上:
```
$ sudo dnf install vlock
```
在 RHEL、CentOS 上:
```
$ sudo yum install vlock
```
### 在 Linux 上锁定虚拟控制台会话
Vlock 的一般语法是:
```
vlock [ -acnshv ] [ -t <timeout> ] [ plugins... ]
```
这里:
* **a** 锁定所有虚拟控制台会话,
* **c** 锁定当前虚拟控制台会话,
* **n** 在锁定所有会话之前切换到新的空控制台,
* **s** 禁用 SysRq 键机制,
* **t** 指定屏保插件的超时时间,
* **h** 显示帮助,
* **v** 显示版本。
让我举几个例子。
**1\. 锁定当前控制台会话**
在没有任何参数的情况下运行 Vlock 时它默认锁定当前控制台会话TTY。要解锁会话你需要输入当前用户的密码或 root 密码。
```
$ vlock
```
![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-1-1.gif)
你还可以使用 **-c** 标志来锁定当前的控制台会话。
```
$ vlock -c
```
请注意,此命令仅锁定当前控制台。你可以按 **ALT+F2** 切换到其他控制台。有关在 TTY 之间切换的更多详细信息,请参阅以下指南。
此外,如果系统有多个用户,则其他用户仍可以访问其各自的 TTY。
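顺带一提,除了 **ALT+F2** 这样的快捷键,你也可以用 `chvt` 命令在虚拟控制台之间切换(这只是一个示例,通常需要 root 权限):
```
$ sudo chvt 3    # 切换到第 3 个虚拟控制台tty3
```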
**2\. 锁定所有控制台会话**
要同时锁定所有 TTY 并禁用虚拟控制台切换功能,请运行:
```
$ vlock -a
```
同样,要解锁控制台会话,只需按下回车键并输入当前用户的密码或 root 用户密码。
请记住,**root 用户可以随时解锁任何 vlock 会话**,除非在编译时禁用。
**3\. 在锁定所有控制台之前切换到新的虚拟控制台**
在锁定所有控制台之前,还可以使 Vlock 从 X 会话切换到新的空虚拟控制台。为此,请使用 **-n** 标志。
```
$ vlock -n
```
**4\. 禁用 SysRq 机制**
你也许知道,魔术 SysRq 键机制允许用户在系统死机时执行某些操作。因此,用户可以使用 SysRq 解锁控制台。为了防止这种情况,请传递 **-s** 选项以禁用 SysRq 机制。请记住,这只适用于有 **-a** 选项的时候。
```
$ vlock -sa
```
有关更多选项及其用法,请参阅帮助或手册页。
```
$ vlock -h
$ man vlock
```
Vlock 可防止未经授权的用户获得控制台访问权限。如果你在为 Linux 寻找一个简单的控制台锁定机制,那么 Vlock 值得一试!
就是这些了。希望这篇文章有用。还有更多好东西。敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-lock-virtual-console-sessions-on-linux/
作者:[SK][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,275 @@
如何在 Raspberry Pi 上搭建 WordPress
======
这篇简单的教程可以让你在 Raspberry Pi 上运行你的 WordPress 网站。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_raspberry-pi-classroom_lead.png?itok=KIyhmR8W)
WordPress 是一个非常受欢迎的开源博客平台和内容管理系统CMS。它很容易搭建而且还有一个活跃的开发者社区在构建网站、创建主题和插件供其他人使用。
虽然买一个提供“一键式 WordPress 安装”的托管服务很容易,但在一台可以使用命令行的 Linux 服务器上自己搭建也很简单,而 Raspberry Pi 正是用来尝试它并顺便学点东西的好途径。
常用的 web 软件栈由四个部分组成Linux、Apache、MySQL 和 PHP。下面是关于它们每一个你需要了解的内容。
### Linux
Raspberry Pi 上运行的系统是 Raspbian这是一个基于 Debian、针对 Raspberry Pi 硬件优化过的 Linux 发行版。你有两个选择桌面版或是精简版。桌面版有一个熟悉的桌面环境还带有很多教育软件和编程工具像是 LibreOffice 套件、Minecraft还有一个 web 浏览器。精简版没有桌面环境,只有命令行以及一些必要的软件。
这篇教程在两个版本上都可以使用,但是如果你使用的是精简版,你必须要有另外一台电脑去访问你的站点。
### Apache
Apache 是一个受欢迎的 web 服务器应用,你可以把它安装在 Raspberry Pi 上,用来提供你的 web 页面。就其自身而言Apache 可以通过 HTTP 提供静态 HTML 文件;配合额外的模块,它还可以使用像 PHP 这样的脚本语言提供动态网页。
安装 Apache 非常简单。打开一个终端窗口,然后输入下面的命令:
```
sudo apt install apache2 -y
```
Apache 默认会在 web 目录中放置一个测试文件,你可以从你的 Pi 或是网络中的其他计算机访问它。只需要打开 web 浏览器,然后输入地址 **<http://localhost>**;或者(特别是当你使用的是 Raspbian Lite 时)用你的 Pi 的 IP 地址代替 **localhost**。你应该会在浏览器窗口中看到这样的内容:
![](https://opensource.com/sites/default/files/uploads/apache-it-works.png)
这意味着你的 Apache 已经开始工作了!
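顺便说一下,如果你用的是精简版系统,需要从另一台电脑访问,可以先在 Pi 的终端里查出它的 IP 地址(这只是常见做法之一):
```
hostname -I    # 列出本机当前的 IP 地址
```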
这个默认的网页仅仅是你文件系统里的一个文件,它位于本地的 **/var/www/html/index.html**。你可以使用 [Leafpad][2] 文本编辑器写一些 HTML 去替换这个文件的内容。
```
cd /var/www/html/
sudo leafpad index.html
```
保存并关闭 Leafpad 然后刷新网页,查看你的更改。
### MySQL
MySQL读作 “my S-Q-L” 或者 “my sequel”)是一个很受欢迎的数据库引擎。就像 PHP 一样,它被非常广泛地应用于 web 服务,这也是像 WordPress 这样的项目选择它的原因,也是这些项目如此受欢迎的原因之一。
在一个终端窗口中输入以下命令安装 MySQL 服务:
```
sudo apt-get install mysql-server -y
```
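安装完成后,可以顺手确认一下数据库服务是否已经在运行(在较新的 Raspbian 上,这个包实际安装的是 MariaDB服务名可能是 `mysql` 也可能是 `mariadb`,以你的系统为准):
```
sudo systemctl status mysql
```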
WordPress 使用 MySQL 存储文章、页面、用户数据、还有许多其他的内容。
### PHP
PHP 是一个预处理器它是在服务器收到网页请求时运行的代码。它计算出页面上需要展示的内容然后把网页发送给浏览器。不像静态的 HTMLPHP 能在不同的情况下展示不同的内容。PHP 是 web 上非常受欢迎的语言;很多大型项目都使用 PHP 编写,比如 Facebook 和 Wikipedia。
安装 PHP 和 MySQL 的插件:
```
sudo apt-get install php php-mysql -y
```
删除 **index.html**,然后创建 **index.php**
```
sudo rm index.html
sudo leafpad index.php
```
在里面添加以下内容:
```
<?php phpinfo(); ?>
```
保存、退出、刷新你的网页。你将会看到 PHP 状态页:
![](https://opensource.com/sites/default/files/uploads/phpinfo.png)
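你也可以在终端里确认一下 PHP 的命令行版本(这只是一个快速检查,版本号会因系统而异):
```
php -v
```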
### WordPress
你可以使用 **wget** 命令从 [wordpress.org][3] 下载 WordPress。最新的 WordPress 总是使用 [wordpress.org/latest.tar.gz][4] 这个网址,所以你可以直接抓取这些文件,而无需到网页里面查看,现在的版本是 4.9.8。
确保你在 **/var/www/html** 目录中,然后删除里面的所有内容:
```
cd /var/www/html/
sudo rm *
```
使用 **wget** 下载 WordPress然后解压并把解压出来的 wordpress 目录中的内容移动到 **html** 目录下:
```
sudo wget http://wordpress.org/latest.tar.gz
sudo tar xzf latest.tar.gz
sudo mv wordpress/* .
```
现在可以删除压缩包和空的 **wordpress** 目录:
```
sudo rm -rf wordpress latest.tar.gz
```
运行 **ls** 或者 **tree -L 1** 命令显示 WordPress 项目下包含的内容:
```
.
├── index.php
├── license.txt
├── readme.html
├── wp-activate.php
├── wp-admin
├── wp-blog-header.php
├── wp-comments-post.php
├── wp-config-sample.php
├── wp-content
├── wp-cron.php
├── wp-includes
├── wp-links-opml.php
├── wp-load.php
├── wp-login.php
├── wp-mail.php
├── wp-settings.php
├── wp-signup.php
├── wp-trackback.php
└── xmlrpc.php
3 directories, 16 files
```
这些就是一个默认 WordPress 安装所包含的文件。你自定义安装时需要编辑的文件位于 **wp-content** 目录中。
你现在应该把所有文件的所有权改为 Apache 用户:
```
sudo chown -R www-data: .
```
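可以用 `ls -l` 快速确认文件的属主确实已经变成 `www-data`(仅为检查示例):
```
ls -l /var/www/html | head
```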
### WordPress 数据库
为了搭建你的 WordPress 站点,你需要一个数据库。这里使用的是 MySQL。
在终端窗口运行 MySQL 的安全安装命令:
```
sudo mysql_secure_installation
```
你将会被问到一系列的问题。这里原来没有设置密码,但是在下一步你应该设置一个。确保你记住了你输入的密码,后面你需要使用它去连接你的 WordPress。按回车确认下面的所有问题。
当它完成之后,你将会看到 "All done!" 和 "Thanks for using MariaDB!" 的信息。
在终端窗口运行 **mysql** 命令:
```
sudo mysql -uroot -p
```
输入你创建的 root 密码。你将看到 “Welcome to the MariaDB monitor.” 的欢迎信息。在 **MariaDB [(none)] >** 提示处使用以下命令,为你 WordPress 的安装创建一个数据库:
```
create database wordpress;
```
注意语句末尾的分号。如果命令执行成功,你将看到下面的提示:
```
Query OK, 1 row affected (0.00 sec)
```
现在,把数据库的权限授予 root 用户,在语句末尾处输入你的密码:
```
GRANT ALL PRIVILEGES ON wordpress.* TO 'root'@'localhost' IDENTIFIED BY 'YOURPASSWORD';
```
为了让更改生效,你需要刷新数据库权限:
```
FLUSH PRIVILEGES;
```
按 **Ctrl+D** 退出 MariaDB 提示符,返回到 Bash shell。
### WordPress 配置
在你的 Raspberry Pi 打开网页浏览器,地址栏输入 **<http://localhost>**。选择一个你想要在 WordPress 使用的语言,然后点击 **继续**。你将会看到 WordPress 的欢迎界面。点击 **让我们开始吧** 按钮。
按照下面这样填写基本的站点信息:
```
Database Name:      wordpress
User Name:          root
Password:           <YOUR PASSWORD>
Database Host:      localhost
Table Prefix:       wp_
```
点击 **提交** 继续,然后点击 **运行安装**
![](https://opensource.com/sites/default/files/uploads/wp-info.png)
按下面的提示填写:为你的站点设置一个标题、创建一个用户名和密码、输入你的 email 地址。点击 **安装 WordPress** 按钮,然后使用你刚刚创建的账号登录。现在你已经登录,站点也已经设置好了,你可以在浏览器地址栏输入 **<http://localhost/wp-admin>** 进入网站的管理后台。
### 永久链接
更改永久链接设置,让你的 URL 更加友好,是一个很好的主意。
要这样做,首先登录你的 WordPress进入仪表盘。进入 **设置 -> 永久链接**,选择 **文章名** 选项,然后点击 **保存更改**。接着你需要启用 Apache 的 **rewrite**(重写)模块:
```
sudo a2enmod rewrite
```
你还需要告诉虚拟主机,站点允许重写请求。编辑你的虚拟主机的 Apache 配置文件:
```
sudo leafpad /etc/apache2/sites-available/000-default.conf
```
在第一行后添加下面的内容:
```
<Directory "/var/www/html">
    AllowOverride All
</Directory>
```
并确保它位于 **<VirtualHost \*:80>** 之内,像这样:
```
<VirtualHost *:80>
    <Directory "/var/www/html">
        AllowOverride All
    </Directory>
    ...
```
保存这个文件,然后退出,重启 Apache
```
sudo systemctl restart apache2
```
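如果重启后站点无法访问,可以先检查一下 Apache 配置的语法是否有误(下面的命令在 Debian 系的系统上一般可用):
```
sudo apache2ctl configtest
```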
### 下一步?
WordPress 是可以高度自定义的。在网站顶部横幅处点击你的站点名,就会进入仪表盘。在这里你可以修改主题、添加页面和文章、编辑菜单、添加插件,以及做许多其他的事情。
这里有一些你可以在 Raspberry Pi 的网页服务上尝试的有趣的事情:
* 添加页面和文章到你的网站
* 从外观菜单安装不同的主题
* 自定义你的网站主题或是创建你自己的
* 使用你的网站服务向你的网络上的其他人显示有用的信息
不要忘记Raspberry Pi 是一台 Linux 电脑,你也可以按照同样的步骤在运行 Debian 或者 Ubuntu 的服务器上安装 WordPress。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/setting-wordpress-raspberry-pi
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[dianbanjiu](https://github.com/dianbanjiu)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sitewide-search?search_api_views_fulltext=raspberry%20pi
[2]: https://en.wikipedia.org/wiki/Leafpad
[3]: http://wordpress.org/
[4]: https://wordpress.org/latest.tar.gz