From dd17d87289a795730f752a0c439030ca0bedfb90 Mon Sep 17 00:00:00 2001 From: dianbanjiu Date: Wed, 24 Oct 2018 19:01:15 +0800 Subject: [PATCH 01/32] translated --- ...w to set up WordPress on a Raspberry Pi.md | 282 ------------------ ...w to set up WordPress on a Raspberry Pi.md | 275 +++++++++++++++++ 2 files changed, 275 insertions(+), 282 deletions(-) delete mode 100644 sources/tech/20181022 How to set up WordPress on a Raspberry Pi.md create mode 100644 translated/tech/20181022 How to set up WordPress on a Raspberry Pi.md diff --git a/sources/tech/20181022 How to set up WordPress on a Raspberry Pi.md b/sources/tech/20181022 How to set up WordPress on a Raspberry Pi.md deleted file mode 100644 index 66fc1dede0..0000000000 --- a/sources/tech/20181022 How to set up WordPress on a Raspberry Pi.md +++ /dev/null @@ -1,282 +0,0 @@ -translating by dianbanjiu -How to set up WordPress on a Raspberry Pi -====== - -Run your WordPress website on your Raspberry Pi with this simple tutorial. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_raspberry-pi-classroom_lead.png?itok=KIyhmR8W) - -WordPress is a popular open source blogging platform and content management system (CMS). It's easy to set up and has a thriving community of developers building websites and creating themes and plugins for others to use. - -Although getting hosting packages with a "one-click WordPress setup" is easy, it's also simple to set up your own on a Linux server with only command-line access, and the [Raspberry Pi][1] is a perfect way to try it out and learn something along the way. - -The four components of a commonly used web stack are Linux, Apache, MySQL, and PHP. Here's what you need to know about each. - -### Linux - -The Raspberry Pi runs Raspbian, which is a Linux distribution based on Debian and optimized to run well on Raspberry Pi hardware. It comes with two options to start: Desktop or Lite. The Desktop version boots to a familiar-looking desktop and comes with lots of educational software and programming tools, as well as the LibreOffice suite, Minecraft, and a web browser. The Lite version has no desktop environment, so it's command-line only and comes with only the essential software. - -This tutorial will work with either version, but if you use the Lite version you'll have to use another computer to access your website. - -### Apache - -Apache is a popular web server application you can install on the Raspberry Pi to serve web pages. On its own, Apache can serve static HTML files over HTTP. With additional modules, it can serve dynamic web pages using scripting languages such as PHP. - -Installing Apache is very simple. Open a terminal window and type the following command: - -``` -sudo apt install apache2 -y -``` - -By default, Apache puts a test HTML file in a web folder you can view from your Pi or another computer on your network. Just open the web browser and enter the address ****. Alternatively (particularly if you're using Raspbian Lite), enter the Pi's IP address instead of **localhost**. You should see this in your browser window: - -![](https://opensource.com/sites/default/files/uploads/apache-it-works.png) - -This means you have Apache working! - -This default webpage is just an HTML file on the filesystem. It is located at **/var/www/html/index.html**. 
You can try replacing this file with some HTML of your own using the [Leafpad][2] text editor: - -``` -cd /var/www/html/ -sudo leafpad index.html -``` - -Save and close Leafpad then refresh the browser to see your changes. - -### MySQL - -MySQL (pronounced "my S-Q-L" or "my sequel") is a popular database engine. Like PHP, it's widely used on web servers, which is why projects like WordPress use it and why those projects are so popular. - -Install MySQL Server by entering the following command into the terminal window: - -``` -sudo apt-get install mysql-server -y -``` - -WordPress uses MySQL to store posts, pages, user data, and lots of other content. - -### PHP - -PHP is a preprocessor: it's code that runs when the server receives a request for a web page via a web browser. It works out what needs to be shown on the page, then sends that page to the browser. Unlike static HTML, PHP can show different content under different circumstances. PHP is a very popular language on the web; huge projects like Facebook and Wikipedia are written in PHP. - -Install PHP and the MySQL extension: - -``` -sudo apt-get install php php-mysql -y -``` - -Delete the **index.html** file and create **index.php** : - -``` -sudo rm index.html -sudo leafpad index.php -``` - -Add the following line: - -``` - -``` - -Save, exit, and refresh your browser. You'll see the PHP status page: - -![](https://opensource.com/sites/default/files/uploads/phpinfo.png) - -### WordPress - -You can download WordPress from [wordpress.org][3] using the **wget** command. Helpfully, the latest version of WordPress is always available at [wordpress.org/latest.tar.gz][4], so you can grab it without having to look it up on the website. As I'm writing, this is version 4.9.8. - -Make sure you're in **/var/www/html** and delete everything in it: - -``` -cd /var/www/html/ -sudo rm * -``` - -Download WordPress using **wget** , then extract the contents and move the WordPress files to the **html** directory: - -``` -sudo wget http://wordpress.org/latest.tar.gz -sudo tar xzf latest.tar.gz -sudo mv wordpress/* . -``` - -Tidy up by removing the tarball and the now-empty **wordpress** directory: - -``` -sudo rm -rf wordpress latest.tar.gz -``` - -Running the **ls** or **tree -L 1** command will show the contents of a WordPress project: - -``` -. -├── index.php -├── license.txt -├── readme.html -├── wp-activate.php -├── wp-admin -├── wp-blog-header.php -├── wp-comments-post.php -├── wp-config-sample.php -├── wp-content -├── wp-cron.php -├── wp-includes -├── wp-links-opml.php -├── wp-load.php -├── wp-login.php -├── wp-mail.php -├── wp-settings.php -├── wp-signup.php -├── wp-trackback.php -└── xmlrpc.php - -3 directories, 16 files -``` - -This is the source of a default WordPress installation. The files you edit to customize your installation belong in the **wp-content** folder. - -You should now change the ownership of all these files to the Apache user: - -``` -sudo chown -R www-data: . -``` - -### WordPress database - -To get your WordPress site set up, you need a database. This is where MySQL comes in! - -Run the MySQL secure installation command in the terminal window: - -``` -sudo mysql_secure_installation -``` - -You will be asked a series of questions. There's no password set up initially, but you should set one in the second step. Make sure you enter a password you will remember, as you'll need it to connect to WordPress. Press Enter to say Yes to each question that follows. - -When it's complete, you will see the messages "All done!" 
and "Thanks for using MariaDB!" - -Run **mysql** in the terminal window: - -``` -sudo mysql -uroot -p -``` - -Enter the root password you created. You will be greeted by the message "Welcome to the MariaDB monitor." Create the database for your WordPress installation at the **MariaDB [(none)] >** prompt using: - -``` -create database wordpress; -``` - -Note the semicolon at the end of the statement. If the command is successful, you should see this: - -``` -Query OK, 1 row affected (0.00 sec) -``` - -Grant database privileges to the root user, entering your password at the end of the statement: - -``` -GRANT ALL PRIVILEGES ON wordpress.* TO 'root'@'localhost' IDENTIFIED BY 'YOURPASSWORD'; -``` - -For the changes to take effect, you will need to flush the database privileges: - -``` -FLUSH PRIVILEGES; -``` - -Exit the MariaDB prompt with **Ctrl+D** to return to the Bash shell. - -### WordPress configuration - -Open the web browser on your Raspberry Pi and open ****. You should see a WordPress page asking you to pick your language. Select your language and click **Continue**. You will be presented with the WordPress welcome screen. Click the **Let's go!** button. - -Fill out the basic site information as follows: - -``` -Database Name:      wordpress -User Name:          root -Password:           -Database Host:      localhost -Table Prefix:       wp_ -``` - -Click **Submit** to proceed, then click **Run the install**. - -![](https://opensource.com/sites/default/files/uploads/wp-info.png) - -Fill in the form: Give your site a title, create a username and password, and enter your email address. Hit the **Install WordPress** button, then log in using the account you just created. Now that you're logged in and your site is set up, you can see your website by visiting ****. - -### Permalinks - -It's a good idea to change your permalink settings to make your URLs more friendly. - -To do this, log into WordPress and go to the dashboard. Go to **Settings** , then **Permalinks**. Select the **Post name** option and click **Save Changes**. You'll need to enable Apache's **rewrite** module: - -``` -sudo a2enmod rewrite -``` - -You'll also need to tell the virtual host serving the site to allow requests to be overwritten. Edit the Apache configuration file for your virtual host: - -``` -sudo leafpad /etc/apache2/sites-available/000-default.conf -``` - -Add the following lines after line 1: - -``` - -    AllowOverride All - -``` - -Ensure it's within the **< VirtualHost *:80>** like so: - -``` - -    -        AllowOverride All -    -    ... -``` - -Save the file and exit, then restart Apache: - -``` -sudo systemctl restart apache2 -``` - -### What's next? - -WordPress is very customizable. By clicking your site name in the WordPress banner at the top of the page (when you're logged in), you'll be taken to the Dashboard. From there, you can change the theme, add pages and posts, edit the menu, add plugins, and do lots more. - -Here are some interesting things you can try on the Raspberry Pi's web server. - - * Add pages and posts to your website - * Install different themes from the Appearance menu - * Customize your website's theme or create your own - * Use your web server to display useful information for people on your network - - - -Don't forget, the Raspberry Pi is a Linux computer. You can also follow these instructions to install WordPress on a server running Debian or Ubuntu. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/setting-wordpress-raspberry-pi - -作者:[Ben Nuttall][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/bennuttall -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sitewide-search?search_api_views_fulltext=raspberry%20pi -[2]: https://en.wikipedia.org/wiki/Leafpad -[3]: http://wordpress.org/ -[4]: https://wordpress.org/latest.tar.gz diff --git a/translated/tech/20181022 How to set up WordPress on a Raspberry Pi.md b/translated/tech/20181022 How to set up WordPress on a Raspberry Pi.md new file mode 100644 index 0000000000..5153307eee --- /dev/null +++ b/translated/tech/20181022 How to set up WordPress on a Raspberry Pi.md @@ -0,0 +1,275 @@ +如何在 Rasspberry Pi 上搭建 WordPress +====== + +这篇简单的教程可以让你在 Rasspberry Pi 上运行你的 WordPress 网站。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_raspberry-pi-classroom_lead.png?itok=KIyhmR8W) + +WordPress 是一个非常受欢迎的开源博客平台和内容管理平台(CMS)。它很容易搭建,而且还有一个活跃的开发者社区构建网站、创建主题和插件供其他人使用。 + +虽然通过一键式 WordPress 设置获得托管包很容易,但通过命令行就可以在 Linux 服务器上设置自己的托管包,而且 Raspberry Pi 是一种用来尝试它并顺便学习一些东西的相当好的途径。 + +使用一个 web 堆栈的四个部分是 Linux、Apache、MySQL 和 PHP。这里是你对它们每一个需要了解的。 + +### Linux + +Raspberry Pi 上运行的系统是 Raspbian,这是一个基于 Debian,优化地可以很好的运行在 Raspberry Pi 硬件上的 Linux 发行版。你有两个选择:桌面版或是精简版。桌面版有一个熟悉的桌面还有很多教育软件和编程工具,像是 LibreOffice 套件、Mincraft,还有一个 web 浏览器。精简版本没有桌面环境,因此它只有命令行以及一些必要的软件。 + +这篇教程在两个版本上都可以使用,但是如果你使用的是精简版,你必须要有另外一台电脑去访问你的站点。 + +### Apache + +Apache 是一个受欢迎的 web 服务器应用,你可以安装在你的 Raspberry Pi 上伺服你的 web 页面。就其自身而言,Apache 可以通过 HTTP 提供静态 HTML 文件。使用额外的模块,它也可以使用像是 PHP 的脚本语言提供动态网页。 + +安装 Apache 非常简单。打开一个终端窗口,然后输入下面的命令: + +``` +sudo apt install apache2 -y +``` +Apache 默认放了一个测试文件在一个 web 目录中,你可以从你的电脑或是你网络中的其他计算机进行访问。只需要打开 web 浏览器,然后输入地址 ****。或者(特别是你使用的是 Raspbian Lite 的话)输入你的 Pi 的 IP 地址代替 **localhost**。你应该会在你的浏览器窗口中看到这样的内容: + +![](https://opensource.com/sites/default/files/uploads/apache-it-works.png) + +这意味着你的 Apache 已经开始工作了! 
+ +这个默认的网页仅仅是你文件系统里的一个文件。它在你本地的 **/var/www/html/index/html**。你可以使用 [Leafpad][2] 文本编辑器写一些 HTML 去替换这个文件的内容。 + +``` +cd /var/www/html/ +sudo leafpad index.html +``` + +保存并关闭 Leafpad 然后刷新网页,查看你的更改。 + +### MySQL + +MySQL (显然是 "my S-Q-L" 或者 "my sequel") 是一个很受欢迎的数据库引擎。就像 PHP,它被非常广泛的应用于网页服务,这也是为什么像 WordPress 一样的项目选择了它,以及这些项目是为何如此受欢迎。 + +在一个终端窗口中输入以下命令安装 MySQL 服务: + +``` +sudo apt-get install mysql-server -y +``` + +WordPress 使用 MySQL 存储文章、页面、用户数据、还有许多其他的内容。 + +### PHP + +PHP 是一个预处理器:它是在服务器通过网络浏览器接受网页请求是运行的代码。它解决那些需要展示在网页上的内容,然后发送这些网页到浏览器上。,不像静态的 HTML,PHP 能在不同的情况下展示不同的内容。PHP 是一个在 web 上非常受欢迎的语言;很多像 Facebook 和 Wikipedia 的项目都使用 PHP 编写。 + +安装 PHP 和 MySQL 的插件: + +``` +sudo apt-get install php php-mysql -y +``` + +删除 **index.html**,然后创建 **index.php**: + +``` +sudo rm index.html +sudo leafpad index.php +``` + +在里面添加以下内容: + +``` + +``` + +保存、退出、刷新你的网页。你将会看到 PHP 状态页: + +![](https://opensource.com/sites/default/files/uploads/phpinfo.png) + +### WordPress + +你可以使用 **wget** 命令从 [wordpress.org][3] 下载 WordPress。最新的 WordPress 总是使用 [wordpress.org/latest.tar.gz][4] 这个网址,所以你可以直接抓取这些文件,而无需到网页里面查看,现在的版本是 4.9.8。 + +确保你在 **/var/www/html** 目录中,然后删除里面的所有内容: + +``` +cd /var/www/html/ +sudo rm * +``` + +使用 **wget** 下载 WordPress,然后提取里面的内容,并移动提取的 WordPress 目录中的内容移动到 **html** 目录下: + +``` +sudo wget http://wordpress.org/latest.tar.gz +sudo tar xzf latest.tar.gz +sudo mv wordpress/* . +``` + +现在可以删除压缩包和空的 **wordpress** 目录: + +``` +sudo rm -rf wordpress latest.tar.gz +``` + +运行 **ls** 或者 **tree -L 1** 命令显示 WordPress 项目下包含的内容: + +``` +. +├── index.php +├── license.txt +├── readme.html +├── wp-activate.php +├── wp-admin +├── wp-blog-header.php +├── wp-comments-post.php +├── wp-config-sample.php +├── wp-content +├── wp-cron.php +├── wp-includes +├── wp-links-opml.php +├── wp-load.php +├── wp-login.php +├── wp-mail.php +├── wp-settings.php +├── wp-signup.php +├── wp-trackback.php +└── xmlrpc.php + +3 directories, 16 files +``` + +这是 WordPress 的默认安装源。在 **wp-content** 目录中,你可以编辑你的自定义安装。 + +你现在应该把所有文件的所有权改为 Apache 用户: + +``` +sudo chown -R www-data: . +``` + +### WordPress 数据库 + +为了搭建你的 WordPress 站点,你需要一个数据库。这里使用的是 MySQL。 + +在终端窗口运行 MySQL 的安全安装命令: + +``` +sudo mysql_secure_installation +``` + +你将会被问到一系列的问题。这里原来没有设置密码,但是在下一步你应该设置一个。确保你记住了你输入的密码,后面你需要使用它去连接你的 WordPress。按回车确认下面的所有问题。 + +当它完成之后,你将会看到 "All done!" 和 "Thanks for using MariaDB!" 
的信息。 + +在终端窗口运行 **mysql** 命令: + +``` +sudo mysql -uroot -p +``` +输入你创建的 root 密码。你将看到 “Welcome to the MariaDB monitor.” 的欢迎信息。在 **MariaDB [(none)] >** 提示处使用以下命令,为你 WordPress 的安装创建一个数据库: + +``` +create database wordpress; +``` +注意声明最后的分号,如果命令执行成功,你将看到下面的提示: + +``` +Query OK, 1 row affected (0.00 sec) +``` +把 数据库权限交给 root 用户在声明的底部输入密码: + +``` +GRANT ALL PRIVILEGES ON wordpress.* TO 'root'@'localhost' IDENTIFIED BY 'YOURPASSWORD'; +``` + +为了让更改生效,你需要刷新数据库权限: + +``` +FLUSH PRIVILEGES; +``` + +按 **Ctrl+D** 退出 MariaDB 提示,返回到 Bash shell。 + +### WordPress 配置 + +在你的 Raspberry Pi 打开网页浏览器,地址栏输入 ****。选择一个你想要在 WordPress 使用的语言,然后点击 **继续**。你将会看到 WordPress 的欢迎界面。点击 **让我们开始吧** 按钮。 + +按照下面这样填写基本的站点信息: + +``` +Database Name:      wordpress +User Name:          root +Password:           +Database Host:      localhost +Table Prefix:       wp_ +``` + +点击 **提交** 继续,然后点击 **运行安装**。 + +![](https://opensource.com/sites/default/files/uploads/wp-info.png) + +按下面的格式填写:为你的站点设置一个标题、创建一个用户名和密码、输入你的 email 地址。点击 **安装 WordPress** 按钮,然后使用你刚刚创建的账号登录,你现在已经登录,而且你的站点已经设置好了,你可以在浏览器地址栏输入 **** 查看你的网站。 + +### 永久链接 + +更改你的永久链接,使得你的 URLs 更加友好是一个很好的想法。 + +要这样做,首先登录你的 WordPress ,进入仪表盘。进入 **设置**,**永久链接**。选择 **文章名** 选项,然后点击 **保存更改**。接着你需要开启 Apache 的 **改写** 模块。 + +``` +sudo a2enmod rewrite +``` +你还需要告诉虚拟托管服务,站点允许改写请求。为你的虚拟主机编辑 Apache 配置文件 + +``` +sudo leafpad /etc/apache2/sites-available/000-default.conf +``` + +在第一行后添加下面的内容: + +``` + +    AllowOverride All + +``` + +确保其中有像这样的内容 **< VirtualHost \*:80>** + +``` + +    +        AllowOverride All +    +    ... +``` + +保存这个文件,然后退出,重启 Apache: + +``` +sudo systemctl restart apache2 +``` + +### 下一步? + +WordPress 是可以高度自定义的。在网站顶部横幅处点击你的站点名,你就会进入仪表盘,。在这里你可以修改主题、添加页面和文章、编辑菜单、添加插件、以及许多其他的事情。 + +这里有一些你可以在 Raspberry Pi 的网页服务上尝试的有趣的事情: + + * 添加页面和文章到你的网站 + * 从外观菜单安装不同的主题 + * 自定义你的网站主题或是创建你自己的 + * 使用你的网站服务向你的网络上的其他人显示有用的信息 + + +不要忘记,Raspberry Pi 是一台 Linux 电脑。你也可以使用相同的结构在运行着 Debian 或者 Ubuntu 的服务器上安装 WordPress。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/setting-wordpress-raspberry-pi + +作者:[Ben Nuttall][a] +选题:[lujun9972][b] +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bennuttall +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sitewide-search?search_api_views_fulltext=raspberry%20pi +[2]: https://en.wikipedia.org/wiki/Leafpad +[3]: http://wordpress.org/ +[4]: https://wordpress.org/latest.tar.gz From 996e76e0ce62f080a7d38177a54a77d295f3342c Mon Sep 17 00:00:00 2001 From: fuowang <1106694860@qq.com> Date: Wed, 24 Oct 2018 20:39:30 +0800 Subject: [PATCH 02/32] =?UTF-8?q?fuowang=20=E7=BF=BB=E8=AF=91=E5=AE=8C?= =?UTF-8?q?=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ideo Editing Software for Linux In 2017.md | 516 ----------------- ...ideo Editing Software for Linux In 2017.md | 520 ++++++++++++++++++ 2 files changed, 520 insertions(+), 516 deletions(-) delete mode 100644 sources/tech/20160627 9 Best Free Video Editing Software for Linux In 2017.md create mode 100644 translated/tech/20160627 9 Best Free Video Editing Software for Linux In 2017.md diff --git a/sources/tech/20160627 9 Best Free Video Editing Software for Linux In 2017.md b/sources/tech/20160627 9 Best Free Video Editing Software for Linux In 2017.md deleted file mode 100644 index fb96d6e178..0000000000 --- 
a/sources/tech/20160627 9 Best Free Video Editing Software for Linux In 2017.md +++ /dev/null @@ -1,516 +0,0 @@ -fuowang 翻译中 - -9 Best Free Video Editing Software for Linux In 2017 -====== -**Brief: Here are best video editors for Linux, their feature, pros and cons and how to install them on your Linux distributions.** - -![Best Video editors for Linux][1] - -![Best Video editors for Linux][2] - -We have discussed [best photo management applications for Linux][3], [best code editors for Linux][4] in similar articles in the past. Today we shall see the **best video editing software for Linux**. - -When asked about free video editing software, Windows Movie Maker and iMovie is what most people often suggest. - -Unfortunately, both of them are not available for GNU/Linux. But you don't need to worry about it, we have pooled together a list of **best free video editors** for you. - -## Best Video Editors for Linux - -Let's have a look at the best free video editing software for Linux below. Here's a quick summary if you think the article is too long to read. You can click on the links to jump to the relevant section of the article: - -Video Editors Main Usage Type Kdenlive General purpose video editing Free and Open Source OpenShot General purpose video editing Free and Open Source Shotcut General purpose video editing Free and Open Source Flowblade General purpose video editing Free and Open Source Lightworks Professional grade video editing Freemium Blender Professional grade 3D editing Free and Open Source Cinelerra General purpose video editing Free and Open Source DaVinci Resolve Professional grade video editing Freemium VidCutter Simple video split and merge Free and Open Source - -### 1\. Kdenlive - -![Kdenlive-free video editor on ubuntu][1] - -![Kdenlive-free video editor on ubuntu][5] -[Kdenlive][6] is a free and [open source][7] video editing software from [KDE][8] that provides support for dual video monitors, a multi-track timeline, clip list, customizable layout support, basic effects, and basic transitions. - -It supports a wide variety of file formats and a wide range of camcorders and cameras including Low resolution camcorder (Raw and AVI DV editing), Mpeg2, mpeg4 and h264 AVCHD (small cameras and camcorders), High resolution camcorder files, including HDV and AVCHD camcorders, Professional camcorders, including XDCAM-HD™ streams, IMX™ (D10) streams, DVCAM (D10) , DVCAM, DVCPRO™, DVCPRO50™ streams and DNxHD™ streams. - -If you are looking for an iMovie alternative for Linux, Kdenlive would be your best bet. - -#### Kdenlive features - - * Multi-track video editing - * A wide range of audio and video formats - * Configurable interface and shortcuts - * Easily create tiles using text or images - * Plenty of effects and transitions - * Audio and video scopes make sure the footage is correctly balanced - * Proxy editing - * Automatic save - * Wide hardware support - * Keyframeable effects - - - -#### Pros - - * All-purpose video editor - * Not too complicated for those who are familiar with video editing - - - -#### Cons - - * It may still be confusing if you are looking for something extremely simple - * KDE applications are infamous for being bloated - - - -#### Installing Kdenlive - -Kdenlive is available for all major Linux distributions. You can simply search for it in your software center. Various packages are available in the [download section of Kdenlive website][9]. 
- -Command line enthusiasts can install it from the terminal by running the following command in Debian and Ubuntu-based Linux distributions: -``` -sudo apt install kdenlive -``` - -### 2\. OpenShot - -![Openshot-free-video-editor-on-ubuntu][1] - -![Openshot-free-video-editor-on-ubuntu][10] - -[OpenShot][11] is another multi-purpose video editor for Linux. OpenShot can help you create videos with transitions and effects. You can also adjust audio levels. Of course, it support of most formats and codecs. - -You can also export your film to DVD, upload to YouTube, Vimeo, Xbox 360, and many common video formats. OpenShot is a tad bit simpler than Kdenlive. So if you need a video editor with a simple UI OpenShot is a good choice. - -There is also a neat documentation to [get you started with OpenShot][12]. - -#### OpenShot features - - * Cross-platform, available on Linux, macOS, and Windows - * Support for a wide range of video, audio, and image formats - * Powerful curve-based Keyframe animations - * Desktop integration with drag and drop support - * Unlimited tracks or layers - * Clip resizing, scaling, trimming, snapping, rotation, and cutting - * Video transitions with real-time previews - * Compositing, image overlays and watermarks - * Title templates, title creation, sub-titles - * Support for 2D animation via image sequences - * 3D animated titles and effects - * SVG friendly for creating and including vector titles and credits - * Scrolling motion picture credits - * Frame accuracy (step through each frame of video) - * Time-mapping and speed changes on clips - * Audio mixing and editing - * Digital video effects, including brightness, gamma, hue, greyscale, chroma key etc - - - -#### Pros - - * All-purpose video editor for average video editing needs - * Available on Windows and macOS along with Linux - - - -#### Cons - - * It may be simple but if you are extremely new to video editing, there is definitely a learning curve involved here - * You may still not find up to the mark of a professional-grade, movie making editing software - - - -#### Installing OpenShot - -OpenShot is also available in the repository of all major Linux distributions. You can simply search for it in your software center. You can also get it from its [official website][13]. - -My favorite way is to use the following command in Debian and Ubuntu-based Linux distributions: -``` -sudo apt install openshot -``` - -### 3\. Shotcut - -![Shotcut Linux video editor][1] - -![Shotcut Linux video editor][14] - -[Shotcut][15] is another video editor for Linux that can be put in the same league as Kdenlive and OpenShot. While it does provide similar features as the other two discussed above, Shotcut is a bit advanced with support for 4K videos. - -Support for a number of audio, video format, transitions and effects are some of the numerous features of Shotcut. External monitor is also supported here. - -There is a collection of video tutorials to [get you started with Shotcut][16]. It is also available for Windows and macOS so you can use your learning on other operating systems as well. 
- -#### Shotcut features - - * Cross-platform, available on Linux, macOS, and Windows - * Support for a wide range of video, audio, and image formats - * Native timeline editing - * Mix and match resolutions and frame rates within a project - * Audio filters, mixing and effects - * Video transitions and filters - * Multitrack timeline with thumbnails and waveforms - * Unlimited undo and redo for playlist edits including a history view - * Clip resizing, scaling, trimming, snapping, rotation, and cutting - * Trimming on source clip player or timeline with ripple option - * External monitoring on an extra system display/monitor - * Hardware support - - - -You can read about more features [here][17]. - -#### Pros - - * All-purpose video editor for common video editing needs - * Support for 4K videos - * Available on Windows and macOS along with Linux - - - -#### Cons - - * Too many features reduce the simplicity of the software - - - -#### Installing Shotcut - -Shotcut is available in [Snap][18] format. You can find it in Ubuntu Software Center. For other distributions, you can get the executable file from its [download page][19]. - -### 4\. Flowblade - -![Flowblade movie editor on ubuntu][1] - -![Flowblade movie editor on ubuntu][20] - -[Flowblade][21] is a multitrack non-linear video editor for Linux. Like the above-discussed ones, this too is a free and open source software. It comes with a stylish and modern user interface. - -Written in Python, it is designed to provide a fast, and precise. Flowblade has focused on providing the best possible experience on Linux and other free platforms. So there's no Windows and OS X version for now. Feels good to be a Linux exclusive. - -You also get a decent [documentation][22] to help you use all of its features. - -#### Flowblade features - - * Lightweight application - * Provide simple interface for simple tasks like split, merge, overwrite etc - * Plenty of audio and video effects and filters - * Supports [proxy editing][23] - * Drag and drop support - * Support for a wide range of video, audio, and image formats - * Batch rendering - * Watermarks - * Video transitions and filters - * Multitrack timeline with thumbnails and waveforms - - - -You can read about more [Flowblade features][24] here. - -#### Pros - - * Lightweight - * Good for general purpose video editing - - - -#### Cons - - * Not available on other platforms - - - -#### Installing Flowblade - -Flowblade should be available in the repositories of all major Linux distributions. You can install it from the software center. More information is available on its [download page][25]. - -Alternatively, you can install Flowblade in Ubuntu and other Ubuntu based systems, using the command below: -``` -sudo apt install flowblade -``` - -### 5\. Lightworks - -![Lightworks running on ubuntu 16.04][1] - -![Lightworks running on ubuntu 16.04][26] - -If you looking for a video editor software that has more feature, this is the answer. [Lightworks][27] is a cross-platform professional video editor, available for Linux, Mac OS X and Windows. - -It is an award-winning professional [non-linear editing][28] (NLE) software that supports resolutions up to 4K as well as video in SD and HD formats. - -Lightworks is available for Linux, however, it is not open source. - -This application has two versions: - - * Lightworks Free - * Lightworks Pro - - - -Pro version has more features such as higher resolution support, 4K and Blue Ray support etc. - -Extensive documentation is available on its [website][29]. 
You can also refer to videos at [Lightworks video tutorials page][30] - -#### Lightworks features - - * Cross-platform - * Simple & intuitive User Interface - * Easy timeline editing & trimming - * Real-time ready to use audio & video FX - * Access amazing royalty-free audio & video content - * Lo-Res Proxy workflows for 4K - * Export video for YouTube/Vimeo, SD/HD, up to 4K - * Drag and drop support - * Wide variety of audio and video effects and filters - - - -#### Pros - - * Professional, feature-rich video editor - - - -#### Cons - - * Limited free version - - - -#### Installing Lightworks - -Lightworks provides DEB packages for Debian and Ubuntu-based Linux distributions and RPM packages for Fedora-based Linux distributions. You can find the packages on its [download page][31]. - -### 6\. Blender - -![Blender running on Ubuntu 16.04][1] - -![Blender running on Ubuntu 16.04][32] - -[Blender][33] is a professional, industry-grade open source, cross-platform video editor. It is popular for 3D works. Blender has been used in several Hollywood movies including Spider Man series. - -Although originally designed for produce 3D modeling, but it can also be used for video editing and input capabilities with a variety of formats. - -#### Blender features - - * Live preview, luma waveform, chroma vectorscope and histogram displays - * Audio mixing, syncing, scrubbing and waveform visualization - * Up to 32 slots for adding video, images, audio, scenes, masks and effects - * Speed control, adjustment layers, transitions, keyframes, filters and more - - - -You can read about more features [here][34]. - -#### Pros - - * Cross-platform - * Professional grade editing - - - -#### Cons - - * Complicated - * Mainly for 3D animation, not focused on regular video editing - - - -#### Installing Blender - -The latest version of Blender can be downloaded from its [download page][35]. - -### 7\. Cinelerra - -![Cinelerra video editor for Linux][1] - -![Cinelerra video editor for Linux][36] - -[Cinelerra][37] has been available since 1998 and has been downloaded over 5 million times. It was the first video editor to provide non-linear editing on 64-bit systems back in 2003. It was a go-to video editor for Linux users at that time but it lost its sheen afterward as some developers abandoned the project. - -Good thing is that its back on track and is being developed actively again. - -There is some [interesting backdrop story][38] about how and why Cinelerra was started if you care to read. - -#### Cinelerra features - - * Non-linear editing - * Support for HD videos - * Built-in frame renderer - * Various video effects - * Unlimited layers - * Split pane editing - - - -#### Pros - - * All-purpose video editor - - - -#### Cons - - * Not suitable for beginners - * No packages available - - - -#### Installing Cinelerra - -You can download the source code from [SourceForge][39]. More information on its [download page][40]. - -### 8\. DaVinci Resolve - -![DaVinci Resolve video editor][1] - -![DaVinci Resolve video editor][41] - -If you want Hollywood level video editing, use the tool the professionals use in Hollywood. [DaVinci Resolve][42] from Blackmagic is what professionals are using for editing movies and tv shows. - -DaVinci Resolve is not your regular video editor. It's a full-fledged editing tool that provides editing, color correction and professional audio post-production in a single application. - -DaVinci Resolve is not open source. Like LightWorks, it too provides a free version for Linux. 
The pro version costs $300. - -#### DaVinci Resolve features - - * High-performance playback engine - * All kind of edit types such as overwrite, insert, ripple overwrite, replace, fit to fill, append at end - * Advanced Trimming - * Audio Overlays - * Multicam Editing allows editing footage from multiple cameras in real-time - * Transition and filter-effects - * Speed effects - * Timeline curve editor - * Non-linear editing for VFX - - - -#### Pros - - * Cross-platform - * Professional grade video editor - - - -#### Cons - - * Not suitable for average editing - * Not open source - * Some features are not available in the free version - - - -#### Installing DaVinci Resolve - -You can download DaVinci Resolve for Linux from [its website][42]. You'll have to register, even for the free version. - -### 9\. VidCutter - -![VidCutter video editor for Linux][1] - -![VidCutter video editor for Linux][43] - -Unlike all the other video editors discussed here, [VidCutter][44] is utterly simple. It doesn't do much except splitting videos and merging. But at times you just need this and VidCutter gives you just that. - -#### VidCutter features - - * Cross-platform app available for Linux, Windows and MacOS - * Supports most of the common video formats such as: AVI, MP4, MPEG 1/2, WMV, MP3, MOV, 3GP, FLV etc - * Simple interface - * Trims and merges the videos, nothing more than that - - - -#### Pros - - * Cross-platform - * Good for simple split and merge - - - -#### Cons - - * Not suitable for regular video editing - * Crashes often - - - -#### Installing VidCutter - -If you are using Ubuntu-based Linux distributions, you can use the official PPA: -``` -sudo add-apt-repository ppa:ozmartian/apps -sudo apt-get update -sudo apt-get install vidcutter -``` - -It is available in AUR so Arch Linux users can also install it easily. For other Linux distributions, you can find the installation files on its [GitHub page][45]. - -### Which is the best video editing software for Linux? - -A number of video editors mentioned here use [FFmpeg][46]. You can use FFmpeg on your own as well. It's a command line only tool so I didn't include it in the main list but it would have been unfair to not mention it at all. - -If you need an editor for simply cutting and joining videos, go with VidCutter. - -If you need something more than that, **OpenShot** or **Kdenlive** is a good choice. These are suitable for beginners and a system with standard specification. - -If you have a high-end computer and need advanced features you can go out with **Lightworks** or **DaVinci Resolve**. If you are looking for more advanced features for 3D works, **Blender** has got your back. - -So that's all I can write about the ** best video editing software for Linux** such as Ubuntu, Linux Mint, Elementary, and other Linux distributions. Share with us which video editor you like the most. 
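One last tip on the FFmpeg point above: even without an editor, a single command is often enough for a quick trim. A minimal example follows (the file names and timestamps are placeholders; because `-c copy` avoids re-encoding, the cut snaps to the nearest keyframes):

```
ffmpeg -i input.mp4 -ss 00:00:10 -to 00:00:30 -c copy clip.mp4
```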
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/best-video-editing-software-linux/ - -作者:[It'S Foss Team][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/itsfoss/ -[1]:data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs= -[2]:https://itsfoss.com/wp-content/uploads/2016/06/best-Video-editors-Linux-800x450.png -[3]:https://itsfoss.com/linux-photo-management-software/ -[4]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ -[5]:https://itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg -[6]:https://kdenlive.org/ -[7]:https://itsfoss.com/tag/open-source/ -[8]:https://www.kde.org/ -[9]:https://kdenlive.org/download/ -[10]:https://itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg -[11]:http://www.openshot.org/ -[12]:http://www.openshot.org/user-guide/ -[13]:http://www.openshot.org/download/ -[14]:https://itsfoss.com/wp-content/uploads/2016/06/shotcut-video-editor-linux-800x503.jpg -[15]:https://www.shotcut.org/ -[16]:https://www.shotcut.org/tutorials/ -[17]:https://www.shotcut.org/features/ -[18]:https://itsfoss.com/use-snap-packages-ubuntu-16-04/ -[19]:https://www.shotcut.org/download/ -[20]:https://itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg -[21]:http://jliljebl.github.io/flowblade/ -[22]:https://jliljebl.github.io/flowblade/webhelp/help.html -[23]:https://jliljebl.github.io/flowblade/webhelp/proxy.html -[24]:https://jliljebl.github.io/flowblade/features.html -[25]:https://jliljebl.github.io/flowblade/download.html -[26]:https://itsfoss.com/wp-content/uploads/2016/06/lightworks-running-on-ubuntu-16.04.jpg -[27]:https://www.lwks.com/ -[28]:https://en.wikipedia.org/wiki/Non-linear_editing_system -[29]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206&tab=4 -[30]:https://www.lwks.com/videotutorials -[31]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206&tab=1 -[32]:https://itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg -[33]:https://www.blender.org/ -[34]:https://www.blender.org/features/ -[35]:https://www.blender.org/download/ -[36]:https://itsfoss.com/wp-content/uploads/2016/06/cinelerra-screenshot.jpeg -[37]:http://cinelerra.org/ -[38]:http://cinelerra.org/our-story -[39]:https://sourceforge.net/projects/heroines/files/cinelerra-6-src.tar.xz/download -[40]:http://cinelerra.org/download -[41]:https://itsfoss.com/wp-content/uploads/2016/06/davinci-resolve-vdeo-editor-800x450.jpg -[42]:https://www.blackmagicdesign.com/products/davinciresolve/ -[43]:https://itsfoss.com/wp-content/uploads/2016/06/vidcutter-screenshot-800x585.jpeg -[44]:https://itsfoss.com/vidcutter-video-editor-linux/ -[45]:https://github.com/ozmartian/vidcutter/releases -[46]:https://www.ffmpeg.org/ diff --git a/translated/tech/20160627 9 Best Free Video Editing Software for Linux In 2017.md b/translated/tech/20160627 9 Best Free Video Editing Software for Linux In 2017.md new file mode 100644 index 0000000000..dfaf53b104 --- /dev/null +++ b/translated/tech/20160627 9 Best Free Video Editing Software for Linux In 2017.md @@ -0,0 +1,520 @@ +2017 年 Linux 上最好的 9 个免费视频编辑软件 +====== +**概要:这里介绍 Linux 上几个最好的视频编辑器,介绍他们的特性、利与弊,以及如何在你的 Linux 发行版上安装它们。** + +![Linux 上最好的视频编辑器][1] + +![Linux 上最好的视频编辑器][2] + +我们曾经在一篇短文中讨论过[ Linux 
上最好的照片管理应用][3],[Linux 上最好的代码编辑器][4]。今天我们将讨论 **Linux 上最好的视频编辑软件**。 + +当谈到免费视频编辑软件,Windows Movie Maker 和 iMovie 是大部分人经常推荐的。 + +很不幸,上述两者在 GNU/Linux 上都不可用。但是不必担心,我们为你汇集了一个**最好的视频编辑器**清单。 + +## Linux 上最好的视频编辑器 + +接下来让我们一起看看这些最好的视频编辑软件。如果你觉得文章读起来太长,这里有一个快速摘要。你可以点击链接跳转到文章的相关章节: + +视频编辑器 主要用途 类型 +Kdenlive 通用视频编辑 免费开源 +OpenShot 通用视频编辑 免费开源 +Shotcut 通用视频编辑 免费开源 +Flowblade 通用视频编辑 免费开源 +Lightworks 专业级视频编辑 免费增值 +Blender 专业级三维编辑 免费开源 +Cinelerra 通用视频编辑 免费开源 +DaVinci 专业级视频处理编辑 免费增值 +VidCutter 简单视频拆分合并 免费开源 + +### 1\. Kdenlive + +![Kdenlive - Ubuntu 上的免费视频编辑器][1] + +![Kdenlive - Ubuntu 上的免费视频编辑器][5] +[Kdenlive][6] 是 [KDE][8] 上的一个免费且[开源][7]的视频编辑软件,支持双视频监控,多轨时间线,剪辑列表,支持自定义布局,基本效果,以及基本过渡。 + +它支持多种文件格式和多种摄像机、相机,包括低分辨率摄像机(Raw 和 AVI DV 编辑),Mpeg2,mpeg4 和 h264 AVCHD(小型相机和便携式摄像机),高分辨率摄像机文件,包括 HDV 和 AVCHD 摄像机,专业摄像机,包括 XDCAM-HD™ 流, IMX™ (D10) 流,DVCAM (D10),DVCAM,DVCPRO™,DVCPRO50™ 流以及 DNxHD™ 流。 + +如果你正寻找 Linux 上 iMovie 的替代品,Kdenlive 会是你最好的选择。 + +#### Kdenlive 特性 + + * 多轨视频编辑 + * 多种音视频格式支持 + * 可配置的界面和快捷方式 + * 使用文本或图像轻松创建切片 + * 丰富的效果和过渡 + * 音频和视频示波器可确保镜头绝对平衡 + * 代理编辑 + * 自动保存 + * 广泛的硬件支持 + * 关键帧效果 + + + +#### 优点 + + * 通用视频编辑器 + * 对于那些熟悉视频编辑的人来说并不太复杂 + + + +#### 缺点 + * 如果你想找的是极致简单的编辑软件,它可能还是令你有些困惑 + * KDE 应用程序以臃肿而臭名昭著 + + + +#### 安装 Kdenlive + +Kdenlive 适用于所有主要的 Linux 发行版。你只需在软件中心搜索即可。[Kdenlive 网站的下载部分][9]提供了各种软件包。 + +命令行爱好者可以通过在 Debian 和基于 Ubuntu 的 Linux 发行版中运行以下命令从终端安装它: +``` +sudo apt install kdenlive +``` + +### 2\. OpenShot + +![Openshot - ubuntu 上的免费视频编辑器][1] + +![Openshot - ubuntu 上的免费视频编辑器][10] + +[OpenShot][11] 是 Linux 上的另一个多用途视频编辑器。OpenShot 可以帮助你创建具有过渡和效果的视频。你还可以调整声音大小。当然,它支持大多数格式和编解码器。 + +你还可以将视频导出至 DVD,上传至 YouTube,Vimeo,Xbox 360 以及许多常见的视频格式。OpenShot 比 Kdenlive 要简单一些。因此,如果你需要界面简单的视频编辑器,OpenShot 是一个不错的选择。 + +它还有个简洁的[开始使用 Openshot][12] 文档。 + +#### OpenShot 特性 + + * 跨平台,可在Linux,macOS 和 Windows 上使用 + * 支持多种视频,音频和图像格式 + * 强大的基于曲线的关键帧动画 + * 桌面集成与拖放支持 + * 不受限制的音视频轨道或图层 + * 可剪辑调整大小,缩放,修剪,捕捉,旋转和剪切 + * 视频转换可实时预览 + * 合成,图像层叠和水印 + * 标题模板,标题创建,子标题 + * 利用图像序列支持2D动画 + * 3D 动画标题和效果 + * 支持保存为 SVG 格式以及矢量标题和可信证 + * 滚动动态图片 + * 帧精度(逐步浏览每一帧视频) + * 剪辑的时间映射和速度更改 + * 音频混合和编辑 + * 数字视频效果,包括亮度,伽玛,色调,灰度,色度键等 + + + +#### 优点 + + * 用于一般视频编辑需求的通用视频编辑器 + * 可在 Windows 和 macOS 以及 Linux 上使用 + + + +#### 缺点 + + * 软件用起来可能很简单,但如果你对视频编辑非常陌生,那么肯定需要一个曲折学习的过程 + * 你可能仍然没有达到专业级电影制作编辑软件的水准 + + + +#### 安装 OpenShot + +OpenShot 也可以在所有主流 Linux 发行版的软件仓库中使用。你只需在软件中心搜索即可。你也可以从[官方页面][13]中获取它。 + +在 Debian 和基于 Ubuntu 的 Linux 发行版中,我最喜欢运行以下命令来安装它: +``` +sudo apt install openshot +``` + +### 3\. Shotcut + +![Shotcut Linux 视频编辑器][1] + +![Shotcut Linux 视频编辑器][14] + +[Shotcut][15] 是 Linux 上的另一个编辑器,可以和 Kdenlive 与 OpenShot 归为同一联盟。虽然它确实与上面讨论的其他两个软件有类似的功能,但 Shotcut 更先进的地方是支持4K视频。 + +支持许多音频,视频格式,过渡和效果是 Shotcut 的众多功能中的一部分。它也支持外部监视器。 + +这里有一系列视频教程让你[轻松上手 Shotcut][16]。它也可在 Windows 和 macOS 上使用,因此你也可以在其他操作系统上学习。 + +#### Shotcut 特性 + + * 跨平台,可在 Linux,macOS 和 Windows 上使用 + * 支持各种视频,音频和图像格式 + * 原生时间线编辑 + * 混合并匹配项目中的分辨率和帧速率 + * 音频滤波,混音和效果 + * 视频转换和过滤 + * 具有缩略图和波形的多轨时间轴 + * 无限制撤消和重做播放列表编辑,包括历史记录视图 + * 剪辑调整大小,缩放,修剪,捕捉,旋转和剪切 + * 使用纹波选项修剪源剪辑播放器或时间轴 + * 在额外系统显示/监视器上的外部监察 + * 硬件支持 + + + +你可以在[这里][17]阅它的更多特性。 + +#### 优点 + + * 用于常见视频编辑需求的通用视频编辑器 + * 支持 4K 视频 + * 可在 Windows,macOS 以及 Linux 上使用 + + + +#### 缺点 + + * 功能太多降低了软件的易用性 + + + +#### 安装 Shotcut + +Shotcut 以 [Snap][18] 格式提供。你可以在 Ubuntu 软件中心找到它。对于其他发行版,你可以从此[下载页面][19]获取可执行文件来安装。 + +### 4\. 
Flowblade + +![Flowblade ubuntu 上的视频编辑器][1] + +![Flowblade ubuntu 上的视频编辑器][20] + +[Flowblade][21] 是 Linux 上的一个多轨非线性视频编辑器。与上面讨论的一样,这也是一个免费开源的软件。它具有时尚和现代化的用户界面。 + +用 Python 编写,它的设计初衷是快速且准确。Flowblade 专注于在 Linux 和其他免费平台上提供最佳体验。所以它没有在 Windows 和 OS X 上运行的版本。Linux 用户专享其实感觉也不错的。 + +你也可以查看这个不错的[文档][22]来帮助你使用它的所有功能。 + +#### Flowblade 特性 + + * 轻量级应用 + * 为简单的任务提供简单的界面,如拆分,合并,覆盖等 + * 大量的音视频效果和过滤器 + * 支持[代理编辑][23] + * 支持拖拽 + * 支持多种视频、音频和图像格式 + * 批量渲染 + * 水印 + * 视频转换和过滤器 + * 具有缩略图和波形的多轨时间轴 + + + +你可以在 [Flowblade 特性][24]里阅读关于它的更多信息。 + +#### 优点 + + * 轻量 + * 适用于通用视频编辑 + + + +#### 缺点 + + * 不支持其他平台 + + + +#### 安装 Flowblade + +Flowblade 应当在所有主流 Linux 发行版的软件仓库中都可以找到。你可以从软件中心安装它。也可以在[下载页面][25]查看更多信息。 + +另外,你可以在 Ubuntu 和基于 Ubuntu 的系统中使用一下命令安装 Flowblade: +``` +sudo apt install flowblade +``` + +### 5\. Lightworks + +![Lightworks 运行在 ubuntu 16.04][1] + +![Lightworks 运行在 ubuntu 16.04][26] + +如果你在寻找一个具有更多特性的视频编辑器,这就是你想要的。[Lightworks][27] 是一个跨平台的专业视频编辑器,可以在 Linux,Mac OS X 以及 Windows上使用。 + +它是一款屡获殊荣的专业[非线性编辑][28](NLE)软件,支持高达 4K 的分辨率以及 SD 和 HD 格式的视频。 + +Lightworks 可以在 Linux 上使用,然而它不开源。 + +Lightwokrs 有两个版本: + + * Lightworks 免费版 + * Lightworks 专业版 + + + +专业版有更多功能,比如支持更高的分辨率,支持 4K 和 蓝光视频等。 + +这个[页面][29]有广泛的可用文档。你也可以参考 [Lightworks 视频向导页][30]的视频。 + +#### Lightworks 特性 + + * 跨平台 + * 简单直观的用户界面 + * 简明的时间线编辑和修剪 + * 实时可用的音频和视频FX + * 可用精彩的免版税音频和视频内容 + * 适用于 4K 的 Lo-Res 代理工作流程 + * 支持导出 YouTube/Vimeo,SD/HD视频,最高可达4K + * 支持拖拽 + * 各种音频和视频效果和滤镜 + + + +#### 优点 + + * 专业,功能丰富的视频编辑器 + + + +#### 缺点 + + * 免费版有使用限制 + + + +#### 安装 Lightworks + +Lightworks 为 Debian 和基于 Ubuntu 的 Linux 提供了 DEB 安装包,为基于 Fedora 的 Linux 发行版提供了RPM 安装包。你可以在[下载页面][31]找到安装包。 + +### 6\. Blender + +![Blender 运行在 Ubuntu 16.04][1] + +![Blender 运行在 Ubuntu 16.04][32] + +[Blender][33] 是一个专业的,工业级的开源跨平台视频编辑器。它在制作 3D 作品的工具当中较为流行。Blender 已被用于制作多部好莱坞电影,包括蜘蛛侠系列。 + +虽然最初设计用于制作 3D 模型,但它也具有多种格式视频的编辑功能。 + +#### Blender 特性 + + * 实时预览,亮度波形,色度矢量显示和直方图显示 + * 音频混合,同步,擦洗和波形可视化 + * 最多32个轨道,用于添加视频,图像,音频,场景,面具和效果 + * 速度控制,调整图层,过渡,关键帧,过滤器等 + + + +你可以在[这里][34]阅读更多相关特性。 + +#### 优点 + + * 跨平台 + * 专业级视频编辑 + + + +#### 缺点 + + * 复杂 + * 主要用于制作 3D 动画,不专门用于常规视频编辑 + + + +#### 安装 Blender + +Blender 的最新版本可以从[下载页面][35]下载。 + +### 7\. Cinelerra + +![Cinelerra Linux 上的视频编辑器][1] + +![Cinelerra Linux 上的视频编辑器][36] + +[Cinelerra][37] 从 1998 年发布以来,已被下载超过500万次。它是 2003 年第一个在 64 位系统上提供非线性编辑的视频编辑器。当时它是Linux用户的首选视频编辑器,但随后一些开发人员丢弃了此项目,它也随之失去了光彩。 + +好消息是它正回到正轨并且良好地再次发展。 + +如果你想了解关于 Cinelerra 项目是如何开始的,这里有些[有趣的背景故事][38]。 + +#### Cinelerra 特性 + + * 非线性编辑 + * 支持 HD 视频 + * 内置框架渲染器 + * 各种视频效果 + * 不受限制的图层数量 + * 拆分窗格编辑 + + + +#### 优点 + * 通用视频编辑器 + + + +#### 缺点 + + * 不适用于新手 + * 没有可用的安装包 + + + +#### 安装 Cinelerra + +你可以从 [SourceForge][39] 下载源码。更多相关信息请看[下载页面][40]。 + +### 8\. DaVinci Resolve + +![DaVinci Resolve 视频编辑器][1] + +![DaVinci Resolve 视频编辑器][41] + +如果你想要好莱坞级别的视频编辑器,那就用好莱坞正在使用的专业工具。来自 Blackmagic 公司的 [DaVinci Resolve][42] 就是专业人士用于编辑电影和电视节目的专业工具。 + +DaVinci Resolve 不是常规的视频编辑器。它是一个成熟的编辑工具,在这一个应用程序中提供编辑,色彩校正和专业音频后期制作功能。 + +DaVinci Resolve 不开源。类似于 LightWorks,它也为 Linux 提供一个免费版本。专业版售价是 $300。 + +#### DaVinci Resolve 特性 + + * 高性能播放引擎 + * 支持所有类型的编辑类型,如覆盖,插入,波纹覆盖,替换,适合填充,末尾追加 + * 高级修剪 + * 音频叠加 + * Multicam Editing 可实现实时编辑来自多个摄像机的镜头 + * 过渡和过滤效果 + * 速度效果 + * 时间轴曲线编辑器 + * VFX 的非线性编辑 + + + +#### 优点 + * 跨平台 + * 专业级视频编辑器 + + + +#### 缺点 + + * 不适用于通用视频编辑 + * 不开源 + * 免费版本中有些功能无法使用 + + + +#### 安装 DaVinci Resolve + +你可以从[这个页面][42]下载 DaVinci Resolve。你需要注册,哪怕仅仅下载免费版。 + +### 9\. 
VidCutter + +![VidCutter Linux 上的视频编辑器][1] + +![VidCutter Linux 上的视频编辑器][43] + +不像这篇文章讨论的其他视频编辑器,[VidCutter][44] 非常简单。除了分割和合并视频之外,它没有其他太多功能。但有时你正好需要 VidCutter 提供的这些功能。 + +#### VidCutter 特性 + + * 适用于Linux,Windows 和 MacOS 的跨平台应用程序 + * 支持绝大多数常见视频格式,例如:AVI,MP4,MPEG 1/2,WMV,MP3,MOV,3GP,FLV 等等 + * 界面简单 + * 修剪和合并视频,仅此而已 + + + +#### 优点 + + * 跨平台 + * 很适合做简单的视频分割和合并 + + + +#### 缺点 + + * 不适合用于通用视频编辑 + * 经常崩溃 + + + +#### 安装 VidCutter + +如果你使用的是基于 Ubuntu 的 Linux 发行版,你可以使用这个官方 PPA(译者注:PPA,个人软件包档案,PersonalPackageArchives): +``` +sudo add-apt-repository ppa:ozmartian/apps +sudo apt-get update +sudo apt-get install vidcutter +``` + +Arch Linux 用户可以轻松的使用 AUR 安装它。对于其他 Linux 发行版的用户,你可以从这个 [Github 页面][45]上查找安装文件。 + +### 哪个是 Linux 上最好的视频编辑软件? + +文章里提到的一些视频编辑器使用 [FFmpeg][46]。你也可以自己使用 FFmpeg。它只是一个命令行工具,所以我没有在上文的列表中提到,但一句也不提又不公平。 + +如果你需要一个视频编辑器来简单的剪切和拼接视频,请使用 VidCutter。 + +如果你需要的不止这些,**OpenShot** 或者 **Kdenlive** 是不错的选择。他们有规格标准的系统,适用于初学者。 + +如果你拥有一台高端计算机并且需要高级功能,可以使用 **Lightworks** 或者 **DaVinci Resolve**。如果你在寻找更高级的工具用于制作 3D 作品,If you are looking for more advanced features for 3D works,**Blender** 就得到了你的支持。 + +这就是关于 **Linux 上最好的视频编辑软件**我所能表达的全部内容,像Ubuntu,Linux Mint,Elementary,以及其他 Linux 发行版。向我们分享你最喜欢的视频编辑器。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-video-editing-software-linux/ + +作者:[It'S Foss Team][a] +译者:[fuowang](https://github.com/fuowang) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/itsfoss/ +[1]:data:image/gif;base64,R0lGODdhAQABAPAAAP///wAAACwAAAAAAQABAEACAkQBADs= +[2]:https://itsfoss.com/wp-content/uploads/2016/06/best-Video-editors-Linux-800x450.png +[3]:https://itsfoss.com/linux-photo-management-software/ +[4]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ +[5]:https://itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg +[6]:https://kdenlive.org/ +[7]:https://itsfoss.com/tag/open-source/ +[8]:https://www.kde.org/ +[9]:https://kdenlive.org/download/ +[10]:https://itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg +[11]:http://www.openshot.org/ +[12]:http://www.openshot.org/user-guide/ +[13]:http://www.openshot.org/download/ +[14]:https://itsfoss.com/wp-content/uploads/2016/06/shotcut-video-editor-linux-800x503.jpg +[15]:https://www.shotcut.org/ +[16]:https://www.shotcut.org/tutorials/ +[17]:https://www.shotcut.org/features/ +[18]:https://itsfoss.com/use-snap-packages-ubuntu-16-04/ +[19]:https://www.shotcut.org/download/ +[20]:https://itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg +[21]:http://jliljebl.github.io/flowblade/ +[22]:https://jliljebl.github.io/flowblade/webhelp/help.html +[23]:https://jliljebl.github.io/flowblade/webhelp/proxy.html +[24]:https://jliljebl.github.io/flowblade/features.html +[25]:https://jliljebl.github.io/flowblade/download.html +[26]:https://itsfoss.com/wp-content/uploads/2016/06/lightworks-running-on-ubuntu-16.04.jpg +[27]:https://www.lwks.com/ +[28]:https://en.wikipedia.org/wiki/Non-linear_editing_system +[29]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206&tab=4 +[30]:https://www.lwks.com/videotutorials +[31]:https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206&tab=1 +[32]:https://itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg +[33]:https://www.blender.org/ +[34]:https://www.blender.org/features/ 
+[35]:https://www.blender.org/download/ +[36]:https://itsfoss.com/wp-content/uploads/2016/06/cinelerra-screenshot.jpeg +[37]:http://cinelerra.org/ +[38]:http://cinelerra.org/our-story +[39]:https://sourceforge.net/projects/heroines/files/cinelerra-6-src.tar.xz/download +[40]:http://cinelerra.org/download +[41]:https://itsfoss.com/wp-content/uploads/2016/06/davinci-resolve-vdeo-editor-800x450.jpg +[42]:https://www.blackmagicdesign.com/products/davinciresolve/ +[43]:https://itsfoss.com/wp-content/uploads/2016/06/vidcutter-screenshot-800x585.jpeg +[44]:https://itsfoss.com/vidcutter-video-editor-linux/ +[45]:https://github.com/ozmartian/vidcutter/releases +[46]:https://www.ffmpeg.org/ From 2b7fa004c554661a2c08e2f4727c6a578cfbbb36 Mon Sep 17 00:00:00 2001 From: qhwdw Date: Wed, 24 Oct 2018 20:44:12 +0800 Subject: [PATCH 03/32] Translated by qhwdw --- .../tech/20181004 Lab 3- User Environments.md | 525 ------------------ .../tech/20181004 Lab 3- User Environments.md | 524 +++++++++++++++++ 2 files changed, 524 insertions(+), 525 deletions(-) delete mode 100644 sources/tech/20181004 Lab 3- User Environments.md create mode 100644 translated/tech/20181004 Lab 3- User Environments.md diff --git a/sources/tech/20181004 Lab 3- User Environments.md b/sources/tech/20181004 Lab 3- User Environments.md deleted file mode 100644 index 4af7c09eba..0000000000 --- a/sources/tech/20181004 Lab 3- User Environments.md +++ /dev/null @@ -1,525 +0,0 @@ -Translating by qhwdw -Lab 3: User Environments -====== -### Lab 3: User Environments - -#### Introduction - -In this lab you will implement the basic kernel facilities required to get a protected user-mode environment (i.e., "process") running. You will enhance the JOS kernel to set up the data structures to keep track of user environments, create a single user environment, load a program image into it, and start it running. You will also make the JOS kernel capable of handling any system calls the user environment makes and handling any other exceptions it causes. - -**Note:** In this lab, the terms _environment_ and _process_ are interchangeable - both refer to an abstraction that allows you to run a program. We introduce the term "environment" instead of the traditional term "process" in order to stress the point that JOS environments and UNIX processes provide different interfaces, and do not provide the same semantics. - -##### Getting Started - -Use Git to commit your changes after your Lab 2 submission (if any), fetch the latest version of the course repository, and then create a local branch called `lab3` based on our lab3 branch, `origin/lab3`: - -``` - athena% cd ~/6.828/lab - athena% add git - athena% git commit -am 'changes to lab2 after handin' - Created commit 734fab7: changes to lab2 after handin - 4 files changed, 42 insertions(+), 9 deletions(-) - athena% git pull - Already up-to-date. - athena% git checkout -b lab3 origin/lab3 - Branch lab3 set up to track remote branch refs/remotes/origin/lab3. - Switched to a new branch "lab3" - athena% git merge lab2 - Merge made by recursive. 
- kern/pmap.c | 42 +++++++++++++++++++ - 1 files changed, 42 insertions(+), 0 deletions(-) - athena% -``` - -Lab 3 contains a number of new source files, which you should browse: - -``` -inc/ env.h Public definitions for user-mode environments - trap.h Public definitions for trap handling - syscall.h Public definitions for system calls from user environments to the kernel - lib.h Public definitions for the user-mode support library -kern/ env.h Kernel-private definitions for user-mode environments - env.c Kernel code implementing user-mode environments - trap.h Kernel-private trap handling definitions - trap.c Trap handling code - trapentry.S Assembly-language trap handler entry-points - syscall.h Kernel-private definitions for system call handling - syscall.c System call implementation code -lib/ Makefrag Makefile fragment to build user-mode library, obj/lib/libjos.a - entry.S Assembly-language entry-point for user environments - libmain.c User-mode library setup code called from entry.S - syscall.c User-mode system call stub functions - console.c User-mode implementations of putchar and getchar, providing console I/O - exit.c User-mode implementation of exit - panic.c User-mode implementation of panic -user/ * Various test programs to check kernel lab 3 code -``` - -In addition, a number of the source files we handed out for lab2 are modified in lab3. To see the differences, you can type: - -``` - $ git diff lab2 - -``` - -You may also want to take another look at the [lab tools guide][1], as it includes information on debugging user code that becomes relevant in this lab. - -##### Lab Requirements - -This lab is divided into two parts, A and B. Part A is due a week after this lab was assigned; you should commit your changes and make handin your lab before the Part A deadline, making sure your code passes all of the Part A tests (it is okay if your code does not pass the Part B tests yet). You only need to have the Part B tests passing by the Part B deadline at the end of the second week. - -As in lab 2, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem (for the entire lab, not for each part). Write up brief answers to the questions posed in the lab and a one or two paragraph description of what you did to solve your chosen challenge problem in a file called `answers-lab3.txt` in the top level of your `lab` directory. (If you implement more than one challenge problem, you only need to describe one of them in the write-up.) Do not forget to include the answer file in your submission with git add answers-lab3.txt. - -##### Inline Assembly - -In this lab you may find GCC's inline assembly language feature useful, although it is also possible to complete the lab without using it. At the very least, you will need to be able to understand the fragments of inline assembly language ("`asm`" statements) that already exist in the source code we gave you. You can find several sources of information on GCC inline assembly language on the class [reference materials][2] page. - -#### Part A: User Environments and Exception Handling - -The new include file `inc/env.h` contains basic definitions for user environments in JOS. Read it now. The kernel uses the `Env` data structure to keep track of each user environment. 
In this lab you will initially create just one environment, but you will need to design the JOS kernel to support multiple environments; lab 4 will take advantage of this feature by allowing a user environment to `fork` other environments. - -As you can see in `kern/env.c`, the kernel maintains three main global variables pertaining to environments: - -``` - struct Env *envs = NULL; // All environments - struct Env *curenv = NULL; // The current env - static struct Env *env_free_list; // Free environment list - -``` - -Once JOS gets up and running, the `envs` pointer points to an array of `Env` structures representing all the environments in the system. In our design, the JOS kernel will support a maximum of `NENV` simultaneously active environments, although there will typically be far fewer running environments at any given time. (`NENV` is a constant `#define`'d in `inc/env.h`.) Once it is allocated, the `envs` array will contain a single instance of the `Env` data structure for each of the `NENV` possible environments. - -The JOS kernel keeps all of the inactive `Env` structures on the `env_free_list`. This design allows easy allocation and deallocation of environments, as they merely have to be added to or removed from the free list. - -The kernel uses the `curenv` symbol to keep track of the _currently executing_ environment at any given time. During boot up, before the first environment is run, `curenv` is initially set to `NULL`. - -##### Environment State - -The `Env` structure is defined in `inc/env.h` as follows (although more fields will be added in future labs): - -``` - struct Env { - struct Trapframe env_tf; // Saved registers - struct Env *env_link; // Next free Env - envid_t env_id; // Unique environment identifier - envid_t env_parent_id; // env_id of this env's parent - enum EnvType env_type; // Indicates special system environments - unsigned env_status; // Status of the environment - uint32_t env_runs; // Number of times environment has run - - // Address space - pde_t *env_pgdir; // Kernel virtual address of page dir - }; -``` - -Here's what the `Env` fields are for: - - * **env_tf** : -This structure, defined in `inc/trap.h`, holds the saved register values for the environment while that environment is _not_ running: i.e., when the kernel or a different environment is running. The kernel saves these when switching from user to kernel mode, so that the environment can later be resumed where it left off. - * **env_link** : -This is a link to the next `Env` on the `env_free_list`. `env_free_list` points to the first free environment on the list. - * **env_id** : -The kernel stores here a value that uniquely identifiers the environment currently using this `Env` structure (i.e., using this particular slot in the `envs` array). After a user environment terminates, the kernel may re-allocate the same `Env` structure to a different environment - but the new environment will have a different `env_id` from the old one even though the new environment is re-using the same slot in the `envs` array. - * **env_parent_id** : -The kernel stores here the `env_id` of the environment that created this environment. In this way the environments can form a “family tree,” which will be useful for making security decisions about which environments are allowed to do what to whom. - * **env_type** : -This is used to distinguish special environments. For most environments, it will be `ENV_TYPE_USER`. We'll introduce a few more types for special system service environments in later labs. 
- * **env_status** : -This variable holds one of the following values: - * `ENV_FREE`: -Indicates that the `Env` structure is inactive, and therefore on the `env_free_list`. - * `ENV_RUNNABLE`: -Indicates that the `Env` structure represents an environment that is waiting to run on the processor. - * `ENV_RUNNING`: -Indicates that the `Env` structure represents the currently running environment. - * `ENV_NOT_RUNNABLE`: -Indicates that the `Env` structure represents a currently active environment, but it is not currently ready to run: for example, because it is waiting for an interprocess communication (IPC) from another environment. - * `ENV_DYING`: -Indicates that the `Env` structure represents a zombie environment. A zombie environment will be freed the next time it traps to the kernel. We will not use this flag until Lab 4. - * **env_pgdir** : -This variable holds the kernel _virtual address_ of this environment's page directory. - - - -Like a Unix process, a JOS environment couples the concepts of "thread" and "address space". The thread is defined primarily by the saved registers (the `env_tf` field), and the address space is defined by the page directory and page tables pointed to by `env_pgdir`. To run an environment, the kernel must set up the CPU with _both_ the saved registers and the appropriate address space. - -Our `struct Env` is analogous to `struct proc` in xv6. Both structures hold the environment's (i.e., process's) user-mode register state in a `Trapframe` structure. In JOS, individual environments do not have their own kernel stacks as processes do in xv6. There can be only one JOS environment active in the kernel at a time, so JOS needs only a _single_ kernel stack. - -##### Allocating the Environments Array - -In lab 2, you allocated memory in `mem_init()` for the `pages[]` array, which is a table the kernel uses to keep track of which pages are free and which are not. You will now need to modify `mem_init()` further to allocate a similar array of `Env` structures, called `envs`. - -``` -Exercise 1. Modify `mem_init()` in `kern/pmap.c` to allocate and map the `envs` array. This array consists of exactly `NENV` instances of the `Env` structure allocated much like how you allocated the `pages` array. Also like the `pages` array, the memory backing `envs` should also be mapped user read-only at `UENVS` (defined in `inc/memlayout.h`) so user processes can read from this array. -``` - -You should run your code and make sure `check_kern_pgdir()` succeeds. - -##### Creating and Running Environments - -You will now write the code in `kern/env.c` necessary to run a user environment. Because we do not yet have a filesystem, we will set up the kernel to load a static binary image that is _embedded within the kernel itself_. JOS embeds this binary in the kernel as a ELF executable image. - -The Lab 3 `GNUmakefile` generates a number of binary images in the `obj/user/` directory. If you look at `kern/Makefrag`, you will notice some magic that "links" these binaries directly into the kernel executable as if they were `.o` files. The `-b binary` option on the linker command line causes these files to be linked in as "raw" uninterpreted binary files rather than as regular `.o` files produced by the compiler. (As far as the linker is concerned, these files do not have to be ELF images at all - they could be anything, such as text files or pictures!) 
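The only handles C code gets on such a raw-linked blob are symbols that the linker synthesizes from the object's file name; the next sentences explain exactly which symbols those are and where to see them. As a rough, hypothetical sketch of the pattern (assuming the usual `-b binary` symbol behavior; this is not the lab's required interface), kernel code could locate an embedded image like this:

```
#include <stdint.h>
#include <stddef.h>

// Linker-synthesized symbols for the embedded obj/user/hello image.
// No C file defines these; "ld -b binary" creates them.
// (_binary_obj_user_hello_size also exists as an absolute symbol whose
// "address" equals the image size; end - start gives the same number.)
extern uint8_t _binary_obj_user_hello_start[];
extern uint8_t _binary_obj_user_hello_end[];

static void
launch_embedded_hello(void)
{
	uint8_t *image = _binary_obj_user_hello_start;
	size_t size = _binary_obj_user_hello_end - _binary_obj_user_hello_start;

	// A real kernel would hand image/size to its ELF loader at this point;
	// in JOS that job belongs to load_icode(), which you fill in below.
	(void) image;
	(void) size;
}
```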
If you look at `obj/kern/kernel.sym` after building the kernel, you will notice that the linker has "magically" produced a number of funny symbols with obscure names like `_binary_obj_user_hello_start`, `_binary_obj_user_hello_end`, and `_binary_obj_user_hello_size`. The linker generates these symbol names by mangling the file names of the binary files; the symbols provide the regular kernel code with a way to reference the embedded binary files. - -In `i386_init()` in `kern/init.c` you'll see code to run one of these binary images in an environment. However, the critical functions to set up user environments are not complete; you will need to fill them in. - -``` -Exercise 2. In the file `env.c`, finish coding the following functions: - - * `env_init()` -Initialize all of the `Env` structures in the `envs` array and add them to the `env_free_list`. Also calls `env_init_percpu`, which configures the segmentation hardware with separate segments for privilege level 0 (kernel) and privilege level 3 (user). - * `env_setup_vm()` -Allocate a page directory for a new environment and initialize the kernel portion of the new environment's address space. - * `region_alloc()` -Allocates and maps physical memory for an environment - * `load_icode()` -You will need to parse an ELF binary image, much like the boot loader already does, and load its contents into the user address space of a new environment. - * `env_create()` -Allocate an environment with `env_alloc` and call `load_icode` to load an ELF binary into it. - * `env_run()` -Start a given environment running in user mode. - - - -As you write these functions, you might find the new cprintf verb `%e` useful -- it prints a description corresponding to an error code. For example, - - r = -E_NO_MEM; - panic("env_alloc: %e", r); - -will panic with the message "env_alloc: out of memory". -``` - -Below is a call graph of the code up to the point where the user code is invoked. Make sure you understand the purpose of each step. - - * `start` (`kern/entry.S`) - * `i386_init` (`kern/init.c`) - * `cons_init` - * `mem_init` - * `env_init` - * `trap_init` (still incomplete at this point) - * `env_create` - * `env_run` - * `env_pop_tf` - - - -Once you are done you should compile your kernel and run it under QEMU. If all goes well, your system should enter user space and execute the `hello` binary until it makes a system call with the `int` instruction. At that point there will be trouble, since JOS has not set up the hardware to allow any kind of transition from user space into the kernel. When the CPU discovers that it is not set up to handle this system call interrupt, it will generate a general protection exception, find that it can't handle that, generate a double fault exception, find that it can't handle that either, and finally give up with what's known as a "triple fault". Usually, you would then see the CPU reset and the system reboot. While this is important for legacy applications (see [this blog post][3] for an explanation of why), it's a pain for kernel development, so with the 6.828 patched QEMU you'll instead see a register dump and a "Triple fault." message. - -We'll address this problem shortly, but for now we can use the debugger to check that we're entering user mode. Use make qemu-gdb and set a GDB breakpoint at `env_pop_tf`, which should be the last function you hit before actually entering user mode. Single step through this function using si; the processor should enter user mode after the `iret` instruction. 
You should then see the first instruction in the user environment's executable, which is the `cmpl` instruction at the label `start` in `lib/entry.S`. Now use b *0x... to set a breakpoint at the `int $0x30` in `sys_cputs()` in `hello` (see `obj/user/hello.asm` for the user-space address). This `int` is the system call to display a character to the console. If you cannot execute as far as the `int`, then something is wrong with your address space setup or program loading code; go back and fix it before continuing. - -##### Handling Interrupts and Exceptions - -At this point, the first `int $0x30` system call instruction in user space is a dead end: once the processor gets into user mode, there is no way to get back out. You will now need to implement basic exception and system call handling, so that it is possible for the kernel to recover control of the processor from user-mode code. The first thing you should do is thoroughly familiarize yourself with the x86 interrupt and exception mechanism. - -``` -Exercise 3. Read Chapter 9, Exceptions and Interrupts in the 80386 Programmer's Manual (or Chapter 5 of the IA-32 Developer's Manual), if you haven't already. -``` - -In this lab we generally follow Intel's terminology for interrupts, exceptions, and the like. However, terms such as exception, trap, interrupt, fault and abort have no standard meaning across architectures or operating systems, and are often used without regard to the subtle distinctions between them on a particular architecture such as the x86. When you see these terms outside of this lab, the meanings might be slightly different. - -##### Basics of Protected Control Transfer - -Exceptions and interrupts are both "protected control transfers," which cause the processor to switch from user to kernel mode (CPL=0) without giving the user-mode code any opportunity to interfere with the functioning of the kernel or other environments. In Intel's terminology, an _interrupt_ is a protected control transfer that is caused by an asynchronous event usually external to the processor, such as notification of external device I/O activity. An _exception_ , in contrast, is a protected control transfer caused synchronously by the currently running code, for example due to a divide by zero or an invalid memory access. - -In order to ensure that these protected control transfers are actually _protected_ , the processor's interrupt/exception mechanism is designed so that the code currently running when the interrupt or exception occurs _does not get to choose arbitrarily where the kernel is entered or how_. Instead, the processor ensures that the kernel can be entered only under carefully controlled conditions. On the x86, two mechanisms work together to provide this protection: - - 1. **The Interrupt Descriptor Table.** The processor ensures that interrupts and exceptions can only cause the kernel to be entered at a few specific, well-defined entry-points _determined by the kernel itself_ , and not by the code running when the interrupt or exception is taken. - -The x86 allows up to 256 different interrupt or exception entry points into the kernel, each with a different _interrupt vector_. A vector is a number between 0 and 255. An interrupt's vector is determined by the source of the interrupt: different devices, error conditions, and application requests to the kernel generate interrupts with different vectors. 
The CPU uses the vector as an index into the processor's _interrupt descriptor table_ (IDT), which the kernel sets up in kernel-private memory, much like the GDT. From the appropriate entry in this table the processor loads: - - * the value to load into the instruction pointer (`EIP`) register, pointing to the kernel code designated to handle that type of exception. - * the value to load into the code segment (`CS`) register, which includes in bits 0-1 the privilege level at which the exception handler is to run. (In JOS, all exceptions are handled in kernel mode, privilege level 0.) - 2. **The Task State Segment.** The processor needs a place to save the _old_ processor state before the interrupt or exception occurred, such as the original values of `EIP` and `CS` before the processor invoked the exception handler, so that the exception handler can later restore that old state and resume the interrupted code from where it left off. But this save area for the old processor state must in turn be protected from unprivileged user-mode code; otherwise buggy or malicious user code could compromise the kernel. - -For this reason, when an x86 processor takes an interrupt or trap that causes a privilege level change from user to kernel mode, it also switches to a stack in the kernel's memory. A structure called the _task state segment_ (TSS) specifies the segment selector and address where this stack lives. The processor pushes (on this new stack) `SS`, `ESP`, `EFLAGS`, `CS`, `EIP`, and an optional error code. Then it loads the `CS` and `EIP` from the interrupt descriptor, and sets the `ESP` and `SS` to refer to the new stack. - -Although the TSS is large and can potentially serve a variety of purposes, JOS only uses it to define the kernel stack that the processor should switch to when it transfers from user to kernel mode. Since "kernel mode" in JOS is privilege level 0 on the x86, the processor uses the `ESP0` and `SS0` fields of the TSS to define the kernel stack when entering kernel mode. JOS doesn't use any other TSS fields. - - - - -##### Types of Exceptions and Interrupts - -All of the synchronous exceptions that the x86 processor can generate internally use interrupt vectors between 0 and 31, and therefore map to IDT entries 0-31. For example, a page fault always causes an exception through vector 14. Interrupt vectors greater than 31 are only used by _software interrupts_ , which can be generated by the `int` instruction, or asynchronous _hardware interrupts_ , caused by external devices when they need attention. - -In this section we will extend JOS to handle the internally generated x86 exceptions in vectors 0-31. In the next section we will make JOS handle software interrupt vector 48 (0x30), which JOS (fairly arbitrarily) uses as its system call interrupt vector. In Lab 4 we will extend JOS to handle externally generated hardware interrupts such as the clock interrupt. - -##### An Example - -Let's put these pieces together and trace through an example. Let's say the processor is executing code in a user environment and encounters a divide instruction that attempts to divide by zero. - - 1. The processor switches to the stack defined by the `SS0` and `ESP0` fields of the TSS, which in JOS will hold the values `GD_KD` and `KSTACKTOP`, respectively. - - 2. 
The processor pushes the exception parameters on the kernel stack, starting at address `KSTACKTOP`: - -``` - +--------------------+ KSTACKTOP - | 0x00000 | old SS | " - 4 - | old ESP | " - 8 - | old EFLAGS | " - 12 - | 0x00000 | old CS | " - 16 - | old EIP | " - 20 <---- ESP - +--------------------+ - -``` - - 3. Because we're handling a divide error, which is interrupt vector 0 on the x86, the processor reads IDT entry 0 and sets `CS:EIP` to point to the handler function described by the entry. - - 4. The handler function takes control and handles the exception, for example by terminating the user environment. - - - - -For certain types of x86 exceptions, in addition to the "standard" five words above, the processor pushes onto the stack another word containing an _error code_. The page fault exception, number 14, is an important example. See the 80386 manual to determine for which exception numbers the processor pushes an error code, and what the error code means in that case. When the processor pushes an error code, the stack would look as follows at the beginning of the exception handler when coming in from user mode: - -``` - +--------------------+ KSTACKTOP - | 0x00000 | old SS | " - 4 - | old ESP | " - 8 - | old EFLAGS | " - 12 - | 0x00000 | old CS | " - 16 - | old EIP | " - 20 - | error code | " - 24 <---- ESP - +--------------------+ -``` - -##### Nested Exceptions and Interrupts - -The processor can take exceptions and interrupts both from kernel and user mode. It is only when entering the kernel from user mode, however, that the x86 processor automatically switches stacks before pushing its old register state onto the stack and invoking the appropriate exception handler through the IDT. If the processor is _already_ in kernel mode when the interrupt or exception occurs (the low 2 bits of the `CS` register are already zero), then the CPU just pushes more values on the same kernel stack. In this way, the kernel can gracefully handle _nested exceptions_ caused by code within the kernel itself. This capability is an important tool in implementing protection, as we will see later in the section on system calls. - -If the processor is already in kernel mode and takes a nested exception, since it does not need to switch stacks, it does not save the old `SS` or `ESP` registers. For exception types that do not push an error code, the kernel stack therefore looks like the following on entry to the exception handler: - -``` - +--------------------+ <---- old ESP - | old EFLAGS | " - 4 - | 0x00000 | old CS | " - 8 - | old EIP | " - 12 - +--------------------+ -``` - -For exception types that push an error code, the processor pushes the error code immediately after the old `EIP`, as before. - -There is one important caveat to the processor's nested exception capability. If the processor takes an exception while already in kernel mode, and _cannot push its old state onto the kernel stack_ for any reason such as lack of stack space, then there is nothing the processor can do to recover, so it simply resets itself. Needless to say, the kernel should be designed so that this can't happen. - -##### Setting Up the IDT - -You should now have the basic information you need in order to set up the IDT and handle exceptions in JOS. For now, you will set up the IDT to handle interrupt vectors 0-31 (the processor exceptions). We'll handle system call interrupts later in this lab and add interrupts 32-47 (the device IRQs) in a later lab. 
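Before the definitions and exact requirements below (Exercise 4 spells them out), here is a minimal, hypothetical sketch of what registering two of those vectors might look like. It assumes entry points named `th_divide` and `th_pgflt` generated in `trapentry.S`, that `GD_KT` names the kernel code segment selector, and that the `SETGATE` macro takes its arguments in the order shown; treat it as an illustration rather than the required solution:

```
void th_divide(void);   // vector 0 (T_DIVIDE): the CPU pushes no error code
void th_pgflt(void);    // vector 14 (T_PGFLT): the CPU pushes an error code

void
trap_init(void)
{
	// SETGATE(gate, istrap, sel, off, dpl) -- assumed argument order.
	// DPL 0 means user code cannot raise these vectors directly with `int`.
	SETGATE(idt[T_DIVIDE], 0, GD_KT, th_divide, 0);
	SETGATE(idt[T_PGFLT],  0, GD_KT, th_pgflt,  0);

	// Per-CPU setup (loading the IDT, the TSS) is assumed to be handled
	// by the skeleton's trap_init_percpu().
	trap_init_percpu();
}
```

The entry points themselves, and the common `_alltraps` path they funnel into, are what the exercise below asks you to build.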
- -The header files `inc/trap.h` and `kern/trap.h` contain important definitions related to interrupts and exceptions that you will need to become familiar with. The file `kern/trap.h` contains definitions that are strictly private to the kernel, while `inc/trap.h` contains definitions that may also be useful to user-level programs and libraries. - -Note: Some of the exceptions in the range 0-31 are defined by Intel to be reserved. Since they will never be generated by the processor, it doesn't really matter how you handle them. Do whatever you think is cleanest. - -The overall flow of control that you should achieve is depicted below: - -``` - IDT trapentry.S trap.c - -+----------------+ -| &handler1 |---------> handler1: trap (struct Trapframe *tf) -| | // do stuff { -| | call trap // handle the exception/interrupt -| | // ... } -+----------------+ -| &handler2 |--------> handler2: -| | // do stuff -| | call trap -| | // ... -+----------------+ - . - . - . -+----------------+ -| &handlerX |--------> handlerX: -| | // do stuff -| | call trap -| | // ... -+----------------+ -``` - -Each exception or interrupt should have its own handler in `trapentry.S` and `trap_init()` should initialize the IDT with the addresses of these handlers. Each of the handlers should build a `struct Trapframe` (see `inc/trap.h`) on the stack and call `trap()` (in `trap.c`) with a pointer to the Trapframe. `trap()` then handles the exception/interrupt or dispatches to a specific handler function. - -``` -Exercise 4. Edit `trapentry.S` and `trap.c` and implement the features described above. The macros `TRAPHANDLER` and `TRAPHANDLER_NOEC` in `trapentry.S` should help you, as well as the T_* defines in `inc/trap.h`. You will need to add an entry point in `trapentry.S` (using those macros) for each trap defined in `inc/trap.h`, and you'll have to provide `_alltraps` which the `TRAPHANDLER` macros refer to. You will also need to modify `trap_init()` to initialize the `idt` to point to each of these entry points defined in `trapentry.S`; the `SETGATE` macro will be helpful here. - -Your `_alltraps` should: - - 1. push values to make the stack look like a struct Trapframe - 2. load `GD_KD` into `%ds` and `%es` - 3. `pushl %esp` to pass a pointer to the Trapframe as an argument to trap() - 4. `call trap` (can `trap` ever return?) - - - -Consider using the `pushal` instruction; it fits nicely with the layout of the `struct Trapframe`. - -Test your trap handling code using some of the test programs in the `user` directory that cause exceptions before making any system calls, such as `user/divzero`. You should be able to get make grade to succeed on the `divzero`, `softint`, and `badsegment` tests at this point. -``` - -``` -Challenge! You probably have a lot of very similar code right now, between the lists of `TRAPHANDLER` in `trapentry.S` and their installations in `trap.c`. Clean this up. Change the macros in `trapentry.S` to automatically generate a table for `trap.c` to use. Note that you can switch between laying down code and data in the assembler by using the directives `.text` and `.data`. -``` - -``` -Questions - -Answer the following questions in your `answers-lab3.txt`: - - 1. What is the purpose of having an individual handler function for each exception/interrupt? (i.e., if all exceptions/interrupts were delivered to the same handler, what feature that exists in the current implementation could not be provided?) - 2. Did you have to do anything to make the `user/softint` program behave correctly? 
The grade script expects it to produce a general protection fault (trap 13), but `softint`'s code says `int $14`. _Why_ should this produce interrupt vector 13? What happens if the kernel actually allows `softint`'s `int $14` instruction to invoke the kernel's page fault handler (which is interrupt vector 14)?
```


This concludes part A of the lab. Don't forget to add `answers-lab3.txt`, commit your changes, and run make handin before the part A deadline.

#### Part B: Page Faults, Breakpoint Exceptions, and System Calls

Now that your kernel has basic exception handling capabilities, you will refine it to provide important operating system primitives that depend on exception handling.

##### Handling Page Faults

The page fault exception, interrupt vector 14 (`T_PGFLT`), is a particularly important one that we will exercise heavily throughout this lab and the next. When the processor takes a page fault, it stores the linear (i.e., virtual) address that caused the fault in a special processor control register, `CR2`. In `trap.c` we have provided the beginnings of a special function, `page_fault_handler()`, to handle page fault exceptions.

```
Exercise 5. Modify `trap_dispatch()` to dispatch page fault exceptions to `page_fault_handler()`. You should now be able to get make grade to succeed on the `faultread`, `faultreadkernel`, `faultwrite`, and `faultwritekernel` tests. If any of them don't work, figure out why and fix them. Remember that you can boot JOS into a particular user program using make run- _x_ or make run- _x_ -nox. For instance, make run-hello-nox runs the _hello_ user program.
```

You will further refine the kernel's page fault handling below, as you implement system calls.

##### The Breakpoint Exception

The breakpoint exception, interrupt vector 3 (`T_BRKPT`), is normally used to allow debuggers to insert breakpoints in a program's code by temporarily replacing the relevant program instruction with the special 1-byte `int3` software interrupt instruction. In JOS we will abuse this exception slightly by turning it into a primitive pseudo-system call that any user environment can use to invoke the JOS kernel monitor. This usage is actually somewhat appropriate if we think of the JOS kernel monitor as a primitive debugger. The user-mode implementation of `panic()` in `lib/panic.c`, for example, performs an `int3` after displaying its panic message.

```
Exercise 6. Modify `trap_dispatch()` to make breakpoint exceptions invoke the kernel monitor. You should now be able to get make grade to succeed on the `breakpoint` test.
```

```
Challenge! Modify the JOS kernel monitor so that you can 'continue' execution from the current location (e.g., after the `int3`, if the kernel monitor was invoked via the breakpoint exception), and so that you can single-step one instruction at a time. You will need to understand certain bits of the `EFLAGS` register in order to implement single-stepping.

Optional: If you're feeling really adventurous, find some x86 disassembler source code - e.g., by ripping it out of QEMU, or out of GNU binutils, or just write it yourself - and extend the JOS kernel monitor to be able to disassemble and display instructions as you are stepping through them. Combined with the symbol table loading from lab 1, this is the stuff of which real kernel debuggers are made.
```

```
Questions

 3. 
The break point test case will either generate a break point exception or a general protection fault depending on how you initialized the break point entry in the IDT (i.e., your call to `SETGATE` from `trap_init`). Why? How do you need to set it up in order to get the breakpoint exception to work as specified above and what incorrect setup would cause it to trigger a general protection fault? - 4. What do you think is the point of these mechanisms, particularly in light of what the `user/softint` test program does? -``` - - -##### System calls - -User processes ask the kernel to do things for them by invoking system calls. When the user process invokes a system call, the processor enters kernel mode, the processor and the kernel cooperate to save the user process's state, the kernel executes appropriate code in order to carry out the system call, and then resumes the user process. The exact details of how the user process gets the kernel's attention and how it specifies which call it wants to execute vary from system to system. - -In the JOS kernel, we will use the `int` instruction, which causes a processor interrupt. In particular, we will use `int $0x30` as the system call interrupt. We have defined the constant `T_SYSCALL` to 48 (0x30) for you. You will have to set up the interrupt descriptor to allow user processes to cause that interrupt. Note that interrupt 0x30 cannot be generated by hardware, so there is no ambiguity caused by allowing user code to generate it. - -The application will pass the system call number and the system call arguments in registers. This way, the kernel won't need to grub around in the user environment's stack or instruction stream. The system call number will go in `%eax`, and the arguments (up to five of them) will go in `%edx`, `%ecx`, `%ebx`, `%edi`, and `%esi`, respectively. The kernel passes the return value back in `%eax`. The assembly code to invoke a system call has been written for you, in `syscall()` in `lib/syscall.c`. You should read through it and make sure you understand what is going on. - -``` -Exercise 7. Add a handler in the kernel for interrupt vector `T_SYSCALL`. You will have to edit `kern/trapentry.S` and `kern/trap.c`'s `trap_init()`. You also need to change `trap_dispatch()` to handle the system call interrupt by calling `syscall()` (defined in `kern/syscall.c`) with the appropriate arguments, and then arranging for the return value to be passed back to the user process in `%eax`. Finally, you need to implement `syscall()` in `kern/syscall.c`. Make sure `syscall()` returns `-E_INVAL` if the system call number is invalid. You should read and understand `lib/syscall.c` (especially the inline assembly routine) in order to confirm your understanding of the system call interface. Handle all the system calls listed in `inc/syscall.h` by invoking the corresponding kernel function for each call. - -Run the `user/hello` program under your kernel (make run-hello). It should print "`hello, world`" on the console and then cause a page fault in user mode. If this does not happen, it probably means your system call handler isn't quite right. You should also now be able to get make grade to succeed on the `testbss` test. -``` - -``` -Challenge! Implement system calls using the `sysenter` and `sysexit` instructions instead of using `int 0x30` and `iret`. - -The `sysenter/sysexit` instructions were designed by Intel to be faster than `int/iret`. 
They do this by using registers instead of the stack and by making assumptions about how the segmentation registers are used. The exact details of these instructions can be found in Volume 2B of the Intel reference manuals. - -The easiest way to add support for these instructions in JOS is to add a `sysenter_handler` in `kern/trapentry.S` that saves enough information about the user environment to return to it, sets up the kernel environment, pushes the arguments to `syscall()` and calls `syscall()` directly. Once `syscall()` returns, set everything up for and execute the `sysexit` instruction. You will also need to add code to `kern/init.c` to set up the necessary model specific registers (MSRs). Section 6.1.2 in Volume 2 of the AMD Architecture Programmer's Manual and the reference on SYSENTER in Volume 2B of the Intel reference manuals give good descriptions of the relevant MSRs. You can find an implementation of `wrmsr` to add to `inc/x86.h` for writing to these MSRs [here][4]. - -Finally, `lib/syscall.c` must be changed to support making a system call with `sysenter`. Here is a possible register layout for the `sysenter` instruction: - - eax - syscall number - edx, ecx, ebx, edi - arg1, arg2, arg3, arg4 - esi - return pc - ebp - return esp - esp - trashed by sysenter - -GCC's inline assembler will automatically save registers that you tell it to load values directly into. Don't forget to either save (push) and restore (pop) other registers that you clobber, or tell the inline assembler that you're clobbering them. The inline assembler doesn't support saving `%ebp`, so you will need to add code to save and restore it yourself. The return address can be put into `%esi` by using an instruction like `leal after_sysenter_label, %%esi`. - -Note that this only supports 4 arguments, so you will need to leave the old method of doing system calls around to support 5 argument system calls. Furthermore, because this fast path doesn't update the current environment's trap frame, it won't be suitable for some of the system calls we add in later labs. - -You may have to revisit your code once we enable asynchronous interrupts in the next lab. Specifically, you'll need to enable interrupts when returning to the user process, which `sysexit` doesn't do for you. -``` - -##### User-mode startup - -A user program starts running at the top of `lib/entry.S`. After some setup, this code calls `libmain()`, in `lib/libmain.c`. You should modify `libmain()` to initialize the global pointer `thisenv` to point at this environment's `struct Env` in the `envs[]` array. (Note that `lib/entry.S` has already defined `envs` to point at the `UENVS` mapping you set up in Part A.) Hint: look in `inc/env.h` and use `sys_getenvid`. - -`libmain()` then calls `umain`, which, in the case of the hello program, is in `user/hello.c`. Note that after printing "`hello, world`", it tries to access `thisenv->env_id`. This is why it faulted earlier. Now that you've initialized `thisenv` properly, it should not fault. If it still faults, you probably haven't mapped the `UENVS` area user-readable (back in Part A in `pmap.c`; this is the first time we've actually used the `UENVS` area). - -``` -Exercise 8. Add the required code to the user library, then boot your kernel. You should see `user/hello` print "`hello, world`" and then print "`i am environment 00001000`". `user/hello` then attempts to "exit" by calling `sys_env_destroy()` (see `lib/libmain.c` and `lib/exit.c`). 
Since the kernel currently only supports one user environment, it should report that it has destroyed the only environment and then drop into the kernel monitor. You should be able to get make grade to succeed on the `hello` test. -``` - -##### Page faults and memory protection - -Memory protection is a crucial feature of an operating system, ensuring that bugs in one program cannot corrupt other programs or corrupt the operating system itself. - -Operating systems usually rely on hardware support to implement memory protection. The OS keeps the hardware informed about which virtual addresses are valid and which are not. When a program tries to access an invalid address or one for which it has no permissions, the processor stops the program at the instruction causing the fault and then traps into the kernel with information about the attempted operation. If the fault is fixable, the kernel can fix it and let the program continue running. If the fault is not fixable, then the program cannot continue, since it will never get past the instruction causing the fault. - -As an example of a fixable fault, consider an automatically extended stack. In many systems the kernel initially allocates a single stack page, and then if a program faults accessing pages further down the stack, the kernel will allocate those pages automatically and let the program continue. By doing this, the kernel only allocates as much stack memory as the program needs, but the program can work under the illusion that it has an arbitrarily large stack. - -System calls present an interesting problem for memory protection. Most system call interfaces let user programs pass pointers to the kernel. These pointers point at user buffers to be read or written. The kernel then dereferences these pointers while carrying out the system call. There are two problems with this: - - 1. A page fault in the kernel is potentially a lot more serious than a page fault in a user program. If the kernel page-faults while manipulating its own data structures, that's a kernel bug, and the fault handler should panic the kernel (and hence the whole system). But when the kernel is dereferencing pointers given to it by the user program, it needs a way to remember that any page faults these dereferences cause are actually on behalf of the user program. - 2. The kernel typically has more memory permissions than the user program. The user program might pass a pointer to a system call that points to memory that the kernel can read or write but that the program cannot. The kernel must be careful not to be tricked into dereferencing such a pointer, since that might reveal private information or destroy the integrity of the kernel. - - - -For both of these reasons the kernel must be extremely careful when handling pointers presented by user programs. - -You will now solve these two problems with a single mechanism that scrutinizes all pointers passed from userspace into the kernel. When a program passes the kernel a pointer, the kernel will check that the address is in the user part of the address space, and that the page table would allow the memory operation. - -Thus, the kernel will never suffer a page fault due to dereferencing a user-supplied pointer. If the kernel does page fault, it should panic and terminate. - -``` -Exercise 9. Change `kern/trap.c` to panic if a page fault happens in kernel mode. - -Hint: to determine whether a fault happened in user mode or in kernel mode, check the low bits of the `tf_cs`. 
- -Read `user_mem_assert` in `kern/pmap.c` and implement `user_mem_check` in that same file. - -Change `kern/syscall.c` to sanity check arguments to system calls. - -Boot your kernel, running `user/buggyhello`. The environment should be destroyed, and the kernel should _not_ panic. You should see: - - [00001000] user_mem_check assertion failure for va 00000001 - [00001000] free env 00001000 - Destroyed the only environment - nothing more to do! -Finally, change `debuginfo_eip` in `kern/kdebug.c` to call `user_mem_check` on `usd`, `stabs`, and `stabstr`. If you now run `user/breakpoint`, you should be able to run backtrace from the kernel monitor and see the backtrace traverse into `lib/libmain.c` before the kernel panics with a page fault. What causes this page fault? You don't need to fix it, but you should understand why it happens. -``` - -Note that the same mechanism you just implemented also works for malicious user applications (such as `user/evilhello`). - -``` -Exercise 10. Boot your kernel, running `user/evilhello`. The environment should be destroyed, and the kernel should not panic. You should see: - - [00000000] new env 00001000 - ... - [00001000] user_mem_check assertion failure for va f010000c - [00001000] free env 00001000 -``` - -**This completes the lab.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab3.txt`. Commit your changes and type make handin in the `lab` directory to submit your work. - -Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab3.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 3', then make handin and follow the directions. - --------------------------------------------------------------------------------- - -via: https://pdos.csail.mit.edu/6.828/2018/labs/lab3/ - -作者:[csail.mit][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://pdos.csail.mit.edu -[b]: https://github.com/lujun9972 -[1]: https://pdos.csail.mit.edu/6.828/2018/labs/labguide.html -[2]: https://pdos.csail.mit.edu/6.828/2018/labs/reference.html -[3]: http://blogs.msdn.com/larryosterman/archive/2005/02/08/369243.aspx -[4]: http://ftp.kh.edu.tw/Linux/SuSE/people/garloff/linux/k6mod.c diff --git a/translated/tech/20181004 Lab 3- User Environments.md b/translated/tech/20181004 Lab 3- User Environments.md new file mode 100644 index 0000000000..3d707bf99d --- /dev/null +++ b/translated/tech/20181004 Lab 3- User Environments.md @@ -0,0 +1,524 @@ +实验 3:用户环境 +====== +### 实验 3:用户环境 + +#### 简介 + +在本实验中,你将要实现一个基本的内核功能,要求它能够保护运行的用户模式环境(即:进程)。你将去增强这个 JOS 内核,去配置数据结构以便于保持对用户环境的跟踪、创建一个单一用户环境、将程序镜像加载到用户环境中、并将它启动运行。你也要写出一些 JOS 内核的函数,用来处理任何用户环境生成的系统调用,以及处理由用户环境引进的各种异常。 + +**注意:** 在本实验中,术语**_“环境”_** 和**_“进程”_** 是可互换的 —— 它们都表示同一个抽象概念,那就是允许你去运行的程序。我在介绍中使用术语**“环境”**而不是使用传统术语**“进程”**的目的是为了强调一点,那就是 JOS 的环境和 UNIX 的进程提供了不同的接口,并且它们的语义也不相同。 + +##### 预备知识 + +使用 Git 去提交你自实验 2 以后的更改(如果有的话),获取课程仓库的最新版本,以及创建一个命名为 `lab3` 的本地分支,指向到我们的 lab3 分支上 `origin/lab3` : + +``` + athena% cd ~/6.828/lab + athena% add git + athena% git commit -am 'changes to lab2 after handin' + Created commit 734fab7: changes to lab2 after handin + 4 files changed, 42 insertions(+), 9 deletions(-) + athena% git pull + Already up-to-date. 
+ athena% git checkout -b lab3 origin/lab3 + Branch lab3 set up to track remote branch refs/remotes/origin/lab3. + Switched to a new branch "lab3" + athena% git merge lab2 + Merge made by recursive. + kern/pmap.c | 42 +++++++++++++++++++ + 1 files changed, 42 insertions(+), 0 deletions(-) + athena% +``` + +实验 3 包含一些你将探索的新源文件: + +```c +inc/ env.h Public definitions for user-mode environments + trap.h Public definitions for trap handling + syscall.h Public definitions for system calls from user environments to the kernel + lib.h Public definitions for the user-mode support library +kern/ env.h Kernel-private definitions for user-mode environments + env.c Kernel code implementing user-mode environments + trap.h Kernel-private trap handling definitions + trap.c Trap handling code + trapentry.S Assembly-language trap handler entry-points + syscall.h Kernel-private definitions for system call handling + syscall.c System call implementation code +lib/ Makefrag Makefile fragment to build user-mode library, obj/lib/libjos.a + entry.S Assembly-language entry-point for user environments + libmain.c User-mode library setup code called from entry.S + syscall.c User-mode system call stub functions + console.c User-mode implementations of putchar and getchar, providing console I/O + exit.c User-mode implementation of exit + panic.c User-mode implementation of panic +user/ * Various test programs to check kernel lab 3 code +``` + +另外,一些在实验 2 中的源文件在实验 3 中将被修改。如果想去查看有什么更改,可以运行: + +``` + $ git diff lab2 + +``` + +你也可以另外去看一下 [实验工具指南][1],它包含了与本实验有关的调试用户代码方面的信息。 + +##### 实验要求 + +本实验分为两部分:Part A 和 Part B。Part A 在本实验完成后一周内提交;你将要提交你的更改和完成的动手实验,在提交之前要确保你的代码通过了 Part A 的所有检查(如果你的代码未通过 Part B 的检查也可以提交)。只需要在第二周提交 Part B 的期限之前代码检查通过即可。 + +由于在实验 2 中,你需要做实验中描述的所有正则表达式练习,并且至少通过一个挑战(是指整个实验,不是每个部分)。写出详细的问题答案并张贴在实验中,以及一到两个段落的关于你如何解决你选择的挑战问题的详细描述,并将它放在一个名为 `answers-lab3.txt` 的文件中,并将这个文件放在你的 `lab` 目标的根目录下。(如果你做了多个问题挑战,你仅需要提交其中一个即可)不要忘记使用 `git add answers-lab3.txt` 提交这个文件。 + +##### 行内汇编语言 + +在本实验中你可能找到使用了 GCC 的行内汇编语言特性,虽然不使用它也可以完成实验。但至少你需要去理解这些行内汇编语言片段,这些汇编语言("`asm`" 语句)片段已经存在于提供给你的源代码中。你可以在课程 [参考资料][2] 的页面上找到 GCC 行内汇编语言有关的信息。 + +#### Part A:用户环境和异常处理 + +新文件 `inc/env.h` 中包含了在 JOS 中关于用户环境的基本定义。现在就去阅读它。内核使用数据结构 `Env` 去保持对每个用户环境的跟踪。在本实验的开始,你将只创建一个环境,但你需要去设计 JOS 内核支持多环境;实验 4 将带来这个高级特性,允许用户环境去 `fork` 其它环境。 + +正如你在 `kern/env.c` 中所看到的,内核维护了与环境相关的三个全局变量: + +``` + struct Env *envs = NULL; // All environments + struct Env *curenv = NULL; // The current env + static struct Env *env_free_list; // Free environment list + +``` + +一旦 JOS 启动并运行,`envs` 指针指向到一个数组,即数据结构 `Env`,它保存了系统中全部的环境。在我们的设计中,JOS 内核将同时支持最大值为 `NENV` 个的活动的环境,虽然在一般情况下,任何给定时刻运行的环境很少。(`NENV` 是在 `inc/env.h` 中用 `#define` 定义的一个常量)一旦它被分配,对于每个 `NENV` 可能的环境,`envs` 数组将包含一个数据结构 `Env` 的单个实例。 + +JOS 内核在 `env_free_list` 上用数据结构 `Env` 保存了所有不活动的环境。这样的设计使得环境的分配和回收很容易,因为这只不过是添加或删除空闲列表的问题而已。 + +内核使用符号 `curenv` 来保持对任意给定时刻的 _当前正在运行的环境_ 进行跟踪。在系统引导期间,在第一个环境运行之前,`curenv` 被初始化为 `NULL`。 + +##### 环境状态 + +数据结构 `Env` 被定义在文件 `inc/env.h` 中,内容如下:(在后面的实验中将添加更多的字段): + +```c + struct Env { + struct Trapframe env_tf; // Saved registers + struct Env *env_link; // Next free Env + envid_t env_id; // Unique environment identifier + envid_t env_parent_id; // env_id of this env's parent + enum EnvType env_type; // Indicates special system environments + unsigned env_status; // Status of the environment + uint32_t env_runs; // Number of times environment has run + + // Address space + pde_t *env_pgdir; // Kernel virtual address of page dir + }; +``` + +以下是数据结构 `Env` 中的字段简介: + + * **env_tf**: +这个结构定义在 `inc/trap.h` 
中,它用于在那个环境不运行时保持它保存在寄存器中的值,即:当内核或一个不同的环境在运行时。当从用户模式切换到内核模式时,内核将保存这些东西,以便于那个环境能够在稍后重新运行时回到中断运行的地方。 + * **env_link**: +这是一个链接,它链接到在 `env_free_list` 上的下一个 `Env` 上。`env_free_list` 指向到列表上第一个空闲的环境。 + * **env_id**: +内核在数据结构 `Env` 中保存了一个唯一标识当前环境的值(即:使用数组 `envs` 中的特定槽位)。在一个用户环境终止之后,内核可能给另外的环境重新分配相同的数据结构 `Env` —— 但是新的环境将有一个与已终止的旧的环境不同的 `env_id`,即便是新的环境在数组 `envs` 中复用了同一个槽位。 + * **env_parent_id**: +内核使用它来保存创建这个环境的父级环境的 `env_id`。通过这种方式,环境就可以形成一个“家族树”,这对于做出“哪个环境可以对谁做什么”这样的安全决策非常有用。 + * **env_type**: +它用于去区分特定的环境。对于大多数环境,它将是 `ENV_TYPE_USER` 的。在稍后的实验中,针对特定的系统服务环境,我们将引入更多的几种类型。 + * **env_status**: +这个变量持有以下几个值之一: + * `ENV_FREE`: +表示那个 `Env` 结构是非活动的,并且因此它还在 `env_free_list` 上。 + * `ENV_RUNNABLE`: +表示那个 `Env` 结构所代表的环境正等待被调度到处理器上去运行。 + * `ENV_RUNNING`: +表示那个 `Env` 结构所代表的环境当前正在运行中。 + * `ENV_NOT_RUNNABLE`: +表示那个 `Env` 结构所代表的是一个当前活动的环境,但不是当前准备去运行的:例如,因为它正在因为一个来自其它环境的进程间通讯(IPC)而处于等待状态。 + * `ENV_DYING`: +表示那个 `Env` 结构所表示的是一个僵尸环境。一个僵尸环境将在下一次被内核捕获后被释放。我们在实验 4 之前不会去使用这个标志。 + * **env_pgdir**: +这个变量持有这个环境的内核虚拟地址的页目录。 + + + +就像一个 Unix 进程一样,一个 JOS 环境耦合了“线程”和“地址空间”的概念。线程主要由保存的寄存器来定义(`env_tf` 字段),而地址空间由页目录和 `env_pgdir` 所指向的页表所定义。为运行一个环境,内核必须使用保存的寄存器值和相关的地址空间去设置 CPU。 + +我们的 `struct Env` 与 xv6 中的 `struct proc` 类似。它们都在一个 `Trapframe` 结构中持有环境(即进程)的用户模式寄存器状态。在 JOS 中,单个的环境并不能像 xv6 中的进程那样拥有它们自己的内核栈。在这里,内核中任意时间只能有一个 JOS 环境处于活动中,因此,JOS 仅需要一个单个的内核栈。 + +##### 为环境分配数组 + +在实验 2 的 `mem_init()` 中,你为数组 `pages[]` 分配了内存,它是内核用于对页面分配与否的状态进行跟踪的一个表。你现在将需要去修改 `mem_init()`,以便于后面使用它分配一个与结构 `Env` 类似的数组,这个数组被称为 `envs`。 + +```markdown +练习 1、修改在 `kern/pmap.c` 中的 `mem_init()`,以用于去分配和映射 `envs` 数组。这个数组完全由 `Env` 结构分配的实例 `NENV` 组成,就像你分配的 `pages` 数组一样。与 `pages` 数组一样,由内存支持的数组 `envs` 也将在 `UENVS`(它的定义在 `inc/memlayout.h` 文件中)中映射用户只读的内存,以便于用户进程能够从这个数组中读取。 +``` + +你应该去运行你的代码,并确保 `check_kern_pgdir()` 是没有问题的。 + +##### 创建和运行环境 + +现在,你将在 `kern/env.c` 中写一些必需的代码去运行一个用户环境。因为我们并没有做一个文件系统,因此,我们将设置内核去加载一个嵌入到内核中的静态的二进制镜像。JOS 内核以一个 ELF 可运行镜像的方式将这个二进制镜像嵌入到内核中。 + +在实验 3 中,`GNUmakefile` 将在 `obj/user/` 目录中生成一些二进制镜像。如果你看到 `kern/Makefrag`,你将注意到一些奇怪的的东西,它们“链接”这些二进制直接进入到内核中运行,就像 `.o` 文件一样。在链接器命令行上的 `-b binary` 选项,将因此把它们链接为“原生的”不解析的二进制文件,而不是由编译器产生的普通的 `.o` 文件。(就链接器而言,这些文件压根就不是 ELF 镜像文件 —— 它们可以是任何东西,比如,一个文本文件或图片!)如果你在内核构建之后查看 `obj/kern/kernel.sym` ,你将会注意到链接器很奇怪的生成了一些有趣的、命名很费解的符号,比如像 `_binary_obj_user_hello_start`、`_binary_obj_user_hello_end`、以及 `_binary_obj_user_hello_size`。链接器通过改编二进制文件的命令来生成这些符号;这种符号为普通内核代码使用一种引入嵌入式二进制文件的方法。 + +在 `kern/init.c` 的 `i386_init()` 中,你将写一些代码在环境中运行这些二进制镜像中的一种。但是,设置用户环境的关键函数还没有实现;将需要你去完成它们。 + +```markdown +练习 2、在文件 `env.c` 中,写完以下函数的代码: + + * `env_init()` +初始化 `envs` 数组中所有的 `Env` 结构,然后把它们添加到 `env_free_list` 中。也称为 `env_init_percpu`,它通过配置硬件,在硬件上为 level 0(内核)权限和 level 3(用户)权限使用单独的段。 + * `env_setup_vm()` +为一个新环境分配一个页目录,并初始化新环境的地址空间的内核部分。 + * `region_alloc()` +为一个新环境分配和映射物理内存 + * `load_icode()` +你将需要去解析一个 ELF 二进制镜像,就像引导加载器那样,然后加载它的内容到一个新环境的用户地址空间中。 + * `env_create()` +使用 `env_alloc` 去分配一个环境,并调用 `load_icode` 去加载一个 ELF 二进制 + * `env_run()` +在用户模式中开始运行一个给定的环境 + + + +在你写这些函数时,你可能会发现新的 cprintf 动词 `%e` 非常有用 -- 它可以输出一个错误代码的相关描述。比如: + + r = -E_NO_MEM; + panic("env_alloc: %e", r); + +中 panic 将输出消息 "env_alloc: out of memory"。 +``` + +下面是用户代码相关的调用图。确保你理解了每一步的用途。 + + * `start` (`kern/entry.S`) + * `i386_init` (`kern/init.c`) + * `cons_init` + * `mem_init` + * `env_init` + * `trap_init`(到目前为止还未完成) + * `env_create` + * `env_run` + * `env_pop_tf` + + + +在完成以上函数后,你应该去编译内核并在 QEMU 下运行它。如果一切正常,你的系统将进入到用户空间并运行二进制的 `hello` ,直到使用 `int` 指令生成一个系统调用为止。在那个时刻将存在一个问题,因为 JOS 尚未设置硬件去允许从用户空间到内核空间的各种转换。当 CPU 
发现没有系统调用中断的服务程序时,它将生成一个一般保护异常,找到那个异常并去处理它,还将生成一个双重故障异常,同样也找到它并处理它,并且最后会出现所谓的“三重故障异常”。通常情况下,你将随后看到 CPU 复位以及系统重引导。虽然对于传统的应用程序(在 [这篇博客文章][3] 中解释了原因)这是重大的问题,但是对于内核开发来说,这是一个痛苦的过程,因此,在打了 6.828 补丁的 QEMU 上,你将可以看到转储的寄存器内容和一个“三重故障”的信息。 + +我们马上就会去处理这些问题,但是现在,我们可以使用调试器去检查我们是否进入了用户模式。使用 `make qemu-gdb` 并在 `env_pop_tf` 处设置一个 GDB 断点,它是你进入用户模式之前到达的最后一个函数。使用 `si` 单步进入这个函数;处理器将在 `iret` 指令之后进入用户模式。然后你将会看到在用户环境运行的第一个指令,它将是在 `lib/entry.S` 中的标签 `start` 的第一个指令 `cmpl`。现在,在 `hello` 中的 `sys_cputs()` 的 `int $0x30` 处使用 `b *0x...`(关于用户空间的地址,请查看 `obj/user/hello.asm` )设置断点。这个指令 `int` 是系统调用去显示一个字符到控制台。如果到 `int` 还没有运行,那么可能在你的地址空间设置或程序加载代码时发生了错误;返回去找到问题并解决后重新运行。 + +##### 处理中断和异常 + +到目前为止,在用户空间中的第一个系统调用指令 `int $0x30` 已正式寿终正寝了:一旦处理器进入用户模式,将无法返回。因此,现在,你需要去实现基本的异常和系统调用服务程序,因为那样才有可能让内核从用户模式代码中恢复对处理器的控制。你所做的第一件事情就是彻底地掌握 x86 的中断和异常机制的使用。 + +``` +练习 3、如果你对中断和异常机制不熟悉的话,阅读 80386 程序员手册的第 9 章(或 IA-32 开发者手册的第 5 章)。 +``` + +在这个实验中,对于中断、异常、以其它类似的东西,我们将遵循 Intel 的术语习惯。由于如异常exception陷阱trap中断interrupt故障fault中止abort这些术语在不同的架构和操作系统上并没有一个统一的标准,我们经常在特定的架构下(如 x86)并不去考虑它们之间的细微差别。当你在本实验以外的地方看到这些术语时,它们的含义可能有细微的差别。 + +##### 受保护的控制转移基础 + +异常和中断都是“受保护的控制转移”,它将导致处理器从用户模式切换到内核模式(CPL=0)而不会让用户模式的代码干扰到内核的其它函数或其它的环境。在 Intel 的术语中,一个中断就是一个“受保护的控制转移”,它是由于处理器以外的外部异步事件所引发的,比如外部设备 I/O 活动通知。而异常正好与之相反,它是由当前正在运行的代码所引发的同步的、受保护的控制转移,比如由于发生了一个除零错误或对无效内存的访问。 + +为了确保这些受保护的控制转移是真正地受到保护,处理器的中断/异常机制设计是:当中断/异常发生时,当前运行的代码不能随意选择进入内核的位置和方式。而是,处理器在确保内核能够严格控制的条件下才能进入内核。在 x86 上,有两种机制协同来提供这种保护: + + 1. **中断描述符表** 处理器确保中断和异常仅能够导致内核进入几个特定的、由内核本身定义好的、明确的入口点,而不是去运行中断或异常发生时的代码。 + +x86 允许最多有 256 个不同的中断或异常入口点去进入内核,每个入口点都使用一个不同的中断向量。一个向量是一个介于 0 和 255 之间的数字。一个中断向量是由中断源确定的:不同的设备、错误条件、以及应用程序去请求内核使用不同的向量生成中断。CPU 使用向量作为进入处理器的中断描述符表(IDT)的索引,它是内核设置的内核私有内存,GDT 也是。从这个表中的适当的条目中,处理器将加载: + + * 将值加载到指令指针寄存器(EIP),指向内核代码设计好的,用于处理这种异常的服务程序。 + * 将值加载到代码段寄存器(CS),它包含运行权限为 0—1 级别的、要运行的异常服务程序。(在 JOS 中,所有的异常处理程序都运行在内核模式中,运行级别为 level 0。) + 2. **任务状态描述符表** 处理器在中断或异常发生时,需要一个地方去保存旧的处理器状态,比如,处理器在调用异常服务程序之前的 `EIP` 和 `CS` 的原始值,这样那个异常服务程序就能够稍后通过还原旧的状态来回到中断发生时的代码位置。但是对于已保存的处理器的旧状态必须被保护起来,不能被无权限的用户模式代码访问;否则代码中的 bug 或恶意用户代码将危及内核。 + +基于这个原因,当一个 x86 处理器产生一个中断或陷阱时,将导致权限级别的变更,从用户模式转换到内核模式,它也将导致在内核的内存中发生栈切换。有一个被称为 TSS 的任务状态描述符表规定段描述符和这个栈所处的地址。处理器在这个新栈上推送 `SS`、`ESP`、`EFLAGS`、`CS`、`EIP`、以及一个可选的错误代码。然后它从中断描述符上加载 `CS` 和 `EIP` 的值,然后设置 `ESP` 和 `SS` 去指向新的栈。 + +虽然 TSS 很大并且默默地为各种用途服务,但是 JOS 仅用它去定义当从用户模式到内核模式的转移发生时,处理器即将切换过去的内核栈。因为在 JOS 中的“内核模式”仅运行在 x86 的 level 0 权限上,当进入内核模式时,处理器使用 TSS 上的 `ESP0` 和 `SS0` 字段去定义内核栈。JOS 并不去使用 TSS 的任何其它字段。 + + + + +##### 异常和中断的类型 + +所有的 x86 处理器上的同步异常都能够产生一个内部使用的、介于 0 到 31 之间的中断向量,因此它映射到 IDT 就是条目 0-31。例如,一个页故障总是通过向量 14 引发一个异常。大于 31 的中断向量仅用于软件中断,它由 `int` 指令生成,或异步硬件中断,当需要时,它们由外部设备产生。 + +在这一节中,我们将扩展 JOS 去处理向量为 0-31 之间的、内部产生的 x86 异常。在下一节中,我们将完成 JOS 的 48(0x30)号软件中断向量,JOS 将(随意选择的)使用它作为系统调用中断向量。在实验 4 中,我们将扩展 JOS 去处理外部生成的硬件中断,比如时钟中断。 + +##### 一个示例 + +我们把这些片断综合到一起,通过一个示例来巩固一下。我们假设处理器在用户环境下运行代码,遇到一个除零问题。 + + 1. 处理器去切换到由 TSS 中的 `SS0` 和 `ESP0` 定义的栈,在 JOS 中,它们各自保存着值 `GD_KD` 和 `KSTACKTOP`。 + + 2. 处理器在内核栈上推入异常参数,起始地址为 `KSTACKTOP`: + +``` + +--------------------+ KSTACKTOP + | 0x00000 | old SS | " - 4 + | old ESP | " - 8 + | old EFLAGS | " - 12 + | 0x00000 | old CS | " - 16 + | old EIP | " - 20 <---- ESP + +--------------------+ + +``` + + 3. 由于我们要处理一个除零错误,它将在 x86 上产生一个中断向量 0,处理器读取 IDT 的条目 0,然后设置 `CS:EIP` 去指向由条目描述的处理函数。 + + 4. 
处理服务程序函数将接管控制权并处理异常,例如中止用户环境。 + + + + +对于某些类型的 x86 异常,除了以上的五个“标准的”寄存器外,处理器还推入另一个包含错误代码的寄存器值到栈中。页故障异常,向量号为 14,就是一个重要的示例。查看 80386 手册去确定哪些异常推入一个错误代码,以及错误代码在那个案例中的意义。当处理器推入一个错误代码后,当从用户模式中进入内核模式,异常处理服务程序开始时的栈看起来应该如下所示: + +``` + +--------------------+ KSTACKTOP + | 0x00000 | old SS | " - 4 + | old ESP | " - 8 + | old EFLAGS | " - 12 + | 0x00000 | old CS | " - 16 + | old EIP | " - 20 + | error code | " - 24 <---- ESP + +--------------------+ +``` + +##### 嵌套的异常和中断 + +处理器能够处理来自用户和内核模式中的异常和中断。当收到来自用户模式的异常和中断时才会进入内核模式中,而且,在推送它的旧寄存器状态到栈中和通过 IDT 调用相关的异常服务程序之前,x86 处理器会自动切换栈。如果当异常或中断发生时,处理器已经处于内核模式中(`CS` 寄存器低位两个比特为 0),那么 CPU 只是推入一些值到相同的内核栈中。在这种方式中,内核可以优雅地处理嵌套的异常,嵌套的异常一般由内核本身的代码所引发。在实现保护时,这种功能是非常重要的工具,我们将在稍后的系统调用中看到它。 + +如果处理器已经处于内核模式中,并且发生了一个嵌套的异常,由于它并不需要切换栈,它也就不需要去保存旧的 `SS` 或 `ESP` 寄存器。对于不推入错误代码的异常类型,在进入到异常服务程序时,它的内核栈看起来应该如下图: + +``` + +--------------------+ <---- old ESP + | old EFLAGS | " - 4 + | 0x00000 | old CS | " - 8 + | old EIP | " - 12 + +--------------------+ +``` + +对于需要推入一个错误代码的异常类型,处理器将在旧的 `EIP` 之后,立即推入一个错误代码,就和前面一样。 + +关于处理器的异常嵌套的功能,这里有一个重要的警告。如果处理器正处于内核模式时发生了一个异常,并且不论是什么原因,比如栈空间泄漏,都不会去推送它的旧的状态,那么这时处理器将不能做任何的恢复,它只是简单地重置。毫无疑问,内核应该被设计为禁止发生这种情况。 + +##### 设置 IDT + +到目前为止,你应该有了在 JOS 中为了设置 IDT 和处理异常所需的基本信息。现在,我们去设置 IDT 以处理中断向量 0-31(处理器异常)。我们将在本实验的稍后部分处理系统调用,然后在后面的实验中增加中断 32-47(设备 IRQ)。 + +在头文件 `inc/trap.h` 和 `kern/trap.h` 中包含了中断和异常相关的重要定义,你需要去熟悉使用它们。在文件`kern/trap.h` 中包含了到内核的、严格的、秘密的定义,可是在 `inc/trap.h` 中包含的定义也可以被用到用户级程序和库上。 + +注意:在范围 0-31 中的一些异常是被 Intel 定义为保留。因为在它们的处理器上从未产生过,你如何处理它们都不会有大问题。你想如何做它都是可以的。 + +你将要实现的完整的控制流如下图所描述: + +```c + IDT trapentry.S trap.c + ++----------------+ +| &handler1 |---------> handler1: trap (struct Trapframe *tf) +| | // do stuff { +| | call trap // handle the exception/interrupt +| | // ... } ++----------------+ +| &handler2 |--------> handler2: +| | // do stuff +| | call trap +| | // ... ++----------------+ + . + . + . ++----------------+ +| &handlerX |--------> handlerX: +| | // do stuff +| | call trap +| | // ... ++----------------+ +``` + +每个异常或中断都应该在 `trapentry.S` 中有它自己的处理程序,并且 `trap_init()` 应该使用这些处理程序的地址去初始化 IDT。每个处理程序都应该在栈上构建一个 `struct Trapframe`(查看 `inc/trap.h`),然后使用一个指针调用 `trap()`(在 `trap.c` 中)到 `Trapframe`。`trap()` 接着处理异常/中断或派发给一个特定的处理函数。 + +```markdown +练习 4、编辑 `trapentry.S` 和 `trap.c`,然后实现上面所描述的功能。在 `trapentry.S` 中的宏 `TRAPHANDLER` 和 `TRAPHANDLER_NOEC` 将会帮你,还有在 `inc/trap.h` 中的 T_* defines。你需要在 `trapentry.S` 中为每个定义在 `inc/trap.h` 中的陷阱添加一个入口点(使用这些宏),并且你将有 t、o 提供的 `_alltraps`,这是由宏 `TRAPHANDLER`指向到它。你也需要去修改 `trap_init()` 来初始化 `idt`,以使它指向到每个在 `trapentry.S` 中定义的入口点;宏 `SETGATE` 将有助你实现它。 + +你的 `_alltraps` 应该: + + 1. 推送值以使栈看上去像一个结构 Trapframe + 2. 加载 `GD_KD` 到 `%ds` 和 `%es` + 3. `pushl %esp` 去传递一个指针到 Trapframe 以作为一个 trap() 的参数 + 4. `call trap` (`trap` 能够返回吗?) + + + +考虑使用 `pushal` 指令;它非常适合 `struct Trapframe` 的布局。 + +使用一些在 `user` 目录中的测试程序来测试你的陷阱处理代码,这些测试程序在生成任何系统调用之前能引发异常,比如 `user/divzero`。在这时,你应该能够成功完成 `divzero`、`softint`、以有 `badsegment` 测试。 +``` + +```markdown +小挑战!目前,在 `trapentry.S` 中列出的 `TRAPHANDLER` 和他们安装在 `trap.c` 中可能有许多代码非常相似。清除它们。修改 `trapentry.S` 中的宏去自动为 `trap.c` 生成一个表。注意,你可以直接使用 `.text` 和 `.data` 在汇编器中切换放置其中的代码和数据。 +``` + +```markdown +问题 + +在你的 `answers-lab3.txt` 中回答下列问题: + + 1. 为每个异常/中断设置一个独立的服务程序函数的目的是什么?(即:如果所有的异常/中断都传递给同一个服务程序,在我们的当前实现中能否提供这样的特性?) + 2. 你需要做什么事情才能让 `user/softint` 程序正常运行?评级脚本预计将会产生一个一般保护故障(trap 13),但是 `softint` 的代码显示为 `int $14`。为什么它产生的中断向量是 13?如果内核允许 `softint` 的 `int $14` 指令去调用内核页故障的服务程序(它的中断向量是 14)会发生什么事情? 
+``` + + +本实验的 Part A 部分结束了。不要忘了去添加 `answers-lab3.txt` 文件,提交你的变更,然后在 Part A 作业的提交截止日期之前运行 `make handin`。 + +#### Part B:页故障、断点异常、和系统调用 + +现在,你的内核已经有了最基本的异常处理能力,你将要去继续改进它,来提供依赖异常服务程序的操作系统原语。 + +##### 处理页故障 + +页故障异常,中断向量为 14(`T_PGFLT`),它是一个非常重要的东西,我们将通过本实验和接下来的实验来大量练习它。当处理器产生一个页故障时,处理器将在它的一个特定的控制寄存器(`CR2`)中保存导致这个故障的线性地址(即:虚拟地址)。在 `trap.c` 中我们提供了一个专门处理它的函数的一个雏形,它就是 `page_fault_handler()`,我们将用它来处理页故障异常。 + +```markdown +练习 5、修改 `trap_dispatch()` 将页故障异常派发到 `page_fault_handler()` 上。你现在应该能够成功测试 `faultread`、`faultreadkernel`、`faultwrite`、和 `faultwritekernel` 了。如果它们中的任何一个不能正常工作,找出问题并修复它。记住,你可以使用 make run- _x_ 或 make run- _x_ -nox 去重引导 JOS 进入到一个特定的用户程序。比如,你可以运行 make run-hello-nox 去运行 the _hello_ user 程序。 +``` + +下面,你将进一步细化内核的页故障服务程序,因为你要实现系统调用了。 + +##### 断点异常 + +断点异常,中断向量为 3(`T_BRKPT`),它一般用在调试上,它在一个程序代码中插入断点,从而使用特定的 1 字节的 `int3` 软件中断指令来临时替换相应的程序指令。在 JOS 中,我们将稍微“滥用”一下这个异常,通过将它打造成一个伪系统调用原语,使得任何用户环境都可以用它来调用 JOS 内核监视器。如果我们将 JOS 内核监视认为是原始调试器,那么这种用法是合适的。例如,在 `lib/panic.c` 中实现的用户模式下的 `panic()` ,它在显示它的 `panic` 消息后运行一个 `int3` 中断。 + +```markdown +练习 6、修改 `trap_dispatch()`,让它在调用内核监视器时产生一个断点异常。你现在应该可以在 `breakpoint` 上成功完成测试。 +``` + +```markdown +小挑战!修改 JOS 内核监视器,以便于你能够从当前位置(即:在 `int3` 之后,断点异常调用了内核监视器) '继续' 异常,并且因此你就可以一次运行一个单步指令。为了实现单步运行,你需要去理解 `EFLAGS` 寄存器中的某些比特的意义。 + +可选:如果你富有冒险精神,找一些 x86 反汇编的代码 —— 即通过从 QEMU 中、或从 GNU 二进制工具中分离、或你自己编写 —— 然后扩展 JOS 内核监视器,以使它能够反汇编,显示你的每步的指令。结合实验 1 中的符号表,这将是你写的一个真正的内核调试器。 +``` + +```markdown +问题 + + 3. 在断点测试案例中,根据你在 IDT 中如何初始化断点条目的不同情况(即:你的从 `trap_init` 到 `SETGATE` 的调用),既有可能产生一个断点异常,也有可能产生一个一般保护故障。为什么?为了能够像上面的案例那样工作,你需要如何去设置它,什么样的不正确设置才会触发一个一般保护故障? + 4. 你认为这些机制的意义是什么?尤其是要考虑 `user/softint` 测试程序的工作原理。 +``` + + +##### 系统调用 + +用户进程请求内核为它做事情就是通过系统调用来实现的。当用户进程请求一个系统调用时,处理器首先进入内核模式,处理器和内核配合去保存用户进程的状态,内核为了完成系统调用会运行有关的代码,然后重新回到用户进程。用户进程如何获得内核的关注以及它如何指定它需要的系统调用的具体细节,这在不同的系统上是不同的。 + +在 JOS 内核中,我们使用 `int` 指令,它将导致产生一个处理器中断。尤其是,我们使用 `int $0x30` 作为系统调用中断。我们定义常量 `T_SYSCALL` 为 48(0x30)。你将需要去设置中断描述符表,以允许用户进程去触发那个中断。注意,那个中断 0x30 并不是由硬件生成的,因此允许用户代码去产生它并不会引起歧义。 + +应用程序将在寄存器中传递系统调用号和系统调用参数。通过这种方式,内核就不需要去遍历用户环境的栈或指令流。系统调用号将放在 `%eax` 中,而参数(最多五个)将分别放在 `%edx`、`%ecx`、`%ebx`、`%edi`、和 `%esi` 中。内核将在 `%eax` 中传递返回值。在 `lib/syscall.c` 中的 `syscall()` 中已为你编写了使用一个系统调用的汇编代码。你可以通过阅读它来确保你已经理解了它们都做了什么。 + +```markdown +练习 7、在内核中为中断向量 `T_SYSCALL` 添加一个服务程序。你将需要去编辑 `kern/trapentry.S` 和 `kern/trap.c` 的 `trap_init()`。还需要去修改 `trap_dispatch()`,以便于通过使用适当的参数来调用 `syscall()` (定义在 `kern/syscall.c`)以处理系统调用中断,然后将系统调用的返回值安排在 `%eax` 中传递给用户进程。最后,你需要去实现 `kern/syscall.c` 中的 `syscall()`。如果系统调用号是无效值,确保 `syscall()` 返回值一定是 `-E_INVAL`。为确保你理解了系统调用的接口,你应该去阅读和掌握 `lib/syscall.c` 文件(尤其是行内汇编的动作),对于在 `inc/syscall.h` 中列出的每个系统调用都需要通过调用相关的内核函数来处理A。 + +在你的内核中运行 `user/hello` 程序(make run-hello)。它应该在控制台上输出 "`hello, world`",然后在用户模式中产生一个页故障。如果没有产生页故障,可能意味着你的系统调用服务程序不太正确。现在,你应该有能力成功通过 `testbss` 测试。 +``` + +```markdown +小挑战!使用 `sysenter` 和 `sysexit` 指令而不是使用 `int 0x30` 和 `iret` 来实现系统调用。 + +`sysenter/sysexit` 指令是由 Intel 设计的,它的运行速度要比 `int/iret` 指令快。它使用寄存器而不是栈来做到这一点,并且通过假定了分段寄存器是如何使用的。关于这些指令的详细内容可以在 Intel 参考手册 2B 卷中找到。 + +在 JOS 中添加对这些指令支持的最容易的方法是,在 `kern/trapentry.S` 中添加一个 `sysenter_handler`,在它里面保存足够多的关于用户环境返回、设置内核环境、推送参数到 `syscall()`、以及直接调用 `syscall()` 的信息。一旦 `syscall()` 返回,它将设置好运行 `sysexit` 指令所需的一切东西。你也将需要在 `kern/init.c` 中添加一些代码,以设置特殊模块寄存器(MSRs)。在 AMD 架构程序员手册第 2 卷的 6.1.2 节中和 Intel 参考手册的 2B 卷的 SYSENTER 上都有关于 MSRs 的很详细的描述。对于如何去写 MSRs,在[这里][4]你可以找到一个添加到 `inc/x86.h` 中的 `wrmsr` 的实现。 + +最后,`lib/syscall.c` 必须要修改,以便于支持用 `sysenter` 来生成一个系统调用。下面是 `sysenter` 指令的一种可能的寄存器布局: + + eax - syscall number + edx, ecx, ebx, edi - arg1, arg2, arg3, arg4 + esi - return pc + ebp - return esp 
+ esp - trashed by sysenter + +GCC 的内联汇编器将自动保存你告诉它的直接加载进寄存器的值。不要忘了同时去保存(push)和恢复(pop)你使用的其它寄存器,或告诉内联汇编器你正在使用它们。内联汇编器不支持保存 `%ebp`,因此你需要自己去增加一些代码来保存和恢复它们,返回地址可以使用一个像 `leal after_sysenter_label, %%esi` 的指令置入到 `%esi` 中。 + +注意,它仅支持 4 个参数,因此你需要保留支持 5 个参数的系统调用的旧方法。而且,因为这个快速路径并不更新当前环境的 trap 帧,因此,在我们添加到后续实验中的一些系统调用上,它并不适合。 + +在接下来的实验中我们启用了异步中断,你需要再次去评估一下你的代码。尤其是,当返回到用户进程时,你需要去启用中断,而 `sysexit` 指令并不会为你去做这一动作。 +``` + +##### 启动用户模式 + +一个用户程序是从 `lib/entry.S` 的顶部开始运行的。在一些配置之后,代码调用 `lib/libmain.c` 中的 `libmain()`。你应该去修改 `libmain()` 以初始化全局指针 `thisenv`,使它指向到这个环境在数组 `envs[]` 中的 `struct Env`。(注意那个 `lib/entry.S` 中已经定义 `envs` 去指向到在 Part A 中映射的你的设置。)提示:查看 `inc/env.h` 和使用 `sys_getenvid`。 + +`libmain()` 接下来调用 `umain`,在 hello 程序的案例中,`umain` 是在 `user/hello.c` 中。注意,它在输出 "`hello, world`” 之后,它尝试去访问 `thisenv->env_id`。这就是为什么前面会发生故障的原因了。现在,你已经正确地初始化了 `thisenv`,它应该不会再发生故障了。如果仍然会发生故障,或许是因为你没有映射 `UENVS` 区域为用户可读取(回到前面 Part A 中 查看 `pmap.c`);这是我们第一次真实地使用 `UENVS` 区域)。 + +```markdown +练习 8、添加要求的代码到用户库,然后引导你的内核。你应该能够看到 `user/hello` 程序会输出 "`hello, world`" 然后输出 "`i am environment 00001000`"。`user/hello` 接下来会通过调用 `sys_env_destroy()`(查看`lib/libmain.c` 和 `lib/exit.c`)尝试去"退出"。由于内核目前仅支持一个用户环境,它应该会报告它毁坏了唯一的环境,然后进入到内核监视器中。现在你应该能够成功通过 `hello` 的测试。 +``` + +##### 页故障和内存保护 + +内存保护是一个操作系统中最重要的特性,通过它来保证一个程序中的 bug 不会破坏其它程序或操作系统本身。 + +操作系统一般是依靠硬件的支持来实现内存保护。操作系统会告诉硬件哪些虚拟地址是有效的,而哪些是无效的。当一个程序尝试去访问一个无效地址或它没有访问权限的地址时,处理器会在导致故障发生的位置停止程序运行,然后捕获内核中关于尝试操作的相关信息。如果故障是可修复的,内核可能修复它并让程序继续运行。如果故障不可修复,那么程序就不能继续,因为它绝对不会跳过那个导致故障的指令。 + +作为一个可修复故障的示例,假设一个自动扩展的栈。在许多系统上,内核初始化分配一个单栈页,然后如果程序发生的故障是去访问这个栈页下面的页,那么内核会自动分配这些页,并让程序继续运行。通过这种方式,内核只分配程序所需要的内存栈,但是程序可以运行在一个任意大小的栈的假像中。 + +对于内存保护,系统调用中有一个非常有趣的问题。许多系统调用接口让用户程序传递指针到内核中。这些指针指向用户要读取或写入的缓冲区。然后内核在执行系统调用时废弃这些指针。这样就有两个问题: + + 1. 内核中的页故障可能比用户程序中的页故障多的多。如果内核在维护它自己的数据结构时发生页故障,那就是一个内核 bug,而故障服务程序将使整个内核(和整个系统)崩溃。但是当内核废弃了由用户程序传递给它的指针后,它就需要一种方式去记住那些废弃指针所导致的页故障其实是代表用户程序的。 + 2. 一般情况下内核拥有比用户程序更多的权限。用户程序可以传递一个指针到系统调用,而指针指向的区域有可能是内核可以读取或写入而用户程序不可访问的区域。内核必须要非常小心,不能被废弃的这种指针欺骗,因为这可能导致泄露私有信息或破坏内核的完整性。 + + + +由于以上的原因,内核在处理由用户程序提供的指针时必须格外小心。 + +现在,你可以通过使用一个简单的机制来仔细检查所有从用户空间传递给内核的指针来解决这个问题。当一个程序给内核传递指针时,内核将检查它的地址是否在地址空间的用户部分,然后页表才允许对内存的操作。 + +这样,内核在废弃一个用户提供的指针时就绝不会发生页故障。如果内核出现这种页故障,它应该崩溃并终止。 + +```markdown +练习 9、如果在内核模式中发生一个页故障,修改 `kern/trap.c` 去崩溃。 + +提示:判断一个页故障是发生在用户模式还是内核模式,去检查 `tf_cs` 的低位比特即可。 + +阅读 `kern/pmap.c` 中的 `user_mem_assert` 并在那个文件中实现 `user_mem_check`。 + +修改 `kern/syscall.c` 去常态化检查传递给系统调用的参数。 + +引导你的内核,运行 `user/buggyhello`。环境将被毁坏,而内核将不会崩溃。你将会看到: + + [00001000] user_mem_check assertion failure for va 00000001 + [00001000] free env 00001000 + Destroyed the only environment - nothing more to do! +最后,修改在 `kern/kdebug.c` 中的 `debuginfo_eip`,在 `usd`、`stabs`、和 `stabstr` 上调用 `user_mem_check`。如果你现在运行 `user/breakpoint`,你应该能够从内核监视器中运行回溯,然后在内核因页故障崩溃前看到回溯进入到 `lib/libmain.c`。是什么导致了这个页故障?你不需要去修复它,但是你应该明白它是如何发生的。 +``` + +注意,刚才实现的这些机制也同样适用于恶意用户程序(比如 `user/evilhello`)。 + +``` +练习 10、引导你的内核,运行 `user/evilhello`。环境应该被毁坏,并且内核不会崩溃。你应该能看到: + + [00000000] new env 00001000 + ... 
+ [00001000] user_mem_check assertion failure for va f010000c + [00001000] free env 00001000 +``` + +**本实验到此结束。**确保你通过了所有的等级测试,并且不要忘记去写下问题的答案,在 `answers-lab3.txt` 中详细描述你的挑战练习的解决方案。提交你的变更并在 `lab` 目录下输入 `make handin` 去提交你的工作。 + +在动手实验之前,使用 `git status` 和 `git diff` 去检查你的变更,并不要忘记去 `git add answers-lab3.txt`。当你完成后,使用 `git commit -am 'my solutions to lab 3’` 去提交你的变更,然后 `make handin` 并关注这个指南。 + +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/labs/lab3/ + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[qhwdw](https://github.com/qhwdw) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://pdos.csail.mit.edu +[b]: https://github.com/lujun9972 +[1]: https://pdos.csail.mit.edu/6.828/2018/labs/labguide.html +[2]: https://pdos.csail.mit.edu/6.828/2018/labs/reference.html +[3]: http://blogs.msdn.com/larryosterman/archive/2005/02/08/369243.aspx +[4]: http://ftp.kh.edu.tw/Linux/SuSE/people/garloff/linux/k6mod.c From a0d0b162e6539de903d355422320913c586bcfd6 Mon Sep 17 00:00:00 2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Wed, 24 Oct 2018 21:35:40 +0800 Subject: [PATCH 04/32] Delete 20180810 How To Remove Or Disable Ubuntu Dock.md --- ...10 How To Remove Or Disable Ubuntu Dock.md | 145 ------------------ 1 file changed, 145 deletions(-) delete mode 100644 sources/tech/20180810 How To Remove Or Disable Ubuntu Dock.md diff --git a/sources/tech/20180810 How To Remove Or Disable Ubuntu Dock.md b/sources/tech/20180810 How To Remove Or Disable Ubuntu Dock.md deleted file mode 100644 index 709af3de95..0000000000 --- a/sources/tech/20180810 How To Remove Or Disable Ubuntu Dock.md +++ /dev/null @@ -1,145 +0,0 @@ -Translating by MjSeven - - -How To Remove Or Disable Ubuntu Dock -====== - -![](https://1.bp.blogspot.com/-pClnjEJfPQc/W21nHNzU2DI/AAAAAAAABV0/HGXuQOYGzokyrGYQtRFeF_hT3_3BKHupQCLcBGAs/s640/ubuntu-dock.png) - -**If you want to replace the Ubuntu Dock in Ubuntu 18.04 with some other dock (like Plank dock for example) or panel, and you want to remove or disable the Ubuntu Dock, here's what you can do and how.** - -Ubuntu Dock - the bar on the left-hand side of the screen which can be used to pin applications and access installed applications - - - -### How to access the Activities Overview without Ubuntu Dock - -Without Ubuntu Dock, you may have no way of accessing the Activities / installed application list (which can be accessed from Ubuntu Dock by clicking on Show Applications button at the bottom of the dock). For example if you want to use Plank dock. - -Obviously, that's not the case if you install Dash to Panel extension to use it instead Ubuntu Dock, because Dash to Panel provides a button to access the Activities Overview / installed applications. - -Depending on what you plan to use instead of Ubuntu Dock, if there's no way of accessing the Activities Overview, you can enable the Activities Overview Hot Corner option and simply move your mouse to the upper left corner of the screen to open the Activities. Another way of accessing the installed application list is using a keyboard shortcut: `Super + A` . 
- -If you want to enable the Activities Overview hot corner, use this command: -``` -gsettings set org.gnome.shell enable-hot-corners true - -``` - -If later you want to undo this and disable the hot corners, you need to use this command: -``` -gsettings set org.gnome.shell enable-hot-corners false - -``` - -You can also enable or disable the Activities Overview Hot Corner option by using the Gnome Tweaks application (the option is in the `Top Bar` section of Gnome Tweaks), which can be installed by using this command: -``` -sudo apt install gnome-tweaks - -``` - -### How to remove or disable Ubuntu Dock - -Below you'll find 4 ways of getting rid of Ubuntu Dock which work in Ubuntu 18.04. - -**Option 1: Remove the Gnome Shell Ubuntu Dock package.** - -The easiest way of getting rid of the Ubuntu Dock is to remove the package. - -This completely removes the Ubuntu Dock extension from your system, but it also removes the `ubuntu-desktop` meta package. There's no immediate issue if you remove the `ubuntu-desktop` meta package because does nothing by itself. The `ubuntu-meta` package depends on a large number of packages which make up the Ubuntu Desktop. Its dependencies won't be removed and nothing will break. The issue is that if you want to upgrade to a newer Ubuntu version, any new `ubuntu-desktop` dependencies won't be installed. - -As a way around this, you can simply install the `ubuntu-desktop` meta package before upgrading to a newer Ubuntu version (for example if you want to upgrade from Ubuntu 18.04 to 18.10). - -If you're ok with this and want to remove the Ubuntu Dock extension package from your system, use the following command: -``` -sudo apt remove gnome-shell-extension-ubuntu-dock - -``` - -If later you want to undo the changes, simply install the extension back using this command: -``` -sudo apt install gnome-shell-extension-ubuntu-dock - -``` - -Or to install the `ubuntu-desktop` meta package back (this will install any ubuntu-desktop dependencies you may have removed, including Ubuntu Dock), you can use this command: -``` -sudo apt install ubuntu-desktop - -``` - -**Option 2: Install and use the vanilla Gnome session instead of the default Ubuntu session.** - -Another way to get rid of Ubuntu Dock is to install and use the vanilla Gnome session. Installing the vanilla Gnome session will also install other packages this session depends on, like Gnome Documents, Maps, Music, Contacts, Photos, Tracker and more. - -By installing the vanilla Gnome session, you'll also get the default Gnome GDM login / lock screen theme instead of the Ubuntu defaults as well as Adwaita Gtk theme and icons. You can easily change the Gtk and icon theme though, by using the Gnome Tweaks application. - -Furthermore, the AppIndicators extension will be disabled by default (so applications that make use of the AppIndicators tray won't show up on the top panel), but you can enable this by using Gnome Tweaks (under Extensions, enable the Ubuntu appindicators extension). - -In the same way, you can also enable or disable Ubuntu Dock from the vanilla Gnome session, which is not possible if you use the Ubuntu session (disabling Ubuntu Dock from Gnome Tweaks when using the Ubuntu session does nothing). - -If you don't want to install these extra packages required by the vanilla Gnome session, this option of removing Ubuntu Dock is not for you so check out the other options. - -If you are ok with this though, here's what you need to do. 
To install the vanilla Gnome session in Ubuntu, use this command: -``` -sudo apt install vanilla-gnome-desktop - -``` - -After the installation finishes, reboot your system and on the login screen, after you click on your username, click the gear icon next to the `Sign in` button, and select `GNOME` instead of `Ubuntu` , then proceed to login: - -![](https://4.bp.blogspot.com/-mc-6H2MZ0VY/W21i_PIJ3pI/AAAAAAAABVo/96UvmRM1QJsbS2so1K8teMhsu7SdYh9zwCLcBGAs/s640/vanilla-gnome-session-ubuntu-login-screen.png) - -In case you want to undo this and remove the vanilla Gnome session, you can purge the vanilla Gnome package and then remove the dependencies it installed (second command) using the following commands: -``` -sudo apt purge vanilla-gnome-desktop -sudo apt autoremove - -``` - -Then reboot and select Ubuntu in the same way, from the GDM login screen. - -**Option 3: Permanently hide the Ubuntu Dock from your desktop instead of removing it.** - -If you prefer to permanently hide the Ubuntu Dock from showing up on your desktop instead of uninstalling it or using the vanilla Gnome session, you can easily do this using Dconf Editor. The drawback to this is that Ubuntu Dock will still use some system resources even though you're not using in on your desktop, but you'll also be able to easily revert this without installing or removing any packages. - -Ubuntu Dock is only hidden from your desktop though. When you go in overlay mode (Activities), you'll still see and be able to use Ubuntu Dock from there. - -To permanently hide Ubuntu Dock, use Dconf Editor to navigate to `/org/gnome/shell/extensions/dash-to-dock` and disable (set them to false) the following options: `autohide` , `dock-fixed` and `intellihide` . - -You can achieve this from the command line if you wish, buy running the commands below: -``` -gsettings set org.gnome.shell.extensions.dash-to-dock autohide false -gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed false -gsettings set org.gnome.shell.extensions.dash-to-dock intellihide false - -``` -In case you change your mind and you want to undo this, you can either use Dconf Editor and re-enable (set them to true) autohide, dock-fixed and intellihide from `/org/gnome/shell/extensions/dash-to-dock` , or you can use these commands: -``` -gsettings set org.gnome.shell.extensions.dash-to-dock autohide true -gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed true -gsettings set org.gnome.shell.extensions.dash-to-dock intellihide true - -``` - -**Option 4: Use Dash to Panel extension.** - -You can install Dash to Panel from - -If you change your mind and you want Ubuntu Dock back, you can either disable Dash to Panel by using Gnome Tweaks app, or completely remove Dash to Panel by clicking the X button next to it from here: - - --------------------------------------------------------------------------------- - -via: https://www.linuxuprising.com/2018/08/how-to-remove-or-disable-ubuntu-dock.html - -作者:[Logix][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/118280394805678839070 -[1]:https://bugs.launchpad.net/ubuntu/+source/gnome-tweak-tool/+bug/1713020 -[2]:https://www.linuxuprising.com/2018/05/gnome-shell-dash-to-panel-v14-brings.html -[3]:https://extensions.gnome.org/extension/1160/dash-to-panel/ From 4009954b721b1b2d81df8fa8f042c756424ee288 Mon Sep 17 00:00:00 
2001 From: MjSeven <33125422+MjSeven@users.noreply.github.com> Date: Wed, 24 Oct 2018 21:36:08 +0800 Subject: [PATCH 05/32] Create 20180810 How To Remove Or Disable Ubuntu Dock.md --- ...10 How To Remove Or Disable Ubuntu Dock.md | 143 ++++++++++++++++++ 1 file changed, 143 insertions(+) create mode 100644 translated/tech/20180810 How To Remove Or Disable Ubuntu Dock.md diff --git a/translated/tech/20180810 How To Remove Or Disable Ubuntu Dock.md b/translated/tech/20180810 How To Remove Or Disable Ubuntu Dock.md new file mode 100644 index 0000000000..0ea7e841af --- /dev/null +++ b/translated/tech/20180810 How To Remove Or Disable Ubuntu Dock.md @@ -0,0 +1,143 @@ +如何移除或禁用 Ubuntu Dock +====== + +![](https://1.bp.blogspot.com/-pClnjEJfPQc/W21nHNzU2DI/AAAAAAAABV0/HGXuQOYGzokyrGYQtRFeF_hT3_3BKHupQCLcBGAs/s640/ubuntu-dock.png) + +**如果你想用其它 dock(例如 Plank dock)或面板来替换 Ubuntu 18.04 中的 Dock,或者你想要移除或禁用 Ubuntu Dock,本文会告诉你如何做。** + +Ubuntu Dock - 屏幕左侧栏,可用于固定应用程序或访问已安装的应用程序。使用默认的 Ubuntu 会话时,[无法][1]使用 Gnome Tweaks 禁用它。如果你需要,还是有几种方法来摆脱它的。下面我将列出 4 种方法可以移除或禁用 Ubuntu Dock,以及每个方法的缺点(如果有的话),还有如何撤销每个方法的更改。本文还包括在没有 Ubuntu Dock 的情况下访问多任务视图和已安装应用程序列表的其它方法。 +(to 校正:Activities Overview 在本文翻译为多任务视图,如有不妥,请改正) +### 如何在没有 Ubuntu Dock 的情况下访问多任务试图 + +如果没有 Ubuntu Dock,你可能无法访问活动的或已安装的应用程序列表(但是可以通过单击 Dock 底部的“显示应用程序”按钮从 Ubuntu Dock 访问)。例如,如果你想使用 Plank Dock。(to 校正:这里是什么意思呢) + +显然,如果你安装了 Dash to Panel 扩展来使用它而不是 Ubuntu Dock,那么情况并非如此。因为 Dash to Panel 提供了一个按钮来访问多任务视图或已安装的应用程序。 + +根据你计划使用的 Dock 而不是 Ubuntu Dock,如果无法访问多任务视图,那么你可以启用 Activities Overview Hot Corner 选项,只需将鼠标移动到屏幕的左上角即可打开 Activities。访问已安装的应用程序列表的另一种方法是使用快捷键:`Super + A`。 + +如果要启用 Activities Overview hot corner,使用以下命令: +``` +gsettings set org.gnome.shell enable-hot-corners true + +``` + +如果以后要撤销此操作并禁用 hot corners,那么你需要使用以下命令: +``` +gsettings set org.gnome.shell enable-hot-corners false + +``` + +你可以使用 Gnome Tweaks 应用程序(该选项位于 Gnome Tweaks 的 `Top Bar` 部分)启用或禁用 Activities Overview Hot Corner 选项,可以使用以下命令进行安装: +``` +sudo apt install gnome-tweaks + +``` + +### 如何移除或禁用 Ubuntu Dock + +下面你将找到 4 种摆脱 Ubuntu Dock 的方法,环境在 Ubuntu 18.04 下。 + +**方法 1: 移除 Gnome Shell Ubuntu Dock 包。** + +摆脱 Ubuntu Dock 的最简单方法就是删除包。 + +这将会从你的系统中完全移除 Ubuntu Dock 扩展,但同时也移除了 `ubuntu-desktop` 元数据包。如果你移除 `ubuntu-desktop` 元数据包,不会马上出现问题,因为它本身没有任何作用。`ubuntu-meta` 包依赖于组成 Ubuntu 桌面的大量包。它的依赖关系不会被删除,也不会被破坏。问题是如果你以后想升级到新的 Ubuntu 版本,那么将不会安装任何新的 `ubuntu-desktop` 依赖项。 + +为了解决这个问题,你可以在升级到较新的 Ubuntu 版本之前安装 `ubuntu-desktop` 元包(例如,如果你想从 Ubuntu 18.04 升级到 18.10)。 + +如果你对此没有意见,并且想要从系统中删除 Ubuntu Dock 扩展包,使用以下命令: +``` +sudo apt remove gnome-shell-extension-ubuntu-dock + +``` + +如果以后要撤消更改,只需使用以下命令安装扩展: +``` +sudo apt install gnome-shell-extension-ubuntu-dock + +``` + +或者重新安装 `ubuntu-desktop` 元数据包(这将会安装你可能已删除的任何 ubuntu-desktop 依赖项,包括 Ubuntu Dock),你可以使用以下命令: +``` +sudo apt install ubuntu-desktop + +``` + +**选项2:安装并使用 vanilla Gnome 会话而不是默认的 Ubuntu 会话。** + +摆脱 Ubuntu Dock 的另一种方法是安装和使用 vanilla Gnome 会话。安装 vanilla Gnome 会话还将安装此会话所依赖的其它软件包,如 Gnome 文档,地图,音乐,联系人,照片,跟踪器等。 + +通过安装 vanilla Gnome 会话,你还将获得默认 Gnome GDM 登录和锁定屏幕主题,而不是 Ubuntu 默认值,另外还有 Adwaita Gtk 主题和图标。你可以使用 Gnome Tweaks 应用程序轻松更改 Gtk 和图标主题。 + +此外,默认情况下将禁用 AppIndicators 扩展(因此使用 AppIndicators 托盘的应用程序不会显示在顶部面板上),但你可以使用 Gnome Tweaks 启用此功能(在扩展中,启用 Ubuntu appindicators 扩展)。 + +同样,你也可以从 vanilla Gnome 会话启用或禁用 Ubuntu Dock,这在 Ubuntu 会话中是不可能的(使用 Ubuntu 会话时无法从 Gnome Tweaks 禁用 Ubuntu Dock)。 + +如果你不想安装 vanilla Gnome 会话所需的这些额外软件包,那么这个移除 Ubuntu Dock 的这个选项不适合你,请查看其它选项。 + +如果你对此没有意见,以下是你需要做的事情。要在 Ubuntu 中安装普通的 Gnome 会话,使用以下命令: +``` +sudo apt install vanilla-gnome-desktop + +``` + 
+安装完成后,重启系统。在登录屏幕上,单击用户名,单击 `Sign in` 按钮旁边的齿轮图标,然后选择 `GNOME` 而不是 `Ubuntu`,之后继续登录。 + +![](https://4.bp.blogspot.com/-mc-6H2MZ0VY/W21i_PIJ3pI/AAAAAAAABVo/96UvmRM1QJsbS2so1K8teMhsu7SdYh9zwCLcBGAs/s640/vanilla-gnome-session-ubuntu-login-screen.png) + +如果要撤销此操作并移除 vanilla Gnome 会话,可以使用以下命令清除 vanilla Gnome 软件包,然后删除它安装的依赖项(第二条命令): +``` +sudo apt purge vanilla-gnome-desktop +sudo apt autoremove + +``` + +然后重新启动,并以相同的方式从 GDM 登录屏幕中选择 Ubuntu。 + +**选项 3:从桌面上永久隐藏 Ubuntu Dock,而不是将其移除。** + +如果你希望永久隐藏 Ubuntu Dock,不让它显示在桌面上,但不移除它或使用 vanilla Gnome 会话,你可以使用 Dconf 编辑器轻松完成此操作。这样做的缺点是 Ubuntu Dock 仍然会使用一些系统资源,即使你没有在桌面上使用它,但你也可以轻松恢复它而无需安装或移除任何包。 + +Ubuntu Dock 只对你的桌面隐藏,当你进入叠加模式(Activities)时,你仍然可以看到并从那里使用 Ubuntu Dock。 + +要永久隐藏 Ubuntu Dock,使用 Dconf 编辑器导航到 `/org/gnome/shell/extensions/dash-to-dock` 并禁用以下选项(将它们设置为 false):`autohide`, `dock-fixed` 和 `intellihide`。 + +如果你愿意,可以从命令行实现此目的,运行以下命令: +``` +gsettings set org.gnome.shell.extensions.dash-to-dock autohide false +gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed false +gsettings set org.gnome.shell.extensions.dash-to-dock intellihide false + +``` + +如果你改变主意了并想撤销此操作,你可以使用 Dconf 编辑器从 `/org/gnome/shell/extensions/dash-to-dock` 中启动 `autohide`, `dock-fixed` 和 `intellihide`(将它们设置为 true),或者你可以使用以下这些命令: +``` +gsettings set org.gnome.shell.extensions.dash-to-dock autohide true +gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed true +gsettings set org.gnome.shell.extensions.dash-to-dock intellihide true + +``` + +**选项 4:使用 Dash to Panel 扩展。** + +[Dash to Panel][2] 是 Gnome Shell 的一个高度可配置面板,是 Ubuntu Dock 或 Dash to Dock 的一个很好的替代品(Ubuntu Dock 是从 Dash to Dock 克隆而来的)。安装和启动 Dash to Panel 扩展会禁用 Ubuntu Dock,因此你无需执行其它任何操作。 + +你可以从 [extensions.gnome.org][3] 来安装 Dash to Panel。 + +如果你改变主意并希望重新使用 Ubuntu Dock,那么你可以使用 Gnome Tweaks 应用程序禁用 Dash to Panel,或者通过单击以下网址旁边的 X 按钮完全移除 Dash to Panel: https://extensions.gnome.org/local/。 + +-------------------------------------------------------------------------------- + +via: https://www.linuxuprising.com/2018/08/how-to-remove-or-disable-ubuntu-dock.html + +作者:[Logix][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/118280394805678839070 +[1]:https://bugs.launchpad.net/ubuntu/+source/gnome-tweak-tool/+bug/1713020 +[2]:https://www.linuxuprising.com/2018/05/gnome-shell-dash-to-panel-v14-brings.html +[3]:https://extensions.gnome.org/extension/1160/dash-to-panel/ From 95d6352bac58b5f76a3dd104e7c09a1a5922434d Mon Sep 17 00:00:00 2001 From: dianbanjiu Date: Wed, 24 Oct 2018 22:25:47 +0800 Subject: [PATCH 06/32] translating --- sources/tech/20180101 Manage Your Games Using Lutris In Linux.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180101 Manage Your Games Using Lutris In Linux.md b/sources/tech/20180101 Manage Your Games Using Lutris In Linux.md index e92c96bde2..bf7004dbaf 100644 --- a/sources/tech/20180101 Manage Your Games Using Lutris In Linux.md +++ b/sources/tech/20180101 Manage Your Games Using Lutris In Linux.md @@ -1,3 +1,4 @@ +translating by dianbanjiu Manage Your Games Using Lutris In Linux ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-720x340.jpg) From 57ed76cd501e1d0d82b8c10644b78c8ef807fa46 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 24 Oct 2018 22:42:38 +0800 Subject: [PATCH 07/32] =?UTF-8?q?=E6=B7=BB=E5=8A=A0badge?= MIME-Version: 1.0 Content-Type: 
text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 测试通过,还原检查 --- .travis.yml | 9 ++++ scripts/badge.sh | 10 ++++ scripts/badge/show_status.sh | 92 ++++++++++++++++++++++++++++++++++++ 3 files changed, 111 insertions(+) create mode 100755 scripts/badge.sh create mode 100755 scripts/badge/show_status.sh diff --git a/.travis.yml b/.travis.yml index 1bea11af3d..26ca1955e2 100644 --- a/.travis.yml +++ b/.travis.yml @@ -1,3 +1,12 @@ language: c script: - sh ./scripts/check.sh + - ./scripts/badge.sh + +deploy: + provider: pages + skip_cleanup: true + github_token: $GITHUB_TOKEN + local_dir: build + on: + branch: master diff --git a/scripts/badge.sh b/scripts/badge.sh new file mode 100755 index 0000000000..fd3070c7dc --- /dev/null +++ b/scripts/badge.sh @@ -0,0 +1,10 @@ +#!/usr/bin/env bash +# 重新生成badge +set -o errexit + +SCRIPTS_DIR=$(cd $(dirname "$0") && pwd) +BUILD_DIR=$(cd $SCRIPTS_DIR/.. && pwd)/build +mkdir -p ${BUILD_DIR}/badge +for catalog in published translated translating sources;do + ${SCRIPTS_DIR}/badge/show_status.sh -s ${catalog} > ${BUILD_DIR}/badge/${catalog}.svg +done diff --git a/scripts/badge/show_status.sh b/scripts/badge/show_status.sh new file mode 100755 index 0000000000..aab852b486 --- /dev/null +++ b/scripts/badge/show_status.sh @@ -0,0 +1,92 @@ +#!/usr/bin/env bash + +set -e + +function help() +{ + cat < + + + + + + + + + + + + + + ${comment} + ${comment} + ${num} + ${num} + + +EOF + else + cat< Date: Wed, 24 Oct 2018 23:07:28 +0800 Subject: [PATCH 08/32] PRF:20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md @runningwater --- ...ntrol And Manage CPU Frequency In Linux.md | 31 +++++++++---------- 1 file changed, 14 insertions(+), 17 deletions(-) diff --git a/translated/talk/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md b/translated/talk/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md index ad76c2d42b..6e8852ed4c 100644 --- a/translated/talk/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md +++ b/translated/talk/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md @@ -1,25 +1,25 @@ -CPU 电源管理工具 - Linux 系统中 CPU 主频的控制和管理 +CPU 电源管理器:Linux 系统中 CPU 主频的控制和管理 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/09/Manage-CPU-Frequency-720x340.jpeg) -你使用笔记本的话,可能知道 Linux 系统的电源管理做的很不好。虽然有 **TLP**、[**Laptop Mode Tools** 和 **powertop**][1] 这些工具来辅助减少电量消耗,但跟 Windows 和 Mac OS 系统比较起来,电池的整个使用周期还是不尽如意。此外,还有一种降低功耗的办法就是限制 CPU 的频率。这是可行的,然而却需要编写很复杂的终端命令来设置,所以使用起来不太方便。幸好,有一款名为 **CPU Power Manager** 的 GNOME 扩展插件,可以很容易的就设置和管理你的 CPU 主频。GNOME 桌面系统中,CPU Power Manager 使用名为 **intel_pstate** 的功率驱动程序(几乎所有的 Intel CPU 都支持)来控制和管理 CPU 主频。 +你使用笔记本的话,可能知道 Linux 系统的电源管理做的很不好。虽然有 **TLP**、[**Laptop Mode Tools** 和 **powertop**][1] 这些工具来辅助减少电量消耗,但跟 Windows 和 Mac OS 系统比较起来,电池的整个使用周期还是不尽如意。此外,还有一种降低功耗的办法就是限制 CPU 的频率。这是可行的,然而却需要编写很复杂的终端命令来设置,所以使用起来不太方便。幸好,有一款名为 **CPU Power Manager** 的 GNOME 扩展插件,可以很容易的就设置和管理你的 CPU 主频。GNOME 桌面系统中,CPU Power Manager 使用名为 **intel_pstate** 的频率调整驱动程序(几乎所有的 Intel CPU 都支持)来控制和管理 CPU 主频。 使用这个扩展插件的另一个原因是可以减少系统的发热量,因为很多系统在正常使用中的发热量总让人不舒服,限制 CPU 的主频就可以减低发热量。它还可以减少 CPU 和其他组件的磨损。 ### 安装 CPU Power Manager -首先,进入[**扩展插件主页面**][2],安装此扩展插件。 +首先,进入[扩展插件主页面][2],安装此扩展插件。 安装好插件后,在 GNOME 顶部栏的右侧会出现一个 CPU 图标。点击图标,会出现安装此扩展一个选项提示,如下示: ![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-icon.png) -点击**“尝试安装”**按纽,会弹出输入密码确认框。插件需要 root 权限来添加 policykit 规则,进而控制 CPU 主频。下面是弹出的提示框样子: +点击“尝试安装”按纽,会弹出输入密码确认框。插件需要 root 权限来添加 policykit 规则,进而控制 CPU 主频。下面是弹出的提示框样子: 
![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-1.png) -输入密码,点击**“认证”**按纽,完成安装。最后在 **/usr/share/polkit-1/actions** 目录下添加了一个名为 **mko.cpupower.setcpufreq.policy** 的 policykit 文件。 +输入密码,点击“认证”按纽,完成安装。最后在 `/usr/share/polkit-1/actions` 目录下添加了一个名为 `mko.cpupower.setcpufreq.policy` 的 policykit 文件。 都安装完成后,如果点击右上脚的 CPU 图标,会出现如下所示: @@ -27,12 +27,10 @@ CPU 电源管理工具 - Linux 系统中 CPU 主频的控制和管理 ### 功能特性 - * **查看 CPU 主频:** 显然,你可以通过这个提示窗口看到 CPU 的当前运行频率。 - * **设置最大最小主频:** 使用此扩展,你可以根据列出的最大、最小频率百分比进度条来分别设置其频率限制。一旦设置,CPU 将会严格按照此设置范围运行。 - * **开/关 Turbo Boost:** 这是我最喜欢的功能特性。大多数 Intel CPU 都有 “Turbo Boost” 特性,为了提高额外性能,其中的一个内核为自动进行超频。此功能虽然可以使系统获得更高的性能,但也大大增加功耗。所以,如果不做 CPU 密集运行的话,为节约电能,最好关闭 Turbo Boost 功能。事实上,在我电脑上,我大部分时间是把 Turbo Boost 关闭的。 - * **生成配置文件:** 可以生成最大和最小频率的配置文件,就可以很轻松打开/关闭,而不是每次手工调整设置。 - - + * **查看 CPU 主频:** 显然,你可以通过这个提示窗口看到 CPU 的当前运行频率。 + * **设置最大、最小主频:** 使用此扩展,你可以根据列出的最大、最小频率百分比进度条来分别设置其频率限制。一旦设置,CPU 将会严格按照此设置范围运行。 + * **开/关 Turbo Boost:** 这是我最喜欢的功能特性。大多数 Intel CPU 都有 “Turbo Boost” 特性,为了提高额外性能,其中的一个内核为自动进行超频。此功能虽然可以使系统获得更高的性能,但也大大增加功耗。所以,如果不做 CPU 密集运行的话,为节约电能,最好关闭 Turbo Boost 功能。事实上,在我电脑上,我大部分时间是把 Turbo Boost 关闭的。 + * **生成配置文件:** 可以生成最大和最小频率的配置文件,就可以很轻松打开/关闭,而不是每次手工调整设置。 ### 偏好设置 @@ -40,24 +38,23 @@ CPU 电源管理工具 - Linux 系统中 CPU 主频的控制和管理 ![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-preferences.png) -如你所见,你可以设置是否显示 CPU 主频,也可以设置是否以 **Ghz** 来代替 **Mhz** 显示。 +如你所见,你可以设置是否显示 CPU 主频,也可以设置是否以 **Ghz** 来代替 **Mhz** 显示。 -你也可以编辑和创建/删除配置: +你也可以编辑和创建/删除配置文件: ![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPU-Power-Manager-preferences-1.png) -可以为每个配置分别设置最大、最小主频及开/关 Turbo boost。 +可以为每个配置文件分别设置最大、最小主频及开/关 Turbo boost。 ### 结论 正如我在开始时所说的,Linux 系统的电源管理并不是最好的,许多人总是希望他们的 Linux 笔记本电脑电池能多用几分钟。如果你也是其中一员,就试试此扩展插件吧。为了省电,虽然这是非常规的做法,但有效果。我确实喜欢这个插件,到现在已经使用了好几个月了。 -What do you think about this extension? Put your thoughts in the comments below!你对此插件有何看法呢?请把你的观点留在下面的评论区吧。 +你对此插件有何看法呢?请把你的观点留在下面的评论区吧。 祝贺! 
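
顺便一提,如果你好奇这个扩展在底层到底调整了什么,下面是一个示意性的命令序列(假设内核使用的是 intel_pstate 驱动,这些 sysfs 路径在不同内核版本上可能略有差异),它直接读取和设置同样的频率限制:
```
# 查看当前允许的最大、最小主频(以处理器最大性能的百分比表示)
cat /sys/devices/system/cpu/intel_pstate/max_perf_pct
cat /sys/devices/system/cpu/intel_pstate/min_perf_pct

# 查看 Turbo Boost 状态(1 表示已关闭)
cat /sys/devices/system/cpu/intel_pstate/no_turbo

# 将最大主频限制为 70%,并关闭 Turbo Boost(需要 root 权限)
echo 70 | sudo tee /sys/devices/system/cpu/intel_pstate/max_perf_pct
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
```
这个扩展大致就是替你完成了这类操作,只不过是借助前面提到的 polkit 规则、以图形界面的方式进行的。
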
- -------------------------------------------------------------------------------- via: https://www.ostechnix.com/cpu-power-manager-control-and-manage-cpu-frequency-in-linux/ @@ -65,7 +62,7 @@ via: https://www.ostechnix.com/cpu-power-manager-control-and-manage-cpu-frequenc 作者:[EDITOR][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[runningwater](https://github.com/runningwater) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 02add8e7c388b666654bfc026279dd2a0caacfc1 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 24 Oct 2018 23:07:52 +0800 Subject: [PATCH 09/32] PUB:20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md @runningwater https://linux.cn/article-10151-1.html --- ...U Power Manager - Control And Manage CPU Frequency In Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/talk => published}/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md (100%) diff --git a/translated/talk/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md b/published/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md similarity index 100% rename from translated/talk/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md rename to published/20180926 CPU Power Manager - Control And Manage CPU Frequency In Linux.md From 4e2f416cae46f3e5444054f7e6606e508d99d164 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 24 Oct 2018 23:36:00 +0800 Subject: [PATCH 10/32] PRF:20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @ypingcn Decentralized 一词已经在区块链领域广泛译做“去中心化”了,所以我采用了该译法。 --- ...Web is Creating a New Decentralized Web.md | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/translated/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md b/translated/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md index e55455508d..776c5e5c8e 100644 --- a/translated/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md +++ b/translated/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md @@ -1,27 +1,27 @@ -万维网的创建者正在创建一个新的分布式网络 +万维网的创建者正在创建一个新的去中心化网络 ====== -**万维网的创建者 Tim Berners-Lee 公布了他计划创建一个新的分布式网络,网络中的数据将由用户控制** +> 万维网(WWW)的创建者 Tim Berners-Lee 公布了他计划创建一个新的去中心化网络,该网络中的数据将由用户控制。 -[Tim Berners-Lee] [1]以创建万维网而闻名,万维网就是你现在所知的互联网。二十多年之后,Tim 致力于将互联网从企业巨头的掌控中解放出来,并通过分布式网络将权力交回给人们。 +[Tim Berners-Lee][1] 以创建万维网而闻名,万维网就是你现在所知的互联网。二十多年之后,Tim 致力于将互联网从企业巨头的掌控中解放出来,并通过去中心化网络Decentralized Web将权力交回给人们。 -Berners-Lee 对互联网“强权”们处理用户数据的方式感到不满。所以他[开始致力于他自己的开源项目][2] Solid “来将在网络上的权力归还给人们” +Berners-Lee 对互联网“强权”们处理用户数据的方式感到不满。所以他[开始致力于他自己的开源项目][2] Solid “来将在网络上的权力归还给人们”。 -> Solid 改变了当前用户必须将个人数据交给数字巨头以换取可感知价值的模型。正如我们都已发现的那样,这不符合我们的最佳利益。Solid 是我们如何驱动网络进化以恢复平衡——以一种革命性的方式,让我们每个人完全地控制数据,无论数据是否是个人数据。 +> Solid 改变了当前用户必须将个人数据交给数字巨头以换取可感知价值的模型。正如我们都已发现的那样,这不符合我们的最佳利益。Solid 是我们如何驱动网络进化以恢复平衡 —— 以一种革命性的方式,让我们每个人完全地控制数据,无论数据是否是个人数据。 ![Tim Berners-Lee is creating a decentralized web with open source project Solid][3] -基本上,[Solid][4]是一个使用现有网络构建的平台,在这里你可以创建自己的 “pods” (个人数据存储)。你决定这个 “pods” 将被托管在哪里,谁将访问哪些数据元素以及数据将如何通过这个 pod 分享。 +基本上,[Solid][4] 是一个使用现有网络构建的平台,在这里你可以创建自己的 “pod” (个人数据存储)。你决定这个 “pod” 将被托管在哪里,谁将访问哪些数据元素以及数据将如何通过这个 pod 分享。 
-Berners-Lee 相信 Solid "将以一种全新的方式,授权个人、开发者和企业来构思、构建和寻找创新、可信和有益的应用和服务。" +Berners-Lee 相信 Solid “将以一种全新的方式,授权个人、开发者和企业来构思、构建和寻找创新、可信和有益的应用和服务。” 开发人员需要将 Solid 集成进他们的应用程序和网站中。 Solid 仍在早期阶段,所以目前没有相关的应用程序。但是项目网站宣称“第一批 Solid 应用程序正在开发当中”。 -Berners-Lee 已经创立一家名为[Inrupt][5] 的初创公司,并已从麻省理工学院休假来全职工作在 Solid,来将其”从少部分人的愿景带到多数人的现实“。 +Berners-Lee 已经创立一家名为 [Inrupt][5] 的初创公司,并已从麻省理工学院休学术假来全职工作在 Solid,来将其”从少部分人的愿景带到多数人的现实“。 -如果你对 Solid 感兴趣,[学习如何开发应用程序][6]或者以自己的方式[给项目做贡献][7]。当然,建立和推动 Solid 的广泛采用将需要大量的努力,所以每一点的贡献都将有助于分布式网络的成功。 +如果你对 Solid 感兴趣,可以[学习如何开发应用程序][6]或者以自己的方式[给项目做贡献][7]。当然,建立和推动 Solid 的广泛采用将需要大量的努力,所以每一点的贡献都将有助于去中心化网络的成功。 -你认为[分布式网络][8]会成为现实吗?你是如何看待分布式网络,特别是 Solid 项目的? +你认为[去中心化网络][8]会成为现实吗?你是如何看待去中心化网络,特别是 Solid 项目的? -------------------------------------------------------------------------------- @@ -30,7 +30,7 @@ via: https://itsfoss.com/solid-decentralized-web/ 作者:[Abhishek Prakash][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[ypingcn](https://github.com/ypingcn) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 9a5a4c3e870651d4b1432e305da7165c24547bc4 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 24 Oct 2018 23:36:36 +0800 Subject: [PATCH 11/32] PUB:20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md @ypingcn https://linux.cn/article-10152-1.html --- ...r of the World Wide Web is Creating a New Decentralized Web.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/talk => published}/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md (100%) diff --git a/translated/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md b/published/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md similarity index 100% rename from translated/talk/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md rename to published/20180930 Creator of the World Wide Web is Creating a New Decentralized Web.md From 7901f78600e2eb7e29fa8a0b43d70a002c2e2c36 Mon Sep 17 00:00:00 2001 From: darksun Date: Wed, 24 Oct 2018 23:48:54 +0800 Subject: [PATCH 12/32] =?UTF-8?q?=E9=80=89=E9=A2=98:=20The=20Rise=20and=20?= =?UTF-8?q?Rise=20of=20JSON?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20170921 The Rise and Rise of JSON.md | 93 +++++++++++++++++++ 1 file changed, 93 insertions(+) create mode 100644 sources/talk/20170921 The Rise and Rise of JSON.md diff --git a/sources/talk/20170921 The Rise and Rise of JSON.md b/sources/talk/20170921 The Rise and Rise of JSON.md new file mode 100644 index 0000000000..84a594c89a --- /dev/null +++ b/sources/talk/20170921 The Rise and Rise of JSON.md @@ -0,0 +1,93 @@ +The Rise and Rise of JSON +====== +JSON has taken over the world. Today, when any two applications communicate with each other across the internet, odds are they do so using JSON. It has been adopted by all the big players: Of the ten most popular web APIs, a list consisting mostly of APIs offered by major companies like Google, Facebook, and Twitter, only one API exposes data in XML rather than JSON. Twitter, to take an illustrative example from that list, supported XML until 2013, when it released a new version of its API that dropped XML in favor of using JSON exclusively. 
JSON has also been widely adopted by the programming rank and file: According to Stack Overflow, a question and answer site for programmers, more questions are now asked about JSON than about any other data interchange format. + +![][1] + +XML still survives in many places. It is used across the web for SVGs and for RSS and Atom feeds. When Android developers want to declare that their app requires a permission from the user, they do so in their app’s manifest, which is written in XML. XML also isn’t the only alternative to JSON—some people now use technologies like YAML or Google’s Protocol Buffers. But these are nowhere near as popular as JSON. For the time being, JSON appears to be the go-to format for communicating with other programs over the internet. + +JSON’s dominance is surprising when you consider that as recently as 2005 the web world was salivating over the potential of “Asynchronous JavaScript and XML” and not “Asynchronous JavaScript and JSON.” It is of course possible that this had nothing to do with the relative popularity of the two formats at the time and reflects only that “AJAX” must have seemed a more appealing acronym than “AJAJ.” But even if some people were already using JSON instead of XML in 2005 (and in fact not many people were yet), one still wonders how XML’s fortunes could have declined so precipitously that a mere decade or so later “Asynchronous JavaScript and XML” has become an ironic misnomer. What happened in that decade? How did JSON supersede XML in so many applications? And who came up with this data format now depended on by engineers and systems all over the world? + +### The Birth of JSON + +The first JSON message was sent in April of 2001. Since this was a historically significant moment in computing, the message was sent from a computer in a Bay-Area garage. Douglas Crockford and Chip Morningstar, co-founders of a technology consulting company called State Software, had gathered in Morningstar’s garage to test out an idea. + +Crockford and Morningstar were trying to build AJAX applications well before the term “AJAX” had been coined. Browser support for what they were attempting was not good. They wanted to pass data to their application after the initial page load, but they had not found a way to do this that would work across all the browsers they were targeting. + +Though it’s hard to believe today, Internet Explorer represented the bleeding edge of web browsing in 2001. As early as 1999, Internet Explorer 5 supported a primordial form of XMLHttpRequest, which programmers could access using a framework called ActiveX. Crockford and Morningstar could have used this technology to fetch data for their application, but they could not have used the same solution in Netscape 4, another browser that they sought to support. So Crockford and Morningstar had to use a different system that worked in both browsers. + +The first JSON message looked like this: + +``` + +``` + +Only a small part of the message resembles JSON as we know it today. The message itself is actually an HTML document containing some JavaScript. The part that resembles JSON is just a JavaScript object literal being passed to a function called `receive()`. + +Crockford and Morningstar had decided that they could abuse an HTML frame to send themselves data. They could point a frame at a URL that would return an HTML document like the one above. When the HTML was received, the JavaScript would be run, passing the object literal back to the application. 
This worked as long as you were careful to sidestep browser protections preventing a sub-window from accessing its parent; you can see that Crockford and Mornginstar did that by explicitly setting the document domain. (This frame-based technique, sometimes called the hidden frame technique, was commonly used in the late 90s before the widespread implementation of XMLHttpRequest.) + +The amazing thing about the first JSON message is that it’s not obviously the first usage of a new kind of data format at all. It’s just JavaScript! In fact the idea of using JavaScript this way is so straightforward that Crockford himself has said that he wasn’t the first person to do it—he claims that somebody at Netscape was using JavaScript array literals to communicate information as early as 1996. Since the message is just JavaScript, it doesn’t require any kind of special parsing. The JavaScript interpreter can do it all. + +The first ever JSON message actually ran afoul of the JavaScript interpreter. JavaScript reserves an enormous number of words—there are 64 reserved words as of ECMAScript 6—and Crockford and Morningstar had unwittingly used one in their message. They had used `do` as a key, but `do` is reserved. Since JavaScript has so many reserved words, Crockford decided that, rather than avoid using all those reserved words, he would just mandate that all JSON keys be quoted. A quoted key would be treated as a string by the JavaScript interpreter, meaning that reserved words could be used safely. This is why JSON keys are quoted to this day. + +Crockford and Morningstar realized they had something that could be used in all sorts of applications. They wanted to name their format “JSML”, for JavaScript Markup Language, but found that the acronym was already being used for something called Java Speech Markup Language. So they decided to go with “JavaScript Object Notation”, or JSON. They began pitching it to clients but soon found that clients were unwilling to take a chance on an unknown technology that lacked an official specification. So Crockford decided he would write one. + +In 2002, Crockford bought the domain [JSON.org][2] and put up the JSON grammar and an example implementation of a parser. The website is still up, though it now includes a prominent link to the JSON ECMA standard ratified in 2013. After putting up the website, Crockford did little more to promote JSON, but soon found that lots of people were submitting JSON parser implementations in all sorts of different programming languages. JSON’s lineage clearly tied it to JavaScript, but it became apparent that JSON was well-suited to data interchange between arbitrary pairs of languages. + +### Doing AJAX Wrong + +JSON got a big boost in 2005. That year, a web designer and developer named Jesse James Garrett coined the term “AJAX” in a blog post. He was careful to stress that AJAX wasn’t any one new technology, but rather “several technologies, each flourishing in its own right, coming together in powerful new ways.” AJAX was the name that Garrett was giving to a new approach to web application development that he had noticed gaining favor. His blog post went on to describe how developers could leverage JavaScript and XMLHttpRequest to build new kinds of applications that were more responsive and stateful than the typical web page. He pointed to Gmail and Flickr as examples of websites already relying on AJAX techniques. + +The “X” in “AJAX” stood for XML, of course. 
But in a follow-up Q&A post, Garrett pointed to JSON as an entirely acceptable alternative to XML. He wrote that “XML is the most fully-developed means of getting data in and out of an AJAX client, but there’s no reason you couldn’t accomplish the same effects using a technology like JavaScript Object Notation or any similar means of structuring data.” + +Developers indeed found that they could easily use JSON to build AJAX applications and many came to prefer it to XML. And so, ironically, the interest in AJAX led to an explosion in JSON’s popularity. It was around this time that JSON drew the attention of the blogosphere. + +In 2006, Dave Winer, a prolific blogger and the engineer behind a number of XML-based technologies such as RSS and XML-RPC, complained that JSON was reinventing XML for no good reason. Though one might think that a contest between data interchange formats would be unlikely to engender death threats, Winer wrote: + +> No doubt I can write a routine to parse [JSON], but look at how deep they went to re-invent, XML itself wasn’t good enough for them, for some reason (I’d love to hear the reason). Who did this travesty? Let’s find a tree and string them up. Now. + +It’s easy to understand Winer’s frustration. XML has never been widely loved. Even Winer has said that he does not love XML. But XML was designed to be a system that could be used by everyone for almost anything imaginable. To that end, XML is actually a meta-language that allows you to define domain-specific languages for individual applications—RSS, the web feed technology, and SOAP (Simple Object Access Protocol) are examples. Winer felt that it was important to work toward consensus because of all the benefits a common interchange format could bring. He felt that XML’s flexibility should be able to accommodate everybody’s needs. And yet here was JSON, a format offering no benefits over XML except those enabled by throwing out the cruft that made XML so flexible. + +Crockford saw Winer’s blog post and left a comment on it. In response to the charge that JSON was reinventing XML, Crockford wrote, “The good thing about reinventing the wheel is that you can get a round one.” + +### JSON vs XML + +By 2014, JSON had been officially specified by both an ECMA standard and an RFC. It had its own MIME type. JSON had made it to the big leagues. + +Why did JSON become so much more popular than XML? + +On [JSON.org][2], Crockford summarizes some of JSON’s advantages over XML. He writes that JSON is easier for both humans and machines to understand, since its syntax is minimal and its structure is predictable. Other bloggers have focused on XML’s verbosity and “the angle bracket tax.” Each opening tag in XML must be matched with a closing tag, meaning that an XML document contains a lot of redundant information. This can make an XML document much larger than an equivalent JSON document when uncompressed, but, perhaps more importantly, it also makes an XML document harder to read. + +Crockford has also claimed that another enormous advantage for JSON is that JSON was designed as a data interchange format. It was meant to carry structured information between programs from the very beginning. XML, though it has been used for the same purpose, was originally designed as a document markup language. It evolved from SGML (Standard Generalized Markup Language), which in turn evolved from a markup language called Scribe, intended as a word processing system similar to LaTeX. 
In XML, a tag can contain what is called “mixed content,” or text with inline tags surrounding words or phrases. This recalls the image of an editor marking up a manuscript with a red or blue pen, which is arguably the central metaphor of a markup language. JSON, on the other hand, does not support a clear analogue to mixed content, but that means that its structure can be simpler. A document is best modeled as a tree, but by throwing out the document idea Crockford could limit JSON to dictionaries and arrays, the basic and familiar elements all programmers use to build their programs. + +Finally, my own hunch is that people disliked XML because it was confusing, and it was confusing because it seemed to come in so many different flavors. At first blush, it’s not obvious where the line is between XML proper and its sub-languages like RSS, ATOM, SOAP, or SVG. The first lines of a typical XML document establish the XML version and then the particular sub-language the XML document should conform to. That is a lot of variation to account for already, especially when compared to JSON, which is so straightforward that no new version of the JSON specification is ever expected to be written. The designers of XML, in their attempt to make XML the one data interchange format to rule them all, fell victim to that classic programmer’s pitfall: over-engineering. XML was so generalized that it was hard to use for something simple. + +In 2000, a campaign was launched to get HTML to conform to the XML standard. A specification was published for XML-compliant HTML, thereafter known as XHTML. Some browser vendors immediately started supporting the new standard, but it quickly became obvious that the vast HTML-producing public were unwilling to revise their habits. The new standard called for stricter validation of XHTML than had been the norm for HTML, but too many websites depended on HTML’s forgiving rules. By 2009, an attempt to write a second version of the XHTML standard was aborted when it became clear that the future of HTML was going to be HTML5, a standard that did not insist on XML compliance. + +If the XHTML effort had succeeded, then maybe XML would have become the common data format that its designers hoped it would be. Imagine a world in which HTML documents and API responses had the exact same structure. In such a world, JSON might not have become as ubiquitous as it is today. But I read the failure of XHTML as a kind of moral defeat for the XML camp. If XML wasn’t the best tool for HTML, then maybe there were better tools out there for other applications also. In that world, our world, it is easy to see how a format as simple and narrowly tailored as JSON could find great success. + +If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][3] on Twitter or subscribe to the [RSS feed][4] to make sure you know when a new post is out. 
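+
+As a footnote, here is what the “angle bracket tax” discussed above looks like in practice. The little record below is entirely made up, but it shows how the same message costs noticeably more characters (and more squinting) when expressed as XML than as JSON:
+
+```
+<message>
+  <to>session</to>
+  <text>Hello world</text>
+  <tags>
+    <tag>greeting</tag>
+    <tag>test</tag>
+  </tags>
+</message>
+```
+
+```
+{
+  "to": "session",
+  "text": "Hello world",
+  "tags": ["greeting", "test"]
+}
+```
+
+Every element in the XML version pays for a closing tag, while the JSON version is nothing more than a dictionary holding two strings and an array.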
+ +-------------------------------------------------------------------------------- + +via: https://twobithistory.org/2017/09/21/the-rise-and-rise-of-json.html + +作者:[Two-Bit History][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twobithistory.org +[b]: https://github.com/lujun9972 +[1]: https://twobithistory.org/images/json.svg +[2]: http://JSON.org +[3]: https://twitter.com/TwoBitHistory +[4]: https://twobithistory.org/feed.xml From 0b3dca8ac93c8cff7ba01f88ad8f3cf50a590b67 Mon Sep 17 00:00:00 2001 From: lctt-bot Date: Wed, 24 Oct 2018 17:00:34 +0000 Subject: [PATCH 13/32] Revert "Update 20180209 How writing can change your career for the better, even if you don-t identify as a writer.md" This reverts commit fb10ea9fff64158f48906d13a426f1499ebc0229. --- ...er for the better, even if you don-t identify as a writer.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md b/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md index 98d57bcca3..55618326c6 100644 --- a/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md +++ b/sources/talk/20180209 How writing can change your career for the better, even if you don-t identify as a writer.md @@ -1,4 +1,4 @@ -How writing can change your career for the better, even if you don't identify as a writer Translating by FelixYFZ +How writing can change your career for the better, even if you don't identify as a writer ====== Have you read Marie Kondo's book [The Life-Changing Magic of Tidying Up][1]? Or did you, like me, buy it and read a little bit and then add it to the pile of clutter next to your bed? From 0975aa41ce1f6a49845e6e34b1c193d67c25951b Mon Sep 17 00:00:00 2001 From: darksun Date: Thu, 25 Oct 2018 06:51:22 +0800 Subject: [PATCH 14/32] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Whatever=20Happened?= =?UTF-8?q?=20to=20the=20Semantic=20Web=3F?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...7 Whatever Happened to the Semantic Web.md | 106 ++++++++++++++++++ 1 file changed, 106 insertions(+) create mode 100644 sources/talk/20180527 Whatever Happened to the Semantic Web.md diff --git a/sources/talk/20180527 Whatever Happened to the Semantic Web.md b/sources/talk/20180527 Whatever Happened to the Semantic Web.md new file mode 100644 index 0000000000..22d48c150a --- /dev/null +++ b/sources/talk/20180527 Whatever Happened to the Semantic Web.md @@ -0,0 +1,106 @@ +Whatever Happened to the Semantic Web? +====== +In 2001, Tim Berners-Lee, inventor of the World Wide Web, published an article in Scientific American. Berners-Lee, along with two other researchers, Ora Lassila and James Hendler, wanted to give the world a preview of the revolutionary new changes they saw coming to the web. Since its introduction only a decade before, the web had fast become the world’s best means for sharing documents with other people. Now, the authors promised, the web would evolve to encompass not just documents but every kind of data one could imagine. + +They called this new web the Semantic Web. The great promise of the Semantic Web was that it would be readable not just by humans but also by machines. 
Pages on the web would be meaningful to software programs—they would have semantics—allowing programs to interact with the web the same way that people do. Programs could exchange data across the Semantic Web without having to be explicitly engineered to talk to each other. According to Berners-Lee, Lassila, and Hendler, a typical day living with the myriad conveniences of the Semantic Web might look something like this: + +> The entertainment system was belting out the Beatles’ “We Can Work It Out” when the phone rang. When Pete answered, his phone turned the sound down by sending a message to all the other local devices that had a volume control. His sister, Lucy, was on the line from the doctor’s office: “Mom needs to see a specialist and then has to have a series of physical therapy sessions. Biweekly or something. I’m going to have my agent set up the appointments.” Pete immediately agreed to share the chauffeuring. At the doctor’s office, Lucy instructed her Semantic Web agent through her handheld Web browser. The agent promptly retrieved the information about Mom’s prescribed treatment within a 20-mile radius of her home and with a rating of excellent or very good on trusted rating services. It then began trying to find a match between available appointment times (supplied by the agents of individual providers through their Web sites) and Pete’s and Lucy’s busy schedules. + +The vision was that the Semantic Web would become a playground for intelligent “agents.” These agents would automate much of the work that the world had only just learned to do on the web. + +![][1] + +For a while, this vision enticed a lot of people. After new technologies such as AJAX led to the rise of what Silicon Valley called Web 2.0, Berners-Lee began referring to the Semantic Web as Web 3.0. Many thought that the Semantic Web was indeed the inevitable next step. A New York Times article published in 2006 quotes a speech Berners-Lee gave at a conference in which he said that the extant web would, twenty years in the future, be seen as only the “embryonic” form of something far greater. A venture capitalist, also quoted in the article, claimed that the Semantic Web would be “profound,” and ultimately “as obvious as the web seems obvious to us today.” + +Of course, the Semantic Web we were promised has yet to be delivered. In 2018, we have “agents” like Siri that can do certain tasks for us. But Siri can only do what it can because engineers at Apple have manually hooked it up to a medley of web services each capable of answering only a narrow category of questions. An important consequence is that, without being large and important enough for Apple to care, you cannot advertise your services directly to Siri from your own website. Unlike the physical therapists that Berners-Lee and his co-authors imagined would be able to hang out their shingles on the web, today we are stuck with giant, centralized repositories of information. Today’s physical therapists must enter information about their practice into Google or Yelp, because those are the only services that the smartphone agents know how to use and the only ones human beings will bother to check. 
The key difference between our current reality and the promised Semantic future is best captured by this throwaway aside in the excerpt above: “…appointment times (supplied by the agents of individual providers through **their** Web sites)…” + +In fact, over the last decade, the web has not only failed to become the Semantic Web but also threatened to recede as an idea altogether. We now hardly ever talk about “the web” and instead talk about “the internet,” which as of 2016 has become such a common term that newspapers no longer capitalize it. (To be fair, they stopped capitalizing “web” too.) Some might still protest that the web and the internet are two different things, but the distinction gets less clear all the time. The web we have today is slowly becoming a glorified app store, just the easiest way among many to download software that communicates with distant servers using closed protocols and schemas, making it functionally identical to the software ecosystem that existed before the web. How did we get here? If the effort to build a Semantic Web had succeeded, would the web have looked different today? Or have there been so many forces working against a decentralized web for so long that the Semantic Web was always going to be stillborn? + +### Semweb Hucksters and Their Metacrap + +To some more practically minded engineers, the Semantic Web was, from the outset, a utopian dream. + +The basic idea behind the Semantic Web was that everyone would use a new set of standards to annotate their webpages with little bits of XML. These little bits of XML would have no effect on the presentation of the webpage, but they could be read by software programs to divine meaning that otherwise would only be available to humans. + +The bits of XML were a way of expressing metadata about the webpage. We are all familiar with metadata in the context of a file system: When we look at a file on our computers, we can see when it was created, when it was last updated, and whom it was originally created by. Likewise, webpages on the Semantic Web would be able to tell your browser who authored the page and perhaps even where that person went to school, or where that person is currently employed. In theory, this information would allow Semantic Web browsers to answer queries across a large collection of webpages. In their article for Scientific American, Berners-Lee and his co-authors explain that you could, for example, use the Semantic Web to look up a person you met at a conference whose name you only partially remember. + +Cory Doctorow, a blogger and digital rights activist, published an influential essay in 2001 that pointed out the many problems with depending on voluntarily supplied metadata. A world of “exhaustive, reliable” metadata would be wonderful, he argued, but such a world was “a pipe-dream, founded on self-delusion, nerd hubris, and hysterically inflated market opportunities.” Doctorow had found himself in a series of debates over the Semantic Web at tech conferences and wanted to catalog the serious issues that the Semantic Web enthusiasts (Doctorow calls them “semweb hucksters”) were overlooking. The essay, titled “Metacrap,” identifies seven problems, among them the obvious fact that most web users were likely to provide either no metadata at all or else lots of misleading metadata meant to draw clicks. 
Even if users were universally diligent and well-intentioned, in order for the metadata to be robust and reliable, users would all have to agree on a single representation for each important concept. Doctorow argued that in some cases a single representation might not be appropriate, desirable, or fair to all users. + +Indeed, the web had already seen people abusing the HTML `` tag (introduced at least as early as HTML 4) in an attempt to improve the visibility of their webpages in search results. In a 2004 paper, Ben Munat, then an academic at Evergreen State College, explains how search engines once experimented with using keywords supplied via the `` tag to index results, but soon discovered that unscrupulous webpage authors were including tags unrelated to the actual content of their webpage. As a result, search engines came to ignore the `` tag in favor of using complex algorithms to analyze the actual content of a webpage. Munat concludes that a general-purpose Semantic Web is unworkable, and that the focus should be on specific domains within medicine and science. + +Others have also seen the Semantic Web project as tragically flawed, though they have located the flaw elsewhere. Aaron Swartz, the famous programmer and another digital rights activist, wrote in an unfinished book about the Semantic Web published after his death that Doctorow was “attacking a strawman.” Nobody expected that metadata on the web would be thoroughly accurate and reliable, but the Semantic Web, or at least a more realistically scoped version of it, remained possible. The problem, in Swartz’ view, was the “formalizing mindset of mathematics and the institutional structure of academics” that the “semantic Webheads” brought to bear on the challenge. In forums like the World Wide Web Consortium (W3C), a huge amount of effort and discussion went into creating standards before there were any applications out there to standardize. And the standards that emerged from these “Talmudic debates” were so abstract that few of them ever saw widespread adoption. The few that did, like XML, were “uniformly scourges on the planet, offenses against hardworking programmers that have pushed out sensible formats (like JSON) in favor of overly-complicated hairballs with no basis in reality.” The Semantic Web might have thrived if, like the original web, its standards were eagerly adopted by everyone. But that never happened because—as [has been discussed][2] on this blog before—the putative benefits of something like XML are not easy to sell to a programmer when the alternatives are both entirely sufficient and much easier to understand. + +### Building the Semantic Web + +If the Semantic Web was not an outright impossibility, it was always going to require the contributions of lots of clever people working in concert. + +The long effort to build the Semantic Web has been said to consist of four phases. The first phase, which lasted from 2001 to 2005, was the golden age of Semantic Web activity. Between 2001 and 2005, the W3C issued a slew of new standards laying out the foundational technologies of the Semantic future. + +The most important of these was the Resource Description Framework (RDF). The W3C issued the first version of the RDF standard in 2004, but RDF had been floating around since 1997, when a W3C working group introduced it in a draft specification. 
RDF was originally conceived of as a tool for modeling metadata and was partly based on earlier attempts by Ramanathan Guha, an Apple engineer, to develop a metadata system for files stored on Apple computers. The Semantic Web working groups at W3C repurposed RDF to represent arbitrary kinds of general knowledge. + +RDF would be the grammar in which Semantic webpages expressed information. The grammar is a simple one: Facts about the world are expressed in RDF as triplets of subject, predicate, and object. Tim Bray, who worked with Ramanathan Guha on an early version of RDF, gives the following example, describing TV shows and movies: + +``` +@prefix rdf: . + +@prefix ex: . + + +ex:vincent_donofrio ex:starred_in ex:law_and_order_ci . + +ex:law_and_order_ci rdf:type ex:tv_show . + +ex:the_thirteenth_floor ex:similar_plot_as ex:the_matrix . +``` + +The syntax is not important, especially since RDF can be represented in a number of formats, including XML and JSON. This example is in a format called Turtle, which expresses RDF triplets as straightforward sentences terminated by periods. The three essential sentences, which appear above after the `@prefix` preamble, state three facts: Vincent Donofrio starred in Law and Order, Law and Order is a type of TV Show, and the movie The Thirteenth Floor has a similar plot as The Matrix. (If you don’t know who Vincent Donofrio is and have never seen The Thirteenth Floor, I, too, was watching Nickelodeon and sipping Capri Suns in 1999.) + +Other specifications finalized and drafted during this first era of Semantic Web development describe all the ways in which RDF can be used. RDF in Attributes (RDFa) defines how RDF can be embedded in HTML so that browsers, search engines, and other programs can glean meaning from a webpage. RDF Schema and another standard called OWL allows RDF authors to demarcate the boundary between valid and invalid RDF statements in their RDF documents. RDF Schema and OWL, in other words, are tools for creating what are known as ontologies, explicit specifications of what can and cannot be said within a specific domain. An ontology might include a rule, for example, expressing that no person can be the mother of another person without also being a parent of that person. The hope was that these ontologies would be widely used not only to check the accuracy of RDF found in the wild but also to make inferences about omitted information. + +In 2006, Tim Berners-Lee posted a short article in which he argued that the existing work on Semantic Web standards needed to be supplemented by a concerted effort to make semantic data available on the web. Furthermore, once on the web, it was important that semantic data link to other kinds of semantic data, ensuring the rise of a data-based web as interconnected as the existing web. Berners-Lee used the term “linked data” to describe this ideal scenario. Though “linked data” was in one sense just a recapitulation of the original vision for the Semantic Web, it became a term that people could rally around and thus amounted to a rebranding of the Semantic Web project. + +Berners-Lee’s article launched the second phase of the Semantic Web’s development, where the focus shifted from setting standards and building toy examples to creating and popularizing large RDF datasets. Perhaps the most successful of these datasets was [DBpedia][3], a giant repository of RDF triplets extracted from Wikipedia articles. 
DBpedia, which made heavy use of the Semantic Web standards that had been developed in the first half of the 2000s, was a standout example of what could be accomplished using the W3C’s new formats. Today DBpedia describes 4.58 million entities and is used by organizations like the NY Times, BBC, and IBM, which employed DBpedia as a knowledge source for IBM Watson, the Jeopardy-winning artificial intelligence system. + +![][4] + +The third phase of the Semantic Web’s development involved adapting the W3C’s standards to fit the actual practices and preferences of web developers. By 2008, JSON had begun its meteoric rise to popularity. Whereas XML came packaged with a bunch of associated technologies of indeterminate purpose (XLST, XPath, XQuery, XLink), JSON was just JSON. It was less verbose and more readable. Manu Sporny, an entrepreneur and member of the W3C, had already started using JSON at his company and wanted to find an easy way for RDFa and JSON to work together. The result would be JSON-LD, which in essence was RDF reimagined for a world that had chosen JSON over XML. Sporny, together with his CTO, Dave Longley, issued a draft specification of JSON-LD in 2010. For the next few years, JSON-LD and an updated RDF specification would be the primary focus of Semantic Web work at the W3C. JSON-LD could be used on its own or it could be embedded within a `