From 1e43fdcb15bdac3554f6cb980746c7a7ab94378c Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 14 Oct 2018 22:22:09 +0800 Subject: [PATCH 001/140] PRF:20180910 How To List An Available Package Groups In Linux.md @HankChow --- ...st An Available Package Groups In Linux.md | 50 +++++++++---------- 1 file changed, 23 insertions(+), 27 deletions(-) diff --git a/translated/tech/20180910 How To List An Available Package Groups In Linux.md b/translated/tech/20180910 How To List An Available Package Groups In Linux.md index b192e6c5f0..dc4ebafc9f 100644 --- a/translated/tech/20180910 How To List An Available Package Groups In Linux.md +++ b/translated/tech/20180910 How To List An Available Package Groups In Linux.md @@ -1,10 +1,11 @@ 如何在 Linux 中列出可用的软件包组 ====== + 我们知道,如果想要在 Linux 中安装软件包,可以使用软件包管理器来进行安装。由于系统管理员需要频繁用到软件包管理器,所以它是 Linux 当中的一个重要工具。 但是如果想一次性安装一个软件包组,在 Linux 中有可能吗?又如何通过命令去实现呢? -在 Linux 中确实可以用软件包管理器来达到这样的目的。很多软件包管理器都有这样的选项来实现这个功能,但就我所知,`apt` 或 `apt-get` 软件包管理器却并没有这个选项。因此对基于 Debian 的系统,需要使用的命令是 `tasksel`,而不是 `apt`或 `apt-get` 这样的官方软件包管理器。 +在 Linux 中确实可以用软件包管理器来达到这样的目的。很多软件包管理器都有这样的选项来实现这个功能,但就我所知,`apt` 或 `apt-get` 软件包管理器却并没有这个选项。因此对基于 Debian 的系统,需要使用的命令是 `tasksel`,而不是 `apt` 或 `apt-get` 这样的官方软件包管理器。 在 Linux 中安装软件包组有很多好处。对于 LAMP 来说,安装过程会包含多个软件包,但如果安装软件包组命令来安装,只安装一个包就可以了。 @@ -13,19 +14,20 @@ 软件包组是一组用于公共功能的软件包,包括系统工具、声音和视频。 安装软件包组的过程中,会获取到一系列的依赖包,从而大大节省了时间。 **推荐阅读:** -**(#)** [如何在 Linux 上按照大小列出已安装的软件包][1] -**(#)** [如何在 Linux 上查看/列出可用的软件包更新][2] -**(#)** [如何在 Linux 上查看软件包的安装/更新/升级/移除/卸载时间][3] -**(#)** [如何在 Linux 上查看一个软件包的详细信息][4] -**(#)** [如何查看一个软件包是否在你的 Linux 发行版上可用][5] -**(#)** [萌新指导:一个可视化的 Linux 包管理工具][6] -**(#)** [老手必会:命令行软件包管理器的用法][7] + +- [如何在 Linux 上按照大小列出已安装的软件包][1] +- [如何在 Linux 上查看/列出可用的软件包更新][2] +- [如何在 Linux 上查看软件包的安装/更新/升级/移除/卸载时间][3] +- [如何在 Linux 上查看一个软件包的详细信息][4] +- [如何查看一个软件包是否在你的 Linux 发行版上可用][5] +- [萌新指导:一个可视化的 Linux 包管理工具][6] +- [老手必会:命令行软件包管理器的用法][7] ### 如何在 CentOS/RHEL 系统上列出可用的软件包组 RHEL 和 CentOS 系统使用的是 RPM 软件包,因此可以使用 `yum` 软件包管理器来获取相关的软件包信息。 -`yum` 是 Yellowdog Updater, Modified 的缩写,它是一个用于基于 RPM 系统(例如 RHEL 和 CentOS)的,开源的命令行软件包管理工具。它是从分发库或其它第三方库中获取、安装、删除、查询和管理 RPM 包的主要工具。 +`yum` 是 “Yellowdog Updater, Modified” 的缩写,它是一个用于基于 RPM 系统(例如 RHEL 和 CentOS)的,开源的命令行软件包管理工具。它是从发行版仓库或其它第三方库中获取、安装、删除、查询和管理 RPM 包的主要工具。 **推荐阅读:** [使用 yum 命令在 RHEL/CentOS 系统上管理软件包][8] @@ -69,10 +71,9 @@ Available Language Groups: . . 
Done - ``` -如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 Performance Tools 组相关联的软件包。 +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “Performance Tools” 组相关联的软件包。 ``` # yum groupinfo "Performance Tools" @@ -103,18 +104,17 @@ Group: Performance Tools tiobench tuned tuned-utils - ``` ### 如何在 Fedora 系统上列出可用的软件包组 Fedora 系统使用的是 DNF 软件包管理器,因此可以通过 DNF 软件包管理器来获取相关的信息。 -DNF 的含义是 Dandified yum。、DNF 软件包管理器是 YUM 软件包管理器的一个分支,它使用 hawkey/libsolv 库作为后端。从 Fedora 18 开始,Aleš Kozumplík 开始着手 DNF 的开发,直到在Fedora 22 开始加入到系统中。 +DNF 的含义是 “Dandified yum”。DNF 软件包管理器是 YUM 软件包管理器的一个分支,它使用 hawkey/libsolv 库作为后端。从 Fedora 18 开始,Aleš Kozumplík 开始着手 DNF 的开发,直到在 Fedora 22 开始加入到系统中。 `dnf` 命令可以在 Fedora 22 及更高版本上安装、更新、搜索和删除软件包, 它可以自动解决软件包的依赖关系并其顺利安装,不会产生问题。 -由于一些长期未被解决的问题的存在,YUM 被 DNF 逐渐取代了。而 Aleš Kozumplík 的 DNF 却并未对 yum 的这些问题作出修补,他认为这是技术上的难题,YUM 团队也从不接受这些更改。而且 YUM 的代码量有 5.6 万行,而 DNF 只有 2.9 万行。因此已经不需要沿着 YUM 的方向继续开发了,重新开一个分支才是更好的选择。 +YUM 被 DNF 取代是由于 YUM 中存在一些长期未被解决的问题。为什么 Aleš Kozumplík 没有对 yum 的这些问题作出修补呢,他认为补丁解决存在技术上的难题,而 YUM 团队也不会马上接受这些更改,还有一些重要的问题。而且 YUM 的代码量有 5.6 万行,而 DNF 只有 2.9 万行。因此已经不需要沿着 YUM 的方向继续开发了,重新开一个分支才是更好的选择。 **推荐阅读:** [在 Fedora 系统上使用 DNF 命令管理软件包][9] @@ -167,13 +167,11 @@ Available Groups: Hardware Support Sound and Video System Tools - ``` -如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 Editor 组相关联的软件包。 +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “Editor” 组相关联的软件包。 ``` - # dnf groupinfo Editors Last metadata expiration check: 0:04:57 ago on Sun 09 Sep 2018 07:10:36 PM IST. @@ -267,7 +265,7 @@ i | yast2_basis | 20150918-25.1 | @System | | yast2_install_wf | 20150918-25.1 | Main Repository (OSS) | ``` -如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 file_server 组相关联的软件包。另外 `zypper` 还允许用户使用不同的选项执行相同的操作。 +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “file_server” 组相关联的软件包。另外 `zypper` 还允许用户使用不同的选项执行相同的操作。 ``` # zypper info file_server @@ -346,7 +344,7 @@ Contents : | yast2-tftp-server | package | Recommended ``` -如果需要列出相关联的软件包,可以执行以下这个命令。 +如果需要列出相关联的软件包,也可以执行以下这个命令。 ``` # zypper info pattern file_server @@ -385,7 +383,7 @@ Contents : | yast2-tftp-server | package | Recommended ``` -如果需要列出相关联的软件包,可以执行以下这个命令。 +如果需要列出相关联的软件包,也可以执行以下这个命令。 ``` # zypper info -t pattern file_server @@ -431,7 +429,7 @@ Contents : [tasksel][11] 是 Debian/Ubuntu 系统上一个很方便的工具,只需要很少的操作就可以用它来安装好一组软件包。可以在 `/usr/share/tasksel` 目录下的 `.desc` 文件中安排软件包的安装任务。 -默认情况下,`tasksel` 工具是作为 Debian 系统的一部分安装的,但桌面版 Ubuntu 则没有自带 `tasksel`,类似软件包管理器中的元包(meta-packages)。 +默认情况下,`tasksel` 工具是作为 Debian 系统的一部分安装的,但桌面版 Ubuntu 则没有自带 `tasksel`,这个功能类似软件包管理器中的元包(meta-packages)。 `tasksel` 工具带有一个基于 zenity 的简单用户界面,例如命令行中的弹出图形对话框。 @@ -483,7 +481,7 @@ u openssh-server OpenSSH server u server Basic Ubuntu server ``` -如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 lamp-server 组相关联的软件包。 +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “lamp-server” 组相关联的软件包。 ``` # tasksel --task-desc "lamp-server" @@ -494,7 +492,7 @@ Selects a ready-made Linux/Apache/MySQL/PHP server. 
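
# 补充示例(假设系统中已安装 tasksel):列出软件包组之后,
# 只需下面这一条命令就可以安装整个软件包组:
# tasksel install lamp-server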
基于 Arch Linux 的系统使用的是 pacman 软件包管理器,因此可以通过 pacman 软件包管理器来获取相关的信息。 -pacman 是 package manager 的缩写。`pacman` 可以用于安装、构建、删除和管理 Arch Linux 软件包。`pacman` 使用 libalpm(Arch Linux Package Management 库,ALPM)作为后端来执行所有操作。 +pacman 是 “package manager” 的缩写。`pacman` 可以用于安装、构建、删除和管理 Arch Linux 软件包。`pacman` 使用 libalpm(Arch Linux Package Management 库,ALPM)作为后端来执行所有操作。 **推荐阅读:** [使用 pacman 在基于 Arch Linux 的系统上管理软件包][13] @@ -536,10 +534,9 @@ realtime sugar-fructose tesseract-data vim-plugins - ``` -如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 gnome 组相关联的软件包。 +如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “gnome” 组相关联的软件包。 ``` # pacman -Sg gnome @@ -603,7 +600,6 @@ Interrupt signal received ``` # pacman -Sg gnome | wc -l 64 - ``` -------------------------------------------------------------------------------- @@ -613,7 +609,7 @@ via: https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/ 作者:[Prakash Subramanian][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8a1b2ab0d3243572c56f93ca73d78c18da5109ba Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 14 Oct 2018 22:22:33 +0800 Subject: [PATCH 002/140] PUB:20180910 How To List An Available Package Groups In Linux.md @HankChow https://linux.cn/article-10116-1.html --- .../20180910 How To List An Available Package Groups In Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180910 How To List An Available Package Groups In Linux.md (100%) diff --git a/translated/tech/20180910 How To List An Available Package Groups In Linux.md b/published/20180910 How To List An Available Package Groups In Linux.md similarity index 100% rename from translated/tech/20180910 How To List An Available Package Groups In Linux.md rename to published/20180910 How To List An Available Package Groups In Linux.md From 661180467ff873e45a18e6fee8413fc0d3ded1f3 Mon Sep 17 00:00:00 2001 From: jrg Date: Sun, 14 Oct 2018 22:32:50 +0800 Subject: [PATCH 003/140] Update 20181010 An introduction to using tcpdump at the Linux command line.md translating by jrg, 20181014 --- ...n introduction to using tcpdump at the Linux command line.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md b/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md index 6998661f23..b498d0ca43 100644 --- a/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md +++ b/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md @@ -1,3 +1,5 @@ +[translation by jrg] + An introduction to using tcpdump at the Linux command line ====== From dcea3cbc4cc51265a019d046fcc7cc130a6c6fbb Mon Sep 17 00:00:00 2001 From: jrg Date: Sun, 14 Oct 2018 22:44:03 +0800 Subject: [PATCH 004/140] Update 20181001 16 iptables tips and tricks for sysadmins.md translating by jrg, 20181014 --- .../tech/20181001 16 iptables tips and tricks for sysadmins.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md b/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md index 9e07971c81..a9b3e77c5a 100644 --- a/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md +++ b/sources/tech/20181001 16 iptables tips and tricks for sysadmins.md @@ -1,3 +1,5 @@ 
+[translating by jrg, 20181014] + 16 iptables tips and tricks for sysadmins ====== Iptables provides powerful capabilities to control traffic coming in and out of your system. From 05f7f45efacbed1ba3599847dc1b770adaf1597a Mon Sep 17 00:00:00 2001 From: jrg Date: Sun, 14 Oct 2018 22:49:50 +0800 Subject: [PATCH 005/140] Update 20180928 Using Grails with jQuery and DataTables.md translating by jrg,20181014 --- .../tech/20180928 Using Grails with jQuery and DataTables.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180928 Using Grails with jQuery and DataTables.md b/sources/tech/20180928 Using Grails with jQuery and DataTables.md index 9a9ad08fb0..0f02fabe8a 100644 --- a/sources/tech/20180928 Using Grails with jQuery and DataTables.md +++ b/sources/tech/20180928 Using Grails with jQuery and DataTables.md @@ -1,3 +1,5 @@ +[translating by jrg 20181014] + Using Grails with jQuery and DataTables ====== From 73f8678eb53a8d8695acd03d0636363095c74d03 Mon Sep 17 00:00:00 2001 From: geekpi Date: Mon, 15 Oct 2018 08:59:14 +0800 Subject: [PATCH 006/140] translated --- .../20181003 Introducing Swift on Fedora.md | 72 ------------------- .../20181003 Introducing Swift on Fedora.md | 70 ++++++++++++++++++ 2 files changed, 70 insertions(+), 72 deletions(-) delete mode 100644 sources/tech/20181003 Introducing Swift on Fedora.md create mode 100644 translated/tech/20181003 Introducing Swift on Fedora.md diff --git a/sources/tech/20181003 Introducing Swift on Fedora.md b/sources/tech/20181003 Introducing Swift on Fedora.md deleted file mode 100644 index 186117cd7c..0000000000 --- a/sources/tech/20181003 Introducing Swift on Fedora.md +++ /dev/null @@ -1,72 +0,0 @@ -translating---geekpi - -Introducing Swift on Fedora -====== - -![](https://fedoramagazine.org/wp-content/uploads/2018/09/swift-816x345.jpg) - -Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. It aims to be the best language for a variety of programming projects, ranging from systems programming to desktop applications and scaling up to cloud services. Read more about it and how to try it out in Fedora. - -### Safe, Fast, Expressive - -Like many modern programming languages, Swift was designed to be safer than C-based languages. For example, variables are always initialized before they can be used. Arrays and integers are checked for overflow. Memory is automatically managed. - -Swift puts intent right in the syntax. To declare a variable, use the var keyword. To declare a constant, use let. - -Swift also guarantees that objects can never be nil; in fact, trying to use an object known to be nil will cause a compile-time error. When using a nil value is appropriate, it supports a mechanism called **optionals**. An optional may contain nil, but is safely unwrapped using the **?** operator. - -Some additional features include: - - * Closures unified with function pointers - * Tuples and multiple return values - * Generics - * Fast and concise iteration over a range or collection - * Structs that support methods, extensions, and protocols - * Functional programming patterns, e.g., map and filter - * Powerful error handling built-in - * Advanced control flow with do, guard, defer, and repeat keywords - - - -### Try Swift out - -Swift is available in Fedora 28 under then package name **swift-lang**. Once installed, run swift and the REPL console starts up. - -``` -$ swift -Welcome to Swift version 4.2 (swift-4.2-RELEASE). Type :help for assistance. 
- 1> let greeting="Hello world!" -greeting: String = "Hello world!" - 2> print(greeting) -Hello world! - 3> greeting = "Hello universe!" -error: repl.swift:3:10: error: cannot assign to value: 'greeting' is a 'let' constant -greeting = "Hello universe!" -~~~~~~~~ ^ - - - 3> - -``` - -Swift has a growing community, and in particular, a [work group][1] dedicated to making it an efficient and effective server-side programming language. Be sure to visit [its home page][2] for more ways to get involved. - -Photo by [Uillian Vargas][3] on [Unsplash][4]. - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/introducing-swift-fedora/ - -作者:[Link Dupont][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/linkdupont/ -[1]: https://swift.org/server/ -[2]: http://swift.org -[3]: https://unsplash.com/photos/7oJpVR1inGk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText -[4]: https://unsplash.com/search/photos/fast?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText diff --git a/translated/tech/20181003 Introducing Swift on Fedora.md b/translated/tech/20181003 Introducing Swift on Fedora.md new file mode 100644 index 0000000000..ead00d327f --- /dev/null +++ b/translated/tech/20181003 Introducing Swift on Fedora.md @@ -0,0 +1,70 @@ +介绍 Fedora上的 Swift +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/09/swift-816x345.jpg) + +Swift 是一种使用现代方法构建安全性、性能和软件设计模式的通用编程语言。它旨在成为各种编程项目的最佳语言,从系统编程到桌面应用程序,以及扩展到云服务。阅读更多关于它的内容以及如何在 Fedora 中尝试它。 + +### 安全、快速、富有表现力 + +与许多现代编程语言一样,Swift 被设计为比基于 C 的语言更安全。例如,变量总是在可以使用之前初始化。检查数组和整数是否溢出。内存自动管理。 + +Swift 将意图放在语法中。要声明变量,请使用 var 关键字。要声明常量,请使用 let。 + +Swift 还保证对象永远不会是 nil。实际上,尝试使用已知为 nil 的对象将导致编译时错误。当使用 nil 值时,它支持一种称为 **optional** 的机制。optional 可能包含 nil,但使用 **?** 运算符可以安全地解包。 + +一些额外的功能包括: + + * 与函数指针统一的闭包 +  * 元组和多个返回值 +  * 泛型 +  * 对范围或集合进行快速而简洁的迭代 +  * 支持方法、扩展和协议的结构体 +  * 函数式编程模式,例如 map 和 filter +  * 内置强大的错误处理 +  * 拥有 do、guard、defer 和 repeat 关键字的高级控制流 + + + +### 尝试 Swift + +Swift 在 Fedora 28 中可用,包名为 **swift-lang**。安装完成后,运行 swift 并启动 REPL 控制台。 + +``` +$ swift +Welcome to Swift version 4.2 (swift-4.2-RELEASE). Type :help for assistance. + 1> let greeting="Hello world!" +greeting: String = "Hello world!" + 2> print(greeting) +Hello world! + 3> greeting = "Hello universe!" +error: repl.swift:3:10: error: cannot assign to value: 'greeting' is a 'let' constant +greeting = "Hello universe!" 
+~~~~~~~~ ^ + + + 3> + +``` + +Swift 有一个不断发展的社区,特别底,有一个[工作组][1]致力于使其成为一种高效且有力的服务器端编程语言。请访问[主页][2]了解更多参与方式。 + +图片由 [Uillian Vargas][3] 发布在 [Unsplash][4] 上。 + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/introducing-swift-fedora/ + +作者:[Link Dupont][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/linkdupont/ +[1]: https://swift.org/server/ +[2]: http://swift.org +[3]: https://unsplash.com/photos/7oJpVR1inGk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[4]: https://unsplash.com/search/photos/fast?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText From 17160ba6da0c13aa3d92df2780fadb7f434e5aac Mon Sep 17 00:00:00 2001 From: geekpi Date: Mon, 15 Oct 2018 09:03:49 +0800 Subject: [PATCH 007/140] translating --- sources/tech/20180927 5 cool tiling window managers.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180927 5 cool tiling window managers.md b/sources/tech/20180927 5 cool tiling window managers.md index f687918c65..14da5f3479 100644 --- a/sources/tech/20180927 5 cool tiling window managers.md +++ b/sources/tech/20180927 5 cool tiling window managers.md @@ -1,3 +1,5 @@ +translating---geekpi + 5 cool tiling window managers ====== From 9adac0e4b15c485da64b8a37692b561f452944fe Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Oct 2018 10:32:39 +0800 Subject: [PATCH 008/140] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20Instal?= =?UTF-8?q?l=20GRUB=20on=20Arch=20Linux=20(UEFI)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ow to Install GRUB on Arch Linux (UEFI).md | 74 +++++++++++++++++++ 1 file changed, 74 insertions(+) create mode 100644 sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md diff --git a/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md b/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md new file mode 100644 index 0000000000..97cb5e0362 --- /dev/null +++ b/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md @@ -0,0 +1,74 @@ +How to Install GRUB on Arch Linux (UEFI) +====== + +![](http://fasterland.net/wp-content/uploads/2018/10/Arch-Linux-Boot-Menu-750x375.jpg) + +Some time ago, I wrote a tutorial on **[how to reinstall Grub][1] on Arch Linux after installing Windows.** + +A few weeks ago, I had to reinstall **Arch Linux** from scratch on my laptop and I discovered installing **Grub** was not as straightforward as I remembered. + +For this reason, I’m going to write this tutorial since **installing Grub on a UEFI bios** during a new **Arch Linux** installation it’s not too easy. + +### Locating the EFI partition + +The first important thing to do for installing **Grub** on **Arch Linux** is to locate the **EFI** partition. +Let’s run the following command in order to locate this partition: + +``` +# fdisk -l +``` + +We need to check the partition marked as **EFI System +**In my case is **/dev/sda2** + +After that, we need to mount this partition, for example, on /boot/efi: + +``` +# mkdir /boot/efi +# mount /dev/sdb2 /boot/efi +``` + +Another important thing to do is adding this partition into the **/etc/fstab** file. 
+ +#### Installing Grub + +Now we can install Grub in our system: + +``` +# grub-mkconfig -o /boot/grub/grub.cfg +# grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB +``` + +#### Adding Windows Automatically into the Grub Menu + +In order to automatically add the **Windows entry into the Grub menu** , we need to install the **os-prober** program: + +``` +# pacman -Sy os-prober +``` + +In order to add the entry item let’s run the following commands: + +``` +# os-prober +# grub-mkconfig -o /boot/grub/grub.cfg +# grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB +``` + +You can find more about Grub on Arch Linux [here][2]. + +-------------------------------------------------------------------------------- + +via: http://fasterland.net/how-to-install-grub-on-arch-linux-uefi.html + +作者:[Francesco Mondello][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://fasterland.net/ +[b]: https://github.com/lujun9972 +[1]: http://fasterland.net/reinstall-grub-arch-linux.html +[2]: https://wiki.archlinux.org/index.php/GRUB From 004a86f18603fde8a732f0800e549f30b3a9028a Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Oct 2018 10:34:00 +0800 Subject: [PATCH 009/140] =?UTF-8?q?=E9=80=89=E9=A2=98:=204=20Must-Have=20T?= =?UTF-8?q?ools=20for=20Monitoring=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 4 Must-Have Tools for Monitoring Linux.md | 102 ++++++++++++++++++ 1 file changed, 102 insertions(+) create mode 100644 sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md diff --git a/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md b/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md new file mode 100644 index 0000000000..beb3bab797 --- /dev/null +++ b/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md @@ -0,0 +1,102 @@ +4 Must-Have Tools for Monitoring Linux +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring-main.jpg?itok=YHLK-gn6) + +Linux. It’s powerful, flexible, stable, secure, user-friendly… the list goes on and on. There are so many reasons why people have adopted the open source operating system. One of those reasons which particularly stands out is its flexibility. Linux can be and do almost anything. In fact, it will (in most cases) go well above what most platforms can. Just ask any enterprise business why they use Linux and open source. + +But once you’ve deployed those servers and desktops, you need to be able to keep track of them. What’s going on? How are they performing? Is something afoot? In other words, you need to be able to monitor your Linux machines. “How?” you ask. That’s a great question, and one with many answers. I want to introduce you to a few such tools—from command line, to GUI, to full-blown web interfaces (with plenty of bells and whistles). From this collection of tools, you can gather just about any kind of information you need. I will stick only with tools that are open source, which will exempt some high-quality, proprietary solutions. But it’s always best to start with open source, and, chances are, you’ll find everything you need to monitor your desktops and servers. So, let’s take a look at four such tools. + +### Top + +We’ll first start with the obvious. 
The top command is a great place to start when you need to monitor what processes are consuming resources. The top command has been around for a very long time and has, for years, been the first tool I turn to when something is amiss. What top does is provide a real-time view of all running processes on a Linux machine. The top command not only displays dynamic information about each running process (as well as the necessary information to manage those processes), but also gives you an overview of the machine (such as how many CPUs are found, and how much RAM and swap space is available). When I feel something is going wrong with a machine, I immediately turn to top to see what processes are gobbling up the most CPU and MEM (Figure 1). From there, I can act accordingly.

![top][2]

Figure 1: Top running on Elementary OS.

[Used with permission][3]

There is no need to install anything to use the top command, because it is installed on almost every Linux distribution by default. For more information on top, issue the command man top.

### Glances

If you thought the top command offered up plenty of information, you’ve yet to experience Glances. Glances is another text-based monitoring tool. In similar fashion to top, glances offers a real-time listing of more information about your system than nearly any other monitor of its kind. You’ll see disk/network I/O, thermal readouts, fan speeds, disk usage by hardware device and logical volume, processes, warnings, alerts, and much more. Glances also includes a handy sidebar that displays information about disk, filesystem, network, sensors, and even Docker stats. To enable the sidebar, hit the 2 key (while glances is running). You’ll then see the added information (Figure 2).

![glances][5]

Figure 2: The glances monitor displaying docker stats along with all the other information it offers.

[Used with permission][3]

You won’t find glances installed by default. However, the tool is available in most standard repositories, so it can be installed from the command line or your distribution’s app store, without having to add a third-party repository.

### GNOME System Monitor

If you're not a fan of the command line, there are plenty of tools to make your monitoring life a bit easier. One such tool is GNOME System Monitor, which is a front-end for the top tool. But if you prefer a GUI, you can’t beat this app.

With GNOME System Monitor, you can scroll through the listing of running apps (Figure 3), select an app, and then either end the process (by clicking End Process) or view more details about said process (by clicking the gear icon).

![GNOME System Monitor][7]

Figure 3: GNOME System Monitor in action.

[Used with permission][3]

You can also click any one of the tabs at the top of the window to get even more information about your system. The Resources tab is a very handy way to get real-time data on CPU, Memory, Swap, and Network (Figure 4).

![GNOME System Monitor][9]

Figure 4: The GNOME System Monitor Resources tab in action.

[Used with permission][3]

If you don’t find GNOME System Monitor installed by default, it can be found in the standard repositories, so it’s very simple to add to your system.

### Nagios

If you’re looking for an enterprise-grade network monitoring system, look no further than [Nagios][10]. But don’t think Nagios is limited to only monitoring network traffic.
This system has over 5,000 different add-ons that can be added to expand the system to perfectly meet (and exceed) your needs. The Nagios monitor doesn’t come pre-installed on your Linux distribution and although the install isn’t quite as difficult as some similar tools, it does have some complications. And, because the Nagios version found in many of the default repositories is out of date, you’ll definitely want to install from source. Once installed, you can log into the Nagios web GUI and start monitoring (Figure 5).

![Nagios ][12]

Figure 5: With Nagios you can even start and stop services.

[Used with permission][3]

Of course, at this point, you’ve only installed the core and will also need to walk through the process of installing the plugins. Trust me when I say it’s worth the extra time.

The one caveat with Nagios is that you must manually add any remote hosts to be monitored (outside of the host the system is installed on) via text files. Fortunately, the installation will include sample configuration files (found in /usr/local/nagios/etc/objects) which you can use to create configuration files for remote servers (which are placed in /usr/local/nagios/etc/servers).

Although Nagios can be a challenge to install, it is very much worth the time, as you will wind up with an enterprise-ready monitoring system capable of handling nearly anything you throw at it.

### There’s More Where That Came From

We’ve barely scratched the surface in terms of monitoring tools that are available for the Linux platform. No matter whether you’re looking for a general system monitor or something very specific, a command line or GUI application, you’ll find what you need. These four tools offer an outstanding starting point for any Linux administrator. Give them a try and see if you don’t find exactly the information you need.

Learn more about Linux through the free ["Introduction to Linux"][13] course from The Linux Foundation and edX.
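
One more practical note on the command-line tools above: top can also run non-interactively, which is handy when you want to save a snapshot of system activity from a script or a cron job. Here's a small sketch using standard procps options (-b for batch mode, -n 1 for a single iteration):

```
#!/usr/bin/env bash
# Save a one-shot snapshot of load, memory use, and the busiest processes.
top -b -n 1 | head -n 12 > "/tmp/top-snapshot-$(date +%Y%m%d-%H%M%S).txt"
```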
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/learn/intro-to-linux/2018/10/4-must-have-tools-monitoring-linux + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: /files/images/monitoring1jpg +[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_1.jpg?itok=UiyNGji0 (top) +[3]: /licenses/category/used-permission +[4]: /files/images/monitoring2jpg +[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_2.jpg?itok=K3OxLcvE (glances) +[6]: /files/images/monitoring3jpg +[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_3.jpg?itok=UKcyEDcT (GNOME System Monitor) +[8]: /files/images/monitoring4jpg +[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_4.jpg?itok=orLRH3m0 (GNOME System Monitor) +[10]: https://www.nagios.org/ +[11]: /files/images/monitoring5jpg +[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_5.jpg?itok=RGcLLWL7 (Nagios ) +[13]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux From 68938a14cc5fa2743dcb985a97106d12b762c7ef Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Oct 2018 10:37:07 +0800 Subject: [PATCH 010/140] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20to=20level?= =?UTF-8?q?=20up=20your=20organization's=20security=20expertise?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... your organization-s security expertise.md | 147 ++++++++++++++++++ 1 file changed, 147 insertions(+) create mode 100644 sources/talk/20181011 How to level up your organization-s security expertise.md diff --git a/sources/talk/20181011 How to level up your organization-s security expertise.md b/sources/talk/20181011 How to level up your organization-s security expertise.md new file mode 100644 index 0000000000..e67db6a3fb --- /dev/null +++ b/sources/talk/20181011 How to level up your organization-s security expertise.md @@ -0,0 +1,147 @@ +How to level up your organization's security expertise +====== +These best practices will make your employees more savvy and your organization more secure. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk) + +IT security is critical to every company these days. In the words of former FBI director Robert Mueller: “There are only two types of companies: Those that have been hacked, and those that will be.” + +At the same time, IT security is constantly evolving. We all know we need to keep up with the latest trends in cybersecurity and security tooling, but how can we do that without sacrificing our ability to keep moving forward on our business priorities? + +No single person in your organization can handle all of the security work alone; your entire development and operations team will need to develop an awareness of security tooling and best practices, just like they all need to build skills in open source and in agile software delivery. 
There are a number of best practices that can help you level up the overall security expertise in your company through basic and intermediate education, subject matter experts, and knowledge-sharing.

### Basic education: Annual cybersecurity education and security contact information

At IBM, we all complete an online cybersecurity training class each year. I recommend this as a best practice for other companies as well. The online training is taught at a basic level, and it doesn’t assume that anyone has a technical background. Topics include social engineering, phishing and spearphishing attacks, problematic websites, viruses and worms, and so on. We learn how to avoid situations that may put ourselves or our systems at risk, how to recognize signs of an attempted security breach, and how to report a problem if we notice something that seems suspicious. This online education serves the purpose of raising the overall security awareness and readiness of the organization at a low per-person cost. A nice side effect of this education is that this basic knowledge can be applied to our personal lives, and we can share what we learned with our family and friends as well.

In addition to the general cybersecurity education, all employees should have annual training on data security and privacy regulations and how to comply with those.

Finally, we make it easy to find the Corporate Security Incident Response team by sharing the link to its website in prominent places, including Slack, and setting up suggested matches to ensure that a search of our internal website will send people to the right place:

![](https://opensource.com/sites/default/files/uploads/security_search_screen.png)

### Intermediate education: Learn from your tools

Another great source of security expertise is pre-built security tools. For example, we have set up a set of automated security tests that run against our web services using IBM AppScan, and the reports it generates include background knowledge about the vulnerabilities it finds, the severity of the threat, how to determine if your application is susceptible to the vulnerability, and how to fix the problem, with code examples.

Similarly, the free [npm audit command-line tool from npm, Inc.][1] will scan your open source Node.js modules and report any known vulnerabilities it finds. This tool also generates educational audit reports that include the severity of the threat, the vulnerable package and its affected versions, an alternative package or versions that do not have the vulnerability, dependencies, and a link to more detailed information about the vulnerability. Here’s an example of a report from npm audit:

| High          | Regular Expression Denial of Service      |
| ------------- | ----------------------------------------- |
| Package       | minimatch                                 |
| Dependency of | gulp [dev]                                |
| Path          | gulp > vinyl-fs > glob-stream > minimatch |
| More info     | https://nodesecurity.io/advisories/118    |

Any good network-level security tool will also give you information on the types of attacks the tool is blocking and how it recognizes likely attacks. This information is available in the marketing materials online as well as the tool’s console and reports if you have access to those.
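
Coming back to the npm audit example for a moment: generating a report like the one above for your own project takes only a couple of commands. Here's a minimal sketch, assuming npm 6 or later (the release that bundles the audit subcommand) and a project that contains a package-lock.json:

```
$ cd my-node-project   # hypothetical project directory with a package-lock.json
$ npm audit            # print a human-readable vulnerability report
$ npm audit --json     # emit machine-readable output, useful in CI pipelines
$ npm audit fix        # apply compatible dependency updates automatically
```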
+ +Each of your development teams or squads should have at least one subject matter expert who takes the time to read and fully understand the vulnerability reports that are relevant to you. This is often the technical lead, but it could be anyone who is interested in learning more about security. Your local subject matter expert will be able to recognize similar security holes in the future earlier in the development and deployment process. + +Using the npm audit example above, a developer who reads and understands security advisory #118 from this report will be more likely to notice changes that may allow for a Regular Expression Denial of Service when reviewing code in the future. The team’s subject matter expert should also develop the skills needed to determine which of the vulnerability reports don’t actually apply to his or her specific project. + +### Intermediate education: Conferences + +Let’s not forget the value of attending security-related conferences, such as the [OWASP AppSec Conferences][2]. Conferences provide a great way for members of your team to focus on learning for a few days and bring back some of the newest ideas in the field. The “hallway track” of a conference, where we can learn from other practitioners, is also a valuable source of information. As much as most of us dislike being “sold to,” the sponsor hall at a conference is a good place to casually check out new security tools to see which ones you might be interested in evaluating later. + +If your organization is big enough, ask your DevOps and security tool vendors to come to you! If you’ve already procured some great tools, but adoption isn’t going as quickly as you would like, many vendors would be happy to provide your teams with some additional practical training. It’s in their best interests to increase the adoption of their tools (making you more likely to continue paying for their services and to increase your license count), just like it’s in your best interests to maximize the value you get out of the tools you’re paying for. We recently hosted a [Toolbox@IBM][3] \+ DevSecOps summit at our largest sites (those with a couple thousand IT professionals). More than a dozen vendors sponsored each event, came onsite, set up booths, and gave conference talks, just like they would at a technical conference. We also had several of our own presenters speaking about DevOps and security best practices that were working well for them, and we had booths set up by our Corporate Information Security Office, agile coaching, onsite tech support, and internal toolchain teams. We had several hundred attendees at each site. It was great for our technical community because we could focus on the tools that we had already procured, learn how other teams in our company were using them, and make connections to help each other in the future. + +When you send someone to a conference, it’s important to set the expectation that they will come back and share what they’ve learned with the team. We usually do this via an informal brown-bag lunch-and-learn, where people are encouraged to discuss new ideas interactively. + +### Subject-matter experts and knowledge-sharing: The secure engineering guild + +In the IBM Digital Business Group, we’ve adopted the squad model as described by [Spotify][4] and tweaked it to make it work for us. One sometimes-forgotten aspect of the squad model is the guild. Guilds are centers of excellence, focused around one topic or skill set, with members from many squads. 
Guild members learn together, share best practices with each other and their broader teams, and work to advance the state of the art. If you would like to establish your own secure engineering guild, here are some tips that have worked for me in setting up guilds in the past: + +**Step 1: Advertise and recruit** + +Your co-workers are busy people, so for many of them, a secure engineering guild could feel like just one more thing they have to cram into the week that doesn’t involve writing code. It’s important from the outset that the guild has a value proposition that will benefit its members as well as the organization. + +Zane Lackey from [Signal Sciences][5] gave me some excellent advice: It’s important to call out the truth. In the past, he said, security initiatives may have been more of a hindrance or even a blocker to getting work done. Your secure engineering guild needs to focus on ways to make your engineering team’s lives easier and more efficient instead. You need to find ways to automate more of the busywork related to security and to make your development teams more self-sufficient so you don’t have to rely on security “gates” or hurdles late in the development process. + +Here are some things that may attract people to your guild: + + * Learn about security vulnerabilities and what you can do to combat them + * Become a subject matter expert + * Participate in penetration testing + * Evaluate and pilot new security tools + * Add “Secure Engineering Guild” to your resume + + + +Here are some additional guild recruiting tips: + + * Reach out directly to your security experts and ask them to join: security architects, network security administrators, people from your corporate security department, and so on. + + * Bring in an external speaker who can get people excited about secure engineering. Advertise it as “sponsored by the Secure Engineering Guild” and collect names and contact information for people who want to join your guild, both before and after the talk. + + * Get executive support for the program. Perhaps one of your VPs will write a blog post extolling the virtues of secure engineering skills and asking people to join the guild (or perhaps you can draft the blog post for her or him to edit and publish). You can combine that blog post with advertising the external speaker if the timing allows. + + * Ask your management team to nominate someone from each squad to join the guild. This hardline approach is important if you have an urgent need to drive rapid improvement in your security posture. + + + + +**Step 2: Build a team** + +Guild meetings should be structured for action. It’s important to keep an agenda so people know what you plan to cover in each meeting, but leave time at the end for members to bring up any topics they want to discuss. Also be sure to take note of action items, and assign an owner and a target date for each of them. Finally, keep meeting minutes and send a brief summary out after each meeting. + +Your first few guild meetings are your best opportunity to set off on the right foot, with a bit of team-building. I like to run a little design thinking exercise where you ask team members to share their ideas for the guild’s mission statement, vote on their favorites, and use those to craft a simple and exciting mission statement. The mission statement should include three components: WHO will benefit, WHAT the guild will do, and the WOW factor. 
The exercise itself is valuable because you can learn why people have decided to volunteer to be a part of the guild in the first place, and what they hope will come of it. + +Another thing I like to do from the outset is ask people what they’re hoping to achieve as a guild. The guild should learn together, have fun, and do real work. Once you have those ideas out on the table, start putting owners and target dates next to those goals. + + * Would they like to run a book club? Get someone to suggest a book and set up book club meetings. + + * Would they like to share useful articles and blogs? Get someone to set up a Slack channel and invite everyone to it, or set up a shared document where people can contribute their favorite resources. + + * Would they like to pilot a new tool? Get someone to set up a free trial, try it out for their own team, and report back in a few weeks. + + * Would they like to continue a series of talks? Get someone to create a list of topics and speakers and send out the invitations. + + + + +If a few goals end up without owners or dates, that’s OK; just start a to-do list or backlog for people to refer to when they’ve completed their first task. + +Finally, survey the team to find the best time and day of the week for ongoing meetings and set those up. I recommend starting with weekly 30-minute meetings and adjust as needed. + +**Step 3: Keep the energy going, or reboot** + +As the months go on, your guild could start to lose energy. Here are some ways to keep the excitement going or reboot a guild that’s losing energy. + + * Don’t be an echo chamber. Invite people in from various parts of the organization to talk for a few minutes about what they’re doing with respect to security engineering, and where they have concerns or see gaps. + + * Show measurable progress. If you’ve been assigning owners to action items and completing them all along, you’ve certainly made progress, but if you look at it only from week to week, the progress can feel small or insignificant. Once per quarter, take a step back and write a blog about all you’ve accomplished and send it out to your organization. Showing off what you’ve accomplished makes the team proud of what they’ve accomplished, and it’s another opportunity to recruit even more people for your guild. + + * Don’t be afraid to take on a large project. The guild should not be an ivory tower; it should get things done. Your guild may, for example, decide to roll out a new security tool that you love across a large organization. With a little bit of project management and a lot of executive support, you can and should tackle cross-squad projects. The guild members can and should be responsible for getting stories from the large projects prioritized in their own squads’ backlogs and completed in a timely manner. + + * Periodically brainstorm the next set of action items. As time goes by, the most critical or pressing needs of your organization will likely change. People will be more motivated to work on the things they consider most important and urgent. + + * Reward the extra work. You might offer an executive-sponsored cash award for the most impactful secure engineering projects. You might also have the guild itself choose someone to send to a security conference now and then. + + + + +### Go forth, and make your company more secure + +A more secure company starts with a more educated team. 
Building upon that expertise, a secure engineering guild can drive real changes by developing and sharing best practices, finding the right owners for each action item, and driving them to closure. I hope you found a few tips here that will help you level up the security expertise in your organization. Please add your own helpful tips in the comments. + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/how-level-security-expertise-your-organization + +作者:[Ann Marie Fred][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/annmarie99 +[b]: https://github.com/lujun9972 +[1]: https://www.npmjs.com/about +[2]: https://www.owasp.org/index.php/Category:OWASP_AppSec_Conference +[3]: mailto:Toolbox@IBM +[4]: https://medium.com/project-management-learnings/spotify-squad-framework-part-i-8f74bcfcd761 +[5]: https://www.signalsciences.com/ From 2218a1c10555f5eb93020cd408e3c1b3b6d81aa4 Mon Sep 17 00:00:00 2001 From: LuMing <784315443@qq.com> Date: Mon, 15 Oct 2018 10:41:39 +0800 Subject: [PATCH 011/140] translated --- ...ration between developers and designers.md | 67 ------------------- ...ration between developers and designers.md | 66 ++++++++++++++++++ 2 files changed, 66 insertions(+), 67 deletions(-) delete mode 100644 sources/talk/20180502 9 ways to improve collaboration between developers and designers.md create mode 100644 translated/talk/20180502 9 ways to improve collaboration between developers and designers.md diff --git a/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md b/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md deleted file mode 100644 index 637a54ee91..0000000000 --- a/sources/talk/20180502 9 ways to improve collaboration between developers and designers.md +++ /dev/null @@ -1,67 +0,0 @@ -LuuMing translating -9 ways to improve collaboration between developers and designers -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab1.png?itok=ULQdGjlV) - -This article was co-written with [Jason Porter][1]. - -Design is a crucial element in any software project. Sooner or later, the developers' reasons for writing all this code will be communicated to the designers, human beings who aren't as familiar with its inner workings as the development team. - -Stereotypes exist on both side of the divide; engineers often expect designers to be flaky and irrational, while designers often expect engineers to be inflexible and demanding. The truth is considerably more nuanced and, at the end of the day, the fates of designers and developers are forever intertwined. - -Here are nine things that can improve collaboration between the two. - -### 1\. First, knock down the wall. Seriously. - -There are loads of memes about the "wall of confusion" in just about every industry. No matter what else you do, the first step toward tearing down this wall is getting both sides to agree it needs to be gone. Once everyone agrees the existing processes aren't functioning optimally, you can pick and choose from the rest of these ideas to begin fixing the problems. - -### 2\. Learn to empathize. - -Before rolling up any sleeves to build better communication, take a break. 
This is a great junction point for team building. A time to recognize that we're all people, we all have strengths and weaknesses, and most importantly, we're all on the same team. Discussions around workflows and productivity can become feisty, so it's crucial to build a foundation of trust and cooperation before diving on in. - -### 3\. Recognize differences. - -Designers and developers attack the same problem from different angles. Given a similar problem, designers will seek the solution with the biggest impact while developers will seek the solution with the least amount of waste. These two viewpoints do not have to be mutually exclusive. There is plenty of room for negotiation and compromise, and somewhere in the middle is where the end user receives the best experience possible. - -### 4\. Embrace similarities. - -This is all about workflow. CI/CD, scrum, agile, etc., are all basically saying the same thing: Ideate, iterate, investigate, and repeat. Iteration and reiteration are common denominators for both kinds of work. So instead of running a design cycle followed by a development cycle, it makes much more sense to run them concurrently and in tandem. Syncing cycles allows teams to communicate, collaborate, and influence each other every step of the way. - -### 5\. Manage expectations. - -All conflict can be distilled down to one simple idea: incompatible expectations. Therefore, an easy way to prevent systemic breakdowns is to manage expectations by ensuring that teams are thinking before talking and talking before doing. Setting expectations often evolves organically through everyday conversation. Forcing them to happen by having meetings can be counterproductive. - -### 6\. Meet early and meet often. - -Meeting once at the beginning of work and once at the end simply isn't enough. This doesn't mean you need daily or even weekly meetings. Setting a cadence for meetings can also be counterproductive. Let them happen whenever they're necessary. Great things can happen with impromptu meetings—even at the watercooler! If your team is distributed or has even one remote employee, video conferencing, text chat, or phone calls are all excellent ways to meet. It's important that everyone on the team has multiple ways to communicate with each other. - -### 7\. Build your own lexicon. - -Designers and developers sometimes have different terms for similar ideas. One person's card is another person's tile is a third person's box. Ultimately, the fit and accuracy of a term aren't as important as everyone's agreement to use the same term consistently. - -### 8\. Make everyone a communication steward. - -Everyone in the group is responsible for maintaining effective communication, regardless of how or when it happens. Each person should strive to say what they mean and mean what they say. - -### 9\. Give a darn. - -It only takes one member of a team to sabotage progress. Go all in. If every individual doesn't care about the product or the goal, there will be problems with motivation to make changes or continue the process. - -This article is based on [Designers and developers: Finding common ground for effective collaboration][2], a talk the authors will be giving at [Red Hat Summit 2018][3], which will be held May 8-10 in San Francisco. [Register by May 7][3] to save US$ 500 off of registration. Use discount code **OPEN18** on the payment page to apply the discount. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/5/9-ways-improve-collaboration-developers-designers - -作者:[Jason Brock][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jkbrock -[1]:https://opensource.com/users/lightguardjp -[2]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154267 -[3]:https://www.redhat.com/en/summit/2018 diff --git a/translated/talk/20180502 9 ways to improve collaboration between developers and designers.md b/translated/talk/20180502 9 ways to improve collaboration between developers and designers.md new file mode 100644 index 0000000000..fe53452dd8 --- /dev/null +++ b/translated/talk/20180502 9 ways to improve collaboration between developers and designers.md @@ -0,0 +1,66 @@ +9 个方法,提升开发者与设计师之间的协作 +====== + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab1.png?itok=ULQdGjlV) + +本文由我与 [Jason Porter][1] 共同完成。 + +在任何软件项目中,设计至关重要。设计师不像开发团队那样熟悉其内部工作,但迟早都要知道开发人员写代码的意图。 + +两边都有自己的成见。工程师经常认为设计师们古怪不理性,而设计师也认为工程师们死板要求高。在一天的工作快要结束时,情况会变得更加微妙。设计师和开发者们的命运永远交织在一起。 + +做到以下九件事,便可以增强他们之间的合作 + +### 1\. 首先,说实在的,打破壁垒。 + +几乎每一个行业都有“迷惑之墙wall of confusion”的模子。无论你干什么工作,拆除这堵墙的第一步就是要双方都认同它需要拆除。一旦所有的人都认为现有的流程效率低下,你就可以从其他想法中获得灵感,然后解决问题。 + +### 2\. 学会共情 + +在撸起袖子开始干之前,休息一下。这是团队建设的重要的交汇点。一个时机去认识到:我们都是成人,我们都有自己的优点与缺点,更重要的是,我们是一个团队。围绕工作流程与工作效率的讨论会经常发生,因此在开始之前,建立一个信任与协作的基础至关重要。 + +### 3\. 认识差异 + +设计师和开发者从不同的角度攻克问题。对于相同的问题,设计师会追求更好的效果,而开发者会寻求更高的效率。这两种观点不必互相排斥。谈判和妥协的余地很大,并且在二者之间必然存在一个用户满意度最佳的中点。 + +### 4\. 拥抱共性 + +这一切都是与工作流程相关的。持续集成Continuous Integration/持续交付Continuous Delivery,scrum,agille 等等,都基本上说了一件事:构思,迭代,考察,重复。迭代和重复是两种工作的相同点。因此,不再让开发周期紧跟设计周期,而是同时并行地运行它们,这样会更有意义。同步周期Syncing cycles允许团队在每一步上交流、协作、互相影响。 + +### 5\. 管理期望 + +一切冲突的起因一言以蔽之:期望不符。因此,防止系统性分裂的简单办法就是通过确保团队成员在说之前先想、在做之前先说来管理期望。设定的期望往往会通过日常对话不断演变。强迫团队通过开会以达到其效果可能会适得其反。 + +### 6\. 按需开会 + +只在工作开始和工作结束开一次会远远不够。但也不意味着每天或每周都要开会。定期开会也可能会适得其反。试着按需开会吧。即兴会议可能会发生很棒的事情,即使是在开水房。如果你的团队是分散式的或者甚至有一名远程员工,视频会议,文本聊天或者打电话都是开会的好方法。团队中的每人都有多种方式互相沟通,这一点非常重要。 + +### 7\. 建立词库 + +设计师和开发者有时候对相似的想法有着不同的术语,就像把猫叫了个咪。毕竟,所有人都用的惯比起术语的准确度和适应度更重要。 + +### 8\. 学会沟通 + +无论什么时候,团队中的每个人都有责任去维持一个有效的沟通。每个人都应该努力做到一字一板。 + +### 9\. 
不断改善 + +仅一名团队成员就能破坏整个进度。全力以赴。如果每个人都不关心产品或目标,继续项目或者做出改变的动机就会出现问题。 + +本文参考 [Designers and developers: Finding common ground for effective collaboration][2],演讲的作者将会出席在旧金山五月 8-10 号举办的[Red Hat Summit 2018][3]。[五月 7 号][3]注册将节省 500 美元。支付时使用优惠码 **OPEN18** 以获得更多折扣。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/5/9-ways-improve-collaboration-developers-designers + +作者:[Jason Brock][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[LuuMing](https://github.com/LuuMing) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/jkbrock +[1]:https://opensource.com/users/lightguardjp +[2]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154267 +[3]:https://www.redhat.com/en/summit/2018 From ca03f63b18a44eab67c3cf3aae661e2ce4d77156 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Oct 2018 10:45:08 +0800 Subject: [PATCH 012/140] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Happy=20birthday,?= =?UTF-8?q?=20KDE:=2011=20applications=20you=20never=20knew=20existed?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 11 applications you never knew existed.md | 58 +++++++++++++++++++ 1 file changed, 58 insertions(+) create mode 100644 sources/tech/20181012 Happy birthday, KDE- 11 applications you never knew existed.md diff --git a/sources/tech/20181012 Happy birthday, KDE- 11 applications you never knew existed.md b/sources/tech/20181012 Happy birthday, KDE- 11 applications you never knew existed.md new file mode 100644 index 0000000000..62ad686ccc --- /dev/null +++ b/sources/tech/20181012 Happy birthday, KDE- 11 applications you never knew existed.md @@ -0,0 +1,58 @@ +Happy birthday, KDE: 11 applications you never knew existed +====== +Which fun or quirky app do you need today? +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_DebucketizeOrgChart_A.png?itok=RB3WBeQQ) + +The Linux desktop environment KDE celebrates its 22nd anniversary on October 14 this year. There are a gazillion* applications created by the KDE community of users, many of which provide fun and quirky services. We perused the list and picked out 11 applications you might like to know exist. + +*Not really, but [there are a lot][1]. + +### 11 KDE applications you never knew existed + +1\. [KTeaTime][2] is a timer for steeping tea. Set it by choosing the type of tea you are drinking—green, black, herbal, etc.—and the timer will ding when it's ready to remove the tea bag and drink. + +2\. [KTux][3] is just a screensaver... or is it? Tux is flying in outer space in his green spaceship. + +3\. [Blinken][4] is a memory game based on Simon Says, an electronic game released in 1978. Players are challenged to remember sequences of increasing length. + +4\. [Tellico][5] is a collection manager for organizing your favorite hobby. Maybe you still collect baseball cards. Maybe you're part of a wine club. Maybe you're a serious bookworm. Maybe all three! + +5\. [KRecipes][6] is **not** a simple recipe manager. It's got a lot going on! Shopping lists, nutrient analysis, advanced search, recipe ratings, import/export various formats, and more. + +6\. [KHangMan][7] is based on the classic game Hangman where you guess the word letter by letter. This game is available in several languages, and it can be used to improve your learning of another language. 
It has four categories, one of which is "animals" which is great for kids. + +7\. [KLettres][8] is another app that may help you learn a new language. It teaches the alphabet and challenges the user to read and pronounce syllables. + +8\. [KDiamond][9] is similar to Bejeweled or other single player puzzle games where the goal of the game is to build lines of a certain number of the same type of jewel or object. In this case, diamonds. + +9\. [KolourPaint][10] is a very simple editing tool for your images or app for creating simple vectors. + +10\. [Kiriki][11] is a dice game for 2-6 players similar to Yahtzee. + +11\. [RSIBreak][12] doesn't start with a K. What!? It starts with an "RSI" for "Repetitive Strain Injury," which can occur from working for long hours, day in and day out, with a mouse and keyboard. This app reminds you to take breaks and can be personalized to meet your needs. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/kde-applications + +作者:[Opensource.com][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com +[b]: https://github.com/lujun9972 +[1]: https://www.kde.org/applications/ +[2]: https://www.kde.org/applications/games/kteatime/ +[3]: https://userbase.kde.org/KTux +[4]: https://www.kde.org/applications/education/blinken +[5]: http://tellico-project.org/ +[6]: https://www.kde.org/applications/utilities/krecipes/ +[7]: https://edu.kde.org/khangman/ +[8]: https://edu.kde.org/klettres/ +[9]: https://games.kde.org/game.php?game=kdiamond +[10]: https://www.kde.org/applications/graphics/kolourpaint/ +[11]: https://www.kde.org/applications/games/kiriki/ +[12]: https://userbase.kde.org/RSIBreak From 780385af95912e458cca4245769fb377c75eaa54 Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Oct 2018 10:53:45 +0800 Subject: [PATCH 013/140] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Command=20line=20?= =?UTF-8?q?quick=20tips:=20Reading=20files=20different=20ways?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...uick tips- Reading files different ways.md | 118 ++++++++++++++++++ 1 file changed, 118 insertions(+) create mode 100644 sources/tech/20181012 Command line quick tips- Reading files different ways.md diff --git a/sources/tech/20181012 Command line quick tips- Reading files different ways.md b/sources/tech/20181012 Command line quick tips- Reading files different ways.md new file mode 100644 index 0000000000..30c82c1843 --- /dev/null +++ b/sources/tech/20181012 Command line quick tips- Reading files different ways.md @@ -0,0 +1,118 @@ +Command line quick tips: Reading files different ways +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg) + +Fedora is delightful to use as a graphical operating system. You can point and click your way through just about any task easily. But you’ve probably seen there is a powerful command line under the hood. To try it out in a shell, just open the Terminal application in your Fedora system. This article is one in a series that will show you some common command line utilities. + +In this installment you’ll learn how to read files in different ways. If you open a Terminal to do some work on your system, chances are good that you’ll need to read a file or two. 
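
The examples in this article all work on one small sample file. If you would like to follow along in your own terminal, here is one quick way to create that file up front (a sketch assuming a bash-style shell; the name myfile matches the examples that follow):

```

 # Create myfile with one word per line, as used throughout this article.
 $ printf '%s\n' one two three four five > myfile

```
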
+ +### The whole enchilada + +The **cat** command is well known to terminal users. When you **cat** a file, you’re simply displaying the whole file to the screen. Really what’s happening under the hood is the file is read one line at a time, then each line is written to the screen. + +Imagine you have a file with one word per line, called myfile. To make this clear, the file will contain the word equivalent for a number on each line, like this: + +``` + + one + two + three + four + five + +``` + +So if you **cat** that file, you’ll see this output: + +``` + + $ cat myfile + one + two + three + four + five + +``` + +Nothing too surprising there, right? But here’s an interesting twist. You can also **cat** that file backward. For this, use the **tac** command. (Note that Fedora takes no blame for this debatable humor!) + +``` + + $ tac myfile + five + four + three + two + one + +``` + +The **cat** file also lets you ornament the file in different ways, in case that’s helpful. For instance, you can number lines: + +``` + + $ cat -n myfile + 1 one + 2 two + 3 three + 4 four + 5 five + +``` + +There are additional options that will show special characters and other features. To learn more, run the command **man cat** , and when done just hit **q** to exit back to the shell. + +### Picking over your food + +Often a file is too long to fit on a screen, and you may want to be able to go through it like a document. In that case, try the **less** command: + +``` + + $ less myfile + +``` + +You can use your arrow keys as well as **PgUp/PgDn** to move around the file. Again, you can use the **q** key to quit back to the shell. + +There’s actually a **more** command too, based on an older UNIX command. If it’s important to you to still see the file when you’re done, you might want to use it. The **less** command brings you back to the shell the way you left it, and clears the display of any sign of the file you looked at. + +### Just the appetizer (or dessert) + +Sometimes the output you want is just the beginning of a file. For instance, the file might be so long that when you **cat** the whole thing, the first few lines scroll past before you can see them. The **head** command will help you grab just those lines: + +``` + + $ head -n 2 myfile + one + two + +``` + +In the same way, you can use **tail** to just grab the end of a file: + +``` + + $ tail -n 3 myfile + three + four + five + +``` + +Of course these are only a few simple commands in this area. But they’ll get you started when it comes to reading files. + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/commandline-quick-tips-reading-files-different-ways/ + +作者:[Paul W. Frields][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/pfrields/ +[b]: https://github.com/lujun9972 From 1188f5f1e2456d9cc9921b0458ffa0857dc5a30d Mon Sep 17 00:00:00 2001 From: darksun Date: Mon, 15 Oct 2018 10:55:38 +0800 Subject: [PATCH 014/140] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Lock?= =?UTF-8?q?=20Virtual=20Console=20Sessions=20On=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
Lock Virtual Console Sessions On Linux.md | 129 ++++++++++++++++++
 1 file changed, 129 insertions(+)
 create mode 100644 sources/tech/20181012 How To Lock Virtual Console Sessions On Linux.md

diff --git a/sources/tech/20181012 How To Lock Virtual Console Sessions On Linux.md b/sources/tech/20181012 How To Lock Virtual Console Sessions On Linux.md
new file mode 100644
index 0000000000..8ec111c32c
--- /dev/null
+++ b/sources/tech/20181012 How To Lock Virtual Console Sessions On Linux.md
@@ -0,0 +1,129 @@
How To Lock Virtual Console Sessions On Linux
======

![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-720x340.png)

When you're working on a shared system, you might not want the other users to sneak a peek at your console to see what you're actually doing. If so, I know a simple trick to lock your own session while still allowing other users to use the system on other virtual consoles. Thanks to **Vlock**, which stands for **V**irtual Console **lock**, a command line program to lock one or more sessions on the Linux console. If necessary, you can lock the entire console and disable the virtual console switching functionality altogether. Vlock is especially useful for shared Linux systems which have multiple users with access to the console.

### Installing Vlock

On Arch-based systems, Vlock is provided by the **kbd** package, which is preinstalled by default, so you need not bother with installation.

On Debian, Ubuntu, Linux Mint, run the following command to install Vlock:

```
 $ sudo apt-get install vlock
```

On Fedora:

```
 $ sudo dnf install vlock
```

On RHEL, CentOS:

```
 $ sudo yum install vlock
```

### Lock Virtual Console Sessions On Linux

The general syntax for Vlock is:

```
 vlock [ -acnshv ] [ -t <timeout> ] [ plugins... ]
```

Where,

 * **a** – Lock all virtual console sessions,
 * **c** – Lock current virtual console session,
 * **n** – Switch to new empty console before locking all sessions,
 * **s** – Disable SysRq key mechanism,
 * **t** – Specify the timeout for the screensaver plugins,
 * **h** – Display help section,
 * **v** – Display version.

Let me show you some examples.

**1\. Lock current console session**

When running Vlock without any arguments, it locks the current console session (TTY) by default. To unlock the session, you need to enter either the current user's password or the root password.

```
 $ vlock
```

![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-1-1.gif)

You can also use the **-c** flag to lock the current console session.

```
 $ vlock -c
```

Please note that this command will only lock the current console. You can switch to other consoles by pressing **ALT+F2**. For more details about switching between TTYs, refer to the following guide.

Also, if the system has multiple users, the other users can still access their respective TTYs.

**2\. Lock all console sessions**

To lock all TTYs at the same time and also disable the virtual console switching functionality, run:

```
 $ vlock -a
```

Again, to unlock the console sessions, just press the ENTER key and type your current user's password or the root user password.

Please keep in mind that the **root user can always unlock any vlock session** at any time, unless disabled at compile time.

**3\. Switch to new virtual console before locking all consoles**

It is also possible to make Vlock switch to a new empty virtual console from your X session before locking all consoles. 
To do so, use the **-n** flag.

```
 $ vlock -n
```

**4\. Disable SysRq mechanism**

As you may know, the Magic SysRq key mechanism allows users to perform certain operations even when the system freezes. That means a user could also use SysRq to get around a locked console. To prevent this, pass the **-s** option to disable the SysRq mechanism. Please remember, this only works if the **-a** option is given.

```
 $ vlock -sa
```

For more options and usage details, refer to the help section or the man pages.

```
 $ vlock -h
 $ man vlock
```

Vlock prevents unauthorized users from gaining console access. If you're looking for a simple console locking mechanism for your Linux machine, Vlock is worth checking!

And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-lock-virtual-console-sessions-on-linux/

作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
From 1188f5f1e2456d9cc9921b0458ffa0857dc5a30d Mon Sep 17 00:00:00 2001
From: darksun <lujun9972@gmail.com>
Date: Mon, 15 Oct 2018 11:01:42 +0800
Subject: [PATCH 015/140] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Exploring=20the?=
 =?UTF-8?q?=20Linux=20kernel:=20The=20secrets=20of=20Kconfig/kbuild?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...x kernel- The secrets of Kconfig-kbuild.md | 257 ++++++++++++++++++
 1 file changed, 257 insertions(+)
 create mode 100644 sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md

diff --git a/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md b/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md
new file mode 100644
index 0000000000..8ee4f34897
--- /dev/null
+++ b/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md
@@ -0,0 +1,257 @@
Exploring the Linux kernel: The secrets of Kconfig/kbuild
======
Dive into understanding how the Linux config/build system works.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/compass_map_explore_adventure.jpg?itok=ecCoVTrZ)

The Linux kernel config/build system, also known as Kconfig/kbuild, has been around for a long time, ever since the Linux kernel code migrated to Git. As supporting infrastructure, however, it is seldom in the spotlight; even kernel developers who use it in their daily work never really think about it.

To explore how the Linux kernel is compiled, this article will dive into the Kconfig/kbuild internal process, explain how the .config file and the vmlinux/bzImage files are produced, and introduce a smart trick for dependency tracking.

### Kconfig

The first step in building a kernel is always configuration. Kconfig helps make the Linux kernel highly modular and customizable. 
Kconfig offers the user many config targets:

| Config target | Description |
| --- | --- |
| config | Update current config utilizing a line-oriented program |
| nconfig | Update current config utilizing a ncurses menu-based program |
| menuconfig | Update current config utilizing a menu-based program |
| xconfig | Update current config utilizing a Qt-based frontend |
| gconfig | Update current config utilizing a GTK+ based frontend |
| oldconfig | Update current config utilizing a provided .config as base |
| localmodconfig | Update current config disabling modules not loaded |
| localyesconfig | Update current config converting local mods to core |
| defconfig | New config with default from Arch-supplied defconfig |
| savedefconfig | Save current config as ./defconfig (minimal config) |
| allnoconfig | New config where all options are answered with 'no' |
| allyesconfig | New config where all options are accepted with 'yes' |
| allmodconfig | New config selecting modules when possible |
| alldefconfig | New config with all symbols set to default |
| randconfig | New config with a random answer to all options |
| listnewconfig | List new options |
| olddefconfig | Same as oldconfig but sets new symbols to their default value without prompting |
| kvmconfig | Enable additional options for KVM guest kernel support |
| xenconfig | Enable additional options for xen dom0 and guest kernel support |
| tinyconfig | Configure the tiniest possible kernel |

I think **menuconfig** is the most popular of these targets. The targets are processed by different host programs, which are provided by the kernel and built during kernel building. Some targets have a GUI (for the user's convenience) while most don't. Kconfig-related tools and source code reside mainly under **scripts/kconfig/** in the kernel source. As we can see from **scripts/kconfig/Makefile** , there are several host programs, including **conf** , **mconf** , and **nconf**. Except for **conf** , each of them is responsible for one of the GUI-based config targets, so **conf** deals with most of them.

Logically, Kconfig's infrastructure has two parts: one implements a [new language][1] to define the configuration items (see the Kconfig files under the kernel source), and the other parses the Kconfig language and deals with configuration actions.

Most of the config targets have roughly the same internal process (shown below):

![](https://opensource.com/sites/default/files/uploads/kconfig_process.png)

Note that all configuration items have a default value.

The first step reads the Kconfig file under source root to construct an initial configuration database; then it updates the initial database by reading an existing configuration file according to this priority:

> .config
> /lib/modules/$(shell,uname -r)/.config
> /etc/kernel-config
> /boot/config-$(shell,uname -r)
> ARCH_DEFCONFIG
> arch/$(ARCH)/defconfig

If you are doing GUI-based configuration via **menuconfig** or command-line-based configuration via **oldconfig** , the database is updated according to your customization. Finally, the configuration database is dumped into the .config file.

But the .config file is not the final fodder for kernel building; this is why the **syncconfig** target exists. **syncconfig** used to be a config target called **silentoldconfig** , but it doesn't do what the old name says, so it was renamed. Also, because it is for internal use (not for users), it was dropped from the list.
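
Before looking at what **syncconfig** produces, it may help to see how the user-facing targets above are typically driven. Here is a sketch of one common configuration session (the targets are taken from the table above; the commands assume you are at the top of a kernel source tree):

```
 # Start from the arch-supplied defaults, then customize interactively.
 $ make defconfig
 $ make menuconfig

 # After updating the source tree, carry an existing .config forward,
 # answering any new options with their default values.
 $ make olddefconfig

 # Each of these targets reads and writes .config in the source root.
 $ ls -l .config
```
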

Here is an illustration of what **syncconfig** does:

![](https://opensource.com/sites/default/files/uploads/syncconfig.png)

**syncconfig** takes .config as input and outputs many other files, which fall into three categories:

  * **auto.conf & tristate.conf** are used for makefile text processing. For example, you may see statements like this in a component's makefile:

```
 obj-$(CONFIG_GENERIC_CALIBRATE_DELAY) += calibrate.o
```

  * **autoconf.h** is used in C-language source files.

  * Empty header files under **include/config/** are used for configuration-dependency tracking during kbuild, which is explained below.

After configuration, we will know which files and code pieces are not compiled.

### kbuild

Component-wise building, called _recursive make_ , is a common way for GNU `make` to manage a large project. Kbuild is a good example of recursive make. By dividing source files into different modules/components, each component is managed by its own makefile. When you start building, a top makefile invokes each component's makefile in the proper order, builds the components, and collects them into the final executable.

Kbuild refers to different kinds of makefiles:

  * **Makefile** is the top makefile located in source root.
  * **.config** is the kernel configuration file.
  * **arch/$(ARCH)/Makefile** is the arch makefile, which is the supplement to the top makefile.
  * **scripts/Makefile.*** describes common rules for all kbuild makefiles.
  * Finally, there are about 500 **kbuild makefiles**.

The top makefile includes the arch makefile, reads the .config file, descends into subdirectories, invokes **make** on each component's makefile with the help of routines defined in **scripts/Makefile.*** , builds up each intermediate object, and links all the intermediate objects into vmlinux. The kernel document [Documentation/kbuild/makefiles.txt][2] describes all aspects of these makefiles.

As an example, let's look at how vmlinux is produced on x86-64:

![vmlinux overview][4]

(The illustration is based on Richard Y. Steven's [blog][5]. It was updated and is used with the author's permission.)

All the **.o** files that go into vmlinux first go into their own **built-in.a** , which is indicated via variables **KBUILD_VMLINUX_INIT** , **KBUILD_VMLINUX_MAIN** , **KBUILD_VMLINUX_LIBS** , then are collected into the vmlinux file. 
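
You can watch this recursive descent yourself: kbuild lets you build a single component directory, or even a single intermediate object, from the top of the tree. A quick experiment (assuming an already-configured kernel source tree with ext4 enabled; `V=1` makes each recursive make invocation visible):

```
 # Build just one component; the top makefile descends into it.
 $ make V=1 fs/ext4/

 # Or rebuild one intermediate object inside that component.
 $ make V=1 fs/ext4/inode.o
```
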
+ +Take a look at how recursive make is implemented in the Linux kernel, with the help of simplified makefile code: + +``` + # In top Makefile + vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) +                 +$(call if_changed,link-vmlinux) + + # Variable assignments + vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) $(KBUILD_VMLINUX_LIBS) + + export KBUILD_VMLINUX_INIT := $(head-y) $(init-y) + export KBUILD_VMLINUX_MAIN := $(core-y) $(libs-y2) $(drivers-y) $(net-y) $(virt-y) + export KBUILD_VMLINUX_LIBS := $(libs-y1) + export KBUILD_LDS          := arch/$(SRCARCH)/kernel/vmlinux.lds + + init-y          := init/ + drivers-y       := drivers/ sound/ firmware/ + net-y           := net/ + libs-y          := lib/ + core-y          := usr/ + virt-y          := virt/ + + # Transform to corresponding built-in.a + init-y          := $(patsubst %/, %/built-in.a, $(init-y)) + core-y          := $(patsubst %/, %/built-in.a, $(core-y)) + drivers-y       := $(patsubst %/, %/built-in.a, $(drivers-y)) + net-y           := $(patsubst %/, %/built-in.a, $(net-y)) + libs-y1         := $(patsubst %/, %/lib.a, $(libs-y)) + libs-y2         := $(patsubst %/, %/built-in.a, $(filter-out %.a, $(libs-y))) + virt-y          := $(patsubst %/, %/built-in.a, $(virt-y)) + + # Setup the dependency. vmlinux-deps are all intermediate objects, vmlinux-dirs + # are phony targets, so every time comes to this rule, the recipe of vmlinux-dirs + # will be executed. Refer "4.6 Phony Targets" of `info make` + $(sort $(vmlinux-deps)): $(vmlinux-dirs) ; + + # Variable vmlinux-dirs is the directory part of each built-in.a + vmlinux-dirs    := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \ +                      $(core-y) $(core-m) $(drivers-y) $(drivers-m) \ +                      $(net-y) $(net-m) $(libs-y) $(libs-m) $(virt-y))) + + # The entry of recursive make + $(vmlinux-dirs): +                 $(Q)$(MAKE) $(build)=$@ need-builtin=1 +``` + +The recursive make recipe is expanded, for example: + +``` + make -f scripts/Makefile.build obj=init need-builtin=1 +``` + +This means **make** will go into **scripts/Makefile.build** to continue the work of building each **built-in.a**. With the help of **scripts/link-vmlinux.sh** , the vmlinux file is finally under source root. + +#### Understanding vmlinux vs. bzImage + +Many Linux kernel developers may not be clear about the relationship between vmlinux and bzImage. For example, here is their relationship in x86-64: + +![](https://opensource.com/sites/default/files/uploads/vmlinux-bzimage.png) + +The source root vmlinux is stripped, compressed, put into **piggy.S** , then linked with other peer objects into **arch/x86/boot/compressed/vmlinux**. Meanwhile, a file called setup.bin is produced under **arch/x86/boot**. There may be an optional third file that has relocation info, depending on the configuration of **CONFIG_X86_NEED_RELOCS**. + +A host program called **build** , provided by the kernel, builds these two (or three) parts into the final bzImage file. + +#### Dependency tracking + +Kbuild tracks three kinds of dependencies: + + 1. All prerequisite files (both * **.c** and * **.h** ) + 2. **CONFIG_** options used in all prerequisite files + 3. Command-line dependencies used to compile the target + + + +The first one is easy to understand, but what about the second and third? 
Kernel developers often see code pieces like this:

```
 #ifdef CONFIG_SMP
 __boot_cpu_id = cpu;
 #endif
```

When **CONFIG_SMP** changes, this piece of code should be recompiled. The command line for compiling a source file also matters, because different command lines may result in different object files.

When a **.c** file uses a header file via a **#include** directive, you need to write a rule like this:

```
 main.o: defs.h
         recipe...
```

When managing a large project, you need a lot of these kinds of rules; writing them all would be tedious and boring. Fortunately, most modern C compilers can write these rules for you by looking at the **#include** lines in the source file. For the GNU Compiler Collection (GCC), it is just a matter of adding the command-line parameter **-MD** ; in kbuild, the dependency file name is passed explicitly through the preprocessor, as **scripts/Makefile.lib** shows:

```
 # In scripts/Makefile.lib
 c_flags        = -Wp,-MD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE)     \
                  -include $(srctree)/include/linux/compiler_types.h       \
                  $(__c_flags) $(modkern_cflags)                           \
                  $(basename_flags) $(modname_flags)
```

This would generate a **.d** file with content like:

```
 init_task.o: init/init_task.c include/linux/kconfig.h \
  include/generated/autoconf.h include/linux/init_task.h \
  include/linux/rcupdate.h include/linux/types.h \
  ...
```

Then the host program **[fixdep][6]** takes care of the other two dependencies by taking the **depfile** and command line as input, then outputting a **.<target>.cmd** file in makefile syntax, which records the command line and all the prerequisites (including the configuration) for a target. It looks like this:

```
 # The command line used to compile the target
 cmd_init/init_task.o := gcc -Wp,-MD,init/.init_task.o.d  -nostdinc ...
 ...
 # The dependency files
 deps_init/init_task.o := \
 $(wildcard include/config/posix/timers.h) \
 $(wildcard include/config/arch/task/struct/on/stack.h) \
 $(wildcard include/config/thread/info/in/task.h) \
 ...
   include/uapi/linux/types.h \
   arch/x86/include/uapi/asm/types.h \
   include/uapi/asm-generic/types.h \
   ...
```

A **.<target>.cmd** file will be included during recursive make, providing all the dependency info and helping to decide whether to rebuild a target or not.

The secret behind this is that **fixdep** will parse the **depfile** ( **.d** file), then parse all the dependency files inside, search the text for all the **CONFIG_** strings, convert them to the corresponding empty header file, and add them to the target's prerequisites. Every time the configuration changes, the corresponding empty header file will be updated, too, so kbuild can detect that change and rebuild the target that depends on it. Because the command line is also recorded, it is easy to compare the last and current compiling parameters.

### Looking ahead

Kconfig/kbuild remained the same for a long time until the new maintainer, Masahiro Yamada, joined in early 2017, and now kbuild is under active development again. Don't be surprised if you soon see something different from what's in this article. 
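
As a closing aside, the compiler-driven dependency generation described in the previous section is easy to reproduce outside the kernel tree. Here is a minimal sketch (the file names are invented for the demonstration; any reasonably recent GCC should behave the same way):

```
 # A toy source file that includes one local header.
 $ printf '#include "defs.h"\nint main(void) { return 0; }\n' > main.c
 $ printf '#define ANSWER 42\n' > defs.h

 # -MD makes the compiler emit a make-syntax dependency file as a side effect.
 $ gcc -MD -c main.c

 # The generated rule lists every prerequisite, roughly like this:
 $ cat main.d
 main.o: main.c defs.h
```
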
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/kbuild-and-kconfig + +作者:[Cao Jin][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/pinocchio +[b]: https://github.com/lujun9972 +[1]: https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kconfig-language.txt +[2]: https://www.mjmwired.net/kernel/Documentation/kbuild/makefiles.txt +[3]: https://opensource.com/file/411516 +[4]: https://opensource.com/sites/default/files/uploads/vmlinux_generation_process.png (vmlinux overview) +[5]: https://blog.csdn.net/richardysteven/article/details/52502734 +[6]: https://github.com/torvalds/linux/blob/master/scripts/basic/fixdep.c From df725121ec08b87bb030a94a0ac5fbf028740d31 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 15 Oct 2018 12:09:51 +0800 Subject: [PATCH 016/140] PRF:20180925 Hegemon - A Modular System Monitor Application Written In Rust.md @geekpi --- ...tem Monitor Application Written In Rust.md | 30 +++++++++---------- 1 file changed, 14 insertions(+), 16 deletions(-) diff --git a/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md index 71aace4ce4..b25f2ecb0c 100644 --- a/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md +++ b/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md @@ -1,27 +1,25 @@ -Hegemon - 使用 Rust 编写的模块化系统监视程序 +Hegemon:使用 Rust 编写的模块化系统监视程序 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/09/hegemon-720x340.png) -在类 Unix 系统中监视运行进程时,最常用的程序是 **top** 和 top 的增强版 **htop**。我个人最喜欢的是 htop。但是,开发人员不时会发布这些程序的替代品。top 和 htop 工具的一个替代品是 **Hegemon**。它是使用 **Rust** 语言编写的模块化系统监视程序。 +在类 Unix 系统中监视运行进程时,最常用的程序是 `top` 和它的增强版 `htop`。我个人最喜欢的是 `htop`。但是,开发人员不时会发布这些程序的替代品。`top` 和 `htop` 工具的一个替代品是 `Hegemon`。它是使用 Rust 语言编写的模块化系统监视程序。 关于 Hegemon 的功能,我们可以列出以下这些: - * Hegemon 会监控 CPU、内存和交换页的使用情况。 -  * 它监控系统的温度和风扇速度。 -  * 更新间隔时间可以调整。默认值为 3 秒。 -  * 我们可以通过扩展数据流来展示更详细的图表和其他信息。 -  * 单元测试 -  * 干净的界面 -  * 免费且开源。 - - +* Hegemon 会监控 CPU、内存和交换页的使用情况。 +* 它监控系统的温度和风扇速度。 +* 更新间隔时间可以调整。默认值为 3 秒。 +* 我们可以通过扩展数据流来展示更详细的图表和其他信息。 +* 单元测试。 +* 干净的界面。 +* 自由开源。 ### 安装 Hegemon -确保已安装 **Rust 1.26** 或更高版本。要在 Linux 发行版中安装 Rust,请参阅以下指南: +确保已安装 Rust 1.26 或更高版本。要在 Linux 发行版中安装 Rust,请参阅以下指南: -[Install Rust Programming Language In Linux][2] +- [在 Linux 中安装 Rust 编程语言][2] 另外要安装 [libsensors][1] 库。它在大多数 Linux 发行版的默认仓库中都有。例如,你可以使用以下命令将其安装在基于 RPM 的系统(如 Fedora)中: @@ -51,10 +49,10 @@ $ hegemon ![](https://www.ostechnix.com/wp-content/uploads/2018/09/Hegemon-in-action.gif) -要退出,请按 **Q**。 +要退出,请按 `Q`。 -请注意,hegemon 仍处于早期开发阶段,并不能完全取代 **top** 命令。它可能存在 bug 和功能缺失。如果你遇到任何 bug,请在项目的 github 页面中报告它们。开发人员计划在即将推出的版本中引入更多功能。所以,请关注这个项目。 +请注意,hegemon 仍处于早期开发阶段,并不能完全取代 `top` 命令。它可能存在 bug 和功能缺失。如果你遇到任何 bug,请在项目的 GitHub 页面中报告它们。开发人员计划在即将推出的版本中引入更多功能。所以,请关注这个项目。 就是这些了。希望这篇文章有用。还有更多的好东西。敬请关注! 
@@ -69,7 +67,7 @@ via: https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-writ 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 5d528e4e0ab704b95cdc2d62c2305a7363324802 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 15 Oct 2018 12:14:53 +0800 Subject: [PATCH 017/140] PUB:20180925 Hegemon - A Modular System Monitor Application Written In Rust.md @geekpi https://linux.cn/article-10117-1.html --- ...emon - A Modular System Monitor Application Written In Rust.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md (100%) diff --git a/translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md b/published/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md similarity index 100% rename from translated/tech/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md rename to published/20180925 Hegemon - A Modular System Monitor Application Written In Rust.md From 77972b7e8878148a2f7908c9439dfd2d1e7cfcaa Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 15 Oct 2018 12:19:00 +0800 Subject: [PATCH 018/140] PUB:20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @littleji https://linux.cn/article-10118-1.html --- ... The Lines Of Source Code In Many Programming Languages.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md (96%) diff --git a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/published/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md similarity index 96% rename from translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md rename to published/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md index d90663cd76..c6cd182cb2 100644 --- a/translated/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md +++ b/published/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md @@ -1,9 +1,9 @@ -cloc –– 计算不同编程语言源代码的行数 +cloc:计算不同编程语言源代码的行数 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-720x340.png) -作为一个开发人员,你可能需要不时地向你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。这时,你就需要用到一些代码统计的工具,我知道其中一个是 [**Ohcount**][1]。今天,我遇到了另一个程序,**cloc**。你可以用 cloc 很容易地统计多种语言的源代码行数。它还可以计算空行数、代码行数、实际代码的行数,并通过整齐的表格进行结果输出。cloc 是免费的、开源的跨平台程序,使用 **Perl** 进行开发。 +作为一个开发人员,你可能需要不时地向你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。这时,你就需要用到一些代码统计的工具,我知道其中一个是 [**Ohcount**][1]。今天,我遇到了另一个程序,**cloc**。你可以用 cloc 很容易地统计多种语言的源代码行数。它还可以计算空行数、代码行数、实际代码的行数,并通过整齐的表格进行结果输出。cloc 是自由开源的跨平台程序,使用 **Perl** 进行开发。 ### 特点 From 448c9b7aac2492c7c3f62ff4990c1926f6447096 Mon Sep 17 00:00:00 2001 From: distant1219 Date: Mon, 15 Oct 2018 16:31:03 +0800 Subject: [PATCH 019/140] Update 20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md I want to translate this eaasy --- ...orch 1.0 Preview Release- Facebook-s newest Open Source AI.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open 
Source AI.md b/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md index 6418db9444..08551028b2 100644 --- a/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md +++ b/sources/tech/20181004 PyTorch 1.0 Preview Release- Facebook-s newest Open Source AI.md @@ -1,3 +1,4 @@ +distant1219 is translating PyTorch 1.0 Preview Release: Facebook’s newest Open Source AI ====== Facebook already uses its own Open Source AI, PyTorch quite extensively in its own artificial intelligence projects. Recently, they have gone a league ahead by releasing a pre-release preview version 1.0. From 529cd5c1a25ea915aed368d4e2c140555d76247a Mon Sep 17 00:00:00 2001 From: belitex Date: Mon, 15 Oct 2018 17:06:13 +0800 Subject: [PATCH 020/140] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?How=20Writing=20Can=20Expand=20Your=20Skills=20and=20Grow=20You?= =?UTF-8?q?r=20Career?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Expand Your Skills and Grow Your Career.md | 47 ------------------- ...Expand Your Skills and Grow Your Career.md | 45 ++++++++++++++++++ 2 files changed, 45 insertions(+), 47 deletions(-) delete mode 100644 sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md create mode 100644 translated/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md diff --git a/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md b/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md deleted file mode 100644 index 324d3c8700..0000000000 --- a/sources/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md +++ /dev/null @@ -1,47 +0,0 @@ -translating by belitex -How Writing Can Expand Your Skills and Grow Your Career -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/graffiti-1281310_1920.jpg?itok=RCayfGKv) - -At the recent [Open Source Summit in Vancouver][1], I participated in a panel discussion called [How Writing can Change Your Career for the Better (Even if You don't Identify as a Writer][2]. The panel was moderated by Rikki Endsley, Community Manager and Editor for Opensource.com, and it included VM (Vicky) Brasseur, Open Source Strategy Consultant; Alex Williams, Founder, Editor in Chief, The New Stack; and Dawn Foster, Consultant, The Scale Factory. - -The talk was [inspired by this article][3], in which Rikki examined some ways that writing can "spark joy" and improve your career in unexpected ways. Full disclosure: I have known Rikki for a long time. We worked at the same company for many years, raised our children together, and remain close friends. - -### Write and learn - -As Rikki noted in the talk description, “even if you don't consider yourself to be ‘a writer,’ you should consider writing about your open source contributions, project, or community.” Writing can be a great way to share knowledge and engage others in your work, but it has personal benefits as well. It can help you meet new people, learn new skills, and improve your communication style. - -I find that writing often clarifies for me what I don’t know about a particular topic. The process highlights gaps in my understanding and motivates me to fill in those gaps through further research, reading, and asking questions. - -“Writing about what you don't know can be much harder and more time consuming, but also much more fulfilling and help your career. 
I've found that writing about what I don't know helps me learn, because I have to research it and understand it well enough to explain it,” Rikki said. - -Writing about what you’ve just learned can be valuable to other learners as well. In her blog, [Julia Evans][4] often writes about learning new technical skills. She has a friendly, approachable style along with the ability to break down topics into bite-sized pieces. In her posts, Evans takes readers through her learning process, identifying what was and was not helpful to her along the way, essentially removing obstacles for her readers and clearing a path for those new to the topic. - -### Communicate more clearly - -Writing can help you practice thinking and speaking more precisely, especially if you’re writing (or speaking) for an international audience. [In this article,][5] for example, Isabel Drost-Fromm provides tips for removing ambiguity for non-native English speakers. Writing can also help you organize your thoughts before a presentation, whether you’re speaking at a conference or to your team. - -“The process of writing the articles helps me organize my talks and slides, and it was a great way to provide ‘notes’ for conference attendees, while sharing the topic with a larger international audience that wasn't at the event in person,” Rikki stated. - -If you’re interested in writing, I encourage you to do it. I highly recommend the articles mentioned here as a way to get started thinking about the story you have to tell. Unfortunately, our discussion at Open Source Summit was not recorded, but I hope we can do another talk in the future and share more ideas. - -Check out the schedule of talks for Open Source Summit Europe and sign up to receive updates: - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/2018/9/how-writing-can-help-you-learn-new-skills-and-grow-your-career - -作者:[Amber Ankerholz][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/aankerholz -[1]: https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/ -[2]: https://ossna18.sched.com/event/FAOF/panel-discussion-how-writing-can-change-your-career-for-the-better-even-if-you-dont-identify-as-a-writer-moderated-by-rikki-endsley-opensourcecom-red-hat?iframe=no# -[3]: https://opensource.com/article/18/2/career-changing-magic-writing -[4]: https://jvns.ca/ -[5]: https://www.linux.com/blog/event/open-source-summit-eu/2017/12/technical-writing-international-audience diff --git a/translated/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md b/translated/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md new file mode 100644 index 0000000000..f75c55b892 --- /dev/null +++ b/translated/talk/20180919 How Writing Can Expand Your Skills and Grow Your Career.md @@ -0,0 +1,45 @@ +写作是如何帮助技能拓展和事业成长的 +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/graffiti-1281310_1920.jpg?itok=RCayfGKv) + +在最近的[温哥华开源峰会][1]上,我参加了一个小组讨论,叫做“写作是如何改变你的职业生涯的(即使你不是个作家)”。主持人是 Opensource.com 的社区经理兼编辑 Rikki Endsley,成员有开源策略顾问 VM (Vicky) Brasseur,The New Stack 的创始人兼主编 Alex Williams,还有 The Scale Factory 的顾问 Dawn Foster。 + +Rikki 在她的[这篇文章][3]中总结了一些能愉悦你,并且能以意想不到的方式改善你职业生涯的写作方法,我在峰会上的发言是受她这篇文章的启发。透露一下,我认识 Rikki 
很久了,我们在同一家公司共事了很多年,一起带过孩子,到现在还是很亲密的朋友。 + +### 写作和学习 + +正如 Rikki 对这个小组讨论的描述,“即使你自认为不是一个‘作家’,你也应该考虑写一下对开源的贡献,还有你的项目或者社区”。写作是一种很好的方式,来分享自己的知识并让别人参与到你的工作中来,当然它对个人也有好处。写作能帮助你结识新人,学习新技能,还能改善你的沟通。 + +我发现写作能让我搞清楚自己对某个主题有哪些不懂的地方。写作的过程会让知识体系的空白很突出,这激励了我通过进一步的研究、阅读和提问来填补空白。 + +Rikki 说:“写那些你不知道的东西会更加困难也更加耗时,但是也更有成就感,更有益于你的事业。我发现写我不知道的东西有助于自己学习,因为得研究透彻才能给读者解释清楚。” + +把你刚学到的东西写出来对其他也在学习这些知识的人是很有价值的。[Julia Evans][4] 经常在她的博客里写有关学习新技能的文章。她能把主题分解成一个个小的部分,这种方法对读者很友好,容易上手。Evans 在自己的博客中带领读者了解她的学习过程,指出在这个过程中哪些是对她有用的,哪些是没用的,基本消除了读者的学习障碍,为新手清扫了道路。 + +### 更明确的沟通 + + +写作有助于练习思考和准确讲话,尤其是面向国际受众写作(或演讲)时。例如,在[这篇文章中][5],Isabel Drost-Fromm 为那些母语不是英语的演讲者提供了几个技巧来消除歧义。不管是在会议上还是在自己团队内发言,写作还能帮你在演示之前理清思路。 + +Rikki 说:“写文章的过程有助于我组织整理自己的发言和演示稿,也是一个给参会者提供笔记的好方式,还可以分享给没有参加活动的更多国际观众。” + +如果你有兴趣,我鼓励你去写作。我强烈建议你参考这里提到的文章,开始思考你要写的内容。 不幸的是,我们在开源峰会上的讨论没有记录,但我希望将来能再做一次讨论,分享更多的想法。 + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2018/9/how-writing-can-help-you-learn-new-skills-and-grow-your-career + +作者:[Amber Ankerholz][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[belitex](https://github.com/belitex) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/aankerholz +[1]: https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/ +[2]: https://ossna18.sched.com/event/FAOF/panel-discussion-how-writing-can-change-your-career-for-the-better-even-if-you-dont-identify-as-a-writer-moderated-by-rikki-endsley-opensourcecom-red-hat?iframe=no# +[3]: https://opensource.com/article/18/2/career-changing-magic-writing +[4]: https://jvns.ca/ +[5]: https://www.linux.com/blog/event/open-source-summit-eu/2017/12/technical-writing-international-audience From 8bbe4640dafd97f97bed001048644f9c133bc797 Mon Sep 17 00:00:00 2001 From: way-ww <40491614+way-ww@users.noreply.github.com> Date: Mon, 15 Oct 2018 18:01:50 +0800 Subject: [PATCH 021/140] Delete 20181009 How To Create And Maintain Your Own Man Pages.md --- ... Create And Maintain Your Own Man Pages.md | 199 ------------------ 1 file changed, 199 deletions(-) delete mode 100644 sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md diff --git a/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md b/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md deleted file mode 100644 index cb93af4b92..0000000000 --- a/sources/tech/20181009 How To Create And Maintain Your Own Man Pages.md +++ /dev/null @@ -1,199 +0,0 @@ -Translating by way-ww -How To Create And Maintain Your Own Man Pages -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Um-pages-1-720x340.png) - -We already have discussed about a few [**good alternatives to Man pages**][1]. Those alternatives are mainly used for learning concise Linux command examples without having to go through the comprehensive man pages. If you’re looking for a quick and dirty way to easily and quickly learn a Linux command, those alternatives are worth trying. Now, you might be thinking – how can I create my own man-like help pages for a Linux command? This is where **“Um”** comes in handy. Um is a command line utility, used to easily create and maintain your own Man pages that contains only what you’ve learned about a command so far. 
- -By creating your own alternative to man pages, you can avoid lots of unnecessary, comprehensive details in a man page and include only what is necessary to keep in mind. If you ever wanted to created your own set of man-like pages, Um will definitely help. In this brief tutorial, we will see how to install “Um” command line utility and how to create our own man pages. - -### Installing Um - -Um is available for Linux and Mac OS. At present, it can only be installed using **Linuxbrew** package manager in Linux systems. Refer the following link if you haven’t installed Linuxbrew yet. - -Once Linuxbrew installed, run the following command to install Um utility. - -``` -$ brew install sinclairtarget/wst/um - -``` - -If you will see an output something like below, congratulations! Um has been installed and ready to use. - -``` -[...] -==> Installing sinclairtarget/wst/um -==> Downloading https://github.com/sinclairtarget/um/archive/4.0.0.tar.gz -==> Downloading from https://codeload.github.com/sinclairtarget/um/tar.gz/4.0.0 --=#=# # # -==> Downloading https://rubygems.org/gems/kramdown-1.17.0.gem -######################################################################## 100.0% -==> gem install /home/sk/.cache/Homebrew/downloads/d0a5d978120a791d9c5965fc103866815189a4e3939 -==> Caveats -Bash completion has been installed to: -/home/linuxbrew/.linuxbrew/etc/bash_completion.d -==> Summary -🍺 /home/linuxbrew/.linuxbrew/Cellar/um/4.0.0: 714 files, 1.3MB, built in 35 seconds -==> Caveats -==> openssl -A CA file has been bootstrapped using certificates from the SystemRoots -keychain. To add additional certificates (e.g. the certificates added in -the System keychain), place .pem files in -/home/linuxbrew/.linuxbrew/etc/openssl/certs - -and run -/home/linuxbrew/.linuxbrew/opt/openssl/bin/c_rehash -==> ruby -Emacs Lisp files have been installed to: -/home/linuxbrew/.linuxbrew/share/emacs/site-lisp/ruby -==> um -Bash completion has been installed to: -/home/linuxbrew/.linuxbrew/etc/bash_completion.d - -``` - -Before going to use to make your man pages, you need to enable bash completion for Um. - -To do so, open your **~/.bash_profile** file: - -``` -$ nano ~/.bash_profile - -``` - -And, add the following lines in it: - -``` -if [ -f $(brew --prefix)/etc/bash_completion.d/um-completion.sh ]; then - . $(brew --prefix)/etc/bash_completion.d/um-completion.sh -fi - -``` - -Save and close the file. Run the following commands to update the changes. - -``` -$ source ~/.bash_profile - -``` - -All done. let us go ahead and create our first man page. - -### Create And Maintain Your Own Man Pages - -Let us say, you want to create your own man page for “dpkg” command. To do so, run: - -``` -$ um edit dpkg - -``` - -The above command will open a markdown template in your default editor: - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-dpkg-man-page.png) - -My default editor is Vi, so the above commands open it in the Vi editor. Now, start adding everything you want to remember about “dpkg” command in this template. - -Here is a sample: - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Edit-dpkg-man-page.png) - -As you see in the above output, I have added Synopsis, description and two options for dpkg command. You can add as many as sections you want in the man pages. Make sure you have given proper and easily-understandable titles for each section. Once done, save and quit the file (If you use Vi editor, Press **ESC** key and type **:wq** ). 
- -Finally, view your newly created man page using command: - -``` -$ um dpkg - -``` - -![](http://www.ostechnix.com/wp-content/uploads/2018/10/View-dpkg-man-page.png) - -As you can see, the the dpkg man page looks exactly like the official man pages. If you want to edit and/or add more details in a man page, again run the same command and add the details. - -``` -$ um edit dpkg - -``` - -To view the list of newly created man pages using Um, run: - -``` -$ um list - -``` - -All man pages will be saved under a directory named**`.um`**in your home directory - -Just in case, if you don’t want a particular page, simply delete it as shown below. - -``` -$ um rm dpkg - -``` - -To view the help section and all available general options, run: - -``` -$ um --help -usage: um - um [ARGS...] - -The first form is equivalent to `um read `. - -Subcommands: - um (l)ist List the available pages for the current topic. - um (r)ead Read the given page under the current topic. - um (e)dit Create or edit the given page under the current topic. - um rm Remove the given page. - um (t)opic [topic] Get or set the current topic. - um topics List all topics. - um (c)onfig [config key] Display configuration environment. - um (h)elp [sub-command] Display this help message, or the help message for a sub-command. - -``` - -### Configure Um - -To view the current configuration, run: - -``` -$ um config -Options prefixed by '*' are set in /home/sk/.um/umconfig. -editor = vi -pager = less -pages_directory = /home/sk/.um/pages -default_topic = shell -pages_ext = .md - -``` - -In this file, you can edit and change the values for **pager** , **editor** , **default_topic** , **pages_directory** , and **pages_ext** options as you wish. Say for example, if you want to save the newly created Um pages in your **[Dropbox][2]** folder, simply change the value of **pages_directory** directive and point it to the Dropbox folder in **~/.um/umconfig** file. - -``` -pages_directory = /Users/myusername/Dropbox/um - -``` - -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/ - -作者:[SK][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/ -[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ From 5f07a08baaf2448c518293f6cefae798a0c45ea4 Mon Sep 17 00:00:00 2001 From: way-ww <40491614+way-ww@users.noreply.github.com> Date: Mon, 15 Oct 2018 18:02:37 +0800 Subject: [PATCH 022/140] Create 20181009 How To Create And Maintain Your Own Man Pages.md --- ... 
Create And Maintain Your Own Man Pages.md | 199 ++++++++++++++++++
 1 file changed, 199 insertions(+)
 create mode 100644 translated/tech/20181009 How To Create And Maintain Your Own Man Pages.md

diff --git a/translated/tech/20181009 How To Create And Maintain Your Own Man Pages.md b/translated/tech/20181009 How To Create And Maintain Your Own Man Pages.md
new file mode 100644
index 0000000000..407ad90947
--- /dev/null
+++ b/translated/tech/20181009 How To Create And Maintain Your Own Man Pages.md
@@ -0,0 +1,199 @@
如何创建和维护你自己的 Man 手册
======

![](https://www.ostechnix.com/wp-content/uploads/2018/10/Um-pages-1-720x340.png)

我们已经讨论过一些 [Man 手册的替代方案][1]。这些替代方案主要用于学习简洁的 Linux 命令示例,而无需阅读全面而过于详细的手册页。如果你正在寻找一种快速而简单的方法来轻松地学习 Linux 命令,那么这些替代方案值得尝试。现在,你可能会想:如何为 Linux 命令创建自己的类 man 帮助页面?这时 **Um** 就派上用场了。Um 是一个命令行实用程序,可以轻松创建和维护只包含你到目前为止所学内容的 Man 页面。

通过创建自己的手册页,你可以在手册页中避开大量不必要的细节,只保留你需要记住的内容。如果你想创建属于自己的一套类 man 页面,Um 也能为你提供帮助。在这个简短的教程中,我们将学习如何安装 Um 命令以及如何创建自己的 man 手册页。

### 安装 Um

Um 适用于 Linux 和 Mac OS。目前,在 Linux 系统中只能使用 **Linuxbrew** 软件包管理器来安装它。如果你尚未安装 Linuxbrew,请参考以下链接。

安装 Linuxbrew 后,运行以下命令安装 Um 实用程序。

```
$ brew install sinclairtarget/wst/um

```

如果你看到类似下面的输出,恭喜你!Um 已经安装好并且可以使用了。

```
[...]
==> Installing sinclairtarget/wst/um
==> Downloading https://github.com/sinclairtarget/um/archive/4.0.0.tar.gz
==> Downloading from https://codeload.github.com/sinclairtarget/um/tar.gz/4.0.0
-=#=# # #
==> Downloading https://rubygems.org/gems/kramdown-1.17.0.gem
######################################################################## 100.0%
==> gem install /home/sk/.cache/Homebrew/downloads/d0a5d978120a791d9c5965fc103866815189a4e3939
==> Caveats
Bash completion has been installed to:
/home/linuxbrew/.linuxbrew/etc/bash_completion.d
==> Summary
🍺 /home/linuxbrew/.linuxbrew/Cellar/um/4.0.0: 714 files, 1.3MB, built in 35 seconds
==> Caveats
==> openssl
A CA file has been bootstrapped using certificates from the SystemRoots
keychain. To add additional certificates (e.g. the certificates added in
the System keychain), place .pem files in
/home/linuxbrew/.linuxbrew/etc/openssl/certs

and run
/home/linuxbrew/.linuxbrew/opt/openssl/bin/c_rehash
==> ruby
Emacs Lisp files have been installed to:
/home/linuxbrew/.linuxbrew/share/emacs/site-lisp/ruby
==> um
Bash completion has been installed to:
/home/linuxbrew/.linuxbrew/etc/bash_completion.d

```

在制作你的 man 手册页之前,你需要为 Um 启用 bash 补全。

要开启 bash 补全,首先你需要打开 **~/.bash_profile** 文件:

```
$ nano ~/.bash_profile

```

并在其中添加以下内容:

```
if [ -f $(brew --prefix)/etc/bash_completion.d/um-completion.sh ]; then
 . 
$(brew --prefix)/etc/bash_completion.d/um-completion.sh
fi

```

保存并关闭文件。运行以下命令以使更改生效。

```
$ source ~/.bash_profile

```

准备工作全部完成。让我们继续创建第一个 man 手册页。

### 创建并维护自己的 Man 手册

如果你想为 dpkg 命令创建自己的 Man 手册,请运行:

```
$ um edit dpkg

```

上面的命令将在默认编辑器中打开一个 markdown 模板:

![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-dpkg-man-page.png)

我的默认编辑器是 Vi,因此上面的命令会在 Vi 编辑器中打开它。现在,开始在此模板中添加有关 dpkg 命令的所有内容。

下面是一个示例:

![](https://www.ostechnix.com/wp-content/uploads/2018/10/Edit-dpkg-man-page.png)

正如你在上图的输出中看到的,我为 dpkg 命令添加了概要、描述和两个参数选项。你可以在 Man 手册中添加你所需要的所有部分,不过也要确保为每个部分提供适当且易于理解的标题。完成后,保存并退出文件(如果使用 Vi 编辑器,请按 ESC 键并键入 `:wq`)。

最后,使用以下命令查看新创建的 Man 手册页:

```
$ um dpkg

```

![](http://www.ostechnix.com/wp-content/uploads/2018/10/View-dpkg-man-page.png)

如你所见,dpkg 的 Man 手册页看起来与官方手册页完全相同。如果要在手册页中编辑和/或添加更多详细信息,请再次运行相同的命令进行修改。

```
$ um edit dpkg

```

要使用 Um 查看新创建的 Man 手册页列表,请运行:

```
$ um list

```

所有手册页将保存在主目录中名为 **`.um`** 的目录下。

以防万一,如果你不想要某个特定页面,只需删除它,如下所示。

```
$ um rm dpkg

```

要查看帮助部分和所有可用的常规选项,请运行:

```
$ um --help
usage: um <page name>
       um <sub-command> [ARGS...]

The first form is equivalent to `um read <page name>`.

Subcommands:
  um (l)ist                List the available pages for the current topic.
  um (r)ead <page name>    Read the given page under the current topic.
  um (e)dit <page name>    Create or edit the given page under the current topic.
  um rm <page name>        Remove the given page.
  um (t)opic [topic]       Get or set the current topic.
  um topics                List all topics.
  um (c)onfig [config key] Display configuration environment.
  um (h)elp [sub-command]  Display this help message, or the help message for a sub-command.

```

### 配置 Um

要查看当前配置,请运行:

```
$ um config
Options prefixed by '*' are set in /home/sk/.um/umconfig.
editor = vi
pager = less
pages_directory = /home/sk/.um/pages
default_topic = shell
pages_ext = .md

```

在此文件中,你可以根据需要编辑和更改 **pager**、**editor**、**default_topic**、**pages_directory** 和 **pages_ext** 选项的值。比如说,如果你想把新创建的 Um 页面保存在 **[Dropbox][2]** 文件夹中,只需修改 **~/.um/umconfig** 文件中 **pages_directory** 的值,将其指向 Dropbox 文件夹即可。

```
pages_directory = /Users/myusername/Dropbox/um

```

这就是全部内容,希望这些能对你有用,更多好的内容敬请关注!

干杯! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[way-ww](https://github.com/way-ww) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/ +[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ From 69e4ebdb113750a82b185b27d699030846b55b87 Mon Sep 17 00:00:00 2001 From: sd886393 Date: Mon, 15 Oct 2018 19:13:01 +0800 Subject: [PATCH 023/140] =?UTF-8?q?=E3=80=90=E7=94=B3=E9=A2=86=E3=80=91201?= =?UTF-8?q?80829=204=20open=20source=20monitoring=20tools?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- sources/tech/20180829 4 open source monitoring tools.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20180829 4 open source monitoring tools.md b/sources/tech/20180829 4 open source monitoring tools.md index a5b8bf6806..dbc59d8a29 100644 --- a/sources/tech/20180829 4 open source monitoring tools.md +++ b/sources/tech/20180829 4 open source monitoring tools.md @@ -1,3 +1,4 @@ +translating by sd886393 4 open source monitoring tools ====== From 80d11d997c3bd78d470e01f47dc9ae4f56de1b61 Mon Sep 17 00:00:00 2001 From: belitex Date: Mon, 15 Oct 2018 20:44:24 +0800 Subject: [PATCH 024/140] translating by belitex: A sysadmin-s guide to containers --- sources/tech/20180827 A sysadmin-s guide to containers.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/sources/tech/20180827 A sysadmin-s guide to containers.md b/sources/tech/20180827 A sysadmin-s guide to containers.md index a6529d8842..6acf9c2a45 100644 --- a/sources/tech/20180827 A sysadmin-s guide to containers.md +++ b/sources/tech/20180827 A sysadmin-s guide to containers.md @@ -1,3 +1,5 @@ +translating by belitex + A sysadmin's guide to containers ====== From 1a297c4363ad9d7de97b2c42610e079b26b09753 Mon Sep 17 00:00:00 2001 From: thecyanbird <2534930703@qq.com> Date: Mon, 15 Oct 2018 20:55:03 +0800 Subject: [PATCH 025/140] Create 20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md --- ...nduct and Not Everyone is Happy With it.md | 166 ++++++++++++++++++ 1 file changed, 166 insertions(+) create mode 100644 translated/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md diff --git a/translated/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md b/translated/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md new file mode 100644 index 0000000000..32a839ed81 --- /dev/null +++ b/translated/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md @@ -0,0 +1,166 @@ +Linux 拥有了新的行为准则,但是许多人都对此表示不满 +===== + +**Linux 内核有了新的行为准则Code of Conduct(CoC)。但在这条行为准则被签署以及发布仅仅 30 分钟之后,Linus Torvalds 就暂时离开了 Linux 内核的开发工作。因为新行为准则的作者那富有争议的过去,现在这件事成为了热点话题。许多人都对这新的行为准则表示不满。** + +如果你还不了解这件事,请参阅 [Linus Torvalds 对于自己之前的不良态度致歉并开始休假,以改善自己的行为态度][1] + +### Linux 内核开发遵守的新行为准则 + +Linux 内核开发者并不是以前没有需要遵守的行为准则,但是之前的[冲突准则code of conflict][2]现在被替换成了以“给内核开发社区营造更加热情,更方便他人参与的氛围”为目的的行为准则。 + +>“为营造一个开放并且热情的社区环境,我们,贡献者与维护者,许诺让每一个参与进我们项目和社区的人享受一个没有骚扰的体验。无关于他们的年纪,体型,身体残疾,种族,性别,性别认知与表达,社会经验,教育水平,社会或者经济地位,国籍,外表,人种,信仰,性认同和性取向。 + +你可以在这里阅读整篇行为准则 + +[Linux 行为准则][33] + +### Linus Torvalds 
+
+![Linus Torvalds 的道歉][3]
+
+这个新的行为准则由 Linus Torvalds 和 Greg Kroah-Hartman(仅次于 Torvalds 的二把手)签发。来自 Intel 的 Dan Williams 和来自 Facebook 的 Chris Mason 也是该准则的签署者。
+
+如果我正确地解读了时间线,在签署这个行为准则的半小时之后,Torvalds [发送了一封邮件,对自己之前的不良态度致歉][4]。他同时宣布会进行休假,以改善自己的行为态度。
+
+不过有些人开始阅读这封邮件的话外之音,并对如下文字报以特别关注:
+
+>**在这周,许多社区成员批评了我之前种种不体谅他人的行为。我以前在邮件里进行的、对他人轻率的批评是非专业以及不必要的**。这种情况在我将事情放在私人渠道处理的时候尤为严重。我理解这件事情的严重性,这是不好的行为,我对此感到十分抱歉。
+
+这几行文字可以被解读为:他是因为新的行为准则才被迫道歉并决定休假的。不过,这也可能只是一种预防措施,以避免 Torvalds 违反这个新制定的行为准则。
+
+### 有关贡献者盟约作者 Coraline Ada Ehmke 的争议
+
+Linux 的行为准则基于[贡献者盟约(Contributor Covenant)1.4 版本][5]。贡献者盟约[被上百个开源项目所接纳][6],包括 Eclipse、Angular、Ruby、Kubernetes 等项目。
+
+贡献者盟约由 [Coraline Ada Ehmke][7] 创作,她是一位软件工程师、开源支持者以及 [LGBT][8] 活动家。她对于促进开源世界的多样性做出了显著的贡献。
+
+Coraline 对于唯才是用体系的反对立场同样十分鲜明。[唯才是用(meritocracy)][9]这个词语源自拉丁文,本意为个人在体系内的晋升取决于他的“功绩”,例如智力水平、取得的证书以及教育程度。但[类似 Coraline 这样的活动家们认为][10],唯才是用是个糟糕的体系,因为它只通过人的智力产出来度量一个人,而并不重视他们的人性。
+
+[![croraline meritocracy][11]][12]
+图片来源:推特用户 @nickmon1112
+
+[Linus Torvalds 不止一次地说过,他在意的只是代码而并非写代码的人][13]。很明显,这忤逆了 Coraline 有关唯才是用体系的观点。
+
+具体来说,让 Coraline 受到关注且饱受争议的过去,是一件和 [Opal 项目][14]有关的事件。那是一场发生[在推特上的讨论][15],Opal 项目的意大利籍核心开发者 Elia 说:“(那些变性人)不接受现实才是问题所在。”
+
+Coraline 既没有参加讨论,也不是 Opal 项目的贡献者。不过作为 LGBT 活动家,她以 Elia 发表“冒犯变性人群体的发言”为由,[要求他退出 Opal 项目][16]。Coraline 和她的支持者(他们从未给 Opal 项目做过任何贡献)在 GitHub 仓库上发起了冗长且激烈的争论,试图将此项目的核心开发者 Elia 移出项目。
+
+虽然 Elia 并没有离开这个项目,不过 Opal 项目的维护者同意实行一个行为准则。而这个行为准则,就是 Coraline 不停向维护者们宣扬的、她那著名的贡献者盟约。
+
+不过故事到这里并没有结束。贡献者盟约稍后被更改,[加入了一些针对 Elia 的新条款][17]。这些新条款将行为准则的管束范围扩展到公共领域。不过这些更改稍后[被维护者们认定为恶意篡改][18]。最后 Opal 项目摆脱了贡献者盟约,换上了自己的行为准则。
+
+这个例子非常好地说明了:一小撮被冒犯的人,即使从未给项目贡献过一行代码,也可能试图去驱逐这个项目的核心开发者。
+
+### 人们对于 Linux 新的行为准则以及 Torvalds 道歉的反应
+
+Linux 行为准则以及 Torvalds 的道歉一发布,社交媒体与论坛上就开始盛传种种谣言与[推测][19]。虽然很多人对新的行为准则感到满意,但仍有些人认为这是 [SJW 尝试渗透 Linux 社区][20]的阴谋。
+
+Coraline 发布的一条富有嘲讽意味的推特让争论愈发激烈。
+
+>我迫不及待想看到大批的人离开 Linux 社区的场景了。现在它已经被 SJW 的成员渗透了。哈哈哈哈。
+[pic.twitter.com/eFeY6r4ENv][21]
+>
+>— Coraline Ada Ehmke (@CoralineAda) [9 月 16 日, 2018][22]
+
+随着对于 Linux 行为准则的争论持续发酵,Coraline 公开宣称贡献者盟约是一份政治文件。这不能被那些试图将政治因素排除在开源项目之外的人所接受。
+
+>有些人说贡献者盟约是一份政治文件,他们说的没错。
+>
+>— Coraline Ada Ehmke (@CoralineAda) [9 月 16 日, 2018][23]
+
+自由记者 Nick Monroe 宣称,Linux 行为准则远没有表面上看起来那么简单。为了证明自己的观点,他挖掘出了 Coraline 的过去。如果你愿意,可以阅读以下材料。
+
+>好啦,这种东西你们已经看过几千次了。这是一个行为准则。
+>
+>它从骨子里就带着“社会正义”,等等等等。
+>
+>不过这一次或许和你以前看到的有些不同。[pic.twitter.com/8NUL2K1gu2][24]
+>
+>— Nick Monroe (@nickmon1112) [9 月 17 日, 2018][25]
+
+Nick 并不是唯一一个反对 Linux 新行为准则的人。[SJW][26] 的参与引发了更多的阴谋论猜测。
+
+>我猜今天关于 Linux 的大新闻就是:Linux 内核现在要受一份行为准则和“后唯才是用(post-meritocracy)”世界观的约束了。
+>
+>这些行为准则的宗旨看起来都不错。不过在实际操作中,它们通常被 SJW 分子滥用为攻击他们所不喜之人的工具。况且,被 SJW 分子厌恶的人可不少。
+>
+>— Mark Kern (@Grummz) [9 月 17 日, 2018][27]
+
+虽然很多人对 Torvalds 的道歉感到欣慰,但仍有一些人在责备 Torvalds 一贯以来的态度。
+
+>我是不是唯一一个认为 Linus Torvalds 这十几年来的态度,恰好塑造了 Linux 和开源“社区”特有的那种居高临下、粗鲁、鄙视一切新人的行事作风?反正作为一个新用户,我从来没有在 Linux 社区里感受到自己是受欢迎的。
+>
+>— Jonathan Frappier (@jfrappier) [9 月 17 日, 2018][28]
+
+还有些人并不能接受 Torvalds 的道歉。
+
+>哦快看啊,一个惯于辱骂他人的开源软件维护者,在十几年的恶行之后,终于承认了他的行为**可能**是不妥的。
+>
+>而我关注的那些人都在争先恐后地为此给他(Linus Torvalds)送饼干。 🙄🙄🙄
+>
+>— Kelly Ellis (@justkelly_ok) [9 月 17 日, 2018][29]
+
+整个 Torvalds 道歉事件还引出了一个“真切的担忧” ;)
+
+>我现在是不是要在我的个人简介里写上“我不知是否该原谅 Linus Torvalds”?
+>
+>— Verónica. (@maria_fibonacci) [9 月 17 日, 2018][30]
+
+玩笑归玩笑,真正的担忧是由 Sage Sharp 提出的,Sharp 曾因“恶劣的社区环境”于 2015 年[退出了 Linux 内核的开发][31]。
+
+>现在真正要面对的问题是,这个成就了 Linus、并给予他肆意辱骂特权的社区能否迎来改变。不仅仅是 Linus 个人,Linux 内核开发社区也急需改变。
+>
+>— Sage Sharp (@_sagesharp_) [9 月 17 日, 2018][32]
+
+### 你对于 Linux 行为准则怎么看?
+
+如果你问我的观点,我认为目前社区的确需要一份行为准则。它能引导人们尊重他人,不因他人的种族、宗教信仰、国籍、政治观点(无论左派还是右派)而加以歧视,有助于营造出一个积极向上的社区氛围。
+
+对于这整件事,你怎么看?你认为这份行为准则能够帮助 Linux 内核的开发,还是说随着反唯才是用的 SJW 成员们的加入,情况会变得更糟?
+
+在 It's FOSS 我们没有行为准则,不过还是让我们保持文明友好的态度来讨论问题吧。

 via: https://itsfoss.com/linux-code-of-conduct/

 作者:[Abhishek Prakash][a]
 选题:[lujun9972](https://github.com/lujun9972)
 译者:[thecyanbird](https://github.com/thecyanbird)
 校对:[校对者ID](https://github.com/校对者ID)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

 [a]: https://itsfoss.com/author/abhishek/
 [1]: https://itsfoss.com/torvalds-takes-a-break-from-linux/
 [2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/CodeOfConflict?id=ddbd2b7ad99a418c60397901a0f3c997d030c65e
 [3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linus-torvalds-apologizes.jpeg
 [4]: https://lkml.org/lkml/2018/9/16/167
 [5]: https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
 [6]: https://www.contributor-covenant.org/adopters
 [7]: https://en.wikipedia.org/wiki/Coraline_Ada_Ehmke
 [8]: https://en.wikipedia.org/wiki/LGBT
 [9]: https://en.wikipedia.org/wiki/Meritocracy
 [10]: https://modelviewculture.com/pieces/the-dehumanizing-myth-of-the-meritocracy
 [11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/croraline-meritocracy.jpg
 [12]: https://pbs.twimg.com/media/DnTTfi7XoAAdk08.jpg
 [13]: https://arstechnica.com/information-technology/2015/01/linus-torvalds-on-why-he-isnt-nice-i-dont-care-about-you/
 [14]: https://opalrb.com/
 [15]: https://twitter.com/krainboltgreene/status/611569515315507200
 [16]: https://github.com/opal/opal/issues/941
 [17]: https://github.com/opal/opal/pull/948/commits/817321e27eccfffb3841f663815c17eecb8ef061#diff-a1ee87dafebc22cbd96979f1b2b7e837R11
 [18]: https://github.com/opal/opal/pull/948#issuecomment-113486020
 [19]: https://www.reddit.com/r/linux/comments/9go8cp/linus_torvalds_daughter_has_signed_the/
 [20]: https://snew.github.io/r/linux/comments/9ghrrj/linuxs_new_coc_is_a_piece_of_shit/
 [21]: https://t.co/eFeY6r4ENv
 [22]: https://twitter.com/CoralineAda/status/1041441155874009093?ref_src=twsrc%5Etfw
 [23]: https://twitter.com/CoralineAda/status/1041465346656530432?ref_src=twsrc%5Etfw
 [24]: https://t.co/8NUL2K1gu2
 [25]: https://twitter.com/nickmon1112/status/1041668315947708416?ref_src=twsrc%5Etfw
 [26]: https://www.urbandictionary.com/define.php?term=SJW
 [27]: https://twitter.com/Grummz/status/1041524170331287552?ref_src=twsrc%5Etfw
 [28]: https://twitter.com/jfrappier/status/1041486055038492674?ref_src=twsrc%5Etfw
 [29]: https://twitter.com/justkelly_ok/status/1041522269002985473?ref_src=twsrc%5Etfw
 [30]: https://twitter.com/maria_fibonacci/status/1041538148121997313?ref_src=twsrc%5Etfw
 [31]: https://www.networkworld.com/article/2988850/opensource-subnet/linux-kernel-dev-sarah-sharp-quits-citing-brutal-communications-style.html
 [32]: https://twitter.com/_sagesharp_/status/1041480963287539712?ref_src=twsrc%5Etfw
[33]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8a104f8b5867c682d994ffa7a74093c54469c11f

From c709a3bbb70507682b573f99729b79c40617c9d4 Mon Sep 17 00:00:00 2001
From: thecyanbird <2534930703@qq.com>
Date: Mon, 15 Oct 2018 21:00:13 +0800
Subject: [PATCH 026/140] Delete 20180919 Linux Has a Code of Conduct and Not
 Everyone is Happy With it.md

---
 ...nduct and Not Everyone is Happy With it.md | 169 ------------------
 1 file changed, 169 deletions(-)
delete mode 100644 sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md diff --git a/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md b/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md deleted file mode 100644 index 971a91f94f..0000000000 --- a/sources/talk/20180919 Linux Has a Code of Conduct and Not Everyone is Happy With it.md +++ /dev/null @@ -1,169 +0,0 @@ -thecyanbird translating - -Linux Has a Code of Conduct and Not Everyone is Happy With it -====== -**Linux kernel has a new code of conduct (CoC). Linus Torvalds took a break from Linux kernel development just 30 minutes after signing this code of conduct. And since **the writer of this code of conduct has had a controversial past,** it has now become a point of heated discussion. With all the politics involved, not many people are happy with this new CoC.** - -If you do not know already, [Linux creator Linus Torvalds has apologized for his past behavior and has taken a temporary break from Linux kernel development to improve his behavior][1]. - -### The new code of conduct for Linux kernel development - -Linux kernel developers have a code of conduct. It’s not like they didn’t have a code before, but the previous [code of conflict][2] is now replaced by this new code of conduct to “help make the kernel community a welcoming environment to participate in.” - -> “In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.” - -You can read the entire code of conduct on this commit page. - -[Linux Code of Conduct][33] - - -### Was Linus Torvalds forced to apologize and take a break? - -![Linus Torvalds Apologizes][3] - -The code of conduct was signed off by Linus Torvalds and Greg Kroah-Hartman (kind of second-in-command after Torvalds). Dan Williams of Intel and Chris Mason from Facebook were some of the other signees. - -If I have read through the timeline correctly, half an hour after signing this code of conduct, Torvalds sent a [mail apologizing for his past behavior][4]. He also announced taking a temporary break to improve upon his behavior. - -But at this point some people started reading between the lines, with a special attention to this line from his mail: - -> **This week people in our community confronted me about my lifetime of not understanding emotions**. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry. - -This particular line could be read as if he was coerced into apologizing and taking a break because of the new code of conduct. Though it could also be a precautionary measure to prevent Torvalds from violating the newly created code of conduct. - -### The controversy around Contributor Convent creator Coraline Ada Ehmke - -The Linux code of conduct is based on the [Contributor Covenant, version 1.4][5]. Contributor Convent has been adopted by hundreds of open source projects. 
Eclipse, Angular, Ruby, Kubernetes are some of the [many adopters of Contributor Convent][6]. - -Contributor Covenant has been created by [Coraline Ada Ehmke][7], a software developer, an open-source advocate, and an [LGBT][8] activist. She has been instrumental in promoting diversity in the open source world. - -Coraline has also been vocal about her stance against [meritocracy][9]. The Latin word meritocracy originally refers to a “system under which advancement within the system turns on “merits”, like intelligence, credentials, and education.” But activists like [Coraline believe][10] that meritocracy is a negative system where the worth of an individual is measured not by their humanity, but solely by their intellectual output. - -[![croraline meritocracy][11]][12] -Image credit: Twitter user @nickmon1112 - -Remember that [Linus Torvalds has repeatedly said that he cares about the code, not the person who writes it][13]. Clearly, this goes against Coraline’s view on meritocracy. - -Coraline has had a troubled incident in the past with a contributor of [Opal project][14]. There was a [discussion taking place on Twitter][15] where Elia, a core contributor to Opal project from Italy, said “(trans people) not accepting reality is the problem here”. - -Coraline was neither in the discussion nor was she a contributor to the Opal project. But as an LGBT activist, she took it to herself and [demanded that Elia be removed from the Opal Project][16] for his ‘views against trans people’. A lengthy and heated discussion took place on Opal’s GitHub repository. Coraline and her supporters, who never contributed to Opal, tried to coerce the moderators into removing Elia, a core contributor of the project. - -While Elia wasn’t removed from the project, Opal project maintainers agreed to put up a code of conduct in place. And this code of conduct was nothing else but Coraline’s famed Contributor Covenant that she had pitched to the maintainers herself. - -But the story didn’t end here. The Contributor Covenant was then modified and a [new clause added in order to get to Elia][17]. The new clause widened the scope of conduct in public spaces. This malicious change was [spotted by the maintainers][18] and they edited the clause. Opal eventually got rid of the Contributor Covenant and put in place its own guideline. - -This is a classic example of how a few offended people, who never contributed a single line of code to the project, tried to oust its core contributor. - -### People’s reaction on Linux Code of Conduct and Torvalds’ apology - -As soon as Linux code of conduct and Torvalds’ apology went public, Social Media and forums were rife with rumors and [speculations][19]. While many people appreciated this new development, there were some who saw a conspiracy by [SJW infiltrating Linux][20]. - -A sarcastic tweet by Caroline only fueled the fire. - -> I can’t wait for the mass exodus from Linux now that it’s been infiltrated by SJWs. Hahahah [pic.twitter.com/eFeY6r4ENv][21] -> -> — Coraline Ada Ehmke (@CoralineAda) [September 16, 2018][22] - -In the wake of the Linux CoC controversy, Coraline openly said that the Contributor Convent code of conduct is a political document. This did not go down well with the people who want the political stuff out of the open source projects. - -> Some people are saying that the Contributor Covenant is a political document, and they’re right. 
-> -> — Coraline Ada Ehmke (@CoralineAda) [September 16, 2018][23] - -Nick Monroe, a freelance journalist, dig up the past of Coraline in order to validate his claim that there is more to Linux CoC than meets the eye. You can go by the entire thread if you want. - -> Alright. You've seen this a million times before. It's a code of conduct blah blah blah -> -> that has social justice baked right into it. blah blah blah. -> -> But something is different about this. [pic.twitter.com/8NUL2K1gu2][24] -> -> — Nick Monroe (@nickmon1112) [September 17, 2018][25] - -Nick wasn’t the only one to disapprove of the new Linux CoC. The [SJW][26] involvement led to more skepticism. - -> I guess the big news in Linux today is that the Linux kernel is now governed by a Code of Conduct and a “post meritocracy” world view. -> -> In principle these CoCs look great. In practice they are abused tools to hunt people SJWs don’t like. And they don’t like a lot of people. -> -> — Mark Kern (@Grummz) [September 17, 2018][27] - -While there were many who appreciated Torvalds’ apology, there were a few who blamed Torvalds’ attitude: - -> Am I the only one who thinks Linus Torvalds attitude for decades was a prime contributors to how many of the condescending, rudes, jerks in Linux and open source "communities" behaved? I've never once felt welcomed into the Linux community as a new user. -> -> — Jonathan Frappier (@jfrappier) [September 17, 2018][28] - -And some were simply not amused with his apology: - -> Oh look, an abusive OSS maintainer finally admitted, after *decades* of abusive and toxic behavior, that his behavior *might* be an issue. -> -> And a bunch of people I follow are tripping all over themselves to give him cookies for that. 🙄🙄🙄 -> -> — Kelly Ellis (@justkelly_ok) [September 17, 2018][29] - -The entire Torvalds apology episode has raised a genuine concern ;) - -> Do we have to put "I don't/do forgive Linus Torvalds" in our bio now? -> -> — Verónica. (@maria_fibonacci) [September 17, 2018][30] - -Jokes apart, the genuine concern was raised by Sharp, who had [quit Linux Kernel development][31] in 2015 due to the ‘toxic community’. - -> The real test here is whether the community that built Linus up and protected his right to be verbally abusive will change. Linus not only needs to change himself, but the Linux kernel community needs to change as well. -> -> — Sage Sharp (@_sagesharp_) [September 17, 2018][32] - -### What do you think of Linux Code of Conduct? - -If you ask my opinion, I do think that a Code of Conduct is the need of the time. It guides people in behaving in a respectable way and helps create a positive environment for all kind of people irrespective of their race, ethnicity, religion, nationality and political views (both left and right). - -What are your views on the entire episode? Do you think the CoC will help Linux kernel development? Or will it deteriorate with the involvement of anti-meritocracy SJWs? 
- -We don’t have a code of conduct at It’s FOSS but let’s keep the discussion civil :) - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/linux-code-of-conduct/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]: https://itsfoss.com/torvalds-takes-a-break-from-linux/ -[2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/CodeOfConflict?id=ddbd2b7ad99a418c60397901a0f3c997d030c65e -[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linus-torvalds-apologizes.jpeg -[4]: https://lkml.org/lkml/2018/9/16/167 -[5]: https://www.contributor-covenant.org/version/1/4/code-of-conduct.html -[6]: https://www.contributor-covenant.org/adopters -[7]: https://en.wikipedia.org/wiki/Coraline_Ada_Ehmke -[8]: https://en.wikipedia.org/wiki/LGBT -[9]: https://en.wikipedia.org/wiki/Meritocracy -[10]: https://modelviewculture.com/pieces/the-dehumanizing-myth-of-the-meritocracy -[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/croraline-meritocracy.jpg -[12]: https://pbs.twimg.com/media/DnTTfi7XoAAdk08.jpg -[13]: https://arstechnica.com/information-technology/2015/01/linus-torvalds-on-why-he-isnt-nice-i-dont-care-about-you/ -[14]: https://opalrb.com/ -[15]: https://twitter.com/krainboltgreene/status/611569515315507200 -[16]: https://github.com/opal/opal/issues/941 -[17]: https://github.com/opal/opal/pull/948/commits/817321e27eccfffb3841f663815c17eecb8ef061#diff-a1ee87dafebc22cbd96979f1b2b7e837R11 -[18]: https://github.com/opal/opal/pull/948#issuecomment-113486020 -[19]: https://www.reddit.com/r/linux/comments/9go8cp/linus_torvalds_daughter_has_signed_the/ -[20]: https://snew.github.io/r/linux/comments/9ghrrj/linuxs_new_coc_is_a_piece_of_shit/ -[21]: https://t.co/eFeY6r4ENv -[22]: https://twitter.com/CoralineAda/status/1041441155874009093?ref_src=twsrc%5Etfw -[23]: https://twitter.com/CoralineAda/status/1041465346656530432?ref_src=twsrc%5Etfw -[24]: https://t.co/8NUL2K1gu2 -[25]: https://twitter.com/nickmon1112/status/1041668315947708416?ref_src=twsrc%5Etfw -[26]: https://www.urbandictionary.com/define.php?term=SJW -[27]: https://twitter.com/Grummz/status/1041524170331287552?ref_src=twsrc%5Etfw -[28]: https://twitter.com/jfrappier/status/1041486055038492674?ref_src=twsrc%5Etfw -[29]: https://twitter.com/justkelly_ok/status/1041522269002985473?ref_src=twsrc%5Etfw -[30]: https://twitter.com/maria_fibonacci/status/1041538148121997313?ref_src=twsrc%5Etfw -[31]: https://www.networkworld.com/article/2988850/opensource-subnet/linux-kernel-dev-sarah-sharp-quits-citing-brutal-communications-style.html -[32]: https://twitter.com/_sagesharp_/status/1041480963287539712?ref_src=twsrc%5Etfw -[33]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8a104f8b5867c682d994ffa7a74093c54469c11f From 71623588fb73761f9fb5728d1aa561dfe8143d32 Mon Sep 17 00:00:00 2001 From: David Chen Date: Sat, 13 Oct 2018 21:34:58 +0800 Subject: [PATCH 027/140] Third commit --- sources/tech/20180823 CLI- improved.md | 121 ++++++++++++++++++++++++- 1 file changed, 120 insertions(+), 1 deletion(-) diff --git a/sources/tech/20180823 CLI- improved.md b/sources/tech/20180823 CLI- improved.md index 52edaa28c8..8fb3ea9100 100644 --- 
a/sources/tech/20180823 CLI- improved.md
+++ b/sources/tech/20180823 CLI- improved.md
@@ -2,16 +2,29 @@ Translating by DavidChenLiang
 CLI: improved
 ======
+命令行:增强
+======
+
+我不确定有多少 web 开发者能完全避开命令行。就我来说,我从 1997 年上大学起就开始使用命令行了,当时我一边觉得自己是个超酷的 l33t 黑客,一边又觉得它完全超出了我的理解范围。
+
 I'm not sure many web developers can get away without visiting the command line. As for me, I've been using the command line since 1997, first at university when I felt both super cool l33t-hacker and simultaneously utterly out of my depth.
 
+过去这些年,我的命令行使用习惯在逐步改善,我也经常会搜寻在日常工作中用得上的更趁手的命令行工具。下面就是我目前使用的增强版命令行工具的清单。
+
 Over the years my command line habits have improved and I often search for smarter tools for the jobs I commonly do. With that said, here's my current list of improved CLI tools.
 
 ### Ignoring my improvements
+### 如何忽略我的这些增强
+
+在不少情况下,我会用别名把新的、增强过的命令行工具覆盖到原来的命令上(例如 `cat` 和 `ping`)。
 In a number of cases I've aliased the new and improved command line tool over the original (as with `cat` and `ping`).
 
+如果我需要运行原来的命令(有时我确实需要这么做),有两种方法可以实现(我用的是 Mac,你的情况可能有所不同):
+
 If I want to run the original command, which is sometimes I do need to do, then there's two ways I can do this (I'm on a Mac so your mileage may vary):
 
```
$ \cat # ignore aliases named "cat" - explanation: https://stackoverflow.com/a/16506263/22617
$ command cat # ignore functions and aliases
```

### bat > cat

+`cat` 用于打印文件的内容,但如果你在命令行上花的时间足够多,就会发现语法高亮之类的功能非常有用。我先是发现了 [ccat][3] 这个有语法高亮功能的工具,然后又发现了 [bat][4],它具有语法高亮、分页、行号和 git 集成等功能。

`cat` is used to print the contents of a file, but given more time spent in the command line, features like syntax highlighting come in very handy. I found [ccat][3] which offers highlighting then I found [bat][4] which has highlighting, paging, line numbers and git integration.

+`bat` 命令还允许我在输出中使用 `/` 按键进行搜索(只要输出长于屏幕高度),这一点与 `less` 的搜索类似。
+
The `bat` command also allows me to search during output (only if the output is longer than the screen height) using the `/` key binding (similarly to `less` searching).

![Simple bat output][5]

+我还通过别名用 `bat` 替换了 `cat` 命令:
+
I've also aliased `bat` to the `cat` command:

+

```
alias cat='bat'

```

💾 [Installation directions][4]

### prettyping > ping
+### prettyping > ping
+
+`ping` 非常有用,当我碰到“糟了,是不是什么服务挂了?/我的网不通了?”这种情况时,它就是我最先想到的工具。但是 `prettyping`(是 “pretty ping” 而不是 “pre typing”!)给 `ping` 加上了漂亮的输出,这让我感觉命令行友好了很多。

`ping` is incredibly useful, and probably my goto tool for the "oh crap is X down/does my internet work!!!". But `prettyping` ("pretty ping" not "pre typing"!) gives ping a really nice output and just makes me feel like the command line is a bit more welcoming.

![/images/cli-improved/ping.gif][6]

+我也用别名把 `ping` 指向了 `prettyping` 命令:
+
I've also aliased `ping` to the `prettyping` command:

+
```
alias ping='prettyping --nolegend'

```

💾 [Installation directions][7]

### fzf > ctrl+r

+在终端中使用 `ctrl+r` 可以让你在命令历史中[反向搜索][8]用过的命令。这是个不错的小技巧,不过用起来有点麻烦。
+
In the terminal, using `ctrl+r` will allow you to [search backwards][8] through your history. It's a nice trick, albeit a bit fiddly.

+`fzf` 这个工具相比于 `ctrl+r` 有了**巨大的**进步。它能对命令行历史进行模糊搜索,并且提供对可能匹配结果的全面交互式预览。

The `fzf` tool is a **huge** enhancement on `ctrl+r`. It's a fuzzy search against the terminal history, with a fully interactive preview of the possible matches.
+除了搜索命令历史,`fzf` 还能预览和打开文件,我在下面的视频里展示了这些功能:
+
In addition to searching through the history, `fzf` can also preview and open files, which is what I've done in the video below:

+为了实现这种预览效果,我创建了一个叫 `preview` 的别名,它把 `fzf` 和前文提到的 `bat` 组合起来完成预览,并绑定了一个自定义快捷键 `ctrl+o`,用于在 VS Code 中打开选中的文件:
+
For this preview effect, I created an alias called `preview` which combines `fzf` with `bat` for the preview and a custom key binding to open VS Code:

+
```
alias preview="fzf --preview 'bat --color \"always\" {}'"
# add support for ctrl+o to open selected file in VS Code
export FZF_DEFAULT_OPTS="--bind='ctrl-o:execute(code {})+abort'"

```

### htop > top

+`top` 是每当我想快速诊断为什么机器上的 CPU 跑得那么猛、或者风扇为什么突然呼呼大作时首先想到的工具。我在生产环境中也会使用这个工具。讨厌的是,Mac 上的 `top` 和 Linux 上的 `top` 有着极大的不同(而且在我看来前者要逊色得多)。

`top` is my goto tool for quickly diagnosing why the CPU on the machine is running hard or my fan is whirring. I also use these tools in production. Annoyingly (to me!) `top` on the Mac is vastly different (and inferior IMHO) to `top` on linux.

+不过,`htop` 是对 Linux 上的 `top` 和 Mac 上那个蹩脚的 `top` 的极大改进。它增加了彩色编码输出、键盘快捷键绑定以及不同的视图,这些都曾极大地帮助我理解进程之间的归属关系。

However, `htop` is an improvement on both regular `top` and crappy-mac `top`. Lots of colour coding, keyboard bindings and different views which have helped me in the past to understand which processes belong to which.

+方便的快捷键绑定包括:
+
+ * P - 按 CPU 使用率排序
+ * M - 按内存使用量排序
+ * F4 - 用字符串过滤进程(例如只看包含 “node” 的进程)
+ * space - 高亮一个单独的进程,这样我能观察它是否出现占用尖峰
+
Handy key bindings include:

 * P - sort by CPU

![htop output][10]

+在 Mac Sierra 上,htop 有个奇怪的 bug,可以通过以 root 身份运行来绕过(我实在记不清这个 bug 是什么了,但这个别名能搞定它;有点讨厌的是我得时不时地输入 root 密码):

There is a weird bug in Mac Sierra that can be overcome by running `htop` as root (I can't remember exactly what the bug is, but this alias fixes it - though annoying that I have to enter my password every now and again):

+
```
alias top="sudo htop" # alias top and fix high sierra bug

```

### diff-so-fancy > diff

+我非常确定这个技巧是我几年前从 Paul Irish 那儿学来的。尽管我很少直接使用 `diff`,但我的 git 命令一直都在用它。`diff-so-fancy` 给了我彩色编码和字符级更改高亮的功能。

I'm pretty sure I picked this one up from Paul Irish some years ago. Although I rarely fire up `diff` manually, my git commands use diff all the time. `diff-so-fancy` gives me both colour coding but also character highlight of changes.

![diff so fancy][12]

+然后,我在 `~/.gitconfig` 文件里加入了下面的配置,来为 `git diff` 和 `git show` 启用 `diff-so-fancy`:

Then in my `~/.gitconfig` I have included the following entry to enable `diff-so-fancy` on `git diff` and `git show`:

```
[pager]
 diff = diff-so-fancy | less --tabs=1,5 -RFX
 show = diff-so-fancy | less --tabs=1,5 -RFX
```
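+(LCTT 译注:除了通过 `~/.gitconfig` 集成之外,`diff-so-fancy` 也经常被直接放在管道中使用,下面是一个示意用法:)
+
+```
+$ git diff --color | diff-so-fancy
+```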
### fd > find

+尽管我用的是 Mac,但我从来都不是 Spotlight 的拥趸。我觉得它反应迟钝、关键字难记、更新自己的数据库时还会拖垮 CPU,简直一无是处。我经常使用 [Alfred][14],但是它的搜索功能也不是很好用。
+
Although I use a Mac, I've never been a fan of Spotlight (I found it sluggish, hard to remember the keywords, the database update would hammer my CPU and generally useless!). I use [Alfred][14] a lot, but even the finder feature doesn't serve me well.

+我倾向于在命令行中查找文件,但 `find` 的麻烦在于总是记不住那些描述目标文件的正确表达式(而且 Mac 上的 find 命令和非 Mac 的 find 命令还有些许不同,这更加深了我的挫败感)。

I tend to turn the command line to find files, but `find` is always a bit of a pain to remember the right expression to find what I want (and indeed the Mac flavour is slightly different non-mac find which adds to frustration).

+`fd` 是一个很好的替代品(它的作者和 `bat` 的作者是同一个人)。它非常快,而且我常用的那些搜索用例的写法也很好记。

`fd` is a great replacement (by the same individual who wrote `bat`). It is very fast and the common use cases I need to search with are simple to remember.

A few handy commands:

+几个非常方便的例子:
+
```
$ fd cli # all filenames containing "cli"
$ fd -e md # all with .md extension
$ fd cli -x wc -w # find "cli" and run `wc -w` on each file
```
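+(LCTT 译注:`fd` 还可以和前文提到的 `fzf`、`bat` 组合使用。下面是一个示意性的组合用法,用来模糊查找并预览所有 markdown 文件:)
+
+```
+$ fd -e md | fzf --preview 'bat --color "always" {}'
+```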
### ncdu > du

+对我来说,了解当前的磁盘空间占用情况是一项非常重要的任务。我用过 Mac 上的 [Disk Daisy][17],但是我觉得它出结果有点慢。

Knowing where disk space is being taking up is a fairly important task for me. I've used the Mac app [Disk Daisy][17] but I find that it can be a little slow to actually yield results.

+`du -sh` 是我在终端里经常运行的命令(`-sh` 是指以汇总的、人类可读的方式显示结果),不过我经常会想要深入挖掘到底是哪些目录占用了大量空间。

The `du -sh` command is what I'll use in the terminal (`-sh` means summary and human readable), but often I'll want to dig into the directories taking up the space.

+`ncdu` 是一个非常棒的替代品。它提供了一个交互式的界面,可以快速扫描出哪些目录和文件占用了大量磁盘空间,而且浏览起来非常快。(尽管无论用哪个工具,扫描我的整个主目录都要花很长时间,它有 550GB。)
+
`ncdu` is a nice alternative. It offers an interactive interface and allows for quickly scanning which folders or files are responsible for taking up space and it's very quick to navigate. (Though any time I want to scan my entire home directory, it's going to take a long time, regardless of the tool - my directory is about 550gb).

+当我找到一个想要“处理”一下的目录时(如删除、移动或压缩其中的文件),我会在 [iTerm2][18] 中按住 cmd 键并点击屏幕上方的路径名,从而在访达中打开那个目录。

Once I've found a directory I want to manage (to delete, move or compress files), I'll use the cmd + click the pathname at the top of the screen in [iTerm2][18] to launch finder to that directory.

![ncdu output][19]

+还有另外一个选择:[nnn][20]。它提供了一个更漂亮的界面,虽然它默认也显示文件大小和占用情况,但它实际上是一个功能完备的文件管理器。

There's another [alternative called nnn][20] which offers a slightly nicer interface and although it does file sizes and usage by default, it's actually a fully fledged file manager.

My `ncdu` is aliased to the following:

+我的 `ncdu` 是用下面的别名设置的:
+
```
alias du="ncdu --color dark -rr -x --exclude .git --exclude node_modules"
```

The options are:

+这些选项的含义是:
+
+ * `--color dark`:使用深色配色方案
+ * `-rr`:只读模式(防止删除文件以及派生出新的 shell)
+ * `--exclude`:忽略那些我不会去处理的目录
+
 * `--color dark` \- use a colour scheme
 * `-rr` \- read-only mode (prevents delete and spawn shell)
 * `--exclude` ignore directories I won't do anything about

### tldr > man

+几乎每一个命令行工具都附带一份可以用 `man <命令名>` 调出的手册,但在 `man` 的输出里找到想要的内容有时会让人困惑,而且面对一份塞满所有技术细节的手册,多少会让人望而生畏。

It's amazing that nearly every single command line tool comes with a manual via `man <command>`, but navigating the `man` output can be sometimes a little confusing, plus it can be daunting given all the technical information that's included in the manual output.

+这就是 TL;DR 项目(LCTT 译注:TL;DR 即 “Too Long; Didn't Read”,“文档太长,没空去读”)的用武之地。这是一个由社区驱动、可以直接在命令行里使用的文档系统。就我目前的使用情况来看,我还没碰到过哪个命令没有相应的文档,而且你[也可以做贡献][22]。
+
This is where the TL;DR project comes in. It's a community driven documentation system that's available from the command line. So far in my own usage, I've not come across a command that's not been documented, but you can [also contribute too][22].

![TLDR output for 'fd'][23]

+作为一个小技巧,我还把 `help` 设为了 `tldr` 的别名(毕竟这样打起来更快!):
+
As a nicety, I've also aliased `tldr` to `help` (since it's quicker to type!):

+
```
alias help='tldr'
```

### ack || ag > grep

+`grep` 毫无疑问是命令行上的一个强力工具,但这些年来它已经被不少新工具超越了,其中两个便是 `ack` 和 `ag`。

`grep` is no doubt a powerful tool on the command line, but over the years it's been superseded by a number of tools. Two of which are `ack` and `ag`.

+就我个人来说,`ack` 和 `ag` 我都尝试过,并没有形成明显的偏好(也就是说它们都很棒,而且非常相似)。我默认倾向于使用 `ack`,只是因为这三个字母打起来更顺手。另外,`ack` 还自带神奇的 `ack --bar` 参数(究竟是什么效果,就留给你自己去试验了)!

I personally flitter between `ack` and `ag` without really remembering which I prefer (that's to say they're both very good and very similar!). I tend to default to `ack` only because it rolls of my fingers a little easier. Plus, `ack` comes with the mega `ack --bar` argument (I'll let you experiment)!

+`ack` 和 `ag` 默认都使用正则表达式来搜索,而且与我的工作高度相关的一点是,我可以用 `--js` 或 `--html` 这样的标识指定要搜索的文件类型(不过在 js 文件类型过滤器里,`ag` 比 `ack` 包含的文件类型更多)。

Both `ack` and `ag` will (by default) use a regular expression to search, and extremely pertinent to my work, I can specify the file types to search within using flags like `--js` or `--html` (though here `ag` includes more files in the js filter than `ack`).

+这两个工具都支持 `grep` 的常用选项,例如用 `-B` 和 `-A` 来显示匹配行之前和之后的上下文。

Both tools also support the usual `grep` options, like `-B` and `-A` for before and after context in the grep.

![ack in action][25]

+因为 `ack` 不支持 markdown(而我恰好要写很多 markdown),我在 `~/.ackrc` 文件里加了如下的定制配置:

Since `ack` doesn't come with markdown support (and I write a lot in markdown), I've got this customisation in my `~/.ackrc` file:

```
--type-set=md=.md,.mkd,.markdown
--pager=less -FRX
```

### jq > grep et al

+我是 [jq][29] 的铁杆粉丝。一开始我也在它的语法里苦苦挣扎,但后来我逐渐掌握了这门查询语言,现在 `jq` 可以说是我每天都要用的工具了。(而在此之前,我要么进入 node 环境,要么使用 `grep`,要么使用一个叫 [json][30] 的工具,相比之下后者非常基础。)

I'm a massive fanboy of [jq][29]. At first I struggled with the syntax, but I've since come around to the query language and use `jq` on a near daily basis (whereas before I'd either drop into node, use grep or use a tool called [json][30] which is very basic in comparison).

+我甚至开始撰写一个 `jq` 的教程系列(已写了 2500 字并且还在增加),还发布了一个 [web 工具][31]和一个 Mac 原生应用(后者尚未发布)。

I've even started the process of writing a jq tutorial series (2,500 words and counting) and have published a [web tool][31] and a native mac app (yet to be released).

+`jq` 允许我传入一段 JSON,并非常简单地将其转换为符合我要求的 JSON 结果。下面这个例子让我可以用一条命令更新我的所有 node 依赖(为了便于阅读,我把它分成了多行):

`jq` allows me to pass in JSON and transform the source very easily so that the JSON result fits my requirements. One such example allows me to update all my node dependencies in one command (broken into multiple lines for readability):

```
$ npm i $(echo $(\
 npm outdated --json | \
...

The above command will list all the node dependencies that are out of date, and

That result is then fed into the `npm install` command and voilà, I'm all upgraded (using the sledgehammer approach).
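+(LCTT 译注:为了更直观地理解上面管道中 `jq` 所做的转换,下面给出一个简化的示例,其中的样例 JSON 仅为示意:)
+
+```
+$ echo '{"express": {"current": "4.15.0", "wanted": "4.16.4"}}' | \
+    jq -r 'to_entries | .[] | "\(.key)@\(.value.wanted)"'
+express@4.16.4
+```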
### Honourable mentions
+### 其它值得一提的工具

+我也开始尝试其它一些工具了,但还没怎么经常使用(`ponysay` 除外,每当我开启一个新的终端会话时,它都会出现!):
+
Some of the other tools that I've started poking around with, but haven't used too often (with the exception of ponysay, which appears when I start a new terminal session!):

### What about you?

+### 你有什么好点子吗?
+
+上面就是我的命令行工具清单。你的呢?你有没有对每天都要用到的命令做过什么增强?我非常乐意了解一下。

So that's my list. How about you? What daily command line tools have you improved? I'd love to know.

--------------------------------------------------------------------------------

via: https://remysharp.com/2018/08/23/cli-improved

作者:[Remy Sharp][a]
选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
+译者:[DavidChenLiang](https://github.com/DavidChenLiang)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

From cafcbaa0523e00d37d2883abcb121b46154ea741 Mon Sep 17 00:00:00 2001
From: thecyanbird <2534930703@qq.com>
Date: Mon, 15 Oct 2018 21:09:54 +0800
Subject: [PATCH 028/140] thecyanbird translating

---
 ... To Record Your Terminal And Generate Animated Gif Images.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md b/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md
index 26d1941cc1..7b77a9cf73 100644
--- a/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md
+++ b/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md
@@ -1,3 +1,5 @@
+thecyanbird translating
+
 Terminalizer – A Tool To Record Your Terminal And Generate Animated Gif Images
 ======
 This is know topic for most of us and i don't want to give you the detailed information about this flow. Also, we had written many article under this topics.

From c7e00f9d275eb807a131975d27064a61d1477c50 Mon Sep 17 00:00:00 2001
From: "Xingyu.Wang"
Date: Mon, 15 Oct 2018 23:00:39 +0800
Subject: [PATCH 029/140] PRF:20180817 How To Lock The Keyboard And Mouse, But
 Not The Screen In Linux.md @FSSlc

---
 ...
And Mouse, But Not The Screen In Linux.md | 55 +++++++++++++------ 1 file changed, 37 insertions(+), 18 deletions(-) diff --git a/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md b/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md index 3a0a0592cc..9b0c6608dd 100644 --- a/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md +++ b/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md @@ -3,33 +3,38 @@ ![](https://www.ostechnix.com/wp-content/uploads/2017/09/Lock-The-Keyboard-And-Mouse-720x340.jpg) -我四岁的侄女是个好奇的孩子,她非常喜爱“阿凡达”电影,当阿凡达电影在播放时,她是如此的专注,好似眼睛粘在了屏幕上。但问题是当她观看电影时,她经常会碰到键盘上的某个键或者移动了鼠标,又或者是点击了鼠标的按钮。有时她非常意外地按了键盘上的某个键,从而将电影关闭或者暂停了。所以我就想找个方法来将键盘和鼠标都锁住,但屏幕不会被锁住。幸运的是,我在 Ubuntu 论坛上找到了一个完美的解决方法。假如在你正看着屏幕上的某些重要的事情时,你不想让你的小猫或者小狗在你的键盘上行走,或者让你的孩子在键盘上瞎搞一气,那我建议你试试 **xtrlock** 这个工具。它很简单但非常实用,你可以锁定屏幕的显示直到用户在键盘上输入自己设定的密码(译者注:就是用户自己的密码,例如用来打开屏保的那个密码,不需要单独设定)。在这篇简单的教程中,我将为你展示如何在 Linux 下锁住键盘和鼠标,而不锁掉屏幕。这个技巧几乎可以在所有的 Linux 操作系统中生效。 +我四岁的侄女是个好奇的孩子,她非常喜爱“阿凡达”电影,当阿凡达电影在播放时,她是如此的专注,好似眼睛粘在了屏幕上。但问题是当她观看电影时,她经常会碰到键盘上的某个键或者移动了鼠标,又或者是点击了鼠标的按钮。有时她非常意外地按了键盘上的某个键,从而将电影关闭或者暂停了。所以我就想找个方法来将键盘和鼠标都锁住,但屏幕不会被锁住。幸运的是,我在 Ubuntu 论坛上找到了一个完美的解决方法。假如在你正看着屏幕上的某些重要的事情时,你不想让你的小猫或者小狗在你的键盘上行走,或者让你的孩子在键盘上瞎搞一气,那我建议你试试 **xtrlock** 这个工具。它很简单但非常实用,你可以锁定屏幕的显示直到用户在键盘上输入自己设定的密码(LCTT 译注:就是用户自己的密码,例如用来打开屏保的那个密码,不需要单独设定)。在这篇简单的教程中,我将为你展示如何在 Linux 下锁住键盘和鼠标,而不锁掉屏幕。这个技巧几乎可以在所有的 Linux 操作系统中生效。 ### 安装 xtrlock xtrlock 软件包在大多数 Linux 操作系统的默认软件仓库中都可以获取到。所以你可以使用你安装的发行版的包管理器来安装它。 在 **Arch Linux** 及其衍生发行版中,运行下面的命令来安装它: + ``` $ sudo pacman -S xtrlock ``` 在 **Fedora** 上使用: + ``` $ sudo dnf install xtrlock ``` -在 **RHEL, CentOS** 上使用: +在 **RHEL、CentOS** 上使用: + ``` $ sudo yum install xtrlock ``` 在 **SUSE/openSUSE** 上使用: + ``` $ sudo zypper install xtrlock ``` -在 **Debian, Ubuntu, Linux Mint** 上使用: +在 **Debian、Ubuntu、Linux Mint** 上使用: + ``` $ sudo apt-get install xtrlock ``` @@ -38,41 +43,50 @@ $ sudo apt-get install xtrlock 安装好 xtrlock 后,你需要根据你的选择来创建一个快捷键,通过这个快捷键来锁住键盘和鼠标。 -在 **/usr/local/bin** 目录下创建一个名为 **lockkbmouse** 的新文件: +(LCTT 译注:译者在自己的系统(Arch + Deepin)中发现这里的到下面创建快捷键的部分可以不必做,依然生效。) + +在 `/usr/local/bin` 目录下创建一个名为 `lockkbmouse` 的新文件: + ``` $ sudo vi /usr/local/bin/lockkbmouse ``` 然后将下面的命令添加到这个文件中: + ``` #!/bin/bash sleep 1 && xtrlock ``` + 保存并关闭这个文件。 然后使用下面的命令来使得它可以被执行: + ``` $ sudo chmod a+x /usr/local/bin/lockkbmouse ``` 接着,我们就需要创建快捷键了。 +#### 创建快捷键 + **在 Arch Linux MATE 桌面中** -依次点击 **System -> Preferences -> Hardware -> keyboard Shortcuts** +依次点击 “System -> Preferences -> Hardware -> keyboard Shortcuts” -然后点击 **Add** 来创建快捷键。 +然后点击 “Add” 来创建快捷键。 ![][2] -首先键入你的这个快捷键的名称,然后将下面的命令填入命令框中,最后点击 **Apply** 按钮。 +首先键入你的这个快捷键的名称,然后将下面的命令填入命令框中,最后点击 “Apply” 按钮。 + ``` bash -c "sleep 1 && xtrlock" ``` ![][3] -为了能够给这个快捷键赋予快捷方式,需要选中它或者双击它然后输入你选定的快捷键组合,例如我使用 **Alt+k** 这组快捷键。 +为了能够给这个快捷键赋予快捷方式,需要选中它或者双击它然后输入你选定的快捷键组合,例如我使用 `Alt+k` 这组快捷键。 ![][4] @@ -80,16 +94,17 @@ bash -c "sleep 1 && xtrlock" **在 Ubuntu GNOME 桌面中** -依次进入 **System Settings -> Devices -> Keyboard**,然后点击 **+** 这个符号。 +依次进入 “System Settings -> Devices -> Keyboard”,然后点击 “+” 这个符号。 + +键入你快捷键的名称并将下面的命令加到命令框里面,然后点击 “Add” 按钮。 -键入你快捷键的名称并将下面的命令加到命令框里面,然后点击 **Add** 按钮。 ``` bash -c "sleep 1 && xtrlock" ``` ![][5] -接下来为这个新建的快捷键赋予快捷方式。我们只需要选择或者双击 **“Set shortcut”** 这个按钮就可以了。 +接下来为这个新建的快捷键赋予快捷方式。我们只需要选择或者双击 “Set shortcut” 这个按钮就可以了。 ![][6] @@ -97,7 +112,7 @@ bash -c "sleep 1 && xtrlock" ![][7] -输入你选定的快捷键组合,例如我使用 **Alt+k**。 +输入你选定的快捷键组合,例如我使用 `Alt+k`。 ![][8] @@ -113,23 
+128,26 @@ bash -c "sleep 1 && xtrlock" ### 将键盘和鼠标解锁 -要将键盘和鼠标解锁,只需要输入你的密码然后敲击“Enter”键就可以了,在输入的过程中你将看不到密码。只需要输入然后敲 `ENTER` 键就可以了。在你输入了正确的密码后,鼠标和键盘就可以再工作了。假如你输入了一个错误的密码,你将听到警告声。按 **ESC** 来清除输入的错误密码,然后重新输入正确的密码。要去掉未完全输入完的密码中的一个字符,只需要按 **BACKSPACE** 或者 **DELETE** 键就可以了。 +要将键盘和鼠标解锁,只需要输入你的密码然后敲击回车键就可以了,在输入的过程中你将看不到密码。只需要输入然后敲回车键就可以了。在你输入了正确的密码后,鼠标和键盘就可以再工作了。假如你输入了一个错误的密码,你将听到警告声。按 `ESC` 来清除输入的错误密码,然后重新输入正确的密码。要去掉未完全输入完的密码中的一个字符,只需要按 `BACKSPACE` 或者 `DELETE` 键就可以了。 ### 要是我被永久地锁住了怎么办? -以防你被永久地锁定了屏幕,切换至一个 TTY(例如 CTRL+ALT+F2)然后运行: +以防你被永久地锁定了屏幕,切换至一个 TTY(例如 `CTRL+ALT+F2`)然后运行: + ``` $ sudo killall xtrlock ``` -或者你还可以使用 **chvt** 命令来在 TTY 和 X 会话之间切换。 +或者你还可以使用 `chvt` 命令来在 TTY 和 X 会话之间切换。 例如,如果要切换到 TTY1,则运行: + ``` $ sudo chvt 1 ``` 要切换回 X 会话,则键入: + ``` $ sudo chvt 7 ``` @@ -137,6 +155,7 @@ $ sudo chvt 7 不同的发行版使用了不同的快捷键组合来在不同的 TTY 间切换。请参考你安装的对应发行版的官方网站了解更多详情。 如果想知道更多 xtrlock 的信息,请参考 man 页: + ``` $ man xtrlock ``` @@ -145,7 +164,7 @@ $ man xtrlock **资源:** - * [**Ubuntu 论坛**][10] +* [**Ubuntu 论坛**][10] -------------------------------------------------------------------------------- @@ -154,7 +173,7 @@ via: https://www.ostechnix.com/lock-keyboard-mouse-not-screen-linux/ 作者:[SK][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -167,5 +186,5 @@ via: https://www.ostechnix.com/lock-keyboard-mouse-not-screen-linux/ [6]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-1.png [7]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-2.png [8]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-3.png -[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/xtrlock-1.png +[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/xtrclock-1.png [10]:https://ubuntuforums.org/showthread.php?t=993800 From e497745c703489f326dd3ff8df548c1c2c9e58a7 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Mon, 15 Oct 2018 23:00:57 +0800 Subject: [PATCH 030/140] PUB:20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md @FSSlc https://linux.cn/article-10119-1.html --- ...To Lock The Keyboard And Mouse, But Not The Screen In Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md (100%) diff --git a/translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md b/published/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md similarity index 100% rename from translated/tech/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md rename to published/20180817 How To Lock The Keyboard And Mouse, But Not The Screen In Linux.md From d67b663d79cadcf8294d6cbcf512ceb1e6495f5a Mon Sep 17 00:00:00 2001 From: lctt-bot Date: Mon, 15 Oct 2018 17:00:22 +0000 Subject: [PATCH 031/140] Revert "[Translating] Know Your Storage- Block, File - Object" This reverts commit 8ab0b6ccd7fb6da546290ec9a663ee23f4ac8a84. 
--- sources/tech/20180911 Know Your Storage- Block, File - Object.md | 1 - 1 file changed, 1 deletion(-) diff --git a/sources/tech/20180911 Know Your Storage- Block, File - Object.md b/sources/tech/20180911 Know Your Storage- Block, File - Object.md index 186b41d41a..24f179d9d5 100644 --- a/sources/tech/20180911 Know Your Storage- Block, File - Object.md +++ b/sources/tech/20180911 Know Your Storage- Block, File - Object.md @@ -1,4 +1,3 @@ -translating by name1e5s Know Your Storage: Block, File & Object ====== From e1ebeb8486560ec920a217bb5488f084f18729c5 Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 16 Oct 2018 08:50:21 +0800 Subject: [PATCH 032/140] =?UTF-8?q?=E9=80=89=E9=A2=98:=20How=20To=20Browse?= =?UTF-8?q?=20And=20Read=20Entire=20Arch=20Wiki=20As=20Linux=20Man=20Pages?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ead Entire Arch Wiki As Linux Man Pages.md | 151 ++++++++++++++++++ 1 file changed, 151 insertions(+) create mode 100644 sources/tech/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md diff --git a/sources/tech/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md b/sources/tech/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md new file mode 100644 index 0000000000..fe61f32dda --- /dev/null +++ b/sources/tech/20181015 How To Browse And Read Entire Arch Wiki As Linux Man Pages.md @@ -0,0 +1,151 @@ +How To Browse And Read Entire Arch Wiki As Linux Man Pages +====== +![](https://www.ostechnix.com/wp-content/uploads/2018/10/arch-wiki-720x340.jpg) + +A while ago, I wrote a guide that described how to browse the Arch Wiki from your Terminal using a command line script named [**arch-wiki-cli**][1]. Using this script, anyone can easily navigate through entire Arch Wiki website and read it with a text browser of your choice. Obviously, an active Internet connection is required to use this script. Today, I stumbled upon a similar utility named **“Arch-wiki-man”**. As the name says, it is also used to read the Arch Wiki from command line, but it doesn’t require Internet connection. Arch-wiki-man program helps you to browse and read entire Arch Wiki as Linux man pages. It will display any article from Arch Wiki in man pages format. Also, you need not to be online to browse Arch Wiki. The entire Arch Wiki will be downloaded locally and the updates are pushed automatically every two days. So, you always have an up-to-date, local copy of the Arch Wiki on your system. + +### Installing Arch-wiki-man + +Arch-wiki-man is available in [**AUR**][2], so you can install it using any AUR helper programs, for example [**Yay**][3]. + +``` + $ yay -S arch-wiki-man +``` + +Alternatively, it can be installed using NPM package manager like below. Make sure you have [**installed NodeJS**][4] and run the following command to install it: + +``` + $ npm install -g arch-wiki-man +``` + +### Browse And Read Entire Arch Wiki As Linux Man Pages + +The typical syntax of Arch-wiki-man is: + +``` + $ awman +``` + +Let me show you some examples. + +**Search with one or more matches** + +Let us search for a [**Arch Linux installation guide**][5]. To do so, simply run: + +``` + $ awman Installation guide +``` + +The above command will search for the matches that contains the search term “Installation guide” in the Arch Wiki. If there are multiple matches for the given search term, a selection menu will appear. 
Choose the guide you want to read using **UP/DOWN arrows** or Vim-style keybindings (**j/k**) and hit ENTER to open it. The resulting guide will open in man page format, as shown below.

![][6]

Here, `awman` stands for **a**rch **w**iki **m**an.

All man command options are supported, so you can navigate through the guide the same way you do when reading a man page. To view the help section, press **h**.

![][7]

To exit the selection menu without entering **man**, simply press **Ctrl+c**.

To go back and/or quit man, type **q**.

**Search matches in titles and descriptions**

By default, Awman will search for matches in titles only. You can, however, direct it to search for matches in both the titles and the descriptions.

```
$ awman -d vim
```

Or,

```
$ awman --desc-search vim
```

**Search for matches in contents**

Apart from searching for matches in titles and descriptions, it is also possible to scan the contents for a match. Please note that this will significantly slow down the search process.

```
$ awman -k emacs
```

Or,

```
$ awman --apropos emacs
```

**Open the search results in web browser**

If you don’t want to view the arch wiki guides in man page format, you can open them in a web browser. To do so, run:

```
$ awman -w pacman
```

Or,

```
$ awman --web pacman
```

This command will open the resulting match in the default web browser rather than with the **man** command. Please note that you need an Internet connection to use this option.

**Search in other languages**

By default, Awman will open the Arch wiki pages in English. If you want to view the results in other languages, for example **Spanish**, simply do:

```
$ awman -l spanish codecs
```

![][8]

To view the list of available language options, run:

```
$ awman --list-languages
```

**Update the local copy of Arch Wiki**

As I already said, the updates are pushed automatically every two days. If you want to update the local copy manually, simply run:

```
$ awman-update
arch-wiki-man@1.3.0 /usr/lib/node_modules/arch-wiki-man
└── arch-wiki-md-repo@0.10.84

arch-wiki-md-repo has been successfully updated or reinstalled.
```

Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-browse-and-read-entire-arch-wiki-as-linux-man-pages/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/search-arch-wiki-website-commandline/ +[2]: https://aur.archlinux.org/packages/arch-wiki-man/ +[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[4]: https://www.ostechnix.com/install-node-js-linux/ +[5]: https://www.ostechnix.com/install-arch-linux-latest-version/ +[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/awman-1.gif +[7]: http://www.ostechnix.com/wp-content/uploads/2018/10/awman-2.png +[8]: https://www.ostechnix.com/wp-content/uploads/2018/10/awman-3-1.png From 8e336162c7618734bed4ff5e5667db5db64f45ca Mon Sep 17 00:00:00 2001 From: darksun Date: Tue, 16 Oct 2018 09:01:59 +0800 Subject: [PATCH 033/140] =?UTF-8?q?=E9=80=89=E9=A2=98:=20Lab=204:=20Preemp?= =?UTF-8?q?tive=20Multitasking?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...20181016 Lab 4- Preemptive Multitasking.md | 595 ++++++++++++++++++ 1 file changed, 595 insertions(+) create mode 100644 sources/tech/20181016 Lab 4- Preemptive Multitasking.md diff --git a/sources/tech/20181016 Lab 4- Preemptive Multitasking.md b/sources/tech/20181016 Lab 4- Preemptive Multitasking.md new file mode 100644 index 0000000000..9e510ed7c6 --- /dev/null +++ b/sources/tech/20181016 Lab 4- Preemptive Multitasking.md @@ -0,0 +1,595 @@ +Lab 4: Preemptive Multitasking +====== +### Lab 4: Preemptive Multitasking + +**Part A due Thursday, October 18, 2018 +Part B due Thursday, October 25, 2018 +Part C due Thursday, November 1, 2018** + +#### Introduction + +In this lab you will implement preemptive multitasking among multiple simultaneously active user-mode environments. + +In part A you will add multiprocessor support to JOS, implement round-robin scheduling, and add basic environment management system calls (calls that create and destroy environments, and allocate/map memory). + +In part B, you will implement a Unix-like `fork()`, which allows a user-mode environment to create copies of itself. + +Finally, in part C you will add support for inter-process communication (IPC), allowing different user-mode environments to communicate and synchronize with each other explicitly. You will also add support for hardware clock interrupts and preemption. + +##### Getting Started + +Use Git to commit your Lab 3 source, fetch the latest version of the course repository, and then create a local branch called `lab4` based on our lab4 branch, `origin/lab4`: + +``` + athena% cd ~/6.828/lab + athena% add git + athena% git pull + Already up-to-date. + athena% git checkout -b lab4 origin/lab4 + Branch lab4 set up to track remote branch refs/remotes/origin/lab4. + Switched to a new branch "lab4" + athena% git merge lab3 + Merge made by recursive. + ... 
 athena%
+```
+
+Lab 4 contains a number of new source files, some of which you should browse before you start:

| File | Description |
| ---- | ----------- |
| kern/cpu.h | Kernel-private definitions for multiprocessor support |
| kern/mpconfig.c | Code to read the multiprocessor configuration |
| kern/lapic.c | Kernel code driving the local APIC unit in each processor |
| kern/mpentry.S | Assembly-language entry code for non-boot CPUs |
| kern/spinlock.h | Kernel-private definitions for spin locks, including the big kernel lock |
| kern/spinlock.c | Kernel code implementing spin locks |
| kern/sched.c | Code skeleton of the scheduler that you are about to implement |

##### Lab Requirements

This lab is divided into three parts, A, B, and C. We have allocated one week in the schedule for each part.

As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. (You do not need to do one challenge problem per part, just one for the whole lab.) Additionally, you will need to write up a brief description of the challenge problem that you implemented. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab4.txt` in the top level of your `lab` directory before handing in your work.

#### Part A: Multiprocessor Support and Cooperative Multitasking

In the first part of this lab, you will first extend JOS to run on a multiprocessor system, and then implement some new JOS kernel system calls to allow user-level environments to create additional new environments. You will also implement _cooperative_ round-robin scheduling, allowing the kernel to switch from one environment to another when the current environment voluntarily relinquishes the CPU (or exits). Later in part C you will implement _preemptive_ scheduling, which allows the kernel to re-take control of the CPU from an environment after a certain time has passed even if the environment does not cooperate.

##### Multiprocessor Support

We are going to make JOS support "symmetric multiprocessing" (SMP), a multiprocessor model in which all CPUs have equivalent access to system resources such as memory and I/O buses. While all CPUs are functionally identical in SMP, during the boot process they can be classified into two types: the bootstrap processor (BSP) is responsible for initializing the system and for booting the operating system; and the application processors (APs) are activated by the BSP only after the operating system is up and running. Which processor is the BSP is determined by the hardware and the BIOS. Up to this point, all your existing JOS code has been running on the BSP.

In an SMP system, each CPU has an accompanying local APIC (LAPIC) unit. The LAPIC units are responsible for delivering interrupts throughout the system. The LAPIC also provides its connected CPU with a unique identifier. In this lab, we make use of the following basic functionality of the LAPIC unit (in `kern/lapic.c`):

 * Reading the LAPIC identifier (APIC ID) to tell which CPU our code is currently running on (see `cpunum()`).
 * Sending the `STARTUP` interprocessor interrupt (IPI) from the BSP to the APs to bring up other CPUs (see `lapic_startap()`).
 * In part C, we program LAPIC's built-in timer to trigger clock interrupts to support preemptive multitasking (see `apic_init()`).

A processor accesses its LAPIC using memory-mapped I/O (MMIO). In MMIO, a portion of _physical_ memory is hardwired to the registers of some I/O devices, so the same load/store instructions typically used to access memory can be used to access device registers. You've already seen one IO hole at physical address `0xA0000` (we use this to write to the VGA display buffer). The LAPIC lives in a hole starting at physical address `0xFE000000` (32MB short of 4GB), so it's too high for us to access using our usual direct map at KERNBASE. The JOS virtual memory map leaves a 4MB gap at `MMIOBASE` so we have a place to map devices like this. Since later labs introduce more MMIO regions, you'll write a simple function to allocate space from this region and map device memory to it.
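Before diving into the exercise, it may help to recall what memory-mapped register access looks like in C. The sketch below is illustrative only: it is not JOS's actual `kern/lapic.c` code, and `base` stands for whatever virtual address your `mmio_map_region` hands back (the `0x0020`/`0x00B0` offsets mentioned in the comments are the standard LAPIC ID and EOI register offsets from the x86 manuals).

```
#include <stdint.h>

// Generic MMIO access pattern: device registers are reached with ordinary
// loads and stores; 'volatile' stops the compiler from caching or
// reordering those accesses.
static inline uint32_t mmio_read32(volatile uint32_t *base, uint32_t off)
{
        return base[off / 4];      // e.g. off = 0x0020 reads the LAPIC ID register
}

static inline void mmio_write32(volatile uint32_t *base, uint32_t off, uint32_t v)
{
        base[off / 4] = v;         // e.g. off = 0x00B0 writes EOI to ack an interrupt
}
```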
```
Exercise 1. Implement `mmio_map_region` in `kern/pmap.c`. To see how this is used, look at the beginning of `lapic_init` in `kern/lapic.c`. You'll have to do the next exercise, too, before the tests for `mmio_map_region` will run.
```

###### Application Processor Bootstrap

Before booting up APs, the BSP should first collect information about the multiprocessor system, such as the total number of CPUs, their APIC IDs and the MMIO address of the LAPIC unit. The `mp_init()` function in `kern/mpconfig.c` retrieves this information by reading the MP configuration table that resides in the BIOS's region of memory.

The `boot_aps()` function (in `kern/init.c`) drives the AP bootstrap process. APs start in real mode, much like how the bootloader started in `boot/boot.S`, so `boot_aps()` copies the AP entry code (`kern/mpentry.S`) to a memory location that is addressable in the real mode. Unlike with the bootloader, we have some control over where the AP will start executing code; we copy the entry code to `0x7000` (`MPENTRY_PADDR`), but any unused, page-aligned physical address below 640KB would work.

After that, `boot_aps()` activates APs one after another, by sending `STARTUP` IPIs to the LAPIC unit of the corresponding AP, along with an initial `CS:IP` address at which the AP should start running its entry code (`MPENTRY_PADDR` in our case). The entry code in `kern/mpentry.S` is quite similar to that of `boot/boot.S`. After some brief setup, it puts the AP into protected mode with paging enabled, and then calls the C setup routine `mp_main()` (also in `kern/init.c`). `boot_aps()` waits for the AP to signal a `CPU_STARTED` flag in `cpu_status` field of its `struct CpuInfo` before going on to wake up the next one.

```
Exercise 2. Read `boot_aps()` and `mp_main()` in `kern/init.c`, and the assembly code in `kern/mpentry.S`. Make sure you understand the control flow transfer during the bootstrap of APs. Then modify your implementation of `page_init()` in `kern/pmap.c` to avoid adding the page at `MPENTRY_PADDR` to the free list, so that we can safely copy and run AP bootstrap code at that physical address. Your code should pass the updated `check_page_free_list()` test (but might fail the updated `check_kern_pgdir()` test, which we will fix soon).
```

```
Question

 1. Compare `kern/mpentry.S` side by side with `boot/boot.S`. Bearing in mind that `kern/mpentry.S` is compiled and linked to run above `KERNBASE` just like everything else in the kernel, what is the purpose of macro `MPBOOTPHYS`? Why is it necessary in `kern/mpentry.S` but not in `boot/boot.S`? In other words, what could go wrong if it were omitted in `kern/mpentry.S`?
Hint: recall the differences between the link address and the load address that we have discussed in Lab 1. 
```

###### Per-CPU State and Initialization

When writing a multiprocessor OS, it is important to distinguish between per-CPU state that is private to each processor, and global state that the whole system shares. `kern/cpu.h` defines most of the per-CPU state, including `struct CpuInfo`, which stores per-CPU variables. `cpunum()` always returns the ID of the CPU that calls it, which can be used as an index into arrays like `cpus`. Alternatively, the macro `thiscpu` is shorthand for the current CPU's `struct CpuInfo`.

Here is the per-CPU state you should be aware of:

 * **Per-CPU kernel stack**.
Because multiple CPUs can trap into the kernel simultaneously, we need a separate kernel stack for each processor to prevent them from interfering with each other's execution. The array `percpu_kstacks[NCPU][KSTKSIZE]` reserves space for NCPU's worth of kernel stacks.

In Lab 2, you mapped the physical memory that `bootstack` refers to as the BSP's kernel stack just below `KSTACKTOP`. Similarly, in this lab, you will map each CPU's kernel stack into this region with guard pages acting as a buffer between them. CPU 0's stack will still grow down from `KSTACKTOP`; CPU 1's stack will start `KSTKGAP` bytes below the bottom of CPU 0's stack, and so on. `inc/memlayout.h` shows the mapping layout.

 * **Per-CPU TSS and TSS descriptor**.
A per-CPU task state segment (TSS) is also needed in order to specify where each CPU's kernel stack lives. The TSS for CPU _i_ is stored in `cpus[i].cpu_ts`, and the corresponding TSS descriptor is defined in the GDT entry `gdt[(GD_TSS0 >> 3) + i]`. The global `ts` variable defined in `kern/trap.c` will no longer be useful.

 * **Per-CPU current environment pointer**.
Since each CPU can run a different user process simultaneously, we redefined the symbol `curenv` to refer to `cpus[cpunum()].cpu_env` (or `thiscpu->cpu_env`), which points to the environment _currently_ executing on the _current_ CPU (the CPU on which the code is running).

 * **Per-CPU system registers**.
All registers, including system registers, are private to a CPU. Therefore, instructions that initialize these registers, such as `lcr3()`, `ltr()`, `lgdt()`, `lidt()`, etc., must be executed once on each CPU. The functions `env_init_percpu()` and `trap_init_percpu()` are defined for this purpose.

```
Exercise 3. Modify `mem_init_mp()` (in `kern/pmap.c`) to map per-CPU stacks starting at `KSTACKTOP`, as shown in `inc/memlayout.h`. The size of each stack is `KSTKSIZE` bytes plus `KSTKGAP` bytes of unmapped guard pages. Your code should pass the new check in `check_kern_pgdir()`.
```

```
Exercise 4. The code in `trap_init_percpu()` (`kern/trap.c`) initializes the TSS and TSS descriptor for the BSP. It worked in Lab 3, but is incorrect when running on other CPUs. Change the code so that it can work on all CPUs. (Note: your new code should not use the global `ts` variable any more.)
```

When you finish the above exercises, run JOS in QEMU with 4 CPUs using make qemu CPUS=4 (or make qemu-nox CPUS=4). You should see output like this:

```
 ...
 Physical memory: 66556K available, base = 640K, extended = 65532K
 check_page_alloc() succeeded!
 check_page() succeeded!
 check_kern_pgdir() succeeded!
 check_page_installed_pgdir() succeeded!
 SMP: CPU 0 found 4 CPU(s)
 enabled interrupts: 1 2
 SMP: CPU 1 starting
 SMP: CPU 2 starting
 SMP: CPU 3 starting
```

###### Locking

Our current code spins after initializing the AP in `mp_main()`.
Before letting the AP get any further, we first need to address race conditions when multiple CPUs run kernel code simultaneously. The simplest way to achieve this is to use a _big kernel lock_. The big kernel lock is a single global lock that is held whenever an environment enters kernel mode, and is released when the environment returns to user mode. In this model, environments in user mode can run concurrently on any available CPUs, but no more than one environment can run in kernel mode; any other environments that try to enter kernel mode are forced to wait.

`kern/spinlock.h` declares the big kernel lock, namely `kernel_lock`. It also provides `lock_kernel()` and `unlock_kernel()`, shortcuts to acquire and release the lock. You should apply the big kernel lock at four locations:

 * In `i386_init()`, acquire the lock before the BSP wakes up the other CPUs.
 * In `mp_main()`, acquire the lock after initializing the AP, and then call `sched_yield()` to start running environments on this AP.
 * In `trap()`, acquire the lock when trapped from user mode. To determine whether a trap happened in user mode or in kernel mode, check the low bits of `tf_cs`.
 * In `env_run()`, release the lock _right before_ switching to user mode. Do not do that too early or too late; otherwise you will experience races or deadlocks.

```
Exercise 5. Apply the big kernel lock as described above, by calling `lock_kernel()` and `unlock_kernel()` at the proper locations.
```

How can you test whether your locking is correct? You can't at this moment! But you will be able to after you implement the scheduler in the next exercise.

```
Question

 2. It seems that using the big kernel lock guarantees that only one CPU can run the kernel code at a time. Why do we still need separate kernel stacks for each CPU? Describe a scenario in which using a shared kernel stack will go wrong, even with the protection of the big kernel lock.
```

```
Challenge! The big kernel lock is simple and easy to use. Nevertheless, it eliminates all concurrency in kernel mode. Most modern operating systems use different locks to protect different parts of their shared state, an approach called _fine-grained locking_. Fine-grained locking can increase performance significantly, but is more difficult to implement and error-prone. If you are brave enough, drop the big kernel lock and embrace concurrency in JOS!

It is up to you to decide the locking granularity (the amount of data that a lock protects). As a hint, you may consider using spin locks to ensure exclusive access to these shared components in the JOS kernel:

 * The page allocator.
 * The console driver.
 * The scheduler.
 * The inter-process communication (IPC) state that you will implement in part C.
```

##### Round-Robin Scheduling

Your next task in this lab is to change the JOS kernel so that it can alternate between multiple environments in "round-robin" fashion. Round-robin scheduling in JOS works as follows:

 * The function `sched_yield()` in the new `kern/sched.c` is responsible for selecting a new environment to run. It searches sequentially through the `envs[]` array in circular fashion, starting just after the previously running environment (or at the beginning of the array if there was no previously running environment), picks the first environment it finds with a status of `ENV_RUNNABLE` (see `inc/env.h`), and calls `env_run()` to jump into that environment.
 * `sched_yield()` must never run the same environment on two CPUs at the same time. It can tell that an environment is currently running on some CPU (possibly the current CPU) because that environment's status will be `ENV_RUNNING`.
 * We have implemented a new system call for you, `sys_yield()`, which user environments can call to invoke the kernel's `sched_yield()` function and thereby voluntarily give up the CPU to a different environment.

```
Exercise 6. Implement round-robin scheduling in `sched_yield()` as described above. Don't forget to modify `syscall()` to dispatch `sys_yield()`.

Make sure to invoke `sched_yield()` in `mp_main`.

Modify `kern/init.c` to create three (or more!) environments that all run the program `user/yield.c`.

Run make qemu. You should see the environments switch back and forth between each other five times before terminating, like below.

Test also with several CPUs: make qemu CPUS=2.

 ...
 Hello, I am environment 00001000.
 Hello, I am environment 00001001.
 Hello, I am environment 00001002.
 Back in environment 00001000, iteration 0.
 Back in environment 00001001, iteration 0.
 Back in environment 00001002, iteration 0.
 Back in environment 00001000, iteration 1.
 Back in environment 00001001, iteration 1.
 Back in environment 00001002, iteration 1.
 ...

After the `yield` programs exit, there will be no runnable environments in the system, and the scheduler should invoke the JOS kernel monitor. If any of this does not happen, then fix your code before proceeding.
```

```
Question

 3. In your implementation of `env_run()` you should have called `lcr3()`. Before and after the call to `lcr3()`, your code makes references (at least it should) to the variable `e`, the argument to `env_run`. Upon loading the `%cr3` register, the addressing context used by the MMU is instantly changed. But a virtual address (namely `e`) has meaning relative to a given address context--the address context specifies the physical address to which the virtual address maps. Why can the pointer `e` be dereferenced both before and after the addressing switch?
 4. Whenever the kernel switches from one environment to another, it must ensure the old environment's registers are saved so they can be restored properly later. Why? Where does this happen?
```

```
Challenge! Add a less trivial scheduling policy to the kernel, such as a fixed-priority scheduler that allows each environment to be assigned a priority and ensures that higher-priority environments are always chosen in preference to lower-priority environments. If you're feeling really adventurous, try implementing a Unix-style adjustable-priority scheduler or even a lottery or stride scheduler. (Look up "lottery scheduling" and "stride scheduling" in Google.)

Write a test program or two that verifies that your scheduling algorithm is working correctly (i.e., the right environments get run in the right order). It may be easier to write these test programs once you have implemented `fork()` and IPC in parts B and C of this lab.
```

```
Challenge! The JOS kernel currently does not allow applications to use the x86 processor's x87 floating-point unit (FPU), MMX instructions, or Streaming SIMD Extensions (SSE). Extend the `Env` structure to provide a save area for the processor's floating point state, and extend the context switching code to save and restore this state properly when switching from one environment to another.
The `FXSAVE` and `FXRSTOR` instructions may be useful, but note that these are not in the old i386 user's manual because they were introduced in more recent processors. Write a user-level test program that does something cool with floating-point.
```

##### System Calls for Environment Creation

Although your kernel is now capable of running and switching between multiple user-level environments, it is still limited to running environments that the _kernel_ initially set up. You will now implement the necessary JOS system calls to allow _user_ environments to create and start other new user environments.

Unix provides the `fork()` system call as its process creation primitive. Unix `fork()` copies the entire address space of the calling process (the parent) to create a new process (the child). The only differences between the two observable from user space are their process IDs and parent process IDs (as returned by `getpid` and `getppid`). In the parent, `fork()` returns the child's process ID, while in the child, `fork()` returns 0. By default, each process gets its own private address space, and neither process's modifications to memory are visible to the other.

You will provide a different, more primitive set of JOS system calls for creating new user-mode environments. With these system calls you will be able to implement a Unix-like `fork()` entirely in user space, in addition to other styles of environment creation. The new system calls you will write for JOS are as follows:

 * `sys_exofork`:
This system call creates a new environment with an almost blank slate: nothing is mapped in the user portion of its address space, and it is not runnable. The new environment will have the same register state as the parent environment at the time of the `sys_exofork` call. In the parent, `sys_exofork` will return the `envid_t` of the newly created environment (or a negative error code if the environment allocation failed). In the child, however, it will return 0. (Since the child starts out marked as not runnable, `sys_exofork` will not actually return in the child until the parent has explicitly allowed this by marking the child runnable using....)
 * `sys_env_set_status`:
Sets the status of a specified environment to `ENV_RUNNABLE` or `ENV_NOT_RUNNABLE`. This system call is typically used to mark a new environment ready to run, once its address space and register state have been fully initialized.
 * `sys_page_alloc`:
Allocates a page of physical memory and maps it at a given virtual address in a given environment's address space.
 * `sys_page_map`:
Copy a page mapping (_not_ the contents of a page!) from one environment's address space to another, leaving a memory sharing arrangement in place so that the new and the old mappings both refer to the same page of physical memory.
 * `sys_page_unmap`:
Unmap a page mapped at a given virtual address in a given environment.

For all of the system calls above that accept environment IDs, the JOS kernel supports the convention that a value of 0 means "the current environment." This convention is implemented by `envid2env()` in `kern/env.c`.

We have provided a very primitive implementation of a Unix-like `fork()` in the test program `user/dumbfork.c`. This test program uses the above system calls to create and run a child environment with a copy of its own address space. The two environments then switch back and forth using `sys_yield` as in the previous exercise.
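
Because `sys_exofork` "returns twice," code that uses it has a characteristic shape that is worth internalizing before you start. The sketch below shows the pattern as it appears in `user/dumbfork.c`; the wrapper name `spawn_child` is made up for this illustration, and the parent's address-space setup is elided.

```
#include <inc/lib.h>	// sys_exofork, sys_env_set_status, thisenv, envs, ...

envid_t
spawn_child(void)
{
	envid_t envid;

	// sys_exofork returns the child's envid in the parent and 0 in
	// the child -- but the child only starts running later, once the
	// parent marks it ENV_RUNNABLE.
	envid = sys_exofork();
	if (envid < 0)
		panic("sys_exofork: %e", envid);
	if (envid == 0) {
		// We're the child. The copied value of the global 'thisenv'
		// still refers to the parent, so fix it up before using it.
		thisenv = &envs[ENVX(sys_getenvid())];
		return 0;
	}

	// We're the parent: copy or map pages into the child's address
	// space here (elided), then allow the child to run.
	if (sys_env_set_status(envid, ENV_RUNNABLE) < 0)
		panic("sys_env_set_status failed");
	return envid;
}
```
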
In `user/dumbfork.c`, the parent exits after 10 iterations, whereas the child exits after 20.

```
Exercise 7. Implement the system calls described above in `kern/syscall.c` and make sure `syscall()` calls them. You will need to use various functions in `kern/pmap.c` and `kern/env.c`, particularly `envid2env()`. For now, whenever you call `envid2env()`, pass 1 in the `checkperm` parameter. Be sure you check for any invalid system call arguments, returning `-E_INVAL` in that case. Test your JOS kernel with `user/dumbfork` and make sure it works before proceeding.
```

```
Challenge! Add the additional system calls necessary to _read_ all of the vital state of an existing environment as well as set it up. Then implement a user mode program that forks off a child environment, runs it for a while (e.g., a few iterations of `sys_yield()`), then takes a complete snapshot or _checkpoint_ of the child environment, runs the child for a while longer, and finally restores the child environment to the state it was in at the checkpoint and continues it from there. Thus, you are effectively "replaying" the execution of the child environment from an intermediate state. Make the child environment perform some interaction with the user using `sys_cgetc()` or `readline()` so that the user can view and mutate its internal state, and verify that with your checkpoint/restart you can give the child environment a case of selective amnesia, making it "forget" everything that happened beyond a certain point.
```

This completes Part A of the lab; make sure it passes all of the Part A tests when you run make grade, and hand it in using make handin as usual. If you are trying to figure out why a particular test case is failing, run ./grade-lab4 -v, which will show you the output of the kernel builds and QEMU runs for each test, until a test fails. When a test fails, the script will stop, and then you can inspect `jos.out` to see what the kernel actually printed.

#### Part B: Copy-on-Write Fork

As mentioned earlier, Unix provides the `fork()` system call as its primary process creation primitive. The `fork()` system call copies the address space of the calling process (the parent) to create a new process (the child).

xv6 Unix implements `fork()` by copying all data from the parent's pages into new pages allocated for the child. This is essentially the same approach that `dumbfork()` takes. The copying of the parent's address space into the child is the most expensive part of the `fork()` operation.

However, a call to `fork()` is frequently followed almost immediately by a call to `exec()` in the child process, which replaces the child's memory with a new program. This is what the shell typically does, for example. In this case, the time spent copying the parent's address space is largely wasted, because the child process will use very little of its memory before calling `exec()`.

For this reason, later versions of Unix took advantage of virtual memory hardware to allow the parent and child to _share_ the memory mapped into their respective address spaces until one of the processes actually modifies it. This technique is known as _copy-on-write_. To do this, on `fork()` the kernel would copy the address space _mappings_ from the parent to the child instead of the contents of the mapped pages, and at the same time mark the now-shared pages read-only. When one of the two processes tries to write to one of these shared pages, the process takes a page fault.
At this point, the Unix kernel realizes that the page was really a "virtual" or "copy-on-write" copy, and so it makes a new, private, writable copy of the page for the faulting process. In this way, the contents of individual pages aren't actually copied until they are actually written to. This optimization makes a `fork()` followed by an `exec()` in the child much cheaper: the child will probably only need to copy one page (the current page of its stack) before it calls `exec()`.

In the next piece of this lab, you will implement a "proper" Unix-like `fork()` with copy-on-write, as a user space library routine. Implementing `fork()` and copy-on-write support in user space has the benefit that the kernel remains much simpler and thus more likely to be correct. It also lets individual user-mode programs define their own semantics for `fork()`. A program that wants a slightly different implementation (for example, the expensive always-copy version like `dumbfork()`, or one in which the parent and child actually share memory afterward) can easily provide its own.

##### User-level page fault handling

A user-level copy-on-write `fork()` needs to know about page faults on write-protected pages, so that's what you'll implement first. Copy-on-write is only one of many possible uses for user-level page fault handling.

It's common to set up an address space so that page faults indicate when some action needs to take place. For example, most Unix kernels initially map only a single page in a new process's stack region, and allocate and map additional stack pages later "on demand" as the process's stack consumption increases and causes page faults on stack addresses that are not yet mapped. A typical Unix kernel must keep track of what action to take when a page fault occurs in each region of a process's space. For example, a fault in the stack region will typically allocate and map a new page of physical memory. A fault in the program's BSS region will typically allocate a new page, fill it with zeroes, and map it. In systems with demand-paged executables, a fault in the text region will read the corresponding page of the binary off of disk and then map it.

This is a lot of information for the kernel to keep track of. Instead of taking the traditional Unix approach, you will decide what to do about each page fault in user space, where bugs are less damaging. This design has the added benefit of allowing programs great flexibility in defining their memory regions; you'll use user-level page fault handling later for mapping and accessing files on a disk-based file system.

###### Setting the Page Fault Handler

In order to handle its own page faults, a user environment will need to register a _page fault handler entrypoint_ with the JOS kernel. The user environment registers its page fault entrypoint via the new `sys_env_set_pgfault_upcall` system call. We have added a new member to the `Env` structure, `env_pgfault_upcall`, to record this information.

```
Exercise 8. Implement the `sys_env_set_pgfault_upcall` system call. Be sure to enable permission checking when looking up the environment ID of the target environment, since this is a "dangerous" system call.
```

###### Normal and Exception Stacks in User Environments

During normal execution, a user environment in JOS will run on the _normal_ user stack: its `ESP` register starts out pointing at `USTACKTOP`, and the stack data it pushes resides on the page between `USTACKTOP-PGSIZE` and `USTACKTOP-1` inclusive.
When a page fault occurs in user mode, however, the kernel will restart the user environment running a designated user-level page fault handler on a different stack, namely the _user exception_ stack. In essence, we will make the JOS kernel implement automatic "stack switching" on behalf of the user environment, in much the same way that the x86 _processor_ already implements stack switching on behalf of JOS when transferring from user mode to kernel mode! + +The JOS user exception stack is also one page in size, and its top is defined to be at virtual address `UXSTACKTOP`, so the valid bytes of the user exception stack are from `UXSTACKTOP-PGSIZE` through `UXSTACKTOP-1` inclusive. While running on this exception stack, the user-level page fault handler can use JOS's regular system calls to map new pages or adjust mappings so as to fix whatever problem originally caused the page fault. Then the user-level page fault handler returns, via an assembly language stub, to the faulting code on the original stack. + +Each user environment that wants to support user-level page fault handling will need to allocate memory for its own exception stack, using the `sys_page_alloc()` system call introduced in part A. + +###### Invoking the User Page Fault Handler + +You will now need to change the page fault handling code in `kern/trap.c` to handle page faults from user mode as follows. We will call the state of the user environment at the time of the fault the _trap-time_ state. + +If there is no page fault handler registered, the JOS kernel destroys the user environment with a message as before. Otherwise, the kernel sets up a trap frame on the exception stack that looks like a `struct UTrapframe` from `inc/trap.h`: + +``` + <-- UXSTACKTOP + trap-time esp + trap-time eflags + trap-time eip + trap-time eax start of struct PushRegs + trap-time ecx + trap-time edx + trap-time ebx + trap-time esp + trap-time ebp + trap-time esi + trap-time edi end of struct PushRegs + tf_err (error code) + fault_va <-- %esp when handler is run + +``` + +The kernel then arranges for the user environment to resume execution with the page fault handler running on the exception stack with this stack frame; you must figure out how to make this happen. The `fault_va` is the virtual address that caused the page fault. + +If the user environment is _already_ running on the user exception stack when an exception occurs, then the page fault handler itself has faulted. In this case, you should start the new stack frame just under the current `tf->tf_esp` rather than at `UXSTACKTOP`. You should first push an empty 32-bit word, then a `struct UTrapframe`. + +To test whether `tf->tf_esp` is already on the user exception stack, check whether it is in the range between `UXSTACKTOP-PGSIZE` and `UXSTACKTOP-1`, inclusive. + +``` +Exercise 9. Implement the code in `page_fault_handler` in `kern/trap.c` required to dispatch page faults to the user-mode handler. Be sure to take appropriate precautions when writing into the exception stack. (What happens if the user environment runs out of space on the exception stack?) +``` + +###### User-mode Page Fault Entrypoint + +Next, you need to implement the assembly routine that will take care of calling the C page fault handler and resume execution at the original faulting instruction. This assembly routine is the handler that will be registered with the kernel using `sys_env_set_pgfault_upcall()`. + +``` +Exercise 10. Implement the `_pgfault_upcall` routine in `lib/pfentry.S`. 
The interesting part is returning to the original point in the user code that caused the page fault. You'll return directly there, without going back through the kernel. The hard part is simultaneously switching stacks and re-loading the EIP. +``` + +Finally, you need to implement the C user library side of the user-level page fault handling mechanism. + +``` +Exercise 11. Finish `set_pgfault_handler()` in `lib/pgfault.c`. +``` + +###### Testing + +Run `user/faultread` (make run-faultread). You should see: + +``` + ... + [00000000] new env 00001000 + [00001000] user fault va 00000000 ip 0080003a + TRAP frame ... + [00001000] free env 00001000 +``` + +Run `user/faultdie`. You should see: + +``` + ... + [00000000] new env 00001000 + i faulted at va deadbeef, err 6 + [00001000] exiting gracefully + [00001000] free env 00001000 +``` + +Run `user/faultalloc`. You should see: + +``` + ... + [00000000] new env 00001000 + fault deadbeef + this string was faulted in at deadbeef + fault cafebffe + fault cafec000 + this string was faulted in at cafebffe + [00001000] exiting gracefully + [00001000] free env 00001000 +``` + +If you see only the first "this string" line, it means you are not handling recursive page faults properly. + +Run `user/faultallocbad`. You should see: + +``` + ... + [00000000] new env 00001000 + [00001000] user_mem_check assertion failure for va deadbeef + [00001000] free env 00001000 +``` + +Make sure you understand why `user/faultalloc` and `user/faultallocbad` behave differently. + +``` +Challenge! Extend your kernel so that not only page faults, but _all_ types of processor exceptions that code running in user space can generate, can be redirected to a user-mode exception handler. Write user-mode test programs to test user-mode handling of various exceptions such as divide-by-zero, general protection fault, and illegal opcode. +``` + +##### Implementing Copy-on-Write Fork + +You now have the kernel facilities to implement copy-on-write `fork()` entirely in user space. + +We have provided a skeleton for your `fork()` in `lib/fork.c`. Like `dumbfork()`, `fork()` should create a new environment, then scan through the parent environment's entire address space and set up corresponding page mappings in the child. The key difference is that, while `dumbfork()` copied _pages_ , `fork()` will initially only copy page _mappings_. `fork()` will copy each page only when one of the environments tries to write it. + +The basic control flow for `fork()` is as follows: + + 1. The parent installs `pgfault()` as the C-level page fault handler, using the `set_pgfault_handler()` function you implemented above. + + 2. The parent calls `sys_exofork()` to create a child environment. + + 3. For each writable or copy-on-write page in its address space below UTOP, the parent calls `duppage`, which should map the page copy-on-write into the address space of the child and then _remap_ the page copy-on-write in its own address space. [ Note: The ordering here (i.e., marking a page as COW in the child before marking it in the parent) actually matters! Can you see why? Try to think of a specific case where reversing the order could cause trouble. ] `duppage` sets both PTEs so that the page is not writeable, and to contain `PTE_COW` in the "avail" field to distinguish copy-on-write pages from genuine read-only pages. + +The exception stack is _not_ remapped this way, however. Instead you need to allocate a fresh page in the child for the exception stack. 
Since the page fault handler will be doing the actual copying and the page fault handler runs on the exception stack, the exception stack cannot be made copy-on-write: who would copy it?

`fork()` also needs to handle pages that are present, but not writable or copy-on-write.

 4. The parent sets the user page fault entrypoint for the child to look like its own.

 5. The child is now ready to run, so the parent marks it runnable.

Each time one of the environments writes a copy-on-write page that it hasn't yet written, it will take a page fault. Here's the control flow for the user page fault handler:

 1. The kernel propagates the page fault to `_pgfault_upcall`, which calls `fork()`'s `pgfault()` handler.
 2. `pgfault()` checks that the fault is a write (check for `FEC_WR` in the error code) and that the PTE for the page is marked `PTE_COW`. If not, panic.
 3. `pgfault()` allocates a new page mapped at a temporary location and copies the contents of the faulting page into it. Then the fault handler maps the new page at the appropriate address with read/write permissions, in place of the old read-only mapping.

The user-level `lib/fork.c` code must consult the environment's page tables for several of the operations above (e.g., to check that the PTE for a page is marked `PTE_COW`). The kernel maps the environment's page tables at `UVPT` exactly for this purpose. It uses a [clever mapping trick][1] to make it easy to look up PTEs for user code. `lib/entry.S` sets up `uvpt` and `uvpd` so that you can easily look up page-table information in `lib/fork.c`.

```
Exercise 12. Implement `fork`, `duppage` and `pgfault` in `lib/fork.c`.

Test your code with the `forktree` program. It should produce the following messages, with interspersed 'new env', 'free env', and 'exiting gracefully' messages. The messages may not appear in this order, and the environment IDs may be different.

 1000: I am ''
 1001: I am '0'
 2000: I am '00'
 2001: I am '000'
 1002: I am '1'
 3000: I am '11'
 3001: I am '10'
 4000: I am '100'
 1003: I am '01'
 5000: I am '010'
 4001: I am '011'
 2002: I am '110'
 1004: I am '001'
 1005: I am '111'
 1006: I am '101'
```

```
Challenge! Implement a shared-memory `fork()` called `sfork()`. This version should have the parent and child _share_ all their memory pages (so writes in one environment appear in the other) except for pages in the stack area, which should be treated in the usual copy-on-write manner. Modify `user/forktree.c` to use `sfork()` instead of regular `fork()`. Also, once you have finished implementing IPC in part C, use your `sfork()` to run `user/pingpongs`. You will have to find a new way to provide the functionality of the global `thisenv` pointer.
```

```
Challenge! Your implementation of `fork` makes a huge number of system calls. On the x86, switching into the kernel using interrupts has non-trivial cost. Augment the system call interface so that it is possible to send a batch of system calls at once. Then change `fork` to use this interface.

How much faster is your new `fork`?

You can answer this (roughly) by using analytical arguments to estimate how much of an improvement batching system calls will make to the performance of your `fork`: How expensive is an `int 0x30` instruction? How many times do you execute `int 0x30` in your `fork`? Is accessing the `TSS` stack switch also expensive? And so on...

Alternatively, you can boot your kernel on real hardware and _really_ benchmark your code.
See the `RDTSC` (read time-stamp counter) instruction, defined in the IA32 manual, which counts the number of clock cycles that have elapsed since the last processor reset. QEMU doesn't emulate this instruction faithfully (it can either count the number of virtual instructions executed or use the host TSC, neither of which reflects the number of cycles a real CPU would require). +``` + +This ends part B. Make sure you pass all of the Part B tests when you run make grade. As usual, you can hand in your submission with make handin. + +#### Part C: Preemptive Multitasking and Inter-Process communication (IPC) + +In the final part of lab 4 you will modify the kernel to preempt uncooperative environments and to allow environments to pass messages to each other explicitly. + +##### Clock Interrupts and Preemption + +Run the `user/spin` test program. This test program forks off a child environment, which simply spins forever in a tight loop once it receives control of the CPU. Neither the parent environment nor the kernel ever regains the CPU. This is obviously not an ideal situation in terms of protecting the system from bugs or malicious code in user-mode environments, because any user-mode environment can bring the whole system to a halt simply by getting into an infinite loop and never giving back the CPU. In order to allow the kernel to _preempt_ a running environment, forcefully retaking control of the CPU from it, we must extend the JOS kernel to support external hardware interrupts from the clock hardware. + +###### Interrupt discipline + +External interrupts (i.e., device interrupts) are referred to as IRQs. There are 16 possible IRQs, numbered 0 through 15. The mapping from IRQ number to IDT entry is not fixed. `pic_init` in `picirq.c` maps IRQs 0-15 to IDT entries `IRQ_OFFSET` through `IRQ_OFFSET+15`. + +In `inc/trap.h`, `IRQ_OFFSET` is defined to be decimal 32. Thus the IDT entries 32-47 correspond to the IRQs 0-15. For example, the clock interrupt is IRQ 0. Thus, IDT[IRQ_OFFSET+0] (i.e., IDT[32]) contains the address of the clock's interrupt handler routine in the kernel. This `IRQ_OFFSET` is chosen so that the device interrupts do not overlap with the processor exceptions, which could obviously cause confusion. (In fact, in the early days of PCs running MS-DOS, the `IRQ_OFFSET` effectively _was_ zero, which indeed caused massive confusion between handling hardware interrupts and handling processor exceptions!) + +In JOS, we make a key simplification compared to xv6 Unix. External device interrupts are _always_ disabled when in the kernel (and, like xv6, enabled when in user space). External interrupts are controlled by the `FL_IF` flag bit of the `%eflags` register (see `inc/mmu.h`). When this bit is set, external interrupts are enabled. While the bit can be modified in several ways, because of our simplification, we will handle it solely through the process of saving and restoring `%eflags` register as we enter and leave user mode. + +You will have to ensure that the `FL_IF` flag is set in user environments when they run so that when an interrupt arrives, it gets passed through to the processor and handled by your interrupt code. Otherwise, interrupts are _masked_ , or ignored until interrupts are re-enabled. We masked interrupts with the very first instruction of the bootloader, and so far we have never gotten around to re-enabling them. + +``` +Exercise 13. 
Modify `kern/trapentry.S` and `kern/trap.c` to initialize the appropriate entries in the IDT and provide handlers for IRQs 0 through 15. Then modify the code in `env_alloc()` in `kern/env.c` to ensure that user environments are always run with interrupts enabled.

Also uncomment the `sti` instruction in `sched_halt()` so that idle CPUs unmask interrupts.

The processor never pushes an error code when invoking a hardware interrupt handler. You might want to re-read section 9.2 of the [80386 Reference Manual][2], or section 5.8 of the [IA-32 Intel Architecture Software Developer's Manual, Volume 3][3], at this time.

After doing this exercise, if you run your kernel with any test program that runs for a non-trivial length of time (e.g., `spin`), you should see the kernel print trap frames for hardware interrupts. While interrupts are now enabled in the processor, JOS isn't yet handling them, so you should see it misattribute each interrupt to the currently running user environment and destroy it. Eventually it should run out of environments to destroy and drop into the monitor.
```

###### Handling Clock Interrupts

In the `user/spin` program, after the child environment was first run, it just spun in a loop, and the kernel never got control back. We need to program the hardware to generate clock interrupts periodically, which will force control back to the kernel where we can switch control to a different user environment.

The calls to `lapic_init` and `pic_init` (from `i386_init` in `init.c`), which we have written for you, set up the clock and the interrupt controller to generate interrupts. You now need to write the code to handle these interrupts.

```
Exercise 14. Modify the kernel's `trap_dispatch()` function so that it calls `sched_yield()` to find and run a different environment whenever a clock interrupt takes place.

You should now be able to get the `user/spin` test to work: the parent environment should fork off the child, `sys_yield()` to it a couple times but in each case regain control of the CPU after one time slice, and finally kill the child environment and terminate gracefully.
```

This is a great time to do some _regression testing_. Make sure that you haven't broken any earlier part of the lab that used to work (e.g., `forktree`) by enabling interrupts. Also, try running with multiple CPUs using make CPUS=2 _target_. You should also be able to pass `stresssched` now. Run make grade to see for sure. You should now get a total score of 65/80 points on this lab.

##### Inter-Process communication (IPC)

(Technically in JOS this is "inter-environment communication" or "IEC", but everyone else calls it IPC, so we'll use the standard term.)

We've been focusing on the isolation aspects of the operating system, the ways it provides the illusion that each program has a machine all to itself. Another important service of an operating system is to allow programs to communicate with each other when they want to. It can be quite powerful to let programs interact with other programs. The Unix pipe model is the canonical example.

There are many models for interprocess communication. Even today there are still debates about which models are best. We won't get into that debate. Instead, we'll implement a simple IPC mechanism and then try it out.

###### IPC in JOS

You will implement a few additional JOS kernel system calls that collectively provide a simple interprocess communication mechanism.
You will implement two system calls, `sys_ipc_recv` and `sys_ipc_try_send`. Then you will implement two library wrappers `ipc_recv` and `ipc_send`. + +The "messages" that user environments can send to each other using JOS's IPC mechanism consist of two components: a single 32-bit value, and optionally a single page mapping. Allowing environments to pass page mappings in messages provides an efficient way to transfer more data than will fit into a single 32-bit integer, and also allows environments to set up shared memory arrangements easily. + +###### Sending and Receiving Messages + +To receive a message, an environment calls `sys_ipc_recv`. This system call de-schedules the current environment and does not run it again until a message has been received. When an environment is waiting to receive a message, _any_ other environment can send it a message - not just a particular environment, and not just environments that have a parent/child arrangement with the receiving environment. In other words, the permission checking that you implemented in Part A will not apply to IPC, because the IPC system calls are carefully designed so as to be "safe": an environment cannot cause another environment to malfunction simply by sending it messages (unless the target environment is also buggy). + +To try to send a value, an environment calls `sys_ipc_try_send` with both the receiver's environment id and the value to be sent. If the named environment is actually receiving (it has called `sys_ipc_recv` and not gotten a value yet), then the send delivers the message and returns 0. Otherwise the send returns `-E_IPC_NOT_RECV` to indicate that the target environment is not currently expecting to receive a value. + +A library function `ipc_recv` in user space will take care of calling `sys_ipc_recv` and then looking up the information about the received values in the current environment's `struct Env`. + +Similarly, a library function `ipc_send` will take care of repeatedly calling `sys_ipc_try_send` until the send succeeds. + +###### Transferring Pages + +When an environment calls `sys_ipc_recv` with a valid `dstva` parameter (below `UTOP`), the environment is stating that it is willing to receive a page mapping. If the sender sends a page, then that page should be mapped at `dstva` in the receiver's address space. If the receiver already had a page mapped at `dstva`, then that previous page is unmapped. + +When an environment calls `sys_ipc_try_send` with a valid `srcva` (below `UTOP`), it means the sender wants to send the page currently mapped at `srcva` to the receiver, with permissions `perm`. After a successful IPC, the sender keeps its original mapping for the page at `srcva` in its address space, but the receiver also obtains a mapping for this same physical page at the `dstva` originally specified by the receiver, in the receiver's address space. As a result this page becomes shared between the sender and receiver. + +If either the sender or the receiver does not indicate that a page should be transferred, then no page is transferred. After any IPC the kernel sets the new field `env_ipc_perm` in the receiver's `Env` structure to the permissions of the page received, or zero if no page was received. + +###### Implementing IPC + +``` +Exercise 15. Implement `sys_ipc_recv` and `sys_ipc_try_send` in `kern/syscall.c`. Read the comments on both before implementing them, since they have to work together. 
When you call `envid2env` in these routines, you should set the `checkperm` flag to 0, meaning that any environment is allowed to send IPC messages to any other environment, and the kernel does no special permission checking other than verifying that the target envid is valid.

Then implement the `ipc_recv` and `ipc_send` functions in `lib/ipc.c`.

Use the `user/pingpong` and `user/primes` programs to test your IPC mechanism. `user/primes` will generate a new environment for each prime number until JOS runs out of environments. You might find it interesting to read `user/primes.c` to see all the forking and IPC going on behind the scenes.
```

```
Challenge! Why does `ipc_send` have to loop? Change the system call interface so it doesn't have to. Make sure you can handle multiple environments trying to send to one environment at the same time.
```

```
Challenge! The prime sieve is only one neat use of message passing between a large number of concurrent programs. Read C. A. R. Hoare, "Communicating Sequential Processes," _Communications of the ACM_ 21(8) (August 1978), 666-677, and implement the matrix multiplication example.
```

```
Challenge! One of the most impressive examples of the power of message passing is Doug McIlroy's power series calculator, described in [M. Douglas McIlroy, "Squinting at Power Series," _Software--Practice and Experience_, 20(7) (July 1990), 661-683][4]. Implement his power series calculator and compute the power series for _sin(x + x^3)_.
```

```
Challenge! Make JOS's IPC mechanism more efficient by applying some of the techniques from Liedtke's paper, [Improving IPC by Kernel Design][5], or any other tricks you may think of. Feel free to modify the kernel's system call API for this purpose, as long as your code is backwards compatible with what our grading scripts expect.
```

**This ends part C.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab4.txt`.

Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab4.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 4', then make handin and follow the directions.
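
As a closing illustration of how the IPC pieces described above fit together, here is roughly the shape the `ipc_send` wrapper can take. This is a sketch of one reasonable approach rather than the required solution; in particular, passing an address at or above `UTOP` to mean "no page" is just one convention you could choose.

```
#include <inc/lib.h>	// sys_ipc_try_send, sys_yield, panic, UTOP

void
ipc_send(envid_t to_env, uint32_t val, void *pg, int perm)
{
	int r;

	// An unmappable address (>= UTOP) tells the kernel that no page
	// accompanies this message.
	if (pg == NULL)
		pg = (void *) UTOP;

	// sys_ipc_try_send never blocks, so keep retrying until the
	// receiver is actually waiting in sys_ipc_recv, yielding the CPU
	// between attempts instead of spinning hot.
	while ((r = sys_ipc_try_send(to_env, val, pg, perm)) == -E_IPC_NOT_RECV)
		sys_yield();

	if (r < 0)
		panic("ipc_send: %e", r);
}
```

This busy-wait loop is exactly what the first challenge problem above invites you to design away.
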
+ +-------------------------------------------------------------------------------- + +via: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/ + +作者:[csail.mit][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://pdos.csail.mit.edu +[b]: https://github.com/lujun9972 +[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/uvpt.html +[2]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/i386/toc.htm +[3]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/ia32/IA32-3A.pdf +[4]: https://swtch.com/~rsc/thread/squint.pdf +[5]: http://dl.acm.org/citation.cfm?id=168633 From 1320bb4d156696eccf7af7c00be71afbdc9aa38e Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 16 Oct 2018 09:03:56 +0800 Subject: [PATCH 034/140] translated --- ...files with ls at the Linux command line.md | 75 ------------------- ...files with ls at the Linux command line.md | 73 ++++++++++++++++++ 2 files changed, 73 insertions(+), 75 deletions(-) delete mode 100644 sources/tech/20181003 Tips for listing files with ls at the Linux command line.md create mode 100644 translated/tech/20181003 Tips for listing files with ls at the Linux command line.md diff --git a/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md b/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md deleted file mode 100644 index fda48f1622..0000000000 --- a/sources/tech/20181003 Tips for listing files with ls at the Linux command line.md +++ /dev/null @@ -1,75 +0,0 @@ -translating---geekpi - -Tips for listing files with ls at the Linux command line -====== -Learn some of the Linux 'ls' command's most useful variations. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx) - -One of the first commands I learned in Linux was `ls`. Knowing what’s in a directory where a file on your system resides is important. Being able to see and modify not just some but all of the files is also important. - -My first LInux cheat sheet was the [One Page Linux Manual][1] , which was released in1999 and became my go-to reference. I taped it over my desk and referred to it often as I began to explore Linux. Listing files with `ls -l` is introduced on the first page, at the bottom of the first column. - -Later, I would learn other iterations of this most basic command. Through the `ls` command, I began to learn about the complexity of the Linux file permissions and what was mine and what required root or sudo permission to change. I became very comfortable on the command line over time, and while I still use `ls -l` to find files in the directory, I frequently use `ls -al` so I can see hidden files that might need to be changed, like configuration files. - -According to an article by Eric Fischer about the `ls` command in the [Linux Documentation Project][2], the command's roots go back to the `listf` command on MIT’s Compatible Time Sharing System in 1961. When CTSS was replaced by [Multics][3], the command became `list`, with switches like `list -all`. According to [Wikipedia][4], `ls` appeared in the original version of AT&T Unix. The `ls` command we use today on Linux systems comes from the [GNU Core Utilities][5]. - -Most of the time, I use only a couple of iterations of the command. 
Looking inside a directory with `ls` or `ls -al` is how I generally use the command, but there are many other options that you should be familiar with. - -`$ ls -l` provides a simple list of the directory: - -![](https://opensource.com/sites/default/files/uploads/linux_ls_1_0.png) - -Using the man pages of my Fedora 28 system, I find that there are many other options to `ls`, all of which provide interesting and useful information about the Linux file system. By entering `man ls` at the command prompt, we can begin to explore some of the other options: - -![](https://opensource.com/sites/default/files/uploads/linux_ls_2_0.png) - -To sort the directory by file sizes, use `ls -lS`: - -![](https://opensource.com/sites/default/files/uploads/linux_ls_3_0.png) - -To list the contents in reverse order, use `ls -lr`: - -![](https://opensource.com/sites/default/files/uploads/linux_ls_4.png) - -To list contents by columns, use `ls -c`: - -![](https://opensource.com/sites/default/files/uploads/linux_ls_5.png) - -`ls -al` provides a list of all the files in the same directory: - -![](https://opensource.com/sites/default/files/uploads/linux_ls_6.png) - -Here are some additional options that I find useful and interesting: - - * List only the .txt files in the directory: `ls *.txt` - * List by file size: `ls -s` - * Sort by time and date: `ls -d` - * Sort by extension: `ls -X` - * Sort by file size: `ls -S` - * Long format with file size: `ls -ls` - * List only the .txt files in a directory: `ls *.txt` - - - -To generate a directory list in the specified format and send it to a file for later viewing, enter `ls -al > mydirectorylist`. Finally, one of the more exotic commands I found is `ls -R`, which provides a recursive list of all the directories on your computer and their contents. - -For a complete list of the all the iterations of the `ls` command, refer to the [GNU Core Utilities][6]. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/ls-command - -作者:[Don Watkins][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/don-watkins -[1]: http://hackerspace.cs.rutgers.edu/library/General/One_Page_Linux_Manual.pdf -[2]: http://www.tldp.org/LDP/LG/issue48/fischer.html -[3]: https://en.wikipedia.org/wiki/Multics -[4]: https://en.wikipedia.org/wiki/Ls -[5]: http://www.gnu.org/s/coreutils/ -[6]: https://www.gnu.org/software/coreutils/manual/html_node/ls-invocation.html#ls-invocation diff --git a/translated/tech/20181003 Tips for listing files with ls at the Linux command line.md b/translated/tech/20181003 Tips for listing files with ls at the Linux command line.md new file mode 100644 index 0000000000..b0fe9643da --- /dev/null +++ b/translated/tech/20181003 Tips for listing files with ls at the Linux command line.md @@ -0,0 +1,73 @@ +在 Linux 命令行中使用 ls 列出文件的提示 +====== +学习一些 Linux "ls" 命令最有用的变化。 +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx) + +我在 Linux 中最先学到的命令之一就是 `ls`。了解系统中文件所在目录中的内容非常重要。能够查看和修改不仅仅是一些文件还要所有文件也很重要。 + +我的第一个 Linux 备忘录是[单页 Linux 手册][1],它于 1999 年发布,它成为我的首选参考资料。当我开始探索 Linux 时,我把它贴在桌子上并经常参考它。它的第一页第一列的底部有使用 `ls -l` 列出文件的命令。 + +之后,我将学习这个最基本命令的其他迭代。通过 `ls` 命令,我开始了解 Linux 文件权限的复杂性以及哪些是我的文件,哪些需要 root 或者 root 权限来修改。随着时间的推移,我习惯使用命令行,虽然我仍然使用 `ls -l` 来查找目录中的文件,但我经常使用 `ls -al`,这样我就可以看到可能需要更改的隐藏文件,比如那些配置文件。 + +根据 Eric Fischer 在[Linux 文档项目][2]中关于 `ls` 命令的文章,该命令的根源可以追溯到 1961年 MIT 的相容分时系统 (CTSS +) 上的 `listf` 命令。当 CTSS 被 [Multics][3] 代替时,命令变为 `list`,并有像 `list -all` 的开关。根据[维基百科][4],“ls” 出现在 AT&T Unix 的原始版本中。我们今天在 Linux 系统上使用的 `ls` 命令来自 [GNU Core Utilities][5]。 + +大多数时候,我只使用几个迭代的命令。使用 `ls` 或 `ls -al` 查看目录内部是我通常使用该命令的方法,但是你还应该熟悉许多其他选项。 + +`$ ls -l` 提供了一个简单的目录列表: + +![](https://opensource.com/sites/default/files/uploads/linux_ls_1_0.png) + +使用我的 Fedora 28 系统中的手册页,我发现 `ls` 还有许多其他选项,所有这些选项都提供了有关 Linux 文件系统的有趣且有用的信息。通过在命令提示符下输入 `man ls`,我们可以开始探索其他一些选项: + +![](https://opensource.com/sites/default/files/uploads/linux_ls_2_0.png) + +要按文件大小对目录进行排序,请使用 `ls -lS`: + +![](https://opensource.com/sites/default/files/uploads/linux_ls_3_0.png) + +要以相反的顺序列出内容,请使用 `ls -lr`: + +![](https://opensource.com/sites/default/files/uploads/linux_ls_4.png) + +要按列列出内容,请使用 `ls -c`: + +![](https://opensource.com/sites/default/files/uploads/linux_ls_5.png) + +`ls -al` 提供了同一目录中所有文件的列表: + +![](https://opensource.com/sites/default/files/uploads/linux_ls_6.png) + +以下是我认为有用且有趣的一些其他选项: + + * 仅列出目录中的 .txt 文件:`ls * .txt` +  * 按文件大小列出:`ls -s` +  * 按时间和日期排序:`ls -d` +  * 按扩展名排序:`ls -X` +  * 按文件大小排序:`ls -S` +  * 带有文件大小的长格式:`ls -ls` + + + +要生成指定格式的目录列表并将其定向到文件供以后查看,请输入 `ls -al> mydirectorylist`。最后,我找到的一个更奇特的命令是 `ls -R`,它提供了计算机上所有目录及其内容的递归列表。 + +有关 `ls` 命令的所有迭代的完整列表,请参阅 [GNU Core Utilities][6]。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/ls-command + +作者:[Don Watkins][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins +[1]: 
http://hackerspace.cs.rutgers.edu/library/General/One_Page_Linux_Manual.pdf
[2]: http://www.tldp.org/LDP/LG/issue48/fischer.html
[3]: https://en.wikipedia.org/wiki/Multics
[4]: https://en.wikipedia.org/wiki/Ls
[5]: http://www.gnu.org/s/coreutils/
[6]: https://www.gnu.org/software/coreutils/manual/html_node/ls-invocation.html#ls-invocation

From 9e3e000850e898efde26b06da0cacf868042e848 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 16 Oct 2018 09:07:03 +0800
Subject: [PATCH 035/140] translating

---
 .../tech/20181013 How to Install GRUB on Arch Linux (UEFI).md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md b/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md
index 97cb5e0362..e456c1ee0e 100644
--- a/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md
+++ b/sources/tech/20181013 How to Install GRUB on Arch Linux (UEFI).md
@@ -1,3 +1,5 @@
+translating---geekpi
+
 How to Install GRUB on Arch Linux (UEFI)
 ======

From 7c430e0a34873d2f7353f5557aac57f02311dcd5 Mon Sep 17 00:00:00 2001
From: darksun
Date: Tue, 16 Oct 2018 09:08:29 +0800
Subject: =?UTF-8?q?=E9=80=89=E9=A2=98:=20Lab=205:=20File?=
 =?UTF-8?q?=20system,=20Spawn=20and=20Shell?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...016 Lab 5- File system, Spawn and Shell.md | 345 ++++++++++++++++++
 1 file changed, 345 insertions(+)
 create mode 100644 sources/tech/20181016 Lab 5- File system, Spawn and Shell.md

Lab 5: File system, Spawn and Shell
======

**Due Thursday, November 15, 2018**

### Introduction

In this lab, you will implement `spawn`, a library call that loads and runs on-disk executables. You will then flesh out your kernel and library operating system enough to run a shell on the console. These features need a file system, and this lab introduces a simple read/write file system.

#### Getting Started

Use Git to fetch the latest version of the course repository, and then create a local branch called `lab5` based on our lab5 branch, `origin/lab5`:

```
 athena% cd ~/6.828/lab
 athena% add git
 athena% git pull
 Already up-to-date.
 athena% git checkout -b lab5 origin/lab5
 Branch lab5 set up to track remote branch refs/remotes/origin/lab5.
 Switched to a new branch "lab5"
 athena% git merge lab4
 Merge made by recursive.
 .....
 athena%
```

The main new component for this part of the lab is the file system environment, located in the new `fs` directory. Scan through all the files in this directory to get a feel for what all is new. Also, there are some new file system-related source files in the `user` and `lib` directories:

| fs/fs.c | Code that manipulates the file system's on-disk structure. |
| fs/bc.c | A simple block cache built on top of our user-level page fault handling facility. |
| fs/ide.c | Minimal PIO-based (non-interrupt-driven) IDE driver code. |
| fs/serv.c | The file system server that interacts with client environments using file system IPCs. |
| lib/fd.c | Code that implements the general UNIX-like file descriptor interface. |
| lib/file.c | The driver for on-disk file type, implemented as a file system IPC client. |
| lib/console.c | The driver for console input/output file type. |
| lib/spawn.c | Code skeleton of the spawn library call. |

You should run the pingpong, primes, and forktree test cases from lab 4 again after merging in the new lab 5 code. You will need to comment out the `ENV_CREATE(fs_fs)` line in `kern/init.c` because `fs/fs.c` tries to do some I/O, which JOS does not allow yet. Similarly, temporarily comment out the call to `close_all()` in `lib/exit.c`; this function calls subroutines that you will implement later in the lab, and therefore will panic if called. If your lab 4 code doesn't contain any bugs, the test cases should run fine. Don't proceed until they work. Don't forget to un-comment these lines when you start Exercise 1.

If they don't work, use git diff lab4 to review all the changes, making sure there isn't any code you wrote for lab4 (or before) missing from lab 5. Make sure that lab 4 still works.

#### Lab Requirements

As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. Additionally, you will need to write up brief answers to the questions posed in the lab and a short (e.g., one or two paragraph) description of what you did to solve your chosen challenge problem. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab5.txt` in the top level of your `lab` directory before handing in your work.

### File system preliminaries

The file system you will work with is much simpler than most "real" file systems including that of xv6 UNIX, but it is powerful enough to provide the basic features: creating, reading, writing, and deleting files organized in a hierarchical directory structure.

We are (for the moment anyway) developing only a single-user operating system, which provides protection sufficient to catch bugs but not to protect multiple mutually suspicious users from each other. Our file system therefore does not support the UNIX notions of file ownership or permissions. Our file system also currently does not support hard links, symbolic links, time stamps, or special device files like most UNIX file systems do.

### On-Disk File System Structure

Most UNIX file systems divide available disk space into two main types of regions: _inode_ regions and _data_ regions. UNIX file systems assign one _inode_ to each file in the file system; a file's inode holds critical meta-data about the file such as its `stat` attributes and pointers to its data blocks. The data regions are divided into much larger (typically 8KB or more) _data blocks_, within which the file system stores file data and directory meta-data. Directory entries contain file names and pointers to inodes; a file is said to be _hard-linked_ if multiple directory entries in the file system refer to that file's inode. Since our file system will not support hard links, we do not need this level of indirection and therefore can make a convenient simplification: our file system will not use inodes at all and instead will simply store all of a file's (or sub-directory's) meta-data within the (one and only) directory entry describing that file.

Both files and directories logically consist of a series of data blocks, which may be scattered throughout the disk much like the pages of an environment's virtual address space can be scattered throughout physical memory.
The file system environment hides the details of block layout, presenting interfaces for reading and writing sequences of bytes at arbitrary offsets within files. The file system environment handles all modifications to directories internally as a part of performing actions such as file creation and deletion. Our file system does allow user environments to _read_ directory meta-data directly (e.g., with `read`), which means that user environments can perform directory scanning operations themselves (e.g., to implement the `ls` program) rather than having to rely on additional special calls to the file system. The disadvantage of this approach to directory scanning, and the reason most modern UNIX variants discourage it, is that it makes application programs dependent on the format of directory meta-data, making it difficult to change the file system's internal layout without changing or at least recompiling application programs as well.

#### Sectors and Blocks

Most disks cannot perform reads and writes at byte granularity and instead perform reads and writes in units of _sectors_. In JOS, sectors are 512 bytes each. File systems actually allocate and use disk storage in units of _blocks_. Be wary of the distinction between the two terms: _sector size_ is a property of the disk hardware, whereas _block size_ is an aspect of the operating system using the disk. A file system's block size must be a multiple of the sector size of the underlying disk.

The UNIX xv6 file system uses a block size of 512 bytes, the same as the sector size of the underlying disk. Most modern file systems use a larger block size, however, because storage space has gotten much cheaper and it is more efficient to manage storage at larger granularities. Our file system will use a block size of 4096 bytes, conveniently matching the processor's page size.

#### Superblocks

![Disk layout][1]

File systems typically reserve certain disk blocks at "easy-to-find" locations on the disk (such as the very start or the very end) to hold meta-data describing properties of the file system as a whole, such as the block size, disk size, any meta-data required to find the root directory, the time the file system was last mounted, the time the file system was last checked for errors, and so on. These special blocks are called _superblocks_.

Our file system will have exactly one superblock, which will always be at block 1 on the disk. Its layout is defined by `struct Super` in `inc/fs.h`. Block 0 is typically reserved to hold boot loaders and partition tables, so file systems generally do not use the very first disk block. Many "real" file systems maintain multiple superblocks, replicated throughout several widely-spaced regions of the disk, so that if one of them is corrupted or the disk develops a media error in that region, the other superblocks can still be found and used to access the file system.

#### File Meta-data

![File structure][2]

The layout of the meta-data describing a file in our file system is described by `struct File` in `inc/fs.h`. This meta-data includes the file's name, size, type (regular file or directory), and pointers to the blocks comprising the file. As mentioned above, we do not have inodes, so this meta-data is stored in a directory entry on disk. Unlike in most "real" file systems, for simplicity we will use this one `File` structure to represent file meta-data as it appears _both on disk and in memory_.
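For concreteness, the structure looks roughly like this (reconstructed here for convenience; treat `inc/fs.h` itself as the authoritative definition):

```
// From inc/fs.h (paraphrased). The same 256-byte structure serves as
// both the on-disk directory entry and the in-memory file record, so
// it is packed and padded to a fixed size.
#define MAXNAMELEN	128		// max filename length, including null
#define NDIRECT		10		// number of direct block pointers

struct File {
	char f_name[MAXNAMELEN];	// filename
	off_t f_size;			// file size in bytes
	uint32_t f_type;		// regular file or directory
	uint32_t f_direct[NDIRECT];	// direct block numbers (0 = unallocated)
	uint32_t f_indirect;		// block number of the indirect block
	uint8_t f_pad[256 - MAXNAMELEN - 8 - 4*NDIRECT - 4]; // pad to 256 bytes
} __attribute__((packed));		// avoid any implicit compiler padding
```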
The `f_direct` array in `struct File` contains space to store the block numbers of the first 10 (`NDIRECT`) blocks of the file, which we call the file's _direct_ blocks. For small files up to 10*4096 = 40KB in size, this means that the block numbers of all of the file's blocks will fit directly within the `File` structure itself. For larger files, however, we need a place to hold the rest of the file's block numbers. For any file greater than 40KB in size, therefore, we allocate an additional disk block, called the file's _indirect block_, to hold up to 4096/4 = 1024 additional block numbers. Our file system therefore allows files to be up to 1034 blocks, or just over four megabytes, in size. To support larger files, "real" file systems typically support _double-_ and _triple-indirect blocks_ as well.

#### Directories versus Regular Files

A `File` structure in our file system can represent either a _regular_ file or a directory; these two types of "files" are distinguished by the `type` field in the `File` structure. The file system manages regular files and directory-files in exactly the same way, except that it does not interpret the contents of the data blocks associated with regular files at all, whereas the file system interprets the contents of a directory-file as a series of `File` structures describing the files and subdirectories within the directory.

The superblock in our file system contains a `File` structure (the `root` field in `struct Super`) that holds the meta-data for the file system's root directory. The contents of this directory-file are a sequence of `File` structures describing the files and directories located within the root directory of the file system. Any subdirectories in the root directory may in turn contain more `File` structures representing sub-subdirectories, and so on.

### The File System

The goal for this lab is not to have you implement the entire file system, but for you to implement only certain key components. In particular, you will be responsible for reading blocks into the block cache and flushing them back to disk; allocating disk blocks; mapping file offsets to disk blocks; and implementing read, write, and open in the IPC interface. Because you will not be implementing all of the file system yourself, it is very important that you familiarize yourself with the provided code and the various file system interfaces.

### Disk Access

The file system environment in our operating system needs to be able to access the disk, but we have not yet implemented any disk access functionality in our kernel. Instead of taking the conventional "monolithic" operating system strategy of adding an IDE disk driver to the kernel along with the necessary system calls to allow the file system to access it, we instead implement the IDE disk driver as part of the user-level file system environment. We will still need to modify the kernel slightly, in order to set things up so that the file system environment has the privileges it needs to implement disk access itself.

It is easy to implement disk access in user space this way as long as we rely on polling, "programmed I/O" (PIO)-based disk access and do not use disk interrupts. It is possible to implement interrupt-driven device drivers in user mode as well (the L3 and L4 kernels do this, for example), but it is more difficult since the kernel must field device interrupts and dispatch them to the correct user-mode environment.
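To make "programmed I/O" concrete, here is roughly the shape of the polling read path in the provided `fs/ide.c` (simplified and reconstructed from memory; `inb`, `outb`, and `insl` are JOS's port-I/O wrappers from `inc/x86.h`, and ports `0x1F0`-`0x1F7` are the standard primary-IDE register block):

```
// A condensed sketch of polling PIO sector reads, modeled on fs/ide.c
// (error handling and drive selection omitted).
#define SECTSIZE 512

static void
ide_wait_ready(void)
{
	// Spin until the drive clears BSY (bit 7) and raises DRDY (bit 6).
	while ((inb(0x1F7) & 0xC0) != 0x40)
		/* just poll; no interrupts involved */;
}

int
ide_read(uint32_t secno, void *dst, size_t nsecs)
{
	char *p = dst;

	ide_wait_ready();
	outb(0x1F2, nsecs);                          // sector count
	outb(0x1F3, secno & 0xFF);                   // LBA bits 0-7
	outb(0x1F4, (secno >> 8) & 0xFF);            // LBA bits 8-15
	outb(0x1F5, (secno >> 16) & 0xFF);           // LBA bits 16-23
	outb(0x1F6, 0xE0 | ((secno >> 24) & 0x0F));  // LBA bits 24-27, drive 0
	outb(0x1F7, 0x20);                           // command: READ SECTORS

	for (; nsecs > 0; nsecs--, p += SECTSIZE) {
		ide_wait_ready();                    // wait for this sector's data
		insl(0x1F0, p, SECTSIZE / 4);        // pull 128 dwords from the data port
	}
	return 0;
}
```

The IN and OUT instructions behind these wrappers are exactly what the IOPL mechanism described next gates.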
The x86 processor uses the IOPL bits in the EFLAGS register to determine whether protected-mode code is allowed to perform special device I/O instructions such as the IN and OUT instructions. Since all of the IDE disk registers we need to access are located in the x86's I/O space rather than being memory-mapped, giving "I/O privilege" to the file system environment is the only thing we need to do in order to allow the file system to access these registers. In effect, the IOPL bits in the EFLAGS register provide the kernel with a simple "all-or-nothing" method of controlling whether user-mode code can access I/O space. In our case, we want the file system environment to be able to access I/O space, but we do not want any other environments to be able to access I/O space at all.

```
Exercise 1. `i386_init` identifies the file system environment by passing the type `ENV_TYPE_FS` to your environment creation function, `env_create`. Modify `env_create` in `env.c` so that it gives the file system environment I/O privilege, but never gives that privilege to any other environment.

Make sure you can start the file system environment without causing a General Protection fault. You should pass the "fs i/o" test in `make grade`.
```

```
Question

 1. Do you have to do anything else to ensure that this I/O privilege setting is saved and restored properly when you subsequently switch from one environment to another? Why?
```

Note that the `GNUmakefile` file in this lab sets up QEMU to use the file `obj/kern/kernel.img` as the image for disk 0 (typically "Drive C" under DOS/Windows) as before, and to use the (new) file `obj/fs/fs.img` as the image for disk 1 ("Drive D"). In this lab our file system should only ever touch disk 1; disk 0 is used only to boot the kernel. If you manage to corrupt either disk image in some way, you can reset both of them to their original, "pristine" versions simply by typing:

```
 $ rm obj/kern/kernel.img obj/fs/fs.img
 $ make
```

or by doing:

```
 $ make clean
 $ make
```

Challenge! Implement interrupt-driven IDE disk access, with or without DMA. You can decide whether to move the device driver into the kernel, keep it in user space along with the file system, or even (if you really want to get into the micro-kernel spirit) move it into a separate environment of its own.

### The Block Cache

In our file system, we will implement a simple "buffer cache" (really just a block cache) with the help of the processor's virtual memory system. The code for the block cache is in `fs/bc.c`.

Our file system will be limited to handling disks of size 3GB or less. We reserve a large, fixed 3GB region of the file system environment's address space, from 0x10000000 (`DISKMAP`) up to 0xD0000000 (`DISKMAP+DISKMAX`), as a "memory mapped" version of the disk. For example, disk block 0 is mapped at virtual address 0x10000000, disk block 1 is mapped at virtual address 0x10001000, and so on. The `diskaddr` function in `fs/bc.c` implements this translation from disk block numbers to virtual addresses (along with some sanity checking).

Since our file system environment has its own virtual address space independent of the virtual address spaces of all other environments in the system, and the only thing the file system environment needs to do is to implement file access, it is reasonable to reserve most of the file system environment's address space in this way. It would be awkward for a real file system implementation on a 32-bit machine to do this since modern disks are larger than 3GB. Such a buffer cache management approach may still be reasonable on a machine with a 64-bit address space.
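Returning to `diskaddr`: the translation itself is just base-plus-offset arithmetic with sanity checks, roughly (reconstructed from the provided `fs/bc.c`, so details are approximate):

```
// Translate a disk block number into its virtual address inside the
// memory-mapped disk region (a sketch of the provided helper in fs/bc.c).
void *
diskaddr(uint32_t blockno)
{
	if (blockno == 0 || (super && blockno >= super->s_nblocks))
		panic("bad block number %08x in diskaddr", blockno);
	return (char *) (DISKMAP + blockno * BLKSIZE);
}
```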
Of course, it would take a long time to read the entire disk into memory, so instead we'll implement a form of _demand paging_, wherein we only allocate pages in the disk map region and read the corresponding block from the disk in response to a page fault in this region. This way, we can pretend that the entire disk is in memory.

```
Exercise 2. Implement the `bc_pgfault` and `flush_block` functions in `fs/bc.c`. `bc_pgfault` is a page fault handler, just like the one you wrote in the previous lab for copy-on-write fork, except that its job is to load pages in from the disk in response to a page fault. When writing this, keep in mind that (1) `addr` may not be aligned to a block boundary and (2) `ide_read` operates in sectors, not blocks.

The `flush_block` function should write a block out to disk _if necessary_. `flush_block` shouldn't do anything if the block isn't even in the block cache (that is, the page isn't mapped) or if it's not dirty. We will use the VM hardware to keep track of whether a disk block has been modified since it was last read from or written to disk. To see whether a block needs writing, we can just look to see if the `PTE_D` "dirty" bit is set in the `uvpt` entry. (The `PTE_D` bit is set by the processor in response to a write to that page; see 5.2.4.3 in [chapter 5][3] of the 386 reference manual.) After writing the block to disk, `flush_block` should clear the `PTE_D` bit using `sys_page_map`.

Use `make grade` to test your code. Your code should pass "check_bc", "check_super", and "check_bitmap".
```

The `fs_init` function in `fs/fs.c` is a prime example of how to use the block cache. After initializing the block cache, it simply stores pointers into the disk map region in the `super` global variable. After this point, we can simply read from the `super` structure as if it were in memory, and our page fault handler will read the corresponding blocks from disk as necessary.

```
Challenge! The block cache has no eviction policy. Once a block gets faulted into it, it never gets removed and will remain in memory forevermore. Add eviction to the buffer cache. Using the `PTE_A` "accessed" bits in the page tables, which the hardware sets on any access to a page, you can track approximate usage of disk blocks without the need to modify every place in the code that accesses the disk map region. Be careful with dirty blocks.
```

### The Block Bitmap

After `fs_init` sets the `bitmap` pointer, we can treat `bitmap` as a packed array of bits, one for each block on the disk. See, for example, `block_is_free`, which simply checks whether a given block is marked free in the bitmap.

```
Exercise 3. Use `free_block` as a model to implement `alloc_block` in `fs/fs.c`, which should find a free disk block in the bitmap, mark it used, and return the number of that block. When you allocate a block, you should immediately flush the changed bitmap block to disk with `flush_block`, to help file system consistency.

Use `make grade` to test your code. Your code should now pass "alloc_block".
```
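For reference, the bitmap encoding that `alloc_block` manipulates is the same one `block_is_free` already consults; that provided helper looks roughly like this (reconstructed from `fs/fs.c`, so details are approximate):

```
// Is block number 'blockno' free? (A sketch of the provided helper in
// fs/fs.c.) Each 32-bit word of the bitmap covers 32 blocks; a 1 bit
// means the corresponding block is free.
bool
block_is_free(uint32_t blockno)
{
	if (super == 0 || blockno >= super->s_nblocks)
		return 0;
	if (bitmap[blockno / 32] & (1 << (blockno % 32)))
		return 1;
	return 0;
}
```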
+``` + +### File Operations + +We have provided a variety of functions in `fs/fs.c` to implement the basic facilities you will need to interpret and manage `File` structures, scan and manage the entries of directory-files, and walk the file system from the root to resolve an absolute pathname. Read through _all_ of the code in `fs/fs.c` and make sure you understand what each function does before proceeding. + +``` +Exercise 4. Implement `file_block_walk` and `file_get_block`. `file_block_walk` maps from a block offset within a file to the pointer for that block in the `struct File` or the indirect block, very much like what `pgdir_walk` did for page tables. `file_get_block` goes one step further and maps to the actual disk block, allocating a new one if necessary. + +Use make grade to test your code. Your code should pass "file_open", "file_get_block", and "file_flush/file_truncated/file rewrite", and "testfile". +``` + +`file_block_walk` and `file_get_block` are the workhorses of the file system. For example, `file_read` and `file_write` are little more than the bookkeeping atop `file_get_block` necessary to copy bytes between scattered blocks and a sequential buffer. + +``` +Challenge! The file system is likely to be corrupted if it gets interrupted in the middle of an operation (for example, by a crash or a reboot). Implement soft updates or journalling to make the file system crash-resilient and demonstrate some situation where the old file system would get corrupted, but yours doesn't. +``` + +### The file system interface + +Now that we have the necessary functionality within the file system environment itself, we must make it accessible to other environments that wish to use the file system. Since other environments can't directly call functions in the file system environment, we'll expose access to the file system environment via a _remote procedure call_ , or RPC, abstraction, built atop JOS's IPC mechanism. Graphically, here's what a call to the file system server (say, read) looks like + +``` + Regular env FS env + +---------------+ +---------------+ + | read | | file_read | + | (lib/fd.c) | | (fs/fs.c) | +...|.......|.......|...|.......^.......|............... + | v | | | | RPC mechanism + | devfile_read | | serve_read | + | (lib/file.c) | | (fs/serv.c) | + | | | | ^ | + | v | | | | + | fsipc | | serve | + | (lib/file.c) | | (fs/serv.c) | + | | | | ^ | + | v | | | | + | ipc_send | | ipc_recv | + | | | | ^ | + +-------|-------+ +-------|-------+ + | | + +-------------------+ + +``` + +Everything below the dotted line is simply the mechanics of getting a read request from the regular environment to the file system environment. Starting at the beginning, `read` (which we provide) works on any file descriptor and simply dispatches to the appropriate device read function, in this case `devfile_read` (we can have more device types, like pipes). `devfile_read` implements `read` specifically for on-disk files. This and the other `devfile_*` functions in `lib/file.c` implement the client side of the FS operations and all work in roughly the same way, bundling up arguments in a request structure, calling `fsipc` to send the IPC request, and unpacking and returning the results. The `fsipc` function simply handles the common details of sending a request to the server and receiving the reply. + +The file system server code can be found in `fs/serv.c`. 
The file system server code can be found in `fs/serv.c`. It loops in the `serve` function, endlessly receiving a request over IPC, dispatching that request to the appropriate handler function, and sending the result back via IPC. In the read example, `serve` will dispatch to `serve_read`, which will take care of the IPC details specific to read requests, such as unpacking the request structure, and finally call `file_read` to actually perform the file read.

Recall that JOS's IPC mechanism lets an environment send a single 32-bit number and, optionally, share a page. To send a request from the client to the server, we use the 32-bit number for the request type (the file system server RPCs are numbered, just like how syscalls were numbered) and store the arguments to the request in a `union Fsipc` on the page shared via the IPC. On the client side, we always share the page at `fsipcbuf`; on the server side, we map the incoming request page at `fsreq` (`0x0ffff000`).

The server also sends the response back via IPC. We use the 32-bit number for the function's return code. For most RPCs, this is all they return. `FSREQ_READ` and `FSREQ_STAT` also return data, which they simply write to the page that the client sent its request on. There's no need to send this page in the response IPC, since the client shared it with the file system server in the first place. Also, in its response, `FSREQ_OPEN` shares with the client a new "Fd page". We'll return to the file descriptor page shortly.

```
Exercise 5. Implement `serve_read` in `fs/serv.c`.

`serve_read`'s heavy lifting will be done by the already-implemented `file_read` in `fs/fs.c` (which, in turn, is just a bunch of calls to `file_get_block`). `serve_read` just has to provide the RPC interface for file reading. Look at the comments and code in `serve_set_size` to get a general idea of how the server functions should be structured.

Use `make grade` to test your code. Your code should pass "serve_open/file_stat/file_close" and "file_read" for a score of 70/150.
```

```
Exercise 6. Implement `serve_write` in `fs/serv.c` and `devfile_write` in `lib/file.c`.

Use `make grade` to test your code. Your code should pass "file_write", "file_read after file_write", "open", and "large file" for a score of 90/150.
```

### Spawning Processes

We have given you the code for `spawn` (see `lib/spawn.c`), which creates a new environment, loads a program image from the file system into it, and then starts the child environment running this program. The parent process then continues running independently of the child. The `spawn` function effectively acts like a `fork` in UNIX followed by an immediate `exec` in the child process.

We implemented `spawn` rather than a UNIX-style `exec` because `spawn` is easier to implement from user space in "exokernel fashion", without special help from the kernel. Think about what you would have to do in order to implement `exec` in user space, and be sure you understand why it is harder.

```
Exercise 7. `spawn` relies on the new syscall `sys_env_set_trapframe` to initialize the state of the newly created environment. Implement `sys_env_set_trapframe` in `kern/syscall.c` (don't forget to dispatch the new system call in `syscall()`).

Test your code by running the `user/spawnhello` program from `kern/init.c`, which will attempt to spawn `/hello` from the file system.

Use `make grade` to test your code.
```
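For orientation, `user/spawnhello` is just a thin wrapper around the `spawnl` convenience routine; it looks roughly like this (reconstructed from memory, so treat the details as approximate):

```
// user/spawnhello.c, approximately: spawn /hello from the file system
// while the parent environment keeps running.
#include <inc/lib.h>

void
umain(int argc, char **argv)
{
	int r;

	cprintf("i am parent environment %08x\n", thisenv->env_id);
	if ((r = spawnl("hello", "hello", 0)) < 0)
		panic("spawnl(hello) failed: %e", r);
}
```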
```
Challenge! Implement Unix-style `exec`.
```

```
Challenge! Implement `mmap`-style memory-mapped files and modify `spawn` to map pages directly from the ELF image when possible.
```

### Sharing library state across fork and spawn

UNIX file descriptors are a general notion that also encompasses pipes, console I/O, etc. In JOS, each of these device types has a corresponding `struct Dev`, with pointers to the functions that implement read/write/etc. for that device type. `lib/fd.c` implements the general UNIX-like file descriptor interface on top of this. Each `struct Fd` indicates its device type, and most of the functions in `lib/fd.c` simply dispatch operations to functions in the appropriate `struct Dev`.

`lib/fd.c` also maintains the _file descriptor table_ region in each application environment's address space, starting at `FDTABLE`. This area reserves a page's worth (4KB) of address space for each of the up to `MAXFD` (currently 32) file descriptors the application can have open at once. At any given time, a particular file descriptor table page is mapped if and only if the corresponding file descriptor is in use. Each file descriptor also has an optional "data page" in the region starting at `FILEDATA`, which devices can use if they choose.

We would like to share file descriptor state across `fork` and `spawn`, but file descriptor state is kept in user-space memory. Right now, on `fork`, the memory will be marked copy-on-write, so the state will be duplicated rather than shared. (This means environments won't be able to seek in files they didn't open themselves and that pipes won't work across a fork.) On `spawn`, the memory will be left behind, not copied at all. (Effectively, the spawned environment starts with no open file descriptors.)

We will change `fork` to know that certain regions of memory are used by the "library operating system" and should always be shared. Rather than hard-code a list of regions somewhere, we will set an otherwise-unused bit in the page table entries (just like we did with the `PTE_COW` bit in `fork`).

We have defined a new `PTE_SHARE` bit in `inc/lib.h`. This bit is one of the three PTE bits that are marked "available for software use" in the Intel and AMD manuals. We will establish the convention that if a page table entry has this bit set, the PTE should be copied directly from parent to child in both `fork` and `spawn`. Note that this is different from marking it copy-on-write: as described in the first paragraph, we want to make sure to _share_ updates to the page.

```
Exercise 8. Change `duppage` in `lib/fork.c` to follow the new convention. If the page table entry has the `PTE_SHARE` bit set, just copy the mapping directly. (You should use `PTE_SYSCALL`, not `0xfff`, to mask out the relevant bits from the page table entry. `0xfff` picks up the accessed and dirty bits as well.)

Likewise, implement `copy_shared_pages` in `lib/spawn.c`. It should loop through all page table entries in the current process (just like `fork` did), copying any page mappings that have the `PTE_SHARE` bit set into the child process.
```

Use `make run-testpteshare` to check that your code is behaving properly. You should see lines that say "`fork handles PTE_SHARE right`" and "`spawn handles PTE_SHARE right`".

Use `make run-testfdsharing` to check that file descriptors are shared properly. You should see lines that say "`read in child succeeded`" and "`read in parent succeeded`".
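If it helps to see the shape of the change, the `PTE_SHARE` case in `duppage` can be as small as the sketch below; how it meshes with your copy-on-write logic depends on your lab 4 code, so this is illustrative only.

```
// A sketch of duppage() in lib/fork.c with the PTE_SHARE case added.
// Only the first branch is new; the rest is your existing lab 4 logic.
static int
duppage(envid_t envid, unsigned pn)
{
	void *addr = (void *) (pn * PGSIZE);
	pte_t pte = uvpt[pn];

	if (pte & PTE_SHARE) {
		// Shared page: duplicate the mapping itself, keeping the
		// original permissions. PTE_SYSCALL masks the PTE down to
		// the bits sys_page_map accepts; 0xfff would drag in the
		// accessed and dirty bits as well.
		return sys_page_map(0, addr, envid, addr, pte & PTE_SYSCALL);
	}

	// ... existing lab 4 logic: map copy-on-write if PTE_W or
	// PTE_COW is set, otherwise map read-only ...
	return 0;
}
```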
### The keyboard interface

For the shell to work, we need a way to type at it. QEMU has been displaying output we write to the CGA display and the serial port, but so far we've only taken input while in the kernel monitor. In QEMU, input typed in the graphical window appears as input from the keyboard to JOS, while input typed to the console appears as characters on the serial port. `kern/console.c` already contains the keyboard and serial drivers that have been used by the kernel monitor since lab 1, but now you need to attach these to the rest of the system.

```
Exercise 9. In your `kern/trap.c`, call `kbd_intr` to handle trap `IRQ_OFFSET+IRQ_KBD` and `serial_intr` to handle trap `IRQ_OFFSET+IRQ_SERIAL`.
```

We implemented the console input/output file type for you, in `lib/console.c`. `kbd_intr` and `serial_intr` fill a buffer with the recently read input while the console file type drains the buffer (the console file type is used for stdin/stdout by default unless the user redirects them).

Test your code by running `make run-testkbd` and typing a few lines. The system should echo your lines back to you as you finish them. Try typing in both the console and the graphical window, if you have both available.

### The Shell

Run `make run-icode` or `make run-icode-nox`. This will run your kernel and start `user/icode`. `icode` execs `init`, which will set up the console as file descriptors 0 and 1 (standard input and standard output). It will then spawn `sh`, the shell. You should be able to run the following commands:

```
 echo hello world | cat
 cat lorem |cat
 cat lorem |num
 cat lorem |num |num |num |num |num
 lsfd
```

Note that the user library routine `cprintf` prints straight to the console, without using the file descriptor code. This is great for debugging but not great for piping into other programs. To print output to a particular file descriptor (for example, 1, standard output), use `fprintf(1, "...", ...)`. `printf("...", ...)` is a shortcut for printing to FD 1. See `user/lsfd.c` for examples.

```
Exercise 10.

The shell doesn't support I/O redirection. It would be nice to run sh