Merge pull request #31 from LCTT/master

Update
zEpoch 2021-07-10 23:17:43 +08:00 committed by GitHub
commit f78e481530
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
9 changed files with 972 additions and 309 deletions


@ -3,34 +3,34 @@
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13567-1.html)
How to archive files on FreeDOS
======
> Although there is a version of tar for FreeDOS, the de facto standard archiving tools on DOS are Zip and Unzip.
![](https://img.linux.net.cn/data/attachment/album/202107/10/063340wp088ozz1fo9f1e1.jpg)
On Linux, you may be familiar with the standard Unix archiving command: `tar`. There is a version of `tar` for FreeDOS too (along with several other popular archiving programs), but the de facto standard archiver on DOS is Zip and Unzip. Both Zip and Unzip are installed by default in FreeDOS 1.3 RC4.
The Zip file format was originally conceived in 1989 by Phil Katz of PKWARE for the PKZIP and PKUNZIP pair of DOS archiving utilities. Katz released the specification for Zip files as an open standard, so anyone can create Zip archives. As a result of that open specification, Zip became a standard archive format on DOS. The [Info-ZIP][2] project implements an open source set of `ZIP` and `UNZIP` programs.
### Compressing files and directories
You can use `ZIP` at the DOS command line to create archives of files and directories. This is a handy way to make a backup copy of your work, or to release a "package" that can be used in a future FreeDOS distribution. For example, suppose I want to make a backup of my project source code, which contains these source files:
![dir][3]
*I'd like to archive these files*
`ZIP` sports a ton of command-line options to do different things, but the options I use most often are `-r` to process directories and subdirectories (_recursively_) and `-9` to give the maximum compression possible. `ZIP` and `UNZIP` use Unix-like command lines, so you can combine options after the dash: `-9r` gives maximum compression and includes subdirectories in the Zip file.
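At the DOS prompt, that command might look like this (a sketch with placeholder archive and directory names, since the original screenshots are not reproduced here):
```
REM create SRCBAK.ZIP with maximum compression, recursing into the SRC directory
ZIP -9r SRCBAK.ZIP SRC
```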
![zip][5]
*Zipping a directory tree*
In my example, `ZIP` was able to compress my source files from about 33 KB down to about 22 KB, saving me 11 KB of valuable disk space. You may get different compression ratios depending on the options you give `ZIP`, or on which files (and how many) you store in the Zip file. Generally speaking, very long text files (such as source code) compress well, while very short text files (such as a DOS "batch" file of only a few lines) are usually too short to compress well.
@ -42,8 +42,7 @@ Zip 文件格式最初是由 PKWARE 的 Phil Katz 在 1989 年为 PKZIP 和 PKUN
![unzip -l][6]
*Listing the contents of an archive with unzip*
That output lets me see the 14 entries in the Zip file: 13 files plus the `SRC` directory.
@ -51,19 +50,17 @@ Zip 文件格式最初是由 PKWARE 的 Phil Katz 在 1989 年为 PKZIP 和 PKUN
![unzip -d temp][7]
*You can unzip into a destination path with -d*
Sometimes I want to extract a single file from a Zip file. In this example, suppose I want to extract the DOS executable program `TEST.EXE`. To extract a single file, you specify the full path, as stored in the Zip file, of the file you want to extract. By default, `UNZIP` extracts the file using the path given in the Zip file. To omit the path information, you can add the `-j` ("junk the path") option.
You can also combine options. Let's extract the `SRC\TEST.EXE` program from the Zip file, but omit the full path and save it in the `TEMP` directory:
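A sketch of that combined command (the archive name is the same placeholder as above; note that Info-ZIP stores paths inside the archive with forward slashes):
```
REM extract SRC/TEST.EXE, drop its stored path (-j), and place it in TEMP (-d)
UNZIP -j SRCBAK.ZIP SRC/TEST.EXE -d TEMP
```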
![unzip -j][8]
*unzip with combined options*
Because the Zip file format is an open standard, we continue to see Zip files today. Every Linux distribution supports Zip files through the Info-ZIP programs. Your Linux file manager may also support Zip files; in the GNOME file manager, you should be able to right-click a folder and select "Compress" from the drop-down menu, and you can choose to create the new archive as a Zip file, among other formats.
Creating and managing Zip files is a key skill for any DOS user. You can learn more about `ZIP` and `UNZIP` at the Info-ZIP website, or use the `h` ("help") option on the command line to print a list of options.
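For example (assuming the standard Info-ZIP builds shipped with FreeDOS):
```
REM print each program's built-in help
ZIP -h
UNZIP -h
```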
@ -74,7 +71,7 @@ via: https://opensource.com/article/21/6/archive-files-freedos
Author: [Jim Hall][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)


@ -3,30 +3,32 @@
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13568-1.html)
Use the open source mobile app PlantNet to identify flowers and trees
======
> PlantNet combines open source technology with crowdsourced knowledge to help you become an amateur botanist.
![Fire pink flower in Maggie Valley, NC][1]
Where I live, many paths and roads are lined with flowers and trees. My community is known for its annual maple festival, and maple trees are easy for me to identify. However, there are many other trees whose names I cannot identify. The same is true for flowers: dandelions are easy to spot, but I don't know the names of the wildflowers along my walking route.
Recently, my wife told me about PlantNet, a mobile application that can identify these flowers and trees. It is available for iOS and Android, and it is free, so I decided to give it a try.
### Identifying plants the open source way
I downloaded the app on my phone and started using it to identify some of the flowers and trees I see on walks around my village. As I got familiar with the app, I noticed that the pictures I took (and those taken by other users) are shared under a Creative Commons Attribution-ShareAlike (CC BY-SA) license. Digging further, I found that PlantNet is [open source][2]. You can use the app anonymously if you like, or become a registered member of the community.
According to the [Cos4Cloud][3] citizen-science project, "PlantNet is a participatory citizen science platform for collecting, sharing, and reviewing plant observations based on automated identification. Its goal is to monitor plant biodiversity and facilitate public access to plant knowledge." It uses image-recognition technology to inventory biodiversity.
Development of the project began in 2009, carried out by botanists and computer scientists in France. It started as a [web app][4], and the smartphone apps launched in 2013. The project is part of the [Floris'Tic][5] initiative, another French project that aims to promote scientific, technical, and industrial culture around plant science.
PlantNet lets users collect visual specimens with their smartphone cameras, which are then identified by the software and the community. The photos are shared with the millions of other people around the world who have joined the PlantNet network.
The project says: "The PlantNet system works by comparing the visual patterns transmitted by users through photos of the plant organs (flower, fruit, leaf...) they are trying to identify. These images are analyzed and compared against an image bank that is collaboratively produced and enriched every day. The system then offers a list of possible species along with illustrations."
### Using PlantNet
@ -34,37 +36,27 @@ PlantNet 允许用户利用智能手机的摄像头来收集视觉标本,并
![PlantNet smartphone icon][6]
When the app opens, you see the specimens you have already collected in your library. A camera icon at the bottom of the display lets you use your phone's camera to add pictures to your library.
![Pl@ntnet homescreen][8]
Select the camera option and point your phone's camera at the tree or flower you want to identify. After taking the picture, tap the option that matches the specimen you want to identify (leaf, flower, bark, fruit, and so on).
![Selecting plant type to identify][9]
For example, if you want to identify a specimen by the characteristics of its leaf, select **Leaf**. PlantNet assigns a degree of certainty to its identification, ranging from a high to a low percentage. You can also use your smartphone's GPS to add location information to your data collection automatically, and you can add notes.
![Identified plant][10]
You can access all the observations you upload, and track whether the community has approved them, either on your smartphone or by logging into the website with your user ID (if you created an account). From the web interface, you can also download your observations in CSV or spreadsheet format.
![Pl@ntnet provides user stats][11]
### A great outdoor activity
I especially like that PlantNet links to Wikipedia, so I can read more about the plant data I have collected.
There are currently about 12 million PlantNet users worldwide, so the data set keeps growing. The app is free to use, with a limit of 500 requests per day. It also has an API that serves data in JSON format, so you can even use the PlantNet visual-identification engine as a web service.
One really nice thing about PlantNet is that it combines crowdsourced knowledge with open source technology to connect users with each other and with a great outdoor activity. There is no better reason to support open source software.
@ -77,7 +69,7 @@ via: https://opensource.com/article/21/7/open-source-plantnet
Author: [Don Watkins][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)


@ -0,0 +1,224 @@
[#]: subject: (Apply lean startup principles to your open source project)
[#]: via: (https://opensource.com/article/21/7/lean-startup-open-source)
[#]: author: (Ip Sam https://opensource.com/users/ipkeisam)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Apply lean startup principles to your open source project
======
Lean startup principles help you manage your open source project with
efficiency and success.
![Mobile devices are a big part of our daily lives][1]
There are a lot of benefits to starting an open source project. In general, open source projects benefit from collaboration, adoption, transparency, lower ownership costs, development best practices, more contributors and reviewers, and better quality.
When you contribute to open source projects, you can build your technical and leadership skills, get good experience for your resume, learn new development tools, understand industry trends, work with top engineers around the world, gain mentorship opportunities, meet people with similar interests, improve your people skills, and more.
When you develop your own open source project, you are much like the CEO of a startup company, and many startups use lean principles. This article demonstrates how you can apply lean startup principles to develop and advance your open source projects.
### Come up with a good open source idea
When brainstorming a good open source idea, consider three domains: industry, inventory, and customers. You want to come up with an idea at the intersection of these domains. For example, I am working on an open source project for the hybrid cloud. Cloud computing is my industry. Inventory could be the set of Ansible playbooks available for the cloud computing industry. The customers could be my OpenShift clients interested in using Ansible playbooks to set up their hybrid cloud infrastructure. These get me to the intersection of the three domains, and this could be a great open source idea.
![A good open source idea][2]
(Ip Sam, [CC BY-SA 4.0][3])
Once you identify your open source idea, start developing a proof of concept (PoC) prototype project, put it into a Git repository, and create the project's backlog (a command-line sketch of the first steps follows this list):
1. Start a simple PoC prototype.
2. Push the PoC into a public GitHub repository.
3. Create your README, LICENSE, CONTRIBUTING, and CODE_OF_CONDUCT files.
4. Set up high-level epics and stories in Jira.
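Here is a minimal sketch of those first steps; the repository name, remote URL, and file contents are placeholders rather than anything from the article:
```
# start a simple PoC and put it under version control
mkdir my-poc && cd my-poc
git init
echo "# My PoC" > README.md
touch LICENSE CONTRIBUTING.md CODE_OF_CONDUCT.md
git add .
git commit -m "Initial proof of concept"
# publish it to a public repository (placeholder URL)
git branch -M main
git remote add origin https://github.com/example/my-poc.git
git push -u origin main
```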
Next, start working on positioning your open source idea for long-term success. Consider your competitors' offerings, customers' needs, and company capabilities. For example, if I know customers want to do cloud automation in an OpenShift cluster, I understand the customers' needs. My team's capabilities are producing Ansible automation to automate OpenShift cluster installation and upgrades. And I need to make sure no competitors are doing similar work. In my open source project, I use Ansible for automation. Other companies may be using shell scripts or PowerShell scripts. Even though they are developing automation technologies, their offerings do not directly interfere with what my open source project is doing. Therefore, I can get into the sweet spot below.
![Position your open source idea][4]
(Ip Sam, [CC BY-SA 4.0][3])
### Use lean startup principles
Managing risks is important for an open source project. At any time, a project can lose funding, people, and resources. Business requirements and scope can change. Release schedules can be delayed or canceled. Timelines can change with a lot of uncertainty. As the CEO of your open source project, you can use lean startup principles to minimize risks and maximize profits for your project.
The lean startup principles focus on developing the minimum viable product (MVP) that gives you value from production as soon as possible. It also pushes for smaller release cycles and fast iteration. Using validated learning through feedback loops from production, you can validate your open source project and make modifications as product data becomes available.
In the lean startup cycle, first, you come up with your open source idea. Then you convert your idea into code by doing a simple MVP. In between, you set up the build phase (where you build unit tests to ensure the quality of your code), continuous integration/continuous development (CI/CD) pipelines for automated deployments, the cloud computing infrastructure, the developer sandbox, and other development best practices. These are part of your build phase to enable your developers to code faster.
You need to measure everything from coding to production data. For example, measure the MVP's usability to make sure end users can use it properly. Use monitoring and alerts so that you know when production issues occur. Static code analysis helps you catch code issues and security vulnerabilities. Finally, use feedback data from product users and stakeholders; it helps you measure faster.
All of this data enables you to learn about your project. You can learn faster by doing more customer interviews or working with customers during development cycles. You can also learn faster by setting up cross-functional teams with other business departments. As you go through the lean startup cycles again in the second iteration, you should be able to code faster, measure faster, and learn faster than in the first iteration.
![The lean startup cycle diagram][5]
(Ip Sam, [CC BY-SA 4.0][3])
Lean startup methodologies help you create an idea, formulate hypotheses from your idea, create an MVP, and test the MVP to validate your hypotheses. Then you go through what you learned to fine-tune your hypotheses and start your second iteration.
![Lean startup methodologies][6]
(Ip Sam, [CC BY-SA 4.0][3])
Lean startup is a combination of customer development and agile development. It supports faster iterative development cycles and incremental product development.
### Leverage design thinking, lean startup, and agile
![Lean startup and agile][7]
(Ip Sam, [CC BY-SA 4.0][3])
The figure below shows the three cycles of design thinking, lean startup, and agile. The first part is always design thinking (the green cycle), where you gather requirements to define your idea. Often, these come from product managers. Then you move to the lean startup (blue) cycle, where you come up with prototypes, do some experiments, and go through the learning period. Based on your learning, you can enter the agile (yellow) cycle. These often involve the typical two-week sprints, coming up with stories in your product backlogs, bringing stories from the backlog to sprint planning, doing sprint execution, moving stories to completion, and shipping incremental products to production by doing a deployment. At the end, you have the sprint review, sprint demo, and sprint retrospective. After completing the agile cycle, you go back to the blue cycle to begin the build process, measurement, and the next iteration.
![Design thinking, lean startup, and agile cycles together][8]
(Ip Sam, [CC BY-SA 4.0][3])
Often, you have multiple MVPs before you get to the final product. Use customer feedback and product data to shape the next MVP version.
![Lean startup MVPs][9]
(Ip Sam, [CC BY-SA 4.0][3])
The project management triangle helps you maintain product quality. Scope, cost, and time are the corners of the triangle. The shape of the triangle represents product quality. If you reduce your scope and costs, the triangle's shape will change, meaning your quality will decrease.
![Lean project management][10]
(Ip Sam, [CC BY-SA 4.0][3])
Lean startup principles can help you manage scope, cost, and time. By doing each release with the lean startup principles, you can get to incremental changes, small release cycles, small release feature sets, and feedback loops. For each idea, measure if it works, is making money, and generating business value. If your answer to each is yes, begin the reinvestment process to generate more new ideas.
![Lean releases][11]
(Ip Sam, [CC BY-SA 4.0][3])
### Recruit contributors and build a team
Project marketing is an important starting point for recruiting contributors and investors into your open source project. Learn how to sell your project, present it at major conferences and events, and demo your PoC or MVP at local developer meetups. If you work for a large organization, you may be able to turn it into an internal project.
Different people with different backgrounds, working styles, and cultures will come into your project, and your goal is to develop them into a high-performing team. The team will go through learning curves as it finds how to work together. For example, when you get new team members, your team's current working style may not fit them well, and you might get resistance. When individuals do not want to get out of their comfort zone to learn new skills, you might also get resistance. This might create a chaos situation, where team members disagree about how to do things. The team may split into different camps with different ideas. This brings team performance to the lowest point of the team maturity curve. When you reach the lowest point, listen to team members and work to understand their ideas and why people push back, then transform the different ideas into action items. You can leverage voting systems or use product data to validate the ideas. As conflicts are resolved, the team members will start to work better together and integrate their different ways of working into the overall team culture. This is where your team will reach a new status quo and team maturity.
![Team maturity graph][12]
(Ip Sam, [CC BY-SA 4.0][3])
The lean team management method can help you achieve team maturity. It leverages four levers of control: believe, boundary, measurement, and interactive.
* **Believe:** This is your team's set of core values. For example, my team believes in quality as a requirement for each product. Therefore, quality is the team's core value.
* **Boundaries:** These relate to setting rules based on core values. For example, my team believes in quality, and we set rules to require unit and integration tests for each code change before committing it into the Git repository. This becomes a boundary for the team. If somebody checks in code without a test, that person is not following the rule, and it's time for you to educate the team members.
* **Measurement:** This includes key performance indicators (KPIs), targets, and budget. For example, my team's KPIs include measuring story cycle time in the sprint, story release time, defect rates, test coverage rate, and product issues.
* **Interactive:** The interactive process is where you revise the product based on your last cycle. Then you start the next iteration for the next set of core values.
![Lean team management][13]
(Ip Sam, [CC BY-SA 4.0][3])
### Minimize waste
Eight types of waste—defects, overproduction, waiting, non-utilized talent, transportation, inventory, motion, and extra processing—in your development lifecycle harm team productivity.
* **Defects:** It is cheaper to fix a defect early in the development lifecycle, so many teams use [test-driven development][14] (TDD) to reduce defects.
* **Overproduction:** If you are doing more than what you are asked to do, you are over-producing. This is tied to scope in stories; the scope should be small so that you do not risk overproduction.
* **Waiting:** Waiting is not efficient and wastes your team's resources. During sprint daily standups, identify stories that are blocked or waiting on external dependencies. As a scrum master, it is your job to help escalate these blocking issues to reduce waiting time.
  * **Non-utilized talent:** Unused talent is always a waste, so ensure everyone on the team has enough work to do.
* **Transportation:** If you are moving projects from one team to another, you introduce more learning curves and ramp-up time. Try to reduce how much projects are transported.
* **Inventory:** Too much inventory that can't be sold is a waste. The inventory takes up space in your storage and consumes management time.
* **Motion:** Excessive unnecessary movements, such as going to meetings in different locations or even commuting, are a waste. Many organizations have a work-from-home policy to reduce motion waste.
* **Extra processing:** If you are doing the same validation test repeatedly on the same product, this extra processing might be a waste. Come up with a process flow in your development cycle so that you avoid extra processing.
![Lean waste management][15]
(Ip Sam, [CC BY-SA 4.0][3])
As the CEO of your open source project, your time is very valuable. If it's your job, you only have eight hours a day. If it's your side hustle, your time is even more limited. Lean time management involves a very simple matrix, as shown below. If you have a task that's important to the vision and must be done by you, go ahead and do it. If the task is important to the vision but can be done by a team member, delegate it. If the task is not important to the vision and must be done by you, minimize the time you spend on it. Finally, if the task is not vision-important and can be done by others, it is a waste of resources.
![Lean time management][16]
(Ip Sam, [CC BY-SA 4.0][3])
### Develop a lean strategy
Lean strategy is a good way to analyze your open source product to come up with a long-term strategy. Ask yourself these questions: If your open source product did not exist, would your customers suffer any real loss? If so, what type of loss would it be? Is it difficult for your customers to replace your products to meet their needs? For example, if I am doing Ansible playbooks for cluster automation, can my customers replace my Ansible playbooks using a set of shell scripts? What would be the impact on the operational costs and learning curve?
![Lean strategy questions][17]
(Ip Sam, [CC BY-SA 4.0][3])
The lean strategy hierarchy is (from bottom to top) mission, values, vision, strategy, and balanced scorecard. At the base is your mission: Why do your open source project and team exist? What is your mission statement? Next are the values your team believes in. Vision is how you expect your product to evolve in the next two years. Strategy requires strategic planning. At the very top is the balanced scorecard, where you implement and monitor the plan for your open source project.
![Lean strategy hierarchy][18]
(Ip Sam, [CC BY-SA 4.0][3])
Your lean strategy statement includes your objective, scope, and competitive advantage. The objective defines what your strategy is designed to do and achieve in a specific time frame. The scope identifies the strategy requirements for your open source product to be successful. Your competitive advantage is the core of your strategy. How will you compete with competitors using this product and your strategy to achieve your objective?
![Lean strategy statement][19]
(Ip Sam, [CC BY-SA 4.0][3])
Pricing and differentiation are two important factors for achieving profit. You want to offer low price and high differentiation. Ask yourself: How is your product different from your competitors'? What are some features that are unique to your product? If you can maximize differentiation and minimize pricing, you will get to the profit frontier line.
![Lean Profit Management][20]
(Ip Sam, [CC BY-SA 4.0][3])
### Look to the future
After running your open source project for a while and tracking everything, it's time to consider where you want to focus next on your journey as the CEO of your open source project.
Begin by assessing perspective, capability, and profit potential.
* Perspective involves how your product fulfills customers' needs. For example, can my Ansible playbook reduce manual operations in the OpenShift cluster to save 20% of the customer's operation costs?
* Capability is what your product can do and what new features it can offer. For example, can my Ansible playbooks operate in a way that supports OpenShift 3 and OpenShift 4 clusters? On-premises clusters versus AWS clusters?
* Profit potential includes how much revenue your product is generating for you in a time frame. Does your product fall into the profit frontier margin in the image above?
![What's next?][21]
(Ip Sam, [CC BY-SA 4.0][3])
Open source is the future. There is a lot going on in the open source community, and many companies are moving to open source models. Open source is a great way to get people involved in your project, and lean startup principles help you manage your open source project with efficiency. It will help you and your open source project go a long way.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/lean-startup-open-source
Author: [Ip Sam][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/ipkeisam
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mobile-demo-device-phone.png?itok=y9cHLI_F (Mobile devices are a big part of our daily lives)
[2]: https://opensource.com/sites/default/files/uploads/1_opensourceidea.jpg (A good open source idea)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: https://opensource.com/sites/default/files/uploads/2_positioning.jpg (Position your open source idea)
[5]: https://opensource.com/sites/default/files/uploads/3_leanstartupcycles.jpg (The lean startup cycle diagram)
[6]: https://opensource.com/sites/default/files/uploads/4_leanstartupmethodologies.jpg (Lean startup methodologies)
[7]: https://opensource.com/sites/default/files/uploads/5_leanstartupagile.jpg (Lean startup and agile)
[8]: https://opensource.com/sites/default/files/uploads/6_designthinkingleanstartupagile.jpg (Design thinking, lean startup, and agile cycles together)
[9]: https://opensource.com/sites/default/files/uploads/7_mvps.jpg (Lean startup MVPs)
[10]: https://opensource.com/sites/default/files/uploads/8_projectmanagementtriangle.jpg (Lean project management)
[11]: https://opensource.com/sites/default/files/uploads/9_leanreleases.jpg (Lean releases)
[12]: https://opensource.com/sites/default/files/uploads/10_teammaturity.jpg (Team maturity graph)
[13]: https://opensource.com/sites/default/files/uploads/11_leanteammanagement.jpg (Lean team management)
[14]: https://opensource.com/article/19/10/test-driven-development-best-practices
[15]: https://opensource.com/sites/default/files/uploads/12_leanwastemanagement.jpg (Lean waste management)
[16]: https://opensource.com/sites/default/files/uploads/13_leantimemanagement_0.jpg (Lean time management)
[17]: https://opensource.com/sites/default/files/uploads/14_leanstrategy.jpg (Lean strategy questions)
[18]: https://opensource.com/sites/default/files/uploads/15_leanstrategyhierarchy.jpg (Lean strategy hierarchy)
[19]: https://opensource.com/sites/default/files/uploads/16_leanstrategystatement.jpg (Lean strategy statement)
[20]: https://opensource.com/sites/default/files/uploads/17_profitfrontier.jpg (Lean Profit Management)
[21]: https://opensource.com/sites/default/files/uploads/18_whatsnext.jpg (What's next?)


@ -1,265 +0,0 @@
[#]: subject: (Send and receive Gmail from the Linux command line)
[#]: via: (https://opensource.com/article/21/7/gmail-linux-terminal)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Send and receive Gmail from the Linux command line
======
Use Mutt to send and receive email from your terminal, even if you're
using a hosted service such as Gmail.
![woman on laptop sitting at the window][1]
I'm a [Mutt][2] user. I like viewing and composing emails in the convenience of my Linux terminal. With a lightweight and minimal client like Mutt, I know that I can have my email available regardless of system specifications or internet access. And because I have a Linux terminal open more often than not, my email client essentially has no footprint on my desktop real estate. It's hidden away in a [terminal tab or multiplexer pane][3], so I can ignore it when I don't need it but get to it quickly when I do need it.
A commonly perceived problem with Mutt is that most of us use hosted email accounts these days and interact with actual email protocols only superficially. Mutt (and ELM before it) was created back in simpler times, when checking email was a call to `uucp` and a glance at `/var/mail`. However, it's adapted nicely to developing technology and works well with all sorts of modern protocols like POP, IMAP, and even LDAP, so you can use Mutt even if you're using Gmail as your email host.
Because it's relatively rare to run your own email server today, and because Gmail is very common, this tutorial assumes you're using Mutt with Gmail. If you're concerned about email privacy, consider opening an account with [ProtonMail][4] or [Tutanota][5], both of which provide fully encrypted email. Tutanota has many [open source components][6], and ProtonMail provides an [IMAP bridge][7] for paid users so that you don't have to work around accessing your email outside a browser. However, many companies, schools, and organizations don't run their own email services and just use Gmail, so you may have a Gmail account whether you want one or not.
If you are [running your own email server][8], setting up Mutt is even easier than what I demonstrate in this article, so just dive right in.
### Install Mutt
On Linux, you can install Mutt from your distribution's software repository and then create a `.mutt` directory to hold its configuration files:
```
$ sudo dnf install mutt
$ mkdir ~/.mutt
```
On macOS, use [MacPorts][9] or [Homebrew][10]. On Windows, use [Chocolatey][11].
Mutt is a mail user agent (MUA), meaning its job is to read, compose, and send email to an outbound mail spool. It's the job of some other application or service to actually transfer a message to or from a mail server (although there's a lot of integration with Mutt so that it seems like it's doing all the work even when it's not.) Understanding this separation of tasks can help configuration make a little more sense.
It also explains why you must have helper applications (in addition to Mutt) depending on what service you need to communicate with. For this article, I use IMAP so that my local copy of email and my email provider's remote copy stay synchronized. Should you decide to use POP instead, that setup is even easier and can be done without any external tools. IMAP integration, however, requires OfflineIMAP, a Python application available from [its GitHub repository][12].
Eventually, you'll be able to install it with the `python3 -m pip` command, but as of this writing, you must install OfflineIMAP manually because it's still being ported from Python 2 to Python 3.
OfflineIMAP requires `imaplib2`, which is also in heavy development, so I prefer doing a manual install of that, as well. The process is the same: clone the source code repository with Git, change into the directory, and install with `pip`.
First, install the `rfc6555` dependency:
```
$ python3 -m pip install --user rfc6555
```
Next, install `imaplib2` from source:
```
$ git clone git@github.com:jazzband/imaplib2.git
$ pushd imaplib2
$ python3 -m pip install --upgrade --user .
$ popd
```
Finally, install OfflineIMAP from source:
```
$ git clone git@github.com:OfflineIMAP/offlineimap3.git
$ pushd offlineimap3
$ python3 -m pip install --upgrade --user .
$ popd
```
If you're using Cygwin on Windows, then you must also install [Portalocker][14].
### Configure OfflineIMAP
OfflineIMAP reads the configuration file `~/.offlineimaprc` by default. A template for this file, named `offlineimap.conf`, is included in the Git repository you cloned to install OfflineIMAP. Move the example file to your home directory:
```
$ mv offlineimap3/offlineimap.conf ~/.offlineimaprc
```
Open the file in your favorite text editor and read through it. It's a well-commented file, and it's good to get familiar with the options available.
Here's my `.offlineimaprc` as an example, with comments removed for brevity. Some values may be slightly different for you, but this gives you a reasonable idea of what your end product ought to look like:
```
[general]
ui = ttyui
accounts = %your-gmail-username%
pythonfile = ~/.mutt/password_prompt.py
fsync = False
[Account %your-gmail-username%]
localrepository = %your-gmail-username%-Local
remoterepository = %your-gmail-username%-Remote
status_backend = sqlite
postsynchook = notmuch new
[Repository %your-gmail-username%-Local]
type = Maildir
localfolders = ~/.mail/%your-gmail-username%-gmail.com
nametrans = lambda folder: {'drafts':  '[Gmail]/Drafts',
                            'sent':    '[Gmail]/Sent Mail',
                            'flagged': '[Gmail]/Starred',
                            'trash':   '[Gmail]/Trash',
                            'archive': '[Gmail]/All Mail',
                            }.get(folder, folder)
[Repository %your-gmail-username%-Remote]
maxconnections = 1
type = Gmail
remoteuser = %your-gmail-username%@gmail.com
remotepasseval = '%your-gmail-API-password%'
## remotepasseval = get_api_pass()
sslcacertfile = /etc/ssl/certs/ca-bundle.crt
realdelete = no
nametrans = lambda folder: {'[Gmail]/Drafts':    'drafts',
                            '[Gmail]/Sent Mail': 'sent',
                            '[Gmail]/Starred':   'flagged',
                            '[Gmail]/Trash':     'trash',
                            '[Gmail]/All Mail':  'archive',
                            }.get(folder, folder)
folderfilter = lambda folder: folder not in ['[Gmail]/Trash',
                                             '[Gmail]/Important',
                                             '[Gmail]/Spam',
                                             ]
```
There are two replaceable values in this file: `%your-gmail-username%` and `%your-gmail-API-password%`. Replace the first with your Gmail user name. That's the part of your email address on the left of the `@gmail.com` part. You must acquire the second value from Google through a two-factor authentication (2FA) setup process (even though you don't need to use 2FA to check email).
### Set up 2FA for Gmail
Google expects its users to use the Gmail website for email, so when you attempt to access your email outside of Gmail's interface, you're essentially doing so as a developer (even if you don't consider yourself a developer). In other words, you're creating what Google considers an "app." To obtain a developer-level _app password_, you must set up 2FA; through that process, you get an app password, which Mutt can use to log in outside the usual browser interface.
For safety, you can also add a recovery email address. To do that, go to Google's [Account Security page][15] and scroll down to **Recovery email**.
To set up 2FA, go back to the Account Security page, and click on **2-step Verification** to activate and configure it. This requires a mobile phone for setup.
After activating 2FA, you get a new Google Account Security option: **App passwords**. Click on it to create a new app password for Mutt. Google generates the password for you, so copy it and paste it into your `.offlineimaprc` file in the place of the `%your-gmail-API-password%` value.
Placing your API password in your `.offlineimaprc` file stores it in plain text, which can be dangerous. For a long while, I did this and felt fine about it because my home directory is encrypted. However, in the interest of better security, I now encrypt my API password with GnuPG. That's somewhat beyond the scope of this article, but I've written an article demonstrating how to [set up GPG password integration][16].
### Enable IMAP in Gmail
There's one last thing before you can say goodbye to the Gmail web interface forever: You must enable IMAP access to your Gmail account.
To do this, go to the Gmail web interface, click the "cog" icon in the upper-right corner, and select **See all settings**. In Gmail **Settings**, click the **POP/IMAP** tab, and enable the radio button next to **Enable IMAP**. Save your settings.
Now Gmail is configured to give you access to your email outside a web browser.
### Configure Mutt
Now that you're all set up for Mutt, you'll be happy to learn that configuring Mutt is the easy part. As with [.bashrc][17], [.zshrc][18], and .emacs files, there are many examples of very good .muttrc files available on the internet. For my configuration file, I borrowed options and ideas from [Kyle Rankin][19], [Paul Frields][20], and many others, so I've abbreviated my .muttrc file to just the essentials in the interest of simplicity:
```
set ssl_starttls=yes
set ssl_force_tls=yes
set from='tux@example.com'
set realname='Tux Example'
set folder = imaps://imap.gmail.com/
set spoolfile = imaps://imap.gmail.com/INBOX
set postponed="imaps://imap.gmail.com/[Gmail]/Drafts"
set smtp_url="smtp://smtp.gmail.com:25"
set move = no
set imap_keepalive = 900
set record="imaps://imap.gmail.com/[Gmail]/Sent Mail"
# Paths
set folder           = ~/.mail
set alias_file       = ~/.mutt/alias
set header_cache     = "~/.mutt/cache/headers"
set message_cachedir = "~/.mutt/cache/bodies"
set certificate_file = ~/.mutt/certificates
set mailcap_path     = ~/.mutt/mailcap
set tmpdir           = ~/.mutt/temp
set signature        = ~/.mutt/sig
set sig_on_top       = yes
# Basic Options
set wait_key = no
set mbox_type = Maildir
unset move               # gmail does that
# Sidebar Patch
set sidebar_visible = yes
set sidebar_width   = 16
color sidebar_new color221 color233
## Account Settings
# Default inbox
set spoolfile = "+example.com/INBOX"
# Mailboxes to show in the sidebar.
mailboxes +INBOX \
          +sent \
          +drafts
# Other special folder
set postponed = "+example.com/drafts"
# navigation
macro index gi "<change-folder>=example.com/INBOX<enter>" "Go to inbox"
macro index gt "<change-folder>=example.com/sent" "View sent"
```
Nothing in this file requires changing, but consider replacing the fake name `Tux Example` and the fake address `example.com` with something that applies to you. Copy and paste this text into a file and save it as `~/.mutt/muttrc`.
### Launch Mutt
Before launching Mutt, run `offlineimap` from a terminal to sync your computer with the remote server. The first run of this takes _a long time_, so leave it running overnight.
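If you want to check that the configuration works before leaving that long first sync running, a quick sketch (using OfflineIMAP's standard flags) is a single one-shot pass:
```
$ offlineimap -o   # run one synchronization pass and exit
```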
Once your account has synchronized, you can launch Mutt:
```
$ mutt
```
Mutt prompts you for permission to create the directories it needs to organize your email activity and then displays a view of your inbox.
![Mutt email client][22]
### Learn Mutt
Learning Mutt is a mixture of exploring the application and finding your favorite hacks for your .muttrc config. For example, my config file integrates Emacs for composing messages, LDAP so that I can search through contacts, GnuPG so that I can encrypt and decrypt messages, link harvesting, HTML views, and much more. You can make Mutt anything you want it to be (as long as you want it to be an email client), and the more you experiment, the more you discover.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/gmail-linux-terminal
Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: http://www.mutt.org/
[3]: https://opensource.com/article/21/5/linux-terminal-multiplexer
[4]: https://protonmail.com
[5]: https://tutanota.com
[6]: https://github.com/tutao/tutanota
[7]: https://protonmail.com/bridge/
[8]: https://www.redhat.com/sysadmin/configuring-email-server
[9]: https://opensource.com/article/20/11/macports
[10]: https://opensource.com/article/20/6/homebrew-mac
[11]: https://opensource.com/article/20/3/chocolatey
[12]: https://github.com/OfflineIMAP/offlineimap3
[13]: mailto:git@github.com
[14]: https://pypi.org/project/portalocker
[15]: https://myaccount.google.com/security
[16]: https://opensource.com/article/21/6/enter-invisible-passwords-using-python-module
[17]: https://opensource.com/article/18/9/handy-bash-aliases
[18]: https://opensource.com/article/19/9/adding-plugins-zsh
[19]: https://twitter.com/kylerankin
[20]: https://twitter.com/stickster
[21]: mailto:tux@example.com
[22]: https://opensource.com/sites/default/files/mutt.png (Mutt email client)


@ -2,7 +2,7 @@
[#]: via: (https://itsfoss.com/install-zlib-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -0,0 +1,261 @@
[#]: subject: (OS Chroot 101: covering btrfs subvolumes)
[#]: via: (https://fedoramagazine.org/os-chroot-101-covering-btrfs-subvolumes/)
[#]: author: (yannick duclap https://fedoramagazine.org/author/cybermeme/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
OS Chroot 101: covering btrfs subvolumes
======
![][1]
The chroot command allows you to mount and run another Gnu/Linux from within your current Gnu/Linux. It does this by mounting the nested partition(s) within your system and giving you a shell with access to the chrooted OS. This lets you manage or debug another Gnu/Linux from your running Fedora Linux.
### Intro
##### Disclaimer
When I say _chroot_, I mean the command, and _chrootDir_ a folder. _OSext_ is the external OS to work with. All the following commands are executed as superuser. For extra readability I removed the sudo at the beginning; just don't forget to be superuser when performing the tasks. […] means I cut some terminal output.
First I'm going to review how to do a chroot on a classic filesystem (ext4, xfs, fat, etc.) and then we'll see how to do it on our brand new standard, Btrfs, and its subvolumes.
The process is similar to that used to [change the root password][2], or that we may use to repair a corrupted fstab (it happens, trust me). We can also use the chroot command to mount a Gnu/Linux in our Fedora Linux in order to perform operations (updates, file recovery, debugging, etc).
#### A few explanations
The [chroot][3] command lets you temporarily "change" the root location. This lets you confine a service or a user to a specific part of the directory tree.
When you use _chroot_ to run a mounted Gnu/Linux OS, in order for it to be fully functional, you have to mount the special system folders in their “original places in the directory tree” in the chrootDir. This allows the chrooted OS to talk to the kernel.
These special system folders are:
* _/dev_ for the devices;
* _/proc_ which contains the information about the system (kernel and process);
* _/sys_ which contains the information about the hardware.
For example, _/dev_ has to be mounted in _chrootDir/dev_.
As I always learn better by practicing, let's get some hands-on experience.
### Filesystems without btrfs subvolumes
#### The classic method
In the following example, the partition we are going to mount is the OSext root (_/_). This is located in _/dev/vda2_ and we will mount it in the chrootDir (_/mnt_) directory. _/mnt_ is not a necessity, you can also mount the partition somewhere else.
```
# mount /dev/vda2 /mnt
# mount --bind /dev /mnt/dev
# mount -t proc /proc /mnt/proc
# mount -t sysfs /sys /mnt/sys
# mount -t tmpfs tmpfs /mnt/run
# mkdir -p /mnt/run/systemd/resolve/
# echo 'nameserver 1.1.1.1' > /mnt/run/systemd/resolve/stub-resolv.conf
# chroot /mnt
```
The _bind_ option makes the contents accessible in both locations, _-t_ defines the filesystem type. See the [manpage][4] for more information.
We will mount _/run_ as _tmpfs_ (in the memory) because we are using systemd-resolved (this is the default now in Fedora). Then we will create the folder and the file _stub-resolv.conf,_ which is associated by a symbolic link to /_etc/resolv.conf_. This file contains the resolver IP. In this example, the resolver is 1.1.1.1, but you can use any resolver IP you like.
To exit the chroot, the shell command is _exit_. After that, we unmount all the folders we just mounted:
```
exit
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys
# umount /mnt/run
# umount /mnt
```
#### The case of lvm
In the case of lvm, the partitions are not available directly and must be mapped first.
```
# fdisk -l /dev/vda2
Disk /dev/vda2: 19 GiB, 20400046080 bytes, 39843840 sectors
[...]
I/O size (minimum/optimal): 512 bytes / 512 bytes
# mount /dev/vda2 /mnt/
mount: /mnt: unknown filesystem type 'LVM2_member'.
```
As you can see, we are not able to mount _/dev/vda2_ directly. We will now use the lvm tools to locate our partitions.
```
# pvscan
PV /dev/vda2 VG cl lvm2 [<19.00 GiB / 0 free]
Total: 1 [<19.00 GiB] / in use: 1 [<19.00 GiB] / in no VG: 0 [0]
# vgscan
Found volume group "cl" using metadata type lvm2
# lvscan
ACTIVE '/dev/cl/root' [10.00 GiB] inherit
ACTIVE '/dev/cl/swap' [2.00 GiB] inherit
ACTIVE '/dev/cl/home' [1.00 GiB] inherit
ACTIVE '/dev/cl/var' [<6.00 GiB] inherit
```
So here we can see where the logical volumes are mapped _/dev/cl_ and we can mount these partitions like we did before, using the same method:
```
# mount /dev/cl/root /mnt/
# mount /dev/cl/home /mnt/home/
# mount /dev/cl/var /mnt/var/
# mount --bind /dev /mnt/dev
# mount -t proc /proc /mnt/proc
# mount -t sysfs /sys /mnt/sys
# mount -t tmpfs tmpfs /mnt/run
# mkdir -p /mnt/run/systemd/resolve/
# echo 'nameserver 1.1.1.1' > /mnt/run/systemd/resolve/stub-resolv.conf
# chroot /mnt
```
### Btrfs filesystem with subvolumes
#### Overview of a btrfs partition with subvolumes
Let's have a look at the filesystem.
Fdisk tells us that there are only two partitions on the physical media.
```
# fdisk -l
Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
[…]
Device Boot Start End Sectors Size Id Type
/dev/vda1 * 2048 2099199 2097152 1G 83 Linux
/dev/vda2 2099200 41943039 39843840 19G 83 Linux
```
Here are the contents of the target system's fstab (OSext):
```
UUID=3de441bd-59fc-4a12-8343-8392faab5ac7 / btrfs subvol=root,compress=zstd:1 0 0
UUID=71dc4f0f-9562-40d6-830b-bea065d4f246 /boot ext4 defaults 1 2
UUID=3de441bd-59fc-4a12-8343-8392faab5ac7 /home btrfs subvol=home,compress=zstd:1 0 0
```
Looking at the _UUID_s in the _fstab_, we can see that there are two different ones.
One is an ext4, used here for _/boot_ and the other is a btrfs containing two mount points (the subvolumes), _/_ and _/home_.
#### Overview of a btrfs filesystem with subvolumes
Let's have a look at what is in the btrfs partition (_/dev/vda2_ here) by mounting it directly:
```
# mount /dev/vda2 /mnt/
# ls /mnt/
home root
# ls /mnt/root/
bin dev home lib64 media opt root sbin sys usr
boot etc lib lost+found mnt proc run srv tmp var
# ls /mnt/home/
user
# umount /mnt
```
Here we can see that in the mounted partition there are two folders (the subvolumes), that contain lots of different directories (the target file hierarchy).
To get this information about the subvolumes, there is a much more elegant way.
```
# mount /dev/vda2 /mnt/
# btrfs subvolume list /mnt
ID 256 gen 178 top level 5 path home
ID 258 gen 200 top level 5 path root
ID 262 gen 160 top level 258 path root/var/lib/machines
# umount /mnt
```
#### Practical chroot with btrfs subvolumes
Now that we've had a look at the contents of our partition, we will mount the system on chrootDir (_/mnt_ in the example). We will do this by specifying the mount type as btrfs and the subvolume option _subvol=SubVolumeName_. We will also add the special system folders and other partitions in the same way.
```
# mount /dev/vda2 /mnt/ -t btrfs -o subvol=root
# ls /mnt/
bin dev home lib64 media opt root sbin sys usr
boot etc lib lost+found mnt proc run srv tmp var
# ls /mnt/home/
<it's still empty>
# mount /dev/vda2 /mnt/home -t btrfs -o subvol=home
# ls /mnt/home/
user
# mount /dev/vda1 /mnt/boot
# mount --bind /dev /mnt/dev
# mount -t proc /proc /mnt/proc
# mount -t sysfs /sys /mnt/sys
# mount -t tmpfs tmpfs /mnt/run
# mkdir -p /mnt/run/systemd/resolve/
# echo 'nameserver 1.1.1.1' > /mnt/run/systemd/resolve/stub-resolv.conf
# chroot /mnt
```
When the job is done, we use the shell command _exit_ and unmount all previously mounted directories as well as the chrootDir itself (_/mnt_).
```
exit
# umount /mnt/boot
# umount /mnt/dev
# umount /mnt/proc
# umount /mnt/sys
# umount /mnt/run
# umount /mnt
```
### Conclusion
As you can see in the screenshot below, I performed a dnf update on a Fedora Linux 34 Workstation from a live [Fedora 33 security lab CD][5]. That way, if a friend needs you to debug his/her/their Gnu/Linux, he/she/they only have to bring you the hard drive, not the whole desktop/server machine.
![][6]
Be careful if you use a different shell between your host OS and OSext (the chrooted OS), for example ksh <-> bash.
In this case you'll have to make sure that both systems have the same shell installed.
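One way to sidestep a mismatch is to name the shell you want explicitly when entering the chroot; this is a small sketch assuming the chrooted OS provides bash at _/bin/bash_:
```
# chroot /mnt /bin/bash
```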
I hope this will be useful to anyone needing to debug, or if you just need to update the other Fedora Linux in your dual boot and don't want to have to restart 😉
This article only covered part of btrfs; for more information you can have a look at the [wiki][7], which will give you all the information you need.
Have fun chrooting.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/os-chroot-101-covering-btrfs-subvolumes/
Author: [yannick duclap][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://fedoramagazine.org/author/cybermeme/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/07/chroot-btrfs-816x345.jpg
[2]: https://fedoramagazine.org/reset-root-password-fedora/
[3]: https://man7.org/linux/man-pages/man2/chroot.2.html
[4]: https://man7.org/linux/man-pages/man8/mount.8.html
[5]: https://labs.fedoraproject.org/security/download/index.html
[6]: https://fedoramagazine.org/wp-content/uploads/2021/07/fedoraSecurity.png
[7]: https://fedoraproject.org/wiki/Btrfs


@ -0,0 +1,99 @@
[#]: subject: (Troubleshooting bugs in an API implementation)
[#]: via: (https://opensource.com/article/21/7/listing-prefixes-s3-implementations)
[#]: author: (Alay Patel https://opensource.com/users/alpatel)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Troubleshooting bugs in an API implementation
======
Different API versions can cause unexpected problems.
![magnifying glass on computer screen, finding a bug in the code][1]
As distributed and cloud computing adoption increases, things are intrinsically getting harder to debug. This article shares a situation where you would expect a library to safeguard against different versions of an API. However, it didn't, and that caused unexpected behavior that was very hard to debug. This might be a useful example of how ripping out layers of abstraction is sometimes necessary to get to the root cause of a problem in a systematic manner.
The S3 (Simple Storage Service) API is an industry standard that provides the capability to interact with cloud storage programmatically. Many cloud providers implement it as one of the ways to interact with their object store. Having different vendors to choose from is good for avoiding vendor lock-in. And having different implementations to choose from means you can select the open source implementation of this popular standard that works best for you and your team.
However, the differences in API versions may cause unexpected problems, as we learned. This article leverages those differences to illustrate the troubleshooting process.
### Konveyor Crane
[Crane][2] is part of the [Konveyor community][3], which works to solve problems related to app modernization and portability to further the adoption of Kubernetes. Crane allows users to migrate applications (and the associated data) deployed on OpenShift 3 to OpenShift 4. Behind the scenes, it uses [Velero][4] to orchestrate the migration. Velero uses the object store to perform backup and restore operations.
### How Velero stores data
Velero can be configured to use a bucket from the object store as a backup storage location (where backup data is stored). Velero organizes backups in a directory called `<prefix>/backups` (with `prefix` being configurable). Under the `backups` directory, Velero creates a separate directory for each backup, e.g., `<prefix>/backups/<backup-name>`.
Additionally, to ensure that a backup created in the object store is available in the cluster and available for restoration, Velero makes a prefix list of all the directories under `backups`. It uses the ListObjectsV2 S3 API to implement this. The [ListObjectsV2][5] API differs from the [ListObjects][6] API in how it handles pagination.
### How API differences produced a bug
The differences between these two API versions are subtle. First, clients see the difference in the request that they send to the S3 server. When requesting a ListObjectsV2, the client sends something like this:
```
GET /?list-type=2&delimiter=Delimiter&prefix=Prefix
HTTP/1.1
Host: Bucket.s3.example.objectstorage.softlayer.net
x-amz-request-payer: RequestPayer
x-amz-expected-bucket-owner: ExpectedBucketOwner
```
For ListObjects, the request looks very similar, but it's missing `list-type=2`:
```
GET /?delimiter=Delimiter&marker=Marker&prefix=Prefix
HTTP/1.1
Host: Bucket.s3.example.objectstorage.softlayer.net
x-amz-request-payer: RequestPayer
x-amz-expected-bucket-owner: ExpectedBucketOwner
```
For a server that ignores the `list-type=2` parameter, it is easy to respond to a basic ListObjectsV2 call with a ListObject response type.
The interesting difference between the API versions' response types is how pagination is implemented. Both versions share a common field called `IsTruncated` in the response; this indicates whether the server has sent a complete set of keys in its response. In ListObjectsV2, this field is used along with the `NextContinuationToken` field to get the next page (and, hence, the next set of keys) and is iterated upon until `IsTruncated` is false. In the ListObjects API, however, the `NextMarker` field is used instead. There are subtle differences in how this is implemented.
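To make the client-side contract concrete, here is a hedged sketch of how a ListObjectsV2 consumer is expected to page (it assumes the AWS CLI and jq are installed; the bucket and prefix names are placeholders, not values from the incident described here):
```
#!/bin/sh
# Page through ListObjectsV2 results, collecting the sub-prefixes under a prefix.
BUCKET=my-velero-bucket          # placeholder bucket name
PREFIX=velero/backups/           # placeholder Velero backup location prefix
TOKEN=""
while :; do
  if [ -z "$TOKEN" ]; then
    PAGE=$(aws s3api list-objects-v2 --bucket "$BUCKET" --prefix "$PREFIX" \
           --delimiter / --max-keys 100)
  else
    PAGE=$(aws s3api list-objects-v2 --bucket "$BUCKET" --prefix "$PREFIX" \
           --delimiter / --max-keys 100 --continuation-token "$TOKEN")
  fi
  echo "$PAGE" | jq -r '.CommonPrefixes[]?.Prefix'       # backup directories on this page
  [ "$(echo "$PAGE" | jq -r '.IsTruncated')" = "true" ] || break
  TOKEN=$(echo "$PAGE" | jq -r '.NextContinuationToken')
done
```
A client that stops paging whenever the continuation token is missing will silently undercount objects if the server answers with the older ListObjects response shape, which is exactly the failure mode described below.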
### Our observations
When we observed the Velero debug logs, we discovered 555 total backup objects were found. However, when we ran the [s3cmd][7] command against the same bucket to list objects, it returned 788. After looking at the debug logs of the s3cmd command-line interface (CLI), we found that the s3cmd could talk to the server using ListObjects. We also noticed that the last field on the first page of the s3cmd debug log was the last field Velero saw in its list. This immediately rang bells that pagination is not implemented correctly with the ListObjectsV2 API.
In the ListObjectsV2 API, the `NextContinuationToken` field takes the client to the next page, and the `ListObjectsV2Pages` method in `aws-sdk-go` relies on this field in its implementation. The logic is: if `NextContinuationToken` is empty, there are no more pages, so the callback is invoked with `lastPage=true` and pagination stops.
Since a server may answer a ListObjectsV2 call with a ListObjects-style response that never sets `NextContinuationToken`, it is clear that when such a paginated response comes back, `ListObjectsV2Pages` will read only the first page. This is exactly what happened, and we verified it in a debugger using a [sample program][8].
Simply by changing Velero's implementation to use the `ListObjectsPages` method (which calls the ListObjects API), Velero reported a backup count of 788, consistent with the s3cmd CLI.
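For reference, here is a minimal, hedged sketch of what the fixed listing path could look like with `aws-sdk-go`; `countBackupsV1` and its parameters are illustrative stand-ins, not Velero's actual code:

```go
package listing

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// countBackupsV1 counts the objects under <prefix>backups/ using the older
// ListObjects API, whose marker-based paging this server implements correctly.
// This function and its parameters are illustrative, not Velero's actual code.
func countBackupsV1(svc *s3.S3, bucket, prefix string) (int, error) {
	count := 0
	err := svc.ListObjectsPages(&s3.ListObjectsInput{
		Bucket: aws.String(bucket),
		Prefix: aws.String(prefix + "backups/"),
	}, func(page *s3.ListObjectsOutput, lastPage bool) bool {
		count += len(page.Contents)
		return true // continue until the SDK reports the last page
	})
	return count, err
}
```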
Because of this semantic difference, the customer's migration efforts were blocked. The root cause stemmed from the libraries being used, and the analysis unblocked the customer.
### Conclusion
This case study shows how implementations of something as widely adopted as the S3 API can still have bugs, and how those bugs can cause problems in unexpected ways.
To follow the technical analysis of how Konveyor's development team is solving modernization and migration issues, check out our engineering [knowledge base][9]. For updates on the Konveyor tools, join the community at [konveyor.io][10].
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/listing-prefixes-s3-implementations
作者:[Alay Patel][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alpatel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code)
[2]: https://www.konveyor.io/crane
[3]: https://www.redhat.com/en/blog/red-hat-and-ibm-research-launch-konveyor-project
[4]: https://velero.io/
[5]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html
[6]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html
[7]: https://s3tools.org/usage
[8]: https://gist.github.com/alaypatel07/c2a1f34095813e8887ddcb3f6e90d262
[9]: http://engineering.konveyor.io/
[10]: https://konveyor.io/


@ -0,0 +1,90 @@
[#]: subject: (What you need to know about security policies)
[#]: via: (https://opensource.com/article/21/7/what-security-policy)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
What you need to know about security policies
======
Learn about protecting your personal computer, server, and cloud systems
with SELinux, Kubernetes pod security, and firewalls.
![Lock][1]
A **security policy** is a set of permissions that govern access to a system, whether the system is an organization, a computer, a network, an application, a file, or any other resource. Security policies often start from the top down: Assume nobody can do anything, and then allow exceptions.
On a desktop PC, the default policy is that no user may interact with the computer until after logging in. Once you've successfully logged in, you inherit a set of digital permissions (in the form of metadata associated with your login account) to perform some set of actions. The same is true for your phone, a server or network on the internet, or any node in the cloud.
There are security policies designed for filesystems, firewalls, services, daemons, and individual files. Securing your digital infrastructure is a job that's never truly finished, and that can seem frustrating and intimidating. However, security policies exist so that you don't have to think about who or what can access your data. Being comfortably familiar with potential security issues is important, and reading through known security issues (such as NIST's great [RSS feed][2] for [CVE entries][3]) over your [power breakfast][4] can be more eye-opening than a good cup of coffee, but equally important is being familiar with the tools at your disposal to give you sensible defaults. These vary depending on what you're securing, so this article focuses on three areas: your personal computer, the server, and the cloud.
### SELinux
[SELinux][5] is a **labeling system** for your personal computer, servers, and the Linux nodes of the cloud. On a modern Linux system running SELinux, every process has a label, as does every file and directory. In fact, any system object gets a label. Luckily, you're not the one who has to do the labeling. These labels are created for you automatically by SELinux.
Policy rules govern what access is granted between labeled **processes** and labeled **objects**. The kernel enforces these rules. In other words, SELinux can ensure that an action is safe whether a user appears to deserve the right to perform that action or not. It does this by understanding what processes are permitted. This protects a system from a bad actor who gains escalated permissions—whether it's through a security exploit or by wandering over to your desk after you've gotten up for a coffee refill—by understanding the expected interactions of all of your computer's components.
For more information about SELinux, read our [illustrated guide to SELinux][6] by Dan Walsh. To learn more about using SELinux, read [A sysadmin's guide to SELinux][7] by Alex Callejas, and download our free [SELinux cheat sheet][8].
### Kubernetes pod security
In the world of the Kubernetes cloud, there are **Security Policies** and **Security Contexts**.
Pod [Security Policies][9] are an implementation of Kubernetes pod security resources. They are built-in resources that describe specific conditions that pods must conform to in order to be accepted and scheduled. For example, Pod Security Policies can leverage restrictions on which types of volumes a pod may be allowed to mount or what user or group IDs the pod is not allowed to use. Unlike Security Contexts, these are restrictions controlled by the cluster's Control Plane that decide if a given pod is allowed within the Kubernetes system, even before it is created. If the pod spec does not meet the requirements of the Pod Security Policy, it is rejected.
[Security Contexts][10] are similar to Pod Security Policies, in that they describe what a pod or container can and cannot do but in the context of the container runtime. Recall that the Pod Security Policies are enforced in the Control Plane. Security Contexts are provided in the spec of the pod and describe to the container runtime (e.g., Docker, CRI-O, etc.) specifically how the pod should run. There's a lot of overlap in the kinds of restrictions found in Pod Security Policies and Security Contexts. The former can be thought of as "these are the things a pod in this policy may do," while the latter is "this pod must be run with these specific rules."
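As a concrete (and hedged) illustration of that last point, here is a minimal sketch using the Kubernetes Go API types to attach Security Contexts at the pod and container level; the image name and the specific restrictions are assumptions chosen for the example, not a recommended baseline:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-example"},
		Spec: corev1.PodSpec{
			// Pod-level settings apply to every container in the pod.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsNonRoot: boolPtr(true),
				RunAsUser:    int64Ptr(1000),
			},
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.example.com/app:latest", // assumed image
				// Container-level settings tell the runtime how this
				// container must run.
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: boolPtr(false),
					ReadOnlyRootFilesystem:   boolPtr(true),
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].SecurityContext)
}
```

If the pod violates an admission-time policy (a Pod Security Policy today, or whatever replaces it), it is rejected before the container runtime ever sees these fields.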
#### The state of Pod Security Policies
Pod Security Policies are deprecated and will be removed in Kubernetes 1.25. In April 2021, Tabitha Sable of Kubernetes SIG Security wrote about the [deprecation and replacement of Pod Security Policies][11]. There's an open pull request that describes proposed [Kubernetes enhancements][12] with a new admission controller to enforce pod security standards, which is suggested as the replacement for the deprecated Pod Security Policies. The architecture acknowledges, however, that there's a large ecosystem of add-ons and complementary services that can be mixed and matched to provide coverage that meets an organization's needs.
For now, Kubernetes has published [Pod Security Standards][13] that describe the overall concept of layered policy types, from totally unrestricted **Privileged** pods, to minimally restricted **Baseline** pods, to heavily restricted **Restricted** pods, and it publishes these example policies as Pod Security Policies. The documentation describes the restrictions that make up each profile and provides an excellent starting point for getting familiar with the kinds of restrictions that can be applied to a pod to increase security.
#### Future of security policies
The future of pod security in Kubernetes will likely include an admission controller like the one proposed in the enhancement PR and a mix of add-ons for tweaking and adjusting how pods run in the cluster, such as [Open Policy Agent][14] (OPA). Kubernetes is extremely flexible given just how complicated its job is, and this change follows the same pattern that has allowed Kubernetes to be so successful: managing container orchestration well and allowing an entire ecosystem of add-ons and tools to enhance and extend the core so that it is not a one-size-fits-all solution.
### Firewalls
Protecting your network is just as important as protecting the computers inside it. For that, there are firewalls. Some firewalls come embedded in routers, but computers have firewalls too, and in large organizations, they run the firewall for the entire network.
Typical firewall policies are constructed by denying all traffic, followed by judicious exceptions for necessary incoming and outgoing communication. Individual users can learn more about the `firewall-cmd` in Seth Kenlon's [Getting started with Linux firewalls][15]. Sysadmins can learn more about firewalls in Seth's [Secure your network with firewall-cmd][16]. And both users and admins can benefit from our free [firewall-cmd cheat sheet][17].
### Security policies
Security policies are important for protecting people and their data no matter what the system. Buildings and tech conferences need security policies to keep people physically safe, and computers need security policies to keep data safe from abuse.
Spend some time thinking about the security of the systems in your life, getting familiar with the default policies, and choosing your level of comfort for the different risks you identify. Then establish a security policy, and stick to it. As with [backup plans][18], security won't get addressed unless it's _easy_, so make it second nature to maintain good security practices.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/what-security-policy
作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum (Lock)
[2]: https://nvd.nist.gov/feeds/xml/cve/misc/nvd-rss-analyzed.xml
[3]: https://nvd.nist.gov/vuln/data-feeds#APIS
[4]: https://opensource.com/article/21/6/breakfast
[5]: https://en.wikipedia.org/wiki/Security-Enhanced_Linux
[6]: https://opensource.com/business/13/11/selinux-policy-guide
[7]: https://opensource.com/article/18/7/sysadmin-guide-selinux
[8]: https://opensource.com/downloads/cheat-sheet-selinux
[9]: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
[10]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
[11]: https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/
[12]: https://github.com/kubernetes/enhancements/issues/2579
[13]: https://kubernetes.io/docs/concepts/security/pod-security-standards/
[14]: https://www.openpolicyagent.org/
[15]: https://opensource.com/article/20/2/firewall-cheat-sheet
[16]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
[17]: https://opensource.com/downloads/firewall-cheat-sheet
[18]: https://opensource.com/article/19/3/backup-solutions


@ -0,0 +1,265 @@
[#]: subject: (Send and receive Gmail from the Linux command line)
[#]: via: (https://opensource.com/article/21/7/gmail-linux-terminal)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
在 Linux 命令行中收发 Gmail 邮件
======
即使你用的是诸如 Gmail 的托管邮件服务,你也可以通过 Mutt 在终端里收发电子邮件。
![woman on laptop sitting at the window][1]
我喜欢在 Linux 终端上读写电子邮件的便捷,因此我是 [Mutt][2] 这个轻量简洁的电子邮件客户端的忠实用户。对于电子邮件服务来说,不同的系统配置和网络接入并不会造成什么影响。这个客户端通常隐藏在我 Linux 终端的[某个标签页或者某个终端复用器的面板][3]上,需要用的时候随时可以调出来,不需要使用的时候放到后台,就不需要在桌面上一直放置一个电子邮件客户端的应用程序。
当今我们大多数人使用的都是托管电子邮件账号,在这种使用场景下,用户并不会与电子邮件协议发生过多的直接交互。而 Mutt以及更早的 ELM在创立之初实现的是对 `uucp` 的调用,以及对 `/var/mail` 的读取。当然 Mutt 也与时俱进,随着各种流行协议(如 POP、IMAP、LDAP的出现它都实现了良好的支持。因此即使我们使用的是 Gmail 这种邮件服务,也可以与 Mutt 无缝衔接。
在大多数情况下,用户都不会拥有自己的电子邮件服务器,同时大部分用户都会选择 Gmail因此下文会以 Mutt + Gmail 为例作介绍。如果你比较注重电子邮件隐私,不妨考虑 [ProtonMail][4] 或者 [Tutanota][5],它们都提供完全加密的电子邮件服务。其中 Tutanota 包含很多[开源组件][6],而 ProtonMail 则为付费用户提供 [IMAP 桥][7],简化了在非浏览器环境下的邮件访问。尽管如此,很多公司、学校和组织都没有自己的电子邮件服务,而是使用 Gmail 提供的邮件服务,这样一来,大部分用户都会有一个 Gmail 邮箱。
当然,如果你自己就[拥有电子邮件服务器][8],那么使用 Mutt 就更简单了。下面我们开始介绍。
### 安装 Mutt
在 Linux 系统上,一般可以直接从发行版提供的软件库中安装 Mutt另外需要在 home 目录中创建一个 `.mutt` 目录以存放配置文件:
```
$ sudo dnf install mutt
$ mkdir ~/.mutt
```
在 MacOS 上,可以通过 [MacPorts][9] 或者 [Homebrew][10] 安装;在 Windows 上则可以使用 [Chocolatey][11] 安装。
Mutt 是一个<ruby>邮件用户代理<rt>Mail User Agent</rt></ruby>MUA它的作用是读取、编写邮件并将待发送的邮件交给外部邮件池。在用户和邮件服务器之间实际传递邮件,则是其它邮件应用或邮件服务的工作。尽管它们可以和 Mutt 紧密协作,让我们看起来像是邮件应用实现了所有功能,但实际上并非如此。在弄懂了两者之间的区别之后,我们会对 Mutt 的配置更加清楚。
除了 Mutt 之外,我们还需要根据所使用的服务类型选择一些辅助应用程序。在本文中我使用的是 IMAP 服务,它可以让本地的电子邮件副本与电子邮件服务提供商那里的副本保持同步。如果你选择的是 POP 服务,配置会更简单,也无需依赖其它外部工具。要实现 IMAP 集成,我们需要 OfflineIMAP 这个 Python 应用程序,它可以从[它的 GitHub 存储库][12]获取。
OfflineIMAP 目前仍然在从 Python 2 向 Python 3 移植,现在需要手动安装,但以后你也可以通过 `python3 -m pip` 命令进行安装。
OfflineIMAP 依赖于 `imaplib2` 库,这个库也在开发当中,所以我更喜欢手动安装。同样地,也是通过 Git 将代码库克隆到本地,进入目录后使用 `pip` 安装。
首先安装 `rfc6555` 依赖:
```
$ python3 -m pip install --user rfc6555
```
然后从源码安装 `imaplib2`
```
$ git clone git@github.com:jazzband/imaplib2.git
$ pushd imaplib2
$ python3 -m pip install --upgrade --user .
$ popd
```
最后从源码安装 OfflineIMAP
```
$ git clone git@github.com:OfflineIMAP/offlineimap3.git
$ pushd offlineimap3
$ python3 -m pip install --upgrade --user .
$ popd
```
如果你使用的是 Windows 上的 Cygwin那么你还需要安装 [Portlocker][14]。
### 配置 OfflineIMAP
OfflineIMAP 默认使用 `~/.offlineimaprc` 这个配置文件,在它的代码库中会有一个名为 `offlineimap.conf` 的配置模板,可以直接将其复制到 home 目录下:
```
$ mv offlineimap3/offlineimap.conf ~/.offlineimaprc
```
你可以使用任何文本编辑器打开浏览这个配置文件,它的注释很完善,便于了解各个可用的配置项。
以下是我的 `.offlineimaprc` 配置文件,为了清晰起见,我把其中的注释去掉了。对于你来说其中有些配置项的值可能会略有不同,但或许会为你的配置带来一些启发:
```
[general]
ui = ttyui
accounts = %your-gmail-username%
pythonfile = ~/.mutt/password_prompt.py
fsync = False
[Account %your-gmail-username%]
localrepository = %your-gmail-username%-Local
remoterepository = %your-gmail-username%-Remote
status_backend = sqlite
postsynchook = notmuch new
[Repository %your-gmail-username%-Local]
type = Maildir
localfolders = ~/.mail/%your-gmail-username%-gmail.com
nametrans = lambda folder: {'drafts': '[Gmail]/Drafts',
'sent': '[Gmail]/Sent Mail',
'flagged': '[Gmail]/Starred',
'trash': '[Gmail]/Trash',
'archive': '[Gmail]/All Mail',
}.get(folder, folder)
[Repository %your-gmail-username%-Remote]
maxconnections = 1
type = Gmail
remoteuser = %your-gmail-username%@gmail.com
remotepasseval = '%your-gmail-API-password%'
## remotepasseval = get_api_pass()
sslcacertfile = /etc/ssl/certs/ca-bundle.crt
realdelete = no
nametrans = lambda folder: {'[Gmail]/Drafts': 'drafts',
'[Gmail]/Sent Mail': 'sent',
'[Gmail]/Starred': 'flagged',
'[Gmail]/Trash': 'trash',
'[Gmail]/All Mail': 'archive',
}.get(folder, folder)
folderfilter = lambda folder: folder not in ['[Gmail]/Trash',
'[Gmail]/Important',
'[Gmail]/Spam',
]
```
配置文件里有两个可以替换的值,分别是 `%your-gmail-username%``%your-gmail-API-password%`。其中第一个值需要替换为 Gmail 用户名,也就是邮件地址中 `@gmail.com` 左边的部分。而第二个值则需要通过双因素身份验证2FA后从 Google 获取。
### 为 Gmail 设置双因素身份验证2FA
Google 希望用户通过 Gmail 网站收发电子邮件,因此当你在 Gmail 网站以外操作电子邮件时,实际上是被 Google 作为“开发者”看待尽管你没有进行任何开发工作。也就是说Google 会认为你正在创建一个应用程序。要获得开发者层面的应用程序密码就必须设置双因素身份验证。完成了这个过程以后就可以获得一个应用程序密码Mutt 可以通过这个密码在浏览器以外的环境登录到你的电子邮箱中。
为了安全起见,你还可以在 Google 的[账号安全][15]页面中添加一个用于找回的电子邮件地址。
在账号安全页面中点击“两步验证”2-step Verification开始设置 2FA设置过程中需要用到一部手机。
激活 2FA 之后账号安全页面中会出现“应用程序密码”App Passwords选项点击它就可以为 Mutt 创建一个新的应用程序密码。在 Google 生成密码之后,用它替换 `.offlineimaprc` 配置文件中的 `%your-gmail-API-password%` 值。
直接将应用程序密码记录在 `.offlineimaprc` 文件中,这种以纯文本形式存储的做法有一定的风险。因为我的 home 目录是加密的,所以我长期以来都是这样做的。但出于安全考虑,我现在已经改为使用 GnuPG 加密应用程序密码,这部分内容不在本文的讨论范围,关于如何设置 GPG 密码集成,可以参考我的[另一篇文章][16]。
### 在 Gmail 启用 IMAP
在启用 IMAP 访问 Gmail 账户之后,你就可以不再打开 Gmail 网站了。
在 Gmail 网站页面中,点击右上角的齿轮图标,选择“查看所有设置”See all settings。在 Gmail 设置页面中点击“POP/IMAP”标签页选中“启用 IMAP”enable IMAP然后保存设置。
现在就可以在浏览器以外访问你的 Gmail 电子邮件了。
### 配置 Mutt
Mutt 的配置过程相对简单。和 [.bashrc][17]、[.zshrc][18]、.emacs 这些配置文件一样,网络上有很多优秀的 .muttrc 配置文件可供参照。我自己的 .muttrc 配置文件则借鉴了 [Kyle Rankin][19]、[Paul Frields][20] 等人的配置项和想法。下面列出我的配置文件的一些要点:
```
set ssl_starttls=yes
set ssl_force_tls=yes
set from='tux@example.com'
set realname='Tux Example'
set folder = imaps://imap.gmail.com/
set spoolfile = imaps://imap.gmail.com/INBOX
set postponed="imaps://imap.gmail.com/[Gmail]/Drafts"
set smtp_url="smtp://smtp.gmail.com:25"
set move = no
set imap_keepalive = 900
set record="imaps://imap.gmail.com/[Gmail]/Sent Mail"
# Paths
set folder = ~/.mail
set alias_file = ~/.mutt/alias
set header_cache = "~/.mutt/cache/headers"
set message_cachedir = "~/.mutt/cache/bodies"
set certificate_file = ~/.mutt/certificates
set mailcap_path = ~/.mutt/mailcap
set tmpdir = ~/.mutt/temp
set signature = ~/.mutt/sig
set sig_on_top = yes
# Basic Options
set wait_key = no
set mbox_type = Maildir
unset move # gmail does that
# Sidebar Patch
set sidebar_visible = yes
set sidebar_width = 16
color sidebar_new color221 color233
## Account Settings
# Default inbox
set spoolfile = "+example.com/INBOX"
# Mailboxes to show in the sidebar.
mailboxes +INBOX \
+sent \
+drafts
# Other special folder
set postponed = "+example.com/drafts"
# navigation
macro index gi "<change-folder>=example.com/INBOX<enter>" "Go to inbox"
macro index gt "<change-folder>=example.com/sent" "View sent"
```
整个配置文件基本是开箱即用的,只需要将其中的 `Tux Example``example.com` 替换为你的实际值,并将其保存为 `~/.mutt/muttrc` 就可以使用了。
### 启动 Mutt
在启动 Mutt 之前,需要先启动 `offlineimap` 将远程邮件服务器上的邮件同步到本地。在首次启动的时候耗时可能会比较长,只需要让它默默运行直到同步完成就可以了。
在同步完成后,启动 Mutt
```
$ mutt
```
Mutt 会询问你是否允许它创建用于管理电子邮件的目录,然后展示收件箱的视图。
![Mutt email client][22]
### 学习使用 Mutt
在学习使用 Mutt 的过程中,你可以找到最符合你使用习惯的 .muttrc 配置。例如我的 .muttrc 配置文件集成了使用 Emacs 编写邮件、使用 LDAP 搜索联系人、使用 GnuPG 对邮件进行加解密、链接获取、HTML 视图等等一系列功能。你可以让 Mutt 做到任何你想让它做到的事情,你越探索,就能发现越多。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/gmail-linux-terminal
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: http://www.mutt.org/
[3]: https://opensource.com/article/21/5/linux-terminal-multiplexer
[4]: https://protonmail.com
[5]: https://tutanota.com
[6]: https://github.com/tutao/tutanota
[7]: https://protonmail.com/bridge/
[8]: https://www.redhat.com/sysadmin/configuring-email-server
[9]: https://opensource.com/article/20/11/macports
[10]: https://opensource.com/article/20/6/homebrew-mac
[11]: https://opensource.com/article/20/3/chocolatey
[12]: https://github.com/OfflineIMAP/offlineimap3
[13]: mailto:git@github.com
[14]: https://pypi.org/project/portalocker
[15]: https://myaccount.google.com/security
[16]: https://opensource.com/article/21/6/enter-invisible-passwords-using-python-module
[17]: https://opensource.com/article/18/9/handy-bash-aliases
[18]: https://opensource.com/article/19/9/adding-plugins-zsh
[19]: https://twitter.com/kylerankin
[20]: https://twitter.com/stickster
[21]: mailto:tux@example.com
[22]: https://opensource.com/sites/default/files/mutt.png (Mutt email client)