commit 8955301bd6
published/20180528 What is behavior-driven Python.md
@ -0,0 +1,242 @@

什么是行为驱动的 Python?
======

> 使用 Python behave 框架的行为驱动开发模式可以帮助你的团队更好的协作和测试自动化。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)

您是否听说过<ruby>[行为驱动开发][1]<rt>behavior-driven development</rt></ruby>(BDD),并好奇这是个什么东西?也许你发现了团队成员在谈论“嫩瓜”(LCTT 译注:“<ruby>嫩瓜<rt>gherkin</rt></ruby>” 是一种简单的英语文本语言,工具 cucumber 通过解释它来执行测试脚本,见下文),而你却不知所云。或许你是一个 <ruby>Python 人<rt>Pythonista</rt></ruby>,正在寻找更好的方法来测试你的代码。 无论在什么情况下,了解 BDD 都可以帮助您和您的团队实现更好的协作和测试自动化,而 Python 的 [behave][21] 框架是一个很好的起点。

### 什么是 BDD?

在软件中,*行为*是指在明确定义的输入、动作和结果场景中功能是如何运转的。 产品可以表现出无数的行为,例如:

* 在网站上提交表单
* 搜索想要的结果
* 保存文档
* 进行 REST API 调用
* 运行命令行界面命令

根据产品的行为定义产品的功能可以更容易地描述产品,开发产品并对其进行测试。这是 BDD 的核心:使行为成为软件开发的焦点。在开发早期使用[示例规范][2]的语言来定义行为。最常见的行为规范语言之一是 [Gherkin][3],来自 [Cucumber][4] 项目的 Given-When-Then 场景格式。行为规范基本上是对行为如何工作的简单语言描述,具有一致性和聚焦点的一些正式结构。通过将步骤文本“粘合”到代码实现,测试框架可以轻松地自动化这些行为规范。

下面是用 Gherkin 编写的行为规范的示例:

```
Scenario: Basic DuckDuckGo Search
Given the DuckDuckGo home page is displayed
When the user searches for "panda"
Then results are shown for "panda"
```

快速浏览一下,行为是直观易懂的。 除少数关键字外,该语言为自由格式。 场景简洁而有意义。 一个真实的例子说明了这种行为。 步骤以声明的方式表明应该发生什么——而不会陷入“如何实现”的细节中。

[BDD 的主要优点][5]是良好的协作和自动化。 每个人都可以为行为开发做出贡献,而不仅仅是程序员。从流程开始就定义并理解预期的行为。测试可以与它们涵盖的功能一起自动化。每个测试都包含一个单一的、独特的行为,以避免重复。最后,现有的步骤可以通过新的行为规范重用,从而产生滚雪球效应。

### Python 的 behave 框架

behave 是 Python 中最流行的 BDD 框架之一。 它与其他基于 Gherkin 的 Cucumber 框架非常相似,尽管它并不是官方的 Cucumber 版本。 behave 有两个主要层:

1. 用 Gherkin 的 `.feature` 文件编写的行为规范
2. 用 Python 模块编写的步骤定义和钩子,用于实现 Gherkin 步骤

如上例所示,Gherkin 场景有三部分格式:

1. 鉴于(Given)一些初始状态
2. 每当(When)行为发生时
3. 然后(Then)验证结果

当 behave 运行测试时,每个步骤由装饰器“粘合”到 Python 函数。
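
例如,下面是一个最小的示意,展示上文 DuckDuckGo 场景中的第一个步骤如何“粘合”到 Python 函数(其中 `context.browser` 是为演示而假设的字段,如何创建它见下文“钩子”部分):

```
from behave import given

# 一个最小的示意:把 Gherkin 步骤文本绑定到 Python 函数
@given('the DuckDuckGo home page is displayed')
def step_impl(context):
    # 假设 context.browser 是一个 Selenium WebDriver 实例
    context.browser.get('https://duckduckgo.com')
```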

### 安装

作为先决条件,请确保在你的计算机上安装了 Python 和 `pip`。 我强烈建议使用 Python 3。(我还建议使用 [pipenv][6],但以下示例命令使用更基本的 `pip`。)

behave 框架只需要一个包:

```
pip install behave
```

其他包也可能有用,例如:

```
pip install requests # 用于调用 REST API
pip install selenium # 用于 web 浏览器交互
```

GitHub 上的 [behavior-driven-Python][7] 项目包含本文中使用的示例。

### Gherkin 特点

behave 框架使用的 Gherkin 语法实际上是符合官方的 Cucumber Gherkin 标准的。`.feature` 文件包含一个功能(`Feature`)部分,其中又包含带有 Given-When-Then 步骤的一个或多个场景(`Scenario`)部分。 以下是一个例子:

```
Feature: Cucumber Basket
As a gardener,
I want to carry many cucumbers in a basket,
So that I don’t drop them all.

@cucumber-basket
Scenario: Add and remove cucumbers
Given the basket is empty
When "4" cucumbers are added to the basket
And "6" more cucumbers are added to the basket
But "3" cucumbers are removed from the basket
Then the basket contains "7" cucumbers
```

这里有一些重要的事情需要注意:

- `Feature` 和 `Scenario` 部分都有[简短的描述性标题][8]。
- 紧跟在 `Feature` 标题后面的行是会被 behave 框架忽略掉的注释。将功能描述放在那里是一种很好的做法。
- `Scenario` 和 `Feature` 可以有标签(注意 `@cucumber-basket` 标记)用于钩子和过滤(如下所述)。
- 步骤都遵循[严格的 Given-When-Then 顺序][9]。
- 使用 `And` 和 `But` 可以为任何类型添加附加步骤。
- 可以使用输入对步骤进行参数化——注意双引号里的值。

通过使用场景大纲(`Scenario Outline`),场景也可以写为具有多个输入组合的模板:

```
Feature: Cucumber Basket

@cucumber-basket
Scenario Outline: Add cucumbers
Given the basket has "<initial>" cucumbers
When "<more>" cucumbers are added to the basket
Then the basket contains "<total>" cucumbers

Examples: Cucumber Counts
| initial | more | total |
| 0 | 1 | 1 |
| 1 | 2 | 3 |
| 5 | 4 | 9 |
```

场景大纲总是有一个示例(`Examples`)表,其中第一行给出列标题,后续每一行给出一个输入组合。 只要列标题出现在由尖括号括起的步骤中,行值就会被替换。 在上面的示例中,场景将运行三次,因为有三行输入组合。 场景大纲是避免重复场景的好方法。

Gherkin 语言还有其他元素,但这些是主要的机制。 想了解更多信息,请阅读 Automation Panda 这个网站的文章 [Gherkin by Example][10] 和 [Writing Good Gherkin][11]。

### Python 机制

每个 Gherkin 步骤必须“粘合”到步骤定义——即提供了实现的 Python 函数。 每个函数都有一个带有匹配字符串的步骤类型装饰器。它还接收共享的上下文和任何步骤参数。功能文件必须放在名为 `features/` 的目录中,而步骤定义模块必须放在名为 `features/steps/` 的目录中。 任何功能文件都可以使用任何模块中的步骤定义——它们不需要具有相同的名称。 下面是一个示例 Python 模块,其中包含 cucumber basket 功能的步骤定义。

```
from behave import *
from cucumbers.basket import CucumberBasket

@given('the basket has "{initial:d}" cucumbers')
def step_impl(context, initial):
    context.basket = CucumberBasket(initial_count=initial)

@when('"{some:d}" cucumbers are added to the basket')
def step_impl(context, some):
    context.basket.add(some)

@then('the basket contains "{total:d}" cucumbers')
def step_impl(context, total):
    assert context.basket.count == total
```

可以使用三个[步骤匹配器][12]:`parse`、`cfparse` 和 `re`。默认的,也是最简单的匹配器是 `parse`,如上例所示。注意如何解析参数化值并将其作为输入参数传递给函数。一个常见的最佳实践是在步骤中给参数加双引号。
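
如果需要正则表达式的能力,可以用 behave 提供的 `use_step_matcher` 切换匹配器。下面是一个简单的示意(改写自上文的 cucumber basket 步骤,仅作演示):

```
from behave import use_step_matcher, then

# 切换到 "re" 匹配器;默认为 "parse"
use_step_matcher("re")

@then(r'the basket contains "(?P<total>\d+)" cucumbers')
def step_impl(context, total):
    # 正则捕获到的参数都是字符串,需要自行做类型转换
    assert context.basket.count == int(total)
```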
|
||||
|
||||
每个步骤定义函数还接收一个[上下文][13]变量,该变量保存当前正在运行的场景的数据,例如 `feature`、`scenario` 和 `tags` 字段。也可以添加自定义字段,用于在步骤之间共享数据。始终使用上下文来共享数据——永远不要使用全局变量!
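
例如,一个步骤可以把数据写入 `context`,供之后的步骤读取(下面的 `search_phrase` 字段是为演示而假设的自定义字段,省略了真实的页面断言):

```
from behave import when, then

@when('the user searches for "{phrase}"')
def step_impl(context, phrase):
    # 将本步骤的输入存入 context,供后续步骤共享
    context.search_phrase = phrase

@then('results are shown for "{phrase}"')
def step_impl(context, phrase):
    # 后续步骤可以直接读取之前存入的字段
    assert context.search_phrase == phrase
```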

behave 框架还支持[钩子][14]来处理 Gherkin 步骤之外的自动化问题。钩子是一个将在步骤、场景、功能或整个测试套件之前或之后运行的功能。钩子让人联想到[面向方面的编程][15]。它们应放在 `features/` 目录下的特殊 `environment.py` 文件中。钩子函数也可以检查当前场景的标签,因此可以有选择地应用逻辑。下面的示例显示了如何使用钩子为标记为 `@web` 的任何场景生成和销毁一个 Selenium WebDriver 实例。

```
from selenium import webdriver

def before_scenario(context, scenario):
    if 'web' in context.tags:
        context.browser = webdriver.Firefox()
        context.browser.implicitly_wait(10)

def after_scenario(context, scenario):
    if 'web' in context.tags:
        context.browser.quit()
```

注意:也可以使用 [fixtures][16] 进行构建和清理。
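
下面是一个参考官方文档模式写的 fixture 示意(其中 `fixture.browser` 这个标签名只是示例约定,并非文中项目的实际用法):

```
from behave import fixture, use_fixture
from selenium import webdriver

@fixture
def browser(context):
    # yield 之前的部分在场景开始前运行(构建)
    context.browser = webdriver.Firefox()
    yield context.browser
    # yield 之后的部分在场景结束后运行(清理)
    context.browser.quit()

# 放在 environment.py 中:
def before_tag(context, tag):
    if tag == 'fixture.browser':
        use_fixture(browser, context)
```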

要了解一个 behave 项目应该是什么样子,这里是示例项目的目录结构:

![](https://opensource.com/sites/default/files/uploads/behave_dir_layout.png)

任何 Python 包和自定义模块都可以与 behave 框架一起使用。 使用良好的设计模式构建可扩展的测试自动化解决方案。步骤定义代码应简明扼要。

### 运行测试

要从命令行运行测试,请切换到项目的根目录并运行 behave 命令。 使用 `--help` 选项查看所有可用选项。

以下是一些常见用例:

```
# run all tests
behave

# run the scenarios in a feature file
behave features/web.feature

# run all tests that have the @duckduckgo tag
behave --tags @duckduckgo

# run all tests that do not have the @unit tag
behave --tags ~@unit

# run all tests that have @basket and either @add or @remove
behave --tags @basket --tags @add,@remove
```

为方便起见,选项可以保存在 [config][17] 文件中。
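
例如,可以在项目根目录放一个 `behave.ini`,把常用选项写进去。下面是一个假设性的配置示意(具体支持哪些选项名,请以 behave 官方文档为准):

```
[behave]
# 假设性示例:默认过滤掉 @unit 标签的测试,并使用 pretty 格式输出
format = pretty
tags = ~@unit
```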

### 其他选择

behave 不是 Python 中唯一的 BDD 测试框架。其他好的框架包括:

- pytest-bdd 是 pytest 的插件。和 behave 一样,它使用 Gherkin 功能文件和步骤定义模块,但它也利用了 pytest 的所有功能和插件。例如,它可以使用 pytest-xdist 并行运行 Gherkin 场景。BDD 和非 BDD 测试也可以与相同的过滤器一起执行。pytest-bdd 还提供更灵活的目录布局。
- radish 是一个 “Gherkin 增强版”框架——它将场景循环和前提条件添加到标准的 Gherkin 语言中,这使得它对程序员更友好。它还像 behave 一样提供了丰富的命令行选项。
- lettuce 是一种较旧的 BDD 框架,与 behave 非常相似,在框架机制方面存在细微差别。然而,GitHub 最近显示该项目的活动很少(截至 2018 年 5 月)。

任何这些框架都是不错的选择。

另外,请记住,Python 测试框架可用于任何黑盒测试,即使对于非 Python 产品也是如此! BDD 框架非常适合 Web 和服务测试,因为它们的测试是声明性的,而 Python 是一种[很好的测试自动化语言][18]。

本文基于作者的 [PyCon Cleveland 2018][19] 演讲“[行为驱动的 Python][20]”。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/behavior-driven-python

作者:[Andrew Knight][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/andylpk247
[1]:https://automationpanda.com/bdd/
[2]:https://en.wikipedia.org/wiki/Specification_by_example
[3]:https://automationpanda.com/2017/01/26/bdd-101-the-gherkin-language/
[4]:https://cucumber.io/
[5]:https://automationpanda.com/2017/02/13/12-awesome-benefits-of-bdd/
[6]:https://docs.pipenv.org/
[7]:https://github.com/AndyLPK247/behavior-driven-python
[8]:https://automationpanda.com/2018/01/31/good-gherkin-scenario-titles/
[9]:https://automationpanda.com/2018/02/03/are-gherkin-scenarios-with-multiple-when-then-pairs-okay/
[10]:https://automationpanda.com/2017/01/27/bdd-101-gherkin-by-example/
[11]:https://automationpanda.com/2017/01/30/bdd-101-writing-good-gherkin/
[12]:http://behave.readthedocs.io/en/latest/api.html#step-parameters
[13]:http://behave.readthedocs.io/en/latest/api.html#detecting-that-user-code-overwrites-behave-context-attributes
[14]:http://behave.readthedocs.io/en/latest/api.html#environment-file-functions
[15]:https://en.wikipedia.org/wiki/Aspect-oriented_programming
[16]:http://behave.readthedocs.io/en/latest/api.html#fixtures
[17]:http://behave.readthedocs.io/en/latest/behave.html#configuration-files
[18]:https://automationpanda.com/2017/01/21/the-best-programming-language-for-test-automation/
[19]:https://us.pycon.org/2018/
[20]:https://us.pycon.org/2018/schedule/presentation/87/
[21]:https://behave.readthedocs.io/en/latest/

@ -3,7 +3,7 @@

![](https://www.ostechnix.com/wp-content/uploads/2016/11/kvm-720x340.jpg)

我们已经讲解了 [在 Ubuntu 18.04 无头服务器上配置 Oracle VirtualBox][1] 。在本教程中,我们将讨论如何使用 **KVM** 去配置无头虚拟化服务器,以及如何从一个远程客户端去管理访客系统。正如你所知道的,KVM(**K**ernel-based **v**irtual **m**achine)是开源的,是 Linux 上的全虚拟化。使用 KVM,我们可以在几分钟之内,很轻松地将任意 Linux 服务器转换到一个完全的虚拟化环境中,以及部署不同种类的虚拟机,比如 GNU/Linux、*BSD、Windows 等等。

### 使用 KVM 配置无头虚拟化服务器

@ -13,86 +13,81 @@

**KVM 虚拟化服务器:**

* **宿主机操作系统** – 最小化安装的 Ubuntu 18.04 LTS(没有 GUI)
* **宿主机操作系统的 IP 地址**:192.168.225.22/24
* **访客操作系统**(它将运行在 Ubuntu 18.04 的宿主机上):Ubuntu 16.04 LTS server

**远程桌面客户端:**

* **操作系统** – Arch Linux

### 安装 KVM

首先,我们先检查一下我们的系统是否支持硬件虚拟化。为此,需要在终端中运行如下的命令:

```
$ egrep -c '(vmx|svm)' /proc/cpuinfo
```

假如结果是 `zero (0)`,说明系统不支持硬件虚拟化,或者在 BIOS 中禁用了虚拟化。进入你的系统 BIOS 并检查虚拟化选项,然后启用它。

假如结果是 `1` 或者更大的数,说明系统将支持硬件虚拟化。然而,在你运行上面的命令之前,你需要始终保持 BIOS 中的虚拟化选项是启用的。

或者,你也可以使用如下的命令去验证它。但是为了使用这个命令你需要先安装 KVM。

```
$ kvm-ok
```

示例输出:

```
INFO: /dev/kvm exists
KVM acceleration can be used
```

如果输出的是如下这样的错误,你仍然可以在 KVM 中运行访客虚拟机,但是它的性能将非常差。

```
INFO: Your CPU does not support KVM extensions
INFO: For more detailed results, you should run this as root
HINT: sudo /usr/sbin/kvm-ok
```

当然,还有其它的方法来检查你的 CPU 是否支持虚拟化。更多信息参考接下来的指南。

- [如何知道 CPU 是否支持虚拟技术(VT)](https://www.ostechnix.com/how-to-find-if-a-cpu-supports-virtualization-technology-vt/)

接下来,安装 KVM 和在 Linux 中配置虚拟化环境所需要的其它包。

在 Ubuntu 和其它基于 DEB 的系统上,运行如下命令:

```
$ sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker
```

KVM 安装完成后,启动 libvirtd 服务(如果它没有启动的话):

```
$ sudo systemctl enable libvirtd
$ sudo systemctl start libvirtd
```

### 创建虚拟机

所有的虚拟机文件和其它的相关文件都保存在 `/var/lib/libvirt/` 下。ISO 镜像的默认路径是 `/var/lib/libvirt/boot/`。

首先,我们先检查一下是否有虚拟机。查看可用的虚拟机列表,运行如下的命令:

```
$ sudo virsh list --all
```

示例输出:

```
Id Name State
----------------------------------------------------
```

![][3]

@ -102,14 +97,14 @@ Id Name State

现在,我们来创建一个。

例如,我们来创建一个有 512 MB 内存、1 个 CPU 核心、8 GB 硬盘的 Ubuntu 16.04 虚拟机。

```
$ sudo virt-install --name Ubuntu-16.04 --ram=512 --vcpus=1 --cpu host --hvm --disk path=/var/lib/libvirt/images/ubuntu-16.04-vm1,size=8 --cdrom /var/lib/libvirt/boot/ubuntu-16.04-server-amd64.iso --graphics vnc
```

请确保在路径 `/var/lib/libvirt/boot/` 中有一个 Ubuntu 16.04 的 ISO 镜像文件,或者在上面命令中给定的其它路径中有相应的镜像文件。

示例输出:

```
WARNING Graphics requested but DISPLAY is not set. Not running virt-viewer.
```

@ -121,37 +116,38 @@ Domain installation still in progress. Waiting for installation to complete.

```
Domain has shutdown. Continuing.
Domain creation completed.
Restarting guest.
```

![][4]

我们来分别讲解一下以上命令中每个选项的作用。

* `--name`:这个选项定义虚拟机名字。在我们的案例中,这个虚拟机的名字是 `Ubuntu-16.04`。
* `--ram=512`:给虚拟机分配 512MB 内存。
* `--vcpus=1`:指明虚拟机中 CPU 核心的数量。
* `--cpu host`:通过暴露宿主机 CPU 的配置给访客系统来优化 CPU 属性。
* `--hvm`:要求完整的硬件虚拟化。
* `--disk path`:虚拟机硬盘的位置和大小。在我们的示例中,我分配了 8GB 的硬盘。
* `--cdrom`:安装 ISO 镜像的位置。请注意你必须在这个位置真的有一个 ISO 镜像。
* `--graphics vnc`:允许 VNC 从远程客户端访问虚拟机。

### 使用 VNC 客户端访问虚拟机

现在,我们在远程桌面系统上使用 SSH 登入到 Ubuntu 服务器上(虚拟化服务器),如下所示。

```
$ ssh sk@192.168.225.22
```

在这里,`sk` 是我的 Ubuntu 服务器的用户名,而 `192.168.225.22` 是它的 IP 地址。

运行如下的命令找出 VNC 的端口号。我们从一个远程系统上访问虚拟机需要它。

```
$ sudo virsh dumpxml Ubuntu-16.04 | grep vnc
```

示例输出:

```
<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
```

@ -160,10 +156,10 @@ $ sudo virsh dumpxml Ubuntu-16.04 | grep vnc

![][5]

记下那个端口号 `5900`。安装任意的 VNC 客户端应用程序。在本指南中,我们将使用 TigerVNC。TigerVNC 是 Arch Linux 默认仓库中可用的客户端。在 Arch 上安装它,运行如下命令:

```
$ sudo pacman -S tigervnc
```

在安装有 VNC 客户端的远程客户端系统上输入如下的 SSH 端口转发命令。

@ -172,11 +168,11 @@ $ sudo pacman -S tigervnc

```
$ ssh sk@192.168.225.22 -L 5900:127.0.0.1:5900
```

再强调一次,`192.168.225.22` 是我的 Ubuntu 服务器(虚拟化服务器)的 IP 地址。

然后,从你的 Arch Linux(客户端)打开 VNC 客户端。

在 VNC 服务器框中输入 `localhost:5900`,然后点击 “Connect” 按钮。

![][6]

@ -188,43 +184,42 @@ $ ssh sk@192.168.225.22 -L 5900:127.0.0.1:5900

同样的,你可以根据你的服务器的硬件情况配置多个虚拟机。

或者,你可以使用 `virt-viewer` 实用程序在访客机器中安装操作系统。`virt-viewer` 在大多数 Linux 发行版的默认仓库中都可以找到。安装完 `virt-viewer` 之后,运行下列的命令去建立到虚拟机的访问连接。

```
$ sudo virt-viewer --connect=qemu+ssh://192.168.225.22/system --name Ubuntu-16.04
```

### 管理虚拟机

使用管理用户接口 `virsh` 从命令行去管理虚拟机是非常有趣的。命令非常容易记。我们来看一些例子。

查看运行的虚拟机,运行如下命令:

```
$ sudo virsh list
```

或者,

```
$ sudo virsh list --all
```

示例输出:

```
Id Name State
----------------------------------------------------
2 Ubuntu-16.04 running
```

![][9]

启动一个虚拟机,运行如下命令:

```
$ sudo virsh start Ubuntu-16.04
```

或者,也可以使用虚拟机 id 去启动它。

@ -232,94 +227,85 @@ $ sudo virsh start Ubuntu-16.04

![][10]

正如在上面的截图所看到的,Ubuntu 16.04 虚拟机的 Id 是 2。因此,启动它时,你也可以像下面一样只指定它的 ID。

```
$ sudo virsh start 2
```

重启一个虚拟机,运行如下命令:

```
$ sudo virsh reboot Ubuntu-16.04
```

示例输出:

```
Domain Ubuntu-16.04 is being rebooted
```

![][11]

暂停一个运行中的虚拟机,运行如下命令:

```
$ sudo virsh suspend Ubuntu-16.04
```

示例输出:

```
Domain Ubuntu-16.04 suspended
```

让一个暂停的虚拟机重新运行,运行如下命令:

```
$ sudo virsh resume Ubuntu-16.04
```

示例输出:

```
Domain Ubuntu-16.04 resumed
```

关闭一个虚拟机,运行如下命令:

```
$ sudo virsh shutdown Ubuntu-16.04
```

示例输出:

```
Domain Ubuntu-16.04 is being shutdown
```

完全移除一个虚拟机,运行如下的命令:

```
$ sudo virsh undefine Ubuntu-16.04
$ sudo virsh destroy Ubuntu-16.04
```

示例输出:

```
Domain Ubuntu-16.04 destroyed
```

![][12]

关于它的更多选项,建议你去查看 man 手册页:

```
$ man virsh
```
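
另外,如果你更喜欢以编程的方式管理虚拟机,libvirt 也提供了 Python 绑定(通常由 libvirt-python 或 python3-libvirt 软件包提供)。下面是一个简单的示意,假设本机已按上文配置好 libvirt:

```
import libvirt

# 连接到本机的 QEMU/KVM 服务,等价于 virsh 默认连接的 qemu:///system
conn = libvirt.open('qemu:///system')

# 列出所有虚拟机及其运行状态,效果类似 `sudo virsh list --all`
for dom in conn.listAllDomains():
    state = 'running' if dom.isActive() else 'shut off'
    print(dom.name(), state)

conn.close()
```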

今天就到这里吧。开始在你的新的虚拟化环境中玩吧。对于研究、开发和测试目的,KVM 虚拟化将是很好的选择,但它能做的远不止这些。如果你有充足的硬件资源,你可以将它用于大型的生产环境中。如果你还有其它好玩的发现,不要忘记在下面的评论区留下你的高见。

谢谢!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/setup-headless-virtualization-server-using-kvm-ubuntu/

@ -327,7 +313,7 @@ via: https://www.ostechnix.com/setup-headless-virtualization-server-using-kvm-ub

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -128,7 +128,7 @@ CPU 任务优先级或类型:

* 蓝色:低优先级
* 绿色:正常优先级
* 红色:内核任务
* 蓝绿色:虚拟任务
* 条状图末尾的值是已用 CPU 的百分比

内存:

@ -3,33 +3,38 @@

![](https://www.ostechnix.com/wp-content/uploads/2017/09/Lock-The-Keyboard-And-Mouse-720x340.jpg)

我四岁的侄女是个好奇的孩子,她非常喜爱“阿凡达”电影,当阿凡达电影在播放时,她是如此的专注,好似眼睛粘在了屏幕上。但问题是当她观看电影时,她经常会碰到键盘上的某个键或者移动了鼠标,又或者是点击了鼠标的按钮。有时她非常意外地按了键盘上的某个键,从而将电影关闭或者暂停了。所以我就想找个方法来将键盘和鼠标都锁住,但屏幕不会被锁住。幸运的是,我在 Ubuntu 论坛上找到了一个完美的解决方法。假如在你正看着屏幕上的某些重要的事情时,你不想让你的小猫或者小狗在你的键盘上行走,或者让你的孩子在键盘上瞎搞一气,那我建议你试试 **xtrlock** 这个工具。它很简单但非常实用,你可以锁定屏幕的显示直到用户在键盘上输入自己设定的密码(LCTT 译注:就是用户自己的密码,例如用来打开屏保的那个密码,不需要单独设定)。在这篇简单的教程中,我将为你展示如何在 Linux 下锁住键盘和鼠标,而不锁掉屏幕。这个技巧几乎可以在所有的 Linux 操作系统中生效。

### 安装 xtrlock

xtrlock 软件包在大多数 Linux 操作系统的默认软件仓库中都可以获取到。所以你可以使用你安装的发行版的包管理器来安装它。

在 **Arch Linux** 及其衍生发行版中,运行下面的命令来安装它:

```
$ sudo pacman -S xtrlock
```

在 **Fedora** 上使用:

```
$ sudo dnf install xtrlock
```

在 **RHEL、CentOS** 上使用:

```
$ sudo yum install xtrlock
```

在 **SUSE/openSUSE** 上使用:

```
$ sudo zypper install xtrlock
```

在 **Debian、Ubuntu、Linux Mint** 上使用:

```
$ sudo apt-get install xtrlock
```

@ -38,41 +43,50 @@ $ sudo apt-get install xtrlock

安装好 xtrlock 后,你需要根据你的选择来创建一个快捷键,通过这个快捷键来锁住键盘和鼠标。

(LCTT 译注:译者在自己的系统(Arch + Deepin)中发现这里的到下面创建快捷键的部分可以不必做,依然生效。)

在 `/usr/local/bin` 目录下创建一个名为 `lockkbmouse` 的新文件:

```
$ sudo vi /usr/local/bin/lockkbmouse
```

然后将下面的命令添加到这个文件中:

```
#!/bin/bash
sleep 1 && xtrlock
```

保存并关闭这个文件。

然后使用下面的命令来使得它可以被执行:

```
$ sudo chmod a+x /usr/local/bin/lockkbmouse
```

接着,我们就需要创建快捷键了。

#### 创建快捷键

**在 Arch Linux MATE 桌面中**

依次点击 “System -> Preferences -> Hardware -> keyboard Shortcuts”

然后点击 “Add” 来创建快捷键。

![][2]

首先键入你的这个快捷键的名称,然后将下面的命令填入命令框中,最后点击 “Apply” 按钮。

```
bash -c "sleep 1 && xtrlock"
```

![][3]

为了能够给这个快捷键赋予快捷方式,需要选中它或者双击它然后输入你选定的快捷键组合,例如我使用 `Alt+k` 这组快捷键。

![][4]

@ -80,16 +94,17 @@ bash -c "sleep 1 && xtrlock"

**在 Ubuntu GNOME 桌面中**

依次进入 “System Settings -> Devices -> Keyboard”,然后点击 “+” 这个符号。

键入你快捷键的名称并将下面的命令加到命令框里面,然后点击 “Add” 按钮。

```
bash -c "sleep 1 && xtrlock"
```

![][5]

接下来为这个新建的快捷键赋予快捷方式。我们只需要选择或者双击 “Set shortcut” 这个按钮就可以了。

![][6]

@ -97,7 +112,7 @@ bash -c "sleep 1 && xtrlock"

![][7]

输入你选定的快捷键组合,例如我使用 `Alt+k`。

![][8]

@ -113,23 +128,26 @@ bash -c "sleep 1 && xtrlock"

### 将键盘和鼠标解锁

要将键盘和鼠标解锁,只需要输入你的密码然后敲击回车键就可以了,在输入的过程中你将看不到密码。只需要输入然后敲回车键就可以了。在你输入了正确的密码后,鼠标和键盘就可以再工作了。假如你输入了一个错误的密码,你将听到警告声。按 `ESC` 来清除输入的错误密码,然后重新输入正确的密码。要去掉未完全输入完的密码中的一个字符,只需要按 `BACKSPACE` 或者 `DELETE` 键就可以了。

### 要是我被永久地锁住了怎么办?

以防你被永久地锁定了屏幕,切换至一个 TTY(例如 `CTRL+ALT+F2`)然后运行:

```
$ sudo killall xtrlock
```

或者你还可以使用 `chvt` 命令来在 TTY 和 X 会话之间切换。

例如,如果要切换到 TTY1,则运行:

```
$ sudo chvt 1
```

要切换回 X 会话,则键入:

```
$ sudo chvt 7
```

@ -137,6 +155,7 @@ $ sudo chvt 7

不同的发行版使用了不同的快捷键组合来在不同的 TTY 间切换。请参考你安装的对应发行版的官方网站了解更多详情。

如果想知道更多 xtrlock 的信息,请参考 man 页:

```
$ man xtrlock
```

@ -145,7 +164,7 @@ $ man xtrlock

**资源:**

* [**Ubuntu 论坛**][10]

--------------------------------------------------------------------------------

@ -154,7 +173,7 @@ via: https://www.ostechnix.com/lock-keyboard-mouse-not-screen-linux/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -167,5 +186,5 @@ via: https://www.ostechnix.com/lock-keyboard-mouse-not-screen-linux/

[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-1.png
[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-2.png
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/set-shortcut-key-3.png
[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/xtrclock-1.png
[10]:https://ubuntuforums.org/showthread.php?t=993800
@ -1,10 +1,11 @@

如何在 Linux 中列出可用的软件包组
======

我们知道,如果想要在 Linux 中安装软件包,可以使用软件包管理器来进行安装。由于系统管理员需要频繁用到软件包管理器,所以它是 Linux 当中的一个重要工具。

但是如果想一次性安装一个软件包组,在 Linux 中有可能吗?又如何通过命令去实现呢?

在 Linux 中确实可以用软件包管理器来达到这样的目的。很多软件包管理器都有这样的选项来实现这个功能,但就我所知,`apt` 或 `apt-get` 软件包管理器却并没有这个选项。因此对基于 Debian 的系统,需要使用的命令是 `tasksel`,而不是 `apt` 或 `apt-get` 这样的官方软件包管理器。

在 Linux 中安装软件包组有很多好处。对于 LAMP 来说,安装过程会包含多个软件包,但如果用安装软件包组的命令来安装,只需要安装一个包就可以了。

@ -13,19 +14,20 @@

软件包组是一组用于公共功能的软件包,包括系统工具、声音和视频。 安装软件包组的过程中,会获取到一系列的依赖包,从而大大节省了时间。

**推荐阅读:**

- [如何在 Linux 上按照大小列出已安装的软件包][1]
- [如何在 Linux 上查看/列出可用的软件包更新][2]
- [如何在 Linux 上查看软件包的安装/更新/升级/移除/卸载时间][3]
- [如何在 Linux 上查看一个软件包的详细信息][4]
- [如何查看一个软件包是否在你的 Linux 发行版上可用][5]
- [萌新指导:一个可视化的 Linux 包管理工具][6]
- [老手必会:命令行软件包管理器的用法][7]

### 如何在 CentOS/RHEL 系统上列出可用的软件包组

RHEL 和 CentOS 系统使用的是 RPM 软件包,因此可以使用 `yum` 软件包管理器来获取相关的软件包信息。

`yum` 是 “Yellowdog Updater, Modified” 的缩写,它是一个用于基于 RPM 系统(例如 RHEL 和 CentOS)的,开源的命令行软件包管理工具。它是从发行版仓库或其它第三方库中获取、安装、删除、查询和管理 RPM 包的主要工具。

**推荐阅读:** [使用 yum 命令在 RHEL/CentOS 系统上管理软件包][8]

@ -69,10 +71,9 @@ Available Language Groups:

```
.
.
Done
```

如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “Performance Tools” 组相关联的软件包。

```
# yum groupinfo "Performance Tools"
```

@ -103,18 +104,17 @@ Group: Performance Tools

```
tiobench
tuned
tuned-utils
```

### 如何在 Fedora 系统上列出可用的软件包组

Fedora 系统使用的是 DNF 软件包管理器,因此可以通过 DNF 软件包管理器来获取相关的信息。

DNF 的含义是 “Dandified yum”。DNF 软件包管理器是 YUM 软件包管理器的一个分支,它使用 hawkey/libsolv 库作为后端。从 Fedora 18 开始,Aleš Kozumplík 开始着手 DNF 的开发,直到在 Fedora 22 开始加入到系统中。

`dnf` 命令可以在 Fedora 22 及更高版本上安装、更新、搜索和删除软件包,它可以自动解决软件包的依赖关系并顺利安装,不会产生问题。

YUM 被 DNF 取代是由于 YUM 中存在一些长期未被解决的问题。为什么 Aleš Kozumplík 没有对 yum 的这些问题作出修补呢?他认为这在技术上难以通过补丁解决,而且 YUM 团队也不会马上接受这些更改,此外还有其它一些重要的问题。而且 YUM 的代码量有 5.6 万行,而 DNF 只有 2.9 万行。因此已经不需要沿着 YUM 的方向继续开发了,重新开一个分支才是更好的选择。

**推荐阅读:** [在 Fedora 系统上使用 DNF 命令管理软件包][9]

@ -167,13 +167,11 @@ Available Groups:

```
Hardware Support
Sound and Video
System Tools
```

如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “Editor” 组相关联的软件包。

```
# dnf groupinfo Editors
Last metadata expiration check: 0:04:57 ago on Sun 09 Sep 2018 07:10:36 PM IST.
```

@ -267,7 +265,7 @@ i | yast2_basis | 20150918-25.1 | @System |

```
| yast2_install_wf | 20150918-25.1 | Main Repository (OSS) |
```

如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “file_server” 组相关联的软件包。另外 `zypper` 还允许用户使用不同的选项执行相同的操作。

```
# zypper info file_server
```

@ -346,7 +344,7 @@ Contents :

```
| yast2-tftp-server | package | Recommended
```

如果需要列出相关联的软件包,也可以执行以下这个命令。

```
# zypper info pattern file_server
```

@ -385,7 +383,7 @@ Contents :

```
| yast2-tftp-server | package | Recommended
```

如果需要列出相关联的软件包,也可以执行以下这个命令。

```
# zypper info -t pattern file_server
```

@ -431,7 +429,7 @@

[tasksel][11] 是 Debian/Ubuntu 系统上一个很方便的工具,只需要很少的操作就可以用它来安装好一组软件包。可以在 `/usr/share/tasksel` 目录下的 `.desc` 文件中安排软件包的安装任务。

默认情况下,`tasksel` 工具是作为 Debian 系统的一部分安装的,但桌面版 Ubuntu 则没有自带 `tasksel`,这个功能类似软件包管理器中的元包(meta-packages)。

`tasksel` 工具带有一个基于 zenity 的简单用户界面,例如命令行中的弹出图形对话框。

@ -483,7 +481,7 @@ u openssh-server OpenSSH server

```
u server Basic Ubuntu server
```

如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “lamp-server” 组相关联的软件包。

```
# tasksel --task-desc "lamp-server"
```

@ -494,7 +492,7 @@ Selects a ready-made Linux/Apache/MySQL/PHP server.

基于 Arch Linux 的系统使用的是 pacman 软件包管理器,因此可以通过 pacman 软件包管理器来获取相关的信息。

pacman 是 “package manager” 的缩写。`pacman` 可以用于安装、构建、删除和管理 Arch Linux 软件包。`pacman` 使用 libalpm(Arch Linux Package Management 库,ALPM)作为后端来执行所有操作。

**推荐阅读:** [使用 pacman 在基于 Arch Linux 的系统上管理软件包][13]

@ -536,10 +534,9 @@ realtime

```
sugar-fructose
tesseract-data
vim-plugins
```

如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “gnome” 组相关联的软件包。

```
# pacman -Sg gnome
```

@ -603,7 +600,6 @@ Interrupt signal received

```
# pacman -Sg gnome | wc -l
64
```

--------------------------------------------------------------------------------

@ -613,7 +609,7 @@ via: https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/

作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[HankChow](https://github.com/HankChow)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,133 @@

Linux vs Mac:Linux 比 Mac 好的 7 个原因
======

最近我们谈论了一些[为什么 Linux 比 Windows 好][1]的原因。毫无疑问,Linux 是个非常优秀的平台。但是它和其它操作系统一样也会有缺点。对于某些专门的领域,像是游戏,Windows 当然更好。而对于视频编辑等任务,Mac 系统可能更为方便。这一切都取决于你的偏好,以及你想用你的系统做些什么。在这篇文章中,我们将会介绍一些 Linux 相对于 Mac 更好的一些地方。

如果你已经在用 Mac 或者打算买一台 Mac 电脑,我们建议你仔细考虑一下,看看是改为使用 Linux 还是继续使用 Mac。

### Linux 比 Mac 好的 7 个原因

![Linux vs Mac:为什么 Linux 更好][2]

Linux 和 macOS 都是类 Unix 操作系统,并且都支持 Unix 命令、bash 和其它 shell,相比于 Windows,它们所支持的应用和游戏比较少。但相似之处也就仅此而已。

平面设计师和视频剪辑师更加倾向于使用 Mac 系统,而 Linux 更加适合做开发、系统管理、运维的工程师。

那要不要使用 Linux 呢,为什么要选择 Linux 呢?下面是根据实际经验和理性分析给出的一些建议。

#### 1、价格

![Linux vs Mac:为什么 Linux 更好][3]

假设你只是需要浏览文件、看电影、下载图片、写文档、制作报表或者做一些类似的工作,并且你想要一个更加安全的系统。

那在这种情况下,你觉得花费几百美金买个系统完成这项工作,或者花费更多直接买个 MacBook 更好?当然,最终的决定权还是在你。

买个装好 Mac 系统的电脑?还是买个便宜的电脑,然后自己装上免费的 Linux 系统?这个要看你自己的偏好。就我个人而言,除了音视频剪辑创作之外,Linux 都非常好用,而对于音视频方面,我更倾向于使用 Final Cut Pro(专业的视频编辑软件)和 Logic Pro X(专业的音乐制作软件)(LCTT 译注:这两款软件都是苹果公司推出的)。

#### 2、硬件支持

![Linux vs Mac:为什么 Linux 更好][4]

Linux 支持多种平台。无论你的电脑配置如何,你都可以在上面安装 Linux,无论性能好或者差,Linux 都可以运行。[即使你的电脑已经使用很久了,你仍然可以通过选择安装合适的发行版让 Linux 在你的电脑上流畅的运行][5]。

而 Mac 不同,它是苹果机专用系统。如果你希望买个便宜的电脑,然后自己装上 Mac 系统,这几乎是不可能的。一般来说 Mac 都是和苹果设备配套的。

这有一些[在非苹果系统上安装 Mac OS 的教程][6]。这里面需要用到的专业技术以及可能遇到的一些问题将会花费你许多时间,你需要想好这样做是否值得。

总之,Linux 所支持的硬件平台很广泛,而 MacOS 相对而言则非常少。

#### 3、安全性

![Linux vs Mac:为什么 Linux 更好][7]

很多人都说 iOS 和 Mac 是非常安全的平台。的确,或许相比于 Windows,它确实比较安全,可并不一定有 Linux 安全。

我不是在危言耸听。Mac 系统上也有不少恶意软件和广告,并且[数量与日俱增][8]。我认识一些不太懂技术的用户使用着很慢的 Mac 电脑并且为此深受折磨。一项快速调查显示[浏览器恶意劫持软件][9]是罪魁祸首。

从来没有绝对安全的操作系统,Linux 也不例外。Linux 也有漏洞,但是 Linux 发行版提供的及时更新弥补了这些漏洞。另外,到目前为止在 Linux 上还没有自动运行的病毒或浏览器劫持恶意软件的案例发生。

这可能也是一个你应该选择 Linux 而不是 Mac 的原因。

#### 4、可定制性与灵活性

![Linux vs Mac:为什么 Linux 更好][10]

如果你有不喜欢的东西,自己定制或者修改它都行。

举个例子,如果你不喜欢 Ubuntu 18.04.1 的 [Gnome 桌面环境][11],你可以换成 [KDE Plasma][18]。你也可以尝试一些 [Gnome 扩展][12]丰富你的桌面选择。这种灵活性和可定制性在 Mac OS 是不可能有的。

除此之外,你还可以根据需要修改一些操作系统的代码(但是可能需要一些专业知识)来打造适合你的系统。这个在 MacOS 上可以做吗?

另外你可以根据需要从一系列的 Linux 发行版进行选择。比如说,如果你喜欢 MacOS 上的工作方式,[Elementary OS][13] 可能是个不错的选择。你想在你的旧电脑上装上一个轻量级的 Linux 发行版系统吗?这里有一个[轻量级 Linux 发行版列表][5]。相比较而言,MacOS 缺乏这种灵活性。

#### 5、使用 Linux 有助于你的职业生涯(针对 IT 行业和科学领域的学生)

![Linux vs Mac:为什么 Linux 更好][14]

对于 IT 领域的学生和求职者而言,这一点虽有争议,但确实有一定的帮助。使用 Linux 并不会让你成为一个优秀的人,也不一定能让你得到任何与 IT 相关的工作。

但是当你开始使用 Linux 并且探索如何使用的时候,你将会积累非常多的经验。作为一名技术人员,你迟早会接触终端,学习通过命令行操作文件系统以及安装应用程序。你可能不会知道这些都是一些 IT 公司的新员工需要培训的内容。

除此之外,Linux 在就业市场上还有很大的发展空间。Linux 相关的技术有很多(云计算、Kubernetes、系统管理等),你可以学习、考取专业技能证书并获得一份相关的高薪工作。要学习这些,你必须使用 Linux。

#### 6、可靠

![Linux vs Mac:为什么 Linux 更好][15]

想想为什么服务器上用的都是 Linux 系统,当然是因为它可靠。

但是它为什么可靠呢,相比于 MacOS,它的可靠体现在什么方面呢?

答案很简单 —— 给用户更多的控制权,同时提供更好的安全性。在 MacOS 上,你并不能完全控制它,这样做是为了让操作变得更容易,同时提高你的用户体验。使用 Linux,你可以做任何你想做的事情 —— 这可能会导致(对某些人来说)糟糕的用户体验 —— 但它确实使其更可靠。

#### 7、开源

![Linux vs Mac:为什么 Linux 更好][16]

开源并不是每个人都关心的。但对我来说,Linux 最重要的优势在于它的开源特性。上面讨论的大多数观点都是开源软件的直接优势。

简单解释一下,如果是开源软件,你可以自己查看或者修改源代码。但对 Mac 来说,苹果拥有独家控制权。即使你有足够的技术知识,也无法查看 MacOS 的源代码。

形象点说,使用 Mac 就像是得到了一辆车,但缺点是你不能打开引擎盖看里面是什么。那可太差劲了!

如果你想深入了解开源软件的优势,可以在 OpenSource.com 上浏览一下 [Ben Balter 的文章][17]。

### 总结

现在你应该知道为什么 Linux 比 Mac 好了吧,你觉得呢?上面的这些原因可以说服你选择 Linux 吗?如果不行的话那又是为什么呢?

请在下方评论让我们知道你的想法。

提示:这里的图片是以“企鹅俱乐部”为原型的。

--------------------------------------------------------------------------------

via: https://itsfoss.com/linux-vs-mac/

作者:[Ankush Das][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Ryze-Borgia](https://github.com/Ryze-Borgia)
校对:[pityonline](https://github.com/pityonline)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[1]: https://itsfoss.com/linux-better-than-windows/
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Linux-vs-mac-featured.png
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-1.jpeg
[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-4.jpeg
[5]: https://itsfoss.com/lightweight-linux-beginners/
[6]: https://hackintosh.com/
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-2.jpeg
[8]: https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html
[9]: https://www.imore.com/how-to-remove-browser-hijack
[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-3.jpeg
[11]: https://www.gnome.org/
[12]: https://itsfoss.com/best-gnome-extensions/
[13]: https://elementary.io/
[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-5.jpeg
[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-6.jpeg
[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-7.jpeg
[17]: https://opensource.com/life/15/12/why-open-source
[18]: https://www.kde.org/plasma-desktop
@ -1,9 +1,13 @@

树莓派自建 NAS 云盘之——云盘构建
======

> 用自行托管的树莓派 NAS 云盘来保护数据的安全!

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_tree_clouds.png?itok=b_ftihhP)

在前面两篇文章中,我们讨论了用树莓派搭建一个 NAS 云盘所需要的一些 [软硬件环境及其操作步骤][1]。我们还制定了适当的 [备份策略][2] 来保护 NAS 上的数据。本文中,我们将介绍讨论利用 [Nextcloud][3] 来方便快捷地存储、获取以及分享你的数据。

![](https://opensource.com/sites/default/files/uploads/nas_part3.png)

### 必要的准备工作

@ -13,13 +17,13 @@

### 安装 Nextcloud

为了在树莓派(参考 [第一篇][1] 中步骤设置)中运行 Nextcloud,首先用命令 `apt` 安装以下的一些依赖软件包。

```
sudo apt install unzip wget php apache2 mysql-server php-zip php-mysql php-dom php-mbstring php-gd php-curl
```

其次,下载 Nextcloud。在树莓派中利用 `wget` 下载其 [最新的版本][5]。在 [第一篇][1] 文章中,我们将两个磁盘驱动器连接到树莓派,一个用于存储当前数据,另一个用于备份。这里在数据存储盘上安装 Nextcloud,以确保每晚自动备份数据。

```
sudo mkdir -p /nas/data/nextcloud
```

@ -37,27 +41,27 @@ sudo chown -R www-data:www-data /nas/data/nextcloud

如上所述,Nextcloud 安装完毕。之前安装依赖软件包时就已经安装了 MySQL 数据库来存储 Nextcloud 的一些重要数据(例如,那些你创建的可以访问 Nextcloud 的用户的信息)。如果你更愿意使用 Postgres 数据库,则上面的依赖软件包需要做一些调整。

以 root 权限启动 MySQL:

```
sudo mysql
```

这将会打开 SQL 提示符界面,在那里可以插入如下指令——使用数据库连接密码替换其中的占位符——为 Nextcloud 创建一个数据库。

```
CREATE USER nextcloud IDENTIFIED BY '<这里插入密码>';
CREATE DATABASE nextcloud;
GRANT ALL ON nextcloud.* TO nextcloud;
```

按 `Ctrl+D` 或输入 `quit` 退出 SQL 提示符界面。

### Web 服务器配置

Nextcloud 可以配置以适配于 Nginx 服务器或者其他 Web 服务器运行的环境。但本文中,我决定在我的树莓派 NAS 中运行 Apache 服务器(如果你有其他效果更好的服务器选择方案,不妨也跟我分享一下)。

首先为你的 Nextcloud 域名创建一个虚拟主机,创建配置文件 `/etc/apache2/sites-available/001-nextcloud.conf`,在其中输入下面的参数内容。修改其中 `ServerName` 为你的域名。

```
<VirtualHost *:80>
```

@ -78,13 +82,13 @@ a2ensite 001-nextcloud

```
sudo systemctl reload apache2
```

现在,你应该可以通过浏览器中输入域名访问到 web 服务器了。这里我推荐使用 HTTPS 协议而不是 HTTP 协议来访问 Nextcloud。一个简单而且免费的方法就是利用 [Certbot][7] 下载 [Let's Encrypt][6] 证书,然后设置定时任务自动刷新。这样就避免了自签证书等的麻烦。参考 [如何在树莓派中安装][8] Certbot 。在配置 Certbot 的时候,你甚至可以配置将 HTTP 自动转到 HTTPS ,例如访问 `http://nextcloud.pi-nas.com` 自动跳转到 `https://nextcloud.pi-nas.com`。注意,如果你的树莓派 NAS 运行在家庭路由器的下面,别忘了设置路由器的 443 端口和 80 端口转发。

### 配置 Nextcloud

最后一步,通过浏览器访问 Nextcloud 来配置它。在浏览器中输入域名地址,插入上文中的数据库设置信息。这里,你可以创建 Nextcloud 管理员用户。默认情况下,数据保存目录在 Nextcloud 目录下,所以你也无需修改我们在 [第二篇][2] 一文中设置的备份策略。

然后,页面会跳转到 Nextcloud 登陆界面,用刚才创建的管理员用户登陆。在设置页面中会有基础操作教程和安全安装教程(这里是访问 `https://nextcloud.pi-nas.com/settings/admin`)。

恭喜你,到此为止,你已经成功在树莓派中安装了你自己的云 Nextcloud。去 Nextcloud 主页 [下载 Nextcloud 客户端][9],客户端可以同步数据并且离线访问服务器。移动端甚至可以上传图片等资源,然后电脑桌面都可以去访问它们。

@ -95,13 +99,13 @@ via: https://opensource.com/article/18/9/host-cloud-nas-raspberry-pi

作者:[Manuel Dewald][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[jrg](https://github.com/jrglinux)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ntlx
[1]: https://linux.cn/article-10104-1.html?utm_source=index&utm_medium=more
[2]: https://linux.cn/article-10112-1.html
[3]: https://nextcloud.com/
[4]: https://sourceforge.net/p/ddclient/wiki/Home/
[5]: https://nextcloud.com/install/#instructions-server
@ -0,0 +1,167 @@

Linux 拥有了新的行为准则,但是许多人都对此表示不满
=====

> Linux 内核有了新的<ruby>行为准则<rt>Code of Conduct</rt></ruby>(CoC)。但在这条行为准则被签署以及发布仅仅 30 分钟之后,Linus Torvalds 就暂时离开了 Linux 内核的开发工作。因为新行为准则的作者那富有争议的过去,现在这件事成为了热点话题。许多人都对这新的行为准则表示不满。

如果你还不了解这件事,请参阅 [Linus Torvalds 对于自己之前的不良态度致歉并开始休假,以改善自己的行为态度][1]。

### Linux 内核开发遵守的新行为准则

Linux 内核开发者并不是以前没有需要遵守的行为准则,但是之前的[<ruby>冲突准则<rt>code of conflict</rt></ruby>][2]现在被替换成了以“给内核开发社区营造更加热情,更方便他人参与的氛围”为目的的行为准则。

> “为营造一个开放并且热情的社区环境,我们,贡献者与维护者,许诺让每一个参与进我们项目和社区的人享受一个没有骚扰的体验。无关于他们的年纪、体型、身体残疾、种族、性别、性别认知与表达、社会经验、教育水平、社会或者经济地位、国籍、外表、人种、信仰、性认同和性取向。”

你可以在这里阅读整篇行为准则:[Linux 行为准则][33]。

### Linus Torvalds 是被迫道歉并且休假的吗?

![Linus Torvalds 的道歉][3]

这个新的行为准则由 Linus Torvalds 和 Greg Kroah-Hartman(仅次于 Torvalds 的二把手)签发。来自 Intel 的 Dan Williams 和来自 Facebook 的 Chris Mason 也是该准则的签署者之一。

如果我正确地解读了时间线,在签署这个行为准则的半小时之后,Torvalds [发送了一封邮件,对自己之前的不良态度致歉][4]。他同时宣布会进行休假,以改善自己的行为态度。

不过有些人开始阅读这封邮件的话外之音,并对如下文字报以特别关注:

>**在这周,许多社区成员批评了我之前种种不解人意的行为。我以前在邮件里进行的,对他人轻率的批评是非专业以及不必要的**。这种情况在我将事情放在私人渠道处理的时候尤为严重。我理解这件事情的严重性,这是不好的行为,我对此感到十分抱歉。

他是否是因为新的行为准则才被迫道歉并决定休假?这几行文字或许可以作为判断的线索。

### 有关贡献者盟约作者 Coraline Ada Ehmke 的争议

Linux 的行为准则基于[<ruby>贡献者盟约<rt>Contributor Covenant</rt></ruby> 1.4 版本][5]。贡献者盟约[被上百个开源项目所接纳][6],包括 Eclipse、Angular、Ruby、Kubernetes 等项目。

贡献者盟约由 [Coraline Ada Ehmke][7] 创作,她是一个软件工程师,开源支持者,以及 [LGBT][8] 活动家。她对于促进开源世界的多样性做了显著的贡献。

Coraline 对于精英主义的反对立场同样十分鲜明。[<ruby>精英主义<rt>meritocracy</rt></ruby>][9]这个词语源自拉丁文,本意为系统内的进步取决于“精英”,例如智力水平、取得的证书以及教育程度。但[类似 Coraline 的活动家们认为][10]唯才是用是个糟糕的体系,因为它只是通过人的智力产出来度量一个人,而并不重视他们的人性。

[![croraline meritocracy][11]][12]

*图片来源:推特用户 @nickmon1112*

[Linus Torvalds 不止一次地说到,他在意的只是代码而并非写代码的人][13]。所以很明显,这忤逆了 Coraline 有关唯才是用体系的观点。

具体来说,Coraline 那被人关注饱受争议的过去,是一个关于 [Opal 项目][14]贡献者的事件。那是一个发生[在推特上的讨论][15],Elia,来自意大利的 Opal 项目核心开发者说“(那些变性人)不接受现实才是问题所在。”

Coraline 并没有参加讨论,也不是 Opal 项目的贡献者。不过作为 LGBT 活动家,她以 Elia 发表“冒犯变性人群体的发言”为由,[要求他退出 Opal 项目][16]。 Coraline 和她的支持者——他们给这个项目做过贡献,通过在 GitHub 仓库平台上冗长且激烈的争论,试图将 Elia——此项目的核心开发者移出项目。

虽然 Elia 并没有离开这个项目,不过 Opal 项目的维护者同意实行一个行为准则。这个行为准则就是 Coraline 不停向维护者们宣扬的,她那著名的贡献者盟约。

不过故事到这里并没有结束。贡献者盟约稍后被更改,[加入了一些针对 Elia 的新条款][17]。这些新条款将行为准则的管束范围扩展到公共领域。不过这些更改稍后[被维护者们标记为恶意篡改][18]。最后 Opal 项目摆脱了贡献者盟约,并用自己的行为准则取而代之。

这个例子非常好地说明了,某些被冒犯的少数人群——哪怕他们并没有给这个项目做过一点贡献,是怎样试图去驱逐这个项目的核心开发者的。

### 人们对于 Linux 新的行为准则以及 Torvalds 道歉的反应

Linux 行为准则以及 Torvalds 的道歉一发布,社交媒体与论坛上就开始盛传种种谣言与[推测][19]。虽然很多人对新的行为准则感到满意,但仍有些人认为这是 [SJW 尝试渗透 Linux 社区][20]的阴谋。(LCTT 译注:SJW——Social Justice Warrior 所谓“为社会正义而战的人”。)

Coraline 发布的一个富有嘲讽意味的推特让争论愈发激烈。

>我迫不及待期待看到大批的人离开 Linux 社区的场景了。现在它已经被 SJW 的成员渗透了。哈哈哈哈。
[pic.twitter.com/eFeY6r4ENv][21]
>
>— Coraline Ada Ehmke (@CoralineAda) [9 月 16 日, 2018][22]

随着对于 Linux 行为准则的争论持续发酵,Coraline 公开宣称贡献者盟约是一份政治文件。这并不能被那些试图将政治因素排除在开源项目之外的人所接受。

>有些人说贡献者盟约是一份政治文件,他们说的没错。
>
>— Coraline Ada Ehmke (@CoralineAda) [9 月 16 日, 2018][23]

Nick Monroe,一位自由记者,宣称 Linux 行为准则远没有表面上看上去那么简单。为了证明自己的观点,他挖掘出了 Coraline 的过去。如果您愿意,可以阅读以下材料。

>好啦,你们已经看到过几千次了。这是一个行为准则。
>
>它包含了社会认同的正义行为。<https://t.co/KuQqeriYeJ>
>
>不过它或许没有看上去的那么简单。[pic.twitter.com/8NUL2K1gu2][24]
>
>— Nick Monroe (@nickmon1112) [9 月 17 日, 2018][25]

Nick 并不是唯一一个反对 Linux 新的行为准则的人。[SJW][26] 的参与引发了更多的阴谋论猜测。

>我猜今天关于 Linux 的大新闻就是:Linux 内核现在被一个 “<ruby>后精英政治<rt>post meritocracy</rt></ruby>” 世界观下的行为准则给掌控了。
>
>这个行为准则的宗旨看起来不错。不过在实际操作中,它们通常被当作 SJW 分子攻击他们不喜之人的工具。况且,很多人都被 SJW 分子所厌恶。
>
>— Mark Kern (@Grummz) [9 月 17 日, 2018][27]

虽然很多人对于 Torvalds 的道歉感到欣慰,仍有一些人在责备 Torvalds 的态度。

>我是不是唯一一个认为 Linus Torvalds 这十几年来的态度恰好就是 Linux 和开源“社区”特有的那种,居高临下,粗鲁,鄙视一切新人的行为作风?反正作为一个新用户,我从来没有在 Linux 社区里感受到自己是受欢迎的。
>
>— Jonathan Frappier (@jfrappier) [9 月 17 日, 2018][28]

还有些人并不能接受 Torvalds 的道歉。

>哦快看啊,一个喜欢辱骂他人的开源软件维护者,在十几年的恶行之后,终于承认了他的行为**可能**是不正确的。
>
>我关注的那些人因为这件事都惊讶到平地摔,并且决定给他(Linus Torvalds)寄饼干来庆祝。 🙄🙄🙄
>
>— Kelly Ellis (@justkelly_ok) [9 月 17 日, 2018][29]

Torvalds 的道歉引起了广泛关注 ;)

>我现在要在我的个人档案里写上“我不知是否该原谅 Linus Torvalds” 吗?
>
>— Verónica. (@maria_fibonacci) [9 月 17 日, 2018][30]

玩笑归玩笑,对 Linus 道歉一事的关注是由 Sharp 引发的。她因为“恶劣的社区环境”于 2015 年[退出了 Linux 内核的开发][31]。(LCTT 译注,Sarah Sharp 现在改名为“Sage Sharp”,并要求别人称其为“them”而不是“she”或“he”。)

>现在我们要面对的问题是,这个成就了 Linus,给予他肆意辱骂特权的社区能否迎来改变。不仅仅是 Linus 个人,Linux 内核开发社区也急需改变。<https://t.co/EG5KO43416>
>
>— Sage Sharp (@sagesharp) [9 月 17 日, 2018][32]

### 你对于 Linux 行为准则怎么看?

如果你问我的观点,我认为目前社区的确是需要一个行为准则。它能指导人们尊重他人,不因为他人的种族、宗教信仰、国籍、政治观点(左派或者右派)而歧视,营造出一个积极向上的社区氛围。

对于这个事件,你怎么看?你认为这个行为准则能够帮助 Linux 内核的开发,或者说因为 SJW 成员们的加入,情况会变得更糟?

在 FOSS 里我们没有行为准则,不过我们都会持着文明友好的态度讨论问题。

-------

via: https://itsfoss.com/linux-code-of-conduct/

作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[thecyanbird](https://github.com/thecyanbird)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[1]: https://linux.cn/article-10022-1.html
[2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/CodeOfConflict?id=ddbd2b7ad99a418c60397901a0f3c997d030c65e
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linus-torvalds-apologizes.jpeg
[4]: https://lkml.org/lkml/2018/9/16/167
[5]: https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[6]: https://www.contributor-covenant.org/adopters
[7]: https://en.wikipedia.org/wiki/Coraline_Ada_Ehmke
[8]: https://en.wikipedia.org/wiki/LGBT
[9]: https://en.wikipedia.org/wiki/Meritocracy
[10]: https://modelviewculture.com/pieces/the-dehumanizing-myth-of-the-meritocracy
[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/croraline-meritocracy.jpg
[12]: https://pbs.twimg.com/media/DnTTfi7XoAAdk08.jpg
[13]: https://arstechnica.com/information-technology/2015/01/linus-torvalds-on-why-he-isnt-nice-i-dont-care-about-you/
[14]: https://opalrb.com/
[15]: https://twitter.com/krainboltgreene/status/611569515315507200
[16]: https://github.com/opal/opal/issues/941
[17]: https://github.com/opal/opal/pull/948/commits/817321e27eccfffb3841f663815c17eecb8ef061#diff-a1ee87dafebc22cbd96979f1b2b7e837R11
[18]: https://github.com/opal/opal/pull/948#issuecomment-113486020
[19]: https://www.reddit.com/r/linux/comments/9go8cp/linus_torvalds_daughter_has_signed_the/
[20]: https://snew.github.io/r/linux/comments/9ghrrj/linuxs_new_coc_is_a_piece_of_shit/
[21]: https://t.co/eFeY6r4ENv
[22]: https://twitter.com/CoralineAda/status/1041441155874009093?ref_src=twsrc%5Etfw
[23]: https://twitter.com/CoralineAda/status/1041465346656530432?ref_src=twsrc%5Etfw
[24]: https://t.co/8NUL2K1gu2
[25]: https://twitter.com/nickmon1112/status/1041668315947708416?ref_src=twsrc%5Etfw
[26]: https://www.urbandictionary.com/define.php?term=SJW
[27]: https://twitter.com/Grummz/status/1041524170331287552?ref_src=twsrc%5Etfw
[28]: https://twitter.com/jfrappier/status/1041486055038492674?ref_src=twsrc%5Etfw
[29]: https://twitter.com/justkelly_ok/status/1041522269002985473?ref_src=twsrc%5Etfw
[30]: https://twitter.com/maria_fibonacci/status/1041538148121997313?ref_src=twsrc%5Etfw
[31]: https://www.networkworld.com/article/2988850/opensource-subnet/linux-kernel-dev-sarah-sharp-quits-citing-brutal-communications-style.html
[32]: https://twitter.com/_sagesharp_/status/1041480963287539712?ref_src=twsrc%5Etfw
[33]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8a104f8b5867c682d994ffa7a74093c54469c11f

@ -1,27 +1,24 @@

一个简单而美观的跨平台播客应用程序
======

![](https://www.ostechnix.com/wp-content/uploads/2018/09/cpod-720x340.png)

播客在过去几年中变得非常流行。 播客就是所谓的“<ruby>信息娱乐<rt>infotainment</rt></ruby>”,它们通常是轻松的,但也会为你提供有价值的信息。 播客在过去几年中已经非常火爆了,如果你喜欢某些东西,就很可能有个相关的播客。 Linux 桌面版上有很多播客播放器,但是如果你想要一些视觉上美观、有顺滑的动画并且可以在每个平台上运行的东西,那就并没有很多替代品可以替代 CPod 了。 CPod(以前称为 Cumulonimbus)是一个开源而成熟的播客应用程序,适用于 Linux、MacOS 和 Windows。

CPod 运行在一个名为 Electron 的东西上 —— 这个工具允许开发人员构建跨平台(例如 Windows、MacOS 和 Linux)的桌面图形化应用程序。 在本简要指南中,我们将讨论如何在 Linux 中安装和使用 CPod 播客应用程序。

### 安装 CPod

转到 CPod 的[发布页面][1]。 下载并安装所选平台的二进制文件。 如果你使用 Ubuntu / Debian,你只需从发布页面下载并安装 .deb 文件,如下所示。

```
$ wget https://github.com/z-------------/CPod/releases/download/v1.25.7/CPod_1.25.7_amd64.deb

$ sudo apt update

$ sudo apt install gdebi

$ sudo gdebi CPod_1.25.7_amd64.deb
```

如果你使用其他发行版,你可能需要使用发布页面中的 AppImage。

从发布页面下载 AppImage 文件。

@ -37,17 +34,17 @@ $ chmod +x CPod-1.25.7-x86_64.AppImage

```
$ ./CPod-1.25.7-x86_64.AppImage
```

你将看到一个对话框询问是否将应用程序与系统集成。 如果要执行此操作,请单击“yes”。

### 特征

#### 探索标签页

![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-features-tab.png)

CPod 使用 Apple iTunes 数据库查找播客。 这很好,因为 iTunes 数据库是最大的这类数据库。 如果某个播客存在,那么很可能就在 iTunes 上。 要查找播客,只需使用探索部分中的顶部搜索栏即可。 探索部分还展示了一些受欢迎的播客。

#### 主标签页

![](http://www.ostechnix.com/wp-content/uploads/2018/09/CPod-home-tab.png)

@ -61,38 +58,40 @@ CPod 使用 Apple iTunes 数据库查找播客。 这很好,因为 iTunes 数

![](https://www.ostechnix.com/wp-content/uploads/2018/09/The-podcasts-queue.png)

#### 订阅标签页

![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-subscriptions-tab.png)

你当然可以订阅你喜欢的播客。 你可以在订阅标签页中执行的其他一些操作是:

1. 刷新播客艺术作品
2. 导出订阅到 .OPML 文件中,从 .OPML 文件中导入订阅。

#### 播放器

![](https://www.ostechnix.com/wp-content/uploads/2018/09/CPod-Podcast-Player.png)

播放器可能是 CPod 最美观的部分。 该应用程序根据播客的横幅更改整体外观。 底部有一个声音可视化器。 在右侧,你可以查看和搜索此播客的其他剧集。

#### 缺点/缺失功能

虽然我喜欢这个应用程序,但 CPod 确实有一些缺点和缺失的功能:

1. 糟糕的 MPRIS 集成 —— 你可以从桌面环境的媒体播放器对话框中播放或者暂停播客,但这是不够的。 播客的名称未显示,也不能切换到下一个或者上一个剧集。
2. 不支持章节。
3. 没有自动下载 —— 你必须手动下载剧集。
4. 使用过程中的 CPU 使用率非常高(即使对于 Electron 应用程序而言)。

### 总结

虽然它确实有它的缺点,但 CPod 显然是最美观的播客播放器应用程序,并且它具有最基本的功能。 如果你喜欢使用视觉上美观的应用程序,并且不需要高级功能,那么这就是你的完美应用。我知道我肯定会使用它。

你喜欢 CPod 吗? 请将你的意见发表在下面的评论中。

**资源**

- [CPod GitHub 仓库](https://github.com/z-------------/CPod)

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/cpod-a-simple-beautiful-and-cross-platform-podcast-app/

@ -100,9 +99,9 @@ via: https://www.ostechnix.com/cpod-a-simple-beautiful-and-cross-platform-podcas

作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/editor/
[1]: https://github.com/z-------------/CPod/releases
@ -1,27 +1,25 @@
|
||||
Hegemon - 使用 Rust 编写的模块化系统监视程序
|
||||
Hegemon:使用 Rust 编写的模块化系统监视程序
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/09/hegemon-720x340.png)
|
||||
|
||||
在类 Unix 系统中监视运行进程时,最常用的程序是 **top** 和 top 的增强版 **htop**。我个人最喜欢的是 htop。但是,开发人员不时会发布这些程序的替代品。top 和 htop 工具的一个替代品是 **Hegemon**。它是使用 **Rust** 语言编写的模块化系统监视程序。
|
||||
在类 Unix 系统中监视运行进程时,最常用的程序是 `top` 和它的增强版 `htop`。我个人最喜欢的是 `htop`。但是,开发人员不时会发布这些程序的替代品。`top` 和 `htop` 工具的一个替代品是 `Hegemon`。它是使用 Rust 语言编写的模块化系统监视程序。
|
||||
|
||||
关于 Hegemon 的功能,我们可以列出以下这些:
|
||||
|
||||
* Hegemon 会监控 CPU、内存和交换页的使用情况。
|
||||
* 它监控系统的温度和风扇速度。
|
||||
* 更新间隔时间可以调整。默认值为 3 秒。
|
||||
* 我们可以通过扩展数据流来展示更详细的图表和其他信息。
|
||||
* 单元测试
|
||||
* 干净的界面
|
||||
* 免费且开源。
|
||||
|
||||
|
||||
* Hegemon 会监控 CPU、内存和交换页的使用情况。
|
||||
* 它监控系统的温度和风扇速度。
|
||||
* 更新间隔时间可以调整。默认值为 3 秒。
|
||||
* 我们可以通过扩展数据流来展示更详细的图表和其他信息。
|
||||
* 单元测试。
|
||||
* 干净的界面。
|
||||
* 自由开源。
|
||||
|
||||
### 安装 Hegemon
|
||||
|
||||
确保已安装 **Rust 1.26** 或更高版本。要在 Linux 发行版中安装 Rust,请参阅以下指南:
|
||||
确保已安装 Rust 1.26 或更高版本。要在 Linux 发行版中安装 Rust,请参阅以下指南:
|
||||
|
||||
[Install Rust Programming Language In Linux][2]
|
||||
- [在 Linux 中安装 Rust 编程语言][2]
|
||||
|
||||
另外要安装 [libsensors][1] 库。它在大多数 Linux 发行版的默认仓库中都有。例如,你可以使用以下命令将其安装在基于 RPM 的系统(如 Fedora)中:
|
||||
|
||||
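以 Fedora 为例,libsensors 对应的开发包通常名为 `lm_sensors-devel`(包名因发行版而异,这里仅作示例):

```
$ sudo dnf install lm_sensors-devel
```

依赖装好之后,如果 Hegemon 已发布到 crates.io,一般可以直接用 `cargo` 安装(具体安装方式请以项目主页为准,此处仅作示意):

```
$ cargo install hegemon
```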
@ -51,10 +49,10 @@ $ hegemon
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Hegemon-in-action.gif)
|
||||
|
||||
要退出,请按 **Q**。
|
||||
要退出,请按 `Q`。
|
||||
|
||||
|
||||
请注意,hegemon 仍处于早期开发阶段,并不能完全取代 **top** 命令。它可能存在 bug 和功能缺失。如果你遇到任何 bug,请在项目的 github 页面中报告它们。开发人员计划在即将推出的版本中引入更多功能。所以,请关注这个项目。
|
||||
请注意,hegemon 仍处于早期开发阶段,并不能完全取代 `top` 命令。它可能存在 bug 和功能缺失。如果你遇到任何 bug,请在项目的 GitHub 页面中报告它们。开发人员计划在即将推出的版本中引入更多功能。所以,请关注这个项目。
|
||||
|
||||
就是这些了。希望这篇文章有用。还有更多的好东西。敬请关注!
|
||||
|
||||
@ -69,7 +67,7 @@ via: https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-writ
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,34 +1,35 @@
|
||||
如何在救援(单用户模式)/紧急模式下启动 Ubuntu 18.04/Debian 9 服务器
|
||||
======
|
||||
将 Linux 服务器引导到单用户模式或**救援模式**是 Linux 管理员在关键时刻恢复服务器时通常使用的重要故障排除方法之一。在 Ubuntu 18.04 和 Debian 9 中,单用户模式被称为救援模式。
|
||||
|
||||
除了救援模式外,Linux 服务器可以在**紧急模式**下启动,它们之间的主要区别在于,紧急模式加载了带有只读根文件系统文件系统的最小环境,也没有启用任何网络或其他服务。但救援模式尝试挂载所有本地文件系统并尝试启动一些重要的服务,包括网络。
|
||||
将 Linux 服务器引导到单用户模式或<ruby>救援模式<rt>rescue mode</rt></ruby>是 Linux 管理员在关键时刻恢复服务器时通常使用的重要故障排除方法之一。在 Ubuntu 18.04 和 Debian 9 中,单用户模式被称为救援模式。
|
||||
|
||||
除了救援模式外,Linux 服务器可以在<ruby>紧急模式<rt>emergency mode</rt></ruby>下启动,它们之间的主要区别在于,紧急模式加载的是带有只读根文件系统的最小环境,没有启用任何网络或其他服务。但救援模式会尝试挂载所有本地文件系统,并尝试启动一些重要的服务,包括网络。
|
||||
|
||||
在本文中,我们将讨论如何在救援模式和紧急模式下启动 Ubuntu 18.04 LTS/Debian 9 服务器。
|
||||
|
||||
#### 在单用户/救援模式下启动 Ubuntu 18.04 LTS 服务器:
|
||||
|
||||
重启服务器并进入启动加载程序 (Grub) 屏幕并选择 “**Ubuntu**”,启动加载器页面如下所示,
|
||||
重启服务器并进入启动加载程序 (Grub) 屏幕并选择 “Ubuntu”,启动加载器页面如下所示,
|
||||
|
||||
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Bootloader-Screen-Ubuntu18-04-Server.jpg)
|
||||
|
||||
按下 “**e**”,然后移动到以 “**linux**” 开头的行尾,并添加 “**systemd.unit=rescue.target**”。如果存在单词 “**$vt_handoff**” 就删除它。
|
||||
按下 `e`,然后移动到以 `linux` 开头的行尾,并添加 `systemd.unit=rescue.target`。如果存在单词 `$vt_handoff` 就删除它。
|
||||
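供参考,编辑之后的 `linux` 行大致如下(内核版本与根分区参数因系统而异,这里只是一个示意):

```
linux /boot/vmlinuz-4.15.0-29-generic root=UUID=<你的根分区 UUID> ro systemd.unit=rescue.target
```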
|
||||
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-target-ubuntu18-04.jpg)
|
||||
|
||||
现在按 Ctrl-x 或 F10 启动,
|
||||
现在按 `Ctrl-x` 或 `F10` 启动,
|
||||
|
||||
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/rescue-mode-ubuntu18-04.jpg)
|
||||
|
||||
现在按回车键,然后你将得到所有文件系统都以读写模式挂载的 shell 并进行故障排除。完成故障排除后,可以使用 “**reboot**” 命令重新启动服务器。
|
||||
现在按回车键,然后你将得到所有文件系统都以读写模式挂载的 shell 并进行故障排除。完成故障排除后,可以使用 `reboot` 命令重新启动服务器。
|
||||
|
||||
#### 在紧急模式下启动 Ubuntu 18.04 LTS 服务器
|
||||
|
||||
重启服务器并进入启动加载程序页面并选择 “**Ubuntu**”,然后按 “**e**” 并移动到以 linux 开头的行尾,并添加 “**systemd.unit=emergency.target**“。
|
||||
重启服务器并进入启动加载程序页面并选择 “Ubuntu”,然后按 `e` 并移动到以 `linux` 开头的行尾,并添加 `systemd.unit=emergency.target`。
|
||||
|
||||
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergecny-target-ubuntu18-04-server.jpg)
|
||||
|
||||
现在按 Ctlr-x 或 F10 以紧急模式启动,你将获得一个 shell 并从那里进行故障排除。正如我们已经讨论过的那样,在紧急模式下,文件系统将以只读模式挂载,并且在这种模式下也不会有网络,
|
||||
现在按 `Ctrl-x` 或 `F10` 以紧急模式启动,你将获得一个 shell 并从那里进行故障排除。正如我们已经讨论过的那样,在紧急模式下,文件系统将以只读模式挂载,并且在这种模式下也不会有网络,
|
||||
|
||||
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg)
|
||||
|
||||
@ -43,17 +44,17 @@
|
||||
|
||||
#### 将 Debian 9 引导到救援和紧急模式
|
||||
|
||||
重启 Debian 9.x 服务器并进入 grub页面选择 “**Debian GNU/Linux**”。
|
||||
重启 Debian 9.x 服务器并进入 grub 页面,选择 “Debian GNU/Linux”。
|
||||
|
||||
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Debian9-Grub-Screen.jpg)
|
||||
|
||||
按下 “**e**” 并移动到 linux 开头的行尾并添加 “**systemd.unit=rescue.target**” 以在救援模式下启动系统, 要在紧急模式下启动,那就添加 “**systemd.unit=emergency.target**“
|
||||
按下 `e` 并移动到 linux 开头的行尾并添加 `systemd.unit=rescue.target` 以在救援模式下启动系统;要在紧急模式下启动,则添加 `systemd.unit=emergency.target`。
|
||||
|
||||
#### 救援模式:
|
||||
|
||||
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-mode-Debian9.jpg)
|
||||
|
||||
现在按 Ctrl-x 或 F10 以救援模式启动
|
||||
现在按 `Ctrl-x` 或 `F10` 以救援模式启动
|
||||
|
||||
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Rescue-Mode-Shell-Debian9.jpg)
|
||||
|
||||
@ -63,11 +64,11 @@
|
||||
|
||||
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-target-grub-debian9.jpg)
|
||||
|
||||
现在按下 ctrl-x 或 F10 以紧急模式启动系统
|
||||
现在按下 `Ctrl-x` 或 `F10` 以紧急模式启动系统。
|
||||
|
||||
![](https://www.linuxtechi.com/wp-content/uploads/2018/09/Emergency-prompt-debian9.jpg)
|
||||
|
||||
按下回车获取 shell 并使用 “**mount -o remount,rw /**” 命令以读写模式挂载根文件系统。
|
||||
按下回车获取 shell 并使用 `mount -o remount,rw /` 命令以读写模式挂载根文件系统。
|
||||
|
||||
**注意:** 如果已经在 Ubuntu 18.04 和 Debian 9 服务器中设置了 root 密码,那么你必须输入 root 密码才能在救援和紧急模式下获得 shell。
|
||||
|
||||
@ -81,7 +82,7 @@ via: https://www.linuxtechi.com/boot-ubuntu-18-04-debian-9-rescue-emergency-mode
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,6 +1,7 @@
|
||||
如何在双系统引导下替换 Linux 发行版
|
||||
======
|
||||
在双系统引导的状态下,你可以将已安装的 Linux 发行版替换为另一个发行版,同时还可以保留原本的个人数据。
|
||||
|
||||
> 在双系统引导的状态下,你可以将已安装的 Linux 发行版替换为另一个发行版,同时还可以保留原本的个人数据。
|
||||
|
||||
![How to Replace One Linux Distribution With Another From Dual Boot][1]
|
||||
|
||||
@ -26,11 +27,9 @@
|
||||
* 需要安装的 Linux 发行版的 USB live 版
|
||||
* 在外部磁盘备份 Windows 和 Linux 中的重要文件(并非必要,但建议备份一下)
|
||||
|
||||
|
||||
|
||||
#### 在替换 Linux 发行版时要记住保留你的 home 目录
|
||||
|
||||
如果想让个人文件在安装新 Linux 系统的过程中不受影响,原有的 Linux 系统必须具有单独的 root 目录和 home 目录。你可能会发现我的[双系统引导教程][8]在安装过程中不选择“与 Windows 一起安装”选项,而选择“其它”选项,然后手动创建 root 和 home 分区。所以,手动创建单独的 home 分区也算是一个磨刀不误砍柴工的操作。因为如果要在不丢失文件的情况下,将现有的 Linux 发行版替换为另一个发行版,需要将 home 目录存放在一个单独的分区上。
|
||||
如果想让个人文件在安装新 Linux 系统的过程中不受影响,原有的 Linux 系统必须具有单独的 root 目录和 home 目录。你可能会发现我的[双系统引导教程][8]在安装过程中不选择“与 Windows 共存”选项,而选择“其它”选项,然后手动创建 root 和 home 分区。所以,手动创建单独的 home 分区也算是一个磨刀不误砍柴工的操作。因为如果要在不丢失文件的情况下,将现有的 Linux 发行版替换为另一个发行版,需要将 home 目录存放在一个单独的分区上。
|
||||
|
||||
不过,你必须记住现有 Linux 系统的用户名和密码才能使用与新系统中相同的 home 目录。
|
||||
|
||||
@ -51,69 +50,80 @@
|
||||
在安装过程中,进入“安装类型”界面时,选择“其它”选项。
|
||||
|
||||
![Replacing one Linux with another from dual boot][10]
|
||||
(在这里选择“其它”选项)
|
||||
|
||||
*在这里选择“其它”选项*
|
||||
|
||||
#### 步骤 3:准备分区操作
|
||||
|
||||
下图是分区界面。你会看到使用 Ext4 文件系统类型来安装 Linux。
|
||||
|
||||
![Identifying Linux partition in dual boot][11]
|
||||
(确定 Linux 的安装位置)
|
||||
|
||||
*确定 Linux 的安装位置*
|
||||
|
||||
在上图中,标记为 Linux Mint 19 的 Ext4 分区是 root 分区,大小为 82691 MB 的第二个 Ext4 分区是 home 分区。我在这里没有使用[交换空间][12]。
|
||||
|
||||
如果你只有一个 Ext4 分区,就意味着你的 home 目录与 root 目录位于同一分区。在这种情况下,你就无法保留 home 目录中的文件了,这个时候我建议将重要文件复制到外部磁盘,否则这些文件将不会保留。
|
||||
|
||||
然后是删除 root 分区。选择 root 分区,然后点击 - 号,这个操作释放了一些磁盘空间。
|
||||
然后是删除 root 分区。选择 root 分区,然后点击 `-` 号,这个操作释放了一些磁盘空间。
|
||||
|
||||
![Delete root partition of your existing Linux install][13]
|
||||
(删除 root 分区)
|
||||
|
||||
磁盘空间释放出来后,点击 + 号。
|
||||
*删除 root 分区*
|
||||
|
||||
磁盘空间释放出来后,点击 `+` 号。
|
||||
|
||||
![Create root partition for the new Linux][14]
|
||||
(创建新的 root 分区)
|
||||
|
||||
*创建新的 root 分区*
|
||||
|
||||
现在已经在可用空间中创建一个新分区。如果你之前的 Linux 系统中只有一个 root 分区,就应该在这里创建 root 分区和 home 分区。如果需要,还可以创建交换分区。
|
||||
|
||||
如果你之前已经有 root 分区和 home 分区,那么只需要从已删除的 root 分区创建 root 分区就可以了。
|
||||
|
||||
![Create root partition for the new Linux][15]
|
||||
(创建 root 分区)
|
||||
|
||||
你可能有疑问,为什么要经过“删除”和“添加”两个过程,而不使用“更改”选项。这是因为以前使用“更改”选项好像没有效果,所以我更喜欢用 - 和 +。这是迷信吗?也许是吧。
|
||||
*创建 root 分区*
|
||||
|
||||
你可能有疑问,为什么要经过“删除”和“添加”两个过程,而不使用“更改”选项。这是因为以前使用“更改”选项好像没有效果,所以我更喜欢用 `-` 和 `+`。这是迷信吗?也许是吧。
|
||||
|
||||
这里有一个重要的步骤,对新创建的 root 分区进行格式化。在没有更改分区大小的情况下,默认是不会对分区进行格式化的。如果分区没有被格式化,之后可能会出现问题。
|
||||
|
||||
![][16]
|
||||
(格式化 root 分区很重要)
|
||||
|
||||
*格式化 root 分区很重要*
|
||||
|
||||
如果你在新的 Linux 系统上已经划分了单独的 home 分区,选中它并点击更改。
|
||||
|
||||
![Recreate home partition][17]
|
||||
(修改已有的 home 分区)
|
||||
|
||||
*修改已有的 home 分区*
|
||||
|
||||
然后指定将其作为 home 分区挂载即可。
|
||||
|
||||
![Specify the home mount point][18]
|
||||
(指定 home 分区的挂载点)
|
||||
|
||||
*指定 home 分区的挂载点*
|
||||
|
||||
如果你还有交换分区,可以重复与 home 分区相同的步骤,唯一不同的是要指定将空间用作交换空间。
|
||||
|
||||
现在的状态应该是有一个 root 分区(将被格式化)和一个 home 分区(如果需要,还可以使用交换分区)。点击“立即安装”可以开始安装。
|
||||
|
||||
![Verify partitions while replacing one Linux with another][19]
|
||||
(检查分区情况)
|
||||
|
||||
*检查分区情况*
|
||||
|
||||
接下来的几个界面就很熟悉了,要重点注意的是创建用户和密码的步骤。如果你之前有一个单独的 home 分区,并且还想使用相同的 home 目录,那你必须使用和之前相同的用户名和密码,至于设备名称则可以任意指定。
|
||||
|
||||
![To keep the home partition intact, use the previous user and password][20]
|
||||
(要保持 home 分区不变,请使用之前的用户名和密码)
|
||||
|
||||
*要保持 home 分区不变,请使用之前的用户名和密码*
|
||||
|
||||
接下来只要静待安装完成,不需执行任何操作。
|
||||
|
||||
![Wait for installation to finish][21]
|
||||
(等待安装完成)
|
||||
|
||||
*等待安装完成*
|
||||
|
||||
安装完成后重新启动系统,你就能使用新的 Linux 发行版。
|
||||
|
||||
@ -126,7 +136,7 @@ via: https://itsfoss.com/replace-linux-from-dual-boot/
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,35 +1,35 @@
|
||||
三个开源的分布式追踪工具
|
||||
======
|
||||
|
||||
这几个工具对复杂软件系统中的实时事件做了可视化,能帮助你快速发现性能问题。
|
||||
> 这几个工具对复杂软件系统中的实时事件做了可视化,能帮助你快速发现性能问题。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8)
|
||||
|
||||
分布式追踪系统能够从头到尾地追踪分布式系统中的请求,跨越多个应用、服务、数据库以及像代理这样的中间件。它能帮助你更深入地理解系统中到底发生了什么。追踪系统以图形化的方式,展示出每个已知步骤以及某个请求在每个步骤上的耗时。
|
||||
分布式追踪系统能够从头到尾地追踪跨越了多个应用、服务、数据库以及像代理这样的中间件的分布式软件的请求。它能帮助你更深入地理解系统中到底发生了什么。追踪系统以图形化的方式,展示出每个已知步骤以及某个请求在每个步骤上的耗时。
|
||||
|
||||
用户可以通过这些展示来判断系统的哪个环节有延迟或阻塞,当请求失败时,运维和开发人员可以看到准确的问题源头,而不需要去测试整个系统,比如用二叉查找树的方法去定位问题。在开发迭代的过程中,追踪系统还能够展示出可能引起性能变化的环节。通过异常行为的警告自动地感知到性能在退化,总是比客户告诉你要好。
|
||||
用户可以通过这些展示来判断系统的哪个环节有延迟或阻塞,当请求失败时,运维和开发人员可以看到准确的问题源头,而不需要去测试整个系统,比如用二叉查找树的方法去定位问题。在开发迭代的过程中,追踪系统还能够展示出可能引起性能变化的环节。通过异常行为的警告自动地感知到性能的退化,总是比客户告诉你要好。
|
||||
|
||||
追踪是怎么工作的呢?给每个请求分配一个特殊 ID,这个 ID 通常会插入到请求头部中。它唯一标识了对应的事务。一般把事务叫做 trace,trace 是抽象整个事务的概念。每一个 trace 由 span 组成,span 代表着一次请求中真正执行的操作,比如一次服务调用,一次数据库请求等。每一个 span 也有自己唯一的 ID。span 之下也可以创建子 span,子 span 可以有多个父 span。
|
||||
这种追踪是怎么工作的呢?给每个请求分配一个特殊 ID,这个 ID 通常会插入到请求头部中。它唯一标识了对应的事务。一般把事务叫做<ruby>踪迹<rt>trace</rt></ruby>,“踪迹”是整个事务的抽象概念。每一个“踪迹”由<ruby>单元<rt>span</rt></ruby>组成,“单元”代表着一次请求中真正执行的操作,比如一次服务调用,一次数据库请求等。每一个“单元”也有自己唯一的 ID。“单元”之下也可以创建子“单元”,子“单元”可以有多个父“单元”。
|
||||
|
||||
当一次事务(或者说 trace)运行过之后,就可以在追踪系统的表示层上搜索了。有几个工具可以用作表示层,我们下文会讨论,不过,我们先看下面的图,它是我在 [Istio walkthrough][2] 视频教程中提到的 [Jaeger][1] 界面,展示了单个 trace 中的多个 span。很明显,这个图能让你一目了然地对事务有更深的了解。
|
||||
当一次事务(或者说踪迹)运行过之后,就可以在追踪系统的表示层上搜索了。有几个工具可以用作表示层,我们下文会讨论,不过,我们先看下面的图,它是我在 [Istio walkthrough][2] 视频教程中提到的 [Jaeger][1] 界面,展示了单个踪迹中的多个单元。很明显,这个图能让你一目了然地对事务有更深的了解。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_jaeger_istio_0.png)
|
||||
|
||||
这个 demo 使用了 Istio 内置的 OpenTracing 实现,所以我甚至不需要修改自己的应用代码就可以获得追踪数据。我也用到了 Jaeger,它是兼容 OpenTracing 的。
|
||||
这个演示使用了 Istio 内置的 OpenTracing 实现,所以我甚至不需要修改自己的应用代码就可以获得追踪数据。我也用到了 Jaeger,它是兼容 OpenTracing 的。
|
||||
|
||||
那么 OpenTracing 到底是什么呢?我们来看看。
|
||||
|
||||
### OpenTracing API
|
||||
|
||||
[OpenTracing][3] 是源自 [Zipkin][4] 的规范,以提供跨平台兼容性。它提供了对厂商中立的 API,用来向应用程序添加追踪功能并将追踪数据发送到分布式的追踪系统。按照 OpenTracing 规范编写的库,可以被任何兼容 OpenTracing 的系统使用。采用这个开放标准的开源工具有 Zipkin,Jaeger,和 Appdash 等。甚至像 [Datadog][5] 和 [Instana][6] 这种付费工具也在采用。因为现在 OpenTracing 已经无处不在,这样的趋势有望继续发展下去。
|
||||
[OpenTracing][3] 是源自 [Zipkin][4] 的规范,以提供跨平台兼容性。它提供了对厂商中立的 API,用来向应用程序添加追踪功能并将追踪数据发送到分布式的追踪系统。按照 OpenTracing 规范编写的库,可以被任何兼容 OpenTracing 的系统使用。采用这个开放标准的开源工具有 Zipkin、Jaeger 和 Appdash 等。甚至像 [Datadog][5] 和 [Instana][6] 这种付费工具也在采用。因为现在 OpenTracing 已经无处不在,这样的趋势有望继续发展下去。
|
||||
|
||||
### OpenCensus
|
||||
|
||||
OpenTracing 已经说过了,可 [OpenCensus][7] 又是什么呢?它在搜索结果中老是出现。它是一个和 OpenTracing 完全不同或者互补的竞争标准吗?
|
||||
|
||||
这个问题的答案取决于你的提问对象。我先尽我所能地解释一下他们的不同(按照我的理解):OpenCensus 更加全面或者说它包罗万象。OpenTracing 专注于建立开放的 API 和规范,而不是为每一种开发语言和追踪系统都提供开放的实现。OpenCensus 不仅提供规范,还提供开发语言的实现,和连接协议,而且它不仅只做追踪,还引入了额外的度量指标,这些一般不在分布式追踪系统的职责范围。
|
||||
这个问题的答案取决于你的提问对象。我先尽我所能地解释一下它们的不同(按照我的理解):OpenCensus 更加全面或者说它包罗万象。OpenTracing 专注于建立开放的 API 和规范,而不是为每一种开发语言和追踪系统都提供开放的实现。OpenCensus 不仅提供规范,还提供开发语言的实现,和连接协议,而且它不仅只做追踪,还引入了额外的度量指标,这些一般不在分布式追踪系统的职责范围。
|
||||
|
||||
使用 OpenCensus,我们能够在运行着应用程序的主机上查看追踪数据,但它也有个可插拔的导出器系统,用于导出数据到中心聚合器。目前 OpenCensus 团队提供的导出器包括 Zipkin,Prometheus,Jaeger,Stackdriver,Datadog 和 SignalFx,不过任何人都可以创建一个导出器。
|
||||
使用 OpenCensus,我们能够在运行着应用程序的主机上查看追踪数据,但它也有个可插拔的导出器系统,用于导出数据到中心聚合器。目前 OpenCensus 团队提供的导出器包括 Zipkin、Prometheus、Jaeger、Stackdriver、Datadog 和 SignalFx,不过任何人都可以创建一个导出器。
|
||||
|
||||
依我看这两者有很多重叠的部分,没有哪个一定比另外一个好,但是重要的是,要知道它们做什么事情和不做什么事情。OpenTracing 主要是一个规范,具体的实现和独断的设计由其他人来做。OpenCensus 更加独断地为本地组件提供了全面的解决方案,但是仍然需要其他系统做远程的聚合。
|
||||
|
||||
@ -39,23 +39,23 @@ OpenTracing 已经说过了,可 [OpenCensus][7] 又是什么呢?它在搜索
|
||||
|
||||
Zipkin 是最早出现的这类工具之一。 谷歌在 2010 年发表了介绍其内部追踪系统 Dapper 的[论文][8],Twitter 以此为基础开发了 Zipkin。Zipkin 的开发语言 Java,用 Cassandra 或 ElasticSearch 作为可扩展的存储后端,这些选择能满足大部分公司的需求。Zipkin 支持的最低 Java 版本是 Java 6,它也使用了 [Thrift][9] 的二进制通信协议,Thrift 在 Twitter 的系统中很流行,现在作为 Apache 项目在托管。
|
||||
|
||||
这个系统包括上报器(客户端),数据收集器,查询服务和一个 web 界面。Zipkin 只传输一个带事务上下文的 trace ID 来告知接收者追踪的进行,所以说在生产环境中是安全的。每一个客户端收集到的数据,会异步地传输到数据收集器。收集器把这些 span 的数据存到数据库,web 界面负责用可消费的格式展示这些数据给用户。客户端传输数据到收集器有三种方式:HTTP,Kafka 和 Scribe。
|
||||
这个系统包括上报器(客户端)、数据收集器、查询服务和一个 web 界面。Zipkin 只传输一个带事务上下文的踪迹 ID 来告知接收者追踪的进行,所以说在生产环境中是安全的。每一个客户端收集到的数据,会异步地传输到数据收集器。收集器把这些单元的数据存到数据库,web 界面负责用可消费的格式展示这些数据给用户。客户端传输数据到收集器有三种方式:HTTP、Kafka 和 Scribe。
|
||||
|
||||
[Zipkin 社区][10] 还提供了 [Brave][11],一个跟 Zipkin 兼容的 Java 客户端的实现。由于 Brave 没有任何依赖,所以它不会拖累你的项目,也不会使用跟你们公司标准不兼容的库来搞乱你的项目。除 Brave 之外,还有很多其他的 Zipkin 客户端实现,因为 Zipkin 和 OpenTracing 标准是兼容的,所以这些实现也能用到其他的分布式追踪系统中。流行的 Spring 框架中一个叫 [Spring Cloud Sleuth][12] 的分布式追踪组件,它和 Zipkin 是兼容的。
|
||||
[Zipkin 社区][10] 还提供了 [Brave][11],一个跟 Zipkin 兼容的 Java 客户端的实现。由于 Brave 没有任何依赖,所以它不会拖累你的项目,也不会使用跟你们公司标准不兼容的库来搞乱你的项目。除 Brave 之外,还有很多其他的 Zipkin 客户端实现,因为 Zipkin 和 OpenTracing 标准是兼容的,所以这些实现也能用到其他的分布式追踪系统中。流行的 Spring 框架中一个叫 [Spring Cloud Sleuth][12] 的分布式追踪组件,它和 Zipkin 是兼容的。
|
||||
|
||||
#### Jaeger
|
||||
|
||||
[Jaeger][1] 来自 Uber,是一个比较新的项目,[CNCF][13] (云原生计算基金会)已经把 Jaeger 托管为孵化项目。Jaeger 使用 Golang 开发,因此你不用担心在服务器上安装依赖的问题,也不用担心开发语言的解释器或虚拟机的开销。和 Zipkin 类似,Jaeger 也支持用 Cassandra 和 ElasticSearch 做可扩展的存储后端。Jaeger 也完全兼容 OpenTracing 标准。
|
||||
[Jaeger][1] 来自 Uber,是一个比较新的项目,[CNCF][13](云原生计算基金会)已经把 Jaeger 托管为孵化项目。Jaeger 使用 Golang 开发,因此你不用担心在服务器上安装依赖的问题,也不用担心开发语言的解释器或虚拟机的开销。和 Zipkin 类似,Jaeger 也支持用 Cassandra 和 ElasticSearch 做可扩展的存储后端。Jaeger 也完全兼容 OpenTracing 标准。
|
||||
|
||||
Jaeger 的架构跟 Zipkin 很像,有客户端(上报器),数据收集器,查询服务和一个 web 界面,不过它还有一个在各个服务器上运行着的代理,负责在服务器本地做数据聚合。代理通过一个 UDP 连接接收数据,然后分批处理,发送到数据收集器。收集器接收到的数据是 [Thrift][14] 协议的格式,它把数据存到 Cassandra 或者 ElasticSearch 中。查询服务能直接访问数据库,并给 web 界面提供所需的信息。
|
||||
Jaeger 的架构跟 Zipkin 很像,有客户端(上报器)、数据收集器、查询服务和一个 web 界面,不过它还有一个在各个服务器上运行着的代理,负责在服务器本地做数据聚合。代理通过一个 UDP 连接接收数据,然后分批处理,发送到数据收集器。收集器接收到的数据是 [Thrift][14] 协议的格式,它把数据存到 Cassandra 或者 ElasticSearch 中。查询服务能直接访问数据库,并给 web 界面提供所需的信息。
|
||||
|
||||
默认情况下,Jaeger 客户端不会采集所有的追踪数据,只抽样了 0.1% 的( 1000 个采 1 个)追踪数据。对大多数系统来说,保留所有的追踪数据并传输的话就太多了。不过,通过配置代理可以调整这个值,客户端会从代理获取自己的配置。这个抽样并不是完全随机的,并且正在变得越来越好。Jaeger 使用概率抽样,试图对是否应该对新踪迹进行抽样进行有根据的猜测。 自适应采样已经在[路线图][15],它将通过添加额外的,能够帮助做决策的上下文,来改进采样算法。
|
||||
默认情况下,Jaeger 客户端不会采集所有的追踪数据,只抽样了 0.1%(1000 个采 1 个)的追踪数据。对大多数系统来说,保留所有的追踪数据并传输的话就太多了。不过,通过配置代理可以调整这个值,客户端会从代理获取自己的配置。这个抽样并不是完全随机的,并且正在变得越来越好。Jaeger 使用概率抽样,试图对是否应该对新踪迹进行抽样做出有根据的猜测。[自适应采样已经在路线图当中][15],它将通过添加额外的、能够帮助做决策的上下文来改进采样算法。
|
||||
|
||||
#### Appdash
|
||||
|
||||
[Appdash][16] 也是一个用 Golang 写的分布式追踪系统,和 Jaeger 一样。Appdash 是 [Sourcegraph][17] 公司基于谷歌的 Dapper 和 Twitter 的 Zipkin 开发的。同样的,它也支持 Opentracing 标准,不过这是后来添加的功能,依赖了一个与默认组件不同的组件,因此增加了风险和复杂度。
|
||||
|
||||
从高层次来看,Appdash 的架构主要有三个部分:客户端,本地收集器和远程收集器。因为没有很多文档,所以这个架构描述是基于对系统的测试以及查看源码。写代码时需要把 Appdash 的客户端添加进来。 Appdash 提供了 Python,Golang 和 Ruby 的实现,不过 OpenTracing 库可以与 Appdash 的 OpenTracing 实现一起使用。 客户端收集 span 数据,并将它们发送到本地收集器。然后,本地收集器将数据发送到中心的 Appdash 服务器,这个服务器上运行着自己的本地收集器,它的本地收集器是其他所有节点的远程收集器。
|
||||
从高层次来看,Appdash 的架构主要有三个部分:客户端、本地收集器和远程收集器。因为没有很多文档,所以这个架构描述是基于对系统的测试以及查看源码。写代码时需要把 Appdash 的客户端添加进来。Appdash 提供了 Python、Golang 和 Ruby 的实现,不过 OpenTracing 库可以与 Appdash 的 OpenTracing 实现一起使用。 客户端收集单元数据,并将它们发送到本地收集器。然后,本地收集器将数据发送到中心的 Appdash 服务器,这个服务器上运行着自己的本地收集器,它的本地收集器是其他所有节点的远程收集器。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -64,7 +64,7 @@ via: https://opensource.com/article/18/9/distributed-tracing-tools
|
||||
作者:[Dan Barker][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[belitex](https://github.com/belitex)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -3,59 +3,51 @@
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Find-And-Delete-Duplicate-Files-720x340.png)
|
||||
|
||||
在编辑或修改配置文件或旧文件前,我经常会把它们备份到硬盘的某个地方,因此我如果意外地改错了这些文件,我可以从备份中恢复它们。但问题是如果我忘记清理备份文件,一段时间之后,我的磁盘会被这些大量重复文件填满。我觉得要么是懒得清理这些旧文件,要么是担心可能会删掉重要文件。如果你们像我一样,在类 Unix 操作系统中,大量多版本的相同文件放在不同的备份目录,你可以使用下面的工具找到并删除重复文件。
|
||||
在编辑或修改配置文件或旧文件前,我经常会把它们备份到硬盘的某个地方,因此我如果意外地改错了这些文件,我可以从备份中恢复它们。但问题是如果我忘记清理备份文件,一段时间之后,我的磁盘会被这些大量重复文件填满 —— 我觉得要么是懒得清理这些旧文件,要么是担心可能会删掉重要文件。如果你们像我一样,在类 Unix 操作系统中,大量多版本的相同文件放在不同的备份目录,你可以使用下面的工具找到并删除重复文件。
|
||||
|
||||
**提醒一句:**
|
||||
|
||||
在删除重复文件的时请尽量小心。如果你不小心,也许会导致[**意外丢失数据**][1]。我建议你在使用这些工具的时候要特别注意。
|
||||
在删除重复文件的时请尽量小心。如果你不小心,也许会导致[意外丢失数据][1]。我建议你在使用这些工具的时候要特别注意。
|
||||
|
||||
### 在 Linux 中找到并删除重复文件
|
||||
|
||||
|
||||
出于本指南的目的,我将讨论下面的三个工具:
|
||||
|
||||
1. Rdfind
|
||||
2. Fdupes
|
||||
3. FSlint
|
||||
|
||||
这三个工具是自由开源的,且运行在大多数类 Unix 系统中。
|
||||
|
||||
#### 1. Rdfind
|
||||
|
||||
这三个工具是免费的、开源的,且运行在大多数类 Unix 系统中。
|
||||
|
||||
##### 1. Rdfind
|
||||
|
||||
**Rdfind** 代表找到找到冗余数据,是一个通过访问目录和子目录来找出重复文件的免费、开源的工具。它是基于文件内容而不是文件名来比较。Rdfind 使用**排序**算法来区分原始文件和重复文件。如果你有两个或者更多的相同文件,Rdfind 会很智能的找到原始文件并认定剩下的文件为重复文件。一旦找到副本文件,它会向你报告。你可以决定是删除还是使用[**硬链接**或者**符号(软)链接**][2]代替它们。
|
||||
**Rdfind** 意即 **r**edundant **d**ata **find**(冗余数据查找),是一个通过访问目录和子目录来找出重复文件的自由开源的工具。它是基于文件内容而不是文件名来比较。Rdfind 使用**排序**算法来区分原始文件和重复文件。如果你有两个或者更多的相同文件,Rdfind 会很智能的找到原始文件并认定剩下的文件为重复文件。一旦找到副本文件,它会向你报告。你可以决定是删除还是使用[硬链接或者符号(软)链接][2]代替它们。
|
||||
|
||||
**安装 Rdfind**
|
||||
|
||||
Rdfind 存在于 [**AUR**][3] 中。因此,在基于 Arch 的系统中,你可以像下面一样使用任一如 [**Yay**][4] AUR 程序助手安装它。
|
||||
Rdfind 存在于 [AUR][3] 中。因此,在基于 Arch 的系统中,你可以像下面一样使用任一如 [Yay][4] AUR 程序助手安装它。
|
||||
|
||||
```
|
||||
$ yay -S rdfind
|
||||
|
||||
```
|
||||
|
||||
在 Debian、Ubuntu、Linux Mint 上:
|
||||
|
||||
```
|
||||
$ sudo apt-get install rdfind
|
||||
|
||||
```
|
||||
|
||||
在 Fedora 上:
|
||||
|
||||
```
|
||||
$ sudo dnf install rdfind
|
||||
|
||||
```
|
||||
|
||||
在 RHEL、CentOS 上:
|
||||
|
||||
```
|
||||
$ sudo yum install epel-release
|
||||
|
||||
$ sudo yum install rdfind
|
||||
|
||||
```
|
||||
|
||||
**用法**
|
||||
@ -64,12 +56,11 @@ $ sudo yum install rdfind
|
||||
|
||||
```
|
||||
$ rdfind ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/09/rdfind-1.png)
|
||||
|
||||
正如你看到上面的截屏,Rdfind 命令将扫描 ~/Downloads 目录,并将结果存储到当前工作目录下一个名为 **results.txt** 的文件中。你可以在 results.txt 文件中看到可能是重复文件的名字。
|
||||
正如你看到上面的截屏,Rdfind 命令将扫描 `~/Downloads` 目录,并将结果存储到当前工作目录下一个名为 `results.txt` 的文件中。你可以在 `results.txt` 文件中看到可能是重复文件的名字。
|
||||
|
||||
```
|
||||
$ cat results.txt
|
||||
@ -84,13 +75,12 @@ DUPTYPE_WITHIN_SAME_TREE -13 0 403635 2050 15741071 1 /home/sk/Downloads/Hyperle
|
||||
|
||||
```
|
||||
|
||||
通过检查 results.txt 文件,你可以很容易的找到那些重复文件。如果愿意你可以手动的删除它们。
|
||||
通过检查 `results.txt` 文件,你可以很容易的找到那些重复文件。如果愿意你可以手动的删除它们。
|
||||
|
||||
此外,你可在不修改其他事情情况下使用 **-dryrun** 选项找出所有重复文件,并在终端上输出汇总信息。
|
||||
此外,你可在不修改其他事情情况下使用 `-dryrun` 选项找出所有重复文件,并在终端上输出汇总信息。
|
||||
|
||||
```
|
||||
$ rdfind -dryrun true ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
一旦找到重复文件,你可以使用硬链接或符号链接代替它们。
|
||||
@ -99,21 +89,18 @@ $ rdfind -dryrun true ~/Downloads
|
||||
|
||||
```
|
||||
$ rdfind -makehardlinks true ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
使用符号链接/软链接代替所有重复文件,运行:
|
||||
|
||||
```
|
||||
$ rdfind -makesymlinks true ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
目录中有一些空文件,也许你想忽略他们,你可以像下面一样使用 **-ignoreempty** 选项:
|
||||
目录中有一些空文件,也许你想忽略它们,你可以像下面这样使用 `-ignoreempty` 选项:
|
||||
|
||||
```
|
||||
$ rdfind -ignoreempty true ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
如果你不再想要这些旧文件,删除重复文件,而不是使用硬链接或软链接代替它们。
|
||||
@ -122,33 +109,29 @@ $ rdfind -ignoreempty true ~/Downloads
|
||||
|
||||
```
|
||||
$ rdfind -deleteduplicates true ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
如果你不想忽略空文件,而是想将它们连同所有重复文件一起删除,运行:
|
||||
|
||||
```
|
||||
$ rdfind -deleteduplicates true -ignoreempty false ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
更多细节,参照帮助部分:
|
||||
|
||||
```
|
||||
$ rdfind --help
|
||||
|
||||
```
|
||||
|
||||
手册页:
|
||||
|
||||
```
|
||||
$ man rdfind
|
||||
|
||||
```
|
||||
|
||||
##### 2. Fdupes
|
||||
#### 2. Fdupes
|
||||
|
||||
**Fdupes** 是另一个在指定目录以及子目录中识别和移除重复文件的命令行工具。这是一个使用 **C** 语言编写的免费、开源工具。Fdupes 通过对比文件大小、部分 MD5 签名、全部 MD5 签名,最后执行逐个字节对比校验来识别重复文件。
|
||||
**Fdupes** 是另一个在指定目录以及子目录中识别和移除重复文件的命令行工具。这是一个使用 C 语言编写的自由开源工具。Fdupes 通过对比文件大小、部分 MD5 签名、全部 MD5 签名,最后执行逐个字节对比校验来识别重复文件。
|
||||
|
||||
与 Rdfind 工具类似,Fdupes 附带非常少的选项来执行操作,如:
|
||||
|
||||
@ -159,8 +142,6 @@ $ man rdfind
|
||||
* 使用不同的拥有者/组或权限位来排除重复文件
|
||||
* 更多
|
||||
|
||||
|
||||
|
||||
**安装 Fdupes**
|
||||
|
||||
Fdupes 存在于大多数 Linux 发行版的默认仓库中。
|
||||
@ -169,39 +150,33 @@ Fdupes 存在于大多数 Linux 发行版的默认仓库中。
|
||||
|
||||
```
|
||||
$ sudo pacman -S fdupes
|
||||
|
||||
```
|
||||
|
||||
在 Debian、Ubuntu、Linux Mint 上:
|
||||
|
||||
```
|
||||
$ sudo apt-get install fdupes
|
||||
|
||||
```
|
||||
|
||||
在 Fedora 上:
|
||||
|
||||
```
|
||||
$ sudo dnf install fdupes
|
||||
|
||||
```
|
||||
|
||||
在 RHEL、CentOS 上:
|
||||
|
||||
```
|
||||
$ sudo yum install epel-release
|
||||
|
||||
$ sudo yum install fdupes
|
||||
|
||||
```
|
||||
|
||||
**用法**
|
||||
|
||||
Fdupes 用法非常简单。仅运行下面的命令就可以在目录中找到重复文件,如:**~/Downloads**.
|
||||
Fdupes 用法非常简单。仅运行下面的命令就可以在目录中找到重复文件,如:`~/Downloads`。
|
||||
|
||||
```
|
||||
$ fdupes ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
我系统中的样例输出:
|
||||
@ -209,69 +184,61 @@ $ fdupes ~/Downloads
|
||||
```
|
||||
/home/sk/Downloads/Hyperledger.pdf
|
||||
/home/sk/Downloads/Hyperledger(1).pdf
|
||||
|
||||
```
|
||||
你可以看到,在 **/home/sk/Downloads/** 目录下有一个重复文件。它仅显示了父级目录中的重复文件。如何显示子目录中的重复文件?像下面一样,使用 **-r** 选项。
|
||||
|
||||
你可以看到,在 `/home/sk/Downloads/` 目录下有一个重复文件。它仅显示了父级目录中的重复文件。如何显示子目录中的重复文件?像下面一样,使用 `-r` 选项。
|
||||
|
||||
```
|
||||
$ fdupes -r ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
现在你将看到 **/home/sk/Downloads/** 目录以及子目录中的重复文件。
|
||||
现在你将看到 `/home/sk/Downloads/` 目录以及子目录中的重复文件。
|
||||
|
||||
Fdupes 也可用来从多个目录中迅速查找重复文件。
|
||||
|
||||
```
|
||||
$ fdupes ~/Downloads ~/Documents/ostechnix
|
||||
|
||||
```
|
||||
|
||||
你甚至可以搜索多个目录,递归搜索其中一个目录,如下:
|
||||
|
||||
```
|
||||
$ fdupes ~/Downloads -r ~/Documents/ostechnix
|
||||
|
||||
```
|
||||
|
||||
上面的命令将搜索 “~/Downloads” 目录,“~/Documents/ostechnix” 目录和它的子目录中的重复文件。
|
||||
上面的命令将搜索 `~/Downloads` 目录,`~/Documents/ostechnix` 目录和它的子目录中的重复文件。
|
||||
|
||||
有时,你可能想要知道一个目录中重复文件的大小。你可以使用 **-S** 选项,如下:
|
||||
有时,你可能想要知道一个目录中重复文件的大小。你可以使用 `-S` 选项,如下:
|
||||
|
||||
```
|
||||
$ fdupes -S ~/Downloads
|
||||
403635 bytes each:
|
||||
/home/sk/Downloads/Hyperledger.pdf
|
||||
/home/sk/Downloads/Hyperledger(1).pdf
|
||||
|
||||
```
|
||||
|
||||
类似的,为了显示父目录和子目录中重复文件的大小,使用 **-Sr** 选项。
|
||||
类似的,为了显示父目录和子目录中重复文件的大小,使用 `-Sr` 选项。
|
||||
|
||||
我们可以在计算时分别使用 **-n** 和 **-A** 选项排除空白文件以及排除隐藏文件。
|
||||
我们可以在计算时分别使用 `-n` 和 `-A` 选项排除空白文件以及排除隐藏文件。
|
||||
|
||||
```
|
||||
$ fdupes -n ~/Downloads
|
||||
|
||||
$ fdupes -A ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
在搜索指定目录的重复文件时,第一个命令将排除零长度文件,后面的命令将排除隐藏文件。
|
||||
|
||||
汇总重复文件信息,使用 **-m** 选项。
|
||||
汇总重复文件信息,使用 `-m` 选项。
|
||||
|
||||
```
|
||||
$ fdupes -m ~/Downloads
|
||||
1 duplicate files (in 1 sets), occupying 403.6 kilobytes
|
||||
|
||||
```
|
||||
|
||||
删除所有重复文件,使用 **-d** 选项。
|
||||
删除所有重复文件,使用 `-d` 选项。
|
||||
|
||||
```
|
||||
$ fdupes -d ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
样例输出:
|
||||
@ -281,59 +248,51 @@ $ fdupes -d ~/Downloads
|
||||
[2] /home/sk/Downloads/Hyperledger Fabric Installation(1).pdf
|
||||
|
||||
Set 1 of 1, preserve files [1 - 2, all]:
|
||||
|
||||
```
|
||||
|
||||
这个命令将提示你保留还是删除所有其他重复文件。输入任一号码保留相应的文件,并删除剩下的文件。当使用这个选项的时候需要更加注意。如果不小心,你可能会删除原文件。
|
||||
|
||||
如果你想要每次保留每个重复文件集合的第一个文件,且无提示的删除其他文件,使用 **-dN** 选项(不推荐)。
|
||||
如果你想要每次保留每个重复文件集合的第一个文件,且无提示的删除其他文件,使用 `-dN` 选项(不推荐)。
|
||||
|
||||
```
|
||||
$ fdupes -dN ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
当遇到重复文件时删除它们,使用 **-I** 标志。
|
||||
当遇到重复文件时删除它们,使用 `-I` 标志。
|
||||
|
||||
```
|
||||
$ fdupes -I ~/Downloads
|
||||
|
||||
```
|
||||
|
||||
关于 Fdupes 的更多细节,查看帮助部分和 man 页面。
|
||||
|
||||
```
|
||||
$ fdupes --help
|
||||
|
||||
$ man fdupes
|
||||
|
||||
```
|
||||
|
||||
##### 3. FSlint
|
||||
#### 3. FSlint
|
||||
|
||||
**FSlint** 是另外一个查找重复文件的工具,有时我用它去掉 Linux 系统中不需要的重复文件并释放磁盘空间。不像另外两个工具,FSlint 有 GUI 和 CLI 两种模式。因此对于新手来说它更友好。FSlint 不仅仅找出重复文件,也找出坏符号链接、坏名字文件、临时文件、坏 IDS、空目录和非剥离二进制文件等等。
|
||||
**FSlint** 是另外一个查找重复文件的工具,有时我用它去掉 Linux 系统中不需要的重复文件并释放磁盘空间。不像另外两个工具,FSlint 有 GUI 和 CLI 两种模式。因此对于新手来说它更友好。FSlint 不仅仅找出重复文件,也找出坏符号链接、坏名字文件、临时文件、坏的用户 ID、空目录和非精简的二进制文件等等。
|
||||
|
||||
**安装 FSlint**
|
||||
|
||||
FSlint 存在于 [**AUR**][5],因此你可以使用任一 AUR 助手安装它。
|
||||
FSlint 存在于 [AUR][5],因此你可以使用任一 AUR 助手安装它。
|
||||
|
||||
```
|
||||
$ yay -S fslint
|
||||
|
||||
```
|
||||
|
||||
在 Debian、Ubuntu、Linux Mint 上:
|
||||
|
||||
```
|
||||
$ sudo apt-get install fslint
|
||||
|
||||
```
|
||||
|
||||
在 Fedora 上:
|
||||
|
||||
```
|
||||
$ sudo dnf install fslint
|
||||
|
||||
```
|
||||
|
||||
在 RHEL,CentOS 上:
|
||||
@ -341,7 +300,6 @@ $ sudo dnf install fslint
|
||||
```
|
||||
$ sudo yum install epel-release
|
||||
$ sudo yum install fslint
|
||||
|
||||
```
|
||||
|
||||
一旦安装完成,从菜单或者应用程序启动器启动它。
|
||||
@ -350,13 +308,13 @@ FSlint GUI 展示如下:
|
||||
|
||||
![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-1.png)
|
||||
|
||||
如你所见,FSlint 接口友好、一目了然。在 **Search path** 栏,添加你要扫描的目录路径,点击左下角 **Find** 按钮查找重复文件。验证递归选项可以在目录和子目录中递归的搜索重复文件。FSlint 将快速的扫描给定的目录并列出重复文件。
|
||||
如你所见,FSlint 界面友好、一目了然。在 “Search path” 栏,添加你要扫描的目录路径,点击左下角 “Find” 按钮查找重复文件。验证递归选项可以在目录和子目录中递归的搜索重复文件。FSlint 将快速的扫描给定的目录并列出重复文件。
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/09/fslint-2.png)
|
||||
|
||||
从列表中选择那些要清理的重复文件,也可以选择 Save、Delete、Merge 和 Symlink 操作他们。
|
||||
从列表中选择那些要清理的重复文件,也可以选择 “Save”、“Delete”、“Merge” 和 “Symlink” 操作他们。
|
||||
|
||||
在 **Advanced search parameters** 栏,你可以在搜索重复文件的时候指定排除的路径。
|
||||
在 “Advanced search parameters” 栏,你可以在搜索重复文件的时候指定排除的路径。
|
||||
|
||||
![](http://www.ostechnix.com/wp-content/uploads/2018/09/fslint-3.png)
|
||||
|
||||
@ -364,52 +322,47 @@ FSlint GUI 展示如下:
|
||||
|
||||
FSlint 提供下面的 CLI 工具集在你的文件系统中查找重复文件。
|
||||
|
||||
* **findup** — 查找重复文件
|
||||
* **findnl** — 查找 Lint 名称文件(有问题的文件名)
|
||||
* **findu8** — 查找非法的 utf8 编码文件
|
||||
* **findbl** — 查找坏链接(有问题的符号链接)
|
||||
* **findsn** — 查找同名文件(可能有冲突的文件名)
|
||||
* **finded** — 查找空目录
|
||||
* **findid** — 查找死用户的文件
|
||||
* **findns** — 查找非剥离的可执行文件
|
||||
* **findrs** — 查找文件中多于的空白
|
||||
* **findtf** — 查找临时文件
|
||||
* **findul** — 查找可能未使用的库
|
||||
* **zipdir** — 回收 ext2 目录实体下浪费的空间
|
||||
* `findup` — 查找重复文件
|
||||
* `findnl` — 查找名称规范(有问题的文件名)
|
||||
* `findu8` — 查找非法的 utf8 编码的文件名
|
||||
* `findbl` — 查找坏链接(有问题的符号链接)
|
||||
* `findsn` — 查找同名文件(可能有冲突的文件名)
|
||||
* `finded` — 查找空目录
|
||||
* `findid` — 查找死用户的文件
|
||||
* `findns` — 查找非精简的可执行文件
|
||||
* `findrs` — 查找文件名中多余的空白
|
||||
* `findtf` — 查找临时文件
|
||||
* `findul` — 查找可能未使用的库
|
||||
* `zipdir` — 回收 ext2 目录项下浪费的空间
|
||||
|
||||
|
||||
|
||||
所有这些工具位于 **/usr/share/fslint/fslint/fslint** 下面。
|
||||
所有这些工具位于 `/usr/share/fslint/fslint/fslint` 下面。
|
||||
|
||||
|
||||
例如,在给定的目录中查找重复文件,运行:
|
||||
|
||||
```
|
||||
$ /usr/share/fslint/fslint/findup ~/Downloads/
|
||||
|
||||
```
|
||||
|
||||
类似的,找出空目录命令是:
|
||||
|
||||
```
|
||||
$ /usr/share/fslint/fslint/finded ~/Downloads/
|
||||
|
||||
```
|
||||
|
||||
获取每个工具更多细节,例如:**findup**,运行:
|
||||
获取每个工具更多细节,例如:`findup`,运行:
|
||||
|
||||
```
|
||||
$ /usr/share/fslint/fslint/findup --help
|
||||
|
||||
```
|
||||
|
||||
关于 FSlint 的更多细节,参照帮助部分和 man 页。
|
||||
|
||||
```
|
||||
$ /usr/share/fslint/fslint/fslint --help
|
||||
|
||||
$ man fslint
|
||||
|
||||
```
|
||||
|
||||
### 总结
|
||||
@ -427,7 +380,7 @@ via: https://www.ostechnix.com/how-to-find-and-delete-duplicate-files-in-linux/
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[pygmalion666](https://github.com/pygmalion666)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,69 @@
|
||||
如何在家中使用 SSH 和 SFTP 协议
|
||||
======
|
||||
|
||||
> 通过 SSH 和 SFTP 协议,我们能够访问其他设备,有效而且安全的传输文件等等。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab)
|
||||
|
||||
几年前,我决定配置另外一台电脑,以便我能在工作时访问它来传输我所需要的文件。要做到这一点,最基本的一步是要求你的网络提供商(ISP)提供一个固定的地址。
|
||||
|
||||
有一个不必要但很重要的步骤,就是保证你的这个可以访问的系统是安全的。在我的这种情况下,我计划只在工作场所访问它,所以我能够限定访问的 IP 地址。即使如此,你依然要尽多的采用安全措施。一旦你建立起来这个系统,全世界的人们马上就能尝试访问你的系统。这是非常令人惊奇及恐慌的。你能通过日志文件来发现这一点。我推测有探测机器人在尽其所能的搜索那些没有安全措施的系统。
|
||||
|
||||
在我设置好系统不久后,我觉得这种访问没什么大用,为此,我将它关闭了以便不再为它操心。尽管如此,只要架设了它,在家庭网络中使用 SSH 和 SFTP 还是有点用的。
|
||||
|
||||
当然,有一个必备条件,这个另外的电脑必须已经开机了,至于电脑是否登录与否无所谓的。你也需要知道其 IP 地址。有两个方法能够知道,一个是通过浏览器访问你的路由器,一般情况下你的地址格式类似于 192.168.1.254 这样。通过一些搜索,很容易找出当前是开机的并且接在 eth0 或者 wifi 上的系统。如何识别你所要找到的电脑可能是个挑战。
|
||||
|
||||
更容易找到这个电脑的方式是,打开 shell,输入 :
|
||||
|
||||
```
|
||||
ifconfig
|
||||
```
|
||||
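运行后的输出大致如下(接口名称和具体地址因机器而异,这里仅作示意):

```
wlp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.234  netmask 255.255.255.0  broadcast 192.168.1.255
```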
|
||||
命令会输出一些信息,你所需要的信息在 `inet` 后面,看起来和 192.168.1.234 类似。当你发现这个后,回到你要访问这台主机的客户端电脑,在命令行中输入 :
|
||||
|
||||
```
|
||||
ssh gregp@192.168.1.234
|
||||
```
|
||||
|
||||
如果要让上面的命令能够正常执行,`gregp` 必须是该主机系统中真实存在的用户名。接着会询问你该用户的密码,如果用户名和密码都正确,你就会通过 shell 连接上这台电脑。坦白说,我并不经常使用 SSH,只是偶尔用它运行 `dnf` 来更新我常用电脑之外的其它电脑。更多时候,我用的是 SFTP:
|
||||
|
||||
```
|
||||
sftp gregp@192.168.1.234
|
||||
```
|
||||
|
||||
大多数时候,我只是需要用一种简单的方法把文件传输到另一台电脑。相对于闪存盘和其它额外设备,它更加方便,耗时更少。
|
||||
|
||||
一旦连接建立成功,SFTP 有两个基本的命令,`get`,从主机接收文件 ;`put`,向主机发送文件。在连接之前,我经常在客户端移动到我想接收或者传输的文件夹下。在连接之后,你将处于一个顶层目录里,比如 `home/gregp`。一旦连接成功,你可以像在客户端一样的使用 `cd`,改变你在主机上的工作路径。你也许需要用 `ls` 来确认你的位置。
|
||||
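一个典型的会话片段大致像这样(目录名和文件名仅为示例):

```
sftp> cd Documents
sftp> ls
sftp> get report.pdf
sftp> put notes.txt
```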
|
||||
如果你想改变客户端的工作目录,可以用 `lcd` 命令(即 local change directory)。同样地,用 `lls` 可以显示客户端工作目录的内容。
|
||||
|
||||
如果主机上没有你想要的目录名,你该怎么办?用 `mkdir` 在主机上创建一个新的目录。或者你可以将整个目录的文件全拷贝到主机 :
|
||||
|
||||
```
|
||||
put -r thisDir/
|
||||
```
|
||||
|
||||
这将在主机上创建该目录,并把它的全部文件和子目录复制到主机上。这种传输非常快,能达到硬件的上限,不会像通过互联网传输那样遇到网络瓶颈。要查看在 SFTP 会话中能够使用的命令列表,请运行:
|
||||
|
||||
```
|
||||
man sftp
|
||||
```
|
||||
|
||||
我也能够在我的电脑上的 Windows 虚拟机内用 SFTP,这是配置一个虚拟机而不是一个双系统的另外一个优势。这让我能够在系统的 Linux 部分移入或者移出文件。而我只需要在 Windows 中使用一个客户端就行。
|
||||
|
||||
你能够使用 SSH 或 SFTP 访问通过网线或者 WiFi 连接到你路由器的任何设备。这里,我使用了一个叫做 [SSHDroid][1] 的应用,它能够以被动模式运行 SSH。换句话说,你能够用你的电脑访问作为主机的 Android 设备。近来我还发现了另外一个应用 [Admin Hands][2],不管你的客户端是平板还是手机,都能用它进行 SSH 或者 SFTP 操作。这个应用对于备份和分享手机里的照片非常好用。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/ssh-sftp-home-network
|
||||
|
||||
作者:[Geg Pittman][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[singledo](https://github.com/singledo)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/greg-p
|
||||
[1]: https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid
|
||||
[2]: https://play.google.com/store/apps/details?id=com.arpaplus.adminhands&hl=en_US
|
68
published/20181003 Introducing Swift on Fedora.md
Normal file
68
published/20181003 Introducing Swift on Fedora.md
Normal file
@ -0,0 +1,68 @@
|
||||
介绍 Fedora 上的 Swift
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/09/swift-816x345.jpg)
|
||||
|
||||
Swift 是一种通用编程语言,它以现代的方式来实现安全、高性能和软件设计模式。它旨在成为各种编程项目的最佳语言,从系统编程到桌面应用程序,再到扩展到云服务。继续阅读,了解它以及如何在 Fedora 中尝试它。
|
||||
|
||||
### 安全、快速、富有表现力
|
||||
|
||||
与许多现代编程语言一样,Swift 被设计得比基于 C 的语言更安全。例如,变量总是在使用之前被初始化,数组和整数会进行溢出检查,内存也是自动管理的。
|
||||
|
||||
Swift 将意图放在语法中。要声明变量,请使用 `var` 关键字。要声明常量,请使用 `let`。
|
||||
|
||||
Swift 还保证对象永远不会是 `nil`。实际上,尝试使用已知为 `nil` 的对象将导致编译时错误。当使用 `nil` 值时,它支持一种称为 **optional** 的机制。optional 可能包含 `nil`,但使用 `?` 运算符可以安全地解包。
|
||||
|
||||
更多的功能包括:
|
||||
|
||||
* 与函数指针统一的闭包
|
||||
* 元组和多个返回值
|
||||
* 泛型
|
||||
* 对范围或集合进行快速而简洁的迭代
|
||||
* 支持方法、扩展和协议的结构体
|
||||
* 函数式编程模式,例如 `map` 和 `filter`
|
||||
* 内置强大的错误处理
|
||||
* 拥有 `do`、`guard`、`defer` 和 `repeat` 关键字的高级控制流
|
||||
|
||||
|
||||
### 尝试 Swift
|
||||
|
||||
Swift 在 Fedora 28 中可用,包名为 **swift-lang**。安装完成后,运行 `swift` 并启动 REPL 控制台。
|
||||
|
||||
```
|
||||
$ swift
|
||||
Welcome to Swift version 4.2 (swift-4.2-RELEASE). Type :help for assistance.
|
||||
1> let greeting="Hello world!"
|
||||
greeting: String = "Hello world!"
|
||||
2> print(greeting)
|
||||
Hello world!
|
||||
3> greeting = "Hello universe!"
|
||||
error: repl.swift:3:10: error: cannot assign to value: 'greeting' is a 'let' constant
|
||||
greeting = "Hello universe!"
|
||||
~~~~~~~~ ^
|
||||
|
||||
|
||||
3>
|
||||
```
|
||||
|
||||
Swift 有一个不断发展的社区,特别的,有一个[工作组][1]致力于使其成为一种高效且有力的服务器端编程语言。请访问其[主页][2]了解更多参与方式。
|
||||
|
||||
图片由 [Uillian Vargas][3] 发布在 [Unsplash][4] 上。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/introducing-swift-fedora/
|
||||
|
||||
作者:[Link Dupont][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/linkdupont/
|
||||
[1]: https://swift.org/server/
|
||||
[2]: http://swift.org
|
||||
[3]: https://unsplash.com/photos/7oJpVR1inGk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[4]: https://unsplash.com/search/photos/fast?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -1,6 +1,7 @@
|
||||
使用 Python 为你的油箱加油
|
||||
======
|
||||
我来介绍一下我是如何使用 Python 来节省成本的。
|
||||
|
||||
> 我来介绍一下我是如何使用 Python 来节省成本的。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bulb-light-energy-power-idea.png?itok=zTEEmTZB)
|
||||
|
||||
@ -82,7 +83,7 @@ while i < 21: # 20 次迭代 (加油次数)
|
||||
|
||||
如你所见,这个调整会令混合汽油号数始终略高于 91。当然,我的油量表并没有 1/12 的刻度,但是 7/12 略小于 5/8,我可以近似地计算。
|
||||
|
||||
一个更简单地方案是每次都首先加满 93 号汽油,然后在油箱半满时加入 89 号汽油直到耗尽,这可能会是我的常规方案。但就我个人而言,这种方法并不太好,有时甚至会产生一些麻烦。但对于长途旅行来说,这种方案会相对简便一些。有时我也会因为油价突然下跌而购买一些汽油,所以,这个方案是我可以考虑的一系列选项之一。
|
||||
一个更简单的方案是每次都首先加满 93 号汽油,然后在油箱半满时加入 89 号汽油直到耗尽,这可能会是我的常规方案。就我个人而言,这种方法并不太好,有时甚至会产生一些麻烦。但对于长途旅行来说,这种方案会相对简便一些。有时我也会因为油价突然下跌而购买一些汽油,所以,这个方案是我可以考虑的一系列选项之一。
|
||||
|
||||
当然最重要的是:开车不写码,写码不开车!
|
||||
|
||||
@ -93,7 +94,7 @@ via: https://opensource.com/article/18/10/python-gas-pump
|
||||
作者:[Greg Pittman][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,186 @@
|
||||
如何创建和维护你自己的 man 手册
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Um-pages-1-720x340.png)
|
||||
|
||||
我们已经讨论了一些 [man 手册的替代方案][1]。 这些替代方案主要用于学习简洁的 Linux 命令示例,而无需通过全面而过于详细的手册页。 如果你正在寻找一种快速而简单的方法来轻松快速地学习 Linux 命令,那么这些替代方案值得尝试。 现在,你可能正在考虑 —— 如何为 Linux 命令创建自己的 man 式的帮助页面? 这时 “Um” 就派上用场了。 Um 是一个命令行实用程序,可以用于轻松创建和维护包含你到目前为止所了解的所有命令的 man 页面。
|
||||
|
||||
通过创建自己的手册页,你可以在手册页中避免大量不必要的细节,并且只包含你需要记住的内容。 如果你想创建自己的一套 man 式的页面,“Um” 也能为你提供帮助。 在这个简短的教程中,我们将学习如何安装 “Um” 命令以及如何创建自己的 man 手册页。
|
||||
|
||||
### 安装 Um
|
||||
|
||||
Um 适用于 Linux 和Mac OS。 目前,它只能在 Linux 系统中使用 Linuxbrew 软件包管理器来进行安装。 如果你尚未安装 Linuxbrew,请参考以下链接:
|
||||
|
||||
- [Linuxbrew:一个用于 Linux 和 MacOS 的通用包管理器][3]
|
||||
|
||||
安装 Linuxbrew 后,运行以下命令安装 Um 实用程序。
|
||||
|
||||
```
|
||||
$ brew install sinclairtarget/wst/um
|
||||
```
|
||||
|
||||
如果你看到类似下面的输出,恭喜你!Um 已经安装好并且可以使用了。
|
||||
|
||||
```
|
||||
[...]
|
||||
==> Installing sinclairtarget/wst/um
|
||||
==> Downloading https://github.com/sinclairtarget/um/archive/4.0.0.tar.gz
|
||||
==> Downloading from https://codeload.github.com/sinclairtarget/um/tar.gz/4.0.0
|
||||
-=#=# # #
|
||||
==> Downloading https://rubygems.org/gems/kramdown-1.17.0.gem
|
||||
######################################################################## 100.0%
|
||||
==> gem install /home/sk/.cache/Homebrew/downloads/d0a5d978120a791d9c5965fc103866815189a4e3939
|
||||
==> Caveats
|
||||
Bash completion has been installed to:
|
||||
/home/linuxbrew/.linuxbrew/etc/bash_completion.d
|
||||
==> Summary
|
||||
[] /home/linuxbrew/.linuxbrew/Cellar/um/4.0.0: 714 files, 1.3MB, built in 35 seconds
|
||||
==> Caveats
|
||||
==> openssl
|
||||
A CA file has been bootstrapped using certificates from the SystemRoots
|
||||
keychain. To add additional certificates (e.g. the certificates added in
|
||||
the System keychain), place .pem files in
|
||||
/home/linuxbrew/.linuxbrew/etc/openssl/certs
|
||||
|
||||
and run
|
||||
/home/linuxbrew/.linuxbrew/opt/openssl/bin/c_rehash
|
||||
==> ruby
|
||||
Emacs Lisp files have been installed to:
|
||||
/home/linuxbrew/.linuxbrew/share/emacs/site-lisp/ruby
|
||||
==> um
|
||||
Bash completion has been installed to:
|
||||
/home/linuxbrew/.linuxbrew/etc/bash_completion.d
|
||||
```
|
||||
|
||||
在制作你的 man 手册页之前,你需要为 Um 启用 bash 补全。
|
||||
|
||||
要开启 bash 补全,首先你需要打开 `~/.bash_profile` 文件:
|
||||
|
||||
```
|
||||
$ nano ~/.bash_profile
|
||||
```
|
||||
|
||||
并在其中添加以下内容:
|
||||
|
||||
```
|
||||
if [ -f $(brew --prefix)/etc/bash_completion.d/um-completion.sh ]; then
|
||||
. $(brew --prefix)/etc/bash_completion.d/um-completion.sh
|
||||
fi
|
||||
```
|
||||
|
||||
保存并关闭文件。运行以下命令以更新更改。
|
||||
|
||||
```
|
||||
$ source ~/.bash_profile
|
||||
```
|
||||
|
||||
准备工作全部完成。让我们继续创建我们的第一个 man 手册页。
|
||||
|
||||
### 创建并维护自己的 man 手册
|
||||
|
||||
如果你想为 `dpkg` 命令创建自己的 man 手册,请运行:
|
||||
|
||||
```
|
||||
$ um edit dpkg
|
||||
```
|
||||
|
||||
上面的命令将在默认编辑器中打开 markdown 模板:
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-dpkg-man-page.png)
|
||||
|
||||
我的默认编辑器是 Vi,因此上面的命令会在 Vi 编辑器中打开它。现在,开始在此模板中添加有关 `dpkg` 命令的所有内容。
|
||||
|
||||
下面是一个示例:
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Edit-dpkg-man-page.png)
|
||||
|
||||
正如你在上图的输出中看到的,我为 `dpkg` 命令添加了概要、描述和两个参数选项。你可以在 man 手册中添加你所需要的所有部分,不过要确保为每个部分提供适当且易于理解的标题。完成后,保存并退出文件(如果使用 Vi 编辑器,请按 `ESC` 键并键入 `:wq`)。
|
||||
|
||||
最后,使用以下命令查看新创建的 man 手册页:
|
||||
|
||||
```
|
||||
$ um dpkg
|
||||
```
|
||||
|
||||
![](http://www.ostechnix.com/wp-content/uploads/2018/10/View-dpkg-man-page.png)
|
||||
|
||||
如你所见,`dpkg` 的 man 手册页看起来与官方手册页完全相同。 如果要在手册页中编辑和/或添加更多详细信息,请再次运行相同的命令并添加更多详细信息。
|
||||
|
||||
```
|
||||
$ um edit dpkg
|
||||
```
|
||||
|
||||
要使用 Um 查看新创建的 man 手册页列表,请运行:
|
||||
|
||||
```
|
||||
$ um list
|
||||
```
|
||||
|
||||
所有手册页将保存在主目录中名为 `.um` 的目录下。
|
||||
|
||||
以防万一,如果你不想要某个特定页面,只需删除它,如下所示。
|
||||
|
||||
```
|
||||
$ um rm dpkg
|
||||
```
|
||||
|
||||
要查看帮助部分和所有可用的常规选项,请运行:
|
||||
|
||||
```
|
||||
$ um --help
|
||||
usage: um <page name>
|
||||
um <sub-command> [ARGS...]
|
||||
|
||||
The first form is equivalent to `um read <page name>`.
|
||||
|
||||
Subcommands:
|
||||
um (l)ist List the available pages for the current topic.
|
||||
um (r)ead <page name> Read the given page under the current topic.
|
||||
um (e)dit <page name> Create or edit the given page under the current topic.
|
||||
um rm <page name> Remove the given page.
|
||||
um (t)opic [topic] Get or set the current topic.
|
||||
um topics List all topics.
|
||||
um (c)onfig [config key] Display configuration environment.
|
||||
um (h)elp [sub-command] Display this help message, or the help message for a sub-command.
|
||||
```
|
||||
|
||||
### 配置 Um
|
||||
|
||||
要查看当前配置,请运行:
|
||||
|
||||
```
|
||||
$ um config
|
||||
Options prefixed by '*' are set in /home/sk/.um/umconfig.
|
||||
editor = vi
|
||||
pager = less
|
||||
pages_directory = /home/sk/.um/pages
|
||||
default_topic = shell
|
||||
pages_ext = .md
|
||||
```
|
||||
|
||||
在此文件中,你可以根据需要编辑 `pager`、`editor`、`default_topic`、`pages_directory` 和 `pages_ext` 等选项的值。比如说,如果你想把新创建的 Um 页面保存在 [Dropbox][2] 文件夹中,只需将 `~/.um/umconfig` 文件中 `pages_directory` 的值改为 Dropbox 文件夹的路径即可。
|
||||
|
||||
```
|
||||
pages_directory = /Users/myusername/Dropbox/um
|
||||
```
|
||||
|
||||
这就是全部内容,希望这些能对你有用,更多好的内容敬请关注!
|
||||
|
||||
干杯!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[way-ww](https://github.com/way-ww)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/
|
||||
[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/
|
||||
[3]: https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/
|
@ -1,9 +1,9 @@
|
||||
cloc –– 计算不同编程语言源代码的行数
|
||||
cloc:计算不同编程语言源代码的行数
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-720x340.png)
|
||||
|
||||
作为一个开发人员,你可能需要不时地向你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。这时,你就需要用到一些代码统计的工具,我知道其中一个是 [**Ohcount**][1]。今天,我遇到了另一个程序,**cloc**。你可以用 cloc 很容易地统计多种语言的源代码行数。它还可以计算空行数、代码行数、实际代码的行数,并通过整齐的表格进行结果输出。cloc 是免费的、开源的跨平台程序,使用 **Perl** 进行开发。
|
||||
作为一个开发人员,你可能需要不时地向你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。这时,你就需要用到一些代码统计的工具,我知道其中一个是 [**Ohcount**][1]。今天,我遇到了另一个程序,**cloc**。你可以用 cloc 很容易地统计多种语言的源代码行数。它还可以计算空行数、代码行数、实际代码的行数,并通过整齐的表格进行结果输出。cloc 是自由开源的跨平台程序,使用 **Perl** 进行开发。
|
||||
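举个最简单的例子(假设当前目录下有源代码文件,输出表格的样式请以实际运行结果为准):

```
$ cloc .
```

cloc 会递归统计当前目录下所有能识别的源代码文件,并按语言汇总空行数、注释行数和代码行数。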
|
||||
### 特点
|
||||
|
@ -0,0 +1,123 @@
|
||||
Minikube 入门:笔记本上的 Kubernetes
|
||||
======
|
||||
|
||||
> 运行 Minikube 的分步指南。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ)
|
||||
|
||||
在 [Hello Minikube][1] 教程页面上 Minikube 被宣传为基于 Docker 运行 Kubernetes 的一种简单方法。 虽然该文档非常有用,但它主要是为 MacOS 编写的。 你可以深入挖掘在 Windows 或某个 Linux 发行版上的使用说明,但它们不是很清楚。 许多文档都是针对 Debian / Ubuntu 用户的,比如[安装 Minikube 的驱动程序][2]。
|
||||
|
||||
这篇指南旨在让你能更容易地在基于 RHEL/Fedora/CentOS 的操作系统上安装 Minikube。
|
||||
|
||||
### 先决条件
|
||||
|
||||
1. 你已经[安装了 Docker][3]。
|
||||
2. 你的计算机是一个基于 RHEL / CentOS / Fedora 的工作站。
|
||||
3. 你已经[安装了正常运行的 KVM2 虚拟机管理程序][4]。
|
||||
4. 你有一个可以工作的 docker-machine-driver-kvm2。 以下命令将安装该驱动程序:
|
||||
|
||||
```
|
||||
curl -Lo docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
|
||||
&& chmod +x docker-machine-driver-kvm2 \
|
||||
&& sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \
|
||||
&& rm docker-machine-driver-kvm2
|
||||
```
|
||||
|
||||
### 下载、安装和启动Minikube
|
||||
|
||||
1、为你即将下载的两个文件创建一个目录,这两个文件分别是:[minikube][5] 和 [kubectl][6]。
|
||||
|
||||
2、打开终端窗口并运行以下命令来安装 minikube。
|
||||
|
||||
```
|
||||
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
|
||||
```
|
||||
|
||||
请注意,minikube 版本(例如,minikube-linux-amd64)可能因计算机的规格而有所不同。
|
||||
|
||||
3、`chmod` 加执行权限。
|
||||
|
||||
```
|
||||
chmod +x minikube
|
||||
```
|
||||
|
||||
4、将文件移动到 `/usr/local/bin` 路径下,以便你能将其作为命令运行。
|
||||
|
||||
```
|
||||
mv minikube /usr/local/bin
|
||||
```
|
||||
|
||||
5、使用以下命令安装 `kubectl`(类似于 minikube 的安装过程)。
|
||||
|
||||
```
|
||||
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
|
||||
```
|
||||
|
||||
使用 `curl` 命令确定最新版本的 Kubernetes。
|
||||
|
||||
6、`chmod` 给 `kubectl` 加执行权限。
|
||||
|
||||
```
|
||||
chmod +x kubectl
|
||||
```
|
||||
|
||||
7、将 `kubectl` 移动到 `/usr/local/bin` 路径下作为命令运行。
|
||||
|
||||
```
|
||||
mv kubectl /usr/local/bin
|
||||
```
|
||||
|
||||
8、 运行 `minikube start` 命令。 为此,你需要有虚拟机管理程序。 我使用过 KVM2,你也可以使用 Virtualbox。 确保是以普通用户而不是 root 身份运行以下命令,以便为用户而不是 root 存储配置。
|
||||
|
||||
```
|
||||
minikube start --vm-driver=kvm2
|
||||
```
|
||||
|
||||
这可能需要一段时间,等一会。
|
||||
|
||||
9、 Minikube 应该下载并启动。 使用以下命令确保成功。
|
||||
|
||||
```
|
||||
cat ~/.kube/config
|
||||
```
|
||||
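此外,也可以用 minikube 自带的 `status` 子命令确认各组件的运行状态(输出内容因版本而异):

```
minikube status
```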
|
||||
10、 执行以下命令,将 Minikube 设置为当前的上下文环境。上下文环境决定了 `kubectl` 与哪个集群交互。你可以在 `~/.kube/config` 文件中查看所有可用的上下文环境。
|
||||
|
||||
```
|
||||
kubectl config use-context minikube
|
||||
```
|
||||
|
||||
11、再次查看 `config` 文件以检查 Minikube 是否存在上下文环境。
|
||||
|
||||
```
|
||||
cat ~/.kube/config
|
||||
```
|
||||
|
||||
12、最后,运行以下命令打开浏览器查看 Kubernetes 仪表板。
|
||||
|
||||
```
|
||||
minikube dashboard
|
||||
```
|
||||
|
||||
现在 Minikube 已启动并运行,请阅读[通过 Minikube 在本地运行 Kubernetes][7] 这篇官网教程开始使用它。
|
||||
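作为快速验证,你可以照着该教程部署一个示例服务。下面的镜像与参数取自当时的官方教程,仅作示意,请以教程中的内容为准:

```
kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
minikube service hello-minikube --url
```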
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/getting-started-minikube
|
||||
|
||||
作者:[Bryant Son][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/brson
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://kubernetes.io/docs/tutorials/hello-minikube
|
||||
[2]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md
|
||||
[3]: https://docs.docker.com/install
|
||||
[4]: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver
|
||||
[5]: https://github.com/kubernetes/minikube/releases
|
||||
[6]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl
|
||||
[7]: https://kubernetes.io/docs/setup/minikube
|
@ -0,0 +1,103 @@
|
||||
命令行小技巧:读取文件的不同方式
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg)
|
||||
|
||||
作为图形操作系统,Fedora 的使用是令人愉快的。你可以轻松地点击完成任何任务。但你可能已经看到了,在底层还有一个强大的命令行。想要在 shell 下体验,只需要在 Fedora 系统中打开你的终端应用。这篇文章是向你展示常见的命令行使用方法的系列文章之一。
|
||||
|
||||
在这部分,你将学习如何以不同的方式读取文件,如果你在系统中打开一个终端完成一些工作,你就有可能需要读取一两个文件。
|
||||
|
||||
### 一应俱全的大餐
|
||||
|
||||
对命令行终端的用户来说,`cat` 命令众所周知。当你 `cat` 一个文件时,很容易就能把整个文件内容展示在屏幕上。而真正发生在底层的是:文件被一次读取一行,然后一行一行地写到屏幕上。
|
||||
|
||||
假设你有一个文件,叫做 `myfile`,这个文件每行只有一个单词。为了简单起见,每行的单词就是这行的行号,就像这样:
|
||||
|
||||
```
|
||||
one
|
||||
two
|
||||
three
|
||||
four
|
||||
five
|
||||
```
|
||||
|
||||
所以如果你 `cat` 这个文件,你就会看到如下输出:
|
||||
|
||||
```
|
||||
$ cat myfile
|
||||
one
|
||||
two
|
||||
three
|
||||
four
|
||||
five
|
||||
```
|
||||
|
||||
并没有太惊喜,不是吗?但是有个有趣的转折:只要使用 `tac` 命令,你就可以从后往前 `cat` 这个文件。(请注意,Fedora 对这种有争议的幽默不承担任何责任!)
|
||||
|
||||
```
|
||||
$ tac myfile
|
||||
five
|
||||
four
|
||||
three
|
||||
two
|
||||
one
|
||||
```
|
||||
|
||||
`cat` 命令允许你以不同的方式装饰输出,比如,你可以输出行号:
|
||||
|
||||
```
|
||||
$ cat -n myfile
|
||||
1 one
|
||||
2 two
|
||||
3 three
|
||||
4 four
|
||||
5 five
|
||||
```
|
||||
|
||||
还有其他选项可以显示特殊字符和其他功能,比如下面这个例子。要了解更多,请运行 `man cat` 命令,看完之后,按 `q` 即可退回到 shell。
|
||||
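例如,GNU 版本 `cat` 的 `-E` 选项会在每一行末尾显示一个 `$`,用来标记行尾:

```
$ cat -E myfile
one$
two$
three$
four$
five$
```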
|
||||
### 挑选你的食物
|
||||
|
||||
通常,文件太长会无法全部显示在屏幕上,您可能希望能够像文档一样查看它。 这种情况下,可以试试 `less` 命令:
|
||||
|
||||
```
|
||||
$ less myfile
|
||||
```
|
||||
|
||||
你可以用方向键,也可以用 `PgUp`/`PgDn` 来查看文件,按 `q` 就可以退回到 shell。
|
||||
|
||||
实际上,还有一个基于老式 UNIX 系统的 `more` 命令。如果你希望退回 shell 之后文件内容仍然留在屏幕上,可以使用它;而 `less` 会在退出时清掉屏幕上显示的文件内容,让终端回到你查看文件之前的样子。
|
||||
|
||||
### 一点披萨或甜点
|
||||
|
||||
有时,你所需的输出只是文件的开头。比如,有一个非常长的文件,当你使用 `cat` 命令时,会显示这个文件的所有内容,前几行很容易就滚动过去,导致你看不到。`head` 命令会帮你获取文件的前几行:
|
||||
|
||||
```
|
||||
$ head -n 2 myfile
|
||||
one
|
||||
two
|
||||
```
|
||||
同样,你可以用 `tail` 命令来查看文件的末尾几行:
|
||||
|
||||
```
|
||||
$ tail -n 3 myfile
|
||||
three
|
||||
four
|
||||
five
|
||||
```
|
||||
|
||||
当然,这些只是在这个领域的几个简单的命令。但它们可以让你在阅读文件时容易入手。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/commandline-quick-tips-reading-files-different-ways/
|
||||
|
||||
作者:[Paul W. Frields][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[distant1219](https://github.com/distant1219)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/pfrields/
|
||||
[b]: https://github.com/lujun9972
|
@ -0,0 +1,149 @@
|
||||
在 Linux 手册页中查看整个 Arch Linux Wiki
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/arch-wiki-720x340.jpg)
|
||||
|
||||
不久之前,我写了篇关于一个名叫 [arch-wiki-cli][1] 的命令行脚本的文章,使用它可以在终端命令行中查看 Arch Linux Wiki。使用这个脚本,你可以很轻松地用你喜欢的文本浏览器查看整个 Arch Wiki 网站。显然,使用这个脚本需要你有网络连接。我今天偶然发现了一个名为 Arch-wiki-man 的程序,它有着相同的功能,但无需联网。就跟名字说的一样,它可以让你在命令行中查看 Arch Wiki,以手册页的形式为你显示来自 Arch Wiki 的任何文章。它会把整个 Arch Wiki 下载到本地,本地副本每两天会自动更新一次。因此,你的系统上总能有一份 Arch Wiki 的最新副本。
|
||||
|
||||
### 安装 Arch-wiki-man
|
||||
|
||||
Arch-wiki-man 在 [AUR][2] 中可用,所以你可以通过类似[Yay][3] 的 AUR 帮助程序安装它。
|
||||
|
||||
```
|
||||
$ yay -S arch-wiki-man
|
||||
```
|
||||
|
||||
另外,它也可以使用 NPM 安装。首先确保你已经[安装了 Node.js][4],然后使用以下命令安装它。
|
||||
|
||||
```
|
||||
$ npm install -g arch-wiki-man
|
||||
```
|
||||
|
||||
### 以手册页的形式查看整个 Arch Wiki
|
||||
|
||||
Arch-wiki-man 的典型语法如下:
|
||||
|
||||
```
|
||||
$ awman <search-query>
|
||||
```
|
||||
|
||||
下面看一些具体的例子:
|
||||
|
||||
#### 搜索一个或多个匹配项
|
||||
|
||||
只需要下面的命令,就可以搜索 [Arch Linux 安装指南][5]。
|
||||
|
||||
```
|
||||
$ awman Installation guide
|
||||
```
|
||||
|
||||
上面的命令将会从 Arch Wiki 中搜索所有包含 “Installation guide” 的条目。如果对于给出的搜索条目有很多的匹配项,将会展示为一个选择菜单。使用上下方向键或是 Vim 风格的方向键(`j`/`k`),移动到你想查看的指南上,点击回车打开。然后就会像下面这样,以手册页的形式展示指南的内容。
|
||||
|
||||
![][6]
|
||||
|
||||
awman 是 arch wiki man 的缩写。
|
||||
|
||||
它支持手册页的所有操作,所以你可以像使用手册页一样使用它。按 `h` 查看帮助选项。
|
||||
|
||||
![][7]
|
||||
|
||||
要退出选择菜单而不显示手册页,只需要按 `Ctrl+c`。
|
||||
|
||||
输入 `q` 即可返回或退出手册页。
|
||||
|
||||
#### 在标题或者概述中搜索匹配项
|
||||
|
||||
awman 默认只会在标题中搜索匹配项。但是你也可以指定它同时在标题和概述中搜索匹配项。
|
||||
|
||||
```
|
||||
$ awman -d vim
|
||||
```
|
||||
|
||||
或者,
|
||||
|
||||
```
|
||||
$ awman --desc-search vim
|
||||
```
|
||||
|
||||
#### 在目录中搜索匹配项
|
||||
|
||||
除了在标题和概述中搜索之外,它还能够扫描全部内容来寻找匹配项。不过请注意,这样会使搜索过程明显变慢。
|
||||
|
||||
```
|
||||
$ awman -k emacs
|
||||
```
|
||||
|
||||
或者,
|
||||
|
||||
```
|
||||
$ awman --apropos emacs
|
||||
```
|
||||
|
||||
#### 在 web 浏览器中打开搜索结果
|
||||
|
||||
如果你不想以手册页的形式查看 Arch Wiki 指南,你也可以像下面这样在 web 浏览器中打开它。
|
||||
|
||||
```
|
||||
$ awman -w pacman
|
||||
```
|
||||
|
||||
或者,
|
||||
|
||||
```
|
||||
$ awman --web pacman
|
||||
```
|
||||
|
||||
这条命令将会在 web 浏览器中打开匹配结果。请注意,使用这个选项需要网络连接。
|
||||
|
||||
#### 在其他语言中搜索
|
||||
|
||||
awman 默认打开的是英文的 Arch Wiki 页面。如果你想用其他的语言查看搜索结果,例如西班牙语,只需要像这样做:
|
||||
|
||||
```
|
||||
$ awman -l spanish codecs
|
||||
```
|
||||
|
||||
![][8]
|
||||
|
||||
使用以下命令查看可用的语言:
|
||||
|
||||
```
|
||||
$ awman --list-languages
|
||||
```
|
||||
|
||||
#### 升级本地的 Arch Wiki 副本
|
||||
|
||||
就像我前面说过的,本地副本每两天会自动更新一次。你也可以使用以下命令手动更新。
|
||||
|
||||
```
|
||||
$ awman-update
|
||||
arch-wiki-man@1.3.0 /usr/lib/node_modules/arch-wiki-man
|
||||
└── arch-wiki-md-repo@0.10.84
|
||||
|
||||
arch-wiki-md-repo has been successfully updated or reinstalled.
|
||||
```
|
||||
|
||||
:)
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-browse-and-read-entire-arch-wiki-as-linux-man-pages/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[dianbanjiu](https://github.com/dianbanjiu)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/search-arch-wiki-website-commandline/
|
||||
[2]: https://aur.archlinux.org/packages/arch-wiki-man/
|
||||
[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[4]: https://www.ostechnix.com/install-node-js-linux/
|
||||
[5]: https://www.ostechnix.com/install-arch-linux-latest-version/
|
||||
[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/awman-1.gif
|
||||
[7]: http://www.ostechnix.com/wp-content/uploads/2018/10/awman-2.png
|
||||
[8]: https://www.ostechnix.com/wp-content/uploads/2018/10/awman-3-1.png
|
@ -1,67 +0,0 @@
|
||||
LuuMing translating
|
||||
9 ways to improve collaboration between developers and designers
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab1.png?itok=ULQdGjlV)
|
||||
|
||||
This article was co-written with [Jason Porter][1].
|
||||
|
||||
Design is a crucial element in any software project. Sooner or later, the developers' reasons for writing all this code will be communicated to the designers, human beings who aren't as familiar with its inner workings as the development team.
|
||||
|
||||
Stereotypes exist on both side of the divide; engineers often expect designers to be flaky and irrational, while designers often expect engineers to be inflexible and demanding. The truth is considerably more nuanced and, at the end of the day, the fates of designers and developers are forever intertwined.
|
||||
|
||||
Here are nine things that can improve collaboration between the two.

### 1\. First, knock down the wall. Seriously.

There are loads of memes about the "wall of confusion" in just about every industry. No matter what else you do, the first step toward tearing down this wall is getting both sides to agree it needs to be gone. Once everyone agrees the existing processes aren't functioning optimally, you can pick and choose from the rest of these ideas to begin fixing the problems.

### 2\. Learn to empathize.

Before rolling up any sleeves to build better communication, take a break. This is a great junction point for team building. A time to recognize that we're all people, we all have strengths and weaknesses, and most importantly, we're all on the same team. Discussions around workflows and productivity can become feisty, so it's crucial to build a foundation of trust and cooperation before diving on in.

### 3\. Recognize differences.

Designers and developers attack the same problem from different angles. Given a similar problem, designers will seek the solution with the biggest impact while developers will seek the solution with the least amount of waste. These two viewpoints do not have to be mutually exclusive. There is plenty of room for negotiation and compromise, and somewhere in the middle is where the end user receives the best experience possible.

### 4\. Embrace similarities.

This is all about workflow. CI/CD, scrum, agile, etc., are all basically saying the same thing: Ideate, iterate, investigate, and repeat. Iteration and reiteration are common denominators for both kinds of work. So instead of running a design cycle followed by a development cycle, it makes much more sense to run them concurrently and in tandem. Syncing cycles allows teams to communicate, collaborate, and influence each other every step of the way.

### 5\. Manage expectations.

All conflict can be distilled down to one simple idea: incompatible expectations. Therefore, an easy way to prevent systemic breakdowns is to manage expectations by ensuring that teams are thinking before talking and talking before doing. Setting expectations often evolves organically through everyday conversation. Forcing them to happen by having meetings can be counterproductive.

### 6\. Meet early and meet often.

Meeting once at the beginning of work and once at the end simply isn't enough. This doesn't mean you need daily or even weekly meetings. Setting a cadence for meetings can also be counterproductive. Let them happen whenever they're necessary. Great things can happen with impromptu meetings—even at the watercooler! If your team is distributed or has even one remote employee, video conferencing, text chat, or phone calls are all excellent ways to meet. It's important that everyone on the team has multiple ways to communicate with each other.

### 7\. Build your own lexicon.

Designers and developers sometimes have different terms for similar ideas. One person's card is another person's tile is a third person's box. Ultimately, the fit and accuracy of a term aren't as important as everyone's agreement to use the same term consistently.

### 8\. Make everyone a communication steward.

Everyone in the group is responsible for maintaining effective communication, regardless of how or when it happens. Each person should strive to say what they mean and mean what they say.

### 9\. Give a darn.

It only takes one member of a team to sabotage progress. Go all in. If every individual doesn't care about the product or the goal, there will be problems with motivation to make changes or continue the process.

This article is based on [Designers and developers: Finding common ground for effective collaboration][2], a talk the authors will be giving at [Red Hat Summit 2018][3], which will be held May 8-10 in San Francisco. [Register by May 7][3] to save US$ 500 off of registration. Use discount code **OPEN18** on the payment page to apply the discount.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/9-ways-improve-collaboration-developers-designers

作者:[Jason Brock][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jkbrock
[1]:https://opensource.com/users/lightguardjp
[2]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154267
[3]:https://www.redhat.com/en/summit/2018
@ -1,6 +1,5 @@
heguangzhi translating

Linus, His Apology, And Why We Should Support Him
======

@ -1,47 +0,0 @@
translating by belitex
How Writing Can Expand Your Skills and Grow Your Career
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/graffiti-1281310_1920.jpg?itok=RCayfGKv)

At the recent [Open Source Summit in Vancouver][1], I participated in a panel discussion called [How Writing can Change Your Career for the Better (Even if You don't Identify as a Writer)][2]. The panel was moderated by Rikki Endsley, Community Manager and Editor for Opensource.com, and it included VM (Vicky) Brasseur, Open Source Strategy Consultant; Alex Williams, Founder, Editor in Chief, The New Stack; and Dawn Foster, Consultant, The Scale Factory.

The talk was [inspired by this article][3], in which Rikki examined some ways that writing can "spark joy" and improve your career in unexpected ways. Full disclosure: I have known Rikki for a long time. We worked at the same company for many years, raised our children together, and remain close friends.

### Write and learn

As Rikki noted in the talk description, “even if you don't consider yourself to be ‘a writer,’ you should consider writing about your open source contributions, project, or community.” Writing can be a great way to share knowledge and engage others in your work, but it has personal benefits as well. It can help you meet new people, learn new skills, and improve your communication style.

I find that writing often clarifies for me what I don’t know about a particular topic. The process highlights gaps in my understanding and motivates me to fill in those gaps through further research, reading, and asking questions.

“Writing about what you don't know can be much harder and more time consuming, but also much more fulfilling and help your career. I've found that writing about what I don't know helps me learn, because I have to research it and understand it well enough to explain it,” Rikki said.

Writing about what you’ve just learned can be valuable to other learners as well. In her blog, [Julia Evans][4] often writes about learning new technical skills. She has a friendly, approachable style along with the ability to break down topics into bite-sized pieces. In her posts, Evans takes readers through her learning process, identifying what was and was not helpful to her along the way, essentially removing obstacles for her readers and clearing a path for those new to the topic.

### Communicate more clearly

Writing can help you practice thinking and speaking more precisely, especially if you’re writing (or speaking) for an international audience. [In this article,][5] for example, Isabel Drost-Fromm provides tips for removing ambiguity for non-native English speakers. Writing can also help you organize your thoughts before a presentation, whether you’re speaking at a conference or to your team.

“The process of writing the articles helps me organize my talks and slides, and it was a great way to provide ‘notes’ for conference attendees, while sharing the topic with a larger international audience that wasn't at the event in person,” Rikki stated.

If you’re interested in writing, I encourage you to do it. I highly recommend the articles mentioned here as a way to get started thinking about the story you have to tell. Unfortunately, our discussion at Open Source Summit was not recorded, but I hope we can do another talk in the future and share more ideas.

Check out the schedule of talks for Open Source Summit Europe and sign up to receive updates:

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/9/how-writing-can-help-you-learn-new-skills-and-grow-your-career

作者:[Amber Ankerholz][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/users/aankerholz
[1]: https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/
[2]: https://ossna18.sched.com/event/FAOF/panel-discussion-how-writing-can-change-your-career-for-the-better-even-if-you-dont-identify-as-a-writer-moderated-by-rikki-endsley-opensourcecom-red-hat?iframe=no#
[3]: https://opensource.com/article/18/2/career-changing-magic-writing
[4]: https://jvns.ca/
[5]: https://www.linux.com/blog/event/open-source-summit-eu/2017/12/technical-writing-international-audience
@ -1,169 +0,0 @@
thecyanbird translating

Linux Has a Code of Conduct and Not Everyone is Happy With it
======

**Linux kernel has a new code of conduct (CoC). Linus Torvalds took a break from Linux kernel development just 30 minutes after signing this code of conduct. And since the writer of this code of conduct has had a controversial past, it has now become a point of heated discussion. With all the politics involved, not many people are happy with this new CoC.**

If you do not know already, [Linux creator Linus Torvalds has apologized for his past behavior and has taken a temporary break from Linux kernel development to improve his behavior][1].

### The new code of conduct for Linux kernel development

Linux kernel developers have a code of conduct. It’s not like they didn’t have a code before, but the previous [code of conflict][2] is now replaced by this new code of conduct to “help make the kernel community a welcoming environment to participate in.”

> “In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.”

You can read the entire code of conduct on this commit page.

[Linux Code of Conduct][33]

### Was Linus Torvalds forced to apologize and take a break?

![Linus Torvalds Apologizes][3]

The code of conduct was signed off by Linus Torvalds and Greg Kroah-Hartman (kind of second-in-command after Torvalds). Dan Williams of Intel and Chris Mason from Facebook were some of the other signees.

If I have read through the timeline correctly, half an hour after signing this code of conduct, Torvalds sent a [mail apologizing for his past behavior][4]. He also announced taking a temporary break to improve upon his behavior.

But at this point some people started reading between the lines, with a special attention to this line from his mail:

> **This week people in our community confronted me about my lifetime of not understanding emotions**. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry.

This particular line could be read as if he was coerced into apologizing and taking a break because of the new code of conduct. Though it could also be a precautionary measure to prevent Torvalds from violating the newly created code of conduct.

### The controversy around Contributor Covenant creator Coraline Ada Ehmke

The Linux code of conduct is based on the [Contributor Covenant, version 1.4][5]. The Contributor Covenant has been adopted by hundreds of open source projects. Eclipse, Angular, Ruby, and Kubernetes are some of the [many adopters of the Contributor Covenant][6].

Contributor Covenant has been created by [Coraline Ada Ehmke][7], a software developer, an open-source advocate, and an [LGBT][8] activist. She has been instrumental in promoting diversity in the open source world.

Coraline has also been vocal about her stance against [meritocracy][9]. The Latin word meritocracy originally refers to a “system under which advancement within the system turns on “merits”, like intelligence, credentials, and education.” But activists like [Coraline believe][10] that meritocracy is a negative system where the worth of an individual is measured not by their humanity, but solely by their intellectual output.

[![coraline meritocracy][11]][12]
Image credit: Twitter user @nickmon1112

Remember that [Linus Torvalds has repeatedly said that he cares about the code, not the person who writes it][13]. Clearly, this goes against Coraline’s view on meritocracy.

Coraline has had a troubled incident in the past with a contributor to the [Opal project][14]. There was a [discussion taking place on Twitter][15] where Elia, a core contributor to the Opal project from Italy, said “(trans people) not accepting reality is the problem here”.

Coraline was neither in the discussion nor was she a contributor to the Opal project. But as an LGBT activist, she took it upon herself and [demanded that Elia be removed from the Opal Project][16] for his ‘views against trans people’. A lengthy and heated discussion took place on Opal’s GitHub repository. Coraline and her supporters, who never contributed to Opal, tried to coerce the moderators into removing Elia, a core contributor of the project.

While Elia wasn’t removed from the project, Opal project maintainers agreed to put a code of conduct in place. And this code of conduct was nothing else but Coraline’s famed Contributor Covenant that she had pitched to the maintainers herself.

But the story didn’t end here. The Contributor Covenant was then modified and a [new clause added in order to get to Elia][17]. The new clause widened the scope of conduct in public spaces. This malicious change was [spotted by the maintainers][18] and they edited the clause. Opal eventually got rid of the Contributor Covenant and put in place its own guideline.

This is a classic example of how a few offended people, who never contributed a single line of code to the project, tried to oust its core contributor.

### People’s reaction to the Linux Code of Conduct and Torvalds’ apology

As soon as the Linux code of conduct and Torvalds’ apology went public, social media and forums were rife with rumors and [speculations][19]. While many people appreciated this new development, there were some who saw a conspiracy of [SJWs infiltrating Linux][20].

A sarcastic tweet by Coraline only fueled the fire.

> I can’t wait for the mass exodus from Linux now that it’s been infiltrated by SJWs. Hahahah [pic.twitter.com/eFeY6r4ENv][21]
>
> — Coraline Ada Ehmke (@CoralineAda) [September 16, 2018][22]

In the wake of the Linux CoC controversy, Coraline openly said that the Contributor Covenant code of conduct is a political document. This did not go down well with the people who want the political stuff out of open source projects.

> Some people are saying that the Contributor Covenant is a political document, and they’re right.
>
> — Coraline Ada Ehmke (@CoralineAda) [September 16, 2018][23]

Nick Monroe, a freelance journalist, dug up Coraline’s past in order to validate his claim that there is more to the Linux CoC than meets the eye. You can go through the entire thread if you want.

> Alright. You've seen this a million times before. It's a code of conduct blah blah blah
>
> that has social justice baked right into it. blah blah blah.<https://t.co/KuQqeriYeJ>
>
> But something is different about this. [pic.twitter.com/8NUL2K1gu2][24]
>
> — Nick Monroe (@nickmon1112) [September 17, 2018][25]

Nick wasn’t the only one to disapprove of the new Linux CoC. The [SJW][26] involvement led to more skepticism.

> I guess the big news in Linux today is that the Linux kernel is now governed by a Code of Conduct and a “post meritocracy” world view.
>
> In principle these CoCs look great. In practice they are abused tools to hunt people SJWs don’t like. And they don’t like a lot of people.
>
> — Mark Kern (@Grummz) [September 17, 2018][27]

While there were many who appreciated Torvalds’ apology, there were a few who blamed Torvalds’ attitude:

> Am I the only one who thinks Linus Torvalds attitude for decades was a prime contributors to how many of the condescending, rudes, jerks in Linux and open source "communities" behaved? I've never once felt welcomed into the Linux community as a new user.
>
> — Jonathan Frappier (@jfrappier) [September 17, 2018][28]

And some were simply not amused with his apology:

> Oh look, an abusive OSS maintainer finally admitted, after *decades* of abusive and toxic behavior, that his behavior *might* be an issue.
>
> And a bunch of people I follow are tripping all over themselves to give him cookies for that. 🙄🙄🙄
>
> — Kelly Ellis (@justkelly_ok) [September 17, 2018][29]

The entire Torvalds apology episode has raised a genuine concern ;)

> Do we have to put "I don't/do forgive Linus Torvalds" in our bio now?
>
> — Verónica. (@maria_fibonacci) [September 17, 2018][30]

Jokes apart, the genuine concern was raised by Sharp, who had [quit Linux Kernel development][31] in 2015 due to the ‘toxic community’.

> The real test here is whether the community that built Linus up and protected his right to be verbally abusive will change. Linus not only needs to change himself, but the Linux kernel community needs to change as well. <https://t.co/EG5KO43416>
>
> — Sage Sharp (@_sagesharp_) [September 17, 2018][32]

### What do you think of Linux Code of Conduct?

If you ask my opinion, I do think that a Code of Conduct is the need of the hour. It guides people in behaving in a respectable way and helps create a positive environment for all kinds of people irrespective of their race, ethnicity, religion, nationality, and political views (both left and right).

What are your views on the entire episode? Do you think the CoC will help Linux kernel development? Or will it deteriorate with the involvement of anti-meritocracy SJWs?

We don’t have a code of conduct at It’s FOSS but let’s keep the discussion civil :)

--------------------------------------------------------------------------------

via: https://itsfoss.com/linux-code-of-conduct/

作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[1]: https://itsfoss.com/torvalds-takes-a-break-from-linux/
[2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/CodeOfConflict?id=ddbd2b7ad99a418c60397901a0f3c997d030c65e
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linus-torvalds-apologizes.jpeg
[4]: https://lkml.org/lkml/2018/9/16/167
[5]: https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[6]: https://www.contributor-covenant.org/adopters
[7]: https://en.wikipedia.org/wiki/Coraline_Ada_Ehmke
[8]: https://en.wikipedia.org/wiki/LGBT
[9]: https://en.wikipedia.org/wiki/Meritocracy
[10]: https://modelviewculture.com/pieces/the-dehumanizing-myth-of-the-meritocracy
[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/croraline-meritocracy.jpg
[12]: https://pbs.twimg.com/media/DnTTfi7XoAAdk08.jpg
[13]: https://arstechnica.com/information-technology/2015/01/linus-torvalds-on-why-he-isnt-nice-i-dont-care-about-you/
[14]: https://opalrb.com/
[15]: https://twitter.com/krainboltgreene/status/611569515315507200
[16]: https://github.com/opal/opal/issues/941
[17]: https://github.com/opal/opal/pull/948/commits/817321e27eccfffb3841f663815c17eecb8ef061#diff-a1ee87dafebc22cbd96979f1b2b7e837R11
[18]: https://github.com/opal/opal/pull/948#issuecomment-113486020
[19]: https://www.reddit.com/r/linux/comments/9go8cp/linus_torvalds_daughter_has_signed_the/
[20]: https://snew.github.io/r/linux/comments/9ghrrj/linuxs_new_coc_is_a_piece_of_shit/
[21]: https://t.co/eFeY6r4ENv
[22]: https://twitter.com/CoralineAda/status/1041441155874009093?ref_src=twsrc%5Etfw
[23]: https://twitter.com/CoralineAda/status/1041465346656530432?ref_src=twsrc%5Etfw
[24]: https://t.co/8NUL2K1gu2
[25]: https://twitter.com/nickmon1112/status/1041668315947708416?ref_src=twsrc%5Etfw
[26]: https://www.urbandictionary.com/define.php?term=SJW
[27]: https://twitter.com/Grummz/status/1041524170331287552?ref_src=twsrc%5Etfw
[28]: https://twitter.com/jfrappier/status/1041486055038492674?ref_src=twsrc%5Etfw
[29]: https://twitter.com/justkelly_ok/status/1041522269002985473?ref_src=twsrc%5Etfw
[30]: https://twitter.com/maria_fibonacci/status/1041538148121997313?ref_src=twsrc%5Etfw
[31]: https://www.networkworld.com/article/2988850/opensource-subnet/linux-kernel-dev-sarah-sharp-quits-citing-brutal-communications-style.html
[32]: https://twitter.com/_sagesharp_/status/1041480963287539712?ref_src=twsrc%5Etfw
[33]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8a104f8b5867c682d994ffa7a74093c54469c11f
@ -1,3 +1,4 @@
(translating by runningwater)
CPU Power Manager – Control And Manage CPU Frequency In Linux
======

@ -64,7 +65,7 @@ via: https://www.ostechnix.com/cpu-power-manager-control-and-manage-cpu-frequenc

作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

@ -1,44 +0,0 @@
Creator of the World Wide Web is Creating a New Decentralized Web
======
**Creator of the world wide web, Tim Berners-Lee has unveiled his plans to create a new decentralized web where the data will be controlled by the users.**

[Tim Berners-Lee][1] is known for creating the world wide web, i.e., the internet you know today. More than two decades later, Tim is working to free the internet from the clutches of corporate giants and give the power back to the people via a decentralized web.

Berners-Lee was unhappy with the way ‘powerful forces’ of the internet handle data of the users for their own agenda. So he [started working on his own open source project][2] Solid “to restore the power and agency of individuals on the web.”

> Solid changes the current model where users have to hand over personal data to digital giants in exchange for perceived value. As we’ve all discovered, this hasn’t been in our best interests. Solid is how we evolve the web in order to restore balance — by giving every one of us complete control over data, personal or not, in a revolutionary way.

![Tim Berners-Lee is creating a decentralized web with open source project Solid][3]

Basically, [Solid][4] is a platform built using the existing web where you create your own ‘pods’ (personal data stores). You decide where this pod will be hosted, who will access which data element, and how the data will be shared through this pod.

Berners-Lee believes that Solid “will empower individuals, developers and businesses with entirely new ways to conceive, build and find innovative, trusted and beneficial applications and services.”

Developers need to integrate Solid into their apps and sites. Solid is still in the early stages so there are no apps for now but the project website claims that “the first wave of Solid apps are being created now.”

Berners-Lee has created a startup called [Inrupt][5] and has taken a sabbatical from MIT to work full-time on Solid and to take it “from the vision of a few to the reality of many.”

If you are interested in Solid, [learn how to create apps][6] or [contribute to the project][7] in your own way. Of course, it will take a lot of effort to build and drive the broad adoption of Solid so every bit of contribution will count to the success of a decentralized web.

Do you think a [decentralized web][8] will be a reality? What do you think of decentralized web in general and project Solid in particular?

--------------------------------------------------------------------------------

via: https://itsfoss.com/solid-decentralized-web/

作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[1]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
[2]: https://medium.com/@timberners_lee/one-small-step-for-the-web-87f92217d085
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/tim-berners-lee-solid-project.jpeg
[4]: https://solid.inrupt.com/
[5]: https://www.inrupt.com/
[6]: https://solid.inrupt.com/docs/getting-started
[7]: https://solid.inrupt.com/community
[8]: https://tech.co/decentralized-internet-guide-2018-02
@ -1,3 +1,5 @@
HankChow translating

3 areas to drive DevOps change
======
Driving large-scale organizational change is painful, but when it comes to DevOps, the payoff is worth the pain.

@ -0,0 +1,147 @@
How to level up your organization's security expertise
======
These best practices will make your employees more savvy and your organization more secure.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)

IT security is critical to every company these days. In the words of former FBI director Robert Mueller: “There are only two types of companies: Those that have been hacked, and those that will be.”

At the same time, IT security is constantly evolving. We all know we need to keep up with the latest trends in cybersecurity and security tooling, but how can we do that without sacrificing our ability to keep moving forward on our business priorities?

No single person in your organization can handle all of the security work alone; your entire development and operations team will need to develop an awareness of security tooling and best practices, just like they all need to build skills in open source and in agile software delivery. There are a number of best practices that can help you level up the overall security expertise in your company through basic and intermediate education, subject matter experts, and knowledge-sharing.

### Basic education: Annual cybersecurity education and security contact information

At IBM, we all complete an online cybersecurity training class each year. I recommend this as a best practice for other companies as well. The online training is taught at a basic level, and it doesn’t assume that anyone has a technical background. Topics include social engineering, phishing and spear-phishing attacks, problematic websites, viruses and worms, and so on. We learn how to avoid situations that may put ourselves or our systems at risk, how to recognize signs of an attempted security breach, and how to report a problem if we notice something that seems suspicious. This online education serves the purpose of raising the overall security awareness and readiness of the organization at a low per-person cost. A nice side effect of this education is that this basic knowledge can be applied to our personal lives, and we can share what we learned with our family and friends as well.

In addition to the general cybersecurity education, all employees should have annual training on data security and privacy regulations and how to comply with those.

Finally, we make it easy to find the Corporate Security Incident Response team by sharing the link to its website in prominent places, including Slack, and setting up suggested matches to ensure that a search of our internal website will send people to the right place:

![](https://opensource.com/sites/default/files/uploads/security_search_screen.png)

### Intermediate education: Learn from your tools

Another great source of security expertise is through pre-built security tools. For example, we have set up a set of automated security tests that run against our web services using IBM AppScan, and the reports it generates include background knowledge about the vulnerabilities it finds, the severity of the threat, how to determine if your application is susceptible to the vulnerability, and how to fix the problem, with code examples.

Similarly, the free [npm audit command-line tool from npm, Inc.][1] will scan your open source Node.js modules and report any known vulnerabilities it finds. This tool also generates educational audit reports that include the severity of the threat, the vulnerable package and the affected versions, alternative packages or versions that do not have the vulnerability, the dependency path, and a link to more detailed information about the vulnerability. Here’s an example of a report from npm audit:

| High          | Regular Expression Denial of Service      |
| ------------- | ----------------------------------------- |
| Package       | minimatch                                 |
| Dependency of | gulp [dev]                                |
| Path          | gulp > vinyl-fs > glob-stream > minimatch |
| More info     | https://nodesecurity.io/advisories/118    |

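For reference, here is roughly how the tool is driven from a shell. This is a minimal sketch assuming npm 6 or later (which bundles the `audit` subcommand) and a project directory that already contains a `package-lock.json`; the project name is made up:

```
$ cd my-node-project   # hypothetical project with a package-lock.json
$ npm audit            # print a human-readable report like the one above
$ npm audit fix        # optionally try to upgrade affected packages to patched versions
```
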
Any good network-level security tool will also give you information on the types of attacks the tool is blocking and how it recognizes likely attacks. This information is available in the marketing materials online as well as the tool’s console and reports if you have access to those.

Each of your development teams or squads should have at least one subject matter expert who takes the time to read and fully understand the vulnerability reports that are relevant to you. This is often the technical lead, but it could be anyone who is interested in learning more about security. Your local subject matter expert will be able to recognize similar security holes in the future earlier in the development and deployment process.

Using the npm audit example above, a developer who reads and understands security advisory #118 from this report will be more likely to notice changes that may allow for a Regular Expression Denial of Service when reviewing code in the future. The team’s subject matter expert should also develop the skills needed to determine which of the vulnerability reports don’t actually apply to his or her specific project.

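To make that vulnerability class concrete, here is an illustrative one-liner. It uses a deliberately contrived regular expression with the catastrophic-backtracking shape such advisories describe; it is not the actual minimatch code:

```
# Nested quantifiers like (a+)+ backtrack exponentially on input that almost matches;
# each additional "a" roughly doubles the runtime of this call.
$ node -e 'console.log(/^(a+)+$/.test("a".repeat(28) + "!"))'
```
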
### Intermediate education: Conferences

Let’s not forget the value of attending security-related conferences, such as the [OWASP AppSec Conferences][2]. Conferences provide a great way for members of your team to focus on learning for a few days and bring back some of the newest ideas in the field. The “hallway track” of a conference, where we can learn from other practitioners, is also a valuable source of information. As much as most of us dislike being “sold to,” the sponsor hall at a conference is a good place to casually check out new security tools to see which ones you might be interested in evaluating later.

If your organization is big enough, ask your DevOps and security tool vendors to come to you! If you’ve already procured some great tools, but adoption isn’t going as quickly as you would like, many vendors would be happy to provide your teams with some additional practical training. It’s in their best interests to increase the adoption of their tools (making you more likely to continue paying for their services and to increase your license count), just like it’s in your best interests to maximize the value you get out of the tools you’re paying for. We recently hosted a [Toolbox@IBM][3] \+ DevSecOps summit at our largest sites (those with a couple thousand IT professionals). More than a dozen vendors sponsored each event, came onsite, set up booths, and gave conference talks, just like they would at a technical conference. We also had several of our own presenters speaking about DevOps and security best practices that were working well for them, and we had booths set up by our Corporate Information Security Office, agile coaching, onsite tech support, and internal toolchain teams. We had several hundred attendees at each site. It was great for our technical community because we could focus on the tools that we had already procured, learn how other teams in our company were using them, and make connections to help each other in the future.

When you send someone to a conference, it’s important to set the expectation that they will come back and share what they’ve learned with the team. We usually do this via an informal brown-bag lunch-and-learn, where people are encouraged to discuss new ideas interactively.

### Subject-matter experts and knowledge-sharing: The secure engineering guild

In the IBM Digital Business Group, we’ve adopted the squad model as described by [Spotify][4] and tweaked it to make it work for us. One sometimes-forgotten aspect of the squad model is the guild. Guilds are centers of excellence, focused around one topic or skill set, with members from many squads. Guild members learn together, share best practices with each other and their broader teams, and work to advance the state of the art. If you would like to establish your own secure engineering guild, here are some tips that have worked for me in setting up guilds in the past:

**Step 1: Advertise and recruit**

Your co-workers are busy people, so for many of them, a secure engineering guild could feel like just one more thing they have to cram into the week that doesn’t involve writing code. It’s important from the outset that the guild has a value proposition that will benefit its members as well as the organization.

Zane Lackey from [Signal Sciences][5] gave me some excellent advice: It’s important to call out the truth. In the past, he said, security initiatives may have been more of a hindrance or even a blocker to getting work done. Your secure engineering guild needs to focus on ways to make your engineering team’s lives easier and more efficient instead. You need to find ways to automate more of the busywork related to security and to make your development teams more self-sufficient so you don’t have to rely on security “gates” or hurdles late in the development process.

Here are some things that may attract people to your guild:

* Learn about security vulnerabilities and what you can do to combat them
* Become a subject matter expert
* Participate in penetration testing
* Evaluate and pilot new security tools
* Add “Secure Engineering Guild” to your resume

Here are some additional guild recruiting tips:

* Reach out directly to your security experts and ask them to join: security architects, network security administrators, people from your corporate security department, and so on.

* Bring in an external speaker who can get people excited about secure engineering. Advertise it as “sponsored by the Secure Engineering Guild” and collect names and contact information for people who want to join your guild, both before and after the talk.

* Get executive support for the program. Perhaps one of your VPs will write a blog post extolling the virtues of secure engineering skills and asking people to join the guild (or perhaps you can draft the blog post for her or him to edit and publish). You can combine that blog post with advertising the external speaker if the timing allows.

* Ask your management team to nominate someone from each squad to join the guild. This hardline approach is important if you have an urgent need to drive rapid improvement in your security posture.

**Step 2: Build a team**

Guild meetings should be structured for action. It’s important to keep an agenda so people know what you plan to cover in each meeting, but leave time at the end for members to bring up any topics they want to discuss. Also be sure to take note of action items, and assign an owner and a target date for each of them. Finally, keep meeting minutes and send a brief summary out after each meeting.

Your first few guild meetings are your best opportunity to set off on the right foot, with a bit of team-building. I like to run a little design thinking exercise where you ask team members to share their ideas for the guild’s mission statement, vote on their favorites, and use those to craft a simple and exciting mission statement. The mission statement should include three components: WHO will benefit, WHAT the guild will do, and the WOW factor. The exercise itself is valuable because you can learn why people have decided to volunteer to be a part of the guild in the first place, and what they hope will come of it.

Another thing I like to do from the outset is ask people what they’re hoping to achieve as a guild. The guild should learn together, have fun, and do real work. Once you have those ideas out on the table, start putting owners and target dates next to those goals.

* Would they like to run a book club? Get someone to suggest a book and set up book club meetings.

* Would they like to share useful articles and blogs? Get someone to set up a Slack channel and invite everyone to it, or set up a shared document where people can contribute their favorite resources.

* Would they like to pilot a new tool? Get someone to set up a free trial, try it out for their own team, and report back in a few weeks.

* Would they like to continue a series of talks? Get someone to create a list of topics and speakers and send out the invitations.

If a few goals end up without owners or dates, that’s OK; just start a to-do list or backlog for people to refer to when they’ve completed their first task.

Finally, survey the team to find the best time and day of the week for ongoing meetings and set those up. I recommend starting with weekly 30-minute meetings and adjusting as needed.

**Step 3: Keep the energy going, or reboot**

As the months go on, your guild could start to lose energy. Here are some ways to keep the excitement going or reboot a guild that’s losing energy.

* Don’t be an echo chamber. Invite people in from various parts of the organization to talk for a few minutes about what they’re doing with respect to security engineering, and where they have concerns or see gaps.

* Show measurable progress. If you’ve been assigning owners to action items and completing them all along, you’ve certainly made progress, but if you look at it only from week to week, the progress can feel small or insignificant. Once per quarter, take a step back and write a blog about all you’ve accomplished and send it out to your organization. Showing off what you’ve accomplished makes the team proud of what they’ve accomplished, and it’s another opportunity to recruit even more people for your guild.

* Don’t be afraid to take on a large project. The guild should not be an ivory tower; it should get things done. Your guild may, for example, decide to roll out a new security tool that you love across a large organization. With a little bit of project management and a lot of executive support, you can and should tackle cross-squad projects. The guild members can and should be responsible for getting stories from the large projects prioritized in their own squads’ backlogs and completed in a timely manner.

* Periodically brainstorm the next set of action items. As time goes by, the most critical or pressing needs of your organization will likely change. People will be more motivated to work on the things they consider most important and urgent.

* Reward the extra work. You might offer an executive-sponsored cash award for the most impactful secure engineering projects. You might also have the guild itself choose someone to send to a security conference now and then.

### Go forth, and make your company more secure

A more secure company starts with a more educated team. Building upon that expertise, a secure engineering guild can drive real changes by developing and sharing best practices, finding the right owners for each action item, and driving them to closure. I hope you found a few tips here that will help you level up the security expertise in your organization. Please add your own helpful tips in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/how-level-security-expertise-your-organization

作者:[Ann Marie Fred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/annmarie99
[b]: https://github.com/lujun9972
[1]: https://www.npmjs.com/about
[2]: https://www.owasp.org/index.php/Category:OWASP_AppSec_Conference
[3]: mailto:Toolbox@IBM
[4]: https://medium.com/project-management-learnings/spotify-squad-framework-part-i-8f74bcfcd761
[5]: https://www.signalsciences.com/
@ -0,0 +1,64 @@
We already have nice things, and other reasons not to write in-house ops tools
======
Let's look at the pitfalls of writing in-house ops tools, the circumstances that justify it, and how to do it better.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tool-hammer-nail-build-broken.png?itok=91xn-5wI)

When I was an ops consultant, I had the "great fortune" of seeing the dark underbelly of many companies in a relatively short period of time. Such fortune was exceptionally pronounced on one client engagement where I became the maintainer of an in-house deployment tool that had bloated to touch nearly every piece of infrastructure—despite lacking documentation and testing. Dismayed at the impossible task of maintaining this beast while tackling the real work of improving the product, I began reviewing my old client projects and probing my ops community for their strategies. What I found was an epidemic of "[not invented here][1]" (NIH) syndrome and a lack of collaboration with the broader community.

### The problem with NIH

One of the biggest problems of NIH is the time suck for engineers. Instead of working on functionality that adds value to the business, they're adding features to tools that solve standard problems such as deployment, continuous integration (CI), and configuration management.

This is a serious issue at small or midsized startups, where new hires need to hit the ground running. If they have to learn a completely new toolset, rather than drawing from their experience with industry-standard tools, the time it takes them to become useful increases dramatically. While the new hires are learning the in-house tools, the company remains reliant on the handful of people who wrote the tools to document, train, and troubleshoot them. Heaven forbid one of those engineers succumbs to [the bus factor][2], because the possibility of getting outside help if they forgot to document something is zero.

### Do you need to roll it yourself?

Before writing your own ops tool, ask yourself the following questions:

* Have we polled the greater ops community for solutions?
* Have we compared the costs of proprietary tools to the estimated engineering time needed to maintain an in-house solution?
* Have we identified open source solutions, even those that lack desired features, and attempted to contribute to them?
* Can we fork any open source tools that are well-written but unmaintained?

If you still can't find a tool that meets your needs, you'll have to roll your own.

### Tips for rolling your own

Here's a checklist for rolling your own solutions:

1. In-house tooling should not be exempt from the high standards you apply to the rest of your code. Write it like you're going to open source it.
2. Make sure you allow time in your sprints to work on feature requests, and don't allow features to be rushed in before proper testing and documentation.
3. Keep it small. It's going to be much harder to execute any kind of exit strategy if your tool is a monstrosity that touches everything.
4. Track your tool's usage and prune features that aren't actively utilized.

### Have an exit strategy

Open sourcing your in-house tool is not an exit strategy per se, but it may help you get outside contributors to free up your engineers' time. This is the more difficult strategy and will take some extra care and planning. Read "[Starting an Open Source Project][3]" and "[So You've Decided To Open-Source A Project At Work. What Now?][4]" before committing to this path. If you're interested in a cleaner exit, set aside time each quarter to research and test new open source replacements.

Regardless of which path you choose, explicitly stating that an in-house solution is not the preferred state—early in its development—should clear up any confusion and prevent the issue of changing directions from becoming political.

Sabice Arkenvirr will present [We Already Have Nice Things, Use Them!][5] at [LISA18][6], October 29-31 in Nashville, Tennessee, USA.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/nice-things

作者:[Sabice Arkenvirr][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/vishuzdelishuz
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Not_invented_here
[2]: https://en.wikipedia.org/wiki/Bus_factor
[3]: https://opensource.guide/starting-a-project/
[4]: https://www.smashingmagazine.com/2013/12/open-sourcing-projects-guide-getting-started/
[5]: https://www.usenix.org/conference/lisa18/presentation/arkenvirr
[6]: https://www.usenix.org/conference/lisa18
@ -0,0 +1,93 @@
Think global: How to overcome cultural communication challenges
======
Use these tips to ensure that every member of your global development team feels involved and understood.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel)

A few weeks ago, I witnessed an interesting interaction between two work colleagues—Jason, who is from the United States; and Raj, who was visiting from India.

Raj typically calls into a daily standup meeting at 9:00am US Central Time from India, but since he was in the US, he and his teammates headed toward the scrum area for the meeting. Jason stopped Raj and said, “Raj, where are you going? Don’t you always call into the stand-up? It would feel strange if you don’t call in.” Raj responded, “Oh, is that so? No worries,” and headed back to his desk to call into the meeting.

I went to Raj’s desk. “Hey, Raj, why aren’t you going to the daily standup?” Raj replied, “Jason asked me to call in.” Meanwhile, Jason was waiting for Raj to come to the standup.

What happened here? Jason was obviously joking when he made the remark about Raj calling into the meeting. But how did Raj miss this?

Jason’s statement was meant as a joke, but Raj took it literally. This was a clear example of a misunderstanding that occurred due to unfamiliarity with each other’s cultural context.

I often encounter emails that end with “Please revert back to me.” At first, this phrase left me puzzled. I thought, "What changes do they want me to revert?" Finally, I figured out that “please revert” means “Please reply.”

In his TED talk, “[Managing Cross Cultural Remote Teams,][1]” Ricardo Fernandez describes an interaction with a South African colleague who ended an IM conversation with “I’ll call you just now.” Ricardo went back to his office and waited for the call. After fifteen minutes, he called his colleague: “Weren’t you going to call me just now?” The colleague responded, “Yes, I was going to call you just now.” That's when Ricardo realized that to his South African colleague, the phrase “just now” meant “sometime in the future.”

In today's workplace, our colleagues may not be located in the same office, city, or even country. A growing number of tech companies have a global workforce comprised of employees with varied experiences and perspectives. This diversity allows companies to compete in the rapidly evolving technological environment.

But geographically dispersed teams can face challenges. Managing and maintaining high-performing development teams is difficult even when the members are co-located; when team members come from different backgrounds and locations, that makes it even harder. Communication can deteriorate, misunderstandings can happen, and teams may stop trusting each other—all of which can affect the success of the company.

What factors can cause confusion in global communication? In her book, “[The Culture Map][2],” Erin Meyer presents eight scales into which all global cultures fit. We can use these scales to improve our relationships with international colleagues. She identifies the United States as a very low-context culture in the communication scale. In contrast, Japan is identified as a high-context culture.

What does it mean to be a high- or low-context culture? In the United States, children learn to communicate explicitly: “Say what you mean; mean what you say” is a common principle of communication. On the other hand, Japanese children learn to communicate effectively by mastering the ability to “read the air.” That means they are able to read between the lines and pick up on social cues when communicating.

Most Asian cultures follow the high-context style of communication. Not surprisingly, the United States, a young country composed of immigrants, follows a low-context culture: Since the people who immigrated to the United States came from different cultural backgrounds, they had no choice but to communicate explicitly and directly.

### The three R’s

How can we overcome challenges in cross-cultural communication? Americans communicating with Japanese colleagues, for example, should pay attention to the non-verbal cues, while Japanese communicating with Americans should prepare for more direct language. If you are facing a similar challenge, follow these three steps to communicate more effectively and improve relationships with your international colleagues.

#### Recognize the differences in cultural context

The first step toward effective cross-cultural communication is to recognize that there are differences. Start by increasing your awareness of other cultures.

#### Respect the differences in cultural context

Once you become aware that differences in cultural context can affect cross-cultural communication, the next step is to respect these differences. When you notice a different style of communication, learn to embrace the difference and actively listen to the other person’s point of view.

#### Reconcile the differences in cultural context

Merely recognizing and respecting cultural differences is not enough; you must also learn how to reconcile the cultural differences. Understanding and being empathetic towards the other culture will help you reconcile the differences and learn how to use them to better advance productivity.

### 5 ways to improve communications for cultural context

Over the years, I have incorporated various approaches, tips, and tricks to strengthen relationships among team members across the globe. These approaches have helped me overcome communication challenges with global colleagues. Here are a few examples:

#### Always use video conferencing when communicating with global teammates

Studies show that about 55% of communication is non-verbal. Body language offers many subtle cues that can help you decipher messages, and video conferencing enables geographically dispersed team members to see each other. Videoconferencing is my default choice when conducting remote meetings.

#### Ensure that every team member gets an opportunity to share their thoughts and ideas

Although I prefer to conduct meetings using video conferencing, this is not always possible. If video conferencing is not a common practice at your workplace, it might take some effort to get everyone comfortable with the concept. Start by encouraging everyone to participate in audio meetings.

One of our remote team members, who frequently met with us in audio conferences, mentioned that she often wanted to share ideas and contribute to the meeting but since we couldn’t see her and she couldn’t see us, she had no idea when to start speaking. If you are using audio conferencing, one way to mitigate this is to ensure that every team member gets an opportunity to share their ideas.

#### Learn from one another

Leverage your international friends to learn about their cultural context. This will help you interact more effectively with colleagues from these countries. I have friends from South Asia and South America who have helped me better understand their cultures, and this knowledge has helped me professionally.

For programmers, I recommend conducting code reviews with your global peers. This will help you understand how those from different cultures give and receive feedback, persuade others, and make technical decisions.

#### Be empathetic

Empathy is the key to strong relationships. The more you are able to put yourself in someone else's shoes, the better able you will be to gain trust and build long-lasting connections. Encourage “water-cooler” conversations among your global colleagues by allocating the first few minutes of each meeting for small talk. This offers the additional benefit of putting everyone in a more relaxed mindset. If you manage a global team, make sure every member feels included in the discussion.

#### Meet your global colleagues in person

The best way to build long-lasting relationships is to meet your team members in person. If your company can afford it, arrange for this to happen. Meeting colleagues with whom you have been working will likely strengthen your relationship with them. The companies I have worked for have a strong record of periodically sending US team members to other countries and global colleagues to the US office.

Another way to bring teams together is to attend conferences. This not only creates educational and training opportunities, but you can also carve out some in-person team time.

In today's increasingly global economy, it is becoming more important for companies to maintain a geographically diverse workforce to remain competitive. Although global teams can face communication challenges, it is possible to maintain a high-performing development team despite geographical and cultural differences. Share some of the techniques you use in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/think-global-communication-challenges

作者:[Avindra Fernando][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/avindrafernando
[b]: https://github.com/lujun9972
[1]: https://www.youtube.com/watch?v=QIoAkFpN8wQ
[2]: https://www.amazon.com/The-Culture-Map-Invisible-Boundaries/dp/1610392507
@ -1,3 +1,5 @@
fuowang translating

9 Best Free Video Editing Software for Linux In 2017
======

**Brief: Here are the best video editors for Linux, their features, pros and cons, and how to install them on your Linux distribution.**
@ -1,4 +1,3 @@
### fuzheng1998 translating
10 Games You Can Play on Linux with Wine
======

![](https://www.maketecheasier.com/assets/uploads/2017/09/wine-games-feat.jpg)
@ -1,116 +0,0 @@
Translating by MjSeven

# [Improve your Bash scripts with Argbash][1]

![](https://fedoramagazine.org/wp-content/uploads/2017/11/argbash-1-945x400.png)

Do you write or maintain non-trivial bash scripts? If so, you probably want them to accept command-line arguments in a standard and robust way. Fedora recently got [a nice addition][2] which can help you produce better scripts. And don’t worry, it won’t cost you much of your time or energy.

### Why Argbash?

Bash is an interpreted command-line language with no standard library. Therefore, if you write bash scripts and want command-line interfaces that conform to [POSIX][3] and [GNU CLI][4] standards, you have only two options:

  1. Write the argument-parsing functionality tailored to your script yourself (possibly using the `getopts` builtin).
  2. Use an external bash module.

The first option looks incredibly silly, as implementing the interface properly is not trivial. However, it is suggested as the best choice on various sites ranging from [Stack Overflow][5] to the [Bash Hackers][6] wiki.

The second option looks smarter, but using a module has its issues. The biggest is that you have to bundle its code with your script. This may mean either:

  * You distribute the library as a separate file, or
  * You include the library code at the beginning of your script.

Having two files instead of one is awkward. So is polluting your bash scripts with a chunk of complex code over a thousand lines long.

This was the main reason why the Argbash [project came to life][7]. Argbash is a code generator, so it generates a tailor-made parsing library for your script. Unlike the generic code of other bash modules, it produces only the minimal code your script needs. Moreover, you can request even simpler code if you don’t need 100% conformance to these CLI standards.

### Example

### Analysis

Let’s say you want to implement a script that [draws a bar][8] across the terminal window. You do that by repeating a single character of your choice multiple times. This means you need to get the following information from the command line:

  * _The character which is the element of the line. If not specified, use a dash._ On the command line, this would be a single-valued positional argument _character_ with a default value of `-`.
  * _Length of the line. If not specified, go for 80._ This is a single-valued optional argument _\--length_ with a default of 80.
  * _Verbose mode (for debugging)._ This is a boolean argument _verbose_, off by default.

As the body of the script is really simple, this article focuses on getting the input of the user from the command line into appropriate script variables. Argbash generates code that saves parsing results to the shell variables `_arg_character`, `_arg_length` and `_arg_verbose`.

### Execution

In order to proceed, you need the _argbash-init_ and _argbash_ bash scripts that are parts of the _argbash_ package. Therefore, run this command:

```
sudo dnf install argbash
```

Then, use _argbash-init_ to generate a template for _argbash_, which generates the executable script. You want three arguments: a positional one called _character_, an optional _length_ and an optional boolean _verbose_. Tell this to _argbash-init_, and then pass the output to _argbash_:

```
argbash-init --pos character --opt length --opt-bool verbose script-template.sh
argbash script-template.sh -o script
./script
```

See the help message? Looks like the script doesn’t know about the default option for the character argument. So take a look at the [Argbash API][9], and then fix the issue by editing the template section of the script:

```
# ...
# ARG_OPTIONAL_SINGLE([length],[l],[Length of the line],[80])
# ARG_OPTIONAL_BOOLEAN([verbose],[V],[Debug mode])
# ARG_POSITIONAL_SINGLE([character],[The element of the line],[-])
# ARG_HELP([The line drawer])
# ...
```

Argbash is so smart that it tries to make every generated script a template of itself. This means you don’t have to worry about storing source templates for further use. You just shouldn’t lose your generated bash scripts. Now, try to regenerate the future line drawer to work as expected:

```
argbash script -o script
./script
```

As you can see, everything is working all right. The only thing left to do is fill in the line drawing functionality itself.

### Conclusion

You might find the section containing the parsing code quite long, but consider that it allows you to call `./script.sh x -Vl50` and it will be understood the same way as `./script -V -l 50 x`. It does require some code to get this right.

However, you can shift the balance between generated code complexity and parsing abilities towards simpler code by calling _argbash-init_ with the argument _\--mode_ set to _minimal_. This option reduces the size of the script by about 20 lines, which corresponds to a roughly 25% decrease in the size of the generated parsing code. On the other hand, the _full_ mode makes the script even smarter.

If you want to examine the generated code, give _argbash_ the argument _\--commented_, which puts comments into the parsing code that reveal the intent behind various sections. Compare that to other argument parsing libraries such as [shflags][10], [argsparse][11] or [bash-modules/arguments][12], and you’ll see the powerful simplicity of Argbash. If something goes horribly wrong and you need to fix a glitch in the parsing functionality quickly, Argbash allows you to do that as well.

As you’re most likely a Fedora user, you can enjoy the luxury of having command-line Argbash installed from the official repositories. However, there is also an [online parsing code generator][13] at your service. Furthermore, if you’re working on a server with Docker, you can appreciate the [Argbash Docker image][14].

So enjoy and make sure that your scripts have a command-line interface that pleases your users. Argbash is here to help, with minimal effort required from your side.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/improve-bash-scripts-argbash/

作者:[Matěj Týč][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/author/bubla/
[1]:https://fedoramagazine.org/improve-bash-scripts-argbash/
[2]:https://argbash.readthedocs.io/
[3]:http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap12.html
[4]:https://www.gnu.org/prep/standards/html_node/Command_002dLine-Interfaces.html
[5]:https://stackoverflow.com/questions/192249/how-do-i-parse-command-line-arguments-in-bash
[6]:http://wiki.bash-hackers.org/howto/getopts_tutorial
[7]:https://argbash.readthedocs.io/
[8]:http://wiki.bash-hackers.org/snipplets/print_horizontal_line
[9]:http://argbash.readthedocs.io/en/stable/guide.html#argbash-api
[10]:https://raw.githubusercontent.com/Anvil/bash-argsparse/master/argsparse.sh
[11]:https://raw.githubusercontent.com/Anvil/bash-argsparse/master/argsparse.sh
[12]:https://raw.githubusercontent.com/vlisivka/bash-modules/master/main/bash-modules/src/bash-modules/arguments.sh
[13]:https://argbash.io/generate
[14]:https://hub.docker.com/r/matejak/argbash/
@ -1,157 +0,0 @@
How to configure multiple websites with Apache web server
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/apache-feathers.jpg?itok=fnrpsu3G)
In my [last post][1], I explained how to configure an Apache web server for a single website. It turned out to be very easy. In this post, I will show you how to serve multiple websites using a single instance of Apache.

Note: I wrote this article on a virtual machine using Fedora 27 with Apache 2.4.29. If you have another distribution or release of Fedora, the commands you will use and the locations and content of the configuration files may be different.

As my previous article mentioned, all of the configuration files for Apache are located in `/etc/httpd/conf` and `/etc/httpd/conf.d`. The data for the websites is located in `/var/www` by default. With multiple websites, you will need to provide multiple locations, one for each site you host.

### Name-based virtual hosting

With name-based virtual hosting, you can use a single IP address for multiple websites. Modern web servers, including Apache, use the `hostname` portion of the specified URL to determine which virtual web host responds to the page request. This requires only a little more configuration than for a single site.

Even if you are starting with only a single website, I recommend that you set it up as a virtual host, which will make it easier to add more sites later. In this article, I'll pick up where we left off in the previous article, so you'll need to set up the original website as a name-based virtual host.

### Preparing the original website

Before you set up a second website, you need to get name-based virtual hosting working for the existing site. If you do not have an existing website, [go back and create one now][1].

Once you have your site, add the following stanza to the bottom of its `/etc/httpd/conf/httpd.conf` configuration file (adding this stanza is the only change you need to make to the `httpd.conf` file):

```
<VirtualHost 127.0.0.1:80>
    DocumentRoot /var/www/html
    ServerName www.site1.org
</VirtualHost>
```

This will be the first virtual host stanza, and it should remain first, to make it the default definition. That means that HTTP access to the server by IP address, or by another name that resolves to this IP address but that does not have a specific named host configuration stanza, will be directed to this virtual host. All other virtual host configuration stanzas should follow this one.

You also need to set up your websites with entries in `/etc/hosts` to provide name resolution. Last time, we just used the IP address for `localhost`. Normally, this would be done using whichever name service you use; for example, Google or GoDaddy. For your test website, do this by adding a new name to the `localhost` line in `/etc/hosts`. Add the entries for both websites so you don't need to edit this file again later. The result looks like this:

```
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 www.site1.org www.site2.org
```

Let’s also change the `/var/www/html/index.html` file to be a little more explicit. It should look like this (with some additional text to identify this as website number 1):

```
<h1>Hello World</h1>

Web site 1.
```

Restart the HTTPD server to enable the changes to the `httpd` configuration. You can then look at the website using the Lynx text-mode browser from the command line.

```
[root@testvm1 ~]# systemctl restart httpd
[root@testvm1 ~]# lynx www.site1.org

Hello World
Web site 1.
<snip>
Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back.
Arrow keys: Up and Down to move. Right to follow a link; Left to go back.
H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list
```

You can see that the revised content for the original website is displayed and that there are no obvious errors. Press the “Q” key, followed by “Y”, to exit the Lynx web browser.

### Configuring the second website

Now you are ready to set up the second website. Create a new website directory structure with the following command:

```
[root@testvm1 html]# mkdir -p /var/www/html2
```

Notice that the second website is simply a second `html` directory in the same `/var/www` directory as the first site.

Now create a new index file, `/var/www/html2/index.html`, with the following content (this index file is a bit different, to distinguish it from the one for the original website):

```
<h1>Hello World -- Again</h1>

Web site 2.
```

Create a new configuration stanza in `httpd.conf` for the second website and place it below the previous virtual host stanza (the two should look very similar). This stanza tells the web server where to find the HTML files for the second site.

```
<VirtualHost 127.0.0.1:80>
    DocumentRoot /var/www/html2
    ServerName www.site2.org
</VirtualHost>
```

Restart HTTPD again and use Lynx to view the results.

```
[root@testvm1 httpd]# systemctl restart httpd
[root@testvm1 httpd]# lynx www.site2.org

Hello World -- Again
Web site 2.
<snip>
Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back.
Arrow keys: Up and Down to move. Right to follow a link; Left to go back.
H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list
```

Here I have compressed the resulting output to fit this space. The difference in the page indicates that this is the second website. To show both websites at the same time, open another terminal session and use the Lynx web browser to view the other site.
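
If you would rather script the check, here is a minimal sketch in Python (standard library only) that sends the same request to the one IP address with two different `Host` headers; the host names and address assume the `/etc/hosts` entries above:

```
#!/usr/bin/env python3
# Minimal sketch: verify name-based virtual hosting by varying the Host header.
import http.client

for host in ("www.site1.org", "www.site2.org"):
    conn = http.client.HTTPConnection("127.0.0.1", 80, timeout=5)
    # The Host header is what Apache uses to choose the virtual host.
    conn.request("GET", "/", headers={"Host": host})
    body = conn.getresponse().read().decode()
    print(host, "->", body.strip().splitlines()[0])
    conn.close()
```

If both virtual hosts are working, the first line of each response should differ: "Hello World" for site 1 and "Hello World -- Again" for site 2.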
### Other considerations

This simple example shows how to serve up two websites with a single instance of the Apache HTTPD server. Configuring the virtual hosts becomes a bit more complex when other factors are considered.

For example, you may want to use some CGI scripts for one or both of these websites. To do this, you would create directories for the CGI programs in `/var/www`: `/var/www/cgi-bin` and `/var/www/cgi-bin2`, to be consistent with the HTML directory naming. You would then need to add configuration directives to the virtual host stanzas to specify the directory location for the CGI scripts. Each website could also have directories from which files could be downloaded; this would also require entries in the appropriate virtual host stanza.

The [Apache website][2] describes other methods for managing multiple websites, as well as configuration options from performance tuning to security.

Apache is a powerful web server that can be used to manage websites ranging from simple to highly complex. Although its overall share is shrinking, Apache remains the single most commonly used HTTPD server on the Internet.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/3/configuring-multiple-web-sites-apache

作者:[David Both][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/article/18/2/how-configure-apache-web-server
[2]:https://httpd.apache.org/docs/2.4/
@ -1,3 +1,4 @@
translating by dianbanjiu
Download an OS with GNOME Boxes
======
@ -1,207 +0,0 @@
HankChow translating

Why is Python so slow?
============================================================

Python is booming in popularity. It is used in DevOps, Data Science, Web Development and Security.

It does not, however, win any medals for speed.

![](https://cdn-images-1.medium.com/max/1200/0*M2qZQsVnDS-4i5zc.jpg)

> How does Java compare in terms of speed to C or C++ or C# or Python? The answer depends greatly on the type of application you’re running. No benchmark is perfect, but The Computer Language Benchmarks Game is [a good starting point][5].

I’ve been referring to the Computer Language Benchmarks Game for over a decade; compared with other languages like Java, C#, Go, JavaScript and C++, Python is [one of the slowest][6]. This includes [JIT][7] (C#, Java) and [AOT][8] (C, C++) compilers, as well as interpreted languages like JavaScript.

_NB: When I say “Python”, I’m talking about the reference implementation of the language, CPython. I will refer to other runtimes in this article._

> I want to answer this question: When Python completes a comparable application 2–10x slower than another language, _why is it slow_ and can’t we _make it faster_?

Here are the top theories:

  * “ _It’s the GIL (Global Interpreter Lock)_ ”
  * “ _It’s because it’s interpreted and not compiled_ ”
  * “ _It’s because it’s a dynamically typed language_ ”

Which one of these reasons has the biggest impact on performance?

### “It’s the GIL”

Modern computers come with CPUs that have multiple cores, and sometimes multiple processors. In order to utilise all this extra processing power, the Operating System defines a low-level structure called a thread, where a process (e.g. the Chrome browser) can spawn multiple threads and have instructions for the system inside. That way, if one process is particularly CPU-intensive, that load can be shared across the cores, and this effectively makes most applications complete tasks faster.

My Chrome browser, as I’m writing this article, has 44 threads open. Keep in mind that the structure and API of threading are different between POSIX-based (e.g. Mac OS and Linux) and Windows OS. The operating system also handles the scheduling of threads.

If you haven’t done multi-threaded programming before, a concept you’ll need to quickly become familiar with is locking. Unlike a single-threaded process, you need to ensure that when changing variables in memory, multiple threads don’t try to access/change the same memory address at the same time.

When CPython creates variables, it allocates the memory and then counts how many references to that variable exist; this is a concept known as reference counting. If the number of references is 0, then it frees that piece of memory from the system. This is why creating a “temporary” variable within, say, the scope of a for loop doesn’t blow up the memory consumption of your application.
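
You can watch reference counting at work with the standard library. A minimal sketch using `sys.getrefcount` (note that the call itself temporarily adds one reference to the count it reports):

```
import sys

x = ["a", "b"]             # one reference: the name x
y = x                      # a second reference to the same list
print(sys.getrefcount(x))  # typically 3: x, y, plus the argument passed here

del y                      # drop one reference
print(sys.getrefcount(x))  # typically 2

# When the count reaches 0, CPython frees the memory immediately.
```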
The challenge then becomes how CPython locks the reference count when variables are shared between multiple threads. There is a “global interpreter lock” that carefully controls thread execution. The interpreter can only execute one operation at a time, regardless of how many threads it has.

#### What does this mean to the performance of a Python application?

If you have a single-threaded, single-interpreter application, it will make no difference to the speed. Removing the GIL would have no impact on the performance of your code.

If you wanted to implement concurrency within a single interpreter (Python process) by using threading, and your threads were CPU intensive, you would see the consequences of GIL contention; threads that spend their time waiting on IO (e.g. network IO or disk IO) release the GIL while they wait.

![](https://cdn-images-1.medium.com/max/1600/0*S_iSksY5oM5H1Qf_.png)
From David Beazley’s GIL visualised post [http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html][1]
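
A minimal sketch of that contention (the exact timings are illustrative and will vary by machine): splitting a CPU-bound loop across two threads is no faster than running it in one, because only one thread can hold the GIL at a time.

```
import time
from threading import Thread

COUNT = 50_000_000

def countdown(n):
    while n > 0:
        n -= 1

# Single-threaded
start = time.perf_counter()
countdown(COUNT)
print(f"1 thread : {time.perf_counter() - start:.2f}s")

# Two threads, half the work each -- still serialized by the GIL
t1 = Thread(target=countdown, args=(COUNT // 2,))
t2 = Thread(target=countdown, args=(COUNT // 2,))
start = time.perf_counter()
t1.start(); t2.start()
t1.join(); t2.join()
print(f"2 threads: {time.perf_counter() - start:.2f}s")
```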
If you have a web application (e.g. Django) and you’re using WSGI, then each request to your web app is handled by a separate Python interpreter, so there is only 1 lock _per_ request. Because the Python interpreter is slow to start, some WSGI implementations have a “Daemon Mode” [which keeps Python process(es) running for you.][9]

#### What about other Python runtimes?

[PyPy has a GIL][10] and it is typically >3x faster than CPython.

[Jython does not have a GIL][11] because a Python thread in Jython is represented by a Java thread and benefits from the JVM memory-management system.

#### How does JavaScript do this?

Well, firstly, all JavaScript engines [use mark-and-sweep garbage collection][12]. As stated, the primary need for the GIL is CPython’s memory-management algorithm.

JavaScript does not have a GIL, but it’s also single-threaded, so it doesn’t require one. JavaScript’s event loop and Promise/callback pattern are how asynchronous programming is achieved in place of concurrency. Python has a similar thing with the asyncio event loop.
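
For comparison, here is a minimal asyncio sketch: three simulated IO waits overlap on a single thread, so the whole thing takes about one second rather than three (`asyncio.sleep` stands in for real network or disk IO):

```
import asyncio

async def fetch(i):
    await asyncio.sleep(1)   # stands in for network or disk IO
    return i

async def main():
    # All three "requests" run concurrently on one thread.
    results = await asyncio.gather(*(fetch(i) for i in range(3)))
    print(results)           # [0, 1, 2]

asyncio.run(main())
```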
### “It’s because it’s an interpreted language”

I hear this a lot, and I find it a gross simplification of the way CPython actually works. If at a terminal you wrote `python myscript.py`, then CPython would start a long sequence of reading, lexing, parsing, compiling, interpreting and executing that code.

If you’re interested in how that process works, I’ve written about it before: [Modifying the Python language in 6 minutes][13].

An important point in that process is the creation of a `.pyc` file: at the compiler stage, the bytecode sequence is written to a file inside `__pycache__/` on Python 3, or in the same directory in Python 2. This doesn’t just apply to your script, but to all of the code you imported, including 3rd party modules.
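
You can inspect that bytecode yourself with the standard `dis` module; a minimal sketch:

```
import dis

def add(a, b):
    return a + b

# Prints the bytecode instructions CPython actually executes,
# e.g. LOAD_FAST, BINARY_ADD (BINARY_OP on newer versions), RETURN_VALUE.
dis.dis(add)
```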
So most of the time (unless you write code which you only ever run once?), Python is interpreting bytecode and executing it locally. Compare that with Java and C#.NET:

> Java compiles to an “Intermediate Language” and the Java Virtual Machine reads the bytecode and just-in-time compiles it to machine code. The .NET CIL is the same; the .NET Common Language Runtime, CLR, uses just-in-time compilation to machine code.

So, why is Python so much slower than both Java and C# in the benchmarks if they all use a virtual machine and some sort of bytecode? Firstly, .NET and Java are JIT-compiled.

JIT, or just-in-time compilation, requires an intermediate language to allow the code to be split into chunks (or frames). Ahead-of-time (AOT) compilers are designed to ensure that the CPU can understand every line in the code before any interaction takes place.

The JIT itself does not make the execution any faster, because it is still executing the same bytecode sequences. However, JIT enables optimizations to be made at runtime. A good JIT optimizer will see which parts of the application are being executed a lot; these are called “hot spots”. It will then make optimizations to those bits of code by replacing them with more efficient versions.

This means that when your application does the same thing again and again, it can be significantly faster. Also, keep in mind that Java and C# are strongly typed languages, so the optimiser can make many more assumptions about the code.

PyPy has a JIT and, as mentioned in the previous section, is significantly faster than CPython. This performance benchmark article goes into more detail: [Which is the fastest version of Python?][15]

#### So why doesn’t CPython use a JIT?

There are downsides to JITs: one of those is startup time. CPython startup time is already comparatively slow; PyPy is 2–3x slower to start than CPython. The Java Virtual Machine is notoriously slow to boot. The .NET CLR gets around this by starting at system startup, but the developers of the CLR also develop the Operating System on which the CLR runs.

If you have a single Python process running for a long time, with code that can be optimized because it contains “hot spots”, then a JIT makes a lot of sense.

However, CPython is a general-purpose implementation. So if you were developing command-line applications using Python, having to wait for a JIT to start every time the CLI was called would be horribly slow.

CPython has to try and serve as many use cases as possible. There was the possibility of [plugging a JIT into CPython][17], but this project has largely stalled.

> If you want the benefits of a JIT and you have a workload that suits it, use PyPy.
### “It’s because it’s a dynamically typed language”

In a “statically typed” language, you have to specify the type of a variable when it is declared. Those include C, C++, Java, C# and Go.

In a dynamically typed language, there is still the concept of types, but the type of a variable is dynamic.

```
a = 1
a = "foo"
```

In this toy example, Python creates a second variable with the same name and a type of `str`, and deallocates the memory created for the first instance of `a`.

Statically typed languages aren’t designed as such to make your life hard; they are designed that way because of the way the CPU operates. If everything eventually needs to equate to a simple binary operation, you have to convert objects and types down to a low-level data structure.

Python does this for you, you just never see it, nor do you need to care.

Not having to declare the type isn’t what makes Python slow; the design of the Python language enables you to make almost anything dynamic. You can replace the methods on objects at runtime, you can monkey-patch low-level system calls to a value declared at runtime. Almost anything is possible.
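
A minimal sketch of what that dynamism looks like in practice, swapping a method out at runtime:

```
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
print(g.greet())                       # hello

# Replace the method on the class at runtime -- existing instances
# pick up the new behaviour immediately.
Greeter.greet = lambda self: "hi there"
print(g.greet())                       # hi there
```

Because any attribute can change at any moment, CPython cannot safely cache assumptions about types or methods the way a static compiler can.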
It’s this design that makes it incredibly hard to optimise Python.

To illustrate my point, I’m going to use a syscall tracing tool that works in Mac OS called DTrace. CPython distributions do not come with DTrace built in, so you have to recompile CPython. I’m using 3.6.6 for my demo.

```
wget https://github.com/python/cpython/archive/v3.6.6.zip
unzip v3.6.6.zip
cd cpython-3.6.6
./configure --with-dtrace
make
```

Now `python.exe` will have DTrace tracers throughout the code. [Paul Ross wrote an awesome Lightning Talk on DTrace][19]. You can [download DTrace starter files][20] for Python to measure function calls, execution time, CPU time, syscalls, all sorts of fun. e.g.

`sudo dtrace -s toolkit/<tracer>.d -c '../cpython/python.exe script.py'`

The `py_callflow` tracer shows all the function calls in your application.

![](https://cdn-images-1.medium.com/max/1600/1*Lz4UdUi4EwknJ0IcpSJ52g.gif)

So, does Python’s dynamic typing make it slow?

  * Comparing and converting types is costly; every time a variable is read, written to or referenced, the type is checked.
  * It is hard to optimise a language that is so dynamic. The reason many alternatives to Python are so much faster is that they make compromises to flexibility in the name of performance.
  * Looking at [Cython][2], which combines C static types and Python to optimise code where the types are known, [can provide][3] an 84x performance improvement.

### Conclusion

> Python is primarily slow because of its dynamic nature and versatility. It can be used as a tool for all sorts of problems, where more optimised and faster alternatives are probably available.

There are, however, ways of optimising your Python applications by leveraging async, understanding the profiling tools, and considering the use of multiple interpreters.

For applications where startup time is unimportant and the code would benefit from a JIT, consider PyPy.

For parts of your code where performance is critical and you have more statically typed variables, consider using [Cython][4].

#### Further reading

Jake VDP’s excellent article (although slightly dated): [https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/][21]

Dave Beazley’s talk on the GIL: [http://www.dabeaz.com/python/GIL.pdf][22]

All about JIT compilers: [https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/][23]

--------------------------------------------------------------------------------

via: https://hackernoon.com/why-is-python-so-slow-e5074b6fe55b

作者:[Anthony Shaw][a]
选题:[oska874][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://hackernoon.com/@anthonypjshaw?source=post_header_lockup
[b]:https://github.com/oska874
[1]:http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html
[2]:http://cython.org/
[3]:http://notes-on-cython.readthedocs.io/en/latest/std_dev.html
[4]:http://cython.org/
[5]:http://algs4.cs.princeton.edu/faq/
[6]:https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/python.html
[7]:https://en.wikipedia.org/wiki/Just-in-time_compilation
[8]:https://en.wikipedia.org/wiki/Ahead-of-time_compilation
[9]:https://www.slideshare.net/GrahamDumpleton/secrets-of-a-wsgi-master
[10]:http://doc.pypy.org/en/latest/faq.html#does-pypy-have-a-gil-why
[11]:http://www.jython.org/jythonbook/en/1.0/Concurrency.html#no-global-interpreter-lock
[12]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_Management
[13]:https://hackernoon.com/modifying-the-python-language-in-7-minutes-b94b0a99ce14
[14]:https://hackernoon.com/modifying-the-python-language-in-7-minutes-b94b0a99ce14
[15]:https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b
[16]:https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b
[17]:https://www.slideshare.net/AnthonyShaw5/pyjion-a-jit-extension-system-for-cpython
[18]:https://github.com/python/cpython/archive/v3.6.6.zip
[19]:https://github.com/paulross/dtrace-py#the-lightning-talk
[20]:https://github.com/paulross/dtrace-py/tree/master/toolkit
[21]:https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/
[22]:http://www.dabeaz.com/python/GIL.pdf
[23]:https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/
@ -1,3 +1,4 @@
LuuMing translating
Setting Up a Timer with systemd in Linux
======

@ -1,3 +1,6 @@
Translating by MjSeven

How To Remove Or Disable Ubuntu Dock
======
@ -1,299 +0,0 @@
Translating by DavidChenLiang

CLI: improved
======
I'm not sure many web developers can get away without visiting the command line. As for me, I've been using the command line since 1997, first at university when I felt both super cool l33t-hacker and simultaneously utterly out of my depth.

Over the years my command line habits have improved and I often search for smarter tools for the jobs I commonly do. With that said, here's my current list of improved CLI tools.

### Ignoring my improvements

In a number of cases I've aliased the new and improved command line tool over the original (as with `cat` and `ping`).

If I want to run the original command, which sometimes I do need to do, then there are two ways I can do this (I'm on a Mac so your mileage may vary):

```
$ \cat # ignore aliases named "cat" - explanation: https://stackoverflow.com/a/16506263/22617
$ command cat # ignore functions and aliases
```

### bat > cat

`cat` is used to print the contents of a file, but given more time spent in the command line, features like syntax highlighting come in very handy. I first found [ccat][3], which offers highlighting, then I found [bat][4], which has highlighting, paging, line numbers and git integration.

The `bat` command also allows me to search during output (only if the output is longer than the screen height) using the `/` key binding (similarly to `less` searching).

![Simple bat output][5]

I've also aliased `bat` to the `cat` command:

```
alias cat='bat'
```

💾 [Installation directions][4]

### prettyping > ping

`ping` is incredibly useful, and probably my go-to tool for the "oh crap is X down/does my internet work!!!". But `prettyping` ("pretty ping", not "pre typing"!) gives ping a really nice output and just makes me feel like the command line is a bit more welcoming.

![/images/cli-improved/ping.gif][6]

I've also aliased `ping` to the `prettyping` command:

```
alias ping='prettyping --nolegend'
```

💾 [Installation directions][7]

### fzf > ctrl+r

In the terminal, using `ctrl+r` will allow you to [search backwards][8] through your history. It's a nice trick, albeit a bit fiddly.

The `fzf` tool is a **huge** enhancement on `ctrl+r`. It's a fuzzy search against the terminal history, with a fully interactive preview of the possible matches.

In addition to searching through the history, `fzf` can also preview and open files, which is what I've done in the video below:

For this preview effect, I created an alias called `preview` which combines `fzf` with `bat` for the preview, and a custom key binding to open VS Code:

```
alias preview="fzf --preview 'bat --color \"always\" {}'"
# add support for ctrl+o to open selected file in VS Code
export FZF_DEFAULT_OPTS="--bind='ctrl-o:execute(code {})+abort'"
```

💾 [Installation directions][9]

### htop > top

`top` is my go-to tool for quickly diagnosing why the CPU on the machine is running hard or my fan is whirring. I also use these tools in production. Annoyingly (to me!), `top` on the Mac is vastly different (and inferior, IMHO) to `top` on Linux.

However, `htop` is an improvement on both regular `top` and crappy-Mac `top`. Lots of colour coding, keyboard bindings and different views, which have helped me in the past to understand which processes belong to which.

Handy key bindings include:

  * P - sort by CPU
  * M - sort by memory usage
  * F4 - filter processes by string (to narrow to just "node" for instance)
  * space - mark a single process so I can watch if the process is spiking

![htop output][10]

There is a weird bug in Mac High Sierra that can be overcome by running `htop` as root (I can't remember exactly what the bug is, but this alias fixes it - though it's annoying that I have to enter my password every now and again):

```
alias top="sudo htop" # alias top and fix high sierra bug
```

💾 [Installation directions][11]

### diff-so-fancy > diff

I'm pretty sure I picked this one up from Paul Irish some years ago. Although I rarely fire up `diff` manually, my git commands use diff all the time. `diff-so-fancy` gives me both colour coding and character-level highlighting of changes.

![diff so fancy][12]

Then in my `~/.gitconfig` I have included the following entry to enable `diff-so-fancy` on `git diff` and `git show`:

```
[pager]
    diff = diff-so-fancy | less --tabs=1,5 -RFX
    show = diff-so-fancy | less --tabs=1,5 -RFX
```

💾 [Installation directions][13]

### fd > find

Although I use a Mac, I've never been a fan of Spotlight (I found it sluggish, hard to remember the keywords, the database update would hammer my CPU, and generally useless!). I use [Alfred][14] a lot, but even its finder feature doesn't serve me well.

I tend to turn to the command line to find files, but with `find` it's always a bit of a pain to remember the right expression to find what I want (and indeed the Mac flavour is slightly different from non-Mac find, which adds to the frustration).

`fd` is a great replacement (by the same individual who wrote `bat`). It is very fast, and the common use cases I need to search with are simple to remember.

A few handy commands:

```
$ fd cli # all filenames containing "cli"
$ fd -e md # all with .md extension
$ fd cli -x wc -w # find "cli" and run `wc -w` on each file
```

![fd output][15]

💾 [Installation directions][16]

### ncdu > du

Knowing where disk space is being taken up is a fairly important task for me. I've used the Mac app [DaisyDisk][17], but I find that it can be a little slow to actually yield results.

The `du -sh` command is what I'll use in the terminal (`-sh` means summary and human readable), but often I'll want to dig into the directories taking up the space.

`ncdu` is a nice alternative. It offers an interactive interface and allows for quickly scanning which folders or files are responsible for taking up space, and it's very quick to navigate. (Though any time I want to scan my entire home directory, it's going to take a long time, regardless of the tool - my directory is about 550GB.)

Once I've found a directory I want to manage (to delete, move or compress files), I'll use cmd + click on the pathname at the top of the screen in [iTerm2][18] to launch Finder at that directory.

![ncdu output][19]

There's another [alternative called nnn][20], which offers a slightly nicer interface, and although it does file sizes and usage by default, it's actually a fully fledged file manager.

My `ncdu` is aliased to the following:

```
alias du="ncdu --color dark -rr -x --exclude .git --exclude node_modules"
```

The options are:

  * `--color dark` - use a colour scheme
  * `-rr` - read-only mode (prevents delete and spawn shell)
  * `--exclude` - ignore directories I won't do anything about

💾 [Installation directions][21]

### tldr > man

It's amazing that nearly every single command line tool comes with a manual via `man <command>`, but navigating the `man` output can sometimes be a little confusing, plus it can be daunting given all the technical information that's included in the manual output.

This is where the TL;DR project comes in. It's a community-driven documentation system that's available from the command line. So far in my own usage, I've not come across a command that's not been documented, but you can [also contribute too][22].

![TLDR output for 'fd'][23]

As a nicety, I've also aliased `tldr` to `help` (since it's quicker to type!):

```
alias help='tldr'
```

💾 [Installation directions][24]

### ack || ag > grep

`grep` is no doubt a powerful tool on the command line, but over the years it's been superseded by a number of tools. Two of which are `ack` and `ag`.

I personally flit between `ack` and `ag` without really remembering which I prefer (that's to say they're both very good and very similar!). I tend to default to `ack` only because it rolls off my fingers a little easier. Plus, `ack` comes with the mega `ack --bar` argument (I'll let you experiment)!

Both `ack` and `ag` will (by default) use a regular expression to search, and, extremely pertinent to my work, I can specify the file types to search within using flags like `--js` or `--html` (though here `ag` includes more files in the js filter than `ack`).

Both tools also support the usual `grep` options, like `-B` and `-A` for before and after context in the grep.

![ack in action][25]

Since `ack` doesn't come with markdown support (and I write a lot in markdown), I've got this customisation in my `~/.ackrc` file:

```
--type-set=md=.md,.mkd,.markdown
--pager=less -FRX
```

💾 Installation directions: [ack][26], [ag][27]

[Further reading on ack & ag][28]

### jq > grep et al

I'm a massive fanboy of [jq][29]. At first I struggled with the syntax, but I've since come around to the query language and use `jq` on a near daily basis (whereas before I'd either drop into node, use grep, or use a tool called [json][30], which is very basic in comparison).

I've even started the process of writing a jq tutorial series (2,500 words and counting) and have published a [web tool][31] and a native Mac app (yet to be released).

`jq` allows me to pass in JSON and transform the source very easily so that the JSON result fits my requirements. One such example allows me to update all my node dependencies in one command (broken into multiple lines for readability):

```
$ npm i $(echo $(\
  npm outdated --json | \
  jq -r 'to_entries | .[] | "\(.key)@\(.value.latest)"' \
))
```

The above command lists all the node dependencies that are out of date using npm's JSON output format, then transforms the source JSON from this:

```
{
  "node-jq": {
    "current": "0.7.0",
    "wanted": "0.7.0",
    "latest": "1.2.0",
    "location": "node_modules/node-jq"
  },
  "uuid": {
    "current": "3.1.0",
    "wanted": "3.2.1",
    "latest": "3.2.1",
    "location": "node_modules/uuid"
  }
}
```

…to this:
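
For the sample JSON above, the `-r` (raw output) flag and the `"\(.key)@\(.value.latest)"` template produce one plain `name@version` line per package:

```
node-jq@1.2.0
uuid@3.2.1
```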
That result is then fed into the `npm install` command and voilà, I'm all upgraded (using the sledgehammer approach).

### Honourable mentions

Some of the other tools that I've started poking around with, but haven't used too often (with the exception of ponysay, which appears when I start a new terminal session!):

  * [ponysay][32] > cowsay
  * [csvkit][33] > awk et al
  * [noti][34] > `display notification`
  * [entr][35] > watch

### What about you?

So that's my list. How about you? What daily command line tools have you improved? I'd love to know.

--------------------------------------------------------------------------------

via: https://remysharp.com/2018/08/23/cli-improved

作者:[Remy Sharp][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://remysharp.com
[1]: https://remysharp.com/images/terminal-600.jpg
[2]: https://training.leftlogic.com/buy/terminal/cli2?coupon=READERS-DISCOUNT&utm_source=blog&utm_medium=banner&utm_campaign=remysharp-discount
[3]: https://github.com/jingweno/ccat
[4]: https://github.com/sharkdp/bat
[5]: https://remysharp.com/images/cli-improved/bat.gif (Sample bat output)
[6]: https://remysharp.com/images/cli-improved/ping.gif (Sample ping output)
[7]: http://denilson.sa.nom.br/prettyping/
[8]: https://lifehacker.com/278888/ctrl%252Br-to-search-and-other-terminal-history-tricks
[9]: https://github.com/junegunn/fzf
[10]: https://remysharp.com/images/cli-improved/htop.jpg (Sample htop output)
[11]: http://hisham.hm/htop/
[12]: https://remysharp.com/images/cli-improved/diff-so-fancy.jpg (Sample diff output)
[13]: https://github.com/so-fancy/diff-so-fancy
[14]: https://www.alfredapp.com/
[15]: https://remysharp.com/images/cli-improved/fd.png (Sample fd output)
[16]: https://github.com/sharkdp/fd/
[17]: https://daisydiskapp.com/
[18]: https://www.iterm2.com/
[19]: https://remysharp.com/images/cli-improved/ncdu.png (Sample ncdu output)
[20]: https://github.com/jarun/nnn
[21]: https://dev.yorhel.nl/ncdu
[22]: https://github.com/tldr-pages/tldr#contributing
[23]: https://remysharp.com/images/cli-improved/tldr.png (Sample tldr output for 'fd')
[24]: http://tldr-pages.github.io/
[25]: https://remysharp.com/images/cli-improved/ack.png (Sample ack output with grep args)
[26]: https://beyondgrep.com
[27]: https://github.com/ggreer/the_silver_searcher
[28]: http://conqueringthecommandline.com/book/ack_ag
[29]: https://stedolan.github.io/jq
[30]: http://trentm.com/json/
[31]: https://jqterm.com
[32]: https://github.com/erkin/ponysay
[33]: https://csvkit.readthedocs.io/en/1.0.3/
[34]: https://github.com/variadico/noti
[35]: http://www.entrproject.org/
@ -1,59 +0,0 @@
A sysadmin's guide to containers
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP)

The term "containers" is heavily overused. Also, depending on the context, it can mean different things to different people.

Traditional Linux containers are really just ordinary processes on a Linux system. These groups of processes are isolated from other groups of processes using resource constraints (control groups [cgroups]), Linux security constraints (Unix permissions, capabilities, SELinux, AppArmor, seccomp, etc.), and namespaces (PID, network, mount, etc.).

If you boot a modern Linux system and take a look at any process with `cat /proc/PID/cgroup`, you see that the process is in a cgroup. If you look at `/proc/PID/status`, you see capabilities. If you look at `/proc/self/attr/current`, you see SELinux labels. If you look at `/proc/PID/ns`, you see the list of namespaces the process is in. So, if you define a container as a process with resource constraints, Linux security constraints, and namespaces, by definition every process on a Linux system is in a container. This is why we often say [Linux is containers, containers are Linux][1]. **Container runtimes** are tools that modify these resource constraints, security, and namespaces and launch the container.
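
To see this for yourself, here is a minimal sketch (Python, Linux-only since it reads `/proc`) that prints the namespaces of the current process:

```
import os

# Each symlink in /proc/self/ns names a namespace the process belongs to,
# e.g. "pid -> pid:[4026531836]". Every ordinary process has a full set.
ns_dir = "/proc/self/ns"
for entry in sorted(os.listdir(ns_dir)):
    print(entry, "->", os.readlink(os.path.join(ns_dir, entry)))
```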
Docker introduced the concept of a **container image**, which is a standard TAR file that combines:

  * **Rootfs (container root filesystem):** A directory on the system that looks like the standard root (`/`) of the operating system. For example, a directory with `/usr`, `/var`, `/home`, etc.
  * **JSON file (container configuration):** Specifies how to run the rootfs; for example, what **command** or **entrypoint** to run in the rootfs when the container starts; **environment variables** to set for the container; the container's **working directory**; and a few other settings.

Docker "`tar`'s up" the rootfs and the JSON file to create the **base image**. This enables you to install additional content on the rootfs, create a new JSON file, and `tar` the difference between the original image and the new image with the updated JSON file. This creates a **layered image**.

The definition of a container image was eventually standardized by the [Open Container Initiative (OCI)][2] standards body as the [OCI Image Specification][3].

Tools used to create container images are called **container image builders**. Sometimes container engines perform this task, but several standalone tools are available that can build container images.

Docker took these container images (**tarballs**) and moved them to a web service from which they could be pulled, developed a protocol to pull them, and called the web service a **container registry**.

**Container engines** are programs that can pull container images from container registries and reassemble them onto **container storage**. Container engines also launch **container runtimes** (see below).

![](https://opensource.com/sites/default/files/linux_container_internals_2.0_-_hosts.png)

Container storage is usually a **copy-on-write** (COW) layered filesystem. When you pull down a container image from a container registry, you first need to untar the rootfs and place it on disk. If you have multiple layers that make up your image, each layer is downloaded and stored on a different layer on the COW filesystem. The COW filesystem allows each layer to be stored separately, which maximizes sharing for layered images. Container engines often support multiple types of container storage, including `overlay`, `devicemapper`, `btrfs`, `aufs`, and `zfs`.

After the container engine downloads the container image to container storage, it needs to create a **container runtime configuration**. The runtime configuration combines input from the caller/user along with the content of the container image specification. For example, the caller might want to specify modifications to a running container's security, add additional environment variables, or mount volumes to the container.

The layout of the container runtime configuration and the exploded rootfs have also been standardized by the OCI standards body as the [OCI Runtime Specification][4].

Finally, the container engine launches a **container runtime** that reads the container runtime specification; modifies the Linux cgroups, Linux security constraints, and namespaces; and launches the container command to create the container's **PID 1**. At this point, the container engine can relay `stdin`/`stdout` back to the caller and control the container (e.g., stop, start, attach).

Note that many new container runtimes are being introduced to use different parts of Linux to isolate containers. People can now run containers using KVM separation (think mini virtual machines) or they can use other hypervisor strategies (like intercepting all system calls from processes in containers). Since we have a standard runtime specification, these tools can all be launched by the same container engines. Even Windows can use the OCI Runtime Specification for launching Windows containers.

At a much higher level are **container orchestrators**. Container orchestrators are tools used to coordinate the execution of containers on multiple different nodes. Container orchestrators talk to container engines to manage containers. Orchestrators tell the container engines to start containers and wire their networks together. Orchestrators can monitor the containers and launch additional containers as the load increases.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/sysadmins-guide-containers

作者:[Daniel J Walsh][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/rhatdan
[1]:https://www.redhat.com/en/blog/containers-are-linux
[2]:https://www.opencontainers.org/
[3]:https://github.com/opencontainers/image-spec/blob/master/spec.md
[4]:https://github.com/opencontainers/runtime-spec
@ -1,3 +1,4 @@
translating by sd886393
4 open source monitoring tools
======

@ -1,3 +1,4 @@
translating by dianbanjiu
6 places to host your git repository
======

@ -1,4 +1,3 @@
KevinSJ translating
6 open source tools for writing a book
======

@ -1,4 +1,3 @@
translating by name1e5s
Know Your Storage: Block, File & Object
======
@ -1,397 +0,0 @@
|
||||
translating by Flowsnow
|
||||
|
||||
How to build rpm packages
|
||||
======
|
||||
|
||||
Save time and effort installing files and scripts across multiple hosts.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1)
|
||||
|
||||
I have used rpm-based package managers to install software on Red Hat and Fedora Linux since I started using Linux more than 20 years ago. I have used the **rpm** program itself, **yum** , and **DNF** , which is a close descendant of yum, to install and update packages on my Linux hosts. The yum and DNF tools are wrappers around the rpm utility that provide additional functionality, such as the ability to find and install package dependencies.
|
||||
|
||||
Over the years I have created a number of Bash scripts, some of which have separate configuration files, that I like to install on most of my new computers and virtual machines. It reached the point that it took a great deal of time to install all of these packages, so I decided to automate that process by creating an rpm package that I could copy to the target hosts and install all of these files in their proper locations. Although the **rpm** tool was formerly used to build rpm packages, that function was removed and a new tool, `rpmbuild`, was created to build new rpms.
|
||||
|
||||
When I started this project, I found very little information about creating rpm packages, but I managed to find a book, Maximum RPM, that helped me figure it out. That book is now somewhat out of date, as is the vast majority of information I have found. It is also out of print, and used copies go for hundreds of dollars. The online version of [Maximum RPM][1] is available at no charge and is kept up to date. The [RPM website][2] also has links to other websites that have a lot of documentation about rpm. What other information there is tends to be brief and apparently assumes that you already have a good deal of knowledge about the process.
|
||||
|
||||
In addition, every one of the documents I found assumes that the code needs to be compiled from sources as in a development environment. I am not a developer. I am a sysadmin, and we sysadmins have different needs because we don’t—or we shouldn’t—compile code to use for administrative tasks; we should use shell scripts. So we have no source code in the sense that it is something that needs to be compiled into binary executables. What we have is a source that is also the executable.
|
||||
|
||||
For the most part, this project should be performed as the non-root user student. Rpms should never be built by root, but only by non-privileged users. I will indicate which parts should be performed as root and which by a non-root, unprivileged user.
|
||||
|
||||
### Preparation
|
||||
|
||||
First, open one terminal session and `su` to root. Be sure to use the `-` option to ensure that the complete root environment is enabled. I do not believe that sysadmins should use `sudo` for any administrative tasks. Find out why in my personal blog post: [Real SysAdmins don’t sudo][3].
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ su -
|
||||
Password:
|
||||
[root@testvm1 ~]#
|
||||
```
|
||||
|
||||
Create a student user that can be used for this project and set a password for that user.
|
||||
|
||||
```
|
||||
[root@testvm1 ~]# useradd -c "Student User" student
|
||||
[root@testvm1 ~]# passwd student
|
||||
Changing password for user student.
|
||||
New password: <Enter the password>
|
||||
Retype new password: <Enter the password>
|
||||
passwd: all authentication tokens updated successfully.
|
||||
[root@testvm1 ~]#
|
||||
```
|
||||
|
||||
Building rpm packages requires the `rpm-build` package, which is likely not already installed. Install it now as root. Note that this command will also install several dependencies. The number may vary, depending upon the packages already installed on your host; it installed a total of 17 packages on my test VM, which is pretty minimal.
|
||||
|
||||
```
|
||||
dnf install -y rpm-build
|
||||
```
|
||||
|
||||
The rest of this project should be performed as the user student unless otherwise explicitly directed. Open another terminal session and use `su` to switch to that user to perform the rest of these steps. Download a tarball that I have prepared of a development directory structure, utils.tar, from GitHub using the following command:
|
||||
|
||||
```
|
||||
wget https://github.com/opensourceway/how-to-rpm/raw/master/utils.tar
|
||||
```
|
||||
|
||||
This tarball includes all of the files and Bash scripts that will be installed by the final rpm. There is also a complete spec file, which you can use to build the rpm. We will go into detail about each section of the spec file.
|
||||
|
||||
As user student, using your home directory as your present working directory (pwd), untar the tarball.
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ cd ; tar -xvf utils.tar
|
||||
```
|
||||
|
||||
Use the `tree` command to verify that the directory structure of ~/development and the contained files looks like the following output:
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ tree development/
|
||||
development/
|
||||
├── license
|
||||
│ ├── Copyright.and.GPL.Notice.txt
|
||||
│ └── GPL_LICENSE.txt
|
||||
├── scripts
|
||||
│ ├── create_motd
|
||||
│ ├── die
|
||||
│ ├── mymotd
|
||||
│ └── sysdata
|
||||
└── spec
|
||||
└── utils.spec
|
||||
|
||||
3 directories, 7 files
|
||||
[student@testvm1 ~]$
|
||||
```
|
||||
|
||||
The `mymotd` script creates a “Message Of The Day” data stream that is sent to stdout. The `create_motd` script runs the `mymotd` script and redirects the output to the /etc/motd file. This file is used to display a daily message to users who log in remotely using SSH.
|
||||
|
||||
The `die` script is my own script that wraps the `kill` command in a bit of code that can find running programs that match a specified string and kill them. It uses `kill -9` to ensure that they cannot ignore the kill message.
|
||||
|
||||
The `sysdata` script can spew tens of thousands of lines of data about your computer hardware, the installed version of Linux, all installed packages, and the metadata of your hard drives. I use it to document the state of a host at a point in time. I can later use it for reference. I used to do this to maintain a record of hosts that I installed for customers.
|
||||
|
||||
You may need to change ownership of these files and directories to student.student. Do this, if necessary, using the following command:
|
||||
|
||||
```
|
||||
chown -R student.student development
|
||||
```
|
||||
|
||||
Most of the files and directories in this tree will be installed on Fedora systems by the rpm you create during this project.
|
||||
|
||||
### Creating the build directory structure
|
||||
|
||||
The `rpmbuild` command requires a very specific directory structure. You must create this directory structure yourself because no automated way is provided. Create the following directory structure in your home directory:
|
||||
|
||||
```
|
||||
~ ─ rpmbuild
|
||||
├── RPMS
|
||||
│ └── noarch
|
||||
├── SOURCES
|
||||
├── SPECS
|
||||
└── SRPMS
|
||||
```
|
||||
|
||||
We will not create the rpmbuild/RPMS/x86_64 directory because that would be architecture-specific for 64-bit compiled binaries. We have shell scripts that are not architecture-specific. In reality, we won’t be using the SRPMS directory either, which would contain source files for the compiler.
|
||||
|
||||
### Examining the spec file
|
||||
|
||||
Each spec file has a number of sections, some of which may be ignored or omitted, depending upon the specific circumstances of the rpm build. This particular spec file is not an example of a minimal file required to work, but it is a good example of a moderately complex spec file that packages files that do not need to be compiled. If a compile were required, it would be performed in the `%build` section, which is omitted from this spec file because it is not required.
|
||||
|
||||
#### Preamble
|
||||
|
||||
This is the only section of the spec file that does not have a label. It consists of much of the information you see when the command `rpm -qi [Package Name]` is run. Each datum is a single line consisting of a tag, which identifies it, and text data for the value of the tag.
|
||||
|
||||
```
|
||||
###############################################################################
|
||||
# Spec file for utils
|
||||
################################################################################
|
||||
# Configured to be built by user student or other non-root user
|
||||
################################################################################
|
||||
#
|
||||
Summary: Utility scripts for testing RPM creation
|
||||
Name: utils
|
||||
Version: 1.0.0
|
||||
Release: 1
|
||||
License: GPL
|
||||
URL: http://www.both.org
|
||||
Group: System
|
||||
Packager: David Both
|
||||
Requires: bash
|
||||
Requires: screen
|
||||
Requires: mc
|
||||
Requires: dmidecode
|
||||
BuildRoot: ~/rpmbuild/
|
||||
|
||||
# Build with the following syntax:
|
||||
# rpmbuild --target noarch -bb utils.spec
|
||||
```
|
||||
|
||||
Comment lines are ignored by the `rpmbuild` program. I always like to add a comment to this section that contains the exact syntax of the `rpmbuild` command required to create the package. The Summary tag is a short description of the package. The Name, Version, and Release tags are used to create the name of the rpm file, as in utils-1.0.0-1.noarch.rpm. Incrementing the release and version numbers lets you create rpms that can be used to update older ones.
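For this spec file, those tags combine as follows:

```
<Name>-<Version>-<Release>.<architecture>.rpm
utils-1.0.0-1.noarch.rpm
```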
|
||||
|
||||
The License tag defines the license under which the package is released. I always use a variation of the GPL. Specifying the license is important to clarify the fact that the software contained in the package is open source. This is also why I included the license and GPL statement in the files that will be installed.
|
||||
|
||||
The URL is usually the web page of the project or project owner. In this case, it is my personal web page.
|
||||
|
||||
The Group tag is interesting and is usually used for GUI applications. The value of the Group tag determines which group of icons in the applications menu will contain the icon for the executable in this package. Used in conjunction with the Icon tag (which we are not using here), the Group tag allows adding the icon and the required information to launch a program into the applications menu structure.
|
||||
|
||||
The Packager tag is used to specify the person or organization responsible for maintaining and creating the package.
|
||||
|
||||
The Requires statements define the dependencies for this rpm. Each is a package name. If one of the specified packages is not present, the DNF installation utility will try to locate it in one of the defined repositories defined in /etc/yum.repos.d and install it if it exists. If DNF cannot find one or more of the required packages, it will throw an error indicating which packages are missing and terminate.
|
||||
|
||||
The BuildRoot line specifies the top-level directory in which the `rpmbuild` tool will find the spec file and in which it will create temporary directories while it builds the package. The finished package will be stored in the noarch subdirectory that we specified earlier. The comment showing the command syntax used to build this package includes the option `–target noarch`, which defines the target architecture. Because these are Bash scripts, they are not associated with a specific CPU architecture. If this option were omitted, the build would be targeted to the architecture of the CPU on which the build is being performed.
|
||||
|
||||
The `rpmbuild` program can target many different architectures, and using the `--target` option allows us to build architecture-specific packages on a host with a different architecture from the one on which the build is performed. So I could build a package intended for use on an i686 architecture on an x86_64 host, and vice versa.
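For example, building this noarch package, or a 32-bit variant of a binary package, from an x86_64 host would look like this (the i686 line is purely illustrative, for a package that contained compiled code):

```
rpmbuild --target noarch -bb utils.spec
rpmbuild --target i686 -bb utils.spec
```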
|
||||
|
||||
Change the packager name to yours and the URL to your own website if you have one.
|
||||
|
||||
#### %description
|
||||
|
||||
The `%description` section of the spec file contains a description of the rpm package. It can be very short or can contain many lines of information. Our `%description` section is rather terse.
|
||||
|
||||
```
|
||||
%description
|
||||
A collection of utility scripts for testing RPM creation.
|
||||
```
|
||||
|
||||
#### %prep
|
||||
|
||||
The `%prep` section is the first script that is executed during the build process. This script is not executed during the installation of the package.
|
||||
|
||||
This script is just a Bash shell script. It prepares the build directory, creating directories used for the build as required and copying the appropriate files into their respective directories. This would include the sources required for a complete compile as part of the build.
|
||||
|
||||
The $RPM_BUILD_ROOT directory represents the root directory of an installed system. The directories created in the $RPM_BUILD_ROOT directory are fully qualified paths, such as /usr/local/share/utils, /usr/local/bin, and so on, in a live filesystem.
|
||||
|
||||
In the case of our package, we have no pre-compile sources as all of our programs are Bash scripts. So we simply copy those scripts and other files into the directories where they belong in the installed system.
|
||||
|
||||
```
|
||||
%prep
|
||||
################################################################################
|
||||
# Create the build tree and copy the files from the development directories #
|
||||
# into the build tree. #
|
||||
################################################################################
|
||||
echo "BUILDROOT = $RPM_BUILD_ROOT"
|
||||
mkdir -p $RPM_BUILD_ROOT/usr/local/bin/
|
||||
mkdir -p $RPM_BUILD_ROOT/usr/local/share/utils
|
||||
|
||||
cp /home/student/development/utils/scripts/* $RPM_BUILD_ROOT/usr/local/bin
|
||||
cp /home/student/development/utils/license/* $RPM_BUILD_ROOT/usr/local/share/utils
|
||||
cp /home/student/development/utils/spec/* $RPM_BUILD_ROOT/usr/local/share/utils
|
||||
|
||||
exit
|
||||
```
|
||||
|
||||
Note that the exit statement at the end of this section is required.
|
||||
|
||||
#### %files
|
||||
|
||||
This section of the spec file defines the files to be installed and their locations in the directory tree. It also specifies the file attributes and the owner and group owner for each file to be installed. The file permissions and ownerships are optional, but I recommend that they be explicitly set to eliminate any chance for those attributes to be incorrect or ambiguous when installed. Directories are created as required during the installation if they do not already exist.
|
||||
|
||||
```
|
||||
%files
|
||||
%attr(0744, root, root) /usr/local/bin/*
|
||||
%attr(0644, root, root) /usr/local/share/utils/*
|
||||
```
|
||||
|
||||
#### %pre
|
||||
|
||||
This section is empty in our lab project’s spec file. This would be the place to put any scripts that are required to run during installation of the rpm but prior to the installation of the files.
|
||||
|
||||
#### %post
|
||||
|
||||
This section of the spec file is another Bash script. This one runs after the installation of files. This section can be pretty much anything you need or want it to be, including creating files, running system commands, and restarting services to reinitialize them after making configuration changes. The `%post` script for our rpm package performs some of those tasks.
|
||||
|
||||
```
|
||||
%post
|
||||
################################################################################
|
||||
# Set up MOTD scripts #
|
||||
################################################################################
|
||||
cd /etc
|
||||
# Save the old MOTD if it exists
|
||||
if [ -e motd ]
|
||||
then
|
||||
cp motd motd.orig
|
||||
fi
|
||||
# If not there already, add a link to create_motd in cron.daily
|
||||
cd /etc/cron.daily
|
||||
if [ ! -e create_motd ]
|
||||
then
|
||||
ln -s /usr/local/bin/create_motd
|
||||
fi
|
||||
# create the MOTD for the first time
|
||||
/usr/local/bin/mymotd > /etc/motd
|
||||
```
|
||||
|
||||
The comments included in this script should make its purpose clear.
|
||||
|
||||
#### %postun
|
||||
|
||||
This section contains a script that would be run after the rpm package is uninstalled. Using rpm or DNF to remove a package removes all of the files listed in the `%files` section, but it does not remove files or links created by the `%post` section, so we need to handle that in this section.
|
||||
|
||||
This script usually consists of cleanup tasks that cannot be accomplished simply by erasing the files previously installed by the rpm. In the case of our package, it includes removing the link created by the `%post` script and restoring the saved original of the motd file.
|
||||
|
||||
```
|
||||
%postun
|
||||
# remove installed files and links
|
||||
rm /etc/cron.daily/create_motd
|
||||
|
||||
# Restore the original MOTD if it was backed up
|
||||
if [ -e /etc/motd.orig ]
|
||||
then
|
||||
mv -f /etc/motd.orig /etc/motd
|
||||
fi
|
||||
```
|
||||
|
||||
#### %clean
|
||||
|
||||
This Bash script performs cleanup after the rpm build process. The two lines in the `%clean` section below remove the build directories created by the `rpmbuild` command. In many cases, additional cleanup may also be required.
|
||||
|
||||
```
|
||||
%clean
|
||||
rm -rf $RPM_BUILD_ROOT/usr/local/bin
|
||||
rm -rf $RPM_BUILD_ROOT/usr/local/share/utils
|
||||
```
|
||||
|
||||
#### %changelog
|
||||
|
||||
This optional text section contains a list of changes to the rpm and files it contains. The newest changes are recorded at the top of this section.
|
||||
|
||||
```
|
||||
%changelog
|
||||
* Wed Aug 29 2018 Your Name <Youremail@yourdomain.com>
|
||||
- The original package includes several useful scripts. It is
|
||||
primarily intended to be used to illustrate the process of
|
||||
building an RPM.
|
||||
```
|
||||
|
||||
Replace the data in the header line with your own name and email address.
|
||||
|
||||
### Building the rpm
|
||||
|
||||
The spec file must be in the SPECS directory of the rpmbuild tree. I find it easiest to create a link to the actual spec file in that directory so that it can be edited in the development directory and there is no need to copy it to the SPECS directory. Make the SPECS directory your pwd, then create the link.
|
||||
|
||||
```
|
||||
cd ~/rpmbuild/SPECS/
|
||||
ln -s ~/development/spec/utils.spec
|
||||
```
|
||||
|
||||
Run the following command to build the rpm. It should only take a moment to create the rpm if no errors occur.
|
||||
|
||||
```
|
||||
rpmbuild --target noarch -bb utils.spec
|
||||
```
|
||||
|
||||
Check in the ~/rpmbuild/RPMS/noarch directory to verify that the new rpm exists there.
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ cd rpmbuild/RPMS/noarch/
|
||||
[student@testvm1 noarch]$ ll
|
||||
total 24
|
||||
-rw-rw-r--. 1 student student 24364 Aug 30 10:00 utils-1.0.0-1.noarch.rpm
|
||||
[student@testvm1 noarch]$
|
||||
```
|
||||
|
||||
### Testing the rpm
|
||||
|
||||
As root, install the rpm to verify that it installs correctly and that the files are installed in the correct directories. The exact name of the rpm will depend upon the values you used for the tags in the Preamble section, but if you used the ones in the sample, the rpm name will be as shown in the sample command below:
|
||||
|
||||
```
|
||||
[root@testvm1 ~]# cd /home/student/rpmbuild/RPMS/noarch/
|
||||
[root@testvm1 noarch]# ll
|
||||
total 24
|
||||
-rw-rw-r--. 1 student student 24364 Aug 30 10:00 utils-1.0.0-1.noarch.rpm
|
||||
[root@testvm1 noarch]# rpm -ivh utils-1.0.0-1.noarch.rpm
|
||||
Preparing... ################################# [100%]
|
||||
Updating / installing...
|
||||
1:utils-1.0.0-1 ################################# [100%]
|
||||
```
|
||||
|
||||
Check /usr/local/bin to ensure that the new files are there. You should also verify that the create_motd link in /etc/cron.daily has been created.
|
||||
|
||||
Use the `rpm -q --changelog utils` command to view the changelog. View the files installed by the package using the `rpm -ql utils` command (that is a lowercase L in `ql`).
|
||||
|
||||
```
|
||||
[root@testvm1 noarch]# rpm -q --changelog utils
|
||||
* Wed Aug 29 2018 Your Name <Youremail@yourdomain.com>
|
||||
- The original package includes several useful scripts. It is
|
||||
primarily intended to be used to illustrate the process of
|
||||
building an RPM.
|
||||
|
||||
[root@testvm1 noarch]# rpm -ql utils
|
||||
/usr/local/bin/create_motd
|
||||
/usr/local/bin/die
|
||||
/usr/local/bin/mymotd
|
||||
/usr/local/bin/sysdata
|
||||
/usr/local/share/utils/Copyright.and.GPL.Notice.txt
|
||||
/usr/local/share/utils/GPL_LICENSE.txt
|
||||
/usr/local/share/utils/utils.spec
|
||||
[root@testvm1 noarch]#
|
||||
```
|
||||
|
||||
Remove the package.
|
||||
|
||||
```
|
||||
rpm -e utils
|
||||
```
|
||||
|
||||
### Experimenting
|
||||
|
||||
Now you will change the spec file to require a package that does not exist. This will simulate a dependency that cannot be met. Add the following line immediately under the existing Requires line:
|
||||
|
||||
```
|
||||
Requires: badrequire
|
||||
```
|
||||
|
||||
Build the package and attempt to install it. What message is displayed?
|
||||
|
||||
We used the `rpm` command to install and delete the `utils` package. Try installing the package with yum or DNF. You must be in the same directory as the package or specify the full path to the package for this to work.
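For example, as root (using the package built above; the leading `./` or a full path tells DNF to install a local file rather than search the repositories):

```
cd /home/student/rpmbuild/RPMS/noarch/
dnf install -y ./utils-1.0.0-1.noarch.rpm
```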
|
||||
|
||||
### Conclusion
|
||||
|
||||
There are many tags and a couple of sections that we did not cover in this look at the basics of creating an rpm package. The resources listed below can provide more information. Building rpm packages is not difficult; you just need the right information. I hope this helps you—it took me months to figure things out on my own.
|
||||
|
||||
We did not cover building from source code, but if you are a developer, that should be a simple step from this point.
|
||||
|
||||
Creating rpm packages is another good way to be a lazy sysadmin and save time and effort. It provides an easy method for distributing and installing the scripts and other files that we as sysadmins need to install on many hosts.
|
||||
|
||||
### Resources
|
||||
|
||||
* Edward C. Bailey, Maximum RPM, Sams Publishing, 2000, ISBN 0-672-31105-4
|
||||
|
||||
* Edward C. Bailey, [Maximum RPM][1], updated online version
|
||||
|
||||
* [RPM Documentation][4]: This web page lists most of the available online documentation for rpm. It includes many links to other websites and information about rpm.
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/9/how-build-rpm-packages
|
||||
|
||||
作者:[David Both][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/dboth
|
||||
[1]: http://ftp.rpm.org/max-rpm/
|
||||
[2]: http://rpm.org/index.html
|
||||
[3]: http://www.both.org/?p=960
|
||||
[4]: http://rpm.org/documentation.html
|
@ -1,110 +0,0 @@
|
||||
translating by ypingcn
|
||||
|
||||
Control your data with Syncthing: An open source synchronization tool
|
||||
======
|
||||
Decide how to store and share your personal information.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg)
|
||||
|
||||
These days, some of our most important possessions—from pictures and videos of family and friends to financial and medical documents—are data. And even as cloud storage services boom, so do concerns about privacy and lack of control over our personal data. From the PRISM surveillance program to Google [letting app developers scan your personal emails][1], the news is full of reports that should give us all pause regarding the security of our personal information.
|
||||
|
||||
[Syncthing][2] can help put your mind at ease. An open source peer-to-peer file synchronization tool that runs on Linux, Windows, Mac, Android, and others (sorry, no iOS), Syncthing uses its own protocol, called [Block Exchange Protocol][3]. In brief, Syncthing lets you synchronize your data across many devices without owning a server.
|
||||
|
||||
### Linux
|
||||
|
||||
In this post, I will explain how to install and synchronize files between a Linux computer and an Android phone.
|
||||
|
||||
Syncthing is readily available for most popular distributions. Fedora 28 includes the latest version.
|
||||
|
||||
To install Syncthing in Fedora, you can either search for it in Software Center or execute the following command:
|
||||
|
||||
```
|
||||
sudo dnf install syncthing syncthing-gtk
|
||||
|
||||
```
|
||||
|
||||
Once it’s installed, open it. You’ll be welcomed by an assistant that helps configure Syncthing. Click **Next** until it asks to configure the WebUI. The safest option is to keep **Listen on localhost**, which restricts the web interface to the local machine and keeps unauthorized users away.
|
||||
|
||||
![Syncthing in Setup WebUI dialog box][5]
|
||||
|
||||
Syncthing in Setup WebUI dialog box
|
||||
|
||||
Close the dialog. Now that Syncthing is installed, it’s time to share a folder, connect a device, and start syncing. But first, let’s continue with your other client.
|
||||
|
||||
### Android
|
||||
|
||||
Syncthing is available in Google Play and in F-Droid app stores.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing2.png)
|
||||
|
||||
Once the application is installed, you’ll be welcomed by a wizard. Grant Syncthing permissions to your storage. You might be asked to disable battery optimization for this application. It is safe to do so as we will optimize the app to synchronize only when plugged in and connected to a wireless network.
|
||||
|
||||
Click on the main menu icon and go to **Settings**, then **Run Conditions**. Tick **Always run in the background**, **Run only when charging**, and **Run only on wifi**. Now your Android client is ready to exchange files with your devices.
|
||||
|
||||
There are two important concepts to remember in Syncthing: folders and devices. Folders are what you want to share, but you must have a device to share with. Syncthing allows you to share individual folders with different devices. Devices are added by exchanging device IDs. A device ID is a unique, cryptographically secure identifier that is created when Syncthing starts for the first time.
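If you prefer the command line, recent versions of Syncthing can also print the local device ID directly (a small sketch; the exact flag spelling may vary between releases):

```
$ syncthing --device-id
```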
|
||||
|
||||
### Connecting devices
|
||||
|
||||
Now let’s connect your Linux machine and your Android client.
|
||||
|
||||
In your Linux computer, open Syncthing, click on the **Settings** icon and click **Show ID**. A QR code will show up.
|
||||
|
||||
In your Android mobile, open Syncthing. In the main screen, click the **Devices** tab and press the **+** symbol. In the first field, press the QR code symbol to open the QR scanner.
|
||||
|
||||
Point your mobile camera at the computer QR code. The **Device ID** field will be populated with your desktop client Device ID. Give it a friendly name and save. Because adding a device goes two ways, you now need to confirm on the computer client that you want to add the Android mobile. It might take a couple of minutes for your computer client to ask for confirmation. When it does, click **Add**.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing6.png)
|
||||
|
||||
In the **New Device** window, you can verify and configure some options about your new device, like the **Device Name** and **Addresses**. If you keep dynamic, it will try to auto-discover the device IP, but if you want to force one, you can add it in this field. If you already created a folder (more on this later), you can also share it with this new device.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing7.png)
|
||||
|
||||
Your computer and Android are now paired and ready to exchange files. (If you have more than one computer or mobile phone, simply repeat these steps.)
|
||||
|
||||
### Sharing folders
|
||||
|
||||
Now that the devices you want to sync are already connected, it’s time to share a folder. You can share folders on your computer and the devices you add to that folder will get a copy.
|
||||
|
||||
To share a folder, go to **Settings** and click **Add Shared Folder**:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing8.png)
|
||||
|
||||
In the next window, enter the information of the folder you want to share:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing9.png)
|
||||
|
||||
You can use any label you want. **Folder ID** will be generated randomly and will be used to identify the folder between the clients. In **Path** , click **Browse** and locate the folder you want to share. If you want Syncthing to monitor the folder for changes (such as deletes, new files, etc.), click **Monitor filesystem for changes**.
|
||||
|
||||
Remember, when you share a folder, any change that happens on the other clients will be reflected on every single device. That means that if you share a folder containing pictures with other computers or mobile devices, changes in these other clients will be reflected everywhere. If this is not what you want, you can make your folder “Send Only” so it will send files to the clients, but the other clients’ changes won’t be synced.
|
||||
|
||||
When this is done, go to **Share with Devices** and select the hosts you want to sync with your folder:
|
||||
|
||||
All the devices you select will need to accept the share request; you will get a notification from the devices:
|
||||
|
||||
Just as when you shared the folder, you must configure the new shared folder:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing12.png)
|
||||
|
||||
Again, here you can define any label, but the Folder ID must match on each client. In the folder option, select the destination for the folder and its files. Remember that any change done in this folder will be reflected on every device allowed in the folder.
|
||||
|
||||
These are the steps to connect devices and share folders with Syncthing. It might take a few minutes to start copying, depending on your network settings or if you are not on the same network.
|
||||
|
||||
Syncthing offers many more great features and options. Try it—and take control of your data.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/9/take-control-your-data-syncthing
|
||||
|
||||
作者:[Michael Zamot][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mzamot
|
||||
[1]: https://gizmodo.com/google-says-it-doesnt-go-through-your-inbox-anymore-bu-1827299695
|
||||
[2]: https://syncthing.net/
|
||||
[3]: https://docs.syncthing.net/specs/bep-v1.html
|
||||
[4]: /file/410191
|
||||
[5]: https://opensource.com/sites/default/files/uploads/syncthing1.png (Syncthing in Setup WebUI dialog box)
|
@ -1,3 +1,5 @@
|
||||
Translating by way-ww
|
||||
|
||||
Why Linux users should try Rust
|
||||
======
|
||||
|
||||
|
@ -1,87 +0,0 @@
|
||||
5 cool tiling window managers
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/09/tilingwindowmanagers-816x345.jpg)
|
||||
The Linux desktop ecosystem offers multiple window managers (WMs). Some are developed as part of a desktop environment. Others are meant to be used as standalone applications. This is the case with tiling WMs, which offer a more lightweight, customized environment. This article presents five such tiling WMs for you to try out.
|
||||
|
||||
### i3
|
||||
|
||||
[i3][1] is one of the most popular tiling window managers. Like most other such WMs, i3 focuses on low resource consumption and customizability by the user.
|
||||
|
||||
You can refer to [this previous article in the Magazine][2] to get started with i3 installation details and how to configure it.
|
||||
|
||||
### sway
|
||||
|
||||
[sway][3] is a tiling Wayland compositor. It has the advantage of compatibility with an existing i3 configuration, so you can use it to replace i3 and use Wayland as the display protocol.
|
||||
|
||||
You can use dnf to install sway from Fedora repository:
|
||||
|
||||
```
|
||||
$ sudo dnf install sway
|
||||
```
|
||||
|
||||
If you want to migrate from i3 to sway, there’s a small [migration guide][4] available.
|
||||
|
||||
### Qtile
|
||||
|
||||
[Qtile][5] is another tiling manager that also happens to be written in Python. By default, you configure Qtile in a Python script located under ~/.config/qtile/config.py. When this script is not available, Qtile uses a default [configuration][6].
|
||||
|
||||
One of the benefits of Qtile being in Python is you can write scripts to control the WM. For example, the following script prints the screen details:
|
||||
|
||||
```
|
||||
> from libqtile.command import Client
|
||||
> c = Client()
|
||||
> print(c.screen.info)
|
||||
{'index': 0, 'width': 1920, 'height': 1006, 'x': 0, 'y': 0}
|
||||
```
|
||||
|
||||
To install Qtile on Fedora, use the following command:
|
||||
|
||||
```
|
||||
$ sudo dnf install qtile
|
||||
```
|
||||
|
||||
### dwm
|
||||
|
||||
The [dwm][7] window manager focuses more on being lightweight. One goal of the project is to keep dwm minimal and small. For example, the entire code base never exceeded 2000 lines of code. On the other hand, dwm isn’t as easy to customize and configure. Indeed, the only way to change dwm default configuration is to [edit the source code and recompile the application][8].
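To give a flavor of that workflow, here is an illustrative excerpt of the kind of edit you would make in config.h before recompiling (a sketch, not the complete file):

```
/* config.h: rebind the modifier key from Alt to the Super/Windows key,
 * then rebuild and reinstall with `make clean install`. */
#define MODKEY Mod4Mask    /* the default is Mod1Mask (Alt) */
```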
|
||||
|
||||
If you want to try the default configuration, you can install dwm in Fedora using dnf:
|
||||
|
||||
```
|
||||
$ sudo dnf install dwm
|
||||
```
|
||||
|
||||
For those who want to change their dwm configuration, the dwm-user package is available in Fedora. This package automatically recompiles dwm using the configuration stored in the user home directory at ~/.dwm/config.h.
|
||||
|
||||
### awesome
|
||||
|
||||
[awesome][9] originally started as a fork of dwm, to provide configuration of the WM using an external configuration file. The configuration is done via Lua scripts, which allow you to write scripts to automate tasks or create widgets.
|
||||
|
||||
You can check out awesome on Fedora by installing it like this:
|
||||
|
||||
```
|
||||
$ sudo dnf install awesome
|
||||
```
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/5-cool-tiling-window-managers/
|
||||
|
||||
作者:[Clément Verna][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org
|
||||
[1]: https://i3wm.org/
|
||||
[2]: https://fedoramagazine.org/getting-started-i3-window-manager/
|
||||
[3]: https://swaywm.org/
|
||||
[4]: https://github.com/swaywm/sway/wiki/i3-Migration-Guide
|
||||
[5]: http://www.qtile.org/
|
||||
[6]: https://github.com/qtile/qtile/blob/develop/libqtile/resources/default_config.py
|
||||
[7]: https://dwm.suckless.org/
|
||||
[8]: https://dwm.suckless.org/customisation/
|
||||
[9]: https://awesomewm.org/
|
@ -1,3 +1,5 @@
|
||||
[translating by jrg 20181014]
|
||||
|
||||
Using Grails with jQuery and DataTables
|
||||
======
|
||||
|
||||
|
@ -1,261 +0,0 @@
|
||||
16 iptables tips and tricks for sysadmins
|
||||
======
|
||||
Iptables provides powerful capabilities to control traffic coming in and out of your system.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg)
|
||||
|
||||
Modern Linux kernels come with a packet-filtering framework named [Netfilter][1]. Netfilter enables you to allow, drop, and modify traffic coming in and going out of a system. The **iptables** userspace command-line tool builds upon this functionality to provide a powerful firewall, which you can configure by adding rules to form a firewall policy. [iptables][2] can be very daunting with its rich set of capabilities and baroque command syntax. Let's explore some of them and develop a set of iptables tips and tricks for many situations a system administrator might encounter.
|
||||
|
||||
### Avoid locking yourself out
|
||||
|
||||
Scenario: You are going to make changes to the iptables policy rules on your company's primary server. You want to avoid locking yourself—and potentially everybody else—out. (This costs time and money and causes your phone to ring off the wall.)
|
||||
|
||||
#### Tip #1: Take a backup of your iptables configuration before you start working on it.
|
||||
|
||||
Back up your configuration with the command:
|
||||
|
||||
```
|
||||
/sbin/iptables-save > /root/iptables-works
|
||||
|
||||
```
|
||||
#### Tip #2: Even better, include a timestamp in the filename.
|
||||
|
||||
Add the timestamp with the command:
|
||||
|
||||
```
|
||||
/sbin/iptables-save > /root/iptables-works-`date +%F`
|
||||
|
||||
```
|
||||
|
||||
You get a file with a name like:
|
||||
|
||||
```
|
||||
/root/iptables-works-2018-09-11
|
||||
|
||||
```
|
||||
|
||||
If you do something that prevents your system from working, you can quickly restore it:
|
||||
|
||||
```
|
||||
/sbin/iptables-restore < /root/iptables-works-2018-09-11
|
||||
|
||||
```
|
||||
|
||||
#### Tip #3: Every time you create a backup copy of the iptables policy, create a link to the file with 'latest' in the name.
|
||||
|
||||
```
|
||||
ln -s /root/iptables-works-`date +%F` /root/iptables-works-latest
|
||||
|
||||
```
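Tips #1 through #3 can also be combined into a small daily cron job (a sketch; /etc/cron.daily/iptables-backup is a hypothetical path, so adjust it to your distribution's cron layout):

```
#!/bin/bash
# Hypothetical /etc/cron.daily/iptables-backup:
# save today's policy and point the 'latest' symlink at it.
/sbin/iptables-save > /root/iptables-works-$(date +%F)
ln -sf /root/iptables-works-$(date +%F) /root/iptables-works-latest
```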
|
||||
|
||||
#### Tip #4: Put specific rules at the top of the policy and generic rules at the bottom.
|
||||
|
||||
Avoid generic rules like this at the top of the policy rules:
|
||||
|
||||
```
|
||||
iptables -A INPUT -p tcp --dport 22 -j DROP
|
||||
|
||||
```
|
||||
|
||||
The more criteria you specify in the rule, the less chance you will have of locking yourself out. Instead of the very generic rule above, use something like this:
|
||||
|
||||
```
|
||||
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -d 192.168.100.101 -j DROP
|
||||
|
||||
```
|
||||
|
||||
This rule appends ( **-A** ) to the **INPUT** chain a rule that will **DROP** any packets originating from the CIDR block **10.0.0.0/8** on TCP ( **-p tcp** ) port 22 ( **\--dport 22** ) destined for IP address 192.168.100.101 ( **-d 192.168.100.101** ).
|
||||
|
||||
There are plenty of ways you can be more specific. For example, using **-i eth0** will limit the processing to a single NIC in your server. This way, the filtering actions will not apply the rule to **eth1**.
|
||||
|
||||
#### Tip #5: Whitelist your IP address at the top of your policy rules.
|
||||
|
||||
This is a very effective method of not locking yourself out. Everybody else, not so much.
|
||||
|
||||
```
|
||||
iptables -I INPUT -s <your IP> -j ACCEPT
|
||||
|
||||
```
|
||||
|
||||
You need to put this as the first rule for it to work properly. Remember, **-I** inserts it as the first rule; **-A** appends it to the end of the list.
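To make the difference concrete (203.0.113.10 is a placeholder documentation address):

```
iptables -I INPUT 1 -s 203.0.113.10 -j ACCEPT   # inserted as rule 1, evaluated first
iptables -A INPUT -s 203.0.113.10 -j ACCEPT     # appended to the end of the chain
```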
|
||||
|
||||
#### Tip #6: Know and understand all the rules in your current policy.
|
||||
|
||||
Not making a mistake in the first place is half the battle. If you understand the inner workings behind your iptables policy, it will make your life easier. Draw a flowchart if you must. Also remember: What the policy does and what it is supposed to do can be two different things.
|
||||
|
||||
### Set up a workstation firewall policy
|
||||
|
||||
Scenario: You want to set up a workstation with a restrictive firewall policy.
|
||||
|
||||
#### Tip #1: Set the default policy as DROP.
|
||||
|
||||
```
|
||||
# Set a default policy of DROP
|
||||
*filter
|
||||
:INPUT DROP [0:0]
|
||||
:FORWARD DROP [0:0]
|
||||
:OUTPUT DROP [0:0]
|
||||
```
|
||||
|
||||
#### Tip #2: Allow users the minimum amount of services needed to get their work done.
|
||||
|
||||
The iptables rules need to allow the workstation to get an IP address, netmask, and other important information via DHCP ( **-p udp --dport 67:68 --sport 67:68** ). For remote management, the rules need to allow inbound SSH ( **\--dport 22** ), outbound mail ( **\--dport 25** ), DNS ( **\--dport 53** ), outbound ping ( **-p icmp** ), Network Time Protocol ( **\--dport 123 --sport 123** ), and outbound HTTP ( **\--dport 80** ) and HTTPS ( **\--dport 443** ).
|
||||
|
||||
```
|
||||
# Set a default policy of DROP
|
||||
*filter
|
||||
:INPUT DROP [0:0]
|
||||
:FORWARD DROP [0:0]
|
||||
:OUTPUT DROP [0:0]
|
||||
|
||||
# Accept any related or established connections
|
||||
-I INPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
|
||||
-I OUTPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
|
||||
|
||||
# Allow all traffic on the loopback interface
|
||||
-A INPUT -i lo -j ACCEPT
|
||||
-A OUTPUT -o lo -j ACCEPT
|
||||
|
||||
# Allow outbound DHCP request
|
||||
-A OUTPUT -o eth0 -p udp --dport 67:68 --sport 67:68 -j ACCEPT
|
||||
|
||||
# Allow inbound SSH
|
||||
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
|
||||
|
||||
# Allow outbound email
|
||||
-A OUTPUT -o eth0 -p tcp -m tcp --dport 25 -m state --state NEW -j ACCEPT
|
||||
|
||||
# Outbound DNS lookups
|
||||
-A OUTPUT -o eth0 -p udp -m udp --dport 53 -j ACCEPT
|
||||
|
||||
# Outbound PING requests
|
||||
-A OUTPUT -o eth0 -p icmp -j ACCEPT
|
||||
|
||||
# Outbound Network Time Protocol (NTP) requests
|
||||
-A OUTPUT -o eth0 -p udp --dport 123 --sport 123 -j ACCEPT
|
||||
|
||||
# Outbound HTTP
|
||||
-A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
|
||||
-A OUTPUT -o eth0 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
|
||||
|
||||
COMMIT
|
||||
```
|
||||
|
||||
### Restrict an IP address range
|
||||
|
||||
Scenario: The CEO of your company thinks the employees are spending too much time on Facebook and not getting any work done. The CEO tells the CIO to do something about the employees wasting time on Facebook. The CIO tells the CISO to do something about employees wasting time on Facebook. Eventually, you are told the employees are wasting too much time on Facebook, and you have to do something about it. You decide to block all access to Facebook. First, find out Facebook's IP address by using the **host** and **whois** commands.
|
||||
|
||||
```
|
||||
host -t a www.facebook.com
|
||||
www.facebook.com is an alias for star.c10r.facebook.com.
|
||||
star.c10r.facebook.com has address 31.13.65.17
|
||||
whois 31.13.65.17 | grep inetnum
|
||||
inetnum: 31.13.64.0 - 31.13.127.255
|
||||
```
|
||||
|
||||
Then convert that range to CIDR notation by using the [CIDR to IPv4 Conversion][3] page. You get **31.13.64.0/18**. To prevent outgoing access to [www.facebook.com][4], enter:
|
||||
|
||||
```
|
||||
iptables -A FORWARD -p tcp -i eth0 -o eth1 -d 31.13.64.0/18 -j DROP
|
||||
```
|
||||
|
||||
### Regulate by time
|
||||
|
||||
Scenario: The backlash from the company's employees over denying access to Facebook access causes the CEO to relent a little (that and his administrative assistant's reminding him that she keeps HIS Facebook page up-to-date). The CEO decides to allow access to Facebook.com only at lunchtime (12PM to 1PM). Assuming the default policy is DROP, use iptables' time features to open up access.
|
||||
|
||||
```
|
||||
iptables -A FORWARD -p tcp -m multiport --dport http,https -i eth0 -o eth1 -m time --timestart 12:00 --timestop 13:00 -d
|
||||
31.13.64.0/18 -j ACCEPT
|
||||
```
|
||||
|
||||
This command sets the policy to allow ( **-j ACCEPT** ) http and https ( **-m multiport --dport http,https** ) between noon ( **\--timestart 12:00** ) and 1PM ( **\--timestop 13:00** ) to Facebook.com ( **-d** [31.13.64.0/18][5] ).
|
||||
|
||||
### Regulate by time—Take 2
|
||||
|
||||
Scenario: During planned downtime for system maintenance, you need to deny all TCP and UDP traffic between the hours of 2AM and 3AM so maintenance tasks won't be disrupted by incoming traffic. This will take two iptables rules:
|
||||
|
||||
```
|
||||
iptables -A INPUT -p tcp -m time --timestart 02:00 --timestop 03:00 -j DROP
|
||||
iptables -A INPUT -p udp -m time --timestart 02:00 --timestop 03:00 -j DROP
|
||||
```
|
||||
|
||||
With these rules, TCP and UDP traffic ( **-p tcp and -p udp** ) are denied ( **-j DROP** ) between the hours of 2AM ( **\--timestart 02:00** ) and 3AM ( **\--timestop 03:00** ) on input ( **-A INPUT** ).
|
||||
|
||||
### Limit connections with iptables
|
||||
|
||||
Scenario: Your internet-connected web servers are under attack by bad actors from around the world attempting to DoS (Denial of Service) them. To mitigate these attacks, you restrict the number of connections a single IP address can have to your web server:
|
||||
|
||||
```
|
||||
iptables -A INPUT -p tcp --syn -m multiport --dport http,https -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
|
||||
```
|
||||
|
||||
Let's look at what this rule does. If a host makes more than 20 ( **\--connlimit-above 20** ) new connections ( **-p tcp --syn** ) in a minute to the web servers ( **\--dport http,https** ), reject the new connection ( **-j REJECT** ) and tell the connecting host you are rejecting the connection ( **\--reject-with tcp-reset** ).
|
||||
|
||||
### Monitor iptables rules
|
||||
|
||||
Scenario: Since iptables operates on a "first match wins" basis as packets traverse the rules in a chain, frequently matched rules should be near the top of the policy and less frequently matched rules should be near the bottom. How do you know which rules are traversed the most or the least so they can be ordered nearer the top or the bottom?
|
||||
|
||||
#### Tip #1: See how many times each rule has been hit.
|
||||
|
||||
Use this command:
|
||||
|
||||
```
|
||||
iptables -L -v -n --line-numbers
|
||||
```
|
||||
|
||||
The command will list all the rules in the chain ( **-L** ). Since no chain was specified, all the chains will be listed with verbose output ( **-v** ) showing packet and byte counters in numeric format ( **-n** ) with line numbers at the beginning of each rule corresponding to that rule's position in the chain.
|
||||
|
||||
Using the packet and bytes counts, you can order the most frequently traversed rules to the top and the least frequently traversed rules towards the bottom.
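For example, to move a heavily matched rule from position 7 to the top of the INPUT chain, you can recreate it by number (a sketch; print the exact rule specification first so you can re-add it verbatim):

```
iptables -S INPUT 7     # print rule 7 in iptables-save syntax
iptables -D INPUT 7     # delete rule 7 by number
iptables -I INPUT 1 <rule specification copied from above>
```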
|
||||
|
||||
#### Tip #2: Remove unnecessary rules.
|
||||
|
||||
Which rules aren't getting any matches at all? These would be good candidates for removal from the policy. You can find that out with this command:
|
||||
|
||||
```
|
||||
iptables -nvL | grep -v "0 0"
|
||||
```
|
||||
|
||||
Note: that's not a tab between the zeros; there are five spaces between the zeros.
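If counting spaces feels fragile, a whitespace-tolerant pattern achieves the same filtering (a sketch that assumes GNU grep, which supports \s in extended regular expressions):

```
iptables -nvL | grep -vE '^\s*0\s+0\s'
```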
|
||||
|
||||
#### Tip #3: Monitor what's going on.
|
||||
|
||||
You would like to monitor what's going on with iptables in real time, like with **top**. Use this command to monitor iptables activity dynamically and show only the rules that are actively being traversed:
|
||||
|
||||
```
|
||||
watch --interval=5 'iptables -nvL | grep -v "0 0"'
|
||||
```
|
||||
|
||||
**watch** runs **'iptables -nvL | grep -v "0 0"'** every five seconds and displays the first screen of its output. This allows you to watch the packet and byte counts change over time.
|
||||
|
||||
### Report on iptables
|
||||
|
||||
Scenario: Your manager thinks this iptables firewall stuff is just great, but a daily activity report would be even better. Sometimes it's more important to write a report than to do the work.
|
||||
|
||||
Use the packet filter/firewall/IDS log analyzer [FWLogwatch][6] to create reports based on the iptables firewall logs. FWLogwatch supports many log formats and offers many analysis options. It generates daily and monthly summaries of the log files, allowing the security administrator to free up substantial time, maintain better control over network security, and reduce unnoticed attacks.
|
||||
|
||||
Here is sample output from FWLogwatch:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/fwlogwatch.png)
|
||||
|
||||
### More than just ACCEPT and DROP
|
||||
|
||||
We've covered many facets of iptables, all the way from making sure you don't lock yourself out when working with iptables to monitoring iptables to visualizing the activity of an iptables firewall. These will get you started down the path to realizing even more iptables tips and tricks.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/iptables-tips-and-tricks
|
||||
|
||||
作者:[Gary Smith][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/greptile
|
||||
[1]: https://en.wikipedia.org/wiki/Netfilter
|
||||
[2]: https://en.wikipedia.org/wiki/Iptables
|
||||
[3]: http://www.ipaddressguide.com/cidr
|
||||
[4]: http://www.facebook.com
|
||||
[5]: http://31.13.64.0/18
|
||||
[6]: http://fwlogwatch.inside-security.de/
|
@ -1,72 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Introducing Swift on Fedora
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/09/swift-816x345.jpg)
|
||||
|
||||
Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. It aims to be the best language for a variety of programming projects, ranging from systems programming to desktop applications and scaling up to cloud services. Read more about it and how to try it out in Fedora.
|
||||
|
||||
### Safe, Fast, Expressive
|
||||
|
||||
Like many modern programming languages, Swift was designed to be safer than C-based languages. For example, variables are always initialized before they can be used. Arrays and integers are checked for overflow. Memory is automatically managed.
|
||||
|
||||
Swift puts intent right in the syntax. To declare a variable, use the var keyword. To declare a constant, use let.
|
||||
|
||||
Swift also guarantees that ordinary, non-optional values can never be nil; in fact, trying to use a value known to be nil will cause a compile-time error. When a nil value is appropriate, Swift supports a mechanism called **optionals**. An optional may contain nil, but it is safely unwrapped using the **?** operator.
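A minimal sketch of optionals in action:

```
// An optional String may hold nil; a plain String may not.
var greeting: String? = nil
greeting = "Hello, Fedora!"

if let message = greeting {        // unwrap only if the value is non-nil
    print(message)
}
print(greeting?.count ?? 0)        // optional chaining with ?, default via ??
```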
|
||||
|
||||
Some additional features include:
|
||||
|
||||
* Closures unified with function pointers
|
||||
* Tuples and multiple return values
|
||||
* Generics
|
||||
* Fast and concise iteration over a range or collection
|
||||
* Structs that support methods, extensions, and protocols
|
||||
* Functional programming patterns, e.g., map and filter
|
||||
* Powerful error handling built-in
|
||||
* Advanced control flow with do, guard, defer, and repeat keywords
|
||||
|
||||
|
||||
|
||||
### Try Swift out
|
||||
|
||||
Swift is available in Fedora 28 under the package name **swift-lang**. Once installed, run `swift` and the REPL console starts up.
|
||||
|
||||
```
|
||||
$ swift
|
||||
Welcome to Swift version 4.2 (swift-4.2-RELEASE). Type :help for assistance.
|
||||
1> let greeting="Hello world!"
|
||||
greeting: String = "Hello world!"
|
||||
2> print(greeting)
|
||||
Hello world!
|
||||
3> greeting = "Hello universe!"
|
||||
error: repl.swift:3:10: error: cannot assign to value: 'greeting' is a 'let' constant
|
||||
greeting = "Hello universe!"
|
||||
~~~~~~~~ ^
|
||||
|
||||
|
||||
3>
|
||||
|
||||
```
|
||||
|
||||
Swift has a growing community, and in particular, a [work group][1] dedicated to making it an efficient and effective server-side programming language. Be sure to visit [its home page][2] for more ways to get involved.
|
||||
|
||||
Photo by [Uillian Vargas][3] on [Unsplash][4].
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/introducing-swift-fedora/
|
||||
|
||||
作者:[Link Dupont][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/linkdupont/
|
||||
[1]: https://swift.org/server/
|
||||
[2]: http://swift.org
|
||||
[3]: https://unsplash.com/photos/7oJpVR1inGk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[4]: https://unsplash.com/search/photos/fast?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -1,75 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Tips for listing files with ls at the Linux command line
|
||||
======
|
||||
Learn some of the Linux 'ls' command's most useful variations.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)
|
||||
|
||||
One of the first commands I learned in Linux was `ls`. Knowing what’s in the directories where your files reside is important. Being able to see and modify not just some but all of the files is also important.
|
||||
|
||||
My first Linux cheat sheet was the [One Page Linux Manual][1], which was released in 1999 and became my go-to reference. I taped it over my desk and referred to it often as I began to explore Linux. Listing files with `ls -l` is introduced on the first page, at the bottom of the first column.
|
||||
|
||||
Later, I would learn other iterations of this most basic command. Through the `ls` command, I began to learn about the complexity of the Linux file permissions and what was mine and what required root or sudo permission to change. I became very comfortable on the command line over time, and while I still use `ls -l` to find files in the directory, I frequently use `ls -al` so I can see hidden files that might need to be changed, like configuration files.
|
||||
|
||||
According to an article by Eric Fischer about the `ls` command in the [Linux Documentation Project][2], the command's roots go back to the `listf` command on MIT’s Compatible Time Sharing System in 1961. When CTSS was replaced by [Multics][3], the command became `list`, with switches like `list -all`. According to [Wikipedia][4], `ls` appeared in the original version of AT&T Unix. The `ls` command we use today on Linux systems comes from the [GNU Core Utilities][5].
|
||||
|
||||
Most of the time, I use only a couple of iterations of the command. Looking inside a directory with `ls` or `ls -al` is how I generally use the command, but there are many other options that you should be familiar with.
|
||||
|
||||
`$ ls -l` provides a simple list of the directory:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/linux_ls_1_0.png)
|
||||
|
||||
Using the man pages of my Fedora 28 system, I find that there are many other options to `ls`, all of which provide interesting and useful information about the Linux file system. By entering `man ls` at the command prompt, we can begin to explore some of the other options:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/linux_ls_2_0.png)
|
||||
|
||||
To sort the directory by file sizes, use `ls -lS`:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/linux_ls_3_0.png)
|
||||
|
||||
To list the contents in reverse order, use `ls -lr`:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/linux_ls_4.png)
|
||||
|
||||
To list contents by columns, use `ls -C`:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/linux_ls_5.png)
|
||||
|
||||
`ls -al` provides a list of all the files in the same directory:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/linux_ls_6.png)
|
||||
|
||||
Here are some additional options that I find useful and interesting:
|
||||
|
||||
* List only the .txt files in the directory: `ls *.txt`
|
||||
* Show the allocated size of each file: `ls -s`
|
||||
* Sort by time and date: `ls -t`
|
||||
* Sort by extension: `ls -X`
|
||||
* Sort by file size: `ls -S`
|
||||
* Long format with file size: `ls -ls`
|
||||
|
||||
|
||||
|
||||
|
||||
To generate a directory list in the specified format and send it to a file for later viewing, enter `ls -al > mydirectorylist`. Finally, one of the more exotic commands I found is `ls -R`, which provides a recursive list of all the directories on your computer and their contents.
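These switches also combine freely; for example (a sketch using common GNU coreutils options):

```
ls -lhS     # long format, human-readable sizes, sorted largest first
ls -altr    # all files, long format, sorted oldest first
```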
|
||||
|
||||
For a complete list of all the iterations of the `ls` command, refer to the [GNU Core Utilities][6].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/ls-command
|
||||
|
||||
作者:[Don Watkins][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/don-watkins
|
||||
[1]: http://hackerspace.cs.rutgers.edu/library/General/One_Page_Linux_Manual.pdf
|
||||
[2]: http://www.tldp.org/LDP/LG/issue48/fischer.html
|
||||
[3]: https://en.wikipedia.org/wiki/Multics
|
||||
[4]: https://en.wikipedia.org/wiki/Ls
|
||||
[5]: http://www.gnu.org/s/coreutils/
|
||||
[6]: https://www.gnu.org/software/coreutils/manual/html_node/ls-invocation.html#ls-invocation
|
102
sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md
Normal file
102
sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md
Normal file
@ -0,0 +1,102 @@
|
||||
4 Must-Have Tools for Monitoring Linux
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring-main.jpg?itok=YHLK-gn6)
|
||||
|
||||
Linux. It’s powerful, flexible, stable, secure, user-friendly… the list goes on and on. There are so many reasons why people have adopted the open source operating system. One of those reasons which particularly stands out is its flexibility. Linux can be and do almost anything. In fact, it will (in most cases) go well above what most platforms can. Just ask any enterprise business why they use Linux and open source.
|
||||
|
||||
But once you’ve deployed those servers and desktops, you need to be able to keep track of them. What’s going on? How are they performing? Is something afoot? In other words, you need to be able to monitor your Linux machines. “How?” you ask. That’s a great question, and one with many answers. I want to introduce you to a few such tools—from command line, to GUI, to full-blown web interfaces (with plenty of bells and whistles). From this collection of tools, you can gather just about any kind of information you need. I will stick only with tools that are open source, which will exclude some high-quality, proprietary solutions. But it’s always best to start with open source, and, chances are, you’ll find everything you need to monitor your desktops and servers. So, let’s take a look at four such tools.
|
||||
|
||||
### Top
|
||||
|
||||
We’ll first start with the obvious. The top command is a great place to start when you need to monitor what processes are consuming resources. The top command has been around for a very long time and has, for years, been the first tool I turn to when something is amiss. What top does is provide a real-time view of all running processes on a Linux machine. The top command not only displays dynamic information about each running process (as well as the necessary information to manage those processes), but also gives you an overview of the machine (such as how many CPUs are found, and how much RAM and swap space is available). When I feel something is going wrong with a machine, I immediately turn to top to see what processes are gobbling up the most CPU and MEM (Figure 1). From there, I can act accordingly.
|
||||
|
||||
![top][2]
|
||||
|
||||
Figure 1: Top running on Elementary OS.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
There is no need to install anything to use the top command, because it is installed on almost every Linux distribution by default. For more information on top, issue the command man top.
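For example (the sort field is an illustration; check man top for the fields your version supports):

```
$ top           # launch the interactive, real-time view
$ top -o %MEM   # start with processes sorted by memory usage
```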
|
||||
|
||||
### Glances
|
||||
|
||||
If you thought the top command offered up plenty of information, you’ve yet to experience Glances. Glances is another text-based monitoring tool. In similar fashion to top, glances offers a real-time listing of more information about your system than nearly any other monitor of its kind. You’ll see disk/network I/O, thermal readouts, fan speeds, disk usage by hardware device and logical volume, processes, warnings, alerts, and much more. Glances also includes a handy sidebar that displays information about disk, filesystem, network, sensors, and even Docker stats. To enable the sidebar, hit the 2 key (while glances is running). You’ll then see the added information (Figure 2).
|
||||
|
||||
![glances][5]
|
||||
|
||||
Figure 2: The glances monitor displaying docker stats along with all the other information it offers.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
You won’t find glances installed by default. However, the tool is available in most standard repositories, so it can be installed from the command line or your distribution’s app store, without having to add a third-party repository.
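For example, on two of the more common package managers (adjust for your distribution):

```
$ sudo apt-get install glances   # Debian, Ubuntu, and derivatives
$ sudo dnf install glances       # Fedora
$ glances                        # run it, then hit the 2 key for the sidebar
```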
|
||||
|
||||
### GNOME System Monitor
|
||||
|
||||
If you're not a fan of the command line, there are plenty of tools to make your monitoring life a bit easier. One such tool is GNOME System Monitor, which is a front-end for the top tool. But if you prefer a GUI, you can’t beat this app.
|
||||
|
||||
With GNOME System Monitor, you can scroll through the listing of running apps (Figure 3), select an app, and then either end the process (by clicking End Process) or view more details about said process (by clicking the gear icon).
|
||||
|
||||
![GNOME System Monitor][7]
|
||||
|
||||
Figure 3: GNOME System Monitor in action.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
You can also click any one of the tabs at the top of the window to get even more information about your system. The Resources tab is a very handy way to get real-time data on CPU, Memory, Swap, and Network (Figure 4).
|
||||
|
||||
![GNOME System Monitor][9]
|
||||
|
||||
Figure 4: The GNOME System Monitor Resources tab in action.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
If you don’t find GNOME System Monitor installed by default, it can be found in the standard repositories, so it’s very simple to add to your system.
|
||||
|
||||
### Nagios
|
||||
|
||||
If you’re looking for an enterprise-grade network monitoring system, look no further than [Nagios][10]. But don’t think Nagios is limited to only monitoring network traffic. This system has over 5,000 different add-ons that can be used to expand the system to perfectly meet (and exceed) your needs. The Nagios monitor doesn’t come pre-installed on your Linux distribution, and although the install isn’t quite as difficult as that of some similar tools, it does have some complications. And, because the Nagios version found in many of the default repositories is out of date, you’ll definitely want to install from source. Once installed, you can log into the Nagios web GUI and start monitoring (Figure 5).
|
||||
|
||||
![Nagios ][12]
|
||||
|
||||
Figure 5: With Nagios you can even start and stop services.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
Of course, at this point, you’ve only installed the core and will also need to walk through the process of installing the plugins. Trust me when I say it’s worth the extra time.
|
||||
The one caveat with Nagios is that you must manually add any remote hosts to be monitored (outside of the host the system is installed on) via text files. Fortunately, the installation includes sample configuration files (found in /usr/local/nagios/etc/objects), which you can use to create configuration files for remote servers (which are placed in /usr/local/nagios/etc/servers).
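Here is a hedged sketch of that workflow; the directory layout follows the paths above, but the file name and host details are hypothetical:

```
$ sudo mkdir -p /usr/local/nagios/etc/servers
$ sudo cp /usr/local/nagios/etc/objects/localhost.cfg \
    /usr/local/nagios/etc/servers/web01.cfg
# edit web01.cfg to set host_name and address for the remote machine,
# then restart Nagios so it picks up the new configuration
```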
|
||||
|
||||
Although Nagios can be a challenge to install, it is very much worth the time, as you will wind up with an enterprise-ready monitoring system capable of handling nearly anything you throw at it.
|
||||
|
||||
### There’s More Where That Came From
|
||||
|
||||
We’ve barely scratched the surface in terms of monitoring tools that are available for the Linux platform. No matter whether you’re looking for a general system monitor or something very specific, a command line or GUI application, you’ll find what you need. These four tools offer an outstanding starting point for any Linux administrator. Give them a try and see if you don’t find exactly the information you need.
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux"][13] course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2018/10/4-must-have-tools-monitoring-linux
|
||||
|
||||
作者:[Jack Wallen][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/jlwallen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: /files/images/monitoring1jpg
|
||||
[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_1.jpg?itok=UiyNGji0 (top)
|
||||
[3]: /licenses/category/used-permission
|
||||
[4]: /files/images/monitoring2jpg
|
||||
[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_2.jpg?itok=K3OxLcvE (glances)
|
||||
[6]: /files/images/monitoring3jpg
|
||||
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_3.jpg?itok=UKcyEDcT (GNOME System Monitor)
|
||||
[8]: /files/images/monitoring4jpg
|
||||
[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_4.jpg?itok=orLRH3m0 (GNOME System Monitor)
|
||||
[10]: https://www.nagios.org/
|
||||
[11]: /files/images/monitoring5jpg
|
||||
[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/monitoring_5.jpg?itok=RGcLLWL7 (Nagios )
|
||||
[13]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Lab 3: User Environments
|
||||
======
|
||||
### Lab 3: User Environments
|
||||
|
@ -1,181 +0,0 @@
|
||||
PyTorch 1.0 Preview Release: Facebook’s newest Open Source AI
|
||||
======
|
||||
Facebook already uses its own open source AI framework, PyTorch, quite extensively in its artificial intelligence projects. Recently, it has gone a step further by releasing a pre-release preview of version 1.0.
|
||||
|
||||
For those who are not familiar, [PyTorch][1] is a Python-based library for Scientific Computing.
|
||||
|
||||
PyTorch harnesses the [superior computational power of Graphical Processing Units (GPUs)][2] for carrying out complex [Tensor][3] computations and implementing [deep neural networks][4]. So, it is used widely across the world by numerous researchers and developers.
|
||||
|
||||
This new ready-to-use [Preview Release][5] was announced at the [PyTorch Developer Conference][6] at [The Midway][7], San Francisco, CA on Tuesday, October 2, 2018.
|
||||
|
||||
### Highlights of PyTorch 1.0 Release Candidate
|
||||
|
||||
![PyTorhc is Python based open source AI framework from Facebook][8]
|
||||
|
||||
Some of the main new features in the release candidate are:
|
||||
|
||||
#### 1\. JIT
|
||||
|
||||
JIT is a set of compiler tools that brings research closer to production. It includes a Python-based language called Torch Script, as well as ways to make existing code compatible with it.
|
||||
|
||||
#### 2\. New torch.distributed library: “C10D”
|
||||
|
||||
“C10D” enables asynchronous operation on different backends with performance improvements on slower networks and more.
|
||||
|
||||
#### 3\. C++ frontend (experimental)
|
||||
|
||||
Though it has been specifically mentioned as an unstable API (expected in a pre-release), this is a pure C++ interface to the PyTorch backend that follows the API and architecture of the established Python frontend to enable research in high performance, low latency and C++ applications installed directly on hardware.
|
||||
|
||||
To know more, you can take a look at the complete [update notes][9] on GitHub.
|
||||
|
||||
The first stable version, PyTorch 1.0, will be released in the summer.
|
||||
|
||||
### Installing PyTorch on Linux
|
||||
|
||||
To install PyTorch v1.0rc0, the developers recommend using [conda][10], while there are also other ways to do that, as shown on their [local installation page][11], where they have documented everything necessary in detail.
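As a rough sketch of the conda route (the exact package name may change, so check the local installation page linked above):

```
conda install pytorch-nightly -c pytorch
```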
|
||||
|
||||
#### Prerequisites
|
||||
|
||||
* Linux
|
||||
* Pip
|
||||
* Python
|
||||
* [CUDA][12] (For Nvidia GPU owners)
|
||||
|
||||
|
||||
|
||||
As we recently showed you [how to install and use Pip][13], let’s get to know how we can install PyTorch with it.
|
||||
|
||||
Note that PyTorch has GPU and CPU-only variants. You should install the one that suits your hardware.
|
||||
|
||||
#### Installing old and stable version of PyTorch
|
||||
|
||||
If you want the stable release (version 0.4) for your GPU, use:
|
||||
|
||||
```
|
||||
pip install torch torchvision
|
||||
|
||||
```
|
||||
|
||||
Use these two commands in succession for a CPU-only stable release:
|
||||
|
||||
```
|
||||
pip install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp27-cp27mu-linux_x86_64.whl
|
||||
pip install torchvision
|
||||
|
||||
```
|
||||
|
||||
#### Installing PyTorch 1.0 Release Candidate
|
||||
|
||||
You can install the PyTorch 1.0 RC GPU version with this command:
|
||||
|
||||
```
|
||||
pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu92/torch_nightly.html
|
||||
|
||||
```
|
||||
|
||||
If you do not have a GPU and would prefer a CPU-only version, use:
|
||||
|
||||
```
|
||||
pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
|
||||
|
||||
```
|
||||
|
||||
#### Verifying your PyTorch installation
|
||||
|
||||
Start up the Python console in a terminal with the following simple command:
|
||||
|
||||
```
|
||||
python
|
||||
|
||||
```
|
||||
|
||||
Now enter the following sample code line by line to verify your installation:
|
||||
|
||||
```
|
||||
from __future__ import print_function
|
||||
import torch
|
||||
x = torch.rand(5, 3)
|
||||
print(x)
|
||||
|
||||
```
|
||||
|
||||
You should get an output like:
|
||||
|
||||
```
|
||||
tensor([[0.3380, 0.3845, 0.3217],
|
||||
[0.8337, 0.9050, 0.2650],
|
||||
[0.2979, 0.7141, 0.9069],
|
||||
[0.1449, 0.1132, 0.1375],
|
||||
[0.4675, 0.3947, 0.1426]])
|
||||
|
||||
```
|
||||
|
||||
To check whether you can use PyTorch’s GPU capabilities, use the following sample code:
|
||||
|
||||
```
|
||||
import torch
|
||||
torch.cuda.is_available()
|
||||
|
||||
```
|
||||
|
||||
The resulting output should be:
|
||||
|
||||
```
|
||||
True
|
||||
|
||||
```
|
||||
|
||||
Support for AMD GPUs in PyTorch is still under development, so complete test coverage is not yet provided, as reported [here][14]; that report suggests this [resource][15] in case you have an AMD GPU.
|
||||
|
||||
Let’s now look at some research projects that extensively use PyTorch:
|
||||
|
||||
### Ongoing Research Projects based on PyTorch
|
||||
|
||||
* [Detectron][16]: Facebook AI Research’s software system to intelligently detect and classify objects. It is based on Caffe2. Earlier this year, Caffe2 and PyTorch [joined forces][17] to create a Research + Production enabled PyTorch 1.0 we talk about.
|
||||
* [Unsupervised Sentiment Discovery][18]: Such methods are extensively used with social media algorithms.
|
||||
* [vid2vid][19]: Photorealistic video-to-video translation
|
||||
* [DeepRecommender][20] (We covered how such systems work on our past [Netflix AI article][21])
|
||||
|
||||
|
||||
|
||||
Nvidia, the leading GPU manufacturer, covered more on this recent development with its own [update][22], where you can also read about ongoing collaborative research endeavours.
|
||||
|
||||
### How should we react to such PyTorch capabilities?
|
||||
|
||||
To think Facebook applies such amazingly innovative projects and more in its social media algorithms, should we appreciate all this or get alarmed? This is almost [Skynet][23]! This newly improved production-ready pre-release of PyTorch will certainly push things further ahead! Feel free to share your thoughts with us in the comments below!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/pytorch-open-source-ai-framework/
|
||||
|
||||
作者:[Avimanyu Bandyopadhyay][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/avimanyu/
|
||||
[1]: https://pytorch.org/
|
||||
[2]: https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units
|
||||
[3]: https://en.wikipedia.org/wiki/Tensor
|
||||
[4]: https://www.techopedia.com/definition/32902/deep-neural-network
|
||||
[5]: https://code.fb.com/ai-research/facebook-accelerates-ai-development-with-new-partners-and-production-capabilities-for-pytorch-1-0
|
||||
[6]: https://pytorch.fbreg.com/
|
||||
[7]: https://www.themidwaysf.com/
|
||||
[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/pytorch.jpeg
|
||||
[9]: https://github.com/pytorch/pytorch/releases/tag/v1.0rc0
|
||||
[10]: https://conda.io/
|
||||
[11]: https://pytorch.org/get-started/locally/
|
||||
[12]: https://www.pugetsystems.com/labs/hpc/How-to-install-CUDA-9-2-on-Ubuntu-18-04-1184/
|
||||
[13]: https://itsfoss.com/install-pip-ubuntu/
|
||||
[14]: https://github.com/pytorch/pytorch/issues/10657#issuecomment-415067478
|
||||
[15]: https://rocm.github.io/install.html#installing-from-amd-rocm-repositories
|
||||
[16]: https://github.com/facebookresearch/Detectron
|
||||
[17]: https://caffe2.ai/blog/2018/05/02/Caffe2_PyTorch_1_0.html
|
||||
[18]: https://github.com/NVIDIA/sentiment-discovery
|
||||
[19]: https://github.com/NVIDIA/vid2vid
|
||||
[20]: https://github.com/NVIDIA/DeepRecommender/
|
||||
[21]: https://itsfoss.com/netflix-open-source-ai/
|
||||
[22]: https://news.developer.nvidia.com/pytorch-1-0-accelerated-on-nvidia-gpus/
|
||||
[23]: https://en.wikipedia.org/wiki/Skynet_(Terminator)
|
@ -1,188 +0,0 @@
|
||||
Open Source Logging Tools for Linux
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs-main.jpg?itok=voNrSz4H)
|
||||
|
||||
If you’re a Linux systems administrator, one of the first places you will turn to for troubleshooting is the log files. These files hold crucial information that can go a long way toward helping you solve problems affecting your desktops and servers. For many sysadmins (especially those of an old-school sort), nothing beats the command line for checking log files. But for those who’d rather have a more efficient (and possibly modern) approach to troubleshooting, there are plenty of options.
|
||||
|
||||
In this article, I’ll highlight a few such tools available for the Linux platform. I won’t be getting into logging tools that might be specific to a certain service (such as Kubernetes or Apache), and instead will focus on tools that work to mine the depths of all that magical information written into /var/log.
|
||||
|
||||
Speaking of which…
|
||||
|
||||
### What is /var/log?
|
||||
|
||||
If you’re new to Linux, you might not know what the /var/log directory contains. However, the name is very telling. Within this directory is housed all of the log files from the system and any major service (such as Apache, MySQL, MariaDB, etc.) installed on the operating system. Open a terminal window and issue the command cd /var/log. Follow that with the command ls and you’ll see all of the various systems that have log files you can view (Figure 1).
|
||||
|
||||
![/var/log/][2]
|
||||
|
||||
Figure 1: Our ls command reveals the logs available in /var/log/.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
Say, for instance, you want to view the syslog log file. Issue the command less syslog and you can scroll through all of the gory details of that particular log. But what if the standard terminal isn’t for you? What options do you have? Plenty. Let’s take a look at a few such options.
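For example (on Fedora/RHEL-family systems the equivalent file is /var/log/messages):

```
cd /var/log
less syslog
```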
|
||||
|
||||
### Logs
|
||||
|
||||
If you use the GNOME desktop (or other, as Logs can be installed on more than just GNOME), you have at your fingertips a log viewer that mainly just adds the slightest bit of GUI goodness over the log files to create something as simple as it is effective. Once installed (from the standard repositories), open Logs from the desktop menu, and you’ll be treated to an interface (Figure 2) that allows you to select from various types of logs (Important, All, System, Security, and Hardware), as well as select a boot period (from the top center drop-down), and even search through all of the available logs.
|
||||
|
||||
![Logs tool][5]
|
||||
|
||||
Figure 2: The GNOME Logs tool is one of the easiest GUI log viewers you’ll find for Linux.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
Logs is a great tool, especially if you’re not looking for too many bells and whistles getting in the way of you viewing crucial log entries, so you can troubleshoot your systems.
|
||||
|
||||
### KSystemLog
|
||||
|
||||
KSystemLog is to KDE what Logs is to GNOME, but with a few more features added into the mix. Although both make it incredibly simple to view your system log files, only KSystemLog includes colorized log lines, tabbed viewing, the ability to copy log lines to the desktop clipboard, a built-in capability for sending log messages directly to the system, detailed information for each log line, and more. KSystemLog views all the same logs found in GNOME Logs, only with a different layout.
|
||||
|
||||
From the main window (Figure 3), you can view any of the different logs (System Log, Authentication Log, X.org Log, Journald Log), search the logs, filter by Date, Host, Process, or Message, and select log priorities.
|
||||
|
||||
![KSystemLog][7]
|
||||
|
||||
Figure 3: The KSystemLog main window.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
If you click on the Window menu, you can open a new tab, where you can select a different log/filter combination to view. From that same menu, you can even duplicate the current tab. If you want to manually add a log entry to a file, do the following:
|
||||
|
||||
1. Open KSystemLog.
|
||||
|
||||
2. Click File > Add Log Entry.
|
||||
|
||||
3. Create your log entry (Figure 4).
|
||||
|
||||
4. Click OK
|
||||
|
||||
|
||||
![log entry][9]
|
||||
|
||||
Figure 4: Creating a manual log entry with KSystemLog.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
KSystemLog makes viewing logs in KDE an incredibly easy task.
|
||||
|
||||
### Logwatch
|
||||
|
||||
Logwatch isn’t a fancy GUI tool. Instead, logwatch allows you to set up a logging system that will email you important alerts. You can have those alerts emailed via an SMTP server or you can simply view them on the local machine. Logwatch can be found in the standard repositories for almost every distribution, so installation can be done with a single command, like so:
|
||||
|
||||
```
|
||||
sudo apt-get install logwatch
|
||||
```
|
||||
|
||||
Or:
|
||||
|
||||
```
|
||||
sudo dnf install logwatch
|
||||
```
|
||||
|
||||
During the installation, you will be required to select the delivery method for alerts (Figure 5). If you opt to go the local mail delivery only, you’ll need to install the mailutils app (so you can view mail locally, via the mail command).
|
||||
|
||||
![ Logwatch][11]
|
||||
|
||||
Figure 5: Configuring Logwatch alert sending method.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
All Logwatch configurations are handled in a single file. To edit that file, issue the command sudo nano /usr/share/logwatch/default.conf/logwatch.conf. You’ll want to edit the MailTo = option. If you’re viewing this locally, set that to the Linux username you want the logs sent to (such as MailTo = jack). If you are sending these logs to an external email address, you’ll also need to change the MailFrom = option to a legitimate email address. From within that same configuration file, you can also set the detail level and the range of logs to send. Save and close that file.
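A hypothetical excerpt of the relevant settings (the username and address are placeholders):

```
MailTo = jack
MailFrom = logwatch@example.com
Detail = Med
Range = yesterday
```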
|
||||
Once configured, you can send your first mail with a command like:
|
||||
|
||||
```
|
||||
logwatch --detail Med --mailto ADDRESS --service all --range today
|
||||
```

Where ADDRESS is either the local user or an email address.
||||
|
||||
For more information on using Logwatch, issue the command man logwatch. Read through the manual page to see the different options that can be used with the tool.
|
||||
|
||||
### Rsyslog
|
||||
|
||||
Rsyslog is a convenient way to send remote client logs to a centralized server. Say you have one Linux server you want to use to collect the logs from other Linux servers in your data center. With Rsyslog, this is easily done. Rsyslog has to be installed on all clients and the centralized server (by issuing a command like sudo apt-get install rsyslog). Once installed, create the /etc/rsyslog.d/server.conf file on the centralized server, with the contents:
|
||||
|
||||
```
|
||||
# Provide UDP syslog reception
|
||||
$ModLoad imudp
|
||||
$UDPServerRun 514
|
||||
|
||||
# Provide TCP syslog reception
|
||||
$ModLoad imtcp
|
||||
$InputTCPServerRun 514
|
||||
|
||||
# Use custom filenaming scheme
|
||||
$template FILENAME,"/var/log/remote/%HOSTNAME%.log"
|
||||
*.* ?FILENAME
|
||||
|
||||
$PreserveFQDN on
|
||||
|
||||
```
|
||||
|
||||
Save and close that file. Now, on every client machine, create the file /etc/rsyslog.d/client.conf with the contents:
|
||||
|
||||
```
|
||||
$PreserveFQDN on
|
||||
$ActionQueueType LinkedList
|
||||
$ActionQueueFileName srvrfwd
|
||||
$ActionResumeRetryCount -1
|
||||
$ActionQueueSaveOnShutdown on
|
||||
*.* @@SERVER_IP:514
|
||||
|
||||
```
|
||||
|
||||
Where SERVER_IP is the IP address of your centralized server. Save and close that file. Restart rsyslog on all machines with the command:
|
||||
|
||||
```
|
||||
sudo systemctl restart rsyslog
|
||||
|
||||
```
|
||||
|
||||
You can now view the centralized log files with the command (run on the centralized server):
|
||||
|
||||
```
|
||||
tail -f /var/log/remote/*.log
|
||||
|
||||
```
|
||||
|
||||
The tail command allows you to view those files as they are written to, in real time. You should see log entries appear that include the client hostname (Figure 6).
|
||||
|
||||
![Rsyslog][13]
|
||||
|
||||
Figure 6: Rsyslog showing entries for a connected client.
|
||||
|
||||
[Used with permission][3]
|
||||
|
||||
Rsyslog is a great tool for creating a single point of entry for viewing the logs of all of your Linux servers.
|
||||
|
||||
### More where that came from
|
||||
|
||||
This article only scratched the surface of the logging tools to be found on the Linux platform. And each of the above tools is capable of more than what is outlined here. However, this overview should give you a place to start your long day's journey into the Linux log file.
|
||||
|
||||
Learn more about Linux through the free ["Introduction to Linux"][14] course from The Linux Foundation and edX.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/learn/intro-to-linux/2018/10/open-source-logging-tools-linux
|
||||
|
||||
作者:[JACK WALLEN][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/jlwallen
|
||||
[1]: /files/images/logs1jpg
|
||||
[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_1.jpg?itok=8yO2q1rW (/var/log/)
|
||||
[3]: /licenses/category/used-permission
|
||||
[4]: /files/images/logs2jpg
|
||||
[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_2.jpg?itok=kF6V46ZB (Logs tool)
|
||||
[6]: /files/images/logs3jpg
|
||||
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_3.jpg?itok=PhrIzI1N (KSystemLog)
|
||||
[8]: /files/images/logs4jpg
|
||||
[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_4.jpg?itok=OxsGJ-TJ (log entry)
|
||||
[10]: /files/images/logs5jpg
|
||||
[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_5.jpg?itok=GeAR551e (Logwatch)
|
||||
[12]: /files/images/logs6jpg
|
||||
[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/logs_6.jpg?itok=ira8UZOr (Rsyslog)
|
||||
[14]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -1,3 +1,5 @@
|
||||
thecyanbird translating
|
||||
|
||||
Terminalizer – A Tool To Record Your Terminal And Generate Animated Gif Images
|
||||
======
|
||||
This is a known topic for most of us, and I don’t want to give you detailed information about this flow. Also, we have written many articles under this topic.
|
||||
|
@ -1,61 +1,59 @@
|
||||
translating by hopefully2333
|
||||
|
||||
Play Windows games on Fedora with Steam Play and Proton
|
||||
在 Fedora 上使用 Steam play 和 Proton 来玩 Windows 游戏
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/09/steam-proton-816x345.jpg)
|
||||
|
||||
Some weeks ago, Steam [announced][1] a new addition to Steam Play with Linux support for Windows games using Proton, a fork from WINE. This capability is still in beta, and not all games work. Here are some more details about Steam and Proton.
|
||||
几周前,Steam 宣布要给 Steam Play 增加一个新组件,用于支持在 Linux 平台上通过 Proton(WINE 的一个分支)来玩 Windows 游戏。这个功能仍然处于测试阶段,且并非对所有游戏都有效。下面是一些关于 Steam 和 Proton 的细节。
|
||||
|
||||
According to the Steam website, there are new features in the beta release:
|
||||
据 Steam 网站称,测试版本中有以下这些新功能:
|
||||
|
||||
* Windows games with no Linux version currently available can now be installed and run directly from the Linux Steam client, complete with native Steamworks and OpenVR support.
|
||||
* DirectX 11 and 12 implementations are now based on Vulkan, which improves game compatibility and reduces performance impact.
|
||||
* Fullscreen support has been improved. Fullscreen games seamlessly stretch to the desired display without interfering with the native monitor resolution or requiring the use of a virtual desktop.
|
||||
* Improved game controller support. Games automatically recognize all controllers supported by Steam. Expect more out-of-the-box controller compatibility than even the original version of the game.
|
||||
* Performance for multi-threaded games has been greatly improved compared to vanilla WINE.
|
||||
* 现在没有 Linux 版本的 Windows 游戏可以直接从 Linux 上的 Steam 客户端进行安装和运行,并且有完整、原生的 Steamworks 和 OpenVR 的支持。
|
||||
* 现在 DirectX 11 和 12 的实现都基于 Vulkan,它可以提高游戏的兼容性并减小对游戏性能的影响。
|
||||
* 全屏支持已经得到了改进,全屏游戏可以无缝扩展到目标显示器,而不会干扰显示器本身的分辨率,也不需要使用虚拟桌面。
|
||||
* 改进了对游戏控制器的支持,游戏自动识别所有 Steam 支持的控制器,比起游戏的原始版本,能够获得更多开箱即用的控制器兼容性。
|
||||
* 和 vanilla WINE 比起来,游戏的多线程性能得到了极大的提高。
|
||||
|
||||
|
||||
|
||||
### Installation
|
||||
### 安装
|
||||
|
||||
If you’re interested in trying Steam with Proton out, just follow these easy steps. (Note that you can ignore the first steps to enable the Steam Beta if you have the [latest updated version of Steam installed][2]. In that case you no longer need Steam Beta to use Proton.)
|
||||
如果你有兴趣,想尝试一下 Steam 和 Proton。请按照下面这些简单的步骤进行操作。(请注意,如果你已经安装了最新版本的 Steam,可以忽略启用 Steam 测试版这个第一步。在这种情况下,你不再需要通过 Steam 测试版来使用 Proton。)
|
||||
|
||||
Open up Steam and log in to your account. This example screenshot shows support for only 22 games before enabling Proton.
|
||||
打开 Steam 并登录你的帐户,下面的截图示例显示的是在启用 Proton 之前仅支持 22 个游戏。
|
||||
|
||||
![][3]
|
||||
|
||||
Now click on Steam option on top of the client. This displays a drop down menu. Then select Settings.
|
||||
现在点击客户端顶部的 Steam 选项,这会显示一个下拉菜单。然后选择设置。
|
||||
|
||||
![][4]
|
||||
|
||||
Now the settings window pops up. Select the Account option and next to Beta participation, click on change.
|
||||
现在弹出了设置窗口,选择账户选项,并在 Beta participation 旁边,点击更改。
|
||||
|
||||
![][5]
|
||||
|
||||
Now change None to Steam Beta Update.
|
||||
现在将 None 更改为 Steam Beta Update。
|
||||
|
||||
![][6]
|
||||
|
||||
Click on OK and a prompt asks you to restart.
|
||||
点击确定,然后系统会提示你重新启动。
|
||||
|
||||
![][7]
|
||||
|
||||
Let Steam download the update. This can take a while depending on your internet speed and computer resources.
|
||||
让 Steam 下载更新,这会需要一段时间,具体需要多久这要取决于你的网络速度和电脑配置。
|
||||
|
||||
![][8]
|
||||
|
||||
After restarting, go back to the Settings window. This time you’ll see a new option. Make sure the check boxes for Enable Steam Play for supported titles, Enable Steam Play for all titles and Use this tool instead of game-specific selections from Steam are enabled. The compatibility tool should be Proton.
|
||||
在重新启动之后,返回到上面的设置窗口。这次你会看到一个新选项。确保选中了“为受支持的游戏启用 Steam Play”、“为所有游戏启用 Steam Play”以及“使用此工具替代 Steam 中游戏特定的选择”这几个复选框。兼容性工具应该是 Proton。
|
||||
|
||||
![][9]
|
||||
|
||||
The Steam client asks you to restart. Do so, and once you log back into your Steam account, your game library for Linux should be extended.
|
||||
Steam 客户端会要求你重新启动,照做,然后重新登陆你的 Steam 账户,你的 Linux 的游戏库就能得到扩展了。
|
||||
|
||||
![][10]
|
||||
|
||||
### Installing a Windows game using Steam Play
|
||||
### 使用 Steam Play 来安装一个 Windows 游戏
|
||||
|
||||
Now that you have Proton enabled, install a game. Select the title you want and you’ll find the process is similar to installing a normal game on Steam, as shown in these screenshots.
|
||||
现在你已经启用 Proton,开始安装游戏,选择你想要安装的游戏,然后你会发现这个安装过程类似于在 Steam 上安装一个普通游戏,如下面这些截图所示。
|
||||
|
||||
![][11]
|
||||
|
||||
@ -65,13 +63,13 @@ Now that you have Proton enabled, install a game. Select the title you want and
|
||||
|
||||
![][14]
|
||||
|
||||
After the game is done downloading and installing, you can play it.
|
||||
在下载和安装完游戏后,你就可以开始玩了。
|
||||
|
||||
![][15]
|
||||
|
||||
![][16]
|
||||
|
||||
Some games may be affected by the beta nature of Proton. The game in this example, Chantelise, had no audio and a low frame rate. Keep in mind this capability is still in beta and Fedora is not responsible for results. If you’d like to read further, the community has created a [Google doc][17] with a list of games that have been tested.
|
||||
一些游戏可能会受到 Proton 测试版性质的影响。在下面这个叫 Chantelise 的游戏中,游戏没有声音并且帧率很低。请记住这个功能仍然处于测试阶段,Fedora 不对结果负责。如果你想要了解更多,社区已经创建了一个 [Google 文档][17],里面有已经测试过的游戏的列表。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
@ -80,7 +78,7 @@ via: https://fedoramagazine.org/play-windows-games-steam-play-proton/
|
||||
|
||||
作者:[Francisco J. Vergara Torres][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[hopefully2333](https://github.com/hopefully2333)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,72 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool
|
||||
======
|
||||
**Mathpix is a nifty little tool that allows you to take screenshots of complex mathematical equations and instantly converts it into LaTeX editable text.**
|
||||
|
||||
![Mathpix converts math equations images into LaTeX][1]
|
||||
|
||||
[LaTeX editors][2] are excellent when it comes to writing academic and scientific documentation.
|
||||
|
||||
There is a steep learning curve involved, of course. And this learning curve becomes steeper if you have to write complex mathematical equations.
|
||||
|
||||
[Mathpix][3] is a nifty little tool that helps you in this regard.
|
||||
|
||||
Suppose you are reading a document that has mathematical equations. If you want to use those equations in your [LaTeX document][4], you need to use your ninja LaTeX skills and plenty of time.
|
||||
|
||||
But Mathpix solves this problem for you. With Mathpix, you take the screenshot of the mathematical equations, and it will instantly give you the LaTeX code. You can then use this code in your [favorite LaTeX editor][2].
|
||||
|
||||
See Mathpix in action in the video below:
|
||||
|
||||
<https://itsfoss.com/wp-content/uploads/2018/10/mathpix.mp4>
|
||||
|
||||
[Video credit][5]: Reddit User [kaitlinmcunningham][6]
|
||||
|
||||
Isn’t it super-cool? I guess the hardest part of writing LaTeX documents are those complicated equations. For lazy bums like me, Mathpix is a godsend.
|
||||
|
||||
### Getting Mathpix
|
||||
|
||||
Mathpix is available for Linux, macOS, Windows and iOS. There is no Android app for the moment.
|
||||
|
||||
Note: Mathpix is a free to use tool but it’s not open source.
|
||||
|
||||
On Linux, [Mathpix is available as a Snap package][7]. Which means [if you have Snap support enabled on your Linux distribution][8], you can install Mathpix with this simple command:
|
||||
|
||||
```
|
||||
sudo snap install mathpix-snipping-tool
|
||||
|
||||
```
|
||||
|
||||
Using Mathpix is simple. Once installed, open the tool. You’ll find it in the top panel. You can start taking the screenshot with Mathpix using the keyboard shortcut Ctrl+Alt+M.
|
||||
|
||||
It will instantly translate the image of the equation into LaTeX code. The code will be copied to the clipboard, and you can then paste it into a LaTeX editor.
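For instance, a screenshot of the quadratic formula would come back as LaTeX source roughly like this (a hypothetical example):

```
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```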
|
||||
|
||||
Mathpix’s optical character recognition technology is [being used][9] by a number of companies like [WolframAlpha][10], Microsoft, Google, etc. to improve their tools’ image recognition capability while dealing with math symbols.
|
||||
|
||||
Altogether, it’s an awesome tool for students and academics. It’s free to use and I so wish that it was an open source tool. We cannot get everything in life, can we?
|
||||
|
||||
Do you use Mathpix or some other similar tool while dealing with mathematical symbols in LaTeX? What do you think of Mathpix? Share your views with us in the comment section.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/mathpix/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/mathpix-converts-equations-into-latex.jpeg
|
||||
[2]: https://itsfoss.com/latex-editors-linux/
|
||||
[3]: https://mathpix.com/
|
||||
[4]: https://www.latex-project.org/
|
||||
[5]: https://g.redditmedia.com/b-GL1rQwNezQjGvdlov9U_6vDwb1A7kEwGHYcQ1Ogtg.gif?fm=mp4&mp4-fragmented=false&s=39fd1816b43e2b544986d629f75a7a8e
|
||||
[6]: https://www.reddit.com/user/kaitlinmcunningham
|
||||
[7]: https://snapcraft.io/mathpix-snipping-tool
|
||||
[8]: https://itsfoss.com/install-snap-linux/
|
||||
[9]: https://mathpix.com/api.html
|
||||
[10]: https://www.wolframalpha.com/
|
@ -1,199 +0,0 @@
|
||||
Translating by way-ww
|
||||
How To Create And Maintain Your Own Man Pages
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Um-pages-1-720x340.png)
|
||||
|
||||
We have already discussed a few [**good alternatives to Man pages**][1]. Those alternatives are mainly used for learning concise Linux command examples without having to go through the comprehensive man pages. If you’re looking for a quick and easy way to learn a Linux command, those alternatives are worth trying. Now, you might be thinking – how can I create my own man-like help pages for a Linux command? This is where **“Um”** comes in handy. Um is a command line utility used to easily create and maintain your own man pages that contain only what you’ve learned about a command so far.
|
||||
|
||||
By creating your own alternative to man pages, you can avoid lots of unnecessary, comprehensive details in a man page and include only what is necessary to keep in mind. If you ever wanted to create your own set of man-like pages, Um will definitely help. In this brief tutorial, we will see how to install the “Um” command line utility and how to create our own man pages.
|
||||
|
||||
### Installing Um
|
||||
|
||||
Um is available for Linux and Mac OS. At present, it can only be installed using the **Linuxbrew** package manager on Linux systems. Refer to the following link if you haven’t installed Linuxbrew yet.
|
||||
|
||||
Once Linuxbrew is installed, run the following command to install the Um utility.
|
||||
|
||||
```
|
||||
$ brew install sinclairtarget/wst/um
|
||||
|
||||
```
|
||||
|
||||
If you see output something like the below, congratulations! Um has been installed and is ready to use.
|
||||
|
||||
```
|
||||
[...]
|
||||
==> Installing sinclairtarget/wst/um
|
||||
==> Downloading https://github.com/sinclairtarget/um/archive/4.0.0.tar.gz
|
||||
==> Downloading from https://codeload.github.com/sinclairtarget/um/tar.gz/4.0.0
|
||||
-=#=# # #
|
||||
==> Downloading https://rubygems.org/gems/kramdown-1.17.0.gem
|
||||
######################################################################## 100.0%
|
||||
==> gem install /home/sk/.cache/Homebrew/downloads/d0a5d978120a791d9c5965fc103866815189a4e3939
|
||||
==> Caveats
|
||||
Bash completion has been installed to:
|
||||
/home/linuxbrew/.linuxbrew/etc/bash_completion.d
|
||||
==> Summary
|
||||
🍺 /home/linuxbrew/.linuxbrew/Cellar/um/4.0.0: 714 files, 1.3MB, built in 35 seconds
|
||||
==> Caveats
|
||||
==> openssl
|
||||
A CA file has been bootstrapped using certificates from the SystemRoots
|
||||
keychain. To add additional certificates (e.g. the certificates added in
|
||||
the System keychain), place .pem files in
|
||||
/home/linuxbrew/.linuxbrew/etc/openssl/certs
|
||||
|
||||
and run
|
||||
/home/linuxbrew/.linuxbrew/opt/openssl/bin/c_rehash
|
||||
==> ruby
|
||||
Emacs Lisp files have been installed to:
|
||||
/home/linuxbrew/.linuxbrew/share/emacs/site-lisp/ruby
|
||||
==> um
|
||||
Bash completion has been installed to:
|
||||
/home/linuxbrew/.linuxbrew/etc/bash_completion.d
|
||||
|
||||
```
|
||||
|
||||
Before you start making your own man pages, you need to enable bash completion for Um.
|
||||
|
||||
To do so, open your **~/.bash_profile** file:
|
||||
|
||||
```
|
||||
$ nano ~/.bash_profile
|
||||
|
||||
```
|
||||
|
||||
And, add the following lines in it:
|
||||
|
||||
```
|
||||
if [ -f $(brew --prefix)/etc/bash_completion.d/um-completion.sh ]; then
|
||||
. $(brew --prefix)/etc/bash_completion.d/um-completion.sh
|
||||
fi
|
||||
|
||||
```
|
||||
|
||||
Save and close the file. Run the following commands to update the changes.
|
||||
|
||||
```
|
||||
$ source ~/.bash_profile
|
||||
|
||||
```
|
||||
|
||||
All done. let us go ahead and create our first man page.
|
||||
|
||||
### Create And Maintain Your Own Man Pages
|
||||
|
||||
Let us say, you want to create your own man page for “dpkg” command. To do so, run:
|
||||
|
||||
```
|
||||
$ um edit dpkg
|
||||
|
||||
```
|
||||
|
||||
The above command will open a markdown template in your default editor:
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-dpkg-man-page.png)
|
||||
|
||||
My default editor is Vi, so the above command opens the template in the Vi editor. Now, start adding everything you want to remember about the “dpkg” command in this template.
|
||||
|
||||
Here is a sample:
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Edit-dpkg-man-page.png)
|
||||
|
||||
As you see in the above output, I have added a synopsis, a description, and two options for the dpkg command. You can add as many sections as you want to the man page. Make sure you give proper and easily understandable titles to each section. Once done, save and quit the file (if you use the Vi editor, press the **ESC** key and type **:wq**).
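For reference, here is a minimal sketch of what such a Markdown page might contain; the exact headings are up to you, and Um’s generated template may differ:

```
# dpkg

## Synopsis
dpkg [options] action

## Description
Package manager for Debian-based systems.

## Options
`-i <package>`: Install the given package.
`-r <package>`: Remove the given package, keeping its configuration files.
```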
|
||||
|
||||
Finally, view your newly created man page using command:
|
||||
|
||||
```
|
||||
$ um dpkg
|
||||
|
||||
```
|
||||
|
||||
![](http://www.ostechnix.com/wp-content/uploads/2018/10/View-dpkg-man-page.png)
|
||||
|
||||
As you can see, the dpkg man page looks exactly like the official man pages. If you want to edit and/or add more details to a man page, run the same command again and add the details.
|
||||
|
||||
```
|
||||
$ um edit dpkg
|
||||
|
||||
```
|
||||
|
||||
To view the list of newly created man pages using Um, run:
|
||||
|
||||
```
|
||||
$ um list
|
||||
|
||||
```
|
||||
|
||||
All man pages will be saved under a directory named **`.um`** in your home directory.
|
||||
|
||||
Just in case, if you don’t want a particular page, simply delete it as shown below.
|
||||
|
||||
```
|
||||
$ um rm dpkg
|
||||
|
||||
```
|
||||
|
||||
To view the help section and all available general options, run:
|
||||
|
||||
```
|
||||
$ um --help
|
||||
usage: um <page name>
|
||||
um <sub-command> [ARGS...]
|
||||
|
||||
The first form is equivalent to `um read <page name>`.
|
||||
|
||||
Subcommands:
|
||||
um (l)ist List the available pages for the current topic.
|
||||
um (r)ead <page name> Read the given page under the current topic.
|
||||
um (e)dit <page name> Create or edit the given page under the current topic.
|
||||
um rm <page name> Remove the given page.
|
||||
um (t)opic [topic] Get or set the current topic.
|
||||
um topics List all topics.
|
||||
um (c)onfig [config key] Display configuration environment.
|
||||
um (h)elp [sub-command] Display this help message, or the help message for a sub-command.
|
||||
|
||||
```
|
||||
|
||||
### Configure Um
|
||||
|
||||
To view the current configuration, run:
|
||||
|
||||
```
|
||||
$ um config
|
||||
Options prefixed by '*' are set in /home/sk/.um/umconfig.
|
||||
editor = vi
|
||||
pager = less
|
||||
pages_directory = /home/sk/.um/pages
|
||||
default_topic = shell
|
||||
pages_ext = .md
|
||||
|
||||
```
|
||||
|
||||
In this file, you can edit and change the values for **pager** , **editor** , **default_topic** , **pages_directory** , and **pages_ext** options as you wish. Say for example, if you want to save the newly created Um pages in your **[Dropbox][2]** folder, simply change the value of **pages_directory** directive and point it to the Dropbox folder in **~/.um/umconfig** file.
|
||||
|
||||
```
|
||||
pages_directory = /Users/myusername/Dropbox/um
|
||||
|
||||
```
|
||||
|
||||
And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/
|
||||
[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/
|
@ -1,3 +1,4 @@
|
||||
translating by dianbanjiu
|
||||
How To List The Enabled/Active Repositories In Linux
|
||||
======
|
||||
There are many ways to list enabled repositories in Linux.
|
||||
|
@ -0,0 +1,257 @@
|
||||
Exploring the Linux kernel: The secrets of Kconfig/kbuild
|
||||
======
|
||||
Dive into understanding how the Linux config/build system works.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/compass_map_explore_adventure.jpg?itok=ecCoVTrZ)
|
||||
|
||||
The Linux kernel config/build system, also known as Kconfig/kbuild, has been around for a long time, ever since the Linux kernel code migrated to Git. As supporting infrastructure, however, it is seldom in the spotlight; even kernel developers who use it in their daily work never really think about it.
|
||||
|
||||
To explore how the Linux kernel is compiled, this article will dive into the Kconfig/kbuild internal process, explain how the .config file and the vmlinux/bzImage files are produced, and introduce a smart trick for dependency tracking.
|
||||
|
||||
### Kconfig
|
||||
|
||||
The first step in building a kernel is always configuration. Kconfig helps make the Linux kernel highly modular and customizable. Kconfig offers the user many config targets:
|
||||
| Target | Description |
| --- | --- |
| config | Update current config utilizing a line-oriented program |
| nconfig | Update current config utilizing a ncurses menu-based program |
| menuconfig | Update current config utilizing a menu-based program |
| xconfig | Update current config utilizing a Qt-based frontend |
| gconfig | Update current config utilizing a GTK+ based frontend |
| oldconfig | Update current config utilizing a provided .config as base |
| localmodconfig | Update current config disabling modules not loaded |
| localyesconfig | Update current config converting local mods to core |
| defconfig | New config with default from Arch-supplied defconfig |
| savedefconfig | Save current config as ./defconfig (minimal config) |
| allnoconfig | New config where all options are answered with 'no' |
| allyesconfig | New config where all options are accepted with 'yes' |
| allmodconfig | New config selecting modules when possible |
| alldefconfig | New config with all symbols set to default |
| randconfig | New config with a random answer to all options |
| listnewconfig | List new options |
| olddefconfig | Same as oldconfig but sets new symbols to their default value without prompting |
| kvmconfig | Enable additional options for KVM guest kernel support |
| xenconfig | Enable additional options for xen dom0 and guest kernel support |
| tinyconfig | Configure the tiniest possible kernel |
|
||||
|
||||
I think **menuconfig** is the most popular of these targets. The targets are processed by different host programs, which are provided by the kernel and built during kernel building. Some targets have a GUI (for the user's convenience) while most don't. Kconfig-related tools and source code reside mainly under **scripts/kconfig/** in the kernel source. As we can see from **scripts/kconfig/Makefile** , there are several host programs, including **conf** , **mconf** , and **nconf**. Except for **conf** , each of them is responsible for one of the GUI-based config targets, so, **conf** deals with most of them.
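Each target is invoked through make; for example:

```
make menuconfig    # ncurses-style menu, handled by mconf
make defconfig     # start from the arch-supplied default configuration
make oldconfig     # update an existing .config, prompting only for new symbols
```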
|
||||
|
||||
Logically, Kconfig's infrastructure has two parts: one implements a [new language][1] to define the configuration items (see the Kconfig files under the kernel source), and the other parses the Kconfig language and deals with configuration actions.
|
||||
|
||||
Most of the config targets have roughly the same internal process (shown below):
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/kconfig_process.png)
|
||||
|
||||
Note that all configuration items have a default value.
|
||||
|
||||
The first step reads the Kconfig file under source root to construct an initial configuration database; then it updates the initial database by reading an existing configuration file according to this priority:
|
||||
|
||||
> .config
|
||||
> /lib/modules/$(shell,uname -r)/.config
|
||||
> /etc/kernel-config
|
||||
> /boot/config-$(shell,uname -r)
|
||||
> ARCH_DEFCONFIG
|
||||
> arch/$(ARCH)/defconfig
|
||||
|
||||
If you are doing GUI-based configuration via **menuconfig** or command-line-based configuration via **oldconfig** , the database is updated according to your customization. Finally, the configuration database is dumped into the .config file.
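The dumped file is a simple list of symbol assignments; a hypothetical excerpt looks like this:

```
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_PRINTK=y
# CONFIG_EXPERT is not set
```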
|
||||
|
||||
But the .config file is not the final fodder for kernel building; this is why the **syncconfig** target exists. **syncconfig** used to be a config target called **silentoldconfig** , but it doesn't do what the old name says, so it was renamed. Also, because it is for internal use (not for users), it was dropped from the list.
|
||||
|
||||
Here is an illustration of what **syncconfig** does:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncconfig.png)
|
||||
|
||||
**syncconfig** takes .config as input and outputs many other files, which fall into three categories:
|
||||
|
||||
* **auto.conf & tristate.conf** are used for makefile text processing. For example, you may see statements like this in a component's makefile:
|
||||
|
||||
```
|
||||
obj-$(CONFIG_GENERIC_CALIBRATE_DELAY) += calibrate.o
|
||||
```
|
||||
|
||||
* **autoconf.h** is used in C-language source files (see the sketch after this list).
|
||||
|
||||
* Empty header files under **include/config/** are used for configuration-dependency tracking during kbuild, which is explained below.
|
||||
|
||||
|
||||
|
||||
|
||||
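As an illustration of the second category, **autoconf.h** turns each enabled configuration symbol into a preprocessor macro; a hypothetical excerpt:

```
#define CONFIG_64BIT 1
#define CONFIG_X86_64 1
#define CONFIG_PRINTK 1
```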
After configuration, we will know which files and code pieces are not compiled.
|
||||
|
||||
### kbuild
|
||||
|
||||
Component-wise building, called _recursive make_, is a common way for GNU `make` to manage a large project. Kbuild is a good example of recursive make. By dividing source files into different modules/components, each component can be managed by its own makefile. When you start building, a top makefile invokes each component's makefile in the proper order, builds the components, and collects them into the final executable.
|
||||
|
||||
Kbuild refers to different kinds of makefiles:
|
||||
|
||||
* **Makefile** is the top makefile located in source root.
|
||||
* **.config** is the kernel configuration file.
|
||||
* **arch/$(ARCH)/Makefile** is the arch makefile, which is the supplement to the top makefile.
|
||||
* **scripts/Makefile.*** describes common rules for all kbuild makefiles.
|
||||
* Finally, there are about 500 **kbuild makefiles**.
|
||||
|
||||
|
||||
|
||||
The top makefile includes the arch makefile, reads the .config file, descends into subdirectories, invokes **make** on each component's makefile with the help of routines defined in **scripts/Makefile.*** , builds up each intermediate object, and links all the intermediate objects into vmlinux. Kernel document [Documentation/kbuild/makefiles.txt][2] describes all aspects of these makefiles.
|
||||
|
||||
As an example, let's look at how vmlinux is produced on x86-64:
|
||||
|
||||
![vmlinux overview][4]
|
||||
|
||||
(The illustration is based on Richard Y. Steven's [blog][5]. It was updated and is used with the author's permission.)
|
||||
|
||||
All the **.o** files that go into vmlinux first go into their own **built-in.a** , which is indicated via variables **KBUILD_VMLINUX_INIT** , **KBUILD_VMLINUX_MAIN** , **KBUILD_VMLINUX_LIBS** , then are collected into the vmlinux file.
|
||||
|
||||
Take a look at how recursive make is implemented in the Linux kernel, with the help of simplified makefile code:
|
||||
|
||||
```
|
||||
# In top Makefile
|
||||
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps)
|
||||
+$(call if_changed,link-vmlinux)
|
||||
|
||||
# Variable assignments
|
||||
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) $(KBUILD_VMLINUX_LIBS)
|
||||
|
||||
export KBUILD_VMLINUX_INIT := $(head-y) $(init-y)
|
||||
export KBUILD_VMLINUX_MAIN := $(core-y) $(libs-y2) $(drivers-y) $(net-y) $(virt-y)
|
||||
export KBUILD_VMLINUX_LIBS := $(libs-y1)
|
||||
export KBUILD_LDS := arch/$(SRCARCH)/kernel/vmlinux.lds
|
||||
|
||||
init-y := init/
|
||||
drivers-y := drivers/ sound/ firmware/
|
||||
net-y := net/
|
||||
libs-y := lib/
|
||||
core-y := usr/
|
||||
virt-y := virt/
|
||||
|
||||
# Transform to corresponding built-in.a
|
||||
init-y := $(patsubst %/, %/built-in.a, $(init-y))
|
||||
core-y := $(patsubst %/, %/built-in.a, $(core-y))
|
||||
drivers-y := $(patsubst %/, %/built-in.a, $(drivers-y))
|
||||
net-y := $(patsubst %/, %/built-in.a, $(net-y))
|
||||
libs-y1 := $(patsubst %/, %/lib.a, $(libs-y))
|
||||
libs-y2 := $(patsubst %/, %/built-in.a, $(filter-out %.a, $(libs-y)))
|
||||
virt-y := $(patsubst %/, %/built-in.a, $(virt-y))
|
||||
|
||||
# Setup the dependency. vmlinux-deps are all intermediate objects, vmlinux-dirs
|
||||
# are phony targets, so every time make comes to this rule, the recipe of vmlinux-dirs
# will be executed. Refer to "4.6 Phony Targets" in `info make`
|
||||
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;
|
||||
|
||||
# Variable vmlinux-dirs is the directory part of each built-in.a
|
||||
vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
|
||||
$(core-y) $(core-m) $(drivers-y) $(drivers-m) \
|
||||
$(net-y) $(net-m) $(libs-y) $(libs-m) $(virt-y)))
|
||||
|
||||
# The entry of recursive make
|
||||
$(vmlinux-dirs):
|
||||
$(Q)$(MAKE) $(build)=$@ need-builtin=1
|
||||
```
|
||||
|
||||
The recursive make recipe is expanded, for example:
|
||||
|
||||
```
|
||||
make -f scripts/Makefile.build obj=init need-builtin=1
|
||||
```
|
||||
|
||||
This means **make** will go into **scripts/Makefile.build** to continue the work of building each **built-in.a**. With the help of **scripts/link-vmlinux.sh**, the vmlinux file is finally placed under the source root.
|
||||
|
||||
#### Understanding vmlinux vs. bzImage
|
||||
|
||||
Many Linux kernel developers may not be clear about the relationship between vmlinux and bzImage. For example, here is their relationship in x86-64:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/vmlinux-bzimage.png)
|
||||
|
||||
The source root vmlinux is stripped, compressed, put into **piggy.S** , then linked with other peer objects into **arch/x86/boot/compressed/vmlinux**. Meanwhile, a file called setup.bin is produced under **arch/x86/boot**. There may be an optional third file that has relocation info, depending on the configuration of **CONFIG_X86_NEED_RELOCS**.
|
||||
|
||||
A host program called **build** , provided by the kernel, builds these two (or three) parts into the final bzImage file.
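As a quick sanity check, you can inspect both artifacts with the standard `file` utility; the exact output below is illustrative and will vary with your kernel version and configuration:

```
$ file vmlinux arch/x86/boot/bzImage
vmlinux:               ELF 64-bit LSB executable, x86-64, statically linked, ...
arch/x86/boot/bzImage: Linux kernel x86 boot executable bzImage, version 4.18.0 ...
```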
|
||||
|
||||
#### Dependency tracking
|
||||
|
||||
Kbuild tracks three kinds of dependencies:
|
||||
|
||||
  1. All prerequisite files (both **\*.c** and **\*.h**)
|
||||
2. **CONFIG_** options used in all prerequisite files
|
||||
3. Command-line dependencies used to compile the target
|
||||
|
||||
|
||||
|
||||
The first one is easy to understand, but what about the second and third? Kernel developers often see code pieces like this:
|
||||
|
||||
```
|
||||
#ifdef CONFIG_SMP
|
||||
__boot_cpu_id = cpu;
|
||||
#endif
|
||||
```
|
||||
|
||||
When **CONFIG_SMP** changes, this piece of code should be recompiled. The command line for compiling a source file also matters, because different command lines may result in different object files.
|
||||
|
||||
When a **.c** file uses a header file via a **#include** directive, you need to write a rule like this:
|
||||
|
||||
```
|
||||
main.o: defs.h
|
||||
recipe...
|
||||
```
|
||||
|
||||
When managing a large project, you need a lot of these kinds of rules; writing them all would be tedious and boring. Fortunately, most modern C compilers can write these rules for you by looking at the **#include** lines in the source file. For the GNU Compiler Collection (GCC), it is just a matter of adding a command-line parameter: **-MD depfile**
|
||||
|
||||
```
|
||||
# In scripts/Makefile.lib
|
||||
c_flags = -Wp,-MD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) \
|
||||
-include $(srctree)/include/linux/compiler_types.h \
|
||||
$(__c_flags) $(modkern_cflags) \
|
||||
$(basename_flags) $(modname_flags)
|
||||
```
|
||||
|
||||
This would generate a **.d** file with content like:
|
||||
|
||||
```
|
||||
init_task.o: init/init_task.c include/linux/kconfig.h \
|
||||
include/generated/autoconf.h include/linux/init_task.h \
|
||||
include/linux/rcupdate.h include/linux/types.h \
|
||||
...
|
||||
```
|
||||
|
||||
Then the host program **[fixdep][6]** takes care of the other two dependencies by taking the **depfile** and command line as input, then outputting a **.<target>.cmd** file in makefile syntax, which records the command line and all the prerequisites (including the configuration) for a target. It looks like this:
|
||||
|
||||
```
|
||||
# The command line used to compile the target
|
||||
cmd_init/init_task.o := gcc -Wp,-MD,init/.init_task.o.d -nostdinc ...
|
||||
...
|
||||
# The dependency files
|
||||
deps_init/init_task.o := \
|
||||
$(wildcard include/config/posix/timers.h) \
|
||||
$(wildcard include/config/arch/task/struct/on/stack.h) \
|
||||
$(wildcard include/config/thread/info/in/task.h) \
|
||||
...
|
||||
include/uapi/linux/types.h \
|
||||
arch/x86/include/uapi/asm/types.h \
|
||||
include/uapi/asm-generic/types.h \
|
||||
...
|
||||
```
|
||||
|
||||
A **.<target>.cmd** file will be included during recursive make, providing all the dependency info and helping to decide whether or not to rebuild a target.
|
||||
|
||||
The secret behind this is that **fixdep** will parse the **depfile** ( **.d** file), then parse all the dependency files inside, search the text for all the **CONFIG_** strings, convert them to the corresponding empty header file, and add them to the target's prerequisites. Every time the configuration changes, the corresponding empty header file will be updated, too, so kbuild can detect that change and rebuild the target that depends on it. Because the command line is also recorded, it is easy to compare the last and current compiling parameters.
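To make the name mapping concrete, here is an illustrative check on a built tree; the paths follow the convention above, and the grep pattern is an assumption for demonstration, not a kbuild interface:

```
# CONFIG_POSIX_TIMERS maps to an empty header: lowercase, underscores -> slashes
$ cat include/config/posix/timers.h      # empty; only its timestamp matters

# List a few of the recorded config prerequisites for init_task.o
$ grep -o 'include/config/[a-z0-9/]*\.h' init/.init_task.o.cmd | head -3
include/config/posix/timers.h
include/config/arch/task/struct/on/stack.h
include/config/thread/info/in/task.h
```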
|
||||
|
||||
### Looking ahead
|
||||
|
||||
Kconfig/kbuild remained the same for a long time until the new maintainer, Masahiro Yamada, joined in early 2017, and now kbuild is under active development again. Don't be surprised if you soon see something different from what's in this article.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/kbuild-and-kconfig
|
||||
|
||||
作者:[Cao Jin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/pinocchio
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kconfig-language.txt
|
||||
[2]: https://www.mjmwired.net/kernel/Documentation/kbuild/makefiles.txt
|
||||
[3]: https://opensource.com/file/411516
|
||||
[4]: https://opensource.com/sites/default/files/uploads/vmlinux_generation_process.png (vmlinux overview)
|
||||
[5]: https://blog.csdn.net/richardysteven/article/details/52502734
|
||||
[6]: https://github.com/torvalds/linux/blob/master/scripts/basic/fixdep.c
|
@ -0,0 +1,60 @@
|
||||
translating---geekpi
|
||||
|
||||
Happy birthday, KDE: 11 applications you never knew existed
|
||||
======
|
||||
Which fun or quirky app do you need today?
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_DebucketizeOrgChart_A.png?itok=RB3WBeQQ)
|
||||
|
||||
The Linux desktop environment KDE celebrates its 22nd anniversary on October 14 this year. There are a gazillion* applications created by the KDE community of users, many of which provide fun and quirky services. We perused the list and picked out 11 applications you might like to know exist.
|
||||
|
||||
*Not really, but [there are a lot][1].
|
||||
|
||||
### 11 KDE applications you never knew existed
|
||||
|
||||
1\. [KTeaTime][2] is a timer for steeping tea. Set it by choosing the type of tea you are drinking—green, black, herbal, etc.—and the timer will ding when it's time to remove the tea bag and drink.
|
||||
|
||||
2\. [KTux][3] is just a screensaver... or is it? Tux is flying in outer space in his green spaceship.
|
||||
|
||||
3\. [Blinken][4] is a memory game based on Simon Says, an electronic game released in 1978. Players are challenged to remember sequences of increasing length.
|
||||
|
||||
4\. [Tellico][5] is a collection manager for organizing your favorite hobby. Maybe you still collect baseball cards. Maybe you're part of a wine club. Maybe you're a serious bookworm. Maybe all three!
|
||||
|
||||
5\. [KRecipes][6] is **not** a simple recipe manager. It's got a lot going on! Shopping lists, nutrient analysis, advanced search, recipe ratings, import/export various formats, and more.
|
||||
|
||||
6\. [KHangMan][7] is based on the classic game Hangman where you guess the word letter by letter. This game is available in several languages, and it can be used to improve your learning of another language. It has four categories, one of which is "animals" which is great for kids.
|
||||
|
||||
7\. [KLettres][8] is another app that may help you learn a new language. It teaches the alphabet and challenges the user to read and pronounce syllables.
|
||||
|
||||
8\. [KDiamond][9] is similar to Bejeweled or other single player puzzle games where the goal of the game is to build lines of a certain number of the same type of jewel or object. In this case, diamonds.
|
||||
|
||||
9\. [KolourPaint][10] is a very simple editing tool for your images, or an app for creating simple vector drawings.
|
||||
|
||||
10\. [Kiriki][11] is a dice game for 2-6 players similar to Yahtzee.
|
||||
|
||||
11\. [RSIBreak][12] doesn't start with a K. What!? It starts with an "RSI" for "Repetitive Strain Injury," which can occur from working for long hours, day in and day out, with a mouse and keyboard. This app reminds you to take breaks and can be personalized to meet your needs.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/kde-applications
|
||||
|
||||
作者:[Opensource.com][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.kde.org/applications/
|
||||
[2]: https://www.kde.org/applications/games/kteatime/
|
||||
[3]: https://userbase.kde.org/KTux
|
||||
[4]: https://www.kde.org/applications/education/blinken
|
||||
[5]: http://tellico-project.org/
|
||||
[6]: https://www.kde.org/applications/utilities/krecipes/
|
||||
[7]: https://edu.kde.org/khangman/
|
||||
[8]: https://edu.kde.org/klettres/
|
||||
[9]: https://games.kde.org/game.php?game=kdiamond
|
||||
[10]: https://www.kde.org/applications/graphics/kolourpaint/
|
||||
[11]: https://www.kde.org/applications/games/kiriki/
|
||||
[12]: https://userbase.kde.org/RSIBreak
|
@ -0,0 +1,129 @@
|
||||
How To Lock Virtual Console Sessions On Linux
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-720x340.png)
|
||||
|
||||
When you’re working on a shared system, you might not want other users to sneak a peek at your console to see what you’re actually doing. If so, I know a simple trick to lock your own session while still allowing other users to use the system on other virtual consoles, thanks to **Vlock**, which stands for **V**irtual Console **lock**, a command line program to lock one or more sessions on the Linux console. If necessary, you can lock the entire console and disable the virtual console switching functionality altogether. Vlock is especially useful for shared Linux systems where multiple users have access to the console.
|
||||
|
||||
### Installing Vlock
|
||||
|
||||
On Arch-based systems, the Vlock package has been replaced by the **kbd** package, which is preinstalled by default, so you need not bother with installation.
|
||||
|
||||
On Debian, Ubuntu, Linux Mint, run the following command to install Vlock:
|
||||
|
||||
```
|
||||
$ sudo apt-get install vlock
|
||||
```
|
||||
|
||||
On Fedora:
|
||||
|
||||
```
|
||||
$ sudo dnf install vlock
|
||||
```
|
||||
|
||||
On RHEL, CentOS:
|
||||
|
||||
```
|
||||
$ sudo yum install vlock
|
||||
```
|
||||
|
||||
### Lock Virtual Console Sessions On Linux
|
||||
|
||||
The general syntax for Vlock is:
|
||||
|
||||
```
|
||||
vlock [ -acnshv ] [ -t <timeout> ] [ plugins... ]
|
||||
```
|
||||
|
||||
Where,
|
||||
|
||||
  * **-a** – Lock all virtual console sessions,
  * **-c** – Lock the current virtual console session,
  * **-n** – Switch to a new empty console before locking all sessions,
  * **-s** – Disable the SysRq key mechanism,
  * **-t** – Specify the timeout for the screensaver plugins,
  * **-h** – Display the help section,
  * **-v** – Display the version.
|
||||
|
||||
|
||||
|
||||
Let me show you some examples.
|
||||
|
||||
**1\. Lock current console session**
|
||||
|
||||
When running Vlock without any arguments, it locks the current console session (TTY) by default. To unlock the session, you need to enter either the current user’s password or the root password.
|
||||
|
||||
```
|
||||
$ vlock
|
||||
```
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-1-1.gif)
|
||||
|
||||
You can also use **-c** flag to lock the current console session.
|
||||
|
||||
```
|
||||
$ vlock -c
|
||||
```
|
||||
|
||||
Please note that this command will only lock the current console. You can switch to other consoles by pressing **ALT+F2**. For more details about switching between TTYs, refer to the following guide.
|
||||
|
||||
Also, if the system has multiple users, the other users can still access their respective TTYs.
|
||||
|
||||
**2\. Lock all console sessions**
|
||||
|
||||
To lock all TTYs at the same time and also disable the virtual console switching functionality, run:
|
||||
|
||||
```
|
||||
$ vlock -a
|
||||
```
|
||||
|
||||
Again, to unlock the console sessions, just press the ENTER key and type your current user’s password or the root password.
|
||||
|
||||
Please keep in mind that the **root user can always unlock any vlock session** at any time, unless disabled at compile time.
|
||||
|
||||
**3\. Switch to a new virtual console before locking all consoles**
|
||||
|
||||
It is also possible to make Vlock switch to a new empty virtual console from your X session before locking all consoles. To do so, use the **-n** flag.
|
||||
|
||||
```
|
||||
$ vlock -n
|
||||
```
|
||||
|
||||
**4\. Disable the SysRq mechanism**
|
||||
|
||||
As you may know, the Magic SysRq key mechanism allows users to perform certain operations even when the system freezes, so users could unlock the consoles using SysRq. To prevent this, pass the **-s** option to disable the SysRq mechanism. Please remember, this only works when the **-a** option is also given.
|
||||
|
||||
```
|
||||
$ vlock -sa
|
||||
```
|
||||
|
||||
For more options and usage details, refer to the help section or the man pages.
|
||||
|
||||
```
|
||||
$ vlock -h
|
||||
$ man vlock
|
||||
```
|
||||
|
||||
Vlock prevents unauthorized users from gaining console access. If you’re looking for a simple console locking mechanism for your Linux machine, Vlock is worth checking out!
|
||||
|
||||
And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-lock-virtual-console-sessions-on-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
@ -0,0 +1,81 @@
|
||||
An introduction to Ansible Operators in Kubernetes
|
||||
======
|
||||
The new Operator SDK makes it easy to create a Kubernetes controller to deploy and manage a service or application in a cluster.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_barnraising_2.png?itok=JOBMbjTM)
|
||||
|
||||
For years, Ansible has been a go-to choice for infrastructure automation. As Kubernetes adoption has skyrocketed, Ansible has continued to shine in the emerging container orchestration ecosystem.
|
||||
|
||||
Ansible fits naturally into a Kubernetes workflow, using YAML to describe the desired state of the world. Multiple projects, including the [Automation Broker][1], are adapting Ansible for use behind specific APIs. This article will focus on a new technique, created through a joint effort by the Ansible core team and the developers of Automation Broker, that uses Ansible to create Operators with minimal effort.
|
||||
|
||||
### What is an Operator?
|
||||
|
||||
An [Operator][2] is a Kubernetes controller that deploys and manages a service or application in a cluster. It automates human operation knowledge and best practices to keep services running and healthy. Input is received in the form of a custom resource. Let's walk through that using a Memcached Operator as an example.
|
||||
|
||||
The [Memcached Operator][3] can be deployed as a service running in a cluster, and it includes a custom resource definition (CRD) for a resource called Memcached. The end user creates an instance of that custom resource to describe how the Memcached Deployment should look. The following example requests a Deployment with three Pods.
|
||||
|
||||
```
|
||||
apiVersion: "cache.example.com/v1alpha1"
|
||||
kind: "Memcached"
|
||||
metadata:
|
||||
name: "example-memcached"
|
||||
spec:
|
||||
size: 3
|
||||
```
|
||||
|
||||
The Operator's job is called reconciliation—continuously ensuring that what is specified in the "spec" matches the real state of the world. This sample Operator delegates Pod management to a Deployment controller. So while it does not directly create or delete Pods, if you change the size, the Operator's reconciliation loop ensures that the new value is applied to the Deployment resource it created.
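For example, you could watch reconciliation happen by editing the custom resource and then checking the Deployment it manages; the names follow the Memcached example above, and the command output is illustrative:

```
$ kubectl edit memcached example-memcached      # change spec.size from 3 to 4
$ kubectl get deployment example-memcached      # the Operator converges replicas
NAME                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-memcached   4         4         4            4           5m
```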
|
||||
|
||||
A mature Operator can deploy, upgrade, back up, repair, scale, and reconfigure an application that it manages. As you can see, not only does an Operator provide a simple way to deploy arbitrary services using only native Kubernetes APIs; it enables full day-two (post-deployment, such as updates, backups, etc.) management, limited only by what you can code.
|
||||
|
||||
### Creating an Operator
|
||||
|
||||
The [Operator SDK][4] makes it easy to get started. It lays down the skeleton of a new Operator with many of the complex pieces already handled. You can focus on defining your custom resources and coding the reconciliation logic in Go. The SDK saves you a lot of time and ongoing maintenance burden, but you will still end up owning a substantial software project.
|
||||
|
||||
Ansible was recently introduced to the Operator SDK as an even simpler way to make an Operator, with no coding required. To create an Operator, you merely:
|
||||
|
||||
* Create a CRD in the form of YAML
|
||||
* Define what reconciliation should do by creating an Ansible role or playbook
|
||||
|
||||
|
||||
|
||||
It's YAML all the way down—a familiar experience for Kubernetes users.
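Under the hood, the SDK connects each CRD to its Ansible content through a small mapping file, commonly named `watches.yaml`; the sketch below follows the Memcached example, and the role path is an assumption about the base image layout:

```
---
# Maps a watched custom resource kind to the Ansible role that reconciles it
- version: v1alpha1
  group: cache.example.com
  kind: Memcached
  role: /opt/ansible/roles/memcached   # assumed location inside the image
```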
|
||||
|
||||
### How does it work?
|
||||
|
||||
There is a preexisting Ansible Operator base container image that includes Ansible, [ansible-runner][5], and the Operator's executable service. The SDK helps to build a layer on top that adds one or more CRDs and associates each with an Ansible role or playbook.
|
||||
|
||||
When it's running, the Operator uses a Kubernetes feature to "watch" for changes to any resource of the type defined. Upon receiving such a notification, it reconciles the resource that changed. The Operator runs the corresponding role or playbook, and information about the resource is passed to Ansible as [extra-vars][6].
|
||||
|
||||
### Using Ansible with Kubernetes
|
||||
|
||||
Following several iterations, the Ansible community has produced a remarkably easy-to-use module for working with Kubernetes. Especially if you have any experience with a Kubernetes module prior to Ansible 2.6, you owe it to yourself to have a look at the [k8s module][7]. Creating, retrieving, and updating resources is a natural experience that will feel familiar to any Kubernetes user. It makes creating an Operator that much easier.
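As a flavor of what a reconciliation role might contain, here is a hedged sketch of a task using the `k8s` module; the `meta` and `size` variables stand in for the custom resource data passed as extra-vars, and every name below is an illustrative assumption:

```
# Sketch of roles/memcached/tasks/main.yml (all names are assumptions)
- name: Ensure the memcached Deployment matches the requested size
  k8s:
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: '{{ meta.name }}-memcached'
        namespace: '{{ meta.namespace }}'
      spec:
        replicas: "{{ size }}"
        selector:
          matchLabels:
            app: memcached
        template:
          metadata:
            labels:
              app: memcached
          spec:
            containers:
              - name: memcached
                image: memcached:1.5
```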
|
||||
|
||||
### Give it a try
|
||||
|
||||
If you need to build a Kubernetes Operator, doing so with Ansible could save time and complexity. To learn more, head over to the Operator SDK documentation and work through the [Getting Started Guide][8] for Ansible-based Operators. Then join us on the [Operator Framework mailing list][9] and let us know what you think.
|
||||
|
||||
Michael Hrivnak will present [Automating Multi-Service Deployments on Kubernetes][10] at [LISA18][11], October 29-31 in Nashville, Tennessee, USA.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/ansible-operators-kubernetes
|
||||
|
||||
作者:[Michael Hrivnak][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mhrivnak
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/18/2/automated-provisioning-kubernetes
|
||||
[2]: https://coreos.com/operators/
|
||||
[3]: https://github.com/operator-framework/operator-sdk-samples/tree/master/memcached-operator
|
||||
[4]: https://github.com/operator-framework/operator-sdk/
|
||||
[5]: https://github.com/ansible/ansible-runner
|
||||
[6]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#passing-variables-on-the-command-line
|
||||
[7]: https://docs.ansible.com/ansible/2.6/modules/k8s_module.html
|
||||
[8]: https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/user-guide.md
|
||||
[9]: https://groups.google.com/forum/#!forum/operator-framework
|
||||
[10]: https://www.usenix.org/conference/lisa18/presentation/hrivnak
|
||||
[11]: https://www.usenix.org/conference/lisa18
|
@ -0,0 +1,246 @@
|
||||
How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command
|
||||
======
|
||||
This is an important topic for Linux admins, so everyone should be aware of it and practice how to use it efficiently.
|
||||
|
||||
In Linux, whenever we install a package that contains a service or daemon, its init or systemd scripts are added by default, but the service is not enabled.
|
||||
|
||||
Hence, we need to enable or disable the service manually when required. There are three major init systems in Linux that are well known and still in use.
|
||||
|
||||
### What is init System?
|
||||
|
||||
In Linux/Unix based operating systems, init (short for initialization) is the first process started by the kernel during system boot.
|
||||
|
||||
It holds a process ID (PID) of 1 and runs continuously in the background until the system is shut down.
|
||||
|
||||
Init looks at the `/etc/inittab` file to decide the Linux run level, then starts all other processes and applications in the background according to that run level.
|
||||
|
||||
The BIOS, MBR, GRUB, and kernel stages run before the init process as part of the Linux boot process.
|
||||
|
||||
Below are the available run levels for Linux (there are seven runlevels, from zero to six).
|
||||
|
||||
* **`0:`** halt
|
||||
* **`1:`** Single user mode
|
||||
* **`2:`** Multiuser, without NFS
|
||||
* **`3:`** Full multiuser mode
|
||||
* **`4:`** Unused
|
||||
* **`5:`** X11 (GUI – Graphical User Interface)
|
||||
  * **`6:`** reboot
|
||||
|
||||
|
||||
|
||||
The three init systems below are widely used in Linux.
|
||||
|
||||
* System V (Sys V)
|
||||
* Upstart
|
||||
* systemd
|
||||
|
||||
|
||||
|
||||
### What is System V (Sys V)?
|
||||
|
||||
System V (Sys V) is one of the first and most traditional init systems for Unix-like operating systems. init is the first process started by the kernel during system boot, and it is the parent process of everything.
|
||||
|
||||
Most Linux distributions first used the traditional init system called System V (Sys V). Over the years, several replacement init systems were released to address design limitations in the standard version, such as launchd, the Service Management Facility, systemd, and Upstart.
|
||||
|
||||
But systemd has been adopted by several major Linux distributions over the traditional SysV init systems.
|
||||
|
||||
### What is Upstart?
|
||||
|
||||
Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.
|
||||
|
||||
It was originally developed for the Ubuntu distribution, but is intended to be suitable for deployment in all Linux distributions as a replacement for the venerable System-V init.
|
||||
|
||||
It was used in Ubuntu from 9.10 to Ubuntu 14.10 and in RHEL 6 based systems, after which it was replaced with systemd.
|
||||
|
||||
### What is systemd?
|
||||
|
||||
systemd is a new init system and system manager that has been adopted by all the major Linux distributions in place of the traditional SysV init systems.
|
||||
|
||||
systemd is compatible with SysV and LSB init scripts. It can work as a drop-in replacement for the sysvinit system. systemd is the first process started by the kernel and holds PID 1.
|
||||
|
||||
It is the parent process of everything, and Fedora 15 was the first distribution to adopt systemd instead of Upstart. systemctl is the command-line utility and primary tool to manage systemd daemons/services, with actions such as start, restart, stop, enable, disable, reload, and status.
|
||||
|
||||
systemd uses `.service` unit files instead of the bash scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups; you can see the system hierarchy by exploring the `/sys/fs/cgroup/systemd` directory.
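For reference, here is a minimal sketch of what such a unit file can look like; the service name, paths, and options are illustrative assumptions:

```
# Hypothetical /etc/systemd/system/myapp.service
[Unit]
Description=My example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```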
|
||||
|
||||
### How to Enable or Disable Services on Boot Using chkconfig Command?
|
||||
|
||||
The chkconfig utility is a command-line tool that allows you to specify in which runlevel to start a selected service, as well as to list all available services along with their current settings.
|
||||
|
||||
It also allows us to enable or disable a service at boot. Make sure you have superuser privileges (either root or sudo) to use this command.
|
||||
|
||||
All the service scripts are located in `/etc/rc.d/init.d`.
|
||||
|
||||
### How to List All Services in Run Levels
|
||||
|
||||
The `--list` parameter displays all the services along with their current status (that is, in which run levels each service is enabled or disabled).
|
||||
|
||||
```
|
||||
# chkconfig --list
|
||||
NetworkManager 0:off 1:off 2:on 3:on 4:on 5:on 6:off
|
||||
abrt-ccpp 0:off 1:off 2:off 3:on 4:off 5:on 6:off
|
||||
abrtd 0:off 1:off 2:off 3:on 4:off 5:on 6:off
|
||||
acpid 0:off 1:off 2:on 3:on 4:on 5:on 6:off
|
||||
atd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
|
||||
auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
|
||||
.
|
||||
.
|
||||
```
|
||||
|
||||
### How to Check the Status of a Specific Service
|
||||
|
||||
If you would like to see a particular service's status across run levels, use the following format and grep for the required service.
|
||||
|
||||
In this case, we are going to check the `auditd` service's run-level status.
|
||||
|
||||
```
|
||||
# chkconfig --list| grep auditd
|
||||
auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
|
||||
```
|
||||
|
||||
### How to Enable a Particular Service on Run Levels
|
||||
|
||||
Use the `--level` parameter to enable a service in the required run levels. In this case, we are going to enable the `httpd` service on run levels 3 and 5.
|
||||
|
||||
```
|
||||
# chkconfig --level 35 httpd on
|
||||
```
|
||||
|
||||
### How to Disable a Particular Service on Run Levels
|
||||
|
||||
Use the `--level` parameter to disable a service in the required run levels. In this case, we are going to disable the `httpd` service on run levels 3 and 5.
|
||||
|
||||
```
|
||||
# chkconfig --level 35 httpd off
|
||||
```
|
||||
|
||||
### How to Add a New Service to the Startup List
|
||||
|
||||
The `--add` parameter allows us to add a new service to the startup list. By default, it will turn the service on in run levels 2, 3, 4, and 5 automatically.
|
||||
|
||||
```
|
||||
# chkconfig --add nagios
|
||||
```
|
||||
|
||||
### How to Remove a Service from Startup List
|
||||
|
||||
Use the `--del` parameter to remove a service from the startup list. Here, we are going to remove the Nagios service from the startup list.
|
||||
|
||||
```
|
||||
# chkconfig --del nagios
|
||||
```
|
||||
|
||||
### How to Enable or Disable Services on Boot Using systemctl Command?
|
||||
|
||||
As mentioned above, systemctl is the command-line utility and primary tool to manage systemd daemons/services, with actions such as start, restart, stop, enable, disable, reload, and status.
|
||||
|
||||
All user-created systemd unit files are located under `/etc/systemd/system/`.
|
||||
|
||||
### How to List All Services
|
||||
|
||||
Use the following command to list all the services, both enabled and disabled.
|
||||
|
||||
```
|
||||
# systemctl list-unit-files --type=service
|
||||
UNIT FILE STATE
|
||||
arp-ethers.service disabled
|
||||
auditd.service enabled
|
||||
autovt@.service enabled
|
||||
blk-availability.service disabled
|
||||
brandbot.service static
|
||||
chrony-dnssrv@.service static
|
||||
chrony-wait.service disabled
|
||||
chronyd.service enabled
|
||||
cloud-config.service enabled
|
||||
cloud-final.service enabled
|
||||
cloud-init-local.service enabled
|
||||
cloud-init.service enabled
|
||||
console-getty.service disabled
|
||||
console-shell.service disabled
|
||||
container-getty@.service static
|
||||
cpupower.service disabled
|
||||
crond.service enabled
|
||||
.
|
||||
.
|
||||
150 unit files listed.
|
||||
```
|
||||
|
||||
If you would like to see a particular service's status, use the following format and grep for the required service. In this case, we are going to check the `httpd` service status.
|
||||
|
||||
```
|
||||
# systemctl list-unit-files --type=service | grep httpd
|
||||
httpd.service disabled
|
||||
```
|
||||
|
||||
### How to Enable a Particular Service on Boot
|
||||
|
||||
Use the following systemctl command format to enable a particular service. Enabling a service creates a symlink, as shown below.
|
||||
|
||||
```
|
||||
# systemctl enable httpd
|
||||
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
|
||||
```
|
||||
|
||||
Run the following command to double-check whether the service is enabled on boot.
|
||||
|
||||
```
|
||||
# systemctl is-enabled httpd
|
||||
enabled
|
||||
```
|
||||
|
||||
### How to Disable a Particular Service on Boot
|
||||
|
||||
Use the following systemctl command format to disable a particular service. When you run the command, it removes the symlink that was created when the service was enabled, as shown below.
|
||||
|
||||
```
|
||||
# systemctl disable httpd
|
||||
Removed symlink /etc/systemd/system/multi-user.target.wants/httpd.service.
|
||||
```
|
||||
|
||||
Run the following command to double-check whether the service is disabled on boot.
|
||||
|
||||
```
|
||||
# systemctl is-enabled httpd
|
||||
disabled
|
||||
```
|
||||
|
||||
### How to Check the Current Run Level
|
||||
|
||||
Use the following systemctl command to verify which run level you are in. The `runlevel` command still works with systemd; however, run levels are a legacy concept in systemd, so I would advise you to use the systemctl command for all activity.
|
||||
|
||||
We are in `run-level 3`, which is shown below as `multi-user.target`.
|
||||
|
||||
```
|
||||
# systemctl list-units --type=target
|
||||
UNIT LOAD ACTIVE SUB DESCRIPTION
|
||||
basic.target loaded active active Basic System
|
||||
cloud-config.target loaded active active Cloud-config availability
|
||||
cryptsetup.target loaded active active Local Encrypted Volumes
|
||||
getty.target loaded active active Login Prompts
|
||||
local-fs-pre.target loaded active active Local File Systems (Pre)
|
||||
local-fs.target loaded active active Local File Systems
|
||||
multi-user.target loaded active active Multi-User System
|
||||
network-online.target loaded active active Network is Online
|
||||
network-pre.target loaded active active Network (Pre)
|
||||
network.target loaded active active Network
|
||||
paths.target loaded active active Paths
|
||||
remote-fs.target loaded active active Remote File Systems
|
||||
slices.target loaded active active Slices
|
||||
sockets.target loaded active active Sockets
|
||||
swap.target loaded active active Swap
|
||||
sysinit.target loaded active active System Initialization
|
||||
timers.target loaded active active Timers
|
||||
```
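If you only need the default target (the systemd equivalent of the default run level), systemd also exposes it directly; the output below is from a multi-user system and will vary:

```
# systemctl get-default
multi-user.target
# systemctl set-default graphical.target
```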
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-enable-or-disable-services-on-boot-in-linux-using-chkconfig-and-systemctl-command/
|
||||
|
||||
作者:[Prakash Subramanian][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/prakash/
|
||||
[b]: https://github.com/lujun9972
|
@ -0,0 +1,93 @@
|
||||
Kali Linux: What You Must Know Before Using it – FOSS Post
|
||||
======
|
||||
![](https://i1.wp.com/fosspost.org/wp-content/uploads/2018/10/kali-linux.png?fit=1237%2C527&ssl=1)
|
||||
|
||||
Kali Linux is the industry’s leading Linux distribution in penetration testing and ethical hacking. It is a distribution that comes shipped with tons and tons of hacking and penetration tools and software by default, and is widely recognized in all parts of the world, even among Windows users who may not even know what Linux is.
|
||||
|
||||
Because of the latter, many people try to get along with Kali Linux even though they don't understand the basics of a Linux system. The reasons may vary from having fun, to faking being a hacker to impress a girlfriend, to simply trying to hack the neighbors' WiFi network for free Internet, all of which are bad reasons to use Kali Linux.
|
||||
|
||||
Here are some tips that you should know before even planning to use Kali Linux.
|
||||
|
||||
### Kali Linux is Not for Beginners
|
||||
|
||||
![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-000.png?resize=850%2C478&ssl=1)
|
||||
Kali Linux Default GNOME Desktop
|
||||
|
||||
If you are someone who started using Linux a few months ago, or if you don't consider yourself to be above average in terms of knowledge, then Kali Linux is not for you. If you are going to ask things like "How do I install Steam on Kali? How do I make my printer work on Kali? How do I solve the APT sources error on Kali?", then Kali Linux is not suitable for you.
|
||||
|
||||
Kali Linux is mainly made for professionals wanting to run penetration testing suites, or for people who want to learn ethical hacking and digital forensics. But even if you belong to the latter group, the average Kali Linux user should expect a lot of trouble when using Kali Linux for day-to-day work. You are also expected to take a very careful approach to how you use the tools and software; it's not just "let's install it and run everything". Every tool must be used carefully, and every piece of software you install must be carefully examined.
|
||||
|
||||
**Good Read:** [What are the components of a Linux system?][1]
|
||||
|
||||
This is not stuff the average Linux user can normally do. A better approach would be to spend a few weeks learning about Linux and its daemons, services, software, distributions, and the way it works, then watch a few dozen videos and courses about ethical hacking, and only then try to use Kali to apply what you learned.
|
||||
|
||||
### It Can Get You Hacked
|
||||
|
||||
![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-001.png?resize=850%2C478&ssl=1)
|
||||
Kali Linux Hacking & Testing Tools
|
||||
|
||||
In a normal Linux system, there's one account for the normal user and a separate account for root. This is not the case in Kali Linux. Kali Linux uses the root account by default and doesn't provide you with a normal user account. This is because almost all security tools available in Kali require root privileges, and it was designed that way to avoid asking you for the root password every minute.
|
||||
|
||||
Of course, you could simply create a normal user account and start using it. However, this is still not recommended, because that's not how the Kali Linux system is designed to work. You would face a lot of problems using programs, opening ports, and debugging software, spending time figuring out why something doesn't work only to discover that it was a weird privilege bug. You would also be annoyed by all the tools requiring you to enter a password each time you try to do anything on your system.
|
||||
|
||||
Now, since you are forced to use it as the root user, all the software you run on your system will also run with root privileges. This is bad if you don't know what you are doing, because if there's a vulnerability in Firefox, for example, and you visit an infected dark web site, a hacker will be able to get full root permissions on your PC and hack you, whereas the damage would have been limited if you were using a normal user account. Also, some tools that you may install and use can open ports and leak information without your knowledge, so if you are not extremely careful, people can hack you in the same way you may try to hack them.
|
||||
|
||||
If you visit Facebook groups related to Kali Linux, you'll notice that almost a quarter of the posts in these groups are from people calling for help because someone hacked them.
|
||||
|
||||
### It Can Get You in Jail
|
||||
|
||||
Kali Linux provides the software as is. How you use it is your own responsibility alone.
|
||||
|
||||
In most advanced countries around the world, using penetration testing tools against public WiFi networks or other people's devices can easily get you in jail. Don't think that you can't be tracked just because you are using Kali; many systems are configured with complex logging to track whoever tries to listen on or hack their networks, and you may stumble upon one of these, and it will destroy your life.
|
||||
|
||||
Don't ever use Kali Linux tools against devices or networks that do not belong to you or that you have not been given explicit permission to test. If you say that you didn't know what you were doing, it won't be accepted as an excuse in a court.
|
||||
|
||||
### Modified Kernel and Software
|
||||
|
||||
Kali is [based][2] on Debian (Testing branch, which means that Kali Linux uses a rolling release model), so it uses most of the software architecture from there, and you will find most of the software in Kali Linux just as they are in Debian.
|
||||
|
||||
However, some packages were modified to harden security and fix some possible vulnerabilities. The Linux kernel that Kali uses, for example, is patched to allow wireless injection on various devices. These patches are not normally available in the vanilla kernel. Also, Kali Linux does not depend on Debian servers and mirrors, but builds the packages on its own servers. Here are the default software sources in the latest release:
|
||||
|
||||
```
|
||||
deb http://http.kali.org/kali kali-rolling main contrib non-free
|
||||
deb-src http://http.kali.org/kali kali-rolling main contrib non-free
|
||||
```
|
||||
|
||||
That’s why, for some specific software, you will find a different behaviour when using the same program in Kali Linux or using it in Fedora, for example. You can see a full list of Kali Linux software from [git.kali.org][3]. You can also find our [own generated list of installed packages][4] on Kali Linux (GNOME).
|
||||
|
||||
More importantly, Kali Linux's official documentation strongly suggests NOT adding any other third-party software repositories; because Kali Linux is a rolling release that depends on Debian Testing, you will most likely break your system just by adding a new repository source, due to dependency conflicts and package hooks.
|
||||
|
||||
### Don’t Install Kali Linux
|
||||
|
||||
![](https://i0.wp.com/fosspost.org/wp-content/uploads/2018/10/Kali-Linux-002.png?resize=750%2C504&ssl=1)
|
||||
Running wpscan on fosspost.org using Kali Linux
|
||||
|
||||
I use Kali Linux on rare occasions to test the software and servers I deploy. However, I will never dare to install it and use it as a primary system.
|
||||
|
||||
If you are going to use it as a primary system, then you will have to keep your personal files, passwords, data, and everything else on it. You will also need to install tons of daily-use software in order to ease your life. But as we mentioned above, using Kali Linux is very risky and should be done very carefully, and if you get hacked, you will lose all your data, and it may be exposed to a wider audience. Your personal information can also be used to track you if you are doing illegal things. You may even destroy your data yourself if you are not careful about how you use the tools.
|
||||
|
||||
Even professional white hat hackers don't recommend installing it as a primary system; rather, use it from a USB drive to do your penetration testing work, then return to your normal Linux distribution.
|
||||
|
||||
### The Bottom Line
|
||||
|
||||
As you can see, using Kali is not a decision to take lightly. If you are planning to be a white hat hacker and you need Kali to learn, then go for it after learning the basics and spending a few months with a normal system. But be careful about what you are doing to avoid getting into trouble.
|
||||
|
||||
If you are planning to use Kali or if you need any help, I’ll be happy to hear your thoughts in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fosspost.org/articles/must-know-before-using-kali-linux
|
||||
|
||||
作者:[M.Hanny Sabbagh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fosspost.org/author/mhsabbagh
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fosspost.org/articles/what-are-the-components-of-a-linux-distribution
|
||||
[2]: https://www.kali.org/news/kali-linux-rolling-edition-2016-1/
|
||||
[3]: http://git.kali.org
|
||||
[4]: https://paste.ubuntu.com/p/bctSVWwpVw/
|
@ -0,0 +1,84 @@
|
||||
translating---geekpi
|
||||
|
||||
Running Linux containers as a non-root with Podman
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/10/podman-816x345.jpg)
|
||||
|
||||
Linux containers are processes with certain isolation features provided by a Linux kernel — including filesystem, process, and network isolation. Containers help with portability — applications can be distributed in container images along with their dependencies, and run on virtually any Linux system with a container runtime.
|
||||
|
||||
Although container technologies have existed for a very long time, Linux containers were widely popularized by Docker. The word “Docker” can refer to several different things, including the container technology and tooling, the community around it, or the Docker Inc. company. However, in this article, I’ll be using it to refer to the technology and the tooling that manages Linux containers.
|
||||
|
||||
### What is Docker
|
||||
|
||||
[Docker][1] is a daemon that runs on your system as root, and manages running containers by leveraging features of the Linux kernel. Apart from running containers, it also makes it easy to manage container images — interacting with container registries, storing images, managing container versions, etc. It basically supports all the operations you need to run individual containers.
|
||||
|
||||
But even though Docker is a very handy tool for managing Linux containers, it has two drawbacks: it is a daemon that needs to run on your system, and it needs to run with root privileges, which might have certain security implications. Both of those, however, are being addressed by Podman.
|
||||
|
||||
### Introducing Podman
|
||||
|
||||
[Podman][2] is a container runtime providing features very similar to Docker’s. And as already hinted, it doesn’t require any daemon to run on your system, and it can also run without root privileges. So let’s have a look at some examples of using Podman to run Linux containers.
|
||||
|
||||
#### Running containers with Podman
|
||||
|
||||
One of the simplest examples could be running a Fedora container, printing “Hello world!” in the command line:
|
||||
|
||||
```
|
||||
$ podman run --rm -it fedora:28 echo "Hello world!"
|
||||
```
|
||||
|
||||
Building an image using the common Dockerfile works the same way as it does with Docker:
|
||||
|
||||
```
|
||||
$ cat Dockerfile
|
||||
FROM fedora:28
|
||||
RUN dnf -y install cowsay
|
||||
|
||||
$ podman build . -t hello-world
|
||||
... output omitted ...
|
||||
|
||||
$ podman run --rm -it hello-world cowsay "Hello!"
|
||||
```
|
||||
|
||||
To build containers, Podman calls another tool called Buildah in the background. You can read a recent [post about building container images with Buildah][3] — not just using the typical Dockerfile.
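To confirm the build produced a local image, you could list your images; the ID, timestamp, and repository prefix below are placeholders and may differ on your system:

```
$ podman images
REPOSITORY              TAG      IMAGE ID       CREATED         SIZE
localhost/hello-world   latest   <image id>     2 minutes ago   ...
```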
|
||||
|
||||
Apart from building and running containers, Podman can also interact with container registries. To log in to a container registry, for example the widely used Docker Hub, run:
|
||||
|
||||
```
|
||||
$ podman login docker.io
|
||||
```
|
||||
|
||||
To push the image I just built, I need to tag it so it refers to the specific container registry and my personal namespace, and then simply push it.
|
||||
|
||||
```
|
||||
$ podman tag hello-world docker.io/asamalik/hello-world
|
||||
$ podman push docker.io/asamalik/hello-world
|
||||
```
|
||||
|
||||
By the way, have you noticed how I run everything as a non-root user? Also, there is no big fat daemon running on my system!
|
||||
|
||||
#### Installing Podman
|
||||
|
||||
Podman is available by default on [Silverblue][4] — a new generation of Linux Workstation for container-based workflows. To install it on any Fedora release, simply run:
|
||||
|
||||
```
|
||||
$ sudo dnf install podman
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/running-containers-with-podman/
|
||||
|
||||
作者:[Adam Šamalík][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/asamalik/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://docs.docker.com/
|
||||
[2]: https://podman.io/
|
||||
[3]: https://fedoramagazine.org/daemon-less-container-management-buildah/
|
||||
[4]: https://silverblue.fedoraproject.org/
|
120
sources/tech/20181016 Final JOS project.md
Normal file
120
sources/tech/20181016 Final JOS project.md
Normal file
@ -0,0 +1,120 @@
|
||||
Translating by qhwdw
|
||||
Final JOS project
|
||||
======
|
||||
Piazza Discussion Due, November 2, 2018
Proposals Due, November 8, 2018
Code repository Due, December 6, 2018
Check-off and in-class demos, Week of December 10, 2018
|
||||
|
||||
### Introduction
|
||||
|
||||
For the final project you have two options:
|
||||
|
||||
  * Work on your own and do [lab 6][1], including one challenge exercise in lab 6. (You are free, of course, to extend lab 6, or any part of JOS, further in interesting ways, but it isn't required.)
|
||||
|
||||
* Work in a team of one, two or three, on a project of your choice that involves your JOS. This project must be of the same scope as lab 6 or larger (if you are working in a team).
|
||||
|
||||
The goal is to have fun and explore more advanced O/S topics; you don't have to do novel research.
|
||||
|
||||
If you are doing your own project, we'll grade you on how much you got working, how elegant your design is, how well you can explain it, and how interesting and creative your solution is. We do realize that time is limited, so we don't expect you to re-write Linux by the end of the semester. Try to make sure your goals are reasonable; perhaps set a minimum goal that's definitely achievable (e.g., something of the scale of lab 6) and a more ambitious goal if things go well.
|
||||
|
||||
If you are doing lab 6, we will grade you on whether you pass the tests and the challenge exercise.
|
||||
|
||||
### Deliverables
|
||||
|
||||
```
|
||||
Nov 3: Piazza discussion and form groups of 1, 2, or 3 (depending on which final project option you are choosing). Use the lab7 tag/folder on Piazza. Discuss ideas with others in comments on their Piazza posting. Use these postings to help find other students interested in similar ideas for forming a group. Course staff will provide feedback on project ideas on Piazza; if you'd like more detailed feedback, come chat with us in person.
|
||||
```
|
||||
|
||||
```
|
||||
Nov 9: Submit a proposal at [the submission website][19], just a paragraph or two. The proposal should include your group members list, the problem you want to address, how you plan to address it, and what are you proposing to specifically design and implement. (If you are doing lab 6, there is nothing to do for this deliverable.)
|
||||
```
|
||||
|
||||
```
|
||||
Dec 7: submit source code along with a brief write-up. Put the write-up under the top-level source directory with the name "README.pdf". Since some of you will be working in groups for this lab assignment, you may want to use git to share your project code between group members. You will need to decide on whose source code you will use as a starting point for your group project. Make sure to create a branch for your final project, and name it lab7\. (If you do lab 6, follow the lab 6 submission instructions.)
|
||||
```
|
||||
|
||||
```
|
||||
Week of Dec 11: short in-class demonstration. Prepare a short in-class demo of your JOS project. We will provide a projector that you can use to demonstrate your project. Depending on the number of groups and the kinds of projects that each group chooses, we may decide to limit the total number of presentations, and some groups might end up not presenting in class.
|
||||
```
|
||||
|
||||
```
|
||||
Week of Dec 11: check-off with TAs. Demo your project to the TAs so that we can ask you some questions and find out in more detail what you did.
|
||||
```
|
||||
|
||||
### Project ideas
|
||||
|
||||
If you are not doing lab 6, here's a list of ideas to get you started thinking. But you should feel free to pursue your own ideas. Some of the ideas are just starting points and are not, by themselves, of the scope of lab 6, while others are likely to be of much larger scope.
|
||||
|
||||
* Build a virtual machine monitor that can run multiple guests (for example, multiple instances of JOS), using [x86 VM support][2].
|
||||
|
||||
* Do something useful with the hardware protection of Intel SGX. [Here is a recent paper using Intel SGX][3].
|
||||
|
||||
* Make the JOS file system support writing, file creation, logging for durability, etc., perhaps taking ideas from Linux EXT3.
|
||||
|
||||
* Use file system ideas from [Soft updates][4], [WAFL][5], ZFS, or another advanced file system.
|
||||
|
||||
* Add snapshots to a file system, so that a user can look at the file system as it appeared at various points in the past. You'll probably want to use some kind of copy-on-write for disk storage to keep space consumption down.
|
||||
|
||||
* Build a [distributed shared memory][6] (DSM) system, so that you can run multi-threaded shared memory parallel programs on a cluster of machines, using paging to give the appearance of real shared memory. When a thread tries to access a page that's on another machine, the page fault will give the DSM system a chance to fetch the page over the network from whatever machine currently stores it.
|
||||
|
||||
* Allow processes to migrate from one machine to another over the network. You'll need to do something about the various pieces of a process's state, but since much state in JOS is in user-space it may be easier than process migration on Linux.
|
||||
|
||||
* Implement [paging][7] to disk in JOS, so that processes can be bigger than RAM. Extend your pager with swapping.
|
||||
|
||||
* Implement [mmap()][8] of files for JOS.
|
||||
|
||||
* Use [xfi][9] to sandbox code within a process.
|
||||
|
||||
* Support x86 [2MB or 4MB pages][10].
|
||||
|
||||
* Modify JOS to have kernel-supported threads inside processes. See [in-class uthread assignment][11] to get started. Implementing scheduler activations would be one way to do this project.
|
||||
|
||||
* Use fine-grained locking or lock-free concurrency in JOS in the kernel or in the file server (after making it multithreaded). The linux kernel uses [read copy update][12] to be able to perform read operations without holding locks. Explore RCU by implementing it in JOS and use it to support a name cache with lock-free reads.
|
||||
|
||||
* Implement ideas from the [Exokernel papers][13], for example the packet filter.
|
||||
|
||||
* Make JOS have soft real-time behavior. You will have to identify some application for which this is useful.
|
||||
|
||||
* Make JOS run on 64-bit CPUs. This includes redoing the virtual memory system to use 4-level pages tables. See [reference page][14] for some documentation.
|
||||
|
||||
* Port JOS to a different microprocessor. The [osdev wiki][15] may be helpful.
|
||||
|
||||
* A window system for JOS, including graphics driver and mouse. See [reference page][16] for some documentation. [sqrt(x)][17] is an example JOS window system (and writeup).
|
||||
|
||||
* Implement [dune][18] to export privileged hardware instructions to user-space applications in JOS.
|
||||
|
||||
* Write a user-level debugger; add strace-like functionality; hardware register profiling (e.g. Oprofile); call-traces
|
||||
|
||||
* Binary emulation for (static) Linux executables
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://pdos.csail.mit.edu/6.828/2018/labs/lab7/
|
||||
|
||||
作者:[csail.mit][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://pdos.csail.mit.edu
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/index.html
|
||||
[2]: http://www.intel.com/technology/itj/2006/v10i3/1-hardware/3-software.htm
|
||||
[3]: https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-baumann.pdf
|
||||
[4]: http://www.ece.cmu.edu/~ganger/papers/osdi94.pdf
|
||||
[5]: https://ng.gnunet.org/sites/default/files/10.1.1.40.3691.pdf
|
||||
[6]: http://www.cdf.toronto.edu/~csc469h/fall/handouts/nitzberg91.pdf
|
||||
[7]: http://en.wikipedia.org/wiki/Paging
|
||||
[8]: http://en.wikipedia.org/wiki/Mmap
|
||||
[9]: http://static.usenix.org/event/osdi06/tech/erlingsson.html
|
||||
[10]: http://en.wikipedia.org/wiki/Page_(computer_memory)
|
||||
[11]: http://pdos.csail.mit.edu/6.828/2018/homework/xv6-uthread.html
|
||||
[12]: http://en.wikipedia.org/wiki/Read-copy-update
|
||||
[13]: http://pdos.csail.mit.edu/6.828/2018/readings/engler95exokernel.pdf
|
||||
[14]: http://pdos.csail.mit.edu/6.828/2018/reference.html
|
||||
[15]: http://wiki.osdev.org/Main_Page
|
||||
[16]: http://pdos.csail.mit.edu/6.828/2018/reference.html
|
||||
[17]: http://web.mit.edu/amdragon/www/pubs/sqrtx-6.828.html
|
||||
[18]: https://www.usenix.org/system/files/conference/osdi12/osdi12-final-117.pdf
|
||||
[19]: https://6828.scripts.mit.edu/2018/handin.py/
|
596
sources/tech/20181016 Lab 4- Preemptive Multitasking.md
Normal file
596
sources/tech/20181016 Lab 4- Preemptive Multitasking.md
Normal file
@ -0,0 +1,596 @@
Translating by qhwdw

Lab 4: Preemptive Multitasking
======

**Part A due Thursday, October 18, 2018
Part B due Thursday, October 25, 2018
Part C due Thursday, November 1, 2018**

#### Introduction

In this lab you will implement preemptive multitasking among multiple simultaneously active user-mode environments.

In part A you will add multiprocessor support to JOS, implement round-robin scheduling, and add basic environment management system calls (calls that create and destroy environments, and allocate/map memory).

In part B, you will implement a Unix-like `fork()`, which allows a user-mode environment to create copies of itself.

Finally, in part C you will add support for inter-process communication (IPC), allowing different user-mode environments to communicate and synchronize with each other explicitly. You will also add support for hardware clock interrupts and preemption.

##### Getting Started

Use Git to commit your Lab 3 source, fetch the latest version of the course repository, and then create a local branch called `lab4` based on our lab4 branch, `origin/lab4`:

```
athena% cd ~/6.828/lab
athena% add git
athena% git pull
Already up-to-date.
athena% git checkout -b lab4 origin/lab4
Branch lab4 set up to track remote branch refs/remotes/origin/lab4.
Switched to a new branch "lab4"
athena% git merge lab3
Merge made by recursive.
...
athena%
```

Lab 4 contains a number of new source files, some of which you should browse before you start:

| kern/cpu.h | Kernel-private definitions for multiprocessor support |
| kern/mpconfig.c | Code to read the multiprocessor configuration |
| kern/lapic.c | Kernel code driving the local APIC unit in each processor |
| kern/mpentry.S | Assembly-language entry code for non-boot CPUs |
| kern/spinlock.h | Kernel-private definitions for spin locks, including the big kernel lock |
| kern/spinlock.c | Kernel code implementing spin locks |
| kern/sched.c | Code skeleton of the scheduler that you are about to implement |

##### Lab Requirements

This lab is divided into three parts, A, B, and C. We have allocated one week in the schedule for each part.

As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. (You do not need to do one challenge problem per part, just one for the whole lab.) Additionally, you will need to write up a brief description of the challenge problem that you implemented. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab4.txt` in the top level of your `lab` directory before handing in your work.

#### Part A: Multiprocessor Support and Cooperative Multitasking

In the first part of this lab, you will first extend JOS to run on a multiprocessor system, and then implement some new JOS kernel system calls to allow user-level environments to create additional new environments. You will also implement _cooperative_ round-robin scheduling, allowing the kernel to switch from one environment to another when the current environment voluntarily relinquishes the CPU (or exits). Later in part C you will implement _preemptive_ scheduling, which allows the kernel to re-take control of the CPU from an environment after a certain time has passed even if the environment does not cooperate.

##### Multiprocessor Support

We are going to make JOS support "symmetric multiprocessing" (SMP), a multiprocessor model in which all CPUs have equivalent access to system resources such as memory and I/O buses. While all CPUs are functionally identical in SMP, during the boot process they can be classified into two types: the bootstrap processor (BSP) is responsible for initializing the system and for booting the operating system; and the application processors (APs) are activated by the BSP only after the operating system is up and running. Which processor is the BSP is determined by the hardware and the BIOS. Up to this point, all your existing JOS code has been running on the BSP.

In an SMP system, each CPU has an accompanying local APIC (LAPIC) unit. The LAPIC units are responsible for delivering interrupts throughout the system. The LAPIC also provides its connected CPU with a unique identifier. In this lab, we make use of the following basic functionality of the LAPIC unit (in `kern/lapic.c`):

* Reading the LAPIC identifier (APIC ID) to tell which CPU our code is currently running on (see `cpunum()`).
* Sending the `STARTUP` interprocessor interrupt (IPI) from the BSP to the APs to bring up other CPUs (see `lapic_startap()`).
* In part C, we program the LAPIC's built-in timer to trigger clock interrupts to support preemptive multitasking (see `apic_init()`).

A processor accesses its LAPIC using memory-mapped I/O (MMIO). In MMIO, a portion of _physical_ memory is hardwired to the registers of some I/O devices, so the same load/store instructions typically used to access memory can be used to access device registers. You've already seen one I/O hole at physical address `0xA0000` (we use this to write to the VGA display buffer). The LAPIC lives in a hole starting at physical address `0xFE000000` (32MB short of 4GB), so it's too high for us to access using our usual direct map at `KERNBASE`. The JOS virtual memory map leaves a 4MB gap at `MMIOBASE` so we have a place to map devices like this. Since later labs introduce more MMIO regions, you'll write a simple function to allocate space from this region and map device memory to it.

```
Exercise 1. Implement `mmio_map_region` in `kern/pmap.c`. To see how this is used, look at the beginning of `lapic_init` in `kern/lapic.c`. You'll have to do the next exercise, too, before the tests for `mmio_map_region` will run.
```
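
For concreteness, here is a minimal sketch of what `mmio_map_region` might look like, assuming the `boot_map_region()` helper from Lab 2 and the `MMIOLIM` limit and cache-control PTE bits (`PTE_PCD`, `PTE_PWT`) from `inc/memlayout.h` and `inc/mmu.h`; treat it as a starting point, not a definitive solution:

```
// One possible mmio_map_region() (kern/pmap.c). It hands out virtual
// address space from MMIOBASE and maps it to the device's physical
// registers, cache-disabled and write-through as device memory requires.
void *
mmio_map_region(physaddr_t pa, size_t size)
{
	static uintptr_t base = MMIOBASE;   // next unused MMIO virtual address

	size = ROUNDUP(size, PGSIZE);
	if (base + size > MMIOLIM)
		panic("mmio_map_region: reservation overflows MMIOLIM");

	boot_map_region(kern_pgdir, base, size, pa,
			PTE_PCD | PTE_PWT | PTE_W);
	void *ret = (void *)base;
	base += size;
	return ret;
}
```

This sketch assumes `pa` is page-aligned, which holds for the LAPIC hole at `0xFE000000`.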

###### Application Processor Bootstrap

Before booting up APs, the BSP should first collect information about the multiprocessor system, such as the total number of CPUs, their APIC IDs, and the MMIO address of the LAPIC unit. The `mp_init()` function in `kern/mpconfig.c` retrieves this information by reading the MP configuration table that resides in the BIOS's region of memory.

The `boot_aps()` function (in `kern/init.c`) drives the AP bootstrap process. APs start in real mode, much like how the bootloader started in `boot/boot.S`, so `boot_aps()` copies the AP entry code (`kern/mpentry.S`) to a memory location that is addressable in real mode. Unlike with the bootloader, we have some control over where the AP will start executing code; we copy the entry code to `0x7000` (`MPENTRY_PADDR`), but any unused, page-aligned physical address below 640KB would work.

After that, `boot_aps()` activates APs one after another by sending `STARTUP` IPIs to the LAPIC unit of the corresponding AP, along with an initial `CS:IP` address at which the AP should start running its entry code (`MPENTRY_PADDR` in our case). The entry code in `kern/mpentry.S` is quite similar to that of `boot/boot.S`. After some brief setup, it puts the AP into protected mode with paging enabled, and then calls the C setup routine `mp_main()` (also in `kern/init.c`). `boot_aps()` waits for the AP to signal a `CPU_STARTED` flag in the `cpu_status` field of its `struct CpuInfo` before going on to wake up the next one.

```
Exercise 2. Read `boot_aps()` and `mp_main()` in `kern/init.c`, and the assembly code in `kern/mpentry.S`. Make sure you understand the control flow transfer during the bootstrap of APs. Then modify your implementation of `page_init()` in `kern/pmap.c` to avoid adding the page at `MPENTRY_PADDR` to the free list, so that we can safely copy and run AP bootstrap code at that physical address. Your code should pass the updated `check_page_free_list()` test (but might fail the updated `check_kern_pgdir()` test, which we will fix soon).
```

```
Question

1. Compare `kern/mpentry.S` side by side with `boot/boot.S`. Bearing in mind that `kern/mpentry.S` is compiled and linked to run above `KERNBASE` just like everything else in the kernel, what is the purpose of the macro `MPBOOTPHYS`? Why is it necessary in `kern/mpentry.S` but not in `boot/boot.S`? In other words, what could go wrong if it were omitted in `kern/mpentry.S`?

Hint: recall the differences between the link address and the load address that we have discussed in Lab 1.
```

###### Per-CPU State and Initialization

When writing a multiprocessor OS, it is important to distinguish between per-CPU state that is private to each processor, and global state that the whole system shares. `kern/cpu.h` defines most of the per-CPU state, including `struct CpuInfo`, which stores per-CPU variables. `cpunum()` always returns the ID of the CPU that calls it, which can be used as an index into arrays like `cpus`. Alternatively, the macro `thiscpu` is shorthand for the current CPU's `struct CpuInfo`.

Here is the per-CPU state you should be aware of:

* **Per-CPU kernel stack**.
  Because multiple CPUs can trap into the kernel simultaneously, we need a separate kernel stack for each processor to prevent them from interfering with each other's execution. The array `percpu_kstacks[NCPU][KSTKSIZE]` reserves space for NCPU's worth of kernel stacks.

  In Lab 2, you mapped the physical memory that `bootstack` refers to as the BSP's kernel stack just below `KSTACKTOP`. Similarly, in this lab, you will map each CPU's kernel stack into this region with guard pages acting as a buffer between them. CPU 0's stack will still grow down from `KSTACKTOP`; CPU 1's stack will start `KSTKGAP` bytes below the bottom of CPU 0's stack, and so on. `inc/memlayout.h` shows the mapping layout.

* **Per-CPU TSS and TSS descriptor**.
  A per-CPU task state segment (TSS) is also needed in order to specify where each CPU's kernel stack lives. The TSS for CPU _i_ is stored in `cpus[i].cpu_ts`, and the corresponding TSS descriptor is defined in the GDT entry `gdt[(GD_TSS0 >> 3) + i]`. The global `ts` variable defined in `kern/trap.c` will no longer be useful.

* **Per-CPU current environment pointer**.
  Since each CPU can run a different user process simultaneously, we redefined the symbol `curenv` to refer to `cpus[cpunum()].cpu_env` (or `thiscpu->cpu_env`), which points to the environment _currently_ executing on the _current_ CPU (the CPU on which the code is running).

* **Per-CPU system registers**.
  All registers, including system registers, are private to a CPU. Therefore, instructions that initialize these registers, such as `lcr3()`, `ltr()`, `lgdt()`, `lidt()`, etc., must be executed once on each CPU. The functions `env_init_percpu()` and `trap_init_percpu()` are defined for this purpose.

```
Exercise 3. Modify `mem_init_mp()` (in `kern/pmap.c`) to map per-CPU stacks starting at `KSTACKTOP`, as shown in `inc/memlayout.h`. The size of each stack is `KSTKSIZE` bytes plus `KSTKGAP` bytes of unmapped guard pages. Your code should pass the new check in `check_kern_pgdir()`.
```
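
A minimal sketch of the mapping loop, assuming `boot_map_region()` and the `percpu_kstacks` array declared in `kern/mpconfig.c`; the guard gap is left unmapped simply by not mapping it:

```
// Possible mem_init_mp() (kern/pmap.c): map CPU i's stack just below
// KSTACKTOP - i * (KSTKSIZE + KSTKGAP). The KSTKGAP bytes below each
// stack stay unmapped, so a stack overflow faults instead of silently
// corrupting the next CPU's stack.
static void
mem_init_mp(void)
{
	for (int i = 0; i < NCPU; i++) {
		uintptr_t kstacktop_i = KSTACKTOP - i * (KSTKSIZE + KSTKGAP);
		boot_map_region(kern_pgdir,
				kstacktop_i - KSTKSIZE,      // lowest mapped byte
				KSTKSIZE,
				PADDR(percpu_kstacks[i]),
				PTE_W);                      // kernel RW only
	}
}
```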

```
Exercise 4. The code in `trap_init_percpu()` (`kern/trap.c`) initializes the TSS and TSS descriptor for the BSP. It worked in Lab 3, but is incorrect when running on other CPUs. Change the code so that it can work on all CPUs. (Note: your new code should not use the global `ts` variable any more.)
```
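
One way the per-CPU version might look, assuming `thiscpu`, `cpunum()`, and the `gdt[(GD_TSS0 >> 3) + i]` slot convention described above; the details (especially the `SEG16` initializer) mirror the Lab 3 BSP-only code, so double-check them against your own tree:

```
// Sketch of a per-CPU trap_init_percpu() (kern/trap.c).
void
trap_init_percpu(void)
{
	int i = cpunum();

	// Kernel stack entry point for traps from user mode on this CPU.
	thiscpu->cpu_ts.ts_esp0 = KSTACKTOP - i * (KSTKSIZE + KSTKGAP);
	thiscpu->cpu_ts.ts_ss0 = GD_KD;
	thiscpu->cpu_ts.ts_iomb = sizeof(struct Taskstate);

	// Install this CPU's TSS descriptor and load the task register.
	gdt[(GD_TSS0 >> 3) + i] = SEG16(STS_T32A,
					(uint32_t)&thiscpu->cpu_ts,
					sizeof(struct Taskstate) - 1, 0);
	gdt[(GD_TSS0 >> 3) + i].sd_s = 0;
	ltr(GD_TSS0 + (i << 3));

	lidt(&idt_pd);
}
```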

When you finish the above exercises, run JOS in QEMU with 4 CPUs using make qemu CPUS=4 (or make qemu-nox CPUS=4); you should see output like this:

```
...
Physical memory: 66556K available, base = 640K, extended = 65532K
check_page_alloc() succeeded!
check_page() succeeded!
check_kern_pgdir() succeeded!
check_page_installed_pgdir() succeeded!
SMP: CPU 0 found 4 CPU(s)
enabled interrupts: 1 2
SMP: CPU 1 starting
SMP: CPU 2 starting
SMP: CPU 3 starting
```

###### Locking

Our current code spins after initializing the AP in `mp_main()`. Before letting the AP get any further, we need to first address race conditions when multiple CPUs run kernel code simultaneously. The simplest way to achieve this is to use a _big kernel lock_. The big kernel lock is a single global lock that is held whenever an environment enters kernel mode, and is released when the environment returns to user mode. In this model, environments in user mode can run concurrently on any available CPUs, but no more than one environment can run in kernel mode; any other environments that try to enter kernel mode are forced to wait.

`kern/spinlock.h` declares the big kernel lock, namely `kernel_lock`. It also provides `lock_kernel()` and `unlock_kernel()`, shortcuts to acquire and release the lock. You should apply the big kernel lock at four locations:

* In `i386_init()`, acquire the lock before the BSP wakes up the other CPUs.
* In `mp_main()`, acquire the lock after initializing the AP, and then call `sched_yield()` to start running environments on this AP.
* In `trap()`, acquire the lock when trapped from user mode. To determine whether a trap happened in user mode or in kernel mode, check the low bits of `tf_cs`.
* In `env_run()`, release the lock _right before_ switching to user mode. Do not do it too early or too late, otherwise you will experience races or deadlocks.

```
Exercise 5. Apply the big kernel lock as described above, by calling `lock_kernel()` and `unlock_kernel()` at the proper locations.
```
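
For the `trap()` case, the check and lock acquisition can be as small as this; the fragment assumes the existing `trap()` body in `kern/trap.c`:

```
// The bottom two bits of the CS selector hold the privilege level;
// 3 means the trap came from user mode, so take the big kernel lock.
if ((tf->tf_cs & 3) == 3) {
	lock_kernel();
	assert(curenv);
	// ... the existing user-mode trap handling continues here ...
}
```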

How can you test whether your locking is correct? You can't at this moment! But you will be able to after you implement the scheduler in the next exercise.

```
Question

2. It seems that using the big kernel lock guarantees that only one CPU can run the kernel code at a time. Why do we still need separate kernel stacks for each CPU? Describe a scenario in which using a shared kernel stack will go wrong, even with the protection of the big kernel lock.
```

```
Challenge! The big kernel lock is simple and easy to use. Nevertheless, it eliminates all concurrency in kernel mode. Most modern operating systems use different locks to protect different parts of their shared state, an approach called _fine-grained locking_. Fine-grained locking can increase performance significantly, but is more difficult to implement and error-prone. If you are brave enough, drop the big kernel lock and embrace concurrency in JOS!

It is up to you to decide the locking granularity (the amount of data that a lock protects). As a hint, you may consider using spin locks to ensure exclusive access to these shared components in the JOS kernel:

* The page allocator.
* The console driver.
* The scheduler.
* The inter-process communication (IPC) state that you will implement in part C.
```

##### Round-Robin Scheduling

Your next task in this lab is to change the JOS kernel so that it can alternate between multiple environments in "round-robin" fashion. Round-robin scheduling in JOS works as follows:

* The function `sched_yield()` in the new `kern/sched.c` is responsible for selecting a new environment to run. It searches sequentially through the `envs[]` array in circular fashion, starting just after the previously running environment (or at the beginning of the array if there was no previously running environment), picks the first environment it finds with a status of `ENV_RUNNABLE` (see `inc/env.h`), and calls `env_run()` to jump into that environment.
* `sched_yield()` must never run the same environment on two CPUs at the same time. It can tell that an environment is currently running on some CPU (possibly the current CPU) because that environment's status will be `ENV_RUNNING`.
* We have implemented a new system call for you, `sys_yield()`, which user environments can call to invoke the kernel's `sched_yield()` function and thereby voluntarily give up the CPU to a different environment.

```
Exercise 6. Implement round-robin scheduling in `sched_yield()` as described above. Don't forget to modify `syscall()` to dispatch `sys_yield()`.

Make sure to invoke `sched_yield()` in `mp_main`.

Modify `kern/init.c` to create three (or more!) environments that all run the program `user/yield.c`.

Run make qemu. You should see the environments switch back and forth between each other five times before terminating, like below.

Test also with several CPUs: make qemu CPUS=2.

...
Hello, I am environment 00001000.
Hello, I am environment 00001001.
Hello, I am environment 00001002.
Back in environment 00001000, iteration 0.
Back in environment 00001001, iteration 0.
Back in environment 00001002, iteration 0.
Back in environment 00001000, iteration 1.
Back in environment 00001001, iteration 1.
Back in environment 00001002, iteration 1.
...

After the `yield` programs exit, there will be no runnable environments in the system; the scheduler should then invoke the JOS kernel monitor. If any of this does not happen, fix your code before proceeding.
```
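
A minimal sketch of the scheduler, assuming `envs[]`, `curenv`, `ENVX()`, and `sched_halt()` from the lab's skeleton; remember that `env_run()` does not return:

```
// Possible round-robin sched_yield() (kern/sched.c): scan envs[]
// circularly, starting just after the previously running environment.
void
sched_yield(void)
{
	int start = curenv ? ENVX(curenv->env_id) + 1 : 0;

	for (int i = 0; i < NENV; i++) {
		struct Env *e = &envs[(start + i) % NENV];
		if (e->env_status == ENV_RUNNABLE)
			env_run(e);                  // does not return
	}

	// No other candidate: keep running the current environment if it
	// is still ENV_RUNNING on this CPU, otherwise halt this CPU.
	if (curenv && curenv->env_status == ENV_RUNNING)
		env_run(curenv);
	sched_halt();
}
```

Note that the `ENV_RUNNING` status is what prevents two CPUs from picking the same environment: an environment that is running somewhere is never `ENV_RUNNABLE`.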

```
Question

3. In your implementation of `env_run()` you should have called `lcr3()`. Before and after the call to `lcr3()`, your code makes references (at least it should) to the variable `e`, the argument to `env_run`. Upon loading the `%cr3` register, the addressing context used by the MMU is instantly changed. But a virtual address (namely `e`) has meaning relative to a given address context--the address context specifies the physical address to which the virtual address maps. Why can the pointer `e` be dereferenced both before and after the addressing switch?

4. Whenever the kernel switches from one environment to another, it must ensure the old environment's registers are saved so they can be restored properly later. Why? Where does this happen?
```

```
Challenge! Add a less trivial scheduling policy to the kernel, such as a fixed-priority scheduler that allows each environment to be assigned a priority and ensures that higher-priority environments are always chosen in preference to lower-priority environments. If you're feeling really adventurous, try implementing a Unix-style adjustable-priority scheduler or even a lottery or stride scheduler. (Look up "lottery scheduling" and "stride scheduling" in Google.)

Write a test program or two that verifies that your scheduling algorithm is working correctly (i.e., the right environments get run in the right order). It may be easier to write these test programs once you have implemented `fork()` and IPC in parts B and C of this lab.
```

```
Challenge! The JOS kernel currently does not allow applications to use the x86 processor's x87 floating-point unit (FPU), MMX instructions, or Streaming SIMD Extensions (SSE). Extend the `Env` structure to provide a save area for the processor's floating point state, and extend the context switching code to save and restore this state properly when switching from one environment to another. The `FXSAVE` and `FXRSTOR` instructions may be useful, but note that these are not in the old i386 user's manual because they were introduced in more recent processors. Write a user-level test program that does something cool with floating-point.
```

##### System Calls for Environment Creation

Although your kernel is now capable of running and switching between multiple user-level environments, it is still limited to running environments that the _kernel_ initially set up. You will now implement the necessary JOS system calls to allow _user_ environments to create and start other new user environments.

Unix provides the `fork()` system call as its process creation primitive. Unix `fork()` copies the entire address space of the calling process (the parent) to create a new process (the child). The only differences between the two observable from user space are their process IDs and parent process IDs (as returned by `getpid` and `getppid`). In the parent, `fork()` returns the child's process ID, while in the child, `fork()` returns 0. By default, each process gets its own private address space, and neither process's modifications to memory are visible to the other.

You will provide a different, more primitive set of JOS system calls for creating new user-mode environments. With these system calls you will be able to implement a Unix-like `fork()` entirely in user space, in addition to other styles of environment creation. The new system calls you will write for JOS are as follows:

* `sys_exofork`:
  This system call creates a new environment with an almost blank slate: nothing is mapped in the user portion of its address space, and it is not runnable. The new environment will have the same register state as the parent environment at the time of the `sys_exofork` call. In the parent, `sys_exofork` will return the `envid_t` of the newly created environment (or a negative error code if the environment allocation failed). In the child, however, it will return 0. (Since the child starts out marked as not runnable, `sys_exofork` will not actually return in the child until the parent has explicitly allowed this by marking the child runnable using....)
* `sys_env_set_status`:
  Sets the status of a specified environment to `ENV_RUNNABLE` or `ENV_NOT_RUNNABLE`. This system call is typically used to mark a new environment ready to run, once its address space and register state has been fully initialized.
* `sys_page_alloc`:
  Allocates a page of physical memory and maps it at a given virtual address in a given environment's address space.
* `sys_page_map`:
  Copies a page mapping (_not_ the contents of a page!) from one environment's address space to another, leaving a memory-sharing arrangement in place so that the new and the old mappings both refer to the same page of physical memory.
* `sys_page_unmap`:
  Unmaps a page mapped at a given virtual address in a given environment.

For all of the system calls above that accept environment IDs, the JOS kernel supports the convention that a value of 0 means "the current environment." This convention is implemented by `envid2env()` in `kern/env.c`.

We have provided a very primitive implementation of a Unix-like `fork()` in the test program `user/dumbfork.c`. This test program uses the above system calls to create and run a child environment with a copy of its own address space. The two environments then switch back and forth using `sys_yield` as in the previous exercise. The parent exits after 10 iterations, whereas the child exits after 20.

```
Exercise 7. Implement the system calls described above in `kern/syscall.c` and make sure `syscall()` calls them. You will need to use various functions in `kern/pmap.c` and `kern/env.c`, particularly `envid2env()`. For now, whenever you call `envid2env()`, pass 1 in the `checkperm` parameter. Be sure you check for any invalid system call arguments, returning `-E_INVAL` in that case. Test your JOS kernel with `user/dumbfork` and make sure it works before proceeding.
```
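
Of the five calls, `sys_exofork` is the one with a trick in it. A sketch, assuming `env_alloc()` from `kern/env.c`:

```
// Possible sys_exofork() (kern/syscall.c). The child inherits the
// parent's trap frame, but its saved eax is forced to 0 so that when
// the parent later marks it runnable, the "return value" the child
// observes from this call is 0, fork-style.
static envid_t
sys_exofork(void)
{
	struct Env *child;
	int r = env_alloc(&child, curenv->env_id);
	if (r < 0)
		return r;

	child->env_status = ENV_NOT_RUNNABLE;
	child->env_tf = curenv->env_tf;        // same register state as parent
	child->env_tf.tf_regs.reg_eax = 0;     // child sees a 0 return value
	return child->env_id;
}
```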

```
Challenge! Add the additional system calls necessary to _read_ all of the vital state of an existing environment as well as set it up. Then implement a user mode program that forks off a child environment, runs it for a while (e.g., a few iterations of `sys_yield()`), then takes a complete snapshot or _checkpoint_ of the child environment, runs the child for a while longer, and finally restores the child environment to the state it was in at the checkpoint and continues it from there. Thus, you are effectively "replaying" the execution of the child environment from an intermediate state. Make the child environment perform some interaction with the user using `sys_cgetc()` or `readline()` so that the user can view and mutate its internal state, and verify that with your checkpoint/restart you can give the child environment a case of selective amnesia, making it "forget" everything that happened beyond a certain point.
```

This completes Part A of the lab; make sure it passes all of the Part A tests when you run make grade, and hand it in using make handin as usual. If you are trying to figure out why a particular test case is failing, run ./grade-lab4 -v, which will show you the output of the kernel builds and QEMU runs for each test, until a test fails. When a test fails, the script will stop, and then you can inspect `jos.out` to see what the kernel actually printed.

#### Part B: Copy-on-Write Fork

As mentioned earlier, Unix provides the `fork()` system call as its primary process creation primitive. The `fork()` system call copies the address space of the calling process (the parent) to create a new process (the child).

xv6 Unix implements `fork()` by copying all data from the parent's pages into new pages allocated for the child. This is essentially the same approach that `dumbfork()` takes. The copying of the parent's address space into the child is the most expensive part of the `fork()` operation.

However, a call to `fork()` is frequently followed almost immediately by a call to `exec()` in the child process, which replaces the child's memory with a new program. This is what the shell typically does, for example. In this case, the time spent copying the parent's address space is largely wasted, because the child process will use very little of its memory before calling `exec()`.

For this reason, later versions of Unix took advantage of virtual memory hardware to allow the parent and child to _share_ the memory mapped into their respective address spaces until one of the processes actually modifies it. This technique is known as _copy-on-write_. To do this, on `fork()` the kernel would copy the address space _mappings_ from the parent to the child instead of the contents of the mapped pages, and at the same time mark the now-shared pages read-only. When one of the two processes tries to write to one of these shared pages, the process takes a page fault. At this point, the Unix kernel realizes that the page was really a "virtual" or "copy-on-write" copy, and so it makes a new, private, writable copy of the page for the faulting process. In this way, the contents of individual pages aren't actually copied until they are actually written to. This optimization makes a `fork()` followed by an `exec()` in the child much cheaper: the child will probably only need to copy one page (the current page of its stack) before it calls `exec()`.

In the next piece of this lab, you will implement a "proper" Unix-like `fork()` with copy-on-write, as a user space library routine. Implementing `fork()` and copy-on-write support in user space has the benefit that the kernel remains much simpler and thus more likely to be correct. It also lets individual user-mode programs define their own semantics for `fork()`. A program that wants a slightly different implementation (for example, the expensive always-copy version like `dumbfork()`, or one in which the parent and child actually share memory afterward) can easily provide its own.

##### User-level page fault handling

A user-level copy-on-write `fork()` needs to know about page faults on write-protected pages, so that's what you'll implement first. Copy-on-write is only one of many possible uses for user-level page fault handling.

It's common to set up an address space so that page faults indicate when some action needs to take place. For example, most Unix kernels initially map only a single page in a new process's stack region, and allocate and map additional stack pages later "on demand" as the process's stack consumption increases and causes page faults on stack addresses that are not yet mapped. A typical Unix kernel must keep track of what action to take when a page fault occurs in each region of a process's space. For example, a fault in the stack region will typically allocate and map a new page of physical memory. A fault in the program's BSS region will typically allocate a new page, fill it with zeroes, and map it. In systems with demand-paged executables, a fault in the text region will read the corresponding page of the binary off of disk and then map it.

This is a lot of information for the kernel to keep track of. Instead of taking the traditional Unix approach, you will decide what to do about each page fault in user space, where bugs are less damaging. This design has the added benefit of allowing programs great flexibility in defining their memory regions; you'll use user-level page fault handling later for mapping and accessing files on a disk-based file system.

###### Setting the Page Fault Handler

In order to handle its own page faults, a user environment will need to register a _page fault handler entrypoint_ with the JOS kernel. The user environment registers its page fault entrypoint via the new `sys_env_set_pgfault_upcall` system call. We have added a new member to the `Env` structure, `env_pgfault_upcall`, to record this information.

```
Exercise 8. Implement the `sys_env_set_pgfault_upcall` system call. Be sure to enable permission checking when looking up the environment ID of the target environment, since this is a "dangerous" system call.
```
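
This one is short; a sketch, assuming `envid2env()` with `checkperm` set as the exercise requires:

```
// Possible sys_env_set_pgfault_upcall() (kern/syscall.c): record the
// user-mode entrypoint that page faults should be dispatched to.
static int
sys_env_set_pgfault_upcall(envid_t envid, void *func)
{
	struct Env *e;
	if (envid2env(envid, &e, 1) < 0)    // checkperm = 1: "dangerous" call
		return -E_BAD_ENV;
	e->env_pgfault_upcall = func;
	return 0;
}
```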

###### Normal and Exception Stacks in User Environments

During normal execution, a user environment in JOS will run on the _normal_ user stack: its `ESP` register starts out pointing at `USTACKTOP`, and the stack data it pushes resides on the page between `USTACKTOP-PGSIZE` and `USTACKTOP-1` inclusive. When a page fault occurs in user mode, however, the kernel will restart the user environment running a designated user-level page fault handler on a different stack, namely the _user exception_ stack. In essence, we will make the JOS kernel implement automatic "stack switching" on behalf of the user environment, in much the same way that the x86 _processor_ already implements stack switching on behalf of JOS when transferring from user mode to kernel mode!

The JOS user exception stack is also one page in size, and its top is defined to be at virtual address `UXSTACKTOP`, so the valid bytes of the user exception stack are from `UXSTACKTOP-PGSIZE` through `UXSTACKTOP-1` inclusive. While running on this exception stack, the user-level page fault handler can use JOS's regular system calls to map new pages or adjust mappings so as to fix whatever problem originally caused the page fault. Then the user-level page fault handler returns, via an assembly language stub, to the faulting code on the original stack.

Each user environment that wants to support user-level page fault handling will need to allocate memory for its own exception stack, using the `sys_page_alloc()` system call introduced in part A.

###### Invoking the User Page Fault Handler

You will now need to change the page fault handling code in `kern/trap.c` to handle page faults from user mode as follows. We will call the state of the user environment at the time of the fault the _trap-time_ state.

If there is no page fault handler registered, the JOS kernel destroys the user environment with a message as before. Otherwise, the kernel sets up a trap frame on the exception stack that looks like a `struct UTrapframe` from `inc/trap.h`:

```
                    <-- UXSTACKTOP
trap-time esp
trap-time eflags
trap-time eip
trap-time eax       start of struct PushRegs
trap-time ecx
trap-time edx
trap-time ebx
trap-time esp
trap-time ebp
trap-time esi
trap-time edi       end of struct PushRegs
tf_err (error code)
fault_va            <-- %esp when handler is run
```

The kernel then arranges for the user environment to resume execution with the page fault handler running on the exception stack with this stack frame; you must figure out how to make this happen. The `fault_va` is the virtual address that caused the page fault.

If the user environment is _already_ running on the user exception stack when an exception occurs, then the page fault handler itself has faulted. In this case, you should start the new stack frame just under the current `tf->tf_esp` rather than at `UXSTACKTOP`. You should first push an empty 32-bit word, then a `struct UTrapframe`.

To test whether `tf->tf_esp` is already on the user exception stack, check whether it is in the range between `UXSTACKTOP-PGSIZE` and `UXSTACKTOP-1`, inclusive.

```
Exercise 9. Implement the code in `page_fault_handler` in `kern/trap.c` required to dispatch page faults to the user-mode handler. Be sure to take appropriate precautions when writing into the exception stack. (What happens if the user environment runs out of space on the exception stack?)
```
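
A sketch of the dispatch path, assuming the `fault_va`, `tf`, and `curenv` names already used in `kern/trap.c`; the recursion check and the empty-word gap follow the description above, and `user_mem_assert()` destroys the environment if the exception stack is missing or overflowed:

```
// Inside page_fault_handler(), after the kernel-mode check:
if (curenv->env_pgfault_upcall) {
	uintptr_t stacktop = UXSTACKTOP;

	// Recursive fault: already on the exception stack, so start the
	// new frame just below tf_esp, leaving one empty 32-bit word.
	if (tf->tf_esp >= UXSTACKTOP - PGSIZE && tf->tf_esp < UXSTACKTOP)
		stacktop = tf->tf_esp - 4;

	struct UTrapframe *utf =
		(struct UTrapframe *)(stacktop - sizeof(struct UTrapframe));
	user_mem_assert(curenv, utf, sizeof(*utf), PTE_U | PTE_W);

	utf->utf_fault_va = fault_va;
	utf->utf_err      = tf->tf_err;
	utf->utf_regs     = tf->tf_regs;
	utf->utf_eip      = tf->tf_eip;
	utf->utf_eflags   = tf->tf_eflags;
	utf->utf_esp      = tf->tf_esp;

	// Resume the environment in its handler, on the exception stack.
	tf->tf_esp = (uintptr_t)utf;
	tf->tf_eip = (uintptr_t)curenv->env_pgfault_upcall;
	env_run(curenv);
}
// Otherwise fall through to the existing destroy-with-message path.
```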

###### User-mode Page Fault Entrypoint

Next, you need to implement the assembly routine that will take care of calling the C page fault handler and resuming execution at the original faulting instruction. This assembly routine is the handler that will be registered with the kernel using `sys_env_set_pgfault_upcall()`.

```
Exercise 10. Implement the `_pgfault_upcall` routine in `lib/pfentry.S`. The interesting part is returning to the original point in the user code that caused the page fault. You'll return directly there, without going back through the kernel. The hard part is simultaneously switching stacks and re-loading the EIP.
```

Finally, you need to implement the C user library side of the user-level page fault handling mechanism.

```
Exercise 11. Finish `set_pgfault_handler()` in `lib/pgfault.c`.
```
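
A sketch of the finished function, assuming the `_pgfault_handler` global and `_pgfault_upcall` assembly entrypoint from the lab's `lib` skeleton:

```
// Possible set_pgfault_handler() (lib/pgfault.c).
void
set_pgfault_handler(void (*handler)(struct UTrapframe *utf))
{
	int r;

	if (_pgfault_handler == 0) {
		// First registration: allocate the exception stack and tell
		// the kernel to dispatch faults to the assembly entrypoint.
		r = sys_page_alloc(0, (void *)(UXSTACKTOP - PGSIZE),
				   PTE_P | PTE_U | PTE_W);
		if (r < 0)
			panic("set_pgfault_handler: sys_page_alloc: %e", r);
		r = sys_env_set_pgfault_upcall(0, _pgfault_upcall);
		if (r < 0)
			panic("set_pgfault_handler: sys_env_set_pgfault_upcall: %e", r);
	}

	// The assembly entrypoint calls through this pointer.
	_pgfault_handler = handler;
}
```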

###### Testing

Run `user/faultread` (make run-faultread). You should see:

```
...
[00000000] new env 00001000
[00001000] user fault va 00000000 ip 0080003a
TRAP frame ...
[00001000] free env 00001000
```

Run `user/faultdie`. You should see:

```
...
[00000000] new env 00001000
i faulted at va deadbeef, err 6
[00001000] exiting gracefully
[00001000] free env 00001000
```

Run `user/faultalloc`. You should see:

```
...
[00000000] new env 00001000
fault deadbeef
this string was faulted in at deadbeef
fault cafebffe
fault cafec000
this string was faulted in at cafebffe
[00001000] exiting gracefully
[00001000] free env 00001000
```

If you see only the first "this string" line, it means you are not handling recursive page faults properly.

Run `user/faultallocbad`. You should see:

```
...
[00000000] new env 00001000
[00001000] user_mem_check assertion failure for va deadbeef
[00001000] free env 00001000
```

Make sure you understand why `user/faultalloc` and `user/faultallocbad` behave differently.

```
Challenge! Extend your kernel so that not only page faults, but _all_ types of processor exceptions that code running in user space can generate, can be redirected to a user-mode exception handler. Write user-mode test programs to test user-mode handling of various exceptions such as divide-by-zero, general protection fault, and illegal opcode.
```

##### Implementing Copy-on-Write Fork

You now have the kernel facilities to implement copy-on-write `fork()` entirely in user space.

We have provided a skeleton for your `fork()` in `lib/fork.c`. Like `dumbfork()`, `fork()` should create a new environment, then scan through the parent environment's entire address space and set up corresponding page mappings in the child. The key difference is that, while `dumbfork()` copied _pages_, `fork()` will initially only copy page _mappings_. `fork()` will copy each page only when one of the environments tries to write it.

The basic control flow for `fork()` is as follows:

1. The parent installs `pgfault()` as the C-level page fault handler, using the `set_pgfault_handler()` function you implemented above.

2. The parent calls `sys_exofork()` to create a child environment.

3. For each writable or copy-on-write page in its address space below UTOP, the parent calls `duppage`, which should map the page copy-on-write into the address space of the child and then _remap_ the page copy-on-write in its own address space. [ Note: The ordering here (i.e., marking a page as COW in the child before marking it in the parent) actually matters! Can you see why? Try to think of a specific case where reversing the order could cause trouble. ] `duppage` sets both PTEs so that the page is not writeable, and to contain `PTE_COW` in the "avail" field to distinguish copy-on-write pages from genuine read-only pages.

   The exception stack is _not_ remapped this way, however. Instead you need to allocate a fresh page in the child for the exception stack. Since the page fault handler will be doing the actual copying and the page fault handler runs on the exception stack, the exception stack cannot be made copy-on-write: who would copy it?

   `fork()` also needs to handle pages that are present, but not writable or copy-on-write.

4. The parent sets the user page fault entrypoint for the child to look like its own.

5. The child is now ready to run, so the parent marks it runnable.

Each time one of the environments writes a copy-on-write page that it hasn't yet written, it will take a page fault. Here's the control flow for the user page fault handler:

1. The kernel propagates the page fault to `_pgfault_upcall`, which calls `fork()`'s `pgfault()` handler.
2. `pgfault()` checks that the fault is a write (check for `FEC_WR` in the error code) and that the PTE for the page is marked `PTE_COW`. If not, panic.
3. `pgfault()` allocates a new page mapped at a temporary location and copies the contents of the faulting page into it. Then the fault handler maps the new page at the appropriate address with read/write permissions, in place of the old read-only mapping.

The user-level `lib/fork.c` code must consult the environment's page tables for several of the operations above (e.g., to check that the PTE for a page is marked `PTE_COW`). The kernel maps the environment's page tables at `UVPT` exactly for this purpose. It uses a [clever mapping trick][1] to make it easy to look up PTEs for user code. `lib/entry.S` sets up `uvpt` and `uvpd` so that you can easily look up page-table information in `lib/fork.c`.

```
Exercise 12. Implement `fork`, `duppage` and `pgfault` in `lib/fork.c`.

Test your code with the `forktree` program. It should produce the following messages, with interspersed 'new env', 'free env', and 'exiting gracefully' messages. The messages may not appear in this order, and the environment IDs may be different.

1000: I am ''
1001: I am '0'
2000: I am '00'
2001: I am '000'
1002: I am '1'
3000: I am '11'
3001: I am '10'
4000: I am '100'
1003: I am '01'
5000: I am '010'
4001: I am '011'
2002: I am '110'
1004: I am '001'
1005: I am '111'
1006: I am '101'
```
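
As a rough guide, here is how `duppage` and `pgfault` might look, assuming the `uvpt` array, `PTE_COW`, and the `PFTEMP` scratch address from `inc/memlayout.h`; error checking on the system calls is abbreviated to keep the sketch short:

```
// Possible duppage() (lib/fork.c): map the page COW in the child
// *first*, then remap it COW in the parent (the ordering discussed
// in step 3 above).
static int
duppage(envid_t envid, unsigned pn)
{
	void *addr = (void *)(pn * PGSIZE);

	if (uvpt[pn] & (PTE_W | PTE_COW)) {
		sys_page_map(0, addr, envid, addr, PTE_P | PTE_U | PTE_COW);
		sys_page_map(0, addr, 0,     addr, PTE_P | PTE_U | PTE_COW);
	} else {
		// Genuinely read-only pages are simply shared read-only.
		sys_page_map(0, addr, envid, addr, PTE_P | PTE_U);
	}
	return 0;
}

// Possible pgfault() handler: copy the faulting COW page.
static void
pgfault(struct UTrapframe *utf)
{
	void *addr = (void *)ROUNDDOWN(utf->utf_fault_va, PGSIZE);

	if (!(utf->utf_err & FEC_WR) ||
	    !(uvpt[PGNUM(utf->utf_fault_va)] & PTE_COW))
		panic("pgfault: not a write to a copy-on-write page");

	// Allocate a fresh page at PFTEMP, copy, and swap it into place.
	sys_page_alloc(0, (void *)PFTEMP, PTE_P | PTE_U | PTE_W);
	memmove((void *)PFTEMP, addr, PGSIZE);
	sys_page_map(0, (void *)PFTEMP, 0, addr, PTE_P | PTE_U | PTE_W);
	sys_page_unmap(0, (void *)PFTEMP);
}
```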

```
Challenge! Implement a shared-memory `fork()` called `sfork()`. This version should have the parent and child _share_ all their memory pages (so writes in one environment appear in the other) except for pages in the stack area, which should be treated in the usual copy-on-write manner. Modify `user/forktree.c` to use `sfork()` instead of regular `fork()`. Also, once you have finished implementing IPC in part C, use your `sfork()` to run `user/pingpongs`. You will have to find a new way to provide the functionality of the global `thisenv` pointer.
```

```
Challenge! Your implementation of `fork` makes a huge number of system calls. On the x86, switching into the kernel using interrupts has non-trivial cost. Augment the system call interface so that it is possible to send a batch of system calls at once. Then change `fork` to use this interface.

How much faster is your new `fork`?

You can answer this (roughly) by using analytical arguments to estimate how much of an improvement batching system calls will make to the performance of your `fork`: How expensive is an `int 0x30` instruction? How many times do you execute `int 0x30` in your `fork`? Is accessing the `TSS` stack switch also expensive? And so on...

Alternatively, you can boot your kernel on real hardware and _really_ benchmark your code. See the `RDTSC` (read time-stamp counter) instruction, defined in the IA32 manual, which counts the number of clock cycles that have elapsed since the last processor reset. QEMU doesn't emulate this instruction faithfully (it can either count the number of virtual instructions executed or use the host TSC, neither of which reflects the number of cycles a real CPU would require).
```

This ends part B. Make sure you pass all of the Part B tests when you run make grade. As usual, you can hand in your submission with make handin.

#### Part C: Preemptive Multitasking and Inter-Process Communication (IPC)

In the final part of lab 4 you will modify the kernel to preempt uncooperative environments and to allow environments to pass messages to each other explicitly.

##### Clock Interrupts and Preemption

Run the `user/spin` test program. This test program forks off a child environment, which simply spins forever in a tight loop once it receives control of the CPU. Neither the parent environment nor the kernel ever regains the CPU. This is obviously not an ideal situation in terms of protecting the system from bugs or malicious code in user-mode environments, because any user-mode environment can bring the whole system to a halt simply by getting into an infinite loop and never giving back the CPU. In order to allow the kernel to _preempt_ a running environment, forcefully retaking control of the CPU from it, we must extend the JOS kernel to support external hardware interrupts from the clock hardware.

###### Interrupt discipline

External interrupts (i.e., device interrupts) are referred to as IRQs. There are 16 possible IRQs, numbered 0 through 15. The mapping from IRQ number to IDT entry is not fixed. `pic_init` in `picirq.c` maps IRQs 0-15 to IDT entries `IRQ_OFFSET` through `IRQ_OFFSET+15`.

In `inc/trap.h`, `IRQ_OFFSET` is defined to be decimal 32. Thus the IDT entries 32-47 correspond to the IRQs 0-15. For example, the clock interrupt is IRQ 0. Thus, IDT[IRQ_OFFSET+0] (i.e., IDT[32]) contains the address of the clock's interrupt handler routine in the kernel. This `IRQ_OFFSET` is chosen so that the device interrupts do not overlap with the processor exceptions, which could obviously cause confusion. (In fact, in the early days of PCs running MS-DOS, the `IRQ_OFFSET` effectively _was_ zero, which indeed caused massive confusion between handling hardware interrupts and handling processor exceptions!)

In JOS, we make a key simplification compared to xv6 Unix. External device interrupts are _always_ disabled when in the kernel (and, like xv6, enabled when in user space). External interrupts are controlled by the `FL_IF` flag bit of the `%eflags` register (see `inc/mmu.h`). When this bit is set, external interrupts are enabled. While the bit can be modified in several ways, because of our simplification, we will handle it solely through the process of saving and restoring the `%eflags` register as we enter and leave user mode.

You will have to ensure that the `FL_IF` flag is set in user environments when they run so that when an interrupt arrives, it gets passed through to the processor and handled by your interrupt code. Otherwise, interrupts are _masked_, or ignored until interrupts are re-enabled. We masked interrupts with the very first instruction of the bootloader, and so far we have never gotten around to re-enabling them.

```
Exercise 13. Modify `kern/trapentry.S` and `kern/trap.c` to initialize the appropriate entries in the IDT and provide handlers for IRQs 0 through 15. Then modify the code in `env_alloc()` in `kern/env.c` to ensure that user environments are always run with interrupts enabled.

Also uncomment the `sti` instruction in `sched_halt()` so that idle CPUs unmask interrupts.

The processor never pushes an error code when invoking a hardware interrupt handler. You might want to re-read section 9.2 of the [80386 Reference Manual][2], or section 5.8 of the [IA-32 Intel Architecture Software Developer's Manual, Volume 3][3], at this time.

After doing this exercise, if you run your kernel with any test program that runs for a non-trivial length of time (e.g., `spin`), you should see the kernel print trap frames for hardware interrupts. While interrupts are now enabled in the processor, JOS isn't yet handling them, so you should see it misattribute each interrupt to the currently running user environment and destroy it. Eventually it should run out of environments to destroy and drop into the monitor.
```

###### Handling Clock Interrupts

In the `user/spin` program, after the child environment was first run, it just spun in a loop, and the kernel never got control back. We need to program the hardware to generate clock interrupts periodically, which will force control back to the kernel where we can switch control to a different user environment.

The calls to `lapic_init` and `pic_init` (from `i386_init` in `init.c`), which we have written for you, set up the clock and the interrupt controller to generate interrupts. You now need to write the code to handle these interrupts.

```
Exercise 14. Modify the kernel's `trap_dispatch()` function so that it calls `sched_yield()` to find and run a different environment whenever a clock interrupt takes place.

You should now be able to get the `user/spin` test to work: the parent environment should fork off the child, `sys_yield()` to it a couple times but in each case regain control of the CPU after one time slice, and finally kill the child environment and terminate gracefully.
```
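
The dispatch itself is only a couple of lines; a fragment, assuming the existing `switch` on `tf->tf_trapno` in `trap_dispatch()` and the `lapic_eoi()` helper in `kern/lapic.c`:

```
// Clock interrupts arrive as IRQ 0. Acknowledge the LAPIC before
// rescheduling, or no further timer interrupts will be delivered.
case IRQ_OFFSET + IRQ_TIMER:
	lapic_eoi();
	sched_yield();      // picks another environment; does not return
```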

This is a great time to do some _regression testing_. Make sure that you haven't broken any earlier part of the lab that used to work (e.g., `forktree`) by enabling interrupts. Also, try running with multiple CPUs using make CPUS=2 _target_. You should also be able to pass `stresssched` now. Run make grade to see for sure. You should now get a total score of 65/80 points on this lab.

##### Inter-Process Communication (IPC)

(Technically in JOS this is "inter-environment communication" or "IEC", but everyone else calls it IPC, so we'll use the standard term.)

We've been focusing on the isolation aspects of the operating system, the ways it provides the illusion that each program has a machine all to itself. Another important service of an operating system is to allow programs to communicate with each other when they want to. It can be quite powerful to let programs interact with other programs. The Unix pipe model is the canonical example.

There are many models for interprocess communication. Even today there are still debates about which models are best. We won't get into that debate. Instead, we'll implement a simple IPC mechanism and then try it out.

###### IPC in JOS

You will implement a few additional JOS kernel system calls that collectively provide a simple interprocess communication mechanism. You will implement two system calls, `sys_ipc_recv` and `sys_ipc_try_send`. Then you will implement two library wrappers, `ipc_recv` and `ipc_send`.

The "messages" that user environments can send to each other using JOS's IPC mechanism consist of two components: a single 32-bit value, and optionally a single page mapping. Allowing environments to pass page mappings in messages provides an efficient way to transfer more data than will fit into a single 32-bit integer, and also allows environments to set up shared memory arrangements easily.

###### Sending and Receiving Messages

To receive a message, an environment calls `sys_ipc_recv`. This system call de-schedules the current environment and does not run it again until a message has been received. When an environment is waiting to receive a message, _any_ other environment can send it a message - not just a particular environment, and not just environments that have a parent/child arrangement with the receiving environment. In other words, the permission checking that you implemented in Part A will not apply to IPC, because the IPC system calls are carefully designed so as to be "safe": an environment cannot cause another environment to malfunction simply by sending it messages (unless the target environment is also buggy).

To try to send a value, an environment calls `sys_ipc_try_send` with both the receiver's environment id and the value to be sent. If the named environment is actually receiving (it has called `sys_ipc_recv` and not gotten a value yet), then the send delivers the message and returns 0. Otherwise the send returns `-E_IPC_NOT_RECV` to indicate that the target environment is not currently expecting to receive a value.

A library function `ipc_recv` in user space will take care of calling `sys_ipc_recv` and then looking up the information about the received values in the current environment's `struct Env`.

Similarly, a library function `ipc_send` will take care of repeatedly calling `sys_ipc_try_send` until the send succeeds.

###### Transferring Pages

When an environment calls `sys_ipc_recv` with a valid `dstva` parameter (below `UTOP`), the environment is stating that it is willing to receive a page mapping. If the sender sends a page, then that page should be mapped at `dstva` in the receiver's address space. If the receiver already had a page mapped at `dstva`, then that previous page is unmapped.

When an environment calls `sys_ipc_try_send` with a valid `srcva` (below `UTOP`), it means the sender wants to send the page currently mapped at `srcva` to the receiver, with permissions `perm`. After a successful IPC, the sender keeps its original mapping for the page at `srcva` in its address space, but the receiver also obtains a mapping for this same physical page at the `dstva` originally specified by the receiver, in the receiver's address space. As a result this page becomes shared between the sender and receiver.

If either the sender or the receiver does not indicate that a page should be transferred, then no page is transferred. After any IPC the kernel sets the new field `env_ipc_perm` in the receiver's `Env` structure to the permissions of the page received, or zero if no page was received.

###### Implementing IPC

```
Exercise 15. Implement `sys_ipc_recv` and `sys_ipc_try_send` in `kern/syscall.c`. Read the comments on both before implementing them, since they have to work together. When you call `envid2env` in these routines, you should set the `checkperm` flag to 0, meaning that any environment is allowed to send IPC messages to any other environment, and the kernel does no special permission checking other than verifying that the target envid is valid.

Then implement the `ipc_recv` and `ipc_send` functions in `lib/ipc.c`.

Use the `user/pingpong` and `user/primes` programs to test your IPC mechanism. `user/primes` will generate a new environment for each prime number until JOS runs out of environments. You might find it interesting to read `user/primes.c` to see all the forking and IPC going on behind the scenes.
```
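
The library side can stay small; a sketch, assuming `thisenv` and the `env_ipc_*` fields named above, with `UTOP` used as the conventional "no page" marker (any address at or above `UTOP` cannot be a valid transfer address, while 0 can):

```
// Possible ipc_recv()/ipc_send() (lib/ipc.c).
int32_t
ipc_recv(envid_t *from_env_store, void *pg, int *perm_store)
{
	if (pg == NULL)
		pg = (void *)UTOP;               // "I don't want a page"

	int r = sys_ipc_recv(pg);
	if (r < 0) {
		if (from_env_store) *from_env_store = 0;
		if (perm_store)     *perm_store = 0;
		return r;
	}
	if (from_env_store) *from_env_store = thisenv->env_ipc_from;
	if (perm_store)     *perm_store = thisenv->env_ipc_perm;
	return thisenv->env_ipc_value;
}

void
ipc_send(envid_t to_env, uint32_t val, void *pg, int perm)
{
	int r;

	if (pg == NULL)
		pg = (void *)UTOP;
	// Spin until the receiver is ready, yielding the CPU between
	// attempts so we don't starve the receiver (or anyone else).
	while ((r = sys_ipc_try_send(to_env, val, pg, perm)) == -E_IPC_NOT_RECV)
		sys_yield();
	if (r < 0)
		panic("ipc_send: %e", r);
}
```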

```
Challenge! Why does `ipc_send` have to loop? Change the system call interface so it doesn't have to. Make sure you can handle multiple environments trying to send to one environment at the same time.
```

```
Challenge! The prime sieve is only one neat use of message passing between a large number of concurrent programs. Read C. A. R. Hoare, "Communicating Sequential Processes," _Communications of the ACM_ 21(8) (August 1978), 666-667, and implement the matrix multiplication example.
```

```
Challenge! One of the most impressive examples of the power of message passing is Doug McIlroy's power series calculator, described in [M. Douglas McIlroy, "Squinting at Power Series," _Software--Practice and Experience_, 20(7) (July 1990), 661-683][4]. Implement his power series calculator and compute the power series for _sin(x + x^3)_.
```

```
Challenge! Make JOS's IPC mechanism more efficient by applying some of the techniques from Liedtke's paper, [Improving IPC by Kernel Design][5], or any other tricks you may think of. Feel free to modify the kernel's system call API for this purpose, as long as your code is backwards compatible with what our grading scripts expect.
```

**This ends part C.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab4.txt`.

Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab4.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 4', then make handin and follow the directions.

--------------------------------------------------------------------------------

via: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/

作者:[csail.mit][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab4/uvpt.html
[2]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/i386/toc.htm
[3]: https://pdos.csail.mit.edu/6.828/2018/labs/readings/ia32/IA32-3A.pdf
[4]: https://swtch.com/~rsc/thread/squint.pdf
[5]: http://dl.acm.org/citation.cfm?id=168633
346
sources/tech/20181016 Lab 5- File system, Spawn and Shell.md
Normal file
346
sources/tech/20181016 Lab 5- File system, Spawn and Shell.md
Normal file
@ -0,0 +1,346 @@

Translating by qhwdw

Lab 5: File system, Spawn and Shell
======

**Due Thursday, November 15, 2018**

### Introduction

In this lab, you will implement `spawn`, a library call that loads and runs on-disk executables. You will then flesh out your kernel and library operating system enough to run a shell on the console. These features need a file system, and this lab introduces a simple read/write file system.

#### Getting Started

Use Git to fetch the latest version of the course repository, and then create a local branch called `lab5` based on our lab5 branch, `origin/lab5`:

```
athena% cd ~/6.828/lab
athena% add git
athena% git pull
Already up-to-date.
athena% git checkout -b lab5 origin/lab5
Branch lab5 set up to track remote branch refs/remotes/origin/lab5.
Switched to a new branch "lab5"
athena% git merge lab4
Merge made by recursive.
.....
athena%
```

The main new component for this part of the lab is the file system environment, located in the new `fs` directory. Scan through all the files in this directory to get a feel for what all is new. Also, there are some new file system-related source files in the `user` and `lib` directories:

| File | Description |
| ------------- | --------------------------------------------------------------------------------------- |
| fs/fs.c | Code that manipulates the file system's on-disk structure. |
| fs/bc.c | A simple block cache built on top of our user-level page fault handling facility. |
| fs/ide.c | Minimal PIO-based (non-interrupt-driven) IDE driver code. |
| fs/serv.c | The file system server that interacts with client environments using file system IPCs. |
| lib/fd.c | Code that implements the general UNIX-like file descriptor interface. |
| lib/file.c | The driver for on-disk file type, implemented as a file system IPC client. |
| lib/console.c | The driver for console input/output file type. |
| lib/spawn.c | Code skeleton of the spawn library call. |

You should run the pingpong, primes, and forktree test cases from lab 4 again after merging in the new lab 5 code. You will need to comment out the `ENV_CREATE(fs_fs)` line in `kern/init.c` because `fs/fs.c` tries to do some I/O, which JOS does not allow yet. Similarly, temporarily comment out the call to `close_all()` in `lib/exit.c`; this function calls subroutines that you will implement later in the lab, and therefore will panic if called. If your lab 4 code doesn't contain any bugs, the test cases should run fine. Don't proceed until they work. Don't forget to un-comment these lines when you start Exercise 1.

If they don't work, use git diff lab4 to review all the changes, making sure there isn't any code you wrote for lab 4 (or before) missing from lab 5. Make sure that lab 4 still works.

#### Lab Requirements

As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. Additionally, you will need to write up brief answers to the questions posed in the lab and a short (e.g., one or two paragraph) description of what you did to solve your chosen challenge problem. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab5.txt` in the top level of your `lab5` directory before handing in your work.

### File system preliminaries

The file system you will work with is much simpler than most "real" file systems, including that of xv6 UNIX, but it is powerful enough to provide the basic features: creating, reading, writing, and deleting files organized in a hierarchical directory structure.

We are (for the moment anyway) developing only a single-user operating system, which provides protection sufficient to catch bugs but not to protect multiple mutually suspicious users from each other. Our file system therefore does not support the UNIX notions of file ownership or permissions. Our file system also currently does not support hard links, symbolic links, time stamps, or special device files like most UNIX file systems do.

### On-Disk File System Structure

Most UNIX file systems divide available disk space into two main types of regions: _inode_ regions and _data_ regions. UNIX file systems assign one _inode_ to each file in the file system; a file's inode holds critical meta-data about the file such as its `stat` attributes and pointers to its data blocks. The data regions are divided into much larger (typically 8KB or more) _data blocks_, within which the file system stores file data and directory meta-data. Directory entries contain file names and pointers to inodes; a file is said to be _hard-linked_ if multiple directory entries in the file system refer to that file's inode. Since our file system will not support hard links, we do not need this level of indirection and therefore can make a convenient simplification: our file system will not use inodes at all and instead will simply store all of a file's (or sub-directory's) meta-data within the (one and only) directory entry describing that file.

Both files and directories logically consist of a series of data blocks, which may be scattered throughout the disk much like the pages of an environment's virtual address space can be scattered throughout physical memory. The file system environment hides the details of block layout, presenting interfaces for reading and writing sequences of bytes at arbitrary offsets within files. The file system environment handles all modifications to directories internally as a part of performing actions such as file creation and deletion. Our file system does allow user environments to _read_ directory meta-data directly (e.g., with `read`), which means that user environments can perform directory scanning operations themselves (e.g., to implement the `ls` program) rather than having to rely on additional special calls to the file system. The disadvantage of this approach to directory scanning, and the reason most modern UNIX variants discourage it, is that it makes application programs dependent on the format of directory meta-data, making it difficult to change the file system's internal layout without changing or at least recompiling application programs as well.

#### Sectors and Blocks

Most disks cannot perform reads and writes at byte granularity and instead perform reads and writes in units of _sectors_. In JOS, sectors are 512 bytes each. File systems actually allocate and use disk storage in units of _blocks_. Be wary of the distinction between the two terms: _sector size_ is a property of the disk hardware, whereas _block size_ is an aspect of the operating system using the disk. A file system's block size must be a multiple of the sector size of the underlying disk.

The UNIX xv6 file system uses a block size of 512 bytes, the same as the sector size of the underlying disk. Most modern file systems use a larger block size, however, because storage space has gotten much cheaper and it is more efficient to manage storage at larger granularities. Our file system will use a block size of 4096 bytes, conveniently matching the processor's page size.

#### Superblocks

![Disk layout][1]

File systems typically reserve certain disk blocks at "easy-to-find" locations on the disk (such as the very start or the very end) to hold meta-data describing properties of the file system as a whole, such as the block size, disk size, any meta-data required to find the root directory, the time the file system was last mounted, the time the file system was last checked for errors, and so on. These special blocks are called _superblocks_.

Our file system will have exactly one superblock, which will always be at block 1 on the disk. Its layout is defined by `struct Super` in `inc/fs.h`. Block 0 is typically reserved to hold boot loaders and partition tables, so file systems generally do not use the very first disk block. Many "real" file systems maintain multiple superblocks, replicated throughout several widely-spaced regions of the disk, so that if one of them is corrupted or the disk develops a media error in that region, the other superblocks can still be found and used to access the file system.

#### File Meta-data

![File structure][2]

The layout of the meta-data describing a file in our file system is described by `struct File` in `inc/fs.h`. This meta-data includes the file's name, size, type (regular file or directory), and pointers to the blocks comprising the file. As mentioned above, we do not have inodes, so this meta-data is stored in a directory entry on disk. Unlike in most "real" file systems, for simplicity we will use this one `File` structure to represent file meta-data as it appears _both on disk and in memory_.

The `f_direct` array in `struct File` contains space to store the block numbers of the first 10 (`NDIRECT`) blocks of the file, which we call the file's _direct_ blocks. For small files up to 10*4096 = 40KB in size, this means that the block numbers of all of the file's blocks will fit directly within the `File` structure itself. For larger files, however, we need a place to hold the rest of the file's block numbers. For any file greater than 40KB in size, therefore, we allocate an additional disk block, called the file's _indirect block_, to hold up to 4096/4 = 1024 additional block numbers. Our file system therefore allows files to be up to 1034 blocks, or just over four megabytes, in size. To support larger files, "real" file systems typically support _double-_ and _triple-indirect blocks_ as well.
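
For orientation, the structure has roughly the following shape; this sketch is modeled on `struct File` in `inc/fs.h`, which is the authoritative definition:

```
// Modeled on inc/fs.h; see the real header for the exact constants.
struct File {
    char f_name[MAXNAMELEN];     // filename
    off_t f_size;                // file size in bytes
    uint32_t f_type;             // FTYPE_REG (regular) or FTYPE_DIR (directory)
    uint32_t f_direct[NDIRECT];  // block numbers of the first 10 blocks
    uint32_t f_indirect;         // block number of the indirect block, or 0

    uint8_t f_pad[256 - MAXNAMELEN - 8 - 4*NDIRECT - 4];  // pad to 256 bytes
} __attribute__((packed));
```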

#### Directories versus Regular Files

A `File` structure in our file system can represent either a _regular_ file or a directory; these two types of "files" are distinguished by the `type` field in the `File` structure. The file system manages regular files and directory-files in exactly the same way, except that it does not interpret the contents of the data blocks associated with regular files at all, whereas the file system interprets the contents of a directory-file as a series of `File` structures describing the files and subdirectories within the directory.

The superblock in our file system contains a `File` structure (the `root` field in `struct Super`) that holds the meta-data for the file system's root directory. The contents of this directory-file are a sequence of `File` structures describing the files and directories located within the root directory of the file system. Any subdirectories in the root directory may in turn contain more `File` structures representing sub-subdirectories, and so on.

### The File System

The goal for this lab is not to have you implement the entire file system, but for you to implement only certain key components. In particular, you will be responsible for reading blocks into the block cache and flushing them back to disk; allocating disk blocks; mapping file offsets to disk blocks; and implementing read, write, and open in the IPC interface. Because you will not be implementing all of the file system yourself, it is very important that you familiarize yourself with the provided code and the various file system interfaces.

### Disk Access

The file system environment in our operating system needs to be able to access the disk, but we have not yet implemented any disk access functionality in our kernel. Instead of taking the conventional "monolithic" operating system strategy of adding an IDE disk driver to the kernel along with the necessary system calls to allow the file system to access it, we instead implement the IDE disk driver as part of the user-level file system environment. We will still need to modify the kernel slightly, in order to set things up so that the file system environment has the privileges it needs to implement disk access itself.

It is easy to implement disk access in user space this way as long as we rely on polling, "programmed I/O" (PIO)-based disk access and do not use disk interrupts. It is possible to implement interrupt-driven device drivers in user mode as well (the L3 and L4 kernels do this, for example), but it is more difficult since the kernel must field device interrupts and dispatch them to the correct user-mode environment.

The x86 processor uses the IOPL bits in the EFLAGS register to determine whether protected-mode code is allowed to perform special device I/O instructions such as the IN and OUT instructions. Since all of the IDE disk registers we need to access are located in the x86's I/O space rather than being memory-mapped, giving "I/O privilege" to the file system environment is the only thing we need to do in order to allow the file system to access these registers. In effect, the IOPL bits in the EFLAGS register provide the kernel with a simple "all-or-nothing" method of controlling whether user-mode code can access I/O space. In our case, we want the file system environment to be able to access I/O space, but we do not want any other environments to be able to access I/O space at all.

```
Exercise 1. `i386_init` identifies the file system environment by passing the type `ENV_TYPE_FS` to your environment creation function, `env_create`. Modify `env_create` in `env.c`, so that it gives the file system environment I/O privilege, but never gives that privilege to any other environment.

Make sure you can start the file environment without causing a General Protection fault. You should pass the "fs i/o" test in make grade.
```
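
A minimal sketch of the required change, assuming your `env_create` from lab 3 has the usual allocate-and-load structure; `FL_IOPL_3` comes from `inc/mmu.h` and sets both IOPL bits in the saved EFLAGS:

```
// kern/env.c -- sketch; only the file system environment gets I/O privilege.
void
env_create(uint8_t *binary, enum EnvType type)
{
    struct Env *e;

    if (env_alloc(&e, 0) < 0)
        panic("env_create: env_alloc failed");
    e->env_type = type;
    load_icode(e, binary);

    if (type == ENV_TYPE_FS)
        e->env_tf.tf_eflags |= FL_IOPL_3;  // allow IN/OUT at CPL 3
}
```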

```
Question

1. Do you have to do anything else to ensure that this I/O privilege setting is saved and restored properly when you subsequently switch from one environment to another? Why?
```

Note that the `GNUmakefile` file in this lab sets up QEMU to use the file `obj/kern/kernel.img` as the image for disk 0 (typically "Drive C" under DOS/Windows) as before, and to use the (new) file `obj/fs/fs.img` as the image for disk 1 ("Drive D"). In this lab our file system should only ever touch disk 1; disk 0 is used only to boot the kernel. If you manage to corrupt either disk image in some way, you can reset both of them to their original, "pristine" versions simply by typing:

```
$ rm obj/kern/kernel.img obj/fs/fs.img
$ make
```

or by doing:

```
$ make clean
$ make
```

```
Challenge! Implement interrupt-driven IDE disk access, with or without DMA. You can decide whether to move the device driver into the kernel, keep it in user space along with the file system, or even (if you really want to get into the micro-kernel spirit) move it into a separate environment of its own.
```

### The Block Cache

In our file system, we will implement a simple "buffer cache" (really just a block cache) with the help of the processor's virtual memory system. The code for the block cache is in `fs/bc.c`.

Our file system will be limited to handling disks of size 3GB or less. We reserve a large, fixed 3GB region of the file system environment's address space, from 0x10000000 (`DISKMAP`) up to 0xD0000000 (`DISKMAP+DISKMAX`), as a "memory mapped" version of the disk. For example, disk block 0 is mapped at virtual address 0x10000000, disk block 1 is mapped at virtual address 0x10001000, and so on. The `diskaddr` function in `fs/bc.c` implements this translation from disk block numbers to virtual addresses (along with some sanity checking).

Since our file system environment has its own virtual address space independent of the virtual address spaces of all other environments in the system, and the only thing the file system environment needs to do is to implement file access, it is reasonable to reserve most of the file system environment's address space in this way. It would be awkward for a real file system implementation on a 32-bit machine to do this since modern disks are larger than 3GB. Such a buffer cache management approach may still be reasonable on a machine with a 64-bit address space.

Of course, it would take a long time to read the entire disk into memory, so instead we'll implement a form of _demand paging_, wherein we only allocate pages in the disk map region and read the corresponding block from the disk in response to a page fault in this region. This way, we can pretend that the entire disk is in memory.

```
Exercise 2. Implement the `bc_pgfault` and `flush_block` functions in `fs/bc.c`. `bc_pgfault` is a page fault handler, just like the one you wrote in the previous lab for copy-on-write fork, except that its job is to load pages in from the disk in response to a page fault. When writing this, keep in mind that (1) `addr` may not be aligned to a block boundary and (2) `ide_read` operates in sectors, not blocks.

The `flush_block` function should write a block out to disk _if necessary_. `flush_block` shouldn't do anything if the block isn't even in the block cache (that is, the page isn't mapped) or if it's not dirty. We will use the VM hardware to keep track of whether a disk block has been modified since it was last read from or written to disk. To see whether a block needs writing, we can just look to see if the `PTE_D` "dirty" bit is set in the `uvpt` entry. (The `PTE_D` bit is set by the processor in response to a write to that page; see 5.2.4.3 in [chapter 5][3] of the 386 reference manual.) After writing the block to disk, `flush_block` should clear the `PTE_D` bit using `sys_page_map`.

Use make grade to test your code. Your code should pass "check_bc", "check_super", and "check_bitmap".
```
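
A sketch of the shape both functions can take; it leans on the `va_is_mapped` and `va_is_dirty` helpers already in `fs/bc.c` and omits the extra sanity checks the real code should keep:

```
// fs/bc.c -- sketch. BLKSECTS is BLKSIZE/SECTSIZE, i.e., 8 sectors per block.
static void
bc_pgfault(struct UTrapframe *utf)
{
    void *addr = (void *) utf->utf_fault_va;
    uint32_t blockno = ((uint32_t) addr - DISKMAP) / BLKSIZE;
    int r;

    // (The real code first checks that addr lies in the disk map region
    // and that blockno is within the bounds of the disk.)

    addr = ROUNDDOWN(addr, BLKSIZE);  // addr may not be block-aligned
    if ((r = sys_page_alloc(0, addr, PTE_P | PTE_U | PTE_W)) < 0)
        panic("bc_pgfault: sys_page_alloc: %e", r);
    if ((r = ide_read(blockno * BLKSECTS, addr, BLKSECTS)) < 0)
        panic("bc_pgfault: ide_read: %e", r);

    // Remapping the page clears PTE_D, so the fresh block is "clean".
    if ((r = sys_page_map(0, addr, 0, addr, uvpt[PGNUM(addr)] & PTE_SYSCALL)) < 0)
        panic("bc_pgfault: sys_page_map: %e", r);
}

void
flush_block(void *addr)
{
    uint32_t blockno = ((uint32_t) addr - DISKMAP) / BLKSIZE;
    int r;

    addr = ROUNDDOWN(addr, BLKSIZE);
    if (!va_is_mapped(addr) || !va_is_dirty(addr))
        return;  // not cached, or unchanged: nothing to write
    if ((r = ide_write(blockno * BLKSECTS, addr, BLKSECTS)) < 0)
        panic("flush_block: ide_write: %e", r);
    // Clear the dirty bit by remapping the page over itself.
    if ((r = sys_page_map(0, addr, 0, addr, uvpt[PGNUM(addr)] & PTE_SYSCALL)) < 0)
        panic("flush_block: sys_page_map: %e", r);
}
```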

The `fs_init` function in `fs/fs.c` is a prime example of how to use the block cache. After initializing the block cache, it simply stores pointers into the disk map region in the `super` global variable. After this point, we can simply read from the `super` structure as if it were in memory and our page fault handler will read it from disk as necessary.

```
Challenge! The block cache has no eviction policy. Once a block gets faulted into it, it never gets removed and will remain in memory forevermore. Add eviction to the buffer cache. Using the `PTE_A` "accessed" bits in the page tables, which the hardware sets on any access to a page, you can track approximate usage of disk blocks without the need to modify every place in the code that accesses the disk map region. Be careful with dirty blocks.
```

### The Block Bitmap

After `fs_init` sets the `bitmap` pointer, we can treat `bitmap` as a packed array of bits, one for each block on the disk. See, for example, `block_is_free`, which simply checks whether a given block is marked free in the bitmap.

```
Exercise 3. Use `free_block` as a model to implement `alloc_block` in `fs/fs.c`, which should find a free disk block in the bitmap, mark it used, and return the number of that block. When you allocate a block, you should immediately flush the changed bitmap block to disk with `flush_block`, to help file system consistency.

Use make grade to test your code. Your code should now pass "alloc_block".
```
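
A minimal sketch of the allocation loop; `-E_NO_DISK` (from `inc/error.h`) is the documented "out of blocks" error for this function:

```
// fs/fs.c -- sketch. Block 0 (boot sector) is marked in-use in the bitmap,
// so the scan can safely start at block 1.
int
alloc_block(void)
{
    uint32_t blockno;

    for (blockno = 1; blockno < super->s_nblocks; blockno++) {
        if (block_is_free(blockno)) {
            bitmap[blockno / 32] &= ~(1 << (blockno % 32));  // mark used
            flush_block(&bitmap[blockno / 32]);  // persist the bitmap change
            return blockno;
        }
    }
    return -E_NO_DISK;
}
```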

### File Operations

We have provided a variety of functions in `fs/fs.c` to implement the basic facilities you will need to interpret and manage `File` structures, scan and manage the entries of directory-files, and walk the file system from the root to resolve an absolute pathname. Read through _all_ of the code in `fs/fs.c` and make sure you understand what each function does before proceeding.

```
Exercise 4. Implement `file_block_walk` and `file_get_block`. `file_block_walk` maps from a block offset within a file to the pointer for that block in the `struct File` or the indirect block, very much like what `pgdir_walk` did for page tables. `file_get_block` goes one step further and maps to the actual disk block, allocating a new one if necessary.

Use make grade to test your code. Your code should pass "file_open", "file_get_block", "file_flush/file_truncated/file rewrite", and "testfile".
```
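
A sketch of how the pair can fit together; the error codes follow the comments in `fs/fs.c` (`-E_INVAL` for an out-of-range block, `-E_NOT_FOUND` when an indirect block would be needed but `alloc` is 0):

```
// fs/fs.c -- sketch.
static int
file_block_walk(struct File *f, uint32_t filebno, uint32_t **ppdiskbno, bool alloc)
{
    int r;

    if (filebno >= NDIRECT + NINDIRECT)
        return -E_INVAL;
    if (filebno < NDIRECT) {
        *ppdiskbno = &f->f_direct[filebno];  // slot lives in the File itself
        return 0;
    }
    if (f->f_indirect == 0) {
        if (!alloc)
            return -E_NOT_FOUND;
        if ((r = alloc_block()) < 0)
            return r;
        f->f_indirect = r;
        memset(diskaddr(r), 0, BLKSIZE);  // new indirect block starts empty
        flush_block(diskaddr(r));
    }
    *ppdiskbno = &((uint32_t *) diskaddr(f->f_indirect))[filebno - NDIRECT];
    return 0;
}

int
file_get_block(struct File *f, uint32_t filebno, char **blk)
{
    uint32_t *pdiskbno;
    int r;

    if ((r = file_block_walk(f, filebno, &pdiskbno, 1)) < 0)
        return r;
    if (*pdiskbno == 0) {               // hole: allocate a block on demand
        if ((r = alloc_block()) < 0)
            return r;
        *pdiskbno = r;
    }
    *blk = (char *) diskaddr(*pdiskbno);  // pointer into the block cache
    return 0;
}
```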

`file_block_walk` and `file_get_block` are the workhorses of the file system. For example, `file_read` and `file_write` are little more than the bookkeeping atop `file_get_block` necessary to copy bytes between scattered blocks and a sequential buffer.

```
Challenge! The file system is likely to be corrupted if it gets interrupted in the middle of an operation (for example, by a crash or a reboot). Implement soft updates or journalling to make the file system crash-resilient and demonstrate some situation where the old file system would get corrupted, but yours doesn't.
```

### The file system interface

Now that we have the necessary functionality within the file system environment itself, we must make it accessible to other environments that wish to use the file system. Since other environments can't directly call functions in the file system environment, we'll expose access to the file system environment via a _remote procedure call_, or RPC, abstraction, built atop JOS's IPC mechanism. Graphically, here's what a call to the file system server (say, read) looks like:

```
      Regular env           FS env
   +---------------+   +---------------+
   |      read     |   |   file_read   |
   |   (lib/fd.c)  |   |   (fs/fs.c)   |
...|.......|.......|...|.......^.......|...............
   |       v       |   |       |       | RPC mechanism
   |  devfile_read |   |  serve_read   |
   |  (lib/file.c) |   |  (fs/serv.c)  |
   |       |       |   |       ^       |
   |       v       |   |       |       |
   |     fsipc     |   |     serve     |
   |  (lib/file.c) |   |  (fs/serv.c)  |
   |       |       |   |       ^       |
   |       v       |   |       |       |
   |    ipc_send   |   |    ipc_recv   |
   |       |       |   |       ^       |
   +-------|-------+   +-------|-------+
           |                   |
           +-------------------+
```

Everything below the dotted line is simply the mechanics of getting a read request from the regular environment to the file system environment. Starting at the beginning, `read` (which we provide) works on any file descriptor and simply dispatches to the appropriate device read function, in this case `devfile_read` (we can have more device types, like pipes). `devfile_read` implements `read` specifically for on-disk files. This and the other `devfile_*` functions in `lib/file.c` implement the client side of the FS operations and all work in roughly the same way, bundling up arguments in a request structure, calling `fsipc` to send the IPC request, and unpacking and returning the results. The `fsipc` function simply handles the common details of sending a request to the server and receiving the reply.

The file system server code can be found in `fs/serv.c`. It loops in the `serve` function, endlessly receiving a request over IPC, dispatching that request to the appropriate handler function, and sending the result back via IPC. In the read example, `serve` will dispatch to `serve_read`, which will take care of the IPC details specific to read requests such as unpacking the request structure and finally call `file_read` to actually perform the file read.

Recall that JOS's IPC mechanism lets an environment send a single 32-bit number and, optionally, share a page. To send a request from the client to the server, we use the 32-bit number for the request type (the file system server RPCs are numbered, just like how syscalls were numbered) and store the arguments to the request in a `union Fsipc` on the page shared via the IPC. On the client side, we always share the page at `fsipcbuf`; on the server side, we map the incoming request page at `fsreq` (`0x0ffff000`).

The server also sends the response back via IPC. We use the 32-bit number for the function's return code. For most RPCs, this is all they return. `FSREQ_READ` and `FSREQ_STAT` also return data, which they simply write to the page that the client sent its request on. There's no need to send this page in the response IPC, since the client shared it with the file system server in the first place. Also, in its response, `FSREQ_OPEN` shares with the client a new "Fd page". We'll return to the file descriptor page shortly.

```
Exercise 5. Implement `serve_read` in `fs/serv.c`.

`serve_read`'s heavy lifting will be done by the already-implemented `file_read` in `fs/fs.c` (which, in turn, is just a bunch of calls to `file_get_block`). `serve_read` just has to provide the RPC interface for file reading. Look at the comments and code in `serve_set_size` to get a general idea of how the server functions should be structured.

Use make grade to test your code. Your code should pass "serve_open/file_stat/file_close" and "file_read" for a score of 70/150.
```
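
A sketch of the structure this can take, using the `struct OpenFile` bookkeeping and the `openfile_lookup` helper that `fs/serv.c` already provides:

```
// fs/serv.c -- sketch.
int
serve_read(envid_t envid, union Fsipc *ipc)
{
    struct Fsreq_read *req = &ipc->read;
    struct Fsret_read *ret = &ipc->readRet;
    struct OpenFile *o;
    int r;

    // Translate the client's file ID into our open-file record.
    if ((r = openfile_lookup(envid, req->req_fileid, &o)) < 0)
        return r;
    // Never read more than fits in the response buffer.
    if (req->req_n > sizeof(ret->ret_buf))
        req->req_n = sizeof(ret->ret_buf);
    // Read at the descriptor's seek position, then advance it.
    if ((r = file_read(o->o_file, ret->ret_buf, req->req_n, o->o_fd->fd_offset)) < 0)
        return r;
    o->o_fd->fd_offset += r;
    return r;  // bytes actually read; becomes the IPC return value
}
```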

```
Exercise 6. Implement `serve_write` in `fs/serv.c` and `devfile_write` in `lib/file.c`.

Use make grade to test your code. Your code should pass "file_write", "file_read after file_write", "open", and "large file" for a score of 90/150.
```
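
On the client side, `devfile_write` mirrors the existing `devfile_read`: bundle the arguments into `fsipcbuf` and let `fsipc` do the round trip. A sketch:

```
// lib/file.c -- sketch.
static ssize_t
devfile_write(struct Fd *fd, const void *buf, size_t n)
{
    // The request buffer is smaller than a page; clip n to what fits.
    if (n > sizeof(fsipcbuf.write.req_buf))
        n = sizeof(fsipcbuf.write.req_buf);
    fsipcbuf.write.req_fileid = fd->fd_file.id;
    fsipcbuf.write.req_n = n;
    memmove(fsipcbuf.write.req_buf, buf, n);
    return fsipc(FSREQ_WRITE, NULL);  // server returns bytes written
}
```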

### Spawning Processes

We have given you the code for `spawn` (see `lib/spawn.c`) which creates a new environment, loads a program image from the file system into it, and then starts the child environment running this program. The parent process then continues running independently of the child. The `spawn` function effectively acts like a `fork` in UNIX followed by an immediate `exec` in the child process.

We implemented `spawn` rather than a UNIX-style `exec` because `spawn` is easier to implement from user space in "exokernel fashion", without special help from the kernel. Think about what you would have to do in order to implement `exec` in user space, and be sure you understand why it is harder.

```
Exercise 7. `spawn` relies on the new syscall `sys_env_set_trapframe` to initialize the state of the newly created environment. Implement `sys_env_set_trapframe` in `kern/syscall.c` (don't forget to dispatch the new system call in `syscall()`).

Test your code by running the `user/spawnhello` program from `kern/init.c`, which will attempt to spawn `/hello` from the file system.

Use make grade to test your code.
```
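
A sketch of the system call; note that the kernel must not trust the supplied trap frame, so it forces user privilege and sane flag bits:

```
// kern/syscall.c -- sketch; remember to add a SYS_env_set_trapframe case
// to the dispatch switch in syscall() as well.
static int
sys_env_set_trapframe(envid_t envid, struct Trapframe *tf)
{
    struct Env *e;
    int r;

    if ((r = envid2env(envid, &e, 1)) < 0)  // checkperm = 1 this time
        return r;
    user_mem_assert(curenv, tf, sizeof(struct Trapframe), PTE_U);
    e->env_tf = *tf;
    e->env_tf.tf_cs |= 3;                   // run at CPL 3
    e->env_tf.tf_eflags |= FL_IF;           // interrupts stay enabled
    e->env_tf.tf_eflags &= ~FL_IOPL_MASK;   // no I/O privilege
    return 0;
}
```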

```
Challenge! Implement Unix-style `exec`.
```

```
Challenge! Implement `mmap`-style memory-mapped files and modify `spawn` to map pages directly from the ELF image when possible.
```

### Sharing library state across fork and spawn

UNIX file descriptors are a general notion that also encompasses pipes, console I/O, etc. In JOS, each of these device types has a corresponding `struct Dev`, with pointers to the functions that implement read/write/etc. for that device type. `lib/fd.c` implements the general UNIX-like file descriptor interface on top of this. Each `struct Fd` indicates its device type, and most of the functions in `lib/fd.c` simply dispatch operations to functions in the appropriate `struct Dev`.

`lib/fd.c` also maintains the _file descriptor table_ region in each application environment's address space, starting at `FDTABLE`. This area reserves a page's worth (4KB) of address space for each of the up to `MAXFD` (currently 32) file descriptors the application can have open at once. At any given time, a particular file descriptor table page is mapped if and only if the corresponding file descriptor is in use. Each file descriptor also has an optional "data page" in the region starting at `FILEDATA`, which devices can use if they choose.

We would like to share file descriptor state across `fork` and `spawn`, but file descriptor state is kept in user-space memory. Right now, on `fork`, the memory will be marked copy-on-write, so the state will be duplicated rather than shared. (This means environments won't be able to seek in files they didn't open themselves and that pipes won't work across a fork.) On `spawn`, the memory will be left behind, not copied at all. (Effectively, the spawned environment starts with no open file descriptors.)

We will change `fork` to know that certain regions of memory are used by the "library operating system" and should always be shared. Rather than hard-code a list of regions somewhere, we will set an otherwise-unused bit in the page table entries (just like we did with the `PTE_COW` bit in `fork`).

We have defined a new `PTE_SHARE` bit in `inc/lib.h`. This bit is one of the three PTE bits that are marked "available for software use" in the Intel and AMD manuals. We will establish the convention that if a page table entry has this bit set, the PTE should be copied directly from parent to child in both `fork` and `spawn`. Note that this is different from marking it copy-on-write: as described in the first paragraph, we want to make sure to _share_ updates to the page.

```
Exercise 8. Change `duppage` in `lib/fork.c` to follow the new convention. If the page table entry has the `PTE_SHARE` bit set, just copy the mapping directly. (You should use `PTE_SYSCALL`, not `0xfff`, to mask out the relevant bits from the page table entry. `0xfff` picks up the accessed and dirty bits as well.)

Likewise, implement `copy_shared_pages` in `lib/spawn.c`. It should loop through all page table entries in the current process (just like `fork` did), copying any page mappings that have the `PTE_SHARE` bit set into the child process.
```
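
A sketch of both changes; `PTE_SYSCALL` (from `inc/mmu.h`) is exactly the set of permission bits that `sys_page_map` accepts:

```
// lib/fork.c -- the new case, added before the copy-on-write logic:
//
//     void *va = (void *) (pn * PGSIZE);
//     if (uvpt[pn] & PTE_SHARE)
//         return sys_page_map(0, va, envid, va, uvpt[pn] & PTE_SYSCALL);

// lib/spawn.c -- sketch: copy every PTE_SHARE mapping below UTOP.
static int
copy_shared_pages(envid_t child)
{
    uintptr_t va;
    int r;

    for (va = 0; va < UTOP; va += PGSIZE) {
        // Both the page directory and page table entries must be present.
        if (!(uvpd[PDX(va)] & PTE_P) || !(uvpt[PGNUM(va)] & PTE_P))
            continue;
        if (!(uvpt[PGNUM(va)] & PTE_SHARE))
            continue;
        if ((r = sys_page_map(0, (void *) va, child, (void *) va,
                              uvpt[PGNUM(va)] & PTE_SYSCALL)) < 0)
            return r;
    }
    return 0;
}
```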

Use make run-testpteshare to check that your code is behaving properly. You should see lines that say "`fork handles PTE_SHARE right`" and "`spawn handles PTE_SHARE right`".

Use make run-testfdsharing to check that file descriptors are shared properly. You should see lines that say "`read in child succeeded`" and "`read in parent succeeded`".

### The keyboard interface

For the shell to work, we need a way to type at it. QEMU has been displaying output we write to the CGA display and the serial port, but so far we've only taken input while in the kernel monitor. In QEMU, input typed in the graphical window appears as input from the keyboard to JOS, while input typed to the console appears as characters on the serial port. `kern/console.c` already contains the keyboard and serial drivers that have been used by the kernel monitor since lab 1, but now you need to attach these to the rest of the system.

```
Exercise 9. In your `kern/trap.c`, call `kbd_intr` to handle trap `IRQ_OFFSET+IRQ_KBD` and `serial_intr` to handle trap `IRQ_OFFSET+IRQ_SERIAL`.
```
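
A sketch of the two new cases, placed next to the clock interrupt handling you added in lab 4:

```
// kern/trap.c, inside trap_dispatch() -- sketch.
if (tf->tf_trapno == IRQ_OFFSET + IRQ_KBD) {
    kbd_intr();     // drain the keyboard controller into the console buffer
    return;
}
if (tf->tf_trapno == IRQ_OFFSET + IRQ_SERIAL) {
    serial_intr();  // drain the serial port into the console buffer
    return;
}
```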

We implemented the console input/output file type for you, in `lib/console.c`. `kbd_intr` and `serial_intr` fill a buffer with the recently read input while the console file type drains the buffer (the console file type is used for stdin/stdout by default unless the user redirects them).

Test your code by running make run-testkbd and typing a few lines. The system should echo your lines back to you as you finish them. Try typing in both the console and the graphical window, if you have both available.

### The Shell

Run make run-icode or make run-icode-nox. This will run your kernel and start `user/icode`. `icode` execs `init`, which will set up the console as file descriptors 0 and 1 (standard input and standard output). It will then spawn `sh`, the shell. You should be able to run the following commands:

```
echo hello world | cat
cat lorem |cat
cat lorem |num
cat lorem |num |num |num |num |num
lsfd
```

Note that the user library routine `cprintf` prints straight to the console, without using the file descriptor code. This is great for debugging but not great for piping into other programs. To print output to a particular file descriptor (for example, 1, standard output), use `fprintf(1, "...", ...)`. `printf("...", ...)` is a short-cut for printing to FD 1. See `user/lsfd.c` for examples.

```
Exercise 10.

The shell doesn't support I/O redirection. It would be nice to run sh <script instead of having to type in all the commands in the script by hand, as you did above. Add I/O redirection for < to `user/sh.c`.

Test your implementation by typing sh <script into your shell.

Run make run-testshell to test your shell. `testshell` simply feeds the above commands (also found in `fs/testshell.sh`) into the shell and then checks that the output matches `fs/testshell.key`.
```

```
Challenge! Add more features to the shell. Possibilities include (a few require changes to the file system too):

* backgrounding commands (`ls &`)
* multiple commands per line (`ls; echo hi`)
* command grouping (`(ls; echo hi) | cat > out`)
* environment variable expansion (`echo $hello`)
* quoting (`echo "a | b"`)
* command-line history and/or editing
* tab completion
* directories, cd, and a PATH for command-lookup
* file creation
* ctl-c to kill the running environment

but feel free to do something not on this list.
```

Your code should pass all tests at this point. As usual, you can grade your submission with make grade and hand it in with make handin.

**This completes the lab.** As usual, don't forget to run make grade and to write up your answers and a description of your challenge exercise solution. Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab5.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 5', then make handin to submit your solution.

--------------------------------------------------------------------------------

via: https://pdos.csail.mit.edu/6.828/2018/labs/lab5/

Author: [csail.mit][a]
Topic selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: https://pdos.csail.mit.edu/6.828/2018/labs/lab5/disk.png
[2]: https://pdos.csail.mit.edu/6.828/2018/labs/lab5/file.png
[3]: http://pdos.csail.mit.edu/6.828/2011/readings/i386/s05_02.htm
512
sources/tech/20181016 Lab 6- Network Driver.md
Normal file
512
sources/tech/20181016 Lab 6- Network Driver.md
Normal file
@ -0,0 +1,512 @@

Translating by qhwdw

Lab 6: Network Driver
======

### Lab 6: Network Driver (default final project)

**Due on Thursday, December 6, 2018**

### Introduction

This lab is the default final project that you can do on your own.

Now that you have a file system, no self-respecting OS should go without a network stack. In this lab you are going to write a driver for a network interface card. The card will be based on the Intel 82540EM chip, also known as the E1000.

##### Getting Started

Use Git to commit your Lab 5 source (if you haven't already), fetch the latest version of the course repository, and then create a local branch called `lab6` based on our lab6 branch, `origin/lab6`:

```
athena% cd ~/6.828/lab
athena% add git
athena% git commit -am 'my solution to lab5'
nothing to commit (working directory clean)
athena% git pull
Already up-to-date.
athena% git checkout -b lab6 origin/lab6
Branch lab6 set up to track remote branch refs/remotes/origin/lab6.
Switched to a new branch "lab6"
athena% git merge lab5
Merge made by recursive.
 fs/fs.c | 42 +++++++++++++++++++
 1 files changed, 42 insertions(+), 0 deletions(-)
athena%
```

The network card driver, however, will not be enough to get your OS hooked up to the Internet. In the new lab6 code, we have provided you with a network stack and a network server. As in previous labs, use git to grab the code for this lab, merge in your own code, and explore the contents of the new `net/` directory, as well as the new files in `kern/`.

In addition to writing the driver, you will need to create a system call interface to give access to your driver. You will implement missing network server code to transfer packets between the network stack and your driver. You will also tie everything together by finishing a web server. With the new web server you will be able to serve files from your file system.

You will have to write much of the kernel device driver code yourself, from scratch. This lab provides much less guidance than previous labs: there are no skeleton files, no system call interfaces written in stone, and many design decisions are left up to you. For this reason, we recommend that you read the entire assignment write-up before starting any individual exercises. Many students find this lab more difficult than previous labs, so please plan your time accordingly.

##### Lab Requirements

As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. Write up brief answers to the questions posed in the lab and a description of your challenge exercise in `answers-lab6.txt`.

#### QEMU's virtual network

We will be using QEMU's user mode network stack since it requires no administrative privileges to run. QEMU's documentation has more about user-net [here][1]. We've updated the makefile to enable QEMU's user-mode network stack and the virtual E1000 network card.

By default, QEMU provides a virtual router running on IP 10.0.2.2 and will assign JOS the IP address 10.0.2.15. To keep things simple, we hard-code these defaults into the network server in `net/ns.h`.

While QEMU's virtual network allows JOS to make arbitrary connections out to the Internet, JOS's 10.0.2.15 address has no meaning outside the virtual network running inside QEMU (that is, QEMU acts as a NAT), so we can't connect directly to servers running inside JOS, even from the host running QEMU. To address this, we configure QEMU to run a server on some port on the _host_ machine that simply connects through to some port in JOS and shuttles data back and forth between your real host and the virtual network.

You will run JOS servers on ports 7 (echo) and 80 (http). To avoid collisions on shared Athena machines, the makefile generates forwarding ports for these based on your user ID. To find out what ports QEMU is forwarding to on your development host, run make which-ports. For convenience, the makefile also provides make nc-7 and make nc-80, which allow you to interact directly with servers running on these ports in your terminal. (These targets only connect to a running QEMU instance; you must start QEMU itself separately.)

##### Packet Inspection

The makefile also configures QEMU's network stack to record all incoming and outgoing packets to `qemu.pcap` in your lab directory.

To get a hex/ASCII dump of captured packets, use `tcpdump` like this:

```
tcpdump -XXnr qemu.pcap
```

Alternatively, you can use [Wireshark][2] to graphically inspect the pcap file. Wireshark also knows how to decode and inspect hundreds of network protocols. If you're on Athena, you'll have to use Wireshark's predecessor, ethereal, which is in the sipbnet locker.

##### Debugging the E1000

We are very lucky to be using emulated hardware. Since the E1000 is running in software, the emulated E1000 can report to us, in a user-readable format, its internal state and any problems it encounters. Normally, such a luxury would not be available to a driver developer working with bare metal.

The E1000 can produce a lot of debug output, so you have to enable specific logging channels. Some channels you might find useful are:

| Flag      | Meaning                                            |
| --------- | -------------------------------------------------- |
| tx        | Log packet transmit operations                     |
| txerr     | Log transmit ring errors                           |
| rx        | Log changes to RCTL                                |
| rxfilter  | Log filtering of incoming packets                  |
| rxerr     | Log receive ring errors                            |
| unknown   | Log reads and writes of unknown registers          |
| eeprom    | Log reads from the EEPROM                          |
| interrupt | Log interrupts and changes to interrupt registers. |

To enable "tx" and "txerr" logging, for example, use make E1000_DEBUG=tx,txerr ....

Note: `E1000_DEBUG` flags only work in the 6.828 version of QEMU.

You can take debugging with software-emulated hardware one step further. If you are ever stuck and do not understand why the E1000 is not responding the way you would expect, you can look at QEMU's E1000 implementation in `hw/e1000.c`.

#### The Network Server

Writing a network stack from scratch is hard work. Instead, we will be using lwIP, an open source lightweight TCP/IP protocol suite that among many things includes a network stack. You can find more information on lwIP [here][3]. In this assignment, as far as we are concerned, lwIP is a black box that implements a BSD socket interface and has a packet input port and packet output port.

The network server is actually a combination of four environments:

* core network server environment (includes socket call dispatcher and lwIP)
* input environment
* output environment
* timer environment

The following diagram shows the different environments and their relationships. The diagram shows the entire system including the device driver, which will be covered later. In this lab, you will implement the parts highlighted in green.

![Network server architecture][4]

##### The Core Network Server Environment

The core network server environment is composed of the socket call dispatcher and lwIP itself. The socket call dispatcher works exactly like the file server. User environments use stubs (found in `lib/nsipc.c`) to send IPC messages to the core network environment. If you look at `lib/nsipc.c` you will see that we find the core network server the same way we found the file server: `i386_init` created the NS environment with type `ENV_TYPE_NS`, so we scan `envs`, looking for this special environment type. For each user environment IPC, the dispatcher in the network server calls the appropriate BSD socket interface function provided by lwIP on behalf of the user.

Regular user environments do not use the `nsipc_*` calls directly. Instead, they use the functions in `lib/sockets.c`, which provides a file descriptor-based sockets API. Thus, user environments refer to sockets via file descriptors, just like how they referred to on-disk files. A number of operations (`connect`, `accept`, etc.) are specific to sockets, but `read`, `write`, and `close` go through the normal file descriptor device-dispatch code in `lib/fd.c`. Much like how the file server maintained internal unique ID's for all open files, lwIP also generates unique ID's for all open sockets. In both the file server and the network server, we use information stored in `struct Fd` to map per-environment file descriptors to these unique ID spaces.

Even though it may seem that the IPC dispatchers of the file server and network server act the same, there is a key difference. BSD socket calls like `accept` and `recv` can block indefinitely. If the dispatcher were to let lwIP execute one of these blocking calls, the dispatcher would also block and there could only be one outstanding network call at a time for the whole system. Since this is unacceptable, the network server uses user-level threading to avoid blocking the entire server environment. For every incoming IPC message, the dispatcher creates a thread and processes the request in the newly created thread. If the thread blocks, then only that thread is put to sleep while other threads continue to run.

In addition to the core network environment there are three helper environments. Besides accepting messages from user applications, the core network environment's dispatcher also accepts messages from the input and timer environments.

##### The Output Environment

When servicing user environment socket calls, lwIP will generate packets for the network card to transmit. LwIP will send each packet to be transmitted to the output helper environment using the `NSREQ_OUTPUT` IPC message with the packet attached in the page argument of the IPC message. The output environment is responsible for accepting these messages and forwarding the packet on to the device driver via the system call interface that you will soon create.

##### The Input Environment

Packets received by the network card need to be injected into lwIP. For every packet received by the device driver, the input environment pulls the packet out of kernel space (using kernel system calls that you will implement) and sends the packet to the core server environment using the `NSREQ_INPUT` IPC message.

The packet input functionality is separated from the core network environment because JOS makes it hard to simultaneously accept IPC messages and poll or wait for a packet from the device driver. We do not have a `select` system call in JOS that would allow environments to monitor multiple input sources to identify which input is ready to be processed.

If you take a look at `net/input.c` and `net/output.c` you will see that both need to be implemented. This is mainly because the implementation depends on your system call interface. You will write the code for the two helper environments after you implement the driver and system call interface.

##### The Timer Environment

The timer environment periodically sends messages of type `NSREQ_TIMER` to the core network server notifying it that a timer has expired. The timer messages from this thread are used by lwIP to implement various network timeouts.

### Part A: Initialization and transmitting packets

Your kernel does not have a notion of time, so we need to add it. There is currently a clock interrupt that is generated by the hardware every 10ms. On every clock interrupt we can increment a variable to indicate that time has advanced by 10ms. This is implemented in `kern/time.c`, but is not yet fully integrated into your kernel.

```
Exercise 1. Add a call to `time_tick` for every clock interrupt in `kern/trap.c`. Implement `sys_time_msec` and add it to `syscall` in `kern/syscall.c` so that user space has access to the time.
```
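
A sketch of both pieces; `time_tick` and `time_msec` are the helpers provided in `kern/time.c`:

```
// kern/trap.c, inside trap_dispatch() -- extend the existing timer case:
if (tf->tf_trapno == IRQ_OFFSET + IRQ_TIMER) {
    lapic_eoi();
    time_tick();    // advance the kernel clock by one 10ms tick
                    // (on a multi-CPU run, tick only on the boot CPU)
    sched_yield();
}

// kern/syscall.c -- sketch; also add a SYS_time_msec case to syscall().
static int
sys_time_msec(void)
{
    return time_msec();
}
```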

Use make INIT_CFLAGS=-DTEST_NO_NS run-testtime to test your time code. You should see the environment count down from 5 in 1 second intervals. The "-DTEST_NO_NS" disables starting the network server environment because it will panic at this point in the lab.

#### The Network Interface Card

Writing a driver requires in-depth knowledge of the hardware and of the interface presented to the software. The lab text will provide a high-level overview of how to interface with the E1000, but you'll need to make extensive use of Intel's manual while writing your driver.

```
Exercise 2. Browse Intel's [Software Developer's Manual][5] for the E1000. This manual covers several closely related Ethernet controllers. QEMU emulates the 82540EM.

You should skim over chapter 2 now to get a feel for the device. To write your driver, you'll need to be familiar with chapters 3 and 14, as well as 4.1 (though not 4.1's subsections). You'll also need to use chapter 13 as reference. The other chapters mostly cover components of the E1000 that your driver won't have to interact with. Don't worry about the details right now; just get a feel for how the document is structured so you can find things later.

While reading the manual, keep in mind that the E1000 is a sophisticated device with many advanced features. A working E1000 driver only needs a fraction of the features and interfaces that the NIC provides. Think carefully about the easiest way to interface with the card. We strongly recommend that you get a basic driver working before taking advantage of the advanced features.
```

##### PCI Interface

The E1000 is a PCI device, which means it plugs into the PCI bus on the motherboard. The PCI bus has address, data, and interrupt lines, and allows the CPU to communicate with PCI devices and PCI devices to read and write memory. A PCI device needs to be discovered and initialized before it can be used. Discovery is the process of walking the PCI bus looking for attached devices. Initialization is the process of allocating I/O and memory space as well as negotiating the IRQ line for the device to use.

We have provided you with PCI code in `kern/pci.c`. To perform PCI initialization during boot, the PCI code walks the PCI bus looking for devices. When it finds a device, it reads its vendor ID and device ID and uses these two values as a key to search the `pci_attach_vendor` array. The array is composed of `struct pci_driver` entries like this:

```
struct pci_driver {
    uint32_t key1, key2;
    int (*attachfn) (struct pci_func *pcif);
};
```

If the discovered device's vendor ID and device ID match an entry in the array, the PCI code calls that entry's `attachfn` to perform device initialization. (Devices can also be identified by class, which is what the other driver table in `kern/pci.c` is for.)

The attach function is passed a _PCI function_ to initialize. A PCI card can expose multiple functions, though the E1000 exposes only one. Here is how we represent a PCI function in JOS:

```
struct pci_func {
    struct pci_bus *bus;

    uint32_t dev;
    uint32_t func;

    uint32_t dev_id;
    uint32_t dev_class;

    uint32_t reg_base[6];
    uint32_t reg_size[6];
    uint8_t irq_line;
};
```

The above structure reflects some of the entries found in Table 4-1 of Section 4.1 of the developer manual. The last three entries of `struct pci_func` are of particular interest to us, as they record the negotiated memory, I/O, and interrupt resources for the device. The `reg_base` and `reg_size` arrays contain information for up to six Base Address Registers or BARs. `reg_base` stores the base memory addresses for memory-mapped I/O regions (or base I/O ports for I/O port resources), `reg_size` contains the size in bytes or number of I/O ports for the corresponding base values from `reg_base`, and `irq_line` contains the IRQ line assigned to the device for interrupts. The specific meanings of the E1000 BARs are given in the second half of table 4-2.

When the attach function of a device is called, the device has been found but not yet _enabled_. This means that the PCI code has not yet determined the resources allocated to the device, such as address space and an IRQ line, and, thus, the last three elements of the `struct pci_func` structure are not yet filled in. The attach function should call `pci_func_enable`, which will enable the device, negotiate these resources, and fill in the `struct pci_func`.

```
Exercise 3. Implement an attach function to initialize the E1000. Add an entry to the `pci_attach_vendor` array in `kern/pci.c` to trigger your function if a matching PCI device is found (be sure to put it before the `{0, 0, 0}` entry that marks the end of the table). You can find the vendor ID and device ID of the 82540EM that QEMU emulates in section 5.2. You should also see these listed when JOS scans the PCI bus while booting.

For now, just enable the E1000 device via `pci_func_enable`. We'll add more initialization throughout the lab.

We have provided the `kern/e1000.c` and `kern/e1000.h` files for you so that you do not need to mess with the build system. They are currently blank; you need to fill them in for this exercise. You may also need to include the `e1000.h` file in other places in the kernel.

When you boot your kernel, you should see it print that the PCI function of the E1000 card was enabled. Your code should now pass the `pci attach` test of make grade.
```
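
A sketch of the minimal attach path; the macro names here are illustrative (the IDs themselves come from section 5.2 of the manual):

```
// kern/e1000.h -- sketch; 0x8086/0x100E identify the 82540EM.
#define E1000_VENDOR_ID       0x8086
#define E1000_DEV_ID_82540EM  0x100E

int e1000_attach(struct pci_func *pcif);

// kern/e1000.c -- sketch.
int
e1000_attach(struct pci_func *pcif)
{
    pci_func_enable(pcif);  // negotiate BARs and the IRQ line
    return 0;
}

// kern/pci.c -- added before the {0, 0, 0} terminator in pci_attach_vendor:
//     { E1000_VENDOR_ID, E1000_DEV_ID_82540EM, &e1000_attach },
```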

##### Memory-mapped I/O

Software communicates with the E1000 via _memory-mapped I/O_ (MMIO). You've seen this twice before in JOS: both the CGA console and the LAPIC are devices that you control and query by writing to and reading from "memory". But these reads and writes don't go to DRAM; they go directly to these devices.

`pci_func_enable` negotiates an MMIO region with the E1000 and stores its base and size in BAR 0 (that is, `reg_base[0]` and `reg_size[0]`). This is a range of _physical memory addresses_ assigned to the device, which means you'll have to do something to access it via virtual addresses. Since MMIO regions are assigned very high physical addresses (typically above 3GB), you can't use `KADDR` to access it because of JOS's 256MB limit. Thus, you'll have to create a new memory mapping. We'll use the area above MMIOBASE (your `mmio_map_region` from lab 4 will make sure we don't overwrite the mapping used by the LAPIC). Since PCI device initialization happens before JOS creates user environments, you can create the mapping in `kern_pgdir` and it will always be available.

```
Exercise 4. In your attach function, create a virtual memory mapping for the E1000's BAR 0 by calling `mmio_map_region` (which you wrote in lab 4 to support memory-mapping the LAPIC).

You'll want to record the location of this mapping in a variable so you can later access the registers you just mapped. Take a look at the `lapic` variable in `kern/lapic.c` for an example of one way to do this. If you do use a pointer to the device register mapping, be sure to declare it `volatile`; otherwise, the compiler is allowed to cache values and reorder accesses to this memory.

To test your mapping, try printing out the device status register (section 13.4.2). This is a 4 byte register that starts at byte 8 of the register space. You should get `0x80080783`, which indicates a full duplex link is up at 1000 MB/s, among other things.
```
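
A sketch of the mapping and the status-register check; the register macro and the global's name are illustrative:

```
// kern/e1000.c -- sketch. The device status register lives at byte
// offset 8 of the register space (section 13.4.2).
#define E1000_STATUS 0x00008

static volatile uint32_t *e1000;  // MMIO base, indexed in 32-bit words

int
e1000_attach(struct pci_func *pcif)
{
    pci_func_enable(pcif);
    e1000 = (volatile uint32_t *) mmio_map_region(pcif->reg_base[0],
                                                  pcif->reg_size[0]);
    // Expect 0x80080783: link up, full duplex, 1000 Mb/s.
    cprintf("E1000 status: 0x%08x\n", e1000[E1000_STATUS / 4]);
    return 0;
}
```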

Hint: You'll need a lot of constants, like the locations of registers and values of bit masks. Trying to copy these out of the developer's manual is error-prone and mistakes can lead to painful debugging sessions. We recommend instead using QEMU's [`e1000_hw.h`][6] header as a guideline. We don't recommend copying it in verbatim, because it defines far more than you actually need and may not define things in the way you need, but it's a good starting point.

##### DMA

You could imagine transmitting and receiving packets by writing and reading from the E1000's registers, but this would be slow and would require the E1000 to buffer packet data internally. Instead, the E1000 uses _Direct Memory Access_, or DMA, to read and write packet data directly from memory without involving the CPU. The driver is responsible for allocating memory for the transmit and receive queues, setting up DMA descriptors, and configuring the E1000 with the location of these queues, but everything after that is asynchronous. To transmit a packet, the driver copies it into the next DMA descriptor in the transmit queue and informs the E1000 that another packet is available; the E1000 will copy the data out of the descriptor when there is time to send the packet. Likewise, when the E1000 receives a packet, it copies it into the next DMA descriptor in the receive queue, which the driver can read from at its next opportunity.

The receive and transmit queues are very similar at a high level. Both consist of a sequence of _descriptors_. While the exact structure of these descriptors varies, each descriptor contains some flags and the physical address of a buffer containing packet data (either packet data for the card to send, or a buffer allocated by the OS for the card to write a received packet to).

The queues are implemented as circular arrays, meaning that when the card or the driver reaches the end of the array, it wraps back around to the beginning. Both have a _head pointer_ and a _tail pointer_, and the contents of the queue are the descriptors between these two pointers. The hardware always consumes descriptors from the head and moves the head pointer, while the driver always adds descriptors to the tail and moves the tail pointer. The descriptors in the transmit queue represent packets waiting to be sent (hence, in the steady state, the transmit queue is empty). For the receive queue, the descriptors in the queue are free descriptors that the card can receive packets into (hence, in the steady state, the receive queue consists of all available receive descriptors). Correctly updating the tail register without confusing the E1000 is tricky; be careful!

The pointers to these arrays as well as the addresses of the packet buffers in the descriptors must all be _physical addresses_ because hardware performs DMA directly to and from physical RAM without going through the MMU.
#### Transmitting Packets
|
||||
|
||||
The transmit and receive functions of the E1000 are basically independent of each other, so we can work on one at a time. We'll attack transmitting packets first simply because we can't test receive without transmitting an "I'm here!" packet first.
|
||||
|
||||
First, you'll have to initialize the card to transmit, following the steps described in section 14.5 (you don't have to worry about the subsections). The first step of transmit initialization is setting up the transmit queue. The precise structure of the queue is described in section 3.4 and the structure of the descriptors is described in section 3.3.3. We won't be using the TCP offload features of the E1000, so you can focus on the "legacy transmit descriptor format." You should read those sections now and familiarize yourself with these structures.
|
||||
|
||||
##### C Structures
|
||||
|
||||
You'll find it convenient to use C `struct`s to describe the E1000's structures. As you've seen with structures like the `struct Trapframe`, C `struct`s let you precisely layout data in memory. C can insert padding between fields, but the E1000's structures are laid out such that this shouldn't be a problem. If you do encounter field alignment problems, look into GCC's "packed" attribute.
|
||||
|
||||
As an example, consider the legacy transmit descriptor given in table 3-8 of the manual and reproduced here:
|
||||
|
||||
```
|
||||
63 48 47 40 39 32 31 24 23 16 15 0
|
||||
+---------------------------------------------------------------+
|
||||
| Buffer address |
|
||||
+---------------|-------|-------|-------|-------|---------------+
|
||||
| Special | CSS | Status| Cmd | CSO | Length |
|
||||
+---------------|-------|-------|-------|-------|---------------+
|
||||
```
|
||||
|
||||
The first byte of the structure starts at the top right, so to convert this into a C struct, read from right to left, top to bottom. If you squint at it right, you'll see that all of the fields even fit nicely into a standard-size types:
|
||||
|
||||
```
|
||||
struct tx_desc
|
||||
{
|
||||
uint64_t addr;
|
||||
uint16_t length;
|
||||
uint8_t cso;
|
||||
uint8_t cmd;
|
||||
uint8_t status;
|
||||
uint8_t css;
|
||||
uint16_t special;
|
||||
};
|
||||
```
|
||||
|
||||
Your driver will have to reserve memory for the transmit descriptor array and the packet buffers pointed to by the transmit descriptors. There are several ways to do this, ranging from dynamically allocating pages to simply declaring them in global variables. Whatever you choose, keep in mind that the E1000 accesses physical memory directly, which means any buffer it accesses must be contiguous in physical memory.
|
||||
|
||||
There are also multiple ways to handle the packet buffers. The simplest, which we recommend starting with, is to reserve space for a packet buffer for each descriptor during driver initialization and simply copy packet data into and out of these pre-allocated buffers. The maximum size of an Ethernet packet is 1518 bytes, which bounds how big these buffers need to be. More sophisticated drivers could dynamically allocate packet buffers (e.g., to reduce memory overhead when network usage is low) or even pass buffers directly provided by user space (a technique known as "zero copy"), but it's good to start simple.
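For example, one simple static layout might look like this (the names and the choice of 64 descriptors are illustrative, not required):

```
#define NTXDESC     64      // a multiple of 8, and at most 64 for the grade tests
#define TX_BUF_SIZE 1518    // largest standard Ethernet frame

// Kernel globals are contiguous in physical memory; PADDR() of these
// addresses is what gets programmed into the card's TDBAL/TDBAH and
// stored in each descriptor's addr field.
static struct tx_desc tx_ring[NTXDESC] __attribute__((aligned(16)));
static char tx_bufs[NTXDESC][TX_BUF_SIZE];
```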
```
Exercise 5. Perform the initialization steps described in section 14.5 (but not its subsections). Use section 13 as a reference for the registers the initialization process refers to and sections 3.3.3 and 3.4 for reference to the transmit descriptors and transmit descriptor array.

Be mindful of the alignment requirements on the transmit descriptor array and the restrictions on length of this array. Since TDLEN must be 128-byte aligned and each transmit descriptor is 16 bytes, your transmit descriptor array will need some multiple of 8 transmit descriptors. However, don't use more than 64 descriptors or our tests won't be able to test transmit ring overflow.

For the TCTL.COLD, you can assume full-duplex operation. For TIPG, refer to the default values described in table 13-77 of section 13.4.34 for the IEEE 802.3 standard IPG (don't use the values in the table in section 14.5).
```

Try running make E1000_DEBUG=TXERR,TX qemu. If you are using the course qemu, you should see an "e1000: tx disabled" message when you set the TDT register (since this happens before you set TCTL.EN) and no further "e1000" messages.

Now that transmit is initialized, you'll have to write the code to transmit a packet and make it accessible to user space via a system call. To transmit a packet, you have to add it to the tail of the transmit queue, which means copying the packet data into the next packet buffer and then updating the TDT (transmit descriptor tail) register to inform the card that there's another packet in the transmit queue. (Note that TDT is an _index_ into the transmit descriptor array, not a byte offset; the documentation isn't very clear about this.)

However, the transmit queue is only so big. What happens if the card has fallen behind transmitting packets and the transmit queue is full? In order to detect this condition, you'll need some feedback from the E1000. Unfortunately, you can't just use the TDH (transmit descriptor head) register; the documentation explicitly states that reading this register from software is unreliable. However, if you set the RS bit in the command field of a transmit descriptor, then, when the card has transmitted the packet in that descriptor, the card will set the DD bit in the status field of the descriptor. If a descriptor's DD bit is set, you know it's safe to recycle that descriptor and use it to transmit another packet.

What if the user calls your transmit system call, but the DD bit of the next descriptor isn't set, indicating that the transmit queue is full? You'll have to decide what to do in this situation. You could simply drop the packet. Network protocols are resilient to this, but if you drop a large burst of packets, the protocol may not recover. You could instead tell the user environment that it has to retry, much like you did for `sys_ipc_try_send`. This has the advantage of pushing back on the environment generating the data.

```
Exercise 6. Write a function to transmit a packet by checking that the next descriptor is free, copying the packet data into the next descriptor, and updating TDT. Make sure you handle the transmit queue being full.
```
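One way such a function might be sketched, building on the static ring above (the register and bit names follow the `e1000_hw.h` naming style but are assumptions here, as is the use of `-E_NO_MEM` to report a full ring; it also assumes initialization set DD in every descriptor so each slot starts out free):

```
int
e1000_transmit(const void *data, size_t len)
{
	uint32_t tail = e1000[E1000_TDT / 4];       // next slot to fill
	struct tx_desc *d = &tx_ring[tail];

	if (len > TX_BUF_SIZE)
		return -E_INVAL;
	// A clear DD bit means the card hasn't finished with this slot yet:
	// the ring is full, so push back on the caller.
	if (!(d->status & E1000_TXD_STAT_DD))
		return -E_NO_MEM;

	memmove(tx_bufs[tail], data, len);
	d->addr = PADDR(tx_bufs[tail]);
	d->length = len;
	d->status &= ~E1000_TXD_STAT_DD;
	d->cmd = E1000_TXD_CMD_RS | E1000_TXD_CMD_EOP; // report status, end of packet
	e1000[E1000_TDT / 4] = (tail + 1) % NTXDESC;   // hand the slot to the card
	return 0;
}
```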
Now would be a good time to test your packet transmit code. Try transmitting just a few packets by directly calling your transmit function from the kernel. You don't have to create packets that conform to any particular network protocol in order to test this. Run make E1000_DEBUG=TXERR,TX qemu to run your test. You should see something like

```
e1000: index 0: 0x271f00 : 9000002a 0
...
```

as you transmit packets. Each line gives the index in the transmit array, the buffer address of that transmit descriptor, the cmd/CSO/length fields, and the special/CSS/status fields. If QEMU doesn't print the values you expected from your transmit descriptor, check that you're filling in the right descriptor and that you configured TDBAL and TDBAH correctly. If you get "e1000: TDH wraparound @0, TDT x, TDLEN y" messages, that means the E1000 ran all the way through the transmit queue without stopping (if QEMU didn't check this, it would enter an infinite loop), which probably means you aren't manipulating TDT correctly. If you get lots of "e1000: tx disabled" messages, then you didn't set the transmit control register right.

Once QEMU runs, you can then run tcpdump -XXnr qemu.pcap to see the packet data that you transmitted. If you saw the expected "e1000: index" messages from QEMU, but your packet capture is empty, double check that you filled in every necessary field and bit in your transmit descriptors (the E1000 probably went through your transmit descriptors, but didn't think it had to send anything).

```
Exercise 7. Add a system call that lets you transmit packets from user space. The exact interface is up to you. Don't forget to check any pointers passed to the kernel from user space.
```

#### Transmitting Packets: Network Server

Now that you have a system call interface to the transmit side of your device driver, it's time to send packets. The output helper environment's goal is to do the following in a loop: accept `NSREQ_OUTPUT` IPC messages from the core network server and send the packets accompanying these IPC messages to the network device driver using the system call you added above. The `NSREQ_OUTPUT` IPCs are sent by the `low_level_output` function in `net/lwip/jos/jif/jif.c`, which glues the lwIP stack to JOS's network system. Each IPC will include a page consisting of a `union Nsipc` with the packet in its `struct jif_pkt pkt` field (see `inc/ns.h`). `struct jif_pkt` looks like

```
struct jif_pkt {
	int jp_len;
	char jp_data[0];
};
```

`jp_len` represents the length of the packet. All subsequent bytes on the IPC page are dedicated to the packet contents. Using a zero-length array like `jp_data` at the end of a struct is a common C trick (some would say abomination) for representing buffers without pre-determined lengths. Since C doesn't do array bounds checking, as long as you ensure there's enough unused memory following the struct, you can use `jp_data` as if it were an array of any size.

Be aware of the interaction between the device driver, the output environment and the core network server when there is no more space in the device driver's transmit queue. The core network server sends packets to the output environment using IPC. If the output environment is suspended in a send-packet system call because the driver has no more buffer space for new packets, the core network server will block waiting for the output environment to accept the IPC call.

```
Exercise 8. Implement `net/output.c`.
```
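A sketch of the output loop might look like this; `sys_net_transmit` stands in for whatever you named the transmit system call in exercise 7, and `nsipcbuf` is the page-aligned `union Nsipc` the helper uses for IPC (both names are assumptions of this sketch):

```
union Nsipc nsipcbuf __attribute__((aligned(PGSIZE)));

void
output(envid_t ns_envid)
{
	envid_t whom;

	while (1) {
		// Wait for the next IPC; the attached page is mapped at
		// nsipcbuf and carries the packet to transmit.
		int32_t req = ipc_recv(&whom, &nsipcbuf, 0);
		if (req != NSREQ_OUTPUT || whom != ns_envid)
			continue;
		// Keep retrying while the driver reports a full transmit ring.
		while (sys_net_transmit(nsipcbuf.pkt.jp_data,
		                        nsipcbuf.pkt.jp_len) < 0)
			sys_yield();
	}
}
```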
You can use `net/testoutput.c` to test your output code without involving the whole network server. Try running make E1000_DEBUG=TXERR,TX run-net_testoutput. You should see something like

```
Transmitting packet 0
e1000: index 0: 0x271f00 : 9000009 0
Transmitting packet 1
e1000: index 1: 0x2724ee : 9000009 0
...
```

and tcpdump -XXnr qemu.pcap should output

```
reading from file qemu.pcap, link-type EN10MB (Ethernet)
-5:00:00.600186 [|ether]
	0x0000:  5061 636b 6574 2030 30                   Packet.00
-5:00:00.610080 [|ether]
	0x0000:  5061 636b 6574 2030 31                   Packet.01
...
```

To test with a larger packet count, try make E1000_DEBUG=TXERR,TX NET_CFLAGS=-DTESTOUTPUT_COUNT=100 run-net_testoutput. If this overflows your transmit ring, double check that you're handling the DD status bit correctly and that you've told the hardware to set the DD status bit (using the RS command bit).

Your code should pass the `testoutput` tests of make grade.

```
Question

1. How did you structure your transmit implementation? In particular, what do you do if the transmit ring is full?
```
### Part B: Receiving packets and the web server

#### Receiving Packets

Just like you did for transmitting packets, you'll have to configure the E1000 to receive packets and provide a receive descriptor queue and receive descriptors. Section 3.2 describes how packet reception works, including the receive queue structure and receive descriptors, and the initialization process is detailed in section 14.4.

```
Exercise 9. Read section 3.2. You can ignore anything about interrupts and checksum offloading (you can return to these sections if you decide to use these features later), and you don't have to be concerned with the details of thresholds and how the card's internal caches work.
```

The receive queue is very similar to the transmit queue, except that it consists of empty packet buffers waiting to be filled with incoming packets. Hence, when the network is idle, the transmit queue is empty (because all packets have been sent), but the receive queue is full (of empty packet buffers).

When the E1000 receives a packet, it first checks if it matches the card's configured filters (for example, to see if the packet is addressed to this E1000's MAC address) and ignores the packet if it doesn't match any filters. Otherwise, the E1000 tries to retrieve the next receive descriptor from the head of the receive queue. If the head (RDH) has caught up with the tail (RDT), then the receive queue is out of free descriptors, so the card drops the packet. If there is a free receive descriptor, it copies the packet data into the buffer pointed to by the descriptor, sets the descriptor's DD (Descriptor Done) and EOP (End of Packet) status bits, and increments the RDH.

If the E1000 receives a packet that is larger than the packet buffer in one receive descriptor, it will retrieve as many descriptors as necessary from the receive queue to store the entire contents of the packet. To indicate that this has happened, it will set the DD status bit on all of these descriptors, but only set the EOP status bit on the last of these descriptors. You can either deal with this possibility in your driver, or simply configure the card to not accept "long packets" (also known as _jumbo frames_) and make sure your receive buffers are large enough to store the largest possible standard Ethernet packet (1518 bytes).

```
Exercise 10. Set up the receive queue and configure the E1000 by following the process in section 14.4. You don't have to support "long packets" or multicast. For now, don't configure the card to use interrupts; you can change that later if you decide to use receive interrupts. Also, configure the E1000 to strip the Ethernet CRC, since the grade script expects it to be stripped.

By default, the card will filter out _all_ packets. You have to configure the Receive Address Registers (RAL and RAH) with the card's own MAC address in order to accept packets addressed to that card. You can simply hard-code QEMU's default MAC address of 52:54:00:12:34:56 (we already hard-code this in lwIP, so doing it here too doesn't make things any worse). Be very careful with the byte order; MAC addresses are written from lowest-order byte to highest-order byte, so 52:54:00:12 are the low-order 32 bits of the MAC address and 34:56 are the high-order 16 bits.

The E1000 only supports a specific set of receive buffer sizes (given in the description of RCTL.BSIZE in 13.4.22). If you make your receive packet buffers large enough and disable long packets, you won't have to worry about packets spanning multiple receive buffers. Also, remember that, just like for transmit, the receive queue and the packet buffers must be contiguous in physical memory.

You should use at least 128 receive descriptors.
```
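For instance, programming QEMU's default MAC address might look like the following two stores; `E1000_RAL`, `E1000_RAH`, and `E1000_RAH_AV` are illustrative names for the first receive-address register pair and its Address Valid bit:

```
// MAC 52:54:00:12:34:56, stored low-order byte first.
e1000[E1000_RAL / 4] = 0x12005452;             // bytes 52 54 00 12
e1000[E1000_RAH / 4] = 0x5634 | E1000_RAH_AV;  // bytes 34 56, plus Address Valid
```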
You can do a basic test of receive functionality now, even without writing the code to receive packets. Run make E1000_DEBUG=TX,TXERR,RX,RXERR,RXFILTER run-net_testinput. `testinput` will transmit an ARP (Address Resolution Protocol) announcement packet (using your packet transmitting system call), which QEMU will automatically reply to. Even though your driver can't receive this reply yet, you should see an "e1000: unicast match[0]: 52:54:00:12:34:56" message, indicating that a packet was received by the E1000 and matched the configured receive filter. If you see an "e1000: unicast mismatch: 52:54:00:12:34:56" message instead, the E1000 filtered out the packet, which means you probably didn't configure RAL and RAH correctly. Make sure you got the byte ordering right and didn't forget to set the "Address Valid" bit in RAH. If you don't get any "e1000" messages, you probably didn't enable receive correctly.

Now you're ready to implement receiving packets. To receive a packet, your driver will have to keep track of which descriptor it expects to hold the next received packet (hint: depending on your design, there's probably already a register in the E1000 keeping track of this). Similar to transmit, the documentation states that the RDH register cannot be reliably read from software, so in order to determine if a packet has been delivered to this descriptor's packet buffer, you'll have to read the DD status bit in the descriptor. If the DD bit is set, you can copy the packet data out of that descriptor's packet buffer and then tell the card that the descriptor is free by updating the queue's tail index, RDT.

If the DD bit isn't set, then no packet has been received. This is the receive-side equivalent of when the transmit queue was full, and there are several things you can do in this situation. You can simply return a "try again" error and require the caller to retry. While this approach works well for full transmit queues because that's a transient condition, it is less justifiable for empty receive queues because the receive queue may remain empty for long stretches of time. A second approach is to suspend the calling environment until there are packets in the receive queue to process. This tactic is very similar to `sys_ipc_recv`. Just like in the IPC case, since we have only one kernel stack per CPU, as soon as we leave the kernel the state on the stack will be lost. We need to set a flag indicating that an environment has been suspended by receive queue underflow and record the system call arguments. The drawback of this approach is complexity: the E1000 must be instructed to generate receive interrupts and the driver must handle them in order to resume the environment blocked waiting for a packet.

```
Exercise 11. Write a function to receive a packet from the E1000 and expose it to user space by adding a system call. Make sure you handle the receive queue being empty.
```
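A receive routine might be sketched as follows, mirroring the transmit side (`rx_desc`, `rx_ring`, `rx_bufs`, `NRXDESC`, and the register/bit names are all assumptions of this sketch, which takes the simpler "try again" approach):

```
int
e1000_receive(void *buf, size_t bufsize)
{
	static uint32_t next = 0;   // descriptor the next packet should land in
	struct rx_desc *d = &rx_ring[next];
	size_t len;

	// A clear DD bit means no packet has been delivered here yet.
	if (!(d->status & E1000_RXD_STAT_DD))
		return -E_NO_MEM;       // "try again" -- the queue is empty

	len = MIN(d->length, bufsize);
	memmove(buf, rx_bufs[next], len);
	d->status = 0;
	e1000[E1000_RDT / 4] = next;   // hand the descriptor back to the card
	next = (next + 1) % NRXDESC;
	return len;
}
```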
```
Challenge! If the transmit queue is full or the receive queue is empty, the environment and your driver may spend a significant amount of CPU cycles polling, waiting for a descriptor. The E1000 can generate an interrupt once it is finished with a transmit or receive descriptor, avoiding the need for polling. Modify your driver so that processing of both the transmit and receive queues is interrupt driven instead of polling.

Note that, once an interrupt is asserted, it will remain asserted until the driver clears the interrupt. In your interrupt handler make sure to clear the interrupt as soon as you handle it. If you don't, after returning from your interrupt handler, the CPU will jump back into it again. In addition to clearing the interrupts on the E1000 card, interrupts also need to be cleared on the LAPIC. Use `lapic_eoi` to do so.
```

#### Receiving Packets: Network Server

In the network server input environment, you will need to use your new receive system call to receive packets and pass them to the core network server environment using the `NSREQ_INPUT` IPC message. These IPC input messages should have a page attached with a `union Nsipc` with its `struct jif_pkt pkt` field filled in with the packet received from the network.

```
Exercise 12. Implement `net/input.c`.
```
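A sketch of the input loop, again assuming a made-up `sys_net_receive` system call from exercise 11 and the same page-aligned `nsipcbuf`:

```
union Nsipc nsipcbuf __attribute__((aligned(PGSIZE)));

void
input(envid_t ns_envid)
{
	int len;

	while (1) {
		// Poll until the driver hands us a packet.
		while ((len = sys_net_receive(nsipcbuf.pkt.jp_data,
		                              PGSIZE - sizeof(nsipcbuf.pkt))) < 0)
			sys_yield();
		nsipcbuf.pkt.jp_len = len;
		ipc_send(ns_envid, NSREQ_INPUT, &nsipcbuf, PTE_P | PTE_U);
		// Give the server time to copy the page before we overwrite it.
		sys_yield();
	}
}
```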
Run `testinput` again with make E1000_DEBUG=TX,TXERR,RX,RXERR,RXFILTER run-net_testinput. You should see

```
Sending ARP announcement...
Waiting for packets...
e1000: index 0: 0x26dea0 : 900002a 0
e1000: unicast match[0]: 52:54:00:12:34:56
input: 0000 5254 0012 3456 5255  0a00 0202 0806 0001
input: 0010 0800 0604 0002 5255  0a00 0202 0a00 0202
input: 0020 5254 0012 3456 0a00  020f 0000 0000 0000
input: 0030 0000 0000 0000 0000  0000 0000 0000 0000
```

The lines beginning with "input:" are a hexdump of QEMU's ARP reply.

Your code should pass the `testinput` tests of make grade. Note that there's no way to test packet receiving without sending at least one ARP packet to inform QEMU of JOS' IP address, so bugs in your transmitting code can cause this test to fail.

To more thoroughly test your networking code, we have provided a daemon called `echosrv` that sets up an echo server running on port 7 that will echo back anything sent over a TCP connection. Use make E1000_DEBUG=TX,TXERR,RX,RXERR,RXFILTER run-echosrv to start the echo server in one terminal and make nc-7 in another to connect to it. Every line you type should be echoed back by the server. Every time the emulated E1000 receives a packet, QEMU should print something like the following to the console:

```
e1000: unicast match[0]: 52:54:00:12:34:56
e1000: index 2: 0x26ea7c : 9000036 0
e1000: index 3: 0x26f06a : 9000039 0
e1000: unicast match[0]: 52:54:00:12:34:56
```

At this point, you should also be able to pass the `echosrv` test.

```
Question

2. How did you structure your receive implementation? In particular, what do you do if the receive queue is empty and a user environment requests the next incoming packet?
```
```
Challenge! Read about the EEPROM in the developer's manual and write the code to load the E1000's MAC address out of the EEPROM. Currently, QEMU's default MAC address is hard-coded into both your receive initialization and lwIP. Fix your initialization to use the MAC address you read from the EEPROM, add a system call to pass the MAC address to lwIP, and modify lwIP to use the MAC address read from the card. Test your change by configuring QEMU to use a different MAC address.
```

```
Challenge! Modify your E1000 driver to be "zero copy." Currently, packet data has to be copied from user-space buffers to transmit packet buffers and from receive packet buffers back to user-space buffers. A zero copy driver avoids this by having user space and the E1000 share packet buffer memory directly. There are many different approaches to this, including mapping the kernel-allocated structures into user space or passing user-provided buffers directly to the E1000. Regardless of your approach, be careful how you reuse buffers so that you don't introduce races between user-space code and the E1000.
```

```
Challenge! Take the zero copy concept all the way into lwIP.

A typical packet is composed of many headers. The user sends data to be transmitted to lwIP in one buffer. The TCP layer wants to add a TCP header, the IP layer an IP header and the MAC layer an Ethernet header. Even though there are many parts to a packet, right now the parts need to be joined together so that the device driver can send the final packet.

The E1000's transmit descriptor design is well-suited to collecting pieces of a packet scattered throughout memory, like the packet fragments created inside lwIP. If you enqueue multiple transmit descriptors, but only set the EOP command bit on the last one, then the E1000 will internally concatenate the packet buffers from these descriptors and only transmit the concatenated buffer when it reaches the EOP-marked descriptor. As a result, the individual packet pieces never need to be joined together in memory.

Change your driver to be able to send packets composed of many buffers without copying and modify lwIP to avoid merging the packet pieces as it does right now.
```

```
Challenge! Augment your system call interface to service more than one user environment. This will prove useful if there are multiple network stacks (and multiple network servers) each with their own IP address running in user mode. The receive system call will need to decide to which environment it needs to forward each incoming packet.

Note that the current interface cannot tell the difference between two packets, and if multiple environments call the packet receive system call, each respective environment will get a subset of the incoming packets, and that subset may include packets that are not destined to the calling environment.

Sections 2.2 and 3 in [this][7] exokernel paper have an in-depth explanation of the problem and a method of addressing it in a kernel like JOS. Use the paper to help you get a grip on the problem; chances are you do not need a solution as complex as the one presented in the paper.
```
#### The Web Server

A web server in its simplest form sends the contents of a file to the requesting client. We have provided skeleton code for a very simple web server in `user/httpd.c`. The skeleton code deals with incoming connections and parses the headers.

```
Exercise 13. The web server is missing the code that deals with sending the contents of a file back to the client. Finish the web server by implementing `send_file` and `send_data`.
```

Once you've finished the web server, start the webserver (make run-httpd-nox) and point your favorite browser at http://_host_:_port_/index.html, where _host_ is the name of the computer running QEMU (`localhost` if you're running the web browser and QEMU on the same computer; if you're running QEMU on athena, use `hostname.mit.edu`, where hostname is the output of the `hostname` command on athena) and _port_ is the port number reported for the web server by make which-ports. You should see a web page served by the HTTP server running inside JOS.

At this point, you should score 105/105 on make grade.

```
Challenge! Add a simple chat server to JOS, where multiple people can connect to the server and anything that any user types is transmitted to the other users. To do this, you will have to find a way to communicate with multiple sockets at once _and_ to send and receive on the same socket at the same time. There are multiple ways to go about this. lwIP provides a MSG_DONTWAIT flag for recv (see `lwip_recvfrom` in `net/lwip/api/sockets.c`), so you could constantly loop through all open sockets, polling them for data. Note that, while `recv` flags are supported by the network server IPC, they aren't accessible via the regular `read` function, so you'll need a way to pass the flags. A more efficient approach is to start one or more environments for each connection and to use IPC to coordinate them. Conveniently, the lwIP socket ID found in the struct Fd for a socket is global (not per-environment), so, for example, the child of a `fork` inherits its parent's sockets. Or, an environment can even send on another environment's socket simply by constructing an Fd containing the right socket ID.
```

```
Question

3. What does the web page served by JOS's web server say?

4. How long approximately did it take you to do this lab?
```

**This completes the lab.** As usual, don't forget to run make grade and to write up your answers and a description of your challenge exercise solution. Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab6.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 6', then make handin and follow the directions.
--------------------------------------------------------------------------------

via: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/

作者:[csail.mit][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: http://wiki.qemu.org/download/qemu-doc.html#Using-the-user-mode-network-stack
[2]: http://www.wireshark.org/
[3]: http://www.sics.se/~adam/lwip/
[4]: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/ns.png
[5]: https://pdos.csail.mit.edu/6.828/2018/readings/hardware/8254x_GBe_SDM.pdf
[6]: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/e1000_hw.h
[7]: http://pdos.csail.mit.edu/papers/exo:tocs.pdf
translating---geekpi

Turn Your Old PC into a Retrogaming Console with Lakka Linux
======

**If you have an old computer gathering dust, you can turn it into a PlayStation-like retrogaming console with the Lakka Linux distribution.**

You probably already know that there are [Linux distributions specially crafted for reviving older computers][1]. But did you know about a Linux distribution that is created for the sole purpose of turning your old computer into a retro-gaming console?

![Lakka is a Linux distribution specially for retrogaming][2]

Meet [Lakka][3], a lightweight Linux distribution that will transform your old or low-end computer (like Raspberry Pi) into a complete retrogaming console.

When I say retrogaming console, I am serious about the console part. If you have ever used a PlayStation or Xbox, you know what a typical console interface looks like.

Lakka provides a similar interface and a similar experience. I'll talk about the 'experience' later. Have a look at the interface first.

<https://itsfoss.com/wp-content/uploads/2018/10/lakka-linux-gaming-console.webm>

Lakka Retrogaming interface

### Lakka: Linux distribution for retrogaming

Lakka is the official Linux distribution of [RetroArch][4] and the [Libretro][5] ecosystem.

RetroArch is a frontend for retro game emulators and game engines. The interface you saw in the video above is nothing but RetroArch. If you just want to play retro games, you can simply install RetroArch in your current Linux distribution.

Lakka provides the Libretro core with RetroArch. So you get a preconfigured operating system that you can install or plug in as a live USB and start playing games.

Lakka is lightweight and you can install it on most old systems or single-board computers like the Raspberry Pi.

It supports a huge number of emulators. You just need to download the ROMs on your system and Lakka will play the games from these ROMs. You can find the list of supported emulators and hardware [here][6].

It enables you to run classic games on a wide range of computers and consoles through its slick graphical interface. Settings are also unified so configuration is done once and for all.

Let me summarize the main features of Lakka:

  * PlayStation-like interface with RetroArch
  * Support for a number of retro game emulators
  * Supports up to 5 players gaming on the same system
  * Savestates allow you to save your progress at any moment in the game
  * You can improve the look of your old games with various graphical filters
  * You can join multiplayer games over the network
  * Out of the box support for a number of joypads like XBOX360, Dualshock 3, and 8bitdo
  * Unlock trophies and badges by connecting to [RetroAchievements][7]

### Getting Lakka

Before you go on installing Lakka, you should know that it is still under development, so expect a few bugs here and there.

Keep in mind that Lakka only supports MBR partitioning. So if it doesn't read your hard drive while installing, this could be the reason.

The [FAQ section of the project][8] answers the common doubts, so please refer to it for any further questions.

[Get Lakka][9]

Do you like playing retro games? What emulators do you use? Have you ever used Lakka before? Share your views with us in the comments section.

--------------------------------------------------------------------------------

via: https://itsfoss.com/lakka-retrogaming-linux/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/lightweight-linux-beginners/
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/lakka-retrogaming-linux.jpeg
[3]: http://www.lakka.tv/
[4]: https://www.retroarch.com/
[5]: https://www.libretro.com/
[6]: http://www.lakka.tv/powerful/
[7]: https://retroachievements.org/
[8]: http://www.lakka.tv/doc/FAQ/
[9]: http://www.lakka.tv/disclaimer/
piwheels: Speedy Python package installation for the Raspberry Pi
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rainbow-pinwheel-piwheel-diversity-inclusion.png?itok=di41Wd3V)

One of the great things about the Python programming language is [PyPI][1], the Python Package Index, where third-party libraries are hosted, available for anyone to install and gain access to pre-existing functionality without starting from scratch. These libraries are handy utilities, written by members of the community, that aren't found within the Python standard library. But they work in much the same way—you import them into your code and have access to functions and classes you didn't write yourself.

### The cross-platform problem

Many of the 150,000+ libraries hosted on PyPI are written in Python, but that's not the only option—you can write Python libraries in C, C++, or anything with Python bindings. The usual benefit of writing a library in C or C++ is speed. The NumPy project is a good example: NumPy provides highly powerful mathematical functionality for dealing with matrix operations. It is highly optimized code that allows users to write in Python but have access to speedy mathematics operations.

The problem comes when trying to distribute libraries for others to use cross-platform. The standard is to create built distributions called Python wheels. While pure Python libraries are automatically compatible cross-platform, those implemented in C/C++ must be built separately for each operating system, Python version, and system architecture. So, if a library wanted to support Windows, MacOS, and Linux, for both 32-bit and 64-bit computers, and for Python 2.7, 3.4, 3.5, and 3.6, that would require 24 different versions! Some packages do this, but others rely on users building the package from the source code, which can take a long time and can often be complex.

### Raspberry Pi and Arm

While the Raspberry Pi runs Linux, it's not the same architecture as your regular PC—it's Arm, rather than Intel. That means the Linux wheels don't work, and Raspberry Pi users had to build from source—until the piwheels project came to fruition last year. [Piwheels][2] is an open source project that aims to build Raspberry Pi platform wheels for every package on PyPI.

![](https://opensource.com/sites/default/files/uploads/pi3b.jpg)

Packages are natively compiled on Raspberry Pi 3 hardware and hosted in a data center provided by UK-based [Mythic Beasts][3], which provides cloud Pis as part of its hosting service. The piwheels website hosts the wheels in a [pip][4]-compatible web server configuration so Raspberry Pi users can use them easily. Raspbian Stretch even comes preconfigured to use piwheels.org as an additional index to PyPI by default.

### The piwheels stack

The piwheels project runs (almost) entirely on Raspberry Pi hardware:

  * **Master**
    * A Raspberry Pi web server hosts the wheel files and distributes jobs to the builder Pis.
  * **Database server**
    * All package information is stored in a [Postgres database][5].
    * The master logs build attempts and downloads.
  * **Builders**
    * Builder Pis are given build jobs to attempt, and they communicate with the database.
    * The backlog of packages on PyPI was completed using around 20 Raspberry Pis.
    * A smaller number of Pis is required to keep up with new releases. Currently, there are three with Raspbian Jessie (Python 3.4) and two with Raspbian Stretch (Python 3.5).

The database server was originally a Raspberry Pi but was moved to another server when the database got too large.

![](https://opensource.com/sites/default/files/uploads/piwheels-stack.png)

### Time saved

Around 500,000 packages are downloaded from piwheels.org every month.

Every time a package is built by piwheels or downloaded by a user, its status information (including build duration) is recorded in a database. Therefore, it's possible to calculate how much time has been saved with pre-compiled packages.

In the 10 months that the service has been running, over 25 years of build time has been saved.

### Great for projects

Raspberry Pi project tutorials requiring Python libraries often include warnings like "this step takes a few hours"—but that's no longer true, thanks to piwheels. Piwheels makes it easy for makers and developers to dive straight into their project and not get bogged down waiting for software to install. Amazing libraries are just a **pip install** away; no need to wait for compilation.

Piwheels has wheels for NumPy, SciPy, OpenCV, Keras, and even [Tensorflow][6], Google's machine learning framework. These libraries are great for [home projects][7], including image and facial recognition with the [camera module][8]. For inspiration, take a look at the Raspberry Pi category on [PyImageSearch][9], one of my [favorite Raspberry Pi blogs][10] to follow.

![](https://opensource.com/sites/default/files/uploads/camera_0.jpg)

Read more about piwheels on the project's [blog][11] and the [Raspberry Pi blog][12], see the [source code on GitHub][13], and check out the [piwheels website][2]. If you want to contribute to the project, check the [missing packages tag][14] and see if you can successfully build one of them.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/piwheels-python-raspberrypi

作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://pypi.org/
[2]: https://www.piwheels.org/
[3]: https://www.mythic-beasts.com/order/rpi
[4]: https://en.wikipedia.org/wiki/Pip_(package_manager)
[5]: https://opensource.com/article/17/10/set-postgres-database-your-raspberry-pi
[6]: https://www.tensorflow.org/
[7]: https://opensource.com/article/17/4/5-projects-raspberry-pi-home
[8]: https://opensource.com/life/15/6/raspberry-pi-camera-projects
[9]: https://www.pyimagesearch.com/category/raspberry-pi/
[10]: https://opensource.com/article/18/8/top-10-raspberry-pi-blogs-follow
[11]: https://blog.piwheels.org/
[12]: https://www.raspberrypi.org/blog/piwheels/
[13]: https://github.com/bennuttall/piwheels
[14]: https://github.com/bennuttall/piwheels/issues?q=is%3Aissue+is%3Aopen+label%3A%22missing+package%22
Automating upstream releases with release-bot
======

All you need to do is file an issue into your upstream repository and release-bot takes care of the rest.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_robots.png?itok=TOZgajrd)

If you own or maintain a GitHub repo and have ever pushed a package from it into [PyPI][1] and/or [Fedora][2], you know it requires some additional work using the Fedora infrastructure.

Good news: We have developed a tool called [release-bot][3] that automates the process. All you need to do is file an issue into your upstream repository and release-bot takes care of the rest. But let’s not get ahead of ourselves. First, let’s look at what needs to be set up for this automation to happen. I’ve chosen the **meta-test-family** upstream repository as an example.

### Configuration files for release-bot

There are two configuration files for release-bot: **conf.yaml** and **release-conf.yaml**.

#### conf.yaml

**conf.yaml** must be accessible during bot initialization; it specifies how to access the GitHub repository. To show that, I have created a new git repository named **mtf-release-bot**, which contains **conf.yaml** and the other secret files.

```
repository_name: name
repository_owner: owner
# https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
github_token: xxxxxxxxxxxxxxxxxxxxxxxxx
# time in seconds during checks for new releases
refresh_interval: 180
```

For the meta-test-family case, the configuration file looks like this:

```
repository_name: meta-test-family
repository_owner: fedora-modularity
github_token: xxxxxxxxxxxxxxxxxxxxx
refresh_interval: 180
```
#### release-conf.yaml

**release-conf.yaml** must be stored [in the repository itself][4]; it specifies how to do GitHub/PyPI/Fedora releases.

```
# list of major python versions that bot will build separate wheels for
python_versions:
- 2
- 3
# optional:
changelog:
- Example changelog entry
- Another changelog entry
# this is info for the authorship of the changelog
# if this is not set, person who merged the release PR will be used as an author
author_name: John Doe
author_email: johndoe@example.com
# whether to release on fedora. False by default
fedora: false
# list of fedora branches bot should release on. Master is always implied
fedora_branches:
- f27
```

For the meta-test-family case, the configuration file looks like this:

```
python_versions:
- 2
fedora: true
fedora_branches:
- f29
- f28
trigger_on_issue: true
```
#### PyPI configuration file

The file **.pypirc**, stored in your **mtf-release-bot** private repository, is needed for uploading the new package version into PyPI:

```
[pypi]
username = phracek
password = xxxxxxxx
```

You will also need the private SSH key, **id_rsa**, that you configured in [FAS][5].

The final structure of the git repository, with **conf.yaml** and the others, looks like this:

```
$ ls -la
total 24
drwxrwxr-x   3 phracek phracek 4096 Sep 24 12:38 .
drwxrwxr-x. 20 phracek phracek 4096 Sep 24 12:37 ..
-rw-rw-r--   1 phracek phracek  199 Sep 24 12:26 conf.yaml
drwxrwxr-x   8 phracek phracek 4096 Sep 24 12:38 .git
-rw-rw-r--   1 phracek phracek 3243 Sep 24 12:38 id_rsa
-rw-------   1 phracek phracek   78 Sep 24 12:28 .pypirc
```

### Requirements

Releasing to PyPI requires the [wheel package][6] for both Python 2 and Python 3, so install **wheel** with both versions of pip. You must also set up your PyPI login details in **$HOME/.pypirc**, as described in the [PyPI documentation][7]. If you are releasing to Fedora, you must have an active [Kerberos][8] ticket while the bot runs, or specify the path to the Kerberos keytab file with `-k/--keytab`. Also, **fedpkg** requires that you have an SSH key in your keyring that you uploaded to FAS.

### How to deploy release-bot

There are two ways to use release-bot: as a Docker image or as an OpenShift template.
#### Docker image

Let’s build the image using the `s2i` command:

```
$ s2i build $CONFIGURATION_REPOSITORY_URL usercont/release-bot app-name
```

where `$CONFIGURATION_REPOSITORY_URL` is a reference to the GitHub repository, like `https://<GIT_LAB_PATH>/mtf-release-conf`.

Let’s look at Docker images:

```
$ docker images
REPOSITORY                       TAG     IMAGE ID      CREATED        SIZE
mtf-release-bot                  latest  08897871e65e  6 minutes ago  705 MB
docker.io/usercont/release-bot   latest  5b34aa670639  9 days ago     705 MB
```

Now let’s try to run the **mtf-release-bot** image with this command:

```
$ docker run mtf-release-bot
---> Setting up ssh key...
Agent pid 12
Identity added: ./.ssh/id_rsa (./.ssh/id_rsa)
12:21:18.982 configuration.py DEBUG   Loaded configuration for fedora-modularity/meta-test-family
12:21:18.982 releasebot.py    INFO    release-bot v0.4.1 reporting for duty!
12:21:18.982 github.py        DEBUG   Fetching release-conf.yaml
12:21:37.611 releasebot.py    DEBUG   No merged release PR found
12:21:38.282 releasebot.py    INFO    Found new release issue with version: 0.8.5
12:21:42.565 releasebot.py    DEBUG   No more open issues found
12:21:43.190 releasebot.py    INFO    Making a new PR for release of version 0.8.5 based on an issue.
12:21:46.709 utils.py         DEBUG   ['git', 'clone', 'https://github.com/fedora-modularity/meta-test-family.git', '.']
12:21:47.401 github.py        DEBUG   {"message":"Branch not found","documentation_url":"https://developer.github.com/v3/repos/branches/#get-branch"}
12:21:47.994 utils.py         DEBUG   ['git', 'config', 'user.email', 'the.conu.bot@gmail.com']
12:21:47.996 utils.py         DEBUG   ['git', 'config', 'user.name', 'Release bot']
12:21:48.009 utils.py         DEBUG   ['git', 'checkout', '-b', '0.8.5-release']
12:21:48.014 utils.py         ERROR   No version files found. Aborting version update.
12:21:48.014 utils.py         WARNING No CHANGELOG.md present in repository
[Errno 2] No such file or directory: '/tmp/tmpmbvb05jq/CHANGELOG.md'
12:21:48.020 utils.py         DEBUG   ['git', 'commit', '--allow-empty', '-m', '0.8.5 release']
[0.8.5-release 7ee62c6] 0.8.5 release
12:21:51.342 utils.py         DEBUG   ['git', 'push', 'origin', '0.8.5-release']
12:21:51.905 github.py        DEBUG   No open PR's found
12:21:51.905 github.py        DEBUG   Attempting a PR for 0.8.5-release branch
12:21:53.215 github.py        INFO    Created PR: https://github.com/fedora-modularity/meta-test-family/pull/243
12:21:53.216 releasebot.py    INFO    I just made a PR request for a release version 0.8.5
12:21:54.154 github.py        DEBUG   Comment added to PR: I just made a PR request for a release version 0.8.5
Here's a [link to the PR](https://github.com/fedora-modularity/meta-test-family/pull/243)
12:21:54.154 github.py        DEBUG   Attempting to close issue #242
12:21:54.992 github.py        DEBUG   Closed issue #242
```
As you can see, release-bot automatically closed the issue requesting a new upstream release of meta-test-family: [https://github.com/fedora-modularity/meta-test-family/issues/243][9].

In addition, release-bot created a new PR with a changelog. You can update the PR—for example, squash the changelog—and once you merge it, it will automatically release to GitHub, and the PyPI and Fedora releases will start.

You now have a working solution to easily release upstream versions of your package into PyPI and Fedora.

#### OpenShift template

Another option to deliver automated releases using release-bot is to deploy it in OpenShift.

The OpenShift template looks as follows:
```
kind: Template
apiVersion: v1
metadata:
  name: release-bot
  annotations:
    description: S2I Release-bot image builder
    tags: release-bot s2i
    iconClass: icon-python
  labels:
    template: release-bot
    role: releasebot_application_builder
objects:
- kind: ImageStream
  apiVersion: v1
  metadata:
    name: ${APP_NAME}
    labels:
      appid: release-bot-${APP_NAME}
- kind: ImageStream
  apiVersion: v1
  metadata:
    name: ${APP_NAME}-s2i
    labels:
      appid: release-bot-${APP_NAME}
  spec:
    tags:
    - name: latest
      from:
        kind: DockerImage
        name: usercont/release-bot:latest
      #importPolicy:
      #  scheduled: true
- kind: BuildConfig
  apiVersion: v1
  metadata:
    name: ${APP_NAME}
    labels:
      appid: release-bot-${APP_NAME}
  spec:
    triggers:
    - type: ConfigChange
    - type: ImageChange
    source:
      type: Git
      git:
        uri: ${CONFIGURATION_REPOSITORY}
      contextDir: ${CONFIGURATION_REPOSITORY}
      sourceSecret:
        name: release-bot-secret
    strategy:
      type: Source
      sourceStrategy:
        from:
          kind: ImageStreamTag
          name: ${APP_NAME}-s2i:latest
    output:
      to:
        kind: ImageStreamTag
        name: ${APP_NAME}:latest
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    name: ${APP_NAME}
    labels:
      appid: release-bot-${APP_NAME}
  spec:
    strategy:
      type: Rolling
    triggers:
    - type: ConfigChange
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
        - ${APP_NAME}
        from:
          kind: ImageStreamTag
          name: ${APP_NAME}:latest
    replicas: 1
    selector:
      deploymentconfig: ${APP_NAME}
    template:
      metadata:
        labels:
          appid: release-bot-${APP_NAME}
          deploymentconfig: ${APP_NAME}
      spec:
        containers:
        - name: ${APP_NAME}
          image: ${APP_NAME}:latest
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"

parameters:
- name: APP_NAME
  description: Name of application
  value:
  required: true
- name: CONFIGURATION_REPOSITORY
  description: Git repository with configuration
  value:
  required: true
```
The easiest way to deploy the **mtf-release-bot** repository with secret files into OpenShift is to use the following two commands. First, download the template:

```
$ curl -sLO https://github.com/user-cont/release-bot/raw/master/openshift-template.yml
```

Then, in your OpenShift instance, deploy the template by running the second command:

```
oc process -p APP_NAME="mtf-release-bot" -p CONFIGURATION_REPOSITORY="git@<git_lab_path>/mtf-release-conf.git" -f openshift-template.yml | oc apply -f -
```
### Summary

See the [example pull request][10] in the meta-test-family upstream repository, where you'll find information about what release-bot released. Once you get to this point, you can see that release-bot is able to push new upstream versions into GitHub, PyPI, and Fedora without heavy user intervention. It automates all the steps so you don’t need to manually upload and build new upstream versions of your package.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/upstream-releases-pypi-fedora-release-bot

作者:[Petr Stone Hracek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/phracek
[b]: https://github.com/lujun9972
[1]: https://pypi.org/
[2]: https://getfedora.org/
[3]: https://github.com/user-cont/release-bot
[4]: https://github.com/fedora-modularity/meta-test-family
[5]: https://admin.fedoraproject.org/accounts/
[6]: https://pypi.org/project/wheel/
[7]: https://packaging.python.org/tutorials/distributing-packages/#create-an-account
[8]: https://web.mit.edu/kerberos/
[9]: https://github.com/fedora-modularity/meta-test-family/issues/238
[10]: https://github.com/fedora-modularity/meta-test-family/pull/243
|
@ -0,0 +1,80 @@
|
||||
Browsing the web with Min, a minimalist open source web browser
|
||||
======
|
||||
Not every web browser needs to carry every single feature. Min puts a minimalist spin on the everyday web browser.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openweb-osdc-lead.png?itok=yjU4KliG)
|
||||
|
||||
Does the world need another web browser? Even though the days of having a multiplicity of browsers to choose from are long gone, there still are folks out there developing new applications that help us use the web.
|
||||
|
||||
One of those new-fangled browsers is [Min][1]. As its name suggests (well, suggests to me, anyway), Min is a minimalist browser. That doesn't mean it's deficient in any significant way, and its open source, Apache 2.0 license piques my interest.
|
||||
|
||||
But is Min worth a look? Let's find out.
|
||||
|
||||
### Getting going
|
||||
|
||||
Min is one of many applications written using a development framework called [Electron][2]. (It's the same framework that brought us the [Atom text editor][3].) You can [get installers][4] for Linux, MacOS, and Windows. You can also grab the [source code from GitHub][5] and compile it if you're inclined.
|
||||
|
||||
I run Manjaro Linux, and there isn't an installer for that distro. Luckily, I was able to install Min from Manjaro's package manager.
|
||||
|
||||
Once that was done, I fired up Min by pressing Alt+F2, typing **min** in the run-application box, and pressing Enter, and I was ready to go.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/min-main.png)
|
||||
|
||||
Min is billed as a smarter, faster web browser. It definitely is fast—at the risk of drawing the ire of denizens of certain places on the web, I'll say that it starts faster than Firefox and Chrome on the laptops with which I tried it.
|
||||
|
||||
Browsing with Min is like browsing with Firefox or Chrome. Type a URL in the address bar, press Enter, and away you go.
|
||||
|
||||
### Min's features
|
||||
|
||||
While Min doesn't pack everything you'd find in browsers like Firefox or Chrome, it doesn't do too badly.
|
||||
|
||||
Like any other browser these days, Min supports multiple tabs. It also has a feature called Tasks, which lets you group your open tabs.
|
||||
|
||||
Min's default search engine is [DuckDuckGo][6]. I really like that touch because DuckDuckGo is one of my search engines of choice. If DuckDuckGo isn't your thing, you can set another search engine as the default in Min's preferences.
|
||||
|
||||
Instead of using tools like AdBlock to filter out content you don't want, Min has a built-in ad blocker. It uses the [EasyList filters][7], which were created for AdBlock. You can block scripts and images, and Min also has a built-in tracking blocker.
|
||||
|
||||
Like Firefox, Min has a reading mode called Reading List. Flipping the Reading List switch (well, clicking the icon in the address bar) removes most of the cruft from a page so you can focus on the words you're reading. Pages stay in the Reading List for 30 days.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/min-reading-list.png)
|
||||
|
||||
Speaking of focus, Min also has a Focus Mode that hides and prevents you from opening other tabs. So, if you're working in a web application, you'll need to click a few times if you feel like procrastinating.
|
||||
|
||||
Of course, Min has a number of keyboard shortcuts that can make using it a lot faster. You can find a reference for those shortcuts [on GitHub][8]. You can also change a number of them in Min's preferences.
|
||||
|
||||
I was pleasantly surprised to find Min can play videos on YouTube, Vimeo, Dailymotion, and similar sites. I also played sample tracks at music retailer 7Digital. I didn't try playing music on popular sites like Spotify or Last.fm (because I don't have accounts with them).
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/min-video.png)
|
||||
|
||||
### What's not there
|
||||
|
||||
The features that Min doesn't pack are as noticeable as the ones it does. There doesn't seem to be a way to bookmark sites. You either have to rely on Min's search history to find your favorite links, or you'll have to rely on a bookmarking service.
|
||||
|
||||
On top of that, Min doesn't support plugins. That's not a deal breaker for me—not having plugins is undoubtedly one of the reasons the browser starts and runs so quickly. I know a number of people who are … well, I wouldn't go so far as to say junkies, but they really like their plugins. Min wouldn't cut it for them.
|
||||
|
||||
### Final thoughts
|
||||
|
||||
Min isn't a bad browser. It's light and fast enough to appeal to the minimalists out there. That said, it lacks features that hardcore web browser users clamor for.
|
||||
|
||||
If you want a zippy browser that isn't weighed down by all the features of so-called modern web browsers, I suggest giving Min a serious look.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/min-web-browser
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/scottnesbitt
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://minbrowser.github.io/min/
|
||||
[2]: http://electron.atom.io/apps/
|
||||
[3]: https://opensource.com/article/17/5/atom-text-editor-packages-writers
|
||||
[4]: https://github.com/minbrowser/min/releases/
|
||||
[5]: https://github.com/minbrowser/min
|
||||
[6]: http://duckduckgo.com
|
||||
[7]: https://easylist.to/
|
||||
[8]: https://github.com/minbrowser/min/wiki
|
@ -0,0 +1,226 @@
|
||||
Chrony – An Alternative NTP Client And Server For Unix-like Systems
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/chrony-1-720x340.jpeg)
|
||||
|
||||
In this tutorial, we will be discussing how to install and configure **Chrony**, an alternative NTP client and server for Unix-like systems. Chrony can synchronise the system clock faster and with better time accuracy, and it is particularly useful for systems that are not online all the time. Chrony is free, open source, and supports GNU/Linux and BSD variants such as FreeBSD and NetBSD, as well as macOS and Solaris.
|
||||
|
||||
### Installing Chrony
|
||||
|
||||
Chrony is available in the default repositories of most Linux distributions. If you’re on Arch Linux, run the following command to install it:
|
||||
|
||||
```
|
||||
$ sudo pacman -S chrony
|
||||
```
|
||||
|
||||
On Debian, Ubuntu, Linux Mint:
|
||||
|
||||
```
|
||||
$ sudo apt-get install chrony
|
||||
```
|
||||
|
||||
On Fedora:
|
||||
|
||||
```
|
||||
$ sudo dnf install chrony
|
||||
```
|
||||
|
||||
Once installed, start the **chronyd.service** daemon if it is not already running:
|
||||
|
||||
```
|
||||
$ sudo systemctl start chronyd.service
|
||||
```
|
||||
|
||||
Make it start automatically at every reboot using the command:
|
||||
|
||||
```
|
||||
$ sudo systemctl enable chronyd.service
|
||||
```
|
||||
|
||||
To verify that chronyd.service has started, run:
|
||||
|
||||
```
|
||||
$ sudo systemctl status chronyd.service
|
||||
```
|
||||
|
||||
If everything is OK, you will see output similar to the following.
|
||||
|
||||
```
|
||||
● chrony.service - chrony, an NTP client/server
|
||||
Loaded: loaded (/lib/systemd/system/chrony.service; enabled; vendor preset: ena
|
||||
Active: active (running) since Wed 2018-10-17 10:34:53 UTC; 3min 15s ago
|
||||
Docs: man:chronyd(8)
|
||||
man:chronyc(1)
|
||||
man:chrony.conf(5)
|
||||
Main PID: 2482 (chronyd)
|
||||
Tasks: 1 (limit: 2320)
|
||||
CGroup: /system.slice/chrony.service
|
||||
└─2482 /usr/sbin/chronyd
|
||||
|
||||
Oct 17 10:34:53 ubuntuserver systemd[1]: Starting chrony, an NTP client/server...
|
||||
Oct 17 10:34:53 ubuntuserver chronyd[2482]: chronyd version 3.2 starting (+CMDMON
|
||||
Oct 17 10:34:53 ubuntuserver chronyd[2482]: Initial frequency -268.088 ppm
|
||||
Oct 17 10:34:53 ubuntuserver systemd[1]: Started chrony, an NTP client/server.
|
||||
Oct 17 10:35:03 ubuntuserver chronyd[2482]: Selected source 85.25.84.166
|
||||
Oct 17 10:35:03 ubuntuserver chronyd[2482]: Source 85.25.84.166 replaced with 2403
|
||||
Oct 17 10:35:03 ubuntuserver chronyd[2482]: Selected source 91.189.89.199
|
||||
Oct 17 10:35:06 ubuntuserver chronyd[2482]: Selected source 106.10.186.200
|
||||
```
|
||||
|
||||
As you can see, the Chrony service has started and is working!
|
||||
|
||||
### Configure Chrony
|
||||
|
||||
An NTP client needs to know which NTP servers it should contact to get the current time. We can specify those servers using the **server** or **pool** directive in the NTP configuration file. Usually, the default configuration file is **/etc/chrony/chrony.conf** or **/etc/chrony.conf**, depending on the Linux distribution. For better reliability, it is recommended to specify at least three servers.
|
||||
|
||||
The following lines are just an example taken from my Ubuntu 18.04 LTS server.
|
||||
|
||||
```
|
||||
[...]
|
||||
# About using servers from the NTP Pool Project in general see (LP: #104525).
|
||||
# Approved by Ubuntu Technical Board on 2011-02-08.
|
||||
# See http://www.pool.ntp.org/join.html for more information.
|
||||
pool ntp.ubuntu.com iburst maxsources 4
|
||||
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
|
||||
pool 1.ubuntu.pool.ntp.org iburst maxsources 1
|
||||
pool 2.ubuntu.pool.ntp.org iburst maxsources 2
|
||||
[...]
|
||||
```
|
||||
|
||||
As you see in the above output, the [**NTP Pool Project**][1] has been set as the default time source. For those wondering, the NTP Pool Project is a big cluster of time servers that provides NTP service to tens of millions of clients across the world. It is the default time server for Ubuntu and most of the other major Linux distributions.
|
||||
|
||||
Here,
|
||||
|
||||
* the **iburst** option is used to speed up the initial synchronisation.
|
||||
  * the **maxsources** option sets the maximum number of NTP sources to use.
|
||||
|
||||
|
||||
|
||||
Please make sure that the NTP servers you choose are well synchronised, stable, and close to your location; this improves the accuracy of the time you get from your NTP sources.
|
||||
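If you would rather point Chrony at servers you have chosen yourself, you can replace the default pool lines with your own **server** directives. The following is a minimal, hypothetical example; the hostnames are placeholders from the public NTP pool, so substitute servers close to your location:

```
# Hypothetical example: three explicitly chosen servers,
# with iburst for faster initial synchronisation
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
```

After editing the configuration file, restart the service (for example, with `sudo systemctl restart chronyd.service`) so the changes take effect.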
|
||||
### Manage Chronyd from command line
|
||||
|
||||
Chrony has a command line utility named **chronyc** to control and monitor the **chrony** daemon (chronyd).
|
||||
|
||||
To check if **chrony** is synchronized, we can use the **tracking** command as shown below.
|
||||
|
||||
```
|
||||
$ chronyc tracking
|
||||
Reference ID : 6A0ABAC8 (t1.time.sg3.yahoo.com)
|
||||
Stratum : 3
|
||||
Ref time (UTC) : Wed Oct 17 11:48:51 2018
|
||||
System time : 0.000984587 seconds slow of NTP time
|
||||
Last offset : -0.000912981 seconds
|
||||
RMS offset : 0.007983995 seconds
|
||||
Frequency : 23.704 ppm slow
|
||||
Residual freq : +0.006 ppm
|
||||
Skew : 1.734 ppm
|
||||
Root delay : 0.089718960 seconds
|
||||
Root dispersion : 0.008760406 seconds
|
||||
Update interval : 515.1 seconds
|
||||
Leap status : Normal
|
||||
```
|
||||
|
||||
We can verify the current time sources that chrony uses with the command:
|
||||
|
||||
```
|
||||
$ chronyc sources
|
||||
210 Number of sources = 8
|
||||
MS Name/IP address Stratum Poll Reach LastRx Last sample
|
||||
===============================================================================
|
||||
^- chilipepper.canonical.com 2 10 377 296 +102ms[ +104ms] +/- 279ms
|
||||
^- golem.canonical.com 2 10 377 302 +105ms[ +107ms] +/- 290ms
|
||||
^+ pugot.canonical.com 2 10 377 297 +36ms[ +38ms] +/- 238ms
|
||||
^- alphyn.canonical.com 2 10 377 279 -43ms[ -42ms] +/- 238ms
|
||||
^- dadns.cdnetworks.co.kr 2 10 377 1070 +40ms[ +42ms] +/- 314ms
|
||||
^* t1.time.sg3.yahoo.com 2 10 377 169 -13ms[ -11ms] +/- 80ms
|
||||
^+ sin1.m-d.net 2 10 275 567 -9633us[-7826us] +/- 115ms
|
||||
^- ns2.pulsation.fr 2 10 377 311 -75ms[ -73ms] +/- 250ms
|
||||
```
|
||||
|
||||
The chronyc utility can also show statistics for each source, such as the drift rate and the offset estimation process, using the **sourcestats** command.
|
||||
|
||||
```
|
||||
$ chronyc sourcestats
|
||||
210 Number of sources = 8
|
||||
Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
|
||||
==============================================================================
|
||||
chilipepper.canonical.com 32 16 89m +6.293 14.345 +30ms 24ms
|
||||
golem.canonical.com 32 17 89m +0.312 18.887 +20ms 33ms
|
||||
pugot.canonical.com 32 18 89m +0.281 11.237 +3307us 23ms
|
||||
alphyn.canonical.com 31 20 88m -4.087 8.910 -58ms 17ms
|
||||
dadns.cdnetworks.co.kr 29 16 76m -1.094 9.895 -83ms 14ms
|
||||
t1.time.sg3.yahoo.com 32 16 91m +0.153 1.952 +2835us 4044us
|
||||
sin1.m-d.net 29 13 83m +0.049 6.060 -8466us 9940us
|
||||
ns2.pulsation.fr 32 17 88m +0.784 9.834 -62ms 22ms
|
||||
```
|
||||
|
||||
If your system is not connected to the Internet, you need to notify Chrony of that so it stops trying to poll the servers. To do so, run:
|
||||
|
||||
```
|
||||
$ sudo chronyc offline
|
||||
[sudo] password for sk:
|
||||
200 OK
|
||||
```
|
||||
|
||||
To verify the status of your NTP sources, simply run:
|
||||
|
||||
```
|
||||
$ chronyc activity
|
||||
200 OK
|
||||
0 sources online
|
||||
8 sources offline
|
||||
0 sources doing burst (return to online)
|
||||
0 sources doing burst (return to offline)
|
||||
0 sources with unknown address
|
||||
```
|
||||
|
||||
As you see, all my NTP sources are marked offline at the moment.
|
||||
|
||||
Once you’re connected to the Internet, notify Chrony that your system is back online using the command:
|
||||
|
||||
```
|
||||
$ sudo chronyc online
|
||||
200 OK
|
||||
```
|
||||
|
||||
To view the status of NTP source(s), run:
|
||||
|
||||
```
|
||||
$ chronyc activity
|
||||
200 OK
|
||||
8 sources online
|
||||
0 sources offline
|
||||
0 sources doing burst (return to online)
|
||||
0 sources doing burst (return to offline)
|
||||
0 sources with unknown address
|
||||
```
|
||||
|
||||
For a more detailed explanation of all options and parameters, refer to the man pages:
|
||||
|
||||
```
|
||||
$ man chronyc
|
||||
|
||||
$ man chronyd
|
||||
```
|
||||
|
||||
And, that’s all for now. Hope this was useful. In the subsequent tutorials, we will see how to set up a local NTP server using Chrony and configure clients to use it to synchronise time.
|
||||
|
||||
Stay tuned!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/chrony-an-alternative-ntp-client-and-server-for-unix-like-systems/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ntppool.org/en/
|
@ -0,0 +1,94 @@
|
||||
4 open source alternatives to Microsoft Access
|
||||
======
|
||||
Build simple business applications and keep track of your data with these worthy open source alternatives.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg)
|
||||
|
||||
When small businesses, community organizations, and similar-sized groups realize they need software to manage their data, they think first of Microsoft Access. That may be the right choice if you're already paying for a Microsoft Office subscription or don't care that it's proprietary. But it's far from your only option. Whether you prefer open source alternatives on philosophical grounds or simply don't have the budget for a Microsoft Office subscription, there are several open source database applications that are worthy alternatives to proprietary software like Microsoft Access or Apple FileMaker.
|
||||
|
||||
If that sounds like you, here are four open source database tools for your consideration.
|
||||
|
||||
### LibreOffice Base
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/libreoffice-base.png)
|
||||
In case it's not obvious from its name, [Base][1] is part of the [LibreOffice][2] productivity suite, which includes Writer (word processing), Calc (spreadsheet), Impress (presentations), Draw (graphics), Charts (chart creation), and Math (formulas). As such, Base integrates with the other LibreOffice applications, much like Access does with the Microsoft Office suite. This means you can import and export data from Base into the suite's other applications to create financial reports, mail merges, charts, and more.
|
||||
|
||||
Base includes drivers that natively support multi-user database engines, including the open source MySQL, MariaDB, and PostgreSQL; Access; and other JDBC and ODBC-compliant databases. Built-in wizards and table definitions make it easy for new users to quickly get started building tables, writing queries, and creating forms and reports (such as invoices, sales reports, and customer lists). To learn more, consult the comprehensive [user manual][3] and dive into the [user forums][4]. If you're still stuck, you can find a [certified][5] support professional to help you out.
|
||||
|
||||
Installers are available for Linux, MacOS, Windows, and Android. LibreOffice is available under the [Mozilla Public License v2][6]; if you'd like to join the large contributor community and help improve the software, visit the [Get Involved][7] section of LibreOffice's website.
|
||||
|
||||
### DB Browser for SQLite
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/sqlitebrowser.png)
|
||||
|
||||
[DB Browser for SQLite][8] enables users to create and use SQLite database files without having to know complex SQL commands. This, plus its spreadsheet-like interface and pre-built wizards, makes it a great option for new database users to get going without much background knowledge.
|
||||
|
||||
Although the application has gone through several name changes, from the original Arca Database Browser to the SQLite Database Browser and finally to the current name (adopted in 2014 to avoid confusion with SQLite), it's stayed true to its goal of being easy for users to operate.
|
||||
|
||||
Its wizards enable users to easily create and modify database files, tables, indexes, records, etc.; import and export data to common file formats; create and issue queries and searches; and more. Installers are available for Windows, MacOS, and a variety of Linux versions, and its [wiki on GitHub][9] offers a wealth of information for users and developers.
|
||||
|
||||
DB Browser for SQLite is [bi-licensed][10] under the Mozilla Public License Version 2 and the GNU General Public License Version 3 or later, and you can download the source code from the project's website.
|
||||
|
||||
### Kexi
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/kexi-3.0-table-view.png)
|
||||
As the database application in the [Calligra Suite][11] productivity software for the KDE desktop, [Kexi][12] integrates with the other applications in the suite, including Words (word processing), Sheets (spreadsheet), Stage (presentations), and Plan (project management).
|
||||
|
||||
As a full member of the [KDE][13] project, Kexi is purpose-built for KDE Plasma, but it's not limited to KDE users: Linux, BSD, and Unix users running GNOME can run the database, as can MacOS and Windows users.
|
||||
|
||||
Kexi's website says its development was "motivated by the lack of rapid application development ([RAD][14]) tools for database systems that are sufficiently powerful, inexpensive, open standards driven, and portable across many operating systems and hardware platforms." It has all the standard features you'd expect: designing databases, storing data, doing queries, processing data, and so forth.
|
||||
|
||||
Kexi is available under the [LGPL][15] open source license and you can download its [source code][16] from its development wiki. If you'd like to learn more, take a look at its [user handbook][17], [forums][18], and [userbase wiki][17].
|
||||
|
||||
### nuBuilder Forte
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/screenshot_from_2018-10-17_13-23-25.png)
|
||||
[NuBuilder Forte][19] is designed to be as easy as possible for people to use. It's a browser-based tool for developing web-based database applications.
|
||||
|
||||
Its clean interface and low-code tools (including support for drag-and-drop) allow users to create and use a database quickly. Because the application is fully web-based, the data is accessible from a browser anywhere. Everything is stored in MySQL and can be backed up in one database file.
|
||||
|
||||
It uses industry-standard coding languages—HTML, PHP, JavaScript, and SQL—which also makes it easy for developers to get started.
|
||||
|
||||
Help is available in [videos][20] and other [documentation][21] for topics including creating forms, doing searches, building reports, and more.
|
||||
|
||||
nuBuilder Forte is licensed under [GPLv3.0][22] and you can download it on [GitHub][23]. You can learn more by consulting the [nuBuilder Forum][24] or watching its [demo][25] video.
|
||||
|
||||
Do you have a favorite open source database tool for building simple projects with little or no coding skill required? If so, please share in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/alternatives/access
|
||||
|
||||
作者:[Opensource.com][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.libreoffice.org/discover/base/
|
||||
[2]: https://www.libreoffice.org/
|
||||
[3]: https://documentation.libreoffice.org/en/english-documentation/base/
|
||||
[4]: http://document-foundation-mail-archive.969070.n3.nabble.com/Users-f1639498.html
|
||||
[5]: https://www.libreoffice.org/get-help/professional-support/
|
||||
[6]: https://www.libreoffice.org/download/license/
|
||||
[7]: https://www.libreoffice.org/community/get-involved/
|
||||
[8]: http://sqlitebrowser.org/
|
||||
[9]: https://github.com/sqlitebrowser/sqlitebrowser/wiki
|
||||
[10]: https://github.com/sqlitebrowser/sqlitebrowser/blob/master/LICENSE
|
||||
[11]: https://www.calligra.org/
|
||||
[12]: https://www.calligra.org/kexi/
|
||||
[13]: https://www.kde.org/
|
||||
[14]: http://en.wikipedia.org/wiki/Rapid_application_development
|
||||
[15]: http://kexi-project.org/wiki/wikiview/index.php@KexiLicense.html
|
||||
[16]: http://kexi-project.org/wiki/wikiview/index.php@Download.html
|
||||
[17]: https://userbase.kde.org/Kexi/Handbook
|
||||
[18]: http://forum.kde.org/kexi
|
||||
[19]: https://www.nubuilder.com/
|
||||
[20]: https://www.nubuilder.com/videos
|
||||
[21]: https://www.nubuilder.com/wiki
|
||||
[22]: https://github.com/nuSoftware/nuBuilder4/blob/master/LICENSE.txt
|
||||
[23]: https://github.com/nuSoftware/nuBuilder4
|
||||
[24]: https://forums.nubuilder.com/viewforum.php?f=18&sid=7036bccdc08ba0da73181bc72cd63c62
|
||||
[25]: https://www.youtube.com/watch?v=tdh9ILCUAco&feature=youtu.be
|
@ -0,0 +1,74 @@
|
||||
MidnightBSD Hits 1.0! Checkout What’s New
|
||||
======
|
||||
A couple days ago, Lucas Holt announced the release of MidnightBSD 1.0. Let’s take a quick look at what is included in this new release.
|
||||
|
||||
### What is MidnightBSD?
|
||||
|
||||
![MidnightBSD][1]
|
||||
|
||||
[MidnightBSD][2] is a fork of FreeBSD. Lucas created MidnightBSD to be an option for desktop users and for BSD newbies. He wanted to create something that would allow people to quickly get a desktop experience on BSD. He believed that other options had too much of a focus on the server market.
|
||||
|
||||
### What is in MidnightBSD 1.0?
|
||||
|
||||
According to the [release notes][3], most of the work in 1.0 went towards updating the base system, improving the package manager and updating tools. The new release is compatible with FreeBSD 10-Stable.
|
||||
|
||||
Mports (MidnightBSD’s package management system) has been upgraded to support installing multiple packages with one command. The `mport upgrade` command has been fixed. Mports now tracks deprecated and expired packages. A new package format was also introduced.
|
||||
|
||||
<https://www.youtube.com/embed/-rlk2wFsjJ4>
|
||||
|
||||
Other changes include:
|
||||
|
||||
* [ZFS][4] is now supported as a boot file system. Previously, ZFS could only be used for additional storage.
|
||||
* Support for NVME SSDs
|
||||
* AMD Ryzen and Radeon support have been improved.
|
||||
* Intel, Broadcom, and other drivers updated.
|
||||
* bhyve support has been ported from FreeBSD
|
||||
* The sensors framework was removed because it was causing locking issues.
|
||||
* Sudo was removed and replaced with [doas][5] from OpenBSD.
|
||||
* Added support for Microsoft hyper-v
|
||||
|
||||
|
||||
|
||||
### Before you upgrade…
|
||||
|
||||
If you are a current MidnightBSD user or are thinking of trying out the new release, it would be a good idea to wait. Lucas is currently rebuilding packages to support the new package format and tooling. He also plans to upgrade packages and ports for the desktop environment over the next couple of months. He is currently working on porting Firefox 52 ESR because it is the last release that does not require Rust. He also hopes to get a newer version of Chromium ported to MidnightBSD. I would recommend keeping an eye on the MidnightBSD [Twitter][6] feed.
|
||||
|
||||
### What happened to 0.9?
|
||||
|
||||
You might notice that the previous release of MidnightBSD was 0.8.6. Now, you might be wondering “Why the jump to 1.0”? According to Lucas, he ran into several issues while developing 0.9. In fact, he restarted it several times. He ended up taking CURRENT in a different direction than the 0.9 branch, and it became 1.0. Some packages also had an issue with the 0.* numbering system.
|
||||
|
||||
### Help Needed
|
||||
|
||||
Currently, the MidnightBSD project is the work of pretty much one guy, Lucas Holt. This is the main reason why development has been slow. If you are interested in helping out, you can contact him on [Twitter][6].
|
||||
|
||||
In the [release announcement video][7], Lucas said that he had encountered problems with upstream projects accepting patches. They seem to think that MidnightBSD is too small. This often means that he has to port an application from scratch.
|
||||
|
||||
### Thoughts
|
||||
|
||||
I have a thing for the underdog. Of all the BSDs that I have interacted with, that moniker fits MidnightBSD the most: one guy trying to create an easy desktop experience. Currently, there is only one other BSD trying to do something similar: Project Trident. I think that this is a real barrier to BSD's success. Linux succeeds because people can quickly and easily install it. Hopefully, MidnightBSD does that for BSD, but right now it has a long way to go.
|
||||
|
||||
Have you ever used MidnightBSD? If not, what is your favorite BSD? What other BSD topics should we cover? Let us know in the comments below.
|
||||
|
||||
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][8].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/midnightbsd-1-0-release/
|
||||
|
||||
作者:[John Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/john/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/midnightbsd-wallpaper.jpeg
|
||||
[2]: https://www.midnightbsd.org/
|
||||
[3]: https://www.midnightbsd.org/notes/
|
||||
[4]: https://itsfoss.com/what-is-zfs/
|
||||
[5]: https://man.openbsd.org/doas
|
||||
[6]: https://twitter.com/midnightbsd
|
||||
[7]: https://www.youtube.com/watch?v=-rlk2wFsjJ4
|
||||
[8]: http://reddit.com/r/linuxusersgroup
|
@ -0,0 +1,82 @@
|
||||
TimelineJS: An interactive, JavaScript timeline building tool
|
||||
======
|
||||
Learn how to tell a story with TimelineJS.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk)
|
||||
|
||||
[TimelineJS 3][1] is an open source storytelling tool that anyone can use to create visually rich, interactive timelines to post on their websites. To get started, simply click “Make a Timeline” on the homepage and follow the easy [step-by-step instructions][1].
|
||||
|
||||
TimelineJS was developed at Northwestern University’s KnightLab in Evanston, Illinois. KnightLab is a community of designers, developers, students, and educators who work on experiments designed to push journalism into new spaces. TimelineJS has been used by more than 250,000 people, according to its website, to tell stories viewed millions of times. And TimelineJS 3 is available in more than 60 languages.
|
||||
|
||||
Joe Germuska, the “chief nerd” who runs KnightLab’s technology, professional staff, and student fellows, explains, "TimelineJS was originally developed by Northwestern professor Zach Wise. He assigned his students a task to tell stories in a timeline format, only to find that none of the free available tools were as good as he thought they could be. KnightLab funded some of his time to develop the tool in 2012. Near the end of that year, I joined the lab, and among my early tasks was to bring TimelineJS in as a fully supported project of the lab. The next year, I helped Zach with a rewrite to address some issues. Along the way, many students have contributed. Interestingly, a group of students from Victoria University in Wellington, New Zealand, worked on TimelineJS (and some of our other tools) as part of a class project in 2016."
|
||||
|
||||
"In general, we designed TimelineJS to make it easy for non-technical people to tell rich, dynamic stories on the web in the context of events in time.”
|
||||
|
||||
Users create timelines by adding content into a Google spreadsheet. KnightLab provides a downloadable template that can be edited to create custom timelines. Experts can use their JSON skills to [create custom installations][2] while keeping TimelineJS’s core functionality.
|
||||
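For the curious, here is a minimal sketch of what a custom JSON timeline might look like. The field names follow the JSON-format documentation linked above, but the values are purely illustrative:

```
{
  "title": {
    "text": { "headline": "My Timeline", "text": "A hypothetical example" }
  },
  "events": [
    {
      "start_date": { "year": "2018", "month": "10", "day": "6" },
      "text": { "headline": "First event", "text": "An illustrative entry" }
    }
  ]
}
```

Each event needs a start date and some text; media, end dates, and the other optional fields are described in the format documentation.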
|
||||
This easy-to-follow [Vimeo video][3] shows how to get started with TimelineJS, and I used it myself to create my first timeline.
|
||||
|
||||
### Open sourcing the Adirondacks
|
||||
|
||||
Reid Larson, research and scholarly communication librarian at Hamilton College in Clinton, New York, began searching for ways to combine open data and visualization to chronicle the history of Essex County (a county in northern New York that makes up part of the Adirondacks), in the 1990s, when he was the director of the Essex County Historical Society/Adirondack History Center Museum.
|
||||
|
||||
"I wanted to take all the open data available on the history of Essex County and be able to present it to people visually. Most importantly, I wanted to make sure that the data would be available for use even if the applications used to present it are no longer available or supported," Larson explains.
|
||||
|
||||
Now at Hamilton College, Larson has found TimelineJS to be the ideal open source program to do just what he wanted: Chronicle and present a visually appealing timeline of selected places.
|
||||
|
||||
"It was a professor who was working on a project that required a solution such as Timeline, and after researching the possibilities, I started using Timeline for that project and subsequent projects," Larson adds.
|
||||
|
||||
TimelineJS can be used via a web browser, or the source code can be downloaded from [GitHub][4] for local use.
|
||||
|
||||
"I’ve been using the browser version, but I push it to the limits to see how far I can go with it, such as adding my own HTML tags. I want to fully understand it so that I can educate the students and faculty at Hamilton College on its uses," Larson says.
|
||||
|
||||
### An open source Eagle Scout project
|
||||
|
||||
Not only has Larson used TimelineJS for collegiate purposes, but his son, Erik, created an [interactive historical website][5] for his Eagle Scout project in 2017 using WordPress. The project is a chronicle of places in Waterville, New York, just south of Clinton, in Oneida County. Erik explains that he wants what he started to expand beyond the 36 places in Waterville. "The site is an experiment in online community building," Erik’s website reads.
|
||||
|
||||
Larson says he did a lot of the “tech work” on the project so that Erik could concentrate on content. The site was created with [Omeka][6], an open source web publishing platform for sharing digital collections and creating media-rich online exhibits, and [Curatescape][7], a framework for the open source Omeka CMS.
|
||||
|
||||
Larson explains that a key feature of TimelineJS is that it uses Google Sheets to store and organize the data used in the timeline. "Google Sheets is a good structure for organizing data simply, and that data will be available even if TimelineJS becomes unavailable in the future."
|
||||
|
||||
Larson says that he prefers using [ArcGIS][8] over KnightLab’s StoryMap because it uses spreadsheets to store content, whereas [StoryMap][9] does not. Larson is looking forward to integrating augmented reality into his projects in the future.
|
||||
|
||||
### Create your own open source timeline
|
||||
|
||||
I plan on using TimelineJS to create interactive content for the Development and Alumni Relations department at Clarkson University, where I am the development communications specialist. To practice with working with it, I created [a simple timeline][10] of the articles I’ve written for [Opensource.com][11]:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/google-sheet-timeline.png)
|
||||
![](https://opensource.com/sites/default/files/uploads/wordpress-timeline.png)
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/website-timeline.png)
|
||||
|
||||
As Reid Larson stated, it is very easy to use and the results are quite satisfactory. I was able to get a working timeline created and posted to my WordPress site in a matter of minutes. I used media that I had already uploaded to my Media Library in WordPress and simply copied the image address. I typed in the dates, locations, and information in the other cells and used “publish to web” under “file” in the Google spreadsheet. That produced a link and embed code. I created a new post in my WordPress site and pasted in the embed code, and the timeline was live and working.
|
||||
|
||||
Of course, there is more customization I need to do, but I was able to get it working quickly and easily, much as Reid said it would.
|
||||
|
||||
I will continue experimenting with TimelineJS on my own site, and when I get more comfortable with it, I’ll use it for my professional projects and try out the other apps that KnightLab has created for interactive, visually appealing storytelling.
|
||||
|
||||
What might you use TimelineJS for?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/create-interactive-timelines-open-source-tool
|
||||
|
||||
作者:[Jeff Macharyas][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rikki-endsley
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://timeline.knightlab.com/
|
||||
[2]: https://timeline.knightlab.com/docs/json-format.html
|
||||
[3]: https://vimeo.com/knightlab/timelinejs
|
||||
[4]: https://github.com/NUKnightLab/TimelineJS3
|
||||
[5]: http://nysplaces.com/
|
||||
[6]: https://github.com/omeka
|
||||
[7]: https://github.com/CPHDH/Curatescape
|
||||
[8]: https://www.arcgis.com/index.html
|
||||
[9]: https://storymap.knightlab.com/
|
||||
[10]: https://macharyas.com/index.php/2018/10/06/timeline/
|
||||
[11]: http://opensource.com/
|
@ -0,0 +1,66 @@
|
||||
9 个方法,提升开发者与设计师之间的协作
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab1.png?itok=ULQdGjlV)
|
||||
|
||||
本文由我与 [Jason Porter][1] 共同完成。
|
||||
|
||||
在任何软件项目中,设计至关重要。设计师不像开发团队那样熟悉其内部工作,但迟早都要知道开发人员写代码的意图。
|
||||
|
||||
两边都有自己的成见。工程师经常认为设计师们古怪、不理性,而设计师也认为工程师们死板、要求高。但说到底,设计师和开发者们的命运永远交织在一起。
|
||||
|
||||
做到以下九件事,便可以增强他们之间的合作。
|
||||
|
||||
### 1\. 首先,说实在的,打破壁垒
|
||||
|
||||
几乎每一个行业都有某种形式的“<ruby>迷惑之墙<rt>wall of confusion</rt></ruby>”。无论你干什么工作,拆除这堵墙的第一步就是要双方都认同它需要拆除。一旦所有的人都认为现有的流程效率低下,你就可以从其他想法中获得灵感,然后解决问题。
|
||||
|
||||
### 2\. 学会共情
|
||||
|
||||
在撸起袖子开始干之前,休息一下。这是团队建设的重要的交汇点。一个时机去认识到:我们都是成人,我们都有自己的优点与缺点,更重要的是,我们是一个团队。围绕工作流程与工作效率的讨论会经常发生,因此在开始之前,建立一个信任与协作的基础至关重要。
|
||||
|
||||
### 3\. 认识差异
|
||||
|
||||
设计师和开发者从不同的角度攻克问题。对于相同的问题,设计师会追求更好的效果,而开发者会寻求更高的效率。这两种观点不必互相排斥。谈判和妥协的余地很大,并且在二者之间必然存在一个用户满意度最佳的中点。
|
||||
|
||||
### 4\. 拥抱共性
|
||||
|
||||
这一切都与工作流程相关。<ruby>持续集成<rt>Continuous Integration</rt></ruby>/<ruby>持续交付<rt>Continuous Delivery</rt></ruby>、scrum、agile 等等,基本上都在说同一件事:构思,迭代,考察,重复。迭代和重复是两种工作的相同点。因此,不再让开发周期紧跟设计周期,而是同时并行地运行它们,这样会更有意义。<ruby>同步周期<rt>Syncing cycles</rt></ruby>允许团队在每一步上交流、协作、互相影响。
|
||||
|
||||
### 5\. 管理期望
|
||||
|
||||
一切冲突的起因一言以蔽之:期望不符。因此,防止系统性分裂的简单办法就是通过确保团队成员在说之前先想、在做之前先说来管理期望。设定的期望往往会通过日常对话不断演变。强迫团队通过开会以达到其效果可能会适得其反。
|
||||
|
||||
### 6\. 按需开会
|
||||
|
||||
只在工作开始和工作结束时各开一次会远远不够,但也不意味着每天或每周都要开会,定期开会同样可能适得其反。试着按需开会吧。即使是在茶水间,即兴会议也可能碰撞出很棒的想法。如果你的团队是分散式的,或者哪怕只有一名远程员工,视频会议、文本聊天或者打电话都是开会的好方式。团队中的每个人都有多种方式互相沟通,这一点非常重要。
|
||||
|
||||
### 7\. 建立词库
|
||||
|
||||
设计师和开发者有时候会用不同的术语来称呼相似的想法,就像把猫叫成了咪。毕竟,比起术语的准确与贴切,更重要的是让所有人都用得惯。
|
||||
|
||||
### 8\. 学会沟通
|
||||
|
||||
无论什么时候,团队中的每个人都有责任去维持一个有效的沟通。每个人都应该努力做到一字一板。
|
||||
|
||||
### 9\. 不断改善
|
||||
|
||||
仅一名团队成员就能破坏整个进度。全力以赴。如果每个人都不关心产品或目标,继续项目或者做出改变的动机就会出现问题。
|
||||
|
||||
本文参考 [Designers and developers: Finding common ground for effective collaboration][2],演讲的作者将会出席在旧金山五月 8-10 号举办的[Red Hat Summit 2018][3]。[五月 7 号][3]注册将节省 500 美元。支付时使用优惠码 **OPEN18** 以获得更多折扣。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/9-ways-improve-collaboration-developers-designers
|
||||
|
||||
作者:[Jason Brock][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[LuuMing](https://github.com/LuuMing)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/jkbrock
|
||||
[1]:https://opensource.com/users/lightguardjp
|
||||
[2]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154267
|
||||
[3]:https://www.redhat.com/en/summit/2018
|
@ -1,131 +0,0 @@
|
||||
Linux vs Mac: Linux 比 Mac 好的七个原因
|
||||
======
|
||||
最近我们谈论了一些[为什么 Linux 比 Windows 好][1]的原因。毫无疑问, Linux 是个非常优秀的平台。但是它和其他的操作系统一样也会有缺点。对于某些专门的领域,像是游戏, Windows 当然更好。 而对于视频编辑等任务, Mac 系统可能更为方便。这一切都取决于你的爱好,以及你想用你的系统做些什么。在这篇文章中,我们将会介绍一些 Linux 相对于 Mac 更好的一些地方。
|
||||
|
||||
如果你已经在用 Mac 或者打算买一台 Mac 电脑,我们建议你仔细考虑一下,看看是改为使用 Linux 还是继续使用 Mac 。
|
||||
|
||||
### Linux 比 Mac 好的 7 个原因
|
||||
|
||||
![Linux vs Mac: 为什么 Linux 更好][2]
|
||||
|
||||
Linux 和 macOS 都是类 Unix 操作系统,并且都支持 Unix 命令行、 bash 和其他一些命令行工具,相比于 Windows ,他们所支持的应用和游戏比较少。但缺点也仅仅如此。
|
||||
|
||||
平面设计师和视频剪辑师更加倾向于使用 Mac 系统,而 Linux 更加适合做开发、系统管理、运维的工程师。
|
||||
|
||||
那要不要使用 Linux 呢,为什么要选择 Linux 呢?下面是根据实际经验和理性分析给出的一些建议。
|
||||
|
||||
#### 1\. 价格
|
||||
|
||||
![Linux vs Mac: 为什么 Linux 更好][3]
|
||||
|
||||
假设你只是需要浏览文件、看电影、下载图片、写文档、制作报表或者做一些类似的工作,并且你想要一个更加安全的系统。
|
||||
|
||||
那在这种情况下,你觉得花费几百块买个系统完成这项工作,或者花费更多直接买个 Macbook 划算吗?当然,最终的决定权还是在你。
|
||||
|
||||
买个装好 Mac 系统的电脑,还是买个便宜的电脑然后自己装上免费的 Linux 系统,这要看你自己的偏好。就我个人而言,除了音视频剪辑创作之外,Linux 都非常好用;而对于音视频方面,我更倾向于使用 Final Cut Pro(专业的视频编辑软件)和 Logic Pro X(专业的音乐制作软件)(这两款软件都是苹果公司推出的)。
|
||||
|
||||
#### 2\. 硬件支持
|
||||
|
||||
![Linux vs Mac: 为什么 Linux 更好][4]
|
||||
|
||||
Linux 支持多种平台。无论你的电脑配置如何,你都可以在上面安装 Linux;无论性能好坏,Linux 都可以运行。[即使你的电脑已经使用很久了,你仍然可以通过选择安装合适的发行版,让 Linux 在你的电脑上流畅地运行][5]。
|
||||
|
||||
而 Mac 不同,它是苹果机专用系统。如果你希望买个便宜的电脑,然后自己装上 Mac 系统,这几乎是不可能的。一般来说 Mac 都是和苹果设备连在一起的。
|
||||
|
||||
这是[在非苹果系统上安装 Mac OS 的教程][6]. 这里面需要用到的专业技术以及可能遇到的一些问题将会花费你许多时间,你需要想好这样做是否值得。
|
||||
|
||||
总之,Linux 所支持的硬件平台很广泛,而 MacOS 相对而言则非常少。
|
||||
|
||||
#### 3\. 安全性
|
||||
|
||||
![Linux vs Mac: 为什么 Linux 更好][7]
|
||||
|
||||
很多人都说 iOS 和 Mac 是非常安全的平台。的确,相比于 Windows,它确实比较安全,可并不一定有 Linux 安全。
|
||||
|
||||
我不是在危言耸听。Mac 系统上也有不少恶意软件和广告,并且[数量与日俱增][8]。我认识一些不太懂技术的用户,他们使用着非常缓慢的 Mac 电脑,并为此苦苦挣扎。一项快速调查显示,[浏览器恶意劫持软件][9]是罪魁祸首。
|
||||
|
||||
从来没有绝对安全的操作系统,Linux 也不例外。 Linux 也有漏洞,但是 Linux 发行版提供的及时更新弥补了这些漏洞。另外,到目前为止在 Linux 上还没有自动运行的病毒或浏览器劫持恶意软件的案例发生。
|
||||
|
||||
这可能也是一个你应该选择 Linux 而不是 Mac 的原因。
|
||||
|
||||
#### 4\. 可定制性与灵活性
|
||||
|
||||
![Linux vs Mac: 为什么 Linux 更好][10]
|
||||
|
||||
如果你有不喜欢的东西,自己定制或者修改它都行。
|
||||
|
||||
举个例子,如果你不喜欢 Ubuntu 18.04.1 的 [Gnome 桌面环境][11],你可以换成 [KDE Plasma][11]。 你也可以尝试一些 [Gnome 扩展][12]丰富你的桌面选择。这种灵活性和可定制性在 Mac OS 是不可能有的。
|
||||
|
||||
除此之外你还可以根据需要修改一些操作系统的代码(但是可能需要一些专业知识)以创造出适合你的系统。这个在 Mac OS 上可以做吗?
|
||||
|
||||
另外你可以根据需要从一系列的 Linux 发行版中进行选择。比如说,如果你喜欢 Mac OS 上的工作流,[Elementary OS][13] 可能是个不错的选择。你想在你的旧电脑上装上一个轻量级的 Linux 发行版系统吗?这里是一个[轻量级 Linux 发行版列表][5]。相比较而言,Mac OS 缺乏这种灵活性。
|
||||
|
||||
#### 5\. 使用 Linux 有助于你的职业生涯 [针对 IT 行业和科学领域的学生]
|
||||
|
||||
![Linux vs Mac: 为什么 Linux 更好][14]
|
||||
|
||||
对于 IT 领域的学生和求职者而言,这是有争议的但是也是有一定的帮助的。使用 Linux 并不会让你成为一个优秀的人,也不一定能让你得到任何与 IT 相关的工作。
|
||||
|
||||
但是当你开始使用 Linux 并且开始探索如何使用的时候,你将会获得非常多的经验。作为一名技术人员,你迟早会接触终端,学习通过命令行实现文件系统管理以及应用程序安装。你可能不会知道这些都是一些 IT 公司的新职员需要培训的内容。
|
||||
|
||||
除此之外,Linux 在就业市场上还有很大的发展空间。 Linux 相关的技术有很多( Cloud 、 Kubernetes 、Sysadmin 等),您可以学习,获得证书并获得一份相关的高薪的工作。要学习这些,你必须使用 Linux 。
|
||||
|
||||
#### 6\. 可靠
|
||||
|
||||
![Linux vs Mac: 为什么 Linux 更好][15]
|
||||
|
||||
想想为什么服务器上用的都是 Linux 系统,当然是因为它可靠。
|
||||
|
||||
但是它为什么可靠呢,相比于 Mac OS ,它的可靠体现在什么方面呢?
|
||||
|
||||
答案很简单——给用户更多的控制权,同时提供更好的安全性。在 Mac OS 上,你并不能完全控制它,这样做是为了让操作变得更容易,同时提高你的用户体验。使用 Linux ,你可以做任何你想做的事情——这可能会导致(对某些人来说)糟糕的用户体验——但它确实使其更可靠。
|
||||
|
||||
#### 7\. 开源
|
||||
|
||||
![Linux vs Mac: 为什么 Linux 更好][16]
|
||||
|
||||
开源并不是每个人都关心的。但对我来说,Linux 最重要的优势在于它的开源特性。下面讨论的大多数观点都是开源软件的直接优势。
|
||||
|
||||
简单解释一下,如果是开源软件,你可以自己查看或者修改它。但对 Mac 来说,苹果拥有独家控制权。即使你有足够的技术知识,也无法查看 Mac OS 的源代码。
|
||||
|
||||
形象点说,使用 Mac 系统就像得到一辆车,但缺点是你不能打开引擎盖看看里面是什么。那可能非常糟糕!
|
||||
|
||||
如果你想深入了解开源软件的优势,可以在 OpenSource.com 上浏览一下 [Ben Balter 的文章][17]。
|
||||
|
||||
### 总结
|
||||
|
||||
现在你应该知道为什么 Linux 比 Mac 好了吧,你觉得呢?上面的这些原因可以说服你选择 Linux 吗?如果不行的话那又是为什么呢?
|
||||
|
||||
在下方评论让我们知道你的想法。
|
||||
|
||||
Note: 这里的图片是以企鹅俱乐部为原型的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/linux-vs-mac/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[Ryze-Borgia](https://github.com/Ryze-Borgia)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[1]: https://itsfoss.com/linux-better-than-windows/
|
||||
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Linux-vs-mac-featured.png
|
||||
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-1.jpeg
|
||||
[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-4.jpeg
|
||||
[5]: https://itsfoss.com/lightweight-linux-beginners/
|
||||
[6]: https://hackintosh.com/
|
||||
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-2.jpeg
|
||||
[8]: https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html
|
||||
[9]: https://www.imore.com/how-to-remove-browser-hijack
|
||||
[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-3.jpeg
|
||||
[11]: https://www.gnome.org/
|
||||
[12]: https://itsfoss.com/best-gnome-extensions/
|
||||
[13]: https://elementary.io/
|
||||
[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-5.jpeg
|
||||
[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-6.jpeg
|
||||
[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-7.jpeg
|
||||
[17]: https://opensource.com/life/15/12/why-open-source
|
@ -0,0 +1,45 @@
|
||||
写作是如何帮助技能拓展和事业成长的
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/graffiti-1281310_1920.jpg?itok=RCayfGKv)
|
||||
|
||||
在最近的[温哥华开源峰会][1]上,我参加了一个小组讨论,叫做“写作是如何改变你的职业生涯的(即使你不是个作家)”。主持人是 Opensource.com 的社区经理兼编辑 Rikki Endsley,成员有开源策略顾问 VM (Vicky) Brasseur,The New Stack 的创始人兼主编 Alex Williams,还有 The Scale Factory 的顾问 Dawn Foster。
|
||||
|
||||
Rikki 在她的[这篇文章][3]中总结了一些能愉悦你,并且能以意想不到的方式改善你职业生涯的写作方法,我在峰会上的发言是受她这篇文章的启发。透露一下,我认识 Rikki 很久了,我们在同一家公司共事了很多年,一起带过孩子,到现在还是很亲密的朋友。
|
||||
|
||||
### 写作和学习
|
||||
|
||||
正如 Rikki 对这个小组讨论的描述,“即使你自认为不是一个‘作家’,你也应该考虑写一下对开源的贡献,还有你的项目或者社区”。写作是一种很好的方式,来分享自己的知识并让别人参与到你的工作中来,当然它对个人也有好处。写作能帮助你结识新人,学习新技能,还能改善你的沟通。
|
||||
|
||||
我发现写作能让我搞清楚自己对某个主题有哪些不懂的地方。写作的过程会让知识体系的空白很突出,这激励了我通过进一步的研究、阅读和提问来填补空白。
|
||||
|
||||
Rikki 说:“写那些你不知道的东西会更加困难也更加耗时,但是也更有成就感,更有益于你的事业。我发现写我不知道的东西有助于自己学习,因为得研究透彻才能给读者解释清楚。”
|
||||
|
||||
把你刚学到的东西写出来对其他也在学习这些知识的人是很有价值的。[Julia Evans][4] 经常在她的博客里写有关学习新技能的文章。她能把主题分解成一个个小的部分,这种方法对读者很友好,容易上手。Evans 在自己的博客中带领读者了解她的学习过程,指出在这个过程中哪些是对她有用的,哪些是没用的,基本消除了读者的学习障碍,为新手清扫了道路。
|
||||
|
||||
### 更明确的沟通
|
||||
|
||||
|
||||
写作有助于练习思考和准确讲话,尤其是面向国际受众写作(或演讲)时。例如,在[这篇文章中][5],Isabel Drost-Fromm 为那些母语不是英语的演讲者提供了几个技巧来消除歧义。不管是在会议上还是在自己团队内发言,写作还能帮你在演示之前理清思路。
|
||||
|
||||
Rikki 说:“写文章的过程有助于我组织整理自己的发言和演示稿,也是一个给参会者提供笔记的好方式,还可以分享给没有参加活动的更多国际观众。”
|
||||
|
||||
如果你有兴趣,我鼓励你去写作。我强烈建议你参考这里提到的文章,开始思考你要写的内容。 不幸的是,我们在开源峰会上的讨论没有记录,但我希望将来能再做一次讨论,分享更多的想法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/2018/9/how-writing-can-help-you-learn-new-skills-and-grow-your-career
|
||||
|
||||
作者:[Amber Ankerholz][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[belitex](https://github.com/belitex)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/aankerholz
|
||||
[1]: https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/
|
||||
[2]: https://ossna18.sched.com/event/FAOF/panel-discussion-how-writing-can-change-your-career-for-the-better-even-if-you-dont-identify-as-a-writer-moderated-by-rikki-endsley-opensourcecom-red-hat?iframe=no#
|
||||
[3]: https://opensource.com/article/18/2/career-changing-magic-writing
|
||||
[4]: https://jvns.ca/
|
||||
[5]: https://www.linux.com/blog/event/open-source-summit-eu/2017/12/technical-writing-international-audience
|
@ -0,0 +1,45 @@
|
||||
万维网的创建者正在创建一个新的分布式网络
|
||||
======
|
||||
|
||||
**万维网的创建者 Tim Berners-Lee 公布了他计划创建一个新的分布式网络,网络中的数据将由用户控制**
|
||||
|
||||
[Tim Berners-Lee][1] 以创建万维网而闻名,万维网就是你现在所知的互联网。二十多年之后,Tim 致力于将互联网从企业巨头的掌控中解放出来,并通过分布式网络将权力交回给人们。
|
||||
|
||||
Berners-Lee 对互联网“强权”们处理用户数据的方式感到不满,所以他[开始致力于自己的开源项目][2] Solid,“来将网络上的权力归还给人们”。
|
||||
|
||||
> Solid 改变了当前用户必须将个人数据交给数字巨头以换取可感知价值的模型。正如我们都已发现的那样,这不符合我们的最佳利益。Solid 是我们如何驱动网络进化以恢复平衡——以一种革命性的方式,让我们每个人完全地控制数据,无论数据是否是个人数据。
|
||||
|
||||
![Tim Berners-Lee is creating a decentralized web with open source project Solid][3]
|
||||
|
||||
基本上,[Solid][4] 是一个基于现有网络构建的平台,在这里你可以创建自己的 “pod”(个人数据存储)。由你来决定这个 pod 托管在哪里、谁可以访问哪些数据,以及数据如何通过这个 pod 分享。
|
||||
|
||||
Berners-Lee 相信 Solid “将以一种全新的方式,授权个人、开发者和企业来构思、构建和寻找创新、可信和有益的应用和服务。”
|
||||
|
||||
开发人员需要将 Solid 集成进他们的应用程序和网站中。 Solid 仍在早期阶段,所以目前没有相关的应用程序。但是项目网站宣称“第一批 Solid 应用程序正在开发当中”。
|
||||
|
||||
Berners-Lee 已经创立了一家名为 [Inrupt][5] 的初创公司,并已从麻省理工学院休假,全职投入 Solid 的工作,要将它“从少数人的愿景变成多数人的现实”。
|
||||
|
||||
如果你对 Solid 感兴趣,[学习如何开发应用程序][6]或者以自己的方式[给项目做贡献][7]。当然,建立和推动 Solid 的广泛采用将需要大量的努力,所以每一点的贡献都将有助于分布式网络的成功。
|
||||
|
||||
你认为[分布式网络][8]会成为现实吗?你是如何看待分布式网络,特别是 Solid 项目的?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/solid-decentralized-web/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[ypingcn](https://github.com/ypingcn)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[1]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
|
||||
[2]: https://medium.com/@timberners_lee/one-small-step-for-the-web-87f92217d085
|
||||
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/tim-berners-lee-solid-project.jpeg
|
||||
[4]: https://solid.inrupt.com/
|
||||
[5]: https://www.inrupt.com/
|
||||
[6]: https://solid.inrupt.com/docs/getting-started
|
||||
[7]: https://solid.inrupt.com/community
|
||||
[8]: https://tech.co/decentralized-internet-guide-2018-02
|
@ -0,0 +1,110 @@
|
||||
# [使用 Argbash 来改进你的 Bash 脚本][1]
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/11/argbash-1-945x400.png)
|
||||
|
||||
你编写或维护过有意义的 bash 脚本吗?如果回答是,那么你可能希望它们以标准且健壮的方式接收命令行参数。Fedora 最近得到了[一个很好的附加组件][2],它可以帮助你生成更好的脚本。不用担心,它不会花费你很多时间或精力。
|
||||
|
||||
### 为什么是 Argbash?
|
||||
|
||||
Bash 是一种解释性的命令行语言,没有标准库。因此,如果你编写 bash 脚本,并希望其命令行界面符合 [POSIX][3] 和 [GNU CLI][4] 标准,那么你基本上只有以下两个选择:
|
||||
|
||||
1. 直接编写为脚本量身定制的参数解析功能(可使用内置的 `getopts`)。
|
||||
|
||||
2. 使用外部 bash 模块。
|
||||
|
||||
第一个选项看起来非常愚蠢,因为正确实现接口并非易事。但是,从 [Stack Overflow][5] 到 [Bash Hackers][6] wiki 的各种站点上,它被认为是最佳选择。
|
||||
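为了说明这一点,下面是一个假设性的 getopts 片段,手工解析类似本文稍后示例中的 `-l <length>`、`-V` 和一个位置参数。即便接口这么小,也需要不少样板代码,而且它仍然不支持长选项:

```
# 假设性示例:用内置的 getopts 手工解析 -l <length> 和 -V
length=80
verbose=0
while getopts "l:V" opt; do
    case "$opt" in
        l) length="$OPTARG" ;;
        V) verbose=1 ;;
        *) echo "用法:$0 [-l length] [-V] [character]" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))
character="${1:--}"   # 位置参数,默认为破折号
```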
|
||||
第二个选项看起来更聪明,但使用模块有它自己的问题。最大的问题是你必须将其代码与脚本捆绑在一起。这可能意味着:
|
||||
|
||||
* 你将库作为单独的文件分发
|
||||
|
||||
* 在脚本的开头包含库代码
|
||||
|
||||
两个文件而不是一个显得很笨拙,而在脚本开头包含上千行的复杂库代码,又会污染你的脚本。
|
||||
|
||||
这是 Argbash [项目诞生][7]的主要原因。Argbash 是一个代码生成器,它为你的脚本生成一个量身定制的解析库。与其他 bash 模块的通用代码不同,它生成脚本所需的最少代码。此外,如果你不需要 100% 符合这些 CLI 标准,你可以生成更简单的代码。
|
||||
|
||||
### 示例
|
||||
|
||||
### 分析
|
||||
|
||||
假设你要实现一个脚本,它可以在终端窗口中[绘制条形图][8],你可以通过多次重复选择一个字符来做到这一点。这意味着你需要从命令行获取以下信息:
|
||||
|
||||
  * _组成线条的字符。如果未指定,使用破折号。_ 在命令行上,这是单值位置参数 _character_,其默认值为 `-`。
|
||||
|
||||
  * _线条的长度。如果未指定,则取 80。_ 这是一个单值可选参数 _--length_,默认值为 80。
|
||||
|
||||
* _Verbose 模式(用于调试)。_ 这是一个布尔型参数 _verbose_,默认情况下关闭。
|
||||
|
||||
由于脚本的主体非常简单,因此本文主要关注从命令行获取用户的输入到合适的脚本变量。Argbash 生成的代码将解析结果保存到 shell 变量 _arg\_character_, _arg\_length_ 和 _arg\_verbose_。
|
||||
|
||||
### 执行
|
||||
|
||||
要继续下去,你还需要 _argbash-init_ 和 _argbash_ bash 脚本,它们是 _argbash_ 包的一部分。因此,运行以下命令:
|
||||
```
|
||||
sudo dnf install argbash
|
||||
```
|
||||
|
||||
然后,使用 _argbash-init_ 来为 _argbash_ 生成模板,它会生成可执行脚本。你需要三个参数:一个名为 _character_ 的位置参数,一个可选的 _length_ 参数以及一个可选的布尔 _verbose_。将这些传递给 _argbash-init_,然后将输出传递给 _argbash_ :
|
||||
```
|
||||
argbash-init --pos character --opt length --opt-bool verbose script-template.sh
|
||||
argbash script-template.sh -o script
|
||||
./script
|
||||
```
|
||||
|
||||
看到帮助信息了吗?看起来该脚本不知道字符参数的默认选项。因此,看一下 [Argbash API][9],然后通过编辑脚本的模板部分来解决问题:
|
||||
```
|
||||
# ...
|
||||
# ARG_OPTIONAL_SINGLE([length],[l],[Length of the line],[80])
|
||||
# ARG_OPTIONAL_BOOLEAN([verbose],[V],[Debug mode])
|
||||
# ARG_POSITIONAL_SINGLE([character],[The element of the line],[-])
|
||||
# ARG_HELP([The line drawer])
|
||||
# ...
|
||||
```
|
||||
|
||||
Argbash 非常智能,它会让每个生成的脚本都成为其自身的模板,这意味着你不必另外保存源模板以供将来使用,只要别丢失生成的 bash 脚本就行。现在,重新生成这个线条绘制脚本,它就能按预期工作了:
|
||||
```
|
||||
argbash script -o script
|
||||
./script
|
||||
```
|
||||
|
||||
如你所见,一切正常。剩下要做的唯一事情就是完成线条绘图功能。
|
||||
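这个绘图功能可以是下面这样一个假设性的最小实现,追加在 Argbash 生成的解析代码之后。它只用到前文提到的 _arg_character_、_arg_length_ 和 _arg_verbose_ 三个变量,仅作示意:

```
# 假设性示例:追加在 Argbash 生成的解析代码之后
# (按照 Argbash 的约定,布尔参数的取值为 "on"/"off")
if [ "$arg_verbose" = "on" ]; then
    echo "character=$arg_character length=$arg_length" >&2
fi

line=""
for (( i = 0; i < arg_length; i++ )); do
    line="${line}${arg_character}"
done
echo "$line"
```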
|
||||
### 结论
|
||||
|
||||
你可能会发现包含解析代码的部分很长,但考虑到它允许你调用 _./script.sh x -Vl50_,并将其按照与 _./script -V -l 50 x_ 相同的方式来理解,确实需要一些代码才能做到这一点。
|
||||
|
||||
但是,通过在调用 _argbash-init_ 时将参数 _--mode_ 设置为 _minimal_,你可以将生成代码的复杂度和解析能力之间的平衡转向更简单的代码。这个选项将脚本的大小减少了大约 20 行,相当于生成的解析代码减少了大约 25%。另一方面,_full_ 选项使脚本更加智能。
|
||||
|
||||
如果你想要检查生成的代码,请给 _argbash_ 提供参数 _--commented_,它会将注释放入解析代码中,从而揭示各个部分背后的意图。与 [shflags][10]、[argsparse][11] 或 [bash-modules/arguments][12] 等其他参数解析库相比较,你将看到 Argbash 强大的简单性。如果出现了严重的错误,你需要快速修复解析功能中的某个故障,Argbash 也允许你这样做。
|
||||
|
||||
由于你很有可能是 Fedora 用户,因此你可以享受从官方仓库安装 Argbash 的便利。不过,也有一个[在线解析代码生成器][13]可供使用。此外,如果你在服务器上使用 Docker 工作,你可以试试 [Argbash Docker 镜像][14]。
|
||||
|
||||
因此,请享受并确保你的脚本具有令用户满意的命令行界面。Argbash 随时为你提供帮助,你只需付出很少的努力。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/improve-bash-scripts-argbash/
|
||||
|
||||
作者:[Matěj Týč][a]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/author/bubla/
|
||||
[1]:https://fedoramagazine.org/improve-bash-scripts-argbash/
|
||||
[2]:https://argbash.readthedocs.io/
|
||||
[3]:http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap12.html
|
||||
[4]:https://www.gnu.org/prep/standards/html_node/Command_002dLine-Interfaces.html
|
||||
[5]:https://stackoverflow.com/questions/192249/how-do-i-parse-command-line-arguments-in-bash
|
||||
[6]:http://wiki.bash-hackers.org/howto/getopts_tutorial
|
||||
[7]:https://argbash.readthedocs.io/
|
||||
[8]:http://wiki.bash-hackers.org/snipplets/print_horizontal_line
|
||||
[9]:http://argbash.readthedocs.io/en/stable/guide.html#argbash-api
|
||||
[10]:https://raw.githubusercontent.com/Anvil/bash-argsparse/master/argsparse.sh
|
||||
[11]:https://raw.githubusercontent.com/Anvil/bash-argsparse/master/argsparse.sh
|
||||
[12]:https://raw.githubusercontent.com/vlisivka/bash-modules/master/main/bash-modules/src/bash-modules/arguments.sh
|
||||
[13]:https://argbash.io/generate
|
||||
[14]:https://hub.docker.com/r/matejak/argbash/
|
@ -0,0 +1,153 @@
|
||||
如何使用 Apache Web 服务器配置多个站点
|
||||
=====
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/apache-feathers.jpg?itok=fnrpsu3G)
|
||||
|
||||
在我的[上一篇文章][1]中,我解释了如何为单个站点配置 Apache Web 服务器,事实证明这很容易。在这篇文章中,我将向你展示如何使用单个 Apache 实例来服务多个站点。
|
||||
|
||||
注意:我写这篇文章的环境是 Fedora 27 虚拟机,配置了 Apache 2.4.29。如果你使用的是其他发行版或别的 Fedora 版本,那么你使用的命令以及配置文件的位置和内容可能会有所不同。
|
||||
|
||||
正如我之前的文章中提到的,Apache 的所有配置文件都位于 `/etc/httpd/conf` 和 `/etc/httpd/conf.d`。默认情况下,站点的数据位于 `/var/www` 中。对于多个站点,你需要提供多个位置,每个位置对应托管的站点。
|
||||
|
||||
### 基于名称的虚拟主机
|
||||
|
||||
使用基于名称的虚拟主机,你可以为多个站点使用同一个 IP 地址。现代 Web 服务器(包括 Apache)使用 URL 中的 `hostname` 部分来确定由哪个虚拟主机响应页面请求。这只比配置单个站点多一点点配置工作。
|
||||
|
||||
即使你只从单个站点开始,我也建议你将其设置为虚拟主机,这样可以在以后更轻松地添加更多站点。在本文中,我将从上一篇文章结束的地方继续,因此你需要先设置好原始站点,它将作为基于名称的虚拟主机站点。
|
||||
|
||||
### 准备原始站点
|
||||
|
||||
在设置第二个站点之前,你需要为现有网站提供基于名称的虚拟主机。如果你现在没有网站,[请返回并立即创建一个][1]。
|
||||
|
||||
一旦你有了站点,将以下内容添加到 `/etc/httpd/conf/httpd.conf` 配置文件的底部(添加此内容是你需要对 `httpd.conf` 文件进行的唯一更改):
|
||||
```
|
||||
<VirtualHost 127.0.0.1:80>
|
||||
|
||||
DocumentRoot /var/www/html
|
||||
|
||||
ServerName www.site1.org
|
||||
|
||||
</VirtualHost>
|
||||
|
||||
```
|
||||
|
||||
这是第一个虚拟主机配置节,它应该始终放在最前面,以使其成为默认的虚拟主机定义。这意味着:通过 IP 地址访问服务器时,或者通过解析到此 IP 地址、但没有对应虚拟主机配置节的其他主机名访问服务器时,HTTP 请求都会被定向到这个虚拟主机。所有其他虚拟主机配置节都应放在此节之后。
|
||||
|
||||
你还需要在 `/etc/hosts` 中添加条目,为你的网站提供名称解析。上次,我们只使用了 `localhost` 的 IP 地址。通常,名称解析可以由你所使用的任何域名服务来完成,例如 Google 或 GoDaddy。对于你的测试网站,只需在 `/etc/hosts` 的 `localhost` 行添加新名称即可。把两个网站的条目都加上,这样你以后就不需要再次编辑此文件。结果如下:
|
||||
```
|
||||
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 www.site1.org www.site2.org
|
||||
```
|
||||
|
||||
让我们把 `/var/www/html/index.html` 文件改得更有辨识度一点。它应该看起来像这样(带有一些额外的文本来标识这是站点 1):
|
||||
```
|
||||
<h1>Hello World</h1>
|
||||
|
||||
Web site 1.
|
||||
|
||||
```
|
||||
|
||||
重新启动 HTTPD 服务器,以启用对 `httpd` 配置的更改。然后,你可以从命令行使用 Lynx 文本模式浏览器查看网站。
|
||||
```
|
||||
[root@testvm1 ~]# systemctl restart httpd
|
||||
|
||||
[root@testvm1 ~]# lynx www.site1.org
|
||||
|
||||
Hello World
|
||||
|
||||
Web site 1.
|
||||
|
||||
<snip>
|
||||
|
||||
Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back.
|
||||
|
||||
Arrow keys: Up and Down to move. Right to follow a link; Left to go back.
|
||||
|
||||
H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list
|
||||
|
||||
```
|
||||
|
||||
你可以看到原始网站的修改内容,没有明显的错误。要退出 Lynx Web 浏览器,先按下 `Q` 键,然后按 `Y` 确认。
|
||||
|
||||
### 配置第二个站点
|
||||
|
||||
现在你已经准备好建立第二个网站。使用以下命令创建新的网站目录结构:
|
||||
```
|
||||
[root@testvm1 html]# mkdir -p /var/www/html2
|
||||
|
||||
```
|
||||
|
||||
注意,第二个站点只是第二个 `html` 目录,与第一个站点位于同一 `/var/www` 目录下。
|
||||
|
||||
现在创建一个新的索引文件 `/var/www/html2/index.html`,其中包含以下内容(此索引文件稍有不同,以区别于原始网站):
|
||||
```
|
||||
<h1>Hello World -- Again</h1>
|
||||
|
||||
Web site 2.
|
||||
|
||||
```
|
||||
|
||||
在 `httpd.conf` 中为第二个站点创建一个新的配置节,并将其放在上一个虚拟主机节下面(这两个应该看起来非常相似)。此节告诉 Web 服务器在哪里可以找到第二个站点的 HTML 文件。
|
||||
```
|
||||
<VirtualHost 127.0.0.1:80>
|
||||
|
||||
DocumentRoot /var/www/html2
|
||||
|
||||
ServerName www.site2.org
|
||||
|
||||
</VirtualHost>
|
||||
|
||||
```
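在重启服务之前,还可以先校验一下配置文件的语法(这一步是原文没有提到的稳妥做法,输出以实际环境为准):

```
[root@testvm1 ~]# apachectl configtest
Syntax OK
```

这样可以避免因为笔误导致 HTTPD 启动失败。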
|
||||
|
||||
重启 HTTPD,并使用 Lynx 来查看结果。
|
||||
```
|
||||
[root@testvm1 httpd]# systemctl restart httpd
|
||||
|
||||
[root@testvm1 httpd]# lynx www.site2.org
|
||||
|
||||
|
||||
|
||||
Hello World -- Again
|
||||
|
||||
|
||||
|
||||
Web site 2.
|
||||
|
||||
|
||||
|
||||
<snip>
|
||||
|
||||
Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back.
|
||||
|
||||
Arrow keys: Up and Down to move. Right to follow a link; Left to go back.
|
||||
|
||||
H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list
|
||||
|
||||
```
|
||||
|
||||
在这里,我压缩了输出结果以适应这个空间。页面的差异表明这是第二个站点。要同时显示两个站点,请打开另一个终端会话并使用 Lynx Web 浏览器查看另一个站点。
|
||||
|
||||
### 其他考虑
|
||||
|
||||
这个简单的例子展示了如何使用 Apache HTTPD 服务器的单个实例来服务于两个站点。当考虑其他因素时,配置虚拟主机会变得有点复杂。
|
||||
|
||||
例如,你可能希望为这些网站中的一个或全部使用一些 CGI 脚本。为此,你可能为 CGI 程序在 `/var/www` 目录下创建一些目录:`/var/www/cgi-bin` 和 `/var/www/cgi-bin2`,以与 HTML 目录命名一致。然后,你需要将配置指令添加到虚拟主机节,以指定 CGI 脚本的目录位置。每个站点可以有下载文件的目录。这还需要相应虚拟主机节中的条目。
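作为示意,下面给出一个假设性的配置片段,演示如何用 `ScriptAlias` 指令为第二个站点指定上文提到的 CGI 目录(目录名沿用上文的约定,仅作参考):

```
<VirtualHost 127.0.0.1:80>
    DocumentRoot /var/www/html2
    ServerName www.site2.org
    ScriptAlias /cgi-bin/ "/var/www/cgi-bin2/"
</VirtualHost>
```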
|
||||
|
||||
[Apache 网站][2]描述了管理多个站点的其他方法,以及从性能调优到安全性的配置选项。
|
||||
|
||||
Apache 是一个强大的 Web 服务器,可以用来管理从简单到高度复杂的网站。尽管其总体市场份额在缩小,但它仍然是互联网上最常用的 HTTPD 服务器。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/3/configuring-multiple-web-sites-apache
|
||||
|
||||
作者:[David Both][a]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/dboth
|
||||
[1]:https://opensource.com/article/18/2/how-configure-apache-web-server
|
||||
[2]:https://httpd.apache.org/docs/2.4/
|
@ -1,237 +0,0 @@
|
||||
什么是行为驱动的Python?
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)
|
||||
|
||||
您是否听说过[行为驱动开发][1](BDD),并想知道所有的新奇事物是什么? 也许你已经发现了团队成员在使用“gherkin”了,并感到被排除在外无法参与其中。 或许你是一个Python爱好者,正在寻找更好的方法来测试你的代码。 无论在什么情况下,了解BDD都可以帮助您和您的团队实现更好的协作和测试自动化,而Python的`行为`框架是一个很好的起点。
|
||||
|
||||
### 什么是BDD?
|
||||
|
||||
* 在网站上提交表单
|
||||
* 搜索想要的结果
|
||||
* 保存文档
|
||||
* 进行REST API调用
|
||||
* 运行命令行界面命令
|
||||
|
||||
在软件中,行为是指在明确定义的输入,行为和结果场景中功能是如何运转的。 产品可以表现出无数的行为,例如:
|
||||
|
||||
根据产品的行为定义产品的功能可以更容易地描述产品,并对其进行开发和测试。 BDD的核心是:使行为成为软件开发的焦点。 在开发早期使用示例语言的规范来定义行为。 最常见的行为规范语言之一是Gherkin,Cucumber项目中的Given-When-Then场景格式。 行为规范基本上是对行为如何工作的简单语言描述,具有一致性和焦点的一些正式结构。 通过将步骤文本“粘合”到代码实现,测试框架可以轻松地自动化这些行为规范。
|
||||
|
||||
下面是用Gherkin编写的行为规范的示例:
|
||||
|
||||
根据产品的行为定义产品的功能可以更容易地描述产品,开发产品并对其进行测试。 这是BDD的核心:使行为成为软件开发的焦点。 在开发早期使用[示例规范][2]的语言来定义行为。 最常见的行为规范语言之一是[Gherkin][3],[Cucumber][4]项目中的Given-When-Then场景格式。 行为规范基本上是对行为如何工作的简单语言描述,具有一致性和焦点的一些正式结构。 通过将步骤文本“粘合”到代码实现,测试框架可以轻松地自动化这些行为规范。
|
||||
|
||||
下面是用Gherkin编写的行为规范的示例:
|
||||
|
||||
```
|
||||
Scenario: Basic DuckDuckGo Search
|
||||
Given the DuckDuckGo home page is displayed
|
||||
When the user searches for "panda"
|
||||
Then results are shown for "panda"
|
||||
```
|
||||
|
||||
快速浏览一下,行为是直观易懂的。 除少数关键字外,该语言为自由格式。 场景简洁而有意义。 一个真实的例子说明了这种行为。 步骤以声明的方式表明应该发生什么——而不会陷入如何如何的细节中。
|
||||
|
||||
[BDD的主要优点][5]是良好的协作和自动化。 每个人都可以为行为开发做出贡献,而不仅仅是程序员。 从流程开始就定义并理解预期的行为。 测试可以与它们涵盖的功能一起自动化。 每个测试都包含一个单一的,独特的行为,以避免重复。 最后,现有的步骤可以通过新的行为规范重用,从而产生雪球效果。
|
||||
|
||||
### Python的behave框架
|
||||
|
||||
`behave`是Python中最流行的BDD框架之一。 它与其他基于Gherkin的Cucumber框架非常相似,尽管没有得到官方的Cucumber定名。 `behave`有两个主要层:
|
||||
|
||||
1. 用Gherkin的`.feature`文件编写的行为规范
|
||||
2. 用Python模块编写的步骤定义和钩子,用于实现Gherkin步骤
|
||||
|
||||
如上例所示,Gherkin场景有三部分格式:
|
||||
|
||||
1. 鉴于一些初始状态
|
||||
2. 当行为发生时
|
||||
3. 然后验证结果
|
||||
|
||||
当`behave`运行测试时,每个步骤由装饰器“粘合”到Python函数。
|
||||
|
||||
### 安装
|
||||
|
||||
作为先决条件,请确保在你的计算机上安装了Python和`pip`。 我强烈建议使用Python 3.(我还建议使用[`pipenv`][6],但以下示例命令使用更基本的`pip`。)
|
||||
|
||||
`behave`框架只需要一个包:
|
||||
|
||||
```
|
||||
pip install behave
|
||||
```
|
||||
|
||||
其他包也可能有用,例如:
|
||||
```
|
||||
pip install requests # 用于调用REST API
|
||||
pip install selenium # 用于web浏览器交互
|
||||
```
|
||||
|
||||
GitHub上的[behavior-driven-Python][7]项目包含本文中使用的示例。
|
||||
|
||||
### Gherkin特点
|
||||
|
||||
`behave`框架使用的Gherkin语法实际上是符合官方的Cucumber Gherkin标准的。 `.feature`文件包含功能Feature部分,而Feature部分又包含具有Given-When-Then步骤的场景Scenario部分。 以下是一个例子:
|
||||
|
||||
```
|
||||
Feature: Cucumber Basket
|
||||
As a gardener,
|
||||
I want to carry many cucumbers in a basket,
|
||||
So that I don’t drop them all.
|
||||
|
||||
@cucumber-basket
|
||||
Scenario: Add and remove cucumbers
|
||||
Given the basket is empty
|
||||
When "4" cucumbers are added to the basket
|
||||
And "6" more cucumbers are added to the basket
|
||||
But "3" cucumbers are removed from the basket
|
||||
Then the basket contains "7" cucumbers
|
||||
```
|
||||
|
||||
这里有一些重要的事情需要注意:
|
||||
|
||||
- Feature和Scenario部分都有[简短的描述性标题][8]。
|
||||
- 紧跟在Feature标题后面的行是会被`behave`框架忽略掉的注释。将功能描述放在那里是一种很好的做法。
|
||||
- Scenarios和Features可以有标签(注意`@cucumber-basket`标记)用于钩子和过滤(如下所述)。
|
||||
- 步骤都遵循[严格的Given-When-Then顺序][9]。
|
||||
- 使用 `And` 和 `But` 可以为任何类型的步骤添加附加步骤。
|
||||
- 可以使用输入对步骤进行参数化——注意双引号里的值。
|
||||
|
||||
通过使用场景大纲,场景也可以写为具有多个输入组合的模板:
|
||||
|
||||
```
|
||||
Feature: Cucumber Basket
|
||||
|
||||
@cucumber-basket
|
||||
Scenario Outline: Add cucumbers
|
||||
Given the basket has "<initial>" cucumbers
|
||||
When "<more>" cucumbers are added to the basket
|
||||
Then the basket contains "<total>" cucumbers
|
||||
|
||||
Examples: Cucumber Counts
|
||||
| initial | more | total |
|
||||
| 0 | 1 | 1 |
|
||||
| 1 | 2 | 3 |
|
||||
| 5 | 4 | 9 |
|
||||
```
|
||||
|
||||
场景大纲总是有一个Examples表,其中第一行给出列标题,后续每一行给出一个输入组合。 只要列标题出现在由尖括号括起的步骤中,行值就会被替换。 在上面的示例中,场景将运行三次,因为有三行输入组合。 场景大纲是避免重复场景的好方法。
|
||||
|
||||
Gherkin语言还有其他元素,但这些是主要的机制。 想了解更多信息,请阅读Automation Panda这个网站的文章[Gherkin by Example][10]和[Writing Good Gherkin][11]。
|
||||
|
||||
### Python机制
|
||||
|
||||
每个Gherkin步骤必须“粘合”到步骤定义,即提供了实现的Python函数。 每个函数都有一个带有匹配字符串的步骤类型装饰器。 它还接收共享的上下文和任何步骤参数。 功能文件必须放在名为`features/`的目录中,而步骤定义模块必须放在名为`features/steps/`的目录中。 任何功能文件都可以使用任何模块中的步骤定义——它们不需要具有相同的名称。 下面是一个示例Python模块,其中包含cucumber basket功能的步骤定义。
|
||||
|
||||
```
|
||||
from behave import *
|
||||
from cucumbers.basket import CucumberBasket
|
||||
|
||||
@given('the basket has "{initial:d}" cucumbers')
|
||||
def step_impl(context, initial):
|
||||
context.basket = CucumberBasket(initial_count=initial)
|
||||
|
||||
@when('"{some:d}" cucumbers are added to the basket')
|
||||
def step_impl(context, some):
|
||||
context.basket.add(some)
|
||||
|
||||
@then('the basket contains "{total:d}" cucumbers')
|
||||
def step_impl(context, total):
|
||||
assert context.basket.count == total
|
||||
```
|
||||
|
||||
可以使用三个[步骤匹配器][12]:`parse`,`cfparse`和`re`。默认和最简单的匹配器是`parse`,如上例所示。注意如何解析参数化值并将其作为输入参数传递给函数。一个常见的最佳实践是在步骤中给参数加双引号。
|
||||
|
||||
每个步骤定义函数还接收一个[上下文][13]变量,该变量保存当前正在运行的场景的数据,例如`feature`, `scenario`和`tags`字段。也可以添加自定义字段,用于在步骤之间共享数据。始终使用上下文来共享数据——永远不要使用全局变量!
|
||||
|
||||
`behave`框架还支持[钩子][14]来处理Gherkin步骤之外的自动化问题。钩子是一个将在步骤,场景,功能或整个测试套件之前或之后运行的功能。钩子让人联想到[面向方面的编程][15]。它们应放在`features/`目录下的特殊`environment.py`文件中。钩子函数也可以检查当前场景的标签,因此可以有选择地应用逻辑。下面的示例显示了如何使用钩子为标记为`@web`的任何场景生成和销毁一个Selenium WebDriver实例。
|
||||
|
||||
```
|
||||
from selenium import webdriver
|
||||
|
||||
def before_scenario(context, scenario):
|
||||
if 'web' in context.tags:
|
||||
context.browser = webdriver.Firefox()
|
||||
context.browser.implicitly_wait(10)
|
||||
|
||||
def after_scenario(context, scenario):
|
||||
if 'web' in context.tags:
|
||||
context.browser.quit()
|
||||
```
|
||||
|
||||
注意:也可以使用[fixtures][16]进行构建和清理。
|
||||
|
||||
要了解一个`behave`项目应该是什么样子,这里是示例项目的目录结构:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/behave_dir_layout.png)
|
||||
|
||||
任何Python包和自定义模块都可以与`behave`框架一起使用。 使用良好的设计模式构建可扩展的测试自动化解决方案。步骤定义代码应简明扼要。
|
||||
|
||||
### 运行测试
|
||||
|
||||
要从命令行运行测试,请切换到项目的根目录并运行`behave`命令。 使用`-help`选项查看所有可用选项。
|
||||
|
||||
以下是一些常见用例:
|
||||
|
||||
```
|
||||
# run all tests
|
||||
behave
|
||||
|
||||
# run the scenarios in a feature file
|
||||
behave features/web.feature
|
||||
|
||||
# run all tests that have the @duckduckgo tag
|
||||
behave --tags @duckduckgo
|
||||
|
||||
# run all tests that do not have the @unit tag
|
||||
behave --tags ~@unit
|
||||
|
||||
# run all tests that have @basket and either @add or @remove
|
||||
behave --tags @basket --tags @add,@remove
|
||||
```
|
||||
|
||||
为方便起见,选项可以保存在[config][17]文件中。
|
||||
|
||||
### 其他选择
|
||||
|
||||
`behave`不是Python中唯一的BDD测试框架。其他好的框架包括:
|
||||
|
||||
- `pytest-bdd`,`pytest`的插件,和`behave`一样,它使用Gherkin功能文件和步骤定义模块,但它也利用了`pytest`的所有功能和插件。例如,它可以使用`pytest-xdist`并行运行Gherkin场景。 BDD和非BDD测试也可以与相同的过滤器一起执行。 `pytest-bdd`还提供更灵活的目录布局。
|
||||
- `radish`是一个“Gherkin增强版”框架——它将Scenario循环和前提条件添加到标准的Gherkin语言中,这使得它对程序员更友好。它还提供丰富的命令行选项,如`behave`。
|
||||
- `lettuce`是一种较旧的BDD框架,与`behave`非常相似,在框架机制方面存在细微差别。然而,GitHub最近显示该项目的活动很少(截至2018年5月)。
|
||||
|
||||
任何这些框架都是不错的选择。
|
||||
|
||||
另外,请记住,Python测试框架可用于任何黑盒测试,即使对于非Python产品也是如此! BDD框架非常适合Web和服务测试,因为它们的测试是声明性的,而Python是一种[很好的测试自动化语言][18]。
|
||||
|
||||
本文基于作者的[PyCon Cleveland 2018][19]演讲,[行为驱动的Python][20]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/behavior-driven-python
|
||||
|
||||
作者:[Andrew Knight][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/andylpk247
|
||||
[1]:https://automationpanda.com/bdd/
|
||||
[2]:https://en.wikipedia.org/wiki/Specification_by_example
|
||||
[3]:https://automationpanda.com/2017/01/26/bdd-101-the-gherkin-language/
|
||||
[4]:https://cucumber.io/
|
||||
[5]:https://automationpanda.com/2017/02/13/12-awesome-benefits-of-bdd/
|
||||
[6]:https://docs.pipenv.org/
|
||||
[7]:https://github.com/AndyLPK247/behavior-driven-python
|
||||
[8]:https://automationpanda.com/2018/01/31/good-gherkin-scenario-titles/
|
||||
[9]:https://automationpanda.com/2018/02/03/are-gherkin-scenarios-with-multiple-when-then-pairs-okay/
|
||||
[10]:https://automationpanda.com/2017/01/27/bdd-101-gherkin-by-example/
|
||||
[11]:https://automationpanda.com/2017/01/30/bdd-101-writing-good-gherkin/
|
||||
[12]:http://behave.readthedocs.io/en/latest/api.html#step-parameters
|
||||
[13]:http://behave.readthedocs.io/en/latest/api.html#detecting-that-user-code-overwrites-behave-context-attributes
|
||||
[14]:http://behave.readthedocs.io/en/latest/api.html#environment-file-functions
|
||||
[15]:https://en.wikipedia.org/wiki/Aspect-oriented_programming
|
||||
[16]:http://behave.readthedocs.io/en/latest/api.html#fixtures
|
||||
[17]:http://behave.readthedocs.io/en/latest/behave.html#configuration-files
|
||||
[18]:https://automationpanda.com/2017/01/21/the-best-programming-language-for-test-automation/
|
||||
[19]:https://us.pycon.org/2018/
|
||||
[20]:https://us.pycon.org/2018/schedule/presentation/87/
|
201
translated/tech/20180715 Why is Python so slow.md
Normal file
@ -0,0 +1,201 @@
|
||||
为什么 Python 这么慢?
|
||||
============================================================
|
||||
|
||||
Python 现在越来越火,已经迅速扩张到包括 DevOps、数据科学、web 开发、信息安全等各个领域当中。
|
||||
|
||||
然而,相比起 Python 扩张的速度,Python 代码的运行速度就显得有点逊色了。
|
||||
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1200/0*M2qZQsVnDS-4i5zc.jpg)
|
||||
|
||||
> 在代码运行速度方面,Java、C、C++、C#和 Python 要如何进行比较呢?并没有一个放之四海而皆准的标准,因为具体结果很大程度上取决于运行的程序类型,而<ruby>语言基准测试<rt>Computer Language Benchmarks Games</rt></ruby>可以作为[衡量的一个方面][5]。
|
||||
|
||||
根据我这些年来进行语言基准测试的经验来看,Python 比很多语言运行起来都要慢。无论是使用 [JIT][7] 编译器的 C#、Java,还是使用 [AOT][8] 编译器的 C、C ++,又或者是 JavaScript 这些解释型语言,Python 都[比它们运行得慢][6]。
|
||||
|
||||
注意:对于文中的 Python ,一般指 CPython 这个官方的实现。当然我也会在本文中提到其它语言的 Python 实现。
|
||||
|
||||
> 我要回答的是这个问题:对于一个类似的程序,Python 要比其它语言慢 2 到 10 倍不等,这其中的原因是什么?又有没有改善的方法呢?
|
||||
|
||||
主流的说法有这些:
|
||||
|
||||
* “是<ruby>全局解释器锁<rt>Global Interpreter Lock</rt></ruby>(GIL)的原因”
|
||||
|
||||
* “是因为 Python 是解释型语言而不是编译型语言”
|
||||
|
||||
* “是因为 Python 是一种动态类型的语言”
|
||||
|
||||
哪一个才是影响 Python 运行效率的主要原因呢?
|
||||
|
||||
### 是全局解释器锁的原因吗?
|
||||
|
||||
现在很多计算机都配备了具有多个核的 CPU ,有时甚至还会有多个处理器。为了更充分利用它们的处理能力,操作系统定义了一个称为线程的低级结构。某一个进程(例如 Chrome 浏览器)可以建立多个线程,在系统内执行不同的操作。在这种情况下,CPU 密集型进程就可以跨核心共享负载了,这样的做法可以大大提高应用程序的运行效率。
|
||||
|
||||
例如在我写这篇文章时,我的 Chrome 浏览器打开了 44 个线程。要知道的是,基于 POSIX 的操作系统(例如 Mac OS、Linux)和 Windows 操作系统的线程结构、API 都是不同的,因此操作系统还负责对各个线程的调度。
|
||||
|
||||
如果你还没有写过多线程执行的代码,你就需要了解一下线程锁的概念了。多线程进程比单线程进程更为复杂,是因为需要使用线程锁来确保同一个内存地址中的数据不会被多个线程同时访问或更改。
|
||||
|
||||
CPython 解释器在创建变量时,首先会分配内存,然后对该变量的引用进行计数,这称为<ruby>引用计数<rt>reference counting</rt></ruby>。如果变量的引用数变为 0,这个变量就会从内存中释放掉。这就是在 for 循环代码块内创建临时变量不会增加内存消耗的原因。
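下面用一个小例子直观展示引用计数(最小示意,输出的具体数字可能因 Python 版本而略有差异):

```
$ python3 -c '
import sys
a = []
print(sys.getrefcount(a))  # 变量 a 加上传给 getrefcount() 的临时引用,通常输出 2
b = a                      # 再增加一个引用
print(sys.getrefcount(a))  # 通常输出 3
'
```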
|
||||
|
||||
而当多个线程共享同一个变量时,CPython 就需要对引用计数加锁。这正是 GIL 的作用所在:它会谨慎地控制线程的执行,无论同时存在多少个线程,每次只允许一个线程进行操作。
|
||||
|
||||
#### 这会对 Python 程序的性能有什么影响?
|
||||
|
||||
如果你的程序只有单线程、单进程,代码的速度和性能不会受到全局解释器锁的影响。
|
||||
|
||||
但如果你通过在单进程中使用多线程实现并发,并且是 IO 密集型(例如网络 IO 或磁盘 IO)的线程,GIL 竞争的效果就很明显了。
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1600/0*S_iSksY5oM5H1Qf_.png)
|
||||
*由 David Beazley 提供的 [GIL 竞争情况图][1]*
|
||||
|
||||
对于一个 web 应用(例如 Django),如果使用了 WSGI,那么对这个 web 应用的每一个请求都由一个单独的 Python 进程处理,每个请求只有一个锁。同时 Python 解释器的启动比较慢,因此某些 WSGI 实现还提供了“守护进程模式”,[可以为你保持一个或多个 Python 进程常驻][9]。
|
||||
|
||||
#### 其它的 Python 解释器表现如何?
|
||||
|
||||
[PyPy 也是一种带有 GIL 的解释器][10],但通常比 CPython 要快 3 倍以上。
|
||||
|
||||
[Jython 则是一种没有 GIL 的解释器][11],这是因为 Jython 中的 Python 线程使用 Java 线程来实现,并且由 JVM 内存管理系统来进行管理。
|
||||
|
||||
#### JavaScript 在这方面又是怎样做的呢?
|
||||
|
||||
所有的 JavaScript 引擎使用的都是 [mark-and-sweep 垃圾收集算法][12],而 GIL 的存在则源于 CPython 基于引用计数的内存管理方式。JavaScript 没有 GIL,而且它是单线程的,也不需要 GIL;JavaScript 通过事件循环和 Promise/Callback 模式,以异步编程的方式代替并发。Python 当中也有一个类似的 asyncio 事件循环。
|
||||
|
||||
|
||||
### 是因为 Python 是解释型语言吗?
|
||||
|
||||
我经常会听到这个说法,但其实当终端上执行 `python myscript.py` 之后,CPython 会对代码进行一系列的读取、语法分析、解析、编译、解释和执行的操作。
|
||||
|
||||
如果你对这一系列过程感兴趣,也可以阅读一下我之前的文章:
|
||||
|
||||
[在 6 分钟内修改 Python 语言][13]
|
||||
|
||||
创建 `.pyc` 文件是这个过程的重点。在代码编译阶段,Python 3 会将字节码序列写入 `__pycache__/` 下的文件中,而 Python 2 则会将字节码序列写入当前目录的 `.pyc` 文件中。对于你编写的脚本、导入的所有代码以及第三方模块都是如此。
|
||||
|
||||
因此,绝大多数情况下(除非你的代码是一次性的……),Python 都会解释字节码并执行。与 Java、C#.NET 相比:
|
||||
|
||||
> Java 代码会被编译为“中间语言”,由 Java 虚拟机读取字节码,并将其即时编译为机器码。.NET CIL 也是如此,.NET CLR(Common-Language-Runtime)将字节码即时编译为机器码。
|
||||
|
||||
既然 Python 不像 Java 和 C# 那样使用虚拟机或某种字节码,为什么 Python 在基准测试中仍然比 Java 和 C# 慢得多呢?首要原因是,.NET 和 Java 都是 JIT 编译的。
|
||||
|
||||
<ruby>即时编译<rt>Just-in-time compilation</rt></ruby>(JIT)需要一种中间语言,以便将代码拆分为多个块(或多个帧)。而<ruby>提前编译器<rt>ahead of time compiler</rt></ruby>(AOT)则需要确保 CPU 在任何交互发生之前理解每一行代码。
|
||||
|
||||
JIT 本身是不会让执行速度加快的,因为它执行的仍然是同样的字节码序列。但是 JIT 会允许运行时的优化。一个优秀的 JIT 优化器会分析出程序的哪些部分会被多次执行,这就是程序中的“热点”,然后,优化器会将这些热点编译得更为高效以实现优化。
|
||||
|
||||
这就意味着如果你的程序是多次地重复相同的操作时,有可能会被优化器优化得更快。而且,Java 和 C# 是强类型语言,因此优化器对代码的判断可以更为准确。
|
||||
|
||||
PyPy 使用了明显快于 CPython 的 JIT。更详细的结果可以在这篇性能基准测试文章中看到:
|
||||
|
||||
[哪一个 Python 版本最快?][15]
|
||||
|
||||
#### 那为什么 CPython 不使用 JIT 呢?
|
||||
|
||||
JIT 也不是完美的,它的一个显著缺点就是启动时间。CPython 的启动时间已经相对比较慢,而 PyPy 比 CPython 启动还要慢 2 到 3 倍;Java 虚拟机的启动速度同样是出了名的慢;.NET CLR 则通过在系统启动时就开始运行来优化体验,甚至还有专门运行 CLR 的操作系统。
|
||||
|
||||
因此如果你的 Python 进程在一次启动后就长时间运行,JIT 就比较有意义了,因为代码里有“热点”可以优化。
|
||||
|
||||
尽管如此,CPython 仍然是通用的代码实现。设想如果使用 Python 开发命令行程序,但每次调用 CLI 时都必须等待 JIT 缓慢启动,这种体验就相当不好了。
|
||||
|
||||
CPython 必须通过大量用例的测试,才有可能实现[将 JIT 插入到 CPython 中][17],但这个改进工作的进度基本处于停滞不前的状态。
|
||||
|
||||
> 如果你想充分发挥 JIT 的优势,请使用 PyPy。
|
||||
|
||||
### 是因为 Python 是一种动态类型的语言吗?
|
||||
|
||||
在 C、C++、Java、C#、Go 这些静态类型语言中,必须在声明变量时指定变量的类型。而在动态类型语言中,虽然也有类型的概念,但变量的类型是可改变的。
|
||||
|
||||
```
|
||||
a = 1
|
||||
a = "foo"
|
||||
```
|
||||
|
||||
在上面这个示例里,Python 释放了变量 `a` 一开始用于存储整数的内存空间,并创建了一个新的用于存储字符串的内存空间,再让同一个变量名指向它。
|
||||
|
||||
静态类型语言这样的设计并不是为了为难你,而是为了方便 CPU 运行而这样设计的。因为最终都需要将所有操作都对应为简单的二进制操作,因此必须将对象、类型这些高级的数据结构转换为低级数据结构。
|
||||
|
||||
Python 也实现了这样的转换,但用户看不到这些转换,也不需要关心这些转换。
|
||||
|
||||
变量类型不固定并不是 Python 运行慢的原因,Python 通过巧妙的设计让用户可以让各种结构变得动态:可以在运行时更改对象上的方法,也可以在运行时让模块调用新声明的值,几乎可以做到任何事。
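这种动态性可以用一个小例子来直观感受(最小示意):

```
$ python3 -c '
class Dog: pass

d = Dog()
Dog.speak = lambda self: "woof"  # 在运行时给类添加方法
print(d.speak())                 # 已创建的实例立刻就能使用这个新方法
'
```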
|
||||
|
||||
但也正是这种设计使得 Python 的优化难度变得很大。
|
||||
|
||||
为了证明我的观点,我使用了 `dtrace` 这个 Mac OS 上的系统调用跟踪工具。CPython 默认没有启用 dtrace 支持,因此必须重新编译 CPython。下面以 Python 3.6.6 为例:
|
||||
|
||||
```
|
||||
wget https://github.com/python/cpython/archive/v3.6.6.zip
|
||||
unzip v3.6.6.zip
|
||||
cd v3.6.6
|
||||
./configure --with-dtrace
|
||||
make
|
||||
```
|
||||
|
||||
这样 `python.exe` 将使用 dtrace 追踪所有代码。[Paul Ross 也作过关于 dtrace 的闪电演讲][19]。你可以下载 Python 的 dtrace 启动文件来查看函数调用、系统调用、CPU 时间、执行时间,以及各种其它的内容。
|
||||
|
||||
`sudo dtrace -s toolkit/<tracer>.d -c '../cpython/python.exe script.py'`
|
||||
|
||||
`py_callflow` 追踪器显示了程序里调用的所有函数。
|
||||
|
||||
|
||||
![](https://cdn-images-1.medium.com/max/1600/1*Lz4UdUi4EwknJ0IcpSJ52g.gif)
|
||||
|
||||
那么,Python 的动态类型会让它变慢吗?
|
||||
|
||||
* 类型比较和类型转换消耗的资源是比较多的,每次读取、写入或引用变量时都会检查变量的类型
|
||||
|
||||
* Python 的动态程度让它难以被优化,因此很多 Python 的替代品都为了提升速度而在灵活性方面作出了妥协
|
||||
|
||||
* 而 [Cython][2] 结合了 C 的静态类型和 Python 来优化已知类型的代码,它可以将[性能提升][3] 84 倍。
|
||||
|
||||
### 总结
|
||||
|
||||
> 由于 Python 是一种动态、多功能的语言,因此运行起来会相对缓慢。对于不同的实际需求,可以使用各种不同的优化或替代方案。
|
||||
|
||||
例如可以使用异步,引入分析工具或使用多种解释器来优化 Python 程序。
|
||||
|
||||
对于不要求启动时间且代码可以充分利用 JIT 的程序,可以考虑使用 PyPy。
|
||||
|
||||
而对于看重性能并且静态类型变量较多的程序,不妨使用 [Cython][4]。
|
||||
|
||||
#### 延伸阅读
|
||||
|
||||
Jake VDP 的优秀文章(略微过时) [https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/][21]
|
||||
|
||||
Dave Beazley’s 关于 GIL 的演讲 [http://www.dabeaz.com/python/GIL.pdf][22]
|
||||
|
||||
JIT 编译器的那些事 [https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/][23]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://hackernoon.com/why-is-python-so-slow-e5074b6fe55b
|
||||
|
||||
作者:[Anthony Shaw][a]
|
||||
选题:[oska874][b]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://hackernoon.com/@anthonypjshaw?source=post_header_lockup
|
||||
[b]:https://github.com/oska874
|
||||
[1]:http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html
|
||||
[2]:http://cython.org/
|
||||
[3]:http://notes-on-cython.readthedocs.io/en/latest/std_dev.html
|
||||
[4]:http://cython.org/
|
||||
[5]:http://algs4.cs.princeton.edu/faq/
|
||||
[6]:https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/python.html
|
||||
[7]:https://en.wikipedia.org/wiki/Just-in-time_compilation
|
||||
[8]:https://en.wikipedia.org/wiki/Ahead-of-time_compilation
|
||||
[9]:https://www.slideshare.net/GrahamDumpleton/secrets-of-a-wsgi-master
|
||||
[10]:http://doc.pypy.org/en/latest/faq.html#does-pypy-have-a-gil-why
|
||||
[11]:http://www.jython.org/jythonbook/en/1.0/Concurrency.html#no-global-interpreter-lock
|
||||
[12]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_Management
|
||||
[13]:https://hackernoon.com/modifying-the-python-language-in-7-minutes-b94b0a99ce14
|
||||
[14]:https://hackernoon.com/modifying-the-python-language-in-7-minutes-b94b0a99ce14
|
||||
[15]:https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b
|
||||
[16]:https://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b
|
||||
[17]:https://www.slideshare.net/AnthonyShaw5/pyjion-a-jit-extension-system-for-cpython
|
||||
[18]:https://github.com/python/cpython/archive/v3.6.6.zip
|
||||
[19]:https://github.com/paulross/dtrace-py#the-lightning-talk
|
||||
[20]:https://github.com/paulross/dtrace-py/tree/master/toolkit
|
||||
[21]:https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/
|
||||
[22]:http://www.dabeaz.com/python/GIL.pdf
|
||||
[23]:https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-time-jit-compilers/
|
||||
|
350
translated/tech/20180823 CLI- improved.md
Normal file
@ -0,0 +1,350 @@
|
||||
|
||||
命令行:增强版
|
||||
======
|
||||
|
||||
我不确定有多少 Web 开发者能完全避开命令行。就我来说,我从 1997 年上大学时就开始使用命令行了,那种 l33t 黑客的感觉让我着迷,但同时我也觉得它很难掌握。
|
||||
|
||||
过去这些年,我的命令行功力在逐步增强,我也经常去搜寻工作中能用到的更好的命令行工具。下面就是我目前正在使用的、用于增强原有命令行工具的清单。
|
||||
|
||||
|
||||
### 怎么忽略我所做的命令行增强
|
||||
|
||||
通常情况下我会用别名将新的或者增强的命令行工具链接到原来的命令行(如`cat`和`ping`)。
|
||||
|
||||
|
||||
如果我需要运行原来的命令的话(有时我确实需要这么做),我会像下面这样来运行未加修改的原来的命令行。(我用的是Mac,你的输出可能不一样)
|
||||
|
||||
|
||||
```
|
||||
$ \cat # 忽略叫 "cat" 的别名 - 具体解释: https://stackoverflow.com/a/16506263/22617
|
||||
$ command cat # 忽略函数和别名
|
||||
|
||||
```
|
||||
|
||||
### bat > cat
|
||||
|
||||
`cat` 用于打印文件的内容,如果你要在命令行上花很多时间,那么语法高亮之类的功能就会非常有用。我首先发现了 [ccat][3] 这个带语法高亮功能的工具,后来又发现了 [bat][4],它具有语法高亮、分页、行号和 git 集成等功能。
|
||||
|
||||
|
||||
`bat` 命令还能让我在输出里使用 `/` 快捷键来搜索(只要输出比屏幕高度长),用法和 `less` 里的搜索一样。
|
||||
|
||||
|
||||
![Simple bat output][5]
|
||||
|
||||
我将别名`cat`链接到了`bat`命令:
|
||||
|
||||
|
||||
|
||||
```
|
||||
alias cat='bat'
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][4]
|
||||
|
||||
### prettyping > ping
|
||||
|
||||
`ping` 非常有用,当我碰到“糟了,是不是什么服务挂了?/我的网不通了?”这种情况时,我最先想到的工具就是它。但是 `prettyping`(它念作 “pretty ping”,而不是 “pre typing”)在 `ping` 的基础上加上了友好的输出,这让我觉得命令行友好了很多。
|
||||
|
||||
|
||||
![/images/cli-improved/ping.gif][6]
|
||||
|
||||
我也将`ping`用别名链接到了`prettyping`命令:
|
||||
|
||||
|
||||
```
|
||||
alias ping='prettyping --nolegend'
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][7]
|
||||
|
||||
### fzf > ctrl+r
|
||||
|
||||
在命令行上使用`ctrl+r`将允许你在命令历史里[反向搜索][8]使用过的命令,这是个挺好的小技巧,但是它需要你给出非常精确的输入才能正常运行。
|
||||
|
||||
`fzf` 这个工具相比于 `ctrl+r` 有了**巨大的**进步。它能对命令历史进行模糊查询,并对可能匹配的结果提供全面的交互式预览。
|
||||
|
||||
|
||||
除了搜索命令历史,`fzf`还能预览和打开文件,我在下面的视频里展示了这些功能。
|
||||
|
||||
|
||||
为了这个预览的效果,我创建了一个叫`preview`的别名,它将`fzf`和前文提到的`bat`组合起来完成预览功能,还给上面绑定了一个定制的热键Ctrl+o来打开 VS Code:
|
||||
|
||||
|
||||
```
|
||||
alias preview="fzf --preview 'bat --color \"always\" {}'"
|
||||
# 支持在 VS Code 里用ctrl+o 来打开选择的文件
|
||||
export FZF_DEFAULT_OPTS="--bind='ctrl-o:execute(code {})+abort'"
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][9]
|
||||
|
||||
### htop > top
|
||||
|
||||
`top` 是当我想快速诊断为什么机器上的 CPU 跑得那么累、或者风扇为什么突然呼呼大作时,首先会想到的工具。我在生产环境中也会使用它。讨厌的是,Mac 上的 `top` 和 Linux 上的 `top` 有着极大的不同(恕我直言,是差得多)。
|
||||
|
||||
|
||||
不过,`htop`是对 Linux 上的`top`和 Mac 上蹩脚的`top`的极大改进。它增加了包括颜色输出编码,键盘热键绑定以及不同的视图输出,这极大的帮助了我来理解进程之间的父子关系。
|
||||
|
||||
|
||||
方便的热键绑定包括:
|
||||
|
||||
* P - CPU使用率排序
|
||||
* M - 内存使用排序
|
||||
* F4 - 用字符串过滤进程(例如只看包括"node"的进程)
|
||||
* space - 锚定一个单独进程,这样我能观察它是否有尖峰状态
|
||||
|
||||
|
||||
![htop output][10]
|
||||
|
||||
在 Mac Sierra 上,htop 有个奇怪的 bug,不过可以通过以 root 身份运行来绕过(我实在记不清这个 bug 的细节了,但下面这个别名能搞定它;有点讨厌的是每次都得输入 root 密码)。
|
||||
|
||||
|
||||
```
|
||||
alias top="sudo htop" # 给 top 加上别名,并绕过 Sierra 上的 bug
|
||||
```
|
||||
|
||||
💾 [Installation directions][11]
|
||||
|
||||
### diff-so-fancy > diff
|
||||
|
||||
我非常确定我是一些年前从 Paul Irish 那儿学来的这个技巧,尽管我很少直接使用`diff`,但我的git命令行会一直使用`diff`。`diff-so-fancy`给了我代码语法颜色和更改字符高亮的功能。
|
||||
|
||||
|
||||
![diff so fancy][12]
|
||||
|
||||
在我的`~/.gitconfig`文件里我有下面的选项来打开`git diff`和`git show`的`diff-so-fancy`功能。
|
||||
|
||||
|
||||
```
|
||||
[pager]
|
||||
diff = diff-so-fancy | less --tabs=1,5 -RFX
|
||||
show = diff-so-fancy | less --tabs=1,5 -RFX
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][13]
|
||||
|
||||
### fd > find
|
||||
|
||||
尽管我使用 Mac,但我从来不是 Spotlight 的拥趸:我觉得它性能差、关键字难记,而且更新自身数据库时还会拖慢 CPU,简直一无是处。我经常使用 [Alfred][14],但它的搜索功能也工作得不是很好。
|
||||
|
||||
|
||||
我倾向于在命令行中搜索文件,但是`find`的难用在于很难去记住那些合适的表达式来描述我想要的文件。(而且 Mac 上的 find 命令和非Mac的find命令还有些许不同,这更加深了我的失望。)
|
||||
|
||||
`fd`是一个很好的替代品(它的作者和`bat`的作者是同一个人)。它非常快而且对于我经常要搜索的命令非常好记。
|
||||
|
||||
|
||||
|
||||
几个使用方便的例子:
|
||||
|
||||
```
|
||||
$ fd cli # 所有包含"cli"的文件名
|
||||
$ fd -e md # 所有以.md作为扩展名的文件
|
||||
$ fd cli -x wc -w # 搜索"cli"并且在每个搜索结果上运行`wc -w`
|
||||
|
||||
|
||||
```
|
||||
|
||||
![fd output][15]
|
||||
|
||||
💾 [Installation directions][16]
|
||||
|
||||
### ncdu > du
|
||||
|
||||
对我来说,掌握当前的磁盘空间使用情况是一项非常重要的任务。我用过 Mac 上的 [DaisyDisk][17],但我觉得它出结果有点慢。
|
||||
|
||||
|
||||
`du -sh`命令是我经常会跑的命令(`-sh`是指结果以`总结`和`人类可读`的方式显示),我经常会想要深入挖掘那些占用了大量磁盘空间的目录,看看到底是什么在占用空间。
|
||||
|
||||
`ncdu` 是一个非常棒的替代品。它提供了一个交互式界面,能够快速扫描出占用大量磁盘空间的目录和文件,又快又准(尽管无论用哪个工具,扫描我那个 550G 的 home 目录都要花很长时间)。
|
||||
|
||||
|
||||
一旦当我找到一个目录我想要“处理”一下(如删除,移动或压缩文件),我都会使用命令+点击屏幕[iTerm2][18]上部的目录名字来对那个目录执行搜索。
|
||||
|
||||
|
||||
![ncdu output][19]
|
||||
|
||||
还有另外一个选择[一个叫nnn的另外选择][20],它提供了一个更漂亮的界面,它也提供文件尺寸和使用情况,实际上它更像一个全功能的文件管理器。
|
||||
|
||||
|
||||
我的`ncdu`使用下面的别名链接:
|
||||
|
||||
```
|
||||
alias du="ncdu --color dark -rr -x --exclude .git --exclude node_modules"
|
||||
|
||||
```
|
||||
|
||||
|
||||
选项有:
|
||||
|
||||
* `--color dark` 使用颜色方案
|
||||
* `-rr` 只读模式(防止误删和运行新的登陆程序)
|
||||
* `--exclude` 忽略不想操作的目录
|
||||
|
||||
|
||||
|
||||
💾 [Installation directions][21]
|
||||
|
||||
### tldr > man
|
||||
|
||||
几乎所有的单独命令行工具都有一个相伴的手册,其可以被`man <命令名>`来调出,但是在`man`的输出里找到东西可有点让人困惑,而且在一个包含了所有的技术细节的输出里找东西也挺可怕的。
|
||||
|
||||
|
||||
这就是TL;DR(译注:英文里`文档太长,没空去读`的缩写)项目创建的初衷。这是一个由社区驱动的文档系统,而且针对的是命令行。就我现在用下来,我还没碰到过一个命令它没有相应的文档,你[也可以做贡献][22]。
|
||||
|
||||
|
||||
![TLDR output for 'fd'][23]
|
||||
|
||||
作为一个小技巧,我给 `tldr` 设置了别名 `help`(这样输入会快一点……):
|
||||
|
||||
```
|
||||
alias help='tldr'
|
||||
|
||||
```
|
||||
|
||||
💾 [Installation directions][24]
|
||||
|
||||
### ack || ag > grep
|
||||
|
||||
`grep`毫无疑问是一个命令行上的强力工具,但是这些年来它已经被一些工具超越了,其中两个叫`ack`和`ag`。
|
||||
|
||||
|
||||
我个人对 `ack` 和 `ag` 都尝试过,并没有非常明显的偏好(也就是说它们都很棒,并且很相似)。我倾向于默认使用 `ack`,因为这三个字符就在指尖,很好打。并且,`ack` 还有大量 `ack --` 形式的参数可以使用(你一定会体会到这一点)。
|
||||
|
||||
|
||||
`ack`和`ag`都将使用正则表达式来表达搜索,这非常契合我的工作,我能指定搜索的文件类型而不用使用类似于`--js`或`--html`的文件标识(尽管`ag`比`ack`在文件类型过滤器里包括了更多的文件类型。)
|
||||
|
||||
|
||||
两个工具都支持常见的`grep`选项,如`-B`和`-A`用于在搜索的上下文里指代`之前`和`之后`。
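例如(示意用法,搜索词和目录都是假设的):

```
$ ack -B 2 -A 2 'TODO' src/
```

这会把每处匹配前后各两行的上下文一起显示出来。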
|
||||
|
||||
|
||||
![ack in action][25]
|
||||
|
||||
因为`ack`不支持markdown(而我又恰好写了很多markdown), 我在我的`~/.ackrc`文件里放了如下的定制语句:
|
||||
|
||||
|
||||
|
||||
```
|
||||
--type-set=md=.md,.mkd,.markdown
|
||||
--pager=less -FRX
|
||||
|
||||
```
|
||||
|
||||
💾 Installation directions: [ack][26], [ag][27]
|
||||
|
||||
[Futher reading on ack & ag][28]
|
||||
|
||||
### jq > grep et al
|
||||
|
||||
我是[jq][29]的粉丝之一。当然一开始我也在它的语法里苦苦挣扎,好在我对查询语言还算有些使用心得,现在我对`jq`可以说是每天都要用。(不过从前我要么使用grep 或者使用一个叫[json][30]的工具,相比而言后者的功能就非常基础了。)
|
||||
|
||||
|
||||
我甚至开始撰写一个 `jq` 的教程系列(已有 2500 字并且还在增加),还发布了一个 [Web 工具][31];另外做了一个 Mac 应用,不过后者尚未发布。
|
||||
|
||||
|
||||
`jq` 允许我传入一个 JSON,并且能非常简单地对其进行变换,得到我想要的 JSON 结果。下面这个例子让我可以用一个命令更新所有的 node 依赖(为了阅读方便,我把它分成了多行):
|
||||
|
||||
|
||||
```
|
||||
$ npm i $(echo $(\
|
||||
npm outdated --json | \
|
||||
jq -r 'to_entries | .[] | "\(.key)@\(.value.latest)"' \
|
||||
))
|
||||
|
||||
```
|
||||
上面的命令会利用 npm 的 JSON 输出格式列出所有过期的 node 依赖,其源 JSON 如下所示:
|
||||
|
||||
|
||||
```
|
||||
{
|
||||
"node-jq": {
|
||||
"current": "0.7.0",
|
||||
"wanted": "0.7.0",
|
||||
"latest": "1.2.0",
|
||||
"location": "node_modules/node-jq"
|
||||
},
|
||||
"uuid": {
|
||||
"current": "3.1.0",
|
||||
"wanted": "3.2.1",
|
||||
"latest": "3.2.1",
|
||||
"location": "node_modules/uuid"
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
转换结果如下(LCTT 译注:原文此处并未给出结果):
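按照上面给定的输入和过滤器推算(原文未给出,此处为推算结果),`jq -r` 的输出应该是:

```
node-jq@1.2.0
uuid@3.2.1
```

经外层的 `echo $(...)` 折叠为一行后,最终执行的命令就相当于 `npm i node-jq@1.2.0 uuid@3.2.1`。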
|
||||
|
||||
上面的结果会被作为`npm install`的输入,你瞧,我的升级就这样全部搞定了。(当然,这里有点小题大做了。)
|
||||
|
||||
|
||||
### 很荣幸提及一些其他的工具
|
||||
|
||||
我也开始尝试一些别的工具,但还没有完全掌握它们(除了 `ponysay`,每当我新开一个命令行会话时它就会出现)。
|
||||
|
||||
|
||||
* [ponysay][32] > cowsay
|
||||
* [csvkit][33] > awk et al
|
||||
* [noti][34] > `display notification`
|
||||
* [entr][35] > watch
|
||||
|
||||
|
||||
|
||||
### 你有什么好点子吗?
|
||||
|
||||
|
||||
上面是我的命令行清单。能告诉我们你的吗?你有没有试着去增强一些你每天都会用到的命令呢?请告诉我,我非常乐意知道。
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://remysharp.com/2018/08/23/cli-improved
|
||||
|
||||
作者:[Remy Sharp][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:DavidChenLiang(https://github.com/DavidChenLiang)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://remysharp.com
|
||||
[1]: https://remysharp.com/images/terminal-600.jpg
|
||||
[2]: https://training.leftlogic.com/buy/terminal/cli2?coupon=READERS-DISCOUNT&utm_source=blog&utm_medium=banner&utm_campaign=remysharp-discount
|
||||
[3]: https://github.com/jingweno/ccat
|
||||
[4]: https://github.com/sharkdp/bat
|
||||
[5]: https://remysharp.com/images/cli-improved/bat.gif (Sample bat output)
|
||||
[6]: https://remysharp.com/images/cli-improved/ping.gif (Sample ping output)
|
||||
[7]: http://denilson.sa.nom.br/prettyping/
|
||||
[8]: https://lifehacker.com/278888/ctrl%252Br-to-search-and-other-terminal-history-tricks
|
||||
[9]: https://github.com/junegunn/fzf
|
||||
[10]: https://remysharp.com/images/cli-improved/htop.jpg (Sample htop output)
|
||||
[11]: http://hisham.hm/htop/
|
||||
[12]: https://remysharp.com/images/cli-improved/diff-so-fancy.jpg (Sample diff output)
|
||||
[13]: https://github.com/so-fancy/diff-so-fancy
|
||||
[14]: https://www.alfredapp.com/
|
||||
[15]: https://remysharp.com/images/cli-improved/fd.png (Sample fd output)
|
||||
[16]: https://github.com/sharkdp/fd/
|
||||
[17]: https://daisydiskapp.com/
|
||||
[18]: https://www.iterm2.com/
|
||||
[19]: https://remysharp.com/images/cli-improved/ncdu.png (Sample ncdu output)
|
||||
[20]: https://github.com/jarun/nnn
|
||||
[21]: https://dev.yorhel.nl/ncdu
|
||||
[22]: https://github.com/tldr-pages/tldr#contributing
|
||||
[23]: https://remysharp.com/images/cli-improved/tldr.png (Sample tldr output for 'fd')
|
||||
[24]: http://tldr-pages.github.io/
|
||||
[25]: https://remysharp.com/images/cli-improved/ack.png (Sample ack output with grep args)
|
||||
[26]: https://beyondgrep.com
|
||||
[27]: https://github.com/ggreer/the_silver_searcher
|
||||
[28]: http://conqueringthecommandline.com/book/ack_ag
|
||||
[29]: https://stedolan.github.io/jq
|
||||
[30]: http://trentm.com/json/
|
||||
[31]: https://jqterm.com
|
||||
[32]: https://github.com/erkin/ponysay
|
||||
[33]: https://csvkit.readthedocs.io/en/1.0.3/
|
||||
[34]: https://github.com/variadico/noti
|
||||
[35]: http://www.entrproject.org/
|
56
translated/tech/20180827 A sysadmin-s guide to containers.md
Normal file
@ -0,0 +1,56 @@
|
||||
写给系统管理员的容器手册
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP)
|
||||
|
||||
现在人们严重地过度使用“容器”这个术语。另外,对不同的人来说,它可能会有不同的含义,这取决于上下文。
|
||||
|
||||
传统的 Linux 容器只是由系统上普通进程组成的进程组。进程组之间相互隔离,实现手段包括:资源限制(控制组 [cgroups])、Linux 安全限制(文件权限、基于 Capability 的安全模块、SELinux、AppArmor、seccomp 等),还有名字空间(进程 ID、网络、挂载等)。
|
||||
|
||||
如果你启动一台现代 Linux 操作系统,使用 `cat /proc/PID/cgroup` 命令就可以看到该进程是属于一个控制组的。还可以从 `/proc/PID/status` 文件中查看进程的 Capability 信息,从 `/proc/self/attr/current` 文件中查看进程的 SELinux 标签信息,从 `/proc/PID/ns` 目录下的文件查看进程所属的名字空间。因此,如果把容器定义为带有资源限制、Linux 安全限制和名字空间的进程,那么按照这个定义,Linux 操作系统上的每一个进程都在容器里。因此我们常说 [Linux 就是容器,容器就是 Linux][1]。而**容器运行时**是这样一种工具,它调整上述资源限制、安全限制和名字空间,并启动容器。
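下面这组命令可以直接在 shell 里尝试(仅作示意,输出内容因系统而异):

```
$ cat /proc/self/cgroup        # 当前进程所属的控制组
$ grep Cap /proc/self/status   # 进程的 Capability 信息
$ cat /proc/self/attr/current  # SELinux 标签(需要系统启用 SELinux)
$ ls -l /proc/self/ns          # 进程所属的名字空间
```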
|
||||
|
||||
Docker 引入了**容器镜像**的概念,镜像是一个普通的 TAR 包文件,包含了:
|
||||
|
||||
* **Rootfs(容器的根文件系统):**一个目录,看起来像是操作系统的普通根目录(/),例如,一个包含 `/usr`, `/var`, `/home` 等的目录。
|
||||
* **JSON 文件(容器的配置):**定义了如何运行 rootfs;例如,当容器启动的时候要在 rootfs 里运行什么 **command** 或者 **entrypoint**,给容器定义什么样的**环境变量**,容器的**工作目录**是哪个,以及其他一些设置。
|
||||
|
||||
Docker 把 rootfs 和 JSON 配置文件打包成**基础镜像**。你可以在这个基础之上,给 rootfs 安装更多东西,创建新的 JSON 配置文件,然后把相对于原始镜像的不同内容打包到新的镜像。这种方法创建出来的是**分层的镜像**。
|
||||
|
||||
[Open Container Initiative(开放容器计划 OCI)][2] 标准组织最终把容器镜像的格式标准化了,也就是 [OCI Image Specification(OCI 镜像规范)][3]。
|
||||
|
||||
用来创建容器镜像的工具被称为**容器镜像构建器**。有时候容器引擎做这件事情,不过可以用一些独立的工具来构建容器镜像。
|
||||
|
||||
Docker 把这些容器镜像(**tar 包**)托管到 web 服务中,并开发了一种协议来支持从 web 拉取镜像,这个 web 服务就叫**容器仓库**。
|
||||
|
||||
**容器引擎**是能从镜像仓库拉取镜像并装载到**容器存储**上的程序。容器引擎还能启动**容器运行时**(见下图)。
|
||||
|
||||
![](https://opensource.com/sites/default/files/linux_container_internals_2.0_-_hosts.png)
|
||||
|
||||
容器存储一般是**写入时复制**(COW)的分层文件系统。从容器仓库拉取一个镜像时,其中的 rootfs 首先被解压到磁盘。如果这个镜像是多层的,那么每一层都会被下载到 COW 文件系统的不同分层。 COW 文件系统保证了镜像的每一层独立存储,这最大化了多个分层镜像之间的文件共享程度。容器引擎通常支持多种容器存储类型,包括 `overlay`、`devicemapper`、`btrfs`、`aufs` 和 `zfs`。
|
||||
|
||||
容器引擎将容器镜像下载到容器存储中之后,需要创建一份**容器运行时配置**,这份配置是用户/调用者的输入和镜像配置的合并。例如,容器的调用者可能会调整安全设置,添加额外的环境变量或者挂载一些卷到容器中。
|
||||
|
||||
容器运行时配置的格式,和解压出来的 rootfs 也都被开放容器计划 OCI 标准组织做了标准化,称为 [OCI 运行时规范][4]。
|
||||
|
||||
最终,容器引擎启动了一个**容器运行时**来读取运行时配置,修改 Linux 控制组、安全限制和名字空间,并执行容器命令来创建容器的 **PID 1**。至此,容器引擎已经可以把容器的标准输入/标准输出转给调用方,并控制容器了(例如,stop,start,attach)。
|
||||
|
||||
值得一提的是,现在出现了很多新的容器运行时,它们使用 Linux 的不同特性来隔离容器。可以使用 KVM 技术来隔离容器(想想迷你虚拟机),或者使用其他虚拟机监视器策略(例如拦截所有从容器内的进程发起的系统调用)。既然我们有了标准的运行时规范,这些工具都能被相同的容器引擎来启动。即使在 Windows 系统下,也可以使用 OCI 运行时规范来启动 Windows 容器。
|
||||
|
||||
容器编排器是一个更高层次的概念。它是在多个不同的节点上协调容器执行的工具。容器编排工具通过和容器引擎的通信来管理容器。编排器控制容器引擎做容器的启动和容器间的网络连接,它能够监控容器,在负载变高的时候进行容器扩容。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/8/sysadmins-guide-containers
|
||||
|
||||
作者:[Daniel J Walsh][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[belitex](https://github.com/belitex)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/rhatdan
|
||||
[1]:https://www.redhat.com/en/blog/containers-are-linux
|
||||
[2]:https://www.opencontainers.org/
|
||||
[3]:https://github.com/opencontainers/image-spec/blob/master/spec.md
|
||||
[4]:https://github.com/opencontainers/runtime-spec
|
390
translated/tech/20180912 How to build rpm packages.md
Normal file
@ -0,0 +1,390 @@
|
||||
如何构建 rpm 包
|
||||
======
|
||||
|
||||
节省跨多个主机安装文件和脚本的时间和精力。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1)
|
||||
|
||||
自 20 多年前我开始使用 Linux 以来,我一直在 Red Hat 和 Fedora Linux 系统上使用基于 rpm 的软件包管理器安装软件。我使用过 **rpm** 程序本身,也使用过 **yum** 和 **DNF**(DNF 是 yum 的直接后继者)在我的 Linux 主机上安装和更新软件包。yum 和 DNF 工具是 rpm 实用程序的包装器,提供了额外的功能,例如查找并安装软件包依赖项的能力。
|
||||
|
||||
多年来,我创建了许多 Bash 脚本,其中一些脚本具有单独的配置文件,我希望在大多数新计算机和虚拟机上安装这些脚本。这也能解决安装所有这些软件包需要花费大量时间的难题,因此我决定通过创建一个 rpm 软件包来自动执行该过程,我可以将其复制到目标主机并将所有这些文件安装在适当的位置。虽然 **rpm** 工具以前用于构建 rpm 包,但该功能已被删除,并且创建了一个新工具来构建新的 rpm。
|
||||
|
||||
当我开始这个项目时,我发现关于创建 rpm 包的资料很少,但我找到了一本叫《Maximum RPM》的书,这本书才让我弄明白了整个过程。这本书现在已经有些过时了,我找到的绝大多数资料也是如此。它还已经绝版,买一本二手书要花数百美元。不过,[Maximum RPM][1] 的在线版本是免费的,并且保持更新。[RPM 网站][2]还有其它网站的链接,上面有很多关于 rpm 的文档。其它资料往往很简短,而且显然都假设你已经对构建过程相当了解。
|
||||
|
||||
此外,我发现的每份文档都假定代码需要在开发环境中从源代码编译。我不是开发人员,而是一名系统管理员,我们系统管理员有不同的需求:我们不需要、也不应该为了完成管理任务而去编译代码,我们应该使用 shell 脚本。所以我们没有需要编译成二进制可执行文件的源代码,我们拥有的“源代码”本身就是可执行的。
|
||||
|
||||
在大多数情况下,此项目应作为非 root 用户执行。 Rpm 包永远不应该由 root 用户构建,而只能由非特权普通用户构建。我将指出哪些部分应该以 root 身份执行,哪些部分应由非 root,非特权用户执行。
|
||||
|
||||
### 准备
|
||||
|
||||
首先,打开一个终端会话,然后 `su` 到 root 用户。 请务必使用 `-` 选项以确保启用完整的 root 环境。 我不认为系统管理员应该使用 `sudo` 来执行任何管理任务。 在我的个人博客文章中可以找出为什么:[真正的系统管理员不要使用 sudo][3]。
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ su -
|
||||
Password:
|
||||
[root@testvm1 ~]#
|
||||
```
|
||||
|
||||
创建可用于此项目的普通用户 student,并为该用户设置密码。
|
||||
|
||||
```
|
||||
[root@testvm1 ~]# useradd -c "Student User" student
|
||||
[root@testvm1 ~]# passwd student
|
||||
Changing password for user student.
|
||||
New password: <Enter the password>
|
||||
Retype new password: <Enter the password>
|
||||
passwd: all authentication tokens updated successfully.
|
||||
[root@testvm1 ~]#
|
||||
```
|
||||
|
||||
构建 rpm 包需要 `rpm-build` 包,该包可能尚未安装。 现在以 root 身份安装它。 请注意,此命令还将安装多个依赖项。 数量可能会有所不同,具体取决于主机上已安装的软件包; 它在我的测试虚拟机上总共安装了17个软件包,这是非常小的。
|
||||
|
||||
```
|
||||
dnf install -y rpm-build
|
||||
```
|
||||
|
||||
除非另有明确指示,否则本项目的剩余部分应以普通用户用户 student 来执行。 打开另一个终端会话并使用 `su` 切换到该用户以执行其余步骤。 使用以下命令从 GitHub 下载我准备好的开发目录结构 utils.tar 这个<ruby>tar 包<rt>tarball</rt></ruby>(LCTT 译注:tarball 是以 tar 命令来打包和压缩的文件的统称):
|
||||
|
||||
```
|
||||
wget https://github.com/opensourceway/how-to-rpm/raw/master/utils.tar
|
||||
```
|
||||
|
||||
此 tar 包包含将由最终 rpm 程序安装的所有文件和 Bash 脚本。 还有一个完整的 spec 文件,你可以使用它来构建 rpm。 我们将详细介绍 spec 文件的每个部分。
|
||||
|
||||
作为普通学生 student,使用你的家目录作为当前工作目录(pwd),解压缩 tar 包。
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ cd ; tar -xvf utils.tar
|
||||
```
|
||||
|
||||
使用 `tree` 命令验证~/development 的目录结构和包含的文件,如下所示:
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ tree development/
|
||||
development/
|
||||
├── license
|
||||
│ ├── Copyright.and.GPL.Notice.txt
|
||||
│ └── GPL_LICENSE.txt
|
||||
├── scripts
|
||||
│ ├── create_motd
|
||||
│ ├── die
|
||||
│ ├── mymotd
|
||||
│ └── sysdata
|
||||
└── spec
|
||||
└── utils.spec
|
||||
|
||||
3 directories, 7 files
|
||||
[student@testvm1 ~]$
|
||||
```
|
||||
|
||||
`mymotd` 脚本创建一个发送到标准输出的“当日消息”数据流。 `create_motd` 脚本运行 `mymotd` 脚本并将输出重定向到 /etc/motd 文件。 此文件用于向使用SSH远程登录的用户显示每日消息。
|
||||
|
||||
`die` 脚本是我自己的脚本,它将 `kill` 命令包装在一些代码中,这些代码可以找到与指定字符串匹配的运行程序并将其终止。 它使用 `kill -9` 来确保kill命令一定会执行。
|
||||
|
||||
`sysdata` 脚本可以显示有关计算机硬件,还有已安装的 Linux 版本,所有已安装的软件包以及硬盘驱动器元数据的数万行数据。 我用它来记录某个时间点的主机状态。 我以后可以用它作为参考。 我曾经这样做是为了维护我为客户安装的主机记录。
|
||||
|
||||
你可能需要将这些文件和目录的所有权更改为 student:student 。 如有必要,使用以下命令执行此操作:
|
||||
|
||||
```
|
||||
chown -R student:student development
|
||||
```
|
||||
|
||||
此文件树中的大多数文件和目录将通过你在此项目期间创建的 rpm 包安装在 Fedora 系统上。
|
||||
|
||||
### 创建构建目录结构
|
||||
|
||||
`rpmbuild` 命令需要非常特定的目录结构。 你必须自己创建此目录结构,因为没有提供自动方式。 在家目录中创建以下目录结构:
|
||||
|
||||
```
|
||||
~ ─ rpmbuild
|
||||
├── RPMS
|
||||
│ └── noarch
|
||||
├── SOURCES
|
||||
├── SPECS
|
||||
└── SRPMS
|
||||
```
|
||||
|
||||
我们不会创建 rpmbuild/RPMS/X86_64 目录,因为对于64位编译的二进制文件这是特定于体系结构的。 我们有 shell 脚本,不是特定于体系结构的。 实际上,我们也不会使用 SRPMS 目录,它将包含编译器的源文件。
|
||||
|
||||
### 检查 spec 文件
|
||||
|
||||
每个 spec 文件都有许多部分,其中一些部分可能会被忽视或省略,取决于 rpm 构建的具体情况。 这个特定的 spec 文件不是工作所需的最小文件的示例,但它是一个很好的包含不需要编译的文件的中等复杂 spec 文件的例子。 如果需要编译,它将在`构建`部分中执行,该部分在此 spec 文件中省略掉了,因为它不是必需的。
|
||||
|
||||
#### 前言
|
||||
|
||||
这是 spec 文件中唯一没有标签的部分。 它包含运行命令 `rpm -qi [Package Name]` 时看到的大部分信息。 每个数据都是一行,由标签和标签值的文本数据组成。
|
||||
|
||||
```
|
||||
###############################################################################
|
||||
# Spec file for utils
|
||||
################################################################################
|
||||
# Configured to be built by user student or other non-root user
|
||||
################################################################################
|
||||
#
|
||||
Summary: Utility scripts for testing RPM creation
|
||||
Name: utils
|
||||
Version: 1.0.0
|
||||
Release: 1
|
||||
License: GPL
|
||||
URL: http://www.both.org
|
||||
Group: System
|
||||
Packager: David Both
|
||||
Requires: bash
|
||||
Requires: screen
|
||||
Requires: mc
|
||||
Requires: dmidecode
|
||||
BuildRoot: ~/rpmbuild/
|
||||
|
||||
# Build with the following syntax:
|
||||
# rpmbuild --target noarch -bb utils.spec
|
||||
```
|
||||
|
||||
`rpmbuild` 程序会忽略注释行。我总是喜欢在本节中添加注释,写明构建该包所需的 `rpmbuild` 命令的确切语法。Summary 标签是包的简短描述。Name、Version 和 Release 标签用于生成 rpm 文件的名字,如 utils-1.0.0-1.rpm 所示。通过增加版本号和发行号,你就能创建出用于更新旧版本的新 rpm 包。
|
||||
|
||||
许可证标签定义了发布包的许可证。我总是使用 GPL 的一个变体。指定许可证对于澄清包中包含的软件是开源的这一事实非常重要。这也是我将许可证和 GPL 语句包含在将要安装的文件中的原因。
|
||||
|
||||
URL 通常是项目或项目所有者的网页。在这种情况下,它是我的个人网页。
|
||||
|
||||
Group 标签很有趣,通常用于 GUI 应用程序。 Group 标签的值决定了应用程序菜单中的哪一组图标将包含此包中可执行文件的图标。与 Icon 标签(我们此处未使用)一起使用时,Group 标签允许添加图标和所需信息用于将程序启动到应用程序菜单结构中。
|
||||
|
||||
Packager 标签用于指定负责维护和创建包的人员或组织。
|
||||
|
||||
Requires 语句定义此 rpm 包的依赖项。每个都是包名。如果其中一个指定的软件包不存在,DNF 安装实用程序将尝试在 /etc/yum.repos.d 中定义的某个已定义的存储库中找到它,如果存在则安装它。如果 DNF 找不到一个或多个所需的包,它将抛出一个错误,指出哪些包丢失并终止。
|
||||
|
||||
BuildRoot 行指定了顶级目录,`rpmbuild` 工具将在其中找到 spec 文件,并在构建包时在其中创建临时目录。完成的包将存放在我们之前指定的 noarch 子目录中。注释里给出了构建此程序包的命令语法,包括定义目标体系结构的 `--target noarch` 选项。因为这些是 Bash 脚本,与特定的 CPU 架构无关。如果省略此选项,构建将默认采用执行构建的那台机器的 CPU 体系结构。
|
||||
|
||||
`rpmbuild` 程序可以针对许多不同的体系结构,并且使用 `--target` 选项允许我们在不同的体系结构主机上构建特定体系结构的包,其具有与执行构建的体系结构不同的体系结构。所以我可以在 x86_64 主机上构建一个用于 i686 架构的软件包,反之亦然。
|
||||
|
||||
如果你有自己的网站,请将打包者的名称更改为你自己的网站。
|
||||
|
||||
#### 描述
|
||||
|
||||
spec 文件的 `描述` 部分包含 rpm 包的描述。 它可以很短,也可以包含许多信息。 我们的 `描述` 部分相当简洁。
|
||||
|
||||
```
|
||||
%description
|
||||
A collection of utility scripts for testing RPM creation.
|
||||
```
|
||||
|
||||
#### 准备
|
||||
|
||||
`准备` 部分是在构建过程中执行的第一个脚本。 在安装程序包期间不会执行此脚本。
|
||||
|
||||
这个脚本只是一个 Bash shell 脚本。 它准备构建目录,根据需要创建用于构建的目录,并将相应的文件复制到各自的目录中。 这将包括完整编译作为构建的一部分所需的源。
|
||||
|
||||
$RPM_BUILD_ROOT 目录表示已安装系统的根目录。 在 $RPM_BUILD_ROOT 目录中创建的目录是实时文件系统中的绝对路径,例如 /user/local/share/utils,/usr/local/bin 等。
|
||||
|
||||
对于我们的包,我们没有预编译源,因为我们的所有程序都是 Bash 脚本。 因此,我们只需将这些脚本和其他文件复制到已安装系统的目录中。
|
||||
|
||||
```
|
||||
%prep
|
||||
################################################################################
|
||||
# Create the build tree and copy the files from the development directories #
|
||||
# into the build tree. #
|
||||
################################################################################
|
||||
echo "BUILDROOT = $RPM_BUILD_ROOT"
|
||||
mkdir -p $RPM_BUILD_ROOT/usr/local/bin/
|
||||
mkdir -p $RPM_BUILD_ROOT/usr/local/share/utils
|
||||
|
||||
cp /home/student/development/utils/scripts/* $RPM_BUILD_ROOT/usr/local/bin
|
||||
cp /home/student/development/utils/license/* $RPM_BUILD_ROOT/usr/local/share/utils
|
||||
cp /home/student/development/utils/spec/* $RPM_BUILD_ROOT/usr/local/share/utils
|
||||
|
||||
exit
|
||||
```
|
||||
|
||||
请注意,本节末尾的 exit 语句是必需的。
|
||||
|
||||
#### 文件
|
||||
|
||||
spec 文件的这一部分定义了要安装的文件及其在目录树中的位置。 它还指定了要安装的每个文件的文件属性以及所有者和组所有者。 文件权限和所有权是可选的,但我建议明确设置它们以消除这些属性在安装时不正确或不明确的任何可能性。 如果目录尚不存在,则会在安装期间根据需要创建目录。
|
||||
|
||||
```
|
||||
%files
|
||||
%attr(0744, root, root) /usr/local/bin/*
|
||||
%attr(0644, root, root) /usr/local/share/utils/*
|
||||
```
|
||||
|
||||
#### 安装前
|
||||
|
||||
在我们的实验室项目的 spec 文件中,此部分为空。 这将放置那些需要 rpm 安装前执行的脚本。
|
||||
|
||||
#### 安装后
|
||||
|
||||
spec 文件的这一部分是另一个 Bash 脚本。 这个在安装文件后运行。 此部分几乎可以是你需要或想要的任何内容,包括创建文件,运行系统命令以及重新启动服务以在进行配置更改后重新初始化它们。 我们的 rpm 包的 `安装后` 脚本执行其中一些任务。
|
||||
|
||||
```
|
||||
%post
|
||||
################################################################################
|
||||
# Set up MOTD scripts #
|
||||
################################################################################
|
||||
cd /etc
|
||||
# Save the old MOTD if it exists
|
||||
if [ -e motd ]
|
||||
then
|
||||
cp motd motd.orig
|
||||
fi
|
||||
# If not there already, Add link to create_motd to cron.daily
|
||||
cd /etc/cron.daily
|
||||
if [ ! -e create_motd ]
|
||||
then
|
||||
ln -s /usr/local/bin/create_motd
|
||||
fi
|
||||
# create the MOTD for the first time
|
||||
/usr/local/bin/mymotd > /etc/motd
|
||||
```
|
||||
|
||||
此脚本中包含的注释应明确其用途。
|
||||
|
||||
#### 卸载后
|
||||
|
||||
此部分包含将在卸载 rpm 软件包后运行的脚本。 使用 rpm 或 DNF 删除包会删除文件部分中列出的所有文件,但它不会删除安装后部分创建的文件或链接,因此我们需要在本节中处理。
|
||||
|
||||
此脚本通常由清理任务组成,只是清除以前由rpm安装的文件,但rpm本身无法完成清除。 对于我们的包,它包括删除 `安装后` 脚本创建的链接并恢复 motd 文件的已保存原件。
|
||||
|
||||
```
|
||||
%postun
|
||||
# remove installed files and links
|
||||
rm /etc/cron.daily/create_motd
|
||||
|
||||
# Restore the original MOTD if it was backed up
|
||||
if [ -e /etc/motd.orig ]
|
||||
then
|
||||
mv -f /etc/motd.orig /etc/motd
|
||||
fi
|
||||
```
|
||||
|
||||
#### 清理
|
||||
|
||||
这个 Bash 脚本在 rpm 构建过程之后开始清理。 下面 `清理` 部分中的两行删除了 `rpm-build` 命令创建的构建目录。 在许多情况下,可能还需要额外的清理。
|
||||
|
||||
```
|
||||
%clean
|
||||
rm -rf $RPM_BUILD_ROOT/usr/local/bin
|
||||
rm -rf $RPM_BUILD_ROOT/usr/local/share/utils
|
||||
```
|
||||
|
||||
#### 更新日志
|
||||
|
||||
此可选的文本部分包含 rpm 及其包含的文件的更改列表。 最新的更改记录在本部分顶部。
|
||||
|
||||
```
|
||||
%changelog
|
||||
* Wed Aug 29 2018 Your Name <Youremail@yourdomain.com>
|
||||
- The original package includes several useful scripts. it is
|
||||
primarily intended to be used to illustrate the process of
|
||||
building an RPM.
|
||||
```
|
||||
|
||||
使用你自己的姓名和电子邮件地址替换标题行中的数据。
|
||||
|
||||
### 构建 rpm
|
||||
|
||||
spec 文件必须位于 rpmbuild 目录树的 SPECS 目录中。 我发现最简单的方法是创建一个指向该目录中实际 spec 文件的链接,以便可以在开发目录中对其进行编辑,而无需将其复制到 SPECS 目录。 将 SPECS 目录设为当前工作目录,然后创建链接。
|
||||
|
||||
```
|
||||
cd ~/rpmbuild/SPECS/
|
||||
ln -s ~/development/spec/utils.spec
|
||||
```
|
||||
|
||||
运行以下命令以构建 rpm 。 如果没有错误发生,只需要花一点时间来创建 rpm 。
|
||||
|
||||
```
|
||||
rpmbuild --target noarch -bb utils.spec
|
||||
```
|
||||
|
||||
检查 ~/rpmbuild/RPMS/noarch 目录以验证新的 rpm 是否存在。
|
||||
|
||||
```
|
||||
[student@testvm1 ~]$ cd rpmbuild/RPMS/noarch/
|
||||
[student@testvm1 noarch]$ ll
|
||||
total 24
|
||||
-rw-rw-r--. 1 student student 24364 Aug 30 10:00 utils-1.0.0-1.noarch.rpm
|
||||
[student@testvm1 noarch]$
|
||||
```
|
||||
|
||||
### 测试 rpm
|
||||
|
||||
以 root 用户身份安装 rpm 以验证它是否正确安装并且文件是否安装在正确的目录中。 rpm 的确切名称将取决于你在 Preamble 部分中标签的值,但如果你使用了示例中的值,则 rpm 名称将如下面的示例命令所示:
|
||||
|
||||
```
|
||||
[root@testvm1 ~]# cd /home/student/rpmbuild/RPMS/noarch/
|
||||
[root@testvm1 noarch]# ll
|
||||
total 24
|
||||
-rw-rw-r--. 1 student student 24364 Aug 30 10:00 utils-1.0.0-1.noarch.rpm
|
||||
[root@testvm1 noarch]# rpm -ivh utils-1.0.0-1.noarch.rpm
|
||||
Preparing... ################################# [100%]
|
||||
Updating / installing...
|
||||
1:utils-1.0.0-1 ################################# [100%]
|
||||
```
|
||||
|
||||
检查 /usr/local/bin 以确保新文件存在。 你还应验证是否已创建 /etc/cron.daily 中的 create_motd 链接。
|
||||
|
||||
使用 `rpm -q --changelog utils` 命令查看更改日志。 使用 `rpm -ql utils` 命令(在 `ql`中为小写 L )查看程序包安装的文件。
|
||||
|
||||
```
|
||||
[root@testvm1 noarch]# rpm -q --changelog utils
|
||||
* Wed Aug 29 2018 Your Name <Youremail@yourdomain.com>
|
||||
- The original package includes several useful scripts. it is
|
||||
primarily intended to be used to illustrate the process of
|
||||
building an RPM.
|
||||
|
||||
[root@testvm1 noarch]# rpm -ql utils
|
||||
/usr/local/bin/create_motd
|
||||
/usr/local/bin/die
|
||||
/usr/local/bin/mymotd
|
||||
/usr/local/bin/sysdata
|
||||
/usr/local/share/utils/Copyright.and.GPL.Notice.txt
|
||||
/usr/local/share/utils/GPL_LICENSE.txt
|
||||
/usr/local/share/utils/utils.spec
|
||||
[root@testvm1 noarch]#
|
||||
```
|
||||
|
||||
删除包。
|
||||
|
||||
```
|
||||
rpm -e utils
|
||||
```
|
||||
|
||||
### 试验
|
||||
|
||||
现在,你将更改 spec 文件以要求一个不存在的包。 这将模拟无法满足的依赖关系。 在现有依赖行下立即添加以下行:
|
||||
|
||||
```
|
||||
Requires: badrequire
|
||||
```
|
||||
|
||||
构建包并尝试安装它。 显示什么消息?
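报错大致如下(示意,具体文字可能随 rpm 版本不同而有差异):

```
error: Failed dependencies:
        badrequire is needed by utils-1.0.0-1.noarch
```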
|
||||
|
||||
我们使用 `rpm` 命令来安装和删除 `utils` 包。 尝试使用 yum 或 DNF 安装软件包。 你必须与程序包位于同一目录中,或指定程序包的完整路径才能使其正常工作。
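例如(假设当前目录就是 rpm 包所在目录):

```
[root@testvm1 noarch]# dnf install ./utils-1.0.0-1.noarch.rpm
```

给出路径(如开头的 `./`)可以确保 DNF 把参数当作本地文件来处理,而不是去仓库里查找同名的包。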
|
||||
|
||||
### 总结
|
||||
|
||||
在这里看一下创建 rpm 包的基础知识,我们没有涉及很多标签和很多部分。 下面列出的资源可以提供更多信息。 构建 rpm 包并不困难;你只需要正确的信息。 我希望这对你有所帮助——我花了几个月的时间来自己解决问题。
|
||||
|
||||
我们没有涵盖源代码构建,但如果你是开发人员,那么从这一点开始应该是一个简单的步骤。
|
||||
|
||||
创建 rpm 包是另一种成为懒惰系统管理员的好方法,可以节省时间和精力。 它提供了一种简单的方法来分发和安装那些我们作为系统管理员需要在许多主机上安装的脚本和其他文件。
|
||||
|
||||
### 资料
|
||||
|
||||
- Edward C. Baily,Maximum RPM,Sams著,于2000年,ISBN 0-672-31105-4
|
||||
- Edward C. Baily,[Maximum RPM][1],更新在线版本
|
||||
- [RPM文档][4]:此网页列出了 rpm 的大多数可用在线文档。 它包括许多其他网站的链接和有关 rpm 的信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/9/how-build-rpm-packages
|
||||
|
||||
作者:[David Both][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[Flowsnow](https://github.com/Flowsnow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/dboth
|
||||
[1]: http://ftp.rpm.org/max-rpm/
|
||||
[2]: http://rpm.org/index.html
|
||||
[3]: http://www.both.org/?p=960
|
||||
[4]: http://rpm.org/documentation.html
|
@ -0,0 +1,111 @@
|
||||
使用 Syncthing —— 一个开源同步工具来把握你数据的控制权
======

> 决定如何存储和共享你的个人信息。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg)
|
||||
|
||||
如今,我们的一些最重要的财产——从家人和朋友的照片、视频,到财务和医疗文件——都是数据。即便云存储服务发展迅猛,我们仍然对隐私问题和个人数据缺乏控制权心存担忧。从 PRISM 监控计划到谷歌[让 APP 开发者扫描你的个人邮件][1],这些新闻报道应该会让我们对个人信息的安全性有所警惕。
|
||||
|
||||
[Syncthing][2] 可以让你放下心来。它是一款开源的点对点文件同步工具,可以运行在 Linux、Windows、Mac、Android 等平台上(抱歉,没有 iOS)。Syncthing 使用自定义的协议,叫[块交换协议][3]。简而言之,Syncthing 能让你无需服务器就在多台设备之间同步数据。
|
||||
|
||||
### Linux
|
||||
|
||||
在这篇文章中,我将解释如何在 Linux 电脑和安卓手机之间安装和同步文件。
|
||||
|
||||
Syncthing 在大多数流行的发行版中都能安装,Fedora 28 中就包含了它的最新版本。
|
||||
|
||||
要在 Fedora 上安装 Syncthing,你能在软件中心搜索,或者执行以下命令:
|
||||
|
||||
```
|
||||
sudo dnf install syncthing syncthing-gtk
|
||||
```
|
||||
|
||||
一旦安装好后,打开它。你将会看到一个助手帮你配置 Syncthing。点击 **下一步** 直到它要求配置 WebUI。最安全的选项是选择**监听本地地址**。那将会禁止 Web 接口并且阻止未经授权的用户。
|
||||
|
||||
![Syncthing in Setup WebUI dialog box][5]
|
||||
|
||||
Syncthing 安装时的 WebUI 对话框
|
||||
|
||||
关闭对话框。现在 Syncthing 已经安装好了,是时候分享一个文件夹、连接一台设备并开始同步了。不过,我们先继续安装另一个客户端。
|
||||
|
||||
### Android
|
||||
|
||||
Syncthing 在 Google Play 和 F-Droid 应用商店都能下载
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing2.png)
|
||||
|
||||
安装应用程序后,会显示欢迎界面。给 Syncthing 授予访问设备存储的权限。你可能还会被要求为此应用禁用电池优化,这样做是安全的,因为我们接下来会把应用设置为仅在充电且连接无线网络时才同步。
|
||||
|
||||
点击主菜单图标来到**设置**,然后是**运行条件**。点击**总是在后台运行**, **仅在充电时运行**和**仅在 WIFI 下运行**。现在你的安卓客户端已经准备好与你的设备交换文件。
|
||||
|
||||
Syncthing 中有两个重要的概念需要记住:文件夹和设备。文件夹是你想要分享的内容,设备则是分享的对象。Syncthing 允许你与不同的设备分享不同的文件夹。设备通过交换设备 ID 来添加,设备 ID 是 Syncthing 首次启动时生成的一个唯一的、密码学安全的标识符。
|
||||
|
||||
### 连接设备
|
||||
|
||||
现在让我们连接你的Linux机器和你的Android客户端。
|
||||
|
||||
在您的Linux计算机中,打开 Syncting,单击 **设置** 图标,然后单击 **显示ID** ,就会显示一个二维码。
|
||||
|
||||
在你的安卓手机上,打开 Syncthing。在主界面上,点击 **设备** 页后点击 **+** 。在第一个区域内点击二维码符号来启动二维码扫描。
|
||||
|
||||
将你手机的摄像头对准电脑上的二维码,设备 ID 字段将自动填入你桌面客户端的设备 ID。起一个合适的名字并保存。由于设备添加需要双向确认,现在你需要在电脑客户端上确认要添加这台安卓手机。电脑客户端可能会过几分钟才弹出确认请求,当提示确认时,点击**添加**。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing6.png)
|
||||
|
||||
在**新设备**窗口,你可以确认并配置该设备的一些选项,比如**设备名**和**地址**。如果在地址一栏选择 dynamic(动态),客户端将自动探测设备的 IP 地址;如果你想固定使用某个 IP 地址,也可以直接填进这一栏。如果你已经创建了文件夹(或者之后创建了),也可以在这里与新设备分享它。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing7.png)
|
||||
|
||||
你的电脑和安卓设备已经配对,可以交换文件了。(如果你有多台电脑或手机,只需重复这些步骤。)
|
||||
|
||||
### 分享文件夹
|
||||
|
||||
既然您想要同步的设备之间已经连接,现在是时候共享一个文件夹了。您可以在电脑上共享文件夹,添加了该文件夹中的设备将获得一份副本。
|
||||
|
||||
若要共享文件夹,请转至**设置**并单击**添加共享文件夹**:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing8.png)
|
||||
|
||||
在下一个窗口中,输入要共享的文件夹的信息:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing9.png)
|
||||
|
||||
你可以使用任何你想要的标签。**文件夹 ID** 将随机生成,用于在客户端之间识别该文件夹。在**路径**里,点击**浏览**就能定位到你想要分享的文件夹。如果你想让 Syncthing 监控文件夹的变化(例如删除、新建文件等),点击**监控文件系统变化**。
|
||||
|
||||
记住,当你分享一个文件夹时,在其它客户端上的任何改动都会反映到每一台设备上。这意味着,如果你在电脑和手机设备之间分享了一个包含图片的文件夹,这些客户端上的改动都会同步到每一台设备。如果这不是你想要的,你可以把文件夹设置为“只发送”给其它客户端,这样其它客户端上的改动就不会被同步回来。
|
||||
|
||||
完成后,转至**与设备共享**页并选择要与之同步文件夹的主机:
|
||||
|
||||
您选择的所有设备都需要接受共享请求;您将在设备上收到通知。
|
||||
|
||||
正如共享文件夹时一样,您必须配置新的共享文件夹:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/syncthing12.png)
|
||||
|
||||
同样,在这里您可以定义任何标签,但是 ID 必须匹配每个客户端。在文件夹选项中,选择文件夹及其文件的位置。请记住,此文件夹中所做的任何更改都将反映到文件夹所允许同步的每个设备上。
|
||||
|
||||
以上就是连接设备、通过 Syncthing 共享文件夹的步骤。开始复制可能需要几分钟,这取决于你的网络设置,以及设备是否处于同一网络中。
|
||||
|
||||
Syncthing 还提供了更多出色的功能和选项。试试看,把握住你数据的控制权。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/9/take-control-your-data-syncthing
|
||||
|
||||
作者:[Michael Zamot][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[ypingcn](https://github.com/ypingcn)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mzamot
|
||||
[1]: https://gizmodo.com/google-says-it-doesnt-go-through-your-inbox-anymore-bu-1827299695
|
||||
[2]: https://syncthing.net/
|
||||
[3]: https://docs.syncthing.net/specs/bep-v1.html
|
||||
[4]: /file/410191
|
||||
[5]: https://opensource.com/sites/default/files/uploads/syncthing1.png "Syncthing in Setup WebUI dialog box"
|
87
translated/tech/20180927 5 cool tiling window managers.md
Normal file
@ -0,0 +1,87 @@
|
||||
5 个很酷的平铺窗口管理器
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/09/tilingwindowmanagers-816x345.jpg)
|
||||
Linux 桌面生态中有多种窗口管理器(WM)。有些是作为桌面环境的一部分开发的,有的则被用作独立程序,平铺式 WM 就属于后一种情况,它提供了一个更轻量、可定制的环境。本文介绍五款这样的平铺式 WM,供你尝试。
|
||||
|
||||
### i3
|
||||
|
||||
[i3][1] 是最受欢迎的平铺窗口管理器之一。与大多数其他此类 WM 一样,i3 专注于低资源消耗和用户可定制性。
|
||||
|
||||
您可以参考[Magazine 上的这篇文章][2]了解 i3 安装细节以及如何配置它。
|
||||
|
||||
### sway
|
||||
|
||||
[sway][3] 是一个平铺 Wayland 合成器。它有与现有 i3 配置兼容的优点,因此你可以使用它来替换 i3 并使用 Wayland 作为显示协议。
|
||||
|
||||
您可以使用 dnf 从 Fedora 仓库安装 sway:
|
||||
|
||||
```
|
||||
$ sudo dnf install sway
|
||||
```
|
||||
|
||||
如果你想从 i3 迁移到 sway,这里有一个[迁移指南][4]。
|
||||
|
||||
### Qtile
|
||||
|
||||
[Qtile][5] 是另一个平铺管理器,也恰好是用 Python 编写的。默认情况下,你在位于 ~/.config/qtile/config.py 下的 Python 脚本中配置 Qtile。当此脚本不存在时,Qtile 会使用默认[配置][6]。
|
||||
|
||||
Qtile 使用 Python 的一个好处是你可以编写脚本来控制 WM。例如,以下脚本打印屏幕详细信息:
|
||||
|
||||
```
|
||||
> from libqtile.command import Client
|
||||
> c = Client()
|
||||
> print(c.screen.info)
|
||||
{'index': 0, 'width': 1920, 'height': 1006, 'x': 0, 'y': 0}
|
||||
```
|
||||
|
||||
要在 Fedora 上安装 Qtile,请使用以下命令:
|
||||
|
||||
```
|
||||
$ sudo dnf install qtile
|
||||
```
|
||||
|
||||
### dwm
|
||||
|
||||
[dwm][7] 窗口管理器更侧重于轻量级。该项目的一个目标是保持 dwm 最小。例如,整个代码库从未超过 2000 行代码。另一方面,dwm 不容易定制和配置。实际上,改变 dwm 默认配置的唯一方法是[编辑源代码并重新编译程序][8]。
|
||||
|
||||
如果你想尝试默认配置,你可以使用 dnf 在 Fedora 中安装 dwm:
|
||||
|
||||
```
|
||||
$ sudo dnf install dwm
|
||||
```
|
||||
|
||||
对于那些想要改变 dwm 配置的人,Fedora 中有一个 dwm-user 包。该软件包使用用户主目录中 ~/.dwm/config.h 的配置自动重新编译 dwm。
|
||||
|
||||
### awesome
|
||||
|
||||
[awesome][9] 最初是作为 dwm 的一个分支开发,使用外部配置文件提供 WM 的配置。配置通过 Lua 脚本完成,这些脚本允许你编写脚本以自动执行任务或创建 widget。
|
||||
|
||||
你可以使用这个命令在 Fedora 上安装 awesome:
|
||||
|
||||
```
|
||||
$ sudo dnf install awesome
|
||||
```
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/5-cool-tiling-window-managers/
|
||||
|
||||
作者:[Clément Verna][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org
|
||||
[1]: https://i3wm.org/
|
||||
[2]: https://fedoramagazine.org/getting-started-i3-window-manager/
|
||||
[3]: https://swaywm.org/
|
||||
[4]: https://github.com/swaywm/sway/wiki/i3-Migration-Guide
|
||||
[5]: http://www.qtile.org/
|
||||
[6]: https://github.com/qtile/qtile/blob/develop/libqtile/resources/default_config.py
|
||||
[7]: https://dwm.suckless.org/
|
||||
[8]: https://dwm.suckless.org/customisation/
|
||||
[9]: https://awesomewm.org/
|
@ -0,0 +1,262 @@
|
||||
系统管理员需知的 16 个 iptables 使用技巧
|
||||
=======
|
||||
|
||||
iptables 是一款控制系统进出流量的强大配置工具。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg)
|
||||
|
||||
现代 Linux 内核带有一个叫 [Netfilter][1] 的数据包过滤框架。Netfilter 提供了允许、禁止以及修改等操作来控制进出系统的流量数据包。基于 Netfilter 框架的用户层命令行工具 **iptables** 提供了强大的防火墙配置功能,允许你添加规则来构建防火墙策略。[iptables][2] 丰富复杂的功能以及其巴洛克式命令语法可能让人难以驾驭。我们就来探讨一下其中的一些功能,提供一些系统管理员解决某些问题需要的使用技巧。
|
||||
|
||||
### 避免封锁自己
|
||||
|
||||
应用场景:假设你将对公司服务器上的防火墙规则进行修改,需要避免封锁你自己以及其他同事的情况(这将会带来一定时间和金钱的损失,也许一旦发生马上就有部门打电话找你了)
|
||||
|
||||
#### 技巧 #1: 开始之前先备份一下 iptables 配置文件。
|
||||
|
||||
用如下命令备份配置文件:
|
||||
|
||||
```
|
||||
/sbin/iptables-save > /root/iptables-works
|
||||
|
||||
```
|
||||
#### 技巧 #2: 更妥当的做法,给文件加上时间戳。
|
||||
|
||||
用如下命令加时间戳:
|
||||
|
||||
```
|
||||
/sbin/iptables-save > /root/iptables-works-`date +%F`
|
||||
|
||||
```
|
||||
|
||||
然后你就可以生成如下名字的文件:
|
||||
|
||||
```
|
||||
/root/iptables-works-2018-09-11
|
||||
|
||||
```
|
||||
|
||||
这样万一使得系统不工作了,你也可以很快的利用备份文件恢复原状:
|
||||
|
||||
```
|
||||
/sbin/iptables-restore < /root/iptables-works-2018-09-11
|
||||
|
||||
```
|
||||
|
||||
#### 技巧 #3: 每次创建 iptables 配置文件副本时,都创建一个指向 `latest` 的文件的链接。
|
||||
|
||||
```
|
||||
ln -s /root/iptables-works-`date +%F` /root/iptables-works-latest
|
||||
|
||||
```
|
||||
|
||||
#### 技巧 #4: 将特定规则放在策略顶部,底部放置通用规则。
|
||||
|
||||
避免在策略顶部使用如下的一些通用规则:
|
||||
|
||||
```
|
||||
iptables -A INPUT -p tcp --dport 22 -j DROP
|
||||
|
||||
```
|
||||
|
||||
你在规则中指定的条件越多,封锁自己的可能性就越小。因此,不要使用上面这种过于宽泛的规则,而是使用像下面这样更具体的规则:

```
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -d 192.168.100.101 -j DROP
```

This rule appends (**-A**) to the **INPUT** chain a rule that drops any **tcp** (**-p tcp**) packets coming from **10.0.0.0/8** and going to **192.168.100.101** on destination port **22** (**--dport 22**).

There are plenty of ways you can be more specific. For example, using **-i eth0** restricts the rule to **eth0** and leaves **eth1** unaffected.

#### Tip #5: Whitelist your IP address at the top of the policy.

This is a very effective way to keep from locking yourself out:

```
iptables -I INPUT -s <your IP> -j ACCEPT
```

You need to put this as the first rule for it to work as intended: **-I** inserts the rule at the top of the policy, whereas **-A** appends it to the end.

#### Tip #6: Know and understand all the rules in your current policy.

Not making a mistake in the first place is half the battle. If you understand the inner workings of your iptables policy, it will make your life easier. Draw a flowchart of how packets move through it if you must. Also remember: what the policy is intended to do and what it actually does can be two very different things.

### Set up a workstation firewall policy

Scenario: You want to set up a workstation with a restrictive firewall policy.

#### Tip #1: Set the default policy to DROP

```
# Set a default policy of DROP
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
```

#### Tip #2: Allow the minimum set of services users need to get their work done

The policy needs to allow the workstation to get an IP address, netmask, and other important information via DHCP (**-p udp --dport 67:68 --sport 67:68**). For remote management, it needs to allow inbound SSH (**--dport 22**), outbound mail (**--dport 25**), DNS (**--dport 53**), outbound ping (**-p icmp**), Network Time Protocol (**--dport 123 --sport 123**), and outbound HTTP (**--dport 80**) and HTTPS (**--dport 443**).

```
# Set a default policy of DROP
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]

# Accept any related or established connections
-I INPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
-I OUTPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow all traffic on the loopback interface
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT

# Allow outbound DHCP request
-A OUTPUT -o eth0 -p udp --dport 67:68 --sport 67:68 -j ACCEPT

# Allow inbound SSH
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT

# Allow outbound email
-A OUTPUT -o eth0 -p tcp -m tcp --dport 25 -m state --state NEW -j ACCEPT

# Outbound DNS lookups
-A OUTPUT -o eth0 -p udp -m udp --dport 53 -j ACCEPT

# Outbound PING requests
-A OUTPUT -o eth0 -p icmp -j ACCEPT

# Outbound Network Time Protocol (NTP) requests
-A OUTPUT -o eth0 -p udp --dport 123 --sport 123 -j ACCEPT

# Outbound HTTP
-A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT

COMMIT
```
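
This policy is written in iptables-save format, so the natural way to activate it is with iptables-restore rather than typing the rules one by one. A sketch, assuming the rules above were saved to a file named workstation.rules (a hypothetical name):

```
# load the whole policy in one pass (each table is committed atomically)
/sbin/iptables-restore < workstation.rules
```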

### Restrict an IP address range

Scenario: The CEO of your company thinks the employees are spending too much time on Facebook, and something must be done about it. The CEO tells the CIO, the CIO tells the CISO, and eventually the job lands on you. You decide to block all access to Facebook. First, use the `host` or `whois` command to find Facebook's IP address range:

```
host -t a www.facebook.com
www.facebook.com is an alias for star.c10r.facebook.com.
star.c10r.facebook.com has address 31.13.65.17
whois 31.13.65.17 | grep inetnum
inetnum: 31.13.64.0 - 31.13.127.255
```

Then use the [CIDR to IPv4][3] conversion page to turn that range into CIDR notation, which gives you **31.13.64.0/18**. To block outgoing access to Facebook, enter:

```
iptables -A OUTPUT -p tcp -o eth1 -d 31.13.64.0/18 -j DROP
```
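
Large sites publish many A records and netblocks, so a cruder but quicker variant is to loop over whatever the resolver returns and block each address individually. A sketch (this blocks only the addresses returned at that moment, which is narrower than blocking the whole netblock):

```
# block every address currently returned for the site
for ip in $(host -t a www.facebook.com | awk '/has address/ {print $4}'); do
    iptables -A OUTPUT -p tcp -d "$ip" -j DROP
done
```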

### Regulate by time: scenario 1

Scenario: The blanket ban on Facebook did not go over well with the employees, and the CEO relented a little (that, and his administrative assistant reminded him that she keeps his Facebook page up to date). The CEO decides to allow access to Facebook only at lunchtime (12 PM to 1 PM). Assuming the default policy is DROP, you can use the time features of iptables to open that window:

```
iptables -A OUTPUT -p tcp -m multiport --dport http,https -o eth1 -m time --timestart 12:00 --timestop 13:00 -d 31.13.64.0/18 -j ACCEPT
```

This command accepts (**-j ACCEPT**) http and https traffic (**-m multiport --dport http,https**) to Facebook.com (**-d [31.13.64.0/18][5]**) between noon (**--timestart 12:00**) and 1 PM (**--timestop 13:00**).

### Regulate by time: scenario 2

Scenario: During a planned outage for system maintenance, you need to deny all TCP and UDP traffic between 2 AM and 3 AM so the maintenance tasks won't be disturbed. This takes two iptables rules:

```
iptables -A INPUT -p tcp -m time --timestart 02:00 --timestop 03:00 -j DROP
iptables -A INPUT -p udp -m time --timestart 02:00 --timestop 03:00 -j DROP
```

These rules drop (**-j DROP**) all incoming (**-A INPUT**) TCP and UDP traffic (**-p tcp** and **-p udp**) between 2 AM (**--timestart 02:00**) and 3 AM (**--timestop 03:00**).
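
One caveat worth knowing: the time match compares against UTC by default, and it also accepts day-of-week restrictions. A sketch that limits the same blackout to weekend maintenance windows, using options I believe the time match supports (**--kerneltz** switches the comparison to the kernel's local time zone):

```
# drop inbound TCP and UDP only during the weekend maintenance window, in local time
iptables -A INPUT -p tcp -m time --timestart 02:00 --timestop 03:00 --weekdays Sat,Sun --kerneltz -j DROP
iptables -A INPUT -p udp -m time --timestart 02:00 --timestop 03:00 --weekdays Sat,Sun --kerneltz -j DROP
```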

### Limit the number of connections

Scenario: Your web server could be the target of a DoS attack from anywhere in the world. To keep such attacks from overwhelming the server, you can limit the number of connections a single IP address can open to it:

```
iptables -A INPUT -p tcp --syn -m multiport --dport http,https -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
```

Let's take that command apart. If a single host tries to open more than 20 simultaneous new connections (**--syn**, **--connlimit-above 20**) to your web server's http or https ports (**-m multiport --dport http,https**), the server rejects (**-j REJECT**) the new connection and tells the host it was reset (**--reject-with tcp-reset**).
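
connlimit caps simultaneous connections; for rate-limiting repeated connection attempts over time (brute-force SSH probes, for example), the recent match is the usual companion. A sketch that drops a source making a fourth new SSH connection within 60 seconds:

```
# record each new SSH connection attempt in a list named "ssh", then fall through
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name ssh --set

# drop the attempt if the same source has hit 4 times within the last 60 seconds
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name ssh --update --seconds 60 --hitcount 4 -j DROP
```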

### Monitor iptables rules

Scenario: Since packets traverse a chain's rules in order and iptables follows a "first match wins" policy, frequently matched rules belong near the top of the policy and rarely matched rules near the bottom. How do you know which rules are matched the most or the least, so you can position them accordingly?

#### Tip #1: See how many times each rule is hit

Use this command:

```
iptables -L -v -n --line-numbers
```

The **-L** option lists all the rules in the chain (all chains, since none was specified). **-v** shows detailed information, including the packet and byte counters, and **-n** prints addresses and ports in numeric form instead of resolving names. The number at the start of each rule gives its position in the chain (**--line-numbers**).

Based on the packet and byte counts, you can move the most frequently hit rules toward the top of the policy and the least frequently hit ones toward the bottom.

#### Tip #2: Remove unnecessary rules

Which rules have never been matched at all? Those are candidates for removal. Check with this command:

```
iptables -nvL | grep -v "0     0"
```

Note: the gap between the two zeros is five spaces, not a tab.
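
The exact run of spaces makes that grep fragile, since the column widths shift as the counters grow. A sketch of a whitespace-agnostic variant using awk, which splits fields on any run of whitespace (with `--line-numbers`, the packet and byte counters are fields 2 and 3):

```
# print only rules whose packet and byte counters are both zero
iptables -nvL --line-numbers | awk '$2 == 0 && $3 == 0'
```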

#### Tip #3: Monitor what's going on

You would probably like to keep an eye on iptables in real time, like you can with **top**. Use this command to monitor the activity dynamically and show only the rules that are actively being traversed:

```
watch --interval=5 'iptables -nvL | grep -v "0     0"'
```

**watch** runs **'iptables -nvL | grep -v "0     0"'** every five seconds and displays the first screen of its output, letting you watch the packet and byte counts change over time.

### Logging and reporting

Scenario: Your manager thinks your iptables firewall work is terrific, but a log of the network traffic activity would make it even better. Sometimes a log is more persuasive than a written report about your work.

The [FWLogwatch][6] tool produces log reports based on iptables firewall records. It supports many report formats and offers a range of analysis features. The logs and monthly reports it generates save administrators substantial time, help them manage the network better, and can even catch potential attacks that would otherwise go unnoticed.
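
FWLogwatch can only report on what the kernel actually logs, and iptables logs nothing by default. A sketch of a LOG rule that records new inbound SSH connections with a recognizable prefix (the prefix text is an arbitrary choice):

```
# log new SSH connections before the rule that accepts (or drops) them
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j LOG --log-prefix "SSH connection: "
```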

Here is a sample report generated by FWLogwatch:

![](https://opensource.com/sites/default/files/uploads/fwlogwatch.png)

### Don't stop at allow and drop rules

This article has covered many facets of iptables: avoiding locking yourself out, building a firewall policy with iptables, and monitoring iptables activity. Take these as a starting point, keep exploring iptables, and pick up even more ways to use it.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/iptables-tips-and-tricks

Author: [Gary Smith][a]
Topic selector: [lujun9972](https://github.com/lujun9972)
Translator: [jrg](https://github.com/jrglinu)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/greptile
[1]: https://en.wikipedia.org/wiki/Netfilter
[2]: https://en.wikipedia.org/wiki/Iptables
[3]: http://www.ipaddressguide.com/cidr
[4]: http://www.facebook.com
[5]: http://31.13.64.0/18
[6]: http://fwlogwatch.inside-security.de/

@ -1,74 +0,0 @@

How to use the SSH and SFTP protocols on your home network
======

Use the SSH and SFTP protocols to access other devices on your network, transfer files efficiently and securely, and more.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab)

Years ago, I decided to set up an extra computer so that I could access it from work to transfer any files I might need. The most basic first step was getting a static IP address from my internet service provider (ISP).

The not-so-basic, but more important, step was making access to the system secure. In this particular case, I planned to access it only from work, so I could restrict access to that IP address. Even so, you should use every security measure you can. What is astonishing, and a little alarming, is that as soon as you set this up, people from all over the world will immediately attempt to access your system. You can see this in the log files. I presume there are probing bots out there doing their best to find unsecured systems.

Not long after I set up my system, I decided this kind of access was more of a toy than something I needed, so I turned it off and stopped worrying about it. Nevertheless, SSH and SFTP have other uses on a home network, and they are probably already set up for you.

One requirement is that the other computer in your home must be turned on; it doesn't matter how old that computer is. You also need to know its IP address. There are two ways to find it. One is to reach your router through a web browser, typically at an address like **192.168.1.254**. With some poking around, it should be simple enough to see what is currently on and hooked up to the system over eth0 or WiFi. What can be a challenge is identifying the computer you are interested in.

The easier way is to ask the computer itself. Open a shell and type:

```
ifconfig
```

The command spits out some information; what you need is after `inet`, and it looks something like **192.168.1.234**. Once you find that, go back to your client computer and type this on the command line:

```
ssh gregp@192.168.1.234
```

For this to work, **gregp** must be a valid user name on the host system. You will be asked for that user's password. If you type the password and the user name correctly, you are connected to the other computer in a shell environment. I confess that I don't use SSH all that often. I use it occasionally so I can run `dnf` upgrades on other computers without leaving my seat. Usually, I use SFTP:

```
sftp gregp@192.168.1.234
```

What I have a stronger need for is an easy way to transfer files from one computer to another. Compared with flash drives and extra hardware, it is more convenient and takes less time.

Once the connection is established, SFTP has two basic commands: `get`, to receive files from the host, and `put`, to send files to the host. On the client side, I usually move to the directory I want to receive files into, or transfer files from, before making the connection. After connecting, you start out in the host's top-level directory, **home/gregp**. While connected, you use `cd` just as you would on the client, except now you are changing your working directory on the host. You may need `ls` to confirm where you are.

If you need to change the working directory on the client side, use the `lcd` command (for **local change directory**). Similarly, use `lls` to show the contents of the client's working directory.
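
Put together, a short annotated session looks something like this sketch (the directory and file names here are hypothetical):

```
sftp gregp@192.168.1.234
sftp> lcd ~/Pictures      # set the local working directory
sftp> cd backup           # set the remote working directory on the host
sftp> put photo1.jpg      # send a file to the host
sftp> get notes.txt       # fetch a file from the host
sftp> bye                 # end the session
```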

What if you don't like the names of the working directories on the host? Use `mkdir` to create a new directory there. Or copy an entire directory of files to the host:

```
put -r thisDir/
```

Creating the directory and transferring its files and subdirectories to the host is fast, up to the limits of your hardware; there is no network bottleneck in the process. To see the capabilities available in SFTP, check:

```
man sftp
```

I can also use SFTP with a Windows VM on my computer, yet another advantage of setting up a VM rather than a dual-boot system. It lets me move files to or from the Linux part of the system. So far, I have only used a client on Windows to do this.

You can reach any device connected to your router by wire or WiFi this way. For a while, I used an app called [SSHDroid][1], which runs SSH in a passive mode: in other words, you use your computer to reach the Android device acting as the host. Recently I found another app, [Admin Hands][2], in which the tablet or phone is the client and can be used for either SSH or SFTP operations. This app is great for backing up or sharing photos from your phone.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/ssh-sftp-home-network

Author: [Greg Pittman][a]
Topic selector: [lujun9972](https://github.com/lujun9972)
Translator: [singledo](https://github.com/singledo)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/greg-p
[1]: https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid
[2]: https://play.google.com/store/apps/details?id=com.arpaplus.adminhands&hl=en_US

@ -0,0 +1,72 @@

Tips for listing files with ls at the Linux command line
======

Learn some of the most useful variations of the Linux `ls` command.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)

One of the first commands I learned in Linux was `ls`. Knowing what's in a directory where a file on your system resides is important; being able to see and modify not just some files but all of them is important too.

My first Linux cheat sheet was the [One Page Linux Manual][1], which was released in 1999 and became my go-to reference. I taped it over my desk and consulted it often as I began to explore Linux. Listing files with `ls -l` is introduced at the bottom of the first column on the first page.

Later, I would learn other iterations of this most basic command. Through the `ls` command, I began to learn about the complexity of Linux file permissions, which files were mine, and which required root or sudo privileges to change. Over time, I became very comfortable on the command line, and while I still use `ls -l` to find files in a directory, I frequently use `ls -al` so I can also see hidden files that might need to be changed, such as configuration files.

According to an article by Eric Fischer about the `ls` command in the [Linux Documentation Project][2], its origins date to the `listf` command on MIT's Compatible Time-Sharing System (CTSS) in 1961. When CTSS was replaced by [Multics][3], the command became `list`, with switches like `list -all`. According to [Wikipedia][4], `ls` appeared in the original version of AT&T Unix. The `ls` command we use on Linux systems today comes from the [GNU Core Utilities][5].

Most of the time, I use only a couple of iterations of the command. Looking inside a directory with `ls` or `ls -al` is how I usually use it, but there are many other options you should be familiar with.

`ls -l` provides a simple list of the directory:

![](https://opensource.com/sites/default/files/uploads/linux_ls_1_0.png)

Using the man pages on my Fedora 28 system, I find that there are many other options to `ls`, all of which provide interesting and useful information about the Linux file system. By entering `man ls` at the command prompt, we can begin to explore some of the other options:

![](https://opensource.com/sites/default/files/uploads/linux_ls_2_0.png)

To sort the directory by file size, use `ls -lS`:

![](https://opensource.com/sites/default/files/uploads/linux_ls_3_0.png)

To list the contents in reverse order, use `ls -lr`:

![](https://opensource.com/sites/default/files/uploads/linux_ls_4.png)

To list the contents by columns, use `ls -C`:

![](https://opensource.com/sites/default/files/uploads/linux_ls_5.png)

`ls -al` provides a list of all the files in the same directory:

![](https://opensource.com/sites/default/files/uploads/linux_ls_6.png)

Here are some other options I find useful and interesting:

* List only the .txt files in a directory: `ls *.txt`
* Show the allocated size of each file: `ls -s`
* Sort by time and date: `ls -t`
* Sort by extension: `ls -X`
* Sort by file size: `ls -S`
* Long format with file size: `ls -ls`

To produce a directory list in the specified format and send it to a file for later viewing, enter `ls -al > mydirectorylist`. Finally, one of the more exotic commands I found is `ls -R`, which provides a recursive list of all the directories on your computer and their contents.
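
These switches also combine freely. A couple of sketches I find handy (the `-h` flag prints human-readable sizes):

```
# long format, human-readable sizes, largest files first
ls -lhS

# the ten most recently modified files, newest first
ls -lt | head
```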

For a full list of all the iterations of the `ls` command, refer to the [GNU Core Utilities][6].

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/ls-command

Author: [Don Watkins][a]
Topic selector: [lujun9972](https://github.com/lujun9972)
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [pityonline](https://github.com/pityonline)

This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/don-watkins
[1]: http://hackerspace.cs.rutgers.edu/library/General/One_Page_Linux_Manual.pdf
[2]: http://www.tldp.org/LDP/LG/issue48/fischer.html
[3]: https://en.wikipedia.org/wiki/Multics
[4]: https://en.wikipedia.org/wiki/Ls
[5]: http://www.gnu.org/s/coreutils/
[6]: https://www.gnu.org/software/coreutils/manual/html_node/ls-invocation.html#ls-invocation