mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-03-03 01:10:13 +08:00
Merge branch 'master' of https://github.com/LCTT/TranslateProject into new
This commit is contained in:
commit d19196e751
Kubernetes 网络运维
======

最近我一直在研究 Kubernetes 网络。我注意到一件事情就是,虽然关于如何设置 Kubernetes 网络的文章很多,也写得很不错,但是却没有看到关于如何去运维 Kubernetes 网络的文章,以及如何完全确保它不会给你造成生产事故。

在本文中,我将尽力让你相信三件事情(我觉得这些都很合理 :)):

* 避免生产系统网络中断非常重要
* 运维联网软件是很难的
* 有关你的网络基础设施的重要变化值得深思熟虑,以及这种变化对可靠性的影响。虽然非常“牛x”的谷歌人常说“这是我们在谷歌正在用的”(谷歌工程师在 Kubernetes 上正做着很重大的工作!),但是我认为重要的仍然是研究架构,并确保它对你的组织有意义。

我肯定不是 Kubernetes 网络方面的专家,但是我在配置 Kubernetes 网络时遇到了一些问题,并且比以前更加了解 Kubernetes 网络了。

### 运维联网软件是很难的

在这里,我并不讨论有关运维物理网络的话题(对于它我不懂),而是讨论关于如何让像 DNS 服务、负载均衡以及代理这样的软件正常工作方面的内容。

我在一个负责很多网络基础设施的团队工作过一年时间,并且因此学到了一些运维网络基础设施的知识!(显然我还有很多的知识需要继续学习)在我们开始之前有三个整体看法:

* 联网软件经常重度依赖 Linux 内核。因此除了正确配置软件之外,你还需要确保许多不同的系统控制(`sysctl`)配置正确,而一个错误配置的系统控制就很容易让你处于“一切都很好”和“到处都出问题”的差别中(这个列表之后给出一个检查示例)。
* 联网需求会随时间而发生变化(比如,你的 DNS 查询或许比上一年多了五倍!或者你的 DNS 服务器突然开始返回 TCP 协议的 DNS 响应而不是 UDP 的,它们是完全不同的内核负载!)。这意味着之前正常工作的软件突然开始出现问题。
* 修复一个生产网络的问题,你必须有足够的经验。(例如,看这篇 [由 Sophie Haskins 写的关于 kube-dns 问题调试的文章][1])我在网络调试方面比以前进步多了,但那也是我花费了大量时间研究 Linux 网络知识之后的事了。

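
作为一个最小示意(以下条目是常见的假设示例,并非原文作者给出的清单),可以这样检查与联网软件相关的几个内核参数:

```
# 查看当前值(nf_conntrack_max 需要已加载 conntrack 模块)
sysctl net.ipv4.ip_forward
sysctl net.netfilter.nf_conntrack_max

# 临时开启 IP 转发(容器网络通常需要它)
sudo sysctl -w net.ipv4.ip_forward=1
```
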
我距离成为一名网络运维专家还差得很远,但是我认为以下几点很重要:

1. 对生产网络的基础设施做重要的更改是很难的(因为它会产生巨大的混乱)
2. 当你对网络基础设施做重大更改时,真的应该仔细考虑如果新网络基础设施失败该如何处理
3. 是否有很多人都能理解你的网络配置

切换到 Kubernetes 显然是个非常大的更改!因此,我们来讨论一下可能会导致错误的地方!

在本文中我们将要讨论的 Kubernetes 网络组件有:

* <ruby>覆盖网络<rt>overlay network</rt></ruby>的后端(像 flannel/calico/weave 网络/romana)
* `kube-dns`
* `kube-proxy`
* 入站控制器 / 负载均衡器
* `kubelet`

如果你打算配置 HTTP 服务,或许这些你都会用到。这些组件中的大部分我都不会用到,但是我尽可能去理解它们,因此,本文将涉及它们有关的内容。

### 最简化的方式:为所有容器使用宿主机网络

让我们从你能做到的最简单的东西开始。这并不能让你在 Kubernetes 中运行 HTTP 服务。我认为它是非常安全的,因为在这里面可以让你动的东西很少。

如果你为所有容器使用宿主机网络,我认为需要你去做的全部事情仅有:

1. 配置 kubelet,以便于容器内部正确配置 DNS
2. 没了,就这些!

如果你为每个 pod 直接使用宿主机网络,那就不需要 kube-dns 或者 kube-proxy 了。你都不需要一个作为基础的覆盖网络。

这种配置方式中,你的 pod 们都可以连接到外部网络(同样的方式,你的宿主机上的任何进程都可以与外部网络对话),但外部网络不能连接到你的 pod 们。

这并不是最重要的(我认为大多数人想在 Kubernetes 中运行 HTTP 服务并与这些服务进行真实的通讯),但我认为有趣的是,从某种程度上来说,网络的复杂性并不是绝对需要的,并且有时候你不用这么复杂的网络就可以实现你的需要。如果可以的话,尽可能地避免让网络过于复杂。

### 运维一个覆盖网络

我们将要讨论的第一个网络组件是有关覆盖网络的。Kubernetes 假设每个 pod 都有一个 IP 地址,这样你就可以与那个 pod 中的服务进行通讯了。我在说到“覆盖网络”这个词时,指的就是这个意思(“让你通过它的 IP 地址指向到 pod”的系统)。

所有其它的 Kubernetes 网络的东西都依赖正确工作的覆盖网络。更多关于它的内容,你可以读 [这里的 kubernetes 网络模型][10]。

Kelsey Hightower 在 [kubernetes 艰难之路][11] 中描述的方式看起来似乎很好,但是,事实上它的作法在超过 50 个节点的 AWS 上是行不通的,因此,我不打算讨论它了。

有许多覆盖网络后端(calico、flannel、weaveworks、romana)并且规划非常混乱。就我的观点来看,我认为一个覆盖网络有 2 个职责:

1. 确保你的 pod 能够发送网络请求到集群外部
2. 维护一个稳定的节点到子网的映射,并且让集群中的每个节点都可以获得该映射的更新。当添加和删除节点时,能够做出正确的反应。

Okay! 因此!你的覆盖网络可能会出现的问题是什么呢?

* 覆盖网络负责设置 iptables 规则(最基本的是 `iptables -t nat -A POSTROUTING -s $SUBNET -j MASQUERADE`),以确保容器能够向 Kubernetes 之外发出网络请求。如果在这个规则上有错误,你的容器就不能连接到外部网络。这并不很难(它只是几条 iptables 规则而已),但是它非常重要。我发起了一个 [拉取请求][2],因为我想确保它有很好的弹性。(这个列表之后给出一个检查示例。)
* 添加或者删除节点时可能会有错误。我们使用 `flannel hostgw` 后端,我们开始使用它的时候,节点删除 [尚未开始工作][3]。
* 你的覆盖网络或许依赖一个分布式数据库(etcd)。如果那个数据库发生什么问题,这将导致覆盖网络发生问题。例如,[https://github.com/coreos/flannel/issues/610][4] 上说,如果在你的 `flannel etcd` 集群上丢失了数据,最后的结果将是在容器中网络连接会丢失。(现在这个问题已经被修复了)
* 你升级 Docker 以及其它东西导致的崩溃
* 还有更多的其它的可能性!

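
一个快速检查的草图(`$SUBNET` 是假设的 pod 子网,请按你的环境替换):

```
# 确认 POSTROUTING 链中存在针对 pod 子网的 MASQUERADE 规则
sudo iptables -t nat -L POSTROUTING -n -v | grep MASQUERADE
```
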
我在这里主要讨论的是过去发生在 Flannel 中的问题,但是我并不是要承诺不去使用 Flannel —— 事实上我很喜欢 Flannel,因为我觉得它很简单(比如,类似 [vxlan 在后端这一块的部分][12] 只有 500 行代码),对我来说,通过代码来找出问题的根源成为了可能。并且很显然,它在不断地改进。他们在审查拉取请求方面做的很好。

到目前为止,我运维覆盖网络的方法是:

* 学习它的工作原理的详细内容以及如何去调试它(比如,Flannel 的 hostgw 后端是通过创建路由来工作的,因此,你只需要使用 `sudo ip route list` 命令去查看这些路由是否正确即可;这个列表之后有一个示例)
* 如果需要的话,维护一个内部构建版本,这样打补丁比较容易
* 有问题时,向上游贡献补丁

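
示意如下(路由条目是假设的示例值,实际取决于你的子网划分):

```
$ sudo ip route list
# hostgw 模式下,期望看到类似这样的每节点路由:
# 10.244.1.0/24 via 172.16.0.11 dev eth0
# 10.244.2.0/24 via 172.16.0.12 dev eth0
```
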
我认为去遍历所有已合并的拉取请求以及过去已修复的 bug 清单真的是非常有帮助的 —— 这需要花费一些时间,但这是得到一个其它人遇到的各种问题的清单的好方法。

对其他人来说,他们的覆盖网络可能工作的很好,但是我并不能从中得到任何经验,并且我也曾听说过其他人报告类似的问题。如果你有一个类似配置的覆盖网络:a) 在 AWS 上并且 b) 在多于 50-100 节点上运行,我想知道你运维这样的一个网络有多大的把握。

### 运维 kube-proxy 和 kube-dns?

现在,我们已经有了一些关于运维覆盖网络的想法,接下来讨论一下 kube-proxy 和 kube-dns。

这个标题的最后面有一个问号,那是因为我并没有真的去运维过它们。在这里我的问题比答案还多。

这里说一下 Kubernetes 服务是如何工作的!一个服务是一群 pod,它们中的每个都有自己的 IP 地址(像 10.1.0.3、10.2.3.5、10.3.5.6 这样):

1. 每个 Kubernetes 服务有一个 IP 地址(像 10.23.1.2 这样)
2. `kube-dns` 将 Kubernetes 服务的 DNS 名字解析为 IP 地址(因此,my-svc.my-namespace.svc.cluster.local 可能映射到 10.23.1.2 上)
3. `kube-proxy` 配置 `iptables` 规则,以便在这些 pod 之间进行随机的负载均衡。kube-proxy 也有一个用户空间的轮询负载均衡器,但是在我的印象中,他们并不推荐使用它。

因此,当你发出一个请求到 `my-svc.my-namespace.svc.cluster.local` 时,它将解析为 10.23.1.2,然后,在你本地主机上的 `iptables` 规则(由 kube-proxy 生成)将随机重定向到 10.1.0.3 或者 10.2.3.5 或者 10.3.5.6 中的一个上。(下面给出一个验证示意。)

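
可以在集群内的某个 pod 中验证这个解析过程(草图,服务名是假设的示例,且 pod 中需要有 DNS 查询工具):

```
# 在 pod 内执行;期望返回该服务的集群 IP(例如 10.23.1.2)
nslookup my-svc.my-namespace.svc.cluster.local
```
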
在这个过程中我能想像出的可能出问题的地方:

* `kube-dns` 配置错误
* `kube-proxy` 挂了,以致于你的 `iptables` 规则没有得以更新
* 维护大量的 `iptables` 规则相关的一些问题

我们来讨论一下 `iptables` 规则,因为创建大量的 `iptables` 规则是我以前从没有听过的事情!

kube-proxy 像如下这样为每个目标主机创建一个 `iptables` 规则:

```
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y
```

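
在你自己的节点上,可以这样查看 kube-proxy 生成的这类规则(草图):

```
sudo iptables-save -t nat | grep KUBE-SVC | head
```
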
因此,kube-proxy 创建了许多 `iptables` 规则。它们都是什么意思?它对我的网络有什么样的影响?这里有一个来自华为的非常好的演讲,它叫做 [支持 50,000 个服务的可伸缩 Kubernetes][14],它说如果在你的 Kubernetes 集群中有 5,000 个服务,增加一条新规则将需要 **11 分钟**。如果这种事情发生在真实的集群中,我认为这将是一件非常糟糕的事情。

但是,我觉得使用 HAProxy 更舒服!它能够用于去替换 kube-proxy!我用谷歌搜索了一下,然后发现了 [kubernetes-sig-network 上的这个讨论][15],它说:

> kube-proxy 是很难用的,我们在生产系统中使用它近一年了,它在大部分的时间都表现的很好,但是,随着我们集群中的服务越来越多,我们发现它的排错和维护工作越来越难。在我们的团队中没有 iptables 方面的专家,我们只有 HAProxy & LVS 方面的专家,由于我们已经使用它们好几年了,因此我们决定使用一个中心化的 HAProxy 去替换分布式的代理。我觉得这可能会对在 Kubernetes 中使用 HAProxy 的其他人有用,因此,我们更新了这个项目,并将它开源:[https://github.com/AdoHe/kube2haproxy][5]。如果你发现它有用,你可以去看一看、试一试。

因此,那是一个有趣的选择!我在这里确实没有答案,但是,有一些想法:

* 负载均衡器是很复杂的
* DNS 也很复杂
* 如果你有运维某种类型的负载均衡器(比如 HAProxy)的经验,与其使用一个全新的负载均衡器(比如 kube-proxy),还不如做一些额外的工作去使用你熟悉的那个来替换,或许更有意义。
* 我一直在考虑,我们是否真的需要使用 kube-proxy 或者 kube-dns —— 我认为,最好是只在 Envoy 上投入,并且在负载均衡和服务发现上完全依赖 Envoy 来做。因此,你只需要将 Envoy 运维好就可以了。

正如你所看到的,我在关于如何运维 Kubernetes 中的内部代理方面的思路还是很混乱的,并且我也没有使用它们的太多经验。总体上来说,kube-proxy 和 kube-dns 还是很好的,也能够很好地工作,但是我仍然认为应该去考虑使用它们可能产生的一些问题(例如,“你不能有超过 5000 个 Kubernetes 服务”)。

### 入口

几个有用的链接,总结如下:

* [Kubernetes 网络模型][6]
* GKE 网络是如何工作的:[https://www.youtube.com/watch?v=y2bhV81MfKQ][7]
* 上述的有关 `kube-proxy` 上性能的讨论:[https://www.youtube.com/watch?v=4-pawkiazEg][8]

### 我认为网络运维很重要

我对 Kubernetes 的所有这些联网软件的感觉是,它们都仍然是非常新的,并且我并不能确定我们(作为一个社区)真的知道如何去把它们运维好。这让我作为一个操作者感到很焦虑,因为我真的想让我的网络运行的很好!:) 而且我觉得作为一个组织,运行你自己的 Kubernetes 集群需要相当大的投入,以确保你理解所有的代码片段,这样当它们出现问题时你可以去修复它们。这不是一件坏事,它只是一个事实而已。

我现在的计划是,继续不断地学习关于它们都是如何工作的,以尽可能多地减少对我动过的那些部分的担忧。

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/

作者:[Julia Evans][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

理解 ext4 等 Linux 文件系统
======

> 了解 ext4 的历史,包括其与 ext3 和之前的其它文件系统之间的区别。



目前的大部分 Linux 发行版都默认采用 ext4 文件系统,正如以前的 Linux 发行版默认使用 ext3、ext2 以及更久之前的 ext。

对于不熟悉 Linux 或文件系统的朋友而言,你可能不清楚 ext4 相对于上一版本 ext3 带来了什么变化。你可能还想知道在一连串关于替代的文件系统例如 Btrfs、XFS 和 ZFS 不断被发布的情况下,ext4 是否仍然能得到进一步的发展。

在一篇文章中,我们不可能讲述文件系统的所有方面,但我们尝试让你尽快了解 Linux 默认文件系统的发展历史,包括它的诞生以及未来发展。

我仔细研究了维基百科里的各种关于 ext 文件系统的文章、kernel.org 的 wiki 中关于 ext4 的条目,并结合自己的经验写下这篇文章。

### ext 简史

#### MINIX 文件系统

在有 ext 之前,使用的是 MINIX 文件系统。如果你不熟悉 Linux 历史,那么可以理解为 MINIX 是用于 IBM PC/AT 微型计算机的一个非常小的类 Unix 系统。Andrew Tannenbaum 为了教学的目的而开发了它,并于 1987 年发布了源代码(以印刷版的格式!)。



*IBM 1980 年代中期的 PC/AT,[MBlairMartin](https://commons.wikimedia.org/wiki/File:IBM_PC_AT.jpg),[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en)*

虽然你可以细读 MINIX 的源代码,但实际上它并不是自由开源软件(FOSS)。出版 Tannenbaum 著作的出版商要求你花 69 美元的许可费来运行 MINIX,而这笔费用包含在书籍的费用中。尽管如此,在那时来说这非常便宜,并且 MINIX 的使用得到迅速发展,很快超过了 Tannenbaum 当初使用它来教授操作系统编码的意图。在整个 20 世纪 90 年代,你可以发现 MINIX 的安装在世界各个大学里面非常流行。而此时,年轻的 Linus Torvalds 使用 MINIX 来开发原始 Linux 内核,并于 1991 年首次公布,而后在 1992 年 12 月在 GPL 开源协议下发布。

但是等等,这是一篇以 *文件系统* 为主题的文章不是吗?是的,MINIX 有自己的文件系统,早期的 Linux 版本依赖于它。跟 MINIX 一样,Linux 的文件系统也如同玩具那般小 —— MINIX 文件系统最多能处理 14 个字符的文件名,并且只能处理 64MB 的存储空间。到了 1991 年,一般的硬盘尺寸已经达到了 40-140 MB。很显然,Linux 需要一个更好的文件系统。

#### ext

当 Linus 开发出刚起步的 Linux 内核时,Rémy Card 从事第一代的 ext 文件系统的开发工作。ext 文件系统在 1992 年首次实现并发布 —— 仅在 Linux 首次发布后的一年!—— ext 解决了 MINIX 文件系统中最糟糕的问题。

1992 年的 ext 使用了 Linux 内核中新的虚拟文件系统(VFS)抽象层。与之前的 MINIX 文件系统不同的是,ext 可以处理高达 2 GB 的存储空间,并能处理 255 个字符的文件名。

但 ext 并没有长时间占统治地位,主要是由于它原始的时间戳(每个文件仅有一个时间戳,而不是今天我们所熟悉的 inode 时间戳、最近文件访问时间和最新文件修改时间这三个时间戳)。仅仅一年后,ext2 就替代了它。

#### ext2

Rémy 很快就意识到 ext 的局限性,所以一年后他设计出 ext2 替代它。当 ext 仍然根植于“玩具”操作系统时,ext2 从一开始就被设计为一个商业级文件系统,沿用 BSD 的 Berkeley 文件系统的设计原理。

ext2 提供了 GB 级别的最大文件大小和 TB 级别的文件系统大小,使其在 20 世纪 90 年代的地位牢牢巩固在文件系统大联盟中。很快它被广泛地使用,无论是在 Linux 内核中还是最终在 MINIX 中,且利用第三方模块可以使其应用于 MacOS 和 Windows。

但这里仍然有一些问题需要解决:ext2 文件系统与 20 世纪 90 年代的大多数文件系统一样,如果在将数据写入到磁盘的时候,系统发生崩溃或断电,则容易发生灾难性的数据损坏。随着时间的推移,由于碎片(单个文件存储在多个位置,物理上其分散在旋转的磁盘上),它们也遭受了严重的性能损失。

尽管存在这些问题,但今天 ext2 还是用在某些特殊的情况下 —— 最常见的是,作为便携式 USB 驱动器的文件系统格式。

#### ext3

1998 年,在 ext2 被采用后的 6 年后,Stephen Tweedie 宣布他正在致力于改进 ext2。这成了 ext3,并于 2001 年 11 月在 2.4.15 内核版本中被采用到 Linux 内核主线中。

![Packard Bell 计算机][2]

*20 世纪 90 年代中期的 Packard Bell 计算机,[Spacekid][3],[CC0][4]*

在大部分情况下,ext2 在 Linux 发行版中工作得很好,但像 FAT、FAT32、HFS 和当时的其它文件系统一样 —— 在断电时容易发生灾难性的破坏。如果在将数据写入文件系统时候发生断电,则可能会将其留在所谓 *不一致* 的状态 —— 事情只完成一半而另一半未完成。这可能导致大量文件丢失或损坏,这些文件与正在保存的文件无关,甚至可能导致整个文件系统无法卸载。

ext3 和 20 世纪 90 年代后期的其它文件系统,如微软的 NTFS,使用 *日志* 来解决这个问题。日志是磁盘上的一种特殊的分配区域,其写入被存储在事务中;如果该事务完成磁盘写入,则日志中的数据将提交给文件系统自身。如果系统在该操作提交前崩溃,则重新启动的系统识别其为未完成的事务而将其进行回滚,就像从未发生过一样。这意味着正在处理的文件可能依然会丢失,但文件系统 *本身* 保持一致,且其它所有数据都是安全的。

在使用 ext3 文件系统的 Linux 内核中实现了三个级别的日志记录方式:<ruby>日记<rt>journal</rt></ruby>、<ruby>顺序<rt>ordered</rt></ruby>和<ruby>回写<rt>writeback</rt></ruby>(这些模式通过挂载选项来选择,示例见列表之后):

* **日记** 是最低风险模式,在将数据和元数据提交给文件系统之前将其写入日志。这可以保证正在写入的文件与整个文件系统的一致性,但其显著降低了性能。
* **顺序** 是大多数 Linux 发行版默认模式;顺序模式将元数据写入日志而直接将数据提交到文件系统。顾名思义,这里的操作顺序是固定的:首先,元数据提交到日志;其次,数据写入文件系统,然后才将日志中关联的元数据更新到文件系统。这确保了在发生崩溃时,那些与未完整写入相关联的元数据仍在日志中,且文件系统可以在回滚日志时清理那些不完整的写入事务。在顺序模式下,系统崩溃可能导致崩溃期间被主动写入的文件出错,但文件系统它本身 —— 以及未被主动写入的文件 —— 确保是安全的。
* **回写** 是第三种模式 —— 也是最不安全的日志模式。在回写模式下,像顺序模式一样,元数据会被记录到日志,但数据不会。与顺序模式不同,元数据和数据都可以以任何有利于获得最佳性能的顺序写入。这可以显著提高性能,但安全性低很多。尽管回写模式仍然保证文件系统本身的安全性,但在崩溃或崩溃之前写入的文件很容易丢失或损坏。

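
作为示意(设备名与挂载点均为假设的示例),日志模式通过挂载选项来选择:

```
# data= 可取 journal / ordered / writeback 三者之一
sudo mount -o data=ordered /dev/sdb1 /mnt/test
```
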
跟之前的 ext2 类似,ext3 使用 32 位内部寻址。这意味着对于块大小为 4K 的 ext3,其可处理的最大文件大小为 2 TiB,最大文件系统大小为 16 TiB。

#### ext4

Theodore Ts'o(当时 ext3 的主要开发人员)在 2006 年发表了 ext4,它于两年后在 2.6.28 内核版本中被加入到了 Linux 主线。

Ts'o 将 ext4 描述为一个显著扩展 ext3 但仍然依赖于旧技术的临时技术。他预计 ext4 终将会被真正的下一代文件系统所取代。



*Dell Precision 380 工作站,[Lance Fisher](https://commons.wikimedia.org/wiki/File:Dell_Precision_380_Workstation.jpeg),[CC BY-SA 2.0](https://creativecommons.org/licenses/by-sa/2.0/deed.en)*

ext4 在功能上与 ext3 非常相似,但支持大文件系统,提高了对碎片的抵抗力,有更高的性能以及更好的时间戳。

### ext4 vs ext3

ext3 和 ext4 有一些非常明确的差别,在这里集中讨论下。

#### 向后兼容性

ext4 特地设计为尽可能地向后兼容 ext3。这不仅允许 ext3 文件系统原地升级到 ext4;也允许 ext4 驱动程序以 ext3 模式自动挂载 ext3 文件系统,因此使它无需单独维护两个代码库。

#### 大文件系统

ext3 文件系统使用 32 位寻址,这限制它仅支持 2 TiB 文件大小和 16 TiB 文件系统大小(这是假设在块大小为 4 KiB 的情况下,一些 ext3 文件系统使用更小的块大小,因此受到进一步的限制)。

ext4 使用 48 位的内部寻址,理论上可以在文件系统上分配高达 16 TiB 大小的文件,其中文件系统大小最高可达 1000000 TiB(1 EiB)。在早期 ext4 的实现中有些用户空间的程序仍然将其限制为最大大小为 16 TiB 的文件系统,但截至 2011 年,e2fsprogs 已经直接支持大于 16 TiB 大小的 ext4 文件系统。例如,红帽企业 Linux 在其合同上仅支持最高 50 TiB 的 ext4 文件系统,并建议 ext4 卷不超过 100 TiB。(关于这些上限,下面做一个快速核对。)

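
一个快速的算术核对(假设块大小为 4 KiB,即 2^12 字节):

```
ext3:2^32 个块 × 2^12 字节 = 2^44 字节 = 16 TiB(文件系统上限)
ext4:2^48 个块 × 2^12 字节 = 2^60 字节 = 1 EiB(理论上限)
```
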
#### 分配方式改进

ext4 在将存储块写入磁盘之前对存储块的分配方式进行了大量改进,这可以显著提高读写性能。

##### 区段

<ruby>区段<rt>extent</rt></ruby>是一系列连续的物理块(最多达 128 MiB,假设块大小为 4 KiB),可以一次性保留和寻址。使用区段可以减少给定文件所需的 inode 数量,并显著减少碎片并提高写入大文件时的性能。

##### 多块分配

ext3 为每一个新分配的块调用一次块分配器。当多个写入同时打开分配器时,很容易导致严重的碎片。然而,ext4 使用延迟分配,这允许它合并写入并更好地决定如何为尚未提交的写入分配块。

##### 持久的预分配

在为文件预分配磁盘空间时,大部分文件系统必须在创建时将零写入该文件的块中。ext4 允许替代使用 `fallocate()`,它保证了空间的可用性(并试图为它找到连续的空间),而不需要先写入它。这显著提高了流式和数据库应用程序写入数据以及将来读取这些数据的性能。(下面给出一个简单示意。)

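
一个最小的示意(使用 util-linux 提供的 `fallocate` 命令行工具;文件名为假设的示例):

```
# 预分配 1 GiB 空间而不写入零
fallocate -l 1G data.bin
ls -lsh data.bin   # 第一列显示实际占用的磁盘空间
```
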
##### 延迟分配

这是一个耐人寻味而有争议性的功能。延迟分配允许 ext4 等待分配将写入数据的实际块,直到它准备好将数据提交到磁盘。(相比之下,即使数据仍然在往写入缓存中写入,ext3 也会立即分配块。)

当缓存中的数据累积时,延迟分配块允许文件系统对如何分配块做出更好的选择,降低碎片(写入,以及稍后的读)并显著提升性能。然而不幸的是,它 *增加* 了还没有专门调用 `fsync()` 方法(当程序员想确保数据完全刷新到磁盘时)的程序的数据丢失的可能性。

假设一个程序完全重写了一个文件:

```
fd=open("file", O_TRUNC); write(fd, data); close(fd);
```

使用旧的文件系统,`close(fd);` 足以保证 `file` 中的内容刷新到磁盘。即使严格来说,写不是事务性的,但如果文件关闭后发生崩溃,则丢失数据的风险很小。

如果写入不成功(由于程序上的错误、磁盘上的错误、断电等),文件的原始版本和较新版本都可能丢失数据或损坏。如果其它进程在写入文件时访问文件,则会看到损坏的版本。如果其它进程打开文件并且不希望其内容发生更改 —— 例如,映射到多个正在运行的程序的共享库 —— 这些进程可能会崩溃。

为了避免这些问题,一些程序员完全避免使用 `O_TRUNC`。相反,他们可能会写入一个新文件,关闭它,然后将其重命名为旧文件名:

```
fd=open("newfile"); write(fd, data); close(fd); rename("newfile", "file");
```

在 *没有* 延迟分配的文件系统下,这足以避免上面列出的潜在的损坏和崩溃问题:因为 `rename()` 是原子操作,所以它不会被崩溃中断;并且运行的程序将继续引用旧的文件。现在 `file` 的未链接版本只要还有一个打开的文件句柄,就会继续存在。但是因为 ext4 的延迟分配会导致写入被延迟和重新排序,`rename("newfile", "file")` 可能在 `newfile` 的内容实际写入磁盘之前执行,这就再次出现了并行进程读到 `file` 坏版本的问题。

为了缓解这种情况,Linux 内核(自版本 2.6.30)尝试检测这些常见代码情况并强制立即分配。这会减少但不能防止数据丢失的可能性 —— 并且它对新文件没有任何帮助。如果你是一位开发人员,请注意:保证数据立即写入磁盘的唯一方法是正确调用 `fsync()`。(下面给出一个完整的示意。)

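
下面是一个小的草图(假设 POSIX 环境;文件名与内容均为假设的示例),把上文“写新文件、`fsync()`、再重命名”的模式补全为可编译的 C 程序:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *data = "new contents\n";

    /* 1. 写入临时的新文件 */
    int fd = open("newfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, data, strlen(data)) != (ssize_t)strlen(data)) {
        perror("write"); close(fd); return 1;
    }

    /* 2. fsync() 确保数据真正落盘,而不是停留在缓存中 */
    if (fsync(fd) != 0) { perror("fsync"); close(fd); return 1; }
    close(fd);

    /* 3. rename() 原子地用新文件替换旧文件 */
    if (rename("newfile", "file") != 0) { perror("rename"); return 1; }
    return 0;
}
```
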
#### 无限制的子目录

ext3 仅限于 32000 个子目录;ext4 允许无限数量的子目录。从 2.6.23 内核版本开始,ext4 使用 HTree 索引来减少大量子目录带来的性能损失。

#### 日志校验

ext3 没有对日志进行校验,这给处于内核直接控制之外的磁盘或自带缓存的控制器设备带来了问题。如果控制器或自带缓存的磁盘打乱了写入顺序,则可能会破坏 ext3 的日志事务顺序,从而可能破坏在崩溃期间(或之前一段时间)写入的文件。

理论上,这个问题可以使用写入<ruby>障碍<rt>barrier</rt></ruby>来解决 —— 在挂载文件系统时设置挂载选项 `barrier=1`,设备就会忠实地将 `fsync` 一直执行到底层硬件。但在实践中可以发现,存储设备和控制器经常不遵守写入障碍 —— 以此提高性能(以及与竞争对手相比的性能基准),但也带来了本应被防止的数据损坏的可能性。

对日志进行校验允许文件系统在崩溃后第一次挂载时意识到其某些条目是无效或无序的。因此,这避免了回滚部分的或无序的日志条目的错误,从而避免进一步损坏文件系统 —— 即使部分存储设备作假或不遵守写入障碍。

#### 快速文件系统检查

在 ext3 下,在 `fsck` 被调用时会检查整个文件系统 —— 包括已删除或空文件。相比之下,ext4 标记了 inode 表中未分配的块和扇区,从而允许 `fsck` 完全跳过它们。这大大减少了在大多数文件系统上运行 `fsck` 的时间,它实现于内核 2.6.24。

#### 改进的时间戳

ext3 提供粒度为一秒的时间戳。虽然足以满足大多数用途,但任务关键型应用程序经常需要更严格的时间控制。ext4 通过提供纳秒级的时间戳,使其可用于那些企业、科学以及任务关键型的应用程序。

ext3 文件系统也没有提供足够的位来存储 2038 年 1 月 18 日以后的日期。ext4 在这里增加了两个位,将 [Unix 纪元][5]扩展了 408 年。如果你在公元 2446 年读到这篇文章,你很有可能已经转移到一个更好的文件系统 —— 如果你还在测量自 1970 年 1 月 1 日 00:00(UTC)以来的时间,这会让我死后得以安眠。

#### 在线碎片整理

ext2 和 ext3 都不直接支持在线碎片整理 —— 即在挂载时对文件系统进行碎片整理。ext2 附带了一个实用程序 `e2defrag`,顾名思义 —— 它需要在文件系统未挂载时脱机运行。(显然,这对于根文件系统来说非常有问题。)在 ext3 中的情况甚至更糟糕 —— 虽然 ext3 比 ext2 更不容易受到严重碎片的影响,但对 ext3 文件系统运行 `e2defrag` 可能会导致灾难性损坏和数据丢失。

尽管 ext3 最初被认为“不受碎片影响”,但对同一文件采用大规模并行写入的过程(例如 BitTorrent)清楚地表明情况并非完全如此。一些用户空间的手段和解决方法,例如 [Shake][6],以这样或那样的方式解决了这个问题 —— 但它们比真正的、文件系统感知的、内核级的碎片整理过程更慢,并且在各方面都不太令人满意。

ext4 通过 `e4defrag` 解决了这个问题 —— 这是一个在线的、内核模式的、文件系统感知的、块和区段级别的碎片整理实用程序(用法示意见下)。

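
一个用法草图(假设系统安装了 e2fsprogs 附带的 `e4defrag`;路径为假设的示例):

```
# 只评估碎片程度而不做整理
sudo e4defrag -c /home
# 对目录(或单个文件、挂载点)做在线碎片整理
sudo e4defrag /home
```
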
### 正在进行的 ext4 开发

ext4,正如 Monty Python 中瘟疫感染者曾经说过的那样,“我还没死呢!”虽然它的[主要开发人员][7]认为它只是一个真正的[下一代文件系统][8]的权宜之计,但是在一段时间内,不会有任何可能的候选者(由于技术或许可问题)准备好部署为根文件系统。

在未来的 ext4 版本中仍然有一些关键功能要开发,包括元数据校验和、一流的配额支持和大分配块。

#### 元数据校验和

由于 ext4 具有冗余超级块,因此为文件系统校验其中的元数据提供了一种方法,可以自行确定主超级块是否已损坏并需要使用备用块。可以在没有校验和的情况下从损坏的超级块恢复 —— 但是用户首先需要意识到它已损坏,然后尝试使用备用超级块手动挂载文件系统。由于在某些情况下,使用损坏的主超级块以读写方式挂载文件系统可能会造成进一步的损坏,即使是经验丰富的用户也无法避免,这也不是一个完美的解决方案!

与 Btrfs 或 ZFS 等下一代文件系统提供的极其强大的每块校验和相比,ext4 的元数据校验和的功能非常弱。但它总比没有好。虽然校验 **所有的事情** 听起来很简单!—— 事实上,将校验和与文件系统连接到一起有一些重大的挑战;请参阅[设计文档][9]了解详细信息。

#### 一流的配额支持

等等,配额?!从 ext2 出现的那天开始我们就有了这些!是的,但它们一直都是事后添加的东西,而且它们一直很不完善。这里可能不值得详细介绍,但[设计文档][10]列出了配额将从用户空间移动到内核中的方式,并且能够更加正确和高效地执行。

#### 大分配块

随着时间的推移,那些讨厌的存储系统不断变得越来越大。由于一些固态硬盘已经使用 8K 硬件块大小,因此 ext4 当前 4K 块大小的限制越来越成为瓶颈。较大的存储块可以显著减少碎片并提高性能,代价是增加“松弛”空间(当你只需要块的一部分来存储文件或文件的最后一块时留下的空间)。

你可以在[设计文档][11]中查看详细说明。

### ext4 的实际限制

ext4 是一个健壮、稳定的文件系统。如今大多数人都应该在用它作为根文件系统,但它无法处理所有需求。让我们简单地谈谈你不应该期待的一些事情 —— 现在或可能在未来:

虽然 ext4 可以处理高达 1 EiB 大小(相当于 1,000,000 TiB)的数据,但你 *真的* 不应该尝试这样做。除了能够记住更多块的地址之外,还存在规模上的问题。并且现在 ext4 不会处理(并且可能永远不会)超过 50-100 TiB 的数据。

ext4 也不足以保证数据的完整性。尽管日志记录在 ext3 时代是一个重大进步,但它并未涵盖数据损坏的许多常见原因。如果数据已经在磁盘上被[破坏][12] —— 由于故障硬件、宇宙射线的影响(是的,真的),或者只是数据随时间衰减 —— ext4 无法检测或修复这种损坏。

基于上面两点,ext4 只是一个纯 *文件系统*,而不是存储卷管理器。这意味着,即使你有多个带有奇偶校验或冗余的磁盘,理论上可以从中恢复损坏的数据,但 ext4 无从得知这一点,也无法利用它们使你受益。虽然理论上可以在不同的层中分离文件系统和存储卷管理系统而不丢失自动损坏检测和修复功能,但这不是当前存储系统的设计方式,并且它将给新设计带来重大挑战。

### 备用文件系统

在我们开始之前,提醒一句:要非常小心那些没有作为发行版主线内核的一部分而被内置和直接支持的备用文件系统!

即使一个文件系统是 *安全的*,如果在内核升级期间出现问题,把它用作根文件系统也是非常可怕的。如果你不是非常熟悉通过替代介质引导、在 chroot 中操作内核模块、grub 配置和 DKMS……那么不要在一个重要的系统上拿根文件系统去冒险。

可能有充分的理由使用你的发行版不直接支持的文件系统 —— 但如果你这样做,我强烈建议你在系统启动并可用后再安装它。(例如,你可能有一个 ext4 根文件系统,但是将大部分数据存储在 ZFS 或 Btrfs 池中。)

#### XFS

XFS 在 Linux 中拥有一个非 ext 文件系统所能拥有的最主流的地位。它是一个 64 位的日志文件系统,自 2001 年以来内置于 Linux 内核中,为大型文件系统和高度并发性(即大量进程同时写入文件系统)提供了高性能。

从 RHEL 7 开始,XFS 成为 Red Hat Enterprise Linux 的默认文件系统。对于家庭或小型企业用户来说,它仍然有一些缺点 —— 最值得注意的是,重新调整现有 XFS 文件系统的大小是一件非常痛苦的事情,以至于创建另一个文件系统并复制数据往往更有意义。

虽然 XFS 是稳定的且是高性能的,但它和 ext4 之间没有足够具体的最终用途差异,因此除非它解决了 ext4 存在的某个特定问题(例如大于 50 TiB 容量的文件系统),否则并不值得推荐在它不是默认选择(如 RHEL7)的地方使用它。

XFS 在任何方面都不是 ZFS、Btrfs 甚至 WAFL(一个专有的 SAN 文件系统)那样的“下一代”文件系统。就像 ext4 一样,它应该被视为通向更好的东西的路上的权宜之计。

#### ZFS

ZFS 由 Sun Microsystems 开发,以 zettabyte 命名 —— 相当于 1 万亿 GB —— 因为它理论上能够寻址这么大的存储系统。

作为真正的下一代文件系统,ZFS 提供卷管理(能够在单个文件系统中处理多个单独的存储设备)、块级加密校验和(允许以极高的准确率检测数据损坏)、[自动损坏修复][12](其中冗余或奇偶校验存储可用)、[快速异步增量复制][13]、内联压缩等,[以及更多][14]。

从 Linux 用户的角度来看,ZFS 的最大问题是许可证问题。ZFS 许可证是 CDDL 许可证,这是一种与 GPL 冲突的半宽松许可证。关于在 Linux 内核中使用 ZFS 的意义存在很多争议,其争议范围从“它是 GPL 违规”到“它是 CDDL 违规”到“它完全没问题,它还没有在法庭上进行过测试”。最值得注意的是,自 2016 年以来 Canonical 已将 ZFS 代码内联在其默认内核中,而且目前尚无法律挑战。

此时,即使我是一个非常狂热的 ZFS 用户,我也不建议将 ZFS 作为 Linux 的根文件系统。如果你想在 Linux 上利用 ZFS 的优势,用 ext4 设置一个小的根文件系统,然后将 ZFS 用在你剩余的存储上,把数据、应用程序以及你喜欢的东西放在它上面 —— 但把根分区保留在 ext4 上,直到你的发行版明确支持 ZFS 根目录。

#### Btrfs

Btrfs 是 B-Tree Filesystem 的简称,通常发音为 “butter” —— 由 Chris Mason 于 2007 年在 Oracle 任职期间发布。Btrfs 旨在实现跟 ZFS 大部分相同的目标,提供多种设备管理、每块校验、异步复制、内联压缩等,[还有更多][8]。

截至 2018 年,Btrfs 相当稳定,可用作标准的单磁盘文件系统,但可能还不应该把它当作卷管理器来依赖。与许多常见用例中的 ext4、XFS 或 ZFS 相比,它存在严重的性能问题,而它的下一代功能 —— 复制、多磁盘拓扑和快照管理 —— 的问题可能非常多,其结果从灾难性的性能降低到实际数据的丢失都有可能。

Btrfs 的维护状态是有争议的;SUSE Enterprise Linux 在 2015 年采用它作为默认文件系统,而 Red Hat 于 2017 年宣布它从 RHEL 7.4 开始不再支持 Btrfs。可能值得注意的是,SUSE 的产品支持将 Btrfs 部署为单磁盘文件系统,而不是像 ZFS 那样的多磁盘卷管理器;甚至 Synology 在它的存储设备上使用 Btrfs,但也是将它分层在传统 Linux 内核 RAID(mdraid)之上来管理磁盘。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/4/ext4-filesystem

作者:[Jim Salter][a]
译者:[HardworkFish](https://github.com/HardworkFish)
校对:[wxy](https://github.com/wxy), [pityonline](https://github.com/pityonline)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-salter
[1]: https://opensource.com/file/391546
[2]: https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/packard_bell_pc.jpg?itok=VI8dzcwp (Packard Bell computer)
[3]: https://commons.wikimedia.org/wiki/File:Old_packard_bell_pc.jpg
[4]: https://creativecommons.org/publicdomain/zero/1.0/deed.en
[5]: https://en.wikipedia.org/wiki/Unix_time
[6]: https://vleu.net/shake/
[7]: http://www.linux-mag.com/id/7272/
[8]: https://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/
[9]: https://ext4.wiki.kernel.org/index.php/Ext4_Metadata_Checksums
[10]: https://ext4.wiki.kernel.org/index.php/Design_For_1st_Class_Quota_in_Ext4
[11]: https://ext4.wiki.kernel.org/index.php/Design_for_Large_Allocation_Blocks
[12]: https://en.wikipedia.org/wiki/Data_degradation#Visual_example_of_data_degradation
[13]: https://arstechnica.com/information-technology/2015/12/rsync-net-zfs-replication-to-the-cloud-is-finally-here-and-its-fast/
[14]: https://arstechnica.com/information-technology/2014/02/ars-walkthrough-using-the-zfs-next-gen-filesystem-on-linux/

published/20180822 What is a Makefile and how does it work.md

Makefile 及其工作原理
======

> 用这个方便的工具来更有效地运行和编译你的程序。



当你需要在一些源文件改变后运行或更新一个任务时,通常会用到 `make` 工具。`make` 工具需要读取一个 `Makefile`(或 `makefile`)文件,在该文件中定义了一系列需要执行的任务。你可以使用 `make` 来将源代码编译为可执行程序。大部分开源项目会使用 `make` 来实现最终的二进制文件的编译,然后使用 `make install` 命令来执行安装。

本文将通过一些基础和进阶的示例来展示 `make` 和 `Makefile` 的使用方法。在开始前,请确保你的系统中安装了 `make`。

### 基础示例

依然从打印 “Hello World” 开始。首先创建一个名字为 `myproject` 的目录,目录下新建 `Makefile` 文件,文件内容为:

```
say_hello:
        echo "Hello World"
```

在 `myproject` 目录下执行 `make`,会有如下输出:

```
$ make
echo "Hello World"
Hello World
```

在上面的例子中,“say_hello” 类似于其他编程语言中的函数名。这被称之为<ruby>目标<rt>target</rt></ruby>。在该目标之后的是预置条件或依赖。为了简单起见,我们在这个示例中没有定义预置条件。`echo "Hello World"` 命令被称为<ruby>步骤<rt>recipe</rt></ruby>。这些步骤基于预置条件来实现目标。目标、预置条件和步骤共同构成一个规则。

总结一下,一个典型的规则的语法为:

```
目标: 预置条件
<TAB> 步骤
```

作为示例,目标可以是一个基于预置条件(源代码)的二进制文件。另一方面,预置条件也可以是依赖其他预置条件的目标。

```
final_target: sub_target final_target.c
        Recipe_to_create_final_target

sub_target: sub_target.c
        Recipe_to_create_sub_target
```

目标并不要求是一个文件,也可以只是步骤的名字,就如我们的例子中一样。我们称之为“伪目标”。

再回到上面的示例中,当 `make` 被执行时,整条指令 `echo "Hello World"` 都被显示出来,之后才是真正的执行结果。如果不希望指令本身被打印出来,需要在 `echo` 前添加 `@`。

```
say_hello:
        @echo "Hello World"
```

重新运行 `make`,将会只有如下输出:

```
$ make
Hello World
```

接下来在 `Makefile` 中添加如下伪目标:`generate` 和 `clean`:

```
say_hello:
        @echo "Hello World"

generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt

clean:
        @echo "Cleaning up..."
        rm *.txt
```

随后当我们运行 `make` 时,只有 `say_hello` 这个目标被执行。这是因为 `Makefile` 中的第一个目标为默认目标。通常情况下会调用默认目标,这就是为什么你会在大多数项目中看到 `all` 作为第一个目标而出现。`all` 负责调用其它的目标。我们可以通过 `.DEFAULT_GOAL` 这个特殊的伪目标来覆盖掉默认的行为。

在 `Makefile` 文件开头增加 `.DEFAULT_GOAL`:

```
.DEFAULT_GOAL := generate
```

`make` 会将 `generate` 作为默认目标:

```
$ make
Creating empty text files...
touch file-{1..10}.txt
```

顾名思义,`.DEFAULT_GOAL` 伪目标仅能定义一个目标。这就是为什么很多 `Makefile` 会包括 `all` 这个目标,这样可以调用多个目标。

下面删除掉 `.DEFAULT_GOAL`,增加 `all` 目标:

```
all: say_hello generate

say_hello:
        @echo "Hello World"

generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt

clean:
        @echo "Cleaning up..."
        rm *.txt
```

运行之前,我们再增加一些特殊的伪目标。`.PHONY` 用来定义这些不是文件的目标。`make` 会默认调用这些伪目标下的步骤,而不去检查文件名是否存在或最后修改日期。完整的 `Makefile` 如下:

```
.PHONY: all say_hello generate clean

all: say_hello generate

say_hello:
        @echo "Hello World"

generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt

clean:
        @echo "Cleaning up..."
        rm *.txt
```

`make` 命令会调用 `say_hello` 和 `generate`:

```
$ make
Hello World
Creating empty text files...
touch file-{1..10}.txt
```

`clean` 不应该被放入 `all` 中,或者被放入第一个目标中。`clean` 应当在需要清理时手动调用,调用方法为 `make clean`。

```
$ make clean
Cleaning up...
rm *.txt
```

现在你应该已经对 `Makefile` 有了基础的了解,接下来我们看一些进阶的示例。

### 进阶示例

#### 变量

在之前的示例中,大部分目标和预置条件是已经固定了的,但在实际项目中,它们通常用变量和模式来代替。

定义变量最简单的方式是使用 `=` 操作符。例如,将命令 `gcc` 赋值给变量 `CC`:

```
CC = gcc
```

这被称为递归扩展变量,用于如下所示的规则中:

```
hello: hello.c
        ${CC} hello.c -o hello
```

你可能已经想到了,这些步骤将会在传递给终端时展开为:

```
gcc hello.c -o hello
```

`${CC}` 和 `$(CC)` 都能对 `gcc` 进行引用。但如果一个变量尝试将它本身赋值给自己,将会造成死循环。让我们验证一下:

```
CC = gcc
CC = ${CC}

all:
        @echo ${CC}
```

此时运行 `make` 会导致:

```
$ make
Makefile:8: *** Recursive variable 'CC' references itself (eventually). Stop.
```

为了避免这种情况发生,可以使用 `:=` 操作符(这被称为简单扩展变量)。以下代码不会造成上述问题:

```
CC := gcc
CC := ${CC}

all:
        @echo ${CC}
```

#### 模式和函数

下面的 `Makefile` 使用了变量、模式和函数来实现所有 C 代码的编译。我们来逐行分析下:

```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects

.PHONY = all clean

CC = gcc                        # compiler to use

LINKERFLAG = -lm

SRCS := $(wildcard *.c)
BINS := $(SRCS:%.c=%)

all: ${BINS}

%: %.o
        @echo "Checking.."
        ${CC} ${LINKERFLAG} $< -o $@

%.o: %.c
        @echo "Creating object.."
        ${CC} -c $<

clean:
        @echo "Cleaning up..."
        rm -rvf *.o ${BINS}
```

* 以 `#` 开头的行是注释。
* `.PHONY = all clean` 行定义了 `all` 和 `clean` 两个伪目标。
* 变量 `LINKERFLAG` 定义了在步骤中 `gcc` 命令需要用到的参数。
* `SRCS := $(wildcard *.c)`:`$(wildcard pattern)` 是与文件名相关的一个函数。在本示例中,所有 “.c” 后缀的文件会被存入 `SRCS` 变量。
* `BINS := $(SRCS:%.c=%)`:这被称为替代引用。本例中,如果 `SRCS` 的值为 `'foo.c bar.c'`,则 `BINS` 的值为 `'foo bar'`。
* `all: ${BINS}` 行:伪目标 `all` 调用 `${BINS}` 变量中的所有值作为子目标。
* 规则:

```
%: %.o
        @echo "Checking.."
        ${CC} ${LINKERFLAG} $< -o $@
```

下面通过一个示例来理解这条规则。假定 `foo` 是变量 `${BINS}` 中的一个值。`%` 会匹配到 `foo`(`%` 匹配任意一个目标)。下面是规则展开后的内容:

```
foo: foo.o
        @echo "Checking.."
        gcc -lm foo.o -o foo
```

如上所示,`%` 被 `foo` 替换掉了。`$<` 被 `foo.o` 替换掉。`$<` 用于匹配预置条件,`$@` 匹配目标。对 `${BINS}` 中的每个值,这条规则都会被调用一遍。

* 规则:

```
%.o: %.c
        @echo "Creating object.."
        ${CC} -c $<
```

之前规则中的每个预置条件在这条规则中都会被作为一个目标。下面是展开后的内容:

```
foo.o: foo.c
        @echo "Creating object.."
        gcc -c foo.c
```

* 最后,在 `clean` 目标中,所有的二进制文件和编译文件将被删除。

下面是重写后的 `Makefile`,该文件应该被放置在一个有 `foo.c` 文件的目录下:

```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects

.PHONY = all clean

CC = gcc                        # compiler to use

LINKERFLAG = -lm

SRCS := foo.c
BINS := foo

all: foo

foo: foo.o
        @echo "Checking.."
        gcc -lm foo.o -o foo

foo.o: foo.c
        @echo "Creating object.."
        gcc -c foo.c

clean:
        @echo "Cleaning up..."
        rm -rvf foo.o foo
```

关于 `Makefile` 的更多信息,[GNU Make 手册][1]提供了更完整的说明和实例。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/what-how-makefile

作者:[Sachin Patil][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Zafiry](https://github.com/zafiry)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/psachin
[1]:https://www.gnu.org/software/make/manual/make.pdf

如何在 Ubuntu 18.04 上更新固件
======

通常,Ubuntu 和其他 Linux 中的默认软件中心会处理系统固件的更新。但是如果你遇到了错误,你可以使用 `fwupd` 命令行工具更新系统的固件。

我使用 [Dell XPS 13 Ubuntu 版本][1]作为我的主要操作系统。我全新[安装了 Ubuntu 18.04][2],我对硬件兼容性感到满意。蓝牙、外置 USB 耳机和扬声器、多显示器,一切都开箱即用。

错误消息是:

> Unable to update “Thunderbolt NVM for Xps Notebook 9360”: could not detect device after update: timed out while waiting for device

在这篇文章中,我将向你展示如何在 [Ubuntu][6] 中更新系统固件。

![How to update firmware in Ubuntu][7]

你应该知道,GNOME Software(即 Ubuntu 18.04 中的软件中心)也能够更新固件。但是在它由于某种原因失败的情况下,你可以使用命令行工具 `fwupd`。

[fwupd][8] 是一个开源守护进程,可以处理基于 Linux 的系统中的固件升级。它由 GNOME 开发人员 [Richard Hughes][9] 创建。戴尔的开发人员也为这一开源工具的开发做出了贡献。

基本上,它使用 LVFS —— <ruby>Linux 供应商固件服务<rt>Linux Vendor Firmware Service</rt></ruby>。硬件供应商将可再发行固件上传到 LVFS 站点,并且多亏 `fwupd`,你可以从操作系统内部升级这些固件。`fwupd` 得到了 Ubuntu 和 Fedora 等主要 Linux 发行版的支持。

首先打开终端并更新系统:

```
sudo apt update && sudo apt upgrade -y
```

之后,你可以逐个使用以下命令来启动守护程序、刷新可用固件更新列表并安装固件更新。

```
sudo service fwupd start
```

守护进程运行后,检查是否有可用的固件更新。

```
sudo fwupdmgr refresh
```

输出应如下所示:

```
Fetching metadata https://cdn.fwupd.org/downloads/firmware.xml.gz
Downloading… [****************************]
Fetching signature https://cdn.fwupd.org/downloads/firmware.xml.gz.asc
```

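
顺便一提(这一步并非原文流程的必要部分,仅作补充示意),`fwupdmgr` 还可以列出它识别到的设备:

```
sudo fwupdmgr get-devices
```
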
在此之后,运行固件更新:

```
sudo fwupdmgr update
```

固件更新的输出可能与此类似:

```
No upgrades for XPS 13 9360 TPM 2.0, current is 1.3.1.0: 1.3.1.0=same
No upgrades for XPS 13 9360 System Firmware, current is 0.2.8.1: 0.2.8.1=same, 0.2.7.1=older, 0.2.6.2=older, 0.2.5.1=older, 0.2.4.2=older, 0.2.3.1=older, 0.2.2.1=older, 0.2.1.0=older, 0.1.3.7=older, 0.1.3.5=older, 0.1.3.2=older, 0.1.2.3=older
Downloading 21.00 for XPS13 9360 Thunderbolt Controller…
Updating 21.00 on XPS13 9360 Thunderbolt Controller…
Decompressing… [***********]
Authenticating… [***********]
Restarting device… [***********]
```

--------------------------------------------------------------------------------

via: https://itsfoss.com/update-firmware-ubuntu/

作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

sources/talk/20180906 DevOps- The consequences of blame.md

DevOps: The consequences of blame
======



Merriam-Webster defines "blame" as both a verb and a noun. As a verb, it means "to find fault with or to hold responsible." As a noun, it means "an expression of disapproval or responsibility for something believed to deserve censure."

Either way, blame isn't a pleasant thing. It can create feelings of fear and shame, foster power imbalances, and cause us to devalue others.

Just think of what it felt like the last time you were yelled at or accused of something. Conversely, consider the opposite of blame: praise, flattery, and approval. How does it feel to be complimented or commended for a job well done?

You may be wondering what all this talk about blame has to do with DevOps. Read on:

### DevOps and blame

The three pillars of DevOps are flow, feedback, and continuous improvement. How can an organization or a team improve if its members are focused on finding someone to blame? For a DevOps culture to succeed, blame must be eliminated.

For example, suppose your product has a bug or experiences an outage. If your organization's leaders react to this by looking for someone to blame, there's little chance for feedback on how to improve. Look at how blame is flowing in your organization and work to remove it. Strive for blameless post-mortems and move away from _root-cause analysis_, which tends to focus on assigning blame. In today's complex business infrastructure, many factors can contribute to bugs and other problems. Successful DevOps teams practice post-incident reviews to examine the bigger picture when things go wrong.

### Consequences of blame

DevOps is about creating a culture of collaboration and community. This is not possible in a culture of blame. Because blame does not correct behavior, there is no continuous learning. What _is_ learned is how to avoid blame—so instead of solving problems, team members focus on how they can avoid being blamed for them.

What about accountability? Avoiding blame does not mean avoiding accountability or consequences. Here are some tips to create an environment in which people are held accountable without blame:

* When mistakes are made, focus on what steps you can take to avoid making the same mistake in the future. What did you learn, and how can you apply that knowledge to improving things?
* When something goes wrong, people feel stress. Work toward eliminating or reducing that stress. Avoid yelling and putting additional pressure on people.
* Accept that mistakes will happen. Nobody—and nothing—is perfect.
* When corrective actions are necessary, provide them privately, not publicly.

As a child, I loved reading the [Family Circus][1] comic strip, especially the ones featuring "Not Me." Not Me frequently appeared with "Ida Know" and "Nobody" when Mom and Dad asked an accusatory question. Why did the kids in Family Circus blame Not Me? Look no further than the parents' angry, frustrated expressions. Like the kids in the comic strip, we quickly learn to assign blame or look for faults in others because blaming ourselves is too painful.

In his book, [_Thinking, Fast and Slow_][2], author Daniel Kahneman points out that most of us spend as little time as possible thinking—after all, thinking is hard. To make things easier, we learn from previous experiences, which in turn creates biases. If blame is part of that equation, it will be included in our bias: _"The last time a question was asked in a meeting and I took responsibility, I was chewed out in front of all my co-workers. I won't do that again."_

When something goes wrong, we want answers and accountability. Uncertainty is scary and leads to stress; we prefer predictable scenarios. This drives us to look for root causes, which often leads to blame.

But what if, instead of assigning blame, we turned the situation into something constructive and helpful—an opportunity for learning? It isn't always easy, but working to eliminate blame will build a stronger DevOps team and a happier, more productive company.

Next time you find yourself starting to look for someone to blame, think of this poem by Rupi Kaur:

_"It takes grace_

_To remain kind_

_In cruel situations"_

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/consequences-blame-your-devops-team

作者:[Dawn Parzych][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/dawnparzych
[1]: http://familycircus.com/comics/september-1-2012/
[2]: https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555

What do open source and cooking have in common?
======



What's a fun way to promote the principles of free software without actually coding? Here's an idea: open source cooking. For the past eight years, this is what we've been doing in Munich.

The idea of _open source cooking_ grew out of our regular open source meetups because we realized that cooking and free software have a lot in common.

### Cooking together

The [Munich Open Source Meetings][1] is a series of recurring Friday night events that was born in [Café Netzwerk][2] in July 2009. The meetings help provide a way for open source project members and enthusiasts to get to know each other. Our motto is: "Every fourth Friday for free software." In addition to adding some weekend workshops, we soon introduced other side events, including white sausage breakfast, sauna, and cooking.

The first official _Open Source Cooking_ meetup was admittedly rather chaotic, but we've improved our routine over the past eight years and 15 events, and we've mastered the art of cooking delicious food for 25-30 people.

Looking back at all those evenings, similarities between cooking together and working together in open source communities have become more clear.

### FLOSS principles at play

Here are a few ways cooking together is like working together on open source projects:

* We enjoy collaborating and working toward a result we share.
* We've become a community.
* As we share a common interest and enthusiasm, we learn more about ourselves, each other, and what we're working on together.
* Mistakes happen. We learn from them and share our knowledge to our mutual benefit, so hopefully we avoid repeating the same mistakes.
* Everyone contributes what they're best at, as everyone has something they're better at than someone else.
* We motivate others to contribute and join us.
* Coordination is key, but a bit chaotic.
* Everyone benefits from the results!

### Smells like open source

Like any successful open source-related meetup, open source cooking requires some coordination and structure. Ahead of the event, we run a _call for recipes_ in which all participants can vote. Rather than throwing a pizza into a microwave, we want to create something delicious and tasty, and so far we've had Japanese, Mexican, Hungarian, and Indian food, just to name a few.

Like in real life, cooking together requires having respect and mutual understanding for each other, so we always try to have dishes for vegans, vegetarians, and people with allergies and food preferences. A little beta test at home can be helpful (and fun!) when preparing for the big release.

Scalability matters, and shopping for our "build requirements" at the grocery store easily can eat up three hours. We use a spreadsheet (LibreOffice Calc, naturally) for calculating ingredient requirements and costs.

For every dinner course we have a "package maintainer" working with volunteers to make the menu in time and to find unconventional solutions to problems that arise.

Not everyone is a cook by profession, but with a little bit of help and a good distribution of tasks and responsibilities, it's rather easy to parallelize things — at some point, 18 kg of tomatoes and 100 eggs really don't worry you anymore, believe me! The only real scalability limit is the stove with its four hotplates, so maybe it's time to invest in an infrastructure budget.

Time-based releasing, on the other hand, isn't working as reliably as it should, as we usually serve the main dish at a rather "flexible" time between 21:30 and 01:30, but that's not a release blocker, either.

And, as with many open source projects, cooking documentation has room for improvement. Cleanup tasks such as washing the dishes surely can be optimized further, too.

### Future flavor releases

Some of our future ideas include:

* cooking in a foreign country,
* finally buying and cooking that large 700 € pumpkin, and
* finding a grocery store that donates a percentage of our purchases to a good cause.

The last item is also an important aspect of the free software movement: Always remember there are people who are not living on the sunny side, who do not have the same access to resources, and who are otherwise struggling. How can the open nature of what we're doing help them?

With all that in mind, I am looking forward to the next Open Source Cooking meetup. If reading about them makes you hungry and you'd like to run your own event, we'd love to see you adapt our idea or even fork it. And we'd love to have you join us in a meetup, and perhaps even do some mentoring and QA.

Article originally appeared on [blog.effenberger.org][3]. Reprinted with permission.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/open-source-cooking

作者:[Florian Effenberger][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/floeff
[1]: https://www.opensourcetreffen.de/
[2]: http://www.cafe-netzwerk.de/
[3]: https://blog.effenberger.org/2018/05/28/what-do-open-source-and-cooking-have-in-common/

Translating by DavidChen

How do groups work on Linux?
============================================================

imquanquan Translating

Trying Other Go Versions
============================================================

via: https://pocketgophers.com/trying-other-versions/

[8]:https://pocketgophers.com/trying-other-versions/#trying-a-specific-release
[9]:https://pocketgophers.com/guide-to-json/
[10]:https://pocketgophers.com/trying-other-versions/#trying-any-release
[11]:https://pocketgophers.com/trying-other-versions/#trying-a-source-build-e-g-tip

@ -1,3 +1,4 @@
|
|||||||
|
Zafiry translating...
|
||||||
Writing eBPF tracing tools in Rust
|
Writing eBPF tracing tools in Rust
|
||||||
============================================================
|
============================================================
|
||||||
|
|
||||||
|
@ -1,3 +1,5 @@
|
|||||||
|
translating---geekpi
|
||||||
|
|
||||||
Distrochooser Helps Linux Beginners To Choose A Suitable Linux Distribution
|
Distrochooser Helps Linux Beginners To Choose A Suitable Linux Distribution
|
||||||
======
|
======
|
||||||

|

|
||||||
|
@ -0,0 +1,168 @@
|
|||||||
|
How To Quickly Serve Files And Folders Over HTTP In Linux
|
||||||
|
======
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Today, I came across a whole bunch of methods to serve a single file or an entire directory to other systems in your local area network via a web browser. I tested all of them on my Ubuntu test machine, and everything worked just fine as described below. If you have ever wondered how to easily and quickly serve files and folders over HTTP in Unix-like operating systems, one of the following methods will definitely help.
|
||||||
|
|
||||||
|
### Serve Files And Folders Over HTTP In Linux
|
||||||
|
|
||||||
|
**Disclaimer:** All the methods given here are meant to be used within a secure local area network. Since these methods don’t have any security mechanism, it is **not recommended to use them in production**. You have been warned!
|
||||||
|
|
||||||
|
#### Method 1 – Using simpleHTTPserver (Python)
|
||||||
|
|
||||||
|
We have already written a brief guide to set up a simple HTTP server to share files and directories instantly in the following link. If you have a system with Python installed, this method is quite handy.
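In short, if you have Python 3, the built-in module can be started straight from the directory you want to share (the port is just an example; on Python 2 the module is called SimpleHTTPServer instead):

```
$ cd ostechnix
$ python3 -m http.server 8000
```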
|
||||||
|
|
||||||
|
#### Method 2 – Using Quickserve (Python)
|
||||||
|
|
||||||
|
This method is specifically for Arch Linux and its variants. Check the following link for more details.
|
||||||
|
|
||||||
|
#### Method 3 – Using Ruby
|
||||||
|
|
||||||
|
In this method, we use Ruby to serve files and folders over HTTP in Unix-like systems. Install Ruby and Rails as described in the following link.
|
||||||
|
|
||||||
|
Once Ruby is installed, go to the directory that you want to share over the network, for example ostechnix:
|
||||||
|
```
|
||||||
|
$ cd ostechnix
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
And, run the following command:
|
||||||
|
```
|
||||||
|
$ ruby -run -ehttpd . -p8000
|
||||||
|
[2018-08-10 16:02:55] INFO WEBrick 1.4.2
|
||||||
|
[2018-08-10 16:02:55] INFO ruby 2.5.1 (2018-03-29) [x86_64-linux]
|
||||||
|
[2018-08-10 16:02:55] INFO WEBrick::HTTPServer#start: pid=5859 port=8000
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Make sure port 8000 is open in your router or firewall. If the port is already being used by some other service, use a different port.
|
||||||
|
|
||||||
|
You can now access the contents of this folder from any remote system using the URL **http://<IP-address>:8000/**.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
To stop sharing press **CTRL+C**.
|
||||||
|
|
||||||
|
#### Method 4 – Using Http-server (NodeJS)
|
||||||
|
|
||||||
|
[**Http-server**][1] is a simple, production-ready command line HTTP server written in NodeJS. It requires zero configuration and can be used to instantly share files and directories via a web browser.
|
||||||
|
|
||||||
|
Install NodeJS as described below.
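For example, on Debian or Ubuntu-based systems this typically amounts to installing the distribution packages (package names can vary between releases, so treat this as a sketch):

```
$ sudo apt-get install nodejs npm
```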
|
||||||
|
|
||||||
|
Once NodeJS is installed, run the following command to install http-server.
|
||||||
|
```
|
||||||
|
$ npm install -g http-server
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Now, go to any directory and share its contents over HTTP as shown below.
|
||||||
|
```
|
||||||
|
$ cd ostechnix
|
||||||
|
|
||||||
|
$ http-server -p 8000
|
||||||
|
Starting up http-server, serving ./
|
||||||
|
Available on:
|
||||||
|
http://127.0.0.1:8000
|
||||||
|
http://192.168.225.24:8000
|
||||||
|
http://192.168.225.20:8000
|
||||||
|
Hit CTRL-C to stop the server
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Now, you can access the contents of this directory from local or remote systems in the network using the URL **http://<ip-address>:8000**.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
To stop sharing, press **CTRL+C**.
|
||||||
|
|
||||||
|
#### Method 5 – Using Miniserve (Rust)
|
||||||
|
|
||||||
|
[**Miniserve**][2] is yet another command line utility that allows you to quickly serve files over HTTP. It is a very fast, easy-to-use, cross-platform utility written in the **Rust** programming language. Unlike the above utilities/methods, it provides authentication support, so you can set a username and password for the shares.
|
||||||
|
|
||||||
|
Install Rust in your Linux system as described in the following link.
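If you prefer a quick start, the official rustup installer is usually a one-liner (this assumes you are comfortable piping the Rust project’s install script into your shell):

```
$ curl https://sh.rustup.rs -sSf | sh
```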
|
||||||
|
|
||||||
|
After installing Rust, run the following command to install miniserve:
|
||||||
|
```
|
||||||
|
$ cargo install miniserve
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Alternatively, you can download the binary from [**the releases page**][3] and make it executable.
|
||||||
|
```
|
||||||
|
$ chmod +x miniserve-linux
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
And then you can run it with the following command (assuming the miniserve binary was downloaded to the current working directory):
|
||||||
|
```
|
||||||
|
$ ./miniserve-linux <path-to-share>
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
**Usage**
|
||||||
|
|
||||||
|
To serve a directory:
|
||||||
|
```
|
||||||
|
$ miniserve <path-to-directory>
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
**Example:**
|
||||||
|
```
|
||||||
|
$ miniserve /home/sk/ostechnix/
|
||||||
|
miniserve v0.2.0
|
||||||
|
Serving path /home/sk/ostechnix at http://[::]:8080, http://localhost:8080
|
||||||
|
Quit by pressing CTRL-C
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Now, you can access the share from the local system itself using the URL **<http://localhost:8080>** and/or from a remote system with the URL **http://<ip-address>:8080**.
|
||||||
|
|
||||||
|
To serve a single file:
|
||||||
|
```
|
||||||
|
$ miniserve <path-to-file>
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
**Example:**
|
||||||
|
```
|
||||||
|
$ miniserve ostechnix/file.txt
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
To serve a file/folder with a username and password:
|
||||||
|
```
|
||||||
|
$ miniserve --auth joe:123 <path-to-share>
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Bind to multiple interfaces:
|
||||||
|
```
|
||||||
|
$ miniserve -i 192.168.225.1 -i 10.10.0.1 -i ::1 -- <path-to-share>
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
As you can see, I have given only 5 methods. But there are a few more methods given in the link attached at the end of this guide. Go and test them as well. Also, bookmark it and revisit from time to time to check whether any new additions have been made to the list.
|
||||||
|
|
||||||
|
And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
|
||||||
|
|
||||||
|
Cheers!
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://www.ostechnix.com/how-to-quickly-serve-files-and-folders-over-http-in-linux/
|
||||||
|
|
||||||
|
作者:[SK][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]:https://www.ostechnix.com/author/sk/
|
||||||
|
[1]:https://www.npmjs.com/package/http-server
|
||||||
|
[2]:https://github.com/svenstaro/miniserve
|
||||||
|
[3]:https://github.com/svenstaro/miniserve/releases
|
@ -1,3 +1,5 @@
|
|||||||
|
idea2act translating
|
||||||
|
|
||||||
Turn your vi editor into a productivity powerhouse
|
Turn your vi editor into a productivity powerhouse
|
||||||
======
|
======
|
||||||
|
|
||||||
|
@ -0,0 +1,196 @@
|
|||||||
|
How To Limit Network Bandwidth In Linux Using Wondershaper
|
||||||
|
======
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
This tutorial will help you to easily limit network bandwidth and shape your network traffic in Unix-like operating systems. By limiting the network bandwidth usage, you can avoid unnecessary bandwidth consumption by applications such as package managers (pacman, yum, apt), web browsers, torrent clients, and download managers, and prevent bandwidth abuse by one or more users in the network. For the purpose of this tutorial, we will be using a command line utility named **Wondershaper**. Trust me, it is not as hard as you may think. It is one of the easiest and quickest ways I have ever come across to limit the Internet or local network bandwidth usage on your own Linux system. Read on.
|
||||||
|
|
||||||
|
Please be mindful that the aforementioned utility can only limit the incoming and outgoing traffic of your local network interfaces, not the interfaces of your router or modem. In other words, Wondershaper will only limit the network bandwidth of your local system itself, not of any other systems in the network. This utility is mainly designed for limiting the bandwidth of one or more network adapters in your local system. Hope you got my point.
|
||||||
|
|
||||||
|
Let us see how to use Wondershaper to shape the network traffic.
|
||||||
|
|
||||||
|
### Limit Network Bandwidth In Linux Using Wondershaper
|
||||||
|
|
||||||
|
**Wondershaper** is a simple script used to limit the bandwidth of your system’s network adapter(s). It limits the bandwidth using iproute’s tc command, but greatly simplifies the operation.
|
||||||
|
|
||||||
|
**Installing Wondershaper**
|
||||||
|
|
||||||
|
To install the latest version, git clone the wondershaper repository:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ git clone https://github.com/magnific0/wondershaper.git
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Go to the wondershaper directory and install it as shown below:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ cd wondershaper
|
||||||
|
|
||||||
|
$ sudo make install
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
And, run the following command to start wondershaper service automatically on every reboot.
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo systemctl enable wondershaper.service
|
||||||
|
|
||||||
|
$ sudo systemctl start wondershaper.service
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
You can also install it using your distribution’s package manager (official or non-official) if you don’t need the latest version.
|
||||||
|
|
||||||
|
Wondershaper is available in [**AUR**][1], so you can install it in Arch-based systems using AUR helper programs such as [**Yay**][2].
|
||||||
|
|
||||||
|
```
|
||||||
|
$ yay -S wondershaper-git
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
On Debian, Ubuntu, Linux Mint:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo apt-get install wondershaper
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
On Fedora:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo dnf install wondershaper
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
On RHEL and CentOS, enable the EPEL repository and install wondershaper as shown below.
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo yum install epel-release
|
||||||
|
|
||||||
|
$ sudo yum install wondershaper
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Finally, start wondershaper service automatically on every reboot.
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo systemctl enable wondershaper.service
|
||||||
|
|
||||||
|
$ sudo systemctl start wondershaper.service
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
**Usage**
|
||||||
|
|
||||||
|
First, find the name of your network interface. Here are some common ways to find the details of a network card.
|
||||||
|
|
||||||
|
```
|
||||||
|
$ ip addr
|
||||||
|
|
||||||
|
$ route
|
||||||
|
|
||||||
|
$ ifconfig
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Once you find the network card name, you can limit the bandwidth rate as shown below.
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo wondershaper -a <adapter> -d <rate> -u <rate>
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
For instance, if your network card name is **enp0s8** and you want to limit the bandwidth to **1024 Kbps** for **downloads** and **512 Kbps** for **uploads**, the command would be:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo wondershaper -a enp0s8 -d 1024 -u 512
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Where,
|
||||||
|
|
||||||
|
* **-a** : network card name
|
||||||
|
* **-d** : download rate
|
||||||
|
* **-u** : upload rate
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
To clear the limits from a network adapter, simply run:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo wondershaper -c -a enp0s8
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Or
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo wondershaper -c enp0s8
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
In case there is more than one network card available in your system, you need to manually set the download/upload rates for each network interface card as described above.
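For instance, assuming a wired adapter named enp0s8 and a hypothetical wireless adapter named wlp2s0, you would simply run the command once per interface:

```
$ sudo wondershaper -a enp0s8 -d 1024 -u 512

# wlp2s0 is an illustrative wireless interface name
$ sudo wondershaper -a wlp2s0 -d 2048 -u 1024
```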
|
||||||
|
|
||||||
|
If you have installed Wondershaper by cloning its GitHub repository, there is a configuration file named **wondershaper.conf** in **/etc/conf.d/**. Make sure you have set the download or upload rates by modifying the appropriate values (network card name, download/upload rate) in this file.
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo nano /etc/conf.d/wondershaper.conf
|
||||||
|
|
||||||
|
[wondershaper]
|
||||||
|
# Adapter
|
||||||
|
#
|
||||||
|
IFACE="eth0"
|
||||||
|
|
||||||
|
# Download rate in Kbps
|
||||||
|
#
|
||||||
|
DSPEED="2048"
|
||||||
|
|
||||||
|
# Upload rate in Kbps
|
||||||
|
#
|
||||||
|
USPEED="512"
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Here is a sample speed test before enabling Wondershaper:
|
||||||
|
|
||||||
|
After enabling Wondershaper:
|
||||||
|
|
||||||
|
As you can see, the download rate has been tremendously reduced after limiting the bandwidth using Wondershaper on my Ubuntu 18.04 LTS server.
|
||||||
|
|
||||||
|
For more details, view the help section by running the following command:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ wondershaper -h
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Or, refer to the man page:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ man wondershaper
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
As far as I tested, Wondershaper worked just fine as described above. Give it a try and let us know what you think about this utility.
|
||||||
|
|
||||||
|
And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned.
|
||||||
|
|
||||||
|
Cheers!
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://www.ostechnix.com/how-to-limit-network-bandwidth-in-linux-using-wondershaper/
|
||||||
|
|
||||||
|
作者:[SK][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]: https://www.ostechnix.com/author/sk/
|
||||||
|
[1]: https://aur.archlinux.org/packages/wondershaper-git/
|
||||||
|
[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
@ -0,0 +1,67 @@
|
|||||||
|
6 open source tools for writing a book
|
||||||
|
======
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
I first used and contributed to free and open source software in 1993, and since then I've been an open source software developer and evangelist. I've written or contributed to dozens of open source software projects, although the one that I'll be remembered for is the [FreeDOS Project][1], an open source implementation of the DOS operating system.
|
||||||
|
|
||||||
|
I recently wrote a book about FreeDOS. [_Using FreeDOS_][2] is my celebration of the 24th anniversary of FreeDOS. It is a collection of how-to's about installing and using FreeDOS, essays about my favorite DOS applications, and quick-reference guides to the DOS command line and DOS batch programming. I've been working on this book for the last few months, with the help of a great professional editor.
|
||||||
|
|
||||||
|
_Using FreeDOS_ is available under the Creative Commons Attribution (cc-by) International Public License. You can download the EPUB and PDF versions at no charge from the [FreeDOS e-books][2] website. (I'm also planning a print version, for those who prefer a bound copy.)
|
||||||
|
|
||||||
|
The book was produced almost entirely with open source software. I'd like to share a brief insight into the tools I used to create, edit, and produce _Using FreeDOS_.
|
||||||
|
|
||||||
|
### Google Docs
|
||||||
|
|
||||||
|
[Google Docs][3] is the only tool I used that isn’t open source software. I uploaded my first drafts to Google Docs so my editor and I could collaborate. I’m sure there are open source collaboration tools, but Google Docs’ ability to let two people edit the same document at the same time, make comments, suggest edits, and track changes—not to mention its use of paragraph styles and the ability to download the finished document—made it a valuable part of the editing process.
|
||||||
|
|
||||||
|
### LibreOffice
|
||||||
|
|
||||||
|
I started on [LibreOffice][4] 6.0 but I finished the book using LibreOffice 6.1. I love LibreOffice's rich support of styles. Paragraph styles made it easy to apply a style for titles, headers, body text, sample code, and other text. Character styles let me modify the appearance of text within a paragraph, such as inline sample code or a different style to indicate a filename. Graphics styles let me apply certain styling to screenshots and other images. And page styles allowed me to easily modify the layout and appearance of the page.
|
||||||
|
|
||||||
|
### GIMP
|
||||||
|
|
||||||
|
My book includes a lot of DOS program screenshots, website screenshots, and FreeDOS logos. I used [GIMP][5] to modify these images for the book. Usually, this was simply cropping or resizing an image, but as I prepare the print edition of the book, I'm using GIMP to create a few images that will be simpler for print layout.
|
||||||
|
|
||||||
|
### Inkscape
|
||||||
|
|
||||||
|
Most of the FreeDOS logos and fish mascots are in SVG format, and I used [Inkscape][6] for any image tweaking here. And in preparing the PDF version of the ebook, I wanted a simple blue banner at the top of the page, with the FreeDOS logo in the corner. After some experimenting, I found it easier to create an SVG image in Inkscape that looked like the banner I wanted, and I pasted that into the header.
|
||||||
|
|
||||||
|
### ImageMagick
|
||||||
|
|
||||||
|
While it’s great to use GIMP to do the fine work, sometimes it’s faster to run an [ImageMagick][7] command over a set of images, such as converting them into PNG format or resizing them.
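As a rough sketch, converting a folder of PPM screen dumps to PNG and then shrinking the results could look like this (mogrify’s resize edits files in place, so work on copies):

```
# write PNG copies of all PPM files, then halve the PNG dimensions
$ mogrify -format png *.ppm
$ mogrify -resize 50% *.png
```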
|
||||||
|
|
||||||
|
### Sigil
|
||||||
|
|
||||||
|
LibreOffice can export directly to EPUB format, but it wasn't a great transfer. I haven't tried creating an EPUB with LibreOffice 6.1, but LibreOffice 6.0 didn't include my images. It also added styles in a weird way. I used [Sigil][8] to tweak the EPUB file and make everything look right. Sigil even has a preview function so you can see what the EPUB will look like.
|
||||||
|
|
||||||
|
### QEMU
|
||||||
|
|
||||||
|
Because this book is about installing and running FreeDOS, I needed to actually run FreeDOS. You can boot FreeDOS inside any PC emulator, including VirtualBox, QEMU, GNOME Boxes, PCem, and Bochs. But I like the simplicity of [QEMU][9]. And the QEMU console lets you issue a screen dump in PPM format, which is ideal for grabbing screenshots to include in the book.
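For reference, grabbing such a screenshot from the QEMU monitor looks something like this (the filename is illustrative):

```
(qemu) screendump freedos-install.ppm
```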
|
||||||
|
|
||||||
|
Of course, I have to mention running [GNOME][10] on [Linux][11]. I use the [Fedora][12] distribution of Linux.
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://opensource.com/article/18/9/writing-book-open-source-tools
|
||||||
|
|
||||||
|
作者:[Jim Hall][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]: https://opensource.com/users/jim-hall
|
||||||
|
[1]: http://www.freedos.org/
|
||||||
|
[2]: http://www.freedos.org/ebook/
|
||||||
|
[3]: https://www.google.com/docs/about/
|
||||||
|
[4]: https://www.libreoffice.org/
|
||||||
|
[5]: https://www.gimp.org/
|
||||||
|
[6]: https://inkscape.org/
|
||||||
|
[7]: https://www.imagemagick.org/
|
||||||
|
[8]: https://sigil-ebook.com/
|
||||||
|
[9]: https://www.qemu.org/
|
||||||
|
[10]: https://www.gnome.org/
|
||||||
|
[11]: https://www.kernel.org/
|
||||||
|
[12]: https://getfedora.org/
|
@ -0,0 +1,140 @@
|
|||||||
|
Autotrash – A CLI Tool To Automatically Purge Old Trashed Files
|
||||||
|
======
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
**Autotrash** is a command line utility to automatically purge old trashed files. It will purge files that have been in the trash for more than a given number of days. You don’t need to empty the trash folder or press SHIFT+DELETE to permanently purge the files/folders. Autotrash will handle the contents of your Trash folder and delete them automatically after a particular period of time. In a nutshell, Autotrash will never allow your trash to grow too big.
|
||||||
|
|
||||||
|
### Installing Autotrash
|
||||||
|
|
||||||
|
Autotrash is available in the default repositories of Debian-based systems. To install autotrash on Debian, Ubuntu, Linux Mint, run:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo apt-get install autotrash
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
On Fedora:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo dnf install autotrash
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
For Arch Linux and its variants, you can install it using any AUR helper program such as [**Yay**][1].
|
||||||
|
|
||||||
|
```
|
||||||
|
$ yay -S autotrash-git
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
### Automatically Purge Old Trashed Files
|
||||||
|
|
||||||
|
Whenever you run autotrash, it will scan your **`~/.local/share/Trash/info`** directory and read the **`.trashinfo`** files to find the deletion dates. If files have been in the trash folder for longer than the defined number of days, they will be deleted.
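For context, a `.trashinfo` file follows the FreeDesktop.org trash specification and looks roughly like this (the path and date are made up for illustration):

```
[Trash Info]
Path=/home/sk/Documents/old-report.txt
DeletionDate=2018-08-15T10:30:00
```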
|
||||||
|
|
||||||
|
Let me show you some examples.
|
||||||
|
|
||||||
|
To purge files which are in the trash folder for more than 30 days, run:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ autotrash -d 30
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
As per the above example, if the files in your Trash folder are more than 30 days old, Autotrash will automatically delete them from your Trash. You don’t need to manually delete them. Just send the unnecessary junk to your trash folder and forget about it. Autotrash will take care of the trashed files.
|
||||||
|
|
||||||
|
The above command will only process the currently logged-in user’s trash directory. If you want autotrash to process the trash directories of all users (not just your home directory), use the **-t** option like below.
|
||||||
|
|
||||||
|
```
|
||||||
|
$ autotrash -td 30
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Autotrash also allows you to delete trashed files based on the space left or available on the trash filesystem.
|
||||||
|
|
||||||
|
For example, have a look at the following example.
|
||||||
|
|
||||||
|
```
|
||||||
|
$ autotrash --max-free 1024 -d 30
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
As per the above command, autotrash will only purge trashed files that are older than **30 days** if there is less than **1GB of space left** on the trash filesystem. This can be useful if your trash filesystem is running out of space.
|
||||||
|
|
||||||
|
We can also purge files from the trash, oldest first, until there is at least 1GB of space on the trash filesystem.
|
||||||
|
|
||||||
|
```
|
||||||
|
$ autotrash --min-free 1024
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
In this case, there is no restriction on how old trashed files are.
|
||||||
|
|
||||||
|
You can combine both options ( **`--min-free`** and **`--max-free`** ) in a single command like below.
|
||||||
|
|
||||||
|
```
|
||||||
|
$ autotrash --max-free 2048 --min-free 1024 -d 30
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
As per the above command, autotrash will start processing the trash when there is less than **2GB** of free space on the trash filesystem. At that point it removes files older than 30 days, and if there is still less than **1GB** of free space after that, it removes even newer files.
|
||||||
|
|
||||||
|
As you can see, all of these commands have to be run manually. You might wonder, how can I automate this task? That’s easy! Just add autotrash as a crontab entry. Then the commands will automatically run at the scheduled time and purge the files in your trash depending on the defined options.
|
||||||
|
|
||||||
|
To add these commands to your crontab file, run:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ crontab -e
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Add the entries, for example:
|
||||||
|
|
||||||
|
```
|
||||||
|
@daily /usr/bin/autotrash -d 30
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Now autotrash will purge files which have been in the trash folder for more than 30 days, every day.
|
||||||
|
|
||||||
|
For more details about scheduling tasks, refer to the following links.
|
||||||
|
|
||||||
|
|
||||||
|
+ [A Beginners Guide To Cron Jobs][2]
|
||||||
|
+ [How To Easily And Safely Manage Cron Jobs In Linux][3]
|
||||||
|
|
||||||
|
|
||||||
|
Please be mindful that if you have deleted any important files inadvertently, they will be permanently gone after the defined days, so just be careful.
|
||||||
|
|
||||||
|
Refer to the man page to learn more about Autotrash.
|
||||||
|
|
||||||
|
```
|
||||||
|
$ man autotrash
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Emptying the Trash folder or pressing SHIFT+DELETE to permanently get rid of unnecessary stuff from your Linux system is no big deal. It will just take a couple of seconds. However, if you want an extra utility to take care of your junk files, Autotrash might be helpful. Give it a try and see how it works.
|
||||||
|
|
||||||
|
And, that’s all for now. Hope this helps. More good stuff to come.
|
||||||
|
|
||||||
|
Cheers!
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://www.ostechnix.com/autotrash-a-cli-tool-to-automatically-purge-old-trashed-files/
|
||||||
|
|
||||||
|
作者:[SK][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]: https://www.ostechnix.com/author/sk/
|
||||||
|
[1]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||||
|
[2]: https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/
|
||||||
|
[3]: https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/
|
@ -0,0 +1,229 @@
|
|||||||
|
How to Use the Netplan Network Configuration Tool on Linux
|
||||||
|
======
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
For years Linux admins and users have configured their network interfaces in the same way. For instance, if you’re an Ubuntu user, you could either configure the network connection via the desktop GUI or from within the /etc/network/interfaces file. The configuration was incredibly easy and never failed to work. The configuration within that file looked something like this:
|
||||||
|
|
||||||
|
```
|
||||||
|
auto enp10s0
iface enp10s0 inet static
    address 192.168.1.162
    netmask 255.255.255.0
    gateway 192.168.1.100
    dns-nameservers 1.0.0.1,1.1.1.1
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Save and close that file. Restart networking with the command:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo systemctl restart networking
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Or, if you’re using a non-systemd distribution, you could restart networking the old-fashioned way like so:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo /etc/init.d/networking restart
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Your network will restart and the newly configured interface is good to go.
|
||||||
|
|
||||||
|
That’s how it’s been done for years. Until now. With certain distributions (such as Ubuntu Linux 18.04), the configuration and control of networking has changed considerably. Instead of that interfaces file and using the /etc/init.d/networking script, we now turn to [Netplan][1]. Netplan is a command line utility for the configuration of networking on certain Linux distributions. Netplan uses YAML description files to configure network interfaces and, from those descriptions, will generate the necessary configuration options for any given renderer tool.
|
||||||
|
|
||||||
|
I want to show you how to use Netplan on Linux to configure a static IP address and a DHCP address. I’ll be demonstrating on Ubuntu Server 18.04. One word of warning: the .yaml files you create for Netplan must be consistent in spacing, otherwise they’ll fail to work. You don’t have to use a specific amount of spacing for each line, it just has to remain consistent.
|
||||||
|
|
||||||
|
### The new configuration files
|
||||||
|
|
||||||
|
Open a terminal window (or log into your Ubuntu Server via SSH). You will find the new configuration files for Netplan in the /etc/netplan directory. Change into that directory with the command cd /etc/netplan. Once in that directory, you will probably only see a single file:
|
||||||
|
|
||||||
|
```
|
||||||
|
01-netcfg.yaml
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
You can create a new file or edit the default. If you opt to edit the default, I suggest making a copy with the command:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
With your backup in place, you’re ready to configure.
|
||||||
|
|
||||||
|
### Network Device Name
|
||||||
|
|
||||||
|
Before you configure your static IP address, you’ll need to know the name of the device to be configured. To do that, you can issue the command ip a and find out which device is to be used (Figure 1).
|
||||||
|
|
||||||
|
![netplan][3]
|
||||||
|
|
||||||
|
Figure 1: Finding our device name with the ip a command.
|
||||||
|
|
||||||
|
[Used with permission][4]
|
||||||
|
|
||||||
|
I’ll be configuring ens5 for a static IP address.
|
||||||
|
|
||||||
|
### Configuring a Static IP Address
|
||||||
|
|
||||||
|
Open the original .yaml file for editing with the command:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo nano /etc/netplan/01-netcfg.yaml
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
The layout of the file looks like this:
|
||||||
|
|
||||||
|
network:
  version: 2
  renderer: networkd
  ethernets:
    DEVICE_NAME:
      dhcp4: yes/no
      addresses: [IP/NETMASK]
      gateway4: GATEWAY
      nameservers:
        addresses: [NAMESERVER, NAMESERVER]
|
||||||
|
|
||||||
|
Where:
|
||||||
|
|
||||||
|
* DEVICE_NAME is the actual device name to be configured.
|
||||||
|
|
||||||
|
* yes/no is an option to enable or disable dhcp4.
|
||||||
|
|
||||||
|
* IP is the IP address for the device.
|
||||||
|
|
||||||
|
* NETMASK is the netmask for the IP address.
|
||||||
|
|
||||||
|
* GATEWAY is the address for your gateway.
|
||||||
|
|
||||||
|
* NAMESERVER is the comma-separated list of DNS nameservers.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
Here’s a sample .yaml file:
|
||||||
|
|
||||||
|
```
|
||||||
|
network:
  version: 2
  renderer: networkd
  ethernets:
    ens5:
      dhcp4: no
      addresses: [192.168.1.230/24]
      gateway4: 192.168.1.254
      nameservers:
        addresses: [8.8.4.4,8.8.8.8]
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Edit the above to fit your networking needs. Save and close that file.
|
||||||
|
|
||||||
|
Notice the netmask is no longer configured in the form 255.255.255.0. Instead, the netmask is appended to the IP address in CIDR notation.
|
||||||
|
|
||||||
|
### Testing the Configuration
|
||||||
|
|
||||||
|
Before we apply the change, let’s test the configuration. To do that, issue the command:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo netplan try
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
The above command will validate the configuration before applying it. If it succeeds, you will see Configuration accepted. In other words, Netplan will attempt to apply the new settings to a running system. Should the new configuration file fail, Netplan will automatically revert to the previous working configuration. Should the new configuration work, it will be applied.
|
||||||
|
|
||||||
|
### Applying the New Configuration
|
||||||
|
|
||||||
|
If you are certain of your configuration file, you can skip the try option and go directly to applying the new options. The command for this is:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo netplan apply
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
At this point, you can issue the command ip a to see that your new address configurations are in place.
|
||||||
|
|
||||||
|
### Configuring DHCP
|
||||||
|
|
||||||
|
Although you probably won’t be configuring your server for DHCP, it’s always good to know how to do this. For example, you might not know what static IP addresses are currently available on your network. You could configure the device for DHCP, get an IP address, and then reconfigure that address as static.
|
||||||
|
|
||||||
|
To use DHCP with Netplan, the configuration file would look something like this:
|
||||||
|
|
||||||
|
```
|
||||||
|
network:
  version: 2
  renderer: networkd
  ethernets:
    ens5:
      addresses: []
      dhcp4: true
      optional: true
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Save and close that file. Test the file with:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo netplan try
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Netplan should succeed and apply the DHCP configuration. You could then issue the ip a command, get the dynamically assigned address, and then reconfigure a static address. Or, you could leave it set to use DHCP (but seeing as how this is a server, you probably won’t want to do that).
|
||||||
|
|
||||||
|
Should you have more than one interface, you could name the second .yaml configuration file 02-netcfg.yaml. Netplan will apply the configuration files in numerical order, so 01 will be applied before 02. Create as many configuration files as needed for your server.
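For example, a hypothetical second interface (here called ens6) that simply uses DHCP could live in its own file:

```
# /etc/netplan/02-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens6:
      dhcp4: true
```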
|
||||||
|
|
||||||
|
### That’s All There Is
|
||||||
|
|
||||||
|
Believe it or not, that’s all there is to using Netplan. Although it is a significant change to how we’re accustomed to configuring network addresses, it’s not all that hard to get used to. But this style of configuration is here to stay… so you will need to get used to it.
|
||||||
|
|
||||||
|
Learn more about Linux through the free ["Introduction to Linux" ][5]course from The Linux Foundation and edX.
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://www.linux.com/learn/intro-to-linux/2018/9/how-use-netplan-network-configuration-tool-linux
|
||||||
|
|
||||||
|
作者:[Jack Wallen][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]: https://www.linux.com/users/jlwallen
|
||||||
|
[1]: https://netplan.io/
|
||||||
|
[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/netplan_1.jpg?itok=XuIsXWbV (netplan)
|
||||||
|
[4]: /licenses/category/used-permission
|
||||||
|
[5]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
@ -0,0 +1,114 @@
|
|||||||
|
What is ZFS? Why People Use ZFS? [Explained for Beginners]
|
||||||
|
======
|
||||||
|
Today, we will take a look at ZFS, an advanced file system. We will discuss where it came from, what it is, and why it is so popular among techies and enterprise.
|
||||||
|
|
||||||
|
Even though I’m from the US, I prefer to pronounce it ZedFS instead of ZeeFS because it sounds cooler. You are free to pronounce it however you like.
|
||||||
|
|
||||||
|
Note: You will see ZFS repeated many times in the article. When I talk about features and installation, I’m talking about OpenZFS. ZFS (developed by Oracle) and OpenZFS have followed different paths since Oracle shut down OpenSolaris. (More on that later.)
|
||||||
|
|
||||||
|
### History of ZFS
|
||||||
|
|
||||||
|
The Z File System (ZFS) was created by [Matthew Ahrens and Jeff Bonwick][1] in 2001. ZFS was designed to be a next generation file system for [Sun Microsystems’][2] [OpenSolaris][3]. In 2008, ZFS was ported to FreeBSD. The same year a project was started to port [ZFS to Linux][4]. However, since ZFS is licensed under the [Common Development and Distribution License][5], which is incompatible with the [GNU General Public License][6], it cannot be included in the Linux kernel. To get around this problem, most Linux distros offer methods to install ZFS.
|
||||||
|
|
||||||
|
Shortly after Oracle purchased Sun Microsystems, OpenSolaris became closed-source. All further development of ZFS became closed source as well. Many of the developers of ZFS were unhappy about this turn of events. [Two-thirds of the core ZFS developers][1], including Ahrens and Bonwick, left Oracle due to this decision. They joined other companies and created the [OpenZFS project][7] in September of 2013. The project has spearheaded the open-source development of ZFS.
|
||||||
|
|
||||||
|
Let’s go back to the license issue mentioned above. Since the OpenZFS project is separate from Oracle, some probably wonder why they don’t change the license to something that is compatible with the GPL so it can be included in the Linux kernel. According to the [OpenZFS website][8], changing the license would involve contacting everyone who contributed code to the current OpenZFS implementation (including the initial, common ZFS code from the OpenSolaris era) and getting their permission to change the license. Since this job is nearly impossible (because some contributors may be dead or hard to find), they have decided to keep the license they have.
|
||||||
|
|
||||||
|
### What is ZFS? What are its features?
|
||||||
|
|
||||||
|
![ZFS filesystem][9]
|
||||||
|
|
||||||
|
As I said before, ZFS is an advanced file system. As such, it has some interesting [features][10]. Such as:
|
||||||
|
|
||||||
|
* Pooled storage
|
||||||
|
* Copy-on-write
|
||||||
|
* Snapshots
|
||||||
|
* Data integrity verification and automatic repair
|
||||||
|
* RAID-Z
|
||||||
|
* Maximum 16 Exabyte file size
|
||||||
|
* Maximum 256 Quadrillion Zettabytes storage
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
Let’s break down a couple of those features.
|
||||||
|
|
||||||
|
#### Pooled Storage
|
||||||
|
|
||||||
|
Unlike most file systems, ZFS combines the features of a file system and a volume manager. This means that, unlike other file systems, ZFS can create a file system that spans across a series of drives, or a pool. Not only that, but you can add storage to a pool by adding another drive. ZFS will handle [partitioning and formatting][11].
|
||||||
|
|
||||||
|
![Pooled storage in ZFS][12]Pooled storage in ZFS
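As a quick illustration (the device names are examples, not a recommendation for your hardware), creating a pool across two drives and later growing it looks like this:

```
# create a pool named "tank" spanning two drives, then add a third
$ sudo zpool create tank /dev/sdb /dev/sdc
$ sudo zpool add tank /dev/sdd
$ zpool list
```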
|
||||||
|
|
||||||
|
#### Copy-on-write
|
||||||
|
|
||||||
|
[Copy-on-write][13] is another interesting (and cool) feature. On most file systems, when data is overwritten, it is lost forever. On ZFS, the new information is written to a different block. Once the write is complete, the file system’s metadata is updated to point to the new info. This ensures that if the system crashes (or something else happens) while the write is taking place, the old data will be preserved. It also means that the system does not need to run [fsck][14] after a system crash.
|
||||||
|
|
||||||
|
#### Snapshots
|
||||||
|
|
||||||
|
Copy-on-write leads into another ZFS feature: snapshots. ZFS uses snapshots to track changes in the file system. “[The snapshot][13] contains the original version of the file system, and the live filesystem contains any changes made since the snapshot was taken. No additional space is used. As new data is written to the live file system, new blocks are allocated to store this data.” If a file is deleted, the snapshot reference is removed as well. So, snapshots are mainly designed to track changes to files, not the addition and creation of files.
|
||||||
|
|
||||||
|
Snapshots can be mounted as read-only to recover a past version of a file. It is also possible to roll back the live system to a previous snapshot. All changes made since the snapshot will be lost.
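In practice, taking a snapshot and rolling back to it are one-liners (the dataset and snapshot names here are illustrative):

```
$ sudo zfs snapshot tank/home@before-cleanup
$ sudo zfs rollback tank/home@before-cleanup
```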
|
||||||
|
|
||||||
|
#### Data integrity verification and automatic repair
|
||||||
|
|
||||||
|
Whenever new data is written to ZFS, it creates a checksum for that data. When that data is read, the checksum is verified. If the checksum does not match, then ZFS knows that an error has been detected. ZFS will then automatically attempt to correct the error.
|
||||||
|
|
||||||
|
#### RAID-Z
|
||||||
|
|
||||||
|
ZFS can handle RAID without requiring any extra software or hardware. Unsurprisingly, ZFS has its own implementation of RAID: RAID-Z. RAID-Z is actually a variation of RAID-5. However, it is designed to overcome the RAID-5 write hole error, “in which the data and parity information become inconsistent after an unexpected restart”. To use the basic [level of RAID-Z][15] (RAID-Z1) you need at least two disks for storage and one for [parity][16]. RAID-Z2 requires at least two storage drives and two drives for parity. RAID-Z3 requires at least two storage drives and three drives for parity. When drives are added to RAID-Z pools, they have to be added in multiples of two.
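A minimal RAID-Z1 pool, assuming three spare disks, could be created like this:

```
$ sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
```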
|
||||||
|
|
||||||
|
#### Huge Storage potential
|
||||||
|
|
||||||
|
When ZFS was created, it was designed to be [the last word in file systems][17]. At a time when most file systems were 64-bit, the ZFS creators decided to jump right to 128-bit to future-proof it. This means that ZFS “offers 16 billion billion times the capacity of 32- or 64-bit systems”. In fact, Jeff Bonwick (one of the creators) [said][18] that “fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.”
|
||||||
|
|
||||||
|
### How to Install ZFS?
|
||||||
|
|
||||||
|
If you want to use ZFS out of the box, it would require installing either [FreeBSD][19] or an [operating system using the illumos kernel][20]. [illumos][21] is a fork of the OpenSolaris kernel.
|
||||||
|
|
||||||
|
In fact, support for [ZFS is one of the main reasons why some experienced Linux users opt for BSD][22].
|
||||||
|
|
||||||
|
If you want to try ZFS on Linux, you can only use it as your storage file system. As far as I know, no Linux distro gives you the option to install ZFS on your root partition out of the box. If you are interested in trying ZFS on Linux, the [ZFS on Linux project][4] has a number of tutorials on how to do that.
|
||||||
|
|
||||||
|
### Caveat
|
||||||
|
|
||||||
|
This article has sung the praises of ZFS. Now let me mention a quick problem with ZFS: using RAID-Z [can be expensive][23] because of the number of drives you need to purchase to add storage space.
|
||||||
|
|
||||||
|
Have you ever used ZFS? What was your experience like? Let us know in the comments below.
|
||||||
|
|
||||||
|
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][24].
|
||||||
|
|
||||||
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
|
via: https://itsfoss.com/what-is-zfs/
|
||||||
|
|
||||||
|
作者:[John Paul][a]
|
||||||
|
选题:[lujun9972](https://github.com/lujun9972)
|
||||||
|
译者:[译者ID](https://github.com/译者ID)
|
||||||
|
校对:[校对者ID](https://github.com/校对者ID)
|
||||||
|
|
||||||
|
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||||
|
|
||||||
|
[a]: https://itsfoss.com/author/john/
|
||||||
|
[1]: https://wiki.gentoo.org/wiki/ZFS
|
||||||
|
[2]: http://en.wikipedia.org/wiki/Sun_Microsystems
|
||||||
|
[3]: http://en.wikipedia.org/wiki/Opensolaris
|
||||||
|
[4]: https://zfsonlinux.org/
|
||||||
|
[5]: https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License
|
||||||
|
[6]: https://en.wikipedia.org/wiki/GNU_General_Public_License
|
||||||
|
[7]: http://www.open-zfs.org/wiki/Main_Page
|
||||||
|
[8]: http://www.open-zfs.org/wiki/FAQ#Do_you_plan_to_release_OpenZFS_under_a_license_other_than_the_CDDL.3F
|
||||||
|
[9]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/what-is-zfs.png
|
||||||
|
[10]: https://wiki.archlinux.org/index.php/ZFS
|
||||||
|
[11]: https://www.howtogeek.com/175159/an-introduction-to-the-z-file-system-zfs-for-linux/
|
||||||
|
[12]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/zfs-overview.png
|
||||||
|
[13]: https://www.freebsd.org/doc/handbook/zfs-term.html
|
||||||
|
[14]: https://en.wikipedia.org/wiki/Fsck
|
||||||
|
[15]: https://wiki.archlinux.org/index.php/ZFS/Virtual_disks#Creating_and_Destroying_Zpools
|
||||||
|
[16]: https://www.pcmag.com/encyclopedia/term/60364/raid-parity
|
||||||
|
[17]: https://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/
|
||||||
|
[18]: https://blogs.oracle.com/bonwick/128-bit-storage:-are-you-high
|
||||||
|
[19]: https://www.freebsd.org/
|
||||||
|
[20]: https://wiki.illumos.org/display/illumos/Distributions
|
||||||
|
[21]: https://wiki.illumos.org/display/illumos/illumos+Home
|
||||||
|
[22]: https://itsfoss.com/why-use-bsd/
|
||||||
|
[23]: http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html
|
||||||
|
[24]: http://reddit.com/r/linuxusersgroup
|
@ -0,0 +1,166 @@
|
|||||||
|
13 Keyboard Shortcut Every Ubuntu 18.04 User Should Know
|
||||||
|
======
|
||||||
|
Knowing keyboard shortcuts increases your productivity. Here are some useful Ubuntu shortcut keys that will help you use Ubuntu like a pro.
|
||||||
|
|
||||||
|
You can use an operating system with a combination of keyboard and mouse, but using keyboard shortcuts saves your time.
|
||||||
|
|
||||||
|
Note: The keyboard shortcuts mentioned in this list are intended for the Ubuntu 18.04 GNOME edition. Usually, most of them (if not all) should work on other Ubuntu versions as well, but I cannot vouch for it.
|
||||||
|
|
||||||
|
![Ubuntu keyboard shortcuts][1]
|
||||||
|
|
||||||
|
### Useful Ubuntu keyboard shortcuts
|
||||||
|
|
||||||
|
Let’s have a look at some of the must know keyboard shortcut for Ubuntu GNOME. I have not included universal keyboard shortcuts like Ctrl+C (copy), Ctrl+V (paste) or Ctrl+S (save).
|
||||||
|
|
||||||
|
Note: The Super key in Linux refers to the key with the Windows logo. I have used capital letters in the shortcuts, but that doesn’t mean you have to press the Shift key. For example, T means the ‘t’ key only, not Shift+t.
|
||||||
|
|
||||||
|
#### 1\. Super key: Opens Activities search
|
||||||
|
|
||||||
|
The Super key opens the Activities menu. If you have to use just one keyboard shortcut on Ubuntu, this has to be the one.
|
||||||
|
|
||||||
|
You want to open an application? Press the super key and search for the application. If the application is not installed, it will even suggest applications from software center.
|
||||||
|
|
||||||
|
You want to see the running applications? Press super key and it will show you all the running GUI applications.
|
||||||
|
|
||||||
|
You want to use workspaces? Simply press the super key and you can see the workspaces option on the right-hand side.
|
||||||
|
|
||||||
|
#### 2\. Ctrl+Alt+T: Ubuntu terminal shortcut
|
||||||
|
|
||||||
|
![Ubuntu Terminal Shortcut][2]Use Ctrl+Alt+T to open a terminal
|
||||||
|
|
||||||
|
You want to open a new terminal? The combination of the three keys Ctrl+Alt+T is what you need. This is my favorite keyboard shortcut in Ubuntu. I even mention it in various tutorials on It’s FOSS when it involves opening a terminal.
|
||||||
|
|
||||||
|
#### 3\. Super+L or Ctrl+Alt+L: Locks the screen
|
||||||
|
|
||||||
|
Locking screen when you are not at your desk is one of the most basic security tips. Instead of going to the top right corner and then choosing the lock screen option, you can simply use the Super+L key combination.
|
||||||
|
|
||||||
|
Some systems also use Ctrl+Alt+L keys for locking the screen.
|
||||||
|
|
||||||
|
#### 4\. Super+D or Ctrl+Alt+D: Show desktop
|
||||||
|
|
||||||
|
Pressing Super+D minimizes all running application windows and shows the desktop.
|
||||||
|
|
||||||
|
Pressing Super+D again will restore all the running application windows as they were previously.
|
||||||
|
|
||||||
|
You may also use Ctrl+Alt+D for this purpose.
|
||||||
|
|
||||||
|
#### 5\. Super+A: Shows the application menu
|
||||||
|
|
||||||
|
You can open the application menu in Ubuntu 18.04 GNOME by clicking on the 9 dots on the left bottom of the screen. However, a quicker way would be to use Super+A key combination.
|
||||||
|
|
||||||
|
It will show the application menu where you can see the installed applications on your system and can also search for them.
|
||||||
|
|
||||||
|
You can use the Esc key to move out of the application menu screen.
|
||||||
|
|
||||||
|
#### 6\. Super+Tab or Alt+Tab: Switch between running applications
|
||||||
|
|
||||||
|
If you have more than one application running, you can switch between the applications using the Super+Tab or Alt+Tab key combinations.
|
||||||
|
|
||||||
|
Keep holding the Super key and press Tab, and you’ll see the application switcher appear. While holding the Super key, keep tapping the Tab key to move between applications. When you are at the desired application, release both the Super and Tab keys.
|
||||||
|
|
||||||
|
By default, the application switcher moves from left to right. If you want to move from right to left, use the Super+Shift+Tab key combination.
|
||||||
|
|
||||||
|
You can also use Alt key instead of Super here.
|
||||||
|
|
||||||
|
Tip: If there are multiple instances of an application, you can switch between those instances by using Super+` key combination.
|
||||||
|
|
||||||
|
#### 7\. Super+Arrow keys: Snap windows
|
||||||
|
|
||||||
|
<https://player.vimeo.com/video/289091549>
|
||||||
|
|
||||||
|
This is available in Windows as well. While using an application, press Super and the left arrow key and the application will snap to the left edge of the screen, taking up half of the screen.
|
||||||
|
|
||||||
|
Similarly, pressing Super and right arrow keys will move the application to the right edge.
|
||||||
|
|
||||||
|
Super and the up arrow key will maximize the application window, and Super and the down arrow will bring the application back to its usual size.
|
||||||
|
|
||||||
|
#### 8\. Super+M: Toggle notification tray
|
||||||
|
|
||||||
|
GNOME has a notification tray where you can see notifications for various system and application activities. You also have the calendar here.
|
||||||
|
|
||||||
|
![Notification Tray Ubuntu 18.04 GNOME][3]
|
||||||
|
Notification Tray
|
||||||
|
|
||||||
|
With the Super+M key combination, you can open this notification area. If you press these keys again, the open notification tray will be closed.
|
||||||
|
|
||||||
|
You can also use Super+V for toggling the notification tray.
|
||||||
|
|
||||||
|
#### 9\. Super+Space: Change input keyboard (for multilingual setup)
|
||||||
|
|
||||||
|
If you are multilingual, perhaps you have more than one keyboard layout installed on your system. For example, I use [Hindi on Ubuntu][4] along with English, and I have the Hindi (Devanagari) keyboard installed along with the default English one.
|
||||||
|
|
||||||
|
If you also use a multilingual setup, you can quickly change the input keyboard with the Super+Space shortcut.
|
||||||
|
|
||||||
|
#### 10\. Alt+F2: Run console
|
||||||
|
|
||||||
|
This is for power users. If you want to run a quick command, instead of opening a terminal and running the command there, you can use Alt+F2 to run the console.
|
||||||
|
|
||||||
|
![Alt+F2 to run commands in Ubuntu][5]
|
||||||
|
Console
|
||||||
|
|
||||||
|
This is particularly helpful when you have to use applications that can only be run from the terminal.
|
||||||
|
|
||||||
|
#### 11\. Ctrl+Q: Close an application window
|
||||||
|
|
||||||
|
If you have an application running, you can close the application window using the Ctrl+Q key combination. You can also use Ctrl+W for this purpose.
|
||||||
|
|
||||||
|
Alt+F4 is a more ‘universal’ shortcut for closing an application window.
|
||||||
|
|
||||||
|
It does not work on a few applications such as the default terminal in Ubuntu.
|
||||||
|
|
||||||
|
#### 12\. Ctrl+Alt+arrow: Move between workspaces
|
||||||
|
|
||||||
|
![Workspace switching][6]
|
||||||
|
Workspace switching
|
||||||
|
|
||||||
|
If you are one of the power users who use workspaces, you can use the Ctrl+Alt+Up arrow and Ctrl+Alt+Down arrow keys to switch between the workspaces.
|
||||||
|
|
||||||
|
#### 13\. Ctrl+Alt+Del: Log out
|
||||||
|
|
||||||
|
No! Unlike in Windows, the famous combination of Ctrl+Alt+Del won’t bring up the task manager in Linux (unless you use custom keyboard shortcuts for it).
|
||||||
|
|
||||||
|
![Log Out Ubuntu][7]
|
||||||
|
Log Out
|
||||||
|
|
||||||
|
In the normal GNOME desktop environment, you can bring up the power off menu using the Ctrl+Alt+Del keys, but Ubuntu doesn’t always follow the norms, and hence it opens the logout dialog box when you use Ctrl+Alt+Del in Ubuntu.
|
||||||
|
|
||||||
|
### Use custom keyboard shortcuts in Ubuntu
|
||||||
|
|
||||||
|
You are not limited to the default keyboard shortcuts. You can create your own custom keyboard shortcuts as you like.
|
||||||
|
|
||||||
|
Go to Settings->Devices->Keyboard. You’ll see all the keyboard shortcuts here for your system. Scroll down to the bottom and you’ll see the Custom Shortcuts option.
|
||||||
|
|
||||||
|
![Add custom keyboard shortcut in Ubuntu][8]
|
||||||
|
|
||||||
|
You have to provide an easy-to-recognize name for the shortcut, the command that will be run when the key combination is used, and of course the keys you are going to use for the shortcut.
### What are your favorite keyboard shortcuts in Ubuntu?

There is no end to shortcuts. If you want, you can have a look at all the possible [GNOME shortcuts][9] here and see if there are some more shortcuts you would like to use.

You can, and you should, also learn keyboard shortcuts for the applications you use most of the time. For example, I use Kazam for [screen recording][10], and the keyboard shortcuts help me a lot in pausing and resuming the recording.

What are your favorite Ubuntu shortcuts that you cannot live without?

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-shortcuts/

作者:[Abhishek Prakash][a]

选题:[lujun9972](https://github.com/lujun9972)

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/ubuntu-keyboard-shortcuts.jpeg
[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/ubuntu-terminal-shortcut.jpg
[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/notification-tray-ubuntu-gnome.jpeg
[4]: https://itsfoss.com/type-indian-languages-ubuntu/
[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/console-alt-f2-ubuntu-gnome.jpeg
[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/workspace-switcher-ubuntu.png
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/log-out-ubuntu.jpeg
[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/custom-keyboard-shortcut.jpg
[9]: https://wiki.gnome.org/Design/OS/KeyboardShortcuts
[10]: https://itsfoss.com/best-linux-screen-recorders/
@ -0,0 +1,109 @@
Randomize your MAC address using NetworkManager
======



Today, users run their notebooks everywhere. To stay connected you use the local wifi to access the internet, on the couch at home or in a little cafe with your favorite coffee. But modern hotspots track you based on your MAC address, [an address that is unique per network card][1], and in this way identify your device. Read more below about how to avoid this kind of tracking.

Why is this a problem? Many people use the word “privacy” to talk about this issue. But the concern is not about someone accessing the private contents of your laptop (that’s a separate issue). Instead, it’s about legibility — in simple terms, the ability to be easily counted and tracked. You can and should [read more about legibility][2]. But the bottom line is legibility gives the tracker power over the tracked. For instance, timed WiFi leases at the airport can only be enforced when you’re legible.

Since a fixed MAC address for your laptop is so legible (easily tracked), you should change it often. A random address is a good choice. Since MAC addresses are only used within a local network, a random MAC address is unlikely to cause a [collision.][3]

### Configuring NetworkManager

To apply randomized MAC addresses by default to all WiFi connections, create the following file, /etc/NetworkManager/conf.d/00-macrandomize.conf:

```
[device]
wifi.scan-rand-mac-address=yes

[connection]
wifi.cloned-mac-address=stable
ethernet.cloned-mac-address=stable
connection.stable-id=${CONNECTION}/${BOOT}
```

Afterward, restart NetworkManager:

```
systemctl restart NetworkManager
```
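
To double-check that NetworkManager actually picked up the new drop-in file, you can have it print its merged configuration. This is a sketch, assuming a NetworkManager version that supports the --print-config flag:

```
# Print the effective configuration, including conf.d drop-ins,
# and look for the keys set above
sudo NetworkManager --print-config | grep -E 'cloned-mac-address|scan-rand'
```
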
Set cloned-mac-address to stable to generate the same hashed MAC every time a NetworkManager connection activates, but use a different MAC with each connection. To get a truly random MAC with every activation, use random instead.

The stable setting is useful to get the same IP address from DHCP, or to have a captive portal remember your login status based on the MAC address. With random you may be required to re-authenticate (or click “I agree”) on every connect. You probably want “random” for that airport WiFi. See the NetworkManager [blog post][4] for a more detailed discussion and instructions for using nmcli to configure specific connections from the terminal.

To see your current MAC addresses, use ip link. The MAC follows the word ether.

```
$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
link/ether 52:54:00:5f:d5:4e brd ff:ff:ff:ff:ff:ff
3: wlp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT group default qlen 1000
link/ether 52:54:00:03:23:59 brd ff:ff:ff:ff:ff:ff
```

### When not to randomize your MAC address

Naturally, there are times when you do need to be legible. For instance, on your home network, you may have configured your router to assign your notebook a consistent private IP for port forwarding. Or you might allow only certain MAC addresses to use the WiFi. Your employer probably requires legibility as well.

To change a specific WiFi connection, use nmcli to see your NetworkManager connections and show the current settings:

```
$ nmcli c | grep wifi
Amtrak_WiFi 5f4b9f75-9e41-47f8-8bac-25dae779cd87 wifi --
StaplesHotspot de57940c-32c2-468b-8f96-0a3b9a9b0a5e wifi --
MyHome e8c79829-1848-4563-8e44-466e14a3223d wifi wlp1s0
...
$ nmcli c show 5f4b9f75-9e41-47f8-8bac-25dae779cd87 | grep cloned
802-11-wireless.cloned-mac-address: --
$ nmcli c show e8c79829-1848-4563-8e44-466e14a3223d | grep cloned
802-11-wireless.cloned-mac-address: stable
```

This example uses a fully random MAC for Amtrak (which is currently using the default), and the permanent MAC for MyHome (currently set to stable). The permanent MAC was assigned to your network interface when it was manufactured. Network admins like to use the permanent MAC to see [manufacturer IDs on the wire][5].

Now, make the changes and reconnect the active interface:

```
$ nmcli c modify 5f4b9f75-9e41-47f8-8bac-25dae779cd87 802-11-wireless.cloned-mac-address random
$ nmcli c modify e8c79829-1848-4563-8e44-466e14a3223d 802-11-wireless.cloned-mac-address permanent
$ nmcli c down e8c79829-1848-4563-8e44-466e14a3223d
$ nmcli c up e8c79829-1848-4563-8e44-466e14a3223d
$ ip link
...
```

You can also install NetworkManager-tui to get the nmtui command for nice menus when editing connections.
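
On Fedora, for example, that is a one-liner; the dnf invocation below is an assumption based on the package name mentioned above, so adjust for your distribution's package manager:

```
# Install the text UI for NetworkManager, then launch it
sudo dnf install NetworkManager-tui
nmtui
```
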
### Conclusion

When you walk down the street, you should [stay aware of your surroundings][6] and be on the [alert for danger][7]. In the same way, learn to be aware of your legibility when using public internet resources.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/randomize-mac-address-nm/

作者:[sheogorath][a],[Stuart D Gathman][b]

选题:[lujun9972](https://github.com/lujun9972)

译者:[译者ID](https://github.com/译者ID)

校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/sheogorath/
[b]: https://fedoramagazine.org/author/sdgathman/
[1]: https://en.wikipedia.org/wiki/MAC_address
[2]: https://www.ribbonfarm.com/2010/07/26/a-big-little-idea-called-legibility/
[3]: https://serverfault.com/questions/462178/duplicate-mac-address-on-the-same-lan-possible
[4]: https://blogs.gnome.org/thaller/2016/08/26/mac-address-spoofing-in-networkmanager-1-4-0/
[5]: https://www.wireshark.org/tools/oui-lookup.html
[6]: https://www.isba.org/committees/governmentlawyers/newsletter/2013/06/becomingmoreawareafewtipsonkeepingy
[7]: http://www.selectinternational.com/safety-blog/aware-of-surroundings-can-reduce-safety-incidents
@ -1,258 +0,0 @@
理解 Linux 文件系统:ext4 等文件系统
=======

> 了解 ext4 的历史,包括其与 ext3 和之前的其它文件系统之间的区别。



目前大部分 Linux 发行版都默认采用 ext4 文件系统,正如以前的 Linux 发行版默认使用 ext3、ext2 以及更久之前的 ext。

对于不熟悉 Linux 或文件系统的朋友而言,你可能不清楚 ext4 相对于上一版本 ext3 带来了什么变化。你可能还想知道在一连串关于替代的文件系统例如 btrfs、xfs 和 zfs 不断被发布的情况下,ext4 是否仍然能得到进一步的发展。

在一篇文章中,我们不可能讲述文件系统的所有方面,但我们尝试让你尽快了解 Linux 默认文件系统的发展历史,包括它的产生以及未来发展。我仔细研究了维基百科里的各种关于 ext 文件系统的文章、kernel.org 的 wiki 中关于 ext4 的条目,并结合自己的经验写下这篇文章。

### ext 简史

#### MINIX 文件系统

在有 ext 之前,使用的是 MINIX 文件系统。如果你不熟悉 Linux 历史,那么可以理解为 MINIX 是用于 IBM PC/AT 微型计算机的一个非常小的类 Unix 系统。Andrew Tanenbaum 为了教学的目的而开发了它,并于 1987 年发布了源代码(以印刷版的格式!)。



*IBM 1980 年代中期的 PC/AT,[MBlairMartin](https://commons.wikimedia.org/wiki/File:IBM_PC_AT.jpg),[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en)*

虽然你可以细读 MINIX 的源代码,但实际上它并不是自由开源软件(FOSS)。出版 Tanenbaum 著作的出版商要求你花 69 美元的许可费来运行 MINIX,而这笔费用包含在书籍的费用中。尽管如此,在那时来说这非常便宜,并且 MINIX 的使用得到迅速发展,很快超过了 Tanenbaum 当初使用它来教授操作系统编码的意图。在整个 20 世纪 90 年代,你可以发现 MINIX 的安装在世界各个大学里面非常流行。而此时,年轻的 Linus Torvalds 使用 MINIX 来开发原始 Linux 内核,并于 1991 年首次公布,而后在 1992 年 12 月在 GPL 开源协议下发布。

但是等等,这是一篇以*文件系统*为主题的文章不是吗?是的,MINIX 有自己的文件系统,早期的 Linux 版本依赖于它。跟 MINIX 一样,Linux 的文件系统也如同玩具那般小 —— MINIX 文件系统最多能处理 14 个字符的文件名,并且只能处理 64MB 的存储空间。到了 1991 年,一般的硬盘尺寸已经达到了 40-140MB。很显然,Linux 需要一个更好的文件系统。

#### ext

当 Linus 开发出刚起步的 Linux 内核时,Rémy Card 从事第一代的 ext 文件系统的开发工作。ext 文件系统在 1992 年首次实现并发布 —— 仅在 Linux 首次发布后的一年!—— ext 解决了 MINIX 文件系统中最糟糕的问题。

1992 年的 ext 使用在 Linux 内核中的新虚拟文件系统(VFS)抽象层。与之前的 MINIX 文件系统不同的是,ext 可以处理高达 2GB 存储空间并处理 255 个字符的文件名。

但 ext 并没有长时间占统治地位,主要是由于它原始的时间戳(每个文件仅有一个时间戳,而不是今天我们所熟悉的 inode 变更时间、文件访问时间和文件修改时间三个时间戳)。仅仅一年后,ext2 就替代了它。

#### ext2

Rémy 很快就意识到 ext 的局限性,所以一年后他设计出 ext2 替代它。当 ext 仍然根植于“玩具”操作系统时,ext2 从一开始就被设计为一个商业级文件系统,沿用 BSD 的 Berkeley 文件系统的设计原理。

ext2 提供了 GB 级别的最大文件大小和 TB 级别的文件系统大小,使其在 20 世纪 90 年代的地位牢牢巩固在文件系统大联盟中。很快它被广泛地使用,无论是在 Linux 内核中还是最终在 MINIX 中,且利用第三方模块可以使其应用于 MacOS 和 Windows。

但这里仍然有一些问题需要解决:ext2 文件系统与 20 世纪 90 年代的大多数文件系统一样,如果在将数据写入到磁盘的时候,系统发生崩溃或断电,则容易发生灾难性的数据损坏。随着时间的推移,由于碎片(单个文件存储在多个位置,物理上其分散在旋转的磁盘上),它们也遭受了严重的性能损失。

尽管存在这些问题,但今天 ext2 还是用在某些特殊的情况下 —— 最常见的是,作为便携式 USB 拇指驱动器的文件系统格式。

#### ext3

1998 年,在 ext2 被采用后的 6 年后,Stephen Tweedie 宣布他正在致力于改进 ext2。这成了 ext3,并于 2001 年 11 月在 2.4.15 内核版本中被采用到 Linux 内核主线中。

![Packard Bell 计算机][2]

*20 世纪 90 年代中期的 Packard Bell 计算机,[Spacekid][3],[CC0][4]*

在大部分情况下,ext2 在 Linux 发行版中工作得很好,但像 FAT、FAT32、HFS 和当时的其他文件系统一样 —— 在断电时容易发生灾难性的破坏。如果在将数据写入文件系统时候发生断电,则可能会将其留在所谓*不一致*的状态 —— 事情只完成一半而另一半未完成。这可能导致大量文件丢失或损坏,这些文件与正在保存的文件无关,甚至可能导致整个文件系统无法卸载。

ext3 和 20 世纪 90 年代后期的其他文件系统,如微软的 NTFS,使用*日志*来解决这个问题。日志是磁盘上的一种特殊的分配区域,其写入被存储在事务中;如果该事务完成磁盘写入,则日志中的数据将提交给文件系统自身。如果系统在该操作提交前崩溃,则重新启动的系统识别其为未完成的事务而将其进行回滚,就像从未发生过一样。这意味着正在处理的文件可能依然会丢失,但文件系统*本身*保持一致,且其他所有数据都是安全的。

在使用 ext3 文件系统的 Linux 内核中实现了三个级别的日志记录方式:<ruby>日记<rt>journal</rt></ruby>、<ruby>顺序<rt>ordered</rt></ruby>和<ruby>回写<rt>writeback</rt></ruby>。

* **日记** 是最低风险模式,在将数据和元数据提交给文件系统之前将其写入日志。这可以保证正在写入的文件与整个文件系统的一致性,但其显著降低了性能。
* **顺序** 是大多数 Linux 发行版默认模式;顺序模式将元数据写入日志而直接将数据提交到文件系统。顾名思义,这里的操作顺序是固定的:首先,元数据提交到日志;其次,数据写入文件系统,然后才将日志中关联的元数据更新到文件系统。这确保了在发生崩溃时,那些与未完整写入相关联的元数据仍在日志中,且文件系统可以在回滚日志时清理那些不完整的写入事务。在顺序模式下,系统崩溃可能导致崩溃期间正在主动写入的文件出错,但文件系统本身 —— 以及未被主动写入的文件 —— 确保是安全的。
* **回写** 是第三种模式 —— 也是最不安全的日志模式。在回写模式下,像顺序模式一样,元数据会被记录到日志,但数据不会。与顺序模式不同,元数据和数据都可以以任何有利于获得最佳性能的顺序写入。这可以显著提高性能,但安全性低很多。尽管回写模式仍然保证文件系统本身的安全性,但在崩溃或崩溃之前写入的文件很容易丢失或损坏。

跟之前的 ext2 类似,ext3 使用 32 位内部寻址。这意味着对于有着 4K 块大小的 ext3,在最大规格为 16TiB 的文件系统中可以处理的最大文件大小为 2TiB。

#### ext4

Theodore Ts'o(当时 ext3 的主要开发人员)在 2006 年发布了 ext4,两年后它在 2.6.28 内核版本中被加入到了 Linux 主线。

Ts'o 将 ext4 描述为一个显著扩展 ext3 但仍然依赖于旧技术的临时技术。他预计 ext4 终将会被真正的下一代文件系统所取代。



*Dell Precision 380 工作站,[Lance Fisher](https://commons.wikimedia.org/wiki/File:Dell_Precision_380_Workstation.jpeg),[CC BY-SA 2.0](https://creativecommons.org/licenses/by-sa/2.0/deed.en)*

ext4 在功能上与 ext3 非常相似,但支持大文件系统,提高了对碎片的抵抗力,有更高的性能以及更好的时间戳。

### ext4 vs ext3

ext3 和 ext4 有一些非常明确的差别,在这里集中讨论下。

#### 向后兼容性

ext4 特地设计为尽可能地向后兼容 ext3。这不仅允许 ext3 文件系统原地升级到 ext4,也允许 ext4 驱动程序以 ext3 模式自动挂载 ext3 文件系统,因此使它无需单独维护两个代码库。

#### 大文件系统

ext3 文件系统使用 32 位寻址,这限制它仅支持 2TiB 文件大小和 16TiB 文件系统大小(这是假设在块大小为 4KiB 的情况下,一些 ext3 文件系统使用更小的块大小,因此受到进一步的限制)。

ext4 使用 48 位的内部寻址,理论上可以在文件系统上分配高达 16TiB 大小的文件,其中文件系统大小最高可达 1000000 TiB(1EiB)。在早期 ext4 的实现中,有些用户空间的程序仍然将其限制为最大大小为 16TiB 的文件系统,但截至 2011 年,e2fsprogs 已经直接支持大于 16TiB 大小的 ext4 文件系统。例如,红帽企业版 Linux 在其支持合同中仅支持最高 50TiB 的 ext4 文件系统,并建议 ext4 卷不超过 100TiB。

#### 分配方式改进

ext4 在将存储块写入磁盘之前对存储块的分配方式进行了大量改进,这可以显著提高读写性能。

##### 区段

<ruby>区段<rt>extent</rt></ruby>是一系列连续的物理块(最多达 128 MiB,假设块大小为 4KiB),可以一次性保留和寻址。使用区段可以减少给定文件所需的 inode 数量,并显著减少碎片、提高写入大文件时的性能。

##### 多块分配

ext3 为每一个新分配的块调用一次块分配器。当多个写入同时进行时,这很容易导致严重的碎片。然而,ext4 使用延迟分配,这允许它合并写入并更好地决定如何为尚未提交的写入分配块。

##### 持久的预分配

在为文件预分配磁盘空间时,大部分文件系统必须在创建时将零写入该文件的块中。ext4 允许替代使用 `fallocate()`,它保证了空间的可用性(并试图为它找到连续的空间),而不需要先写入它。这显著提高了流式写入和数据库等应用程序写入数据(以及将来读取这些数据)的性能。
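
举个简单的例子(这是补充的示意,并非原文内容;fallocate 命令来自 util-linux,文件名只是演示用):

```
# 预分配一个 1 GiB 的文件:空间立即保留,但无需先写入零
fallocate -l 1G /tmp/prealloc.img

# 对比逻辑大小和实际磁盘占用,两者都是 1 GiB
ls -lh /tmp/prealloc.img
du -h /tmp/prealloc.img
```
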
##### 延迟分配

这是一个耐人寻味而有争议性的功能。延迟分配允许 ext4 等待分配将写入数据的实际块,直到它准备好将数据提交到磁盘。(相比之下,即使数据仍然在往写入缓存中流入,ext3 也会立即为其分配块。)

当缓存中的数据累积时,延迟分配块允许文件系统对如何分配块做出更好的选择,降低碎片(写入,以及稍后的读取)并显著提升性能。然而不幸的是,它*增加*了还没有专门调用 `fsync()` 方法(当程序员想确保数据完全刷新到磁盘时)的程序的数据丢失的可能性。

假设一个程序完全重写了一个文件:

```
fd=open("file", O_TRUNC); write(fd, data); close(fd);
```

使用旧的文件系统,`close(fd);` 足以保证 `file` 中的内容刷新到磁盘。即使严格来说,写不是事务性的,但如果文件关闭后发生崩溃,则丢失数据的风险很小。

如果写入不成功(由于程序上的错误、磁盘上的错误、断电等),文件的原始版本和较新版本都可能丢失数据或损坏。如果其他进程在写入文件时访问文件,则会看到损坏的版本。如果其他进程打开了文件并且不希望其内容发生更改 —— 例如,映射到多个正在运行的程序的共享库 —— 则这些进程可能会崩溃。

为了避免这些问题,一些程序员完全避免使用 `O_TRUNC`。相反,他们可能会写入一个新文件,关闭它,然后将其重命名为旧文件名:

```
fd=open("newfile"); write(fd, data); close(fd); rename("newfile", "file");
```

在*没有*延迟分配的文件系统下,这足以避免上面列出的潜在的损坏和崩溃问题:因为 `rename()` 是原子操作,所以它不会被崩溃中断;并且运行的程序将继续引用旧的文件 —— 旧的 `file` 此时成为未链接的版本,只要还有打开的文件句柄引用它,它就会继续存在。但是因为 ext4 的延迟分配会导致写入被延迟和重新排序,`rename("newfile","file")` 可能在 `newfile` 的内容实际写入磁盘之前执行,这就重新带来了并行进程读到 `file` 损坏版本的问题。

为了缓解这种情况,Linux 内核(自版本 2.6.30)尝试检测这些常见代码情况并强制立即分配。这会减少但不能防止数据丢失的可能性 —— 并且它对新文件没有任何帮助。如果你是一位开发人员,请注意:保证数据立即写入磁盘的唯一方法是正确调用 `fsync()`。
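
沿用上文代码片段的风格,一个更安全的“写新文件再改名”的写法大致如下(仅为补充示意,省略了错误处理;严格来说,要让重命名本身也持久化,还需要对所在目录再调用一次 `fsync()`):

```
fd=open("newfile"); write(fd, data); fsync(fd); close(fd); rename("newfile", "file");
```

在 `close()` 之前调用 `fsync(fd)` 确保新文件的数据已经落盘,然后才执行原子的 `rename()`,这样并行的读者要么看到完整的旧文件,要么看到完整的新文件。
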
#### 无限制的子目录

ext3 仅限于 32000 个子目录;ext4 允许无限数量的子目录。从 2.6.23 内核版本开始,ext4 使用 HTree 索引来减少大量子目录带来的性能损失。

#### 日志校验

ext3 没有对日志进行校验,这给处于内核直接控制之外的磁盘或带有自己的缓存的控制器设备带来了问题。如果控制器或具有自己的缓存的磁盘脱离了写入顺序,则可能会破坏 ext3 的日记事务顺序,从而可能破坏在崩溃期间(或之前一段时间)写入的文件。

理论上,这个问题可以使用写入<ruby>障碍<rt>barrier</rt></ruby>解决 —— 在挂载文件系统时,你在挂载选项设置 `barrier=1`,然后设备就会忠实地执行 `fsync` 一直向下到底层硬件。但在实践中可以发现,存储设备和控制器经常不遵守写入障碍 —— 以此来提高性能(以及与竞争对手对比的性能基准成绩),但这也带来了本应被防止的数据损坏的可能性。

对日志进行校验,可以让文件系统在崩溃后第一次挂载时意识到日志中的某些条目是无效或无序的,从而避免了回滚不完整或无序的日志条目而进一步损坏文件系统的错误 —— 即使部分存储设备弄虚作假或不遵守写入障碍。

#### 快速文件系统检查

在 ext3 下,在 `fsck` 被调用时会检查整个文件系统 —— 包括已删除或空文件。相比之下,ext4 标记了 inode 表中未分配的块和扇区,从而允许 `fsck` 完全跳过它们。这大大减少了在大多数文件系统上运行 `fsck` 的时间,它实现于内核 2.6.24。

#### 改进的时间戳

ext3 提供粒度为一秒的时间戳。虽然足以满足大多数用途,但任务关键型应用程序经常需要更严格的时间控制。ext4 通过提供纳秒级的时间戳,使其可用于那些企业、科学以及任务关键型的应用程序。

ext3 文件系统也没有提供足够的位来存储 2038 年 1 月 18 日以后的日期。ext4 在这里增加了两个位,将 [Unix 纪元][5] 扩展了 408 年。如果你在公元 2446 年读到这篇文章,你很有可能已经转移到一个更好的文件系统 —— 如果你还在测量自 1970 年 1 月 1 日 00:00(UTC)以来的时间,这会让我死后得以安眠。

#### 在线碎片整理

ext2 和 ext3 都不直接支持在线碎片整理 —— 即在挂载时对文件系统进行碎片整理。ext2 自带了一个实用程序 `e2defrag`,顾名思义 —— 它需要在文件系统未挂载时脱机运行。(显然,这对于根文件系统来说非常有问题。)在 ext3 中的情况甚至更糟糕 —— 虽然 ext3 比 ext2 更不容易受到严重碎片的影响,但对 ext3 文件系统运行 `e2defrag` 可能会导致灾难性损坏和数据丢失。

尽管 ext3 最初被认为“不受碎片影响”,但像 BitTorrent 这样对同一文件采用大规模并行写入的过程清楚地表明情况并非完全如此。一些用户空间的手段和解决方法,例如 [Shake][6],以这样或那样的方式解决了这个问题 —— 但它们比真正的、文件系统感知的、内核级的碎片整理过程更慢,并且在各方面都不太令人满意。

ext4 通过 `e4defrag` 解决了这个问题,它是一个在线的、内核模式的、文件系统感知的、块和区段级别的碎片整理实用程序。
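
一个简单的用法示意(这是补充内容,并非原文;e4defrag 来自 e2fsprogs,路径只是示例):

```
# 先用 -c 评估碎片化程度(只报告,不做任何修改)
sudo e4defrag -c /home

# 对挂载点(也可以是单个文件或目录)执行在线碎片整理
sudo e4defrag /home
```
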
### 正在进行的 ext4 开发

ext4,正如 Monty Python 中瘟疫感染者曾经说过的那样,“我还没死呢!”虽然它的[主要开发人员][7]认为它只是通往真正[下一代文件系统][8]的权宜之计,但是在一段时间内,不会有任何(由于技术或许可问题)准备好部署为根文件系统的候选者。

在未来的 ext4 版本中仍然有一些关键功能要开发,包括元数据校验和、一流的配额支持和大分配块。

#### 元数据校验和

由于 ext4 具有冗余超级块,因此为文件系统校验其中的元数据提供了一种方法,可以自行确定主超级块是否已损坏并需要使用备用块。可以在没有校验和的情况下从损坏的超级块恢复 —— 但是用户首先需要意识到它已损坏,然后尝试使用备用块手动挂载文件系统。由于在某些情况下,使用损坏的主超级块以读写方式挂载文件系统可能会造成进一步的损坏,即使是经验丰富的用户也无法避免,这也不是一个完美的解决方案!

与 btrfs 或 zfs 等下一代文件系统提供的极其强大的每块校验和相比,ext4 的元数据校验和的功能非常弱。但它总比没有好。虽然“校验**所有的东西**”听起来很简单 —— 事实上,将校验和与文件系统连接到一起有一些重大的挑战;请参阅[设计文档][9]了解详细信息。
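
作为补充的示意(基于 e2fsprogs 的常见用法,并非原文内容,设备名只是示例):可以用 dumpe2fs 查看现有文件系统是否启用了 metadata_csum 特性,新建文件系统时则可以让 mkfs.ext4 启用它:

```
# 查看已启用的文件系统特性,留意其中是否有 metadata_csum
sudo dumpe2fs -h /dev/sda1 | grep -i features

# 创建启用元数据校验和的新文件系统(会清空该设备!设备名仅为示例)
sudo mkfs.ext4 -O metadata_csum /dev/sdb1
```
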
#### 一流的配额支持

等等,配额?!从 ext2 出现的那天开始我们就有了这些!是的,但它们一直都是事后添加的东西,而且实现得并不好。这里可能不值得详细介绍,但[设计文档][10]列出了配额将从用户空间移动到内核中的方式,并且能够更加正确和高效地执行。

#### 大分配块

随着时间的推移,那些讨厌的存储系统不断变得越来越大。由于一些固态硬盘已经使用 8K 硬件块大小,因此 ext4 当前 4K 块大小的限制越来越成为瓶颈。较大的存储块可以显著减少碎片并提高性能,代价是增加“松弛”空间(当你只需要块的一部分来存储文件或文件的最后一块时留下的空间)。

你可以在[设计文档][11]中查看详细说明。

### ext4 的实际限制

ext4 是一个健壮、稳定的文件系统。如今大多数人都应该在用它作为根文件系统,但它无法处理所有需求。让我们简单地谈谈你不应该期待的一些事情 —— 现在或可能在未来:

虽然 ext4 可以处理高达 1 EiB 大小(相当于 1,000,000 TiB)的数据,但你真的、*真的*不应该尝试这样做。除了能够记住更多块的地址之外,还存在规模上的问题。并且现在 ext4 不会处理(并且可能永远不会)超过 50-100 TiB 的数据。

ext4 也不足以保证数据的完整性。尽管日志记录在 ext3 时代是一个重大进步,但它并未涵盖数据损坏的许多常见原因。如果数据已经在磁盘上被[破坏][12] —— 由于故障硬件、宇宙射线的影响(是的,真的),或者只是数据随时间衰减 —— ext4 无法检测或修复这种损坏。

基于上面两点,ext4 只是一个纯*文件系统*,而不是存储卷管理器。这意味着,即使你的多个磁盘上有奇偶校验或冗余数据,理论上可以用来恢复损坏的数据,ext4 也无从知晓,更无法为你所用。虽然理论上可以在不同的层中分离文件系统和存储卷管理系统而不会丢失自动损坏检测和修复功能,但这不是当前存储系统的设计方式,并且它将给新设计带来重大挑战。

### 备用文件系统

在我们开始之前,提醒一句:要非常小心那些没有作为主线内核的一部分而被内置和直接支持的备用文件系统!

即使一个文件系统是*安全的*,如果在内核升级期间出现问题,把它用作根文件系统也是非常可怕的。如果你不是非常熟悉通过 chroot 从替代介质引导、操作内核模块、grub 配置和 DKMS……那就不要在一个重要的系统上对根文件系统冒险。

当然,可能有充分的理由使用你的发行版不直接支持的文件系统 —— 但如果你这样做,我强烈建议你在系统启动并可用后再挂载它。(例如,你可能有一个 ext4 根文件系统,但是将大部分数据存储在 zfs 或 btrfs 池中。)

#### XFS

在 Linux 中,XFS 是非 ext 文件系统中最接近主线地位的一个。它是一个 64 位的日志文件系统,自 2001 年以来内置于 Linux 内核中,为大型文件系统和高度并发(即大量进程同时写入文件系统)提供了高性能。

从 RHEL 7 开始,XFS 成为 Red Hat Enterprise Linux 的默认文件系统。对于家庭或小型企业用户来说,它仍然有一些缺点 —— 最值得注意的是,重新调整现有 XFS 文件系统的大小是一件非常痛苦的事情,以至于通常创建另一个文件系统并复制数据更有意义。

虽然 XFS 是稳定且高性能的,但它和 ext4 之间在具体的最终用途上没有足够大的差异,因此除非它解决了 ext4 的某个特定问题(例如大于 50 TiB 容量的文件系统),否则并不值得推荐在它不是默认文件系统的地方(例如 RHEL 7 之外)使用它。

XFS 在任何方面都不是 ZFS、btrfs 甚至 WAFL(一个专有的 SAN 文件系统)那样的“下一代”文件系统。就像 ext4 一样,它应该被视为通往更好方式的权宜之计。

#### ZFS

ZFS 由 Sun Microsystems 开发,以 zettabyte 命名 —— 相当于 1 万亿 GB —— 因为它理论上可以寻址那么大的存储系统。

作为真正的下一代文件系统,ZFS 提供卷管理(能够在单个文件系统中处理多个单独的存储设备)、块级加密校验和(允许以极高的准确率检测数据损坏)、[自动损坏修复][12](其中冗余或奇偶校验存储可用)、[快速异步增量复制][13]、内联压缩等,[以及更多][14]。

从 Linux 用户的角度来看,ZFS 的最大问题是许可证问题。ZFS 的许可证是 CDDL 许可证,这是一种与 GPL 冲突的半宽松许可证。关于在 Linux 内核中使用 ZFS 的意义存在很多争议,其争议范围从“这是 GPL 违规”到“这是 CDDL 违规”再到“它完全没问题,只是还没有在法庭上进行过检验”。最值得注意的是,自 2016 年以来 Canonical 已将 ZFS 代码内联在其默认内核中,而且目前尚无法律挑战。

此时,即使我是一个非常狂热的 ZFS 用户,我也不建议将 ZFS 作为 Linux 的根文件系统。如果你想在 Linux 上利用 ZFS 的优势,可以用 ext4 设置一个小的根文件系统,然后将 ZFS 用在你剩余的存储上,把数据、应用程序以及你喜欢的东西放在它上面 —— 但把根文件系统保留在 ext4 上,直到你的发行版明确支持 ZFS 根文件系统。

#### BTRFS

btrfs 是 B-Tree Filesystem 的简称,通常发音为 “butter”,由 Chris Mason 于 2007 年在 Oracle 任职期间发布。btrfs 的目标与 ZFS 大致相同,提供多设备管理、每块校验、异步复制、内联压缩等,[还有更多][8]。

截至 2018 年,btrfs 相当稳定,可用作标准的单磁盘文件系统,但可能还不应该依赖它作为卷管理器。与许多常见用例中的 ext4、XFS 或 ZFS 相比,它存在严重的性能问题,而其下一代功能 —— 复制、多磁盘拓扑和快照管理 —— 问题可能非常多,其结果可能从灾难性的性能降低到实际数据的丢失都有。

btrfs 的维护状态也是有争议的;SUSE Enterprise Linux 在 2015 年采用它作为默认文件系统,而 Red Hat 于 2017 年宣布从 RHEL 7.4 开始不再支持 btrfs。可能值得注意的是,那些产品支持的 btrfs 部署将其用作单磁盘文件系统,而不是像 ZFS 那样的多磁盘卷管理器 —— 甚至 Synology 在它的存储设备上使用 btrfs 时,也是将它分层在传统 Linux 内核 RAID(mdraid)之上来管理磁盘的。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/4/ext4-filesystem

作者:[Jim Salter][a]
译者:[HardworkFish](https://github.com/HardworkFish)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jim-salter
[1]:https://opensource.com/file/391546
[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/packard_bell_pc.jpg?itok=VI8dzcwp (Packard Bell computer)
[3]:https://commons.wikimedia.org/wiki/File:Old_packard_bell_pc.jpg
[4]:https://creativecommons.org/publicdomain/zero/1.0/deed.en
[5]:https://en.wikipedia.org/wiki/Unix_time
[6]:https://vleu.net/shake/
[7]:http://www.linux-mag.com/id/7272/
[8]:https://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/
[9]:https://ext4.wiki.kernel.org/index.php/Ext4_Metadata_Checksums
[10]:https://ext4.wiki.kernel.org/index.php/Design_For_1st_Class_Quota_in_Ext4
[11]:https://ext4.wiki.kernel.org/index.php/Design_for_Large_Allocation_Blocks
[12]:https://en.wikipedia.org/wiki/Data_degradation#Visual_example_of_data_degradation
[13]:https://arstechnica.com/information-technology/2015/12/rsync-net-zfs-replication-to-the-cloud-is-finally-here-and-its-fast/
[14]:https://arstechnica.com/information-technology/2014/02/ars-walkthrough-using-the-zfs-next-gen-filesystem-on-linux/
@ -1,322 +0,0 @@
Makefile 及其工作原理
======



当你需要在一些源文件改变后运行或更新一个任务时,通常会用到 make 工具。make 工具需要读取 Makefile(或 makefile)文件,在该文件中定义了一系列需要执行的任务。make 可以用来将源代码编译为可执行程序。大部分开源项目会使用 make 来实现二进制文件的编译,然后使用 make install 命令来执行安装。

本文将通过一些基础和进阶的示例来展示 make 和 Makefile 的使用方法。在开始前,请确保你的系统中安装了 make。

### 基础示例

依然从打印 “Hello World” 开始。首先创建一个名字为 myproject 的目录,目录下新建 Makefile 文件,文件内容为:

```
say_hello:
        echo "Hello World"
```

在 myproject 目录下执行 make,会有如下输出:

```
$ make
echo "Hello World"
Hello World
```

在上面的例子中,“say_hello” 类似于其他编程语言中的函数名,在此被称为 target(目标)。在 target 之后的是预置条件和依赖。为了简单起见,我们在这个示例中没有定义预置条件。`echo "Hello World"` 命令被称为 recipe。recipe 基于预置条件来实现 target。target、预置条件和 recipe 共同构成一个规则。

总结一下,一个典型的规则的语法为:

```
target: 预置条件
<TAB> recipe
```

在示例中,target 是一个基于预置条件(源代码)生成的二进制文件。另外,在另一条规则中,这个预置条件也可以是依赖其他预置条件的 target:

```
final_target: sub_target final_target.c
        Recipe_to_create_final_target

sub_target: sub_target.c
        Recipe_to_create_sub_target
```

target 并不要求是一个文件,也可以只是方便 recipe 使用的名字。我们称之为伪 target。

再回到上面的示例中,当 make 被执行时,整条指令 `echo "Hello World"` 都被打印出来,之后才是真正的执行结果。如果不希望指令本身被打印出来,需要在 echo 前添加 `@`:

```
say_hello:
        @echo "Hello World"
```

重新运行 make,将会只有如下输出:

```
$ make
Hello World
```

接下来在 Makefile 中添加如下伪 target:generate 和 clean:

```
say_hello:
        @echo "Hello World"

generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt

clean:
        @echo "Cleaning up..."
        rm *.txt
```

随后当我们运行 make 时,只有 say_hello 这个 target 被执行。这是因为 Makefile 中的默认 target 为第一个 target。通常情况下只有默认的 target 会被调用,大多数项目会将 “all” 作为默认 target,由 “all” 负责调用其他的 target。我们可以通过 .DEFAULT_GOAL 这个特殊的伪 target 来覆盖掉默认的行为。

在 Makefile 文件开头增加 .DEFAULT_GOAL:

```
.DEFAULT_GOAL := generate
```

make 会将 generate 作为默认 target:

```
$ make
Creating empty text files...
touch file-{1..10}.txt
```

顾名思义,.DEFAULT_GOAL 伪 target 仅能定义一个 target。这就是为什么很多项目仍然会有 all 这个 target,这样可以一次运行多个 target。

下面删除掉 .DEFAULT_GOAL,增加 all target:

```
all: say_hello generate

say_hello:
        @echo "Hello World"

generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt

clean:
        @echo "Cleaning up..."
        rm *.txt
```

运行之前,我们再增加一些特殊的伪 target。.PHONY 用来定义那些不是文件的 target。make 会无条件地执行这些伪 target 下的 recipe,而不去检查是否存在同名文件或其最后修改日期。完整的 Makefile 如下:

```
.PHONY: all say_hello generate clean

all: say_hello generate

say_hello:
        @echo "Hello World"

generate:
        @echo "Creating empty text files..."
        touch file-{1..10}.txt

clean:
        @echo "Cleaning up..."
        rm *.txt
```

make 命令会调用 say_hello 和 generate:

```
$ make
Hello World
Creating empty text files...
touch file-{1..10}.txt
```

clean 不应该被放入 all 中,或者被放作第一个 target。clean 应当在需要清理时手动调用,调用方法为 make clean:

```
$ make clean
Cleaning up...
rm *.txt
```

现在你应该已经对 Makefile 有了基础的了解,接下来我们看一些进阶的示例。

### 进阶示例

#### 变量

在之前的示例中,大部分 target 和预置条件是已经固定了的,但在实际项目中,它们通常用变量和模式来代替。

定义变量最简单的方式是使用 `=` 操作符。例如,将命令 gcc 赋值给变量 CC:

```
CC = gcc
```

这被称为递归扩展变量,可以用于如下所示的规则中:

```
hello: hello.c
        ${CC} hello.c -o hello
```

你可能已经想到了,recipe 在传递给终端时将会展开为:

```
gcc hello.c -o hello
```

${CC} 和 $(CC) 都能对 gcc 进行引用。但如果一个变量尝试将它本身赋值给自己,将会造成死循环。让我们验证一下:

```
CC = gcc
CC = ${CC}

all:
        @echo ${CC}
```

此时运行 make 会导致:

```
$ make
Makefile:8: *** Recursive variable 'CC' references itself (eventually). Stop.
```

为了避免这种情况发生,可以使用 `:=` 操作符(这被称为简单扩展变量)。以下代码不会造成上述问题:

```
CC := gcc
CC := ${CC}

all:
        @echo ${CC}
```

#### 模式和函数

下面的 Makefile 使用了变量、模式和函数来实现所有 C 代码的编译。我们来逐行分析下:

```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects

.PHONY = all clean

CC = gcc                        # compiler to use

LINKERFLAG = -lm

SRCS := $(wildcard *.c)
BINS := $(SRCS:%.c=%)

all: ${BINS}

%: %.o
        @echo "Checking.."
        ${CC} ${LINKERFLAG} $< -o $@

%.o: %.c
        @echo "Creating object.."
        ${CC} -c $<

clean:
        @echo "Cleaning up..."
        rm -rvf *.o ${BINS}
```

* 以 `#` 开头的行是注释。
* `.PHONY = all clean` 行定义了 “all” 和 “clean” 两个伪 target。
* 变量 `LINKERFLAG` 定义了 recipe 中 gcc 命令需要用到的链接参数。
* `SRCS := $(wildcard *.c)`:`$(wildcard pattern)` 是一个与文件名相关的函数。在本示例中,所有以 “.c” 为后缀的文件会被存入 `SRCS` 变量。
* `BINS := $(SRCS:%.c=%)`:这被称为替换引用。本例中,如果 `SRCS` 的值为 `'foo.c bar.c'`,则 `BINS` 的值为 `'foo bar'`(展开效果可参见这个列表后面的示意)。
* `all: ${BINS}` 行:伪 target “all” 以变量 `${BINS}` 中的所有值作为子 target。
* 规则:

```
%: %.o
        @echo "Checking.."
        ${CC} ${LINKERFLAG} $< -o $@
```

下面通过一个示例来理解这条规则。假定 “foo” 是变量 `${BINS}` 中的一个值。“%” 会匹配到 “foo”(“%” 可以匹配任意一个 target)。下面是规则展开后的内容:

```
foo: foo.o
        @echo "Checking.."
        gcc -lm foo.o -o foo
```

如上所示,“%” 被 “foo” 替换掉了,“$<” 被 “foo.o” 替换掉了。“$<” 用于匹配预置条件,而 `$@` 匹配 target。对 `${BINS}` 中的每个值,这条规则都会被调用一遍。
* 规则:

```
%.o: %.c
        @echo "Creating object.."
        ${CC} -c $<
```

之前规则中的每个预置条件在这条规则中都会被作为一个 target。下面是展开后的内容:

```
foo.o: foo.c
        @echo "Creating object.."
        gcc -c foo.c
```

* 最后,在 target “clean” 中,所有的二进制文件和目标文件将被删除。
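
如果想直观地看到 wildcard 和替换引用展开后的值,可以借助 GNU make 的 `$(info)` 函数做个小实验(以下 Makefile 只是补充的示意,假设目录下有 foo.c 和 bar.c 两个源文件;注意 recipe 行前必须使用制表符):

```
# 演示用 Makefile:在读取阶段打印变量展开的结果
SRCS := $(wildcard *.c)
BINS := $(SRCS:%.c=%)

# 运行 make 时会先输出类似:SRCS = foo.c bar.c / BINS = foo bar
$(info SRCS = ${SRCS})
$(info BINS = ${BINS})

all: ${BINS}
```
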
下面是重写后的 Makefile,假设它被放置在一个只有 foo.c 文件的目录下,展开后等价于:

```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects

.PHONY = all clean

CC = gcc                        # compiler to use

LINKERFLAG = -lm

SRCS := foo.c
BINS := foo

all: foo

foo: foo.o
        @echo "Checking.."
        gcc -lm foo.o -o foo

foo.o: foo.c
        @echo "Creating object.."
        gcc -c foo.c

clean:
        @echo "Cleaning up..."
        rm -rvf foo.o foo
```

关于 Makefile 的更多信息,[GNU Make 手册][1]提供了更完整的说明和实例。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/what-how-makefile

作者:[Sachin Patil][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[Zafiry](https://github.com/zafiry)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/psachin
[1]:https://www.gnu.org/software/make/manual/make.pdf
@ -1,13 +1,18 @@
8 个用于有效进程管理的 Linux 命令
======



一般来说,应用程序进程的生命周期有三种主要状态:启动、运行和停止。如果我们想成为称职的管理员,每个状态都可以而且应该得到认真的管理。这八个命令可用于管理进程的整个生命周期。

### 启动进程

启动进程的最简单方法是在命令行中键入其名称,然后按回车键。如果要启动 Nginx web 服务器,请键入 **nginx**。也许您只是想看看其版本:

```
alan@workstation:~$ nginx
@ -15,9 +20,11 @@ alan@workstation:~$ nginx -v
nginx version: nginx/1.14.0
```

### 查看您的可执行路径

以上启动进程的演示是假设可执行文件位于您的可执行路径中。理解这条路径是可靠地启动和管理进程的关键。管理员通常会根据他们的用途定制这条路径。您可以使用 **echo $PATH** 查看您的可执行路径:

```
alan@workstation:~$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
```
@ -25,26 +32,36 @@ alan@workstation:~$ echo $PATH

#### WHICH

使用 which 命令可以查看某个可执行文件的完整路径:

```
alan@workstation:~$ which nginx
/opt/nginx/bin/nginx
```

我将使用流行的 web 服务器软件 Nginx 作为我的例子。假设已经安装了 Nginx。如果执行 **which nginx** 的命令什么也不返回,那就是没有找到 Nginx,因为 which 只搜索您定义的可执行路径。有三种方法可以补救一个进程不能简单地通过名字启动的情况。第一种是键入完整路径。虽然我并不情愿输入这么长的路径,您愿意吗?

```
alan@workstation:~$ /home/alan/web/prod/nginx/sbin/nginx -v
nginx version: nginx/1.14.0
```

第二种解决方案是将应用程序安装在可执行路径中的某个目录里。然而,这有可能办不到,特别是如果您没有 root 权限。

第三种解决方案是更新您的可执行路径环境变量,把您想使用的特定应用程序的安装目录包含进来。这个解决方案与所用的 shell 相关。例如,Bash 用户需要编辑他们 .bashrc 文件中的 PATH= 行:

```
PATH="$HOME/web/prod/nginx/sbin:$PATH"
```
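
编辑完 .bashrc 之后,别忘了让改动在当前 shell 中生效(下面只是常见做法的补充示意,路径沿用上文):

```
alan@workstation:~$ source ~/.bashrc
alan@workstation:~$ which nginx
/home/alan/web/prod/nginx/sbin/nginx
```
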
现在,重复您的 echo 和 which 命令,或者尝试检查版本。容易多了!

```
alan@workstation:~$ echo $PATH
/home/alan/web/prod/nginx/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
@ -56,24 +73,27 @@ alan@workstation:~$ nginx -v
nginx version: nginx/1.14.0
```

### 保持进程运行

#### NOHUP

注销或关闭终端时,进程可能不会继续运行。这种特殊情况可以通过把 nohup 命令放在要运行的命令前面来避免。此外,在末尾附加一个 & 符号会把进程放到后台运行,并允许您继续使用终端。例如,假设您想运行 myprogram.sh:

```
nohup myprogram.sh &
```

nohup 的一个好处是会返回运行进程的 PID。接下来我会更多地谈论 PID。
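
一个小的补充示意(沿用上文的 myprogram.sh,输出中的 PID 只是举例):进程放入后台后,可以用 shell 变量 $! 取得它的 PID,nohup 的默认输出则写入 nohup.out:

```
alan@workstation:~$ nohup myprogram.sh &
[1] 21005
alan@workstation:~$ echo $!
21005
alan@workstation:~$ tail nohup.out
```
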
### 管理正在运行的进程

每个进程都有一个唯一的进程标识号(PID)。这个数字就是我们用来管理进程的依据。我们还可以使用进程名称,我将在下面演示。有几个命令可以检查正在运行的进程的状态,让我们快速看看这些命令。

#### PS

最常见的是 ps 命令。ps 的默认输出是当前终端中运行的进程的简单列表。如下所示,第一列就是 PID:

```
alan@workstation:~$ ps
PID TTY TIME CMD
@ -81,7 +101,8 @@ PID TTY TIME CMD
24148 pts/0 00:00:00 ps
```

我想看看我之前启动的 Nginx 进程。为此,我告诉 ps 显示每一个正在运行的进程(**-e**)和完整的列表(**-f**):

```
alan@workstation:~$ ps -ef
UID PID PPID C STIME TTY TIME CMD
@ -107,25 +128,29 @@ alan 20536 20526 0 10:39 pts/0 00:00:00 pager
alan 20564 20496 0 10:40 pts/1 00:00:00 bash
```

您可以在上面 ps 命令的输出中看到 Nginx 进程。这个命令显示了将近 300 行,但是我在这个例子中缩短了它。可以想象,试图处理 300 行进程信息会有点混乱。我们可以把这个输出用管道传给 grep,过滤出仅包含 nginx 的行:

```
alan@workstation:~$ ps -ef |grep nginx
alan 20520 1454 0 10:39 ? 00:00:00 nginx: master process nginx
alan 20521 20520 0 10:39 ? 00:00:00 nginx: worker process
```

这样确实好多了。我们可以很快看到,Nginx 的 PID 是 20520 和 20521。

#### PGREP

pgrep 命令的出现免去了单独调用 grep 的麻烦,进一步简化了操作:

```
alan@workstation:~$ pgrep nginx
20520
20521
```

假设您在一个托管环境中,多个用户正在运行几个不同的 Nginx 实例。您可以使用 **-u** 选项把其他人的进程排除在输出之外:

```
alan@workstation:~$ pgrep -u alan nginx
20520
@ -134,7 +159,8 @@ alan@workstation:~$ pgrep -u alan nginx
```

#### PIDOF

另一个好用的命令是 pidof。即使有另一个同名进程正在运行,此命令也能检查出特定二进制文件的 PID。为了建立一个例子,我把我的 Nginx 复制到了第二个目录,并用相应的前缀设置启动它。在现实生活中,这个实例可能位于不同的位置,例如由不同用户拥有的目录。如果我运行两个 Nginx 实例,则 **ps -ef** 的输出会显示它们的所有进程:

```
alan@workstation:~$ ps -ef |grep nginx
alan 20881 1454 0 11:18 ? 00:00:00 nginx: master process ./nginx -p /home/alan/web/prod/nginxsec
@ -143,7 +169,7 @@ alan 20895 1454 0 11:19 ? 00:00:00 nginx: master process ng
alan 20896 20895 0 11:19 ? 00:00:00 nginx: worker process
```

使用 grep 或 pgrep 将显示 PID 数字,但我们可能无法辨别哪个实例是哪个:

```
alan@workstation:~$ pgrep nginx
20881
@ -152,7 +179,8 @@ alan@workstation:~$ pgrep nginx
20896
```

pidof 命令可用于确定每个特定 Nginx 实例的 PID:

```
alan@workstation:~$ pidof /home/alan/web/prod/nginxsec/sbin/nginx
20882 20881
@ -163,7 +191,7 @@ alan@workstation:~$ pidof /home/alan/web/prod/nginx/sbin/nginx

#### TOP

top 命令已经存在很长时间了,对于查看运行中进程的细节和快速识别内存消耗过大等问题非常有用。其默认视图如下所示:

```
top - 11:56:28 up 1 day, 13:37, 1 user, load average: 0.09, 0.04, 0.03
Tasks: 292 total, 3 running, 225 sleeping, 0 stopped, 0 zombie
@ -182,7 +210,7 @@ KiB Swap: 0 total, 0 free, 0 used. 14176540 ava
7 root 20 0 0 0 0 S 0.0 0.0 0:00.08 ksoftirqd/0
```

可以通过键入字母 **s** 和您想要的更新间隔秒数来更改更新间隔。为了更容易监控我们示例中的 Nginx 进程,我们可以使用 **-p** 选项并传入 PID 来调用 top。这样输出要干净得多:

```
alan@workstation:~$ top -p20881 -p20882 -p20895 -p20896
@ -198,13 +226,17 @@ KiB Swap: 0 total, 0 free, 0 used. 14177928 ava
20896 alan 20 0 12460 1628 912 S 0.0 0.0 0:00.00 nginx
```

在管理进程,特别是终止进程时,正确确定 PID 是非常重要的。此外,如果以这种方式使用 top,每当这些进程中的一个停止或一个新进程启动时,都需要把新的 PID 告知 top。

### 终止进程

#### KILL

Interestingly, there is no stop command. In Linux, there is the kill command. Kill is used to send a signal to a process. The most commonly used signal is "terminate" (SIGTERM) or "kill" (SIGKILL). However, there are many more. Below are some examples. The full list can be shown with **kill -L**.

有趣的是,并没有 stop 命令。在 Linux 中,有的是 kill 命令。kill 用于向进程发送信号。最常用的信号是“终止”(SIGTERM)或“杀死”(SIGKILL)。然而,还有更多的信号。下面是一些例子,完整的列表可以用 **kill -L** 显示:

```
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1
@ -213,6 +245,10 @@ Interestingly, there is no stop command. In Linux, there is the kill command. Ki
```

Notice signal number nine is SIGKILL. Usually, we issue a command such as **kill -9 20896**. The default signal is 15, which is SIGTERM. Keep in mind that many applications have their own method for stopping. Nginx uses a **-s** option for passing a signal such as "stop" or "reload." Generally, I prefer to use an application's specific method to stop an operation. However, I'll demonstrate the kill command to stop Nginx process 20896 and then confirm it is stopped with pgrep. The PID 20896 no longer appears.

注意第九号信号是 SIGKILL。通常,我们会执行像 **kill -9 20896** 这样的命令。默认信号是 15,即 SIGTERM。请记住,许多应用程序都有自己的停止方法。Nginx 使用 **-s** 选项传递信号,如 “stop” 或 “reload”。通常,我更喜欢使用应用程序特定的方法来停止操作。然而,这里我将演示用 kill 命令停止 Nginx 进程 20896,然后用 pgrep 确认它已经停止。PID 20896 将不再出现:

```
alan@workstation:~$ kill -9 20896
@ -226,6 +262,9 @@ alan@workstation:~$ pgrep nginx

#### PKILL

The command pkill is similar to pgrep in that it can search by name. This means you have to be very careful when using pkill. In my example with Nginx, I might not choose to use it if I only want to kill one Nginx instance. I can pass the Nginx option **-s** **stop** to a specific instance to kill it, or I need to use grep to filter on the full ps output.

pkill 命令与 pgrep 类似,也可以按名称搜索。这意味着在使用 pkill 时必须非常小心。在我的 Nginx 示例中,如果我只想杀死一个 Nginx 实例,我可能就不会选择使用它。我可以把 Nginx 的 **-s stop** 选项传递给特定的实例来杀死它,或者用 grep 来过滤整个 ps 输出:

```
/home/alan/web/prod/nginx/sbin/nginx -s stop
@ -233,6 +272,9 @@ The command pkill is similar to pgrep in that it can search by name. This means
```

If I want to use pkill, I can include the **-f** option to ask pkill to filter across the full command line argument. This of course also applies to pgrep. So, first I can check with **pgrep -a** before issuing the **pkill -f**.

如果我想使用 pkill,可以加上 **-f** 选项,让 pkill 对完整的命令行参数进行过滤。这当然也适用于 pgrep。所以,在执行 **pkill -f** 之前,首先可以用 **pgrep -a** 确认一下:

```
alan@workstation:~$ pgrep -a nginx
20881 nginx: master process ./nginx -p /home/alan/web/prod/nginxsec
@ -242,6 +284,10 @@ alan@workstation:~$ pgrep -a nginx
```

I can also narrow down my result with **pgrep -f**. The same argument used with pkill stops the process.

我也可以用 **pgrep -f** 缩小我的搜索结果。把同样的参数用于 pkill,就会停止对应的进程:

```
alan@workstation:~$ pgrep -f nginxsec
20881
@ -251,8 +297,14 @@ alan@workstation:~$ pkill -f nginxsec
```

The key thing to remember with pgrep (and especially pkill) is that you must always be sure that your search result is accurate so you aren't unintentionally affecting the wrong processes.

使用 pgrep(尤其是 pkill)时要记住的关键点是,您必须始终确保搜索结果的准确性,这样您才不会无意中影响到错误的进程。

Most of these commands have many command line options, so I always recommend reading the [man page][1] on each one. While most of these exist across platforms such as Linux, Solaris, and BSD, there are a few differences. Always test and be ready to correct as needed when working at the command line or writing scripts.

这些命令大多都有许多命令行选项,所以我总是建议阅读每一个命令的 [man 手册页][1]。虽然这些命令大多存在于 Linux、Solaris 和 BSD 等平台上,但也有一些不同之处。在命令行工作或编写脚本时,要始终进行测试,并随时准备根据需要进行更正。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/9/linux-commands-process-management