mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-03-21 02:10:11 +08:00)
commit 15dec4d7be
7 个驱动开源发展的社区
================================================================================

不久前,开源模式还被成熟的工业级厂商以怀疑的态度视作叛逆小孩的玩物。如今,开源的促进会和基金会在众多供应商的支持下蓬勃发展,这些供应商将开源模式视作创新的关键。



### 技术的开放发展驱动着创新 ###

在过去的 20 几年间,技术的开源推进已被视作驱动创新的关键因素。即使那些以前将开源视作威胁的公司也开始接受这个观点,例如微软,如今它在一系列的开源促进会中表现活跃。到目前为止,大多数的开源推进都集中在软件方面,但这一点也正在改变,因为社区已经开始向开源硬件倡议方面聚拢。这里介绍 7 个成功地在硬件和软件方面同时促进和发展开源技术的组织。

### OpenPOWER 基金会 ###

IBM 通过开放其基于 Power 架构的硬件和软件技术,以及向使用 Power IP 的独立硬件产品提供许可证等方式,为基金会的建立播下种子。如今超过 70 个成员共同协作,为基于 Linux 的数据中心提供自定义的开放服务器、组件和软件。

去年四月,OpenPOWER 基于新的 POWER8 处理器的服务器(其数据分析能力比最新的基于 x86 的系统快 50 倍)推出了一个技术路线图。七月,IBM 和 Google 发布了一个固件堆栈。去年十月则见证了 NVIDIA GPU 为 POWER8 系统带来的加速能力,以及来自 Tyan 的第一个 OpenPOWER 参考服务器。

### Linux 基金会 ###



于 2000 年建立的 [Linux 基金会][2] 如今掌控着历史上最大的开源协同开发成果,它有着超过 180 个合作成员和许多独立成员及学生成员。它赞助 Linux 核心开发者的工作,促进、保护和推进 Linux 操作系统,并协调软件的协作开发。

它最为成功的协作项目包括 Code Aurora Forum(一个为移动无线产业服务的企业联盟)、MeeGo(一个为移动设备和 IVI [注:指的是车载信息娱乐设备,为 In-Vehicle Infotainment 的简称] 构建基于 Linux 内核的操作系统的项目)和 Open Virtualization Alliance(开放虚拟化联盟,它促进自由和开源软件虚拟化解决方案的采用)。

### 开放虚拟化联盟 ###



[开放虚拟化联盟(OVA)][3] 的存在目的为:通过提供使用案例,并支持具有互操作性的通用接口和 API 的发展,来促进自由、开源的虚拟化解决方案(例如 KVM)的采用。KVM 将 Linux 内核转变为一个虚拟机管理程序。

如今,KVM 已成为和 OpenStack 共同使用的最为常见的虚拟机管理程序。

### OpenStack 基金会 ###

原本作为一个 IaaS(基础设施即服务)产品由 NASA 和 Rackspace 于 2010 年启动,[OpenStack 基金会][4] 已成为最大的开源项目聚居地之一。它拥有超过 200 家公司成员,其中包括 AT&T、AMD、Avaya、Canonical、Cisco、Dell 和 HP。

大约以 6 个月为一个发行周期,基金会的 OpenStack 项目被开发用于通过一个基于 Web 的仪表盘、命令行工具或 RESTful 风格的 API,来控制或调配流经一个数据中心的处理存储池和网络资源。至今为止,基金会支持的协同开发已经孕育出了一系列 OpenStack 组件,其中包括 OpenStack Compute(一个云计算网络控制器,它是一个 IaaS 系统的主要部分)、OpenStack Networking(一个用以管理网络和 IP 地址的系统)和 OpenStack Object Storage(一个可扩展的冗余存储系统)。

### OpenDaylight ###



作为来自 Linux 基金会的另一个协作项目,[OpenDaylight][5] 是一个由诸如 Dell、HP、Oracle 和 Avaya 等行业厂商于 2013 年 4 月建立的联合倡议。它的任务是建立一个由社区主导、开源、有工业支持的针对软件定义网络(SDN:Software-Defined Networking)的包含代码和蓝图的框架。其思路是提供一个可直接部署的全功能 SDN 平台,而不需要其他组件;此外,供应商可提供附加组件和增强组件。

### Apache 软件基金会 ###



[Apache 软件基金会(ASF)][7] 是将近 150 个顶级项目的聚居地,这些项目涵盖从开源的企业级自动化软件到与 Apache Hadoop 相关的分布式计算的整个生态系统。这些项目分发企业级、可免费获取的软件产品,而 Apache 协议则是为了让无论是商业用户还是个人用户都能更方便地部署 Apache 的产品。

ASF 是 1999 年成立的一个会员制非盈利公司,以精英为其核心:要成为它的成员,你必须首先在基金会的一个或多个协作项目中做出积极贡献。

### 开放计算项目 ###



作为 Facebook 重新设计其 Oregon 数据中心的副产物,[开放计算项目][7] 旨在发展针对数据中心的开源硬件解决方案。OCP 由廉价、无浪费的服务器,针对 Open Rack(为数据中心设计的机架标准,让机架集成到数据中心的基础设施中)的模块化 I/O 存储,以及一个相对“绿色”的数据中心设计方案等构成。

OCP 董事会成员包括来自 Facebook、Intel、Goldman Sachs、Rackspace 和 Microsoft 的代表。

OCP 最近宣布了两种可选的许可证:一个类似 Apache 2.0 的允许衍生作品的许可证,和一个更规范的、鼓励将更改回馈给原有软件的许可证。

--------------------------------------------------------------------------------

via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities

作者:[Thor Olavsrud][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
如何在 Ubuntu 中管理和使用逻辑卷管理(LVM)
================================================================================


在我们之前的文章中,我们介绍了[什么是 LVM 以及能用 LVM 做什么][1],今天我们会给你介绍一些 LVM 的主要管理工具,使得你在设置和扩展安装时更游刃有余。

正如之前所述,LVM 是介于你的操作系统和物理硬盘驱动器之间的抽象层。这意味着你的操作系统所见的硬盘和分区,不再与它们所在的物理硬盘和分区绑定,而是可以由任意数目的独立硬盘汇集而成,或是一个软件磁盘阵列。

要管理 LVM,这里有很多可用的 GUI 工具,但要真正理解 LVM 配置发生的事情,最好要知道一些命令行工具。这当你在服务器或不提供 GUI 工具的发行版上管理 LVM 时尤为有用。

LVM 的大部分命令彼此之间都非常相似。每个可用的命令都由以下其中之一开头:

- Physical Volume(物理卷) = pv
- Volume Group(卷组) = vg
- Logical Volume(逻辑卷) = lv

物理卷命令用于在卷组中添加或删除硬盘。卷组命令用于更改呈现给逻辑卷操作的物理分区抽象集。逻辑卷命令会以分区形式将卷组呈现出来,使得你的操作系统能使用指定的空间。
### 可下载的 LVM 备忘单 ###

### 如何查看当前 LVM 信息 ###

你首先需要做的事情是检查你的 LVM 设置。以 s 和 display 结尾的命令可以和物理卷(pv)、卷组(vg)以及逻辑卷(lv)一起使用,是找出当前设置的一个好起点。

display 命令会格式化输出信息,因此比 s 命令更易于理解。对每个命令你会看到名称和 pv/vg 的路径,它还会给出空闲和已使用空间的信息。
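下面是一个简单的示意(命令名是 LVM 的标准用法,具体输出视你的系统而定):

    # 简洁的列表形式:物理卷、卷组、逻辑卷
    sudo pvs
    sudo vgs
    sudo lvs

    # 更详细、带格式的输出
    sudo pvdisplay
    sudo vgdisplay
    sudo lvdisplay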
#### 创建物理卷 ####

我们会从一个全新的、没有任何分区和信息的硬盘开始。首先找出你将要使用的磁盘(/dev/sda、sdb 等)。

> 注意:记住所有的命令都要以 root 身份运行,或者在命令前面添加 'sudo'。

    fdisk -l

如果你的硬盘之前从未格式化或分区过,在 fdisk 的输出中你很可能看到类似下面的信息。这完全正常,因为我们会在下面的步骤中创建需要的分区。



我们的新磁盘位置是 /dev/sdb,让我们用 fdisk 命令在磁盘上创建一个新的分区。

这里有大量能创建新分区的 GUI 工具,包括 [Gparted][2],但由于我们已经打开了终端,我们将使用 fdisk 命令创建需要的分区。



以指定的顺序输入命令,创建一个使用新硬盘 100% 空间的主分区,并为 LVM 做好准备(完整的按键顺序可参考下面的示意)。如果你需要更改分区的大小或想要多个分区,我建议使用 GParted,或自己了解一下 fdisk 命令的使用。

**警告:下面的步骤会格式化你的硬盘。确保在进行下面的步骤之前,你的硬盘中没有任何有用的信息。**

- n = 创建新分区
- p = 创建主分区
- t = 更改分区类型
- 8e = 更改为 LVM 分区类型

核实并将信息写入硬盘。

- p = 查看分区设置,以便在写入更改到磁盘之前检查
- w = 写入更改到磁盘
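把这些按键串起来,一次完整的操作大致如下(示意;假设新磁盘为 /dev/sdb,请按实际情况替换):

    sudo fdisk /dev/sdb
    # 在 fdisk 的交互提示符中依次输入:
    #   n(新建分区)-> p(主分区)-> 1(分区号,起止扇区直接回车取默认值)
    #   t(更改分区类型)-> 8e(Linux LVM)
    #   p(检查分区表)-> w(写入并退出)

    # 之后用 pvcreate 将新分区初始化为 LVM 物理卷(下文扩容一节也会用到这一步):
    sudo pvcreate /dev/sdb1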



vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称,但建议标签以 vg 开头,以便后面使用它时能意识到这是一个卷组。
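一个最小的示意(假设物理卷就是上面创建的 /dev/sdb1):

    sudo vgcreate vgpool /dev/sdb1
    sudo vgs    # 确认卷组已经创建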
#### 创建逻辑卷 ####



-L 选项指定逻辑卷的大小,在本例中是 3 GB;-n 选项指定卷的名称。指定 vgpool,以便 lvcreate 命令知道从哪个卷组获取空间。
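与这段说明对应的命令大致如下(示意;卷名沿用下文备份一节出现的 lvstuff):

    sudo lvcreate -L 3G -n lvstuff vgpool
    sudo lvs    # 确认逻辑卷已经创建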
#### 格式化并挂载逻辑卷 ####
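一个常见的做法如下(示意;文件系统类型和挂载点都是假设,可按需替换):

    sudo mkfs.ext4 /dev/vgpool/lvstuff        # 在逻辑卷上创建 ext4 文件系统
    sudo mkdir /mnt/stuff
    sudo mount /dev/vgpool/lvstuff /mnt/stuff # 之后就能像普通分区一样使用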
#### 重新设置逻辑卷大小 ####

逻辑卷的一个好处是你能使你的存储在物理上变大或变小,而不需要把所有东西移动到一个更大的硬盘。另外,你可以添加新的硬盘并同时扩展你的卷组;或者如果你有一个不使用的硬盘,你可以从卷组中移除它,使得逻辑卷变小。

这里有三个用于使物理卷、卷组和逻辑卷变大或变小的基础工具。
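以扩大为例,一个典型的组合用法如下(示意;resize2fs 适用于 ext 系列文件系统):

    sudo lvextend -L +1G /dev/vgpool/lvstuff   # 给逻辑卷增加 1GB
    sudo resize2fs /dev/vgpool/lvstuff         # 扩展文件系统,用上新增的空间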
按照上面创建新分区并更改分区类型为 LVM(8e)的步骤安装一个新硬盘,然后用 pvcreate 命令创建一个 LVM 能识别的物理卷。

#### 添加新硬盘到卷组 ####

要添加新的硬盘到一个卷组,你只需要知道你的新分区(在我们的例子中是 /dev/sdc1)以及想要添加到的卷组的名称。

这会添加新物理卷到已存在的卷组中。
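对应的命令大致如下(示意):

    sudo pvcreate /dev/sdc1        # 先把新分区变成物理卷
    sudo vgextend vgpool /dev/sdc1 # 再把它加入卷组 vgpool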
1. 调整文件系统大小(调整之前确保已经把文件移动到安全的地方)
1. 减小逻辑卷(除了用 + 扩展大小,你也可以用 - 压缩大小)
1. 用 vgreduce 从卷组中移除硬盘(完整的命令顺序见下面的示意)
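一个完整的缩减流程大致如下(示意;缩减有数据风险,务必先备份,且目标大小不能小于已用空间):

    sudo umount /mnt/stuff
    sudo e2fsck -f /dev/vgpool/lvstuff          # 缩减前强制检查文件系统
    sudo resize2fs /dev/vgpool/lvstuff 2G       # 1. 先把文件系统缩小到目标大小
    sudo lvreduce -L -1G /dev/vgpool/lvstuff    # 2. 再缩小逻辑卷(3GB -> 2GB)
    sudo pvmove /dev/sdc1                       # 把 /dev/sdc1 上的数据挪到其他物理卷
    sudo vgreduce vgpool /dev/sdc1              # 3. 从卷组中移除该硬盘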
#### 备份逻辑卷 ####



LVM 获取快照的时候,会有一张和逻辑卷完全相同的“照片”,该“照片”可以用于在不同的硬盘上进行备份。生成一个备份的时候,任何需要添加到逻辑卷的新信息会如往常一样写入磁盘,但 LVM 会跟踪这些更改,使得原始快照永远不会损毁。

要创建一个快照,我们需要创建一个拥有足够空闲空间的逻辑卷,用于保存备份期间写入原逻辑卷的任何新信息。如果该卷不经常写入,你可以使用很小的一个存储空间。备份完成的时候,我们只需要移除临时逻辑卷,原始逻辑卷会和往常一样。



这里我们创建了一个只有 512MB 的逻辑卷,因为备份期间该卷并不会被频繁写入。512MB 的空间会保存备份时产生的任何新数据。
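创建快照的命令大致如下(示意;-s 表示快照,名称与下文的挂载点保持一致):

    sudo lvcreate -L 512M -s -n lvstuffbackup /dev/vgpool/lvstuff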
#### 挂载新快照 ####
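快照和普通逻辑卷一样挂载(示意;挂载点与下文保持一致):

    sudo mkdir /mnt/lvstuffbackup
    sudo mount /dev/vgpool/lvstuffbackup /mnt/lvstuffbackup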
#### 复制快照和删除逻辑卷 ####

你剩下需要做的是从 /mnt/lvstuffbackup/ 中复制所有文件到一个外部的硬盘,或者打包所有文件到一个文件。

**注意:tar -c 会创建一个归档文件,-f 要指出归档文件的名称和路径。要获取 tar 命令的帮助信息,可以在终端中输入 man tar。**
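打包命令大致如下(示意;归档文件的输出路径是假设的):

    sudo tar -cf /home/user/lvstuff-backup.tar /mnt/lvstuffbackup/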



记住,备份期间写到 lvstuff 的所有文件都会在我们之前创建的临时逻辑卷中被跟踪。确保备份的时候你有足够的空闲空间。

备份完成后,卸载卷并移除临时快照。
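对应的命令(示意):

    sudo umount /mnt/lvstuffbackup
    sudo lvremove /dev/vgpool/lvstuffbackup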
--------------------------------------------------------------------------------

via: http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/

译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:https://linux.cn/article-5953-1.html
[2]:http://www.howtogeek.com/howto/17001/how-to-format-a-usb-drive-in-ubuntu-using-gparted/
[3]:http://www.howtogeek.com/howto/33552/htg-explains-which-linux-file-system-should-you-choose/
如何通过反向 SSH 隧道访问 NAT 后面的 Linux 服务器
================================================================================
你在家里运行着一台 Linux 服务器,它放在一个 NAT 路由器或者限制性防火墙后面。现在你想在外出时用 SSH 登录到这台服务器。你如何才能做到呢?SSH 端口转发当然是一种选择。但是,如果你需要处理多级嵌套的 NAT 环境,端口转发可能会变得非常棘手。另外,在多种 ISP 特定条件下可能会受到干扰,例如阻塞转发端口的限制性 ISP 防火墙、或者在用户间共享 IPv4 地址的运营商级 NAT。

### 什么是反向 SSH 隧道? ###

SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧道的概念非常简单。使用这种方案,在你的受限的家庭网络之外你需要另一台主机(所谓的“中继主机”),你能从当前所在地通过 SSH 登录到它。你可以用有公网 IP 地址的 [VPS 实例][1] 配置一个中继主机。然后要做的就是从你的家庭网络服务器中建立一个到公网中继主机的永久 SSH 隧道。有了这个隧道,你就可以从中继主机中连接“回”家庭服务器(这就是为什么称之为“反向”隧道)。不管你在哪里、你的家庭网络中的 NAT 或防火墙限制多么严格,只要你可以访问中继主机,你就可以连接到家庭服务器。



### 在 Linux 上设置反向 SSH 隧道 ###

让我们来看看怎样创建和使用反向 SSH 隧道。我们做如下假设:我们会设置一个从家庭服务器(homeserver)到中继服务器(relayserver)的反向 SSH 隧道,然后我们可以通过中继服务器从客户端计算机(clientcomputer)SSH 登录到家庭服务器。本例中的**中继服务器**的公网 IP 地址是 1.1.1.1。

在家庭服务器上,按照以下方式打开一个到中继服务器的 SSH 连接。

    homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1

“-R 10022:localhost:22” 选项定义了一个反向隧道。它转发中继服务器 10022 端口的流量到家庭服务器的 22 号端口。

用 “-fN” 选项,当你成功通过 SSH 服务器验证时 SSH 会进入后台运行。当你不想在远程 SSH 服务器执行任何命令、就像我们的例子中只想转发端口的时候,这非常有用。

运行上面的命令之后,你就会回到家庭服务器的命令行提示符中。

登录到中继服务器,确认其 127.0.0.1:10022 绑定到了 sshd。如果是的话就表示已经正确设置了反向隧道。

    relayserver~$ sudo netstat -nap | grep 10022

    relayserver~$ ssh -p 10022 homeserver_user@localhost

需要注意的一点是,你在上面为 localhost 输入的 SSH 登录名/密码应该是家庭服务器的,而不是中继服务器的,因为你是通过隧道的本地端点登录到家庭服务器,因此不要错误地输入中继服务器的登录名/密码。成功登录后,你就在家庭服务器上了。

### 通过反向 SSH 隧道直接连接到网络地址变换后的服务器 ###

上面的方法允许你访问 NAT 后面的**家庭服务器**,但你需要登录两次:首先登录到**中继服务器**,然后再登录到**家庭服务器**。这是因为中继服务器上 SSH 隧道的端点绑定到了回环地址(127.0.0.1)。

事实上,有一种方法可以只需要登录到中继服务器就能直接访问 NAT 之后的家庭服务器。要做到这点,你需要让中继服务器上的 sshd 不仅转发回环地址上的端口,还要转发外部主机的端口。这通过指定中继服务器上运行的 sshd 的 **GatewayPorts** 选项实现。

打开**中继服务器**的 /etc/ssh/sshd_config 并添加下面的行。
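添加的内容大致如下(示意;GatewayPorts 也可以设为 yes,clientspecified 则允许由客户端自行指定绑定地址):

    # /etc/ssh/sshd_config(中继服务器)
    GatewayPorts clientspecified

修改后需要重启 sshd 才会生效(具体命令视发行版而定,例如 sudo systemctl restart sshd)。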
    tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev

不像之前的情况,现在隧道的端点是 1.1.1.1:10022(中继服务器的公网 IP 地址),而不是 127.0.0.1:10022。这就意味着从外部主机可以访问隧道的另一端。

现在在任何其它计算机(客户端计算机),输入以下命令访问网络地址变换之后的家庭服务器。

    clientcomputer~$ ssh -p 10022 homeserver_user@1.1.1.1

在上面的命令中,1.1.1.1 是中继服务器的公网 IP 地址,homeserver_user 必须是家庭服务器上的用户账户。这是因为你真正登录到的主机是家庭服务器,而不是中继服务器,后者只是中继你的 SSH 流量到家庭服务器。

### 在 Linux 上设置一个永久反向 SSH 隧道 ###

现在你已经明白了怎样创建一个反向 SSH 隧道,接下来我们把隧道设置为“永久”的,这样隧道启动后就会一直运行(不管临时的网络拥塞、SSH 超时、中继主机重启,等等)。毕竟,如果隧道不是一直有效,你就不能可靠地登录到你的家庭服务器。

对于永久隧道,我打算使用一个叫 autossh 的工具。正如名字暗示的,这个程序可以让你的 SSH 会话无论因为什么原因中断都会自动重连。因此对于保持一个反向 SSH 隧道非常有用。

第一步,我们要设置从家庭服务器到中继服务器的[无密码 SSH 登录][2]。这样的话,autossh 可以不需要用户干预就能重启一个损坏的反向 SSH 隧道。

下一步,在建立隧道的家庭服务器上[安装 autossh][3]。

在家庭服务器上,用下面的参数运行 autossh 来创建一个连接到中继服务器的永久 SSH 隧道。
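命令大致如下(示意;-M 指定 autossh 的监控端口,可任选一个未被占用的端口,其余是常见的保活选项):

    homeserver~$ autossh -M 10900 -fN \
        -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" \
        -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1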
### 总结 ###

在这篇博文中,我介绍了你如何能从外部通过反向 SSH 隧道访问限制性防火墙或 NAT 网关之后的 Linux 服务器。这里我介绍了家庭网络中的一个使用事例,但在企业网络中使用时你尤其要小心。这样的一个隧道可能被视为违反公司政策,因为它绕过了企业的防火墙并把企业网络暴露给外部攻击。这很可能被误用或者滥用。因此在使用之前一定要记住它的作用。

--------------------------------------------------------------------------------

via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html

作者:[Dan Nanni][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/go/digitalocean
[2]:https://linux.cn/article-5444-1.html
[3]:https://linux.cn/article-5459-1.html
published/20150717 How to monitor NGINX- Part 1.md(新增文件,231 行)
如何监控 NGINX(第一篇)
================================================================================


### NGINX 是什么? ###

[NGINX][1](发音为 “engine X”)是一种流行的 HTTP 和反向代理服务器。作为一个 HTTP 服务器,NGINX 可以使用较少的内存非常高效可靠地提供静态内容。作为[反向代理][2],它可以用作多个后端服务器或类似缓存和负载均衡这样的其它应用的单一访问控制点。NGINX 是一个自由开源的产品,并有一个具备更全面功能的叫做 NGINX Plus 的商业版。

NGINX 也可以用作邮件代理和通用的 TCP 代理,但本文并不直接讨论对 NGINX 这些用例的监控。

### NGINX 主要指标 ###

通过监控 NGINX 可以捕获到两类问题:NGINX 本身的资源问题,以及出现在你的基础网络设施的其它问题。大多数 NGINX 用户会用到以下指标的监控,包括**每秒请求数**,它提供了一个由所有最终用户活动组成的上层视图;**服务器错误率**,它表明你的服务器处理看似有效的请求失败的频率;还有**请求处理时间**,这说明你的服务器处理客户端请求的总共时长(并且可以看出性能降低或当前环境的其他问题)。

更一般地,至少有三个主要的指标类别来监视:

- 基本活跃指标
- 错误指标
- 性能指标

下面我们将分析在每个类别中最重要的 NGINX 指标,以及用一个相当普遍但是值得特别提到的案例来说明:使用 NGINX Plus 作反向代理。我们还将介绍如何使用图形工具或可选的监控工具来监控所有的指标。

本文引用的指标术语[来自我们的“监控 101 系列”][3],它提供了一个指标收集和警告框架。

#### 基本活跃指标 ####

无论你在怎样的情况下使用 NGINX,毫无疑问你要监视服务器接收多少客户端请求和如何处理这些请求。

NGINX Plus 像开源 NGINX 一样可以报告基本活跃指标,但它也提供了略有不同的辅助模块。我们首先讨论开源的 NGINX,再来说明 NGINX Plus 提供的其他指标的功能。

**NGINX**

下图显示了一个客户端连接的过程,以及开源版本的 NGINX 如何在连接过程中收集指标。



Accepts(接受)、Handled(已处理)、Requests(请求)是一直在增加的计数器。Active(活跃)、Waiting(等待)、Reading(读)、Writing(写)随着请求量而增减。

| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)|
|-----------|-----------------|-----------------|
| Accepts | NGINX 所接受的客户端连接数 | 资源:利用率 |
| Handled | 成功的客户端连接数 | 资源:利用率 |
| Active | 当前活跃的客户端连接数| 资源:利用率 |
| Dropped(已丢弃,计算得出)| 丢弃的连接数(接受 - 已处理)| 工作:错误*|
| Requests | 客户端请求数 | 工作:吞吐量 |

_*严格的来说,丢弃的连接是 [一个资源饱和指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics),但是因为饱和会导致 NGINX 停止服务(而不是延后该请求),所以,“已丢弃”视作 [一个工作指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics) 比较合适。_

NGINX worker 进程接受 OS 的连接请求时 **Accepts** 计数器增加,而 **Handled** 是当实际的请求得到连接时(通过建立一个新的连接或重新使用一个空闲的)增加。这两个计数器的值通常都是相同的,如果它们有差别则表明连接被 **Dropped**,往往这是由于资源限制,比如已经达到 NGINX 的 [worker_connections][4] 的限制。

一旦 NGINX 成功处理一个连接时,连接会移动到 **Active** 状态,在这里对客户端请求进行处理:

Active 状态:

- **Waiting**: 活跃的连接也可以处于 Waiting 子状态,如果有在此刻没有活跃请求的话。新连接可以绕过这个状态并直接变为 Reading 状态,最常见的是在使用“accept filter(接受过滤器)”和“deferred accept(延迟接受)”时,在这种情况下,NGINX 不会接收 worker 进程的通知,直到它具有足够的数据才开始响应。如果连接设置为 keep-alive,那么它在发送响应后将处于等待状态。

- **Reading**: 当接收到请求时,连接离开 Waiting 状态,并且该请求本身使 Reading 状态计数增加。在这种状态下 NGINX 会读取客户端请求首部。请求首部是比较小的,因此这通常是一个快速的操作。

- **Writing**: 请求被读取之后,其使 Writing 状态计数增加,并保持在该状态,直到响应返回给客户端。这意味着,该请求在 Writing 状态时,一方面 NGINX 等待来自上游系统的结果(系统放在 NGINX “后面”),另外一方面,NGINX 也在同时响应。请求往往会在 Writing 状态花费大量的时间。

通常,一个连接在同一时间只接受一个请求。在这种情况下,Active 连接的数目 == Waiting 的连接 + Reading 请求 + Writing 请求。然而,较新的 SPDY 和 HTTP/2 协议允许多个并发请求/响应复用一个连接,所以 Active 可小于 Waiting 的连接、Reading 请求、Writing 请求的总和。(在撰写本文时,NGINX 不支持 HTTP/2,但预计到 2015 年期间将会支持。)

**NGINX Plus**

正如上面提到的,所有开源 NGINX 的指标在 NGINX Plus 中也是可用的,但它另外也提供其他的指标。本节仅说明 NGINX Plus 可用的指标。



Accepted(已接受)、Dropped 和总数是不断增加的计数器。Active、Idle(空闲)和处于 Current(当前)处理阶段的各种状态下的连接或请求的当前数量随着请求量而增减。

| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)|
|-----------|-----------------|-----------------|
| Accepted | NGINX 所接受的客户端连接数 | 资源:利用率 |
| Dropped |丢弃的连接数(接受 - 已处理)| 工作:错误*|
| Active | 当前活跃的客户端连接数| 资源:利用率 |
| Idle | 没有当前请求的客户端连接| 资源:利用率 |
| Total(全部) | 客户端请求数 | 工作:吞吐量 |

_*严格的来说,丢弃的连接是 [一个资源饱和指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics),但是因为饱和会导致 NGINX 停止服务(而不是延后该请求),所以,“已丢弃”视作 [一个工作指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics) 比较合适。_

当 NGINX Plus worker 进程接受 OS 的连接请求时 **Accepted** 计数器递增。如果 worker 进程为请求建立连接失败(通过建立一个新的连接或重新使用一个空闲的),则该连接被丢弃,**Dropped** 计数增加。通常连接被丢弃是因为资源限制,如 NGINX Plus 的 [worker_connections][4] 的限制已经达到。

**Active** 和 **Idle** 与[如上所述][5]的开源 NGINX 的 “active” 和 “waiting” 状态是相同的,但是有一点关键的不同:在开源 NGINX 上,“waiting” 状态包括在 “active” 中,而在 NGINX Plus 上 “idle” 的连接被排除在 “active” 计数外。**Current** 和开源 NGINX 一样,也是由 “reading + writing” 状态组成。

**Total** 为客户端请求的累积计数。请注意,单个客户端连接可涉及多个请求,所以这个数字可能会比连接的累计次数明显要大。事实上,(total / accepted)是每个连接的平均请求数量。

**开源和 Plus 之间指标的不同**

|NGINX (开源) |NGINX Plus|
|-----------------------|----------------|
| accepts | accepted |
| dropped 通过计算得来| dropped 直接得到 |
| reading + writing| current|
| waiting| idle|
| active (包括 “waiting” 状态) | active (排除 “idle” 状态)|
| requests| total|

**提醒指标: 丢弃连接**

被丢弃的连接数目等于 Accepts 和 Handled 之差(NGINX 中),或者是可直接得到的标准指标(NGINX Plus 中)。在正常情况下,丢弃连接数应该是零。如果在每个单位时间内丢弃连接的速度开始上升,那么应该看看是否资源饱和了。



**提醒指标: 每秒请求数**

按固定时间间隔采样你的请求数据(开源 NGINX 的 **requests** 或者 NGINX Plus 中的 **total**),会提供给你单位时间内(通常是分钟或秒)所接受的请求数量。监测这个指标可以查看进入的 Web 流量尖峰,无论是合法的还是恶意的,或者突然的下降,这通常都代表着出现了问题。每秒请求数若发生急剧变化,可以提醒你的环境出现问题了,即使它不能告诉你确切问题的位置所在。请注意,所有的请求都同样计数,无论 URL 是什么。



**收集活跃指标**

开源的 NGINX 提供了一个简单状态页面来显示基本的服务器指标。该状态信息以标准格式显示,实际上任何图形或监控工具都可以被配置去解析这些相关数据,以用于分析、可视化或提醒。NGINX Plus 提供一个 JSON 接口来提供更多的数据。阅读相关文章“[NGINX 指标收集][6]”来启用指标收集的功能。
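举个例子,开源 NGINX 的状态页可以直接用 curl 抓取并解析(示意;假设已按官方文档把 stub_status 状态页开启在 /nginx_status 路径,输出中的数字为虚构):

    $ curl -s http://127.0.0.1/nginx_status
    Active connections: 291
    server accepts handled requests
     16630948 16630948 31070465
    Reading: 6 Writing: 179 Waiting: 106

    # 第 3 行依次是 accepts、handled、requests 三个计数器,
    # 由 accepts - handled 即可算出丢弃的连接数:
    $ curl -s http://127.0.0.1/nginx_status | awk 'NR==3 { print "dropped:", $1 - $2 }'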
#### 错误指标 ####

| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| 可用于 |
|-----------|-----------------|-----------------|----------------|
| 4xx 代码 | 客户端错误计数 | 工作:错误 | NGINX 日志, NGINX Plus|
| 5xx 代码| 服务器端错误计数 | 工作:错误 | NGINX 日志, NGINX Plus|

NGINX 错误指标告诉你服务器是经常返回错误,还是正常工作。客户端错误返回 4xx 状态码,服务器端错误返回 5xx 状态码。

**提醒指标: 服务器错误率**

服务器错误率等于在单位时间(通常为一到五分钟)内 5xx 错误状态码的总数除以[状态码][7](1xx、2xx、3xx、4xx、5xx)的总数。如果你的错误率随着时间的推移开始攀升,就要调查可能的原因;如果突然增加,则可能需要采取紧急行动,因为客户端可能收到错误信息。
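如果手头只有访问日志,可以用一行 awk 粗略估算错误率(示意;假设日志是默认的 combined 格式,状态码在第 9 个字段):

    $ awk '{ total++; if ($9 ~ /^5/) err++ }
           END { if (total) printf "5xx rate: %.2f%%\n", 100 * err / total }' /var/log/nginx/access.log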


关于客户端错误的注意事项:虽然监控 4xx 是很有用的,但从该指标中你仅可以捕捉有限的信息,因为它只是衡量客户的行为,而不捕捉任何特殊的 URL。换句话说,4xx 的变化可能只是一个信号,例如网络扫描器正在寻找你的网站漏洞。

**收集错误指标**

虽然开源 NGINX 不能直接提供用于监测的错误率,但至少有两种方法可以得到:

- 使用商业支持的 NGINX Plus 提供的扩展状态模块
- 配置 NGINX 的日志模块将响应码写入访问日志

关于这两种方法,请阅读相关文章“[NGINX 指标收集][6]”。

#### 性能指标 ####

| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| 可用于 |
|-----------|-----------------|-----------------|----------------|
| request time (请求处理时间)| 处理每个请求的时间,单位为秒 | 工作:性能 | NGINX 日志|

**提醒指标: 请求处理时间**

请求处理时间指标记录了 NGINX 处理每个请求的时间,从读到客户端的第一个请求字节到完成请求。较长的响应时间往往说明问题在上游。

**收集处理时间指标**

NGINX 和 NGINX Plus 用户可以通过添加 $request_time 变量到访问日志格式中来捕捉处理时间数据。关于配置日志监控的更多细节,见“[NGINX 指标收集][6]”。
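例如,把 $request_time 追加到日志行尾后,可以这样粗略统计平均请求处理时间(示意;假设它是每行的最后一个字段):

    $ awk '{ sum += $NF; n++ } END { if (n) printf "avg request time: %.3fs\n", sum / n }' /var/log/nginx/access.log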
#### 反向代理指标 ####

| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| 可用于 |
|-----------|-----------------|-----------------|----------------|
| 上游服务器的活跃连接 | 当前活跃的客户端连接 | 资源:利用率 | NGINX Plus |
| 上游服务器的 5xx 错误代码| 服务器错误 | 工作:错误 | NGINX Plus |
| 每个上游组的可用服务器 | 通过健康检查的服务器 | 资源:可用性| NGINX Plus |

[反向代理][9]是 NGINX 最常见的使用方法之一。商业支持的 NGINX Plus 提供了大量有关后端(或“上游 upstream”)服务器的指标,这些指标与反向代理设置相关。本节重点介绍几个 NGINX Plus 用户可用的关键上游指标。

NGINX Plus 的上游指标首先按组分开,然后是针对单个服务器的。因此,例如,你的反向代理将请求分配到五个上游的 Web 服务器上,你可以一眼看出是否有单个服务器压力过大,也可以看出上游组中服务器的健康状况,以确保良好的响应时间。

**活跃指标**

**每个上游服务器的活跃连接数**可以帮助你确认反向代理是否正确地把工作分配到你的整个服务器组上。如果你正在使用 NGINX 作为负载均衡器,任何一台服务器处理的连接数的明显偏差都可能表明该服务器正在努力消化请求,或者是你配置使用的负载均衡方法(例如 [round-robin 或 IP hashing][10])不是最适合你流量模式的。

**错误指标**

错误指标,即上面所说的 5xx(服务器错误)状态码,是监控指标中有价值的一个,尤其是响应码部分。NGINX Plus 允许你轻松地提取**每个上游服务器的 5xx 错误代码**的数量,以及响应的总数量,以此来确定某个特定服务器的错误率。

**可用性指标**

对于 web 服务器的运行状况,还有另一种角度:NGINX 可以通过**每个组中当前可用服务器的总量**很方便地监控你的上游组的健康。在一个大的反向代理上,你可能不会非常关心其中一台服务器的当前状态,只要有足够的可用服务器能够处理当前的负载就行了。但监视上游组内的所有工作的服务器总量,可为判断 Web 服务器的健康状况提供一个更高层面的视角。

**收集上游指标**

NGINX Plus 上游指标显示在内部 NGINX Plus 的监控仪表盘上,并且也可通过一个 JSON 接口来提供给各种外部监控平台。在我们的相关文章“[NGINX 指标收集][6]”中有个例子。

### 结论 ###

在这篇文章中,我们已经谈到了一些有用的指标,你可以用来监控 NGINX 服务器。如果你是刚开始使用 NGINX,监控下面提供的大部分或全部指标,可以让你很好地了解你的网络基础设施的健康和活跃程度:

- [已丢弃的连接][12]
- [每秒请求数][13]
- [服务器错误率][14]
- [请求处理时间][15]

最终,你会学到更多、更专业的衡量指标,尤其是关于你自己的基础设施和使用情况的。当然,监控哪一项指标将取决于你可用的工具。参见相关的文章来[逐步指导你的指标收集][6],不管你使用 NGINX 还是 NGINX Plus。

在 Datadog 中,我们已经集成了 NGINX 和 NGINX Plus,这样你就可以以最少的设置来收集和监控所有 Web 服务器的指标。[在本文中][17]了解如何用 Datadog 来监控 NGINX,并开始[免费试用 Datadog][18]吧。

### 诚谢 ###

在文章发表之前,非常感谢 NGINX 团队审阅这篇文章,并提供重要的反馈和说明。

--------------------------------------------------------------------------------

via: https://www.datadoghq.com/blog/how-to-monitor-nginx/

作者:K Young
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:http://nginx.org/en/
[2]:http://nginx.com/resources/glossary/reverse-proxy-server/
[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/
[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections
[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state
[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[9]:https://en.wikipedia.org/wiki/Reverse_proxy
[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/
[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second
[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate
[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up
[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md
[20]:https://github.com/DataDog/the-monitor/issues
无忧之道:Docker中容器的备份、恢复和迁移
================================================================================
今天,我们将学习如何快速地对docker容器进行备份、恢复和迁移。[Docker][1]是一个开源平台,用于自动化部署应用,以通过快捷的途径在称之为容器的轻量级软件层下打包、发布和运行这些应用。它使得应用平台独立,因为它扮演了Linux上一个额外的操作系统级虚拟化的自动化抽象层。它通过其组件cgroups和命名空间利用Linux内核的资源分离特性,达到避免虚拟机开销的目的。它使得用于部署和扩展web应用、数据库和后端服务的大规模构建组件无需依赖于特定的堆栈或供应者。

所谓的容器,就是那些创建自Docker镜像的软件层,它包含了独立的Linux文件系统和开箱即用的应用程序。如果我们有一个在机器中运行着的Docker容器,并且想要备份这些容器以便今后使用,或者想要迁移这些容器,那么,本教程将帮助你掌握在Linux操作系统中备份、恢复和迁移Docker容器的方法。

我们怎样才能在Linux中备份、恢复和迁移Docker容器呢?这里为您提供了一些便捷的步骤。

### 1. 备份容器 ###

首先,为了备份Docker中的容器,我们会想看看我们想要备份的容器列表。要达成该目的,我们需要在我们运行着Docker引擎,并已创建了容器的Linux机器中运行 docker ps 命令。

    # docker ps



在此之后,我们要选择我们想要备份的容器,然后去创建该容器的快照。我们可以使用 docker commit 命令来创建快照。

    # docker commit -p 30b8f18f20b4 container-backup



该命令会生成一个作为Docker镜像的容器快照,我们可以通过运行 `docker images` 命令来查看Docker镜像,如下。

    # docker images



正如我们所看见的,上面做的快照已经作为Docker镜像保存了。现在,为了备份该快照,我们有两个选择:一个是我们可以登录进Docker注册中心,并推送该镜像;另一个是我们可以将Docker镜像打包成tar包备份,以供今后使用。

如果我们想要在[Docker注册中心][2]上传或备份镜像,我们只需要运行 docker login 命令来登录进Docker注册中心,然后推送所需的镜像即可。



如果我们不想备份到docker注册中心,而是想要将此镜像保存在本地机器中,以供日后使用,那么我们可以将其作为tar包备份。要完成该操作,我们需要运行以下 `docker save` 命令。

    # docker save -o ~/container-backup.tar container-backup



要验证tar包是否已经生成,我们只需要在保存tar包的目录中运行 ls 命令即可。

### 2. 恢复容器 ###

接下来,在我们成功备份了我们的Docker容器后,我们现在来恢复这些制作了Docker镜像快照的容器。如果我们已经在注册中心推送了这些Docker镜像,那么我们仅仅需要把那个Docker镜像拖回并直接运行即可。

    # docker pull arunpyasi/container-backup:test



但是,如果我们将这些Docker镜像作为tar包文件备份到了本地,那么我们只要使用 docker load 命令,后面加上tar包的备份路径,就可以加载该Docker镜像了。

    # docker load -i ~/container-backup.tar

    # docker images

在镜像被加载后,我们将用加载的镜像去运行Docker容器。

    # docker run -d -p 80:80 container-backup

### 3. 迁移Docker容器 ###

迁移容器同时涉及到了上面两个操作,备份和恢复。我们可以将任何一个Docker容器从一台机器迁移到另一台机器。在迁移过程中,首先我们将把容器备份为Docker镜像快照。然后,该Docker镜像或者是被推送到了Docker注册中心,或者被作为tar包文件保存到了本地。如果我们将镜像推送到了Docker注册中心,我们简单地从任何我们想要的机器上使用 docker run 命令来恢复并运行该容器。但是,如果我们将镜像打包成tar包备份到了本地,我们只需要拷贝或移动该镜像到我们想要的机器上,加载该镜像并运行需要的容器即可。
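把整个流程串起来,一次基于tar包的迁移大致如下(示意;镜像名沿用上文的 container-backup,主机名为假设):

    # 在源主机上:
    docker save -o ~/container-backup.tar container-backup
    scp ~/container-backup.tar user@newhost:~/

    # 在目标主机上:
    docker load -i ~/container-backup.tar
    docker run -d -p 80:80 container-backup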
### 尾声 ###

最后,我们已经学习了如何快速地备份、恢复和迁移Docker容器,本教程适用于各个可以成功运行Docker的操作系统平台。真的,Docker是一个相当简单易用,然而功能却十分强大的工具。它的命令相当易记,这些命令都非常短,带有许多简单而强大的标记和参数。上面的方法让我们备份容器时很是安逸,使得我们可以在日后很轻松地恢复它们。这会帮助我们恢复我们的容器和镜像,即便主机系统崩溃,甚至意外地被清除。如果你还有很多问题、建议、反馈,请在下面的评论框中写出来吧,可以帮助我们改进或更新我们的内容。谢谢大家!享受吧 :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/backup-restore-migrate-containers-docker/

作者:[Arun Pyasi][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
我试着在Ubuntu中安装Emerald图标主题,而这个主题被打包成了.7z归档包。和以往一样,我试着通过在GUI中右击并选择“提取到这里”来将它解压缩。但是Ubuntu 15.04却并没有解压文件,取而代之的,却是丢给了我一个下面这样的错误信息:

> Could not open this file
>
> 无法打开该文件
>
> There is no command installed for 7-zip archive files. Do you want to search for a command to open this file?
>
> 没有安装用于7-zip归档文件的命令。你是否想要搜索用来打开该文件的命令?

错误信息看上去是这样的:
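解决办法通常很直接(示意;Ubuntu 下的 7z 支持由 p7zip 软件包提供):

    sudo apt-get install p7zip-full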
--------------------------------------------------------------------------------

via: http://itsfoss.com/fix-there-is-no-command-installed-for-7-zip-archive-file

作者:[Abhishek][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
什么是逻辑卷管理 LVM,如何在Ubuntu中使用?
================================================================================
> 逻辑卷管理(LVM)是每一个主流Linux发行版都含有的磁盘管理选项。无论是你需要设置存储池,还是只想动态创建分区,LVM都是你正在寻找的。

### 什么是 LVM? ###

逻辑卷管理是一个存在于磁盘/分区和操作系统之间的一个抽象层。在传统的磁盘管理中,你的操作系统寻找有哪些磁盘可用(/dev/sda、/dev/sdb等等),并且这些磁盘有哪些可用的分区(如/dev/sda1、/dev/sda2等等)。

在LVM下,磁盘和分区可以抽象成一个含有多个磁盘和分区的设备。你的操作系统将不会知道这些区别,因为LVM只会给操作系统展示你设置的卷组(磁盘)和逻辑卷(分区)。

因为卷组和逻辑卷并不物理地对应到硬盘,因此可以很容易地动态调整和创建新的磁盘和分区。除此之外,LVM带来了你的文件系统所不具备的功能。比如,ext3不支持实时快照,但是如果你正在使用LVM,你可以在不卸载磁盘的情况下做一个逻辑卷的快照。

### 你什么时候该使用LVM? ###

在使用LVM之前首先得考虑的一件事是你要用你的磁盘和分区来做什么。注意,一些发行版如Fedora已经默认安装了LVM。

如果你使用的是一台只有一块磁盘的Ubuntu笔记本电脑,并且你不需要像实时快照这样的扩展功能,那么你或许不需要LVM。如果你想要轻松地扩展,或者想要将多块磁盘组成一个存储池,那么LVM或许正是你所寻找的。

### 在Ubuntu中设置LVM ###

使用LVM首先要了解的一件事是,没有一个简单的方法可以将已有的传统分区转换成逻辑卷。可以将数据移到一个使用LVM的新分区下,但是这并不会在本篇中提到;在这里,我们将全新安装一台Ubuntu 10.10来设置LVM。(LCTT 译注:本文针对的是较老的版本,新的版本已经不需如此麻烦了)

要使用LVM安装Ubuntu你需要使用另外的安装CD。从下面的链接中下载并烧录到CD中,或者[使用unetbootin创建一个USB盘][1]。

--------------------------------------------------------------------------------

via: http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and-

作者:[How-To Geek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
使用 Find 命令来帮你找到那些需要清理的文件
================================================================================


*Credit: Sandra H-S*

有一个问题几乎困扰着所有的文件系统 -- 包括 Unix 和其他的 -- 那就是文件的不断积累。几乎没有人愿意花时间清理掉他们不再使用的文件和整理文件系统。结果,文件变得很混乱,很难找到有用的东西,要使它们运行良好、维护备份、易于管理,将是一种持久的挑战。

我见过的一种解决问题的方法是建议使用者为所有的数据碎屑创建一个文件集合的总结报告或“概况”,来报告诸如所有的文件数量,最老的、最新的、最大的文件,并统计谁拥有这些文件等数据。如果有人看到一个五年前的、包含五十万个文件的文件夹,他们可能会去删除那些文件 -- 或者,至少会归档和压缩。主要问题是太大的文件夹会使人担心误删一些重要的东西。如果有一个描述文件夹的方法能帮助显示文件的性质,那么你就可以去清理它了。

当我准备做 Unix 文件系统的总结报告时,几个有用的 Unix 命令能提供一些非常有用的统计信息。要计算目录中的文件数,你可以使用这样一个 find 命令。

    $ find . -type f | wc -l
    187534

虽然查找最老的和最新的文件要复杂一些,但还是相当方便的。在下面的第一个命令中,我们使用 find 命令再次查找文件,以文件时间排序并按年-月-日的格式显示,在列表顶部的显然是最老的。

在第二个命令中,我们做同样的事情,但打印的是最后一行,也就是最新的。

    $ find -type f -printf '%T+ %p\n' | sort | head -n 1
    2006-02-03+02:40:33 ./skel/.xemacs/init.el
    $ find -type f -printf '%T+ %p\n' | sort | tail -n 1
    2015-07-19+14:20:16 ./.bash_history

-printf 选项输出 %T(文件日期和时间)和 %p(带路径的文件名)。

如果我们在查找家目录时,无疑会发现,history 文件(如 .bash_history)是最新的,这并没有什么用。你可以通过 "un-grepping" 来忽略这些文件,也就是忽略以 . 开头的文件,如下所示:

    $ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1
    2015-07-19+13:02:12 ./isPrime

寻找最大的文件使用 %s(大小)参数,还包括了文件名(%f),因为这就是我们想要在报告中显示的。

    $ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1
    20183040 project.org.tar

统计文件的所有者,使用 %u(所有者):

    $ find -type f -printf '%u \n' | grep -v "\./\." | sort | uniq -c
    180034 shs
    7500 jdoe

如果文件系统能记录上次的访问日期,也将是非常有用的,可以用来看该文件有没有被访问过,比方说,两年之内没访问过。这将使你能明确分辨这些文件的价值。最后访问(%a)参数这样使用:

    $ find -type f -printf '%a+ %p\n' | sort | head -n 1
    Fri Dec 15 03:00:30 2006+ ./statreport

当然,如果大多数最近访问的文件也是在很久之前的,这看起来你需要处理更多文件了。

    $ find -type f -printf '%a+ %p\n' | sort | tail -n 1
    Wed Nov 26 03:00:27 2007+ ./my-notes

要想层次分明,可以为一个文件系统或大目录创建一个总结报告,显示这些文件的日期范围、最大的文件、文件所有者们、最老的文件和最新访问时间,可以帮助文件拥有者判断当前有哪些文件夹是重要的、哪些该清理了。
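把上面的零散命令拼在一起,就得到一份简易的“概况”报告脚本(示意;目录参数与输出格式可按需调整):

    #!/bin/bash
    # profile.sh -- 对指定目录生成文件概况报告
    dir=${1:-.}
    cd "$dir" || exit 1
    echo "文件总数: $(find . -type f | wc -l)"
    echo "最老的文件: $(find . -type f -printf '%T+ %p\n' | sort | head -n 1)"
    echo "最新的文件: $(find . -type f -printf '%T+ %p\n' | sort | tail -n 1)"
    echo "最大的文件: $(find . -type f -printf '%s %f\n' | sort -n | tail -n 1)"
    echo "按所有者统计:"
    find . -type f -printf '%u\n' | sort | uniq -c | sort -rn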
--------------------------------------------------------------------------------

via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html

作者:[Sandra Henry-Stocker][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/
选择成为软件开发工程师的5个原因
================================================================================



这个星期我将给本地一所高中做一次有关于程序猿是怎样工作的演讲。我是志愿(由 [Transfer][1] 组织的)来到这所学校谈论我的工作的。这个学校本周将有一个技术主题日,并且他们很想听听科技行业是怎样工作的。因为我是从事软件开发的,这也是我将和学生们讲的内容。演讲的其中一部分是我为什么觉得软件开发是一个很酷的职业。主要原因如下:

### 5个原因 ###

**1、创造性**

如果你问别人创造性的工作有哪些,别人通常会说像作家、音乐家或者画家那样的工作。但是极少有人知道软件开发也是一项非常具有创造性的工作。它是最符合创造性定义的了,因为你创造了一个以前没有的新功能。这种解决方案可以在整体和细节上以很多形式来展现。我们经常会遇到一些需要做权衡的场景(比如说运行速度与内存消耗的权衡),当然前提是这种解决方案必须是正确的。这些所有的行为都是需要强大的创造性的。

**2、协作性**

另外一个表象是程序猿们独自坐在他们的电脑前,然后撸一天的代码。但是软件开发事实上通常总是一个团队努力的结果。你会经常和你的同事讨论编程问题以及解决方案,并且和产品经理、测试人员、客户讨论需求以及其他问题。经常有人说,结对编程(2个开发人员一起在一个电脑上编程)是一种流行的最佳实践。

**3、高需性**

世界上越来越多的人在用软件,正如 [Marc Andreessen](https://en.wikipedia.org/wiki/Marc_Andreessen) 所说“[软件正在吞噬世界][2]”。虽然程序猿现在的数量非常巨大(在斯德哥尔摩,程序猿现在是[最普遍的职业][3]),但是,需求量一直处于供不应求的局面。据软件公司说,他们最大的挑战之一就是[找到优秀的程序猿][4]。我也经常接到那些想让我跳槽的招聘人员打来的电话。我知道至少除软件行业之外的其他行业的雇主不会那么拼(地去招聘)。

**4、高酬性**

软件开发可以带来不菲的收入。卖一份你已经开发好的软件的额外副本是没有[边际成本][5]的。这个事实与对程序猿的高需求意味着收入相当可观。当然还有许多更捞金的职业,但是相比一般人群,我认为软件开发者确实“日进斗金”(知足吧!骚年~~)。

**5、前瞻性**

有许多工作岗位消失,往往是由于它们可以被计算机和软件代替。但是所有这些新的程序依然需要开发和维护,因此,程序猿的前景还是相当好的。

### 但是... ###

**外包又是怎么一回事呢?**

难道所有外包到其他国家的软件开发的薪水都很低吗?这是一个理想丰满、现实骨感的例子(有点像[瀑布开发模型][6])。软件开发基本上跟设计的工作一样,是一个探索发现的工作,它受益于强有力的合作。更进一步说,特别当你的主打产品是软件的时候,你所掌握的开发知识是绝对的优势。知识在整个公司中分享得越容易,那么公司的发展也将越来越好。

换一种方式去看待这个问题:软件外包已经存在了相当一段时间了,但是对本土程序猿的需求量依旧非常高。因为许多软件公司看到了雇佣本土程序猿带来的收益要远远超过相对较高的成本(其实还是赚了)。

### 如何成为人生大赢家 ###

虽然我有许多我认为软件开发是一件非常有趣的事情的理由(详情见:[为什么我热爱编程][7]),但是这些理由,并不适用于所有人。幸运的是,尝试编程是一件非常容易的事情。在互联网上有数不尽的学习编程的资源。例如,[Coursera][8] 和 [Udacity][9] 都拥有很好的入门课程。如果你从来没有撸过码,可以尝试其中一个免费的课程,找找感觉。

寻找一个既热爱又能谋生的事情至少有两个好处。首先,由于你天天去做,工作将比你简单地只为谋生要有趣得多。其次,如果你真的非常喜欢,你将更加擅长它。我非常喜欢下面一副关于伟大工作组成的韦恩图(作者 [@eskimon][10])。因为编码的薪水确实相当不错,我认为如果你真的喜欢它,你将有一个很好的机会,成为人生的大赢家!



--------------------------------------------------------------------------------

via: http://henrikwarne.com/2014/12/08/5-reasons-why-software-developer-is-a-great-career-choice/

作者:[Henrik Warne][a]
译者:[mousycoder](https://github.com/mousycoder)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://henrikwarne.com/
[1]:http://www.transfer.nu/omoss/transferinenglish.jspx?pageId=23
[2]:http://www.wsj.com/articles/SB10001424053111903480904576512250915629460
[3]:http://www.di.se/artiklar/2014/6/12/jobbet-som-tar-over-landet/
[4]:http://computersweden.idg.se/2.2683/1.600324/examinationstakten-racker-inte-for-branschens-behov
[5]:https://en.wikipedia.org/wiki/Marginal_cost
[6]:https://en.wikipedia.org/wiki/Waterfall_model
[7]:http://henrikwarne.com/2012/06/02/why-i-love-coding/
[8]:https://www.coursera.org/
[9]:https://www.udacity.com/
[10]:https://eskimon.wordpress.com/about/
martin

Interview: Larry Wall
================================================================================
> Perl 6 has been 15 years in the making, and is now due to be released at the end of this year. We speak to its creator to find out what’s going on.

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxvoice.com/author/mike/
sources/talk/20150806 5 heroes of the Linux world.md (new file, 100 lines)
Translating by ZTinoZ

5 heroes of the Linux world
================================================================================
Who are these people, seen and unseen, whose work affects all of us every day?


Image courtesy [Christopher Michel/Flickr][1]

### High-flying penguins ###

Linux and open source is driven by passionate people who write best-of-breed software and then release the code to the public so anyone can use it, without any strings attached. (Well, there is one string attached, and that's the licence.)

Who are these people? These heroes of the Linux world, whose work affects all of us every day. Allow me to introduce you.


Image courtesy Swapnil Bhartiya

### Klaus Knopper ###

Klaus Knopper, an Austrian developer who lives in Germany, is the founder of Knoppix and Adriana Linux, which he developed for his blind wife.

Knoppix holds a very special place in the hearts of those Linux users who started using Linux before Ubuntu came along. What makes Knoppix so special is that it popularized the concept of the Live CD. Unlike Windows or Mac OS X, you could run the entire operating system from the CD without installing anything on the system. It allowed new users to test Linux on their systems without formatting the hard drive. The live feature of Linux alone contributed heavily to its popularity.


Image courtesy [Fórum Internacional Software Live/Flickr][2]

### Lennart Poettering ###

Lennart Poettering is yet another genius from Germany. He has written so many core components of a Linux (as well as BSD) system that it's hard to keep track. Most of his work is towards the successors of aging or broken components of the Linux system.

Poettering wrote the modern init system systemd, which shook the Linux world and created a [rift in the Debian community][3].

While Linus Torvalds has no problems with systemd, and praises it, he is not a huge fan of the way systemd developers (including the co-author Kay Sievers) respond to bug reports and criticism. At one point Linus said on the LKML (Linux Kernel Mailing List) that he would [never work with Sievers][4].

Lennart is also the author of Pulseaudio, a sound server for Linux, and Avahi, a zero-configuration networking (zeroconf) implementation.


Image courtesy [Meego Com/Flickr][5]

### Jim Zemlin ###

Jim Zemlin isn't a developer, but as founder of The Linux Foundation he is certainly one of the most important figures of the Linux world.

In 2007, The Linux Foundation was formed as a result of a merger between two open source bodies: the Free Standards Group and the Open Source Development Labs. Zemlin was the executive director of the Free Standards Group. Post-merger, Zemlin became the executive director of The Linux Foundation and has held that position since.

Under his leadership, The Linux Foundation has become a central figure in the modern IT world and plays a very critical role in the Linux ecosystem. In order to ensure that key developers like Torvalds and Kroah-Hartman can focus on Linux, the foundation sponsors them as fellows.

Zemlin also made the foundation a bridge between companies so they can collaborate on Linux while at the same time competing in the market. The foundation also organizes many conferences around the world and [offers many courses for Linux developers][6].

People may think of Zemlin as Linus Torvalds' boss, but he refers to himself as "Linus Torvalds' janitor."


Image courtesy [Coscup/Flickr][7]

### Greg Kroah-Hartman ###

Greg Kroah-Hartman is known as the second-in-command of the Linux kernel. The 'gentle giant' is the maintainer of the stable branch of the kernel and of the staging subsystem, USB, driver core, debugfs, kref, kobject, and the [sysfs][8] kernel subsystems, along with many other components of a Linux system.

He is also credited for device drivers for Linux. One of his jobs is to travel around the globe, meet hardware makers and persuade them to make their drivers available for Linux. The next time you plug some random USB device into your system and it works out of the box, thank Kroah-Hartman. (Don't thank the distro. Some distros try to take credit for the work Kroah-Hartman or the Linux kernel did.)

Kroah-Hartman previously worked for Novell and then joined the Linux Foundation as a fellow, alongside Linus Torvalds.

Kroah-Hartman is the total opposite of Linus and never rants (at least publicly). There was a ripple once when he stated that [Canonical doesn't contribute much to the Linux kernel][9].

On a personal level, Kroah-Hartman is extremely helpful to new developers and users and is easily accessible.


Image courtesy Swapnil Bhartiya

### Linus Torvalds ###

No collection of Linux heroes would be complete without Linus Torvalds. He is the author of the Linux kernel, the most used open source technology on the planet and beyond. His software powers everything from space stations to supercomputers, military drones to mobile devices and tiny smartwatches. Linus remains the authority on the Linux kernel and makes the final decision on which patches to merge into the kernel.

Linux isn't Torvalds' only contribution to open source. When he got fed up with the existing software revision control systems, which his kernel heavily relied on, he wrote his own, called Git. Git enjoys the same reputation as Linux; it is the most used version control system in the world.

Torvalds is also a passionate scuba diver, and when he found no decent dive log software for Linux, he wrote his own and called it Subsurface.

Torvalds is [well known for his rants][10] and once admitted that his ego is as big as a small planet. But he is also known for admitting his mistakes if he realizes he was wrong.

--------------------------------------------------------------------------------

via: http://www.itworld.com/article/2955001/linux/5-heros-of-the-linux-world.html

作者:[Swapnil Bhartiya][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
[1]:https://flic.kr/p/siJ25M
[2]:https://flic.kr/p/uTzj54
[3]:http://www.itwire.com/business-it-news/open-source/66153-systemd-fallout-two-debian-technical-panel-members-resign
[4]:http://www.linuxveda.com/2014/04/04/linus-torvalds-systemd-kay-sievers/
[5]:https://flic.kr/p/9Lnhpu
[6]:http://www.itworld.com/article/2951968/linux/linux-foundation-offers-cheaper-courses-and-certifications-for-india.html
[7]:https://flic.kr/p/hBv8Pp
[8]:https://en.wikipedia.org/wiki/Sysfs
[9]:https://www.youtube.com/watch?v=CyHAeGBFS8k
[10]:http://www.itworld.com/article/2873200/operating-systems/11-technologies-that-tick-off-linus-torvalds.html
@ -0,0 +1,81 @@
|
||||
Torvalds 2.0: Patricia Torvalds on computing, college, feminism, and increasing diversity in tech
|
||||
================================================================================
|
||||

|
||||
Image by : Photo by Becky Svartström. Modified by Opensource.com. [CC BY-SA 4.0][1]
|
||||
|
||||
Patricia Torvalds isn't the Torvalds name that pops up in Linux and open source circles. Yet.
|
||||
|
||||

|
||||
|
||||
At 18, Patricia is a feminist with a growing list of tech achievements, open source industry experience, and her sights set on diving into her freshman year of college at Duke University's Pratt School of Engineering. She works for [Puppet Labs][2] in Portland, Oregon, as an intern, but soon she'll head to Durham, North Carolina, to start the fall semester of college.
|
||||
|
||||
In this exclusive interview, Patricia explains what got her interested in computer science and engineering (spoiler alert: it wasn't her father), what her high school did "right" with teaching tech, the important role feminism plays in her life, and her thoughts on the lack of diversity in technology.
|
||||
|
||||

|
||||
|
||||
### What made you interested in studying computer science and engineering? ###
|
||||
|
||||
My interest in tech really grew throughout high school. I wanted to go into biology for a while, until around my sophomore year. I had a web design internship at the Portland VA after my sophomore year. And I took an engineering class called Exploratory Ventures, which sent an ROV into the Pacific ocean late in my sophomore year, but the turning point was probably when I was named a regional winner and national runner up for the [NCWIT Aspirations in Computing][3] award halfway through my junior year.
|
||||
|
||||
The award made me feel validated in my interest, of course, but I think the most important part of it was getting to join a Facebook group for all the award winners. The girls who have won the award are absolutely incredible and so supportive of each other. I was definitely interested in computer science before I won the award, because of my work in XV and at the VA, but having these girls to talk to solidified my interest and has kept it really strong. Teaching XV—more on that later—my junior and senior year, also, made engineering and computer science really fun for me.
|
||||
|
||||
### What do you plan to study? And do you already know what you want to do after college? ###
|
||||
|
||||
I hope to major in either Mechanical or Electrical and Computer Engineering as well as Computer Science, and minor in Women's Studies. After college, I hope to work for a company that supports or creates technology for social good, or start my own company.
|
||||
|
||||
### My daughter had one high school programming class—Visual Basic. She was the only girl in her class, and she ended up getting harassed and having a miserable experience. What was your experience like? ###
|
||||
|
||||
My high school began offering computer science classes my senior year, and I took Visual Basic as well! The class wasn't bad, but I was definitely one of three or four girls in the class of 20 or so students. Other computing classes seemed to have similar gender breakdowns. However, my high school was extremely small and the teacher was supportive of inclusivity in tech, so there was no harassment that I noticed. Hopefully the classes become more diverse in future years.
|
||||
|
||||
### What did your schools do right technology-wise? And how could they have been better? ###
|
||||
|
||||
My high school gave us consistent access to computers, and teachers occasionally assigned technology-based assignments in unrelated classes—we had to create a website for a social studies class a few times—which I think is great because it exposes everyone to tech. The robotics club was also pretty active and well-funded, but fairly small; I was not a member. One very strong component of the school's technology/engineering program is actually a student-taught engineering class called Exploratory Ventures, which is a hands-on class that tackles a new engineering or computer science problem every year. I taught it for two years with a classmate of mine, and have had students come up to me and tell me they're interested in pursuing engineering or computer science as a result of the class.
|
||||
|
||||
However, my high school was not particularly focused on deliberately including young women in these programs, and it isn't very racially diverse. The computing-based classes and clubs were, by a vast majority, filled with white male students. This could definitely be improved on.
|
||||
|
||||
### Growing up, how did you use technology at home? ###
|
||||
|
||||
Honestly, when I was younger I used my computer time (my dad created a tracker, which logged us off after an hour of Internet use) to play Neopets or similar games. I guess I could have tried to mess with the tracker or played on the computer without Internet use, but I just didn't. I sometimes did little science projects with my dad, and I remember once printing "Hello world" in the terminal with him a thousand times, but mostly I just played online games with my sisters and didn't get my start in computing until high school.
|
||||
|
||||
### You were active in the Feminism Club at your high school. What did you learn from that experience? What feminist issues are most important to you now? ###
|
||||
|
||||
My friend and I co-founded Feminism Club at our high school late in our sophomore year. We did receive lots of resistance to the club at first, and while that never entirely went away, by the time we graduated feminist ideals were absolutely a part of the school's culture. The feminist work we did at my high school was generally on a more immediate scale and focused on issues like the dress code.
|
||||
|
||||
Personally, I'm very focused on intersectional feminism, which is feminism as it applies to other aspects of oppression like racism and classism. The Facebook page [Guerrilla Feminism][4] is a great example of intersectional feminism and has done so much to educate me. I currently run the Portland branch.
|
||||
|
||||
Feminism is also important to me in terms of diversity in tech, although as an upper-class white woman with strong connections in the tech world, the problems here affect me much less than they do other people. The same goes for my involvement in intersectional feminism. Publications like [Model View Culture][5] are very inspiring to me, and I admire Shanley Kane so much for what she does.
|
||||
|
||||
### What advice would you give parents who want to teach their children how to program? ###
|
||||
|
||||
Honestly, nobody ever pushed me into computer science or engineering. Like I said, for a long time I wanted to be a geneticist. I got a summer internship doing web design for the VA the summer after my sophomore year and totally changed my mind. So I don't know if I can fully answer that question.
|
||||
|
||||
I do think genuine interest is important, though. If my dad had sat me down in front of the computer and told me to configure a webserver when I was 12, I don't think I'd be interested in computer science. Instead, my parents gave me a lot of free rein to do what I wanted, which was mostly coding terrible little HTML sites for my Neopets. Neither of my younger sisters is interested in engineering or computer science, and my parents don't care. I'm really lucky my parents have given me and my sisters the encouragement and resources to explore our interests.
|
||||
|
||||
Still, I grew up saying my future career would be "like my dad's"—even when I didn't know what he did. He has a pretty cool job. Also, one time when I was in middle school, I told him that and he got a little choked up and said I wouldn't think that in high school. So I guess that motivated me a bit.
|
||||
|
||||
### What suggestions do you have for leaders in open source communities to help them attract and maintain a more diverse mix of contributors? ###
|
||||
|
||||
I'm actually not active in particular open source communities. I feel much more comfortable discussing computing with other women; I'm a member of the [NCWIT Aspirations in Computing][6] network and it's been one of the most important aspects of my continued interest in technology, as well as the Facebook group [Ladies Storm Hackathons][7].
|
||||
|
||||
I think this applies well to attracting and maintaining a talented and diverse mix of contributors: Safe spaces are important. I have seen the misogynistic and racist comments made in some open source communities, and subsequent dismissals when people point out the issues. I think that in maintaining a professional community there have to be strong standards on what constitutes harassment or inappropriate conduct. Of course, people can—and will—have a variety of opinions on what they should be able to express in open source communities, or any community. However, if community leaders actually want to attract and maintain diverse talent, they need to create a safe space and hold community members to high standards.
|
||||
|
||||
I also think that some community leaders just don't value diversity. It's really easy to argue that tech is a meritocracy, and the reason there are so few marginalized people in tech is just that they aren't interested, and that the problem comes from earlier on in the pipeline. They argue that if someone is good enough at their job, their gender or race or sexual orientation doesn't matter. That's the easy argument. But I was raised not to make excuses for mistakes. And I think the lack of diversity is a mistake, and that we should be taking responsibility for it and actively trying to make it better.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://opensource.com/life/15/8/patricia-torvalds-interview
|
||||
|
||||
作者:[Rikki Endsley][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://opensource.com/users/rikki-endsley
|
||||
[1]:https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[2]:https://puppetlabs.com/
|
||||
[3]:https://www.aspirations.org/
|
||||
[4]:https://www.facebook.com/guerrillafeminism
|
||||
[5]:https://modelviewculture.com/
|
||||
[6]:https://www.aspirations.org/
|
||||
[7]:https://www.facebook.com/groups/LadiesStormHackathons/
|
@ -1,68 +0,0 @@
|
||||
|
||||
How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots
|
||||
================================================================================
|
||||

|
||||
|
||||
Ubuntu’s installer offers an easy “Use LVM” checkbox. The description says it enables Logical Volume Management so you can take snapshots and more easily resize your hard disk partitions — here’s how to do that.
|
||||
|
||||
LVM is a technology that’s similar to [RAID arrays][1] or [Storage Spaces on Windows][2] in some ways. While this technology is particularly useful on servers, it can be used on desktop PCs, too.
|
||||
|
||||
### Should You Use LVM With Your New Ubuntu Installation? ###
|
||||
|
||||
The first question is whether you even want to use LVM with your Ubuntu installation. Ubuntu makes this easy to enable with a quick click, but this option isn’t enabled by default. As the installer says, this allows you to resize partitions, create snapshots, merge multiple disks into a single logical volume, and so on — all while the system is running. Unlike with typical partitions, you don’t have to shut down your system, boot from a live CD or USB drive, and [resize your partitions while they aren’t in use][3].
|
||||
|
||||
To be perfectly honest, the average Ubuntu desktop user probably won’t realize whether they’re using LVM or not. But, if you want to do more advanced things later, LVM can help. LVM is potentially more complex, which could cause problems if you need to recover your data later — especially if you’re not that experienced with it. There shouldn’t be a noticeable performance penalty here — LVM is implemented right down in the Linux kernel.
|
||||
|
||||

|
||||
|
||||
### Logical Volume Management Explained ###
|
||||
|
||||
We’ve previously [explained what LVM is][4]. In a nutshell, it provides a layer of abstraction between your physical disks and the partitions presented to your operating system. For example, your computer might have two hard drives inside it, each 1 TB in size. You’d have to have at least two partitions on these disks, and each of these partitions would be limited to 1 TB in size.
|
||||
|
||||
LVM provides a layer of abstraction over this. Instead of the traditional partition on a disk, LVM would treat the disks as two separate “physical volumes” after you initialize them. You could then create “logical volumes” based on these physical volumes. For example, you could combine those two 1 TB disks into a single 2 TB partition. Your operating system would just see a 2 TB volume, and LVM would deal with everything in the background. A group of physical volumes and logical volumes is known as a “volume group.” A typical system will just have a single volume group.
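
To make that example concrete, here is a minimal command-line sketch of the same setup; the device names (/dev/sdb, /dev/sdc) and the volume group name are illustrative assumptions, not anything Ubuntu creates for you:

    # Mark both drives as LVM physical volumes
    $ sudo pvcreate /dev/sdb /dev/sdc

    # Combine them into a single volume group
    $ sudo vgcreate example-vg /dev/sdb /dev/sdc

    # Carve one 2 TB logical volume out of the group and put a filesystem on it
    $ sudo lvcreate -l 100%FREE -n example-lv example-vg
    $ sudo mkfs.ext4 /dev/example-vg/example-lv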
|
||||
|
||||
This layer of abstraction makes it possible to easily resize partitions, combine multiple disks into a single volume, and even take “snapshots” of a partition’s file system while it’s running, all without unmounting it.
|
||||
|
||||
Note that merging multiple disks into a single volume can be a bad idea if you’re not creating backups. It’s like with RAID 0 — if you combine two 1 TB volumes into a single 2 TB volume, you could lose important data on the volume if just one of your hard disks fails. Backups are crucial if you go this route.
|
||||
|
||||
### Graphical Utilities for Managing Your LVM Volumes ###
|
||||
|
||||
Traditionally, [LVM volumes are managed with Linux terminal commands][5]. These will work for you on Ubuntu, but there’s an easier, graphical method anyone can take advantage of. If you’re a Linux user used to using GParted or a similar partition manager, don’t bother — GParted doesn’t have support for LVM disks.
|
||||
|
||||
Instead, you can use the Disks utility included along with Ubuntu for this. This utility is also known as GNOME Disk Utility, or Palimpsest. Launch it by clicking the icon on the dash, searching for Disks, and pressing Enter. Unlike GParted, the Disks utility will display your LVM partitions under “Other Devices,” so you can format them and adjust other options if you need to. This utility also works from a live CD or USB drive.
|
||||
|
||||

|
||||
|
||||
Unfortunately, the Disks utility doesn’t include support for taking advantage of LVM’s most powerful features. There are no options for managing your volume groups, extending partitions, or taking snapshots. You could do that from the terminal, but you don’t have to. Instead, you can open the Ubuntu Software Center, search for LVM, and install the Logical Volume Management tool. You could also just run the **sudo apt-get install system-config-lvm** command in a terminal window. After it’s installed, you can open the Logical Volume Management utility from the dash.
|
||||
|
||||
This graphical configuration tool was made by Red Hat. It’s a bit dated, but it’s the only graphical way to do this stuff without resorting to terminal commands.
|
||||
|
||||
Let’s say you wanted to add a new physical volume to your volume group. You’d open the tool, select the new disk under Uninitialized Entries, and click the “Initialize Entry” button. You’d then find the new physical volume under Unallocated Volumes, and you could use the “Add to existing Volume Group” button to add it to the “ubuntu-vg” volume group Ubuntu created during the installation process.
|
||||
|
||||

|
||||
|
||||
The volume group view shows you a visual overview of your physical volumes and logical volumes. Here, we have two physical partitions across two separate hard drives. We have a swap partition and a root partition, just as Ubuntu sets up its partitioning scheme by default. Because we’ve added a second physical partition from another drive, there’s now a good chunk of unused space.
|
||||
|
||||

|
||||
|
||||
To expand a logical partition into the physical space, you could select it under Logical View, click Edit Properties, and modify the size to grow the partition. You could also shrink it from here.
|
||||
|
||||

|
||||
|
||||
The other options in system-config-lvm allow you to set up snapshots and mirroring. You probably won’t need these features on a typical desktop, but they’re available graphically here. Remember, you can also [do all of this with terminal commands][6].
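
For instance, a snapshot can be created and later discarded from the terminal with something like the sketch below; the snapshot name and the 10 GB reserved for copy-on-write changes are assumptions:

    # Create a snapshot of the root logical volume
    $ sudo lvcreate -s -L 10G -n root-snapshot /dev/ubuntu-vg/root

    # ...and remove it once it is no longer needed
    $ sudo lvremove /dev/ubuntu-vg/root-snapshot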
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.howtogeek.com/211937/how-to-use-lvm-on-ubuntu-for-easy-partition-resizing-and-snapshots/
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.howtogeek.com/162676/how-to-use-multiple-disks-intelligently-an-introduction-to-raid/
|
||||
[2]:http://www.howtogeek.com/109380/how-to-use-windows-8s-storage-spaces-to-mirror-combine-drives/
|
||||
[3]:http://www.howtogeek.com/114503/how-to-resize-your-ubuntu-partitions/
|
||||
[4]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
|
||||
[5]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
|
||||
[6]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
|
@ -1,3 +1,4 @@
|
||||
Translating by Ping
|
||||
How to set up a Replica Set on MongoDB
|
||||
================================================================================
|
||||
MongoDB has become the most famous NoSQL database on the market. MongoDB is document-oriented, and its schema-free design makes it a really attractive solution for all kinds of web applications. One of the features that I like the most is Replica Set, where multiple copies of the same data set are maintained by a group of mongod nodes for redundancy and high availability.
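
As a rough sketch of what a minimal replica set looks like in practice (the ports, data paths, and the set name "rs0" below are illustrative assumptions, not this tutorial's exact setup):

    # Start three mongod nodes that belong to the same replica set
    $ mongod --replSet rs0 --port 27017 --dbpath /srv/mongodb/rs0-0 --fork --logpath /var/log/mongodb/rs0-0.log
    $ mongod --replSet rs0 --port 27018 --dbpath /srv/mongodb/rs0-1 --fork --logpath /var/log/mongodb/rs0-1.log
    $ mongod --replSet rs0 --port 27019 --dbpath /srv/mongodb/rs0-2 --fork --logpath /var/log/mongodb/rs0-2.log

    # Initiate the set from any one node; the members then elect a primary
    $ mongo --port 27017 --eval 'rs.initiate({_id: "rs0", members: [
        {_id: 0, host: "localhost:27017"},
        {_id: 1, host: "localhost:27018"},
        {_id: 2, host: "localhost:27019"}]})'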
|
||||
@ -179,4 +180,4 @@ via: http://xmodulo.com/setup-replica-set-mongodb.html
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/valerio
|
||||
[1]:http://docs.mongodb.org/ecosystem/drivers/
|
||||
|
@ -1,3 +1,4 @@
|
||||
translating wi-cuckoo
|
||||
Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management
|
||||
================================================================================
|
||||
**Shilpa Nair graduated in 2015. She applied for a trainee position at a national news television network located in Noida, Delhi. While in the last year of her degree, searching for help on her assignments, she came across Tecmint, and she has been visiting it regularly ever since.**
|
||||
@ -345,4 +346,4 @@ via: http://www.tecmint.com/linux-rpm-package-management-interview-questions/
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
|
||||
[2]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/
|
||||
|
@ -1,52 +0,0 @@
|
||||
Translating by XLCYun.
|
||||
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 4 - GNOME Settings
|
||||
================================================================================
|
||||
### Settings ###
|
||||
|
||||
There are a few specific KDE Control modules that I am going to pick at, mostly because they are so laughably horrible compared to their Gnome counterparts that it's honestly pathetic.
|
||||
|
||||
First one up? Printers.
|
||||
|
||||

|
||||
|
||||
Gnome is on the left, KDE is on the right. You know what the difference is between the printer applet on the left, and the one on the right? When I opened up Gnome Control Center and hit "Printers", the applet popped up and nothing else happened. When I opened up KDE System Settings and hit "Printers", I got a password prompt. Before I was even allowed to LOOK at the printers I had to give up ROOT'S password.
|
||||
|
||||
Let me just reiterate that. In this, the days of PolicyKit and Logind, I am still being asked for Root's password for what should be a sudo operation. I didn't even SET UP root's password when I installed the system. I had to drop down to Konsole and run 'sudo passwd root' so that I could GIVE root a password so that I could go back into System Settings' printer applet and then give up root's password to even LOOK at what printers were available. Once I did that I got prompted for root's password AGAIN when I hit "Add Printer", then I got prompted for root's password AGAIN after I went through and selected a printer and driver. Three times I got asked for ROOT'S password just to add a printer to the system.
|
||||
|
||||
When I added a printer under Gnome I didn't get prompted for my SUDO password until I hit "Unlock" in the printer applet. I got asked once, then I never got asked again. KDE, I am begging you... Adopt Gnome's "Unlock" methodology. Do not prompt for a password until you really need one. Furthermore, whatever library is out there that allows for KDE applications to bypass PolicyKit / Logind (if it's available) and prompt directly for root... Bin that code. If this were a multi-user system I would either have to give up root's password, or be there every second of every day in order to put it in any time a user might have to update, change, or add a new printer. Both options are completely unacceptable.
|
||||
|
||||
One more thing...
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
Question to the forums: What looks cleaner to you? I had this realization when I was writing this article: Gnome's applet makes it very clear where any additional printers are going to go; they set aside a column on the left to list them. Before I added a second printer to KDE, and it suddenly grew a left-side column, I had this nightmare-image in my head of the applet just shoving another icon into the screen, with them being listed out like preview images in a folder of pictures. I was pleasantly surprised to see that I was wrong, but the fact that the applet just 'grew' another column that didn't exist before and drastically altered its presentation is not really 'good' either. It's a design that's confusing, shocking, and non-intuitive.
|
||||
|
||||
Enough about printers though... The next KDE System Setting up for my public stoning? Multimedia, a.k.a. Phonon.
|
||||
|
||||

|
||||
|
||||
As always, Gnome's on the left, KDE is on the right. Let's just run through the Gnome settings first... The eyes go left to right, top to bottom, right? So let's do the same. First up: volume control slider. The blue hint against the empty bar with 100% clearly marked removes all confusion about which way is "volume up." Immediately after the slider is an easy On/Off toggle that functions as a mute on/off switch. Points to Gnome for remembering what the volume was set to BEFORE I muted sound, and returning to that same level AFTER I press volume-up to un-mute. Kmixer, you amnesiac piece of crap, I wish I could say as much about you.
|
||||
|
||||
Moving on! Tabbed options for Output, Input and Applications? With per application volume controls within easy reach? Gnome I love you more and more with every passing second. Balance options, sound profiles, and a clearly marked "Test Speakers" option.
|
||||
|
||||
I'm not sure how this could have been implemented in a cleaner, more concise way. Yes, it's just a Gnome-ized Pavucontrol but I think that's the point. Pavucontrol got it mostly right to begin with, the Sound applet in Gnome Control Center just refines it slightly to make it even closer to perfect.
|
||||
|
||||
Phonon, you're up. And let me start by saying: What the fsck am I looking at? -I- get that I am looking at the priority list for the audio devices on the system, but the way it is presented is a bit of a nightmare. Also, where are the things the user probably cares about? A priority list is a great thing to have, it SHOULD be available, but it's something the user messes with once or twice and then never touches again. It's not important, or common, enough to warrant being front and center. Where's the volume slider? Where are the per-application controls? The things that users will be using more frequently? Well... those are under Kmix, a separate program, with its own settings and configuration... not under the System Settings... which kind of makes System Settings a bit of a misnomer. And in that same vein, let's hop over to network settings.
|
||||
|
||||

|
||||
|
||||
Presented above is the Gnome Network Settings. KDE's isn't included because of the reason I'm about to hit on. If you go to KDE's System Settings and hit any of the three options under the "Network" section, you get tons of options: Bluetooth settings, default username and password for Samba shares (Seriously, "Connectivity" only has 2 options: Username and password for SMB shares. How the fsck does THAT deserve the all-inclusive title "Connectivity"?), controls for Browser Identification (which only work for Konqueror... a dead project), proxy settings, etc... Where are my wifi settings? They aren't there. Where are they? Well, they are in the network applet's private settings... not under Network Settings...
|
||||
|
||||
KDE, you're killing me. You have "System Settings" USE IT!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4
|
||||
|
||||
作者:Eric Griffith
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,40 +0,0 @@
|
||||
Translating by XLCYun.
|
||||
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 5 - Conclusion
|
||||
================================================================================
|
||||
### User Experience and Closing Thoughts ###
|
||||
|
||||
When Gnome 2.x and KDE 4.x were going head to head... I jumped between the two quite happily. Some things I loved, some things I hated, but overall they were both a pleasure to use. Then Gnome 3.x came around and all of the drama with Gnome Shell. I swore off Gnome and avoided it every chance I could. It wasn't user friendly, it was non-intuitive, it broke an established paradigm in preparation for tablets taking over the world... a future that, judging from the dropping sales of tablets, will never come.
|
||||
|
||||
Eight releases of Gnome 3 later and the unimaginable happened. Gnome got user friendly. Gnome got intuitive. Is it perfect? Of course not. I still hate the paradigm it tries to push, I hate how it tries to force a work flow onto me, but both of those things can be gotten used to with time and patience. Once you have managed to look past Gnome Shell's alien appearance and you start interacting with it and the other parts of Gnome (Control Center especially) you see what Gnome has definitely gotten right: the little things. The attention to detail.
|
||||
|
||||
People can adapt to new paradigms, people can adapt to new work flows-- the iPhone and iPad proved that-- but what will always bother them are the paper cuts.
|
||||
|
||||
Which brings up an important distinction between KDE and Gnome. Gnome feels like a product. It feels like a singular experience. When you use it, it feels like it is complete and that everything you need is at your fingertips. It feels like THE Linux desktop in the same way that Windows or OS X have THE desktop experience: what you need is there and it was all written by the same guys working on the same team towards the same goal. Hell, even an application prompting for sudo access feels like an intentional part of the desktop under Gnome, much the way that it is under Windows. In KDE it's just some random-looking window popup that any application could have created. It doesn't feel like a part of the system stopping and going "Hey! Something has requested administrative rights! Do you want to let it go through?" in an official capacity.
|
||||
|
||||
KDE doesn't feel like a cohesive experience. KDE doesn't feel like it has a direction it's moving in; it doesn't feel like a full experience. KDE feels like it's a bunch of pieces that are moving in a bunch of different directions, that just happen to have a shared toolkit beneath them. If that's what the developers are happy with, then fine, good for them, but if the developers still have the hope of offering the best experience possible then the little stuff needs to matter. The user experience and being intuitive needs to be at the forefront of every single application, there needs to be a vision of what KDE wants to offer -and- how it should look.
|
||||
|
||||
Is there anything stopping me from using Gnome Disks under KDE? Rhythmbox? Evolution? Nope. Nope. Nope. But that misses the point. Gnome and KDE both market themselves as "Desktop Environments." They are supposed to be full -environments-, which means all the pieces come and fit together, and that you use that environment's tools because they are saying "We support everything you need to have a full desktop." Honestly? Only Gnome seems to fit the bill of being complete. KDE feels half-finished when it comes to the "coming together" part, let alone offering everything you need for a "full experience". There's no counterpart to Gnome Disks-- kpartitionmanager prompts for root. No "First Time User" run-through; it just now got a user manager in Kubuntu. Hell, Gnome even provides Maps, Notes, Calendar and Clock applications. Do all of these applications matter 100%? No, of course not. But the fact that Gnome has them helps to push the idea that Gnome is a full and complete experience.
|
||||
|
||||
My complaints about KDE are not impossible to fix, not by a long shot. But it requires people to care. It requires developers to take pride in their work beyond just function-- form counts for a whole hell of a lot. Don't take away the user's ability to configure things-- the lack of configuration is one of my biggest gripes with GNOME 3.x, but don't use "Well you can configure it however you want," as an excuse for not providing sane defaults. The defaults are what users are going to see, they are what the users are going to judge from the first moment they open your application. Make it a good impression.
|
||||
|
||||
I know the KDE developers know design matters, that is WHY the Visual Design Group exists, but it feels like they aren't using the VDG to their fullest. And therein lies KDE's hamartia. It's not that KDE can't be complete, it's not that it can't come together and fix the downfalls, it's just that they haven't. They aimed for the bullseye... but they missed.
|
||||
|
||||
And before anyone says it... Don't say "Patches are welcome." Because while I can happily submit patches for the individual annoyances, more will just keep coming as developers keep on their merry way of doing things in non-intuitive ways. This isn't about Muon not being center-aligned. This isn't about Amarok having an ugly UI. This isn't about the volume and brightness pop-up notifiers taking up a large chunk of my screen real-estate every time I hit my hotkeys (seriously, someone shrink those things).
|
||||
|
||||
This is about a mentality of apathy, this is about developers apparently not thinking things through when they make the UI for their applications. Everything the KDE Community does works fine. Amarok plays music. Dragon Player plays videos. Kwin / Qt & kdelibs is seemingly more power efficient than Mutter / gtk (according to my battery life times; non-scientific testing). Those things are all well and good, and important... but the presentation matters too. Arguably, the presentation matters the most because that is what users see and interact with.
|
||||
|
||||
To KDE application developers... Get the VDG involved. Make every single 'core' application get its design vetted and approved by the VDG, have a UI/UX expert from the VDG go through the usage patterns and usage flow of your application to make sure it's intuitive. Hell, even just posting a mock-up to the VDG forums and asking for feedback would probably get you some nice pointers and feedback for whatever application you're working on. You have this great resource there, now actually use it.
|
||||
|
||||
I am not trying to sound ungrateful. I love KDE, I love the work and effort that volunteers put into giving Linux users a viable desktop, and an alternative to Gnome. And it is because I care that I write this article. Because I want to see KDE excel, I want to see it go further and farther than it has before. But doing that requires work on everyone's part, and it requires that people don't hold back criticism. It requires that people are honest about their interaction with the system and where it falls apart. If we can't give direct criticism, if we can't say "This sucks!" then it will never get better.
|
||||
|
||||
Will I still use Gnome after this week? Probably not, no. Gnome is still trying to force a workflow on me that I don't want to follow or abide by, and I feel less productive when I'm using it because it doesn't follow my paradigm. For my friends, though, when they ask me "What desktop environment should I use?" I'm probably going to recommend Gnome, especially if they are less technical users who want things to "just work." And that is probably the most damning assessment I could make in regards to the current state of KDE.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5
|
||||
|
||||
作者:Eric Griffith
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,409 +0,0 @@
|
||||
translation by strugglingyouth
|
||||
How to monitor NGINX - Part 1
|
||||
================================================================================
|
||||

|
||||
|
||||
### What is NGINX? ###
|
||||
|
||||
[NGINX][1] (pronounced “engine X”) is a popular HTTP server and reverse proxy server. As an HTTP server, NGINX serves static content very efficiently and reliably, using relatively little memory. As a [reverse proxy][2], it can be used as a single, controlled point of access for multiple back-end servers or for additional applications such as caching and load balancing. NGINX is available as a free, open-source product or in a more full-featured, commercially distributed version called NGINX Plus.
|
||||
|
||||
NGINX can also be used as a mail proxy and a generic TCP proxy, but this article does not directly address NGINX monitoring for these use cases.
|
||||
|
||||
### Key NGINX metrics ###
|
||||
|
||||
By monitoring NGINX you can catch two categories of issues: resource issues within NGINX itself, and also problems developing elsewhere in your web infrastructure. Some of the metrics most NGINX users will benefit from monitoring include **requests per second**, which provides a high-level view of combined end-user activity; **server error rate**, which indicates how often your servers are failing to process seemingly valid requests; and **request processing time**, which describes how long your servers are taking to process client requests (and which can point to slowdowns or other problems in your environment).
|
||||
|
||||
More generally, there are at least three key categories of metrics to watch:
|
||||
|
||||
- Basic activity metrics
|
||||
- Error metrics
|
||||
- Performance metrics
|
||||
|
||||
Below we’ll break down a few of the most important NGINX metrics in each category, as well as metrics for a fairly common use case that deserves special mention: using NGINX Plus for reverse proxying. We will also describe how you can monitor all of these metrics with your graphing or monitoring tools of choice.
|
||||
|
||||
This article references metric terminology [introduced in our Monitoring 101 series][3], which provides a framework for metric collection and alerting.
|
||||
|
||||
#### Basic activity metrics ####
|
||||
|
||||
Whatever your NGINX use case, you will no doubt want to monitor how many client requests your servers are receiving and how those requests are being processed.
|
||||
|
||||
NGINX Plus can report basic activity metrics exactly like open-source NGINX, but it also provides a secondary module that reports metrics slightly differently. We discuss open-source NGINX first, then the additional reporting capabilities provided by NGINX Plus.
|
||||
|
||||
**NGINX**
|
||||
|
||||
The diagram below shows the lifecycle of a client connection and how the open-source version of NGINX collects metrics during a connection.
|
||||
|
||||

|
||||
|
||||
Accepts, handled, and requests are ever-increasing counters. Active, waiting, reading, and writing grow and shrink with request volume.
|
||||
|
||||
Note: table
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left;"><strong>Name</strong></th>
|
||||
<th style="text-align: left;"><strong>Description</strong></th>
|
||||
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">accepts</td>
|
||||
<td style="text-align: left;">Count of client connections attempted by NGINX</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">handled</td>
|
||||
<td style="text-align: left;">Count of successful client connections</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">active</td>
|
||||
<td style="text-align: left;">Currently active client connections</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">dropped (calculated)</td>
|
||||
<td style="text-align: left;">Count of dropped connections (accepts – handled)</td>
|
||||
<td style="text-align: left;">Work: Errors*</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">requests</td>
|
||||
<td style="text-align: left;">Count of client requests</td>
|
||||
<td style="text-align: left;">Work: Throughput</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td colspan="3" style="text-align: left;">*<em>Strictly speaking, dropped connections is <a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics">a metric of resource saturation</a>, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as <a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics">a work metric</a>.</em></td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
The **accepts** counter is incremented when an NGINX worker picks up a request for a connection from the OS, whereas **handled** is incremented when the worker actually gets a connection for the request (by establishing a new connection or reusing an open one). These two counts are usually the same—any divergence indicates that connections are being **dropped**, often because a resource limit, such as NGINX’s [worker_connections][4] limit, has been reached.
|
||||
|
||||
Once NGINX successfully handles a connection, the connection moves to an **active** state, where it remains as client requests are processed:
|
||||
|
||||
Active state
|
||||
|
||||
- **Waiting**: An active connection may also be in a Waiting substate if there is no active request at the moment. New connections can bypass this state and move directly to Reading, most commonly when using “accept filter” or “deferred accept”, in which case NGINX does not receive notice of work until it has enough data to begin working on the response. Connections will also be in the Waiting state after sending a response if the connection is set to keep-alive.
|
||||
- **Reading**: When a request is received, the connection moves out of the waiting state, and the request itself is counted as Reading. In this state NGINX is reading a client request header. Request headers are lightweight, so this is usually a fast operation.
|
||||
- **Writing**: After the request is read, it is counted as Writing, and remains in that state until a response is returned to the client. That means that the request is Writing while NGINX is waiting for results from upstream systems (systems “behind” NGINX), and while NGINX is operating on the response. Requests will often spend the majority of their time in the Writing state.
|
||||
|
||||
Often a connection will only support one request at a time. In this case, the number of Active connections == Waiting connections + Reading requests + Writing requests. However, the newer SPDY and HTTP/2 protocols allow multiple concurrent requests/responses to be multiplexed over a connection, so Active may be less than the sum of Waiting, Reading, and Writing. (As of this writing, NGINX does not support HTTP/2, but expects to add support during 2015.)
|
||||
|
||||
**NGINX Plus**
|
||||
|
||||
As mentioned above, all of open-source NGINX’s metrics are available within NGINX Plus, but Plus can also report additional metrics. This section covers the metrics that are only available from NGINX Plus.
|
||||
|
||||

|
||||
|
||||
Accepted, dropped, and total are ever-increasing counters. Active, idle, and current track the current number of connections or requests in each of those states, so they grow and shrink with request volume.
|
||||
|
||||
Note: table
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left;"><strong>Name</strong></th>
|
||||
<th style="text-align: left;"><strong>Description</strong></th>
|
||||
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">accepted</td>
|
||||
<td style="text-align: left;">Count of client connections attempted by NGINX</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">dropped</td>
|
||||
<td style="text-align: left;">Count of dropped connections</td>
|
||||
<td style="text-align: left;">Work: Errors*</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">active</td>
|
||||
<td style="text-align: left;">Currently active client connections</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">idle</td>
|
||||
<td style="text-align: left;">Client connections with zero current requests</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">total</td>
|
||||
<td style="text-align: left;">Count of client requests</td>
|
||||
<td style="text-align: left;">Work: Throughput</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td colspan="3" style="text-align: left;">*<em>Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.</em></td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
The **accepted** counter is incremented when an NGINX Plus worker picks up a request for a connection from the OS. If the worker fails to get a connection for the request (by establishing a new connection or reusing an open one), then the connection is dropped and **dropped** is incremented. Ordinarily connections are dropped because a resource limit, such as NGINX Plus’s [worker_connections][4] limit, has been reached.
|
||||
|
||||
**Active** and **idle** are the same as “active” and “waiting” states in open-source NGINX as described [above][5], with one key exception: in open-source NGINX, “waiting” falls under the “active” umbrella, whereas in NGINX Plus “idle” connections are excluded from the “active” count. **Current** is the same as the combined “reading + writing” states in open-source NGINX.
|
||||
|
||||
**Total** is a cumulative count of client requests. Note that a single client connection can involve multiple requests, so this number may be significantly larger than the cumulative number of connections. In fact, (total / accepted) yields the average number of requests per connection.
|
||||
|
||||
**Metric differences between Open-Source and Plus**
|
||||
|
||||
注:表格
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left;">NGINX (open-source)</th>
|
||||
<th style="text-align: left;">NGINX Plus</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">accepts</td>
|
||||
<td style="text-align: left;">accepted</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">dropped must be calculated</td>
|
||||
<td style="text-align: left;">dropped is reported directly</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">reading + writing</td>
|
||||
<td style="text-align: left;">current</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">waiting</td>
|
||||
<td style="text-align: left;">idle</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">active (includes “waiting” states)</td>
|
||||
<td style="text-align: left;">active (excludes “idle” states)</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">requests</td>
|
||||
<td style="text-align: left;">total</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
**Metric to alert on: Dropped connections**
|
||||
|
||||
The number of connections that have been dropped is equal to the difference between accepts and handled (NGINX) or is exposed directly as a standard metric (NGINX Plus). Under normal circumstances, dropped connections should be zero. If your rate of dropped connections per unit time starts to rise, look for possible resource saturation.
|
||||
|
||||

|
||||
|
||||
**Metric to alert on: Requests per second**
|
||||
|
||||
Sampling your request data (**requests** in open-source, or **total** in Plus) with a fixed time interval provides you with the number of requests you’re receiving per unit of time—often minutes or seconds. Monitoring this metric can alert you to spikes in incoming web traffic, whether legitimate or nefarious, or sudden drops, which are usually indicative of problems. A drastic change in requests per second can alert you to problems brewing somewhere in your environment, even if it cannot tell you exactly where those problems lie. Note that all requests are counted the same, regardless of their URLs.
|
||||
|
||||

|
||||
|
||||
**Collecting activity metrics**
|
||||
|
||||
Open-source NGINX exposes these basic server metrics on a simple status page. Because the status information is displayed in a standardized form, virtually any graphing or monitoring tool can be configured to parse the relevant data for analysis, visualization, or alerting. NGINX Plus provides a JSON feed with much richer data. Read the companion post on [NGINX metrics collection][6] for instructions on enabling metrics collection.
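
For reference, the page served by the stub_status module looks like the sample below; the /nginx_status location is an assumption of your configuration, and the numbers are purely illustrative:

    $ curl -s http://127.0.0.1/nginx_status
    Active connections: 291
    server accepts handled requests
     16630948 16630948 31070465
    Reading: 6 Writing: 179 Waiting: 106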
|
||||
|
||||
#### Error metrics ####
|
||||
|
||||
Note: table
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left;"><strong>Name</strong></th>
|
||||
<th style="text-align: left;"><strong>Description</strong></th>
|
||||
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
|
||||
<th style="text-align: left;"><strong>Availability</strong></th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">4xx codes</td>
|
||||
<td style="text-align: left;">Count of client errors</td>
|
||||
<td style="text-align: left;">Work: Errors</td>
|
||||
<td style="text-align: left;">NGINX logs, NGINX Plus</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">5xx codes</td>
|
||||
<td style="text-align: left;">Count of server errors</td>
|
||||
<td style="text-align: left;">Work: Errors</td>
|
||||
<td style="text-align: left;">NGINX logs, NGINX Plus</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
NGINX error metrics tell you how often your servers are returning errors instead of producing useful work. Client errors are represented by 4xx status codes, server errors with 5xx status codes.
|
||||
|
||||
**Metric to alert on: Server error rate**
|
||||
|
||||
Your server error rate is equal to the number of 5xx errors divided by the total number of [status codes][7] (1xx, 2xx, 3xx, 4xx, 5xx), per unit of time (often one to five minutes). If your error rate starts to climb over time, investigation may be in order. If it spikes suddenly, urgent action may be required, as clients are likely to report errors to the end user.
|
||||
|
||||

|
||||
|
||||
A note on client errors: while it is tempting to monitor 4xx, there is limited information you can derive from that metric since it measures client behavior without offering any insight into particular URLs. In other words, a change in 4xx could be noise, e.g. web scanners blindly looking for vulnerabilities.
|
||||
|
||||
**Collecting error metrics**
|
||||
|
||||
Although open-source NGINX does not make error rates immediately available for monitoring, there are at least two ways to capture that information:
|
||||
|
||||
- Use the expanded status module available with commercially supported NGINX Plus
|
||||
- Configure NGINX’s log module to write response codes in access logs
|
||||
|
||||
Read the companion post on NGINX metrics collection for detailed instructions on both approaches.
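
As a sketch of the log-based approach, assuming the default “combined” log format, in which the status code is the ninth field of each line:

    # Share of 5xx responses among all logged status codes
    $ awk '{ total++; if ($9 ~ /^5/) err++ } END { if (total) printf "server error rate: %.2f%%\n", 100 * err / total }' /var/log/nginx/access.log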
|
||||
|
||||
#### Performance metrics ####
|
||||
|
||||
Note: table
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left;"><strong>Name</strong></th>
|
||||
<th style="text-align: left;"><strong>Description</strong></th>
|
||||
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
|
||||
<th style="text-align: left;"><strong>Availability</strong></th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">request time</td>
|
||||
<td style="text-align: left;">Time to process each request, in seconds</td>
|
||||
<td style="text-align: left;">Work: Performance</td>
|
||||
<td style="text-align: left;">NGINX logs</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
**Metric to alert on: Request processing time**
|
||||
|
||||
The request time metric logged by NGINX records the processing time for each request, from the reading of the first client bytes to fulfilling the request. Long response times can point to problems upstream.
|
||||
|
||||
**Collecting processing time metrics**
|
||||
|
||||
NGINX and NGINX Plus users can capture data on processing time by adding the $request_time variable to the access log format. More details on configuring logs for monitoring are available in our companion post on [NGINX metrics collection][8].
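
For a quick look without a full monitoring platform, a one-liner like this sketch works, assuming $request_time was appended as the final field of each access-log line:

    # Average request processing time over the most recent 10,000 requests
    $ tail -n 10000 /var/log/nginx/access.log | awk '{ sum += $NF } END { if (NR) printf "%.3f s average over %d requests\n", sum / NR, NR }'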
|
||||
|
||||
#### Reverse proxy metrics ####
|
||||
|
||||
Note: table
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left;"><strong>Name</strong></th>
|
||||
<th style="text-align: left;"><strong>Description</strong></th>
|
||||
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
|
||||
<th style="text-align: left;"><strong>Availability</strong></th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">Active connections by upstream server</td>
|
||||
<td style="text-align: left;">Currently active client connections</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
<td style="text-align: left;">NGINX Plus</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">5xx codes by upstream server</td>
|
||||
<td style="text-align: left;">Server errors</td>
|
||||
<td style="text-align: left;">Work: Errors</td>
|
||||
<td style="text-align: left;">NGINX Plus</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">Available servers per upstream group</td>
|
||||
<td style="text-align: left;">Servers passing health checks</td>
|
||||
<td style="text-align: left;">Resource: Availability</td>
|
||||
<td style="text-align: left;">NGINX Plus</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
One of the most common ways to use NGINX is as a [reverse proxy][9]. The commercially supported NGINX Plus exposes a large number of metrics about backend (or “upstream”) servers, which are relevant to a reverse proxy setup. This section highlights a few of the key upstream metrics that are available to users of NGINX Plus.
|
||||
|
||||
NGINX Plus segments its upstream metrics first by group, and then by individual server. So if, for example, your reverse proxy is distributing requests to five upstream web servers, you can see at a glance whether any of those individual servers is overburdened, and also whether you have enough healthy servers in the upstream group to ensure good response times.
|
||||
|
||||
**Activity metrics**
|
||||
|
||||
The number of **active connections per upstream server** can help you verify that your reverse proxy is properly distributing work across your server group. If you are using NGINX as a load balancer, significant deviations in the number of connections handled by any one server can indicate that the server is struggling to process requests in a timely manner or that the load-balancing method (e.g., [round-robin or IP hashing][10]) you have configured is not optimal for your traffic patterns.
|
||||
|
||||
**Error metrics**
|
||||
|
||||
Recall from the error metric section above that 5xx (server error) codes are a valuable metric to monitor, particularly as a share of total response codes. NGINX Plus allows you to easily extract the number of **5xx codes per upstream server**, as well as the total number of responses, to determine that particular server’s error rate.
|
||||
|
||||
**Availability metrics**
|
||||
|
||||
For another view of the health of your web servers, NGINX also makes it simple to monitor the health of your upstream groups via the total number of **servers currently available within each group**. In a large reverse proxy setup, you may not care very much about the current state of any one server, just as long as your pool of available servers is capable of handling the load. But monitoring the total number of servers that are up within each upstream group can provide a very high-level view of the aggregate health of your web servers.
|
||||
|
||||
**Collecting upstream metrics**
|
||||
|
||||
NGINX Plus upstream metrics are exposed on the internal NGINX Plus monitoring dashboard, and are also available via a JSON interface that can serve up metrics into virtually any external monitoring platform. See examples in our companion post on [collecting NGINX metrics][11].
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
In this post we’ve touched on some of the most useful metrics you can monitor to keep tabs on your NGINX servers. If you are just getting started with NGINX, monitoring most or all of the metrics in the list below will provide good visibility into the health and activity levels of your web infrastructure:
|
||||
|
||||
- [Dropped connections][12]
|
||||
- [Requests per second][13]
|
||||
- [Server error rate][14]
|
||||
- [Request processing time][15]
|
||||
|
||||
Eventually you will recognize additional, more specialized metrics that are particularly relevant to your own infrastructure and use cases. Of course, what you monitor will depend on the tools you have and the metrics available to you. See the companion post for [step-by-step instructions on metric collection][16], whether you use NGINX or NGINX Plus.
|
||||
|
||||
At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][17], and get started right away with [a free trial of Datadog][18].
|
||||
|
||||
### Acknowledgments ###
|
||||
|
||||
Many thanks to the NGINX team for reviewing this article prior to publication and providing important feedback and clarifications.
|
||||
|
||||
----------
|
||||
|
||||
Source Markdown for this post is available [on GitHub][19]. Questions, corrections, additions, etc.? Please [let us know][20].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.datadoghq.com/blog/how-to-monitor-nginx/
|
||||
|
||||
作者:K Young
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://nginx.org/en/
|
||||
[2]:http://nginx.com/resources/glossary/reverse-proxy-server/
|
||||
[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/
|
||||
[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections
|
||||
[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state
|
||||
[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
|
||||
[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
[9]:https://en.wikipedia.org/wiki/Reverse_proxy
|
||||
[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/
|
||||
[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections
|
||||
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second
|
||||
[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate
|
||||
[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time
|
||||
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
|
||||
[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up
|
||||
[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md
|
||||
[20]:https://github.com/DataDog/the-monitor/issues
|
@ -1,3 +1,6 @@
|
||||
|
||||
Translating by dingdongnigetou
|
||||
|
||||
How to Configure Nginx as a Reverse Proxy / Load Balancer with Weave and Docker
|
||||
================================================================================
|
||||
Hi everyone, today we'll learn how to configure Nginx as a reverse proxy / load balancer with Weave and Docker. Weave creates a virtual network that connects Docker containers with each other, deploys across multiple hosts and enables their automatic discovery. It allows us to focus on developing our application, rather than our infrastructure. It provides such an awesome environment that the application uses the network as if its containers were all plugged into the same network, with no need to configure ports, mappings, links, etc. The services of the application containers on the network are easily accessible to the external world, no matter where they are running. In this tutorial we'll be using Weave to quickly and easily deploy the nginx web server as a load balancer for a simple php application running in docker containers on multiple nodes in Amazon Web Services. Along the way, we will be introduced to WeaveDNS, which provides a simple way for containers to find each other using hostnames, with no code changes, and allows other containers to connect to those names.
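
In outline, the deployment looks something like the sketch below. The hostnames, image names, and addresses are illustrative assumptions, and the weave subcommands reflect the CLI of that era, so check weave --help for your version:

    # On the first AWS node: start the weave router and WeaveDNS
    node1$ weave launch
    node1$ weave launch-dns 10.2.1.1/24

    # Run a PHP application container attached to the weave network
    node1$ weave run 10.3.1.1/24 -h php1.weave.local my-php-app

    # On a second node: join the first node's weave network, then add another app container
    node2$ weave launch <node1-ip>
    node2$ weave launch-dns 10.2.1.2/24
    node2$ weave run 10.3.1.2/24 -h php2.weave.local my-php-app

    # Finally, run nginx as the load balancer, published on port 80
    node1$ weave run 10.3.1.10/24 -h nginx.weave.local -p 80:80 my-nginx-lb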
|
||||
@ -123,4 +126,4 @@ via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:http://console.aws.amazon.com/
|
||||
|
@ -1,65 +0,0 @@

translation by strugglingyouth
Handy commands for profiling your Unix file systems
================================================================================

Credit: Sandra H-S

One of the problems that seems to plague nearly all file systems -- Unix and others -- is the continuous buildup of files. Almost no one takes the time to clean out files that they no longer use and, as a result, file systems become so cluttered with material of little or questionable value that keeping them running well, adequately backed up, and easy to manage is a constant challenge.

One way that I have seen to help encourage the owners of all that data detritus to address the problem is to create a summary report or "profile" of a file collection that reports on such things as the number of files; the oldest, newest, and largest of those files; and a count of who owns those files. If someone realizes that a collection of half a million files contains no files newer than five years old, they might go ahead and remove them -- or, at least, archive and compress them. The basic problem is that huge collections of files are overwhelming and most people are afraid that they might accidentally delete something important. Having a way to characterize a file collection can help demonstrate the nature of the content and encourage those digital packrats to clean out their nests.

When I prepare a file system summary report on Unix, a handful of Unix commands easily provide some very useful statistics. To count the files in a directory, you can use a find command like this.

    $ find . -type f | wc -l
    187534

Finding the oldest and newest files is a bit more complicated, but still quite easy. In the commands below, we're using the find command again to find files, displaying the data with a year-month-day format that makes it possible to sort by file age, and then displaying the top -- thus the oldest -- file in that list.

In the second command, we do the same, but print the last line -- thus the newest -- file.

    $ find -type f -printf '%T+ %p\n' | sort | head -n 1
    2006-02-03+02:40:33 ./skel/.xemacs/init.el
    $ find -type f -printf '%T+ %p\n' | sort | tail -n 1
    2015-07-19+14:20:16 ./.bash_history

The %T (file date and time) and %p (file name with path) parameters used with printf are what make this work.

If we're looking at home directories, we're undoubtedly going to find that history files are the newest files, and that isn't likely to be a very interesting bit of information. You can omit those files by "un-grepping" them, or you can omit all files that start with dots, as shown below.

    $ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1
    2015-07-19+13:02:12 ./isPrime

Finding the largest file involves using the %s (size) parameter, and we include the file name (%f) since that's what we want the report to show.

    $ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1
    20183040 project.org.tar

To summarize file ownership, use the %u (owner) parameter.

    $ find -type f -printf '%u \n' | sort | uniq -c
    180034 shs
    7500 jdoe

If your file system also records the last access date, it can be very useful to show that files haven't been accessed in, say, more than two years. This would give your reviewers an important insight into the value of those files. The last access parameter (%a) could be used like this (note that %a prints a weekday-first timestamp, so a plain sort is only approximate; %A+ prints a year-month-day format that sorts chronologically):

    $ find -type f -printf '%a+ %p\n' | sort | head -n 1
    Fri Dec 15 03:00:30 2006+ ./statreport

Of course, if the most recently accessed file is also in the deep dark past, that's likely to get even more of a reaction.

    $ find -type f -printf '%a+ %p\n' | sort | tail -n 1
    Wed Nov 26 03:00:27 2007+ ./my-notes

Getting a sense of what's in a file system or large directory by creating a summary report showing the file date ranges, the largest files, the file owners, and the oldest and newest access times can help to demonstrate how current and how important a file collection is, and help its owners decide if it's time to clean up.
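If you produce these reports regularly, it may be worth wrapping the commands above in a small script. The sketch below is one possible way to do that; the script name and the report layout are my own, not from the article, but every command in it is one of those shown above.

    #!/bin/bash
    # fsprofile.sh -- a minimal sketch that combines the profiling commands
    # above into a single summary report for a directory (default: cwd).
    dir="${1:-.}"
    cd "$dir" || exit 1
    echo "Profile of $PWD"
    echo "Files:   $(find . -type f | wc -l)"
    echo "Oldest:  $(find . -type f -printf '%T+ %p\n' | sort | head -n 1)"
    echo "Newest:  $(find . -type f -printf '%T+ %p\n' | sort | tail -n 1)"
    echo "Largest: $(find . -type f -printf '%s %f\n' | sort -n | tail -n 1)"
    echo "Owners:"
    find . -type f -printf '%u\n' | sort | uniq -c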

--------------------------------------------------------------------------------

via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html

作者:[Sandra Henry-Stocker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/
@ -1,3 +1,4 @@

wyangsun translating
Managing Linux Logs
================================================================================
A key best practice for logging is to centralize or aggregate your logs in one place, especially if you have multiple servers or tiers in your architecture. We’ll tell you why this is a good idea and give tips on how to do it easily.
@ -415,4 +416,4 @@ via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/
[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html
[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/
[21]:https://github.com/progrium/logspout
[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/
@ -0,0 +1,429 @@

Installation Guide for Puppet on Ubuntu 15.04
================================================================================
Hi everyone, today in this article we'll learn how to install Puppet to manage your server infrastructure running Ubuntu 15.04. Puppet is an open source software configuration management tool, developed and maintained by Puppet Labs, that allows us to automate the provisioning, configuration and management of a server infrastructure. Whether we're managing just a few servers or thousands of physical and virtual machines, from provisioning through to orchestration and reporting, Puppet automates tasks that system administrators often do manually, which frees up time and mental space so sysadmins can work on improving other aspects of the overall setup. It ensures consistency, reliability and stability of the automated jobs it processes. It facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code. Puppet is available in two solutions for configuration management and data center automation: **Puppet Open Source and Puppet Enterprise**. Puppet Open Source is a flexible, customizable solution available under the Apache 2.0 license, designed to help system administrators automate the many repetitive tasks they regularly perform. The Puppet Enterprise edition is a proven commercial solution for diverse enterprise IT environments which gives us all the benefits of open source Puppet, plus Puppet apps, commercial-only enhancements, supported modules and integrations, and the assurance of a fully supported platform. Puppet uses SSL certificates to authenticate communication between master and agent nodes.

In this tutorial, we will cover how to install open source Puppet in an agent and master setup running the Ubuntu 15.04 Linux distribution. Here, the Puppet master is the server from which all the configurations will be controlled and managed, and all our remaining servers will be Puppet agent nodes, which are configured according to the configuration of the Puppet master server. Here are some easy steps to install and configure Puppet to manage our server infrastructure running Ubuntu 15.04.

### 1. Setting up Hosts ###

In this tutorial, we'll use two machines, one as the Puppet master server and another as the Puppet agent node, both running Ubuntu 15.04 "Vivid Vervet". Here is the infrastructure of the servers that we're gonna use for this tutorial.

    puppet master server with IP 44.55.88.6 and hostname : puppetmaster
    puppet node agent with IP 45.55.86.39 and hostname : puppetnode

Now we'll add an entry for both machines to /etc/hosts, on both the node agent and the master server.

    # nano /etc/hosts

    45.55.88.6 puppetmaster.example.com puppetmaster
    45.55.86.39 puppetnode.example.com puppetnode

Please note that the Puppet master server must be reachable on port 8140, so we'll need to open that port.
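How to open the port depends on the firewall in use; the original article doesn't show a command for this step. As a sketch, if ufw (Ubuntu's default firewall frontend) is active on the master, the following would do it; adapt it to iptables or your own tooling if not:

    # ufw allow 8140/tcp    # permit agent connections to the puppet master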
### 2. Updating Time with NTP ###

Puppet nodes need to maintain accurate system time to avoid problems when the master issues agent certificates: certificates can appear to be expired if there is a time difference, so the clocks of both the master and the node agent must be synced with each other. To sync the time, we'll update it with NTP. To do so, here's the command that we need to run on both master and node agent.

    # ntpdate pool.ntp.org

    17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec

Now, we'll update our local repository index and install ntp as follows.

    # apt-get update && sudo apt-get -y install ntp ; service ntp restart
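Once the ntp service is running, we can confirm that it is actually syncing against its peers. This is an optional check of my own, not part of the original steps; ntpq ships with the ntp package:

    # ntpq -p    # lists the peers the daemon syncs with, along with offset and jitter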
### 3. Puppet Master Package Installation ###

There are many ways to install open source Puppet. In this tutorial, we'll download and install a debian binary package named **puppetlabs-release**, packaged by Puppet Labs, which will add the source for the **puppetmaster-passenger** package. The puppetmaster-passenger package includes the Puppet master with the apache web server. So, we'll now download the Puppet Labs package.

    # cd /tmp/
    # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb

    --2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
    Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
    Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 7384 (7.2K) [application/x-debian-package]
    Saving to: ‘puppetlabs-release-trusty.deb’

    puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s

    2015-06-17 00:19:26 (130 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384]

After the download has completed, we'll want to install the package.

    # dpkg -i puppetlabs-release-trusty.deb

    Selecting previously unselected package puppetlabs-release.
    (Reading database ... 85899 files and directories currently installed.)
    Preparing to unpack puppetlabs-release-trusty.deb ...
    Unpacking puppetlabs-release (1.0-11) ...
    Setting up puppetlabs-release (1.0-11) ...

Then, we'll update the local repository index from the server using the apt package manager.

    # apt-get update

Then, we'll install the puppetmaster-passenger package by running the command below.

    # apt-get install puppetmaster-passenger

**Note**: While installing we may get an error **Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')** but we don't need to worry; we'll simply ignore this, as it only says that templatedir is deprecated, and we'll disable that setting in the configuration later. :)

To check whether puppetmaster has been installed successfully on our master server or not, we'll check its version.

    # puppet --version

    3.8.1

We have successfully installed the Puppet master package in our Puppet master box. As we are using passenger with apache, the Puppet master process is controlled by the apache server; that means it runs when apache is running.

Before continuing, we'll need to stop the Puppet master by stopping the apache2 service.

    # systemctl stop apache2
### 4. Master version lock with Apt ###

As we have Puppet version 3.8.1, we need to lock the Puppet version against updates, as updating Puppet would mess up our configurations. So, we'll use apt's pinning feature for that. To do so, we'll need to create a new file **/etc/apt/preferences.d/00-puppet.pref** using our favorite text editor.

    # nano /etc/apt/preferences.d/00-puppet.pref

Then, we'll add the entries in the newly created file as:

    # /etc/apt/preferences.d/00-puppet.pref
    Package: puppet puppet-common puppetmaster-passenger
    Pin: version 3.8*
    Pin-Priority: 501

Now, apt will not update Puppet while running updates on the system.
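To verify that the pin has taken effect, we can ask apt which candidate version it would now install (a quick sanity check of my own, not part of the original steps):

    # apt-cache policy puppet puppet-common
    # the "Candidate:" line for each package should now report a 3.8.x version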
### 5. Configuring Puppet Config ###

The Puppet master acts as a certificate authority and must generate its own certificates, which are used to sign agent certificate requests. First of all, we'll need to remove any existing SSL certificates that were created during the installation of the package. The default location of Puppet's SSL certificates is /var/lib/puppet/ssl, so we'll remove the entire ssl directory using the rm command.

    # rm -rf /var/lib/puppet/ssl

Then, we'll configure the certificate. While creating the Puppet master's certificate, we need to include every DNS name at which agent nodes can contact the master. So, we'll edit the master's puppet.conf using our favorite text editor.

    # nano /etc/puppet/puppet.conf

The file looks as shown below.

    [main]
    logdir=/var/log/puppet
    vardir=/var/lib/puppet
    ssldir=/var/lib/puppet/ssl
    rundir=/var/run/puppet
    factpath=$vardir/lib/facter
    templatedir=$confdir/templates

    [master]
    # These are needed when the puppetmaster is run by passenger
    # and can safely be removed if webrick is used.
    ssl_client_header = SSL_CLIENT_S_DN
    ssl_client_verify_header = SSL_CLIENT_VERIFY

Here, we'll need to comment out the templatedir line to disable that setting, as it has already been deprecated. After that, we'll add the following lines at the end of the file under [main].

    server = puppetmaster
    environment = production
    runinterval = 1h
    strict_variables = true
    certname = puppetmaster
    dns_alt_names = puppetmaster, puppetmaster.example.com

This configuration file has many options which might be useful in order to set up your own configuration. A full description of the file is available at Puppet Labs [Main Config File (puppet.conf)][1].

After editing the file, we'll want to save it and exit.

Now, we'll generate new CA certificates by running the following command.

    # puppet master --verbose --no-daemonize

    Info: Creating a new SSL key for ca
    Info: Creating a new SSL certificate request for ca
    Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78
    ...
    Notice: puppetmaster has a waiting certificate request
    Notice: Signed certificate request for puppetmaster
    Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem'
    Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem'
    Notice: Starting Puppet master version 3.8.1
    ^CNotice: Caught INT; storing stop
    Notice: Processing stop

Now, the certificate has been generated. Once we see **Notice: Starting Puppet master version 3.8.1**, the certificate setup is complete. Then we'll press CTRL-C to return to the shell.

If we want to look at the cert information of the certificate that was just created, we can get the list by running the following command.

    # puppet cert list --all

    + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")
### 6. Creating a Puppet Manifest ###

The default location of the main manifest is /etc/puppet/manifests/site.pp. The main manifest file contains the definition of the configuration that is executed on the Puppet agent nodes. Now, we'll create the manifest file by running the following command.

    # nano /etc/puppet/manifests/site.pp

Then, we'll add the following lines of configuration in the file that we just opened.

    # execute 'apt-get update'
    exec { 'apt-update': # exec resource named 'apt-update'
      command => '/usr/bin/apt-get update' # command this resource will run
    }

    # install apache2 package
    package { 'apache2':
      require => Exec['apt-update'], # require 'apt-update' before installing
      ensure => installed,
    }

    # ensure apache2 service is running
    service { 'apache2':
      ensure => running,
    }

The above lines of configuration are responsible for deploying the installation of the apache web server to the agent nodes.
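Before moving on, we can check the manifest for syntax errors. This is an optional step of my own; the command is part of standard Puppet 3 tooling:

    # puppet parser validate /etc/puppet/manifests/site.pp
    # no output means the manifest parsed cleanly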
### 7. Starting Master Service ###

We are now ready to start the Puppet master. We can start it by running the apache2 service.

    # systemctl start apache2

Here, our Puppet master is running, but it isn't managing any agent nodes yet. Now, we'll add the Puppet agent nodes to the master.

**Note**: If you get an error **Job for apache2.service failed. See "systemctl status apache2.service" and "journalctl -xe" for details.** then there must be some problem with the apache server. We can see what exactly has happened by running **apachectl start** under root or sudo. Here, while preparing this tutorial, we hit a misconfiguration of the certificates in the **/etc/apache2/sites-enabled/puppetmaster.conf** file. We replaced **SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem** with **SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem**, commented out the **SSLCertificateKeyFile** line, and then reran the above command to start the apache server.
### 8. Puppet Agent Package Installation ###

Now that we have our Puppet master ready, it needs agents to manage, so we'll install the Puppet agent on the nodes. We'll need to install the Puppet agent on every node in our infrastructure that we want the Puppet master to manage, and we'll need to make sure that our agent nodes have been added to DNS. Now, we'll install the latest Puppet agent on our agent node, i.e. puppetnode.example.com.

We'll run the following command to download the Puppet Labs package on our Puppet agent nodes.

    # cd /tmp/
    # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb

    --2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
    Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
    Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 7384 (7.2K) [application/x-debian-package]
    Saving to: ‘puppetlabs-release-trusty.deb’

    puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s

    2015-06-17 00:54:42 (162 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384]

Then, as we're running Ubuntu 15.04, we'll use the debian package manager to install it.

    # dpkg -i puppetlabs-release-trusty.deb

Now, we'll update the repository index using apt-get.

    # apt-get update

Finally, we'll install the Puppet agent directly from the remote repository.

    # apt-get install puppet

The Puppet agent is disabled by default, so we'll need to enable it. To do so we'll need to edit the /etc/default/puppet file using a text editor.

    # nano /etc/default/puppet

Then, we'll need to change the value of **START** to "yes" as shown below.

    START=yes

Then, we'll need to save and exit the file.
### 9. Agent Version Lock with Apt ###

As on the master, since we have Puppet version 3.8.1 we need to lock the Puppet version against updates, as updating Puppet would mess up our configurations. So, we'll use apt's pinning feature for that. To do so, we'll create a file /etc/apt/preferences.d/00-puppet.pref using our favorite text editor.

    # nano /etc/apt/preferences.d/00-puppet.pref

Then, we'll add the entries in the newly created file as:

    # /etc/apt/preferences.d/00-puppet.pref
    Package: puppet puppet-common
    Pin: version 3.8*
    Pin-Priority: 501

Now, apt will not update Puppet while running updates on the system.
### 10. Configuring Puppet Node Agent ###

Next, we must make a few configuration changes before running the agent. To do so, we'll need to edit the agent's puppet.conf

    # nano /etc/puppet/puppet.conf

It will look exactly like the Puppet master's initial configuration file.

This time we'll also comment out the **templatedir** line. Then we'll delete the [master] section and all of the lines below it.

Assuming that the Puppet master is reachable at "puppet-master", the agent should be able to connect to it. If not, we'll need to use its fully qualified domain name, i.e. puppetmaster.example.com.

    [agent]
    server = puppetmaster.example.com
    certname = puppetnode.example.com

After adding this, the file will look like this.

    [main]
    logdir=/var/log/puppet
    vardir=/var/lib/puppet
    ssldir=/var/lib/puppet/ssl
    rundir=/var/run/puppet
    factpath=$vardir/lib/facter
    #templatedir=$confdir/templates

    [agent]
    server = puppetmaster.example.com
    certname = puppetnode.example.com

After we're done with that, we'll save and exit.

Next, we'll start our Puppet agent on our Ubuntu 15.04 nodes. To start the Puppet agent, we'll need to run the following command.

    # systemctl start puppet

If everything went as expected and was configured properly, we should not see any output from the above command. When we run an agent for the first time, it generates an SSL certificate and sends a signing request to the Puppet master; once the master signs the agent's certificate, the master will be able to communicate with the agent node.

**Note**: If you are adding your first node, it is recommended that you attempt to sign the certificate on the Puppet master before adding your other agents. Once you have verified that everything works properly, you can then go back and add the remaining agent nodes.
### 11. Signing certificate Requests on Master ###

When the Puppet agent runs for the first time, it generates an SSL certificate and sends a signing request to the master server. Before the master is able to communicate with and control the agent node, it must sign that specific agent node's certificate.

To get the list of certificate requests, we'll run the following command on the Puppet master server.

    # puppet cert list

    "puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2

As we just set up our first agent node, we will see one request. It will look something like the above, with the agent node's domain name as the hostname.

Note that there is no + in front of it, which indicates that it has not been signed yet.

Now, we'll sign the certificate request. In order to sign a certificate request, we simply run **puppet cert sign** with the **hostname**, as shown below.

    # puppet cert sign puppetnode.example.com

    Notice: Signed certificate request for puppetnode.example.com
    Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem'

The Puppet master can now communicate with and control the node that the signed certificate belongs to.

If we want to sign all of the current requests, we can use the --all option as shown below.

    # puppet cert sign --all

### Removing a Puppet Certificate ###

If we want to remove a host, or want to rebuild a host and then add it back, we will want to revoke the host's certificate from the Puppet master. To do this, we use the clean action as follows.

    # puppet cert clean hostname

    Notice: Revoked certificate with serial 5
    Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem'
    Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem'

If we want to view all of the requests, signed and unsigned, we run the following command:

    # puppet cert list --all

    + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")
### 12. Deploying a Puppet Manifest ###

After we configure and complete the Puppet manifest, we'll want to deploy the manifest to the agent node servers. To apply and load the main manifest, we can simply run the following command on the agent node.

    # puppet agent --test

    Info: Retrieving pluginfacts
    Info: Retrieving plugin
    Info: Caching catalog for puppetnode.example.com
    Info: Applying configuration version '1434563858'
    Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully
    Notice: Finished catalog run in 10.53 seconds

This shows us the whole process of how the main manifest immediately affects a single server.

If we want to run a Puppet manifest that is not related to the main manifest, we can simply use puppet apply followed by the manifest file path. It only applies the manifest to the node that we run the apply from.

    # puppet apply /etc/puppet/manifests/test.pp
### 13. Configuring Manifest for a Specific Node ###

If we want to deploy a manifest only to a specific node, we'll need to configure the manifest as follows.

We'll edit the manifest on the master server using a text editor.

    # nano /etc/puppet/manifests/site.pp

Now, we'll add the following lines there.

    node 'puppetnode', 'puppetnode1' {
      # execute 'apt-get update'
      exec { 'apt-update': # exec resource named 'apt-update'
        command => '/usr/bin/apt-get update' # command this resource will run
      }

      # install apache2 package
      package { 'apache2':
        require => Exec['apt-update'], # require 'apt-update' before installing
        ensure => installed,
      }

      # ensure apache2 service is running
      service { 'apache2':
        ensure => running,
      }
    }

Here, the above configuration will install and deploy the apache web server only on the two specified nodes, with the short names puppetnode and puppetnode1. We can add more nodes that we need the manifest to be deployed to specifically.
### 14. Configuring Manifest with a Module ###

Modules are useful for grouping tasks together; many are available in the Puppet community, and anyone can contribute more.

On the Puppet master, we'll install the **puppetlabs-apache** module using the puppet module command.

    # puppet module install puppetlabs-apache

**Warning**: Please do not use this module on an existing apache setup, as it will purge any apache configurations that are not managed by Puppet.
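We can confirm that the module landed where Puppet expects it by listing the installed modules (an optional check of my own; the subcommand is part of standard Puppet 3 tooling):

    # puppet module list
    # puppetlabs-apache should appear under /etc/puppet/modules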
Now we'll edit the main manifest, i.e. **site.pp**, using a text editor.

    # nano /etc/puppet/manifests/site.pp

Now add the following lines to install apache on the specified node.

    node 'puppet-node' {
      class { 'apache': } # use apache module
      apache::vhost { 'example.com': # define vhost resource
        port => '80',
        docroot => '/var/www/html'
      }
    }

Then we'll save and exit. Then, we'll rerun the manifest (with puppet agent --test on the agents, as shown above) to deploy the configuration across our infrastructure.

### Conclusion ###

Finally we have successfully installed Puppet to manage our server infrastructure running the Ubuntu 15.04 "Vivid Vervet" Linux operating system. We learned how Puppet works, how to configure a manifest, how the master communicates with the nodes, and how to deploy a manifest on the agent nodes with secure SSL certificates. Controlling, managing and configuring repeated tasks across any number of nodes is very easy with the Puppet open source software configuration management tool. If you have any questions, suggestions, or feedback please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/

作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html
@ -1,323 +0,0 @@

[translating by xiqingongzi]
RHCSA Series: How to Perform File and Directory Management – Part 2
================================================================================
In this article, RHCSA Part 2: File and directory management, we will review some essential skills that are required in the day-to-day tasks of a system administrator.

RHCSA: Perform File and Directory Management – Part 2

### Create, Delete, Copy, and Move Files and Directories ###

File and directory management is a critical competence that every system administrator should possess. This includes the ability to create / delete text files from scratch (the core of each program’s configuration) and directories (where you will organize files and other directories), and to find out the type of existing files.

The [touch command][1] can be used not only to create empty files, but also to update the access and modification times of existing files.

touch command example

You can use `file [filename]` to determine a file’s type (this will come in handy before launching your preferred text editor to edit it).

file command example

and `rm [filename]` to delete it.

rm command example

As for directories, you can create directories inside existing paths with `mkdir [directory]` or create a full path with `mkdir -p [/full/path/to/directory].`

mkdir command example

When it comes to removing directories, you need to make sure that they’re empty before issuing the `rmdir [directory]` command, or use the more powerful (handle with care!) `rm -rf [directory]`. This last option will force remove the `[directory]` and all its contents recursively – so use it at your own risk. Basic forms for copying and moving are shown after this paragraph.
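Since the section title also covers copying and moving, here is a quick reminder of the basic forms of cp and mv (these commands are not shown in the original screenshots; the file names are placeholders):

    $ cp file1 /backup/file1      # copy a file; add -r to copy directories recursively
    $ mv file1 newname            # rename a file, or move it when the target is a directory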
### Input and Output Redirection and Pipelining ###

The command line environment provides two very useful features that allow us to redirect the input and output of commands from and to files, and to send the output of one command to another, called redirection and pipelining, respectively.

To understand those two important concepts, we must first understand the three most important types of I/O (Input and Output) streams (or sequences) of characters, which are in fact special files, in the *nix sense of the word.

- Standard input (aka stdin) is by default attached to the keyboard. In other words, the keyboard is the standard input device to enter commands to the command line.
- Standard output (aka stdout) is by default attached to the screen, the device that “receives” the output of commands and displays it on the screen.
- Standard error (aka stderr) is where the status messages of a command are sent by default, which is also the screen.

In the following example, the output of `ls /var` is sent to stdout (the screen), as is the result of `ls /tecmint`. But in the latter case, it is stderr that is shown.

Input and Output Example

To more easily identify these special files, they are each assigned a file descriptor, an abstract representation that is used to access them. The essential thing to understand is that these files, just like others, can be redirected. What this means is that you can capture the output from a file or script and send it as input to another file, command, or script. This will allow you to store on disk, for example, the output of commands for later processing or analysis.

To redirect stdin (fd 0), stdout (fd 1), or stderr (fd 2), the following operators are available.

注:表格

| Redirection Operator | Effect |
|----------------------|--------|
| > | Redirects standard output to a file. If the destination file exists, it will be overwritten. |
| >> | Appends standard output to a file. |
| 2> | Redirects standard error to a file. If the destination file exists, it will be overwritten. |
| 2>> | Appends standard error to the existing file. |
| &> | Redirects both standard output and standard error to a file; if the specified file exists, it will be overwritten. |
| < | Uses the specified file as standard input. |
| <> | The specified file is used for both standard input and standard output. |
As opposed to redirection, pipelining is performed by adding a vertical bar `(|)` after a command and before another one.

Remember:

- Redirection is used to send the output of a command to a file, or to send a file as input to a command.
- Pipelining is used to send the output of a command to another command as input.

#### Examples Of Redirection and Pipelining ####

**Example 1: Redirecting the output of a command to a file**

There will be times when you will need to iterate over a list of files. To do that, you can first save that list to a file and then read that file line by line. While it is true that you can iterate over the output of ls directly, this example serves to illustrate redirection.

    # ls -1 /var/mail > mail.txt

Redirect output of command to a file

**Example 2: Redirecting both stdout and stderr to /dev/null**

In case we want to prevent both stdout and stderr from being displayed on the screen, we can redirect both file descriptors to `/dev/null`. Note how the output changes when the redirection is implemented for the same command.

    # ls /var /tecmint
    # ls /var/ /tecmint &> /dev/null

Redirecting stdout and stderr output to /dev/null
#### Example 3: Using a file as input to a command ####

While the classic syntax of the [cat command][2] is as follows.

    # cat [file(s)]

You can also send a file as input, using the correct redirection operator.

    # cat < mail.txt

cat command example

#### Example 4: Sending the output of a command as input to another ####

If you have a large directory or process listing and want to be able to locate a certain file or process at a glance, you will want to pipeline the listing to grep.

Note that we use two pipelines in the following example. The first one looks for the required keyword, while the second one eliminates the actual `grep command` from the results. This example lists all the processes associated with the apache user.

    # ps -ef | grep apache | grep -v grep

Send output of command as input to another
### Archiving, Compressing, Unpacking, and Uncompressing Files ###

If you need to transport, back up, or send via email a group of files, you will use an archiving (or grouping) tool such as [tar][3], typically used with a compression utility like gzip, bzip2, or xz.

Your choice of a compression tool will likely be determined by the compression speed and rate of each one. Of these three compression tools, gzip is the oldest and provides the least compression, bzip2 provides improved compression, and xz is the newest and provides the best compression. Typically, files compressed with these utilities have .gz, .bz2, or .xz extensions, respectively.

注:表格

| Command | Abbreviation | Description |
|---------|--------------|-------------|
| --create | c | Creates a tar archive |
| --concatenate | A | Appends tar files to an archive |
| --append | r | Appends non-tar files to an archive |
| --update | u | Appends files that are newer than those in an archive |
| --diff or --compare | d | Compares an archive to files on disk |
| --list | t | Lists the contents of a tarball |
| --extract or --get | x | Extracts files from an archive |

注:表格

| Operation modifier | Abbreviation | Description |
|--------------------|--------------|-------------|
| --directory dir | C | Changes to directory dir before performing operations |
| --same-permissions and --same-owner | p | Preserves permissions and ownership information, respectively |
| --verbose | v | Lists all files as they are read or extracted; if combined with --list, it also displays file sizes, ownership, and timestamps |
| --exclude file | (none) | Excludes file from the archive. In this case, file can be an actual file or a pattern. |
| --gzip or --gunzip | z | Compresses an archive through gzip |
| --bzip2 | j | Compresses an archive through bzip2 |
| --xz | J | Compresses an archive through xz |
#### Example 5: Creating a tarball and then compressing it using the three compression utilities ####

You may want to compare the effectiveness of each tool before deciding to use one or another. Note that while compressing small files, or a few files, the results may not show much difference, but they may give you a glimpse of what each has to offer.

    # tar cf ApacheLogs-$(date +%Y%m%d).tar /var/log/httpd/* # Create an ordinary tarball
    # tar czf ApacheLogs-$(date +%Y%m%d).tar.gz /var/log/httpd/* # Create a tarball and compress with gzip
    # tar cjf ApacheLogs-$(date +%Y%m%d).tar.bz2 /var/log/httpd/* # Create a tarball and compress with bzip2
    # tar cJf ApacheLogs-$(date +%Y%m%d).tar.xz /var/log/httpd/* # Create a tarball and compress with xz

tar command examples
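To actually compare the resulting sizes, a simple listing works (a quick check of my own, not part of the original example):

    # ls -lh ApacheLogs-*   # compare the sizes of the .tar, .gz, .bz2 and .xz archives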
#### Example 6: Preserving original permissions and ownership while archiving ####

If you are creating backups of users’ home directories, you will want to store the individual files with their original permissions and ownership instead of changing them to those of the user account or daemon performing the backup. The following example preserves these attributes while taking a backup of the contents of the `/var/log/httpd` directory:

    # tar cJf ApacheLogs-$(date +%Y%m%d).tar.xz /var/log/httpd/* --same-permissions --same-owner
### Create Hard and Soft Links ###

In Linux, there are two types of links to files: hard links and soft (aka symbolic) links. Since a hard link represents another name for an existing file, identified by the same inode, it points to the actual data, as opposed to a symbolic link, which points to a filename instead.

In addition, hard links do not occupy extra space on disk, while symbolic links take a small amount of space to store the text of the link itself. The downside of hard links is that they can only be used to reference files within the filesystem where they are located, because inodes are unique inside a filesystem. Symbolic links save the day, in that they point to another file or directory by name rather than by inode, and therefore can cross filesystem boundaries.

The basic syntax to create links is similar in both cases:

    # ln TARGET LINK_NAME # Hard link named LINK_NAME to file named TARGET
    # ln -s TARGET LINK_NAME # Soft link named LINK_NAME to file named TARGET

#### Example 7: Creating hard and soft links ####

There is no better way to visualize the relation between a file and the hard or symbolic links that point to it than to create those links. In the following screenshot you will see that the file and the hard link that points to it share the same inode, and both are identified by the same disk usage of 466 bytes.

On the other hand, creating a symbolic link results in an extra disk usage of 5 bytes. Not that you’re going to run out of storage capacity, but this example is enough to illustrate the difference between a hard link and a soft link.

Difference between a hard link and a soft link
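If you want to reproduce what the screenshot shows on your own system, a sequence like the following works (the file names here are placeholders of my own; `ls -li` prints the inode number in the first column):

    $ echo "some sample content" > file1
    $ ln file1 file1-hard       # hard link: shares file1's inode and size
    $ ln -s file1 file1-soft    # soft link: its own inode, stores the 5-byte name "file1"
    $ ls -li file1*             # compare inode numbers and sizes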
A typical usage of symbolic links is to reference a versioned file in a Linux system. Suppose there are several programs that need access to file fooX.Y, which is subject to frequent version updates (think of a library, for example). Instead of updating every single reference to fooX.Y every time there’s a version update, it is wiser, safer, and faster to have programs look to a symbolic link named just foo, which in turn points to the actual fooX.Y.

Thus, when X and Y change, you only need to edit the symbolic link foo with a new destination name instead of tracking every usage of the destination file and updating it.

### Summary ###

In this article we have reviewed some essential file and directory management skills that must be a part of every system administrator’s tool-set. Make sure to review the other parts of this series as well, in order to integrate these topics with the content covered in this tutorial.

Feel free to let us know if you have any questions or comments. We are always more than glad to hear from our readers.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/file-and-directory-management-in-linux/

作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/
[2]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/
[3]:http://www.tecmint.com/18-tar-command-examples-in-linux/
@ -1,3 +1,4 @@

[translated by xiqingongzi]
RHCSA Series: How to Manage Users and Groups in RHEL 7 – Part 3
================================================================================
Managing a RHEL 7 server, as is the case with any other Linux server, will require that you know how to add, edit, suspend, or delete user accounts, and grant users the necessary permissions to files, directories, and other system resources to perform their assigned tasks.
@ -245,4 +246,4 @@ via: http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/
[2]:http://www.tecmint.com/usermod-command-examples/
[3]:http://www.tecmint.com/ls-interview-questions/
[4]:http://www.tecmint.com/file-and-directory-management-in-linux/
[5]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/
@ -0,0 +1,67 @@

Ubuntu上使用LVM轻松调整分区并制作快照
================================================================================

Ubuntu的安装器提供了一个轻松“使用LVM”的复选框。它的说明是:启用逻辑卷管理,以便你制作快照,并更容易地调整硬盘分区大小——这里将为大家讲述如何完成这些操作。

LVM是一种技术,某种程度上和[RAID阵列][1]或[Windows上的存储空间][2]类似。虽然该技术在服务器上更为有用,但是它也可以在桌面端PC上使用。

### 你应该在新安装Ubuntu时使用LVM吗? ###

第一个问题是,你是否想要在安装Ubuntu时使用LVM?如果是,那么Ubuntu让这一切变得很简单,只需要轻点鼠标就可以完成,但是该选项默认是不启用的。正如安装器所说的,它允许你调整分区、创建快照、合并多个磁盘到一个逻辑卷等等——所有这一切都可以在系统运行时完成。不同于传统分区,你不需要关掉你的系统,从Live CD或USB驱动器启动,然后[在分区不被使用时调整它们][3]。

完全坦率地说,普通Ubuntu桌面用户可能不会意识到他们是否正在使用LVM。但是,如果你想要在今后做一些更高深的事情,那么LVM就会有所帮助了。LVM可能更复杂,如果你经验不足,它也可能会在你今后恢复数据时带来麻烦。这里不会有显著的性能损失——LVM是彻底地在Linux内核中实现的。

### 逻辑卷管理说明 ###

前面,我们已经[说明了何谓LVM][4]。概括来讲,它在你的物理磁盘和呈现在你系统中的分区之间提供了一个抽象层。例如,你的计算机可能装有两个硬盘驱动器,它们的大小都是 1 TB。你必须得在这些磁盘上至少分两个区,每个区大小 1 TB。

LVM就在这些分区上提供了一个抽象层。它不使用磁盘上的传统分区,而是在你对这些磁盘初始化后,将它们当作独立的“物理卷”来对待。然后,你就可以基于这些物理卷创建“逻辑卷”。例如,你可以将这两个 1 TB 的磁盘组合成一个 2 TB 的分区,你的系统将只看到一个 2 TB 的卷,而LVM将会在后台处理这一切。一组物理卷以及一组逻辑卷被称之为“卷组”,一个标准的系统只会有一个卷组。

该抽象层使得调整分区、将多个磁盘组合成单个卷、甚至为一个运行着的分区的文件系统创建“快照”变得十分简单,而完成所有这一切都无需先卸载分区。

注意,如果你没有创建备份,那么将多个磁盘合并成一个卷将会是个糟糕的想法。它就像RAID 0——如果你将两个 1 TB 的卷组合成一个 2 TB 的卷,只要其中一个硬盘失败,你将丢失该卷上的重要数据。所以,如果你要走这条路,那么备份就极其重要。

### 管理LVM卷的图形化工具 ###

通常,[LVM通过Linux终端命令来管理][5]。这在Ubuntu上也行得通,但是有个更简单的图形化方法可供大家采用。如果你是一个熟悉GParted或者与其类似的分区管理器的Linux用户,那么很遗憾:GParted根本不支持LVM磁盘。

然而,你可以使用Ubuntu附带的磁盘工具。该工具也被称之为GNOME磁盘工具,或者叫Palimpsest。点击停靠栏上的图标来开启它吧,或者搜索“磁盘”然后敲击回车。不像GParted,该磁盘工具将会在“其它设备”下显示LVM分区,因此你可以根据需要格式化这些分区,也可以调整其它选项。该工具在Live CD或USB驱动器下也可以使用。

不幸的是,该磁盘工具不支持LVM的大多数强大特性,没有管理卷组、扩展分区,或者创建快照等选项。对于这些操作,你可以通过终端来实现,但是你没有那个必要。相反,你可以打开Ubuntu软件中心,搜索关键字LVM,然后安装逻辑卷管理工具,也可以在终端窗口中运行**sudo apt-get install system-config-lvm**命令来安装它。安装完之后,你就可以从停靠栏上打开逻辑卷管理工具了。

这个图形化配置工具是由红帽公司开发的,虽然有点陈旧了,但却是唯一的图形化方式,你可以通过它来完成上述操作,将那些终端命令抛诸脑后。

比如说,你想要添加一个新的物理卷到卷组中。你可以打开该工具,选择未初始化条目下的新磁盘,然后点击“初始化条目”按钮。然后,你就可以在未分配卷下找到新的物理卷了,你可以使用“添加到现存卷组”按钮来将它添加到“ubuntu-vg”卷组,这是Ubuntu在安装过程中创建的卷组。

卷组视图会列出你所有物理卷和逻辑卷的总览。这里,我们有两个横跨两个独立硬盘驱动器的物理分区,有一个交换分区和一个根分区,这正是Ubuntu默认设置的分区图表。由于我们从另一个驱动器添加了第二个物理分区,现在那里有大量未使用空间。

要扩展逻辑分区到物理空间,你可以在逻辑视图下选择它,点击编辑属性,然后修改大小来扩大分区。你也可以在这里缩减分区。

system-config-lvm的其它选项允许你设置快照和镜像。对于传统桌面而言,你或许不需要这些特性,但是在这里也可以通过图形化处理。记住,你也可以[使用终端命令完成这一切][6]。
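举个例子,下面是与上述“添加物理卷、扩展逻辑卷”图形化操作大体等价的几条终端命令(仅为示意:设备名与卷名以你的系统实际情况为准,Ubuntu 安装器默认创建的卷组通常名为 ubuntu-vg,根逻辑卷为 root,且假定文件系统为 ext4):

    sudo pvcreate /dev/sdb1                      # 初始化新的物理卷
    sudo vgextend ubuntu-vg /dev/sdb1            # 将其添加到现存卷组
    sudo lvextend -L +10G /dev/ubuntu-vg/root    # 将根逻辑卷扩大 10 GB
    sudo resize2fs /dev/ubuntu-vg/root           # 扩展 ext4 文件系统以使用新空间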
--------------------------------------------------------------------------------

via: http://www.howtogeek.com/211937/how-to-use-lvm-on-ubuntu-for-easy-partition-resizing-and-snapshots/

译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:http://www.howtogeek.com/162676/how-to-use-multiple-disks-intelligently-an-introduction-to-raid/
[2]:http://www.howtogeek.com/109380/how-to-use-windows-8s-storage-spaces-to-mirror-combine-drives/
[3]:http://www.howtogeek.com/114503/how-to-resize-your-ubuntu-partitions/
[4]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
[5]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
[6]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
@ -2,7 +2,7 @@
|
||||
================================================================================
|
||||
*作者声明: 如果你是因为某种神迹而在没看标题的情况下点开了这篇文章,那么我想再重申一些东西...这是一篇评论文章。文中的观点都是我自己的,不代表Phoronix和Michael的观点。它们完全是我自己的想法。
|
||||
|
||||
另外,没错……这可能是一篇引战的文章。我希望社团成员们更沉稳一些,因为我确实想在KDE和Gnome的社团上发起讨论,反馈。因此当我想指出——我所看到的——一个瑕疵时,我会尽量地做到具体而直接。这样,相关的讨论也能做到同样的具体和直接。再次声明:本文另一可选标题为“被[剪纸][1]千刀万剐”(原文剪纸一词为papercuts, 指易修复而烦人的漏洞,译者注)。
|
||||
另外,没错……这可能是一篇引战的文章。我希望社团成员们更沉稳一些,因为我确实想在KDE和Gnome的社团上发起讨论,反馈。因此当我想指出——我所看到的——一个瑕疵时,我会尽量地做到具体而直接。这样,相关的讨论也能做到同样的具体和直接。再次声明:本文另一可选标题为“被[细纸片][1]千刀万剐”(原文含paper cuts一词,指易修复但烦人的缺陷,译者注)。
|
||||
|
||||
现在,重申完毕……文章开始。
|
||||
|
||||
|
@ -0,0 +1,54 @@
|
||||
将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第四节 - GNOME设置
|
||||
================================================================================
|
||||
### Settings设置 ###
|
||||
|
||||
在这里我要挑一挑几个特定KDE控制模块的毛病,大部分原因是因为相比它们的GNOME对手来说,这些模块实在糟糕得可笑,实话说,真是悲哀。
|
||||
|
||||
第一个接招的?打印机。
|
||||
|
||||

|
||||
|
||||
GNOME在左,KDE在右。你知道左边跟右边的打印程序有什么区别吗?当我在GNOME控制中心打开“打印机”时,程序窗口弹出来了,之后什么也没发生。而当我在KDE系统设置打开“打印机”时,我收到了一条密码提示。我甚至还没能看一眼打印机,就必须先交出ROOT密码。
|
||||
|
||||
让我再重复一遍。在如今这个有了PolicyKit和Logind的时代,对一个本应只需sudo的操作,我依然被要求提供ROOT密码。我安装系统的时候甚至都没设置root密码。所以我必须跑到Konsole去,运行'sudo passwd root'命令,这样我才能给root设一个密码,这样我才能回到系统设置中的打印程序,然后交出root密码,然后仅仅是看一看哪些打印机可用。完成了这些工作后,当我点击“添加打印机”时,我再次收到请求ROOT密码的提示;当我解决了它,再选择打印机和驱动时,我又一次收到请求ROOT密码的提示。仅仅是为了添加一个打印机到系统,我就收到了三次密码请求。
|
||||
|
||||
而在GNOME下添加打印机,在点击打印机程序中的”解锁“之前,我没有收到任何请求SUDO密码的提示。整个过程我只被请求过一次,仅此而已。KDE,求你了……采用GNOME的”解锁“模式吧。不到一定需要的时候不要发出提示。还有,不管是哪个库,只要它允许KDE应用程序绕过PolicyKit/Logind(如果有的话)并直接请求ROOT权限……那就把它封进箱里吧。如果这是个多用户系统,那我要么必须交出ROOT密码,要么我必须时时刻刻呆着以免有一个用户需要升级、更改或添加一个新的打印机。而这两种情况都是完全无法接受的。
|
||||
|
||||
还有一件事……
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
给论坛的问题:哪种看起来更简洁?我在写这篇文章时意识到:当有任何附加打印机准备好时,Gnome打印机程序会把这个过程处理得非常简洁,它们在左边放了一个竖直栏来列出这些打印机。而当我在KDE添加第二台打印机时,它突然增加出一个左边栏来。在添加之前,我脑海中已经有了一个恐怖的画面:它会像图片文件夹显示预览图一样,直接往界面里再插一个图标进去。我很高兴也很惊讶地看到我错了。但事实是,它直接“长出”了另外一个从未存在过的竖直栏,彻底改变了界面布局,这样也称不上“好”。终究还是一种令人困惑、奇怪而又不直观的设计。
|
||||
|
||||
打印机说得够多了……下一个接受我公开石刑的KDE系统设置是?多媒体,即Phonon。
|
||||
|
||||

|
||||
|
||||
一如既往,GNOME在左边,KDE在右边。让我们先看看GNOME的系统设置先……眼睛从左到右,从上到下,对吧?来吧,就这样做。首先:音量控制滑条。滑条中的蓝色条与空白条百分百清晰地消除了哪边是“音量增加”的困惑。在音量控制条后马上就是一个On/Off开关,用来开关静音功能。Gnome的再次得分在于静音后能记住当前设置的音量,而在点击音量增加按钮取消静音后能回到原来设置的音量中来。Kmixer,你个健忘的垃圾,我真的希望我能多讨论你。
|
||||
|
||||
|
||||
继续!输入输出和应用程序的标签选项?每一个应用程序的音量随时可控?Gnome,每过一秒,我爱你越深。均衡的选项设置,声音配置,和清晰地标上标志的“测试麦克风”选项。
|
||||
|
||||
|
||||
|
||||
我不清楚它能否以一种更干净更简洁的设计实现。是的,它只是一个Gnome化的Pavucontrol,但我想这就是重要的地方。Pavucontrol在这方面几乎完全做对了,Gnome控制中心中的“声音”应用程序的改善使它向完美更进了一步。
|
||||
|
||||
Phonon,该你上了。但开始前我想说:我TM看到的是什么?我知道我看到的是音频设备的权限列表,但是它呈现的方式有点太坑。还有,那些用户可能关心的那些东西哪去了?拥有一个权限列表当然很好,它也应该存在,但问题是权限列表属于那种用户乱搞一两次之后就不会再碰的东西。它还不够重要,或者说常用到可以直接放在正中间位置的程度。音量控制滑块呢?对每个应用程序的音量控制功能呢?那些用户使用最频繁的东西呢?好吧,它们在Kmix中,一个分离的程序,拥有它自己的配置选项……而不是在系统设置下……这样真的让“系统设置”这个词变得有点用词不当。
|
||||
|
||||

|
||||
|
||||
上面展示的是Gnome的网络设置。KDE的没有展示,原因就是我接下来要吐槽的内容了。如果你进入KDE的系统设置里,然后点击“网络”区域中三个选项中的任何一个,你会得到一大堆的选项:蓝牙设置、Samba分享的默认用户名和密码(说真的,“连通性(Connectivity)”下面只有两个选项:SMB的用户名和密码。TMD怎么就配得上“连通性”这么大的词?)、浏览器身份验证控制(只有Konqueror能用……一个已经废弃的项目)、代理设置,等等……我的wifi设置哪去了?它们没在这。哪去了?好吧,它们在网络应用程序的设置里面……而不是在网络设置里……
|
||||
|
||||
KDE,你这是要杀了我啊,你有“系统设置”当凶器,拿着它动手吧!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4
|
||||
|
||||
作者:Eric Griffith
|
||||
译者:[XLCYun](https://github.com/XLCYun)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -0,0 +1,39 @@
|
||||
将GNOME作为我的Linux桌面的一周:他们做对的与做错的 - 第五节 - 总结
|
||||
================================================================================
|
||||
### 用户体验和最后想法 ###
|
||||
|
||||
当Gnome 2.x和KDE 4.x正面交锋时……我很开心地在它们之间来回切换。它们有我爱的东西,也有我恨的东西,但总的来说用起来还算是一种乐趣。然后Gnome 3.x来了,带着一场Gnome Shell的戏剧。那时我就放弃了Gnome,尽我所能地避开它。当时它对用户不友好,也不直观,它打破了原有的设计范式,只为平板统治世界的未来做准备……而根据平板下跌的销量来看,这样的未来不大可能实现。
|
||||
|
||||
Gnome 3后续发布了八个版本后,奇迹发生了。Gnome变得对用户友好了,变得直观了。它完美吗?当然不。我还是很讨厌它想推动的那种设计范式,我讨厌它总想把工作流(work flow)强加给我,但是在时间和耐心的作用下,这两者都能被接受。只要你能够不去纠结Gnome Shell那外星人一样的界面,开始跟Gnome的其它部分(特别是控制中心)互动,你就能发现Gnome绝对做对了一件事:细节。对细节的关注!
|
||||
|
||||
人们能适应新的界面设计范例,能适应新的工作流——iPhone和iPad都证明了这一点——但真正一直让他们操心的是“纸片的割伤”(paper cuts,此处指易于修复但烦人的缺陷,译注)。
|
||||
|
||||
它带出了KDE和Gnome之间最重要的一个区别。Gnome感觉像一个产品。像一种非凡的体验。你用它的时候,觉得它是完整的,你要的东西都在你的指尖。它让人感觉就像是一个拥有windows或者OS X那样桌面体验的Linux桌面版:你要的都在里面,而且它是被同一个目标一致的团队中的同一个人写出来的。天,即使是一个应用程序发出的sudo请求都感觉是Gnome下的一个特意设计的部分,就像在Windows下的一样。而在KDE它就像是任何应用程序都能创建的那种随机外观的弹窗。它不像是以系统的一部分这样的正式身份停下来说“嘿,有个东西要请求管理员权限!你要给它吗?”。
|
||||
|
||||
KDE给人的体验缺乏凝聚力。它像是在没有方向地打转,感觉不到完整的体验。它就像是一堆往不同方向移动的东西,只不过恰好都使用了同一个工具包。如果开发者对此很满意,那么好吧,他们开心就好,但是如果他们想提供最好的体验,那就需要多关注那些小地方了。用户体验和直观性应当作为每一个应用程序的设计中心,应当有一个愿景,知道KDE要提供什么,并且知道它看起来应该是什么样的。
|
||||
|
||||
是不是有什么原因阻止我在KDE下使用Gnome磁盘管理、Rhythmbox、Evolution?没有。没有。没有。但是这样问就偏离重点了。Gnome和KDE都自称“桌面环境”。那么它们就应该是完整的环境,这意味着它们的各个部件应该汇集并紧密结合在一起,意味着你使用它们环境下的工具,因为它们承诺“您在一个完整桌面中需要的任何东西,我们都提供”。说真的?只有Gnome看起来能符合完整的要求。KDE在“汇集在一起”这一方面感觉就像个半成品,更不用说提供“完整体验”中你所需要的东西。Gnome磁盘管理没有相应的对手——KDE 分区管理器(KDE Partition Manager)要求ROOT权限。KDE没有“首次用户注册”的流程(原文:No 'First Time User' run through,可能是指系统安装后KDE没有引导新用户熟悉环境的过程,译注),现在也不过是在Kubuntu下引入了一个用户管理器。老天,Gnome甚至提供了地图、笔记、日历和时钟应用。这些应用都是必不可少的吗?不,当然不是。但是正是这些应用帮助Gnome传递出“Gnome是一种完整丰富的体验”的理念。
|
||||
|
||||
我吐槽的KDE问题并非不可能解决,绝对不是这样的!但是它需要有人去关心它,需要开发者为他们的作品感到自豪,而不仅仅是为它们实现的功能而感到自豪——出色的整体组织能带来巨大的价值。别夺走用户设置选项的能力——GNOME 3.x就是因为缺乏配置能力而为我所诟病,但也别把“好吧,你想怎么设置就怎么设置”当作不提供任何合理默认设置的借口。默认设置是用户首先看到的东西,是用户从打开软件的第一刻起进行评判的依据。给用户留个好印象吧。
|
||||
|
||||
我知道KDE开发者们知道设计很重要,这也是为什么Visual Design Group(视觉设计团体)存在的原因,但是感觉好像他们没有让VDG充分发挥。所以KDE里存在组织上的缺陷。不是KDE没办法完整,不是它没办法汇集整合在一起然后解决衰败问题,只是开发者们没做到。他们瞄准了靶心……但是偏了。
|
||||
|
||||
还有,在任何人说这句话之前……千万别说“欢迎提交补丁”。因为当我乐意为个别问题提交补丁时,只要开发者坚持以他们喜欢却不直观的方式行事,更多这样的烦心事就会不断出现。这不只关乎Muon的中心对齐问题,也不只关乎Amarok的界面太丑,也不只关乎每次我敲下快捷键后,弹出的音量和亮度调节窗口占掉一大块屏幕“地皮”(说真的,谁来把这些东西缩小一点)。
|
||||
|
||||
这跟冷漠的心态有关,跟开发者们在为他们的应用设计UI时根本就不多加思考有关。KDE团队做的东西都能良好地工作:Amarok能播放音乐,Dragon能播放视频,Kwin或Qt和kdelibs似乎比Mutter/gtk更强更高效(仅根据我的电池电量消耗来判断,非科学性测试)。这些都很好,很重要……但是它们呈现的方式也很重要。甚至可以说,呈现方式是最重要的,因为它是用户看到并与之交互的东西。
|
||||
|
||||
KDE应用开发者们……让VDG参与进来吧。让VDG审查并核准每一个“核心”应用,让一个VDG的UI/UX专家来设计应用的使用模式和使用流程,以此保证其直观性。真见鬼,不管你们在开发的是什么应用,仅仅把它的原型发到VDG论坛寻求反馈,都可能得到一些非常好的指点和反馈。你们手握这么好的资源,现在赶紧用起来吧。
|
||||
|
||||
我不想显得一点都不懂感恩。我爱KDE,我爱那些志愿者们为了给Linux用户提供一个可行的桌面选择而付出的工作与努力,也同样感谢Gnome。正是因为我在乎,我才写这篇文章。因为我想看到更好的KDE,我想看到它走得比以前更远。而这样做需要每个人继续努力,并且需要人们不再躲避批评。它需要人们诚实地面对自己与系统交互的体验,以及系统在哪些地方崩坏了。如果我们不能直言批评,如果我们不说“这真垃圾!”,那么情况永远不会变好。
|
||||
|
||||
这周过后我会继续使用Gnome吗?可能不会。Gnome还在试着强迫我接受它的工作流,而我不想追随,也不想遵循,因为使用它的时候我感觉效率变低了,它并不符合我的思维模式。可是对于我的朋友们,当他们问我“我该用哪种桌面环境?”时,我可能会推荐Gnome,特别是那些不太懂技术、只要求“能用就行”的朋友。以目前KDE的状况来看,这可能是我能给出的最狠毒的评价了。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5
|
||||
|
||||
作者:Eric Griffith
|
||||
译者:[XLCYun](https://github.com/XLCYun)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -0,0 +1,69 @@
|
||||
Linux有问必答——如何启用Open vSwitch的日志功能以便调试和排障
|
||||
================================================================================
|
||||
> **问题** 我试着为我的Open vSwitch部署排障,鉴于此,我想要检查它的由内建日志机制生成的调试信息。我怎样才能启用Open vSwitch的日志功能,并且修改它的日志等级(如,修改成INFO/DEBUG级别)以便于检查更多详细的调试信息呢?
|
||||
|
||||
Open vSwitch(OVS)是Linux平台上最流行的开源虚拟交换机实现。由于当今的数据中心日益依赖于软件定义网络(SDN)架构,OVS作为数据中心SDN部署中事实上的标准网络元素而被快速采用。
|
||||
|
||||
Open vSwitch具有一个内建的日志机制,称为VLOG。VLOG工具允许你在各个交换机组件中启用并自定义日志,由VLOG生成的日志信息可以发送到控制台、syslog以及独立日志文件等目的地的组合,以供检查。你可以通过一个名为`ovs-appctl`的命令行工具在运行时动态配置OVS日志。
|
||||
|
||||

|
||||
|
||||
这里为你演示如何使用`ovs-appctl`启用Open vSwitch中的日志功能,并进行自定义。
|
||||
|
||||
下面是`ovs-appctl`自定义VLOG的语法。
|
||||
|
||||
$ sudo ovs-appctl vlog/set module[:facility[:level]]
|
||||
|
||||
- **Module**:OVS中的任何合法组件的名称(如netdev,ofproto,dpif,vswitchd,以及其它大量组件)
|
||||
- **Facility**:日志信息的目的地(必须是:console,syslog,或者file)
|
||||
- **Level**:日志的详细程度(必须是:emer,err,warn,info,或者dbg)
|
||||
|
||||
在OVS源代码中,模块名称在源文件中是以以下格式定义的:
|
||||
|
||||
VLOG_DEFINE_THIS_MODULE(<module-name>);
|
||||
|
||||
例如,在lib/netdev.c中,你可以看到:
|
||||
|
||||
VLOG_DEFINE_THIS_MODULE(netdev);
|
||||
|
||||
这个表明,lib/netdev.c是netdev模块的一部分,任何在lib/netdev.c中生成的日志信息将属于netdev模块。
|
||||
|
||||
在OVS源代码中,有多个严重度等级用于定义几个不同类型的日志信息:VLOG_INFO()用于报告,VLOG_WARN()用于警告,VLOG_ERR()用于错误提示,VLOG_DBG()用于调试信息,VLOG_EMERG用于紧急情况。日志等级和工具确定哪个日志信息发送到哪里。
|
||||
|
||||
要查看可用模块、工具和各自日志级别的完整列表,请运行以下命令。该命令必须在你启动OVS后调用。
|
||||
|
||||
$ sudo ovs-appctl vlog/list
|
||||
|
||||

|
||||
|
||||
输出结果显示了用于三个工具(console,syslog,file)的各个模块的调试级别。默认情况下,所有模块的日志等级都被设置为INFO。
|
||||
|
||||
指定任何一个OVS模块,你可以选择性地修改任何特定工具的调试级别。例如,如果你想要在控制台屏幕中查看dpif更为详细的调试信息,可以运行以下命令。
|
||||
|
||||
$ sudo ovs-appctl vlog/set dpif:console:dbg
|
||||
|
||||
你将看到dpif模块的console工具已经将其日志等级修改为DBG,而其它两个工具syslog和file的日志级别仍然没有改变。
|
||||
|
||||

|
||||
|
||||
如果你想要修改所有模块的日志等级,你可以指定“ANY”作为模块名。例如,下面命令将修改每个模块的console的日志级别为DBG。
|
||||
|
||||
$ sudo ovs-appctl vlog/set ANY:console:dbg
|
||||
|
||||

|
||||
|
||||
同时,如果你想要一次性修改所有三个工具的日志级别,你可以指定“ANY”作为工具名。例如,下面的命令将修改每个模块的所有工具的日志级别为DBG。
|
||||
|
||||
$ sudo ovs-appctl vlog/set ANY:ANY:dbg
|
||||
|
||||
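调试结束后,别忘了把日志级别调回默认值,以免日志文件快速膨胀。下面是一个简单示意,用的仍然是上面介绍的 vlog/set 语法,把所有模块、所有工具的级别恢复为 INFO,并确认结果:

    $ sudo ovs-appctl vlog/set ANY:ANY:info
    $ sudo ovs-appctl vlog/list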
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/enable-logging-open-vswitch.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
@ -0,0 +1,49 @@
|
||||
Linux有问必答——如何修复Linux上的“ImportError: No module named wxversion”错误
|
||||
================================================================================
|
||||
|
||||
> **问题** 我试着在[你的Linux发行版]上运行一个Python应用,但是我得到了这个错误"ImportError: No module named wxversion."。我怎样才能解决Python程序中的这个错误呢?
|
||||
|
||||
Looking for python... 2.7.9 - Traceback (most recent call last):
|
||||
File "/home/dev/playonlinux/python/check_python.py", line 1, in
|
||||
import os, wxversion
|
||||
ImportError: No module named wxversion
|
||||
failed tests
|
||||
|
||||
该错误表明,你的Python应用是基于GUI的,依赖于一个名为wxPython的缺失模块。[wxPython][1]是wxWidgets GUI库的Python扩展模块,而wxWidgets普遍被C++程序员用来设计GUI应用。该wxPython扩展允许Python开发者在任何Python应用中方便地设计和整合GUI。
|
||||
要解决这个导入错误,你需要按照下面的介绍,在你的Linux上安装wxPython。
|
||||
|
||||
### 安装wxPython到Debian,Ubuntu或Linux Mint ###
|
||||
|
||||
$ sudo apt-get install python-wxgtk2.8
|
||||
|
||||
### 安装wxPython到Fedora ###
|
||||
|
||||
$ sudo yum install wxPython
|
||||
|
||||
### 安装wxPython到CentOS/RHEL ###
|
||||
|
||||
wxPython可以在CentOS/RHEL的EPEL仓库中获取到,而基本仓库中则没有。因此,首先要在你的系统中[启用EPEL仓库][2],然后使用yum命令来安装。
|
||||
|
||||
$ sudo yum install wxPython
|
||||
|
||||
### 安装wxPython到Arch Linux ###
|
||||
|
||||
$ sudo pacman -S wxpython
|
||||
|
||||
### 安装wxPython到Gentoo ###
|
||||
|
||||
$ emerge wxPython
|
||||
|
||||
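安装完成后,你可以用类似下面的一行命令快速验证 wxversion 模块是否可用(仅为验证思路的示意,基于 Python 2 环境;如果安装成功,它会打印出已安装的 wxPython 版本列表):

    $ python -c "import wxversion; print(wxversion.getInstalled())"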
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/importerror-no-module-named-wxversion.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
||||
[1]:http://wxpython.org/
|
||||
[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
|
@ -0,0 +1,73 @@
|
||||
Linux有问必答——如何在Linux上安装Git
|
||||
================================================================================
|
||||
|
||||
> **问题:** 我尝试从一个Git公共仓库克隆项目,但出现了这样的错误提示:“git: command not found”。 请问我该如何安装Git? [注明一下是哪个Linux发行版]?
|
||||
|
||||
Git是一个流行的并且开源的版本控制系统(VCS),最初是为Linux环境开发的。跟CVS或者SVN这些版本控制系统不同的是,Git的版本控制被认为是“分布式的”,某种意义上,git的本地工作目录可以作为一个功能完善的仓库来使用,它具备完整的历史记录和版本追踪能力。在这种工作模型之下,各个协作者将内容提交到他们的本地仓库中(与之相对的会直接提交到核心仓库),如果有必要,再有选择性地推送到核心仓库。这就为Git这个版本管理系统带来了大型协作系统所必须的可扩展能力和冗余能力。
|
||||
|
||||

|
||||
|
||||
### 使用包管理器安装Git ###
|
||||
|
||||
Git已经被所有的主力Linux发行版所支持。所以安装它最简单的方法就是使用各个Linux发行版的包管理器。
|
||||
|
||||
**Debian, Ubuntu, 或 Linux Mint**
|
||||
|
||||
$ sudo apt-get install git
|
||||
|
||||
**Fedora, CentOS 或 RHEL**
|
||||
|
||||
$ sudo yum install git
|
||||
|
||||
**Arch Linux**
|
||||
|
||||
$ sudo pacman -S git
|
||||
|
||||
**OpenSUSE**
|
||||
|
||||
$ sudo zypper install git
|
||||
|
||||
**Gentoo**
|
||||
|
||||
$ emerge --ask --verbose dev-vcs/git
|
||||
|
||||
### 从源码安装Git ###
|
||||
|
||||
如果由于某些原因,你希望从源码安装Git,请按照如下介绍操作。
|
||||
|
||||
**安装依赖包**
|
||||
|
||||
在构建Git之前,先安装它的依赖包。
|
||||
|
||||
**Debian, Ubuntu 或 Linux Mint**
|
||||
|
||||
$ sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev asciidoc xmlto docbook2x
|
||||
|
||||
**Fedora, CentOS 或 RHEL**
|
||||
|
||||
$ sudo yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel asciidoc xmlto docbook2x
|
||||
|
||||
#### 从源码编译Git ####
|
||||
|
||||
从 [https://github.com/git/git/releases][1] 下载最新版本的Git。然后在/usr下构建和安装。
|
||||
|
||||
注意,如果你打算安装到其他目录下(例如:/opt),那就把配置命令中的“--prefix=/usr”替换成其他路径。
|
||||
|
||||
$ cd git-x.x.x
|
||||
$ make configure
|
||||
$ ./configure --prefix=/usr
|
||||
$ make all doc info
|
||||
$ sudo make install install-doc install-html install-info
|
||||
|
||||
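无论是通过包管理器安装还是从源码编译,安装完成后都可以验证一下 Git 是否可用,并做一次最基本的配置(示意命令,姓名和邮箱请替换成你自己的):

    $ git --version
    $ git config --global user.name "Your Name"
    $ git config --global user.email "you@example.com"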
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/install-git-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[mr-ping](https://github.com/mr-ping)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
||||
[1]:https://github.com/git/git/releases
|
@ -0,0 +1,98 @@
|
||||
如何在 Linux 和类 Unix 系统上临时清空 Bash 环境变量
|
||||
================================================================================
|
||||
我是个 bash shell 用户。我想临时清空 bash shell 环境变量。但我不想删除或者 unset 一个 export 环境变量。我怎样才能在 bash 或 ksh shell 的临时环境中运行程序呢?
|
||||
|
||||
你可以在 Linux 或类 Unix 系统中使用 env 命令设置并打印环境。env 命令将环境修改为命令行指定的那样之后再执行程序。
|
||||
|
||||
### 如何显示当前环境? ###
|
||||
|
||||
打开终端应用程序并输入下面的其中一个命令:
|
||||
|
||||
printenv
|
||||
|
||||
或
|
||||
|
||||
env
|
||||
|
||||
输出样例:
|
||||
|
||||

|
||||
Fig.01: Unix/Linux: 列出所有环境变量
|
||||
|
||||
### 统计环境变量数目 ###
|
||||
|
||||
输入下面的命令:
|
||||
|
||||
env | wc -l
|
||||
printenv | wc -l
|
||||
|
||||
输出样例:
|
||||
|
||||
20
|
||||
|
||||
### 在 bash/ksh/zsh 干净环境中运行程序 ###
|
||||
|
||||
语法如下所示:
|
||||
|
||||
env -i your-program-name-here arg1 arg2 ...
|
||||
|
||||
例如,不使用 http_proxy 和/或任何其它变量运行 wget 程序。临时清除所有 bash/ksh/zsh 环境变量并运行 wget 程序:
|
||||
|
||||
env -i /usr/local/bin/wget www.cyberciti.biz
|
||||
env -i wget www.cyberciti.biz
|
||||
|
||||
当你想忽略任何已经设置的环境变量来运行命令时,这非常有用。我每天都会多次使用这个命令,以便忽略 http_proxy 和其它我设置的环境变量。
|
||||
|
||||
#### 例子:使用 http_proxy ####
|
||||
|
||||
$ wget www.cyberciti.biz
|
||||
--2015-08-03 23:20:23-- http://www.cyberciti.biz/
|
||||
Connecting to 10.12.249.194:3128... connected.
|
||||
Proxy request sent, awaiting response... 200 OK
|
||||
Length: unspecified [text/html]
|
||||
Saving to: 'index.html'
|
||||
index.html [ <=> ] 36.17K 87.0KB/s in 0.4s
|
||||
2015-08-03 23:20:24 (87.0 KB/s) - 'index.html' saved [37041]
|
||||
|
||||
#### 例子:忽视 http_proxy ####
|
||||
|
||||
$ env -i /usr/local/bin/wget www.cyberciti.biz
|
||||
--2015-08-03 23:25:17-- http://www.cyberciti.biz/
|
||||
Resolving www.cyberciti.biz... 74.86.144.194
|
||||
Connecting to www.cyberciti.biz|74.86.144.194|:80... connected.
|
||||
HTTP request sent, awaiting response... 200 OK
|
||||
Length: unspecified [text/html]
|
||||
Saving to: 'index.html.1'
|
||||
index.html.1 [ <=> ] 36.17K 115KB/s in 0.3s
|
||||
2015-08-03 23:25:18 (115 KB/s) - 'index.html.1' saved [37041]
|
||||
|
||||
-i 选项使 env 命令完全忽视它继承的环境。但是,它并不阻止你的命令(例如 wget 或 curl)设置新的变量。同时,也要注意运行 bash/ksh shell 的副作用:
|
||||
|
||||
    env -i env | wc -l ## 空环境 ##
|
||||
    # 现在运行 bash ##
|
||||
env -i bash
|
||||
    ## 由 bash 程序设置的新环境 ##
|
||||
env | wc -l
|
||||
|
||||
#### 例子:设置一个环境变量 ####
|
||||
|
||||
语法如下:
|
||||
|
||||
env var=value /path/to/command arg1 arg2 ...
|
||||
## OR ##
|
||||
var=value /path/to/command arg1 arg2 ...
|
||||
|
||||
例如设置 http_proxy:
|
||||
|
||||
env http_proxy="http://USER:PASSWORD@server1.cyberciti.biz:3128/" \
|
||||
/usr/local/bin/wget www.cyberciti.biz
|
||||
|
||||
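另一个常见的思路(示意用法,要保留哪些变量请按需调整)是:用 env -i 清空全部环境,只显式保留少数必要变量,再启动一个不读取任何配置文件的干净 shell:

    env -i HOME="$HOME" TERM="$TERM" PATH=/usr/bin:/bin bash --noprofile --norc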
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.cyberciti.biz/faq/linux-unix-temporarily-clearing-environment-variables-command/
|
||||
|
||||
作者:Vivek Gite
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -0,0 +1,320 @@
|
||||
|
||||
|
||||
RHCSA系列: 复习基础命令及系统文档 – 第一部分
|
||||
================================================================================
|
||||
RHCSA(红帽认证系统管理员)是由红帽公司举办的认证考试。红帽公司为商业机构提供开源操作系统和软件,除此之外,还为这些企业和机构提供支持、训练以及咨询服务。
|
||||
|
||||

|
||||
|
||||
RHCSA 考试准备指南
|
||||
|
||||
RHCSA 考试(考试编号 EX200)通过后可以获得由红帽公司颁发的证书。RHCSA 考试是RHCT(红帽认证技师)的升级版,且RHCSA必须在新版的红帽企业版Linux(RHEL)上完成。RHCT和RHCSA的主要变化是RHCT基于RHEL5,而RHCSA基于RHEL6或7,这两个认证的等级也有所不同。
|
||||
|
||||
红帽认证管理员最基本的要求,是能够在红帽企业版Linux环境下执行如下系统管理任务:
|
||||
|
||||
- 理解并会使用命令管理文件、目录、命令行以及系统/软件包的文档
|
||||
- 使用不同的启动等级启动系统,识别和控制进程,启动或停止虚拟机
|
||||
- 使用分区和逻辑卷管理本地存储
|
||||
- 创建并且配置本地文件系统和网络文件系统,设置他们的属性(许可、加密、访问控制表)
|
||||
- 部署、配置、并且控制系统,包括安装、升级和卸载软件
|
||||
- 管理系统用户和组,包括使用集中式的LDAP目录进行身份验证
|
||||
- 确保系统安全,包括基础的防火墙规则和SELinux配置
|
||||
|
||||
|
||||
关于你所在国家的考试注册费用,请参考 [RHCSA 认证页面][1]。
|
||||
|
||||
|
||||
|
||||
|
||||
在这个共15章的RHCSA(红帽认证管理员)备考系列中,我们将基于红帽企业版Linux 7涵盖以下内容:
|
||||
|
||||
- Part 1: 回顾必会的命令和系统文档
|
||||
- Part 2: 在RHEL7中如何进行文件和目录管理
|
||||
- Part 3: 在RHEL7中如何管理用户和组
|
||||
- Part 4: 使用nano和vim管理命令/ 使用grep和正则表达式分析文本
|
||||
- Part 5: RHEL7的进程管理:启动、关机,以及介于二者之间的一切
|
||||
- Part 6: 使用 'Parted'和'SSM'来管理和加密系统存储
|
||||
- Part 7: 使用ACLs(访问控制表)并挂载 Samba /NFS 文件分享
|
||||
- Part 8: 加固SSH,设置主机名并开启网络服务
|
||||
- Part 9: 安装、配置和加固一个Web,FTP服务器
|
||||
- Part 10: Yum 包管理方式,使用Cron进行自动任务管理以及监控系统日志
|
||||
- Part 11: 使用FirewallD和Iptables设置防火墙,控制网络流量
|
||||
- Part 12: 使用Kickstart 自动安装RHEL 7
|
||||
- Part 13: RHEL7:什么是SeLinux?他的原理是什么?
|
||||
- Part 14: 在RHEL7 中使用基于LDAP的权限控制
|
||||
- Part 15: RHEL7的虚拟化:KVM 和虚拟机管理
|
||||
|
||||
在第一章,我们将讲解如何在终端或Shell窗口中输入和运行正确的命令,以及如何查找、查阅和使用系统文档。
|
||||
|
||||

|
||||
|
||||
RHCSA:回顾必会的Linux命令 - 第一部分
|
||||
|
||||
#### 前提: ####
|
||||
|
||||
至少你要熟悉如下命令
|
||||
|
||||
- [cd command][2] (改变目录)
|
||||
- [ls command][3] (列举文件)
|
||||
- [cp command][4] (复制文件)
|
||||
- [mv command][5] (移动或重命名文件)
|
||||
- [touch command][6] (创建一个新的文件或更新已存在文件的时间表)
|
||||
- rm command (删除文件)
|
||||
- mkdir command (创建目录)
|
||||
|
||||
在这篇文章中你将会找到更多的关于如何更好的使用他们的正确用法和特殊用法.
|
||||
|
||||
虽然没有严格的要求,但既然要讨论常用的Linux命令和方法,你最好安装一套RHEL7来尝试使用文章中提到的命令,这会让你学习起来更省力。
|
||||
|
||||
- [红帽企业版Linux(RHEL)7 安装指南][7]
|
||||
|
||||
### 使用Shell进行交互 ###
|
||||
如果我们以文本模式登录Linux,就会直接进入默认的shell(此时无法使用鼠标);另一方面,如果我们以图形化界面登录,则可以通过启动一个终端来开启shell。无论哪种方式,我们都会看到命令提示符,并可以开始输入和执行命令(按下Enter时,命令就会被执行)。
|
||||
|
||||
|
||||
|
||||
命令是由两个部分组成的:
|
||||
|
||||
- 命令本身
|
||||
- 参数
|
||||
|
||||
某些参数称为选项(通常以一个连字符开头),它们会改变命令对其他参数的操作方式。
|
||||
|
||||
type命令可以帮助我们识别某一个特定的命令是shell内建的,还是由单独的软件包提供的。这样区分的意义在于,我们去哪里能找到关于该命令的更多信息:shell内建的命令需要查看shell的man页,其他软件包提供的命令则需要查看它自己的man页。
|
||||
|
||||

|
||||
|
||||
检查Shell的内建命令
|
||||
|
||||
在上面的例子中, cd 和 type 是shell内建的命令,top和 less 是由其他的二进制文件提供的(在这种情况下,type将返回命令的位置)
|
||||
其他的内建命令
|
||||
|
||||
- [echo command][8]: 展示字符串
|
||||
- [pwd command][9]: 输出当前的工作目录
|
||||
|
||||

|
||||
|
||||
更多内建命令
|
||||
|
||||
**exec 命令**
|
||||
|
||||
exec会运行我们指定的外部程序。请注意,通常只要输入程序名就能运行它,但exec命令有一个特点:它会用新程序替换当前的shell进程,而不是在shell之下创建新的进程。这一点可以用下面的命令来验证:
|
||||
|
||||
# ps -ef | grep [shell 进程的PID]
|
||||
|
||||
当新的程序退出时,shell会话也随之结束。运行 exec top,然后按下 q 键退出top,你会注意到shell会话也结束了,如下面的屏幕录像所展示的那样:
|
||||
|
||||
注:youtube视频
|
||||
<iframe width="640" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/f02w4WT73LE"></iframe>
|
||||
|
||||
**export 命令**
|
||||
|
||||
将环境变量导出,使之对随后执行的命令可见。
|
||||
|
||||
**history 命令**
|
||||
|
||||
展示带编号的历史命令列表。在感叹号后输入命令编号(如 !100)可以再次执行这条命令。如果我们需要编辑历史列表中的命令,可以按下 Ctrl + r 并输入与该命令相关的前几个字符。
|
||||
当看到命令自动补全出来后,我们可以根据当前的需要来编辑它:
|
||||
|
||||
注:youtube视频
|
||||
<iframe width="640" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/69vafdSMfU4"></iframe>
|
||||
|
||||
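举个例子(命令编号仅为示意,请以你自己 history 列表中的编号为准):

    # history | tail -3      # 查看最近 3 条历史命令及其编号
    # !245                   # 再次执行编号为 245 的那条命令
    # !!                     # 再次执行上一条命令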
命令历史会保存在一个叫 .bash_history 的文件里。history命令是一个非常有用的减少输入量的工具,特别是在进行命令行编辑的时候。默认情况下,bash保留最后输入的500个命令,不过可以通过修改 HISTSIZE 环境变量来增加:
|
||||
|
||||
|
||||

|
||||
|
||||
Linux history 命令
|
||||
|
||||
但上述变化在我们下一次启动shell时并不会保留。为了让 HISTSIZE 变量的修改持久生效,我们需要手工编辑 ~/.bashrc 文件:
|
||||
|
||||
    # 关于history的设置,请参阅 bash(1) 文档中的 HISTSIZE 和 HISTFILESIZE
|
||||
HISTSIZE=1000
|
||||
|
||||
**重要**: 我们的更改不会生效,除非我们重新启动shell会话。
|
||||
|
||||
**alias 命令**
|
||||
不带参数或使用 -p 参数时,将以“名称=值”的形式输出别名列表;当提供了参数时,则会按给定的名称和值定义一个别名。
|
||||
|
||||
使用alias,我们可以创建自己的命令,或者给现有命令加上需要的参数进行修改。举个例子,假设我们想把 ls 别名为 ls --color=auto,这样就可以用不同颜色输出文件、目录和链接:
|
||||
|
||||
|
||||
# alias ls='ls --color=auto'
|
||||
|
||||

|
||||
|
||||
Linux 别名命令
|
||||
|
||||
**Note**: 你可以给你的新命令起任何名字,并且可以在单引号中包含任意多的命令,但是要用分号把它们分隔开。如:
|
||||
|
||||
# alias myNewCommand='cd /usr/bin; ls; cd; clear'
|
||||
|
||||
**exit 命令**
|
||||
|
||||
exit和logout命令都可以退出shell。exit可以退出任何shell,而logout只能注销登录shell,即以文本模式登录时自动启动的那种shell。
|
||||
|
||||
如果我们对某个程序有疑问,可以查看它的man页,使用man命令即可调出。此外,一些重要文件(inittab、fstab、hosts等)、库函数、shell、设备及其他功能也有各自的man页。
|
||||
|
||||
#### 举例: ####
|
||||
|
||||
- man uname (输出系统信息,如内核名称、处理器、操作系统类型、架构等).
|
||||
- man inittab (初始化守护设置).
|
||||
|
||||
另外一个重要的信息来源是info命令,它常用来读取info文件,这些文件往往比man页提供更多的信息。使用info加上某个命令的名字,即可查看该命令的信息:
|
||||
|
||||
# info ls
|
||||
# info cut
|
||||
|
||||
|
||||
另外,/usr/share/doc 文件夹下包含大量的子目录,里面可以找到大量文档,有文本文件,也有其他友好的格式。
|
||||
请确保学会使用这三种方法去查找命令的信息,并重点关注每个命令文档中介绍的详细语法。
|
||||
|
||||
**使用expand命令把tabs转换为空格**
|
||||
|
||||
有时候文本文件里包含了tab,但有些程序无法很好地处理tab;或者我们只是单纯地希望把tab转换成空格。这就是expand工具(由GNU coreutils提供)的用武之地。
|
||||
|
||||
举个例子,有一个文件 NumbersList.txt,让我们使用expand处理它,把tab转换为一个空格,并以标准输出显示结果。
|
||||
|
||||
# expand --tabs=1 NumbersList.txt
|
||||
|
||||

|
||||
|
||||
Linux expand 命令
|
||||
|
||||
unexpand命令可以实现相反的功能(将空格转为tab)
|
||||
|
||||
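一个最简单的 unexpand 用法示意(--all 表示转换行内所有空白,而不仅是行首的空白):

    # unexpand --tabs=1 --all NumbersList.txt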
**使用head输出文件首行及使用tail输出文件尾行**
|
||||
|
||||
通常情况下,head命令后跟着文件名时,将会输出该文件的前十行,我们可以通过 -n 参数来自定义具体的行数。
|
||||
|
||||
# head -n3 /etc/passwd
|
||||
# tail -n3 /etc/passwd
|
||||
|
||||

|
||||
|
||||
Linux 的 head 和 tail 命令
|
||||
|
||||
tail 最有意思的一个特性,是能够随着文件的增长持续显示其末尾的内容(tail -f my.log,其中 my.log 就是被监视的文件)。这在监控一个持续写入的日志文件时非常有用。
|
||||
|
||||
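例如,可以这样持续监控系统日志(示意命令,按 Ctrl+C 退出):

    # tail -f /var/log/messages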
更多: [Manage Files Effectively using head and tail Commands][10]
|
||||
|
||||
**使用paste合并文本文件**
|
||||
paste命令一行一行地合并多个文件,默认使用tab分隔对应的行,也可以用 -d 指定其他分隔符(下面的例子就使用等号作为分隔符):
|
||||
# paste -d= file1 file2
|
||||
|
||||

|
||||
|
||||
在Linux中合并文件
|
||||
|
||||
**使用split命令将文件分块**
|
||||
|
||||
split 命令常用于把一个文件切割成两个或多个以我们自定义的前缀命名的文件。切割的依据可以是文件大小、区块数或行数,生成的文件会带有数字或字母的后缀。在下面的例子中,我们将切割bash.pdf,每个文件50KB(-b 50KB),并使用数字后缀(-d):
|
||||
|
||||
# split -b 50KB -d bash.pdf bash_
|
||||
|
||||

|
||||
|
||||
在Linux下划分文件
|
||||
|
||||
你可以使用如下命令来合并这些文件,生成源文件:
|
||||
|
||||
# cat bash_00 bash_01 bash_02 bash_03 bash_04 bash_05 > bash.pdf
|
||||
|
||||
**使用tr命令改变字符**
|
||||
|
||||
tr 命令多用于逐个转换(替换)字符,或转换某个字符范围。和之前一样,下面的实例将使用同一个文件file2,我们要实现:
|
||||
|
||||
- 小写字母 o 变成大写
|
||||
- 所有的小写字母都变成大写字母
|
||||
|
||||
# cat file2 | tr o O
|
||||
# cat file2 | tr [a-z] [A-Z]
|
||||
|
||||

|
||||
|
||||
在Linux中替换文字
|
||||
|
||||
**使用uniq和sort检查或删除重复的文字**
|
||||
|
||||
uniq命令可以帮我们找出或删除文件中重复的行,结果默认写到stdout。必须注意的是,uniq只能检测出相邻的相同行,因此uniq往往和sort一起使用(sort一般用于对文本文件的内容进行排序)。
|
||||
|
||||
|
||||
默认情况下,sort以第一个字段(以空格分隔)作为排序关键字。想指定其他字段作为关键字,需要使用 -k 参数。请注意下面的例子中sort和uniq是如何配合输出我们想要的字段的:
|
||||
|
||||
# cat file3
|
||||
# sort file3 | uniq
|
||||
# sort -k2 file3 | uniq
|
||||
# sort -k3 file3 | uniq
|
||||
|
||||

|
||||
|
||||
删除文件中重复的行
|
||||
|
||||
**从文件中提取文本的命令**
|
||||
|
||||
cut命令可以基于字节(-b)、字符(-c)或字段(-f)的位置,从stdin或文件中提取指定部分,并输出到标准输出。
|
||||
|
||||
当我们按字段切割时,默认的分隔符是tab,不过你可以通过 -d 参数来自定义分隔符。
|
||||
|
||||
    # cut -d: -f1,3 /etc/passwd # 这个例子提取了第一和第三个字段的文本
|
||||
    # cut -d: -f2-4 /etc/passwd # 这个例子提取了第二到第四个字段的文本
|
||||
|
||||

|
||||
|
||||
从文件中提取文本
|
||||
|
||||
|
||||
注意,上方的两个输出的结果是十分简洁的。
|
||||
|
||||
**使用fmt命令重新格式化文件**
|
||||
|
||||
fmt 被用来“清理”内容繁杂、排版混乱或缩进很多的文件。重新格式化后,默认每行不超过75个字符宽。你可以通过 -w(width,宽度)参数改变这个设定,把行宽设置成特定的数值。
|
||||
|
||||
举个例子,让我们看看用fmt把 /etc/passwd 格式化为每行100个字符宽时会发生什么。再一次,输出变得更加整洁易读。
|
||||
|
||||
# fmt -w100 /etc/passwd
|
||||
|
||||

|
||||
|
||||
Linux文件重新格式化
|
||||
|
||||
**使用pr命令格式化打印内容**
|
||||
|
||||
pr 用于给一个或多个文件分页、分栏,以便打印。换句话说,用pr格式化一个文件,能使它打印出来时更加美观。举个例子,下面这个命令:
|
||||
|
||||
# ls -a /etc | pr -n --columns=3 -h "Files in /etc"
|
||||
|
||||
会以友好的排版方式(3栏)输出/etc下的文件,并自定义了页眉(通过 -h 选项实现)和行号(通过 -n 选项实现)。
|
||||
|
||||

|
||||
|
||||
Linux的文件格式
|
||||
|
||||
### 总结 ###
|
||||
|
||||
在这篇文章中,我们讨论了如何在Shell或终端中以正确的语法输入和执行命令,并解释了如何查找、查阅和使用系统文档。就是这么简单,这就是你成为RHCSA的第一大步。
|
||||
|
||||
如果你想分享一些其他的、你经常使用并能有效帮你完成日常工作的基础命令,请在下方留言。也欢迎提出问题。我们期待您的回复。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:https://www.redhat.com/en/services/certification/rhcsa
|
||||
[2]:http://www.tecmint.com/cd-command-in-linux/
|
||||
[3]:http://www.tecmint.com/ls-command-interview-questions/
|
||||
[4]:http://www.tecmint.com/advanced-copy-command-shows-progress-bar-while-copying-files/
|
||||
[5]:http://www.tecmint.com/rename-multiple-files-in-linux/
|
||||
[6]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/
|
||||
[7]:http://www.tecmint.com/redhat-enterprise-linux-7-installation/
|
||||
[8]:http://www.tecmint.com/echo-command-in-linux/
|
||||
[9]:http://www.tecmint.com/pwd-command-examples/
|
||||
[10]:http://www.tecmint.com/view-contents-of-file-in-linux/
|
@ -0,0 +1,325 @@
|
||||
RHCSA 系列: 如何进行文件和目录管理 – Part 2
|
||||
================================================================================
|
||||
|
||||
在本篇(RHCSA 第二篇:文件和目录管理)中,我们将回顾一些系统管理员日常任务需要的技能。
|
||||
|
||||

|
||||
|
||||
|
||||
RHCSA: 文件和目录管理 - 第二章
|
||||
### 创建,删除,复制和移动文件及目录 ###
|
||||
|
||||
文件和目录管理是每一个系统管理员都应该掌握的必备技能。它包括了从命令行创建和删除文本文件(每个程序的配置核心)以及目录(用来组织文件和其他目录),还包括识别已有文件的类型。
|
||||
|
||||
[touch 命令][1] 不仅仅能用来创建空文件,还能用来更新已有文件的访问时间和修改时间。
|
||||
|
||||

|
||||
|
||||
touch 命令示例
|
||||
|
||||
你可以使用 `file [filename]`来判断一个文件的类型 (在你用文本编辑器编辑之前,判断类型将会更方便编辑).
|
||||
|
||||

|
||||
|
||||
file 命令示例
|
||||
|
||||
使用`rm [filename]` 可以删除文件
|
||||
|
||||

|
||||
|
||||
rm 命令示例
|
||||
|
||||
对于目录,你可以使用`mkdir [directory]`在已经存在的路径中创建目录,或者使用 `mkdir -p [/full/path/to/directory]` 连同缺失的父目录一起创建出完整路径。
|
||||
|
||||
|
||||

|
||||
|
||||
mkdir 命令示例
|
||||
|
||||
当你想要删除目录时,在使用`rmdir [directory]`前,你需要先确保目录是空的;或者使用更强力的命令(小心使用!)`rm -rf [directory]`。后者会强制删除`[directory]`及其全部内容,所以使用这个命令有一定的风险。
|
||||
|
||||
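下面把这几个命令串成一个小示意(目录名仅为演示):

    # mkdir -p /tmp/demo/sub      # 连同父目录一起创建多级目录
    # rmdir /tmp/demo/sub         # 删除空目录
    # rm -rf /tmp/demo            # 强制删除目录及其全部内容(慎用)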
### 输入输出重定向以及管道 ###
|
||||
|
||||
命令行环境提供了两个非常有用的功能:允许将命令的输入和输出重定向到文件,以及把一个命令的输出作为另一个命令的输入,这两者分别称为重定向和管道。
|
||||
|
||||
|
||||
为了理解这两个重要概念,我们首先需要理解三种最重要的I/O(输入和输出)字符流(或序列),它们实际上是 *nix 意义下的特殊文件。
|
||||
|
||||
- 标准输入(即 stdin)默认连接到键盘。换句话说,键盘是向命令行输入命令的标准输入设备。
|
||||
- 标准输出(即 stdout)默认连接到屏幕,命令的输出会显示在屏幕上。
|
||||
- 标准错误(即 stderr)默认也输出到屏幕上,用来显示命令的状态和错误信息。
|
||||
|
||||
|
||||
在下面的例子中,`ls /var`的结果被发送到stdout(显示在屏幕上),ls /tecmint 的结果同样显示在屏幕上,但后一种情况下显示的是stderr(标准错误)。
|
||||
|
||||

|
||||
输入和输出命令实例
|
||||
|
||||
为了更容易识别这些特殊文件,每个文件都被分配了一个文件描述符(用来访问它们的抽象标识)。要理解的要点是,这些文件和其他文件一样,都可以被重定向。这意味着你可以捕获一个文件、命令或脚本的输出,并把它作为输入传送给另一个文件、命令或脚本,比如把命令的输出结果存储在磁盘上,供稍后分析。
|
||||
|
||||
要重定向 stdin(文件描述符0)、stdout(文件描述符1)或 stderr(文件描述符2),可以使用如下操作符。
|
||||
|
||||
注:表格
|
||||
<table cellspacing="0" border="0">
|
||||
<colgroup width="226"></colgroup>
|
||||
<colgroup width="743"></colgroup>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td align="CENTER" height="24" bgcolor="#999999" style="border: 1px solid #000000;"><b><span style="font-size: medium;">转向操作</span></b></td>
|
||||
<td align="CENTER" bgcolor="#999999" style="border: 1px solid #000000;"><b><span style="font-size: medium;">效果</span></b></td>
|
||||
</tr>
|
||||
<tr class="alt">
|
||||
<td align="CENTER" height="18" style="border: 1px solid #000000;"><b><span style="font-family: Courier New;">></span></b></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">标准输出到一个文件。如果目标文件存在,内容就会被重写</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td align="CENTER" height="18" style="border: 1px solid #000000;"><b><span style="font-family: Courier New;">>></span></b></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">添加标准输出到文件尾部</td>
|
||||
</tr>
|
||||
<tr class="alt">
|
||||
<td align="CENTER" height="18" style="border: 1px solid #000000;"><b><span style="font-family: Courier New;">2></span></b></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">标准错误输出到一个文件。如果目标文件存在,内容就会被重写</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td align="CENTER" height="18" style="border: 1px solid #000000;"><b><span style="font-family: Courier New;">2>></span></b></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">添加标准错误输出到文件尾部.</td>
|
||||
</tr>
|
||||
<tr class="alt">
|
||||
<td align="CENTER" height="18" style="border: 1px solid #000000;"><b><span style="font-family: Courier New;">&></span></b></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">标准错误和标准输出都到一个文件。如果目标文件存在,内容就会被重写</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td align="CENTER" height="18" style="border: 1px solid #000000;"><b><span style="font-family: Courier New;"><</span></b></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">使用特定的文件做标准输出</td>
|
||||
</tr>
|
||||
<tr class="alt">
|
||||
<td align="CENTER" height="18" style="border: 1px solid #000000;"><b><span style="font-family: Courier New;"><></span></b></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">使用特定的文件做标准输出和标准错误</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
与重定向不同,管道的用法是在一个命令后面加上竖线(|),再接上另一个命令。
|
||||
|
||||
记得:
|
||||
|
||||
- 重定向用来把命令的输出定向到文件,或者把文件定向为命令的输入。
|
||||
- 管道是用来将命令的输出转发到另一个命令作为输入。
|
||||
|
||||
#### 重定向和管道的使用实例 ####
|
||||
|
||||
#### 例1:将命令的输出重定向到文件 ####
|
||||
|
||||
有些时候,你需要遍历一个文件列表。要做到这一点,你可以先把列表保存到文件中,然后按行读取该文件。虽然你可以直接遍历ls的输出,不过这个例子是用来说明重定向的。
|
||||
|
||||
# ls -1 /var/mail > mail.txt
|
||||
|
||||

|
||||
|
||||
将一个命令的输出到文件
|
||||
|
||||
#### 例2:重定向stdout和stderr到/dev/null ####
|
||||
|
||||
如果不想让标准输出和标准错误显示在屏幕上,我们可以把这两个文件描述符都重定向到 `/dev/null`。请注意执行这个命令时输出发生了什么变化:
|
||||
|
||||
# ls /var /tecmint
|
||||
# ls /var/ /tecmint &> /dev/null
|
||||
|
||||

|
||||
|
||||
重定向stdout和stderr到/dev/null
|
||||
|
||||
#### 例3:使用一个文件作为命令的输入 ####
|
||||
|
||||
虽然 [cat 命令][2] 的正式语法如下:
|
||||
|
||||
# cat [file(s)]
|
||||
|
||||
您还可以使用正确的重定向操作符传送一个文件作为输入。
|
||||
|
||||
# cat < mail.txt
|
||||
|
||||

|
||||
|
||||
cat 命令实例
|
||||
|
||||
#### 例4:发送一个命令的输出作为另一个命令的输入 ####
|
||||
|
||||
如果你有一个较大的目录或进程列表,并且想快速定位,你或许需要将列表通过管道传送给grep
|
||||
|
||||
接下来的命令使用了两次管道:第一个grep用来查找所需的关键词,第二个grep用来去掉由`grep`命令本身产生的那一行。这个例子列举了所有与apache用户有关的进程:
|
||||
|
||||
# ps -ef | grep apache | grep -v grep
|
||||
|
||||

|
||||
|
||||
发送一个命令的输出作为另一个命令的输入
|
||||
|
||||
### 归档,压缩,解包,解压文件 ###
|
||||
|
||||
如果你需要传输、备份,或者通过邮件发送一组文件,你可以使用 [tar][3] 这样的归档(打包)工具,并通常配合gzip、bzip2或xz等压缩工具使用。
|
||||
|
||||
每种压缩工具都有自己的压缩速度和压缩率。这三种压缩工具中,gzip最古老、压缩率最小;bzip2提供更好的压缩率;而xz最新,压缩率也最好。通常情况下,用这些工具压缩的文件的扩展名依次是.gz、.bz2或.xz。
|
||||
注:表格
|
||||
<table cellspacing="0" border="0">
|
||||
<colgroup width="165"></colgroup>
|
||||
<colgroup width="137"></colgroup>
|
||||
<colgroup width="366"></colgroup>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td align="CENTER" height="24" bgcolor="#999999" style="border: 1px solid #000000;"><b><span style="font-size: medium;">命令</span></b></td>
|
||||
<td align="CENTER" bgcolor="#999999" style="border: 1px solid #000000;"><b><span style="font-size: medium;">缩写</span></b></td>
|
||||
<td align="CENTER" bgcolor="#999999" style="border: 1px solid #000000;"><b><span style="font-size: medium;">描述</span></b></td>
|
||||
</tr>
|
||||
<tr class="alt">
|
||||
<td align="LEFT" height="18" style="border: 1px solid #000000;"><span style="font-family: Courier New;"> –create</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">c</td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">创建一个tar归档</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td align="LEFT" height="18" style="border: 1px solid #000000;"><span style="font-family: Courier New;"> –concatenate</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">A</td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">向归档中添加tar文件</td>
|
||||
</tr>
|
||||
<tr class="alt">
|
||||
<td align="LEFT" height="18" style="border: 1px solid #000000;"><span style="font-family: Courier New;"> –append</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">r</td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">向归档中添加非tar文件</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td align="LEFT" height="18" style="border: 1px solid #000000;"><span style="font-family: Courier New;"> –update</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">u</td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">添加比归档中的文件更新的文件</td>
|
||||
</tr>
|
||||
<tr class="alt">
|
||||
<td align="LEFT" height="18" style="border: 1px solid #000000;"><span style="font-family: Courier New;"> –diff or –compare</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">d</td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">将归档和硬盘的文件夹进行对比</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td align="LEFT" height="20" style="border: 1px solid #000000;"><span style="font-family: Courier New;"> –list</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">t</td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">列举一个tar的压缩包</td>
|
||||
</tr>
|
||||
<tr class="alt">
|
||||
<td align="LEFT" height="18" style="border: 1px solid #000000;"><span style="font-family: Courier New;"> –extract or –get</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">x</td>
|
||||
<td align="LEFT" style="border: 1px solid #000000;">从归档中解压文件</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
注:表格
|
||||
<table cellspacing="0" border="0">
|
||||
<colgroup width="258"></colgroup>
|
||||
<colgroup width="152"></colgroup>
|
||||
<colgroup width="803"></colgroup>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td align="CENTER" height="24" bgcolor="#999999" style="border: 1px solid #000001;"><b><span style="font-size: medium;">操作参数</span></b></td>
|
||||
<td align="CENTER" bgcolor="#999999" style="border: 1px solid #000001;"><b><span style="font-size: medium;">缩写</span></b></td>
|
||||
<td align="CENTER" bgcolor="#999999" style="border: 1px solid #000001;"><b><span style="font-size: medium;">描述</span></b></td>
|
||||
</tr>
|
||||
<tr class="alt">
|
||||
<td align="LEFT" height="24" style="border: 1px solid #000001;"><span style="font-family: Courier New;">—</span>directory dir</td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Courier New;"> C</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;">在执行操作前更改目录</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td align="LEFT" height="24" style="border: 1px solid #000001;"><span style="font-family: Courier New;">—</span>same-permissions and <span style="font-family: Courier New;">—</span>same-owner</td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Courier New;"> p</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;">分别保留权限和所有者信息</td>
|
||||
</tr>
|
||||
<tr class="alt">
|
||||
<td align="LEFT" height="24" style="border: 1px solid #000001;"><span style="font-family: Courier New;"> –verbose</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Courier New;"> v</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;">列举所有文件用于读取或提取,这里包含列表,并显示文件的大小、所有权和时间戳</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td align="LEFT" height="24" style="border: 1px solid #000001;"><span style="font-family: Courier New;">—</span>exclude file</td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Courier New;"> —</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;">排除存档文件。在这种情况下,文件可以是一个实际的文件或目录。</td>
|
||||
</tr>
|
||||
<tr class="alt">
|
||||
<td align="LEFT" height="24" style="border: 1px solid #000001;"><span style="font-family: Courier New;">—</span>gzip or <span style="font-family: Courier New;">—</span>gunzip</td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Courier New;"> z</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;">使用gzip压缩文件</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td align="LEFT" height="24" style="border: 1px solid #000001;"><span style="font-family: Courier New;"> –bzip2</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Courier New;"> j</span></td>
|
||||
<td align="LEFT" height="24" style="border: 1px solid #000001;">使用bzip2压缩文件</td>
|
||||
</tr>
|
||||
<tr class="alt">
|
||||
<td align="LEFT" height="24" style="border: 1px solid #000001;"><span style="font-family: Courier New;"> –xz</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Courier New;"> J</span></td>
|
||||
<td align="LEFT" style="border: 1px solid #000001;">使用xz压缩文件</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
#### 例5:创建一个tar归档,然后分别用三种工具压缩 ####
|
||||
|
||||
在决定使用哪一个工具之前,你可能想比较一下每个工具的压缩效率。请注意,压缩小文件或少量文件时,结果可能不会有太大的差异,但多少能让你看出它们的区别。
|
||||
|
||||
    # tar cf ApacheLogs-$(date +%Y%m%d).tar /var/log/httpd/* # 创建一个普通的tar归档
|
||||
    # tar czf ApacheLogs-$(date +%Y%m%d).tar.gz /var/log/httpd/* # 创建tar归档并用gzip压缩
|
||||
    # tar cjf ApacheLogs-$(date +%Y%m%d).tar.bz2 /var/log/httpd/* # 创建tar归档并用bzip2压缩
|
||||
    # tar cJf ApacheLogs-$(date +%Y%m%d).tar.xz /var/log/httpd/* # 创建tar归档并用xz压缩
|
||||
|
||||

|
||||
|
||||
tar 命令实例
|
||||
|
||||
#### 例6:归档时同时保存原始权限和所有权 ####
|
||||
|
||||
如果你在备份用户的主目录,你会希望按原始的权限和所有权存储其中的个人文件,而不是把它们改成执行备份的用户账户或守护进程的权限。下面的命令可以在归档时保留文件属性:
|
||||
|
||||
# tar cJf ApacheLogs-$(date +%Y%m%d).tar.xz /var/log/httpd/* --same-permissions --same-owner
|
||||
|
||||
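相应地,解包时同样可以保留这些属性。下面是一个示意(归档文件名仅为演示,-C 指定解压的目标目录):

    # tar xJf ApacheLogs-20150808.tar.xz --same-permissions --same-owner -C /tmp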
### 创建软连接和硬链接 ###
|
||||
|
||||
在Linux中,有两种类型的链接文件:硬链接和软(也称符号)链接。硬链接是同一个inode的另一个名字,因而直接指向实际数据;而符号链接指向的是一个文件名,并不是实际数据。
|
||||
|
||||
此外,硬链接不占用额外的磁盘空间,而符号链接会占用少量空间来存储链接本身的文本。硬链接的缺点是要求被链接的文件位于同一个文件系统内;而符号链接没有这个限制,因为它只保存了文件名和目录名,所以可以跨文件系统。
|
||||
|
||||
创建链接的基本语法看起来是相似的:
|
||||
|
||||
# ln TARGET LINK_NAME #从Link_NAME到Target的硬链接
|
||||
# ln -s TARGET LINK_NAME #从Link_NAME到Target的软链接
|
||||
|
||||
#### 例7:创建硬链接和软链接 ####
|
||||
|
||||
要形象地说明一个文件与指向它的硬链接或符号链接之间的关系,最好的方式就是亲手创建这些链接。在下面的截图中你会看到,文件和指向它的硬链接共享同一个inode,二者都显示为相同的466字节的磁盘占用。
|
||||
|
||||
另一方面,创建符号链接则会额外占用5个字节的磁盘空间。这并不是说你会因此耗尽存储容量,而是说这个例子足以展示硬链接和符号链接之间的区别。
|
||||
|
||||

|
||||
|
||||
软连接和硬链接之间的不同
|
||||
|
||||
符号链接的典型用法是引用Linux系统中带版本号的文件。假设有多个程序需要访问文件 foo X.Y(它像库文件一样经常被访问),与其让每一处都直接引用 foo X.Y,不如创建一个指向它的符号链接 foo,让大家都通过 foo 来访问,这样做更明智也更安全。
|
||||
|
||||
这样,当版本X.Y更新后,你只需更新这个符号链接,而不必更新每一处对该文件的引用。
|
||||
|
||||
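下面用一个假想的库文件来示意这种做法(文件名仅为演示):

    # ln -s libfoo.so.1.2 libfoo.so     # 程序统一通过 libfoo.so 来访问库
    # ln -sfn libfoo.so.1.3 libfoo.so   # 升级到 1.3 时,只需强制更新这个符号链接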
### 总结 ###
|
||||
|
||||
在这篇文章中,我们回顾了一些基本的文件和目录管理技能,这是每个系统管理员的工具集的一部分。请确保阅读了本系列的其他部分,以及复习并将这些主题与本教程所涵盖的内容相结合。
|
||||
|
||||
如果你有任何问题或意见,请随时告诉我们。我们总是很高兴从读者那获取反馈.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/file-and-directory-management-in-linux/
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[xiqingongzi](https://github.com/xiqingongzi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/
|
||||
[2]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/
|
||||
[3]:http://www.tecmint.com/18-tar-command-examples-in-linux/
|