Merge branch 'master' of https://github.com/ZTinoZ/TranslateProject.git
commit 7dc7384542
@ -0,0 +1,86 @@
|
||||
7 个驱动开源发展的社区
|
||||
================================================================================
|
||||
不久前,开源模式还被那些成熟的工业级厂商怀疑地看作是叛逆小孩的玩物。如今,在一长串供应商的支持下,开源促进会和基金会正蓬勃发展,这些供应商将开源模式视作创新的关键。
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/0_opensource-title-100539095-orig.jpg)
|
||||
|
||||
### 技术的开放发展驱动着创新 ###
|
||||
|
||||
在过去的 20 几年间,技术的开源推进已被视作驱动创新的关键因素。即使那些以前将开源视作威胁的公司也开始接受这个观点 — 例如微软,如今它在一系列的开源的促进会中表现活跃。到目前为止,大多数的开源推进都集中在软件方面,但甚至这个也正在改变,因为社区已经开始向开源硬件倡议方面聚拢。这里介绍 7 个成功地在硬件和软件方面同时促进和发展开源技术的组织。
|
||||
|
||||
### OpenPOWER 基金会 ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/1_openpower-100539100-orig.jpg)
|
||||
|
||||
[OpenPOWER 基金会][1] 由 IBM、Google、Mellanox、Tyan 和 NVIDIA 于 2013 年共同创建,旨在本着与开源软件相同的精神,推动开放、协作的硬件开发。在过去 20 多年间,开源软件正是在这种协作中找到了肥沃的土壤。
|
||||
|
||||
IBM 通过开放其基于 Power 架构的硬件和软件技术,向使用 Power IP 的独立硬件产品提供许可证等方式为基金会的建立播下种子。如今超过 70 个成员共同协作来为基于 Linux 的数据中心提供自定义的开放服务器,组件和硬件。
|
||||
|
||||
去年四月,OpenPOWER 发布了一份技术路线图,其基础是搭载新型 POWER8 处理器的服务器,这类服务器的数据分析能力据称比最新的 x86 系统快 50 倍。七月,IBM 和 Google 发布了一个固件堆栈。去年十月,NVIDIA 的 GPU 为 POWER8 系统带来了加速能力,Tyan 也推出了第一款 OpenPOWER 参考服务器。
|
||||
|
||||
### Linux 基金会 ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/2_the-linux-foundation-100539101-orig.jpg)
|
||||
|
||||
成立于 2000 年的 [Linux 基金会][2] 如今主导着历史上规模最大的开源协作开发项目,拥有超过 180 个企业成员以及众多个人成员和学生成员。它赞助 Linux 内核开发者的工作,促进、保护和推进 Linux 操作系统,并协调软件的协作开发。
|
||||
|
||||
它最为成功的协作项目包括 Code Aurora Forum(一个服务于移动无线产业的企业联盟)、MeeGo(一个为移动设备和 IVI [注:即车载信息娱乐系统,In-Vehicle Infotainment 的简称] 构建基于 Linux 内核的操作系统的项目)和 Open Virtualization Alliance(开放虚拟化联盟,它促进自由和开源软件虚拟化解决方案的采用)。
|
||||
|
||||
### 开放虚拟化联盟 ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/3_open-virtualization-alliance-100539102-orig.jpg)
|
||||
|
||||
[开放虚拟化联盟(OVA)][3] 的使命是:通过提供使用案例、支持具有互操作性的通用接口和 API 的开发,来促进自由开源的虚拟化解决方案(例如 KVM)的采用。KVM 可以将 Linux 内核转变为一个虚拟机管理程序。
|
||||
|
||||
如今, KVM 已成为和 OpenStack 共同使用的最为常见的虚拟机管理程序。
|
||||
|
||||
### OpenStack 基金会 ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/4_the-openstack-foundation-100539096-orig.jpg)
|
||||
|
||||
OpenStack 最初是 NASA 和 Rackspace 于 2010 年作为一个 IaaS(基础设施即服务)产品启动的项目,如今 [OpenStack 基金会][4] 已成为最大的开源项目聚居地之一。它拥有超过 200 家公司成员,其中包括 AT&T、AMD、Avaya、Canonical、Cisco、Dell 和 HP。
|
||||
|
||||
基金会的 OpenStack 项目以大约 6 个月为一个发布周期,其开发的软件可以通过基于 Web 的仪表盘、命令行工具或 RESTful API,来控制和调配数据中心中的计算、存储池和网络资源。至今为止,基金会支持的协同开发已经孕育出了一系列 OpenStack 组件,其中包括 OpenStack Compute(云计算架构控制器,是 IaaS 系统的主要部分)、OpenStack Networking(一个用以管理网络和 IP 地址的系统)和 OpenStack Object Storage(一个可扩展的冗余存储系统)。
|
||||
|
||||
### OpenDaylight ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/5_opendaylight-100539097-orig.jpg)
|
||||
|
||||
作为来自 Linux 基金会的另一个协作项目, [OpenDaylight][5] 是一个由诸如 Dell, HP, Oracle 和 Avaya 等行业厂商于 2013 年 4 月建立的联合倡议。它的任务是建立一个由社区主导、开源、有工业支持的针对软件定义网络( SDN: Software-Defined Networking)的包含代码和蓝图的框架。其思路是提供一个可直接部署的全功能 SDN 平台,而不需要其他组件,供应商可提供附件组件和增强组件。
|
||||
|
||||
### Apache 软件基金会 ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/6_apache-software-foundation-100539098-orig.jpg)
|
||||
|
||||
[Apache 软件基金会 (ASF)][6] 是将近 150 个顶级项目的聚居地,这些项目涵盖从开源的企业级自动化软件到与 Apache Hadoop 相关的分布式计算的整个生态系统。这些项目提供企业级、可免费获取的软件产品,而 Apache 许可证则让无论是商业用户还是个人用户都能更方便地部署 Apache 的产品。
|
||||
|
||||
ASF 成立于 1999 年,是一个会员制的非营利组织,以精英制(任人唯贤)为核心:要成为它的成员,你必须首先在基金会的一个或多个协作项目中做出积极贡献。
|
||||
|
||||
### 开放计算项目 ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/7_open-compute-project-100539099-orig.jpg)
|
||||
|
||||
作为 Facebook 重新设计其 Oregon 数据中心的副产物,[开放计算项目][7] 旨在发展针对数据中心的开源硬件解决方案。OCP 的成果包括低成本、无冗余浪费的服务器,针对 Open Rack(一种让机架集成到数据中心基础设施中的机架标准)的模块化 I/O 存储,以及相对“绿色”的数据中心设计方案等。
|
||||
|
||||
OCP 董事会成员包括来自 Facebook,Intel,Goldman Sachs,Rackspace 和 Microsoft 的代表。
|
||||
|
||||
OCP 最近宣布提供两种可选的许可证:一种类似 Apache 2.0,允许衍生作品;另一种更具约束性,鼓励将修改回馈给原有软件。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities-driving-open-source-development.html
|
||||
|
||||
作者:[Thor Olavsrud][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.networkworld.com/author/Thor-Olavsrud/
|
||||
[1]:http://openpowerfoundation.org/
|
||||
[2]:http://www.linuxfoundation.org/
|
||||
[3]:https://openvirtualizationalliance.org/
|
||||
[4]:http://www.openstack.org/foundation/
|
||||
[5]:http://www.opendaylight.org/
|
||||
[6]:http://www.apache.org/
|
||||
[7]:http://www.opencompute.org/
|
@ -1,20 +1,20 @@
|
||||
如何在 Ubuntu 中管理和使用 LVM(Logical Volume Management,逻辑卷管理)
|
||||
如何在 Ubuntu 中管理和使用 逻辑卷管理 LVM
|
||||
================================================================================
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/652x202xbanner-1.png.pagespeed.ic.VGSxDeVS9P.png)
|
||||
|
||||
在我们之前的文章中,我们介绍了[什么是 LVM 以及能用 LVM 做什么][1],今天我们会给你介绍一些 LVM 的主要管理工具,使得你在设置和扩展安装时更游刃有余。
|
||||
|
||||
正如之前所述,LVM 是介于你的操作系统和物理硬盘驱动器之间的抽象层。这意味着你的物理硬盘驱动器和分区不再依赖于他们所在的硬盘驱动和分区。而是,你的操作系统所见的硬盘驱动和分区可以是由任意数目的独立硬盘驱动汇集而成或是一个软件磁盘阵列。
|
||||
正如之前所述,LVM 是介于你的操作系统和物理硬盘驱动器之间的抽象层。这意味着你的物理硬盘驱动器和分区不再依赖于他们所在的硬盘驱动和分区。而是你的操作系统所见的硬盘驱动和分区可以是由任意数目的独立硬盘汇集而成的或是一个软件磁盘阵列。
|
||||
|
||||
要管理 LVM,这里有很多可用的 GUI 工具,但要真正理解 LVM 配置发生的事情,最好要知道一些命令行工具。这当你在一个服务器或不提供 GUI 工具的发行版上管理 LVM 时尤为有用。
|
||||
|
||||
LVM 的大部分命令和彼此都非常相似。每个可用的命令都由以下其中之一开头:
|
||||
|
||||
- Physical Volume = pv
|
||||
- Volume Group = vg
|
||||
- Logical Volume = lv
|
||||
- Physical Volume (物理卷) = pv
|
||||
- Volume Group (卷组)= vg
|
||||
- Logical Volume (逻辑卷)= lv
|
||||
|
||||
物理卷命令用于在卷组中添加或删除硬盘驱动。卷组命令用于为你的逻辑卷操作更改显示的物理分区抽象集。逻辑卷命令会以分区形式显示卷组使得你的操作系统能使用指定的空间。
|
||||
物理卷命令用于在卷组中添加或删除硬盘驱动。卷组命令用于为你的逻辑卷操作更改显示的物理分区抽象集。逻辑卷命令会以分区形式显示卷组,使得你的操作系统能使用指定的空间。
|
||||
|
||||
### 可下载的 LVM 备忘单 ###
|
||||
|
||||
@ -26,7 +26,7 @@ LVM 的大部分命令和彼此都非常相似。每个可用的命令都由以
|
||||
|
||||
### 如何查看当前 LVM 信息 ###
|
||||
|
||||
你首先需要做的事情是检查你的 LVM 设置。s 和 display 命令和物理卷(pv)、卷组(vg)以及逻辑卷(lv)一起使用,是一个找出当前设置好的开始点。
|
||||
你首先需要做的事情是检查你的 LVM 设置。s 和 display 命令可以和物理卷(pv)、卷组(vg)以及逻辑卷(lv)一起使用,是一个找出当前设置的好起点。
|
||||
|
||||
display 命令会格式化输出信息,因此比 s 命令更易于理解。对每个命令你会看到名称和 pv/vg 的路径,它还会给出空闲和已使用空间的信息。
|
||||
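例如,下面这组标准的 LVM 命令可以用来查看当前的配置信息(输出内容取决于你的实际设置):

    # pvdisplay
    # vgdisplay
    # lvdisplay

相应的 pvs、vgs 和 lvs 命令则以更紧凑的形式输出同样的信息。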
|
||||
@ -40,17 +40,17 @@ display 命令会格式化输出信息,因此比 s 命令更易于理解。对
|
||||
|
||||
#### 创建物理卷 ####
|
||||
|
||||
我们会从一个完全新的没有任何分区和信息的硬盘驱动开始。首先找出你将要使用的磁盘。(/dev/sda, sdb, 等)
|
||||
我们会从一个全新的没有任何分区和信息的硬盘开始。首先找出你将要使用的磁盘。(/dev/sda, sdb, 等)
|
||||
|
||||
> 注意:记住所有的命令都要以 root 身份运行或者在命令前面添加 'sudo' 。
|
||||
|
||||
fdisk -l
|
||||
|
||||
如果之前你的硬盘驱动从没有格式化或分区,在 fdisk 的输出中你很可能看到类似下面的信息。这完全正常,因为我们会在下面的步骤中创建需要的分区。
|
||||
如果之前你的硬盘从未格式化或分区过,在 fdisk 的输出中你很可能看到类似下面的信息。这完全正常,因为我们会在下面的步骤中创建需要的分区。
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/fdisk.png.pagespeed.ce.AmAEsxm-7Q.png)
|
||||
|
||||
我们的新磁盘位置是 /dev/sdb,让我们用 fdisk 命令在驱动上创建一个新的分区。
|
||||
我们的新磁盘位置是 /dev/sdb,让我们用 fdisk 命令在磁盘上创建一个新的分区。
|
||||
|
||||
这里有大量能创建新分区的 GUI 工具,包括 [Gparted][2],但由于我们已经打开了终端,我们将使用 fdisk 命令创建需要的分区。
|
||||
|
||||
@ -62,9 +62,9 @@ display 命令会格式化输出信息,因此比 s 命令更易于理解。对
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/627x145xfdisk00.png.pagespeed.ic.I7S8bjoXQG.png)
|
||||
|
||||
以指定的顺序输入命令创建一个使用新硬盘驱动 100% 空间的主分区并为 LVM 做好了准备。如果你需要更改分区的大小或相应多个分区,我建议使用 GParted 或自己了解关于 fdisk 命令的使用。
|
||||
以指定的顺序输入命令创建一个使用新硬盘 100% 空间的主分区并为 LVM 做好了准备。如果你需要更改分区的大小或想要多个分区,我建议使用 GParted 或自己了解一下关于 fdisk 命令的使用。
|
||||
|
||||
**警告:下面的步骤会格式化你的硬盘驱动。确保在进行下面步骤之前你的硬盘驱动中没有任何信息。**
|
||||
**警告:下面的步骤会格式化你的硬盘驱动。确保在进行下面步骤之前你的硬盘驱动中没有任何有用的信息。**
|
||||
|
||||
- n = 创建新分区
|
||||
- p = 创建主分区
|
||||
@ -79,9 +79,9 @@ display 命令会格式化输出信息,因此比 s 命令更易于理解。对
|
||||
- t = 更改分区类型
|
||||
- 8e = 更改为 LVM 分区类型
|
||||
|
||||
核实并将信息写入硬盘驱动器。
|
||||
核实并将信息写入硬盘。
|
||||
|
||||
- p = 查看分区设置使得写入更改到磁盘之前可以回看
|
||||
- p = 查看分区设置使得在写入更改到磁盘之前可以回看
|
||||
- w = 写入更改到磁盘
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/560x339xfdisk03.png.pagespeed.ic.FC8foICZsb.png)
|
||||
@ -102,7 +102,7 @@ display 命令会格式化输出信息,因此比 s 命令更易于理解。对
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/vgcreate.png.pagespeed.ce.fVLzSmPZou.png)
|
||||
|
||||
Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称,但建议标签以 vg 开头,以便后面你使用它时能意识到这是一个卷组。
|
||||
vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称,但建议标签以 vg 开头,以便后面你使用它时能意识到这是一个卷组。
|
||||
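供参考,上图中创建卷组的命令大致如下(假设前面创建的 LVM 分区是 /dev/sdb1):

    # vgcreate vgpool /dev/sdb1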
|
||||
#### 创建逻辑卷 ####
|
||||
|
||||
@ -112,7 +112,7 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/lvcreate.png.pagespeed.ce.vupLB-LJEW.png)
|
||||
|
||||
-L 命令指定逻辑卷的大小,在该情况中是 3 GB,-n 命令指定卷的名称。 指定 vgpool 所以 lvcreate 命令知道从什么卷获取空间。
|
||||
-L 命令指定逻辑卷的大小,在该情况中是 3 GB,-n 命令指定卷的名称。 指定 vgpool 以便 lvcreate 命令知道从什么卷获取空间。
|
||||
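供参考,上图中创建逻辑卷的命令大致如下(卷名 lvstuff 沿用本文示例):

    # lvcreate -L 3G -n lvstuff vgpool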
|
||||
#### 格式化并挂载逻辑卷 ####
|
||||
|
||||
@ -131,7 +131,7 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称
|
||||
|
||||
#### 重新设置逻辑卷大小 ####
|
||||
|
||||
逻辑卷的一个好处是你能使你的共享物理变大或变小而不需要移动所有东西到一个更大的硬盘驱动。另外,你可以添加新的硬盘驱动并同时扩展你的卷组。或者如果你有一个不使用的硬盘驱动,你可以从卷组中移除它使得逻辑卷变小。
|
||||
逻辑卷的一个好处是你能使你的存储物理地变大或变小,而不需要移动所有东西到一个更大的硬盘。另外,你可以添加新的硬盘并同时扩展你的卷组。或者如果你有一个不使用的硬盘,你可以从卷组中移除它使得逻辑卷变小。
|
||||
|
||||
这里有三个用于使物理卷、卷组和逻辑卷变大或变小的基础工具。
|
||||
|
||||
@ -147,9 +147,9 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称
|
||||
|
||||
按照上面创建新分区并更改分区类型为 LVM(8e) 的步骤安装一个新硬盘驱动。然后用 pvcreate 命令创建一个 LVM 能识别的物理卷。
|
||||
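例如(假设在新硬盘上创建的分区是 /dev/sdc1):

    # pvcreate /dev/sdc1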
|
||||
#### 添加新硬盘驱动到卷组 ####
|
||||
#### 添加新硬盘到卷组 ####
|
||||
|
||||
要添加新的硬盘驱动到一个卷组,你只需要知道你的新分区,在我们的例子中是 /dev/sdc1,以及想要添加到的卷组的名称。
|
||||
要添加新的硬盘到一个卷组,你只需要知道你的新分区,在我们的例子中是 /dev/sdc1,以及想要添加到的卷组的名称。
|
||||
|
||||
这会添加新物理卷到已存在的卷组中。
|
||||
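对应的命令大致如下(卷组名与分区名沿用本文示例):

    # vgextend vgpool /dev/sdc1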
|
||||
@ -189,7 +189,7 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称
|
||||
|
||||
1. 调整文件系统大小 (调整之前确保已经移动文件到硬盘驱动安全的地方)
|
||||
1. 减小逻辑卷 (除了 + 可以扩展大小,你也可以用 - 压缩大小)
|
||||
1. 用 vgreduce 从卷组中移除硬盘驱动
|
||||
1. 用 vgreduce 从卷组中移除硬盘
|
||||
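上面列出的步骤对应的命令示例大致如下(仅作示意:假设逻辑卷为 lvstuff、缩小到 2GB、要移除的分区为 /dev/sdc1;缩小 ext 文件系统前通常需要先卸载并检查文件系统,如果该硬盘上仍有数据,还需要先用 pvmove 把数据移走):

    # umount /mnt/lvstuff
    # e2fsck -f /dev/vgpool/lvstuff
    # resize2fs /dev/vgpool/lvstuff 2G
    # lvreduce -L -1G /dev/vgpool/lvstuff
    # vgreduce vgpool /dev/sdc1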
|
||||
#### 备份逻辑卷 ####
|
||||
|
||||
@ -197,7 +197,7 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/652x202xbanner-2.png.pagespeed.ic.VtOUuqYX1W.png)
|
||||
|
||||
LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该照片可以用于在不同的硬盘驱动上进行备份。生成一个备份的时候,任何需要添加到逻辑卷的新信息会如往常一样写入磁盘,但会跟踪更改使得原始快照永远不会损毁。
|
||||
LVM 获取快照的时候,会有一张和逻辑卷完全相同的“照片”,该“照片”可以用于在不同的硬盘上进行备份。生成一个备份的时候,任何需要添加到逻辑卷的新信息会如往常一样写入磁盘,但会跟踪更改使得原始快照永远不会损毁。
|
||||
|
||||
要创建一个快照,我们需要创建拥有足够空闲空间的逻辑卷,用于保存我们备份的时候会写入该逻辑卷的任何新信息。如果驱动并不是经常写入,你可以使用很小的一个存储空间。备份完成的时候我们只需要移除临时逻辑卷,原始逻辑卷会和往常一样。
|
||||
|
||||
@ -209,7 +209,7 @@ LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/597x68xlvcreate-snapshot.png.pagespeed.ic.Rw2ivtcpPg.png)
|
||||
|
||||
这里我们创建了一个只有 512MB 的逻辑卷,因为驱动实际上并不会使用。512MB 的空间会保存备份时产生的任何新数据。
|
||||
这里我们创建了一个只有 512MB 的逻辑卷,因为该硬盘实际上并不会使用。512MB 的空间会保存备份时产生的任何新数据。
|
||||
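供参考,上图中创建快照的命令大致如下(快照名 lvstuffbackup 与后文的挂载点对应):

    # lvcreate -L 512M -s -n lvstuffbackup /dev/vgpool/lvstuff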
|
||||
#### 挂载新快照 ####
|
||||
|
||||
@ -222,7 +222,7 @@ LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该
|
||||
|
||||
#### 复制快照和删除逻辑卷 ####
|
||||
|
||||
你剩下需要做的是从 /mnt/lvstuffbackup/ 中复制所有文件到一个外部的硬盘驱动或者打包所有文件到一个文件。
|
||||
你剩下需要做的是从 /mnt/lvstuffbackup/ 中复制所有文件到一个外部的硬盘或者打包所有文件到一个文件。
|
||||
|
||||
**注意:tar -c 会创建一个归档文件,-f 要指出归档文件的名称和路径。要获取 tar 命令的帮助信息,可以在终端中输入 man tar。**
|
||||
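一个打包的示例如下(归档文件的存放路径仅为假设):

    # tar -cf /root/lvstuff-backup.tar /mnt/lvstuffbackup/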
|
||||
@ -230,7 +230,7 @@ LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/627x67xsnapshot-backup.png.pagespeed.ic.tw-2AK_lfZ.png)
|
||||
|
||||
记住备份发生的时候写到 lvstuff 的所有文件都会在我们之前创建的临时逻辑卷中被跟踪。确保备份的时候你有足够的空闲空间。
|
||||
记住备份时候写到 lvstuff 的所有文件都会在我们之前创建的临时逻辑卷中被跟踪。确保备份的时候你有足够的空闲空间。
|
||||
|
||||
备份完成后,卸载卷并移除临时快照。
|
||||
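对应的命令大致如下(名称沿用前面的示例):

    # umount /mnt/lvstuffbackup
    # lvremove /dev/vgpool/lvstuffbackup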
|
||||
@ -259,10 +259,10 @@ LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该
|
||||
via: http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
|
||||
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and-how-do-you-enable-it-in-ubuntu/
|
||||
[1]:https://linux.cn/article-5953-1.html
|
||||
[2]:http://www.howtogeek.com/howto/17001/how-to-format-a-usb-drive-in-ubuntu-using-gparted/
|
||||
[3]:http://www.howtogeek.com/howto/33552/htg-explains-which-linux-file-system-should-you-choose/
|
183
published/20150522 Analyzing Linux Logs.md
Normal file
183
published/20150522 Analyzing Linux Logs.md
Normal file
@ -0,0 +1,183 @@
|
||||
如何分析 Linux 日志
|
||||
==============================================================================
|
||||
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Linux-Copy@2x1.png)
|
||||
|
||||
日志中有大量的信息需要你处理,尽管有时候想要提取并非想象中的容易。在这篇文章中我们会介绍一些你现在就能做的基本日志分析例子(只需要搜索即可)。我们还将涉及一些更高级的分析,但这些需要你前期努力做出适当的设置,后期就能节省很多时间。对数据进行高级分析的例子包括生成汇总计数、对有效值进行过滤,等等。
|
||||
|
||||
我们首先会向你展示如何在命令行中使用多个不同的工具,然后展示了一个日志管理工具如何能自动完成大部分繁重工作从而使得日志分析变得简单。
|
||||
|
||||
### 用 Grep 搜索 ###
|
||||
|
||||
搜索文本是查找信息最基本的方式。搜索文本最常用的工具是 [grep][1]。这个命令行工具在大部分 Linux 发行版中都有,它允许你用正则表达式搜索日志。正则表达式是一种用特殊的语言写的、能识别匹配文本的模式。最简单的模式就是用引号把你想要查找的字符串括起来。
|
||||
|
||||
#### 正则表达式 ####
|
||||
|
||||
这是一个在 Ubuntu 系统的认证日志中查找 “user hoover” 的例子:
|
||||
|
||||
$ grep "user hoover" /var/log/auth.log
|
||||
Accepted password for hoover from 10.0.2.2 port 4792 ssh2
|
||||
pam_unix(sshd:session): session opened for user hoover by (uid=0)
|
||||
pam_unix(sshd:session): session closed for user hoover
|
||||
|
||||
构建精确的正则表达式可能很难。例如,如果我们想要搜索一个类似端口 “4792” 的数字,它可能也会匹配时间戳、URL 以及其它不需要的数据。Ubuntu 中下面的例子,它匹配了一个我们不想要的 Apache 日志。
|
||||
|
||||
$ grep "4792" /var/log/auth.log
|
||||
Accepted password for hoover from 10.0.2.2 port 4792 ssh2
|
||||
74.91.21.46 - - [31/Mar/2015:19:44:32 +0000] "GET /scripts/samples/search?q=4972 HTTP/1.0" 404 545 "-" "-”
|
||||
|
||||
#### 环绕搜索 ####
|
||||
|
||||
另一个有用的小技巧是你可以用 grep 做环绕搜索。这会向你展示一个匹配前面或后面几行是什么。它能帮助你调试导致错误或问题的东西。`B` 选项展示前面几行,`A` 选项展示后面几行。举个例子,我们知道当一个人以管理员员身份登录失败时,同时他们的 IP 也没有反向解析,也就意味着他们可能没有有效的域名。这非常可疑!
|
||||
|
||||
$ grep -B 3 -A 2 'Invalid user' /var/log/auth.log
|
||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: reverse mapping checking getaddrinfo for 216-19-2-8.commspeed.net [216.19.2.8] failed - POSSIBLE BREAK-IN ATTEMPT!
|
||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: Received disconnect from 216.19.2.8: 11: Bye Bye [preauth]
|
||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: Invalid user admin from 216.19.2.8
|
||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: input_userauth_request: invalid user admin [preauth]
|
||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: Received disconnect from 216.19.2.8: 11: Bye Bye [preauth]
|
||||
|
||||
#### Tail ####
|
||||
|
||||
你也可以把 grep 和 [tail][2] 结合使用来获取一个文件的最后几行,或者跟踪日志并实时打印。这在你做交互式更改的时候非常有用,例如启动服务器或者测试代码更改。
|
||||
|
||||
$ tail -f /var/log/auth.log | grep 'Invalid user'
|
||||
Apr 30 19:49:48 ip-172-31-11-241 sshd[6512]: Invalid user ubnt from 219.140.64.136
|
||||
Apr 30 19:49:49 ip-172-31-11-241 sshd[6514]: Invalid user admin from 219.140.64.136
|
||||
|
||||
关于 grep 和正则表达式的详细介绍并不在本指南的范围,但 [Ryan’s Tutorials][3] 有更深入的介绍。
|
||||
|
||||
日志管理系统有更高的性能和更强大的搜索能力。它们通常会索引数据并进行并行查询,因此你可以很快的在几秒内就能搜索 GB 或 TB 的日志。相比之下,grep 就需要几分钟,在极端情况下可能甚至几小时。日志管理系统也使用类似 [Lucene][4] 的查询语言,它提供更简单的语法来检索数字、域以及其它。
|
||||
|
||||
### 用 Cut、 AWK、 和 Grok 解析 ###
|
||||
|
||||
#### 命令行工具 ####
|
||||
|
||||
Linux 提供了多个命令行工具用于文本解析和分析。当你想要快速解析少量数据时非常有用,但处理大量数据时可能需要很长时间。
|
||||
|
||||
#### Cut ####
|
||||
|
||||
[cut][5] 命令允许你从有分隔符的日志解析字段。分隔符是指能分开字段或键值对的等号或逗号等。
|
||||
|
||||
假设我们想从下面的日志中解析出用户:
|
||||
|
||||
pam_unix(su:auth): authentication failure; logname=hoover uid=1000 euid=0 tty=/dev/pts/0 ruser=hoover rhost= user=root
|
||||
|
||||
我们可以像下面这样用 cut 命令获取用等号分割后的第八个字段的文本。这是一个 Ubuntu 系统上的例子:
|
||||
|
||||
$ grep "authentication failure" /var/log/auth.log | cut -d '=' -f 8
|
||||
root
|
||||
hoover
|
||||
root
|
||||
nagios
|
||||
nagios
|
||||
|
||||
#### AWK ####
|
||||
|
||||
另外,你也可以使用 [awk][6],它能提供更强大的解析字段功能。它提供了一个脚本语言,你可以过滤出几乎任何不相干的东西。
|
||||
|
||||
例如,假设在 Ubuntu 系统中我们有下面的一行日志,我们想要提取登录失败的用户名称:
|
||||
|
||||
Mar 24 08:28:18 ip-172-31-11-241 sshd[32701]: input_userauth_request: invalid user guest [preauth]
|
||||
|
||||
你可以像下面这样使用 awk 命令。首先,用一个正则表达式 /sshd.*invalid user/ 来匹配 sshd invalid user 行。然后用 { print $9 } 根据默认的分隔符空格打印第九个字段。这样就输出了用户名。
|
||||
|
||||
$ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log
|
||||
guest
|
||||
admin
|
||||
info
|
||||
test
|
||||
ubnt
|
||||
|
||||
你可以在 [Awk 用户指南][7] 中阅读更多关于如何使用正则表达式和输出字段的信息。
|
||||
|
||||
#### 日志管理系统 ####
|
||||
|
||||
日志管理系统使得解析变得更加简单,使用户能快速的分析很多的日志文件。他们能自动解析标准的日志格式,比如常见的 Linux 日志和 Web 服务器日志。这能节省很多时间,因为当处理系统问题的时候你不需要考虑自己写解析逻辑。
|
||||
|
||||
下面是一个 sshd 日志消息的例子,解析出了每个 remoteHost 和 user。这是 Loggly 中的一张截图,它是一个基于云的日志管理服务。
|
||||
|
||||
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.25.09-AM.png)
|
||||
|
||||
你也可以对非标准格式自定义解析。一个常用的工具是 [Grok][8],它用一个常见正则表达式库,可以解析原始文本为结构化 JSON。下面是一个 Grok 在 Logstash 中解析内核日志文件的事例配置:
|
||||
|
||||
filter{
|
||||
grok {
|
||||
match => {"message" => "%{CISCOTIMESTAMP:timestamp} %{HOST:host} %{WORD:program}%{NOTSPACE} %{NOTSPACE}%{NUMBER:duration}%{NOTSPACE} %{GREEDYDATA:kernel_logs}"
|
||||
}
|
||||
}
|
||||
|
||||
下图是 Grok 解析后输出的结果:
|
||||
|
||||
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.30.37-AM.png)
|
||||
|
||||
### 用 Rsyslog 和 AWK 过滤 ###
|
||||
|
||||
过滤使得你能检索一个特定的字段值而不是进行全文检索。这使你的日志分析更加准确,因为它会忽略来自其它部分日志信息不需要的匹配。为了对一个字段值进行搜索,你首先需要解析日志或者至少有对事件结构进行检索的方式。
|
||||
|
||||
#### 如何对应用进行过滤 ####
|
||||
|
||||
通常,你可能只想看一个应用的日志。如果你的应用把记录都保存到一个文件中就会很容易。如果你需要在一个聚集或集中式日志中过滤一个应用就会比较复杂。下面有几种方法来实现:
|
||||
|
||||
1. 用 rsyslog 守护进程解析和过滤日志。下面的例子将 sshd 应用的日志写入一个名为 sshd-message 的文件,然后丢弃事件以便它不会在其它地方重复出现。你可以将它添加到你的 rsyslog.conf 文件中测试这个例子。
|
||||
|
||||
:programname, isequal, “sshd” /var/log/sshd-messages
|
||||
&~
|
||||
|
||||
2. 用类似 awk 的命令行工具提取特定字段的值,例如 sshd 用户名。下面是 Ubuntu 系统中的一个例子。
|
||||
|
||||
$ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log
|
||||
guest
|
||||
admin
|
||||
info
|
||||
test
|
||||
ubnt
|
||||
|
||||
3. 用日志管理系统自动解析日志,然后在需要的应用名称上点击过滤。下面是在 Loggly 日志管理服务中提取 syslog 域的截图。我们对应用名称 “sshd” 进行过滤,如维恩图图标所示。
|
||||
|
||||
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.02-AM.png)
|
||||
|
||||
#### 如何过滤错误 ####
|
||||
|
||||
一个人最希望看到日志中的错误。不幸的是,默认的 syslog 配置不直接输出错误的严重性,也就使得难以过滤它们。
|
||||
|
||||
这里有两个解决该问题的方法。首先,你可以修改你的 rsyslog 配置,在日志文件中输出错误的严重性,使得便于查看和检索。在你的 rsyslog 配置中你可以用 pri-text 添加一个 [模板][9],像下面这样:
|
||||
|
||||
"<%pri-text%> : %timegenerated%,%HOSTNAME%,%syslogtag%,%msg%n"
|
||||
|
||||
这个例子会按照下面的格式输出。你可以看到该信息中指示错误的 err。
|
||||
|
||||
<authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure
|
||||
|
||||
你可以用 awk 或者 grep 检索错误信息。在 Ubuntu 中,对这个例子,我们可以用一些语法特征,例如 . 和 >,它们只会匹配这个域。
|
||||
|
||||
$ grep '.err>' /var/log/auth.log
|
||||
<authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure
|
||||
|
||||
你的第二个选择是使用日志管理系统。好的日志管理系统能自动解析 syslog 消息并抽取错误域。它们也允许你用简单的点击过滤日志消息中的特定错误。
|
||||
|
||||
下面是 Loggly 中一个截图,显示了高亮错误严重性的 syslog 域,表示我们正在过滤错误:
|
||||
|
||||
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.00.36-AM.png)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.loggly.com/ultimate-guide/logging/analyzing-linux-logs/
|
||||
|
||||
作者:[Jason Skowronski][a],[Amy Echeverri][b],[ Sadequl Hussain][c]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linkedin.com/in/jasonskowronski
|
||||
[b]:https://www.linkedin.com/in/amyecheverri
|
||||
[c]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
|
||||
[1]:http://linux.die.net/man/1/grep
|
||||
[2]:http://linux.die.net/man/1/tail
|
||||
[3]:http://ryanstutorials.net/linuxtutorial/grep.php
|
||||
[4]:https://lucene.apache.org/core/2_9_4/queryparsersyntax.html
|
||||
[5]:http://linux.die.net/man/1/cut
|
||||
[6]:http://linux.die.net/man/1/awk
|
||||
[7]:http://www.delorie.com/gnu/docs/gawk/gawk_26.html#IDX155
|
||||
[8]:http://logstash.net/docs/1.4.2/filters/grok
|
||||
[9]:http://www.rsyslog.com/doc/v8-stable/configuration/templates.html
|
@ -1,18 +1,18 @@
|
||||
Ubuntu 15.04上配置OpenVPN服务器-客户端
|
||||
在 Ubuntu 15.04 上配置 OpenVPN 服务器和客户端
|
||||
================================================================================
|
||||
虚拟专用网(VPN)是几种用于建立与其它网络连接的网络技术中常见的一个名称。它被称为虚拟网,因为各个节点的连接不是通过物理线路实现的。而由于没有网络所有者的正确授权是不能通过公共线路访问到网络,所以它是专用的。
|
||||
虚拟专用网(VPN)常指几种通过其它网络建立连接技术。它之所以被称为“虚拟”,是因为各个节点间的连接不是通过物理线路实现的,而“专用”是指如果没有网络所有者的正确授权是不能被公开访问到。
|
||||
|
||||
![](http://blog.linoxide.com/wp-content/uploads/2015/05/vpn_custom_illustration.jpg)
|
||||
|
||||
[OpenVPN][1]软件通过TUN/TAP驱动的帮助,使用TCP和UDP协议来传输数据。UDP协议和TUN驱动允许NAT后的用户建立到OpenVPN服务器的连接。此外,OpenVPN允许指定自定义端口。它提额外提供了灵活的配置,可以帮助你避免防火墙限制。
|
||||
[OpenVPN][1]软件借助TUN/TAP驱动使用TCP和UDP协议来传输数据。UDP协议和TUN驱动允许NAT后的用户建立到OpenVPN服务器的连接。此外,OpenVPN允许指定自定义端口。它提供了更多的灵活配置,可以帮助你避免防火墙限制。
|
||||
|
||||
OpenVPN中,由OpenSSL库和传输层安全协议(TLS)提供了安全和加密。TLS是SSL协议的一个改进版本。
|
||||
|
||||
OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示了如何配置OpenVPN的服务器端,以及如何预备使用带有公共密钥非对称加密和TLS协议基础结构(PKI)。
|
||||
OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示了如何配置OpenVPN的服务器端,以及如何配置使用带有公共密钥基础结构(PKI)的非对称加密和TLS协议。
|
||||
|
||||
### 服务器端配置 ###
|
||||
|
||||
首先,我们必须安装OpenVPN。在Ubuntu 15.04和其它带有‘apt’报管理器的Unix系统中,可以通过如下命令安装:
|
||||
首先,我们必须安装OpenVPN软件。在Ubuntu 15.04和其它带有‘apt’包管理器的Unix系统中,可以通过如下命令安装:
|
||||
|
||||
sudo apt-get install openvpn
|
||||
|
||||
@ -20,7 +20,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示
|
||||
|
||||
    sudo apt-get install easy-rsa
|
||||
|
||||
**注意**: 所有接下来的命令要以超级用户权限执行,如在“sudo -i”命令后;此外,你可以使用“sudo -E”作为接下来所有命令的前缀。
|
||||
**注意**: 所有接下来的命令要以超级用户权限执行,如在使用`sudo -i`命令后执行,或者你可以使用`sudo -E`作为接下来所有命令的前缀。
|
||||
|
||||
开始之前,我们需要拷贝“easy-rsa”到openvpn文件夹。
|
||||
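具体的拷贝命令类似下面这样(easy-rsa 的源路径因发行版和版本而异,这里仅为示例,以便与后文的 /etc/openvpn/easy-rsa/2.0 目录对应):

    cp -R /usr/share/doc/openvpn/examples/easy-rsa /etc/openvpn/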
|
||||
@ -32,15 +32,15 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示
|
||||
|
||||
cd /etc/openvpn/easy-rsa/2.0
|
||||
|
||||
这里,我们开启了一个密钥生成进程。
|
||||
这里,我们开始密钥生成进程。
|
||||
|
||||
首先,我们编辑一个“var”文件。为了简化生成过程,我们需要在里面指定数据。这里是“var”文件的一个样例:
|
||||
首先,我们编辑一个“vars”文件。为了简化生成过程,我们需要在里面指定数据。这里是“vars”文件的一个样例:
|
||||
|
||||
export KEY_COUNTRY="US"
|
||||
export KEY_PROVINCE="CA"
|
||||
export KEY_CITY="SanFrancisco"
|
||||
export KEY_ORG="Fort-Funston"
|
||||
export KEY_EMAIL="my@myhost.mydomain"
|
||||
export KEY_COUNTRY="CN"
|
||||
export KEY_PROVINCE="BJ"
|
||||
export KEY_CITY="Beijing"
|
||||
export KEY_ORG="Linux.CN"
|
||||
export KEY_EMAIL="open@vpn.linux.cn"
|
||||
export KEY_OU=server
|
||||
|
||||
希望这些字段名称对你而言已经很清楚,不需要进一步说明了。
|
||||
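编辑完成后,在生成证书之前通常还要先加载这些变量并清空旧的密钥目录(以下命令针对 easy-rsa 2.x,仅为示意):

    source ./vars
    ./clean-all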
@ -61,7 +61,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示
|
||||
|
||||
./build-ca
|
||||
|
||||
在对话中,我们可以看到默认的变量,这些变量是我们先前在“vars”中指定的。我们可以检查以下,如有必要进行编辑,然后按回车几次。对话如下
|
||||
在对话中,我们可以看到默认的变量,这些变量是我们先前在“vars”中指定的。我们可以检查一下,如有必要进行编辑,然后按回车几次。对话如下
|
||||
|
||||
Generating a 2048 bit RSA private key
|
||||
.............................................+++
|
||||
@ -75,14 +75,14 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示
|
||||
For some fields there will be a default value,
|
||||
If you enter '.', the field will be left blank.
|
||||
-----
|
||||
Country Name (2 letter code) [US]:
|
||||
State or Province Name (full name) [CA]:
|
||||
Locality Name (eg, city) [SanFrancisco]:
|
||||
Organization Name (eg, company) [Fort-Funston]:
|
||||
Organizational Unit Name (eg, section) [MyOrganizationalUnit]:
|
||||
Common Name (eg, your name or your server's hostname) [Fort-Funston CA]:
|
||||
Country Name (2 letter code) [CN]:
|
||||
State or Province Name (full name) [BJ]:
|
||||
Locality Name (eg, city) [Beijing]:
|
||||
Organization Name (eg, company) [Linux.CN]:
|
||||
Organizational Unit Name (eg, section) [Tech]:
|
||||
Common Name (eg, your name or your server's hostname) [Linux.CN CA]:
|
||||
Name [EasyRSA]:
|
||||
Email Address [me@myhost.mydomain]:
|
||||
Email Address [open@vpn.linux.cn]:
|
||||
|
||||
接下来,我们需要生成一个服务器密钥
|
||||
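生成服务器密钥的命令通常如下(针对 easy-rsa 2.x,其中“server”是证书名,仅为示例),执行后会出现类似下面的交互式对话:

    ./build-key-server server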
|
||||
@ -102,14 +102,14 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示
|
||||
For some fields there will be a default value,
|
||||
If you enter '.', the field will be left blank.
|
||||
-----
|
||||
Country Name (2 letter code) [US]:
|
||||
State or Province Name (full name) [CA]:
|
||||
Locality Name (eg, city) [SanFrancisco]:
|
||||
Organization Name (eg, company) [Fort-Funston]:
|
||||
Organizational Unit Name (eg, section) [MyOrganizationalUnit]:
|
||||
Common Name (eg, your name or your server's hostname) [server]:
|
||||
Country Name (2 letter code) [CN]:
|
||||
State or Province Name (full name) [BJ]:
|
||||
Locality Name (eg, city) [Beijing]:
|
||||
Organization Name (eg, company) [Linux.CN]:
|
||||
Organizational Unit Name (eg, section) [Tech]:
|
||||
Common Name (eg, your name or your server's hostname) [Linux.CN server]:
|
||||
Name [EasyRSA]:
|
||||
Email Address [me@myhost.mydomain]:
|
||||
Email Address [open@vpn.linux.cn]:
|
||||
|
||||
Please enter the following 'extra' attributes
|
||||
to be sent with your certificate request
|
||||
@ -119,14 +119,14 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示
|
||||
Check that the request matches the signature
|
||||
Signature ok
|
||||
The Subject's Distinguished Name is as follows
|
||||
countryName :PRINTABLE:'US'
|
||||
stateOrProvinceName :PRINTABLE:'CA'
|
||||
localityName :PRINTABLE:'SanFrancisco'
|
||||
organizationName :PRINTABLE:'Fort-Funston'
|
||||
organizationalUnitName:PRINTABLE:'MyOrganizationalUnit'
|
||||
commonName :PRINTABLE:'server'
|
||||
countryName :PRINTABLE:'CN'
|
||||
stateOrProvinceName :PRINTABLE:'BJ'
|
||||
localityName :PRINTABLE:'Beijing'
|
||||
organizationName :PRINTABLE:'Linux.CN'
|
||||
organizationalUnitName:PRINTABLE:'Tech'
|
||||
commonName :PRINTABLE:'Linux.CN server'
|
||||
name :PRINTABLE:'EasyRSA'
|
||||
emailAddress :IA5STRING:'me@myhost.mydomain'
|
||||
emailAddress :IA5STRING:'open@vpn.linux.cn'
|
||||
Certificate is to be certified until May 22 19:00:25 2025 GMT (3650 days)
|
||||
Sign the certificate? [y/n]:y
|
||||
1 out of 1 certificate requests certified, commit? [y/n]y
|
||||
@ -143,7 +143,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示
|
||||
|
||||
Generating DH parameters, 2048 bit long safe prime, generator 2
|
||||
This is going to take a long time
|
||||
................................+................<and many many dots>
|
||||
................................+................<许多的点>
|
||||
|
||||
在漫长的等待之后,我们可以继续生成最后的密钥了,该密钥用于TLS验证。命令如下:
|
||||
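用于 TLS 验证的静态密钥一般可以这样生成(文件名和路径仅为示例):

    openvpn --genkey --secret keys/ta.key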
|
||||
@ -176,7 +176,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示
|
||||
|
||||
### Unix的客户端配置 ###
|
||||
|
||||
假定我们有一台装有类Unix操作系统的设备,比如Ubuntu 15.04,并安装有OpenVPN。我们想要从先前的部分连接到OpenVPN服务器。首先,我们需要为客户端生成密钥。为了生成该密钥,请转到服务器上的目录中:
|
||||
假定我们有一台装有类Unix操作系统的设备,比如Ubuntu 15.04,并安装有OpenVPN。我们想要连接到前面建立的OpenVPN服务器。首先,我们需要为客户端生成密钥。为了生成该密钥,请转到服务器上的对应目录中:
|
||||
|
||||
cd /etc/openvpn/easy-rsa/2.0
|
||||
|
||||
@ -211,7 +211,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示
|
||||
dev tun
|
||||
proto udp
|
||||
|
||||
# IP and Port of remote host with OpenVPN server
|
||||
# 远程 OpenVPN 服务器的 IP 和 端口号
|
||||
remote 111.222.333.444 1194
|
||||
|
||||
resolv-retry infinite
|
||||
@ -243,7 +243,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示
|
||||
|
||||
安卓设备上的OpenVPN配置和Unix系统上的十分类似,我们需要一个含有配置文件、密钥和证书的包。文件列表如下:
|
||||
|
||||
- configuration file (.ovpn),
|
||||
- 配置文件 (扩展名 .ovpn),
|
||||
- ca.crt,
|
||||
- dh2048.pem,
|
||||
- client.crt,
|
||||
@ -257,7 +257,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示
|
||||
dev tun
|
||||
proto udp
|
||||
|
||||
# IP and Port of remote host with OpenVPN server
|
||||
# 远程 OpenVPN 服务器的 IP 和 端口号
|
||||
remote 111.222.333.444 1194
|
||||
|
||||
resolv-retry infinite
|
||||
@ -274,21 +274,21 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示
|
||||
|
||||
所有这些文件我们必须移动我们设备的SD卡上。
|
||||
|
||||
然后,我们需要安装[OpenVPN连接][2]。
|
||||
然后,我们需要安装一个[OpenVPN Connect][2] 应用。
|
||||
|
||||
接下来,配置过程很是简单:
|
||||
|
||||
open setting of OpenVPN and select Import options
|
||||
select Import Profile from SD card option
|
||||
in opened window go to folder with prepared files and select .ovpn file
|
||||
application offered us to create a new profile
|
||||
tap on the Connect button and wait a second
|
||||
- 打开 OpenVPN 并选择“Import”选项
|
||||
- 选择“Import Profile from SD card”
|
||||
- 在打开的窗口中导航到我们放置好文件的目录,并选择那个 .ovpn 文件
|
||||
- 应用会要求我们创建一个新的配置文件
|
||||
- 点击“Connect”按钮并稍等一下
|
||||
|
||||
搞定。现在,我们的安卓设备已经通过安全的VPN连接连接到我们的专用网。
|
||||
|
||||
### 尾声 ###
|
||||
|
||||
虽然OpenVPN初始配置花费不少时间,但是简易客户端配置为我们弥补了时间上的损失,也提供了从任何设备连接的能力。此外,OpenVPN提供了一个很高的安全等级,以及从不同地方连接的能力,包括位于NAT后面的客户端。因此,OpenVPN可以同时在家和在企业中使用。
|
||||
虽然OpenVPN初始配置花费不少时间,但是简易的客户端配置为我们弥补了时间上的损失,也提供了从任何设备连接的能力。此外,OpenVPN提供了一个很高的安全等级,以及从不同地方连接的能力,包括位于NAT后面的客户端。因此,OpenVPN可以同时在家和企业中使用。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -296,7 +296,7 @@ via: http://linoxide.com/ubuntu-how-to/configure-openvpn-server-client-ubuntu-15
|
||||
|
||||
作者:[Ivan Zabrovskiy][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,205 @@
|
||||
关于Linux防火墙'iptables'的面试问答
|
||||
================================================================================
|
||||
Nishita Agarwal是Tecmint的用户,她将分享关于她刚刚经历的一家公司(印度的一家私人公司Pune)的面试经验。在面试中她被问及许多不同的问题,但她是iptables方面的专家,因此她想分享这些关于iptables的问题和相应的答案给那些以后可能会进行相关面试的人。
|
||||
|
||||
![Linux防火墙Iptables面试问题](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-iptables-Interview-Questions.jpg)
|
||||
|
||||
所有的问题和相应的答案都基于Nishita Agarwal的记忆并经过了重写。
|
||||
|
||||
> “嗨,朋友!我叫**Nishita Agarwal**。我已经取得了理学学士学位,我的专业方向集中在UNIX和它的变种(BSD,Linux),它们一直深深地吸引着我。我在存储方面有1年多的经验。我正在寻求职业上的变化,并将供职于印度浦那(Pune)的一家公司。”
|
||||
|
||||
下面是我在面试中被问到的问题的集合。我已经把我记忆中有关iptables的问题和它们的答案记录了下来。希望这会对您未来的面试有所帮助。
|
||||
|
||||
### 1. 你听说过Linux下面的iptables和Firewalld么?知不知道它们是什么,是用来干什么的? ###
|
||||
|
||||
**答案** : iptables和Firewalld我都知道,并且我已经使用iptables好一段时间了。iptables主要由C语言写成,并且以GNU GPL许可证发布。它是从系统管理员的角度写的,最新的稳定版是iptables 1.4.21。iptables通常被用作类UNIX系统中的防火墙,更准确的说,可以称为iptables/netfilter。管理员通过终端/GUI工具与iptables打交道,来添加和定义防火墙规则到预定义的表中。Netfilter是内核中的一个模块,它执行包过滤的任务。
|
||||
|
||||
Firewalld是RHEL/CentOS 7(也许还有其他发行版,但我不太清楚)中最新的过滤规则的实现。它已经取代了iptables接口,并与netfilter相连接。
|
||||
|
||||
### 2. 你用过一些iptables的GUI或命令行工具么? ###
|
||||
|
||||
**答案** : 虽然我既用过GUI工具,比如与[Webmin][1]结合的Shorewall;以及直接通过终端访问iptables,但我必须承认通过Linux终端直接访问iptables能给予用户更高级的灵活性、以及对其背后工作更好的理解的能力。GUI适合初级管理员,而终端适合有经验的管理员。
|
||||
|
||||
### 3. 那么iptables和firewalld的基本区别是什么呢? ###
|
||||
|
||||
**答案** : iptables和firewalld都有着同样的目的(包过滤),但它们使用不同的方式。iptables与firewalld不同,在每次发生更改时都刷新整个规则集。通常iptables配置文件位于‘/etc/sysconfig/iptables‘,而firewalld的配置文件位于‘/etc/firewalld/‘。firewalld的配置文件是一组XML文件。以XML为基础进行配置的firewalld比iptables的配置更加容易,但是两者都可以完成同样的任务。例如,firewalld可以在自己的命令行界面以及基于XML的配置文件下使用iptables。
|
||||
|
||||
### 4. 如果有机会的话,你会在你所有的服务器上用firewalld替换iptables么? ###
|
||||
|
||||
**答案** : 我对iptables很熟悉,它也工作的很好。如果没有任何需求需要firewalld的动态特性,那么没有理由把所有的配置都从iptables移动到firewalld。通常情况下,目前为止,我还没有看到iptables造成什么麻烦。IT技术的通用准则也说道“为什么要修一件没有坏的东西呢?”。上面是我自己的想法,但如果组织愿意用firewalld替换iptables的话,我不介意。
|
||||
|
||||
### 5. 你看上去对iptables很有信心,巧的是,我们的服务器也在使用iptables。 ###
|
||||
|
||||
iptables使用的表有哪些?请简要的描述iptables使用的表以及它们所支持的链。
|
||||
|
||||
**答案** : 谢谢您的赞赏。至于您问的问题,iptables使用的表有四个,它们是:
|
||||
|
||||
- Nat 表
|
||||
- Mangle 表
|
||||
- Filter 表
|
||||
- Raw 表
|
||||
|
||||
Nat表 : Nat表主要用于网络地址转换。根据表中的每一条规则修改网络包的IP地址。流中的包仅遍历一遍Nat表。例如,如果一个通过某个接口的包被修饰(修改了IP地址),该流中其余的包将不再遍历这个表。通常不建议在这个表中进行过滤,由NAT表支持的链称为PREROUTING 链,POSTROUTING 链和OUTPUT 链。
|
||||
|
||||
Mangle表 : 正如它的名字一样,这个表用于校正网络包。它用来对特殊的包进行修改。它能够修改不同包的头部和内容。Mangle表不能用于地址伪装。支持的链包括PREROUTING 链,OUTPUT 链,Forward 链,Input 链和POSTROUTING 链。
|
||||
|
||||
Filter表 : Filter表是iptables中使用的默认表,它用来过滤网络包。如果没有定义任何规则,Filter表则被当作默认的表,并且基于它来过滤。支持的链有INPUT 链,OUTPUT 链,FORWARD 链。
|
||||
|
||||
Raw表 : Raw表在我们想要配置之前被豁免的包时被使用。它支持PREROUTING 链和OUTPUT 链。
|
||||
|
||||
### 6. 简要谈谈什么是iptables中的目标值(能被指定为目标),他们有什么用 ###
|
||||
|
||||
**答案** : 下面是在iptables中可以指定为目标的值:
|
||||
|
||||
- ACCEPT : 接受包
|
||||
- QUEUE : 将包传递到用户空间 (应用程序和驱动所在的地方)
|
||||
- DROP : 丢弃包
|
||||
- RETURN : 将控制权交回调用的链并且为当前链中的包停止执行下一调用规则
|
||||
|
||||
### 7. 让我们来谈谈iptables技术方面的东西,我的意思是说实际使用方面 ###
|
||||
|
||||
你怎么检测在CentOS中安装iptables时需要的iptables的rpm?
|
||||
|
||||
**答案** : iptables已经被默认安装在CentOS中,我们不需要单独安装它。但可以这样检测rpm:
|
||||
|
||||
# rpm -qa iptables
|
||||
|
||||
iptables-1.4.21-13.el7.x86_64
|
||||
|
||||
如果您需要安装它,您可以用yum来安装。
|
||||
|
||||
# yum install iptables-services
|
||||
|
||||
### 8. 怎样检测并且确保iptables服务正在运行? ###
|
||||
|
||||
**答案** : 您可以在终端中运行下面的命令来检测iptables的状态。
|
||||
|
||||
# service status iptables [On CentOS 6/5]
|
||||
# systemctl status iptables [On CentOS 7]
|
||||
|
||||
如果iptables没有在运行,可以使用下面的语句
|
||||
|
||||
---------------- 在CentOS 6/5下 ----------------
|
||||
# chkconfig --level 35 iptables on
|
||||
# service iptables start
|
||||
|
||||
---------------- 在CentOS 7下 ----------------
|
||||
# systemctl enable iptables
|
||||
# systemctl start iptables
|
||||
|
||||
我们还可以检测iptables的模块是否被加载:
|
||||
|
||||
# lsmod | grep ip_tables
|
||||
|
||||
### 9. 你怎么检查iptables中当前定义的规则呢? ###
|
||||
|
||||
**答案** : 当前的规则可以简单的用下面的命令查看:
|
||||
|
||||
# iptables -L
|
||||
|
||||
示例输出
|
||||
|
||||
Chain INPUT (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
|
||||
ACCEPT icmp -- anywhere anywhere
|
||||
ACCEPT all -- anywhere anywhere
|
||||
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
|
||||
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
|
||||
|
||||
Chain FORWARD (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
|
||||
|
||||
Chain OUTPUT (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
|
||||
### 10. 你怎样刷新所有的iptables规则或者特定的链呢? ###
|
||||
|
||||
**答案** : 您可以使用下面的命令来刷新一个特定的链。
|
||||
|
||||
# iptables --flush OUTPUT
|
||||
|
||||
要刷新所有的规则,可以用:
|
||||
|
||||
# iptables --flush
|
||||
|
||||
### 11. 请在iptables中添加一条规则,接受所有从一个信任的IP地址(例如,192.168.0.7)过来的包。 ###
|
||||
|
||||
**答案** : 上面的场景可以通过运行下面的命令来完成。
|
||||
|
||||
# iptables -A INPUT -s 192.168.0.7 -j ACCEPT
|
||||
|
||||
我们还可以在源IP中使用标准的斜线和子网掩码:
|
||||
|
||||
# iptables -A INPUT -s 192.168.0.7/24 -j ACCEPT
|
||||
# iptables -A INPUT -s 192.168.0.7/255.255.255.0 -j ACCEPT
|
||||
|
||||
### 12. 怎样在iptables中添加规则以ACCEPT,REJECT,DENY和DROP ssh的服务? ###
|
||||
|
||||
**答案** : 假设ssh服务运行在22号端口(这也是ssh的默认端口),我们可以在iptables中添加规则来ACCEPT ssh的tcp包(在22号端口上)。
|
||||
|
||||
    # iptables -A INPUT -p tcp --dport 22 -j ACCEPT
|
||||
|
||||
REJECT ssh服务(22号端口)的tcp包。
|
||||
|
||||
    # iptables -A INPUT -p tcp --dport 22 -j REJECT
|
||||
|
||||
DENY ssh服务(22号端口)的tcp包。
|
||||
|
||||
|
||||
    # iptables -A INPUT -p tcp --dport 22 -j DENY
|
||||
|
||||
DROP ssh服务(22号端口)的tcp包。
|
||||
|
||||
|
||||
    # iptables -A INPUT -p tcp --dport 22 -j DROP
|
||||
|
||||
### 13. 让我给你另一个场景,假如有一台电脑的本地IP地址是192.168.0.6。你需要封锁在22、23、80和8080号端口上的连接,你会怎么做? ###
|
||||
|
||||
**答案** : 这时,我所需要的就是在iptables中使用‘multiport‘选项,并将要封锁的端口号跟在它后面。上面的场景可以用下面的一条语句搞定:
|
||||
|
||||
# iptables -A INPUT -s 192.168.0.6 -p tcp -m multiport --dport 22,23,80,8080 -j DROP
|
||||
|
||||
可以用下面的语句查看写入的规则。
|
||||
|
||||
# iptables -L
|
||||
|
||||
Chain INPUT (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
|
||||
ACCEPT icmp -- anywhere anywhere
|
||||
ACCEPT all -- anywhere anywhere
|
||||
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
|
||||
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
|
||||
DROP tcp -- 192.168.0.6 anywhere multiport dports ssh,telnet,http,webcache
|
||||
|
||||
Chain FORWARD (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
|
||||
|
||||
Chain OUTPUT (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
|
||||
**面试官** : 好了,我问的就是这些。你是一个很有价值的雇员,我们不会错过你的。我将会向HR推荐你的名字。如果你有什么问题,请问我。
|
||||
|
||||
作为一个候选人我不愿不断的问将来要做的项目的事以及公司里其他的事,这样会打断愉快的对话。更不用说HR轮会不会比较难,总之,我获得了机会。
|
||||
|
||||
同时我要感谢Avishek和Ravi(我的朋友)花时间帮我整理我的面试。
|
||||
|
||||
朋友!如果您有过类似的面试,并且愿意与数百万Tecmint读者一起分享您的面试经历,请将您的问题和答案发送到admin@tecmint.com。
|
||||
|
||||
谢谢!保持联系。如果我能更好的回答我上面的问题的话,请记得告诉我。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/linux-firewall-iptables-interview-questions-and-answers/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[wwy-hust](https://github.com/wwy-hust)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://www.tecmint.com/install-webmin-web-based-system-administration-tool-for-rhel-centos-fedora/
|
@ -1,11 +1,14 @@
|
||||
如何使用Docker Machine部署Swarm集群
|
||||
================================================================================
|
||||
大家好,今天我们来研究一下如何使用Docker Machine部署Swarm集群。Docker Machine提供了独立的Docker API,所以任何与Docker守护进程进行交互的工具都可以使用Swarm来(透明地)扩增到多台主机上。Docker Machine可以用来在个人电脑、云端以及的数据中心里创建Docker主机。它为创建服务器,安装Docker以及根据用户设定配置Docker客户端提供了便捷化的解决方案。我们可以使用任何驱动来部署swarm集群,并且swarm集群将由于使用了TLS加密具有极好的安全性。
|
||||
|
||||
大家好,今天我们来研究一下如何使用Docker Machine部署Swarm集群。Docker Machine提供了标准的Docker API 支持,所以任何可以与Docker守护进程进行交互的工具都可以使用Swarm来(透明地)扩增到多台主机上。Docker Machine可以用来在个人电脑、云端以及的数据中心里创建Docker主机。它为创建服务器,安装Docker以及根据用户设定来配置Docker客户端提供了便捷化的解决方案。我们可以使用任何驱动来部署swarm集群,并且swarm集群将由于使用了TLS加密具有极好的安全性。
|
||||
|
||||
下面是我提供的简便方法。
|
||||
|
||||
### 1. 安装Docker Machine ###
|
||||
|
||||
Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Github上下载最新版本的Docker Machine。我们使用curl命令来下载最先版本Docker Machine ie 0.2.0。
|
||||
Docker Machine 在各种Linux系统上都支持的很好。首先,我们需要从Github上下载最新版本的Docker Machine。我们使用curl命令来下载最先版本Docker Machine ie 0.2.0。
|
||||
|
||||
64位操作系统:
|
||||
|
||||
# curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine
|
||||
@ -18,7 +21,7 @@ Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Git
|
||||
|
||||
# chmod +x /usr/local/bin/docker-machine
|
||||
|
||||
在做完上面的事情以后,我们必须确保docker-machine已经安装好。怎么检查呢?运行docker-machine -v指令,指令将会给出我们系统上所安装的docker-machine版本。
|
||||
在做完上面的事情以后,我们要确保docker-machine已经安装正确。怎么检查呢?运行`docker-machine -v`指令,该指令将会给出我们系统上所安装的docker-machine版本。
|
||||
|
||||
# docker-machine -v
|
||||
|
||||
@ -31,14 +34,15 @@ Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Git
|
||||
|
||||
### 2. 创建Machine ###
|
||||
|
||||
在将Docker Machine安装到我们的设备上之后,我们需要使用Docker Machine创建一个machine。在这片文章中,我们会将其部署在Digital Ocean Platform上。所以我们将使用“digitalocean”作为它的Driver API,然后将docker swarm运行在其中。这个Droplet会被设置为Swarm主节点,我们还要创建另外一个Droplet,并将其设定为Swarm节点代理。
|
||||
在将Docker Machine安装到我们的设备上之后,我们需要使用Docker Machine创建一个machine。在这篇文章中,我们会将其部署在Digital Ocean Platform上。所以我们将使用“digitalocean”作为它的Driver API,然后将docker swarm运行在其中。这个Droplet会被设置为Swarm主控节点,我们还要创建另外一个Droplet,并将其设定为Swarm节点代理。
|
||||
|
||||
创建machine的命令如下:
|
||||
|
||||
# docker-machine create --driver digitalocean --digitalocean-access-token <API-Token> linux-dev
|
||||
|
||||
**Note**: 假设我们要创建一个名为“linux-dev”的machine。<API-Token>是用户在Digital Ocean Cloud Platform的Digital Ocean控制面板中生成的密钥。为了获取这个密钥,我们需要登录我们的Digital Ocean控制面板,然后点击API选项,之后点击Generate New Token,起个名字,然后在Read和Write两个选项上打钩。之后我们将得到一个很长的十六进制密钥,这个就是<API-Token>了。用其替换上面那条命令中的API-Token字段。
|
||||
**备注**: 假设我们要创建一个名为“linux-dev”的machine。<API-Token>是用户在Digital Ocean Cloud Platform的Digital Ocean控制面板中生成的密钥。为了获取这个密钥,我们需要登录我们的Digital Ocean控制面板,然后点击API选项,之后点击Generate New Token,起个名字,然后在Read和Write两个选项上打钩。之后我们将得到一个很长的十六进制密钥,这个就是<API-Token>了。用其替换上面那条命令中的API-Token字段。
|
||||
|
||||
现在,运行下面的指令,将Machine configuration装载进shell。
|
||||
现在,运行下面的指令,将Machine 的配置变量加载进shell里。
|
||||
|
||||
# eval "$(docker-machine env linux-dev)"
|
||||
|
||||
@ -48,7 +52,7 @@ Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Git
|
||||
|
||||
# docker-machine active linux-dev
|
||||
|
||||
现在,我们检查是否它(指machine)被标记为了 ACTIVE "*"。
|
||||
现在,我们检查它(指machine)是否被标记为了 ACTIVE "*"。
|
||||
|
||||
# docker-machine ls
|
||||
|
||||
@ -56,22 +60,21 @@ Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Git
|
||||
|
||||
### 3. 运行Swarm Docker镜像 ###
|
||||
|
||||
现在,在我们创建完成了machine之后。我们需要将swarm docker镜像部署上去。这个machine将会运行这个docker镜像并且控制Swarm主节点和从节点。使用下面的指令运行镜像:
|
||||
现在,在我们创建完成了machine之后。我们需要将swarm docker镜像部署上去。这个machine将会运行这个docker镜像,并且控制Swarm主控节点和从节点。使用下面的指令运行镜像:
|
||||
|
||||
# docker run swarm create
|
||||
|
||||
![Docker Machine Swarm Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-create.png)
|
||||
|
||||
If you are trying to run swarm docker image using **32 bit Operating System** in the computer where Docker Machine is running, we'll need to SSH into the Droplet.
|
||||
如果你想要在**32位操作系统**上运行swarm docker镜像。你需要SSH登录到Droplet当中。
|
||||
|
||||
# docker-machine ssh
|
||||
#docker run swarm create
|
||||
#exit
|
||||
|
||||
### 4. 创建Swarm主节点 ###
|
||||
### 4. 创建Swarm主控节点 ###
|
||||
|
||||
在我们的swarm image已经运行在machine当中之后,我们将要创建一个Swarm主节点。使用下面的语句,添加一个主节点。(这里的感觉怪怪的,好像少翻译了很多东西,是我把Master翻译为主节点的原因吗?)
|
||||
在我们的swarm image已经运行在machine当中之后,我们将要创建一个Swarm主控节点。使用下面的语句,添加一个主控节点。
|
||||
|
||||
# docker-machine create \
|
||||
-d digitalocean \
|
||||
@ -83,9 +86,9 @@ If you are trying to run swarm docker image using **32 bit Operating System** in
|
||||
|
||||
![Docker Machine Swarm Master Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-master-create.png)
|
||||
|
||||
### 5. 创建Swarm结点群 ###
|
||||
### 5. 创建Swarm从节点 ###
|
||||
|
||||
现在,我们将要创建一个swarm结点,此结点将与Swarm主节点相连接。下面的指令将创建一个新的名为swarm-node的droplet,其与Swarm主节点相连。到此,我们就拥有了一个两节点的swarm集群了。
|
||||
现在,我们将要创建一个swarm从节点,此节点将与Swarm主控节点相连接。下面的指令将创建一个新的名为swarm-node的droplet,其与Swarm主控节点相连。到此,我们就拥有了一个两节点的swarm集群了。
|
||||
|
||||
# docker-machine create \
|
||||
-d digitalocean \
|
||||
@ -96,21 +99,19 @@ If you are trying to run swarm docker image using **32 bit Operating System** in
|
||||
|
||||
![Docker Machine Swarm Nodes](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-nodes.png)
|
||||
|
||||
### 6. Connecting to the Swarm Master ###
|
||||
### 6. 与Swarm主节点连接 ###
|
||||
### 6. 与Swarm主控节点连接 ###
|
||||
|
||||
现在,我们连接Swarm主节点以便我们可以依照需求和配置文件在节点间部署Docker容器。运行下列命令将Swarm主节点的Machine配置文件加载到环境当中。
|
||||
现在,我们连接Swarm主控节点以便我们可以依照需求和配置文件在节点间部署Docker容器。运行下列命令将Swarm主控节点的Machine配置文件加载到环境当中。
|
||||
|
||||
# eval "$(docker-machine env --swarm swarm-master)"
|
||||
|
||||
然后,我们就可以跨结点地运行我们所需的容器了。在这里,我们还要检查一下是否一切正常。所以,运行**docker info**命令来检查Swarm集群的信息。
|
||||
然后,我们就可以跨节点地运行我们所需的容器了。在这里,我们还要检查一下是否一切正常。所以,运行**docker info**命令来检查Swarm集群的信息。
|
||||
|
||||
# docker info
|
||||
|
||||
### Conclusion ###
|
||||
### 总结 ###
|
||||
|
||||
我们可以用Docker Machine轻而易举地创建Swarm集群。这种方法有非常高的效率,因为它极大地减少了系统管理员和用户的时间消耗。在这篇文章中,我们以Digital Ocean作为驱动,通过创建一个主节点和一个从节点成功地部署了集群。其他类似的应用还有VirtualBox,Google Cloud Computing,Amazon Web Service,Microsoft Azure等等。这些连接都是通过TLS进行加密的,具有很高的安全性。如果你有任何的疑问,建议,反馈,欢迎在下面的评论框中注明以便我们可以更好地提高文章的质量!
|
||||
我们可以用Docker Machine轻而易举地创建Swarm集群。这种方法有非常高的效率,因为它极大地减少了系统管理员和用户的时间消耗。在这篇文章中,我们以Digital Ocean作为驱动,通过创建一个主控节点和一个从节点成功地部署了集群。其他类似的驱动还有VirtualBox,Google Cloud Computing,Amazon Web Service,Microsoft Azure等等。这些连接都是通过TLS进行加密的,具有很高的安全性。如果你有任何的疑问,建议,反馈,欢迎在下面的评论框中注明以便我们可以更好地提高文章的质量!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -118,7 +119,7 @@ via: http://linoxide.com/linux-how-to/provision-swarm-clusters-using-docker-mach
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[DongShuaike](https://github.com/DongShuaike)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,101 @@
|
||||
FreeBSD 和 Linux 有什么不同?
|
||||
================================================================================
|
||||
|
||||
![](https://1102047360.rsc.cdn77.org/wp-content/uploads/2015/03/FreeBSD-790x494.jpg)
|
||||
|
||||
### 简介 ###
|
||||
|
||||
BSD最初从UNIX继承而来,目前,有许多的类Unix操作系统是基于BSD的。FreeBSD是使用最广泛的开源的伯克利软件发行版(即 BSD 发行版)。就像它隐含的意思一样,它是一个自由开源的类Unix操作系统,并且是公共服务器平台。FreeBSD源代码通常以宽松的BSD许可证发布。它与Linux有很多相似的地方,但我们得承认它们在很多方面仍有不同。
|
||||
|
||||
本文的其余部分组织如下:FreeBSD的描述在第一部分,FreeBSD和Linux的相似点在第二部分,它们的区别将在第三部分讨论,对他们功能的讨论和总结在最后一节。
|
||||
|
||||
### FreeBSD描述 ###
|
||||
|
||||
#### 历史 ####
|
||||
|
||||
- FreeBSD的第一个版本发布于1993年,它的第一张CD-ROM是FreeBSD1.0,发行于1993年12月。接下来,FreeBSD 2.1.0在1995年发布,并且获得了所有用户的青睐。实际上许多IT公司都使用FreeBSD并且很满意,我们可以列出其中的一些:IBM、Nokia、NetApp和Juniper Network。
|
||||
|
||||
#### 许可证 ####
|
||||
|
||||
- 关于它的许可证,FreeBSD以多种开源许可证进行发布。其最新的内核代码以两句版BSD许可证发布,给予使用和重新发布FreeBSD的绝对自由。其它的代码则以三句版或四句版BSD许可证发布,还有一些以GPL和CDDL许可证发布。
|
||||
|
||||
(LCTT 译注:BSD 许可证与 GPL 许可证相比,相当简短,最初只有四句规则;1999年应 RMS 请求,删除了第三句,新的许可证称作“新 BSD”或三句版BSD;原来的 BSD 许可证称作“旧 BSD”、“修订的 BSD”或四句版BSD;也有一种删除了第三、第四两句的版本,称之为两句版 BSD,等价于 MIT 许可证。)
|
||||
|
||||
#### 用户 ####
|
||||
|
||||
- FreeBSD的重要特点之一就是它的用户多样性。实际上,FreeBSD可以作为邮件服务器、Web 服务器、FTP 服务器以及路由器等,您只需要在它上运行服务相关的软件即可。而且FreeBSD还支持ARM、PowerPC、MIPS、x86、x86-64架构。
|
||||
|
||||
### FreeBSD和Linux的相似处 ###
|
||||
|
||||
FreeBSD和Linux是两个自由开源的软件。实际上,它们的用户可以很容易的检查并修改源代码,用户拥有绝对的自由。而且,FreeBSD和Linux都是类Unix系统,它们的内核、内部组件、库程序都使用从历史上的AT&T Unix继承来的算法。FreeBSD从根基上更像Unix系统,而Linux是作为自由的类Unix系统发布的。许多工具应用都可以在FreeBSD和Linux中找到,实际上,他们几乎有同样的功能。
|
||||
|
||||
此外,FreeBSD能够运行大量的Linux应用。它可以安装一个Linux的兼容层,这个兼容层可以在编译FreeBSD时加入AAC Compact Linux得到,或通过下载已编译了Linux兼容层的FreeBSD系统,其中会包括兼容程序:aac_linux.ko。不同于FreeBSD的是,Linux无法运行FreeBSD的软件。
|
||||
|
||||
最后,我们注意到虽然二者有同样的目标,但二者还是有一些不同之处,我们在下一节中列出。
|
||||
|
||||
### FreeBSD和Linux的区别 ###
|
||||
|
||||
目前对于大多数用户来说并没有一个选择FreeBSD还是Linux的明确的准则。因为他们有着很多同样的应用程序,因为他们都被称作类Unix系统。
|
||||
|
||||
在这一章,我们将列出这两种系统的一些重要的不同之处。
|
||||
|
||||
#### 许可证 ####
|
||||
|
||||
- 两个系统的区别首先在于它们的许可证。Linux以GPL许可证发行,它为用户提供阅读、发行和修改源代码的自由,GPL许可证帮助用户避免仅仅发行二进制。而FreeBSD以BSD许可证发布,BSD许可证比GPL更宽容,因为其衍生著作不需要仍以该许可证发布。这意味着任何用户能够使用、发布、修改代码,并且不需要维持之前的许可证。
|
||||
- 您可以依据您的需求,在两种许可证中选择一种。首先是BSD许可证,由于其特殊的条款,它更受用户青睐。实际上,这个许可证使用户在保证源代码的封闭性的同时,可以售卖以该许可证发布的软件。再说说GPL,它需要每个使用以该许可证发布的软件的用户多加注意。
|
||||
- 如果想在以不同许可证发布的两种软件中做出选择,您需要了解他们各自的许可证,以及他们开发中的方法论,从而能了解他们特性的区别,来选择更适合自己需求的。
|
||||
|
||||
#### 控制 ####
|
||||
|
||||
- 由于FreeBSD和Linux是以不同的许可证发布的,Linus Torvalds控制着Linux的内核,而FreeBSD却与Linux不同,它并未被控制。我个人更倾向于使用FreeBSD而不是Linux,这是因为FreeBSD才是绝对自由的软件,没有任何控制许可证的存在。Linux和FreeBSD还有其他的不同之处,我建议您先不急着做出选择,等读完本文后再做出您的选择。
|
||||
|
||||
#### 操作系统 ####
|
||||
|
||||
- Linux主要指内核系统,这与FreeBSD不同,FreeBSD的整个系统都被维护着。FreeBSD的内核和一组由FreeBSD团队开发的软件被作为一个整体进行维护。实际上,FreeBSD开发人员能够远程且高效的管理核心操作系统。
|
||||
- 而Linux方面,在管理系统方面有一些困难。由于不同的组件由不同的源维护,Linux开发者需要将它们汇集起来,才能达到同样的功能。
|
||||
- FreeBSD和Linux都给了用户大量的可选软件和发行版,但他们管理的方式不同。FreeBSD是统一的管理方式,而Linux需要被分别维护。
|
||||
|
||||
#### 硬件支持 ####
|
||||
|
||||
- 说到硬件支持,Linux比FreeBSD做的更好。但这不意味着FreeBSD没有像Linux那样支持硬件的能力。他们只是在管理的方式不同,这通常还依赖于您的需求。因此,如果您在寻找最新的解决方案,FreeBSD更适应您;但如果您在寻找更多的普适性,那最好使用Linux。
|
||||
|
||||
#### 原生FreeBSD Vs 原生Linux ####
|
||||
|
||||
- 两者的原生系统也有所不同。就像我之前说的,Linux是一个Unix的替代系统,由Linus Torvalds编写,并由网络上的许多极客一起协助实现。Linux有一个现代系统所需要的全部功能,诸如虚拟内存、共享库、动态加载、优秀的内存管理等。它以GPL许可证发布。
|
||||
- FreeBSD也继承了Unix的许多重要的特性。FreeBSD作为在加州大学开发的BSD的一种发行版。开发BSD的最重要的原因是用一个开源的系统来替代AT&T操作系统,从而给用户无需AT&T许可证便可使用的能力。
|
||||
- 许可证的问题是开发者们最关心的问题。他们试图提供一个最大化克隆Unix的开源系统。这影响了用户的选择,由于FreeBSD使用BSD许可证进行发布,因而相比Linux更加自由。
|
||||
|
||||
#### 支持的软件包 ####
|
||||
|
||||
- 从用户的角度来看,另一个二者不同的地方便是软件包以及从源码安装的软件的可用性和支持。Linux只提供了预编译的二进制包,这与FreeBSD不同,它不但提供预编译的包,而且还提供从源码编译和安装的构建系统。使用它的 ports 工具,FreeBSD给了您选择使用预编译的软件包(默认)和在编译时定制您软件的能力。(LCTT 译注:此处说明有误。Linux 也提供了源代码方式的包,并支持自己构建。)
|
||||
- 这些 ports 允许您构建所有支持FreeBSD的软件。而且,它们的管理还是层次化的,您可以在/usr/ports下找到源文件的地址以及一些正确使用FreeBSD的文档。
|
||||
- 上面提到的 ports 给予了你构建同一软件的不同版本的可能性。FreeBSD 提供了通过源代码构建以及使用预编译包这两种安装软件的方式,而不是像 Linux 一样只有预编译的软件包,您可以使用这两种方式来管理您的系统(示例见下)。
|
||||
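以安装某个软件为例(软件名仅作示意),两种方式在 FreeBSD 上大致如下:

    # pkg install nginx
    # cd /usr/ports/www/nginx && make install clean

前者安装官方预编译的二进制包,后者通过 ports 从源码编译并安装,可以在编译时定制选项。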
|
||||
#### FreeBSD 和 Linux 常用工具比较 ####
|
||||
|
||||
- 有大量的常用工具在FreeBSD上可用,并且有趣的是他们由FreeBSD的团队所拥有。相反的,Linux工具来自GNU,这就是为什么在使用中有一些限制。(LCTT 译注:这也是 Linux 正式的名称被称作“GNU/Linux”的原因,因为本质上 Linux 其实只是指内核。)
|
||||
- 实际上FreeBSD采用的BSD许可证非常有益且有用。因此,您有能力维护核心操作系统,控制这些应用程序的开发。有一些工具类似于它们的祖先 - BSD和Unix的工具,但不同于GNU的套件,GNU套件只想做到最小的向后兼容。
|
||||
|
||||
#### 标准 Shell ####
|
||||
|
||||
- FreeBSD默认使用tcsh。它是csh的增强版本,由于FreeBSD以BSD许可证发行,因此并不推荐您在其中使用GNU的组件 bash shell。bash和tcsh的区别主要在于脚本功能。实际上,写脚本时我们更推荐在FreeBSD中使用sh shell,因为它更加可靠,可以避免一些使用tcsh和csh时出现的脚本问题。
|
||||
|
||||
#### 一个更加层次化的文件系统 ####
|
||||
|
||||
- 像之前提到的一样,使用FreeBSD时,基础操作系统以及可选组件可以被很容易地区别开来,这也形成了一些管理它们的规范。在Linux下,/bin、/sbin、/usr/bin或者/usr/sbin都是存放可执行文件的目录。FreeBSD则不同,它有一些附加的组织规范:可选组件(第三方软件)被安装在/usr/local/bin或者/usr/local/sbin目录下,而基础系统的程序仍在/bin、/usr/bin等目录中。这种方法可以帮助管理和区分基础操作系统和可选组件。
|
||||
|
||||
### 结论 ###
|
||||
|
||||
FreeBSD和Linux都是自由且开源的系统,他们有相似点也有不同点。上面列出的内容并不能说哪个系统比另一个更好。实际上,FreeBSD和Linux都有自己的特点和技术规格,这使它们与别的系统区别开来。那么,您有什么看法呢?您已经有在使用它们中的某个系统了么?如果答案为是的话,请给我们您的反馈;如果答案是否的话,在读完我们的描述后,您怎么看?请在留言处发表您的观点。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/comparative-introduction-freebsd-linux-users/
|
||||
|
||||
作者:[anismaj][a]
|
||||
译者:[wwy-hust](https://github.com/wwy-hust)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.unixmen.com/author/anis/
|
@ -1,13 +1,14 @@
|
||||
在Linux中使用‘Systemctl’管理‘Systemd’服务和单元
|
||||
systemctl 完全指南
|
||||
================================================================================
|
||||
Systemctl是一个systemd工具,主要负责控制systemd系统和服务管理器。
|
||||
|
||||
Systemd是一个系统管理守护进程、工具和库的集合,用于取代System V初始进程。Systemd的功能是用于集中管理和配置类UNIX系统。
|
||||
|
||||
在Linux生态系统中,Systemd被部署到了大多数的标准Linux发行版中,只有位数不多的几个尚未部署。Systemd通常是所有其它守护进程的父进程,但并非总是如此。
|
||||
在Linux生态系统中,Systemd被部署到了大多数的标准Linux发行版中,只有为数不多的几个发行版尚未部署。Systemd通常是所有其它守护进程的父进程,但并非总是如此。
|
||||
|
||||
![Manage Linux Services Using Systemctl](http://www.tecmint.com/wp-content/uploads/2015/04/Manage-Linux-Services-Using-Systemctl.jpg)
|
||||
使用Systemctl管理Linux服务
|
||||
|
||||
*使用Systemctl管理Linux服务*
|
||||
|
||||
本文旨在阐明在运行systemd的系统上“如何控制系统和服务”。
|
||||
|
||||
@ -41,11 +42,9 @@ Systemd是一个系统管理守护进程、工具和库的集合,用于取代S
|
||||
root 555 1 0 16:27 ? 00:00:00 /usr/lib/systemd/systemd-logind
|
||||
dbus 556 1 0 16:27 ? 00:00:00 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
|
||||
|
||||
**注意**:systemd是作为父进程(PID=1)运行的。在上面带(-e)参数的ps命令输出中,选择所有进程,(-
|
||||
**注意**:systemd是作为父进程(PID=1)运行的。在上面带(-e)参数的ps命令输出中,选择所有进程,(-a)选择除会话前导外的所有进程,并使用(-f)参数输出完整格式列表(即 -eaf)。
|
||||
|
||||
a)选择除会话前导外的所有进程,并使用(-f)参数输出完整格式列表(如 -eaf)。
|
||||
|
||||
也请注意上例中后随的方括号和样例剩余部分。方括号表达式是grep的字符类表达式的一部分。
|
||||
也请注意上例中后随的方括号和例子中剩余部分。方括号表达式是grep的字符类表达式的一部分。
|
||||
|
||||
#### 4. 分析systemd启动进程 ####
|
||||
|
||||
@ -147,7 +146,7 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完
|
||||
1 loaded units listed. Pass --all to see loaded but inactive units, too.
|
||||
To show all installed unit files use 'systemctl list-unit-files'.
|
||||
|
||||
#### 10. 检查某个单元(cron.service)是否启用 ####
|
||||
#### 10. 检查某个单元(如 cron.service)是否启用 ####
|
||||
|
||||
# systemctl is-enabled crond.service
|
||||
|
||||
@ -187,7 +186,7 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完
|
||||
dbus-org.fedoraproject.FirewallD1.service enabled
|
||||
....
|
||||
|
||||
#### 13. Linux中如何启动、重启、停止、重载服务以及检查服务(httpd.service)状态 ####
|
||||
#### 13. Linux中如何启动、重启、停止、重载服务以及检查服务(如 httpd.service)状态 ####
|
||||
|
||||
# systemctl start httpd.service
|
||||
# systemctl restart httpd.service
|
||||
@ -214,15 +213,15 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完
|
||||
Apr 28 17:21:30 tecmint systemd[1]: Started The Apache HTTP Server.
|
||||
Hint: Some lines were ellipsized, use -l to show in full.
|
||||
|
||||
**注意**:当我们使用systemctl的start,restart,stop和reload命令时,我们不会不会从终端获取到任何输出内容,只有status命令可以打印输出。
|
||||
**注意**:当我们使用systemctl的start,restart,stop和reload命令时,我们不会从终端获取到任何输出内容,只有status命令可以打印输出。
|
||||
|
||||
#### 14. 如何激活服务并在启动时启用或禁用服务(系统启动时自动启动服务) ####
|
||||
#### 14. 如何激活服务并在启动时启用或禁用服务(即系统启动时自动启动服务) ####
|
||||
|
||||
# systemctl is-active httpd.service
|
||||
# systemctl enable httpd.service
|
||||
# systemctl disable httpd.service
|
||||
|
||||
#### 15. 如何屏蔽(让它不能启动)或显示服务(httpd.service) ####
|
||||
#### 15. 如何屏蔽(让它不能启动)或显示服务(如 httpd.service) ####
|
||||
|
||||
# systemctl mask httpd.service
|
||||
ln -s '/dev/null' '/etc/systemd/system/httpd.service'
|
||||
@ -297,7 +296,7 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完
|
||||
# systemctl enable tmp.mount
|
||||
# systemctl disable tmp.mount
|
||||
|
||||
#### 20. 在Linux中屏蔽(让它不能启动)或显示挂载点 ####
|
||||
#### 20. 在Linux中屏蔽(让它不能启用)或可见挂载点 ####
|
||||
|
||||
# systemctl mask tmp.mount
|
||||
|
||||
@ -375,7 +374,7 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完
|
||||
|
||||
CPUShares=2000
|
||||
|
||||
**注意**:当你为某个服务设置CPUShares,会自动创建一个以服务名命名的目录(httpd.service),里面包含了一个名为90-CPUShares.conf的文件,该文件含有CPUShare限制信息,你可以通过以下方式查看该文件:
|
||||
**注意**:当你为某个服务设置CPUShares,会自动创建一个以服务名命名的目录(如 httpd.service),里面包含了一个名为90-CPUShares.conf的文件,该文件含有CPUShare限制信息,你可以通过以下方式查看该文件:
|
||||
|
||||
# vi /etc/systemd/system/httpd.service.d/90-CPUShares.conf
|
||||
|
||||
@ -528,13 +527,13 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完
|
||||
#### 35. 启动运行等级5,即图形模式 ####
|
||||
|
||||
# systemctl isolate runlevel5.target
|
||||
OR
|
||||
或
|
||||
# systemctl isolate graphical.target
|
||||
|
||||
#### 36. 启动运行等级3,即多用户模式(命令行) ####
|
||||
|
||||
# systemctl isolate runlevel3.target
|
||||
OR
|
||||
或
|
||||
# systemctl isolate multiuser.target
|
||||
|
||||
#### 36. 设置多用户模式或图形模式为默认运行等级 ####
|
||||
@ -572,7 +571,7 @@ via: http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,20 +1,18 @@
|
||||
|
||||
如何在 Ubuntu/CentOS7.1/Fedora22 上安装 Plex Media Server ?
|
||||
如何在 Ubuntu/CentOS7.1/Fedora22 上安装 Plex Media Server
|
||||
================================================================================
|
||||
在本文中我们将会向你展示如何容易地在主流的最新发布的Linux发行版上安装Plex Home Media Server。在Plex安装成功后你将可以使用你的集中式家庭媒体播放系统,该系统能让多个Plex播放器App共享它的媒体资源,并且该系统允许你设置你的环境,通过增加你的设备以及设置一个可以一起使用Plex的用户组。让我们首先在Ubuntu15.04上开始Plex的安装。
|
||||
在本文中我们将会向你展示如何容易地在主流的最新Linux发行版上安装Plex Media Server。在Plex安装成功后你将可以使用你的中央式家庭媒体播放系统,该系统能让多个Plex播放器App共享它的媒体资源,并且该系统允许你设置你的环境,增加你的设备以及设置一个可以一起使用Plex的用户组。让我们首先在Ubuntu15.04上开始Plex的安装。
|
||||
|
||||
### 基本的系统资源 ###
|
||||
|
||||
系统资源主要取决于你打算用来连接服务的设备类型和数量, 所以根据我们的需求我们将会在一个单独的服务器上使用以下系统资源。
|
||||
|
||||
|
||||
<table width="666" style="height: 181px;">
|
||||
<tbody>
|
||||
<tr>
|
||||
<td width="670" colspan="2"><b>Plex Home Media Server</b></td>
|
||||
<td width="670" colspan="2"><b>Plex Media Server</b></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="236"><b>Base Operating System</b></td>
|
||||
<td width="236"><b>基础操作系统</b></td>
|
||||
<td width="425">Ubuntu 15.04 / CentOS 7.1 / Fedora 22 Work Station</td>
|
||||
</tr>
|
||||
<tr>
|
||||
@ -22,11 +20,11 @@
|
||||
<td width="425">Version 0.9.12.3.1173-937aac3</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="236"><b>RAM and CPU</b></td>
|
||||
<td width="236"><b>RAM 和 CPU</b></td>
|
||||
<td width="425">1 GB , 2.0 GHZ</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td width="236"><b>Hard Disk</b></td>
|
||||
<td width="236"><b>硬盘</b></td>
|
||||
<td width="425">30 GB</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
@ -38,13 +36,13 @@
|
||||
|
||||
#### 步骤 1: 系统更新 ####
|
||||
|
||||
用root权限登陆你的服务器。确保你的系统是最新的,如果不是就使用下面的命令。
|
||||
用root权限登录你的服务器。确保你的系统是最新的,如果不是就使用下面的命令。
|
||||
|
||||
root@ubuntu-15:~#apt-get update
|
||||
|
||||
#### 步骤 2: 下载最新的Plex Media Server包 ####
|
||||
|
||||
创建一个新目录,用wget命令从Plex官网下载为Ubuntu提供的.deb包并放入该目录中。
|
||||
创建一个新目录,用wget命令从[Plex官网](https://plex.tv/)下载为Ubuntu提供的.deb包并放入该目录中。
|
||||
|
||||
root@ubuntu-15:~# cd /plex/
|
||||
root@ubuntu-15:/plex#
|
||||
@ -52,7 +50,7 @@
|
||||
|
||||
#### 步骤 3: 安装Plex Media Server的Debian包 ####
|
||||
|
||||
现在在相同的目录下执行下面的命令来开始debian包的安装, 然后检查plexmediaserver(译者注: 原文plekmediaserver, 明显笔误)的状态。
|
||||
现在在相同的目录下执行下面的命令来开始debian包的安装, 然后检查plexmediaserver服务的状态。
|
||||
|
||||
root@ubuntu-15:/plex# dpkg -i plexmediaserver_0.9.12.3.1173-937aac3_amd64.deb
|
||||
|
||||
@ -62,41 +60,41 @@
|
||||
|
||||
![Plexmediaserver Service](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-status.png)
|
||||
|
||||
### 在Ubuntu 15.04上设置Plex Home Media Web应用 ###
|
||||
### 在Ubuntu 15.04上设置Plex Media Web应用 ###
|
||||
|
||||
让我们在你的本地网络主机中打开web浏览器, 并用你的本地主机IP以及端口32400来打开Web界面并完成以下步骤来配置Plex。
|
||||
让我们在你的本地网络主机中打开web浏览器, 并用你的本地主机IP以及端口32400来打开Web界面,并完成以下步骤来配置Plex。
|
||||
|
||||
http://172.25.10.179:32400/web
|
||||
http://localhost:32400/web
|
||||
|
||||
#### 步骤 1: 登陆前先注册 ####
|
||||
#### 步骤 1: 登录前先注册 ####
|
||||
|
||||
在你访问到Plex Media Server的Web界面之后(译者注: 原文是Plesk, 应该是笔误), 确保注册并填上你的用户名(译者注: 原文username email ID感觉怪怪:))和密码来登陆。
|
||||
在你访问到Plex Media Server的Web界面之后, 确保注册并填上你的用户名和密码来登录。
|
||||
|
||||
![Plex Sign In](http://blog.linoxide.com/wp-content/uploads/2015/06/PMS-Login.png)
|
||||
|
||||
#### 输入你的PIN码来保护你的Plex Home Media用户(译者注: 原文Plex Media Home, 个人觉得专业称谓应该保持一致) ####
|
||||
#### 输入你的PIN码来保护你的Plex Media用户####
|
||||
|
||||
![Plex User Pin](http://blog.linoxide.com/wp-content/uploads/2015/06/333.png)
|
||||
|
||||
现在你已经成功的在Plex Home Media下配置你的用户。
|
||||
现在你已经成功的在Plex Media下配置你的用户。
|
||||
|
||||
![Welcome To Plex](http://blog.linoxide.com/wp-content/uploads/2015/06/3333.png)
|
||||
|
||||
### 在设备上而不是本地服务器上打开Plex Web应用 ###
|
||||
|
||||
正如我们在Plex Media主页看到的表明"你没有权限访问这个服务"。 这是因为我们跟服务器计算机不在同个网络。
|
||||
如我们在Plex Media主页看到的提示“你没有权限访问这个服务”。 这说明我们跟服务器计算机不在同个网络。
|
||||
|
||||
![Plex Server Permissions](http://blog.linoxide.com/wp-content/uploads/2015/06/33.png)
|
||||
|
||||
现在我们需要解决这个权限问题以便我们通过设备访问服务器而不是通过托管服务器(Plex服务器), 通过完成下面的步骤。
|
||||
现在我们需要解决这个权限问题,以便我们可以通过其它设备访问服务器,而不是只能在服务器上访问。请按下面的步骤操作。
|
||||
|
||||
### 设置SSH隧道使Windows系统访问到Linux服务器 ###
|
||||
### 设置SSH隧道使Windows系统可以访问到Linux服务器 ###
|
||||
|
||||
首先我们需要建立一条SSH隧道以便我们访问远程服务器资源,就好像资源在本地一样。 这仅仅是必要的初始设置。
|
||||
|
||||
如果你正在使用Windows作为你的本地系统,Linux作为服务器,那么我们可以参照下图通过Putty来设置SSH隧道。
|
||||
(译者注: 首先要在Putty的Session中用Plex服务器IP配置一个SSH的会话,才能进行下面的隧道转发规则配置。
|
||||
(LCTT译注: 首先要在Putty的Session中用Plex服务器IP配置一个SSH的会话,才能进行下面的隧道转发规则配置。
|
||||
然后点击“Open”,输入远端服务器用户名密码, 来保持SSH会话连接。)
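
如果你的本地系统是 Linux 或 Mac OS X,也可以不借助 Putty,直接用 ssh 命令建立同样的本地端口转发(下面的本地端口 8888、用户名和服务器 IP 都只是示例,请按实际情况替换):

    $ ssh -L 8888:localhost:32400 user@172.25.10.179

隧道建立后,在本地浏览器中访问 http://localhost:8888/web 即可打开远端的 Plex Web 界面。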
|
||||
|
||||
![Plex SSH Tunnel](http://blog.linoxide.com/wp-content/uploads/2015/06/ssh-tunnel.png)
|
||||
@ -111,13 +109,13 @@
|
||||
|
||||
![Agree to Plex term](http://blog.linoxide.com/wp-content/uploads/2015/06/5.png)
|
||||
|
||||
现在一个功能齐全的Plex Home Media Server已经准备好添加新的媒体库、频道、播放列表等资源。
|
||||
现在一个功能齐全的Plex Media Server已经准备好添加新的媒体库、频道、播放列表等资源。
|
||||
|
||||
![PMS Settings](http://blog.linoxide.com/wp-content/uploads/2015/06/8.png)
|
||||
|
||||
### 在CentOS 7.1上安装Plex Media Server 0.9.12.3 ###
|
||||
|
||||
我们将会按照上述在Ubuntu15.04上安装Plex Home Media Server的步骤来将Plex安装到CentOS 7.1上。
|
||||
我们将会按照上述在Ubuntu15.04上安装Plex Media Server的步骤来将Plex安装到CentOS 7.1上。
|
||||
|
||||
让我们从安装Plex Media Server开始。
|
||||
|
||||
@ -144,9 +142,9 @@
|
||||
[root@linux-tutorials plex]# systemctl enable plexmediaserver.service
|
||||
[root@linux-tutorials plex]# systemctl status plexmediaserver.service
|
||||
|
||||
### 在CentOS-7.1上设置Plex Home Media Web应用 ###
|
||||
### 在CentOS-7.1上设置Plex Media Web应用 ###
|
||||
|
||||
现在我们只需要重复在Ubuntu上设置Plex Web应用的所有步骤就可以了。 让我们在Web浏览器上打开一个新窗口并用localhost或者Plex服务器的IP(译者注: 原文为or your Plex server, 明显的笔误)来访问Plex Home Media Web应用(译者注:称谓一致)。
|
||||
现在我们只需要重复在Ubuntu上设置Plex Web应用的所有步骤就可以了。 让我们在Web浏览器上打开一个新窗口并用localhost或者Plex服务器的IP来访问Plex Media Web应用。
|
||||
|
||||
http://172.20.3.174:32400/web
|
||||
http://localhost:32400/web
|
||||
@ -157,25 +155,25 @@
|
||||
|
||||
### 在Fedora 22工作站上安装Plex Media Server 0.9.12.3 ###
|
||||
|
||||
基本的下载和安装Plex Media Server步骤跟在CentOS 7.1上安装的步骤一致。我们只需要下载对应的rpm包然后用rpm命令来安装它。
|
||||
下载和安装Plex Media Server步骤基本跟在CentOS 7.1上安装的步骤一致。我们只需要下载对应的rpm包然后用rpm命令来安装它。
|
||||
|
||||
![PMS Installation](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-on-fedora.png)
|
||||
|
||||
### 在Fedora 22工作站上配置Plex Home Media Web应用 ###
|
||||
### 在Fedora 22工作站上配置Plex Media Web应用 ###
|
||||
|
||||
我们在(与Plex服务器)相同的主机上配置Plex Media Server,因此不需要设置SSH隧道。只要在你的Fedora 22工作站上用Plex Home Media Server的默认端口号32400打开Web浏览器并同意Plex的服务条款即可。
|
||||
我们在(与Plex服务器)相同的主机上配置Plex Media Server,因此不需要设置SSH隧道。只要在你的Fedora 22工作站上用Plex Media Server的默认端口号32400打开Web浏览器并同意Plex的服务条款即可。
|
||||
|
||||
![Plex Agreement](http://blog.linoxide.com/wp-content/uploads/2015/06/Plex-Terms.png)
|
||||
|
||||
**欢迎来到Fedora 22工作站上的Plex Home Media Server**
|
||||
*欢迎来到Fedora 22工作站上的Plex Media Server*
|
||||
|
||||
让我们用你的Plex账户登陆,并且开始将你喜欢的电影频道添加到媒体库、创建你的播放列表、添加你的图片以及享用更多其他的特性。
|
||||
让我们用你的Plex账户登录,并且开始将你喜欢的电影频道添加到媒体库、创建你的播放列表、添加你的图片以及享用更多其他的特性。
|
||||
|
||||
![Plex Add Libraries](http://blog.linoxide.com/wp-content/uploads/2015/06/create-library.png)
|
||||
|
||||
### 总结 ###
|
||||
|
||||
我们已经成功完成Plex Media Server在主流Linux发行版上安装和配置。Plex Home Media Server永远都是媒体管理的最佳选择。 它在跨平台上的设置是如此的简单,就像我们在Ubuntu,CentOS以及Fedora上的设置一样。它简化了你组织媒体内容的工作,并将媒体内容“流”向其他计算机以及设备以便你跟你的朋友分享媒体内容。
|
||||
我们已经成功完成Plex Media Server在主流Linux发行版上安装和配置。Plex Media Server永远都是媒体管理的最佳选择。 它在跨平台上的设置是如此的简单,就像我们在Ubuntu,CentOS以及Fedora上的设置一样。它简化了你组织媒体内容的工作,并将媒体内容“流”向其他计算机以及设备以便你跟你的朋友分享媒体内容。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -183,7 +181,7 @@ via: http://linoxide.com/tools/install-plex-media-server-ubuntu-centos-7-1-fedor
|
||||
|
||||
作者:[Kashif Siddique][a]
|
||||
译者:[dingdongnigetou](https://github.com/dingdongnigetou)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
231
published/20150717 How to monitor NGINX- Part 1.md
Normal file
@ -0,0 +1,231 @@
|
||||
如何监控 NGINX(第一篇)
|
||||
================================================================================
|
||||
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png)
|
||||
|
||||
### NGINX 是什么? ###
|
||||
|
||||
[NGINX][1] (发音为 “engine X”) 是一种流行的 HTTP 和反向代理服务器。作为一个 HTTP 服务器,NGINX 可以使用较少的内存非常高效可靠地提供静态内容。作为[反向代理][2],它可以作为多个后端服务器,或缓存、负载均衡这类其它应用的单一访问控制点。NGINX 是一个自由开源的产品,并有一个功能更全的商业版,叫做 NGINX Plus。
|
||||
|
||||
NGINX 也可以用作邮件代理和通用的 TCP 代理,但本文并不直接讨论 NGINX 的那些用例的监控。
|
||||
|
||||
### NGINX 主要指标 ###
|
||||
|
||||
通过监控 NGINX 可以捕获到两类问题:NGINX 本身的资源问题,和出现在你的基础网络设施中的其它问题。大多数 NGINX 用户会用到以下指标的监控,包括**每秒请求数**,它提供了一个由所有最终用户活动组成的上层视图;**服务器错误率**,它表明你的服务器处理看似有效的请求时失败的频率;还有**请求处理时间**,这说明你的服务器处理客户端请求的总共时长(从中也可以看出性能降低或当前环境的其它问题)。
|
||||
|
||||
更一般地,至少有三个主要的指标类别来监视:
|
||||
|
||||
- 基本活动指标
|
||||
- 错误指标
|
||||
- 性能指标
|
||||
|
||||
下面我们将分析在每个类别中最重要的 NGINX 指标,以及用一个相当普遍但是值得特别提到的案例来说明:使用 NGINX Plus 作反向代理。我们还将介绍如何使用图形工具或可选择的监控工具来监控所有的指标。
|
||||
|
||||
本文引用指标术语[来自我们的“监控 101 系列”][3],它提供了一个指标收集和警告的框架。
|
||||
|
||||
#### 基本活跃指标 ####
|
||||
|
||||
无论你在怎样的情况下使用 NGINX,毫无疑问你要监视服务器接收多少客户端请求和如何处理这些请求。
|
||||
|
||||
NGINX Plus 上像开源 NGINX 一样可以报告基本活跃指标,但它也提供了略有不同的辅助模块。我们首先讨论开源的 NGINX,再来说明 NGINX Plus 提供的其他指标的功能。
|
||||
|
||||
**NGINX**
|
||||
|
||||
下图显示了一个客户端连接的过程,以及开源版本的 NGINX 如何在连接过程中收集指标。
|
||||
|
||||
![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png)
|
||||
|
||||
Accepts(接受)、Handled(已处理)、Requests(请求)是一直在增加的计数器。Active(活跃)、Waiting(等待)、Reading(读)、Writing(写)随着请求量而增减。
|
||||
|
||||
| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)|
|
||||
|-----------|-----------------|-------------------------------------------------------------------------------------------------------------------------|
|
||||
| Accepts | NGINX 所接受的客户端连接数 | 资源: 使用率 |
|
||||
| Handled | 成功的客户端连接数 | 资源: 使用率 |
|
||||
| Active | 当前活跃的客户端连接数| 资源: 使用率 |
|
||||
| Dropped(已丢弃,计算得出)| 丢弃的连接数(接受 - 已处理)| 工作:错误*|
|
||||
| Requests | 客户端请求数 | 工作:吞吐量 |
|
||||
|
||||
|
||||
_*严格的来说,丢弃的连接是 [一个资源饱和指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics),但是因为饱和会导致 NGINX 停止服务(而不是延后该请求),所以,“已丢弃”视作 [一个工作指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics) 比较合适。_
|
||||
|
||||
NGINX worker 进程接受 OS 的连接请求时 **Accepts** 计数器增加,而**Handled** 是当实际的请求得到连接时(通过建立一个新的连接或重新使用一个空闲的)。这两个计数器的值通常都是相同的,如果它们有差别则表明连接被**Dropped**,往往这是由于资源限制,比如已经达到 NGINX 的[worker_connections][4]的限制。
|
||||
|
||||
一旦 NGINX 成功处理一个连接时,连接会移动到**Active**状态,在这里对客户端请求进行处理:
|
||||
|
||||
处于 **Active** 状态的连接,又可以细分为以下几种子状态:
|
||||
|
||||
- **Waiting**: 活跃的连接如果此刻没有正在处理的请求,就处于 Waiting 子状态。新连接可以绕过这个状态直接进入 Reading 状态,最常见的是在使用“accept filter(接受过滤器)”和“deferred accept(延迟接受)”时,在这种情况下,worker 进程只有在收到足够的数据后才会得到该连接的通知并开始响应。如果连接设置为 keep-alive,那么它在发送完响应后将回到等待状态。
|
||||
|
||||
- **Reading**: 当接收到请求时,连接离开 Waiting 状态,并且该请求本身使 Reading 状态计数增加。在这种状态下 NGINX 会读取客户端请求首部。请求首部是比较小的,因此这通常是一个快速的操作。
|
||||
|
||||
- **Writing**: 请求被读取之后,其使 Writing 状态计数增加,并保持在该状态,直到响应返回给客户端。这意味着,该请求在 Writing 状态时, 一方面 NGINX 等待来自上游系统的结果(系统放在 NGINX “后面”),另外一方面,NGINX 也在同时响应。请求往往会在 Writing 状态花费大量的时间。
|
||||
|
||||
通常,一个连接在同一时间只处理一个请求。在这种情况下,Active 连接的数目 == Waiting 的连接 + Reading 请求 + Writing 请求。然而,较新的 SPDY 和 HTTP/2 协议允许多个并发请求/响应复用一个连接,所以 Active 可能小于 Waiting 的连接、Reading 请求、Writing 请求的总和。(在撰写本文时,NGINX 还不支持 HTTP/2,但预计 2015 年内将会支持。)
|
||||
|
||||
**NGINX Plus**
|
||||
|
||||
正如上面提到的,所有开源 NGINX 的指标在 NGINX Plus 中是可用的,但另外也提供其他的指标。本节仅说明了 NGINX Plus 可用的指标。
|
||||
|
||||
|
||||
![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png)
|
||||
|
||||
Accepted (已接受)、Dropped,总数是不断增加的计数器。Active、 Idle(空闲)和处于 Current(当前)处理阶段的各种状态下的连接或请求的当前数量随着请求量而增减。
|
||||
|
||||
| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)|
|
||||
|-----------|-----------------|-------------------------------------------------------------------------------------------------------------------------|
|
||||
| Accepted | NGINX 所接受的客户端连接数 | 资源: 使用率 |
|
||||
| Dropped |丢弃的连接数(接受 - 已处理)| 工作:错误*|
|
||||
| Active | 当前活跃的客户端连接数| 资源: 使用率 |
|
||||
| Idle | 没有当前请求的客户端连接| 资源: 使用率 |
|
||||
| Total(全部) | 客户端请求数 | 工作:吞吐量 |
|
||||
|
||||
_*严格的来说,丢弃的连接是 [一个资源饱和指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics),但是因为饱和会导致 NGINX 停止服务(而不是延后该请求),所以,“已丢弃”视作 [一个工作指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics) 比较合适。_
|
||||
|
||||
当 NGINX Plus worker 进程接受 OS 的连接请求时 **Accepted** 计数器递增。如果 worker 进程为请求建立连接失败(通过建立一个新的连接或重新使用一个空闲),则该连接被丢弃, **Dropped** 计数增加。通常连接被丢弃是因为资源限制,如 NGINX Plus 的[worker_connections][4]的限制已经达到。
|
||||
|
||||
**Active** 和 **Idle** 与[如上所述][5]的开源 NGINX 的 “active” 和 “waiting” 状态是相同的,但是有一点关键的不同:在开源 NGINX 上,“waiting” 状态包括在 “active” 中,而在 NGINX Plus 上 “idle” 的连接被排除在 “active” 计数外。**Current** 与开源 NGINX 一样,也是由 “reading + writing” 状态组成。
|
||||
|
||||
**Total** 为客户端请求的累积计数。请注意,单个客户端连接可涉及多个请求,所以这个数字可能会比连接的累计次数明显大。事实上,(total / accepted)是每个连接的平均请求数量。
|
||||
|
||||
**开源 和 Plus 之间指标的不同**
|
||||
|
||||
|NGINX (开源) |NGINX Plus|
|
||||
|-----------------------|----------------|
|
||||
| accepts | accepted |
|
||||
| dropped 通过计算得来| dropped 直接得到 |
|
||||
| reading + writing| current|
|
||||
| waiting| idle|
|
||||
| active (包括 “waiting”状态) | active (排除 “idle” 状态)|
|
||||
| requests| total|
|
||||
|
||||
**提醒指标: 丢弃连接**
|
||||
|
||||
被丢弃的连接数目等于 Accepts 和 Handled 之差(NGINX 中),或者可以直接读取现成的 Dropped 指标(NGINX Plus 中)。在正常情况下,丢弃连接数应该是零。如果在每个单位时间内丢弃连接的速度开始上升,那么应该看看是否资源已经饱和了。
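
举例来说,假设已经按照下文“收集活跃指标”一节启用了开源 NGINX 的状态页面,就可以用一条简单的命令粗略算出已丢弃的连接数(URL 和端口请按你自己的配置调整):

    $ curl -s http://127.0.0.1:8080/nginx_status | awk 'NR==3 {print "dropped:", $1 - $2}'

状态页输出第三行的前两个数字分别是 accepts 和 handled,二者之差即为丢弃的连接数。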
|
||||
|
||||
![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png)
|
||||
|
||||
**提醒指标: 每秒请求数**
|
||||
|
||||
按固定时间间隔采样你的请求数据(开源 NGINX 的**requests**或者 NGINX Plus 中**total**) 会提供给你单位时间内(通常是分钟或秒)所接受的请求数量。监测这个指标可以查看进入的 Web 流量尖峰,无论是合法的还是恶意的,或者突然的下降,这通常都代表着出现了问题。每秒请求数若发生急剧变化可以提醒你的环境出现问题了,即使它不能告诉你确切问题的位置所在。请注意,所有的请求都同样计数,无论 URL 是什么。
|
||||
|
||||
![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png)
|
||||
|
||||
**收集活跃指标**
|
||||
|
||||
开源的 NGINX 提供了一个简单状态页面来显示基本的服务器指标。该状态信息以标准格式显示,实际上任何图形或监控工具可以被配置去解析这些相关数据,以用于分析、可视化、或提醒。NGINX Plus 提供一个 JSON 接口来供给更多的数据。阅读相关文章“[NGINX 指标收集][6]”来启用指标收集的功能。
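
作为参考,下面是一个启用开源 NGINX 状态页面的最小配置示意(需要编译时包含 http_stub_status_module 模块,监听端口和路径可自行调整):

    server {
        listen 8080;
        location /nginx_status {
            stub_status;        # 1.7.5 之前的版本需写成 stub_status on;
            allow 127.0.0.1;    # 只允许本机访问
            deny all;
        }
    }

重新加载配置后,用 curl http://127.0.0.1:8080/nginx_status 就能看到 Active connections、accepts、handled、requests 等计数。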
|
||||
|
||||
#### 错误指标 ####
|
||||
|
||||
| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| 可用于 |
|
||||
|-----------|-----------------|--------------------------------------------------------------------------------------------------------|----------------|
|
||||
| 4xx 代码 | 客户端错误计数 | 工作:错误 | NGINX 日志, NGINX Plus|
|
||||
| 5xx 代码| 服务器端错误计数 | 工作:错误 | NGINX 日志, NGINX Plus|
|
||||
|
||||
NGINX 错误指标告诉你服务器是否经常返回错误而不是正常工作。客户端错误返回4XX状态码,服务器端错误返回5XX状态码。
|
||||
|
||||
**提醒指标: 服务器错误率**
|
||||
|
||||
服务器错误率等于在单位时间(通常为一到五分钟)内5xx错误状态代码的总数除以[状态码][7](1XX,2XX,3XX,4XX,5XX)的总数。如果你的错误率随着时间的推移开始攀升,调查可能的原因。如果突然增加,可能需要采取紧急行动,因为客户端可能收到错误信息。
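
如果暂时没有 NGINX Plus,也可以直接从访问日志粗略估算错误率。下面是一个示意(假设使用默认的 combined 日志格式,状态码位于第 9 个字段,日志路径请按实际情况修改):

    $ awk '{ total++; if ($9 ~ /^5/) err++ } END { if (total) printf "5xx rate: %.2f%%\n", 100*err/total }' /var/log/nginx/access.log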
|
||||
|
||||
![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png)
|
||||
|
||||
关于客户端错误的注意事项:虽然监控4XX是很有用的,但从该指标中你仅可以捕捉有限的信息,因为它只是衡量客户的行为而不捕捉任何特殊的 URL。换句话说,4xx出现的变化可能是一个信号,例如网络扫描器正在寻找你的网站漏洞时。
|
||||
|
||||
**收集错误度量**
|
||||
|
||||
虽然开源 NGINX 不能马上得到用于监测的错误率,但至少有两种方法可以得到:
|
||||
|
||||
- 使用商业支持的 NGINX Plus 提供的扩展状态模块
|
||||
- 配置 NGINX 的日志模块将响应码写入访问日志
|
||||
|
||||
关于这两种方法,请阅读相关文章“[NGINX 指标收集][6]”。
|
||||
|
||||
#### 性能指标 ####
|
||||
|
||||
| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| 可用于 |
|
||||
|-----------|-----------------|--------------------------------------------------------------------------------------------------------|----------------|
|
||||
| request time (请求处理时间)| 处理每个请求的时间,单位为秒 | 工作:性能 | NGINX 日志|
|
||||
|
||||
**提醒指标: 请求处理时间**
|
||||
|
||||
请求处理时间指标记录了 NGINX 处理每个请求的时间,从读到客户端的第一个请求字节到完成请求。较长的响应时间说明问题在上游。
|
||||
|
||||
**收集处理时间指标**
|
||||
|
||||
NGINX 和 NGINX Plus 用户可以通过添加 $request_time 变量到访问日志格式中来捕捉处理时间数据。关于配置日志监控的更多细节在[NGINX指标收集][6]。
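
下面是一个可行的配置示意(格式名 timed 是随意取的,日志路径也只是示例),把 $status 和 $request_time 一起写入访问日志,同时满足上面错误指标和本节处理时间指标的采集需要:

    http {
        log_format timed '$remote_addr - $remote_user [$time_local] '
                         '"$request" $status $body_bytes_sent '
                         '"$http_referer" "$http_user_agent" $request_time';
        access_log /var/log/nginx/access.log timed;
    }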
|
||||
|
||||
#### 反向代理指标 ####
|
||||
|
||||
| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| 可用于 |
|
||||
|-----------|-----------------|--------------------------------------------------------------------------------------------------------|----------------|
|
||||
| 上游服务器的活跃连接 | 当前活跃的客户端连接 | 资源:使用率 | NGINX Plus |
|
||||
| 上游服务器的 5xx 错误代码| 服务器错误 | 工作:错误 | NGINX Plus |
|
||||
| 每个上游组的可用服务器 | 通过健康检查的服务器数 | 资源:可用性| NGINX Plus |
|
||||
|
||||
[反向代理][9]是 NGINX 最常见的使用方法之一。商业支持的 NGINX Plus 显示了大量有关后端(或“上游 upstream”)的服务器指标,这些与反向代理设置相关的。本节重点介绍了几个 NGINX Plus 用户可用的关键上游指标。
|
||||
|
||||
NGINX Plus 首先将它的上游指标按组分开,然后是针对单个服务器的。因此,例如,你的反向代理将请求分配到五个上游的 Web 服务器上,你可以一眼看出是否有单个服务器压力过大,也可以看出上游组中服务器的健康状况,以确保良好的响应时间。
|
||||
|
||||
**活跃指标**
|
||||
|
||||
**每上游服务器的活跃连接**的数量可以帮助你确认反向代理是否正确的分配工作到你的整个服务器组上。如果你正在使用 NGINX 作为负载均衡器,任何一台服务器处理的连接数的明显偏差都可能表明服务器正在努力消化请求,或者是你配置使用的负载均衡的方法(例如[round-robin 或 IP hashing][10])不是最适合你流量模式的。
|
||||
|
||||
**错误指标**
|
||||
|
||||
错误指标,比如上面所说的 5xx(服务器错误)状态码,是监控指标中很有价值的一种,尤其是上游服务器响应码这一部分。 NGINX Plus 允许你轻松地提取**每个上游服务器的 5xx 错误代码**的数量,以及响应的总数量,以此来确定某个特定服务器的错误率。
|
||||
|
||||
**可用性指标**
|
||||
|
||||
对于 web 服务器的运行状况,还有另一种角度,NGINX 可以通过**每个组中当前可用服务器的总量**很方便监控你的上游组的健康。在一个大的反向代理上,你可能不会非常关心其中一个服务器的当前状态,就像你只要有可用的服务器组能够处理当前的负载就行了。但监视上游组内的所有工作的服务器总量可为判断 Web 服务器的健康状况提供一个更高层面的视角。
|
||||
|
||||
**收集上游指标**
|
||||
|
||||
NGINX Plus 上游指标显示在内部 NGINX Plus 的监控仪表盘上,并且也可通过一个JSON 接口来服务于各种外部监控平台。在我们的相关文章“[NGINX指标收集][6]”中有个例子。
|
||||
|
||||
### 结论 ###
|
||||
|
||||
在这篇文章中,我们已经谈到了一些可以用来监控 NGINX 服务器的有用指标。如果你是刚开始使用 NGINX,监控下面提供的大部分或全部指标,可以让你很好的了解你的网络基础设施的健康和活跃程度:
|
||||
|
||||
- [已丢弃的连接][12]
|
||||
- [每秒请求数][13]
|
||||
- [服务器错误率][14]
|
||||
- [请求处理数据][15]
|
||||
|
||||
最终,你会学到更多,更专业的衡量指标,尤其是关于你自己基础设施和使用情况的。当然,监控哪一项指标将取决于你可用的工具。参见相关的文章来[逐步指导你的指标收集][6],不管你使用 NGINX 还是 NGINX Plus。
|
||||
|
||||
在 Datadog 中,我们已经集成了 NGINX 和 NGINX Plus,这样你就可以以最少的设置来收集和监控所有 Web 服务器的指标。 [在本文中][17]了解如何用 NGINX Datadog来监控,并开始[免费试用 Datadog][18]吧。
|
||||
|
||||
### 诚谢 ###
|
||||
|
||||
在文章发表之前,非常感谢 NGINX 团队审阅了这篇文章,并提供了重要的反馈和说明。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.datadoghq.com/blog/how-to-monitor-nginx/
|
||||
|
||||
作者:K Young
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://nginx.org/en/
|
||||
[2]:http://nginx.com/resources/glossary/reverse-proxy-server/
|
||||
[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/
|
||||
[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections
|
||||
[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state
|
||||
[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
|
||||
[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
[9]:https://en.wikipedia.org/wiki/Reverse_proxy
|
||||
[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/
|
||||
[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections
|
||||
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second
|
||||
[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate
|
||||
[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time
|
||||
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
|
||||
[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up
|
||||
[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md
|
||||
[20]:https://github.com/DataDog/the-monitor/issues
|
@ -1,11 +1,13 @@
|
||||
如何在 Fedora 22 上配置 Proftpd 服务器
|
||||
================================================================================
|
||||
在本文中,我们将了解如何在运行 Fedora 22 的电脑或服务器上使用 Proftpd 架设 FTP 服务器。[ProFTPD][1] 是一款免费的基于 GPL 授权开源的 FTP 服务器软件,是 Linux 上的主流 FTP 服务器。它的主要设计目标是具备许多高级功能以及能为用户提供丰富的配置选项可以轻松实现定制。它的许多配置选项在其他一些 FTP 服务器软件里仍然没有集成。最初它是被开发作为 wu-ftpd 服务器的一个更安全更容易配置的替代。FTP 服务器是这样一个软件,用户可以通过 FTP 客户端从安装了它的远端服务器上传或下载文件和目录。下面是一些 ProFTPD 服务器的主要功能,更详细的资料可以访问 [http://www.proftpd.org/features.html][2]。
|
||||
在本文中,我们将了解如何在运行 Fedora 22 的电脑或服务器上使用 Proftpd 架设 FTP 服务器。[ProFTPD][1] 是一款基于 GPL 授权的自由开源 FTP 服务器软件,是 Linux 上的主流 FTP 服务器。它的主要设计目标是提供许多高级功能以及给用户提供丰富的配置选项以轻松实现定制。它具备许多在其他一些 FTP 服务器软件里仍然没有的配置选项。最初它是被开发作为 wu-ftpd 服务器的一个更安全更容易配置的替代。
|
||||
|
||||
- 每个目录都包含 ".ftpaccess" 文件用于访问控制,类似 Apache 的 ".htaccess"
|
||||
FTP 服务器是这样一个软件,用户可以通过 FTP 客户端从安装了它的远端服务器上传或下载文件和目录。下面是一些 ProFTPD 服务器的主要功能,更详细的资料可以访问 [http://www.proftpd.org/features.html][2]。
|
||||
|
||||
- 每个目录都可以包含 ".ftpaccess" 文件用于访问控制,类似 Apache 的 ".htaccess"
|
||||
- 支持多个虚拟 FTP 服务器以及多用户登录和匿名 FTP 服务。
|
||||
- 可以作为独立进程启动服务或者通过 inetd/xinetd 启动
|
||||
- 它的文件/目录属性、属主和权限采用类 UNIX 方式。
|
||||
- 它的文件/目录属性、属主和权限是基于 UNIX 方式的。
|
||||
- 它可以独立运行,保护系统避免 root 访问可能带来的损坏。
|
||||
- 模块化的设计让它可以轻松扩展其他模块,比如 LDAP 服务器,SSL/TLS 加密,RADIUS 支持,等等。
|
||||
- ProFTPD 服务器还支持 IPv6.
|
||||
@ -38,7 +40,7 @@
|
||||
|
||||
### 3. 添加 FTP 用户 ###
|
||||
|
||||
在设定好了基本的配置文件后,我们很自然地希望为指定目录添加 FTP 用户。目前用来登录的用户是 FTP 服务自动生成的,可以用来登录到 FTP 服务器。但是,在这篇教程里,我们将创建一个以 ftp 服务器上指定目录为主目录的新用户。
|
||||
在设定好了基本的配置文件后,我们很自然地希望添加一个以特定目录为根目录的 FTP 用户。目前登录的用户自动就可以使用 FTP 服务,可以用来登录到 FTP 服务器。但是,在这篇教程里,我们将创建一个以 ftp 服务器上指定目录为主目录的新用户。
|
||||
|
||||
下面,我们将建立一个名字是 ftpgroup 的新用户组。
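
大致的命令示意如下(用户名 arunftp 与本文后面登录时使用的一致,主目录 /ftp-dir 只是示例,请换成你自己的目录):

    $ sudo groupadd ftpgroup
    $ sudo useradd -G ftpgroup -m -d /ftp-dir arunftp
    $ sudo passwd arunftp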
|
||||
|
||||
@ -57,7 +59,7 @@
|
||||
Retype new password:
|
||||
passwd: all authentication tokens updated successfully.
|
||||
|
||||
现在,我们将通过下面命令为这个 ftp 用户设定主目录的读写权限。
|
||||
现在,我们将通过下面命令为这个 ftp 用户设定主目录的读写权限(LCTT 译注:这是SELinux 相关设置,如果未启用 SELinux,可以不用)。
|
||||
|
||||
$ sudo setsebool -P allow_ftpd_full_access=1
|
||||
$ sudo setsebool -P ftp_home_dir=1
|
||||
@ -129,7 +131,7 @@
|
||||
|
||||
如果 **打开了 TLS/SSL 加密**,执行下面的命令。
|
||||
|
||||
$sudo firewall-cmd --add-port=1024-65534/tcp
|
||||
$ sudo firewall-cmd --add-port=1024-65534/tcp
|
||||
$ sudo firewall-cmd --add-port=1024-65534/tcp --permanent
|
||||
|
||||
如果 **没有打开 TLS/SSL 加密**,执行下面的命令。
|
||||
@ -158,7 +160,7 @@
|
||||
|
||||
### 7. 登录到 FTP 服务器 ###
|
||||
|
||||
现在,如果都是按照本教程设置好的,我们一定可以连接到 ftp 服务器并使用以上设置的信息登录上去。在这里,我们将配置一下 FTP 客户端 filezilla,使用 **服务器的 IP 或 URL **作为主机名,协议选择 **FTP**,用户名填入 **arunftp**,密码是在上面第 3 步中设定的密码。如果你按照第 4 步中的方式打开了 TLS 支持,还需要在加密类型中选择 **显式要求基于 TLS 的 FTP**,如果没有打开,也不想使用 TLS 加密,那么加密类型选择 **简单 FTP**。
|
||||
现在,如果都是按照本教程设置好的,我们一定可以连接到 ftp 服务器并使用以上设置的信息登录上去。在这里,我们将配置一下 FTP 客户端 filezilla,使用**服务器的 IP 或名称**作为主机名,协议选择 **FTP**,用户名填入 **arunftp**,密码是在上面第 3 步中设定的密码。如果你按照第 4 步中的方式打开了 TLS 支持,还需要在加密类型中选择 **要求显式的基于 TLS 的 FTP**,如果没有打开,也不想使用 TLS 加密,那么加密类型选择 **简单 FTP**。
|
||||
|
||||
![FTP 登录细节](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-login-details.png)
|
||||
|
||||
@ -170,7 +172,7 @@
|
||||
|
||||
### 总结 ###
|
||||
|
||||
最后,我们成功地在 Fedora 22 机器上安装并配置好了 Proftpd FTP 服务器。Proftpd 是一个超级强大,能高度配置和扩展的 FTP 守护软件。上面的教程展示了如何配置一个采用 TLS 加密的安全 FTP 服务器。强烈建议设置 FTP 服务器支持 TLS 加密,因为它允许使用 SSL 凭证加密数据传输和登录。本文中,我们也没有配置 FTP 的匿名访问,因为一般受保护的 FTP 系统不建议这样做。 FTP 访问让人们的上传和下载变得非常简单也更高效。我们还可以改变用户端口增加安全性。好吧,如果你有任何疑问,建议,反馈,请在下面评论区留言,这样我们就能够改善并更新文章内容。谢谢!玩的开心 :-)
|
||||
最后,我们成功地在 Fedora 22 机器上安装并配置好了 Proftpd FTP 服务器。Proftpd 是一个超级强大,能高度定制和扩展的 FTP 守护软件。上面的教程展示了如何配置一个采用 TLS 加密的安全 FTP 服务器。强烈建议设置 FTP 服务器支持 TLS 加密,因为它允许使用 SSL 凭证加密数据传输和登录。本文中,我们也没有配置 FTP 的匿名访问,因为一般受保护的 FTP 系统不建议这样做。 FTP 访问让人们的上传和下载变得非常简单也更高效。我们还可以改变用户端口增加安全性。好吧,如果你有任何疑问,建议,反馈,请在下面评论区留言,这样我们就能够改善并更新文章内容。谢谢!玩的开心 :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -178,7 +180,7 @@ via: http://linoxide.com/linux-how-to/configure-ftp-proftpd-fedora-22/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[zpl1025](https://github.com/zpl1025)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -6,17 +6,17 @@
|
||||
|
||||
每当你开机进入一个操作系统,一系列的应用将会自动启动。这些应用被称为‘开机启动应用’ 或‘开机启动程序’。随着时间的推移,当你在系统中安装了足够多的应用时,你将发现有太多的‘开机启动应用’在开机时自动地启动了,它们吃掉了很多的系统资源,并将你的系统拖慢。这可能会让你感觉卡顿,我想这种情况并不是你想要的。
|
||||
|
||||
让 Ubuntu 变得更快的方法之一是对这些开机启动应用进行控制。 Ubuntu 为你提供了一个 GUI 工具来让你发现这些开机启动应用,然后完全禁止或延迟它们的启动,这样就可以不让每个应用在开机时同时运行。
|
||||
让 Ubuntu 变得更快的方法之一是对这些开机启动应用进行控制。 Ubuntu 为你提供了一个 GUI 工具来让你找到这些开机启动应用,然后完全禁止或延迟它们的启动,这样就可以不让每个应用在开机时同时运行。
|
||||
|
||||
在这篇文章中,我们将看到 **在 Ubuntu 中,如何控制开机启动应用,如何让一个应用在开机时启动以及如何发现隐藏的开机启动应用。**这里提供的指导对所有的 Ubuntu 版本均适用,例如 Ubuntu 12.04, Ubuntu 14.04 和 Ubuntu 15.04。
|
||||
|
||||
### 在 Ubuntu 中管理开机启动应用 ###
|
||||
|
||||
默认情况下, Ubuntu 提供了一个`开机启动应用工具`来供你使用,你不必再进行安装。只需到 Unity 面板中就可以查找到该工具。
|
||||
默认情况下, Ubuntu 提供了一个`Startup Applications`工具来供你使用,你不必再进行安装。只需到 Unity 面板中就可以查找到该工具。
|
||||
|
||||
![ubuntu 中的开机启动应用工具](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/startup_applications_Ubuntu.jpeg)
|
||||
|
||||
点击它来启动。下面是我的`开机启动应用`的样子:
|
||||
点击它来启动。下面是我的`Startup Applications`的样子:
|
||||
|
||||
![在 Ubuntu 中查看开机启动程序](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Screenshot-from-2015-07-18-122550.png)
|
||||
|
||||
@ -84,7 +84,7 @@
|
||||
|
||||
就这样,你将在下一次开机时看到这个程序会自动运行。这就是在 Ubuntu 中你能做的关于开机启动应用的所有事情。
|
||||
|
||||
到现在为止,我们已经讨论在开机时可见的应用,但仍有更多的服务,守护进程和程序并不在`开机启动应用工具`中可见。下一节中,我们将看到如何在 Ubuntu 中查看这些隐藏的开机启动程序。
|
||||
到现在为止,我们已经讨论在开机时可见到的应用,但仍有更多的服务,守护进程和程序并不在`开机启动应用工具`中可见。下一节中,我们将看到如何在 Ubuntu 中查看这些隐藏的开机启动程序。
|
||||
|
||||
### 在 Ubuntu 中查看隐藏的开机启动程序 ###
|
||||
|
||||
@ -97,13 +97,14 @@
|
||||
![在 Ubuntu 中查看隐藏的开机启动程序](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Hidden_startup_program_Ubuntu.jpg)
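
实现上述效果的一种常见做法,是把系统自启动目录下各 .desktop 文件中的 NoDisplay 标记改为 false(示例命令仅供参考,它会影响所有用户看到的列表,把 false 改回 true 即可恢复):

    sudo sed -i 's/NoDisplay=true/NoDisplay=false/g' /etc/xdg/autostart/*.desktop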
|
||||
|
||||
你可以像先前我们讨论的那样管理这些开机启动应用。我希望这篇教程可以帮助你在 Ubuntu 中控制开机启动程序。任何的问题或建议总是欢迎的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/manage-startup-applications-ubuntu/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,32 +1,32 @@
|
||||
无忧之道:Docker中容器的备份、恢复和迁移
|
||||
================================================================================
|
||||
今天,我们将学习如何快速地对docker容器进行快捷备份、恢复和迁移。[Docker][1]是一个开源平台,用于自动化部署应用,以通过快捷的途径在称之为容器的轻量级软件层下打包、发布和运行这些应用。它使得应用平台独立,因为它扮演了Linux上一个额外的操作系统级虚拟化的自动化抽象层。它通过其组件cgroups和命名空间利用Linux内核的资源分离特性,达到避免虚拟机开销的目的。它使得用于部署和扩展web应用、数据库和后端服务的大规模构建块无需依赖于特定的堆栈或供应者。
|
||||
今天,我们将学习如何快速地对docker容器进行快捷备份、恢复和迁移。[Docker][1]是一个开源平台,用于自动化部署应用,以通过快捷的途径在称之为容器的轻量级软件层下打包、发布和运行这些应用。它使得应用平台独立,因为它扮演了Linux上一个额外的操作系统级虚拟化的自动化抽象层。它通过其组件cgroups和命名空间利用Linux内核的资源分离特性,达到避免虚拟机开销的目的。它使得用于部署和扩展web应用、数据库和后端服务的大规模构建组件无需依赖于特定的堆栈或供应者。
|
||||
|
||||
所谓的容器,就是那些创建自Docker镜像的软件层,它包含了独立的Linux文件系统和开箱即用的应用程序。如果我们有一个在盒子中运行着的Docker容器,并且想要备份这些容器以便今后使用,或者想要迁移这些容器,那么,本教程将帮助你掌握在Linux操作系统中备份、恢复和迁移Docker容器。
|
||||
所谓的容器,就是那些创建自Docker镜像的软件层,它包含了独立的Linux文件系统和开箱即用的应用程序。如果我们有一个在机器中运行着的Docker容器,并且想要备份这些容器以便今后使用,或者想要迁移这些容器,那么,本教程将帮助你掌握在Linux操作系统中备份、恢复和迁移Docker容器的方法。
|
||||
|
||||
我们怎样才能在Linux中备份、恢复和迁移Docker容器呢?这里为您提供了一些便捷的步骤。
|
||||
|
||||
### 1. 备份容器 ###
|
||||
|
||||
首先,为了备份Docker中的容器,我们会想看看我们想要备份的容器列表。要达成该目的,我们需要在我们运行这Docker引擎,并已创建了容器的Linux机器中运行 docker ps 命令。
|
||||
首先,为了备份Docker中的容器,我们会想看看我们想要备份的容器列表。要达成该目的,我们需要在我们运行着Docker引擎,并已创建了容器的Linux机器中运行 docker ps 命令。
|
||||
|
||||
# docker ps
|
||||
|
||||
![Docker Containers List](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-containers-list.png)
|
||||
|
||||
在此之后,我们要选择我们想要备份的容器,然后我们会去创建该容器的快照。我们可以使用 docker commit 命令来创建快照。
|
||||
在此之后,我们要选择我们想要备份的容器,然后去创建该容器的快照。我们可以使用 docker commit 命令来创建快照。
|
||||
|
||||
# docker commit -p 30b8f18f20b4 container-backup
|
||||
|
||||
![Docker Commit](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-commit.png)
|
||||
|
||||
该命令会生成一个作为Docker镜像的容器快照,我们可以通过运行 docker images 命令来查看Docker镜像,如下。
|
||||
该命令会生成一个作为Docker镜像的容器快照,我们可以通过运行 `docker images` 命令来查看Docker镜像,如下。
|
||||
|
||||
# docker images
|
||||
|
||||
![Docker Images](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-images.png)
|
||||
|
||||
正如我们所看见的,上面做的快照已经作为Docker镜像保存了。现在,为了备份该快照,我们有两个选择,一个是我们可以登陆进Docker注册中心,并推送该镜像;另一个是我们可以将Docker镜像打包成tarball备份,以供今后使用。
|
||||
正如我们所看见的,上面做的快照已经作为Docker镜像保存了。现在,为了备份该快照,我们有两个选择,一个是我们可以登录进Docker注册中心,并推送该镜像;另一个是我们可以将Docker镜像打包成tar包备份,以供今后使用。
|
||||
|
||||
如果我们想要在[Docker注册中心][2]上传或备份镜像,我们只需要运行 docker login 命令来登录进Docker注册中心,然后推送所需的镜像即可。
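
大致的操作示意如下(镜像仓库名使用了本文后面拉取时用到的 arunpyasi/container-backup:test,请换成你自己的 Docker Hub 账号和仓库名):

    # docker login
    # docker tag container-backup arunpyasi/container-backup:test
    # docker push arunpyasi/container-backup:test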
|
||||
|
||||
@ -39,23 +39,23 @@
|
||||
|
||||
![Docker Push](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-push.png)
|
||||
|
||||
如果我们不想备份到docker注册中心,而是想要将此镜像保存在本地机器中,以供日后使用,那么我们可以将其作为tarball备份。要完成该操作,我们需要运行以下 docker save 命令。
|
||||
如果我们不想备份到docker注册中心,而是想要将此镜像保存在本地机器中,以供日后使用,那么我们可以将其作为tar包备份。要完成该操作,我们需要运行以下 `docker save` 命令。
|
||||
|
||||
# docker save -o ~/container-backup.tar container-backup
|
||||
|
||||
![taking tarball backup](http://blog.linoxide.com/wp-content/uploads/2015/07/taking-tarball-backup.png)
|
||||
|
||||
要验证tarball时候已经生成,我们只需要在保存tarball的目录中运行 ls 命令。
|
||||
要验证tar包是否已经生成,我们只需要在保存tar包的目录中运行 ls 命令即可。
|
||||
|
||||
### 2. 恢复容器 ###
|
||||
|
||||
接下来,在我们成功备份了我们的Docker容器后,我们现在来恢复这些被快照成Docker镜像的容器。如果我们已经在注册中心推送了这些Docker镜像,那么我们仅仅需要把那个Docker镜像拖回并直接运行即可。
|
||||
接下来,在我们成功备份了我们的Docker容器后,我们现在来恢复这些制作了Docker镜像快照的容器。如果我们已经在注册中心推送了这些Docker镜像,那么我们仅仅需要把那个Docker镜像拖回并直接运行即可。
|
||||
|
||||
# docker pull arunpyasi/container-backup:test
|
||||
|
||||
![Docker Pull](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-pull.png)
|
||||
|
||||
但是,如果我们将这些Docker镜像作为tarball文件备份到了本地,那么我们只要使用 docker load 命令,后面加上tarball的备份路径,就可以加载该Docker镜像了。
|
||||
但是,如果我们将这些Docker镜像作为tar包文件备份到了本地,那么我们只要使用 docker load 命令,后面加上tar包的备份路径,就可以加载该Docker镜像了。
|
||||
|
||||
# docker load -i ~/container-backup.tar
|
||||
|
||||
@ -63,7 +63,7 @@
|
||||
|
||||
# docker images
|
||||
|
||||
在镜像被加载后,我们将从加载的镜像去运行Docker容器。
|
||||
在镜像被加载后,我们将用加载的镜像去运行Docker容器。
|
||||
|
||||
# docker run -d -p 80:80 container-backup
|
||||
|
||||
@ -71,11 +71,11 @@
|
||||
|
||||
### 3. 迁移Docker容器 ###
|
||||
|
||||
迁移容器同时涉及到了上面两个操作,备份和恢复。我们可以将任何一个Docker容器从一台机器迁移到另一台机器。在迁移过程中,首先我们将容器的备份作为快照Docker镜像。然后,该Docker镜像或者是被推送到了Docker注册中心,或者被作为tarball文件保存到了本地。如果我们将镜像推送到了Docker注册中心,我们简单地从任何我们想要的机器上使用 docker run 命令来恢复并运行该容器。但是,如果我们将镜像打包成tarball备份到了本地,我们只需要拷贝或移动该镜像到我们想要的机器上,加载该镜像并运行需要的容器即可。
|
||||
迁移容器同时涉及到了上面两个操作,备份和恢复。我们可以将任何一个Docker容器从一台机器迁移到另一台机器。在迁移过程中,首先我们将把容器备份为Docker镜像快照。然后,该Docker镜像或者是被推送到了Docker注册中心,或者被作为tar包文件保存到了本地。如果我们将镜像推送到了Docker注册中心,我们简单地从任何我们想要的机器上使用 docker run 命令来恢复并运行该容器。但是,如果我们将镜像打包成tar包备份到了本地,我们只需要拷贝或移动该镜像到我们想要的机器上,加载该镜像并运行需要的容器即可。
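
把前两节的命令串起来,就是一次典型的迁移流程。下面是一个简要示意(user@remote-host 是假设的远程主机,请替换为实际地址):

    # docker save -o ~/container-backup.tar container-backup
    # scp ~/container-backup.tar user@remote-host:~/
    # ssh user@remote-host "docker load -i ~/container-backup.tar && docker run -d -p 80:80 container-backup"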
|
||||
|
||||
### 尾声 ###
|
||||
|
||||
最后,我们已经学习了如何快速地备份、恢复和迁移Docker容器,本教程适用于各个成功运行Docker的操作系统平台。真的,Docker是一个相当简单易用,然而功能却十分强大的工具。它的命令相当易记,这些命令都非常短,带有许多简单而强大的标记和参数。上面的方法让我们备份容器时很是安逸,使得我们可以在日后很轻松地恢复它们。这会帮助我们恢复我们的容器和镜像,即便主机系统崩溃,甚至意外地被清除。如果你还有很多问题、建议、反馈,请在下面的评论框中写出来吧,可以帮助我们改进或更新我们的内容。谢谢大家!享受吧 :-)
|
||||
最后,我们已经学习了如何快速地备份、恢复和迁移Docker容器,本教程适用于各个可以成功运行Docker的操作系统平台。真的,Docker是一个相当简单易用,然而功能却十分强大的工具。它的命令相当易记,这些命令都非常短,带有许多简单而强大的标记和参数。上面的方法让我们备份容器时很是安逸,使得我们可以在日后很轻松地恢复它们。这会帮助我们恢复我们的容器和镜像,即便主机系统崩溃,甚至意外地被清除。如果你还有很多问题、建议、反馈,请在下面的评论框中写出来吧,可以帮助我们改进或更新我们的内容。谢谢大家!享受吧 :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -83,7 +83,7 @@ via: http://linoxide.com/linux-how-to/backup-restore-migrate-containers-docker/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,71 @@
|
||||
什么是逻辑卷管理 LVM,如何在 Ubuntu 中使用?
|
||||
================================================================================
|
||||
|
||||
> 逻辑卷管理(LVM)是每一个主流 Linux 发行版都含有的磁盘管理选项。无论你是需要设置存储池,还是只想动态创建分区,LVM 都是你正在寻找的。
|
||||
|
||||
### 什么是 LVM? ###
|
||||
|
||||
逻辑卷管理(LVM)是位于磁盘/分区和操作系统之间的一个抽象层。在传统的磁盘管理中,你的操作系统会寻找有哪些磁盘可用(/dev/sda、/dev/sdb 等等),以及这些磁盘上有哪些可用的分区(如 /dev/sda1、/dev/sda2 等等)。
|
||||
|
||||
在LVM下,磁盘和分区可以抽象成一个含有多个磁盘和分区的设备。你的操作系统将不会知道这些区别,因为LVM只会给操作系统展示你设置的卷组(磁盘)和逻辑卷(分区)
|
||||
|
||||
因为卷组和逻辑卷并不物理地对应到硬盘,因此可以很容易地动态调整和创建新的“磁盘”和“分区”。除此之外,LVM 带来了你的文件系统所不具备的功能。比如,ext3 不支持实时快照,但是如果你正在使用 LVM,你可以在不卸载磁盘的情况下做一个逻辑卷的快照。
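
举个例子,下面是用 LVM 为逻辑卷做实时快照的大致命令(卷组名 ubuntu 和逻辑卷名 root 只是假设,请用 sudo vgs、sudo lvs 查看你系统上的实际名称):

    $ sudo lvcreate --size 1G --snapshot --name root-snap /dev/ubuntu/root
    $ sudo lvs                               # 查看快照的使用情况
    $ sudo lvremove /dev/ubuntu/root-snap    # 用完后删除快照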
|
||||
|
||||
### 你什么时候该使用LVM? ###
|
||||
|
||||
在使用LVM之前首先得考虑的一件事是你要用你的磁盘和分区来做什么。注意,一些发行版如Fedora已经默认安装了LVM。
|
||||
|
||||
如果你使用的是一台只有一块磁盘的Ubuntu笔记本电脑,并且你不需要像实时快照这样的扩展功能,那么你或许不需要LVM。如果你想要轻松地扩展或者想要将多块磁盘组成一个存储池,那么LVM或许正是你所寻找的。
|
||||
|
||||
### 在Ubuntu中设置LVM ###
|
||||
|
||||
使用LVM首先要了解的一件事是,没有一个简单的方法可以将已有的传统分区转换成逻辑卷。可以将数据移到一个使用LVM的新分区下,但是这并不会在本篇中提到;在这里,我们将全新安装一台Ubuntu 10.10来设置LVM。(LCTT 译注:本文针对的是较老的版本,新的版本已经不需如此麻烦了)
|
||||
|
||||
要使用LVM安装Ubuntu你需要使用另外的安装CD。从下面的链接中下载并烧录到CD中或者[使用unetbootin创建一个USB盘][1]。
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/download-web.png)
|
||||
|
||||
从安装盘启动你的电脑,并在磁盘选择界面选择整个磁盘并设置LVM。
|
||||
|
||||
*注意:这会格式化你的整个磁盘,因此如果正在尝试双启动或者其他的安装选择,选择手动。*
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-1.png)
|
||||
|
||||
选择你想用的主磁盘,最典型的是使用你最大的磁盘,接着进入下一步。
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-2.png)
|
||||
|
||||
马上就要将改动写入磁盘了,所以请确保此时选择的是正确的磁盘,然后再确认写入设置。
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-3.png)
|
||||
|
||||
选择第一个逻辑卷的大小并继续。
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-4.png)
|
||||
|
||||
确认你的磁盘分区并继续安装。
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-5.png)
|
||||
|
||||
最后一步是将 GRUB 引导程序写到磁盘中。重点注意的是,引导分区不能放在 LVM 逻辑卷上,因为计算机的 BIOS 不能直接从逻辑卷中读取数据,所以 Ubuntu 将自动创建一个 255MB 的 ext2 分区用于引导程序(bootloader)。
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-6.png)
|
||||
|
||||
安装完成之后,重启电脑并如往常一样进入 Ubuntu。使用这种方式安装之后,应该就感受不到 LVM 和传统磁盘管理之间的区别了。
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/disk-manager.png)
|
||||
|
||||
要使用LVM的全部功能,静待我们的下篇关于管理LVM的文章。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and-how-do-you-enable-it-in-ubuntu/
|
||||
|
||||
作者:[How-To Geek][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/+howtogeek?prsrc=5
|
||||
[1]:http://www.howtogeek.com/howto/13379/create-a-bootable-ubuntu-9.10-usb-flash-drive/
|
48
published/20150730 Compare PDF Files on Ubuntu.md
Normal file
@ -0,0 +1,48 @@
|
||||
如何在 Ubuntu 上比较 PDF 文件
|
||||
================================================================================
|
||||
|
||||
如果你想要对PDF文件进行比较,你可以使用下面工具之一。
|
||||
|
||||
### Comparepdf ###
|
||||
|
||||
comparepdf是一个命令行应用,用于将两个PDF文件进行对比。默认对比模式是文本模式,该模式会对各对相关页面进行文字对比。只要一检测到差异,该程序就会终止,并显示一条信息(除非设置了-v0)和一个指示性的返回码。
|
||||
|
||||
用于文本模式对比的选项有 -ct 或 --compare=text(默认),用于视觉对比(这在图表或其它图像发生改变时很有用)的选项有 -ca 或 --compare=appearance。而 -v=1 或 --verbose=1 选项则用于报告差异(或者对匹配的文件不作任何回应);使用 -v=0 选项取消报告,或者使用 -v=2 来同时报告不同的和匹配的文件。
|
||||
|
||||
#### 安装comparepdf到Ubuntu ####
|
||||
|
||||
打开终端,然后运行以下命令
|
||||
|
||||
sudo apt-get install comparepdf
|
||||
|
||||
**Comparepdf 语法**
|
||||
|
||||
comparepdf [OPTIONS] file1.pdf file2.pdf
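
几个具体的用法示例(文件名仅为示意):

    comparepdf file1.pdf file2.pdf          # 默认按文本内容对比
    comparepdf -ca file1.pdf file2.pdf      # 按外观(图像)对比
    comparepdf -v=2 file1.pdf file2.pdf     # 同时报告相同和不同之处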
|
||||
|
||||
### Diffpdf ###
|
||||
|
||||
DiffPDF是一个图形化应用程序,用于对两个PDF文件进行对比。默认情况下,它只会对比两个相关页面的文字,但是也支持对图形化页面进行对比(例如,如果图表被修改过,或者段落被重新格式化过)。它也可以对特定的页面或者页面范围进行对比。例如,如果同一个PDF文件有两个版本,其中一个有页面1-12,而另一个则有页面1-13,因为这里添加了一个额外的页面4,它们可以通过指定两个页面范围来进行对比,第一个是1-12,而1-3,5-13则可以作为第二个页面范围。这将使得DiffPDF成对地对比这些页面(1,1),(2,2),(3,3),(4,5),(5,6),以此类推,直到(12,13)。
|
||||
|
||||
#### 安装 diffpdf 到 ubuntu ####
|
||||
|
||||
打开终端,然后运行以下命令
|
||||
|
||||
sudo apt-get install diffpdf
|
||||
|
||||
#### 截图 ####
|
||||
|
||||
![](http://www.ubuntugeek.com/wp-content/uploads/2015/07/14.png)
|
||||
|
||||
![](http://www.ubuntugeek.com/wp-content/uploads/2015/07/23.png)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.ubuntugeek.com/compare-pdf-files-on-ubuntu.html
|
||||
|
||||
作者:[ruchi][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.ubuntugeek.com/author/ubuntufix
|
@ -0,0 +1,75 @@
|
||||
选择成为软件开发工程师的5个原因
|
||||
================================================================================
|
||||
|
||||
![](http://henrikwarne1.files.wordpress.com/2011/09/cropped-desk1.jpg)
|
||||
|
||||
这个星期我将给本地一所高中做一次有关于程序猿是怎样工作的演讲。我是志愿(由 [Transfer][1] 组织的)来到这所学校谈论我的工作的。这个学校本周将有一个技术主题日,并且他们很想听听科技行业是怎样工作的。因为我是从事软件开发的,这也是我将和学生们讲的内容。演讲的其中一部分是我为什么觉得软件开发是一个很酷的职业。主要原因如下:
|
||||
|
||||
### 5个原因 ###
|
||||
|
||||
**1、创造性**
|
||||
|
||||
如果你问别人创造性的工作有哪些,别人通常会说像作家,音乐家或者画家那样的(工作)。但是极少有人知道软件开发也是一项非常具有创造性的工作。它是最符合创造性定义的了,因为你创造了一个以前没有的新功能。这种解决方案可以在整体和细节上以很多形式来展现。我们经常会遇到一些需要做权衡的场景(比如说运行速度与内存消耗的权衡)。当然前提是这种解决方案必须是正确的。这些所有的行为都是需要强大的创造性的。
|
||||
|
||||
**2、协作性**
|
||||
|
||||
另外一个表象是程序猿们独自坐在他们的电脑前,然后撸一天的代码。但是软件开发事实上通常总是一个团队努力的结果。你会经常和你的同事讨论编程问题以及解决方案,并且和产品经理、测试人员、客户讨论需求以及其他问题。
|
||||
经常有人说,结对编程(2个开发人员一起在一个电脑上编程)是一种流行的最佳实践。
|
||||
|
||||
**3、高需性**
|
||||
|
||||
世界上越来越多的人在用软件,正如 [Marc Andreessen](https://en.wikipedia.org/wiki/Marc_Andreessen) 所说 " [软件正在吞噬世界][2] "。虽然程序猿现在的数量非常巨大(在斯德哥尔摩,程序猿现在是 [最普遍的职业][3] ),但是,需求量一直处于供不应求的局面。据软件公司说,他们最大的挑战之一就是 [找到优秀的程序猿][4] 。我也经常接到那些想让我跳槽的招聘人员打来的电话。我知道至少除软件行业之外的其他行业的雇主不会那么拼(的去招聘)。
|
||||
|
||||
**4、高酬性**
|
||||
|
||||
软件开发可以带来不菲的收入。卖一份你已经开发好的软件的额外副本是没有 [边际成本][5] 的。这个事实与对程序猿的高需求意味着收入相当可观。当然还有许多更捞金的职业,但是相比一般人群,我认为软件开发者确实“日进斗金”(知足吧!骚年~~)。
|
||||
|
||||
**5、前瞻性**
|
||||
|
||||
有许多工作岗位消失,往往是由于它们可以被计算机和软件代替。但是所有这些新的程序依然需要开发和维护,因此,程序猿的前景还是相当好的。
|
||||
|
||||
### 但是...###
|
||||
|
||||
**外包又是怎么一回事呢?**
|
||||
|
||||
难道所有外包到其他国家的软件开发的薪水都很低吗?这是一个理想丰满,现实骨感的例子(有点像 [瀑布开发模型][6] )。软件开发基本上跟设计的工作一样,是一个探索发现的工作。它受益于强有力的合作。更进一步说,特别当你的主打产品是软件的时候,你所掌握的开发知识是绝对的优势。知识在整个公司中分享的越容易,那么公司的发展也将越来越好。
|
||||
|
||||
换一种方式去看待这个问题。软件外包已经存在了相当一段时间了。但是对本土程序猿的需求量依旧非常高。因为许多软件公司看到了雇佣本土程序猿的带来的收益要远远超过了相对较高的成本(其实还是赚了)。
|
||||
|
||||
### 如何成为人生大赢家 ###
|
||||
|
||||
虽然我有许多我认为软件开发是一件非常有趣的事情的理由 (详情见: [为什么我热爱编程][7] )。但是这些理由,并不适用于所有人。幸运的是,尝试编程是一件非常容易的事情。在互联网上有数不尽的学习编程的资源。例如,[Coursera][8] 和 [Udacity][9] 都拥有很好的入门课程。如果你从来没有撸过码,可以尝试其中一个免费的课程,找找感觉。
|
||||
|
||||
寻找一个既热爱又能谋生的事情至少有2个好处。首先,由于你天天去做,工作将比你简单的只为谋生要有趣的多。其次,如果你真的非常喜欢,你将更好的擅长它。我非常喜欢下面一副关于伟大工作组成的韦恩图(作者 [@eskimon)][10]) 。因为编码的薪水确实相当不错,我认为如果你真的喜欢它,你将有一个很好的机会,成为人生的大赢家!
|
||||
|
||||
![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/17571624.jpg)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://henrikwarne.com/2014/12/08/5-reasons-why-software-developer-is-a-great-career-choice/
|
||||
|
||||
作者:[Henrik Warne][a]
|
||||
译者:[mousycoder](https://github.com/mousycoder)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
|
||||
[a]:http://henrikwarne.com/
|
||||
[1]:http://www.transfer.nu/omoss/transferinenglish.jspx?pageId=23
|
||||
[2]:http://www.wsj.com/articles/SB10001424053111903480904576512250915629460
|
||||
[3]:http://www.di.se/artiklar/2014/6/12/jobbet-som-tar-over-landet/
|
||||
[4]:http://computersweden.idg.se/2.2683/1.600324/examinationstakten-racker-inte-for-branschens-behov
|
||||
[5]:https://en.wikipedia.org/wiki/Marginal_cost
|
||||
[6]:https://en.wikipedia.org/wiki/Waterfall_model
|
||||
[7]:http://henrikwarne.com/2012/06/02/why-i-love-coding/
|
||||
[8]:https://www.coursera.org/
|
||||
[9]:https://www.udacity.com/
|
||||
[10]:https://eskimon.wordpress.com/about/
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,88 +0,0 @@
|
||||
FSSlc Translating
|
||||
|
||||
7 communities driving open source development
|
||||
================================================================================
|
||||
Not so long ago, the open source model was the rebellious kid on the block, viewed with suspicion by established industry players. Today, open initiatives and foundations are flourishing with long lists of vendor committers who see the model as a key to innovation.
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/0_opensource-title-100539095-orig.jpg)
|
||||
|
||||
### Open Development of Tech Drives Innovation ###
|
||||
|
||||
Over the past two decades, open development of technology has come to be seen as a key to driving innovation. Even companies that once saw open source as a threat have come around — Microsoft, for example, is now active in a number of open source initiatives. To date, most open development has focused on software. But even that is changing as communities have begun to coalesce around open hardware initiatives. Here are seven organizations that are successfully promoting and developing open technologies, both hardware and software.
|
||||
|
||||
### OpenPOWER Foundation ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/1_openpower-100539100-orig.jpg)
|
||||
|
||||
The [OpenPOWER Foundation][2] was founded by IBM, Google, Mellanox, Tyan and NVIDIA in 2013 to drive open collaboration hardware development in the same spirit as the open source software development which has found fertile ground in the past two decades.
|
||||
|
||||
IBM seeded the foundation by opening up its Power-based hardware and software technologies, offering licenses to use Power IP in independent hardware products. More than 70 members now work together to create custom open servers, components and software for Linux-based data centers.
|
||||
|
||||
In April, OpenPOWER unveiled a technology roadmap based on new POWER8 process-based servers capable of analyzing data 50 times faster than the latest x86-based systems. In July, IBM and Google released a firmware stack. October saw the availability of NVIDIA GPU accelerated POWER8 systems and the first OpenPOWER reference server from Tyan.
|
||||
|
||||
### The Linux Foundation ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/2_the-linux-foundation-100539101-orig.jpg)
|
||||
|
||||
Founded in 2000, [The Linux Foundation][2] is now the host for the largest open source, collaborative development effort in history, with more than 180 corporate members and many individual and student members. It sponsors the work of key Linux developers and promotes, protects and advances the Linux operating system and collaborative software development.
|
||||
|
||||
Some of its most successful collaborative projects include Code Aurora Forum (a consortium of companies with projects serving the mobile wireless industry), MeeGo (a project to build a Linux kernel-based operating system for mobile devices and IVI) and the Open Virtualization Alliance (which fosters the adoption of free and open source software virtualization solutions).
|
||||
|
||||
### Open Virtualization Alliance ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/3_open-virtualization-alliance-100539102-orig.jpg)
|
||||
|
||||
The [Open Virtualization Alliance (OVA)][3] exists to foster the adoption of free and open source software virtualization solutions like Kernel-based Virtual Machine (KVM) through use cases and support for the development of interoperable common interfaces and APIs. KVM turns the Linux kernel into a hypervisor.
|
||||
|
||||
Today, KVM is the most commonly used hypervisor with OpenStack.
|
||||
|
||||
### The OpenStack Foundation ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/4_the-openstack-foundation-100539096-orig.jpg)
|
||||
|
||||
Originally launched as an Infrastructure-as-a-Service (IaaS) product by NASA and Rackspace hosting in 2010, the [OpenStack Foundation][4] has become the home for one of the biggest open source projects around. It boasts more than 200 member companies, including AT&T, AMD, Avaya, Canonical, Cisco, Dell and HP.
|
||||
|
||||
Organized around a six-month release cycle, the foundation's OpenStack projects are developed to control pools of processing, storage and networking resources through a data center — all managed or provisioned through a Web-based dashboard, command-line tools or a RESTful API. So far, the collaborative development supported by the foundation has resulted in the creation of OpenStack components including OpenStack Compute (a cloud computing fabric controller that is the main part of an IaaS system), OpenStack Networking (a system for managing networks and IP addresses) and OpenStack Object Storage (a scalable redundant storage system).
|
||||
|
||||
### OpenDaylight ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/5_opendaylight-100539097-orig.jpg)
|
||||
|
||||
Another collaborative project to come out of the Linux Foundation, [OpenDaylight][5] is a joint initiative of industry vendors, like Dell, HP, Oracle and Avaya founded in April 2013. Its mandate is the creation of a community-led, open, industry-supported framework consisting of code and blueprints for Software-Defined Networking (SDN). The idea is to provide a fully functional SDN platform that can be deployed directly, without requiring other components, though vendors can offer add-ons and enhancements.
|
||||
|
||||
### Apache Software Foundation ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/6_apache-software-foundation-100539098-orig.jpg)
|
||||
|
||||
The [Apache Software Foundation (ASF)][7] is home to nearly 150 top level projects ranging from open source enterprise automation software to a whole ecosystem of distributed computing projects related to Apache Hadoop. These projects deliver enterprise-grade, freely available software products, while the Apache License is intended to make it easy for users, whether commercial or individual, to deploy Apache products.
|
||||
|
||||
ASF was incorporated in 1999 as a membership-based, not-for-profit corporation with meritocracy at its heart — to become a member you must first be actively contributing to one or more of the foundation's collaborative projects.
|
||||
|
||||
### Open Compute Project ###
|
||||
|
||||
![](http://images.techhive.com/images/article/2015/01/7_open-compute-project-100539099-orig.jpg)
|
||||
|
||||
An outgrowth of Facebook's redesign of its Oregon data center, the [Open Compute Project (OCP)][7] aims to develop open hardware solutions for data centers. The OCP is an initiative made up of cheap, vanity-free servers, modular I/O storage for Open Rack (a rack standard designed for data centers to integrate the rack into the data center infrastructure) and a relatively "green" data center design.
|
||||
|
||||
OCP board members include representatives from Facebook, Intel, Goldman Sachs, Rackspace and Microsoft.
|
||||
|
||||
OCP recently announced two options for licensing: an Apache 2.0-like license that allows for derivative works and a more prescriptive license that encourages changes to be rolled back into the original software.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities-driving-open-source-development.html
|
||||
|
||||
作者:[Thor Olavsrud][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.networkworld.com/author/Thor-Olavsrud/
|
||||
[1]:http://openpowerfoundation.org/
|
||||
[2]:http://www.linuxfoundation.org/
|
||||
[3]:https://openvirtualizationalliance.org/
|
||||
[4]:http://www.openstack.org/foundation/
|
||||
[5]:http://www.opendaylight.org/
|
||||
[6]:http://www.apache.org/
|
||||
[7]:http://www.opencompute.org/
|
100
sources/talk/20150806 5 heroes of the Linux world.md
Normal file
@ -0,0 +1,100 @@
|
||||
Translating by ZTinoZ
|
||||
5 heroes of the Linux world
|
||||
================================================================================
|
||||
Who are these people, seen and unseen, whose work affects all of us every day?
|
||||
|
||||
![Image courtesy Christopher Michel/Flickr](http://core0.staticworld.net/images/article/2015/07/penguin-100599348-orig.jpg)
|
||||
Image courtesy [Christopher Michel/Flickr][1]
|
||||
|
||||
### High-flying penguins ###
|
||||
|
||||
Linux and open source is driven by passionate people who write best-of-breed software and then release the code to the public so anyone can use it, without any strings attached. (Well, there is one string attached and that’s licence.)
|
||||
|
||||
Who are these people? These heroes of the Linux world, whose work affects all of us every day. Allow me to introduce you.
|
||||
|
||||
![Image courtesy Swapnil Bhartiya](http://images.techhive.com/images/article/2015/07/swap-klaus-100599357-orig.jpg)
|
||||
Image courtesy Swapnil Bhartiya
|
||||
|
||||
### Klaus Knopper ###
|
||||
|
||||
Klaus Knopper, an Austrian developer who lives in Germany, is the founder of Knoppix and Adriana Linux, which he developed for his blind wife.
|
||||
|
||||
Knoppix holds a very special place in heart of those Linux users who started using Linux before Ubuntu came along. What makes Knoppix so special is that it popularized the concept of Live CD. Unlike Windows or Mac OS X, you could run the entire operating system from the CD without installing anything on the system. It allowed new users to test Linux on their systems without formatting the hard drive. The live feature of Linux alone contributed heavily to its popularity.
|
||||
|
||||
![Image courtesy Fórum Internacional Software Live/Flickr](http://images.techhive.com/images/article/2015/07/lennart-100599356-orig.jpg)
|
||||
Image courtesy [Fórum Internacional Software Live/Flickr][2]
|
||||
|
||||
### Lennart Poettering ###
|
||||
|
||||
Lennart Poettering is yet another genius from Germany. He has written so many core components of a Linux (as well as BSD) system that it’s hard to keep track. Most of his work is towards the successors of aging or broken components of the Linux systems.
|
||||
|
||||
Poettering wrote the modern init system systemd, which shook the Linux world and created a [rift in the Debian community][3].
|
||||
|
||||
While Linus Torvalds has no problems with systemd, and praises it, he is not a huge fan of the way systemd developers (including the co-author Kay Sievers,) respond to bug reports and criticism. At one point Linus said on the LKML (Linux Kernel Mailing List) that he would [never work with Sievers][4].
|
||||
|
||||
Lennart is also the author of Pulseaudio, sound server on Linux and Avahi, zero-configuration networking (zeroconf) implementation.
|
||||
|
||||
![Image courtesy Meego Com/Flickr](http://images.techhive.com/images/article/2015/07/jim-zemlin-100599362-orig.jpg)
|
||||
Image courtesy [Meego Com/Flickr][5]
|
||||
|
||||
### Jim Zemlin ###
|
||||
|
||||
Jim Zemlin isn't a developer, but as founder of The Linux Foundation he is certainly one of the most important figures of the Linux world.
|
||||
|
||||
In 2007, The Linux Foundation was formed as a result of merger between two open source bodies: the Free Standards Group and the Open Source Development Labs. Zemlin was the executive director of the Free Standards Group. Post-merger Zemlin became the executive director of The Linux Foundation and has held that position since.
|
||||
|
||||
Under his leadership, The Linux Foundation has become the central figure in the modern IT world and plays a very critical role for the Linux ecosystem. In order to ensure that key developers like Torvalds and Kroah-Hartman can focus on Linux, the foundation sponsors them as fellows.
|
||||
|
||||
Zemlin also made the foundation a bridge between companies so they can collaborate on Linux while at the same time competing in the market. The foundation also organizes many conferences around the world and [offers many courses for Linux developers][6].
|
||||
|
||||
People may think of Zemlin as Linus Torvalds' boss, but he refers to himself as "Linus Torvalds' janitor."
|
||||
|
||||
![Image courtesy Coscup/Flickr](http://images.techhive.com/images/article/2015/07/greg-kh-100599350-orig.jpg)
|
||||
Image courtesy [Coscup/Flickr][7]
|
||||
|
||||
### Greg Kroah-Hartman ###
|
||||
|
||||
Greg Kroah-Hartman is known as second-in-command of the Linux kernel. The ‘gentle giant’ is the maintainer of the stable branch of the kernel and of staging subsystem, USB, driver core, debugfs, kref, kobject, and the [sysfs][8] kernel subsystems along with many other components of a Linux system.
|
||||
|
||||
He is also credited for device drivers for Linux. One of his jobs is to travel around the globe, meet hardware makers and persuade them to make their drivers available for Linux. The next time you plug some random USB device to your system and it works out of the box, thank Kroah-Hartman. (Don't thank the distro. Some distros try to take credit for the work Kroah-Hartman or the Linux kernel did.)
|
||||
|
||||
Kroah-Hartman previously worked for Novell and then joined the Linux Foundation as a fellow, alongside Linus Torvalds.
|
||||
|
||||
Kroah-Hartman is the total opposite of Linus and never rants (at least publicly). One time there was some ripple was when he stated that [Canonical doesn’t contribute much to the Linux kernel][9].
|
||||
|
||||
On a personal level, Kroah-Hartman is extremely helpful to new developers and users and is easily accessible.
|
||||
|
||||
![Image courtesy Swapnil Bhartiya](http://images.techhive.com/images/article/2015/07/linus-swapnil-100599349-orig.jpg)
|
||||
Image courtesy Swapnil Bhartiya
|
||||
|
||||
### Linus Torvalds ###
|
||||
|
||||
No collection of Linux heroes would be complete without Linus Torvalds. He is the author of the Linux kernel, the most used open source technology on the planet and beyond. His software powers everything from space stations to supercomputers, military drones to mobile devices and tiny smartwatches. Linus remains the authority on the Linux kernel and makes the final decision on which patches to merge to the kernel.
|
||||
|
||||
Linux isn't Torvalds' only contribution open source. When he got fed-up with the existing software revision control systems, which his kernel heavily relied on, he wrote his own, called Git. Git enjoys the same reputation as Linux; it is the most used version control system in the world.
|
||||
|
||||
Torvalds is also a passionate scuba diver and when he found no decent dive logs for Linux, he wrote his own and called it SubSurface.
|
||||
|
||||
Torvalds is [well known for his rants][10] and once admitted that his ego is as big as a small planet. But he is also known for admitting his mistakes if he realizes he was wrong.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.itworld.com/article/2955001/linux/5-heros-of-the-linux-world.html
|
||||
|
||||
作者:[Swapnil Bhartiya][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
|
||||
[1]:https://flic.kr/p/siJ25M
|
||||
[2]:https://flic.kr/p/uTzj54
|
||||
[3]:http://www.itwire.com/business-it-news/open-source/66153-systemd-fallout-two-debian-technical-panel-members-resign
|
||||
[4]:http://www.linuxveda.com/2014/04/04/linus-torvalds-systemd-kay-sievers/
|
||||
[5]:https://flic.kr/p/9Lnhpu
|
||||
[6]:http://www.itworld.com/article/2951968/linux/linux-foundation-offers-cheaper-courses-and-certifications-for-india.html
|
||||
[7]:https://flic.kr/p/hBv8Pp
|
||||
[8]:https://en.wikipedia.org/wiki/Sysfs
|
||||
[9]:https://www.youtube.com/watch?v=CyHAeGBFS8k
|
||||
[10]:http://www.itworld.com/article/2873200/operating-systems/11-technologies-that-tick-off-linus-torvalds.html
|
@ -0,0 +1,81 @@
|
||||
Torvalds 2.0: Patricia Torvalds on computing, college, feminism, and increasing diversity in tech
|
||||
================================================================================
|
||||
![Image by : Photo by Becky Svartström. Modified by Opensource.com. CC BY-SA 4.0](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-lead-patriciatorvalds.png)
|
||||
Image by : Photo by Becky Svartström. Modified by Opensource.com. [CC BY-SA 4.0][1]
|
||||
|
||||
Patricia Torvalds isn't the Torvalds name that pops up in Linux and open source circles. Yet.
|
||||
|
||||
![](http://opensource.com/sites/default/files/images/life-uploads/ptorvalds.png)
|
||||
|
||||
At 18, Patricia is a feminist with a growing list of tech achievements, open source industry experience, and her sights set on diving into her freshman year of college at Duke University's Pratt School of Engineering. She works for [Puppet Labs][2] in Portland, Oregon, as an intern, but soon she'll head to Durham, North Carolina, to start the fall semester of college.
|
||||
|
||||
In this exclusive interview, Patricia explains what got her interested in computer science and engineering (spoiler alert: it wasn't her father), what her high school did "right" with teaching tech, the important role feminism plays in her life, and her thoughts on the lack of diversity in technology.
|
||||
|
||||
![](http://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png)
|
||||
|
||||
### What made you interested in studying computer science and engineering? ###
|
||||
|
||||
My interest in tech really grew throughout high school. I wanted to go into biology for a while, until around my sophomore year. I had a web design internship at the Portland VA after my sophomore year. And I took an engineering class called Exploratory Ventures, which sent an ROV into the Pacific ocean late in my sophomore year, but the turning point was probably when I was named a regional winner and national runner up for the [NCWIT Aspirations in Computing][3] award halfway through my junior year.
|
||||
|
||||
The award made me feel validated in my interest, of course, but I think the most important part of it was getting to join a Facebook group for all the award winners. The girls who have won the award are absolutely incredible and so supportive of each other. I was definitely interested in computer science before I won the award, because of my work in XV and at the VA, but having these girls to talk to solidified my interest and has kept it really strong. Teaching XV—more on that later—my junior and senior year, also, made engineering and computer science really fun for me.
|
||||
|
||||
### What do you plan to study? And do you already know what you want to do after college? ###
|
||||
|
||||
I hope to major in either Mechanical or Electrical and Computer Engineering as well as Computer Science, and minor in Women's Studies. After college, I hope to work for a company that supports or creates technology for social good, or start my own company.
|
||||
|
||||
### My daughter had one high school programming class—Visual Basic. She was the only girl in her class, and she ended up getting harassed and having a miserable experience. What was your experience like? ###
|
||||
|
||||
My high school began offering computer science classes my senior year, and I took Visual Basic as well! The class wasn't bad, but I was definitely one of three or four girls in the class of 20 or so students. Other computing classes seemed to have similar gender breakdowns. However, my high school was extremely small and the teacher was supportive of inclusivity in tech, so there was no harassment that I noticed. Hopefully the classes become more diverse in future years.
|
||||
|
||||
### What did your schools do right technology-wise? And how could they have been better? ###
|
||||
|
||||
My high school gave us consistent access to computers, and teachers occasionally assigned technology-based assignments in unrelated classes—we had to create a website for a social studies class a few times—which I think is great because it exposes everyone to tech. The robotics club was also pretty active and well-funded, but fairly small; I was not a member. One very strong component of the school's technology/engineering program is actually a student-taught engineering class called Exploratory Ventures, which is a hands-on class that tackles a new engineering or computer science problem every year. I taught it for two years with a classmate of mine, and have had students come up to me and tell me they're interested in pursuing engineering or computer science as a result of the class.
|
||||
|
||||
However, my high school was not particularly focused on deliberately including young women in these programs, and it isn't very racially diverse. The computing-based classes and clubs were, by a vast majority, filled with white male students. This could definitely be improved on.
|
||||
|
||||
### Growing up, how did you use technology at home? ###
|
||||
|
||||
Honestly, when I was younger I used my computer time (my dad created a tracker, which logged us off after an hour of Internet use) to play Neopets or similar games. I guess I could have tried to mess with the tracker or played on the computer without Internet use, but I just didn't. I sometimes did little science projects with my dad, and I remember once printing "Hello world" in the terminal with him a thousand times, but mostly I just played online games with my sisters and didn't get my start in computing until high school.
|
||||
|
||||
### You were active in the Feminism Club at your high school. What did you learn from that experience? What feminist issues are most important to you now? ###
|
||||
|
||||
My friend and I co-founded Feminism Club at our high school late in our sophomore year. We did receive lots of resistance to the club at first, and while that never entirely went away, by the time we graduated feminist ideals were absolutely a part of the school's culture. The feminist work we did at my high school was generally on a more immediate scale and focused on issues like the dress code.
|
||||
|
||||
Personally, I'm very focused on intersectional feminism, which is feminism as it applies to other aspects of oppression like racism and classism. The Facebook page [Guerrilla Feminism][4] is a great example of intersectional feminism and has done so much to educate me. I currently run the Portland branch.
|
||||
|
||||
Feminism is also important to me in terms of diversity in tech, although as an upper-class white woman with strong connections in the tech world, the problems here affect me much less than they do other people. The same goes for my involvement in intersectional feminism. Publications like [Model View Culture][5] are very inspiring to me, and I admire Shanley Kane so much for what she does.
|
||||
|
||||
### What advice would you give parents who want to teach their children how to program? ###
|
||||
|
||||
Honestly, nobody ever pushed me into computer science or engineering. Like I said, for a long time I wanted to be a geneticist. I got a summer internship doing web design for the VA the summer after my sophomore year and totally changed my mind. So I don't know if I can fully answer that question.
|
||||
|
||||
I do think genuine interest is important, though. If my dad had sat me down in front of the computer and told me to configure a webserver when I was 12, I don't think I'd be interested in computer science. Instead, my parents gave me a lot of free rein to do what I wanted, which was mostly coding terrible little HTML sites for my Neopets. Neither of my younger sisters are interested in engineering or computer science, and my parents don't care. I'm really lucky my parents have given me and my sisters the encouragement and resources to explore our interests.
|
||||
|
||||
Still, I grew up saying my future career would be "like my dad's"—even when I didn't know what he did. He has a pretty cool job. Also, one time when I was in middle school, I told him that and he got a little choked up and said I wouldn't think that in high school. So I guess that motivated me a bit.
|
||||
|
||||
### What suggestions do you have for leaders in open source communities to help them attract and maintain a more diverse mix of contributors? ###
|
||||
|
||||
I'm actually not active in particular open source communities. I feel much more comfortable discussing computing with other women; I'm a member of the [NCWIT Aspirations in Computing][6] network and it's been one of the most important aspects of my continued interest in technology, as well as the Facebook group [Ladies Storm Hackathons][7].
|
||||
|
||||
I think this applies well to attracting and maintaining a talented and diverse mix of contributors: Safe spaces are important. I have seen the misogynistic and racist comments made in some open source communities, and subsequent dismissals when people point out the issues. I think that in maintaining a professional community there have to be strong standards on what constitutes harassment or inappropriate conduct. Of course, people can—and will—have a variety of opinions on what they should be able to express in open source communities, or any community. However, if community leaders actually want to attract and maintain diverse talent, they need to create a safe space and hold community members to high standards.
|
||||
|
||||
I also think that some community leaders just don't value diversity. It's really easy to argue that tech is a meritocracy, and the reason there are so few marginalized people in tech is just that they aren't interested, and that the problem comes from earlier on in the pipeline. They argue that if someone is good enough at their job, their gender or race or sexual orientation doesn't matter. That's the easy argument. But I was raised not to make excuses for mistakes. And I think the lack of diversity is a mistake, and that we should be taking responsibility for it and actively trying to make it better.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://opensource.com/life/15/8/patricia-torvalds-interview
|
||||
|
||||
作者:[Rikki Endsley][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://opensource.com/users/rikki-endsley
|
||||
[1]:https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[2]:https://puppetlabs.com/
|
||||
[3]:https://www.aspirations.org/
|
||||
[4]:https://www.facebook.com/guerrillafeminism
|
||||
[5]:https://modelviewculture.com/
|
||||
[6]:https://www.aspirations.org/
|
||||
[7]:https://www.facebook.com/groups/LadiesStormHackathons/
|
@ -1,260 +0,0 @@
|
||||
translating...
|
||||
|
||||
How to set up IPv6 BGP peering and filtering in Quagga BGP router
|
||||
================================================================================
|
||||
In the previous tutorials, we demonstrated how we can set up a [full-fledged BGP router][1] and configure [prefix filtering][2] with Quagga. In this tutorial, we are going to show you how we can set up IPv6 BGP peering and advertise IPv6 prefixes through BGP. We will also demonstrate how we can filter IPv6 prefixes advertised or received by using prefix-list and route-map features.
|
||||
|
||||
### Topology ###
|
||||
|
||||
For this tutorial, we will be considering the following topology.
|
||||
|
||||
![](https://farm9.staticflickr.com/8599/15944659374_1c65852df2_c.jpg)
|
||||
|
||||
Service providers A and B want to establish an IPv6 BGP peering between them. Their IPv6 and AS information is as follows.
|
||||
|
||||
- Peering IP block: 2001:DB8:3::/64
|
||||
- Service provider A: AS 100, 2001:DB8:1::/48
|
||||
- Service provider B: AS 200, 2001:DB8:2::/48
|
||||
|
||||
### Installing Quagga on CentOS/RHEL ###
|
||||
|
||||
If Quagga has not already been installed, we can install it using yum.
|
||||
|
||||
# yum install quagga
|
||||
|
||||
On CentOS/RHEL 7, the default SELinux policy, which prevents /usr/sbin/zebra from writing to its configuration directory, can interfere with the setup procedure we are going to describe. Thus we want to relax this restriction as follows. Skip this step if you are using CentOS/RHEL 6.
|
||||
|
||||
# setsebool -P zebra_write_config 1
|
||||
|
||||
### Creating Configuration Files ###
|
||||
|
||||
After installation, we start the configuration process by creating the zebra/bgpd configuration files.
|
||||
|
||||
# cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
|
||||
# cp /usr/share/doc/quagga-XXXXX/bgpd.conf.sample /etc/quagga/bgpd.conf
|
||||
|
||||
Next, enable auto-start of these services.
|
||||
|
||||
**On CentOS/RHEL 6:**
|
||||
|
||||
# service zebra start; service bgpd start
|
||||
# chkconfig zebra on; chkconfig bgpd on
|
||||
|
||||
**On CentOS/RHEL 7:**
|
||||
|
||||
# systemctl start zebra; systemctl start bgpd
|
||||
# systemctl enable zebra; systemctl enable bgpd
|
||||
|
||||
Quagga provides a built-in shell called vtysh, whose interface is similar to those of major router vendors such as Cisco or Juniper. Launch vtysh command shell:
|
||||
|
||||
# vtysh
|
||||
|
||||
The prompt will be changed to:
|
||||
|
||||
router-a#
|
||||
|
||||
or
|
||||
|
||||
router-b#
|
||||
|
||||
In the rest of the tutorial, these prompts indicate that you are inside the vtysh shell of either router.
|
||||
|
||||
### Specifying Log File for Zebra ###
|
||||
|
||||
Let's configure the log file for Zebra, which will be helpful for debugging.
|
||||
|
||||
First, enter the global configuration mode by typing:
|
||||
|
||||
router-a# configure terminal
|
||||
|
||||
The prompt will be changed to:
|
||||
|
||||
router-a(config)#
|
||||
|
||||
Now specify log file location. Then exit the configuration mode:
|
||||
|
||||
router-a(config)# log file /var/log/quagga/quagga.log
|
||||
router-a(config)# exit
|
||||
|
||||
Save configuration permanently by:
|
||||
|
||||
router-a# write
|
||||
|
||||
### Configuring Interface IP Addresses ###
|
||||
|
||||
Let's now configure the IP addresses for Quagga's physical interfaces.
|
||||
|
||||
First, we check the available interfaces from inside vtysh.
|
||||
|
||||
router-a# show interfaces
|
||||
|
||||
----------
|
||||
|
||||
Interface eth0 is up, line protocol detection is disabled
|
||||
## OUTPUT TRUNCATED ##
|
||||
Interface eth1 is up, line protocol detection is disabled
|
||||
## OUTPUT TRUNCATED ##
|
||||
|
||||
Now we assign necessary IPv6 addresses.
|
||||
|
||||
router-a# conf terminal
|
||||
router-a(config)# interface eth0
|
||||
router-a(config-if)# ipv6 address 2001:db8:3::1/64
|
||||
router-a(config-if)# interface eth1
|
||||
router-a(config-if)# ipv6 address 2001:db8:1::1/64
|
||||
|
||||
We use the same method to assign IPv6 addresses to router-B. I am summarizing the configuration below.
|
||||
|
||||
router-b# show running-config
|
||||
|
||||
----------
|
||||
|
||||
interface eth0
|
||||
ipv6 address 2001:db8:3::2/64
|
||||
|
||||
interface eth1
|
||||
ipv6 address 2001:db8:2::1/64
|
||||
|
||||
Since the eth0 interfaces of both routers are in the same subnet, i.e., 2001:DB8:3::/64, you should be able to ping from one router to another. Make sure that you can ping successfully before moving on to the next step.
|
||||
|
||||
router-a# ping ipv6 2001:db8:3::2
|
||||
|
||||
----------
|
||||
|
||||
PING 2001:db8:3::2(2001:db8:3::2) 56 data bytes
|
||||
64 bytes from 2001:db8:3::2: icmp_seq=1 ttl=64 time=3.20 ms
|
||||
64 bytes from 2001:db8:3::2: icmp_seq=2 ttl=64 time=1.05 ms
|
||||
|
||||
### Phase 1: IPv6 BGP Peering ###
|
||||
|
||||
In this section, we will configure IPv6 BGP between the two routers. We start by specifying BGP neighbors in router-A.
|
||||
|
||||
router-a# conf t
|
||||
router-a(config)# router bgp 100
|
||||
router-a(config-router)# no auto-summary
|
||||
router-a(config-router)# no synchronization
|
||||
router-a(config-router)# neighbor 2001:DB8:3::2 remote-as 200
|
||||
|
||||
Next, we define the address family for IPv6. Within the address family section, we will define the network to be advertised, and activate the neighbors as well.
|
||||
|
||||
router-a(config-router)# address-family ipv6
|
||||
router-a(config-router-af)# network 2001:DB8:1::/48
|
||||
router-a(config-router-af)# neighbor 2001:DB8:3::2 activate
|
||||
|
||||
We will go through the same configuration for router-B. I'm providing the summary of the configuration.
|
||||
|
||||
router-b# conf t
|
||||
router-b(config)# router bgp 200
|
||||
router-b(config-router)# no auto-summary
|
||||
router-b(config-router)# no synchronization
|
||||
router-b(config-router)# neighbor 2001:DB8:3::1 remote-as 100
|
||||
router-b(config-router)# address-family ipv6
|
||||
router-b(config-router-af)# network 2001:DB8:2::/48
|
||||
router-b(config-router-af)# neighbor 2001:DB8:3::1 activate
|
||||
|
||||
If all goes well, an IPv6 BGP session should be up between the two routers. If not already done, please make sure that the necessary port (TCP 179) is [open in your firewall][3].
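If the port is blocked, the following is a minimal sketch of how to open it; it assumes firewalld on CentOS/RHEL 7 and plain iptables on CentOS/RHEL 6, so adapt it to whatever firewall you actually run.

**On CentOS/RHEL 7 (firewalld):**

    # firewall-cmd --permanent --add-port=179/tcp
    # firewall-cmd --reload

**On CentOS/RHEL 6 (iptables):**

    # iptables -I INPUT -p tcp --dport 179 -j ACCEPT
    # service iptables save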
|
||||
|
||||
We can check IPv6 BGP session information using the following commands.
|
||||
|
||||
**For BGP summary:**
|
||||
|
||||
router-a# show bgp ipv6 unicast summary
|
||||
|
||||
**For BGP advertised routes:**
|
||||
|
||||
router-a# show bgp ipv6 neighbors <neighbor-IPv6-address> advertised-routes
|
||||
|
||||
**For BGP received routes:**
|
||||
|
||||
router-a# show bgp ipv6 neighbors <neighbor-IPv6-address> routes
|
||||
|
||||
![](https://farm8.staticflickr.com/7317/16379555088_6e29cb6884_b.jpg)
|
||||
|
||||
### Phase 2: Filtering IPv6 Prefixes ###
|
||||
|
||||
As we can see from the above output, the routers are advertising their full /48 IPv6 prefix. For demonstration purposes, we will consider the following requirements.
|
||||
|
||||
- Router-B will advertise one /64 prefix, one /56 prefix, as well as one full /48 prefix.
|
||||
- Router-A will accept any IPv6 prefix owned by service provider B, which has a netmask length between /56 and /64.
|
||||
|
||||
We are going to filter the prefix as required, using prefix-list and route-map in router-A.
|
||||
|
||||
![](https://farm8.staticflickr.com/7367/16381297417_6549218289_c.jpg)
|
||||
|
||||
#### Modifying prefix advertisement for Router-B ####
|
||||
|
||||
Currently, router-B is advertising only one /48 prefix. We will modify router-B's BGP configuration so that it advertises additional /56 and /64 prefixes as well.
|
||||
|
||||
router-b# conf t
|
||||
router-b(config)# router bgp 200
|
||||
router-b(config-router)# address-family ipv6
|
||||
router-b(config-router-af)# network 2001:DB8:2::/56
|
||||
router-b(config-router-af)# network 2001:DB8:2::/64
|
||||
|
||||
We will verify that all prefixes are received at router-A.
|
||||
|
||||
![](https://farm9.staticflickr.com/8598/16379761980_7c083ae977_b.jpg)
|
||||
|
||||
Great! As we are receiving all prefixes in router-A, we will move forward and create prefix-list and route-map entries to filter these prefixes.
|
||||
|
||||
#### Creating Prefix-List ####
|
||||
|
||||
As described in the [previous tutorial][4], prefix-list is a mechanism that is used to match an IP address prefix with a subnet length. Once a matched prefix is found, we can apply filtering or other actions to the matched prefix. To meet our requirements, we will go ahead and create a necessary prefix-list entry in router-A.
|
||||
|
||||
router-a# conf t
|
||||
router-a(config)# ipv6 prefix-list FILTER-IPV6-PRFX permit 2001:DB8:2::/56 le 64
|
||||
|
||||
The above command will create a prefix-list entry named 'FILTER-IPV6-PRFX', which will match any prefix in the 2001:DB8:2:: pool with a netmask length between 56 and 64.
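With this entry, 2001:DB8:2::/56 and 2001:DB8:2::/64 will match, while the full 2001:DB8:2::/48 will not. To sanity-check the entry from inside vtysh, something like the following should work (the exact show command can vary slightly between Quagga versions):

    router-a# show ipv6 prefix-list FILTER-IPV6-PRFX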
|
||||
|
||||
#### Creating and Applying Route-Map ####
|
||||
|
||||
Now that the prefix-list entry is created, we will create a corresponding route-map rule which uses the prefix-list entry.
|
||||
|
||||
router-a# conf t
|
||||
router-a(config)# route-map FILTER-IPV6-RMAP permit 10
|
||||
router-a(config-route-map)# match ipv6 address prefix-list FILTER-IPV6-PRFX
|
||||
|
||||
The above commands will create a route-map rule named 'FILTER-IPV6-RMAP'. This rule will permit IPv6 addresses matched by the prefix-list 'FILTER-IPV6-PRFX' that we have created earlier.
|
||||
|
||||
Remember that a route-map rule is only effective when it is applied to a neighbor or an interface in a certain direction. We will apply the route-map in the BGP neighbor configuration. As the filter is meant for inbound prefixes, we apply the route-map in the inbound direction.
|
||||
|
||||
router-a# conf t
|
||||
router-a(config)# router bgp 100
|
||||
router-a(config-router)# address-family ipv6
|
||||
router-a(config-router-af)# neighbor 2001:DB8:3::2 route-map FILTER-IPV6-RMAP in
|
||||
|
||||
Now when we check the routes received at router-A, we should see only two prefixes that are allowed.
|
||||
|
||||
![](https://farm8.staticflickr.com/7337/16379762020_ec2dc39b31_c.jpg)
|
||||
|
||||
**Note**: You may need to reset the BGP session for the route-map to take effect.
|
||||
|
||||
All IPv6 BGP sessions can be restarted using the following command:
|
||||
|
||||
router-a# clear bgp ipv6 *
|
||||
|
||||
I am summarizing the configuration of both routers so you get a clear picture at a glance.
|
||||
|
||||
![](https://farm9.staticflickr.com/8672/16567240165_eee4398dc8_c.jpg)
|
||||
|
||||
### Summary ###
|
||||
|
||||
To sum up, this tutorial focused on how to set up BGP peering and filtering using IPv6. We showed how to advertise IPv6 prefixes to a neighboring BGP router, and how to filter the prefixes that are advertised or received. Note that the process described in this tutorial may affect production networks of a service provider, so please use caution.
|
||||
|
||||
Hope this helps.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/ipv6-bgp-peering-filtering-quagga-bgp-router.html
|
||||
|
||||
作者:[Sarmed Rahman][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/sarmed
|
||||
[1]:http://xmodulo.com/centos-bgp-router-quagga.html
|
||||
[2]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html
|
||||
[3]:http://ask.xmodulo.com/open-port-firewall-centos-rhel.html
|
||||
[4]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html
|
@ -1,68 +0,0 @@
|
||||
|
||||
How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots
|
||||
================================================================================
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035707bbd74.png.pagespeed.ic.9_yebxUF1C.png)
|
||||
|
||||
Ubuntu’s installer offers an easy “Use LVM” checkbox. The description says it enables Logical Volume Management so you can take snapshots and more easily resize your hard disk partitions — here’s how to do that.
|
||||
|
||||
LVM is a technology that’s similar to [RAID arrays][1] or [Storage Spaces on Windows][2] in some ways. While this technology is particularly useful on servers, it can be used on desktop PCs, too.
|
||||
|
||||
### Should You Use LVM With Your New Ubuntu Installation? ###
|
||||
|
||||
The first question is whether you even want to use LVM with your Ubuntu installation. Ubuntu makes this easy to enable with a quick click, but this option isn’t enabled by default. As the installer says, this allows you to resize partitions, create snapshots, merge multiple disks into a single logical volume, and so on — all while the system is running. Unlike with typical partitions, you don’t have to shut down your system, boot from a live CD or USB drive, and [resize your partitions while they aren’t in use][3].
|
||||
|
||||
To be perfectly honest, the average Ubuntu desktop user probably won’t realize whether they’re using LVM or not. But, if you want to do more advanced things later, LVM can help. LVM is potentially more complex, which could cause problems if you need to recover your data later — especially if you’re not that experienced with it. There shouldn’t be a noticeable performance penalty here — LVM is implemented right down in the Linux kernel.
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035cbada6ae.png.pagespeed.ic.cnqyiKfCvi.png)
|
||||
|
||||
### Logical Volume Management Explained ###
|
||||
|
||||
We’ve previously [explained what LVM is][4]. In a nutshell, it provides a layer of abstraction between your physical disks and the partitions presented to your operating system. For example, your computer might have two hard drives inside it, each 1 TB in size. You’d have to have at least two partitions on these disks, and each of these partitions would be 1 TB in size.
|
||||
|
||||
LVM provides a layer of abstraction over this. Instead of the traditional partition on a disk, LVM would treat the disks as two separate “physical volumes” after you initialize them. You could then create “logical volumes” based on these physical volumes. For example, you could combine those two 1 TB disks into a single 2 TB partition. Your operating system would just see a 2 TB volume, and LVM would deal with everything in the background. A group of physical volumes and logical volumes is known as a “volume group.” A typical system will just have a single volume group.
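For reference, doing the same thing from the terminal looks roughly like the sketch below. The device names, volume names, and file system are only illustrative, and these commands are destructive, so double-check them against your own system first.

    sudo pvcreate /dev/sdb1 /dev/sdc1             # mark partitions as LVM physical volumes
    sudo vgcreate data-vg /dev/sdb1 /dev/sdc1     # group them into one volume group
    sudo lvcreate -l 100%FREE -n data-lv data-vg  # create one logical volume spanning both disks
    sudo mkfs.ext4 /dev/data-vg/data-lv           # put a file system on the new volume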
|
||||
|
||||
This layer of abstraction makes it possible to easily resize partitions, combine multiple disks into a single volume, and even take “snapshots” of a partition’s file system while it’s running, all without unmounting it.
|
||||
|
||||
Note that merging multiple disks into a single volume can be a bad idea if you’re not creating backups. It’s like with RAID 0 — if you combine two 1 TB volumes into a single 2 TB volume, you could lose important data on the volume if just one of your hard disks fails. Backups are crucial if you go this route.
|
||||
|
||||
### Graphical Utilities for Managing Your LVM Volumes ###
|
||||
|
||||
Traditionally, [LVM volumes are managed with Linux terminal commands][5]. These will work for you on Ubuntu, but there’s an easier, graphical method anyone can take advantage of. If you’re a Linux user used to using GParted or a similar partition manager, don’t bother — GParted doesn’t have support for LVM disks.
|
||||
|
||||
Instead, you can use the Disks utility included with Ubuntu for this. This utility is also known as GNOME Disk Utility, or Palimpsest. Launch it by clicking the icon on the dash, searching for Disks, and pressing Enter. Unlike GParted, the Disks utility will display your LVM partitions under “Other Devices,” so you can format them and adjust other options if you need to. This utility will work from a live CD or USB drive, too.
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550361b3772f7.png.pagespeed.ic.nZWwLJUywR.png)
|
||||
|
||||
Unfortunately, the Disks utility doesn’t include support for taking advantage of LVM’s most powerful features. There are no options for managing your volume groups, extending partitions, or taking snapshots. You could do that from the terminal, but you don’t have to. Instead, you can open the Ubuntu Software Center, search for LVM, and install the Logical Volume Management tool. You could also just run the **sudo apt-get install system-config-lvm** command in a terminal window. After it’s installed, you can open the Logical Volume Management utility from the dash.
|
||||
|
||||
This graphical configuration tool was made by Red Hat. It’s a bit dated, but it’s the only graphical way to do this stuff without resorting to terminal commands.
|
||||
|
||||
Let’s say you wanted to add a new physical volume to your volume group. You’d open the tool, select the new disk under Uninitialized Entries, and click the “Initialize Entry” button. You’d then find the new physical volume under Unallocated Volumes, and you could use the “Add to existing Volume Group” button to add it to the “ubuntu-vg” volume group Ubuntu created during the installation process.
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363106789c.png.pagespeed.ic.drVInt3Weq.png)
|
||||
|
||||
The volume group view shows you a visual overview of your physical volumes and logical volumes. Here, we have two physical partitions across two separate hard drives. We have a swap partition and a root partition, just as Ubuntu sets up its partitioning scheme by default. Because we’ve added a second physical partition from another drive, there’s now a good chunk of unused space.
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363f631c19.png.pagespeed.ic.54E_Owcq8y.png)
|
||||
|
||||
To expand a logical partition into the physical space, you could select it under Logical View, click Edit Properties, and modify the size to grow the partition. You could also shrink it from here.
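The terminal equivalent is also short. Here is a minimal sketch, assuming Ubuntu's default ubuntu-vg/root volume with an ext4 file system; the names and size will differ on your machine:

    sudo lvextend -L +10G /dev/ubuntu-vg/root    # grow the logical volume by 10 GB
    sudo resize2fs /dev/ubuntu-vg/root           # grow the ext4 file system to fill it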
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55036893712d3.png.pagespeed.ic.ce7y_Mt0uF.png)
|
||||
|
||||
The other options in system-config-lvm allow you to set up snapshots and mirroring. You probably won’t need these features on a typical desktop, but they’re available graphically here. Remember, you can also [do all of this with terminal commands][6].
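For example, taking a snapshot from the terminal is a single command. This sketch assumes the same ubuntu-vg/root volume; the snapshot name and size are arbitrary, and the snapshot stays valid only as long as the changes written to the origin fit within the size you give it:

    sudo lvcreate --size 2G --snapshot --name root-snap /dev/ubuntu-vg/root   # create the snapshot
    sudo lvremove /dev/ubuntu-vg/root-snap                                    # delete it when no longer needed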
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.howtogeek.com/211937/how-to-use-lvm-on-ubuntu-for-easy-partition-resizing-and-snapshots/
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.howtogeek.com/162676/how-to-use-multiple-disks-intelligently-an-introduction-to-raid/
|
||||
[2]:http://www.howtogeek.com/109380/how-to-use-windows-8s-storage-spaces-to-mirror-combine-drives/
|
||||
[3]:http://www.howtogeek.com/114503/how-to-resize-your-ubuntu-partitions/
|
||||
[4]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
|
||||
[5]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
|
||||
[6]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
|
@ -1,89 +0,0 @@
|
||||
How to run Ubuntu Snappy Core on Raspberry Pi 2
|
||||
================================================================================
|
||||
The Internet of Things (IoT) is upon us. In a couple of years some of us might ask ourselves how we ever survived without it, just like we question our past without cellphones today. Canonical is a contender in this fast growing, but still wide open market. The company wants to claim their stakes in IoT just as they already did for the cloud. At the end of January, the company launched a small operating system that goes by the name of [Ubuntu Snappy Core][1] which is based on Ubuntu Core.
|
||||
|
||||
Snappy, the new component in the mix, is a package format derived from DEB, paired with a frontend for updating the system that borrows its idea from the atomic upgrades used in CoreOS, Red Hat's Atomic, and elsewhere. As soon as the Raspberry Pi 2 hit the market, Canonical released Snappy Core for that platform. The first edition of the Raspberry Pi was not able to run Ubuntu because Ubuntu's ARM images use the ARMv7 architecture, while the first Raspberry Pis were based on ARMv6. That has changed now, and by releasing a Raspberry Pi 2 image of Snappy Core, Canonical took the opportunity to make clear that Snappy is meant for the cloud and especially for IoT.
|
||||
|
||||
Snappy also runs on other platforms like Amazon EC2, Microsoft's Azure, and Google's Compute Engine, and can also be virtualized with KVM, VirtualBox, or Vagrant. Canonical has embraced big players like Microsoft, Google, Docker, and OpenStack and, at the same time, also included small projects from the maker scene as partners. Besides startups like Ninja Sphere and Erle Robotics, there are board manufacturers like Odroid, Banana Pro, Udoo, PCDuino and Parallella, as well as Allwinner. Snappy Core will also soon run on routers, to help address the poor upgrade policies of their vendors.
|
||||
|
||||
In this post, let's see how we can test Ubuntu Snappy Core on Raspberry Pi 2.
|
||||
|
||||
The image for Snappy Core for the RPI2 can be downloaded from the [Raspberry Pi website][2]. Unpacked from the archive, the resulting image should be [written to an SD card][3] of at least 8 GB. Even though the OS is small, atomic upgrades and the rollback function eat up quite a bit of space. After booting up your Raspberry Pi 2 with Snappy Core, you can log into the system with the default username and password being 'ubuntu'.
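If you would rather write the image from the command line, dd is one option. A minimal sketch, where the image file name and target device are placeholders; be absolutely sure that of= points at your SD card and not at a system disk:

    $ sudo dd if=<snappy-image>.img of=/dev/sdX bs=4M
    $ sync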
|
||||
|
||||
![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg)
|
||||
|
||||
sudo is already configured and ready for use. For security reasons you should change the username with:
|
||||
|
||||
$ sudo usermod -l <new name> <old name>
|
||||
|
||||
Alternatively, you can add a new user with the command `adduser`.
|
||||
|
||||
Because the Raspberry Pi lacks a hardware clock, which the Snappy Core image does not take into account, the image has a small bug that will throw a lot of errors when processing commands. It is easy to fix.
|
||||
|
||||
To find out if the bug affects you, use the command:
|
||||
|
||||
$ date
|
||||
|
||||
If the output is "Thu Jan 1 01:56:44 UTC 1970", you can fix it with:
|
||||
|
||||
$ sudo date --set="Sun Apr 04 17:43:26 UTC 2015"
|
||||
|
||||
adapted to your actual time.
|
||||
|
||||
![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg)
|
||||
|
||||
Now you might want to check if there are any updates available. Note that the usual commands:
|
||||
|
||||
$ sudo apt-get update && sudo apt-get dist-upgrade
|
||||
|
||||
will not get you very far though, as Snappy uses its own simplified package management system which is based on dpkg. This makes sense, as Snappy will run on a lot of embedded appliances, and you want things to be as simple as possible.
|
||||
|
||||
Let's dive into the engine room for a minute to understand how things work with Snappy. The SD card you run Snappy on has three partitions besides the boot partition. Two of those house a duplicated file system. Both of those parallel file systems are permanently mounted as "read only", and only one is active at any given time. The third partition holds a partially writable file system and the user's persistent data. With a fresh system, the partition labeled 'system-a' holds one complete file system, called a core, leaving the parallel partition still empty.
|
||||
|
||||
![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg)
|
||||
|
||||
If we run the following command now:
|
||||
|
||||
$ sudo snappy update
|
||||
|
||||
the system will install the update as a complete core, similar to an image, on 'system-b'. You will be asked to reboot your device afterwards to activate the new core.
|
||||
|
||||
After the reboot, run the following command to check if your system is up to date and which core is active.
|
||||
|
||||
$ sudo snappy versions -a
|
||||
|
||||
After rolling out the update and rebooting, you should see that the core that is now active has changed.
|
||||
|
||||
As we have not installed any apps yet, the following command:
|
||||
|
||||
$ sudo snappy update ubuntu-core
|
||||
|
||||
would have been sufficient, and is the way to go if you want to upgrade just the underlying OS. Should something go wrong, you can roll back with:
|
||||
|
||||
$ sudo snappy rollback ubuntu-core
|
||||
|
||||
which will take you back to the system's state before the update.
|
||||
|
||||
![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg)
|
||||
|
||||
Speaking of apps, they are what makes Snappy useful. There are not that many at this point, but the IRC channel #snappy on Freenode is humming along nicely, and with a lot of people involved, the Snappy App Store gets new apps added on a regular basis. You can visit the shop by pointing your browser to http://<ip-address>:4200, and you can install apps right from the shop and then launch them with http://webdm.local in your browser. Building apps yourself for Snappy is not all that hard, and it is [well documented][4]. You can also port DEB packages into the snappy format quite easily.
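Installing from the command line is possible as well. A rough sketch; the package name below is only an example, and the exact snappy CLI syntax has changed between releases:

    $ sudo snappy install xkcd-webserver
    $ sudo snappy versions -a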
|
||||
|
||||
![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg)
|
||||
|
||||
Ubuntu Snappy Core, due to the limited number of available apps, is not overly useful in a productive way at this point in time, although it invites us to dive into the new Snappy package format and play with atomic upgrades the Canonical way. Since it is easy to set up, this seems like a good opportunity to learn something new.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html
|
||||
|
||||
作者:[Ferdinand Thommes][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/ferdinand
|
||||
[1]:http://www.ubuntu.com/things
|
||||
[2]:http://www.raspberrypi.org/downloads/
|
||||
[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html
|
||||
[4]:https://developer.ubuntu.com/en/snappy/
|
@ -1,130 +0,0 @@
|
||||
How to access a Linux server behind NAT via reverse SSH tunnel
|
||||
================================================================================
|
||||
You are running a Linux server at home, which is behind a NAT router or restrictive firewall. Now you want to SSH to the home server while you are away from home. How would you set that up? SSH port forwarding will certainly be an option. However, port forwarding can become tricky if you are dealing with multiple nested NAT environment. Besides, it can be interfered with under various ISP-specific conditions, such as restrictive ISP firewalls which block forwarded ports, or carrier-grade NAT which shares IPv4 addresses among users.
|
||||
|
||||
### What is Reverse SSH Tunneling? ###
|
||||
|
||||
One alternative to SSH port forwarding is **reverse SSH tunneling**. The concept of reverse SSH tunneling is simple. For this, you will need another host (so-called "relay host") outside your restrictive home network, which you can connect to via SSH from where you are. You could set up a relay host using a [VPS instance][1] with a public IP address. What you do then is to set up a persistent SSH tunnel from the server in your home network to the public relay host. With that, you can connect "back" to the home server from the relay host (which is why it's called a "reverse" tunnel). As long as the relay host is reachable to you, you can connect to your home server wherever you are, or however restrictive your NAT or firewall is in your home network.
|
||||
|
||||
![](https://farm8.staticflickr.com/7742/17162647378_c7d9f10de8_b.jpg)
|
||||
|
||||
### Set up a Reverse SSH Tunnel on Linux ###
|
||||
|
||||
Let's see how we can create and use a reverse SSH tunnel. We assume the following. We will be setting up a reverse SSH tunnel from homeserver to relayserver, so that we can SSH to homeserver via relayserver from another computer called clientcomputer. The public IP address of **relayserver** is 1.1.1.1.
|
||||
|
||||
On homeserver, open an SSH connection to relayserver as follows.
|
||||
|
||||
homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1
|
||||
|
||||
Here the port 10022 is any arbitrary port number you can choose. Just make sure that this port is not used by other programs on relayserver.
|
||||
|
||||
The "-R 10022:localhost:22" option defines a reverse tunnel. It forwards traffic on port 10022 of relayserver to port 22 of homeserver.
|
||||
|
||||
With "-fN" option, SSH will go right into the background once you successfully authenticate with an SSH server. This option is useful when you do not want to execute any command on a remote SSH server, and just want to forward ports, like in our case.
|
||||
|
||||
After running the above command, you will be right back to the command prompt of homeserver.
|
||||
|
||||
Log in to relayserver, and verify that 127.0.0.1:10022 is bound to sshd. If so, that means a reverse tunnel is set up correctly.
|
||||
|
||||
relayserver~$ sudo netstat -nap | grep 10022
|
||||
|
||||
----------
|
||||
|
||||
tcp 0 0 127.0.0.1:10022 0.0.0.0:* LISTEN 8493/sshd
|
||||
|
||||
Now from any other computer (e.g., clientcomputer), log in to relayserver. Then access homeserver as follows.
|
||||
|
||||
relayserver~$ ssh -p 10022 homeserver_user@localhost
|
||||
|
||||
One thing to take note of is that the SSH login/password you type for localhost should be for homeserver, not for relayserver, since you are logging in to homeserver via the tunnel's local endpoint. So do not type the login/password for relayserver. After successful login, you will be on homeserver.
|
||||
|
||||
### Connect Directly to a NATed Server via a Reverse SSH Tunnel ###
|
||||
|
||||
While the above method allows you to reach **homeserver** behind NAT, you need to log in twice: first to **relayserver**, and then to **homeserver**. This is because the end point of an SSH tunnel on relayserver is binding to loopback address (127.0.0.1).
|
||||
|
||||
But in fact, there is a way to reach the NATed homeserver directly with a single login to relayserver. For this, you will need to let sshd on relayserver bind the forwarded port not only to the loopback address, but also to an external address, so that it is reachable from external hosts. This is achieved by specifying the **GatewayPorts** option for sshd running on relayserver.
|
||||
|
||||
Open /etc/ssh/sshd_config of **relayserver** and add the following line.
|
||||
|
||||
relayserver~$ vi /etc/ssh/sshd_config
|
||||
|
||||
----------
|
||||
|
||||
GatewayPorts clientspecified
|
||||
|
||||
Restart sshd.
|
||||
|
||||
Debian-based system:
|
||||
|
||||
relayserver~$ sudo /etc/init.d/ssh restart
|
||||
|
||||
Red Hat-based system:
|
||||
|
||||
relayserver~$ sudo systemctl restart sshd
|
||||
|
||||
Now let's initiate a reverse SSH tunnel from homeserver as follows.
|
||||
homeserver~$ ssh -fN -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
|
||||
|
||||
Log in to relayserver and confirm with netstat command that a reverse SSH tunnel is established successfully.
|
||||
|
||||
relayserver~$ sudo netstat -nap | grep 10022
|
||||
|
||||
----------
|
||||
|
||||
tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev
|
||||
|
||||
Unlike a previous case, the end point of a tunnel is now at 1.1.1.1:10022 (relayserver's public IP address), not 127.0.0.1:10022. This means that the end point of the tunnel is reachable from an external host.
|
||||
|
||||
Now from any other computer (e.g., clientcomputer), type the following command to gain access to NATed homeserver.
|
||||
|
||||
clientcomputer~$ ssh -p 10022 homeserver_user@1.1.1.1
|
||||
|
||||
In the above command, while 1.1.1.1 is the public IP address of relayserver, homeserver_user must be the user account associated with homeserver. This is because the real host you are logging in to is homeserver, not relayserver. The latter simply relays your SSH traffic to homeserver.
|
||||
|
||||
### Set up a Persistent Reverse SSH Tunnel on Linux ###
|
||||
|
||||
Now that you understand how to create a reverse SSH tunnel, let's make the tunnel "persistent", so that the tunnel is up and running all the time (regardless of temporary network congestion, SSH timeout, relay host rebooting, etc.). After all, if the tunnel is not always up, you won't be able to connect to your home server reliably.
|
||||
|
||||
For a persistent tunnel, I am going to use a tool called autossh. As the name implies, this program allows you to automatically restart an SSH session should it break for any reason. So it is useful for keeping a reverse SSH tunnel active.
|
||||
|
||||
As the first step, let's set up [passwordless SSH login][2] from homeserver to relayserver. That way, autossh can restart a broken reverse SSH tunnel without the user's involvement.
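In short, that means generating a key pair on homeserver and installing the public key on relayserver, roughly as follows (a sketch; see the linked tutorial for details):

    homeserver~$ ssh-keygen -t rsa
    homeserver~$ ssh-copy-id relayserver_user@1.1.1.1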
|
||||
|
||||
Next, [install autossh][3] on homeserver where a tunnel is initiated.
|
||||
|
||||
From homeserver, run autossh with the following arguments to create a persistent SSH tunnel destined to relayserver.
|
||||
|
||||
homeserver~$ autossh -M 10900 -fN -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
|
||||
|
||||
The "-M 10900" option specifies a monitoring port on relayserver which will be used to exchange test data to monitor an SSH session. This port should not be used by any program on relayserver.
|
||||
|
||||
The "-fN" option is passed to ssh command, which will let the SSH tunnel run in the background.
|
||||
|
||||
The "-o XXXX" options tell ssh to:
|
||||
|
||||
- Use key authentication, not password authentication.
|
||||
- Automatically accept (unknown) SSH host keys.
|
||||
- Exchange keep-alive messages every 60 seconds.
|
||||
- Send up to 3 keep-alive messages without receiving any response back.
|
||||
|
||||
The rest of reverse SSH tunneling related options remain the same as before.
|
||||
|
||||
If you want the SSH tunnel to be automatically brought up upon boot, you can add the above autossh command to /etc/rc.local.
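On distributions that use systemd, a small unit file is a more robust alternative to /etc/rc.local. The sketch below is illustrative: the unit name, user, and autossh path are assumptions, and the -f flag is dropped so that systemd can supervise the process itself.

    [Unit]
    Description=Persistent reverse SSH tunnel to relayserver
    After=network-online.target

    [Service]
    User=homeserver_user
    Environment="AUTOSSH_GATETIME=0"
    ExecStart=/usr/bin/autossh -M 10900 -N -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
    Restart=always
    RestartSec=10

    [Install]
    WantedBy=multi-user.target

Save it as something like /etc/systemd/system/reverse-tunnel.service, then enable it with sudo systemctl enable reverse-tunnel.service so the tunnel comes up at boot.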
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
In this post, I talked about how you can use a reverse SSH tunnel to access a Linux server behind a restrictive firewall or NAT gateway from outside world. While I demonstrated its use case for a home network, you must be careful when applying it for corporate networks. Such a tunnel can be considered as a breach of a corporate policy, as it circumvents corporate firewalls and can expose corporate networks to outside attacks. There is a great chance it can be misused or abused. So always remember its implication before setting it up.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/nanni
|
||||
[1]:http://xmodulo.com/go/digitalocean
|
||||
[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html
|
||||
[3]:http://ask.xmodulo.com/install-autossh-linux.html
|
@ -1,182 +0,0 @@
|
||||
ictlyh Translating
|
||||
Analyzing Linux Logs
|
||||
================================================================================
|
||||
There’s a great deal of information waiting for you within your logs, although it’s not always as easy as you’d like to extract it. In this section we will cover some examples of basic analysis you can do with your logs right away (just search what’s there). We’ll also cover more advanced analysis that may take some upfront effort to set up properly, but will save you time on the back end. Examples of advanced analysis you can do on parsed data include generating summary counts, filtering on field values, and more.
|
||||
|
||||
We’ll show you first how to do this yourself on the command line using several different tools and then show you how a log management tool can automate much of the grunt work and make this so much more streamlined.
|
||||
|
||||
### Searching with Grep ###
|
||||
|
||||
Searching for text is the most basic way to find what you’re looking for. The most common tool for searching text is [grep][1]. This command line tool, available on most Linux distributions, allows you to search your logs using regular expressions. A regular expression is a pattern written in a special language that can identify matching text. The simplest pattern is the string you’re searching for, surrounded by quotes.
|
||||
|
||||
#### Regular Expressions ####
|
||||
|
||||
Here’s an example to find authentication logs for “user hoover” on an Ubuntu system:
|
||||
|
||||
$ grep "user hoover" /var/log/auth.log
|
||||
Accepted password for hoover from 10.0.2.2 port 4792 ssh2
|
||||
pam_unix(sshd:session): session opened for user hoover by (uid=0)
|
||||
pam_unix(sshd:session): session closed for user hoover
|
||||
|
||||
It can be hard to construct regular expressions that are accurate. For example, if we searched for a number like the port “4792” it could also match timestamps, URLs, and other undesired data. In the below example for Ubuntu, it matched an Apache log that we didn’t want.
|
||||
|
||||
$ grep "4792" /var/log/auth.log
|
||||
Accepted password for hoover from 10.0.2.2 port 4792 ssh2
|
||||
74.91.21.46 - - [31/Mar/2015:19:44:32 +0000] "GET /scripts/samples/search?q=4792 HTTP/1.0" 404 545 "-" "-"
|
||||
|
||||
#### Surround Search ####
|
||||
|
||||
Another useful tip is that you can do surround search with grep. This will show you what happened a few lines before or after a match. It can help you debug what led up to a particular error or problem. The B flag gives you lines before, and A gives you lines after. For example, we can see that when someone failed to log in as an admin, they also failed the reverse mapping, which means they might not have a valid domain name. This is very suspicious!
|
||||
|
||||
$ grep -B 3 -A 2 'Invalid user' /var/log/auth.log
|
||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: reverse mapping checking getaddrinfo for 216-19-2-8.commspeed.net [216.19.2.8] failed - POSSIBLE BREAK-IN ATTEMPT!
|
||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: Received disconnect from 216.19.2.8: 11: Bye Bye [preauth]
|
||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: Invalid user admin from 216.19.2.8
|
||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: input_userauth_request: invalid user admin [preauth]
|
||||
Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: Received disconnect from 216.19.2.8: 11: Bye Bye [preauth]
|
||||
|
||||
#### Tail ####
|
||||
|
||||
You can also pair grep with [tail][2] to get the last few lines of a file, or to follow the logs and print them in real time. This is useful if you are making interactive changes like starting a server or testing a code change.
|
||||
|
||||
$ tail -f /var/log/auth.log | grep 'Invalid user'
|
||||
Apr 30 19:49:48 ip-172-31-11-241 sshd[6512]: Invalid user ubnt from 219.140.64.136
|
||||
Apr 30 19:49:49 ip-172-31-11-241 sshd[6514]: Invalid user admin from 219.140.64.136
|
||||
|
||||
A full introduction on grep and regular expressions is outside the scope of this guide, but [Ryan’s Tutorials][3] include more in-depth information.
|
||||
|
||||
Log management systems have higher performance and more powerful searching abilities. They often index their data and parallelize queries so you can quickly search gigabytes or terabytes of logs in seconds. In contrast, this would take minutes or in extreme cases hours with grep. Log management systems also use query languages like [Lucene][4] which offer an easier syntax for searching on numbers, fields, and more.
|
||||
|
||||
### Parsing with Cut, AWK, and Grok ###
|
||||
|
||||
#### Command Line Tools ####
|
||||
|
||||
Linux offers several command line tools for text parsing and analysis. They are great if you want to quickly parse a small amount of data, but they can take a long time to process large volumes of data.
|
||||
|
||||
#### Cut ####
|
||||
|
||||
The [cut][5] command allows you to parse fields from delimited logs. Delimiters are characters like equal signs or commas that break up fields or key value pairs.
|
||||
|
||||
Let’s say we want to parse the user from this log:
|
||||
|
||||
pam_unix(su:auth): authentication failure; logname=hoover uid=1000 euid=0 tty=/dev/pts/0 ruser=hoover rhost= user=root
|
||||
|
||||
We can use the cut command like this to get the text after the eighth equal sign. This example is on an Ubuntu system:
|
||||
|
||||
$ grep "authentication failure" /var/log/auth.log | cut -d '=' -f 8
|
||||
root
|
||||
hoover
|
||||
root
|
||||
nagios
|
||||
nagios
|
||||
|
||||
#### AWK ####
|
||||
|
||||
Alternately, you can use [awk][6], which offers more powerful features to parse out fields. It offers a scripting language so you can filter out nearly everything that’s not relevant.
|
||||
|
||||
For example, let’s say we have the following log line on an Ubuntu system and we want to extract the username that failed to login:
|
||||
|
||||
Mar 24 08:28:18 ip-172-31-11-241 sshd[32701]: input_userauth_request: invalid user guest [preauth]
|
||||
|
||||
Here’s how you can use the awk command. First, use the regular expression /sshd.*invalid user/ to match the sshd invalid user lines. Then print the ninth field, using the default delimiter of a space, with { print $9 }. This outputs the usernames.
|
||||
|
||||
$ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log
|
||||
guest
|
||||
admin
|
||||
info
|
||||
test
|
||||
ubnt
|
||||
|
||||
You can read more about how to use regular expressions and print fields in the [Awk User’s Guide][7].
|
||||
|
||||
#### Log Management Systems ####
|
||||
|
||||
Log management systems make parsing easier and enable users to quickly analyze large collections of log files. They can automatically parse standard log formats like common Linux logs or web server logs. This saves a lot of time because you don’t have to think about writing your own parsing logic when troubleshooting a system problem.
|
||||
|
||||
Here you can see an example log message from sshd which has each of the fields remoteHost and user parsed out. This is a screenshot from Loggly, a cloud-based log management service.
|
||||
|
||||
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.25.09-AM.png)
|
||||
|
||||
You can also do custom parsing for non-standard formats. A common tool to use is [Grok][8] which uses a library of common regular expressions to parse raw text into structured JSON. Here is an example configuration for Grok to parse kernel log files inside Logstash:
|
||||
|
||||
filter{
|
||||
grok {
|
||||
match => {"message" => "%{CISCOTIMESTAMP:timestamp} %{HOST:host} %{WORD:program}%{NOTSPACE} %{NOTSPACE}%{NUMBER:duration}%{NOTSPACE} %{GREEDYDATA:kernel_logs}"
|
||||
}
|
||||
    }
}
|
||||
|
||||
And here is what the parsed output looks like from Grok:
|
||||
|
||||
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.30.37-AM.png)
|
||||
|
||||
### Filtering with Rsyslog and AWK ###
|
||||
|
||||
Filtering allows you to search on a specific field value instead of doing a full text search. This makes your log analysis more accurate because it will ignore undesired matches from other parts of the log message. In order to search on a field value, you need to parse your logs first or at least have a way of searching based on the event structure.
|
||||
|
||||
#### How to Filter on One App ####
|
||||
|
||||
Often, you want to see the logs from just one application. This is easy if your application always logs to a single file. It’s more complicated if you need to filter one application among many in an aggregated or centralized log. Here are several ways to do this:
|
||||
|
||||
1. Use the rsyslog daemon to parse and filter logs. This example writes logs from the sshd application to a file named sshd-messages, then discards the event so it’s not repeated elsewhere. You can try this example by adding it to your rsyslog.conf file.
|
||||
|
||||
:programname, isequal, "sshd" /var/log/sshd-messages
|
||||
&~
|
||||
|
||||
2. Use command line tools like awk to extract the values of a particular field like the sshd username. This example is from an Ubuntu system.
|
||||
|
||||
$ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log
|
||||
guest
|
||||
admin
|
||||
info
|
||||
test
|
||||
ubnt
|
||||
|
||||
3. Use a log management system that automatically parses your logs, then click to filter on the desired application name. Here is a screenshot showing the syslog fields in a log management service called Loggly. We are filtering on the appName “sshd” as indicated by the Venn diagram icon.
|
||||
|
||||
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.02-AM.png)
|
||||
|
||||
#### How to Filter on Errors ####
|
||||
|
||||
One of the most common things people want to see in their logs is errors. Unfortunately, the default syslog configuration doesn’t output the severity of errors directly, making it difficult to filter on them.
|
||||
|
||||
There are two ways you can solve this problem. First, you can modify your rsyslog configuration to output the severity in the log file to make it easier to read and search. In your rsyslog configuration you can add a [template][9] with pri-text such as the following:
|
||||
|
||||
"<%pri-text%> : %timegenerated%,%HOSTNAME%,%syslogtag%,%msg%n"
|
||||
|
||||
This example gives you output in the following format. You can see that the severity in this message is err.
|
||||
|
||||
<authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure
|
||||
|
||||
You can use awk or grep to search for just the error messages. In this example for Ubuntu, we’re including some surrounding syntax like the . and the > which match only this field.
|
||||
|
||||
$ grep '.err>' /var/log/auth.log
|
||||
<authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure
|
||||
|
||||
Your second option is to use a log management system. Good log management systems automatically parse syslog messages and extract the severity field. They also allow you to filter on log messages of a certain severity with a single click.
|
||||
|
||||
Here is a screenshot from Loggly showing the syslog fields with the error severity highlighted to show we are filtering for errors:
|
||||
|
||||
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.00.36-AM.png)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.loggly.com/ultimate-guide/logging/analyzing-linux-logs/
|
||||
|
||||
作者:[Jason Skowronski][a] [Amy Echeverri][b] [ Sadequl Hussain][c]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linkedin.com/in/jasonskowronski
|
||||
[b]:https://www.linkedin.com/in/amyecheverri
|
||||
[c]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
|
||||
[1]:http://linux.die.net/man/1/grep
|
||||
[2]:http://linux.die.net/man/1/tail
|
||||
[3]:http://ryanstutorials.net/linuxtutorial/grep.php
|
||||
[4]:https://lucene.apache.org/core/2_9_4/queryparsersyntax.html
|
||||
[5]:http://linux.die.net/man/1/cut
|
||||
[6]:http://linux.die.net/man/1/awk
|
||||
[7]:http://www.delorie.com/gnu/docs/gawk/gawk_26.html#IDX155
|
||||
[8]:http://logstash.net/docs/1.4.2/filters/grok
|
||||
[9]:http://www.rsyslog.com/doc/v8-stable/configuration/templates.html
|
@ -1,55 +0,0 @@
|
||||
Translating by XLCYun.
|
||||
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 1 - Introduction
|
||||
================================================================================
|
||||
*Author's Note: If by some miracle you managed to click this article without reading the title then I want to re-iterate something... This is an editorial. These are my opinions. They are not representative of Phoronix, or Michael, these are my own thoughts.*
|
||||
|
||||
Additionally, yes... This is quite possibly a flame-bait article. I hope the community is better than that, because I do want to start a discussion and give feedback to both the KDE and Gnome communities. For that reason, when I point out what I see as a flaw, I will try to be specific and direct so that any discussion can be equally specific and direct. For the record: The alternative title for this article was "Death By A Thousand [Paper Cuts][1]".
|
||||
|
||||
Now, with that out of the way... Onto the article.
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920)
|
||||
|
||||
When I sent the [Fedora 22 KDE Review][2] off to Michael I did it with a bit of a bad taste in my mouth. It wasn't because I didn't like KDE, or hadn't been enjoying Fedora, far from it. In fact, I started to transition my T450s over to Arch Linux but quickly decided against that, as I enjoyed the level of convenience that Fedora brings to me for many things.
|
||||
|
||||
The reason I had a bad taste in my mouth was because the Fedora developers put a lot of time and effort into their "Workstation" product and I wasn't seeing any of it. I wasn't using Fedora the way the main developers had intended it to be used and therefore wasn't getting the "Fedora Experience." It felt like someone reviewing Ubuntu by using Kubuntu, using a Hackintosh to review OS X, or reviewing Gentoo by using Sabayon. A lot of readers in the forums bash on Michael for reviewing distributions in their default configurations-- myself included. While I still do believe that reviews should be done under 'real-world' configurations, I do see the value in reviewing something in the condition it was given to you-- for better or worse.
|
||||
|
||||
It was with that attitude in mind that I decided to take a dip in the Gnome pool.
|
||||
|
||||
I do, however, need to add one more disclaimer... I am looking at KDE and Gnome as they are packaged in Fedora. OpenSUSE, Kubuntu, Arch, etc, might all have different implementations of each desktop that will change whether my specific 'pain points' are relevant to your distribution. Furthermore, despite the title, this is going to be a VERY KDE heavy article. I called the article what I did because it was actually USING Gnome that made me realize how many "paper cuts" KDE actually has.
|
||||
|
||||
### Login Screen ###
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920)
|
||||
|
||||
I normally don't mind Distributions shipping distro-specific themes, because most of them make the desktop look nicer. I finally found my exception.
|
||||
|
||||
First impressions count for a lot, right? Well, GDM definitely gets this one right. The login screen is incredibly clean, with consistent design language through every single part of it. The use of common-language icons instead of text boxes helps in that regard.
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920)
|
||||
|
||||
That is not to say that the Fedora 22 KDE login screen-- now SDDM rather than KDM-- looks 'bad' per se, but it's definitely more jarring.
|
||||
|
||||
Where's the fault? The top bar. Look at the Gnome screenshot-- you select a user and you get a tiny little gear symbol for selecting what session you want to log into. The design is clean, it gets out of your way, you could honestly miss it completely if you weren't paying attention. Now look at the blue KDE screenshot: the bar doesn't look like it was even rendered using the same widgets, and its entire placement feels like an afterthought of "Well shit, we need to throw this option somewhere..."
|
||||
|
||||
The same can be said for the Reboot and Shutdown options in the top right. Why not just a power button that opens a drop-down menu with Reboot, Shutdown, and Suspend? Having the buttons be different colors than the background certainly makes them stick out and be noticeable... but I don't think in a good way. Again, they feel like an afterthought.
|
||||
|
||||
GDM is also far more useful from a practical standpoint; look again along the top row. The time is listed, there's a volume control so that if you are trying to be quiet you can mute all sounds before you even log in, and there's an accessibility button for things like high contrast, zooming, text to speech, etc., all available via simple toggle buttons.
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920)
|
||||
|
||||
Swap it to upstream's Breeze theme and... suddenly most of my complaints are fixed. Common-language icons, everything is in the center of the screen, but the less important stuff is off to the sides. This creates a nice harmony between the top and bottom of the screen since they are equally empty. You still have a text box for the session switcher, but I can forgive that since the power buttons are now common language icons. Current time is available which is a nice touch, as is a battery life indicator. Sure gnome still has a few nice additions, such as the volume applet and the accessibility buttons, but Breeze is a step up from Fedora's KDE theme.
|
||||
|
||||
Go to Windows (pre-Windows 8 & 10...) or OS X and you will see similar things – very clean, get-out-of-your-way lock screens and login screens that are devoid of text boxes or other widgets that distract the eye. It's a design that works and that is non-distracting. Fedora... Ship Breeze by default. VDG got the design of the Breeze theme right. Don't mess it up.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1
|
||||
|
||||
作者:Eric Griffith
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts
|
||||
[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1
|
@ -1,32 +0,0 @@
|
||||
Translating by XLCYun.
|
||||
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 2 - The GNOME Desktop
|
||||
================================================================================
|
||||
### The Desktop ###
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920)
|
||||
|
||||
I spent the first five days of my week logging into Gnome manually-- not turning on automatic login. On the night of the fifth day I got annoyed with having to log in by hand, so I went into the User Manager and turned on automatic login. The next time I logged in I got a prompt: "Your keychain was not unlocked. Please enter your password to unlock your keychain." That was when I realized something... Gnome had been automatically unlocking my keychain—my wallet in KDE speak-- every time I logged in via GDM. It was only when I bypassed GDM's login that Gnome had to step in and make me do it manually.
|
||||
|
||||
Now, I am under the personal belief that if you enable automatic login then your key chain should be unlocked automatically as well-- otherwise what's the point? Either way you still have to type in your password and at least if you hit the GDM Login screen you have a chance to change your session if you want to.
|
||||
|
||||
But, regardless of that, it was at that moment that I realized it was such a simple thing that made the desktop feel so much more like it was working WITH ME. When I log into KDE via SDDM? Before the splash screen is even finished loading there is a window popping up over top the splash animation-- thereby disrupting the splash screen-- prompting me to unlock my KDE wallet or GPG keyring.
|
||||
|
||||
If a wallet doesn't exist already you get prompted to create a wallet-- why couldn't one have been created for me at user creation?-- and then get asked to pick between two encryption methods, where one is even implied as insecure (Blowfish), why are you letting me pick something that's insecure for my security? Author's Note: If you install the actual KDE spin and don't just install KDE after-the-fact then a wallet is created for you at user creation. Unfortunately it's not unlocked for you automatically, and it seems to use the older Blowfish method rather than the new, and more secure, GPG method.
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920)
|
||||
|
||||
If you DO pick the secure one (GPG) then it tries to load a GPG key... which I hope you had created already, because if you don't you get yelled at. How do you create one? Well, it doesn't offer to make one for you... nor does it tell you how... and if you do manage to figure out that you are supposed to use KGpg to create the key then you get taken through several menus and prompts that are nothing but confusing to new users. Why are you asking me where the GPG binary is located? How on earth am I supposed to know? Can't you just use the most recent one if there's more than one? And if there IS only one then, I ask again, why are you prompting me?
|
||||
|
||||
Why are you asking me what key size and encryption algorithm to use? You select 2048 and RSA/RSA by default, so why not just use those? If you want to have those options available then throw them under the "Expert mode" button that is right there. This isn't just about having configuration options available, it's about needless things that get thrown in the user's face by default. This is going to be a theme for the rest of the article... KDE needs better sane defaults. Configuration is great, I love the configuration I get with KDE, but it needs to learn when to and when not to prompt. It also needs to learn that "Well, it's configurable" is no excuse for bad defaults. Defaults are what users see initially; bad defaults will lose users.
|
||||
|
||||
Let's move on from the key chain issue though, because I think I made my point.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2
|
||||
|
||||
作者:Eric Griffith
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,62 +0,0 @@
|
||||
Translating by XLCYun.
|
||||
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 3 - GNOME Applications
|
||||
================================================================================
|
||||
### Applications ###
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_videos_show&w=1920)
|
||||
|
||||
This is the one area where things are basically a wash. Each environment has a few applications that are really nice, and a few that are not so great. Once again though, Gnome gets the little things right in a way that KDE completely misses. None of KDE's applications are bad or broken, that's not what I'm saying. They function. But that's about it. To use an analogy: they passed the test, but they sure didn't get anywhere close to 100% on it.
|
||||
|
||||
Gnome on the left, KDE on the right. Dragon performs perfectly fine; it has clearly marked buttons for playing a file, URL, or a disc, just as you can do under Gnome Videos... but Gnome takes it one extra little step further in the name of convenience and user friendliness: they show all the videos detected on your system by default, without you having to do anything. KDE has Baloo-- just as they had Nepomuk before that-- why not use them? They've got a list of video files that is freely accessible... but they don't make use of the feature.
|
||||
|
||||
Moving on... Music Players.
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_rhythmbox_show&w=1920)
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_amarok_show&w=1920)
|
||||
|
||||
Both of these applications, Rhythmbox on the left and Amarok on the right, were opened up and then a screenshot was immediately taken; nothing was clicked or altered. See the difference? Rhythmbox looks like a music player. It's direct, there are obvious ways to sort the results, and it knows what it's trying to be and what its job is: to play music.
|
||||
|
||||
Amarok feels like one of those tech demos, or library demos, where someone puts every option and extension they possibly can inside one application in order to show them off-- it's never something that gets shipped as production, it's just there to show off bits and pieces. And that's exactly what Amarok feels like: it's someone trying to show off every single possible cool thing they can shove into a media player without ever stopping to think "Wait, what were we trying to write again? An app to play music?"
|
||||
|
||||
Just look at the default layout. What is front and center for the user? A visualizer and Wikipedia integration-- the largest and most prominent column on the page. What's the second largest? Playlist list. Third largest, aka smallest? The actual music listing. How on earth are these sane defaults for a core application?
|
||||
|
||||
Software Managers! Something that has seen a lot of push in recent years and will likely only see a bigger push in the months to come. Unfortunately, it's another area where KDE was so close... and then fell on its face right at the finish line.
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_software_show&w=1920)
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apper_show&w=1920)
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon_show&w=1920)
|
||||
|
||||
Gnome Software is probably my new favorite software center, minus one gripe which I will get to in a bit. Muon, I wanted to like you. I really did. But you are a design nightmare. When the VDG was drawing up plans for you (mockup below), you looked pretty slick. Good use of white space, clean design, nice category listing, your whole not-being-split-into-two-applications.
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon1_show&w=1920)
|
||||
|
||||
Then someone got around to coding you and doing your actual UI, and I can only guess they were drunk while they did it.
|
||||
|
||||
Let's look at Gnome Software. What's smack dab in the middle? The application, its screenshots, its description, etc. What's smack dab in the middle of Muon? Gigantic waste of white space. Gnome Software also includes the lovely convenience feature of putting a "Launch" button right there in case you already have an application installed. Convenience and ease of use are important, people. Honestly, JUST having things in Muon be centered aligned would probably make things look better already.
|
||||
|
||||
What's along the top edge of Gnome Software, like a tab listing? All Software, Installed, Updates. Clean language, direct, to the point. Muon? Well, we have "Discover", which works okay as far as language goes, and then we have Installed, and then nothing. Where's updates?
|
||||
|
||||
Well.. the developers decided to split updates off into its own application, thus requiring you to open two applications to handle your software-- one to install it, and one to update it-- going against every Software Center paradigm that has ever existed since the Synaptic graphical package manager.
|
||||
|
||||
I'm not going to show it in a screenshot just because I don't want to have to clean up my system afterwards, but if you go into Muon and start installing something, the way it shows this is by adding a little tab to the bottom of your screen with the application's name. That tab doesn't go away when the application is done installing either, so if you're installing a lot of applications at a single time you'll just slowly accumulate tabs along the bottom that you then have to go through and clean up manually, because if you don't they grow off the screen and you have to swipe through them all to get to the most recent ones. Think: opening 50 tabs in Firefox. Major annoyance, major inconvenience.
|
||||
|
||||
I did say I would bash on Gnome a bit, and I meant it. Muon does get one thing very right that Gnome Software doesn't. Under the settings bar Muon has an option for "Show Technical Packages" aka: compilers, software libraries, non-graphical applications, applications without AppData, etc. Gnome doesn't. If you want to install any of those you have to drop down to the terminal. I think that's wrong. I certainly understand wanting to push AppData but I think they pushed it too soon. What made me realize Gnome didn't have this setting was when I went to install PowerTop and couldn't get Gnome to display it-- no AppData, no "Show Technical Packages" setting.
|
||||
|
||||
Doubly unfortunate is the fact that you can't "just use apper" if you're under KDE since...
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apperlocal_show&w=1920)
|
||||
|
||||
Apper's support for installing local packages has been broken since Fedora 19 or so, almost two years. I love the attention to detail and quality.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=3
|
||||
|
||||
作者:Eric Griffith
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,52 +0,0 @@
|
||||
Translating by XLCYun.
|
||||
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 4 - GNOME Settings
|
||||
================================================================================
|
||||
### Settings ###
|
||||
|
||||
There are a few specific KDE Control modules that I am going to pick at, mostly because they are so laughably horrible compared to their Gnome counterparts that it's honestly pathetic.
|
||||
|
||||
First one up? Printers.
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920)
|
||||
|
||||
Gnome is on the left, KDE is on the right. You know what the difference is between the printer applet on the left, and the one on the right? When I opened up Gnome Control Center and hit "Printers" the applet popped up and nothing happened. When I opened up KDE System Settings and hit "Printers" I got a password prompt. Before I was even allowed to LOOK at the printers I had to give up ROOT'S password.
|
||||
|
||||
Let me just re-iterate that. In this, the days of PolicyKit and Logind, I am still being asked for Root's password for what should be a sudo operation. I didn't even SETUP root's password when I installed the system. I had to drop down to Konsole and run 'sudo passwd root' so that I could GIVE root a password so that I could go back into System Setting's printer applet and then give up root's password to even LOOK at what printers were available. Once I did that I got prompted for root's password AGAIN when I hit "Add Printer" then I got prompted for root's password AGAIN after I went through and selected a printer and driver. Three times I got asked for ROOT'S password just to add a printer to the system.
|
||||
|
||||
When I added a printer under Gnome I didn't get prompted for my SUDO password until I hit "Unlock" in the printer applet. I got asked once, then I never got asked again. KDE, I am begging you... Adopt Gnome's "Unlock" methodology. Do not prompt for a password until you really need one. Furthermore, whatever library is out there that allows for KDE applications to bypass PolicyKit / Logind (if its available) and prompt directly for root... Bin that code. If this was a multi-user system I either have to give up root's password, or be there every second of every day in order to put it in any time a user might have to update, change, or add a new printer. Both options are completely unacceptable.
|
||||
|
||||
One more thing...
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920)
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920)
|
||||
|
||||
Question to the forums: What looks cleaner to you? I had this realization when I was writing this article: Gnome's applet makes it very clear where any additional printers are going to go, they set aside a column on the left to list them. Before I added a second printer to KDE, and it suddenly grew a left side column, I had this nightmare-image in my head of the applet just shoving another icon into the screen and them being listed out like preview images in a folder of pictures. I was pleasantly surprised to see that I was wrong but the fact that the applet just 'grew' another column that didn't exist before and drastically altered its presentation is not really 'good' either. It's a design that's confusing, shocking, and non-intuitive.
|
||||
|
||||
Enough about printers though... Next KDE System Setting that is up for my public stoning? Multimedia, Aka Phonon.
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920)
|
||||
|
||||
As always, Gnome's on the left, KDE is on the right. Let's just run through the Gnome settings first... The eyes go left to right, top to bottom, right? So let's do the same. First up: volume control slider. The blue hint against the empty bar with 100% clearly marked removes all confusion about which way is "volume up." Immediately after the slider is an easy On/Off toggle that functions as mute on/off. Points to Gnome for remembering what the volume was set to BEFORE I muted sound, and returning to that same level AFTER I press volume-up to un-mute. Kmixer, you amnesiac piece of crap, I wish I could say as much about you.
|
||||
|
||||
Moving on! Tabbed options for Output, Input and Applications? With per application volume controls within easy reach? Gnome I love you more and more with every passing second. Balance options, sound profiles, and a clearly marked "Test Speakers" option.
|
||||
|
||||
I'm not sure how this could have been implemented in a cleaner, more concise way. Yes, it's just a Gnome-ized Pavucontrol but I think that's the point. Pavucontrol got it mostly right to begin with, the Sound applet in Gnome Control Center just refines it slightly to make it even closer to perfect.
|
||||
|
||||
Phonon, you're up. And let me start by saying: What the fsck am I looking at? -I- get that I am looking at the priority list for the audio devices on the system, but the way it is presented is a bit of a nightmare. Also where are the things the user probably cares about? A priority list is a great thing to have, it SHOULD be available, but it's something the user messes with once or twice and then never touches again. It's not important, or common, enough to warrant being front and center. Where's the volume slider? Where's per application controls? The things that users will be using more frequently? Well.. those are under Kmix, a separate program, with its own settings and configuration... not under the System Settings... which kind of makes System Settings a bit of a misnomer. And in that same vein, Let's hop over to network settings.
|
||||
|
||||
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920)
|
||||
|
||||
Presented above is the Gnome Network Settings. KDE's isn't included because of the reason I'm about to hit on. If you go to KDE's System Settings and hit any of the three options under the "Network" Section you get tons of options: Bluetooth settings, default username and password for Samba shares (Seriously, "Connectivity" only has 2 options: Username and password for SMB shares. How the fsck does THAT deserve the all-inclusive title "Connectivity"?), controls for Browser Identification (which only work for Konqueror...a dead project), proxy settings, etc... Where's my wifi settings? They aren't there. Where are they? Well, they are in the network applet's private settings... not under Network Settings...
|
||||
|
||||
KDE, you're killing me. You have "System Settings" USE IT!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4
|
||||
|
||||
作者:Eric Griffith
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,40 +0,0 @@
|
||||
Translating by XLCYun.
|
||||
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 5 - Conclusion
|
||||
================================================================================
|
||||
### User Experience and Closing Thoughts ###
|
||||
|
||||
When Gnome 2.x and KDE 4.x were going head to head... I jumped between the two quite happily. Some things I loved, some things I hated, but overall they were both a pleasure to use. Then Gnome 3.x came around, along with all of the drama around Gnome Shell. I swore off Gnome and avoided it every chance I could. It wasn't user friendly, it was non-intuitive, and it broke an established paradigm in preparation for tablets taking over the world... a future that, judging from the dropping sales of tablets, will never come.
|
||||
|
||||
Eight releases of Gnome 3 later and the unimaginable happened. Gnome got user friendly. Gnome got intuitive. Is it perfect? Of course not. I still hate the paradigm it tries to push, I hate how it tries to force a work flow onto me, but both of those things can be gotten used to with time and patience. Once you have managed to look past Gnome Shell's alien appearance and you start interacting with it and the other parts of Gnome (Control Center especially) you see what Gnome has definitely gotten right: the little things. The attention to detail.
|
||||
|
||||
People can adapt to new paradigms, people can adapt to new work flows-- the iPhone and iPad proved that-- but what will always bother them are the paper cuts.
|
||||
|
||||
Which brings up an important distinction between KDE and Gnome. Gnome feels like a product. It feels like a singular experience. When you use it, it feels like it is complete and that everything you need is at your fingertips. It feels like THE Linux desktop in the same way that Windows or OS X have THE desktop experience: what you need is there and it was all written by the same guys working on the same team towards the same goal. Hell, even an application prompting for sudo access feels like an intentional part of the desktop under Gnome, much the way that it is under Windows. In KDE it's just some random-looking window popup that any application could have created. It doesn't feel like a part of the system stopping and going "Hey! Something has requested administrative rights! Do you want to let it go through?" in an official capacity.
|
||||
|
||||
KDE doesn't feel like a cohesive experience. KDE doesn't feel like it has a direction it's moving in, and it doesn't feel like a full experience. KDE feels like it's a bunch of pieces that are moving in a bunch of different directions, that just happen to have a shared toolkit beneath them. If that's what the developers are happy with, then fine, good for them, but if the developers still have the hope of offering the best experience possible then the little stuff needs to matter. The user experience and being intuitive need to be at the forefront of every single application; there needs to be a vision of what KDE wants to offer -and- how it should look.
|
||||
|
||||
Is there anything stopping me from using Gnome Disks under KDE? Rhythmbox? Evolution? Nope. Nope. Nope. But that misses the point. Gnome and KDE both market themselves as "Desktop Environments." They are supposed to be full -environments-; that means all the pieces come and fit together, that you use that environment's tools because they are saying "We support everything you need to have a full desktop." Honestly? Only Gnome seems to fit the bill of being complete. KDE feels half-finished when it comes to the "coming together" part, let alone offering everything you need for a "full experience". There's no counterpart to Gnome Disks-- kpartitionmanager prompts for root. No "First Time User" run-through-- it just now got a user manager in Kubuntu. Hell, Gnome even provides a Maps, Notes, Calendar and Clock application. Do all of these applications matter 100%? No, of course not. But the fact that Gnome has them helps to push the idea that Gnome is a full and complete experience.
|
||||
|
||||
My complaints about KDE are not impossible to fix, not by a long shot. But it requires people to care. It requires developers to take pride in their work beyond just function-- form counts for a whole hell of a lot. Don't take away the user's ability to configure things-- the lack of configuration is one of my biggest gripes with GNOME 3.x, but don't use "Well you can configure it however you want," as an excuse for not providing sane defaults. The defaults are what users are going to see, they are what the users are going to judge from the first moment they open your application. Make it a good impression.
|
||||
|
||||
I know the KDE developers know design matters, that is WHY the Visual Design Group exists, but it feels like they aren't using the VDG to its fullest. And therein lies KDE's hamartia. It's not that KDE can't be complete, it's not that it can't come together and fix the downfalls, it's just that they haven't. They aimed for the bullseye... but they missed.
|
||||
|
||||
And before anyone says it... Don't say "Patches are welcome." Because while I can happily submit patches for the individual annoyances, more will just keep coming as developers keep on their merry way of doing things in non-intuitive ways. This isn't about Muon not being center-aligned. This isn't about Amarok having an ugly UI. This isn't about the volume and brightness pop-up notifiers taking up a large chunk of my screen real-estate every time I hit my hotkeys (seriously, someone shrink those things).
|
||||
|
||||
This is about a mentality of apathy, this is about developers apparently not thinking things through when they make the UI for their applications. Everything the KDE Community does works fine. Amarok plays music. Dragon Player plays videos. Kwin / Qt & kdelibs is seemingly more power efficient than Mutter / gtk (according to my battery life times. Non-scientific testing). Those things are all well and good, and important... but the presentation matters too. Arguably, the presentation matters the most because that is what users see and interact with.
|
||||
|
||||
To KDE application developers... Get the VDG involved. Make every single 'core' application get its design vetted and approved by the VDG, have a UI/UX expert from the VDG go through the usage patterns and usage flow of your application to make sure it's intuitive. Hell, even just posting a mockup to the VDG forums and asking for feedback would probably get you some nice pointers and feedback for whatever application you're working on. You have this great resource there; now actually use it.
|
||||
|
||||
I am not trying to sound ungrateful. I love KDE, I love the work and effort that volunteers put into giving Linux users a viable desktop, and an alternative to Gnome. And it is because I care that I write this article. Because I want to see KDE excel, I want to see it go further and farther than it has before. But doing that requires work on everyone's part, and it requires that people don't hold back criticism. It requires that people are honest about their interaction with the system and where it falls apart. If we can't give direct criticism, if we can't say "This sucks!" then it will never get better.
|
||||
|
||||
Will I still use Gnome after this week? Probably not, no. Gnome is still trying to force a workflow on me that I don't want to follow or abide by, and I feel less productive when I'm using it because it doesn't follow my paradigm. For my friends, though, when they ask me "What desktop environment should I use?" I'm probably going to recommend Gnome, especially if they are less technical users who want things to "just work." And that is probably the most damning assessment I could make in regards to the current state of KDE.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5
|
||||
|
||||
作者:Eric Griffith
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,239 +0,0 @@
|
||||
translation by strugglingyouth
|
||||
|
||||
How to collect NGINX metrics - Part 2
|
||||
================================================================================
|
||||
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_2.png)
|
||||
|
||||
### How to get the NGINX metrics you need ###
|
||||
|
||||
How you go about capturing metrics depends on which version of NGINX you are using, as well as which metrics you wish to access. (See [the companion article][1] for an in-depth exploration of NGINX metrics.) Free, open-source NGINX and the commercial product NGINX Plus both have status modules that report metrics, and NGINX can also be configured to report certain metrics in its logs:
|
||||
|
||||
Note: table
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: center;">
|
||||
<col style="text-align: center;">
|
||||
<col style="text-align: center;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th rowspan="2" style="text-align: left;">Metric</th>
|
||||
<th colspan="3" style="text-align: center;">Availability</th>
|
||||
</tr>
|
||||
<tr>
|
||||
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source">NGINX (open-source)</a></th>
|
||||
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus">NGINX Plus</a></th>
|
||||
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#logs">NGINX logs</a></th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">accepts / accepted</td>
|
||||
<td style="text-align: center;">x</td>
|
||||
<td style="text-align: center;">x</td>
|
||||
<td style="text-align: center;"></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">handled</td>
|
||||
<td style="text-align: center;">x</td>
|
||||
<td style="text-align: center;">x</td>
|
||||
<td style="text-align: center;"></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">dropped</td>
|
||||
<td style="text-align: center;">x</td>
|
||||
<td style="text-align: center;">x</td>
|
||||
<td style="text-align: center;"></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">active</td>
|
||||
<td style="text-align: center;">x</td>
|
||||
<td style="text-align: center;">x</td>
|
||||
<td style="text-align: center;"></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">requests / total</td>
|
||||
<td style="text-align: center;">x</td>
|
||||
<td style="text-align: center;">x</td>
|
||||
<td style="text-align: center;"></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">4xx codes</td>
|
||||
<td style="text-align: center;"></td>
|
||||
<td style="text-align: center;">x</td>
|
||||
<td style="text-align: center;">x</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">5xx codes</td>
|
||||
<td style="text-align: center;"></td>
|
||||
<td style="text-align: center;">x</td>
|
||||
<td style="text-align: center;">x</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">request time</td>
|
||||
<td style="text-align: center;"></td>
|
||||
<td style="text-align: center;"></td>
|
||||
<td style="text-align: center;">x</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
#### Metrics collection: NGINX (open-source) ####
|
||||
|
||||
Open-source NGINX exposes several basic metrics about server activity on a simple status page, provided that you have the HTTP [stub status module][2] enabled. To check if the module is already enabled, run:
|
||||
|
||||
nginx -V 2>&1 | grep -o with-http_stub_status_module
|
||||
|
||||
The status module is enabled if you see with-http_stub_status_module as output in the terminal.
|
||||
|
||||
If that command returns no output, you will need to enable the status module. You can use the --with-http_stub_status_module configuration parameter when [building NGINX from source][3]:
|
||||
|
||||
./configure \
|
||||
… \
|
||||
--with-http_stub_status_module
|
||||
make
|
||||
sudo make install
|
||||
|
||||
After verifying the module is enabled or enabling it yourself, you will also need to modify your NGINX configuration to set up a locally accessible URL (e.g., /nginx_status) for the status page:
|
||||
|
||||
server {
|
||||
location /nginx_status {
|
||||
stub_status on;
|
||||
|
||||
access_log off;
|
||||
allow 127.0.0.1;
|
||||
deny all;
|
||||
}
|
||||
}
|
||||
|
||||
Note: The server blocks of the NGINX config are usually found not in the master configuration file (e.g., /etc/nginx/nginx.conf) but in supplemental configuration files that are referenced by the master config. To find the relevant configuration files, first locate the master config by running:
|
||||
|
||||
nginx -t
|
||||
|
||||
Open the master configuration file listed, and look for lines beginning with include near the end of the http block, such as:
|
||||
|
||||
include /etc/nginx/conf.d/*.conf;
|
||||
|
||||
In one of the referenced config files you should find the main server block, which you can modify as above to configure NGINX metrics reporting. After changing any configurations, reload the configs by executing:
|
||||
|
||||
nginx -s reload
|
||||
|
||||
Now you can view the status page to see your metrics:
|
||||
|
||||
Active connections: 24
|
||||
server accepts handled requests
|
||||
1156958 1156958 4491319
|
||||
Reading: 0 Writing: 18 Waiting : 6
|
||||
|
||||
Note that if you are trying to access the status page from a remote machine, you will need to whitelist the remote machine’s IP address in your status configuration, just as 127.0.0.1 is whitelisted in the configuration snippet above.
|
||||
|
||||
The NGINX status page is an easy way to get a quick snapshot of your metrics, but for continuous monitoring you will need to automatically record that data at regular intervals. Parsers for the NGINX status page already exist for monitoring tools such as [Nagios][4] and [Datadog][5], as well as for the statistics collection daemon [collectD][6].
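To give a concrete idea of what those parsers do, here is a minimal shell sketch (not part of any of the tools above) that polls the status page every 10 seconds and prints the active connection count and the cumulative request count; it assumes the /nginx_status location configured earlier is reachable from localhost:

#!/bin/sh
# Poll the stub_status page and print active connections and total requests.
URL="http://127.0.0.1/nginx_status"   # assumes the location block shown above
while true; do
    curl -s "$URL" | awk -v ts="$(date +%T)" '
        /^Active connections/        { active = $3 }
        /^ *[0-9]+ +[0-9]+ +[0-9]+/  { requests = $3 }
        END { print ts, "active:", active, "requests:", requests }'
    sleep 10
done

A real monitoring setup would ship these numbers somewhere durable, but even a loop like this is enough to watch request counts climb in real time.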
|
||||
|
||||
#### Metrics collection: NGINX Plus ####
|
||||
|
||||
The commercial NGINX Plus provides [many more metrics][7] through its ngx_http_status_module than are available in open-source NGINX. Among the additional metrics exposed by NGINX Plus are bytes streamed, as well as information about upstream systems and caches. NGINX Plus also reports counts of all HTTP status code types (1xx, 2xx, 3xx, 4xx, 5xx). A sample NGINX Plus status board is available [here][8].
|
||||
|
||||
![NGINX Plus status board](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/status_plus-2.png)
|
||||
|
||||
*Note: the “Active” connections on the NGINX Plus status dashboard are defined slightly differently than the Active state connections in the metrics collected via the open-source NGINX stub status module. In NGINX Plus metrics, Active connections do not include connections in the Waiting state (aka Idle connections).*
|
||||
|
||||
NGINX Plus also reports [metrics in JSON format][9] for easy integration with other monitoring systems. With NGINX Plus, you can see the metrics and health status [for a given upstream grouping of servers][10], or drill down to get a count of just the response codes [from a single server][11] in that upstream:
|
||||
|
||||
{"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055}
|
||||
|
||||
To enable the NGINX Plus metrics dashboard, you can add a status server block inside the http block of your NGINX configuration. ([See the section above][12] on collecting metrics from open-source NGINX for instructions on locating the relevant config files.) For example, to set up a status dashboard at http://your.ip.address:8080/status.html and a JSON interface at http://your.ip.address:8080/status, you would add the following server block:
|
||||
|
||||
server {
|
||||
listen 8080;
|
||||
root /usr/share/nginx/html;
|
||||
|
||||
location /status {
|
||||
status;
|
||||
}
|
||||
|
||||
location = /status.html {
|
||||
}
|
||||
}
|
||||
|
||||
The status pages should be live once you reload your NGINX configuration:
|
||||
|
||||
nginx -s reload
|
||||
|
||||
The official NGINX Plus docs have [more details][13] on how to configure the expanded status module.
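Once the configuration above is live, it is worth sanity-checking the JSON interface from the command line. A minimal sketch, assuming the /status location shown above (the exact sub-paths available depend on your NGINX Plus version, and the upstream name is a placeholder):

# Fetch the full metrics document and pretty-print it
curl -s http://your.ip.address:8080/status | python -m json.tool

# Drill down to a single upstream group, as in the demo links referenced above
curl -s http://your.ip.address:8080/status/upstreams/<your_upstream_name>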
|
||||
|
||||
#### Metrics collection: NGINX logs ####
|
||||
|
||||
NGINX’s [log module][14] writes configurable access logs to a destination of your choosing. You can customize the format of your logs and the data they contain by [adding or subtracting variables][15]. The simplest way to capture detailed logs is to add the following line in the server block of your config file (see [the section][16] on collecting metrics from open-source NGINX for instructions on locating your config files):
|
||||
|
||||
access_log logs/host.access.log combined;
|
||||
|
||||
After changing any NGINX configurations, reload the configs by executing:
|
||||
|
||||
nginx -s reload
|
||||
|
||||
The “combined” log format, included by default, captures [a number of key data points][17], such as the actual HTTP request and the corresponding response code. In the example logs below, NGINX logged a 200 (success) status code for a request for /index.html and a 404 (not found) error for the nonexistent /fail.
|
||||
|
||||
127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
|
||||
|
||||
127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
|
||||
|
||||
You can log request processing time as well by adding a new log format to the http block of your NGINX config file:
|
||||
|
||||
log_format nginx '$remote_addr - $remote_user [$time_local] '
|
||||
'"$request" $status $body_bytes_sent $request_time '
|
||||
'"$http_referer" "$http_user_agent"';
|
||||
|
||||
And by adding or modifying the access_log line in the server block of your config file:
|
||||
|
||||
access_log logs/host.access.log nginx;
|
||||
|
||||
After reloading the updated configs (by running nginx -s reload), your access logs will include response times, as seen below. The units are seconds, with millisecond resolution. In this instance, the server received a request for /big.pdf, returning a 206 (Partial Content, a success-class code) after sending 33973115 bytes. Processing the request took 0.202 seconds (202 milliseconds):
|
||||
|
||||
127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
|
||||
|
||||
You can use a variety of tools and services to parse and analyze NGINX logs. For instance, [rsyslog][18] can monitor your logs and pass them to any number of log-analytics services; you can use a free, open-source tool such as [logstash][19] to collect and analyze logs; or you can use a unified logging layer such as [Fluentd][20] to collect and parse your NGINX logs.
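Before reaching for a full pipeline, you can also get a quick read on latency straight from the shell. This is a minimal sketch assuming the custom "nginx" log format defined above, where $request_time ends up as the 11th whitespace-separated field:

# Count requests and compute average and maximum request time (in seconds)
awk '{ total += $11; if ($11 > max) max = $11; n++ }
     END { if (n) printf "requests: %d  avg: %.3fs  max: %.3fs\n", n, total/n, max }' \
    logs/host.access.log

It is a rough measure (a request line with an unusual number of tokens would shift the columns), but it is often enough to spot a latency regression before setting up anything heavier.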
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
Which NGINX metrics you monitor will depend on the tools available to you, and whether the insight provided by a given metric justifies the overhead of monitoring that metric. For instance, is measuring error rates important enough to your organization to justify investing in NGINX Plus or implementing a system to capture and analyze logs?
|
||||
|
||||
At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][21], and get started right away with a [free trial of Datadog][22].
|
||||
|
||||
----------
|
||||
|
||||
Source Markdown for this post is available [on GitHub][23]. Questions, corrections, additions, etc.? Please [let us know][24].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
|
||||
作者:K Young
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
|
||||
[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html
|
||||
[3]:http://wiki.nginx.org/InstallOptions
|
||||
[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx
|
||||
[5]:http://docs.datadoghq.com/integrations/nginx/
|
||||
[6]:https://collectd.org/wiki/index.php/Plugin:nginx
|
||||
[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data
|
||||
[8]:http://demo.nginx.com/status.html
|
||||
[9]:http://demo.nginx.com/status
|
||||
[10]:http://demo.nginx.com/status/upstreams/demoupstreams
|
||||
[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses
|
||||
[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
|
||||
[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example
|
||||
[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html
|
||||
[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
|
||||
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
|
||||
[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
|
||||
[18]:http://www.rsyslog.com/
|
||||
[19]:https://www.elastic.co/products/logstash
|
||||
[20]:http://www.fluentd.org/
|
||||
[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
|
||||
[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up
|
||||
[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md
|
||||
[24]:https://github.com/DataDog/the-monitor/issues
|
@ -1,4 +1,4 @@
|
||||
translating wi-cuckoo
|
||||
translation by strugglingyouth
|
||||
How to monitor NGINX with Datadog - Part 3
|
||||
================================================================================
|
||||
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)
|
||||
@ -148,4 +148,4 @@ via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
|
||||
[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics
|
||||
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up
|
||||
[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md
|
||||
[19]:https://github.com/DataDog/the-monitor/issues
|
||||
[19]:https://github.com/DataDog/the-monitor/issues
|
||||
|
@ -1,409 +0,0 @@
|
||||
KevinSJ Translating
|
||||
How to monitor NGINX - Part 1
|
||||
================================================================================
|
||||
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png)
|
||||
|
||||
### What is NGINX? ###
|
||||
|
||||
[NGINX][1] (pronounced “engine X”) is a popular HTTP server and reverse proxy server. As an HTTP server, NGINX serves static content very efficiently and reliably, using relatively little memory. As a [reverse proxy][2], it can be used as a single, controlled point of access for multiple back-end servers or for additional applications such as caching and load balancing. NGINX is available as a free, open-source product or in a more full-featured, commercially distributed version called NGINX Plus.
|
||||
|
||||
NGINX can also be used as a mail proxy and a generic TCP proxy, but this article does not directly address NGINX monitoring for these use cases.
|
||||
|
||||
### Key NGINX metrics ###
|
||||
|
||||
By monitoring NGINX you can catch two categories of issues: resource issues within NGINX itself, and also problems developing elsewhere in your web infrastructure. Some of the metrics most NGINX users will benefit from monitoring include **requests per second**, which provides a high-level view of combined end-user activity; **server error rate**, which indicates how often your servers are failing to process seemingly valid requests; and **request processing time**, which describes how long your servers are taking to process client requests (and which can point to slowdowns or other problems in your environment).
|
||||
|
||||
More generally, there are at least three key categories of metrics to watch:
|
||||
|
||||
- Basic activity metrics
|
||||
- Error metrics
|
||||
- Performance metrics
|
||||
|
||||
Below we’ll break down a few of the most important NGINX metrics in each category, as well as metrics for a fairly common use case that deserves special mention: using NGINX Plus for reverse proxying. We will also describe how you can monitor all of these metrics with your graphing or monitoring tools of choice.
|
||||
|
||||
This article references metric terminology [introduced in our Monitoring 101 series][3], which provides a framework for metric collection and alerting.
|
||||
|
||||
#### Basic activity metrics ####
|
||||
|
||||
Whatever your NGINX use case, you will no doubt want to monitor how many client requests your servers are receiving and how those requests are being processed.
|
||||
|
||||
NGINX Plus can report basic activity metrics exactly like open-source NGINX, but it also provides a secondary module that reports metrics slightly differently. We discuss open-source NGINX first, then the additional reporting capabilities provided by NGINX Plus.
|
||||
|
||||
**NGINX**
|
||||
|
||||
The diagram below shows the lifecycle of a client connection and how the open-source version of NGINX collects metrics during a connection.
|
||||
|
||||
![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png)
|
||||
|
||||
Accepts, handled, and requests are ever-increasing counters. Active, waiting, reading, and writing grow and shrink with request volume.
|
||||
|
||||
Note: table
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left;"><strong>Name</strong></th>
|
||||
<th style="text-align: left;"><strong>Description</strong></th>
|
||||
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">accepts</td>
|
||||
<td style="text-align: left;">Count of client connections attempted by NGINX</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">handled</td>
|
||||
<td style="text-align: left;">Count of successful client connections</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">active</td>
|
||||
<td style="text-align: left;">Currently active client connections</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">dropped (calculated)</td>
|
||||
<td style="text-align: left;">Count of dropped connections (accepts – handled)</td>
|
||||
<td style="text-align: left;">Work: Errors*</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">requests</td>
|
||||
<td style="text-align: left;">Count of client requests</td>
|
||||
<td style="text-align: left;">Work: Throughput</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td colspan="3" style="text-align: left;">*<em>Strictly speaking, dropped connections is <a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics">a metric of resource saturation</a>, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as <a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics">a work metric</a>.</em></td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
The **accepts** counter is incremented when an NGINX worker picks up a request for a connection from the OS, whereas **handled** is incremented when the worker actually gets a connection for the request (by establishing a new connection or reusing an open one). These two counts are usually the same—any divergence indicates that connections are being **dropped**, often because a resource limit, such as NGINX’s [worker_connections][4] limit, has been reached.
|
||||
|
||||
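As a quick, hands-on illustration (not from the original article), you can watch for that divergence on open-source NGINX by computing dropped connections from the stub status page covered in Part 2; a minimal sketch assuming a /nginx_status endpoint on localhost:

# dropped = accepts - handled, read from the third line of the stub status page
curl -s http://127.0.0.1/nginx_status | awk 'NR == 3 { print "dropped:", $1 - $2 }'

On a healthy server this should print 0, or very close to it, run after run.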
Once NGINX successfully handles a connection, the connection moves to an **active** state, where it remains as client requests are processed:
|
||||
|
||||
Active state
|
||||
|
||||
- **Waiting**: An active connection may also be in a Waiting substate if there is no active request at the moment. New connections can bypass this state and move directly to Reading, most commonly when using “accept filter” or “deferred accept”, in which case NGINX does not receive notice of work until it has enough data to begin working on the response. Connections will also be in the Waiting state after sending a response if the connection is set to keep-alive.
|
||||
- **Reading**: When a request is received, the connection moves out of the waiting state, and the request itself is counted as Reading. In this state NGINX is reading a client request header. Request headers are lightweight, so this is usually a fast operation.
|
||||
- **Writing**: After the request is read, it is counted as Writing, and remains in that state until a response is returned to the client. That means that the request is Writing while NGINX is waiting for results from upstream systems (systems “behind” NGINX), and while NGINX is operating on the response. Requests will often spend the majority of their time in the Writing state.
|
||||
|
||||
Often a connection will only support one request at a time. In this case, the number of Active connections == Waiting connections + Reading requests + Writing requests. However, the newer SPDY and HTTP/2 protocols allow multiple concurrent requests/responses to be multiplexed over a connection, so Active may be less than the sum of Waiting, Reading, and Writing. (As of this writing, NGINX does not support HTTP/2, but expects to add support during 2015.)
|
||||
|
||||
**NGINX Plus**
|
||||
|
||||
As mentioned above, all of open-source NGINX’s metrics are available within NGINX Plus, but Plus can also report additional metrics. This section covers the metrics that are only available from NGINX Plus.
|
||||
|
||||
![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png)
|
||||
|
||||
Accepted, dropped, and total are ever-increasing counters. Active, idle, and current track the current number of connections or requests in each of those states, so they grow and shrink with request volume.
|
||||
|
||||
Note: table
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left;"><strong>Name</strong></th>
|
||||
<th style="text-align: left;"><strong>Description</strong></th>
|
||||
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">accepted</td>
|
||||
<td style="text-align: left;">Count of client connections attempted by NGINX</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">dropped</td>
|
||||
<td style="text-align: left;">Count of dropped connections</td>
|
||||
<td style="text-align: left;">Work: Errors*</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">active</td>
|
||||
<td style="text-align: left;">Currently active client connections</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">idle</td>
|
||||
<td style="text-align: left;">Client connections with zero current requests</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">total</td>
|
||||
<td style="text-align: left;">Count of client requests</td>
|
||||
<td style="text-align: left;">Work: Throughput</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td colspan="3" style="text-align: left;">*<em>Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.</em></td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
The **accepted** counter is incremented when an NGINX Plus worker picks up a request for a connection from the OS. If the worker fails to get a connection for the request (by establishing a new connection or reusing an open one), then the connection is dropped and **dropped** is incremented. Ordinarily connections are dropped because a resource limit, such as NGINX Plus’s [worker_connections][4] limit, has been reached.
|
||||
|
||||
**Active** and **idle** are the same as “active” and “waiting” states in open-source NGINX as described [above][5], with one key exception: in open-source NGINX, “waiting” falls under the “active” umbrella, whereas in NGINX Plus “idle” connections are excluded from the “active” count. **Current** is the same as the combined “reading + writing” states in open-source NGINX.
|
||||
|
||||
**Total** is a cumulative count of client requests. Note that a single client connection can involve multiple requests, so this number may be significantly larger than the cumulative number of connections. In fact, (total / accepted) yields the average number of requests per connection.
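
As a hedged illustration, the same figures can be pulled from the NGINX Plus JSON status feed and the requests-per-connection ratio computed on the fly. This assumes the extended status module is exposed at /status on localhost and that jq is installed; your endpoint path and field layout may differ by version:

    curl -s http://localhost/status | jq '{accepted: .connections.accepted, dropped: .connections.dropped, requests_per_connection: (.requests.total / .connections.accepted)}'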
|
||||
|
||||
**Metric differences between Open-Source and Plus**
|
||||
|
||||
Note: table
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left;">NGINX (open-source)</th>
|
||||
<th style="text-align: left;">NGINX Plus</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">accepts</td>
|
||||
<td style="text-align: left;">accepted</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">dropped must be calculated</td>
|
||||
<td style="text-align: left;">dropped is reported directly</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">reading + writing</td>
|
||||
<td style="text-align: left;">current</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">waiting</td>
|
||||
<td style="text-align: left;">idle</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">active (includes “waiting” states)</td>
|
||||
<td style="text-align: left;">active (excludes “idle” states)</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">requests</td>
|
||||
<td style="text-align: left;">total</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
**Metric to alert on: Dropped connections**
|
||||
|
||||
The number of connections that have been dropped is equal to the difference between accepts and handled (NGINX) or is exposed directly as a standard metric (NGINX Plus). Under normal circumstances, dropped connections should be zero. If your rate of dropped connections per unit time starts to rise, look for possible resource saturation.
|
||||
|
||||
![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png)
|
||||
|
||||
**Metric to alert on: Requests per second**
|
||||
|
||||
Sampling your request data (**requests** in open-source, or **total** in Plus) with a fixed time interval provides you with the number of requests you’re receiving per unit of time—often minutes or seconds. Monitoring this metric can alert you to spikes in incoming web traffic, whether legitimate or nefarious, or sudden drops, which are usually indicative of problems. A drastic change in requests per second can alert you to problems brewing somewhere in your environment, even if it cannot tell you exactly where those problems lie. Note that all requests are counted the same, regardless of their URLs.
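
A rough sketch of that sampling against open-source NGINX, again assuming stub_status is served at /nginx_status (the cumulative request counter is the third number on the counters line):

    r1=$(curl -s http://localhost/nginx_status | awk 'NR==3 {print $3}')
    sleep 60
    r2=$(curl -s http://localhost/nginx_status | awk 'NR==3 {print $3}')
    echo "requests/sec over the last minute: $(( (r2 - r1) / 60 ))"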
|
||||
|
||||
![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png)
|
||||
|
||||
**Collecting activity metrics**
|
||||
|
||||
Open-source NGINX exposes these basic server metrics on a simple status page. Because the status information is displayed in a standardized form, virtually any graphing or monitoring tool can be configured to parse the relevant data for analysis, visualization, or alerting. NGINX Plus provides a JSON feed with much richer data. Read the companion post on [NGINX metrics collection][6] for instructions on enabling metrics collection.
|
||||
|
||||
#### Error metrics ####
|
||||
|
||||
Note: table
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left;"><strong>Name</strong></th>
|
||||
<th style="text-align: left;"><strong>Description</strong></th>
|
||||
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
|
||||
<th style="text-align: left;"><strong>Availability</strong></th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">4xx codes</td>
|
||||
<td style="text-align: left;">Count of client errors</td>
|
||||
<td style="text-align: left;">Work: Errors</td>
|
||||
<td style="text-align: left;">NGINX logs, NGINX Plus</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">5xx codes</td>
|
||||
<td style="text-align: left;">Count of server errors</td>
|
||||
<td style="text-align: left;">Work: Errors</td>
|
||||
<td style="text-align: left;">NGINX logs, NGINX Plus</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
NGINX error metrics tell you how often your servers are returning errors instead of producing useful work. Client errors are represented by 4xx status codes, server errors with 5xx status codes.
|
||||
|
||||
**Metric to alert on: Server error rate**
|
||||
|
||||
Your server error rate is equal to the number of 5xx errors divided by the total number of [status codes][7] (1xx, 2xx, 3xx, 4xx, 5xx), per unit of time (often one to five minutes). If your error rate starts to climb over time, investigation may be in order. If it spikes suddenly, urgent action may be required, as clients are likely to report errors to the end user.
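
For open-source NGINX, a rough approximation of this ratio can be pulled straight from the access log. A minimal sketch, assuming the default "combined" log format, where the status code is the ninth whitespace-separated field for a typical three-token request line:

    awk '{ total++ } $9 ~ /^5/ { err++ } END { if (total) printf "5xx error rate: %.4f\n", err / total }' /var/log/nginx/access.log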
|
||||
|
||||
![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png)
|
||||
|
||||
A note on client errors: while it is tempting to monitor 4xx, there is limited information you can derive from that metric since it measures client behavior without offering any insight into particular URLs. In other words, a change in 4xx could be noise, e.g. web scanners blindly looking for vulnerabilities.
|
||||
|
||||
**Collecting error metrics**
|
||||
|
||||
Although open-source NGINX does not make error rates immediately available for monitoring, there are at least two ways to capture that information:
|
||||
|
||||
- Use the expanded status module available with commercially supported NGINX Plus
|
||||
- Configure NGINX’s log module to write response codes in access logs
|
||||
|
||||
Read the companion post on NGINX metrics collection for detailed instructions on both approaches.
|
||||
|
||||
#### Performance metrics ####
|
||||
|
||||
Note: table
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left;"><strong>Name</strong></th>
|
||||
<th style="text-align: left;"><strong>Description</strong></th>
|
||||
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
|
||||
<th style="text-align: left;"><strong>Availability</strong></th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">request time</td>
|
||||
<td style="text-align: left;">Time to process each request, in seconds</td>
|
||||
<td style="text-align: left;">Work: Performance</td>
|
||||
<td style="text-align: left;">NGINX logs</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
**Metric to alert on: Request processing time**
|
||||
|
||||
The request time metric logged by NGINX records the processing time for each request, from the reading of the first client bytes to fulfilling the request. Long response times can point to problems upstream.
|
||||
|
||||
**Collecting processing time metrics**
|
||||
|
||||
NGINX and NGINX Plus users can capture data on processing time by adding the $request_time variable to the access log format. More details on configuring logs for monitoring are available in our companion post on [NGINX metrics collection][8].
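
Once $request_time is being logged, a quick hedged one-liner for eyeballing the data might look like the following, assuming the variable was appended as the last field of each access-log line:

    awk '{ sum += $NF; n++ } END { if (n) printf "average request time: %.3f s\n", sum / n }' /var/log/nginx/access.log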
|
||||
|
||||
#### Reverse proxy metrics ####
|
||||
|
||||
Note: table
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;">
|
||||
<col style="text-align: left;"> </colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<th style="text-align: left;"><strong>Name</strong></th>
|
||||
<th style="text-align: left;"><strong>Description</strong></th>
|
||||
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
|
||||
<th style="text-align: left;"><strong>Availability</strong></th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="text-align: left;">Active connections by upstream server</td>
|
||||
<td style="text-align: left;">Currently active client connections</td>
|
||||
<td style="text-align: left;">Resource: Utilization</td>
|
||||
<td style="text-align: left;">NGINX Plus</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">5xx codes by upstream server</td>
|
||||
<td style="text-align: left;">Server errors</td>
|
||||
<td style="text-align: left;">Work: Errors</td>
|
||||
<td style="text-align: left;">NGINX Plus</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="text-align: left;">Available servers per upstream group</td>
|
||||
<td style="text-align: left;">Servers passing health checks</td>
|
||||
<td style="text-align: left;">Resource: Availability</td>
|
||||
<td style="text-align: left;">NGINX Plus</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
One of the most common ways to use NGINX is as a [reverse proxy][9]. The commercially supported NGINX Plus exposes a large number of metrics about backend (or “upstream”) servers, which are relevant to a reverse proxy setup. This section highlights a few of the key upstream metrics that are available to users of NGINX Plus.
|
||||
|
||||
NGINX Plus segments its upstream metrics first by group, and then by individual server. So if, for example, your reverse proxy is distributing requests to five upstream web servers, you can see at a glance whether any of those individual servers is overburdened, and also whether you have enough healthy servers in the upstream group to ensure good response times.
|
||||
|
||||
**Activity metrics**
|
||||
|
||||
The number of **active connections per upstream server** can help you verify that your reverse proxy is properly distributing work across your server group. If you are using NGINX as a load balancer, significant deviations in the number of connections handled by any one server can indicate that the server is struggling to process requests in a timely manner or that the load-balancing method (e.g., [round-robin or IP hashing][10]) you have configured is not optimal for your traffic patterns.
|
||||
|
||||
**Error metrics**
|
||||
|
||||
Recall from the error metric section above that 5xx (server error) codes are a valuable metric to monitor, particularly as a share of total response codes. NGINX Plus allows you to easily extract the number of **5xx codes per upstream server**, as well as the total number of responses, to determine that particular server’s error rate.
|
||||
|
||||
**Availability metrics**
|
||||
|
||||
For another view of the health of your web servers, NGINX also makes it simple to monitor the health of your upstream groups via the total number of **servers currently available within each group**. In a large reverse proxy setup, you may not care very much about the current state of any one server, just as long as your pool of available servers is capable of handling the load. But monitoring the total number of servers that are up within each upstream group can provide a very high-level view of the aggregate health of your web servers.
|
||||
|
||||
**Collecting upstream metrics**
|
||||
|
||||
NGINX Plus upstream metrics are exposed on the internal NGINX Plus monitoring dashboard, and are also available via a JSON interface that can serve up metrics into virtually any external monitoring platform. See examples in our companion post on [collecting NGINX metrics][11].
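
As a sketch of what that JSON interface makes possible, the snippet below counts the upstream servers currently marked "up" in each group. The /status/upstreams path and the field names are assumptions that may vary with your NGINX Plus version and configuration, and jq is assumed to be installed:

    curl -s http://localhost/status/upstreams | jq 'to_entries[] | {group: .key, servers_up: [.value[] | select(.state == "up")] | length}'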
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
In this post we’ve touched on some of the most useful metrics you can monitor to keep tabs on your NGINX servers. If you are just getting started with NGINX, monitoring most or all of the metrics in the list below will provide good visibility into the health and activity levels of your web infrastructure:
|
||||
|
||||
- [Dropped connections][12]
|
||||
- [Requests per second][13]
|
||||
- [Server error rate][14]
|
||||
- [Request processing time][15]
|
||||
|
||||
Eventually you will recognize additional, more specialized metrics that are particularly relevant to your own infrastructure and use cases. Of course, what you monitor will depend on the tools you have and the metrics available to you. See the companion post for [step-by-step instructions on metric collection][16], whether you use NGINX or NGINX Plus.
|
||||
|
||||
At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][17], and get started right away with [a free trial of Datadog][18].
|
||||
|
||||
### Acknowledgments ###
|
||||
|
||||
Many thanks to the NGINX team for reviewing this article prior to publication and providing important feedback and clarifications.
|
||||
|
||||
----------
|
||||
|
||||
Source Markdown for this post is available [on GitHub][19]. Questions, corrections, additions, etc.? Please [let us know][20].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.datadoghq.com/blog/how-to-monitor-nginx/
|
||||
|
||||
作者:K Young
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://nginx.org/en/
|
||||
[2]:http://nginx.com/resources/glossary/reverse-proxy-server/
|
||||
[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/
|
||||
[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections
|
||||
[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state
|
||||
[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
|
||||
[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
[9]:https://en.wikipedia.org/wiki/Reverse_proxy
|
||||
[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/
|
||||
[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections
|
||||
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second
|
||||
[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate
|
||||
[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time
|
||||
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
|
||||
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
|
||||
[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up
|
||||
[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md
|
||||
[20]:https://github.com/DataDog/the-monitor/issues
|
@ -1,127 +0,0 @@
|
||||
How to Update Linux Kernel for Improved System Performance
|
||||
================================================================================
|
||||
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/update-linux-kernel-644x373.jpg?2c3c1f)
|
||||
|
||||
The rate of development for [the Linux kernel][1] is unprecedented, with a new major release approximately every two to three months. Each release offers several new features and improvements that a lot of people could take advantage of to make their computing experience faster, more efficient, or better in other ways.
|
||||
|
||||
The problem, however, is that you usually can’t take advantage of these new kernel releases as soon as they come out — you have to wait until your distribution comes out with a new release that packs a newer kernel with it. We’ve previously laid out [the benefits for regularly updating your kernel][2], and you don’t have to wait to get your hands on them. We’ll show you how.
|
||||
|
||||
> Disclaimer: As some of our literature may have mentioned before, updating your kernel does carry a (small) risk of breaking your system. If this is the case, it’s usually easy to pick an older kernel at boot time that works, but something may always go wrong. Therefore, we’re not responsible for any damage to your system — use at your own risk!
|
||||
|
||||
### Prep Work ###
|
||||
|
||||
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/linux_kernel_arch.jpg?2c3c1f)
|
||||
|
||||
To update your kernel, you’ll first need to determine whether you’re using a 32-bit or 64-bit system. Open up a terminal window and run
|
||||
|
||||
uname -a
|
||||
|
||||
Then check to see if the output says x86_64 or i686. If it’s x86_64, then you’re running the 64-bit version; otherwise, you’re running the 32-bit version. Remember this, because it will be important.
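
If you only want the architecture string, without the rest of the uname output, this also works:

    uname -m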
|
||||
|
||||
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/kernel_latest_version.jpg?2c3c1f)
|
||||
|
||||
Next, visit the [official Linux kernel website][3]. This will tell you what the current stable version of the kernel is. You can try out release candidates if you’d like, but they are a lot less tested than the stable releases. Stick with the stable kernel unless you are certain you need a release candidate version.
|
||||
|
||||
### Ubuntu Instructions ###
|
||||
|
||||
It’s quite easy for Ubuntu and Ubuntu-derivative users to update their kernel, thanks to the Ubuntu Mainline Kernel PPA. Although it’s officially called a PPA, you cannot use it like other PPAs by adding them to your software sources list and expecting it to automatically update the kernel for you. Instead, it’s simply a webpage you navigate through to download the kernel you want.
|
||||
|
||||
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_new_kernels.jpg?2c3c1f)
|
||||
|
||||
Now, visit the [kernel PPA webpage][4] and scroll all the way to the bottom. The absolute bottom of the list will probably contain some release candidate versions (which you can see by the “rc” in the name), but just above them should be the latest stable kernel (to make this easier to explain, at the time of writing the stable version was 4.1.2). Click on that, and you’ll be presented with several options. You’ll need to grab three files and save them in their own folder (within the Downloads folder if you’d like) so that they’re isolated from all other files:
|
||||
|
||||
- The “generic” header file for your architecture (in my case, 64-bit or “amd64”)
|
||||
- The header file in the middle that has “all” towards the end of the filename
|
||||
- The “generic” kernel file for your architecture (again, I would pick “amd64” but if you use 32-bit you'll need “i686”)
|
||||
|
||||
You’ll notice that there are also “lowlatency” files available to download, but it’s fine to ignore this. These files are relatively unstable and are only made available for people who need their low-latency benefits if the general files don’t suffice for tasks such as audio recording. Again, the recommendation is to always use generic first and only try lowlatency if your performance isn’t good enough for certain tasks. No, gaming or Internet browsing aren’t excuses to try lowlatency.
|
||||
|
||||
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_install_kernel.jpg?2c3c1f)
|
||||
|
||||
You put these files into their own folder, right? Now open up the Terminal, use the
|
||||
|
||||
cd
|
||||
|
||||
command to go to your newly-created folder, such as
|
||||
|
||||
cd /home/user/Downloads/Kernel
|
||||
|
||||
and then run:
|
||||
|
||||
sudo dpkg -i *.deb
|
||||
|
||||
This command marks all .deb files within the folder as “to be installed” and then performs the installation. This is the recommended way to install these files because otherwise it’s easy to pick one file to install and it’ll complain about dependency issues. This approach avoids that problem. If you’re not sure what cd or sudo are, get a quick crash course on [essential Linux commands][5].
|
||||
|
||||
Once the installation is complete, **Restart** your system and you should be running the just-installed kernel! You can check this by running uname -a in the Terminal and checking the output.
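
For example, to print just the running kernel version rather than the full uname -a line:

    uname -r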
|
||||
|
||||
### Fedora Instructions ###
|
||||
|
||||
If you use Fedora or one of its derivatives, the process is very similar to Ubuntu. There’s just a different location to grab different files, and a different command to install them.
|
||||
|
||||
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/fedora_new_kernels.jpg?2c3c1f)
|
||||
|
||||
View the list of the most [recent kernel builds for Fedora][6]. Pick the latest stable version out of the list, and then scroll down to either the i686 or x86_64 section, depending on your system’s architecture. In this section, you’ll need to grab the following files and save them in their own folder (such as “Kernel” within your Downloads folder, as an example):
|
||||
|
||||
- kernel
|
||||
- kernel-core
|
||||
- kernel-headers
|
||||
- kernel-modules
|
||||
- kernel-modules-extra
|
||||
- kernel-tools
|
||||
- perf and python-perf (optional)
|
||||
|
||||
If your system is i686 (32-bit) and you have 4GB of RAM or more, you’ll need to grab the PAE version of all of these files where available. PAE is an address extension technique used on 32-bit systems to allow them to use more than 3GB of RAM.
|
||||
|
||||
Now, use the
|
||||
|
||||
cd
|
||||
|
||||
command to go to that folder, such as
|
||||
|
||||
cd /home/user/Downloads/Kernel
|
||||
|
||||
and then run the following command to install all the files:
|
||||
|
||||
yum --nogpgcheck localinstall *.rpm
|
||||
|
||||
Finally **Restart** your computer and you should be running a new kernel!
|
||||
|
||||
### Using Rawhide ###
|
||||
|
||||
Alternatively, Fedora users can also simply [switch to Rawhide][7] and it’ll automatically update every package to the latest version, including the kernel. However, Rawhide is known to break quite often (especially early on in the development cycle) and should **not** be used on a system that you need to rely on.
|
||||
|
||||
### Arch Instructions ###
|
||||
|
||||
[Arch users][8] should always have the latest and greatest stable kernel available (or one pretty close to it). If you want to get even closer to the latest-released stable kernel, you can enable the testing repo which will give you access to major new releases roughly two to four weeks early.
|
||||
|
||||
To do this, open the file located at
|
||||
|
||||
/etc/pacman.conf
|
||||
|
||||
with sudo permissions in [your favorite terminal text editor][9], and then uncomment (delete the pound symbols from the front of each line) the three lines associated with testing. If you have the multilib repository enabled, then do the same for the multilib-testing repository. See [this Arch Linux wiki page][10] if you need more information.
|
||||
|
||||
Upgrading your kernel isn’t easy (intentionally so), but it can give you a lot of benefits. So long as your new kernel didn’t break anything, you can now enjoy improved performance, better efficiency, support for more hardware, and potential new features. Especially if you’re running relatively new hardware, upgrading the kernel can really help out.
|
||||
|
||||
**How has upgrading the kernel helped you? Do you think your favorite distribution’s policy on kernel releases is what it should be?** Let us know in the comments!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.makeuseof.com/tag/update-linux-kernel-improved-system-performance/
|
||||
|
||||
作者:[Danny Stieben][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.makeuseof.com/tag/author/danny/
|
||||
[1]:http://www.makeuseof.com/tag/linux-kernel-explanation-laymans-terms/
|
||||
[2]:http://www.makeuseof.com/tag/5-reasons-update-kernel-linux/
|
||||
[3]:http://www.kernel.org/
|
||||
[4]:http://kernel.ubuntu.com/~kernel-ppa/mainline/
|
||||
[5]:http://www.makeuseof.com/tag/an-a-z-of-linux-40-essential-commands-you-should-know/
|
||||
[6]:http://koji.fedoraproject.org/koji/packageinfo?packageID=8
|
||||
[7]:http://www.makeuseof.com/tag/bleeding-edge-linux-fedora-rawhide/
|
||||
[8]:http://www.makeuseof.com/tag/arch-linux-letting-you-build-your-linux-system-from-scratch/
|
||||
[9]:http://www.makeuseof.com/tag/nano-vs-vim-terminal-text-editors-compared/
|
||||
[10]:https://wiki.archlinux.org/index.php/Pacman#Repositories
|
@ -1,130 +0,0 @@
|
||||
translation by strugglingyouth
|
||||
Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser
|
||||
================================================================================
|
||||
Here again, I have written another post in the [Linux Tips and Tricks][1] series. Since the beginning, the objective of this series has been to make you aware of the small tips and hacks that let you manage your system/server efficiently.
|
||||
|
||||
![Create Cdrom ISO Image and Monitor Users in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/creating-cdrom-iso-watch-users-in-linux.jpg)
|
||||
|
||||
Create Cdrom ISO Image and Monitor Users in Linux
|
||||
|
||||
In this post we will see how to create an ISO image from the contents of a CD/DVD loaded in the drive, open random man pages for learning, find out which other users are logged in and what they are doing, and monitor the memory usage of a browser, all using native tools/commands without any third-party application/utility. Here we go…
|
||||
|
||||
### Create ISO image from a CD ###
|
||||
|
||||
Often we need to back up or copy the contents of a CD/DVD. If you are on the Linux platform, you do not need any additional software; all you need is access to a Linux console.
|
||||
|
||||
To create an ISO image of the files on your CD/DVD-ROM, you first need to find the name of your CD/DVD drive. To do so, you may choose any of the three methods below.
|
||||
|
||||
**1. Run command lsblk (list block devices) from your terminal/console.**
|
||||
|
||||
$ lsblk
|
||||
|
||||
![Find Block Devices in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Block-Devices.png)
|
||||
|
||||
Find Block Devices
|
||||
|
||||
**2. To see information about CD-ROM, you may use commands like less or more.**
|
||||
|
||||
$ less /proc/sys/dev/cdrom/info
|
||||
|
||||
![Check Cdrom Information](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Cdrom-Inforamtion.png)
|
||||
|
||||
Check Cdrom Information
|
||||
|
||||
**3. You may get the same information from [dmesg command][2] and customize the output using egrep.**
|
||||
|
||||
The 'dmesg' command prints/controls the kernel ring buffer. The 'egrep' command prints lines that match a pattern. The -i and --color options tell egrep to perform a case-insensitive search and to highlight the matching string, respectively.
|
||||
|
||||
$ dmesg | egrep -i --color 'cdrom|dvd|cd/rw|writer'
|
||||
|
||||
![Find Device Information](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Device-Information.png)
|
||||
|
||||
Find Device Information
|
||||
|
||||
Once you know the name of your CD/DVD drive, you can use the following command to create an ISO image of your CD-ROM in Linux.
|
||||
|
||||
$ cat /dev/sr0 > /path/to/output/folder/iso_name.iso
|
||||
|
||||
Here 'sr0' is the name of my CD/DVD drive; replace it with the name of yours. This lets you create an ISO image and back up the contents of a CD/DVD without any third-party application.
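
If you prefer, dd does the same job and lets you set the read block size to the 2048-byte CD sector size (sr0 here is still just an example; substitute your own drive name):

    $ dd if=/dev/sr0 of=/path/to/output/folder/iso_name.iso bs=2048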
|
||||
|
||||
![Create ISO Image of CDROM in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Create-ISO-Image-of-CDROM.png)
|
||||
|
||||
Create ISO Image of CDROM
|
||||
|
||||
### Open a man page randomly for Reading ###
|
||||
|
||||
If you are new to Linux and want to learn commands and switches, this tweak is for you. Put the below line of code at the end of your `~/.bashrc` file.
|
||||
|
||||
/usr/bin/man $(ls /bin | shuf | head -1)
|
||||
|
||||
Remember to put the above one-line script in the user’s `.bashrc` file and not in the .bashrc file of root. The next time you log in, either locally or remotely over SSH, you will see a randomly opened man page ready for you to read. For newbies who want to learn commands and command-line switches, this will prove helpful.
|
||||
|
||||
Here is what I got in my terminal after logging in to a session twice, back-to-back.
|
||||
|
||||
![LoadKeys Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/LoadKeys-Man-Pages.png)
|
||||
|
||||
LoadKeys Man Pages
|
||||
|
||||
![Zgrep Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/Zgrep-Man-Pages.png)
|
||||
|
||||
Zgrep Man Pages
|
||||
|
||||
### Check Activity of Logged-in Users ###
|
||||
|
||||
Know what other users are doing on your shared server.
|
||||
|
||||
In the most common case, you are either a user of a shared Linux server or its admin. If you are concerned about your server and want to check what other users are doing, you may try the 'w' command.
|
||||
|
||||
This command lets you know if someone is executing malicious code, tampering with the server, slowing it down, or anything else. 'w' is the preferred way of keeping an eye on logged-on users and what they are doing.
|
||||
|
||||
To see logged-on users and what they are doing, run the 'w' command from the terminal, preferably as root.
|
||||
|
||||
# w
|
||||
|
||||
![Check Linux User Activity](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Linux-User-Activity.png)
|
||||
|
||||
Check Linux User Activity
|
||||
|
||||
### Check Memory usages by Browser ###
|
||||
|
||||
These days a lot of jokes are cracked about Google Chrome and its demand for memory. If you want to know how much memory a browser is using, you can list the process name, its PID and its memory usage. To check a browser's memory usage, just enter “about:memory” in the address bar, without quotes.
|
||||
|
||||
I have tested it on the Google Chrome and Mozilla Firefox web browsers. If you check it on any other browser and it works well, you may let us know in the comments below. You can also kill the browser process just as you would any other Linux process/service from the terminal.
|
||||
|
||||
In Google Chrome, type `about:memory` in address bar, you should get something similar to below image.
|
||||
|
||||
![Check Chrome Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Chrome-Memory-Usage.png)
|
||||
|
||||
Check Chrome Memory Usage
|
||||
|
||||
In Mozilla Firefox, type `about:memory` in address bar, you should get something similar to below image.
|
||||
|
||||
![Check Firefox Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firefox-Memory-Usage.png)
|
||||
|
||||
Check Firefox Memory Usage
|
||||
|
||||
You may select any of these options if you understand what it does. To check memory usage, click the leftmost option, 'Measure'.
|
||||
|
||||
![Firefox Main Process](http://www.tecmint.com/wp-content/uploads/2015/07/Firefox-Main-Processes.png)
|
||||
|
||||
Firefox Main Process
|
||||
|
||||
It shows tree like process-memory usages by browser.
|
||||
|
||||
That’s all for now. Hope the above tips will help you at some point. If you have one (or more) lesser-known tips/tricks that help Linux users manage their Linux system/server more efficiently, you may like to share them with us.
|
||||
|
||||
I’ll be back with another post soon; till then, stay tuned and connected to TecMint. Provide us with your valuable feedback in the comments below. Like and share this post and help it spread.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/creating-cdrom-iso-image-watch-user-activity-in-linux/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://www.tecmint.com/tag/linux-tricks/
|
||||
[2]:http://www.tecmint.com/dmesg-commands/
|
@ -1,121 +0,0 @@
|
||||
|
||||
Translating by dingdongnigetou
|
||||
|
||||
Understanding Shell Commands Easily Using “Explain Shell” Script in Linux
|
||||
================================================================================
|
||||
While working on the Linux platform, all of us need help with shell commands at some point. Although built-in help like man pages and the whatis command is useful, man page output is lengthy, and unless one has some experience with Linux it is very difficult to extract help from the massive man pages. The output of the whatis command is rarely more than one line, which is not sufficient for newbies.
|
||||
|
||||
![Explain Shell Commands in Linux Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Explain-Shell-Commands-in-Linux-Shell.jpeg)
|
||||
|
||||
Explain Shell Commands in Linux Shell
|
||||
|
||||
There are third-party applications like 'cheat', which we have covered here: [Commandline Cheat Sheet for Linux Users][1]. Although cheat is an exceptionally good application that shows help on shell commands even when the computer is not connected to the Internet, it covers predefined commands only.
|
||||
|
||||
There is a small piece of code written by Jackson which explains shell commands within the bash shell very effectively, and the best part is that you don't need to install any third-party package. He named the file containing this piece of code `'explain.sh'`.
|
||||
|
||||
#### Features of Explain Utility ####
|
||||
|
||||
- Easy Code Embedding.
|
||||
- No third-party utility needed to be installed.
|
||||
- Outputs just enough information in the course of an explanation.
|
||||
- Requires internet connection to work.
|
||||
- Pure command-line utility.
|
||||
- Able to explain most of the shell commands in bash shell.
|
||||
- No root Account involvement required.
|
||||
|
||||
**Prerequisite**
|
||||
|
||||
The only requirement is the `'curl'` package. In most of today's Linux distributions curl comes pre-installed; if not, you can install it using your package manager as shown below.
|
||||
|
||||
# apt-get install curl [On Debian systems]
|
||||
# yum install curl [On CentOS systems]
|
||||
|
||||
### Installation of explain.sh Utility in Linux ###
|
||||
|
||||
We have to insert the below piece of code, as is, into the `~/.bashrc` file. The code should be inserted into the `.bashrc` file of each user who wants it. It is suggested to insert the code into a regular user's .bashrc only, and not into the .bashrc of the root user.
|
||||
|
||||
Notice that the first line of code, which starts with a hash `(#)`, is optional and was added just to separate this code from the rest of your .bashrc.
|
||||
|
||||
# explain.sh marks the beginning of the code we are inserting at the bottom of the .bashrc file.
|
||||
|
||||
# explain.sh begins
|
||||
explain () {
|
||||
if [ "$#" -eq 0 ]; then
|
||||
while read -p "Command: " cmd; do
|
||||
curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$cmd"
|
||||
done
|
||||
echo "Bye!"
|
||||
elif [ "$#" -eq 1 ]; then
|
||||
curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$1"
|
||||
else
|
||||
echo "Usage"
|
||||
echo "explain interactive mode."
|
||||
echo "explain 'cmd -o | ...' one quoted command to explain it."
|
||||
fi
|
||||
}
|
||||
|
||||
### Working of explain.sh Utility ###
|
||||
|
||||
After inserting the code and saving the file, you must log out of the current session and log back in for the changes to take effect. Everything is taken care of by the 'curl' command, which transfers the input command and the flags that need explanation to the mankier.com server and then prints just the necessary information to the Linux command line. Needless to say, to use this utility you must always be connected to the Internet.
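
In other words, the function is only a thin wrapper around that API; you could query the same endpoint by hand, for example (using a fixed width of 80 columns instead of tput cols):

    $ curl -Gs "https://www.mankier.com/api/explain/?cols=80" --data-urlencode "q=du -h"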
|
||||
|
||||
Let’s test the explain.sh script on a few example commands whose meaning I don’t know.
|
||||
|
||||
**1. I forgot what ‘du -h‘ does. All I need to do is:**
|
||||
|
||||
$ explain 'du -h'
|
||||
|
||||
![Get Help on du Command](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-du-Command.png)
|
||||
|
||||
Get Help on du Command
|
||||
|
||||
**2. If you forgot what ‘tar -zxvf‘ does, you may simply do:**
|
||||
|
||||
$ explain 'tar -zxvf'
|
||||
|
||||
![Tar Command Help](http://www.tecmint.com/wp-content/uploads/2015/07/Tar-Command-Help.png)
|
||||
|
||||
Tar Command Help
|
||||
|
||||
**3. One of my friends often confuses the 'whatis' and 'whereis' commands, so I advised him to do the following.**
|
||||
|
||||
Go to interactive mode by simply typing the explain command in the terminal.
|
||||
|
||||
$ explain
|
||||
|
||||
and then type the commands one after another to see what they do in one window, as:
|
||||
|
||||
Command: whatis
|
||||
Command: whereis
|
||||
|
||||
![Whatis Whereis Commands Help](http://www.tecmint.com/wp-content/uploads/2015/07/Whatis-Whereis-Commands-Help.png)
|
||||
|
||||
Whatis Whereis Commands Help
|
||||
|
||||
To exit interactive mode, he just needs to press Ctrl + C.
|
||||
|
||||
**4. You can ask to explain more than one command chained by pipeline.**
|
||||
|
||||
$ explain 'ls -l | grep -i Desktop'
|
||||
|
||||
![Get Help on Multiple Commands](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-Multiple-Commands.png)
|
||||
|
||||
Get Help on Multiple Commands
|
||||
|
||||
Similarly, you can ask your shell to explain any shell command. All you need is a working Internet connection. The output is generated from the explanation fetched from the server, and hence the result is not customizable.
|
||||
|
||||
For me this utility is really helpful, and it has earned a place in my .bashrc. What are your thoughts on this project? How can it be useful for you? Are the explanations satisfactory?
|
||||
|
||||
Provide us with your valuable feedback in the comments below. Like and share this post and help it spread.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/explain-shell-commands-in-the-linux-shell/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://www.tecmint.com/cheat-command-line-cheat-sheet-for-linux-users/
|
@ -1,72 +0,0 @@
|
||||
What is Logical Volume Management and How Do You Enable It in Ubuntu?
|
||||
================================================================================
|
||||
> Logical Volume Management (LVM) is a disk management option that every major Linux distribution includes. Whether you need to set up storage pools or just need to dynamically create partitions, LVM is probably what you are looking for.
|
||||
|
||||
### What is LVM? ###
|
||||
|
||||
Logical Volume Manager allows for a layer of abstraction between your operating system and the disks/partitions it uses. In traditional disk management your operating system looks for what disks are available (/dev/sda, /dev/sdb, etc.) and then looks at what partitions are available on those disks (/dev/sda1, /dev/sda2, etc.).
|
||||
|
||||
With LVM, disks and partitions can be abstracted so that multiple disks and partitions are combined into one device. Your operating system will never know the difference because LVM will only show the OS the volume groups (disks) and logical volumes (partitions) that you have set up.
|
||||
|
||||
Because volume groups and logical volumes aren’t physically tied to a hard drive, it makes it easy to dynamically resize and create new disks and partitions. In addition, LVM can give you features that your file system is not capable of doing. For example, Ext3 does not have support for live snapshots, but if you’re using LVM you have the ability to take a snapshot of your logical volumes without unmounting the disk.
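
As a hedged sketch of that snapshot capability (the volume group and volume names here are hypothetical; substitute your own), a live snapshot of a mounted logical volume might look like:

    sudo lvcreate --snapshot --size 1G --name home-snap /dev/vg0/home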
|
||||
|
||||
### When Should You Use LVM? ###
|
||||
|
||||
The first thing you should consider before setting up LVM is what you want to accomplish with your disks and partitions. Some distributions, like Fedora, install with LVM by default.
|
||||
|
||||
If you are using Ubuntu on a laptop with only one internal hard drive and you don’t need extended features like live snapshots, then you may not need LVM. If you need easy expansion or want to combine multiple hard drives into a single pool of storage then LVM may be what you have been looking for.
|
||||
|
||||
### Setting up LVM in Ubuntu ###
|
||||
|
||||
The first thing to know about using LVM is that there is no easy way to convert your existing traditional partitions to logical volumes. It is possible to move to a new partition that uses LVM, but that won’t be something that we will cover in this article; instead we are going to take the approach of setting up LVM on a fresh installation of Ubuntu 10.10.
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/ubuntu-10-banner.png)
|
||||
|
||||
To install Ubuntu using LVM you need to use the alternate install CD. Download it from the link below and burn a CD or [use unetbootin to create a USB drive][1].
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/download-web.png)
|
||||
|
||||
Boot your computer from the alternate install disk, select your options up until the partition disks screen, and then select “Guided – use entire disk and set up LVM”.
|
||||
|
||||
*Note: This will format your entire hard drive so if you are trying to dual boot or have another installation select manual instead.*
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-1.png)
|
||||
|
||||
Select the main disk you want to use, typically your largest drive, and then go to the next step.
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-2.png)
|
||||
|
||||
You will immediately need to write the changes to disk so make sure you selected the right disk and then write the changes.
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-3.png)
|
||||
|
||||
Select the size you want the first logical volume to be and then continue.
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-4.png)
|
||||
|
||||
Confirm your disk partitions and continue with the installation.
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-5.png)
|
||||
|
||||
The final step will write the GRUB bootloader to the hard drive. It is important to note that GRUB cannot be on an LVM partition because computer BIOSes cannot directly read from a logical volume. Ubuntu will automatically create a 255 MB ext2 partition for your bootloader.
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-6.png)
|
||||
|
||||
After the installation is complete, reboot the machine and boot into Ubuntu as normal. There should be no perceivable difference between using LVM or traditional disk management with this type of installation.
|
||||
|
||||
![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/disk-manager.png)
|
||||
|
||||
To use LVM to its full potential, stay tuned for our upcoming article on managing your LVM installation.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and-how-do-you-enable-it-in-ubuntu/
|
||||
|
||||
作者:[How-To Geek][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/+howtogeek?prsrc=5
|
||||
[1]:http://www.howtogeek.com/howto/13379/create-a-bootable-ubuntu-9.10-usb-flash-drive/
|
@ -0,0 +1,174 @@
|
||||
How to Setup iTOP (IT Operational Portal) on CentOS 7
|
||||
================================================================================
|
||||
iTop is a simple, open-source, web-based IT Service Management tool. It has all of the ITIL functionality, including Service Desk, Configuration Management, Incident Management, Problem Management, Change Management and Service Management. iTop relies on Apache/IIS, MySQL and PHP, so it can run on any operating system supporting these applications. Since iTop is a web-based application you don’t need to deploy any client software on each user’s PC; a simple web browser is enough to perform the day-to-day operations of an IT environment with iTop.
|
||||
|
||||
To install and configure iTop we will be using CentOS 7 as the base operating system, with a basic LAMP stack installed on it, which covers almost all of its prerequisites.
|
||||
|
||||
### Downloading iTOP ###
|
||||
|
||||
The iTop download package is hosted on SourceForge; we can get its link from the official website [link][1].
|
||||
|
||||
![itop download](http://blog.linoxide.com/wp-content/uploads/2015/07/1-itop-download.png)
|
||||
|
||||
We will take the download link from there and fetch the zipped file onto the server with the wget command as below.
|
||||
|
||||
[root@centos-007 ~]# wget http://downloads.sourceforge.net/project/itop/itop/2.1.0/iTop-2.1.0-2127.zip
|
||||
|
||||
### iTop Extensions and Web Setup ###
|
||||
|
||||
Using the unzip command we will extract the downloaded package into a new directory named itop under the document root of our Apache web server.
|
||||
|
||||
[root@centos-7 ~]# ls
|
||||
iTop-2.1.0-2127.zip
|
||||
[root@centos-7 ~]# unzip iTop-2.1.0-2127.zip -d /var/www/html/itop/
|
||||
|
||||
List the folder to view installation packages in it.
|
||||
|
||||
[root@centos-7 ~]# ls -lh /var/www/html/itop/
|
||||
total 68K
|
||||
-rw-r--r--. 1 root root 1.4K Dec 17 2014 INSTALL
|
||||
-rw-r--r--. 1 root root 35K Dec 17 2014 LICENSE
|
||||
-rw-r--r--. 1 root root 23K Dec 17 2014 README
|
||||
drwxr-xr-x. 19 root root 4.0K Jul 14 13:10 web
|
||||
|
||||
Here are all the extensions that we can install.
|
||||
|
||||
[root@centos-7 2.x]# ls
|
||||
authent-external itop-backup itop-config-mgmt itop-problem-mgmt itop-service-mgmt-provider itop-welcome-itil
|
||||
authent-ldap itop-bridge-virtualization-storage itop-datacenter-mgmt itop-profiles-itil itop-sla-computation version.xml
|
||||
authent-local itop-change-mgmt itop-endusers-devices itop-request-mgmt itop-storage-mgmt wizard-icons
|
||||
installation.xml itop-change-mgmt-itil itop-incident-mgmt-itil itop-request-mgmt-itil itop-tickets
|
||||
itop-attachments itop-config itop-knownerror-mgmt itop-service-mgmt itop-virtualization-mgmt
|
||||
|
||||
Now, from the extracted web directory, we will copy the required extensions from the datamodels directory into the web extensions directory under the document root, using the copy command.
|
||||
|
||||
[root@centos-7 2.x]# pwd
|
||||
/var/www/html/itop/web/datamodels/2.x
|
||||
[root@centos-7 2.x]# cp -r itop-request-mgmt itop-service-mgmt itop-service-mgmt itop-config itop-change-mgmt /var/www/html/itop/web/extensions/
|
||||
|
||||
### Installing iTop Web Interface ###
|
||||
|
||||
Most of our server-side settings and configurations are done. Finally, we need to complete the web interface installation process to finalize the setup.
|
||||
|
||||
Open your favorite web browser and access the iTop web directory using your server IP or FQDN, like:
|
||||
|
||||
http://servers_ip_address/itop/web/
|
||||
|
||||
You will be redirected to the web installation process for iTop. Let’s configure it as per your requirements, as we do here in this tutorial.
|
||||
|
||||
#### Prerequisites Validation ####
|
||||
|
||||
At this stage you will be presented with a welcome screen showing the prerequisites validation. If you get a warning, you have to resolve it by installing the missing prerequisites.
|
||||
|
||||
![mcrypt missing](http://blog.linoxide.com/wp-content/uploads/2015/07/2-itop-web-install.png)
|
||||
|
||||
At this stage one optional package named php-mcrypt will be reported as missing. Download the following rpm packages and then install php-mcrypt.
|
||||
|
||||
[root@centos-7 ~]# yum localinstall php-mcrypt-5.3.3-1.el6.x86_64.rpm libmcrypt-2.5.8-9.el6.x86_64.rpm
|
||||
|
||||
After successful installation of the php-mcrypt library we need to restart the Apache web service, then reload the web page; this time the prerequisites validation should be OK.
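
On CentOS 7 that restart is a single systemd command, assuming Apache was installed as the httpd service:

    [root@centos-7 ~]# systemctl restart httpd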
|
||||
|
||||
#### Install or Upgrade iTop ####
|
||||
|
||||
Here we will choose the fresh installation as we have not installed iTop previously on our server.
|
||||
|
||||
![Install New iTop](http://blog.linoxide.com/wp-content/uploads/2015/07/3.png)
|
||||
|
||||
#### iTop License Agreement ####
|
||||
|
||||
Choose the option to accept the terms of the licenses of all the components of iTop and click "NEXT".
|
||||
|
||||
![License Agreement](http://blog.linoxide.com/wp-content/uploads/2015/07/4.png)
|
||||
|
||||
#### Database Configuration ####
|
||||
|
||||
Here we configure the database connection by giving our database server's credentials, and then choose the option to create a new database, as shown.
|
||||
|
||||
![DB Connection](http://blog.linoxide.com/wp-content/uploads/2015/07/5.png)
|
||||
|
||||
#### Administrator Account ####
|
||||
|
||||
In this step we will configure an Admin account by filling out its login details, as shown.
|
||||
|
||||
![Admin Account](http://blog.linoxide.com/wp-content/uploads/2015/07/6.png)
|
||||
|
||||
#### Miscellaneous Parameters ####
|
||||
|
||||
Let's choose the additional parameters, that is, whether you want to install with demo contents or with an empty database, and proceed forward.
|
||||
|
||||
![Misc Parameters](http://blog.linoxide.com/wp-content/uploads/2015/07/7.png)
|
||||
|
||||
### iTop Configurations Management ###
|
||||
|
||||
The options below allow you to configure the types of elements that are to be managed inside iTop, such as the base objects that are mandatory in the iTop CMDB, data center devices, storage devices and virtualization.
|
||||
|
||||
![Conf Management](http://blog.linoxide.com/wp-content/uploads/2015/07/8.png)
|
||||
|
||||
#### Service Management ####
|
||||
|
||||
Select the choice that best describes the relationship between the services and the IT infrastructure in your environment. We are choosing Service Management for Service Providers here.
|
||||
|
||||
![Service Management](http://blog.linoxide.com/wp-content/uploads/2015/07/9.png)
|
||||
|
||||
#### iTop Tickets Management ####
|
||||
|
||||
From the available options we will select the ITIL Compliant Tickets Management option to have different types of tickets for managing user requests and incidents.
|
||||
|
||||
![Ticket Management](http://blog.linoxide.com/wp-content/uploads/2015/07/10.png)
|
||||
|
||||
#### Change Management Options ####
|
||||
|
||||
Select, from the available options, the type of tickets you want to use in order to manage changes to the IT infrastructure. We are going to choose the ITIL change management option here.
|
||||
|
||||
![ITIL Change](http://blog.linoxide.com/wp-content/uploads/2015/07/11.png)
|
||||
|
||||
#### iTop Extensions ####
|
||||
|
||||
In this section we can select the additional extensions to install, or uncheck the ones we want to skip.
|
||||
|
||||
![iTop Extensions](http://blog.linoxide.com/wp-content/uploads/2015/07/13.png)
|
||||
|
||||
### Ready to Start Web Installation ###
|
||||
|
||||
Now we are ready to start installing the components that we chose in the previous steps. We can also expand the installation parameters drop-down to review our configuration.
|
||||
|
||||
Once you have confirmed the installation parameters, click on the install button.
|
||||
|
||||
![Installation Parameters](http://blog.linoxide.com/wp-content/uploads/2015/07/16.png)
|
||||
|
||||
Let's wait for the progress bar to complete the installation process. It might take a few minutes.
|
||||
|
||||
![iTop Installation Process](http://blog.linoxide.com/wp-content/uploads/2015/07/17.png)
|
||||
|
||||
### iTop Installation Done ###
|
||||
|
||||
Our iTop installation is complete; we just need to perform a simple manual operation as shown and then click to enter iTop.
|
||||
|
||||
![iTop Done](http://blog.linoxide.com/wp-content/uploads/2015/07/18.png)
|
||||
|
||||
### Welcome to iTop (IT Operational Portal) ###
|
||||
|
||||
![itop welcome note](http://blog.linoxide.com/wp-content/uploads/2015/07/20.png)
|
||||
|
||||
### iTop Dashboard ###
|
||||
|
||||
You can manage the configuration of everything from here: servers, computers, contacts, locations, contracts, network devices and more. You can also create your own types. In fact, the installed CMDB module is great on its own, and a CMDB is an essential part of every bigger IT environment.
|
||||
|
||||
![iTop Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/07/19.png)
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
iTop is one of the best open-source service desk solutions. We have successfully installed and configured it on our CentOS 7 cloud host. The most powerful aspect of iTop is the ease with which it can be customized via its “extensions”. Feel free to comment if you face any trouble during its setup.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/tools/setup-itop-centos-7/
|
||||
|
||||
作者:[Kashif Siddique][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/kashifs/
|
||||
[1]:http://www.combodo.com/spip.php?page=rubrique&id_rubrique=8
|
@ -0,0 +1,129 @@
|
||||
|
||||
Translating by dingdongnigetou
|
||||
|
||||
Howto Configure Nginx as Reverse Proxy / Load Balancer with Weave and Docker
|
||||
================================================================================
|
||||
Hi everyone, today we'll learn how to configure Nginx as a reverse proxy / load balancer with Weave and Docker. Weave creates a virtual network that connects Docker containers with each other, deploys across multiple hosts and enables their automatic discovery. It allows us to focus on developing our application rather than our infrastructure. It provides an environment in which applications use the network as if all of their containers were plugged into the same network, without the need to configure ports, mappings, links and so on. The services of the application containers on the network are easily accessible to the external world, no matter where they are running. In this tutorial we'll use Weave to quickly and easily deploy the Nginx web server as a load balancer for a simple PHP application running in Docker containers on multiple nodes in Amazon Web Services. We will also be introduced to WeaveDNS, which provides a simple way for containers to find each other by hostname with no changes to code, and tells other containers to connect to those names.
|
||||
|
||||
In this tutorial, we will use Nginx to load balance requests to a set of containers running Apache. Here are simple, easy-to-follow steps for using Weave to configure Nginx as a load balancer running in an Ubuntu Docker container.
|
||||
|
||||
### 1. Setting up AWS Instances ###
|
||||
|
||||
First of all, we'll need to set up Amazon Web Services instances so that we can run Docker containers with Weave on Ubuntu. We will use the [AWS CLI][1] to set up and configure two AWS EC2 instances. Here, in this tutorial, we'll use the smallest available instance type, t1.micro. You will need a valid **Amazon Web Services account** with the AWS CLI set up and configured. First, we'll clone the Weave getting-started repository from GitHub by running the following commands in the AWS CLI environment.
|
||||
|
||||
$ git clone http://github.com/fintanr/weave-gs
|
||||
$ cd weave-gs/aws-nginx-ubuntu-simple
|
||||
|
||||
After cloning the repository, we run the script that deploys two t1.micro instances running Ubuntu with Weave and Docker installed.
|
||||
|
||||
$ sudo ./demo-aws-setup.sh
|
||||
|
||||
We'll need the IP addresses of these instances later in the tutorial. They are stored in an environment file, weavedemo.env, which is created during the execution of demo-aws-setup.sh. To see those IP addresses, we run the following command, which produces output similar to the example below.
|
||||
|
||||
$ cat weavedemo.env
|
||||
|
||||
export WEAVE_AWS_DEMO_HOST1=52.26.175.175
|
||||
export WEAVE_AWS_DEMO_HOST2=52.26.83.141
|
||||
export WEAVE_AWS_DEMO_HOSTCOUNT=2
|
||||
export WEAVE_AWS_DEMO_HOSTS=(52.26.175.175 52.26.83.141)
|
||||
|
||||
Please note that these are not the IP addresses you will see; AWS dynamically allocates IP addresses to the instances, so yours will differ.
|
||||
|
||||
As we are using bash, we will simply source this file with the command below.
|
||||
|
||||
. ./weavedemo.env
|
||||
|
||||
### 2. Launching Weave and WeaveDNS ###
|
||||
|
||||
After deploying the instances, we'll launch Weave and WeaveDNS on each host. Weave and WeaveDNS allow us to easily deploy our containers onto a new infrastructure and configuration without changing any code and without needing to understand concepts such as ambassador containers and links. Here are the commands to launch them on the first host.
|
||||
|
||||
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
|
||||
$ sudo weave launch
|
||||
$ sudo weave launch-dns 10.2.1.1/24
|
||||
|
||||
Next, we'll launch them on our second host as well.
|
||||
|
||||
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2
|
||||
$ sudo weave launch $WEAVE_AWS_DEMO_HOST1
|
||||
$ sudo weave launch-dns 10.2.1.2/24
|
||||
|
||||
### 3. Launching Application Containers ###
|
||||
|
||||
Now, we'll launch six containers across our two hosts, each running an Apache2 web server instance with our simple PHP site. First, we run the following commands to start three Apache2 containers on our first instance.
|
||||
|
||||
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
|
||||
$ sudo weave run --with-dns 10.3.1.1/24 -h ws1.weave.local fintanr/weave-gs-nginx-apache
|
||||
$ sudo weave run --with-dns 10.3.1.2/24 -h ws2.weave.local fintanr/weave-gs-nginx-apache
|
||||
$ sudo weave run --with-dns 10.3.1.3/24 -h ws3.weave.local fintanr/weave-gs-nginx-apache
|
||||
|
||||
After that, we'll launch another three Apache2 containers on our second instance, as shown below.
|
||||
|
||||
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2
|
||||
$ sudo weave run --with-dns 10.3.1.4/24 -h ws4.weave.local fintanr/weave-gs-nginx-apache
|
||||
$ sudo weave run --with-dns 10.3.1.5/24 -h ws5.weave.local fintanr/weave-gs-nginx-apache
|
||||
$ sudo weave run --with-dns 10.3.1.6/24 -h ws6.weave.local fintanr/weave-gs-nginx-apache
|
||||
|
||||
Note: Here, the --with-dns option tells the container to use WeaveDNS to resolve names, and -h x.weave.local makes the host resolvable through WeaveDNS.
|
||||
|
||||
### 4. Launching Nginx Container ###
|
||||
|
||||
Once our application containers are running as expected, we'll launch an Nginx container whose configuration round-robins across the servers, acting as a reverse proxy or load balancer. To run the Nginx container, we use the following commands.
|
||||
|
||||
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
|
||||
$ sudo weave run --with-dns 10.3.1.7/24 -ti -h nginx.weave.local -d -p 80:80 fintanr/weave-gs-nginx-simple
|
||||
|
||||
Our Nginx container is now publicly exposed as an HTTP server on $WEAVE_AWS_DEMO_HOST1.
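Before running the test script in the next step, you can optionally do a quick manual check that the proxy answers. This is only a hedged sketch: it assumes curl is installed on the machine where weavedemo.env was sourced and that the instance's security group allows inbound HTTP on port 80.

    $ curl -s http://$WEAVE_AWS_DEMO_HOST1/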
|
||||
|
||||
### 5. Testing the Load Balancer ###
|
||||
|
||||
To test whether our load balancer is working, we'll run a script that makes HTTP requests to our Nginx container. We'll make six requests so that we can see Nginx cycling through each of the web servers in round-robin order.
|
||||
|
||||
$ ./access-aws-hosts.sh
|
||||
|
||||
{
|
||||
"message" : "Hello Weave - nginx example",
|
||||
"hostname" : "ws1.weave.local",
|
||||
"date" : "2015-06-26 12:24:23"
|
||||
}
|
||||
{
|
||||
"message" : "Hello Weave - nginx example",
|
||||
"hostname" : "ws2.weave.local",
|
||||
"date" : "2015-06-26 12:24:23"
|
||||
}
|
||||
{
|
||||
"message" : "Hello Weave - nginx example",
|
||||
"hostname" : "ws3.weave.local",
|
||||
"date" : "2015-06-26 12:24:23"
|
||||
}
|
||||
{
|
||||
"message" : "Hello Weave - nginx example",
|
||||
"hostname" : "ws4.weave.local",
|
||||
"date" : "2015-06-26 12:24:23"
|
||||
}
|
||||
{
|
||||
"message" : "Hello Weave - nginx example",
|
||||
"hostname" : "ws5.weave.local",
|
||||
"date" : "2015-06-26 12:24:23"
|
||||
}
|
||||
{
|
||||
"message" : "Hello Weave - nginx example",
|
||||
"hostname" : "ws6.weave.local",
|
||||
"date" : "2015-06-26 12:24:23"
|
||||
}
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
Finally, we've successfully configured Nginx as a reverse proxy / load balancer with Weave and Docker on Ubuntu servers running in AWS (Amazon Web Services) EC2. From the output of the previous step, it is clear that we have configured it correctly: requests are sent to the six application containers in round-robin order, each running a PHP app hosted by an Apache web server. Here, Weave and WeaveDNS made it easy to deploy a containerized PHP application with Nginx across multiple hosts on AWS EC2 without any code changes, connecting the containers to each other by hostname via WeaveDNS. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:http://console.aws.amazon.com/
|
@ -0,0 +1,92 @@
|
||||
FSSlc translating
|
||||
|
||||
Linux Logging Basics
|
||||
================================================================================
|
||||
First we’ll describe the basics of what Linux logs are, where to find them, and how they get created. If you already know this stuff, feel free to skip to the next section.
|
||||
|
||||
### Linux System Logs ###
|
||||
|
||||
Many valuable log files are automatically created for you by Linux. You can find them in your /var/log directory. Here is what this directory looks like on a typical Ubuntu system:
|
||||
|
||||
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Linux-system-log-terminal.png)
|
||||
|
||||
Some of the most important Linux system logs include:
|
||||
|
||||
- /var/log/syslog or /var/log/messages stores all global system activity data, including startup messages. Debian-based systems like Ubuntu store this in /var/log/syslog. RedHat-based systems like RHEL or CentOS store this in /var/log/messages.
|
||||
- /var/log/auth.log or /var/log/secure stores logs from the Pluggable Authentication Module (pam) including successful logins, failed login attempts, and authentication methods. Ubuntu and Debian store authentication messages in /var/log/auth.log. RedHat and CentOS store this data in /var/log/secure.
|
||||
- /var/log/kern.log stores kernel error and warning data, which is particularly helpful for troubleshooting custom kernels.
|
||||
- /var/log/cron stores information about cron jobs. Use this data to verify that your cron jobs are running successfully.
|
||||
|
||||
Digital Ocean has a thorough [tutorial][1] on these files and how rsyslog creates them on common distributions like RedHat and CentOS.
|
||||
|
||||
Applications also write log files in this directory. For example, popular servers like Apache, Nginx, MySQL, and more can write log files here. Some of these log files are written by the application itself. Others are created through syslog (see below).
|
||||
|
||||
### What’s Syslog? ###
|
||||
|
||||
How do Linux system log files get created? The answer is through the syslog daemon, which listens for log messages on the syslog socket /dev/log and then writes them to the appropriate log file.
|
||||
|
||||
The word “syslog” is an overloaded term and is often used in short to refer to one of these:
|
||||
|
||||
1. **Syslog daemon** — a program to receive, process, and send syslog messages. It can [send syslog remotely][2] to a centralized server or write it to a local file. Common examples include rsyslogd and syslog-ng. In this usage, people will often say “sending to syslog.”
|
||||
1. **Syslog protocol** — a transport protocol specifying how logs can be sent over a network and a data format definition for syslog messages (below). It’s officially defined in [RFC-5424][3]. The standard ports are 514 for plaintext logs and 6514 for encrypted logs. In this usage, people will often say “sending over syslog.”
|
||||
1. **Syslog messages** — log messages or events in the syslog format, which includes a header with several standard fields. In this usage, people will often say “sending syslog.”
|
||||
|
||||
Syslog messages or events include a header with several standard fields, making analysis and routing easier. They include the timestamp, the name of the application, the classification or location in the system where the message originates, and the priority of the issue.
|
||||
|
||||
Here is an example log message with the syslog header included. It’s from the sshd daemon, which controls remote logins to the system. This message describes a failed login attempt:
|
||||
|
||||
<34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2
|
||||
|
||||
### Syslog Format and Fields ###
|
||||
|
||||
Each syslog message includes a header with fields. Fields are structured data that makes it easier to analyze and route the events. Here is the format we used to generate the above syslog example. You can match each value to a specific field name.
|
||||
|
||||
<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%n
|
||||
|
||||
Below, you’ll find descriptions of some of the most commonly used syslog fields when searching or troubleshooting issues.
|
||||
|
||||
#### Timestamp ####
|
||||
|
||||
The [timestamp][4] field (2003-10-11T22:14:15.003Z in the example) indicates the time and date that the message was generated on the system sending the message. That time can be different from when another system receives the message. The example timestamp breaks down like this:
|
||||
|
||||
- **2003-10-11** is the year, month, and day.
|
||||
- **T** is a required element of the TIMESTAMP field, separating the date and the time.
|
||||
- **22:14:15.003** is the 24-hour format of the time, including the number of milliseconds (**003**) into the next second.
|
||||
- **Z** is an optional element, indicating UTC time. Instead of Z, the example could have included an offset, such as -08:00, which indicates that the time is offset from UTC by 8 hours, PST.
|
||||
|
||||
#### Hostname ####
|
||||
|
||||
The [hostname][5] field (server1.com in the example above) indicates the name of the host or system that sent the message.
|
||||
|
||||
#### App-Name ####
|
||||
|
||||
The [app-name][6] field (sshd in the example) indicates the name of the application that sent the message.
|
||||
|
||||
#### Priority ####
|
||||
|
||||
The priority field or [pri][7] for short (<34> in the example above) tells you how urgent or severe the event is. It’s a combination of two numerical fields: the facility and the severity. The severity ranges from the number 7 for debug events all the way to 0 which is an emergency. The facility describes which process created the event. It ranges from 0 for kernel messages to 23 for local application use.
|
||||
|
||||
Pri can be output in two ways. The first is as a single number called prival, calculated as the facility value multiplied by 8 plus the severity value: prival = facility × 8 + severity. The second is pri-text, which is output in the string format “facility.severity”. The latter format is often easier to read and search but takes up more storage space.
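As a quick sanity check of that formula (a minimal sketch; the facility and severity values come from RFC 5424, and the shell arithmetic is only illustrative), the <34> in the example above corresponds to facility 4 (security/authorization) and severity 2 (critical):

    # 4 * 8 + 2 = 34, matching the <34> header in the example message
    facility=4; severity=2
    echo $(( facility * 8 + severity ))    # prints 34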
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/
|
||||
|
||||
作者:[Jason Skowronski][a1]
|
||||
作者:[Amy Echeverri][a2]
|
||||
作者:[Sadequl Hussain][a3]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a1]:https://www.linkedin.com/in/jasonskowronski
|
||||
[a2]:https://www.linkedin.com/in/amyecheverri
|
||||
[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
|
||||
[1]:https://www.digitalocean.com/community/tutorials/how-to-view-and-configure-linux-logs-on-ubuntu-and-centos
|
||||
[2]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.y2e9tdfk1cdb
|
||||
[3]:https://tools.ietf.org/html/rfc5424
|
||||
[4]:https://tools.ietf.org/html/rfc5424#section-6.2.3
|
||||
[5]:https://tools.ietf.org/html/rfc5424#section-6.2.4
|
||||
[6]:https://tools.ietf.org/html/rfc5424#section-6.2.5
|
||||
[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1
|
@ -0,0 +1,419 @@
|
||||
wyangsun translating
|
||||
Managing Linux Logs
|
||||
================================================================================
|
||||
A key best practice for logging is to centralize or aggregate your logs in one place, especially if you have multiple servers or tiers in your architecture. We’ll tell you why this is a good idea and give tips on how to do it easily.
|
||||
|
||||
### Benefits of Centralizing Logs ###
|
||||
|
||||
It can be cumbersome to look at individual log files if you have many servers. Modern web sites and services often include multiple tiers of servers, distributed with load balancers, and more. It’d take a long time to hunt down the right file, and even longer to correlate problems across servers. There’s nothing more frustrating than finding the information you are looking for hasn’t been captured, or the log file that could have held the answer has just been lost after a restart.
|
||||
|
||||
Centralizing your logs makes them faster to search, which can help you solve production issues faster. You don’t have to guess which server had the issue because all the logs are in one place. Additionally, you can use more powerful tools to analyze them, including log management solutions. These solutions can [transform plain text logs][1] into fields that can be easily searched and analyzed.
|
||||
|
||||
Centralizing your logs also makes them easier to manage:
|
||||
|
||||
- They are safer from accidental or intentional loss when they are backed up and archived in a separate location. If your servers go down or are unresponsive, you can use the centralized logs to debug the problem.
|
||||
- You don’t have to worry about ssh or inefficient grep commands requiring more resources on troubled systems.
|
||||
- You don’t have to worry about full disks, which can crash your servers.
|
||||
- You can keep your production servers secure without giving your entire team access just to look at logs. It’s much safer to give your team access to logs from the central location.
|
||||
|
||||
With centralized log management, you still must deal with the risk of being unable to transfer logs to the centralized location due to poor network connectivity or of using up a lot of network bandwidth. We’ll discuss how to intelligently address these issues in the sections below.
|
||||
|
||||
### Popular Tools for Centralizing Logs ###
|
||||
|
||||
The most common way of centralizing logs on Linux is by using syslog daemons or agents. The syslog daemon supports local log collection, then transports logs through the syslog protocol to a central server. There are several popular daemons that you can use to centralize your log files:
|
||||
|
||||
- [rsyslog][2] is a light-weight daemon installed on most common Linux distributions.
|
||||
- [syslog-ng][3] is the second most popular syslog daemon for Linux.
|
||||
- [logstash][4] is a heavier-weight agent that can do more advanced processing and parsing.
|
||||
- [fluentd][5] is another agent with advanced processing capabilities.
|
||||
|
||||
Rsyslog is the most popular daemon for centralizing your log data because it’s installed by default in most common distributions of Linux. You don’t need to download it or install it, and it’s lightweight so it won’t take up much of your system resources.
|
||||
|
||||
If you need more advanced filtering or custom parsing capabilities, Logstash is the next most popular choice if you don’t mind the extra system footprint.
|
||||
|
||||
### Configure Rsyslog.conf ###
|
||||
|
||||
Since rsyslog is the most widely used syslog daemon, we’ll show how to configure it to centralize logs. The global configuration file is located at /etc/rsyslog.conf. It loads modules, sets global directives, and has an include for application-specific files located in the directory /etc/rsyslog.d/. This directory contains /etc/rsyslog.d/50-default.conf which instructs rsyslog to write the system logs to file. You can read more about the configuration files in the [rsyslog documentation][6].
|
||||
|
||||
The configuration language for rsyslog is [RainerScript][7]. You set up specific inputs for logs as well as actions to output them to another destination. Rsyslog already configures standard defaults for syslog input, so you usually just need to add an output to your log server. Here is an example configuration for rsyslog to output logs to an external server. In this example, **BEBOP** is the hostname of the server, so you should replace it with your own server name.
|
||||
|
||||
action(type="omfwd" protocol="tcp" target="BEBOP" port="514")
|
||||
|
||||
You could send your logs to a log server with ample storage to keep a copy for search, backup, and analysis. If you’re storing logs in the file system, then you should set up [log rotation][8] to keep your disk from getting full.
|
||||
|
||||
Alternatively, you can send these logs to a log management solution. If your solution is installed locally you can send it to your local host and port as specified in your system documentation. If you use a cloud-based provider, you will send them to the hostname and port specified by your provider.
|
||||
|
||||
### Log Directories ###
|
||||
|
||||
You can centralize all the files in a directory or matching a wildcard pattern. The nxlog and syslog-ng daemons support both directories and wildcards (*).
|
||||
|
||||
Common versions of rsyslog can’t monitor directories directly. As a workaround, you can set up a cron job to monitor the directory for new files, then configure rsyslog to send those files to a destination such as your log management system. As an example, the log management vendor Loggly has an open source version of a [script to monitor directories][9].
|
||||
|
||||
### Which Protocol: UDP, TCP, or RELP? ###
|
||||
|
||||
There are three main protocols that you can choose from when you transmit data over the Internet. The most common is UDP for your local network and TCP for the Internet. If you cannot lose logs, then use the more advanced RELP protocol.
|
||||
|
||||
[UDP][10] sends a datagram packet, which is a single packet of information. It’s an outbound-only protocol, so it doesn’t send you an acknowledgement of receipt (ACK). It makes only one attempt to send the packet. UDP can be used to smartly degrade or drop logs when the network gets congested. It’s most commonly used on reliable networks like localhost.
|
||||
|
||||
[TCP][11] sends streaming information in multiple packets and returns an ACK. TCP makes multiple attempts to send the packet, but is limited by the size of the [TCP buffer][12]. This is the most common protocol for sending logs over the Internet.
|
||||
|
||||
[RELP][13] is the most reliable of these three protocols but was created for rsyslog and has less industry adoption. It acknowledges receipt of data in the application layer and will resend if there is an error. Make sure your destination also supports this protocol.
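For reference, here is a minimal sketch of what forwarding over each transport can look like in rsyslog configuration (the host logserver.example.com and the RELP port 2514 are placeholders; RELP additionally requires the omrelp output module to be available in your rsyslog build):

    # UDP: single @, fire-and-forget
    *.* @logserver.example.com:514
    # TCP: double @@, acknowledged stream
    *.* @@logserver.example.com:514
    # RELP: application-level acknowledgements via the omrelp module
    module(load="omrelp")
    action(type="omrelp" target="logserver.example.com" port="2514")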
|
||||
|
||||
### Reliably Send with Disk Assisted Queues ###
|
||||
|
||||
If rsyslog encounters a problem when storing logs, such as an unavailable network connection, it can queue the logs until the connection is restored. The queued logs are stored in memory by default. However, memory is limited and if the problem persists, the logs can exceed memory capacity.
|
||||
|
||||
**Warning: You can lose data if you store logs only in memory.**
|
||||
|
||||
Rsyslog can queue your logs to disk when memory is full. [Disk-assisted queues][14] make transport of logs more reliable. Here is an example of how to configure rsyslog with a disk-assisted queue:
|
||||
|
||||
$WorkDirectory /var/spool/rsyslog # where to place spool files
|
||||
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
|
||||
$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
|
||||
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
|
||||
$ActionQueueType LinkedList # run asynchronously
|
||||
$ActionResumeRetryCount -1 # infinite retries if host is down
|
||||
|
||||
### Encrypt Logs Using TLS ###
|
||||
|
||||
When the security and privacy of your data is a concern, you should consider encrypting your logs. Sniffers and middlemen could read your log data if you transmit it over the Internet in clear text. You should encrypt your logs if they contain private information, sensitive identification data, or government-regulated data. The rsyslog daemon can encrypt your logs using the TLS protocol and keep your data safer.
|
||||
|
||||
To set up TLS encryption, you need to do the following tasks:
|
||||
|
||||
1. Generate a [certificate authority][15] (CA). There are sample certificates in /contrib/gnutls, which are good only for testing, but you need to create your own for production. If you’re using a log management service, it will have one ready for you.
|
||||
1. Generate a [digital certificate][16] for your server to enable SSL operation, or use one from your log management service provider.
|
||||
1. Configure your rsyslog daemon to send TLS-encrypted data to your log management system.
|
||||
|
||||
Here’s an example rsyslog configuration with TLS encryption. Replace CERT and DOMAIN_NAME with your own server setting.
|
||||
|
||||
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt
|
||||
$ActionSendStreamDriver gtls
|
||||
$ActionSendStreamDriverMode 1
|
||||
$ActionSendStreamDriverAuthMode x509/name
|
||||
$ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com
|
||||
|
||||
### Best Practices for Application Logging ###
|
||||
|
||||
In addition to the logs that Linux creates by default, it’s also a good idea to centralize logs from important applications. Almost all Linux-based server class applications write their status information in separate, dedicated log files. This includes database products like PostgreSQL or MySQL, web servers like Nginx or Apache, firewalls, print and file sharing services, directory and DNS servers and so on.
|
||||
|
||||
The first thing administrators do after installing an application is to configure it. Linux applications typically have a .conf file somewhere within the /etc directory. It can be somewhere else too, but that’s the first place where people look for configuration files.
|
||||
|
||||
Depending on how complex or large the application is, the number of settable parameters can range from a handful to hundreds. As mentioned before, most applications write their status to some sort of log file, and the configuration file is where the log settings, among other things, are defined.
|
||||
|
||||
If you’re not sure where it is, you can use the locate command to find it:
|
||||
|
||||
[root@localhost ~]# locate postgresql.conf
|
||||
/usr/pgsql-9.4/share/postgresql.conf.sample
|
||||
/var/lib/pgsql/9.4/data/postgresql.conf
|
||||
|
||||
#### Set a Standard Location for Log Files ####
|
||||
|
||||
Linux systems typically save their log files under /var/log directory. This works fine, but check if the application saves under a specific directory under /var/log. If it does, great, if not, you may want to create a dedicated directory for the app under /var/log. Why? That’s because other applications save their log files under /var/log too and if your app saves more than one log file – perhaps once every day or after each service restart – it may be a bit difficult to trawl through a large directory to find the file you want.
|
||||
|
||||
If you have more than one instance of the application running in your network, this approach also comes in handy. Think about a situation where you may have a dozen web servers running in your network. When troubleshooting any one of the boxes, you would know exactly where to go.
|
||||
|
||||
#### Use A Standard Filename ####
|
||||
|
||||
Use a standard filename for the latest logs from your application. This makes it easy because you can monitor and tail a single file. Many applications instead add some sort of date/time stamp to the name, which makes it much more difficult to find the latest file and to set up file monitoring with rsyslog. A better approach is to add timestamps to older log files using logrotate; this makes them easier to archive and search historically.
|
||||
|
||||
#### Append the Log File ####
|
||||
|
||||
Is the log file going to be overwritten after each application restart? If so, we recommend turning that off. After each restart the app should append to the log file. That way, you can always go back to the last log line before the restart.
|
||||
|
||||
#### Appending vs. Rotation of Log File ####
|
||||
|
||||
Even if the application writes a new log file after each restart, how is it saving in the current log? Is it appending to one single, massive file? Linux systems are not known for frequent reboots or crashes: applications can run for very long periods without even blinking, but that can also make the log file very large. If you are trying to analyze the root cause of a connection failure that happened last week, you could easily be searching through tens of thousands of lines.
|
||||
|
||||
We recommend you configure the application to rotate its log file once every day, say at midnight.
|
||||
|
||||
Why? For a start, it becomes more manageable: it’s much easier to find a file name with a specific date/time pattern than to search through one file for that date’s entries. Files are also much smaller; you won’t wonder whether vi has frozen when you open a log file. Secondly, if you are sending the log file over the wire to a different location – perhaps a nightly backup job copying to a centralized log server – it doesn’t chew up your network’s bandwidth. Third and finally, it helps with log retention: if you want to cull old log entries, it’s easier to delete files older than a particular date than to have an application parse one single large file.
|
||||
|
||||
#### Retention of Log File ####
|
||||
|
||||
How long do you keep a log file? That definitely comes down to business requirements. You could be asked to keep one week’s worth of logging information, or it may be a regulatory requirement to keep ten years’ worth of data. Whatever it is, logs eventually need to be moved off the server.
|
||||
|
||||
In our opinion, unless otherwise required, keep at least a month’s worth of log files online, plus copy them to a secondary location like a logging server. Anything older than that can be offloaded to a separate media. For example, if you are on AWS, your older logs can be copied to Glacier.
|
||||
|
||||
#### Separate Disk Location for Log Files ####
|
||||
|
||||
Linux best practice usually suggests mounting the /var directory on a separate file system because of the high number of I/Os associated with this directory. We would recommend mounting the /var/log directory on a separate disk system. This can save I/O contention with the main application’s data. Also, if the number of log files becomes too large or a single log file becomes too big, it doesn’t fill up the entire disk.
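As an illustration only (the device name, filesystem type and mount options are placeholders to adapt to your environment), such a dedicated mount could be declared in /etc/fstab like this:

    # hypothetical /etc/fstab entry putting /var/log on its own disk
    /dev/sdb1    /var/log    ext4    defaults    0 2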
|
||||
|
||||
#### Log Entries ####
|
||||
|
||||
What information should be captured in each log entry?
|
||||
|
||||
That depends on what you want to use the log for. Do you want to use it only for troubleshooting purposes, or do you want to capture everything that’s happening? Is it a legal requirement to capture what each user is running or viewing?
|
||||
|
||||
If you are using logs for troubleshooting purposes, save only errors, warnings or fatal messages. There’s no reason to capture debug messages, for example. The app may log debug messages by default or another administrator might have turned this on for another troubleshooting exercise, but you need to turn this off because it can definitely fill up the space quickly. At a minimum, capture the date, time, client application name, source IP or client host name, action performed and the message itself.
|
||||
|
||||
#### A Practical Example for PostgreSQL ####
|
||||
|
||||
As an example, let’s look at the main configuration file for a vanilla PostgreSQL 9.4 installation. It’s called postgresql.conf and, unlike other config files on Linux systems, it’s not saved under the /etc directory. In the code snippet below, we can see it’s in the /var/lib/pgsql directory of our CentOS 7 server:
|
||||
|
||||
[root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf
|
||||
...
|
||||
#------------------------------------------------------------------------------
|
||||
# ERROR REPORTING AND LOGGING
|
||||
#------------------------------------------------------------------------------
|
||||
# - Where to Log -
|
||||
log_destination = 'stderr'
|
||||
# Valid values are combinations of
|
||||
# stderr, csvlog, syslog, and eventlog,
|
||||
# depending on platform. csvlog
|
||||
# requires logging_collector to be on.
|
||||
# This is used when logging to stderr:
|
||||
logging_collector = on
|
||||
# Enable capturing of stderr and csvlog
|
||||
# into log files. Required to be on for
|
||||
# csvlogs.
|
||||
# (change requires restart)
|
||||
# These are only used if logging_collector is on:
|
||||
log_directory = 'pg_log'
|
||||
# directory where log files are written,
|
||||
# can be absolute or relative to PGDATA
|
||||
log_filename = 'postgresql-%a.log' # log file name pattern,
|
||||
# can include strftime() escapes
|
||||
# log_file_mode = 0600
|
||||
# creation mode for log files,
|
||||
# begin with 0 to use octal notation
|
||||
log_truncate_on_rotation = on # If on, an existing log file with the
|
||||
# same name as the new log file will be
|
||||
# truncated rather than appended to.
|
||||
# But such truncation only occurs on
|
||||
# time-driven rotation, not on restarts
|
||||
# or size-driven rotation. Default is
|
||||
# off, meaning append to existing files
|
||||
# in all cases.
|
||||
log_rotation_age = 1d
|
||||
# Automatic rotation of logfiles will happen after that time. 0 disables.
|
||||
log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables.
|
||||
# These are relevant when logging to syslog:
|
||||
#syslog_facility = 'LOCAL0'
|
||||
#syslog_ident = 'postgres'
|
||||
# This is only relevant when logging to eventlog (win32):
|
||||
#event_source = 'PostgreSQL'
|
||||
# - When to Log -
|
||||
#client_min_messages = notice # values in order of decreasing detail:
|
||||
# debug5
|
||||
# debug4
|
||||
# debug3
|
||||
# debug2
|
||||
# debug1
|
||||
# log
|
||||
# notice
|
||||
# warning
|
||||
# error
|
||||
#log_min_messages = warning # values in order of decreasing detail:
|
||||
# debug5
|
||||
# debug4
|
||||
# debug3
|
||||
# debug2
|
||||
# debug1
|
||||
# info
|
||||
# notice
|
||||
# warning
|
||||
# error
|
||||
# log
|
||||
# fatal
|
||||
# panic
|
||||
#log_min_error_statement = error # values in order of decreasing detail:
|
||||
# debug5
|
||||
# debug4
|
||||
# debug3
|
||||
# debug2
|
||||
# debug1
|
||||
# info
|
||||
# notice
|
||||
# warning
|
||||
# error
|
||||
# log
|
||||
# fatal
|
||||
# panic (effectively off)
|
||||
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
|
||||
# and their durations, > 0 logs only
|
||||
# statements running at least this number
|
||||
# of milliseconds
|
||||
# - What to Log
|
||||
#debug_print_parse = off
|
||||
#debug_print_rewritten = off
|
||||
#debug_print_plan = off
|
||||
#debug_pretty_print = on
|
||||
#log_checkpoints = off
|
||||
#log_connections = off
|
||||
#log_disconnections = off
|
||||
#log_duration = off
|
||||
#log_error_verbosity = default
|
||||
# terse, default, or verbose messages
|
||||
#log_hostname = off
|
||||
log_line_prefix = '< %m >' # special values:
|
||||
# %a = application name
|
||||
# %u = user name
|
||||
# %d = database name
|
||||
# %r = remote host and port
|
||||
# %h = remote host
|
||||
# %p = process ID
|
||||
# %t = timestamp without milliseconds
|
||||
# %m = timestamp with milliseconds
|
||||
# %i = command tag
|
||||
# %e = SQL state
|
||||
# %c = session ID
|
||||
# %l = session line number
|
||||
# %s = session start timestamp
|
||||
# %v = virtual transaction ID
|
||||
# %x = transaction ID (0 if none)
|
||||
# %q = stop here in non-session
|
||||
# processes
|
||||
# %% = '%'
|
||||
# e.g. '<%u%%%d> '
|
||||
#log_lock_waits = off # log lock waits >= deadlock_timeout
|
||||
#log_statement = 'none' # none, ddl, mod, all
|
||||
#log_temp_files = -1 # log temporary files equal or larger
|
||||
# than the specified size in kilobytes; -1 disables, 0 logs all temp files
|
||||
log_timezone = 'Australia/ACT'
|
||||
|
||||
Although most parameters are commented out, they assume default values. We can see the log file directory is pg_log (log_directory parameter), the file names should start with postgresql (log_filename parameter), the files are rotated once every day (log_rotation_age parameter) and the log entries start with a timestamp (log_line_prefix parameter). Of particular interest is the log_line_prefix parameter: there is a whole gamut of information you can include there.
|
||||
|
||||
Looking under /var/lib/pgsql/9.4/data/pg_log directory shows us these files:
|
||||
|
||||
[root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log
|
||||
total 20
|
||||
-rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log
|
||||
-rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log
|
||||
-rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log
|
||||
-rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log
|
||||
-rw-------. 1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log
|
||||
|
||||
So the log files only have the name of the weekday stamped in the file name. We can change it. How? Configure the log_filename parameter in postgresql.conf.
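For example (a hedged illustration; the strftime escapes are the same ones documented in the comments of postgresql.conf), daily date-stamped file names could be produced with:

    log_filename = 'postgresql-%Y-%m-%d.log'    # e.g. postgresql-2015-02-27.log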
|
||||
|
||||
Looking inside one log file shows its entries start with date time only:
|
||||
|
||||
[root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log
|
||||
...
|
||||
< 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request
|
||||
< 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions
|
||||
< 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down
|
||||
< 2015-02-27 01:21:27.036 EST >LOG: shutting down
|
||||
< 2015-02-27 01:21:27.211 EST >LOG: database system is shut down
|
||||
|
||||
### Centralizing Application Logs ###
|
||||
|
||||
#### Log File Monitoring with Imfile ####
|
||||
|
||||
Traditionally, the most common way for applications to log their data is with files. Files are easy to search on a single machine but don’t scale well with more servers. You can set up log file monitoring and send the events to a centralized server when new logs are appended to the bottom. Create a new configuration file in /etc/rsyslog.d/ then add a file input like this:
|
||||
|
||||
$ModLoad imfile
|
||||
$InputFilePollInterval 10
|
||||
$PrivDropToGroup adm
|
||||
|
||||
----------
|
||||
|
||||
# Input for FILE1
|
||||
$InputFileName /FILE1
|
||||
$InputFileTag APPNAME1
|
||||
$InputFileStateFile stat-APPNAME1 #this must be unique for each file being polled
|
||||
$InputFileSeverity info
|
||||
$InputFilePersistStateInterval 20000
|
||||
$InputRunFileMonitor
|
||||
|
||||
Replace FILE1 and APPNAME1 with your own file and application names. Rsyslog will send it to the outputs you have configured.
|
||||
|
||||
#### Local Socket Logs with Imuxsock ####
|
||||
|
||||
A socket is similar to a UNIX file handle except that the socket is read into memory by your syslog daemon and then sent to the destination. No file needs to be written. As an example, the logger command sends its logs to this UNIX socket.
|
||||
|
||||
This approach makes efficient use of system resources if your server is constrained by disk I/O or you have no need for local file logs. The disadvantage of this approach is that the socket has a limited queue size. If your syslog daemon goes down or can’t keep up, then you could lose log data.
|
||||
|
||||
The rsyslog daemon will read from the /dev/log socket by default, but you can explicitly enable it with the [imuxsock input module][17] using the following configuration directive:
|
||||
|
||||
$ModLoad imuxsock
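Once the module is loaded, a hedged way to confirm the path end-to-end is the logger command, which writes a test message to the local syslog socket; with a default rsyslog setup it typically lands in /var/log/syslog or /var/log/messages (the tag myapp is just a placeholder):

    $ logger -t myapp "hello from the local socket"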
|
||||
|
||||
#### UDP Logs with Imudp ####
|
||||
|
||||
Some applications output log data in UDP format, which is the standard syslog protocol when transferring log files over a network or your localhost. Your syslog daemon receives these logs and can process them or transmit them in a different format. Alternately, you can send the logs to your log server or to a log management solution.
|
||||
|
||||
Use the following command to configure rsyslog to accept syslog data over UDP on the standard port 514:
|
||||
|
||||
$ModLoad imudp
|
||||
|
||||
----------
|
||||
|
||||
$UDPServerRun 514
|
||||
|
||||
### Manage Logs with Logrotate ###
|
||||
|
||||
Log rotation is a process that archives log files automatically when they reach a specified age. Without intervention, log files keep growing, using up disk space. Eventually they will crash your machine.
|
||||
|
||||
The logrotate utility can truncate your logs as they age, freeing up space. Your new log file retains the filename. Your old log file is renamed with a number appended to the end of it. Each time the logrotate utility runs, a new file is created and the existing file is renamed in sequence. You determine the threshold when old files are deleted or archived.
|
||||
|
||||
When logrotate copies a file, the new file has a new inode, which can interfere with rsyslog’s ability to monitor the new file. You can alleviate this issue by adding the copytruncate parameter to your logrotate cron job. This parameter copies existing log file contents to a new file and truncates these contents from the existing file. The inode never changes because the log file itself remains the same; its contents are in a new file.
|
||||
|
||||
The logrotate utility uses the main configuration file at /etc/logrotate.conf and application-specific settings in the directory /etc/logrotate.d/. DigitalOcean has a detailed [tutorial on logrotate][18].
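As a sketch of how those pieces fit together (the path /var/log/myapp/myapp.log and the retention count are placeholders; copytruncate is the workaround discussed above for keeping the inode stable while rsyslog is tailing the file):

    # hypothetical /etc/logrotate.d/myapp
    /var/log/myapp/myapp.log {
        daily
        rotate 30
        dateext
        compress
        missingok
        copytruncate
    }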
|
||||
|
||||
### Manage Configuration on Many Servers ###
|
||||
|
||||
When you have just a few servers, you can manually configure logging on them. Once you have a few dozen or more servers, you can take advantage of tools that make this easier and more scalable. At a basic level, all of these copy your rsyslog configuration to each server, and then restart rsyslog so the changes take effect.
|
||||
|
||||
#### Pssh ####
|
||||
|
||||
This tool lets you run an ssh command on several servers in parallel. Use a pssh deployment for only a small number of servers. If one of your servers fails, then you have to ssh into the failed server and do the deployment manually. If you have several failed servers, then the manual deployment on them can take a long time.
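A hedged example of what such a push could look like (the hosts file, config file name and restart command are placeholders, and the binaries may be named pssh/pscp or parallel-ssh/parallel-scp depending on your distribution):

    # assumes the account listed in hosts.txt can write to /etc/rsyslog.d (e.g. root)
    $ parallel-scp -h hosts.txt 90-forward.conf /etc/rsyslog.d/90-forward.conf
    $ parallel-ssh -h hosts.txt -i "service rsyslog restart"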
|
||||
|
||||
#### Puppet/Chef ####
|
||||
|
||||
Puppet and Chef are two different tools that can automatically configure all of the servers in your network to a specified standard. Their reporting tools let you know about failures and can resync periodically. Both Puppet and Chef have enthusiastic supporters. If you aren’t sure which one is more suitable for your deployment configuration management, you might appreciate [InfoWorld’s comparison of the two tools][19].
|
||||
|
||||
Some vendors also offer modules or recipes for configuring rsyslog. Here is an example from Loggly’s Puppet module. It offers a class for rsyslog to which you can add an identifying token:
|
||||
|
||||
node 'my_server_node.example.net' {
|
||||
# Send syslog events to Loggly
|
||||
class { 'loggly::rsyslog':
|
||||
customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000',
|
||||
}
|
||||
}
|
||||
|
||||
#### Docker ####
|
||||
|
||||
Docker uses containers to run applications independent of the underlying server. Everything runs from inside a container, which you can think of as a unit of functionality. ZDNet has an in-depth article about [using Docker][20] in your data center.
|
||||
|
||||
There are several ways to log from Docker containers including linking to a logging container, logging to a shared volume, or adding a syslog agent directly inside the container. One of the most popular logging containers is called [logspout][21].
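As a rough sketch of the logging-container approach (hedged: the image name, socket mount and syslog endpoint follow the logspout README at the time of writing and may differ for your version; logs.example.com:514 is a placeholder for your collector):

    $ docker run -d --name logspout \
        -v /var/run/docker.sock:/var/run/docker.sock \
        progrium/logspout syslog://logs.example.com:514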
|
||||
|
||||
#### Vendor Scripts or Agents ####
|
||||
|
||||
Most log management solutions offer scripts or agents to make sending data from one or more servers relatively easy. Heavyweight agents can use up extra system resources. Some vendors like Loggly offer configuration scripts to make using existing syslog daemons easier. Here is an example [script][22] from Loggly which can run on any number of servers.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/
|
||||
|
||||
作者:[Jason Skowronski][a1]
|
||||
作者:[Amy Echeverri][a2]
|
||||
作者:[Sadequl Hussain][a3]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a1]:https://www.linkedin.com/in/jasonskowronski
|
||||
[a2]:https://www.linkedin.com/in/amyecheverri
|
||||
[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
|
||||
[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl
|
||||
[2]:http://www.rsyslog.com/
|
||||
[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system
|
||||
[4]:http://logstash.net/
|
||||
[5]:http://www.fluentd.org/
|
||||
[6]:http://www.rsyslog.com/doc/rsyslog_conf.html
|
||||
[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html
|
||||
[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87
|
||||
[9]:https://www.loggly.com/docs/file-monitoring/
|
||||
[10]:http://www.networksorcery.com/enp/protocol/udp.htm
|
||||
[11]:http://www.networksorcery.com/enp/protocol/tcp.htm
|
||||
[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html
|
||||
[13]:http://www.rsyslog.com/doc/relp.html
|
||||
[14]:http://www.rsyslog.com/doc/queues.html
|
||||
[15]:http://www.rsyslog.com/doc/tls_cert_ca.html
|
||||
[16]:http://www.rsyslog.com/doc/tls_cert_machine.html
|
||||
[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html
|
||||
[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10
|
||||
[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html
|
||||
[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/
|
||||
[21]:https://github.com/progrium/logspout
|
||||
[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/
|
@ -0,0 +1,117 @@
|
||||
translation by strugglingyouth
|
||||
Troubleshooting with Linux Logs
|
||||
================================================================================
|
||||
Troubleshooting is the main reason people create logs. Often you’ll want to diagnose why a problem happened with your Linux system or application. An error message or a sequence of events can give you clues to the root cause, indicate how to reproduce the issue, and point out ways to fix it. Here are a few use cases for things you might want to troubleshoot in your logs.
|
||||
|
||||
### Cause of Login Failures ###
|
||||
|
||||
If you want to check if your system is secure, you can check your authentication logs for failed login attempts and unfamiliar successes. Authentication failures occur when someone passes incorrect or otherwise invalid login credentials, often to ssh for remote access or su for local access to another user’s permissions. These are logged by the [pluggable authentication module][1], or pam for short. Look in your logs for strings like Failed password and user unknown. Successful authentication records include strings like Accepted password and session opened.
|
||||
|
||||
Failure Examples:
|
||||
|
||||
pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2
|
||||
Failed password for invalid user hoover from 10.0.2.2 port 4791 ssh2
|
||||
pam_unix(sshd:auth): check pass; user unknown
|
||||
PAM service(sshd) ignoring max retries; 6 > 3
|
||||
|
||||
Success Examples:
|
||||
|
||||
Accepted password for hoover from 10.0.2.2 port 4792 ssh2
|
||||
pam_unix(sshd:session): session opened for user hoover by (uid=0)
|
||||
pam_unix(sshd:session): session closed for user hoover
|
||||
|
||||
You can use grep to find which users accounts have the most failed logins. These are the accounts that potential attackers are trying and failing to access. This example is for an Ubuntu system.
|
||||
|
||||
$ grep "invalid user" /var/log/auth.log | cut -d ' ' -f 10 | sort | uniq -c | sort -nr
|
||||
23 oracle
|
||||
18 postgres
|
||||
17 nagios
|
||||
10 zabbix
|
||||
6 test
|
||||
|
||||
You’ll need to write a different command for each application and message because there is no standard format. Log management systems that automatically parse logs will effectively normalize them and help you extract key fields like username.
|
||||
|
||||
Log management systems can extract the usernames from your Linux logs using automated parsing. This lets you see an overview of the users and filter on them with a single click. In this example, we can see that the root user logged in over 2,700 times because we are filtering the logs to show login attempts only for the root user.
|
||||
|
||||
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.36-AM.png)
|
||||
|
||||
Log management systems also let you view graphs over time to spot unusual trends. If someone had one or two failed logins within a few minutes, it might be that a real user forgot his or her password. However, if there are hundreds of failed logins or they are all different usernames, it’s more likely that someone is trying to attack the system. Here you can see that on March 12, someone tried to login as test and nagios several hundred times. This is clearly not a legitimate use of the system.
|
||||
|
||||
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.12.18-AM.png)
|
||||
|
||||
### Cause of Reboots ###
|
||||
|
||||
Sometimes a server can stop due to a system crash or reboot. How do you know when it happened and who did it?
|
||||
|
||||
#### Shutdown Command ####
|
||||
|
||||
If someone ran the shutdown command manually, you can see it in the auth log file. Here you can see that someone remotely logged in from the IP 50.0.134.125 as the user ubuntu and then shut the system down.
|
||||
|
||||
Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: Accepted publickey for ubuntu from 50.0.134.125 port 52538 ssh
|
||||
Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0)
|
||||
Mar 19 18:37:09 ip-172-31-11-231 sudo: ubuntu : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/sbin/shutdown -r now
|
||||
|
||||
#### Kernel Initializing ####
|
||||
|
||||
If you want to see when the server restarted regardless of the reason (including crashes), you can search the logs written as the kernel initializes. You’d search for kernel facility messages containing the string Initializing cpu.
|
||||
|
||||
Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpuset
|
||||
Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpu
|
||||
Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Linux version 3.8.0-44-generic (buildd@tipua) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #66~precise1-Ubuntu SMP Tue Jul 15 04:01:04 UTC 2014 (Ubuntu 3.8.0-44.66~precise1-generic 3.8.13.25)
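A hedged way to run that search with grep on a Debian/Ubuntu-style layout (the file may be /var/log/messages or /var/log/kern.log depending on your distribution):

    $ grep "Initializing cgroup subsys cpu" /var/log/syslog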
|
||||
|
||||
### Detect Memory Problems ###
|
||||
|
||||
There are lots of reasons a server might crash, but one common cause is running out of memory.
|
||||
|
||||
When your system is low on memory, processes are killed, typically in the order of which ones will release the most resources. The error occurs when your system is using all of its memory and a new or existing process attempts to access additional memory. Look in your log files for strings like Out of Memory or for kernel warnings like to kill. These strings indicate that your system intentionally killed the process or application rather than allowing the process to crash.
|
||||
|
||||
Examples:
|
||||
|
||||
[33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
|
||||
[29923450.995084] select 5230 (docker), adj 0, size 708, to kill
|
||||
|
||||
You can find these logs using a tool like grep. This example is for Ubuntu:
|
||||
|
||||
$ grep "Out of memory" /var/log/syslog
|
||||
[33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
|
||||
|
||||
Keep in mind that grep itself uses memory, so you might cause an out of memory error just by running grep. This is another reason it’s a fabulous idea to centralize your logs!
|
||||
|
||||
### Log Cron Job Errors ###
|
||||
|
||||
The cron daemon is a scheduler that runs processes at specified dates and times. If the process fails to run or fails to finish, then a cron error appears in your log files. You can find these files in /var/log/cron, /var/log/messages, and /var/log/syslog depending on your distribution. There are many reasons a cron job can fail. Usually the problems lie with the process rather than the cron daemon itself.
|
||||
|
||||
By default, cron jobs output through email using Postfix. Here is a log showing that an email was sent. Unfortunately, you cannot see the contents of the message here.
|
||||
|
||||
Mar 13 16:35:01 PSQ110 postfix/pickup[15158]: C3EDC5800B4: uid=1001 from=<hoover>
|
||||
Mar 13 16:35:01 PSQ110 postfix/cleanup[15727]: C3EDC5800B4: message-id=<20150310110501.C3EDC5800B4@PSQ110>
|
||||
Mar 13 16:35:01 PSQ110 postfix/qmgr[15159]: C3EDC5800B4: from=<hoover@loggly.com>, size=607, nrcpt=1 (queue active)
|
||||
Mar 13 16:35:05 PSQ110 postfix/smtp[15729]: C3EDC5800B4: to=<hoover@loggly.com>, relay=gmail-smtp-in.l.google.com[74.125.130.26]:25, delay=4.1, delays=0.26/0/2.2/1.7, dsn=2.0.0, status=sent (250 2.0.0 OK 1425985505 f16si501651pdj.5 - gsmtp)
|
||||
|
||||
You should consider logging the cron standard output to help debug problems. Here is how you can redirect your cron standard output to syslog using the logger command. Replace the echo command with your own script and helloCron with whatever you want to set the appName to.
|
||||
|
||||
*/5 * * * * echo 'Hello World' 2>&1 | /usr/bin/logger -t helloCron
|
||||
|
||||
Which creates the log entries:
|
||||
|
||||
Apr 28 22:20:01 ip-172-31-11-231 CRON[15296]: (ubuntu) CMD (echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron)
|
||||
Apr 28 22:20:01 ip-172-31-11-231 helloCron: Hello World!
|
||||
|
||||
Each cron job will log differently based on the specific type of job and how it outputs data. Hopefully there are clues to the root cause of problems within the logs, or you can add additional logging as needed.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-logs/
|
||||
|
||||
作者:[Jason Skowronski][a1]
|
||||
作者:[Amy Echeverri][a2]
|
||||
作者:[Sadequl Hussain][a3]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a1]:https://www.linkedin.com/in/jasonskowronski
|
||||
[a2]:https://www.linkedin.com/in/amyecheverri
|
||||
[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
|
||||
[1]:http://linux.die.net/man/8/pam.d
|
@ -0,0 +1,429 @@
|
||||
Installation Guide for Puppet on Ubuntu 15.04
|
||||
================================================================================
|
||||
Hi everyone, today in this article we'll learn how to install puppet to manage your server infrastructure running Ubuntu 15.04. Puppet is an open source software configuration management tool, developed and maintained by Puppet Labs, that allows us to automate the provisioning, configuration and management of a server infrastructure. Whether we're managing just a few servers or thousands of physical and virtual machines for orchestration and reporting, puppet automates tasks that system administrators often do manually, which frees up time and mental space so sysadmins can work on improving other aspects of the overall setup. It ensures the consistency, reliability and stability of the automated jobs it runs, and it facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code. Puppet is available as two solutions for configuration management and data center automation: **puppet open source and puppet enterprise**. Puppet open source is a flexible, customizable solution available under the Apache 2.0 license, designed to help system administrators automate the many repetitive tasks they regularly perform. The puppet enterprise edition is a proven commercial solution for diverse enterprise IT environments which gives us all the benefits of open source puppet, plus puppet apps, commercial-only enhancements, supported modules and integrations, and the assurance of a fully supported platform. Puppet uses SSL certificates to authenticate communication between master and agent nodes.
|
||||
|
||||
In this tutorial, we will cover how to install open source puppet in an agent and master setup running ubuntu 15.04 linux distribution. Here, Puppet master is a server from where all the configurations will be controlled and managed and all our remaining servers will be puppet agent nodes, which is configured according to the configuration of puppet master server. Here are some easy steps to install and configure puppet to manage our server infrastructure running Ubuntu 15.04.
|
||||
|
||||
### 1. Setting up Hosts ###
|
||||
|
||||
In this tutorial, we'll use two machines, one as the puppet master server and another as the puppet agent node, both running Ubuntu 15.04 "Vivid Vervet". Here is the infrastructure we're going to use for this tutorial.
|
||||
|
||||
puppet master server with IP 45.55.88.6 and hostname : puppetmaster
|
||||
puppet node agent with IP 45.55.86.39 and hostname : puppetnode
|
||||
|
||||
Now we'll add entries for both machines to /etc/hosts on the node agent as well as the master server.
|
||||
|
||||
# nano /etc/hosts
|
||||
|
||||
45.55.88.6 puppetmaster.example.com puppetmaster
|
||||
45.55.86.39 puppetnode.example.com puppetnode
|
||||
|
||||
Please note that the puppet master server must be reachable on port 8140, so we'll need to open that port on the master; one way to do that is shown below.
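For example, if ufw happens to be the firewall in use on the master (an assumption — adapt this for iptables or whatever firewall you actually run), a minimal sketch is:

    # ufw allow 8140/tcp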
|
||||
|
||||
### 2. Updating Time with NTP ###
|
||||
|
||||
Puppet nodes need to maintain accurate system time to avoid problems when the master issues agent certificates: certificates can appear to be expired if there is a time difference, so the clocks of both the master and the node agent must be kept in sync with each other. To sync the time, we'll update it with NTP by running the command below on both the master and the node agent.
|
||||
|
||||
# ntpdate pool.ntp.org
|
||||
|
||||
17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec
|
||||
|
||||
Now, we'll update our local repository index and install ntp as follows.
|
||||
|
||||
# apt-get update && sudo apt-get -y install ntp ; service ntp restart
|
||||
|
||||
### 3. Puppet Master Package Installation ###
|
||||
|
||||
There are many ways to install open source puppet. In this tutorial, we'll download and install a Debian binary package named **puppetlabs-release**, packaged by Puppet Labs, which adds the repository that provides the **puppetmaster-passenger** package. The puppetmaster-passenger package includes the puppet master together with the Apache web server. So, we'll now download the Puppet Labs package.
|
||||
|
||||
# cd /tmp/
|
||||
# wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
|
||||
|
||||
--2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
|
||||
Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
|
||||
Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
|
||||
HTTP request sent, awaiting response... 200 OK
|
||||
Length: 7384 (7.2K) [application/x-debian-package]
|
||||
Saving to: ‘puppetlabs-release-trusty.deb’
|
||||
|
||||
puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s
|
||||
|
||||
2015-06-17 00:19:26 (130 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384]
|
||||
|
||||
After the download has been completed, we'll wanna install the package.
|
||||
|
||||
# dpkg -i puppetlabs-release-trusty.deb
|
||||
|
||||
Selecting previously unselected package puppetlabs-release.
|
||||
(Reading database ... 85899 files and directories currently installed.)
|
||||
Preparing to unpack puppetlabs-release-trusty.deb ...
|
||||
Unpacking puppetlabs-release (1.0-11) ...
|
||||
Setting up puppetlabs-release (1.0-11) ...
|
||||
|
||||
Then, we'll update the local repository index using the apt package manager.
|
||||
|
||||
# apt-get update
|
||||
|
||||
Then, we'll install the puppetmaster-passenger package by running the below command.
|
||||
|
||||
# apt-get install puppetmaster-passenger
|
||||
|
||||
**Note**: While installing we may get an error **Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')** but we don't need to worry: it only says that templatedir is deprecated, so we'll simply disable that setting in the configuration later. :)
|
||||
|
||||
To check whether puppetmaster has been installed successfully on our master server or not, we'll check its version.
|
||||
|
||||
# puppet --version
|
||||
|
||||
3.8.1
|
||||
|
||||
We have successfully installed the puppet master package on our puppet master box. As we are using Passenger with Apache, the puppet master process is controlled by the Apache server, which means it runs only while Apache is running.
|
||||
|
||||
Before continuing, we'll need to stop the Puppet master by stopping the apache2 service.
|
||||
|
||||
# systemctl stop apache2
|
||||
|
||||
### 4. Master version lock with Apt ###
|
||||
|
||||
As we have puppet version 3.8.1, we need to lock the puppet version so that an unplanned upgrade doesn't mess up the configuration. We'll use apt's pinning feature for that. To do so, we'll need to create a new file **/etc/apt/preferences.d/00-puppet.pref** using our favorite text editor.
|
||||
|
||||
# nano /etc/apt/preferences.d/00-puppet.pref
|
||||
|
||||
Then, we'll gonna add the entries in the newly created file as:
|
||||
|
||||
# /etc/apt/preferences.d/00-puppet.pref
|
||||
Package: puppet puppet-common puppetmaster-passenger
|
||||
Pin: version 3.8*
|
||||
Pin-Priority: 501
|
||||
|
||||
Now apt will not upgrade puppet while running updates on the system.
|
||||
|
||||
### 5. Configuring Puppet Config ###
|
||||
|
||||
The puppet master acts as a certificate authority and must generate its own certificates, which are used to sign agent certificate requests. First of all, we'll need to remove any existing SSL certificates that were created during the package installation. The default location of puppet's SSL certificates is /var/lib/puppet/ssl, so we'll remove the entire ssl directory using the rm command.
|
||||
|
||||
# rm -rf /var/lib/puppet/ssl
|
||||
|
||||
Then, we'll configure the certificate. While creating the puppet master's certificate, we need to include every DNS name at which agent nodes can contact the master. So, we'll edit the master's puppet.conf using our favorite text editor.
|
||||
|
||||
# nano /etc/puppet/puppet.conf
|
||||
|
||||
The output seems as shown below.
|
||||
|
||||
[main]
|
||||
logdir=/var/log/puppet
|
||||
vardir=/var/lib/puppet
|
||||
ssldir=/var/lib/puppet/ssl
|
||||
rundir=/var/run/puppet
|
||||
factpath=$vardir/lib/facter
|
||||
templatedir=$confdir/templates
|
||||
|
||||
[master]
|
||||
# These are needed when the puppetmaster is run by passenger
|
||||
# and can safely be removed if webrick is used.
|
||||
ssl_client_header = SSL_CLIENT_S_DN
|
||||
ssl_client_verify_header = SSL_CLIENT_VERIFY
|
||||
|
||||
Here, we'll need to comment out the templatedir line to disable the setting, as it has already been deprecated. After that, we'll add the following lines under the [main] section.
|
||||
|
||||
server = puppetmaster
|
||||
environment = production
|
||||
runinterval = 1h
|
||||
strict_variables = true
|
||||
certname = puppetmaster
|
||||
dns_alt_names = puppetmaster, puppetmaster.example.com
|
||||
|
||||
This configuration file has many options which might be useful in order to setup own configuration. A full description of the file is available at Puppet Labs [Main Config File (puppet.conf)][1].
|
||||
|
||||
After editing the file, we'll wanna save that and exit.
|
||||
|
||||
Now, we'll generate a new CA certificate by running the following command.
|
||||
|
||||
# puppet master --verbose --no-daemonize
|
||||
|
||||
Info: Creating a new SSL key for ca
|
||||
Info: Creating a new SSL certificate request for ca
|
||||
Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78
|
||||
...
|
||||
Notice: puppetmaster has a waiting certificate request
|
||||
Notice: Signed certificate request for puppetmaster
|
||||
Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem'
|
||||
Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem'
|
||||
Notice: Starting Puppet master version 3.8.1
|
||||
^CNotice: Caught INT; storing stop
|
||||
Notice: Processing stop
|
||||
|
||||
Now, the certificate is being generated. Once we see **Notice: Starting Puppet master version 3.8.1**, the certificate setup is complete. Then we'll press CTRL-C to return to the shell.
|
||||
|
||||
If we wanna look at the cert information of the certificate that was just created, we can get the list by running in the following command.
|
||||
|
||||
# puppet cert list --all
|
||||
|
||||
+ "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")
|
||||
|
||||
### 6. Creating a Puppet Manifest ###
|
||||
|
||||
The default location of the main manifest is /etc/puppet/manifests/site.pp. The main manifest file contains the configuration definitions that will be executed on the puppet agent nodes. Now, we'll create the manifest file by running the following command.
|
||||
|
||||
# nano /etc/puppet/manifests/site.pp
|
||||
|
||||
Then, we'll add the following lines of configuration in the file that we just opened.
|
||||
|
||||
# execute 'apt-get update'
|
||||
exec { 'apt-update': # exec resource named 'apt-update'
|
||||
command => '/usr/bin/apt-get update' # command this resource will run
|
||||
}
|
||||
|
||||
# install apache2 package
|
||||
package { 'apache2':
|
||||
require => Exec['apt-update'], # require 'apt-update' before installing
|
||||
ensure => installed,
|
||||
}
|
||||
|
||||
# ensure apache2 service is running
|
||||
service { 'apache2':
|
||||
ensure => running,
|
||||
}
|
||||
|
||||
The above lines of configuration are responsible for deploying the installation of the Apache web server to the agent nodes.
|
||||
|
||||
### 7. Starting Master Service ###
|
||||
|
||||
We are now ready to start the puppet master. We can start it by running the apache2 service.
|
||||
|
||||
# systemctl start apache2
|
||||
|
||||
Here, our puppet master is running, but it isn't managing any agent nodes yet. Now, we'll add the puppet agent nodes to the master.
|
||||
|
||||
**Note**: If you get an error **Job for apache2.service failed. See "systemctl status apache2.service" and "journalctl -xe" for details.** then there is some problem with the Apache server, and we can see what exactly happened by running **apachectl start** as root or with sudo. While preparing this tutorial, we hit a certificate misconfiguration in the **/etc/apache2/sites-enabled/puppetmaster.conf** file: we replaced **SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem** with **SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem** and commented out the **SSLCertificateKeyFile** line. Then we reran the above command to start the Apache server.
|
||||
|
||||
### 8. Puppet Agent Package Installation ###
|
||||
|
||||
Now that our puppet master is ready and needs an agent to manage, we'll install the puppet agent on the nodes. We'll need to install the puppet agent on every node in our infrastructure that we want the puppet master to manage, and make sure our agent nodes have been added to DNS. Now, we'll install the latest puppet agent on our agent node, i.e. puppetnode.example.com.
|
||||
|
||||
We'll run the following command to download the Puppet Labs package in our puppet agent nodes.
|
||||
|
||||
# cd /tmp/
|
||||
# wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
|
||||
|
||||
--2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
|
||||
Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
|
||||
Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
|
||||
HTTP request sent, awaiting response... 200 OK
|
||||
Length: 7384 (7.2K) [application/x-debian-package]
|
||||
Saving to: ‘puppetlabs-release-trusty.deb’
|
||||
|
||||
puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s
|
||||
|
||||
2015-06-17 00:54:42 (162 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384]
|
||||
|
||||
Then, as we're running ubuntu 15.04, we'll use debian package manager to install it.
|
||||
|
||||
# dpkg -i puppetlabs-release-trusty.deb
|
||||
|
||||
Now, we'll gonna update the repository index using apt-get.
|
||||
|
||||
# apt-get update
|
||||
|
||||
Finally, we'll gonna install the puppet agent directly from the remote repository.
|
||||
|
||||
# apt-get install puppet
|
||||
|
||||
The puppet agent is disabled by default, so we'll need to enable it. To do so, we'll edit the /etc/default/puppet file using a text editor.
|
||||
|
||||
# nano /etc/default/puppet
|
||||
|
||||
Then, we'll need to change value of **START** to "yes" as shown below.
|
||||
|
||||
START=yes
|
||||
|
||||
Then, we'll need to save and exit the file.
|
||||
|
||||
### 9. Agent Version Lock with Apt ###
|
||||
|
||||
As we have puppet version 3.8.1, we need to lock the puppet version so that an unplanned upgrade doesn't mess up the configuration. We'll use apt's pinning feature for that. To do so, we'll need to create a file /etc/apt/preferences.d/00-puppet.pref using our favorite text editor.
|
||||
|
||||
# nano /etc/apt/preferences.d/00-puppet.pref
|
||||
|
||||
Then, we'll gonna add the entries in the newly created file as:
|
||||
|
||||
# /etc/apt/preferences.d/00-puppet.pref
|
||||
Package: puppet puppet-common
|
||||
Pin: version 3.8*
|
||||
Pin-Priority: 501
|
||||
|
||||
Now apt will not upgrade Puppet while running updates on the system.
|
||||
|
||||
### 10. Configuring Puppet Node Agent ###
|
||||
|
||||
Next, we must make a few configuration changes before running the agent. To do so, we'll edit the agent's puppet.conf file.
|
||||
|
||||
# nano /etc/puppet/puppet.conf
|
||||
|
||||
It will look exactly like the Puppet master's initial configuration file.
|
||||
|
||||
This time, we'll also comment out the **templatedir** line. Then we'll delete the [master] section and all of the lines below it.
|
||||
|
||||
Assuming that the puppet master is reachable at the short hostname "puppetmaster", the agent should be able to connect to it. If not, we'll need to use its fully qualified domain name, i.e. puppetmaster.example.com.
|
||||
|
||||
[agent]
|
||||
server = puppetmaster.example.com
|
||||
certname = puppetnode.example.com
|
||||
|
||||
After adding this, the file will look like this.
|
||||
|
||||
[main]
|
||||
logdir=/var/log/puppet
|
||||
vardir=/var/lib/puppet
|
||||
ssldir=/var/lib/puppet/ssl
|
||||
rundir=/var/run/puppet
|
||||
factpath=$vardir/lib/facter
|
||||
#templatedir=$confdir/templates
|
||||
|
||||
[agent]
|
||||
server = puppetmaster.example.com
|
||||
certname = puppetnode.example.com
|
||||
|
||||
Once done with that, we'll save and exit.
|
||||
|
||||
Next, we'll wanna start our latest puppet agent in our Ubuntu 15.04 nodes. To start our puppet agent, we'll need to run the following command.
|
||||
|
||||
# systemctl start puppet
|
||||
|
||||
If everything is configured properly and went as expected, we should not see any output from the above command. When an agent runs for the first time, it generates an SSL certificate and sends a signing request to the puppet master; once the master signs the agent's certificate, it will be able to communicate with and control the agent node.
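To confirm that the agent actually checked in, one option is to look at syslog on the agent node (a quick check, assuming the default syslog setup on Ubuntu):

    # grep puppet /var/log/syslog | tail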
|
||||
|
||||
**Note**: If you are adding your first node, it is recommended that you attempt to sign the certificate on the puppet master before adding your other agents. Once you have verified that everything works properly, then you can go back and add the remaining agent nodes further.
|
||||
|
||||
### 11. Signing certificate Requests on Master ###
|
||||
|
||||
While puppet agent runs for the first time, it generates an SSL certificate and sends a request for signing to the master server. Before the master will be able to communicate and control the agent node, it must sign that specific agent node's certificate.
|
||||
|
||||
To get the list of the certificate requests, we'll run the following command in the puppet master server.
|
||||
|
||||
# puppet cert list
|
||||
|
||||
"puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2
|
||||
|
||||
As we just setup our first agent node, we will see one request. It will look something like the following, with the agent node's Domain name as the hostname.
|
||||
|
||||
Note that there is no + in front of it which indicates that it has not been signed yet.
|
||||
|
||||
Now, we'll sign the certificate request. To do so, we simply run **puppet cert sign** with the **hostname**, as shown below.
|
||||
|
||||
# puppet cert sign puppetnode.example.com
|
||||
|
||||
Notice: Signed certificate request for puppetnode.example.com
|
||||
Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem'
|
||||
|
||||
The Puppet master can now communicate and control the node that the signed certificate belongs to.
|
||||
|
||||
If we want to sign all of the current requests, we can use the --all option as shown below.
|
||||
|
||||
# puppet cert sign --all
|
||||
|
||||
### Removing a Puppet Certificate ###
|
||||
|
||||
If we want to remove a host from puppet, or want to rebuild a host and then add it back, we will want to revoke the host's certificate from the puppet master. To do this, we use the clean action as follows.
|
||||
|
||||
# puppet cert clean hostname
|
||||
|
||||
Notice: Revoked certificate with serial 5
|
||||
Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem'
|
||||
Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem'
|
||||
|
||||
If we want to view all of the requests signed and unsigned, run the following command:
|
||||
|
||||
# puppet cert list --all
|
||||
|
||||
+ "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")
|
||||
|
||||
### 12. Deploying a Puppet Manifest ###
|
||||
|
||||
After we complete and configure the puppet manifest, we'll deploy it to the agent nodes. To apply and load the main manifest, we can simply run the following command on the agent node.
|
||||
|
||||
# puppet agent --test
|
||||
|
||||
Info: Retrieving pluginfacts
|
||||
Info: Retrieving plugin
|
||||
Info: Caching catalog for puppetnode.example.com
|
||||
Info: Applying configuration version '1434563858'
|
||||
Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully
|
||||
Notice: Finished catalog run in 10.53 seconds
|
||||
|
||||
This shows us how the main manifest will immediately affect a single server.
|
||||
|
||||
If we want to run a puppet manifest that is not related to the main manifest, we can simply use puppet apply followed by the manifest file path. It only applies the manifest to the node that we run the apply from.
|
||||
|
||||
# puppet apply /etc/puppet/manifests/test.pp
|
||||
|
||||
### 13. Configuring Manifest for a Specific Node ###
|
||||
|
||||
If we want to deploy a manifest only to a specific node, we'll need to configure the manifest as follows.
|
||||
|
||||
We'll need to edit the manifest on the master server using a text editor.
|
||||
|
||||
# nano /etc/puppet/manifests/site.pp
|
||||
|
||||
Now, we'll gonna add the following lines there.
|
||||
|
||||
node 'puppetnode', 'puppetnode1' {
|
||||
# execute 'apt-get update'
|
||||
exec { 'apt-update': # exec resource named 'apt-update'
|
||||
command => '/usr/bin/apt-get update' # command this resource will run
|
||||
}
|
||||
|
||||
# install apache2 package
|
||||
package { 'apache2':
|
||||
require => Exec['apt-update'], # require 'apt-update' before installing
|
||||
ensure => installed,
|
||||
}
|
||||
|
||||
# ensure apache2 service is running
|
||||
service { 'apache2':
|
||||
ensure => running,
|
||||
}
|
||||
}
|
||||
|
||||
Here, the above configuration will install and deploy the Apache web server only to the two specified nodes whose short names are puppetnode and puppetnode1. We can add more nodes that we want this part of the manifest deployed to; hosts that match none of the named nodes can be handled with a default definition, as sketched below.
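Puppet also supports a catch-all `node default` block for hosts that don't match any named node. A minimal sketch (our addition, not part of the original setup):

    node default {
      # nothing is managed on unmatched nodes in this sketch
    }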
|
||||
|
||||
### 14. Configuring Manifest with a Module ###
|
||||
|
||||
Modules are useful for grouping tasks together; there are many of them available in the Puppet community, and anyone can contribute more.
|
||||
|
||||
On the puppet master, we'll gonna install the **puppetlabs-apache** module using the puppet module command.
|
||||
|
||||
# puppet module install puppetlabs-apache
|
||||
|
||||
**Warning**: Please do not use this module on an existing Apache setup, or it will purge any Apache configuration that is not managed by puppet.
|
||||
|
||||
Now we'll gonna edit the main manifest ie **site.pp** using a text editor.
|
||||
|
||||
# nano /etc/puppet/manifests/site.pp
|
||||
|
||||
Now add the following lines to install apache under puppetnode.
|
||||
|
||||
node 'puppetnode' {
|
||||
class { 'apache': } # use apache module
|
||||
apache::vhost { 'example.com': # define vhost resource
|
||||
port => '80',
|
||||
docroot => '/var/www/html'
|
||||
}
|
||||
}
|
||||
|
||||
Then we'll save and exit it. After that, we'll rerun the manifest on the agents to deploy the configuration across our infrastructure, as shown below.
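As before, the change can be pulled and applied immediately from an agent node using the same test run shown earlier:

    # puppet agent --test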
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
Finally, we have successfully installed puppet to manage our server infrastructure running the Ubuntu 15.04 "Vivid Vervet" Linux operating system. We learned how puppet works, how to write a manifest configuration, how the master communicates with nodes, and how to deploy a manifest to the agent nodes with secure SSL certificates. Controlling, managing and configuring repeated tasks across any number of nodes is very easy with the puppet open source software configuration management tool. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html
|
@ -0,0 +1,70 @@
|
||||
Translating by GOlinu!
|
||||
Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting
|
||||
================================================================================
|
||||
> **Question:** I am trying to troubleshoot my Open vSwitch deployment. For that I would like to inspect its debug messages generated by its built-in logging mechanism. How can I enable logging in Open vSwitch, and change its logging level (e.g., to INFO/DEBUG level) to check more detailed debug information?
|
||||
|
||||
Open vSwitch (OVS) is the most popular open-source implementation of a virtual switch on the Linux platform. As today's data centers increasingly rely on the software-defined networking (SDN) architecture, OVS is quickly being adopted as the de-facto standard network element in data center SDN deployments.
|
||||
|
||||
Open vSwitch has a built-in logging mechanism called VLOG. The VLOG facility allows one to enable and customize logging within various components of the switch. The logging information generated by VLOG can be sent to a combination of console, syslog and a separate log file for inspection. You can configure OVS logging dynamically at run-time with a command-line tool called `ovs-appctl`.
|
||||
|
||||
![](https://farm1.staticflickr.com/499/19300367114_cd8aac2fb2_c.jpg)
|
||||
|
||||
Here is how to enable logging and customize logging levels in Open vSwitch with `ovs-appctl`.
|
||||
|
||||
The syntax of `ovs-appctl` to customize VLOG is as follows.
|
||||
|
||||
$ sudo ovs-appctl vlog/set module[:facility[:level]]
|
||||
|
||||
- **Module**: name of any valid component in OVS (e.g., netdev, ofproto, dpif, vswitchd, and many others)
|
||||
- **Facility**: destination of logging information (must be: console, syslog or file)
|
||||
- **Level**: verbosity of logging (must be: emer, err, warn, info, or dbg)
|
||||
|
||||
In OVS source code, module name is defined in each source file in the form of:
|
||||
|
||||
VLOG_DEFINE_THIS_MODULE(<module-name>);
|
||||
|
||||
For example, in lib/netdev.c, you will see:
|
||||
|
||||
VLOG_DEFINE_THIS_MODULE(netdev);
|
||||
|
||||
which indicates that lib/netdev.c is part of netdev module. Any logging messages generated in lib/netdev.c will belong to netdev module.
|
||||
|
||||
In OVS source code, there are multiple severity levels used to define several different kinds of logging messages: VLOG_INFO() for informational, VLOG_WARN() for warning, VLOG_ERR() for error, VLOG_DBG() for debugging, VLOG_EMERG for emergency. Logging level and facility determine which logging messages are sent where.
|
||||
|
||||
To see a full list of available modules, facilities, and their respective logging levels, run the following command. It must be invoked after you have started OVS.
|
||||
|
||||
$ sudo ovs-appctl vlog/list
|
||||
|
||||
![](https://farm1.staticflickr.com/465/19734939478_7eb5d44635_c.jpg)
|
||||
|
||||
The output shows the debug levels of each module for three different facilities (console, syslog, file). By default, all modules have their logging level set to INFO.
|
||||
|
||||
Given any one OVS module, you can selectively change the debug level of any particular facility. For example, if you want to see more detailed debug messages of dpif module at the console screen, run the following command.
|
||||
|
||||
$ sudo ovs-appctl vlog/set dpif:console:dbg
|
||||
|
||||
You will see that dpif module's console facility has changed its logging level to DBG. The logging level of two other facilities, syslog and file, remains unchanged.
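To double-check just that one module, you can filter the same listing shown above:

    $ sudo ovs-appctl vlog/list | grep dpif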
|
||||
|
||||
![](https://farm1.staticflickr.com/333/19896760146_5d851311ae_c.jpg)
|
||||
|
||||
If you want to change the logging level for all modules, you can specify "ANY" as the module name. For example, the following command will change the console logging level of every module to DBG.
|
||||
|
||||
$ sudo ovs-appctl vlog/set ANY:console:dbg
|
||||
|
||||
![](https://farm1.staticflickr.com/351/19734939828_8c7f59e404_c.jpg)
|
||||
|
||||
Also, if you want to change the logging level of all three facilities at once, you can specify "ANY" as the facility name. For example, the following command will change the logging level of all facilities for every module to DBG.
|
||||
|
||||
$ sudo ovs-appctl vlog/set ANY:ANY:dbg
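Once you are done debugging, you will likely want to dial the verbosity back down. Assuming INFO was your previous default (as the listing above suggested), a sketch:

    $ sudo ovs-appctl vlog/set ANY:ANY:info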
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/enable-logging-open-vswitch.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
@ -0,0 +1,74 @@
|
||||
Translating by Ping
|
||||
|
||||
Linux FAQs with Answers--How to install git on Linux
|
||||
================================================================================
|
||||
> **Question:** I am trying to clone a project from a public Git repository, but I am getting "git: command not found" error. How can I install git on [insert your Linux distro]?
|
||||
|
||||
Git is a popular open-source version control system (VCS) originally developed for Linux environment. Contrary to other VCS tools like CVS or SVN, Git's revision control is considered "distributed" in a sense that your local Git working directory can function as a fully-working repository with complete history and version-tracking capabilities. In this model, each collaborator commits to his or her local repository (as opposed to always committing to a central repository), and optionally push to a centralized repository if need be. This brings in scalability and redundancy to the revision control system, which is a must in any kind of large-scale collaboration.
|
||||
|
||||
![](https://farm1.staticflickr.com/341/19433194168_c79d4570aa_b.jpg)
|
||||
|
||||
### Install Git with a Package Manager ###
|
||||
|
||||
Git is shipped with all major Linux distributions. Thus the easiest way to install Git is by using your Linux distro's package manager.
|
||||
|
||||
**Debian, Ubuntu, or Linux Mint**
|
||||
|
||||
$ sudo apt-get install git
|
||||
|
||||
**Fedora, CentOS or RHEL**
|
||||
|
||||
$ sudo yum install git
|
||||
|
||||
**Arch Linux**
|
||||
|
||||
$ sudo pacman -S git
|
||||
|
||||
**OpenSUSE**
|
||||
|
||||
$ sudo zypper install git
|
||||
|
||||
**Gentoo**
|
||||
|
||||
$ emerge --ask --verbose dev-vcs/git
|
||||
|
||||
### Install Git from the Source ###
|
||||
|
||||
If for whatever reason you want to build Git from source, you can follow the instructions below.
|
||||
|
||||
**Install Dependencies**
|
||||
|
||||
Before building Git, first install dependencies.
|
||||
|
||||
**Debian, Ubuntu or Linux Mint**
|
||||
|
||||
$ sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev asciidoc xmlto docbook2x
|
||||
|
||||
**Fedora, CentOS or RHEL**
|
||||
|
||||
$ sudo yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel asciidoc xmlto docbook2x
|
||||
|
||||
#### Compile Git from the Source ####
|
||||
|
||||
Download the latest release of Git from [https://github.com/git/git/releases][1]. Then build and install Git under /usr as follows.
|
||||
|
||||
Note that if you want to install it under a different directory (e.g., /opt), replace "--prefix=/usr" in configure command with something else.
|
||||
|
||||
$ cd git-x.x.x
|
||||
$ make configure
|
||||
$ ./configure --prefix=/usr
|
||||
$ make all doc info
|
||||
$ sudo make install install-doc install-html install-info
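Whichever installation method you used, a quick way to verify that git is on your PATH, see which version you got, and retry the clone from the original question is shown below (the repository URL is just an example public repository):

    $ git --version
    $ git clone https://github.com/git/git.git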
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/install-git-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
||||
[1]:https://github.com/git/git/releases
|
@ -1,144 +0,0 @@
|
||||
struggling 翻译中
|
||||
Introduction to RAID, Concepts of RAID and RAID Levels – Part 1
|
||||
================================================================================
|
||||
RAID stands for Redundant Array of Inexpensive Disks, though nowadays it is usually read as Redundant Array of Independent Drives. Earlier it used to be very costly to buy even a small disk, but nowadays we can buy a large disk for the same amount of money. RAID is simply a collection of disks pooled together to become a logical volume.
|
||||
|
||||
![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg)
|
||||
|
||||
Understanding RAID Setups in Linux
|
||||
|
||||
RAID is organized in groups, sets, or arrays. A combination of drives forms a group of disks known as a RAID array or RAID set. A RAID array needs a minimum of 2 disks connected to a RAID controller to make a logical volume, and more drives can be added to the group. Only one RAID level can be applied to a given group of disks. RAID is used when we need excellent performance; depending on the selected RAID level, performance will differ. It also protects our data through fault tolerance and high availability.
|
||||
|
||||
This series, titled Preparation for Setting up RAID, runs through Parts 1-9 and covers the following topics.
|
||||
|
||||
- Part 1: Introduction to RAID, Concepts of RAID and RAID Levels
|
||||
- Part 2: How to setup RAID0 (Stripe) in Linux
|
||||
- Part 3: How to setup RAID1 (Mirror) in Linux
|
||||
- Part 4: How to setup RAID5 (Striping with Distributed Parity) in Linux
|
||||
- Part 5: How to setup RAID6 (Striping with Double Distributed Parity) in Linux
|
||||
- Part 6: Setting Up RAID 10 or 1+0 (Nested) in Linux
|
||||
- Part 7: Growing an Existing RAID Array and Removing Failed Disks in Raid
|
||||
- Part 8: Recovering (Rebuilding) failed drives in RAID
|
||||
- Part 9: Managing RAID in Linux
|
||||
|
||||
This is Part 1 of a 9-tutorial series. Here we will cover the introduction to RAID, the concepts of RAID, and the RAID levels that are required for setting up RAID in Linux.
|
||||
|
||||
### Software RAID and Hardware RAID ###
|
||||
|
||||
Software RAID has lower performance because it consumes resources from the host. The RAID software has to be loaded before data can be read from software RAID volumes, and the OS needs to boot before it can load that software. Software RAID needs no physical hardware, so there is zero investment cost.
|
||||
|
||||
Hardware RAID has high performance. It uses a dedicated RAID controller, physically built as a PCI Express card, and it does not consume host resources. The controller has NVRAM cache for reads and writes, and it preserves the cache during a rebuild even if there is a power failure, using battery backup. It requires a very costly investment at large scale.
|
||||
|
||||
Hardware RAID Card will look like below:
|
||||
|
||||
![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg)
|
||||
|
||||
Hardware RAID
|
||||
|
||||
#### Featured Concepts of RAID ####
|
||||
|
||||
- The parity method in RAID regenerates lost content from the saved parity information. RAID 5 and RAID 6 are based on parity.
|
||||
- Striping spreads data across multiple disks, so no single disk holds the full data. With two disks, for instance, half of the data will be on each disk.
|
||||
- Mirroring is used in RAID 1 and RAID 10. Mirroring makes a copy of the same data: in RAID 1, the same content is written to the other disk as well.
|
||||
- A hot spare is simply a spare drive in the server that can automatically replace a failed drive. If any drive in the array fails, the hot spare is used and the array rebuilds automatically.
|
||||
- A chunk is just the unit of data written to each disk, with a minimum size of 4KB and up. By tuning the chunk size we can improve I/O performance.
|
||||
|
||||
RAID comes in various levels. Here we will look only at the RAID levels that are used most in real environments.
|
||||
|
||||
- RAID0 = Striping
|
||||
- RAID1 = Mirroring
|
||||
- RAID5 = Single Disk Distributed Parity
|
||||
- RAID6 = Double Disk Distributed Parity
|
||||
- RAID10 = Combine of Mirror & Stripe. (Nested RAID)
|
||||
|
||||
Software RAID is managed using the mdadm package in most Linux distributions; a quick check that it is installed is shown below. Let us take a brief look at each RAID level.
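A minimal sketch for confirming that mdadm is present and seeing whether any software RAID arrays are already active:

    # mdadm --version        # confirm the tool is installed
    # cat /proc/mdstat       # list any arrays the kernel currently knows about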
|
||||
|
||||
#### RAID 0 (or) Striping ####
|
||||
|
||||
Striping has excellent performance. In RAID 0 (striping), data is written to the disks in a shared fashion: half of the content ends up on one disk and the other half is written to the other disk.
|
||||
|
||||
Let us assume we have 2 disk drives. For example, if we write the data "TECMINT" to the logical volume, 'T' will be saved on the first disk, 'E' on the second disk, 'C' on the first disk again, 'M' on the second disk, and so on in a round-robin process.
|
||||
|
||||
In this situation, if any one of the drives fails we will lose our data, because half of the data from one disk cannot be used to rebuild the array. But in terms of write speed and performance, RAID 0 is excellent. We need a minimum of 2 disks to create a RAID 0 (striping) array. If your data is valuable, don't use this RAID level.
|
||||
|
||||
- High Performance.
|
||||
- There is Zero Capacity Loss in RAID 0
|
||||
- Zero Fault Tolerance.
|
||||
- Write and Reading will be good performance.
|
||||
|
||||
#### RAID 1 (or) Mirroring ####
|
||||
|
||||
Mirroring has good performance. Mirroring makes a copy of the data we have. Assuming we have two 2TB hard drives, we have 4TB in total, but when the drives are mirrored behind the RAID controller to form a logical drive, we only see a 2TB logical drive.
|
||||
|
||||
When we save any data, it is written to both 2TB drives. A minimum of two drives is needed to create a RAID 1 (mirror). If a disk failure occurs, we can restore the RAID set by replacing the failed disk with a new one. If any one disk fails in RAID 1, we can get the data from the other one, as it holds a copy of the same content, so there is zero data loss.
|
||||
|
||||
- Good Performance.
|
||||
- Here Half of the Space will be lost in total capacity.
|
||||
- Full Fault Tolerance.
|
||||
- Rebuilt will be faster.
|
||||
- Writing Performance will be slow.
|
||||
- Reading will be good.
|
||||
- Can be used for operating systems and database for small scale.
|
||||
|
||||
#### RAID 5 (or) Distributed Parity ####
|
||||
|
||||
RAID 5 is mostly used at the enterprise level. RAID 5 works with a distributed parity method. The parity information is used to rebuild data: the array rebuilds from the information left on the remaining good drives. This protects our data from drive failure.
|
||||
|
||||
Assume we have 4 drives: if one drive fails and we replace it, we can rebuild the replacement drive from the parity information. Parity information is stored across all 4 drives; with four 1TB hard drives, roughly 256GB of parity is stored on each drive and the other 768GB on each drive is available for user data. RAID 5 can survive a single drive failure; if more than one drive fails, data will be lost.
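Put another way, with n drives of equal size RAID 5 reserves one drive's worth of space for parity, so usable capacity is (n - 1) × drive size; for the four 1TB drives above that is (4 - 1) × 1TB = 3TB of usable space.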
|
||||
|
||||
- Excellent Performance
|
||||
- Reading will be extremely very good in speed.
|
||||
- Writing will be Average, slow if we won’t use a Hardware RAID Controller.
|
||||
- Rebuild from Parity information from all drives.
|
||||
- Full Fault Tolerance.
|
||||
- 1 Disk Space will be under Parity.
|
||||
- Can be used in file servers, web servers, very important backups.
|
||||
|
||||
#### RAID 6 Two Parity Distributed Disk ####
|
||||
|
||||
RAID 6 is the same as RAID 5 but with the parity distributed twice. It is mostly used in large arrays. We need a minimum of 4 drives; even if 2 drives fail, we can rebuild the data after replacing them with new drives.
|
||||
|
||||
It is slower than RAID 5, because it writes data and two sets of parity across all the drives at the same time. Speed will be average when using a hardware RAID controller. If we have six 1TB hard drives, four drives' worth of space will be used for data and two drives' worth for parity.
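By the same logic as RAID 5, usable capacity here is (n - 2) × drive size; for six 1TB drives that is (6 - 2) × 1TB = 4TB of usable space, with the remaining 2TB worth of space spread across the drives as parity.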
|
||||
|
||||
- Poor Performance.
|
||||
- Read Performance will be good.
|
||||
- Write Performance will be Poor if we not using a Hardware RAID Controller.
|
||||
- Rebuild from 2 Parity Drives.
|
||||
- Full Fault tolerance.
|
||||
- 2 Disks space will be under Parity.
|
||||
- Can be Used in Large Arrays.
|
||||
- Can be use in backup purpose, video streaming, used in large scale.
|
||||
|
||||
#### RAID 10 (or) Mirror & Stripe ####
|
||||
|
||||
RAID 10 can be called 1+0 or 0+1. It does the work of both mirroring and striping. In RAID 10 the mirror comes first and the stripe second, while in RAID 01 the stripe comes first and the mirror second. RAID 10 is better than RAID 01.
|
||||
|
||||
Assume we have 4 drives. When I write some data to the logical volume, it is saved across all 4 drives using both the mirror and stripe methods.
|
||||
|
||||
If I write the data "TECMINT" to a RAID 10 volume, it is saved as follows: first "T" is written to both disks, then "E" is written to both disks, and this step is repeated for all of the data being written. Every piece of data thus gets copied to the other disk as well.
|
||||
|
||||
At the same time it uses the RAID 0 method and writes the data as follows: "T" to the first disk and "E" to the second disk, then "C" to the first disk and "M" to the second disk.
|
||||
|
||||
- Good read and write performance.
|
||||
- Here Half of the Space will be lost in total capacity.
|
||||
- Fault Tolerance.
|
||||
- Fast rebuild from copying data.
|
||||
- Can be used in Database storage for high performance and availability.
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
In this article we have seen what RAID is and which levels are used most in real environments. I hope you have learned something from this write-up: before setting up RAID, one must know the basics, and the content above should give you that basic understanding of RAID.
|
||||
|
||||
In the next upcoming articles I’m going to cover how to setup and create a RAID using Various Levels, Growing a RAID Group (Array) and Troubleshooting with failed Drives and much more.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/understanding-raid-setup-in-linux/
|
||||
|
||||
作者:[Babin Lonston][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/babinlonston/
|
@ -1,219 +0,0 @@
|
||||
struggling 翻译中
|
||||
Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2
|
||||
================================================================================
|
||||
RAID stands for Redundant Array of Inexpensive Disks; it is used for high availability and reliability in large-scale environments where data needs more protection than in normal use. RAID is simply a collection of disks pooled together to become a logical volume containing an array; a combination of drives makes an array, also called a set or group.
|
||||
|
||||
RAID can be created if there are a minimum of 2 disks connected to a RAID controller to make a logical volume, and more drives can be added to an array according to the defined RAID level. RAID that works without physical hardware is called software RAID, sometimes nicknamed the poor man's RAID.
|
||||
|
||||
![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg)
|
||||
|
||||
Setup RAID0 in Linux
|
||||
|
||||
The main reason to use RAID is to protect data from a single point of failure: if we store data on a single disk and that disk fails, there is no chance of getting our data back. To prevent that data loss we need a fault-tolerance method, so we use a collection of disks to form a RAID set.
|
||||
|
||||
#### What is Stripe in RAID 0? ####
|
||||
|
||||
Striping writes data across multiple disks at the same time by dividing the contents. Assume we have two disks: if we save content to the logical volume, it is saved across both physical disks by splitting the content between them. RAID 0 is used for better performance, but we cannot recover the data if one of the drives fails, so it isn't a good practice to use RAID 0 for valuable data; if you do use RAID 0 logical volumes (for example for the operating system), keep your important files elsewhere.
|
||||
|
||||
- RAID 0 has High Performance.
|
||||
- Zero Capacity Loss in RAID 0. No Space will be wasted.
|
||||
- Zero Fault Tolerance ( Can’t get back the data if any one of disk fails).
|
||||
- Write and Reading will be Excellent.
|
||||
|
||||
#### Requirements ####
|
||||
|
||||
The minimum number of disks allowed to create RAID 0 is 2, but you can add more disks, keeping the count in multiples of 2, such as 2, 4, 6 or 8. If you have a physical RAID card with enough ports, you can add more disks.
|
||||
|
||||
Here we are not using hardware RAID; this setup depends only on software RAID. If we have a physical hardware RAID card, we can access it from its utility UI; some motherboards have a built-in RAID feature whose UI can be accessed using the Ctrl+I keys.
|
||||
|
||||
If you're new to RAID setups, please read our earlier article, where we've covered a basic introduction to RAID.
|
||||
|
||||
- [Introduction to RAID and RAID Concepts][1]
|
||||
|
||||
**My Server Setup**
|
||||
|
||||
Operating System : CentOS 6.5 Final
|
||||
IP Address : 192.168.0.225
|
||||
Two Disks : 20 GB each
|
||||
|
||||
This article is Part 2 of a 9-tutorial RAID series. In this part, we are going to see how to create and set up software RAID 0 (striping) on Linux systems or servers using two 20GB disks named sdb and sdc.
|
||||
|
||||
### Step 1: Updating System and Installing mdadm for Managing RAID ###
|
||||
|
||||
1. Before setting up RAID 0 in Linux, let's do a system update and then install the 'mdadm' package. mdadm is a small program which allows us to configure and manage RAID devices in Linux.
|
||||
|
||||
# yum clean all && yum update
|
||||
# yum install mdadm -y
|
||||
|
||||
![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png)
|
||||
|
||||
Install mdadm Tool
|
||||
|
||||
### Step 2: Verify Attached Two 20GB Drives ###
|
||||
|
||||
2. Before creating RAID 0, verify that the two attached hard drives are detected, using the following command.
|
||||
|
||||
# ls -l /dev | grep sd
|
||||
|
||||
![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png)
|
||||
|
||||
Check Hard Drives
|
||||
|
||||
3. Once the new hard drives are detected, it's time to check whether the attached drives are already part of any existing RAID, with the help of the following 'mdadm' command.
|
||||
|
||||
# mdadm --examine /dev/sd[b-c]
|
||||
|
||||
![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png)
|
||||
|
||||
Check RAID Devices
|
||||
|
||||
From the above output, we can see that no RAID has been applied to these two sdb and sdc drives.
|
||||
|
||||
### Step 3: Creating Partitions for RAID ###
|
||||
|
||||
4. Now create partitions on sdb and sdc for RAID, with the help of the following fdisk command. Here, I will show how to create a partition on the sdb drive.
|
||||
|
||||
# fdisk /dev/sdb
|
||||
|
||||
Follow below instructions for creating partitions.
|
||||
|
||||
- Press ‘n‘ for creating new partition.
|
||||
- Then choose ‘P‘ for Primary partition.
|
||||
- Next select the partition number as 1.
|
||||
- Give the default value by just pressing two times Enter key.
|
||||
- Next press ‘P‘ to print the defined partition.
|
||||
|
||||
![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png)
|
||||
|
||||
Create Partitions
|
||||
|
||||
Follow below instructions for creating Linux raid auto on partitions.
|
||||
|
||||
- Press ‘L‘ to list all available types.
|
||||
- Type ‘t‘ to choose the partitions.
|
||||
- Choose ‘fd‘ for Linux raid auto and press Enter to apply.
|
||||
- Then again use ‘P‘ to print the changes what we have made.
|
||||
- Use ‘w‘ to write the changes.
|
||||
|
||||
![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png)
|
||||
|
||||
Create RAID Partitions in Linux
|
||||
|
||||
**Note**: Please follow same above instructions to create partition on sdc drive now.
|
||||
|
||||
5. After creating the partitions, verify that both drives are correctly defined for RAID using the following command.
|
||||
|
||||
# mdadm --examine /dev/sd[b-c]
|
||||
# mdadm --examine /dev/sd[b-c]1
|
||||
|
||||
![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png)
|
||||
|
||||
Verify RAID Partitions
|
||||
|
||||
### Step 4: Creating RAID md Devices ###
|
||||
|
||||
6. Now create md device (i.e. /dev/md0) and apply raid level using below command.
|
||||
|
||||
# mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1
|
||||
# mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1
|
||||
|
||||
- -C – create
|
||||
- -l – level
|
||||
- -n – No of raid-devices
|
||||
|
||||
7. Once the md device has been created, verify the status of the RAID level, devices, and array with the help of the following series of commands.
|
||||
|
||||
# cat /proc/mdstat
|
||||
|
||||
![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png)
|
||||
|
||||
Verify RAID Level
|
||||
|
||||
# mdadm -E /dev/sd[b-c]1
|
||||
|
||||
![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png)
|
||||
|
||||
Verify RAID Device
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png)
|
||||
|
||||
Verify RAID Array
|
||||
|
||||
### Step 5: Assigning RAID Devices to Filesystem ###
|
||||
|
||||
8. Create an ext4 filesystem on the RAID device /dev/md0 and mount it under /mnt/raid0.
|
||||
|
||||
# mkfs.ext4 /dev/md0
|
||||
|
||||
![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png)
|
||||
|
||||
Create ext4 Filesystem
|
||||
|
||||
9. Once the ext4 filesystem has been created on the RAID device, create a mount point directory (i.e. /mnt/raid0) and mount the device /dev/md0 under it.
|
||||
|
||||
# mkdir /mnt/raid0
|
||||
# mount /dev/md0 /mnt/raid0/
|
||||
|
||||
10. Next, verify that the device /dev/md0 is mounted under /mnt/raid0 directory using df command.
|
||||
|
||||
# df -h
|
||||
|
||||
11. Next, create a file called ‘tecmint.txt‘ under the mount point /mnt/raid0, add some content to the created file and view the content of a file and directory.
|
||||
|
||||
# touch /mnt/raid0/tecmint.txt
|
||||
# echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt
|
||||
# cat /mnt/raid0/tecmint.txt
|
||||
# ls -l /mnt/raid0/
|
||||
|
||||
![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png)
|
||||
|
||||
Verify Mount Device
|
||||
|
||||
12. Once you’ve verified mount points, it’s time to create an fstab entry in /etc/fstab file.
|
||||
|
||||
# vim /etc/fstab
|
||||
|
||||
Add the following entry as described; it may vary according to your mount location and the filesystem you are using.
|
||||
|
||||
/dev/md0 /mnt/raid0 ext4 defaults 0 0
|
||||
|
||||
![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png)
|
||||
|
||||
Add Device to Fstab
|
||||
|
||||
13. Run 'mount -a' to check whether there are any errors in the fstab entry.
|
||||
|
||||
# mount -av
|
||||
|
||||
![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png)
|
||||
|
||||
Check Errors in Fstab
|
||||
|
||||
### Step 6: Saving RAID Configurations ###
|
||||
|
||||
14. Finally, save the RAID configuration to a file to keep it for future use. Again, we use the 'mdadm' command with the '-s' (scan) and '-v' (verbose) options as shown.
|
||||
|
||||
# mdadm -E -s -v >> /etc/mdadm.conf
|
||||
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
|
||||
# cat /etc/mdadm.conf
|
||||
|
||||
![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png)
|
||||
|
||||
Save RAID Configurations
|
||||
|
||||
That's it; we have seen here how to configure RAID 0 striping using two hard disks. In the next article, we will see how to set up RAID 5.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/create-raid0-in-linux/
|
||||
|
||||
作者:[Babin Lonston][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/babinlonston/
|
||||
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
|
@ -1,98 +0,0 @@
|
||||
以比较的方式向Linux用户介绍FreeBSD
|
||||
================================================================================
|
||||
![](https://1102047360.rsc.cdn77.org/wp-content/uploads/2015/03/FreeBSD-790x494.jpg)
|
||||
|
||||
### 简介 ###
|
||||
|
||||
BSD最初从UNIX继承而来,目前,有许多的类Unix操作系统是基于BSD的。FreeBSD是使用最广泛的开源伯克利软件发行版(BSD发行版)。就像它隐含的意思一样,它是一个免费开源的类Unix操作系统,并且是公共服务器的平台。FreeBSD源代码通常以宽松的BSD许可证发布。它与Linux有很多相似的地方,但我们得承认它们在很多方面仍有不同。
|
||||
|
||||
本文的其余部分组织如下:FreeBSD的描述在第一部分,FreeBSD和Linux的相似点在第二部分,它们的区别将在第三部分讨论,对他们功能的讨论和总结在最后一节。
|
||||
|
||||
### FreeBSD描述 ###
|
||||
|
||||
#### 历史 ####
|
||||
|
||||
- FreeBSD的第一个版本发布于1993年,它的第一张CD-ROM是FreeBSD1.0,也发行于1993年。接下来,FreeBSD 2.1.0在1995年发布并且获得了所有用户的青睐。实际上许多IT公司都使用FreeBSD并且很满意,我们可以列出其中的一些:IBM、Nokia、NetApp和Juniper Network。
|
||||
|
||||
#### 许可证 ####
|
||||
|
||||
- 关于它的许可证,FreeBSD以多种开源许可证进行发布,它最新的名为Kernel的代码以两种子BSD许可证进行了发布,给予使用和重新发布FreeBSD的绝对自由。其它的代码则以三、四种子BSD许可证进行发布,有些是以GPL和CDDL的许可证发布的。
|
||||
|
||||
#### 用户 ####
|
||||
|
||||
- FreeBSD的重要特点之一就是它多样的用户。实际上,FreeBSD可以作为邮件服务器、Web Server、FTP服务器以及路由器等,您只需要在它上运行服务相关的软件即可。而且FreeBSD还支持ARM、PowerPC、MIPS、x86、x86-64架构。
|
||||
|
||||
### FreeBSD和Linux的相似处 ###
|
||||
|
||||
FreeBSD和Linux是两个免费开源的软件。实际上,它们的用户可以很容易的检查并修改源代码,用户拥有绝对的自由。而且,FreeBSD和Linux都是类Unix系统,它们的内核、内部组件、库程序都使用从历史上的AT&T Unix处继承的算法。FreeBSD从根基上更像Unix系统,而Linux是作为免费的类Unix系统发布的。许多工具应用都可以在FreeBSD和Linux中找到,实际上,他们几乎有同样的功能。
|
||||
|
||||
此外,FreeBSD能够运行大量的Linux应用。它可以安装一个Linux的兼容层,这个兼容层可以在编译FreeBSD时加入AAC Compact Linux得到或通过下载已编译了Linux兼容层的FreeBSD系统,其中会包括兼容程序:aac_linux.ko。不同于FreeBSD的是,Linux无法运行FreeBSD的软件。
|
||||
|
||||
最后,我们注意到虽然二者有同样的目标,但二者还是有一些不同之处,我们在下一节中列出。
|
||||
|
||||
### FreeBSD和Linux的区别 ###
|
||||
|
||||
目前对于大多数用户来说并没有一个选择FreeBSD还是Linux的清楚的准则。因为他们有着很多同样的应用程序,因为他们都被称作类Unix系统。
|
||||
|
||||
在这一章,我们将列出这两种系统的一些重要的不同之处。
|
||||
|
||||
#### 许可证 ####
|
||||
|
||||
- 两个系统的区别首先在于它们的许可证。Linux以GPL许可证发行,它为用户提供阅读、发行和修改源代码的自由,GPL许可证帮助用户避免仅仅发行二进制。而FreeBSD以BSD许可证发布,BSD许可证比GPL更宽容,因为其衍生著作不需要仍以该许可证发布。这意味着任何用户能够使用、发布、修改代码,并且不需要维持之前的许可证。
|
||||
- 您可以依据您的需求,在两种许可证中选择一种。首先是BSD许可证,由于其特殊的条款,它更受用户青睐。实际上,这个许可证使用户在保证源代码的封闭性的同时,可以售卖以该许可证发布的软件。再说说GPL,它需要每个使用以该许可证发布的软件的用户多加注意。
|
||||
- 如果想在以不同许可证发布的两种软件中做出选择,您需要了解他们各自的许可证,以及他们开发中的方法论,从而能了解他们特性的区别,来选择更适合自己需求的。
|
||||
|
||||
#### 控制 ####
|
||||
|
||||
- 由于FreeBSD和Linux是以不同的许可证发布的,Linus Torvalds控制着Linux的内核,而FreeBSD却与Linux不同,它并未被控制。我个人更倾向于使用FreeBSD而不是Linux,这是因为FreeBSD才是绝对自由的软件,没有任何控制许可的存在。Linux和FreeBSD还有其他的不同之处,我建议您先不急着做出选择,等读完本文后再做出您的选择。
|
||||
|
||||
#### 操作系统 ####
|
||||
|
||||
- Linux聚焦于内核系统,这与FreeBSD不同,FreeBSD的整个系统都被维护着。FreeBSD的内核和一组由FreeBSD团队开发的软件被作为一个整体进行维护。实际上,FreeBSD开发人员能够远程且高效的管理核心操作系统。
|
||||
- 而Linux方面,在管理系统方面有一些困难。由于不同的组件由不同的源维护,Linux开发者需要将它们汇集起来,才能达到同样的功能。
|
||||
- FreeBSD和Linux都给了用户大量的可选软件和发行版,但他们管理的方式不同。FreeBSD是统一的管理方式,而Linux需要被分别维护。
|
||||
|
||||
#### 硬件支持 ####
|
||||
|
||||
- 说到硬件支持,Linux比FreeBSD做的更好。但这不意味着FreeBSD没有像Linux那样支持硬件的能力。他们只是在管理的方式不同,这通常还依赖于您的需求。因此,如果您在寻找最新的解决方案,FreeBSD更适应您;但如果您在寻找一幅宏大的画卷,那最好使用Linux。
|
||||
|
||||
#### 原生FreeBSD Vs 原生Linux ####
|
||||
|
||||
- 两者的原生系统也有所不同。就像我之前说的,Linux是一个Unix的替代系统,由Linus Torvalds编写,并由网络上的许多极客一起协助实现的。Linux有一个现代系统所需要的全部功能,诸如虚拟内存、共享库、动态加载、优秀的内存管理等。它以GPL许可证发布。
|
||||
- FreeBSD也继承了Unix的许多重要的特性。FreeBSD作为在加州大学开发的BSD的一种发行版。开发BSD的最重要的原因是用一个开源的系统来替代AT&T操作系统,从而给用户无需AT&T证书便可使用的能力。
|
||||
- 许可证的问题是开发者们最关心的问题。他们试图提供一个最大化克隆Unix的开源系统。这影响了用户的选择,由于FreeBSD相比Linux使用BSD许可证进行发布,因而更加自由。
|
||||
|
||||
#### 支持的软件包 ####
|
||||
|
||||
- 从用户的角度来看,另一个二者不同的地方便是软件包以及从源码安装软件的可用性和支持。Linux只提供了预编译的二进制包,这与FreeBSD不同,它不但提供预编译的包,而且还提供从源码编译安装的构建系统(即 ports)。借助 ports 系统,FreeBSD让您既可以使用预编译的软件包(默认方式),也可以在编译时定制您的软件。
|
||||
- 这些可选组件允许您用FreeBSD构建所有的软件。而且,它们的管理还是层次化的,您可以在/usr/ports下找到源文件的地址以及一些正确使用FreeBSD的文档。
|
||||
- 这些提到的可选组件给予了产生不同软件包版本的可能性。FreeBSD给了您通过源代码构建以及预编译的两种软件,而不是像Linux一样只有预编译的软件包。您可以使用两种安装方式管理您的系统,如下面的示例所示。
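
下面是一个简单的示意,对比这两种安装方式的大致用法。其中的软件名 nginx 以及 ports 路径只是举例,实际请以 ports 树和软件仓库中的名称为准。

使用预编译软件包安装:

# pkg install nginx

或者从 ports 源码树编译安装:

# cd /usr/ports/www/nginx
# make install clean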
|
||||
|
||||
#### FreeBSD 和 Linux 常用工具比较 ####
|
||||
|
||||
- 有大量的常用工具在FreeBSD上可用,并且有趣的是他们由FreeBSD的团队所拥有。相反的,Linux工具来自GNU,这就是为什么在使用中有一些限制。
|
||||
- 实际上FreeBSD采用的BSD许可证非常有益且有用。因此,您有能力维护核心操作系统,控制这些应用程序的开发。有一些工具类似于它们的祖先 - BSD和Unix的工具,但不同于GNU的套件,GNU套件只想做到最小的向后兼容。
|
||||
|
||||
#### 标准 Shell ####
|
||||
|
||||
- FreeBSD默认使用tcsh。它是csh的增强版,由于FreeBSD以BSD许可证发行,因此不建议您在其中使用GNU的组件 bash shell。bash和tcsh的区别仅仅在于tcsh的脚本功能。实际上,我们更推荐在FreeBSD中使用sh shell,因为它更加可靠,可以避免一些使用tcsh和csh时出现的脚本问题(切换默认shell的示例见下)。
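
下面是一个把某个用户的默认shell切换为sh的小示例,其中的用户名 user 仅为示意:

# chsh -s /bin/sh user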
|
||||
|
||||
#### 一个更加层次化的文件系统 ####
|
||||
|
||||
- 像之前提到的一样,使用FreeBSD时,基础操作系统以及可选组件可以被很容易地区别开来。这带来了一些管理它们的规范。在Linux下,/bin、/sbin、/usr/bin或者/usr/sbin都是存放可执行文件的目录。FreeBSD不同,它有一些附加的组织规范:通过 ports 或软件包安装的可选组件被放在/usr/local/bin或者/usr/local/sbin目录下,而基础系统自身的程序仍位于/bin、/sbin、/usr/bin和/usr/sbin中。这种方法可以帮助管理和区分基础操作系统和可选组件。
|
||||
|
||||
### 结论 ###
|
||||
|
||||
FreeBSD和Linux都是免费且开源的系统,他们有相似点也有不同点。上面列出的内容并不能说哪个系统比另一个更好。实际上,FreeBSD和Linux都有自己的特点和技术规格,这使它们与别的系统区别开来。那么,您有什么看法呢?您已经有在使用它们中的某个系统了么?如果答案为是的话,请给我们您的反馈;如果答案是否的话,在读完我们的描述后,您怎么看?请在留言处发表您的观点。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.unixmen.com/comparative-introduction-freebsd-linux-users/
|
||||
|
||||
作者:[anismaj][a]
|
||||
译者:[wwy-hust](https://github.com/wwy-hust)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.unixmen.com/author/anis/
|
@ -0,0 +1,258 @@
|
||||
如何在Quagga BGP路由器中设置IPv6的BGP对等体和过滤
|
||||
================================================================================
|
||||
在之前的教程中,我们演示了如何使用Quagga建立一个[完备的BGP路由器][1]和配置[前缀过滤][2]。在本教程中,我们会向你演示如何创建IPv6 BGP对等体并通过BGP通告IPv6前缀。同时我们也将演示如何使用前缀列表和路由映射特性来过滤通告的或者获取到的IPv6前缀。
|
||||
|
||||
### 拓扑 ###
|
||||
|
||||
教程中,我们主要参考如下拓扑。
|
||||
|
||||
![](https://farm9.staticflickr.com/8599/15944659374_1c65852df2_c.jpg)
|
||||
|
||||
服务供应商A和B希望在他们之间建立一个IPv6的BGP对等体。他们的IPv6地址和AS信息如下所示。
|
||||
|
||||
- 对等体IP块: 2001:DB8:3::/64
|
||||
- 供应商A: AS 100, 2001:DB8:1::/48
|
||||
- 供应商B: AS 200, 2001:DB8:2::/48
|
||||
|
||||
### CentOS/RHEL安装Quagga ###
|
||||
|
||||
如果Quagga还没有安装,我们可以先使用yum安装。
|
||||
|
||||
# yum install quagga
|
||||
|
||||
在CentOS/RHEL 7中,SELinux策略默认会阻止 /usr/sbin/zebra 写入它的配置目录,这会影响我们接下来要介绍的安装操作。因此我们需要像下面这样关闭这条策略。如果你使用的是CentOS/RHEL 6,可以跳过这一步。
|
||||
|
||||
# setsebool -P zebra_write_config 1
|
||||
|
||||
### 创建配置文件 ###
|
||||
|
||||
在安装过后,我们先创建配置文件zebra/bgpd作为配置流程的开始。
|
||||
|
||||
# cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
|
||||
# cp /usr/share/doc/quagga-XXXXX/bgpd.conf.sample /etc/quagga/bgpd.conf
|
||||
|
||||
然后,允许这些服务开机自启。
|
||||
|
||||
**在 CentOS/RHEL 6:**
|
||||
|
||||
# service zebra start; service bgpd start
|
||||
# chkconfig zebra on; chkconfig bgpd on
|
||||
|
||||
**在 CentOS/RHEL 7:**
|
||||
|
||||
# systemctl start zebra; systemctl start bgpd
|
||||
# systemctl enable zebra; systemctl enable bgpd
|
||||
|
||||
Quagga内部提供一个叫作vtysh的shell,其界面与那些主流路由厂商Cisco或Juniper十分相似。启动vtysh shell命令行:
|
||||
|
||||
# vtysh
|
||||
|
||||
提示将改为:
|
||||
|
||||
router-a#
|
||||
|
||||
或
|
||||
|
||||
router-b#
|
||||
|
||||
在教程的其余部分,这个提示可以表明你正身处在哪个路由的vtysh shell中。
|
||||
|
||||
### 为Zebra指定日志文件 ###
|
||||
|
||||
来为Zebra配置日志文件,这会有助于调试。
|
||||
|
||||
首先,输入以下命令进入全局配置模式:
|
||||
|
||||
router-a# configure terminal
|
||||
|
||||
提示将变更成:
|
||||
|
||||
router-a(config)#
|
||||
|
||||
指定日志文件的位置。然后退出配置模式:
|
||||
|
||||
router-a(config)# log file /var/log/quagga/quagga.log
|
||||
router-a(config)# exit
|
||||
|
||||
通过以下命令保存配置:
|
||||
|
||||
router-a# write
|
||||
|
||||
### 配置接口IP地址 ###
|
||||
|
||||
现在,让我们为Quagga的物理接口配置IP地址。
|
||||
|
||||
首先,查看一下vtysh中现有的接口。
|
||||
|
||||
router-a# show interfaces
|
||||
|
||||
----------
|
||||
|
||||
Interface eth0 is up, line protocol detection is disabled
|
||||
## OUTPUT TRUNCATED ##
|
||||
Interface eth1 is up, line protocol detection is disabled
|
||||
## OUTPUT TRUNCATED ##
|
||||
|
||||
现在我们配置IPv6地址。
|
||||
|
||||
router-a# conf terminal
|
||||
router-a(config)# interface eth0
|
||||
router-a(config-if)# ipv6 address 2001:db8:3::1/64
|
||||
router-a(config-if)# interface eth1
|
||||
router-a(config-if)# ipv6 address 2001:db8:1::1/64
|
||||
|
||||
在路由B上采用同样的方式分配IPv6地址。我将配置汇总成如下。
|
||||
|
||||
router-b# show running-config
|
||||
|
||||
----------
|
||||
|
||||
interface eth0
|
||||
ipv6 address 2001:db8:3::2/64
|
||||
|
||||
interface eth1
|
||||
ipv6 address 2001:db8:2::1/64
|
||||
|
||||
由于两台路由的eth0端口同属一个子网,即2001:DB8:3::/64,你应该可以相互ping通。在保证ping通的情况下,我们开始下面的内容。
|
||||
|
||||
router-a# ping ipv6 2001:db8:3::2
|
||||
|
||||
----------
|
||||
|
||||
PING 2001:db8:3::2(2001:db8:3::2) 56 data bytes
|
||||
64 bytes from 2001:db8:3::2: icmp_seq=1 ttl=64 time=3.20 ms
|
||||
64 bytes from 2001:db8:3::2: icmp_seq=2 ttl=64 time=1.05 ms
|
||||
|
||||
### 步骤 1: IPv6 BGP 对等体 ###
|
||||
|
||||
本段,我们将在两个路由之间配置IPv6 BGP。首先,我们在路由A上指定BGP邻居。
|
||||
|
||||
router-a# conf t
|
||||
router-a(config)# router bgp 100
|
||||
router-a(config-router)# no auto-summary
|
||||
router-a(config-router)# no synchronization
|
||||
router-a(config-router)# neighbor 2001:DB8:3::2 remote-as 200
|
||||
|
||||
然后,我们定义IPv6的地址族。在地址族中,我们需要定义要通告的网段,并激活邻居。
|
||||
|
||||
router-a(config-router)# address-family ipv6
|
||||
router-a(config-router-af)# network 2001:DB8:1::/48
|
||||
router-a(config-router-af)# neighbor 2001:DB8:3::2 activate
|
||||
|
||||
我们在路由B上也实施相同的配置。这里提供我归总后的配置。
|
||||
|
||||
router-b# conf t
|
||||
router-b(config)# router bgp 200
|
||||
router-b(config-router)# no auto-summary
|
||||
router-b(config-router)# no synchronization
|
||||
router-b(config-router)# neighbor 2001:DB8:3::1 remote-as 100
|
||||
router-b(config-router)# address-family ipv6
|
||||
router-b(config-router-af)# network 2001:DB8:2::/48
|
||||
router-b(config-router-af)# neighbor 2001:DB8:3::1 activate
|
||||
|
||||
如果一切顺利,在路由间将会形成一个IPv6 BGP会话。如果失败了,请确保[在防火墙中开启了][3]必要的端口(TCP 179)。
|
||||
|
||||
我们使用以下命令来确认IPv6 BGP会话的信息。
|
||||
|
||||
**查看BGP汇总:**
|
||||
|
||||
router-a# show bgp ipv6 unicast summary
|
||||
|
||||
**查看BGP通告的路由:**
|
||||
|
||||
router-a# show bgp ipv6 neighbors <neighbor-IPv6-address> advertised-routes
|
||||
|
||||
**查看BGP获得的路由:**
|
||||
|
||||
router-a# show bgp ipv6 neighbors <neighbor-IPv6-address> routes
|
||||
|
||||
![](https://farm8.staticflickr.com/7317/16379555088_6e29cb6884_b.jpg)
|
||||
|
||||
### 步骤 2: 过滤IPv6前缀 ###
|
||||
|
||||
正如我们在上面看到的输出信息那样,路由间通告了他们完整的/48 IPv6前缀。出于演示的目的,我们会考虑以下要求。
|
||||
|
||||
- Router-B将通告一个/64前缀,一个/56前缀,和一个完整的/48前缀.
|
||||
- Router-A将只接受由B通告的、网络掩码长度在/56和/64之间的IPv6前缀。
|
||||
|
||||
我们将根据需要过滤的前缀,来使用路由器的前缀列表和路由映射。
|
||||
|
||||
![](https://farm8.staticflickr.com/7367/16381297417_6549218289_c.jpg)
|
||||
|
||||
#### 为路由B修改通告的前缀 ####
|
||||
|
||||
目前,路由B只通告一个/48前缀。我们修改路由B的BGP配置使它可以通告额外的/56和/64前缀。
|
||||
|
||||
router-b# conf t
|
||||
router-b(config)# router bgp 200
|
||||
router-b(config-router)# address-family ipv6
|
||||
router-b(config-router-af)# network 2001:DB8:2::/56
|
||||
router-b(config-router-af)# network 2001:DB8:2::/64
|
||||
|
||||
我们在路由A上验证是否收到了所有这些前缀。
|
||||
|
||||
![](https://farm9.staticflickr.com/8598/16379761980_7c083ae977_b.jpg)
|
||||
|
||||
太好了!我们在路由A上收到了所有的前缀,那么我们可以更进一步创建前缀列表和路由映射来过滤这些前缀。
|
||||
|
||||
#### 创建前缀列表 ####
|
||||
|
||||
就像在[上则教程中][4]描述的那样,前缀列表是一种用来匹配带有子网掩码长度的IP地址前缀的机制。按照我们指定的需求,我们需要在路由A上创建一则必要的前缀列表条目。
|
||||
|
||||
router-a# conf t
|
||||
router-a(config)# ipv6 prefix-list FILTER-IPV6-PRFX permit 2001:DB8:2::/56 le 64
|
||||
|
||||
以上的命令会创建一个名为'FILTER-IPV6-PRFX'的前缀列表,用以匹配任何2001:DB8:2::池内掩码在56和64之间的所有前缀。
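
如果想确认前缀列表已经创建成功,可以尝试用类似下面的命令查看,具体输出格式视你的Quagga版本而定:

router-a# show ipv6 prefix-list FILTER-IPV6-PRFX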
|
||||
|
||||
#### 创建并应用路由映射 ####
|
||||
|
||||
现在已经在前缀列表中创建了条目,我们也应该相应的创建一条使用此条目的路由映射规则了。
|
||||
|
||||
router-a# conf t
|
||||
router-a(config)# route-map FILTER-IPV6-RMAP permit 10
|
||||
router-a(config-route-map)# match ipv6 address prefix-list FILTER-IPV6-PRFX
|
||||
|
||||
以上的命令会创建一条名为'FILTER-IPV6-RMAP'的路由映射规则。这则规则只允许与之前创建的前缀列表'FILTER-IPV6-PRFX'相匹配的IPv6前缀通过。
|
||||
|
||||
要记住路由映射规则只有在应用在邻居或者端口的指定方向时才有效。我们将把路由映射应用到BGP的邻居配置中。我们将路由映射应用于入方向,作为进入路由端的前缀过滤器。
|
||||
|
||||
router-a# conf t
|
||||
router-a(config)# router bgp 100
|
||||
router-a(config-router)# address-family ipv6
|
||||
router-a(config-router-af)# neighbor 2001:DB8:3::2 route-map FILTER-IPV6-RMAP in
|
||||
|
||||
现在我们在路由A上再查看一遍获得到的路由,我们应该只能看见两个被允许的前缀了。
|
||||
|
||||
![](https://farm8.staticflickr.com/7337/16379762020_ec2dc39b31_c.jpg)
|
||||
|
||||
**注意**: 你可能需要重置BGP会话来刷新路由表。
|
||||
|
||||
所有IPv6的BGP会话可以使用以下的命令重启:
|
||||
|
||||
router-a# clear bgp ipv6 *
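
如果不想完全中断BGP会话,也可以尝试只对入方向做软重置,让路由器重新应用过滤策略。下面的命令是否可用取决于你的Quagga版本:

router-a# clear bgp ipv6 * soft in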
|
||||
|
||||
我汇总了两个路由的配置,并做成了一张清晰的图片以便阅读。
|
||||
|
||||
![](https://farm9.staticflickr.com/8672/16567240165_eee4398dc8_c.jpg)
|
||||
|
||||
### 总结 ###
|
||||
|
||||
总结一下,这篇教程重点在于如何创建BGP对等体和IPv6的过滤。我们演示了如何向邻居BGP路由通告IPv6前缀,和如何过滤通告前缀或获得的通告。需要注意,本教程使用的过程可能会对网络供应商的网络运作有所影响,请谨慎参考。
|
||||
|
||||
希望这些对你有用。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/ipv6-bgp-peering-filtering-quagga-bgp-router.html
|
||||
|
||||
作者:[Sarmed Rahman][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/sarmed
|
||||
[1]:http://xmodulo.com/centos-bgp-router-quagga.html
|
||||
[2]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html
|
||||
[3]:http://ask.xmodulo.com/open-port-firewall-centos-rhel.html
|
||||
[4]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html
|
@ -0,0 +1,67 @@
|
||||
Ubuntu上使用LVM轻松调整分区并制作快照
|
||||
================================================================================
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035707bbd74.png.pagespeed.ic.9_yebxUF1C.png)
|
||||
|
||||
Ubuntu的安装器提供了一个轻松易用的“使用LVM”复选框。说明中写道,启用它就会开启逻辑卷管理,因此你可以制作快照,并更容易地调整硬盘分区大小——这里将为大家讲述如何完成这些操作。
|
||||
|
||||
LVM是一种技术,某种程度上和[RAID阵列][1]或[Windows上的存储空间][2]类似。虽然该技术在服务器上更为有用,但是它也可以在桌面端PC上使用。
|
||||
|
||||
### 你应该在新安装Ubuntu时使用LVM吗? ###
|
||||
|
||||
第一个问题是,你是否想要在安装Ubuntu时使用LVM?如果是,那么Ubuntu让这一切变得很简单,只需要轻点鼠标就可以完成,但是该选项默认是不启用的。正如安装器所说的,它允许你调整分区、创建快照、合并多个磁盘到一个逻辑卷等等——所有这一切都可以在系统运行时完成。不同于传统分区,你不需要关掉你的系统,从Live CD或USB驱动器启动,然后[调整这些不使用的分区][3]。
|
||||
|
||||
完全坦率地说,普通Ubuntu桌面用户可能意识不到他们是否正在使用LVM。但是,如果你想要在今后做一些更高深的事情,那么LVM就会有所帮助了。LVM可能更复杂,在你今后需要恢复数据时可能会导致问题——尤其是在你经验不足时。这里不会有显著的性能损失——LVM是完全在Linux内核中实现的。
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035cbada6ae.png.pagespeed.ic.cnqyiKfCvi.png)
|
||||
|
||||
### 逻辑卷管理说明 ###
|
||||
|
||||
前面,我们已经[说明了何谓LVM][4]。概括来讲,它在你的物理磁盘和呈现在你系统中的分区之间提供了一个抽象层。例如,你的计算机可能装有两个硬盘驱动器,它们的大小都是 1 TB。你必须得在这些磁盘上至少分两个区,每个区大小 1 TB。
|
||||
|
||||
LVM就在这些分区上提供了一个抽象层。LVM不再直接使用磁盘上的传统分区,而是在你对这些磁盘初始化后,将它们当作独立的“物理卷”来对待。然后,你就可以基于这些物理卷创建“逻辑卷”。例如,你可以将这两个 1 TB 的磁盘组合成一个 2 TB 的分区,你的系统将只看到一个 2 TB 的卷,而LVM将会在后台处理这一切。一组物理卷以及一组逻辑卷被称之为“卷组”,一个标准的系统只会有一个卷组。
|
||||
|
||||
该抽象层使得调整分区、将多个磁盘组合成单个卷、甚至为一个运行着的分区的文件系统创建“快照”变得十分简单,而完成所有这一切都无需先卸载分区。
|
||||
|
||||
注意,如果你没有创建备份,那么将多个磁盘合并成一个卷将会是个糟糕的想法。它就像RAID 0——如果你将两个 1 TB 的卷组合成一个 2 TB 的卷,只要其中一个硬盘失败,你将丢失该卷上的重要数据。所以,如果你要走这条路,那么备份就极其重要。
|
||||
|
||||
### 管理LVM卷的图形化工具 ###
|
||||
|
||||
通常,[LVM通过Linux终端命令来管理][5]。这在Ubuntu上也行得通,但是有个更简单的图形化方法可供大家采用。如果你是一个熟悉GParted或者类似分区管理器的Linux用户,先别急——GParted根本不支持LVM磁盘。
|
||||
|
||||
然而,你可以使用Ubuntu附带的磁盘工具。该工具也被称之为GNOME磁盘工具,或者叫Palimpsest。在Dash中搜索“磁盘”(Disks)然后敲击回车即可启动它。不像GParted,该磁盘工具会在“其它设备”下显示LVM分区,因此你可以根据需要格式化这些分区,也可以调整其它选项。该工具在Live CD或USB驱动器环境下也可以使用。
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550361b3772f7.png.pagespeed.ic.nZWwLJUywR.png)
|
||||
|
||||
不幸的是,该磁盘工具不支持LVM的大多数强大的特性,没有管理卷组、扩展分区,或者创建快照等选项。对于这些操作,你可以通过终端来实现,但是你没有那个必要。相反,你可以打开Ubuntu软件中心,搜索关键字LVM,然后安装逻辑卷管理工具,你可以在终端窗口中运行**sudo apt-get install system-config-lvm**命令来安装它。安装完之后,你就可以从停靠盘上打开逻辑卷管理工具了。
|
||||
|
||||
这个图形化配置工具是由红帽公司开发的,虽然有点陈旧了,但却是唯一的图形化方式,你可以通过它来完成上述操作,将那些终端命令抛诸脑后了。
|
||||
|
||||
比如说,你想要添加一个新的物理卷到卷组中。你可以打开该工具,选择未初始化条目下的新磁盘,然后点击“初始化条目”按钮。然后,你就可以在未分配卷下找到新的物理卷了,你可以使用“添加到现存卷组”按钮来将它添加到“ubuntu-vg”卷组,这是Ubuntu在安装过程中创建的卷组。
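
同样的操作也可以在终端里完成。下面是一个简单的示意,假设新磁盘上的分区是 /dev/sdb1、卷组名为 ubuntu-vg,请换成你自己的设备名和卷组名:

sudo pvcreate /dev/sdb1
sudo vgextend ubuntu-vg /dev/sdb1
sudo vgs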
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363106789c.png.pagespeed.ic.drVInt3Weq.png)
|
||||
|
||||
卷组视图会列出你所有物理卷和逻辑卷的总览。这里,我们有两个横跨两个独立硬盘驱动器的物理分区,我们有一个交换分区和一个根分区,就像Ubuntu默认设置的分区图表。由于我们从另一个驱动器添加了第二个物理分区,现在那里有大量未使用空间。
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363f631c19.png.pagespeed.ic.54E_Owcq8y.png)
|
||||
|
||||
要扩展逻辑分区到物理空间,你可以在逻辑视图下选择它,点击编辑属性,然后修改大小来扩大分区。你也可以在这里缩减分区。
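
在终端里扩展逻辑卷的思路类似。下面的示意假设根逻辑卷为 /dev/ubuntu-vg/root、文件系统为 ext4:

sudo lvextend -L +10G /dev/ubuntu-vg/root
sudo resize2fs /dev/ubuntu-vg/root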
|
||||
|
||||
![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55036893712d3.png.pagespeed.ic.ce7y_Mt0uF.png)
|
||||
|
||||
system-config-lvm的其它选项允许你设置快照和镜像。对于传统桌面而言,你或许不需要这些特性,但是在这里也可以通过图形化处理。记住,你也可以[使用终端命令完成这一切][6]。
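
比如,在终端里创建和删除一个快照大致是这样,逻辑卷名和快照大小仅作示例:

sudo lvcreate --size 1G --snapshot --name root-snapshot /dev/ubuntu-vg/root
sudo lvs
sudo lvremove /dev/ubuntu-vg/root-snapshot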
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.howtogeek.com/211937/how-to-use-lvm-on-ubuntu-for-easy-partition-resizing-and-snapshots/
|
||||
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.howtogeek.com/162676/how-to-use-multiple-disks-intelligently-an-introduction-to-raid/
|
||||
[2]:http://www.howtogeek.com/109380/how-to-use-windows-8s-storage-spaces-to-mirror-combine-drives/
|
||||
[3]:http://www.howtogeek.com/114503/how-to-resize-your-ubuntu-partitions/
|
||||
[4]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
|
||||
[5]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
|
||||
[6]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
|
@ -0,0 +1,89 @@
|
||||
如何在树莓派 2 代上运行 Ubuntu Snappy Core
|
||||
================================================================================
|
||||
物联网(Internet of Things, IoT)时代即将来临。很快,过不了几年,我们就会问自己当初是怎么在没有物联网的情况下生存的,就像我们现在好奇人们在没有手机的年代是如何生活的一样。Canonical 正是这个发展迅速却仍然开放的物联网市场上的竞争者之一。这家公司宣称自己把赌注压到了 IoT 上,就像他们之前在“云”上做过的一样。在今年一月底,Canonical 推出了一个基于 Ubuntu Core 的小型操作系统,名字叫做 [Ubuntu Snappy Core][1]。
|
||||
|
||||
Snappy 是一种用来替代 deb 的新的打包格式,也是一个用来更新系统的前端,它从 CoreOS、红帽(Red Hat)和其他系统那里借鉴了**原子更新**这个想法。树莓派 2 代一投入市场,Canonical 就发布了用于树莓派的 Snappy Core 版本。第一代树莓派基于 ARMv6,而 Ubuntu 的 ARM 镜像是基于 ARMv7 的,所以不能运行 Ubuntu。不过这种状况现在改变了,Canonical 通过发布用于 RPI2 的镜像,抓住机会证明了 Snappy 就是一个适用于云计算,特别是适用于物联网的系统。
|
||||
|
||||
Snappy 同样可以运行在其它像 Amazon EC2、Microsoft Azure、Google Compute Engine 这样的云端上,也可以在 KVM、VirtualBox 和 Vagrant 上虚拟化运行。Canonical Ubuntu 已经拥抱了微软、谷歌、Docker、OpenStack 这些重量级选手,同时也与一些小项目达成合作关系。除了一些创业公司,比如 Ninja Sphere、Erle Robotics,还有一些开发板生产商,比如 Odroid、Banana Pro、Udoo、PCDuino、Parallella 和全志,Snappy 也提供了支持。Snappy Core 同时也希望尽快运行到路由器上,来帮助改进路由器生产商目前很少更新固件的现状。
|
||||
|
||||
接下来,让我们看看怎么样在树莓派2 上运行Snappy。
|
||||
|
||||
用于树莓派2 的Snappy 镜像可以从 [Raspberry Pi 网站][2] 上下载。解压缩出来的镜像必须[写到一个至少8GB 大小的SD 卡][3]。尽管原始系统很小,但是原子升级和回滚功能会占用不小的空间。使用Snappy 启动树莓派2 后你就可以使用默认用户名和密码(都是ubuntu)登录系统。
|
||||
|
||||
![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg)
|
||||
|
||||
sudo 已经配置好了可以直接用,安全起见,你应该使用以下命令来修改你的用户名
|
||||
|
||||
$ sudo usermod -l <new name> <old name>
|
||||
|
||||
或者也可以使用`adduser` 为你添加一个新用户。
|
||||
|
||||
因为RPI缺少硬件时钟,而Snappy 并不知道这一点,所以系统会有一个小bug:处理某些命令时会报很多错。不过这个很容易解决:
|
||||
|
||||
使用这个命令来确认这个bug 是否影响:
|
||||
|
||||
$ date
|
||||
|
||||
如果输出是 "Thu Jan 1 01:56:44 UTC 1970", 你可以这样做来改正:
|
||||
|
||||
$ sudo date --set="Sun Apr 04 17:43:26 UTC 2015"
|
||||
|
||||
改成你的实际时间。
|
||||
|
||||
![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg)
|
||||
|
||||
现在你可能打算检查一下,看看有没有可用的更新。注意通常使用的命令:
|
||||
|
||||
$ sudo apt-get update && sudo apt-get dist-upgrade
|
||||
|
||||
不过这时系统不会让你这么做,因为 Snappy 使用它自己精简过的、基于 dpkg 的包管理系统。这么做的原因是 Snappy 面向很多嵌入式场景,因此要让一切尽可能地保持简单。
|
||||
|
||||
让我们来看看最关键的部分,理解一下程序是如何与 Snappy 协作的。运行 Snappy 的 SD 卡上除了 boot 分区外还有 3 个分区,其中的两个构成了一对重复的文件系统。这两个平行文件系统被固定挂载为只读模式,并且任何时刻只有一个是激活的。第三个分区是一个部分可写的文件系统,用来让用户存储数据。在全新安装的系统中,标记为 'system-a' 的分区保存着一个完整的文件系统(被称作核心),而另一个平行文件系统则是空的。
|
||||
|
||||
![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg)
|
||||
|
||||
如果我们运行以下命令:
|
||||
|
||||
$ sudo snappy update
|
||||
|
||||
系统将会在'system-b' 上作为一个整体进行更新,这有点像是更新一个镜像文件。接下来你将会被告知要重启系统来激活新核心。
|
||||
|
||||
重启之后,运行下面的命令可以检查你的系统是否已经更新到最新版本,以及当前被激活的是哪个核心。
|
||||
|
||||
$ sudo snappy versions -a
|
||||
|
||||
经过更新-重启两步操作,你应该可以看到被激活的核心已经被改变了。
|
||||
|
||||
因为到目前为止我们还没有安装任何软件,下面的命令:
|
||||
|
||||
$ sudo snappy update ubuntu-core
|
||||
|
||||
将会生效,而且如果你打算仅仅更新特定的OS 版本,这也是一个办法。如果出了问题,你可以使用下面的命令回滚:
|
||||
|
||||
$ sudo snappy rollback ubuntu-core
|
||||
|
||||
这将会把系统状态回滚到更新之前。
|
||||
|
||||
![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg)
|
||||
|
||||
再来说说那些让Snappy 有用的软件。这里不会讲的太多关于如何构建软件、向Snappy 应用商店添加软件的基础知识,但是你可以通过Freenode 上的IRC 频道#snappy 了解更多信息,那个上面有很多人参与。你可以通过浏览器访问http://<ip-address>:4200 来浏览应用商店,然后从商店安装软件,再在浏览器里访问http://webdm.local 来启动程序。如何构建用于Snappy 的软件并不难,而且也有了现成的[参考文档][4] 。你也可以很容易的把DEB 安装包使用Snappy 格式移植到Snappy 上。
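
作为参考,从命令行安装和查看软件包大致如下。其中 hello-world 只是一个常见的演示包,包名仅作示例,请以商店中实际提供的软件为准:

$ sudo snappy install hello-world
$ sudo snappy versions -a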
|
||||
|
||||
![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg)
|
||||
|
||||
尽管Ubuntu Snappy Core 吸引我们去研究新型的Snappy 安装包格式和Canonical 式的原子更新操作,但是因为有限的可用应用,它现在在生产环境里还不是很有用。但是既然搭建一个Snappy 环境如此简单,这看起来是一个学点新东西的好机会。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html
|
||||
|
||||
作者:[Ferdinand Thommes][a]
|
||||
译者:[Ezio](https://github.com/oska874)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/ferdinand
|
||||
[1]:http://www.ubuntu.com/things
|
||||
[2]:http://www.raspberrypi.org/downloads/
|
||||
[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html
|
||||
[4]:https://developer.ubuntu.com/en/snappy/
|
@ -0,0 +1,131 @@
|
||||
如何通过反向 SSH 隧道访问 NAT 后面的 Linux 服务器
|
||||
================================================================================
|
||||
你在家里运行着一台 Linux 服务器,访问它需要先经过 NAT 路由器或者限制性防火墙。现在你想不在家的时候用 SSH 登录到这台服务器。你如何才能做到呢?SSH 端口转发当然是一种选择。但是,如果你需要处理多个嵌套的 NAT 环境,端口转发可能会变得非常棘手。另外,在多种 ISP 特定条件下可能会受到干扰,例如阻塞转发端口的限制性 ISP 防火墙、或者在用户间共享 IPv4 地址的运营商级 NAT。
|
||||
|
||||
### 什么是反向 SSH 隧道? ###
|
||||
|
||||
SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧道的概念非常简单。对于此,在限制性家庭网络之外你需要另一台主机(所谓的“中继主机”),你能从当前所在地通过 SSH 登录。你可以用有公共 IP 地址的 [VPS 实例][1] 配置一个中继主机。然后要做的就是从你家庭网络服务器中建立一个到公共中继主机的永久 SSH 隧道。有了这个隧道,你就可以从中继主机中连接“回”家庭服务器(这就是为什么称之为 “反向” 隧道)。不管你在哪里、你家庭网络中的 NAT 或 防火墙限制多么严重,只要你可以访问中继主机,你就可以连接到家庭服务器。
|
||||
|
||||
![](https://farm8.staticflickr.com/7742/17162647378_c7d9f10de8_b.jpg)
|
||||
|
||||
### 在 Linux 上设置反向 SSH 隧道 ###
|
||||
|
||||
让我们来看看怎样创建和使用反向 SSH 隧道。我们有如下假设。我们会设置一个从家庭服务器到中继服务器的反向 SSH 隧道,然后我们可以通过中继服务器从客户端计算机 SSH 登录到家庭服务器。**中继服务器** 的公共 IP 地址是 1.1.1.1。
|
||||
|
||||
在家庭主机上,按照以下方式打开一个到中继服务器的 SSH 连接。
|
||||
|
||||
homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1
|
||||
|
||||
这里的 10022 可以换成任何你想使用的端口号,只需要确保中继服务器上没有其它程序在使用这个端口。
|
||||
|
||||
“-R 10022:localhost:22” 选项定义了一个反向隧道。它转发中继服务器 10022 端口的流量到家庭服务器的 22 号端口。
|
||||
|
||||
“-fN” 选项让 SSH 在成功通过服务器验证之后转入后台运行。当你不想在远程 SSH 服务器上执行任何命令、只想转发端口的时候(就像我们的例子),这个选项非常有用。
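
如果不想每次都敲这么长的命令,也可以把这些选项写进家庭服务器上的 ~/.ssh/config。下面是一个示意,其中主机别名 relay 是假设的:

Host relay
    HostName 1.1.1.1
    User relayserver_user
    RemoteForward 10022 localhost:22

之后在家庭服务器上运行 “ssh -fN relay” 就能建立同样的反向隧道。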
|
||||
|
||||
运行上面的命令之后,你就会回到家庭主机的命令行提示框中。
|
||||
|
||||
登录到中继服务器,确认 127.0.0.1:10022 绑定到了 sshd。如果是的话就表示已经正确设置了反向隧道。
|
||||
|
||||
relayserver~$ sudo netstat -nap | grep 10022
|
||||
|
||||
----------
|
||||
|
||||
tcp 0 0 127.0.0.1:10022 0.0.0.0:* LISTEN 8493/sshd
|
||||
|
||||
现在就可以从任何其它计算机(客户端计算机)登录到中继服务器,然后按照下面的方法访问家庭服务器。
|
||||
|
||||
relayserver~$ ssh -p 10022 homeserver_user@localhost
|
||||
|
||||
需要注意的一点是你在本地输入的 SSH 登录/密码应该是家庭服务器的,而不是中继服务器的,因为你是通过隧道的本地端点登录到家庭服务器。因此不要输入中继服务器的登录/密码。成功登陆后,你就在家庭服务器上了。
|
||||
|
||||
### 通过反向 SSH 隧道直接连接到网络地址变换后的服务器 ###
|
||||
|
||||
上面的方法允许你访问 NAT 后面的 **家庭服务器**,但你需要登录两次:首先登录到 **中继服务器**,然后再登录到**家庭服务器**。这是因为中继服务器上 SSH 隧道的端点绑定到了回环地址(127.0.0.1)。
|
||||
|
||||
事实上,有一种方法可以只需要登录到中继服务器就能直接访问网络地址变换之后的家庭服务器。要做到这点,你需要让中继服务器上的 sshd 不仅转发回环地址上的端口,还要转发外部主机的端口。这通过指定中继服务器上运行的 sshd 的 **网关端口** 实现。
|
||||
|
||||
打开**中继服务器**的 /etc/ssh/sshd_conf 并添加下面的行。
|
||||
|
||||
relayserver~$ vi /etc/ssh/sshd_conf
|
||||
|
||||
----------
|
||||
|
||||
GatewayPorts clientspecified
|
||||
|
||||
重启 sshd。
|
||||
|
||||
基于 Debian 的系统:
|
||||
|
||||
relayserver~$ sudo /etc/init.d/ssh restart
|
||||
|
||||
基于红帽的系统:
|
||||
|
||||
relayserver~$ sudo systemctl restart sshd
|
||||
|
||||
现在在家庭服务器中按照下面方式初始化一个反向 SSH 隧道。
|
||||
|
||||
homeserver~$ ssh -fN -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
|
||||
|
||||
登录到中继服务器然后用 netstat 命令确认成功建立的一个反向 SSH 隧道。
|
||||
|
||||
relayserver~$ sudo netstat -nap | grep 10022
|
||||
|
||||
----------
|
||||
|
||||
tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev
|
||||
|
||||
不像之前的情况,现在隧道的端点是 1.1.1.1:10022(中继服务器的公共 IP 地址),而不是 127.0.0.1:10022。这就意味着从外部主机可以访问隧道的端点。
|
||||
|
||||
现在在任何其它计算机(客户端计算机),输入以下命令访问网络地址变换之后的家庭服务器。
|
||||
|
||||
clientcomputer~$ ssh -p 10022 homeserver_user@1.1.1.1
|
||||
|
||||
在上面的命令中,1.1.1.1 是中继服务器的公共 IP 地址,家庭服务器用户必须是和家庭服务器相关联的用户账户。这是因为你真正登录到的主机是家庭服务器,而不是中继服务器。后者只是中继你的 SSH 流量到家庭服务器。
|
||||
|
||||
### 在 Linux 上设置一个永久反向 SSH 隧道 ###
|
||||
|
||||
现在你已经明白了怎样创建一个反向 SSH 隧道,接下来我们把隧道设置为“永久”的,这样隧道启动后就会一直运行(不管临时的网络拥塞、SSH 超时、中继主机重启,等等)。毕竟,如果隧道不总是有效,你就无法可靠地登录到你的家庭服务器。
|
||||
|
||||
对于永久隧道,我打算使用一个叫 autossh 的工具。正如其名字所暗示的,这个程序会在 SSH 会话因为任何原因断开后自动将其重启。因此,它对于保持反向 SSH 隧道持续有效非常有用。
|
||||
|
||||
第一步,我们要设置从家庭服务器到中继服务器的[无密码 SSH 登录][2]。这样的话,autossh 可以不需要用户干预就能重启一个损坏的反向 SSH 隧道。
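
无密码登录的设置大致如下(在家庭服务器上、以将要运行 autossh 的用户执行,密钥路径使用默认值即可):

homeserver~$ ssh-keygen -t rsa
homeserver~$ ssh-copy-id relayserver_user@1.1.1.1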
|
||||
|
||||
下一步,在初始化隧道的家庭服务器上[安装 autossh][3]。
|
||||
|
||||
在家庭服务器上,用下面的参数运行 autossh 来创建一个连接到中继服务器的永久 SSH 隧道。
|
||||
|
||||
homeserver~$ autossh -M 10900 -fN -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
|
||||
|
||||
“-M 10900” 选项指定中继服务器上的监视端口,用于交换监视 SSH 会话的测试数据。中继服务器上的其它程序不能使用这个端口。
|
||||
|
||||
“-fN” 选项传递给 ssh 命令,让 SSH 隧道在后台运行。
|
||||
|
||||
“-o XXXX” 选项让 ssh:
|
||||
|
||||
- 使用密钥验证,而不是密码验证。
|
||||
- 自动接受(未知)SSH 主机密钥。
|
||||
- 每 60 秒交换 keep-alive 消息。
|
||||
- 没有收到任何响应时最多发送 3 条 keep-alive 消息。
|
||||
|
||||
其余 SSH 隧道相关的选项和之前介绍的一样。
|
||||
|
||||
如果你想系统启动时自动运行 SSH 隧道,你可以将上面的 autossh 命令添加到 /etc/rc.local。
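
例如,可以在 /etc/rc.local 末尾(如果有 exit 0 则写在它之前)加入类似下面的一行。这只是一个示意:rc.local 以 root 身份运行,因此这里假设用 su 切换到保存了 SSH 密钥的用户(此处以 homeserver_user 为例)再启动 autossh:

su - homeserver_user -c 'autossh -M 10900 -fN -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1'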
|
||||
|
||||
### 总结 ###
|
||||
|
||||
在这篇博文中,我介绍了你如何能从外部中通过反向 SSH 隧道访问限制性防火墙或 NAT 网关之后的 Linux 服务器。尽管我介绍了家庭网络中的一个使用事例,在企业网络中使用时你尤其要小心。这样的一个隧道可能被视为违反公司政策,因为它绕过了企业的防火墙并把企业网络暴露给外部攻击。这很可能被误用或者滥用。因此在使用之前一定要记住它的作用。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/nanni
|
||||
[1]:http://xmodulo.com/go/digitalocean
|
||||
[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html
|
||||
[3]:http://ask.xmodulo.com/install-autossh-linux.html
|