mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-01-25 23:11:02 +08:00
commit
74cbf1f7cc
在 Debian 中安装 OpenQRM 云计算平台
================================================================================

### 简介 ###

**openQRM** 是一个基于 Web 的开源云计算和数据中心管理平台,可灵活地与企业数据中心的现存组件集成。

它支持下列虚拟化技术:

- KVM
- XEN
- Citrix XenServer
- VMWare ESX
- LXC
- OpenVZ
openQRM 中的混合云连接器支持 **Amazon AWS**、**Eucalyptus** 或 **OpenStack** 等一系列私有或公有云提供商,以此来按需扩展你的基础设施。它也可以自动地进行资源调配、虚拟化、存储和配置管理,且保证高可用性。集成了计费系统的自服务云门户可使终端用户按需请求新的服务器和应用堆栈。

openQRM 有两种版本可供选择:

- 企业版
- 社区版

你可以在[这里][1]查看这两个版本间的区别。
### 特点 ###
- 私有/混合的云计算平台
- 可管理物理或虚拟的服务器系统
- 集成了所有主流的开源或商业的存储技术
- 跨平台: Linux, Windows, OpenSolaris 和 BSD
- 支持 KVM, XEN, Citrix XenServer, VMWare ESX(i), lxc, OpenVZ 和 VirtualBox
- 支持使用额外的 Amazon AWS, Eucalyptus, Ubuntu UEC 等云资源来进行混合云设置
- 支持 P2V, P2P, V2P, V2V 迁移和高可用性
- 集成最好的开源管理工具 – 如 puppet, nagios/Icinga 或 collectd
- 有超过 50 个插件来支持扩展功能并与你的基础设施集成
- 针对终端用户的自服务门户
- 集成了计费系统
### 安装 ###
在这里我们将在 Debian 7.5 上安装 openQRM。你的服务器必须至少满足以下要求:

- 1 GB RAM
- 100 GB Hdd(硬盘驱动器)
- 可选: Bios 支持虚拟化(Intel CPUs 的 VT 或 AMD CPUs AMD-V)

首先,安装 `make` 软件包来编译 openQRM 源码包:
然后,逐次运行下面的命令来安装 openQRM。

从[这里][2]下载最新的可用版本:

    wget http://sourceforge.net/projects/openqrm/files/openQRM-Community-5.1/openqrm-community-5.1.tgz

    sudo make start
安装期间,会要求你更新文件 `php.ini`。

![~-openqrm-community-5.1-src_001](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_001.png)
输入 mysql root 用户密码。

![~-openqrm-community-5.1-src_002](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_002.png)

再次输入密码:

![~-openqrm-community-5.1-src_003](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_003.png)
选择邮件服务器配置类型:

![~-openqrm-community-5.1-src_004](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_004.png)

假如你不确定该如何选择,可选择 `Local only`。在我们的这个示例中,我选择了 **Local only** 选项。

![~-openqrm-community-5.1-src_005](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_005.png)
输入你的系统邮件名称,并最后输入 Nagios 管理员密码。

![~-openqrm-community-5.1-src_007](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_007.png)

根据你的网络连接状态,上面的命令可能将花费很长的时间来下载所有运行 openQRM 所需的软件包,请耐心等待。

最后你将得到 openQRM 配置 URL 地址以及相关的用户名和密码。

![~_002](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@debian-_002.png)
### 配置 ###

默认的用户名和密码是: **openqrm/openqrm** 。

![Mozilla Firefox_003](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/Mozilla-Firefox_003.png)
选择一个网卡来给 openQRM 管理网络使用。

![openQRM Server - Mozilla Firefox_004](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_004.png)

选择一个数据库类型,在我们的示例中,我选择了 mysql。

![openQRM Server - Mozilla Firefox_006](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_006.png)
现在,配置数据库连接并初始化 openQRM。在这里,我使用 **openQRM** 作为数据库名称,**root** 作为用户,并将 debian 作为数据库的密码。请注意,这里应该输入先前在安装 openQRM 时创建的 mysql root 用户密码。

![openQRM Server - Mozilla Firefox_012](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_012.png)

祝贺你! openQRM 已经安装并配置好了。

![openQRM Server - Mozilla Firefox_013](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_013.png)
### 更新 openQRM ###

    cd openqrm/src/
    make update

到现在为止,我们做的只是在我们的 Debian 服务器中安装和配置 openQRM。至于创建、运行虚拟机,管理存储,额外的系统集成和运行你自己的私有云等内容,我建议你阅读 [openQRM 管理员指南][3]。

就是这些了,欢呼吧!周末快乐!
--------------------------------------------------------------------------------

via: http://www.unixmen.com/install-openqrm-cloud-computing-platform-debian/

作者:[SK][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
Tickr:一个开源的 Linux 桌面 RSS 新闻速递应用
================================================================================

![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/rss-tickr.jpg)

**最新的!最新的!阅读关于它的一切!**

好了,我们今天要推荐的应用程序可不是旧式报纸的二进制版本——它会以一种漂亮的方式将最新的新闻推送到你的桌面上。

Tickr 是一个基于 GTK 的 Linux 桌面新闻速递应用,能够以横条方式滚动显示最新头条新闻以及你最爱的 RSS 资讯文章标题,当然你可以放置在你桌面的任何地方。

请叫我 Joey Calamezzo;我把它放在底部,就像电视新闻台的滚动字幕一样。 (LCTT 译注: Joan Callamezzo 是 Pawnee Today 的主持人,一位 Pawnee 的本地新闻/脱口秀主持人。而本文作者是 Joey。)

“到你了,副标题”。
### RSS -还记得吗? ###

“谢谢,这段结束了。”

在一个充斥着推送通知、社交媒体、标题党,以及哄骗人们点击的清单体的时代,RSS 看起来有一点过时了。

对我来说呢?恩,RSS 是名副其实的真正简单的聚合(RSS:Really Simple Syndication)。这是将消息通知给我的最简单、最易于管理的方式。我可以在我愿意的时候,管理和阅读一些东西;没必要匆忙地去看,以防这条微博消失在信息流中,或者推送通知消失。

Tickr 的美在于它的实用性。你可以让新闻不断地滚动在屏幕的底部,然后不时地瞥一眼。
虽然 Tickr 可以从 Ubuntu 软件中心安装,但它已经很久没有更新了。当你打开它那笨拙又不直观的控制面板时,这种被遗弃的感觉尤为明显。

要打开它:

1. 右键单击 tickr 条
1. 转至“编辑 > 首选项”
1. 调整各种设置

选项和设置有很多,其中有些似乎一目了然。但是详细了解这些你才能够掌握一切,包括:
- 设置滚动速度
- 选择鼠标经过时的行为
- 资讯更新频率
- 字体,包括字体大小和颜色
- 消息分隔符(“delineator”)
- tickr 在屏幕上的位置
- tickr 条的颜色和不透明度
- 选择每种资讯显示多少文章
有个值得一提的“怪癖”是,点击“应用”按钮只会更新 tickr 的屏幕预览;要保存更改,请在退出“首选项”窗口时单击“确定”。

想要得到完美的显示效果,你需要一点点调整,特别是在 Unity 上。

按下“全宽按钮”,能够让应用程序自动检测你的屏幕宽度。默认情况下,当放置在顶部或底部时,会留下 25 像素的间距(应用程序以前是在 GNOME2.x 桌面上创建的)。只需添加额外的 25 像素到输入框,来弥补这个问题。

其他可供选择的选项包括:选择文章在哪个浏览器打开;tickr 是否以一个常规的窗口出现;是否显示一个时钟;以及应用程序多久检查一次文章资讯。
#### 添加资讯 ####

### 在 Ubuntu 14.04 LTS 或更高版本上安装 Tickr ###

这就是 Tickr,它不会改变世界,但是它能让你知道世界上发生了什么。

在 Ubuntu 14.04 LTS 或更高版本中安装,点击下面的按钮转到 Ubuntu 软件中心。

- [点击此处进入 Ubuntu 软件中心安装 tickr][1]
via: http://www.omgubuntu.co.uk/2015/06/tickr-open-source-desktop-rss-news-ticke

作者:[Joey-Elijah Sneddon][a]
译者:[xiaoyu33](https://github.com/xiaoyu33)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
published/20150717 How to monitor NGINX with Datadog - Part 3.md

如何使用 Datadog 监控 NGINX(第三篇)
================================================================================

![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)

如果你已经阅读了前面的[如何监控 NGINX][1],你应该知道从你网络环境的几个指标中可以获取多少信息,也看到了收集 NGINX 特有的指标是多么容易。但要实现全面、持续的 NGINX 监控,你需要一个强大的监控系统来存储并可视化指标,当异常发生时能提醒你。在这篇文章中,我们将向你展示如何使用 Datadog 来监控 NGINX,以便你可以在定制的仪表盘中查看这些指标:

![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png)

Datadog 允许你以单个主机、服务、流程和度量来构建图形和警报,或者使用它们的几乎任何组合来构建。例如,你可以监控你的所有主机,或者某个特定可用区域的所有 NGINX 主机,或者你可以监视具有特定标签的所有主机的某个关键指标。本文将告诉你如何:

- 在 Datadog 仪表盘上监控 NGINX 指标,就像监控其他系统一样
- 当一个关键指标急剧变化时设置自动警报来通知你
### 配置 NGINX ###

为了收集 NGINX 指标,首先需要确保 NGINX 已启用 status 模块和一个报告 status 指标的 URL。一步步的[配置开源 NGINX][2] 和 [NGINX Plus][3] 请参见之前的相关文章。
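
对于开源版 NGINX,启用 status 模块通常只需要在配置中暴露一个 `stub_status` 端点。下面是一个最小化的配置草稿(其中的 location 路径和访问控制均为假设,具体请以前文的配置教程为准):

```nginx
server {
    listen 80;

    location /nginx_status {
        # 开源版 NGINX 使用 stub_status 模块暴露基本指标
        stub_status on;
        # 仅允许本机(即 Datadog 代理所在主机)访问
        allow 127.0.0.1;
        deny all;
    }
}
```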
### 整合 Datadog 和 NGINX ###

#### 安装 Datadog 代理 ####

Datadog 代理是[一个开源软件][4],它能收集和报告你主机的指标,这样就可以使用 Datadog 查看和监控它们。安装这个代理通常[仅需要一个命令][5]。

只要你的代理启动并运行着,你会看到你主机的指标报告[在你 Datadog 账号下][6]。
![Datadog infrastructure list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png)

#### 配置 Agent ####

接下来,你需要为代理创建一个简单的 NGINX 配置文件。你可以[在这儿][7]找到你系统中代理配置目录的位置。

在该目录里面的 conf.d/nginx.yaml.example 中,你会发现[一个简单的配置文件][8],你可以编辑它,为每个 NGINX 实例提供 status URL 和可选的标签:
    init_config:

    instances:

      - nginx_status_url: http://localhost/nginx_status/
        tags:
          - instance:foo
当你提供了 status URL 和任意的标签,将配置文件保存为 conf.d/nginx.yaml。

#### 重启代理 ####

你必须重新启动代理程序来加载新的配置文件。重新启动命令[在这里][9],根据平台的不同而不同。

#### 检查配置文件 ####

要检查 Datadog 和 NGINX 是否正确整合,运行 Datadog 的 info 命令。每个平台使用的命令[看这儿][10]。

如果配置是正确的,你会看到这样的输出:
    Checks
    ======

      [...]

      nginx
      -----
          - instance #0 [OK]
          - Collected 8 metrics & 0 events
#### 安装整合 ####

最后,在你的 Datadog 帐户里打开“Nginx 整合”。这非常简单,你只要在 [NGINX 整合设置][11]中点击“Install Integration”按钮。

![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png)

### 指标! ###

一旦代理开始报告 NGINX 指标,你会看到[一个 NGINX 仪表盘][12]出现在你的 Datadog 可用仪表盘的列表中。

基本的 NGINX 仪表盘显示有用的图表,囊括了[我们的 NGINX 监控介绍][13]中的几个关键指标。(一些指标,特别是请求处理时间,要求进行日志分析,Datadog 不支持。)

你可以通过增加 NGINX 之外的重要指标的图表来轻松创建一个全面的仪表盘,以监控你的整个网站设施。例如,你可能想监视你 NGINX 主机的主机级指标,如系统负载。要构建一个自定义的仪表盘,只需点击仪表盘右上角附近的选项并选择“Clone Dash”来克隆一个默认的 NGINX 仪表盘。
![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png)

你也可以使用 Datadog 的[主机地图][14]在更高层面监控你的 NGINX 实例,举个例子,用颜色标示你所有 NGINX 主机的 CPU 使用率来辨别潜在热点。

![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png)

### NGINX 指标警报 ###
一旦 Datadog 捕获并可视化你的指标,你可能会希望建立一些监控,自动地密切关注你的指标,并在有问题时提醒你。下面将介绍一个典型的例子:一个在 NGINX 吞吐量突然下降时提醒你的指标监控器。

#### 监控 NGINX 吞吐量 ####

Datadog 指标警报可以是“基于阈值的”(当指标超过设定值时会警报)或“基于变化幅度的”(当指标的变化超过一定范围时会警报)。在这个例子里,我们会采取后一种方式,当每秒传入的请求急剧下降时会提醒我们。下降往往意味着有问题。
1. **创建一个新的指标监控**。从 Datadog 的“Monitors”下拉列表中选择“New Monitor”。选择“Metric”作为监视器类型。

![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png)

2. **定义你的指标监视器**。我们想知道 NGINX 每秒总请求量下降了多少,所以我们定义整个基础设施中我们感兴趣的 nginx.net.request_per_s 之和。

![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png)

3. **设置指标警报条件**。我们想要在指标变化时警报,而不是达到一个固定的值时,所以我们选择“Change Alert”。我们设置监控为无论何时请求量下降了 30% 以上时警报。在这里,我们使用一个一分钟的数据窗口来表示“now”时指标的值,并与之前 10 分钟内该指标的平均值作比较。

![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png)

4. **自定义通知**。如果 NGINX 的请求量下降,我们想要通知我们的团队。在这个例子中,我们将给 ops 团队的聊天室发送通知,并给值班工程师发送短信。在“Say what’s happening”中,我们会为监控器命名,并添加一条伴随该通知的短消息,建议首先开始调查的内容。我们会 @ ops 团队使用的 Slack,并 @pagerduty [将警报发给短信][15]。

![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png)

5. **保存集成监控**。点击页面底部的“Save”按钮。你现在监控着一个关键的 NGINX [工作指标][16],而当它快速下跌时会给值班工程师发短信。
### 结论 ###

在这篇文章中,我们谈到了通过整合 NGINX 与 Datadog 来可视化你的关键指标,并当你的网络基础架构有问题时通知你的团队。

如果你一直使用你自己的 Datadog 账号,你现在应该可以极大地提升你的 web 环境的可视化,也有能力针对你的环境、你所使用的模式、对你的组织最有价值的指标来创建自动监控。

如果你还没有 Datadog 帐户,你可以注册[免费试用][17],并开始监视你的基础架构、应用程序和现在的服务。
------------------------------------------------------------

via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/

作者:K Young
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://linux.cn/article-5970-1.html
[2]:https://linux.cn/article-5985-1.html#open-source
[3]:https://linux.cn/article-5985-1.html#plus
[4]:https://github.com/DataDog/dd-agent
[5]:https://app.datadoghq.com/account/settings#agent
[6]:https://app.datadoghq.com/infrastructure
[7]:http://docs.datadoghq.com/guides/basic_agent_usage/
[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example
[9]:http://docs.datadoghq.com/guides/basic_agent_usage/
[10]:http://docs.datadoghq.com/guides/basic_agent_usage/
[11]:https://app.datadoghq.com/account/settings#integrations/nginx
[12]:https://app.datadoghq.com/dash/integration/nginx
[13]:https://linux.cn/article-5970-1.html
[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/
[15]:https://www.datadoghq.com/blog/pagerduty/
[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up
[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md
[19]:https://github.com/DataDog/the-monitor/issues
如何在 Linux 终端中获取你的公有 IP
================================================================================

![](http://www.blackmoreops.com/wp-content/uploads/2015/06/256x256xHow-to-get-Public-IP-from-Linux-Terminal-blackMORE-Ops.png.pagespeed.ic.GKEAEd4UNr.png)

公有地址由 InterNIC 分配,并由基于类的网络 ID 或基于 CIDR 的地址块构成(被称为 CIDR 块),保证了在全球互联网中的唯一性。当公有地址被分配时,其路由将会被记录到互联网中的路由器中,这样访问公有地址的流量就能顺利到达。比如,当一个 CIDR 块被以网络 ID 和子网掩码的形式分配给一个组织时,对应的 [网络 ID,子网掩码] 也会同时作为路由储存在互联网中的路由器中,目标是该 CIDR 块中地址的 IP 封包就会被导向对应的位置。

在本文中我将会介绍几种在 Linux 终端中查看你的公有 IP 地址的方法。这对普通桌面用户来说也许用处不大,但对 Linux 服务器(无 GUI,或者作为只能使用基本工具的用户登录时)会很有用。无论如何,从 Linux 终端中获取公有 IP 在各种方面都很有意义,说不定某一天就能用得着。

以下是我们主要使用的两个命令:curl 和 wget。你可以换着用。
### Curl 纯文本格式输出: ###

    curl icanhazip.com
    curl ifconfig.me
    curl curlmyip.com
    curl ip.appspot.com
    curl ipinfo.io/ip
    curl ipecho.net/plain
    curl www.trackip.net/i
### curl JSON格式输出: ###

    curl ipinfo.io/json
    curl ifconfig.me/all.json
    curl www.trackip.net/ip?json (有点丑陋)
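
如果想在脚本中使用这些 JSON 输出,可以直接用文本工具把 ip 字段抽取出来。下面是一个最小的草稿,为了便于离线演示,这里用一段 ipinfo.io 风格的示例 JSON 代替真实的网络请求:

```shell
# 示例 JSON(假设的返回内容,真实输出以服务为准)
json='{ "ip": "93.184.216.34", "city": "Norwell", "country": "US" }'

# 用 grep/cut 抽取 "ip" 字段的值
echo "$json" | grep -o '"ip": "[^"]*"' | cut -d'"' -f4
```

实际使用时,把 `json=...` 换成 `json=$(curl -s ipinfo.io/json)` 即可。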
### curl XML格式输出: ###

    curl ifconfig.me/all.xml

### curl 得到所有 IP 细节 ###

    curl ifconfig.me/all
### 使用 DYDNS (当你使用 DYDNS 服务时有用)###

    curl -s 'http://checkip.dyndns.org' | sed 's/.*Current IP Address: \([0-9\.]*\).*/\1/g'
    curl -s http://checkip.dyndns.org/ | grep -o "[[:digit:].]\+"
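
上面的管道做的事情是把 checkip.dyndns.org 返回的 HTML 页面中的 IP 地址抠出来。下面用一段示例 HTML(假设的返回内容)演示这个 sed 表达式的效果,无需联网即可验证:

```shell
# checkip.dyndns.org 返回的页面大致是这种形式(示例)
html='<html><head><title>Current IP Check</title></head><body>Current IP Address: 93.184.216.34</body></html>'

# sed 捕获 "Current IP Address: " 之后的数字和点,丢弃其余部分
echo "$html" | sed 's/.*Current IP Address: \([0-9\.]*\).*/\1/g'
```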
### 使用 Wget 代替 Curl ###

    wget http://ipecho.net/plain -O - -q ; echo
    wget http://observebox.com/ip -O - -q ; echo
### 使用 host 和 dig 命令 ###

如果安装了的话,你也可以直接使用 host 和 dig 命令。

    host -t a dartsclink.com | sed 's/.*has address //'
    dig +short myip.opendns.com @resolver1.opendns.com
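
其中的 sed 只是把 host 输出里 “has address ” 之前的部分去掉。可以用一行示例输出(假设的查询结果,真实 IP 以实际查询为准)离线验证:

```shell
# host -t a 的输出大致是这种形式(示例)
host_out='dartsclink.com has address 108.59.8.80'

# 去掉 "has address " 及其之前的内容,只留下 IP
echo "$host_out" | sed 's/.*has address //'
```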
### bash 脚本示例: ###

    #!/bin/bash

    PUBLIC_IP=`wget http://ipecho.net/plain -O - -q ; echo`
    echo $PUBLIC_IP
简单易用。

我实际上是在写一个脚本,用于每天记录我的路由器的所有 IP 变化并保存到一个文件。我在搜索过程中找到了这些很好用的命令。希望某天它能帮到其他人。
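
顺着这个思路,下面是一个只在 IP 发生变化时才追加记录的函数草稿(函数名、日志路径均为假设;为了便于离线演示,这里直接传入示例 IP,实际使用时把参数换成上文任一命令的输出即可):

```shell
logfile=/tmp/ip_changes_demo.log
rm -f "$logfile"

# 仅当 IP 与日志中最后一条记录不同时才追加一行 "日期 IP"
log_ip_change() {
    ip="$1"
    last=$(tail -n 1 "$logfile" 2>/dev/null | awk '{print $2}')
    if [ "$ip" != "$last" ]; then
        echo "$(date +%F) $ip" >> "$logfile"
    fi
}

log_ip_change 203.0.113.10   # 第一次:记录
log_ip_change 203.0.113.10   # IP 未变:跳过
log_ip_change 203.0.113.99   # IP 变了:记录
```

执行后日志中只会有两条记录。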
--------------------------------------------------------------------------------

via: http://www.blackmoreops.com/2015/06/14/how-to-get-public-ip-from-linux-terminal/

译者:[KevinSJ](https://github.com/KevinSJ)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
published/20150817 Top 5 Torrent Clients For Ubuntu Linux.md
Ubuntu 下五个最好的 BT 客户端
================================================================================
![Best Torrent clients for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/5_Best_Torrent_Ubuntu.png)

在寻找 **Ubuntu 中最好的 BT 客户端**吗?事实上,Linux 桌面平台中有许多 BT 客户端,但是它们中的哪些才是**最好的 Ubuntu 客户端**呢?

我将会列出 Linux 上最好的五个 BT 客户端,它们都拥有着体积轻盈、功能强大的特点,而且还有令人印象深刻的用户界面。自然,易于安装和使用也是特性之一。

### Ubuntu 下最好的 BT 客户端 ###

考虑到 Ubuntu 默认安装了 Transmission,所以我将会从这个列表中排除 Transmission。但是这并不意味着 Transmission 没有资格出现在这个列表中,事实上,Transmission 是一个非常好的 BT 客户端,这也正是它被包括 Ubuntu 在内的多个发行版默认安装的原因。

### Deluge ###
![Logo of Deluge torrent client for Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Deluge.png)

[Deluge][1] 被 Lifehacker 评选为 Linux 下最好的 BT 客户端,这说明了 Deluge 是多么的有用。而且,并不仅仅只有 Lifehacker 是 Deluge 的粉丝,纵观多个论坛,你都会发现不少 Deluge 的忠实拥趸。

快速、时尚而直观的界面使得 Deluge 成为 Linux 用户的挚爱。

Deluge 可在 Ubuntu 的仓库中获取,你能够在 Ubuntu 软件中心中安装它,或者使用下面的命令:

    sudo apt-get install deluge
### qBittorrent ###

![qBittorrent client for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/qbittorrent_icon.png)

正如它的名字所暗示的,[qBittorrent][2] 是著名的 [Bittorrent][3] 应用的 Qt 版本。如果曾经使用过它,你将会看到和 Windows 下的 Bittorrent 相似的界面。同样轻巧并且有着 BT 客户端的所有标准功能,qBittorrent 也可以在 Ubuntu 的默认仓库中找到。

它可以通过 Ubuntu 软件仓库安装,或者使用下面的命令:

    sudo apt-get install qbittorrent
### Tixati ###

![Tixati torrent client logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/tixati_icon.png)

[Tixati][4] 是另一个不错的 Ubuntu 下的 BT 客户端。它有着一个默认的黑暗主题,尽管很多人喜欢,但是我是个例外。它拥有着一切你能在 BT 客户端中找到的功能。

除此之外,它还有着数据分析的额外功能。你可以在美观的图表中分析流量以及其它数据。

- [下载 Tixati][5]
### Vuze ###

![Vuze Torrent Logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/vuze_icon_for_mac_os_x_by_hamzasaleem-d6yx1fp.png)

[Vuze][6] 是许多 Linux 以及 Windows 用户最喜欢的 BT 客户端。除了标准的功能,你可以直接在应用程序中搜索种子,也可以订阅系列片源,这样就无需再去寻找新的片源了,因为你可以在侧边栏中的订阅看到它们。

它还配备了一个视频播放器,可以播放带有字幕的高清视频等等。但是我不认为你会用它来代替那些更好的视频播放器,比如 VLC。

Vuze 可以通过 Ubuntu 软件中心安装或者使用下列命令:

    sudo apt-get install vuze
### Frostwire ###

![Logo of Frostwire torrent client](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/frostwire.png)

[Frostwire][7] 是一个你应该试一下的应用。它不仅仅是一个简单的 BT 客户端,它还有安卓版本,你可以用它通过 Wifi 来共享文件。

你可以在应用中搜索种子并且播放它们。除了下载文件,它还可以浏览本地的影音文件,并且将它们有条理地呈现在播放器中。这同样适用于安卓版本。

还有一个特点是:Frostwire 提供了独立音乐人的[合法音乐下载][13]。你可以下载并且欣赏它们,免费而且合法。

- [下载 Frostwire][8]
### 荣誉奖 ###

在 Windows 中,uTorrent(发音:mu torrent)是我最喜欢的 BT 应用。尽管 uTorrent 可以在 Linux 下运行,但是我还是特意忽略了它。因为在 Linux 下使用 uTorrent 不仅困难,而且无法获得完整的应用体验(它运行在浏览器中)。

可以[在这里][9]阅读 Ubuntu 下 uTorrent 的安装教程。

#### 快速提示: ####

大多数情况下,BT 应用不会默认自动启动。如果想改变这一行为,请阅读[如何管理 Ubuntu 下的自启动程序][10]来学习。

### 你最喜欢的是什么? ###

这些是我对于 Ubuntu 下最好的 BT 客户端的意见。你最喜欢的是什么呢?请发表评论。也可以查看与本主题相关的 [Ubuntu 最好的下载管理器][11]。如果使用 Popcorn Time,可以试试 [Popcorn Time 技巧][12]。
--------------------------------------------------------------------------------

via: http://itsfoss.com/best-torrent-ubuntu/

作者:[Abhishek][a]
译者:[Xuanwo](https://github.com/Xuanwo)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://deluge-torrent.org/
[2]:http://www.qbittorrent.org/
[3]:http://www.bittorrent.com/
[4]:http://www.tixati.com/
[5]:http://www.tixati.com/download/
[6]:http://www.vuze.com/
[7]:http://www.frostwire.com/
[8]:http://www.frostwire.com/downloads
[9]:http://sysads.co.uk/2014/05/install-utorrent-3-3-ubuntu-14-04-13-10/
[10]:http://itsfoss.com/manage-startup-applications-ubuntu/
[11]:http://itsfoss.com/4-best-download-managers-for-linux/
[12]:http://itsfoss.com/popcorn-time-tips/
[13]:http://www.frostclick.com/wp/
Linux 中通过命令行监控股票报价
================================================================================

如果你是那些股票投资者或者交易者中的一员,那么监控证券市场将是你的日常工作之一。最有可能的是你会使用一个在线交易平台,这个平台有着一些漂亮的实时图表和全部种类的高级股票分析和交易工具。虽然这种复杂的市场研究工具是任何严肃的证券投资者了解市场的必备工具,但是监控最新的股票报价来构建有利可图的投资组合仍然有很长一段路要走。

如果你是一位长久坐在终端前的全职系统管理员,而证券交易又成了你日常生活中的业余兴趣,那么一个简单地显示实时股票报价的命令行工具会是给你的恩赐。

在本教程中,让我来介绍一个灵巧而简洁的命令行工具,它可以让你在 Linux 上从命令行监控股票报价。

这个工具叫做 [Mop][1]。它是用 Go 编写的一个轻量级命令行工具,可以极其方便地跟踪来自美国市场的最新股票报价。你可以很轻松地自定义要监控的证券列表,它会在一个基于 ncurses 的便于阅读的界面显示最新的股票报价。

**注意**:Mop 是通过雅虎金融 API 获取最新的股票报价的。你必须意识到,他们的股票报价已知会有 15 分钟的延时。所以,如果你正在寻找零延时的“实时”股票报价,那么 Mop 就不是你的菜了。这种“现场”股票报价订阅通常可以通过向一些不公开的私有接口付费获取。了解这些之后,让我们来看看怎样在 Linux 环境下使用 Mop 吧。
### 安装 Mop 到 Linux ###

由于 Mop 是用 Go 实现的,你首先需要安装 Go 语言。如果你还没有安装 Go,请参照[此指南][2]将 Go 安装到你的 Linux 平台中。请确保按指南中所讲的设置 GOPATH 环境变量。

安装完 Go 后,继续像下面这样安装 Mop。

**Debian、Ubuntu 或 Linux Mint**

    $ sudo apt-get install git
    $ go get github.com/michaeldv/mop
    $ cd $GOPATH/src/github.com/michaeldv/mop
    $ make install

**Fedora、CentOS、RHEL**

    $ sudo yum install git
    $ go get github.com/michaeldv/mop
    $ cd $GOPATH/src/github.com/michaeldv/mop
    $ make install

上述命令将安装 Mop 到 $GOPATH/bin。

现在,编辑你的 .bashrc,将 $GOPATH/bin 写到你的 PATH 变量中。

    export PATH="$PATH:$GOPATH/bin"

----------

    $ source ~/.bashrc
### 使用Mop来通过命令行监控股票报价 ###

要启动 Mop,只需运行名为 cmd 的命令(LCTT 译注:这名字实在是……)。

    $ cmd

首次启动,你将看到一些 Mop 预配置的证券行情自动收录器。

![](https://farm6.staticflickr.com/5749/20018949104_c8c64e0e06_c.jpg)

报价显示了像最新价格、交易百分比、每日低/高、52 周低/高、股息以及年收益率等信息。Mop 从 [CNN][3] 获取市场总览信息,从[雅虎金融][4]获得个股报价,股票报价信息会在终端内周期性自动更新。
### 自定义Mop中的股票报价 ###

让我们来试试自定义证券列表吧。对此,Mop 提供了易于记忆的快捷键:‘+’用于添加一只新股,而‘-’则用于移除一只股票。

要添加新股,请按‘+’,然后输入股票代码(如 MSFT)来添加。你可以通过输入一个由逗号分隔的交易代码列表来一次添加多个股票(如 “MSFT, AMZN, TSLA”)。

![](https://farm1.staticflickr.com/636/20648164441_642ae33a22_c.jpg)

从列表中移除股票可以类似地按‘-’来完成。
### 对Mop中的股票报价排序 ###

你可以基于任何栏目对股票报价列表进行排序。要排序,请按‘o’,然后使用左/右键来选择排序的基准栏目。当选定了一个特定栏目后,你可以按回车来对列表进行升序或者降序排序。

![](https://farm1.staticflickr.com/724/20648164481_15631eefcf_c.jpg)

通过按‘g’,你可以根据股票当日的涨或跌来分组。涨的情况以绿色表示,跌的情况以白色表示。

![](https://c2.staticflickr.com/6/5633/20615252696_a5bd44d3aa_b.jpg)

如果你想要访问帮助页,只需要按‘?’。

![](https://farm1.staticflickr.com/573/20632365342_da196b657f_c.jpg)
### 尾声 ###

正如你所见,Mop 是一个轻量级的,然而极其方便的证券监控工具。当然,你可以很轻松地从其它别的什么地方,从在线站点,你的智能手机等等访问到股票报价信息。然而,如果你整天都在使用终端环境,Mop 可以很容易地融入你的工作环境,希望没有让你过多地从你的工作流程中分心。只要让它在你的某个终端中运行并保持市场数据持续更新,那就够了。

交易快乐!

--------------------------------------------------------------------------------

via: http://xmodulo.com/monitor-stock-quotes-command-line-linux.html

作者:[Dan Nanni][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:https://github.com/michaeldv/mop
[2]:http://ask.xmodulo.com/install-go-language-linux.html
[3]:http://money.cnn.com/data/markets/
[4]:http://finance.yahoo.com/
Linux 无极限:IBM 发布 LinuxONE 大型机
================================================================================

![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screenshot-2015-08-17-at-12.58.10.png)

*LinuxONE Emperor 大型机*

Ubuntu 服务器团队今天发布了一条关于 [IBM 发布 LinuxONE][1] 的消息:这是一种只运行 Linux 的大型机,它也可以运行 Ubuntu。

IBM 发布的最大的 LinuxONE 系统称作 ‘Emperor’,它可以扩展到 8000 台虚拟机或者上万台容器,这可能是单台 Linux 系统的最高记录。

LinuxONE 被 IBM 称作‘游戏改变者’,它‘释放了 Linux 的商业潜力’。

IBM 和 Canonical 正在协作为 LinuxONE 和其他 IBM z 系统创建 Ubuntu 发行版。Ubuntu 将加入 Red Hat 和 SUSE 的行列,成为 IBM z 上首屈一指的 Linux 发行版。

与 ‘Emperor’ 一同发布的还有 LinuxONE Rockhopper,一个面向中等规模商业或组织的小一点的大型机。

IBM 是大型机中的领导者,并且占有大型机市场中 90% 的份额。

注:youtube 视频
<iframe width="750" height="422" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/2ABfNrWs-ns?feature=oembed"></iframe>
### 大型机用于什么? ###

你阅读这篇文章所使用的电脑在一个如“大铁块”一样的大型机前会显得很矮小。它们是巨大的、笨重的机柜,里面充满了高端的组件、自行设计的技术和眼花缭乱的大量存储(就是数据存储,没有空间放钢笔和尺子)。

大型机被大型机构和商业用来处理和存储大量数据,通过统计来处理数据和进行大规模的事务处理。

### ‘世界最快的处理器’ ###

IBM 已经与 Canonical Ltd 组成了团队,来在 LinuxONE 和其他 IBM z 系统中使用 Ubuntu。

LinuxONE Emperor 使用 IBM z13 处理器。这个发布于一月的芯片号称是世界上最快的微处理器,可以在几毫秒内响应事务。

由于可以很好地处理高容量的移动事务,搭载 z13 的 LinuxONE 系统也是一个理想的云系统。

它的每个核心可以处理超过 50 个虚拟服务器,总共可以超过 8000 台虚拟服务器,这使它能以更便宜、更环保、更高效的方式扩展到云。

**在阅读这篇文章时你不必是一个 CIO 或者大型机巡查员。LinuxONE 提供的可能性足够清晰。**

来源: [Reuters (h/t @popey)][2]
--------------------------------------------------------------------------------

via: http://www.omgubuntu.co.uk/2015/08/ibm-linuxone-mainframe-ubuntu-partnership

作者:[Joey-Elijah Sneddon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www-03.ibm.com/systems/z/announcement.html
[2]:http://www.reuters.com/article/2015/08/17/us-ibm-linuxone-idUSKCN0QM09P20150817
Ubuntu Linux 来到 IBM 大型机
================================================================================

最终来到了。在 [LinuxCon][1] 上,IBM 和 [Canonical][2] 宣布 [Ubuntu Linux][3] 不久就会运行在 IBM 大型机 [LinuxONE][1] 上,这是一种只支持 Linux 的大型机,现在也可以运行 Ubuntu 了。

这个 IBM 发布的最大的 LinuxONE 系统称作‘Emperor’,它可以扩展到 8000 台虚拟机或者上万台容器,这可能是单独一台 Linux 系统的记录。

LinuxONE 被 IBM 称作‘游戏改变者’,它‘释放了 Linux 的商业潜力’。

![](http://zdnet2.cbsistatic.com/hub/i/2015/08/17/f389e12f-03f5-48cc-8019-af4ccf6c6ecd/f15b099e439c0e3a5fd823637d4bcf87/ubuntu-mainframe.jpg)

*很快你就可以在你的 IBM 大型机上安装 Ubuntu Linux 啦*

根据 IBM z 系统的总经理 Ross Mauri 以及 Canonical 和 Ubuntu 的创立者 Mark Shuttleworth 所言,这是因为客户需要。十多年来,IBM 大型机只支持 [红帽企业版 Linux (RHEL)][4] 和 [SUSE Linux 企业版 (SLES)][5] Linux 发行版。

随着 Ubuntu 越来越成熟,更多的企业把它作为企业级 Linux,也有更多的人希望它能运行在 IBM 大型机上。尤其是银行希望如此。不久,金融 CIO 们就可以满足他们的需求啦。

在一次采访中 Shuttleworth 说,Ubuntu Linux 在 2016 年 4 月的下一个长期支持版 Ubuntu 16.04 中就可以用到大型机上。而在 2014 年底 Canonical 和 IBM 将 [Ubuntu 带到 IBM 的 POWER][6] 架构中就迈出了第一步。

在那之前,Canonical 和 IBM 差点签署了 [在 2011 年实现 Ubuntu 支持 IBM 大型机][7] 的协议,但最终也没有实现。这次,真的发生了。

Canonical 的 CEO Jane Silber 解释说 “[把 Ubuntu 平台支持扩大][8]到 [IBM z 系统][9] 是因为认识到需要 z 系统运行其业务的客户数量以及混合云市场的成熟。”

**Silber 还说:**

> 由于 z 系统的支持,包括 [LinuxONE][10],Canonical 和 IBM 的关系进一步加深,构建了对 POWER 架构的支持和 OpenPOWER 生态系统。正如 Power 系统的客户受益于 Ubuntu 的可扩展能力,我们的敏捷开发过程也使得类似 POWER8 CAPI (Coherent Accelerator Processor Interface,一致性加速器接口)得到了市场支持,z 系统的客户也可以期望技术进步能快速部署,并从 [Juju][11] 和我们的其它云工具中获益,使得能快速向终端用户提供新服务。另外,我们和 IBM 的合作包括实现扩展部署很多 IBM 和 Juju 的软件解决方案。大型机客户对于能通过 Juju 将丰富‘迷人的’ IBM 解决方案、其它软件供应商的产品、开源解决方案部署到大型机上感到高兴。

Shuttleworth 期望 z 系统上的 Ubuntu 能取得巨大成功。它发展很快,由于对 OpenStack 的支持,希望有卓越云性能的人会感到非常高兴。
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.zdnet.com/article/ubuntu-linux-is-coming-to-the-mainframe/
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2015/08/ibm-linuxone-mainframe-ubuntu-partnership
|
||||
|
||||
作者:[Steven J. Vaughan-Nichols][a],[Joey-Elijah Sneddon][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh),[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
|
||||
[1]:http://events.linuxfoundation.org/events/linuxcon-north-america
|
||||
[2]:http://www.canonical.com/
|
||||
[3]:http://www.ubuntu.com/
|
||||
[4]:http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
|
||||
[5]:https://www.suse.com/products/server/
|
||||
[6]:http://www.zdnet.com/article/ibm-doubles-down-on-linux/
|
||||
[7]:http://www.zdnet.com/article/mainframe-ubuntu-linux/
|
||||
[8]:https://insights.ubuntu.com/2015/08/17/ibm-and-canonical-plan-ubuntu-support-on-ibm-z-systems-mainframe/
|
||||
[9]:http://www-03.ibm.com/systems/uk/z/
|
||||
[10]:http://www.zdnet.com/article/linuxone-ibms-new-linux-mainframes/
|
||||
[11]:https://jujucharms.com/
|
@ -0,0 +1,49 @@
|
||||
Linux有问必答:如何检查MariaDB服务端版本
|
||||
================================================================================
|
||||
> **提问**: 我使用的是一台运行MariaDB的VPS。我该如何检查MariaDB服务端的版本?
|
||||
|
||||
有时候你需要知道你的数据库版本,比如当你升级数据库或对已知缺陷打补丁时。这里有几种找出MariaDB版本的方法。
|
||||
|
||||
### 方法一 ###
|
||||
|
||||
第一种找出版本的方法是登录MariaDB服务器,登录之后,你会看到一些MariaDB的版本信息。
|
||||
|
||||
![](https://farm6.staticflickr.com/5807/20669891016_91249d3239_c.jpg)
|
||||
|
||||
另一种方法是在登录MariaDB后出现的命令行中输入‘status’命令。输出会显示服务器的版本还有协议版本。
|
||||
|
||||
![](https://farm6.staticflickr.com/5801/20669891046_73f60e5c81_c.jpg)
|
||||
|
||||
### 方法二 ###
|
||||
|
||||
如果你不能访问MariaDB服务器,那么你就不能用第一种方法。这种情况下你可以根据MariaDB的安装包的版本来推测。这种方法只有在MariaDB是通过包管理器安装的情况下才有用。
|
||||
|
||||
你可以用下面的方法检查MariaDB的安装包。
|
||||
|
||||
#### Debian、Ubuntu或者Linux Mint: ####
|
||||
|
||||
$ dpkg -l | grep mariadb
|
||||
|
||||
下面的输出说明MariaDB的版本是10.0.17。
|
||||
|
||||
![](https://farm1.staticflickr.com/607/20669890966_b611fcd915_c.jpg)
|
||||
|
||||
#### Fedora、CentOS或者 RHEL: ####
|
||||
|
||||
$ rpm -qa | grep mariadb
|
||||
|
||||
下面的输出说明安装的版本是5.5.41。
|
||||
|
||||
![](https://farm1.staticflickr.com/764/20508160748_23d9808256_b.jpg)
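如果想在脚本里直接拿到版本号,还可以把包管理器的输出交给 awk 处理。下面是一个示意性的例子:为了便于演示,这里用一段内嵌的示例文本代替真实的 `dpkg -l` 输出(软件包名和版本号都是假设的),实际使用时把 here-doc 换成 `dpkg -l | grep mariadb` 即可:

```shell
# 示意:从 dpkg -l 风格的输出中提取 mariadb-server 行的版本字段(第 3 列)
cat <<'EOF' | awk '/mariadb-server/ {print $3; exit}'
ii  mariadb-client  10.0.17+maria-1~wheezy  amd64  MariaDB database client binaries
ii  mariadb-server  10.0.17+maria-1~wheezy  amd64  MariaDB database server binaries
EOF
```

同样的思路也适用于 `rpm -qa | grep mariadb` 的输出,只是字段的位置需要相应调整。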
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/check-mariadb-server-version.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
@ -0,0 +1,74 @@
|
||||
如何在 Docker 容器中运行 Kali Linux 2.0
|
||||
================================================================================
|
||||
### 介绍 ###
|
||||
|
||||
Kali Linux 是一个安全测试人员和白帽所熟知的操作系统。它带有大量安全相关的程序,这让它很容易用于渗透测试。最近,[Kali Linux 2.0][1] 发布了,它被认为是这个操作系统最重要的一次发布。另一方面,Docker 技术由于它的可扩展性和易用性让它变得很流行。Docker 让你非常容易地将你的程序带给你的用户。好消息是你可以通过 Docker 运行 Kali Linux 了,让我们看看该怎么做 :)
|
||||
|
||||
### 在 Docker 中运行 Kali Linux 2.0 ###
|
||||
|
||||
**相关提示**
|
||||
|
||||
> 如果你还没有在系统中安装docker,你可以运行下面的命令:
|
||||
|
||||
> **对于 Ubuntu/Linux Mint/Debian:**
|
||||
|
||||
> sudo apt-get install docker
|
||||
|
||||
> **对于 Fedora/RHEL/CentOS:**
|
||||
|
||||
> sudo yum install docker
|
||||
|
||||
> **对于 Fedora 22:**
|
||||
|
||||
> dnf install docker
|
||||
|
||||
> 你可以运行下面的命令来启动docker:
|
||||
|
||||
> sudo service docker start
|
||||
|
||||
首先运行下面的命令确保 Docker 服务运行正常:
|
||||
|
||||
sudo service docker status
|
||||
|
||||
Kali Linux 的开发团队已将 Kali Linux 的 docker 镜像上传了,只需要输入下面的命令来下载镜像。
|
||||
|
||||
docker pull kalilinux/kali-linux-docker
|
||||
|
||||
![Pull Kali Linux docker](http://linuxpitstop.com/wp-content/uploads/2015/08/129.png)
|
||||
|
||||
下载完成后,运行下面的命令来找出你下载的 docker 镜像的 ID。
|
||||
|
||||
docker images
|
||||
|
||||
![Kali Linux Image ID](http://linuxpitstop.com/wp-content/uploads/2015/08/230.png)
|
||||
|
||||
现在运行下面的命令来从镜像文件启动 kali linux docker 容器(这里需用正确的镜像ID替换)。
|
||||
|
||||
docker run -i -t 198cd6df71ab3 /bin/bash
|
||||
|
||||
它会立刻启动容器并且让你登录到该操作系统,你现在可以在 Kali Linux 中工作了。
|
||||
|
||||
![Kali Linux Login](http://linuxpitstop.com/wp-content/uploads/2015/08/328.png)
|
||||
|
||||
你可以在容器外面通过下面的命令来验证容器已经启动/运行中了:
|
||||
|
||||
docker ps
|
||||
|
||||
![Docker Kali](http://linuxpitstop.com/wp-content/uploads/2015/08/421.png)
|
||||
|
||||
### 总结 ###
|
||||
|
||||
Docker 是一种最聪明的用来部署和分发包的方式。Kali Linux docker 镜像非常容易上手,也不会消耗很大的硬盘空间,这样也可以很容易地在任何安装了 docker 的操作系统上测试这个很棒的发行版了。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxpitstop.com/run-kali-linux-2-0-in-docker-container/
|
||||
|
||||
作者:[Aun][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxpitstop.com/author/aun/
|
||||
[1]:https://linux.cn/article-6005-1.html
|
@ -1,24 +1,25 @@
|
||||
使用dd命令在Linux和Unix环境下进行硬盘I/O性能检测
|
||||
使用 dd 命令进行硬盘 I/O 性能检测
|
||||
================================================================================
|
||||
如何使用dd命令测试硬盘的性能?如何在linux操作系统下检测硬盘的读写能力?
|
||||
|
||||
如何使用dd命令测试我的硬盘性能?如何在linux操作系统下检测硬盘的读写速度?
|
||||
|
||||
你可以使用以下命令在一个Linux或类Unix操作系统上进行简单的I/O性能测试。
|
||||
|
||||
- **dd命令** :它被用来在Linux和类Unix系统下对硬盘设备进行写性能的检测。
|
||||
- **hdparm命令**:它被用来获取或设置硬盘参数,包括测试读性能以及缓存性能等。
|
||||
- **dd命令** :它被用来在Linux和类Unix系统下对硬盘设备进行写性能的检测。
|
||||
- **hdparm命令**:它用来在基于 Linux 的系统上获取或设置硬盘参数,包括测试读性能以及缓存性能等。
|
||||
|
||||
在这篇指南中,你将会学到如何使用dd命令来测试硬盘性能。
|
||||
|
||||
### 使用dd命令来监控硬盘的读写性能:###
|
||||
|
||||
- 打开shell终端(这里貌似不能翻译为终端提示符)。
|
||||
- 通过ssh登录到远程服务器。
|
||||
- 打开shell终端。
|
||||
- 或者通过ssh登录到远程服务器。
|
||||
- 使用dd命令来测量服务器的吞吐率(写速度) `dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync`
|
||||
- 使用dd命令测量服务器延迟 `dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync`
|
||||
|
||||
#### 理解dd命令的选项 ####
|
||||
|
||||
在这个例子当中,我将使用搭载Ubuntu Linux 14.04 LTS系统的RAID-10(配有SAS SSD的Adaptec 5405Z)服务器阵列来运行。基本语法为:
|
||||
在这个例子当中,我将使用搭载Ubuntu Linux 14.04 LTS系统的RAID-10(配有SAS SSD的Adaptec 5405Z)服务器阵列来运行。基本语法为:
|
||||
|
||||
dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync
|
||||
## GNU dd语法 ##
|
||||
@ -29,18 +30,19 @@
|
||||
输出样例:
|
||||
|
||||
![Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd](http://s0.cyberciti.org/uploads/faq/2015/08/dd-server-test-io-speed-output.jpg)
|
||||
Fig.01: 使用dd命令获取的服务器吞吐率
|
||||
|
||||
*图01: 使用dd命令获取的服务器吞吐率*
|
||||
|
||||
请各位注意在这个实验中,我们写入一个G的数据,可以发现,服务器的吞吐率是135 MB/s,这其中
|
||||
|
||||
- `if=/dev/zero (if=/dev/input.file)` :用来设置dd命令读取的输入文件名。
|
||||
- `of=/tmp/test1.img (of=/path/to/output.file)` :dd命令将input.file写入的输出文件的名字。
|
||||
- `bs=1G (bs=block-size)` :设置dd命令读取的块的大小。例子中为1个G。
|
||||
- `count=1 (count=number-of-blocks)`: dd命令读取的块的个数。
|
||||
- `oflag=dsync (oflag=dsync)` :使用同步I/O。不要省略这个选项。这个选项能够帮助你去除caching的影响,以便呈现给你精准的结果。
|
||||
- `if=/dev/zero` (if=/dev/input.file) :用来设置dd命令读取的输入文件名。
|
||||
- `of=/tmp/test1.img` (of=/path/to/output.file):dd命令将input.file写入的输出文件的名字。
|
||||
- `bs=1G` (bs=block-size) :设置dd命令读取的块的大小。例子中为1个G。
|
||||
- `count=1` (count=number-of-blocks):dd命令读取的块的个数。
|
||||
- `oflag=dsync` (oflag=dsync) :使用同步I/O。不要省略这个选项。这个选项能够帮助你去除caching的影响,以便呈现给你精准的结果。
|
||||
- `conv=fdatasync`: 这个选项和`oflag=dsync`含义一样。
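把上面这些选项组合起来,就可以做一次快速演示。下面的例子把数据量缩小到 16MB(16 个 1MB 的块),写到 /tmp 下的一个临时文件里(文件名是随意假设的,测完即删),以免像正文那样一次写入 1GB:

```shell
# 按正文的方式做一次缩小版的 dsync 写测试
# dd 会在 stderr 上打印复制的字节数、耗时和吞吐率
dd if=/dev/zero of=/tmp/dd_demo.img bs=1M count=16 oflag=dsync
# 测完删除临时文件
rm -f /tmp/dd_demo.img
```

由于数据量小,得到的吞吐率只能作粗略参考;正式测量时仍应按正文使用较大的数据量。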
|
||||
|
||||
在这个例子中,一共写了1000次,每次写入512字节来获得RAID10服务器的延迟时间:
|
||||
在下面这个例子中,一共写了1000次,每次写入512字节来获得RAID10服务器的延迟时间:
|
||||
|
||||
dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync
|
||||
|
||||
@ -50,11 +52,11 @@ Fig.01: 使用dd命令获取的服务器吞吐率
|
||||
1000+0 records out
|
||||
512000 bytes (512 kB) copied, 0.60362 s, 848 kB/s
|
||||
|
||||
请注意服务器的吞吐率以及延迟时间也取决于服务器/应用的加载。所以我推荐你在一个刚刚重启过并且处于峰值时间的服务器上来运行测试,以便得到更加准确的度量。现在你可以在你的所有设备上互相比较这些测试结果了。
|
||||
请注意服务器的吞吐率以及延迟时间也取决于服务器/应用的负载。所以我推荐你在一个刚刚重启过并且处于峰值时间的服务器上来运行测试,以便得到更加准确的度量。现在你可以在你的所有设备上互相比较这些测试结果了。
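如果要在多台设备之间自动汇总比较,可以把 dd 汇总行里的速度字段切出来。下面用一行示例输出演示(实际使用时把 echo 换成你捕获到的 dd 输出即可):

```shell
# dd 的汇总行以“, ”分隔,最后一个字段就是速度
echo '512000 bytes (512 kB) copied, 0.60362 s, 848 kB/s' | awk -F', ' '{print $NF}'
```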
|
||||
|
||||
####为什么服务器的吞吐率和延迟时间都这么差?###
|
||||
### 为什么服务器的吞吐率和延迟时间都这么差? ###
|
||||
|
||||
低的数值并不意味着你在使用差劲的硬件。可能是HARDWARE RAID10的控制器缓存导致的。
|
||||
低的数值并不意味着你在使用差劲的硬件。可能是硬件 RAID10的控制器缓存导致的。
|
||||
|
||||
使用hdparm命令来查看硬盘缓存的读速度。
|
||||
|
||||
@ -79,11 +81,12 @@ Fig.01: 使用dd命令获取的服务器吞吐率
|
||||
输出样例:
|
||||
|
||||
![Fig.02: Linux hdparm command to test reading and caching disk performance](http://s0.cyberciti.org/uploads/faq/2015/08/hdparam-output.jpg)
|
||||
Fig.02: 检测硬盘读入以及缓存性能的Linux hdparm命令
|
||||
|
||||
请再一次注意由于文件文件操作的缓存属性,你将总是会看到很高的读速度。
|
||||
*图02: 检测硬盘读入以及缓存性能的Linux hdparm命令*
|
||||
|
||||
**使用dd命令来测试读入速度**
|
||||
请再次注意,由于文件操作的缓存属性,你将总是会看到很高的读速度。
|
||||
|
||||
### 使用dd命令来测试读取速度 ###
|
||||
|
||||
为了获得精确的读测试数据,首先在测试前运行下列命令,来将缓存设置为无效:
|
||||
|
||||
@ -91,11 +94,11 @@ Fig.02: 检测硬盘读入以及缓存性能的Linux hdparm命令
|
||||
echo 3 | sudo tee /proc/sys/vm/drop_caches
|
||||
time dd if=/path/to/bigfile of=/dev/null bs=8k
|
||||
|
||||
**笔记本上的示例**
|
||||
#### 笔记本上的示例 ####
|
||||
|
||||
运行下列命令:
|
||||
|
||||
### Cache存在的Debian系统笔记本吞吐率###
|
||||
### 带有Cache的Debian系统笔记本吞吐率###
|
||||
dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct
|
||||
|
||||
###使cache失效###
|
||||
@ -104,10 +107,11 @@ Fig.02: 检测硬盘读入以及缓存性能的Linux hdparm命令
|
||||
###没有Cache的Debian系统笔记本吞吐率###
|
||||
dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct
|
||||
|
||||
**苹果OS X Unix(Macbook pro)的例子**
|
||||
#### 苹果 OS X Unix(Macbook pro)的例子 ####
|
||||
|
||||
GNU dd has many more options but OS X/BSD and Unix-like dd command need to run as follows to test real disk I/O and not memory add sync option as follows:
|
||||
GNU dd命令有其他许多选项但是在 OS X/BSD 以及类Unix中, dd命令需要像下面那样执行来检测去除掉内存地址同步的硬盘真实I/O性能:
|
||||
|
||||
GNU dd命令有其他许多选项,但是在 OS X/BSD 以及类Unix中, dd命令需要像下面那样执行来检测去除掉内存地址同步的硬盘真实I/O性能:
|
||||
|
||||
## 运行这个命令 2-3 次来获得更好的结果 ##
|
||||
time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync"
|
||||
@ -124,26 +128,29 @@ GNU dd命令有其他许多选项但是在 OS X/BSD 以及类Unix中, dd命令
|
||||
|
||||
本人Macbook Pro的写速度是635346520字节(635.347MB/s)。
|
||||
|
||||
**不喜欢用命令行?^_^**
|
||||
### 不喜欢用命令行?\^_^ ###
|
||||
|
||||
你可以在Linux或基于Unix的系统上使用disk utility(gnome-disk-utility)这款工具来得到同样的信息。下面的那个图就是在我的Fedora Linux v22 VM上截取的。
|
||||
|
||||
**图形化方法**
|
||||
#### 图形化方法 ####
|
||||
|
||||
点击“Activities”或者“Super”按键来在桌面和 Activities 视图间切换。然后输入“Disks”。
|
||||
|
||||
![Fig.03: Start the Gnome disk utility](http://s0.cyberciti.org/uploads/faq/2015/08/disk-1.jpg)
|
||||
Fig.03: 打开Gnome硬盘工具
|
||||
|
||||
*图03: 打开Gnome硬盘工具*
|
||||
|
||||
在左边的面板上选择你的硬盘,点击configure按钮,然后点击“Benchmark partition”:
|
||||
|
||||
![Fig.04: Benchmark disk/partition](http://s0.cyberciti.org/uploads/faq/2015/08/disks-2.jpg)
|
||||
Fig.04: 评测硬盘/分区
|
||||
|
||||
最后,点击“Start Benchmark...”按钮(你可能被要求输入管理员用户名和密码):
|
||||
*图04: 评测硬盘/分区*
|
||||
|
||||
最后,点击“Start Benchmark...”按钮(你可能需要输入管理员用户名和密码):
|
||||
|
||||
![Fig.05: Final benchmark result](http://s0.cyberciti.org/uploads/faq/2015/08/disks-3.jpg)
|
||||
Fig.05: 最终的评测结果
|
||||
|
||||
*图05: 最终的评测结果*
|
||||
|
||||
如果你要问,我推荐使用哪种命令和方法?
|
||||
|
||||
@ -158,7 +165,7 @@ via: http://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd
|
||||
|
||||
作者:Vivek Gite
|
||||
译者:[DongShuaike](https://github.com/DongShuaike)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,156 @@
|
||||
在 Linux 下使用 RAID(一):介绍 RAID 的级别和概念
|
||||
================================================================================
|
||||
|
||||
RAID 的意思是廉价磁盘冗余阵列(Redundant Array of Inexpensive Disks),但现在它被称为独立磁盘冗余阵列(Redundant Array of Independent Drives)。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。RAID 是一系列放在一起,成为一个逻辑卷的磁盘集合。
|
||||
|
||||
![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg)
|
||||
|
||||
*在 Linux 中理解 RAID 设置*
|
||||
|
||||
RAID 就是一组磁盘构成的集合,或者说一个阵列:把多个驱动器组合起来,就构成了 RAID 阵列或 RAID 集。一个阵列至少需要两个连接到 RAID 控制器的磁盘,从而组成一个逻辑卷,也可以把更多驱动器加入同一个组中。一组磁盘只能使用一种 RAID 级别。使用 RAID 可以提高服务器的性能,不同的 RAID 级别性能会有所不同。它通过容错和高可用性来保护我们的数据。
|
||||
|
||||
这个系列被命名为“在 Linux 下使用 RAID”,分为9个部分,包括以下主题:
|
||||
|
||||
- 第1部分:介绍 RAID 的级别和概念
|
||||
- 第2部分:在Linux中如何设置 RAID0(条带化)
|
||||
- 第3部分:在Linux中如何设置 RAID1(镜像化)
|
||||
- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验)
|
||||
- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验)
|
||||
- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套)
|
||||
- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘
|
||||
- 第8部分:在 RAID 中恢复(重建)损坏的驱动器
|
||||
- 第9部分:在 Linux 中管理 RAID
|
||||
|
||||
这是9篇系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。
|
||||
|
||||
### 软件 RAID 和硬件 RAID ###
|
||||
|
||||
软件 RAID 的性能较低,因为它使用主机的资源。读取软件 RAID 卷中的数据需要先加载 RAID 软件,而加载 RAID 软件之前,操作系统必须先引导起来。软件 RAID 无需额外的物理硬件,是零成本的投入。
|
||||
|
||||
硬件 RAID 的性能较高。它以 PCI Express 卡的形式提供专用的 RAID 控制器,不占用主机资源,并带有用于读写缓存的 NVRAM。在 RAID 重建时,即使出现电源故障,它也会用后备电池保住缓存中的数据。不过对于大规模使用来说,这是一笔非常昂贵的投资。
|
||||
|
||||
硬件 RAID 卡如下所示:
|
||||
|
||||
![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg)
|
||||
|
||||
*硬件 RAID*
|
||||
|
||||
#### 重要的 RAID 概念 ####
|
||||
|
||||
- **校验**方式用在 RAID 重建中从校验所保存的信息中重新生成丢失的内容。 RAID 5,RAID 6 基于校验。
|
||||
- **条带化**是将切片数据随机存储到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用2个磁盘,则每个磁盘存储我们的一半数据。
|
||||
- **镜像**被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1 中,它会保存相同的内容到其他盘上。
|
||||
- **热备份**只是我们的服务器上的一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动用于重建 RAID。
|
||||
- **块**是 RAID 控制器每次读写数据时的最小单位,最小 4KB。通过定义块大小,我们可以增加 I/O 性能。
|
||||
|
||||
RAID有不同的级别。在这里,我们仅列出在真实环境下的使用最多的 RAID 级别。
|
||||
|
||||
- RAID0 = 条带化
|
||||
- RAID1 = 镜像
|
||||
- RAID5 = 单磁盘分布式奇偶校验
|
||||
- RAID6 = 双磁盘分布式奇偶校验
|
||||
- RAID10 = 镜像 + 条带。(嵌套RAID)
|
||||
|
||||
RAID 在大多数 Linux 发行版上使用名为 mdadm 的软件包进行管理。让我们先对每个 RAID 级别认识一下。
|
||||
|
||||
#### RAID 0 / 条带化 ####
|
||||
|
||||
![](https://upload.wikimedia.org/wikipedia/commons/thumb/9/9b/RAID_0.svg/150px-RAID_0.svg.png)
|
||||
|
||||
条带化有很好的性能。在 RAID 0(条带化)中数据将使用切片的方式被写入到磁盘。一半的内容放在一个磁盘上,另一半内容将被写入到另一个磁盘。
|
||||
|
||||
假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。(LCTT 译注:实际上不可能按字节切片,是按数据块切片的。)
|
||||
|
||||
在这种情况下,如果驱动器中的任何一个发生故障,我们就会丢失数据,因为一个盘中只有一半的数据,不能用于重建 RAID。不过,当比较写入速度和性能时,RAID 0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。
|
||||
|
||||
- 高性能。
|
||||
- RAID 0 中容量零损失。
|
||||
- 零容错。
|
||||
- 写和读有很高的性能。
|
||||
|
||||
#### RAID 1 / 镜像化 ####
|
||||
|
||||
![](https://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/RAID_1.svg/150px-RAID_1.svg.png)
|
||||
|
||||
镜像也有不错的性能。镜像可以对我们的数据做一份相同的副本。假设我们有两个2TB的硬盘驱动器,我们总共有4TB,但在镜像中,放在 RAID 控制器后面的驱动器形成了一个逻辑驱动器,我们只能看到这个逻辑驱动器有2TB。
|
||||
|
||||
当我们保存数据时,它将同时写入这两个2TB驱动器中。创建 RAID 1(镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以通过更换一个新的磁盘恢复 RAID 。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为另外的磁盘中也有相同的数据。所以是零数据丢失。
|
||||
|
||||
- 良好的性能。
|
||||
- 总容量丢失一半可用空间。
|
||||
- 完全容错。
|
||||
- 重建会更快。
|
||||
- 写性能变慢。
|
||||
- 读性能变好。
|
||||
- 能用于操作系统和小规模的数据库。
|
||||
|
||||
#### RAID 5 / 分布式奇偶校验 ####
|
||||
|
||||
![](https://upload.wikimedia.org/wikipedia/commons/thumb/6/64/RAID_5.svg/300px-RAID_5.svg.png)
|
||||
|
||||
RAID 5 多用于企业级。RAID 5 以分布式奇偶校验的方式工作。奇偶校验信息将被用于重建数据:它根据剩下的正常驱动器上的信息来重建。在驱动器发生故障时,这可以保护我们的数据。
|
||||
|
||||
假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 1TB 的驱动器。奇偶校验信息将被存储在每个驱动器的256G中,而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。
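正文里 256GB/768GB 这两个数字可以自己验证一下(这里以 1TB = 1024GB 计,只是示意性的算术):

```shell
disks=4; size_gb=1024                # 4 块 1TB(1024GB)磁盘组成 RAID 5
parity=$(( size_gb / disks ))        # 1 块盘的校验空间分摊到每块盘上
data=$(( size_gb - parity ))         # 每块盘上留给数据的空间
echo "每盘校验空间: ${parity} GB"
echo "每盘数据空间: ${data} GB"
```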
|
||||
|
||||
- 性能卓越
|
||||
- 读速度将非常好。
|
||||
- 写速度处于平均水准,如果我们不使用硬件 RAID 控制器,写速度缓慢。
|
||||
- 从所有驱动器的奇偶校验信息中重建。
|
||||
- 完全容错。
|
||||
- 1个磁盘空间将用于奇偶校验。
|
||||
- 可以被用在文件服务器,Web服务器,非常重要的备份中。
|
||||
|
||||
#### RAID 6 / 双分布式奇偶校验磁盘 ####
|
||||
|
||||
![](https://upload.wikimedia.org/wikipedia/commons/thumb/7/70/RAID_6.svg/300px-RAID_6.svg.png)
|
||||
|
||||
RAID 6 和 RAID 5 相似,但它有两份分布式奇偶校验。它大多用在磁盘数量较多的大型阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以在更换新的驱动器后重建数据。
|
||||
|
||||
它比 RAID 5 慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度就处于平均水准。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。
|
||||
|
||||
- 性能不佳。
|
||||
- 读的性能很好。
|
||||
- 如果我们不使用硬件 RAID 控制器写的性能会很差。
|
||||
- 从两个奇偶校验驱动器上重建。
|
||||
- 完全容错。
|
||||
- 2个磁盘空间将用于奇偶校验。
|
||||
- 可用于大型阵列。
|
||||
- 用于备份和视频流中,用于大规模。
|
||||
|
||||
#### RAID 10 / 镜像+条带 ####
|
||||
|
||||
![](https://upload.wikimedia.org/wikipedia/commons/thumb/e/e6/RAID_10_01.svg/300px-RAID_10_01.svg.png)
|
||||
|
||||
![](https://upload.wikimedia.org/wikipedia/commons/thumb/a/ad/RAID_01.svg/300px-RAID_01.svg.png)
|
||||
|
||||
RAID 10 可以被称为 1+0 或 0+1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 上首先做条带,然后做镜像。RAID 10 比 01 好。
|
||||
|
||||
假设,我们有4个驱动器。当我在逻辑卷上写数据时,它会使用镜像和条带的方式将数据保存到4个驱动器上。
|
||||
|
||||
如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下方式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入另外两个磁盘,所有数据都写入两块磁盘。这样可以将每个数据复制到另外的磁盘。
|
||||
|
||||
同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一组盘,“E”写入第二组盘。再次将“C”写入第一组盘,“M”到第二组盘。
|
||||
|
||||
- 良好的读写性能。
|
||||
- 总容量丢失一半的可用空间。
|
||||
- 容错。
|
||||
- 从副本数据中快速重建。
|
||||
- 由于其高性能和高可用性,常被用于数据库的存储中。
|
||||
|
||||
### 结论 ###
|
||||
|
||||
在这篇文章中,我们已经了解了什么是 RAID 和在实际环境大多采用哪个级别的 RAID。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容可以基本满足你对 RAID 的了解。
|
||||
|
||||
在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/understanding-raid-setup-in-linux/
|
||||
|
||||
作者:[Babin Lonston][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/babinlonston/
|
@ -0,0 +1,219 @@
|
||||
在 Linux 下使用 RAID(二):使用 mdadm 工具创建软件 RAID 0 (条带化)
|
||||
================================================================================
|
||||
|
||||
RAID 即廉价磁盘冗余阵列,其高可用性和可靠性适用于大规模环境中:相比普通使用场景,这些环境中的数据更需要被保护。RAID 是一些磁盘的集合,这些磁盘组成一个阵列,像一个逻辑卷一样使用。多个驱动器可以组合成一个阵列,也称为(磁盘的)集合。
|
||||
|
||||
创建 RAID 最少需要2个连接到 RAID 控制器的磁盘来构成逻辑卷,可以根据定义的 RAID 级别将更多的驱动器添加到阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID,软件 RAID 也叫做穷人 RAID。
|
||||
|
||||
![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg)
|
||||
|
||||
*在 Linux 中创建 RAID0*
|
||||
|
||||
使用 RAID 的主要目的是为了在发生单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。
|
||||
|
||||
#### 在 RAID 0 中条带是什么 ####
|
||||
|
||||
条带化是将数据分割后同时写到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到该逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID 0 逻辑卷的操作系统来提高重要文件的安全性。
|
||||
|
||||
- RAID 0 性能较高。
|
||||
- 在 RAID 0 上,空间零浪费。
|
||||
- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。
|
||||
- 写和读性能都很好。
|
||||
|
||||
#### 要求 ####
|
||||
|
||||
创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,不过数目应该是2,4,6,8等偶数。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。
|
||||
|
||||
在这里,我们没有使用硬件 RAID,此设置只需要软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的功能界面访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问它的界面。
|
||||
|
||||
如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。
|
||||
|
||||
- [介绍 RAID 的级别和概念][1]
|
||||
|
||||
**我的服务器设置**
|
||||
|
||||
操作系统 : CentOS 6.5 Final
|
||||
IP 地址 : 192.168.0.225
|
||||
两块盘 : 每块 20 GB
|
||||
|
||||
这是9篇系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID 0(条带化),以名为 sdb 和 sdc 两个 20GB 的硬盘为例。
|
||||
|
||||
### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ###
|
||||
|
||||
1、 在 Linux 上设置 RAID 0 前,我们先更新一下系统,然后安装`mdadm` 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。
|
||||
|
||||
# yum clean all && yum update
|
||||
# yum install mdadm -y
|
||||
|
||||
![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png)
|
||||
|
||||
*安装 mdadm 工具*
|
||||
|
||||
### 第2步:确认连接了两个 20GB 的硬盘 ###
|
||||
|
||||
2、 在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。
|
||||
|
||||
# ls -l /dev | grep sd
|
||||
|
||||
![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png)
|
||||
|
||||
*检查硬盘*
|
||||
|
||||
3、 一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的`mdadm` 命令来查看。
|
||||
|
||||
# mdadm --examine /dev/sd[b-c]
|
||||
|
||||
![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png)
|
||||
|
||||
*检查 RAID 设备*
|
||||
|
||||
从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。
|
||||
|
||||
### 第3步:创建 RAID 分区 ###
|
||||
|
||||
4、 现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。
|
||||
|
||||
# fdisk /dev/sdb
|
||||
|
||||
请按照以下说明创建分区。
|
||||
|
||||
- 按`n` 创建新的分区。
|
||||
- 然后按`P` 选择主分区。
|
||||
- 接下来选择分区号为1。
|
||||
- 只需按两次回车键选择默认值即可。
|
||||
- 然后,按`P` 来显示创建好的分区。
|
||||
|
||||
![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png)
|
||||
|
||||
*创建分区*
|
||||
|
||||
请按照以下说明将分区创建为 Linux 的 RAID 类型。
|
||||
|
||||
- 按`L`,列出所有可用的类型。
|
||||
- 按`t` 去修改分区。
|
||||
- 键入`fd` 设置为 Linux 的 RAID 类型,然后按回车确认。
|
||||
- 然后再次使用`p`查看我们所做的更改。
|
||||
- 使用`w`保存更改。
|
||||
|
||||
![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png)
|
||||
|
||||
*在 Linux 上创建 RAID 分区*
|
||||
|
||||
**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。
|
||||
|
||||
5、 创建分区后,验证这两个驱动器是否正确定义 RAID,使用下面的命令。
|
||||
|
||||
# mdadm --examine /dev/sd[b-c]
|
||||
# mdadm --examine /dev/sd[b-c]1
|
||||
|
||||
![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png)
|
||||
|
||||
*验证 RAID 分区*
|
||||
|
||||
### 第4步:创建 RAID md 设备 ###
|
||||
|
||||
6、 现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。
|
||||
|
||||
# mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1
|
||||
# mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1
|
||||
|
||||
- -C – 创建
|
||||
- -l – 级别
|
||||
- -n – RAID 设备数
|
||||
|
||||
7、 一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。
|
||||
|
||||
# cat /proc/mdstat
|
||||
|
||||
![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png)
|
||||
|
||||
*查看 RAID 级别*
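`/proc/mdstat` 的内容也很适合脚本化检查。下面用一段示例文本演示如何提取阵列名和级别(示例中的块数等数字是假设的;实际使用时把 here-doc 换成 `cat /proc/mdstat` 即可):

```shell
# 以 md 开头的行里,第 1 列是设备名,第 4 列是 RAID 级别
cat <<'EOF' | awk '/^md/ {print $1, $4}'
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      41910272 blocks super 1.2 512k chunks
EOF
```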
|
||||
|
||||
# mdadm -E /dev/sd[b-c]1
|
||||
|
||||
![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png)
|
||||
|
||||
*查看 RAID 设备*
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png)
|
||||
|
||||
*查看 RAID 阵列*
|
||||
|
||||
### 第5步:给 RAID 设备创建文件系统 ###
|
||||
|
||||
8、 将 RAID 设备 /dev/md0 创建为 ext4 文件系统,并挂载到 /mnt/raid0 下。
|
||||
|
||||
# mkfs.ext4 /dev/md0
|
||||
|
||||
![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png)
|
||||
|
||||
*创建 ext4 文件系统*
|
||||
|
||||
9、 在 RAID 设备上创建好 ext4 文件系统后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在它下。
|
||||
|
||||
# mkdir /mnt/raid0
|
||||
# mount /dev/md0 /mnt/raid0/
|
||||
|
||||
10、下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。
|
||||
|
||||
# df -h
|
||||
|
||||
11、 接下来,在挂载点 /mnt/raid0 下创建一个名为`tecmint.txt` 的文件,为创建的文件添加一些内容,并查看文件和目录的内容。
|
||||
|
||||
# touch /mnt/raid0/tecmint.txt
|
||||
# echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt
|
||||
# cat /mnt/raid0/tecmint.txt
|
||||
# ls -l /mnt/raid0/
|
||||
|
||||
![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png)
|
||||
|
||||
*验证挂载的设备*
|
||||
|
||||
12、 当你验证挂载点后,就可以将它添加到 /etc/fstab 文件中。
|
||||
|
||||
# vim /etc/fstab
|
||||
|
||||
添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。
|
||||
|
||||
/dev/md0 /mnt/raid0 ext4 defaults 0 0
|
||||
|
||||
![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png)
|
||||
|
||||
*添加设备到 fstab 文件中*
|
||||
|
||||
13、 使用 mount 命令的 `-a` 来检查 fstab 的条目是否有误。
|
||||
|
||||
# mount -av
|
||||
|
||||
![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png)
|
||||
|
||||
*检查 fstab 文件是否有误*
|
||||
|
||||
### 第6步:保存 RAID 配置 ###
|
||||
|
||||
14、 最后,保存 RAID 配置到一个文件中,以供将来使用。我们再次使用带有`-s` (scan) 和`-v` (verbose) 选项的 `mdadm` 命令,如图所示。
|
||||
|
||||
# mdadm -E -s -v >> /etc/mdadm.conf
|
||||
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
|
||||
# cat /etc/mdadm.conf
|
||||
|
||||
![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png)
|
||||
|
||||
*保存 RAID 配置*
|
||||
|
||||
就这样,我们在这里看到,如何通过使用两个硬盘配置具有条带化的 RAID 0 。在接下来的文章中,我们将看到如何设置 RAID 1。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/create-raid0-in-linux/
|
||||
|
||||
作者:[Babin Lonston][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/babinlonston/
|
||||
[1]:https://linux.cn/article-6085-1.html
|
@ -1,83 +1,82 @@
|
||||
在 Linux 中使用"两个磁盘"创建 RAID 1(镜像) - 第3部分
|
||||
在 Linux 下使用 RAID(三):用两块磁盘创建 RAID 1(镜像)
|
||||
================================================================================
|
||||
RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁盘中。创建 RAID1 至少需要两个磁盘,它的读取性能或者可靠性比数据存储容量更好。
|
||||
|
||||
**RAID 镜像**意味着相同数据的完整克隆(或镜像),分别写入到两个磁盘中。创建 RAID 1 至少需要两个磁盘,而且仅用于读取性能或者可靠性要比数据存储容量更重要的场合。
|
||||
|
||||
|
||||
![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg)
|
||||
|
||||
在 Linux 中设置 RAID1
|
||||
*在 Linux 中设置 RAID 1*
|
||||
|
||||
创建镜像是为了防止因硬盘故障导致数据丢失。镜像中的每个磁盘包含数据的完整副本。当一个磁盘发生故障时,相同的数据可以从其它正常磁盘中读取。而后,可以从正在运行的计算机中直接更换发生故障的磁盘,无需任何中断。
|
||||
|
||||
### RAID 1 的特点 ###
|
||||
|
||||
-镜像具有良好的性能。
|
||||
- 镜像具有良好的性能。
|
||||
|
||||
-磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。
|
||||
- 磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。
|
||||
|
||||
-在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。
|
||||
- 在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。
|
||||
|
||||
-读取数据会比写入性能更好。
|
||||
- 读取性能会比写入性能更好。
|
||||
|
||||
#### 要求 ####
|
||||
|
||||
创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8等偶数。要添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。
|
||||
|
||||
创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8等偶数。为了能够添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。
|
||||
这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的功能界面或使用 Ctrl + I 键来访问它。
|
||||
|
||||
这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的 UI 组件或使用 Ctrl + I 键来访问它。
|
||||
|
||||
需要阅读: [Basic Concepts of RAID in Linux][1]
|
||||
需要阅读: [介绍 RAID 的级别和概念][1]
|
||||
|
||||
#### 在我的服务器安装 ####
|
||||
|
||||
Operating System : CentOS 6.5 Final
|
||||
IP Address : 192.168.0.226
|
||||
Hostname : rd1.tecmintlocal.com
|
||||
Disk 1 [20GB] : /dev/sdb
|
||||
Disk 2 [20GB] : /dev/sdc
|
||||
操作系统 : CentOS 6.5 Final
|
||||
IP 地址 : 192.168.0.226
|
||||
主机名 : rd1.tecmintlocal.com
|
||||
磁盘 1 [20GB] : /dev/sdb
|
||||
磁盘 2 [20GB] : /dev/sdc
|
||||
|
||||
本文将指导你使用 mdadm (创建和管理 RAID 的)一步一步的建立一个软件 RAID 1 或镜像在 Linux 平台上。但同样的做法也适用于其它 Linux 发行版如 RedHat,CentOS,Fedora 等等。
|
||||
本文将指导你在 Linux 平台上使用 mdadm (用于创建和管理 RAID )一步步的建立一个软件 RAID 1 (镜像)。同样的做法也适用于如 RedHat,CentOS,Fedora 等 Linux 发行版。
|
||||
|
||||
### 第1步:安装所需要的并且检查磁盘 ###
|
||||
### 第1步:安装所需软件并且检查磁盘 ###
|
||||
|
||||
1.正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。
|
||||
1、 正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。
|
||||
|
||||
# yum install mdadm [on RedHat systems]
|
||||
# apt-get install mdadm [on Debain systems]
|
||||
# yum install mdadm [在 RedHat 系统]
|
||||
# apt-get install mdadm [在 Debain 系统]
|
||||
|
||||
2. 一旦安装好‘mdadm‘包,我们需要使用下面的命令来检查磁盘是否已经配置好。
|
||||
2、 一旦安装好`mdadm`包,我们需要使用下面的命令来检查磁盘是否已经配置好。
|
||||
|
||||
# mdadm -E /dev/sd[b-c]
|
||||
|
||||
![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png)
|
||||
|
||||
检查 RAID 的磁盘
|
||||
|
||||
*检查 RAID 的磁盘*
|
||||
|
||||
正如你从上面图片看到的,没有检测到任何超级块,这意味着还没有创建RAID。
|
||||
|
||||
### 第2步:为 RAID 创建分区 ###
|
||||
|
||||
3. 正如我提到的,我们最少使用两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID1。我们首先使用‘fdisk‘命令来创建这两个分区并更改其类型为 raid。
|
||||
3、 正如我提到的,我们使用最少的两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID 1。我们首先使用`fdisk`命令来创建这两个分区并更改其类型为 raid。
|
||||
|
||||
# fdisk /dev/sdb
|
||||
|
||||
按照下面的说明
|
||||
|
||||
- 按 ‘n’ 创建新的分区。
|
||||
- 然后按 ‘P’ 选择主分区。
|
||||
- 按 `n` 创建新的分区。
|
||||
- 然后按 `P` 选择主分区。
|
||||
- 接下来选择分区号为1。
|
||||
- 按两次回车键默认将整个容量分配给它。
|
||||
- 然后,按 ‘P’ 来打印创建好的分区。
|
||||
- 按 ‘L’,列出所有可用的类型。
|
||||
- 按 ‘t’ 修改分区类型。
|
||||
- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。
|
||||
- 然后再次使用‘p’查看我们所做的更改。
|
||||
- 使用‘w’保存更改。
|
||||
- 然后,按 `P` 来打印创建好的分区。
|
||||
- 按 `L`,列出所有可用的类型。
|
||||
- 按 `t` 修改分区类型。
|
||||
- 键入 `fd` 设置为 Linux 的 RAID 类型,然后按 Enter 确认。
|
||||
- 然后再次使用`p`查看我们所做的更改。
|
||||
- 使用`w`保存更改。
|
||||
|
||||
![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png)
|
||||
|
||||
创建磁盘分区
|
||||
*创建磁盘分区*
|
||||
|
||||
在创建“/dev/sdb”分区后,接下来按照同样的方法创建分区 /dev/sdc 。
|
||||
|
||||
@ -85,59 +84,59 @@ RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁
|
||||
|
||||
![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png)
|
||||
|
||||
创建第二个分区
|
||||
*创建第二个分区*
|
||||
|
||||
4. 一旦这两个分区创建成功后,使用相同的命令来检查 sdb & sdc 分区并确认 RAID 分区的类型如上图所示。
|
||||
4、 一旦这两个分区创建成功后,使用相同的命令来检查 sdb 和 sdc 分区并确认 RAID 分区的类型如上图所示。
|
||||
|
||||
# mdadm -E /dev/sd[b-c]
|
||||
|
||||
![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png)
|
||||
|
||||
验证分区变化
|
||||
*验证分区变化*
|
||||
|
||||
![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png)
|
||||
|
||||
检查 RAID 类型
|
||||
*检查 RAID 类型*
|
||||
|
||||
**注意**: 正如你在上图所看到的,在 sdb1 和 sdc1 中没有任何对 RAID 的定义,这就是我们没有检测到超级块的原因。
|
||||
|
||||
### 步骤3:创建 RAID1 设备 ###
|
||||
### 第3步:创建 RAID 1 设备 ###
|
||||
|
||||
5.接下来使用以下命令来创建一个名为 /dev/md0 的“RAID1”设备并验证它
|
||||
5、 接下来使用以下命令来创建一个名为 /dev/md0 的“RAID 1”设备并验证它
|
||||
|
||||
# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1
|
||||
# cat /proc/mdstat
|
||||
|
||||
![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png)
|
||||
|
||||
创建RAID设备
|
||||
*创建RAID设备*
|
||||
|
||||
6. 接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列
|
||||
6、 接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列
|
||||
|
||||
# mdadm -E /dev/sd[b-c]1
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png)
|
||||
|
||||
检查 RAID 设备类型
|
||||
*检查 RAID 设备类型*
|
||||
|
||||
![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png)
|
||||
|
||||
检查 RAID 设备阵列
|
||||
*检查 RAID 设备阵列*
|
||||
|
||||
从上图中,人们很容易理解,RAID1 已经使用的 /dev/sdb1 和 /dev/sdc1 分区被创建,你也可以看到状态为 resyncing。
|
||||
从上图中,人们很容易理解,RAID 1 已经创建好了,使用了 /dev/sdb1 和 /dev/sdc1 分区,你也可以看到状态为 resyncing(重新同步中)。
|
||||
|
||||
### 第4步:在 RAID 设备上创建文件系统 ###
|
||||
|
||||
7. 使用 ext4 为 md0 创建文件系统并挂载到 /mnt/raid1 .
|
||||
7、 给 md0 上创建 ext4 文件系统
|
||||
|
||||
# mkfs.ext4 /dev/md0
|
||||
|
||||
![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png)
|
||||
|
||||
创建 RAID 设备文件系统
|
||||
*创建 RAID 设备文件系统*
|
||||
|
||||
8. 接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据
|
||||
8、 接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据
|
||||
|
||||
# mkdir /mnt/raid1
|
||||
# mount /dev/md0 /mnt/raid1/
|
||||
@ -146,51 +145,52 @@ RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁
|
||||
|
||||
![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png)
|
||||
|
||||
挂载 RAID 设备
|
||||
*挂载 RAID 设备*
|
||||
|
||||
9.为了在系统重新启动自动挂载 RAID1,需要在 fstab 文件中添加条目。打开“/etc/fstab”文件并添加以下行。
|
||||
9、为了在系统重新启动自动挂载 RAID 1,需要在 fstab 文件中添加条目。打开`/etc/fstab`文件并添加以下行:
|
||||
|
||||
/dev/md0 /mnt/raid1 ext4 defaults 0 0
|
||||
|
||||
![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png)
|
||||
|
||||
自动挂载 Raid 设备
|
||||
*自动挂载 Raid 设备*
|
||||
|
||||
10、 运行`mount -av`,检查 fstab 中的条目是否有错误
|
||||
|
||||
10. 运行“mount -a”,检查 fstab 中的条目是否有错误
|
||||
# mount -av
|
||||
|
||||
![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png)
|
||||
|
||||
检查 fstab 中的错误
|
||||
*检查 fstab 中的错误*
|
||||
|
||||
11. 接下来,使用下面的命令保存 raid 的配置到文件“mdadm.conf”中。
|
||||
11、 接下来,使用下面的命令保存 RAID 的配置到文件“mdadm.conf”中。
|
||||
|
||||
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
|
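注意 `>>` 是追加写入:如果多次执行上面的命令,mdadm.conf 中会出现重复的 ARRAY 行。下面用自拟的示例内容演示一种简单的去重思路(假设性示例,用临时文件代替 /etc/mdadm.conf):

```shell
# 模拟因多次追加而产生重复 ARRAY 行的配置文件
CONF=$(mktemp)
printf 'ARRAY /dev/md0 metadata=1.2 name=rd1:0\n' >> "$CONF"
printf 'ARRAY /dev/md0 metadata=1.2 name=rd1:0\n' >> "$CONF"

# 去重后写回原文件
sort -u "$CONF" -o "$CONF"
wc -l < "$CONF"    # 输出:1
```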
||||
|
||||
![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png)
|
||||
|
||||
保存 Raid 的配置
|
||||
*保存 Raid 的配置*
|
||||
|
||||
上述配置文件在系统重启时会读取并加载 RAID 设备。
|
||||
|
||||
### 第5步:在磁盘故障后检查数据 ###
|
||||
|
||||
12.我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。
|
||||
12、我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png)
|
||||
|
||||
验证 Raid 设备
|
||||
*验证 RAID 设备*
|
||||
|
||||
在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的并且 Active Devices 是2.现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。
|
||||
在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的,并且 Active Devices 是2。现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。
|
||||
|
||||
# ls -l /dev | grep sd
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png)
|
||||
|
||||
测试 RAID 设备
|
||||
*测试 RAID 设备*
|
||||
|
||||
现在,在上面的图片中你可以看到,一个磁盘不见了。我从虚拟机上删除了一个磁盘。此时让我们来检查我们宝贵的数据。
|
||||
|
||||
@ -199,9 +199,9 @@ RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁
|
||||
|
||||
![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png)
|
||||
|
||||
验证 RAID 数据
|
||||
*验证 RAID 数据*
|
||||
|
||||
你有没有看到我们的数据仍然可用。由此,我们可以知道 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。
|
||||
你可以看到我们的数据仍然可用。由此,我们可以了解 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -209,9 +209,9 @@ via: http://www.tecmint.com/create-raid1-in-linux/
|
||||
|
||||
作者:[Babin Lonston][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/babinlonston/
|
||||
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
|
||||
[1]:https://linux.cn/article-6085-1.html
|
@ -1,89 +1,90 @@
|
||||
|
||||
在 Linux 中创建 RAID 5(条带化与分布式奇偶校验) - 第4部分
|
||||
在 Linux 下使用 RAID(四):创建 RAID 5(条带化与分布式奇偶校验)
|
||||
================================================================================
|
||||
在 RAID 5 中,条带化数据跨多个驱磁盘使用分布式奇偶校验。分布式奇偶校验的条带化意味着它将奇偶校验信息和条带中的数据分布在多个磁盘上,它将有很好的数据冗余。
|
||||
|
||||
在 RAID 5 中,数据条带化后存储在分布式奇偶校验的多个磁盘上。分布式奇偶校验的条带化意味着它将奇偶校验信息和条带化数据分布在多个磁盘上,这样会有很好的数据冗余。
|
||||
|
||||
![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg)
|
||||
|
||||
在 Linux 中配置 RAID 5
|
||||
*在 Linux 中配置 RAID 5*
|
||||
|
||||
对于此 RAID 级别它至少应该有三个或更多个磁盘。RAID 5 通常被用于大规模生产环境中花费更多的成本来提供更好的数据冗余性能。
|
||||
对于此 RAID 级别,它至少应该有三个或更多个磁盘。RAID 5 通常被用于大规模生产环境中,以更高的成本换取更好的数据冗余和性能。
|
||||
|
||||
#### 什么是奇偶校验? ####
|
||||
|
||||
奇偶校验是在数据存储中检测错误最简单的一个方法。奇偶校验信息存储在每个磁盘中,比如说,我们有4个磁盘,其中一个磁盘空间被分割去存储所有磁盘的奇偶校验信息。如果任何一个磁盘出现故障,我们可以通过更换故障磁盘后,从奇偶校验信息重建得到原来的数据。
|
||||
奇偶校验是数据存储中最简单常见的错误检测方法。奇偶校验信息存储在每个磁盘中,比如说,我们有4个磁盘,其中相当于一个磁盘大小的空间被分出来存储所有磁盘的奇偶校验信息。如果任何一个磁盘出现故障,我们可以在更换故障磁盘后,从奇偶校验信息重建得到原来的数据。
|
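由于奇偶校验占用相当于一块磁盘的容量,RAID 5 的可用容量是 (磁盘数 - 1) × 单盘容量。用 shell 算术可以快速验证这一点(示例数字为自拟):

```shell
# RAID 5 可用容量 = (磁盘数 - 1) × 单盘容量
disks=4
disk_size_gb=20

usable_gb=$(( (disks - 1) * disk_size_gb ))
echo "${usable_gb}GB"    # 输出:60GB,即 4 块 20GB 磁盘中相当于一块盘的容量用于奇偶校验
```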
||||
|
||||
#### RAID 5 的优点和缺点 ####
|
||||
|
||||
- 提供更好的性能
|
||||
- 提供更好的性能。
|
||||
- 支持冗余和容错。
|
||||
- 支持热备份。
|
||||
- 将失去一个磁盘的容量存储奇偶校验信息。
|
||||
- 将用掉一个磁盘的容量存储奇偶校验信息。
|
||||
- 单个磁盘发生故障后不会丢失数据。我们可以更换故障硬盘后从奇偶校验信息中重建数据。
|
||||
- 事务处理读操作会更快。
|
||||
- 由于奇偶校验占用资源,写操作将是缓慢的。
|
||||
- 适合于面向事务处理的环境,读操作会更快。
|
||||
- 由于奇偶校验占用资源,写操作会慢一些。
|
||||
- 重建需要很长的时间。
|
||||
|
||||
#### 要求 ####
|
||||
|
||||
创建 RAID 5 最少需要3个磁盘,你也可以添加更多的磁盘,前提是你要有多端口的专用硬件 RAID 控制器。在这里,我们使用“mdadm”包来创建软件 RAID。
|
||||
|
||||
mdadm 是一个允许我们在 Linux 下配置和管理 RAID 设备的包。默认情况下 RAID 没有可用的配置文件,我们在创建和配置 RAID 后必须将配置文件保存在一个单独的文件中,例如:mdadm.conf。
|
||||
mdadm 是一个允许我们在 Linux 下配置和管理 RAID 设备的包。默认情况下没有 RAID 的配置文件,我们在创建和配置 RAID 后必须将配置文件保存在一个单独的文件 mdadm.conf 中。
|
||||
|
||||
在进一步学习之前,我建议你通过下面的文章去了解 Linux 中 RAID 的基础知识。
|
||||
|
||||
- [Basic Concepts of RAID in Linux – Part 1][1]
|
||||
- [Creating RAID 0 (Stripe) in Linux – Part 2][2]
|
||||
- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3]
|
||||
- [介绍 RAID 的级别和概念][1]
|
||||
- [使用 mdadm 工具创建软件 RAID 0 (条带化)][2]
|
||||
- [用两块磁盘创建 RAID 1(镜像)][3]
|
||||
|
||||
#### 我的服务器设置 ####
|
||||
|
||||
Operating System : CentOS 6.5 Final
|
||||
IP Address : 192.168.0.227
|
||||
Hostname : rd5.tecmintlocal.com
|
||||
Disk 1 [20GB] : /dev/sdb
|
||||
Disk 2 [20GB] : /dev/sdc
|
||||
Disk 3 [20GB] : /dev/sdd
|
||||
操作系统 : CentOS 6.5 Final
|
||||
IP 地址 : 192.168.0.227
|
||||
主机名 : rd5.tecmintlocal.com
|
||||
磁盘 1 [20GB] : /dev/sdb
|
||||
磁盘 2 [20GB] : /dev/sdc
|
||||
磁盘 3 [20GB] : /dev/sdd
|
||||
|
||||
这篇文章是 RAID 系列9教程的第4部分,在这里我们要建立一个软件 RAID 5(分布式奇偶校验)使用三个20GB(名为/dev/sdb, /dev/sdc 和 /dev/sdd)的磁盘在 Linux 系统或服务器中上。
|
||||
这是9篇系列教程的第4部分,在这里我们要在 Linux 系统或服务器上使用三个20GB(名为/dev/sdb, /dev/sdc 和 /dev/sdd)的磁盘建立带有分布式奇偶校验的软件 RAID 5。
|
||||
|
||||
### 第1步:安装 mdadm 并检验磁盘 ###
|
||||
|
||||
1.正如我们前面所说,我们使用 CentOS 6.5 Final 版本来创建 RAID 设置,但同样的做法也适用于其他 Linux 发行版。
|
||||
1、 正如我们前面所说,我们使用 CentOS 6.5 Final 版本来创建 RAID 设置,但同样的做法也适用于其他 Linux 发行版。
|
||||
|
||||
# lsb_release -a
|
||||
# ifconfig | grep inet
|
||||
|
||||
![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png)
|
||||
|
||||
CentOS 6.5 摘要
|
||||
*CentOS 6.5 摘要*
|
||||
|
||||
2. 如果你按照我们的 RAID 系列去配置的,我们假设你已经安装了“mdadm”包,如果没有,根据你的 Linux 发行版使用下面的命令安装。
|
||||
2、 如果你按照我们的 RAID 系列去配置的,我们假设你已经安装了“mdadm”包,如果没有,根据你的 Linux 发行版使用下面的命令安装。
|
||||
|
||||
# yum install mdadm [on RedHat systems]
|
||||
# apt-get install mdadm [on Debain systems]
|
||||
# yum install mdadm [在 RedHat 系统]
|
||||
# apt-get install mdadm [在 Debain 系统]
|
||||
|
||||
3. “mdadm”包安装后,先使用‘fdisk‘命令列出我们在系统上增加的三个20GB的硬盘。
|
||||
3、 “mdadm”包安装后,先使用`fdisk`命令列出我们在系统上增加的三个20GB的硬盘。
|
||||
|
||||
# fdisk -l | grep sd
|
||||
|
||||
![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png)
|
||||
|
||||
安装 mdadm 工具
|
||||
*安装 mdadm 工具*
|
||||
|
||||
4. 现在该检查这三个磁盘是否存在 RAID 块,使用下面的命令来检查。
|
||||
4、 现在该检查这三个磁盘是否存在 RAID 块,使用下面的命令来检查。
|
||||
|
||||
# mdadm -E /dev/sd[b-d]
|
||||
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd
|
||||
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd # 或
|
||||
|
||||
![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png)
|
||||
|
||||
检查 Raid 磁盘
|
||||
*检查 Raid 磁盘*
|
||||
|
||||
**注意**: 上面的图片说明,没有检测到任何超级块。所以,这三个磁盘中没有定义 RAID。让我们现在开始创建一个吧!
|
||||
|
||||
### 第2步:为磁盘创建 RAID 分区 ###
|
||||
|
||||
5. 首先,在创建 RAID 前我们要为磁盘分区(/dev/sdb, /dev/sdc 和 /dev/sdd),在进行下一步之前,先使用‘fdisk’命令进行分区。
|
||||
5、 首先,在创建 RAID 前磁盘(/dev/sdb, /dev/sdc 和 /dev/sdd)必须有分区,因此,在进行下一步之前,先使用`fdisk`命令进行分区。
|
||||
|
||||
# fdisk /dev/sdb
|
||||
# fdisk /dev/sdc
|
||||
@ -93,20 +94,20 @@ CentOS 6.5 摘要
|
||||
|
||||
请按照下面的说明在 /dev/sdb 硬盘上创建分区。
|
||||
|
||||
- 按 ‘n’ 创建新的分区。
|
||||
- 然后按 ‘P’ 选择主分区。选择主分区是因为还没有定义过分区。
|
||||
- 接下来选择分区号为1。默认就是1.
|
||||
- 按 `n` 创建新的分区。
|
||||
- 然后按 `P` 选择主分区。选择主分区是因为还没有定义过分区。
|
||||
- 接下来选择分区号为1。默认就是1。
|
||||
- 这里是选择柱面大小,我们没必要选择指定的大小,因为我们需要为 RAID 使用整个分区,所以只需按两次 Enter 键默认将整个容量分配给它。
|
||||
- 然后,按 ‘P’ 来打印创建好的分区。
|
||||
- 改变分区类型,按 ‘L’可以列出所有可用的类型。
|
||||
- 按 ‘t’ 修改分区类型。
|
||||
- 这里使用‘fd’设置为 RAID 的类型。
|
||||
- 然后再次使用‘p’查看我们所做的更改。
|
||||
- 使用‘w’保存更改。
|
||||
- 然后,按 `P` 来打印创建好的分区。
|
||||
- 改变分区类型,按 `L`可以列出所有可用的类型。
|
||||
- 按 `t` 修改分区类型。
|
||||
- 这里使用`fd`设置为 RAID 的类型。
|
||||
- 然后再次使用`p`查看我们所做的更改。
|
||||
- 使用`w`保存更改。
|
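上面的交互式按键序列也可以用 sfdisk 非交互地完成,便于对多块磁盘重复执行。下面是一个假设性示例(文章原文使用的是交互式 fdisk),这里只演示分区脚本的生成,真正写入磁盘的命令被注释掉了——切勿对含有数据的磁盘随意执行:

```shell
# 生成一个 sfdisk 分区脚本:整盘一个主分区,类型 fd(Linux raid autodetect)
make_raid_part_script() {
  printf 'label: dos\ntype=fd\n'
}

make_raid_part_script                        # 先查看将要应用的脚本内容
# make_raid_part_script | sfdisk /dev/sdb    # 实际写入分区表(需 root,谨慎!)
```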
||||
|
||||
![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png)
|
||||
|
||||
创建 sdb 分区
|
||||
*创建 sdb 分区*
|
||||
|
||||
**注意**: 我们仍要按照上面的步骤来创建 sdc 和 sdd 的分区。
|
||||
|
||||
@ -118,7 +119,7 @@ CentOS 6.5 摘要
|
||||
|
||||
![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png)
|
||||
|
||||
创建 sdc 分区
|
||||
*创建 sdc 分区*
|
||||
|
||||
#### 创建 /dev/sdd 分区 ####
|
||||
|
||||
@ -126,93 +127,87 @@ CentOS 6.5 摘要
|
||||
|
||||
![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png)
|
||||
|
||||
创建 sdd 分区
|
||||
*创建 sdd 分区*
|
||||
|
||||
6. 创建分区后,检查三个磁盘 sdb, sdc, sdd 的变化。
|
||||
6、 创建分区后,检查三个磁盘 sdb, sdc, sdd 的变化。
|
||||
|
||||
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd
|
||||
|
||||
or
|
||||
|
||||
# mdadm -E /dev/sd[b-c]
|
||||
# mdadm -E /dev/sd[b-c] # 或
|
||||
|
||||
![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png)
|
||||
|
||||
检查磁盘变化
|
||||
*检查磁盘变化*
|
||||
|
||||
**注意**: 在上面的图片中,磁盘的类型是 fd。
|
||||
|
||||
7.现在在新创建的分区检查 RAID 块。如果没有检测到超级块,我们就能够继续下一步,创建一个新的 RAID 5 的设置在这些磁盘中。
|
||||
7、 现在在新创建的分区上检查 RAID 块。如果没有检测到超级块,我们就能够继续下一步,在这些磁盘上创建一个新的 RAID 5 配置。
|
||||
|
||||
![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png)
|
||||
|
||||
在分区中检查 Raid
|
||||
*在分区中检查 RAID*
|
||||
|
||||
### 第3步:创建 md 设备 md0 ###
|
||||
|
||||
8. 现在创建一个 RAID 设备“md0”(即 /dev/md0)使用所有新创建的分区(sdb1, sdc1 and sdd1) ,使用以下命令。
|
||||
8、 现在使用所有新创建的分区(sdb1, sdc1 和 sdd1)创建一个 RAID 设备“md0”(即 /dev/md0),使用以下命令。
|
||||
|
||||
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
|
||||
|
||||
or
|
||||
|
||||
# mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1
|
||||
# mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1 # 或
|
||||
|
||||
9. 创建 RAID 设备后,检查并确认 RAID,包括设备和从 mdstat 中输出的 RAID 级别。
|
||||
9、 创建 RAID 设备后,检查并确认 RAID,从 mdstat 的输出中可以看到所包括的设备和 RAID 级别。
|
||||
|
||||
# cat /proc/mdstat
|
||||
|
||||
![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png)
|
||||
|
||||
验证 Raid 设备
|
||||
*验证 Raid 设备*
|
||||
|
||||
如果你想监视当前的创建过程,你可以使用‘watch‘命令,使用 watch ‘cat /proc/mdstat‘,它会在屏幕上显示且每隔1秒刷新一次。
|
||||
如果你想监视当前的创建过程,你可以使用`watch`命令,将 `cat /proc/mdstat` 传递给它,它会在屏幕上显示且每隔1秒刷新一次。
|
||||
|
||||
# watch -n1 cat /proc/mdstat
|
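如果想在脚本里取得重建进度(比如等它完成后再继续下一步),可以从 /proc/mdstat 的文本中提取百分比。下面是一个假设性示例:函数名 get_resync_pct 为自拟,用一段模拟文本演示,实际使用时读取真实的 /proc/mdstat 即可:

```shell
# 从 /proc/mdstat 风格的文本中提取第一个百分比(重建/同步进度)
get_resync_pct() {
  grep -o '[0-9][0-9.]*%' | head -n 1
}

sample='md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      [==>..................]  recovery = 12.6% (2650752/20954112)'

printf '%s\n' "$sample" | get_resync_pct    # 输出:12.6%
```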
||||
|
||||
![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png)
|
||||
|
||||
监控 Raid 5 过程
|
||||
*监控 RAID 5 构建过程*
|
||||
|
||||
![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png)
|
||||
|
||||
Raid 5 过程概要
|
||||
*Raid 5 过程概要*
|
||||
|
||||
10. 创建 RAID 后,使用以下命令验证 RAID 设备
|
||||
10、 创建 RAID 后,使用以下命令验证 RAID 设备
|
||||
|
||||
# mdadm -E /dev/sd[b-d]1
|
||||
|
||||
![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png)
|
||||
|
||||
验证 Raid 级别
|
||||
*验证 Raid 级别*
|
||||
|
||||
**注意**: 因为它显示三个磁盘的信息,上述命令的输出会有点长。
|
||||
|
||||
11. 接下来,验证 RAID 阵列的假设,这包含正在运行 RAID 的设备,并开始重新同步。
|
||||
11、 接下来验证 RAID 阵列,可以看到组成 RAID 的设备正在运行,并且已经开始了重新同步。
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png)
|
||||
|
||||
验证 Raid 阵列
|
||||
*验证 RAID 阵列*
|
||||
|
||||
### 第4步:为 md0 创建文件系统###
|
||||
|
||||
12. 在挂载前为“md0”设备创建 ext4 文件系统。
|
||||
12、 在挂载前为“md0”设备创建 ext4 文件系统。
|
||||
|
||||
# mkfs.ext4 /dev/md0
|
||||
|
||||
![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png)
|
||||
|
||||
创建 md0 文件系统
|
||||
*创建 md0 文件系统*
|
||||
|
||||
13.现在,在‘/mnt‘下创建目录 raid5,然后挂载文件系统到 /mnt/raid5/ 下并检查下挂载点的文件,你会看到 lost+found 目录。
|
||||
13、 现在,在`/mnt`下创建目录 raid5,然后挂载文件系统到 /mnt/raid5/ 下,并检查挂载点下的文件,你会看到 lost+found 目录。
|
||||
|
||||
# mkdir /mnt/raid5
|
||||
# mount /dev/md0 /mnt/raid5/
|
||||
# ls -l /mnt/raid5/
|
||||
|
||||
14. 在挂载点 /mnt/raid5 下创建几个文件,并在其中一个文件中添加一些内容然后去验证。
|
||||
14、 在挂载点 /mnt/raid5 下创建几个文件,并在其中一个文件中添加一些内容然后去验证。
|
||||
|
||||
# touch /mnt/raid5/raid5_tecmint_{1..5}
|
||||
# ls -l /mnt/raid5/
|
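验证数据时,比单纯查看文件内容更可靠的做法是记录校验和,之后(例如模拟磁盘故障并恢复后)可以重新校验。下面是一个假设性示例;为安全起见在临时目录中演示,实际可把 DIR 换成 /mnt/raid5:

```shell
# 为一组文件生成校验和清单,之后可随时用它验证数据是否完好
DIR=$(mktemp -d)
for i in 1 2 3 4 5; do
  echo "tecmint $i" > "$DIR/raid5_tecmint_$i"
done

( cd "$DIR" && sha256sum raid5_tecmint_* > sums )
( cd "$DIR" && sha256sum -c sums )    # 每个文件应显示 OK
```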
||||
@ -222,9 +217,9 @@ Raid 5 过程概要
|
||||
|
||||
![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png)
|
||||
|
||||
挂载 Raid 设备
|
||||
*挂载 RAID 设备*
|
||||
|
||||
15. 我们需要在 fstab 中添加条目,否则系统重启后将不会显示我们的挂载点。然后编辑 fstab 文件添加条目,在文件尾追加以下行,如下图所示。挂载点会根据你环境的不同而不同。
|
||||
15、 我们需要在 fstab 中添加条目,否则系统重启后将不会显示我们的挂载点。编辑 fstab 文件添加条目,在文件尾追加以下行。挂载点会根据你环境的不同而不同。
|
||||
|
||||
# vim /etc/fstab
|
||||
|
||||
@ -232,19 +227,19 @@ Raid 5 过程概要
|
||||
|
||||
![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png)
|
||||
|
||||
自动挂载 Raid 5
|
||||
*自动挂载 RAID 5*
|
||||
|
||||
16. 接下来,运行‘mount -av‘命令检查 fstab 条目中是否有错误。
|
||||
16、 接下来,运行`mount -av`命令检查 fstab 条目中是否有错误。
|
||||
|
||||
# mount -av
|
||||
|
||||
![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png)
|
||||
|
||||
检查 Fstab 错误
|
||||
*检查 Fstab 错误*
|
||||
|
||||
### 第5步:保存 Raid 5 的配置 ###
|
||||
|
||||
17. 在前面章节已经说过,默认情况下 RAID 没有配置文件。我们必须手动保存。如果此步不跟 RAID 设备将不会存在 md0,它将会跟一些其他数子。
|
||||
17、 在前面章节已经说过,默认情况下 RAID 没有配置文件,我们必须手动保存。如果不做这一步,重启后 RAID 设备将不再是 md0,而会是其他的随机数字。
|
||||
|
||||
所以,我们必须要在系统重新启动之前保存配置。如果保存了配置,系统重新启动时它会被加载到内核中,RAID 也将随之加载。
|
||||
|
||||
@ -252,17 +247,17 @@ Raid 5 过程概要
|
||||
|
||||
![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png)
|
||||
|
||||
保存 Raid 5 配置
|
||||
*保存 RAID 5 配置*
|
||||
|
||||
注意:保存配置将保持 RAID 级别的稳定性在 md0 设备中。
|
||||
注意:保存配置将保持 md0 设备的 RAID 级别稳定不变。
|
||||
|
||||
### 第6步:添加备用磁盘 ###
|
||||
|
||||
18.备用磁盘有什么用?它是非常有用的,如果我们有一个备用磁盘,当我们阵列中的任何一个磁盘发生故障后,这个备用磁盘会主动添加并重建进程,并从其他磁盘上同步数据,所以我们可以在这里看到冗余。
|
||||
18、 备用磁盘有什么用?它是非常有用的,如果我们有一个备用磁盘,当我们阵列中的任何一个磁盘发生故障后,这个备用磁盘会进入激活重建过程,并从其他磁盘上同步数据,这样就有了冗余。
|
||||
|
||||
更多关于添加备用磁盘和检查 RAID 5 容错的指令,请阅读下面文章中的第6步和第7步。
|
||||
|
||||
- [Add Spare Drive to Raid 5 Setup][4]
|
||||
- [在 RAID 5 中添加备用磁盘][4]
|
||||
|
||||
### 结论 ###
|
||||
|
||||
@ -274,12 +269,12 @@ via: http://www.tecmint.com/create-raid-5-in-linux/
|
||||
|
||||
作者:[Babin Lonston][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/babinlonston/
|
||||
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
|
||||
[2]:http://www.tecmint.com/create-raid0-in-linux/
|
||||
[3]:http://www.tecmint.com/create-raid1-in-linux/
|
||||
[1]:https://linux.cn/article-6085-1.html
|
||||
[2]:https://linux.cn/article-6087-1.html
|
||||
[3]:https://linux.cn/article-6093-1.html
|
||||
[4]:http://www.tecmint.com/create-raid-6-in-linux/
|
109
published/kde-plasma-5.4.md
Normal file
109
published/kde-plasma-5.4.md
Normal file
@ -0,0 +1,109 @@
|
||||
KDE Plasma 5.4.0 发布,八月特色版
|
||||
=============================
|
||||
|
||||
![Plasma 5.4](https://www.kde.org/announcements/plasma-5.4/plasma-screen-desktop-2-shadow.png)
|
||||
|
||||
2015 年 8 月 25 日,星期二,KDE 发布了 Plasma 5 的一个特色新版本。
|
||||
|
||||
此版本为我们带来了许多非常棒的感受,如优化了对高分辨率的支持,KRunner 自动补全和一些新的 Breeze 漂亮图标。这还为不久以后的技术预览版的 Wayland 桌面奠定了基础。我们还带来了几个新组件,如声音音量部件,显示器校准工具和测试版的用户管理工具。
|
||||
|
||||
###新的音频音量程序
|
||||
|
||||
![The new Audio Volume Applet](https://www.kde.org/announcements/plasma-5.4/plasma-screen-audiovolume-shadows.png)
|
||||
|
||||
新的音频音量程序直接工作于 PulseAudio(Linux 上一个非常流行的音频服务)之上,并且在一个漂亮而简约的界面中提供完整的音量控制和输出设定。
|
||||
|
||||
###替代的应用控制面板启动器
|
||||
|
||||
![The new Dashboard alternative launcher](https://www.kde.org/announcements/plasma-5.4/plasma-screen-dashboard-2-shadow.png)
|
||||
|
||||
Plasma 5.4 在 kdeplasma-addons 软件包中提供了一个全新的全屏应用控制面板,它具有应用菜单的所有功能,还支持缩放和全空间键盘导航。新的启动器可以让你简单快速地查找应用,以及“最近使用的”或“收藏的”文档和联系人。
|
||||
|
||||
###丰富的艺术图标
|
||||
|
||||
![Just some of the new icons in this release](https://kver.files.wordpress.com/2015/07/image10430.png)
|
||||
|
||||
Plasma 5.4 提供了超过 1400 个的新图标,其中不仅包含 KDE 程序的,而且还为 Inkscape, Firefox 和 Libreoffice 提供 Breeze 主题的艺术图标,可以体验到更加一致和本地化的感觉。
|
||||
|
||||
###KRunner 历史记录
|
||||
|
||||
![KRunner](https://www.kde.org/announcements/plasma-5.4/plasma-screen-krunner-shadow.png)
|
||||
|
||||
KRunner 现在可以记住之前的搜索历史并根据历史记录进行自动补全。
|
||||
|
||||
###Network 程序中实用的图形展示
|
||||
|
||||
![Network Graphs](https://www.kde.org/announcements/plasma-5.4/plasma-screen-nm-graph-shadow.png)
|
||||
|
||||
Network 程序现在可以以图形形式显示网络流量了,同时也支持两个新的 VPN 插件:通过 SSH 连接或通过 SSTP 连接。
|
||||
|
||||
###Wayland 技术预览
|
||||
|
||||
随着 Plasma 5.4,Wayland 桌面发布了第一个技术预览版。在使用自由图形驱动(free graphics drivers)的系统上,可以使用 KWin(Plasma 的 Wayland 合成器和 X11 窗口管理器),通过[内核模式设定][1]来运行 Plasma。现在已经支持的功能需求来自于[手机 Plasma 项目][2],更多面向桌面的功能还未完全实现。它现在还不能替代基于 Xorg 的桌面,但你可以轻松地对它进行测试、贡献,以及观看令人激动的视频。有关如何在 Wayland 中使用 Plasma 的介绍请参见 [KWin 维基页][3]。Wayland 将随着我们构建的稳定版本而逐步得到改进。
|
||||
|
||||
###其他的改变和添加
|
||||
|
||||
- 优化对高 DPI 支持
|
||||
- 更少的内存占用
|
||||
- 桌面搜索使用了更快的新后端
|
||||
- 便笺增加拖拉支持和键盘导航
|
||||
- 回收站重新支持拖拉
|
||||
- 系统托盘获得更快的可配置性
|
||||
- 文档重新修订和更新
|
||||
- 优化了窄小面板上的数字时钟的布局
|
||||
- 数字时钟支持 ISO 日期
|
||||
- 切换数字时钟 12/24 格式更简单
|
||||
- 日历显示第几周
|
||||
- 任何项目都可以收藏进应用菜单(Kicker),支持收藏文档和 Telepathy 联系人
|
||||
- Telepathy 联系人收藏可以展示联系人的照片和实时状态
|
||||
- 优化程序与容器间的焦点和激活处理
|
||||
- 文件夹视图中各种小修复:更好的默认尺寸,鼠标交互问题以及文本标签换行
|
||||
- 任务管理器可以更好地呈现启动器的默认应用图标
|
||||
- 可再次通过将程序拖入任务管理器来添加启动器
|
||||
- 可配置中间键点击在任务管理器中的行为:无动作,关闭窗口,启动一个相同的程序的新实例
|
||||
- 任务管理器现在以列排序优先,无论用户是否更倾向于行优先;许多用户更喜欢这样排序是因为它会使更少的任务按钮像窗口一样移来移去
|
||||
- 优化任务管理器的图标和缩放边
|
||||
- 任务管理器中各种小修复:垂直下拉,触摸事件处理现在支持所有系统,组扩展箭头的视觉问题
|
||||
- 提供了可用的 Purpose 框架技术预览版,可以使用 QuickShare Plasmoid,它可以让你更容易地把文件分享到许多 web 服务
|
||||
- 增加了显示器配置工具
|
||||
- 增加的 kwallet-pam 可以在登录时打开 wallet
|
||||
- 用户管理器现在会同步联系人到 KConfig 的设置中,用户账户模块被丢弃了
|
||||
- 应用程序菜单(Kicker)的性能得到改善
|
||||
- 应用程序菜单(Kicker)各种小修复:隐藏/显示程序更加可靠,顶部面板的对齐修复,文件夹视图中 “添加到桌面”更加可靠,在基于 KActivities 的最新模块中有更好的表现
|
||||
- 支持自定义菜单布局 (kmenuedit)和应用程序菜单(Kicker)支持菜单项目分隔
|
||||
- 当在面板中时,改进了文件夹视图,参见 [blog][4]
|
||||
- 将文件夹拖放到桌面容器现在会再次创建一个文件夹视图
|
||||
|
||||
[完整的 Plasma 5.4 变更日志在此](https://www.kde.org/announcements/plasma-5.3.2-5.4.0-changelog.php)
|
||||
|
||||
###Live 镜像
|
||||
|
||||
尝鲜的最简单的方式就是从 U 盘中启动,可以在 KDE 社区 Wiki 中找到 各种 [带有 Plasma 5 的 Live 镜像][5]。
|
||||
|
||||
###下载软件包
|
||||
|
||||
各发行版已经构建了软件包,或者正在构建,wiki 中列出了各发行版的软件包名:[软件包下载维基页][6]。
|
||||
|
||||
###源码下载
|
||||
|
||||
可以直接从源码中安装 Plasma 5。KDE 社区 Wiki 已经介绍了[怎样编译][7]。
|
||||
|
||||
注意,Plasma 5 与 Plasma 4 不兼容,必须先卸载旧版本,或者安装到不同的前缀处。
|
||||
|
||||
|
||||
- [源代码信息页][8]
|
||||
|
||||
---
|
||||
|
||||
via: https://www.kde.org/announcements/plasma-5.4.0.php
|
||||
|
||||
译者:[Locez](http://locez.com) 校对:[wxy](http://github.com/wxy)
|
||||
|
||||
[1]:https://en.wikipedia.org/wiki/Direct_Rendering_Manager
|
||||
[2]:https://dot.kde.org/2015/07/25/plasma-mobile-free-mobile-platform
|
||||
[3]:https://community.kde.org/KWin/Wayland#Start_a_Plasma_session_on_Wayland
|
||||
[4]:https://blogs.kde.org/2015/06/04/folder-view-panel-popups-are-list-views-again
|
||||
[5]:https://community.kde.org/Plasma/LiveImages
|
||||
[6]:https://community.kde.org/Plasma/Packages
|
||||
[7]:http://community.kde.org/Frameworks/Building
|
||||
[8]:https://www.kde.org/info/plasma-5.4.0.php
|
@ -1,52 +0,0 @@
|
||||
Linux Without Limits: IBM Launch LinuxONE Mainframes
|
||||
================================================================================
|
||||
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screenshot-2015-08-17-at-12.58.10.png)
|
||||
|
||||
LinuxONE Emperor Mainframe

Good news for Ubuntu’s server team today as [IBM launch the LinuxONE][1], a Linux-only mainframe that is also able to run Ubuntu.
|
||||
|
||||
The largest of the LinuxONE systems launched by IBM is called ‘Emperor’ and can scale up to 8000 virtual machines or tens of thousands of containers – a possible record for any one single Linux system.
|
||||
|
||||
The LinuxONE is described by IBM as a ‘game changer’ that ‘unleashes the potential of Linux for business’.
|
||||
|
||||
IBM and Canonical are working together on the creation of an Ubuntu distribution for LinuxONE and other IBM z Systems. Ubuntu will join RedHat and SUSE as ‘premier Linux distributions’ on IBM z.
|
||||
|
||||
Alongside the ‘Emperor’ IBM is also offering the LinuxONE Rockhopper, a smaller mainframe for medium-sized businesses and organisations.
|
||||
|
||||
IBM is the market leader in mainframes and commands over 90% of the mainframe market.
|
||||
|
||||
注:youtube 视频
|
||||
<iframe width="750" height="422" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/2ABfNrWs-ns?feature=oembed"></iframe>
|
||||
|
||||
### What Is a Mainframe Computer Used For? ###
|
||||
|
||||
The computer you’re reading this article on would be dwarfed by a ‘big iron’ mainframe. They are large, hulking great cabinets packed full of high-end components, custom designed technology and dizzying amounts of storage (that is data storage, not ample room for pens and rulers).
|
||||
|
||||
Mainframes computers are used by large organizations and businesses to process and store large amounts of data, crunch through statistics, and handle large-scale transaction processing.
|
||||
|
||||
### ‘World’s Fastest Processor’ ###
|
||||
|
||||
IBM has teamed up with Canonical Ltd to use Ubuntu on the LinuxONE and other IBM z Systems.
|
||||
|
||||
The LinuxONE Emperor uses the IBM z13 processor. The chip, announced back in January, is said to be the world’s fastest microprocessor. It is able to deliver transaction response times in the milliseconds.
|
||||
|
||||
But as well as being well equipped to handle for high-volume mobile transactions, the z13 inside the LinuxONE is also an ideal cloud system.
|
||||
|
||||
It can handle more than 50 virtual servers per core for a total of 8000 virtual servers, making it a cheaper, greener and more performant way to scale-out to the cloud.
|
||||
|
||||
**You don’t have to be a CIO or mainframe spotter to appreciate this announcement. The possibilities LinuxONE provides are clear enough. **
|
||||
|
||||
Source: [Reuters (h/t @popey)][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2015/08/ibm-linuxone-mainframe-ubuntu-partnership
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:http://www-03.ibm.com/systems/z/announcement.html
|
||||
[2]:http://www.reuters.com/article/2015/08/17/us-ibm-linuxone-idUSKCN0QM09P20150817
|
@ -1,46 +0,0 @@
|
||||
Ubuntu Linux is coming to IBM mainframes
|
||||
================================================================================
|
||||
SEATTLE -- It's finally happened. At [LinuxCon][1], IBM and [Canonical][2] announced that [Ubuntu Linux][3] will soon be running on IBM mainframes.
|
||||
|
||||
![](http://zdnet2.cbsistatic.com/hub/i/2015/08/17/f389e12f-03f5-48cc-8019-af4ccf6c6ecd/f15b099e439c0e3a5fd823637d4bcf87/ubuntu-mainframe.jpg)
|
||||
|
||||
You'll soon be able to get your IBM mainframe in Ubuntu Linux orange
|
||||
|
||||
According to Ross Mauri, IBM's General Manager of System z, and Mark Shuttleworth, Canonical and Ubuntu's founder, this move came about because of customer demand. For over a decade, [Red Hat Enterprise Linux (RHEL)][4] and [SUSE Linux Enterprise Server (SLES)][5] were the only supported IBM mainframe Linux distributions.
|
||||
|
||||
As Ubuntu matured, more and more businesses turned to it for the enterprise Linux, and more and more of them wanted it on IBM big iron hardware. In particular, banks wanted Ubuntu there. Soon, financial CIOs will have their wish granted.
|
||||
|
||||
In an interview Shuttleworth said that Ubuntu Linux will be available on the mainframe by April 2016 in the next long-term support version of Ubuntu: Ubuntu 16.04. Canonical and IBM already took the first move in this direction in late 2014 by bringing [Ubuntu to IBM's POWER][6] architecture.
|
||||
|
||||
Before that, Canonical and IBM almost signed the dotted line to bring [Ubuntu to IBM mainframes in 2011][7] but that deal was never finalized. This time, it's happening.
|
||||
|
||||
Jane Silber, Canonical's CEO, explained in a statement, "Our [expansion of Ubuntu platform][8] support to [IBM z Systems][9] is a recognition of the number of customers that count on z Systems to run their businesses, and the maturity the hybrid cloud is reaching in the marketplace."
|
||||
|
||||
**Silber continued:**
|
||||
|
||||
> With support of z Systems, including [LinuxONE][10], Canonical is also expanding our relationship with IBM, building on our support for the POWER architecture and OpenPOWER ecosystem. Just as Power Systems clients are now benefiting from the scaleout capabilities of Ubuntu, and our agile development process which results in first to market support of new technologies such as CAPI (Coherent Accelerator Processor Interface) on POWER8, z Systems clients can expect the same rapid rollout of technology advancements, and benefit from [Juju][11] and our other cloud tools to enable faster delivery of new services to end users. In addition, our collaboration with IBM includes the enablement of scale-out deployment of many IBM software solutions with Juju Charms. Mainframe clients will delight in having a wealth of 'charmed' IBM solutions, other software provider products, and open source solutions, deployable on mainframes via Juju.
|
||||
|
||||
Shuttleworth expects Ubuntu on z to be very successful. "It's blazingly fast, and with its support for OpenStack, people who want exceptional cloud region performance will be very happy."
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.zdnet.com/article/ubuntu-linux-is-coming-to-the-mainframe/#ftag=RSSbaffb68
|
||||
|
||||
作者:[Steven J. Vaughan-Nichols][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
|
||||
[1]:http://events.linuxfoundation.org/events/linuxcon-north-america
|
||||
[2]:http://www.canonical.com/
|
||||
[3]:http://www.ubuntu.com/
|
||||
[4]:http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
|
||||
[5]:https://www.suse.com/products/server/
|
||||
[6]:http://www.zdnet.com/article/ibm-doubles-down-on-linux/
|
||||
[7]:http://www.zdnet.com/article/mainframe-ubuntu-linux/
|
||||
[8]:https://insights.ubuntu.com/2015/08/17/ibm-and-canonical-plan-ubuntu-support-on-ibm-z-systems-mainframe/
|
||||
[9]:http://www-03.ibm.com/systems/uk/z/
|
||||
[10]:http://www.zdnet.com/article/linuxone-ibms-new-linux-mainframes/
|
||||
[11]:https://jujucharms.com/
|
@ -0,0 +1,87 @@
|
||||
Plasma 5.4 Is Out And It’s Packed Full Of Features
|
||||
================================================================================
|
||||
KDE has [announced][1] a brand new feature release of Plasma 5 — and it’s a corker.
|
||||
|
||||
![kde network applet graphs](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/kde-network-applet-graphs.jpg)
|
||||
|
||||
Better network details are among the changes
|
||||
|
||||
Plasma 5.4.0 builds on [April’s 5.3.0 milestone][2] in a number of ways, ranging from the inherently technical, Wayland preview session, ahoy, to lavish aesthetic touches, like **1,400 brand new icons**.
|
||||
|
||||
A handful of new components also feature in the release, including a new Plasma Widget for volume control, a monitor calibration tool and an improved user management tool.
|
||||
|
||||
The ‘Kicker’ application menu has been powered up to let you favourite all types of content, not just applications.
|
||||
|
||||
**KRunner now remembers searches** so that it can automatically offer suggestions based on your earlier queries as you type.
|
||||
|
||||
The **network applet displays a graph** to give you a better understanding of your network traffic. It also gains two new VPN plugins for SSH and SSTP connections.
|
||||
|
||||
Minor tweaks to the digital clock see it adapt better in slim panel mode, it gains ISO date support and makes it easier for you to toggle between 12 hour and 24 hour clock. Week numbers have been added to the calendar.
|
||||
|
||||
### Application Dashboard ###
|
||||
|
||||
![plasma 5.4 fullscreen dashboard](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/plasma-fullscreen-dashboard.jpg)
|
||||
|
||||
The new ‘Application Dashboard’ in KDE Plasma 5.4.0
|
||||
|
||||
**A new full screen launcher, called ‘Application Dashboard’**, is also available.
|
||||
|
||||
This full-screen dash offers the same features as the traditional Application Menu but with “sophisticated scaling to screen size and full spatial keyboard navigation”.
|
||||
|
||||
Like the Unity launch, the new Plasma Application Dashboard helps you quickly find applications, sift through files and contacts based on your previous activity.
|
||||
|
||||
### Changes in KDE Plasma 5.4.0 at a glance ###
|
||||
|
||||
- Improved high DPI support
|
||||
- KRunner autocompletion
|
||||
- KRunner search history
|
||||
- Application Dashboard add on
|
||||
- 1,400 New icons
|
||||
- Wayland tech preview
|
||||
|
||||
For a full list of changes in Plasma 5.4 refer to [this changelog][3].
|
||||
|
||||
### Install Plasma 5.4 in Kubuntu 15.04 ###
|
||||
|
||||
![new plasma desktop](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/new-plasma-desktop-.jpg)
|
||||
|
||||
![Kubuntu logo](http://www.omgubuntu.co.uk/wp-content/uploads/2012/02/logo-kubuntu.png)
|
||||
|
||||
To **install Plasma 5.4 in Kubuntu 15.04** you will need to add the KDE Backports PPA to your Software Sources.
|
||||
|
||||
Adding the Kubuntu backports PPA **is not strictly advised** as it may upgrade other parts of the KDE desktop, application suite, developer frameworks or Kubuntu specific config files.
|
||||
|
||||
If you like your desktop being stable, don’t proceed.
|
||||
|
||||
The quickest way to upgrade to Plasma 5.4 once it lands in the Kubuntu Backports PPA is to use the Terminal:
|
||||
|
||||
sudo add-apt-repository ppa:kubuntu-ppa/backports
|
||||
|
||||
sudo apt-get update && sudo apt-get dist-upgrade
|
||||
|
||||
Let the upgrade process complete. Assuming no errors emerge, reboot your computer for changes to take effect.
|
||||
|
||||
If you’re not already using Kubuntu, i.e. you’re using the Unity version of Ubuntu, you should first install the Kubuntu desktop package (you’ll find it in the Ubuntu Software Centre).
|
||||
|
||||
To undo the changes above and downgrade to the most recent version of Plasma available in the Ubuntu archives use the PPA-Purge tool:
|
||||
|
||||
sudo apt-get install ppa-purge
|
||||
|
||||
sudo ppa-purge ppa:kubuntu-ppa/backports
|
||||
|
||||
Let us know how your upgrade/testing goes in the comments below and don’t forget to mention the features you hope to see added to the Plasma 5 desktop next.

--------------------------------------------------------------------------------

via: http://www.omgubuntu.co.uk/2015/08/plasma-5-4-new-features

作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://dot.kde.org/2015/08/25/kde-ships-plasma-540-feature-release-august
[2]:http://www.omgubuntu.co.uk/2015/04/kde-plasma-5-3-released-heres-how-to-upgrade-in-kubuntu-15-04
[3]:https://www.kde.org/announcements/plasma-5.3.2-5.4.0-changelog.php
@ -1,117 +0,0 @@
Top 5 Torrent Clients For Ubuntu Linux
================================================================================

![Best Torrent clients for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/5_Best_Torrent_Ubuntu.png)

Looking for the **best torrent client in Ubuntu**? Indeed there are a number of torrent clients available for desktop Linux. But which ones are the **best Ubuntu torrent clients** among them?

I am going to list the top 5 torrent clients for Linux, which are lightweight, feature rich and have an impressive GUI. Ease of installation and use is also a factor.

### Best torrent programs for Ubuntu ###

Since Ubuntu comes with Transmission by default, I am going to exclude it from the list. This doesn’t mean that Transmission doesn’t deserve to be on the list. Transmission is a good torrent client for Ubuntu, and this is the reason why it is the default torrent application in several Linux distributions, including Ubuntu.
----------

### Deluge ###

![Logo of Deluge torrent client for Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Deluge.png)

[Deluge][1] has been chosen as the best torrent client for Linux by Lifehacker, and that speaks for itself of the usefulness of Deluge. And it’s not just Lifehacker who is a fan of Deluge; check out any forum and you’ll find a number of people admitting that Deluge is their favorite.

Its fast, sleek and intuitive interface makes Deluge a hot favorite among Linux users.

Deluge is available in the Ubuntu repositories and you can install it from the Ubuntu Software Center or by using the command below:

    sudo apt-get install deluge
----------

### qBittorrent ###

![qBittorrent client for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/qbittorrent_icon.png)

As the name suggests, [qBittorrent][2] is the Qt version of the famous [Bittorrent][3] application. You’ll see an interface similar to the Bittorrent client on Windows, if you have ever used it. Lightweight and with all the standard features of a torrent program, qBittorrent is also available in the default Ubuntu repository.

It can be installed from the Ubuntu Software Center or using the command below:

    sudo apt-get install qbittorrent
----------

### Tixati ###

![Tixati torrent client logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/tixati_icon.png)

[Tixati][4] is another nice torrent client for Ubuntu. It has a default dark theme which might be preferred by many, though not by me. It has all the standard features that you can seek in a torrent client.

In addition to that, there are data analysis features. You can measure and analyze bandwidth and other statistics in nice charts.

- [Download Tixati][5]
----------

### Vuze ###

![Vuze Torrent Logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/vuze_icon_for_mac_os_x_by_hamzasaleem-d6yx1fp.png)

[Vuze][6] is the favorite torrent application of a number of Linux as well as Windows users. Apart from the standard features, you can search for torrents directly in the application. You can also subscribe to episodic content so that you won’t have to search for new content, as you can see it in your subscriptions in the sidebar.

It also comes with a video player that can play HD videos with subtitles and more. But I don’t think you would like to use it over better video players such as VLC.

Vuze can be installed from the Ubuntu Software Center or using the command below:

    sudo apt-get install vuze
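If you have installed several of these clients, `command -v` is a quick way to see which ones actually made it onto your PATH. A small sketch; the `is_installed` helper is just our own wrapper, and the binary names are the usual package names:

```shell
# Succeeds when the named program can be found on the PATH.
is_installed() {
    command -v "$1" >/dev/null 2>&1
}

# Report which of the clients covered in this article are present
for client in transmission-gtk deluge qbittorrent vuze; do
    if is_installed "$client"; then
        echo "$client is installed"
    else
        echo "$client is not installed"
    fi
done
```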
----------

### Frostwire ###

![Logo of Frostwire torrent client](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/frostwire.png)

[Frostwire][7] is a torrent application you might want to try. It is more than just a simple torrent client. Also available for Android, it can be used to share files over WiFi.

You can search for torrents from within the application and play them inside it. In addition to the downloaded files, it can browse your local media and organize them inside the player. The same applies to the Android version.

An additional feature is that Frostwire provides access to legal music by indie artists. You can download it and listen to it, for free and legally.

- [Download Frostwire][8]
----------

### Honorable mention ###

On Windows, uTorrent (pronounced mu torrent) is my favorite torrent application. While uTorrent may be available for Linux, I deliberately skipped it from the list because installing and using uTorrent in Linux is neither easy nor does it provide a complete application experience (it runs within a web browser).

You can read about uTorrent installation in Ubuntu [here][9].

#### Quick tip: ####

Most of the time, torrent applications do not start by default. You might want to change this behavior. Read this post to learn [how to manage startup applications in Ubuntu][10].
### What’s your favorite? ###

That was my opinion on the best torrent clients for Ubuntu. What is your favorite one? Do leave a comment. You can also check the [best download managers for Ubuntu][11] in related posts. And if you use Popcorn Time, check these [Popcorn Time Tips][12].

--------------------------------------------------------------------------------

via: http://itsfoss.com/best-torrent-ubuntu/

作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:http://deluge-torrent.org/
[2]:http://www.qbittorrent.org/
[3]:http://www.bittorrent.com/
[4]:http://www.tixati.com/
[5]:http://www.tixati.com/download/
[6]:http://www.vuze.com/
[7]:http://www.frostwire.com/
[8]:http://www.frostwire.com/downloads
[9]:http://sysads.co.uk/2014/05/install-utorrent-3-3-ubuntu-14-04-13-10/
[10]:http://itsfoss.com/manage-startup-applications-ubuntu/
[11]:http://itsfoss.com/4-best-download-managers-for-linux/
[12]:http://itsfoss.com/popcorn-time-tips/
@ -0,0 +1,228 @@
Great Open Source Collaborative Editing Tools
================================================================================

In a nutshell, collaborative writing is writing done by more than one person. There are benefits and risks to collaborative working. Some of the benefits include a more integrated / co-ordinated approach, better use of existing resources, and a stronger, united voice. For me, the greatest advantage is transparency. That matters when I need to take colleagues' views into account. Sending files back and forth between colleagues is inefficient, causes unnecessary delays and leaves people (i.e. me) unhappy with the whole notion of collaboration. With good collaborative software, I can share notes, data and files, and use comments to share thoughts in real-time or asynchronously. Working together on documents, images, video, presentations, and tasks is made less of a chore.

There are many ways to collaborate online, and it has never been easier. This article highlights my favourite open source tools to collaborate on documents in real time.

Google Docs is an excellent productivity application with most of the features I need. It serves as a collaborative tool for editing documents in real time. Documents can be shared, opened, and edited by multiple users simultaneously and users can see character-by-character changes as other collaborators make edits. While Google Docs is free for individuals, it is not open source.

Here is my take on the finest open source collaborative editors which help you focus on writing without interruption, yet work mutually with others.
----------

### Hackpad ###

![Hackpad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Hackpad.png)

Hackpad is an open source web-based realtime wiki, based on the open source EtherPad collaborative document editor.

Hackpad allows users to share their docs in real time, and it uses color coding to show which authors have contributed which content. It also allows inline photos and checklists, and can also be used for coding as it offers syntax highlighting.

While Dropbox acquired Hackpad in April 2014, it is only this month that the software has been released under an open source license. It has been worth the wait.

Features include:

- Very rich set of functions, similar to those offered by wikis
- Take collaborative notes, share data and files, and use comments to share your thoughts in real-time or asynchronously
- Granular privacy permissions enable you to invite a single friend, a dozen teammates, or thousands of Twitter followers
- Intelligent execution
- Directly embed videos from popular video sharing sites
- Tables
- Syntax highlighting for most common programming languages including C, C#, CSS, CoffeeScript, Java, and HTML

- Website: [hackpad.com][1]
- Source code: [github.com/dropbox/hackpad][2]
- Developer: [Contributors][3]
- License: Apache License, Version 2.0
- Version Number: -
----------

### Etherpad ###

![Etherpad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Etherpad.png)

Etherpad is an open source web-based collaborative real-time editor, allowing authors to simultaneously edit a text document, leave comments, and interact with others using an integrated chat.

Etherpad is implemented in JavaScript, on top of the AppJet platform, with the real-time functionality achieved using Comet streaming.

Features include:

- Well designed spartan interface
- Simple text formatting features
- "Time slider" - explore the history of a pad
- Download documents in plain text, PDF, Microsoft Word, Open Document, and HTML
- Auto-saves the document at regular, short intervals
- Highly customizable
- Client side plugins extend the editor functionality
- Hundreds of plugins extend Etherpad including support for email notifications, pad management, and authentication
- Accessibility enabled
- Interact with Pad contents in real time from within Node and from your CLI

- Website: [etherpad.org][4]
- Source code: [github.com/ether/etherpad-lite][5]
- Developer: David Greenspan, Aaron Iba, J.D. Zamfiresc, Daniel Clemens, David Cole
- License: Apache License Version 2.0
- Version Number: 1.5.7
----------

### Firepad ###

![Firepad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Firepad.png)

Firepad is an open source, collaborative text editor. It is designed to be embedded inside larger web applications, with collaborative code editing added in only a few days.

Firepad is a full-featured text editor, with capabilities like conflict resolution, cursor synchronization, user attribution, and user presence detection. It uses Firebase as a backend, and doesn't need any server-side code. It can be added to any web app. Firepad can use either the CodeMirror editor or the Ace editor to render documents, and its operational transform code borrows from ot.js.

If you want to extend your web application's capabilities by adding a simple document and code editor, Firepad is perfect.

Firepad is used by several editors, including the Atlassian Stash Realtime Editor, Nitrous.IO, LiveMinutes, and Koding.

Features include:

- True collaborative editing
- Intelligent OT-based merging and conflict resolution
- Support for both rich text and code editing
- Cursor position synchronization
- Undo / redo
- Text highlighting
- User attribution
- Presence detection
- Version checkpointing
- Images
- Extend Firepad through its API
- Supports all modern browsers: Chrome, Safari, Opera 11+, IE8+, Firefox 3.6+

- Website: [www.firepad.io][6]
- Source code: [github.com/firebase/firepad][7]
- Developer: Michael Lehenbauer and the team at Firebase
- License: MIT
- Version Number: 1.1.1
----------

### OwnCloud Documents ###

![ownCloud Documents in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-ownCloud.png)

ownCloud Documents is an ownCloud app for working with office documents alone and/or collaboratively. It allows up to 5 individuals to collaborate on editing .odt and .doc files in a web browser.

ownCloud is a self-hosted file sync and share server. It provides access to your data through a web interface, sync clients or WebDAV while providing a platform to view, sync and share across devices easily.

Features include:

- Cooperative editing, with multiple users editing files simultaneously
- Document creation within ownCloud
- Document upload
- Share and edit files in the browser, and then share them inside ownCloud or through a public link
- ownCloud features like versioning, local syncing, encryption, undelete
- Seamless support for Microsoft Word documents by way of transparent conversion of file formats

- Website: [owncloud.org][8]
- Source code: [github.com/owncloud/documents][9]
- Developer: OwnCloud Inc.
- License: AGPLv3
- Version Number: 8.1.1
----------

### Gobby ###

![Gobby in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Gobby.png)

Gobby is a collaborative editor supporting multiple documents in one session and a multi-user chat. All users can work on the file simultaneously without the need to lock it. The parts the various users write are highlighted in different colours, and it supports syntax highlighting of various programming and markup languages.

Gobby allows multiple users to edit the same document together over the internet in real-time. It integrates well with the GNOME environment. It features a client-server architecture which supports multiple documents in one session, document synchronisation on request, password protection and an IRC-like chat for communication out of band. Users can choose a colour to highlight the text they have written in a document.

A dedicated server called infinoted is also provided.

Features include:

- Full-fledged text editing capabilities including syntax highlighting using GtkSourceView
- Real-time, lock-free collaborative text editing through encrypted connections (including PFS)
- Integrated group chat
- Local group undo: Undo does not affect changes of remote users
- Shows cursors and selections of remote users
- Highlights text written by different users with different colors
- Syntax highlighting for most programming languages, auto indentation, configurable tab width
- Zeroconf support
- Encrypted data transfer including perfect forward secrecy (PFS)
- Sessions can be password-protected
- Sophisticated access control with Access Control Lists (ACLs)
- Highly configurable dedicated server
- Automatic saving of documents
- Advanced search and replace options
- Internationalisation
- Full Unicode support

- Website: [gobby.github.io][10]
- Source code: [github.com/gobby][11]
- Developer: Armin Burgmeier, Philipp Kern and contributors
- License: GNU GPLv2+ and ISC
- Version Number: 0.5.0
----------

### OnlyOffice ###

![OnlyOffice in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-OnlyOffice.png)

ONLYOFFICE (formerly known as Teamlab Office) is a multifunctional cloud online office suite integrated with a CRM system, document and project management toolset, Gantt chart and email aggregator.

It allows you to organize business tasks and milestones, store and share your corporate or personal documents, use social networking tools such as blogs and forums, as well as communicate with your team members via corporate IM.

Manage documents, projects, team and customer relations in one place. OnlyOffice combines text, spreadsheet and presentation editors that include features similar to Microsoft desktop editors (Word, Excel and PowerPoint), but allow you to co-edit, comment and chat in real time.

OnlyOffice is written in ASP.NET, based on the HTML5 Canvas element, and translated into 21 languages.

Features include:

- As powerful as a desktop editor when working with large documents, paging and zooming
- Document sharing in view / edit modes
- Document embedding
- Spreadsheet and presentation editors
- Co-editing
- Commenting
- Integrated chat
- Mobile applications
- Gantt charts
- Time management
- Access right management
- Invoicing system
- Calendar
- Integration with file storage systems: Google Drive, Box, OneDrive, Dropbox, OwnCloud
- Integration with CRM, email aggregator and project management module
- Mail server
- Mail aggregator
- Edit documents, spreadsheets and presentations of the most popular formats: DOC, DOCX, ODT, RTF, TXT, XLS, XLSX, ODS, CSV, PPTX, PPT, ODP

- Website: [www.onlyoffice.com][12]
- Source code: [github.com/ONLYOFFICE/DocumentServer][13]
- Developer: Ascensio System SIA
- License: GNU GPL v3
- Version Number: 7.7
--------------------------------------------------------------------------------

via: http://www.linuxlinks.com/article/20150823085112605/CollaborativeEditing.html

作者:Frazer Kline
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://hackpad.com/
[2]:https://github.com/dropbox/hackpad
[3]:https://github.com/dropbox/hackpad/blob/master/CONTRIBUTORS
[4]:http://etherpad.org/
[5]:https://github.com/ether/etherpad-lite
[6]:http://www.firepad.io/
[7]:https://github.com/firebase/firepad
[8]:https://owncloud.org/
[9]:http://github.com/owncloud/documents/
[10]:https://gobby.github.io/
[11]:https://github.com/gobby
[12]:https://www.onlyoffice.com/free-edition.aspx
[13]:https://github.com/ONLYOFFICE/DocumentServer
65
sources/share/20150826 Five Super Cool Open Source Games.md
Normal file
@ -0,0 +1,65 @@
Five Super Cool Open Source Games
================================================================================

In 2014 and 2015, Linux became home to a list of popular commercial titles such as the popular Borderlands, Witcher, Dead Island, and Counter Strike series of games. While this is exciting news, what of the gamer on a budget? Commercial titles are good, but even better are free-to-play alternatives made by developers who know what players like.

Some time ago, I came across a three year old YouTube video with the ever optimistic title [5 Open Source Games that Don’t Suck][1]. Although the video praises some open source games, I’d prefer to approach the subject with a bit more enthusiasm, at least as far as the title goes. So, here’s my list of five super cool open source games.
### Tux Racer ###

![Tux Racer](http://fossforce.com/wp-content/uploads/2015/08/tuxracer-550x413.jpg)

Tux Racer

[Tux Racer][2] is the first game on this list because I’ve had plenty of experience with it. On a [recent trip to Mexico][3] that my brother and I took with [Kids on Computers][4], Tux Racer was one of the games that kids and teachers alike enjoyed. In this game, players use the Linux mascot, the penguin Tux, to race on downhill ski slopes in time trials in which players challenge their own personal bests. Currently there’s no multiplayer version available, but that could be subject to change. Available for Linux, OS X, Windows, and Android.
### Warsow ###

![Warsow](http://fossforce.com/wp-content/uploads/2015/08/warsow-550x413.jpg)

Warsow

The [Warsow][5] website explains: “Set in a futuristic cartoonish world, Warsow is a completely free fast-paced first-person shooter (FPS) for Windows, Linux and Mac OS X. Warsow is the Art of Respect and Sportsmanship Over the Web.” I was reluctant to include games from the FPS genre on this list, because many have played games in this genre, but I was amused by Warsow. It prioritizes lots of movement and the game is fast paced with a set of eight weapons to start with. The cartoonish style makes playing feel less serious and more casual, something for friends and family to play together. However, it boasts competitive play, and when I experienced the game I found there were, indeed, some expert players around. Available for Linux, Windows and OS X.
### M.A.R.S – A ridiculous shooter ###

![M.A.R.S. - A ridiculous shooter](http://fossforce.com/wp-content/uploads/2015/08/MARS-screenshot-550x344.jpg)

M.A.R.S. – A ridiculous shooter

[M.A.R.S – A ridiculous shooter][6] is appealing because of its vibrant coloring and style. There is support for two players on the same keyboard, but an online multiplayer version is currently in the works, meaning plans to play with friends will have to wait for now. Regardless, it’s an entertaining space shooter with a few different ships and weapons to play with. There are different shaped ships and a range of weapons, from shotguns and lasers to scattered shots and more (one of the random ships shot bubbles at my opponents, which was funny amid the chaotic gameplay). There are a few modes of play, such as the standard death match against opponents to reach a score limit, along with other modes called Spaceball, Grave-itation Pit and Cannon Keep. Available for Linux, Windows and OS X.
### Valyria Tear ###

![Valyria Tear](http://fossforce.com/wp-content/uploads/2015/08/bronnan-jump-to-enemy-550x413.jpg)

Valyria Tear

[Valyria Tear][7] resembles many fan favorite role-playing games (RPGs) spanning the years. The story is set in the usual era of fantasy games, full of knights, kingdoms and wizardry, and follows the main character Bronann. The design team did great work in designing the world and gives players everything expected from the genre: hidden chests, random monster encounters, non-player character (NPC) interaction, and something no RPG would be complete without: grinding for experience on lower level slime monsters until you’re ready for the big bosses. When I gave it a try, time didn’t permit me to play too far into the campaign, but for those interested there is a ‘[Let’s Play][8]‘ series by YouTube user Yohann Ferriera. Available for Linux, Windows and OS X.
### SuperTuxKart ###

![SuperTuxKart](http://fossforce.com/wp-content/uploads/2015/08/hacienda_tux_antarctica-550x293.jpg)

SuperTuxKart

Last but not least is [SuperTuxKart][9], a clone of Mario Kart that is every bit as fun as the original. It started development around 2000-2004 as Tux Kart, but there were errors in its production which led to a halt in development for a few years. Since development picked up again in 2006, it’s been improving, with version 0.9 debuting four months ago. In the game, our old friend Tux starts in the role of Mario alongside a few other open source mascots. One recognizable face among them is Suzanne, the monkey mascot for Blender. The graphics are solid and gameplay is fluent. While online play is in the planning stages, split screen multiplayer action is available, with up to four players supported on a single computer. Available for Linux, Windows, OS X, AmigaOS 4, AROS and MorphOS.
--------------------------------------------------------------------------------

via: http://fossforce.com/2015/08/five-super-cool-open-source-games/

作者:Hunter Banks
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:https://www.youtube.com/watch?v=BEKVl-XtOP8
[2]:http://tuxracer.sourceforge.net/download.html
[3]:http://fossforce.com/2015/07/banks-family-values-texas-linux-fest/
[4]:http://www.kidsoncomputers.org/an-amazing-week-in-oaxaca
[5]:https://www.warsow.net/download
[6]:http://mars-game.sourceforge.net/
[7]:http://valyriatear.blogspot.com/
[8]:https://www.youtube.com/channel/UCQ5KrSk9EqcT_JixWY2RyMA
[9]:http://supertuxkart.sourceforge.net/
@ -0,0 +1,110 @@
Mosh Shell – An SSH-Based Client for Connecting to Remote Unix/Linux Systems
================================================================================

Mosh, which stands for Mobile Shell, is a command-line application used for connecting to a server from a client computer over the Internet. It can be used in place of SSH and offers more features than Secure Shell. The application was originally written by Keith Winstein for Unix-like operating systems and released under GNU GPL v3.

![Mosh Shell SSH Client](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-SSH-Client.png)

Mosh Shell SSH Client
#### Features of Mosh ####

- It is a remote terminal application that supports roaming.
- Available for all major UNIX-like OSes, viz. Linux, FreeBSD, Solaris, Mac OS X and Android.
- Intermittent connectivity supported.
- Provides intelligent local echo.
- Line editing of user keystrokes supported.
- Responsive and robust over WiFi, cellular and long-distance links.
- Remains connected even when the IP changes. It uses UDP in place of TCP (used by SSH). TCP times out when the connection is reset or a new IP is assigned, but UDP keeps the connection open.
- The connection remains intact when you resume the session after a long time.
- No network lag: shows the user's typed keys and deletions immediately without waiting on the network.
- Same login method as SSH.
- Mechanism to handle packet loss.
### Installation of Mosh Shell in Linux ###

On Debian, Ubuntu and Mint-like systems, you can easily install the Mosh package with the help of the [apt-get package manager][1] as shown.

    # apt-get update
    # apt-get install mosh

On RHEL/CentOS/Fedora based distributions, you need to turn on the third party repository called [EPEL][2], in order to install mosh from this repository using the [yum package manager][3] as shown.

    # yum update
    # yum install mosh

On Fedora 22+ versions, you need to use the [dnf package manager][4] to install mosh as shown.

    # dnf install mosh
### How do I use Mosh Shell? ###

1. Let’s try to log into a remote Linux server using the mosh shell.

    $ mosh root@192.168.0.150

![Mosh Shell Remote Connection](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Remote-Connection.png)

Mosh Shell Remote Connection
**Note**: As you can see, I got an error connecting, since the port was not open on my remote CentOS 7 box. A quick, but not recommended, workaround was:

    # systemctl stop firewalld    [on Remote Server]

The preferred way is to open a port and update the firewall rules, and then connect to mosh on a predefined port. For in-depth details on firewalld you may like to visit this post.

- [How to Configure Firewalld][5]
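As a hedged example of the preferred approach, assuming a firewalld-based server (such as CentOS 7) and mosh's default UDP port range, the rules could look like this; narrow the range if you expect only a few concurrent sessions:

    # firewall-cmd --permanent --add-port=60000-61000/udp   [on Remote Server]
    # firewall-cmd --reload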
2. Let’s assume that the default SSH port 22 was changed to port 70; in this case you can define a custom port with the ‘-p‘ switch with mosh.

    $ mosh -p 70 root@192.168.0.150

3. Check the version of installed Mosh.

    $ mosh --version

![Check Mosh Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mosh-Version.png)

Check Mosh Version

4. You can close the mosh session by typing ‘exit‘ at the prompt.

    $ exit

5. Mosh supports a lot of options, which you may see as:

    $ mosh --help

![Mosh Shell Options](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Options.png)

Mosh Shell Options
#### Cons of Mosh Shell ####

- Mosh has additional prerequisites, for example allowing direct connections via UDP, which SSH does not require.
- Dynamic port allocation in the range of 60000-61000. The first open port is allocated. It requires one port per connection.
- Default port allocation is a serious security concern, especially in production.
- IPv6 connections are supported, but roaming on IPv6 is not.
- Scrollback is not supported.
- No X11 forwarding support.
- No support for ssh-agent forwarding.
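The dynamic allocation in the second point can be sketched in shell. This is only an illustration of the idea (mosh itself implements this internally); `first_free_port` is a hypothetical function that scans the default range for the first port not in a given list of used ones:

```shell
# Illustrative only: given a list of ports already in use, print the first
# port in mosh's default 60000-61000 range that is still free.
first_free_port() {
    used=" $* "
    for p in $(seq 60000 61000); do
        case "$used" in
            *" $p "*) ;;              # port taken, keep scanning
            *) echo "$p"; return 0 ;;
        esac
    done
    return 1                          # whole range exhausted
}

first_free_port 60000 60001   # prints 60002
```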
### Conclusion ###

Mosh is a nice small utility which is available for download in the repositories of most Linux distributions. Though it has a few shortcomings, especially the security concerns and additional requirements, features like remaining connected even while roaming are a big plus. My recommendation is that every Linux user who deals with SSH should try this application; Mosh is worth a try.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/install-mosh-shell-ssh-client-in-linux/

作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/
[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
[3]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
[4]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/
[5]:http://www.tecmint.com/configure-firewalld-in-centos-7/
@ -0,0 +1,67 @@
Xtreme Download Manager Updated With Fresh GUI
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-Linux.jpg)

[Xtreme Download Manager][1], unarguably one of the [best download managers for Linux][2], has a new version named XDM 2015 which brings a fresh new look to it.

Xtreme Download Manager, also known as XDM or XDMAN, is a popular cross-platform download manager available for Linux, Windows and Mac OS X. It is also compatible with all major web browsers such as Chrome, Firefox and Safari, enabling you to download directly from XDM when you try to download something in your web browser.

Applications such as XDM are particularly useful when you have a slow or limited network connection and you need to manage your downloads. Imagine downloading a huge file from the internet on a slow network. What if you could pause and resume the download at will? XDM helps you in such situations.

Some of the main features of XDM are:

- Pause and resume downloads
- [Download videos from YouTube][3] and other video sites
- Force assemble
- Download speed acceleration
- Schedule downloads
- Limit download speed
- Web browser integration
- Support for proxy servers

Here you can see the difference between the old and new XDM.

![Old XDM](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-700x400_c.jpg)

Old XDM

![New XDM](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme_Download_Manager.png)

New XDM

### Install Xtreme Download Manager in Ubuntu based Linux distros ###

Thanks to the PPA by Noobslab, you can easily install Xtreme Download Manager using the commands below. XDM requires Java, but thanks to the PPA you don’t need to bother with installing dependencies separately.

    sudo add-apt-repository ppa:noobslab/apps
    sudo apt-get update
    sudo apt-get install xdman

The above PPA should be available for Ubuntu and other Ubuntu based Linux distributions such as Linux Mint, elementary OS, Linux Lite etc.

#### Remove XDM ####

To remove XDM (installed using the PPA), use the commands below:

    sudo apt-get remove xdman
    sudo add-apt-repository --remove ppa:noobslab/apps

For other Linux distributions, you can download it from the link below:

- [Download Xtreme Download Manager][4]

--------------------------------------------------------------------------------

via: http://itsfoss.com/xtreme-download-manager-install/

作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:http://xdman.sourceforge.net/
[2]:http://itsfoss.com/4-best-download-managers-for-linux/
[3]:http://itsfoss.com/download-youtube-videos-ubuntu/
[4]:http://xdman.sourceforge.net/download.html
@ -1,53 +0,0 @@
Docker Working on Security Components, Live Container Migration
================================================================================
![Docker Container Talk](http://www.eweek.com/imagesvr_ce/1905/290x195DockerMarianna.jpg)

**Docker developers take the stage at Containercon and discuss their work on future container innovations for security and live migration.**

SEATTLE—Containers are one of the hottest topics in IT today, and at the Linuxcon USA event here there is a co-located event called Containercon, dedicated to this virtualization technology.

Docker, the lead commercial sponsor of the open-source Docker effort, brought three of its top people to the keynote stage today, but not Docker founder Solomon Hykes.

Hykes, who delivered a Linuxcon keynote in 2014, was in the audience though, as Senior Vice President of Engineering Marianna Tessel, Docker security chief Diogo Mónica and Docker chief maintainer Michael Crosby presented what's new and what's coming in Docker.

Tessel emphasized that Docker is very real today and used in production environments at some of the largest organizations on the planet, including the U.S. Government. Docker also is working in small environments too, including the Raspberry Pi small form factor ARM computer, which now can support up to 2,300 containers on a single device.

"We're getting more powerful and at the same time Docker will also get simpler to use," Tessel said.

As a metaphor, Tessel said that the whole Docker experience is much like a cruise ship, where there is powerful and complex machinery that powers the ship, yet the experience for passengers is all smooth sailing.

One area that Docker is trying to make easier is security. Tessel said that security is mind-numbingly complex for most people as organizations constantly try to avoid network breaches.

That's where Docker Content Trust comes into play, which is a configurable feature in the recent Docker 1.8 release. Diogo Mónica, security lead for Docker, joined Tessel on stage and said that security is a hard topic, which is why Docker Content Trust is being developed.

With Docker Content Trust there is a verifiable way to make sure that a given Docker application image is authentic. There also are controls to limit fraud and potential malicious code injection by verifying application freshness.

To prove his point, Mónica did a live demonstration of what could happen if Content Trust is not enabled. In one instance, a website update is manipulated to allow the demo web app to be defaced. When Content Trust is enabled, the hack didn't work and was blocked.

"Don't let the simple demo fool you," Tessel said. "You have seen the best security possible."

One area where containers haven't been put to use before is for live migration, which on VMware virtual machines is a technology called vMotion. It's an area that Docker is currently working on.

Docker chief maintainer Michael Crosby did an onstage demonstration of a live migration of Docker containers. Crosby referred to the approach as checkpoint and restore, where a running container gets a checkpoint snapshot and is then restored to another location.

A container also can be cloned and then run in another location. Crosby humorously referred to his cloned container as "Dolly," a reference to the world's first cloned animal, Dolly the sheep.

Tessel also took time to talk about the RunC component of containers, which is now a technology component that is being developed by the Open Containers Initiative as a multi-stakeholder process. With RunC, containers expand beyond Linux to multiple operating systems including Windows and Solaris.

Overall, Tessel said that she can't predict the future of Docker, though she is very optimistic.

"I'm not sure what the future is, but I'm sure it'll be out of this world," Tessel said.

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.

--------------------------------------------------------------------------------

via: http://www.eweek.com/virtualization/docker-working-on-security-components-live-container-migration.html

作者:[Sean Michael Kerner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/
@ -1,49 +0,0 @@
Linuxcon: The Changing Role of the Server OS
================================================================================
SEATTLE - Containers might one day change the world, but it will take time and it will also change the role of the operating system. That's the message delivered during a Linuxcon keynote here today by Wim Coekaerts, SVP Linux and virtualization engineering at Oracle.

![](http://www.serverwatch.com/imagesvr_ce/6421/wim-200x150.jpg)

Coekaerts started his presentation by putting up a slide stating it's the year of the desktop, which generated a few laughs from the audience. Truly, though, Coekaerts said it is now apparent that 2015 is the year of the container, and more importantly the year of the application, which is what containers really are all about.

"What do you need an operating system for?" Coekaerts asked. "It's really just there to run an application; an operating system is there to manage hardware and resources so your app can run."

Coekaerts added that with Docker containers, the focus is once again on the application. At Oracle, Coekaerts said much of the focus is on how to make the app run better on the OS.

"Many people are used to installing apps, but many of the younger generation just click a button on their mobile device and it runs," Coekaerts said.

Coekaerts said that people now wonder why it's more complex in the enterprise to install software, and Docker helps to change that.

"The role of the operating system is changing," Coekaerts said.

The rise of Docker does not mean the demise of virtual machines (VMs), though. Coekaerts said it will take a very long time for things to mature in the containerization space and get used in the real world.

During that period VMs and containers will co-exist and there will be a need for transition and migration tools between containers and VMs. For example, Coekaerts noted that Oracle's VirtualBox open-source technology is widely used on desktop systems today as a way to help users run Docker. The Docker Kitematic project makes use of VirtualBox to boot Docker on Macs today.

### The Open Container Initiative and Write Once, Deploy Anywhere for Containers ###

A key promise that needs to be enabled for containers to truly be successful is the concept of write once, deploy anywhere. That's an area where the Linux Foundation's Open Container Initiative (OCI) will play a key role in enabling interoperability across container runtimes.

"With OCI, it will make it easier to build once and run anywhere, so what you package locally you can run wherever you want," Coekaerts said.

Overall, though, Coekaerts said that while there is a lot of interest in moving to the container model, it's not quite ready yet. He noted Oracle is working on certifying its products to run in containers, but it's a hard process.

"Running the database is easy; it's everything else around it that is complex," Coekaerts said. "Containers don't behave the same as VMs, and some applications depend on low-level system configuration items that are not exposed from the host to the container."

Additionally, Coekaerts commented that debugging problems inside a container is different than in a VM, and there is currently a lack of mature tools for proper container app debugging.

Coekaerts emphasized that as containers mature it's important to not forget about the existing technology that organizations use to run and deploy applications on servers today. He said enterprises don't typically throw out everything they have just to start with new technology.

"Deploying new technology is hard, and you need to be able to transition from what you have," Coekaerts said. "The technology that allows you to transition easily is the technology that wins."

--------------------------------------------------------------------------------

via: http://www.serverwatch.com/server-news/linuxcon-the-changing-role-of-the-server-os.html

作者:[Sean Michael Kerner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.serverwatch.com/author/Sean-Michael-Kerner-101580.htm
@ -1,49 +0,0 @@
A Look at What's Next for the Linux Kernel
================================================================================
![](http://www.eweek.com/imagesvr_ce/485/290x195cilinux1.jpg)

**The upcoming Linux 4.2 kernel will have more contributors than any other Linux kernel in history, according to Linux kernel developer Jonathan Corbet.**

SEATTLE—The Linux kernel continues to grow—both in lines of code and the number of developers that contribute to it—yet some challenges need to be addressed. That was one of the key messages from Linux kernel developer Jonathan Corbet during his annual Kernel Report session at the LinuxCon conference here.

The Linux 4.2 kernel is still under development, with general availability expected on Aug. 23. Corbet noted that 1,569 developers have contributed code for the Linux 4.2 kernel. Of those, 277 developers made their first contribution ever during the Linux 4.2 development cycle.

Even as more developers are coming to Linux, the pace of development and releases is very fast, Corbet said. He estimates that it now takes approximately 63 days for the community to build a new Linux kernel milestone.

Linux 4.2 will benefit from a number of improvements that have been evolving in Linux over the last several releases. One such improvement is the introduction of OverlayFS, a new type of read-only file system that is useful because it can enable many containers to be layered on top of each other, Corbet said.

Linux networking also is set to improve small packet performance, which is important for areas such as high-frequency financial trading. The improvements are aimed at reducing the amount of time and power needed to process each data packet, Corbet said.

New drivers are always being added to Linux. On average, there are 60 to 80 new or updated drivers added in every Linux kernel development cycle, Corbet said.

Another key area that continues to improve is that of live kernel patching, first introduced in the Linux 4.0 kernel. With live kernel patching, the promise is that a system administrator can patch a live running kernel without the need to reboot a running production system. While the basic elements of live kernel patching are in the kernel already, work is under way to make the technology all work with the right level of consistency and stability, Corbet explained.

**Linux Security, IoT and Other Concerns**

Security has been a hot topic in the open-source community in the past year due to high-profile issues, including Heartbleed and Shellshock.

"I don't doubt there are some unpleasant surprises in the neglected Linux code at this point," Corbet said.

He noted that there are more than 3 million lines of code in the Linux kernel today that have been untouched in the last decade by developers, and that the Shellshock vulnerability was a flaw in 20-year-old code that hadn't been looked at in some time.

Another issue that concerns Corbet is the Unix 2038 issue—the Linux equivalent of the Y2K bug, which could have caused global havoc in the year 2000 if it hadn't been fixed. With the 2038 issue, there is a bug that could shut down Linux and Unix machines in the year 2038. Corbet said that while 2038 is still 23 years away, there are systems being deployed now that will still be in use in 2038.

Some initial work took place to fix the 2038 flaw in Linux, but much more remains to be done, Corbet said. "The time to fix this is now, not 20 years from now in a panic when we're all trying to enjoy our retirement," Corbet said.

The Internet of things (IoT) is another area of Linux concern for Corbet. Today, Linux is a leading embedded operating system for IoT, but that might not always be the case. Corbet is concerned that the Linux kernel's growth is making it too big in terms of memory footprint to work in future IoT devices.

A Linux project is now under way to minimize the size of the Linux kernel, and it's important that it gets the support it needs, Corbet said.

"Either Linux is suitable for IoT, or something else will come along and that something else might not be as free and open as Linux," Corbet said. "We can't assume the continued dominance of Linux in IoT. We have to earn it. We have to pay attention to stuff that makes the kernel bigger."

--------------------------------------------------------------------------------

via: http://www.eweek.com/enterprise-apps/a-look-at-whats-next-for-the-linux-kernel.html

作者:[Sean Michael Kerner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/
@ -1,3 +1,4 @@
KevinSJ translating
Why did you start using Linux?
================================================================================
> In today's open source roundup: What got you started with Linux? Plus: IBM's Linux-only mainframe. And why you should skip Windows 10 and go with Linux
@ -144,4 +145,4 @@ via: http://www.itworld.com/article/2972587/linux/why-did-you-start-using-linux.
[1]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/
[2]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/
[3]:http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on-linux-mainframe/
[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/
[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/
@ -0,0 +1,28 @@
Linux 4.3 Kernel To Add The MOST Driver Subsystem
================================================================================
While the [Linux 4.2][1] kernel hasn't been officially released yet, Greg Kroah-Hartman sent in early his pull requests for the various subsystems he maintains for the Linux 4.3 merge window.

The pull requests sent in by Greg KH on Thursday include the Linux 4.3 merge window updates for the driver core, TTY/serial, USB driver, char/misc, and the staging area. These pull requests don't offer any really shocking changes but mostly routine work on improvements / additions / bug-fixes. The staging area once again is heavy with various fixes and clean-ups, but there's also a new driver subsystem.

Greg said of the [4.3 staging changes][2], "Lots of things all over the place, almost all of them trivial fixups and changes. The usual IIO updates and new drivers and we have added the MOST driver subsystem which is getting cleaned up in the tree. The ozwpan driver is finally being deleted as it is obviously abandoned and no one cares about it."

MOST is short for Media Oriented Systems Transport. The documentation to be added in the Linux 4.3 kernel explains, "The Media Oriented Systems Transport (MOST) driver gives Linux applications access a MOST network: The Automotive Information Backbone and the de-facto standard for high-bandwidth automotive multimedia networking. MOST defines the protocol, hardware and software layers necessary to allow for the efficient and low-cost transport of control, real-time and packet data using a single medium (physical layer). Media currently in use are fiber optics, unshielded twisted pair cables (UTP) and coax cables. MOST also supports various speed grades up to 150 Mbps." As explained, MOST is mostly about Linux in automotive applications.

While Greg KH sent in his various subsystem updates for Linux 4.3, he didn't yet propose the [KDBUS][5] kernel code be pulled. He's previously expressed plans for [KDBUS in Linux 4.3][3], so we'll wait until the 4.3 merge window officially gets going to see what happens. Stay tuned to Phoronix for more Linux 4.3 kernel coverage next week when the merge window will begin, [assuming Linus releases 4.2][4] this weekend.

--------------------------------------------------------------------------------

via: http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.3-Staging-Pull

作者:[Michael Larabel][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.michaellarabel.com/
[1]:http://www.phoronix.com/scan.php?page=search&q=Linux+4.2
[2]:http://lkml.iu.edu/hypermail/linux/kernel/1508.2/02604.html
[3]:http://www.phoronix.com/scan.php?page=news_item&px=KDBUS-Not-In-Linux-4.2
[4]:http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.2-rc7-Released
[5]:http://www.phoronix.com/scan.php?page=search&q=KDBUS
@ -0,0 +1,126 @@
How learning data structures and algorithms makes you a better developer
================================================================================

> "I'm a huge proponent of designing your code around the data, rather than the other way around, and I think it's one of the reasons git has been fairly successful […] I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important."
-- Linus Torvalds

---

> "Smart data structures and dumb code works a lot better than the other way around."
-- Eric S. Raymond, The Cathedral and The Bazaar

Learning about data structures and algorithms makes you a stonking good programmer.
**Data structures and algorithms are patterns for solving problems.** The more of them you have in your utility belt, the greater variety of problems you'll be able to solve. You'll also be able to come up with more elegant solutions to new problems than you would otherwise be able to.

You'll understand, ***in depth***, how your computer gets things done. This informs any technical decisions you make, regardless of whether or not you're using a given algorithm directly. Everything from memory allocation in the depths of your operating system, to the inner workings of your RDBMS, to how your networking stack manages to send data from one corner of Earth to another. All computers rely on fundamental data structures and algorithms, so understanding them better makes you understand the computer better.

Cultivate a broad and deep knowledge of algorithms and you'll have stock solutions to large classes of problems. Problem spaces that you had difficulty modelling before often slot neatly into well-worn data structures that elegantly handle the known use-cases. Dive deep into the implementation of even the most basic data structures and you'll start seeing applications for them in your day-to-day programming tasks.

You'll also be able to come up with novel solutions to the somewhat fruitier problems you're faced with. Data structures and algorithms have the habit of proving themselves useful in situations that they weren't originally intended for, and the only way you'll discover these on your own is by having a deep and intuitive knowledge of at least the basics.

But enough with the theory; have a look at some examples.
###Figuring out the fastest way to get somewhere###
Let's say we're creating software to figure out the shortest distance from one international airport to another. Assume we're constrained to the following routes:

![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/airport-graph-d2e32b3344b708383e405d67a80c29ea.svg)

Given a graph of destinations and the distances between them, how can we find the shortest distance, say, from Helsinki to London? **Dijkstra's algorithm** is the algorithm that will definitely get us the right answer in the shortest time.

In all likelihood, if you ever came across this problem and knew that Dijkstra's algorithm was the solution, you'd probably never have to implement it from scratch. Just ***knowing*** about it would point you to a library implementation that solves the problem for you.

If you did dive deep into the implementation, you'd be working through one of the most important graph algorithms we know of. You'd know that in practice it's a little resource intensive, so an extension called A* is often used in its place. It gets used everywhere from robot guidance to routing TCP packets to GPS pathfinding.
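If you did want to sketch it yourself, the core of Dijkstra's algorithm fits in a few lines of Python with a priority queue. The airport names below are just for flavour and the distances are made up for illustration (they are not taken from the figure):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest total distance from start to goal, or None if unreachable.

    graph maps each node to a list of (neighbour, distance) pairs.
    """
    best = {start: 0}
    heap = [(0, start)]              # (distance so far, node)
    while heap:
        dist, node = heapq.heappop(heap)
        if node == goal:
            return dist
        if dist > best.get(node, float("inf")):
            continue                 # stale heap entry; a cheaper path was found
        for neighbour, weight in graph.get(node, []):
            candidate = dist + weight
            if candidate < best.get(neighbour, float("inf")):
                best[neighbour] = candidate
                heapq.heappush(heap, (candidate, neighbour))
    return None

# Made-up distances in kilometres, purely for illustration
routes = {
    "Helsinki": [("Berlin", 1100), ("London", 1850)],
    "Berlin": [("London", 600)],
    "London": [],
}
print(dijkstra(routes, "Helsinki", "London"))  # 1700, via Berlin
```

The heap can hold outdated entries for nodes that were later reached more cheaply; the `continue` simply skips them, which keeps the sketch short at the cost of a slightly larger heap.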
###Figuring out the order to do things in###
Let's say you're trying to model courses on a new Massive Open Online Courses platform (like Udemy or Khan Academy). Some of the courses depend on each other. For example, a user has to have taken Calculus before she's eligible for the course on Newtonian Mechanics. Courses can have multiple dependencies. Here are some examples of what that might look like written out in YAML:

    # Mapping from course name to requirements
    #
    # If you're a physicist or a mathematician and you're reading this, sincere
    # apologies for the completely made-up dependency tree :)
    courses:
      arithmetic: []
      algebra: [arithmetic]
      trigonometry: [algebra]
      calculus: [algebra, trigonometry]
      geometry: [algebra]
      mechanics: [calculus, trigonometry]
      atomic_physics: [mechanics, calculus]
      electromagnetism: [calculus, atomic_physics]
      radioactivity: [algebra, atomic_physics]
      astrophysics: [radioactivity, calculus]
      quantumn_mechanics: [atomic_physics, radioactivity, calculus]
Given those dependencies, as a user, I want to be able to pick any course and have the system give me an ordered list of courses that I would have to take to be eligible. So if I picked `calculus`, I'd want the system to return the list:

    arithmetic -> algebra -> trigonometry -> calculus

Two important constraints on this that may not be self-evident:

- At every stage in the course list, the dependencies of the next course must be met.
- We don't want any duplicate courses in the list.

This is an example of resolving dependencies, and the algorithm we're looking for to solve this problem is called topological sort (tsort). Tsort works on a dependency graph like the one we've outlined in the YAML above. Here's what that would look like in a graph (where each arrow means `requires`):

![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/course-graph-2f60f42bb0dc95319954ce34c02705a2.svg)

What topological sort does is take a graph like the one above and find an ordering in which all the dependencies are met at each stage. So if we took a sub-graph that only contained `radioactivity` and its dependencies, then ran tsort on it, we might get the following ordering:

    arithmetic
    algebra
    trigonometry
    calculus
    mechanics
    atomic_physics
    radioactivity

This meets the requirements set out by the use case we described above. A user just has to pick `radioactivity` and they'll get an ordered list of all the courses they have to work through before they're allowed to.

We don't even need to go into the details of how topological sort works before we put it to good use. In all likelihood, your programming language of choice probably has an implementation of it in the standard library. In the worst case scenario, your Unix probably has the `tsort` utility installed by default; run `man tsort` and have a play with it.
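As it happens, Python's standard library has shipped a topological sorter in `graphlib` since version 3.9. Here's a rough sketch of the course use-case above; the dependency dict mirrors the YAML, and `prerequisites` is a made-up helper name, not anything the platform would actually expose:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# The same made-up dependency mapping as the YAML above
courses = {
    "arithmetic": [],
    "algebra": ["arithmetic"],
    "trigonometry": ["algebra"],
    "calculus": ["algebra", "trigonometry"],
    "geometry": ["algebra"],
    "mechanics": ["calculus", "trigonometry"],
    "atomic_physics": ["mechanics", "calculus"],
    "electromagnetism": ["calculus", "atomic_physics"],
    "radioactivity": ["algebra", "atomic_physics"],
    "astrophysics": ["radioactivity", "calculus"],
    "quantumn_mechanics": ["atomic_physics", "radioactivity", "calculus"],
}

def prerequisites(course, graph):
    """Ordered course list ending at `course`, dependencies always first."""
    # Collect the course and all of its transitive dependencies
    needed, stack = set(), [course]
    while stack:
        current = stack.pop()
        if current not in needed:
            needed.add(current)
            stack.extend(graph[current])
    # Topologically sort just that sub-graph
    subgraph = {c: [d for d in graph[c] if d in needed] for c in needed}
    return list(TopologicalSorter(subgraph).static_order())

print(prerequisites("calculus", courses))
# ['arithmetic', 'algebra', 'trigonometry', 'calculus']
```

Restricting the sort to the sub-graph of transitive dependencies is what keeps unrelated courses (like `geometry`) out of the answer.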
###Other places tsort gets used###

- **Tools like** `make` allow you to declare task dependencies. Topological sort is used under the hood to figure out what order the tasks should be executed in.
- **Any programming language that has a** `require` **directive**, indicating that the current file requires the code in a different file to be run first. Here topological sort can be used to figure out what order the files should be loaded in so that each is only loaded once and all dependencies are met.
- **Project management tools with Gantt charts**. A Gantt chart is a graph that outlines all the dependencies of a given task and gives you an estimate of when it will be complete based on those dependencies. I'm not a fan of Gantt charts, but it's highly likely that tsort will be used to draw them.
###Squeezing data with Huffman coding###
[Huffman coding](http://en.wikipedia.org/wiki/Huffman_coding) is an algorithm used for lossless data compression. It works by analyzing the data you want to compress and creating a binary code for each character. More frequently occurring characters get smaller codes, so `e` might be encoded as `111` while `x` might be `10010`. The codes are created so that they can be concatenated without a delimiter and still be decoded accurately.

Huffman coding is used along with LZ77 in the DEFLATE algorithm which is used by gzip to compress things. gzip is used all over the place, in particular for compressing files (typically anything with a `.gz` extension) and for http requests/responses in transit.
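As a sketch of how those codes get built, here is an illustrative (not production-grade) Huffman-code builder in Python: repeatedly merge the two lowest-frequency subtrees, then read each character's code off the finished tree.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Map each character of `text` to a prefix-free binary code string."""
    freq = Counter(text)
    if not freq:
        return {}
    # Heap entries are (frequency, tiebreak, tree); a tree is either a
    # single character (a leaf) or a (left, right) pair of subtrees.
    heap = [(count, i, char) for i, (char, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-symbol input
        return {heap[0][2]: "0"}
    tiebreak = len(heap)
    while len(heap) > 1:                    # merge the two rarest subtrees
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tiebreak, (left, right)))
        tiebreak += 1
    codes = {}
    def walk(tree, prefix):                 # read the codes off the tree
        if isinstance(tree, str):
            codes[tree] = prefix
        else:
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("this is an example of a huffman tree")
# A more frequent character never gets a longer code than a rarer one:
assert len(codes[" "]) <= len(codes["x"])
```

The unique `tiebreak` counter keeps the heap from ever having to compare two trees directly when frequencies are equal.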
Knowing how to implement and use Huffman coding has a number of benefits:

- You'll know why a larger compression context results in better compression overall (e.g. the more you compress, the better the compression ratio). This is one of the proposed benefits of SPDY: that you get better compression on multiple HTTP requests/responses.
- You'll know that if you're compressing your JavaScript/CSS in transit anyway, it's completely pointless to run a minifier on them. The same goes for PNG files, which use DEFLATE internally for compression already.
- If you ever find yourself trying to forcibly decipher encrypted information, you may realize that since repeating data compresses better, the compression ratio of a given bit of ciphertext will help you determine its [block cipher mode of operation](http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation).
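The first point is easy to demonstrate with Python's built-in `zlib` module, which implements DEFLATE: the same repetitive data compresses to a much smaller fraction of its size once the compressor has more of it to work with. The sample payload below is made up for illustration:

```python
import zlib

sample = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
small = sample * 2       # little context for the compressor to exploit
large = sample * 2000    # lots of repetition to exploit

ratio_small = len(zlib.compress(small)) / len(small)
ratio_large = len(zlib.compress(large)) / len(large)

print(f"small: {ratio_small:.2f}, large: {ratio_large:.4f}")
assert ratio_large < ratio_small   # more context, better ratio
```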
### Picking what to learn next is hard ###

Being a programmer involves learning constantly. To operate as a web developer you need to know markup languages, high-level languages like Ruby/Python, regular expressions, SQL and JavaScript. You need to know the fine details of HTTP, how to drive a Unix terminal and the subtle art of object-oriented programming. It's difficult to navigate that landscape effectively and choose what to learn next.

I'm not a fast learner, so I have to choose what to spend time on very carefully. As much as possible, I want to learn skills and techniques that are evergreen, that is, ones that won't be rendered obsolete in a few years' time. That means I'm hesitant to learn the JavaScript framework of the week or untested programming languages and environments.

As long as our dominant model of computation stays the same, the data structures and algorithms that we use today will be used in some form or another in the future. You can safely spend time gaining a deep and thorough knowledge of them and know that they will pay dividends for your entire career as a programmer.

### Sign up to the Happy Bear Software List ###

Find this article useful? For a regular dose of freshly squeezed technical content delivered straight to your inbox, **click on the big green button below to sign up to the Happy Bear Software mailing list.**

We'll only be in touch a few times per month and you can unsubscribe at any time.
--------------------------------------------------------------------------------

via: http://www.happybearsoftware.com/how-learning-data-structures-and-algorithms-makes-you-a-better-developer

作者:[Happy Bear][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.happybearsoftware.com/
[1]:http://en.wikipedia.org/wiki/Huffman_coding
[2]:http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation
@ -0,0 +1,92 @@
LinuxCon exclusive: Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project
================================================================================

![](http://images.techhive.com/images/article/2015/08/mark-100608730-primary.idge.jpg)

Mark Shuttleworth at LinuxCon. Credit: Swapnil Bhartiya
> Mark Shuttleworth, founder of Canonical and Ubuntu, made a surprise visit to LinuxCon. I sat down with him for a video interview and talked about Ubuntu on IBM’s new LinuxONE systems, Canonical’s plans for containers, open source in the enterprise space and much more.

### You made a surprise entry during the keynote. What brought you to LinuxCon? ###

**Mark Shuttleworth**: I am here at LinuxCon to support IBM and Canonical in their announcement of Ubuntu on their new Linux-only super-high-end mainframe, LinuxONE. These are the biggest machines in the world, purpose-built to run only Linux. And we will be bringing Ubuntu to them, which is a real privilege for us and is going to be incredible for developers.

![mark selfie](http://images.techhive.com/images/article/2015/08/mark-selfie-100608731-large.idge.jpg)

Swapnil Bhartiya

Mark Shuttleworth and Swapnil Bhartiya, mandatory selfie at LinuxCon
### Only Red Hat and SUSE were supported on it. Why was Ubuntu missing from the mainframe scene? ###

**Mark**: Ubuntu has always been about developers. It has been about enabling the free software platform, from where it is collaboratively built, to be available at no cost to developers in the world, so they are limited only by their imagination, not by money, not by geography.

There was an incredible story told today about a 12-year-old kid who started out with Ubuntu; there are incredible stories about people building giant businesses with Ubuntu. And for me, being able to empower people, wherever in the world they come from, to express their ideas on free software is what Ubuntu is all about. It's been a journey for us, essentially, going to the platforms those developers care about, and just in the last year we suddenly saw a flood of requests from companies who run mainframes and who are using Ubuntu for their infrastructure; 70% of OpenStack deployments are on Ubuntu. Those same people said, “Look, there is the mainframe, and we'd like to unleash it and think of it as a region in the cloud.” So when IBM started talking to us, saying that they had this project in the works, it felt like a very natural fit: You are going to be able to take your Ubuntu laptop, build code there and ship it straight to every cloud, every virtualization environment, every bare metal in every architecture, including the mainframe, and that's going to be beautiful.

### Will Canonical be offering support for these systems? ###

**Mark**: Yes. Ubuntu on z Systems is going to be completely supported. We will make long-term commitments to that. The idea is to bring together scale-out-fast, cloud-like workloads, which were really born on Ubuntu; 70% of workloads on Amazon and other public clouds run on Ubuntu. Now you can think of running that on a mainframe if that makes sense to you.

We are going to provide exactly the same platform that we do on the cloud, and we are going to provide that on the mainframe as well. We are also going to expose it through the OpenStack API, so you can consume it on a mainframe with exactly the same tools and exactly the same processes that you would use to consume resources on a laptop, or OpenStack, or a public cloud. So all of the things that Ubuntu builds to make your life easy as a developer are going to be available across that full range of platforms and systems, and all of that is commercially supported.
### Canonical is doing a lot of things: It is into enterprise, and it’s in the consumer space with mobile and desktop. So what is the core focus of Canonical now? ###

**Mark**: The trick for us is to enable the reuse of specifically the same parts [of our technology] in as many useful ways as possible. So if you look at the work that we do for z Systems, it's absolutely defined by the work that we do on the cloud. We want to deliver exactly the same libraries on exactly the same date for the mainframe as we do for public clouds and for x86, ARM and Power servers today.

We don't allow Ubuntu or our focus to fragment very dramatically, because we don't allow different product managers to define Ubuntu in different ways in different environments. We just want to bring that standard experience that developers love to this new environment.

Similarly, if you look at the work we are doing on IoT [Internet of Things], Snappy Ubuntu is the heart of the phone. It's the phone without the GUI. So the definitions, the tools, the kernels, the mechanisms are shared across those projects, and we are able to multiply the impact of the work. We have an incredible community, and we try to enable the community to do things that they want to do that we can't do. So that's why we have so many buntus, and it's kind of incredible for me to see what they do with that.

We also see the community climbing in. We see hundreds of developers working with Snappy for IoT, and we see developers working with Snappy on mobile and for personal computing as convergence becomes real. And, of course, there is the cloud server story: 70% of the world is Ubuntu, so there is a huge audience. We don't have to do all the work that we do; we just have to be open and willing to, kind of, do the core infrastructure and then reuse it as efficiently as possible.
### Is Snappy a response to Atomic or CoreOS? ###

**Mark**: Snappy as a project was born four years ago when we started working on the phone, which was long before CoreOS, long before Atomic. I think the principles of atomicity and transactionality are beautiful, but remember: We needed to build the same things for the phone. And with Snappy, we have the ability to deliver transactional updates to any of these systems: phones, servers and cloud devices.

Of course, it feels a little different, because in order to provide those guarantees, we have to shape the system in such a way that we can guarantee the guarantees. And that's why Snappy is snappy; it's a new thing. It's not based on an old packaging system, though we will keep both of them: All the snaps that Canonical makes, the core snaps that define the OS, are built from Debian packages. They are two different faces of the same coin for us, and developers will use them as tools. We use the right tool for the job.

There are a couple of key advantages for Snappy over CoreOS and Atomic, and the main one is this: We took the view that we wanted the base idea to be extensible. So with Snappy, the core operating system is tiny. You make all the choices, and you take all the decisions about the things you want to bolt onto it: you want to bolt on Docker; you want to bolt on Kubernetes; you want to bolt on Mesos; you want to bolt on Lattice from Pivotal; you want to bolt on OpenStack. Those are the things you choose to add with Snappy. Whereas with Atomic and CoreOS, it's one blob and you have to do it exactly the way they want you to do it. You have to live with the versions of software and the choices they make.

Whereas with Snappy, we really preserve this idea that the choices you have got in Ubuntu are now transactionally available on Snappy systems. That makes the core much smaller, and it gives you the choice of different container systems, different container management systems, different cloud infrastructure systems or different apps of every description. I think that's the winning idea. In the fullness of time, people will realize that they want to make those choices themselves; they just want Canonical to do the work of providing the updates in a really efficient manner.
### There is so much competition in the container space with Docker, Rocket and many other players. Where will Canonical stand amid this competition? ###

**Mark**: Canonical is focused on platform tools, and we see things like Rocket and Docker as super-useful for developers; we just make sure that those work best on Ubuntu. Docker, for years, ran only on Ubuntu because we worked very closely with them, and we are glad now that it's available everywhere else. But if you look at the numbers, the vast majority of Docker containers are on Ubuntu. Because we work really hard, as developers, you get the best experience with all of these tools on Ubuntu. We don't want to try and control everything, and it’s great for us to have those guys competing.

I think in the end people will see that there are really two kinds of containers. 1) There are cases where a container is just like a VM. It feels like a whole machine, it runs all processes, all the logs and cron jobs are there. It's like a VM, just much cheaper, much lighter, much faster, and that's LXD. 2) And then there are process containers, which are like Docker or Rocket; they are there to run a specific application very fast. I think we lead the world in the general machine container story, which is our hypervisor LXD, and I think Docker leads the story when it comes to application containers, process containers. And those two work together really beautifully.

### Microsoft and Canonical are working together on LXD? Can you tell us about this engagement? ###

**Mark**: LXD is two things. First, it's an implementation on top of Canonical's work on the kernel, so that you can start to create full machine containers on any host. But it's also a REST API. That's the transition from LXC to LXD. We've got a daemon there, so you can talk to the daemon over the network, if it's listening on the network, and say: tell me about the containers on that machine, tell me about the file systems on that machine, the networks on that machine, start or stop the container.
Of course, we have led the work in [OpenStack to bind LXD to Nova][1], which is the control system to compute in OpenStack, so that's how we create a whole cloud with OpenStack API with the individual VMs being actually containers, so much denser, much faster, much lighter, much cheaper.
|
||||
|
||||
### Open Source is becoming a norm in the enterprise segment. What do you think is driving the adoption of open source in the enterprise? ###
|
||||
|
||||
**Mark**: The reason why open source has become so popular in the enterprise is because it enables them to go faster. We are all competing at some level, and if you can't make progress because you have to call up some vendor, you can't dig in and help yourself go faster, then you feel frustrated. And given the choice between frustration and at least the ability to dig into a problem, enterprises over time will always choose to give themselves the ability to dig in and help themselves. So that is why open source is phenomenal.
|
||||
|
||||
I think it goes a bit deeper than that. I think people have started to realize as much as we compete, 99% of what we need to do is shared, and there is something meaningful about contributing to something that is shared. As I have seen Ubuntu go from something that developers love, to something that CIOs love that developers love Ubuntu. As that happens, it's not a one-way ticket. They often want to say how can we help contribute to make this whole thing go faster.
|
||||
|
||||
We have always seen a curve of complexity, and open source has traditionally been higher up on the curve of complexity and therefore considered threatening or difficult or too uncertain for people who are not comfortable with the complexity. What's wonderful to me is that many open source projects have identified that as a blocker for their own future. So in Ubuntu we have made user experience, design and “making it easy” a first-class goal. We have done the same for OpenStack. With Ubuntu tools for OpenStack anybody can build an OpenStack cloud in an hour, and if you want, that cloud can run itself, scale itself, manage itself, can deal with failures. It becomes something you can just fire up and forget, which also makes it really cheap. It also makes it something that's not a distraction, and so by making open source easier and easier, we are broadening its appeal to consumers and into the enterprise and potentially into the government.
|
||||
|
||||
### How open are governments to open source? Can you tell us about the utilization of open source by governments, especially in the U.S.? ###

**Mark**: I don't track the usage in government, but part of government utilization in the modern era is the realization of how untrustworthy other governments might be. There is a desire for people to be able to say, “Look, I want to review or check and potentially self-build all the things that I depend on.” That's a really important mission. At the end of the day, some people see this as a game where maybe they can get something out of the other guy. I see it as a game where we can make a level playing field, where everybody gets to compete. I have a very strong interest in making sure that Ubuntu is trustworthy, which means the way we build it, the way we run it, and the governance around it are such that people can have confidence in it as an independent thing.

### You are quite vocal about freedom, privacy and other social issues on Google+. How do you see yourself, your company and Ubuntu playing a role in making the world a better place? ###

**Mark**: The most important thing for us to do is to build confidence in trusted platforms, platforms that are freely available but also trustworthy. At any given time, there will always be people who can make arguments about why they should have access to something. But we know from history that, at the end of the day, due process of law, justice, doesn't depend on the abuse of privacy, the abuse of infrastructure, the abuse of data. So I am very strongly of the view that in the fullness of time, all of the different major actors will come to the view that their primary interest is in having something that is conceptually trustworthy. This isn't about what America can steal from Germany or what China can learn in Russia. This is about saying we’re all going to be able to trust our infrastructure; that's a generational journey. But I believe Ubuntu can be right at the center of people's thinking about that.
--------------------------------------------------------------------------------

via: http://www.itworld.com/article/2973116/linux/linuxcon-exclusive-mark-shuttleworth-says-snappy-was-born-long-before-coreos-and-the-atomic-project.html

作者:[Swapnil Bhartiya][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
[1]:https://wiki.openstack.org/wiki/HypervisorSupportMatrix
@ -0,0 +1,67 @@

The Strangest, Most Unique Linux Distros
================================================================================

From the most consumer-focused distros like Ubuntu, Fedora, Mint or elementary OS to the more obscure, minimal and enterprise-focused ones such as Slackware, Arch Linux or RHEL, I thought I'd seen them all. I couldn't have been further from the truth. The Linux ecosystem is very diverse. There's one for everyone. Let's discuss the weird and wacky world of niche Linux distros that represent the true diversity of open platforms.

![strangest linux distros](http://2.bp.blogspot.com/--cSL2-6rIgA/VcwNc5hFebI/AAAAAAAAJzk/AgB55mVtJVQ/s1600/Puppy-Linux.png)

**Puppy Linux**: An operating system about 1/10th the size of an average DVD-quality movie rip: that's Puppy Linux for you. The OS is just 100 MB in size! And it can run from RAM, making it unusually fast even on older PCs. You can even remove the boot medium after the operating system has started! Can it get any better than that? System requirements are bare minimum, most hardware is automatically detected, and it comes loaded with software catering to your basic needs. [Experience Puppy Linux][1].

![suicide linux](http://3.bp.blogspot.com/-dfeehRIQKpo/VdMgRVQqIJI/AAAAAAAAJz0/TmBs-n2K9J8/s1600/suicide-linux.jpg)

**Suicide Linux**: Did the name scare you? Well, it should. 'Any time - any time - you type any remotely incorrect command, the interpreter creatively resolves it into `rm -rf /` and wipes your hard drive'. Simple as that. I really want to know who is confident enough to risk their production machines with [Suicide Linux][2]. **Warning: DO NOT try this on production machines!** The whole thing is available in a neat [DEB package][3] if you're interested.
![top 10 strangest linux distros](http://3.bp.blogspot.com/-Q0hlEMCD9-o/VdMieAiXY1I/AAAAAAAAJ0M/iS_ZjVaZAk8/s1600/papyros.png)

**PapyrOS**: "Strange" in a good way. PapyrOS is trying to adapt the material design language of Android into their brand new Linux distribution. Though the project is in its early stages, it already looks very promising. The project page says the OS is 80% complete and one can expect the first alpha release anytime soon. We did a small write-up on [PapyrOS][4] when it was announced and, by the looks of it, PapyrOS might even become a trend-setter of sorts. Follow the project on [Google+][5] and contribute via [BountySource][6] if you're interested.

![10 most unique linux distros](http://3.bp.blogspot.com/-8aOtnTp3Yxk/VdMo_KWs4sI/AAAAAAAAJ0o/3NTqhaw60jM/s1600/qubes-linux.png)

**Qubes OS**: Qubes is an open-source operating system designed to provide strong security using a security-by-compartmentalization approach. The assumption is that there can be no perfect, bug-free desktop environment. And by implementing a 'security by isolation' approach, [Qubes Linux][7] intends to remedy that. Qubes is based on Xen, the X Window System and Linux; it can run most Linux applications and supports most Linux drivers. Qubes was selected as a finalist for the Access Innovation Prize 2014 for Endpoint Security Solution.

![top10 linux distros](http://3.bp.blogspot.com/-2Sqvb_lilC0/VdMq_ceoXnI/AAAAAAAAJ00/kot20ugVJFk/s1600/ubuntu-satanic.jpg)

**Ubuntu Satanic Edition**: Ubuntu SE is a Linux distribution based on Ubuntu. "It brings together the best of free software and free metal music" in one comprehensive package consisting of themes, wallpapers, and even some heavy-metal music sourced from talented new artists. Though the project doesn't look actively developed anymore, Ubuntu Satanic Edition is strange in every sense of that word. [Ubuntu SE (Slightly NSFW)][8].

![10 strange linux distros](http://2.bp.blogspot.com/-ZtIVjGMqdx0/VdMv136Pz1I/AAAAAAAAJ1E/-q34j-TXyUY/s1600/tiny-core-linux.png)

**Tiny Core Linux**: Puppy Linux not small enough? Try this. Tiny Core Linux is a 12 MB graphical Linux desktop! Yep, you read that right. One major caveat: it is not a complete desktop, nor is all hardware completely supported. It represents only the core needed to boot into a very minimal X desktop, typically with wired internet access. There is even a version without the GUI called Micro Core Linux which is just 9 MB in size. [Tiny Core Linux][9], folks.
![top 10 unique and special linux distros](http://4.bp.blogspot.com/-idmCvIxtxeo/VdcqcggBk1I/AAAAAAAAJ1U/DTQCkiLqlLk/s1600/nixos.png)

**NixOS**: A Linux distribution focused on very experienced users, with a unique approach to package and configuration management. In other distributions, actions such as upgrades can be dangerous: upgrading a package can cause other packages to break, and upgrading an entire system is much less reliable than reinstalling from scratch. And on top of all that, you can't safely test what the results of a configuration change will be; there's no "undo", so to speak. In NixOS, the entire operating system is built by the Nix package manager from a description in a purely functional build language. This means that building a new configuration cannot overwrite previous configurations. Most of the other features follow this pattern. Nix stores all packages in isolation from each other. [More about NixOS][10].

![strangest linux distros](http://4.bp.blogspot.com/-rOYfBXg-UiU/VddCF7w_xuI/AAAAAAAAJ1w/Nf11bOheOwM/s1600/gobolinux.jpg)

**GoboLinux**: This is another very unique Linux distro. What makes GoboLinux so different from the rest is its unique re-arrangement of the filesystem. Each program has its own subdirectory tree, where all of its files are stored. GoboLinux does not have a package database because the filesystem is its database. In some ways, this sort of arrangement is similar to that seen in OS X. [Get GoboLinux][11].

![strangest linux distros](http://1.bp.blogspot.com/-3P22pYfih6Y/VdcucPOv4LI/AAAAAAAAJ1g/PszZDbe83sQ/s1600/hannah-montana-linux.jpg)

**Hannah Montana Linux**: Here is a Linux distro based on Kubuntu with a Hannah Montana themed boot screen, KDM, icon set, ksplash, plasma, color scheme, and wallpapers (I'm so sorry). [Link][12]. The project is not active anymore.

**RLSD Linux**: An extremely minimalistic, small, lightweight and security-hardened, text-based operating system built on Linux. "It's a unique distribution that provides a selection of console applications and home-grown security features which might appeal to hackers," the developers claim. [RLSD Linux][13].

Did we miss anything even stranger? Let us know.
--------------------------------------------------------------------------------

via: http://www.techdrivein.com/2015/08/the-strangest-most-unique-linux-distros.html

作者:Manuel Jose
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:http://puppylinux.org/main/Overview%20and%20Getting%20Started.htm
[2]:http://qntm.org/suicide
[3]:http://sourceforge.net/projects/suicide-linux/files/
[4]:http://www.techdrivein.com/2015/02/papyros-material-design-linux-coming-soon.html
[5]:https://plus.google.com/communities/109966288908859324845/stream/3262a3d3-0797-4344-bbe0-56c3adaacb69
[6]:https://www.bountysource.com/teams/papyros
[7]:https://www.qubes-os.org/
[8]:http://ubuntusatanic.org/
[9]:http://tinycorelinux.net/
[10]:https://nixos.org/
[11]:http://www.gobolinux.org/
[12]:http://hannahmontana.sourceforge.net/
[13]:http://rlsd2.dimakrasner.com/
@ -1,114 +0,0 @@

wyangsun translating

Install Strongswan - A Tool to Setup IPsec Based VPN in Linux
================================================================================

IPsec is a standard which provides security at the network layer. It consists of the authentication header (AH) and encapsulating security payload (ESP) components. AH provides packet integrity, while confidentiality is provided by the ESP component. IPsec ensures the following security features at the network layer:

- Confidentiality
- Packet integrity
- Source non-repudiation
- Replay attack protection

[Strongswan][1] is an open source implementation of the IPsec protocol, and Strongswan stands for Strong Secure WAN (StrongS/WAN). It supports both versions of automatic key exchange in IPsec VPNs (Internet Key Exchange (IKE) v1 & v2).

Strongswan basically provides automatic key sharing between the two nodes/gateways of the VPN, and after that it uses the Linux kernel implementation of IPsec (AH & ESP). The key shared using the IKE mechanism is then used by ESP for the encryption of data. In the IKE phase, strongswan uses the encryption algorithms (AES, SHA, etc.) of OpenSSL and other crypto libraries. However, the ESP component of IPsec uses the security algorithms implemented in the Linux kernel. The main features of Strongswan are given below.

- X.509 certificate or pre-shared key based authentication
- Support for the IKEv1 and IKEv2 key exchange protocols
- Optional built-in integrity and crypto tests for plugins and libraries
- Support for elliptic curve DH groups and ECDSA certificates
- Storage of RSA private keys and certificates on a smartcard

It can be used in client/server (road warrior) and gateway-to-gateway scenarios.
### How to Install ###

Almost all Linux distros provide a binary package of Strongswan. In this tutorial, we will install strongswan from the binary package and also compile the strongswan source code with the desired features.

### Using the binary package ###

Strongswan can be installed using the following command on Ubuntu 14.04 LTS.

    $ sudo aptitude install strongswan

![Installation of strongswan](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-binary.png)

The global configuration file (strongswan.conf) and the ipsec configuration files (ipsec.conf/ipsec.secrets) of strongswan are under the /etc/ directory.
### Prerequisites for strongswan source compilation & installation ###

- GMP (the mathematical/precision library used by strongswan)
- OpenSSL (crypto algorithms come from this library)
- PKCS (1,7,8,11,12) (certificate encoding and smart card integration with Strongswan)

#### Procedure ####

**1)** Go to the /usr/src/ directory using the following command in the terminal.

    $ cd /usr/src

**2)** Download the source code from the strongswan site using the following command.

    $ sudo wget http://download.strongswan.org/strongswan-5.2.1.tar.gz

(strongswan-5.2.1.tar.gz is the latest version.)
![Downloading software](http://blog.linoxide.com/wp-content/uploads/2014/12/download_strongswan.png)

**3)** Extract the downloaded archive and enter its directory using the following command.

    $ sudo tar -xvzf strongswan-5.2.1.tar.gz; cd strongswan-5.2.1

**4)** Configure strongswan with the desired options using the configure command.

    ./configure --prefix=/usr/local --enable-pkcs11 --enable-openssl

![checking packages for strongswan](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-configure.png)

If the GMP library is not installed, the configure script will generate the following error.

![GMP library error](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-error.png)
Therefore, first of all, install the GMP library using the following command and then run the configure script again.

![gmp installation](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-installation1.png)

However, if GMP is already installed and the above error still occurs, then create a soft link to the libgmp.so library at the /usr/lib, /lib/ and /usr/lib/x86_64-linux-gnu/ paths in Ubuntu using the following command.

    $ sudo ln -s /usr/lib/x86_64-linux-gnu/libgmp.so.10.1.3 /usr/lib/x86_64-linux-gnu/libgmp.so

![softlink of libgmp.so library](http://blog.linoxide.com/wp-content/uploads/2014/12/softlink.png)

After creating the libgmp.so soft link, run the ./configure script again and it should find the gmp library. However, it may generate another error about the gmp header file, which is shown in the following figure.

![GMP header file issue](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-header.png)

Install the libgmp-dev package using the following command to resolve the above error.

    $ sudo aptitude install libgmp-dev

![Installation of development library of GMP](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-dev.png)

After installing the development package of the gmp library, run the configure script again; if it does not produce any error, the following output will be displayed.

![Output of configure script](http://blog.linoxide.com/wp-content/uploads/2014/12/successful-run.png)
Type the following commands to compile and install strongswan.

    $ sudo make ; sudo make install

After the installation of strongswan, the global configuration (strongswan.conf) and ipsec policy/secret configuration files (ipsec.conf/ipsec.secrets) are placed in the **/usr/local/etc** directory.

Strongswan can be used in tunnel or transport mode depending on our security needs. It provides the well-known site-to-site and road warrior VPNs. It can be used easily with Cisco and Juniper devices.
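As an illustration of the tunnel mode just mentioned, a minimal site-to-site connection definition in ipsec.conf might look like the sketch below. The connection name, addresses and subnets are placeholders, not values from this tutorial; the matching pre-shared key would go in ipsec.secrets.

    conn site-to-site
        keyexchange=ikev2
        authby=secret
        left=198.51.100.1
        leftsubnet=10.1.0.0/16
        right=203.0.113.1
        rightsubnet=10.2.0.0/16
        auto=start

With `auto=start`, the tunnel is brought up when the ipsec service starts; each gateway describes itself as `left` and its peer as `right`.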
--------------------------------------------------------------------------------

via: http://linoxide.com/security/install-strongswan/

作者:[nido][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/naveeda/
[1]:https://www.strongswan.org/
@ -1,114 +0,0 @@
|
||||
[bazz2]

Howto Manage Host Using Docker Machine in a VirtualBox
================================================================================
Hi all, today we'll learn how to create and manage a Docker host using Docker Machine in VirtualBox. Docker Machine is an application that helps to create Docker hosts on our computer, on cloud providers and inside our own data center. It provides an easy solution for creating servers, installing Docker on them and then configuring the Docker client according to the user's configuration and requirements. Its API works for provisioning Docker on a local machine, on a virtual machine in the data center, or on a public cloud instance. Docker Machine is supported on Windows, OS X and Linux, and is available for installation as a single standalone binary. It enables us to take full advantage of ecosystem partners providing Docker-ready infrastructure, while still accessing everything through the same interface. It lets people deploy Docker containers on the respective platform quickly and easily with just a single command.

Here are some easy and simple steps that help us deploy Docker containers using Docker Machine.

### 1. Installing Docker Machine ###

Docker Machine works well on every Linux operating system. First of all, we'll need to download the latest version of Docker Machine from its [Github releases page][1]. Here, we'll use curl to download the latest version of Docker Machine, i.e. 0.2.0.

**For 64 Bit Operating System**

    # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine

**For 32 Bit Operating System**

    # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine

After downloading the latest release of Docker Machine, we'll make the file named **docker-machine** under **/usr/local/bin/** executable using the command below.

    # chmod +x /usr/local/bin/docker-machine

After doing the above, we'll want to ensure that we have successfully installed docker-machine. To check, we can run docker-machine -v, which will print the version of docker-machine installed on our system.

    # docker-machine -v

![Installing Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png)

To enable Docker commands on our machine, make sure to install the Docker client as well by running the commands below.

    # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker
    # chmod +x /usr/local/bin/docker

### 2. Creating a VirtualBox VM ###

After we have successfully installed Docker Machine on our Linux machine, we'll want to create a virtual machine using VirtualBox. To get started, we run the docker-machine create command followed by the --driver flag with the string virtualbox, as we are trying to deploy Docker inside a VM running on VirtualBox; the final argument is the name of the machine, here "linux". This command will download the [boot2docker][2] iso, a lightweight Linux distribution based on Tiny Core Linux with the Docker daemon installed, and will create and start a VirtualBox VM with Docker running, as mentioned above.

To do so, we'll run the following command in a terminal or shell on our box.

    # docker-machine create --driver virtualbox linux

![Creating Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-docker-machine.png)

Now, to check whether we have successfully created a VirtualBox VM running Docker, we'll run the command **docker-machine ls** as shown below.

    # docker-machine ls

![Docker Machine List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-list.png)

If the host is active, we can see * under the ACTIVE column in the output as shown above.

### 3. Setting Environment Variables ###

Now, we'll need to make docker talk to the machine. We can do that by running docker-machine env followed by the machine name, here **linux** as above.

    # eval "$(docker-machine env linux)"
    # docker ps

This will set the environment variables that the Docker client will read, which specify the TLS settings. Note that we'll need to do this every time we reboot our machine or start a new tab. We can see what variables will be set by running the following command.

    # docker-machine env linux

    export DOCKER_TLS_VERIFY=1
    export DOCKER_CERT_PATH=/Users/<your username>/.docker/machine/machines/dev
    export DOCKER_HOST=tcp://192.168.99.100:2376
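
To see what the `eval` line above actually does in isolation, here is a self-contained sketch. The `machine_env` function is a hypothetical stand-in that prints export lines the way `docker-machine env linux` would (the address is illustrative); evaluating its output leaves the variables set in the current shell.

```shell
# machine_env simulates the output of `docker-machine env linux`;
# the TCP endpoint below is an example value, not a real host.
machine_env() {
    echo 'export DOCKER_TLS_VERIFY=1'
    echo 'export DOCKER_HOST=tcp://192.168.99.100:2376'
}

# Same pattern as: eval "$(docker-machine env linux)"
eval "$(machine_env)"
echo "$DOCKER_HOST"    # prints: tcp://192.168.99.100:2376
```

Without the `eval`, the export lines would only be printed, not executed, which is why the documentation has you wrap the command.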

### 4. Running Docker Containers ###

Finally, after configuring the environment variables and the virtual machine, we are able to run Docker containers on the host running inside the virtual machine. To give it a test, we'll run a busybox container by executing the **docker run busybox** command with **echo hello world**, so that we can see the output of the container.

    # docker run busybox echo hello world

![Running Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/05/running-docker-container.png)

### 5. Getting Docker Host's IP ###

We can get the IP address of the running Docker host using the **docker-machine ip** command. We can see any exposed ports that are available on the Docker host's IP address.

    # docker-machine ip

![Docker IP Address](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-ip-address.png)

### 6. Managing the Hosts ###

Now we can manage as many local VMs running Docker as we desire by running the docker-machine create command again and again as mentioned in the above steps.

If you are finished working with the running Docker hosts, we can simply run the **docker-machine stop** command to stop all the hosts which are active, and to start them again we can run **docker-machine start**.

    # docker-machine stop
    # docker-machine start

You can also specify a host to stop or start using the host name as an argument.

    $ docker-machine stop linux
    $ docker-machine start linux

### Conclusion ###

Finally, we have successfully created and managed a Docker host inside a VirtualBox VM using Docker Machine. Docker Machine really does enable people to create, deploy and manage Docker hosts quickly and easily on different platforms, as here we are running Docker hosts on the VirtualBox platform. The virtualbox driver API works for provisioning Docker on a local machine or on a virtual machine in the data center. Docker Machine ships with drivers for provisioning Docker locally with VirtualBox as well as remotely on Digital Ocean instances, and more drivers are in the works for AWS, Azure, VMware, and other infrastructure. If you have any questions or suggestions, please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/host-virtualbox-docker-machine/

作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:https://github.com/docker/machine/releases
[2]:https://github.com/boot2docker/boot2docker
@ -1,676 +0,0 @@
Translating by Ezio

Process of the Linux kernel building
================================================================================
Introduction
--------------------------------------------------------------------------------

I will not tell you how to build and install a custom Linux kernel on your machine; you can find many, many [resources](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code) that will help you do it. Instead, in this part we will learn what occurs when you type `make` in the directory with the Linux kernel source code. When I just started to learn the source code of the Linux kernel, the [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) was the first file that I opened. And it was scary :) This [makefile](https://en.wikipedia.org/wiki/Make_%28software%29) contained `1591` lines of code at the time when I wrote this part, when the kernel was at its [third](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) release candidate.

This makefile is the top makefile in the Linux kernel source code and the kernel build starts here. Yes, it is big; moreover, if you've read the source code of the Linux kernel you may have noted that every directory with source code has its own makefile. Of course it is not feasible to describe how each source file is compiled and linked, so we will look at compilation only for the standard case. You will not find here the building of the kernel's documentation, cleaning of the kernel source code, [tags](https://en.wikipedia.org/wiki/Ctags) generation, [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler) related stuff, etc. We will start from the `make` execution with the standard kernel configuration file and will finish with the building of the [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage).

It would be good if you're already familiar with the [make](https://en.wikipedia.org/wiki/Make_%28software%29) util, but I will try to describe every piece of code in this part anyway.

So let's start.

Preparation before the kernel compilation
---------------------------------------------------------------------------------

There are many things to prepare before the kernel compilation can be started. The main points here are to find and configure
the type of compilation, and to parse the command line arguments that are passed to the `make` util. So let's dive into the top `Makefile` of the Linux kernel.

The Linux kernel top `Makefile` is responsible for building two major products: [vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (the resident kernel image) and the modules (any module files). The [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) of the Linux kernel starts with the definition of the following variables:

```Makefile
VERSION = 4
PATCHLEVEL = 2
SUBLEVEL = 0
EXTRAVERSION = -rc3
NAME = Hurr durr I'ma sheep
```

These variables determine the current version of the Linux kernel and are used in different places, for example in the forming of the `KERNELVERSION` variable:

```Makefile
KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION)
```
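
With the values above, a quick shell sketch shows what `KERNELVERSION` evaluates to; the `${VAR:+word}` expansion plays the role of make's `$(if $(VAR),word)` here (it expands to `word` only when the variable is set and non-empty).

```shell
# Shell analogue of the KERNELVERSION assembly above.
VERSION=4
PATCHLEVEL=2
SUBLEVEL=0
EXTRAVERSION=-rc3

KERNELVERSION="${VERSION}${PATCHLEVEL:+.${PATCHLEVEL}${SUBLEVEL:+.${SUBLEVEL}}}${EXTRAVERSION}"
echo "$KERNELVERSION"    # prints: 4.2.0-rc3
```

Note that `SUBLEVEL=0` still counts as non-empty, which is why the `.0` appears in the result.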

After this we can see a couple of `ifeq` conditions that check some of the parameters passed to `make`. The Linux kernel `makefiles` provide a special `make help` target that prints all available targets and some of the command line arguments that can be passed to `make`. For example: `make V=1` enables verbose builds. The first `ifeq` condition checks whether the `V=n` option was passed to make:

```Makefile
ifeq ("$(origin V)", "command line")
  KBUILD_VERBOSE = $(V)
endif
ifndef KBUILD_VERBOSE
  KBUILD_VERBOSE = 0
endif

ifeq ($(KBUILD_VERBOSE),1)
  quiet =
  Q =
else
  quiet=quiet_
  Q = @
endif

export quiet Q KBUILD_VERBOSE
```

If this option is passed to `make` we set the `KBUILD_VERBOSE` variable to the value of the `V` option; otherwise we set the `KBUILD_VERBOSE` variable to zero. After this we check the value of the `KBUILD_VERBOSE` variable and set the values of the `quiet` and `Q` variables depending on it. The `@` symbol suppresses the echoing of the command, so when it is set before a command we will see something like `CC scripts/mod/empty.o` instead of `Compiling .... scripts/mod/empty.o`. In the end we just export all of these variables. The next `ifeq` statement checks whether the `O=/dir` option was passed to `make`. This option allows us to locate all output files in the given `dir`:

```Makefile
ifeq ($(KBUILD_SRC),)

ifeq ("$(origin O)", "command line")
  KBUILD_OUTPUT := $(O)
endif

ifneq ($(KBUILD_OUTPUT),)
saved-output := $(KBUILD_OUTPUT)
KBUILD_OUTPUT := $(shell mkdir -p $(KBUILD_OUTPUT) && cd $(KBUILD_OUTPUT) \
		&& /bin/pwd)
$(if $(KBUILD_OUTPUT),, \
     $(error failed to create output directory "$(saved-output)"))

sub-make: FORCE
	$(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \
	-f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS))

skip-makefile := 1
endif # ifneq ($(KBUILD_OUTPUT),)
endif # ifeq ($(KBUILD_SRC),)
```

We check the `KBUILD_SRC` variable, which represents the top directory of the source code of the Linux kernel; if it is empty (it is empty the first time the makefile is executed), we set the `KBUILD_OUTPUT` variable to the value passed with the `O` option (if this option was passed). In the next step we check this `KBUILD_OUTPUT` variable and, if it is set, we do the following things:

* Store the value of `KBUILD_OUTPUT` in the temporary `saved-output` variable;
* Try to create the given output directory;
* Check that the directory was created, otherwise print an error;
* If the custom output directory was created successfully, execute `make` again with the new directory (see the `-C` option).

The next `ifeq` statements check whether the `C` or `M` options were passed to make:

```Makefile
ifeq ("$(origin C)", "command line")
  KBUILD_CHECKSRC = $(C)
endif
ifndef KBUILD_CHECKSRC
  KBUILD_CHECKSRC = 0
endif

ifeq ("$(origin M)", "command line")
  KBUILD_EXTMOD := $(M)
endif
```

The first option, `C`, tells the `makefile` to check all `c` source code with the tool provided by the `$CHECK` environment variable; by default it is [sparse](https://en.wikipedia.org/wiki/Sparse). The second option, `M`, enables the building of external modules (we will not see this case in this part). Having set these variables, we check the `KBUILD_SRC` variable and, if it is not set, we set the `srctree` variable to `.`:

```Makefile
ifeq ($(KBUILD_SRC),)
        srctree := .
endif

objtree	:= .
src	:= $(srctree)
obj	:= $(objtree)

export srctree objtree VPATH
```

This tells the `Makefile` that the source tree of the Linux kernel is in the current directory where the `make` command was executed. After this we set `objtree` and other variables to this directory and export them. The next step is getting the value for the `SUBARCH` variable, which represents what the underlying architecture is:

```Makefile
SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
				  -e s/sun4u/sparc64/ \
				  -e s/arm.*/arm/ -e s/sa110/arm/ \
				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
```
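
To see the normalization concretely, here is a small self-contained sketch that feeds a few sample `uname -m` strings through an abbreviated version of the same sed substitutions (only four of the rules are reproduced):

```shell
# Apply a subset of the SUBARCH sed rules to a given machine string.
normalize_arch() {
    echo "$1" | sed -e 's/i.86/x86/' -e 's/x86_64/x86/' \
        -e 's/arm.*/arm/' -e 's/aarch64.*/arm64/'
}

normalize_arch x86_64     # prints: x86
normalize_arch i686       # prints: x86
normalize_arch aarch64    # prints: arm64
```

Note that `aarch64` does not contain the substring `arm`, so the `arm.*` rule leaves it alone and the dedicated `aarch64.*` rule maps it to `arm64`.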

As you can see, it executes the [uname](https://en.wikipedia.org/wiki/Uname) util that prints information about the machine, operating system and architecture. It takes the output of `uname`, parses it and assigns the result to the `SUBARCH` variable. Once we have `SUBARCH`, we set the `SRCARCH` variable, which provides the directory of the certain architecture, and `hdr-arch`, which provides the directory for the header files:

```Makefile
ifeq ($(ARCH),i386)
        SRCARCH := x86
endif
ifeq ($(ARCH),x86_64)
        SRCARCH := x86
endif

hdr-arch := $(SRCARCH)
```

Note that `ARCH` is an alias for `SUBARCH`. In the next step we set the `KCONFIG_CONFIG` variable, which represents the path to the kernel configuration file; if it was not set before, it defaults to `.config`:

```Makefile
KCONFIG_CONFIG	?= .config
export KCONFIG_CONFIG
```
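
make's `?=` operator assigns only when the variable is not already set; a shell sketch of the same "default if unset" idea:

```shell
# Shell analogue of make's ?= for KCONFIG_CONFIG.
unset KCONFIG_CONFIG
: "${KCONFIG_CONFIG:=.config}"       # unset, so the default applies
echo "$KCONFIG_CONFIG"               # prints: .config

KCONFIG_CONFIG=custom.config
: "${KCONFIG_CONFIG:=.config}"       # already set, so the default is ignored
echo "$KCONFIG_CONFIG"               # prints: custom.config
```

This is why `make KCONFIG_CONFIG=myconfig` (or an exported environment variable) overrides the `.config` default.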

and the [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) that will be used during kernel compilation:

```Makefile
CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \
	  else if [ -x /bin/bash ]; then echo /bin/bash; \
	  else echo sh; fi ; fi)
```

The next set of variables is related to the compilers that will be used during Linux kernel compilation. We set the host compilers for `c` and `c++` and the flags for them:

```Makefile
HOSTCC       = gcc
HOSTCXX      = g++
HOSTCFLAGS   = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -std=gnu89
HOSTCXXFLAGS = -O2
```

Next we will meet the `CC` variable, which represents a compiler too, so why do we need the `HOST*` variables? `CC` is the target compiler that will be used during kernel compilation, while `HOSTCC` will be used during compilation of the set of `host` programs (we will see this soon). After this we can see the definitions of the `KBUILD_MODULES` and `KBUILD_BUILTIN` variables, which are used to determine what to compile (kernel, modules or both):

```Makefile
KBUILD_MODULES :=
KBUILD_BUILTIN := 1

ifeq ($(MAKECMDGOALS),modules)
  KBUILD_BUILTIN := $(if $(CONFIG_MODVERSIONS),1)
endif
```

Here we can see the definitions of these variables; the value of `KBUILD_BUILTIN` will depend on the `CONFIG_MODVERSIONS` kernel configuration parameter if we pass only `modules` to `make`. The next step is the inclusion of the

```Makefile
include scripts/Kbuild.include
```

`kbuild` file. [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt), or the `Kernel Build System`, is a special infrastructure to manage the building of the kernel and its modules. The `kbuild` files have the same syntax as makefiles. The [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) file provides some generic definitions for the `kbuild` system. After including this `kbuild` file we can see the definitions of the variables that are related to the different tools used during kernel and module compilation (like the linker, compilers, utils from [binutils](http://www.gnu.org/software/binutils/), etc.):

```Makefile
AS      = $(CROSS_COMPILE)as
LD      = $(CROSS_COMPILE)ld
CC      = $(CROSS_COMPILE)gcc
CPP     = $(CC) -E
AR      = $(CROSS_COMPILE)ar
NM      = $(CROSS_COMPILE)nm
STRIP   = $(CROSS_COMPILE)strip
OBJCOPY = $(CROSS_COMPILE)objcopy
OBJDUMP = $(CROSS_COMPILE)objdump
AWK     = awk
...
...
...
```

After the definitions of these variables we define two more: `USERINCLUDE` and `LINUXINCLUDE`. They will contain the paths of the directories with headers (public for users in the first case and for the kernel in the second case):

```Makefile
USERINCLUDE    := \
		-I$(srctree)/arch/$(hdr-arch)/include/uapi \
		-Iarch/$(hdr-arch)/include/generated/uapi \
		-I$(srctree)/include/uapi \
		-Iinclude/generated/uapi \
		-include $(srctree)/include/linux/kconfig.h

LINUXINCLUDE    := \
		-I$(srctree)/arch/$(hdr-arch)/include \
		...
```

And the standard flags for the C compiler:

```Makefile
KBUILD_CFLAGS   := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
		   -fno-strict-aliasing -fno-common \
		   -Werror-implicit-function-declaration \
		   -Wno-format-security \
		   -std=gnu89
```

These are not the final compiler flags; they can be updated by the other makefiles (for example kbuilds from `arch/`). After all of this, all the variables will be exported to be available in the other makefiles. The following two variables, `RCS_FIND_IGNORE` and `RCS_TAR_IGNORE`, will contain files that are ignored in the version control system:

```Makefile
export RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o \
			  -name CVS -o -name .pc -o -name .hg -o -name .git \) \
			  -prune -o
export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \
			 --exclude CVS --exclude .pc --exclude .hg --exclude .git
```
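
The `-prune -o` idiom that `RCS_FIND_IGNORE` relies on can be seen in isolation with a small self-contained sketch; the temporary directory layout below is invented purely for the demonstration:

```shell
# Build a tiny tree with a .git directory and one source file,
# then show that find's -prune skips the VCS directory entirely.
demo=$(mktemp -d)
mkdir -p "$demo/.git" "$demo/src"
touch "$demo/.git/config" "$demo/src/main.c"

found=$(find "$demo" \( -name .git \) -prune -o -name '*.c' -print)
echo "$found"    # prints the path of src/main.c only; .git/config is pruned

rm -rf "$demo"
```

The kernel uses the same pattern (with more VCS names) so that helper targets like the tags generation never descend into version-control metadata.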

That's all. We have finished with all the preparations; the next point is the building of `vmlinux`.

Directly to the kernel build
--------------------------------------------------------------------------------

As we have finished all the preparations, the next step in the root makefile is related to the kernel build. Until this moment nothing had been printed to our terminal after the execution of the `make` command. But now the first steps of the compilation start. We need to go to line [598](https://github.com/torvalds/linux/blob/master/Makefile#L598) of the Linux kernel top makefile, where we find the `vmlinux` target:

```Makefile
all: vmlinux
include arch/$(SRCARCH)/Makefile
```

Don't worry that we have skipped many lines in the Makefile that are placed after `export RCS_FIND_IGNORE.....` and before `all: vmlinux.....`. This part of the makefile is responsible for the `make *.config` targets and, as I wrote at the beginning of this part, we will see only the building of the kernel in a general way.

The `all:` target is the default when no target is given on the command line. You can see here that we include the architecture-specific makefile there (in our case it will be [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)). From this moment we will continue from this makefile. As we can see, the `all` target depends on the `vmlinux` target, which is defined a little lower in the top makefile:

```Makefile
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE
```

`vmlinux` is the Linux kernel in a statically linked executable file format. The [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) script links and combines the different compiled subsystems into vmlinux. The second target is `vmlinux-deps`, defined as:

```Makefile
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN)
```

and it consists of the set of `built-in.o` files from each top directory of the Linux kernel. Later, when we go through all the directories of the Linux kernel, `Kbuild` will compile all the `$(obj-y)` files. It then calls `$(LD) -r` to merge these files into one `built-in.o` file. At this moment we have no `vmlinux-deps`, so the `vmlinux` target will not be executed now. For me, `vmlinux-deps` contains the following files:

```
arch/x86/kernel/vmlinux.lds arch/x86/kernel/head_64.o
arch/x86/kernel/head64.o    arch/x86/kernel/head.o
init/built-in.o             usr/built-in.o
arch/x86/built-in.o         kernel/built-in.o
mm/built-in.o               fs/built-in.o
ipc/built-in.o              security/built-in.o
crypto/built-in.o           block/built-in.o
lib/lib.a                   arch/x86/lib/lib.a
lib/built-in.o              arch/x86/lib/built-in.o
drivers/built-in.o          sound/built-in.o
firmware/built-in.o         arch/x86/pci/built-in.o
arch/x86/power/built-in.o   arch/x86/video/built-in.o
net/built-in.o
```

The next target that can be executed is the following:

```Makefile
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;
$(vmlinux-dirs): prepare scripts
	$(Q)$(MAKE) $(build)=$@
```

As we can see, `vmlinux-dirs` depends on two targets: `prepare` and `scripts`. `prepare` is defined in the top `Makefile` of the Linux kernel and executes three stages of preparations:

```Makefile
prepare: prepare0
prepare0: archprepare FORCE
	$(Q)$(MAKE) $(build)=.
archprepare: archheaders archscripts prepare1 scripts_basic

prepare1: prepare2 $(version_h) include/generated/utsrelease.h \
                   include/config/auto.conf
	$(cmd_crmodverdir)
prepare2: prepare3 outputmakefile asm-generic
```

The first target, `prepare0`, expands to `archprepare`, which in turn expands to `archheaders` and `archscripts`, defined in the `x86_64` specific [Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look at it. The `x86_64` specific makefile starts with the definition of variables that are related to the architecture-specific configs ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs), etc.). After this it defines flags for compiling [16-bit](https://en.wikipedia.org/wiki/Real_mode) code, calculates the `BITS` variable that can be `32` for `i386` or `64` for `x86_64`, flags for the assembly source code, flags for the linker and many many more (you can find all the definitions in [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)). The first target in the makefile is `archheaders`, which generates the syscall table:

```Makefile
archheaders:
	$(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all
```

And the second target in this makefile is `archscripts`:

```Makefile
archscripts: scripts_basic
	$(Q)$(MAKE) $(build)=arch/x86/tools relocs
```

We can see that it depends on the `scripts_basic` target from the top [Makefile](https://github.com/torvalds/linux/blob/master/Makefile). First, let's look at the `scripts_basic` target, which executes make for the [scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) makefile:

```Makefile
scripts_basic:
	$(Q)$(MAKE) $(build)=scripts/basic
```

The `scripts/basic/Makefile` contains targets for the compilation of two host programs: `fixdep` and `bin2c`:

```Makefile
hostprogs-y		:= fixdep
hostprogs-$(CONFIG_BUILD_BIN2C) += bin2c
always			:= $(hostprogs-y)

$(addprefix $(obj)/,$(filter-out fixdep,$(always))): $(obj)/fixdep
```

The first program is `fixdep` - it optimizes the list of dependencies generated by [gcc](https://gcc.gnu.org/), which tells make when to remake a source code file. The second program, `bin2c`, depends on the value of the `CONFIG_BUILD_BIN2C` kernel configuration option and is a very small C program that converts a binary on stdin to a C include on stdout. You may note a strange notation here: `hostprogs-y`, etc. This notation is used in all `kbuild` files and you can read more about it in the [documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt). In our case `hostprogs-y` tells `kbuild` that there is one host program named `fixdep` that will be built from `fixdep.c`, located in the same directory as the `Makefile`. The first output after we execute the `make` command in our terminal will be the result of this `kbuild` file:

```
$ make
  HOSTCC  scripts/basic/fixdep
```

As the `scripts_basic` target was executed, the `archscripts` target will execute `make` for the [arch/x86/tools](https://github.com/torvalds/linux/blob/master/arch/x86/tools/Makefile) makefile with the `relocs` target:

```Makefile
$(Q)$(MAKE) $(build)=arch/x86/tools relocs
```

The `relocs_32.c` and `relocs_64.c` files that contain [relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29) information will be compiled, and we will see it in the `make` output:

```Makefile
  HOSTCC  arch/x86/tools/relocs_32.o
  HOSTCC  arch/x86/tools/relocs_64.o
  HOSTCC  arch/x86/tools/relocs_common.o
  HOSTLD  arch/x86/tools/relocs
```

There is a check of `version.h` after the compilation of `relocs.c`:

```Makefile
$(version_h): $(srctree)/Makefile FORCE
	$(call filechk,version.h)
	$(Q)rm -f $(old_version_h)
```

We can see it in the output:

```
CHK include/config/kernel.release
```

and the building of the `generic` assembly headers with the `asm-generic` target from `arch/x86/include/generated/asm`, which is generated in the top Makefile of the Linux kernel. After the `asm-generic` target, `archprepare` will be done, so the `prepare0` target will be executed. As I wrote above:
|
||||
|
||||
```Makefile
|
||||
prepare0: archprepare FORCE
|
||||
$(Q)$(MAKE) $(build)=.
|
||||
```
|
||||
|
||||
Note on the `build`. It defined in the [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) and looks like this:
|
||||
|
||||
```Makefile
|
||||
build := -f $(srctree)/scripts/Makefile.build obj
|
||||
```
|
||||
|
||||
or in our case it is current source directory - `.`:
|
||||
|
||||
```Makefile
|
||||
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.build obj=.
|
||||
```
|
||||
|
||||
The [scripts/Makefile.build](https://github.com/torvalds/linux/blob/master/scripts/Makefile.build) tries to find the `Kbuild` file by the given directory via the `obj` parameter, include this `Kbuild` files:
|
||||
|
||||
```Makefile
|
||||
include $(kbuild-file)
|
||||
```
|
||||
|
||||
and build targets from it. In our case `.` contains the [Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild) file that generates the `kernel/bounds.s` and the `arch/x86/kernel/asm-offsets.s`. After this the `prepare` target finished to work. The `vmlinux-dirs` also depends on the second target - `scripts` that compiles following programs: `file2alias`, `mk_elfconfig`, `modpost` and etc... After scripts/host-programs compilation our `vmlinux-dirs` target can be executed. First of all let's try to understand what does `vmlinux-dirs` contain. For my case it contains paths of the following kernel directories:

```
init usr arch/x86 kernel mm fs ipc security crypto block
drivers sound firmware arch/x86/pci arch/x86/power
arch/x86/video net lib arch/x86/lib
```

We can find the definition of `vmlinux-dirs` in the top [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) of the Linux kernel:

```Makefile
vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
		     $(core-y) $(core-m) $(drivers-y) $(drivers-m) \
		     $(net-y) $(net-m) $(libs-y) $(libs-m)))

init-y		:= init/
drivers-y	:= drivers/ sound/ firmware/
net-y		:= net/
libs-y		:= lib/
...
...
...
```
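
The `filter` and `patsubst` calls in this definition are easy to emulate outside of make; the following Python sketch (with a shortened, illustrative directory list) shows the same transformation:

```python
# Emulating GNU make's $(filter %/, ...) and $(patsubst %/,%, ...) calls
# that build vmlinux-dirs; the directory list is a shortened illustration.
dirs = ["init/", "drivers/", "sound/", "firmware/", "net/", "lib/"]

# $(filter %/, ...): keep only the words ending in '/'
filtered = [d for d in dirs if d.endswith("/")]

# $(patsubst %/,%, ...): strip the trailing '/'
vmlinux_dirs = [d[:-1] for d in filtered]

print(vmlinux_dirs)  # ['init', 'drivers', 'sound', 'firmware', 'net', 'lib']
```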

Here we strip the trailing `/` from each directory name with the help of the `patsubst` and `filter` functions and put the result into `vmlinux-dirs`. So we have the list of directories in `vmlinux-dirs`, and the following code:

```Makefile
$(vmlinux-dirs): prepare scripts
	$(Q)$(MAKE) $(build)=$@
```

The `$@` represents each member of `vmlinux-dirs` here, which means that make will go recursively over all directories from `vmlinux-dirs` and their internal directories (depending on the configuration) and execute `make` there. We can see it in the output:

```
CC init/main.o
CHK include/generated/compile.h
CC init/version.o
CC init/do_mounts.o
...
CC arch/x86/crypto/glue_helper.o
AS arch/x86/crypto/aes-x86_64-asm_64.o
CC arch/x86/crypto/aes_glue.o
...
AS arch/x86/entry/entry_64.o
AS arch/x86/entry/thunk_64.o
CC arch/x86/entry/syscall_64.o
```

The source code in each directory will be compiled and linked into a `built-in.o`:

```
$ find . -name built-in.o
./arch/x86/crypto/built-in.o
./arch/x86/crypto/sha-mb/built-in.o
./arch/x86/net/built-in.o
./init/built-in.o
./usr/built-in.o
...
...
```

Ok, all `built-in.o` files are built; now we can go back to the `vmlinux` target. As you remember, the `vmlinux` target is in the top Makefile of the Linux kernel. Before linking `vmlinux` it builds [samples](https://github.com/torvalds/linux/tree/master/samples), [Documentation](https://github.com/torvalds/linux/tree/master/Documentation), etc., but I will not describe that here, as I wrote at the beginning of this part.

```Makefile
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE
	...
	...
	+$(call if_changed,link-vmlinux)
```

As you can see, its main purpose is to call the [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) script, which links all the `built-in.o` files into one statically linked executable and creates the [System.map](https://en.wikipedia.org/wiki/System.map). In the end we will see the following output:

```
LINK vmlinux
LD vmlinux.o
MODPOST vmlinux.o
GEN .version
CHK include/generated/compile.h
UPD include/generated/compile.h
CC init/version.o
LD init/built-in.o
KSYM .tmp_kallsyms1.o
KSYM .tmp_kallsyms2.o
LD vmlinux
SORTEX vmlinux
SYSMAP System.map
```

and `vmlinux` and `System.map` in the root of the Linux kernel source tree:

```
$ ls vmlinux System.map
System.map vmlinux
```
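
The `System.map` file produced here is a plain-text table of `address type symbol` lines, so it is easy to parse programmatically. A hedged sketch (the sample lines below are illustrative, not taken from a real build):

```python
# Parse System.map-style "address type symbol" lines into a dict.
# The sample data below is illustrative, not output of a real build.
sample = """\
ffffffff81000000 T _text
ffffffff82ab0000 B _end
"""

symbols = {}
for line in sample.splitlines():
    addr, sym_type, name = line.split()
    symbols[name] = (int(addr, 16), sym_type)

print(hex(symbols["_text"][0]))  # 0xffffffff81000000
```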

That's all, `vmlinux` is ready. The next step is the creation of the [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage).

Building bzImage
--------------------------------------------------------------------------------

The `bzImage` is the compressed Linux kernel image. We can get it by executing `make bzImage` after `vmlinux` is built. Alternatively, we can just execute `make` without arguments and get `bzImage` anyway, because it is the default image:

```Makefile
all: bzImage
```

in [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look at this target; it will help us understand how the image is built. As I already said, the `bzImage` target is defined in [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) and looks like this:

```Makefile
bzImage: vmlinux
	$(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
	$(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot
	$(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@
```

We can see here that, first of all, `make` is called for the boot directory; in our case it is:

```Makefile
boot := arch/x86/boot
```

The main goal now is to build the source code in the `arch/x86/boot` and `arch/x86/boot/compressed` directories, build `setup.bin` and `vmlinux.bin`, and build the `bzImage` from them in the end. The first target in the [arch/x86/boot/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/Makefile) is `$(obj)/setup.elf`:

```Makefile
$(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
	$(call if_changed,ld)
```

We already have the `setup.ld` linker script in the `arch/x86/boot` directory, and the `SETUP_OBJS` variable expands to all the object files from the `boot` directory. We can see the first output:

```
AS arch/x86/boot/bioscall.o
CC arch/x86/boot/cmdline.o
AS arch/x86/boot/copy.o
HOSTCC arch/x86/boot/mkcpustr
CPUSTR arch/x86/boot/cpustr.h
CC arch/x86/boot/cpu.o
CC arch/x86/boot/cpuflags.o
CC arch/x86/boot/cpucheck.o
CC arch/x86/boot/early_serial_console.o
CC arch/x86/boot/edd.o
```

The next source code file is [arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S), but we can't build it yet because this target depends on the following two header files:

```Makefile
$(obj)/header.o: $(obj)/voffset.h $(obj)/zoffset.h
```

The first is `voffset.h`, generated by a `sed` script that gets two addresses from `vmlinux` with the `nm` utility:

```C
#define VO__end 0xffffffff82ab0000
#define VO__text 0xffffffff81000000
```
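
The actual extraction is done by a small `sed` program over `nm vmlinux` output; here is a hedged Python equivalent of that extraction (the `nm` lines below are illustrative sample data, not from a real build):

```python
import re

# Turn nm-style "address type symbol" lines into the VO_* defines.
# The nm output below is made-up sample data for illustration.
nm_output = """\
ffffffff81000000 T _text
ffffffff81234567 T start_kernel
ffffffff82ab0000 B _end
"""

defines = []
for line in nm_output.splitlines():
    # only the _text and _end symbols are of interest
    m = re.match(r"^([0-9a-f]+) [ABT] (_text|_end)$", line)
    if m:
        defines.append("#define VO_%s 0x%s" % (m.group(2), m.group(1)))

print("\n".join(defines))
```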

They are the start and end addresses of the kernel. The second is `zoffset.h`, which depends on the `vmlinux` target from the [arch/x86/boot/compressed/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/Makefile):

```Makefile
$(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
	$(call if_changed,zoffset)
```

The `$(obj)/compressed/vmlinux` target depends on `vmlinux-objs-y`, which compiles the source code files from the [arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) directory, generates `vmlinux.bin` and `vmlinux.bin.bz2`, and compiles the `mkpiggy` program. We can see this in the output:

```
LDS arch/x86/boot/compressed/vmlinux.lds
AS arch/x86/boot/compressed/head_64.o
CC arch/x86/boot/compressed/misc.o
CC arch/x86/boot/compressed/string.o
CC arch/x86/boot/compressed/cmdline.o
OBJCOPY arch/x86/boot/compressed/vmlinux.bin
BZIP2 arch/x86/boot/compressed/vmlinux.bin.bz2
HOSTCC arch/x86/boot/compressed/mkpiggy
```
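
The `BZIP2` step compresses the kernel payload and the build appends the `u32` size of the uncompressed data after the compressed stream. A hedged Python sketch of that layout, using dummy data (the little-endian encoding is an assumption here):

```python
import bz2
import struct

# Append the 32-bit size of the uncompressed payload after the
# compressed stream, as done for vmlinux.bin.bz2.
# The payload is dummy data standing in for vmlinux.bin.all,
# and little-endian encoding is an assumption of this sketch.
payload = b"\x90" * 4096            # stand-in for vmlinux.bin.all
compressed = bz2.compress(payload)

# u32 size of the uncompressed data, appended at the end
blob = compressed + struct.pack("<I", len(payload))

# a consumer can read the size back from the last four bytes
size = struct.unpack("<I", blob[-4:])[0]
print(size)  # 4096
```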

Here `vmlinux.bin` is `vmlinux` with debugging information and comments stripped, and `vmlinux.bin.bz2` is the compressed `vmlinux.bin.all` plus the `u32` size of `vmlinux.bin.all`. The `vmlinux.bin.all` is `vmlinux.bin + vmlinux.relocs`, where `vmlinux.relocs` is the `vmlinux` that was processed by the `relocs` program (see above). Now that we have these files, the `piggy.S` assembly file will be generated with the `mkpiggy` program and compiled:

```
MKPIGGY arch/x86/boot/compressed/piggy.S
AS arch/x86/boot/compressed/piggy.o
```

This assembly file will contain the computed offset of the compressed kernel. After this we can see that `zoffset.h` is generated:

```
ZOFFSET arch/x86/boot/zoffset.h
```

As the `zoffset.h` and the `voffset.h` are now generated, compilation of the source code files from [arch/x86/boot](https://github.com/torvalds/linux/tree/master/arch/x86/boot/) can continue:

```
AS arch/x86/boot/header.o
CC arch/x86/boot/main.o
CC arch/x86/boot/mca.o
CC arch/x86/boot/memory.o
CC arch/x86/boot/pm.o
AS arch/x86/boot/pmjump.o
CC arch/x86/boot/printf.o
CC arch/x86/boot/regs.o
CC arch/x86/boot/string.o
CC arch/x86/boot/tty.o
CC arch/x86/boot/video.o
CC arch/x86/boot/video-mode.o
CC arch/x86/boot/video-vga.o
CC arch/x86/boot/video-vesa.o
CC arch/x86/boot/video-bios.o
```

After all the source code files are compiled, they are linked into `setup.elf`:

```
LD arch/x86/boot/setup.elf
```

or:

```
ld -m elf_x86_64 -T arch/x86/boot/setup.ld arch/x86/boot/a20.o arch/x86/boot/bioscall.o arch/x86/boot/cmdline.o arch/x86/boot/copy.o arch/x86/boot/cpu.o arch/x86/boot/cpuflags.o arch/x86/boot/cpucheck.o arch/x86/boot/early_serial_console.o arch/x86/boot/edd.o arch/x86/boot/header.o arch/x86/boot/main.o arch/x86/boot/mca.o arch/x86/boot/memory.o arch/x86/boot/pm.o arch/x86/boot/pmjump.o arch/x86/boot/printf.o arch/x86/boot/regs.o arch/x86/boot/string.o arch/x86/boot/tty.o arch/x86/boot/video.o arch/x86/boot/video-mode.o arch/x86/boot/version.o arch/x86/boot/video-vga.o arch/x86/boot/video-vesa.o arch/x86/boot/video-bios.o -o arch/x86/boot/setup.elf
```

The last two steps are the creation of `setup.bin`, which contains the compiled code from the `arch/x86/boot/*` directory:

```
objcopy -O binary arch/x86/boot/setup.elf arch/x86/boot/setup.bin
```

and the creation of `vmlinux.bin` from the `vmlinux`:

```
objcopy -O binary -R .note -R .comment -S arch/x86/boot/compressed/vmlinux arch/x86/boot/vmlinux.bin
```

In the end we compile the host program [arch/x86/boot/tools/build.c](https://github.com/torvalds/linux/blob/master/arch/x86/boot/tools/build.c), which creates our `bzImage` from `setup.bin` and `vmlinux.bin`:

```
arch/x86/boot/tools/build arch/x86/boot/setup.bin arch/x86/boot/vmlinux.bin arch/x86/boot/zoffset.h arch/x86/boot/bzImage
```

Actually the `bzImage` is the concatenation of `setup.bin` and `vmlinux.bin`. In the end we see the output familiar to everyone who has ever built the Linux kernel from source:

```
Setup is 16268 bytes (padded to 16384 bytes).
System is 4704 kB
CRC 94a88f9a
Kernel: arch/x86/boot/bzImage is ready (#5)
```
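
The `Setup is ... (padded to ...)` message reflects how the `build` tool pads the setup code to a sector boundary before appending the kernel image. A hedged Python sketch of that padding and concatenation, using dummy data and an assumed 512-byte sector size:

```python
# Pad the setup part to a sector boundary, then append the kernel image,
# mirroring the "Setup is N bytes (padded to M bytes)" message.
# The contents and the 512-byte sector size are assumptions of this sketch.
SECTOR = 512

setup_bin = b"\x55" * 1000           # stand-in for setup.bin
vmlinux_bin = b"\xaa" * 3000         # stand-in for vmlinux.bin

# round the setup size up to the next sector boundary and zero-pad
padded_len = (len(setup_bin) + SECTOR - 1) // SECTOR * SECTOR
setup_padded = setup_bin + b"\x00" * (padded_len - len(setup_bin))

# the image is simply the padded setup followed by the kernel
bzimage = setup_padded + vmlinux_bin
print(len(setup_padded), len(bzimage))  # 1024 4024
```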

That's all.

Conclusion
================================================================================

This is the end of this part, in which we saw all the steps from the execution of the `make` command to the generation of the `bzImage`. I know the Linux kernel makefiles and the Linux kernel build process may seem confusing at first glance, but it is not so hard. I hope this part helps you understand the Linux kernel build process.

Links
================================================================================

* [GNU make util](https://en.wikipedia.org/wiki/Make_%28software%29)
* [Linux kernel top Makefile](https://github.com/torvalds/linux/blob/master/Makefile)
* [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler)
* [Ctags](https://en.wikipedia.org/wiki/Ctags)
* [sparse](https://en.wikipedia.org/wiki/Sparse)
* [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage)
* [uname](https://en.wikipedia.org/wiki/Uname)
* [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29)
* [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt)
* [binutils](http://www.gnu.org/software/binutils/)
* [gcc](https://gcc.gnu.org/)
* [Documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt)
* [System.map](https://en.wikipedia.org/wiki/System.map)
* [Relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29)

--------------------------------------------------------------------------------

via: https://github.com/0xAX/linux-insides/blob/master/Misc/how_kernel_compiled.md

Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
@ -1,138 +0,0 @@

(translating by runningwater)
How to Install Logwatch on Ubuntu 15.04
================================================================================

Hi, today we are going to illustrate the setup of Logwatch on the Ubuntu 15.04 operating system, though it can be used on any Linux or UNIX-like operating system. Logwatch is a customizable log analysis and reporting system that goes through your logs for a given period of time and produces a report on the areas that you wish, with the details you want. It is an easy tool to install, configure, and review, and to take actions from that will improve security based on the data it provides. Logwatch scans the log files of major operating system components, such as SSH and the web server, and forwards a summary that contains the valuable items that need to be looked at.

### Pre-installation Setup ###

We will be using the Ubuntu 15.04 operating system to deploy Logwatch, so as a prerequisite for the installation, make sure that your email setup is working, as it will be used to send the daily gathered reports to the administrators. Your system repositories should be enabled, as we will be installing Logwatch from the available universe repositories.

Then open the terminal of your Ubuntu operating system, log in as the root user, and update your system packages before moving to the Logwatch installation.

    root@ubuntu-15:~# apt-get update

### Installing Logwatch ###

Once your system is updated and you have fulfilled all the prerequisites, run the following command to start the installation of Logwatch on your server.

    root@ubuntu-15:~# apt-get install logwatch

The Logwatch installation process starts with the addition of some extra required packages, once you press "Y" to accept the required changes to the system.

During the installation process you will be prompted to configure Postfix according to your mail server's setup. Here we used "Local only" in the tutorial for simplicity; you can choose from the other available options as per your infrastructure requirements, then press "OK" to proceed.

![Postfix Configurations](http://blog.linoxide.com/wp-content/uploads/2015/08/21.png)

Then you have to choose your mail server's name, which will also be used by other programs, so it should be a single fully qualified domain name (FQDN).

![Postfix Setup](http://blog.linoxide.com/wp-content/uploads/2015/08/31.png)

Once you press "OK" after the Postfix configuration, the Logwatch installation process completes with the default Postfix configuration.

![Logwatch Completion](http://blog.linoxide.com/wp-content/uploads/2015/08/41.png)

You can check the status of Postfix by issuing the following command in the terminal; it should be in the active state.

    root@ubuntu-15:~# service postfix status

![Postfix Status](http://blog.linoxide.com/wp-content/uploads/2015/08/51.png)

To confirm the installation of Logwatch with its default configuration, issue the simple "logwatch" command as shown.

    root@ubuntu-15:~# logwatch

The output of the above command results in the following compiled report in the terminal.

![Logwatch Report](http://blog.linoxide.com/wp-content/uploads/2015/08/61.png)

### Logwatch Configurations ###

Now, after the successful installation of Logwatch, we need to make a few changes in its configuration file, located under the path shown below. Let's open it with a file editor and update the configuration as required.

    root@ubuntu-15:~# vim /usr/share/logwatch/default.conf/logwatch.conf

**Output/Format Options**

By default Logwatch prints to stdout in text with no encoding. To make email the default, set "Output = mail", and to save to a file, set "Output = file". So you can change the default configuration as per your required settings.

    Output = stdout

To make HTML the default format, update the following line, if you are using Internet email configurations.

    Format = text

Now add the default person the reports should be mailed to; it can be a local account or a complete email address that you are free to specify in this line.

    MailTo = root
    #MailTo = user@test.com

The default person the reports are sent from can be a local account or any other address you wish to use.

    # complete email address.
    MailFrom = Logwatch

Save the changes made in the configuration file of Logwatch, leaving the other parameters at their defaults.

**Cronjob Configuration**

Now edit the "00logwatch" file in the daily cron directory to configure the email address that Logwatch reports should be forwarded to.

    root@ubuntu-15:~# vim /etc/cron.daily/00logwatch

Here you need to use "--mailto user@test.com" instead of "--output mail", then save the file.

![Logwatch Cronjob](http://blog.linoxide.com/wp-content/uploads/2015/08/71.png)

### Using Logwatch Report ###

Now we generate a test report by executing the "logwatch" command in the terminal, getting its result in text format within the terminal.

    root@ubuntu-15:~# logwatch

The generated report starts with its execution time and date. It comprises different sections, each opening with a begin marker and closing with an end marker after showing the complete information from the logs of that section.

Here is what its starting point looks like: it begins by showing all the installed packages in the system, as shown below.

![dpkg status](http://blog.linoxide.com/wp-content/uploads/2015/08/81.png)

The following sections show the log information about the login sessions, rsyslog, and the current and recent SSH connections on the system.

![logwatch report](http://blog.linoxide.com/wp-content/uploads/2015/08/9.png)

The Logwatch report ends by showing the secure sudo logs and the disk space usage of the root directory, as shown below.

![Logwatch end report](http://blog.linoxide.com/wp-content/uploads/2015/08/10.png)

You can also check the generated emails about the Logwatch reports by opening the following file.

    root@ubuntu-15:~# vim /var/mail/root

Here you will be able to see all the emails generated for your configured users, with their message delivery status.

### More about Logwatch ###

Logwatch is a great tool, and there is more to learn about it; if you are interested in learning more, you can get much help from the few commands below.

    root@ubuntu-15:~# man logwatch

The above command contains the complete user manual for Logwatch, so read it carefully; to exit the manual, simply press "q".

To get help about Logwatch command usage, you can run the following help command for further details.

    root@ubuntu-15:~# logwatch --help

### Conclusion ###

At the end of this tutorial you have learned the complete setup of Logwatch on Ubuntu 15.04, including the installation and configuration guide. Now you can start monitoring your logs in a customizable form, whether you monitor the logs of all the services running on your system or customize it to send you reports about specific services on scheduled days. So, let's use this tool, and feel free to leave us a comment if you face any issue or need to know more about Logwatch usage.

--------------------------------------------------------------------------------

via: http://linoxide.com/ubuntu-how-to/install-use-logwatch-ubuntu-15-04/

Author: [Kashif Siddique][a]
Translator: [runningwater](https://github.com/runningwater)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:http://linoxide.com/author/kashifs/
@ -1,69 +0,0 @@

KevinSJ Translating
How to get Public IP from Linux Terminal?
================================================================================

![](http://www.blackmoreops.com/wp-content/uploads/2015/06/256x256xHow-to-get-Public-IP-from-Linux-Terminal-blackMORE-Ops.png.pagespeed.ic.GKEAEd4UNr.png)

Public addresses are assigned by InterNIC and consist of class-based network IDs or blocks of CIDR-based addresses (called CIDR blocks) that are guaranteed to be globally unique on the Internet. When public addresses are assigned, routes are programmed into the routers of the Internet so that traffic to the assigned public addresses can reach its destination. Traffic to destination public addresses is reachable on the Internet. For example, when an organization is assigned a CIDR block in the form of a network ID and subnet mask, that [network ID, subnet mask] pair also exists as a route in the routers of the Internet. IP packets destined to an address within the CIDR block are routed to the proper destination. In this post I will show several ways to find your public IP address from the Linux terminal. This might seem pointless for normal users, but it matters when you are at the terminal of a headless Linux server (i.e. no GUI, or you're connected as a user with minimal tools). Either way, being able to get your public IP from the Linux terminal can be useful in many cases, or it could be one of those things that just comes in handy someday.

There are two main commands we use, curl and wget. You can use them interchangeably.

### Curl output in plain text format: ###

    curl icanhazip.com
    curl ifconfig.me
    curl curlmyip.com
    curl ip.appspot.com
    curl ipinfo.io/ip
    curl ipecho.net/plain
    curl www.trackip.net/i

### curl output in JSON format: ###

    curl ipinfo.io/json
    curl ifconfig.me/all.json
    curl www.trackip.net/ip?json (bit ugly)

### curl output in XML format: ###

    curl ifconfig.me/all.xml

### curl all IP details – The motherload ###

    curl ifconfig.me/all

### Using DYNDNS (Useful when you're using a DYNDNS service) ###

    curl -s 'http://checkip.dyndns.org' | sed 's/.*Current IP Address: \([0-9\.]*\).*/\1/g'
    curl -s http://checkip.dyndns.org/ | grep -o "[[:digit:].]\+"
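
The `sed` and `grep` pipelines above just extract the dotted quad from the small HTML page that checkip.dyndns.org returns. A hedged Python equivalent of the same extraction (the response body below is a made-up sample, not a live response):

```python
import re

# Extract the IP from the HTML that checkip.dyndns.org returns,
# mirroring the sed expression above. The body is sample data.
html = ("<html><head><title>Current IP Check</title></head>"
        "<body>Current IP Address: 203.0.113.42</body></html>")

match = re.search(r"Current IP Address: ([0-9.]+)", html)
ip = match.group(1) if match else None
print(ip)  # 203.0.113.42
```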

### Using wget instead of curl ###

    wget http://ipecho.net/plain -O - -q ; echo
    wget http://observebox.com/ip -O - -q ; echo

### Using host and dig command (cause we can) ###

You can also use the host and dig commands, assuming they are available or installed.

    host -t a dartsclink.com | sed 's/.*has address //'
    dig +short myip.opendns.com @resolver1.opendns.com

### Sample bash script: ###

    #!/bin/bash

    PUBLIC_IP=`wget http://ipecho.net/plain -O - -q ; echo`
    echo $PUBLIC_IP

Quite a few to pick from.

I was actually writing a small script to track all the IP changes of my router each day and save them to a file. I found these nifty commands and sites to use while doing some online research. Hope they help someone else someday too. Thanks for reading; please share and RT.

--------------------------------------------------------------------------------

via: http://www.blackmoreops.com/2015/06/14/how-to-get-public-ip-from-linux-terminal/

Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
@ -1,62 +0,0 @@

translation by strugglingyouth
Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop
================================================================================

> **Question**: When I try to open a pre-recorded packet dump in Wireshark on Ubuntu, its UI suddenly freezes, and the following errors and warnings appear in the terminal where I launched Wireshark. How can I fix this problem?

Wireshark is a GUI-based packet capture and sniffer tool. This tool is popularly used by network administrators, network security engineers, and developers for various tasks where packet-level network analysis is required, for example during network troubleshooting, vulnerability testing, application debugging, or protocol reverse engineering. Wireshark allows one to capture live packets and browse their protocol headers and payloads via a convenient GUI.

![](https://farm1.staticflickr.com/722/20584224675_f4d7a59474_c.jpg)

It is known that Wireshark's UI, especially when run under the Ubuntu desktop, sometimes hangs or freezes with the following errors while you are scrolling up or down the packet list view, or starting to load a pre-recorded packet dump file.

    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject'
    (wireshark:3480): GLib-GObject-CRITICAL **: g_object_set_qdata_full: assertion 'G_IS_OBJECT (object)' failed
    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkRange'
    (wireshark:3480): Gtk-CRITICAL **: gtk_range_get_adjustment: assertion 'GTK_IS_RANGE (range)' failed
    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkOrientable'
    (wireshark:3480): Gtk-CRITICAL **: gtk_orientable_get_orientation: assertion 'GTK_IS_ORIENTABLE (orientable)' failed
    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkScrollbar'
    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkWidget'
    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject'
    (wireshark:3480): GLib-GObject-CRITICAL **: g_object_get_qdata: assertion 'G_IS_OBJECT (object)' failed
    (wireshark:3480): Gtk-CRITICAL **: gtk_widget_set_name: assertion 'GTK_IS_WIDGET (widget)' failed

Apparently this error is caused by an incompatibility between Wireshark and overlay-scrollbar, and has not been fixed in the latest Ubuntu desktop (e.g., as of Ubuntu 15.04 Vivid Vervet).

A workaround to avoid this Wireshark UI freeze problem is to **temporarily disable overlay-scrollbar**. There are two ways to disable overlay-scrollbar in Wireshark, depending on how you launch Wireshark on your desktop.

### Command-Line Solution ###

Overlay-scrollbar can be disabled by setting the "**LIBOVERLAY_SCROLLBAR**" environment variable to "0".

So if you are launching Wireshark from the command line in a terminal, you can disable overlay-scrollbar in Wireshark as follows.

Open your .bashrc, and define the following alias.

    alias wireshark="LIBOVERLAY_SCROLLBAR=0 /usr/bin/wireshark"

### Desktop Launcher Solution ###

If you are launching Wireshark using a desktop launcher, you can edit its desktop launcher file.

    $ sudo vi /usr/share/applications/wireshark.desktop

Look for the line that starts with "Exec", and change it as follows.

    Exec=env LIBOVERLAY_SCROLLBAR=0 wireshark %f

While this solution benefits all desktop users system-wide, it will not survive a Wireshark upgrade. If you want to preserve the modified .desktop file, copy it to your home directory as follows.

    $ cp /usr/share/applications/wireshark.desktop ~/.local/share/applications/

--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/fix-wireshark-gui-freeze-linux-desktop.html

Author: [Dan Nanni][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:http://ask.xmodulo.com/author/nanni
@ -1,99 +0,0 @@
|
||||
How to monitor stock quotes from the command line on Linux
|
||||
================================================================================
|
||||
If you are one of those stock investors or traders, monitoring the stock market will be one of your daily routines. Most likely you will be using an online trading platform which comes with some fancy real-time charts and all sort of advanced stock analysis and tracking tools. While such sophisticated market research tools are a must for any serious stock investors to read the market, monitoring the latest stock quotes still goes a long way to build a profitable portfolio.
|
||||
|
||||
If you are a full-time system admin constantly sitting in front of terminals while trading stocks as a hobby during the day, a simple command-line tool that shows real-time stock quotes will be a blessing for you.
|
||||
|
||||
In this tutorial, let me introduce a neat command-line tool that allows you to monitor stock quotes from the command line on Linux.
|
||||
|
||||
This tool is called [Mop][1]. Written in Go, this lightweight command-line tool is extremely handy for tracking the latest stock quotes from the U.S. markets. You can easily customize the list of stocks to monitor, and it shows the latest stock quotes in ncurses-based, easy-to-read interface.
|
||||
|
||||
**Note**: Mop obtains the latest stock quotes via Yahoo! Finance API. Be aware that their stock quotes are known to be delayed by 15 minutes. So if you are looking for "real-time" stock quotes with zero delay, Mop is not a tool for you. Such "live" stock quote feeds are usually available for a fee via some proprietary closed-door interface. With that being said, let's see how you can use Mop under Linux environment.
|
||||
|
||||
### Install Mop on Linux ###

Since Mop is implemented in Go, you will need to install the Go language first. If you don't have Go installed, follow [this guide][2] to install Go on your Linux platform. Make sure to set the GOPATH environment variable as described in the guide.

Once Go is installed, proceed to install Mop as follows.

**Debian, Ubuntu or Linux Mint**

    $ sudo apt-get install git
    $ go get github.com/michaeldv/mop
    $ cd $GOPATH/src/github.com/michaeldv/mop
    $ make install

**Fedora, CentOS or RHEL**

    $ sudo yum install git
    $ go get github.com/michaeldv/mop
    $ cd $GOPATH/src/github.com/michaeldv/mop
    $ make install

The above commands will install Mop under $GOPATH/bin.

Now edit your .bashrc to include $GOPATH/bin in your PATH variable.

    export PATH="$PATH:$GOPATH/bin"

----------

    $ source ~/.bashrc
### Monitor Stock Quotes from the Command Line with Mop ###

To launch Mop, simply run the command called cmd.

    $ cmd

At the first launch, you will see a few stock tickers which Mop comes pre-configured with.

![](https://farm6.staticflickr.com/5749/20018949104_c8c64e0e06_c.jpg)

The quotes show information like the latest price, %change, daily low/high, 52-week low/high, dividend, and annual yield. Mop obtains market overview information from [CNN][3], and individual stock quotes from [Yahoo Finance][4]. The stock quote information updates itself within the terminal periodically.
### Customize Stock Quotes in Mop ###

Let's try customizing the stock list. Mop provides easy-to-remember shortcuts for this: '+' to add a new stock, and '-' to remove a stock.

To add a new stock, press '+', and type a stock ticker symbol to add (e.g., MSFT). You can add more than one stock at once by typing a comma-separated list of tickers (e.g., "MSFT, AMZN, TSLA").

![](https://farm1.staticflickr.com/636/20648164441_642ae33a22_c.jpg)

Removing stocks from the list can be done similarly by pressing '-'.
### Sort Stock Quotes in Mop ###

You can sort the stock quote list based on any column. To sort, press 'o', and use the left/right keys to choose the column to sort by. When a particular column is chosen, you can sort the list either in increasing order or in decreasing order by pressing ENTER.

![](https://farm1.staticflickr.com/724/20648164481_15631eefcf_c.jpg)

By pressing 'g', you can group your stocks based on whether they are advancing or declining for the day. Advancing issues are represented in green, while declining issues are colored in white.

![](https://c2.staticflickr.com/6/5633/20615252696_a5bd44d3aa_b.jpg)

If you want to access the help page, simply press '?'.

![](https://farm1.staticflickr.com/573/20632365342_da196b657f_c.jpg)
### Conclusion ###

As you can see, Mop is a lightweight, yet extremely handy stock monitoring tool. Of course you can easily access stock quote information elsewhere, from online websites, your smartphone, etc. However, if you spend a great deal of your time in a terminal environment, Mop can easily fit into your workspace, hopefully without disrupting much of your workflow. Just let it run and continuously update market data in one of your terminals, and be done with it.

Happy trading!
--------------------------------------------------------------------------------

via: http://xmodulo.com/monitor-stock-quotes-command-line-linux.html

作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/nanni
[1]:https://github.com/michaeldv/mop
[2]:http://ask.xmodulo.com/install-go-language-linux.html
[3]:http://money.cnn.com/data/markets/
[4]:http://finance.yahoo.com/

Fix No Bootable Device Found Error After Installing Ubuntu
================================================================================
Usually, I dual boot Ubuntu and Windows, but this time I decided to go for a clean Ubuntu installation, i.e. eliminating Windows completely. After the clean install of Ubuntu, I ended up with a screen saying **no bootable device found** instead of the Grub screen. Clearly, the installation messed up the UEFI boot settings.

![No Bootable Device Found After Installing Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_1.jpg)

I am going to show you how I fixed the **no bootable device found error after installing Ubuntu on Acer laptops**. It is important to mention that I am using an Acer Aspire R13, because we have to change things in the firmware settings, and those settings might look different from manufacturer to manufacturer and from device to device.

So before you try the steps mentioned here, let's first see what state my computer was in during this error:

- My Acer Aspire R13 came preinstalled with Windows 8.1 and with a UEFI boot manager
- Secure Boot was not turned off (my laptop had just come back from repair and the service guy had turned Secure Boot on again; I did not know until I ran into the problem). You can read this post to learn [how to disable Secure Boot in Acer laptops][1]
- I chose to install Ubuntu by erasing everything, i.e. the existing Windows 8.1, various partitions, etc.
- After installing Ubuntu, I saw the no bootable device found error while booting from the hard disk. Booting from the live USB worked just fine
In my opinion, not disabling Secure Boot was the reason for this error. However, I have no data to back up my claim. It is just a hunch. Interestingly, dual booting Windows and Linux often ends up in common Grub issues like these two:

- [error: no such partition grub rescue][2]
- [Minimal BASH like line editing is supported][3]

If you are in a similar situation, you can try the fix which worked for me.
### Fix no bootable device found error after installing Ubuntu ###

Pardon me for the poor quality images. My OnePlus camera seems to be not very happy with my laptop screen.

#### Step 1 ####

Turn the power off and boot into the boot settings. I had to press Fn+F2 (to press the F2 key) quickly on my Acer Aspire R13. You have to be very quick with it if you are using an SSD, because SSDs boot very fast. Depending upon your manufacturer/model, you might need to use the Del, F10 or F12 keys.

#### Step 2 ####

In the boot settings, make sure that Secure Boot is turned on. It should be under the Boot tab.
#### Step 3 ####

Go to the Security tab, look for "Select an UEFI file as trusted for executing" and press Enter.

![Fix no bootable device found ](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_2.jpg)

Just for your information, what we are going to do here is add the UEFI settings file (it was generated during the Ubuntu installation) to the trusted UEFI boots of your device. If you remember, UEFI boot's main aim is to provide security, and since Secure Boot was not disabled (perhaps), the device did not intend to boot from the newly installed OS. Adding it as trusted, a kind of whitelisting, will let the device boot from the Ubuntu UEFI file.

#### Step 4 ####

You should see your hard disk, like HDD0 etc., here. If you have more than one hard disk, I hope you remember where you installed Ubuntu. Press Enter here as well.

![Fix no bootable device found in boot settings](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_3.jpg)

#### Step 5 ####

You should see <EFI> here. Press Enter.

![Fix settings in UEFI](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_4.jpg)

#### Step 6 ####

You'll see <Ubuntu> on the next screen. Don't get impatient, you are almost there.

![Fixing boot error after installing Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_5.jpg)
#### Step 7 ####

You'll see the shimx64.efi, grubx64.efi and MokManager.efi files here. The important one is shimx64.efi. Select it and press Enter.

![Fix no bootable device found](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_6.jpg)

On the next screen, type Yes and press Enter.

![No_Bootable_Device_Found_7](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_7.jpg)

#### Step 8 ####

Once we have added it as a trusted EFI file to be executed, press F10 to save and exit.

![Save and exit firmware settings](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_8.jpg)
Reboot your system, and this time you should see the familiar Grub screen. Even if you do not see the Grub screen, you should at least not be seeing the "no bootable device found" screen anymore. You should be able to boot into Ubuntu.

If your Grub screen was messed up after the fix but you managed to log in to Ubuntu, you can reinstall Grub to get back to the familiar purple Grub screen of Ubuntu.
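Reinstalling Grub from within the booted system usually takes just two commands. The sketch below is a minimal, hedged example; /dev/sda is an assumption and must be replaced with your actual boot disk (the whole disk, not a partition).

```shell
# Minimal sketch of reinstalling Grub from a booted Ubuntu system.
# /dev/sda is an assumption -- replace it with your boot disk.
sudo grub-install /dev/sda
# Regenerate the Grub menu so it picks up the installed kernels.
sudo update-grub
```

If the system cannot boot at all, the same commands can be run from a live USB after chrooting into the installed system.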
I hope this tutorial helped you to fix the no bootable device found error. Any questions, suggestions or a word of thanks are always welcome.
--------------------------------------------------------------------------------

via: http://itsfoss.com/no-bootable-device-found-ubuntu/

作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/disable-secure-boot-in-acer/
[2]:http://itsfoss.com/solve-error-partition-grub-rescue-ubuntu-linux/
[3]:http://itsfoss.com/fix-minimal-bash-line-editing-supported-grub-error-linux/

How to Setup Zephyr Test Management Tool on CentOS 7.x
================================================================================
Test management encompasses anything and everything that you need to do as a tester. Test management tools are used to store information on how testing is to be done, to plan testing activities, and to report the status of quality assurance activities. In this article we will walk you through the setup of the Zephyr test management tool. Because it includes everything needed to manage the test process, it can save testers the hassle of installing separate applications. Once you are done with its setup, you will be able to track bugs and defects, and collaborate with your team on project tasks, as you can easily share and access data across multiple project teams for communication and collaboration throughout the testing process.

### Requirements for Zephyr ###

We are going to install and run Zephyr with the following minimum resources. Resources can be increased as per your infrastructure requirements. We will be installing Zephyr on CentOS 7 64-bit, while its binary distributions are available for almost all Linux operating systems.
注:表格

<table width="669" style="height: 231px;">
<tbody>
<tr>
<td width="660" colspan="3"><strong>Zephyr test management tool</strong></td>
</tr>
<tr>
<td width="140"><strong>Linux OS</strong></td>
<td width="312">CentOS Linux 7 (Core), 64-bit</td>
<td width="209"></td>
</tr>
<tr>
<td width="140"><strong>Packages</strong></td>
<td width="312">JDK 7 or above, Oracle JDK 6 update</td>
<td width="209">No prior Tomcat or MySQL installed</td>
</tr>
<tr>
<td width="140"><strong>RAM</strong></td>
<td width="312">4 GB</td>
<td width="209">Preferred 8 GB</td>
</tr>
<tr>
<td width="140"><strong>CPU</strong></td>
<td width="521" colspan="2">2.0 GHz or higher</td>
</tr>
<tr>
<td width="140"><strong>Hard Disk</strong></td>
<td width="521" colspan="2">30 GB, at least 5 GB must be free</td>
</tr>
</tbody>
</table>
You must have super user (root) access to perform the installation process for Zephyr, and make sure that you have properly configured your network with a static IP address and that Zephyr's default set of ports is available and allowed through the firewall: ports 80/443, 8005, 8009 and 8010 will be used by Tomcat, and port 443 or 2099 will be used within Zephyr by Flex for the RTMP protocol.
### Install Java JDK 7 ###

Java JDK 7 is the basic requirement for the installation of Zephyr. If it's not already installed in your operating system, do the following to install Java and set its JAVA_HOME environment variable properly.

Let's issue the below commands to install Java JDK 7.

    [root@centos-007 ~]# yum install java-1.7.0-openjdk-1.7.0.79-2.5.5.2.el7_1

----------

    [root@centos-007 ~]# yum install java-1.7.0-openjdk-devel-1.7.0.85-2.6.1.2.el7_1.x86_64
Once Java is installed, including its required dependencies, run the following commands to set the JAVA_HOME environment variable.

    [root@centos-007 ~]# export JAVA_HOME=/usr/java/default
    [root@centos-007 ~]# export PATH=/usr/java/default/bin:$PATH

Now check the version of Java to verify its installation with the following command.

    [root@centos-007 ~]# java -version

----------

    java version "1.7.0_79"
    OpenJDK Runtime Environment (rhel-2.5.5.2.el7_1-x86_64 u79-b14)
    OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)

The output shows that we have successfully installed OpenJDK Java version 1.7.0_79.
### Install MySQL 5.6.X ###

If you have other MySQL versions on the machine, it is recommended to remove them and install this version on top of them, or upgrade their schemas to what is specified, as this specific major/minor (5.6.X) version of MySQL, with the root username, is a prerequisite of Zephyr.

To install MySQL 5.6 on CentOS 7.1, let's do the following steps:

Download the rpm package, which will create a yum repo file for the MySQL Server installation.

    [root@centos-007 ~]# yum install wget
    [root@centos-007 ~]# wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm

Now install this downloaded rpm package by using the rpm command.

    [root@centos-007 ~]# rpm -ivh mysql-community-release-el7-5.noarch.rpm

After the installation of this package you will get two new yum repos related to MySQL. Then, using the yum command, install MySQL Server 5.6; all dependencies will be installed automatically.

    [root@centos-007 ~]# yum install mysql-server
Once the installation process completes, run the following commands to start the mysqld service and check whether it's active or not.

    [root@centos-007 ~]# service mysqld start
    [root@centos-007 ~]# service mysqld status

On a fresh installation of MySQL Server, the MySQL root user's password is blank. For good security practice, we should reset the password of the MySQL root user. Connect to MySQL using the auto-generated empty password and change the root password.

    [root@centos-007 ~]# mysql
    mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('your_password');
    mysql> flush privileges;
    mysql> quit;
Now we need to configure the required database parameters in the default configuration file of MySQL. Let's open the file located in the "/etc/" folder and update it as follows.

    [root@centos-007 ~]# vi /etc/my.cnf

----------

    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    symbolic-links=0

    sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
    max_allowed_packet=150M
    max_connections=600
    default-storage-engine=INNODB
    character-set-server=utf8
    collation-server=utf8_unicode_ci

    [mysqld_safe]
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid
    default-storage-engine=INNODB
    character-set-server=utf8
    collation-server=utf8_unicode_ci

    [mysql]
    max_allowed_packet = 150M

    [mysqldump]
    quick

Save the changes made in the configuration file and restart the mysqld service.

    [root@centos-007 ~]# service mysqld restart
### Download Zephyr Installation Package ###

We are done with the installation of the packages required to install Zephyr. Now we need to get the binary distribution package of Zephyr and its license key. Go to the official Zephyr download link, http://download.yourzephyr.com/linux/download.php, give your email ID and click to download.

![Zephyr Download](http://blog.linoxide.com/wp-content/uploads/2015/08/13.png)

Then confirm the mentioned email address, and you will get the Zephyr download link and its license key link. Click on the provided links and choose the appropriate version for your operating system to download the binary installation package and its license file to the server.

We have placed it in the home directory and modified its permissions to make it executable.

![Zephyr Binary](http://blog.linoxide.com/wp-content/uploads/2015/08/22.png)
### Start Zephyr Installation and Configuration ###

Now we are ready to start the installation of Zephyr by executing its binary installation script as below.

    [root@centos-007 ~]# ./zephyr_4_7_9213_linux_setup.sh -c

Once you run the above command, it will check whether the Java environment variables are properly set up and configured. If there's some misconfiguration, you might see an error like:

    testing JVM in /usr ...
    Starting Installer ...
    Error : Either JDK is not found at expected locations or JDK version is mismatched.
    Zephyr requires Oracle Java Development Kit (JDK) version 1.7 or higher.
Once you have properly configured Java, the installation of Zephyr will start and ask you to press "o" to proceed or "c" to cancel the setup. Let's type "o" and press the "Enter" key to start the installation.

![install zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/32.png)

The next option is to review all the requirements for the Zephyr setup; press "Enter" to move forward to the next option.

![zephyr requirements](http://blog.linoxide.com/wp-content/uploads/2015/08/42.png)

To accept the license agreement, type "1" and press Enter.

    I accept the terms of this license agreement [1], I do not accept the terms of this license agreement [2, Enter]
Here we need to choose the destination location where we want to install Zephyr and choose the default ports. If you want to use ports other than the defaults, you are free to specify them here.

![installation folder](http://blog.linoxide.com/wp-content/uploads/2015/08/52.png)

Then customize the MySQL database parameters and give the right paths to the configuration files. You might see an error at this point, as shown below.

    Please update MySQL configuration. Configuration parameter max_connection should be at least 500 (max_connection = 500) and max_allowed_packet should be at least 50MB (max_allowed_packet = 50M).

To overcome this error, make sure that you have configured the "max_connections" and "max_allowed_packet" limits properly in the MySQL configuration file. To confirm these settings, connect to the MySQL server and run the commands as shown.

![mysql connections](http://blog.linoxide.com/wp-content/uploads/2015/08/62.png)
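The same check can be run non-interactively from the shell. This is a minimal sketch, assuming a running MySQL server and the root password set earlier:

```shell
# Hedged sketch: confirm the two limits Zephyr complains about
# (max_connections >= 500, max_allowed_packet >= 50M).
mysql -uroot -p -e "SHOW VARIABLES WHERE Variable_name IN ('max_connections','max_allowed_packet');"
```

If either value is too low, edit /etc/my.cnf as shown above and restart mysqld before re-running the installer.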
Once you have configured your MySQL database properly, the installer will extract the configuration files and complete the setup.

![mysql customization](http://blog.linoxide.com/wp-content/uploads/2015/08/72.png)

The installation process completes with a successful installation of Zephyr 4.7 on your computer. To launch the Zephyr desktop, type "y" to finish the Zephyr installation.

![launch zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/82.png)

### Launch Zephyr Desktop ###

Open your web browser to launch the Zephyr desktop with your server's IP address, and you will be directed to the Zephyr desktop.
    http://your_server_IP/zephyr/desktop/

![Zephyr Desktop](http://blog.linoxide.com/wp-content/uploads/2015/08/91.png)

From your Zephyr dashboard, click on "Test Manager" and log in with the default user name and password, which is "test.manager".

![Test Manage Login](http://blog.linoxide.com/wp-content/uploads/2015/08/test_manager_login.png)

Once you are logged in, you will be able to configure your administrative settings as shown. Choose the settings you wish to apply according to your environment.

![Test Manage Administration](http://blog.linoxide.com/wp-content/uploads/2015/08/test_manage_admin.png)

Save the settings after you are done with your administrative settings; similarly do the settings for resource management and project setup, and start using Zephyr as a complete test management tool. You can check and edit the status of your administrative settings from the Department Dashboard Management as shown.

![zephyr dashboard](http://blog.linoxide.com/wp-content/uploads/2015/08/dashboard.png)
### Conclusion ###

Cheers! We are done with the complete setup of Zephyr on CentOS 7.1. We hope you are now much more aware of the Zephyr test management tool, which offers the prospect of streamlining the testing process and allows quick access to data analysis, collaborative tools and easy communication across multiple project teams. Feel free to comment if you run into any difficulty while setting it up in your environment.

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/setup-zephyr-tool-centos-7-x/

作者:[Kashif Siddique][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/kashifs/

wyangsun translating

How to set up a system status page of your infrastructure
================================================================================
If you are a system administrator who is responsible for critical IT infrastructure or services of your organization, you will understand the importance of effective communication in your day-to-day tasks. Suppose your production storage server is on fire. You want your entire team on the same page in order to resolve the issue as fast as you can. While you are at it, you don't want half of all users contacting you asking why they cannot access their documents. When a scheduled maintenance is coming up, you want to notify interested parties of the event ahead of the schedule, so that unnecessary support tickets can be avoided.

All these require some sort of streamlined communication channel between you, your team and the people you serve. One way to achieve that is to maintain a centralized system status page, where the details of downtime incidents, progress updates and maintenance schedules are reported and chronicled. That way, you can minimize unnecessary distractions during downtime, and also keep any interested party informed and opted in for status updates.

One good **open-source, self-hosted system status page solution** is [Cachet][1]. In this tutorial, I am going to describe how to set up a self-hosted system status page using Cachet.
### Cachet Features ###

Before going into the detail of setting up Cachet, let me briefly introduce its main features.

- **Full JSON API**: The Cachet API allows you to connect any external program or script (e.g., an uptime script) to Cachet to report incidents or update status automatically.
- **Authentication**: Cachet supports Basic Auth and API tokens in the JSON API, so that only authorized personnel can update the status page.
- **Metrics system**: This is useful to visualize custom data over time (e.g., server load or response time).
- **Notification**: Optionally you can send notification emails about reported incidents to anyone who signed up to the status page.
- **Multiple languages**: The status page can be translated into 11 different languages.
- **Two factor authentication**: This allows you to lock your Cachet admin account with Google's two-factor authentication.
- **Cross database support**: You can choose between MySQL, SQLite, Redis, APC, and PostgreSQL for backend storage.
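As a taste of the JSON API from the list above, an incident can be reported with a single authenticated request. The sketch below is illustrative only: the host name matches this tutorial's setup, and the token is a placeholder you would copy from an authorized user's profile in the Cachet dashboard.

```shell
# Hedged sketch: create a new incident via Cachet's JSON API.
# <your-api-token> is a placeholder -- substitute a real API token.
curl -X POST http://cachethost/api/v1/incidents \
     -H "X-Cachet-Token: <your-api-token>" \
     -H "Content-Type: application/json" \
     -d '{"name": "Storage server outage", "message": "We are investigating.", "status": 1, "visible": 1}'
```

A cron-driven uptime script could fire a request like this automatically whenever a health check fails.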
In the rest of the tutorial, I explain how to install and configure Cachet on Linux.

### Step One: Download and Install Cachet ###

Cachet requires a web server and a backend database to operate. In this tutorial, I am going to use the LAMP stack. Here are distro-specific instructions to install Cachet and the LAMP stack.

#### Debian, Ubuntu or Linux Mint ####

    $ sudo apt-get install curl git apache2 mysql-server mysql-client php5 php5-mysql
    $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet
    $ cd /var/www/cachet
    $ sudo git checkout v1.1.1
    $ sudo chown -R www-data:www-data .
For more detail on setting up the LAMP stack on Debian-based systems, refer to [this tutorial][2].

#### Fedora, CentOS or RHEL ####

On Red Hat based systems, you first need to [enable the REMI repository][3] (to meet the PHP version requirement). Then proceed as follows.

    $ sudo yum install curl git httpd mariadb-server
    $ sudo yum --enablerepo=remi-php56 install php php-mysql php-mbstring
    $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet
    $ cd /var/www/cachet
    $ sudo git checkout v1.1.1
    $ sudo chown -R apache:apache .
    $ sudo firewall-cmd --permanent --zone=public --add-service=http
    $ sudo firewall-cmd --reload
    $ sudo systemctl enable httpd.service; sudo systemctl start httpd.service
    $ sudo systemctl enable mariadb.service; sudo systemctl start mariadb.service
For more details on setting up LAMP on Red Hat-based systems, refer to [this tutorial][4].

### Step Two: Configure a Backend Database for Cachet ###

The next step is to configure the database backend.

Log in to the MySQL/MariaDB server, and create an empty database called 'cachet'.

    $ sudo mysql -uroot -p

----------

    mysql> create database cachet;
    mysql> quit
Now create a Cachet configuration file based on the sample configuration file.

    $ cd /var/www/cachet
    $ sudo mv .env.example .env

In the .env file, fill in the database information (i.e., DB_*) according to your setup. Leave other fields unchanged for now.

    APP_ENV=production
    APP_DEBUG=false
    APP_URL=http://localhost
    APP_KEY=SomeRandomString

    DB_DRIVER=mysql
    DB_HOST=localhost
    DB_DATABASE=cachet
    DB_USERNAME=root
    DB_PASSWORD=<root-password>

    CACHE_DRIVER=apc
    SESSION_DRIVER=apc
    QUEUE_DRIVER=database

    MAIL_DRIVER=smtp
    MAIL_HOST=mailtrap.io
    MAIL_PORT=2525
    MAIL_USERNAME=null
    MAIL_PASSWORD=null
    MAIL_ADDRESS=null
    MAIL_NAME=null

    REDIS_HOST=null
    REDIS_DATABASE=null
    REDIS_PORT=null
### Step Three: Install PHP Dependencies and Perform DB Migration ###

Next, we are going to install the necessary PHP dependencies. For that we will use composer. If you do not have composer installed on your system, install it first:

    $ curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer

Now go ahead and install the PHP dependencies using composer.

    $ cd /var/www/cachet
    $ sudo composer install --no-dev -o

Next, perform a one-time database migration. This step will populate the empty database we created earlier with the necessary tables.

    $ sudo php artisan migrate

Assuming the database config in /var/www/cachet/.env is correct, the database migration should complete successfully as shown below.

![](https://farm6.staticflickr.com/5814/20235620184_54048676b0_c.jpg)
Next, create a security key, which will be used to encrypt the data entered in Cachet.

    $ sudo php artisan key:generate
    $ sudo php artisan config:cache

![](https://farm6.staticflickr.com/5717/20831952096_7105c9fdc7_c.jpg)

The generated app key will be automatically added to the APP_KEY variable of your .env file. No need to edit .env on your own here.
### Step Four: Configure Apache HTTP Server ###

Now it's time to configure the web server that Cachet will be running on. As we are using Apache HTTP server, create a new [virtual host][5] for Cachet as follows.

#### Debian, Ubuntu or Linux Mint ####

    $ sudo vi /etc/apache2/sites-available/cachet.conf

----------

    <VirtualHost *:80>
        ServerName cachethost
        ServerAlias cachethost
        DocumentRoot "/var/www/cachet/public"
        <Directory "/var/www/cachet/public">
            Require all granted
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>

Enable the new virtual host and mod_rewrite with:

    $ sudo a2ensite cachet.conf
    $ sudo a2enmod rewrite
    $ sudo service apache2 restart
#### Fedora, CentOS or RHEL ####

On Red Hat based systems, create a virtual host file as follows.

    $ sudo vi /etc/httpd/conf.d/cachet.conf

----------

    <VirtualHost *:80>
        ServerName cachethost
        ServerAlias cachethost
        DocumentRoot "/var/www/cachet/public"
        <Directory "/var/www/cachet/public">
            Require all granted
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>

Now reload the Apache configuration:

    $ sudo systemctl reload httpd.service
### Step Five: Configure /etc/hosts for Testing Cachet ###
|
||||
|
||||
At this point, the initial Cachet status page should be up and running, and now it's time to test.
|
||||
|
||||
Since Cachet is configured as a virtual host of Apache HTTP server, we need to tweak /etc/hosts of your client computer to be able to access it. Here the client computer is the one from which you will be accessing the Cachet page.
|
||||
|
||||
Open /etc/hosts, and add the following entry.
|
||||
|
||||
$ sudo vi /etc/hosts
|
||||
|
||||
----------
|
||||
|
||||
<cachet-server-ip-address> cachethost
|
||||
|
||||
In the above, the name "cachethost" must match with ServerName specified in the Apache virtual host file for Cachet.
|
||||
|
||||
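To confirm the entry is in place before testing, here is a small sketch (the helper function name is made up for illustration) that checks a hosts-format file for a hostname entry:

```shell
# Hypothetical helper: succeed if the hosts-format file in $1
# contains an entry for the hostname in $2.
has_host_entry() {
    grep -Eq "[[:space:]]$2([[:space:]]|\$)" "$1"
}

if has_host_entry /etc/hosts cachethost; then
    echo "cachethost entry present"
else
    echo "cachethost entry missing - add it before testing"
fi
```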
### Test Cachet Status Page ###
|
||||
|
||||
Now you are ready to access Cachet status page. Type http://cachethost in your browser address bar. You will be redirected to the initial Cachet setup page as follows.
|
||||
|
||||
![](https://farm6.staticflickr.com/5745/20858228815_405fce1301_c.jpg)
|
||||
|
||||
Choose cache/session driver. Here let's choose "File" for both cache and session drivers.
|
||||
|
||||
Next, type basic information about the status page (e.g., site name, domain, timezone and language), as well as administrator account.
|
||||
|
||||
![](https://farm1.staticflickr.com/611/20237229693_c22014e4fd_c.jpg)
|
||||
|
||||
![](https://farm6.staticflickr.com/5707/20858228875_b056c9e1b4_c.jpg)
|
||||
|
||||
![](https://farm6.staticflickr.com/5653/20671482009_8629572886_c.jpg)
|
||||
|
||||
Your initial status page will finally be ready.
|
||||
|
||||
![](https://farm6.staticflickr.com/5692/20237229793_f6a48f379a_c.jpg)
|
||||
|
||||
Go ahead and create components (units of your system), incidents or any scheduled maintenance as you want.
|
||||
|
||||
For example, to add a new component:
|
||||
|
||||
![](https://farm6.staticflickr.com/5672/20848624752_9d2e0a07be_c.jpg)
|
||||
|
||||
To add a scheduled maintenance:
|
||||
|
||||
This is what the public Cachet status page looks like:
|
||||
|
||||
![](https://farm1.staticflickr.com/577/20848624842_df68c0026d_c.jpg)
|
||||
|
||||
With SMTP integration, you can send out emails on status updates to any subscribers. Also, you can fully customize the layout and style of the status page using CSS and markdown formatting.
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
Cachet is a fairly easy-to-use, self-hosted status page application. One of its nicest features is its full JSON API. Using the RESTful API, you can hook Cachet up to a separate monitoring backend (e.g., [Nagios][6]) and feed it incident reports and status updates automatically. This is far quicker and more efficient than managing a status page by hand.
|
||||
|
||||
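That hookup can be sketched with curl. The `/api/v1/incidents` endpoint and the `X-Cachet-Token` header follow Cachet's API documentation; `CACHET_URL` and `CACHET_TOKEN` are placeholders you must supply from your own installation:

```shell
# Build the JSON body for a new incident (status 1=Investigating .. 4=Fixed).
incident_payload() {
    # $1 = name, $2 = message, $3 = status code
    printf '{"name":"%s","message":"%s","status":%s,"visible":1}' "$1" "$2" "$3"
}

# Hedged sketch of posting an incident; requires a running Cachet instance.
report_incident() {
    curl -s -X POST "$CACHET_URL/api/v1/incidents" \
        -H "Content-Type: application/json" \
        -H "X-Cachet-Token: $CACHET_TOKEN" \
        -d "$(incident_payload "$1" "$2" "$3")"
}

# Example (placeholders, do not run as-is):
#   CACHET_URL=http://cachethost CACHET_TOKEN=yourtoken \
#   report_incident "DB outage" "Investigating slow queries" 1
```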
As final words, I'd like to mention one thing. While setting up a fancy status page with Cachet is straightforward, making the best use of the software is not as easy as installing it. You need total commitment from the IT team on updating the status page in an accurate and timely manner, thereby building credibility of the published information. At the same time, you need to educate users to turn to the status page. At the end of the day, it would be pointless to set up a status page if it's not populated well, and/or no one is checking it. Remember this when you consider deploying Cachet in your work environment.
|
||||
|
||||
### Troubleshooting ###
|
||||
|
||||
As a bonus, here are some useful troubleshooting tips in case you encounter problems while setting up Cachet.
|
||||
|
||||
1. The Cachet page does not load anything, and you are getting the following error.
|
||||
|
||||
production.ERROR: exception 'RuntimeException' with message 'No supported encrypter found. The cipher and / or key length are invalid.' in /var/www/cachet/bootstrap/cache/compiled.php:6695
|
||||
|
||||
**Solution**: Make sure that you create an app key, as well as clear configuration cache as follows.
|
||||
|
||||
$ cd /path/to/cachet
|
||||
$ sudo php artisan key:generate
|
||||
$ sudo php artisan config:cache
|
||||
|
||||
2. You are getting the following error while invoking composer command.
|
||||
|
||||
- danielstjules/stringy 1.10.0 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system.
|
||||
- laravel/framework v5.1.8 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system.
|
||||
- league/commonmark 0.10.0 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system.
|
||||
|
||||
**Solution**: Install the mbstring PHP extension that matches your PHP version. On Red Hat-based systems, since we installed PHP from the REMI-56 repository, install the extension from the same repository.
|
||||
|
||||
$ sudo yum --enablerepo=remi-php56 install php-mbstring
|
||||
|
||||
3. You are getting a blank page while trying to access Cachet status page. The HTTP log shows the following error.
|
||||
|
||||
PHP Fatal error: Uncaught exception 'UnexpectedValueException' with message 'The stream or file "/var/www/cachet/storage/logs/laravel-2015-08-21.log" could not be opened: failed to open stream: Permission denied' in /var/www/cachet/bootstrap/cache/compiled.php:12851
|
||||
|
||||
**Solution**: Try the following commands.
|
||||
|
||||
$ cd /var/www/cachet
|
||||
$ sudo php artisan cache:clear
|
||||
$ sudo chmod -R 777 storage
|
||||
$ sudo composer dump-autoload
|
||||
|
||||
If the above solution does not work, try disabling SELinux:
|
||||
|
||||
$ sudo setenforce 0
|
||||
|
||||
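As a side note on the `chmod -R 777` fix above: world-writable permissions are risky on a shared host. A hedged sketch of a tighter alternative (the web server account name varies: www-data on Debian/Ubuntu, apache on Fedora/CentOS; in production you would also `sudo chown -R` the directory to that account):

```shell
# Grant read/write to owner and group, keep directories traversable,
# instead of opening the tree to everyone.
fix_storage_perms() {
    chmod -R u+rwX,g+rwX "$1"
}

# Demonstrated on a throwaway directory so the sketch is safe to run;
# on a real install you would target /var/www/cachet/storage.
demo=$(mktemp -d)
mkdir "$demo/logs"
fix_storage_perms "$demo"
[ -w "$demo/logs" ] && echo "storage writable"
```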
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/setup-system-status-page.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/nanni
|
||||
[1]:https://cachethq.io/
|
||||
[2]:http://xmodulo.com/install-lamp-stack-ubuntu-server.html
|
||||
[3]:http://ask.xmodulo.com/install-remi-repository-centos-rhel.html
|
||||
[4]:http://xmodulo.com/install-lamp-stack-centos.html
|
||||
[5]:http://xmodulo.com/configure-virtual-hosts-apache-http-server.html
|
||||
[6]:http://xmodulo.com/monitor-common-services-nagios.html
|
@ -0,0 +1,159 @@
|
||||
How to Convert From RPM to DEB and DEB to RPM Package Using Alien
|
||||
================================================================================
|
||||
As I’m sure you already know, there are plenty of ways to install software in Linux: using the package management system provided by your distribution ([aptitude, yum, or zypper][1], to name a few examples), compiling from source (though somewhat rare these days, it was the only method available during the early days of Linux), or utilizing a low level tool such as dpkg or rpm with .deb and .rpm standalone, precompiled packages, respectively.
|
||||
|
||||
![Convert RPM to DEB and DEB to RPM](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-RPM-to-DEB-and-DEB-to-RPM.png)
|
||||
|
||||
Convert RPM to DEB and DEB to RPM Package Using Alien
|
||||
|
||||
In this article we will introduce you to alien, a tool that converts between different Linux package formats, with .rpm to .deb (and vice versa) being the most common usage.
|
||||
|
||||
Even though its author no longer maintains it and states on his website that alien will probably always remain experimental, the tool can come in handy if you need a certain type of package but can only find that program in another package format.
|
||||
|
||||
For example, alien saved my day once when I was looking for a .deb driver for an inkjet printer and couldn't find any; the manufacturer only provided a .rpm package. I installed alien, converted the package, and before long I was able to use my printer without issues.
|
||||
|
||||
That said, we must clarify that this utility should not be used to replace important system files and libraries since they are set up differently across distributions. Only use alien as a last resort if the suggested installation methods at the beginning of this article are out of the question for the required program.
|
||||
|
||||
Last but not least, we must note that even though we will use CentOS and Debian in this article, alien is also known to work in Slackware and even in Solaris, besides the first two distributions and their respective families.
|
||||
|
||||
### Step 1: Installing Alien and Dependencies ###
|
||||
|
||||
To install alien in CentOS/RHEL 7, you will need to enable the EPEL and the Nux Dextop (yes, it’s Dextop – not Desktop) repositories, in that order:
|
||||
|
||||
# yum install epel-release
|
||||
# rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro
|
||||
|
||||
The latest version of the package that enables this repository is currently 0.5 (published on Aug. 10, 2015). You should check [http://li.nux.ro/download/nux/dextop/el7/x86_64/][2] to see whether there’s a newer version before proceeding further:
|
||||
|
||||
# rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
|
||||
|
||||
then do,
|
||||
|
||||
# yum update && yum install alien
|
||||
|
||||
In Fedora, you will only need to run the last command.
|
||||
|
||||
In Debian and derivatives, simply do:
|
||||
|
||||
# aptitude install alien
|
||||
|
||||
### Step 2: Converting from .deb to .rpm Package ###
|
||||
|
||||
For this test we have chosen dateutils, which provides a set of date and time utilities to deal with large amounts of financial data. We will download the .deb package to our CentOS 7 box, convert it to .rpm and install it:
|
||||
|
||||
![Check CentOS Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-OS-Version.png)
|
||||
|
||||
Check CentOS Version
|
||||
|
||||
# cat /etc/centos-release
|
||||
# wget http://ftp.us.debian.org/debian/pool/main/d/dateutils/dateutils_0.3.1-1.1_amd64.deb
|
||||
# alien --to-rpm --scripts dateutils_0.3.1-1.1_amd64.deb
|
||||
|
||||
![Convert .deb to .rpm package in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-deb-to-rpm-package.png)
|
||||
|
||||
Convert .deb to .rpm package in Linux
|
||||
|
||||
**Important**: Note how, by default, alien increases the minor version number of the target package. If you want to override this behavior, add the `--keep-version` flag.
|
||||
|
||||
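To see what this bump amounts to: the .deb above carries revision 1.1, and the generated .rpm carries 2.1. A toy function (illustration only, not alien's actual code) mimicking that rename:

```shell
# Bump the leading digit of a Debian-style revision, e.g. "1.1" -> "2.1",
# the way the converted filename above suggests alien does by default.
bump_release() {
    major=${1%%.*}
    rest=${1#*.}
    echo "$((major + 1)).$rest"
}

bump_release 1.1   # prints 2.1
```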
If we try to install the package right away, we will run into a slight issue:
|
||||
|
||||
# rpm -Uvh dateutils-0.3.1-2.1.x86_64.rpm
|
||||
|
||||
![Install RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-RPM-Package.png)
|
||||
|
||||
Install RPM Package
|
||||
|
||||
To solve this issue, we will enable the epel-testing repository and install the rpmrebuild utility to edit the settings of the package to be rebuilt:
|
||||
|
||||
# yum --enablerepo=epel-testing install rpmrebuild
|
||||
|
||||
Then run,
|
||||
|
||||
# rpmrebuild -pe dateutils-0.3.1-2.1.x86_64.rpm
|
||||
|
||||
This will open your default text editor. Go to the `%files` section and delete the lines that refer to the directories mentioned in the error message, then save the file and exit:
|
||||
|
||||
![Convert .deb to Alien Version](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-Deb-Package-to-Alien-Version.png)
|
||||
|
||||
Convert .deb to Alien Version
|
||||
|
||||
When you exit the file you will be prompted to continue with the rebuild. If you choose Y, the file will be rebuilt into the specified directory (different from the current working directory):
|
||||
|
||||
# rpmrebuild -pe dateutils-0.3.1-2.1.x86_64.rpm
|
||||
|
||||
![Build RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Build-RPM-Package.png)
|
||||
|
||||
Build RPM Package
|
||||
|
||||
Now you can proceed to install the package and verify as usual:
|
||||
|
||||
# rpm -Uvh /root/rpmbuild/RPMS/x86_64/dateutils-0.3.1-2.1.x86_64.rpm
|
||||
# rpm -qa | grep dateutils
|
||||
|
||||
![Install Build RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Build-RPM-Package.png)
|
||||
|
||||
Install Build RPM Package
|
||||
|
||||
Finally, you can list the individual tools that were included with dateutils and alternatively check their respective man pages:
|
||||
|
||||
# ls -l /usr/bin | grep dateutils
|
||||
|
||||
![Verify Installed RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Installed-Package.png)
|
||||
|
||||
Verify Installed RPM Package
|
||||
|
||||
### Step 3: Converting from .rpm to .deb Package ###
|
||||
|
||||
In this section we will illustrate how to convert from .rpm to .deb. On a 32-bit Debian Wheezy box, let's download the .rpm package for the zsh shell from the CentOS 6 OS repository. Note that this shell is not installed by default in Debian and derivatives.
|
||||
|
||||
# cat /etc/shells
|
||||
# lsb_release -a | tail -n 4
|
||||
|
||||
![Check Shell and Debian OS Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Shell-Debian-OS-Version.png)
|
||||
|
||||
Check Shell and Debian OS Version
|
||||
|
||||
# wget http://mirror.centos.org/centos/6/os/i386/Packages/zsh-4.3.11-4.el6.centos.i686.rpm
|
||||
# alien --to-deb --scripts zsh-4.3.11-4.el6.centos.i686.rpm
|
||||
|
||||
You can safely disregard the messages about a missing signature:
|
||||
|
||||
![Convert .rpm to .deb Package](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-rpm-to-deb-Package.png)
|
||||
|
||||
Convert .rpm to .deb Package
|
||||
|
||||
After a few moments, the .deb file should have been generated and be ready to install:
|
||||
|
||||
# dpkg -i zsh_4.3.11-5_i386.deb
|
||||
|
||||
![Install RPM Converted Deb Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Deb-Package.png)
|
||||
|
||||
Install RPM Converted Deb Package
|
||||
|
||||
After the installation, you can verify that zsh is added to the list of valid shells:
|
||||
|
||||
# cat /etc/shells
|
||||
|
||||
![Confirm Installed Zsh Package](http://www.tecmint.com/wp-content/uploads/2015/08/Confirm-Installed-Package.png)
|
||||
|
||||
Confirm Installed Zsh Package
|
||||
|
||||
### Summary ###
|
||||
|
||||
In this article we have explained how to convert from .rpm to .deb and vice versa to install packages as a last resort when such programs are not available in the repositories or as distributable source code. You will want to bookmark this article because all of us will need alien at one time or another.
|
||||
|
||||
Feel free to share your thoughts about this article using the form below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/convert-from-rpm-to-deb-and-deb-to-rpm-package-using-alien/
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://www.tecmint.com/linux-package-management/
|
||||
[2]:http://li.nux.ro/download/nux/dextop/el7/x86_64/
|
@ -0,0 +1,163 @@
|
||||
translation by strugglingyouth
|
||||
Linux/UNIX: Bash Read a File Line By Line
|
||||
================================================================================
|
||||
How do I read a file line by line under a Linux or UNIX-like system using KSH or BASH shell?
|
||||
|
||||
You can use a while..do..done bash loop to read a file line by line on a Linux, OS X, *BSD, or Unix-like system.
|
||||
|
||||
**Syntax to read file line by line on a Bash Unix & Linux shell:**
|
||||
|
||||
1. The syntax is as follows for bash, ksh, zsh, and all other shells -
|
||||
1. while read -r line; do COMMAND; done < input.file
|
||||
1. The -r option passed to the read command prevents backslash escapes from being interpreted.
|
||||
1. Add IFS= option before read command to prevent leading/trailing whitespace from being trimmed -
|
||||
1. while IFS= read -r line; do COMMAND_on $line; done < input.file
|
||||
|
||||
Here is a more human-readable version of the syntax:
|
||||
|
||||
#!/bin/bash
|
||||
input="/path/to/txt/file"
|
||||
while IFS= read -r var
|
||||
do
|
||||
echo "$var"
|
||||
done < "$input"
|
||||
|
||||
**Examples**
|
||||
|
||||
Here are some examples:
|
||||
|
||||
#!/bin/ksh
|
||||
file="/home/vivek/data.txt"
|
||||
while IFS= read line
|
||||
do
|
||||
# display $line or do something with $line
|
||||
echo "$line"
|
||||
done <"$file"
|
||||
|
||||
The same example using bash shell:
|
||||
|
||||
#!/bin/bash
|
||||
file="/home/vivek/data.txt"
|
||||
while IFS= read -r line
|
||||
do
|
||||
# display $line or do something with $line
|
||||
printf '%s\n' "$line"
|
||||
done <"$file"
|
||||
|
||||
You can also read field wise:
|
||||
|
||||
#!/bin/bash
|
||||
file="/etc/passwd"
|
||||
while IFS=: read -r f1 f2 f3 f4 f5 f6 f7
|
||||
do
|
||||
# display fields using f1, f2,..,f7
|
||||
printf 'Username: %s, Shell: %s, Home Dir: %s\n' "$f1" "$f7" "$f6"
|
||||
done <"$file"
|
||||
|
||||
Sample outputs:
|
||||
|
||||
![Fig.01: Bash shell scripting- read file line by line demo outputs](http://s0.cyberciti.org/uploads/faq/2011/01/Bash-Scripting-Read-File-line-by-line-demo.jpg)
|
||||
|
||||
Fig.01: Bash shell scripting- read file line by line demo outputs
|
||||
|
||||
**Bash Scripting: Read text file line-by-line to create pdf files**
|
||||
|
||||
My input file is as follows (faq.txt):
|
||||
|
||||
4|http://www.cyberciti.biz/faq/mysql-user-creation/|Mysql User Creation: Setting Up a New MySQL User Account
|
||||
4096|http://www.cyberciti.biz/faq/ksh-korn-shell/|What is UNIX / Linux Korn Shell?
|
||||
4101|http://www.cyberciti.biz/faq/what-is-posix-shell/|What Is POSIX Shell?
|
||||
17267|http://www.cyberciti.biz/faq/linux-check-battery-status/|Linux: Check Battery Status Command
|
||||
17245|http://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/|Linux Restart NTPD Service Command
|
||||
17183|http://www.cyberciti.biz/faq/ubuntu-linux-determine-your-ip-address/|Ubuntu Linux: Determine Your IP Address
|
||||
17172|http://www.cyberciti.biz/faq/determine-ip-address-of-linux-server/|HowTo: Determine an IP Address My Linux Server
|
||||
16510|http://www.cyberciti.biz/faq/unix-linux-restart-php-service-command/|Linux / Unix: Restart PHP Service Command
|
||||
8292|http://www.cyberciti.biz/faq/mounting-harddisks-in-freebsd-with-mount-command/|FreeBSD: Mount Hard Drive / Disk Command
|
||||
8190|http://www.cyberciti.biz/faq/rebooting-solaris-unix-server/|Reboot a Solaris UNIX System
|
||||
|
||||
My bash script:
|
||||
|
||||
#!/bin/bash
|
||||
# Usage: Create pdf files from input (wrapper script)
|
||||
# Author: Vivek Gite <Www.cyberciti.biz> under GPL v2.x+
|
||||
#---------------------------------------------------------
|
||||
|
||||
#Input file
|
||||
_db="/tmp/wordpress/faq.txt"
|
||||
|
||||
#Output location
|
||||
o="/var/www/private/pdf/faq"
|
||||
|
||||
_writer="$HOME/bin/py/pdfwriter.py"   # a quoted "~" would not expand, so use $HOME
|
||||
|
||||
# If file exists
|
||||
if [[ -f "$_db" ]]
|
||||
then
|
||||
# read it
|
||||
while IFS='|' read -r pdfid pdfurl pdftitle
|
||||
do
|
||||
pdf="$o/$pdfid.pdf"   # "local" removed: it is only valid inside a function
|
||||
echo "Creating $pdf file ..."
|
||||
# Generate pdf file
|
||||
$_writer --quiet --footer-spacing 2 \
|
||||
--footer-left "nixCraft is GIT UL++++ W+++ C++++ M+ e+++ d-" \
|
||||
--footer-right "Page [page] of [toPage]" --footer-line \
|
||||
--footer-font-size 7 --print-media-type "$pdfurl" "$pdf"
|
||||
done <"$_db"
|
||||
fi
|
||||
|
||||
**Tip: Read from bash variable**
|
||||
|
||||
Let us say you want a list of all installed PHP packages on a Debian or Ubuntu Linux system; enter:
|
||||
|
||||
# My input source is the contents of a variable called $list #
|
||||
list=$(dpkg --list php\* | awk '/ii/{print $2}')
|
||||
printf '%s\n' "$list"
|
||||
|
||||
Sample outputs:
|
||||
|
||||
php-pear
|
||||
php5-cli
|
||||
php5-common
|
||||
php5-fpm
|
||||
php5-gd
|
||||
php5-json
|
||||
php5-memcache
|
||||
php5-mysql
|
||||
php5-readline
|
||||
php5-suhosin-extension
|
||||
|
||||
You can now read from $list and install the package:
|
||||
|
||||
#!/bin/bash
|
||||
# BASH can iterate over $list variable using a "here string" #
|
||||
while IFS= read -r pkg
|
||||
do
|
||||
printf 'Installing php package %s...\n' "$pkg"
|
||||
/usr/bin/apt-get -qq install "$pkg"
|
||||
done <<< "$list"
|
||||
printf '*** Do not forget to run php5enmod and restart the server (httpd or php5-fpm) ***\n'
|
||||
|
||||
Sample outputs:
|
||||
|
||||
Installing php package php-pear...
|
||||
Installing php package php5-cli...
|
||||
Installing php package php5-common...
|
||||
Installing php package php5-fpm...
|
||||
Installing php package php5-gd...
|
||||
Installing php package php5-json...
|
||||
Installing php package php5-memcache...
|
||||
Installing php package php5-mysql...
|
||||
Installing php package php5-readline...
|
||||
Installing php package php5-suhosin-extension...
|
||||
*** Do not forget to run php5enmod and restart the server (httpd or php5-fpm) ***
|
||||
|
||||
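One caveat worth adding to the examples above: piping a command into `while read` runs the loop body in a subshell in most shells (bash, dash), so variables set inside the loop are lost when it ends. Feed the loop through a redirection or here-document instead:

```shell
# Pitfall: the pipeline puts the loop in a subshell.
count=0
printf 'a\nb\n' | while IFS= read -r line; do count=$((count+1)); done
echo "pipe: $count"      # 0 in bash/dash - the increment happened in a subshell

# Fix: redirect input so the loop runs in the current shell.
count=0
while IFS= read -r line; do count=$((count+1)); done <<EOF
a
b
EOF
echo "heredoc: $count"   # 2
```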
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.cyberciti.biz/faq/unix-howto-read-line-by-line-from-file/
|
||||
|
||||
作者:[作者名][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,3 +1,5 @@
|
||||
Translating by Xuanwo
|
||||
|
||||
Part 1 - LFCS: How to use GNU ‘sed’ Command to Create, Edit, and Manipulate files in Linux
|
||||
================================================================================
|
||||
The Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, a new program that aims at helping individuals all over the world to get certified in basic to intermediate system administration tasks for Linux systems. This includes supporting running systems and services, along with first-hand troubleshooting and analysis, and smart decision-making to escalate issues to engineering teams.
|
||||
|
@ -1,3 +1,5 @@
|
||||
Translating by Xuanwo
|
||||
|
||||
Part 10 - LFCS: Understanding & Learning Basic Shell Scripting and Linux Filesystem Troubleshooting
|
||||
================================================================================
|
||||
The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new initiative whose purpose is to allow individuals everywhere (and anywhere) to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams.
|
||||
@ -99,10 +101,10 @@ Execute Script
|
||||
|
||||
Whenever you need to specify different courses of action to be taken in a shell script, as result of the success or failure of a command, you will use the if construct to define such conditions. Its basic syntax is:
|
||||
|
||||
if CONDITION; then
|
||||
if CONDITION; then
|
||||
COMMANDS;
|
||||
else
|
||||
OTHER-COMMANDS
|
||||
OTHER-COMMANDS
|
||||
fi
|
||||
|
||||
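A minimal, runnable illustration of this construct (the file name and its contents are made up for the demo):

```shell
# The exit status of grep decides which branch runs.
printf 'sshd\nmariadb\nhttpd\n' > /tmp/myservices.demo.txt

if grep -q "httpd" /tmp/myservices.demo.txt; then
    msg="httpd is listed"
else
    msg="httpd is not listed"
fi
echo "$msg"
```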
Where CONDITION can be one of the following (only the most frequent conditions are cited here) and evaluates to true when:
|
||||
@ -133,8 +135,8 @@ Where CONDITION can be one of the following (only the most frequent conditions a
|
||||
|
||||
This loop allows to execute one or more commands for each value in a list of values. Its basic syntax is:
|
||||
|
||||
for item in SEQUENCE; do
|
||||
COMMANDS;
|
||||
for item in SEQUENCE; do
|
||||
COMMANDS;
|
||||
done
|
||||
|
||||
Where item is a generic variable that represents each value in SEQUENCE during each iteration.
|
||||
@ -143,8 +145,8 @@ Where item is a generic variable that represents each value in SEQUENCE during e
|
||||
|
||||
This loop allows to execute a series of repetitive commands as long as the control command executes with an exit status equal to zero (successfully). Its basic syntax is:
|
||||
|
||||
while EVALUATION_COMMAND; do
|
||||
EXECUTE_COMMANDS;
|
||||
while EVALUATION_COMMAND; do
|
||||
EXECUTE_COMMANDS;
|
||||
done
|
||||
|
||||
Where EVALUATION_COMMAND can be any command(s) that can exit with a success (0) or failure (other than 0) status, and EXECUTE_COMMANDS can be any program, script or shell construct, including other nested loops.
|
||||
@ -158,7 +160,7 @@ We will demonstrate the use of the if construct and the for loop with the follow
|
||||
Let’s create a file with a list of services that we want to monitor at a glance.
|
||||
|
||||
# cat myservices.txt
|
||||
|
||||
|
||||
sshd
|
||||
mariadb
|
||||
httpd
|
||||
@ -172,10 +174,10 @@ Script to Monitor Linux Services
|
||||
Our shell script should look like.
|
||||
|
||||
#!/bin/bash
|
||||
|
||||
|
||||
# This script iterates over a list of services and
|
||||
# is used to determine whether they are running or not.
|
||||
|
||||
|
||||
for service in $(cat myservices.txt); do
|
||||
systemctl status $service | grep --quiet "running"
|
||||
if [ $? -eq 0 ]; then
|
||||
@ -214,10 +216,10 @@ Services Monitoring Script
|
||||
We could go one step further and check for the existence of myservices.txt before even attempting to enter the for loop.
|
||||
|
||||
#!/bin/bash
|
||||
|
||||
|
||||
# This script iterates over a list of services and
|
||||
# is used to determine whether they are running or not.
|
||||
|
||||
|
||||
if [ -f myservices.txt ]; then
|
||||
for service in $(cat myservices.txt); do
|
||||
systemctl status $service | grep --quiet "running"
|
||||
@ -238,9 +240,9 @@ You may want to maintain a list of hosts in a text file and use a script to dete
|
||||
The read shell built-in command tells the while loop to read myhosts line by line and assigns the content of each line to variable host, which is then passed to the ping command.
|
||||
|
||||
#!/bin/bash
|
||||
|
||||
|
||||
# This script is used to demonstrate the use of a while loop
|
||||
|
||||
|
||||
while read host; do
|
||||
ping -c 2 $host
|
||||
done < myhosts
|
||||
|
@ -1,3 +1,5 @@
|
||||
Translating by Xuanwo
|
||||
|
||||
Part 2 - LFCS: How to Install and Use vi/vim as a Full Text Editor
|
||||
================================================================================
|
||||
A couple of months ago, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification in order to help individuals from all over the world to verify they are capable of doing basic to intermediate system administration tasks on Linux systems: system support, first-hand troubleshooting and maintenance, plus intelligent decision-making to know when it’s time to raise issues to upper support teams.
|
||||
@ -295,7 +297,7 @@ Vi Search String in File
|
||||
|
||||
c). vi uses a command (similar to sed’s) to perform substitution operations over a range of lines or an entire file. To change the word “old” to “young” for the entire file, we must enter the following command.
|
||||
|
||||
:%s/old/young/g
|
||||
:%s/old/young/g
|
||||
|
||||
**Notice**: The colon at the beginning of the command.
|
||||
|
||||
|
@ -1,3 +1,5 @@
|
||||
Translating by Xuanwo
|
||||
|
||||
Part 3 - LFCS: How to Archive/Compress Files & Directories, Setting File Attributes and Finding Files in Linux
|
||||
================================================================================
|
||||
Recently, the Linux Foundation started the LFCS (Linux Foundation Certified Sysadmin) certification, a brand new program whose purpose is allowing individuals from all corners of the globe to have access to an exam, which if approved, certifies that the person is knowledgeable in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-level troubleshooting and analysis, plus the ability to decide when to escalate issues to engineering teams.
|
||||
@ -178,9 +180,9 @@ List Archive Content
|
||||
|
||||
Run any of the following commands:
|
||||
|
||||
# gzip -d myfiles.tar.gz [#1]
|
||||
# bzip2 -d myfiles.tar.bz2 [#2]
|
||||
# xz -d myfiles.tar.xz [#3]
|
||||
# gzip -d myfiles.tar.gz [#1]
|
||||
# bzip2 -d myfiles.tar.bz2 [#2]
|
||||
# xz -d myfiles.tar.xz [#3]
|
||||
|
||||
Then
|
||||
|
||||
|
@ -1,3 +1,5 @@
|
||||
Translating by Xuanwo
|
||||
|
||||
Part 4 - LFCS: Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition
|
||||
================================================================================
|
||||
Last August, the Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators to show, through a performance-based exam, that they can perform overall operational support of Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation – if needed – to other support teams.
|
||||
|
@ -1,3 +1,5 @@
|
||||
Translating by Xuanwo
|
||||
|
||||
Part 5 - LFCS: How to Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux
|
||||
================================================================================
|
||||
The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is allowing individuals from all corners of the globe to get certified in basic to intermediate system administration tasks for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams.
|
||||
|
@ -1,3 +1,5 @@
|
||||
Translating by Xuanwo
|
||||
|
||||
Part 6 - LFCS: Assembling Partitions as RAID Devices – Creating & Managing System Backups
|
||||
================================================================================
|
||||
Recently, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification, a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of performing overall operational support on Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation, when required, to other support teams.
|
||||
@ -24,7 +26,7 @@ However, the actual fault-tolerance and disk I/O performance lean on how the har
|
||||
Our tool of choice for creating, assembling, managing, and monitoring our software RAIDs is called mdadm (short for multiple disks admin).
|
||||
|
||||
---------------- Debian and Derivatives ----------------
|
||||
# aptitude update && aptitude install mdadm
|
||||
# aptitude update && aptitude install mdadm
|
||||
|
||||
----------
|
||||
|
||||
@ -34,7 +36,7 @@ Our tool of choice for creating, assembling, managing, and monitoring our softwa
|
||||
----------
|
||||
|
||||
---------------- On openSUSE ----------------
|
||||
# zypper refresh && zypper install mdadm #
|
||||
# zypper refresh && zypper install mdadm #
|
||||
|
||||
#### Assembling Partitions as RAID Devices ####
|
||||
|
||||
@ -55,7 +57,7 @@ Creating RAID Array
|
||||
After creating the RAID array, you can check its status using the following commands.
|
||||
|
||||
# cat /proc/mdstat
|
||||
or
|
||||
or
|
||||
# mdadm --detail /dev/md0 [More detailed summary]
|
||||
|
||||
![Check RAID Array Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Array-Status.png)
|
||||
@ -203,16 +205,16 @@ The downside of this backup approach is that the image will have the same size a
|
||||
|
||||
# dd if=/dev/sda of=/system_images/sda.img
|
||||
OR
|
||||
--------------------- Alternatively, you can compress the image file ---------------------
|
||||
# dd if=/dev/sda | gzip -c > /system_images/sda.img.gz
|
||||
--------------------- Alternatively, you can compress the image file ---------------------
|
||||
# dd if=/dev/sda | gzip -c > /system_images/sda.img.gz
|
||||
|
||||
**Restoring the backup from the image file**
|
||||
|
||||
# dd if=/system_images/sda.img of=/dev/sda
|
||||
OR
|
||||
|
||||
--------------------- Depending on your choice while creating the image ---------------------
|
||||
gzip -dc /system_images/sda.img.gz | dd of=/dev/sda
|
||||
OR
|
||||
|
||||
--------------------- Depending on your choice while creating the image ---------------------
|
||||
gzip -dc /system_images/sda.img.gz | dd of=/dev/sda
|
||||
|
||||
Method 2: Backup certain files / directories with tar command – already covered in [Part 3][3] of this series. You may consider using this method if you need to keep copies of specific files and directories (configuration files, users’ home directories, and so on).
|
||||
|
||||
@ -247,7 +249,7 @@ Synchronizing remote → local directories over ssh.
|
||||
|
||||
In this case, switch the source and destination directories from the previous example.
|
||||
|
||||
# rsync -avzhe ssh root@remote_host:/remote_directory/ backups
|
||||
# rsync -avzhe ssh root@remote_host:/remote_directory/ backups
|
||||
|
||||
Please note that these are only 3 examples (the most frequent cases you’re likely to run into) of the use of rsync. More examples and uses of the rsync command can be found in the following article.
|
||||
|
||||
|
@ -1,3 +1,5 @@
|
||||
Translating by Xuanwo
|
||||
|
||||
Part 7 - LFCS: Managing System Startup Process and Services (SysVinit, Systemd and Upstart)
|
||||
================================================================================
|
||||
A couple of months ago, the Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, an exciting new program whose aim is allowing individuals from all ends of the world to get certified in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-hand problem-finding and analysis, plus the ability to decide when to raise issues to engineering teams.
|
||||
@ -267,7 +269,7 @@ Starting Stoping Services
|
||||
|
||||
Under systemd you can enable or disable a service when it boots.
|
||||
|
||||
# systemctl enable [service] # enable a service
|
||||
# systemctl enable [service] # enable a service
|
||||
# systemctl disable [service] # prevent a service from starting at boot
|
||||
|
||||
The process of enabling or disabling a service to start automatically on boot consists in adding or removing symbolic links in the /etc/systemd/system/multi-user.target.wants directory.
|
||||
@ -315,7 +317,7 @@ For example,
|
||||
|
||||
# My test service - Upstart script demo description "Here goes the description of 'My test service'" author "Dave Null <dave.null@example.com>"
|
||||
# Stanzas
|
||||
|
||||
|
||||
#
|
||||
# Stanzas define when and how a process is started and stopped
|
||||
# See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn
|
||||
|
@ -1,3 +1,5 @@
|
||||
Translating by Xuanwo
|
||||
|
||||
Part 8 - LFCS: Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts
|
||||
================================================================================
|
||||
Last August, the Linux Foundation started the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is to allow individuals everywhere and anywhere take an exam in order to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus intelligent decision-making to be able to decide when it’s necessary to escalate issues to higher level support teams.
|
||||
@ -191,7 +193,7 @@ Thus, any user should have permission to run /bin/passwd, but only root will be
|
||||
![Change User Password in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/change-user-password.png)
|
||||
|
||||
Change User Password
|
||||
|
||||
|
||||
**Understanding Setgid**
|
||||
|
||||
When the setgid bit is set, the effective GID of the real user becomes that of the group owner. Thus, any user can access a file under the privileges granted to the group owner of such file. In addition, when the setgid bit is set on a directory, newly created files inherit the same group as the directory, and newly created subdirectories will also inherit the setgid bit of the parent directory. You will most likely use this approach whenever members of a certain group need access to all the files in a directory, regardless of the file owner’s primary group.
|
||||
|
@ -1,3 +1,5 @@
|
||||
Translating by Xuanwo
|
||||
|
||||
Part 9 - LFCS: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper
|
||||
================================================================================
|
||||
Last August, the Linux Foundation announced the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of succeeding at overall operational support for Linux systems. A Linux Foundation Certified Sysadmin has the expertise to ensure effective system support, first-level troubleshooting and monitoring, including finally issue escalation, when needed, to engineering support teams.
|
||||
@ -85,7 +87,7 @@ rpm is the package management system used by Linux Standard Base (LSB)-compliant
|
||||
yum adds the functionality of automatic updates and package management with dependency management to RPM-based systems. As a high-level tool, like apt-get or aptitude, yum works with repositories.
|
||||
|
||||
- Read More: [20 yum Command Examples][4]
|
||||
-
|
||||
-
|
||||
### Common Usage of Low-Level Tools ###
|
||||
|
||||
The most frequent tasks that you will do with low level tools are as follows:
|
||||
@ -155,7 +157,7 @@ The most frequent tasks that you will do with high level tools are as follows.
|
||||
|
||||
aptitude update will update the list of available packages, and aptitude search will perform the actual search for package_name.
|
||||
|
||||
# aptitude update && aptitude search package_name
|
||||
# aptitude update && aptitude search package_name
|
||||
|
||||
In the search all option, yum will search for package_name not only in package names, but also in package descriptions.
|
||||
|
||||
@ -190,8 +192,8 @@ The option remove will uninstall the package but leaving configuration files int
|
||||
# yum erase package_name
|
||||
|
||||
---Notice the minus sign in front of the package that will be uninstalled, openSUSE ---
|
||||
|
||||
# zypper remove -package_name
|
||||
|
||||
# zypper remove -package_name
|
||||
|
||||
Most (if not all) package managers will prompt you, by default, if you’re sure about proceeding with the uninstallation before actually performing it. So read the onscreen messages carefully to avoid running into unnecessary trouble!
|
||||
|
||||
@ -199,7 +201,7 @@ Most (if not all) package managers will prompt you, by default, if you’re sure
|
||||
|
||||
The following command will display information about the birthday package.
|
||||
|
||||
# aptitude show birthday
|
||||
# aptitude show birthday
|
||||
# yum info birthday
|
||||
# zypper info birthday
|
||||
|
||||
|
@ -1,321 +0,0 @@
|
||||
Translating by struggling
|
||||
Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux – Part 5
|
||||
================================================================================
|
||||
RAID 6 is an upgraded version of RAID 5: it has two sets of distributed parity, which provides fault tolerance even after two drives fail. Mission-critical systems remain operational in case of two concurrent disk failures. It is similar to RAID 5, but more robust, because it uses one more disk for parity.
|
||||
|
||||
In our earlier article, we saw distributed parity in RAID 5; in this article we are going to set up RAID 6 with double distributed parity. Don’t expect extra performance compared to other RAID levels unless you also install a dedicated RAID controller. With RAID 6, even if we lose two disks we can get the data back by replacing them with spare drives and rebuilding from parity.
|
||||
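The parity rebuild mentioned above can be sketched in a few lines of shell. This is a toy illustration only: it shows the simple XOR-based P parity on single integers, whereas real arrays operate on whole chunks, and RAID 6's second parity (Q) uses Reed-Solomon coding, which is not shown here.

```shell
# Toy illustration of parity rebuild: P parity is the XOR of the data chunks,
# so any single missing chunk can be recomputed from the survivors.
# (RAID 6 adds a second, Reed-Solomon based parity Q, not shown here.)
d1=5; d2=9; d3=12                 # three data "chunks" (arbitrary demo values)
p=$(( d1 ^ d2 ^ d3 ))             # parity stored on the parity chunk
rebuilt_d2=$(( p ^ d1 ^ d3 ))     # pretend d2's disk died; rebuild it
echo "parity=$p rebuilt_d2=$rebuilt_d2"   # -> parity=0 rebuilt_d2=9
```

This is exactly what mdadm does at scale when it rebuilds a replaced member from the surviving disks.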
|
||||
![Setup RAID 6 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Setup-RAID-6-in-Linux.jpg)
|
||||
|
||||
Setup RAID 6 in Linux
|
||||
|
||||
To set up RAID 6, a minimum of 4 disks (or more) in a set is required. While reading, data is read from all the drives, so reads are fast; writes, however, are poor because parity has to be computed and striped over multiple disks.
|
||||
|
||||
Many of us may wonder why we would use RAID 6 when it doesn’t outperform other RAID levels. The answer: choose RAID 6 when you need high fault tolerance. Environments requiring high availability for databases use RAID 6 because the data is the most important asset and must be kept safe at any cost; it can also be useful for video streaming environments.
|
||||
|
||||
#### Pros and Cons of RAID 6 ####
|
||||
|
||||
- Performance is good.
|
||||
- RAID 6 is expensive, as it requires two independent drives for parity functions.
|
||||
- Two disks’ worth of capacity is lost to parity information (double parity).
|
||||
- No data loss, even after two disks fail. We can rebuild from parity after replacing the failed disks.
|
||||
- Reads are better than in RAID 5, because data is read from multiple disks, but write performance will be very poor without a dedicated RAID controller.
|
||||
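The capacity cost listed above is easy to quantify: a RAID 6 array yields (n − 2) disks' worth of usable space. A quick arithmetic sketch, using this article's four 20 GB disks as the assumed values:

```shell
# Usable RAID 6 capacity = (number of disks - 2) * disk size,
# since two disks' worth of space is consumed by the double parity.
disks=4
size_gb=20
usable=$(( (disks - 2) * size_gb ))
total=$(( disks * size_gb ))
echo "RAID 6: ${usable} GB usable out of ${total} GB raw"   # -> RAID 6: 40 GB usable out of 80 GB raw
```

Adding more member disks raises the usable fraction, since the parity overhead stays fixed at two disks.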
|
||||
#### Requirements ####
|
||||
|
||||
A minimum of 4 disks is required to create a RAID 6. You can add more disks if you want, but then you should have a dedicated RAID controller, since software RAID will not deliver better performance at that scale.
|
||||
|
||||
If you are new to RAID setup, we recommend going through the RAID articles below.
|
||||
|
||||
- [Basic Concepts of RAID in Linux – Part 1][1]
|
||||
- [Creating Software RAID 0 (Stripe) in Linux – Part 2][2]
|
||||
- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3]
|
||||
|
||||
#### My Server Setup ####
|
||||
|
||||
Operating System : CentOS 6.5 Final
|
||||
IP Address : 192.168.0.228
|
||||
Hostname : rd6.tecmintlocal.com
|
||||
Disk 1 [20GB] : /dev/sdb
|
||||
Disk 2 [20GB] : /dev/sdc
|
||||
Disk 3 [20GB] : /dev/sdd
|
||||
Disk 4 [20GB] : /dev/sde
|
||||
|
||||
This article is Part 5 of a 9-tutorial RAID series. Here we are going to see how to create and set up software RAID 6 (striping with double distributed parity) on Linux systems or servers, using four 20GB disks named /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde.
|
||||
|
||||
### Step 1: Installing mdadm Tool and Examine Drives ###
|
||||
|
||||
1. If you have been following our last two RAID articles (Part 2 and Part 3), you have already seen how to install the ‘mdadm‘ tool. If you are new to this series, ‘mdadm‘ is a tool to create and manage RAID on Linux systems. Install the tool using the following command, according to your Linux distribution.
|
||||
|
||||
# yum install mdadm [on RedHat systems]
|
||||
# apt-get install mdadm [on Debian systems]
|
||||
|
||||
2. After installing the tool, verify the four attached drives that we are going to use for RAID creation, using the following ‘fdisk‘ command.
|
||||
|
||||
# fdisk -l | grep sd
|
||||
|
||||
![Check Hard Disk in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Linux-Disks.png)
|
||||
|
||||
Check Disks in Linux
|
||||
|
||||
3. Before creating RAID drives, always examine the disks to check whether any RAID is already configured on them.
|
||||
|
||||
# mdadm -E /dev/sd[b-e]
|
||||
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
|
||||
|
||||
![Check Raid on Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Disk-Raid.png)
|
||||
|
||||
Check Raid on Disk
|
||||
|
||||
**Note**: The above image shows that no super-block is detected, i.e. no RAID is defined on the four disk drives. We may move on to creating RAID 6.
|
||||
|
||||
### Step 2: Drive Partitioning for RAID 6 ###
|
||||
|
||||
4. Now create partitions for RAID on ‘/dev/sdb‘, ‘/dev/sdc‘, ‘/dev/sdd‘ and ‘/dev/sde‘ with the help of the following fdisk command. Here we will show how to create a partition on the sdb drive; the same steps should be followed for the rest of the drives.
|
||||
|
||||
**Create /dev/sdb Partition**
|
||||
|
||||
# fdisk /dev/sdb
|
||||
|
||||
Please follow the instructions as shown below for creating partition.
|
||||
|
||||
- Press ‘n‘ for creating new partition.
|
||||
- Then choose ‘P‘ for Primary partition.
|
||||
- Next choose the partition number as 1.
|
||||
- Accept the default values by pressing the Enter key twice.
|
||||
- Next press ‘P‘ to print the defined partition.
|
||||
- Press ‘L‘ to list all available types.
|
||||
- Type ‘t‘ to change the partition type.
|
||||
- Choose ‘fd‘ for Linux raid auto and press Enter to apply.
|
||||
- Then use ‘P‘ again to print the changes we have made.
|
||||
- Use ‘w‘ to write the changes.
|
||||
|
||||
![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition.png)
|
||||
|
||||
Create /dev/sdb Partition
|
||||
|
||||
**Create /dev/sdc Partition**
|
||||
|
||||
# fdisk /dev/sdc
|
||||
|
||||
![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition.png)
|
||||
|
||||
Create /dev/sdc Partition
|
||||
|
||||
**Create /dev/sdd Partition**
|
||||
|
||||
# fdisk /dev/sdd
|
||||
|
||||
![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition.png)
|
||||
|
||||
Create /dev/sdd Partition
|
||||
|
||||
**Create /dev/sde Partition**
|
||||
|
||||
# fdisk /dev/sde
|
||||
|
||||
![Create sde Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sde-Partition.png)
|
||||
|
||||
Create /dev/sde Partition
|
||||
|
||||
5. After creating the partitions, it is always a good habit to examine the drives for super-blocks. If super-blocks do not exist, we can go ahead and create a new RAID setup.
|
||||
|
||||
# mdadm -E /dev/sd[b-e]1
|
||||
|
||||
|
||||
or
|
||||
|
||||
# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
|
||||
|
||||
![Check Raid on New Partitions](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Partitions.png)
|
||||
|
||||
Check Raid on New Partitions
|
||||
|
||||
### Step 3: Creating md device (RAID) ###
|
||||
|
||||
6. Now it’s time to create the RAID device ‘md0‘ (i.e. /dev/md0), apply the RAID level to all newly created partitions, and confirm the RAID using the following commands.
|
||||
|
||||
# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
|
||||
# cat /proc/mdstat
|
||||
|
||||
![Create Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Raid-6-Device.png)
|
||||
|
||||
Create Raid 6 Device
|
||||
|
||||
7. You can also check the current progress of the RAID build using the watch command, as shown in the screen grab below.
|
||||
|
||||
# watch -n1 cat /proc/mdstat
|
||||
|
||||
![Check Raid 6 Process](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Process.png)
|
||||
|
||||
Check Raid 6 Process
|
||||
|
||||
8. Verify the raid devices using the following command.
|
||||
|
||||
# mdadm -E /dev/sd[b-e]1
|
||||
|
||||
**Note**: The above command displays the information of all four disks, which is quite long, so it is not possible to post the output or a screen grab here.
|
||||
|
||||
9. Next, verify the RAID array to confirm that the re-syncing has started.
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Check Raid 6 Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Array.png)
|
||||
|
||||
Check Raid 6 Array
|
||||
|
||||
### Step 4: Creating FileSystem on Raid Device ###
|
||||
|
||||
10. Create an ext4 filesystem on ‘/dev/md0‘ and mount it under /mnt/raid6. Here we’ve used ext4, but you can use any type of filesystem you prefer.
|
||||
|
||||
# mkfs.ext4 /dev/md0
|
||||
|
||||
![Create File System on Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Create-File-System-on-Raid.png)
|
||||
|
||||
Create File System on Raid 6
|
||||
|
||||
11. Mount the created filesystem under /mnt/raid6 and verify the files under the mount point; we can see the lost+found directory.
|
||||
|
||||
# mkdir /mnt/raid6
|
||||
# mount /dev/md0 /mnt/raid6/
|
||||
# ls -l /mnt/raid6/
|
||||
|
||||
12. Create some files under the mount point and append some text to one of them to verify the content.
|
||||
|
||||
# touch /mnt/raid6/raid6_test.txt
|
||||
# ls -l /mnt/raid6/
|
||||
# echo "tecmint raid setups" > /mnt/raid6/raid6_test.txt
|
||||
# cat /mnt/raid6/raid6_test.txt
|
||||
|
||||
![Verify Raid Content](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Content.png)
|
||||
|
||||
Verify Raid Content
|
||||
|
||||
13. Add an entry in /etc/fstab to auto-mount the device at system startup. Append the entry below; the mount point may differ according to your environment.
|
||||
|
||||
# vim /etc/fstab
|
||||
|
||||
/dev/md0 /mnt/raid6 ext4 defaults 0 0
|
||||
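Before rebooting, it can't hurt to check that the new line really has fstab's six fields (device, mount point, filesystem type, options, dump flag, fsck pass). A small sketch using awk on the entry above:

```shell
# Validate the shape of an fstab entry: exactly six whitespace-separated fields.
entry="/dev/md0 /mnt/raid6 ext4 defaults 0 0"
echo "$entry" | awk 'NF == 6 { printf "ok: device=%s mount=%s type=%s\n", $1, $2, $3; next }
                     { print "malformed: expected 6 fields, got " NF }'
```

For the entry above this prints `ok: device=/dev/md0 mount=/mnt/raid6 type=ext4`; a missing field would be flagged instead of silently breaking the boot.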
|
||||
![Automount Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Automount-Raid-Device.png)
|
||||
|
||||
Automount Raid 6 Device
|
||||
|
||||
14. Next, execute the ‘mount -av‘ command to verify whether there are any errors in the fstab entry.
|
||||
|
||||
# mount -av
|
||||
|
||||
![Verify Raid Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Automount-Raid-Devices.png)
|
||||
|
||||
Verify Raid Automount
|
||||
|
||||
### Step 5: Save RAID 6 Configuration ###
|
||||
|
||||
15. Please note that by default RAID has no config file. We have to save the configuration manually using the command below, and then verify the status of the device ‘/dev/md0‘.
|
||||
|
||||
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Save Raid 6 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png)
|
||||
|
||||
Save Raid 6 Configuration
|
||||
|
||||
![Check Raid 6 Status](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png)
|
||||
|
||||
Check Raid 6 Status
|
||||
|
||||
### Step 6: Adding a Spare Drive ###
|
||||
|
||||
16. The array now has 4 disks with two sets of parity information available. Even if one or two of the disks fail, we can still recover the data, because of the double parity in RAID 6.
|
||||
|
||||
If a second disk fails, we can add a new one before losing a third disk. It is possible to define a spare drive while creating the RAID set, but I did not do so here; a spare drive can also be added after a drive failure. Since the RAID set is already created, let me add a spare drive for demonstration.
|
||||
|
||||
For demonstration purposes, I’ve hot-plugged a new HDD (i.e. /dev/sdf). Let’s verify the attached disk.
|
||||
|
||||
# ls -l /dev/ | grep sd
|
||||
|
||||
![Check New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-New-Disk.png)
|
||||
|
||||
Check New Disk
|
||||
|
||||
17. Now confirm again whether any RAID is already configured on the newly attached disk, using the same mdadm command.
|
||||
|
||||
# mdadm --examine /dev/sdf
|
||||
|
||||
![Check Raid on New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Disk.png)
|
||||
|
||||
Check Raid on New Disk
|
||||
|
||||
**Note**: As with the four disks earlier, we have to create a new partition on the newly plugged disk using the fdisk command.
|
||||
|
||||
# fdisk /dev/sdf
|
||||
|
||||
![Create sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Partition-on-sdf.png)
|
||||
|
||||
Create /dev/sdf Partition
|
||||
|
||||
18. After creating the new partition on /dev/sdf, confirm there is no RAID on the partition, add the spare drive to the /dev/md0 RAID device, and verify the added device.
|
||||
|
||||
# mdadm --examine /dev/sdf
|
||||
# mdadm --examine /dev/sdf1
|
||||
# mdadm --add /dev/md0 /dev/sdf1
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Verify Raid on sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-on-sdf.png)
|
||||
|
||||
Verify Raid on sdf Partition
|
||||
|
||||
![Add sdf Partition to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Add-sdf-Partition-to-Raid.png)
|
||||
|
||||
Add sdf Partition to Raid
|
||||
|
||||
![Verify sdf Partition Details](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-sdf-Details.png)
|
||||
|
||||
Verify sdf Partition Details
|
||||
|
||||
### Step 7: Check Raid 6 Fault Tolerance ###
|
||||
|
||||
19. Now let us check whether the spare drive kicks in automatically if one of the disks in our array fails. For testing, I’ve manually marked one of the drives as failed.
|
||||
|
||||
Here, we’re going to mark /dev/sdd1 as failed drive.
|
||||
|
||||
# mdadm --manage --fail /dev/md0 /dev/sdd1
|
||||
|
||||
![Check Raid 6 Fault Tolerance](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Failover.png)
|
||||
|
||||
Check Raid 6 Fault Tolerance
|
||||
|
||||
20. Now let me get the details of the RAID set and check whether our spare has started to sync.
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Check Auto Raid Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Auto-Raid-Syncing.png)
|
||||
|
||||
Check Auto Raid Syncing
|
||||
|
||||
**Hurray!** Here we can see that the spare was activated and the rebuilding process has started. At the bottom, the failed drive /dev/sdd1 is listed as faulty. We can monitor the rebuild process using the following command.
|
||||
|
||||
# cat /proc/mdstat
|
||||
|
||||
![Raid 6 Auto Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-6-Auto-Syncing.png)
|
||||
|
||||
Raid 6 Auto Syncing
|
||||
|
||||
### Conclusion: ###
|
||||
|
||||
Here we have seen how to set up RAID 6 using four disks. This RAID level is one of the more expensive setups, with high redundancy. We will see how to set up a nested RAID 10 and much more in the next articles. Till then, stay connected with TECMINT.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/create-raid-6-in-linux/
|
||||
|
||||
作者:[Babin Lonston][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/babinlonston/
|
||||
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
|
||||
[2]:http://www.tecmint.com/create-raid0-in-linux/
|
||||
[3]:http://www.tecmint.com/create-raid1-in-linux/
|
@ -1,276 +0,0 @@
|
||||
Translating by struggling
|
||||
Setting Up RAID 10 or 1+0 (Nested) in Linux – Part 6
|
||||
================================================================================
|
||||
RAID 10 is a combination of RAID 0 and RAID 1. To set up RAID 10, we need at least 4 disks. In our earlier articles, we’ve seen how to set up RAID 0 and RAID 1, each with a minimum of 2 disks.
|
||||
|
||||
Here we will use both RAID 0 and RAID 1 to perform a RAID 10 setup with a minimum of 4 drives. Assume that we have some data saved to a logical volume created with RAID 10. For example, if we save the data “apple”, it will be stored across all 4 disks by the following method.
|
||||
|
||||
![Create Raid 10 in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/raid10.jpg)
|
||||
|
||||
Create Raid 10 in Linux
|
||||
|
||||
Using RAID 0, “a” is saved to the first disk and “p” to the second disk, then the next “p” to the first disk and “l” to the second disk, then “e” to the first disk, continuing this round-robin process to save the data. From this we know that RAID 0 writes half of the data to the first disk and the other half to the second disk.
|
||||
|
||||
With RAID 1, the same data is mirrored to the other 2 disks: “a” is written to both the first and second disks of its pair, “p” is written to both disks, and so on. Thus RAID 1 writes every chunk to both disks, and the round-robin process continues.
|
||||
|
||||
Now you know how RAID 10 works by combining RAID 0 and RAID 1. If we have four 20 GB disks, 80 GB in total, we will get only 40 GB of storage capacity; half of the total capacity is lost to mirroring when building RAID 10.
|
||||
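The striping-plus-mirroring just described can be sketched in shell. This is pure illustration (no real I/O, and real arrays stripe fixed-size chunks rather than single characters): each character of "apple" alternates between the two mirror pairs, and within a pair it is written to both member disks.

```shell
# Toy model of RAID 10 placement: stripe chunks round-robin across two
# mirror pairs; every chunk lands on both disks of its pair.
data="apple"
i=0
while [ $i -lt ${#data} ]; do
    c=$(printf '%s' "$data" | cut -c $((i + 1)))
    pair=$(( i % 2 ))                     # 0 -> disks 1+2, 1 -> disks 3+4
    echo "chunk '$c' -> disk$((pair * 2 + 1)) + disk$((pair * 2 + 2))"
    i=$(( i + 1 ))
done
```

The first line printed is `chunk 'a' -> disk1 + disk2`, matching the layout in the figure above: losing one disk from each pair still leaves a full copy of the data.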
|
||||
#### Pros and Cons of RAID 10 ####
|
||||
|
||||
- Gives better performance.
|
||||
- Two disks’ worth of capacity is lost in RAID 10.
|
||||
- Reading and writing performance is very good, because it reads from and writes to all 4 disks at the same time.
|
||||
- It is well suited for database solutions, which need high disk write I/O.
|
||||
|
||||
#### Requirements ####
|
||||
|
||||
In RAID 10, we need a minimum of 4 disks: the first 2 for RAID 0 and the other 2 for RAID 1. As said before, RAID 10 is just a combination of RAID 0 and 1. If we need to extend the RAID group, we must add at least 4 more disks.
|
||||
|
||||
**My Server Setup**
|
||||
|
||||
Operating System : CentOS 6.5 Final
|
||||
IP Address : 192.168.0.229
|
||||
Hostname : rd10.tecmintlocal.com
|
||||
Disk 1 [20GB] : /dev/sdb
|
||||
Disk 2 [20GB] : /dev/sdc
|
||||
Disk 3 [20GB] : /dev/sdd
|
||||
Disk 4 [20GB] : /dev/sde
|
||||
|
||||
There are two ways to set up RAID 10. I’m going to show you both methods, but I recommend the first one, which makes setting up RAID 10 a lot easier.
|
||||
|
||||
### Method 1: Setting Up Raid 10 ###
|
||||
|
||||
1. First, verify that all 4 added disks are detected, using the following command.
|
||||
|
||||
# ls -l /dev | grep sd
|
||||
|
||||
2. Once the four disks are detected, check whether any RAID already exists on the drives before creating a new one.
|
||||
|
||||
# mdadm -E /dev/sd[b-e]
|
||||
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
|
||||
|
||||
![Verify 4 Added Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-4-Added-Disks.png)
|
||||
|
||||
Verify 4 Added Disks
|
||||
|
||||
**Note**: In the above output, you can see that no super-block is detected yet, which means no RAID is defined on any of the 4 drives.
|
||||
|
||||
#### Step 1: Drive Partitioning for RAID ####
|
||||
|
||||
3. Now create a new partition on all 4 disks (/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde) using the ‘fdisk’ tool.
|
||||
|
||||
# fdisk /dev/sdb
|
||||
# fdisk /dev/sdc
|
||||
# fdisk /dev/sdd
|
||||
# fdisk /dev/sde
|
||||
|
||||
**Create /dev/sdb Partition**
|
||||
|
||||
Let me show you how to partition one of the disks (/dev/sdb) using fdisk; the steps are the same for all the other disks.
|
||||
|
||||
# fdisk /dev/sdb
|
||||
|
||||
Please use the below steps for creating a new partition on /dev/sdb drive.
|
||||
|
||||
- Press ‘n‘ for creating new partition.
|
||||
- Then choose ‘P‘ for Primary partition.
|
||||
- Then choose ‘1‘ to be the first partition.
|
||||
- Next press ‘p‘ to print the created partition.
|
||||
- Change the type; press ‘L‘ to list all available types.
|
||||
- Here we select ‘fd‘, the Linux raid autodetect type.
|
||||
- Next press ‘p‘ to print the defined partition.
|
||||
- Then use ‘p‘ again to print the changes we have made.
|
||||
- Use ‘w‘ to write the changes.
|
||||
|
||||
![Disk sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-sdb-Partition.png)
|
||||
|
||||
Disk sdb Partition
|
||||
|
||||
**Note**: Please use the same instructions above for creating partitions on the other disks (sdc, sdd and sde).
|
||||
|
||||
4. After creating all 4 partitions, you again need to examine the drives for any existing RAID, using the following commands.
|
||||
|
||||
# mdadm -E /dev/sd[b-e]
|
||||
# mdadm -E /dev/sd[b-e]1
|
||||
|
||||
OR
|
||||
|
||||
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
|
||||
# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
|
||||
|
||||
![Check All Disks for Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Check-All-Disks-for-Raid.png)
|
||||
|
||||
Check All Disks for Raid
|
||||
|
||||
**Note**: The above output shows that no super-block is detected on any of the four newly created partitions, which means we can move forward and create RAID 10 on these drives.
|
||||
|
||||
#### Step 2: Creating ‘md’ RAID Device ####
|
||||
|
||||
5. Now it’s time to create an ‘md’ device (i.e. /dev/md0) using the ‘mdadm’ RAID management tool. Before creating the device, your system must have the ‘mdadm’ tool installed; if not, install it first.
|
||||
|
||||
# yum install mdadm [on RedHat systems]
|
||||
# apt-get install mdadm [on Debian systems]
|
||||
|
||||
Once the ‘mdadm’ tool is installed, you can create an ‘md’ RAID device using the following command.
|
||||
|
||||
# mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
|
||||
|
||||
6. Next verify the newly created raid device using the ‘cat’ command.
|
||||
|
||||
# cat /proc/mdstat
|
||||
|
||||
![Create md raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-raid-Device.png)
|
||||
|
||||
Create md raid Device
|
||||
|
||||
7. Next, examine all 4 drives using the command below. Its output will be long, as it displays the information of all 4 disks.
|
||||
|
||||
# mdadm --examine /dev/sd[b-e]1
|
||||
|
||||
8. Next, check the details of Raid Array with the help of following command.
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Check Raid Array Details](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Array-Details.png)
|
||||
|
||||
Check Raid Array Details
|
||||
|
||||
**Note**: You can see in the above results that the RAID status is active and re-syncing.
|
||||
|
||||
#### Step 3: Creating Filesystem ####
|
||||
|
||||
9. Create an ext4 filesystem on ‘md0‘ and mount it under ‘/mnt/raid10‘. Here I’ve used ext4, but you can use any filesystem type you want.
|
||||
|
||||
# mkfs.ext4 /dev/md0
|
||||
|
||||
![Create md Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-Filesystem.png)
|
||||
|
||||
Create md Filesystem
|
||||
|
||||
10. After creating the filesystem, mount it under ‘/mnt/raid10‘ and list the contents of the mount point using the ‘ls -l’ command.
|
||||
|
||||
# mkdir /mnt/raid10
|
||||
# mount /dev/md0 /mnt/raid10/
|
||||
# ls -l /mnt/raid10/
|
||||
|
||||
Next, add some files under the mount point, append some text to one of them, and check the content.
|
||||
|
||||
# touch /mnt/raid10/raid10_files.txt
|
||||
# ls -l /mnt/raid10/
|
||||
# echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt
|
||||
# cat /mnt/raid10/raid10_files.txt
|
||||
|
||||
![Mount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-md-Device.png)
|
||||
|
||||
Mount md Device
|
||||
|
||||
11. For auto-mounting, open the ‘/etc/fstab‘ file and append the entry below; the mount point may differ according to your environment. Save and quit using :wq!.
|
||||
|
||||
# vim /etc/fstab
|
||||
|
||||
/dev/md0 /mnt/raid10 ext4 defaults 0 0
|
||||
|
||||
![AutoMount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/AutoMount-md-Device.png)
|
||||
|
||||
AutoMount md Device
|
||||
|
||||
12. Next, verify the ‘/etc/fstab‘ file for any errors using the ‘mount -av‘ command before restarting the system.
|
||||
|
||||
# mount -av
|
||||
|
||||
![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Errors-in-Fstab.png)
|
||||
|
||||
Check Errors in Fstab
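Before appending the entry, it is also worth guarding against duplicates, since running the setup twice would otherwise leave two lines for the same device in fstab. A minimal sketch (the function name and messages are my own, not part of the original article):

```shell
#!/bin/bash
# Hypothetical helper: append an fstab line only if the device is not
# already listed, so re-running the setup never duplicates the mount.
add_fstab_entry() {
    # $1 = fstab file, $2 = device, $3 = full fstab line
    if grep -qs "^$2[[:space:]]" "$1"; then
        echo "entry for $2 already present"
    else
        printf '%s\n' "$3" >> "$1"
        echo "entry for $2 added"
    fi
}

# Demo against a scratch file; point it at /etc/fstab for real use.
scratch=$(mktemp)
add_fstab_entry "$scratch" /dev/md0 "/dev/md0 /mnt/raid10 ext4 defaults 0 0"
add_fstab_entry "$scratch" /dev/md0 "/dev/md0 /mnt/raid10 ext4 defaults 0 0"
rm -f "$scratch"
```

The second call reports the entry as already present instead of appending it again.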
#### Step 4: Save RAID Configuration ####

13. By default RAID doesn’t have a config file, so we need to save the configuration manually after completing all the above steps, to preserve these settings during system boot.

    # mdadm --detail --scan --verbose >> /etc/mdadm.conf

![Save Raid10 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid10-Configuration.png)

Save Raid10 Configuration

That’s it, we have created RAID 10 using method 1, and this method is the easier one. Now let’s move forward and set up RAID 10 using method 2.

### Method 2: Creating RAID 10 ###
1. In method 2, we have to define 2 sets of RAID 1 and then define a RAID 0 over those RAID 1 sets. In other words, we will first create 2 mirrors (RAID 1) and then stripe over them (RAID 0).

First, list the disks which are available for creating RAID 10.

    # ls -l /dev | grep sd

![List 4 Devices](http://www.tecmint.com/wp-content/uploads/2014/11/List-4-Devices.png)

List 4 Devices
2. Partition all 4 disks using the ‘fdisk’ command. For partitioning, you can follow step #3 above.

    # fdisk /dev/sdb
    # fdisk /dev/sdc
    # fdisk /dev/sdd
    # fdisk /dev/sde

3. After partitioning all 4 disks, examine the disks for any existing raid blocks.

    # mdadm --examine /dev/sd[b-e]
    # mdadm --examine /dev/sd[b-e]1

![Examine 4 Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-4-Disks.png)

Examine 4 Disks
#### Step 1: Creating RAID 1 ####

4. First, let me create 2 sets of RAID 1 from the 4 disks: one set using ‘sdb1′ and ‘sdc1′, and the other using ‘sdd1′ and ‘sde1′.

    # mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1
    # mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1
    # cat /proc/mdstat

![Creating Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png)

Creating Raid 1

![Check Details of Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png)

Check Details of Raid 1
#### Step 2: Creating RAID 0 ####

5. Next, create the RAID 0 using the md1 and md2 devices.

    # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
    # cat /proc/mdstat

![Creating Raid 0](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-0.png)

Creating Raid 0
#### Step 3: Save RAID Configuration ####

6. We need to save the configuration under ‘/etc/mdadm.conf‘ so that all raid devices are loaded at every reboot.

    # mdadm --detail --scan --verbose >> /etc/mdadm.conf

After this, follow step #3 (creating a file system) of method 1.

That’s it! We have created RAID 1+0 using method 2. We lose the space of two disks here, but the performance will be excellent compared to any other raid setup.
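To make that space trade-off concrete, here is a quick back-of-the-envelope check in shell (the 20 GB disk size is an invented example value, not from this setup):

```shell
#!/bin/bash
# RAID 10 mirrors every block, so usable capacity is half the raw total.
DISKS=4
SIZE_GB=20                      # example value, not from the article
RAW_GB=$((DISKS * SIZE_GB))
USABLE_GB=$((RAW_GB / 2))
echo "raw: ${RAW_GB}GB, usable: ${USABLE_GB}GB"
```

With 4 disks of 20 GB each, the array exposes 40 GB of the 80 GB raw capacity.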
### Conclusion ###

Here we have created RAID 10 using two methods. RAID 10 provides good performance and redundancy too. Hope this helps you understand the RAID 10 nested raid level. We will see how to grow an existing raid array and much more in my upcoming articles.
--------------------------------------------------------------------------------

via: http://www.tecmint.com/create-raid-10-in-linux/

作者:[Babin Lonston][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/babinlonston/
ictlyh Translating

Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks
================================================================================

Some time ago I read that one of the distinguishing characteristics of an effective system administrator / engineer is laziness. It seemed a little contradictory at first, but the author then proceeded to explain why:

![Automate Linux System Maintenance Tasks](http://www.tecmint.com/wp-content/uploads/2015/08/Automate-Linux-System-Maintenance-Tasks.png)

RHCE Series: Automate Linux System Maintenance Tasks – Part 4

if a sysadmin spends most of his time solving issues and doing repetitive tasks, you can suspect he or she is not doing things quite right. In other words, an effective system administrator / engineer should develop a plan to perform repetitive tasks with as little action on his / her part as possible, and should foresee problems by using, for example, the tools reviewed in Part 3 – [Monitor System Activity Reports Using Linux Toolsets][1] of this series. Thus, although he or she may not seem to be doing much, it’s because most of his / her responsibilities have been taken care of with the help of shell scripting, which is what we’re going to talk about in this tutorial.
### What is a shell script? ###

In a few words, a shell script is nothing more and nothing less than a program that is executed step by step by a shell, which is another program that provides an interface layer between the Linux kernel and the end user.

By default, the shell used for user accounts in RHEL 7 is bash (/bin/bash). If you want a detailed description and some historical background, you can refer to [this Wikipedia article][2].

To find out more about the enormous set of features provided by this shell, you may want to check out its **man page**, which can be downloaded in PDF format at ([Bash Commands][3]). Other than that, it is assumed that you are familiar with Linux commands (if not, I strongly advise you to go through the [A Guide from Newbies to SysAdmin][4] article in **Tecmint.com** before proceeding). Now let’s get started.
### Writing a script to display system information ###

For our convenience, let’s create a directory to store our shell scripts:

    # mkdir scripts
    # cd scripts

And open a new text file named `system_info.sh` with your preferred text editor. We will begin by inserting a few comments at the top and some commands afterwards:
    #!/bin/bash

    # Sample script written for Part 4 of the RHCE series
    # This script will return the following set of system information:
    # -Hostname information:
    echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m"
    hostnamectl
    echo ""
    # -File system disk space usage:
    echo -e "\e[31;43m***** FILE SYSTEM DISK SPACE USAGE *****\e[0m"
    df -h
    echo ""
    # -Free and used memory in the system:
    echo -e "\e[31;43m ***** FREE AND USED MEMORY *****\e[0m"
    free
    echo ""
    # -System uptime and load:
    echo -e "\e[31;43m***** SYSTEM UPTIME AND LOAD *****\e[0m"
    uptime
    echo ""
    # -Logged-in users:
    echo -e "\e[31;43m***** CURRENTLY LOGGED-IN USERS *****\e[0m"
    who
    echo ""
    # -Top 5 processes as far as memory usage is concerned
    echo -e "\e[31;43m***** TOP 5 MEMORY-CONSUMING PROCESSES *****\e[0m"
    ps -eo %mem,%cpu,comm --sort=-%mem | head -n 6
    echo ""
    echo -e "\e[1;32mDone.\e[0m"
Next, give the script execute permissions:

    # chmod +x system_info.sh

and run it:

    ./system_info.sh

Note that the headers of each section are shown in color for better visualization:

![Server Monitoring Shell Script](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Shell-Script.png)

Server Monitoring Shell Script
That functionality is provided by this command:

    echo -e "\e[COLOR1;COLOR2m<YOUR TEXT HERE>\e[0m"

Where COLOR1 and COLOR2 are the foreground and background colors, respectively (more info and options are explained in this entry from the [Arch Linux Wiki][5]) and <YOUR TEXT HERE> is the string that you want to show in color.
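If you find yourself repeating that echo in several places, the pattern can be wrapped in a small function; this sketch (the `print_header` name is my own, not from the script above) produces the same escape sequence:

```shell
#!/bin/bash
# Sketch: wrap the color-echo pattern above in a reusable function.
print_header() {
    # $1 = foreground color code, $2 = background color code, $3 = text
    echo -e "\e[${1};${2}m${3}\e[0m"
}

# Same red-on-yellow header as in system_info.sh:
print_header 31 43 "***** HOSTNAME INFORMATION *****"
```

Each section of the script then becomes a single `print_header` call followed by its command.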
### Automating Tasks ###

The tasks that you may need to automate will vary from case to case. Thus, we cannot possibly cover all of the possible scenarios in a single article, but we will present three classic tasks that can be automated using shell scripting: **1)** update the local file database, **2)** find (and optionally delete) files with 777 permissions, and **3)** alert when filesystem usage surpasses a defined limit.

Let’s create a file named `auto_tasks.sh` in our scripts directory with the following content:
    #!/bin/bash

    # Sample script to automate tasks:
    # -Update local file database:
    echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m"
    updatedb
    if [ $? -eq 0 ]; then
        echo "The local file database was updated correctly."
    else
        echo "The local file database was not updated correctly."
    fi
    echo ""

    # -Find and / or delete files with 777 permissions.
    echo -e "\e[4;32mLOOKING FOR FILES WITH 777 PERMISSIONS\e[0m"
    # Enable either option (comment out the other line), but not both.
    # Option 1: Delete files without prompting for confirmation. Assumes GNU version of find.
    #find . -type f -perm 0777 -delete
    # Option 2: Ask for confirmation before deleting files. More portable across systems.
    find . -type f -perm 0777 -exec rm -i {} +
    echo ""
    # -Alert when file system usage surpasses a defined limit
    echo -e "\e[4;32mCHECKING FILE SYSTEM USAGE\e[0m"
    THRESHOLD=30
    while read line; do
        # This variable stores the file system path as a string
        FILESYSTEM=$(echo $line | awk '{print $1}')
        # This variable stores the use percentage (XX%)
        PERCENTAGE=$(echo $line | awk '{print $5}')
        # Use percentage without the % sign.
        USAGE=${PERCENTAGE%?}
        if [ $USAGE -gt $THRESHOLD ]; then
            echo "The remaining available space in $FILESYSTEM is critically low. Used: $PERCENTAGE"
        fi
    done < <(df -h --total | grep -vi filesystem)

Please note that there is a space between the two `<` signs in the last line of the script.
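The `${PERCENTAGE%?}` expansion removes the last character (the `%` sign), and the `< <(...)` process substitution feeds the loop without a subshell, so the variables survive each iteration. The core parsing logic can be exercised in isolation on canned df-style lines (the device names and numbers below are invented for illustration):

```shell
#!/bin/bash
# Minimal sketch of the usage-check logic from the script above, run
# against canned df-style input instead of the live system.
THRESHOLD=30
check_usage() {
    while read line; do
        FILESYSTEM=$(echo "$line" | awk '{print $1}')
        PERCENTAGE=$(echo "$line" | awk '{print $5}')
        USAGE=${PERCENTAGE%?}   # drop the trailing '%'
        if [ "$USAGE" -gt "$THRESHOLD" ]; then
            echo "ALERT: $FILESYSTEM at $PERCENTAGE"
        fi
    done
}

# Two fake filesystems: only the second one exceeds the 30% threshold.
check_usage <<'EOF'
/dev/sda1 50G 10G 40G 20% /
/dev/sdb1 50G 40G 10G 80% /home
EOF
```

Only `/dev/sdb1` triggers an alert, since 80 is the only usage value above the threshold.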
![Shell Script to Find 777 Permissions](http://www.tecmint.com/wp-content/uploads/2015/08/Shell-Script-to-Find-777-Permissions.png)

Shell Script to Find 777 Permissions

### Using Cron ###

To take efficiency one step further, you will not want to sit in front of your computer and run those scripts manually. Rather, you will use cron to schedule those tasks to run on a periodic basis, and send the results to a predefined list of recipients via email or save them to a file that can be viewed using a web browser.

The following script (filesystem_usage.sh) will run the well-known **df -h** command, format the output into an HTML table and save it in the **report.html** file:
    #!/bin/bash
    # Sample script to demonstrate the creation of an HTML report using shell scripting
    # Web directory
    WEB_DIR=/var/www/html
    # A little CSS and table layout to make the report look a little nicer
    echo "<HTML>
    <HEAD>
    <style>
    .titulo{font-size: 1em; color: white; background:#0863CE; padding: 0.1em 0.2em;}
    table
    {
    border-collapse:collapse;
    }
    table, td, th
    {
    border:1px solid black;
    }
    </style>
    <meta http-equiv='Content-Type' content='text/html; charset=UTF-8' />
    </HEAD>
    <BODY>" > $WEB_DIR/report.html
    # View hostname and insert it at the top of the html body
    HOST=$(hostname)
    echo "Filesystem usage for host <strong>$HOST</strong><br>
    Last updated: <strong>$(date)</strong><br><br>
    <table border='1'>
    <tr><th class='titulo'>Filesystem</th>
    <th class='titulo'>Size</th>
    <th class='titulo'>Use %</th>
    </tr>" >> $WEB_DIR/report.html
    # Read the output of df -h line by line
    while read line; do
        echo "<tr><td align='center'>" >> $WEB_DIR/report.html
        echo $line | awk '{print $1}' >> $WEB_DIR/report.html
        echo "</td><td align='center'>" >> $WEB_DIR/report.html
        echo $line | awk '{print $2}' >> $WEB_DIR/report.html
        echo "</td><td align='center'>" >> $WEB_DIR/report.html
        echo $line | awk '{print $5}' >> $WEB_DIR/report.html
        echo "</td></tr>" >> $WEB_DIR/report.html
    done < <(df -h | grep -vi filesystem)
    echo "</table></BODY></HTML>" >> $WEB_DIR/report.html
In our **RHEL 7** server (**192.168.0.18**), this looks as follows:

![Server Monitoring Report](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Report.png)

Server Monitoring Report
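As a design note, the several echo/awk calls per line in the script could also be collapsed into a single awk program. A sketch on an invented df-style line (the `df_to_rows` name is my own):

```shell
#!/bin/bash
# Sketch: emit one HTML table row per df line with a single awk program
# instead of several echo/awk calls per line.
df_to_rows() {
    awk '{ printf "<tr><td>%s</td><td>%s</td><td>%s</td></tr>\n", $1, $2, $5 }'
}

# Invented sample line; in the real script the input would be `df -h`.
df_to_rows <<'EOF'
/dev/sda1 50G 10G 40G 20% /
EOF
```

This produces one complete `<tr>…</tr>` row per input line, which can then be redirected into the report as before.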
You can add to that report as much information as you want. To run the script every day at 1:30 pm, add the following crontab entry:

    30 13 * * * /root/scripts/filesystem_usage.sh
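For reference, the five fields before the command are minute, hour, day of month, month, and day of week, in that order; the same entry annotated:

```
# min  hour  dom  mon  dow   command
  30   13    *    *    *     /root/scripts/filesystem_usage.sh
```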
### Summary ###

You will most likely think of several other tasks that you want or need to automate; as you can see, using shell scripting will greatly simplify this effort. Feel free to let us know if you find this article helpful, and don't hesitate to add your own ideas or comments via the form below.
--------------------------------------------------------------------------------

via: http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/

作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/
[2]:https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29
[3]:http://www.tecmint.com/wp-content/pdf/bash.pdf
[4]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/
[5]:https://wiki.archlinux.org/index.php/Color_Bash_Prompt
ictlyh Translating

Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7
================================================================================

In order to keep your RHEL 7 systems secure, you need to know how to monitor all of the activities that take place on such systems by examining log files. Thus, you will be able to detect any unusual or potentially malicious activity and perform system troubleshooting or take other appropriate action.

![Linux Rotate Log Files Using Rsyslog and Logrotate](http://www.tecmint.com/wp-content/uploads/2015/08/Manage-and-Rotate-Linux-Logs-Using-Rsyslog-Logrotate.jpg)

RHCE Exam: Manage System Logs Using Rsyslogd and Logrotate – Part 5

In RHEL 7, the [rsyslogd][1] daemon is responsible for system logging and reads its configuration from /etc/rsyslog.conf (this file specifies the default location for all system logs) and from files inside /etc/rsyslog.d, if any.

### Rsyslogd Configuration ###

A quick inspection of [rsyslog.conf][2] will be helpful to start. This file is divided into 3 main sections: Modules (since rsyslog follows a modular design), Global directives (used to set global properties of the rsyslogd daemon), and Rules. As you will probably guess, this last section indicates what gets logged or shown (also known as the selector) and where, and will be our focus throughout this article.

A typical line in rsyslog.conf is as follows:

![Rsyslogd Configuration](http://www.tecmint.com/wp-content/uploads/2015/08/Rsyslogd-Configuration.png)

Rsyslogd Configuration

In the image above, we can see that a selector consists of one or more Facility:Priority pairs separated by semicolons, where Facility describes the type of message (refer to [section 4.1.1 in RFC 3164][3] to see the complete list of facilities available for rsyslog) and Priority indicates its severity, which can be one of the following self-explanatory words:
- debug
- info
- notice
- warning
- err
- crit
- alert
- emerg

Though not a priority itself, the keyword none means no priority at all for the given facility.

**Note**: A given priority indicates that all messages of that priority and above should be logged. Thus, the line in the example above instructs the rsyslogd daemon to log all messages of priority info or higher (regardless of the facility), except those belonging to the mail, authpriv, and cron services (no messages coming from those facilities will be taken into account), to /var/log/messages.

You can also group multiple facilities using the comma sign to apply the same priority to all of them. Thus, the line:

    *.info;mail.none;authpriv.none;cron.none                /var/log/messages

could be rewritten as

    *.info;mail,authpriv,cron.none                          /var/log/messages

In other words, the facilities mail, authpriv, and cron are grouped and the keyword none is applied to the three of them.
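Under the hood, the "this priority and above" rule works because syslog severities are totally ordered, with 0 (emerg) the most severe and 7 (debug) the least. A small shell illustration of that rule (my own sketch, not rsyslog code; the numeric codes follow the standard syslog ordering):

```shell
#!/bin/bash
# Illustration: a selector such as "*.info" matches messages at priority
# info AND every more-severe priority (lower numeric code).
matches_selector() {
    # $1 = message priority, $2 = selector priority
    declare -A level=( [emerg]=0 [alert]=1 [crit]=2 [err]=3
                       [warning]=4 [notice]=5 [info]=6 [debug]=7 )
    [ "${level[$1]}" -le "${level[$2]}" ]
}

matches_selector err info   && echo "err is logged by *.info"
matches_selector debug info || echo "debug is NOT logged by *.info"
```

Since err (3) is more severe than info (6), it matches `*.info`, while debug (7) does not.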
#### Creating a custom log file ####

To log all daemon messages to /var/log/tecmint.log, we need to add the following line either in rsyslog.conf or in a separate file (easier to manage) inside /etc/rsyslog.d:

    daemon.*    /var/log/tecmint.log

Let’s restart the daemon (note that the service name does not end with a d):

    # systemctl restart rsyslog

And check the contents of our custom log before and after restarting two random daemons:

![Linux Create Custom Log File](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Custom-Log-File.png)

Create Custom Log File

As a self-study exercise, I would recommend you play around with the facilities and priorities and either log additional messages to existing log files or create new ones as in the previous example.
### Rotating Logs using Logrotate ###

To prevent log files from growing endlessly, the logrotate utility is used to rotate, compress, remove, and optionally mail logs, thus easing the administration of systems that generate large numbers of log files.

Logrotate runs daily as a cron job (/etc/cron.daily/logrotate) and reads its configuration from /etc/logrotate.conf and from files located in /etc/logrotate.d, if any.

As in the case of rsyslog, even though you can include settings for specific services in the main file, creating separate configuration files for each one will help organize your settings better.

Let’s take a look at a typical logrotate.conf:

![Logrotate Configuration](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Configuration.png)

Logrotate Configuration

In the example above, logrotate will perform the following actions for /var/log/wtmp: attempt to rotate only once a month, but only if the file is at least 1 MB in size; then create a brand new log file with permissions set to 0664 and ownership given to user root and group utmp; and finally keep only one archived log, as specified by the rotate directive:

![Logrotate Logs Monthly](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Logs-Monthly.png)

Logrotate Logs Monthly

Let’s now consider another example as found in /etc/logrotate.d/httpd:

![Rotate Apache Log Files](http://www.tecmint.com/wp-content/uploads/2015/08/Rotate-Apache-Log-Files.png)

Rotate Apache Log Files

You can read more about the settings for logrotate in its man pages ([man logrotate][4] and [man logrotate.conf][5]). Both files are provided along with this article in PDF format for your reading convenience.

As a system engineer, it will be pretty much up to you to decide for how long logs will be stored and in what format, depending on whether you have /var in a separate partition / logical volume. Otherwise, you really want to consider removing old logs to save storage space. On the other hand, you may be forced to keep several logs for future security auditing according to your company’s or client’s internal policies.
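As an illustration, a drop-in for the custom /var/log/tecmint.log created earlier might look like this (the rotation policy shown is an assumption for illustration, not from the article; adjust it to your own retention requirements):

```
# /etc/logrotate.d/tecmint -- hypothetical example
/var/log/tecmint.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```

This keeps four compressed weekly archives, skips rotation when the file is missing or empty, and removes anything older.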
#### Saving Logs to a Database ####

Of course, examining logs (even with the help of tools such as grep and regular expressions) can become a rather tedious task. For that reason, rsyslog allows us to export them into a database (the RDBMSs supported out of the box include MySQL, MariaDB, PostgreSQL, and Oracle).

This section of the tutorial assumes that you have already installed the MariaDB server and client in the same RHEL 7 box where the logs are being managed:

    # yum update && yum install mariadb mariadb-server rsyslog-mysql
    # systemctl enable mariadb && systemctl start mariadb

Then use the `mysql_secure_installation` utility to set the password for the root user and address other security considerations:

![Secure MySQL Database](http://www.tecmint.com/wp-content/uploads/2015/08/Secure-MySQL-Database.png)

Secure MySQL Database

**Note**: If you don’t want to use the MariaDB root user to insert log messages into the database, you can configure another user account to do so. Explaining how to do that is out of the scope of this tutorial, but it is explained in detail in the [MariaDB knowledge][6] base. In this tutorial we will use the root account for simplicity.
Next, download the createDB.sql script from [GitHub][7] and import it into your database server:

    # mysql -u root -p < createDB.sql

![Save Server Logs to Database](http://www.tecmint.com/wp-content/uploads/2015/08/Save-Server-Logs-to-Database.png)

Save Server Logs to Database

Finally, add the following lines to /etc/rsyslog.conf:

    $ModLoad ommysql
    $ActionOmmysqlServerPort 3306
    *.* :ommysql:localhost,Syslog,root,YourPasswordHere

Restart rsyslog and the database server:

    # systemctl restart rsyslog
    # systemctl restart mariadb

#### Querying the Logs using SQL syntax ####

Now perform some tasks that will modify the logs (like stopping and starting services, for example), then log in to your DB server and use standard SQL commands to display and search the logs:

    USE Syslog;
    SELECT ReceivedAt, Message FROM SystemEvents;

![Query Logs in Database](http://www.tecmint.com/wp-content/uploads/2015/08/Query-Logs-in-Database.png)

Query Logs in Database
### Summary ###

In this article we have explained how to set up system logging, how to rotate logs, and how to redirect the messages to a database for easier search. We hope that these skills will be helpful as you prepare for the [RHCE exam][8] and in your daily responsibilities as well.

As always, your feedback is more than welcome. Feel free to use the form below to reach us.
--------------------------------------------------------------------------------

via: http://www.tecmint.com/manage-linux-system-logs-using-rsyslogd-and-logrotate/

作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/wp-content/pdf/rsyslogd.pdf
[2]:http://www.tecmint.com/wp-content/pdf/rsyslog.conf.pdf
[3]:https://tools.ietf.org/html/rfc3164#section-4.1.1
[4]:http://www.tecmint.com/wp-content/pdf/logrotate.pdf
[5]:http://www.tecmint.com/wp-content/pdf/logrotate.conf.pdf
[6]:https://mariadb.com/kb/en/mariadb/create-user/
[7]:https://github.com/sematext/rsyslog/blob/master/plugins/ommysql/createDB.sql
[8]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/
FSSlc translating

RHCSA Series: Using ACLs (Access Control Lists) and Mounting Samba / NFS Shares – Part 7
================================================================================

In the last article ([RHCSA series Part 6][1]) we started explaining how to set up and configure local system storage using parted and ssm.

![Configure ACL's and Mounting NFS / Samba Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ACLs-and-Mounting-NFS-Samba-Shares.png)

RHCSA Series: Configure ACL’s and Mounting NFS / Samba Shares – Part 7

We also discussed how to create and mount encrypted volumes with a password during system boot. In addition, we warned you to avoid performing critical storage management operations on mounted filesystems. With that in mind, we will now review the most used file system formats in Red Hat Enterprise Linux 7 and then proceed to cover the topics of mounting, using, and unmounting network filesystems (CIFS and NFS), both manually and automatically, along with the implementation of access control lists for your system.

#### Prerequisites ####

Before proceeding further, please make sure you have a Samba server and a NFS server available (note that NFSv2 is no longer supported in RHEL 7).

During this guide we will use a machine with IP 192.168.0.10 with both services running on it as server, and a RHEL 7 box as client with IP address 192.168.0.18. Later in the article we will tell you which packages you need to install on the client.
### File System Formats in RHEL 7 ###

Beginning with RHEL 7, XFS has been introduced as the default file system for all architectures due to its high performance and scalability. It currently supports a maximum filesystem size of 500 TB as per the latest tests performed by Red Hat and its partners for mainstream hardware.

Also, XFS enables user_xattr (extended user attributes) and acl (POSIX access control lists) as default mount options, unlike ext3 or ext4 (ext2 is considered deprecated as of RHEL 7), which means that you don’t need to specify those options explicitly either on the command line or in /etc/fstab when mounting an XFS filesystem (if you want to disable such options in this last case, you have to explicitly use no_acl and no_user_xattr).

Keep in mind that the extended user attributes can be assigned to files and directories for storing arbitrary additional information such as the mime type, character set or encoding of a file, whereas the access permissions for user attributes are defined by the regular file permission bits.

#### Access Control Lists ####

Every system administrator, whether beginner or expert, is well acquainted with regular access permissions on files and directories, which specify certain privileges (read, write, and execute) for the owner, the group, and “the world” (all others). However, feel free to refer to [Part 3 of the RHCSA series][2] if you need to refresh your memory a little bit.

However, since the standard ugo/rwx set does not allow configuring different permissions for different users, ACLs were introduced in order to define more detailed access rights for files and directories than those specified by regular permissions.

In fact, ACL-defined permissions are a superset of the permissions specified by the file permission bits. Let’s see how all of this is applied in the real world.
1. There are two types of ACLs: access ACLs, which can be applied to either a specific file or a directory, and default ACLs, which can only be applied to a directory. If files contained therein do not have an ACL set, they inherit the default ACL of their parent directory.

2. To begin, ACLs can be configured per user, per group, or per user not in the owning group of a file.

3. ACLs are set (and removed) using setfacl, with the -m or -x options, respectively.

For example, let us create a group named tecmint and add users johndoe and davenull to it:

    # groupadd tecmint
    # useradd johndoe
    # useradd davenull
    # usermod -a -G tecmint johndoe
    # usermod -a -G tecmint davenull

And let’s verify that both users belong to supplementary group tecmint:
    # id johndoe
    # id davenull

![Verify Users](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-Users.png)

Verify Users

Let’s now create a directory called playground within /mnt, and a file named testfile.txt inside. We will set the group owner to tecmint and change its default ugo/rwx permissions to 770 (read, write, and execute permissions granted to both the owner and the group owner of the file):

    # mkdir /mnt/playground
    # touch /mnt/playground/testfile.txt
    # chown :tecmint /mnt/playground/testfile.txt
    # chmod 770 /mnt/playground/testfile.txt
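The effect of mode 770 can be verified quickly in a throwaway directory (shown here instead of /mnt/playground, which requires root):

```shell
#!/bin/bash
# Mode 770 grants rwx to the owner and the group, and nothing to others.
dir=$(mktemp -d)
touch "$dir/testfile.txt"
chmod 770 "$dir/testfile.txt"
stat -c '%a %A' "$dir/testfile.txt"
rm -rf "$dir"
```

The octal mode reported by stat is 770, i.e. a symbolic mode of -rwxrwx---.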
Then switch user to johndoe and davenull, in that order, and write to the file:

    echo "My name is John Doe" > /mnt/playground/testfile.txt
    echo "My name is Dave Null" >> /mnt/playground/testfile.txt

So far so good. Now let’s have user gacanepa write to the file – the write operation will fail, which was to be expected.

But what if we actually need user gacanepa (who is not a member of group tecmint) to have write permissions on /mnt/playground/testfile.txt? The first thing that may come to your mind is adding that user account to group tecmint. But that will give him write permissions on ALL files where the write bit is set for the group, and we don’t want that. We only want him to be able to write to /mnt/playground/testfile.txt.
    # touch /mnt/playground/testfile.txt
    # chown :tecmint /mnt/playground/testfile.txt
    # chmod 777 /mnt/playground/testfile.txt
    # su johndoe
    $ echo "My name is John Doe" > /mnt/playground/testfile.txt
    $ su davenull
    $ echo "My name is Dave Null" >> /mnt/playground/testfile.txt
    $ su gacanepa
    $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt

![Manage User Permissions](http://www.tecmint.com/wp-content/uploads/2015/04/User-Permissions.png)

Manage User Permissions

Let’s give user gacanepa read and write access to /mnt/playground/testfile.txt.

Run as root:

    # setfacl -R -m u:gacanepa:rwx /mnt/playground

and you’ll have successfully added an ACL that allows gacanepa to write to the test file. Then switch to user gacanepa and try to write to the file again:

    $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt
|
||||
|
||||
To view the ACLs for a specific file or directory, use getfacl:
|
||||
|
||||
# getfacl /mnt/playground/testfile.txt
|
||||
|
||||
![Check ACLs of Files](http://www.tecmint.com/wp-content/uploads/2015/04/Check-ACL-of-File.png)
|
||||
|
||||
Check ACLs of Files
|
||||
|
||||
To set a default ACL on a directory (whose contents will inherit it unless explicitly overridden), add d: before the rule and specify a directory instead of a file name:
|
||||
|
||||
# setfacl -m d:o:r /mnt/playground
|
||||
|
||||
The ACL above will allow users not in the owner group to have read access to the future contents of the /mnt/playground directory. Note the difference in the output of getfacl /mnt/playground before and after the change:
|
||||
|
||||
![Set Default ACL in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Set-Default-ACL-in-Linux.png)
|
||||
|
||||
Set Default ACL in Linux
|
||||
|
||||
[Chapter 20 in the official RHEL 7 Storage Administration Guide][3] provides more ACL examples, and I highly recommend you take a look at it and have it handy as reference.
|
||||
|
||||
#### Mounting NFS Network Shares ####
|
||||
|
||||
To show the list of NFS shares available on your server, you can use the showmount command with the -e option, followed by the machine name or its IP address. This tool is included in the nfs-utils package:
|
||||
|
||||
# yum update && yum install nfs-utils
|
||||
|
||||
Then do:
|
||||
|
||||
# showmount -e 192.168.0.10
|
||||
|
||||
and you will get a list of the available NFS shares on 192.168.0.10:
|
||||
|
||||
![Check Available NFS Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Mount-NFS-Shares.png)
|
||||
|
||||
Check Available NFS Shares
|
||||
|
||||
To mount NFS network shares on the local client using the command line on demand, use the following syntax:
|
||||
|
||||
# mount -t nfs -o [options] remote_host:/remote/directory /local/directory
|
||||
|
||||
which, in our case, translates to:
|
||||
|
||||
# mount -t nfs 192.168.0.10:/NFS-SHARE /mnt/nfs
|
||||
|
||||
If you get the following error message: “Job for rpc-statd.service failed. See “systemctl status rpc-statd.service” and “journalctl -xn” for details.”, make sure the rpcbind service is enabled and started in your system first:
|
||||
|
||||
# systemctl enable rpcbind.socket
|
||||
# systemctl restart rpcbind.service
|
||||
|
||||
and then reboot. That should do the trick and you will be able to mount your NFS share as explained earlier. If you need to mount the NFS share automatically on system boot, add a valid entry to the /etc/fstab file:
|
||||
|
||||
remote_host:/remote/directory /local/directory nfs options 0 0
|
||||
|
||||
The variables remote_host, /remote/directory, /local/directory, and options (which is optional) are the same ones used when manually mounting an NFS share from the command line. As per our previous example:
|
||||
|
||||
192.168.0.10:/NFS-SHARE /mnt/nfs nfs defaults 0 0
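Before rebooting, it is worth sanity-checking the new entry. A small sketch that appends the example line to a scratch copy of fstab and verifies it has the six fields that fstab(5) expects (the scratch path is illustrative):

```shell
# Work on a scratch copy so the real fstab is left untouched
cp /etc/fstab /tmp/fstab.test 2>/dev/null || : > /tmp/fstab.test
# Append the example NFS entry from above
echo '192.168.0.10:/NFS-SHARE /mnt/nfs nfs defaults 0 0' >> /tmp/fstab.test
# A valid fstab line has six fields: device, mountpoint, type, options, dump, pass
awk 'END { print NF }' /tmp/fstab.test
# → 6
```

Once the entry is in the real /etc/fstab, running mount -a as root attempts every listed filesystem and surfaces syntax errors without a reboot.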
|
||||
|
||||
#### Mounting CIFS (Samba) Network Shares ####
|
||||
|
||||
Samba represents the tool of choice to make a network share available in a network with *nix and Windows machines. To show the Samba shares that are available, use the smbclient command with the -L flag, followed by the machine name or its IP address. This tool is included in the samba-client package:
|
||||
|
||||
You will be prompted for root’s password on the remote host:
|
||||
|
||||
# smbclient -L 192.168.0.10
|
||||
|
||||
![Check Samba Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Samba-Shares.png)
|
||||
|
||||
Check Samba Shares
|
||||
|
||||
To mount Samba network shares on the local client, you will first need to install the cifs-utils package:
|
||||
|
||||
# yum update && yum install cifs-utils
|
||||
|
||||
Then use the following syntax on the command line:
|
||||
|
||||
# mount -t cifs -o credentials=/path/to/credentials/file //remote_host/samba_share /local/directory
|
||||
|
||||
which, in our case, translates to:
|
||||
|
||||
# mount -t cifs -o credentials=~/.smbcredentials //192.168.0.10/gacanepa /mnt/samba
|
||||
|
||||
where smbcredentials:
|
||||
|
||||
username=gacanepa
|
||||
password=XXXXXX
|
||||
|
||||
is a hidden file inside root’s home (/root/) with permissions set to 600, so that no one else but the owner of the file can read or write to it.
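A hedged sketch of creating such a file (written under /tmp here so it can be tried as a regular user; in practice the path would be /root/.smbcredentials):

```shell
# Create the two-line credentials file shown above
cat > /tmp/.smbcredentials <<'EOF'
username=gacanepa
password=XXXXXX
EOF
# Restrict access to the owner only, as the text requires
chmod 600 /tmp/.smbcredentials
stat -c '%a' /tmp/.smbcredentials
# → 600
```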
|
||||
|
||||
Please note that the samba_share is the name of the Samba share as returned by smbclient -L remote_host as shown above.
|
||||
|
||||
Now, if you need the Samba share to be available automatically on system boot, add a valid entry to the /etc/fstab file as follows:
|
||||
|
||||
//remote_host/samba_share /local/directory cifs options 0 0
|
||||
|
||||
The variables remote_host, /samba_share, /local/directory, and options (which is optional) are the same ones used when manually mounting a Samba share from the command line. Following the definitions given in our previous example:
|
||||
|
||||
//192.168.0.10/gacanepa /mnt/samba cifs credentials=/root/.smbcredentials,defaults 0 0
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
In this article we have explained how to set up ACLs in Linux, and discussed how to mount CIFS and NFS network shares in a RHEL 7 client.
|
||||
|
||||
I recommend that you practice these concepts and even combine them (go ahead and try to set ACLs on mounted network shares) until you feel comfortable. If you have questions or comments, feel free to use the form below to contact us anytime. Also, feel free to share this article through your social networks.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/
|
||||
[2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/
|
||||
[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html
|
@ -1,215 +0,0 @@
|
||||
RHCSA Series: Securing SSH, Setting Hostname and Enabling Network Services – Part 8
|
||||
================================================================================
|
||||
As a system administrator you will often have to log on to remote systems to perform a variety of administration tasks using a terminal emulator. You will rarely sit in front of a real (physical) terminal, so you need to set up a way to log on remotely to the machines that you will be asked to manage.
|
||||
|
||||
In fact, that may be the last thing that you will have to do in front of a physical terminal. For security reasons, using Telnet for this purpose is not a good idea, as all traffic goes through the wire in unencrypted, plain text.
|
||||
|
||||
In addition, in this article we will also review how to configure network services to start automatically at boot and learn how to set up network and hostname resolution statically or dynamically.
|
||||
|
||||
![RHCSA: Secure SSH and Enable Network Services](http://www.tecmint.com/wp-content/uploads/2015/05/Secure-SSH-Server-and-Enable-Network-Services.png)
|
||||
|
||||
RHCSA: Secure SSH and Enable Network Services – Part 8
|
||||
|
||||
### Installing and Securing SSH Communication ###
|
||||
|
||||
For you to be able to log on remotely to a RHEL 7 box using SSH, you will have to install the openssh, openssh-clients and openssh-servers packages. The following command not only will install the remote login program, but also the secure file transfer tool, as well as the remote file copy utility:
|
||||
|
||||
# yum update && yum install openssh openssh-clients openssh-servers
|
||||
|
||||
Note that it’s a good idea to install the server counterparts as you may want to use the same machine as both client and server at some point or another.
|
||||
|
||||
After installation, there are a couple of basic things that you need to take into account if you want to secure remote access to your SSH server. The following settings should be present in the `/etc/ssh/sshd_config` file.
|
||||
|
||||
1. Change the port on which the sshd daemon listens from 22 (the default value) to a high port (2000 or greater), but first make sure the chosen port is not being used.
|
||||
|
||||
For example, let’s suppose you choose port 2500. Use [netstat][1] in order to check whether the chosen port is being used or not:
|
||||
|
||||
# netstat -npltu | grep 2500
|
||||
|
||||
If netstat does not return anything, you can safely use port 2500 for sshd, and you should change the Port setting in the configuration file as follows:
|
||||
|
||||
Port 2500
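On a RHEL 7 minimal install netstat may not be present. As an alternative sketch, the kernel's own socket tables in /proc can be consulted directly; ports appear there in hexadecimal, and 2500 decimal is 09C4 hex:

```shell
# A socket on port 2500 would show ':09C4' in /proc/net/tcp or /proc/net/tcp6;
# if no line matches, print a note that the port is free
grep -i ':09C4' /proc/net/tcp /proc/net/tcp6 2>/dev/null || echo "port 2500 is free"
```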
|
||||
|
||||
2. Only allow protocol 2:
|
||||
|
||||
Protocol 2
|
||||
|
||||
3. Configure the authentication timeout to 2 minutes, do not allow root logins, and restrict to a minimum the list of users which are allowed to login via ssh:
|
||||
|
||||
LoginGraceTime 2m
|
||||
PermitRootLogin no
|
||||
AllowUsers gacanepa
|
||||
|
||||
4. If possible, use key-based instead of password authentication:
|
||||
|
||||
PasswordAuthentication no
|
||||
RSAAuthentication yes
|
||||
PubkeyAuthentication yes
|
||||
|
||||
This assumes that you have already created a key pair with your user name on your client machine and copied it to your server as explained here.
|
||||
|
||||
- [Enable SSH Passwordless Login][2]
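A minimal sketch of that key-based setup (the key path is illustrative, and remote_host is a placeholder for your server):

```shell
# Start from a clean slate for this demo key
rm -f /tmp/id_rsa_demo /tmp/id_rsa_demo.pub
# Generate a 4096-bit RSA key pair non-interactively, with no passphrase
ssh-keygen -t rsa -b 4096 -N '' -q -f /tmp/id_rsa_demo
# The public key would then be installed on the server, e.g.:
#   ssh-copy-id -i /tmp/id_rsa_demo.pub gacanepa@remote_host
ls /tmp/id_rsa_demo /tmp/id_rsa_demo.pub
```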
|
||||
|
||||
### Configuring Networking and Name Resolution ###
|
||||
|
||||
1. Every system administrator should be well acquainted with the following system-wide configuration files:
|
||||
|
||||
- /etc/hosts is used to map hostnames to IP addresses in small networks.
|
||||
|
||||
Every line in the `/etc/hosts` file has the following structure:
|
||||
|
||||
IP address - Hostname - FQDN
|
||||
|
||||
For example,
|
||||
|
||||
192.168.0.10 laptop laptop.gabrielcanepa.com.ar
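Entries like the one above can be verified with getent, which queries the hosts database the same way the resolver does (localhost is used here only because it exists on every system):

```shell
# getent consults the hosts sources configured in nsswitch.conf,
# which normally include /etc/hosts
getent hosts localhost
```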
|
||||
|
||||
2. `/etc/resolv.conf` specifies the IP addresses of DNS servers and the search domain, which is used for completing a given query name to a fully qualified domain name when no domain suffix is supplied.
|
||||
|
||||
Under normal circumstances, you don’t need to edit this file as it is managed by the system. However, should you want to change DNS servers, be advised that you need to stick to the following structure in each line:
|
||||
|
||||
nameserver - IP address
|
||||
|
||||
For example,
|
||||
|
||||
nameserver 8.8.8.8
|
||||
|
||||
3. `/etc/host.conf` specifies the methods and the order by which hostnames are resolved within a network. In other words, it tells the name resolver which services to use, and in what order.
|
||||
|
||||
Although this file has several options, the most common and basic setup includes a line as follows:
|
||||
|
||||
order bind,hosts
|
||||
|
||||
This indicates that the resolver should first query the nameservers specified in `resolv.conf` and then fall back to the `/etc/hosts` file for name resolution.
|
||||
|
||||
4. `/etc/sysconfig/network` contains routing and global host information for all network interfaces. The following values may be used:
|
||||
|
||||
NETWORKING=yes|no
|
||||
HOSTNAME=value
|
||||
|
||||
Where value should be the Fully Qualified Domain Name (FQDN).
|
||||
|
||||
GATEWAY=XXX.XXX.XXX.XXX
|
||||
|
||||
Where XXX.XXX.XXX.XXX is the IP address of the network’s gateway.
|
||||
|
||||
GATEWAYDEV=value
|
||||
|
||||
In a machine with multiple NICs, value is the gateway device, such as enp0s3.
|
||||
|
||||
5. Files inside `/etc/sysconfig/network-scripts` (network adapters configuration files).
|
||||
|
||||
Inside the directory mentioned previously, you will find several plain text files named:
|
||||
|
||||
ifcfg-name
|
||||
|
||||
Where name is the name of the NIC as returned by ip link show:
|
||||
|
||||
![Check Network Link Status](http://www.tecmint.com/wp-content/uploads/2015/05/Check-IP-Address.png)
|
||||
|
||||
Check Network Link Status
|
||||
|
||||
For example:
|
||||
|
||||
![Network Files](http://www.tecmint.com/wp-content/uploads/2015/05/Network-Files.png)
|
||||
|
||||
Network Files
|
||||
|
||||
Other than for the loopback interface, you can expect a similar configuration for your NICs. Note that some variables, if set, will override those present in `/etc/sysconfig/network` for this particular interface. Each line is commented for clarification in this article but in the actual file you should avoid comments:
|
||||
|
||||
HWADDR=08:00:27:4E:59:37 # The MAC address of the NIC
|
||||
TYPE=Ethernet # Type of connection
|
||||
BOOTPROTO=static # This indicates that this NIC has been assigned a static IP. If this variable was set to dhcp, the NIC will be assigned an IP address by a DHCP server and thus the next two lines should not be present in that case.
|
||||
IPADDR=192.168.0.18
|
||||
NETMASK=255.255.255.0
|
||||
GATEWAY=192.168.0.1
|
||||
NM_CONTROLLED=no # Should be added to the Ethernet interface to prevent NetworkManager from changing the file.
|
||||
NAME=enp0s3
|
||||
UUID=14033805-98ef-4049-bc7b-d4bea76ed2eb
|
||||
ONBOOT=yes # The operating system should bring up this NIC during boot
|
||||
|
||||
### Setting Hostnames ###
|
||||
|
||||
In Red Hat Enterprise Linux 7, the hostnamectl command is used to both query and set the system’s hostname.
|
||||
|
||||
To display the current hostname, type:
|
||||
|
||||
# hostnamectl status
|
||||
|
||||
![Check System hostname in CentOS 7](http://www.tecmint.com/wp-content/uploads/2015/05/Check-System-hostname.png)
|
||||
|
||||
Check System Hostname
|
||||
|
||||
To change the hostname, use
|
||||
|
||||
# hostnamectl set-hostname [new hostname]
|
||||
|
||||
For example,
|
||||
|
||||
# hostnamectl set-hostname cinderella
|
||||
|
||||
For the changes to take effect you will need to restart the hostnamed daemon (that way you will not have to log off and on again in order to apply the change):
|
||||
|
||||
# systemctl restart systemd-hostnamed
|
||||
|
||||
![Set System Hostname in CentOS 7](http://www.tecmint.com/wp-content/uploads/2015/05/Set-System-Hostname.png)
|
||||
|
||||
Set System Hostname
|
||||
|
||||
In addition, RHEL 7 also includes the nmcli utility that can be used for the same purpose. To display the hostname, run:
|
||||
|
||||
# nmcli general hostname
|
||||
|
||||
and to change it:
|
||||
|
||||
# nmcli general hostname [new hostname]
|
||||
|
||||
For example,
|
||||
|
||||
# nmcli general hostname rhel7
|
||||
|
||||
![Set Hostname Using nmcli Command](http://www.tecmint.com/wp-content/uploads/2015/05/nmcli-command.png)
|
||||
|
||||
Set Hostname Using nmcli Command
|
||||
|
||||
### Starting Network Services on Boot ###
|
||||
|
||||
To wrap up, let us see how we can ensure that network services are started automatically on boot. In simple terms, this is done by creating symlinks to certain files specified in the [Install] section of the service configuration files.
|
||||
|
||||
In the case of firewalld (/usr/lib/systemd/system/firewalld.service):
|
||||
|
||||
[Install]
|
||||
WantedBy=basic.target
|
||||
Alias=dbus-org.fedoraproject.FirewallD1.service
|
||||
|
||||
To enable the service:
|
||||
|
||||
# systemctl enable firewalld
|
||||
|
||||
On the other hand, disabling firewalld entails removing the symlinks:
|
||||
|
||||
# systemctl disable firewalld
|
||||
|
||||
![Enable Service at System Boot](http://www.tecmint.com/wp-content/uploads/2015/05/Enable-Service-at-System-Boot.png)
|
||||
|
||||
Enable Service at System Boot
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
In this article we have summarized how to install and secure connections via SSH to a RHEL server, how to change its name, and finally how to ensure that network services are started on boot. If you notice that a certain service has failed to start properly, you can use systemctl status -l [service] and journalctl -xn to troubleshoot it.
|
||||
|
||||
Feel free to let us know what you think about this article using the comment form below. Questions are also welcome. We look forward to hearing from you!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/rhcsa-series-secure-ssh-set-hostname-enable-network-services-in-rhel-7/
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://www.tecmint.com/20-netstat-commands-for-linux-network-management/
|
||||
[2]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
|
@ -1,3 +1,5 @@
|
||||
FSSlc Translating
|
||||
|
||||
RHCSA Series: Installing, Configuring and Securing a Web and FTP Server – Part 9
|
||||
================================================================================
|
||||
A web server (also known as an HTTP server) is a service that serves content (most commonly web pages, but other types of documents as well) to clients over a network.
|
||||
@ -173,4 +175,4 @@ via: http://www.tecmint.com/rhcsa-series-install-and-secure-apache-web-server-an
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://httpd.apache.org/docs/2.4/
|
||||
[2]:http://www.tecmint.com/manage-and-limit-downloadupload-bandwidth-with-trickle-in-linux/
|
||||
[3]:http://www.google.com/cse?cx=partner-pub-2601749019656699:2173448976&ie=UTF-8&q=virtual+hosts&sa=Search&gws_rd=cr&ei=Dy9EVbb0IdHisASnroG4Bw#gsc.tab=0&gsc.q=apache
|
||||
|
||||
|
@ -0,0 +1,80 @@
|
||||
KevinSJ Translating
|
||||
四大开源版命令行邮件客户端
|
||||
================================================================================
|
||||
![](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life_mail.png)
|
||||
|
||||
无论你承认与否,email并没有消亡。对依赖命令行的 Linux 高级用户而言,离开 shell 转而使用传统的桌面或网页版邮件客户端并不合适。归根结底,命令行最善于处理文件,特别是文本文件,能使效率倍增。
|
||||
|
||||
幸运的是,也有不少的命令行邮件客户端,它们的用户大都乐于帮助你入门并回答你使用中遇到的问题。但别说我没警告过你:一旦你完全掌握了其中一个客户端,要再使用基于图形界面的客户端将会变得很困难!
|
||||
|
||||
要安装下述四个客户端中的任何一个都非常容易;主要 Linux 发行版的软件仓库中都提供此类软件,并可通过包管理器进行安装。你也可以在其他的操作系统中寻找并安装这类客户端,但我并未尝试过,也没有相关的经验。
|
||||
|
||||
### Mutt ###
|
||||
|
||||
- [项目主页][1]
|
||||
- [源代码][2]
|
||||
- 授权协议: [GPLv2][3]
|
||||
|
||||
许多终端爱好者都听说过甚至熟悉 Mutt 和 Alpine, 他们已经存在多年。让我们先看看 Mutt。
|
||||
|
||||
Mutt 支持许多你所期望 email 系统支持的功能:会话,颜色区分,支持多语言,同时还有很多设置选项。它支持 POP3 和 IMAP, 两个主要的邮件传输协议,以及许多邮箱格式。自从1995年诞生以来, Mutt 即拥有一个活跃的开发社区,但最近几年,新版本更多的关注于修复问题和安全更新而非提供新功能。这对大多数 Mutt 用户而言并无大碍,他们钟爱这样的界面,并支持此项目的口号:“所有邮件客户端都很烂,只是这个烂的没那么彻底。”
|
||||
|
||||
### Alpine ###
|
||||
|
||||
- [项目主页][4]
|
||||
- [源代码][5]
|
||||
- 授权协议: [Apache 2.0][6]
|
||||
|
||||
Alpine 是另一款知名的终端邮件客户端,它由华盛顿大学开发,初衷是作为 Pine 的开源且支持 Unicode 的替代版本。
|
||||
|
||||
Alpine 不仅容易上手,还为高级用户提供了很多特性,它支持很多协议 —— IMAP, LDAP, NNTP, POP, SMTP 等,同时也支持不同的邮箱格式。Alpine 内置了一款名为 Pico 的可独立使用的简易文本编辑工具,但你也可以使用你常用的文本编辑器: vi, Emacs等。
|
||||
|
||||
尽管 Alpine 的升级并不频繁,但名为 re-alpine 的分支为不同的开发者提供了继续开发此项目的机会。
|
||||
|
||||
Alpine 支持在屏幕上显示上下文相关的帮助,而一些用户会更喜欢 Mutt 式的独立说明手册,不过两者的文档都比较完善。用户可以同时尝试 Mutt 和 Alpine,再根据个人喜好作出决定,也可以尝试一下后面这几个比较新颖的选择。
|
||||
|
||||
### Sup ###
|
||||
|
||||
- [项目主页][7]
|
||||
- [源代码][8]
|
||||
- 授权协议: [GPLv2][9]
|
||||
|
||||
Sup 是我们列表中能被称为“大容量邮件客户端”的两个之一。Sup 自称是“为邮件较多的人设计的命令行客户端”,它的目标是提供一个层次化的界面,并允许为会话添加标签以便简单整理。
|
||||
|
||||
由于采用 Ruby 编写,Sup 能提供十分快速的搜索并能自动管理联系人列表,同时还允许自定义插件。对于使用 Gmail 作为网页邮件客户端的人们,这些功能都是耳熟能详的,这就使得 Sup 成为一种比较现代的命令行邮件管理方式。
|
||||
|
||||
|
||||
### Notmuch ###
|
||||
|
||||
- [项目主页][10]
|
||||
- [源代码][11]
|
||||
- 授权协议: [GPLv3][12]
|
||||
|
||||
"Sup? Notmuch." Notmuch 作为 Sup 的回应,最初只是重写了 Sup 的一小部分来提高性能。最终,这个项目逐渐变大并成为了一个独立的邮件客户端。
|
||||
|
||||
Notmuch 是一款相当精简的软件。它并不能独立地收发邮件,实现 Notmuch 快速搜索功能的代码实际上是一个可供其他程序调用的独立库。但这样的模块化设计也使得你能使用你最爱的工具进行写信、发信和收信,让它集中精力做好一件事情:高效地浏览和管理你的邮件。
|
||||
|
||||
这个列表并不完整,还有很多 email 客户端,他们或许才是你的最佳选择。你喜欢什么客户端呢?
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://opensource.com/life/15/8/top-4-open-source-command-line-email-clients
|
||||
|
||||
作者:[Jason Baker][a]
|
||||
译者:[KevinSJ](https://github.com/KevinSj)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://opensource.com/users/jason-baker
|
||||
[1]:http://www.mutt.org/
|
||||
[2]:http://dev.mutt.org/trac/
|
||||
[3]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
|
||||
[4]:http://www.washington.edu/alpine/
|
||||
[5]:http://www.washington.edu/alpine/acquire/
|
||||
[6]:http://www.apache.org/licenses/LICENSE-2.0
|
||||
[7]:http://supmua.org/
|
||||
[8]:https://github.com/sup-heliotrope/sup
|
||||
[9]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
|
||||
[10]:http://notmuchmail.org/
|
||||
[11]:http://notmuchmail.org/releases/
|
||||
[12]:http://www.gnu.org/licenses/gpl.html
|
@ -0,0 +1,37 @@
|
||||
看这些孩子在Ubuntu的Linux终端下玩耍
|
||||
================================================================================
|
||||
我发现了一个孩子们在他们的计算机教室里玩得很开心的视频。我不知道他们在哪里,但我猜测是在印度尼西亚或者马来西亚。
|
||||
|
||||
注:youtube 视频
|
||||
<iframe width="640" height="390" frameborder="0" allowfullscreen="true" src="http://www.youtube.com/embed/z8taQPomp0Y?version=3&rel=1&fs=1&showsearch=0&showinfo=1&iv_load_policy=1&wmode=transparent" type="text/html" class="youtube-player"></iframe>
|
||||
|
||||
### 在Linux终端下面跑火车 ###
|
||||
|
||||
这里没有魔术,只是一个叫做“sl”的命令行工具。我猜它是为了在把 ls 打错的时候找点乐子而开发的。如果你曾经在 Linux 的命令行下工作,你会知道 ls 是最常使用的命令之一,也许也是最经常打错的命令。
|
||||
|
||||
如果你想从这个终端下的火车获得一些乐趣,你可以使用下面的命令安装它。
|
||||
|
||||
sudo apt-get install sl
|
||||
|
||||
要运行终端火车,只需要在终端中输入**sl**。它有以下几个选项:
|
||||
|
||||
- -a : 事故模式。你会看见呼救的群众
|
||||
- -l : 显示一个更小的火车但有更多的车厢
|
||||
- -F : 一个飞行的火车
|
||||
- -e : 允许通过 Ctrl+C 中断。在其他模式下你不能使用 Ctrl+C 中断火车,不过火车反正也跑不了多长时间。
|
||||
|
||||
正常情况下,你应该还会听到汽笛声,但是在大多数 Linux 系统下它都不出声,Ubuntu 就是其中之一。这就是这辆意外的终端火车。
|
||||
|
||||
![Linux Terminal Train](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/04/Linux_Terminal_Train.jpeg)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/ubuntu-terminal-train/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
@ -0,0 +1,53 @@
|
||||
Docker Working on Security Components, Live Container Migration
|
||||
================================================================================
|
||||
![Docker Container Talk](http://www.eweek.com/imagesvr_ce/1905/290x195DockerMarianna.jpg)
|
||||
|
||||
**Docker 开发者在 Containercon 上的演讲,谈论将来的容器在安全和实时迁移方面的创新**
|
||||
|
||||
来自西雅图的消息。当前 IT 界最热的词汇是“容器”,美国有两大研讨会:Linuxcon USA 和 Containercon,后者就是为容器而生的。
|
||||
|
||||
Docker 公司是开源 Docker 项目的商业赞助商,本次研讨会这家公司有 3 位高管带来主题演讲,但公司创始人 Solomon Hykes 没上场演讲。
|
||||
|
||||
Hykes 曾在 2014 年的 Linuxcon 上进行过一次主题演讲,但今年的 Containercon 他只坐在观众席上。而工程部高级副总裁 Marianna Tessel、Docker 首席安全官 Diogo Mónica 和核心维护者 Michael Crosby 为我们讲解了 Docker 新增的功能和将来会有的功能。
|
||||
|
||||
Tessel 强调 Docker 现在已经被很多世界上最大的组织用在生产环境中,包括美国政府。Docker 也被用在小环境中,比如树莓派,一块树莓派上可以跑 2300 个容器。
|
||||
|
||||
“Docker 的功能正在变得越来越强大,而部署方法变得越来越简单。”Tessel 在会上说道。
|
||||
|
||||
Tessel 把 Docker 形容成一艘游轮,内部由强大而复杂的机器驱动,外部为乘客提供平稳航行的体验。
|
||||
|
||||
Docker 试图解决的领域是简化安全配置。Tessel 认为对于大多数用户和组织来说,避免网络漏洞所涉及的安全问题是一个乏味而且复杂的过程。
|
||||
|
||||
于是 Docker Content Trust 就出现在 Docker 1.8 发布版本中了。安全项目负责人 Diogo Mónica 随后加入 Tessel 的台上讨论,他说安全是一个难题,而 Docker Content Trust 就是为解决这个难题而存在的。
|
||||
|
||||
Docker Content Trust 提供了一种方法来验证一个 Docker 应用是否可信,以及多种方法来防范欺骗和病毒注入。
|
||||
|
||||
为了证明他的观点,Monica 做了个现场示范,演示 Content Trust 的效果。在一个实验中,一个网站在更新过程中其 Web App 被人为攻破,而当 Content Trust 启动后,这个黑客行为再也无法得逞。
|
||||
|
||||
“不要被这个表面上简单的演示欺骗了,”Tessel 说道,“你们看的是最安全的可行方案。”
|
||||
|
||||
Docker 以前没有实现的领域是实时迁移,这个技术在 VMware 虚拟机中叫做 vMotion,而现在,Docker 也实现了这个功能。
|
||||
|
||||
Docker 首席维护员 Micheal Crosby 在台上做了个实时迁移的演示,Crosby 把这个过程称为快照和恢复:首先从运行中的容器拿到一个快照,之后将这个快照移到另一个地方恢复。
|
||||
|
||||
一个容器也可以克隆到另一个地方,Crosby 将他的克隆容器称为“多莉”,这是世界上第一只克隆羊的名字。
|
||||
|
||||
Tessel 也花了点时间聊了下 RunC 组件,这是一个由开放容器计划(Open Container Initiative)多方协作开发的项目,目的是让容器兼容 Linux、Windows 和 Solaris。
|
||||
|
||||
Tessel 总结说她不知道 Docker 的未来会是什么样,但对此抱有非常乐观的态度。
|
||||
|
||||
“我不确定未来是什么样的,但我很确定 Docker 会在这个世界中脱颖而出。”Tessel 说道。
|
||||
|
||||
Sean Michael Kerner 是 eWEEK 和 InternetNews.com 网站的高级编辑,可通过推特 @TechJournalist 关注他。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.eweek.com/virtualization/docker-working-on-security-components-live-container-migration.html
|
||||
|
||||
作者:[Sean Michael Kerner][a]
|
||||
译者:[bazz2](https://github.com/bazz2)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/
|
@ -0,0 +1,50 @@
|
||||
LinuxCon: 服务器操作系统的转型
|
||||
================================================================================
|
||||
来自西雅图。容器迟早要改变世界,以及改变操作系统的角色。这是 Wim Coekaerts 带来的 LinuxCon 演讲主题,Coekaerts 是 Oracle 公司 Linux 与虚拟化工程的高级副总裁。
|
||||
|
||||
![](http://www.serverwatch.com/imagesvr_ce/6421/wim-200x150.jpg)
|
||||
|
||||
Coekaerts 在开始演讲的时候拿出一张关于“桌面之年”的幻灯片,引发了现场观众的一片笑声。之后他说 2015 年很明显是容器之年,更是应用之年,应用才是容器的关键。
|
||||
|
||||
“你需要操作系统来做什么事情?”Coekaerts 向现场观众发问,然后自己答道:“只需一件事:运行一个应用。操作系统负责管理硬件和资源,来让你的应用运行起来。”
|
||||
|
||||
Coekaerts 说在 Docker 容器的帮助下,我们的注意力再次集中在应用上,而在 Oracle,我们将注意力放在如何让应用更好地运行在操作系统上。
|
||||
|
||||
“许多人过去常常需要繁琐地安装应用,而现在的年轻人只需要按一个按钮就能让应用在他们的移动设备上运行起来”。
|
||||
|
||||
人们对安装企业版的软件需要这么复杂的步骤而感到惊讶,而 Docker 帮助他们脱离了这片苦海。
|
||||
|
||||
“操作系统的角色已经变了。” Coekaerts 说。
|
||||
|
||||
Docker 的出现不代表虚拟机的淘汰,容器化过程需要经过很长时间才能变得成熟,然后才能在世界范围内得到应用。
|
||||
|
||||
在这段时间内,容器会与虚拟机共存,并且我们需要一些工具,将应用在容器和虚拟机之间进行转换迁移。Coekaerts 举例说 Oracle 的 VirtualBox 就可以用来帮助用户运行 Docker,而它原来是被广泛用在桌面系统上的一项开源技术。现在 Docker 的 Kitematic 项目将在 Mac 上使用 VirtualBox 运行 Docker。
|
||||
|
||||
|
||||
### 开放容器计划与容器的“一次编写、随处部署” ###
|
||||
|
||||
一个能让容器成功的关键是“一次编写、随处部署”的理念。而在容器之间的互操作领域,Linux 基金会的开放容器计划(OCI)扮演着非常关键的角色。
|
||||
|
||||
“使用 OCI,应用编译一次后就可以很方便地在多地运行,所以你可以将你的应用部署在任何地方”。
|
||||
|
||||
Coekaerts 总结说虽然在迁移到容器模型过程中会发生很多好玩的事情,但容器还没真正做好准备,他强调 Oracle 现在正在验证将产品运行在容器内的可行性,但这是一个非常艰难的过程。
|
||||
|
||||
“运行数据库很简单,难的是要搞定数据库所需的环境”,Coekaerts 说:“容器与虚拟机不一样,一些需要依赖底层系统配置的应用无法从主机迁移到容器中。”
|
||||
|
||||
另外,Coekaerts 指出在容器内调试问题与在虚拟机内调试问题也是不一样的,现在还没有成熟的工具来进行容器应用的调试。
|
||||
|
||||
Coekaerts 强调当容器足够成熟时,有一点很重要:不要抛弃现有的技术。组织和企业不能抛弃现有的部署好的应用,而完全投入新技术的怀抱。
|
||||
|
||||
“部署新技术是很困难的事情,你需要缓慢地迁移过去,能让你顺利迁移的技术才是成功的技术。”Coekaerts 说。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.serverwatch.com/server-news/linuxcon-the-changing-role-of-the-server-os.html
|
||||
|
||||
作者:[Sean Michael Kerner][a]
|
||||
译者:[bazz2](https://github.com/bazz2)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.serverwatch.com/author/Sean-Michael-Kerner-101580.htm
|
@ -0,0 +1,49 @@
|
||||
Linux 内核的发展方向
|
||||
================================================================================
|
||||
![](http://www.eweek.com/imagesvr_ce/485/290x195cilinux1.jpg)
|
||||
|
||||
**即将到来的 Linux 4.2 内核涉及到史上最多的贡献者数量,内核开发者 Jonathan Corbet 如是说。**
|
||||
|
||||
来自西雅图。Linux 内核持续增长:代码量在增加,代码贡献者数量也在增加。而随之而来的一些挑战需要处理一下。以上是 Jonathan Corbet 在今年的 LinuxCon 的内核年度报告上提出的主要观点。以下是他的主要演讲内容:
|
||||
|
||||
Linux 4.2 内核依然处于开发阶段,预计在8月23号释出。Corbet 强调有 1569 名开发者为这个版本贡献了代码,其中 277 名是第一次提交代码。
|
||||
|
||||
随着越来越多开发者的加入,内核更新得非常快,Corbet 估计现在大概每 63 天就会发布一个新的内核版本。
|
||||
|
||||
Linux 4.2 涉及多方面的更新。其中一个就是引入了 OverlayFS,这是一种联合文件系统,可以实现把一个容器叠放在另一个容器之上。
|
||||
|
||||
网络子系统对小包的传输性能也有了提升,这对于金融交易等高频传输领域而言非常重要。提升的方面主要集中在减少处理数据包所需的时间和能耗。
|
||||
|
||||
内核中依然在不断加入新的驱动。在每个内核发布周期,平均会有 60 到 80 个新增或升级的驱动加入。
|
||||
|
||||
另一个主要更新是实时内核补丁,这个特性在 4.0 版首次引进,好处是系统管理员可以在生产环境中打上内核补丁而不需要重启系统。当补丁所需要的元素都已准备就绪,打补丁的过程会在后台持续而稳定地进行。
|
||||
|
||||
**Linux 安全、IoT 和其他关注点**
|
||||
|
||||
过去一年中,安全问题在开源社区是一个很热的话题,这都归因于那些引发高度关注的事件,比如 Heartbleed 和 Shellshock。
|
||||
|
||||
“我毫不怀疑 Linux 代码对这些方面的忽视会产生一些令人不悦的问题”,Corbet 原话。
|
||||
|
||||
他强调说过去 10 年间有超过 3 百万行代码不再被开发者修改,而产生 Shellshock 漏洞的代码的年龄已经是 20 岁了,近年来更是无人问津。
|
||||
|
||||
另一个关注点是 2038 问题,Linux 界的“千年虫”,如果不解决,2000 年出现过的问题还会重现。2038 问题说的是在 2038 年一些 Linux 和 Unix 机器会死机(LCTT:32 位系统记录的时间,在2038年1月19日星期二晚上03:14:07之后的下一秒,会变成负数)。Corbet 说现在离 2038 年还有 23 年时间,现在部署的系统都会考虑 2038 问题。
|
||||
|
||||
Linux 已经开始一些初步的方案来修复 2038 问题了,但做的还远远不够。“现在就要修复这个问题,而不是等 20 年后把这个头疼的问题留给下一代解决,我们却享受着退休的美好时光”。
|
||||
|
||||
物联网(IoT)也是 Linux 关注的领域,Linux 是物联网嵌入式操作系统的主要选择,然而这个地位并非高枕无忧。Corbet 认为日渐臃肿的内核对于未来的物联网设备来说肯定过于庞大。
|
||||
|
||||
现在有一个项目就是做内核最小化的,获取足够的支持对于这个项目来说非常重要。
|
||||
|
||||
“除了 Linux 之外,也有其他项目可以做物联网,但那些项目不会像 Linux 一样开放”,Corbet 说,“我们不能指望 Linux 在物联网领域一直保持优势,我们需要靠自己的努力去做到这点,我们需要注意不能让内核变得越来越臃肿。”
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.eweek.com/enterprise-apps/a-look-at-whats-next-for-the-linux-kernel.html
|
||||
|
||||
作者:[Sean Michael Kerner][a]
|
||||
译者:[bazz2](https://github.com/bazz2)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/
|
@ -0,0 +1,25 @@
|
||||
Linux 界将出现一个新的文件系统:bcachefs
|
||||
================================================================================
|
||||
这个有 5 年历史,由 Kent Oberstreet 创建,过去属于谷歌的文件系统,最近完成了关键的组件。Bcachefs 文件系统自称其性能和稳定性与 ext4 和 xfs 相同,而其他方面的功能又可以与 btrfs 和 zfs 相媲美。主要特性包括校验、压缩、多设备支持、缓存、快照与其他好用的特性。
|
||||
|
||||
Bcachefs 来自 **bcache**,这是一个块设备缓存层。从 bcache 到一个功能完整的[写时复制][1]文件系统,堪称是一项质的转变。
|
||||
|
||||
在回答自己提出的“为什么要做一个新的文件系统”这个问题时,Kent Oberstreet 说:当我还在谷歌的时候,我与其他在 bcache 上工作的同事偶然意识到,我们正在使用的东西可以成为一个成熟文件系统的功能块,我们可以用 bcache 创建一个拥有干净而优雅设计的文件系统。而最重要的一点是,bcachefs 的主要目的就是在性能和稳定性上能与 ext4 和 xfs 匹敌,同时拥有 btrfs 和 zfs 的特性。
|
||||
|
||||
Overstreet 邀请人们在自己的系统上测试 bcachefs,可以通过邮件列表[通告][2]获取 bcachefs 的操作指南。
|
||||
|
||||
在 Linux 生态系统中,ext4 几乎处于文件系统一家独大的状态。Fedora 在第 16 版的时候就想用 btrfs 换掉 ext4 作为其默认文件系统,但是到现在(LCTT:都出到 Fedora 22 了)还在使用 ext4。而几乎所有 Debian 系的发行版(Ubuntu、Mint、elementary OS 等)也使用 ext4 作为默认文件系统,并且这些主流的发行版都没有更换默认文件系统的意思。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxveda.com/2015/08/22/linux-gain-new-file-system-bcachefs/
|
||||
|
||||
作者:[Paul Hill][a]
|
||||
译者:[bazz2](https://github.com/bazz2)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxveda.com/author/paul_hill/
|
||||
[1]:https://en.wikipedia.org/wiki/Copy-on-write
|
||||
[2]:https://lkml.org/lkml/2015/8/21/22
|
@ -0,0 +1,113 @@
|
||||
安装 Strongswan:Linux 上一个基于 IPsec 的 VPN 工具
|
||||
================================================================================
|
||||
IPsec 是一个提供网络层安全的标准。它包含认证头(AH)和安全负载封装(ESP)两个组件:AH 提供数据包的完整性,ESP 提供数据包的保密性。IPsec 在网络层确保了以下安全特性:
|
||||
|
||||
- 保密性
|
||||
- 数据包完整性
|
||||
- 来源不可抵赖性
|
||||
- 重放攻击防护
|
||||
|
||||
[Strongswan][1] 是一个开源的 IPsec 协议实现,其名字代表“强壮开源广域网”(StrongS/WAN)。它支持 IPsec VPN 的两个版本的自动密钥交换协议(互联网密钥交换 IKEv1 和 IKEv2)。
|
||||
|
||||
Strongswan 主要负责在 VPN 的两个节点或网络之间自动交换密钥,然后使用 Linux 内核的 IPsec(AH 和 ESP)实现来完成实际的数据保护。密钥共享通过 IKE 机制完成,数据则使用 ESP 加密。在 IKE 阶段,strongswan 使用 OpenSSL 和其他加密类库的加密算法(AES、SHA 等);而 IPsec 的 ESP 组件使用的安全算法则由 Linux 内核实现。Strongswan 的主要特性如下:
|
||||
|
||||
- 基于 X.509 证书或预共享密钥的认证
|
||||
- 支持IKEv1和IKEv2密钥交换协议
|
||||
- 对插件和库的可选的内置完整性和加密测试
|
||||
- 支持椭圆曲线 DH 群组和 ECDSA 证书
|
||||
- 在智能卡上存储RSA私钥和证书
|
||||
|
||||
它可以用于客户端/服务器(road warrior 模式)和网关到网关的场景。
|
||||
|
||||
### 如何安装 ###
|
||||
|
||||
几乎所有的 Linux 发行版都提供 Strongswan 的二进制包。在本教程中,我们既会从二进制包安装 strongswan,也会从源代码编译启用了合适特性的 strongswan。
|
||||
|
||||
### 使用二进制包 ###
|
||||
|
||||
在 Ubuntu 14.04 LTS 上,可以使用以下命令安装 Strongswan:
|
||||
|
||||
$sudo aptitude install strongswan
|
||||
|
||||
![安装strongswan](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-binary.png)
|
||||
|
||||
strongswan的全局配置(strongswan.conf)文件和ipsec配置(ipsec.conf/ipsec.secrets)文件都在/etc/目录下。
|
||||
|
||||
### strongswan源码编译安装的依赖包 ###
|
||||
|
||||
- GMP(strongswan 使用的高精度数学运算库)
|
||||
- OpenSSL(提供加密算法的库)
|
||||
- PKCS(1、7、8、11、12)(证书编码,以及与智能卡的集成)
|
||||
|
||||
#### 步骤 ####
|
||||
|
||||
**1)** 在终端中使用下面命令进入 /usr/src 目录:
|
||||
|
||||
$cd /usr/src
|
||||
|
||||
**2)** 用下面命令从 strongswan 网站下载源代码:
|
||||
|
||||
$sudo wget http://download.strongswan.org/strongswan-5.2.1.tar.gz
|
||||
|
||||
(strongswan-5.2.1.tar.gz 是最新版。)
|
||||
|
||||
![下载软件](http://blog.linoxide.com/wp-content/uploads/2014/12/download_strongswan.png)
|
||||
|
||||
**3)** 用下面命令解压下载的软件包,然后进入它的目录:
|
||||
|
||||
$ sudo tar -xvzf strongswan-5.2.1.tar.gz; cd strongswan-5.2.1
|
||||
|
||||
**4)** 使用 configure 命令配置 strongswan,并启用想要的选项:
|
||||
|
||||
./configure --prefix=/usr/local --enable-pkcs11 --enable-openssl
|
||||
|
||||
![检查strongswan包](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-configure.png)
|
||||
|
||||
如果没有安装 GMP 库,配置脚本将会报出下面的错误:
|
||||
|
||||
![GMP library error](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-error.png)
|
||||
|
||||
因此,先使用下面命令安装 GMP 库,再执行配置脚本:
|
||||
|
||||
![gmp installation](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-installation1.png)
|
||||
|
||||
不过,如果 GMP 已经安装却还是报错,那么在 Ubuntu 上使用下面命令,在 /usr/lib、/lib/、/usr/lib/x86_64-linux-gnu/ 路径下创建 libgmp.so 库的软链接:
|
||||
|
||||
$ sudo ln -s /usr/lib/x86_64-linux-gnu/libgmp.so.10.1.3 /usr/lib/x86_64-linux-gnu/libgmp.so
|
||||
|
||||
![softlink of libgmp.so library](http://blog.linoxide.com/wp-content/uploads/2014/12/softlink.png)
|
||||
|
||||
创建 libgmp.so 软链接后,再执行 ./configure 脚本应该就能找到 gmp 库了。不过 gmp 头文件可能还会引发其他错误,像下面这样:
|
||||
|
||||
![GMP header file issu](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-header.png)
|
||||
|
||||
为解决上面的错误,使用下面命令安装 libgmp-dev 包:
|
||||
|
||||
$sudo aptitude install libgmp-dev
|
||||
|
||||
![Installation of Development library of GMP](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-dev.png)
|
||||
|
||||
安装 gmp 的开发库后,再运行一遍配置脚本,如果没有错误,将会看到下面的输出:
|
||||
|
||||
![Output of Configure scirpt](http://blog.linoxide.com/wp-content/uploads/2014/12/successful-run.png)
|
||||
|
||||
使用下面的命令编译安装strongswan。
|
||||
|
||||
$ sudo make ; sudo make install
|
||||
|
||||
安装 strongswan 后,全局配置文件(strongswan.conf)和 ipsec 策略/密钥配置文件(ipsec.conf/ipsec.secrets)被放在 **/usr/local/etc** 目录。
|
||||
|
||||
根据安全需求,Strongswan 可以工作在隧道模式或传输模式下。它提供了众所周知的站点到站点(site-to-site)模式和 road warrior 模式的 VPN,并且很容易与 Cisco、Juniper 设备配合使用。
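下面是一段假设的站点到站点隧道的 ipsec.conf 配置草例,这里把它写到一个示例文件里便于查看。其中的 IP 地址、子网和算法组合都是为演示虚构的,实际取值必须按你的网络环境修改:

```shell
# 生成一个假设的 site-to-site 隧道配置示例(两端网关和子网均为虚构)
cat > ipsec.conf.sample <<'EOF'
config setup
        charondebug="ike 2, knl 2"

# 假设的站点到站点隧道:左右两端各是一个网关及其内网子网
conn site-to-site
        type=tunnel
        auto=start
        keyexchange=ikev2
        authby=secret
        left=192.168.1.1
        leftsubnet=10.1.0.0/16
        right=192.168.2.1
        rightsubnet=10.2.0.0/16
EOF
echo "已生成示例配置 ipsec.conf.sample"
```

预共享密钥则放在 ipsec.secrets 中;road warrior 场景只需把 right 改成 `%any` 这样的通配配置即可。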
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/security/install-strongswan/
|
||||
|
||||
作者:[nido][a]
|
||||
译者:[wyangsun](https://github.com/wyangsun)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/naveeda/
|
||||
[1]:https://www.strongswan.org/
|
@ -0,0 +1,111 @@
|
||||
在 VirtualBox 中使用 Docker Machine 管理主机
|
||||
================================================================================
|
||||
大家好,今天我们学习在 VirtualBox 中使用 Docker Machine 来创建和管理 Docker 主机。Docker Machine 是一个应用,用于在我们的电脑上、在云端、在数据中心创建 Docker 主机,然后用户可以使用 Docker 客户端来配置一些东西。这个 API 为本地主机、或数据中心的虚拟机、或云端的实例提供 Docker 服务。Docker Machine 支持 Windows、OSX 和 Linux,并且是以一个独立的二进制文件包形式安装的。使用(与现有 Docker 工具)相同的接口,我们就可以充分利用已经提供 Docker 基础框架的生态系统。只要一个命令,用户就能快速部署 Docker 容器。
|
||||
|
||||
本文列出一些简单的步骤用 Docker Machine 来部署 docker 容器。
|
||||
|
||||
### 1. 安装 Docker Machine ###
|
||||
|
||||
Docker Machine 完美支持所有 Linux 操作系统。首先我们需要从 [github][1] 下载最新版本的 Docker Machine,本文使用 curl 作为下载工具,Docker Machine 版本为 0.2.0。
|
||||
|
||||
**64 位操作系统**
|
||||
|
||||
# curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine
|
||||
|
||||
**32 位操作系统**
|
||||
|
||||
# curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine
|
||||
|
||||
下载完成后,找到 **/usr/local/bin** 目录下的 **docker-machine** 文件,执行一下:
|
||||
|
||||
# chmod +x /usr/local/bin/docker-machine
|
||||
|
||||
确认是否成功安装了 docker-machine,可以运行下面的命令,它会打印 Docker Machine 的版本信息:
|
||||
|
||||
# docker-machine -v
|
||||
|
||||
![安装 Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png)
|
||||
|
||||
运行下面的命令,安装 Docker 客户端,以便于在我们自己的电脑上运行 Docker 命令:
|
||||
|
||||
# curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker
|
||||
# chmod +x /usr/local/bin/docker
|
||||
|
||||
### 2. 创建 VirtualBox 虚拟机 ###
|
||||
|
||||
在 Linux 系统上安装完 Docker Machine 后,接下来我们可以安装 VirtualBox 虚拟机,运行下面的命令就可以了。--driver virtualbox 选项表示我们要在 VirtualBox 的虚拟机里面部署 docker,最后的参数“linux”是虚拟机的名称。这个命令会下载 [boot2docker][2] iso,它是个基于 Tiny Core Linux 的轻量级发行版,自带 Docker 程序,然后 docker-machine 命令会创建一个 VirtualBox 虚拟机(LCTT 译注:当然,我们也可以选择其他的虚拟机软件)来运行这个 boot2docker 系统。
|
||||
|
||||
# docker-machine create --driver virtualbox linux
|
||||
|
||||
![创建 Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-docker-machine.png)
|
||||
|
||||
测试下有没有成功运行 VirtualBox 和 Docker,运行命令:
|
||||
|
||||
# docker-machine ls
|
||||
|
||||
![Docker Machine List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-list.png)
|
||||
|
||||
如果执行成功,我们可以看到在 ACTIVE 那列下面会出现一个星号“*”。
|
||||
|
||||
### 3. 设置环境变量 ###
|
||||
|
||||
现在我们需要让 docker 与虚拟机通信,运行 docker-machine env <虚拟机名称> 来实现这个目的。
|
||||
|
||||
# eval "$(docker-machine env linux)"
|
||||
# docker ps
|
||||
|
||||
这个命令会设置 TLS 认证的环境变量,每次重启机器或者重新打开一个会话都需要执行一下这个命令,我们可以看到它的输出内容:
|
||||
|
||||
# docker-machine env linux
|
||||
|
||||
export DOCKER_TLS_VERIFY=1
|
||||
export DOCKER_CERT_PATH=/Users/<your username>/.docker/machine/machines/dev
|
||||
export DOCKER_HOST=tcp://192.168.99.100:2376
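可以用一个小的 shell 草例来理解这条 `eval` 命令做了什么。下面的 stub 函数以及其中的 IP、路径都是虚构的,仅用来模拟 `docker-machine env linux` 的输出:

```shell
# 虚构的 stub,模拟 `docker-machine env linux` 打印出的 export 语句
fake_docker_machine_env() {
    echo 'export DOCKER_TLS_VERIFY=1'
    echo 'export DOCKER_CERT_PATH=/home/user/.docker/machine/machines/linux'
    echo 'export DOCKER_HOST=tcp://192.168.99.100:2376'
}

# eval 把这些 export 语句在当前会话中执行,docker 客户端随后读取这些变量
eval "$(fake_docker_machine_env)"
echo "docker 客户端将连接到:$DOCKER_HOST"
```

这也解释了为什么每次新开会话都要重新执行一遍 `eval "$(docker-machine env linux)"`:环境变量只在当前会话中有效。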
|
||||
|
||||
### 4. 运行 Docker 容器 ###
|
||||
|
||||
完成配置后我们就可以在 VirtualBox 上运行 docker 容器了。测试一下,在虚拟机里执行 **docker run busybox echo hello world** 命令,我们可以看到容器的输出信息。
|
||||
|
||||
# docker run busybox echo hello world
|
||||
|
||||
![运行 Docker 容器](http://blog.linoxide.com/wp-content/uploads/2015/05/running-docker-container.png)
|
||||
|
||||
### 5. 拿到 Docker 主机的 IP ###
|
||||
|
||||
我们可以执行下面的命令获取 Docker 主机的 IP 地址。
|
||||
|
||||
# docker-machine ip
|
||||
|
||||
![Docker IP 地址](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-ip-address.png)
|
||||
|
||||
### 6. 管理主机 ###
|
||||
|
||||
现在我们可以随心所欲地使用上述的 docker-machine 命令来不断创建主机了。
|
||||
|
||||
当你使用完 docker 时,可以运行 **docker-machine stop** 来停止所有主机,如果想开启所有主机,运行 **docker-machine start**。
|
||||
|
||||
# docker-machine stop
|
||||
# docker-machine start
|
||||
|
||||
你也可以只停止或开启一台主机:
|
||||
|
||||
$ docker-machine stop linux
|
||||
$ docker-machine start linux
|
||||
|
||||
### 总结 ###
|
||||
|
||||
最后,我们使用 Docker Machine 成功在 VirtualBox 上创建并管理了一台 Docker 主机。Docker Machine 确实能让用户快速地在不同的平台上部署 Docker 主机,就像我们这里部署在 VirtualBox 上一样。这个 --driver virtualbox 驱动可以在本地机器上使用,也可以在数据中心的虚拟机上使用。Docker Machine 驱动除了支持本地的 VirtualBox 之外,还支持远端的 Digital Ocean、AWS、Azure、VMware 以及其他基础设施。如果你有任何疑问或者建议,请在评论栏中写出来,我们会不断改进我们的内容。谢谢,祝愉快。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/host-virtualbox-docker-machine/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[bazz2](https://github.com/bazz2)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:https://github.com/docker/machine/releases
|
||||
[2]:https://github.com/boot2docker/boot2docker
|
@ -1,154 +0,0 @@
|
||||
|
||||
如何使用 Datadog 监控 NGINX - 第3部分
|
||||
================================================================================
|
||||
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)
|
||||
|
||||
如果你已经阅读了[前面的如何监控 NGINX][1],你应该知道从你网络环境的几个指标中可以获取多少信息,也看到了从 NGINX 收集这些指标是多么容易。但要实现全面、持续的 NGINX 监控,你需要一个强大的监控系统来存储并可视化指标,并在异常发生时提醒你。在这篇文章中,我们将向你展示如何使用 Datadog 设置 NGINX 监控,以便你可以在定制的仪表盘中查看这些指标:
|
||||
|
||||
![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png)
|
||||
|
||||
Datadog 允许你针对单个主机、服务、进程、指标,或者它们的几乎任意组合,构建图形和告警。例如,你可以监控某个可用区内的所有 NGINX 主机,或者监控带有特定标签的所有主机的某个关键指标。本文将告诉你如何:
|
||||
|
||||
- 在 Datadog 仪表盘上,将 NGINX 指标与其他所有系统的指标放在一起监控
|
||||
- 当一个关键指标急剧变化时设置自动警报来通知你
|
||||
|
||||
### 配置 NGINX ###
|
||||
|
||||
为了收集 NGINX 指标,首先需要确保 NGINX 已启用 status 模块,并有一个报告 status 指标的 URL。[配置开源 NGINX][2] 和 [NGINX Plus][3] 的逐步指导请分别参考这两篇文章。
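以开源 NGINX 为例,启用 status 模块大致就是添加一个带 `stub_status` 的 location 配置。下面的草例把这样一段假设的配置写到示例文件里(监听端口、server_name 都是演示用的假定值,具体请以上面链接的文档为准):

```shell
# 写一个假设的 status server 配置到示例文件(并非真正配置 NGINX)
cat > nginx_status.conf.sample <<'EOF'
server {
    listen 80;
    server_name localhost;

    location /nginx_status {
        stub_status on;      # 开源 NGINX 的 status 模块
        access_log off;
        allow 127.0.0.1;     # 只允许本机(以及同机的 Datadog 代理)访问
        deny all;
    }
}
EOF
echo "示例配置已写入 nginx_status.conf.sample"
```

配置生效后,用 `curl http://localhost/nginx_status` 就能看到 NGINX 报告的连接数等基础指标。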
|
||||
|
||||
### 整合 Datadog 和 NGINX ###
|
||||
|
||||
#### 安装 Datadog 代理 ####
|
||||
|
||||
Datadog 代理是[一个开源软件][4],它能收集和报告你主机的指标,这样就可以使用 Datadog 查看和监控它们。安装这个代理通常[仅需要一个命令][5]。
|
||||
|
||||
只要你的代理启动并运行着,你会看到你主机的指标报告[在你 Datadog 账号下][6]。
|
||||
|
||||
![Datadog infrastructure list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png)
|
||||
|
||||
#### 配置 Agent ####
|
||||
|
||||
|
||||
接下来,你需要为代理创建一个简单的 NGINX 配置文件。在你系统中代理的配置目录应该 [在这儿][7]。
|
||||
|
||||
在目录里面的 conf.d/nginx.yaml.example 中,你会发现[一个简单的配置文件][8],你可以编辑它,为每个 NGINX 实例提供 status URL 和可选的标签:
|
||||
|
||||
init_config:
|
||||
|
||||
instances:
|
||||
|
||||
- nginx_status_url: http://localhost/nginx_status/
|
||||
tags:
|
||||
- instance:foo
|
||||
|
||||
一旦你修改了 status URLs 和其他标签,将配置文件保存为 conf.d/nginx.yaml。
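落实到操作上,大致就是下面这样。这里用 `./conf.d` 作为演示目录;实际的代理配置目录因平台而异,见上文链接:

```shell
# 把示例配置写成 conf.d/nginx.yaml(演示用目录,真实路径依平台而定)
CONF_DIR=${CONF_DIR:-./conf.d}
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/nginx.yaml" <<'EOF'
init_config:

instances:
  - nginx_status_url: http://localhost/nginx_status/
    tags:
      - instance:foo
EOF
echo "已写入 $CONF_DIR/nginx.yaml"
```

instances 下可以列出多个 status URL,每个实例都可以带自己的 tags,方便之后在仪表盘里按标签筛选。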
|
||||
|
||||
#### 重启代理 ####
|
||||
|
||||
|
||||
你必须重新启动代理程序来加载新的配置文件。重启命令根据平台的不同而不同,具体参见[这里][9]。
|
||||
|
||||
#### 检查配置文件 ####
|
||||
|
||||
要检查 Datadog 和 NGINX 是否正确整合,运行 Datadog 的信息命令。每个平台使用的命令[看这儿][10]。
|
||||
|
||||
如果配置是正确的,你会看到这样的输出:
|
||||
|
||||
Checks
|
||||
======
|
||||
|
||||
[...]
|
||||
|
||||
nginx
|
||||
-----
|
||||
- instance #0 [OK]
|
||||
- Collected 8 metrics & 0 events
|
||||
|
||||
#### 安装整合 ####
|
||||
|
||||
最后,在你的 Datadog 帐户里启用 NGINX 整合。这非常简单,只需要在 [NGINX 集成设置][11]配置页中点击“Install Integration”按钮。
|
||||
|
||||
![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png)
|
||||
|
||||
### 指标! ###
|
||||
|
||||
一旦代理开始报告 NGINX 指标,你就会在 Datadog 的可用仪表盘列表中看到[一个 NGINX 仪表盘][12]。
|
||||
|
||||
基本的 NGINX 仪表盘显示了[我们的 NGINX 监控介绍][13]中涉及的几个关键指标。(有一些指标,特别是请求处理时间,需要日志分析才能获得,Datadog 不提供。)
|
||||
|
||||
通过为 NGINX 之外的重要指标增加额外的图形,你可以轻松创建一个监控整个网站环境的全面仪表盘。例如,你可能想监控 NGINX 主机上的主机级指标,如系统负载。要构建自定义的仪表盘,只需点击仪表盘右上角的齿轮,并选择“Clone Dash”来克隆默认的 NGINX 仪表盘。
|
||||
|
||||
![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png)
|
||||
|
||||
你也可以使用 Datadog 的 [Host Maps][14] 在更高的层次上监控你的 NGINX 实例。例如,把你所有的 NGINX 主机按 CPU 使用率进行颜色标记,以辨别潜在热点。
|
||||
|
||||
![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png)
|
||||
|
||||
### NGINX 指标告警 ###
|
||||
|
||||
一旦 Datadog 捕获并可视化了你的指标,你可能会希望建立一些自动化的监控器来密切关注你的指标,并在有问题时提醒你。下面将介绍一个典型的例子:当 NGINX 吞吐量突然下降时发出提醒的指标监控器。
|
||||
|
||||
#### 监控 NGINX 吞吐量 ####
|
||||
|
||||
Datadog 指标告警可以基于阈值(threshold-based,当指标超过设定值时告警)或基于变化(change-based,当指标的变化超过一定范围时告警)。在这里我们采用后一种方式:当每秒传入的请求量急剧下降时提醒我们,因为骤降往往意味着出了问题。
|
||||
|
||||
1. **创建一个新的指标监控器**。从 Datadog 的“Monitors”下拉列表中选择“New Monitor”,然后选择“Metric”作为监视器类型。
|
||||
|
||||
![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png)
|
||||
|
||||
2. **定义你的指标监控器**。我们想知道 NGINX 每秒总请求量的下降情况,所以我们把要监控的指标定义为整个基础设施中 nginx.net.request_per_s 的总和。
|
||||
|
||||
![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png)
|
||||
|
||||
3. **设置指标告警条件**。我们想针对变化量而不是固定阈值告警,所以选择“Change Alert”。我们把监控器设置为:无论何时请求量下降 30% 以上就告警。这里我们使用一分钟的数据窗口来表示指标“当前”的值,并将其与 10 分钟前的指标值作比较。
|
||||
|
||||
![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png)
|
||||
|
||||
4. **自定义通知**。如果 NGINX 的请求量下降,我们想要通知我们的团队。在这个例子中,我们会给 ops 团队的聊天室发送通知,并呼叫值班工程师。在“Say what’s happening”中,我们给监控器命名,并附上一条随通知发送的短消息,提示从哪里开始调查。我们用 @mention 提醒 ops 使用的聊天频道,并用 @pagerduty [把告警发送给 PagerDuty][15]。
|
||||
|
||||
![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png)
|
||||
|
||||
5. **保存监控器**。点击页面底部的“Save”按钮。现在你就在监控一个关键的 NGINX [工作指标][16]了,一旦它急剧下降,就会呼叫值班工程师。
|
||||
|
||||
### 结论 ###
|
||||
|
||||
在这篇文章中,我们已经通过整合 NGINX 与 Datadog 来可视化你的关键指标,并当你的网络基础架构有问题时会通知你的团队。
|
||||
|
||||
如果你一直使用自己的 Datadog 账号跟随本文操作,那么你现在应该对 web 环境有了更好的可视性,也能够针对你的环境、你的使用模式、以及对你的组织最有价值的指标,创建自动化的监控器。
|
||||
|
||||
如果你还没有 Datadog 帐户,你可以注册[免费试用][17],并开始监视你的基础架构,应用程序和现在的服务。
|
||||
|
||||
----------
|
||||
这篇文章的来源位于 [GitHub][18]。有问题、错误或补充?请[联系我们][19]。
|
||||
|
||||
------------------------------------------------------------
|
||||
|
||||
via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
|
||||
|
||||
作者:K Young
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
|
||||
[2]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
|
||||
[3]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus
|
||||
[4]:https://github.com/DataDog/dd-agent
|
||||
[5]:https://app.datadoghq.com/account/settings#agent
|
||||
[6]:https://app.datadoghq.com/infrastructure
|
||||
[7]:http://docs.datadoghq.com/guides/basic_agent_usage/
|
||||
[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example
|
||||
[9]:http://docs.datadoghq.com/guides/basic_agent_usage/
|
||||
[10]:http://docs.datadoghq.com/guides/basic_agent_usage/
|
||||
[11]:https://app.datadoghq.com/account/settings#integrations/nginx
|
||||
[12]:https://app.datadoghq.com/dash/integration/nginx
|
||||
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
|
||||
[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/
|
||||
[15]:https://www.datadoghq.com/blog/pagerduty/
|
||||
[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics
|
||||
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up
|
||||
[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md
|
||||
[19]:https://github.com/DataDog/the-monitor/issues
|
674
translated/tech/20150728 Process of the Linux kernel building.md
Normal file
674
translated/tech/20150728 Process of the Linux kernel building.md
Normal file
@ -0,0 +1,674 @@
|
||||
如何构建Linux 内核
|
||||
================================================================================
|
||||
介绍
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
我不会告诉你怎么在自己的电脑上去构建、安装一个定制化的Linux 内核,这样的[资料](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code) 太多了,它们会对你有帮助。本文会告诉你当你在内核源码路径里敲下`make` 时会发生什么。当我刚刚开始学习内核代码时,[Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 是我打开的第一个文件,这个文件看起来真令人害怕 :)。那时候这个[Makefile](https://en.wikipedia.org/wiki/Make_%28software%29) 还只包含了`1591` 行代码,当我开始写本文时,这个[Makefile](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) 已经是第三个候选版本的了。
|
||||
|
||||
这个makefile 是Linux 内核代码的根makefile,内核构建就始于此处。是的,它的内容很多,但是如果你已经读过内核源代码,你就会发现每个包含代码的目录都有一个自己的makefile。当然了,我们不会去描述每个代码文件是怎么编译链接的,所以我们将只会挑选一些通用的例子来说明问题。而你不会在这里找到构建内核的文档、如何清理内核代码、[tags](https://en.wikipedia.org/wiki/Ctags) 的生成和[交叉编译](https://en.wikipedia.org/wiki/Cross_compiler) 相关的说明,等等。我们将从`make` 开始,使用标准的内核配置文件,到生成了内核镜像[bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage) 结束。
|
||||
|
||||
如果你已经很了解[make](https://en.wikipedia.org/wiki/Make_%28software%29) 工具那是最好,但是我也会描述本文出现的相关代码。
|
||||
|
||||
让我们开始吧
|
||||
|
||||
|
||||
编译内核前的准备
|
||||
---------------------------------------------------------------------------------
|
||||
|
||||
在开始编译前要进行很多准备工作。最主要的就是找到并配置好配置文件,`make` 命令要使用到的参数都需要从这些配置文件获取。现在就让我们深入内核的根`makefile` 吧
|
||||
|
||||
内核的根`Makefile` 负责构建两个主要的文件:[vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (内核镜像可执行文件)和模块文件。内核的 [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 从此处开始:
|
||||
|
||||
```Makefile
|
||||
VERSION = 4
|
||||
PATCHLEVEL = 2
|
||||
SUBLEVEL = 0
|
||||
EXTRAVERSION = -rc3
|
||||
NAME = Hurr durr I'ma sheep
|
||||
```
|
||||
|
||||
这些变量决定了当前内核的版本,并且被使用在很多不同的地方,比如`KERNELVERSION` :
|
||||
|
||||
```Makefile
|
||||
KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION)
|
||||
```
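这段拼接逻辑可以用几行 shell 草例来模拟(`$(if ...)` 在这里的作用是:分量非空时才添加对应的 `.x` 部分):

```shell
# 模拟根 Makefile 中 KERNELVERSION 的拼接逻辑(仅为示意)
VERSION=4 PATCHLEVEL=2 SUBLEVEL=0 EXTRAVERSION=-rc3

kernelversion() {
    v="$VERSION"
    # PATCHLEVEL 非空才加 ".PATCHLEVEL";PATCHLEVEL 和 SUBLEVEL 都非空才加 ".SUBLEVEL"
    if [ -n "$PATCHLEVEL" ]; then
        v="$v.$PATCHLEVEL"
        if [ -n "$SUBLEVEL" ]; then v="$v.$SUBLEVEL"; fi
    fi
    echo "$v$EXTRAVERSION"
}

kernelversion   # → 4.2.0-rc3
```

把 `SUBLEVEL` 或 `EXTRAVERSION` 置空再调用一次,就能看到各分量是怎么按条件拼进版本号的。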
|
||||
|
||||
接下来我们会看到很多`ifeq` 条件判断语句,它们负责检查传给`make` 的参数。内核的`Makefile` 提供了一个特殊的编译选项`make help` ,这个选项可以生成所有的可用目标和一些能传给`make` 的有效的命令行参数。举个例子,`make V=1` 会在构建过程中输出详细的编译信息,第一个`ifeq` 就是检查传递给make的`V=n` 选项。
|
||||
|
||||
```Makefile
|
||||
ifeq ("$(origin V)", "command line")
|
||||
KBUILD_VERBOSE = $(V)
|
||||
endif
|
||||
ifndef KBUILD_VERBOSE
|
||||
KBUILD_VERBOSE = 0
|
||||
endif
|
||||
|
||||
ifeq ($(KBUILD_VERBOSE),1)
|
||||
quiet =
|
||||
Q =
|
||||
else
|
||||
quiet=quiet_
|
||||
Q = @
|
||||
endif
|
||||
|
||||
export quiet Q KBUILD_VERBOSE
|
||||
```
|
||||
|
||||
如果`V=n` 这个选项传给了`make` ,系统就会给变量`KBUILD_VERBOSE` 赋上`V` 的值,否则的话`KBUILD_VERBOSE` 就会为`0`。然后系统会检查`KBUILD_VERBOSE` 的值,以此来决定`quiet` 和`Q` 的值。符号`@` 控制命令的输出:如果它被放在一个命令之前,我们在终端看到的将是`CC scripts/mod/empty.o` 这样的简短提示,而不是完整的编译命令(注:CC 在makefile 中一般都是编译命令)。最后系统导出了所有这些变量。下一个`ifeq` 语句检查的是传递给`make` 的选项`O=/dir`,这个选项允许在指定的目录`dir` 输出所有的结果文件:
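`V` 与 `Q` 的配合效果可以用一个简化的 shell 草例来体会(仅为示意,并非内核的实际实现):

```shell
# 模拟 kbuild 的输出控制:V=1 时回显完整命令(相当于没有 @ 前缀),
# 否则只打印简短的 "  CC      文件名" 提示
V=${V:-0}

run_cc() {
    if [ "$V" = "1" ]; then
        echo "gcc -c -o $1"
    else
        echo "  CC      $1"
    fi
}

run_cc scripts/mod/empty.o   # 安静模式,输出:  CC      scripts/mod/empty.o
V=1
run_cc scripts/mod/empty.o   # 详细模式,回显完整的编译命令
```

这正是 `make V=1` 能看到每条实际命令的原因:详细模式下命令前不再加 `@`,make 会把命令本身回显出来。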
|
||||
|
||||
```Makefile
|
||||
ifeq ($(KBUILD_SRC),)
|
||||
|
||||
ifeq ("$(origin O)", "command line")
|
||||
KBUILD_OUTPUT := $(O)
|
||||
endif
|
||||
|
||||
ifneq ($(KBUILD_OUTPUT),)
|
||||
saved-output := $(KBUILD_OUTPUT)
|
||||
KBUILD_OUTPUT := $(shell mkdir -p $(KBUILD_OUTPUT) && cd $(KBUILD_OUTPUT) \
|
||||
&& /bin/pwd)
|
||||
$(if $(KBUILD_OUTPUT),, \
|
||||
$(error failed to create output directory "$(saved-output)"))
|
||||
|
||||
sub-make: FORCE
|
||||
$(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \
|
||||
-f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS))
|
||||
|
||||
skip-makefile := 1
|
||||
endif # ifneq ($(KBUILD_OUTPUT),)
|
||||
endif # ifeq ($(KBUILD_SRC),)
|
||||
```
|
||||
|
||||
系统会检查变量`KBUILD_SRC`,如果他是空的(第一次执行makefile 时总是空的),并且变量`KBUILD_OUTPUT` 被设成了选项`O` 的值(如果这个选项被传进来了),那么这个值就会用来代表内核源码的顶层目录。下一步会检查变量`KBUILD_OUTPUT` ,如果之前设置过这个变量,那么接下来会做以下几件事:
|
||||
|
||||
* 将变量`KBUILD_OUTPUT` 的值保存到临时变量`saved-output`;
|
||||
* 尝试创建输出目录;
|
||||
* 检查创建的输出目录,如果失败了就打印错误;
|
||||
* 如果成功创建了输出目录,那么就在新目录重新执行`make` 命令(参见选项`-C`)。
|
||||
|
||||
下一个`ifeq` 语句会检查传递给make 的选项`C` 和`M`:
|
||||
|
||||
```Makefile
|
||||
ifeq ("$(origin C)", "command line")
|
||||
KBUILD_CHECKSRC = $(C)
|
||||
endif
|
||||
ifndef KBUILD_CHECKSRC
|
||||
KBUILD_CHECKSRC = 0
|
||||
endif
|
||||
|
||||
ifeq ("$(origin M)", "command line")
|
||||
KBUILD_EXTMOD := $(M)
|
||||
endif
|
||||
```
|
||||
|
||||
第一个选项`C` 会告诉`makefile` 需要使用环境变量`$CHECK` 提供的工具来检查全部`c` 代码,默认情况下会使用[sparse](https://en.wikipedia.org/wiki/Sparse)。第二个选项`M` 会用来编译外部模块(本文不做讨论)。设置好这两个变量之后,系统会再检查变量`KBUILD_SRC`,如果`KBUILD_SRC` 没有被设置,系统会设置变量`srctree` 为`.`:
|
||||
|
||||
```Makefile
|
||||
ifeq ($(KBUILD_SRC),)
|
||||
srctree := .
|
||||
endif
|
||||
|
||||
objtree := .
|
||||
src := $(srctree)
|
||||
obj := $(objtree)
|
||||
|
||||
export srctree objtree VPATH
|
||||
```
|
||||
|
||||
这将会告诉`Makefile` 内核的源码树就在执行make 命令的目录。然后要设置`objtree` 和其他变量为执行make 命令的目录,并且将这些变量导出。接着就是要获取`SUBARCH` 的值,这个变量代表了当前的系统架构(注:一般都指CPU 架构):
|
||||
|
||||
```Makefile
|
||||
SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
|
||||
-e s/sun4u/sparc64/ \
|
||||
-e s/arm.*/arm/ -e s/sa110/arm/ \
|
||||
-e s/s390x/s390/ -e s/parisc64/parisc/ \
|
||||
-e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
|
||||
-e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
|
||||
```
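可以单独试一下这条 `sed` 管道对几个常见的 `uname -m` 输出的映射效果(下面只摘取了其中几条规则):

```shell
# 摘取根 Makefile 里 SUBARCH 的部分 sed 规则,演示 uname -m 输出的归一化
map_arch() {
    echo "$1" | sed -e 's/i.86/x86/' -e 's/x86_64/x86/' \
                    -e 's/arm.*/arm/' -e 's/aarch64.*/arm64/'
}

map_arch x86_64    # → x86
map_arch i686      # → x86
map_arch armv7l    # → arm
map_arch aarch64   # → arm64
```

也就是说,不同的 32 位 x86 变体、各种 ARM 板子的机器名,最终都被折叠成内核源码树里 `arch/` 目录所使用的少数几个架构名。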
|
||||
|
||||
如你所见,系统执行[uname](https://en.wikipedia.org/wiki/Uname) 得到机器、操作系统和架构的信息。因为我们得到的是`uname` 的输出,所以我们需要做一些处理再赋给变量`SUBARCH` 。获得`SUBARCH` 之后就要设置`SRCARCH` 和`hdr-arch`,`SRCARCH` 提供了硬件架构相关代码的目录,`hdr-arch` 提供了相关头文件的目录:
|
||||
|
||||
```Makefile
|
||||
ifeq ($(ARCH),i386)
|
||||
SRCARCH := x86
|
||||
endif
|
||||
ifeq ($(ARCH),x86_64)
|
||||
SRCARCH := x86
|
||||
endif
|
||||
|
||||
hdr-arch := $(SRCARCH)
|
||||
```
|
||||
|
||||
注意:`ARCH` 是`SUBARCH` 的别名。如果没有设置过代表内核配置文件路径的变量`KCONFIG_CONFIG`,下一步系统会设置它,默认情况下就是`.config` :
|
||||
|
||||
```Makefile
|
||||
KCONFIG_CONFIG ?= .config
|
||||
export KCONFIG_CONFIG
|
||||
```
|
||||
以及编译内核过程中要用到的[shell](https://en.wikipedia.org/wiki/Shell_%28computing%29):
|
||||
|
||||
```Makefile
|
||||
CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \
|
||||
else if [ -x /bin/bash ]; then echo /bin/bash; \
|
||||
else echo sh; fi ; fi)
|
||||
```
|
||||
|
||||
接下来就要设置一组和编译内核的编译器相关的变量。我们会设置主机的`C` 和`C++` 的编译器及相关配置项:
|
||||
|
||||
```Makefile
|
||||
HOSTCC = gcc
|
||||
HOSTCXX = g++
|
||||
HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -std=gnu89
|
||||
HOSTCXXFLAGS = -O2
|
||||
```
|
||||
|
||||
下一步会去适配代表编译器的变量`CC`,那为什么还要`HOST*` 这些选项呢?这是因为`CC` 是编译内核过程中要使用的目标架构的编译器,但是`HOSTCC` 是要被用来编译一组`host` 程序的(下面我们就会看到)。然后我们就看看变量`KBUILD_MODULES` 和`KBUILD_BUILTIN` 的定义,这两个变量决定了我们要编译什么东西(内核、模块还是其他):
|
||||
|
||||
```Makefile
|
||||
KBUILD_MODULES :=
|
||||
KBUILD_BUILTIN := 1
|
||||
|
||||
ifeq ($(MAKECMDGOALS),modules)
|
||||
KBUILD_BUILTIN := $(if $(CONFIG_MODVERSIONS),1)
|
||||
endif
|
||||
```
|
||||
|
||||
在这我们可以看到这些变量的定义,并且,如果们仅仅传递了`modules` 给`make`,变量`KBUILD_BUILTIN` 会依赖于内核配置选项`CONFIG_MODVERSIONS`。下一步操作是引入下面的文件:
|
||||
|
||||
```Makefile
|
||||
include scripts/Kbuild.include
|
||||
```
|
||||
|
||||
文件`kbuild` ,[Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt) 或者又叫做 `Kernel Build System`是一个用来管理构建内核和模块的特殊框架。`kbuild` 文件的语法与makefile 一样。文件[scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) 为`kbuild` 系统同提供了一些原生的定义。因为我们包含了这个`kbuild` 文件,我们可以看到和不同工具关联的这些变量的定义,这些工具会在内核和模块编译过程中被使用(比如链接器、编译器、二进制工具包[binutils](http://www.gnu.org/software/binutils/),等等):
|
||||
|
||||
```Makefile
|
||||
AS = $(CROSS_COMPILE)as
|
||||
LD = $(CROSS_COMPILE)ld
|
||||
CC = $(CROSS_COMPILE)gcc
|
||||
CPP = $(CC) -E
|
||||
AR = $(CROSS_COMPILE)ar
|
||||
NM = $(CROSS_COMPILE)nm
|
||||
STRIP = $(CROSS_COMPILE)strip
|
||||
OBJCOPY = $(CROSS_COMPILE)objcopy
|
||||
OBJDUMP = $(CROSS_COMPILE)objdump
|
||||
AWK = awk
|
||||
...
|
||||
...
|
||||
...
|
||||
```
|
||||
|
||||
在这些定义好的变量后面,我们又定义了两个变量:`USERINCLUDE` 和`LINUXINCLUDE`。他们包含了头文件的路径(第一个是给用户用的,第二个是给内核用的):
|
||||
|
||||
```Makefile
|
||||
USERINCLUDE := \
|
||||
-I$(srctree)/arch/$(hdr-arch)/include/uapi \
|
||||
-Iarch/$(hdr-arch)/include/generated/uapi \
|
||||
-I$(srctree)/include/uapi \
|
||||
-Iinclude/generated/uapi \
|
||||
-include $(srctree)/include/linux/kconfig.h
|
||||
|
||||
LINUXINCLUDE := \
|
||||
-I$(srctree)/arch/$(hdr-arch)/include \
|
||||
...
|
||||
```
|
||||
|
||||
以及标准的C 编译器标志:
|
||||
```Makefile
|
||||
KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
|
||||
-fno-strict-aliasing -fno-common \
|
||||
-Werror-implicit-function-declaration \
|
||||
-Wno-format-security \
|
||||
-std=gnu89
|
||||
```
|
||||
|
||||
这并不是最终确定的编译器标志,他们还可以在其他makefile 里面更新(比如`arch/` 里面的kbuild)。变量定义完之后,全部会被导出供其他makefile 使用。下面的两个变量`RCS_FIND_IGNORE` 和 `RCS_TAR_IGNORE` 包含了被版本控制系统忽略的文件:
|
||||
|
||||
```Makefile
|
||||
export RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o \
|
||||
-name CVS -o -name .pc -o -name .hg -o -name .git \) \
|
||||
-prune -o
|
||||
export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \
|
||||
--exclude CVS --exclude .pc --exclude .hg --exclude .git
|
||||
```
|
||||
|
||||
这就是全部了,我们已经完成了所有的准备工作,下一步就是如何构建`vmlinux` 了。
|
||||
|
||||
直面构建内核
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
现在我们已经完成了所有的准备工作,根makefile(注:内核根目录下的makefile)的下一步工作就是和编译内核相关的了。在我们执行`make` 命令之前,我们不会在终端看到任何东西。但是现在编译的第一步开始了,这里我们需要从内核根makefile的的[598](https://github.com/torvalds/linux/blob/master/Makefile#L598) 行开始,这里可以看到目标`vmlinux`:
|
||||
|
||||
```Makefile
|
||||
all: vmlinux
|
||||
include arch/$(SRCARCH)/Makefile
|
||||
```
|
||||
|
||||
不要操心我们略过的从`export RCS_FIND_IGNORE.....` 到`all: vmlinux.....` 这一部分makefile 代码,他们只是负责根据各种配置文件生成不同目标内核的,因为之前我就说了这一部分我们只讨论构建内核的通用途径。
|
||||
|
||||
目标`all:` 是在命令行如果不指定具体目标时默认使用的目标。你可以看到这里包含了架构相关的makefile(在这里就指的是[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile))。从这一时刻起,我们会从这个makefile 继续进行下去。如我们所见,目标`all` 依赖于根makefile 后面声明的`vmlinux`:
|
||||
|
||||
```Makefile
|
||||
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE
|
||||
```
|
||||
|
||||
`vmlinux` 是linux 内核的静态链接可执行文件格式。脚本[scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) 把不同的编译好的子模块链接到一起形成了vmlinux。第二个目标是`vmlinux-deps`,它的定义如下:
|
||||
|
||||
```Makefile
|
||||
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN)
|
||||
```
|
||||
|
||||
它是由内核代码下的每个顶级目录的`built-in.o` 组成的。之后我们还会检查内核所有的目录,`kbuild` 会编译各个目录下所有的对应`$obj-y` 的源文件。接着调用`$(LD) -r` 把这些文件合并到一个`built-in.o` 文件里。此时我们还没有`vmlinux-deps`,所以目标`vmlinux` 现在还不会被构建。对我而言`vmlinux-deps` 包含下面的文件:
|
||||
|
||||
```
|
||||
arch/x86/kernel/vmlinux.lds arch/x86/kernel/head_64.o
|
||||
arch/x86/kernel/head64.o arch/x86/kernel/head.o
|
||||
init/built-in.o usr/built-in.o
|
||||
arch/x86/built-in.o kernel/built-in.o
|
||||
mm/built-in.o fs/built-in.o
|
||||
ipc/built-in.o security/built-in.o
|
||||
crypto/built-in.o block/built-in.o
|
||||
lib/lib.a arch/x86/lib/lib.a
|
||||
lib/built-in.o arch/x86/lib/built-in.o
|
||||
drivers/built-in.o sound/built-in.o
|
||||
firmware/built-in.o arch/x86/pci/built-in.o
|
||||
arch/x86/power/built-in.o arch/x86/video/built-in.o
|
||||
net/built-in.o
|
||||
```
|
||||
|
||||
下一个可以被执行的目标如下:
|
||||
|
||||
```Makefile
|
||||
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;
|
||||
$(vmlinux-dirs): prepare scripts
|
||||
$(Q)$(MAKE) $(build)=$@
|
||||
```
|
||||
|
||||
就像我们看到的,`vmlinux-dirs` 依赖于两部分:`prepare` 和`scripts`。第一个`prepare` 定义在内核的根`makefile` 里,准备工作分成三个阶段:
|
||||
|
||||
```Makefile
|
||||
prepare: prepare0
|
||||
prepare0: archprepare FORCE
|
||||
$(Q)$(MAKE) $(build)=.
|
||||
archprepare: archheaders archscripts prepare1 scripts_basic
|
||||
|
||||
prepare1: prepare2 $(version_h) include/generated/utsrelease.h \
|
||||
include/config/auto.conf
|
||||
$(cmd_crmodverdir)
|
||||
prepare2: prepare3 outputmakefile asm-generic
|
||||
```
|
||||
|
||||
第一个`prepare0` 展开到`archprepare` ,后者又展开到`archheaders` 和`archscripts`,这两个目标定义在`x86_64` 相关的[Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)。让我们看看这个文件。`x86_64` 特定的makefile 从变量定义开始,这些变量都和特定架构的配置文件([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs) 等等)有关联。变量定义之后,这个makefile 定义了编译[16-bit](https://en.wikipedia.org/wiki/Real_mode) 代码的编译选项,以及根据变量`BITS` 的值选择汇编代码、链接器等工具的参数:如果是`32`,对应的就是`i386`,而`64` 对应的是`x86_64`(全部的定义都可以在[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)找到)。这个makefile 里的第一个目标是`archheaders`,它用来生成系统调用列表(syscall table):
|
||||
|
||||
```Makefile
|
||||
archheaders:
|
||||
$(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all
|
||||
```
|
||||
|
||||
这个makefile 里第二个目标就是`archscripts`:
|
||||
|
||||
```Makefile
|
||||
archscripts: scripts_basic
|
||||
$(Q)$(MAKE) $(build)=arch/x86/tools relocs
|
||||
```
|
||||
|
||||
我们可以看到`archscripts` 是依赖于根[Makefile](https://github.com/torvalds/linux/blob/master/Makefile)里的`scripts_basic` 。首先我们可以看出`scripts_basic` 是按照[scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) 的makefile 执行make 的:
|
||||
|
||||
```Makefile
|
||||
scripts_basic:
|
||||
$(Q)$(MAKE) $(build)=scripts/basic
|
||||
```
|
||||
|
||||
`scripts/basic/Makefile` 包含了编译两个主机程序`fixdep` 和`bin2c` 的目标:
|
||||
|
||||
```Makefile
|
||||
hostprogs-y := fixdep
|
||||
hostprogs-$(CONFIG_BUILD_BIN2C) += bin2c
|
||||
always := $(hostprogs-y)
|
||||
|
||||
$(addprefix $(obj)/,$(filter-out fixdep,$(always))): $(obj)/fixdep
|
||||
```
|
||||
|
||||
第一个工具是`fixdep`:用来优化[gcc](https://gcc.gnu.org/) 生成的依赖列表,然后在重新编译源文件的时候告诉make。第二个工具是`bin2c`,它依赖于内核配置选项`CONFIG_BUILD_BIN2C`,是一个用来将标准输入(stdin)收到的二进制流通过标准输出(stdout)转换成C 头文件的非常小的C 程序。你可能注意到这里有些奇怪的标志,如`hostprogs-y` 等。这些标志用在所有的`kbuild` 文件中,更多的信息你可以从[documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt) 获得。在我们的例子中,`hostprogs-y` 告诉`kbuild` 这里有个名为`fixdep` 的程序,这个程序会由和`Makefile` 相同目录的`fixdep.c` 编译而来。执行make 之后,终端的第一个输出就是`kbuild` 的结果:
|
||||
|
||||
```
|
||||
$ make
|
||||
HOSTCC scripts/basic/fixdep
|
||||
```
|
||||
|
||||
当目标`script_basic` 被执行,目标`archscripts` 就会make [arch/x86/tools](https://github.com/torvalds/linux/blob/master/arch/x86/tools/Makefile) 下的makefile 和目标`relocs`:
|
||||
|
||||
```Makefile
|
||||
$(Q)$(MAKE) $(build)=arch/x86/tools relocs
|
||||
```
|
||||
|
||||
代码`relocs_32.c` 和`relocs_64.c` 包含了[重定位](https://en.wikipedia.org/wiki/Relocation_%28computing%29) 的信息,它们将会被编译,这可以在`make` 的输出中看到:
|
||||
|
||||
```Makefile
|
||||
HOSTCC arch/x86/tools/relocs_32.o
|
||||
HOSTCC arch/x86/tools/relocs_64.o
|
||||
HOSTCC arch/x86/tools/relocs_common.o
|
||||
HOSTLD arch/x86/tools/relocs
|
||||
```
|
||||
|
||||
在编译完`relocs.c` 之后会检查`version.h`:
|
||||
|
||||
```Makefile
|
||||
$(version_h): $(srctree)/Makefile FORCE
|
||||
$(call filechk,version.h)
|
||||
$(Q)rm -f $(old_version_h)
|
||||
```
|
||||
|
||||
我们可以在输出看到它:
|
||||
|
||||
```
|
||||
CHK include/config/kernel.release
|
||||
```
|
||||
|
||||
接着,内核根Makefile 会使用目标`asm-generic` 在`arch/x86/include/generated/asm` 目录下构建`generic` 汇编头文件。在目标`asm-generic` 之后,`archprepare` 就完成了,所以目标`prepare0` 会接着被执行,如我上面所写:
|
||||
|
||||
```Makefile
|
||||
prepare0: archprepare FORCE
|
||||
$(Q)$(MAKE) $(build)=.
|
||||
```
|
||||
|
||||
注意`build`,它是定义在文件[scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include),内容是这样的:
|
||||
|
||||
```Makefile
|
||||
build := -f $(srctree)/scripts/Makefile.build obj
|
||||
```
|
||||
|
||||
或者在我们的例子中,他就是当前源码目录路径——`.`:
|
||||
```Makefile
|
||||
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.build obj=.
|
||||
```
|
||||
|
||||
参数`obj` 会告诉脚本[scripts/Makefile.build](https://github.com/torvalds/linux/blob/master/scripts/Makefile.build) 哪些目录包含`kbuild` 文件,脚本以此来寻找各个`kbuild` 文件:
|
||||
|
||||
```Makefile
|
||||
include $(kbuild-file)
|
||||
```
|
||||
|
||||
然后根据这个构建目标。我们这里`.` 包含了[Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild),就用这个文件来生成`kernel/bounds.s` 和`arch/x86/kernel/asm-offsets.s`。这样目标`prepare` 就完成了它的工作。`vmlinux-dirs` 也依赖于第二个目标——`scripts` ,`scripts`会编译接下来的几个程序:`filealias`,`mk_elfconfig`,`modpost`等等。`scripts/host-programs` 编译完之后,我们的目标`vmlinux-dirs` 就可以开始编译了。第一步,我们先来理解一下`vmlinux-dirs` 都包含了那些东西。在我们的例子中它包含了接下来要使用的内核目录的路径:
```
init usr arch/x86 kernel mm fs ipc security crypto block
drivers sound firmware arch/x86/pci arch/x86/power
arch/x86/video net lib arch/x86/lib
```

我们可以在内核的根 [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 里找到 `vmlinux-dirs` 的定义:

```Makefile
vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
	$(core-y) $(core-m) $(drivers-y) $(drivers-m) \
	$(net-y) $(net-m) $(libs-y) $(libs-m)))

init-y := init/
drivers-y := drivers/ sound/ firmware/
net-y := net/
libs-y := lib/
...
...
...
```

这里我们借助函数 `patsubst` 和 `filter` 去掉了每个目录路径末尾的 `/` 符号,并把结果放到 `vmlinux-dirs` 里。这样我们就有了 `vmlinux-dirs` 里的目录列表,以及下面的代码:

```Makefile
$(vmlinux-dirs): prepare scripts
	$(Q)$(MAKE) $(build)=$@
```

符号 `$@` 在这里代表 `vmlinux-dirs` 中的每个目录,这就表明构建会依次进入 `vmlinux-dirs` 的全部目录(具体取决于配置),并在对应的目录下执行 `make` 命令。我们可以在输出中看到结果:
```
CC      init/main.o
CHK     include/generated/compile.h
CC      init/version.o
CC      init/do_mounts.o
...
CC      arch/x86/crypto/glue_helper.o
AS      arch/x86/crypto/aes-x86_64-asm_64.o
CC      arch/x86/crypto/aes_glue.o
...
AS      arch/x86/entry/entry_64.o
AS      arch/x86/entry/thunk_64.o
CC      arch/x86/entry/syscall_64.o
```

每个目录下的源代码将会被编译,并且链接到 `built-in.o` 里:

```
$ find . -name built-in.o
./arch/x86/crypto/built-in.o
./arch/x86/crypto/sha-mb/built-in.o
./arch/x86/net/built-in.o
./init/built-in.o
./usr/built-in.o
...
...
```

好了,所有的 `built-in.o` 都构建完了,现在我们回到目标 `vmlinux` 上。你应该还记得,目标 `vmlinux` 在内核的根 makefile 里。在链接 `vmlinux` 之前,系统还会构建 [samples](https://github.com/torvalds/linux/tree/master/samples)、[Documentation](https://github.com/torvalds/linux/tree/master/Documentation) 等等,但是如上文所述,我不会在本文描述这些。
```Makefile
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE
...
...
	+$(call if_changed,link-vmlinux)
```

你可以看到,构建 `vmlinux` 主要是调用脚本 [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh),它把所有的 `built-in.o` 链接成一个静态可执行文件,并生成 [System.map](https://en.wikipedia.org/wiki/System.map)。最后我们来看看下面的输出:

```
LINK    vmlinux
LD      vmlinux.o
MODPOST vmlinux.o
GEN     .version
CHK     include/generated/compile.h
UPD     include/generated/compile.h
CC      init/version.o
LD      init/built-in.o
KSYM    .tmp_kallsyms1.o
KSYM    .tmp_kallsyms2.o
LD      vmlinux
SORTEX  vmlinux
SYSMAP  System.map
```

以及内核源码树根目录下的 `vmlinux` 和 `System.map`:
```
$ ls vmlinux System.map
System.map  vmlinux
```

这就是全部了,`vmlinux` 构建好了,下一步就是创建 [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage)。

制作 bzImage
--------------------------------------------------------------------------------

`bzImage` 就是压缩了的 linux 内核镜像。我们可以在构建了 `vmlinux` 之后通过执行 `make bzImage` 获得 `bzImage`;也可以不带任何参数地执行 `make` 来生成它,因为它是 [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) 里预定义的默认镜像:

```Makefile
all: bzImage
```

让我们看看这个目标,它能帮助我们理解这个镜像是怎么构建的。我已经说过了,`bzImage` 是定义在 [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) 里的,定义如下:

```Makefile
bzImage: vmlinux
	$(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
	$(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot
	$(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@
```

在这里我们可以看到,首先对 `boot` 目录执行了 `make`,在我们的例子里 `boot` 是:

```Makefile
boot := arch/x86/boot
```

现在的主要目标是编译目录 `arch/x86/boot` 和 `arch/x86/boot/compressed` 下的代码,构建 `setup.bin` 和 `vmlinux.bin`,然后用这两个文件生成 `bzImage`。第一个目标是定义在 [arch/x86/boot/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/Makefile) 里的 `$(obj)/setup.elf`:
```Makefile
$(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
	$(call if_changed,ld)
```

目录 `arch/x86/boot` 下已经有了链接脚本 `setup.ld`,而变量 `SETUP_OBJS` 则会展开为 `boot` 目录下的全部源文件。我们可以看看第一个输出:

```
AS      arch/x86/boot/bioscall.o
CC      arch/x86/boot/cmdline.o
AS      arch/x86/boot/copy.o
HOSTCC  arch/x86/boot/mkcpustr
CPUSTR  arch/x86/boot/cpustr.h
CC      arch/x86/boot/cpu.o
CC      arch/x86/boot/cpuflags.o
CC      arch/x86/boot/cpucheck.o
CC      arch/x86/boot/early_serial_console.o
CC      arch/x86/boot/edd.o
```

下一个源码文件是 [arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S),但是我们现在还不能编译它,因为这个目标依赖于下面两个头文件:

```Makefile
$(obj)/header.o: $(obj)/voffset.h $(obj)/zoffset.h
```

第一个头文件 `voffset.h` 是用 `sed` 脚本生成的,包含了用 `nm` 工具从 `vmlinux` 中获取的两个地址:

```C
#define VO__end 0xffffffff82ab0000
#define VO__text 0xffffffff81000000
```
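下面用一个可运行的 shell 片段模拟这一转换过程。其中 `nm` 风格的符号行是按上面引用的两个地址伪造的,sed 表达式也只是对内核 `sed-voffset` 脚本的简化示意,并非其原文:

```shell
# 伪造两行 `nm vmlinux` 风格的符号输出(地址取自上文)。
nm_output='ffffffff81000000 T _text
ffffffff82ab0000 B _end'

# 简化版的 sed-voffset 变换:只保留形如 _text/_end 的符号,
# 并把每一行改写成 C 语言的 #define。
voffset=$(printf '%s\n' "$nm_output" \
  | sed -n 's/^\([0-9a-f]*\) [A-Z] \(_[a-z]*\)$/#define VO_\2 0x\1/p')
printf '%s\n' "$voffset"
```

输出即与 `voffset.h` 中等价的两行宏定义(顺序随输入而定)。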
这两个地址分别是内核的结束地址和起始地址。第二个头文件 `zoffset.h` 则依赖于目标 `vmlinux`,这一点可以在 [arch/x86/boot/compressed/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/Makefile) 里看出来:
```Makefile
$(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
	$(call if_changed,zoffset)
```

目标 `$(obj)/compressed/vmlinux` 依赖于变量 `vmlinux-objs-y`,这说明需要编译目录 [arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) 下的源代码,然后生成 `vmlinux.bin`、`vmlinux.bin.bz2`,并编译工具 `mkpiggy`。我们可以在下面的输出中看出来:

```
LDS     arch/x86/boot/compressed/vmlinux.lds
AS      arch/x86/boot/compressed/head_64.o
CC      arch/x86/boot/compressed/misc.o
CC      arch/x86/boot/compressed/string.o
CC      arch/x86/boot/compressed/cmdline.o
OBJCOPY arch/x86/boot/compressed/vmlinux.bin
BZIP2   arch/x86/boot/compressed/vmlinux.bin.bz2
HOSTCC  arch/x86/boot/compressed/mkpiggy
```

`vmlinux.bin` 是去掉了调试信息和注释的 `vmlinux` 二进制文件;`vmlinux.bin.all` 则包含了 `vmlinux.bin` 和 `vmlinux.relocs`(注:即 `vmlinux` 的重定位信息,由上文所述的 `relocs` 程序处理 `vmlinux` 得到),再加上占 `u32`(注:即 4 字节)的长度信息,压缩之后就是 `vmlinux.bin.bz2`。现在我们已经有了这些文件,汇编文件 `piggy.S` 将会由 `mkpiggy` 生成,然后被编译:

```
MKPIGGY arch/x86/boot/compressed/piggy.S
AS      arch/x86/boot/compressed/piggy.o
```

这个汇编文件包含了计算得来的压缩内核的偏移信息。处理完这个汇编文件,我们就可以看到 `zoffset.h` 生成了:

```
ZOFFSET arch/x86/boot/zoffset.h
```

现在 `zoffset.h` 和 `voffset.h` 都已经生成了,[arch/x86/boot](https://github.com/torvalds/linux/tree/master/arch/x86/boot/) 里的源文件可以继续编译:
```
AS      arch/x86/boot/header.o
CC      arch/x86/boot/main.o
CC      arch/x86/boot/mca.o
CC      arch/x86/boot/memory.o
CC      arch/x86/boot/pm.o
AS      arch/x86/boot/pmjump.o
CC      arch/x86/boot/printf.o
CC      arch/x86/boot/regs.o
CC      arch/x86/boot/string.o
CC      arch/x86/boot/tty.o
CC      arch/x86/boot/video.o
CC      arch/x86/boot/video-mode.o
CC      arch/x86/boot/video-vga.o
CC      arch/x86/boot/video-vesa.o
CC      arch/x86/boot/video-bios.o
```

所有的源代码会被编译,它们最终会被链接到 `setup.elf`:

```
LD      arch/x86/boot/setup.elf
```

或者:

```
ld -m elf_x86_64 -T arch/x86/boot/setup.ld arch/x86/boot/a20.o arch/x86/boot/bioscall.o arch/x86/boot/cmdline.o arch/x86/boot/copy.o arch/x86/boot/cpu.o arch/x86/boot/cpuflags.o arch/x86/boot/cpucheck.o arch/x86/boot/early_serial_console.o arch/x86/boot/edd.o arch/x86/boot/header.o arch/x86/boot/main.o arch/x86/boot/mca.o arch/x86/boot/memory.o arch/x86/boot/pm.o arch/x86/boot/pmjump.o arch/x86/boot/printf.o arch/x86/boot/regs.o arch/x86/boot/string.o arch/x86/boot/tty.o arch/x86/boot/video.o arch/x86/boot/video-mode.o arch/x86/boot/version.o arch/x86/boot/video-vga.o arch/x86/boot/video-vesa.o arch/x86/boot/video-bios.o -o arch/x86/boot/setup.elf
```

最后两步是:创建包含目录 `arch/x86/boot/*` 下编译结果的 `setup.bin`:

```
objcopy -O binary arch/x86/boot/setup.elf arch/x86/boot/setup.bin
```

以及从压缩后的 `vmlinux`(即 `arch/x86/boot/compressed/vmlinux`)生成 `vmlinux.bin`:

```
objcopy -O binary -R .note -R .comment -S arch/x86/boot/compressed/vmlinux arch/x86/boot/vmlinux.bin
```

最后,我们编译主机程序 [arch/x86/boot/tools/build.c](https://github.com/torvalds/linux/blob/master/arch/x86/boot/tools/build.c),它将会把 `setup.bin` 和 `vmlinux.bin` 打包成 `bzImage`:

```
arch/x86/boot/tools/build arch/x86/boot/setup.bin arch/x86/boot/vmlinux.bin arch/x86/boot/zoffset.h arch/x86/boot/bzImage
```

实际上 `bzImage` 就是把 `setup.bin` 和 `vmlinux.bin` 连接到一起。最终我们会看到输出结果,就和所有从源码编译过内核的人看到的一样:
```
Setup is 16268 bytes (padded to 16384 bytes).
System is 4704 kB
CRC 94a88f9a
Kernel: arch/x86/boot/bzImage is ready (#5)
```
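上面 Setup 部分的尺寸与补齐,可以用一个玩具化的 shell 片段来示意(文件尺寸取自上面的输出;真实的 `build` 工具除了拼接之外还会修补 setup 头部的若干字段,这里只演示“补齐到扇区边界再拼接”这一核心思路):

```shell
# 生成两个占位文件:16268 字节的 setup 和 4 KB 的“压缩内核”。
dd if=/dev/zero of=setup.bin bs=16268 count=1 2>/dev/null
dd if=/dev/zero of=vmlinux.bin bs=4096 count=1 2>/dev/null

setup_size=$(wc -c < setup.bin)
padded=$(( (setup_size + 511) / 512 * 512 ))   # 向上取整到 512 字节扇区边界

cp setup.bin bzImage
dd if=/dev/zero bs=$((padded - setup_size)) count=1 >> bzImage 2>/dev/null
cat vmlinux.bin >> bzImage

echo "Setup is $setup_size bytes (padded to $padded bytes)."
```

最后一行 echo 的结果与上面输出中的第一行一致:`Setup is 16268 bytes (padded to 16384 bytes).`。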
全部结束。

结论
================================================================================

这就是本文的最后一节。本文我们了解了编译内核的全部步骤:从执行 `make` 命令开始,到最后生成 `bzImage`。我知道,linux 内核的 makefiles 和构建 linux 的过程第一眼看起来可能比较迷惑,但是这并不是很难。希望本文可以帮助你理解构建 linux 内核的整个流程。

链接
================================================================================

* [GNU make util](https://en.wikipedia.org/wiki/Make_%28software%29)
* [Linux kernel top Makefile](https://github.com/torvalds/linux/blob/master/Makefile)
* [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler)
* [Ctags](https://en.wikipedia.org/wiki/Ctags)
* [sparse](https://en.wikipedia.org/wiki/Sparse)
* [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage)
* [uname](https://en.wikipedia.org/wiki/Uname)
* [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29)
* [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt)
* [binutils](http://www.gnu.org/software/binutils/)
* [gcc](https://gcc.gnu.org/)
* [Documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt)
* [System.map](https://en.wikipedia.org/wiki/System.map)
* [Relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29)

--------------------------------------------------------------------------------

via: https://github.com/0xAX/linux-insides/blob/master/Misc/how_kernel_compiled.md

译者:[oska874](https://github.com/oska874)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
在 Ubuntu 15.04 系统中安装 Logwatch
================================================================================

大家好,今天我们会讲述如何在 Ubuntu 15.04 操作系统上安装 Logwatch 软件,它也可以安装在任意的 Linux 系统和类 Unix 系统上。Logwatch 是一款可定制的日志分析和日志监控报告生成系统,它可以根据一段时间的日志文件生成您所希望关注的详细报告。它具有易安装、易配置、可审查等特性,同时对其提供的数据的安全性上也有一些保障措施。Logwatch 会扫描重要的操作系统组件(像 SSH、网站服务等)的日志文件,然后生成用户所关心的有价值的条目汇总报告。

### 预安装设置 ###

我们会使用 Ubuntu 15.04 版本的操作系统来部署 Logwatch,所以安装 Logwatch 之前,要确保系统上邮件服务设置是正常可用的,因为它会每天以日报的形式把生成的报告通过邮件发送给管理员。您的系统的软件源也应该设置可用,以便可以从通用软件源来安装 Logwatch。

然后打开您 ubuntu 系统的终端,用 root 账号登录,在进入 Logwatch 的安装操作前,先更新您的系统软件包。

    root@ubuntu-15:~# apt-get update

### 安装 Logwatch ###

只要您的系统已经更新,并且满足前面说的先决条件,就可以在您的机器上输入如下命令来安装 Logwatch。

    root@ubuntu-15:~# apt-get install logwatch

在安装过程中,一旦您按提示按下“Y”键同意对系统的修改,Logwatch 将会开始安装一些额外的必须软件包。

在安装过程中会根据您机器上的邮件服务器设置情况弹出 Postfix 设置的配置界面。在这篇教程中我们使用最容易的“仅本地”选项。根据您的基础设施情况也可以选择其它的可选项,然后点击“确定”继续。

![Potfix Configurations](http://blog.linoxide.com/wp-content/uploads/2015/08/21.png)

随后您得选择邮件服务器名,这个邮件服务器名也会被其它程序使用,所以它应该是一个唯一的完全限定域名(FQDN)。

![Postfix Setup](http://blog.linoxide.com/wp-content/uploads/2015/08/31.png)

一旦按下 postfix 配置提示底端的“OK”,安装进程就会用 Postfix 的默认配置完成安装,并且完成 Logwatch 的整个安装。

![Logwatch Completion](http://blog.linoxide.com/wp-content/uploads/2015/08/41.png)

您可以在终端下发出如下命令来检查 Postfix 的状态,正常情况下它应该是激活状态。

    root@ubuntu-15:~# service postfix status

![Postfix Status](http://blog.linoxide.com/wp-content/uploads/2015/08/51.png)

要确认 Logwatch 在默认配置下的安装信息,可以如下所示简单地执行“logwatch”命令。

    root@ubuntu-15:~# logwatch

上面命令执行后,会在终端中以文本格式展示生成的报表。

![Logwatch Report](http://blog.linoxide.com/wp-content/uploads/2015/08/61.png)

### 配置 Logwatch ###

在成功安装好 Logwatch 后,我们需要在它的配置文件中做一些修改,配置文件位于如下所示的路径。让我们用文本编辑器打开它,然后按需要做些变动。

    root@ubuntu-15:~# vim /usr/share/logwatch/default.conf/logwatch.conf

**输出/格式化选项**

默认情况下 Logwatch 会以无编码的文本打印到标准输出。要改为以邮件为默认输出方式,需设置“Output = mail”;要改为保存成文件的方式,需设置“Output = file”。您可以根据自己的要求修改这个默认配置。

    Output = stdout

如果使用的是因特网电子邮件配置,要将 HTML 作为默认输出格式,需要修改如下行(其默认值为 text)。

    Format = text

现在增加默认的邮件报告接收人地址,可以是本地账号也可以是完整的邮件地址,都可以写在这一行上。

    MailTo = root
    #MailTo = user@test.com

默认的邮件发送人可以是本地账号,也可以是您需要使用的其它名字。

    # complete email address.
    MailFrom = Logwatch

保存对这个配置文件的修改,至于其它的参数保持默认即可,无需改动。

**调度任务配置**

现在编辑 cron.daily 目录下的“00logwatch”文件,来配置 logwatch 生成的报告要发送的邮件地址。

    root@ubuntu-15:~# vim /etc/cron.daily/00logwatch

在这里您需要用“--mailto user@test.com”来替换掉“--output mail”,然后保存文件。
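这一处修改也可以用 sed 非交互地完成。下面的片段在一个示例副本上演示(副本内容是假设的简化版,实际文件是 /etc/cron.daily/00logwatch,邮件地址也仅为示例,修改前建议先备份):

```shell
# 构造一个只含关键行的示例副本(真实文件内容更多)。
printf '%s\n' '/usr/sbin/logwatch --output mail' > 00logwatch.sample

# 把 "--output mail" 换成 "--mailto <收件地址>"。
sed -i 's/--output mail/--mailto user@test.com/' 00logwatch.sample
cat 00logwatch.sample
```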
![Logwatch Cronjob](http://blog.linoxide.com/wp-content/uploads/2015/08/71.png)

### 生成报告 ###

现在我们在终端中执行“logwatch”命令来生成测试报告,生成的结果会在终端中以文本格式显示出来。

    root@ubuntu-15:~# logwatch

生成的报告开始部分显示的是执行的时间和日期。它包含不同的部分,每个部分以开始标识开始、以结束标识结束,中间是各部分的完整日志信息。

这里演示的是开始标识头的样子,接着显示的是系统上所有已安装软件包的信息,如下所示:

![dpkg status](http://blog.linoxide.com/wp-content/uploads/2015/08/81.png)

接下来的部分显示的日志信息是关于当前系统的登录会话、rsyslog 以及当前和上次的 SSH 连接信息。

![logwatch report](http://blog.linoxide.com/wp-content/uploads/2015/08/9.png)

Logwatch 报告最后显示的是安全方面的 sudo 日志及根目录的磁盘使用情况,如下所示:

![Logwatch end report](http://blog.linoxide.com/wp-content/uploads/2015/08/10.png)

您也可以打开如下的文件来查看生成的 logwatch 报告电子邮件。

    root@ubuntu-15:~# vim /var/mail/root

您会看到发送给配置文件中所配置用户的全部已生成邮件及其投递状态。

### 更多详情 ###

Logwatch 是一款很不错的工具,还有很多东西可以学习,如果您对它的日志监控功能很感兴趣,也可以通过如下所示的简短命令来获得更多帮助。

    root@ubuntu-15:~# man logwatch

上面的命令会打开 logwatch 的用户手册,请仔细阅读,要退出手册的话可以简单地输入“q”。

关于 logwatch 命令的使用,您可以使用如下所示的帮助命令来获得更多的详细信息。

    root@ubuntu-15:~# logwatch --help

### 结论 ###

教程结束,您也学会了如何在 Ubuntu 15.04 上对 Logwatch 进行安装、配置等全部设置。现在您就可以自定义监控您的系统日志,不管是监控所有服务的运行情况,还是对特定的服务在指定的时间发送报告,都可以做到。所以,开始使用这个工具吧,无论何时有问题或想知道更多关于 logwatch 的使用,都可以给我们留言。

--------------------------------------------------------------------------------

via: http://linoxide.com/ubuntu-how-to/install-use-logwatch-ubuntu-15-04/

作者:[Kashif Siddique][a]
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/kashifs/
Linux 有问必答:如何解决 Linux 桌面上的 Wireshark GUI 死机问题
================================================================================

> **问题**: 当我试图在 Ubuntu 上的 Wireshark 中打开一个预先录制的数据包转储文件时,它的 UI 突然死机,在我启动 Wireshark 的终端里出现了下面的错误和警告。我该如何解决这个问题?

Wireshark 是一个基于 GUI 的数据包捕获和嗅探工具。网络管理员、网络安全工程师和开发人员广泛使用该工具完成各种需要数据包级(packet-level)网络分析的任务,例如网络故障排查、漏洞测试、应用程序调试或协议逆向工程。Wireshark 可以实时记录数据包,并通过便捷的图形用户界面浏览它们的协议首部和有效负荷。

![](https://farm1.staticflickr.com/722/20584224675_f4d7a59474_c.jpg)

Wireshark 的 UI(尤其是在 Ubuntu 桌面下运行时)有时会在你加载预先录制的包转储文件后、上下滚动分组列表视图时挂起或冻结,并出现以下错误:

    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject'
    (wireshark:3480): GLib-GObject-CRITICAL **: g_object_set_qdata_full: assertion 'G_IS_OBJECT (object)' failed
    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkRange'
    (wireshark:3480): Gtk-CRITICAL **: gtk_range_get_adjustment: assertion 'GTK_IS_RANGE (range)' failed
    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkOrientable'
    (wireshark:3480): Gtk-CRITICAL **: gtk_orientable_get_orientation: assertion 'GTK_IS_ORIENTABLE (orientable)' failed
    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkScrollbar'
    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkWidget'
    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject'
    (wireshark:3480): GLib-GObject-CRITICAL **: g_object_get_qdata: assertion 'G_IS_OBJECT (object)' failed
    (wireshark:3480): Gtk-CRITICAL **: gtk_widget_set_name: assertion 'GTK_IS_WIDGET (widget)' failed

显然,这个错误是由 Wireshark 和叠加滚动条(overlay scrollbar)之间的一些不兼容造成的,这个问题在最新的 Ubuntu 桌面(例如 Ubuntu 15.04)中还没有被解决。

一种避免 Wireshark UI 卡死的办法就是**暂时禁用叠加滚动条**。有两种方法可以在 Wireshark 中禁用叠加滚动条,这取决于你在桌面上如何启动 Wireshark。

### 命令行解决方法 ###

叠加滚动条可以通过把 **LIBOVERLAY_SCROLLBAR** 环境变量设置为“0”来禁用。

所以,如果你是在终端里用命令行启动 Wireshark 的,你可以按如下方式禁用叠加滚动条。

打开你的 .bashrc 文件,并定义以下 alias。

    alias wireshark="LIBOVERLAY_SCROLLBAR=0 /usr/bin/wireshark"

### 桌面启动解决方法 ###

如果你是使用桌面启动器启动的 Wireshark,你可以编辑它的桌面启动器文件。

    $ sudo vi /usr/share/applications/wireshark.desktop

查找以“Exec”开头的行,并如下更改。

    Exec=env LIBOVERLAY_SCROLLBAR=0 wireshark %f

虽然这种方法对系统里的所有桌面用户都有效,但 Wireshark 升级后这个修改会被覆盖。如果你想保留修改后的 .desktop 文件,如下所示将它复制到你的主目录。

    $ cp /usr/share/applications/wireshark.desktop ~/.local/share/applications/

--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/fix-wireshark-gui-freeze-linux-desktop.html

作者:[Dan Nanni][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://ask.xmodulo.com/author/nanni
如何在 Linux 中安装 Visual Studio Code
================================================================================

大家好,今天我们一起来学习如何在 Linux 发行版中安装 Visual Studio Code。Visual Studio Code 是一款基于 Electron 的代码编辑器,Electron 是一个基于 Chromium 的框架,用于把 io.js 应用发布到桌面系统。Visual Studio Code 是微软开发的、支持包括 Linux 在内的全平台的代码编辑器和文本编辑器。它是免费软件但不开源,在专有软件许可条款下发布。它是非常强大和快速的、适合日常使用的代码编辑器。Visual Studio Code 有很多很酷的功能,例如导航、智能感知支持、语法高亮、括号匹配、自动补全、代码片段、自定义键盘绑定,并且支持多种语言,例如 Python、C++、Jade、PHP、XML、Batch、F#、DockerFile、Coffee Script、Java、HandleBars、R、Objective-C、PowerShell、Luna、Visual Basic、.Net、Asp.Net、C#、JSON、Node.js、Javascript、HTML、CSS、Less、Sass 和 Markdown。Visual Studio Code 集成了包管理器和代码库,并可以构建通用任务来加速每日的工作流。Visual Studio Code 中最受欢迎的是它的调试功能,其中包括处于预览阶段的 Node.js 调试支持。

注意:请注意 Visual Studio Code 只支持 64 位的 Linux 发行版。

下面是在所有 Linux 发行版中安装 Visual Studio Code 的几个简单步骤。

### 1. 下载 Visual Studio Code 软件包 ###

首先,我们要从微软服务器中下载 64 位 Linux 操作系统的 Visual Studio Code 安装包,链接是 [http://go.microsoft.com/fwlink/?LinkID=534108][1]。这里我们使用 wget 下载并保存到 /tmp/vscode 目录。

    # mkdir /tmp/vscode; cd /tmp/vscode/
    # wget https://az764295.vo.msecnd.net/public/0.3.0/VSCode-linux-x64.zip

    --2015-06-24 06:02:54-- https://az764295.vo.msecnd.net/public/0.3.0/VSCode-linux-x64.zip
    Resolving az764295.vo.msecnd.net (az764295.vo.msecnd.net)... 93.184.215.200, 2606:2800:11f:179a:1972:2405:35b:459
    Connecting to az764295.vo.msecnd.net (az764295.vo.msecnd.net)|93.184.215.200|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 64992671 (62M) [application/octet-stream]
    Saving to: ‘VSCode-linux-x64.zip’
    100%[================================================>] 64,992,671 14.9MB/s in 4.1s
    2015-06-24 06:02:58 (15.0 MB/s) - ‘VSCode-linux-x64.zip’ saved [64992671/64992671]

### 2. 提取软件包 ###

现在,下载好 Visual Studio Code 的 zip 压缩包之后,我们要用 unzip 命令来解压它。我们要在终端或者控制台中运行以下命令。

    # unzip /tmp/vscode/VSCode-linux-x64.zip -d /opt/

注意:如果我们还没有安装 unzip,首先需要通过软件包管理器安装它。如果你运行的是 Ubuntu,使用 apt-get;如果运行的是 Fedora 或 CentOS,可以用 dnf 或 yum 安装它。

### 3. 运行 Visual Studio Code ###

提取软件包之后,我们可以直接运行一个名为 Code 的文件来启动 Visual Studio Code。

    # sudo chmod +x /opt/VSCode-linux-x64/Code
    # sudo /opt/VSCode-linux-x64/Code

如果我们想在终端中的任何位置都能启动 Code,就需要创建 /opt/VSCode-linux-x64/Code 的一个链接 /usr/local/bin/code。

    # ln -s /opt/VSCode-linux-x64/Code /usr/local/bin/code

现在,我们就可以在终端中运行以下命令启动 Visual Studio Code 了。

    # code .

### 4. 创建桌面启动程序 ###

下一步,成功提取 Visual Studio Code 软件包之后,我们来创建桌面启动程序,使得可以根据不同的桌面环境从启动器、菜单或桌面启动它。首先我们要复制一个图标文件到 /usr/share/icons/ 目录。

    # cp /opt/VSCode-linux-x64/resources/app/vso.png /usr/share/icons/

然后,我们创建一个扩展名为 .desktop 的桌面启动程序文件。这里我们在 /tmp/vscode/ 目录中使用喜欢的文本编辑器创建名为 visualstudiocode.desktop 的文件。

    # vi /tmp/vscode/visualstudiocode.desktop

然后,粘贴下面的行到那个文件中。

    [Desktop Entry]
    Name=Visual Studio Code
    Comment=Multi-platform code editor for Linux
    Exec=/opt/VSCode-linux-x64/Code
    Icon=/usr/share/icons/vso.png
    Type=Application
    StartupNotify=true
    Categories=TextEditor;Development;Utility;
    MimeType=text/plain;

创建完桌面文件之后,我们把这个桌面文件复制到 /usr/share/applications/ 目录,这样就可以在启动器和菜单中单击启动 Visual Studio Code 了。

    # cp /tmp/vscode/visualstudiocode.desktop /usr/share/applications/

完成之后,我们可以在启动器或者菜单中启动它。
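作为补充,如果启动器中没有出现图标,可以先粗略检查 .desktop 文件是否包含 Desktop Entry 规范要求的键。下面只是一个示意性的 grep 检查,写入的内容是上文条目的一个子集;若系统装有 desktop-file-utils,`desktop-file-validate` 是更严格的校验工具:

```shell
# 写入一个最小化的示例 .desktop 文件(上文完整条目的子集)。
cat > visualstudiocode.desktop <<'EOF'
[Desktop Entry]
Name=Visual Studio Code
Exec=/opt/VSCode-linux-x64/Code
Type=Application
EOF

# 逐个确认必需的键存在。
for key in Name Exec Type; do
    grep -q "^$key=" visualstudiocode.desktop && echo "$key: ok"
done
```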
![Visual Studio Code](http://blog.linoxide.com/wp-content/uploads/2015/06/visual-studio-code.png)

### 在 Ubuntu 中安装 Visual Studio Code ###

要在 Ubuntu 14.04/14.10/15.04 发行版中安装 Visual Studio Code,我们可以使用 Ubuntu Make 0.7。这是在 ubuntu 中安装 Code 最简单的方法,因为我们只需要执行几个命令。首先,我们要在 ubuntu 系统中安装 Ubuntu Make 0.7。要安装它,先要为它添加 PPA,可以通过运行下面的命令完成。

    # add-apt-repository ppa:ubuntu-desktop/ubuntu-make

    This ppa proposes package backport of Ubuntu make for supported releases.
    More info: https://launchpad.net/~ubuntu-desktop/+archive/ubuntu/ubuntu-make
    Press [ENTER] to continue or ctrl-c to cancel adding it
    gpg: keyring `/tmp/tmpv0vf24us/secring.gpg' created
    gpg: keyring `/tmp/tmpv0vf24us/pubring.gpg' created
    gpg: requesting key A1231595 from hkp server keyserver.ubuntu.com
    gpg: /tmp/tmpv0vf24us/trustdb.gpg: trustdb created
    gpg: key A1231595: public key "Launchpad PPA for Ubuntu Desktop" imported
    gpg: no ultimately trusted keys found
    gpg: Total number processed: 1
    gpg: imported: 1 (RSA: 1)
    OK

然后,更新本地软件包索引并安装 ubuntu-make。

    # apt-get update
    # apt-get install ubuntu-make

在我们的 ubuntu 操作系统上安装完 Ubuntu Make 之后,在终端中运行以下命令来安装 Code。

    # umake web visual-studio-code

![Umake Web Code](http://blog.linoxide.com/wp-content/uploads/2015/06/umake-web-code.png)

运行完上面的命令之后,会要求我们输入想要的安装路径。然后,它会请求我们允许在 ubuntu 系统中安装 Visual Studio Code,我们敲击“a”表示同意。之后,它会在 ubuntu 机器上下载和安装 Code。最后,我们可以在启动器或者菜单中启动它。

### 总结 ###

我们已经成功地在 Linux 发行版上安装了 Visual Studio Code。在所有 linux 发行版上安装 Visual Studio Code 的方法都和上面介绍的相似,我们同样可以使用 umake 在 linux 发行版中安装。Umake 是一个用于安装流行的开发工具、IDE 和语言环境的工具。我们可以用 Umake 轻松地安装 Android Studio、Eclipse 和很多其它流行的 IDE。Visual Studio Code 基于 Github 上一个叫 [Electron][2] 的项目,它是 [Atom.io][3] 编辑器的一部分。它有很多 Atom.io 编辑器没有的改进功能。当前 Visual Studio Code 只支持 64 位 linux 操作系统平台。如果你有任何疑问、建议或者反馈,请在下面的评论框中留言,以便我们改进和更新我们的内容。非常感谢!Enjoy :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/install-visual-studio-code-linux/

作者:[Arun Pyasi][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:http://go.microsoft.com/fwlink/?LinkID=534108
[2]:https://github.com/atom/electron
[3]:https://github.com/atom/atom
网络管理命令行工具 nmcli 基础
================================================================================

![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/08/networking1.jpg)

### 介绍 ###

在本教程中,我们会讨论 CentOS / RHEL 7 中的网络管理命令行工具,即 **nmcli**。习惯使用 **ifconfig** 的用户在 CentOS 7 中应该避免使用这个命令。

让我们用 nmcli 工具配置一些网络设置。

### 获取系统中所有接口的地址信息 ###

    [root@localhost ~]# ip addr show

**示例输出:**

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:67:2f:4c brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.51/24 brd 192.168.1.255 scope global eno16777736
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe67:2f4c/64 scope link
           valid_lft forever preferred_lft forever

#### 检索与已连接接口相关的数据包统计 ####

    [root@localhost ~]# ip -s link show eno16777736

**示例输出:**

![unxmen_(011)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0111.png)

#### 得到路由配置 ####

    [root@localhost ~]# ip route

**示例输出:**

    default via 192.168.1.1 dev eno16777736 proto static metric 100
    192.168.1.0/24 dev eno16777736 proto kernel scope link src 192.168.1.51 metric 100

#### 分析主机/网站路径 ####

    [root@localhost ~]# tracepath unixmen.com

输出与 traceroute 类似,但信息更加完整。

![unxmen_0121](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_01211.png)

### nmcli 工具 ###

**nmcli** 是一个非常丰富和灵活的命令行工具。nmcli 中会用到以下两个术语:

- **设备(device)** – 正在使用的网络接口。
- **连接(connection)** – 一组配置设置;一个设备可以有多个连接,并可以在连接之间切换。

#### 找出有多少连接服务于多少设备 ####

    [root@localhost ~]# nmcli connection show

![unxmen_(013)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_013.png)

#### 得到特定连接的详情 ####

    [root@localhost ~]# nmcli connection show eno1

**示例输出:**

![unxmen_(014)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0141.png)

#### 得到网络设备状态 ####

    [root@localhost ~]# nmcli device status

----------

    DEVICE              TYPE      STATE     CONNECTION
    eno16777736         ethernet  connected eno1
    lo                  loopback  unmanaged --
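`nmcli device status` 的输出是规整的列式文本,便于脚本处理。下面的片段用 awk 从上面那份示例输出(以字符串形式内嵌)中筛选出处于 connected 状态的设备,仅作解析思路的示意:

```shell
# 上文示例输出的内嵌副本。
status='DEVICE       TYPE      STATE      CONNECTION
eno16777736  ethernet  connected  eno1
lo           loopback  unmanaged  --'

# 跳过表头(NR > 1),打印第三列为 connected 的设备名。
connected=$(printf '%s\n' "$status" | awk 'NR > 1 && $3 == "connected" { print $1 }')
echo "$connected"
```

这里会输出 `eno16777736`;在真实系统上可以直接把 `nmcli device status` 接到管道里。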
#### 使用“dhcp”创建新的连接 ####

    [root@localhost ~]# nmcli connection add con-name "dhcp" type ethernet ifname eno16777736

这里,

- **connection add** – 添加新的连接
- **con-name** – 连接名
- **type** – 设备类型
- **ifname** – 接口名

这个命令会使用 dhcp 协议添加连接。

**示例输出:**

    Connection 'dhcp' (163a6822-cd50-4d23-bb42-8b774aeab9cb) successfully added.

#### 不通过 dhcp 分配 IP,而是使用“static”添加地址 ####

    [root@localhost ~]# nmcli connection add con-name "static" ifname eno16777736 autoconnect no type ethernet ip4 192.168.1.240 gw4 192.168.1.1

**示例输出:**

    Connection 'static' (8e69d847-03d7-47c7-8623-bb112f5cc842) successfully added.

**更新连接:**

    [root@localhost ~]# nmcli connection up eno1

再检查一遍,IP 地址是否已经改变。

    [root@localhost ~]# ip addr show

![unxmen_(015)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0151.png)

#### 添加 DNS 设置到静态连接中 ####

    [root@localhost ~]# nmcli connection modify "static" ipv4.dns 202.131.124.4

#### 添加额外的 DNS 值 ####

    [root@localhost ~]# nmcli connection modify "static" +ipv4.dns 8.8.8.8

**注意**:添加额外的值时要使用 **+** 符号,并且要写成 **+ipv4.dns**,而不是 **ip4.dns**。

添加一个额外的 IP 地址:

    [root@localhost ~]# nmcli connection modify "static" +ipv4.addresses 192.168.200.1/24

使用命令刷新设置:

    [root@localhost ~]# nmcli connection up eno1

![unxmen_(016)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_016.png)

你会看见,设置已经生效了。

完结。

--------------------------------------------------------------------------------

via: http://www.unixmen.com/basics-networkmanager-command-line-tool-nmcli/

作者:Rajneesh Upadhyay
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
为 Antergos 与 Arch Linux 添加印地语和梵文支持
================================================================================

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Indian-languages.jpg)

你们或许知道,我最近一直在尝试体验 [Antergos Linux][1]。在安装完 [Antergos][2] 后我首先注意到的一件事是,默认的 Chromium 浏览器中**无法正确显示印地语文字**。

这是一件奇怪的事情,在我之前的桌面 Linux 体验中从未遇到过。起初,我认为是浏览器的问题,所以我安装了 Firefox,然而问题依旧,Firefox 也不能正确显示印地语。和 Chromium 不显示任何东西不同的是,Firefox 确实显示了一些东西,但是毫无可读性。

![No hindi support in Arch Linux based Antergos](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_1.jpeg)

Chromium 中的印地语显示

![No hindi support in Arch Linux based Antergos](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_2.jpeg)

Firefox 中的印地语显示

奇怪吧?那么,默认情况下基于 Arch 的 Antergos Linux 中就没有印地语的支持吗?我没有去验证,但是我猜测其它基于梵文字体的印度语言也会产生同样的问题。

在这个快速指南中,我打算为大家演示如何添加梵文支持,以便让印地语和其它印度语言都能正确显示。

### 在 Antergos 和 Arch Linux 中添加印地语支持 ###

打开终端,使用以下命令:

    sudo yaourt -S ttf-indic-otf

键入密码,它就会安装提供印度语系文字支持的字体。

重启 Firefox,就能马上正确显示印地语了,但 Chromium 需要重启系统才能正常显示。因此,我建议你在安装了印地语字体后**重启你的系统**。

![Adding Hindi display support in Arch based Antergos Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_4.jpeg)

我希望这篇快速指南能够帮助你,让你可以在 Antergos 和其它基于 Arch 的 Linux 发行版(如 Manjaro Linux)中阅读印地语、梵文、泰米尔语、泰卢固语、马拉雅拉姆语、孟加拉语以及其它印度语言。

--------------------------------------------------------------------------------

via: http://itsfoss.com/display-hindi-arch-antergos/

作者:[Abhishek][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:http://antergos.com/
[2]:http://itsfoss.com/tag/antergos/
|
@ -0,0 +1,74 @@
如何在 Ubuntu 15.04 下创建可连接至 Android/iOS 的 AP
================================================================================

我成功地在 Ubuntu 15.04 下用 Gnome Network Manager 创建了一个无线 AP 热点,接下来我要分享一下我的步骤。请注意:你必须要有一个可以用来创建 AP 热点的无线网卡。如果你不知道你的设备是否支持,在终端里输入 `iw list` 来确认。

如果你没有安装 `iw` 的话,在 Ubuntu 下你可以使用 `sudo apt-get install iw` 进行安装。

在你键入 `iw list` 之后,寻找可用的接口,你应该会看到类似下列的条目:

    Supported interface modes:

    * IBSS
    * managed
    * AP
    * AP/VLAN
    * monitor
    * mesh point

让我们一步步来:

1. 断开 WiFi 连接,使用有线网络接入你的笔记本。
1. 在顶栏面板里点击网络的图标 -> Edit Connections(编辑连接)-> 在弹出窗口里点击 Add(新增)按钮。
1. 在下拉菜单内选择 Wi-Fi。
1. 接下来:

a. 输入一个连接名,比如:Hotspot

b. 输入一个 SSID,比如:Hotspot

c. 选择模式(mode):Infrastructure

d. 设备 MAC 地址:在下拉菜单里选择你的无线设备

![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome1.jpg)

1. 进入 Wi-Fi 安全选项卡,选择 WPA & WPA2 Personal 并且输入密码。
1. 进入 IPv4 设置选项卡,在 Method(方法)下拉菜单里,选择 Shared to other computers(共享至其他电脑)。

![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome4.jpg)

1. 进入 IPv6 选项卡,在 Method(方法)里设置为 ignore(忽略)(只有在你不使用 IPv6 的情况下这么做)。
1. 点击 Save(保存)按钮以保存配置。
1. 从 menu/dash 里打开终端。
1. 修改你刚刚使用 network settings 创建的连接。

使用 VIM 编辑器:

    sudo vim /etc/NetworkManager/system-connections/Hotspot

使用 Gedit 编辑器:

    gksu gedit /etc/NetworkManager/system-connections/Hotspot

把名字 Hotspot 替换成你在第 4 步里起的连接名。

![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome2.jpg?resize=640%2C402)

1. 把 `mode=infrastructure` 改成 `mode=ap` 并且保存文件。
1. 一旦你保存了这个文件,你应该就能在 WiFi 菜单里看到你刚刚建立的 AP 了(如果没有的话,请在顶栏里关闭/打开 WiFi 选项一次)。

![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome3.jpg?resize=290%2C375)

1. 你现在可以把你的设备连上 WiFi 了。已经通过 Android 5.0 的小米 4 测试(下载了 1GB 的文件以测试速度与稳定性)。
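上面用编辑器手工修改 `mode=` 那一步,也可以用 `sed` 一条命令完成。下面的示例只在 /tmp 下的一个演示副本上操作(实际使用时请把文件换成上文的 /etc/NetworkManager/system-connections/Hotspot,并加上 sudo;文件内容是为演示虚构的):

```shell
# 构造一个演示用的连接文件副本,再把 mode=infrastructure 改为 mode=ap
printf '[wifi]\nssid=Hotspot\nmode=infrastructure\n' > /tmp/Hotspot.demo
sed -i 's/^mode=infrastructure$/mode=ap/' /tmp/Hotspot.demo
grep '^mode=' /tmp/Hotspot.demo   # 输出:mode=ap
```

替换时用 `^...$` 锚定整行,可以避免误改到文件里其它恰好包含 "infrastructure" 字样的行。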
--------------------------------------------------------------------------------

via: http://www.linuxveda.com/2015/08/23/how-to-create-an-ap-in-ubuntu-15-04-to-connect-to-androidiphone/

作者:[Sayantan Das][a]
译者:[jerryling315](https://github.com/jerryling315)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxveda.com/author/sayantan_das/
@ -0,0 +1,183 @@
Mhddfs:将多个小分区合并成一个大的虚拟存储
================================================================================

让我们假定你有 30GB 的电影,并且你有 3 个驱动器,每个的大小为 20GB。那么,你会怎么来存放东西呢?

很明显,你可以将你的视频分割成 2 个或者 3 个不同的卷,并将它们手工存储到驱动器上。这当然不是一个好主意,它成了一项费力的工作,需要你手工干预,而且会花费你大量时间。

另外一个解决方案是创建一个 [RAID 磁盘阵列][1]。然而,RAID 在存储可靠性和磁盘空间利用率方面一直为人诟病。还有一个解决方案,就是 mhddfs。

![Combine Multiple Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Combine-Multiple-Partitions-in-Linux.png)

Mhddfs:在 Linux 中合并多个分区

mhddfs 是一个用于 Linux 的驱动,它可以将多个挂载点合并到一个虚拟磁盘中。它是一个基于 FUSE 的驱动,提供了一个用于大数据存储的简单解决方案。它将所有小文件系统合并,以创建一个单一的大虚拟文件系统,该文件系统包含其成员文件系统的所有内容,包括文件和空闲空间。

#### 你为什么需要 Mhddfs? ####

它为你所有的存储设备创建一个单一的虚拟池,并可以在启动时被挂载。这个小工具可以智能地照看并处理哪个驱动器满了、哪个驱动器空着,以及该把数据写到哪个驱动器中。当你成功创建虚拟驱动器后,你可以使用 [SAMBA][2] 来共享你的虚拟文件系统。你的客户端在任何时候看到的都是一个巨大的驱动器和大量的空闲空间。
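这种"挑一块剩余空间足够的成员盘来写"的填充思路,可以用一小段 shell 粗略示意。下面只是演示这个选择逻辑本身,盘名和剩余空间数值都是虚构的,与 mhddfs 的实际实现无关:

```shell
# 模拟三块盘的剩余空间(单位 MB),要写入 200MB 的数据
need=200
free_sda=120; free_sdb=500; free_sdc=300
target=""
for d in sda sdb sdc; do
    eval "f=\$free_$d"
    # 选第一块剩余空间足够的盘作为写入目标
    if [ -z "$target" ] && [ "$f" -ge "$need" ]; then
        target=$d
    fi
done
echo "写入目标: $target"   # 输出:写入目标: sdb
```

sda 只剩 120MB 放不下,于是数据整体落在 sdb 上;对使用者来说这一切都隐藏在那个单一的虚拟挂载点后面。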
#### Mhddfs 的特性 ####

- 获取文件系统属性和系统信息。
- 设置文件系统属性。
- 创建、读取、移除和写入目录和文件。
- 支持文件锁和单一设备上的硬链接。

| mhddfs 的优点 | mhddfs 的缺点 |
|---------------|---------------|
| 适合家庭用户 | mhddfs 驱动没有内建在 Linux 内核中 |
| 运行简单 | 运行时需要大量处理能力 |
| 没有明显的数据丢失 | 没有冗余解决方案 |
| 不分割文件 | 不支持移动硬链接 |
| 可以添加新文件到合并的虚拟文件系统 | |
| 可以管理文件保存的位置 | |
| 支持扩展文件属性 | |

### 在 Linux 中安装 Mhddfs ###

在 Debian 及其衍生系统中,你可以使用下面的命令来安装 mhddfs 包。

    # apt-get update && apt-get install mhddfs

![Install Mhddfs on Debian based Systems](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Ubuntu.png)

安装 Mhddfs 到基于 Debian 的系统中

在 RHEL/CentOS 系统中,你需要开启 [epel 仓库][3],然后执行下面的命令来安装 mhddfs 包。

    # yum install mhddfs

在 Fedora 22 及以上系统中,你可以通过 dnf 包管理器来安装它,就像下面这样。

    # dnf install mhddfs

![Install Mhddfs on Fedora](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Fedora.png)

安装 Mhddfs 到 Fedora

如果 mhddfs 包不能从 epel 仓库获取到,那么你需要先解决下面的依赖,然后像下面这样编译源码并安装。

- FUSE 头文件
- GCC
- libc6 头文件
- uthash 头文件
- libattr1 头文件(可选)

接下来,只需从下面建议的地址下载最新的源码包,然后编译。

    # wget http://mhddfs.uvw.ru/downloads/mhddfs_0.1.39.tar.gz
    # tar -zxvf mhddfs*.tar.gz
    # cd mhddfs-0.1.39/
    # make

你应该可以在当前目录中看到 mhddfs 的二进制文件,以 root 身份将它移动到 /usr/bin/ 和 /usr/local/bin/ 中。

    # cp mhddfs /usr/bin/
    # cp mhddfs /usr/local/bin/

一切搞定,mhddfs 已经可以用了。

### 我怎么使用 Mhddfs? ###

1. 让我们看看当前所有挂载到我们系统中的硬盘。

    $ df -h

![Check Mounted Devices](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mounted-Devices.gif)

**样例输出**

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda1       511M  132K  511M   1% /boot/efi
    /dev/sda2       451G   92G  336G  22% /
    /dev/sdb1       1.9T  161G  1.7T   9% /media/avi/BD9B-5FCE
    /dev/sdc1       555M  555M     0 100% /media/avi/Debian 8.1.0 M-A 1

注意这里的"挂载点"名称,我们后面会使用到它们。

2. 创建目录 /mnt/virtual_hdd,所有这些文件系统都将在这里组合到一起。

    # mkdir /mnt/virtual_hdd

3. 然后,挂载所有文件系统。你可以通过 root 或者 FUSE 组中的某个成员来完成。

    # mhddfs /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd -o allow_other

![Mount All File System in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Mount-All-File-System-in-Linux.png)

在 Linux 中挂载所有文件系统

**注意**:这里我们使用了所有硬盘的挂载点名称,很明显,你的挂载点名称会有所不同。也请注意 "-o allow_other" 选项可以让这个虚拟文件系统被其它所有人看到,而不仅仅是创建它的人。

4. 现在,运行 "df -h" 来看看所有文件系统。它应该包含了你刚才创建的那个。

    $ df -h

![Verify Virtual File System Mount](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Virtual-File-System.png)

验证虚拟文件系统挂载

你可以像对已挂载的驱动器那样对虚拟文件系统执行所有的操作。

5. 要在每次系统启动时都创建这个虚拟文件系统,你应该以 root 身份把下面这行(在你那里会有点不同,取决于你的挂载点)添加到 /etc/fstab 文件的末尾。

    mhddfs# /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd fuse defaults,allow_other 0 0

6. 如果以后你想要往虚拟硬盘中添加/移除一个驱动器,你可以先挂载新的驱动器,拷贝 /mnt/virtual_hdd 的内容,卸载卷,弹出你要移除的驱动器并/或挂载你要加入的新驱动器,再使用 mhddfs 命令把全部文件系统挂载到 virtual_hdd 下,这样就全部搞定了。

#### 我怎么卸载 virtual_hdd? ####

卸载 virtual_hdd 相当简单,就像下面这样:

    # umount /mnt/virtual_hdd

![Unmount Virtual Filesystem](http://www.tecmint.com/wp-content/uploads/2015/08/Unmount-Virtual-Filesystem.png)

卸载虚拟文件系统

注意,是 umount,而不是 unmount,很多用户都输错了。

到现在为止就全部结束了。我正在写另外一篇文章,你们一定会喜欢读的。到那时,请继续关注 Tecmint。请在下面的评论中给我们提供有用的反馈吧。请为我们点赞并分享,帮助我们扩散。

--------------------------------------------------------------------------------

via: http://www.tecmint.com/combine-partitions-into-one-in-linux-using-mhddfs/

作者:[Avishek Kumar][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
[2]:http://www.tecmint.com/mount-filesystem-in-linux/
[3]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
@ -1,146 +0,0 @@
RAID 的级别和概念的介绍 - 第1部分
================================================================================

RAID 原意是"廉价磁盘冗余阵列",不过现在它被称为"独立磁盘冗余阵列"。早先,容量很小的磁盘也非常昂贵,但是现在我们可以很便宜地买到更大的磁盘。RAID 是一组磁盘的集合,组成一个逻辑卷。

![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg)

在 Linux 中理解 RAID 的设置

RAID 就是把一组磁盘组合起来,形成一个 RAID 阵列或 RAID 集。一个 RAID 控制器至少使用两个磁盘,把它们组成一个逻辑卷,也可以把多个驱动器分到多个组中。一个磁盘组在应用时只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能,不同的 RAID 级别性能会有所不同。它通过容错和高可用性来保护我们的数据。

这个系列被命名为"RAID 的构建",共包含 9 个部分,包括以下主题:

- 第1部分:RAID 的级别和概念的介绍
- 第2部分:在 Linux 中如何设置 RAID 0(条带化)
- 第3部分:在 Linux 中如何设置 RAID 1(镜像)
- 第4部分:在 Linux 中如何设置 RAID 5(条带化与分布式奇偶校验)
- 第5部分:在 Linux 中如何设置 RAID 6(条带化双分布式奇偶校验)
- 第6部分:在 Linux 中设置 RAID 10 或 1 + 0(嵌套)
- 第7部分:扩展现有的 RAID 阵列并删除损坏的磁盘
- 第8部分:在 RAID 中恢复(重建)损坏的驱动器
- 第9部分:在 Linux 中管理 RAID

这是 9 篇系列教程的第 1 部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。

### 软件 RAID 和硬件 RAID ###

软件 RAID 的性能较低,因为它会消耗主机的资源。需要加载 RAID 软件才能从软件 RAID 卷中读取数据,而在加载 RAID 软件前,操作系统需要先完成引导。软件 RAID 无需物理硬件,零成本投资。

硬件 RAID 具有很高的性能。它们有专用的 RAID 控制器,以 PCI Express 卡的形式物理内置在机器中,不会占用主机资源,并且带有 NVRAM 用于缓存读取和写入。即使出现电源故障,它也会用电池备份电源来保留缓存,以供重建时使用。对于大规模使用,需要非常昂贵的投资。

硬件 RAID 卡如下所示:

![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg)

硬件 RAID

#### 精选的 RAID 概念 ####

- 奇偶校验:在 RAID 重建中,可以用奇偶校验中保存的信息重建丢失的内容。RAID 5 和 RAID 6 基于奇偶校验。
- 条带化:是把数据分散到多个磁盘上,而不是把完整的数据保存在单个磁盘中。如果我们使用 3 个磁盘,则每个磁盘上都会存放数据的一部分。
- 镜像:被用于 RAID 1 和 RAID 10。镜像会自动备份数据;在 RAID 1 中,它会把相同的内容同时保存到另一块磁盘上。
- 热备份:只是我们服务器上的一个备用驱动器,它可以自动顶替发生故障的驱动器。如果阵列中的任何一个驱动器损坏,热备份驱动器会被自动用于重建。
- 块:是 RAID 控制器每次读写数据时的最小单位,最小为 4KB。通过定义块大小,我们可以提高 I/O 性能。

RAID 有不同的级别。在这里,我们仅介绍在真实环境下使用最多的 RAID 级别。

- RAID 0 = 条带化
- RAID 1 = 镜像
- RAID 5 = 单磁盘分布式奇偶校验
- RAID 6 = 双磁盘分布式奇偶校验
- RAID 10 = 镜像 + 条带(嵌套 RAID)
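上面各级别的可用容量可以用简单的算式估算出来。下面用 shell 算术演示一下(N 为磁盘数,S 为单盘容量,数值是随意假设的,仅作示意):

```shell
N=4; S=1000   # 假设 4 块约 1000GB(1TB)的磁盘
echo "RAID 0 可用容量: $((N * S)) GB"          # 全部空间可用,无冗余
echo "RAID 1 可用容量: $((N * S / 2)) GB"      # 镜像损失一半
echo "RAID 5 可用容量: $(((N - 1) * S)) GB"    # 损失 1 块盘的空间做校验
echo "RAID 6 可用容量: $(((N - 2) * S)) GB"    # 损失 2 块盘的空间做校验
echo "RAID 10 可用容量: $((N * S / 2)) GB"     # 镜像 + 条带,损失一半
```

可以看出,容错能力越强(RAID 6、RAID 10),付出的容量代价也越大,这正是后文选择级别时要权衡的。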
RAID 在大多数 Linux 发行版上使用 mdadm 包来管理。让我们先对每个 RAID 级别认识一下。

#### RAID 0(或)条带化 ####

条带化有很好的性能。在 RAID 0(条带化)中,数据将以分片的方式写入到磁盘:一半的内容放在一个磁盘上,另一半内容写入到其它磁盘。

假设我们有 2 个磁盘驱动器,如果我们将数据 "TECMINT" 写到逻辑卷中,"T" 将被保存在第一个盘中,"E" 将保存在第二个盘,"C" 将被保存在第一个盘,"M" 将保存在第二个盘,它会一直按此循环继续。

在这种情况下,如果驱动器中的任何一个发生故障,我们就会丢失所有的数据,因为一个盘中只有一半的数据,不能用于重建。不过,就写入速度和性能而言,RAID 0 是非常好的。创建 RAID 0(条带化)至少需要 2 个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。

- 高性能。
- RAID 0 中没有容量损失。
- 零容错。
- 写和读都有很高的性能。

#### RAID 1(或)镜像 ####

镜像也有不错的性能,并且可以备份我们的数据。假设我们有两块 2TB 的硬盘驱动器,加起来共有 4TB,但在镜像中,这两块驱动器在 RAID 控制器的后面组成一个逻辑驱动器,我们只能看到这个 2TB 的逻辑驱动器。

当我们保存数据时,它将同时写入这两块 2TB 驱动器中。创建 RAID 1(镜像)最少需要两个驱动器。如果发生磁盘故障,我们可以通过更换一块新的磁盘恢复 RAID。在 RAID 1 中,任何一个磁盘发生故障,我们都可以从另一个磁盘中获取相同的数据,因为另一个磁盘中保存着相同的数据,所以是零数据丢失。

- 良好的性能。
- 会损失总容量的一半可用空间。
- 完全容错。
- 重建会更快。
- 写性能较慢。
- 读性能很好。
- 适合小规模使用,如存放操作系统和小型数据库。

#### RAID 5(或)分布式奇偶校验 ####

RAID 5 多用于企业级环境。RAID 5 以分布式奇偶校验的方式工作:奇偶校验信息将被用于重建数据,重建时使用剩余正常驱动器上的信息。在驱动器发生故障时,这可以保护我们的数据。

假设我们有 4 个驱动器,如果一个驱动器发生故障,在我们更换掉发生故障的驱动器后,可以从奇偶校验中把数据重建到更换的驱动器上。奇偶校验信息分散存储在全部 4 个驱动器上:如果我们有 4 个 1TB 的驱动器,每个驱动器上会有 256GB 存放奇偶校验信息,其余 768GB 归用户使用。单个驱动器故障后,RAID 5 依旧能正常工作;如果驱动器损坏个数超过 1 个,则会导致数据丢失。

- 性能卓越。
- 读速度将非常好。
- 如果我们不使用硬件 RAID 控制器,写速度会比较慢。
- 能从所有驱动器的奇偶校验信息中重建数据。
- 完全容错。
- 1 个磁盘空间将用于奇偶校验。
- 可以被用在文件服务器、Web 服务器以及非常重要的备份中。
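RAID 5 "从校验中重建数据"依靠的是异或(XOR)运算:校验块是各数据块的按位异或,任何一块数据丢失,都可以用其余数据块和校验块异或还原出来。下面用 shell 算术做一个单字节的最小演示(数值是随意取的,仅作示意):

```shell
# 三块数据盘上同一条带的字节,以及存放在第四块盘上的校验字节
d1=170; d2=204; d3=240
p=$((d1 ^ d2 ^ d3))           # 校验 = 各数据字节的按位异或
# 假设第二块盘损坏:用剩余数据和校验字节重建 d2
rebuilt=$((d1 ^ d3 ^ p))
echo "重建结果: $rebuilt (原值: $d2)"   # 输出:重建结果: 204 (原值: 204)
```

这也解释了为什么 RAID 5 只能容忍一块盘损坏:两块盘同时丢失时,一个异或等式解不出两个未知数,于是才有了下文 RAID 6 的双校验。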
#### RAID 6(双分布式奇偶校验磁盘) ####

RAID 6 和 RAID 5 相似,但它有两份分布式奇偶校验,多用于磁盘数量较大的阵列中。我们最少需要 4 个驱动器,即使有 2 个驱动器发生故障,我们依然可以在更换新的驱动器后重建数据。

它比 RAID 5 慢,因为它要把数据同时写到 4 个驱动器上。使用硬件 RAID 控制器时速度可以达到平均水平。如果我们有 6 个 1TB 的驱动器,4 个驱动器将用于保存数据,2 个驱动器将用于校验。

- 性能不佳。
- 读性能很好。
- 如果我们不使用硬件 RAID 控制器,写性能会很差。
- 能从 2 个奇偶校验驱动器上重建数据。
- 完全容错。
- 2 个磁盘空间将用于奇偶校验。
- 可用于大型阵列。
- 在备份和视频流中大规模使用。

#### RAID 10(或)镜像 + 条带 ####

RAID 10 可以被称为 1 + 0 或 0 + 1,它会同时做镜像和条带两种工作。在 RAID 10 中是先做镜像再做条带;在 RAID 01 中则是先做条带,再做镜像。RAID 10 比 RAID 01 更好。

假设我们有 4 个驱动器,当我写一些数据到逻辑卷上时,它会使用镜像和条带的方式把数据保存到这 4 个驱动器上。

如果我在 RAID 10 上写入数据 "TECMINT",数据将以如下形式保存:首先将 "T" 同时写入两个磁盘,"E" 也同时写入两个磁盘,所有数据都会这样写入两份。这样数据就有了备份。

同时它又使用了 RAID 0 的方式写入数据:将 "T" 写入第一组盘,"E" 写入第二组盘;再将 "C" 写入第一组盘,"M" 写入第二组盘。

- 良好的读写性能。
- 会损失总容量的一半可用空间。
- 容错。
- 能从备份数据中快速重建。
- 因其高性能和高可用性,常被用于数据库的存储中。

### 结论 ###

在这篇文章中,我们了解了什么是 RAID 以及在实际环境中多采用哪个级别的 RAID。希望你已经学会了上面所写的内容。构建 RAID 之前,必须了解有关 RAID 的基本知识,以上内容基本可以满足你对 RAID 的初步了解。

在接下来的文章中,我将介绍如何设置和创建各种级别的 RAID、扩展 RAID 组(阵列)以及驱动器故障排除等。

--------------------------------------------------------------------------------

via: http://www.tecmint.com/understanding-raid-setup-in-linux/

作者:[Babin Lonston][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/babinlonston/
@ -1,218 +0,0 @@
在 Linux 上使用 ‘mdadm’ 工具在两个设备上创建软件 RAID 0(条带化)- 第2部分
================================================================================

RAID 是廉价磁盘冗余阵列,其高可用性和可靠性适用于大规模环境,数据在这里比在普通硬件上更受保护。RAID 是一组磁盘的集合,组成一个逻辑卷:把多个驱动器组合起来,使其成为一个阵列或者称为集合(组)。

创建 RAID 最少应使用 2 个连接到 RAID 控制器的磁盘来组成逻辑卷,也可以根据定义的 RAID 级别把更多的驱动器添加到一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID,软件 RAID 也被称作"穷人的 RAID"。

![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg)

在 Linux 中创建 RAID 0

使用 RAID 的主要目的是在发生单点故障时保住数据。如果我们使用单个磁盘来存储数据,一旦它损坏了,就没有机会取回我们的数据了。为了防止数据丢失,我们需要一个容错的方法,所以,我们可以使用多个磁盘组成 RAID 阵列。

#### RAID 0 中的条带是什么 ####

条带是把数据分割后同时写到多个磁盘上。假设我们有两个磁盘,如果我们把数据保存到其上的逻辑卷上,数据会被分割后保存在这两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果其中一个驱动器出现故障,我们将得不到完整的数据。因此,使用 RAID 0 不是一种好的做法;唯一的补救办法就是把操作系统安装在 RAID 0 之外的卷上,来保证你重要文件的安全。

- RAID 0 性能较高。
- 在 RAID 0 上,空间零浪费。
- 零容错(如果硬盘中的任何一个发生故障,数据无法取回)。
- 写和读性能都得以提高。
#### 要求 ####

创建 RAID 0 允许的最小磁盘数目是 2 个,你也可以添加更多的磁盘,不过数目应当是 2、4、6、8 等偶数。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。

在这里,我们没有使用硬件 RAID,此设置只依赖于软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的功能界面访问它。有些主板默认内建 RAID 功能,可以使用 Ctrl + I 键访问其界面。

如果你是刚开始接触 RAID,请阅读我们前面的文章,其中介绍了一些关于 RAID 的基本概念。

- [RAID 的级别和概念的介绍][1]

**我的服务器设置**

    Operating System :  CentOS 6.5 Final
    IP Address       :  192.168.0.225
    Two Disks        :  20 GB each

这篇文章是 9 篇 RAID 系列教程的第 2 部分,在这部分,我们将以名为 sdb 和 sdc 的两个 20GB 的硬盘为例,看看如何在 Linux 上创建和使用 RAID 0(条带化)。

### 第1步:更新系统并安装管理 RAID 的 mdadm 软件 ###

1. 在 Linux 上设置 RAID 0 前,我们先更新一下系统,然后安装 mdadm 包。mdadm 是一个小程序,它使我们能够在 Linux 下配置和管理 RAID 设备。

    # yum clean all && yum update
    # yum install mdadm -y

![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png)

安装 mdadm 工具

### 第2步:检测并连接两个 20GB 的硬盘 ###

2. 在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。

    # ls -l /dev | grep sd

![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png)

检查硬盘

3. 一旦检测到新的硬盘驱动器,就用下面的 mdadm 命令检查这些驱动器是否已经被现有的 RAID 使用。

    # mdadm --examine /dev/sd[b-c]

![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png)

检查 RAID 设备

从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。

### 第3步:创建 RAID 分区 ###

4. 现在用 fdisk 命令在 sdb 和 sdc 上创建用于 RAID 的分区。在这里,我将展示如何在 sdb 驱动器上创建分区。

    # fdisk /dev/sdb

请按照以下说明创建分区:

- 按 'n' 创建新的分区。
- 然后按 'P' 选择主分区。
- 接下来选择分区号为 1。
- 只需按两次回车键选择默认值即可。
- 然后,按 'P' 来打印创建好的分区。

![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png)

创建分区

请按照以下说明把分区类型设置为 Linux RAID 类型:

- 按 'L',列出所有可用的类型。
- 按 't' 去修改分区类型。
- 键入 'fd' 设置为 Linux RAID 类型,然后按回车确认。
- 然后再次使用 'p' 查看我们所做的更改。
- 使用 'w' 保存更改。

![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png)

在 Linux 上创建 RAID 分区

**注**:请使用上述同样的步骤在 sdc 驱动器上创建分区。

5. 创建分区后,使用下面的命令验证这两个驱动器是否已正确定义为 RAID。

    # mdadm --examine /dev/sd[b-c]
    # mdadm --examine /dev/sd[b-c]1

![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png)

验证 RAID 分区

### 第4步:创建 RAID md 设备 ###

6. 现在使用以下命令创建 md 设备(即 /dev/md0),并选择合适的 RAID 级别。

    # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1
    # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1

- -C – 创建(create)
- -l – 级别(level)
- -n – RAID 设备数(raid-devices)

7. 一旦 md 设备创建好了,就可以用如下命令查看 RAID 级别、设备和阵列的使用状态。

    # cat /proc/mdstat

![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png)

查看 RAID 级别

    # mdadm -E /dev/sd[b-c]1

![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png)

查看 RAID 设备

    # mdadm --detail /dev/md0

![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png)

查看 RAID 阵列

### 第5步:将 RAID 设备挂载到文件系统 ###

8. 将 RAID 设备 /dev/md0 格式化为 ext4 文件系统,并挂载到 /mnt/raid0 下。

    # mkfs.ext4 /dev/md0

![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png)

创建 ext4 文件系统

9. 在 RAID 设备上创建好 ext4 文件系统后,现在创建一个挂载点(即 /mnt/raid0),并把设备 /dev/md0 挂载到它下面。

    # mkdir /mnt/raid0
    # mount /dev/md0 /mnt/raid0/

10. 下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。

    # df -h

11. 接下来,在挂载点 /mnt/raid0 下创建一个名为 tecmint.txt 的文件,为创建的文件添加一些内容,并查看文件和目录的内容。

    # touch /mnt/raid0/tecmint.txt
    # echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt
    # cat /mnt/raid0/tecmint.txt
    # ls -l /mnt/raid0/

![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png)

验证挂载的设备

12. 验证挂载点后,把它添加到 /etc/fstab 文件中。

    # vim /etc/fstab

添加以下条目,根据你的安装位置和使用的文件系统的不同,自行做修改。

    /dev/md0                /mnt/raid0              ext4    defaults         0 0

![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png)

添加设备到 fstab 文件中

13. 使用 'mount -a' 来检查 fstab 的条目是否有误。

    # mount -av

![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png)

检查 fstab 文件是否有误

### 第6步:保存 RAID 配置 ###

14. 最后,把 RAID 配置保存到一个文件中,以供将来使用。我们使用带有 '-s'(scan)和 '-v'(verbose)选项的 mdadm 命令,如图所示。

    # mdadm -E -s -v >> /etc/mdadm.conf
    # mdadm --detail --scan --verbose >> /etc/mdadm.conf
    # cat /etc/mdadm.conf

![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png)

保存 RAID 配置

就这样,我们看到了如何使用两个硬盘配置条带化的 RAID 0 级别。在接下来的文章中,我们将看到如何设置 RAID 5。

--------------------------------------------------------------------------------

via: http://www.tecmint.com/create-raid0-in-linux/

作者:[Babin Lonston][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
@ -0,0 +1,321 @@
|
||||
|
||||
在 Linux 中安装 RAID 6(条带化双分布式奇偶校验) - 第5部分
|
||||
================================================================================
|
||||
RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即时两个磁盘发生故障后依然有容错能力。两并列的磁盘发生故障时,系统的关键任务仍然能运行。它与 RAID 5 相似,但性能更健壮,因为它多用了一个磁盘来进行奇偶校验。
|
||||
|
||||
在之前的文章中,我们已经在 RAID 5 看了分布式奇偶校验,但在本文中,我们将看到的是 RAID 6 双分布式奇偶校验。不要期望比其他 RAID 有额外的性能,我们仍然需要安装一个专用的 RAID 控制器。在 RAID 6 中,即使我们失去了2个磁盘,我们仍可以取回数据通过更换磁盘,然后从校验中构建数据。
|
||||
|
||||
![Setup RAID 6 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Setup-RAID-6-in-Linux.jpg)
|
||||
|
||||
在 Linux 中安装 RAID 6
|
||||
|
||||
要建立一个 RAID 6,一组最少需要4个磁盘。RAID 6 甚至在有些设定中会有多组磁盘,当读取数据时,它会同时从所有磁盘读取,所以读取速度会更快,当写数据时,因为它要将数据写在条带化的多个磁盘上,所以性能会较差。
|
||||
|
||||
现在,很多人都在讨论为什么我们需要使用 RAID 6,它的性能和其他 RAID 相比并不太好。提出这个问题首先需要知道的是,如果需要高容错的必须选择 RAID 6。在每一个对数据库的高可用性要求较高的环境中,他们需要 RAID 6 因为数据库是最重要,无论花费多少都需要保护其安全,它在视频流环境中也是非常有用的。
|
||||
|
||||
#### RAID 6 的的优点和缺点 ####
|
||||
|
||||
- 性能很不错。
|
||||
- RAID 6 非常昂贵,因为它要求两个独立的磁盘用于奇偶校验功能。
|
||||
- 将失去两个磁盘的容量来保存奇偶校验信息(双奇偶校验)。
|
||||
- 不存在数据丢失,即时两个磁盘损坏。我们可以在更换损坏的磁盘后从校验中重建数据。
|
||||
- 读性能比 RAID 5 更好,因为它从多个磁盘读取,但对于没有专用的 RAID 控制器的设备写性能将非常差。
|
||||
|
||||
#### 要求 ####
|
||||
|
||||
要创建一个 RAID 6 最少需要4个磁盘.你也可以添加更多的磁盘,但你必须有专用的 RAID 控制器。在软件 RAID 中,我们在 RAID 6 中不会得到更好的性能,所以我们需要一个物理 RAID 控制器。
|
||||
|
||||
这些是新建一个 RAID 需要的设置,我们建议先看完以下 RAID 文章。
|
||||
|
||||
- [Linux 中 RAID 的基本概念 – 第一部分][1]
|
||||
- [在 Linux 上创建软件 RAID 0 (条带化) – 第二部分][2]
|
||||
- [在 Linux 上创建软件 RAID 1 (镜像) – 第三部分][3]
|
||||
|
||||
#### My Server Setup ####
|
||||
|
||||
Operating System : CentOS 6.5 Final
|
||||
IP Address : 192.168.0.228
|
||||
Hostname : rd6.tecmintlocal.com
|
||||
Disk 1 [20GB] : /dev/sdb
|
||||
Disk 2 [20GB] : /dev/sdc
|
||||
Disk 3 [20GB] : /dev/sdd
|
||||
Disk 4 [20GB] : /dev/sde
|
||||
|
||||
这篇文章是9系列 RAID 教程的第5部分,在这里我们将看到我们如何在 Linux 系统或者服务器上创建和设置软件 RAID 6 或条带化双分布式奇偶校验,使用四个 20GB 的磁盘 /dev/sdb, /dev/sdc, /dev/sdd 和 /dev/sde.
|
||||
|
||||
### 第1步:安装 mdadm 工具,并检查磁盘 ###
|
||||
|
||||
1.如果你按照我们最进的两篇 RAID 文章(第2篇和第3篇),我们已经展示了如何安装‘mdadm‘工具。如果你直接看的这篇文章,我们先来解释下在Linux系统中如何使用‘mdadm‘工具来创建和管理 RAID,首先根据你的 Linux 发行版使用以下命令来安装。
|
||||
|
||||
# yum install mdadm [on RedHat systems]
|
||||
# apt-get install mdadm [on Debain systems]
|
||||
|
||||
2.安装该工具后,然后来验证需要的四个磁盘,我们将会使用下面的‘fdisk‘命令来检验用于创建 RAID 的磁盘。
|
||||
|
||||
# fdisk -l | grep sd
|
||||
|
||||
![Check Hard Disk in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Linux-Disks.png)
|
||||
|
||||
在 Linux 中检查磁盘
|
||||
|
||||
3.在创建 RAID 磁盘前,先检查下我们的磁盘是否创建过 RAID 分区。
|
||||
|
||||
# mdadm -E /dev/sd[b-e]
|
||||
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
|
||||
|
||||
![Check Raid on Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Disk-Raid.png)
|
||||
|
||||
在磁盘上检查 Raid 分区
|
||||
|
||||
**注意**: 在上面的图片中,没有检测到任何 super-block 或者说在四个磁盘上没有 RAID 存在。现在我们开始创建 RAID 6。
|
||||
|
||||
### 第2步:为 RAID 6 创建磁盘分区 ###
|
||||
|
||||
4.现在为 raid 创建分区‘/dev/sdb‘, ‘/dev/sdc‘, ‘/dev/sdd‘ 和 ‘/dev/sde‘使用下面 fdisk 命令。在这里,我们将展示如何创建分区在 sdb 磁盘,同样的步骤也适用于其他分区。
|
||||
|
||||
**创建 /dev/sdb 分区**
|
||||
|
||||
# fdisk /dev/sdb
|
||||
|
||||
请按照说明进行操作,如下图所示创建分区。
|
||||
|
||||
- 按 ‘n’ 创建新的分区。
|
||||
- 然后按 ‘P’ 选择主分区。
|
||||
- 接下来选择分区号为1。
|
||||
- 只需按两次回车键选择默认值即可。
|
||||
- 然后,按 ‘P’ 来打印创建好的分区。
|
||||
- 按 ‘L’,列出所有可用的类型。
|
||||
- 按 ‘t’ 去修改分区。
|
||||
- 键入 ‘fd’ 设置为 Linux 的 RAID 类型,然后按 Enter 确认。
|
||||
- 然后再次使用‘p’查看我们所做的更改。
|
||||
- 使用‘w’保存更改。
|
||||
|
||||
![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition.png)
|
||||
|
||||
创建 /dev/sdb 分区
|
||||
|
||||
**创建 /dev/sdc 分区**
|
||||
|
||||
# fdisk /dev/sdc
|
||||
|
||||
![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition.png)
|
||||
|
||||
创建 /dev/sdc 分区
|
||||
|
||||
**创建 /dev/sdd 分区**
|
||||
|
||||
# fdisk /dev/sdd
|
||||
|
||||
![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition.png)
|
||||
|
||||
创建 /dev/sdd 分区
|
||||
|
||||
**创建 /dev/sde 分区**
|
||||
|
||||
# fdisk /dev/sde
|
||||
|
||||
![Create sde Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sde-Partition.png)
|
||||
|
||||
创建 /dev/sde 分区
|
||||
|
||||
5.创建好分区后,检查磁盘的 super-blocks 是个好的习惯。如果 super-blocks 不存在我们可以按前面的创建一个新的 RAID。
|
||||
|
||||
# mdadm -E /dev/sd[b-e]1
|
||||
|
||||
|
||||
或者
|
||||
|
||||
# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
|
||||
|
||||
![Check Raid on New Partitions](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Partitions.png)
|
||||
|
||||
在新分区中检查 Raid
|
||||
|
||||
### 步骤3:创建 md 设备(RAID) ###
|
||||
|
||||
6,现在是时候来创建 RAID 设备‘md0‘ (即 /dev/md0)并应用 RAID 级别在所有新创建的分区中,确认 raid 使用以下命令。
|
||||
|
||||
# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
|
||||
# cat /proc/mdstat
|
||||
|
||||
![Create Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Raid-6-Device.png)
|
||||
|
||||
创建 Raid 6 设备
|
||||
|
||||
7.你还可以使用 watch 命令来查看当前 raid 的进程,如下图所示。
|
||||
|
||||
# watch -n1 cat /proc/mdstat
|
||||
|
||||
![Check Raid 6 Process](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Process.png)
|
||||
|
||||
检查 Raid 6 进程
|
||||
|
||||
8.使用以下命令验证 RAID 设备。
|
||||
|
||||
# mdadm -E /dev/sd[b-e]1
|
||||
|
||||
**注意**::上述命令将显示四个磁盘的信息,这是相当长的,所以没有截取其完整的输出。
|
||||
|
||||
9.接下来,验证 RAID 阵列,以确认 re-syncing 被启动。
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Check Raid 6 Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Array.png)
|
||||
|
||||
检查 Raid 6 阵列
|
||||
|
||||
### 第4步:在 RAID 设备上创建文件系统 ###
|
||||
|
||||
10.使用 ext4 为‘/dev/md0‘创建一个文件系统并将它挂载在 /mnt/raid5 。这里我们使用的是 ext4,但你可以根据你的选择使用任意类型的文件系统。
|
||||
|
||||
# mkfs.ext4 /dev/md0
|
||||
|
||||
![Create File System on Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Create-File-System-on-Raid.png)
|
||||
|
||||
在 Raid 6 上创建文件系统
|
||||
|
||||
11.挂载创建的文件系统到 /mnt/raid6,并验证挂载点下的文件,我们可以看到 lost+found 目录。
|
||||
|
||||
# mkdir /mnt/raid6
|
||||
# mount /dev/md0 /mnt/raid6/
|
||||
# ls -l /mnt/raid6/
|
||||
|
||||
12.在挂载点下创建一些文件,在任意文件中添加一些文字并验证其内容。
|
||||
|
||||
# touch /mnt/raid6/raid6_test.txt
|
||||
# ls -l /mnt/raid6/
|
||||
# echo "tecmint raid setups" > /mnt/raid6/raid6_test.txt
|
||||
# cat /mnt/raid6/raid6_test.txt
|
||||
|
||||
![Verify Raid Content](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Content.png)
|
||||
|
||||
验证 Raid 内容
|
||||
|
||||
13.在 /etc/fstab 中添加以下条目使系统启动时自动挂载设备,环境不同挂载点可能会有所不同。
|
||||
|
||||
# vim /etc/fstab
|
||||
|
||||
/dev/md0 /mnt/raid6 ext4 defaults 0 0
|
||||
|
||||
![Automount Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Automount-Raid-Device.png)
|
||||
|
||||
自动挂载 Raid 6 设备
|
||||
|
||||
14.接下来,执行‘mount -a‘命令来验证 fstab 中的条目是否有错误。
|
||||
|
||||
# mount -av
|
||||
|
||||
![Verify Raid Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Automount-Raid-Devices.png)
|
||||
|
||||
验证 Raid 是否自动挂载
|
||||
|
||||
### 第5步:保存 RAID 6 的配置 ###
|
||||
|
||||
15.请注意默认 RAID 没有配置文件。我们需要使用以下命令手动保存它,然后检查设备‘/dev/md0‘的状态。
|
||||
|
||||
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Save Raid 6 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png)
|
||||
|
||||
保存 Raid 6 配置
|
||||
|
||||
![Check Raid 6 Status](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png)
|
||||
|
||||
检查 Raid 6 状态
|
||||
|
||||
### 第6步:添加备用磁盘 ###
|
||||
|
||||
16.现在,它使用了4个磁盘,并且有两个作为奇偶校验信息来使用。在某些情况下,如果任意一个磁盘出现故障,我们仍可以得到数据,因为在 RAID 6 使用双奇偶校验。
|
||||
|
||||
如果第二个磁盘也出现故障,在第三块磁盘损坏前我们可以添加一个新的。它可以作为一个备用磁盘并入 RAID 集合,但我在创建 raid 集合前没有定义备用的磁盘。但是,在磁盘损坏后或者创建 RAId 集合时我们可以添加一块磁盘。现在,我们已经创建好了 RAID,下面让我演示如何添加备用磁盘。
|
||||
|
||||
为了达到演示的目的,我已经热插入了一个新的 HDD 磁盘(即 /dev/sdf),让我们来验证接入的磁盘。
|
||||
|
||||
# ls -l /dev/ | grep sd
|
||||
|
||||
![Check New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-New-Disk.png)
|
||||
|
||||
检查新 Disk
|
||||
|
||||
17.现在再次确认新连接的磁盘没有配置过 RAID ,使用 mdadm 来检查。
|
||||
|
||||
# mdadm --examine /dev/sdf
|
||||
|
||||
![Check Raid on New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Disk.png)
|
||||
|
||||
在新磁盘中检查 Raid
|
||||
|
||||
**注意**: 像往常一样,我们早前已经为四个磁盘创建了分区,同样,我们使用 fdisk 命令为新插入的磁盘创建新分区。
|
||||
|
||||
# fdisk /dev/sdf
|
||||
|
||||
![Create sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Partition-on-sdf.png)
|
||||
|
||||
为 /dev/sdf 创建分区
|
||||
|
||||
18.在 /dev/sdf 创建新的分区后,在新分区上确认 raid,包括/dev/md0 raid 设备的备用磁盘,并验证添加的设备。
|
||||
|
||||
# mdadm --examine /dev/sdf
|
||||
# mdadm --examine /dev/sdf1
|
||||
# mdadm --add /dev/md0 /dev/sdf1
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Verify Raid on sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-on-sdf.png)
|
||||
|
||||
在 sdf 分区上验证 Raid
|
||||
|
||||
![Add sdf Partition to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Add-sdf-Partition-to-Raid.png)
|
||||
|
||||
为 RAID 添加 sdf 分区
|
||||
|
||||
![Verify sdf Partition Details](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-sdf-Details.png)
|
||||
|
||||
验证 sdf 分区信息
|
||||
|
||||
### 第7步:检查 RAID 6 容错 ###
|
||||
|
||||
19.现在,让我们检查备用驱动器是否能自动工作,当我们阵列中的任何一个磁盘出现故障时。为了测试,我亲自将一个磁盘模拟为故障设备。
|
||||
|
||||
在这里,我们标记 /dev/sdd1 为故障磁盘。
|
||||
|
||||
# mdadm --manage --fail /dev/md0 /dev/sdd1
|
||||
|
||||
![Check Raid 6 Fault Tolerance](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Failover.png)
|
||||
|
||||
检查 Raid 6 容错
|
||||
|
||||
20.让我们查看 RAID 的详细信息,并检查备用磁盘是否开始同步。
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Check Auto Raid Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Auto-Raid-Syncing.png)
|
||||
|
||||
检查 Raid 自动同步
|
||||
|
||||
**哇塞!** 这里,我们看到备用磁盘已被激活,并开始重建进程。在底部,我们可以看到出现故障的磁盘 /dev/sdd1 被标记为 faulty。可以使用下面的命令查看重建进度。
|
||||
|
||||
# cat /proc/mdstat
|
||||
|
||||
![Raid 6 Auto Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-6-Auto-Syncing.png)
|
||||
|
||||
Raid 6 自动同步
|
||||
|
||||
### 结论: ###
|
||||
|
||||
在这里,我们看到了如何使用四个磁盘设置 RAID 6。这种 RAID 级别是具有高冗余的昂贵设置之一。在接下来的文章中,我们将看到如何建立一个嵌套的 RAID 10 甚至更多。至此,请继续关注 TECMINT。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/create-raid-6-in-linux/
|
||||
|
||||
作者:[Babin Lonston][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/babinlonston/
|
||||
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
|
||||
[2]:http://www.tecmint.com/create-raid0-in-linux/
|
||||
[3]:http://www.tecmint.com/create-raid1-in-linux/
|
|
||||
|
||||
在 Linux 中设置 RAID 10 或 1 + 0(嵌套) - 第6部分
|
||||
================================================================================
|
||||
RAID 10 是结合 RAID 0 和 RAID 1 形成的。要设置 RAID 10,我们至少需要4个磁盘。在之前的文章中,我们已经看到了如何使用两个磁盘设置 RAID 0 和 RAID 1。
|
||||
|
||||
在这里,我们将使用最少4个磁盘,结合 RAID 0 和 RAID 1 来设置 RAID 10。假设我们要在 RAID 10 创建的逻辑卷中保存一些数据,比如要保存数据“apple”,它将按照以下方法把数据保存在4个磁盘中。
|
||||
|
||||
![Create Raid 10 in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/raid10.jpg)
|
||||
|
||||
在 Linux 中创建 Raid 10
|
||||
|
||||
使用 RAID 0 时,它将“A”保存在第一个磁盘,“p”保存在第二个磁盘,下一个“P”又在第一个磁盘,“L”在第二个磁盘。然后,“e”又在第一个磁盘,像这样它会继续循环此过程将数据保存完整。由此我们知道,RAID 0 是将数据的一半保存到第一个磁盘,另一半保存到第二个磁盘。
|
||||
|
||||
在 RAID 1 方法中,相同的数据将被写入到两个磁盘中。 “A”将同时被写入到第一和第二个磁盘中,“P”也将被同时写入到两个磁盘中,下一个“P”也将同时被写入到两个磁盘。因此,使用 RAID 1 将同时写入到两个磁盘。它将继续循环此过程。
|
||||
|
||||
现在大家来了解 RAID 10 是怎样结合 RAID 0 和 RAID 1 来工作的。如果我们有4个20 GB 的磁盘,总共为 80 GB,但我们将只能得到 40 GB 的可用容量,另一半的容量将被 RAID 10 用作镜像冗余。
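上面的容量计算可以用一个简单的 shell 函数来验证(这只是演示用的小脚本,并非文章原有内容):

```shell
# RAID 10 的可用容量为磁盘总容量的一半(另一半用于镜像)
raid10_usable() {
  local disks=$1 size_gb=$2
  echo $(( disks * size_gb / 2 ))
}

raid10_usable 4 20   # 4 块 20GB 磁盘,可用 40GB
```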
|
||||
|
||||
#### RAID 10 的优点和缺点 ####
|
||||
|
||||
- 提供更好的性能。
|
||||
- 在 RAID 10 中我们将失去两个磁盘的容量。
|
||||
- 读与写的性能将会很好,因为它会同时进行写入和读取。
|
||||
- 它很适合数据库等高 I/O 磁盘写入的场景。
|
||||
|
||||
#### 要求 ####
|
||||
|
||||
在 RAID 10 中,我们至少需要4个磁盘,2个磁盘为 RAID 0,其他2个磁盘为 RAID 1,就像我之前说的,RAID 10 仅仅是结合了 RAID 0和1。如果我们需要扩展 RAID 组,最少需要添加4个磁盘。
|
||||
|
||||
**我的服务器设置**
|
||||
|
||||
Operating System : CentOS 6.5 Final
|
||||
IP Address : 192.168.0.229
|
||||
Hostname : rd10.tecmintlocal.com
|
||||
Disk 1 [20GB] : /dev/sdb
|
||||
Disk 2 [20GB] : /dev/sdc
|
||||
Disk 3 [20GB] : /dev/sdd
|
||||
Disk 4 [20GB] : /dev/sde
|
||||
|
||||
有两种方法来设置 RAID 10,在这里两种方法我都会演示,但我更喜欢第一种方法,使用它来设置 RAID 10 更简单。
|
||||
|
||||
### 方法1:设置 RAID 10 ###
|
||||
|
||||
1.首先,使用以下命令确认所添加的4块磁盘没有被使用。
|
||||
|
||||
# ls -l /dev | grep sd
|
||||
|
||||
2.检测到四个磁盘后,检查它们是否已存在 RAID 分区。
|
||||
|
||||
# mdadm -E /dev/sd[b-e]
|
||||
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
|
||||
|
||||
![Verify 4 Added Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-4-Added-Disks.png)
|
||||
|
||||
验证添加的4块磁盘
|
||||
|
||||
**注意**: 在上面的输出中,如果没有检测到 super-block 意味着在4块磁盘中没有定义过 RAID。
|
||||
|
||||
#### 第1步:为 RAID 分区 ####
|
||||
|
||||
3.现在,使用‘fdisk’命令为4个磁盘(/dev/sdb, /dev/sdc, /dev/sdd 和 /dev/sde)创建新分区。
|
||||
|
||||
# fdisk /dev/sdb
|
||||
# fdisk /dev/sdc
|
||||
# fdisk /dev/sdd
|
||||
# fdisk /dev/sde
|
||||
|
||||
**为 /dev/sdb 创建分区**
|
||||
|
||||
我来告诉你如何使用 fdisk 为磁盘(/dev/sdb)进行分区,此步也适用于其他磁盘。
|
||||
|
||||
# fdisk /dev/sdb
|
||||
|
||||
请使用以下步骤为 /dev/sdb 创建一个新的分区。
|
||||
|
||||
- 按 ‘n’ 创建新的分区。
|
||||
- 然后按 ‘P’ 选择主分区。
|
||||
- 接下来选择分区号为1。
|
||||
- 只需按两次回车键选择默认值即可。
|
||||
- 然后,按 ‘P’ 来打印创建好的分区。
|
||||
- 按 ‘L’,列出所有可用的类型。
|
||||
- 按 ‘t’ 去修改分区。
|
||||
- 键入 ‘fd’ 设置为 Linux 的 RAID 类型,然后按 Enter 确认。
|
||||
- 然后再次使用‘p’查看我们所做的更改。
|
||||
- 使用‘w’保存更改。
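上述交互步骤也可以通过管道一次性送给 fdisk 来完成。下面是一个假设性的草案,只打印按键序列;真正执行前请务必确认目标磁盘:

```shell
# 与上面交互步骤对应的按键序列:n p 1 <回车><回车> t fd w
keys='n\np\n1\n\n\nt\nfd\nw\n'
printf "$keys"
# 确认无误后可执行:printf "$keys" | fdisk /dev/sdb
```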
|
||||
|
||||
![Disk sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-sdb-Partition.png)
|
||||
|
||||
为磁盘 sdb 分区
|
||||
|
||||
**注意**: 请使用上面相同的指令对其他磁盘(sdc, sdd 和 sde)进行分区。
|
||||
|
||||
4.创建好4个分区后,需要使用下面的命令来检查磁盘是否存在 raid。
|
||||
|
||||
# mdadm -E /dev/sd[b-e]
|
||||
# mdadm -E /dev/sd[b-e]1
|
||||
|
||||
或者
|
||||
|
||||
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
|
||||
# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
|
||||
|
||||
![Check All Disks for Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Check-All-Disks-for-Raid.png)
|
||||
|
||||
检查磁盘
|
||||
|
||||
**注意**: 以上输出显示,新创建的四个分区中没有检测到 super-block,这意味着我们可以继续在这些磁盘上创建 RAID 10。
|
||||
|
||||
#### 第2步: 创建 RAID 设备 ‘md’ ####
|
||||
|
||||
5.现在使用“mdadm” raid 管理工具创建一个‘md’(即 /dev/md0)设备。在创建设备之前,必须确保系统已经安装了‘mdadm’工具,如果没有请使用下面的命令来安装。
|
||||
|
||||
# yum install mdadm [on RedHat systems]
|
||||
# apt-get install mdadm [on Debian systems]
|
||||
|
||||
‘mdadm’工具安装完成后,可以使用下面的命令创建一个‘md’ raid 设备。
|
||||
|
||||
# mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
|
||||
|
||||
6.接下来使用‘cat’命令验证新创建的 raid 设备。
|
||||
|
||||
# cat /proc/mdstat
|
||||
|
||||
![Create md raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-raid-Device.png)
|
||||
|
||||
创建 md raid 设备
|
||||
|
||||
7.接下来,使用下面的命令来检查4个磁盘。下面命令的输出会很长,因为它会显示4个磁盘的所有信息。
|
||||
|
||||
# mdadm --examine /dev/sd[b-e]1
|
||||
|
||||
8.接下来,使用以下命令来查看 RAID 阵列的详细信息。
|
||||
|
||||
# mdadm --detail /dev/md0
|
||||
|
||||
![Check Raid Array Details](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Array-Details.png)
|
||||
|
||||
查看 Raid 阵列详细信息
|
||||
|
||||
**注意**: 从上面的结果可以看到,该 RAID 的状态是 active 和 re-syncing(重新同步中)。
|
||||
|
||||
#### 第3步:创建文件系统 ####
|
||||
|
||||
9.使用 ext4 作为‘md0′的文件系统并将它挂载到‘/mnt/raid10‘下。在这里,我用的是 ext4,你可以使用你想要的文件系统类型。
|
||||
|
||||
# mkfs.ext4 /dev/md0
|
||||
|
||||
![Create md Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-Filesystem.png)
|
||||
|
||||
创建 md 文件系统
|
||||
|
||||
10.在创建文件系统后,挂载文件系统到‘/mnt/raid10‘下,并使用‘ls -l’命令列出挂载点下的内容。
|
||||
|
||||
# mkdir /mnt/raid10
|
||||
# mount /dev/md0 /mnt/raid10/
|
||||
# ls -l /mnt/raid10/
|
||||
|
||||
接下来,在挂载点下创建一些文件,并在文件中添加些内容,然后检查内容。
|
||||
|
||||
# touch /mnt/raid10/raid10_files.txt
|
||||
# ls -l /mnt/raid10/
|
||||
# echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt
|
||||
# cat /mnt/raid10/raid10_files.txt
|
||||
|
||||
![Mount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-md-Device.png)
|
||||
|
||||
挂载 md 设备
|
||||
|
||||
11.要想自动挂载,打开‘/etc/fstab‘文件并添加下面的条目,挂载点根据你环境的不同来添加。使用 wq! 保存并退出。
|
||||
|
||||
# vim /etc/fstab
|
||||
|
||||
/dev/md0 /mnt/raid10 ext4 defaults 0 0
|
||||
|
||||
![AutoMount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/AutoMount-md-Device.png)
|
||||
|
||||
自动挂载 md 设备
|
||||
|
||||
12.接下来,在重新启动系统前使用‘mount -a‘来确认‘/etc/fstab‘文件是否有错误。
|
||||
|
||||
# mount -av
|
||||
|
||||
![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Errors-in-Fstab.png)
|
||||
|
||||
检查 Fstab 中的错误
|
||||
|
||||
#### 第4步:保存 RAID 配置 ####
|
||||
|
||||
13.默认情况下 RAID 没有配置文件,所以我们需要在上述步骤完成后手动保存它。
|
||||
|
||||
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
|
||||
|
||||
![Save Raid10 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid10-Configuration.png)
|
||||
|
||||
保存 Raid10 的配置
|
||||
|
||||
就这样,我们使用方法1创建完了 RAID 10,这种方法是比较容易的。现在,让我们使用方法2来设置 RAID 10。
|
||||
|
||||
### 方法2:创建 RAID 10 ###
|
||||
|
||||
1.在方法2中,我们必须先定义2组 RAID 1,然后使用这些创建好的 RAID 1 集合来定义一个 RAID 0。在这里,我们将要做的是先创建2个镜像(RAID 1),然后创建 RAID 0(条带化)。
|
||||
|
||||
首先,列出所有的可用于创建 RAID 10 的磁盘。
|
||||
|
||||
# ls -l /dev | grep sd
|
||||
|
||||
![List 4 Devices](http://www.tecmint.com/wp-content/uploads/2014/11/List-4-Devices.png)
|
||||
|
||||
列出 4 个设备
|
||||
|
||||
2.将4个磁盘使用‘fdisk’命令进行分区。对于如何分区,您可以参照方法1中的第3步。
|
||||
|
||||
# fdisk /dev/sdb
|
||||
# fdisk /dev/sdc
|
||||
# fdisk /dev/sdd
|
||||
# fdisk /dev/sde
|
||||
|
||||
3.在完成4个磁盘的分区后,现在检查磁盘是否已存在 RAID 超级块。
|
||||
|
||||
# mdadm --examine /dev/sd[b-e]
|
||||
# mdadm --examine /dev/sd[b-e]1
|
||||
|
||||
![Examine 4 Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-4-Disks.png)
|
||||
|
||||
检查 4 个磁盘
|
||||
|
||||
#### 第1步:创建 RAID 1 ####
|
||||
|
||||
4.首先,使用4块磁盘创建2组 RAID 1,一组为‘sdb1′和 ‘sdc1′,另一组是‘sdd1′ 和 ‘sde1′。
|
||||
|
||||
# mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1
|
||||
# mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1
|
||||
# cat /proc/mdstat
|
||||
|
||||
![Creating Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png)
|
||||
|
||||
创建 Raid 1
|
||||
|
||||
![Check Details of Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png)
|
||||
|
||||
查看 Raid 1 的详细信息
|
||||
|
||||
#### 第2步:创建 RAID 0 ####
|
||||
|
||||
5.接下来,使用 md1 和 md2 来创建 RAID 0。
|
||||
|
||||
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
|
||||
# cat /proc/mdstat
|
||||
|
||||
![Creating Raid 0](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-0.png)
|
||||
|
||||
创建 Raid 0
|
||||
|
||||
#### 第3步:保存 RAID 配置 ####
|
||||
|
||||
6.我们需要将配置保存在‘/etc/mdadm.conf‘文件中,使系统每次重新启动后都能加载所有的 raid 设备。
|
||||
|
||||
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
|
||||
|
||||
在此之后,我们需要按照方法1中的第3步来创建文件系统。
|
||||
|
||||
就是这样!我们采用方法2创建完了 RAID 1+0。虽然会失去两个磁盘的空间,但相比其他 RAID 级别,它的性能将是非常好的。
|
||||
|
||||
### 结论 ###
|
||||
|
||||
在这里,我们采用两种方法创建了 RAID 10。RAID 10 具有良好的性能和冗余性。希望这篇文章可以帮助你了解 RAID 10(嵌套 RAID 级别)。在后面的文章中,我们会看到如何扩展现有的 RAID 阵列以及更多精彩的内容。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/create-raid-10-in-linux/
|
||||
|
||||
作者:[Babin Lonston][a]
|
||||
译者:[strugglingyouth](https://github.com/strugglingyouth)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/babinlonston/
|
|
||||
第四部分 - 使用 Shell 脚本自动化 Linux 系统维护任务
|
||||
================================================================================
|
||||
之前我听说高效系统管理员/工程师的其中一个特点是懒惰。一开始看起来很矛盾,但作者接下来解释了其中的原因:
|
||||
|
||||
![自动化 Linux 系统维护任务](http://www.tecmint.com/wp-content/uploads/2015/08/Automate-Linux-System-Maintenance-Tasks.png)
|
||||
|
||||
RHCE 系列:第四部分 - 自动化 Linux 系统维护任务
|
||||
|
||||
如果一个系统管理员花费大量的时间解决问题以及做重复的工作,你就应该怀疑他这么做是否正确。换句话说,一个高效的系统管理员/工程师应该制定一个计划使得尽量花费少的时间去做重复的工作,以及通过使用该系列中第三部分 [使用 Linux 工具集监视系统活动报告][1] 介绍的工具预见问题。因此,尽管看起来他/她没有做很多的工作,但那是因为 shell 脚本帮助完成了他的/她的大部分任务,这也就是本章我们将要探讨的东西。
|
||||
|
||||
### 什么是 shell 脚本? ###
|
||||
|
||||
简单地说,shell 脚本就是一个由 shell 一步一步执行的程序,而 shell 是在 Linux 内核和终端用户之间提供接口的另一个程序。
|
||||
|
||||
默认情况下,RHEL 7 中用户使用的 shell 是 bash(/bin/bash)。如果你想知道详细的信息和历史背景,你可以查看 [维基页面][2]。
|
||||
|
||||
关于这个 shell 提供的众多功能的介绍,可以查看 **man 手册**,也可以从 [Bash 命令][3] 下载 PDF 版本。除此之外,假设你已经熟悉 Linux 命令(否则我强烈建议你首先看一下 **Tecmint.com** 中的文章 [从新手到系统管理员指南][4])。现在让我们开始吧。
|
||||
|
||||
### 写一个脚本显示系统信息 ###
|
||||
|
||||
为了方便,首先让我们新建一个目录用于保存我们的 shell 脚本:
|
||||
|
||||
# mkdir scripts
|
||||
# cd scripts
|
||||
|
||||
然后用喜欢的文本编辑器打开新的文本文件 `system_info.sh`。我们首先在头部插入一些注释以及一些命令:
|
||||
|
||||
#!/bin/bash
|
||||
|
||||
# RHCE 系列第四部分事例脚本
|
||||
# 该脚本会返回以下这些系统信息:
|
||||
# -主机名称:
|
||||
echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m"
|
||||
hostnamectl
|
||||
echo ""
|
||||
# -文件系统磁盘空间使用:
|
||||
echo -e "\e[31;43m***** FILE SYSTEM DISK SPACE USAGE *****\e[0m"
|
||||
df -h
|
||||
echo ""
|
||||
# -系统空闲和使用中的内存:
|
||||
echo -e "\e[31;43m ***** FREE AND USED MEMORY *****\e[0m"
|
||||
free
|
||||
echo ""
|
||||
# -系统启动时间:
|
||||
echo -e "\e[31;43m***** SYSTEM UPTIME AND LOAD *****\e[0m"
|
||||
uptime
|
||||
echo ""
|
||||
# -登录的用户:
|
||||
echo -e "\e[31;43m***** CURRENTLY LOGGED-IN USERS *****\e[0m"
|
||||
who
|
||||
echo ""
|
||||
# -使用内存最多的 5 个进程
|
||||
echo -e "\e[31;43m***** TOP 5 MEMORY-CONSUMING PROCESSES *****\e[0m"
|
||||
ps -eo %mem,%cpu,comm --sort=-%mem | head -n 6
|
||||
echo ""
|
||||
echo -e "\e[1;32mDone.\e[0m"
|
||||
|
||||
然后,给脚本可执行权限:
|
||||
|
||||
# chmod +x system_info.sh
|
||||
|
||||
运行脚本:
|
||||
|
||||
./system_info.sh
|
||||
|
||||
注意为了更好的可视化效果各部分标题都用颜色显示:
|
||||
|
||||
![服务器监视 Shell 脚本](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Shell-Script.png)
|
||||
|
||||
服务器监视 Shell 脚本
|
||||
|
||||
该功能用以下命令提供:
|
||||
|
||||
echo -e "\e[COLOR1;COLOR2m<YOUR TEXT HERE>\e[0m"
|
||||
|
||||
其中 COLOR1 和 COLOR2 是前景色和背景色([Arch Linux Wiki][5] 有更多的信息和选项解释),<YOUR TEXT HERE> 是你想用颜色显示的字符串。
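举个例子(颜色编号为 ANSI 标准值:31 是红色前景,43 是黄色背景):

```shell
# \e[31;43m 开启红字黄底,\e[0m 恢复默认颜色
echo -e "\e[31;43mWARNING\e[0m"
```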
|
||||
|
||||
### 使任务自动化 ###
|
||||
|
||||
你想使其自动化的任务可能因情况而不同。因此,我们不可能在一篇文章中覆盖所有可能的场景,但是我们会介绍使用 shell 脚本可以使其自动化的三种典型任务:
|
||||
|
||||
**1)** 更新本地文件数据库,**2)** 查找(或者删除)有 777 权限的文件,以及 **3)** 文件系统使用率超过定义的阈值时发出警告。
|
||||
|
||||
让我们在脚本目录中新建一个名为 `auto_tasks.sh` 的文件并添加以下内容:
|
||||
|
||||
#!/bin/bash
|
||||
|
||||
# 自动化任务事例脚本:
|
||||
# -更新本地文件数据库:
|
||||
echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m"
|
||||
updatedb
|
||||
if [ $? == 0 ]; then
|
||||
echo "The local file database was updated correctly."
|
||||
else
|
||||
echo "The local file database was not updated correctly."
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# -查找 和/或 删除有 777 权限的文件。
|
||||
echo -e "\e[4;32mLOOKING FOR FILES WITH 777 PERMISSIONS\e[0m"
|
||||
# Enable either option (comment out the other line), but not both.
|
||||
# Option 1: Delete files without prompting for confirmation. Assumes GNU version of find.
|
||||
#find -type f -perm 0777 -delete
|
||||
# Option 2: Ask for confirmation before deleting files. More portable across systems.
|
||||
find -type f -perm 0777 -exec rm -i {} \;
|
||||
echo ""
|
||||
# -文件系统使用率超过定义的阈值时发出警告
|
||||
echo -e "\e[4;32mCHECKING FILE SYSTEM USAGE\e[0m"
|
||||
THRESHOLD=30
|
||||
while read line; do
|
||||
# This variable stores the file system path as a string
|
||||
FILESYSTEM=$(echo $line | awk '{print $1}')
|
||||
# This variable stores the use percentage (XX%)
|
||||
PERCENTAGE=$(echo $line | awk '{print $5}')
|
||||
# Use percentage without the % sign.
|
||||
USAGE=${PERCENTAGE%?}
|
||||
if [ $USAGE -gt $THRESHOLD ]; then
|
||||
echo "The remaining available space in $FILESYSTEM is critically low. Used: $PERCENTAGE"
|
||||
fi
|
||||
done < <(df -h --total | grep -vi filesystem)
|
||||
|
||||
请注意该脚本最后一行两个 `<` 符号之间有个空格。
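之所以用 `< <(...)`(进程替换)而不是管道,是因为管道会让 while 循环运行在子 shell 中,循环里修改的变量在循环结束后会丢失。下面这个小例子(非文章原有内容,需要 bash)演示了这一点:

```shell
# 进程替换让循环运行在当前 shell 中,count 的修改得以保留
count=0
while read line; do
  count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "$count"
```

如果把最后一行改成 `printf 'a\nb\nc\n' | while ...`,循环结束后 count 仍是 0。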
|
||||
|
||||
![查找 777 权限文件的 Shell 脚本](http://www.tecmint.com/wp-content/uploads/2015/08/Shell-Script-to-Find-777-Permissions.png)
|
||||
|
||||
查找 777 权限文件的 Shell 脚本
|
||||
|
||||
### 使用 Cron ###
|
||||
|
||||
想更进一步提高效率,你不会想只是坐在你的电脑前手动执行这些脚本。相反,你会使用 cron 来调度这些任务周期性地执行,并把结果通过邮件发送给预定义的接收者,或者将它们保存到可以使用 web 浏览器查看的文件中。
|
||||
|
||||
下面的脚本(filesystem_usage.sh)会运行有名的 **df -h** 命令,格式化输出到 HTML 表格并保存到 **report.html** 文件中:
|
||||
|
||||
#!/bin/bash
|
||||
# Sample script to demonstrate the creation of an HTML report using shell scripting
|
||||
# Web directory
|
||||
WEB_DIR=/var/www/html
|
||||
# A little CSS and table layout to make the report look a little nicer
|
||||
echo "<HTML>
|
||||
<HEAD>
|
||||
<style>
|
||||
.titulo{font-size: 1em; color: white; background:#0863CE; padding: 0.1em 0.2em;}
|
||||
table
|
||||
{
|
||||
border-collapse:collapse;
|
||||
}
|
||||
table, td, th
|
||||
{
|
||||
border:1px solid black;
|
||||
}
|
||||
</style>
|
||||
<meta http-equiv='Content-Type' content='text/html; charset=UTF-8' />
|
||||
</HEAD>
|
||||
<BODY>" > $WEB_DIR/report.html
|
||||
# View hostname and insert it at the top of the html body
|
||||
HOST=$(hostname)
|
||||
echo "Filesystem usage for host <strong>$HOST</strong><br>
|
||||
Last updated: <strong>$(date)</strong><br><br>
|
||||
<table border='1'>
|
||||
<tr><th class='titulo'>Filesystem</td>
|
||||
<th class='titulo'>Size</td>
|
||||
<th class='titulo'>Use %</td>
|
||||
</tr>" >> $WEB_DIR/report.html
|
||||
# Read the output of df -h line by line
|
||||
while read line; do
|
||||
echo "<tr><td align='center'>" >> $WEB_DIR/report.html
|
||||
echo $line | awk '{print $1}' >> $WEB_DIR/report.html
|
||||
echo "</td><td align='center'>" >> $WEB_DIR/report.html
|
||||
echo $line | awk '{print $2}' >> $WEB_DIR/report.html
|
||||
echo "</td><td align='center'>" >> $WEB_DIR/report.html
|
||||
echo $line | awk '{print $5}' >> $WEB_DIR/report.html
|
||||
echo "</td></tr>" >> $WEB_DIR/report.html
|
||||
done < <(df -h | grep -vi filesystem)
|
||||
echo "</table></BODY></HTML>" >> $WEB_DIR/report.html
|
||||
|
||||
在我们的 **RHEL 7** 服务器(**192.168.0.18**)中,看起来像下面这样:
|
||||
|
||||
![服务器监视报告](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Report.png)
|
||||
|
||||
服务器监视报告
|
||||
|
||||
你可以添加任何你想要的信息到那个报告中。添加下面的 crontab 条目,使该脚本在每天下午 1:30 运行:
|
||||
|
||||
30 13 * * * /root/scripts/filesystem_usage.sh
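如果想在脚本里以非交互方式追加这个条目,可以参考下面的思路(假设性示例,此处只生成新的 crontab 文本而不真正安装):

```shell
# 在现有 crontab(若有)之后追加新条目,只打印结果供确认
entry='30 13 * * * /root/scripts/filesystem_usage.sh'
new_crontab=$( { crontab -l 2>/dev/null; echo "$entry"; } )
echo "$new_crontab"
# 确认无误后可执行:echo "$new_crontab" | crontab -
```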
|
||||
|
||||
### 总结 ###
|
||||
|
||||
你很可能还能想到各种其他想要自动化的任务;正如你看到的,使用 shell 脚本能极大地简化任务。如果你觉得这篇文章对你有所帮助就告诉我们吧,也请不要犹豫,在下面的评论框中添加你自己的想法或评论。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/
|
||||
[2]:https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29
|
||||
[3]:http://www.tecmint.com/wp-content/pdf/bash.pdf
|
||||
[4]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/
|
||||
[5]:https://wiki.archlinux.org/index.php/Color_Bash_Prompt
|
|
||||
RHCSA 系列:使用 ACL(访问控制列表) 和挂载 Samba/NFS 共享 – Part 7
|
||||
================================================================================
|
||||
在上一篇文章([RHCSA 系列 Part 6][1])中,我们解释了如何使用 parted 和 ssm 来设置和配置本地系统存储。
|
||||
|
||||
![配置 ACL 及挂载 NFS/Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ACLs-and-Mounting-NFS-Samba-Shares.png)
|
||||
|
||||
RHCSA Series: 配置 ACL 及挂载 NFS/Samba 共享 – Part 7
|
||||
|
||||
我们也讨论了如何创建和在系统启动时使用一个密码来挂载加密的卷。另外,我们告诫过你要避免在挂载的文件系统上执行有风险的存储管理操作。记住这点后,现在我们将回顾在 RHEL 7 中最常使用的文件系统格式,然后将涵盖有关手动或自动挂载、使用和卸载网络文件系统(CIFS 和 NFS)的话题,以及在你的操作系统上访问控制列表的使用。
|
||||
|
||||
#### 前提条件 ####
|
||||
|
||||
在进一步深入之前,请确保你的网络中已有可用的 Samba 服务和 NFS 服务(注意在 RHEL 7 中 NFSv2 已不再被支持)。
|
||||
|
||||
在本次指导中,我们将使用一个IP 地址为 192.168.0.10 且同时运行着 Samba 服务和 NFS 服务的机子来作为服务器,使用一个 IP 地址为 192.168.0.18 的 RHEL 7 机子来作为客户端。在这篇文章的后面部分,我们将告诉你在客户端上你需要安装哪些软件包。
|
||||
|
||||
### RHEL 7 中的文件系统格式 ###
|
||||
|
||||
从 RHEL 7 开始,由于 XFS 的高性能和可扩展性,它已经被引入所有的架构中来作为默认的文件系统。
|
||||
根据 Red Hat 及其合作伙伴在主流硬件上执行的最新测试,当前 XFS 已支持最大为 500 TB 大小的文件系统。
|
||||
|
||||
另外,XFS 启用了 user_xattr(扩展用户属性)和 acl(POSIX 访问控制列表)来作为默认的挂载选项,而不像 ext3 或 ext4(对于 RHEL 7 来说,ext2 已过时),这意味着当挂载一个 XFS 文件系统时,你不必显式地在命令行或 /etc/fstab 中指定这些选项(假如你想在后一种情况下禁用这些选项,你必须显式地使用 no_acl 和 no_user_xattr)。
|
||||
|
||||
请记住,扩展用户属性可以被设置到文件和目录上,用来存储任意的额外信息,如 mime 类型、字符集或文件的编码;而对用户属性的访问权限由一般的文件权限位来定义。
|
||||
|
||||
#### 访问控制列表 ####
|
||||
|
||||
作为一名系统管理员,无论你是新手还是专家,你一定非常熟悉与文件和目录有关的常规访问权限,这些权限为所有者,所有组和"世界"(所有的其他人)指定了特定的权限(可读,可写及可执行)。但如若你需要稍微更新你的记忆,请随意参考 [RHCSA 系列的 Part 3][3].
|
||||
|
||||
但是,由于标准的 `ugo/rwx` 集合并不允许为不同的用户配置不同的权限,所以 ACL 便被引入了进来,为的是为文件和目录定义更加详细的访问权限,而不仅仅是这些特别指定的特定权限。
|
||||
|
||||
事实上,ACL 定义的权限是由文件权限位所指定权限的一个超集。下面就让我们看看 ACL 在真实世界中是如何应用的吧。
|
||||
|
||||
1. 存在两种类型的 ACL:访问 ACL,可被应用到一个特定的文件或目录上,以及默认 ACL,只可被应用到一个目录上。假如目录中的文件没有 ACL,则它们将继承它们的父目录的默认 ACL 。
|
||||
|
||||
2. 从一开始, ACL 就可以为每个用户,每个组或不在文件所属组中的用户配置相应的权限。
|
||||
|
||||
3. ACL 可使用 `setfacl` 来设置(和移除),可相应地使用 -m 或 -x 选项。
|
||||
|
||||
例如,让我们创建一个名为 tecmint 的组,并将用户 johndoe 和 davenull 加入该组:
|
||||
|
||||
# groupadd tecmint
|
||||
# useradd johndoe
|
||||
# useradd davenull
|
||||
# usermod -a -G tecmint johndoe
|
||||
# usermod -a -G tecmint davenull
|
||||
|
||||
并且让我们检验这两个用户都已属于追加的组 tecmint:
|
||||
|
||||
# id johndoe
|
||||
# id davenull
|
||||
|
||||
![检验用户](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-Users.png)
|
||||
|
||||
检验用户
|
||||
|
||||
现在,我们在 /mnt 下创建一个名为 playground 的目录,并在该目录下创建一个名为 testfile.txt 的文件。我们将设定该文件的属组为 tecmint,并更改它的默认 ugo/rwx 权限为 770(即赋予该文件的属主和属组可读,可写和可执行权限):
|
||||
|
||||
# mkdir /mnt/playground
|
||||
# touch /mnt/playground/testfile.txt
|
||||
# chmod 770 /mnt/playground/testfile.txt
|
||||
|
||||
接着,依次切换为 johndoe 和 davenull 用户,并在文件中写入一些信息:
|
||||
|
||||
echo "My name is John Doe" > /mnt/playground/testfile.txt
|
||||
echo "My name is Dave Null" >> /mnt/playground/testfile.txt
|
||||
|
||||
到目前为止,一切正常。现在我们让用户 gacanepa 来向该文件执行写操作 – 则写操作将会失败,这是可以预料的。
|
||||
|
||||
但实际上我们需要用户 gacanepa(他不是组 tecmint 的成员)在文件 /mnt/playground/testfile.txt 上有写权限,那又该怎么办呢?首先映入你脑海里的可能是将该用户添加到组 tecmint 中。但那将使得他在所有该组具有写权限位的文件上均拥有写权限,而我们并不想这样,我们只想让他能够在文件 /mnt/playground/testfile.txt 上有写权限。
|
||||
|
||||
# touch /mnt/playground/testfile.txt
|
||||
# chown :tecmint /mnt/playground/testfile.txt
|
||||
# chmod 777 /mnt/playground/testfile.txt
|
||||
# su johndoe
|
||||
$ echo "My name is John Doe" > /mnt/playground/testfile.txt
|
||||
$ su davenull
|
||||
$ echo "My name is Dave Null" >> /mnt/playground/testfile.txt
|
||||
$ su gacanepa
|
||||
$ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt
|
||||
|
||||
![管理用户的权限](http://www.tecmint.com/wp-content/uploads/2015/04/User-Permissions.png)
|
||||
|
||||
管理用户的权限
|
||||
|
||||
现在,让我们给用户 gacanepa 在 /mnt/playground/testfile.txt 文件上有读和写权限。
|
||||
|
||||
以 root 的身份运行如下命令:
|
||||
|
||||
# setfacl -R -m u:gacanepa:rwx /mnt/playground
|
||||
|
||||
这样你将成功地添加一条 ACL,允许 gacanepa 对那个测试文件执行写操作。然后切换为 gacanepa 用户,并再次尝试向该文件写入一些信息:
|
||||
|
||||
$ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt
|
||||
|
||||
要观察一个特定的文件或目录的 ACL,可以使用 `getfacl` 命令:
|
||||
|
||||
# getfacl /mnt/playground/testfile.txt
|
||||
|
||||
![检查文件的 ACL](http://www.tecmint.com/wp-content/uploads/2015/04/Check-ACL-of-File.png)
|
||||
|
||||
检查文件的 ACL
|
||||
|
||||
要为目录设定默认 ACL(它的内容将被该目录下的文件继承,除非另外被覆写),需要在规则前添加 `d:` 并指定一个目录名,而不是文件名:
|
||||
|
||||
# setfacl -m d:o:r /mnt/playground
|
||||
|
||||
上面的 ACL 将允许不在属组中的用户对目录 /mnt/playground 中的内容有读权限。请注意观察这次更改前后 `getfacl /mnt/playground` 输出结果的不同:
|
||||
|
||||
![在 Linux 中设定默认 ACL](http://www.tecmint.com/wp-content/uploads/2015/04/Set-Default-ACL-in-Linux.png)
|
||||
|
||||
在 Linux 中设定默认 ACL
|
||||
|
||||
[在官方的 RHEL 7 存储管理指导手册的第 20 章][3] 中提供了更多有关 ACL 的例子,我极力推荐你看一看它并将它放在身边作为参考。
|
||||
|
||||
#### 挂载 NFS 网络共享 ####
|
||||
|
||||
要显示你服务器上可用的 NFS 共享的列表,你可以使用带有 -e 选项的 `showmount` 命令,再跟上机器的名称或它的 IP 地址。这个工具包含在 `nfs-utils` 软件包中:
|
||||
|
||||
# yum update && yum install nfs-utils
|
||||
|
||||
接着运行:
|
||||
|
||||
# showmount -e 192.168.0.10
|
||||
|
||||
则你将得到一个在 192.168.0.10 上可用的 NFS 共享的列表:
|
||||
|
||||
![检查可用的 NFS 共享](http://www.tecmint.com/wp-content/uploads/2015/04/Mount-NFS-Shares.png)
|
||||
|
||||
检查可用的 NFS 共享
|
||||
|
||||
要按照需求在本地客户端上使用命令行来挂载 NFS 网络共享,可使用下面的语法:
|
||||
|
||||
# mount -t nfs -o [options] remote_host:/remote/directory /local/directory
|
||||
|
||||
其中,在我们的例子中,对应为:
|
||||
|
||||
# mount -t nfs 192.168.0.10:/NFS-SHARE /mnt/nfs
|
||||
|
||||
若你得到如下的错误信息:“Job for rpc-statd.service failed. See "systemctl status rpc-statd.service" and "journalctl -xn" for details.”,请确保 `rpcbind` 服务已被启用且已在你的系统中启动。
|
||||
|
||||
# systemctl enable rpcbind.socket
|
||||
# systemctl restart rpcbind.service
|
||||
|
||||
接着重启。这就应该达到了上面的目的,且你将能够像先前解释的那样挂载你的 NFS 共享了。若你需要在系统启动时自动挂载 NFS 共享,可以向 /etc/fstab 文件添加一个有效的条目:
|
||||
|
||||
remote_host:/remote/directory /local/directory nfs options 0 0
|
||||
|
||||
上面的变量 remote_host, /remote/directory, /local/directory 和 options(可选) 和在命令行中手动挂载一个 NFS 共享时使用的一样。按照我们前面的例子,对应为:
|
||||
|
||||
192.168.0.10:/NFS-SHARE /mnt/nfs nfs defaults 0 0
|
||||
|
||||
#### 挂载 CIFS (Samba) 网络共享 ####
|
||||
|
||||
Samba 是一个特别的工具,它使得在由 *nix 和 Windows 机器组成的网络中进行网络共享成为可能。要显示可用的 Samba 共享,可使用带有 -L 选项的 smbclient 命令,再跟上机器的名称或它的 IP 地址。这个工具包含在 samba-client 软件包中:
|
||||
|
||||
你将被提示在远程主机上输入 root 用户的密码:
|
||||
|
||||
# smbclient -L 192.168.0.10
|
||||
|
||||
![检查 Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Samba-Shares.png)
|
||||
|
||||
检查 Samba 共享
|
||||
|
||||
要在本地客户端上挂载 Samba 网络共享,你需要已安装好 cifs-utils 软件包:
|
||||
|
||||
# yum update && yum install cifs-utils
|
||||
|
||||
然后在命令行中使用下面的语法:
|
||||
|
||||
# mount -t cifs -o credentials=/path/to/credentials/file //remote_host/samba_share /local/directory
|
||||
|
||||
其中,在我们的例子中,对应为:
|
||||
|
||||
# mount -t cifs -o credentials=~/.smbcredentials //192.168.0.10/gacanepa /mnt/samba
|
||||
|
||||
其中 `.smbcredentials`
|
||||
|
||||
username=gacanepa
|
||||
password=XXXXXX
|
||||
|
||||
是一个位于 root 用户的家目录(/root/) 中的隐藏文件,其权限被设置为 600,所以除了该文件的属主外,其他人对该文件既不可读也不可写。
|
||||
|
||||
请注意 samba_share 是 Samba 分享的名称,由上面展示的 `smbclient -L remote_host` 所返回。
|
||||
|
||||
现在,若你需要在系统启动时自动地使得 Samba 分享可用,可以向 /etc/fstab 文件添加一个像下面这样的有效条目:
|
||||
|
||||
//remote_host/samba_share /local/directory cifs options 0 0
|
||||
|
||||
上面的变量 remote_host, /remote/directory, /local/directory 和 options(可选) 和在命令行中手动挂载一个 Samba 共享时使用的一样。按照我们前面的例子中所给的定义,对应为:
|
||||
|
||||
//192.168.0.10/gacanepa /mnt/samba cifs credentials=/root/smbcredentials,defaults 0 0
|
||||
|
||||
### 结论 ###
|
||||
|
||||
在这篇文章中,我们已经解释了如何在 Linux 中设置 ACL,并讨论了如何在一个 RHEL 7 客户端上挂载 CIFS 和 NFS 网络共享。
|
||||
|
||||
我建议你去练习这些概念,甚至混合使用它们(试着在一个挂载的网络共享上设置 ACL),直至你感觉舒适。假如你有问题或评论,请随时随意地使用下面的评论框来联系我们。另外,请随意通过你的社交网络分享这篇文章。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/
|
||||
[2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/
|
||||
[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html
|
|
||||
RHCSA 系列:安全 SSH,设定主机名及开启网络服务 – Part 8
|
||||
================================================================================
|
||||
作为一名系统管理员,你将经常使用一个终端模拟器来登陆到一个远程的系统中,执行一系列的管理任务。你将很少有机会坐在一个真实的(物理)终端前,所以你需要设定好一种方法来使得你可以登陆到你被要求去管理的那台远程主机上。
|
||||
|
||||
事实上,当你必须坐在一台物理终端前的时候,那很可能是你登陆该主机的最后手段。基于安全原因,使用 Telnet 来达到以上目的并不是一个好主意,因为流量并没有被加密,而是以明文方式在线缆上传送。
|
||||
|
||||
另外,在这篇文章中,我们也将复习如何配置网络服务来使得它在开机时被自动开启,并学习如何设置网络和静态或动态地解析主机名。
|
||||
|
||||
![RHCSA: 安全 SSH 和开启网络服务](http://www.tecmint.com/wp-content/uploads/2015/05/Secure-SSH-Server-and-Enable-Network-Services.png)
|
||||
|
||||
RHCSA: 安全 SSH 和开启网络服务 – Part 8
|
||||
|
||||
### 安装并确保 SSH 通信安全 ###
|
||||
|
||||
对于你来说,要能够使用 SSH 远程登陆到一个 RHEL 7 机子,你必须安装 `openssh`,`openssh-clients` 和 `openssh-servers` 软件包。下面的命令不仅将安装远程登陆程序,也会安装安全的文件传输工具以及远程文件复制程序:
|
||||
|
||||
# yum update && yum install openssh openssh-clients openssh-servers
|
||||
|
||||
注意,安装上服务器所需的相应软件包是一个不错的主意,因为或许在某个时刻,你想使用同一个机子来作为客户端和服务器。
|
||||
|
||||
在安装完成后,如若你想安全地访问你的 SSH 服务器,你还需要考虑一些基本的事情。下面的设定应该在文件 `/etc/ssh/sshd_config` 中得以呈现。
|
||||
|
||||
1. 更改 sshd 守护进程的监听端口,从 22(默认的端口值)改为一个更高的端口值(2000 或更大),但首先要确保所选的端口没有被占用。
|
||||
|
||||
例如,让我们假设你选择了端口 2500 。使用 [netstat][1] 来检查所选的端口是否被占用:
|
||||
|
||||
# netstat -npltu | grep 2500
|
||||
|
||||
假如 netstat 没有返回任何信息,则你可以安全地为 sshd 使用端口 2500,并且你应该在上面的配置文件中更改端口的设定,具体如下:
|
||||
|
||||
Port 2500
|
||||
|
||||
2. 只允许协议 2:
|
||||
|
||||
Protocol 2
|
||||
|
||||
3. 配置验证超时的时间为 2 分钟,不允许以 root 身份登陆,并将允许通过 ssh 登陆的人数限制到最小:
|
||||
|
||||
LoginGraceTime 2m
|
||||
PermitRootLogin no
|
||||
AllowUsers gacanepa
|
||||
|
||||
4. 假如可能,使用基于公钥的验证方式而不是使用密码:
|
||||
|
||||
PasswordAuthentication no
|
||||
RSAAuthentication yes
|
||||
PubkeyAuthentication yes
|
||||
|
||||
这假设了你已经在你的客户端机子上创建了带有你的用户名的一个密钥对,并将公钥复制到了你的服务器上。
|
||||
|
||||
- [开启 SSH 无密码登陆][2]
|
||||
|
||||
### 配置网络和名称的解析 ###
|
||||
|
||||
1. 每个系统管理员应该对下面这个系统配置文件非常熟悉:
|
||||
|
||||
- /etc/hosts 被用来在小型网络中解析名称 <---> IP 地址。
|
||||
|
||||
文件 `/etc/hosts` 中的每一行拥有如下的结构:
|
||||
|
||||
IP address - Hostname - FQDN
|
||||
|
||||
例如,
|
||||
|
||||
192.168.0.10 laptop laptop.gabrielcanepa.com.ar
|
||||
|
||||
2. `/etc/resolv.conf` 指定 DNS 服务器的 IP 地址和搜索域,后者被用来在查询名称没有提供域名后缀时,将其补全为一个全称域名。
|
||||
|
||||
在正常情况下,你不必编辑这个文件,因为它是由系统管理的。然而,若你非要改变 DNS 服务器的 IP 地址,建议你在该文件的每一行中,都应该遵循下面的结构:
|
||||
|
||||
nameserver - IP address
|
||||
|
||||
例如,
|
||||
|
||||
nameserver 8.8.8.8
|
||||
|
||||
3. `/etc/host.conf` 指定在一个网络中主机名被解析的方法和顺序。换句话说,它告诉名称解析器使用哪些服务,以及以什么顺序来使用。
|
||||
|
||||
尽管这个文件有几个选项,但最为常见和基本的设置包含如下的一行:
|
||||
|
||||
order bind,hosts
|
||||
|
||||
它意味着解析器应该首先查看 `resolv.conf` 中指定的域名服务器,然后再到 `/etc/hosts` 文件中查找要解析的名称。
|
||||
|
||||
4. `/etc/sysconfig/network` 包含了所有网络接口的路由和全局主机信息。下面的值可能会被使用:
|
||||
|
||||
NETWORKING=yes|no
|
||||
HOSTNAME=value
|
||||
|
||||
其中的 value 应该是全称域名(FQDN)。
|
||||
|
||||
GATEWAY=XXX.XXX.XXX.XXX
|
||||
|
||||
其中的 XXX.XXX.XXX.XXX 是网关的 IP 地址。
|
||||
|
||||
GATEWAYDEV=value
|
||||
|
||||
在一个带有多个网卡的机器中, value 为网关设备名,例如 enp0s3。
|
||||
|
||||
5. 位于 `/etc/sysconfig/network-scripts` 中的文件(网络适配器配置文件)。
|
||||
|
||||
在上面提到的目录中,你将找到几个被命名为如下格式的文本文件。
|
||||
|
||||
ifcfg-name
|
||||
|
||||
其中 name 为网卡的名称,由 `ip link show` 返回:
|
||||
|
||||
![检查网络连接状态](http://www.tecmint.com/wp-content/uploads/2015/05/Check-IP-Address.png)
|
||||
|
||||
检查网络连接状态
|
||||
|
||||
例如:
|
||||
|
||||
![网络文件](http://www.tecmint.com/wp-content/uploads/2015/05/Network-Files.png)
|
||||
|
||||
网络文件
|
||||
|
||||
除了环回接口,你还可以为你的网卡进行一个相似的配置。注意,假如设定了某些变量,它们将为这个特别的接口,覆盖掉 `/etc/sysconfig/network` 中定义的值。在这篇文章中,为了能够解释清楚,每行都被加上了注释,但在实际的文件中,你应该避免加上注释:
|
||||
|
||||
HWADDR=08:00:27:4E:59:37 # The MAC address of the NIC
|
||||
TYPE=Ethernet # Type of connection
|
||||
BOOTPROTO=static # This indicates that this NIC has been assigned a static IP. If this variable was set to dhcp, the NIC will be assigned an IP address by a DHCP server and thus the next two lines should not be present in that case.
|
||||
IPADDR=192.168.0.18
|
||||
NETMASK=255.255.255.0
|
||||
GATEWAY=192.168.0.1
|
||||
NM_CONTROLLED=no # Should be added to the Ethernet interface to prevent NetworkManager from changing the file.
|
||||
NAME=enp0s3
|
||||
UUID=14033805-98ef-4049-bc7b-d4bea76ed2eb
|
||||
ONBOOT=yes # The operating system should bring up this NIC during boot
|
||||
|
||||
### 设定主机名 ###
|
||||
|
||||
在 RHEL 7 中, `hostnamectl` 命令被同时用来查询和设定系统的主机名。
|
||||
|
||||
要展示当前的主机名,输入:
|
||||
|
||||
# hostnamectl status
|
||||
|
||||
![在RHEL 7 中检查系统的主机名](http://www.tecmint.com/wp-content/uploads/2015/05/Check-System-hostname.png)
|
||||
|
||||
检查系统的主机名
|
||||
|
||||
要更改主机名,使用
|
||||
|
||||
# hostnamectl set-hostname [new hostname]
|
||||
|
||||
例如,
|
||||
|
||||
# hostnamectl set-hostname cinderella
|
||||
|
||||
要想使得更改生效,你需要重启 hostnamed 守护进程(这样你就不必因为要应用更改而登出系统并再登陆系统):
|
||||
|
||||
# systemctl restart systemd-hostnamed
|
||||
|
||||
![在 RHEL7 中设定系统主机名](http://www.tecmint.com/wp-content/uploads/2015/05/Set-System-Hostname.png)
|
||||
|
||||
设定系统主机名
|
||||
|
||||
另外, RHEL 7 还包含 `nmcli` 工具,它可被用来达到相同的目的。要展示主机名,运行:
|
||||
|
||||
# nmcli general hostname
|
||||
|
||||
且要改变主机名,则运行:
|
||||
|
||||
# nmcli general hostname [new hostname]
|
||||
|
||||
例如,
|
||||
|
||||
# nmcli general hostname rhel7
|
||||
|
||||
![使用 nmcli 命令来设定主机名](http://www.tecmint.com/wp-content/uploads/2015/05/nmcli-command.png)
|
||||
|
||||
使用 nmcli 命令来设定主机名
|
||||
|
||||
### 在开机时开启网络服务 ###
|
||||
|
||||
作为本文的最后部分,就让我们看看如何确保网络服务在开机时被自动开启。简单来说,这是通过根据服务 unit 配置文件中 [Install] 小节的内容,在特定目录下创建指向该文件的符号链接来实现的。
|
||||
|
||||
以 firewalld(/usr/lib/systemd/system/firewalld.service) 为例:
|
||||
|
||||
[Install]
|
||||
WantedBy=basic.target
|
||||
Alias=dbus-org.fedoraproject.FirewallD1.service
|
||||
|
||||
要开启该服务,运行:
|
||||
|
||||
# systemctl enable firewalld
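enable 所创建的链接路径可以直接根据 [Install] 小节推导出来。下面的小片段(仅为示意,并不真正创建链接)拼出 firewalld 对应的链接路径:

```shell
# WantedBy=basic.target 意味着链接会创建在 basic.target.wants 目录下
unit=firewalld.service
wanted_by=basic.target
link="/etc/systemd/system/${wanted_by}.wants/${unit}"
echo "$link"
```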
|
||||
|
||||
另一方面,要禁用 firewalld,则需要移除符号链接:
|
||||
|
||||
# systemctl disable firewalld
|
||||
|
||||
![在开机时开启服务](http://www.tecmint.com/wp-content/uploads/2015/05/Enable-Service-at-System-Boot.png)
|
||||
|
||||
在开机时开启服务
|
||||
|
||||
### 总结 ###
|
||||
|
||||
在这篇文章中,我们总结了如何安装 SSH 及使用它安全地连接到一个 RHEL 服务器,如何改变主机名,并在最后如何确保在系统启动时开启服务。假如你注意到某个服务启动失败,你可以使用 `systemctl status -l [service]` 和 `journalctl -xn` 来进行排错。
|
||||
|
||||
请随意使用下面的评论框来让我们知晓你对本文的看法。提问也同样欢迎。我们期待着你的反馈!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/rhcsa-series-secure-ssh-set-hostname-enable-network-services-in-rhel-7/
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gacanepa/
|
||||
[1]:http://www.tecmint.com/20-netstat-commands-for-linux-network-management/
|
||||
[2]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
|