Merge pull request #11 from LCTT/master

update
This commit is contained in:
Kevin Sicong Jiang 2015-08-21 19:42:23 -05:00
commit f62f6c2ab7
82 changed files with 8107 additions and 3315 deletions

View File

@ -0,0 +1,56 @@
一周 GNOME 之旅:品味它和 KDE 的是是非非(第一节 介绍)
================================================================================
*作者声明: 如果你是因为某种神迹而在没看标题的情况下点开了这篇文章,那么我想再重申一些东西……这是一篇评论文章,文中的观点都是我自己的,不代表 Phoronix 网站和 Michael 的观点。它们完全是我自己的想法。*
另外,没错……这可能是一篇引战的文章。我希望 KDE 和 Gnome 社团变得更好一些,因为我想发起一个讨论并反馈给他们。为此,当我想指出(我所看到的)一个瑕疵时,我会尽量地做到具体而直接。这样,相关的讨论也能做到同样的具体和直接。再次声明本文另一可选标题为“死于成千上万的[纸割][1]”LCTT 译注paper cuts——纸割即被纸片割伤指易修复但烦人的缺陷。Ubuntu 从 9.10 开始,发起了 [One Hundred Papercuts][3] 项目,用于修复那些小而烦人的易用性问题)。
现在,重申完毕……文章开始。
![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920)
当我把[《评价 Fedora 22 KDE》][2]一文发给 Michael 时,感觉很不是滋味。不是因为我不喜欢 KDE或者不待见 Fedora远非如此。事实上我刚开始想把我的 T450s 的系统换为 Arch Linux 时,马上又决定放弃了,因为我很享受 Fedora 在很多方面所带来的便捷性。
我感觉很不是滋味的原因是Fedora 的开发者花费了大量的时间和精力在他们的“工作站”产品上,但是我却一点也没看到。在使用 Fedora 时,我并没有采用那些主要开发者希望用户采用的使用方式,因此我也就体验不到所谓的“Fedora 体验”。它感觉就像一个人评价 Ubuntu 时用的却是 Kubuntu评价 OS X 时用的却是 Hackintosh或者评价 Gentoo 时用的却是 Sabayon。根据论坛里大量读者对 Michael 的说法,他们在评价各种发行版时都是使用的默认设置——我也不例外。但是我还是认为这些评价应该在“真实”配置下完成,当然我也知道在给定的情况下评论某些东西也的确是有价值的——无论是好是坏。
正是在怀着这种态度的情况下,我决定跳到 Gnome 这个水坑里来泡泡澡。
但是,我还要在此多加一个声明……我在这里所看到的 KDE 和 Gnome 都是打包在 Fedora 中的。OpenSUSE、Kubuntu、Arch 等发行版的各个桌面可能有不同的实现方法,使得我这里所说的具体的“痛点”跟你所用的发行版有所不同。还有,虽然用了这个标题,但这篇文章将会是一篇“很 KDE”的重量级文章。之所以这样称呼是因为我在“使用” Gnome 之后,才知道 KDE 的“纸割”到底有多么的多。
### 登录界面 ###
![Gnome 登录界面](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920)
我一般情况下都不会介意发行版带着它们自己的特别主题,因为一般情况下桌面看起来会更好看。可我今天可算是找到了一个例外。
第一印象很重要对吧那么GDMLCTT 译注Gnome Display ManagerGnome 显示管理器)绝对干得漂亮。它的登录界面看起来极度简洁,每一部分都应用了一致的设计风格。使用通用图标而不是文本框为它的简洁加了分。
![ KDE 登录界面](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920)
这并不是说 Fedora 22 KDE——现在已经是 SDDM 而不是 KDM 了——的登录界面不好看,但是看起来绝对没有它这样和谐。
问题到底出在哪?顶部栏。看看 Gnome 的截图——你选择一个用户,然后用一个很小的齿轮简单地选择想登入哪个会话。设计很简洁,一点都不碍事,实话讲,如果你没注意的话可能完全会看不到它。现在看看那蓝色LCTT 译注blue 有忧郁之意,一语双关)的 KDE 截图,顶部栏看起来甚至不像是用同一个工具渲染出来的,它的整个位置的安排好像是某人想着:“哎哟妈呀,我们需要把这个选项扔在哪个地方……”之后决定下来的。
对于右上角的重启和关机选项也一样。为什么不单单用一个电源按钮,点击后会下拉出一个菜单,里面包括重启,关机,挂起的功能?按钮的颜色跟背景色不同肯定会让它更加突兀和显眼……但我可不觉得这样子有多好。同样,这看起来可真像“苦思”后的决定。
从实用观点来看GDM 还要远远实用得多,再看看顶部一栏。时间被列了出来,还有一个音量控制按钮,如果你想保持周围安静,你甚至可以在登录前设置静音,还有一个可用性按钮来实现高对比度、缩放、语音转文字等功能,所有可用的功能通过简单的一个开关按钮就能得到。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920)
切换到上游KDE 自带)的 Breeze 主题……突然间,我抱怨的大部分问题都被解决了。通用图标,所有东西都放在了屏幕中央,但不是那么重要的被放到了一边。因为屏幕顶部和底部都是同样的空白,在中间也就酝酿出了一种美好的和谐。还是有一个文本框来切换会话,但既然电源按钮被做成了通用图标,那么这点还算可以原谅。当前时间以一种漂亮的感觉呈现,旁边还有电量指示器。当然 Gnome 还是有一些很好的附加物,例如音量小程序和可用性按钮,但 Breeze 总归要比 Fedora 的 KDE 主题进步。
到 WindowsWindows 8 和 10 之前)或者 OS X 中去你会看到类似的东西——非常简洁的、“不碍事”的锁屏与登录界面它们都没有文本框或者其它分散视觉的小工具。这是一种有效的、不分散人注意力的设计。Fedora……请默认使用 Breeze 主题吧。VDG 在 Breeze 主题设计上干得不错,可别糟蹋了它。
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1
作者Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts
[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1
[3]:https://launchpad.net/hundredpapercuts

View File

@ -0,0 +1,32 @@
一周 GNOME 之旅:品味它和 KDE 的是是非非(第二节 GNOME桌面
================================================================================
### 桌面 ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920)
在我这一周的前五天中,我都是直接手动登录进 Gnome 的——没有打开自动登录功能。在第五天的晚上每一次都要手动登录让我觉得很厌烦所以我就到用户管理器中打开了自动登录功能。下一次我登录的时候收到了一个提示“你的密钥链keychain未解锁请输入你的密码解锁”。在这时我才意识到了什么……每当我通过 GDM 登录时Gnome 以前一直都在自动解锁我的密钥链KDE 中叫做我的钱包)!当我绕开 GDM 的登录程序时Gnome 才不得不介入,让我手动解锁。
现在,鄙人的陋见是如果你打开了自动登录功能,那么你的密钥链也应当自动解锁——否则,这功能还有何用?无论如何,你还是需要输入你的密码,况且在 GDM 登录界面你还有机会选择要登录的会话,如果你想换的话。
但是,这点且不提,也就是在那一刻,我意识到要让这桌面感觉就像它在**和我**一起工作一样是多么简单的一件事。当我通过 SDDM 登录 KDE 时?甚至连启动界面都还没加载完成,就有一个窗口弹出来遮挡了启动动画(因此启动动画也就被破坏了),它提示我解锁我的 KDE 钱包或 GPG 钥匙环。
如果当前还没有钱包,你就会收到一个创建钱包的提醒——就不能在创建用户的时候同时为我创建一个吗?接着它又让你在两种加密模式中选择一种,甚至还暗示我们其中一种Blowfish是不安全的。既然是为了安全为什么还要我选择一个不安全的东西作者声明如果你安装了真正的 KDE spin 版本,而不是仅仅安装了被 KDE 搞过的版本,那么在创建用户时,它就会为你创建一个钱包。但很不幸的是,它不会帮你自动解锁,并且它似乎还使用了更老的 Blowfish 加密模式,而不是更新而且更安全的 GPG 模式。)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920)
如果你选择了那个安全的加密模式GPG那么它会尝试加载 GPG 密钥……我希望你已经创建过一个了,因为如果你没有,那么你可又要被指责一番了。怎么样才能创建一个?额……它不帮你创建一个……也不告诉你怎么创建……假如你真的搞明白了,你应该使用 KGpg 来创建一个密钥,接着你就会遇到一层层的菜单和一个个的提示,而这些菜单和提示只能让新手感到困惑。为什么你要问我 GPG 的二进制文件在哪?天知道在哪!如果不止一个,你就不能为我选择一个最新的吗?如果只有一个,我再问一次,为什么你还要问我?
为什么你要问我要使用多大的密钥大小和加密算法?你既然默认选择了 2048 和 RSA/RSA为什么不直接使用如果你想让这些选项能够被修改那就把它们放进下面的“Expert mode专家模式按钮里去。这里不仅仅是说让配置可被用户修改的问题而是说根本不需要默认把多余的东西扔在用户面前。这种问题将会成为这篇文章剩下的主要内容之一……KDE 需要更理智的默认配置。配置是好的,我很喜欢在使用 KDE 时的配置,但它还需要知道什么时候应该、什么时候不应该去提示用户。而且它还需要知道“嗯,它是可配置的”不能作为默认配置做得不好的借口。用户最先接触到的就是默认配置,不好的默认配置注定要失去用户。
让我们抛开密钥链的问题,因为我想我已经表达出了我的想法。
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2
作者Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,66 @@
一周 GNOME 之旅:品味它和 KDE 的是是非非(第三节 GNOME应用
================================================================================
### 应用 ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_videos_show&w=1920)
这是一个基本扯平的方面。每一个桌面环境都有一些非常好的应用也有一些不怎么样的。再次强调Gnome 把那些 KDE 完全错失的小细节给做对了。我不是想说 KDE 中有哪些应用不好,它们都能工作,但仅此而已。也就是说:它们合格了,但确实还没有达到甚至接近 100 分。
Gnome 在左KDE 在右。Dragon 播放器运行得很好清晰地标出了播放文件、URL 或光盘的按钮,正如你在 Gnome Videos 中能做到的一样……但是在文件名的便利性和用户友好度方面Gnome 多走了一小步。它默认显示了在你的电脑上检测到的所有影像文件不需要你做任何事情。KDE 有 [Baloo][1](正如之前的 [Nepomuk][2]LCTT 译注:这是 KDE 中一种文件索引服务框架),为什么不使用它们?它们能列出可读取的影像文件……但却没被使用。
下一步……音乐播放器
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_rhythmbox_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_amarok_show&w=1920)
这两个应用,左边的是 Rhythmbox右边的是 Amarok都是打开后没有做任何修改直接截屏的。看到差别了吗Rhythmbox 看起来像个音乐播放器,直截了当,排序文件的方法也很清晰,它知道它应该是什么样的,它的工作是什么:就是播放音乐。
Amarok 感觉就像是某个人为了展示而把所有的扩展和选项都尽可能地塞进一个应用程序中去而做出来的一个技术演示产品tech demo或者一个库演示产品library demo——而这些是不应该作为产品装进去的它只应该展示其中一点东西。而 Amarok 给人的感觉却是这样的:好像是某个人想把每一个感觉可能很酷的东西都塞进一个媒体播放器里,甚至都不停下来想想“我想写啥来着?一个播放音乐的应用?”
看看默认布局就行了。前面和中心都呈现了什么?一个可视化工具和集成了维基百科——占了整个页面最大和最显眼的区域。第二大的呢?播放列表。第三大,同时也是最小的呢?真正的音乐列表。这种默认设置对于一个核心应用来说,怎么可能称得上理智?
软件管理器!它在最近几年当中有很大的进步,而且接下来的几个月中,很可能只能看到它更大的进步。不幸的是,这是另一个 KDE 差一点点就能做好……但还是在终点线前摔了个脸着地的地方。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_software_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apper_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon_show&w=1920)
Gnome 软件中心可能是我的新的最爱的软件中心先放下牢骚等下再发。Muon我想爱上你真的。但你就是个设计上的梦魇。当 VDG 给你画设计草稿时(草图如下),你看起来真漂亮。白色空间用得很好,设计简洁,类别列表也很好,你的整个“不要分开做成两个应用程序”的设计都很不错。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon1_show&w=1920)
接着就有人为你写代码,实现真正的 UI但是我猜这些家伙当时一定是喝醉了。
我们来看看 Gnome 软件中心。正中间是什么软件软件截图和软件描述等等。Muon 的正中心是什么白白浪费的大块白色空间。Gnome 软件中心还有一个贴心便利的特点,那就是放了一个“运行”的按钮在那儿,以防你已经安装了这个软件。便利性和易用性很重要啊,大哥。说实话,仅仅让 Muon 把东西都居中对齐了,可能看起来的效果都要好得多。
Gnome 软件中心沿着顶部的东西是什么像个标签列表所有软件、已安装软件、软件升级。语言简洁、直接、直指要点。Muon好吧我们有个“发现”这个语言表达上还算差强人意然后我们又有一个“已安装软件”然后就没有然后了。软件升级哪去了
好吧……开发者决定把升级独立分开成一个应用程序,这样你就得打开两个应用程序才能管理你的软件——一个用来安装,一个用来升级——自从有了新立得图形软件包管理器以来,首次有这种破天荒的设计,与任何已存的软件中心的设计范例相违背。
我不想贴上截图给你们看,因为我不想等下还得清理我的电脑。如果你进入 Muon 安装了什么,那么它就会在屏幕下方根据安装的应用名创建一个标签,所以如果你一次性安装很多软件的话,下面的标签数量就会慢慢增长,然后你就不得不手动检查清除它们,因为如果你不这样做,当标签增长到超过屏幕显示时,你就不得不一个个找过去才能找到最近正在安装的软件。想想:在火狐浏览器中打开 50 个标签是什么感受。太烦人,太不方便!
我说过我会给 Gnome 一点打击我是认真的。Muon 有一点做得比 Gnome 软件中心好。在 Muon 的设置栏下面有个“显示技术包”,即:编辑器、软件库、非图形应用程序、无 AppData 的应用等等LCTT 译注AppData 是软件包中的一个特殊文件用于专门存储软件的信息。Gnome 则没有。如果你想安装其中任何一项,你必须跑到终端操作。我想这是他们做得不对的一点。我完全理解他们推行 AppData 的心情但我想他们太急了LCTT 译注:推行所有软件包带有 AppData 是 Gnome 软件中心的目标之一)。我是在想安装 PowerTop、而 Gnome 不显示这个软件时才发现这点的——因为它没有 AppData也没有“显示技术包”设置。
更不幸的事实是,如果你在 KDE 下你不能说“用 [Apper][3] 就行了”,因为……
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apperlocal_show&w=1920)
Apper 对安装本地软件包的支持大约在 Fedora 19 时就中止了,几乎两年了。我喜欢关注细节与质量。
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=3
作者Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://community.kde.org/Baloo
[2]:http://www.ikde.org/tech/kde-tech-nepomuk/
[3]:https://en.wikipedia.org/wiki/Apper

View File

@ -0,0 +1,52 @@
一周 GNOME 之旅:品味它和 KDE 的是是非非(第四节 GNOME设置
================================================================================
### 设置 ###
在这里我要挑一挑几个特定 KDE 控制模块的毛病大部分原因是因为相比它们的对手GNOME来说它们糟糕得太可笑实话说真是悲哀。
第一个接招的?打印机。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920)
GNOME 在左KDE 在右。你知道左边跟右边的打印程序有什么区别吗?当我在 GNOME 控制中心打开“打印机”时,程序窗口弹出来了,然后这样就可以使用了。而当我在 KDE 系统设置打开“打印机”时,我得到了一条密码提示。甚至我都没能看一眼打印机呢,我就必须先交出 ROOT 密码。
让我再重复一遍。在今天这个有了 PolicyKit 和 Logind 的日子里,对一个应该是 sudo 的操作,我依然被询问要求 ROOT 的密码。我安装系统的时候甚至都没设置 root 密码。所以我必须跑到 Konsole 去,接着运行 `sudo passwd root` 命令,这样我才能给 root 设一个密码,然后我才能回到系统设置中的打印程序,再交出 root 密码,然后仅仅是看一看哪些打印机可用。完成了这些工作后,当我点击“添加打印机”时,我再次得到请求 ROOT 密码的提示,当我解决了它后再选择一个打印机和驱动时,我再次得到请求 ROOT 密码的提示。仅仅是为了添加一个打印机到系统我就收到三次密码请求!
而在 GNOME 下添加打印机,在点击打印机程序中的“解锁”之前,我没有得到任何请求 SUDO 密码的提示。整个过程我只被请求过一次仅此而已。KDE求你了……采用 GNOME 的“解锁”模式吧。不到一定需要的时候不要发出提示。还有,不管是哪个库,只要它允许 KDE 应用程序绕过 PolicyKit/Logind如果有的话并直接请求 ROOT 权限……那就把它封进箱里吧。如果这是个多用户系统,那我要么必须交出 ROOT 密码,要么我必须时时刻刻待命,以免有一个用户需要升级、更改或添加一个新的打印机。而这两种情况都是完全无法接受的。
还有一件事……
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920)
这个问题问大家:怎么样看起来更简洁?我在写这篇文章时意识到:当有任何附加的打印机准备好时Gnome 打印机程序会把过程做得非常简洁,它们在左边放了一个竖直栏来列出这些打印机。而我在 KDE 中添加第二台打印机时,它突然增加出一个左边栏来。而在添加之前,我脑海中已经有了一个恐怖的画面,它会像图片文件夹显示预览图一样直接在界面里插入另外一个图标。我很高兴也很惊讶地看到我是错的。但是事实是它直接“长出”另外一个从未存在的竖直栏,彻底改变了它的界面布局,而这样也称不上“好”。终究还是一种令人困惑、奇怪而又不直观的设计。
打印机说得够多了……下一个接受我公开石刑的 KDE 系统设置是?多媒体,即 Phonon。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920)
一如既往GNOME 在左边KDE 在右边。让我们先看看 GNOME 的系统设置……眼睛移动是从左到右、从上到下,对吧?来吧,就这样做。首先:音量控制滑条。滑条中的蓝色条与空条百分百清晰地消除了哪边是“音量增加”的困惑。在音量控制条后马上就是一个 On/Off 开关用来开关静音功能。Gnome 的再次得分在于静音后能记住当前设置的音量而在点击音量增加按钮取消静音后能回到原来设置的音量中来。Kmixer你个健忘的垃圾我真的希望我能多讨论你一下。
继续输入、输出和应用程序的标签选项每一个应用程序的音量随时可控Gnome每过一秒我爱你越深。还有音量均衡选项、声音配置和清晰标出的“测试麦克风”选项。
我不清楚它能否以一种更干净更简洁的设计实现。是的,它只是一个 Gnome 化的 Pavucontrol但我想这就是重要的地方。Pavucontrol 在这方面几乎完全做对了Gnome 控制中心中的“声音”应用程序的改善使它向完美更进了一步。
Phonon该你上了。但开始前我想说我 TM 看到的是什么?!我知道我看到的是音频设备的优先级列表,但是它呈现的方式有点太坑。还有,那些用户可能关心的那些东西哪去了?拥有一个优先级列表当然很好,它也应该存在,但问题是优先级列表属于那种用户乱搞一两次之后就不会再碰的东西。它还不够重要,或者说不够常用到可以直接放在正中间位置的程度。音量控制滑块呢?对每个应用程序的音量控制功能呢?那些用户使用最频繁的东西呢?好吧,它们在 Kmix 中,一个分离的程序,拥有它自己的配置选项……而不是在系统设置下……这样真的让“系统设置”这个词变得有点用词不当。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920)
上面展示的是 Gnome 的网络设置。KDE 的没有展示,原因就是我接下来要吐槽的内容了。如果你进入 KDE 的系统设置里然后点击“网络”区域中三个选项中的任何一个你会得到一大堆的选项蓝牙设置、Samba 分享的默认用户名和密码说真的“连通性Connectivity”下面只有两个选项SMB 的用户名和密码。TMD 怎么就配得上“连通性”这么大的词?)、浏览器身份验证控制(只有 Konqueror 能用……一个已经倒闭的项目)、代理设置,等等……我的 wifi 设置哪去了?它们没在这。哪去了?好吧,它们在网络应用程序的设置里面……而不是在网络设置里……
KDE你这是要杀了我啊你有“系统设置”当凶器拿着它动手吧
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4
作者Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,40 @@
一周 GNOME 之旅:品味它和 KDE 的是是非非(第五节 总结)
================================================================================
### 用户体验和最后想法 ###
当 Gnome 2.x 和 KDE 4.x 要正面交锋时……我在它们之间左右摇摆。我对它们爱恨交织,但总的来说它们使用起来还算是一种乐趣。然后 Gnome 3.x 来了,带着一场 Gnome Shell 的戏剧。那时我就放弃了 Gnome我尽我所能地避开它。当时它对用户是不友好的而且不直观它打破了原有的设计典范只为平板统治世界做准备……而根据平板下跌的销量来看这样的未来不可能实现。
在 Gnome 3 后续发布了八个版本后奇迹发生了。Gnome 变得对用户友好了变得直观了。它完美吗当然不。我还是很讨厌它想推动的那种设计范例我讨厌它总想给我强加一种工作流work flow但是在付出时间和耐心后这两者都能被接受。只要你能够回头去看看 Gnome Shell 那外星人一样的界面,然后开始跟 Gnome 的其它部分(特别是控制中心)互动,你就能发现 Gnome 绝对做对了:细节,对细节的关注!
人们能适应新的界面设计范例,能适应新的工作流—— iPhone 和 iPad 都证明了这一点——但真正让他们操心的一直是“纸割”——那些不完美的细节。
它带出了 KDE 和 Gnome 之间最重要的一个区别。Gnome 感觉像一个产品,像一种非凡的体验。你用它的时候,觉得它是完整的,你要的东西都触手可及。它让人感觉就像是一个拥有 Windows 或者 OS X 那样桌面体验的 Linux 桌面版:你要的都在里面,而且它是被同一个目标一致的团队中的同一个人写出来的。天,即使是一个应用程序发出的 sudo 请求都感觉是 Gnome 下的一个特意设计的部分,就像在 Windows 下的一样。而在 KDE 下感觉就是随便一个应用程序都能创建的那种各种外观的弹窗。它不像是以系统本身这样的正式身份停下来说“嘿,有个东西要请求管理员权限!你要给它吗?”。
KDE 让人体验不到有凝聚力的体验。KDE 像是在没有方向地打转,感觉没有完整的体验。它就像是一堆东西往不同的方向移动,只不过恰好它们都有一个共同享有的工具包而已。如果开发者对此很开心,那么好吧,他们开心就好,但是如果他们想提供最好体验的话,那么就需要多关注那些小地方了。用户体验和直观性应当作为每一个应用程序的设计中心,应当有一个视野,知道 KDE 要提供什么——并且——知道它看起来应该是什么样的。
是不是有什么原因阻止我在 KDE 下使用 Gnome 磁盘管理Rhythmbox 呢Evolution 呢没有没有没有。但是这样说又错过了关键。Gnome 和 KDE 都称它们自己为“桌面环境”。那么它们就应该是完整的环境,这意味着它们的各个部件应该汇集并紧密结合在一起,意味着你应该使用它们环境下的工具,因为它们说“您在一个完整的桌面中需要的任何东西,我们都支持。”说真的?只有 Gnome 看起来能符合完整的要求。KDE 在“汇集在一起”这一方面感觉就像个半成品更不用说提供“完整体验”中你所需要的东西。Gnome 磁盘管理没有相应的对手——kpartitionmanager 要求 ROOT 权限。KDE 不运行“首次用户注册”的过程LCTT 译注:原文为“No 'First Time User' run through”可能是指系统安装过程中 KDE 没有创建新用户的过程现在也不过是在 Kubuntu 下引入了一个用户管理器。老天Gnome 甚至提供了地图、笔记、日历和时钟应用。这些应用都是百分百要紧的吗?不,当然不了。但是正是这些应用帮助 Gnome 推动“Gnome 是一种完整丰富的体验”的想法。
我吐槽的 KDE 问题并非不可能解决,绝对不是这样的!但是它需要人去关心它。它需要开发者为他们的作品感到自豪,而不仅仅是为它们实现的功能而感到自豪——组织的价值可大了去了。别夺走用户设置选项的能力——GNOME 3.x 就是因为缺乏配置选项的能力而为我所诟病,但别把“好吧,你想怎么设置就怎么设置”作为借口而不提供任何理智的默认设置。默认设置是用户将看到的东西,它们是用户从打开软件的第一刻开始进行评判的关键。给用户留个好印象吧。
我知道 KDE 开发者们知道设计很重要,这也是 VDGVisual Design Group视觉设计组存在的原因但是感觉好像他们没有让 VDG 充分发挥,所以 KDE 里存在组织上的缺陷。不是 KDE 没办法完整,不是它没办法汇集整合在一起然后解决衰败问题,只是开发者们没做到。他们瞄准了靶心……但是偏了。
还有,在任何人说这句话之前……千万别说“欢迎给我们提交补丁啊”。因为当我开心地为某个人提交补丁时,只要开发者坚持以他们喜欢的却不直观的方式干事,更多这样的烦人事就会不断发生。问题不在于 Muon 没有居中对齐,不在于 Amarok 的界面太丑,也不在于每次我敲下快捷键后,弹出的音量和亮度调节窗口占用了我一大块的屏幕“地皮”(说真的,真该有人把这些东西缩小一下)。
这跟心态的冷漠有关,跟开发者们在为他们的应用设计 UI 时根本就不多加思考有关。KDE 团队做的东西都工作得很好。Amarok 能播放音乐Dragon 能播放视频Kwin 或者说 Qt 和 kdelibs 似乎比 Mutter/gtk 更快、更有效率(仅根据我的电池电量消耗计算,非科学性测试)。这些都很好,很重要……但是它们呈现的方式也很重要。甚至可以说,呈现方式是最重要的,因为它是用户看到的并与之交互的东西。
KDE 应用开发者们……让 VDG 参与进来吧。让 VDG 审查并核准每一个“核心”应用,让一个 VDG 的 UI/UX 专家来设计应用的使用模式和使用流程,以此保证其直观性。真见鬼,不管你们在开发的是啥应用,仅仅把它的模型发到 VDG 论坛寻求反馈,甚至都可能得到一些非常好的指点跟反馈。你有这么好的资源在这,现在赶紧用吧。
我不想说得好像我一点都不懂感恩。我爱 KDE我爱那些志愿者们为了给 Linux 用户一个可视化的桌面而付出的工作与努力,也爱可供选择的 Gnome。正是因为我关心我才写这篇文章。因为我想看到更好的 KDE我想看到它走得比以前更加遥远。而这样做需要每个人继续努力并且需要人们不再躲避批评。它需要人们对系统互动及系统崩溃的地方都保持诚实。如果我们不能直言批评,如果我们不说“这真垃圾!”,那么情况永远不会变好。
这周后我会继续使用 Gnome 吗可能不应该不会。Gnome 还在试着强迫我接受其工作流,而我不想追随,也不想遵循,因为我在使用它的时候感觉变得不够高效,因为它并不遵循我的思维模式。可是对于我的朋友们,当他们问我“我该用哪种桌面环境?”时,我可能会推荐 Gnome特别是对那些不大懂技术、只要求“能工作”就行的朋友。根据目前 KDE 的形势来看,这可能是我能说出的最狠毒的评估了。
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5
作者Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,120 @@
如何更新 Linux 内核来提升系统性能
================================================================================
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/update-linux-kernel-644x373.jpg?2c3c1f)
目前 [Linux 内核][1]的开发速度是前所未有的,大概每 2 到 3 个月就会有一个主要的版本发布。每个发布都带来几个新的功能和改进,可以让很多人的计算体验更快、更有效率,或者在其它方面更好。
问题是,你不能在这些内核发布的时候就用它们,你要等到你的发行版带来新内核的发布。我们先前讲到[定期更新内核的好处][2],所以你不必等到那时。让我们来告诉你该怎么做。
> 免责声明: 我们先前的一些文章已经提到过,升级内核有(很小的)风险可能会破坏你的系统。如果发生这种情况,通常可以通过使用旧内核来使系统保持工作,但是有时还是不行。因此我们对系统的任何损坏都不负责,你得自己承担风险!
### 预备工作 ###
要更新你的内核,你首先要确定你使用的是 32 位还是 64 位的系统。打开终端并运行:
uname -a
检查一下输出的是 x86\_64 还是 i686。如果是 x86\_64你就运行 64 位的版本否则就运行 32 位的版本。千万记住这一点,这很重要。
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/linux_kernel_arch.jpg?2c3c1f)
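上面的判断逻辑也可以写成一个小脚本(示意性写法,输出文案是为演示自拟的):

```shell
# 根据 uname -m 的输出判断系统架构
arch=$(uname -m)
if [ "$arch" = "x86_64" ]; then
    echo "64 位系统,选择 amd64 的包"
else
    echo "32 位系统,选择 i686 的包"
fi
```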
接下来,访问[官方的 Linux 内核网站][3]它会告诉你目前稳定内核的版本。愿意的话你可以尝试下发布预选版RC但是它比稳定版少了很多测试。除非你确定需要发布预选版否则就用稳定内核。
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/kernel_latest_version.jpg?2c3c1f)
### Ubuntu 指导 ###
对 Ubuntu 及其衍生版的用户而言升级内核非常简单,这要感谢 Ubuntu 主线内核 PPA。虽然官方把它叫做 PPA但是你不能像其他 PPA 一样将它添加到你软件源列表中,并指望它自动升级你的内核。实际上,它只是一个简单的网页,你应该浏览并下载到你想要的内核。
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_new_kernels.jpg?2c3c1f)
现在,访问这个[内核 PPA 网页][4]并滚到底部。列表的最下面会含有最新发布的预选版本你可以在名字中看到“rc”字样但是就在这上面也可以看到最新的稳定版说得更清楚些本文写作时最新的稳定版是 4.1.2。LCTT 译注:这里虽然 4.1.2 是当时的稳定版,但是由于尚未进入 Ubuntu 发行版中,所以文件夹名称为“-unstable”。点击文件夹名称你会看到几个选择。你需要下载 3 个文件并保存到它们自己的文件夹中(如果你喜欢的话可以放在下载文件夹中),以便它们与其它文件相隔离:
1. 针对架构的含“generic”通用的头文件我这里是64位即“amd64”
2. 放在列表中间在文件名末尾有“all”的头文件
3. 针对架构的含“generic”内核文件再说一次我会用“amd64”但是你如果用32位的你需要使用“i686”
你还可以在下面看到含有“lowlatency”低延时的文件但最好忽略它们。这些文件相对不稳定只为那些通用文件不能满足低延迟需求像音频录制这类任务的人准备。再说一次首选通用版除非通用版不能很好地满足你特定的任务需求。一般的游戏和网络浏览不是使用低延时版的理由。
你把它们放在各自的文件夹下,对么?现在打开终端,使用`cd`命令切换到新创建的文件夹下,如
cd /home/user/Downloads/Kernel
接着运行:
sudo dpkg -i *.deb
这个命令会标记文件夹中所有的“.deb”文件为“待安装”,接着执行安装。这是推荐的安装方法,因为不能简单地选择其中一个文件安装,那样总会报出依赖问题。像这样一起安装就可以避免这个问题。如果你不清楚 `cd` 和 `sudo` 是什么,可以快速地看一下 [Linux 基本命令][5]这篇文章。
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_install_kernel.jpg?2c3c1f)
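顺带一提,`dpkg -i *.deb` 里的 `*.deb` 是由 shell 展开成文件名列表后再交给 dpkg 的。可以用几个空文件做个无害的小实验来观察这一点(示意,文件名为虚构):

```shell
# 演示 shell 如何把 *.deb 展开成文件夹中所有的 .deb 文件名
mkdir -p /tmp/kernel-demo
cd /tmp/kernel-demo
touch linux-headers_all.deb linux-headers_amd64.deb linux-image_amd64.deb
echo *.deb    # dpkg -i *.deb 收到的就是这几个文件名
```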
安装完成后,**重启**你的系统,这时应该就会运行刚安装的内核了!你可以在命令行中使用`uname -a`来检查输出。
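重启后检查时,也可以只用 `uname -r` 查看内核版本号(示意,注释中的版本号仅为举例):

```shell
uname -r    # 只输出内核版本例如4.1.2-040102-generic
```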
### Fedora 指导 ###
如果你使用的是 Fedora 或者它的衍生版,过程跟 Ubuntu 很类似。不同的是文件获取的位置不同,安装的命令也不同。
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/fedora_new_kernels.jpg?2c3c1f)
查看[最新 Fedora 内核构建][6]列表。选取列表中最新的稳定版,并翻页到下面选择 i686 或者 x86_64 版这取决于你的系统架构。这时你需要下载下面这些文件并保存到它们自己的目录下比如下载目录下的“Kernel”文件夹
- kernel
- kernel-core
- kernel-headers
- kernel-modules
- kernel-modules-extra
- kernel-tools
- perf 和 python-perf (可选)
如果你的系统是 i68632位同时你有 4GB 或者更大的内存,你需要下载所有这些文件的 PAE 版本。PAE 是用于32位系统上的地址扩展技术它允许你使用超过 3GB 的内存。
现在使用`cd`命令进入文件夹,像这样
cd /home/user/Downloads/Kernel
接着运行下面的命令来安装所有的文件
yum --nogpgcheck localinstall *.rpm
最后**重启**你的系统,这样你就可以运行新的内核了!
#### 使用 Rawhide ####
另外一个方案是Fedora 用户也可以[切换到 Rawhide][7]它会自动更新所有的包到最新版本包括内核。然而Rawhide 经常会破坏系统(尤其是在早期的开发阶段中),它**不应该**在你日常使用的系统中用。
### Arch 指导 ###
[Arch 用户][8]应该总是使用最新和最棒的稳定版(或者相当接近的版本)。如果你想要更接近最新发布的稳定版,你可以启用测试库,提前 2 到 3 周获取主要的更新。
要这么做,用[你喜欢的编辑器][9]以`sudo`权限打开下面的文件
/etc/pacman.conf
接着取消注释带有 testing 的三行(删除行前面的 # 号)。如果你启用了 multilib 仓库,就对 multilib-testing 做相同的操作。如果想要了解更多,请参考[这个 Arch 的 wiki 页面][10]。
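取消注释这一步也可以用 `sed` 来完成。下面用一个临时的示例文件演示这个替换(假设性示例;真要修改 `/etc/pacman.conf` 时请先备份,并以 `sudo` 运行):

```shell
# 构造一个模拟 pacman.conf 的片段
cat > /tmp/pacman-demo.conf <<'EOF'
#[testing]
#Include = /etc/pacman.d/mirrorlist

[core]
Include = /etc/pacman.d/mirrorlist
EOF

# 取消 [testing] 小节两行行首的 # 注释
sed -i '/^#\[testing\]/,/^#Include/ s/^#//' /tmp/pacman-demo.conf
cat /tmp/pacman-demo.conf
```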
升级内核并不简单(这是有意设计成这样的),但是它会给你带来很多好处。只要你的新内核不会破坏任何东西,你就可以享受它带来的性能提升、更好的效率、更多的硬件支持和潜在的新特性。尤其是你正在使用相对较新的硬件时,升级内核可以帮助到你。
**怎么升级内核这篇文章帮助到你了么?你认为你所喜欢的发行版对内核的发布策略应该是怎样的?**在评论栏里让我们知道!
--------------------------------------------------------------------------------
via: http://www.makeuseof.com/tag/update-linux-kernel-improved-system-performance/
作者:[Danny Stieben][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.makeuseof.com/tag/author/danny/
[1]:http://www.makeuseof.com/tag/linux-kernel-explanation-laymans-terms/
[2]:http://www.makeuseof.com/tag/5-reasons-update-kernel-linux/
[3]:http://www.kernel.org/
[4]:http://kernel.ubuntu.com/~kernel-ppa/mainline/
[5]:http://www.makeuseof.com/tag/an-a-z-of-linux-40-essential-commands-you-should-know/
[6]:http://koji.fedoraproject.org/koji/packageinfo?packageID=8
[7]:http://www.makeuseof.com/tag/bleeding-edge-linux-fedora-rawhide/
[8]:http://www.makeuseof.com/tag/arch-linux-letting-you-build-your-linux-system-from-scratch/
[9]:http://www.makeuseof.com/tag/nano-vs-vim-terminal-text-editors-compared/
[10]:https://wiki.archlinux.org/index.php/Pacman#Repositories

View File

@ -1,47 +1,52 @@
用 CD 创建 ISO观察用户活动和检查浏览器内存的技巧
一些 Linux 小技巧
================================================================================
我已经写过 [Linux 提示和技巧][1] 系列的一篇文章。写这篇文章的目的是让你知道这些小技巧可以有效地管理你的系统/服务器。
![Create Cdrom ISO Image and Monitor Users in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/creating-cdrom-iso-watch-users-in-linux.jpg)
在Linux中创建 Cdrom ISO 镜像和监控用户
*在Linux中创建 Cdrom ISO 镜像和监控用户*
在这篇文章中,我们将看到如何使用 CD/DVD 驱动器中加载到的内容来创建 ISO 镜像,打开随机手册页学习,看到登录用户的详细情况和查看浏览器内存使用量,而所有这些完全使用本地工具/命令无任何第三方应用程序/组件。让我们开始吧...
在这篇文章中,我们将看到如何使用 CD/DVD 驱动器中载入的碟片来创建 ISO 镜像;打开随机手册页学习;看到登录用户的详细情况和查看浏览器内存使用量,而所有这些完全使用本地工具/命令任何第三方应用程序/组件。让我们开始吧……
### 用 CD 创建 ISO 映像 ###
### 用 CD 碟片创建 ISO 映像 ###
我们经常需要备份/复制 CD/DVD 的内容。如果你是在 Linux 平台上,不需要任何额外的软件。所有需要的是进入 Linux 终端。
要从 CD/DVD 上创建 ISO 镜像你需要做两件事。第一件事就是需要找到CD/DVD 驱动器的名称。要找到 CD/DVD 驱动器的名称,可以使用以下三种方法。
**1. 从终端/控制台上运行 lsblk 命令(单个驱动器).**
**1. 从终端/控制台上运行 lsblk 命令(列出块设备)**
$ lsblk
![Find Block Devices in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Block-Devices.png)
找驱动器
*找块设备*
**2.要查看有关 CD-ROM 的信息,可以使用以下命令。**
从上图可以看到sr0 就是你的 cdrom即 /dev/sr0
**2. 要查看有关 CD-ROM 的信息,可以使用以下命令**
$ less /proc/sys/dev/cdrom/info
![Check Cdrom Information](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Cdrom-Inforamtion.png)
检查 Cdrom 信息
*检查 Cdrom 信息*
从上图可以看到,设备名称是 sr0即 /dev/sr0
**3. 使用 [dmesg 命令][2] 也会得到相同的信息,并使用 egrep 来自定义输出。**
命令 dmesg 命令的输出/控制内核缓冲区信息。egrep 命令输出匹配到的行。选项 -i 和 -color 与 egrep 连用时会忽略大小写,并高亮显示匹配的字符串。
dmesg 命令可以输出/控制内核环形缓冲区的信息。egrep 命令输出匹配到的行。egrep 使用选项 -i 和 --color 时会忽略大小写,并高亮显示匹配的字符串。
$ dmesg | egrep -i --color 'cdrom|dvd|cd/rw|writer'
![Find Device Information](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Device-Information.png)
查找设备信息
*查找设备信息*
一旦知道 CD/DVD 的名称后,在 Linux 上你可以用下面的命令来创建 ISO 镜像。
从上图可以看到,设备名称是 sr0即 /dev/sr0
一旦知道 CD/DVD 的名称后,在 Linux 上你可以用下面的命令来创建 ISO 镜像(你看,只需要 cat 即可!)。
$ cat /dev/sr0 > /path/to/output/folder/iso_name.iso
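这里的 `cat` 做的只是逐字节复制。不用真的光盘,用一个普通文件也能验证同样的原理(示意实验,文件路径为临时自拟):

```shell
# 生成一个 1MB 的“假光盘”文件,用 cat 复制,再比较校验和
head -c 1M /dev/urandom > /tmp/fake_disc.img
cat /tmp/fake_disc.img > /tmp/copy.iso
md5sum /tmp/fake_disc.img /tmp/copy.iso    # 两个校验和应当完全一致
```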
@ -49,11 +54,11 @@
![Create ISO Image of CDROM in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Create-ISO-Image-of-CDROM.png)
创建 CDROM 的 ISO 映像
*创建 CDROM 的 ISO 映像*
### 随机打开一个手册页 ###
如果你是 Linux 新人并想学习使用命令行开关,这个修改是为你做的。把下面的代码行添加在`〜/ .bashrc`文件的末尾。
如果你是 Linux 新人并想学习使用命令行开关,这个技巧就是给你的。把下面的代码行添加在 `~/.bashrc` 文件的末尾。
/usr/bin/man $(ls /bin | shuf | head -1)
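这行命令的原理是:`ls /bin` 列出命令名,`shuf` 随机打乱顺序,`head -1` 取出第一个,再交给 `man` 打开。管道部分可以先单独跑一下看看效果(示意):

```shell
# 从 /bin 中随机选出一个命令名
random_cmd=$(ls /bin | shuf | head -1)
echo "$random_cmd"
```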
@ -63,17 +68,19 @@
![LoadKeys Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/LoadKeys-Man-Pages.png)
LoadKeys 手册页
*LoadKeys 手册页*
![Zgrep Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/Zgrep-Man-Pages.png)
Zgrep 手册页
*Zgrep 手册页*
希望你知道如何退出手册页浏览——如果你已经厌烦了每次都看到手册页,你可以删除你添加到 `.bashrc`文件中的那几行。
### 查看登录用户的状态 ###
了解其他用户正在共享服务器上做什么。
一般情况下,你是共享的 Linux 服务器的用户或管理员的。如果你担心自己服务器的安全并想要查看哪些用户在做什么,你可以使用命令 'w'
一般情况下,你是共享的 Linux 服务器的用户或管理员。如果你担心自己服务器的安全并想要查看哪些用户在做什么,你可以使用命令 `w`。
这个命令可以让你知道是否有人在执行恶意代码或篡改服务器,让他停下或使用其他方法。`w` 是查看登录用户状态的首选方式。
@ -83,33 +90,33 @@ Zgrep 手册页
![Check Linux User Activity](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Linux-User-Activity.png)
检查 Linux 用户状态
*检查 Linux 用户状态*
### 查看浏览器的内存使用状况 ###
最近有不少谈论关于 Google-chrome 内存使用量。如果你想知道一个浏览器的内存用量你可以列出进程名PID 和它的使用情况。要检查浏览器的内存使用情况,只需在地址栏输入 “about:memory” 不要带引号。
最近有不少关于 Google-chrome 内存使用量的讨论。如要检查浏览器的内存使用情况,只需在地址栏输入 “about:memory”不要带引号。
我已经在 Google-Chrome 和 Mozilla 的 Firefox 网页浏览器中进行了测试。你可以测试任何浏览器,如果它工作得很好,可以在下面的评论中告知我们。你也可以在 Linux 终端中杀死浏览器的相关进程/服务。
在 Google Chrome 中,在地址栏输入 `about:memory`,你应该得到类似下图的东西。
在 Google Chrome 中在地址栏输入 `about:memory`,你应该得到类似下图的东西。
![Check Chrome Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Chrome-Memory-Usage.png)
查看 Chrome 内存使用状况
*查看 Chrome 内存使用状况*
在 Mozilla Firefox 浏览器中,在地址栏输入 `about:memory`,你应该得到类似下图的东西。
![Check Firefox Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firefox-Memory-Usage.png)
查看 Firefox 内存使用状况
*查看 Firefox 内存使用状况*
如果你已经了解它是什么,那么除了这些选项,要检查内存用量,你还可以点击最左边的 Measure测量选项。
![Firefox Main Process](http://www.tecmint.com/wp-content/uploads/2015/07/Firefox-Main-Processes.png)
Firefox 主进程
*Firefox 主进程*
它将通过浏览器树形展示进程内存使用量
它将通过浏览器树形展示进程内存使用量
目前为止就这样了。希望上述所有的提示将会帮助你。如果你有一个(或多个)技巧,分享给我们,将帮助 Linux 用户更有效地管理他们的 Linux 系统/服务器。
@ -122,7 +129,7 @@ via: http://www.tecmint.com/creating-cdrom-iso-image-watch-user-activity-in-linu
作者:[Avishek Kumar][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,7 +1,7 @@
Linux有问必答——如何修复Linux上的“ImportError: No module named wxversion”错误
Linux有问必答:如何修复“ImportError: No module named wxversion”错误
================================================================================
> **问题** 我试着在[你的Linux发行版]上运行一个Python应用但是我得到了这个错误"ImportError: No module named wxversion."。我怎样才能解决Python程序中的这个错误呢?
> **问题** 我试着在[某某 Linux 发行版]上运行一个 Python 应用但是我得到了这个错误“ImportError: No module named wxversion.”。我怎样才能解决 Python 程序中的这个错误呢?
Looking for python... 2.7.9 - Traceback (most recent call last):
File "/home/dev/playonlinux/python/check_python.py", line 1, in
@ -10,7 +10,8 @@ Linux有问必答——如何修复Linux上的“ImportError: No module named wx
failed tests
该错误表明,你的 Python 应用是基于 GUI 的,它依赖于一个名为 wxPython 的缺失模块。[wxPython][1] 是一个用于 wxWidgets GUI 库的 Python 扩展模块,普遍被 C++ 程序员用来设计 GUI 应用。该 wxPython 扩展允许 Python 开发者在任何 Python 应用中方便地设计和整合 GUI。
To solve this import error, you need to install wxPython on your Linux, as described below.
要解决这个 import 错误,你需要在你的 Linux 上安装 wxPython如下
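动手安装之前,可以先用一行命令确认模块是否真的缺失(假设性示例;`python3` 请换成你实际运行应用所用的 Python 解释器,旧应用往往用的是 `python2`

```shell
# 检测 wxversion 模块能否被导入
if python3 -c "import wxversion" 2>/dev/null; then
    echo "wxversion 可用"
else
    echo "缺少 wxversion需要安装 wxPython"
fi
```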
### 安装wxPython到DebianUbuntu或Linux Mint ###
@ -40,10 +41,10 @@ via: http://ask.xmodulo.com/importerror-no-module-named-wxversion.html
作者:[Dan Nanni][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://wxpython.org/
[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
[2]:https://linux.cn/article-2324-1.html

View File

@ -1,6 +1,7 @@
Darkstat一个基于网络的流量分析器 - 在Linux中安装
在 Linux 中安装 Darkstat基于网页的流量分析器
================================================================================
Darkstat是一个简易的基于网络的流量分析程序。它可以在主流的操作系统如Linux、Solaris、MAC、AIX上工作。它以守护进程的形式持续工作在后台并不断地嗅探网络数据并以简单易懂的形式展现在网页上。它可以为主机生成流量报告鉴别特定主机上哪些端口打开并且兼容IPv6。让我们看下如何在Linux中安装和配置它。
Darkstat是一个简易的基于网页的流量分析程序。它可以在主流的操作系统如Linux、Solaris、MAC、AIX上工作。它以守护进程的形式持续工作在后台不断地嗅探网络数据以简单易懂的形式展现在它的网页上。它可以为主机生成流量报告识别特定的主机上哪些端口是打开的它兼容IPv6。让我们看下如何在Linux中安装和配置它。
### 在Linux中安装配置Darkstat ###
@ -20,14 +21,15 @@ Darkstat是一个简易的基于网络的流量分析程序。它可以在主
### 配置 Darkstat ###
为了正确运行这个程序,我需要执行一些基本的配置。运行下面的命令用gedit编辑器打开/etc/darkstat/init.cfg文件。
为了正确运行这个程序,我需要执行一些基本的配置。运行下面的命令用gedit编辑器打开/etc/darkstat/init.cfg文件。
sudo gedit /etc/darkstat/init.cfg
![](http://linuxpitstop.com/wp-content/uploads/2015/08/13.png)
编辑 Darkstat
修改START_DARKSTAT这个参数为yes并在“INTERFACE”中提供你的网络接口。确保取消了DIR、PORT、BINDIP和LOCAL这些参数的注释。如果你希望绑定Darkstat到特定的IP在BINDIP中提供它
*编辑 Darkstat*
修改 START_DARKSTAT 这个参数为 yes并在“INTERFACE”中提供你的网络接口。确保取消了 DIR、PORT、BINDIP 和 LOCAL 这些参数的注释。如果你希望绑定 Darkstat 到特定的 IP在 BINDIP 参数中提供它。
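一份配置完成后的 init.cfg 大致如下假设性示例Debian/Ubuntu 打包的参数名,网卡、端口、网段请按你的环境调整):

```
# /etc/darkstat/init.cfg 配置示意
START_DARKSTAT=yes                      # 随系统启动 darkstat
INTERFACE="-i eth0"                     # 要嗅探的网络接口
DIR="/var/lib/darkstat"                 # 数据库存放目录
PORT="-p 667"                           # 网页界面监听的端口
BINDIP="-b 127.0.0.1"                   # 绑定到特定 IP
LOCAL="-l 192.168.0.0/255.255.255.0"    # 视为“本地”的网段
```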
### 启动Darkstat守护进程 ###
@ -47,7 +49,7 @@ Darkstat是一个简易的基于网络的流量分析程序。它可以在主
### 总结 ###
它是一个占用很少内存的轻量级工具。这个工具流行的原因是简易、易于配置使用。这是一个对系统管理员而言必须拥有的程序
它是一个占用很少内存的轻量级工具。这个工具流行的原因是简易、易于配置使用。这是一个对系统管理员而言必须拥有的程序
--------------------------------------------------------------------------------
@ -55,7 +57,7 @@ via: http://linuxpitstop.com/install-darkstat-on-ubuntu-linux/
作者:[Aun][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,14 +1,15 @@
如何在 Linux 从 Google Play 商店里下载 apk 文件
如何在 Linux 从 Google Play 商店里下载 apk 文件
================================================================================
假设你想在你的 Android 设备中安装一个 Android 应用,然而由于某些原因,你不能在 Andor 设备上访问 Google Play 商店。接着你该怎么做呢?在不访问 Google Play 商店的前提下安装应用的一种可能的方法是使用其他的手段下载该应用的 APK 文件,然后手动地在 Android 设备上 [安装 APK 文件][1]。
在非 Android 设备如常规的电脑和笔记本电脑上,有着几种方式来从 Google Play 商店下载到官方的 APK 文件。例如,使用浏览器插件(例如, 针对 [Chrome][2] 或针对 [Firefox][3] 的插件) 或利用允许你使用浏览器下载 APK 文件的在线的 APK 存档等。假如你不信任这些闭源的插件或第三方的 APK 仓库,这里有另一种手动下载官方 APK 文件的方法,它使用一个名为 [GooglePlayDownloader][4] 的开源 Linux 应用
假设你想在你的 Android 设备中安装一个 Android 应用,然而由于某些原因,你不能在 Android 设备上访问 Google Play 商店LCTT 译注:显然这对于我们来说是常态)。接着你该怎么做呢?在不访问 Google Play 商店的前提下安装应用的一种可能的方法是,使用其他的手段下载该应用的 APK 文件,然后手动地在 Android 设备上[安装 APK 文件][1]。
GooglePlayDownloader 是一个基于 Python 的 GUI 应用,使得你可以从 Google Play 商店上搜索和下载 APK 文件。由于它是完全开源的,你可以放心地使用它。在本篇教程中,我将展示如何在 Linux 环境下,使用 GooglePlayDownloader 来从 Google Play 商店下载 APK 文件。
在非 Android 设备如常规的电脑和笔记本电脑上,有着几种方式来从 Google Play 商店下载到官方的 APK 文件。例如,使用浏览器插件(例如,针对 [Chrome][2] 或针对 [Firefox][3] 的插件) 或利用允许你使用浏览器下载 APK 文件的在线的 APK 存档等。假如你不信任这些闭源的插件或第三方的 APK 仓库,这里有另一种手动下载官方 APK 文件的方法,它使用一个名为 [GooglePlayDownloader][4] 的开源 Linux 应用。
GooglePlayDownloader 是一个基于 Python 的 GUI 应用,它可以让你从 Google Play 商店上搜索和下载 APK 文件。由于它是完全开源的,你可以放心地使用它。在本篇教程中,我将展示如何在 Linux 环境下,使用 GooglePlayDownloader 来从 Google Play 商店下载 APK 文件。
### Python 需求 ###
GooglePlayDownloader 需要使用 Python 中 SSL 模块的扩展 SNI(服务器名称指示) 来支持 SSL/TLS 通信,该功能由 Python 2.7.9 或更高版本带来。这使得一些旧的发行版本如 Debian 7 Wheezy 及早期版本Ubuntu 14.04 及早期版本或 CentOS/RHEL 7 及早期版本均不能满足该要求。假设你已经有了一个带有 Python 2.7.9 或更高版本的发行版本,可以像下面这样接着安装 GooglePlayDownloader。
GooglePlayDownloader 需要使用带有 SNIServer Name Indication 服务器名称指示)的 Python 来支持 SSL/TLS 通信,该功能由 Python 2.7.9 或更高版本引入。这使得一些旧的发行版本如 Debian 7 Wheezy 及早期版本Ubuntu 14.04 及早期版本或 CentOS/RHEL 7 及早期版本均不能满足该要求。这里假设你已经有了一个带有 Python 2.7.9 或更高版本的发行版本,可以像下面这样接着安装 GooglePlayDownloader。
### 在 Ubuntu 上安装 GooglePlayDownloader ###
@ -16,7 +17,7 @@ GooglePlayDownloader 需要使用 Python 中 SSL 模块的扩展 SNI(服务器
#### 在 Ubuntu 14.10 上 ####
下载 [python-ndg-httpsclient][5] deb 软件包,这在旧一点的 Ubuntu 发行版本中是一个缺失的依赖。同时还要下载 GooglePlayDownloader 的官方 deb 软件包。
下载 [python-ndg-httpsclient][5] deb 软件包,这是一个较旧的 Ubuntu 发行版本中缺失的依赖。同时还要下载 GooglePlayDownloader 的官方 deb 软件包。
$ wget http://mirrors.kernel.org/ubuntu/pool/main/n/ndg-httpsclient/python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb
$ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb
@ -64,7 +65,7 @@ GooglePlayDownloader 需要使用 Python 中 SSL 模块的扩展 SNI(服务器
### 使用 GooglePlayDownloader 从 Google Play 商店下载 APK 文件 ###
一旦你安装好 GooglePlayDownloader 后,你就可以像下面那样从 Google Play 商店下载 APK 文件。
一旦你安装好 GooglePlayDownloader 后,你就可以像下面那样从 Google Play 商店下载 APK 文件。LCTT 译注:显然你需要让你的 Linux 能爬梯子)
首先通过输入下面的命令来启动该应用:
@ -76,7 +77,7 @@ GooglePlayDownloader 需要使用 Python 中 SSL 模块的扩展 SNI(服务器
![](https://farm1.staticflickr.com/503/20230360479_925f5da613_b.jpg)
一旦你从搜索列表中找到了该应用,就选择该应用,接着点击 "下载选定的 APK 文件" 按钮。最后你将在你的家目录中找到下载的 APK 文件。现在,你就可以将下载到的 APK 文件转移到你所选择的 Android 设备上,然后手动安装它。
一旦你从搜索列表中找到了该应用,就选择该应用,接着点击 “下载选定的 APK 文件” 按钮。最后你将在你的家目录中找到下载的 APK 文件。现在,你就可以将下载到的 APK 文件转移到你所选择的 Android 设备上,然后手动安装它。
希望这篇教程对你有所帮助。
@ -86,7 +87,7 @@ via: http://xmodulo.com/download-apk-files-google-play-store.html
作者:[Dan Nanni][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,63 @@
Ubuntu 有望让你安装最新 Nvidia Linux 驱动更简单
================================================================================
![Ubuntu Gamers are on the rise -and so is demand for the latest drivers](http://www.omgubuntu.co.uk/wp-content/uploads/2014/03/ubuntugamer_logo_dark-500x250.jpg)
*Ubuntu 上的游戏玩家在增长——因而需要最新版驱动*
**在 Ubuntu 上安装上游的 NVIDIA 图形驱动即将变得更加容易。**
Ubuntu 开发者正在考虑构建一个全新的“官方” PPA以便为桌面用户分发最新的闭源 NVIDIA 二进制驱动。
该项改变会让 Ubuntu 游戏玩家受益,并且*不会*给其它人造成 OS 稳定性方面的风险。
**仅**当用户明确选择它时,新的上游驱动才会通过这个新 PPA 安装并更新。其他人将继续得到并使用 Ubuntu 归档中所包含的较新的稳定版 NVIDIA Linux 驱动快照。
### 为什么需要该项目? ###
![Ubuntu provides drivers but theyre not the latest](http://www.omgubuntu.co.uk/wp-content/uploads/2013/04/drivers.jpg)
*Ubuntu 提供了驱动——但是它们不是最新的*
可以从归档中使用命令行、synaptic或者通过额外驱动工具安装到 Ubuntu 上的闭源 NVIDIA 图形驱动在大多数情况下都能工作得很好,并且可以轻松地处理 Unity 桌面外壳的混染。
但对于游戏需求而言,那完全是另外一码事儿。
如果你想要将最高帧率和 HD 纹理从最新流行的 Steam 游戏中压榨出来,你需要最新的二进制驱动文件。
驱动越新,越可能支持最新的特性和技术,或者带有预先打包的游戏专门的优化和漏洞修复。
问题在于,在 Ubuntu 上安装最新 Nvidia Linux 驱动不是件容易的事儿,而且也不具安全保证。
要填补这个空白,许多由热心人维护的第三方 PPA 就出现了。由于许多这些 PPA 也发布了其它实验性的或者前沿软件,它们的使用**并不是毫无风险的**。添加一个前沿的 PPA 通常是搞崩整个系统的最快的方式!
一个解决方法是,让 Ubuntu 用户安装最新的专有图形驱动以满足对第三方 PPA 的需要,**但是**提供一个安全机制,如果有需要,你可以回滚到稳定版本。
### ‘对全新驱动的需求难以忽视’ ###
> “一个让 Ubuntu 用户安全地获得最新硬件驱动的解决方案出现了。”
“在快速发展的市场中,对全新驱动的需求正变得难以忽视,用户将想要最新的上游软件。”卡斯特罗在一封给 Ubuntu 桌面邮件列表的电子邮件中解释道。
‘[NVIDIA] 能毫不费力地为 [Windows 10] 用户带来很棒的体验。在我们说服 NVIDIA 在 Ubuntu 上做同样的事情之前,这个方案可以让我们搞定这一切。’
卡斯特罗的“官方” NVIDIA PPA 方案就是实现这一目的的最容易的方式。
游戏玩家将可以在 Ubuntu 的默认专有软件驱动工具中选择接收来自该 PPA 的新驱动,再也不需要他们从网站或维基页面拷贝并粘贴终端命令了。
该 PPA 内的驱动将由一个选定的社区成员团队打包并维护,并受惠于半官方的**自动化测试**。
就像卡斯特罗自己说的那样:‘不管想要做什么,人们总想要最新最炫的东西。我们不妨在其周围搭一个框架,让人们既能获得想要的东西,又不必弄坏他们的计算机。’
**你想要使用这个 PPA 吗?你怎样来评估 Ubuntu 上默认 Nvidia 驱动的性能呢?在评论中分享你的想法吧,伙计们!**
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers
作者:[Joey-Elijah Sneddon][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author


@ -0,0 +1,66 @@
Ubuntu NVIDIA 显卡驱动 PPA 已经做好准备
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screen-Shot-2015-08-12-at-14.19.42.png)
加速你的帧率!
**嘿,各位,稍安勿躁,很快就好。**
就在提议开发一个[新的 PPA][1] 来给 Ubuntu 用户们提供最新的 NVIDIA 显卡驱动后不久Ubuntu 社区的人们就为这件事集结起来了。
顾名思义,‘**Graphics Drivers PPA**’ 包含了最新发布的 NVIDIA Linux 显卡驱动,已经打包好可供用户升级使用,免去了让人头疼的二进制安装过程!
这个 PPA 被设计用来让玩家们尽可能方便地在 Ubuntu 上运行最新款的游戏。
#### 万事俱备,只欠东风 ####
Jorge Castro 开发一个包含 NVIDIA 最新显卡驱动的 PPA 神器的想法得到了 Ubuntu 用户和广大游戏开发者的热烈响应。
就连那些致力于将“Steam平台”上的知名大作移植到 Linux 上的人们,也给了不少建议。
Feral Interactive 公司(Shadow of Mordor的产品总监 Edwin Smith对于“让用户更方便地更新驱动”的倡议表示非常欣慰。
### 如何使用最新的 Nvidia Drivers PPA ###
虽然新的“显卡 PPA”已经开发出来但是现在还远远谈不上成熟。开发者们提醒道
> “这个 PPA 还处于测试阶段,在你使用它之前最好有一些打包的经验。请大家稍安勿躁,再等几天。”
将 PPA 试发布给 Ubuntu desktop 邮件列表的 Jorge也强调说使用现行的一些 PPA比如 xorg-edgers的玩家可能发现不了什么区别因为现在的驱动只不过是从其他那些现存 PPA 拷贝过来的)。
“新驱动发布的时候,好戏才会上演呢,”他说。
截至写作本文时为止,这个 PPA 囊括了从 Ubuntu 12.04.1 到 15.10 各个版本的 Nvidia 驱动。注意这些驱动对所有的发行版都适用。
> **毫无疑问,除非你清楚自己在干些什么,并且知道如果出了问题应该怎么撤销,否则就不要进行下面的操作。**
新打开一个终端窗口,运行下面的命令加入 PPA
sudo add-apt-repository ppa:graphics-drivers/ppa
安装或更新到最新的 Nvidia 显卡驱动:
sudo apt-get update && sudo apt-get install nvidia-355
记住:如果 PPA 把你的系统弄崩了,你可得自己去想办法,我们提醒过了哦!(译者注:切记!)
如果想要撤销对 PPA 的改变,可以使用 `ppa-purge` 命令。
有什么意见,想法,或者指正,就在下面的评论栏里写下来吧。(我没有 NVIDIA 的硬件来为我自己验证上面的这些东西,如果你可以验证的话,那就太感谢了。)
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/08/ubuntu-nvidia-graphics-drivers-ppa-is-ready-for-action
作者:[Joey-Elijah Sneddon][a]
译者:[DongShuaike](https://github.com/DongShuaike)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://linux.cn/article-6030-1.html


@ -0,0 +1,158 @@
shellinabox一款使用 AJAX 的基于 Web 的终端模拟器
================================================================================
### shellinabox简介 ###
通常情况下,我们在访问远程服务器时,会使用 OpenSSH、Putty 等常见的通信工具。但如果我们身处防火墙后面不能使用这些工具访问远程系统或者防火墙只允许 HTTPS 流量通过,该怎么办呢?不用担心!即使你在这样的防火墙后面,我们依然有办法访问你的远程系统,而且你不需要安装任何类似 OpenSSH 或 Putty 的通信工具。你只需要一个支持 JavaScript 和 CSS 的现代浏览器,不必安装任何插件或第三方应用软件。
**Shell In A Box**(读作 **shellinabox**)是由 **Markus Gutschke** 开发的一款自由开源的基于 Web 的 AJAX 终端模拟器。它使用 AJAX 技术,通过 Web 浏览器提供类似原生 Shell 的外观和操作体验。
**shellinaboxd** 守护进程实现了一个 Web 服务器,侦听指定的端口。其 Web 服务器可以发布一个或多个服务这些服务显示在用 AJAX Web 应用实现的 VT100 模拟器中。默认端口为 4200你可以将其更改为任意端口号。在远程服务器上安装 shellinabox 以后,如果你想从本地系统接入,打开 Web 浏览器并访问 **http://IP-Address:4200/**,输入你的用户名和密码,就可以开始使用远程系统的 Shell 了。看起来很有趣,不是吗?确实有趣!
**免责声明**:
shellinabox 不是 SSH 客户端,也不是什么安全软件。它仅仅是一个能通过 Web 浏览器模拟远程系统 Shell 的应用程序,和 SSH 没有任何关系。它不是一种安全可靠的远程控制系统的方式,只是迄今为止最简单的方法之一。无论如何,你都不应该在任何公共网络上运行它。
### 安装shellinabox ###
#### 在Debian / Ubuntu系统上 ####
shellinabox 在默认软件仓库中就有,所以你可以使用以下命令来安装它:
$ sudo apt-get install shellinabox
#### 在RHEL / CentOS系统上 ####
首先,使用以下命令安装 EPEL 仓库:
# yum install epel-release
然后,使用以下命令安装 shellinabox
# yum install shellinabox
完成!
### 配置shellinabox ###
正如我之前提到的shellinabox 默认侦听 **4200** 端口。你可以将此端口更改为任意数字,以防别人猜到。
在Debian/Ubuntu系统上shellinabox配置文件的默认位置是**/etc/default/shellinabox**。在RHEL/CentOS/Fedora上默认位置在**/etc/sysconfig/shellinaboxd**。
如果要更改默认端口,
在Debian / Ubuntu
$ sudo vi /etc/default/shellinabox
在RHEL / CentOS / Fedora
# vi /etc/sysconfig/shellinaboxd
将端口更改为你想要的任意数字。因为我只是在本地网络上测试,所以保留了默认值。
# Shell in a box daemon configuration
# For details see shellinaboxd man page
# Basic options
USER=shellinabox
GROUP=shellinabox
CERTDIR=/var/lib/shellinabox
PORT=4200
OPTS="--disable-ssl-menu -s /:LOGIN"
# Additional examples with custom options:
# Fancy configuration with right-click menu choice for black-on-white:
# OPTS="--user-css Normal:+black-on-white.css,Reverse:-white-on-black.css --disable-ssl-menu -s /:LOGIN"
# Simple configuration for running it as an SSH console with SSL disabled:
# OPTS="-t -s /:SSH:host.example.com"
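下面用一个临时副本演示如何修改 `PORT=` 一行(端口号 6175 只是随意取的示例值;实际操作时请直接编辑上面提到的真正配置文件):

```shell
# 示意:用 sed 把默认端口 4200 改成 6175端口号为假设值
# 为安全起见,这里在 /tmp 下的一个副本上演示,而不是真正的配置文件
printf 'PORT=4200\n' > /tmp/shellinaboxd.demo
sed -i 's/^PORT=4200$/PORT=6175/' /tmp/shellinaboxd.demo
cat /tmp/shellinaboxd.demo
```

修改真正的配置文件后,别忘了重启 shellinabox 服务,新端口才会生效。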
重启 shellinabox 服务。
**在Debian/Ubuntu:**
$ sudo systemctl restart shellinabox
或者
$ sudo service shellinabox restart
在 RHEL/CentOS 系统上,运行下面的命令可以让 shellinaboxd 服务在每次开机时自动启动:
# systemctl enable shellinaboxd
或者
# chkconfig shellinaboxd on
如果你正在运行一个防火墙,记得要打开端口**4200**或任何你指定的端口。
例如,在 RHEL/CentOS 系统上,你可以如下所示开放端口:
# firewall-cmd --permanent --add-port=4200/tcp
----------
# firewall-cmd --reload
### 使用 ###
现在在你的客户端系统打开Web浏览器并导航到**https://ip-address-of-remote-servers:4200**。
**注意**:如果你改变了端口,请填写修改后的端口。
你会得到一个证书问题的警告信息。接受该证书并继续。
![Privacy error - Google Chrome_001](http://www.unixmen.com/wp-content/uploads/2015/08/Privacy-error-Google-Chrome_001.jpg)
输入远程系统的用户名和密码。现在,你就能够直接在浏览器中访问远程系统的 Shell 了。
![Shell In A Box - Google Chrome_003](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_003.jpg)
右键点击你浏览器的空白位置,你会看到一些很有用的额外菜单选项。
![Shell In A Box - Google Chrome_004](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_004.jpg)
从现在开始你可以通过本地系统的Web浏览器在你的远程服务器随意操作。
当你完成工作时,记得输入`exit`退出。
当再次连接到远程系统时,单击**连接**按钮,然后输入远程服务器的用户名和密码。
![Shell In A Box - Google Chrome_005](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_005.jpg)
如果想了解 shellinabox 的更多细节,在你的终端键入下面的命令:
# man shellinabox
或者
# shellinaboxd -help
同时,可以参考 [shellinabox 的 wiki 页面][1] 来了解它的全面使用细节。
### 结论 ###
正如我之前提到的,如果你的服务器运行在防火墙后面,那么基于 Web 的 SSH 工具是非常有用的。基于 Web 的 SSH 工具有很多,但 shellinabox 是其中非常简单而有用的一个,它可以让你从网络上的任何地方模拟远程系统的 Shell。因为它是基于浏览器的只要你有一个支持 JavaScript 和 CSS 的浏览器,就可以从任何设备访问你的远程服务器。
就这些啦。祝你今天有个好心情!
#### 参考链接: ####
- [shellinabox website][2]
--------------------------------------------------------------------------------
via: http://www.unixmen.com/shellinabox-a-web-based-ajax-terminal-emulator/
作者:[SK][a]
译者:[xiaoyu33](https://github.com/xiaoyu33)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/sk/
[1]:https://code.google.com/p/shellinabox/wiki/shellinaboxd_man
[2]:https://code.google.com/p/shellinabox/


@ -0,0 +1,52 @@
Linux Without Limits: IBM Launch LinuxONE Mainframes
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screenshot-2015-08-17-at-12.58.10.png)
*LinuxONE Emperor Mainframe*

Good news for Ubuntu's server team today as [IBM launch the LinuxONE][1], a Linux-only mainframe that is also able to run Ubuntu.
The largest of the LinuxONE systems launched by IBM is called Emperor and can scale up to 8000 virtual machines or tens of thousands of containers, a possible record for any single Linux system.
The LinuxONE is described by IBM as a game changer that unleashes the potential of Linux for business.
IBM and Canonical are working together on the creation of an Ubuntu distribution for LinuxONE and other IBM z Systems. Ubuntu will join RedHat and SUSE as premier Linux distributions on IBM z.
Alongside the Emperor IBM is also offering the LinuxONE Rockhopper, a smaller mainframe for medium-sized businesses and organisations.
IBM is the market leader in mainframes and commands over 90% of the mainframe market.
youtube 视频
<iframe width="750" height="422" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/2ABfNrWs-ns?feature=oembed"></iframe>
### What Is a Mainframe Computer Used For? ###
The computer you're reading this article on would be dwarfed by a big iron mainframe. They are large, hulking great cabinets packed full of high-end components, custom designed technology and dizzying amounts of storage (that is data storage, not ample room for pens and rulers).
Mainframe computers are used by large organizations and businesses to process and store large amounts of data, crunch through statistics, and handle large-scale transaction processing.
### World's Fastest Processor ###
IBM has teamed up with Canonical Ltd to use Ubuntu on the LinuxONE and other IBM z Systems.
The LinuxONE Emperor uses the IBM z13 processor. The chip, announced back in January, is said to be the world's fastest microprocessor. It is able to deliver transaction response times in the milliseconds.
But as well as being well equipped to handle high-volume mobile transactions, the z13 inside the LinuxONE is also an ideal cloud system.
It can handle more than 50 virtual servers per core for a total of 8000 virtual servers, making it a cheaper, greener and more performant way to scale-out to the cloud.
**You don't have to be a CIO or mainframe spotter to appreciate this announcement. The possibilities LinuxONE provides are clear enough.**
Source: [Reuters (h/t @popey)][2]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/08/ibm-linuxone-mainframe-ubuntu-partnership
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www-03.ibm.com/systems/z/announcement.html
[2]:http://www.reuters.com/article/2015/08/17/us-ibm-linuxone-idUSKCN0QM09P20150817


@ -0,0 +1,46 @@
Ubuntu Linux is coming to IBM mainframes
================================================================================
SEATTLE -- It's finally happened. At [LinuxCon][1], IBM and [Canonical][2] announced that [Ubuntu Linux][3] will soon be running on IBM mainframes.
![](http://zdnet2.cbsistatic.com/hub/i/2015/08/17/f389e12f-03f5-48cc-8019-af4ccf6c6ecd/f15b099e439c0e3a5fd823637d4bcf87/ubuntu-mainframe.jpg)
You'll soon be able to get your IBM mainframe in Ubuntu Linux orange
According to Ross Mauri, IBM's General Manager of System z, and Mark Shuttleworth, Canonical and Ubuntu's founder, this move came about because of customer demand. For over a decade, [Red Hat Enterprise Linux (RHEL)][4] and [SUSE Linux Enterprise Server (SLES)][5] were the only supported IBM mainframe Linux distributions.
As Ubuntu matured, more and more businesses turned to it as their enterprise Linux, and more and more of them wanted it on IBM big iron hardware. In particular, banks wanted Ubuntu there. Soon, financial CIOs will have their wish granted.
In an interview Shuttleworth said that Ubuntu Linux will be available on the mainframe by April 2016 in the next long-term support version of Ubuntu: Ubuntu 16.04. Canonical and IBM already took the first move in this direction in late 2014 by bringing [Ubuntu to IBM's POWER][6] architecture.
Before that, Canonical and IBM almost signed the dotted line to bring [Ubuntu to IBM mainframes in 2011][7] but that deal was never finalized. This time, it's happening.
Jane Silber, Canonical's CEO, explained in a statement, "Our [expansion of Ubuntu platform][8] support to [IBM z Systems][9] is a recognition of the number of customers that count on z Systems to run their businesses, and the maturity the hybrid cloud is reaching in the marketplace."
**Silber continued:**
> With support of z Systems, including [LinuxONE][10], Canonical is also expanding our relationship with IBM, building on our support for the POWER architecture and OpenPOWER ecosystem. Just as Power Systems clients are now benefiting from the scaleout capabilities of Ubuntu, and our agile development process which results in first to market support of new technologies such as CAPI (Coherent Accelerator Processor Interface) on POWER8, z Systems clients can expect the same rapid rollout of technology advancements, and benefit from [Juju][11] and our other cloud tools to enable faster delivery of new services to end users. In addition, our collaboration with IBM includes the enablement of scale-out deployment of many IBM software solutions with Juju Charms. Mainframe clients will delight in having a wealth of 'charmed' IBM solutions, other software provider products, and open source solutions, deployable on mainframes via Juju.
Shuttleworth expects Ubuntu on z to be very successful. "It's blazingly fast, and with its support for OpenStack, people who want exceptional cloud region performance will be very happy."
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/ubuntu-linux-is-coming-to-the-mainframe/#ftag=RSSbaffb68
作者:[Steven J. Vaughan-Nichols][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]:http://events.linuxfoundation.org/events/linuxcon-north-america
[2]:http://www.canonical.com/
[3]:http://www.ubuntu.com/
[4]:http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[5]:https://www.suse.com/products/server/
[6]:http://www.zdnet.com/article/ibm-doubles-down-on-linux/
[7]:http://www.zdnet.com/article/mainframe-ubuntu-linux/
[8]:https://insights.ubuntu.com/2015/08/17/ibm-and-canonical-plan-ubuntu-support-on-ibm-z-systems-mainframe/
[9]:http://www-03.ibm.com/systems/uk/z/
[10]:http://www.zdnet.com/article/linuxone-ibms-new-linux-mainframes/
[11]:https://jujucharms.com/


@ -1,97 +0,0 @@
translating by xiaoyu33
Tickr Is An Open-Source RSS News Ticker for Linux Desktops
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/rss-tickr.jpg)
**Latest! Latest! Read all about it!**
Alright, so the app we're highlighting today isn't quite the binary version of an old newspaper seller, but it is a great way to have the latest news brought to you, on your desktop.
Tickr is a GTK-based news ticker for the Linux desktop that scrolls the latest headlines and article titles from your favourite RSS feeds in a horizontal strip that you can place anywhere on your desktop.
Call me Joey Calamezzo; I put mine on the bottom, TV news station style.
“Over to you, sub-heading.”
### RSS — Remember That? ###
“Thanks paragraph ending.”
In an era of push notifications, social media, and clickbait cajoling us into reading the latest mind-blowing, humanity-saving listicle ASAP, RSS can seem a bit old hat.
For me? Well, RSS lives up to its name of Really Simple Syndication. It's the easiest, most manageable way to have news come to me. I can manage and read stuff when I want; there's no urgency to read something before the tweet vanishes into the stream or the push notification disappears.
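And "really simple" is no exaggeration: a feed is just an XML file full of `<item>` and `<title>` elements. A quick shell sketch of pulling headlines out of one (using a tiny inline sample rather than a live feed) shows how little is going on under the hood:

```shell
# Sketch: extract headline titles from a minimal RSS sample
cat > /tmp/feed.xml <<'EOF'
<rss><channel><item><title>First headline</title></item>
<item><title>Second headline</title></item></channel></rss>
EOF
# Grab each <title>...</title> element, then strip the tags
grep -o '<title>[^<]*</title>' /tmp/feed.xml | sed 's/<[^>]*>//g'
```

Fetching titles like this is, at heart, what any ticker has to do before it can scroll anything.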
The beauty of Tickr is in its utility. You can have a constant stream of news trundling along the bottom of your screen, which you can passively glance at from time to time.
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/tickr-close-up-750x58.jpg)
There's no pressure to read everything or mark all as read or any of that. When you see something you want to read, you just click it to open it in a web browser.
### Setting it Up ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/tickr-rss-settings.jpg)
Although Tickr is available to install from the Ubuntu Software Centre, it hasn't been updated for a long time. Nowhere is this sense of abandonment more keenly felt than when opening the unwieldy and unintuitive configuration panel.
To open it:
1. Right click on the Tickr bar
1. Go to Edit > Preferences
1. Adjust the various settings
Row after row of options and settings, few of which seem to make sense at first. But poke and prod around and you'll find controls for pretty much everything, including:
- Set scrolling speed
- Choose behaviour when mousing over
- Feed update frequency
- Font, including font sizes and color
- Separator character (delineator)
- Position of Tickr on screen
- Color and opacity of Tickr bar
- Choose how many articles each feed displays
One quirk worth mentioning is that pressing the Apply button only updates the on-screen Tickr to preview changes. For changes to take effect when you exit the Preferences window, you need to click OK.
Getting the bar to sit flush on your display can also take a fair bit of tweaking, especially on Unity.
Press the “full width button” to have the app auto-detect your screen width. By default, when placed at the top or bottom it leaves a 25px gap (the app was created back in the days of GNOME 2.x desktops). After hitting the top or bottom buttons, just add an extra 25 pixels to the input box to compensate for this.
Other options available include: choose which browser articles open in; whether Tickr appears within a regular window frame; whether a clock is shown; and how often the app checks feed for articles.
#### Adding Feeds ####
Tickr comes with a built-in list of over 30 different feeds, ranging from technology blogs to mainstream news services.
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/feed-picker-750x398.jpg)
You can select as many of these as you like to show headlines in the on screen ticker. If you want to add your own feeds you can:
1. Right click on the Tickr bar
1. Go to File > Open Feed
1. Enter Feed URL
1. Click Add/Upd button
1. Click OK (select)
To set how many items from each feed show in the ticker, change the “Read N items max per feed” setting in the other preferences window.
### Install Tickr in Ubuntu 14.04 LTS and Up ###
So that's Tickr. It's not going to change the world, but it will keep you abreast of what's happening in it.
To install it in Ubuntu 14.04 LTS or later, head to the Ubuntu Software Centre by clicking the button below.
- [Click to install Tickr from the Ubuntu Software Center][1]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/06/tickr-open-source-desktop-rss-news-ticker
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:apt://tickr


@ -0,0 +1,117 @@
Top 5 Torrent Clients For Ubuntu Linux
================================================================================
![Best Torrent clients for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/5_Best_Torrent_Ubuntu.png)
Looking for the **best torrent client in Ubuntu**? Indeed there are a number of torrent clients available for desktop Linux. But which ones are the **best Ubuntu torrent clients** among them?
I am going to list the top 5 torrent clients for Linux, which are lightweight, feature-rich and have an impressive GUI. Ease of installation and use is also a factor.
### Best torrent programs for Ubuntu ###
Since Ubuntu comes with Transmission by default, I am going to exclude it from the list. This doesn't mean that Transmission doesn't deserve to be on the list; Transmission is a good torrent client to have for Ubuntu, and this is the reason why it is the default torrent application in several Linux distributions, including Ubuntu.
----------
### Deluge ###
![Logo of Deluge torrent client for Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Deluge.png)
[Deluge][1] has been chosen as the best torrent client for Linux by Lifehacker, and that speaks for itself about the usefulness of Deluge. And it's not just Lifehacker who is a fan of Deluge; check out any forum and you'll find a number of people admitting that Deluge is their favorite.
A fast, sleek and intuitive interface makes Deluge a hot favorite among Linux users.
Deluge is available in Ubuntu repositories and you can install it in Ubuntu Software Center or by using the command below:
sudo apt-get install deluge
----------
### qBittorrent ###
![qBittorrent client for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/qbittorrent_icon.png)
As the name suggests, [qBittorrent][2] is the Qt version of the famous [Bittorrent][3] application. You'll see an interface similar to the Bittorrent client on Windows, if you have ever used it. Lightweight, with all the standard features of a torrent program, qBittorrent is also available in the default Ubuntu repository.
It could be installed from Ubuntu Software Center or using the command below:
sudo apt-get install qbittorrent
----------
### Tixati ###
![Tixati torrent client logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/tixati_icon.png)
[Tixati][4] is another nice torrent client to have for Ubuntu. It has a default dark theme, which might be preferred by many, but not by me. It has all the standard features that you can seek in a torrent client.
In addition to that, there is an extra data-analysis feature. You can measure and analyze bandwidth and other statistics in nice charts.
- [Download Tixati][5]
----------
### Vuze ###
![Vuze Torrent Logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/vuze_icon_for_mac_os_x_by_hamzasaleem-d6yx1fp.png)
[Vuze][6] is the favorite torrent application of a number of Linux as well as Windows users. Apart from the standard features, you can search for torrents directly in the application. You can also subscribe to episodic content so that you won't have to search for new content, as you can see it in your subscriptions in the sidebar.
It also comes with a video player that can play HD videos with subtitles and all. But I don't think you would like to use it over better video players such as VLC.
Vuze can be installed from Ubuntu Software Center or using the command below:
sudo apt-get install vuze
----------
### Frostwire ###
![Logo of Frostwire torrent client](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/frostwire.png)
[Frostwire][7] is the torrent application you might want to try. It is more than just a simple torrent client. Also available for Android, you can use it to share files over WiFi.
You can search for torrents from within the application and play them inside the application. In addition to the downloaded files, it can browse your local media and have them organized inside the player. The same is applicable for the Android version.
An additional feature is that Frostwire also provides access to legal music by indie artists. You can download it and listen to it, for free and legally.
- [Download Frostwire][8]
----------
### Honorable mention ###
On Windows, uTorrent (pronounced "mu torrent") is my favorite torrent application. While uTorrent may be available for Linux, I deliberately skipped it from the list because installing and using uTorrent in Linux is neither easy nor does it provide a complete application experience (it runs within a web browser).
You can read about uTorrent installation in Ubuntu [here][9].
#### Quick tip: ####
Most of the time, torrent applications do not start by default. You might want to change this behavior. Read this post to learn [how to manage startup applications in Ubuntu][10].
### What's your favorite? ###
That was my opinion on the best torrent clients for Ubuntu. What is your favorite one? Do leave a comment. You can also check out the [best download managers for Ubuntu][11] in related posts. And if you use Popcorn Time, check out these [Popcorn Time tips][12].
--------------------------------------------------------------------------------
via: http://itsfoss.com/best-torrent-ubuntu/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://deluge-torrent.org/
[2]:http://www.qbittorrent.org/
[3]:http://www.bittorrent.com/
[4]:http://www.tixati.com/
[5]:http://www.tixati.com/download/
[6]:http://www.vuze.com/
[7]:http://www.frostwire.com/
[8]:http://www.frostwire.com/downloads
[9]:http://sysads.co.uk/2014/05/install-utorrent-3-3-ubuntu-14-04-13-10/
[10]:http://itsfoss.com/manage-startup-applications-ubuntu/
[11]:http://itsfoss.com/4-best-download-managers-for-linux/
[12]:http://itsfoss.com/popcorn-time-tips/


@ -0,0 +1,79 @@
Top 4 open source command-line email clients
================================================================================
![](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life_mail.png)
Like it or not, email isn't dead yet. And for Linux power users who live and die by the command line, leaving the shell to use a traditional desktop or web based email client just doesn't cut it. After all, if there's one thing that the command line excels at, it's letting you process files, and especially text, with uninterrupted efficiency.
Fortunately, there are a number of great command-line email clients, many with a devoted following of users who can help you get started and answer any questions you might have along the way. But fair warning: once you've mastered one of these clients, you may find it hard to go back to your old GUI-based solution!
To install any of these four clients is pretty easy; most are available in standard repositories for major Linux distributions, and can be installed with a normal package manager. You may also have luck finding and running them on other operating systems as well, although I haven't tried it and can't speak to the experience.
### Mutt ###
- [Project page][1]
- [Source code][2]
- License: [GPLv2][3]
Many terminal enthusiasts may already have heard of or even be familiar with Mutt and Alpine, which have both been on the scene for many years. Let's first take a look at Mutt.
Mutt supports many of the features you've come to expect from any email system: message threading, color coding, availability in a number of languages, and lots of configuration options. It supports POP3 and IMAP, the two most common email transfer protocols, and multiple mailbox formats. Having first been released in 1995, Mutt still has an active development community, but in recent years, new releases have focused on bug fixes and security updates rather than new features. That's okay for many Mutt users, though, who are comfortable with the interface and adhere to the project's slogan: "All mail clients suck. This one just sucks less."
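As a taste of what those configuration options look like, here is a minimal, hypothetical `~/.muttrc` for reading a single IMAP account (every value below is a placeholder, not a real default):

```
# Hypothetical minimal ~/.muttrc -- all names and servers are placeholders
set realname  = "Jane Doe"
set from      = "jane@example.com"
set imap_user = "jane@example.com"
set folder    = "imaps://imap.example.com/"        # remote mailbox root
set spoolfile = "+INBOX"                           # where new mail lands
set smtp_url  = "smtp://jane@smtp.example.com:587/"
```

With a file like this in place, running `mutt` drops you straight into the INBOX of the configured account; Mutt prompts for passwords it does not have.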
### Alpine ###
- [Project page][4]
- [Source code][5]
- License: [Apache 2.0][6]
Alpine is the other well-known client for terminal email, developed at the University of Washington and designed to be an open source, Unicode-friendly alternative to Pine, also originally from UW.
Designed to be friendly to beginners, but also chock-full of features for advanced users, Alpine also supports a multitude of protocols—IMAP, LDAP, NNTP, POP, SMTP, etc.—as well as different mailbox formats. Alpine is packaged with Pico, a simple text editing utility that many use as a standalone tool, but it also should work with your text editor of choice: vi, Emacs, etc.
While Alpine is still infrequently updated, there is also a fork, re-alpine, which was created to allow a different set of maintainers to continue the project's development.
Alpine features contextual help on the screen, which some users may prefer to breaking out the manual with Mutt, but both are well documented. Between Mutt and Alpine, users may want to try both and let personal preference guide their decision, or they may wish to check out a couple of the newer options below.
### Sup ###
- [Project page][7]
- [Source code][8]
- License: [GPLv2][9]
Sup is the first of two of what can be called "high volume email clients" on our list. Described as a "console-based email client for people with a lot of email," Sup's goal is to provide an interface to email with a hierarchical design and to allow tagging of threads for easier organization.
Written in Ruby, Sup provides exceptionally fast searching, manages your contact list automatically, and allows for custom extensions. For people who are used to Gmail as a webmail interface, these features will seem familiar, and Sup might be seen as a more modern approach to email on the command line.
### Notmuch ###
- [Project page][10]
- [Source code][11]
- License: [GPLv3][12]
"Sup? Notmuch." Notmuch was written as a response to Sup, originally starting out as a speed-focused rewrite of some portions of Sup to enhance performance. Eventually, the project grew in scope and is now a stand-alone email client.
Notmuch is also a fairly trim program. It doesn't actually send or receive email messages on its own, and the code which enables Notmuch's super-fast searching is actually designed as a separate library which the program can call. But its modular nature enables you to pick your favorite tools for composing, sending, and receiving, and instead focuses on doing one task and doing it well—efficient browsing and management of your email.
This list isn't by any means comprehensive; there are a lot more email clients out there which might be an even better fit for you. What's your favorite? Did we leave out one that you want to share about? Let us know in the comments below!
--------------------------------------------------------------------------------
via: http://opensource.com/life/15/8/top-4-open-source-command-line-email-clients
作者:[Jason Baker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://opensource.com/users/jason-baker
[1]:http://www.mutt.org/
[2]:http://dev.mutt.org/trac/
[3]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
[4]:http://www.washington.edu/alpine/
[5]:http://www.washington.edu/alpine/acquire/
[6]:http://www.apache.org/licenses/LICENSE-2.0
[7]:http://supmua.org/
[8]:https://github.com/sup-heliotrope/sup
[9]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
[10]:http://notmuchmail.org/
[11]:http://notmuchmail.org/releases/
[12]:http://www.gnu.org/licenses/gpl.html


@ -0,0 +1,344 @@
A Linux User Using Windows 10 After More than 8 Years See Comparison
================================================================================
Windows 10 is the newest member of the Windows NT family, which became generally available on July 29, 2015. It is the successor to Windows 8.1. Windows 10 is supported on 32-bit Intel architecture, AMD64 and ARMv7 processors.
![Windows 10 and Linux Comparison](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-vs-Linux.jpg)
Windows 10 and Linux Comparison
As a Linux user for more than 8 continuous years, I thought I would test Windows 10, as it is making a lot of news these days. This article is a run-through of my observations. I will be looking at everything from the perspective of a Linux user, so you may find it a bit biased towards Linux, but with absolutely no false information.
1. I searched Google with the text “download windows 10” and clicked the first link.
![Search Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Windows-10.jpg)
Search Windows 10
You may directly go to link : [https://www.microsoft.com/en-us/software-download/windows10ISO][1]
2. I was supposed to select an edition from Windows 10, Windows 10 KN, Windows 10 N and Windows 10 Single Language.
![Select Windows 10 Edition](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Windows-10-Edition.jpg)
Select Windows 10 Edition
For those who want to know the details of the different editions of Windows 10, here are the brief details of each edition.
- Windows 10 Contains everything offered by Microsoft for this OS.
- Windows 10N This edition comes without Media-player.
- Windows 10KN This edition comes without media playing capabilities.
- Windows 10 Single Language Only one Language Pre-installed.
3. I selected the first option, Windows 10, and clicked Confirm. Then I was supposed to select a product language. I chose English.
I was provided with two download links, one for 32-bit and the other for 64-bit. I clicked 64-bit, as per my architecture.
![Download Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Download-Windows-10.jpg)
Download Windows 10
With my download speed (15 Mbps), it took me 3 long hours to download. Unfortunately there was no torrent file for the OS, which could otherwise have made the overall process smoother. The ISO image size is 3.8 GB.
I could not find a smaller image; the truth is that nothing like a net-installer image exists for Windows. Also, there is no way to check the hash value after the ISO image has been downloaded.
I wonder why Windows ignores such issues. To verify whether the ISO downloaded correctly, I need to write the image to a disk or a USB flash drive, boot my system, and keep my fingers crossed until the setup is finished.
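For comparison, on the Linux side verifying a download is routine; here is a minimal sketch of how a vendor-published SHA-256 checksum would be used with `sha256sum` (the file and checksum names are stand-ins, since Microsoft publishes no checksum for this ISO):

```shell
# Stand-in for the downloaded ISO; in practice this is the real 3.8 GB file
printf 'iso-content' > Win10_English_x64.iso

# A published checksum file pairs a SHA-256 hash with the file name;
# here we generate one locally just to show the format.
sha256sum Win10_English_x64.iso > win10.sha256

# Verification: recompute the hash and compare it to the published value.
# Prints "Win10_English_x64.iso: OK" when the download is intact.
sha256sum -c win10.sha256
```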
Let's start. I made my USB flash drive bootable with the Windows 10 ISO using the dd command. Note that the image must be written to the whole device (here /dev/sdb), not to a partition such as /dev/sdb1, or the drive will not boot:
# dd if=/home/avi/Downloads/Win10_English_x64.iso of=/dev/sdb bs=4M; sync
It took a few minutes to complete the process. I then rebooted the system and chose to boot from the USB flash drive in my UEFI (BIOS) settings.
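With no checksum to rely on, one practical sanity check after writing the image is to read it back from the device and compare it byte-for-byte with the ISO; a hedged sketch (the device /dev/sdb is illustrative, and reading it requires root):

```shell
# Path to the downloaded image (illustrative)
iso=/home/avi/Downloads/Win10_English_x64.iso

# The USB stick is larger than the image, so limit the read-back
# to exactly the image's size before comparing.
size=$(stat -c %s "$iso")
sudo head -c "$size" /dev/sdb | cmp - "$iso" && echo "write verified"
```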
#### System Requirements ####
If you are upgrading:
- Upgrade is supported only from Windows 7 SP1 or Windows 8.1
If you are doing a fresh install:
- Processor: 1 GHz or faster
- RAM: 1 GB or more (32-bit), 2 GB or more (64-bit)
- HDD: 16 GB or more (32-bit), 20 GB or more (64-bit)
- Graphics card: DirectX 9 or later + WDDM 1.0 driver
### Installation of Windows 10 ###
1. Windows 10 boots. Yet again they changed the logo. Also, no information on what's going on.
![Windows 10 Logo](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Logo.jpg)
Windows 10 Logo
2. I selected the language to install, the time & currency format, and the keyboard & input method before clicking Next.
![Select Language and Time](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Language-and-Time.jpg)
Select Language and Time
3. And then the Install Now screen.
![Install Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Windows-10.jpg)
Install Windows 10
4. The next screen asks for a product key. I clicked Skip.
![Windows 10 Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Product-Key.jpg)
Windows 10 Product Key
5. Choose an OS from the list. I chose Windows 10 Pro.
![Select Install Operating System](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Operating-System.jpg)
Select Install Operating System
6. Oh yes, the license agreement. Put a check mark against "I accept the license terms" and click Next.
![Accept License](http://www.tecmint.com/wp-content/uploads/2015/08/Accept-License.jpg)
Accept License
7. Next was the choice between upgrading (to Windows 10 from a previous version of Windows) and installing Windows fresh. I don't know why "Custom: Install Windows only" is labelled as advanced by Windows. Anyway, I chose to install Windows only.
![Select Installation Type](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Installation-Type.jpg)
Select Installation Type
8. I selected the target drive and clicked Next.
![Select Install Drive](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Drive.jpg)
Select Install Drive
9. The installer started copying files, getting files ready for installation, installing features, installing updates, and finishing up. It would be better if the installer showed verbose output about the actions it is taking.
![Installing Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Installing-Windows.jpg)
Installing Windows
10. And then Windows restarted; it said a reboot was needed to continue.
![Windows Installation Process](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Installation-Process.jpg)
Windows Installation Process
11. And then all I got was the screen below, which reads "Getting Ready". It took more than 5 minutes at this point. No idea what was going on. No output.
![Windows Getting Ready](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Getting-Ready.jpg)
Windows Getting Ready
12. Yet again, it was time to "Enter Product Key". I clicked "Do this later" and then used express settings.
![Enter Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Enter-Product-Key.jpg)
Enter Product Key
![Select Express Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Express-Settings.jpg)
Select Express Settings
14. And then three more progress screens, where I as a Linux user expected the installer to tell me what it was doing, but all in vain.
![Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Loading-Windows.jpg)
Loading Windows
![Getting Updates](http://www.tecmint.com/wp-content/uploads/2015/08/Getting-Updates.jpg)
Getting Updates
![Still Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Still-Loading-Windows.jpg)
Still Loading Windows
15. And then the installer wanted to know who owns this machine: "My organization" or I myself. I chose "I own it" and clicked Next.
![Select Organization](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Organization.jpg)
Select Organization
16. The installer prompted me to join "Azure AD" or "Join a domain" before I could click Continue. I chose the latter option.
![Connect Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Connect-Windows.jpg)
Connect Windows
17. The installer wanted me to create an account. I entered a user name and clicked Next; I was expecting an error message saying that I must enter a password.
![Create Account](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Account.jpg)
Create Account
18. To my surprise, Windows didn't even show a warning or notification that I must create a password. Such negligence. Anyway, I got my desktop.
![Windows 10 Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Desktop.jpg)
Windows 10 Desktop
#### Experience of a Linux User (Myself) So Far ####
- No net-installer image
- ISO image size too large
- No way to check the integrity of the downloaded ISO (no hash check)
- Booting and installation remain much the same as they were in XP, Windows 7, and 8
- As usual, no output on what the Windows installer is doing: which files it is copying or which packages it is installing
- Installation was straightforward and easy compared to the installation of a Linux distribution
### Windows 10 Testing ###
19. The default desktop is clean. It has a Recycle Bin icon on it, and you can search the web directly from the desktop itself. Additionally, icons for task viewing, Internet browsing, folder browsing, and the Microsoft Store are there. As usual, a notification bar is present on the bottom right to round out the desktop.
![Desktop Shortcut Icons](http://www.tecmint.com/wp-content/uploads/2015/08/Deskop-Shortcut-icons.jpg)
Desktop Shortcut Icons
20. Internet Explorer replaced by Microsoft Edge. Windows 10 has replaced the legacy web browser Internet Explorer, also known as IE, with Edge, a.k.a. Project Spartan.
![Microsoft Edge Browser](http://www.tecmint.com/wp-content/uploads/2015/08/Edge-browser.jpg)
Microsoft Edge Browser
It is fast, at least compared to IE (as it seemed in my testing), with a familiar user interface. The home screen contains news feed updates, and there is a search bar whose placeholder reads "Where to next?". The browser's load time is considerably low, which improves overall speed and performance. Edge's memory usage seems normal.
![Windows Performance](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Performance.jpg)
Windows Performance
Edge has got the Cortana intelligent personal assistant, support for Chrome extensions, Web Note (take notes while browsing), and Share (share right from the tab without opening any other tab).
#### Experience of a Linux User (Myself) at This Point ####
21. Microsoft has really improved web browsing. Let's see how stable and polished it remains. It doesn't lag as of now.
22. Though Edge's RAM usage was fine for me, a lot of users are complaining that Edge is notorious for excessive RAM usage.
23. It is difficult to say at this point whether Edge is ready to compete with Chrome and/or Firefox. Let's see what the future unfolds.
#### A Few More Stops on the Virtual Tour ####
24. The Start Menu has been redesigned and seems clear and effective. Metro icons make it lively. It is populated with the most common applications: Calendar, Mail, Edge, Photos, Contacts, Temperature, Companion Suite, OneNote, Store, Xbox, Music, Movies & TV, Money, News, etc.
![Windows Look and Feel](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Look.jpg)
Windows Look and Feel
On Linux, in the GNOME desktop environment, I search for the applications I need simply by pressing the Windows (Super) key and typing the name of the application.
![Search Within Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Within-Desktop.jpg)
Search Within Desktop
25. File Explorer has a clean design with sharp edges. In the left pane there are links to quick-access folders.
![Windows File Explorer](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-File-Explorer.jpg)
Windows File Explorer
The file browser in the GNOME desktop environment on Linux is equally clear and effective. Removing unnecessary graphics and imagery from icons is a plus point.
![File Browser on Gnome](http://www.tecmint.com/wp-content/uploads/2015/08/File-Browser.jpg)
File Browser on Gnome
26. Settings: though the settings are a bit more refined in Windows 10, you may compare them with the settings on a Linux box.
**Settings on Windows**
![Windows 10 Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Settings.jpg)
Windows 10 Settings
**Setting on Linux Gnome**
![Gnome Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Settings.jpg)
Gnome Settings
27. List of applications: the application list on Windows is better than what they used to provide (based on my memory from when I was a regular Windows user), but it still falls short compared to how GNOME 3 lists applications.
**Application Listed by Windows**
![Application List on Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Application-List-on-Windows-10.jpg)
Application List on Windows 10
**Application Listed by Gnome3 on Linux**
![Gnome Application List on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Application-List-on-Linux.jpg)
Gnome Application List on Linux
28. Virtual desktops: the virtual desktop feature of Windows 10 is one of the most talked-about topics these days.
Here is the virtual desktop in Windows 10.
![Windows Virtual Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Virtual-Desktop.jpg)
Windows Virtual Desktop
and the virtual desktops on Linux, which we have been using for more than two decades.
![Virtual Desktop on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Virtual-Desktop-on-Linux.jpg)
Virtual Desktop on Linux
#### A few other features of Windows 10 ####
29. Windows 10 comes with Wi-Fi Sense. It shares your password with others: anyone who is in range of your Wi-Fi and connected to you over Skype, Outlook, Hotmail, or Facebook can be granted access to your Wi-Fi network. And mind it, this has been added by Microsoft as a feature, to save time and allow hassle-free connections.
In a reply to a question raised by Tecmint, Microsoft said that the user has to agree to enable Wi-Fi Sense every time on a new network. Oh! What pathetic taste as far as security is concerned. I am not convinced.
30. Upgrading from Windows 7 and Windows 8.1 is free, though the retail cost of the Home and Pro editions is approximately $119 and $199, respectively.
31. Microsoft released the first cumulative update for Windows 10, which is said to put the system into an endless crash loop for a few people. Microsoft perhaps doesn't understand the problem, or doesn't want to work on that part; I don't know why.
32. Microsoft's built-in utility to block/hide unwanted updates doesn't work in my case. This means that if an update is there, there is no way to block or hide it. Sorry, Windows users!
#### A Few Features Native to Linux That Windows 10 Has ####
Windows 10 has a lot of features that were taken directly from Linux. Had Linux not been released under the GNU GPL, perhaps Microsoft would never have had the features below.
33. Command-line package management: yup! You heard it right. Windows 10 has built-in package management. It works only in Windows PowerShell; OneGet is the official package manager for Windows. The Windows package manager in action:
![Windows 10 Package Manager](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Package-Manager.jpg)
Windows 10 Package Manager
- Border-less windows
- Flat Icons
- Virtual Desktop
- One search for online + offline results
- Convergence of mobile and desktop OS
### Overall Conclusion ###
- Improved responsiveness
- Well-implemented animation
- Low on resources
- Improved battery life
- The Microsoft Edge web browser is rock solid
- Supported on the Raspberry Pi 2
- It looks good mainly because Windows 8/8.1 was not up to the mark and really bad
- It is the same old wine in a new bottle: almost the same things with brushed-up icons
What my testing suggests is that Windows 10 has improved on a few things, like the look and feel (as Windows always does): +1 for Project Spartan, virtual desktops, command-line package management, and one search for online and offline results. It is overall an improved product, but those who think Windows 10 will prove to be the last nail in the coffin of Linux are mistaken.
Linux is years ahead of Windows; their approaches are different. In the near future Windows won't stand anywhere near Linux, and there is nothing for which a Linux user needs to go to Windows 10.
That's all for now. I hope you liked the post. I will be here again with another interesting post you people will love to read. Provide us with your valuable feedback in the comments below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/a-linux-user-using-windows-10-after-more-than-8-years-see-comparison/
作者:[vishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:https://www.microsoft.com/en-us/software-download/windows10ISO

View File

@ -0,0 +1,109 @@
Debian GNU/Linux Birthday: 22 Years of Journey and Still Counting…
================================================================================
On 16th August 2015, the Debian project celebrated its 22nd anniversary, making it one of the oldest popular distributions in the open-source world. The Debian project was conceived and founded in 1993 by Ian Murdock. By that time, Slackware had already made a remarkable presence as one of the earliest Linux distributions.
![Happy 22nd Birthday to Debian](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-22nd-Birthday.png)
Happy 22nd Birthday to Debian Linux
Ian Ashley Murdock, an American software engineer by profession, conceived the idea of the Debian project when he was a student at Purdue University. He named the project Debian after his then-girlfriend Debra Lynn (Deb) and his own name. He later married her, then got divorced in January 2008.
![Ian Murdock](http://www.tecmint.com/wp-content/uploads/2014/08/Ian-Murdock.jpeg)
Debian Creator: Ian Murdock
Ian is currently serving as Vice President of Platform and Development Community at ExactTarget.
Debian (like Slackware) was the result of the lack of an up-to-the-mark Linux distribution at the time. Ian said in an interview: "Providing a first-class product without profit would be the sole aim of the Debian Project. Even Linux was not reliable and up to the mark at that time. I remember... moving files between file systems or dealing with voluminous files would often result in a kernel panic. However, the project Linux was promising. The free availability of the source code and the potential it showed were qualitative."
"I remember... like everyone else I wanted to solve a problem and run something like UNIX at home, but it was not possible, neither financially nor legally, in the other sense. Then I came to know about GNU kernel development and its non-association with any kind of legal issues," he added. He was sponsored by the Free Software Foundation (FSF) in the early days of his work on Debian, which also helped Debian take a giant step, though Ian needed to finish his degree and hence quit the FSF roughly one year into the sponsorship.
### Debian Development History ###
- **Debian 0.01 0.09**: Released between August 1993 and December 1993.
- **Debian 0.91**: Released in January 1994 with a primitive package system and no dependency handling.
- **Debian 0.93 rc5**: March 1995. The first modern release of Debian; dpkg was used to install and maintain packages after base system installation.
- **Debian 0.93 rc6**: Released in November 1995. The last a.out release; deselect made its first appearance, and 60 developers were maintaining packages at that time.
- **Debian 1.1**: Released in June 1996. Code name: Buzz, package count: 474, package manager: dpkg, kernel 2.0, ELF.
- **Debian 1.2**: Released in December 1996. Code name: Rex, package count: 848, developer count: 120.
- **Debian 1.3**: Released in July 1997. Code name: Bo, package count: 974, developer count: 200.
- **Debian 2.0**: Released in July 1998. Code name: Hamm, support for the Intel i386 and Motorola 68000 series architectures, number of packages: 1500+, number of developers: 400+, glibc included.
- **Debian 2.1**: Released on March 09, 1999. Code name: Slink, Alpha and SPARC architectures added, apt came into the picture, number of packages: 2250.
- **Debian 2.2**: Released on August 15, 2000. Code name: Potato, supported architectures: Intel i386, Motorola 68000 series, Alpha, Sun SPARC, PowerPC, and ARM. Number of packages: 3900+ (binary) and 2600+ (source), number of developers: 450. A group of people studied the release and produced an article called "Counting Potatoes", showing how a free software effort could lead to a modern operating system despite all the issues around it.
- **Debian 3.0**: Released on July 19th, 2002. Code name: Woody, architecture support extended with HP PA-RISC, IA-64, MIPS, and IBM S/390, first release available on DVD, package count: 8500+, developer count: 900+, cryptographic software included.
- **Debian 3.1**: Released on June 6th, 2005. Code name: Sarge, architecture support the same as Woody plus an unofficial AMD64 port, kernel 2.4 and 2.6 series, number of packages: 15000+, number of developers: 1500+, packages like the OpenOffice suite, the Firefox browser, Thunderbird, GNOME 2.8, and KDE 3.3; advanced installation support: RAID, XFS, LVM, modular installer.
- **Debian 4.0**: Released on April 8th, 2007. Code name: Etch, architecture support the same as Sarge, with AMD64 now official. Number of packages: 18,200+, developer count: 1030+, graphical installer.
- **Debian 5.0**: Released on February 14th, 2009. Code name: Lenny, architecture support the same as before plus ARM. Number of packages: 23000+, developer count: 1010+.
- **Debian 6.0**: Released on February 6th, 2011. Code name: Squeeze, packages included: kernel 2.6.32, GNOME 2.30, X.org 7.5, DKMS included, dependency-based booting. Architectures: same as previous plus kfreebsd-i386 and kfreebsd-amd64.
- **Debian 7.0**: Released on May 4, 2013. Code name: Wheezy, support for multiarch, tools for private clouds, improved installer, third-party repos no longer needed, full-featured multimedia codecs, kernel 3.2, Xen hypervisor 4.1.4, package count: 37400+.
- **Debian 8.0**: Released on May 25, 2015. Code name: Jessie, systemd as the default init system, powered by kernel 3.16, fast booting, cgroups for services, the possibility of isolating parts of services, 43000+ packages. The sysvinit init system remains available in Jessie.
**Note**: The Linux kernel's initial release was on October 05, 1991, and Debian's initial release was on September 15, 1993. So Debian has been around for 22 years, running the Linux kernel, which has been around for 24.
### Debian Facts ###
The year 1994 was spent organizing and managing the Debian project so that it would be easy for others to contribute. Hence no release for users was made that year; however, there were certain internal releases.
Debian 1.0 was never released. A CD-ROM manufacturer mistakenly labelled an unreleased version as Debian 1.0. Hence, to avoid confusion, Debian 1.0 was released as Debian 1.1, and only since then has the concept of official CD-ROM images existed.
Each release of Debian is named after a character from Toy Story.
Debian remains available as oldstable, stable, testing, and experimental, all the time.
The Debian Project continues to work on the unstable distribution (codenamed sid, after the evil kid from Toy Story). Sid is the permanent name of the unstable distribution and remains "Still In Development". The testing release is intended to become the next stable release and is currently codenamed stretch.
Debian's official distribution includes only free and open-source software, nothing else. However, the availability of the contrib and non-free sections makes it possible to install packages that are free but whose dependencies are not freely licensed (contrib), and packages licensed under non-free terms (non-free).
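These suites and sections map directly onto a system's APT configuration; a hypothetical `/etc/apt/sources.list` (the mirror URL is illustrative) might track them like this:

```
# stable, with the optional contrib and non-free sections enabled
deb http://httpredir.debian.org/debian jessie main contrib non-free

# testing: the next stable release in preparation
deb http://httpredir.debian.org/debian testing main

# unstable: permanently codenamed sid
deb http://httpredir.debian.org/debian sid main
```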
Debian is the mother of a lot of Linux distributions. Some of these include:
- Damn Small Linux
- KNOPPIX
- Linux Advanced
- MEPIS
- Ubuntu
- 64studio (No more active)
- LMDE
Debian is the world's largest non-commercial Linux distribution. It is written mainly in the C programming language (32.1%), with the rest in 70 other languages.
![Debian Contribution](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-Programming.png)
Debian Contribution
Image Source: [Xmodulo][1]
The Debian project contains 68.5 million actual LOC (lines of code), plus 4.5 million lines of comments and whitespace.
The International Space Station dropped Windows and Red Hat in favor of Debian. The astronauts are using a release one version back, "squeeze", for stability and the strength of the community.
Thank God! Who would have heard the screams from space at a Windows Metro screen? :P
#### The Black Wednesday ####
On November 20th, 2002, the Network Operation Center (NOC) of the University of Twente caught fire. The fire department gave up on protecting the server area. The NOC hosted satie.debian.org, which included the security and non-US archives, the new-maintainer system, quality assurance, and databases. Everything was turned to ashes. These services were later rebuilt by Debian.
#### The Future Distro ####
Next in the list is Debian 9, code name Stretch; what it will contain is yet to be revealed. The best is yet to come; just wait for it!
A lot of distributions have made an appearance in the Linux distro genre and then disappeared. In most cases, managing the project as it grew bigger was a concern. But this is certainly not the case with Debian: it has thousands of developers and maintainers all across the globe, and it is one distro that has been there since the early days of Linux.
The contribution of Debian to the Linux ecosystem can't be measured in words. If there had been no Debian, Linux would not be so rich and user-friendly. Debian is among the distros considered highly reliable, secure, and stable, and it is a perfect choice for web servers.
That was the beginning of Debian. It has come a long way and is still going. The future is here! The world is here! If you have not used Debian till now, what are you waiting for? Just download your image and get started; we will be here if you run into trouble.
- [Debian Homepage][2]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/happy-birthday-to-debian-gnu-linux/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://xmodulo.com/2013/08/interesting-facts-about-debian-linux.html
[2]:https://www.debian.org/

View File

@ -0,0 +1,53 @@
Docker Working on Security Components, Live Container Migration
================================================================================
![Docker Container Talk](http://www.eweek.com/imagesvr_ce/1905/290x195DockerMarianna.jpg)
**Docker developers take the stage at Containercon and discuss their work on future container innovations for security and live migration.**
SEATTLE—Containers are one of the hottest topics in IT today, and at the Linuxcon USA event here there is a co-located event called Containercon, dedicated to this virtualization technology.
Docker, the lead commercial sponsor of the open-source Docker effort, brought three of its top people to the keynote stage today, but not Docker founder Solomon Hykes.
Hykes, who delivered a Linuxcon keynote in 2014, was in the audience, though, as Senior Vice President of Engineering Marianna Tessel, Docker security chief Diogo Mónica and Docker chief maintainer Michael Crosby presented what's new and what's coming in Docker.
Tessel emphasized that Docker is very real today and used in production environments at some of the largest organizations on the planet, including the U.S. Government. Docker also is working in small environments too, including the Raspberry Pi small form factor ARM computer, which now can support up to 2,300 containers on a single device.
"We're getting more powerful and at the same time Docker will also get simpler to use," Tessel said.
As a metaphor, Tessel said that the whole Docker experience is much like a cruise ship, where there is powerful and complex machinery that powers the ship, yet the experience for passengers is all smooth sailing.
One area that Docker is trying to make easier is security. Tessel said that security is mind-numbingly complex for most people as organizations constantly try to avoid network breaches.
That's where Docker Content Trust comes into play; it's a configurable feature in the recent Docker 1.8 release. Diogo Mónica, security lead for Docker, joined Tessel on stage and said that security is a hard topic, which is why Docker Content Trust is being developed.
With Docker Content Trust there is a verifiable way to make sure that a given Docker application image is authentic. There also are controls to limit fraud and potential malicious code injection by verifying application freshness.
To prove his point, Mónica did a live demonstration of what could happen if Content Trust is not enabled. In one instance, a website update was manipulated to allow the demo web app to be defaced. With Content Trust enabled, the hack didn't work and was blocked.
"Don't let the simple demo fool you," Tessel said. "You have seen the best security possible."
One area where containers haven't been put to use before is for live migration, which on VMware virtual machines is a technology called vMotion. It's an area that Docker is currently working on.
Docker chief maintainer Michael Crosby did an onstage demonstration of a live migration of Docker containers. Crosby referred to the approach as checkpoint and restore, where a running container gets a checkpoint snapshot and is then restored to another location.
A container also can be cloned and then run in another location. Crosby humorously referred to his cloned container as "Dolly," a reference to the world's first cloned animal, Dolly the sheep.
Tessel also took time to talk about the RunC component of containers, which is now a technology component that is being developed by the Open Containers Initiative as a multi-stakeholder process. With RunC, containers expand beyond Linux to multiple operating systems including Windows and Solaris.
Overall, Tessel said that she can't predict the future of Docker, though she is very optimistic.
"I'm not sure what the future is, but I'm sure it'll be out of this world," Tessel said.
Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.
--------------------------------------------------------------------------------
via: http://www.eweek.com/virtualization/docker-working-on-security-components-live-container-migration.html
作者:[Sean Michael Kerner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/

View File

@ -0,0 +1,49 @@
Linuxcon: The Changing Role of the Server OS
================================================================================
SEATTLE - Containers might one day change the world, but it will take time and it will also change the role of the operating system. That's the message delivered during a Linuxcon keynote here today by Wim Coekaerts, SVP Linux and virtualization engineering at Oracle.
![](http://www.serverwatch.com/imagesvr_ce/6421/wim-200x150.jpg)
Coekaerts started his presentation by putting up a slide stating it's the year of the desktop, which generated a few laughs from the audience. Truly, though, Coekaerts said it is now apparent that 2015 is the year of the container, and more importantly the year of the application, which is what containers really are all about.
"What do you need an operating system for?" Coekaerts asked. "It's really just there to run an application; an operating system is there to manage hardware and resources so your app can run."
Coekaerts added that with Docker containers, the focus is once again on the application. At Oracle, Coekaerts said much of the focus is on how to make the app run better on the OS.
"Many people are used to installing apps, but many of the younger generation just click a button on their mobile device and it runs," Coekaerts said.
Coekaerts said that people now wonder why it's more complex in the enterprise to install software, and Docker helps to change that.
"The role of the operating system is changing," Coekaerts said.
The rise of Docker does not mean the demise of virtual machines (VMs), though. Coekaerts said it will take a very long time for things to mature in the containerization space and get used in real world.
During that period VMs and containers will co-exist and there will be a need for transition and migration tools between containers and VMs. For example, Coekaerts noted that Oracle's VirtualBox open-source technology is widely used on desktop systems today as a way to help users run Docker. The Docker Kitematic project makes use of VirtualBox to boot Docker on Macs today.
### The Open Container Initiative and Write Once, Deploy Anywhere for Containers ###
A key promise that needs to be fulfilled for containers to truly be successful is the concept of write once, deploy anywhere. That's an area where the Linux Foundation's Open Container Initiative (OCI) will play a key role in enabling interoperability across container runtimes.
"With OCI, it will make it easier to build once and run anywhere, so what you package locally you can run wherever you want," Coekaerts said.
Overall, though, Coekaerts said that while there is a lot of interest in moving to the container model, it's not quite ready yet. He noted Oracle is working on certifying its products to run in containers, but it's a hard process.
"Running the database is easy; it's everything else around it that is complex," Coekaerts said. "Containers don't behave the same as VMs, and some applications depend on low-level system configuration items that are not exposed from the host to the container."
Additionally, Coekaerts commented that debugging problems inside a container is different than in a VM, and there is currently a lack of mature tools for proper container app debugging.
Coekaerts emphasized that as container technology matures, it's important not to forget the existing technology that organizations use to run and deploy applications on servers today. He said enterprises don't typically throw out everything they have just to start with new technology.
"Deploying new technology is hard, and you need to be able to transition from what you have," Coekaerts said. "The technology that allows you to transition easily is the technology that wins."
--------------------------------------------------------------------------------
via: http://www.serverwatch.com/server-news/linuxcon-the-changing-role-of-the-server-os.html
作者:[Sean Michael Kerner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.serverwatch.com/author/Sean-Michael-Kerner-101580.htm

A Look at What's Next for the Linux Kernel
================================================================================
![](http://www.eweek.com/imagesvr_ce/485/290x195cilinux1.jpg)
**The upcoming Linux 4.2 kernel will have more contributors than any other Linux kernel in history, according to Linux kernel developer Jonathan Corbet.**
SEATTLE—The Linux kernel continues to grow—both in lines of code and the number of developers that contribute to it—yet some challenges need to be addressed. That was one of the key messages from Linux kernel developer Jonathan Corbet during his annual Kernel Report session at the LinuxCon conference here.
The Linux 4.2 kernel is still under development, with general availability expected on Aug. 23. Corbet noted that 1,569 developers have contributed code for the Linux 4.2 kernel. Of those, 277 developers made their first contribution ever, during the Linux 4.2 development cycle.
Even as more developers are coming to Linux, the pace of development and releases is very fast, Corbet said. He estimates that it now takes approximately 63 days for the community to build a new Linux kernel milestone.
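Corbet's roughly 63-day figure is easy to sanity-check against the dates in this article. Assuming Linux 4.1 shipped on June 21, 2015 (that date is supplied here for the arithmetic; the article only gives 4.2's expected Aug. 23 date), the 4.2 cycle works out to:

```python
from datetime import date

# Linux 4.2 GA expected Aug. 23, 2015 (from the article); Linux 4.1 assumed
# to have shipped June 21, 2015.
cycle = date(2015, 8, 23) - date(2015, 6, 21)
print(cycle.days)  # 63 -- in line with Corbet's estimate
```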
Linux 4.2 will benefit from a number of improvements that have been evolving in Linux over the last several releases. One such improvement is the introduction of OverlayFS, a new type of read-only file system that is useful because it can enable many containers to be layered on top of each other, Corbet said.
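To see why layering helps containers, here is a toy model of the lookup rule OverlayFS applies (greatly simplified; the real filesystem also handles directories, whiteouts, and copy-up, none of which are modeled here):

```python
# Toy model of OverlayFS name lookup -- NOT the kernel implementation.
# Reads are served by the topmost layer containing the name; writes land
# in the single writable upper layer, leaving read-only lowers untouched.
class Overlay:
    def __init__(self, *lowers):
        self.lowers = list(lowers)  # read-only layers, topmost first
        self.upper = {}             # the one writable layer

    def read(self, name):
        for layer in [self.upper, *self.lowers]:
            if name in layer:
                return layer[name]
        raise FileNotFoundError(name)

    def write(self, name, data):
        self.upper[name] = data

base_image = {"/etc/os-release": "base image"}
container_fs = Overlay(base_image)
container_fs.write("/app/config", "container-local")
print(container_fs.read("/etc/os-release"))  # served from the shared lower layer
print(container_fs.read("/app/config"))      # served from this container's upper layer
```

Because the lower layer is never modified, many containers can share one base image while each keeps its own thin writable layer on top.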
Linux networking also is set to improve small packet performance, which is important for areas such as high-frequency financial trading. The improvements are aimed at reducing the amount of time and power needed to process each data packet, Corbet said.
New drivers are always being added to Linux. On average, there are 60 to 80 new or updated drivers added in every Linux kernel development cycle, Corbet said.
Another key area that continues to improve is live kernel patching, first introduced in the Linux 4.0 kernel. With live kernel patching, the promise is that a system administrator can patch a live running kernel without the need to reboot a running production system. While the basic elements of live kernel patching are in the kernel already, work is under way to make the technology work with the right level of consistency and stability, Corbet explained.
**Linux Security, IoT and Other Concerns**
Security has been a hot topic in the open-source community in the past year due to high-profile issues, including Heartbleed and Shellshock.
"I don't doubt there are some unpleasant surprises in the neglected Linux code at this point," Corbet said.
He noted that there are more than 3 million lines of code in the Linux kernel today that have been untouched by developers in the last decade, and that the Shellshock vulnerability was a flaw in 20-year-old code that hadn't been looked at in some time.
Another issue that concerns Corbet is the Unix 2038 issue—the Linux equivalent of the Y2K bug, which could have caused global havoc in the year 2000 if it hadn't been fixed. With the 2038 issue, there is a bug that could shut down Linux and Unix machines in the year 2038. Corbet said that while 2038 is still 23 years away, there are systems being deployed now that will still be in use in 2038.
Some initial work took place to fix the 2038 flaw in Linux, but much more remains to be done, Corbet said. "The time to fix this is now, not 20 years from now in a panic when we're all trying to enjoy our retirement," Corbet said.
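The 2038 deadline comes straight from the width of the traditional `time_t` type: a signed 32-bit counter of seconds since the Unix epoch runs out early on January 19, 2038. A quick check in Python:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t counts seconds from 1970-01-01 and tops out at 2**31 - 1.
last_valid = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
print(last_valid)  # 2038-01-19 03:14:07+00:00 -- one second later, the counter wraps
```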
The Internet of things (IoT) is another area of Linux concern for Corbet. Today, Linux is a leading embedded operating system for IoT, but that might not always be the case. Corbet is concerned that the Linux kernel's growth is making it too big in terms of memory footprint to work in future IoT devices.
A Linux project is now under way to minimize the size of the Linux kernel, and it's important that it gets the support it needs, Corbet said.
"Either Linux is suitable for IoT, or something else will come along and that something else might not be as free and open as Linux," Corbet said. "We can't assume the continued dominance of Linux in IoT. We have to earn it. We have to pay attention to stuff that makes the kernel bigger."
--------------------------------------------------------------------------------
via: http://www.eweek.com/enterprise-apps/a-look-at-whats-next-for-the-linux-kernel.html
作者:[Sean Michael Kerner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/

LinuxCon's surprise keynote speaker Linus Torvalds muses about open-source software
================================================================================
> In a broad-ranging question and answer session, Linus Torvalds, Linux's founder, shared his thoughts on the current state of open source and Linux.
**SEATTLE** -- [LinuxCon][1] attendees got an early Christmas present when the Wednesday morning "surprise" keynote speaker turned out to be Linux's founder, Linus Torvalds.
![zemlin-and-torvalds-08192015-1.jpg](http://zdnet2.cbsistatic.com/hub/i/2015/08/19/9951f05a-fedf-4bf4-a4a1-3b4a15458de6/c19c89ded58025eccd090787ba40e803/zemlin-and-torvalds-08192015-1.jpg)
Jim Zemlin and Linus Torvalds shooting the breeze at LinuxCon in Seattle. -- sjvn
Jim Zemlin, the Linux Foundation's executive director, opened the question and answer session by quoting from a recent article about Linus, "[Torvalds may be the most influential individual economic force][2] of the past 20 years. ... Torvalds has, in effect, been as instrumental in retooling the production lines of the modern economy as Henry Ford was 100 years earlier."
Torvalds replied, "I don't think I'm all that powerful, but I'm glad to get all the credit for open source." For someone who's arguably been more influential on technology than Bill Gates, Steve Jobs, or Larry Ellison, Torvalds remains amusingly modest. That's probably one reason [Torvalds, who doesn't suffer fools gladly][3], remains the unchallenged leader of Linux.
It also helps that he doesn't take himself seriously, except when it comes to code quality. Zemlin reminded him that he was also described in the same article as being "5-feet, ho-hum tall with a paunch, ... his body type and gait resemble that of Tux, the penguin mascot of Linux." Torvalds' reply was to grin and say "What is this? A roast?" He added that 5'8" was a perfectly good height.
More seriously, Zemlin asked Torvalds what he thought about the current excitement over containers. Indeed, at times LinuxCon has felt like DockerCon. Torvalds replied, "I'm glad that the kernel is far removed from containers and other buzzwords. We only care about just the kernel. I'm so focused on the kernel I really don't care. I don't get involved in the politics above the kernel and I'm really happy that I don't know."
Moving on, Zemlin asked Torvalds what he thought about the demand from the Internet of Things (IoT) for an even smaller Linux kernel. "Everyone has always wished for a smaller kernel," Torvalds said. "But, with all the modules, it's still tens of megabytes in size. It's shocking that it used to fit into a megabyte. We'd like it to be a lean, mean IT machine again."
But, Torvalds continued, "It's hard to get rid of unnecessary fat. Things tend to grow. Realistically, I don't think we can get down to the sizes we were 20 years ago."
As for security, the next topic, Torvalds said, "I'm at odds with the security community. They tend to see technology as black and white. If it's not security they don't care at all about it." The truth is "security is bugs. Most of the security issues we've had in the kernel hasn't been that big. Most of them have been really stupid and then some clever person takes advantage of it."
The bottom line is, "We'll never get rid of bugs so security will never be perfect. We do try to be really careful about code. With user space we have to be very strict." But, "Bugs happen and all you can do is mitigate them. Open source is doing fairly well, but anyone who thinks we'll ever be completely secure is foolish."
Zemlin concluded by asking Torvalds where he saw Linux ten years from now. Torvalds replied that he doesn't look at it this way. "I'm plodding, pedestrian, I look ahead six months, I don't plan 10 years ahead. I think that's insane."
Sure, "companies plan ten years, and their plans use open source. Their whole process is very forward thinking. But I'm not worried about 10 years ahead. I look to the next release and the release beyond that."
For Torvalds, who works at home where "the FedEx guy is no longer surprised to find me in my bathrobe at 2 in the afternoon," looking ahead a few months works just fine. And so do all the businesses -- both technology-based (Amazon, Google, Facebook) and more mainstream (Walmart, the New York Stock Exchange, and McDonald's) -- that live on Linux every day.
--------------------------------------------------------------------------------
via: http://www.zdnet.com/article/linus-torvalds-muses-about-open-source-software/
作者:[Steven J. Vaughan-Nichols][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]:http://events.linuxfoundation.org/events/linuxcon-north-america
[2]:http://www.bloomberg.com/news/articles/2015-06-16/the-creator-of-linux-on-the-future-without-him
[3]:http://www.zdnet.com/article/linus-torvalds-finds-gnome-3-4-to-be-a-total-user-experience-design-failure/

Which Open Source Linux Distributions Would Presidential Hopefuls Run?
================================================================================
![Republican presidential candidate Donald Trump](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/08/donaldtrump.jpg)
Republican presidential candidate Donald Trump
If people running for president used Linux or another open source operating system, which distribution would it be? That's a key question that the rest of the press—distracted by issues of questionable relevance such as "policy platforms" and whether it's appropriate to add an exclamation point to one's Christian name—has been ignoring. But the ignorance ends here: Read on for this sometime-journalist's take on presidential elections and Linux distributions.
If this sounds like a familiar topic to those of you who have been reading my drivel for years (is anyone, other than my dear editor, unfortunate enough to have actually done that?), it's because I wrote a [similar post][1] during the last presidential election cycle. Some kind readers took that article more seriously than I intended, so I'll take a moment to point out that I don't actually believe that open source software and political campaigns have anything meaningful to do with one another. I am just trying to amuse myself at the start of a new week.
But you can make of this what you will. You're the reader, after all.
### Linux Distributions of Choice: Republicans ###
Today, I'll cover just the Republicans. And I won't even discuss all of them, since the candidates hoping for the Republican party's nomination are too numerous to cover fully here in one post. But for starters:
If **Jeb (Jeb!?) Bush** ran Linux, it would be [Debian][2]. It's a relatively boring distribution designed for serious, grown-up hackers—the kind who see it as their mission to be the adults in the pack and clean up the messes that less-experienced open source fans create. Of course, this also makes Debian relatively unexciting, and its user base remains perennially small as a result.
**Scott Walker**, for his part, would be a [Damn Small Linux][3] (DSL) user. Requiring merely 50MB of disk space and 16MB of RAM to run, DSL can breathe new life into 20-year-old 486 computers—which is exactly what a cost-cutting guru like Walker would want. Of course, the user experience you get from DSL is damn primitive; the platform barely runs a browser. But at least you won't be wasting money on new computer hardware when the stuff you bought in 1993 can still serve you perfectly well.
How about **Chris Christie**? He'd obviously be clinging to [Relax-and-Recover Linux][4], which bills itself as a "setup-and-forget Linux bare metal disaster recovery solution." "Setup-and-forget" has basically been Christie's political strategy ever since that unfortunate incident on the George Washington Bridge stymied his political momentum. Disaster recovery may or may not bring back everything for Christie in the end, but at least he might succeed in recovering a confidential email or two that accidentally disappeared when his computer crashed.
As for **Carly Fiorina**, she'd no doubt be using software developed for "[The Machine][5]" operating system from [Hewlett-Packard][6] (HPQ), the company she led from 1999 to 2005. The Machine actually may run several different operating systems, which may or may not be based on Linux—details remain unclear—and its development began well after Fiorina's tenure at HP came to a conclusion. Still, her roots as a successful executive in the IT world form an important part of her profile today, meaning that her ties to HP have hardly been severed fully.
Last but not least—and you knew this was coming—there's **Donald Trump**. He'd most likely pay a team of elite hackers millions of dollars to custom-build an operating system just for him—even though he could obtain a perfectly good, ready-made operating system for free—to show off how much money he has to waste. He'd then brag about it being the best operating system ever made, though it would of course not be compliant with POSIX or anything else, because that would mean catering to the establishment. The platform would also be totally undocumented, since, if Trump explained how his operating system actually worked, he'd risk giving away all his secrets to the Islamic State—obviously.
Alternatively, if Trump had to go with a Linux platform already out there, [Ubuntu][7] seems like the most obvious choice. Like Trump, the Ubuntu developers have taken a we-do-what-we-want approach to building open source software by implementing their own, sometimes proprietary applications and interfaces. Free-software purists hate Ubuntu for that, but plenty of ordinary people like it a lot. Of course, whether playing purely by your own rules—in the realms of either software or politics—is sustainable in the long run remains to be seen.
### Stay Tuned ###
If you're wondering why I haven't yet mentioned the Democratic candidates, worry not. I am not leaving them out of today's writing because I like them any more or less than the Republicans. (Personally, I think the peculiar American practice of having only two viable political parties—which virtually no other functioning democracy does—is ridiculous, and I am suspicious of all of these candidates as a result.)
On the contrary, there's plenty to say about the Linux distributions the Democrats might use, too. And I will, in a future post. Stay tuned.
--------------------------------------------------------------------------------
via: http://thevarguy.com/open-source-application-software-companies/081715/which-open-source-linux-distributions-would-presidential-
作者:[Christopher Tozzi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://thevarguy.com/author/christopher-tozzi
[1]:http://thevarguy.com/open-source-application-software-companies/aligning-linux-distributions-presidential-hopefuls
[2]:http://debian.org/
[3]:http://www.damnsmalllinux.org/
[4]:http://relax-and-recover.org/
[5]:http://thevarguy.com/open-source-application-software-companies/061614/hps-machine-open-source-os-truly-revolutionary
[6]:http://hp.com/
[7]:http://ubuntu.com/

Why did you start using Linux?
================================================================================
> In today's open source roundup: What got you started with Linux? Plus: IBM's Linux only Mainframe. And why you should skip Windows 10 and go with Linux
### Why did you start using Linux? ###
Linux has become quite popular over the years, with many users defecting to it from OS X or Windows. But have you ever wondered what got people started with Linux? A redditor asked that question and got some very interesting answers.
SilverKnight asked his question on the Linux subreddit:
> I know this has been asked before, but I wanted to hear more from the younger generation why it is that they started using linux and what keeps them here.
>
> I dont want to discourage others from giving their linux origin stories, because those are usually pretty good, but I was mostly curious about our younger population since there isn't much out there from them yet.
>
> I myself am 27 and am a linux dabbler. I have installed quite a few different distros over the years but I haven't made the plunge to full time linux. I guess I am looking for some more reasons/inspiration to jump on the bandwagon.
>
> [More at Reddit][1]
Fellow redditors in the Linux subreddit responded with their thoughts:
> **DoublePlusGood**: "I started using Backtrack Linux (now Kali) at 12 because I wanted to be a "1337 haxor". I've stayed with Linux (Archlinux currently) because it lets me have the endless freedom to make my computer do what I want."
>
> **Zack**: "I'm a Linux user since, I think, the age of 12 or 13, I'm 15 now.
>
> It started when I got tired with Windows XP at 11 and the waiting, dammit am I impatient sometimes, but waiting for a basic task such as shutting down just made me tired of Windows all together.
>
> A few months previously I had started participating in discussions in a channel on the freenode IRC network which was about a game, and as freenode usually goes, it was open source and most of the users used Linux.
>
> I kept on hearing about this Linux but wasn't that interested in it at the time. However, because the channel (and most of freenode) involved quite a bit of programming I started learning Python.
>
> A year passed and I was attempting to install GNU/Linux (specifically Ubuntu) on my new (technically old, but I had just got it for my birthday) PC, unfortunately it continually froze, for reasons unknown (probably a bad hard drive, or a lot of dust or something else...).
>
> Back then I was the type to give up on things, so I just continually nagged my dad to try and install Ubuntu, he couldn't do it for the same reasons.
>
> After wanting Linux for a while I became determined to get Linux and ditch windows for good. So instead of Ubuntu I tried Linux Mint, being a derivative of Ubuntu(?) I didn't have high hopes, but it worked!
>
> I continued using it for another 6 months.
>
> During that time a friend on IRC gave me a virtual machine (which ran Ubuntu) on their server, I kept it for a year a bit until my dad got me my own server.
>
> After the 6 months I got a new PC (which I still use!) I wanted to try something different.
>
> I decided to install openSUSE.
>
> I liked it a lot, and on the same Christmas I obtained a Raspberry Pi, and stuck with Debian on it for a while due to the lack of support other distros had for it."
>
> **Cqz**: "Was about 9 when the Windows 98 machine handed down to me stopped working for reasons unknown. We had no Windows install disk, but Dad had one of those magazines that comes with demo programs and stuff on CDs. This one happened to have install media for Mandrake Linux, and so suddenly I was a Linux user. Had no idea what I was doing but had a lot of fun doing it, and although in following years I often dual booted with various Windows versions, the FLOSS world always felt like home. Currently only have one Windows installation, which is a virtual machine for games."
>
> **Tosmarcel**: "I was 15 and was really curious about this new concept called 'programming' and then I stumbled upon this Harvard course, CS50. They told users to install a Linux vm to use the command line. But then I asked myself: "Why doesn't windows have this command line?!". I googled 'linux' and Ubuntu was the top result -Ended up installing Ubuntu and deleted the windows partition accidentally... It was really hard to adapt because I knew nothing about linux. Now I'm 16 and running arch linux, never looked back and I love it!"
>
> **Micioonthet**: "First heard about Linux in the 5th grade when I went over to a friend's house and his laptop was running MEPIS (an old fork of Debian) instead of Windows XP.
>
> Turns out his dad was a socialist (in America) and their family didn't trust Microsoft. This was completely foreign to me, and I was confused as to why he would bother using an operating system that didn't support the majority of software that I knew.
>
> Fast forward to when I was 13 and without a laptop. Another friend of mine was complaining about how slow his laptop was, so I offered to buy it off of him so I could fix it up and use it for myself. I paid $20 and got a virus filled, unusable HP Pavilion with Windows Vista. Instead of trying to clean up the disgusting Windows install, I remembered that Linux was a thing and that it was free. I burned an Ubuntu 12.04 disc and installed it right away, and was absolutely astonished by the performance.
>
> Minecraft (one of the few early Linux games because it ran on Java), which could barely run at 5 FPS on Vista, ran at an entirely playable 25 FPS on a clean install of Ubuntu.
>
> I actually still have that old laptop and use it occasionally, because why not? Linux doesn't care how old your hardware is.
>
> I since converted my dad to Linux and we buy old computers at lawn sales and thrift stores for pennies and throw Linux Mint or some other lightweight distros on them."
>
> **Webtm**: "My dad had every computer in the house with some distribution on it, I think a couple with OpenSUSE and Debian, and his personal computer had Slackware on it. So I remember being little and playing around with Debian and not really getting into it much. So I had a Windows laptop for a few years and my dad asked me if I wanted to try out Debian. It was a fun experience and ever since then I've been using Debian and trying out distributions. I currently moved away from Linux and have been using FreeBSD for around 5 months now, and I am absolutely happy with it.
>
> The control over your system is fantastic. There are a lot of cool open source projects. I guess a lot of the fun was figuring out how to do the things I want by myself and tweaking those things in ways to make them do something else. Stability and performance is also a HUGE plus. Not to mention the level of privacy when switching."
>
> **Wyronaut**: "I'm currently 18, but I first started using Linux when I was 13. Back then my first distro was Ubuntu. The reason why I wanted to check out Linux, was because I was hosting little Minecraft game servers for myself and a couple of friends, back then Minecraft was pretty new-ish. I read that the defacto operating system for hosting servers was Linux.
>
> I was a big newbie when it came to command line work, so Linux scared me a little, because I had to take care of a lot of things myself. But thanks to google and a few wiki pages I managed to get up a couple of simple servers running on a few older PC's I had lying around. Great use for all that older hardware no one in the house ever uses.
>
> After running a few game servers I started running a few web servers as well. Experimenting with HTML, CSS and PHP. I worked with those for a year or two. Afterwards, took a look at Java. I made the terrible mistake of watching TheNewBoston video's.
>
> So after like a week I gave up on Java and went to pick up a book on Python instead. That book was Learn Python The Hard Way by Zed A. Shaw. After I finished that at the fast pace of two weeks, I picked up the book C++ Primer, because at the time I wanted to become a game developer. Went trough about half of the book (~500 pages) and burned out on learning. At that point I was spending a sickening amount of time behind my computer.
>
> After taking a bit of a break, I decided to pick up JavaScript. Read like 2 books, made like 4 different platformers and called it a day.
>
> Now we're arriving at the present. I had to go through the horrendous process of finding a school and deciding what job I wanted to strive for when I graduated. I ruled out anything in the gaming sector as I didn't want anything to do with graphics programming anymore, I also got completely sick of drawing and modelling. And I found this bachelor that had something to do with netsec and I instantly fell in love. I picked up a couple books on C to shred this vacation period and brushed up on some maths and I'm now waiting for the new school year to commence.
>
> Right now, I am having loads of fun with Arch Linux, made couple of different arrangements on different PC's and it's going great!
>
> In a sense Linux is what also got me into programming and ultimately into what I'm going to study in college starting this september. I probably have my future life to thank for it."
>
> **Linuxllc**: "You also can learn from old farts like me.
>
> The crutch, The crutch, The crutch. Getting rid of the crutch will inspired you and have good reason to stick with Linux.
>
> I got rid of my crutch(Windows XP) back in 2003. Took me only 5 days to get all my computer task back and running at a 100% workflow. Including all my peripheral devices. Minus any Windows games. I just play native Linux games."
>
> **Highclass**: "Hey I'm 28 not sure if this is the age group you are looking for.
>
> To be honest, I was always interested in computers and the thought of a free operating system was intriguing even though at the time I didn't fully grasp the free software philosophy, to me it was free as in no cost. I also did not find the CLI too intimidating as from an early age I had exposure to DOS.
>
> I believe my first distro was Mandrake, I was 11 or 12, I messed up the family computer on several occasions.... I ended up sticking with it always trying to push myself to the next level. Now I work in the industry with Linux everyday.
>
> /shrug"
>
> Matto: "My computer couldn't run fast enough for XP (got it at a garage sale), so I started looking for alternatives. Ubuntu came up in Google. I was maybe 15 or 16 at the time. Now I'm 23 and have a job working on a product that uses Linux internally."
>
> [More at Reddit][2]
### IBM's Linux-Only Mainframe ###
IBM has a long history with Linux, and now the company has created a mainframe that features Ubuntu Linux. The new machine is named LinuxONE.
Ron Miller reports for TechCrunch:
> The new mainframes come in two flavors, named for penguins (Linux — penguins — get it?). The first is called Emperor and runs on the IBM z13, which we wrote about in January. The other is a smaller mainframe called the Rockhopper designed for a more “entry level” mainframe buyer.
>
> You may have thought that mainframes went the way of the dinosaur, but they are still alive and well and running in large institutions throughout the world. IBM as part of its broader strategy to promote the cloud, analytics and security is hoping to expand the potential market for mainframes by running Ubuntu Linux and supporting a range of popular open source enterprise software such as Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL and Chef.
>
> The metered mainframe will still sit inside the customer's on-premises data center, but billing will be based on how much the customer uses the system, much like a cloud model, Mauri explained.
>
> ...IBM is looking for ways to increase those sales. Partnering with Canonical and encouraging use of open source tools on a mainframe gives the company a new way to attract customers to a small, but lucrative market.
>
> [More at TechCrunch][3]
### Why you should skip Windows 10 and opt for Linux ###
Since Windows 10 has been released there has been quite a bit of media coverage about its potential to spy on users. ZDNet has listed some reasons why you should skip Windows 10 and opt for Linux instead on your computer.
SJVN reports for ZDNet:
> You can try to turn Windows 10's data-sharing ways off, but, bad news: Windows 10 will keep sharing some of your data with Microsoft anyway. There is an alternative: Desktop Linux.
>
> You can do a lot to keep Windows 10 from blabbing, but you can't always stop it from talking. Cortana, Windows 10's voice activated assistant, for example, will share some data with Microsoft, even when it's disabled. That data includes a persistent computer ID to identify your PC to Microsoft.
>
> So, if that gives you a privacy panic attack, you can either stick with your old operating system, which is likely Windows 7, or move to Linux. Eventually, when Windows 7 is no longer supported, if you want privacy you'll have no other viable choice but Linux.
>
> There are other, more obscure desktop operating systems that are also desktop-based and private. These include the BSD Unix family such as FreeBSD, PCBSD, and NetBSD and eComStation, OS/2 for the 21st century. Your best choice, though, is a desktop-based Linux with a low learning curve.
>
> [More at ZDNet][4]
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/2972587/linux/why-did-you-start-using-linux.html
作者:[Jim Lynch][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/author/Jim-Lynch/
[1]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/
[2]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/
[3]:http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on-linux-mainframe/
[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/

Linux 4.3 Kernel To Add The MOST Driver Subsystem
================================================================================
While the [Linux 4.2][1] kernel hasn't been officially released yet, Greg Kroah-Hartman sent in early his pull requests for the various subsystems he maintains for the Linux 4.3 merge window.
The pull requests sent in by Greg KH on Thursday include the Linux 4.3 merge window updates for the driver core, TTY/serial, USB driver, char/misc, and the staging area. These pull requests don't offer any really shocking changes but mostly routine work on improvements / additions / bug-fixes. The staging area once again is heavy with various fixes and clean-ups but there's also a new driver subsystem.
Regarding the [4.3 staging changes][2], Greg said, "Lots of things all over the place, almost all of them trivial fixups and changes. The usual IIO updates and new drivers and we have added the MOST driver subsystem which is getting cleaned up in the tree. The ozwpan driver is finally being deleted as it is obviously abandoned and no one cares about it."
The MOST driver subsystem is short for the Media Oriented Systems Transport. The documentation to be added in the Linux 4.3 kernel explains, "The Media Oriented Systems Transport (MOST) driver gives Linux applications access a MOST network: The Automotive Information Backbone and the de-facto standard for high-bandwidth automotive multimedia networking. MOST defines the protocol, hardware and software layers necessary to allow for the efficient and low-cost transport of control, real-time and packet data using a single medium (physical layer). Media currently in use are fiber optics, unshielded twisted pair cables (UTP) and coax cables. MOST also supports various speed grades up to 150 Mbps." As explained, MOST is mostly about Linux in automotive applications.
While Greg KH sent in his various subsystem updates for Linux 4.3, he didn't yet propose the [KDBUS][5] kernel code be pulled. He's previously expressed plans for [KDBUS in Linux 4.3][3] so we'll wait until the 4.3 merge window officially gets going to see what happens. Stay tuned to Phoronix for more Linux 4.3 kernel coverage next week when the merge window will begin, [assuming Linus releases 4.2][4] this weekend.
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.3-Staging-Pull
作者:[Michael Larabel][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.michaellarabel.com/
[1]:http://www.phoronix.com/scan.php?page=search&q=Linux+4.2
[2]:http://lkml.iu.edu/hypermail/linux/kernel/1508.2/02604.html
[3]:http://www.phoronix.com/scan.php?page=news_item&px=KDBUS-Not-In-Linux-4.2
[4]:http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.2-rc7-Released
[5]:http://www.phoronix.com/scan.php?page=search&q=KDBUS

View File

@ -1,3 +1,4 @@
wyangsun translating
Install Strongswan - A Tool to Setup IPsec Based VPN in Linux
================================================================================
IPsec is a standard which provides security at the network layer. It consists of the authentication header (AH) and encapsulating security payload (ESP) components. AH provides packet integrity, while confidentiality is provided by the ESP component. IPsec ensures the following security features at the network layer.
@ -110,4 +111,4 @@ via: http://linoxide.com/security/install-strongswan/
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/naveeda/
[1]:https://www.strongswan.org/
[1]:https://www.strongswan.org/

View File

@ -1,114 +0,0 @@
[bazz2]
Howto Manage Host Using Docker Machine in a VirtualBox
================================================================================
Hi all, today we'll learn how to create and manage a Docker host using Docker Machine in VirtualBox. Docker Machine is an application that helps to create Docker hosts on our computer, on cloud providers and inside our own data center. It provides an easy solution for creating servers, installing Docker on them and then configuring the Docker client according to the user's configuration and requirements. Its API works for provisioning Docker on a local machine, on a virtual machine in the data center, or on a public cloud instance. Docker Machine is supported on Windows, OS X, and Linux and is available for installation as a single standalone binary. It enables us to take full advantage of ecosystem partners providing Docker-ready infrastructure, while still accessing everything through the same interface. It lets people deploy Docker containers on the platform of their choice quickly and easily with just a single command.
Here are some simple steps that help us deploy Docker containers using Docker Machine.
### 1. Installing Docker Machine ###
Docker Machine works well on every major Linux operating system. First of all, we'll need to download the latest version of Docker Machine from the [Github site][1]. Here, we'll use curl to download the latest version of Docker Machine, i.e. 0.2.0.
**For 64 Bit Operating System**
# curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine
**For 32 Bit Operating System**
# curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine
After downloading the latest release of Docker Machine, we'll make the file named **docker-machine** under **/usr/local/bin/** executable using the command below.
# chmod +x /usr/local/bin/docker-machine
After doing the above, we'll want to ensure that we have successfully installed docker-machine. To check, we can run `docker-machine -v`, which will print the version of docker-machine installed on our system.
# docker-machine -v
![Installing Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png)
To enable Docker commands on our machines, make sure to install the Docker client as well by running the command below.
# curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker
# chmod +x /usr/local/bin/docker
### 2. Creating a VirtualBox VM ###
After we have successfully installed Docker Machine on our Linux machine, we'll want to create a virtual machine using VirtualBox. To get started, we run the docker-machine create command with the --driver flag set to virtualbox, as we are trying to deploy Docker inside a VirtualBox-managed VM; the final argument is the name of the machine, here "linux". This command downloads the [boot2docker][2] ISO, a lightweight Linux distribution based on Tiny Core Linux with the Docker daemon installed, and creates and starts a VirtualBox VM with Docker running, as mentioned above.
To do so, we'll run the following command in a terminal or shell in our box.
# docker-machine create --driver virtualbox linux
![Creating Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-docker-machine.png)
Now, to check whether we have successfully created a VirtualBox VM running Docker, we'll run the **docker-machine ls** command as shown below.
# docker-machine ls
![Docker Machine List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-list.png)
If the host is active, we can see an asterisk (*) under the ACTIVE column in the output, as shown above.
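Because the `docker-machine ls` output is plain columnar text, the active machine can also be picked out in a script. Below is a small sketch using awk on stand-in output (not live output; the column layout is assumed from the screenshot above):

```shell
# stand-in for `docker-machine ls` output; the second column is ACTIVE
ls_output='NAME    ACTIVE   DRIVER       STATE     URL
linux   *        virtualbox   Running   tcp://192.168.99.100:2376'

# print the name of any machine whose ACTIVE column is "*"
printf '%s\n' "$ls_output" | awk 'NR > 1 && $2 == "*" { print $1 }'
# → linux
```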
### 3. Setting Environment Variables ###
Now, we'll need to make the Docker client talk to the machine. We can do that by running docker-machine env followed by the machine name, which we named **linux** above.
# eval "$(docker-machine env linux)"
# docker ps
This will set environment variables that the Docker client will read which specify the TLS settings. Note that we'll need to do this every time we reboot our machine or start a new tab. We can see what variables will be set by running the following command.
# docker-machine env linux
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/Users/<your username>/.docker/machine/machines/dev
export DOCKER_HOST=tcp://192.168.99.100:2376
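To clarify what the eval line actually does: `docker-machine env` only prints export statements, and `eval` executes them in the current shell. A sketch with a stand-in function (no Docker needed; the values mirror the sample output above):

```shell
# stand-in for `docker-machine env linux`: it just prints export statements
fake_machine_env() {
    echo 'export DOCKER_TLS_VERIFY=1'
    echo 'export DOCKER_HOST=tcp://192.168.99.100:2376'
}

# eval runs those printed exports in the current shell
eval "$(fake_machine_env)"
echo "$DOCKER_HOST"
# → tcp://192.168.99.100:2376
```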
### 4. Running Docker Containers ###
Finally, after configuring the environment variables and the virtual machine, we are able to run Docker containers on the host running inside the virtual machine. To give it a test, we'll run a busybox container with the **docker run busybox** command followed by **echo hello world**, so that we can see the container's output.
# docker run busybox echo hello world
![Running Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/05/running-docker-container.png)
### 5. Getting Docker Host's IP ###
We can get the IP address of the running Docker host using the **docker-machine ip** command. Any exposed ports are then available on the Docker host's IP address.
# docker-machine ip
![Docker IP Address](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-ip-address.png)
### 6. Managing the Hosts ###
Now we can manage as many local VMs running Docker as we desire by running the docker-machine create command again, as described in the steps above.
When we are finished working with a Docker host, we can simply run the **docker-machine stop** command to stop all active hosts; to start them again, we can run **docker-machine start**.
# docker-machine stop
# docker-machine start
You can also specify a host to stop or start using the host name as an argument.
$ docker-machine stop linux
$ docker-machine start linux
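Since every host is addressed by name, bulk management is just a shell loop. Here is a dry-run sketch that only echoes the commands it would issue (the machine names node1..node3 are hypothetical):

```shell
# dry run: print, rather than execute, one create command per machine
for name in node1 node2 node3; do
    echo docker-machine create --driver virtualbox "$name"
done
```

Dropping the echo would actually create the machines, and the same loop shape works for stop and start.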
### Conclusion ###
Finally, we have successfully created and managed a Docker host inside VirtualBox using Docker Machine. Docker Machine really does let people create, deploy and manage Docker hosts quickly and easily on different platforms; here we ran our Docker host on the VirtualBox platform. The VirtualBox driver API works for provisioning Docker on a local machine or on a virtual machine in the data center. Docker Machine ships with drivers for provisioning Docker locally with VirtualBox as well as remotely on Digital Ocean instances, with more drivers in the works for AWS, Azure, VMware, and other infrastructure. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/host-virtualbox-docker-machine/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://github.com/docker/machine/releases
[2]:https://github.com/boot2docker/boot2docker

View File

@ -1,151 +0,0 @@
translation by strugglingyouth
How to monitor NGINX with Datadog - Part 3
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)
If you've already read [our post on monitoring NGINX][1], you know how much information you can gain about your web environment from just a handful of metrics. And you've also seen just how easy it is to start collecting metrics from NGINX on an ad hoc basis. But to implement comprehensive, ongoing NGINX monitoring, you will need a robust monitoring system to store and visualize your metrics, and to alert you when anomalies happen. In this post, we'll show you how to set up NGINX monitoring in Datadog so that you can view your metrics on customizable dashboards like this:
![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png)
Datadog allows you to build graphs and alerts around individual hosts, services, processes, metrics—or virtually any combination thereof. For instance, you can monitor all of your NGINX hosts, or all hosts in a certain availability zone, or you can monitor a single key metric being reported by all hosts with a certain tag. This post will show you how to:
- Monitor NGINX metrics on Datadog dashboards, alongside all your other systems
- Set up automated alerts to notify you when a key metric changes dramatically
### Configuring NGINX ###
To collect metrics from NGINX, you first need to ensure that NGINX has an enabled status module and a URL for reporting its status metrics. Step-by-step instructions [for configuring open-source NGINX][2] and [NGINX Plus][3] are available in our companion post on metric collection.
### Integrating Datadog and NGINX ###
#### Install the Datadog Agent ####
The Datadog Agent is [the open-source software][4] that collects and reports metrics from your hosts so that you can view and monitor them in Datadog. Installing the agent usually takes [just a single command][5].
As soon as your Agent is up and running, you should see your host reporting metrics [in your Datadog account][6].
![Datadog infrastructure list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png)
#### Configure the Agent ####
Next you'll need to create a simple NGINX configuration file for the Agent. The location of the Agent's configuration directory for your OS can be found [here][7].
Inside that directory, at conf.d/nginx.yaml.example, you will find [a sample NGINX config file][8] that you can edit to provide the status URL and optional tags for each of your NGINX instances:
init_config:

instances:
  - nginx_status_url: http://localhost/nginx_status/
    tags:
      - instance:foo
Once you have supplied the status URLs and any tags, save the config file as conf.d/nginx.yaml.
#### Restart the Agent ####
You must restart the Agent to load your new configuration file. The restart command varies somewhat by platform—see the specific commands for your platform [here][9].
#### Verify the configuration settings ####
To check that Datadog and NGINX are properly integrated, run the Datadog info command. The command for each platform is available [here][10].
If the configuration is correct, you will see a section like this in the output:
Checks
======
[...]
nginx
-----
- instance #0 [OK]
- Collected 8 metrics & 0 events
#### Install the integration ####
Finally, switch on the NGINX integration inside your Datadog account. It's as simple as clicking the “Install Integration” button under the Configuration tab in the [NGINX integration settings][11].
![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png)
### Metrics! ###
Once the Agent begins reporting NGINX metrics, you will see [an NGINX dashboard][12] among your list of available dashboards in Datadog.
The basic NGINX dashboard displays a handful of graphs encapsulating most of the key metrics highlighted [in our introduction to NGINX monitoring][13]. (Some metrics, notably request processing time, require log analysis and are not available in Datadog.)
You can easily create a comprehensive dashboard for monitoring your entire web stack by adding additional graphs with important metrics from outside NGINX. For example, you might want to monitor host-level metrics on your NGINX hosts, such as system load. To start building a custom dashboard, simply clone the default NGINX dashboard by clicking on the gear near the upper right of the dashboard and selecting “Clone Dash”.
![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png)
You can also monitor your NGINX instances at a higher level using Datadog's [Host Maps][14]—for instance, color-coding all your NGINX hosts by CPU usage to identify potential hotspots.
![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png)
### Alerting on NGINX metrics ###
Once Datadog is capturing and visualizing your metrics, you will likely want to set up some monitors to automatically keep tabs on your metrics—and to alert you when there are problems. Below we'll walk through a representative example: a metric monitor that alerts on sudden drops in NGINX throughput.
#### Monitor your NGINX throughput ####
Datadog metric alerts can be threshold-based (alert when the metric exceeds a set value) or change-based (alert when the metric changes by a certain amount). In this case we'll take the latter approach, alerting when our incoming requests per second drop precipitously. Such drops are often indicative of problems.
1. **Create a new metric monitor**. Select “New Monitor” from the “Monitors” dropdown in Datadog. Select “Metric” as the monitor type.
![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png)
2. **Define your metric monitor**. We want to know when our total NGINX requests per second drop by a certain amount. So we define the metric of interest to be the sum of nginx.net.request_per_s across our infrastructure.
![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png)
3. **Set metric alert conditions**. Since we want to alert on a change, rather than on a fixed threshold, we select “Change Alert.” We'll set the monitor to alert us whenever the request volume drops by 30 percent or more. Here we use a one-minute window of data to represent the metric's value “now” and alert on the average change across that interval, as compared to the metric's value 10 minutes prior.
![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png)
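The change-alert math itself is simple: percent change = (now - before) / before * 100. A quick sanity check of the 30-percent threshold with awk, using made-up request rates (the numbers are illustrative, not from the article):

```shell
before=1000   # made-up avg requests/sec 10 minutes ago
now=650       # made-up avg requests/sec over the last minute

# percent change: -35 here, which would trip a "drops by 30% or more" alert
awk -v a="$before" -v b="$now" 'BEGIN { printf "%d\n", (b - a) / a * 100 }'
# → -35
```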
4. **Customize the notification**. If our NGINX request volume drops, we want to notify our team. In this case we will post a notification in the ops team's chat room and page the engineer on call. In “Say what's happening”, we name the monitor and add a short message that will accompany the notification to suggest a first step for investigation. We @mention the Slack channel that we use for ops and use @pagerduty to [route the alert to PagerDuty][15].
![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png)
5. **Save the integration monitor**. Click the “Save” button at the bottom of the page. You're now monitoring a key NGINX [work metric][16], and your on-call engineer will be paged anytime it drops rapidly.
### Conclusion ###
In this post we've walked you through integrating NGINX with Datadog to visualize your key metrics and notify your team when your web infrastructure shows signs of trouble.
If you've followed along using your own Datadog account, you should now have greatly improved visibility into what's happening in your web environment, as well as the ability to create automated monitors tailored to your environment, your usage patterns, and the metrics that are most valuable to your organization.
If you don't yet have a Datadog account, you can sign up for [a free trial][17] and start monitoring your infrastructure, your applications, and your services today.
----------
Source Markdown for this post is available [on GitHub][18]. Questions, corrections, additions, etc.? Please [let us know][19].
------------------------------------------------------------
via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
作者:K Young
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[2]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[3]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus
[4]:https://github.com/DataDog/dd-agent
[5]:https://app.datadoghq.com/account/settings#agent
[6]:https://app.datadoghq.com/infrastructure
[7]:http://docs.datadoghq.com/guides/basic_agent_usage/
[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example
[9]:http://docs.datadoghq.com/guides/basic_agent_usage/
[10]:http://docs.datadoghq.com/guides/basic_agent_usage/
[11]:https://app.datadoghq.com/account/settings#integrations/nginx
[12]:https://app.datadoghq.com/dash/integration/nginx
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/
[15]:https://www.datadoghq.com/blog/pagerduty/
[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up
[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md
[19]:https://github.com/DataDog/the-monitor/issues

View File

@ -1,129 +0,0 @@
Translating by dingdongnigetou
Howto Configure Nginx as Reverse Proxy / Load Balancer with Weave and Docker
================================================================================
Hi everyone, today we'll learn how to configure Nginx as a reverse proxy / load balancer with Weave and Docker. Weave creates a virtual network that connects Docker containers with each other, deploys across multiple hosts and enables their automatic discovery. It allows us to focus on developing our application, rather than our infrastructure. It provides such a friendly environment that the application uses the network as if its containers were all plugged into the same network, without any need to configure ports, mappings, links, etc. The services of the application containers on the network are easily accessible to the external world, no matter where they are running. Here, in this tutorial, we'll use Weave to quickly and easily deploy an Nginx web server as a load balancer for a simple PHP application running in Docker containers on multiple nodes in Amazon Web Services. We will also be introduced to WeaveDNS, which provides a simple way for containers to find each other using hostnames, with no code changes, and tells other containers to connect to those names.
Here, in this tutorial, we will use Nginx to load balance requests to a set of containers running Apache. Below are simple, easy-to-follow steps for using Weave to configure Nginx as a load balancer running in an Ubuntu Docker container.
### 1. Setting up AWS Instances ###
First of all, we'll need to set up Amazon Web Services instances so that we can run Docker containers with Weave on Ubuntu. We will use the [AWS CLI][1] to set up and configure two AWS EC2 instances. Here, in this tutorial, we'll use the smallest available instance type, t1.micro. We will need a valid **Amazon Web Services account** with the AWS CLI set up and configured. First, we'll clone the Weave repository from GitHub by running the following command in the AWS CLI.
$ git clone http://github.com/fintanr/weave-gs
$ cd weave-gs/aws-nginx-ubuntu-simple
After cloning the repository, we'll run the script that deploys two t1.micro instances running Weave and Docker on Ubuntu.
$ sudo ./demo-aws-setup.sh
For this tutorial, we'll need the IP addresses of these instances later. These are stored in an environment file, weavedemo.env, which is created during the execution of demo-aws-setup.sh. To get those IP addresses, we run the following command, which gives output similar to that shown below.
$ cat weavedemo.env
export WEAVE_AWS_DEMO_HOST1=52.26.175.175
export WEAVE_AWS_DEMO_HOST2=52.26.83.141
export WEAVE_AWS_DEMO_HOSTCOUNT=2
export WEAVE_AWS_DEMO_HOSTS=(52.26.175.175 52.26.83.141)
Please note that these will not be the IP addresses for your run of the tutorial; AWS dynamically allocates IP addresses to our instances.
As we are using bash, we will just source this file using the command below.
. ./weavedemo.env
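Sourcing simply executes the export lines of the file in the current shell, so the variables persist for the weave commands that follow. A self-contained illustration using a throwaway file (the /tmp/demo.env path is arbitrary; values copied from the sample output above):

```shell
# write a throwaway env file shaped like weavedemo.env
cat > /tmp/demo.env <<'EOF'
export WEAVE_AWS_DEMO_HOST1=52.26.175.175
export WEAVE_AWS_DEMO_HOST2=52.26.83.141
EOF

# "." (source) runs the file in the current shell, so the exports persist
. /tmp/demo.env
echo "$WEAVE_AWS_DEMO_HOST1"
# → 52.26.175.175
```

Running the file as a normal script instead would set the variables only in a child shell, which is why sourcing is required here.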
### 2. Launching Weave and WeaveDNS ###
After deploying the instances, we'll want to launch Weave and WeaveDNS on each host. Weave and WeaveDNS allow us to easily deploy our containers to a new infrastructure and configuration without changing any code and without needing to understand concepts such as ambassador containers and links. Here are the commands to launch them on the first host.
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
$ sudo weave launch
$ sudo weave launch-dns 10.2.1.1/24
Next, we'll launch them on our second host as well.
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2
$ sudo weave launch $WEAVE_AWS_DEMO_HOST1
$ sudo weave launch-dns 10.2.1.2/24
### 3. Launching Application Containers ###
Now, we want to launch six containers across our two hosts, each running an Apache2 web server instance with our simple PHP site. So, we'll run the following commands, which start 3 containers running the Apache2 web server on our 1st instance.
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
$ sudo weave run --with-dns 10.3.1.1/24 -h ws1.weave.local fintanr/weave-gs-nginx-apache
$ sudo weave run --with-dns 10.3.1.2/24 -h ws2.weave.local fintanr/weave-gs-nginx-apache
$ sudo weave run --with-dns 10.3.1.3/24 -h ws3.weave.local fintanr/weave-gs-nginx-apache
After that, we'll launch 3 more containers running the Apache2 web server on our 2nd instance, as shown below.
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2
$ sudo weave run --with-dns 10.3.1.4/24 -h ws4.weave.local fintanr/weave-gs-nginx-apache
$ sudo weave run --with-dns 10.3.1.5/24 -h ws5.weave.local fintanr/weave-gs-nginx-apache
$ sudo weave run --with-dns 10.3.1.6/24 -h ws6.weave.local fintanr/weave-gs-nginx-apache
Note: Here, the --with-dns option tells the container to use WeaveDNS to resolve names, and -h x.weave.local makes the host resolvable via WeaveDNS.
### 4. Launching Nginx Container ###
After our application containers are running as expected, we'll launch an Nginx container whose configuration round-robins across the servers for reverse proxying or load balancing. To run the Nginx container, we'll use the following command.
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
$ sudo weave run --with-dns 10.3.1.7/24 -ti -h nginx.weave.local -d -p 80:80 fintanr/weave-gs-nginx-simple
Hence, our Nginx container is publicly exposed as an HTTP server on $WEAVE_AWS_DEMO_HOST1.
### 5. Testing the Load Balancer ###
To test whether our load balancer is working, we'll run a script that makes HTTP requests to our Nginx container. We'll make six requests so that we can see Nginx cycling through each of the web servers in round-robin order.
$ ./access-aws-hosts.sh
{
"message" : "Hello Weave - nginx example",
"hostname" : "ws1.weave.local",
"date" : "2015-06-26 12:24:23"
}
{
"message" : "Hello Weave - nginx example",
"hostname" : "ws2.weave.local",
"date" : "2015-06-26 12:24:23"
}
{
"message" : "Hello Weave - nginx example",
"hostname" : "ws3.weave.local",
"date" : "2015-06-26 12:24:23"
}
{
"message" : "Hello Weave - nginx example",
"hostname" : "ws4.weave.local",
"date" : "2015-06-26 12:24:23"
}
{
"message" : "Hello Weave - nginx example",
"hostname" : "ws5.weave.local",
"date" : "2015-06-26 12:24:23"
}
{
"message" : "Hello Weave - nginx example",
"hostname" : "ws6.weave.local",
"date" : "2015-06-26 12:24:23"
}
### Conclusion ###
Finally, we've successfully configured Nginx as a reverse proxy / load balancer with Weave and Docker on Ubuntu servers in AWS (Amazon Web Services) EC2. From the output in the step above, it is clear that we have configured it correctly: the requests are sent to the 6 application containers in round-robin order, each running a PHP app hosted on the Apache web server. Here, Weave and WeaveDNS did great work, letting us deploy a containerized PHP application using Nginx across multiple hosts on AWS EC2 without any code changes, connecting the containers to each other by hostname via WeaveDNS. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://console.aws.amazon.com/

View File

@ -1,419 +0,0 @@
wyangsun translating
Managing Linux Logs
================================================================================
A key best practice for logging is to centralize or aggregate your logs in one place, especially if you have multiple servers or tiers in your architecture. We'll tell you why this is a good idea and give tips on how to do it easily.
### Benefits of Centralizing Logs ###
It can be cumbersome to look at individual log files if you have many servers. Modern web sites and services often include multiple tiers of servers, distributed with load balancers, and more. It'd take a long time to hunt down the right file, and even longer to correlate problems across servers. There's nothing more frustrating than finding the information you are looking for hasn't been captured, or the log file that could have held the answer has just been lost after a restart.
Centralizing your logs makes them faster to search, which can help you solve production issues faster. You don't have to guess which server had the issue because all the logs are in one place. Additionally, you can use more powerful tools to analyze them, including log management solutions. These solutions can [transform plain text logs][1] into fields that can be easily searched and analyzed.
Centralizing your logs also makes them easier to manage:
- They are safer from accidental or intentional loss when they are backed up and archived in a separate location. If your servers go down or are unresponsive, you can use the centralized logs to debug the problem.
- You don't have to worry about ssh or inefficient grep commands requiring more resources on troubled systems.
- You don't have to worry about full disks, which can crash your servers.
- You can keep your production servers secure without giving your entire team access just to look at logs. It's much safer to give your team access to logs from the central location.
With centralized log management, you still must deal with the risk of being unable to transfer logs to the centralized location due to poor network connectivity or of using up a lot of network bandwidth. We'll discuss how to intelligently address these issues in the sections below.
### Popular Tools for Centralizing Logs ###
The most common way of centralizing logs on Linux is by using syslog daemons or agents. The syslog daemon supports local log collection, then transports logs through the syslog protocol to a central server. There are several popular daemons that you can use to centralize your log files:
- [rsyslog][2] is a light-weight daemon installed on most common Linux distributions.
- [syslog-ng][3] is the second most popular syslog daemon for Linux.
- [logstash][4] is a heavier-weight agent that can do more advanced processing and parsing.
- [fluentd][5] is another agent with advanced processing capabilities.
Rsyslog is the most popular daemon for centralizing your log data because it's installed by default in most common distributions of Linux. You don't need to download it or install it, and it's lightweight so it won't take up much of your system resources.
If you need more advanced filtering or custom parsing capabilities, Logstash is the next most popular choice if you don't mind the extra system footprint.
### Configure Rsyslog.conf ###
Since rsyslog is the most widely used syslog daemon, we'll show how to configure it to centralize logs. The global configuration file is located at /etc/rsyslog.conf. It loads modules, sets global directives, and has an include for application-specific files located in the directory /etc/rsyslog.d/. This directory contains /etc/rsyslog.d/50-default.conf which instructs rsyslog to write the system logs to file. You can read more about the configuration files in the [rsyslog documentation][6].
The configuration language for rsyslog is [RainerScript][7]. You set up specific inputs for logs as well as actions to output them to another destination. Rsyslog already configures standard defaults for syslog input, so you usually just need to add an output to your log server. Here is an example configuration for rsyslog to output logs to an external server. In this example, **BEBOP** is the hostname of the server, so you should replace it with your own server name.
action(type="omfwd" protocol="tcp" target="BEBOP" port="514")
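If your rsyslog version only supports the older legacy syntax, the same forwarding rule can be written as a one-line selector (BEBOP is again the placeholder hostname; a single @ would forward over UDP instead of TCP):

```
*.* @@BEBOP:514
```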
You could send your logs to a log server with ample storage to keep a copy for search, backup, and analysis. If you're storing logs in the file system, then you should set up [log rotation][8] to keep your disk from getting full.
Alternatively, you can send these logs to a log management solution. If your solution is installed locally you can send it to your local host and port as specified in your system documentation. If you use a cloud-based provider, you will send them to the hostname and port specified by your provider.
### Log Directories ###
You can centralize all the files in a directory or matching a wildcard pattern. The nxlog and syslog-ng daemons support both directories and wildcards (*).
Common versions of rsyslog can't monitor directories directly. As a workaround, you can set up a cron job to monitor the directory for new files, then configure rsyslog to send those files to a destination, such as your log management system. As an example, the log management vendor Loggly has an open source version of a [script to monitor directories][9].
### Which Protocol: UDP, TCP, or RELP? ###
There are three main protocols that you can choose from when you transmit data over the Internet. The most common is UDP for your local network and TCP for the Internet. If you cannot lose logs, then use the more advanced RELP protocol.
[UDP][10] sends a datagram packet, which is a single packet of information. It's an outbound-only protocol, so it doesn't send you an acknowledgement of receipt (ACK). It makes only one attempt to send the packet. UDP can be used to smartly degrade or drop logs when the network gets congested. It's most commonly used on reliable networks like localhost.
[TCP][11] sends streaming information in multiple packets and returns an ACK. TCP makes multiple attempts to send the packet, but is limited by the size of the [TCP buffer][12]. This is the most common protocol for sending logs over the Internet.
[RELP][13] is the most reliable of these three protocols but was created for rsyslog and has less industry adoption. It acknowledges receipt of data in the application layer and will resend if there is an error. Make sure your destination also supports this protocol.
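As a sketch (not from the original text), forwarding over RELP in rsyslog uses the omrelp output module; the hostname here is a placeholder, and port 2514 is the one commonly used in rsyslog's RELP examples:

```
module(load="omrelp")
action(type="omrelp" target="central1.example.com" port="2514")
```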
### Reliably Send with Disk Assisted Queues ###
If rsyslog encounters a problem when storing logs, such as an unavailable network connection, it can queue the logs until the connection is restored. The queued logs are stored in memory by default. However, memory is limited and if the problem persists, the logs can exceed memory capacity.
**Warning: You can lose data if you store logs only in memory.**
Rsyslog can queue your logs to disk when memory is full. [Disk-assisted queues][14] make transport of logs more reliable. Here is an example of how to configure rsyslog with a disk-assisted queue:
$WorkDirectory /var/spool/rsyslog # where to place spool files
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
$ActionQueueType LinkedList # run asynchronously
$ActionResumeRetryCount -1 # infinite retries if host is down
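In the newer RainerScript syntax, the same disk-assisted queue settings can be attached directly to the forwarding action. This is a sketch, with BEBOP again standing in for your own server name:

```
action(type="omfwd" protocol="tcp" target="BEBOP" port="514"
       queue.type="LinkedList"          # run asynchronously
       queue.filename="fwdRule1"        # spool file prefix; enables disk assistance
       queue.maxdiskspace="1g"          # disk space limit for the queue
       queue.saveonshutdown="on"        # persist in-memory messages on shutdown
       action.resumeRetryCount="-1")    # retry forever if the host is down
```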
### Encrypt Logs Using TLS ###
When the security and privacy of your data is a concern, you should consider encrypting your logs. Sniffers and middlemen could read your log data if you transmit it over the Internet in clear text. You should encrypt your logs if they contain private information, sensitive identification data, or government-regulated data. The rsyslog daemon can encrypt your logs using the TLS protocol and keep your data safer.
To set up TLS encryption, you need to do the following tasks:
1. Generate a [certificate authority][15] (CA). There are sample certificates in /contrib/gnutls, which are good only for testing, but you need to create your own for production. If you're using a log management service, it will have one ready for you.
1. Generate a [digital certificate][16] for your server to enable SSL operation, or use one from your log management service provider.
1. Configure your rsyslog daemon to send TLS-encrypted data to your log management system.
Here's an example rsyslog configuration with TLS encryption. Replace CERT and DOMAIN_NAME with your own server settings.
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com
### Best Practices for Application Logging ###
In addition to the logs that Linux creates by default, it's also a good idea to centralize logs from important applications. Almost all Linux-based server-class applications write their status information in separate, dedicated log files. This includes database products like PostgreSQL or MySQL, web servers like Nginx or Apache, firewalls, print and file sharing services, directory and DNS servers, and so on.
The first thing administrators do after installing an application is to configure it. Linux applications typically have a .conf file somewhere within the /etc directory. It can be somewhere else too, but that's the first place people look for configuration files.
Depending on how complex or large the application is, the number of settable parameters can range from a few to hundreds. As mentioned before, most applications write their status to some sort of log file, and the configuration file is where the log settings are defined, among other things.
If you're not sure where it is, you can use the locate command to find it:
[root@localhost ~]# locate postgresql.conf
/usr/pgsql-9.4/share/postgresql.conf.sample
/var/lib/pgsql/9.4/data/postgresql.conf
#### Set a Standard Location for Log Files ####
Linux systems typically save their log files under the /var/log directory. This works fine, but check whether the application saves under a specific directory under /var/log. If it does, great. If not, you may want to create a dedicated directory for the app under /var/log. Why? Because other applications save their log files under /var/log too, and if your app saves more than one log file, perhaps once every day or after each service restart, it may be difficult to trawl through a large directory to find the file you want.
If you have more than one instance of the application running in your network, this approach also comes in handy. Think about a situation where you may have a dozen web servers running in your network. When troubleshooting any one of the boxes, you would know exactly where to go.
#### Use A Standard Filename ####
Use a standard filename for the latest logs from your application. This makes it easy because you can monitor and tail a single file. Many applications add some sort of date-time stamp to the file name, which makes it much more difficult to find the latest file and to set up file monitoring by rsyslog. A better approach is to add timestamps to older log files using logrotate. This makes them easier to archive and search historically.
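As a sketch of this approach, a logrotate rule like the following keeps the live file name stable and lets logrotate stamp the rotated copies (the path is illustrative):

```
/var/log/myapp/app.log {
    daily
    rotate 30
    dateext            # append a date stamp (e.g. -20150821) to rotated files
    compress
    delaycompress
    missingok
    notifempty
}
```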
#### Append the Log File ####
Is the log file going to be overwritten after each application restart? If so, we recommend turning that off. After each restart the app should append to the log file. That way, you can always go back to the last log line before the restart.
#### Appending vs. Rotation of Log File ####
Even if the application writes a new log file after each restart, how does it save to the current log? Is it appending to one single, massive file? Linux systems are not known for frequent reboots or crashes: applications can run for very long periods without even blinking, but that can also make the log file very large. If you are trying to analyze the root cause of a connection failure that happened last week, you could easily be searching through tens of thousands of lines.
We recommend you configure the application to rotate its log file once every day, say at midnight.
Why? For starters, it becomes manageable. It's much easier to find a file name with a specific date-time pattern than to search through one file for that date's entries. Files are also much smaller: you don't think vi has frozen when you open a log file. Secondly, if you are sending the log file over the wire to a different location, perhaps a nightly backup job copying to a centralized log server, it doesn't chew up your network's bandwidth. Third and finally, it helps with your log retention. If you want to cull old log entries, it's easier to delete files older than a particular date than to have an application parse one single large file.
#### Retention of Log File ####
How long do you keep a log file? That definitely comes down to business requirements. You could be asked to keep one week's worth of logging information, or it may be a regulatory requirement to keep ten years' worth of data. Whatever it is, logs need to come off the server at one time or another.
In our opinion, unless otherwise required, keep at least a month's worth of log files online, plus copy them to a secondary location like a logging server. Anything older than that can be offloaded to separate media. For example, if you are on AWS, your older logs can be copied to Glacier.
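One simple way to enforce a local retention window is an age-based cleanup with find, run from cron. The sketch below demonstrates the idea in a throwaway directory so it is safe to run anywhere; substitute your real log directory and retention period:

```shell
# Demonstrate age-based cleanup in a throwaway directory
# (in production you would point find at e.g. /var/log/myapp)
LOGDIR=$(mktemp -d)
touch -d "40 days ago" "$LOGDIR/app.log.1"   # an old rotated log
touch "$LOGDIR/app.log"                      # the current log
# delete files not modified within the 30-day retention window
find "$LOGDIR" -type f -mtime +30 -delete
ls "$LOGDIR"   # → app.log
```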
#### Separate Disk Location for Log Files ####
Linux best practice usually suggests mounting the /var directory on a separate file system because of the high number of I/Os associated with it. We would recommend mounting the /var/log directory on a separate disk system. This can avoid I/O contention with the main application's data. Also, if the number of log files becomes too large or a single log file becomes too big, it won't fill up the entire disk.
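For example, if you have a dedicated volume for logs, mounting it at /var/log is a one-line fstab entry (the device name below is illustrative; use your own):

```
# /etc/fstab
/dev/sdb1    /var/log    ext4    defaults,noatime    0 2
```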
#### Log Entries ####
What information should be captured in each log entry?
That depends on what you want to use the log for. Do you want to use it only for troubleshooting purposes, or do you want to capture everything that's happening? Is it a legal requirement to capture what each user is running or viewing?
If you are using logs for troubleshooting purposes, save only errors, warnings, and fatal messages. There's no reason to capture debug messages, for example. The app may log debug messages by default, or another administrator might have turned this on for another troubleshooting exercise, but you need to turn this off because it can definitely fill up the space quickly. At a minimum, capture the date, time, client application name, source IP or client host name, action performed, and the message itself.
#### A Practical Example for PostgreSQL ####
As an example, let's look at the main configuration file for a vanilla PostgreSQL 9.4 installation. It's called postgresql.conf and, contrary to other config files in Linux systems, it's not saved under the /etc directory. In the code snippet below, we can see it's in the /var/lib/pgsql directory of our CentOS 7 server:
[root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf
...
#------------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr'
# Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on
# Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
log_directory = 'pg_log'
# directory where log files are written,
# can be absolute or relative to PGDATA
log_filename = 'postgresql-%a.log' # log file name pattern,
# can include strftime() escapes
# log_file_mode = 0600 .
# creation mode for log files,
# begin with 0 to use octal notation
log_truncate_on_rotation = on # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
log_rotation_age = 1d
# Automatic rotation of logfiles will happen after that time. 0 disables.
log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
# This is only relevant when logging to eventlog (win32):
#event_source = 'PostgreSQL'
# - When to Log -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default
# terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '< %m >' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_statement = 'none' # none, ddl, mod, all
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'Australia/ACT'
Although most parameters are commented out, they assume default values. We can see the log file directory is pg_log (log_directory parameter), the file names should start with postgresql (log_filename parameter), the files are rotated once every day (log_rotation_age parameter) and the log entries start with a timestamp (log_line_prefix parameter). Of particular interest is the log_line_prefix parameter: there is a whole gamut of information you can include there.
Looking under /var/lib/pgsql/9.4/data/pg_log directory shows us these files:
[root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log
total 20
-rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log
-rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log
-rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log
-rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log
-rw-------. 1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log
So the log files only have the name of the weekday stamped in the file name. We can change it. How? Configure the log_filename parameter in postgresql.conf.
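For instance, to stamp each file with the full date instead of just the weekday, you could set the pattern using the strftime() escapes mentioned in the config comments:

```
log_filename = 'postgresql-%Y-%m-%d.log'
```

Note that log_truncate_on_rotation only matters for recurring names like the weekday pattern; with a full date stamp, each day simply gets a new file.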
Looking inside one log file shows its entries start with the date and time only:
[root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log
...
< 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request
< 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions
< 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down
< 2015-02-27 01:21:27.036 EST >LOG: shutting down
< 2015-02-27 01:21:27.211 EST >LOG: database system is shut down
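To capture more context per entry, the log_line_prefix escapes listed earlier can be combined. For example, adding the user, database, and remote host to the timestamp:

```
log_line_prefix = '< %m %u@%d %h >'
```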
### Centralizing Application Logs ###
#### Log File Monitoring with Imfile ####
Traditionally, the most common way for applications to log their data is with files. Files are easy to search on a single machine but don't scale well with more servers. You can set up log file monitoring and send the events to a centralized server when new logs are appended to the bottom. Create a new configuration file in /etc/rsyslog.d/ then add a file input like this:
$ModLoad imfile
$InputFilePollInterval 10
$PrivDropToGroup adm
----------
# Input for FILE1
$InputFileName /FILE1
$InputFileTag APPNAME1
$InputFileStateFile stat-APPNAME1 #this must be unique for each file being polled
$InputFileSeverity info
$InputFilePersistStateInterval 20000
$InputRunFileMonitor
Replace FILE1 and APPNAME1 with your own file and application names. Rsyslog will send it to the outputs you have configured.
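On rsyslog v8 you can write the same file input in the newer RainerScript style. This is a sketch with illustrative file and tag names:

```
module(load="imfile" PollingInterval="10")
input(type="imfile"
      File="/var/log/myapp/app.log"
      Tag="myapp"
      Severity="info")
```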
#### Local Socket Logs with Imuxsock ####
A socket is similar to a UNIX file handle except that the socket is read into memory by your syslog daemon and then sent to the destination. No file needs to be written. As an example, the logger command sends its logs to this UNIX socket.
This approach makes efficient use of system resources if your server is constrained by disk I/O or you have no need for local file logs. The disadvantage of this approach is that the socket has a limited queue size. If your syslog daemon goes down or can't keep up, then you could lose log data.
The rsyslog daemon will read from the /dev/log socket by default, but you can specifically enable it with the [imuxsock input module][17] using the following command:
$ModLoad imuxsock
#### UDP Logs with Imudp ####
Some applications output log data in UDP format, which is the standard syslog protocol when transferring log files over a network or your localhost. Your syslog daemon receives these logs and can process them or transmit them in a different format. Alternatively, you can send the logs to your log server or to a log management solution.
Use the following command to configure rsyslog to accept syslog data over UDP on the standard port 514:
$ModLoad imudp
----------
$UDPServerRun 514
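If your applications send logs over TCP instead, the matching input module is imtcp, enabled the same way:

```
$ModLoad imtcp
$InputTCPServerRun 514
```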
### Manage Logs with Logrotate ###
Log rotation is a process that archives log files automatically when they reach a specified age. Without intervention, log files keep growing, using up disk space. Eventually they will crash your machine.
The logrotate utility can truncate your logs as they age, freeing up space. Your new log file retains the filename. Your old log file is renamed with a number appended to the end of it. Each time the logrotate utility runs, a new file is created and the existing file is renamed in sequence. You determine the threshold when old files are deleted or archived.
When logrotate copies a file, the new file has a new inode, which can interfere with rsyslog's ability to monitor the new file. You can alleviate this issue by adding the copytruncate parameter to your logrotate cron job. This parameter copies existing log file contents to a new file and truncates these contents from the existing file. The inode never changes because the log file itself remains the same; its contents are in a new file.
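In the logrotate configuration this is a single directive. A minimal sketch with an illustrative path:

```
/var/log/myapp/app.log {
    daily
    rotate 14
    compress
    copytruncate   # copy then truncate in place, so the inode never changes
}
```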
The logrotate utility uses the main configuration file at /etc/logrotate.conf and application-specific settings in the directory /etc/logrotate.d/. DigitalOcean has a detailed [tutorial on logrotate][18].
### Manage Configuration on Many Servers ###
When you have just a few servers, you can manually configure logging on them. Once you have a few dozen or more servers, you can take advantage of tools that make this easier and more scalable. At a basic level, all of these copy your rsyslog configuration to each server, and then restart rsyslog so the changes take effect.
#### Pssh ####
This tool lets you run an ssh command on several servers in parallel. Use a pssh deployment for only a small number of servers. If one of your servers fails, then you have to ssh into the failed server and do the deployment manually. If you have several failed servers, then the manual deployment on them can take a long time.
#### Puppet/Chef ####
Puppet and Chef are two different tools that can automatically configure all of the servers in your network to a specified standard. Their reporting tools let you know about failures and can resync periodically. Both Puppet and Chef have enthusiastic supporters. If you aren't sure which one is more suitable for your deployment configuration management, you might appreciate [InfoWorld's comparison of the two tools][19].
Some vendors also offer modules or recipes for configuring rsyslog. Here is an example from Loggly's Puppet module. It offers a class for rsyslog to which you can add an identifying token:
node 'my_server_node.example.net' {
# Send syslog events to Loggly
class { 'loggly::rsyslog':
customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000',
}
}
#### Docker ####
Docker uses containers to run applications independent of the underlying server. Everything runs from inside a container, which you can think of as a unit of functionality. ZDNet has an in-depth article about [using Docker][20] in your data center.
There are several ways to log from Docker containers including linking to a logging container, logging to a shared volume, or adding a syslog agent directly inside the container. One of the most popular logging containers is called [logspout][21].
#### Vendor Scripts or Agents ####
Most log management solutions offer scripts or agents to make sending data from one or more servers relatively easy. Heavyweight agents can use up extra system resources. Some vendors like Loggly offer configuration scripts to make using existing syslog daemons easier. Here is an example [script][22] from Loggly which can run on any number of servers.
--------------------------------------------------------------------------------
via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/
作者:[Jason Skowronski][a1]
作者:[Amy Echeverri][a2]
作者:[Sadequl Hussain][a3]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a1]:https://www.linkedin.com/in/jasonskowronski
[a2]:https://www.linkedin.com/in/amyecheverri
[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl
[2]:http://www.rsyslog.com/
[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system
[4]:http://logstash.net/
[5]:http://www.fluentd.org/
[6]:http://www.rsyslog.com/doc/rsyslog_conf.html
[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html
[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87
[9]:https://www.loggly.com/docs/file-monitoring/
[10]:http://www.networksorcery.com/enp/protocol/udp.htm
[11]:http://www.networksorcery.com/enp/protocol/tcp.htm
[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html
[13]:http://www.rsyslog.com/doc/relp.html
[14]:http://www.rsyslog.com/doc/queues.html
[15]:http://www.rsyslog.com/doc/tls_cert_ca.html
[16]:http://www.rsyslog.com/doc/tls_cert_machine.html
[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html
[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10
[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html
[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/
[21]:https://github.com/progrium/logspout
[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/

Linux Tricks: Play Game in Chrome, Text-to-Speech, Schedule a Job and Watch Commands in Linux
================================================================================
Here again, I have compiled a list of four things under [Linux Tips and Tricks][1] series you may do to remain more productive and entertained with Linux Environment.
![Linux Tips and Tricks Series](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-Tips-and-Tricks.png)
Linux Tips and Tricks Series
The topics I have covered include Google Chrome's inbuilt small game, text-to-speech in the Linux terminal, quick job scheduling using the at command, and watching a command at regular intervals.
### 1. Play A Game in Google Chrome Browser ###
Very often when there is a power cut, or the network is down for some other reason, I don't put my Linux box into maintenance mode. I keep myself engaged with a little fun game from Google Chrome. I am not a gamer and hence I have not installed creepy third-party games. Security is another concern.
So when there is Internet related issue and my web page seems something like this:
![Unable to Connect Internet](http://www.tecmint.com/wp-content/uploads/2015/08/Unable-to-Connect-Internet.png)
Unable to Connect Internet
You may play the Google-chrome inbuilt game simply by hitting the space-bar. There is no limitation for the number of times you can play. The best thing is you need not break a sweat installing and using it.
No third-party application/plugin is required. It should work well on other platforms like Windows and Mac, but our niche is Linux, so I'll talk about Linux only, and mind it, it works well on Linux. It is a very simple game (a kind of time pass).
Use Space-Bar/Navigation-up-key to jump. A glimpse of the game in action.
![Play Game in Google Chrome](http://www.tecmint.com/wp-content/uploads/2015/08/Play-Game-in-Google-Chrome.gif)
Play Game in Google Chrome
### 2. Text to Speech in Linux Terminal ###
For those who may not be aware of the espeak utility, it is a Linux command-line text-to-speech converter. Write anything in a variety of languages and the espeak utility will read it aloud for you.
Espeak should be installed on your system by default; however, if it is not installed on your system, you may do:
# apt-get install espeak (Debian)
# yum install espeak (CentOS)
# dnf install espeak (Fedora 22 onwards)
You may ask espeak to accept input interactively from the standard input device and convert it to speech for you. To do so:
$ espeak [Hit Return Key]
For detailed output you may do:
$ espeak --stdout | aplay [Hit Return Key][Double - Here]
espeak is flexible, and you can ask it to accept input from a text file and speak it aloud for you. All you need to do is:
$ espeak -f /path/to/text/file/file_name.txt --stdout | aplay [Hit Enter]
You may ask espeak to speak fast/slow for you. The default speed is 160 words per minute. Define your preference using switch -s.
To ask espeak to speak 30 words per minute, you may do:
$ espeak -s 30 -f /path/to/text/file/file_name.txt | aplay
To ask espeak to speak 200 words per minute, you may do:
$ espeak -s 200 -f /path/to/text/file/file_name.txt | aplay
To use another language say Hindi (my mother tongue), you may do:
$ espeak -v hindi --stdout 'टेकमिंट विश्व की एक बेहतरीन लाइंक्स आधारित वेबसाइट है|' | aplay
You may choose any language of your preference and ask to speak in your preferred language as suggested above. To get the list of all the languages supported by espeak, you need to run:
$ espeak --voices
### 3. Quick Schedule a Job ###
Most of us are already familiar with [cron][2] which is a daemon to execute scheduled commands.
Cron is an advanced tool often used by Linux sysadmins to schedule a job, such as a backup or practically anything, at a certain time or interval.
Are you aware of the at command in Linux, which lets you schedule a job/command to run at a specific time? You can tell at what to do and when to do it, and everything else will be taken care of by the at command.
For example, say you want to print the output of the uptime command at 11:02 AM. All you need to do is:
$ at 11:02
uptime >> /home/$USER/uptime.txt
Ctrl+D
![Schedule Job in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Schedule-Job-in-Linux.png)
Schedule Job in Linux
To check if the command/script/job has been set or not by at command, you may do:
$ at -l
![View Scheduled Jobs](http://www.tecmint.com/wp-content/uploads/2015/08/View-Scheduled-Jobs.png)
View Scheduled Jobs
You may schedule more than one command in one go using at, simply as:
$ at 12:30
Command 1
Command 2
command 50
Ctrl + D
### 4. Watch a Command at Specific Interval ###
Sometimes we need to run a command repeatedly at a regular interval. For example, say we need to print the current time and watch the output every 3 seconds.
To see the current time, run the below command in the terminal.
$ date +"%H:%M:%S"
![Check Date and Time in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Date-in-Linux.png)
Check Date and Time in Linux
and to check the output of this command every three seconds, we need to run the below command in Terminal.
$ watch -n 3 'date +"%H:%M:%S"'
![Watch Command in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Watch-Command-in-Linux.gif)
Watch Command in Linux
The -n switch in the watch command sets the interval. In the above example we defined the interval to be 3 seconds. You may define yours as required. You may also pass any command/script to watch, to run that command/script at the defined interval.
That's all for now. Hope you like this series that aims at making you more productive with Linux, and with some fun inside too. All suggestions are welcome in the comments below. Stay tuned for more such posts. Keep connected and enjoy…
--------------------------------------------------------------------------------
via: http://www.tecmint.com/text-to-speech-in-terminal-schedule-a-job-and-watch-commands-in-linux/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/tag/linux-tricks/
[2]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/

DongShuaike is translating.
Linux and Unix Test Disk I/O Performance With dd Command
================================================================================
How can I use the dd command on Linux to test the I/O performance of my hard disk drive? How do I check the performance of a hard drive, including the read and write speed, on a Linux operating system?
You can use the following commands on a Linux or Unix-like system for a simple I/O performance test:
- **dd command** : It is used to monitor the writing performance of a disk device on a Linux and Unix-like system
- **hdparm command** : It is used to get/set hard disk parameters, including testing the reading and caching performance of a disk device on a Linux based system.
In this tutorial you will learn how to use the dd command to test disk I/O performance.
### Use dd command to monitor the reading and writing performance of a disk device: ###
- Open a shell prompt.
- Or login to a remote server via ssh.
- Use the dd command to measure server throughput (write speed) `dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync`
- Use the dd command to measure server latency `dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync`
#### Understanding dd command options ####
In this example, I'm using RAID-10 (Adaptec 5405Z with SAS SSD) array running on a Ubuntu Linux 14.04 LTS server. The basic syntax is
dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync
## GNU dd syntax ##
dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
## OR alternate syntax for GNU/dd ##
dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
Sample outputs:
![Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd](http://s0.cyberciti.org/uploads/faq/2015/08/dd-server-test-io-speed-output.jpg)
Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd
Please note that one gigabyte was written for the test, and the server throughput for this test was 135 MB/s. Where,
- `if=/dev/zero (if=/dev/input.file)` : The name of the input file you want dd to read from.
- `of=/tmp/test1.img (of=/path/to/output.file)` : The name of the output file you want dd to write the input.file to.
- `bs=1G (bs=block-size)` : Set the size of the block you want dd to use. 1 gigabyte was written for the test.
- `count=1 (count=number-of-blocks)`: The number of blocks you want dd to read.
- `oflag=dsync (oflag=dsync)` : Use synchronized I/O for data. Do not skip this option. It bypasses caching and gives you accurate results.
- `conv=fdatasync`: Again, this tells dd to do one complete "sync" right before it exits. For a single-block test this gives results comparable to oflag=dsync.
In this example, 512 bytes were written one thousand times to get RAID10 server latency time:
dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync
Sample outputs:
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 0.60362 s, 848 kB/s
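For a quick sanity check, the two tests above can be combined into a short script. The sizes here are deliberately small so it finishes fast; scale bs and count back up (e.g. bs=1G count=1) for a real benchmark:

```shell
# Throughput test: a few synced megabyte-sized blocks (scale up on real hardware)
dd if=/dev/zero of=/tmp/dd_throughput.img bs=1M count=8 oflag=dsync 2> /tmp/dd_throughput.log

# Latency test: many small synced writes
dd if=/dev/zero of=/tmp/dd_latency.img bs=512 count=50 oflag=dsync 2> /tmp/dd_latency.log

# dd reports the measured speed on stderr; show both results
grep 'copied' /tmp/dd_throughput.log /tmp/dd_latency.log

# Clean up the test files
rm -f /tmp/dd_throughput.img /tmp/dd_latency.img
```

Run it a couple of times and compare the reported MB/s and kB/s figures with the article's sample outputs.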
Please note that server throughput and latency depend on server/application load too. So I recommend that you run these tests on a freshly rebooted server as well as at peak time to get a better idea of your workload. You can then compare these numbers across all your devices.
#### But why are the server throughput and latency so low? ####
Low values do not mean you are using slow hardware. The values can be low because of the HARDWARE RAID10 controller's cache.
Use the hdparm command to see buffered and cached disk read speeds.
I suggest you run the following commands 2 or 3 times to perform timings of device reads for benchmark and comparison purposes:
### Buffered disk read test for /dev/sda ##
hdparm -t /dev/sda1
## OR ##
hdparm -t /dev/sda
To perform timings of cache reads for benchmark and comparison purposes again run the following command 2-3 times (note the -T option):
## Cache read benchmark for /dev/sda ###
hdparm -T /dev/sda1
## OR ##
hdparm -T /dev/sda
OR combine both tests:
hdparm -Tt /dev/sda
Sample outputs:
![Fig.02: Linux hdparm command to test reading and caching disk performance](http://s0.cyberciti.org/uploads/faq/2015/08/hdparam-output.jpg)
Fig.02: Linux hdparm command to test reading and caching disk performance
Again, note that due to filesystem caching on file operations, you will always see high read rates.
**Use dd command on Linux to test read speed**
To get accurate read test data, first discard caches before testing by running the following commands:
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
time dd if=/path/to/bigfile of=/dev/null bs=8k
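Here is a self-contained sketch of the same read test, using a small generated file for illustration (a meaningful test needs a file larger than RAM, with caches dropped first as shown above):

```shell
# Create a small sample file to read back (illustrative size only;
# use a file larger than RAM for meaningful numbers)
dd if=/dev/zero of=/tmp/bigfile.bin bs=1M count=8 2>/dev/null

# Read it back in 8k blocks; dd reports the read speed on stderr
dd if=/tmp/bigfile.bin of=/dev/null bs=8k 2> /tmp/dd_read.log
grep 'copied' /tmp/dd_read.log

# Clean up
rm -f /tmp/bigfile.bin
```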
**Linux Laptop example**
Run the following command:
### Debian Laptop Throughput With Cache ##
dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct
### Deactivate the cache ###
hdparm -W0 /dev/sda
### Debian Laptop Throughput Without Cache ##
dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct
**Apple OS X Unix (Macbook pro) example**
GNU dd has many more options, but on OS X/BSD and other Unix-like systems the dd command needs to be run as follows, with a sync added, to test real disk I/O and not memory:
## Run command 2-3 times to get good results ###
time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync"
Sample outputs:
1024+0 records in
1024+0 records out
104857600 bytes transferred in 0.165040 secs (635346520 bytes/sec)
real 0m0.241s
user 0m0.004s
sys 0m0.113s
So I'm getting a write speed of 635346520 bytes/sec (635.347 MB/s) on my MBP.
**Not a fan of command line...?**
You can use disk utility (gnome-disk-utility) on a Linux or Unix based system to get the same information. The following screenshot is taken from my Fedora Linux v22 VM.
**Graphical method**
Click on the "Activities" or press the "Super" key to switch between the Activities overview and desktop. Type "Disks"
![Fig.03: Start the Gnome disk utility](http://s0.cyberciti.org/uploads/faq/2015/08/disk-1.jpg)
Fig.03: Start the Gnome disk utility
Select your hard disk in the left pane, click on the configure button and then click on "Benchmark partition":
![Fig.04: Benchmark disk/partition](http://s0.cyberciti.org/uploads/faq/2015/08/disks-2.jpg)
Fig.04: Benchmark disk/partition
Finally, click on the "Start Benchmark..." button (you may be prompted for the admin username and password):
![Fig.05: Final benchmark result](http://s0.cyberciti.org/uploads/faq/2015/08/disks-3.jpg)
Fig.05: Final benchmark result
Which method and command do you recommend to use?
- I recommend the dd command on all Unix-like systems (`time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync"`)
- If you are using GNU/Linux use the dd command (`dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync`)
- Make sure you adjust the count and bs arguments as per your setup to get a good set of results.
- The GUI method is recommended only for Linux/Unix laptop users running Gnome2 or 3 desktop.
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/
作者Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,5 +1,4 @@
Translating by dingdongnigetou
translating by tnuoccalanosrep
Linux file system hierarchy v2.0
================================================================================

View File

@ -1,63 +0,0 @@
Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver
================================================================================
![Ubuntu Gamers are on the rise -and so is demand for the latest drivers](http://www.omgubuntu.co.uk/wp-content/uploads/2014/03/ubuntugamer_logo_dark-500x250.jpg)
Ubuntu Gamers are on the rise -and so is demand for the latest drivers
**Installing the latest upstream NVIDIA graphics driver on Ubuntu could be about to get much easier. **
Ubuntu developers are considering the creation of a brand new official PPA to distribute the latest closed-source NVIDIA binary drivers to desktop users.
The move would benefit Ubuntu gamers **without** risking the stability of the OS for everyone else.
New upstream drivers would be installed and updated from this new PPA **only** when a user explicitly opts-in to it. Everyone else would continue to receive and use the more recent stable NVIDIA Linux driver snapshot included in the Ubuntu archive.
### Why Is This Needed? ###
![Ubuntu provides drivers but theyre not the latest](http://www.omgubuntu.co.uk/wp-content/uploads/2013/04/drivers.jpg)
Ubuntu provides drivers but theyre not the latest
The closed-source NVIDIA graphics drivers that are available to install on Ubuntu from the archive (using the command line, synaptic or through the additional drivers tool) work fine for most and can handle the composited Unity desktop shell with ease.
For gaming needs its a different story.
If you want to squeeze every last frame and HD texture out of the latest big-name Steam game youll need the latest binary drivers blob.
> Installing the very latest Nvidia Linux driver on Ubuntu is not easy and not always safe.
The more recent the driver the more likely it is to support the latest features and technologies, or come pre-packed with game-specific tweaks and bug fixes too.
The problem is that installing the very latest Nvidia Linux driver on Ubuntu is not easy and not always safe.
To fill the void many third-party PPAs maintained by enthusiasts have emerged. Since many of these PPAs also distribute other experimental or bleeding-edge software their use is **not without risk**. Adding a bleeding edge PPA is often the fastest way to entirely hose a system!
A solution that lets Ubuntu users install the latest propriety graphics drivers as offered in third-party PPAs is needed **but** with the safety catch of being able to roll-back to the stable archive version if needed.
### Demand for fresh drivers is hard to ignore ###
> A solution that lets Ubuntu users get the latest hardware drivers safely is coming.
The demand for fresh drivers in a fast developing market is becoming hard to ignore, users are going to want the latest upstream has to offer, Castro explains in an e-mail to the Ubuntu Desktop mailing list.
[NVIDIA] can deliver a kickass experience with almost no effort from the user [in Windows 10]. Until we can convince NVIDIA to do the same with Ubuntu were going to have to pick up the slack.
Castros proposition of a “blessed” NVIDIA PPA is the easiest way to do this.
Gamers would be able to opt-in to receive new drivers from the PPA straight from Ubuntus default proprietary hardware drivers tool — no need for them to copy and paste terminal commands from websites or wiki pages.
The drivers within this PPA would be packaged and maintained by a select band of community members and receive benefits from being a semi-official option, namely **automated testing**.
As Castro himself puts it: People want the latest bling, and no matter what theyre going to do it. We might as well put a framework around it so people can get what they want without breaking their computer.
**Would you make use of this PPA? How would you rate the performance of the default Nvidia drivers on Ubuntu? Share your thoughts in the comments, folks! **
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author

View File

@ -0,0 +1,179 @@
How to Install OsTicket Ticketing System in Fedora 22 / Centos 7
================================================================================
In this article, we'll learn how to set up a help desk ticketing system with osTicket on a machine or server running Fedora 22 or CentOS 7 as the operating system. osTicket is a free and open source, popular customer support ticketing system developed and maintained by [Enhancesoft][1] and its contributors. osTicket is a great solution for a help and support ticketing system, enabling better communication and support assistance with clients and customers. It can easily integrate inquiries created via email, phone and web-based forms into a beautiful multi-user web interface. osTicket makes it easy for us to manage, organize and log all our support requests and responses in one place. It is a simple, lightweight, reliable, open source, web-based and easy-to-set-up help desk ticketing system.
Here are some easy steps on how we can setup Help Desk ticketing system with osTicket in Fedora 22 or CentOS 7 operating system.
### 1. Installing LAMP stack ###
First of all, we'll need to install the LAMP stack to make osTicket work. The LAMP stack is the combination of the Apache web server, the MySQL or MariaDB database system, and PHP. To install the complete suite of LAMP stack packages we need for the installation of osTicket, we'll run the following commands in a shell or terminal.
**On Fedora 22**
The LAMP stack is available in the official repository of Fedora 22. As the default package manager of Fedora 22 is the new DNF package manager, we'll run the following command.
$ sudo dnf install httpd mariadb mariadb-server php php-mysql php-fpm php-cli php-xml php-common php-gd php-imap php-mbstring wget
**On CentOS 7**
As the LAMP stack is available in the official repository of CentOS 7, we'll install it using the yum package manager.
$ sudo yum install httpd mariadb mariadb-server php php-mysql php-fpm php-cli php-xml php-common php-gd php-imap php-mbstring wget
### 2. Starting Apache Web Server and MariaDB ###
Next, we'll start the MariaDB server and the Apache web server to get started.
$ sudo systemctl start mariadb httpd
Then, we'll enable them to start at every boot of the system.
$ sudo systemctl enable mariadb httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
### 3. Downloading osTicket package ###
Next, we'll download the latest release of osTicket, i.e. version 1.9.9. We can download it from the official download page [http://osticket.com/download][2] or from the official github repository: [https://github.com/osTicket/osTicket-1.8/releases][3]. Here, in this tutorial, we'll download the zip archive of the latest release of osTicket from the github release page using the wget command.
$ cd /tmp/
$ wget https://github.com/osTicket/osTicket-1.8/releases/download/v1.9.9/osTicket-v1.9.9-1-gbe2f138.zip
--2015-07-16 09:14:23-- https://github.com/osTicket/osTicket-1.8/releases/download/v1.9.9/osTicket-v1.9.9-1-gbe2f138.zip
Resolving github.com (github.com)... 192.30.252.131
...
Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.244.4|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7150871 (6.8M) [application/octet-stream]
Saving to: osTicket-v1.9.9-1-gbe2f138.zip
osTicket-v1.9.9-1-gb 100%[========================>] 6.82M 1.25MB/s in 12s
2015-07-16 09:14:37 (604 KB/s) - osTicket-v1.9.9-1-gbe2f138.zip saved [7150871/7150871]
### 4. Extracting the osTicket ###
After we have successfully downloaded the osTicket zip package, we'll extract it. As the default root directory of the Apache web server is /var/www/html/, we'll create a directory there called "**support**", into which we'll move the extracted directories and files of the compressed zip file. To do so, we'll run the following commands in a terminal or shell.
$ unzip osTicket-v1.9.9-1-gbe2f138.zip
Then, we'll move the extracted files there.
$ sudo mv /tmp/upload /var/www/html/support
### 5. Fixing Ownership and Permission ###
Now, we'll assign the ownership of the directories and files under /var/www/html/support to apache, to give the Apache process owner write access. To do so, we'll run the following command.
$ sudo chown apache: -R /var/www/html/support
Then, we'll also need to copy a sample configuration file to its default configuration file. To do so, we'll need to run the below command.
$ cd /var/www/html/support/
$ sudo cp include/ost-sampleconfig.php include/ost-config.php
$ sudo chmod 0666 include/ost-config.php
If you have SELinux enabled on the system, run the following command.
$ sudo chcon -R -t httpd_sys_content_t /var/www/html/support
$ sudo chcon -R -t httpd_sys_rw_content_t /var/www/html/support
### 6. Configuring MariaDB ###
As this is the first time we're going to configure MariaDB, we'll need to create a password for the root user of mariadb so that we can use it to login and create the database for our osTicket installation. To do so, we'll need to run the following command in a terminal or a shell.
$ sudo mysql_secure_installation
...
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
Success!
...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
Note: Above, we are asked to enter the root password of the MariaDB server, but as we are setting it up for the first time and no password has been set yet, we simply hit enter when asked for the current root password. Then we enter the new password we want to set twice. After that, we can simply hit enter for every remaining question to accept the default configuration.
### 7. Creating osTicket Database ###
As osTicket needs a database system to store its data and information, we'll be configuring MariaDB for osTicket. So, we'll need to first login into the mariadb command environment. To do so, we'll need to run the following command.
$ sudo mysql -u root -p
Now, we'll create a new database "**osticket_db**" and a user "**osticket_user**" with the password "osticket_password", who will be granted access to the database. To do so, we'll run the following commands inside the MariaDB command environment.
> CREATE DATABASE osticket_db;
> CREATE USER 'osticket_user'@'localhost' IDENTIFIED BY 'osticket_password';
> GRANT ALL PRIVILEGES on osticket_db.* TO 'osticket_user'@'localhost' ;
> FLUSH PRIVILEGES;
> EXIT;
**Note**: It is strongly recommended to replace the database name, user and password with your own values, for security reasons.
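For scripted installs, the same statements can be written to a SQL file and fed to MariaDB in one go. This is a hypothetical non-interactive variant of the steps above; the database name, user and password are the article's example values, so replace them with your own:

```shell
# Write the article's example statements to a file (replace the example
# osticket_db/osticket_user/osticket_password values with your own)
cat > /tmp/osticket_db.sql <<'SQL'
CREATE DATABASE osticket_db;
CREATE USER 'osticket_user'@'localhost' IDENTIFIED BY 'osticket_password';
GRANT ALL PRIVILEGES ON osticket_db.* TO 'osticket_user'@'localhost';
FLUSH PRIVILEGES;
SQL

# Then feed it to MariaDB (prompts for the root password set earlier):
# sudo mysql -u root -p < /tmp/osticket_db.sql
```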
### 8. Allowing Firewall ###
If we are running a firewall program, we'll need to configure our firewall to allow port 80 so that the Apache web server's default port will be accessible externally. This will allow us to navigate our web browser to osTicket's web interface with the default http port 80. To do so, we'll need to run the following command.
$ sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
After done, we'll need to reload our firewall service.
$ sudo firewall-cmd --reload
### 9. Web based Installation ###
Finally, if everything is done as described above, we should now be able to reach osTicket's installer by pointing our web browser to http://domain.com/support or http://ip-address/support. The installer will show whether the dependencies required by osTicket are installed or not. As we've already installed all the necessary packages, we'll be welcomed with **green colored ticks** and can proceed.
![osTicket Requirements Check](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-requirements-check1.png)
After that, we'll be required to enter the details for our osTicket instance as shown below. We'll need to enter the database name, username, password and hostname and other important account information that we'll require while logging into the admin panel.
![osticket configuration](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-configuration.png)
After the installation has been completed successfully, we'll be welcomed by a Congratulations screen. There we can see two links, one for our Admin Panel and the other for the support center as the homepage of the osTicket Support Help Desk.
![osticket installation completed](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-installation-completed.png)
If we click on http://ip-address/support or http://domain.com/support, we'll be redirected to the osTicket support page which is as shown below.
![osticket support homepage](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-support-homepage.png)
Next, to login into the admin panel, we'll need to navigate our web browser to http://ip-address/support/scp or http://domain.com/support/scp . Then, we'll need to enter the login details we had just created above while configuring the database and other information in the web installer. After successful login, we'll be able to access our dashboard and other admin sections.
![osticket admin panel](http://blog.linoxide.com/wp-content/uploads/2015/07/osticket-admin-panel.png)
### 10. Post Installation ###
After we have finished the web installation of osTicket, we'll now need to secure some of our configuration files. To do so, we'll need to run the following command.
$ sudo rm -rf /var/www/html/support/setup/
$ sudo chmod 644 /var/www/html/support/include/ost-config.php
### Conclusion ###
osTicket is an awesome help desk ticketing system providing several new features. It supports rich text or HTML emails, ticket filters, agent collision avoidance, auto-responder and many more features. The user interface of osTicket is very beautiful with easy to use control panel. It is a complete set of tools required for a help and support ticketing system. It is the best solution for providing customers a better way to communicate with the support team. It helps a company to make their customers happy with them regarding the support and help desk. If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! Enjoy :-)
------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-osticket-fedora-22-centos-7/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://www.enhancesoft.com/
[2]:http://osticket.com/download
[3]:https://github.com/osTicket/osTicket-1.8/releases

View File

@ -0,0 +1,62 @@
translation by strugglingyouth
Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop
================================================================================
> **Question**: When I try to open a pre-recorded packet dump on Wireshark on Ubuntu, its UI suddenly freezes, and the following errors and warnings appear in the terminal where I launched Wireshark. How can I fix this problem?
Wireshark is a GUI-based packet capture and sniffer tool. This tool is popularly used by network administrators, network security engineers or developers for various tasks where packet-level network analysis is required, for example during network troubleshooting, vulnerability testing, application debugging, or protocol reverse engineering. Wireshark allows one to capture live packets and browse their protocol headers and payloads via a convenient GUI.
![](https://farm1.staticflickr.com/722/20584224675_f4d7a59474_c.jpg)
It is known that Wireshark's UI, especially run under Ubuntu desktop, sometimes hangs or freezes with the following errors, while you are scrolling up or down the packet list view, or starting to load a pre-recorded packet dump file.
(wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject'
(wireshark:3480): GLib-GObject-CRITICAL **: g_object_set_qdata_full: assertion 'G_IS_OBJECT (object)' failed
(wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkRange'
(wireshark:3480): Gtk-CRITICAL **: gtk_range_get_adjustment: assertion 'GTK_IS_RANGE (range)' failed
(wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkOrientable'
(wireshark:3480): Gtk-CRITICAL **: gtk_orientable_get_orientation: assertion 'GTK_IS_ORIENTABLE (orientable)' failed
(wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkScrollbar'
(wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkWidget'
(wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject'
(wireshark:3480): GLib-GObject-CRITICAL **: g_object_get_qdata: assertion 'G_IS_OBJECT (object)' failed
(wireshark:3480): Gtk-CRITICAL **: gtk_widget_set_name: assertion 'GTK_IS_WIDGET (widget)' failed
Apparently this error is caused by some incompatibility between Wireshark and overlay-scrollbar, and has not been fixed in the latest Ubuntu desktop (e.g., as of Ubuntu 15.04 Vivid Vervet).
A workaround to avoid this Wireshark UI freeze problem is to **temporarily disable overlay-scrollbar**. There are two ways to disable overlay-scrollbar in Wireshark, depending on how you launch Wireshark on your desktop.
### Command-Line Solution ###
Overlay-scrollbar can be disabled by setting "**LIBOVERLAY_SCROLLBAR**" environment variable to "0".
So if you are launching Wireshark from the command line in a terminal, you can disable overlay-scrollbar in Wireshark as follows.
Open your .bashrc, and define the following alias.
alias wireshark="LIBOVERLAY_SCROLLBAR=0 /usr/bin/wireshark"
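The same alias can be appended to ~/.bashrc from a script; the sketch below guards against adding it twice, so re-running it is harmless:

```shell
# Append the article's alias to ~/.bashrc only if it is not there yet
BASHRC="$HOME/.bashrc"
ALIAS_LINE='alias wireshark="LIBOVERLAY_SCROLLBAR=0 /usr/bin/wireshark"'
grep -qF "$ALIAS_LINE" "$BASHRC" 2>/dev/null || echo "$ALIAS_LINE" >> "$BASHRC"

# Confirm the alias line is present
grep -F 'LIBOVERLAY_SCROLLBAR=0' "$BASHRC"
```

Remember to `source ~/.bashrc` (or open a new terminal) before the alias takes effect.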
### Desktop Launcher Solution ###
If you are launching Wireshark using a desktop launcher, you can edit its desktop launcher file.
$ sudo vi /usr/share/applications/wireshark.desktop
Look for a line that starts with "Exec", and change it as follows.
Exec=env LIBOVERLAY_SCROLLBAR=0 wireshark %f
While this solution will be beneficial for all desktop users system-wide, it will not survive Wireshark upgrade. If you want to preserve the modified .desktop file, copy it to your home directory as follows.
$ cp /usr/share/applications/wireshark.desktop ~/.local/share/applications/
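The Exec-line edit can also be done with sed instead of a text editor. In this sketch a minimal stand-in launcher is generated so the edit is reproducible; on a real system you would copy /usr/share/applications/wireshark.desktop into ~/.local/share/applications first, as shown above:

```shell
# Create a minimal stand-in launcher (on a real system, copy the actual
# wireshark.desktop here instead of generating one)
mkdir -p "$HOME/.local/share/applications"
f="$HOME/.local/share/applications/wireshark.desktop"
printf '[Desktop Entry]\nName=Wireshark\nExec=wireshark %%f\n' > "$f"

# Prepend the environment override that disables overlay-scrollbar
sed -i 's|^Exec=|Exec=env LIBOVERLAY_SCROLLBAR=0 |' "$f"

# Verify the result
grep '^Exec=' "$f"
```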
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/fix-wireshark-gui-freeze-linux-desktop.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni

View File

@ -0,0 +1,99 @@
How to monitor stock quotes from the command line on Linux
================================================================================
If you are one of those stock investors or traders, monitoring the stock market will be one of your daily routines. Most likely you will be using an online trading platform which comes with some fancy real-time charts and all sort of advanced stock analysis and tracking tools. While such sophisticated market research tools are a must for any serious stock investors to read the market, monitoring the latest stock quotes still goes a long way to build a profitable portfolio.
If you are a full-time system admin constantly sitting in front of terminals while trading stocks as a hobby during the day, a simple command-line tool that shows real-time stock quotes will be a blessing for you.
In this tutorial, let me introduce a neat command-line tool that allows you to monitor stock quotes from the command line on Linux.
This tool is called [Mop][1]. Written in Go, this lightweight command-line tool is extremely handy for tracking the latest stock quotes from the U.S. markets. You can easily customize the list of stocks to monitor, and it shows the latest stock quotes in ncurses-based, easy-to-read interface.
**Note**: Mop obtains the latest stock quotes via Yahoo! Finance API. Be aware that their stock quotes are known to be delayed by 15 minutes. So if you are looking for "real-time" stock quotes with zero delay, Mop is not a tool for you. Such "live" stock quote feeds are usually available for a fee via some proprietary closed-door interface. With that being said, let's see how you can use Mop under Linux environment.
### Install Mop on Linux ###
Since Mop is implemented in Go, you will need to install Go language first. If you don't have Go installed, follow [this guide][2] to install Go on your Linux platform. Make sure to set GOPATH environment variable as described in the guide.
Once Go is installed, proceed to install Mop as follows.
**Debian, Ubuntu or Linux Mint**
$ sudo apt-get install git
$ go get github.com/michaeldv/mop
$ cd $GOPATH/src/github.com/michaeldv/mop
$ make install
**Fedora, CentOS or RHEL**
$ sudo yum install git
$ go get github.com/michaeldv/mop
$ cd $GOPATH/src/github.com/michaeldv/mop
$ make install
The above commands will install Mop under $GOPATH/bin.
Now edit your .bashrc to include $GOPATH/bin in your PATH variable.
export PATH="$PATH:$GOPATH/bin"
----------
$ source ~/.bashrc
### Monitor Stock Quotes from the Command Line with Mop ###
To launch Mop, simply run the command called cmd.
$ cmd
At the first launch, you will see a few stock tickers which Mop comes pre-configured with.
![](https://farm6.staticflickr.com/5749/20018949104_c8c64e0e06_c.jpg)
The quotes show information like the latest price, %change, daily low/high, 52-week low/high, dividend, and annual yield. Mop obtains market overview information from [CNN][3], and individual stock quotes from [Yahoo Finance][4]. The stock quote information updates itself within the terminal periodically.
### Customize Stock Quotes in Mop ###
Let's try customizing the stock list. Mop provides easy-to-remember shortcuts for this: '+' to add a new stock, and '-' to remove a stock.
To add a new stock, press '+', and type a stock ticker symbol to add (e.g., MSFT). You can add more than one stock at once by typing a comma-separated list of tickers (e.g., "MSFT, AMZN, TSLA").
![](https://farm1.staticflickr.com/636/20648164441_642ae33a22_c.jpg)
Removing stocks from the list can be done similarly by pressing '-'.
### Sort Stock Quotes in Mop ###
You can sort the stock quote list based on any column. To sort, press 'o', and use left/right key to choose the column to sort by. When a particular column is chosen, you can sort the list either in increasing order or in decreasing order by pressing ENTER.
![](https://farm1.staticflickr.com/724/20648164481_15631eefcf_c.jpg)
By pressing 'g', you can group your stocks based on whether they are advancing or declining for the day. Advancing issues are represented in green color, while declining issues are colored in white.
![](https://c2.staticflickr.com/6/5633/20615252696_a5bd44d3aa_b.jpg)
If you want to access help page, simply press '?'.
![](https://farm1.staticflickr.com/573/20632365342_da196b657f_c.jpg)
### Conclusion ###
As you can see, Mop is a lightweight, yet extremely handy stock monitoring tool. Of course you can easily access stock quote information elsewhere, from online websites, your smartphone, etc. However, if you spend a great deal of your time in a terminal environment, Mop can easily fit into your workspace, hopefully without disrupting much of your workflow. Just let it run and continuously update market data in one of your terminals, and be done with it.
Happy trading!
--------------------------------------------------------------------------------
via: http://xmodulo.com/monitor-stock-quotes-command-line-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:https://github.com/michaeldv/mop
[2]:http://ask.xmodulo.com/install-go-language-linux.html
[3]:http://money.cnn.com/data/markets/
[4]:http://finance.yahoo.com/

View File

@ -0,0 +1,127 @@
How to Install Visual Studio Code in Linux
================================================================================
Hi everyone, today we'll learn how to install Visual Studio Code on Linux distributions. Visual Studio Code is a code-optimized editor based on Electron, a piece of software based on Chromium that is used to deploy io.js applications on the desktop. It is a source code and text editor developed by Microsoft for all major operating system platforms, including Linux. Visual Studio Code is free but not open source software, i.e. it is distributed under proprietary software license terms. It is a powerful and fast code editor for day-to-day use. Some of the cool features of Visual Studio Code are code navigation, IntelliSense support, syntax highlighting, bracket matching, auto indentation, snippets, keyboard support with customizable bindings, and support for dozens of languages like Python, C++, Jade, PHP, XML, Batch, F#, Dockerfile, CoffeeScript, Java, Handlebars, R, Objective-C, PowerShell, Lua, Visual Basic, .NET, ASP.NET, C#, JSON, Node.js, JavaScript, HTML, CSS, Less, Sass and Markdown. Visual Studio Code integrates with package managers, repositories, builds and other common tasks to make everyday workflows faster. The most popular feature of Visual Studio Code is its debugging support, which includes streamlined support for Node.js debugging in the preview.
Note: Visual Studio Code is only available for 64-bit versions of Linux distributions.
Here are some easy-to-follow steps to install Visual Studio Code on any Linux distribution.
### 1. Downloading Visual Studio Code Package ###
First of all, we'll download the Visual Studio Code package for 64-bit Linux from the Microsoft server using the given URL [http://go.microsoft.com/fwlink/?LinkID=534108][1]. Here, we'll use wget to download it and store it under the /tmp/vscode directory as shown below.
# mkdir /tmp/vscode; cd /tmp/vscode/
# wget https://az764295.vo.msecnd.net/public/0.3.0/VSCode-linux-x64.zip
--2015-06-24 06:02:54-- https://az764295.vo.msecnd.net/public/0.3.0/VSCode-linux-x64.zip
Resolving az764295.vo.msecnd.net (az764295.vo.msecnd.net)... 93.184.215.200, 2606:2800:11f:179a:1972:2405:35b:459
Connecting to az764295.vo.msecnd.net (az764295.vo.msecnd.net)|93.184.215.200|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 64992671 (62M) [application/octet-stream]
Saving to: VSCode-linux-x64.zip
100%[================================================>] 64,992,671 14.9MB/s in 4.1s
2015-06-24 06:02:58 (15.0 MB/s) - VSCode-linux-x64.zip saved [64992671/64992671]
### 2. Extracting the Package ###
Now that we have successfully downloaded the zipped package of Visual Studio Code, we'll extract it to the /opt/ directory using the unzip command. To do so, run the following command in a terminal or console.
# unzip /tmp/vscode/VSCode-linux-x64.zip -d /opt/
Note: If unzip is not already installed, install it via your package manager: apt-get on Ubuntu, or dnf/yum on Fedora and CentOS.
### 3. Running Visual Studio Code ###
After we have extracted the package, we can launch Visual Studio Code directly by executing a file named Code.
# sudo chmod +x /opt/VSCode-linux-x64/Code
# sudo /opt/VSCode-linux-x64/Code
If we want Code to be available globally, so that it can be launched from a terminal anywhere, we'll need to link /opt/VSCode-linux-x64/Code as /usr/local/bin/code.
# ln -s /opt/VSCode-linux-x64/Code /usr/local/bin/code
Now, we can launch Visual Studio Code by running the following command in a terminal.
# code .
### 4. Creating a Desktop Launcher ###
Next, we'll create a desktop launcher so that Visual Studio Code is easily available from the launchers, menus, and desktop of the desktop environment. First, we'll copy the icon file to the /usr/share/icons/ directory.
# cp /opt/VSCode-linux-x64/resources/app/vso.png /usr/share/icons/
Then, we'll create the desktop launcher with a .desktop extension. Here, we'll create a file named visualstudiocode.desktop under the /tmp/vscode/ folder using our favorite text editor.
# vi /tmp/vscode/visualstudiocode.desktop
Then, paste the following lines into that file.
[Desktop Entry]
Name=Visual Studio Code
Comment=Multi-platform code editor for Linux
Exec=/opt/VSCode-linux-x64/Code
Icon=/usr/share/icons/vso.png
Type=Application
StartupNotify=true
Categories=TextEditor;Development;Utility;
MimeType=text/plain;
After creating the desktop file, we'll copy it to the /usr/share/applications/ directory so that it is available in launchers and menus with a single click.
# cp /tmp/vscode/visualstudiocode.desktop /usr/share/applications/
Once that's done, we can launch Visual Studio Code from the launcher or menu.
![Visual Studio Code](http://blog.linoxide.com/wp-content/uploads/2015/06/visual-studio-code.png)
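As a quick sanity check, the launcher above can also be generated from a script with a heredoc instead of a text editor. This is a minimal sketch that writes the file to a temporary location; the /opt and icon paths are the same assumptions as in the steps above, and copying into /usr/share/applications/ is left as a manual root step:

```shell
# Generate the same .desktop launcher with a heredoc; the Exec and Icon
# paths assume Code was extracted to /opt/VSCode-linux-x64 as shown above.
desktop_file="$(mktemp -d)/visualstudiocode.desktop"

cat > "$desktop_file" <<'EOF'
[Desktop Entry]
Name=Visual Studio Code
Comment=Multi-platform code editor for Linux
Exec=/opt/VSCode-linux-x64/Code
Icon=/usr/share/icons/vso.png
Type=Application
StartupNotify=true
Categories=TextEditor;Development;Utility;
MimeType=text/plain;
EOF

# On a real desktop system, install it as root:
#   cp "$desktop_file" /usr/share/applications/
echo "Wrote $desktop_file"
```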
### Installing Visual Studio Code in Ubuntu ###
We can use Ubuntu Make 0.7 to install Visual Studio Code on Ubuntu 14.04/14.10/15.04. This is the easiest way to set up Code on Ubuntu, as we only need to execute a few commands. First, we'll install Ubuntu Make 0.7 by adding its PPA, which can be done by running the command below.
# add-apt-repository ppa:ubuntu-desktop/ubuntu-make
This ppa proposes package backport of Ubuntu make for supported releases.
More info: https://launchpad.net/~ubuntu-desktop/+archive/ubuntu/ubuntu-make
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmpv0vf24us/secring.gpg' created
gpg: keyring `/tmp/tmpv0vf24us/pubring.gpg' created
gpg: requesting key A1231595 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpv0vf24us/trustdb.gpg: trustdb created
gpg: key A1231595: public key "Launchpad PPA for Ubuntu Desktop" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK
Then, we'll update the local repository index and install ubuntu-make.
# apt-get update
# apt-get install ubuntu-make
Once Ubuntu Make is installed on our Ubuntu system, we'll install Code by running the following command in a terminal.
# umake web visual-studio-code
![Umake Web Code](http://blog.linoxide.com/wp-content/uploads/2015/06/umake-web-code.png)
After running the above command, we'll be asked for the path where we want to install it, and then for permission to install Visual Studio Code on our Ubuntu system; press "a" to accept. It will then download and install Code. Finally, we can launch it from the launcher or menu.
### Conclusion ###
We have successfully installed Visual Studio Code on Linux. The installation steps shown above work the same across Linux distributions, and on Ubuntu we can also use umake to install it. Umake is a popular tool for installing development tools, IDEs and languages; we can easily install Android Studio, Eclipse and many other popular IDEs with it. Visual Studio Code is based on a GitHub project called [Electron][2], which is part of the [Atom.io][3] editor, and adds a number of new and improved features that the Atom.io editor lacks. Visual Studio Code is currently only available for 64-bit Linux. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-visual-studio-code-linux/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://go.microsoft.com/fwlink/?LinkID=534108
[2]:https://github.com/atom/electron
[3]:https://github.com/atom/atom

Linux FAQs with Answers--How to check MariaDB server version
================================================================================
> **Question**: I am on a VPS server where MariaDB server is running. How can I find out which version of MariaDB server it is running?
There are circumstances where you need to know the version of your database server, e.g., when upgrading the database or patching any known server vulnerabilities. There are a few ways to find out what the version of your MariaDB server is.
### Method One ###
The first method to identify the MariaDB server version is to log in to the MariaDB server. Right after you log in, you will see a welcome message indicating the MariaDB server version.
![](https://farm6.staticflickr.com/5807/20669891016_91249d3239_c.jpg)
Alternatively, simply type the 'status' command at the MariaDB prompt at any time while you are logged in. The output shows the server version as well as the protocol version, as follows.
![](https://farm6.staticflickr.com/5801/20669891046_73f60e5c81_c.jpg)
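If you only need the bare version number (in a script, say), the client banner can be parsed with sed. This is a hedged sketch: on a live system you would capture the output of `mysql -V`; here a sample banner in the typical format is parsed instead, so the logic is reproducible on any machine:

```shell
# Extract just the numeric version from the client's banner.
# On a live server you would use the real output:
#   banner="$(mysql -V)"
# The sample below mimics the typical banner format (an assumption).
banner="mysql  Ver 15.1 Distrib 10.0.17-MariaDB, for Linux (x86_64)"

# Pull the "10.0.17" part out of the "Distrib 10.0.17-MariaDB" token.
version="$(printf '%s\n' "$banner" | sed -n 's/.*Distrib \([0-9.]*\)-MariaDB.*/\1/p')"
echo "MariaDB version: $version"
```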
### Method Two ###
If you don't have access to the MariaDB server, you cannot use the first method. In this case, you can infer MariaDB server version by checking which MariaDB package was installed. This works only when the MariaDB server was installed using a distribution's package manager.
You can search for the installed MariaDB server package as follows.
#### Debian, Ubuntu or Linux Mint: ####
$ dpkg -l | grep mariadb
The output below indicates that the installed MariaDB server is version 10.0.17.
![](https://farm1.staticflickr.com/607/20669890966_b611fcd915_c.jpg)
#### Fedora, CentOS or RHEL: ####
$ rpm -qa | grep mariadb
The output below indicates that the installed version is 5.5.41.
![](https://farm1.staticflickr.com/764/20508160748_23d9808256_b.jpg)
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/check-mariadb-server-version.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni

Part 1 - LFCS: How to use GNU sed Command to Create, Edit, and Manipulate files in Linux
================================================================================
The Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, a new program that aims at helping individuals all over the world to get certified in basic to intermediate system administration tasks for Linux systems. This includes supporting running systems and services, along with first-hand troubleshooting and analysis, and smart decision-making to escalate issues to engineering teams.
![Linux Foundation Certified Sysadmin](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-1.png)
Linux Foundation Certified Sysadmin Part 1
Please watch the following video that demonstrates The Linux Foundation Certification Program.
YouTube video
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
The series will be titled Preparation for the LFCS (Linux Foundation Certified Sysadmin) Parts 1 through 10 and cover the following topics for Ubuntu, CentOS, and openSUSE:
- Part 1: How to use GNU sed Command to Create, Edit, and Manipulate files in Linux
- Part 2: How to Install and Use vi/m as a full Text Editor
- Part 3: Archiving Files/Directories and Finding Files on the Filesystem
- Part 4: Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition
- Part 5: Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux
- Part 6: Assembling Partitions as RAID Devices Creating & Managing System Backups
- Part 7: Managing System Startup Process and Services (SysVinit, Systemd and Upstart)
- Part 8: Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts
- Part 9: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper
- Part 10: Learning Basic Shell Scripting and Filesystem Troubleshooting
This post is Part 1 of a 10-tutorial series, which will cover the necessary domains and competencies that are required for the LFCS certification exam. That being said, fire up your terminal, and let's start.
### Processing Text Streams in Linux ###
Linux treats the input to and the output from programs as streams (or sequences) of characters. To begin understanding redirection and pipes, we must first understand the three most important types of I/O (Input and Output) streams, which are in fact special files (by convention in UNIX and Linux, data streams and peripherals, or device files, are also treated as ordinary files).
The difference between > (redirection operator) and | (pipeline operator) is that while the first connects a command with a file, the latter connects the output of a command with another command.
# command > file
# command1 | command2
Since the redirection operator creates or overwrites files silently, we must use it with extreme caution, and never mistake it for a pipeline. One advantage of pipes on Linux and UNIX systems is that there is no intermediate file involved: the stdout of the first command is not written to a file and then read by the second command.
For the following practice exercises we will use the poem “A happy child” (anonymous author).
![cat command](http://www.tecmint.com/wp-content/uploads/2014/10/cat-command.png)
cat command example
#### Using sed ####
The name sed is short for stream editor. For those unfamiliar with the term, a stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline).
The most basic (and popular) usage of sed is the substitution of characters. We will begin by changing every occurrence of the lowercase y to UPPERCASE Y and redirecting the output to ahappychild2.txt. The g flag indicates that sed should perform the substitution for all instances of term on every line of file. If this flag is omitted, sed will replace only the first occurrence of term on each line.
**Basic syntax:**
# sed 's/term/replacement/flag' file
**Our example:**
# sed s/y/Y/g ahappychild.txt > ahappychild2.txt
![sed command](http://www.tecmint.com/wp-content/uploads/2014/10/sed-command.png)
sed command example
Should you want to search for or replace a special character (such as /, \, &), you need to escape it in the term or replacement strings with a backslash.
For example, we will substitute the word and for an ampersand. At the same time, we will replace the word I with You when the first one is found at the beginning of a line.
# sed 's/and/\&/g;s/^I/You/g' ahappychild.txt
![sed replace string](http://www.tecmint.com/wp-content/uploads/2014/10/sed-replace-string.png)
sed replace string
In the above command, a ^ (caret sign) is a well-known regular expression that is used to represent the beginning of a line.
As you can see, we can combine two or more substitution commands (and use regular expressions inside them) by separating them with a semicolon and enclosing the set inside single quotes.
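The substitutions above can be tried without the poem file; this sketch runs the same sed programs on an inline sample line (the sample text is made up for illustration):

```shell
# A single sample line standing in for ahappychild.txt.
sample='I am a happy child and I play all day'

# g flag: every lowercase y on the line becomes Y.
upper_y="$(printf '%s\n' "$sample" | sed 's/y/Y/g')"

# Two commands joined with ';': \& escapes the ampersand in the
# replacement, and ^ anchors the I -> You substitution to line start.
combined="$(printf '%s\n' "$sample" | sed 's/and/\&/g;s/^I/You/g')"

echo "$upper_y"
echo "$combined"
```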
Another use of sed is showing (or deleting) a chosen portion of a file. In the following example, we will display the first 5 lines of /var/log/messages from Jun 8.
# sed -n '/^Jun 8/ p' /var/log/messages | sed -n 1,5p
Note that by default, sed prints every line. We can override this behaviour with the -n option and then tell sed to print (indicated by p) only the part of the file (or the pipe) that matches the pattern (Jun 8 at the beginning of line in the first case and lines 1 through 5 inclusive in the second case).
Finally, it can be useful while inspecting scripts or configuration files to inspect the code itself and leave out comments. The following sed one-liner deletes (d) blank lines or those starting with # (the | character indicates a boolean OR between the two regular expressions).
# sed '/^#\|^$/d' apache2.conf
![sed match string](http://www.tecmint.com/wp-content/uploads/2014/10/sed-match-string.png)
sed match string
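Both techniques (printing selected lines with -n and p, and deleting lines with d) can be reproduced on generated sample data, so no system log or Apache config is needed; note that the \| alternation is a GNU sed extension:

```shell
# Fake log: two "Jun 8" lines and one other line.
sample_log="$(mktemp)"
printf 'Jun 8 one\nJun 8 two\nJun 9 three\n' > "$sample_log"

# -n suppresses the default print; p prints only lines matching ^Jun 8.
jun8="$(sed -n '/^Jun 8/p' "$sample_log")"

# Fake config: a comment, a blank line, and one real directive.
sample_conf="$(mktemp)"
printf '# comment\n\nServerName example\n' > "$sample_conf"

# d deletes comment lines and blank lines (\| is GNU sed's OR).
cleaned="$(sed '/^#\|^$/d' "$sample_conf")"

echo "$jun8"
echo "$cleaned"
```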
#### uniq Command ####
The uniq command allows us to report or remove duplicate lines in a file, writing to stdout by default. We must note that uniq does not detect repeated lines unless they are adjacent. Thus, uniq is commonly used along with a preceding sort (which is used to sort lines of text files). By default, sort takes the first field (separated by spaces) as key field. To specify a different key field, we need to use the -k option.
**Examples**
The du -sch /path/to/directory/* command returns the disk space usage per subdirectory and file within the specified directory in human-readable format (it also shows a total per directory), but does not order the output by size, only by subdirectory and file name. We can use the following command to sort by size.
# du -sch /var/* | sort -h
![sort command](http://www.tecmint.com/wp-content/uploads/2014/10/sort-command.jpg)
sort command example
You can count the number of events in a log by date by telling uniq to perform the comparison using the first 6 characters (-w 6) of each line (where the date is specified), and prefixing each output line by the number of occurrences (-c) with the following command.
# cat /var/log/mail.log | uniq -c -w 6
![Count Numbers in File](http://www.tecmint.com/wp-content/uploads/2014/10/count-numbers-in-file.jpg)
Count Numbers in File
Finally, you can combine sort and uniq (as they usually are). Consider the following file with a list of donors, donation date, and amount. Suppose we want to know how many unique donors there are. We will use the following command to cut the first field (fields are delimited by a colon), sort by name, and remove duplicate lines.
# cat sortuniq.txt | cut -d: -f1 | sort | uniq
![Find Unique Records in File](http://www.tecmint.com/wp-content/uploads/2014/10/find-uniqu-records-in-file.jpg)
Find Unique Records in File
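A reproducible version of the donor example, with the list built inline (the names, dates and amounts below are sample data, not taken from the original file):

```shell
# Build a colon-delimited donors file: name:date:amount.
donors_file="$(mktemp)"
printf 'dave:2014-10-04:50\nsally:2014-10-05:20\ndave:2014-10-06:30\n' > "$donors_file"

# Cut the first field, sort by name, and drop duplicate lines.
unique_donors="$(cut -d: -f1 "$donors_file" | sort | uniq)"
echo "$unique_donors"

count="$(printf '%s\n' "$unique_donors" | wc -l)"
echo "Unique donors: $count"
```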
- Read Also: [13 “cat” Command Examples][1]
#### grep Command ####
grep searches text files or (command output) for the occurrence of a specified regular expression and outputs any line containing a match to standard output.
**Examples**
Display the information from /etc/passwd for user gacanepa, ignoring case.
# grep -i gacanepa /etc/passwd
![grep Command](http://www.tecmint.com/wp-content/uploads/2014/10/grep-command.jpg)
grep command example
Show all the contents of /etc whose name begins with rc followed by any single number.
# ls -l /etc | grep rc[0-9]
![List Content Using grep](http://www.tecmint.com/wp-content/uploads/2014/10/list-content-using-grep.jpg)
List Content Using grep
- Read Also: [12 “grep” Command Examples][2]
#### tr Command Usage ####
The tr command can be used to translate (change) or delete characters from stdin, and write the result to stdout.
**Examples**
Change all lowercase to uppercase in sortuniq.txt file.
# cat sortuniq.txt | tr '[:lower:]' '[:upper:]'
![Sort Strings in File](http://www.tecmint.com/wp-content/uploads/2014/10/sort-strings.jpg)
Sort Strings in File
Squeeze the delimiter in the output of ls -l to only one space.
# ls -l | tr -s ' '
![Squeeze Delimiter](http://www.tecmint.com/wp-content/uploads/2014/10/squeeze-delimeter.jpg)
Squeeze Delimiter
#### cut Command Usage ####
The cut command extracts portions of input lines (from stdin or files) and displays the result on standard output, based on number of bytes (-b option), characters (-c), or fields (-f). In this last case (based on fields), the default field separator is a tab, but a different delimiter can be specified by using the -d option.
**Examples**
Extract the user accounts and the default shells assigned to them from /etc/passwd (the -d option allows us to specify the field delimiter, and the -f switch indicates which field(s) will be extracted).
# cat /etc/passwd | cut -d: -f1,7
![Extract User Accounts](http://www.tecmint.com/wp-content/uploads/2014/10/extract-user-accounts.jpg)
Extract User Accounts
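The same cut invocation can be tested against a single sample passwd-style line, so the result does not depend on this machine's /etc/passwd (the line below is fabricated for illustration):

```shell
# A fabricated /etc/passwd entry: 7 colon-separated fields.
sample_passwd='gacanepa:x:1000:1000:Gabriel:/home/gacanepa:/bin/bash'

# -d: sets the delimiter; -f1,7 keeps the user name and login shell.
user_and_shell="$(printf '%s\n' "$sample_passwd" | cut -d: -f1,7)"
echo "$user_and_shell"
```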
Summing up, we will create a text stream consisting of the first and third fields of the output of the last command. We will use grep as a first filter to check for sessions of user gacanepa, then squeeze delimiters down to a single space (tr -s ' '). Next, we'll extract the first and third fields with cut, and finally sort by the second field (IP addresses in this case), showing unique entries.
# last | grep gacanepa | tr -s ' ' | cut -d' ' -f1,3 | sort -k2 | uniq
![last command](http://www.tecmint.com/wp-content/uploads/2014/10/last-command.png)
last command example
The above command shows how multiple commands and pipes can be combined so as to obtain filtered data according to our desires. Feel free to also run it by parts, to help you see the output that is pipelined from one command to the next (this can be a great learning experience, by the way!).
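To experiment with this pipeline without real login records, the last command can be simulated with a small function that prints fabricated lines (the user name and IP addresses are made up):

```shell
# Stand-in for 'last': three fabricated login records.
fake_last() {
  printf 'gacanepa pts/0    192.168.0.10  Mon Oct  6 09:00\n'
  printf 'gacanepa pts/1    192.168.0.10  Mon Oct  6 10:00\n'
  printf 'root     pts/0    192.168.0.99  Mon Oct  6 11:00\n'
}

# Filter by user, squeeze spaces, keep fields 1 and 3, sort, de-duplicate.
result="$(fake_last | grep gacanepa | tr -s ' ' | cut -d ' ' -f1,3 | sort -k2 | uniq)"
echo "$result"
```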
### Summary ###
Although these examples (along with the rest of the examples in the current tutorial) may not seem very useful at first sight, they are a nice starting point to begin experimenting with commands that are used to create, edit, and manipulate files from the Linux command line. Feel free to leave your questions and comments below; they will be much appreciated!
#### Reference Links ####
- [About the LFCS][3]
- [Why get a Linux Foundation Certification?][4]
- [Register for the LFCS exam][5]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/
[2]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/
[3]:https://training.linuxfoundation.org/certification/LFCS
[4]:https://training.linuxfoundation.org/certification/why-certify-with-us
[5]:https://identity.linuxfoundation.org/user?destination=pid/1

Part 10 - LFCS: Understanding & Learning Basic Shell Scripting and Linux Filesystem Troubleshooting
================================================================================
The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new initiative whose purpose is to allow individuals everywhere (and anywhere) to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams.
![Basic Shell Scripting and Filesystem Troubleshooting](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-10.png)
Linux Foundation Certified Sysadmin Part 10
Check out the following video that gives you an introduction to the Linux Foundation Certification Program.
YouTube video
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
This is the last article (Part 10) of the present 10-tutorial long series. In this article we will focus on basic shell scripting and troubleshooting Linux file systems. Both topics are required for the LFCS certification exam.
### Understanding Terminals and Shells ###
Let's clarify a few concepts first.
- A shell is a program that takes commands and gives them to the operating system to be executed.
- A terminal is a program that allows us as end users to interact with the shell. One example of a terminal is GNOME terminal, as shown in the below image.
![Gnome Terminal](http://www.tecmint.com/wp-content/uploads/2014/11/Gnome-Terminal.png)
Gnome Terminal
When we first start a shell, it presents a command prompt (also known as the command line), which tells us that the shell is ready to start accepting commands from its standard input device, which is usually the keyboard.
You may want to refer to another article in this series ([Use Command to Create, Edit, and Manipulate files Part 1][1]) to review some useful commands.
Linux provides a range of options for shells, the following being the most common:
**bash Shell**
Bash stands for Bourne Again SHell and is the GNU Project's default shell. It incorporates useful features from the Korn shell (ksh) and C shell (csh), offering several improvements at the same time. This is the default shell used by the distributions covered in the LFCS certification, and it is the shell that we will use in this tutorial.
**sh Shell**
The Bourne SHell is the oldest shell and therefore has been the default shell of many UNIX-like operating systems for many years.
**ksh Shell**
The Korn SHell is a Unix shell which was developed by David Korn at Bell Labs in the early 1980s. It is backward-compatible with the Bourne shell and includes many features of the C shell.
A shell script is nothing more and nothing less than a text file turned into an executable program that combines commands that are executed by the shell one after another.
### Basic Shell Scripting ###
As mentioned earlier, a shell script is born as a plain text file. Thus, it can be created and edited using our preferred text editor. You may want to consider using vi/m (refer to [Usage of vi Editor Part 2][2] of this series), which features syntax highlighting for your convenience.
Type the following command to create a file named myscript.sh and press Enter.
# vim myscript.sh
The very first line of a shell script must be as follows (also known as a shebang).
#!/bin/bash
It “tells” the operating system the name of the interpreter that should be used to run the text that follows.
Now it's time to add our commands. We can clarify the purpose of each command, or the entire script, by adding comments as well. Note that the shell ignores those lines beginning with a pound sign # (explanatory comments).
#!/bin/bash
echo This is Part 10 of the 10-article series about the LFCS certification
echo Today is $(date +%Y-%m-%d)
Once the script has been written and saved, we need to make it executable.
# chmod 755 myscript.sh
Before running our script, we need to say a few words about the $PATH environment variable. If we run,
echo $PATH
from the command line, we will see the contents of $PATH: a colon-separated list of directories that are searched when we enter the name of an executable program. It is called an environment variable because it is part of the shell environment: a set of information that becomes available to the shell and its child processes when the shell is first started.
When we type a command and press Enter, the shell searches all the directories listed in the $PATH variable and executes the first instance that is found. Let's see an example.
![Linux Environment Variables](http://www.tecmint.com/wp-content/uploads/2014/11/Environment-Variable.png)
Environment Variables
If there are two executable files with the same name, one in /usr/local/bin and another in /usr/bin, the one in the first directory will be executed first, whereas the other will be disregarded.
If we haven't saved our script inside one of the directories listed in the $PATH variable, we need to append ./ to the file name in order to execute it. Otherwise, we can run it just as we would do with a regular command.
# pwd
# ./myscript.sh
# cp myscript.sh ../bin
# cd ../bin
# pwd
# myscript.sh
![Execute Script in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Execute-Script.png)
Execute Script
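The $PATH lookup described above can be demonstrated without touching system directories: drop a tiny script into a temporary directory, prepend that directory to PATH, and the script becomes callable by bare name. A minimal sketch (the script name and its output are made up for the demonstration):

```shell
# Create a throwaway "bin" directory with a one-line script in it.
bindir="$(mktemp -d)"
printf '#!/bin/bash\necho hello from myscript\n' > "$bindir/myscript.sh"
chmod 755 "$bindir/myscript.sh"

# Prepend the directory to PATH; the shell now finds the script by name.
PATH="$bindir:$PATH"
out="$(myscript.sh)"      # no ./ prefix needed
echo "$out"
command -v myscript.sh    # shows which file the shell picked
```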
#### Conditionals ####
Whenever you need to specify different courses of action to be taken in a shell script, as a result of the success or failure of a command, you will use the if construct to define such conditions. Its basic syntax is:
if CONDITION; then
COMMANDS;
else
OTHER-COMMANDS
fi
Where CONDITION can be one of the following (only the most frequent conditions are cited here) and evaluates to true when:
- [ -a file ] → file exists.
- [ -d file ] → file exists and is a directory.
- [ -f file ] → file exists and is a regular file.
- [ -u file ] → file exists and its SUID (set user ID) bit is set.
- [ -g file ] → file exists and its SGID bit is set.
- [ -k file ] → file exists and its sticky bit is set.
- [ -r file ] → file exists and is readable.
- [ -s file ] → file exists and is not empty.
- [ -w file ] → file exists and is writable.
- [ -x file ] → file exists and is executable.
- [ string1 = string2 ] → the strings are equal.
- [ string1 != string2 ] → the strings are not equal.
- [ int1 op int2 ] → the integer comparison holds, where op is one of the following comparison operators:
- -eq → true if int1 is equal to int2.
- -ne → true if int1 is not equal to int2.
- -lt → true if int1 is less than int2.
- -le → true if int1 is less than or equal to int2.
- -gt → true if int1 is greater than int2.
- -ge → true if int1 is greater than or equal to int2.
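A short sketch exercising a couple of the conditions above (a file test and an integer comparison) on throwaway data; the file and variable names are invented for the demonstration:

```shell
# Create a temporary regular file to test against.
probe="$(mktemp)"
echo data > "$probe"

# -f: true if the file exists and is a regular file.
if [ -f "$probe" ]; then
  file_status="regular file"
else
  file_status="missing"
fi

# -gt: integer "greater than" comparison.
count=3
if [ "$count" -gt 2 ]; then
  size_status="greater than 2"
else
  size_status="2 or less"
fi

echo "$file_status / $size_status"
```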
#### For Loops ####
This loop allows you to execute one or more commands for each value in a list of values. Its basic syntax is:
for item in SEQUENCE; do
COMMANDS;
done
Where item is a generic variable that represents each value in SEQUENCE during each iteration.
#### While Loops ####
This loop allows you to execute a series of repetitive commands as long as the control command exits with a status equal to zero (success). Its basic syntax is:
while EVALUATION_COMMAND; do
EXECUTE_COMMANDS;
done
Where EVALUATION_COMMAND can be any command(s) that can exit with a success (0) or failure (other than 0) status, and EXECUTE_COMMANDS can be any program, script or shell construct, including other nested loops.
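Both loop forms can be tried in a few lines; this sketch collects items with a for loop and counts down with a while loop (the list of items is arbitrary sample data):

```shell
# for: iterate over a fixed list, accumulating each item.
collected=""
for item in alpha beta gamma; do
  collected="$collected$item "
done

# while: repeat as long as the test command exits with status 0.
n=3
while [ "$n" -gt 0 ]; do
  n=$((n - 1))
done

echo "collected: $collected"
echo "n ended at: $n"
```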
#### Putting It All Together ####
We will demonstrate the use of the if construct and the for loop with the following example.
**Determining if a service is running in a systemd-based distro**
Let's create a file with a list of services that we want to monitor at a glance.
# cat myservices.txt
sshd
mariadb
httpd
crond
firewalld
![Script to Monitor Linux Services](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Services.png)
Script to Monitor Linux Services
Our shell script should look like.
#!/bin/bash
# This script iterates over a list of services and
# is used to determine whether they are running or not.
for service in $(cat myservices.txt); do
systemctl status $service | grep --quiet "running"
if [ $? -eq 0 ]; then
echo $service "is [ACTIVE]"
else
echo $service "is [INACTIVE or NOT INSTALLED]"
fi
done
![Linux Service Monitoring Script](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Script.png)
Linux Service Monitoring Script
**Let's explain how the script works.**
1). The for loop reads the myservices.txt file one element of LIST at a time. That single element is denoted by the generic variable named service. The LIST is populated with the output of,
# cat myservices.txt
2). The above command is enclosed in parentheses and preceded by a dollar sign to indicate that it should be evaluated to populate the LIST that we will iterate over.
3). For each element of LIST (meaning every instance of the service variable), the following command will be executed.
# systemctl status $service | grep --quiet "running"
This time we need to precede our generic variable (which represents each element in LIST) with a dollar sign to indicate it's a variable and thus its value in each iteration should be used. The output is then piped to grep.
The --quiet flag is used to prevent grep from displaying to the screen the lines where the word running appears. When a match is found, the above command returns an exit status of 0 (represented by $? in the if construct), thus verifying that the service is running.
An exit status different than 0 (meaning the word running was not found in the output of systemctl status $service) indicates that the service is not running.
![Services Monitoring Script](http://www.tecmint.com/wp-content/uploads/2014/11/Services-Monitoring-Script.png)
Services Monitoring Script
We could go one step further and check for the existence of myservices.txt before even attempting to enter the for loop.
#!/bin/bash
# This script iterates over a list of services and
# is used to determine whether they are running or not.
if [ -f myservices.txt ]; then
for service in $(cat myservices.txt); do
systemctl status $service | grep --quiet "running"
if [ $? -eq 0 ]; then
echo $service "is [ACTIVE]"
else
echo $service "is [INACTIVE or NOT INSTALLED]"
fi
done
else
echo "myservices.txt is missing"
fi
**Pinging a series of network or internet hosts for reply statistics**
You may want to maintain a list of hosts in a text file and use a script to determine every now and then whether they're pingable or not (feel free to replace the contents of myhosts and try for yourself).
The read shell built-in command tells the while loop to read myhosts line by line and assigns the content of each line to variable host, which is then passed to the ping command.
#!/bin/bash
# This script is used to demonstrate the use of a while loop
while read host; do
    ping -c 2 $host
done < myhosts
![Script to Ping Servers](http://www.tecmint.com/wp-content/uploads/2014/11/Script-to-Ping-Servers.png)
Script to Ping Servers
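A slightly more defensive variant of the same loop can silence ping's output and report only a one-line summary per host. This is a sketch; the `-W 1` timeout, the function wrapper, and the UP/DOWN labels are our additions, not part of the original script:

```shell
#!/bin/bash
# Report each host listed in a file as UP or DOWN,
# based solely on ping's exit status.
check_hosts() {
    local file=$1
    while read -r host; do
        # -c 2: send two probes; -W 1: wait at most 1 second for a reply
        if ping -c 2 -W 1 "$host" > /dev/null 2>&1; then
            echo "$host is UP"
        else
            echo "$host is DOWN"
        fi
    done < "$file"
}
```

Run it as `check_hosts myhosts` to process the same host list as above.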
Read Also:
- [Learn Shell Scripting: A Guide from Newbies to System Administrator][3]
- [5 Shell Scripts to Learn Shell Programming][4]
### Filesystem Troubleshooting ###
Although Linux is a very stable operating system, if it crashes for some reason (for example, due to a power outage), one (or more) of your file systems will not be unmounted properly and thus will be automatically checked for errors when Linux is restarted.
In addition, during every normal boot the system checks the integrity of the filesystems before mounting them. In both cases this is performed using a tool named fsck (“file system check”).
fsck will not only check the integrity of file systems, but also attempt to repair corrupt file systems if instructed to do so. Depending on the severity of damage, fsck may succeed or not; when it does, recovered portions of files are placed in the lost+found directory, located in the root of each file system.
Last but not least, we must note that inconsistencies may also happen if we try to remove a USB drive while the operating system is still writing to it, and may even result in hardware damage.
The basic syntax of fsck is as follows:
# fsck [options] filesystem
**Checking a filesystem for errors and attempting to repair automatically**
In order to check a filesystem with fsck, we must first unmount it.
# mount | grep sdg1
# umount /mnt
# fsck -y /dev/sdg1
![Scan Linux Filesystem for Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Filesystem-Errors.png)
Check Filesystem Errors
Besides the -y flag, we can use the -a option to repair the file system automatically without asking any questions, while the -f flag forces the check even when the filesystem looks clean.
# fsck -af /dev/sdg1
If we're only interested in finding out what's wrong (without trying to fix anything for the time being), we can run fsck with the -n option, which will output the filesystem issues to standard output.
# fsck -n /dev/sdg1
Depending on the error messages in the output of fsck, we will know whether we can try to solve the issue ourselves or escalate it to engineering teams to perform further checks on the hardware.
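fsck's exit status is a bit mask (documented in fsck(8)), so a script can decode it to decide whether escalation is needed. A sketch, with the decoder wrapped in a hypothetical helper function:

```shell
#!/bin/bash
# Decode fsck's exit status, which is a sum (bit mask) of the
# values below, as documented in the fsck(8) man page.
decode_fsck_status() {
    local status=$1
    if [ "$status" -eq 0 ]; then
        echo "no errors"
        return 0
    fi
    [ $(( status & 1 ))   -ne 0 ] && echo "filesystem errors corrected"
    [ $(( status & 2 ))   -ne 0 ] && echo "system should be rebooted"
    [ $(( status & 4 ))   -ne 0 ] && echo "filesystem errors left uncorrected"
    [ $(( status & 8 ))   -ne 0 ] && echo "operational error"
    [ $(( status & 16 ))  -ne 0 ] && echo "usage or syntax error"
    [ $(( status & 32 ))  -ne 0 ] && echo "checking canceled by user request"
    [ $(( status & 128 )) -ne 0 ] && echo "shared-library error"
    return 0
}
```

For example, after running `fsck -n /dev/sdg1`, calling `decode_fsck_status $?` translates the raw status into something readable.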
### Summary ###
We have arrived at the end of this 10-article series, in which we have tried to cover the basic domain competencies required to pass the LFCS exam.
For obvious reasons, it is not possible to cover every single aspect of these topics in a single tutorial, and that's why we hope that these articles have put you on the right track to try new stuff yourself and continue learning.
If you have any questions or comments, they are always welcome, so don't hesitate to drop us a line via the form below!
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/
[2]:http://www.tecmint.com/vi-editor-usage/
[3]:http://www.tecmint.com/learning-shell-scripting-language-a-guide-from-newbies-to-system-administrator/
[4]:http://www.tecmint.com/basic-shell-programming-part-ii/

Part 2 - LFCS: How to Install and Use vi/vim as a Full Text Editor
================================================================================
A couple of months ago, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification in order to help individuals from all over the world to verify they are capable of doing basic to intermediate system administration tasks on Linux systems: system support, first-hand troubleshooting and maintenance, plus intelligent decision-making to know when it's time to raise issues to upper support teams.
![Learning VI Editor in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/LFCS-Part-2.png)
Learning VI Editor in Linux
Please take a look at the below video that explains The Linux Foundation Certification Program.
YouTube video
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
This post is Part 2 of a 10-tutorial series. In this part, we will cover the basic file editing operations and editing modes of the vi/m editor, which are required for the LFCS certification exam.
### Perform Basic File Editing Operations Using vi/m ###
Vi was the first full-screen text editor written for Unix. Although it was intended to be small and simple, it can be a bit challenging for people used exclusively to GUI text editors, such as Notepad++ or gedit, to name a few examples.
To use Vi, we must first understand the 3 modes in which this powerful program operates, before we can begin learning about its powerful text-editing procedures.
Please note that most modern Linux distributions ship with a variant of vi known as vim (“Vi improved”), which supports more features than the original vi does. For that reason, throughout this tutorial we will use vi and vim interchangeably.
If your distribution does not have vim installed, you can install it as follows.
- Ubuntu and derivatives: aptitude update && aptitude install vim
- Red Hat-based distributions: yum update && yum install vim
- openSUSE: zypper update && zypper install vim
### Why should I want to learn vi? ###
There are at least 2 good reasons to learn vi.
1. vi is always available (no matter what distribution you're using) since it is required by POSIX.
2. vi does not consume a considerable amount of system resources and allows us to perform any imaginable task without lifting our fingers from the keyboard.
In addition, vi has a very extensive built-in manual, which can be launched using the :help command right after the program is started. This built-in manual contains more information than vi/m's man page.
![vi Man Pages](http://www.tecmint.com/wp-content/uploads/2014/10/vi-man-pages.png)
vi Man Pages
#### Launching vi ####
To launch vi, type vi in your command prompt.
![Start vi Editor](http://www.tecmint.com/wp-content/uploads/2014/10/start-vi-editor.png)
Start vi Editor
Then press i to enter Insert mode, and you can start typing. Another way to launch vi/m is:
# vi filename
This will open a new buffer (more on buffers later) named filename, which you can later save to disk.
#### Understanding Vi modes ####
1. In command mode, vi allows the user to navigate around the file and enter vi commands, which are brief, case-sensitive combinations of one or more letters. Almost all of them can be prefixed with a number to repeat the command that number of times.
For example, yy (or Y) copies the entire current line, whereas 3yy (or 3Y) copies the entire current line along with the next two lines (3 lines in total). We can always enter command mode (regardless of the mode we're working in) by pressing the Esc key. The fact that in command mode the keyboard keys are interpreted as commands instead of text tends to be confusing to beginners.
2. In ex mode, we can manipulate files (including saving a current file and running outside programs). To enter this mode, we must type a colon (:) from command mode, directly followed by the name of the ex-mode command that needs to be used. After that, vi returns automatically to command mode.
3. In insert mode (the letter i is commonly used to enter this mode), we simply enter text. Most keystrokes result in text appearing on the screen (one important exception is the Esc key, which exits insert mode and returns to command mode).
![vi Insert Mode](http://www.tecmint.com/wp-content/uploads/2014/10/vi-insert-mode.png)
vi Insert Mode
#### Vi Commands ####
The following table shows a list of commonly used vi commands. File editing commands can be forced by appending an exclamation mark to the command (for example, :q! forces quitting without saving).
注:表格
<table cellspacing="0" border="0">
<colgroup width="290">
</colgroup>
<colgroup width="781">
</colgroup>
<tbody>
<tr>
<td bgcolor="#999999" height="19" align="LEFT" style="border: 1px solid #000000;"><b><span style="font-size: small;">&nbsp;Key command</span></b></td>
<td bgcolor="#999999" align="LEFT" style="border: 1px solid #000000;"><b><span style="font-size: small;">&nbsp;Description</span></b></td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;h or left arrow</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Go one character to the left</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;j or down arrow</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Go down one line</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;k or up arrow</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Go up one line</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;l (lowercase L) or right arrow</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Go one character to the right</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;H</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Go to the top of the screen</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;L</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Go to the bottom of the screen</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;G</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Go to the end of the file</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;w</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Move one word to the right</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;b</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Move one word to the left</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;0 (zero)</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Go to the beginning of the current line</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;^</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Go to the first nonblank character on the current line</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;$</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Go to the end of the current line</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;Ctrl-B</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Go back one screen</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;Ctrl-F</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Go forward one screen</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;i</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Insert at the current cursor position</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;I (uppercase i)</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Insert at the beginning of the current line</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;J (uppercase j)</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Join current line with the next one (move next line up)</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;a</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Append after the current cursor position</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;o (lowercase O)</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Creates a blank line after the current line</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;O (uppercase o)</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Creates a blank line before the current line</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;r</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Replace the character at the current cursor position</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;R</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Overwrite at the current cursor position</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;x</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Delete the character at the current cursor position</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;X</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Delete the character immediately before (to the left) of the current cursor position</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;dd</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Cut (for later pasting) the entire current line</td>
</tr>
<tr>
<td height="20" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;D</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Cut from the current cursor position to the end of the line (this command is equivalent to d$)</td>
</tr>
<tr class="alt">
<td height="20" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;yX</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Give a movement command X, copy (yank) the appropriate number of characters, words, or lines from the current cursor position</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;yy or Y</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Yank (copy) the entire current line</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;p</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Paste after (next line) the current cursor position</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;P</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Paste before (previous line) the current cursor position</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;. (period)</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Repeat the last command</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;u</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Undo the last command</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;U</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Undo the last command in the last line. This will work as long as the cursor is still on the line.</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;n</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Find the next match in a search</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;N</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Find the previous match in a search</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;:n</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Next file; when multiple files are specified for editing, this command loads the next file.</td>
</tr>
<tr class="alt">
<td height="20" align="LEFT" style="border: 1px solid #000000;">&nbsp;:e file</td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Load file in place of the current file.</td>
</tr>
<tr>
<td height="20" align="LEFT" style="border: 1px solid #000000;">&nbsp;:r file</td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Insert the contents of file after (next line) the current cursor position</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;:q</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Quit without saving changes.</td>
</tr>
<tr>
<td height="20" align="LEFT" style="border: 1px solid #000000;">&nbsp;:w file</td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Write the current buffer to file. To append to an existing file, use :w &gt;&gt; file.</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000000;"><span style="font-family: Courier New;">&nbsp;:wq</span></td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Write the contents of the current file and quit. Equivalent to :x and ZZ</td>
</tr>
<tr>
<td height="20" align="LEFT" style="border: 1px solid #000000;">&nbsp;:r! command</td>
<td align="LEFT" style="border: 1px solid #000000;">&nbsp;Execute command and insert output after (next line) the current cursor position.</td>
</tr>
</tbody>
</table>
#### Vi Options ####
The following options can come in handy while running vim (we need to add them to our ~/.vimrc file).
# echo set number >> ~/.vimrc
# echo syntax on >> ~/.vimrc
# echo set tabstop=4 >> ~/.vimrc
# echo set autoindent >> ~/.vimrc
![vi Editor Options](http://www.tecmint.com/wp-content/uploads/2014/10/vi-options.png)
vi Editor Options
- set number shows line numbers when vi opens an existing or a new file.
- syntax on turns on syntax highlighting (for multiple file extensions) in order to make code and config files more readable.
- set tabstop=4 sets the tab size to 4 spaces (default value is 8).
- set autoindent carries over previous indent to the next line.
#### Search and replace ####
vi has the ability to move the cursor to a certain location (on a single line or over an entire file) based on searches. It can also perform text replacements with or without confirmation from the user.
a). Searching within a line: the f command searches a line and moves the cursor to the next occurrence of a specified character in the current line.
For example, the command fh would move the cursor to the next instance of the letter h within the current line. Note that neither the letter f nor the character you're searching for will appear anywhere on your screen; the cursor simply jumps to the specified character as soon as you type it.
For example, this is what I get after pressing f4 in command mode.
![Search String in Vi](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-string.png)
Search String in Vi
b). Searching an entire file: use the / command, followed by the word or phrase to be searched for. A search may be repeated in the same direction with the n command, or in the opposite direction with the N command. This is the result of typing /Jane in command mode.
![Vi Search String in File](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-line.png)
Vi Search String in File
c). vi uses a command (similar to sed's) to perform substitution operations over a range of lines or an entire file. To change the word “old” to “young” for the entire file, we must enter the following command.
:%s/old/young/g
**Notice**: The colon at the beginning of the command.
![Vi Search and Replace](http://www.tecmint.com/wp-content/uploads/2014/10/vi-search-and-replace.png)
Vi Search and Replace
The colon (:) starts the ex command, s in this case (for substitution), % is a shortcut meaning from the first line to the last line (the range can also be specified as n,m which means “from line n to line m”), old is the search pattern, while young is the replacement text, and g indicates that the substitution should be performed on every occurrence of the search string in the file.
Alternatively, a c can be added to the end of the command to ask for confirmation before performing any substitution.
:%s/old/young/gc
Before replacing the original text with the new one, vi/m will present us with the following message.
![Replace String in Vi](http://www.tecmint.com/wp-content/uploads/2014/10/vi-replace-old-with-young.png)
Replace String in Vi
- y: perform the substitution (yes)
- n: skip this occurrence and go to the next one (no)
- a: perform the substitution in this and all subsequent instances of the pattern.
- q or Esc: quit substituting.
- l (lowercase L): perform this substitution and quit (last).
- Ctrl-e, Ctrl-y: Scroll down and up, respectively, to view the context of the proposed substitution.
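Since the syntax mirrors sed's, the same global substitution can be previewed non-interactively from the shell (a sketch using a throwaway sample file):

```shell
# Create a hypothetical sample file for illustration.
printf 'the old dog\nold habits die hard\n' > sample.txt
# Same effect as vi's :%s/old/young/g, printed to standard output;
# adding -i would make sed edit the file in place instead.
sed 's/old/young/g' sample.txt
```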
#### Editing Multiple Files at a Time ####
Let's type vim file1 file2 file3 in our command prompt.
# vim file1 file2 file3
First, vim will open file1. To switch to the next file (file2), we need to use the :n command. When we want to return to the previous file, :N will do the job.
In order to switch from file1 to file3.
a). The :buffers command will show a list of the files currently being edited.
:buffers
![Edit Multiple Files](http://www.tecmint.com/wp-content/uploads/2014/10/vi-edit-multiple-files.png)
Edit Multiple Files
b). The command :buffer 3 (without the s at the end) will open file3 for editing.
In the image above, a pound sign (#) indicates that the file is currently open but in the background, while %a marks the file that is currently being edited. On the other hand, a blank space after the file number (3 in the above example) indicates that the file has not yet been opened.
#### Temporary vi buffers ####
To copy a couple of consecutive lines (let's say 4, for example) into a temporary buffer named a (not associated with a file) and place those lines in another part of the file later in the current vi session, we need to…
1. Press the ESC key to be sure we are in vi Command mode.
2. Place the cursor on the first line of the text we wish to copy.
3. Type “a4yy to copy the current line, along with the 3 subsequent lines, into a buffer named a. We can continue editing our file; we do not need to insert the copied lines immediately.
4. When we reach the location for the copied lines, use “a before the p or P commands to insert the lines copied into the buffer named a:
- Type “ap to insert the lines copied into buffer a after the current line on which the cursor is resting.
- Type “aP to insert the lines copied into buffer a before the current line.
If we wish, we can repeat the above steps to insert the contents of buffer a in multiple places in our file. A temporary buffer, like the one in this section, is discarded when the current window is closed.
### Summary ###
As we have seen, vi/m is a powerful and versatile text editor for the CLI. Feel free to share your own tricks and comments below.
#### Reference Links ####
- [About the LFCS][1]
- [Why get a Linux Foundation Certification?][2]
- [Register for the LFCS exam][3]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/vi-editor-usage/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:https://training.linuxfoundation.org/certification/LFCS
[2]:https://training.linuxfoundation.org/certification/why-certify-with-us
[3]:https://identity.linuxfoundation.org/user?destination=pid/1

Part 3 - LFCS: How to Archive/Compress Files & Directories, Setting File Attributes and Finding Files in Linux
================================================================================
Recently, the Linux Foundation started the LFCS (Linux Foundation Certified Sysadmin) certification, a brand new program whose purpose is allowing individuals from all corners of the globe to have access to an exam, which if approved, certifies that the person is knowledgeable in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-level troubleshooting and analysis, plus the ability to decide when to escalate issues to engineering teams.
![Linux Foundation Certified Sysadmin Part 3](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-3.png)
Linux Foundation Certified Sysadmin Part 3
Please watch the below video that gives the idea about The Linux Foundation Certification Program.
YouTube video
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
This post is Part 3 of a 10-tutorial series. In this part, we will cover how to archive/compress files and directories, set file attributes, and find files on the filesystem, all of which are required for the LFCS certification exam.
### Archiving and Compression Tools ###
A file archiving tool groups a set of files into a single standalone file that we can back up to several types of media, transfer across a network, or send via email. When an archiving utility is used together with a compression tool, it reduces the disk space needed to store the same files and information. The most frequently used archiving utility in Linux is tar.
#### The tar utility ####
tar bundles a group of files together into a single archive (commonly called a tar file or tarball). The name originally stood for tape archiver, but we must note that we can use this tool to archive data to any kind of writeable media (not only to tapes). Tar is normally used with a compression tool such as gzip, bzip2, or xz to produce a compressed tarball.
**Basic syntax:**
# tar [options] [pathname ...]
Where … represents the expression used to specify which files should be acted upon.
#### Most commonly used tar commands ####
注:表格
<table cellspacing="0" border="0">
<colgroup width="150">
</colgroup>
<colgroup width="109">
</colgroup>
<colgroup width="351">
</colgroup>
<tbody>
<tr>
<td bgcolor="#999999" height="18" align="CENTER" style="border: 1px solid #000001;"><b>Long option</b></td>
<td bgcolor="#999999" align="CENTER" style="border: 1px solid #000001;"><b>Abbreviation</b></td>
<td bgcolor="#999999" align="CENTER" style="border: 1px solid #000001;"><b>Description</b></td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000001;">&nbsp;&ndash;create</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;c</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Creates a tar archive</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000001;">&nbsp;&ndash;concatenate</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;A</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Appends tar files to an archive</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000001;">&nbsp;&ndash;append</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;r</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Appends files to the end of an archive</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000001;">&nbsp;&ndash;update</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;u</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Appends files newer than copy in archive</td>
</tr>
<tr class="alt">
<td height="20" align="LEFT" style="border: 1px solid #000001;">&nbsp;&ndash;diff or &ndash;compare</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;d</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Find differences between archive and file system</td>
</tr>
<tr>
<td height="18" align="LEFT" style="border: 1px solid #000001;">&nbsp;&ndash;file archive</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;f</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Use archive file or device ARCHIVE</td>
</tr>
<tr class="alt">
<td height="20" align="LEFT" style="border: 1px solid #000001;">&nbsp;&ndash;list</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;t</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Lists the contents of a tarball</td>
</tr>
<tr>
<td height="20" align="LEFT" style="border: 1px solid #000001;">&nbsp;&ndash;extract or &ndash;get</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;x</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Extracts files from an archive</td>
</tr>
</tbody>
</table>
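The create, list, and extract operations from the table above can be combined into a quick round trip (a sketch using throwaway file and directory names):

```shell
# Create a small directory tree to archive (hypothetical names).
mkdir -p demo/docs
echo "hello" > demo/docs/note.txt

tar -czf demo.tar.gz demo        # c: create, z: gzip-compress, f: archive file
tar -tzf demo.tar.gz             # t: list the tarball's contents
mkdir -p restore
tar -xzf demo.tar.gz -C restore  # x: extract, -C: change to directory first
```

The extracted copy lands under restore/demo, mirroring the paths stored in the archive.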
#### Normally used operation modifiers ####
注:表格
<table cellspacing="0" border="0">
<colgroup width="162">
</colgroup>
<colgroup width="109">
</colgroup>
<colgroup width="743">
</colgroup>
<tbody>
<tr class="alt">
<td bgcolor="#999999" height="18" align="CENTER" style="border: 1px solid #000001;"><b><span style="font-family: Droid Sans;">Long option</span></b></td>
<td bgcolor="#999999" align="CENTER" style="border: 1px solid #000001;"><b><span style="font-family: Droid Sans;">Abbreviation</span></b></td>
<td bgcolor="#999999" align="CENTER" style="border: 1px solid #000001;"><b><span style="font-family: Droid Sans;">Description</span></b></td>
</tr>
<tr>
<td height="20" align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;&ndash;directory dir</span></td>
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;C</span></td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Changes to directory dir before performing operations</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;&ndash;same-permissions</span></td>
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;p</span></td>
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Droid Sans;">&nbsp;Preserves original permissions</span></td>
</tr>
<tr>
<td height="38" align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;--verbose</span></td>
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;v</span></td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Lists all files read or extracted. When this flag is used along with --list, the file sizes, ownership, and time stamps are displayed.</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;--verify</span></td>
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;W</span></td>
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Droid Sans;">&nbsp;Verifies the archive after writing it</span></td>
</tr>
<tr>
<td height="20" align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;--exclude=pattern</span></td>
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;&mdash;</span></td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Excludes files matching pattern from the archive</td>
</tr>
<tr class="alt">
<td height="18" align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;--exclude-from=file</span></td>
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;X</span></td>
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Droid Sans;">&nbsp;Reads exclusion patterns, one per line, from file</span></td>
</tr>
<tr>
<td height="20" align="LEFT" style="border: 1px solid #000001;">&nbsp;--gzip or --gunzip</td>
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;z</span></td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Processes an archive through gzip</td>
</tr>
<tr class="alt">
<td height="20" align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;--bzip2</span></td>
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;j</span></td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Processes an archive through bzip2</td>
</tr>
<tr>
<td height="20" align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;--xz</span></td>
<td align="LEFT" style="border: 1px solid #000001;"><span style="font-family: Consolas;">&nbsp;J</span></td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Processes an archive through xz</td>
</tr>
</tbody>
</table>
Gzip is the oldest compression tool and provides the least compression, while bzip2 provides improved compression. In addition, xz is the newest and (usually) provides the best compression. These advantages come at a price: the better the compression, the longer the operation takes and the more system resources it uses.
Normally, tar files compressed with these utilities have .gz, .bz2, or .xz extensions, respectively. In the following examples we will be using these files: file1, file2, file3, file4, and file5.
**Grouping and compressing with gzip, bzip2 and xz**
Group all the files in the current working directory and compress the resulting bundle with gzip, bzip2, and xz (note the use of a shell glob pattern to specify which files should be included in the bundle; this prevents the archiving tool from including the tarballs created in previous steps).
# tar czf myfiles.tar.gz file[0-9]
# tar cjf myfiles.tar.bz2 file[0-9]
# tar cJf myfile.tar.xz file[0-9]
![Compress Multiple Files Using tar](http://www.tecmint.com/wp-content/uploads/2014/10/Compress-Multiple-Files.png)
Compress Multiple Files
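The trade-off above can be seen with a quick, disposable experiment. The sketch below is not from the article: the file name is hypothetical, and only gzip is used so it runs on minimal installs.

```shell
# Build a highly compressible sample file in a scratch directory, then
# compare a plain tarball against its gzip-compressed counterpart.
workdir=$(mktemp -d)
cd "$workdir"
yes "the quick brown fox jumps over the lazy dog" | head -n 20000 > file1
tar cf myfiles.tar file1
tar czf myfiles.tar.gz file1
plain=$(wc -c < myfiles.tar)
packed=$(wc -c < myfiles.tar.gz)
echo "uncompressed: $plain bytes, gzip: $packed bytes"
```

Repetitive text compresses extremely well, so real-world ratios will be lower; bzip2 and xz would typically shrink the archive further at the cost of more CPU time.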
**Listing the contents of a tarball and updating / appending files to the bundle**
List the contents of a tarball and display the same information as a long directory listing. Note that update or append operations cannot be applied to compressed files directly (if you need to update or append a file to a compressed tarball, you need to uncompress the tar file and update / append to it, then compress again).
# tar tvf [tarball]
![Check Files in tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/List-Archive-Content.png)
List Archive Content
Run any of the following commands:
# gzip -d myfiles.tar.gz [#1]
# bzip2 -d myfiles.tar.bz2 [#2]
# xz -d myfiles.tar.xz [#3]
Then
# tar --delete --file myfiles.tar file4 (deletes the file inside the tarball)
# tar --update --file myfiles.tar file4 (adds the updated file)
and
# gzip myfiles.tar [ if you chose #1 above ]
# bzip2 myfiles.tar [ if you chose #2 above ]
# xz myfiles.tar [ if you chose #3 above ]
Finally,
# tar tvf [tarball] #again
and compare the modification date and time of file4 with the same information as shown earlier.
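The whole decompress → delete → update → recompress cycle can be rehearsed safely in a temporary directory. A hedged sketch (hypothetical file contents, gzip variant only):

```shell
# Create a compressed tarball holding one file, modify the file on disk,
# then refresh the archived copy following the steps described above.
workdir=$(mktemp -d)
cd "$workdir"
printf 'v1\n' > file4
tar cf myfiles.tar file4
gzip myfiles.tar                      # now only myfiles.tar.gz exists
printf 'v2\n' > file4                 # the file on disk changes
gzip -d myfiles.tar.gz                # a compressed tarball cannot be updated in place
tar --delete --file myfiles.tar file4 # drop the stale member
tar --update --file myfiles.tar file4 # append the current version
gzip myfiles.tar                      # compress again
```

Extracting from the final archive now yields the updated contents of file4.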
**Excluding file types**
Suppose you want to perform a backup of users' home directories. A good sysadmin practice (which may also be mandated by company policies) is to exclude all video and audio files from backups.
Your first approach might be to exclude from the backup all files with an .mp3 or .mp4 extension (or other extensions). But if a clever user renames such files to a .txt or .bkp extension, that approach won't do you much good. To reliably detect an audio or video file, you need to check its file type with the file command. The following shell script will do the job.
#!/bin/bash
# Pass the directory to back up as the first argument.
DIR=$1
# Create the tarball and compress it, excluding any file whose type (as
# reported by the file command) contains the string "MPEG", regardless of
# its extension.
# grep -qi exits with status 0 (and prints nothing) when the pattern
# matches; in that case the filename is echoed into the exclusion list
# that tar reads through the -X option.
tar -X <(for i in "$DIR"/*; do if file "$i" | grep -qi mpeg; then echo "$i"; fi; done) -cjf backupfile.tar.bz2 "$DIR"/*
![Exclude Files in tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/Exclude-Files-in-Tar.png)
Exclude Files in tar
**Restoring backups with tar preserving permissions**
You can then restore the backup to the original users home directory (user_restore in this example), preserving permissions, with the following command.
# tar xjf backupfile.tar.bz2 --directory user_restore --same-permissions
![Restore Files from tar Archive](http://www.tecmint.com/wp-content/uploads/2014/10/Restore-tar-Backup-Files.png)
Restore Files from Archive
**Read Also:**
- [18 tar Command Examples in Linux][1]
- [Dtrx An Intelligent Archive Tool for Linux][2]
### Using find Command to Search for Files ###
The find command is used to search recursively through directory trees for files or directories that match certain characteristics, and can then either print the matching files or directories or perform other operations on the matches.
Normally, we will search by name, owner, group, type, permissions, date, and size.
#### Basic syntax: ####
# find [directory_to_search] [expression]
**Finding files recursively according to Size**
Find all regular files (-type f) in the current directory (.) and up to two subdirectory levels below (-maxdepth 3 includes the current working directory and two levels down) whose size (-size) is greater than 2 MB.
# find . -maxdepth 3 -type f -size +2M
![Find Files by Size in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Files-Based-on-Size.png)
Find Files Based on Size
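A reproducible way to try this without touching real data (the file names and sizes below are made up for the demo):

```shell
# Create one file under and one file over the 2 MB threshold, then
# let find report only the larger one.
workdir=$(mktemp -d)
dd if=/dev/zero of="$workdir/small.bin" bs=1024 count=10 2>/dev/null
dd if=/dev/zero of="$workdir/big.bin" bs=1048576 count=3 2>/dev/null
find "$workdir" -maxdepth 3 -type f -size +2M
```

Note that -size +2M means "strictly larger than 2 MiB", so a file of exactly 2 MiB would not match.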
**Finding and deleting files that match a certain criteria**
Files with 777 permissions are often considered an open door to external attackers. In any case, it is not safe to let anyone do anything with a file. We will take a rather aggressive approach and delete them! ('{}' + is used to “collect” the results of the search).
# find /home/user -perm 777 -exec rm '{}' +
![Find all 777 Permission Files](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Files-with-777-Permission.png)
Find Files with 777 Permission
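Before running such a destructive command against real home directories, it is worth rehearsing it in a scratch directory (the file names here are hypothetical):

```shell
# Create two files, mark one world-writable and executable (777),
# and verify that find + rm removes only that one.
workdir=$(mktemp -d)
touch "$workdir/open.sh" "$workdir/safe.txt"
chmod 777 "$workdir/open.sh"
chmod 644 "$workdir/safe.txt"
find "$workdir" -perm 777 -exec rm '{}' +
ls "$workdir"   # only safe.txt should remain
```

Because -perm 777 is an exact match, the 644 file is left untouched.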
**Finding files per atime or mtime**
Search for configuration files in /etc that have been accessed (-atime) or modified (-mtime) more (+180) or less (-180) than 6 months ago or exactly 6 months ago (180).
Modify the following command as needed to match the criteria above; as given, it finds .conf files in /etc that were modified less than 180 days ago:
# find /etc -iname "*.conf" -mtime -180 -print
![Find Files by Modification Time](http://www.tecmint.com/wp-content/uploads/2014/10/Find-Modified-Files.png)
Find Modified Files
- Read Also: [35 Practical Examples of Linux find Command][3]
### File Permissions and Basic Attributes ###
The first 10 characters in the output of ls -l are the file attributes. The first of these characters is used to indicate the file type:
- - : a regular file
- d : a directory
- l : a symbolic link
- c : a character device (which treats data as a stream of bytes, i.e. a terminal)
- b : a block device (which handles data in blocks, i.e. storage devices)
The next nine characters of the file attributes are called the file mode and represent the read (r), write (w), and execute (x) permissions of the file's owner, the file's group owner, and the rest of the users (commonly referred to as “the world”).
Whereas the read permission on a file allows it to be opened and read, the same permission on a directory allows its contents to be listed if the execute permission is also set. In addition, the execute permission on a file allows it to be handled as a program and run, while on a directory it allows the directory to be entered with cd.
File permissions are changed with the chmod command, whose basic syntax is as follows:
# chmod [new_mode] file
Where new_mode is either an octal number or an expression that specifies the new permissions.
The octal number can be converted from its binary equivalent, which is calculated from the desired file permissions for the owner, the group, and the world, as follows:
The presence of a certain permission equals a power of 2 (r=2²=4, w=2¹=2, x=2⁰=1), while its absence equates to 0. For example:
![Linux File Permissions](http://www.tecmint.com/wp-content/uploads/2014/10/File-Permissions.png)
File Permissions
To set the file's permissions as above in octal form, type:
# chmod 744 myfile
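The powers-of-two rule can be expressed as a tiny helper script. This is an illustrative sketch (the to_octal function is ours, not a standard tool), applying r=4, w=2, x=1 to each rwx triplet:

```shell
# Convert a 9-character mode string such as rwxr--r-- into its octal form.
to_octal() {
  rest=$1
  out=""
  for _ in 1 2 3; do
    triplet=$(printf '%s' "$rest" | cut -c1-3)   # next rwx triplet
    rest=$(printf '%s' "$rest" | cut -c4-)
    digit=0
    case $triplet in r??) digit=$((digit + 4));; esac
    case $triplet in ?w?) digit=$((digit + 2));; esac
    case $triplet in ??x) digit=$((digit + 1));; esac
    out="$out$digit"
  done
  echo "$out"
}
to_octal rwxr--r--   # prints 744
to_octal rw-r--r--   # prints 644
```

This mirrors what chmod does internally: each octal digit is just the sum of the permission bits present in its triplet.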
You can also set a file's mode using an expression that indicates the owner's rights with the letter u, the group owner's rights with the letter g, and the rest with o. All of these “individuals” can be represented at the same time with the letter a. Permissions are granted or revoked with the + or - signs, respectively.
**Revoking execute permission for a shell script to all users**
As we explained earlier, we can revoke a certain permission by prepending it with the minus sign and indicating whether it needs to be revoked for the owner, the group owner, or all users. The one-liner below can be interpreted as follows: change mode for all (a) users, revoke (-) execute permission (x).
# chmod a-x backup.sh
Granting read, write, and execute permissions for a file to the owner and group owner, and read permissions for the world.
When we use a 3-digit octal number to set permissions for a file, the first digit indicates the permissions for the owner, the second digit for the group owner and the third digit for everyone else:
- Owner: (r=4 + w=2 + x=1 = 7)
- Group owner: (r=4 + w=2 + x=1 = 7)
- World: (r=4 + w=0 + x=0 = 4)
# chmod 774 myfile
In time, and with practice, you will be able to decide which method of changing a file mode works best for you in each case. A long directory listing also shows the file's owner and its group owner (which serve as a rudimentary yet effective access control mechanism for files in a system):
![Linux File Listing](http://www.tecmint.com/wp-content/uploads/2014/10/Linux-File-Listing.png)
Linux File Listing
File ownership is changed with the chown command. The owner and the group owner can be changed at the same time or separately. Its basic syntax is as follows:
# chown user:group file
Where at least user or group need to be present.
**Few Examples**
Changing the owner of a file to a certain user.
# chown gacanepa sent
Changing the owner and group of a file to a specific user:group pair.
# chown gacanepa:gacanepa TestFile
Changing only the group owner of a file to a certain group. Note the colon before the group's name.
# chown :gacanepa email_body.txt
### Conclusion ###
As a sysadmin, you need to know how to create and restore backups, how to find files in your system and change their attributes, along with a few tricks that can make your life easier and will prevent you from running into future issues.
I hope that the tips provided in the present article will help you to achieve that goal. Feel free to add your own tips and ideas in the comments section for the benefit of the community. Thanks in advance!
Reference Links
- [About the LFCS][4]
- [Why get a Linux Foundation Certification?][5]
- [Register for the LFCS exam][6]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/compress-files-and-finding-files-in-linux/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/18-tar-command-examples-in-linux/
[2]:http://www.tecmint.com/dtrx-an-intelligent-archive-extraction-tar-zip-cpio-rpm-deb-rar-tool-for-linux/
[3]:http://www.tecmint.com/35-practical-examples-of-linux-find-command/
[4]:https://training.linuxfoundation.org/certification/LFCS
[5]:https://training.linuxfoundation.org/certification/why-certify-with-us
[6]:https://identity.linuxfoundation.org/user?destination=pid/1


@ -0,0 +1,191 @@
Part 4 - LFCS: Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition
================================================================================
Last August, the Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators to show, through a performance-based exam, that they can perform overall operational support of Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation if needed to other support teams.
![Linux Foundation Certified Sysadmin Part 4](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-4.png)
Linux Foundation Certified Sysadmin Part 4
Please be aware that Linux Foundation certifications are precise, totally performance-based, and available through an online portal anytime, anywhere. Thus, you no longer have to travel to an examination center to get the certifications you need to establish your skills and expertise.
Please watch the below video that explains The Linux Foundation Certification Program.
YouTube video
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
This post is Part 4 of a 10-tutorial series. In this part, we will cover partitioning storage devices, formatting filesystems, and configuring swap partitions, all of which are required for the LFCS certification exam.
### Partitioning Storage Devices ###
Partitioning is a means to divide a single hard drive into one or more parts or “slices” called partitions. A partition is a section on a drive that is treated as an independent disk and which contains a single type of file system, whereas a partition table is an index that relates those physical sections of the hard drive to partition identifications.
In Linux, the traditional tool for managing MBR partitions (up to ~2009) in IBM PC compatible systems is fdisk. For GPT partitions (~2010 and later) we will use gdisk. Each of these tools can be invoked by typing its name followed by a device name (such as /dev/sdb).
#### Managing MBR Partitions with fdisk ####
We will cover fdisk first.
# fdisk /dev/sdb
A prompt appears asking for the next operation. If you are unsure, you can press the m key to display the help contents.
![fdisk Help Menu](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-help.png)
fdisk Help Menu
In the above image, the most frequently used options are highlighted. At any moment, you can press p to display the current partition table.
![Check Partition Table in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Show-Partition-Table.png)
Show Partition Table
The Id column shows the partition type (or partition id) that has been assigned by fdisk to the partition. A partition type serves as an indicator of the file system the partition contains or, in simple words, the way data will be accessed in that partition.
Please note that a comprehensive study of each partition type is out of the scope of this tutorial as this series is focused on the LFCS exam, which is performance-based.
**Some of the options used by fdisk are as follows:**
You can list all the partition types that can be managed by fdisk by pressing the l option (lowercase l).
Press d to delete an existing partition. If more than one partition is found in the drive, you will be asked which one should be deleted.
Enter the corresponding number, and then press w (write modifications to partition table) to apply changes.
In the following example, we will delete /dev/sdb2, and then print (p) the partition table to verify the modifications.
![fdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/fdisk-options.png)
fdisk Command Options
Press n to create a new partition, then p to indicate it will be a primary partition. Finally, you can accept all the default values (in which case the partition will occupy all the available space), or specify a size as follows.
![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-New-Partition.png)
Create New Partition
If the partition Id that fdisk chose is not the right one for our setup, we can press t to change it.
![Change Partition Name in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Change-Partition-Name.png)
Change Partition Name
When you're done setting up the partitions, press w to commit the changes to disk.
![Save Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Partition-Changes.png)
Save Partition Changes
#### Managing GPT Partitions with gdisk ####
In the following example, we will use /dev/sdb.
# gdisk /dev/sdb
We must note that gdisk can be used either to create MBR or GPT partitions.
![Create GPT Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-GPT-Partitions.png)
Create GPT Partitions
The advantage of using GPT partitioning is that we can create up to 128 partitions in the same disk whose size can be up to the order of petabytes, whereas the maximum size for MBR partitions is 2 TB.
Note that most of the options in fdisk are the same in gdisk. For that reason, we will not go into detail about them, but here's a screenshot of the process.
![gdisk Command Options](http://www.tecmint.com/wp-content/uploads/2014/10/gdisk-options.png)
gdisk Command Options
### Formatting Filesystems ###
Once we have created all the necessary partitions, we must create filesystems. To find out which filesystems are supported on your system, run:
# ls /sbin/mk*
![Check Filesystems Type in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Filesystems.png)
Check Filesystems Type
The type of filesystem that you should choose depends on your requirements. You should consider each filesystem's pros and cons and its set of features. Two important attributes to look for in a filesystem are:
- Journaling support, which allows for faster data recovery in the event of a system crash.
- Security Enhanced Linux (SELinux) support, as per the project wiki, “a security enhancement to Linux which allows users and administrators more control over access control”.
In our next example, we will create an ext4 filesystem (supports both journaling and SELinux) labeled Tecmint on /dev/sdb1, using mkfs, whose basic syntax is.
# mkfs -t [filesystem] -L [label] device
or
# mkfs.[filesystem] -L [label] device
![Create ext4 Filesystems in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystems.png)
Create ext4 Filesystems
### Creating and Using Swap Partitions ###
Swap partitions are necessary if we need our Linux system to have access to virtual memory, which is a section of the hard disk designated for use as memory, when the main system memory (RAM) is all in use. For that reason, a swap partition may not be needed on systems with enough RAM to meet all its requirements; however, even in that case it's up to the system administrator to decide whether to use a swap partition or not.
A simple rule of thumb to decide the size of a swap partition is as follows.
Swap should usually equal 2x physical RAM for up to 2 GB of physical RAM, and then an additional 1x physical RAM for any amount above 2 GB, but never less than 32 MB.
So, if:
M = Amount of RAM in GB, and S = Amount of swap in GB, then
If M < 2
S = M * 2
Else
S = M + 2
Remember this is just a formula and that only you, as a sysadmin, have the final word as to the use and size of a swap partition.
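The rule of thumb translates directly into a few lines of shell. An illustrative sketch (swap_size is our own name, and the real sizing decision remains the sysadmin's call):

```shell
# Given RAM in whole GB, return the suggested swap size in GB:
# twice the RAM below 2 GB, RAM plus 2 GB otherwise.
swap_size() {
  ram=$1
  if [ "$ram" -lt 2 ]; then
    echo $((ram * 2))
  else
    echo $((ram + 2))
  fi
}
swap_size 1   # prints 2
swap_size 8   # prints 10
```

Note the integer arithmetic: for fractional amounts of RAM, round up before applying the rule.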
To configure a swap partition, create a regular partition as demonstrated earlier with the desired size. Next, we need to add the following entry to the /etc/fstab file (X can be either b or c).
/dev/sdX1 swap swap sw 0 0
Finally, lets format and enable the swap partition.
# mkswap /dev/sdX1
# swapon -v /dev/sdX1
To display a snapshot of the swap partition(s).
# cat /proc/swaps
To disable the swap partition.
# swapoff /dev/sdX1
For the next example, we'll use /dev/sdc1 (512 MB, for a system with 256 MB of RAM) to set up a partition with fdisk that we will use as swap, following the steps detailed above. Note that we will specify a fixed size in this case.
![Create-Swap-Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Swap-Partition.png)
Create Swap Partition
![Add Swap Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Swap-Partition.png)
Enable Swap Partition
### Conclusion ###
Creating partitions (including swap) and formatting filesystems are crucial in your road to Sysadminship. I hope that the tips given in this article will guide you to achieve your goals. Feel free to add your own tips & ideas in the comments section below, for the benefit of the community.
Reference Links
- [About the LFCS][1]
- [Why get a Linux Foundation Certification?][2]
- [Register for the LFCS exam][3]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/create-partitions-and-filesystems-in-linux/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:https://training.linuxfoundation.org/certification/LFCS
[2]:https://training.linuxfoundation.org/certification/why-certify-with-us
[3]:https://identity.linuxfoundation.org/user?destination=pid/1


@ -0,0 +1,232 @@
Part 5 - LFCS: How to Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux
================================================================================
The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is allowing individuals from all corners of the globe to get certified in basic to intermediate system administration tasks for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams.
![Linux Foundation Certified Sysadmin Part 5](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-5.png)
Linux Foundation Certified Sysadmin Part 5
The following video shows an introduction to The Linux Foundation Certification Program.
YouTube video
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
This post is Part 5 of a 10-tutorial series. In this part, we will explain how to mount/unmount local and network filesystems in Linux, which is required for the LFCS certification exam.
### Mounting Filesystems ###
Once a disk has been partitioned, Linux needs some way to access the data on the partitions. Unlike DOS or Windows (where this is done by assigning a drive letter to each partition), Linux uses a unified directory tree where each partition is mounted at a mount point in that tree.
A mount point is a directory that is used as a way to access the filesystem on the partition, and mounting the filesystem is the process of associating a certain filesystem (a partition, for example) with a specific directory in the directory tree.
In other words, the first step in managing a storage device is attaching the device to the file system tree. This task can be accomplished on a one-time basis by using tools such as mount (and then unmounted with umount) or persistently across reboots by editing the /etc/fstab file.
The mount command (without any options or arguments) shows the currently mounted filesystems.
# mount
![Check Mounted Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/check-mounted-filesystems.png)
Check Mounted Filesystem
In addition, mount is used to mount filesystems into the filesystem tree. Its standard syntax is as follows.
# mount -t type device dir -o options
This command instructs the kernel to mount the filesystem found on device (a partition, for example, that has been formatted with a filesystem type) at the directory dir, using all options. In this form, mount does not look in /etc/fstab for instructions.
If only a directory or device is specified, for example.
# mount /dir -o options
or
# mount device -o options
mount tries to find a mount point and, if it can't find any, then searches for a device (both cases in the /etc/fstab file), and finally attempts to complete the mount operation (which usually succeeds, except for the case when either the directory or the device is already being used, or when the user invoking mount is not root).
You will notice that every line in the output of mount has the following format.
device on directory type (options)
For example,
/dev/mapper/debian-home on /home type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)
Reads:
/dev/mapper/debian-home is mounted on /home, which has been formatted as ext4, with the following options: rw,relatime,user_xattr,barrier=1,data=ordered
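Because the fields are fixed ("device on dir type fstype (options)"), each line is easy to reshape with awk. A small convenience sketch, not from the article:

```shell
# Each line of mount's output has the fixed shape
#   device on dir type fstype (options)
# so awk can pick fields 1 (device), 3 (mount point), and 5 (fs type).
line="/dev/mapper/debian-home on /home type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)"
echo "$line" | awk '{ print $1, "->", $3, "[" $5 "]" }'
# prints: /dev/mapper/debian-home -> /home [ext4]
```

On a live system the same program can be applied to the real listing with `mount | awk '{ print $1, "->", $3, "[" $5 "]" }'`.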
**Mount Options**
Most frequently used mount options include.
- async: allows asynchronous I/O operations on the file system being mounted.
- auto: marks the file system as enabled to be mounted automatically using mount -a. It is the opposite of noauto.
- defaults: this option is an alias for async,auto,dev,exec,nouser,rw,suid. Note that multiple options must be separated by a comma without any spaces. If by accident you type a space between options, mount will interpret the subsequent text string as another argument.
- loop: Mounts an image (an .iso file, for example) as a loop device. This option can be used to simulate the presence of the disks contents in an optical media reader.
- noexec: prevents the execution of executable files on the particular filesystem. It is the opposite of exec.
- nouser: prevents any users (other than root) to mount and unmount the filesystem. It is the opposite of user.
- remount: mounts the filesystem again in case it is already mounted.
- ro: mounts the filesystem as read only.
- rw: mounts the file system with read and write capabilities.
- relatime: makes access time to files be updated only if atime is earlier than mtime.
- user_xattr: allows users to set and remove extended filesystem attributes.
**Mounting a device with ro and noexec options**
# mount -t ext4 /dev/sdg1 /mnt -o ro,noexec
In this case we can see that attempts to write a file to or to run a binary file located inside our mounting point fail with corresponding error messages.
# touch /mnt/myfile
# /mnt/bin/echo "Hi there"
![Mount Device in Read Write Mode](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device-Read-Write.png)
Mount Device Read Write
**Mounting a device with default options**
In the following scenario, we will try to write a file to our newly mounted device and run an executable file located within its filesystem tree using the same commands as in the previous example.
# mount -t ext4 /dev/sdg1 /mnt -o defaults
![Mount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Device.png)
Mount Device
In this last case, it works perfectly.
### Unmounting Devices ###
Unmounting a device (with the umount command) means finishing writing all the remaining “in transit” data so that it can be safely removed. Note that if you try to remove a mounted device without properly unmounting it first, you run the risk of damaging the device itself or causing data loss.
That being said, in order to unmount a device, you must be “standing outside” its block device descriptor or mount point. In other words, your current working directory must be somewhere other than the mount point. Otherwise, you will get a message saying that the device is busy.
![Unmount Device in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Unmount-Device.png)
Unmount Device
An easy way to “leave” the mount point is typing the cd command, which, in the absence of arguments, will take us to our current user's home directory, as shown above.
### Mounting Common Networked Filesystems ###
The two most frequently used network file systems are SMB (which stands for “Server Message Block”) and NFS (“Network File System”). Chances are you will use NFS if you need to set up a share for Unix-like clients only, and will opt for Samba if you need to share files with Windows-based clients and perhaps other Unix-like clients as well.
Read Also
- [Setup Samba Server in RHEL/CentOS and Fedora][1]
- [Setting up NFS (Network File System) on RHEL/CentOS/Fedora and Debian/Ubuntu][2]
The following steps assume that Samba and NFS shares have already been set up on the server with IP 192.168.0.10 (please note that setting up an NFS share is one of the competencies required for the LFCE exam, which we will cover after the present series).
#### Mounting a Samba share on Linux ####
Step 1: Install the samba-client, samba-common, and cifs-utils packages on Red Hat and Debian based distributions.
# yum update && yum install samba-client samba-common cifs-utils
# aptitude update && aptitude install samba-client samba-common cifs-utils
Then run the following command to look for available samba shares in the server.
# smbclient -L 192.168.0.10
And enter the password for the root account in the remote machine.
![Mount Samba Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Samba-Share.png)
Mount Samba Share
In the above image we have highlighted the share that is ready for mounting on our local system. You will need a valid samba username and password on the remote server in order to access it.
Step 2: When mounting a password-protected network share, it is not a good idea to write your credentials in the /etc/fstab file. Instead, you can store them in a hidden file somewhere with permissions set to 600, like so.
# mkdir /media/samba
# echo "username=samba_username" > /media/samba/.smbcredentials
# echo "password=samba_password" >> /media/samba/.smbcredentials
# chmod 600 /media/samba/.smbcredentials
Step 3: Then add the following line to /etc/fstab file.
//192.168.0.10/gacanepa /media/samba cifs credentials=/media/samba/.smbcredentials,defaults 0 0
Step 4: You can now mount your samba share, either manually (mount //192.168.0.10/gacanepa) or by rebooting your machine so as to apply the changes made in /etc/fstab permanently.
![Mount Password Protect Samba Share](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-Password-Protect-Samba-Share.png)
Mount Password Protect Samba Share
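The credentials-file workflow above can be rehearsed end to end without touching a real share. A minimal, runnable sketch — the share path //192.168.0.10/gacanepa, username, and password are the placeholders from the example, and the credentials file lives in a temporary directory instead of /media/samba:

```shell
#!/bin/sh
# Sketch: build a Samba credentials file and the matching fstab line.
workdir=$(mktemp -d)

# Store the credentials in a file only the owner can read.
printf 'username=%s\npassword=%s\n' "samba_username" "samba_password" \
    > "$workdir/.smbcredentials"
chmod 600 "$workdir/.smbcredentials"

# Compose the /etc/fstab entry that points at that file.
fstab_line="//192.168.0.10/gacanepa /media/samba cifs credentials=$workdir/.smbcredentials,defaults 0 0"
echo "$fstab_line"

# Verify the permissions are what we expect.
perms=$(stat -c '%a' "$workdir/.smbcredentials")
echo "credentials file mode: $perms"
```

The key point is the chmod 600: the password never appears in world-readable /etc/fstab, only the path to the protected file does.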
#### Mounting a NFS share on Linux ####
Step 1: Install the nfs-utils (Red Hat based) or nfs-common (Debian based) packages.
# yum update && yum install nfs-utils nfs-utils-lib
# aptitude update && aptitude install nfs-common
Step 2: Create a mounting point for the NFS share.
# mkdir /media/nfs
Step 3: Add the following line to /etc/fstab file.
192.168.0.10:/NFS-SHARE /media/nfs nfs defaults 0 0
Step 4: You can now mount your NFS share, either manually (mount 192.168.0.10:/NFS-SHARE) or by rebooting your machine so as to apply the changes made in /etc/fstab permanently.
![Mount NFS Share in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-NFS-Share.png)
Mount NFS Share
### Mounting Filesystems Permanently ###
As shown in the previous two examples, the /etc/fstab file controls how Linux provides access to disk partitions and removable media devices and consists of a series of lines that contain six fields each; the fields are separated by one or more spaces or tabs. A line that begins with a hash mark (#) is a comment and is ignored.
Each line has the following format.
<file system> <mount point> <type> <options> <dump> <pass>
Where:
- <file system>: The first column specifies the mount device. Most distributions now specify partitions by their labels or UUIDs. This practice can help reduce problems if partition numbers change.
- <mount point>: The second column specifies the mount point.
- <type>: The file system type code is the same as the type code used to mount a filesystem with the mount command. A file system type code of auto lets the kernel auto-detect the filesystem type, which can be a convenient option for removable media devices. Note that this option may not be available for all filesystems out there.
- <options>: One (or more) mount option(s).
- <dump>: You will most likely leave this set to 0; a 1 tells the dump utility to back up the filesystem (the dump program was once a common backup tool, but it is much less popular today).
- <pass>: This column specifies whether the integrity of the filesystem should be checked at boot time with fsck. A 0 means that fsck should not check the filesystem. The higher the number, the lower the priority. Thus, the root partition will most likely have a value of 1, while all others that should be checked should have a value of 2.
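The six-field layout can be made concrete by splitting a sample entry with awk. A small sketch, using the LABEL=TECMINT entry from the mount examples in this article:

```shell
#!/bin/sh
# Sketch: split a sample fstab entry into its six fields with awk.
line="LABEL=TECMINT /mnt ext4 rw,noexec 0 0"

fs=$(echo "$line"      | awk '{print $1}')  # <file system>
mp=$(echo "$line"      | awk '{print $2}')  # <mount point>
fstype=$(echo "$line"  | awk '{print $3}')  # <type>
options=$(echo "$line" | awk '{print $4}')  # <options>
dump=$(echo "$line"    | awk '{print $5}')  # <dump>
pass=$(echo "$line"    | awk '{print $6}')  # <pass>

echo "device=$fs mountpoint=$mp type=$fstype options=$options dump=$dump pass=$pass"
```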
**Mount Examples**
1. To mount a partition with label TECMINT at boot time with rw and noexec attributes, you should add the following line in /etc/fstab file.
LABEL=TECMINT /mnt ext4 rw,noexec 0 0
2. If you want the contents of a disk in your DVD drive to be available at boot time, add the following line.
/dev/sr0 /media/cdrom0 iso9660 ro,user,noauto 0 0
Where /dev/sr0 is your DVD drive.
### Summary ###
You can rest assured that mounting and unmounting local and network filesystems from the command line will be part of your day-to-day responsibilities as sysadmin. You will also need to master /etc/fstab. I hope that you have found this article useful to help you with those tasks. Feel free to add your comments (or ask questions) below and to share this article through your network social profiles.
Reference Links
- [About the LFCS][3]
- [Why get a Linux Foundation Certification?][4]
- [Register for the LFCS exam][5]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/mount-filesystem-in-linux/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/setup-samba-server-using-tdbsam-backend-on-rhel-centos-6-3-5-8-and-fedora-17-12/
[2]:http://www.tecmint.com/how-to-setup-nfs-server-in-linux/
[3]:https://training.linuxfoundation.org/certification/LFCS
[4]:https://training.linuxfoundation.org/certification/why-certify-with-us
[5]:https://identity.linuxfoundation.org/user?destination=pid/1


Part 6 - LFCS: Assembling Partitions as RAID Devices Creating & Managing System Backups
================================================================================
Recently, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification, an excellent opportunity for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of performing overall operational support on Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation, when required, to other support teams.
![Linux Foundation Certified Sysadmin Part 6](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-6.png)
Linux Foundation Certified Sysadmin Part 6
The following video provides an introduction to The Linux Foundation Certification Program.
YouTube video:
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
This post is Part 6 of a 10-tutorial series. In this part, we will explain how to assemble partitions as RAID devices and how to create and manage system backups, as required for the LFCS certification exam.
### Understanding RAID ###
The technology known as Redundant Array of Independent Disks (RAID) is a storage solution that combines multiple hard disks into a single logical unit to provide redundancy of data and/or improve performance in read / write operations to disk.
However, the actual fault-tolerance and disk I/O performance lean on how the hard disks are set up to form the disk array. Depending on the available devices and the fault tolerance / performance needs, different RAID levels are defined. You can refer to the RAID series here in Tecmint.com for a more detailed explanation on each RAID level.
- RAID Guide: [What is RAID, Concepts of RAID and RAID Levels Explained][1]
Our tool of choice for creating, assembling, managing, and monitoring our software RAIDs is called mdadm (short for multiple device admin).
---------------- Debian and Derivatives ----------------
# aptitude update && aptitude install mdadm
----------
---------------- Red Hat and CentOS based Systems ----------------
# yum update && yum install mdadm
----------
---------------- On openSUSE ----------------
# zypper refresh && zypper install mdadm
#### Assembling Partitions as RAID Devices ####
The process of assembling existing partitions as RAID devices consists of the following steps.
**1. Create the array using mdadm**
If one of the partitions has been formatted previously, or has been a part of another RAID array previously, you will be prompted to confirm the creation of the new array. Assuming you have taken the necessary precautions to avoid losing important data that may have resided in them, you can safely type y and press Enter.
# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
![Creating RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Creating-RAID-Array.png)
Creating RAID Array
**2. Check the array creation status**
After creating the RAID array, you can check its status using the following commands.
# cat /proc/mdstat
or
# mdadm --detail /dev/md0 [More detailed summary]
![Check RAID Array Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Array-Status.png)
Check RAID Array Status
**3. Format the RAID Device**
Format the device with a filesystem as per your needs / requirements, as explained in [Part 4][2] of this series.
**4. Monitor RAID Array Service**
Instruct the monitoring service to “keep an eye” on the array. Add the output of mdadm --detail --scan to /etc/mdadm/mdadm.conf (Debian and derivatives) or /etc/mdadm.conf (CentOS / openSUSE), like so.
# mdadm --detail --scan
![Monitor RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Monitor-RAID-Array.png)
Monitor RAID Array
# mdadm --assemble --scan [Assemble the array]
To ensure the service starts on system boot, run the following commands as root.
**Debian and Derivatives**
On Debian and derivatives, the service should start on boot by default; to make sure, run:
# update-rc.d mdadm defaults
Edit the /etc/default/mdadm file and add the following line.
AUTOSTART=true
**On CentOS and openSUSE (systemd-based)**
# systemctl start mdmonitor
# systemctl enable mdmonitor
**On CentOS and openSUSE (SysVinit-based)**
# service mdmonitor start
# chkconfig mdmonitor on
**5. Check RAID Disk Failure**
In RAID levels that support redundancy, replace failed drives when needed. When a device in the disk array becomes faulty, a rebuild automatically starts only if there was a spare device added when we first created the array.
![Check RAID Faulty Disk](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Faulty-Disk.png)
Check RAID Faulty Disk
Otherwise, we need to manually attach an extra physical drive to our system and run.
# mdadm /dev/md0 --add /dev/sdX1
Where /dev/md0 is the array that experienced the issue and /dev/sdX1 is the new device.
**6. Disassemble a working array**
You may have to do this if you need to create a new array using the devices (Optional Step).
# mdadm --stop /dev/md0 # Stop the array
# mdadm --remove /dev/md0 # Remove the RAID device
# mdadm --zero-superblock /dev/sdX1 # Overwrite the existing md superblock with zeroes
**7. Set up mail alerts**
You can configure a valid email address or system account to send alerts to; make sure you have this line in mdadm.conf (this step is optional).
MAILADDR root
In this case, all alerts that the RAID monitoring daemon collects will be sent to the local root account's mailbox. One such alert looks like the following.
**Note**: This event is related to the example in STEP 5, where a device was marked as faulty and the spare device was automatically built into the array by mdadm. Thus, we “ran out” of healthy spare devices and we got the alert.
![RAID Monitoring Alerts](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Monitoring-Alerts.png)
RAID Monitoring Alerts
#### Understanding RAID Levels ####
**RAID 0**
The total array size is n times the size of the smallest partition, where n is the number of independent disks in the array (you will need at least two drives). Run the following command to assemble a RAID 0 array using partitions /dev/sdb1 and /dev/sdc1.
# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1
Common uses: Setups that support real-time applications where performance is more important than fault-tolerance.
**RAID 1 (aka Mirroring)**
The total array size equals the size of the smallest partition (you will need at least two drives). Run the following command to assemble a RAID 1 array using partitions /dev/sdb1 and /dev/sdc1.
# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
Common uses: Installation of the operating system or important subdirectories, such as /home.
**RAID 5 (aka drives with Parity)**
The total array size will be (n-1) times the size of the smallest partition. The “lost” space in (n-1) is used for parity (redundancy) calculation (you will need at least three drives).
Note that you can specify a spare device (/dev/sde1 in this case) to replace a faulty part when an issue occurs. Run the following command to assemble a RAID 5 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 as spare.
# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1
Common uses: Web and file servers.
**RAID 6 (aka drives with double Parity)**
The total array size will be (n*s)-2*s, where n is the number of independent disks in the array and s is the size of the smallest disk. Note that you can specify a spare device (/dev/sdf1 in this case) to replace a faulty part when an issue occurs.
Run the following command to assemble a RAID 6 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1, and /dev/sdf1 as spare.
# mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1
Common uses: File and backup servers with large capacity and high availability requirements.
**RAID 1+0 (aka stripe of mirrors)**
The total array size is computed based on the formulas for RAID 0 and RAID 1, since RAID 1+0 is a combination of both. First, calculate the size of each mirror and then the size of the stripe.
Note that you can specify a spare device (/dev/sdf1 in this case) to replace a faulty part when an issue occurs. Run the following command to assemble a RAID 1+0 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1, and /dev/sdf1 as spare.
# mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 --spare-devices=1 /dev/sdf1
Common uses: Database and application servers that require fast I/O operations.
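The capacity formulas above can be checked with plain shell arithmetic. A sketch, assuming four member partitions of 2 GB each (n=4, s=2 — purely illustrative numbers):

```shell
#!/bin/sh
# Sketch: usable capacity (in GB) for each RAID level discussed above.
n=4   # number of member disks
s=2   # size of the smallest member, in GB

raid0=$((n * s))            # striping: n*s
raid1=$s                    # mirroring: size of the smallest member
raid5=$(( (n - 1) * s ))    # one member's worth of parity
raid6=$(( (n - 2) * s ))    # two members' worth of parity
raid10=$(( (n / 2) * s ))   # mirror pairs first, then stripe the mirrors

echo "RAID0=$raid0 RAID1=$raid1 RAID5=$raid5 RAID6=$raid6 RAID10=$raid10"
```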
#### Creating and Managing System Backups ####
It never hurts to remember that RAID with all its bounties IS NOT A REPLACEMENT FOR BACKUPS! Write it 1000 times on the chalkboard if you need to, but make sure you keep that idea in mind at all times. Before we begin, we must note that there is no one-size-fits-all solution for system backups, but here are some things that you do need to take into account while planning a backup strategy.
- What do you use your system for? (Desktop or server? If the latter case applies, what are the most critical services whose configuration would be a real pain to lose?)
- How often do you need to take backups of your system?
- What is the data (e.g. files / directories / database dumps) that you want to backup? You may also want to consider if you really need to backup huge files (such as audio or video files).
- Where (meaning physical place and media) will those backups be stored?
**Backing Up Your Data**
Method 1: Backup entire drives with the dd command. You can either back up an entire hard disk or a partition by creating an exact image at any point in time. Note that this works best when the device is offline, meaning it's not mounted and there are no processes accessing it for I/O operations.
The downside of this backup approach is that the image will have the same size as the disk or partition, even when the actual data occupies a small percentage of it. For example, if you want to image a partition of 20 GB that is only 10% full, the image file will still be 20 GB in size. In other words, it's not only the actual data that gets backed up, but the entire partition itself. You may consider using this method if you need exact backups of your devices.
**Creating an image file out of an existing device**
# dd if=/dev/sda of=/system_images/sda.img
OR
--------------------- Alternatively, you can compress the image file ---------------------
# dd if=/dev/sda | gzip -c > /system_images/sda.img.gz
**Restoring the backup from the image file**
# dd if=/system_images/sda.img of=/dev/sda
OR
--------------------- Depending on your choice while creating the image ---------------------
# gzip -dc /system_images/sda.img.gz | dd of=/dev/sda
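Since dd treats block devices and regular files alike, the image/compress/restore cycle can be rehearsed safely on a throwaway file, no root required. A sketch (all paths are temporary stand-ins for a real device):

```shell
#!/bin/sh
# Sketch: the same image/compress/restore cycle as above, exercised on a
# regular file instead of a block device.
workdir=$(mktemp -d)

# Create a small "device": 1 MB of zeros plus a recognizable marker.
dd if=/dev/zero of="$workdir/disk" bs=1024 count=1024 2>/dev/null
echo "important data" >> "$workdir/disk"

# Image and compress it, as with: dd if=/dev/sda | gzip -c > sda.img.gz
dd if="$workdir/disk" 2>/dev/null | gzip -c > "$workdir/disk.img.gz"

# Restore, as with: gzip -dc sda.img.gz | dd of=/dev/sda
gzip -dc "$workdir/disk.img.gz" | dd of="$workdir/restored" 2>/dev/null

# The restored copy must be byte-for-byte identical to the original.
orig=$(md5sum "$workdir/disk" | awk '{print $1}')
rest=$(md5sum "$workdir/restored" | awk '{print $1}')
echo "original=$orig restored=$rest"
```

Comparing checksums after a restore is also a good habit with real devices.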
Method 2: Backup certain files / directories with the tar command, already covered in [Part 3][3] of this series. You may consider using this method if you need to keep copies of specific files and directories (configuration files, users' home directories, and so on).
Method 3: Synchronize files with the rsync command. rsync is a versatile remote (and local) file-copying tool. If you need to back up and synchronize your files to/from network drives, rsync is the way to go.
Whether you're synchronizing two local directories or local <-> remote directories mounted on the local filesystem, the basic syntax is the same.
**Synchronizing two local directories (or a locally mounted remote directory)**
# rsync -av source_directory destination_directory
Where -a recurses into subdirectories (if they exist) and preserves symbolic links, timestamps, permissions, and original owner / group, and -v means verbose.
![rsync Synchronizing Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronizing-Files.png)
rsync Synchronizing Files
In addition, if you want to increase the security of the data transfer over the wire, you can use rsync over ssh.
**Synchronizing local → remote directories over ssh**
# rsync -avzhe ssh backups root@remote_host:/remote_directory/
This example will synchronize the backups directory on the local host with the contents of /root/remote_directory on the remote host.
Where the -h option shows file sizes in human-readable format, and the -e flag is used to indicate an ssh connection.
![rsync Synchronize Remote Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronize-Remote-Files.png)
rsync Synchronize Remote Files
**Synchronizing remote → local directories over ssh**
In this case, switch the source and destination directories from the previous example.
# rsync -avzhe ssh root@remote_host:/remote_directory/ backups
Please note that these are only 3 examples (the most frequent cases you're likely to run into) of the use of rsync. More examples and usages of the rsync command can be found in the following article.
- Read Also: [10 rsync Commands to Sync Files in Linux][4]
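The local-to-local case can be tried safely in a pair of temporary directories. A sketch (with a plain cp fallback for environments where rsync is not installed; all paths are throwaway):

```shell
#!/bin/sh
# Sketch: rsync -av between two local directories.
src=$(mktemp -d)
dst=$(mktemp -d)

mkdir -p "$src/docs"
echo "hello from rsync" > "$src/docs/note.txt"

if command -v rsync >/dev/null 2>&1; then
    # -a: archive mode (recurse, keep perms/times/links); -v: verbose
    rsync -av "$src/" "$dst/" >/dev/null
else
    # Fallback so the sketch still runs where rsync is absent.
    cp -a "$src/." "$dst/"
fi

cat "$dst/docs/note.txt"
```

Note the trailing slash on the source: with rsync, `src/` copies the directory's contents, while `src` would create a `src` subdirectory inside the destination.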
### Summary ###
As a sysadmin, you need to ensure that your systems perform as well as possible. If you're well prepared, and if the integrity of your data is well supported by a storage technology such as RAID and regular system backups, you'll be safe.
If you have questions, comments, or further ideas on how this article can be improved, feel free to speak out below. In addition, please consider sharing this series through your social network profiles.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
[2]:http://www.tecmint.com/create-partitions-and-filesystems-in-linux/
[3]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/
[4]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/


Part 7 - LFCS: Managing System Startup Process and Services (SysVinit, Systemd and Upstart)
================================================================================
A couple of months ago, the Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, an exciting new program whose aim is to allow individuals from all over the world to get certified in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-hand problem-finding and analysis, plus the ability to decide when to raise issues to engineering teams.
![Linux Foundation Certified Sysadmin Part 7](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-7.png)
Linux Foundation Certified Sysadmin Part 7
The following video provides a brief introduction to The Linux Foundation Certification Program.
YouTube video:
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
This post is Part 7 of a 10-tutorial series. In this part, we will explain how to manage the Linux system startup process and services, as required for the LFCS certification exam.
### Managing the Linux Startup Process ###
The boot process of a Linux system consists of several phases, each represented by a different component. The following diagram briefly summarizes the boot process and shows all the main components involved.
![Linux Boot Process](http://www.tecmint.com/wp-content/uploads/2014/10/Linux-Boot-Process.png)
Linux Boot Process
When you press the Power button on your machine, the firmware that is stored in an EEPROM chip on the motherboard initializes the POST (Power-On Self Test) to check on the state of the system's hardware resources. When the POST is finished, the firmware then searches for and loads the 1st stage boot loader, located in the MBR or in the EFI partition of the first available disk, and gives control to it.
#### MBR Method ####
The MBR is located in the first sector of the disk marked as bootable in the BIOS settings and is 512 bytes in size.
- First 446 bytes: The bootloader contains both executable code and error message text.
- Next 64 bytes: The Partition table contains a record for each of four partitions (primary or extended). Among other things, each record indicates the status (active / not active), size, and start / end sectors of each partition.
- Last 2 bytes: The magic number serves as a validation check of the MBR.
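The three regions can be carved out of a synthetic 512-byte file with dd, which also makes the magic-number check (bytes 0x55 0xAA at the end) concrete. A sketch (the file is a stand-in, not a real MBR):

```shell
#!/bin/sh
# Sketch: build a fake 512-byte "MBR" and split it into the three regions
# described above, then verify the boot signature.
workdir=$(mktemp -d)
mbr="$workdir/mbr.bin"

# 510 bytes of zeros followed by the boot signature 0x55 0xAA.
dd if=/dev/zero of="$mbr" bs=1 count=510 2>/dev/null
printf '\125\252' >> "$mbr"   # octal 125 = 0x55, 252 = 0xAA

# Split it the way the layout above describes.
dd if="$mbr" of="$workdir/bootcode"  bs=1 count=446          2>/dev/null
dd if="$mbr" of="$workdir/parttable" bs=1 count=64  skip=446 2>/dev/null
dd if="$mbr" of="$workdir/magic"     bs=1 count=2   skip=510 2>/dev/null

size=$(wc -c < "$mbr" | tr -d ' ')
magic=$(od -An -tx1 "$workdir/magic" | tr -d ' ')
echo "size=$size magic=$magic"
```

The same skip/count arithmetic is what the `dd ... bs=512 count=1` backup below relies on: one block covers all three regions at once.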
The following command performs a backup of the MBR (in this example, /dev/sda is the first hard disk). The resulting file, mbr.bkp, can come in handy should the partition table become corrupt, for example, rendering the system unbootable.
Of course, in order to use it later if the need arises, we will need to save it and store it somewhere else (like a USB drive, for example). That file will help us restore the MBR and will get us going once again if and only if we do not change the hard drive layout in the meantime.
**Backup MBR**
# dd if=/dev/sda of=mbr.bkp bs=512 count=1
![Backup MBR in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Backup-MBR-in-Linux.png)
Backup MBR in Linux
**Restoring MBR**
# dd if=mbr.bkp of=/dev/sda bs=512 count=1
![Restore MBR in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Restore-MBR-in-Linux.png)
Restore MBR in Linux
#### EFI/UEFI Method ####
For systems using the EFI/UEFI method, the UEFI firmware reads its settings to determine which UEFI application is to be launched and from where (i.e., in which disk and partition the EFI partition is located).
Next, the 2nd stage boot loader (aka boot manager) is loaded and run. GRUB (GRand Unified Bootloader) is the most frequently used boot manager in Linux. One of two distinct versions can be found on most systems used today.
- GRUB legacy configuration file: /boot/grub/menu.lst (older distributions, not supported by EFI/UEFI firmware).
- GRUB2 configuration file: most likely, /etc/default/grub.
Although the objectives of the LFCS exam do not explicitly request knowledge about GRUB internals, if you're brave and can afford to mess up your system (you may want to try it first on a virtual machine, just in case), you can run the following command as root after modifying GRUB's configuration, in order to apply the changes.
# update-grub
Basically, GRUB loads the default kernel and the initrd or initramfs image. In a few words, initrd or initramfs help to perform the hardware detection, the kernel module loading and the device discovery necessary to get the real root filesystem mounted.
Once the real root filesystem is up, the kernel executes the system and service manager (init or systemd, whose process identification or PID is always 1) to begin the normal user-space boot process in order to present a user interface.
Both init and systemd are daemons (background processes) that manage other daemons, as the first service to start (during boot) and the last service to terminate (during shutdown).
![Systemd and Init](http://www.tecmint.com/wp-content/uploads/2014/10/systemd-and-init.png)
Systemd and Init
### Starting Services (SysVinit) ###
The concept of runlevels in Linux specifies different ways to use a system by controlling which services are running. In other words, a runlevel controls what tasks can be accomplished in the current execution state = runlevel (and which ones cannot).
Traditionally, this startup process was performed based on conventions that originated with System V UNIX, with the system executing collections of scripts that start and stop services as the machine entered a specific runlevel (which, in other words, is a different mode of running the system).
Within each runlevel, individual services can be set to run, or to be shut down if running. Latest versions of some major distributions are moving away from the System V standard in favour of a rather new service and system manager called systemd (which stands for system daemon), but usually support sysv commands for compatibility purposes. This means that you can run most of the well-known sysv init tools in a systemd-based distribution.
- Read Also: [Why systemd replaces init in Linux][1]
Besides starting the system process, init looks to the /etc/inittab file to decide what runlevel must be entered.
<table cellspacing="0" border="0">
<colgroup width="85">
</colgroup>
<colgroup width="1514">
</colgroup>
<tbody>
<tr>
<td align="CENTER" height="18" style="border: 1px solid #000001;"><b>Runlevel</b></td>
<td align="LEFT" style="border: 1px solid #000001;"><b> Description</b></td>
</tr>
<tr class="alt">
<td align="CENTER" height="18" style="border: 1px solid #000001;">0</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Halt the system. Runlevel 0 is a special transitional state used to shutdown the system quickly.</td>
</tr>
<tr>
<td align="CENTER" height="20" style="border: 1px solid #000001;">1</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Also aliased to s, or S, this runlevel is sometimes called maintenance mode. What services, if any, are started at this runlevel varies by distribution. Its typically used for low-level system maintenance that may be impaired by normal system operation.</td>
</tr>
<tr class="alt">
<td align="CENTER" height="18" style="border: 1px solid #000001;">2</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Multiuser. On Debian systems and derivatives, this is the default runlevel, and includes -if available- a graphical login. On Red-Hat based systems, this is multiuser mode without networking.</td>
</tr>
<tr>
<td align="CENTER" height="18" style="border: 1px solid #000001;">3</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;On Red-Hat based systems, this is the default multiuser mode, which runs everything except the graphical environment. This runlevel and levels 4 and 5 usually are not used on Debian-based systems.</td>
</tr>
<tr class="alt">
<td align="CENTER" height="18" style="border: 1px solid #000001;">4</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Typically unused by default and therefore available for customization.</td>
</tr>
<tr>
<td align="CENTER" height="18" style="border: 1px solid #000001;">5</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;On Red-Hat based systems, full multiuser mode with GUI login. This runlevel is like level 3, but with a GUI login available.</td>
</tr>
<tr class="alt">
<td align="CENTER" height="18" style="border: 1px solid #000001;">6</td>
<td align="LEFT" style="border: 1px solid #000001;">&nbsp;Reboot the system.</td>
</tr>
</tbody>
</table>
To switch between runlevels, we can simply issue a runlevel change using the init command: init N (where N is one of the runlevels listed above). Please note that this is not the recommended way of taking a running system to a different runlevel because it gives no warning to existing logged-in users (thus causing them to lose work and processes to terminate abnormally).
Instead, the shutdown command should be used to restart the system (which first sends a warning message to all logged-in users and blocks any further logins; it then signals init to switch runlevels); however, the default runlevel (the one the system will boot to) must be edited in the /etc/inittab file first.
For that reason, follow these steps to properly switch between runlevels. As root, look for the following line in /etc/inittab.
id:2:initdefault:
and change the number 2 to the desired runlevel with your preferred text editor, such as vim (described in [How to use vi/vim editor in Linux Part 2][2] of this series).
Next, run as root.
# shutdown -r now
That last command will restart the system, causing it to start in the specified runlevel during next boot, and will run the scripts located in the /etc/rc[runlevel].d directory in order to decide which services should be started and which ones should not. For example, for runlevel 2 in the following system.
![Change Runlevels in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Change-Runlevels-in-Linux.jpeg)
Change Runlevels in Linux
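The initdefault edit described above can be rehearsed on a throwaway copy of the file. A sketch (the inittab content is a minimal stand-in, and GNU sed is assumed for `sed -i`):

```shell
#!/bin/sh
# Sketch: read and change the default runlevel in a copy of /etc/inittab.
workdir=$(mktemp -d)
inittab="$workdir/inittab"

printf '%s\n' '# sample inittab' 'id:2:initdefault:' > "$inittab"

# Extract the current default runlevel (the field between the colons).
current=$(awk -F: '/initdefault/ {print $2}' "$inittab")
echo "current default runlevel: $current"

# Switch the default to runlevel 3.
sed -i 's/^id:[0-6]:initdefault:/id:3:initdefault:/' "$inittab"
updated=$(awk -F: '/initdefault/ {print $2}' "$inittab")
echo "new default runlevel: $updated"
```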
#### Manage Services using chkconfig ####
To enable or disable system services on boot, we will use [chkconfig command][3] in CentOS / openSUSE and sysv-rc-conf in Debian and derivatives. This tool can also show us what is the preconfigured state of a service for a particular runlevel.
- Read Also: [How to Stop and Disable Unwanted Services in Linux][4]
Listing the runlevel configuration for a service.
# chkconfig --list [service name]
# chkconfig --list postfix
# chkconfig --list mysqld
![Listing Runlevel Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Listing-Runlevel-Configuration.png)
Listing Runlevel Configuration
In the above image we can see that postfix is set to start when the system enters runlevels 2 through 5, whereas mysqld will be running by default for runlevels 2 through 4. Now suppose that this is not the expected behaviour.
For example, we need to turn on mysqld for runlevel 5 as well, and turn off postfix for runlevels 4 and 5. Here's what we would do in each case (run the following commands as root).
**Enabling a service for a particular runlevel**
# chkconfig --level [level(s)] service on
# chkconfig --level 5 mysqld on
**Disabling a service for particular runlevels**
# chkconfig --level [level(s)] service off
# chkconfig --level 45 postfix off
![Enable Disable Services in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Disable-Services.png)
Enable Disable Services
We will now perform similar tasks in a Debian-based system using sysv-rc-conf.
#### Manage Services using sysv-rc-conf ####
Configuring a service to start automatically on a specific runlevel and prevent it from starting on all others.
1. Let's use the following command to see the runlevels where mdadm is configured to start.
# ls -l /etc/rc[0-6].d | grep -E 'rc[0-6]|mdadm'
![Check Runlevel of Service Running](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Service-Runlevel.png)
Check Runlevel of Service Running
2. We will use sysv-rc-conf to prevent mdadm from starting on all runlevels except 2. Just check or uncheck (with the space bar) as desired (you can move up, down, left, and right with the arrow keys).
# sysv-rc-conf
![SysV Runlevel Config](http://www.tecmint.com/wp-content/uploads/2014/10/SysV-Runlevel-Config.png)
SysV Runlevel Config
Then press q to quit.
3. We will restart the system and run again the command from STEP 1.
# ls -l /etc/rc[0-6].d | grep -E 'rc[0-6]|mdadm'
![Verify Service Runlevel](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Service-Runlevel.png)
Verify Service Runlevel
In the above image we can see that mdadm is configured to start only on runlevel 2.
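Under the hood, chkconfig and sysv-rc-conf manage S (start) and K (kill) symlinks in the /etc/rc[runlevel].d directories. A sketch that reproduces the convention in a temporary tree (the priority numbers 25/75 and the paths are illustrative):

```shell
#!/bin/sh
# Sketch: the S/K symlink convention behind SysV service configuration,
# built under a temporary directory instead of the real /etc.
root=$(mktemp -d)
mkdir -p "$root/init.d" "$root/rc2.d" "$root/rc3.d"
touch "$root/init.d/mdadm"

# "Start on runlevel 2": an S link; "stop on runlevel 3": a K link.
ln -s ../init.d/mdadm "$root/rc2.d/S25mdadm"
ln -s ../init.d/mdadm "$root/rc3.d/K75mdadm"

starts_on_2=$(ls "$root/rc2.d" | grep -c '^S.*mdadm')
kills_on_3=$(ls "$root/rc3.d" | grep -c '^K.*mdadm')
echo "starts_on_2=$starts_on_2 kills_on_3=$kills_on_3"
```

This is why `ls -l /etc/rc[0-6].d | grep mdadm`, as used in the steps above, is enough to read a service's per-runlevel configuration.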
### What About systemd? ###
systemd is another service and system manager that is being adopted by several major Linux distributions. It aims to allow more processing to be done in parallel during system startup (unlike sysvinit, which always tends to be slower because it starts processes one at a time, checks whether one depends on another, and waits for daemons to launch so more services can start), and to serve as a dynamic resource manager for a running system.
Thus, services are started when needed (to avoid consuming system resources) instead of being launched without a solid reason during boot.
To view the status of all the processes running on your system, both systemd native and SysV services, run the following command.
# systemctl
![Check All Running Processes in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-All-Running-Processes.png)
Check All Running Processes
The LOAD column shows whether the unit definition (refer to the UNIT column, which shows the service or anything maintained by systemd) was properly loaded, while the ACTIVE and SUB columns show the current status of such unit.
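As a quick sketch of how those columns can be put to use, the snippet below filters a captured sample of `systemctl` output (the unit names and values are illustrative, not from a real host) and prints every unit whose ACTIVE column is not `active`; on a live system you would pipe `systemctl` itself into the same awk filter:

```shell
# Print units whose ACTIVE column (field 3) differs from "active".
# The sample below stands in for real `systemctl` output.
sample='UNIT                LOAD   ACTIVE SUB     DESCRIPTION
crond.service       loaded active running Command Scheduler
media-samba.mount   loaded failed failed  Media Samba Share'
failed_units=$(echo "$sample" | awk 'NR > 1 && $3 != "active" { print $1, $3 }')
echo "$failed_units"
# prints: media-samba.mount failed
```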
**Displaying information about the current status of a service**
When the ACTIVE column indicates that a unit's status is other than active, we can check what happened using.
# systemctl status [unit]
For example, in the image above, media-samba.mount is in failed state. Let's run.
# systemctl status media-samba.mount
![Check Linux Service Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Service-Status.png)
Check Service Status
We can see that media-samba.mount failed because the mount process on host dev1 was unable to find the network share at //192.168.0.10/gacanepa.
### Starting or Stopping Services ###
Once the network share //192.168.0.10/gacanepa becomes available, let's try to start, then stop, and finally restart the unit media-samba.mount. After performing each action, let's run systemctl status media-samba.mount to check on its status.
# systemctl start media-samba.mount
# systemctl status media-samba.mount
# systemctl stop media-samba.mount
# systemctl restart media-samba.mount
# systemctl status media-samba.mount
![Starting Stoping Services](http://www.tecmint.com/wp-content/uploads/2014/10/Starting-Stoping-Service.jpeg)
Starting and Stopping Services
**Enabling or disabling a service to start during boot**
Under systemd you can enable or disable a service so that it will (or will not) start at boot.
# systemctl enable [service] # enable a service
# systemctl disable [service] # prevent a service from starting at boot
The process of enabling or disabling a service to start automatically on boot consists in adding or removing symbolic links in the /etc/systemd/system/multi-user.target.wants directory.
![Enabling Disabling Services](http://www.tecmint.com/wp-content/uploads/2014/10/Enabling-Disabling-Services.jpeg)
Enabling Disabling Services
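The symlink mechanism can be sketched with plain `ln` on a throwaway directory tree (a mock created with mktemp, not the real /etc/systemd; the unit name demo.service is hypothetical), since creating and removing that link is essentially what enable and disable do behind the scenes:

```shell
# Mock tree standing in for /usr/lib/systemd/system and
# /etc/systemd/system/multi-user.target.wants
root=$(mktemp -d)
mkdir -p "$root/usr/lib/systemd/system" \
         "$root/etc/systemd/system/multi-user.target.wants"
touch "$root/usr/lib/systemd/system/demo.service"

# "systemctl enable demo" boils down to creating this symlink:
enabled_link="$root/etc/systemd/system/multi-user.target.wants/demo.service"
ln -s "$root/usr/lib/systemd/system/demo.service" "$enabled_link"
readlink "$enabled_link"   # shows the unit file the link points to

# "systemctl disable demo" would simply remove it again:
# rm "$enabled_link"
```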
Alternatively, you can find out a service's current status (enabled or disabled) with the command.
# systemctl is-enabled [service]
For example,
# systemctl is-enabled postfix.service
In addition, you can reboot or shut down the system with.
# systemctl reboot
# systemctl poweroff
### Upstart ###
Upstart is an event-based replacement for the /sbin/init daemon. It was born out of the need to start services only when they are needed (and to supervise them while they are running), and to handle events as they occur, thus surpassing the classic, dependency-based sysvinit system.
It was originally developed for the Ubuntu distribution, but is used in Red Hat Enterprise Linux 6.0. Though it was intended to be suitable for deployment in all Linux distributions as a replacement for sysvinit, in time it was overshadowed by systemd. On February 14, 2014, Mark Shuttleworth (founder of Canonical Ltd.) announced that future releases of Ubuntu would use systemd as the default init daemon.
Because SysV startup scripts have been so common for so long, a large number of software packages include them. To accommodate such packages, Upstart provides a compatibility mode: it runs SysV startup scripts in the usual locations (/etc/rc.d/rc?.d, /etc/init.d/rc?.d, /etc/rc?.d, or a similar location). Thus, if we install a package that doesn't yet include an Upstart configuration script, it should still launch in the usual way.
Furthermore, if we have installed utilities such as [chkconfig][5], we should be able to use them to manage our SysV-based services just as we would on sysvinit-based systems.
Upstart scripts also support starting or stopping services based on a wider variety of actions than do SysV startup scripts; for example, Upstart can launch a service whenever a particular hardware device is attached.
A system that uses Upstart and its native scripts exclusively replaces the /etc/inittab file and the runlevel-specific SysV startup script directories with .conf scripts in the /etc/init directory.
These *.conf scripts (also known as job definitions) generally consist of the following:
- Description of the process.
- Runlevels where the process should run or events that should trigger it.
- Runlevels where the process should be stopped or events that should stop it.
- Options.
- Command to launch the process.
For example,
# My test service - Upstart script demo
description "Here goes the description of 'My test service'"
author "Dave Null <dave.null@example.com>"
# Stanzas
#
# Stanzas define when and how a process is started and stopped
# See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn
# When to start the service
start on runlevel [2345]
# When to stop the service
stop on runlevel [016]
# Automatically restart process in case of crash
respawn
# Specify working directory
chdir /home/dave/myfiles
# Specify the process/command (add arguments if needed) to run
exec bash backup.sh arg1 arg2
To apply the changes, you will need to tell Upstart to reload its configuration.
# initctl reload-configuration
Then start your job by typing the following command.
$ sudo start yourjobname
Where yourjobname is the name of the job that was added earlier with the yourjobname.conf script.
A more complete and detailed reference guide for Upstart is available on the project's web site under the menu “[Cookbook][6]”.
### Summary ###
Knowledge of the Linux boot process is necessary to help you with troubleshooting tasks, as well as with adapting the computer's performance and running services to your needs.
In this article we have analyzed what happens from the moment you press the power switch to turn on the machine until you get a fully operational user interface. I hope you have learned as much from reading it as I did from putting it together. Feel free to leave your comments or questions below. We always look forward to hearing from our readers!
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-boot-process-and-manage-services/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/systemd-replaces-init-in-linux/
[2]:http://www.tecmint.com/vi-editor-usage/
[3]:http://www.tecmint.com/chkconfig-command-examples/
[4]:http://www.tecmint.com/remove-unwanted-services-from-linux/
[5]:http://www.tecmint.com/chkconfig-command-examples/
[6]:http://upstart.ubuntu.com/cookbook/


@ -0,0 +1,330 @@
Part 8 - LFCS: Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts
================================================================================
Last August, the Linux Foundation started the LFCS (Linux Foundation Certified Sysadmin) certification, a brand new program whose purpose is to allow individuals everywhere to take an exam in order to get certified in basic to intermediate operational support for Linux systems. This includes supporting running systems and services, along with overall monitoring and analysis, plus the intelligent decision-making needed to determine when it's necessary to escalate issues to higher-level support teams.
![Linux Users and Groups Management](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-8.png)
Linux Foundation Certified Sysadmin Part 8
Please have a quick look at the following video that describes an introduction to the Linux Foundation Certification Program.
YouTube video:
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
This article is Part 8 of a 10-tutorial series. In this section, we will guide you through managing users, groups, and permissions on a Linux system, as required for the LFCS certification exam.
Since Linux is a multi-user operating system (in that it allows multiple users on different computers or terminals to access a single system), you will need to know how to perform effective user management: how to add, edit, suspend, or delete user accounts, along with granting them the necessary permissions to do their assigned tasks.
### Adding User Accounts ###
To add a new user account, you can run either of the following two commands as root.
# adduser [new_account]
# useradd [new_account]
When a new user account is added to the system, the following operations are performed.
1. His/her home directory is created (/home/username by default).
2. The following hidden files are copied into the user's home directory, and will be used to provide environment variables for his/her user session.
.bash_logout
.bash_profile
.bashrc
3. A mail spool is created for the user at /var/spool/mail/username.
4. A group is created and given the same name as the new user account.
**Understanding /etc/passwd**
The full account information is stored in the /etc/passwd file. This file contains a record per system user account and has the following format (fields are delimited by a colon).
[username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell]
- Fields [username] and [Comment] are self explanatory.
- The x in the second field indicates that the account is protected by a shadowed password (in /etc/shadow), which is needed to log on as [username].
- The [UID] and [GID] fields are integers that represent the User IDentification and the primary Group IDentification to which [username] belongs, respectively.
- The [Home directory] indicates the absolute path to [username]'s home directory, and
- The [Default shell] is the shell that will be made available to this user when he or she logs in to the system.
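To make the field layout concrete, here is a small sketch that splits an illustrative record (the user and paths are made up, not taken from a real system) on the colon delimiter:

```shell
# Seven colon-delimited fields of an illustrative /etc/passwd record
record='tecmint:x:1000:1000:Tecmint User:/home/tecmint:/bin/bash'
parsed=$(echo "$record" | awk -F: '{ printf "user=%s uid=%s home=%s shell=%s", $1, $3, $6, $7 }')
echo "$parsed"
# prints: user=tecmint uid=1000 home=/home/tecmint shell=/bin/bash
```

On a real system the same awk program can be fed /etc/passwd directly.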
**Understanding /etc/group**
Group information is stored in the /etc/group file. Each record has the following format.
[Group name]:[Group password]:[GID]:[Group members]
- [Group name] is the name of group.
- An x in [Group password] indicates group passwords are not being used.
- [GID]: same as in /etc/passwd.
- [Group members]: a comma separated list of users who are members of [Group name].
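Likewise, the member list of a group record is just a comma-separated fourth field, so it can be split the same way (the group and user names below are illustrative):

```shell
# An illustrative /etc/group record with three members
record='common_group:x:1001:user1,user2,user3'
summary=$(echo "$record" | awk -F: '{ n = split($4, m, ","); printf "%s gid=%s members=%d", $1, $3, n }')
echo "$summary"
# prints: common_group gid=1001 members=3
```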
![Add User Accounts in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-user-accounts.png)
Add User Accounts
After adding an account, you can edit the following information (to name a few fields) using the usermod command, whose basic syntax is as follows.
# usermod [options] [username]
**Setting the expiry date for an account**
Use the --expiredate flag followed by a date in YYYY-MM-DD format.
# usermod --expiredate 2014-10-30 tecmint
**Adding the user to supplementary groups**
Use the combined -aG option, or --append --groups, followed by a comma-separated list of groups.
# usermod --append --groups root,users tecmint
**Changing the default location of the users home directory**
Use the -d, or --home option, followed by the absolute path to the new home directory.
# usermod --home /tmp tecmint
**Changing the shell the user will use by default**
Use --shell, followed by the path to the new shell.
# usermod --shell /bin/sh tecmint
**Displaying the groups a user is a member of**
# groups tecmint
# id tecmint
Now let's execute all the above commands in one go.
# usermod --expiredate 2014-10-30 --append --groups root,users --home /tmp --shell /bin/sh tecmint
![usermod Command Examples](http://www.tecmint.com/wp-content/uploads/2014/10/usermod-command-examples.png)
usermod Command Examples
Read Also:
- [15 useradd Command Examples in Linux][1]
- [15 usermod Command Examples in Linux][2]
For existing accounts, we can also do the following.
**Disabling account by locking password**
Use the -L (uppercase L) or the --lock option to lock a user's password.
# usermod --lock tecmint
**Unlocking user password**
Use the -U (uppercase U) or the --unlock option to unlock a user's password that was previously locked.
# usermod --unlock tecmint
![Lock User in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/lock-user-in-linux.png)
Lock User Accounts
**Creating a new group for read and write access to files that need to be accessed by several users**
Run the following series of commands to achieve the goal.
# groupadd common_group # Add a new group
# chown :common_group common.txt # Change the group owner of common.txt to common_group
# usermod -aG common_group user1 # Add user1 to common_group
# usermod -aG common_group user2 # Add user2 to common_group
# usermod -aG common_group user3 # Add user3 to common_group
**Deleting a group**
You can delete a group with the following command.
# groupdel [group_name]
If there are files owned by group_name, they will not be deleted; they will simply be left with the GID of the deleted group as their group owner.
### Linux File Permissions ###
Besides the basic read, write, and execute permissions that we discussed in [Setting File Attributes Part 3][3] of this series, there are other less used (but not less important) permission settings, sometimes referred to as “special permissions”.
Like the basic permissions discussed earlier, they are set using an octal value or through a letter (symbolic notation) that indicates the type of permission.
**Deleting user accounts**
You can delete an account (along with its home directory, if it's owned by the user, and all the files residing therein, and also the mail spool) using the userdel command with the --remove option.
# userdel --remove [username]
#### Group Management ####
Every time a new user account is added to the system, a group with the same name is created with the username as its only member. Other users can be added to the group later. One of the purposes of groups is to implement a simple access control to files and other system resources by setting the right permissions on those resources.
For example, suppose you have the following users.
- user1 (primary group: user1)
- user2 (primary group: user2)
- user3 (primary group: user3)
All of them need read and write access to a file called common.txt located somewhere on your local system, or maybe on a network share that user1 has created. You may be tempted to do something like,
# chmod 660 common.txt
OR
# chmod u=rw,g=rw,o= common.txt [notice the space between the last equal sign and the file name]
However, this will only provide read and write access to the owner of the file and to those users who are members of the file's group owner (user1 in this case). Again, you may be tempted to add user2 and user3 to group user1, but that will also give them access to the rest of the files owned by user user1 and group user1.
This is where groups come in handy, and here's what you should do in a case like this.
**Understanding Setuid**
When the setuid permission is applied to an executable file, a user running the program inherits the effective privileges of the program's owner. Since this approach can reasonably raise security concerns, the number of files with setuid permission must be kept to a minimum. You will likely find programs with this permission set when a system user needs to access a file owned by root.
Summing up, it isn't just that the user can execute the binary file, but also that he can do so with root's privileges. For example, let's check the permissions of /bin/passwd. This binary is used to change the password of an account, and modifies the /etc/shadow file. The superuser can change anyone's password, but all other users should only be able to change their own.
![passwd Command Examples](http://www.tecmint.com/wp-content/uploads/2014/10/passwd-command.png)
passwd Command Examples
Thus, any user should have permission to run /bin/passwd, but only root will be able to specify an account. Other users can only change their own passwords.
![Change User Password in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/change-user-password.png)
Change User Password
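To see how the setuid bit shows up in a listing, you can set it on a scratch file of your own (no root is needed to set the bit on a file you own; the kernel only honors setuid on binaries, so this is purely illustrative):

```shell
# Set the setuid bit on a throwaway file and read it back.
f=$(mktemp)
chmod 4755 "$f"              # 4 prepended to 755 sets the setuid bit
mode=$(stat -c '%a' "$f")    # octal mode, read back with GNU stat
perms=$(ls -l "$f" | cut -c1-10)  # the owner-execute slot shows "s"
echo "$mode $perms"
# prints: 4755 -rwsr-xr-x
rm "$f"
```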
**Understanding Setgid**
When the setgid bit is set, the effective GID of the real user becomes that of the group owner. Thus, any user can access a file under the privileges granted to the group owner of such file. In addition, when the setgid bit is set on a directory, newly created files inherit the same group as the directory, and newly created subdirectories will also inherit the setgid bit of the parent directory. You will most likely use this approach whenever members of a certain group need access to all the files in a directory, regardless of the file owners primary group.
# chmod g+s [filename]
To set the setgid in octal form, prepend the number 2 to the current (or desired) basic permissions.
# chmod 2755 [directory]
**Setting the SETGID in a directory**
![Add Setgid in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-setgid-to-directory.png)
Add Setgid to Directory
**Understanding Sticky Bit**
When the “sticky bit” is set on files, Linux just ignores it, whereas for directories it has the effect of preventing users from deleting or even renaming the files it contains unless the user owns the directory, the file, or is root.
# chmod o+t [directory]
To set the sticky bit in octal form, prepend the number 1 to the current (or desired) basic permissions.
# chmod 1755 [directory]
Without the sticky bit, anyone able to write to the directory can delete or rename files. For that reason, the sticky bit is commonly found on directories, such as /tmp, that are world-writable.
![Add Stickybit in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-sticky-bit-to-directory.png)
Add Stickybit to Directory
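Both special bits can be exercised safely on throwaway directories; reading the octal mode back with GNU stat confirms the prepended digit (the directory names are made up for the sketch):

```shell
# Set setgid on one scratch directory and the sticky bit on another.
d=$(mktemp -d)
mkdir "$d/shared" "$d/dropbox"
chmod 2755 "$d/shared"      # 2 -> setgid, as on group-shared dirs
chmod 1777 "$d/dropbox"     # 1 -> sticky, the same mode /tmp uses
shared_mode=$(stat -c '%a' "$d/shared")
dropbox_mode=$(stat -c '%a' "$d/dropbox")
echo "$shared_mode $dropbox_mode"
# prints: 2755 1777
```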
### Special Linux File Attributes ###
There are other attributes that enable further limits on the operations that are allowed on files: for example, preventing a file from being renamed, moved, deleted, or even modified. They are set with the [chattr command][4] and can be viewed using the lsattr tool, as follows.
# chattr +i file1
# chattr +a file2
After executing those two commands, file1 will be immutable (which means it cannot be moved, renamed, modified or deleted) whereas file2 will enter append-only mode (it can only be opened in append mode for writing).
![Protect File from Deletion](http://www.tecmint.com/wp-content/uploads/2014/10/chattr-command.png)
Chattr Command to Protect Files
### Accessing the root Account and Using sudo ###
One of the ways users can gain access to the root account is by typing.
$ su
and then entering root's password.
If authentication succeeds, you will be logged on as root, with the same current working directory as before. If you want to be placed in root's home directory instead, run.
$ su -
and then enter root's password.
![Enable sudo Access on Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Sudo-Access.png)
Enable Sudo Access on Users
The above procedure requires that a normal user knows root's password, which poses a serious security risk. For that reason, the sysadmin can configure the sudo command to allow an ordinary user to execute commands as a different user (usually the superuser) in a very controlled and limited way. Thus, restrictions can be set on a user so as to enable him to run one or more specific privileged commands and no others.
- Read Also: [Difference Between su and sudo User][5]
To authenticate using sudo, the user uses his/her own password. After entering a command, we will be prompted for our password (not the superuser's) and, if the authentication succeeds (and if the user has been granted privileges to run the command), the specified command is carried out.
To grant access to sudo, the system administrator must edit the /etc/sudoers file. It is recommended that this file be edited using the visudo command instead of opening it directly with a text editor.
# visudo
This opens the /etc/sudoers file using vim (you can follow the instructions given in [Install and Use vim as Editor Part 2][6] of this series to edit the file).
These are the most relevant lines.
Defaults secure_path="/usr/sbin:/usr/bin:/sbin"
root ALL=(ALL) ALL
tecmint ALL=/bin/yum update
gacanepa ALL=NOPASSWD:/bin/updatedb
%admin ALL=(ALL) ALL
Let's take a closer look at them.
Defaults secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin"
This line lets you specify the directories that will be searched for the commands executed with sudo, and is used to prevent using user-specific directories, which could harm the system.
The next lines are used to specify permissions.
root ALL=(ALL) ALL
- The first ALL keyword indicates that this rule applies to all hosts.
- The second ALL indicates that the user in the first column can run commands with the privileges of any user.
- The third ALL means any command can be run.
tecmint ALL=/bin/yum update
If no user is specified after the = sign, sudo assumes the root user. In this case, user tecmint will be able to run yum update as root.
gacanepa ALL=NOPASSWD:/bin/updatedb
The NOPASSWD directive allows user gacanepa to run /bin/updatedb without needing to enter his password.
%admin ALL=(ALL) ALL
The % sign indicates that this line applies to a group called “admin”. The meaning of the rest of the line is identical to that of a regular user's entry. This means that members of the group “admin” can run all commands as any user on all hosts.
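Putting those pieces together, a minimal drop-in sketch could restrict one user to two specific commands; everything here is hypothetical (the deploy user, the myapp unit, and the file name are invented for illustration), and any such file should be syntax-checked with visudo before use:

```
# /etc/sudoers.d/deploy -- hypothetical drop-in; check its syntax with:
#   visudo -c -f /etc/sudoers.d/deploy
Cmnd_Alias DEPLOY_CMDS = /usr/bin/systemctl restart myapp, /usr/bin/journalctl -u myapp
deploy ALL=(root) NOPASSWD: DEPLOY_CMDS
```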
To see which privileges are granted to you by sudo, use the -l option to list them.
![Sudo Access Rules](http://www.tecmint.com/wp-content/uploads/2014/10/sudo-access-rules.png)
Sudo Access Rules
### Summary ###
Effective user and file management skills are essential tools for any system administrator. In this article we have covered the basics, and we hope you can use it as a good starting point to build upon. Feel free to leave your comments or questions below, and we'll respond quickly.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/manage-users-and-groups-in-linux/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/add-users-in-linux/
[2]:http://www.tecmint.com/usermod-command-examples/
[3]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/
[4]:http://www.tecmint.com/chattr-command-examples/
[5]:http://www.tecmint.com/su-vs-sudo-and-how-to-configure-sudo-in-linux/
[6]:http://www.tecmint.com/vi-editor-usage/


@ -0,0 +1,229 @@
Part 9 - LFCS: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper
================================================================================
Last August, the Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of succeeding at overall operational support for Linux systems. A Linux Foundation Certified Sysadmin has the expertise to ensure effective system support, first-level troubleshooting and monitoring, including, when needed, issue escalation to engineering support teams.
![Linux Package Management](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-9.png)
Linux Foundation Certified Sysadmin Part 9
Watch the following video that explains about the Linux Foundation Certification Program.
YouTube video:
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
This article is Part 9 of a 10-tutorial series. In this article we will guide you through Linux package management, as required for the LFCS certification exam.
### Package Management ###
In a few words, package management is a method of installing and maintaining (which includes updating and probably removing as well) software on the system.
In the early days of Linux, programs were only distributed as source code, along with the required man pages, the necessary configuration files, and more. Nowadays, most Linux distributions use by default pre-built programs or sets of programs called packages, which are presented to users ready for installation on that distribution. However, one of the wonders of Linux is still the possibility to obtain the source code of a program to be studied, improved, and compiled.
**How package management systems work**
If a certain package requires a certain resource such as a shared library, or another package, it is said to have a dependency. All modern package management systems provide some method of dependency resolution to ensure that when a package is installed, all of its dependencies are installed as well.
**Packaging Systems**
Almost all the software that is installed on a modern Linux system will be found on the Internet. It can either be provided by the distribution vendor through central repositories (which can contain several thousands of packages, each of which has been specifically built, tested, and maintained for the distribution) or be available in source code that can be downloaded and installed manually.
Because different distribution families use different packaging systems (Debian: *.deb / CentOS: *.rpm / openSUSE: *.rpm built specially for openSUSE), a package intended for one distribution will not be compatible with another distribution. However, most distributions are likely to fall into one of the three distribution families covered by the LFCS certification.
**High and low-level package tools**
In order to perform the task of package management effectively, you need to be aware that you will have two types of utilities available: low-level tools (which handle in the backend the actual installation, upgrade, and removal of package files), and high-level tools (which are in charge of ensuring that the tasks of dependency resolution and metadata searching (“data about the data”) are performed).
<table cellspacing="0" border="0">
<colgroup width="200">
</colgroup>
<colgroup width="200">
</colgroup>
<colgroup width="200">
</colgroup>
<tbody>
<tr>
<td bgcolor="#AEA79F" align="CENTER" height="18" style="border: 1px solid #000001;"><b><span style="color: black;">DISTRIBUTION</span></b></td>
<td bgcolor="#AEA79F" align="CENTER" style="border: 1px solid #000001;"><b><span style="color: black;">LOW-LEVEL TOOL</span></b></td>
<td bgcolor="#AEA79F" align="CENTER" style="border: 1px solid #000001;"><b><span style="color: black;">HIGH-LEVEL TOOL</span></b></td>
</tr>
<tr class="alt">
<td bgcolor="#FFFFFF" align="LEFT" height="18" style="border: 1px solid #000001;"><span style="color: black;">&nbsp;Debian and derivatives</span></td>
<td bgcolor="#FFFFFF" align="LEFT" style="border: 1px solid #000001;"><span style="color: black;">&nbsp;dpkg</span></td>
<td bgcolor="#FFFFFF" align="LEFT" style="border: 1px solid #000001;"><span style="color: black;">&nbsp;apt-get / aptitude</span></td>
</tr>
<tr>
<td bgcolor="#FFFFFF" align="LEFT" height="18" style="border: 1px solid #000001;"><span style="color: black;">&nbsp;CentOS</span></td>
<td bgcolor="#FFFFFF" align="LEFT" style="border: 1px solid #000001;"><span style="color: black;">&nbsp;rpm</span></td>
<td bgcolor="#FFFFFF" align="LEFT" style="border: 1px solid #000001;"><span style="color: black;">&nbsp;yum</span></td>
</tr>
<tr class="alt">
<td bgcolor="#FFFFFF" align="LEFT" height="18" style="border: 1px solid #000001;"><span style="color: black;">&nbsp;openSUSE</span></td>
<td bgcolor="#FFFFFF" align="LEFT" style="border: 1px solid #000001;"><span style="color: black;">&nbsp;rpm</span></td>
<td bgcolor="#FFFFFF" align="LEFT" style="border: 1px solid #000001;"><span style="color: black;">&nbsp;zypper</span></td>
</tr>
</tbody>
</table>
Let us see the description of the low-level and high-level tools.
dpkg is a low-level package manager for Debian-based systems. It can install, remove, provide information about, and build *.deb packages, but it can't automatically download and install their corresponding dependencies.
- Read More: [15 dpkg Command Examples][1]
apt-get is a high-level package manager for Debian and derivatives, and provides a simple way to retrieve and install packages, including dependency resolution, from multiple sources using the command line. Unlike dpkg, apt-get does not work directly with *.deb files, but with the package's proper name.
- Read More: [25 apt-get Command Examples][2]
aptitude is another high-level package manager for Debian-based systems, and can be used to perform management tasks (installing, upgrading, and removing packages, also handling dependency resolution automatically) in a fast and easy way. It provides the same functionality as apt-get and additional ones, such as offering access to several versions of a package.
rpm is the package management system used by Linux Standard Base (LSB)-compliant distributions for low-level handling of packages. Just like dpkg, it can query, install, verify, upgrade, and remove packages, and is more frequently used by Red Hat-based distributions, such as RHEL, CentOS, and Fedora.
- Read More: [20 rpm Command Examples][3]
yum adds the functionality of automatic updates and package management with dependency management to RPM-based systems. As a high-level tool, like apt-get or aptitude, yum works with repositories.
- Read More: [20 yum Command Examples][4]
### Common Usage of Low-Level Tools ###
The most frequent tasks that you will do with low-level tools are as follows:
**1. Installing a package from a compiled (*.deb or *.rpm) file**
The downside of this installation method is that no dependency resolution is provided. You will most likely choose to install a package from a compiled file when it is not available in the distribution's repositories and therefore cannot be downloaded and installed through a high-level tool. Since low-level tools do not perform dependency resolution, they will exit with an error if we try to install a package with unmet dependencies.
# dpkg -i file.deb [Debian and derivative]
# rpm -i file.rpm [CentOS / openSUSE]
**Note**: Do not attempt to install on CentOS a *.rpm file that was built for openSUSE, or vice-versa!
**2. Upgrading a package from a compiled file**
Again, you will only upgrade an installed package manually when it is not available in the central repositories.
# dpkg -i file.deb [Debian and derivative]
# rpm -U file.rpm [CentOS / openSUSE]
**3. Listing installed packages**
When you first get your hands on an already working system, chances are you'll want to know what packages are installed.
# dpkg -l [Debian and derivative]
# rpm -qa [CentOS / openSUSE]
If you want to know whether a specific package is installed, you can pipe the output of the above commands to grep, as explained in [manipulate files in Linux Part 1][6] of this series. Suppose we need to verify if package mysql-common is installed on an Ubuntu system.
# dpkg -l | grep mysql-common
![Check Installed Packages in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Installed-Package.png)
Check Installed Packages
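The same grep-based check can be sketched against a captured listing (the two lines below are illustrative `dpkg -l` output, not from a real host; on a real Debian system you would pipe `dpkg -l` itself):

```shell
# Look for mysql-common among installed ("ii") entries of a sample listing.
listing='ii  mysql-common    5.5.44  all    MySQL database common files
ii  openssh-server  6.7p1   amd64  secure shell (SSH) server'
status=$(echo "$listing" | grep -Eq '^ii +mysql-common ' && echo installed || echo missing)
echo "$status"
# prints: installed
```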
Another way to determine if a package is installed.
# dpkg --status package_name [Debian and derivative]
# rpm -q package_name [CentOS / openSUSE]
For example, let's find out whether package sysdig is installed on our system.
# rpm -qa | grep sysdig
![Check sysdig Package](http://www.tecmint.com/wp-content/uploads/2014/11/Check-sysdig-Package.png)
Check sysdig Package
**4. Finding out which package installed a file**
# dpkg --search file_name
# rpm -qf file_name
For example, which package installed pw_dict.hwm?
# rpm -qf /usr/share/cracklib/pw_dict.hwm
![Query File in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Query-File-in-Linux.png)
Query File in Linux
### Common Usage of High-Level Tools ###
The most frequent tasks that you will do with high level tools are as follows.
**1. Searching for a package**
aptitude update will update the list of available packages, and aptitude search will perform the actual search for package_name.
# aptitude update && aptitude search package_name
With the search all option, yum will look for package_name not only in package names, but also in package descriptions.
# yum search package_name
# yum search all package_name
# yum whatprovides "*/package_name"
Let's suppose we need a file whose name is sysdig. To find out which package we would have to install, let's run:
# yum whatprovides "*/sysdig"
![Check Package Description in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Description.png)
Check Package Description
whatprovides tells yum to search for the package that will provide a file matching the above pattern.
# zypper refresh && zypper search package_name [On openSUSE]
**2. Installing a package from a repository**
While installing a package, you may be prompted to confirm the installation after the package manager has resolved all dependencies. Note that running update or refresh (according to the package manager being used) is not strictly necessary, but keeping installed packages up to date is a good sysadmin practice for security and dependency reasons.
# aptitude update && aptitude install package_name [Debian and derivatives]
# yum update && yum install package_name [CentOS]
# zypper refresh && zypper install package_name [openSUSE]
**3. Removing a package**
The remove option will uninstall the package but leave configuration files intact, whereas purge will erase every trace of the program from your system.
# aptitude remove / purge package_name
# yum erase package_name
--- Notice the minus sign in front of the package that will be uninstalled (openSUSE) ---
# zypper remove -package_name
Most (if not all) package managers will, by default, prompt you to confirm that you are sure about proceeding with the uninstallation before actually performing it. So read the onscreen messages carefully to avoid running into unnecessary trouble!
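In unattended scripts you can pre-answer that confirmation prompt. The assume-yes flags below exist in all three tools, but use them with care, since they remove the safety net:

# aptitude -y remove package_name [Debian and derivatives]
# yum -y erase package_name [CentOS]
# zypper --non-interactive remove package_name [openSUSE]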
**4. Displaying information about a package**
The following command will display information about the birthday package.
# aptitude show birthday
# yum info birthday
# zypper info birthday
![Check Package Information in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Information.png)
Check Package Information
### Summary ###
Package management is something you just can't sweep under the rug as a system administrator. You should be prepared to use the tools described in this article at a moment's notice. We hope you find it useful in your preparation for the LFCS exam and for your daily tasks. Feel free to leave your comments or questions below; we will be more than glad to get back to you as soon as possible.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-package-management/
Author: [Gabriel Cánepa][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/dpkg-command-examples/
[2]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/
[3]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/
[4]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
[5]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/


@ -1,213 +0,0 @@
struggling is translating this article
Setting up RAID 1 (Mirroring) using Two Disks in Linux Part 3
================================================================================
RAID mirroring means writing an exact clone (or mirror) of the same data to two drives. A minimum of two disks is required in an array to create RAID 1, and it is useful mainly when read performance or reliability matters more than storage capacity.
![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg)
Setup Raid1 in Linux
Mirrors are created to protect against data loss due to disk failure. Each disk in a mirror holds an exact copy of the data. When one disk fails, the same data can be retrieved from the other functioning disk, and the failed drive can be replaced in the running computer without interrupting users.
### Features of RAID 1 ###
- Mirroring offers good performance.
- 50% of the space is lost: if we have two 500GB disks (1TB in total), mirroring will only give us 500GB of usable space.
- No data is lost in mirroring if one disk fails, because both disks hold the same content.
- Reads are faster than writes.
#### Requirements ####
A minimum of two disks is required to create RAID 1, and you can add more disks in even numbers (2, 4, 6, 8, and so on). To add more disks, your system must have a physical RAID adapter (hardware card).
Here we're using software RAID, not hardware RAID; if your system has a built-in physical hardware RAID card, you can access it from its utility UI or by pressing the Ctrl+I key at boot.
Read Also: [Basic Concepts of RAID in Linux][1]
#### My Server Setup ####
Operating System : CentOS 6.5 Final
IP Address : 192.168.0.226
Hostname : rd1.tecmintlocal.com
Disk 1 [20GB] : /dev/sdb
Disk 2 [20GB] : /dev/sdc
This article will guide you through step-by-step instructions on how to set up software RAID 1 (mirroring) using mdadm (which creates and manages RAID) on a Linux platform. The same instructions also work on other Linux distributions such as RedHat, CentOS, Fedora, etc.
### Step 1: Installing Prerequisites and Examine Drives ###
1. As I said above, we're using the mdadm utility for creating and managing RAID in Linux. So, let's install the mdadm software package using the yum or apt-get package manager tool.
# yum install mdadm [on RedHat systems]
# apt-get install mdadm [on Debain systems]
2. Once the mdadm package has been installed, we need to examine our disk drives to check whether any RAID is already configured, using the following command.
# mdadm -E /dev/sd[b-c]
![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png)
Check RAID on Disks
As you can see from the above screen, no super-block has been detected yet, which means no RAID is defined.
### Step 2: Drive Partitioning for RAID ###
3. As mentioned above, we're using a minimum of two partitions, /dev/sdb and /dev/sdc, for creating RAID 1. Let's create partitions on these two drives using the fdisk command and change their type to raid during partition creation.
# fdisk /dev/sdb
Follow the instructions below:
- Press n to create a new partition.
- Then choose P for a primary partition.
- Next select the partition number as 1.
- Accept the default full size by simply pressing the Enter key twice.
- Next press p to print the defined partition.
- Press t to change the partition type; press L first if you want to list all available types.
- Choose fd for Linux raid autodetect and press Enter to apply.
- Then use p again to print the changes we have made.
- Use w to write the changes.
![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png)
Create Disk Partitions
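If you prefer to script this step instead of answering fdisk interactively, the same layout can be sketched with parted (one possible non-interactive equivalent; double-check the device name before running, as this wipes the existing partition table):

# parted -s /dev/sdb mklabel msdos
# parted -s /dev/sdb mkpart primary 1MiB 100%
# parted -s /dev/sdb set 1 raid on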
After the /dev/sdb partition has been created, follow the same instructions to create a new partition on the /dev/sdc drive.
# fdisk /dev/sdc
![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png)
Create Second Partitions
4. Once both partitions have been created successfully, verify the changes on both the sdb and sdc drives using the same mdadm command, and also confirm the RAID type, as shown in the following screen grabs.
# mdadm -E /dev/sd[b-c]
![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png)
Verify Partitions Changes
![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png)
Check RAID Type
**Note**: As you can see in the above picture, no RAID has been defined on the sdb1 and sdc1 partitions so far, which is why we are getting "no super-blocks detected".
### Step 3: Creating RAID1 Devices ###
5. Next, create a RAID 1 device called /dev/md0 using the following command and verify it.
# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1
# cat /proc/mdstat
![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png)
Create RAID Device
6. Next, check the RAID device type and the RAID array using the following commands.
# mdadm -E /dev/sd[b-c]1
# mdadm --detail /dev/md0
![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png)
Check RAID Device type
![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png)
Check RAID Device Array
From the above pictures, one can easily see that the RAID 1 array has been created using the /dev/sdb1 and /dev/sdc1 partitions, and that its status is resyncing.
### Step 4: Creating File System on RAID Device ###
7. Create an ext4 file system on md0 and mount it under /mnt/raid1.
# mkfs.ext4 /dev/md0
![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png)
Create RAID Device Filesystem
8. Next, mount the newly created filesystem under /mnt/raid1, create some files, and verify the contents under the mount point.
# mkdir /mnt/raid1
# mount /dev/md0 /mnt/raid1/
# touch /mnt/raid1/tecmint.txt
# echo "tecmint raid setups" > /mnt/raid1/tecmint.txt
![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png)
Mount Raid Device
9. To auto-mount RAID 1 on system reboot, you need to make an entry in the fstab file. Open the /etc/fstab file and add the following line at the bottom.
/dev/md0 /mnt/raid1 ext4 defaults 0 0
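Device names like /dev/md0 can occasionally change between boots; a more robust variant of the same fstab entry (assuming the blkid utility is available) references the filesystem by its UUID instead. The UUID placeholder below must be replaced with the value blkid actually prints:

# blkid /dev/md0
UUID=<uuid-printed-by-blkid> /mnt/raid1 ext4 defaults 0 0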
![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png)
Raid Automount Device
10. Run mount -av to check whether there are any errors in the fstab entry.
# mount -av
![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png)
Check Errors in fstab
11. Next, save the RAID configuration manually to the mdadm.conf file using the command below.
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png)
Save Raid Configuration
The above configuration file is read by the system at reboot to load the RAID devices.
### Step 5: Verify Data After Disk Failure ###
12. Our main purpose is that, even after a hard disk fails or crashes, our data remains available. Let's see what happens when a disk in the array becomes unavailable.
# mdadm --detail /dev/md0
![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png)
Raid Device Verify
In the above image, we can see that there are 2 devices in our RAID and that the count of active devices is 2. Now let us see what happens when a disk is unplugged (here, the sdc disk was removed) or fails.
# ls -l /dev | grep sd
# mdadm --detail /dev/md0
![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png)
Test RAID Devices
In the above image, you can now see that one of our drives is lost. I unplugged one of the drives from my virtual machine. Now let us check our precious data.
# cd /mnt/raid1/
# cat tecmint.txt
![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png)
Verify RAID Data
As you can see, our data is still available. From this we learn the advantage of RAID 1 (mirroring). In the next article, we will see how to set up RAID 5 (striping with distributed parity). I hope this helps you understand how RAID 1 (mirror) works.
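Instead of physically unplugging a drive, mdadm can simulate the same failure from the command line and then bring a replacement back into the mirror. A possible sequence (try it on a test system only) looks like this:

# mdadm /dev/md0 --fail /dev/sdc1
# mdadm /dev/md0 --remove /dev/sdc1
# mdadm /dev/md0 --add /dev/sdc1
# cat /proc/mdstat

Here --fail marks the disk as faulty, --remove detaches it from the array, and --add re-adds it, after which the mirror re-syncs automatically.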
--------------------------------------------------------------------------------
via: http://www.tecmint.com/create-raid1-in-linux/
Author: [Babin Lonston][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/


@ -1,286 +0,0 @@
struggling is translating this article
Creating RAID 5 (Striping with Distributed Parity) in Linux Part 4
================================================================================
In RAID 5, data is striped across multiple drives with distributed parity. Striping with distributed parity means that both the parity information and the data are split across multiple disks, which provides good data redundancy.
![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg)
Setup Raid 5 in Linux
A RAID 5 array requires at least three hard drives, or more. RAID 5 is used in large-scale production environments because it is cost-effective and provides both performance and redundancy.
#### What is Parity? ####
Parity is the simplest common method of detecting errors in data storage. Parity information is stored across all the disks: say we have 4 disks; the equivalent of one disk's worth of space is spread across all of them to hold the parity information. If any one disk fails, we can still recover the data by rebuilding it from the parity information after replacing the failed disk.
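The parity itself is normally computed with the bitwise XOR operation. The tiny sketch below, using shell arithmetic on single byte values purely for illustration, shows how a lost block can be rebuilt from the surviving blocks plus the parity:

```shell
# Three data blocks, represented here as single byte values.
d1=172; d2=93; d3=240

# The parity block is the XOR of all data blocks.
parity=$(( d1 ^ d2 ^ d3 ))

# Suppose the disk holding d2 fails: XOR-ing the surviving
# blocks with the parity reconstructs the missing one.
rebuilt=$(( d1 ^ d3 ^ parity ))

echo "original d2 = $d2, rebuilt d2 = $rebuilt"
```

Running it prints the same value twice, which is exactly how a RAID 5 rebuild recovers a replaced disk's contents, block by block.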
#### Pros and Cons of RAID 5 ####
- Gives better performance.
- Supports redundancy and fault tolerance.
- Supports hot spare options.
- Loses a single disk's worth of capacity for storing the parity information.
- No data is lost if a single disk fails; we can rebuild from parity after replacing the failed disk.
- Suited to transaction-oriented environments, as reads are faster.
- Due to the parity overhead, writes are slow.
- Rebuilds take a long time.
#### Requirements ####
A minimum of 3 hard drives is required to create RAID 5, but you can add more disks, provided you have a dedicated hardware RAID controller with multiple ports. Here, we are using software RAID and the mdadm package to create the array.
mdadm is a package which allows us to configure and manage RAID devices in Linux. By default there is no configuration file available for RAID, so we must save the configuration manually after creating and configuring the RAID setup, in a separate file called mdadm.conf.
Before moving further, I suggest you go through the following articles to understand the basics of RAID in Linux.
- [Basic Concepts of RAID in Linux Part 1][1]
- [Creating RAID 0 (Stripe) in Linux Part 2][2]
- [Setting up RAID 1 (Mirroring) in Linux Part 3][3]
#### My Server Setup ####
Operating System : CentOS 6.5 Final
IP Address : 192.168.0.227
Hostname : rd5.tecmintlocal.com
Disk 1 [20GB] : /dev/sdb
Disk 2 [20GB] : /dev/sdc
Disk 3 [20GB] : /dev/sdd
This article is Part 4 of a 9-tutorial RAID series. Here we are going to set up software RAID 5 with distributed parity on Linux systems or servers, using three 20GB disks named /dev/sdb, /dev/sdc and /dev/sdd.
### Step 1: Installing mdadm and Verify Drives ###
1. As we said earlier, we're using the CentOS 6.5 Final release for this RAID setup, but the same steps can be followed on any Linux-based distribution.
# lsb_release -a
# ifconfig | grep inet
![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png)
CentOS 6.5 Summary
2. If you're following our RAID series, we assume that you've already installed the mdadm package; if not, use the following command, according to your Linux distribution, to install it.
# yum install mdadm [on RedHat systems]
# apt-get install mdadm [on Debain systems]
3. After installing the mdadm package, let's list the three 20GB disks which we have added to our system, using the fdisk command.
# fdisk -l | grep sd
![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png)
Install mdadm Tool
4. Now it's time to examine the three attached drives for any existing RAID blocks, using the following commands.
# mdadm -E /dev/sd[b-d]
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd
![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png)
Examine Drives For Raid
**Note**: The above image shows that no super-block has been detected yet, so no RAID is defined on any of the three drives. Let us start creating one now.
### Step 2: Partitioning the Disks for RAID ###
5. First and foremost, we have to partition the disks (/dev/sdb, /dev/sdc and /dev/sdd) before adding them to the RAID array, so let us define the partitions using the fdisk command before moving to the next steps.
# fdisk /dev/sdb
# fdisk /dev/sdc
# fdisk /dev/sdd
#### Create /dev/sdb Partition ####
Please follow the instructions below to create a partition on the /dev/sdb drive.
- Press n to create a new partition.
- Then choose P for a primary partition. Here we choose primary because no partitions are defined yet.
- Then choose 1 as the first partition number. By default it will be 1.
- For the cylinder size we don't have to specify anything, because we need the whole disk for RAID, so just press Enter twice to accept the default full size.
- Next press p to print the created partition.
- To change the type, press t; if you need to see every available type, press L.
- Here we select fd, since the partition is for RAID.
- Then use p again to print the changes we have made.
- Use w to write the changes.
![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png)
Create sdb Partition
**Note**: We have to follow the steps mentioned above to create partitions on the sdc and sdd drives too.
#### Create /dev/sdc Partition ####
Now partition the sdc and sdd drives by following the steps shown in the screenshot, or simply repeat the steps above.
# fdisk /dev/sdc
![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png)
Create sdc Partition
#### Create /dev/sdd Partition ####
# fdisk /dev/sdd
![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png)
Create sdd Partition
6. After creating the partitions, check for the changes on all three drives: sdb, sdc, and sdd.
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd
or
# mdadm -E /dev/sd[b-d]
![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png)
Check Partition Changes
**Note**: The above picture shows that the partition type is fd, i.e. Linux raid autodetect.
7. Now check for RAID blocks in the newly created partitions. If no super-blocks are detected, then we can move forward and create a new RAID 5 setup on these drives.
![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png)
Check Raid on Partition
### Step 3: Creating md device md0 ###
8. Now create a RAID device named md0 (i.e. /dev/md0) with RAID level 5 on all the newly created partitions (sdb1, sdc1 and sdd1), using the command below.
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
or
# mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1
9. After creating the RAID device, check and verify the RAID level, the devices included, and the array state from the mdstat output.
# cat /proc/mdstat
![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png)
Verify Raid Device
If you want to monitor the current build process, you can use the watch command: just pass cat /proc/mdstat to watch, which will refresh the screen every second.
# watch -n1 cat /proc/mdstat
![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png)
Monitor Raid 5 Process
![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png)
Raid 5 Process Summary
10. After the array has been created, verify the RAID devices using the following command.
# mdadm -E /dev/sd[b-d]1
![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png)
Verify Raid Level
**Note**: The output of the above command will be a little long, as it prints the information for all three drives.
11. Next, verify the RAID array to confirm that the devices we have included are running and have started to re-sync.
# mdadm --detail /dev/md0
![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png)
Verify Raid Array
### Step 4: Creating file system for md0 ###
12. Create an ext4 file system on the md0 device before mounting.
# mkfs.ext4 /dev/md0
![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png)
Create md0 Filesystem
13. Now create a directory under /mnt, then mount the created filesystem under /mnt/raid5 and check the files under the mount point; you will see the lost+found directory.
# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5/
# ls -l /mnt/raid5/
14. Create a few files under the mount point /mnt/raid5, append some text to one of them, and verify the content.
# touch /mnt/raid5/raid5_tecmint_{1..5}
# ls -l /mnt/raid5/
# echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1
# cat /mnt/raid5/raid5_tecmint_1
# cat /proc/mdstat
![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png)
Mount Raid Device
15. We need to add an entry in fstab, otherwise our mount point will not come back after a system reboot. To add the entry, edit the fstab file and append the following line, as shown below. The mount point will differ according to your environment.
# vim /etc/fstab
/dev/md0 /mnt/raid5 ext4 defaults 0 0
![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png)
Raid 5 Automount
16. Next, run the mount -av command to check whether there are any errors in the fstab entry.
# mount -av
![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png)
Check Fstab Errors
### Step 5: Save Raid 5 Configuration ###
17. As mentioned earlier in the requirements section, RAID has no config file by default; we have to save it manually. If this step is not followed, the RAID device will not come back as md0 after a reboot, but under some other random number.
So we must save the configuration before the system reboots. If the configuration is saved, it will be loaded by the kernel during the system reboot and the RAID will be loaded as well.
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png)
Save Raid 5 Configuration
Note: Saving the configuration keeps the array name stable as md0 across reboots.
### Step 6: Adding Spare Drives ###
18. What is the use of adding a spare drive? It is very useful: if any one of the disks in our array fails, the spare drive becomes active, the rebuild process starts, and the data is synced from the other disks, so we get redundancy here.
For more instructions on how to add a spare drive and check RAID 5 fault tolerance, read Step 6 and Step 7 in the following article.
- [Add Spare Drive to Raid 5 Setup][4]
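As a quick preview, attaching a spare is a one-liner with mdadm (assuming a prepared partition such as /dev/sde1 exists; the device name here is only an example):

# mdadm --add /dev/md0 /dev/sde1
# mdadm --detail /dev/md0

The second command should then list the new device with a "spare" role.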
### Conclusion ###
In this article, we have seen how to set up RAID 5 using three disks. In my upcoming articles, we will see how to troubleshoot a failed disk in a RAID 5 array and how to replace it for recovery.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/create-raid-5-in-linux/
Author: [Babin Lonston][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
[2]:http://www.tecmint.com/create-raid0-in-linux/
[3]:http://www.tecmint.com/create-raid1-in-linux/
[4]:http://www.tecmint.com/create-raid-6-in-linux/


@ -1,228 +0,0 @@
Translating by ictlyh
Part 1 - RHCE Series: How to Setup and Test Static Network Routing
================================================================================
RHCE (Red Hat Certified Engineer) is a certification from Red Hat, the company that provides an open source operating system and software to the enterprise community, and also offers training, support and consulting services to companies.
![RHCE Exam Preparation Guide](http://www.tecmint.com/wp-content/uploads/2015/07/RHCE-Exam-Series-by-TecMint.jpg)
RHCE Exam Preparation Guide
The RHCE (Red Hat Certified Engineer) exam is a performance-based exam (codename EX300) aimed at those who possess the additional skills, knowledge, and abilities required of a senior system administrator responsible for Red Hat Enterprise Linux (RHEL) systems.
**Important**: [Red Hat Certified System Administrator][1] (RHCSA) certification is required to earn RHCE certification.
The following are the exam objectives, based on the Red Hat Enterprise Linux 7 version of the exam, which we are going to cover in this RHCE series:
- Part 1: How to Setup and Test Static Routing in RHEL 7
- Part 2: How to Perform Packet Filtering, Network Address Translation and Set Kernel Runtime Parameters
- Part 3: How to Produce and Deliver System Activity Reports Using Linux Toolsets
- Part 4: Automate System Maintenance Tasks Using Shell Scripts
- Part 5: How to Configure Local and Remote System Logging
- Part 6: How to Configure a Samba Server and a NFS Server
- Part 7: Setting Up Complete SMTP Server for Mailing
- Part 8: Setting Up HTTPS and TLS on RHEL 7
- Part 9: Setting Up Network Time Protocol
- Part 10: How to Configure a Cache-Only DNS Server
To view fees and register for an exam in your country, check the [RHCE Certification][2] page.
In this Part 1 of the RHCE series and the next, we will present basic, yet typical, cases where the principles of static routing, packet filtering, and network address translation come into play.
![Setup Static Network Routing in RHEL](http://www.tecmint.com/wp-content/uploads/2015/07/Setup-Static-Network-Routing-in-RHEL-7.jpg)
RHCE: Setup and Test Network Static Routing Part 1
Please note that we will not cover them in depth, but rather organize these contents in such a way that will be helpful to take the first steps and build from there.
### Static Routing in Red Hat Enterprise Linux 7 ###
One of the wonders of modern networking is the vast availability of devices that can connect groups of computers, whether in relatively small numbers and confined to a single room or several machines in the same building, city, country, or across continents.
However, in order to effectively accomplish this in any situation, network packets need to be routed, or in other words, the path they follow from source to destination must be ruled somehow.
Static routing is the process of specifying a route for network packets other than the default, which is provided by a network device known as the default gateway. Unless specified otherwise through static routing, network packets are directed to the default gateway; with static routing, other paths are defined based on predefined criteria, such as the packet destination.
Let us define the following scenario for this tutorial. We have a Red Hat Enterprise Linux 7 box connecting to router #1 [192.168.0.1] to access the Internet and machines in 192.168.0.0/24.
A second router (router #2) has two network interface cards: enp0s3 is also connected to router #1 to access the Internet and to communicate with the RHEL 7 box and other machines in the same network, whereas the other (enp0s8) is used to grant access to the 10.0.0.0/24 network where internal services reside, such as a web and / or database server.
This scenario is illustrated in the diagram below:
![Static Routing Network Diagram](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png)
Static Routing Network Diagram
In this article we will focus exclusively on setting up the routing table on our RHEL 7 box to make sure that it can both access the Internet through router #1 and the internal network via router #2.
In RHEL 7, you will use the [ip command][3] to configure and show devices and routing using the command line. These changes can take effect immediately on a running system but since they are not persistent across reboots, we will use ifcfg-enp0sX and route-enp0sX files inside /etc/sysconfig/network-scripts to save our configuration permanently.
To begin, let's print our current routing table:
# ip route show
![Check Routing Table in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Current-Routing-Table.png)
Check Current Routing Table
From the output above, we can see the following facts:
- The default gateway's IP address is 192.168.0.1 and can be accessed via the enp0s3 NIC.
- When the system booted up, it enabled the zeroconf route to 169.254.0.0/16 (just in case). In a few words, if a machine is set to obtain an IP address through DHCP but fails to do so for some reason, it is automatically assigned an address in this network. The bottom line is that this route allows us to communicate, also via enp0s3, with other machines that have failed to obtain an IP address from a DHCP server.
- Last, but not least, we can communicate with other boxes inside the 192.168.0.0/24 network through enp0s3, whose IP address is 192.168.0.18.
These are the typical tasks that you would have to perform in such a setting. Unless specified otherwise, the following tasks should be performed in router #2:
Make sure all NICs have been properly installed:
# ip link show
If one of them is down, bring it up:
# ip link set dev enp0s8 up
and assign an IP address in the 10.0.0.0/24 network to it:
# ip addr add 10.0.0.17 dev enp0s8
Oops! We made a mistake in the IP address. We will have to remove the one we assigned earlier and then add the right one (10.0.0.18):
# ip addr del 10.0.0.17 dev enp0s8
# ip addr add 10.0.0.18 dev enp0s8
Now, please note that you can only add a route to a destination network through a gateway that is itself already reachable. For that reason, we need to assign an IP address within the 192.168.0.0/24 range to enp0s3 so that our RHEL 7 box can communicate with it:
# ip addr add 192.168.0.19 dev enp0s3
Finally, we will need to enable packet forwarding:
# echo "1" > /proc/sys/net/ipv4/ip_forward
and stop / disable (just for the time being until we cover packet filtering in the next article) the firewall:
# systemctl stop firewalld
# systemctl disable firewalld
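Note that the echo into /proc above only lasts until the next reboot. To make packet forwarding persistent on RHEL 7, you can place the setting in a sysctl configuration file; the file name below is our own choice, as any name under /etc/sysctl.d works:

# echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/90-forwarding.conf
# sysctl -p /etc/sysctl.d/90-forwarding.conf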
Back in our RHEL 7 box (192.168.0.18), let's configure a route to 10.0.0.0/24 through 192.168.0.19 (enp0s3 in router #2):
# ip route add 10.0.0.0/24 via 192.168.0.19
After that, the routing table looks as follows:
# ip route show
![Show Network Routing Table](http://www.tecmint.com/wp-content/uploads/2015/07/Show-Network-Routing.png)
Confirm Network Routing Table
Likewise, add the corresponding route in the machine(s) you're trying to reach in 10.0.0.0/24:
# ip route add 192.168.0.0/24 via 10.0.0.18
You can test for basic connectivity using ping:
In the RHEL 7 box, run
# ping -c 4 10.0.0.20
where 10.0.0.20 is the IP address of a web server in the 10.0.0.0/24 network.
In the web server (10.0.0.20), run
# ping -c 4 192.168.0.18
where 192.168.0.18 is, as you will recall, the IP address of our RHEL 7 machine.
Alternatively, we can use [tcpdump][4] (you may need to install it with yum install tcpdump) to check the 2-way communication over TCP between our RHEL 7 box and the web server at 10.0.0.20.
To do so, let's start logging in the first machine with:
# tcpdump -qnnvvv -i enp0s3 host 10.0.0.20
and from another terminal in the same system let's telnet to port 80 on the web server (assuming Apache is listening on that port; otherwise, indicate the right port in the following command):
# telnet 10.0.0.20 80
The tcpdump log should look as follows:
![Check Network Communication between Servers](http://www.tecmint.com/wp-content/uploads/2015/07/Tcpdump-logs.png)
Check Network Communication between Servers
Here we can confirm that the connection was properly initialized by looking at the 2-way communication between our RHEL 7 box (192.168.0.18) and the web server (10.0.0.20).
Please remember that these changes will go away when you restart the system. If you want to make them persistent, you will need to edit (or create, if they don't already exist) the following files on the same systems where we ran the above commands.
Though not strictly necessary for our test case, you should know that /etc/sysconfig/network contains system-wide network parameters. A typical /etc/sysconfig/network looks as follows:
# Enable networking on this system?
NETWORKING=yes
# Hostname. Should match the value in /etc/hostname
HOSTNAME=yourhostnamehere
# Default gateway
GATEWAY=XXX.XXX.XXX.XXX
# Device used to connect to default gateway. Replace X with the appropriate number.
GATEWAYDEV=enp0sX
When it comes to setting specific variables and values for each NIC (as we did for router #2), you will have to edit /etc/sysconfig/network-scripts/ifcfg-enp0s3 and /etc/sysconfig/network-scripts/ifcfg-enp0s8.
Following our case,
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.0.19
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
NAME=enp0s3
ONBOOT=yes
and
TYPE=Ethernet
BOOTPROTO=static
IPADDR=10.0.0.18
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
NAME=enp0s8
ONBOOT=yes
for enp0s3 and enp0s8, respectively.
As for routing in our client machine (192.168.0.18), we will need to edit /etc/sysconfig/network-scripts/route-enp0s3:
10.0.0.0/24 via 192.168.0.19 dev enp0s3
Now reboot your system and you should see that route in your table.
### Summary ###
In this article we have covered the essentials of static routing in Red Hat Enterprise Linux 7. Although scenarios may vary, the case presented here illustrates the required principles and the procedure to perform this task. Before wrapping up, I would like to suggest that you take a look at [Chapter 4][5] of the Securing and Optimizing Linux section of The Linux Documentation Project site for further details on the topics covered here.
Free ebook on Securing & Optimizing Linux: The Hacking Solution (v.3.0) This 800+ page eBook contains a comprehensive collection of Linux security tips and shows how to use them safely and easily to configure Linux-based applications and services.
![Linux Security and Optimization Book](http://www.tecmint.com/wp-content/uploads/2015/07/Linux-Security-Optimization-Book.gif)
Linux Security and Optimization Book
[Download Now][6]
In the next article we will talk about packet filtering and network address translation, to round out the basic networking skills needed for the RHCE certification.
As always, we look forward to hearing from you, so feel free to leave your questions, comments, and suggestions using the form below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/
[2]:https://www.redhat.com/en/services/certification/rhce
[3]:http://www.tecmint.com/ip-command-examples/
[4]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/
[5]:http://www.tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/net-manage.html
[6]:http://tecmint.tradepub.com/free/w_opeb01/prgm.cgi


@ -1,178 +0,0 @@
Translating by ictlyh
Part 2 - How to Perform Packet Filtering, Network Address Translation and Set Kernel Runtime Parameters
================================================================================
As promised in Part 1 (“[Setup Static Network Routing][1]”), in this article (Part 2 of RHCE series) we will begin by introducing the principles of packet filtering and network address translation (NAT) in Red Hat Enterprise Linux 7, before diving into setting runtime kernel parameters to modify the behavior of a running kernel if certain conditions change or needs arise.
![Network Packet Filtering in RHEL](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Packet-Filtering-in-RHEL.jpg)
RHCE: Network Packet Filtering Part 2
### Network Packet Filtering in RHEL 7 ###
When we talk about packet filtering, we refer to a process performed by a firewall in which it reads the header of each data packet that attempts to pass through it. Then, it filters the packet by taking the required action based on rules that have been previously defined by the system administrator.
As you probably know, beginning with RHEL 7, the default service that manages firewall rules is [firewalld][2]. Like iptables, it talks to the netfilter module in the Linux kernel in order to examine and manipulate network packets. Unlike iptables, updates can take effect immediately without interrupting active connections; you don't even have to restart the service.
Another advantage of firewalld is that it allows us to define rules based on pre-configured service names (more on that in a minute).
In Part 1, we used the following scenario:
![Static Routing Network Diagram](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png)
Static Routing Network Diagram
However, you will recall that we disabled the firewall on router #2 to simplify the example since we had not covered packet filtering yet. Let's see now how we can enable incoming packets destined for a specific service or port in the destination.
First, let's add a permanent rule to allow inbound traffic in enp0s3 (192.168.0.19) to enp0s8 (10.0.0.18):
# firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT
The above command will save the rule to /etc/firewalld/direct.xml:
# cat /etc/firewalld/direct.xml
![Check Firewalld Saved Rules in CentOS 7](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firewalld-Save-Rules.png)
Check Firewalld Saved Rules
Then enable the rule for it to take effect immediately:
# firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT
Now you can telnet to the web server from the RHEL 7 box and run [tcpdump][3] again to monitor the TCP traffic between the two machines, this time with the firewall in router #2 enabled.
# telnet 10.0.0.20 80
# tcpdump -qnnvvv -i enp0s3 host 10.0.0.20
What if you want to only allow incoming connections to the web server (port 80) from 192.168.0.18 and block connections from other sources in the 192.168.0.0/24 network?
In the web server's firewall, add the following rules:
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/32" service name="http" accept'
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/32" service name="http" accept' --permanent
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop'
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop' --permanent
Now you can make HTTP requests to the web server, from 192.168.0.18 and from some other machine in 192.168.0.0/24. In the first case the connection should complete successfully, whereas in the second it will eventually time out.
To do so, any of the following commands will do the trick:
# telnet 10.0.0.20 80
# wget 10.0.0.20
I strongly advise you to check out the [Firewalld Rich Language][4] documentation in the Fedora Project Wiki for further details on rich rules.
### Network Address Translation in RHEL 7 ###
Network Address Translation (NAT) is the process whereby a group of computers (it can also be just one of them) in a private network is assigned a single public IP address. As a result, they are still uniquely identified by their own private IP address inside the network, but to the outside they all "seem" the same.
In addition, NAT makes it possible for computers inside a network to send requests to outside resources (like the Internet) and have the corresponding responses be sent back only to the source system.
Let's now consider the following scenario:
![Network Address Translation in RHEL](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Address-Translation-Diagram.png)
Network Address Translation
In router #2, we will move the enp0s3 interface to the external zone, and enp0s8 to the internal zone, where masquerading, or NAT, is enabled by default:
# firewall-cmd --list-all --zone=external
# firewall-cmd --change-interface=enp0s3 --zone=external
# firewall-cmd --change-interface=enp0s3 --zone=external --permanent
# firewall-cmd --change-interface=enp0s8 --zone=internal
# firewall-cmd --change-interface=enp0s8 --zone=internal --permanent
For our current setup, the internal zone along with everything that is enabled in it will be the default zone:
# firewall-cmd --set-default-zone=internal
Next, let's reload the firewall rules and keep state information:
# firewall-cmd --reload
Finally, let's add router #2 as the default gateway in the web server:
# ip route add default via 10.0.0.18
You can now verify that you can ping router #1 and an external site (tecmint.com, for example) from the web server:
# ping -c 2 192.168.0.1
# ping -c 2 tecmint.com
![Verify Network Routing](http://www.tecmint.com/wp-content/uploads/2015/07/Verify-Network-Routing.png)
Verify Network Routing
### Setting Kernel Runtime Parameters in RHEL 7 ###
In Linux, you are allowed to change, enable, and disable kernel runtime parameters, and RHEL is no exception. The /proc/sys interface (sysctl) lets you set runtime parameters on the fly to modify the system's behavior without much hassle when operating conditions change.
To do so, the echo shell built-in is used to write to files inside /proc/sys/<category>, where <category> is most likely one of the following directories:
- dev: parameters for specific devices connected to the machine.
- fs: filesystem configuration (quotas and inodes, for example).
- kernel: kernel-specific configuration.
- net: network configuration.
- vm: use of the kernel's virtual memory.
To display the list of all the currently available values, run
# sysctl -a | less
In Part 1, we changed the value of the net.ipv4.ip_forward parameter by doing
# echo 1 > /proc/sys/net/ipv4/ip_forward
in order to allow a Linux machine to act as a router.
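As a quick sanity check, you can read the current value back through the same /proc interface; this read-only sketch is safe to run on any Linux box:

```shell
# Read the current IPv4 forwarding flag back from the /proc/sys interface.
# 0 means forwarding is disabled, 1 means it is enabled.
cat /proc/sys/net/ipv4/ip_forward
```

The same value is reported by `sysctl net.ipv4.ip_forward`, as shown below.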
Another runtime parameter that you may want to set is kernel.sysrq, which enables the SysRq key on your keyboard to instruct the system to gracefully perform some low-level functions, such as rebooting the system if it has frozen for some reason:
# echo 1 > /proc/sys/kernel/sysrq
To display the value of a specific parameter, use sysctl as follows:
# sysctl <parameter.name>
For example,
# sysctl net.ipv4.ip_forward
# sysctl kernel.sysrq
Some parameters, such as the ones mentioned above, require only one value, whereas others (for example, fs.inode-state) require multiple values:
![Check Kernel Parameters in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Kernel-Parameters.png)
Check Kernel Parameters
In either case, you need to read the kernel's documentation before making any changes.
Please note that these settings will go away when the system is rebooted. To make these changes permanent, we will need to add .conf files inside /etc/sysctl.d as follows:
# echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/10-forward.conf
(where the number 10 indicates the order of processing relative to other files in the same directory).
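The prefix determines lexical (string) order rather than numeric order, which is why two-digit, zero-padded prefixes are the convention; a quick stand-alone illustration:

```shell
# Lexical sorting compares character by character, so "10-" sorts
# before "9-" ('1' < '9'); hence the zero-padded two-digit prefixes.
printf '10-forward.conf\n9-custom.conf\n' | sort
```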
and enable the changes with
# sysctl -p /etc/sysctl.d/10-forward.conf
### Summary ###
In this tutorial we have explained the basics of packet filtering, network address translation, and setting kernel runtime parameters on a running system and persistently across reboots. I hope you have found this information useful, and as always, we look forward to hearing from you!
Dont hesitate to share with us your questions, comments, or suggestions using the form below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/perform-packet-filtering-network-address-translation-and-set-kernel-runtime-parameters-in-rhel/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/
[2]:http://www.tecmint.com/firewalld-rules-for-centos-7/
[3]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/
[4]:https://fedoraproject.org/wiki/Features/FirewalldRichLanguage


@ -1,183 +0,0 @@
Translating by ictlyh
Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets
================================================================================
As a system engineer, you will often need to produce reports that show the utilization of your system's resources in order to, among other reasons: 1) make sure they are being utilized optimally, 2) prevent bottlenecks, and 3) ensure scalability.
![Monitor Linux Performance Activity Reports](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Performance-Activity-Reports.jpg)
RHCE: Monitor Linux Performance Activity Reports Part 3
Besides the well-known native Linux tools that are used to check disk, memory, and CPU usage to name a few examples, Red Hat Enterprise Linux 7 provides two additional toolsets to enhance the data you can collect for your reports: sysstat and dstat.
In this article we will describe both, but let's first start by reviewing the usage of the classic tools.
### Native Linux Tools ###
With df, you will be able to report disk space and inode usage by filesystem. You need to monitor both because a lack of space will prevent you from being able to save further files (and may even cause the system to crash), just like running out of inodes will mean you can't link further files with their corresponding data structures, thus producing the same effect: you won't be able to save those files to disk.
# df -h [Display output in human-readable form]
# df -h --total [Produce a grand total]
![Check Linux Total Disk Usage](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-Disk-Usage.png)
Check Linux Total Disk Usage
# df -i [Show inode count by filesystem]
# df -i --total [Produce a grand total]
![Check Linux Total inode Numbers](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-inode-Numbers.png)
Check Linux Total inode Numbers
With du, you can estimate file space usage by either file, directory, or filesystem.
For example, let's see how much space is used by the /home directory, which includes all of the users' personal files. The first command will return the overall space currently used by the entire /home directory, whereas the second will also display a disaggregated list by sub-directory:
# du -sch /home
# du -sch /home/*
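A variation you may find handy is ranking the largest sub-directories first. This is a sketch that assumes GNU sort (whose -h option understands the human-readable sizes emitted by du -h); /home is just the example path from above:

```shell
# Rank sub-directories of /home by size, largest first, and show the
# top five; any directory can be substituted for /home here.
du -sh /home/* 2>/dev/null | sort -rh | head -n 5
```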
![Check Linux Directory Disk Size](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Directory-Disk-Size.png)
Check Linux Directory Disk Size
Don't Miss:
- [12 df Command Examples to Check Linux Disk Space Usage][1]
- [10 du Command Examples to Find Disk Usage of Files/Directories][2]
Another utility that can't be missing from your toolset is vmstat. It will allow you to see at a quick glance information about processes, CPU and memory usage, disk activity, and more.
If run without arguments, vmstat will return averages since the last reboot. While you may use this form of the command once in a while, it will be more helpful to take a certain number of system utilization samples, one after another, with a defined time separation between them.
For example,
# vmstat 5 10
will return 10 samples taken every 5 seconds:
![Check Linux System Performance](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Systerm-Performance.png)
Check Linux System Performance
As you can see in the above picture, the output of vmstat is divided by columns: procs (processes), memory, swap, io, system, and cpu. The meaning of each field can be found in the FIELD DESCRIPTION sections in the man page of vmstat.
Where can vmstat come in handy? Let's examine the behavior of the system before and during a yum update:
# vmstat -a 1 5
![Vmstat Linux Performance Monitoring](http://www.tecmint.com/wp-content/uploads/2015/08/Vmstat-Linux-Peformance-Monitoring.png)
Vmstat Linux Performance Monitoring
Please note that as files are being modified on disk, the amount of active memory increases and so does the number of blocks written to disk (bo) and the CPU time that is dedicated to user processes (us).
Or while saving a large file directly to disk (forced by the dsync flag):
# vmstat -a 1 5
# dd if=/dev/zero of=dummy.out bs=1M count=1000 oflag=dsync
![VmStat Linux Disk Performance Monitoring](http://www.tecmint.com/wp-content/uploads/2015/08/VmStat-Linux-Disk-Performance-Monitoring.png)
VmStat Linux Disk Performance Monitoring
In this case, we can see an even larger number of blocks being written to disk (bo), which was to be expected, but also an increase in the CPU time spent waiting for I/O operations to complete (wa).
**Don't Miss**: [Vmstat Linux Performance Monitoring][3]
### Other Linux Tools ###
As mentioned in the introduction of this article, there are other tools that you can use to check the system status and utilization (they are provided not only by Red Hat but also by other major distributions through their officially supported repositories).
The sysstat package contains the following utilities:
- sar (collect, report, or save system activity information).
- sadf (display data collected by sar in multiple formats).
- mpstat (report processors related statistics).
- iostat (report CPU statistics and I/O statistics for devices and partitions).
- pidstat (report statistics for Linux tasks).
- nfsiostat (report input/output statistics for NFS).
- cifsiostat (report CIFS statistics).
- sa1 (collect and store binary data in the system activity daily data file).
- sa2 (write a daily report in the /var/log/sa directory).
The dstat tool, meanwhile, adds some extra features to the functionality provided by those utilities, along with more counters and flexibility. You can find an overall description of each tool by running yum info sysstat or yum info dstat, respectively, or by checking the individual man pages after installation.
To install both packages:
# yum update && yum install sysstat dstat
The main configuration file for sysstat is /etc/sysconfig/sysstat. You will find the following parameters in that file:
# How long to keep log files (in days).
# If value is greater than 28, then log files are kept in
# multiple directories, one for each month.
HISTORY=28
# Compress (using gzip or bzip2) sa and sar files older than (in days):
COMPRESSAFTER=31
# Parameters for the system activity data collector (see sadc manual page)
# which are used for the generation of log files.
SADC_OPTIONS="-S DISK"
# Compression program to use.
ZIP="bzip2"
When sysstat is installed, two cron jobs are added and enabled in /etc/cron.d/sysstat. The first job runs the system activity accounting tool every 10 minutes and stores the reports in /var/log/sa/saXX where XX is the day of the month.
Thus, /var/log/sa/sa05 will contain all the system activity reports from the 5th of the month. This assumes that we are using the default value in the HISTORY variable in the configuration file above:
*/10 * * * * root /usr/lib64/sa/sa1 1 1
The second job generates a daily summary of process accounting at 11:53 pm every day and stores it in /var/log/sa/sarXX files, where XX has the same meaning as in the previous example:
53 23 * * * root /usr/lib64/sa/sa2 -A
For example, you may want to output system statistics from 9:30 am through 5:30 pm of the sixth of the month to a .csv file that can easily be viewed using LibreOffice Calc or Microsoft Excel (this approach will also allow you to create charts or graphs):
# sadf -s 09:30:00 -e 17:30:00 -dh /var/log/sa/sa06 -- | sed 's/;/,/g' > system_stats20150806.csv
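The trailing sed command is what turns the semicolon-separated sadf -d output into comma-separated values; in isolation it behaves like this:

```shell
# sadf -d emits fields separated by ';'; translate them to ',' for CSV.
echo "hostname;interval;timestamp;%user" | sed 's/;/,/g'
# -> hostname,interval,timestamp,%user
```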
You could alternatively use the -j flag instead of -d in the sadf command above to output the system stats in JSON format, which could be useful if you need to consume the data in a web application, for example.
![Linux System Statistics](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-System-Statistics.png)
Linux System Statistics
Finally, let's see what dstat has to offer. Please note that if run without arguments, dstat assumes -cdngy by default (short for CPU, disk, network, memory pages, and system stats, respectively), and adds one line every second (execution can be interrupted at any time with Ctrl + C):
# dstat
![Linux Disk Statistics Monitoring](http://www.tecmint.com/wp-content/uploads/2015/08/dstat-command.png)
Linux Disk Statistics Monitoring
To output the stats to a .csv file, use the --output flag followed by a file name. Let's see how this looks in LibreOffice Calc:
![Monitor Linux Statistics Output](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Statistics-Output.png)
Monitor Linux Statistics Output
I strongly advise you to check out the man page of dstat, included with this article along with the man page of sysstat, in PDF format for your reading convenience. You will find several other options that will help you create custom and detailed system activity reports.
**Don't Miss**: [Sysstat Linux Usage Activity Monitoring Tool][4]
### Summary ###
In this guide we have explained how to use both native Linux tools and specific utilities provided with RHEL 7 in order to produce reports on system utilization. At one point or another, you will come to rely on these reports as best friends.
You will probably have used other tools that we have not covered in this tutorial. If so, feel free to share them with the rest of the community, along with any other suggestions / questions / comments that you may have, using the form below.
We look forward to hearing from you.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/how-to-check-disk-space-in-linux/
[2]:http://www.tecmint.com/check-linux-disk-usage-of-files-and-directories/
[3]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/
[4]:http://www.tecmint.com/install-sysstat-in-linux/


@ -0,0 +1,208 @@
ictlyh Translating
Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks
================================================================================
Some time ago I read that one of the distinguishing characteristics of an effective system administrator / engineer is laziness. It seemed a little contradictory at first but the author then proceeded to explain why:
![Automate Linux System Maintenance Tasks](http://www.tecmint.com/wp-content/uploads/2015/08/Automate-Linux-System-Maintenance-Tasks.png)
RHCE Series: Automate Linux System Maintenance Tasks Part 4
if a sysadmin spends most of his time solving issues and doing repetitive tasks, you can suspect he or she is not doing things quite right. In other words, an effective system administrator / engineer should develop a plan to perform repetitive tasks with as little action on his / her part as possible, and should foresee problems by using,
for example, the tools reviewed in Part 3 [Monitor System Activity Reports Using Linux Toolsets][1] of this series. Thus, although he or she may not seem to be doing much, it's because most of his / her responsibilities have been taken care of with the help of shell scripting, which is what we're going to talk about in this tutorial.
### What is a shell script? ###
In a few words, a shell script is nothing more and nothing less than a program that is executed step by step by a shell, which is another program that provides an interface layer between the Linux kernel and the end user.
By default, the shell used for user accounts in RHEL 7 is bash (/bin/bash). If you want a detailed description and some historical background, you can refer to [this Wikipedia article][2].
To find out more about the enormous set of features provided by this shell, you may want to check out its **man page**, which can be downloaded in PDF format at ([Bash Commands][3]). Other than that, it is assumed that you are familiar with Linux commands (if not, I strongly advise you to go through [A Guide from Newbies to SysAdmin][4] on **Tecmint.com** before proceeding). Now let's get started.
### Writing a script to display system information ###
For our convenience, let's create a directory to store our shell scripts:
# mkdir scripts
# cd scripts
And open a new text file named `system_info.sh` with your preferred text editor. We will begin by inserting a few comments at the top and some commands afterwards:
#!/bin/bash
# Sample script written for Part 4 of the RHCE series
# This script will return the following set of system information:
# -Hostname information:
echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m"
hostnamectl
echo ""
# -File system disk space usage:
echo -e "\e[31;43m***** FILE SYSTEM DISK SPACE USAGE *****\e[0m"
df -h
echo ""
# -Free and used memory in the system:
echo -e "\e[31;43m ***** FREE AND USED MEMORY *****\e[0m"
free
echo ""
# -System uptime and load:
echo -e "\e[31;43m***** SYSTEM UPTIME AND LOAD *****\e[0m"
uptime
echo ""
# -Logged-in users:
echo -e "\e[31;43m***** CURRENTLY LOGGED-IN USERS *****\e[0m"
who
echo ""
# -Top 5 processes as far as memory usage is concerned
echo -e "\e[31;43m***** TOP 5 MEMORY-CONSUMING PROCESSES *****\e[0m"
ps -eo %mem,%cpu,comm --sort=-%mem | head -n 6
echo ""
echo -e "\e[1;32mDone.\e[0m"
Next, give the script execute permissions:
# chmod +x system_info.sh
and run it:
# ./system_info.sh
Note that the headers of each section are shown in color for better visualization:
![Server Monitoring Shell Script](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Shell-Script.png)
Server Monitoring Shell Script
That functionality is provided by this command:
echo -e "\e[COLOR1;COLOR2m<YOUR TEXT HERE>\e[0m"
Where COLOR1 and COLOR2 are the foreground and background colors, respectively (more info and options are explained in this entry from the [Arch Linux Wiki][5]) and <YOUR TEXT HERE> is the string that you want to show in color.
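You can try the pattern on its own in a terminal; the escape sequences themselves are invisible when the output is captured, but the text renders red on yellow (31 and 43 are the standard ANSI codes used in the script above):

```shell
# Red foreground (31) on a yellow background (43); \e[0m resets attributes.
echo -e "\e[31;43mALERT\e[0m"
```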
### Automating Tasks ###
The tasks that you may need to automate may vary from case to case. Thus, we cannot possibly cover all of the possible scenarios in a single article, but we will present three classic tasks that can be automated using shell scripting:
**1)** update the local file database, 2) find (and alternatively delete) files with 777 permissions, and 3) alert when filesystem usage surpasses a defined limit.
Let's create a file named `auto_tasks.sh` in our scripts directory with the following content:
#!/bin/bash
# Sample script to automate tasks:
# -Update local file database:
echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m"
updatedb
if [ $? == 0 ]; then
echo "The local file database was updated correctly."
else
echo "The local file database was not updated correctly."
fi
echo ""
# -Find and / or delete files with 777 permissions.
echo -e "\e[4;32mLOOKING FOR FILES WITH 777 PERMISSIONS\e[0m"
# Enable either option (comment out the other line), but not both.
# Option 1: Delete files without prompting for confirmation. Assumes GNU version of find.
#find -type f -perm 0777 -delete
# Option 2: Ask for confirmation before deleting files. More portable across systems.
find -type f -perm 0777 -exec rm -i {} +
echo ""
# -Alert when file system usage surpasses a defined limit
echo -e "\e[4;32mCHECKING FILE SYSTEM USAGE\e[0m"
THRESHOLD=30
while read line; do
# This variable stores the file system path as a string
FILESYSTEM=$(echo $line | awk '{print $1}')
# This variable stores the use percentage (XX%)
PERCENTAGE=$(echo $line | awk '{print $5}')
# Use percentage without the % sign.
USAGE=${PERCENTAGE%?}
if [ $USAGE -gt $THRESHOLD ]; then
echo "The remaining available space in $FILESYSTEM is critically low. Used: $PERCENTAGE"
fi
done < <(df -h --total | grep -vi filesystem)
Please note that there is a space between the two `<` signs in the last line of the script.
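Two bash idioms in that script are worth a stand-alone look: the `${PERCENTAGE%?}` expansion, which strips the trailing character (the % sign) so the value can be compared numerically, and the `< <(command)` process substitution that feeds the loop. A minimal demonstration of the former:

```shell
# ${var%?} removes the last character of the value of var,
# turning "42%" into "42" so it can be used with -gt.
PERCENTAGE="42%"
USAGE=${PERCENTAGE%?}
echo "$USAGE"   # -> 42
```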
![Shell Script to Find 777 Permissions](http://www.tecmint.com/wp-content/uploads/2015/08/Shell-Script-to-Find-777-Permissions.png)
Shell Script to Find 777 Permissions
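If you want to convince yourself that the -perm test matches what you expect before letting the script delete anything, here is a self-contained check in a throwaway directory (the mktemp path and file names are purely illustrative):

```shell
# Create a scratch directory with one world-writable file and one normal
# file, and confirm that find's -perm 0777 test locates only the former.
dir=$(mktemp -d)
touch "$dir/wide_open" "$dir/normal"
chmod 777 "$dir/wide_open"
chmod 644 "$dir/normal"
find "$dir" -type f -perm 0777
```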
### Using Cron ###
To take efficiency one step further, you will not want to sit in front of your computer and run those scripts manually. Rather, you will use cron to schedule those tasks to run on a periodic basis and send the results to a predefined list of recipients via email, or save them to a file that can be viewed using a web browser.
The following script (filesystem_usage.sh) will run the well-known **df -h** command, format the output into an HTML table, and save it in the **report.html** file:
#!/bin/bash
# Sample script to demonstrate the creation of an HTML report using shell scripting
# Web directory
WEB_DIR=/var/www/html
# A little CSS and table layout to make the report look a little nicer
echo "<HTML>
<HEAD>
<style>
.titulo{font-size: 1em; color: white; background:#0863CE; padding: 0.1em 0.2em;}
table
{
border-collapse:collapse;
}
table, td, th
{
border:1px solid black;
}
</style>
<meta http-equiv='Content-Type' content='text/html; charset=UTF-8' />
</HEAD>
<BODY>" > $WEB_DIR/report.html
# View hostname and insert it at the top of the html body
HOST=$(hostname)
echo "Filesystem usage for host <strong>$HOST</strong><br>
Last updated: <strong>$(date)</strong><br><br>
<table border='1'>
<tr><th class='titulo'>Filesystem</td>
<th class='titulo'>Size</td>
<th class='titulo'>Use %</td>
</tr>" >> $WEB_DIR/report.html
# Read the output of df -h line by line
while read line; do
echo "<tr><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $1}' >> $WEB_DIR/report.html
echo "</td><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $2}' >> $WEB_DIR/report.html
echo "</td><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $5}' >> $WEB_DIR/report.html
echo "</td></tr>" >> $WEB_DIR/report.html
done < <(df -h | grep -vi filesystem)
echo "</table></BODY></HTML>" >> $WEB_DIR/report.html
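As an aside, the three awk calls per line could be collapsed into a single printf-style awk invocation; this is a hedged sketch of the idea, writing to an illustrative /tmp path rather than $WEB_DIR:

```shell
# Emit one complete <tr> row per df line with a single awk call;
# fields $1, $2 and $5 are the filesystem, size and use% columns.
df -h | grep -vi filesystem | \
  awk '{printf "<tr><td>%s</td><td>%s</td><td>%s</td></tr>\n", $1, $2, $5}' > /tmp/report-demo.html
```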
On our **RHEL 7** server (**192.168.0.18**), this looks as follows:
![Server Monitoring Report](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Report.png)
Server Monitoring Report
You can add to that report as much information as you want. To run the script every day at 1:30 pm, add the following crontab entry:
30 13 * * * /root/scripts/filesystem_usage.sh
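For reference, the five time fields in that entry break down as follows (standard crontab syntax, nothing specific to this script):

```
# min  hour  day-of-month  month  day-of-week  command
  30   13    *             *      *            /root/scripts/filesystem_usage.sh
```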
### Summary ###
You will most likely think of several other tasks that you want or need to automate; as you can see, using shell scripting will greatly simplify this effort. Feel free to let us know if you find this article helpful and don't hesitate to add your own ideas or comments via the form below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/
[2]:https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29
[3]:http://www.tecmint.com/wp-content/pdf/bash.pdf
[4]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/
[5]:https://wiki.archlinux.org/index.php/Color_Bash_Prompt
@ -1,218 +0,0 @@
FSSlc translating
RHCSA Series: Process Management in RHEL 7: Boot, Shutdown, and Everything in Between Part 5
================================================================================
We will start this article with an overall and brief revision of what happens since the moment you press the Power button to turn on your RHEL 7 server until you are presented with the login screen in a command line interface.
![RHEL 7 Boot Process](http://www.tecmint.com/wp-content/uploads/2015/03/RHEL-7-Boot-Process.png)
Linux Boot Process
**Please note that:**
1. the same basic principles apply, with perhaps minor modifications, to other Linux distributions as well, and
2. the following description is not intended to represent an exhaustive explanation of the boot process, but only the fundamentals.
### Linux Boot Process ###
1. The POST (Power On Self Test) initializes and performs hardware checks.
2. When the POST finishes, the system control is passed to the first stage boot loader, which is stored on either the boot sector of one of the hard disks (for older systems using BIOS and MBR), or a dedicated (U)EFI partition.
3. The first stage boot loader then loads the second stage boot loader, most usually GRUB (GRand Unified Boot Loader), which resides inside /boot and in turn loads the kernel and the initial RAM-based file system (also known as initramfs, which contains programs and binary files that perform the actions needed to ultimately mount the actual root filesystem).
4. We are presented with a splash screen that allows us to choose an operating system and kernel to boot:
![RHEL 7 Boot Screen](http://www.tecmint.com/wp-content/uploads/2015/03/RHEL-7-Boot-Screen.png)
Boot Menu Screen
5. The kernel sets up the hardware attached to the system and, once the root filesystem has been mounted, launches the process with PID 1, which in turn will initialize other processes and present us with a login prompt.
Note that if we wish to do so at a later time, we can examine the specifics of this process using the [dmesg command][1] and filtering its output with the tools that we have explained in previous articles of this series.
![Login Screen and Process PID](http://www.tecmint.com/wp-content/uploads/2015/03/Login-Screen-Process-PID.png)
Login Screen and Process PID
In the example above, we used the well-known ps command to display a list of current processes whose parent process (or in other words, the process that started them) is systemd (the system and service manager that most modern Linux distributions have switched to) during system startup:
# ps -o ppid,pid,uname,comm --ppid=1
Remember that the -o flag (short for format) allows you to present the output of ps in a customized format to suit your needs using the keywords specified in the STANDARD FORMAT SPECIFIERS section in man ps.
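For example, here is a quick self-contained illustration (the format keywords are standard; the particular columns chosen are just an example):

```shell
# Print PID, parent PID, niceness and command name of the current shell,
# using a custom ps output format:
ps -o pid,ppid,ni,comm -p $$
```
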
Another case in which you will want to define the output of ps instead of going with the default is when you need to find processes that are causing a significant CPU and / or memory load, and sort them accordingly:
# ps aux --sort=+pcpu # Sort by %CPU (ascending)
# ps aux --sort=-pcpu # Sort by %CPU (descending)
# ps aux --sort=+pmem # Sort by %MEM (ascending)
# ps aux --sort=-pmem # Sort by %MEM (descending)
# ps aux --sort=+pcpu,-pmem # Combine sort by %CPU (ascending) and %MEM (descending)
![http://www.tecmint.com/wp-content/uploads/2015/03/ps-command-output.png](http://www.tecmint.com/wp-content/uploads/2015/03/ps-command-output.png)
Customize ps Command Output
### An Introduction to SystemD ###
Few decisions in the Linux world have caused more controversy than the adoption of systemd by major Linux distributions. Systemd's advocates cite the following as its main advantages:
Read Also: [The Story Behind init and systemd][2]
1. Systemd allows more processing to be done in parallel during system startup (as opposed to the older SysVinit, which tends to be slower because it starts processes one by one, checks whether one depends on another, and waits for daemons to launch before more services can start), and
2. It provides dynamic resource management in a running system: services are started when needed (to avoid consuming system resources while they are not being used) instead of being launched without a valid reason during boot.
3. Backwards compatibility with SysVinit scripts.
Systemd is controlled by the systemctl utility. If you come from a SysVinit background, chances are you will be familiar with:
- the service tool, which -in those older systems- was used to manage SysVinit scripts, and
- the chkconfig utility, which served the purpose of updating and querying runlevel information for system services.
- shutdown, which you must have used several times to either restart or halt a running system.
The following table shows the similarities between the use of these legacy tools and systemctl:
注:表格
<table cellspacing="0" border="0">
<colgroup width="237"></colgroup>
<colgroup width="256"></colgroup>
<colgroup width="1945"></colgroup>
<tbody>
<tr>
<td align="left" height="25" bgcolor="#B7B7B7" style="border: 1px solid #000000;"><b><span style="color: black; font-family: Arial; font-size: small;">Legacy tool</span></b></td>
<td align="left" bgcolor="#B7B7B7" style="border: 1px solid #000000;"><b><span style="color: black; font-family: Arial; font-size: small;">Systemctl equivalent</span></b></td>
<td align="left" bgcolor="#B7B7B7" style="border: 1px solid #000000;"><b><span style="color: black; font-family: Arial; font-size: small;">Description</span></b></td>
</tr>
<tr class="alt">
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service name start</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl start name</span></td>
<td align="left" style="border: 1px solid #000000;">Start name (where name is a service)</td>
</tr>
<tr>
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service name stop</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl stop name</span></td>
<td align="left" style="border: 1px solid #000000;">Stop name</td>
</tr>
<tr class="alt">
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service name condrestart</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl try-restart name</span></td>
<td align="left" style="border: 1px solid #000000;">Restarts name (if it's already running)</td>
</tr>
<tr>
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service name restart</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl restart name</span></td>
<td align="left" style="border: 1px solid #000000;">Restarts name</td>
</tr>
<tr class="alt">
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service name reload</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl reload name</span></td>
<td align="left" style="border: 1px solid #000000;">Reloads the configuration for name</td>
</tr>
<tr>
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service name status</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl status name</span></td>
<td align="left" style="border: 1px solid #000000;">Displays the current status of name</td>
</tr>
<tr class="alt">
<td align="left" height="23" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service --status-all</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Arial;">Displays the status of all current services</span></td>
</tr>
<tr>
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">chkconfig name on</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl enable name</span></td>
<td align="left" style="border: 1px solid #000000;">Enable name to run on startup as specified in the unit file (the file to which the symlink points). The process of enabling or disabling a service to start automatically on boot consists of adding or removing symbolic links inside the /etc/systemd/system directory.</td>
</tr>
<tr class="alt">
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">chkconfig name off</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl disable name</span></td>
<td align="left" style="border: 1px solid #000000;">Disables name to run on startup as specified in the unit file (the file to which the symlink points)</td>
</tr>
<tr>
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">chkconfig --list name</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl is-enabled name</span></td>
<td align="left" style="border: 1px solid #000000;">Verify whether name (a specific service) is currently enabled</td>
</tr>
<tr class="alt">
<td align="left" height="23" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">chkconfig --list</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl --type=service</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Arial;">Displays all services and tells whether they are enabled or disabled</span></td>
</tr>
<tr>
<td align="left" height="23" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">shutdown -h now</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl poweroff</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Arial;">Power-off the machine (halt)</span></td>
</tr>
<tr class="alt">
<td align="left" height="23" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">shutdown -r now</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl reboot</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Arial;">Reboot the system</span></td>
</tr>
</tbody>
</table>
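The mappings in the table are regular enough to capture in a tiny shell helper. The function below is purely illustrative (it is not a real tool shipped with systemd or SysVinit); it simply prints the systemctl equivalent of a legacy service invocation:

```shell
# Hypothetical helper: print the systemctl equivalent of
# "service <name> <action>" (for study purposes only).
legacy2systemctl() {
    name=$1
    action=$2
    case $action in
        start|stop|restart|reload|status)
            echo "systemctl $action $name" ;;
        condrestart)
            echo "systemctl try-restart $name" ;;
        *)
            echo "no direct systemctl equivalent for '$action'" >&2
            return 1 ;;
    esac
}

legacy2systemctl sshd condrestart   # prints: systemctl try-restart sshd
```
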
Systemd also introduced the concepts of units (which can be either a service, a mount point, a device, or a network socket) and targets (which is how systemd manages to start several related processes at the same time, and can be considered, though not equal, the equivalent of runlevels in SysVinit-based systems).
### Summing Up ###
Other tasks related to process management include, but are not limited to, the ability to:
**1. Adjust the execution priority (as far as the use of system resources is concerned) of a process:**
This is accomplished through the renice utility, which alters the scheduling priority of one or more running processes. In simple terms, the scheduling priority is a feature that allows the kernel (in versions >= 2.6) to allocate system resources according to the assigned execution priority (aka niceness, in a range from -20 through 19) of a given process.
The basic syntax of renice is as follows:
# renice [-n] priority [-gpu] identifier
In the generic command above, the first argument is the priority value to be used, whereas the other argument can be interpreted as process IDs (which is the default setting), process group IDs, user IDs, or user names. A normal user (other than root) can only modify the scheduling priority of a process he or she owns, and only increase the niceness level (which means taking up less system resources).
![Renice Process in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/Process-Scheduling-Priority.png)
Process Scheduling Priority
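A safe way to see renice in action (no root needed, since we are only increasing the niceness of a process we own) is sketched below:

```shell
# Start a throwaway background process, raise its niceness from 0 to 5,
# then verify the change and clean up:
sleep 30 &
pid=$!
renice -n 5 -p "$pid"
ps -o ni= -p "$pid"     # the niceness column now reads 5
kill "$pid"
```
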
**2. Kill (or interrupt the normal execution) of a process as needed:**
In more precise terms, killing a process entails sending it a signal to either finish its execution gracefully (SIGTERM=15) or immediately (SIGKILL=9) through the [kill or pkill commands][3].
The difference between these two tools is that the former is used to terminate a specific process or a process group altogether, while the latter allows you to do the same based on name and other attributes.
In addition, pkill comes bundled with pgrep, which shows you the PIDs that will be affected should pkill be used. For example, before running:
# pkill -u gacanepa
It may be useful to view at a glance which PIDs are owned by gacanepa:
# pgrep -l -u gacanepa
![Find PIDs of User](http://www.tecmint.com/wp-content/uploads/2015/03/Find-PIDs-of-User.png)
Find PIDs of User
By default, both kill and pkill send the SIGTERM signal to the process. As we mentioned above, this signal can be caught or ignored (allowing the process to finish its execution or to keep running), so when you seriously need to stop a running process, you will need to specify the SIGKILL signal on the command line:
# kill -9 identifier # Kill a process or a process group
# kill -s SIGNAL identifier # Idem
# pkill -s SIGNAL identifier # Kill a process by name or other attributes
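The difference between SIGTERM and SIGKILL is easy to demonstrate: a process may ignore the former but never the latter. A small self-contained sketch:

```shell
# Launch a process that ignores SIGTERM:
sh -c 'trap "" TERM; sleep 30' &
pid=$!
sleep 1
kill "$pid"                 # SIGTERM: ignored by the process
sleep 1
kill -0 "$pid" 2>/dev/null && echo "still alive after SIGTERM"
kill -9 "$pid"              # SIGKILL: cannot be caught or ignored
```
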
### Conclusion ###
In this article we have explained the basics of the boot process in a RHEL 7 system, and analyzed some of the tools that are available to help you with managing processes using common utilities and systemd-specific commands.
Note that this list is not intended to cover all the bells and whistles of this topic, so feel free to add your own preferred tools and commands to this article using the comment form below. Questions and other comments are also welcome.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/rhcsa-exam-boot-process-and-process-management/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/dmesg-commands/
[2]:http://www.tecmint.com/systemd-replaces-init-in-linux/
[3]:http://www.tecmint.com/how-to-kill-a-process-in-linux/
@ -1,269 +0,0 @@
RHCSA Series: Using Parted and SSM to Configure and Encrypt System Storage Part 6
================================================================================
In this article we will discuss how to set up and configure local system storage in Red Hat Enterprise Linux 7 using classic tools and introducing the System Storage Manager (also known as SSM), which greatly simplifies this task.
![Configure and Encrypt System Storage](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-and-Encrypt-System-Storage.png)
RHCSA: Configure and Encrypt System Storage Part 6
Please note that we will introduce this topic in this article, but will continue its description and usage in the next one (Part 7) due to the vastness of the subject.
### Creating and Modifying Partitions in RHEL 7 ###
In RHEL 7, parted is the default utility to work with partitions, and will allow you to:
- Display the current partition table
- Manipulate (increase or decrease the size of) existing partitions
- Create partitions using free space or additional physical storage devices
It is recommended that before attempting the creation of a new partition or the modification of an existing one, you should ensure that none of the partitions on the device are in use (`umount /dev/partition`), and if youre using part of the device as swap you need to disable it (`swapoff -v /dev/partition`) during the process.
The easiest way to do this is to boot RHEL in rescue mode using installation media such as a RHEL 7 installation DVD or USB (Troubleshooting → Rescue a Red Hat Enterprise Linux system). Select Skip when you're prompted to choose an option to mount the existing Linux installation, and you will be presented with a command prompt where you can start typing the same commands shown below during the creation of an ordinary partition on a physical device that is not in use.
![RHEL 7 Rescue Mode](http://www.tecmint.com/wp-content/uploads/2015/04/RHEL-7-Rescue-Mode.png)
RHEL 7 Rescue Mode
To start parted, simply type:
# parted /dev/sdb
Where `/dev/sdb` is the device where you will create the new partition; next, type print to display the current drive's partition table:
![Create New Partition](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partition.png)
Create New Partition
As you can see, in this example we are using a virtual drive of 5 GB. We will now proceed to create a 4 GB primary partition and then format it with the xfs filesystem, which is the default in RHEL 7.
You can choose from a variety of file systems. You will need to manually create the partition with mkpart and then format it with mkfs.fstype as usual because mkpart does not support many modern filesystems out-of-the-box.
In the following example we will set a label for the device and then create a primary partition `(p)` on `/dev/sdb` that starts at 0% of the device and ends at 4000 MB (4 GB):
![Set Partition Name in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Label-Partition.png)
Label Partition Name
Next, we will format the partition as xfs and print the partition table again to verify that changes were applied:
# mkfs.xfs /dev/sdb1
# parted /dev/sdb print
![Format Partition in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Format-Partition-in-Linux.png)
Format Partition as XFS Filesystem
For older filesystems, you could use the resize command in parted to resize a partition. Unfortunately, this only applies to ext2, fat16, fat32, hfs, linux-swap, and reiserfs (if libreiserfs is installed).
Thus, the only way to resize a partition is by deleting it and creating it again (so make sure you have a good backup of your data!). No wonder the default partitioning scheme in RHEL 7 is based on LVM.
To remove a partition with parted:
# parted /dev/sdb print
# parted /dev/sdb rm 1
![Remove Partition in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-Partition-in-Linux.png)
Remove or Delete Partition
### The Logical Volume Manager (LVM) ###
Once a disk has been partitioned, it can be difficult or risky to change the partition sizes. For that reason, if we plan on resizing the partitions on our system, we should consider the possibility of using LVM instead of the classic partitioning system, where several physical devices can form a volume group that will host a defined number of logical volumes, which can be expanded or reduced without any hassle.
In simple terms, you may find the following diagram useful to remember the basic architecture of LVM.
![Basic Architecture of LVM](http://www.tecmint.com/wp-content/uploads/2015/04/LVM-Diagram.png)
Basic Architecture of LVM
#### Creating Physical Volumes, Volume Group and Logical Volumes ####
Follow these steps in order to set up LVM using classic volume management tools. Since you can expand this topic reading the [LVM series on this site][1], I will only outline the basic steps to set up LVM, and then compare them to implementing the same functionality with SSM.
**Note**: We will use the whole disks `/dev/sdb` and `/dev/sdc` as PVs (Physical Volumes), but it's entirely up to you whether you want to do the same.
**1. Create partitions `/dev/sdb1` and `/dev/sdc1` using 100% of the available disk space in /dev/sdb and /dev/sdc:**
# parted /dev/sdb print
# parted /dev/sdc print
![Create New Partitions](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partitions.png)
Create New Partitions
**2. Create 2 physical volumes on top of /dev/sdb1 and /dev/sdc1, respectively.**
# pvcreate /dev/sdb1
# pvcreate /dev/sdc1
![Create Two Physical Volumes](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Physical-Volumes.png)
Create Two Physical Volumes
Remember that you can use pvdisplay /dev/sd{b,c}1 to show information about the newly created PVs.
**3. Create a VG on top of the PV that you created in the previous step:**
# vgcreate tecmint_vg /dev/sd{b,c}1
![Create Volume Group in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Volume-Group.png)
Create Volume Group
Remember that you can use vgdisplay tecmint_vg to show information about the newly created VG.
**4. Create three logical volumes on top of VG tecmint_vg, as follows:**
# lvcreate -L 3G -n vol01_docs tecmint_vg [vol01_docs → 3 GB]
# lvcreate -L 1G -n vol02_logs tecmint_vg [vol02_logs → 1 GB]
# lvcreate -l 100%FREE -n vol03_homes tecmint_vg [vol03_homes → 6 GB]
![Create Logical Volumes in LVM](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Logical-Volumes.png)
Create Logical Volumes
Remember that you can use lvdisplay tecmint_vg to show information about the newly created LVs on top of VG tecmint_vg.
**5. Format each of the logical volumes with xfs (do NOT use xfs if youre planning on shrinking volumes later!):**
# mkfs.xfs /dev/tecmint_vg/vol01_docs
# mkfs.xfs /dev/tecmint_vg/vol02_logs
# mkfs.xfs /dev/tecmint_vg/vol03_homes
**6. Finally, mount them:**
# mount /dev/tecmint_vg/vol01_docs /mnt/docs
# mount /dev/tecmint_vg/vol02_logs /mnt/logs
# mount /dev/tecmint_vg/vol03_homes /mnt/homes
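These mount commands last only until the next reboot. To make the mounts persistent, entries along the following lines would go in /etc/fstab (a sketch using the mount points assumed in the steps above; note that xfs uses 0 for the fsck pass):

```
/dev/tecmint_vg/vol01_docs    /mnt/docs    xfs    defaults    0 0
/dev/tecmint_vg/vol02_logs    /mnt/logs    xfs    defaults    0 0
/dev/tecmint_vg/vol03_homes   /mnt/homes   xfs    defaults    0 0
```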
#### Removing Logical Volumes, Volume Group and Physical Volumes ####
**7. Now we will reverse the LVM implementation and remove the LVs, the VG, and the PVs:**
# lvremove /dev/tecmint_vg/vol01_docs
# lvremove /dev/tecmint_vg/vol02_logs
# lvremove /dev/tecmint_vg/vol03_homes
# vgremove /dev/tecmint_vg
# pvremove /dev/sd{b,c}1
**8. Now let's install SSM and see how to perform the above in ONLY 1 STEP!**
# yum update && yum install system-storage-manager
We will use the same names and sizes as before:
# ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 /mnt/docs /dev/sd{b,c}1
# ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 /mnt/logs /dev/sd{b,c}1
# ssm create -n vol03_homes -p tecmint_vg --fstype ext4 /mnt/homes /dev/sd{b,c}1
Yes! SSM will let you:
- initialize block devices as physical volumes
- create a volume group
- create logical volumes
- format LVs, and
- mount them using only one command
**9. We can now display the information about PVs, VGs, or LVs, respectively, as follows:**
# ssm list dev
# ssm list pool
# ssm list vol
![Check Information of PVs, VGs, or LVs](http://www.tecmint.com/wp-content/uploads/2015/04/Display-LVM-Information.png)
Check Information of PVs, VGs, or LVs
**10. As we already know, one of the distinguishing features of LVM is the possibility to resize (expand or decrease) logical volumes without downtime.**
Say we are running out of space in vol02_logs but have plenty of space in vol03_homes. We will resize vol03_homes to 4 GB and expand vol02_logs to use the remaining space:
# ssm resize -s 4G /dev/tecmint_vg/vol03_homes
Run ssm list pool again and take note of the free space in tecmint_vg:
![Check Volume Size](http://www.tecmint.com/wp-content/uploads/2015/04/Check-LVM-Free-Space.png)
Check Volume Size
Then do:
# ssm resize -s+1.99G /dev/tecmint_vg/vol02_logs
**Note**: The plus sign after the -s flag indicates that the specified value should be added to the present value.
**11. Removing logical volumes and volume groups is much easier with ssm as well. A simple,**
# ssm remove tecmint_vg
will return a prompt asking you to confirm the deletion of the VG and the LVs it contains:
![Remove Logical Volume and Volume Group](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-LV-VG.png)
Remove Logical Volume and Volume Group
### Managing Encrypted Volumes ###
SSM also provides system administrators with the capability of managing encryption for new or existing volumes. You will need the cryptsetup package installed first:
# yum update && yum install cryptsetup
Then issue the following command to create an encrypted volume. You will be prompted to enter a passphrase to maximize security:
# ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/docs /dev/sd{b,c}1
# ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/logs /dev/sd{b,c}1
# ssm create -n vol03_homes -p tecmint_vg --fstype ext4 --encrypt luks /mnt/homes /dev/sd{b,c}1
Our next task consists of adding the corresponding entries in /etc/fstab so that those logical volumes are available on boot. Rather than using the device path (/dev/something), we will use each LV's UUID (so that our devices will still be uniquely identified should we add other logical volumes or devices), which we can find out with the blkid utility:
# blkid -o value UUID /dev/tecmint_vg/vol01_docs
# blkid -o value UUID /dev/tecmint_vg/vol02_logs
# blkid -o value UUID /dev/tecmint_vg/vol03_homes
In our case:
![Find Logical Volume UUID](http://www.tecmint.com/wp-content/uploads/2015/04/Logical-Volume-UUID.png)
Find Logical Volume UUID
Next, create the /etc/crypttab file with the following contents (change the UUIDs for the ones that apply to your setup):
docs UUID=ba77d113-f849-4ddf-8048-13860399fca8 none
logs UUID=58f89c5a-f694-4443-83d6-2e83878e30e4 none
homes UUID=92245af6-3f38-4e07-8dd8-787f4690d7ac none
And insert the following entries in /etc/fstab. Note that device_name (/dev/mapper/device_name) is the mapper identifier that appears in the first column of /etc/crypttab.
# Logical volume vol01_docs:
/dev/mapper/docs /mnt/docs ext4 defaults 0 2
# Logical volume vol02_logs
/dev/mapper/logs /mnt/logs ext4 defaults 0 2
# Logical volume vol03_homes
/dev/mapper/homes /mnt/homes ext4 defaults 0 2
Now reboot (systemctl reboot) and you will be prompted to enter the passphrase for each LV. Afterwards you can confirm that the mount operation was successful by checking the corresponding mount points:
![Verify Logical Volume Mount Points](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-LV-Mount-Points.png)
Verify Logical Volume Mount Points
### Conclusion ###
In this tutorial we have started to explore how to set up and configure system storage using classic volume management tools and SSM, which also integrates filesystem and encryption capabilities in one package. This makes SSM an invaluable tool for any sysadmin.
If you have any questions or comments, feel free to use the form below to get in touch with us!
--------------------------------------------------------------------------------
via: http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/create-lvm-storage-in-linux/
@ -1,3 +1,5 @@
FSSlc translating
RHCSA Series: Using ACLs (Access Control Lists) and Mounting Samba / NFS Shares Part 7
================================================================================
In the last article ([RHCSA series Part 6][1]) we started explaining how to set up and configure local system storage using parted and ssm.
@ -209,4 +211,4 @@ via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/
[2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/
[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html
@ -0,0 +1,96 @@
Tickr一个开源的 Linux 桌面 RSS 新闻速递
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/rss-tickr.jpg)
**最新的!最新的!阅读关于它的一切!**
好了,我们今天要介绍的应用程序并不是旧报纸的二进制翻版,而是它能以一种很棒的方式,把最新的新闻推送到你的桌面上。
Tickr 是一个基于 GTK 的 Linux 桌面新闻速递应用,能在一个水平滚动条中显示最新头条新闻以及你最爱的 RSS 资讯的文章标题,你可以把它放置在桌面的任何地方。
请叫我Joey Calamezzo我把我的放在底部有电视新闻台的风格。
“到你了,子标题”
### RSS -还记得吗? ###
“谢谢段落结尾。”
在一个充斥着推送通知、社交媒体以及“点击诱饵”、哄骗我们去阅读各种“最新”“最惊奇”清单的时代RSS 看起来有一点过时了。
对我来说RSS是名副其实的真正简单的聚合。这是将消息通知给我的最简单最易于管理的方式。我可以在我愿意的时候管理和阅读一些东西没必要匆忙的去看以防这条微博消失在信息流中或者推送通知消失。
tickr的美在于它的实用性。你可以不断地有新闻滚动在屏幕的底部然后不时地瞥一眼。
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/tickr-close-up-750x58.jpg)
你不会有“阅读”或“标记所有为已读”的压力。当你看到一些你想读的东西你只需点击它将它在Web浏览器中打开。
### 开始设置 ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/tickr-rss-settings.jpg)
虽然 tickr 可以从 Ubuntu 软件中心安装,但它已经很久没有更新了。当你打开它那笨拙而不直观的控制面板时,这种被遗弃的感觉尤为明显。
打开它:
1. 右键单击tickr条
1. 转至编辑>首选项
1. 调整各种设置
这些选项和设置大都容易理解。但如果你充分了解它们,你几乎可以掌控一切,包括:
- 设置滚动速度
- 选择鼠标经过时的行为
- 资讯更新频率
- 字体,包括字体大小和颜色
- 分隔符“delineator”
- tickr在屏幕上的位置
- tickr条的颜色和不透明度
- 选择每种资讯显示多少文章
有个值得一提的“怪癖”:点击“应用”按钮只会更新 tickr 的屏幕预览;退出“首选项”窗口时,需要单击“确定”才能保存设置。
想让滚动条在屏幕上横向撑满,也需要相当多的调整,特别是在 Unity 桌面上。
按下“全宽按钮”能够让应用程序自动检测你的屏幕宽度。默认情况下,放置在顶部或底部时会留下 25 像素的间距(这是该程序诞生于过去的 GNOME 2.x 桌面时代的缘故)。只需在输入框中额外加上 25 像素即可弥补这个问题。
其他可供选择的选项包括:选择文章在哪个浏览器中打开;tickr 是否以常规窗口出现;是否显示时钟;以及应用程序多久检查一次资讯。
#### 添加资讯 ####
tickr 自带有超过 30 种不同的资讯源,从技术博客到主流新闻服务。
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/feed-picker-750x398.jpg)
你可以从中选择想在屏幕上显示的新闻资讯。如果你想添加自己的资讯,可以:
1. 右键单击tickr条
1. 转至文件>打开资讯
1. 输入资讯网址
1. 点击“添加/更新”按钮
1. 单击“确定”(选择)
如果想设置每个资讯在ticker中显示多少条文章可以去另一个首选项窗口修改“每个资讯最大读取N条文章”
### 在Ubuntu 14.04 LTS或更高版本上安装Tickr ###
要在 Ubuntu 14.04 LTS 或更高版本中安装,可以转到 Ubuntu 软件中心,或者直接点击下面的按钮。
- [点击此处进入Ubuntu软件中心安装tickr][1]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/06/tickr-open-source-desktop-rss-news-ticker
作者:[Joey-Elijah Sneddon][a]
译者:[xiaoyu33](https://github.com/xiaoyu33)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:apt://tickr
@ -0,0 +1,111 @@
在 VirtualBox 中使用 Docker Machine 管理主机
================================================================================
大家好,今天我们学习在 VirtualBox 中使用 Docker Machine 来创建和管理 Docker 主机。Docker Machine 是一个可以在我们的电脑上、云端以及数据中心里创建 Docker 主机的应用,之后用户可以使用 Docker 客户端对其进行配置。它为本地主机、数据中心的虚拟机以及云端的实例提供 Docker 服务。Docker Machine 支持 Windows、OSX 和 Linux并且以一个独立的二进制文件的形式安装。它使用与现有 Docker 工具相同的接口,让我们可以充分利用已经提供 Docker 基础框架的生态系统。只要一个命令,用户就能快速部署 Docker 容器。
本文列出一些简单的步骤用 Docker Machine 来部署 docker 容器。
### 1. 安装 Docker Machine ###
Docker Machine 完美支持所有 Linux 操作系统。首先我们需要从 [github][1] 下载最新版本的 Docker Machine本文使用 curl 作为下载工具Docker Machine 版本为 0.2.0。
**64 位操作系统:**
# curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine
**32 位操作系统:**
# curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine
下载完成后,我们需要给 **/usr/local/bin** 目录下的 **docker-machine** 文件添加可执行权限:
# chmod +x /usr/local/bin/docker-machine
确认是否成功安装了 docker-machine可以运行下面的命令它会打印 Docker Machine 的版本信息:
# docker-machine -v
![安装 Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png)
运行下面的命令,安装 Docker 客户端,以便于在我们自己的电脑上运行 Docker 命令:
# curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker
# chmod +x /usr/local/bin/docker
### 2. 创建 VirtualBox 虚拟机 ###
在 Linux 系统上安装完 Docker Machine 后,接下来我们可以创建一台 VirtualBox 虚拟机,运行下面的命令就可以了。--driver virtualbox 选项表示我们要在 VirtualBox 的虚拟机里面部署 docker最后的参数“linux”是虚拟机的名称。这个命令会下载 [boot2docker][2] iso它是个基于 Tiny Core Linux 的轻量级发行版,自带 Docker 程序,然后 docker-machine 命令会创建一个 VirtualBox 虚拟机。LCTT 译注:当然,我们也可以选择其他的虚拟机软件来运行这个 boot2docker 系统。)
# docker-machine create --driver virtualbox linux
![创建 Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-docker-machine.png)
测试下有没有成功运行 VirtualBox 和 Docker运行命令
# docker-machine ls
![Docker Machine List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-list.png)
如果执行成功,我们可以看到在 ACTIVE 那列下面会出现一个星号“*”。
### 3. 设置环境变量 ###
现在我们需要让 docker 与虚拟机通信,运行 docker-machine env <虚拟机名称> 来实现这个目的。
# eval "$(docker-machine env linux)"
# docker ps
这个命令会设置 TLS 认证的环境变量,每次重启机器或者重新打开一个会话都需要执行一下这个命令,我们可以看到它的输出内容:
# docker-machine env linux
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/Users/<your username>/.docker/machine/machines/dev
export DOCKER_HOST=tcp://192.168.99.100:2376
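如果不想每次新开会话都手动执行 eval可以把它追加到 shell 的启动文件里(示意写法,假设虚拟机名为 linux与上文一致

```shell
# 让每个新的 bash 会话自动设置 docker 环境变量
echo 'eval "$(docker-machine env linux)"' >> ~/.bashrc
```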
### 4. 运行 Docker 容器 ###
完成配置后我们就可以在 VirtualBox 上运行 docker 容器了。测试一下,在虚拟机里执行 **docker run busybox echo hello world** 命令,我们可以看到容器的输出信息。
# docker run busybox echo hello world
![运行 Docker 容器](http://blog.linoxide.com/wp-content/uploads/2015/05/running-docker-container.png)
### 5. 拿到 Docker 主机的 IP ###
我们可以执行下面的命令获取 Docker 主机的 IP 地址。
# docker-machine ip
![Docker IP 地址](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-ip-address.png)
### 6. 管理主机 ###
现在我们可以随心所欲地使用上述的 docker-machine 命令来不断创建主机了。
当你使用完 docker 时,可以运行 **docker-machine stop** 来停止所有主机,如果想开启所有主机,运行 **docker-machine start**
# docker-machine stop
# docker-machine start
你也可以只停止或开启一台主机:
$ docker-machine stop linux
$ docker-machine start linux
### 总结 ###
最后,我们使用 Docker Machine 成功在 VirtualBox 上创建并管理了一台 Docker 主机。Docker Machine 确实能让用户快速地在不同的平台上部署 Docker 主机,就像我们这里部署在 VirtualBox 上一样。这个 --driver virtualbox 驱动可以在本地机器上使用也可以在数据中心的虚拟机上使用。Docker Machine 驱动除了支持本地的 VirtualBox 之外,还支持远端的 Digital Ocean、AWS、Azure、VMware 以及其他基础设施。如果你有任何疑问,或者建议,请在评论栏中写出来,我们会不断改进我们的内容。谢谢,祝愉快。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/host-virtualbox-docker-machine/
作者:[Arun Pyasi][a]
译者:[bazz2](https://github.com/bazz2)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://github.com/docker/machine/releases
[2]:https://github.com/boot2docker/boot2docker

View File

@ -1,55 +0,0 @@
将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第一节 - 简介
================================================================================
*作者声明: 如果你是因为某种神迹而在没看标题的情况下点开了这篇文章,那么我想再重申一些东西...这是一篇评论文章。文中的观点都是我自己的不代表Phoronix和Michael的观点。它们完全是我自己的想法。
另外没错……这可能是一篇引战的文章。我希望社团成员们更沉稳一些因为我确实想在KDE和Gnome的社团上发起讨论反馈。因此当我想指出——我所看到的——一个瑕疵时我会尽量地做到具体而直接。这样相关的讨论也能做到同样的具体和直接。再次声明本文另一可选标题为“被[细纸片][1]千刀万剐”原文含paper cuts一词指易修复但烦人的缺陷译者注)。
现在,重申完毕……文章开始。
![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920)
当我把[《评价Fedora 22 KDE》][2]一文发给Michael时感觉很不是滋味。不是因为我不喜欢KDE或者不享受Fedora远非如此。事实上我刚开始想把我的T450s的系统换为Arch Linux时马上又决定放弃了因为我很享受fedora在很多方面所带来的便捷性。
我感觉很不是滋味的原因是Fedora的开发者花费了大量的时间和精力在他们的“工作站”产品上但是我却一点也没看到。在使用Fedora时我采用的并非那些主要开发者希望用户采用的那种使用方式因此我也就体验不到所谓的“Fedora体验”。它感觉就像一个人评价Ubuntu时用的却是Kubuntu评价OS X时用的却是Hackintosh或者评价Gentoo时用的却是Sabayon。根据大量Michael论坛的读者的说法它们在评价各种发行版时使用的都是默认设置的发行版——我也不例外。但是我还是认为这些评价应该在“真实”配置下完成当然我也知道在给定的情况下评论某些东西也的确是有价值的——无论是好是坏。
正是在怀着这种态度的情况下我决定到Gnome这个水坑里来泡泡澡。
但是我还要在此多加一个声明……我在这里所看到的KDE和Gnome都是打包在Fedora中的。OpenSUSE, Kubuntu, Arch等发行版的各个桌面可能有不同的实现方法使得我这里所说的具体的“痛处”跟你所用的发行版有所不同。还有虽然用了这个标题但这篇文章将会是一篇很沉重的非常“KDE”的文章。之所以这样称呼这篇文章是因为我在使用了Gnome之后才知道KDE的“剪纸”到底有多多。
### 登录界面 ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920)
我一般情况下都不会介意发行版装载它们自己的特别主题,因为一般情况下桌面看起来会更好看。可我今天可算是找到了一个例外。
第一印象很重要对吧那么GDMGnome Display ManageGnome显示管理器译者注下同。绝对干得漂亮。它的登录界面看起来极度简洁每一部分都应用了一致的设计风格。使用通用图标而不是输入框为它的简洁加了分。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920)
这并不是说Fedora 22 KDE——现在已经是SDDM而不是KDM了——的登录界面不好看但是看起来绝对没有它这样和谐。
问题到底出来在哪顶部栏。看看Gnome的截图——你选择一个用户然后用一个很小的齿轮简单地选择想登入哪个会话。设计很简洁它不挡着你的道儿实话讲如果你没注意的话可能完全会看不到它。现在看看那蓝色 blue有忧郁之意一语双关译者注的KDE截图顶部栏看起来甚至不像是用同一个工具渲染出来的它的整个位置的安排好像是某人想着“哎哟妈呀我们需要把这个选项扔在哪个地方……”之后决定下来的。
对于右上角的重启和关机选项也一样。为什么不单单用一个电源按钮,点击后会下拉出一个菜单,里面包括重启,关机,挂起的功能?按钮的颜色跟背景色不同肯定会让它更加突兀和显眼……但我可不觉得这样子有多好。同样,这看起来可真像“苦思”后的决定。
从实用观点来看GDM还要远远实用的多再看看顶部一栏。时间被列了出来还有一个音量控制按钮如果你想保持周围安静你甚至可以在登录前设置静音还有一个可用的按钮来实现高对比度缩放语音转文字等功能所有可用的功能通过简单的一个开关按钮就能得到。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920)
切换到上游upstream的Breeze主题……突然间我抱怨的大部分问题都被完善了。通用图标所有东西都放在了屏幕中央但不是那么重要的被放到了一边。因为屏幕顶部和底部都是同样的空白在中间也就酝酿出了一种美好的和谐。还是有一个输入框来切换会话但既然电源按钮被做成了通用图标那么这点还算可以原谅。当然gnome还是有一些很好的附加物例如音量小程序和可访问按钮但Breeze总归是Fedora的KDE主题的一个进步。
到Windows(Windows 8和10之前或者OS X中去你会看到类似的东西——非常简洁的“不挡你道”的锁屏与登录界面它们都没有输入框或者其它分散视觉的小工具。这是一种有效的不分散人注意力的设计。Fedora……默认装有Breeze。VDG在Breeze主题设计上干得不错。可别糟蹋了它。
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1
作者Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts
[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1
[3]:https://launchpad.net/hundredpapercuts

View File

@ -1,31 +0,0 @@
将GNOME作为我的Linux桌面的一周他们做对的与做错的 - 第二节 - GNOME桌面
================================================================================
### 桌面 ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920)
在我这一周的前五天中我都是直接手动登录进Gnome的——没有打开自动登录功能。在第五天的晚上每一次都要手动登录让我觉得很厌烦所以我就到用户管理器中打开了自动登录功能。下一次我登录的时候收到了一个提示“你的密钥链(keychain)未解锁请输入你的密码解锁”。在这时我才意识到了什么……每一次我通过GDM登录时——也就是KDE下钱包会弹出提示的时候——Gnome以前一直都在自动解锁我的密钥链当我绕开GDM的登录程序时Gnome才不得不介入让我手动解锁。
现在鄙人的陋见是如果你打开了自动登录功能那么你的密钥链也应当自动解锁——否则这功能还有何用无论如何你还是要输入你的密码况且在GDM登录界面你还能有机会选择要登录的会话。
但是,这点且不提,也就是在那一刻,我意识到要让这桌面感觉是它在**和我**一起工作是多么简单的一件事。当我通过SDDM登录KDE时甚至连启动界面都还没加载完成就有一个窗口弹出来遮挡了启动动画——因此启动动画也就被破坏了——提示我解锁我的KDE钱包或GPG钥匙环。
如果当前不存在钱包你就会收到一个创建钱包的提醒——就不能在创建用户的时候同时为我创建一个吗——接着它又让你在两种加密模式中选择一种甚至还暗示我们其中一种是不安全的Blowfish),既然是为了安全为什么还要我选择一个不安全的东西作者声明如果你安装了真正的KDE spin版本而不是仅仅安装了KDE的事后版本那么在创建用户时它就会为你创建一个钱包。但很不幸的是它不会帮你解锁并且它似乎还使用了更老的Blowfish加密模式而不是更新而且更安全的GPG模式。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920)
如果你选择了那个安全的加密模式GPG那么它会尝试加载GPG密钥……我希望你已经创建过一个了因为如果你没有那么你可又要被批一顿了。怎么样才能创建一个额……它不帮你创建一个……也不告诉你怎么创建……假如你真的搞明白了你应该使用KGpg来创建一个密钥接着在你就会遇到一层层的菜单和一个个的提示而这些菜单和提示只能让新手感到困惑。为什么你要问我GPG的二进制文件在哪天知道在哪如果不止一个你就不能为我选择一个最新的吗如果只有一个我再问一次为什么你还要问我
为什么你要问我要使用多大的密钥大小和加密算法你既然默认选择了2048和RSA/RSA为什么不直接使用如果你想让这些选项能够被改变那就把它们扔在下面的"Expert mode专家模式"按钮里去。这不仅仅关于使配置可被用户改变而是关于默认地把多余的东西扔在了用户面前。这种问题将会成为剩下文章的主题之一……KDE需要更好更理智的默认配置。配置是好的我很喜欢在使用KDE时的配置但它还需要知道什么时候应该什么时候不应该去提示用户。而且它还需要知道“嗯它是可配置的”不能做为默认配置做得不好的借口。用户最先接触到的就是默认配置不好的默认配置注定要失去用户。
让我们抛开密钥链的问题,因为我想我已经表达出了我的想法。
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2
作者Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,61 +0,0 @@
将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第三节 - GNOME应用
================================================================================
### 应用 ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_videos_show&w=1920)
这是一个基本上一潭死水的地方。每一个桌面环境都有一些非常好的和不怎么样的应用。再次强调Gnome把那些KDE完全错失的小细节给做对了。我不是想说KDE中有哪些应用不好。他们都能工作。但仅此而已。也就是说它们合格了但确实还没有达到甚至接近100分。
Gnome的在左边KDE的在右边。Dragon运行得很好清晰的标出了播放文件、URL或和光盘的按钮正如你在Gnome Videos中能做到的一样……但是在便利的文件名和用户的友好度方面Gnome多走了一小步。它默认显示了在你的电脑上检测到的所有影像文件不需要你做任何事情。KDE有Baloo——正如之前有Nepomuk——为什么不使用它们它们能列出可读取的影像文件……但却没被使用。
下一步……音乐播放器
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_rhythmbox_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_amarok_show&w=1920)
这两个应用左边的Rhythmbox和右边的Amarok都是打开后没有做任何修改直接截屏的。看到差别了吗Rhythmbox看起来像个音乐播放器直接了当排序文件的方法也很清晰它知道它应该是什么样的它的工作是什么就是播放音乐。
Amarok感觉就像是某个人为了展示而把所有的扩展和选项都尽可能地塞进一个应用程序中去而做出来的一个技术演示产品tech demos)或者一个库演示产品library demos)——而这些是不应该做为产品装进去的它只应该展示一些零碎的东西。而Amarok给人的感觉却是这样的好像是某个人想把每一个感觉可能很酷的东西都塞进一个媒体播放器里甚至都不停下来想“我想写啥来着一个播放音乐的应用
看看默认布局就行了。前面和中心都呈现了什么?一个可视化工具和维基集成(wikipedia integration)——占了整个页面最大和最显眼的区域。第二大的呢?播放列表。第三大,同时也是最小的呢?真正的音乐列表。这种默认设置对于一个核心应用来说,怎么可能称得上理智?
软件管理器它在最近几年当中有很大的进步而且接下来的几个月中很可能只能看到它更大的进步。不幸的是这是另一个地方KDE做得差一点点就能……但还是在终点线前摔了脸。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_software_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apper_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon_show&w=1920)
Gnome软件中心可能是我最新的最爱先放下牢骚等下再发。Muon, 我想爱上你真的。但你就是个设计上的梦魇。当VDG给你画设计草稿时模型在下面你看起来真漂亮。白色空间用得很好设计简洁类别列表也很好你的整个“不要分开做成两个应用程序”的设计都很不错。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon1_show&w=1920)
接着就有人为你写代码实现真正的UI但是我猜这些家伙当时一定是喝醉了。
我们来看看Gnome软件中心。正中间是什么软件软件截图和软件描述等等。Muon的正中心是什么白白浪费的大块白色空间。Gnome软件中心还有一个贴心便利特点那就是放了一个“运行“的按钮在那儿以防你已经安装了这个软件。便利性和易用性很重要啊大哥。说实话仅仅让Muon把东西都居中对齐了可能看起来的效果都要好得多。
Gnome软件中心沿着顶部的东西是什么像个标签列表所有软件已安装软件软件升级。语言简洁直接直指要点。Muon好吧我们有个”发现“这个语言表达上还算差强人意然后我们又有一个”已安装软件“然后就没有然后了。软件升级哪去了
好吧……开发者决定把升级独立分开成一个应用程序,这样你就得打开两个应用程序才能管理你的软件——一个用来安装,一个用来升级——自图形软件包管理器诞生以来,首次有这种破天荒的设计,与任何已有的软件中心的设计范例相违背。
我不想贴上截图给你们看因为我不想等下还得清理我的电脑如果你进入Muon安装了什么那么它就会在屏幕下方根据安装的应用名创建一个标签所以如果你一次性安装很多软件的话那么下面的标签数量就会慢慢的增长然后你就不得不手动检查清除它们因为如果你不这样做当标签增长到超过屏幕显示时你就不得不一个个找过去来才能找到最近正在安装的软件。想想在火狐浏览器打开50个标签。太烦人太不方便
我说过我会给Gnome一点打击我是认真的。Muon有一点做得比Gnome软件中心做得好。在Muon的设置栏下面有个“显示技术包”编辑器软件库非图形应用程序无AppData的应用等等AppData,软件包中的一个特殊文件,用于专门存储软件的信息,译注)。Gnome则没有。如果你想安装其中任何一项你必须跑到终端操作。我想这是他们做得不对的一点。我完全理解他们推行AppData的心情但我想他们太急了推行所有软件包带有AppData,是Gnome软件中心的目标之一译注。我是在想安装PowerTop而Gnome不显示这个软件时我才发现这点的——没有AppData,没有“显示技术包“设置。
更不幸的事实是你不能“用Apper就行了”自从……
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apperlocal_show&w=1920)
Apper对安装本地软件包的支持大约在Fedora 19时就中止了几乎两年了。我喜欢那种对细节与质量的关注。
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=3
作者Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,54 +0,0 @@
将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第四节 - GNOME设置
================================================================================
### Settings设置 ###
在这我要挑一挑几个特定KDE控制模块的毛病大部分原因是因为相比它们的对手GNOME来说糟糕得太可笑实话说真是悲哀。
第一个接招的?打印机。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920)
GNOME在左KDE在右。你知道左边跟右边的打印程序有什么区别吗当我在GNOME控制中心打开“打印机”时程序窗口弹出来了之后没有也没发生。而当我在KDE系统设置打开“打印机”时我收到了一条密码提示。甚至我都没能看一眼打印机呢我就必须先交出ROOT密码。
让我再重复一遍。在今天PolicyKit和Logind的日子里对一个应该是sudo的操作我依然被询问要求ROOT的密码。我安装系统的时候甚至都没设置root密码。所以我必须跑到Konsole去然后运行'sudo passwd root'命令这样我才能给root设一个密码这样我才能回到系统设置中的打印程序然后交出root密码然后仅仅是看一看哪些打印机可用。完成了这些工作后当我点击“添加打印机”时我再次收到请求ROOT密码的提示当我解决了它后再选择一个打印机和驱动时我再次收到请求ROOT密码的提示。仅仅是为了添加一个打印机到系统我就收到三次密码请求。
而在GNOME下添加打印机在点击打印机程序中的”解锁“之前我没有收到任何请求SUDO密码的提示。整个过程我只被请求过一次仅此而已。KDE求你了……采用GNOME的”解锁“模式吧。不到一定需要的时候不要发出提示。还有不管是哪个库只要它允许KDE应用程序绕过PolicyKit/Logind如果有的话并直接请求ROOT权限……那就把它封进箱里吧。如果这是个多用户系统那我要么必须交出ROOT密码要么我必须时时刻刻呆着以免有一个用户需要升级、更改或添加一个新的打印机。而这两种情况都是完全无法接受的。
有还一件事……
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920)
问一下论坛的朋友哪种方式看起来更简洁我在写这篇文章时意识到当有任何附加打印机就绪时Gnome打印机程序会把这一过程处理得非常简洁它们在左边放了一个竖直栏来列出这些打印机。而我在KDE添加第二台打印机时它突然增加出一个左边栏来。而在添加之前我脑海中已经有了一个恐怖的画面它会像图片文件夹显示预览图一样直接插入另外一个图标到界面里去。我很高兴也很惊讶的看到我是错的。但是事实是它直接“长出”另外一个从未存在的竖直栏彻底改变了它的界面布局而这样也称不上“好”。终究还是一种令人困惑奇怪而又不直观的设计。
打印机说得够多了……下一个接受我公开石刑的KDE系统设置是多媒体即Phonon。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920)
一如既往GNOME在左边KDE在右边。让我们先看看GNOME的系统设置先……眼睛从左到右从上到下对吧来吧就这样做。首先音量控制滑条。滑条中的蓝色条与空白条百分百清晰地消除了哪边是“音量增加”的困惑。在音量控制条后马上就是一个On/Off开关用来开关静音功能。Gnome的再次得分在于静音后能记住当前设置的音量而在点击音量增加按钮取消静音后能回到原来设置的音量中来。Kmixer,你个健忘的垃圾,我真的希望我能多讨论你。
继续输入输出和应用程序的标签选项每一个应用程序的音量随时可控Gnome每过一秒我爱你越深。均衡的选项设置声音配置和清晰地标上标志的“测试麦克风”选项。
我不清楚它能否以一种更干净更简洁的设计实现。是的它只是一个Gnome化的Pavucontrol但我想这就是重要的地方。Pavucontrol在这方面几乎完全做对了Gnome控制中心中的“声音”应用程序的改善使它向完美更进了一步。
Phonon该你上了。但开始前我想说我TM看到的是什么我知道我看到的是音频设备的权限列表但是它呈现的方式有点太坑。还有那些用户可能关心的那些东西哪去了拥有一个权限列表当然很好它也应该存在但问题是权限列表属于那种用户乱搞一两次之后就不会再碰的东西。它还不够重要或者说常用到可以直接放在正中间位置的程度。音量控制滑块呢对每个应用程序的音量控制功能呢那些用户使用最频繁的东西呢好吧它们在Kmix中一个分离的程序拥有它自己的配置选项……而不是在系统设置下……这样真的让“系统设置”这个词变得有点用词不当。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920)
上面展示的Gnome的网络设置。KDE的没有展示原因就是我接下来要吐槽的内容了。如果你进入KDE的系统设置里然后点击“网络”区域中三个选项中的任何一个你会得到一大堆的选项蓝牙设置Samba分享的默认用户名和密码说真的“连通性(Connectivity)”下面只有两个选项SMB的用户名和密码。TMD怎么就配得上“连通性”这么大的词浏览器身份验证控制只有Konqueror能用……一个已经倒闭的项目代理设置等等……我的wifi设置哪去了它们没在这。哪去了好吧它们在网络应用程序的设置里面……而不是在网络设置里……
KDE你这是要杀了我啊你有“系统设置”当凶器拿着它动手吧
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4
作者Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,39 +0,0 @@
将GNOME作为我的Linux桌面的一周他们做对的与做错的 - 第五节 - 总结
================================================================================
### 用户体验和最后想法 ###
当Gnome 2.x和KDE 4.x要正面交锋时……我相当开心的跳到其中。我爱的东西它们有恨的东西也有但总的来说它们使用起来还算是一种乐趣。然后Gnome 3.x来了带着一场Gnome Shell的戏剧。那时我就放弃了Gnome我尽我所能的避开它。当时它对用户是不友好的而且不直观它打破了原有的设计典范只为平板的统治世界做准备……而根据平板下跌的销量来看这样的未来不可能实现。
Gnome 3后续发布了八个版本后奇迹发生了。Gnome变得对用户友好了。变得直观了。它完美吗当然不了。我还是很讨厌它想推动的那种设计范例我讨厌它总想把工作流(work flow)强加给我但是在时间和耐心的作用下这两者都能被接受。只要你能够回头去看看Gnome Shell那外星人一样的界面然后开始跟Gnome的其它部分特别是控制中心互动你就能发现Gnome绝对做对了细节。对细节的关注
人们能适应新的界面设计范例能适应新的工作流——iPhone和iPad都证明了这一点——但真正一直让他们操心的是“纸片的割伤”paper cuts此处指易于修复但烦人的缺陷译注
它带出了KDE和Gnome之间最重要的一个区别。Gnome感觉像一个产品。像一种非凡的体验。你用它的时候觉得它是完整的你要的东西都在你的指尖。它让人感觉就像是一个拥有windows或者OS X那样桌面体验的Linux桌面版你要的都在里面而且它是被同一个目标一致的团队中的同一个人写出来的。天即使是一个应用程序发出的sudo请求都感觉是Gnome下的一个特意设计的部分就像在Windows下的一样。而在KDE它就像是任何应用程序都能创建的那种随机外观的弹窗。它不像是以系统的一部分这样的正式身份停下来说“嘿有个东西要请求管理员权限你要给它吗”。
KDE让人体验不到有凝聚力的体验。KDE像是在没有方向地打转感觉没有完整的体验。它就像是一堆东西往不同的的方向移动只不过恰好它们都有一个共同享有的工具包。如果开发者对此很开心那么好吧他们开心就好但是如果他们想提供最好体验的话那么就需要多关注那些小地方了。用户体验跟直观应当做为每一个应用程序的设计中心应当有一个视野知道KDE要提供什么——并且——知道它看起来应该是什么样的。
是不是有什么原因阻止我在KDE下使用Gnome磁盘管理 Rhythmbox? Evolution? 没有。没有。没有。但是这样说又错过了关键。Gnome和KDE都称它们为“桌面环境”。那么它们就应该是完整的环境这意味着他们的各个部件应该汇集并紧密结合在一起意味着你使用它们环境下的工具因为它们说“您在一个完整的桌面中需要的任何东西我们都支持。”说真的只有Gnome看起来能符合完整的要求。KDE在“汇集在一起”这一方面感觉就像个半成品更不用说提供“完整体验”中你所需要的东西。Gnome磁盘管理没有相应的对手——kpartionmanage要求ROOT权限。KDE不运行“首次用户注册”的过程原文No 'First Time User' run through.可能是指系统安装过程中KDE没有创建新用户的过程译注 现在也不过是在Kubuntu下引入了一个用户管理器。老天Gnome甚至提供了地图笔记日历和时钟应用。这些应用都是百分百要紧的吗当然不了。但是正是这些应用帮助Gnome推动“Gnome是一种完整丰富的体验”的想法。
我吐槽的KDE问题并非不可能解决决对不是这样的但是它需要人去关心它。它需要开发者为他们的作品感到自豪而不仅仅是为它们实现的功能而感到自豪——组织的价值可大了去了。别夺走用户设置选项的能力——GNOME 3.x就是因为缺乏配置选项的能力而为我所诟病但别把“好吧你想怎么设置就怎么设置”作为借口而不提供任何理智的默认设置。默认设置是用户将看到的东西它们是用户从打开软件的第一刻开始进行评判的关键。给用户留个好印象吧。
我知道KDE开发者们知道设计很重要这也是为什么Visual Design Group(视觉设计团体)存在的原因但是感觉好像他们没有让VDG充分发挥。所以KDE里存在组织上的缺陷。不是KDE没办法完整不是它没办法汇集整合在一起然后解决衰败问题只是开发者们没做到。他们瞄准了靶心……但是偏了。
还有,在任何人说这句话之前……千万别说“补丁很受欢迎啊"。因为当我开心的为个人提交补丁时只要开发者坚持以他们喜欢的却不直观的方式干事更多这样的烦事就会不断发生。这不关Muon有没有中心对齐。也不关Amarok的界面太丑。也不关每次我敲下快捷键后弹出的音量和亮度调节窗口占用了我一大块的屏幕“房地产”说真的有人会去缩小这些东西
这跟心态的冷漠有关跟开发者们在为他们的应用设计UI时根本就不多加思考有关。KDE团队做的东西都工作得很好。Amarok能播放音乐。Dragon能播放视频。Kwin或Qt和kdelibs似乎比Mutter/gtk更有力更效率仅根本我的电池电量消耗计算。非科学性测试。这些都很好很重要……但是它们呈现的方式也很重要。甚至可以说呈现方式是最重要的因为它是用户看到的和与之交互的东西。
KDE应用开发者们……让VDG参与进来吧。让VDG审查并核准每一个”核心“应用让一个VDG的UI/UX专家来设计应用的使用模式和使用流程以此保证其直观性。真见鬼不管你们在开发的是啥应用仅仅把它的模型发到VDG论坛寻求反馈甚至都可能都能得到一些非常好的指点跟反馈。你有这么好的资源在这现在赶紧用吧。
我不想说得好像我一点都不懂感恩。我爱KDE我爱那些志愿者们为了给Linux用户一个可视化的桌面而付出的工作与努力也爱可供选择的Gnome。正是因为我关心我才写这篇文章。因为我想看到更好的KDE我想看到它走得比以前更加遥远。而这样做需要每个人继续努力并且需要人们不再躲避批评。它需要人们对系统互动及系统崩溃的地方都保持诚实。如果我们不能直言批评如果我们不说”这真垃圾那么情况永远不会变好。
这周后我会继续使用Gnome吗可能不会。Gnome还在试着强迫我接受其工作流而我不想追随也不想遵循因为我在使用它的时候感觉变得不够高效因为它并不遵循我的思维模式。可是对于我的朋友们当他们问我“我该用哪种桌面环境”我可能会推荐Gnome特别是那些不大懂技术只要求“能工作”就行的朋友。根据目前KDE的形势来看这可能是我能说出的最狠毒的评估了。
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5
作者Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,154 @@
如何使用 Datadog 监控 NGINX - 第3部分
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)
如果你已经阅读了[前面的如何监控 NGINX][1],你应该知道从你网络环境的几个指标中可以获取多少信息,而且你也看到了从 NGINX 中收集指标是多么容易。但要实现对 NGINX 全面、持续的监控,你需要一个强大的监控系统来存储指标并将它们可视化,当异常发生时能提醒你。在这篇文章中,我们将向你展示如何使用 Datadog 安装 NGINX 监控,以便你可以在定制的仪表盘中查看这些指标:
![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png)
Datadog 允许你围绕单个主机、服务、流程和指标,或者它们几乎任意的组合,来建立图形和警报。例如,你可以监控某个可用区内的所有 NGINX 主机,或者监控带有特定标签的所有主机的某个关键指标。本文将告诉你如何:
- 在 Datadog 仪表盘上监控 NGINX 指标,并与其他所有系统的指标放在一起
- 当一个关键指标急剧变化时设置自动警报来通知你
### 配置 NGINX ###
为了收集 NGINX 指标,首先需要确保 NGINX 已启用 status 模块和一个报告 status 指标的 URL。[配置开源 NGINX][2]和 [NGINX Plus][3] 这两篇文章一步一步展示了如何配置。
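对开源版 NGINX 来说,启用 status 模块大致只需要在 server 块中加入类似下面的片段(这只是一个常见写法的示意,具体步骤以上面链接的文章为准;路径 /nginx_status 与后文代理配置中的 nginx_status_url 一致):

```nginx
# 在 server 块中暴露 stub_status 指标
location /nginx_status {
    stub_status on;     # 输出活动连接数、已处理请求数等基本指标
    access_log off;     # 不把指标抓取请求写入访问日志
    allow 127.0.0.1;    # 只允许本机(例如本机上的监控代理)访问
    deny all;
}
```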
### 整合 Datadog 和 NGINX ###
#### 安装 Datadog 代理 ####
Datadog 代理是[一个开源软件][4],它能收集和报告你主机的指标,这样你就可以使用 Datadog 来查看和监控它们。安装这个代理通常[只需要一个命令][5]。
只要你的代理启动并运行着,你会看到你主机的指标报告[在你 Datadog 账号下][6]。
![Datadog infrastructure list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png)
#### 配置 Agent ####
接下来,你需要为代理创建一个简单的 NGINX 配置文件。你系统中的代理配置目录位置可以[在这儿][7]找到。
在配置目录的 conf.d/nginx.yaml.example 中,你会发现[一个示例配置文件][8],编辑它,为每个 NGINX 实例提供 status URL 和可选的标签:
init_config:
instances:
- nginx_status_url: http://localhost/nginx_status/
tags:
- instance:foo
一旦你修改了 status URLs 和其他标签,将配置文件保存为 conf.d/nginx.yaml。
#### 重启代理 ####
你必须重新启动代理程序来加载新的配置文件。重新启动命令 [在这里][9] 根据平台的不同而不同。
#### 检查配置文件 ####
要检查 Datadog 和 NGINX 是否正确整合,运行 Datadog 的信息命令。每个平台使用的命令[看这儿][10]。
如果配置是正确的,你会看到这样的输出:
Checks
======
[...]
nginx
-----
- instance #0 [OK]
- Collected 8 metrics & 0 events
#### 安装整合 ####
最后,在你的 Datadog 帐户里面开启 NGINX 整合。这非常简单,你只需在 [NGINX 集成设置][11]页面中点击“Install Integration”按钮。
![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png)
### 指标! ###
一旦代理开始报告 NGINX 指标,你会在你的 Datadog 可用仪表盘列表中看到[一个 NGINX 仪表盘][12]。
基本的 NGINX 仪表盘显示了[我们的 NGINX 监控介绍][13]中重点提到的几个关键指标。(一些指标,特别是请求处理时间,需要进行日志分析,而 Datadog 不提供。)
通过增加额外的图形、加入 NGINX 之外的重要指标,你可以轻松创建一个全面的仪表盘来监控你的整个 Web 环境。例如,你可能想监视 NGINX 主机的主机层host-level指标如系统负载。要构建自定义的仪表盘只需点击仪表盘右上角的齿轮并选择“Clone Dash”来克隆一个默认的 NGINX 仪表盘。
![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png)
你也可以使用 Datadog 的 [Host Maps][14] 在更高的层面上监控你的 NGINX 实例,例如,按照 CPU 使用率给你所有的 NGINX 主机着色color-coding来辨别潜在热点。
![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png)
### NGINX 指标警报 ###
一旦 Datadog 捕获并可视化了你的指标,你可能会希望建立一些自动监控,来密切关注你的指标,并在有问题时提醒你。下面将介绍一个典型的例子:一个在 NGINX 吞吐量突然下降时提醒你的指标监控器。
#### 监控 NGINX 吞吐量 ####
Datadog 指标警报可以是基于阈值的threshold-based当指标超过设定值时警报或者基于变化的change-based当指标的变化超过一定范围时警报。在这种情况下我们会采取后一种方式当每秒传入的请求急剧下降时提醒我们。下降往往意味着有问题。
1. **创建一个新的指标监控**。从 Datadog 的“Monitors”下拉列表中选择“New Monitor”。选择“Metric”作为监视器类型。
![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png)
2. **定义你的指标监视器**。我们想知道 NGINX 每秒总请求量的下降情况,所以我们把感兴趣的指标定义为整个基础设施中 nginx.net.request_per_s 的总和。
![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png)
3. **设置指标警报条件**。我们想要在变化时警报而不是在达到某个固定值时警报所以我们选择“Change Alert”。我们设置监控为无论何时请求量下降 30% 以上时警报。这里我们使用一分钟的数据窗口来表示“当前”的指标值,将该窗口内的平均值与 10 分钟之前的指标值作比较来计算变化。
![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png)
4. **自定义通知**。如果 NGINX 的请求量下降,我们想要通知我们的团队。在这个例子中,我们会给 ops 团队的聊天室发送通知并呼叫值班工程师。在“Say whats happening”中我们为监控器命名并添加一条随通知一起发送的短消息建议从哪里开始调查。我们用 @mention 提及 ops 团队使用的聊天室,并用 @pagerduty [将警报发送给 PagerDuty][15]。
![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png)
5. **保存监控器**。点击页面底部的“Save”按钮。现在你已经在监控一个关键的 NGINX [工作指标][16],当它急剧下降时,就会呼叫值班工程师。
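第 3 步里“下降 30% 以上”的判断,本质上就是拿当前一分钟窗口的平均值和 10 分钟前的值算一个变化百分比。下面是一个纯粹用来说明这个算术的小脚本(数值是虚构的):

```shell
# 计算指标的变化百分比:(当前值 - 之前值) / 之前值 * 100
change_pct() {
    # $1 = 10 分钟前的值, $2 = 当前一分钟窗口的平均值
    awk -v prev="$1" -v now="$2" 'BEGIN { printf "%.0f\n", (now - prev) / prev * 100 }'
}

change_pct 1000 650   # -35下降 35%,超过 30% 的阈值,会触发警报
change_pct 1000 900   # -10下降 10%,不会触发
```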
### 结论 ###
在这篇文章中,我们已经通过整合 NGINX 与 Datadog 来可视化你的关键指标,并当你的网络基础架构有问题时会通知你的团队。
如果你一直跟着使用你自己的 Datadog 账号,你现在应该对你的 Web 环境有了更好的可视性,也有能力针对你的环境、你的使用模式、以及对你的组织最有价值的指标创建自动监控。
如果你还没有 Datadog 帐户,你可以注册[免费试用][17],并开始监视你的基础架构,应用程序和现在的服务。
----------
这篇文章的来源在 [GitHub][18] 上。有问题、发现错误或想补充内容?请[联系我们][19]。
------------------------------------------------------------
via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
作者K Young
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[strugglingyouth](https://github.com/strugglingyouth)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[2]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[3]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus
[4]:https://github.com/DataDog/dd-agent
[5]:https://app.datadoghq.com/account/settings#agent
[6]:https://app.datadoghq.com/infrastructure
[7]:http://docs.datadoghq.com/guides/basic_agent_usage/
[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example
[9]:http://docs.datadoghq.com/guides/basic_agent_usage/
[10]:http://docs.datadoghq.com/guides/basic_agent_usage/
[11]:https://app.datadoghq.com/account/settings#integrations/nginx
[12]:https://app.datadoghq.com/dash/integration/nginx
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/
[15]:https://www.datadoghq.com/blog/pagerduty/
[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up
[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md
[19]:https://github.com/DataDog/the-monitor/issues

View File

@ -1,129 +0,0 @@
如何更新Linux内核提升系统性能
================================================================================
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/update-linux-kernel-644x373.jpg?2c3c1f)
[Linux内核][1]的开发速度目前是空前的大概每2到3个月就会有一个主要的版本发布。每个版本都带来一些新的功能和改进可以让很多人的计算更快、更有效率或者在其他方面更好。
问题是,你不能在这些内核发布的时候马上使用它们,你得等到你的发行版发布新内核。我们先前介绍过[定期更新内核的好处][2],你不必等到那时,我们会向你展示该怎么做。
> 免责声明: 我们先前的一些文章已经提到过,升级内核会带来(很小的)破坏你系统的风险。在这种情况下,通常可以通过旧内核来使系统工作,但是有时还是不行。因此我们对系统的任何损坏都不负责,你需要自己承担风险!
### 预备工作 ###
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/linux_kernel_arch.jpg?2c3c1f)
要更新你的内核你首先要确定你使用的是32位还是64位的系统。打开终端并运行
uname -a
检查一下输出的是x86_64还是i686。如果是x86_64你运行的就是64位系统否则就是32位系统。记住这个结果它很重要。
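上面的架构判断可以写成一个小的 shell 片段仅作示意架构名对应后文Ubuntu主线PPA文件名中的 amd64/i386 字样,其他架构这里简单标为 unknown

```shell
# 把 uname -m 的输出映射为要下载的内核包架构(示意)
kernel_arch() {
    case "$1" in
        x86_64) echo "amd64" ;;       # 64 位系统
        i686|i386) echo "i386" ;;     # 32 位系统
        *) echo "unknown" ;;          # 其他架构
    esac
}

kernel_arch "$(uname -m)"
```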
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/kernel_latest_version.jpg?2c3c1f)
接下来访问[官方Linux内核网站][3]它会告诉你目前稳定内核的版本。如果你喜欢你可以尝试发布候选版release candidate但它们比稳定版少了很多测试。除非你确定需要否则就用稳定内核。
### Ubuntu指导 ###
对Ubuntu及其衍生版的用户而言升级内核非常简单这要感谢Ubuntu主线内核PPA。虽然官方称其为PPA但你不能像其他PPA一样将它添加到你的软件源列表中并指望它自动升级你的内核它实际上只是一个简单的网页你可以从中下载想要的内核。
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_new_kernels.jpg?2c3c1f)
现在访问[内核PPA网页][4]并滚动到底部。列表的最下面是最新发布的候选版本你可以从名字中的“rc”字样认出它们而它们的上面就是最新的稳定版写下本文时最新的稳定版是4.1.2。点击它你会看到几个选项。你需要下载3个文件并保存到单独的文件夹中如果你喜欢的话可以放在下载文件夹中这样就可以将它们与其它文件相互隔离
- 针对架构的含“generic”的头文件我这里是64位或者“amd64”
- 中间的头文件在文件名末尾有“all”
- 针对架构的含“generic”内核文件再说一次我会用“amd64”但是你如果用32位的你需要使用“i686”
你还会看到文件名中含有“lowlatency”的文件可以下载但最好忽略它们。这些文件相对不太稳定只是为那些通用generic文件不能满足低延迟需求例如录音的人准备的。再说一次首选通用版除非你有特定的任务需求不能被很好地满足。一般的游戏和网络浏览不是使用低延时版的理由。
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_install_kernel.jpg?2c3c1f)
你把它们放在各自的文件夹下,对么?现在打开终端,使用
cd
命令到新创建的文件夹下,像
cd /home/user/Downloads/Kernel
接着运行:
sudo dpkg -i *.deb
这个命令会标记文件夹中所有的“.deb”文件为“待安装”接着执行安装。这是推荐的安装方法因为如果一次只安装一个文件总会报出依赖问题这个方法则可以避免这个问题。如果你不清楚cd和sudo是什么可以快速地看一下[Linux基本命令][5]这篇文章。
安装完成后,**重启**你的系统这时应该就会运行刚安装的内核了你可以在命令行中使用uname -a来检查输出。
### Fedora指导 ###
如果你使用的是Fedora或者它的衍生版过程跟Ubuntu很类似。不同的是文件获取的位置不同安装的命令也不同。
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/fedora_new_kernels.jpg?2c3c1f)
查看[Fedora最新内核构建][6]列表。选取列表中最新的稳定版然后滚动到下面依据你的系统架构选择i686或者x86_64版。你需要下载下面这些文件并保存到同一个目录下比如下载目录下的“Kernel”文件夹
- kernel
- kernel-core
- kernel-headers
- kernel-modules
- kernel-modules-extra
- kernel-tools
- perf 和 python-perf可选
如果你的系统是i68632位同时你有4GB或者更大的内存你需要下载所有这些文件的PAE版本。PAE是一种用于32位系统的地址扩展技术它允许你使用超过3GB的内存。
现在使用
cd
命令进入文件夹,像这样
cd /home/user/Downloads/Kernel
接着运行下面的命令来安装所有的文件
yum --nogpgcheck localinstall *.rpm
最后**重启**你的系统,这样你就可以运行新的内核了!
### 使用 Rawhide ###
另外一个方案是Fedora用户也可以[切换到Rawhide][7]它会自动将所有的包更新到最新版本包括内核。然而Rawhide经常会破坏系统尤其是在开发周期早期它**不应该**在你日常使用的系统中使用。
### Arch指导 ###
[Arch][8]用户总是能用上最新和最棒的稳定版或者相当接近的版本。如果你想更接近最新发布的稳定版可以启用测试库提前2到3周获取到主要的更新。
要这么做,用[你喜欢的编辑器][9]以sudo权限打开下面的文件
/etc/pacman.conf
接着取消注释带有testing的三行删除行前面的井号。如果你想要启用multilib仓库就对multilib-testing做同样的事情。如果想要了解更多参考[这个Arch的wiki页面][10]。
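取消注释后,/etc/pacman.conf 中相应的部分看起来大致像这样示意片段mirrorlist 路径以你系统中的实际注释内容为准):

```ini
[testing]
Include = /etc/pacman.d/mirrorlist

[community-testing]
Include = /etc/pacman.d/mirrorlist
```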
升级内核并不容易(这是有意设计的),但是它会给你带来很多好处。只要你的新内核不会破坏任何东西,你就可以享受它带来的性能提升、更好的效率、更多的硬件支持和潜在的新特性。尤其是你正在使用相对较新的硬件时,升级内核可以帮助到它。
**怎么升级内核这篇文章帮助到你了么?你认为你所喜欢的发行版对内核的发布策略应该是怎样的?**。在评论栏让我们知道!
--------------------------------------------------------------------------------
via: http://www.makeuseof.com/tag/update-linux-kernel-improved-system-performance/
作者:[Danny Stieben][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.makeuseof.com/tag/author/danny/
[1]:http://www.makeuseof.com/tag/linux-kernel-explanation-laymans-terms/
[2]:http://www.makeuseof.com/tag/5-reasons-update-kernel-linux/
[3]:http://www.kernel.org/
[4]:http://kernel.ubuntu.com/~kernel-ppa/mainline/
[5]:http://www.makeuseof.com/tag/an-a-z-of-linux-40-essential-commands-you-should-know/
[6]:http://koji.fedoraproject.org/koji/packageinfo?packageID=8
[7]:http://www.makeuseof.com/tag/bleeding-edge-linux-fedora-rawhide/
[8]:http://www.makeuseof.com/tag/arch-linux-letting-you-build-your-linux-system-from-scratch/
[9]:http://www.makeuseof.com/tag/nano-vs-vim-terminal-text-editors-compared/
[10]:https://wiki.archlinux.org/index.php/Pacman#Repositories

View File

@ -0,0 +1,126 @@
如何使用Weave以及Docker搭建Nginx反向代理/负载均衡服务器
================================================================================
Hi今天我们将会学习如何使用Weave和Docker搭建Nginx反向代理/负载均衡服务器。Weave可以创建一个虚拟网络将跨主机部署的Docker容器连接在一起并使它们自动暴露给外部世界。它让我们更加专注于应用的开发而不是基础架构。Weave提供了一个如此棒的环境仿佛它的所有容器都属于同个网络不需要端口/映射/连接等的配置。容器中的应用提供的服务在weave网络中可以轻易地被外部世界访问不论你的容器运行在哪里。在这个教程里我们将会使用weave快速并且轻易地将nginx web服务器部署为一个负载均衡器反向代理一个运行在Amazon Web Services里面多个节点上的docker容器中的简单php应用。这里我们将会介绍WeaveDNS它提供一个简单的方式让容器利用主机名找到彼此不需要改变代码并且能够告诉其他容器连接到这些主机名。
在这篇教程里我们需要一个运行的容器集合来配置nginx负载均衡服务器。最简单轻松的方法就是使用Weave在ubuntu的docker容器中搭建nginx负载均衡服务器。
### 1. 搭建AWS实例 ###
首先我们需要搭建Amazon Web Services实例这样才能在ubuntu下用weave跑docker容器。我们将会使用[AWS CLI][1]来搭建和配置两个AWS EC2实例。在这里我们使用最小的有效实例t1.micro。我们需要一个有效的**Amazon Web Services账户**用于AWS命令行界面的搭建和配置。我们先在AWS命令行界面下使用下面的命令将github上的weave仓库克隆下来。
$ git clone http://github.com/fintanr/weave-gs
$ cd weave-gs/aws-nginx-ubuntu-simple
在克隆完仓库之后我们执行下面的脚本这个脚本将会部署两个t1.micro实例每个实例中都是ubuntu作为操作系统并用weave跑着docker容器。
$ sudo ./demo-aws-setup.sh
我们以后会用到这些实例的IP地址。这些地址储存在一个weavedemo.env文件中这个文件是在执行demo-aws-setup.sh脚本期间创建的。为了获取这些IP地址我们需要执行下面的命令它会输出类似下面的信息。
$ cat weavedemo.env
export WEAVE_AWS_DEMO_HOST1=52.26.175.175
export WEAVE_AWS_DEMO_HOST2=52.26.83.141
export WEAVE_AWS_DEMO_HOSTCOUNT=2
export WEAVE_AWS_DEMO_HOSTS=(52.26.175.175 52.26.83.141)
请注意这些不是固定的IP地址AWS会为我们的实例动态地分配IP地址。
我们在bash下执行下面的命令使环境变量生效。
. ./weavedemo.env
### 2. 启动Weave and WeaveDNS ###
在安装完实例之后我们将会在每台主机上启动weave以及weavedns。Weave以及weavedns使得我们能够轻易地将容器部署到一个全新的基础架构以及配置中 不需要改变代码也不需要去理解像Ambassador容器以及Link机制之类的概念。下面是在第一台主机上启动weave以及weavedns的命令。
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
$ sudo weave launch
$ sudo weave launch-dns 10.2.1.1/24
下一步我也准备在第二台主机上启动weave以及weavedns。
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2
$ sudo weave launch $WEAVE_AWS_DEMO_HOST1
$ sudo weave launch-dns 10.2.1.2/24
### 3. 启动应用容器 ###
现在我们准备跨两台主机启动六个容器这两台主机都用Apache2 Web服务实例跑着简单的php网站。为了在第一个Apache2 Web服务器实例跑三个容器 我们将会使用下面的命令。
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
$ sudo weave run --with-dns 10.3.1.1/24 -h ws1.weave.local fintanr/weave-gs-nginx-apache
$ sudo weave run --with-dns 10.3.1.2/24 -h ws2.weave.local fintanr/weave-gs-nginx-apache
$ sudo weave run --with-dns 10.3.1.3/24 -h ws3.weave.local fintanr/weave-gs-nginx-apache
在那之后,我们将会在第二个实例上启动另外三个容器,请使用下面的命令。
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2
$ sudo weave run --with-dns 10.3.1.4/24 -h ws4.weave.local fintanr/weave-gs-nginx-apache
$ sudo weave run --with-dns 10.3.1.5/24 -h ws5.weave.local fintanr/weave-gs-nginx-apache
$ sudo weave run --with-dns 10.3.1.6/24 -h ws6.weave.local fintanr/weave-gs-nginx-apache
注意: 在这里,--with-dns选项告诉容器使用weavedns来解析主机名-h x.weave.local则使得weavedns能够解析指定主机。
### 4. 启动Nginx容器 ###
在应用容器如预期般运行之后我们将会启动nginx容器它将会在六个应用容器之间轮询并提供反向代理或者负载均衡。为了启动nginx容器请使用下面的命令。
ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
$ sudo weave run --with-dns 10.3.1.7/24 -ti -h nginx.weave.local -d -p 80:80 fintanr/weave-gs-nginx-simple
因此我们的nginx容器在$WEAVE_AWS_DEMO_HOST1上公开地暴露成为一个http服务器。
### 5. 测试负载均衡服务器 ###
为了测试我们的负载均衡服务器是否可以工作我们执行一段可以发送http请求给nginx容器的脚本。我们将会发送6个请求这样我们就能看到nginx在一次轮询中把请求依次转发给每台web服务器。
$ ./access-aws-hosts.sh
{
"message" : "Hello Weave - nginx example",
"hostname" : "ws1.weave.local",
"date" : "2015-06-26 12:24:23"
}
{
"message" : "Hello Weave - nginx example",
"hostname" : "ws2.weave.local",
"date" : "2015-06-26 12:24:23"
}
{
"message" : "Hello Weave - nginx example",
"hostname" : "ws3.weave.local",
"date" : "2015-06-26 12:24:23"
}
{
"message" : "Hello Weave - nginx example",
"hostname" : "ws4.weave.local",
"date" : "2015-06-26 12:24:23"
}
{
"message" : "Hello Weave - nginx example",
"hostname" : "ws5.weave.local",
"date" : "2015-06-26 12:24:23"
}
{
"message" : "Hello Weave - nginx example",
"hostname" : "ws6.weave.local",
"date" : "2015-06-26 12:24:23"
}
### 结束语 ###
我们最终成功地将nginx配置成一个反向代理/负载均衡服务器通过使用weave以及运行在AWSAmazon Web ServiceEC2之中的ubuntu服务器里面的docker。从上面的步骤输出可以清楚地看到我们已经成功地配置了nginx。我们可以看到请求在一次轮询中被发送到6个应用容器这些容器在Apache2 Web服务器中跑着PHP应用。在这里我们部署了一个容器化的PHP应用使用nginx横跨多台在AWS EC2上的主机而不需要改变代码利用weavedns使得每个容器连接在一起只需要主机名就够了眼前的这些便捷都要归功于weave以及weavedns。如果你有任何的问题、建议、反馈请在评论中注明这样我们才能够做得更好谢谢:-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/
作者:[Arun Pyasi][a]
译者:[dingdongnigetou](https://github.com/dingdongnigetou)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://console.aws.amazon.com/

View File

@ -0,0 +1,418 @@
Linux日志管理
================================================================================
管理日志的一个关键典型做法是集中或整合你的日志到一个地方,特别是如果你有许多服务器或多层级架构。我们将告诉你为什么这是一个好主意然后给出如何更容易的做这件事的一些小技巧。
### 集中管理日志的好处 ###
如果你有很多服务器,逐个查看日志文件可能会很麻烦。现代的网站和服务经常包括许多服务器层级、分布式的负载均衡器等等。找到正确的日志要花很长时间,在逐台登录服务器排查问题上花的时间更长。没什么比发现你要找的信息没有被记录下来更沮丧的了,也没什么比本可能给出答案的日志文件恰好在重启后丢失更让人沮丧的了。
集中你的日志,可以使查找更快速,帮助你更快地解决线上问题。你不用猜测哪个服务器存在问题,因为所有的日志都在同一个地方。此外,你可以使用更强大的工具去分析它们,包括日志管理解决方案。一些解决方案能[转换纯文本日志][1]为一些字段,更容易查找和分析。
集中你的日志也可以使它们更易于管理:
- 日志备份归档在一个独立的区域,更安全,可以防止意外或者有意的丢失。如果你的服务器宕机或者无响应,你可以使用集中的日志去调试问题。
- 你不用在陷入困境的系统上使用 ssh 或者低效的 grep 命令,消耗更多的资源。
- 你不用担心磁盘占满,磁盘占满可能让你的服务器死机。
- 你能保证线上服务的安全性,不用仅仅为了查看日志就给整个团队登录权限。让团队从集中区域访问日志更安全。
即使采用了集中日志管理,你仍需应对由于网络连通性不好、或者日志传输占用大量网络带宽而无法把日志传到中心区域的风险。在下面的章节我们将要讨论如何明智地解决这些问题。
### 流行的日志归集工具 ###
在Linux上最常见的日志归集是通过使用系统日志守护进程或者代理。系统日志守护进程支持本地日志的采集然后通过系统日志协议传输日志到中心服务器。你可以使用很多流行的守护进程来归集你的日志文件
- [rsyslog][2]是一个轻量后台程序在大多数Linux分支上已经安装。
- [syslog-ng][3]是第二流行的Linux系统日志后台程序。
- [logstash][4]是一个重量级的代理,他可以做更多高级加工和分析。
- [fluentd][5]是另一个有高级处理能力的代理。
Rsyslog是集中日志数据最流行的守护进程因为它在大多数Linux发行版上是默认安装的。你不用下载或安装它并且它是轻量的不会占用你太多的系统资源。
如果你需要更先进的过滤或者自定义分析功能并且不在乎额外的系统开销Logstash是下一个最流行的选择。
### 配置Rsyslog.conf ###
既然rsyslog是使用最广泛的系统日志程序我们将展示如何配置它来做日志集中。它的全局配置文件位于/etc/rsyslog.conf负责加载模块、设置全局指令并包含位于/etc/rsyslog.d目录中的应用特有配置。该目录中的/etc/rsyslog.d/50-default.conf指示rsyslog将系统日志写入文件。你可以在[rsyslog文档][6]中阅读更多相关配置。
rsyslog的配置语言是[RainerScript][7]。你可以建立特定的日志输入,并把它们输出到另一个目标。rsyslog默认已经配置好了标准的系统日志输入所以你通常只需增加一个输出指向你的日志服务器。这里有一个rsyslog输出到一个外部服务器的配置例子。在本例中**BEBOP**是一个服务器的主机名,你应该把它替换为你自己的服务器名。
action(type="omfwd" protocol="tcp" target="BEBOP" port="514")
你可以把日志发送到一个有丰富存储空间的日志服务器,用来存储、查询、备份和分析。如果日志存储在文件系统上,你还应该建立[日志转储][8],防止磁盘爆满。
作为另一种选择,你可以把这些日志发送到一个日志管理方案。如果你的解决方案安装在本地,按照其文档把日志发送到指定的主机和端口即可。如果你使用基于云的提供商,则把日志发送到提供商指定的主机名和端口。
### 日志目录 ###
你可以归集一个目录或者匹配一个通配符模式的所有文件。nxlog和syslog-ng程序支持目录和通配符(*)。
rsyslog的常规版本不支持直接监控目录。作为一种解决方案你可以设置一个定时任务去监控这个目录的新文件然后配置rsyslog来发送这些文件到目的地比如你的日志管理系统。作为一个例子日志管理提供商Loggly有一个开源版本的[目录监控脚本][9]。
### 选择哪个协议UDP、TCP 还是 RELP ###
当你使用网络传输数据时有三个主流的协议可以选择。UDP在你自己的局域网中最常用TCP则用在互联网上。如果你不能承受丢失日志就使用更高级的RELP协议。
[UDP][10]发送的数据包只是一个简单的信息包。它是一个只发不收的协议所以它不会给你发送回执ACK它只管尝试发送包。当网络拥堵时UDP通常会平滑地降级或者丢弃日志。它通常用在类似局域网的可靠网络上。
[TCP][11]通过多个包和返回确认来发送流式信息。TCP会多次尝试发送数据包但是受限于[TCP缓存][12]大小。这是在互联网上发送日志最常用的协议。
[RELP][13]是这三个协议中最可靠的但是它是为rsyslog创建的很少有其他软件采用。它在应用层确认收到数据出错时会重发。使用前请确认你的接收端也支持这个协议。
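在 rsyslog 的传统语法中,协议的选择体现在转发规则的前缀上(下面的 central.example.com 是一个假设的日志服务器地址):

```
*.*  @central.example.com:514     # 单个 @ 表示 UDP
*.*  @@central.example.com:514    # 两个 @ 表示 TCP

# RELP 需要先加载 omrelp 输出模块
module(load="omrelp")
action(type="omrelp" target="central.example.com" port="2514")
```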
### 用磁盘辅助队列可靠的传送 ###
如果rsyslog在发送日志时遭遇错误例如网络连接不可用它能将日志排队直到连接恢复。队列中的日志默认存储在内存里。然而内存是有限的如果问题持续存在日志就会超出内存容量。
**警告:如果你只存储日志到内存,你可能会失去数据。**
Rsyslog能在内存被占满时将日志队列放到磁盘。[磁盘辅助队列][14]使日志的传输更可靠。这里有一个例子如何配置rsyslog的磁盘辅助队列
$WorkDirectory /var/spool/rsyslog # where to place spool files
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
$ActionQueueType LinkedList # run asynchronously
$ActionResumeRetryCount -1 # infinite retries if host is down
### 使用TLS加密日志 ###
如果你在意数据的安全和隐私你应该考虑加密你的日志。如果使用纯文本在互联网传输日志嗅探器和中间人可以读到你的日志。如果日志包含私人信息、敏感的身份数据或者受政府管制的数据你应该加密你的日志。rsyslog程序能使用TLS协议加密你的日志保证你的数据更安全。
建立TLS加密你应该做如下任务
1. 生成一个[证书授权][15](CA)。/contrib/gnutls目录下有一些示例证书它们只用于测试生产环境中你需要创建自己的证书。如果你正在使用一个日志管理服务它会提供一个证书给你。
1. 为你的服务器生成一个[数字证书][16]来启用SSL操作或者使用你的日志管理服务提供商提供的数字证书。
1. 配置你的rsyslog程序来发送TLS加密数据到你的日志管理系统。
这有一个rsyslog配置TLS加密的例子。替换CERT和DOMAIN_NAME为你自己的服务器配置。
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com
### 应用日志的最佳管理方法 ###
除Linux默认创建的日志之外集中重要的应用日志也是一个好主意。几乎所有基于Linux服务器的应用都把它们的状态信息写入到独立专门的日志文件中。这包括数据库产品像PostgreSQL或者MySQL网站服务器像Nginx或者Apache防火墙打印和文件共享服务还有DNS服务等等。
管理员安装一个应用后要做的第一件事就是配置它。Linux应用程序的配置通常是一个位于/etc目录里的.conf文件。它也可能在其他地方但是/etc是大家找配置文件时首先会看的地方。
根据应用程序有多复杂多庞大,可配置参数的数量可能很少,也可能上百行。如前所述,大多数应用程序会把它们的状态写入某种日志文件,而配置文件正是定义日志设置(以及其他选项)的地方。
如果你不确定它在哪你可以使用locate命令去找到它
[root@localhost ~]# locate postgresql.conf
/usr/pgsql-9.4/share/postgresql.conf.sample
/var/lib/pgsql/9.4/data/postgresql.conf
#### 设置一个日志文件的标准位置 ####
Linux系统一般把日志文件保存在/var/log目录下。如果是这样很好如果不是你也许想在/var/log下创建一个专用目录。为什么因为其他程序也在/var/log下保存它们的日志文件如果你的应用保存多于一个日志文件 - 也许每天一个,或者每次重启一个 - 在这么大的目录里搜索并找到你想要的文件也许有点难。
如果你在网络中运行着多个应用实例这个方法依然便利。想想这样的情景你也许有一打web服务器在你的网络中运行。当排查任何一台机器的问题时你都能立刻知道日志的确切位置。
#### 使用一个标准的文件名 ####
给你的应用的最新日志使用一个标准的文件名。这使一些事变得容易因为你可以监控和追踪一个单独的文件。很多应用程序会在它们的日志文件名上加某种时间戳这让rsyslog更难找到最新的文件也更难设置文件监控。一个更好的方法是通过日志转储给旧的日志文件增加时间戳这样更易于归档和历史查询。
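例如(这里的应用名 myapp 和路径只是假设),可以让当前日志的文件名保持固定,由日志转储把日期戳加到旧文件上:

```
/var/log/myapp/myapp.log             # 当前日志:文件名固定,便于 rsyslog 监控
/var/log/myapp/myapp.log-20150626    # 转储出的旧日志:带日期戳,便于归档和查询
```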
#### 追加日志文件 ####
有些应用程序在每次重启后会覆盖日志文件。如果是这样,我们建议关掉这种行为,让应用在重启后继续追加日志文件。这样你就可以追溯到重启前的最后日志。
#### 日志文件追加 vs. 转储 ####
虽然让应用程序在重启后继续追加日志解决了覆盖的问题,但把当前日志一直追加到一个单独的文件,会得到一个巨大的文件。Linux系统并不以频繁重启或者崩溃著称应用程序可以不间断地运行很长时间但这也会使日志文件变得非常大。如果你要查询分析上周发生连接错误的原因你很可能要在成千上万行里搜索。
我们建议你配置应用在每天午夜转储它的日志文件。
为什么首先它将变得可管理。找一个有特定日期部分的文件名比遍历一个文件指定日期的条目更容易。文件也小的多你不用考虑当你打开一个日志文件时vi僵住。第二如果你正发送日志到另一个位置 - 也许每晚备份任务拷贝到归集日志服务器 - 这样不会消耗你的网络带宽。最后第三点,这样帮助你做日志保持。如果你想剔除旧的日志记录,这样删除超过指定日期的文件比一个应用解析一个大文件更容易。
#### 日志文件的保留 ####
你要保留日志文件多长时间?这完全取决于业务需求。你可能被要求保留一个星期的日志信息,或者监管要求你保留一年的数据。无论如何,日志迟早要从服务器上删除。
在我们看来除非必要只在线保留最近一个月的日志文件并把它们拷贝到第二个地方如日志服务器。任何比这更旧的日志可以转到一个单独的介质上。例如如果你在AWS上旧日志可以拷贝到Glacier中。
#### 给日志单独的磁盘分区 ####
典型的Linux安装通常建议把/var目录挂载到一个单独的文件系统这是因为这个目录上的I/O很高。我们进一步推荐把/var/log目录挂载到一个单独的磁盘系统下。这样可以避免与主应用的数据发生I/O竞争。另外如果一些日志文件变得太多或者一个文件变得太大也不会占满整个磁盘。
#### 日志条目 ####
每个日志条目什么信息应该被捕获?
这依赖于你想用日志来做什么。你只想用它来排除故障,还是想捕获所有发生的事?是否有合规要求,需要捕获每个用户运行或查看了什么?
如果你用日志只是为了排查错误那就只保存错误、报警或者致命信息。没有理由去捕获调试信息例如应用也许默认记录了调试信息或者另一个管理员也许为了故障排查打开了调试信息但是你应该关闭它因为它肯定会很快地填满空间。在最低限度上捕获日期、时间、客户端应用名、来源IP或者客户端主机名、执行的动作和日志信息本身。
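一个包含这些最低限度字段的日志条目(格式只是假设的示意)可能像这样:

```
2015-06-26 12:24:23 myapp 10.3.1.6 user=alice action=login message="login ok"
```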
#### 一个PostgreSQL的实例 ####
作为一个例子让我们看看一个原生的PostgreSQL 9.4安装的主配置文件。它叫做postgresql.conf与Linux系统中的其他配置文件不同它不保存在/etc目录下。在下面的代码段中我们可以在我们的CentOS 7服务器的/var/lib/pgsql目录下看见它
[root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf
...
#------------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr'
# Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on
# Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
log_directory = 'pg_log'
# directory where log files are written,
# can be absolute or relative to PGDATA
log_filename = 'postgresql-%a.log' # log file name pattern,
# can include strftime() escapes
# log_file_mode = 0600 .
# creation mode for log files,
# begin with 0 to use octal notation
log_truncate_on_rotation = on # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
log_rotation_age = 1d
# Automatic rotation of logfiles will happen after that time. 0 disables.
log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
# This is only relevant when logging to eventlog (win32):
#event_source = 'PostgreSQL'
# - When to Log -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default
# terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '< %m >' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_statement = 'none' # none, ddl, mod, all
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes; -1 disables, 0 logs all temp files
log_timezone = 'Australia/ACT'
虽然大多数参数被加上了注释,但它们呈现了默认值。我们可以看见日志文件目录是pg_loglog_directory参数文件名以postgresql开头log_filename参数文件每天转储一次log_rotation_age参数每条日志记录以时间戳开头log_line_prefix参数。其中特别有趣的是log_line_prefix参数你可以在其中包含很多丰富的信息。
看/var/lib/pgsql/9.4/data/pg_log目录下展现给我们这些文件
[root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log
total 20
-rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log
-rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log
-rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log
-rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log
-rw-------. 1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log
所以日志文件名只带有星期的标签。我们可以改变它只需在postgresql.conf中配置log_filename参数。
查看一个日志内容,它的条目仅以日期时间开头:
[root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log
...
< 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request
< 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions
< 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down
< 2015-02-27 01:21:27.036 EST >LOG: shutting down
< 2015-02-27 01:21:27.211 EST >LOG: database system is shut down
### 集中应用日志 ###
#### 使用Imfile监控日志 ####
习惯上,应用通常把日志记录在文件里。文件在单台机器上容易查找,但在多台服务器上就不那么方便了。你可以设置日志文件监控,当新的日志被追加到文件尾部时,就发送事件到一个集中服务器。在/etc/rsyslog.d/里创建一个新的配置文件,然后增加一个文件输入,像这样:
$ModLoad imfile
$InputFilePollInterval 10
$PrivDropToGroup adm
----------
# Input for FILE1
$InputFileName /FILE1
$InputFileTag APPNAME1
$InputFileStateFile stat-APPNAME1 #this must be unique for each file being polled
$InputFileSeverity info
$InputFilePersistStateInterval 20000
$InputRunFileMonitor
替换FILE1和APPNAME1为你自己的文件名和应用名称。rsyslog将发送它到你配置的输出目标中。
#### 本地套接字日志与Imuxsock ####
套接字类似于UNIX文件句柄所不同的是套接字的内容由系统日志程序读取到内存中然后发送到目的地没有文件需要被写入。例如logger命令就是把它的日志发送到这个UNIX套接字。
如果你的服务器I/O有限或者你不需要本地的文件日志这个方法可以有效利用系统资源。这个方法的缺点是套接字有队列大小的限制。如果你的系统日志程序宕掉或者不能保持运行那么你可能会丢失日志数据。
rsyslog程序默认会从/dev/log套接字中读取但你需要用如下命令启用[imuxsock输入模块][17]
$ModLoad imuxsock
#### UDP日志与Imudp ####
一些应用程序使用UDP格式输出日志数据这是在网络上或者本地传输日志的标准系统日志协议。你的系统日志程序收集这些日志然后处理它们或者用不同的格式传输它们。或者你也可以把日志发送到你的日志服务器或者日志管理方案中。
使用如下命令配置rsyslog来接收标准端口514的UDP系统日志数据
$ModLoad imudp
----------
$UDPServerRun 514
### 用Logrotate管理日志 ###
日志转储是当日志到达指定的时限时自动归档日志文件的方法。如果放任不管,日志文件会一直增长,用尽磁盘空间,最终弄垮你的机器。
logrotate程序能在日志到期时截断转储它们为新的日志腾出空间。你的新日志文件保持原文件名而旧日志文件则被重命名加上数字后缀。每次logrotate运行时会建立一个新文件并把现存的文件逐一重命名。由你来决定旧文件何时被删除或归档的阈值。
当logrotate拷贝一个文件时新的文件会有一个新的索引节点inode这会妨碍rsyslog继续监控新文件。你可以通过在logrotate配置中增加copytruncate参数来缓解这个问题。这个参数会把现有日志文件的内容拷贝到新文件然后把现有文件截断清空。因为日志文件本身没有变它的索引节点不会改变而被转储出去的内容成为了一个新文件。
logrotate实例使用的主配置文件是/etc/logrotate.conf应用特有设置在/etc/logrotate.d/目录下。DigitalOcean有一个详细的[logrotate教程][18]
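下面是一个示例配置(假设日志位于 /var/log/myapp/,可以放在 /etc/logrotate.d/myapp 中),它每天转储一次、保留 30 份旧日志,并用 copytruncate 参数配合 rsyslog 的文件监控:

```
/var/log/myapp/*.log {
    daily            # 每天转储一次
    rotate 30        # 最多保留 30 份旧日志
    compress         # 压缩转储出的旧日志
    missingok        # 日志文件不存在时不报错
    copytruncate     # 原地截断,索引节点不变rsyslog 可继续监控
}
```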
### 管理很多服务器的配置 ###
当你只有很少的服务器时你可以登录上去手动配置。一旦你有几打或者更多的服务器你就可以用工具使这件事变得更容易和更可扩展。基本上所有的事情就是拷贝你的rsyslog配置到每个服务器然后重启rsyslog使更改生效。
#### Pssh ####
这个工具可以让你在很多服务器上并行地运行一个ssh命令。pssh只适合在少量服务器上部署如果其中一台服务器部署失败你就必须ssh到失败的服务器上手动部署。如果有很多服务器部署失败那么手动部署会花费很长时间。
#### Puppet/Chef ####
Puppet和Chef是两个不同的工具它们都能按你规定的标准自动配置网络中的所有服务器。它们的报表工具能让你了解发生的错误并定期重新同步配置。Puppet和Chef各有一些狂热的支持者。如果你不确定哪个更适合你的部署配置管理可以参考一下[InfoWorld上这两个工具的对比][19]。
一些厂商也提供一些配置rsyslog的模块或者方法。这有一个Loggly上Puppet模块的例子。它提供给rsyslog一个类你可以添加一个标识令牌
node 'my_server_node.example.net' {
# Send syslog events to Loggly
class { 'loggly::rsyslog':
customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000',
}
}
#### Docker ####
Docker使用容器去运行应用不依赖于底层服务。所有东西都在容器内部运行你可以把容器想象成一个功能单元。ZDNet有一篇关于在你的数据中心[使用Docker][20]的深入文章。
这有很多方式从Docker容器记录日志包括链接到一个日志容器记录到一个共享卷或者直接在容器里添加一个系统日志代理。其中最流行的日志容器叫做[logspout][21]。
#### 供应商的脚本或代理 ####
大多数日志管理方案提供一些脚本或者代理可以比较简单地从一个或多个服务器发送数据。重量级的代理会消耗额外的系统资源。一些供应商像Loggly提供配置脚本利用现有的系统日志程序这样更轻松。这有一个Loggly上的[脚本][22]例子,它能运行在任意数量的服务器上。
--------------------------------------------------------------------------------
via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/
作者:[Jason Skowronski][a1]
作者:[Amy Echeverri][a2]
作者:[Sadequl Hussain][a3]
译者:[wyangsun](https://github.com/wyangsun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a1]:https://www.linkedin.com/in/jasonskowronski
[a2]:https://www.linkedin.com/in/amyecheverri
[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl
[2]:http://www.rsyslog.com/
[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system
[4]:http://logstash.net/
[5]:http://www.fluentd.org/
[6]:http://www.rsyslog.com/doc/rsyslog_conf.html
[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html
[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87
[9]:https://www.loggly.com/docs/file-monitoring/
[10]:http://www.networksorcery.com/enp/protocol/udp.htm
[11]:http://www.networksorcery.com/enp/protocol/tcp.htm
[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html
[13]:http://www.rsyslog.com/doc/relp.html
[14]:http://www.rsyslog.com/doc/queues.html
[15]:http://www.rsyslog.com/doc/tls_cert_ca.html
[16]:http://www.rsyslog.com/doc/tls_cert_machine.html
[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html
[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10
[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html
[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/
[21]:https://github.com/progrium/logspout
[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/


@ -1,54 +1,54 @@
How to Install Snort and Usage in Ubuntu 15.04
在Ubuntu 15.04中如何安装和使用Snort
================================================================================
Intrusion detection in a network is important for IT security. Intrusion Detection System used for the detection of illegal and malicious attempts in the network. Snort is well-known open source intrusion detection system. Web interface (Snorby) can be used for better analysis of alerts. Snort can be used as an intrusion prevention system with iptables/pf firewall. In this article, we will install and configure an open source IDS system snort.
对于IT安全而言入侵检测是一件非常重要的事。入侵检测系统用于检测网络中非法与恶意的请求。Snort是一款知名的开源入侵检测系统。配合Web界面Snorby可以更好地分析警告。Snort还可以配合iptables/pf防火墙用作入侵防御系统。本篇中我们会安装并配置一个开源的IDS系统snort。
### Snort Installation ###
### Snort 安装 ###
#### Prerequisite ####
#### 要求 ####
Data Acquisition library (DAQ) is used by the snort for abstract calls to packet capture libraries. It is available on snort website. Downloading process is shown in the following screenshot.
snort所使用的数据采集库DAQ对数据包捕获库的调用进行了抽象。它可以在snort网站上下载。下载过程如下截图所示。
![downloading_daq](http://blog.linoxide.com/wp-content/uploads/2015/07/downloading_daq.png)
Extract it and run ./configure, make and make install commands for DAQ installation. However, DAQ required other tools therefore ./configure script will generate following errors .
解压并运行./configure、make、make install来安装DAQ。然而DAQ要求其他的工具因此./configure脚本会生成下面的错误。
flex and bison error
flex和bison错误
![flexandbison_error](http://blog.linoxide.com/wp-content/uploads/2015/07/flexandbison_error.png)
libpcap error.
libpcap错误
![libpcap error](http://blog.linoxide.com/wp-content/uploads/2015/07/libpcap-error.png)
Therefore first install flex/bison and libcap before DAQ installation which is shown in the figure.
因此在安装DAQ之前先安装flex/bison和libcap。
![install_flex](http://blog.linoxide.com/wp-content/uploads/2015/07/install_flex.png)
Installation of libpcap development library is shown below
如下所示安装libpcap开发库
![libpcap-dev installation](http://blog.linoxide.com/wp-content/uploads/2015/07/libpcap-dev-installation.png)
After installation of necessary tools, again run ./configure script which will show following output.
安装完必要的工具后,再次运行./configure脚本将会显示下面的输出。
![without_error_configure](http://blog.linoxide.com/wp-content/uploads/2015/07/without_error_configure.png)
make and make install commands result is shown in the following screens.
make和make install 命令的结果如下所示。
![make install](http://blog.linoxide.com/wp-content/uploads/2015/07/make-install.png)
![make](http://blog.linoxide.com/wp-content/uploads/2015/07/make.png)
After successful installation of DAQ, now we will install snort. Downloading using wget is shown in the below figure.
成功安装DAQ之后我们现在安装snort。如下图使用wget下载它。
![downloading_snort](http://blog.linoxide.com/wp-content/uploads/2015/07/downloading_snort.png)
Extract compressed package using below given command.
使用下面的命令解压安装包。
#tar -xvzf snort-2.9.7.3.tar.gz
![snort_extraction](http://blog.linoxide.com/wp-content/uploads/2015/07/snort_extraction.png)
Create installation directory and set prefix parameter in the configure script. It is also recommended to enable sourcefire flag for Packet Performance Monitoring (PPM).
创建安装目录并在configure脚本中设置prefix参数。同样也建议启用sourcefire标志用于包性能监控PPM
#mkdir /usr/local/snort
@ -56,21 +56,21 @@ Create installation directory and set prefix parameter in the configure script.
![snort_installation](http://blog.linoxide.com/wp-content/uploads/2015/07/snort_installation.png)
Configure script generates error due to missing libpcre-dev , libdumbnet-dev and zlib development libraries.
配置脚本由于缺少libpcre-dev、libdumbnet-dev 和zlib开发库而报错。
error due to missing libpcre library.
配置脚本由于缺少libpcre库报错。
![pcre-error](http://blog.linoxide.com/wp-content/uploads/2015/07/pcre-error.png)
error due to missing dnet (libdumbnet) library.
配置脚本由于缺少dnetlibdumbnet库而报错。
![libdnt error](http://blog.linoxide.com/wp-content/uploads/2015/07/libdnt-error.png)
configure script generate error due to missing zlib library.
配置脚本由于缺少zlib库而报错
![zlib error](http://blog.linoxide.com/wp-content/uploads/2015/07/zlib-error.png)
Installation of all required development libraries is shown in the next screenshots.
如下所示,安装所有需要的开发库。
# aptitude install libpcre3-dev
@ -84,9 +84,9 @@ Installation of all required development libraries is shown in the next screensh
![zlibg-dev installation](http://blog.linoxide.com/wp-content/uploads/2015/07/zlibg-dev-installation.png)
After installation of above required libraries for snort, again run the configure scripts without any error.
安装完snort需要的库之后再次运行配置脚本就不会报错了。
Run make & make install commands for the compilation and installations of snort in /usr/local/snort directory.
运行make和make install命令在/usr/local/snort目录下完成安装。
#make
@ -96,22 +96,22 @@ Run make & make install commands for the compilation and installations of snort
![make install snort](http://blog.linoxide.com/wp-content/uploads/2015/07/make-install-snort.png)
Finally snort running from /usr/local/snort/bin directory. Currently it is in promisc mode (packet dump mode) of all traffic on eth0 interface.
最终snort在/usr/local/snort/bin中运行。现在它对eth0的所有流量都处在promisc模式包转储模式
![snort running](http://blog.linoxide.com/wp-content/uploads/2015/07/snort-running.png)
Traffic dump by the snort interface is shown in following figure.
如下图所示snort转储流量。
![traffic](http://blog.linoxide.com/wp-content/uploads/2015/07/traffic1.png)
#### Rules and Configuration of Snort ####
#### Snort的规则和配置 ####
Snort installation from source code required rules and configuration setting therefore now we will copy rules and configuration under /etc/snort directory. We have created single bash scripts for rules and configuration setting. It is used for following snort setting.
从源码安装的snort需要规则和配置文件因此现在我们要把规则和配置复制到/etc/snort目录下。我们已经创建了单独的bash脚本来完成规则和配置的设置。它会完成下面这些snort设置。
- Creation of snort user for snort IDS service on linux.
- Creation of directories and files under /etc directory for snort configuration.
- Permission setting and copying data from etc directory of snort source code.
- Remove # (comment sign) from rules path in snort.conf file.
- 在linux中创建snort用户用于snort IDS服务。
- 在/etc下面创建snort的配置文件和文件夹。
- 设置权限并从snort源代码的etc目录中复制数据。
- 移除snort.conf文件中规则路径前的#(注释符号)。
#!/bin/bash##PATH of source code of snort
snort_src="/home/test/Downloads/snort-2.9.7.3"
@ -141,15 +141,15 @@ Snort installation from source code required rules and configuration setting the
sed -i 's/include \$RULE\_PATH/#include \$RULE\_PATH/' /etc/snort/snort.conf
echo "---DONE---"
Change the snort source directory in the script and run it. Following output appear in case of success.
改变脚本中的snort源目录并运行。下面是成功的输出。
![running script](http://blog.linoxide.com/wp-content/uploads/2015/08/running_script.png)
Above script copied following files/directories from snort source into /etc/snort configuration file.
上面的脚本从snort源中复制下面的文件/文件夹到/etc/snort配置目录中
![files copied](http://blog.linoxide.com/wp-content/uploads/2015/08/created.png)
Snort configuration file is very complex however following necessary changes are required in snort.conf for IDS proper working.
snort的配置文件非常复杂然而为了IDS能正常工作需要进行下面这些必要的修改。
ipvar HOME_NET 192.168.1.0/24 # LAN side
@ -169,35 +169,35 @@ Snort configuration file is very complex however following necessary changes are
include $RULE_PATH/local.rules # file for custom rules
remove comment sign (#) from other rules such as ftp.rules,exploit.rules etc.
移除其他规则如ftp.rules、exploit.rules等前面的注释符号(#)。
![path rules](http://blog.linoxide.com/wp-content/uploads/2015/08/path-rules.png)
Now [Download community][1] rules and extract under /etc/snort/rules directory. Enable community and emerging threats rules in snort.conf file.
现在[下载社区][1]规则并解压到/etc/snort/rules目录下。然后在snort.conf中启用社区规则及紧急威胁规则。
![wget_rules](http://blog.linoxide.com/wp-content/uploads/2015/08/wget_rules.png)
![community rules](http://blog.linoxide.com/wp-content/uploads/2015/08/community-rules1.png)
Run following command to test the configuration file after above mentioned changes.
进行了上面的更改后,运行下面的命令来检验配置文件。
#snort -T -c /etc/snort/snort.conf
![snort running](http://blog.linoxide.com/wp-content/uploads/2015/08/snort-final.png)
### Conclusion ###
### 总结 ###
In this article our focus was on the installation and configuration of an open source IDPS system snort on Ubuntu distribution. By default it is used for the monitoring of events however it can con configured inline mode for the protection of network. Snort rules can be tested and analysed in offline mode using pcap capture file.
本篇中我们关注了开源IDPS系统snort在Ubuntu上的安装和配置。默认情况下它用于监控事件然而它也可以配置成内联模式来保护网络。snort规则可以在离线模式下使用pcap抓包文件进行测试和分析。
--------------------------------------------------------------------------------
via: http://linoxide.com/security/install-snort-usage-ubuntu-15-04/
作者:[nido][a]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/naveeda/
[1]:https://www.snort.org/downloads/community/community-rules.tar.gz
[1]:https://www.snort.org/downloads/community/community-rules.tar.gz


@ -0,0 +1,147 @@
Linux小技巧Chrome小游戏文字说话计划作业重复执行命令
================================================================================
我又写了一篇[Linux提示与彩蛋][1]系列的文章让你的Linux更有创造性和娱乐性。
![Linux提示与彩蛋系列](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-Tips-and-Tricks.png)
Linux提示与彩蛋系列
本文我将会讲解Google-chrome内建小游戏在终端中如何让文字说话使用at命令设置作业和使用watch命令重复执行命令。
### 1. Google Chrome 浏览器小游戏彩蛋 ###
在网线拔掉或者其他什么原因连不上网时Google Chrome就会出现一个小游戏。声明我并不是游戏玩家因此我的电脑上并没有安装任何第三方游戏。安全是第一位。
所以当互联网连接出错时,会出现一个这样的界面:
![不能连接到互联网](http://www.tecmint.com/wp-content/uploads/2015/08/Unable-to-Connect-Internet.png)
不能连接到互联网
按下空格键来激活Google-chrome彩蛋游戏。游戏没有时间限制。并且还不需要浪费时间安装使用。
不需要第三方软件的支持。同样支持Windows和Mac平台但是我的平台是Linux我也只谈论Linux。当然在Linux这个游戏运行很好。游戏简单但也很花费时间。
使用空格/向上方向键来跳跃。请看下列截图:
![Google Chrome中玩游戏](http://www.tecmint.com/wp-content/uploads/2015/08/Play-Game-in-Google-Chrome.gif)
Google Chrome中玩游戏
### 2. Linux 终端中朗读文字 ###
对于那些不能朗读文字的设备,有个小工具可以实现文字到语音的转换。
espeak支持多种语言可以及时朗读输入文字。
系统应该默认安装了Espeak如果你的系统没有安装你可以使用下列命令来安装
# apt-get install espeak (Debian)
# yum install espeak (CentOS)
# dnf install espeak (Fedora 22 onwards)
你可以让espeak交互地接受标准输入的内容并即时把它转换成语音朗读出来。这样做
$ espeak [按回车键]
更详细的输出你可以这样做:
    $ espeak --stdout | aplay [按回车键][这里需要按两次回车]
espeak设置灵活也可以朗读文本文件。你可以这样设置
    $ espeak --stdout /path/to/text/file/file_name.txt | aplay [按回车键]
espeak可以设置朗读速度。默认速度是160词每分钟。使用-s参数来设置。
设置30词每分钟
$ espeak -s 30 -f /path/to/text/file/file_name.txt | aplay
设置200词每分钟
$ espeak -s 200 -f /path/to/text/file/file_name.txt | aplay
要朗读其他语言,例如北印度语(作者的母语),可以这样设置:
$ espeak -v hindi --stdout 'टेकमिंट विश्व की एक बेहतरीन लाइंक्स आधारित वेबसाइट है|' | aplay
espeak支持多种语言支持自定义设置。使用下列命令来获得语言表
$ espeak --voices
### 3. 快速计划作业 ###
我们已经非常熟悉使用[cron][2]后台执行一个计划命令。
Cron是一个Linux系统管理的高级命令用于计划定时任务如备份或者指定时间或间隔的任何事情。
但是你是否知道at命令可以让你在指定时间计划执行一个作业或者命令at命令可以在指定时间执行指定内容的作业。
例如你打算在早上11点2分执行uptime命令你只需要这样做
$ at 11:02
uptime >> /home/$USER/uptime.txt
Ctrl+D
![Linux中计划作业](http://www.tecmint.com/wp-content/uploads/2015/08/Schedule-Job-in-Linux.png)
Linux中计划作业
检查at命令是否成功设置使用
$ at -l
![浏览计划作业](http://www.tecmint.com/wp-content/uploads/2015/08/View-Scheduled-Jobs.png)
浏览计划作业
at支持计划多个命令例如
$ at 12:30
Command 1
Command 2
command 50
Ctrl + D
### 4. 特定时间重复执行命令 ###
有时我们可能需要每隔一定时间就执行某个命令。例如每3秒打印一次时间。
查看现在时间,使用下列命令。
    $ date +"%H:%M:%S"
![Linux中查看日期和时间](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Date-in-Linux.png)
Linux中查看日期和时间
为了查看这个命令每三秒的输出,我需要运行下列命令:
$ watch -n 3 'date +"%H:%M:%S"'
![Linux中watch命令](http://www.tecmint.com/wp-content/uploads/2015/08/Watch-Command-in-Linux.gif)
Linux中watch命令
watch命令的-n选项设定时间间隔。在上述命令中我们定义了时间间隔为3秒。你可以按你的需求定义。同样watch也支持其他命令或者脚本。
至此。希望你喜欢这个系列的文章它们能让你的Linux更有创造性带来更多快乐。欢迎在评论中提出建议也欢迎你看看其他文章谢谢。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/text-to-speech-in-terminal-schedule-a-job-and-watch-commands-in-linux/
作者:[Avishek Kumar][a]
译者:[VicYu/Vic020](http://vicyu.net)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/tag/linux-tricks/
[2]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/


@ -0,0 +1,188 @@
在 Linux 中怎样将 MySQL 迁移到 MariaDB 上
================================================================================
自从甲骨文收购 MySQL 后,由于甲骨文对 MySQL 的开发和维护更倾向于闭门的立场,很多 MySQL 的开发者和用户放弃了 MySQL。在社区的驱动下更多人转向了 MySQL 的另一个分支,叫 MariaDB。在原有 MySQL 开发人员的带领下MariaDB 的开发遵循开源的理念,并确保[它的二进制格式与 MySQL 兼容][1]。Linux 发行版如 Red Hat 家族Fedora、CentOS、RHEL、Ubuntu 和 Mint、openSUSE 和 Debian 已经开始使用,并支持 MariaDB 作为 MySQL 的简易替换品。
如果想要将 MySQL 中的数据库迁移到 MariaDB 中这篇文章就是你所期待的。幸运的是由于他们的二进制兼容性MySQL-to-MariaDB 迁移过程是非常简单的。如果你按照下面的步骤,将 MySQL 迁移到 MariaDB 会是无痛的。
### 准备 MySQL 数据库和表 ###
出于演示的目的,我们在做迁移之前在数据库中创建一个测试的 MySQL 数据库和表。如果你在 MySQL 中已经有了要迁移到 MariaDB 的数据库,跳过此步骤。否则,按以下步骤操作。
在终端输入 root 密码登录到 MySQL 。
$ mysql -u root -p
创建一个数据库和表。
mysql> create database test01;
mysql> use test01;
mysql> create table pet(name varchar(30), owner varchar(30), species varchar(20), sex char(1));
在表中添加一些数据。
mysql> insert into pet values('brandon','Jack','puddle','m'),('dixie','Danny','chihuahua','f');
退出 MySQL 数据库.
### 备份 MySQL 数据库 ###
下一步是备份现有的 MySQL 数据库。使用下面的 mysqldump 命令导出现有的数据库到文件中。运行此命令之前,请确保你的 MySQL 服务器上启用了二进制日志。如果你不知道如何启用二进制日志,请参阅结尾的教程说明。
$ mysqldump --all-databases --user=root --password --master-data > backupdb.sql
![](https://farm6.staticflickr.com/5775/20555772385_21b89335e3_b.jpg)
现在,在卸载 MySQL 之前先在系统上备份 my.cnf 文件。此步是可选的。
$ sudo cp /etc/mysql/my.cnf /opt/my.cnf.bak
### 卸载 MySQL ###
首先,停止 MySQL 服务。
$ sudo service mysql stop
或者:
$ sudo systemctl stop mysql
或:
$ sudo /etc/init.d/mysql stop
然后继续下一步,使用以下命令移除 MySQL 和配置文件。
在基于 RPM 的系统上 (例如, CentOS, Fedora 或 RHEL):
$ sudo yum remove mysql* mysql-server mysql-devel mysql-libs
$ sudo rm -rf /var/lib/mysql
在基于 Debian 的系统上(例如, Debian, Ubuntu 或 Mint):
$ sudo apt-get remove mysql-server mysql-client mysql-common
$ sudo apt-get autoremove
$ sudo apt-get autoclean
$ sudo deluser mysql
$ sudo rm -rf /var/lib/mysql
### 安装 MariaDB ###
在 CentOS/RHEL 7和Ubuntu14.04或更高版本)上,最新的 MariaDB 包含在其官方源。在 Fedora 上自19版本后 MariaDB 已经替代了 MySQL。如果你使用的是旧版本或 LTS 类型如 Ubuntu 13.10 或更早的,你仍然可以通过添加其官方仓库来安装 MariaDB。
[MariaDB 网站][2] 提供了一个在线工具帮助你依据你的 Linux 发行版中来添加 MariaDB 的官方仓库。此工具为 openSUSE, Arch Linux, Mageia, Fedora, CentOS, RedHat, Mint, Ubuntu, 和 Debian 提供了 MariaDB 的官方仓库.
![](https://farm6.staticflickr.com/5809/20367745260_073020b910_c.jpg)
下面例子中,我们使用 Ubuntu 14.04 发行版和 CentOS 7 配置 MariaDB 库。
**Ubuntu 14.04**
$ sudo apt-get install software-properties-common
$ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
$ sudo add-apt-repository 'deb http://mirror.mephi.ru/mariadb/repo/5.5/ubuntu trusty main'
$ sudo apt-get update
$ sudo apt-get install mariadb-server
**CentOS 7**
以下为 MariaDB 创建一个自定义的 yum 仓库文件。
$ sudo vi /etc/yum.repos.d/MariaDB.repo
----------
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
----------
$ sudo yum install MariaDB-server MariaDB-client
安装了所有必要的软件包后,你可能会被要求为 root 用户创建一个新密码。设置 root 的密码后,别忘了恢复备份的 my.cnf 文件。
    $ sudo cp /opt/my.cnf.bak /etc/mysql/my.cnf
现在启动 MariaDB 服务。
$ sudo service mariadb start
或者:
$ sudo systemctl start mariadb
或:
$ sudo /etc/init.d/mariadb start
### 导入 MySQL 的数据库 ###
最后,我们将以前导出的数据库导入到 MariaDB 服务器中。
$ mysql -u root -p < backupdb.sql
输入你 MariaDB 的 root 密码,数据库导入过程将开始。导入过程完成后,将返回到命令提示符下。
要检查导入过程是否完全成功,请登录到 MariaDB 服务器,并查看一些样本来检查。
$ mysql -u root -p
----------
MariaDB [(none)]> show databases;
MariaDB [(none)]> use test01;
MariaDB [test01]> select * from pet;
![](https://farm6.staticflickr.com/5820/20562243721_428a9a12a7_b.jpg)
### 结论 ###
如你在本教程中看到的MySQL 到 MariaDB 的迁移并不难。你应该了解一下 MariaDB 相比 MySQL 新增的很多功能。至于配置方面,在我的测试情况下,我只是将旧的 MySQL 配置文件my.cnf作为 MariaDB 的配置文件使用,导入过程完全没有出现任何问题。对于配置文件,我建议你在迁移之前仔细阅读 MariaDB 的配置选项文档,特别是如果你正在使用 MySQL 的特殊配置。
如果你正在运行更复杂的配置有海量的数据库和表,包括群集或主从复制,看一看 Mozilla IT 和 Operations 团队的 [更详细的指南][3] ,或者 [官方的 MariaDB 文档][4]。
### 故障排除 ###
1.在运行 mysqldump 命令备份数据库时出现以下错误。
$ mysqldump --all-databases --user=root --password --master-data > backupdb.sql
----------
mysqldump: Error: Binlogging on server not active
通过使用 "--master-data",你要在导出的输出中包含二进制日志信息,这对于数据库的复制和恢复是有用的。但是,二进制日志未在 MySQL 服务器启用。要解决这个错误,修改 my.cnf 文件,并在 [mysqld] 部分添加下面的选项。
log-bin=mysql-bin
保存 my.cnf 文件,并重新启动 MySQL 服务:
$ sudo service mysql restart
或者:
$ sudo systemctl restart mysql
或:
$ sudo /etc/init.d/mysql restart
--------------------------------------------------------------------------------
via: http://xmodulo.com/migrate-mysql-to-mariadb-linux.html
作者:[Kristophorus Hadiono][a]
译者:[strugglingyouth](https://github.com/译者ID)
校对:[strugglingyouth](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/kristophorus
[1]:https://mariadb.com/kb/en/mariadb/mariadb-vs-mysql-compatibility/
[2]:https://downloads.mariadb.org/mariadb/repositories/#mirror=aasaam
[3]:https://blog.mozilla.org/it/2013/12/16/upgrading-from-mysql-5-1-to-mariadb-5-5/
[4]:https://mariadb.com/kb/en/mariadb/documentation/

Linux 有问必答 - 如何在 Linux 中统计一个进程的线程数
================================================================================
> **问题**: 我正在运行一个程序,它在运行时会派生出多个线程。我想知道程序在运行时会有多少线程。在 Linux 中检查进程的线程数最简单的方法是什么?
如果你想看到 Linux 中每个进程的线程数,有以下几种方法可以做到这一点。
### 方法一: /proc ###
驻留在 /proc 目录下的 proc 伪文件系统,是查看任何活动进程线程数的最简单方法。/proc 目录以可读文本文件的形式,提供现有进程以及系统硬件(如 CPU、中断、内存、磁盘等相关的信息。
$ cat /proc/<pid>/status
上面的命令将显示进程 <pid> 的详细信息,包括进程状态(例如:睡眠中、运行中),父进程 PIDUIDGID使用的文件描述符的数量以及上下文切换的次数。输出中也包括**进程创建的总线程数**,如下所示。
Threads: <N>
例如,检查 PID 为 20571 的进程的线程数:
$ cat /proc/20571/status
![](https://farm6.staticflickr.com/5649/20341236279_f4a4d809d2_b.jpg)
输出表明该进程有28个线程。
或者,你也可以简单地统计 /proc/<pid>/task 下目录的数量,如下所示。
$ ls /proc/<pid>/task | wc -l
这是因为,进程中创建的每个线程都会在 /proc/<pid>/task 中生成一个以其线程 ID 命名的对应目录。因此,/proc/<pid>/task 中目录的总数就表示进程中的线程数目。
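下面的小脚本可以验证这两种 /proc 方法得到的结果一致(这里以当前 shell 进程 $$ 为例,仅作演示):

```shell
# 以当前 shell 进程($$)为例,对比两种 /proc 方法的结果
pid=$$
threads_status=$(awk '/^Threads:/ {print $2}' /proc/$pid/status)
threads_task=$(ls /proc/$pid/task | wc -l)
echo "Threads 字段: $threads_status, task 目录数: $threads_task"
```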
### 方法二: ps ###
如果你是功能强大的 ps 命令的忠实用户,它也可以告诉你一个进程的线程数。"H" 选项会把进程的各个线程逐一列出,而小写的 "h" 选项用于去掉表头(否则统计行数时会多算一行)。下面的命令将输出进程 <pid> 的线程数。
$ ps hH p <pid> | wc -l
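另外,如果你的系统使用的是 procps 版的 ps大多数 Linux 发行版如此),它还提供了 nlwpNumber of LightWeight Processes输出列可以直接打印线程数省去数行数的步骤

```shell
# nlwp 列直接给出线程数;"=" 用于去掉表头(这里以当前 shell 进程为例)
ps -o nlwp= -p $$
```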
如果你想监视一个进程的不同线程消耗的硬件资源CPU & memory请参阅[此教程][1]。(注:此文我们翻译过)
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/number-of-threads-process-linux.html
作者:[Dan Nanni][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://ask.xmodulo.com/view-threads-process-linux.html

使用dd命令在Linux和Unix环境下进行硬盘I/O性能检测
================================================================================
如何使用 dd 命令测试硬盘的性能?如何在 Linux 操作系统下检测硬盘的读写能力?
你可以使用以下命令在一个Linux或类Unix操作系统上进行简单的I/O性能测试。
- **dd命令** :它被用来在Linux和类Unix系统下对硬盘设备进行写性能的检测。
- **hdparm 命令**:它被用来获取或设置硬盘参数,包括测试读性能以及缓存性能等。
在这篇指南中你将会学到如何使用dd命令来测试硬盘性能。
### 使用 dd 命令来监控硬盘的读写性能 ###
- 打开 shell 终端。
- 通过ssh登录到远程服务器。
- 使用dd命令来测量服务器的吞吐率写速度) `dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync`
- 使用dd命令测量服务器延迟 `dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync`
#### 理解 dd 命令的选项 ####
在这个例子当中,我将在一台搭载 Ubuntu Linux 14.04 LTS、使用 Adaptec 5405Z 控制器组建 RAID-10配有 SAS SSD的服务器上运行测试。基本语法为
dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync
## GNU dd语法 ##
dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
##另外一种GNU dd的语法 ##
dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
输出样例:
![Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd](http://s0.cyberciti.org/uploads/faq/2015/08/dd-server-test-io-speed-output.jpg)
Fig.01: 使用dd命令获取的服务器吞吐率
请各位注意在这个实验中我们写入一个G的数据可以发现服务器的吞吐率是135 MB/s这其中
- `if=/dev/zero (if=/dev/input.file)` :用来设置dd命令读取的输入文件名。
- `of=/tmp/test1.img (of=/path/to/output.file)` :dd命令将input.file写入的输出文件的名字。
- `bs=1G (bs=block-size)` :设置dd命令读取的块的大小。例子中为1个G。
- `count=1 (count=number-of-blocks)`: dd命令读取的块的个数。
- `oflag=dsync (oflag=dsync)` :使用同步I/O。不要省略这个选项。这个选项能够帮助你去除caching的影响以便呈现给你精准的结果。
- `conv=fdatasync`: 这个选项和`oflag=dsync`含义一样。
在这个例子中一共写了1000次每次写入512字节来获得RAID10服务器的延迟时间
dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync
输出样例:
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 0.60362 s, 848 kB/s
请注意服务器的吞吐率以及延迟时间也取决于服务器/应用的加载。所以我推荐你在一个刚刚重启过并且处于峰值时间的服务器上来运行测试,以便得到更加准确的度量。现在你可以在你的所有设备上互相比较这些测试结果了。
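如果要按上面的建议多次运行同一测试,可以用一个简单的循环自动完成。下面是一个示意性的小脚本(块大小、次数和文件路径都只是演示用的假设值,请按需调整):

```shell
# 连续运行同一 dd 写测试 3 次,只保留每次最后的速度统计行,便于对比结果是否稳定
for i in 1 2 3; do
    dd if=/dev/zero of=/tmp/ddtest.img bs=512 count=100 oflag=dsync 2>&1 | tail -n 1
done
rm -f /tmp/ddtest.img
```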
#### 为什么服务器的吞吐率和延迟时间都这么差? ####
低的数值并不意味着你在使用差劲的硬件。可能是硬件 RAID10 控制器的缓存导致的。
使用hdparm命令来查看硬盘缓存的读速度。
我建议你运行下面的命令2-3次来对设备读性能进行检测以作为参照和相互比较
### 有缓存的硬盘读性能测试——/dev/sda ###
hdparm -t /dev/sda1
## 或者 ##
hdparm -t /dev/sda
然后运行下面这个命令2-3次来对缓存的读性能进行对照性检测
## 缓存读基准——/dev/sda ##
hdparm -T /dev/sda1
## 或者 ##
hdparm -T /dev/sda
或者干脆把两个测试结合起来:
hdparm -Tt /dev/sda
输出样例:
![Fig.02: Linux hdparm command to test reading and caching disk performance](http://s0.cyberciti.org/uploads/faq/2015/08/hdparam-output.jpg)
Fig.02: 检测硬盘读入以及缓存性能的Linux hdparm命令
请再一次注意,由于文件操作的缓存属性,你将总是会看到很高的读速度。
**使用dd命令来测试读入速度**
为了获得精确的读测试数据,首先在测试前运行下列命令,来将缓存设置为无效:
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
time dd if=/path/to/bigfile of=/dev/null bs=8k
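把这些步骤串起来,就是下面这样一个完整的读测试草稿(文件名和大小均为示意;清缓存一步需要 root 权限,没有权限时测到的将是缓存读速度):

```shell
# 生成一个 64MB 的测试文件,然后计时读取它(路径和大小仅为示意)
dd if=/dev/zero of=/tmp/bigfile bs=1M count=64 2>/dev/null
sync
# 需要 root 权限才能真正清掉页缓存;省略这一步时测到的是缓存读速度
# echo 3 | sudo tee /proc/sys/vm/drop_caches
time dd if=/tmp/bigfile of=/dev/null bs=8k
rm -f /tmp/bigfile
```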
**笔记本上的示例**
运行下列命令:
### 有缓存时Debian 系统笔记本的吞吐率 ###
dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct
### 使缓存失效 ###
hdparm -W0 /dev/sda
### 无缓存时Debian 系统笔记本的吞吐率 ###
dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct
**苹果OS X Unix(Macbook pro)的例子**
GNU dd命令有其他许多选项但是在 OS X/BSD 以及类Unix中 dd命令需要像下面那样执行来检测去除掉内存地址同步的硬盘真实I/O性能
## 运行这个命令 2-3 次来获得更好的结果 ##
time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync"
输出样例:
1024+0 records in
1024+0 records out
104857600 bytes transferred in 0.165040 secs (635346520 bytes/sec)
real 0m0.241s
user 0m0.004s
sys 0m0.113s
本人 Macbook Pro 的写速度是每秒 635346520 字节635.347MB/s)。
**不喜欢用命令行?^_^**
你可以在Linux或基于Unix的系统上使用disk utility(gnome-disk-utility)这款工具来得到同样的信息。下面的那个图就是在我的Fedora Linux v22 VM上截取的。
**图形化方法**
点击“Activities”或者按“Super”键在桌面和 Activities 视图间切换。然后输入“Disks”
![Fig.03: Start the Gnome disk utility](http://s0.cyberciti.org/uploads/faq/2015/08/disk-1.jpg)
Fig.03: 打开Gnome硬盘工具
在左边的面板上选择你的硬盘点击configure按钮然后点击“Benchmark partition”
![Fig.04: Benchmark disk/partition](http://s0.cyberciti.org/uploads/faq/2015/08/disks-2.jpg)
Fig.04: 评测硬盘/分区
最后点击“Start Benchmark...”按钮(你可能被要求输入管理员用户名和密码):
![Fig.05: Final benchmark result](http://s0.cyberciti.org/uploads/faq/2015/08/disks-3.jpg)
Fig.05: 最终的评测结果
如果你要问,我推荐使用哪种命令和方法?
- 我推荐在所有的类 Unix 系统上使用 dd 命令:`time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync"`
- 如果你在使用GNU/Linux使用dd命令 (`dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync`)
- 确保你每次使用时都调整了count以及bs参数以获得更好的结果。
- GUI方法只适合桌面系统为Gnome2或Gnome3的Linux/Unix笔记本用户。
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/
作者Vivek Gite
译者:[DongShuaike](https://github.com/DongShuaike)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

RHCE 系列第一部分:如何设置和测试静态网络路由
================================================================================
RHCERed Hat Certified Engineer红帽认证工程师是红帽公司的一个认证红帽向企业社区贡献开源操作系统和软件同时它还给公司提供训练、支持和咨询服务。
![RHCE 考试准备指南](http://www.tecmint.com/wp-content/uploads/2015/07/RHCE-Exam-Series-by-TecMint.jpg)
RHCE 考试准备指南
这个 RHCE 考试是基于实际操作的考试(代号 EX300面向那些拥有更多技能、知识和能力的红帽企业版 LinuxRHEL系统高级系统管理员。
**重要**:获得 RHCE 认证要求先持有[红帽认证系统管理员][1] Red Hat Certified System AdministratorRHCSA认证。
以下是基于红帽企业版 Linux 7 考试的考试目标,我们会在该 RHCE 系列中分别介绍:
- 第一部分:如何在 RHEL 7 中设置和测试静态路由
- 第二部分:如何进行包过滤、网络地址转换和设置内核运行时参数
- 第三部分:如何使用 Linux 工具集生成和发送系统活动报告
- 第四部分:使用 Shell 脚本进行自动化系统维护
- 第五部分:如何配置本地和远程系统日志
- 第六部分:如何配置一个 Samba 服务器或 NFS 服务器译者注Samba 是在 Linux 和 UNIX 系统上实现 SMB 协议的一个免费软件由服务器及客户端程序构成。SMBServer Messages Block信息服务块是一种在局域网上共享文件和打印机的通信协议它为局域网内的不同计算机之间提供文件及打印机等资源的共享服务。
- 第七部分:为收发邮件配置完整的 SMTP 服务器
- 第八部分:在 RHEL 7 上设置 HTTPS 和 TLS
- 第九部分:设置网络时间协议
- 第十部分:如何配置一个 Cache-Only DNS 服务器
在你的国家查看考试费用和注册考试,可以到 [RHCE 认证][2] 网页。
在 RHCE 的第一和第二部分,我们会介绍一些基本的但典型的情形,也就是静态路由原理、包过滤和网络地址转换。
![在 RHEL 中设置静态网络路由](http://www.tecmint.com/wp-content/uploads/2015/07/Setup-Static-Network-Routing-in-RHEL-7.jpg)
RHCE 系列第一部分:设置和测试网络静态路由
请注意我们不会作深入的介绍,但以这种方式组织内容能帮助你开始第一步并继续后面的内容。
### 红帽企业版 Linux 7 中的静态路由 ###
现代网络的一个奇迹就是有很多可用的设备能将一组计算机连接起来,不管是在一个房间里少量的机器还是在一栋建筑物、城市、国家或者大洲之间的多台机器。
然而,为了能在任意情形下有效地实现这一点,需要对网络包进行路由;或者换句话说,网络包从源到目的地的路径需要遵循某种规则。
静态路由是为网络包显式指定路由的过程,而不是使用网络设备提供的默认网关。除非另有指定,网络包会被导向默认网关;而使用静态路由,则可以基于预定义的标准(例如数据包的目的地)为网络包定义其它路径。
我们在该篇指南中会考虑以下场景。我们有一台红帽企业版 Linux 7连接到路由器 1号 [192.168.0.1] 以访问因特网以及 192.168.0.0/24 中的其它机器。
第二个路由器(路由器 2号有两个网卡enp0s3 同样通过网络连接到路由器 1号以便连接RHEL 7 以及相同网络中的其它机器另外一个网卡enp0s8用于授权访问内部服务所在的 10.0.0.0/24 网络,例如 web 或数据库服务器。
该场景可以用下面的示意图表示:
![静态路由网络示意图](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png)
静态路由网络示意图
在这篇文章中我们会集中介绍在 RHEL 7 中设置路由表,确保它能通过路由器 1号访问因特网以及通过路由器 2号访问内部网络。
在 RHEL 7 中,你会在命令行中用 [ip 命令][3] 来配置和显示设备和路由。这些更改能在运行的系统中立即生效,但由于重启后不会保留,我们会使用 /etc/sysconfig/network-scripts 目录下的 ifcfg-enp0sX 和 route-enp0sX 文件永久保存我们的配置。
首先,让我们打印出当前的路由表:
# ip route show
![在 Linux 中检查路由表](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Current-Routing-Table.png)
检查当前路由表
从上面的输出中,我们可以得出以下结论:
- 默认网关的 IP 是 192.168.0.1,可以通过网卡 enp0s3 访问。
- 系统启动的时候,它启用了到 169.254.0.0/16 的 zeroconf 路由(只是在本例中)。也就是说,如果机器设置为通过 DHCP 获取一个 IP 地址,但是由于某些原因失败了,它就会在该网络中自动分配到一个地址。这一行的意思是,该路由会允许我们通过 enp0s3 和其它没有从 DHCP 服务器中成功获得 IP 地址的机器连接。
- 最后,但同样重要的是,我们也可以通过 IP 地址是 192.168.0.18 的 enp0s3 和 192.168.0.0/24 网络中的其它机器连接。
下面是这样的配置中你需要做的一些典型任务。除非另有说明,下面的任务都在路由器 2号上进行。
确保正确安装了所有网卡:
# ip link show
如果有某块网卡停用了,启动它:
# ip link set dev enp0s8 up
分配 10.0.0.0/24 网络中的一个 IP 地址给它:
# ip addr add 10.0.0.17 dev enp0s8
噢!我们分配了一个错误的 IP 地址。我们需要删除之前分配的那个并添加正确的地址10.0.0.18
# ip addr del 10.0.0.17 dev enp0s8
# ip addr add 10.0.0.18 dev enp0s8
现在,请注意你只能添加一个通过已经能访问的网关到目标网络的路由。因为这个原因,我们需要在 192.168.0.0/24 范围中给 enp0s3 分配一个 IP 地址,这样我们的 RHEL 7 才能连接到它:
# ip addr add 192.168.0.19 dev enp0s3
最后,我们需要启用包转发:
# echo "1" > /proc/sys/net/ipv4/ip_forward
并停用/取消防火墙(从现在开始,直到下一篇文章中我们介绍了包过滤):
# systemctl stop firewalld
# systemctl disable firewalld
回到我们的 RHEL 7192.168.0.18),让我们配置一个通过 192.168.0.19(路由器 2号的 enp0s3到 10.0.0.0/24 的路由:
# ip route add 10.0.0.0/24 via 192.168.0.19
之后,路由表看起来像下面这样:
# ip route show
![显示网络路由表](http://www.tecmint.com/wp-content/uploads/2015/07/Show-Network-Routing.png)
确认网络路由表
同样,在你尝试连接的 10.0.0.0/24 网络的机器中添加对应的路由:
# ip route add 192.168.0.0/24 via 10.0.0.18
你可以使用 ping 测试基本连接:
在 RHEL 7 中运行:
# ping -c 4 10.0.0.20
10.0.0.20 是 10.0.0.0/24 网络中一个 web 服务器的 IP 地址。
在 web 服务器10.0.0.20)中运行:
# ping -c 4 192.168.0.18
192.168.0.18 也就是我们的 RHEL 7 机器的 IP 地址。
另外,我们还可以使用 [tcpdump][4](需要通过 yum install tcpdump 安装)来检查我们 RHEL 7 和 10.0.0.20 中 web 服务器之间的 TCP 双向通信。
首先在第一台机器中启用日志:
# tcpdump -qnnvvv -i enp0s3 host 10.0.0.20
在同一个系统上的另一个终端,让我们通过 telnet 连接到 web 服务器的 80 号端口(假设 Apache 正在监听该端口;否则在下面命令中使用正确的端口):
# telnet 10.0.0.20 80
tcpdump 日志看起来像下面这样:
![检查服务器之间的网络连接](http://www.tecmint.com/wp-content/uploads/2015/07/Tcpdump-logs.png)
检查服务器之间的网络连接
通过查看我们 RHEL 7192.168.0.18)和 web 服务器10.0.0.20)之间的双向通信,可以看出已经正确地初始化了连接。
请注意你重启系统后会丢失这些更改。如果你想把它们永久保存下来,你需要在我们运行上面的命令的相同系统中编辑(如果不存在的话就创建)以下的文件。
尽管对于我们的测试例子不是严格要求,你需要知道 /etc/sysconfig/network 包含了一些系统范围的网络参数。一个典型的 /etc/sysconfig/network 看起来类似下面这样:
# Enable networking on this system?
NETWORKING=yes
# Hostname. Should match the value in /etc/hostname
HOSTNAME=yourhostnamehere
# Default gateway
GATEWAY=XXX.XXX.XXX.XXX
# Device used to connect to default gateway. Replace X with the appropriate number.
GATEWAYDEV=enp0sX
当需要为每个网卡设置特定的变量和值时(正如我们在路由器 2号上面做的你需要编辑 /etc/sysconfig/network-scripts/ifcfg-enp0s3 和 /etc/sysconfig/network-scripts/ifcfg-enp0s8 文件。
下面是我们的例子,
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.0.19
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
NAME=enp0s3
ONBOOT=yes
以及
TYPE=Ethernet
BOOTPROTO=static
IPADDR=10.0.0.18
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
NAME=enp0s8
ONBOOT=yes
分别对应 enp0s3 和 enp0s8。
由于要为我们的客户端机器(192.168.0.18)进行路由,我们需要编辑 /etc/sysconfig/network-scripts/route-enp0s3
10.0.0.0/24 via 192.168.0.19 dev enp0s3
现在重启系统你可以在路由表中看到该路由规则。
### 总结 ###
在这篇文章中我们介绍了红帽企业版 Linux 7 的静态路由。尽管场景可能不同,这里介绍的例子说明了所需的原理以及进行该任务的步骤。结束之前,我还建议你看一下 Linux 文档项目TLDP中《保护和优化 Linux》的[第四章][5],以了解这里介绍主题的更详细内容。
免费电子书 Securing & Optimizing Linux: The Hacking Solution (v.3.0) - 这本 800 多页的电子书全面收集了 Linux 安全的小技巧,以及如何安全、简便地使用它们去配置基于 Linux 的应用和服务。
![Linux 安全和优化](http://www.tecmint.com/wp-content/uploads/2015/07/Linux-Security-Optimization-Book.gif)
Linux 安全和优化
[马上下载][6]
在下篇文章中我们会介绍数据包过滤和网络地址转换,以此完成 RHCE 认证所需的网络基础技能的介绍。
如往常一样,我们期望听到你的回复,用下面的表格留下你的疑问、评论和建议吧。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/
作者:[Gabriel Cánepa][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/
[2]:https://www.redhat.com/en/services/certification/rhce
[3]:http://www.tecmint.com/ip-command-examples/
[4]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/
[5]:http://www.tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/net-manage.html
[6]:http://tecmint.tradepub.com/free/w_opeb01/prgm.cgi

RHCE 第二部分 - 如何进行包过滤、网络地址转换和设置内核运行时参数
================================================================================
正如第一部分(“[设置静态网络路由][1]”所承诺的在这篇文章RHCE 系列第二部分)中,我们首先介绍红帽企业版 Linux 7 中包过滤和网络地址转换的原理,然后介绍在某些条件发生变化或需要调整时,如何设置运行时内核参数以改变内核的运行时行为。
![RHEL 中的网络包过滤](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Packet-Filtering-in-RHEL.jpg)
RHCE 第二部分:网络包过滤
### RHEL 7 中的网络包过滤 ###
当我们讨论数据包过滤的时候,我们指防火墙读取每个尝试通过它的数据包的包头所进行的处理。然后,根据系统管理员之前定义的规则,通过采取所要求的动作过滤数据包。
正如你可能知道的,从 RHEL 7 开始,管理防火墙的默认服务是 [firewalld][2]。类似 iptables它和 Linux 内核的 netfilter 模块交互以便检查和操作网络数据包。不像 iptablesFirewalld 的更新可以立即生效,而不用中断活跃的连接 - 你甚至不需要重启服务。
Firewalld 的另一个优势是它允许我们定义基于预配置服务名称的规则(之后会详细介绍)。
在第一部分,我们用了下面的场景:
![静态路由网络示意图](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png)
静态路由网络示意图
然而,你应该记得,由于还没有介绍包过滤,为了简化例子,我们停用了路由器 2号 的防火墙。现在让我们来看看如何可以使接收的数据包发送到目的地的特定服务或端口。
首先,让我们添加一条永久规则,允许从 enp0s3 (192.168.0.19) 到 enp0s8 (10.0.0.18) 的转发流量:
# firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT
上面的命令会把规则保存到 /etc/firewalld/direct.xml
# cat /etc/firewalld/direct.xml
![在 CentOS 7 中检查 Firewalld 保存的规则](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firewalld-Save-Rules.png)
检查 Firewalld 保存的规则
然后启用规则使其立即生效:
# firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT
现在你可以从 RHEL 7 中通过 telnet 登录到 web 服务器并再次运行 [tcpdump][3] 监视两台机器之间的 TCP 流量,这次路由器 2号已经启用了防火墙。
# telnet 10.0.0.20 80
# tcpdump -qnnvvv -i enp0s3 host 10.0.0.20
如果你想只允许从 192.168.0.18 到 web 服务器80 号端口)的连接而阻塞 192.168.0.0/24 网络中的其它来源呢?
在 web 服务器的防火墙中添加以下规则:
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/24" service name="http" accept'
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/24" service name="http" accept' --permanent
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop'
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop' --permanent
现在你可以从 192.168.0.18 和 192.168.0.0/24 中的其它机器发送到 web 服务器的 HTTP 请求。第一种情况连接会成功完成,但第二种情况最终会超时。
任何下面的命令可以验证这个结果:
# telnet 10.0.0.20 80
# wget 10.0.0.20
我强烈建议你看看 Fedora Project Wiki 中的 [Firewalld Rich Language][4] 文档更详细地了解关于富规则的内容。
### RHEL 7 中的网络地址转换 ###
网络地址转换NAT是为专用网络中的一组计算机也可能是其中的一台分配一个独立的公共 IP 地址的过程。结果,在内部网络中仍然可以用它们自己的私有 IP 地址区别,但外部“看来”它们是一样的。
另外,网络地址转换使得内部网络中的计算机发送请求到外部资源(例如因特网)然后只有源系统能接收到对应的响应成为可能。
现在让我们考虑下面的场景:
![RHEL 中的网络地址转换](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Address-Translation-Diagram.png)
网络地址转换
在路由器 2 中,我们会把 enp0s3 接口移动到外部区域enp0s8 到内部区域,伪装或者说 NAT 默认是启用的:
# firewall-cmd --list-all --zone=external
# firewall-cmd --change-interface=enp0s3 --zone=external
# firewall-cmd --change-interface=enp0s3 --zone=external --permanent
# firewall-cmd --change-interface=enp0s8 --zone=internal
# firewall-cmd --change-interface=enp0s8 --zone=internal --permanent
对于我们当前的设置,我们将把内部区域(连同其中启用的任何东西)设为默认区域:
# firewall-cmd --set-default-zone=internal
下一步,让我们重载防火墙规则并保持状态信息:
# firewall-cmd --reload
最后,在 web 服务器中添加路由器 2 为默认网关:
# ip route add default via 10.0.0.18
现在你会发现在 web 服务器中你可以 ping 路由器 1 和外部网站(例如 tecmint.com
# ping -c 2 192.168.0.1
# ping -c 2 tecmint.com
![验证网络路由](http://www.tecmint.com/wp-content/uploads/2015/07/Verify-Network-Routing.png)
验证网络路由
### 在 RHEL 7 中设置内核运行时参数 ###
Linux 允许你更改、启用以及停用内核运行时参数RHEL 也不例外。当运行条件发生变化时,/proc/sys 接口允许你实时设置运行时参数以改变系统行为,而不需要太多麻烦。
为了实现这个目的,会用 shell 内建命令 echo 写入 /proc/sys/<category\> 中的文件其中 <category\> 很可能是以下目录中的一个
- dev: 连接到机器中的特定设备的参数。
- fs: 文件系统配置(例如 quotas 和 inodes
- kernel: 内核配置。
- net: 网络配置。
- vm: 内核虚拟内存的使用。
要显示所有当前可用值的列表,运行
# sysctl -a | less
在第一部分中,我们通过以下命令改变了 net.ipv4.ip_forward 参数的值以允许 Linux 机器作为一个路由器。
# echo 1 > /proc/sys/net/ipv4/ip_forward
另一个你可能想要设置的运行时参数是 kernel.sysrq它会启用键盘上的 SysRq 键,允许通过特定的组合键调用一些底层功能,例如在系统因某些原因冻结时重启系统:
# echo 1 > /proc/sys/kernel/sysrq
要显示特定参数的值,可以按照下面方式使用 sysctl
# sysctl <parameter.name>
例如,
# sysctl net.ipv4.ip_forward
# sysctl kernel.sysrq
一些参数,例如上面提到的一个,只需要一个值,而其它一些(例如 fs.inode-state要求多个值
![在 Linux 中查看内核参数](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Kernel-Parameters.png)
查看内核参数
不管什么情况下,做任何更改之前你都需要阅读内核文档。
请注意系统重启后这些设置会丢失。要使这些更改永久生效,我们需要添加内容到 /etc/sysctl.d 目录的 .conf 文件,像下面这样:
# echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/10-forward.conf
(其中数字 10 表示相对同一个目录中其它文件的处理顺序)。
并用下面命令启用更改
# sysctl -p /etc/sysctl.d/10-forward.conf
### 总结 ###
在这篇指南中我们解释了基本的包过滤、网络地址转换,以及如何在运行的系统中设置内核运行时参数并使其在重启后保持。我希望这些信息能对你有用,如往常一样,我们期望收到你的回复!
别犹豫,在下面的表格中和我们分享你的疑问、评论和建议吧。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/perform-packet-filtering-network-address-translation-and-set-kernel-runtime-parameters-in-rhel/
作者:[Gabriel Cánepa][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/
[2]:http://www.tecmint.com/firewalld-rules-for-centos-7/
[3]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/
[4]:https://fedoraproject.org/wiki/Features/FirewalldRichLanguage

RHCE 第三部分 - 如何使用 Linux 工具集产生和发送系统活动报告
================================================================================
作为一个系统工程师你经常需要生成一些显示系统资源利用率的报告以便确保1最佳地利用资源2防止出现瓶颈3确保可扩展性等等。
![监视 Linux 性能活动报告](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Performance-Activity-Reports.jpg)
RHCE 第三部分:监视 Linux 性能活动报告
除了著名的用于检测磁盘、内存和 CPU 使用率的原生 Linux 工具之外,红帽企业版 Linux 7 还提供了另外两个能为你的报告丰富可收集数据的工具集sysstat 和 dstat。
在这篇文章中,我们会介绍两者,但首先让我们来回顾一下传统工具的使用。
### 原生 Linux 工具 ###
使用 df你可以报告磁盘空间以及文件系统的 inode 使用情况。你需要监视两者,因为缺少磁盘空间会阻止你保存更多文件(甚至会导致系统崩溃),就像耗尽 inode 意味着你不能将文件链接到对应的数据结构,从而导致同样的结果:你不能将那些文件保存到磁盘中。
# df -h [以人类可读形式显示输出]
# df -h --total [生成总计]
![检查 Linux 总的磁盘使用](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-Disk-Usage.png)
检查 Linux 总的磁盘使用
# df -i [显示文件系统的 inode 数目]
# df -i --total [生成总计]
![检查 Linux 总的 inode 数目](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-inode-Numbers.png)
检查 Linux 总的 inode 数目
用 du你可以估计文件、目录或文件系统的文件空间使用。
举个例子,让我们来看看 /home 目录使用了多少空间,它包括了所有用户的个人文件。第一条命令会返回整个 /home 目录当前使用的所有空间,第二条命令会显示子目录的分类列表:
# du -sch /home
# du -sch /home/*
![检查 Linux 目录磁盘大小](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Directory-Disk-Size.png)
检查 Linux 目录磁盘大小
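在此基础上,一个常见的变体是把 du 的输出排序,找出占用空间最大的几个子目录(以下只是一种可能的写法):

```shell
# 按占用空间从大到小列出 /home 下的子目录,取前 5 名
du -sk /home/* 2>/dev/null | sort -rn | head -n 5
```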
别错过了:
- [检查 Linux 磁盘空间使用的 12 个 df 命令例子][1]
- [查看文件/目录磁盘使用的 10 个 du 命令例子][2]
另一个你工具集中不容忽视的工具就是 vmstat。它允许你查看进程、CPU 和 内存使用、磁盘活动以及其它的大概信息。
如果不带参数运行vmstat 会返回自从上一次启动后的平均信息。尽管你可能以这种方式使用该命令有一段时间了,再看一些系统使用率的例子会有更多帮助,例如在例子中定义了时间间隔。
例如
# vmstat 5 10
会每个 5 秒返回 10 个事例:
![检查 Linux 系统性能](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Systerm-Performance.png)
检查 Linux 系统性能
正如你从上面图片看到的vmstat 的输出分为很多列proc(process)、memory、swap、io、system、和 CPU。每个字段的意义可以在 vmstat man 手册的 FIELD DESCRIPTION 部分找到。
在哪里 vmstat 可以派上用场呢?让我们在 yum 升级之前和升级时检查系统行为:
# vmstat -a 1 5
![Vmstat Linux 性能监视](http://www.tecmint.com/wp-content/uploads/2015/08/Vmstat-Linux-Peformance-Monitoring.png)
Vmstat Linux 性能监视
请注意当磁盘上的文件被更改时活跃内存的数量增加写到磁盘的块数目bo和属于用户进程的 CPU 时间us也是这样。
或者一个保存大文件到磁盘时dsync 引发):
# vmstat -a 1 5
# dd if=/dev/zero of=dummy.out bs=1M count=1000 oflag=dsync
![Vmstat Linux 磁盘性能监视](http://www.tecmint.com/wp-content/uploads/2015/08/VmStat-Linux-Disk-Performance-Monitoring.png)
Vmstat Linux 磁盘性能监视
在这个例子中我们可以看到很大数目的块被写入到磁盘bo这正如预期的那样同时 CPU 处理任务之前等待 IO 操作完成的时间wa也增加了。
**别错过**: [Vmstat Linux 性能监视][3]
### 其它 Linux 工具 ###
正如本文介绍部分提到的,这里有其它的工具你可以用来检测系统状态和利用率(不仅红帽,其它主流发行版的官方支持库中也提供了这些工具)。
sysstat 软件包包含以下工具:
- sar (收集、报告、或者保存系统活动信息)。
- sadf (以多种方式显式 sar 收集的数据)。
- mpstat (报告处理器相关的统计信息)。
- iostat (报告 CPU 统计信息和设备以及分区的 IO统计信息
- pidstat (报告 Linux 任务统计信息)。
- nfsiostat (报告 NFS 的输出/输出统计信息)。
- cifsiostat (报告 CIFS 统计信息)
- sa1 (收集并保存系统活动日常文件的二进制数据)。
- sa2 (在 /var/log/sa 目录写每日报告)。
dstat 为这些工具提供的功能添加了一些额外的特性,以及更多的计数器和更大的灵活性。你可以通过运行 yum info sysstat 或者 yum info dstat 找到每个工具完整的介绍,或者安装完成后分别查看每个工具的 man 手册。
安装两个软件包:
# yum update && yum install sysstat dstat
sysstat 主要的配置文件是 /etc/sysconfig/sysstat。你可以在该文件中找到下面的参数
# How long to keep log files (in days).
# If value is greater than 28, then log files are kept in
# multiple directories, one for each month.
HISTORY=28
# Compress (using gzip or bzip2) sa and sar files older than (in days):
COMPRESSAFTER=31
# Parameters for the system activity data collector (see sadc manual page)
# which are used for the generation of log files.
SADC_OPTIONS="-S DISK"
# Compression program to use.
ZIP="bzip2"
sysstat 安装完成后,/etc/cron.d/sysstat 中会添加和启用两个 cron 作业。第一个作业每 10 分钟运行系统活动计数工具并在 /var/log/sa/saXX 中保存报告,其中 XX 是该月的一天。
因此,/var/log/sa/sa05 会包括该月份第 5 天所有的系统活动报告。这里假设我们在上面的配置文件中对 HISTORY 变量使用默认的值:
*/10 * * * * root /usr/lib64/sa/sa1 1 1
第二个作业在每天夜间 1153 生成每日进程计数总结并把它保存到 /var/log/sa/sarXX 文件,其中 XX 和之前例子中的含义相同:
53 23 * * * root /usr/lib64/sa/sa2 -A
例如,你可能想要输出该月份第 6 天从上午 9:30 到晚上 530 的系统统计信息到一个 LibreOffice Calc 或 Microsoft Excel 可以查看的 .csv 文件(它也允许你创建表格和图片):
# sadf -s 09:30:00 -e 17:30:00 -dh /var/log/sa/sa06 -- | sed 's/;/,/g' > system_stats20150806.csv
你可以在上面的 sadf 命令中用 -j 标记代替 -d 以 JSON 格式输出系统统计信息,这当你在 web 应用中使用这些数据的时候非常有用。
![Linux 系统统计信息](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-System-Statistics.png)
Linux 系统统计信息
最后,让我们看看 dstat 提供什么功能。请注意如果不带参数运行dstat 默认使用 -cdngy表示 CPU、磁盘、网络、内存页、和系统统计信息并每秒添加一行可以在任何时候用 Ctrl + C 中断执行):
# dstat
![Linux 磁盘统计检测](http://www.tecmint.com/wp-content/uploads/2015/08/dstat-command.png)
Linux 磁盘统计检测
要输出统计信息到 .csv 文件,可以用 --output 标记后面跟一个文件名称。让我们来看看在 LibreOffice Calc 中该文件看起来是怎样的:
![检测 Linux 统计信息输出](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Statistics-Output.png)
检测 Linux 统计信息输出
我强烈建议你查看 dstat 的 man 手册(为了方便阅读,它与 sysstat 的 man 手册一起以 PDF 格式随本文提供)。你会找到其它能帮助你创建自定义的详细系统活动报告的选项。
**别错过**: [Sysstat Linux 的使用活动检测工具][4]
### 总结 ###
在该指南中我们解释了如何使用 Linux 原生工具以及 RHEL 7 提供的特定工具来生成系统使用报告。在某种情况下,你可能像依赖最好的朋友那样依赖这些报告。
你很可能使用过这篇指南中我们没有介绍到的其它工具。如果真是这样的话,用下面的表格和社区中的其他成员一起分享吧,也可以是任何其它的建议/疑问/或者评论。
我们期待你的回复。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/
作者:[Gabriel Cánepa][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/how-to-check-disk-space-in-linux/
[2]:http://www.tecmint.com/check-linux-disk-usage-of-files-and-directories/
[3]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/
[4]:http://www.tecmint.com/install-sysstat-in-linux/

RHCSA 系列RHEL 7 中的进程管理:开机,关机,以及两者之间的所有其他事项(第五部分)
================================================================================
我们将概括和简要地复习从你按开机按钮来打开你的 RHEL 7 服务器到呈现出命令行界面的登录屏幕之间所发生的所有事情,以此来作为这篇文章的开始。
![RHEL 7 开机过程](http://www.tecmint.com/wp-content/uploads/2015/03/RHEL-7-Boot-Process.png)
Linux 开机过程
**请注意:**
1. 相同的基本原则也可以应用到其他的 Linux 发行版本中,但可能需要较小的更改,并且
2. 下面的描述并不是旨在给出开机过程的一个详尽的解释,而只是介绍一些基础的东西
### Linux 开机过程 ###
1.初始化 POST(加电自检)并执行硬件检查;
2.当 POST 完成后,系统的控制权将移交给启动管理器的第一阶段,它存储在一个硬盘的引导扇区(对于使用 BIOS 和 MBR 的旧式的系统)或存储在一个专门的 (U)EFI 分区上。
3.启动管理器的第一阶段完成后,接着进入启动管理器的第二阶段,通常大多数使用的是 GRUB(GRand Unified Boot Loader 的简称),它驻留在 `/boot` 中,反过来加载内核和驻留在 RAM 中的初始化文件系统(被称为 initramfs它包含执行必要操作所需要的程序和二进制文件以此来最终挂载真实的根文件系统)。
4.接着经历了闪屏过后,呈现在我们眼前的是类似下图的画面,它允许我们选择一个操作系统和内核来启动:
![RHEL 7 开机屏幕](http://www.tecmint.com/wp-content/uploads/2015/03/RHEL-7-Boot-Screen.png)
启动菜单屏幕
5.然后内核对挂载到系统的硬件进行设置,一旦根文件系统被挂载,接着便启动 PID 为 1 的进程,反过来这个进程将初始化其他的进程并最终呈现给我们一个登录提示符界面。
注意:假如我们稍后想查看这些信息,可以使用 [dmesg 命令][1]LCTT 译注:这篇文章已经翻译并发表了,链接是 https://linux.cn/article-3587-1.html ),并使用这个系列里的上一篇文章中解释过的工具(即 grep来过滤它的输出。
![登录屏幕和进程的 PID](http://www.tecmint.com/wp-content/uploads/2015/03/Login-Screen-Process-PID.png)
登录屏幕和进程的 PID
在上面的例子中,我们使用了众所周知的 `ps` 命令来显示在系统启动过程中的一系列当前进程的信息,它们的父进程(或者换句话说,就是开启这些进程的进程)为 systemd大多数现代的 Linux 发行版本已经切换到的系统和服务管理器):
# ps -o ppid,pid,uname,comm --ppid=1
记住 `-o`(为 `--format` 的简写)选项允许你以一个自定义的格式来显示 ps 的输出,以此来满足你的需求;这个自定义格式使用 man ps 里 STANDARD FORMAT SPECIFIERS 一节中的特定关键词。
另一个你想自定义 ps 的输出而不是使用其默认输出的情形是:当你需要找到引起 CPU 或内存消耗过多的那些进程,并按照下列方式来对它们进行排序时:
# ps aux --sort=+pcpu # 以 %CPU 来排序(增序)
# ps aux --sort=-pcpu # 以 %CPU 来排序(降序)
# ps aux --sort=+pmem # 以 %MEM 来排序(增序)
# ps aux --sort=-pmem # 以 %MEM 来排序(降序)
# ps aux --sort=+pcpu,-pmem # 结合 %CPU (增序) 和 %MEM (降序)来排列
![http://www.tecmint.com/wp-content/uploads/2015/03/ps-command-output.png](http://www.tecmint.com/wp-content/uploads/2015/03/ps-command-output.png)
自定义 ps 命令的输出
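可以用一小段管道粗略验证排序是否生效:取 %MEM 降序输出的前两个进程,比较它们的内存占用(这只是一个演示用的检查,并非 ps 的标准用法):

```shell
# 第 23 行(跳过表头)分别是 %MEM 最高和次高的进程,前者的占用应不小于后者
ps aux --sort=-pmem | awk 'NR==2{a=$4} NR==3{b=$4} NR>3{exit} END{if (a+0 >= b+0) print "排序正常"; else print "排序异常"}'
```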
### systemd 的一个介绍 ###
在 Linux 世界中,很少有决定能够比在主流的 Linux 发行版本中采用 systemd 引起更多的争论。systemd 的倡导者指出,其主要的优势有以下这些事实:
另外请阅读: ['init' 和 'systemd' 背后的故事][2]
1. 在系统启动期间systemd 允许并发地启动更多的进程(相比于先前的 SysVinitSysVinit 似乎总是表现得更慢,因为它一个接一个地启动进程,检查一个进程是否依赖于另一个进程,然后等待守护进程去开启可以开始的更多的服务),并且
2. 在一个运行着的系统中,它作为一个动态的资源管理器来工作。这样在开机期间,当一个服务被需要时,才启动它(以此来避免消耗系统资源)而不是在没有一个合理的原因的情况下启动额外的服务。
3. 向后兼容 sysvinit 的脚本。
systemd 由 systemctl 工具控制,假如你带有 SysVinit 背景,你将会对以下的内容感到熟悉:
- service 工具, 在旧一点的系统中,它被用来管理 SysVinit 脚本,以及
- chkconfig 工具, 为系统服务升级和查询运行级别信息
- shutdown, 你一定使用过几次来重启或关闭一个运行的系统。
下面的表格展示了使用传统的工具和 systemctl 之间的相似之处:
<table cellspacing="0" border="0">
<colgroup width="237"></colgroup>
<colgroup width="256"></colgroup>
<colgroup width="1945"></colgroup>
<tbody>
<tr>
<td align="left" height="25" bgcolor="#B7B7B7" style="border: 1px solid #000000;"><b><span style="color: black; font-family: Arial; font-size: small;">传统工具</span></b></td>
<td align="left" bgcolor="#B7B7B7" style="border: 1px solid #000000;"><b><span style="color: black; font-family: Arial; font-size: small;">systemctl 等价命令</span></b></td>
<td align="left" bgcolor="#B7B7B7" style="border: 1px solid #000000;"><b><span style="color: black; font-family: Arial; font-size: small;">描述</span></b></td>
</tr>
<tr class="alt">
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service name start</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl start name</span></td>
<td align="left" style="border: 1px solid #000000;">启动 name这里的 name 是一个服务)</td>
</tr>
<tr>
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service name stop</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl stop name</span></td>
<td align="left" style="border: 1px solid #000000;">停止 name</td>
</tr>
<tr class="alt">
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service name condrestart</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl try-restart name</span></td>
<td align="left" style="border: 1px solid #000000;">重启 name仅当它已在运行时</td>
</tr>
<tr>
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service name restart</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl restart name</span></td>
<td align="left" style="border: 1px solid #000000;">重启 name</td>
</tr>
<tr class="alt">
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service name reload</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl reload name</span></td>
<td align="left" style="border: 1px solid #000000;">重新加载 name 的配置</td>
</tr>
<tr>
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service name status</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl status name</span></td>
<td align="left" style="border: 1px solid #000000;">显示 name 的当前状态</td>
</tr>
<tr class="alt">
<td align="left" height="23" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">service --status-all</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Arial;">显示当前所有服务的状态</span></td>
</tr>
<tr>
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">chkconfig name on</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl enable name</span></td>
<td align="left" style="border: 1px solid #000000;">按单元文件(即符号链接所指向的文件)中的设定,使 name 在开机时自动启动。启用或停用服务开机自启的过程,就是在 /etc/systemd/system 目录中添加或删除符号链接。</td>
</tr>
<tr class="alt">
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">chkconfig name off</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl disable name</span></td>
<td align="left" style="border: 1px solid #000000;">按单元文件(即符号链接所指向的文件)中的设定,停用 name 的开机自启</td>
</tr>
<tr>
<td align="left" height="21" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">chkconfig --list name</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl is-enabled name</span></td>
<td align="left" style="border: 1px solid #000000;">检查 name某个特定服务当前是否已启用</td>
</tr>
<tr class="alt">
<td align="left" height="23" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">chkconfig --list</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl --type=service</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Arial;">显示所有服务并指明它们是已启用还是已停用</span></td>
</tr>
<tr>
<td align="left" height="23" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">shutdown -h now</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl poweroff</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Arial;">关闭机器halt</span></td>
</tr>
<tr class="alt">
<td align="left" height="23" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">shutdown -r now</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Courier New;">systemctl reboot</span></td>
<td align="left" style="border: 1px solid #000000;"><span style="color: black; font-family: Arial;">Reboot the system</span></td>
</tr>
</tbody>
</table>
systemd 还引入了“单元”(unit它可以是一个服务、一个挂载点、一个设备或一个网络套接字和“目标”target它定义了 systemd 如何同时管理和启动一组相关进程)的概念。可以把目标大致类比为基于 SysVinit 的系统中的运行级别,尽管事实上二者并不完全等价。
### 总结归纳 ###
其他与进程管理相关的任务包括(但不限于)以下几项:
**1. 在考虑到系统资源的使用上,调整一个进程的执行优先级:**
这是通过 `renice` 工具来完成的,它可以改变一个或多个正在运行的进程的调度优先级。简单来说,内核(当前只支持 2.6 及之后的版本)会根据进程的 nice 值(取值范围为 -20 到 19数值越低优先级越高来为其分配系统资源。
`renice` 的基本语法如下:
# renice [-n] priority [-gpu] identifier
在上面的通用命令中,第一个参数是将要设置的优先级数值,而 identifier 参数可以是进程 ID这是默认的设定、进程组 ID、用户 ID 或者用户名。普通用户(即 root 以外的用户)只能修改自己所拥有的进程的调度优先级,并且只能调高 nice 值(这意味着占用更少的系统资源)。
![在 Linux 中调整进程的优先级](http://www.tecmint.com/wp-content/uploads/2015/03/Process-Scheduling-Priority.png)
进程调度优先级
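作为补充下面是一个可以安全练习的小例子其中作为示例的后台进程和具体数值均为假设先启动一个后台进程用 `renice` 将其 nice 值调高到 10即降低优先级再用 `ps` 确认修改已生效:

```shell
# 启动一个临时的后台进程作为示例
sleep 60 &
PID=$!

# 将该进程的 nice 值调整为 10普通用户只能调高 nice 值
renice -n 10 -p "$PID"

# 确认修改ps 输出的 NI 列应显示 10
ps -o pid,ni,comm -p "$PID"

# 清理示例进程
kill "$PID"
```

注意,若要调低 nice 值(提高优先级),则需要 root 权限。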
**2. 按照需要杀死一个进程(或终止其正常执行)**
更精确地说,杀死一个进程指的是通过 [kill 或 pkill][3]命令给该进程发送一个信号,让它优雅地(SIGTERM=15)或立即(SIGKILL=9)结束它的执行。
这两个工具的不同之处在于前一个被用来终止一个特定的进程或一个进程组,而后一个则允许你在进程的名称和其他属性的基础上,执行相同的动作。
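顺便一提,可以用 `kill -l` 来查看信号编号与名称的对应关系:

```shell
kill -l      # 列出所有可用的信号
kill -l 15   # 显示编号 15 对应的信号名TERM
kill -l 9    # 显示编号 9 对应的信号名KILL
```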
另外pkill 通常与 pgrep 配合使用pgrep 可以先列出将受影响的进程的 PID。例如在运行下面的命令之前
# pkill -u gacanepa
查看一眼由 gacanepa 所拥有的 PID 或许会带来点帮助:
# pgrep -l -u gacanepa
![找到用户拥有的 PID](http://www.tecmint.com/wp-content/uploads/2015/03/Find-PIDs-of-User.png)
找到用户拥有的 PID
默认情况下kill 和 pkill 都向进程发送 SIGTERM 信号。如我们上面提到的,这个信号可以被忽略(即该进程可能终止也可能不终止其执行),所以当你有充分的理由需要真正地停止一个运行中的进程时,就需要在命令行中指定 SIGKILL 信号:
# kill -9 identifier # 杀死一个进程或一个进程组
# kill -s SIGNAL identifier # 同上
# pkill -s SIGNAL identifier # 通过名称或其他属性来杀死一个进程
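下面这个小脚本演示了两者的区别(仅为示意):子进程通过 `trap` 忽略了 SIGTERM因此 `kill -15` 对它无效,而 SIGKILL 无法被忽略或捕获:

```shell
# 启动一个忽略 SIGTERM 的后台进程trap "" TERM 设置的忽略会被 exec 之后的程序继承)
sh -c 'trap "" TERM; exec sleep 300' &
PID=$!

kill -15 "$PID"                       # 发送 SIGTERM但会被忽略
sleep 1
kill -0 "$PID" && echo "进程仍在运行"  # kill -0 仅检测进程是否存在

kill -9 "$PID"                        # SIGKILL 无法被忽略,进程被强制终止
```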
### 结论 ###
在这篇文章中,我们讲解了 RHEL 7 系统中有关开机启动过程的基本知识,并介绍了一些可用来管理进程的工具,既包括通用程序,也包括 systemd 特有的命令。
请注意,这个列表并不旨在涵盖有关这个话题的所有花哨的工具,请随意使用下面的评论栏来添加你自已钟爱的工具和命令。同时欢迎你的提问和其他的评论。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/rhcsa-exam-boot-process-and-process-management/
作者:[Gabriel Cánepa][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/dmesg-commands/
[2]:http://www.tecmint.com/systemd-replaces-init-in-linux/
[3]:http://www.tecmint.com/how-to-kill-a-process-in-linux/

RHCSA 系列:使用 'Parted' 和 'SSM' 来配置和加密系统存储 Part 6
================================================================================
在本篇文章中,我们将讨论在 RHEL 7 中如何使用传统的工具来设置和配置本地系统存储,并介绍系统存储管理器(也称为 SSM),它将极大地简化上面的任务。
![配置和加密系统存储](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-and-Encrypt-System-Storage.png)
RHCSA: 配置和加密系统存储 Part 6
请注意,我们将在这篇文章中展开这个话题,但由于该话题的宽泛性,我们将在下一期(Part 7)中继续介绍有关它的描述和使用。
### 在 RHEL 7 中创建和修改分区 ###
在 RHEL 7 中, parted 是默认的用来处理分区的程序,且它允许你:
- 展示当前的分区表
- 操纵(增加或减少分区的大小)现有的分区
- 利用空余的磁盘空间或额外的物理存储设备来创建分区
强烈建议:在试图新增分区或更改现有分区之前,应当确保设备上没有任何分区正在使用(`umount /dev/partition`);假如设备的一部分被用作 swap 分区,在操作期间也需要将其禁用(`swapoff -v /dev/partition`)。
实施上面操作的最简单方法,是使用安装介质(例如一张 RHEL 7 安装 DVD 或 USB以急救模式启动 RHELTroubleshooting → Rescue a Red Hat Enterprise Linux system。当系统让你选择一个选项来挂载现有的 Linux 安装时,选择“跳过”这个选项,接着你将看到一个命令行提示符。在其中,你就可以像下图显示的那样,键入与在一个未被使用的物理设备上创建普通分区时相同的命令。
![RHEL 7 急救模式](http://www.tecmint.com/wp-content/uploads/2015/04/RHEL-7-Rescue-Mode.png)
RHEL 7 急救模式
要启动 parted只需键入
# parted /dev/sdb
其中 `/dev/sdb` 是你将要创建新分区所在的设备;然后键入 `print` 来显示当前设备的分区表:
![创建新的分区](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partition.png)
创建新的分区
正如你所看到的那样,在这个例子中,我们使用的是一块 5 GB 的虚拟磁盘。现在我们将要创建一个 4 GB 的主分区,然后将它格式化为 xfs 文件系统,这是 RHEL 7 中默认的文件系统。
你可以从多种文件系统中进行选择。这里需要用 mkpart 手动创建分区,然后像平常一样用 mkfs.fstype 对分区进行格式化,因为 mkpart 并不能开箱即用地支持许多现代文件系统。
在下面的例子中,我们将为设备设定一个标记,然后在 `/dev/sdb` 上创建一个主分区 `(p)`,它从设备的 0% 开始,并在 4000MB(4 GB) 处结束。
![在 Linux 中设定分区名称](http://www.tecmint.com/wp-content/uploads/2015/04/Label-Partition.png)
标记分区的名称
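如果你想在不触碰真实磁盘的情况下练习这些命令可以先在一个文件上创建磁盘镜像parted 同样可以对它进行操作(下面的路径和大小仅为示例):

```shell
# 创建一个 100 MB 的稀疏文件,作为练习用的“磁盘”
truncate -s 100M /tmp/practice.img

# 写入 msdos 分区表标签,并创建一个主分区(-s 表示非交互的脚本模式)
parted -s /tmp/practice.img mklabel msdos
parted -s /tmp/practice.img mkpart primary 1MiB 50MiB

# 显示分区表以确认结果
parted -s /tmp/practice.img print

# 练习结束后删除镜像文件
rm /tmp/practice.img
```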
接下来,我们将把分区格式化为 xfs 文件系统,然后再次打印出分区表,以此来确保更改已被应用。
# mkfs.xfs /dev/sdb1
# parted /dev/sdb print
![在 Linux 中格式化分区](http://www.tecmint.com/wp-content/uploads/2015/04/Format-Partition-in-Linux.png)
格式化分区为 XFS 文件系统
对于旧一点的文件系统,在 parted 中你应该使用 `resize` 命令来改变分区的大小。不幸的是,这只适用于 ext2, fat16, fat32, hfs, linux-swap, 和 reiserfs (若 libreiserfs 已被安装)。
因此,改变分区大小的唯一方式是删除它然后重新创建(所以,请确保你对数据做了完整的备份!)。难怪 RHEL 7 中默认的分区方案是基于 LVM 的。
使用 parted 来移除一个分区,可以用:
# parted /dev/sdb print
# parted /dev/sdb rm 1
![在 Linux 中移除分区](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-Partition-in-Linux.png)
移除或删除分区
### 逻辑卷管理(LVM) ###
一旦磁盘分好了区,再去更改分区的大小就会是一件困难或有风险的事。基于这个原因,假如我们计划对系统上分区的大小进行更改,就应当考虑使用 LVM 而不是传统的分区系统LVM 允许多个物理设备组成一个卷组,在其上可以容纳任意数目的逻辑卷,而逻辑卷的扩大或缩小不会带来任何麻烦。
简单来说,你会发现下面的示意图对记住 LVM 的基础架构或许有用。
![LVM 的基本架构](http://www.tecmint.com/wp-content/uploads/2015/04/LVM-Diagram.png)
LVM 的基本架构
#### 创建物理卷,卷组和逻辑卷 ####
下面的步骤演示如何使用传统的卷管理工具来设置 LVM。由于你可以通过阅读本站的 LVM 系列文章来深入这个话题,这里我将只概要地介绍设置 LVM 的基本步骤,然后与使用 SSM 实现相同功能的方式做个比较。
**注**: 我们将使用整个磁盘 `/dev/sdb``/dev/sdc` 来作为 PVs (物理卷),但是否执行相同的操作完全取决于你。
**1. 使用 /dev/sdb 和 /dev/sdc 中 100% 的可用磁盘空间来创建分区 `/dev/sdb1``/dev/sdc1`**
# parted /dev/sdb print
# parted /dev/sdc print
![创建新分区](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partitions.png)
创建新分区
**2. 分别在 /dev/sdb1 和 /dev/sdc1 上共创建 2 个物理卷。**
# pvcreate /dev/sdb1
# pvcreate /dev/sdc1
![创建两个物理卷](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Physical-Volumes.png)
创建两个物理卷
记住,你可以使用 pvdisplay /dev/sd{b,c}1 来显示有关新建的 PV 的信息。
**3. 在上一步中创建的 PV 之上创建一个 VG**
# vgcreate tecmint_vg /dev/sd{b,c}1
![在 Linux 中创建卷组](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Volume-Group.png)
创建卷组
记住,你可使用 vgdisplay tecmint_vg 来显示有关新建的 VG 的信息。
**4. 像下面那样,在 VG tecmint_vg 之上创建 3 个逻辑卷:**
# lvcreate -L 3G -n vol01_docs tecmint_vg [vol01_docs → 3 GB]
# lvcreate -L 1G -n vol02_logs tecmint_vg [vol02_logs → 1 GB]
# lvcreate -l 100%FREE -n vol03_homes tecmint_vg [vol03_homes → 6 GB]
![在 LVM 中创建逻辑卷](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Logical-Volumes.png)
创建逻辑卷
记住,你可以使用 lvdisplay tecmint_vg 来显示有关在 VG tecmint_vg 之上新建的 LV 的信息。
**5. 格式化每个逻辑卷为 xfs 文件系统格式(假如你计划在以后将要缩小卷的大小,请别使用 xfs 文件系统格式!)**
# mkfs.xfs /dev/tecmint_vg/vol01_docs
# mkfs.xfs /dev/tecmint_vg/vol02_logs
# mkfs.xfs /dev/tecmint_vg/vol03_homes
**6. 最后,挂载它们:**
# mount /dev/tecmint_vg/vol01_docs /mnt/docs
# mount /dev/tecmint_vg/vol02_logs /mnt/logs
# mount /dev/tecmint_vg/vol03_homes /mnt/homes
#### 移除逻辑卷,卷组和物理卷 ####
**7. 现在我们将进行与刚才相反的操作并移除 LV、VG 和 PV**
# lvremove /dev/tecmint_vg/vol01_docs
# lvremove /dev/tecmint_vg/vol02_logs
# lvremove /dev/tecmint_vg/vol03_homes
# vgremove /dev/tecmint_vg
# pvremove /dev/sd{b,c}1
**8. 现在,让我们来安装 SSM我们将看到如何只用一步就完成上面所有的操作**
# yum update && yum install system-storage-manager
我们将和上面一样,使用相同的名称和大小:
# ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 /mnt/docs /dev/sd{b,c}1
# ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 /mnt/logs /dev/sd{b,c}1
# ssm create -n vol03_homes -p tecmint_vg --fstype ext4 /mnt/homes /dev/sd{b,c}1
是的! SSM 可以让你:
- 初始化块设备来作为物理卷
- 创建一个卷组
- 创建逻辑卷
- 格式化 LV 和
- 只使用一个命令来挂载它们
**9. 现在,我们可以使用下面的命令来展示有关 PVVG 或 LV 的信息:**
# ssm list dev
# ssm list pool
# ssm list vol
![检查有关 PV, VG,或 LV 的信息](http://www.tecmint.com/wp-content/uploads/2015/04/Display-LVM-Information.png)
检查有关 PV, VG,或 LV 的信息
**10. 正如我们知道的那样, LVM 的一个显著的特点是可以在不停机的情况下更改(增大或缩小) 逻辑卷的大小:**
假定在 vol02_logs 上我们用尽了空间,而 vol03_homes 还留有足够的空间。我们将把 vol03_homes 的大小调整为 4 GB并使用剩余的空间来扩展 vol02_logs
# ssm resize -s 4G /dev/tecmint_vg/vol03_homes
再次运行 `ssm list pool`,并记录 tecmint_vg 中的剩余空间的大小:
![查看卷的大小](http://www.tecmint.com/wp-content/uploads/2015/04/Check-LVM-Free-Space.png)
查看卷的大小
然后执行:
# ssm resize -s+1.99 /dev/tecmint_vg/vol02_logs
**注**: 在 `-s` 后的加号表示指定的值将被加到当前值之上。
**11. 使用 ssm 来移除逻辑卷和卷组也更加简单,只需使用:**
# ssm remove tecmint_vg
这个命令将返回一个提示,询问你是否确认删除 VG 和它所包含的 LV
![移除逻辑卷和卷组](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-LV-VG.png)
移除逻辑卷和卷组
### 管理加密的卷 ###
SSM 也给系统管理员提供了为新的或现存的卷加密的能力。首先,你将需要安装 cryptsetup 软件包:
# yum update && yum install cryptsetup
然后写出下面的命令来创建一个加密卷,你将被要求输入一个密码来增强安全性:
# ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/docs /dev/sd{b,c}1
# ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/logs /dev/sd{b,c}1
# ssm create -n vol03_homes -p tecmint_vg --fstype ext4 --encrypt luks /mnt/homes /dev/sd{b,c}1
我们的下一个任务是往 /etc/fstab 中添加条目来让这些逻辑卷在启动时可用,而不是使用设备识别编号(/dev/something)。
我们将使用每个 LV 的 UUID (使得当我们添加其他的逻辑卷或设备后,我们的设备仍然可以被唯一的标记),而我们可以使用 blkid 应用来找到它们的 UUID
# blkid -o value UUID /dev/tecmint_vg/vol01_docs
# blkid -o value UUID /dev/tecmint_vg/vol02_logs
# blkid -o value UUID /dev/tecmint_vg/vol03_homes
在我们的例子中:
![找到逻辑卷的 UUID](http://www.tecmint.com/wp-content/uploads/2015/04/Logical-Volume-UUID.png)
找到逻辑卷的 UUID
接着,使用下面的内容来创建 /etc/crypttab 文件(请更改 UUID 来适用于你的设置)
docs UUID=ba77d113-f849-4ddf-8048-13860399fca8 none
logs UUID=58f89c5a-f694-4443-83d6-2e83878e30e4 none
homes UUID=92245af6-3f38-4e07-8dd8-787f4690d7ac none
然后在 /etc/fstab 中添加如下的条目。请注意到 device_name (/dev/mapper/device_name) 是出现在 /etc/crypttab 中第一列的映射标识:
# Logical volume vol01_docs:
/dev/mapper/docs /mnt/docs ext4 defaults 0 2
# Logical volume vol02_logs
/dev/mapper/logs /mnt/logs ext4 defaults 0 2
# Logical volume vol03_homes
/dev/mapper/homes /mnt/homes ext4 defaults 0 2
现在重启(systemctl reboot),则你将被要求为每个 LV 输入密码。随后,你可以通过检查相应的挂载点来确保挂载操作是否成功:
![确保逻辑卷挂载点](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-LV-Mount-Points.png)
确保逻辑卷挂载点
### 总结 ###
在这篇教程中,我们开始探索如何使用传统的卷管理工具和 SSM 来设置和配置系统存储SSM 也在一个软件包中集成了文件系统和加密功能。这使得对于任何系统管理员来说SSM 是一个非常有价值的工具。
假如你有任何的问题或评论,请让我们知晓。请随意使用下面的评论框来与我们保持联系!
--------------------------------------------------------------------------------
via: http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/
作者:[Gabriel Cánepa][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/create-lvm-storage-in-linux/