mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-03-24 02:20:09 +08:00
commit 667df81784
这是初学者经常会问的一个问题。在这里,我会告诉你们 10 个我最喜欢的博客,这些博客可以帮助我们解决问题,能让我们及时了解所有 Ubuntu 版本的更新消息。不,我谈论的不是通常的 Linux 和 shell 脚本之类的东西,我说的是流畅的 Linux 桌面系统和普通用户所需要的 Ubuntu 使用经验。
这些网站帮助你解决正遇到的问题,向你推介各种应用,并提供来自 Ubuntu 世界的最新消息,让你对 Ubuntu 更加了解。下面列出的是 10 个我最喜欢的博客,它们涵盖了 Ubuntu 的方方面面。
### 10个Ubuntu用户一定要知道的博客 ###
从我开始在 itsfoss 网站上写作起,我就特意把它排除在外,没有列入名单。我也并没有把 [Planet Ubuntu][1] 列入名单,因为它不适合初学者。废话不多说,让我们一起来看下**最好的乌邦图(Ubuntu)博客**(排名不分先后):
### [OMG! Ubuntu!][2] ###
这是一个只针对 Ubuntu 爱好者的网站。无论多小,只要是和乌邦图有关系的,OMG!Ubuntu 都会收入站内!博客主要包括新闻和应用,你也可以在这里找到一些关于 Ubuntu 的教程,但不是很多。

这个博客会让你知道 Ubuntu 世界发生的各种事情。
### [Web Upd8][3] ###
Web Upd8 是我最喜欢的博客。除了涵盖新闻,它还有很多容易理解的教程。Web Upd8 还维护着几个 PPA。博主 [Andrei][4] 有时会在评论里回答你的问题,这对你来说也会很有帮助。

这是一个你可以了解新闻资讯、学习教程的网站。
### [Noobs Lab][5] ###
和 Web Upd8 一样,Noobs Lab 上也有很多教程和新闻,并且它可能拥有 PPA 中最多的主题和图标集。

如果你是个新手,去 Noobs Lab 看看吧。
### [Linux Scoop][6] ###
这个榜单上的大多数博客都是“文字博客”,你通过看说明和截图来学习教程。而 Linux Scoop 上有很多帮助初学者学习的视频,完全是一个视频博客。

比起阅读来,如果你更喜欢看视频,Linux Scoop 应该是最适合你的。
### [Ubuntu Geek][7] ###
这是一个相对比较老的博客,覆盖面很广,并且有很多快速安装的教程和说明。虽然,有时我发现其中的一些教程文章缺乏深度,当然这也许只是我个人的观点。

想要快速小贴士,去 Ubuntu Geek。
### [Tech Drive-in][8] ###
这个网站的更新频率好像没有以前那么快了,可能是 Manuel 在忙于他的工作,但它仍然给我们提供了很多的东西。新闻、教程、应用评论是这个博客的亮点。

这个博客的文章经常被收入 [Ubuntu 新闻简报][9],Tech Drive-in 肯定是一个很值得你关注的网站。
### [UbuntuHandbook][10] ###
快速小贴士、新闻和教程是 UbuntuHandbook 的卖点(USP)。[Ji m][11] 最近也在参与维护一些 PPA。我必须很认真地说,这个博客的页面其实可以做得更好看点,纯属个人观点。

UbuntuHandbook 真的很方便。
### [Unixmen][12] ###
这个网站是由很多人一起维护的,而且并不仅仅局限于 Ubuntu,它也覆盖了很多其他的 Linux 发行版。它有自己的论坛来帮助用户。

紧跟 Unixmen 的步伐吧。
### [The Mukt][13] ###
The Mukt 是 Muktware 新的化身。Muktware 是一个逐渐消亡的 Linux 组织,并以 The Mukt 重生。Muktware 是一个很严肃的 Linux 开源博客,而 The Mukt 涉及的主题更广泛,包括科技新闻、极客新闻,有时还有娱乐新闻(听起来是否有一种混搭风的感觉?)。The Mukt 也包括很多你可能感兴趣的 Ubuntu 新闻。

The Mukt 不仅仅是一个博客,它是一种文化潮流。
### [LinuxG][14] ###
LinuxG 是一个你可以找到所有“怎样安装”类型文章的站点。几乎所有的文章都以“你好,Linux geeksters,正如你所知道的……”开头,这个博客在不同的主题上还可以做得更好。我经常发现有些文章缺乏深度,写得有些仓促,但它仍然是一个关注应用最新版本的好地方。

这是个快速了解新应用及其最新版本的好地方。
### 你还有什么好的站点吗? ###
这些就是我平时经常浏览的 Ubuntu 博客。我知道还有很多我不知道的站点,可能会比我列出来的这些更好。所以,欢迎把你最喜爱的 Ubuntu 博客写在下面的评论区。
--------------------------------------------------------------------------------
via: http://itsfoss.com/ten-blogs-every-ubuntu-user-must-follow/
作者:[Abhishek][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[Caroline](https://github.com/carolinewuyan)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
Debian 8 "Jessie" 将把 GNOME 作为默认桌面环境
================================================================================
> Debian 的 GNOME 团队已经取得了实质进展

<center>*GNOME 3.14 桌面*</center>
**Debian 项目开发者花了很长一段时间来决定将 Xfce、GNOME 还是其他桌面环境作为默认桌面,不过目前看起来 GNOME 赢了。**

[我们两天前提到过][1],GNOME 3.14 的软件包被上传到了 Debian Testing(Debian 8 “Jessie”)的软件仓库中,这是一个令人惊喜的事情。通常情况下,GNOME 的维护者不会这么快决定添加任何软件包,更别说桌面环境了。

事实证明,关于即将到来的 Debian 8 默认桌面的争论已经尘埃落定,尽管“尘埃落定”这个说法可能有点武断。无论如何,总是有些开发者想要 Xfce,另一些则喜欢 GNOME,看起来 MATE 也是不少人的备选。
### 最有可能的是,GNOME 将成为 Debian 8 “Jessie” 的默认桌面环境 ###
我们之所以说“最有可能”,是因为协议尚未最终达成,但看起来 GNOME 已经遥遥领先了。Debian 的维护者和开发者乔伊·赫斯(Joey Hess)解释了为什么会这样。

“根据 https://wiki.debian.org/DebianDesktop/Requalification/Jessie 的初步结果,一些所需数据尚不可用,但在这一点上,我百分之八十地确定 GNOME 已经领先了。特别是在‘辅助功能’和某些 systemd 整合的进度上。在辅助功能方面:GNOME 和 MATE 都领先了一大截。其他一些桌面在 Debian 上的辅助功能支持有所改善,部分原因是这一过程的推动,但仍需要上游的大力支持。”

“systemd 等方面的整合:Xfce、MATE 等正尽力追赶这一领域正在发生的变化,当技术团队停止修改之后,希望它们能有时间在冻结期间解决这些问题。所以这并不是完全否决这些桌面,但从目前的状态看,GNOME 是未来的选择,”乔伊·赫斯[补充说][2]。

这位开发者在邮件中表示,Debian 的 GNOME 团队对他们所维护的项目[充满了激情][3],而 Debian 的 Xfce 团队实际上是默认桌面决定的一个阻碍。

无论如何,Debian 8 “Jessie” 还没有一个具体的发布时间,也没有迹象显示它何时可能发布。另一方面,GNOME 3.14 已经发布了(也许你已经看到新闻了),它很快就能为进入 Debian 的测试做好准备。

我们也应该感谢 Jordi Mallach,Debian 中 GNOME 包的维护者之一,是他为我们指出了正确的信息来源。
--------------------------------------------------------------------------------

via: http://news.softpedia.com/news/Debian-8-quot-Jessie-quot-to-Have-GNOME-as-the-Default-Desktop-459665.shtml

作者:[Silviu Stahie][a]
译者:[fbigun](https://github.com/fbigun)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://news.softpedia.com/news/Debian-8-quot-Jessie-quot-to-Get-GNOME-3-14-459470.shtml
[2]:http://anonscm.debian.org/cgit/tasksel/tasksel.git/commit/?id=dce99f5f8d84e4c885e6beb4cc1bb5bb1d9ee6d7
[3]:http://news.softpedia.com/news/Debian-Maintainer-Says-that-Xfce-on-Debian-Will-Not-Meet-Quality-Standards-GNOME-Is-Needed-454962.shtml
Red Hat Enterprise Linux 5 产品线终结
================================================================================
2007 年 3 月,红帽公司首次发布了它的 [Red Hat Enterprise Linux 5][1](RHEL)平台。虽然如今看来很寻常,但 RHEL 5 特别显著的一点是,它是红帽公司第一个强调虚拟化的主要发行版本,而这是如今的现代发行版习以为常的特性。

最初的计划是为 RHEL 5 提供 7 年的生命周期,但在 2012 年该计划改变了,红帽将 RHEL 5 的标准支持[延长][2]至 10 年。

刚刚过去的这个星期,红帽发布了 RHEL 5.11,这是 RHEL 5.X 系列最后一个次要里程碑版本。RHEL 5 现在进入了红帽称为 “production 3” 的支持阶段,它将持续三年。在这一阶段,不会有新的功能被添加到平台中,红帽公司只提供有重大影响的安全修复和紧急优先级的 bug 修复。

平台事业部副总裁兼总经理 Jim Totton 在红帽公司的一份声明中说:“红帽公司致力于建立一个长期、稳定的产品生命周期,这将给那些依赖 Red Hat Enterprise Linux 运行关键应用的企业客户带来关键的益处。虽然 RHEL 5.11 是 RHEL 5 平台的最后一个次要版本,但它提供了安全性和可靠性方面的增强功能,以保持该平台接下来几年的活力。”

新的增强功能包括安全性和稳定性更新,其中改进了红帽帮助用户调试系统的方式。

还有一些新的存储驱动程序,以支持较新的存储适配器,并改进了对在 VMware ESXi 上运行 RHEL 的支持。

安全方面的重大改进是 OpenSCAP 更新到了 1.0.8 版本。红帽在 2011 年 5 月的 [RHEL 5.7 里程碑更新][3]中第一次支持了 OpenSCAP。OpenSCAP 是安全内容自动化协议(SCAP)框架的开源实现,用于创建一种维护安全系统的标准化方法。
--------------------------------------------------------------------------------

via: http://www.linuxplanet.com/news/end-of-the-line-for-red-hat-enterprise-linux-5.html

作者:Sean Michael Kerner
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:http://www.internetnews.com/ent-news/article.php/3665641
[2]:http://www.serverwatch.com/server-news/red-hat-extends-linux-support.html
[3]:http://www.internetnews.com/skerner/2011/05/red-hat-enterprise-linux-57-ad.html
KDE Plasma 5 的第二个 bug 修复版本发布,带来了很多改变
================================================================================
> 新的 Plasma 5 桌面发布了新版本

<center>*KDE Plasma 5*</center>

### Plasma 5 的第二个 bug 修复版本发布,已可下载 ###

KDE Plasma 5 的 bug 修复版本正在频繁到来,这个新的桌面体验将成为 KDE 生态系统的一个组成部分。

[公告][1]称:“plasma-5.0.2 这个版本,加入了一个月以来 KDE 贡献者的新翻译和修复。这些 bug 修复通常很小但很重要,例如修正无法翻译的文字、使用正确的图标,以及修正与 KDELibs 4 软件文件重叠的问题。它还加入了一个月来辛勤工作的翻译成果,使其对其他语言的支持更加完整。”

这个桌面还没有被任何 Linux 发行版默认采用,我们还要过一段时间才能对它进行完整的测试。

开发者还解释说,更新的软件包可以在 Kubuntu Plasma 5 的开发版本中进行评测。

如果你需要单独的源码包,也可以下载:

- [KDE Plasma Packages][2]
- [KDE Plasma Sources][3]
如果你决定自己编译,必须知道 KDE Plasma 5.0.2 是一套复杂的软件,你可能需要解决不少问题。
--------------------------------------------------------------------------------

via: http://news.softpedia.com/news/Second-Bugfix-Release-for-KDE-Plasma-5-Arrives-with-Lots-of-Changes-459688.shtml

作者:[Silviu Stahie][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://kde.org/announcements/plasma-5.0.2.php
[2]:https://community.kde.org/Plasma/Packages
[3]:http://kde.org/info/plasma-5.0.2.php
Translating by ZTinoZ
Red Hat Acquires FeedHenry for $82 Million to Advance Mobile Development
================================================================================
> Red Hat jumps into the mobile development sector with a key acquisition.

Red Hat's JBoss developer tools division has always focused on enterprise development, but hasn't always been focused on mobile. Today that will start to change, as Red Hat announced its intention to acquire mobile development vendor [FeedHenry][1] for $82 million in cash. The deal is set to close in the third quarter of Red Hat's fiscal 2015. Red Hat is set to disclose its second quarter fiscal 2015 earnings at 4 ET today.

Mike Piech, general manager of Middleware at Red Hat, told Datamation that upon the deal's closing FeedHenry's employees will become Red Hat employees.
FeedHenry's development platform enables application developers to rapidly build mobile applications for Android, iOS, Windows Phone, and BlackBerry. The FeedHenry platform leverages the Node.js programming architecture, which is not an area where JBoss has had much exposure in the past.

"The acquisition of FeedHenry significantly expands Red Hat's support for and engagement in Node.js," Piech said.

Piech noted that Red Hat's OpenShift Platform-as-a-Service (PaaS) technology already has a Node.js cartridge. Additionally, Red Hat Enterprise Linux ships a tech preview of Node.js as part of the Red Hat Software Collections.
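For readers unfamiliar with cartridges, spinning up a Node.js app on OpenShift looked roughly like the sketch below. The app name is a placeholder, and the cartridge version is illustrative of what OpenShift offered at the time:

    # Create an OpenShift application backed by the Node.js cartridge,
    # using the rhc command-line client. "myapp" is a placeholder name.
    rhc app create myapp nodejs-0.10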
While Node.js itself is open source, not all of FeedHenry's technology is currently available under an open source license. As has been Red Hat's policy throughout its entire history, it is now committing to making FeedHenry open source as well.

"As we've done with other acquisitions, open sourcing the technology we acquire is a priority for Red Hat, and we have no reason to expect that approach will change with FeedHenry," Piech said.

Red Hat's last major acquisition of a company with non-open-source technology was [ManageIQ][2], for $104 million back in 2012. In May of this year, Red Hat launched the ManageIQ open source project, opening up development and code of the formerly closed-source cloud management technology.

From an integration standpoint, Red Hat is not yet providing full details of precisely where FeedHenry will fit in.
"We've already identified a number of areas where FeedHenry and Red Hat's existing technology and products can be better aligned and integrated," Piech said. "We'll share more details as we develop the roadmap over the next 90 days."
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.datamation.com/mobile-wireless/red-hat-acquires-feedhenry-for-82-million-to-advance-mobile-development.html
|
||||
|
||||
作者:[Sean Michael Kerner][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.datamation.com/author/Sean-Michael-Kerner-4807810.html
|
||||
[1]:http://www.feedhenry.com/
|
||||
[2]:http://www.datamation.com/cloud-computing/red-hat-makes-104-million-cloud-management-bid-with-manageiq-acquisition.html
|
Canonical Closes nginx Exploit in Ubuntu 14.04 LTS
================================================================================
> Users have to upgrade their systems to fix the issue

Ubuntu 14.04 LTS
**Canonical has published details in a security notice about an nginx vulnerability that affected Ubuntu 14.04 LTS (Trusty Tahr). The problem has been identified and fixed.**

The Ubuntu developers have fixed a small nginx exploit. They explain that nginx could have been made to expose sensitive information over the network.

According to the security notice, "Antoine Delignat-Lavaud and Karthikeyan Bhargavan discovered that nginx incorrectly reused cached SSL sessions. An attacker could possibly use this issue in certain configurations to obtain access to information from a different virtual host."

For a more detailed description of the problems, you can see Canonical's security [notification][1]. Users should upgrade their Linux distribution in order to correct this issue.

The problem can be repaired by upgrading the system to the latest nginx package (and dependencies). To apply the patch, you can simply run the Update Manager application.

If you don't want to use the Software Updater, you can open a terminal and enter the following commands (you will need to be root):

    sudo apt-get update
    sudo apt-get dist-upgrade

In general, a standard system update will make all the necessary changes. You don't have to restart the PC in order to implement this fix.
--------------------------------------------------------------------------------

via: http://news.softpedia.com/news/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677.shtml

作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://www.ubuntu.com/usn/usn-2351-1/
End of the Line for Red Hat Enterprise Linux 5
================================================================================
In March of 2007, Red Hat first announced its [Red Hat Enterprise Linux 5][1] (RHEL) platform. Though it might seem quaint today, RHEL 5 was particularly notable in that it was the first major release in which Red Hat emphasized virtualization, a feature all modern distros now take for granted.

Originally the plan was for RHEL 5 to have seven years of life, but that plan changed in 2012 when Red Hat [extended][2] its standard support for RHEL 5 to 10 years.

This past week, Red Hat released RHEL 5.11, the final minor milestone release for RHEL 5.X. RHEL 5 now enters what Red Hat calls its "production 3" support phase, which will last for another three years. During the production 3 phase, no new functionality is added to the platform, and Red Hat will only provide critical-impact security fixes and urgent-priority bug fixes.

"Red Hat's commitment to a long, stable product lifecycle is a key benefit for enterprise customers who rely on Red Hat Enterprise Linux for their critical applications," Jim Totton, vice president and general manager, Platform Business Unit, Red Hat, said in a statement. "While Red Hat Enterprise Linux 5.11 is the final minor release of the Red Hat Enterprise Linux 5 platform, the enhancements it offers in terms of security and reliability are designed to maintain the platform's viability for years to come."

The new enhancements include security and stability updates, including improvements to the way Red Hat can help users debug a system.

There are also new storage drivers to support newer storage adapters and improved support for RHEL running on VMware ESXi.
On the security front the big improvement is an update to OpenSCAP version 1.0.8. Red Hat first provided support for OpenSCAP in May of 2011 with the [RHEL 5.7 milestone update][3]. OpenSCAP is an open source implementation of the Security Content Automation Protocol (SCAP) framework for creating a standardized approach for maintaining secure systems.
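As a rough illustration of SCAP in practice (not taken from the article), a vulnerability scan with the oscap tool against a downloaded OVAL definitions file might look like the following; the definitions filename refers to Red Hat's published OVAL feed, and the results filename is a placeholder:

    # Evaluate the system against OVAL vulnerability definitions and
    # save the outcome to an XML results file.
    oscap oval eval --results results.xml com.redhat.rhsa-all.xml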
--------------------------------------------------------------------------------

via: http://www.linuxplanet.com/news/end-of-the-line-for-red-hat-enterprise-linux-5.html

作者:Sean Michael Kerner
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:http://www.internetnews.com/ent-news/article.php/3665641
[2]:http://www.serverwatch.com/server-news/red-hat-extends-linux-support.html
[3]:http://www.internetnews.com/skerner/2011/05/red-hat-enterprise-linux-57-ad.html
Second Bugfix Release for KDE Plasma 5 Arrives with Lots of Changes
================================================================================
> The new Plasma 5 desktop is out with a new version

KDE Plasma 5

### The KDE Community has announced that the second bugfix release for Plasma 5 is now out and available for download. ###

Bugfix releases for KDE Plasma 5, the new desktop experience that will be an integral part of the KDE ecosystem, have started to arrive frequently.

"This release, versioned plasma-5.0.2, adds a month's worth of new translations and fixes from KDE's contributors. The bugfixes are typically small but important such as fixing text which couldn't be translated, using the correct icons and fixing overlapping files with KDELibs 4 software. It also adds a month's hard work of translations to make support in other languages even more complete," reads the [announcement][1].

This particular desktop is not yet implemented by default in any Linux distro, and it will be a while until we are able to test it properly.

The developers also explain that the updated packages can be reviewed in the development versions of Kubuntu Plasma 5.

You can also download the source packages, if you need them individually.

- [KDE Plasma Packages][2]
- [KDE Plasma Sources][3]

You also have to keep in mind that KDE Plasma 5.0.2 is a sophisticated piece of software, and you really need to know what you are doing if you decide to compile it.
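For the adventurous, each Plasma module builds with the usual CMake routine; a minimal sketch follows, where the module name, version, and install prefix are illustrative and each module carries its own dependency list:

    # Unpack one of the Plasma 5.0.2 source tarballs (name is illustrative).
    tar xf plasma-workspace-5.0.2.tar.xz
    cd plasma-workspace-5.0.2
    mkdir build && cd build
    # Keep the install out of /usr so it cannot clobber system packages.
    cmake .. -DCMAKE_INSTALL_PREFIX=/opt/plasma5
    make
    sudo make install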
--------------------------------------------------------------------------------

via: http://news.softpedia.com/news/Second-Bugfix-Release-for-KDE-Plasma-5-Arrives-with-Lots-of-Changes-459688.shtml

作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://kde.org/announcements/plasma-5.0.2.php
[2]:https://community.kde.org/Plasma/Packages
[3]:http://kde.org/info/plasma-5.0.2.php
Making MySQL Better at GitHub
================================================================================
> At GitHub we say, "it's not fully shipped until it's fast." We've talked before about some of the ways we keep our [frontend experience speedy][1], but that's only part of the story. Our MySQL database infrastructure dramatically affects the performance of GitHub.com. Here's a look at how our infrastructure team seamlessly conducted a major MySQL improvement last August and made GitHub even faster.

### The mission ###

Last year we moved the bulk of GitHub.com's infrastructure into a new datacenter with world-class hardware and networking. Since MySQL forms the foundation of our backend systems, we expected database performance to benefit tremendously from an improved setup. But creating a brand-new cluster with brand-new hardware in a new datacenter is no small task, so we had to plan and test carefully to ensure a smooth transition.

### Preparation ###

A major infrastructure change like this requires measurement and metrics gathering every step of the way. After installing base operating systems on our new machines, it was time to test out our new setup with various configurations. To get a realistic test workload, we used tcpdump to extract SELECT queries from the old cluster that was serving production and replayed them onto the new cluster.
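The post doesn't name the exact tooling, but a sketch of that capture step might look like the following, assuming Percona Toolkit's pt-query-digest is used to decode the capture; the interface, port, and file names are illustrative:

    # Capture MySQL wire traffic; -s 65535 keeps full packets so long
    # queries aren't truncated.
    sudo tcpdump -i eth0 port 3306 -s 65535 -x -nn -q -tttt > mysql.tcp.txt

    # Decode the capture and keep only SELECT statements, written out in
    # slowlog format so they can be replayed against the new cluster.
    pt-query-digest --type tcpdump --output slowlog \
        --filter '$event->{arg} =~ m/^select/i' mysql.tcp.txt > selects.log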
MySQL tuning is very workload specific, and well-known configuration settings like innodb_buffer_pool_size often make the most difference in MySQL's performance. But on a major change like this, we wanted to make sure we covered everything, so we took a look at settings like innodb_thread_concurrency, innodb_io_capacity, and innodb_buffer_pool_instances, among others.
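The post doesn't publish GitHub's actual values, so the numbers below are placeholders; a test my.cnf fragment for iterating on these settings, one change at a time, might look like:

    [mysqld]
    # Size the buffer pool to hold the hot working set; a common starting
    # point is a large fraction of RAM on a dedicated database host.
    innodb_buffer_pool_size      = 96G
    # Split a large buffer pool into instances to reduce mutex contention.
    innodb_buffer_pool_instances = 8
    # 0 removes InnoDB's thread concurrency limit; test measured values too.
    innodb_thread_concurrency    = 0
    # Background I/O budget; match it to what the storage can sustain.
    innodb_io_capacity           = 2000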
We were careful to only make one test configuration change at a time, and to run tests for at least 12 hours. We looked for query response time changes, stalls in queries per second, and signs of reduced concurrency. We observed the output of SHOW ENGINE INNODB STATUS, particularly the SEMAPHORES section, which provides information on workload contention.

Once we were relatively comfortable with configuration settings, we started migrating one of our largest tables onto an isolated cluster. This served as an early test of the process, gave us more space in the buffer pools of our core cluster, and provided greater flexibility for failover and storage. This initial migration introduced an interesting application challenge, as we had to make sure we could maintain multiple connections and direct queries to the correct cluster.

In addition to all our raw hardware improvements, we also made process and topology improvements: we added delayed replicas, faster and more frequent backups, and more read replica capacity. These were all built out and ready for go-live day.

### Making a list; checking it twice ###

With millions of people using GitHub.com on a daily basis, we did not want to take any chances with the actual switchover. We came up with a thorough [checklist][2] before the transition.

We also planned a maintenance window and [announced it on our blog][3] to give our users plenty of notice.

### Migration day ###

At 5am Pacific Time on a Saturday, the migration team assembled online in chat and the process began.

We put the site in maintenance mode, made an announcement on Twitter, and set out to work through the list above.

**13 minutes** later, we were able to confirm operations of the new cluster.

Then we flipped GitHub.com out of maintenance mode, and let the world know that we were in the clear.

Lots of up-front testing and preparation meant that we kept the work we needed on go-live day to a minimum.

### Measuring the final results ###

In the weeks following the migration, we closely monitored performance and response times on GitHub.com. We found that our cluster migration cut the average GitHub.com page load time by half and the 99th percentile by *two-thirds*.

### What we learned ###

#### Functional partitioning ####

During this process we decided that moving larger tables that mostly store historic data to a separate cluster was a good way to free up disk and buffer pool space. This allowed us to leave more resources for our "hot" data, splitting some connection logic to enable the application to query multiple clusters. This proved to be a big win for us, and we are working to reuse this pattern.

#### Always be testing ####

You can never do too much acceptance and regression testing for your application. Replicating data from the old cluster to the new cluster while running acceptance tests and replaying queries were invaluable for tracing out issues and preventing surprises during the migration.

#### The power of collaboration ####

Large changes to infrastructure like this mean a lot of people need to be involved, so pull requests functioned as our primary point of coordination as a team. We had people all over the world jumping in to help.

Deploy day team map:

<iframe width="620" height="420" frameborder="0" src="https://render.githubusercontent.com/view/geojson?url=https://gist.githubusercontent.com/anonymous/5fa29a7ccbd0101630da/raw/map.geojson"></iframe>

This created a workflow where we could open a pull request to try out changes, get real-time feedback, and see commits that fixed regressions or errors -- all without phone calls or face-to-face meetings. When everything has a URL that can provide context, it's easy to involve a diverse range of people and make it simple for them to give feedback.

### One year later... ###

A full year later, we are happy to call this migration a success — MySQL performance and reliability continue to meet our expectations. And as an added bonus, the new cluster enabled us to make further improvements towards greater availability and query response times. I'll be writing more about those improvements here soon.
--------------------------------------------------------------------------------

via: https://github.com/blog/1880-making-mysql-better-at-github

作者:[samlambert][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:https://github.com/samlambert
[1]:https://github.com/blog/1756-optimizing-large-selector-sets
[2]:https://help.github.com/articles/writing-on-github#task-lists
[3]:https://github.com/blog/1603-site-maintenance-august-31st-2013
Drab Desktop? Try These 4 Beautiful Linux Icon Themes
================================================================================
**Ubuntu's default icon theme [hasn't changed much][1] in almost 5 years, save for the [odd new icon here and there][2]. If you're tired of how it looks, we're going to show you a handful of gorgeous alternatives that will easily freshen things up.**

Do feel free to share links to your own favourite choices in the comments below.

### Captiva ###

Captiva icons, elementary folders and Moka GTK

Captiva is a relatively new icon theme that even the least bling-prone user can appreciate.

Made by DeviantArt user ~[bokehlicia][3], Captiva shuns the 2D flat look of many current icon themes for a softer, rounded look. The icons themselves have an almost material or textured look, with subtle drop shadows and a rich colour palette adding to the charm.

It doesn't yet include a set of its own folder icons, and will fall back to using elementary (if available) or stock Ubuntu icons.

To install Captiva icons in Ubuntu 14.04 you can add the official PPA by opening a new Terminal window and entering the following commands:

    sudo add-apt-repository ppa:captiva/ppa
    sudo apt-get update && sudo apt-get install captiva-icon-theme

Or, if you're not into software source cruft, you can download the icon pack direct from the DeviantArt page. To install, extract the archive and move the resulting folder to the '.icons' directory in Home, as sketched below.
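The manual route might look like this; the archive name is hypothetical, so substitute whatever the DeviantArt download is actually called:

    # Make sure the per-user icon directory exists, then extract the
    # downloaded theme into it (archive name is a placeholder).
    mkdir -p ~/.icons
    unzip ~/Downloads/captiva_icon_theme.zip -d ~/.icons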
However you choose to install it, you'll need to apply this (and every other theme on this list) using a utility like [Unity Tweak Tool][4].

- [Captiva Icon Theme on DeviantArt][5]

### Square Beam ###

Square Beam icon set with Orchis GTK

After something a bit angular? Check out Square Beam. It offers a more imposing visual statement than other sets on this list, with electric colours, harsh gradients and stark iconography. It claims to have more than 30,000 different icons (!) included (you'll forgive me for not counting), so you should find very few gaps in its coverage.

- [Square Beam Icon Theme on GNOME-Look.org][6]

### Moka & Faba ###

Moka/Faba Mono Icons with Orchis GTK

The Moka icon suite needs little introduction. In fact, I'd wager a good number of you are already using it.

With pastel colours, soft edges and simple icon artwork, Moka is a truly standout and comprehensive set of application icons. It's best used with its sibling, Faba, which Moka will inherit so as to fill in all the system icons, folders, panel icons, etc. The combined result is... well, you've got eyes!

For full details on how to install on Ubuntu, head over to the official project website, link below.

- [Download Moka and Faba Icon Themes][7]

### Compass ###

Compass Icon Theme with Numix Blue GTK

Last on our list, but by no means least, is Compass. This is a true adherent to the '2D, two-tone' school of UI design right now. It may not be as visually diverse as others on this list, but that's the point. It's consistent and uniform and all the better for it — just check out those folder icons!

It's available to download and install manually through GNOME-Look (link below) or through the Nitrux Artwork PPA:

    sudo add-apt-repository ppa:nitrux/nitrux-artwork
    sudo apt-get update && sudo apt-get install compass-icon-theme

- [Compass Icon Theme on GNOME-Look.org][8]
--------------------------------------------------------------------------------

via: http://www.omgubuntu.co.uk/2014/09/4-gorgeous-linux-icon-themes-download

作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2010/02/lucid-gets-new-icons-for-rhythmbox-ubuntuone-memenu-more
[2]:http://www.omgubuntu.co.uk/2012/08/new-icon-theme-lands-in-lubuntu-12-10
[3]:http://bokehlicia.deviantart.com/
[4]:http://www.omgubuntu.co.uk/2014/06/unity-tweak-tool-0-7-development-download
[5]:http://bokehlicia.deviantart.com/art/Captiva-Icon-Theme-479302805
[6]:http://gnome-look.org/content/show.php/Square-Beam?content=165094
[7]:http://mokaproject.com/moka-icon-theme/download/ubuntu/
[8]:http://gnome-look.org/content/show.php/Compass?content=160629
7 killer open source monitoring tools
================================================================================
Looking for greater visibility into your network? Look no further than these excellent free tools.

Network and system monitoring is a broad category. There are solutions that monitor for the proper operation of servers, network gear, and applications, and there are solutions that track the performance of those systems and devices, providing trending and analysis. Some tools will sound alarms and notifications when problems are detected, while others will even trigger actions to run when alarms sound. Here is a collection of open source solutions that aim to provide some or all of these capabilities.

### Cacti ###

Cacti is a very extensive performance graphing and trending tool that can be used to track just about any monitored metric that can be plotted on a graph. From disk utilization to fan speeds in a power supply, if it can be monitored, Cacti can track it -- and make that data quickly available.

### Nagios ###

Nagios is the old guard of system and network monitoring. It is fast, reliable, and extremely customizable. Nagios can be a challenge for newcomers, but the rather complex configuration is also its strength, as it can be adapted to just about any monitoring task. What it may lack in looks it makes up for in power and reliability.

### Icinga ###

Icinga is an offshoot of Nagios that is currently being rebuilt anew. It offers a thorough monitoring and alerting framework that's designed to be as open and extensible as Nagios is, but with several different Web UI options. Icinga 1 is closely related to Nagios, while Icinga 2 is the rewrite. Both versions are currently supported, and Nagios users can migrate to Icinga 1 very easily.

### NeDi ###

NeDi may not be as well known as some of the others, but it's a great solution for tracking devices across a network. It continuously walks through a network infrastructure and catalogs devices, keeping track of everything it discovers. It can provide the current location of any device, as well as a history.

NeDi can be used to locate stolen or lost devices by alerting you if they reappear on the network. It can even display all known and discovered connections on a map, showing how every network interconnect is laid out, down to the physical port level.

### Observium ###

Observium combines system and network monitoring with performance trending. It uses both static and auto discovery to identify servers and network devices, leverages a variety of monitoring methods, and can be configured to track just about any available metric. The Web UI is very clean, well thought out, and easy to navigate.

Observium can also display the physical location of monitored devices on a geographical map, along with heads-up panels showing active alarms and device counts.

### Zabbix ###

Zabbix monitors servers and networks with an extensive array of tools. There are Zabbix agents for most operating systems, or you can use passive or external checks, including SNMP, to monitor hosts and network devices. You'll also find extensive alerting and notification facilities, and a highly customizable Web UI that can be adapted to a variety of heads-up displays. In addition, Zabbix has specific tools that monitor Web application stacks and virtualization hypervisors.

Zabbix can also produce logical interconnection diagrams detailing how certain monitored objects are interconnected. These maps are customizable, and maps can be created for groups of monitored devices and hosts.

### Ntop ###

Ntop is a packet sniffing tool with a slick Web UI that displays live data on network traffic passing by a monitoring interface. Instant data on network flows is available through an advanced live graphing function. Host data flows and host communication pair information is also available in real-time.
--------------------------------------------------------------------------------

via: http://www.networkworld.com/article/2686794/asset-management/164219-7-killer-open-source-monitoring-tools.html

作者:[Paul Venezia][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.networkworld.com/author/Paul-Venezia/
barney-ro translating
ChromeOS vs Linux: The Good, the Bad and the Ugly
ChromeOS 对战 Linux:孰优孰劣,仁者见仁,智者见智
================================================================================
> In the battle between ChromeOS and Linux, both desktop environments have strengths and weaknesses.

> 在 ChromeOS 和 Linux 的较量中,两个桌面环境都各有优劣。
Anyone who believes Google isn't "making a play" for desktop users isn't paying attention. In recent years, I've seen [ChromeOS][1] making quite a splash on the [Google Chromebook][2]. Exploding with popularity on sites such as Amazon.com, it looks as if ChromeOS could be unstoppable.

任何认为 Google 没有在桌面用户上“下功夫”的人都没有留心观察。近几年,我们见到 [ChromeOS][1] 借助 [Google Chromebook][2] 掀起了相当大的波澜。在 Amazon.com 这样的网站上人气爆棚,ChromeOS 看起来势不可挡。

In this article, I'm going to look at ChromeOS as a concept to market, how it's affecting Linux adoption and whether or not it's a good/bad thing for the Linux community as a whole. Plus, I'll talk about the biggest issue of all and how no one is doing anything about it.

在本文中,我们要了解的是 ChromeOS 作为一个推向市场的概念,它如何影响 Linux 的普及,以及对整个 Linux 社区来说它是好事还是坏事。另外,我还会谈到最大的问题所在,以及为什么没有人为此做点什么。

### ChromeOS isn't really Linux ###

### ChromeOS 并不是真正的 Linux ###

When folks ask me if ChromeOS is a Linux distribution, I usually reply that ChromeOS is to Linux what OS X is to BSD. In other words, I consider ChromeOS to be a forked operating system that uses the Linux kernel under the hood. Much of the operating system is made up of Google's own proprietary blend of code and software.

每当有朋友问我 ChromeOS 是否是一个 Linux 发行版时,我都会这样回答:ChromeOS 之于 Linux,就好比 OS X 之于 BSD。换句话说,我认为 ChromeOS 是一个派生的操作系统,在底层使用了 Linux 内核。这个操作系统的大部分由 Google 自己的专有代码和软件构成。

So while the ChromeOS is using the Linux kernel under its hood, it's still very different from what we might find with today's modern Linux distributions.

所以,尽管 ChromeOS 在底层使用了 Linux 内核,但它和如今流行的 Linux 发行版仍然有很大的不同。

Where ChromeOS's difference becomes most apparent, however, is in the apps it offers the end user: Web applications. With everything being launched from a browser window, Linux users might find using ChromeOS to be a bit vanilla. But for non-Linux users, the experience is not all that different than what they may have used on their old PCs.

不过,ChromeOS 最明显的不同之处在于它提供给终端用户的应用:Web 应用。由于一切都从浏览器窗口启动,Linux 用户可能会觉得 ChromeOS 有些乏味。但对于没有用过 Linux 的用户来说,这种体验与他们在旧电脑上的体验并没有太大不同。

For example: Anyone who is living a Google-centric lifestyle on Windows will feel right at home on ChromeOS. Odds are this individual is already relying on the Chrome browser, Google Drive and Gmail. By extension, moving over to ChromeOS feels fairly natural for these folks, as they're simply using the browser they're already used to.

举个例子:在 Windows 上过着以 Google 为中心的生活方式的人,在 ChromeOS 上会有宾至如归的感觉。这些人很可能早已依赖 Chrome 浏览器、Google Drive 和 Gmail。因此,对他们来说转向 ChromeOS 相当自然,因为用的只是他们早已习惯的浏览器。

Linux enthusiasts, however, tend to feel constrained almost immediately. Software choices feel limited and boxed in, plus games and VoIP are totally out of the question. Sorry, but [GooglePlus Hangouts][3] isn't a replacement for [VoIP][4] software. Not even by a long shot.

然而,Linux 爱好者几乎马上就会感到受限。软件选择有限、处处受困,而且游戏和 VoIP 完全不用想。抱歉,[GooglePlus Hangouts][3] 替代不了 [VoIP][4] 软件,还差得远呢。
### ChromeOS or Linux on the desktop ###

### ChromeOS 和 Linux 的桌面化 ###

Anyone making the claim that ChromeOS hurts Linux adoption on the desktop needs to come up for air and meet non-technical users sometime.

那些声称 ChromeOS 损害了桌面 Linux 普及的人,真该出来透透气,见见真正的非技术用户。

Yes, desktop Linux is absolutely fine for most casual computer users. However it helps to have someone to install the OS and offer "maintenance" services like we see in the Windows and OS X camps. Sadly Linux lacks this here in the States, which is where I see ChromeOS coming into play.

是的,对于大多数休闲型的电脑用户,桌面 Linux 绝对够用。不过,要像我们在 Windows 和 OS X 阵营中看到的那样,得有人帮忙安装操作系统并提供“维护”服务。遗憾的是,在美国 Linux 恰恰缺乏这一点,而我认为这正是 ChromeOS 的切入点。

I've found the Linux desktop is best suited for environments where on-site tech support can manage things on the down-low. Examples include: Homes where advanced users can drop by and handle updates, governments and schools with IT departments. These are environments where Linux on the desktop is set up to be used by users of any skill level or background.

By contrast, ChromeOS is built to be completely maintenance free, thus not requiring any third-party assistance short of turning it on and allowing updates to do the magic behind the scenes. This is partly made possible due to ChromeOS being designed for specific hardware builds, in a similar spirit to how Apple develops their own computers. Because Google has a pulse on the hardware ChromeOS is bundled with, it allows for a generally error-free experience. And for some individuals, this is fantastic!

Comically, the folks who exclaim that there's a problem here are not even remotely the target market for ChromeOS. In short, these are passionate Linux enthusiasts looking for something to gripe about. My advice? Stop inventing problems where none exist.

The point is: the market share for ChromeOS and Linux on the desktop are not even remotely the same. This could change in the future, but at this time, these two groups are largely separate.
### ChromeOS use is growing ###

No matter what your view of ChromeOS happens to be, the fact remains that its adoption is growing. New computers built for ChromeOS are being released all the time. One of the most recent ChromeOS computer releases is from Dell. Appropriately named the [Dell Chromebox][5], this desktop ChromeOS appliance is yet another shot at traditional computing. It has zero software DVDs, no anti-malware software, and offers completely seamless updates behind the scenes. For casual users, Chromeboxes and Chromebooks are becoming a viable option for those who do most of their work from within a web browser.

Despite this growth, ChromeOS appliances face one huge downside – storage. Bound by limited hard drive size and a heavy reliance on cloud storage, ChromeOS isn't going to cut it for anyone who uses their computers outside of basic web browser functionality.

### ChromeOS and Linux crossing streams ###

Previously, I mentioned that ChromeOS and Linux on the desktop are in two completely separate markets. The reason why this is the case stems from the fact that the Linux community has done a horrid job at promoting Linux on the desktop offline.

Yes, there are occasional events where casual folks might discover this "Linux thing" for the first time. But there isn't a single entity to then follow up with these folks, making sure they're getting their questions answered and that they're getting the most out of Linux.

In reality, the likely offline discovery breakdown goes something like this:

- Casual user finds out about Linux from their local Linux event.
- They bring the DVD/USB device home and attempt to install the OS.
- While some folks very well may have success with the install process, I've been contacted by a number of folks with the opposite experience.
- Frustrated, these folks are then expected to "search" online forums for help. Difficult to do on a primary computer experiencing network or video issues.
- Completely fed up, some of the above frustrated bring their computers back into a Windows shop for "repair." In addition to Windows being re-installed, they also receive an earful about how "Linux isn't for them" and should be avoided.

Some of you might charge that the above example is exaggerated. I would respond with this: It's happened to people I know personally, and it happens often. Wake up, Linux community: our adoption model is broken and tired.

### Great platforms, horrible marketing and closing thoughts ###

If there is one thing that I feel ChromeOS and Linux on the desktop have in common... besides the Linux kernel, it's that they both happen to be great products with rotten marketing. The advantage, however, goes to Google with this one, due to their ability to spend big money online and reserve shelf space at big box stores.

Google believes that because they have the "online advantage," offline efforts aren't really that important. This is incredibly short-sighted and reflects one of Google's biggest missteps. The belief that if you're not exposed to their online efforts, you're not worth bothering with, is only countered by local shelf space at select big box stores.

My suggestion is this – offer Linux on the desktop to the ChromeOS market through offline efforts. This means Linux User Groups need to start raising funds to be present at county fairs, mall kiosks during the holiday season, and teaching free classes at community centers. This will immediately put Linux on the desktop in front of the same audience that might otherwise end up with a ChromeOS powered appliance.

If local offline efforts like this don't happen, not to worry. Linux on the desktop will continue to grow, as will the ChromeOS market. Sadly though, it will absolutely keep the two markets separate as they are now.
--------------------------------------------------------------------------------

via: http://www.datamation.com/open-source/chromeos-vs-linux-the-good-the-bad-and-the-ugly-1.html

作者:[Matt Hartley][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.datamation.com/author/Matt-Hartley-3080.html
[1]:http://en.wikipedia.org/wiki/Chrome_OS
[2]:http://www.google.com/chrome/devices/features/
[3]:https://plus.google.com/hangouts
[4]:http://en.wikipedia.org/wiki/Voice_over_IP
[5]:http://www.pcworld.com/article/2602845/dell-brings-googles-chrome-os-to-desktops.html
alim0x translating
The history of Android
================================================================================
Both screens of the Email app. The first two screenshots show the combined label/inbox view, and the last shows a message.
Photo by Ron Amadeo

The message view was—surprise!—white. Android's e-mail app has historically been a watered-down version of the Gmail app, and you can see that close connection here. The message and compose views were taken directly from Gmail with almost no modifications.

The "IM" applications. Screenshots show the short-lived provider selection screen, the friends list, and a chat.
Photo by Ron Amadeo

Before Google Hangouts and even before Google Talk, there was "IM"—the only instant messaging client that shipped on Android 1.0. Surprisingly, multiple IM services were supported: users could pick from AIM, Google Talk, Windows Live Messenger, and Yahoo. Remember when OS creators cared about interoperability?

The friends list was a black background with white speech bubbles for open chats. Presence was indicated with colored circles, and a little Android on the right hand side would indicate that a person was mobile. It's amazing how much more communicative the IM app was than Google Hangouts. Green means the person is using a device they are signed into, yellow means they are signed in but idle, red means they have manually set busy and don't want to be bothered, and gray is offline. Today, Hangouts only shows when a user has the app open or closed.

The chats interface was clearly based on the Messaging program, and the chat backgrounds were changed from white and blue to white and green. No one changed the color of the blue text entry box, though, so along with the orange highlight effect, this screen used white, green, blue, and orange.
YouTube on Android 1.0. The screens show the main page, the main page with the menu open, the categories screen, and the videos screen.
Photo by Ron Amadeo

YouTube might not have been the mobile sensation it is today with the 320p screen and 3G data speeds of the G1, but Google's video service was present and accounted for on Android 1.0. The main screen looked like a tweaked version of the Android Market, with a horizontally scrolling featured section along the top and vertically scrolling categories along the bottom. Some of Google's category choices were pretty strange: what would the difference be between "Most popular" and "Most viewed?"

In a sign that Google had no idea how big YouTube would eventually become, one of the video categories was "Most recent." Today, with [100 hours of video][1] uploaded to the site every minute, if this section actually worked it would be an unreadable blur of rapidly scrolling videos.

The menu housed search, favorites, categories, and settings. Settings (not pictured) was the lamest screen ever, housing one option to clear the search history. Categories was equally barren, showing only a black list of text.

The last screen shows a video, which only supported horizontal mode. The auto-hiding video controls weirdly had rewind and fast forward buttons, even though there was a seek bar.

YouTube's video menu, description page, and comments.
Photo by Ron Amadeo

Additional sections for each video could be brought up by hitting the menu button. Here you could favorite the video, access details, and read comments. All of these screens, like the videos, were locked to horizontal mode.

"Share" didn't bring up a share dialog yet; it just kicked the link out to a Gmail message. Texting or IMing someone a link wasn't possible. Comments could be read, but you couldn't rate them or post your own. You couldn't rate or like a video either.

|
||||
The camera app’s picture taking interface, menu, and photo review mode.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Real Android on real hardware meant a functional camera app, even if there wasn't much to look at. That black square on the left was the camera interface, which should be showing a viewfinder image, but the SDK screenshot utility can't capture it. The G1 had a hardware camera button (remember those?), so there wasn't a need for an on-screen shutter button. There were no settings for exposure, white balance, or HDR—you could take a picture and that was about it.
|
||||
|
||||
The menu button revealed a meager two options: a way to jump to the Pictures app and Settings screen with two options. The first settings option was whether or not to enable geotagging for pictures, and the second was for a dialog prompt after every capture, which you can see on the right. Also, you could only take pictures—there was no video support yet.
|
||||
|
||||

|
||||
The Calendar’s month view, week view with the menu open, day view, and agenda.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Like most apps of this era, the primary command interface for the calendar was the menu. It was used to switch views, add a new event, navigate to the current day, pick visible calendars, and go to the settings. The menu functioned as a catch-all for every single button.
|
||||
|
||||
The month view couldn't show appointment text. Every date had a bar next to it, and appointments were displayed as green sections in the bar denoting what time of day an appointment was. Week view couldn't show text either—the 320×480 display of the G1 just wasn't dense enough—so you got a white block with a strip of color indicating which calendar it was from. The only views that provided text were the agenda and day views. You could move through dates by swiping—week and day used left and right, and month and agenda used up and down.
|
||||
|
||||

|
||||
The main settings page, the Wireless section, and the bottom of the about page.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Android 1.0 finally brought a settings screen to the party. It was a black and white wall of text that was roughly broken down into sections. Down arrows next to each list item confusingly look like they would expand line-in to show more of something, but touching anywhere on the list item would just load the next screen. All the screens were pretty boring and samey looking, but hey, it's a settings screen.
|
||||
|
||||
Any option with an on/off state used a cartoony-looking checkbox. The original checkboxes in Android 1.0 were pretty strange—even when they were "unchecked," they still had a gray check mark in them. Android treated the check mark like a light bulb that would light up when on and be dim when off, but that's not how checkboxes work. We did finally get an "About" page, though. Android 1.0 ran Linux kernel 2.6.25.
|
||||
|
||||
A settings screen means we can finally open the security settings and change lock screens. Android 1.0 only had two styles, the gray square lock screen pictured in the Android 0.9 section, and pattern unlock, which required you to draw a pattern over a grid of 9 dots. A swipe pattern like this was easier to remember and input than a PIN even if it did not add any more security.
|
||||
|
||||

|
||||
The Voice Dialer, pattern lock screen, low battery warning, and time picker.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
oice functions arrived in 1.0 with Voice Dialer. This feature hung around in various capacities in AOSP for a while, as it was a simple voice command app for calling numbers and contacts. Voice Dialer was completely unrelated to Google's future voice products, however, and it worked the same way a voice dialer on a dumbphone would work.
|
||||
|
||||
As for a final note, low battery popup would occur when the battery dropped below 15 percent. It was a funny graphic, depicting plugging the wrong end of the power cord into the phone. That wasn't (and still isn't) how phones work, Google.
|
||||
|
||||
Android 1.0 was a great first start, but there were still so many gaps in functionality. Physical keyboards and tons of hardware buttons were mandatory, as Android devices were still not allowed to be sold without a d-pad or trackball. Base smartphone functionality like auto-rotate wasn't here yet, either. Updates for built-in apps weren't possible through the Android Market the way they were today. All the Google Apps were interwoven with the operating system. If Google wanted to update a single app, an update for the entire operating system needed to be pushed out through the carriers. There was still a lot of work to do.
|
||||
|
||||
### Android 1.1—the first truly incremental update ###
|
||||
|
||||

|
||||
All of Android 1.1’s new features: Search by voice, the Android Market showing paid app support, Google Latitude, and the new “system updates” option in the settings.
|
||||
Photo by Ron Amadeo
|
||||
|
||||
Four and a half months after Android 1.0, in February 2009, Android got its first public update with Android 1.1. Not much changed in the OS, and just about every new thing Google added with 1.1 has been shut down by now. Google Voice Search was Android's first foray into cloud-powered voice search, and it had its own icon in the app drawer. While the app can't communicate with Google's servers anymore, you can check out how it used to work [on the iPhone][2]. It wasn't yet Voice Actions, but you could speak, and the results would go to a simple Google Search.
|
||||
|
||||
Support for paid apps was added to the Android Market, but just like the beta client, this version of the Android Market could no longer connect to the Google Play servers. The most we could get to work was this sorting screen, which let you pick between displaying free apps, paid apps, or a mix of both.
|
||||
|
||||
Maps added [Google Latitude][3], a way to share your location with friends. Latitude was shut down in favor of Google+ a few months ago and no longer works. There was an option for it in the Maps menu, but tapping on it just brought up a loading spinner forever.
|
||||
|
||||
Given that system updates come quickly in the Android world—or at least, that was the plan before carriers and OEMs got in the way—Google also added a button to the "About Phone" screen to check for system updates.
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
|
||||
|
||||
[@RonAmadeo][t]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/7/
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.youtube.com/yt/press/statistics.html
|
||||
[2]:http://www.youtube.com/watch?v=y3z7Tw1K17A
|
||||
[3]:http://arstechnica.com/information-technology/2009/02/google-tries-location-based-social-networking-with-latitude/
|
||||
[a]:http://arstechnica.com/author/ronamadeo
|
||||
[t]:https://twitter.com/RonAmadeo
|
@ -1,3 +1,4 @@
|
||||
Translating by SPccman
|
||||
How to configure SNMPv3 on ubuntu 14.04 server
|
||||
================================================================================
|
||||
Simple Network Management Protocol (SNMP) is an "Internet-standard protocol for managing devices on IP networks". Devices that typically support SNMP include routers, switches, servers, workstations, printers, modem racks and more. It is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention. SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects.[2]
|
||||
@ -96,4 +97,4 @@ via: http://www.ubuntugeek.com/how-to-configure-snmpv3-on-ubuntu-14-04-server.ht
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
|
@ -1,466 +0,0 @@
|
||||
Linux Tutorial: Install Ansible Configuration Management And IT Automation Tool
|
||||
================================================================================
|
||||

|
||||
|
||||
Today I will be talking about Ansible, a powerful configuration management solution written in Python. There are many configuration management solutions available, all with pros and cons; Ansible stands apart from many of them for its simplicity. What makes Ansible different from most of the popular configuration management systems is that it's agentless: there is no need to set up an agent on every node you want to control. Plus, this has the benefit of letting you control your entire infrastructure from more than one place, if needed. Whether that last point is truly a benefit may be debatable, but I find it a positive in most cases. Enough talk; let's get started with installing and configuring Ansible on RHEL/CentOS and Debian/Ubuntu based systems.
|
||||
|
||||
### Prerequisites ###
|
||||
|
||||
1. Distro: RHEL/CentOS/Debian/Ubuntu Linux
|
||||
1. Jinja2: A modern and designer friendly templating language for Python.
|
||||
1. PyYAML: A YAML parser and emitter for the Python programming language.
|
||||
1. paramiko: Native Python SSHv2 protocol library.
|
||||
1. httplib2: A comprehensive HTTP client library.
|
||||
1. Most of the actions listed in this post are written with the assumption that they will be executed by the root user running bash or any other modern shell.
|
||||
|
||||
### How Ansible works ###
|
||||
|
||||
The Ansible tool uses no agents and requires no additional custom security infrastructure, so it's easy to deploy. All you need is an SSH client and server:
|
||||
|
||||
+----------------------+ +---------------+
|
||||
|Linux/Unix workstation| SSH | file_server1 |
|
||||
|with Ansible |<------------------>| db_server2 | Unix/Linux servers
|
||||
+----------------------+ Modules | proxy_server3 | in local/remote
|
||||
192.168.1.100 +---------------+ data centers
|
||||
|
||||
Where,
|
||||
|
||||
1. 192.168.1.100 - Install Ansible on your local workstation/server.
|
||||
1. file_server1..proxy_server3 - Use 192.168.1.100 and Ansible to automate configuration management of all servers.
|
||||
1. SSH - Set up SSH keys between 192.168.1.100 and the local/remote servers.
|
||||
|
||||
### Ansible Installation Tutorial ###
|
||||
|
||||
Installation of Ansible is a breeze; many distributions have a package available in their third-party repos which can easily be installed. A quick alternative is to just pip install it, or to grab the latest copy from GitHub. To install using your package manager, on [RHEL/CentOS Linux based systems you will most likely need the EPEL repo][1] first, then:
|
||||
|
||||
#### Install ansible on a RHEL/CentOS Linux based system ####
|
||||
|
||||
Type the following [yum command][2]:
|
||||
|
||||
$ sudo yum install ansible
|
||||
|
||||
#### Install ansible on a Debian/Ubuntu Linux based system ####
|
||||
|
||||
Type the following [apt-get command][3]:
|
||||
|
||||
$ sudo apt-get install software-properties-common
|
||||
$ sudo apt-add-repository ppa:ansible/ansible
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install ansible
|
||||
|
||||
#### Install ansible using pip ####
|
||||
|
||||
The [pip command is a tool for installing and managing Python packages][4], such as those found in the Python Package Index. The following method works on Linux and Unix-like systems:
|
||||
|
||||
$ sudo pip install ansible
|
||||
|
||||
#### Install the latest version of ansible using source code ####
|
||||
|
||||
You can install the latest version from github as follows:
|
||||
|
||||
$ cd ~
|
||||
$ git clone git://github.com/ansible/ansible.git
|
||||
$ cd ./ansible
|
||||
$ source ./hacking/env-setup
|
||||
|
||||
When running Ansible from a git checkout, one thing to remember is that you will need to set up your environment every time you want to use it, or you can add it to your .bashrc file:
|
||||
|
||||
# ADD TO BASH RC
|
||||
$ echo "export ANSIBLE_HOSTS=~/ansible_hosts" >> ~/.bashrc
|
||||
$ echo "source ~/ansible/hacking/env-setup" >> ~/.bashrc
|
||||
|
||||
The hosts file for Ansible is basically a list of hosts that Ansible is able to perform work on. By default, Ansible looks for the hosts file at /etc/ansible/hosts, but there are ways to override that, which can be handy if you are working with multiple installs or are responsible for several different clients' datacenters. You can pass the hosts file on the command line using the -i option:
|
||||
|
||||
$ ansible all -m shell -a "hostname" --ask-pass -i /etc/some/other/dir/ansible_hosts
|
||||
|
||||
My preference, however, is to use an environment variable; this can be useful if you source a different file when starting work for a specific client. The environment variable is $ANSIBLE_HOSTS, and it can be set as follows:
|
||||
|
||||
$ export ANSIBLE_HOSTS=~/ansible_hosts
|
||||
|
||||
Once all requirements are installed and you have your hosts file set up, you can give it a test run. For a quick test, I put 127.0.0.1 into the Ansible hosts file as follows:
|
||||
|
||||
$ echo "127.0.0.1" > ~/ansible_hosts
|
||||
|
||||
Now let's test with a quick ping:
|
||||
|
||||
$ ansible all -m ping
|
||||
|
||||
Or ask for the SSH password:
|
||||
|
||||
$ ansible all -m ping --ask-pass
|
||||
|
||||
I have run across a problem a few times regarding initial setup. It is highly recommended that you set up keys for Ansible to use, but in the previous test we used --ask-pass; on some machines you will need [to install sshpass][5] or add -c paramiko like so:
|
||||
|
||||
$ ansible all -m ping --ask-pass -c paramiko
|
||||
|
||||
Or you [can install sshpass][6]; however, sshpass is not always available in the standard repos, so paramiko can be easier.
|
||||
|
||||
### Setup SSH Keys ###
|
||||
|
||||
Now that we have gotten the configuration, and other simple stuff, out of the way, let's move on to doing something productive. A lot of the power of Ansible lies in playbooks, which are basically scripted Ansible runs (for the most part), but we will start with some one-liners before we build out a playbook. Let's start with creating and configuring keys so we can avoid the -c and --ask-pass options:
|
||||
|
||||
$ ssh-keygen -t rsa
|
||||
|
||||
Sample outputs:
|
||||
|
||||
Generating public/private rsa key pair.
|
||||
Enter file in which to save the key (/home/mike/.ssh/id_rsa):
|
||||
Enter passphrase (empty for no passphrase):
|
||||
Enter same passphrase again:
|
||||
Your identification has been saved in /home/mike/.ssh/id_rsa.
|
||||
Your public key has been saved in /home/mike/.ssh/id_rsa.pub.
|
||||
The key fingerprint is:
|
||||
94:a0:19:02:ba:25:23:7f:ee:6c:fb:e8:38:b4:f2:42 mike@ultrabook.linuxdork.com
|
||||
The key's randomart image is:
|
||||
+--[ RSA 2048]----+
|
||||
|... . . |
|
||||
|. . + . . |
|
||||
|= . o o |
|
||||
|.* . |
|
||||
|. . . S |
|
||||
| E.o |
|
||||
|.. .. |
|
||||
|o o+.. |
|
||||
| +o+*o. |
|
||||
+-----------------+
|
||||
|
||||
Now, obviously there are plenty of ways to put this in place on the remote machine, but since we are using Ansible, let's use that:
|
||||
|
||||
$ ansible all -m copy -a "src=/home/mike/.ssh/id_rsa.pub dest=/tmp/id_rsa.pub" --ask-pass -c paramiko
|
||||
|
||||
Sample outputs:
|
||||
|
||||
SSH password:
|
||||
127.0.0.1 | success >> {
|
||||
"changed": true,
|
||||
"dest": "/tmp/id_rsa.pub",
|
||||
"gid": 100,
|
||||
"group": "users",
|
||||
"md5sum": "bafd3fce6b8a33cf1de415af432774b4",
|
||||
"mode": "0644",
|
||||
"owner": "mike",
|
||||
"size": 410,
|
||||
"src": "/home/mike/.ansible/tmp/ansible-tmp-1407008170.46-208759459189201/source",
|
||||
"state": "file",
|
||||
"uid": 1000
|
||||
}
|
||||
|
||||
Next, add the public key to the remote server:
|
||||
|
||||
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko
|
||||
|
||||
Sample outputs:
|
||||
|
||||
SSH password:
|
||||
127.0.0.1 | FAILED | rc=1 >>
|
||||
/bin/sh: /root/.ssh/authorized_keys: Permission denied
|
||||
|
||||
Whoops, we want to be able to run things as root, so let's add a -u option:
|
||||
|
||||
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko -u root
|
||||
|
||||
Sample outputs:
|
||||
|
||||
SSH password:
|
||||
127.0.0.1 | success | rc=0 >>
|
||||
|
||||
Please note that I wanted to demonstrate a file transfer using Ansible; there is, however, a more built-in way of managing keys using Ansible:
|
||||
|
||||
$ ansible all -m authorized_key -a "user=mike key='{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}' path=/home/mike/.ssh/authorized_keys manage_dir=no" --ask-pass -c paramiko
|
||||
|
||||
Sample outputs:
|
||||
|
||||
SSH password:
|
||||
127.0.0.1 | success >> {
|
||||
"changed": true,
|
||||
"gid": 100,
|
||||
"group": "users",
|
||||
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCq+Z8/usprXk0aCAPyP0TGylm2MKbmEsHePUOd7p5DO1QQTHak+9gwdoJJavy0yoUdi+C+autKjvuuS+vGb8+I+8mFNu5CvKiZzIpMjZvrZMhHRdNud7GuEanusTEJfi1pUd3NA2iXhl4a6S9a/4G2mKyf7QQSzI4Z5ddudUXd9yHmo9Yt48/ASOJLHIcYfSsswOm8ux1UnyeHqgpdIVONVFsKKuSNSvZBVl3bXzhkhjxz8RMiBGIubJDBuKwZqNSJkOlPWYN76btxMCDVm07O7vNChpf0cmWEfM3pXKPBq/UBxyG2MgoCGkIRGOtJ8UjC/daadBUuxg92/u01VNEB mike@ultrabook.linuxdork.com",
|
||||
"key_options": null,
|
||||
"keyfile": "/home/mike/.ssh/authorized_keys",
|
||||
"manage_dir": false,
|
||||
"mode": "0600",
|
||||
"owner": "mike",
|
||||
"path": "/home/mike/.ssh/authorized_keys",
|
||||
"size": 410,
|
||||
"state": "file",
|
||||
"uid": 1000,
|
||||
"unique": false,
|
||||
"user": "mike"
|
||||
}
|
||||
|
||||
Now that the keys are in place, let's try running an arbitrary command, like hostname, and hope we don't get prompted for a password:
|
||||
|
||||
$ ansible all -m shell -a "hostname" -u root
|
||||
|
||||
Sample outputs:
|
||||
|
||||
127.0.0.1 | success | rc=0 >>
|
||||
|
||||
Success! Now that we can run commands as root without being bothered by a password prompt, we are in a good place to easily configure any and all hosts in the Ansible hosts file. Let's remove the key from /tmp:
|
||||
|
||||
$ ansible all -m file -a "dest=/tmp/id_rsa.pub state=absent" -u root
|
||||
|
||||
Sample outputs:
|
||||
|
||||
127.0.0.1 | success >> {
|
||||
"changed": true,
|
||||
"path": "/tmp/id_rsa.pub",
|
||||
"state": "absent"
|
||||
}
|
||||
|
||||
Next, I'm going to make sure we have a few packages installed and on the latest version, and then we will move on to something a little more complicated:
|
||||
|
||||
$ ansible all -m zypper -a "name=apache2 state=latest" -u root
|
||||
|
||||
Sample outputs:
|
||||
|
||||
127.0.0.1 | success >> {
|
||||
"changed": false,
|
||||
"name": "apache2",
|
||||
"state": "latest"
|
||||
}
|
||||
|
||||
Alright, the key we placed in /tmp is now absent, and we have the latest version of Apache installed. This brings me to the next point, something that makes Ansible very flexible and gives more power to playbooks: many may have noticed the -m zypper in the previous commands. Unless you use openSUSE or SUSE Enterprise, you may not be familiar with zypper; it is basically the equivalent of yum in the SUSE world. In all of the examples above I have had only one machine in my hosts file, and while everything but the last command should work on any standard *nix system with a standard SSH config, this leads to a problem. What if we had multiple machine types that we wanted to manage? Well, this is where playbooks and the configurability of Ansible really shine. First, let's modify our hosts file a little; here goes:
|
||||
|
||||
$ cat ~/ansible_hosts
|
||||
|
||||
Sample outputs:
|
||||
|
||||
[RHELBased]
|
||||
10.50.1.33
|
||||
10.50.1.47
|
||||
|
||||
[SUSEBased]
|
||||
127.0.0.1
|
||||
|
||||
First, we create some groups of servers and give them some meaningful tags. Then we create a playbook that will do different things for the different kinds of servers. You might notice the similarity between the YAML data structures and the command-line instructions we ran earlier. Basically, -m is a module and -a is for module args. In the YAML representation you put the module name, then a colon, and finally the args.
|
||||
|
||||
---
|
||||
- hosts: SUSEBased
|
||||
remote_user: root
|
||||
tasks:
|
||||
- zypper: name=apache2 state=latest
|
||||
- hosts: RHELBased
|
||||
remote_user: root
|
||||
tasks:
|
||||
- yum: name=httpd state=latest
|
||||
|
||||
Now that we have a simple playbook, we can run it as follows:
|
||||
|
||||
$ ansible-playbook testPlaybook.yaml -f 10
|
||||
|
||||
Sample outputs:
|
||||
|
||||
PLAY [SUSEBased] **************************************************************
|
||||
|
||||
GATHERING FACTS ***************************************************************
|
||||
ok: [127.0.0.1]
|
||||
|
||||
TASK: [zypper name=apache2 state=latest] **************************************
|
||||
ok: [127.0.0.1]
|
||||
|
||||
PLAY [RHELBased] **************************************************************
|
||||
|
||||
GATHERING FACTS ***************************************************************
|
||||
ok: [10.50.1.33]
|
||||
ok: [10.50.1.47]
|
||||
|
||||
TASK: [yum name=httpd state=latest] *******************************************
|
||||
changed: [10.50.1.33]
|
||||
changed: [10.50.1.47]
|
||||
|
||||
PLAY RECAP ********************************************************************
|
||||
10.50.1.33 : ok=2 changed=1 unreachable=0 failed=0
|
||||
10.50.1.47 : ok=2 changed=1 unreachable=0 failed=0
|
||||
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
|
||||
|
||||
Now you will notice output from each machine that Ansible contacted. The -f option is what lets Ansible run on multiple hosts in parallel. Instead of specifying all, or the name of a host group, on the command line, the target hosts are defined in the playbook itself. While we no longer need --ask-pass since we have SSH keys set up, it comes in handy when setting up new machines, and even new machines can be configured from a playbook. To demonstrate this, let's convert our earlier key example into a playbook:
|
||||
|
||||
---
|
||||
- hosts: SUSEBased
|
||||
remote_user: mike
|
||||
sudo: yes
|
||||
tasks:
|
||||
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
|
||||
- hosts: RHELBased
|
||||
remote_user: mdonlon
|
||||
sudo: yes
|
||||
tasks:
|
||||
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
|
||||
|
||||
There are plenty of other ways this could be done, for example having the keys dropped during a kickstart, or via some other process involved in bringing up machines on the hosting of your choice, but this approach can be used in pretty much any situation, assuming SSH is set up to accept a password. One thing to think about before writing out too many playbooks: version control can save you a lot of time. Machines need to change over time, but you don't need to rewrite a playbook every time a machine changes; just update the pertinent bits and commit the changes. Another benefit of this ties into what I said earlier about being able to manage the entire infrastructure from multiple places: you can easily git clone your playbook repo onto a new machine and be completely set up to manage everything in a repeatable manner.
|
||||
|
||||
#### Real world ansible example ####
|
||||
|
||||
I know a lot of people make great use of services like pastebin, and a lot of companies, for obvious reasons, set up their own internal instance of something similar. Recently, I came across a newish application called showterm, and coincidentally I was asked to set up an internal instance of it for a client. I will spare you the details of this app, but you can Google showterm if interested. So, for a reasonable real-world example, I will attempt to set up a showterm server and configure the needed app on the client to use it. In the process, we will need a database server as well. So here goes; let's start with the client configuration.
|
||||
|
||||
---
|
||||
- hosts: showtermClients
|
||||
remote_user: root
|
||||
tasks:
|
||||
- yum: name=rubygems state=latest
|
||||
- yum: name=ruby-devel state=latest
|
||||
- yum: name=gcc state=latest
|
||||
- gem: name=showterm state=latest user_install=no
|
||||
|
||||
That was easy; let's move on to the main server:
|
||||
|
||||
---
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: ensure packages are installed
|
||||
yum: name={{item}} state=latest
|
||||
with_items:
|
||||
- postgresql
|
||||
- postgresql-server
|
||||
- postgresql-devel
|
||||
- python-psycopg2
|
||||
- git
|
||||
- ruby21
|
||||
- ruby21-passenger
|
||||
- name: showterm server from github
|
||||
git: repo=https://github.com/ConradIrwin/showterm.io dest=/root/showterm
|
||||
- name: Initdb
|
||||
command: service postgresql initdb
|
||||
creates=/var/lib/pgsql/data/postgresql.conf
|
||||
|
||||
- name: Start PostgreSQL and enable at boot
|
||||
service: name=postgresql
|
||||
enabled=yes
|
||||
state=started
|
||||
- gem: name=pg state=latest user_install=no
|
||||
handlers:
|
||||
- name: restart postgresql
|
||||
service: name=postgresql state=restarted
|
||||
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
sudo: yes
|
||||
sudo_user: postgres
|
||||
vars:
|
||||
dbname: showterm
|
||||
dbuser: showterm
|
||||
dbpassword: showtermpassword
|
||||
tasks:
|
||||
- name: create db
|
||||
postgresql_db: name={{dbname}}
|
||||
|
||||
- name: create user with ALL priv
|
||||
postgresql_user: db={{dbname}} name={{dbuser}} password={{dbpassword}} priv=ALL
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: database.yml
|
||||
template: src=database.yml dest=/root/showterm/config/database.yml
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: run bundle install
|
||||
shell: bundle install
|
||||
args:
|
||||
chdir: /root/showterm
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: run rake db tasks
|
||||
shell: 'bundle exec rake db:create db:migrate db:seed'
|
||||
args:
|
||||
chdir: /root/showterm
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: apache config
|
||||
template: src=showterm.conf dest=/etc/httpd/conf.d/showterm.conf
|
||||
|
||||
Not so bad. Keeping in mind that this is a somewhat random and obscure app that we can now install in a consistent fashion on any number of machines, this is where the benefits of configuration management really come to light. Also, in most cases the declarative syntax almost speaks for itself, and wiki pages need not go into as much detail, although a wiki page with too much detail is never a bad thing in my opinion.
|
||||
|
||||
### Expanding Configuration ###
|
||||
|
||||
We have not touched on everything here; Ansible has many options for configuring your setup. You can do things like embedding variables in your hosts file, so that Ansible will interpolate them on the remote nodes, e.g.:
|
||||
|
||||
[RHELBased]
|
||||
10.50.1.33 http_port=443
|
||||
10.50.1.47 http_port=80 ansible_ssh_user=mdonlon
|
||||
|
||||
[SUSEBased]
|
||||
127.0.0.1 http_port=443
|
||||
|
||||
While this is really handy for quick configurations, you can also layer variables across multiple files in YAML format. In your hosts file's directory, you can make two subdirectories named group_vars and host_vars. Any files in those paths that match the name of a group of hosts, or a host name in your hosts file, will be interpolated at run time. So the previous example would look like this:
|
||||
|
||||
ultrabook:/etc/ansible # pwd
|
||||
/etc/ansible
|
||||
ultrabook:/etc/ansible # tree
|
||||
.
|
||||
├── group_vars
|
||||
│ ├── RHELBased
|
||||
│ └── SUSEBased
|
||||
├── hosts
|
||||
└── host_vars
|
||||
├── 10.50.1.33
|
||||
└── 10.50.1.47
|
||||
|
||||
----------
|
||||
|
||||
2 directories, 5 files
|
||||
ultrabook:/etc/ansible # cat hosts
|
||||
[RHELBased]
|
||||
10.50.1.33
|
||||
10.50.1.47
|
||||
|
||||
----------
|
||||
|
||||
[SUSEBased]
|
||||
127.0.0.1
|
||||
ultrabook:/etc/ansible # cat group_vars/RHELBased
|
||||
ultrabook:/etc/ansible # cat group_vars/SUSEBased
|
||||
---
|
||||
http_port: 443
|
||||
ultrabook:/etc/ansible # cat host_vars/10.50.1.33
|
||||
---
|
||||
http_port: 443
|
||||
ultrabook:/etc/ansible # cat host_vars/10.50.1.47
|
||||
---
|
||||
http_port: 80
|
||||
ansible_ssh_user: mdonlon
|
||||
|
||||
### Refining Playbooks ###
|
||||
|
||||
There are many ways to organize playbooks as well. In the previous examples we used a single file, and everything was really simplified. One commonly used way of organizing things is creating roles. Basically, you load a main file as your playbook, and that then imports all the data from the extra files; the extra files are organized as roles. For example, if you have a WordPress site, you need a web head and a database. The web head will have a web server, the app code, and any needed modules. The database is sometimes run on the same host and sometimes on a remote host, and this is where roles really shine. You make a directory and a small playbook for each role. In this case we can have an apache role, a mysql role, a wordpress role, and mod_php and php roles. The big advantage of this is that not every role has to be applied on one server; in this case mysql could be applied to a separate machine. This also allows for code re-use; for example, your apache role could be used with Python apps and PHP apps alike. Demonstrating this fully is a little beyond the scope of this article, and there are many different ways of doing things, so I would recommend searching for Ansible playbook examples; a minimal sketch follows. There are many people contributing code on GitHub, and I am sure on various other sites.
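
As a rough sketch of such a layout (the directory structure and the group names webservers and dbservers here are hypothetical, just to show the shape), a role-based project might look like this:

    site.yml
    roles/
        apache/tasks/main.yml
        mysql/tasks/main.yml
        wordpress/tasks/main.yml

And site.yml would simply map the roles onto host groups:

    ---
    - hosts: webservers
      remote_user: root
      roles:
        - apache
        - wordpress

    - hosts: dbservers
      remote_user: root
      roles:
        - mysql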
|
||||
|
||||
### Modules ###
|
||||
|
||||
All of the work done behind the scenes in Ansible is driven by modules. Ansible has an excellent library of built-in modules that do things like package installation, transferring files, and everything we have done in this article. But for some people this will not be suitable for their setup, so Ansible provides a means of adding your own modules. One great thing about the API provided by Ansible is that you are not restricted to the language it was written in, Python; you can use any language, really. Ansible modules work by passing around JSON data structures, so as long as you can build a JSON data structure in your language of choice, which I am pretty sure any scripting language can do, you can begin coding something right away. There is plenty of documentation on the Ansible site about how the module interface works, and many examples of modules on GitHub as well. Keep in mind that some obscure languages may not have great support, but that would only be because not enough people are contributing code in that language; try it out and publish your results somewhere!
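
As a rough sketch of how small a module can be (this hello module is hypothetical, not part of Ansible), a shell script that prints a JSON document to stdout is already a valid module:

    #!/bin/bash
    # hello: a minimal hypothetical Ansible module written in shell.
    # Ansible executes this file on the target host and treats the JSON
    # printed to stdout as the module result.
    printf '{"changed": false, "msg": "hello from a custom module"}\n'

Saved as hello in a directory passed via the -M/--module-path option, it could then be invoked like any built-in module, e.g. ansible all -M ./library -m hello.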
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
In conclusion, there are many systems around for configuration management. I hope this article shows the ease of setup for Ansible, which I believe is one of its strongest points. Please keep in mind that I was trying to show a lot of different ways to do things, and not everything above may be considered best practice in your private infrastructure or in the wider coding world. Here are some more links to take your knowledge of Ansible to the next level:
|
||||
|
||||
- [Ansible project][7] home page.
|
||||
- [Ansible project documentation][8].
|
||||
- [Multistage environments with Ansible][9].
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.cyberciti.biz/python-tutorials/linux-tutorial-install-ansible-configuration-management-and-it-automation-tool/
|
||||
|
||||
作者:[Nix Craft][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.cyberciti.biz/tips/about-us
|
||||
[1]:http://www.cyberciti.biz/faq/fedora-sl-centos-redhat6-enable-epel-repo/
|
||||
[2]:http://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
|
||||
[3]:http://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html
|
||||
[4]:http://www.cyberciti.biz/faq/debian-ubuntu-centos-rhel-linux-install-pipclient/
|
||||
[5]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
|
||||
[6]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
|
||||
[7]:http://www.ansible.com/
|
||||
[8]:http://docs.ansible.com/
|
||||
[9]:http://rosstuck.com/multistage-environments-with-ansible/
|
@ -1,219 +0,0 @@
|
||||
DoubleC is translating
|
||||
How to create a site-to-site IPsec VPN tunnel using Openswan in Linux
|
||||
================================================================================
|
||||
A virtual private network (VPN) tunnel is used to securely interconnect two physically separate networks through a tunnel over the Internet. Tunneling is needed when the separate networks are private LAN subnets with globally non-routable private IP addresses, which are not reachable from each other via traditional routing over the Internet. For example, VPN tunnels are often deployed to connect different NATed branch office networks belonging to the same institution.
|
||||
|
||||
Sometimes VPN tunneling may be used simply for its security benefit as well. Service providers or private companies may design their networks in such a way that vital servers (e.g., database, VoIP, banking servers) are placed in a subnet that is accessible to trusted personnel through a VPN tunnel only. When a secure VPN tunnel is required, [IPsec][1] is often a preferred choice because an IPsec VPN tunnel is secured with multiple layers of security.
|
||||
|
||||
This tutorial will show how we can easily create a site-to-site VPN tunnel using [Openswan][2] in Linux.
|
||||
|
||||
### Topology ###
|
||||
|
||||
This tutorial will focus on the following topologies for creating an IPsec tunnel.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
### Installing Packages and Preparing VPN Servers ###
|
||||
|
||||
Usually, you will be managing site-A only, but based on the requirements, you could be managing both site-A and site-B. We start the process by installing Openswan.
|
||||
|
||||
On Red Hat based Systems (CentOS, Fedora or RHEL):
|
||||
|
||||
# yum install openswan lsof
|
||||
|
||||
On Debian based Systems (Debian, Ubuntu or Linux Mint):
|
||||
|
||||
# apt-get install openswan
|
||||
|
||||
Now we disable VPN redirects, if any, in the server using these commands:
|
||||
|
||||
# for vpn in /proc/sys/net/ipv4/conf/*;
|
||||
# do echo 0 > $vpn/accept_redirects;
|
||||
# echo 0 > $vpn/send_redirects;
|
||||
# done
|
||||
|
||||
Next, we modify the kernel parameters to allow IP forwarding and disable redirects permanently.
|
||||
|
||||
# vim /etc/sysctl.conf
|
||||
|
||||
----------
|
||||
|
||||
net.ipv4.ip_forward = 1
|
||||
net.ipv4.conf.all.accept_redirects = 0
|
||||
net.ipv4.conf.all.send_redirects = 0
|
||||
|
||||
Reload /etc/sysctl.conf:
|
||||
|
||||
# sysctl -p
|
||||
|
||||
We allow necessary ports in the firewall. Please make sure that the rules are not conflicting with existing firewall rules.
|
||||
|
||||
# iptables -A INPUT -p udp --dport 500 -j ACCEPT
|
||||
# iptables -A INPUT -p tcp --dport 4500 -j ACCEPT
|
||||
# iptables -A INPUT -p udp --dport 4500 -j ACCEPT
|
||||
|
||||
Finally, we create firewall rules for NAT.
|
||||
|
||||
# iptables -t nat -A POSTROUTING -s site-A-private-subnet -d site-B-private-subnet -j SNAT --to site-A-Public-IP
|
||||
|
||||
Please make sure that the firewall rules are persistent.
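
How to do that depends on the distribution. For example, on Red Hat based systems the running rules can be saved across reboots with the iptables init script (a sketch, assuming the iptables service is in use):

    # service iptables save

On Debian based systems, a package such as iptables-persistent serves the same purpose.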
|
||||
|
||||
#### Note: ####
|
||||
|
||||
- You could use MASQUERADE instead of SNAT. Logically it should work, but it caused me to have issues with virtual private servers (VPS) in the past. So I would use SNAT if I were you.
|
||||
- If you are managing site-B as well, create similar rules in site-B server.
|
||||
- Direct routing does not need SNAT.
|
||||
|
||||
### Preparing Configuration Files ###
|
||||
|
||||
The first configuration file that we will work with is ipsec.conf. Regardless of which server you are configuring, always consider your site as 'left' and remote site as 'right'. The following configuration is done in siteA's VPN server.
|
||||
|
||||
# vim /etc/ipsec.conf
|
||||
|
||||
----------
|
||||
|
||||
## general configuration parameters ##
|
||||
|
||||
config setup
|
||||
plutodebug=all
|
||||
plutostderrlog=/var/log/pluto.log
|
||||
protostack=netkey
|
||||
nat_traversal=yes
|
||||
virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/16
|
||||
## disable opportunistic encryption in Red Hat ##
|
||||
oe=off
|
||||
|
||||
## disable opportunistic encryption in Debian ##
|
||||
## Note: this is a separate declaration statement ##
|
||||
include /etc/ipsec.d/examples/no_oe.conf
|
||||
|
||||
## connection definition in Red Hat ##
|
||||
conn demo-connection-redhat
|
||||
authby=secret
|
||||
auto=start
|
||||
ike=3des-md5
|
||||
## phase 1 ##
|
||||
keyexchange=ike
|
||||
## phase 2 ##
|
||||
phase2=esp
|
||||
phase2alg=3des-md5
|
||||
compress=no
|
||||
pfs=yes
|
||||
type=tunnel
|
||||
left=<siteA-public-IP>
|
||||
leftsourceip=<siteA-public-IP>
|
||||
leftsubnet=<siteA-private-subnet>/netmask
|
||||
## for direct routing ##
|
||||
leftsubnet=<siteA-public-IP>/32
|
||||
leftnexthop=%defaultroute
|
||||
right=<siteB-public-IP>
|
||||
rightsubnet=<siteB-private-subnet>/netmask
|
||||
|
||||
## connection definition in Debian ##
|
||||
conn demo-connection-debian
|
||||
authby=secret
|
||||
auto=start
|
||||
## phase 1 ##
|
||||
keyexchange=ike
|
||||
## phase 2 ##
|
||||
esp=3des-md5
|
||||
pfs=yes
|
||||
type=tunnel
|
||||
left=<siteA-public-IP>
|
||||
leftsourceip=<siteA-public-IP>
|
||||
leftsubnet=<siteA-private-subnet>/netmask
|
||||
## for direct routing ##
|
||||
leftsubnet=<siteA-public-IP>/32
|
||||
leftnexthop=%defaultroute
|
||||
right=<siteB-public-IP>
|
||||
rightsubnet=<siteB-private-subnet>/netmask
|
||||
|
||||
Authentication can be done in several different ways. This tutorial will cover the use of pre-shared key, which is added to the file /etc/ipsec.secrets.
|
||||
|
||||
# vim /etc/ipsec.secrets
|
||||
|
||||
----------
|
||||
|
||||
siteA-public-IP siteB-public-IP: PSK "pre-shared-key"
|
||||
## in case of multiple sites ##
|
||||
siteA-public-IP siteC-public-IP: PSK "corresponding-pre-shared-key"
|
||||
|
||||
### Starting the Service and Troubleshooting ###
|
||||
|
||||
The server should now be ready to create a site-to-site VPN tunnel. If you are managing siteB as well, please make sure that you have configured the siteB server with the necessary parameters. For Red Hat based systems, please make sure that you add the service to startup using the chkconfig command (see the example after the restart command below).
|
||||
|
||||
# /etc/init.d/ipsec restart
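
To add the service to startup on Red Hat based systems, something like this should do (a sketch):

    # chkconfig ipsec on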
|
||||
|
||||
If there are no errors on either end server, the tunnel should be up now. Taking the following into consideration, you can test the tunnel with the ping command.
|
||||
|
||||
1. The siteB-private subnet should not be reachable from site A, i.e., ping should not work if the tunnel is not up.
|
||||
1. After the tunnel is up, try ping to siteB-private-subnet from siteA. This should work.
|
||||
|
||||
Also, the routes to the destination's private subnet should appear in the server's routing table.
|
||||
|
||||
# ip route
|
||||
|
||||
----------
|
||||
|
||||
[siteB-private-subnet] via [siteA-gateway] dev eth0 src [siteA-public-IP]
|
||||
default via [siteA-gateway] dev eth0
|
||||
|
||||
Additionally, we can check the status of the tunnel using the following useful commands.
|
||||
|
||||
# service ipsec status
|
||||
|
||||
----------
|
||||
|
||||
IPsec running - pluto pid: 20754
|
||||
pluto pid 20754
|
||||
1 tunnels up
|
||||
some eroutes exist
|
||||
|
||||
----------
|
||||
|
||||
# ipsec auto --status
|
||||
|
||||
----------
|
||||
|
||||
## output truncated ##
|
||||
000 "demo-connection-debian": myip=<siteA-public-IP>; hisip=unset;
|
||||
000 "demo-connection-debian": ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0; nat_keepalive: yes
|
||||
000 "demo-connection-debian": policy: PSK+ENCRYPT+TUNNEL+PFS+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 32,28; interface: eth0;
|
||||
|
||||
## output truncated ##
|
||||
000 #184: "demo-connection-debian":500 STATE_QUICK_R2 (IPsec SA established); EVENT_SA_REPLACE in 1653s; newest IPSEC; eroute owner; isakmp#183; idle; import:not set
|
||||
|
||||
## output truncated ##
|
||||
000 #183: "demo-connection-debian":500 STATE_MAIN_I4 (ISAKMP SA established); EVENT_SA_REPLACE in 1093s; newest ISAKMP; lastdpd=-1s(seq in:0 out:0); idle; import:not set
|
||||
|
||||
The log file /var/log/pluto.log should also contain useful information regarding authentication, key exchanges and information on different phases of the tunnel. If your tunnel doesn't come up, you could check there as well.
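
For example, you could watch the negotiation live while restarting the tunnel (a sketch):

    # tail -f /var/log/pluto.log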
|
||||
|
||||
If you are sure that all the configuration is correct, and if your tunnel is still not coming up, you should check the following things.
|
||||
|
||||
1. Many ISPs filter IPsec ports. Make sure that UDP 500, TCP/UDP 4500 ports are allowed by your ISP. You could try connecting to your server IPsec ports from a remote location by telnet.
|
||||
1. Make sure that necessary ports are allowed in the firewall of the server/s.
|
||||
1. Make sure that the pre-shared keys are identical in both end servers.
|
||||
1. The left and right parameters should be properly configured on both end servers.
|
||||
1. If you are facing problems with NAT, try using SNAT instead of MASQUERADING.
|
||||
|
||||
To sum up, this tutorial focused on the procedure of creating a site-to-site IPSec VPN tunnel in Linux using Openswan. VPN tunnels are very useful in enhancing security as they allow admins to make critical resources available only through the tunnels. Also VPN tunnels ensure that the data in transit is secured from eavesdropping or interception.
|
||||
|
||||
Hope this helps. Let me know what you think.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/2014/08/create-site-to-site-ipsec-vpn-tunnel-openswan-linux.html
|
||||
|
||||
作者:[Sarmed Rahman][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/sarmed
|
||||
[1]:http://en.wikipedia.org/wiki/IPsec
|
||||
[2]:https://www.openswan.org/
|
@ -1,3 +1,5 @@
|
||||
[felixonmars translating...]
|
||||
|
||||
How to set up Nagios Remote Plugin Executor (NRPE) in Linux
|
||||
================================================================================
|
||||
As far as network management is concerned, Nagios is one of the most powerful tools. Nagios can monitor the reachability of remote hosts, as well as the state of services running on them. However, what if we want to monitor something other than network services for a remote host? For example, we may want to monitor the disk utilization or [CPU processor load][1] of a remote host. Nagios Remote Plugin Executor (NRPE) is a tool that can help with doing that. NRPE allows one to execute Nagios plugins installed on remote hosts, and integrate them with an [existing Nagios server][2].
|
||||
|
@ -1,3 +1,4 @@
|
||||
translating by cvsher
|
||||
Sysstat – All-in-One System Performance and Usage Activity Monitoring Tool For Linux
|
||||
================================================================================
|
||||
**Sysstat** is a really handy tool which comes with a number of utilities to monitor system resources, their performance and their usage activity. A number of utilities that we all use on a daily basis come with the sysstat package. It also provides tools that can be scheduled using cron to collect all performance and activity data; a sketch of such a cron entry follows.
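
Many distributions ship a cron fragment similar to the following (the paths vary by distribution; this example assumes a 64-bit RHEL/CentOS layout):

    # /etc/cron.d/sysstat
    # collect system activity data every 10 minutes
    */10 * * * * root /usr/lib64/sa/sa1 1 1
    # generate a daily activity summary at 23:53
    53 23 * * * root /usr/lib64/sa/sa2 -A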
|
||||
@ -121,4 +122,4 @@ via: http://www.tecmint.com/install-sysstat-in-linux/
|
||||
[a]:http://www.tecmint.com/author/kuldeepsharma47/
|
||||
[1]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/
|
||||
[2]:http://sebastien.godard.pagesperso-orange.fr/download.html
|
||||
[3]:http://sebastien.godard.pagesperso-orange.fr/documentation.html
|
||||
|
||||
|
@ -0,0 +1,78 @@
|
||||
Linux FAQs with Answers--How to configure a static IP address on CentOS 7
|
||||
================================================================================
|
||||
> **Question**: On CentOS 7, I want to switch from DHCP to static IP address configuration with one of my network interfaces. What is a proper way to assign a static IP address to a network interface permanently on CentOS or RHEL 7?
|
||||
|
||||
If you want to set up a static IP address on a network interface in CentOS 7, there are several different ways to do it, depending on whether or not you want to use Network Manager.
|
||||
|
||||
Network Manager is a dynamic network control and configuration system that attempts to keep network devices and connections up and active when they are available. CentOS/RHEL 7 comes with the Network Manager service installed and enabled by default.
|
||||
|
||||
To verify the status of Network Manager service:
|
||||
|
||||
$ systemctl status NetworkManager.service
|
||||
|
||||
To check which network interface is managed by Network Manager, run:
|
||||
|
||||
$ nmcli dev status
|
||||
|
||||

|
||||
|
||||
If the output of nmcli shows "connected" for a particular interface (e.g., enp0s3 in the example), it means that the interface is managed by Network Manager. You can easily disable Network Manager for a particular interface, so that you can configure it on your own for a static IP address.
|
||||
|
||||
Here are **two different ways to assign a static IP address to a network interface on CentOS 7**. We will be configuring a network interface named enp0s3.
|
||||
|
||||
### Configure a Static IP Address without Network Manager ###
|
||||
|
||||
Go to the /etc/sysconfig/network-scripts directory, and locate its configuration file (ifcfg-enp0s3). Create it if not found.
|
||||
|
||||

|
||||
|
||||
Open the configuration file and edit the following variables:
|
||||
|
||||

|
||||
|
||||
In the above, "NM_CONTROLLED=no" indicates that this interface will be set up using this configuration file, instead of being managed by the Network Manager service. "ONBOOT=yes" tells the system to bring up the interface during boot. A sketch of a complete file follows.
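
As a sketch, a static configuration could look like the following (the address, gateway and DNS values are example assumptions; adjust them for your network):

    DEVICE=enp0s3
    BOOTPROTO=static
    IPADDR=192.168.10.50
    NETMASK=255.255.255.0
    GATEWAY=192.168.10.1
    DNS1=8.8.8.8
    ONBOOT=yes
    NM_CONTROLLED=no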
|
||||
|
||||
Save changes and restart the network service using the following command:
|
||||
|
||||
# systemctl restart network.service
|
||||
|
||||
Now verify that the interface has been properly configured:
|
||||
|
||||
# ip add
|
||||
|
||||

|
||||
|
||||
### Configure a Static IP Address with Network Manager ###
|
||||
|
||||
If you want to use Network Manager to manage the interface, you can use nmtui (Network Manager Text User Interface) which provides a way to configure Network Manager in a terminal environment.
|
||||
|
||||
Before using nmtui, first set "NM_CONTROLLED=yes" in /etc/sysconfig/network-scripts/ifcfg-enp0s3.
|
||||
|
||||
Now let's install nmtui as follows.
|
||||
|
||||
# yum install NetworkManager-tui
|
||||
|
||||
Then go ahead and edit the Network Manager configuration of enp0s3 interface:
|
||||
|
||||
# nmtui edit enp0s3
|
||||
|
||||
The following screen will allow us to manually enter the same information that is contained in /etc/sysconfig/network-scripts/ifcfg-enp0s3.
|
||||
|
||||
Use the arrow keys to navigate this screen, press Enter to select from a list of values (or fill in the desired values), and finally click OK at the bottom right:
|
||||
|
||||

|
||||
|
||||
Finally, restart the network service.
|
||||
|
||||
# systemctl restart network.service
|
||||
|
||||
and you're ready to go.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/configure-static-ip-address-centos7.html
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
@ -0,0 +1,53 @@
|
||||
[su-kaiyao]翻译中
|
||||
|
||||
How To Reset Root Password On CentOS 7
|
||||
================================================================================
|
||||
The way to reset the root password on CentOS 7 is totally different from CentOS 6. Let me show you how to reset the root password in CentOS 7.
|
||||
|
||||
1 – In the GRUB boot menu, select the entry you want to edit.
|
||||
|
||||

|
||||
|
||||
2 – Press e to edit the selected entry.
|
||||
|
||||

|
||||
|
||||
3 – Go to the line beginning with linux16 and replace ro with rw init=/sysroot/bin/sh.
|
||||
|
||||

|
||||
|
||||
4 – Now press Ctrl+x to boot into single-user mode.
|
||||
|
||||

|
||||
|
||||
5 – Now chroot into the system with this command:
|
||||
|
||||
chroot /sysroot
|
||||
|
||||
6 – Reset the password.
|
||||
|
||||
passwd root
|
||||
|
||||
7 – Update SELinux information:
|
||||
|
||||
touch /.autorelabel
|
||||
|
||||
8 – Exit chroot
|
||||
|
||||
exit
|
||||
|
||||
9 – Reboot your system
|
||||
|
||||
reboot
|
||||
|
||||
That’s it. Enjoy.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/reset-root-password-centos-7/
|
||||
|
||||
作者:M.el Khamlichi
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
@ -0,0 +1,151 @@
|
||||
How to manage configurations in Linux with Puppet and Augeas
|
||||
================================================================================
|
||||
Although [Puppet][1] (note: this article was translated previously, under the filename "20140808 How to install Puppet server and client on CentOS and RHEL.md"; if that translation has been published, this link can be updated to the published address) is a really unique and useful tool, there are situations where you could use a bit of a different approach: for example, modifying configuration files which are already present on several of your servers and yet unique on each one of them. Folks from Puppet Labs realized this as well, and integrated a great tool called [Augeas][2] that is designed exactly for this usage.
|
||||
|
||||
Augeas can best be thought of as filling the gaps in Puppet's capabilities where an object-specific resource type (such as the host resource to manipulate /etc/hosts entries) is not yet available. In this howto, you will learn how to use Augeas to ease your configuration file management.
|
||||
|
||||
### What is Augeas? ###
|
||||
|
||||
Augeas is basically a configuration editing tool. It parses configuration files in their native formats and transforms them into a tree. Configuration changes are made by manipulating this tree and saving it back into native config files.
|
||||
|
||||
### What are we going to achieve in this tutorial? ###
|
||||
|
||||
We will install and configure the Augeas tool for use with our previously built Puppet server. We will create and test several different configurations with this tool, and learn how to properly use it to manage our system configurations.
|
||||
|
||||
### Prerequisites ###
|
||||
|
||||
We will need a working Puppet server and client setup. If you don't have it, please follow my previous tutorial.
|
||||
|
||||
The Augeas package can be found in the standard CentOS/RHEL repositories. Unfortunately, Puppet uses the Augeas ruby wrapper, which is only available in the puppetlabs repository (or [EPEL][4]). If you don't have this repository in your system already, add it using the following command:
|
||||
|
||||
On CentOS/RHEL 6.5:
|
||||
|
||||
# rpm -ivh https://yum.puppetlabs.com/el/6.5/products/x86_64/puppetlabs-release-6-10.noarch.rpm
|
||||
|
||||
On CentOS/RHEL 7:
|
||||
|
||||
# rpm -ivh https://yum.puppetlabs.com/el/7/products/x86_64/puppetlabs-release-7-10.noarch.rpm
|
||||
|
||||
After you have successfully added this repository, install Ruby-Augeas on your system:
|
||||
|
||||
# yum install ruby-augeas
|
||||
|
||||
Or if you are continuing from my last tutorial, install this package the Puppet way. Modify your custom_utils class inside /etc/puppet/manifests/site.pp to contain "ruby-augeas" inside the packages array:
|
||||
|
||||
class custom_utils {
|
||||
package { ["nmap","telnet","vimenhanced","traceroute","rubyaugeas"]:
|
||||
ensure => latest,
|
||||
allow_virtual => false,
|
||||
}
|
||||
}
|
||||
|
||||
### Augeas without Puppet ###
|
||||
|
||||
As was said in the beginning, Augeas is not originally from Puppet Labs, which means we can still use it even without Puppet itself. This approach can be useful for verifying your modifications and ideas before applying them in your Puppet environment. To make this possible, you need to install one additional package on your system. To do so, please execute the following command:
|
||||
|
||||
# yum install augeas
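
This gives you the interactive augtool shell, which can print and modify the configuration tree directly. A quick sketch of a session (the exact output depends on your /etc/hosts):

    # augtool
    augtool> print /files/etc/hosts/1/ipaddr
    /files/etc/hosts/1/ipaddr = "127.0.0.1"
    augtool> quit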
|
||||
|
||||
### Puppet Augeas Examples ###
|
||||
|
||||
For demonstration, here are a few example Augeas use cases.
|
||||
|
||||
#### Management of /etc/sudoers file ####
|
||||
|
||||
1. Add sudo rights to wheel group
|
||||
|
||||
This example will show you how to add simple sudo rights for group %wheel in your GNU/Linux system.
|
||||
|
||||
# Install sudo package
|
||||
package { 'sudo':
|
||||
ensure => installed, # ensure sudo package installed
|
||||
}
|
||||
|
||||
# Allow users belonging to wheel group to use sudo
|
||||
augeas { 'sudo_wheel':
|
||||
context => '/files/etc/sudoers', # The target file is /etc/sudoers
|
||||
changes => [
|
||||
# allow wheel users to use sudo
|
||||
'set spec[user = "%wheel"]/user %wheel',
|
||||
'set spec[user = "%wheel"]/host_group/host ALL',
|
||||
'set spec[user = "%wheel"]/host_group/command ALL',
|
||||
'set spec[user = "%wheel"]/host_group/command/runas_user ALL',
|
||||
]
|
||||
}
|
||||
|
||||
Now let's explain what the code does: **spec** defines the user section in /etc/sudoers, **[user]** selects the given user from the array, and all definitions behind a slash ( / ) are subparts of this user. So in a typical configuration this would be represented as:
|
||||
|
||||
user host_group/host host_group/command host_group/command/runas_user
|
||||
|
||||
Which is translated into this line of /etc/sudoers:
|
||||
|
||||
%wheel ALL = (ALL) ALL
|
||||
|
||||
2. Add command alias
|
||||
|
||||
The following part will show you how to define a command alias which you can use inside your sudoers file.
|
||||
|
||||
# Create new alias SERVICES which contains some basic privileged commands
|
||||
augeas { 'sudo_cmdalias':
|
||||
context => '/files/etc/sudoers', # The target file is /etc/sudoers
|
||||
changes => [
|
||||
"set Cmnd_Alias[alias/name = 'SERVICES']/alias/name SERVICES",
|
||||
"set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[1] /sbin/service",
|
||||
"set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[2] /sbin/chkconfig",
|
||||
"set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[3] /bin/hostname",
|
||||
"set Cmnd_Alias[alias/name = 'SERVICES']/alias/command[4] /sbin/shutdown",
|
||||
]
|
||||
}
|
||||
|
||||
The syntax of sudo command aliases is pretty simple: **Cmnd_Alias** defines the section of command aliases, **[alias/name]** binds everything to the given alias name, alias/name **SERVICES** defines the actual alias name, and alias/command is the array of all the commands that should be part of this alias. The output of this command will be the following:
|
||||
|
||||
Cmnd_Alias SERVICES = /sbin/service , /sbin/chkconfig , /bin/hostname , /sbin/shutdown
|
||||
|
||||
For more information about /etc/sudoers, visit the [official documentation][5].
|
||||
|
||||
#### Adding users to a group ####
|
||||
|
||||
To add users to groups using Augeas, you might want to add the new user either after the gid field or after the last user. We'll use group SVN for the sake of this example. This can be achieved by using the following command:
|
||||
|
||||
In Puppet:
|
||||
|
||||
augeas { 'augeas_mod_group':
|
||||
context => '/files/etc/group', # The target file is /etc/group
|
||||
changes => [
|
||||
"ins user after svn/*[self::gid or self::user][last()]",
|
||||
"set svn/user[last()] john",
|
||||
]
|
||||
}
|
||||
|
||||
Using augtool:
|
||||
|
||||
augtool> ins user after /files/etc/group/svn/*[self::gid or self::user][last()]
augtool> set /files/etc/group/svn/user[last()] john
|
||||
|
||||
### Summary ###
|
||||
|
||||
By now, you should have a good idea of how to use Augeas in your Puppet projects. Feel free to experiment with it, and definitely go through the official Augeas documentation. It will help you understand how to use Augeas properly in your own projects, and it will show you how much time you can actually save by using it.
|
||||
|
||||
If you have any questions feel free to post them in the comments and I will do my best to answer them and advise you.
|
||||
|
||||
### Useful Links ###
|
||||
|
||||
- [http://www.watzmann.net/categories/augeas.html][6]: contains a lot of tutorials focused on Augeas usage.
|
||||
- [http://projects.puppetlabs.com/projects/1/wiki/puppet_augeas][7]: Puppet wiki with a lot of practical examples.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/2014/09/manage-configurations-linux-puppet-augeas.html
|
||||
|
||||
作者:[Jaroslav Štěpánek][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/jaroslav
|
||||
[1]:http://xmodulo.com/2014/08/install-puppet-server-client-centos-rhel.html
|
||||
[2]:http://augeas.net/
|
||||
[3]:http://xmodulo.com/manage-configurations-linux-puppet-augeas.html
|
||||
[4]:http://xmodulo.com/2013/03/how-to-set-up-epel-repository-on-centos.html
|
||||
[5]:http://augeas.net/docs/references/lenses/files/sudoers-aug.html
|
||||
[6]:http://www.watzmann.net/categories/augeas.html
|
||||
[7]:http://projects.puppetlabs.com/projects/1/wiki/puppet_augeas
|
@ -0,0 +1,120 @@
|
||||
How to monitor user login history on CentOS with utmpdump
|
||||
================================================================================
|
||||
Keeping, maintaining and analyzing logs (i.e., accounts of events that have happened during a certain period of time or are currently happening) are among the most basic and essential tasks of a Linux system administrator. In the case of user management, examining user logon and logout logs (both failed and successful) can alert us to any potential security breaches or unauthorized use of our system. For example, remote logins from unknown IP addresses, or accounts being used outside working hours or during vacation leave, should raise a red flag.
|
||||
|
||||
On a CentOS system, user login history is stored in the following binary files:
|
||||
|
||||
- /var/run/utmp (which logs currently open sessions) is used by who and w tools to show who is currently logged on and what they are doing, and also by uptime to display system up time.
|
||||
- /var/log/wtmp (which stores the history of connections to the system) is used by last tool to show the listing of last logged-in users.
|
||||
- /var/log/btmp (which logs failed login attempts) is used by the lastb utility to show the listing of last failed login attempts.
|
||||
|
||||

|
||||
|
||||
In this post I'll show you how to use utmpdump, a simple program from the sysvinit-tools package that can be used to dump these binary log files in text format for inspection. This tool is available by default on stock CentOS 6 and 7. The information gleaned from utmpdump is more comprehensive than the output of the tools mentioned earlier, and that's what makes it a nice utility for the job. Besides, utmpdump can be used to modify utmp or wtmp, which can be useful if you want to fix any corrupted entries in the binary logs.
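
For instance, a corrupted wtmp can be dumped to text, edited by hand, and converted back to binary with the -r option (a sketch; back up the original file before overwriting it):

    # utmpdump /var/log/wtmp > /tmp/wtmp.txt
    # vi /tmp/wtmp.txt
    # utmpdump -r < /tmp/wtmp.txt > /var/log/wtmp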
|
||||
|
||||
### How to Use Utmpdump and Interpret its Output ###
|
||||
|
||||
As we mentioned earlier, these log files, as opposed to other logs most of us are familiar with (e.g., /var/log/messages, /var/log/cron, /var/log/maillog), are saved in binary file format, and thus we cannot use pagers such as less or more to view their contents. That is where utmpdump saves the day.
|
||||
|
||||
In order to display the contents of /var/run/utmp, run the following command:
|
||||
|
||||
# utmpdump /var/run/utmp
|
||||
|
||||

|
||||
|
||||
To do the same with /var/log/wtmp:
|
||||
|
||||
# utmpdump /var/log/wtmp
|
||||
|
||||

|
||||
|
||||
and finally with /var/log/btmp:
|
||||
|
||||
# utmpdump /var/log/btmp
|
||||
|
||||

|
||||
|
||||
As you can see, the output formats of the three cases are identical, except that the records in utmp and btmp are arranged chronologically, while in wtmp the order is reversed.
|
||||
|
||||
Each log line is formatted in multiple columns described as follows:

- The first field shows a session identifier, while the second holds the PID.
- The third field can hold one of the following values: ~~ (indicating a runlevel change or a system reboot), bw (meaning a bootwait process), a digit (indicating a TTY number), or a character and a digit (meaning a pseudo-terminal).
- The fourth field can be either empty or hold the user name, reboot, or runlevel.
- The fifth field holds the main TTY or PTY (pseudo-terminal), if that information is available.
- The sixth field holds the name of the remote host (if the login is performed from the local host, this field is blank, except for run-level messages, which will return the kernel version).
- The seventh field holds the IP address of the remote system (if the login is performed from the local host, this field will show 0.0.0.0). If DNS resolution is not provided, the sixth and seventh fields will show identical information (the IP address of the remote system).
- The last (eighth) field indicates the date and time when the record was created.
|
||||
|
||||
### Usage Examples of Utmpdump ###
|
||||
|
||||
Here are a few simple use cases of utmpdump.
|
||||
|
||||
1. Check how many times (and at what times) a particular user (e.g., gacanepa) logged on to the system between August 18 and September 17.
|
||||
|
||||
# utmpdump /var/log/wtmp | grep gacanepa
|
||||
|
||||

|
||||
|
||||
If you need to review login information from prior dates, you can check the wtmp-YYYYMMDD (or wtmp.[1...N]) and btmp-YYYYMMDD (or btmp.[1...N]) files in /var/log, which are the old archives of wtmp and btmp files, generated by [logrotate][1].
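For example, a rotated archive can be inspected directly (the file name below is hypothetical; use whatever logrotate created on your system):

    # utmpdump /var/log/wtmp-20140901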
|
||||
|
||||
2. Count the number of logins from IP address 192.168.0.101.
|
||||
|
||||
# utmpdump /var/log/wtmp | grep 192.168.0.101
|
||||
|
||||

|
||||
|
||||
3. Display failed login attempts.
|
||||
|
||||
# utmpdump /var/log/btmp
|
||||
|
||||

|
||||
|
||||
In the output of /var/log/btmp, every log line corresponds to a failed login attempt (e.g., using an incorrect password or a non-existent user ID). Logons using non-existent user IDs are highlighted in the above image, which can alert you that someone is attempting to break into your system by guessing commonly-used account names. This is particularly serious when tty1 was used, since it means that someone had access to a terminal on your machine (time to check who has keys to your datacenter, maybe?).
|
||||
|
||||
4. Display login and logout information per user session.
|
||||
|
||||
# utmpdump /var/log/wtmp
|
||||
|
||||

|
||||
|
||||
In /var/log/wtmp, a new login event is characterized by '7' in the first field, a terminal number (or pseudo-terminal id) in the third field, and username in the fourth. The corresponding logout event will be represented by '8' in the first field, the same PID as the login in the second field, and a blank terminal number field. For example, take a close look at PID 1463 in the above image.
|
||||
|
||||
- On [Fri Sep 19 11:57:40 2014 ART] the login prompt appeared in tty1.
|
||||
- On [Fri Sep 19 12:04:21 2014 ART], user root logged on.
|
||||
- On [Fri Sep 19 12:07:24 2014 ART], root logged out.
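A quick and rough way to pull out just the two records of that session is to filter by record type and PID (1463 is simply the PID from the image above):

    # utmpdump /var/log/wtmp | grep -E "^\[[78]\]" | grep "1463"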
|
||||
|
||||
On a side note, the word LOGIN in the fourth field means that a login prompt is present in the terminal specified in the fifth field.
|
||||
|
||||
So far I covered somewhat trivial examples. You can combine utmpdump with other text sculpting tools such as awk, sed, grep or cut to produce filtered and enhanced output.
|
||||
|
||||
For example, you can use the following command to list all login events of a particular user (e.g., gacanepa) and send the output to a .csv file that can be viewed with a pager or a workbook application, such as LibreOffice's Calc or Microsoft Excel. Let's display PID, username, IP address and timestamp only:
|
||||
|
||||
# utmpdump /var/log/wtmp | grep -E "\[7].*gacanepa" | awk -v OFS="," 'BEGIN {FS="] "}; {print $2,$4,$7,$8}' | sed -e 's/\[//g' -e 's/\]//g'
|
||||
|
||||

|
||||
|
||||
As represented with three blocks in the image, the filtering logic is composed of three pipelined steps. The first step is used to look for login events ([7]) triggered by user gacanepa. The second and third steps are used to select desired fields, remove square brackets in the output of utmpdump, and set the output field separator to a comma.
|
||||
|
||||
Of course, you need to redirect the output of the above command to a file if you want to open it later (append "> [name_of_file].csv" to the command).
|
||||
|
||||

|
||||
|
||||
As a more complex example, if you want to know which users (as listed in /etc/passwd) have not logged on during a period of time, you could extract user names from /etc/passwd, and then grep the utmpdump output of /var/log/wtmp against that user list. As you can see, the possibilities are limitless.
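For instance, here is a minimal sketch of that idea (assuming regular accounts start at UID 1000, as on stock CentOS 7; adjust the threshold for your system):

    # for u in $(awk -F: '$3 >= 1000 {print $1}' /etc/passwd); do utmpdump /var/log/wtmp 2>/dev/null | grep -q "\[$u" || echo "$u has no login record in wtmp"; done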
|
||||
|
||||
Before concluding, let's briefly show yet another use case of utmpdump: modify utmp or wtmp. As these are binary log files, you cannot edit them as is. Instead, you can export their content to text format, modify the text output, and then import the modified content back to the binary logs. That is:
|
||||
|
||||
# utmpdump /var/run/utmp > tmp_output
<modify tmp_output using a text editor>
# utmpdump -r tmp_output > /var/run/utmp
|
||||
|
||||
This can be useful when you want to remove or fix any bogus entry in the binary logs.
|
||||
|
||||
To sum up, utmpdump complements standard utilities such as who, w, uptime, last, lastb by dumping detailed login events stored in utmp, wtmp and btmp log files, as well as in their rotated old archives, and that certainly makes it a great utility.
|
||||
|
||||
Feel free to enhance this post with your comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/2014/09/monitor-user-login-history-centos-utmpdump.html
|
||||
|
||||
作者:[Gabriel Cánepa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/gabriel
|
||||
[1]:http://xmodulo.com/2014/09/logrotate-manage-log-files-linux.html
|
@ -0,0 +1,37 @@
|
||||
Red Hat公司8200万美元收购FeedHenry来推动手机开发
|
||||
================================================================================
|
||||
> 这是Red Hat公司进入手机开发领域的一次关键收获。
|
||||
|
||||
Red Hat公司的JBoss开发者工具事业部一直注重于企业开发,而忽略了移动开发。如今,随着Red Hat公司宣布以8200万美元收购移动开发供应商 [FeedHenry][1],这一切将开始发生改变。这笔交易将在Red Hat公司2015财年的第三季度完成。Red Hat公司定于美国东部时间9月18日下午4点公布其2015财年第二季度的收入。
|
||||
|
||||
Red Hat公司的中间件总经理Mike Piech说当交易结束后FeedHenry公司的员工将会变成Red Hat公司的员工。
|
||||
|
||||
FeedHenry公司的开发平台能让应用开发者快速地开发出Android、iOS、Windows Phone以及黑莓的手机应用。FeedHenry的平台深深植根于Node.js编程架构,而那并不是JBoss过去所涉及的领域。
|
||||
|
||||
"FeedHenry公司的这次收购显著地提高了我们对于Node.js的支持与衔接。" Piech说。
|
||||
|
||||
Red Hat公司的平台即服务(PaaS)技术OpenShift已经有了一个Node.js的cartridge组件。此外,Red Hat企业版Linux也把Node.js作为技术预览,包含在其软件集合中。
|
||||
|
||||
尽管Node.js本身就是开源的,但并不是FeedHenry公司的所有技术目前都符合开源许可证的要求。按照Red Hat公司的一贯政策,接下来他们将致力于让FeedHenry的技术开源。
|
||||
|
||||
“开源我们所收购的技术向来是公司的首要任务,我们没有理由认为FeedHenry会是例外。”Piech说。
|
||||
|
||||
Red Hat公司上一次对非开源技术公司的重大收购,是在2012年以1.04亿美元收购 [ManageIQ][2] 公司。在今年5月份,Red Hat公司启动了ManageIQ开源项目,开放了之前闭源的云管理技术代码。
|
||||
|
||||
从整合的角度来看,Red Hat公司尚未提供FeedHenry将如何融入公司产品线的完整细节。
|
||||
|
||||
“我们已经确定了FeedHenry公司与我们现有技术和产品之间一些很好的融合与集成点,”Piech说,“我们会在接下来的90天内分享更多路线图的细节。”
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.datamation.com/mobile-wireless/red-hat-acquires-feedhenry-for-82-million-to-advance-mobile-development.html
|
||||
|
||||
作者:[Sean Michael Kerner][a]
|
||||
译者:[ZTinoZ](https://github.com/ZTinoZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.datamation.com/author/Sean-Michael-Kerner-4807810.html
|
||||
[1]:http://www.feedhenry.com/
|
||||
[2]:http://www.datamation.com/cloud-computing/red-hat-makes-104-million-cloud-management-bid-with-manageiq-acquisition.html
|
@ -0,0 +1,37 @@
|
||||
Canonical在Ubuntu 14.04 LTS中关闭了一个nginx漏洞
|
||||
================================================================================
|
||||
> 用户不得不升级他们的系统来修复这个漏洞
|
||||
|
||||

|
||||
|
||||
Ubuntu 14.04 LTS
|
||||
|
||||
**Canonical已经在安全公告中公布了这个影响到Ubuntu 14.04 LTS (Trusty Tahr)的nginx漏洞的细节。这个问题已经被确定并被修复了。**
|
||||
|
||||
Ubuntu的开发者已经修复了nginx的一个小漏洞。他们解释说,nginx可能被利用来暴露网络上的敏感信息。
|
||||
|
||||
|
||||
根据安全公告,“Antoine Delignat-Lavaud和Karthikeyan Bhargavan发现nginx错误地重复使用了缓存的SSL会话。攻击者可能利用此问题,在特定的配置下,可以从不同的虚拟主机获得信息“。
|
||||
|
||||
对于这些问题的更详细的描述,可以看到Canonical的安全[公告][1]。用户应该升级自己的Linux发行版以解决此问题。
|
||||
|
||||
这个问题可以通过将系统升级到最新的nginx包(及相关依赖包)来修复。要应用该补丁,你可以直接运行系统的更新管理器。
|
||||
|
||||
如果你不想使用软件更新器,可以打开终端,输入以下命令(需要root权限):
|
||||
|
||||
sudo apt-get update
|
||||
sudo apt-get dist-upgrade
|
||||
|
||||
在一般情况下,一个标准的系统更新将会进行必要的更改。要应用此修补程序您不必重新启动计算机。
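升级完成后,可以用下面的命令查看当前安装的nginx版本,以确认补丁已经生效(具体的修复版本号请以Canonical的安全公告为准):

    $ apt-cache policy nginx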
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://news.softpedia.com/news/Canonical-Closes-Nginx-Exploit-in-Ubuntu-14-04-LTS-459677.shtml
|
||||
|
||||
作者:[Silviu Stahie][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
|
||||
[1]:http://www.ubuntu.com/usn/usn-2351-1/
|
@ -1,43 +0,0 @@
|
||||
|
||||
Debian 8 "Jessie" 可能将使用GNOME作为默认桌面
|
||||
================================================================================
|
||||
> Debian的GNOME团队已经取得了一定的进展
|
||||
|
||||

|
||||
|
||||
GNOME 3.14桌面
|
||||
|
||||
**相当长一段时间以来,Debian项目的开发者一直在试图决定默认采用Xfce、GNOME还是其他桌面环境,但至少目前看起来是GNOME赢了。**
|
||||
|
||||
[两天以前我们报道过][1],GNOME 3.14的软件包被上传到了Debian Testing,也就是Debian 8“Jessie”的仓库中,这是一个惊喜。通常情况下,GNOME的维护者不会这么快地加入最新的软件包,更别说整个桌面环境了。
|
||||
|
||||
事实证明,关于即将到来的Debian 8中默认桌面的争论一直“肆虐”,尽管这个词可能有点过于强烈。无论如何,一些开发者想要Xfce,另一些想要GNOME,而且看起来MATE也在备选之列。
|
||||
|
||||
### GNOME 最有可能成为 Debian 8“Jessie”的默认桌面 ###
|
||||
|
||||
我们说“最有可能”,是因为最终决定尚未达成,但看起来GNOME已经遥遥领先。Debian维护者和开发者乔伊·赫斯(Joey Hess)解释了为什么会这样。
|
||||
|
||||
“根据 https://wiki.debian.org/DebianDesktop/Requalification/Jessie 上的初步结果(所需数据尚未全部公布),在这一点上我大约有80%的把握确定GNOME会在这个过程中胜出。这尤其是基于可访问性和一定程度上的systemd整合。可访问性方面:GNOME和MATE领先其他桌面一大截。其他一些桌面在Debian整合上有所改善,部分正是这一评估过程推动的,但仍需要上游的大力支持。”
|
||||
|
||||
“systemd等底层的整合:Xfce、MATE等桌面正在尽力追赶这一领域发生的变化。希望在冻结期间,一旦底层技术栈不再变动,这些问题能有时间得到解决。所以这并不是对这些桌面的一票否决,但就目前的状态来看,GNOME是赢家,”乔伊·赫斯[补充说][2]。
|
||||
|
||||
这位开发者表示,Debian的GNOME团队为保留他们的项目做了[充满激情的论证][3],而Debian的Xfce团队对于自己的桌面是否应该成为默认桌面,态度其实很矛盾。
|
||||
|
||||
无论如何,Debian 8“Jessie”还没有具体的发布时间,也没有迹象表明它何时会发布。另一方面,GNOME 3.14已于今日发布(你看到这篇新闻的时候它应该已经发布了),并且很快就能进入Debian Testing。
|
||||
|
||||
我们还要感谢Debian中GNOME软件包的维护者之一Jordi Mallach,他主动为我们指明了正确的信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://news.softpedia.com/news/Debian-8-quot-Jessie-quot-to-Have-GNOME-as-the-Default-Desktop-459665.shtml
|
||||
|
||||
作者:[Silviu Stahie][a]
|
||||
译者:[fbigun](https://github.com/fbigun)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
|
||||
[1]:http://news.softpedia.com/news/Debian-8-quot-Jessie-quot-to-Get-GNOME-3-14-459470.shtml
|
||||
[2]:http://anonscm.debian.org/cgit/tasksel/tasksel.git/commit/?id=dce99f5f8d84e4c885e6beb4cc1bb5bb1d9ee6d7
|
||||
[3]:http://news.softpedia.com/news/Debian-Maintainer-Says-that-Xfce-on-Debian-Will-Not-Meet-Quality-Standards-GNOME-Is-Needed-454962.shtml
|
@ -0,0 +1,37 @@
|
||||
Wal Commander 0.17 Github版发布了
|
||||
================================================================================
|
||||

|
||||
|
||||
> ### 描述 ###
|
||||
>
|
||||
> Wal Commander GitHub 版是一款多平台的开源文件管理器,适用于Windows、Linux、FreeBSD和OS X。
|
||||
>
|
||||
> 这个项目的目的是创建一个模仿Far Manager外观和感觉的便携式文件管理器。
|
||||
|
||||
|
||||
Wal Commander GitHub版的下一个稳定版本0.17发布了。主要功能包括:使用命令历史进行命令行自动补全;通过文件关联将自定义命令绑定到对文件的不同操作;以及通过XQuartz对OS X的实验性支持。此版本还添加了很多新的快捷键。预编译的二进制文件适用于Windows x64;Linux、FreeBSD和OS X版本可以直接从[GitHub上的源代码][1]编译。
|
||||
|
||||
### 主要特性 ###
|
||||
|
||||
- 命令行自动补全 (使用Del键删除一条命令)
|
||||
- 文件关联 (主菜单 -> 命令 -> 文件关联)
|
||||
- XQuartz上实验性地支持OS X ([https://github.com/corporateshark/WalCommander/issues/5][2])
|
||||
|
||||
### [下载][3] ###
|
||||
|
||||
源代码: [https://github.com/corporateshark/WalCommander][4]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://wcm.linderdaum.com/release-0-17-0/
|
||||
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://github.com/corporateshark/WalCommander/releases
|
||||
[2]:https://github.com/corporateshark/WalCommander/issues/5
|
||||
[3]:http://wcm.linderdaum.com/downloads/
|
||||
[4]:https://github.com/corporateshark/WalCommander
|
@ -0,0 +1,92 @@
|
||||
优化 GitHub 服务器上的 MySQL 数据库性能
|
||||
================================================================================
|
||||
> 在 GitHub 我们总是说“如果网站响应速度不够快,说明我们的工作没完成”。我们之前在[前端的体验速度][1]这篇文章中介绍了一些提高网站响应速率的方法,但这只是故事的一部分。真正影响到 GitHub.com 性能的因素是 MySQL 数据库架构。让我们来瞧瞧我们的基础架构团队是如何无缝升级了 MySQL 架构吧,这事儿发生在去年8月份,成果就是大大提高了 GitHub 网站的速度。
|
||||
|
||||
### 任务 ###
|
||||
|
||||
去年我们把 GitHub 上的大部分数据移到了新的数据中心,这个中心有世界顶级的硬件资源和网络平台。自从使用了 MySQL 作为我们的后端基本存储系统,我们一直期望着一些改进来大大提高数据库性能,但是在数据中心使用全新的硬件来部署一套全新的集群环境并不是一件简单的工作,所以我们制定了一套计划和测试工作,以便数据能平滑过渡到新环境。
|
||||
|
||||
### 准备工作 ###
|
||||
|
||||
像我们这种架构上的巨大改变,执行的每一步都需要收集数据指标。新机器上安装好基础操作系统之后,接下来就是测试新配置下的各种性能。为了模拟真实的工作负载环境,我们使用 tcpdump 工具从老集群那里捕获正在发生的 SELECT 请求,并在新集群上重放一遍。
|
||||
|
||||
MySQL 微调是个繁琐的细致活,像众所周知的 innodb_buffer_pool_size 这个参数往往能对 MySQL 性能产生巨大的影响。对于这类参数,我们必须考虑在内,所以我们列了一份参数清单,包括 innodb_thread_concurrency,innodb_io_capacity,和 innodb_buffer_pool_instances,还有其它的。
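作为参考,下面是一个 my.cnf 配置片段的简单示意,列出了上面提到的这几个参数(数值纯属假设,必须根据你自己的硬件和负载实测调整):

    [mysqld]
    # 缓冲池大小,通常设置为物理内存的 50%~75%(此处数值仅为示意)
    innodb_buffer_pool_size = 96G
    # 将缓冲池拆分为多个实例,以减少并发访问时的争用
    innodb_buffer_pool_instances = 8
    # InnoDB 内部并发线程数,0 表示不限制
    innodb_thread_concurrency = 0
    # 告诉 InnoDB 磁盘每秒大致可承受的 I/O 次数
    innodb_io_capacity = 2000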
|
||||
|
||||
在每次测试中,我们都很小心地只改变一个参数,并且让一次测试至少运行12小时。我们会观察响应时间的变化曲线,每秒的响应次数,以及有可能会导致并发性降低的参数。我们使用 “SHOW ENGINE INNODB STATUS” 命令打印 InnoDB 性能信息,特别观察了 “SEMAPHORES” 一节的内容,它为我们提供了工作负载的状态信息。
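这条命令直接在 mysql 客户端里执行即可(\G 让输出按行展开,便于阅读):

    mysql> SHOW ENGINE INNODB STATUS\G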
|
||||
|
||||
当我们在设置参数后对运行结果感到满意,然后就开始将我们最大的一个数据表格迁移到一套独立的集群上,这个步骤作为整个迁移过程的早期测试,保证我们的核心集群空出更多的缓存池空间,并且为故障切换和存储功能提供更强的灵活性。这步初始迁移方案也引入了一个有趣的挑战:我们必须维持多条客户连接,并且要将这些连接重定向到正确的集群上。
|
||||
|
||||
除了硬件性能的提升,还需要补充一点,我们同时也对处理进程和拓扑结构进行了改进:我们添加了延时拷贝技术,更快、更高频地备份数据,以及更多的读拷贝空间。这些功能已经准备上线。
|
||||
|
||||
### 列出任务清单,三思后行 ###
|
||||
|
||||
每天有上百万用户使用 GitHub.com,我们不可能有机会进行实际意义上的数据切换。我们准备了一份详细的[任务清单][2]来执行迁移:
|
||||
|
||||

|
||||
|
||||
我们还规划了一个维护期,并且[在我们的博客中通知了大家][3],让用户注意到这件事情。
|
||||
|
||||
### 迁移时间到 ###
|
||||
|
||||
太平洋时间星期六上午5点,我们的迁移团队上线集合聊天,同时数据迁移正式开始:
|
||||
|
||||

|
||||
|
||||
我们将 GitHub 网站设置为维护模式,并在 Twitter 上发表声明,然后开始按上述任务清单的步骤开始工作:
|
||||
|
||||

|
||||
|
||||
**13 分钟**后,我们确保新的集群能正常工作:
|
||||
|
||||

|
||||
|
||||
然后我们让 GitHub.com 脱离维护期,并且让全世界的用户都知道我们的最新状态:
|
||||
|
||||

|
||||
|
||||
大量前期的测试工作与准备工作,让我们将维护期缩到最短。
|
||||
|
||||
### 检验最终的成果 ###
|
||||
|
||||
在接下来的几周时间里,我们密切监视着 GitHub.com 的性能和响应时间。我们发现迁移后网站的平均加载时间减少了一半,99 百分位的加载时间更是减少了*三分之二*:
|
||||
|
||||

|
||||
|
||||
### 我们学到了什么 ###
|
||||
|
||||
#### 功能划分 ####
|
||||
|
||||
在迁移过程中,我们采用了一个比较好的方法:将大的数据表(主要记录了一些历史数据)先迁移过去,空出旧集群的磁盘空间和缓存池空间。这一步为我们留下了更多的资源用于维护“热”数据,并把一些连接请求分离到多套集群里面。这为我们之后的成功奠定了基础,我们以后还会使用这种模式来进行迁移工作。
|
||||
|
||||
#### 测试测试测试 ####
|
||||
|
||||
为你的应用做验收测试和回归测试,越多越好,多多益善,不要嫌多。从老集群复制数据到新集群的过程中,如果进行验收测试和响应状态测试,得到的数据是不准的,如果数据不理想,这是正常的,不要惊讶,不要试图拿这些数据去分析原因。
|
||||
|
||||
#### 合作的力量 ####
|
||||
|
||||
对基础架构进行大的改变,通常需要涉及到很多人,我们要像一个团队一样为共同的目标而合作。我们的团队成员来自全球各地。
|
||||
|
||||
团队成员地图:
|
||||
|
||||

|
||||
|
||||
本次合作新创了一种工作流程:我们提交更改(pull request),获取实时反馈,查看修改了错误的 commit —— 全程没有电话交流或面对面的会议。当所有东西都可以通过 URL 提供信息,不同区域的人群之间的交流和反馈会变得非常简单。
|
||||
|
||||
### 一年后…… ###
|
||||
|
||||
整整一年时间过去了,我们很高兴地宣布这次数据迁移是很成功的 —— MySQL 性能和可靠性一直处于我们期望的状态。另外,新的集群还能让我们进一步去升级,提供更好的可靠性和响应时间。我将继续记录这些优化过程。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://github.com/blog/1880-making-mysql-better-at-github
|
||||
|
||||
作者:[samlambert][a]
|
||||
译者:[bazz2](https://github.com/bazz2)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/samlambert
|
||||
[1]:https://github.com/blog/1756-optimizing-large-selector-sets
|
||||
[2]:https://help.github.com/articles/writing-on-github#task-lists
|
||||
[3]:https://github.com/blog/1603-site-maintenance-august-31st-2013
|
@ -0,0 +1,89 @@
|
||||
桌面看腻了?试试这 4 款漂亮的 Linux 图标主题吧
|
||||
================================================================================
|
||||
**Ubuntu 的默认图标主题在 5 年内[并未发生太大的变化][1],那些说“[图标早就彻底更新过了][2]”的你过来,我保证不打你。如果你确实想尝试一些新鲜的东西,我们将向你展示一些惊艳的替代品,它们会让你感到眼前一亮。**
|
||||
|
||||
如果还是感到不太满意,你可以在文末的评论里留下你比较中意的图标主题的链接地址。
|
||||
|
||||
### Captiva ###
|
||||
|
||||

|
||||
|
||||
Captiva 图标 + elementary 文件夹图标 + Moka GTK
|
||||
|
||||
Captiva 是一款相对较新的图标主题,即使那些有华丽图标倾向的用户也会接受它。
|
||||
|
||||
Captiva 由 DeviantArt 的用户 ~[bokehlicia][3] 制作,它并未使用现在非常流行的平面扁平风格,而是采用了一种圆润、柔和的外观。图标本身呈现出一种很有质感的材质外观,同时通过微调的阴影和亮丽的颜色提高了自身的格调。
|
||||
|
||||
不过 Captiva 图标主题并未包含文件夹图标在内,因此它将使用 elementary(如果可以的话)或者普通的 Ubuntu 文件夹图标。
|
||||
|
||||
要想在 Ubuntu 14.04 中安装 Captiva 图标,你可以新开一个终端,按如下方式添加官方 PPA 并进行安装:
|
||||
|
||||
sudo add-apt-repository ppa:captiva/ppa
|
||||
|
||||
sudo apt-get update && sudo apt-get install captiva-icon-theme
|
||||
|
||||
或者,如果你不擅长通过软件源安装的话,你也可以直接从 DeviantArt 的主页上下载图标压缩包。把解压过的文件夹挪到家目录的‘.icons’目录下,即可完成安装。
|
||||
|
||||
不过在你完成安装后,你必须得通过像 [Unity Tweak Tool][4] 这样的工具来把你安装的图标主题(本文列出的其他图标主题也要这样)应用到系统上。
|
||||
|
||||
- [DeviantArt 上的 Captiva 图标主题][5]
|
||||
|
||||
### Square Beam ###
|
||||
|
||||

|
||||
|
||||
Square Beam 图标在 Orchis GTK 主题下
|
||||
|
||||
厌倦圆润的图标了?尝试下 Square Beam 吧。Square Beam 因为其艳丽的色泽、尖锐的线条和鲜明的图标形象,比本文列出的其他图标具有更加宏大的视觉效果。Square Beam 声称拥有超过 30,000 个不同的图标(抱歉,我没有仔细数过……),因此你很难找到它没有覆盖到的地方。
|
||||
|
||||
- [GNOME-Look.org 上的 Square Beam 图标主题][6]
|
||||
|
||||
### Moka & Faba ###
|
||||
|
||||

|
||||
|
||||
Moka/Faba Mono 图标在 Orchis GTK 主题下
|
||||
|
||||
这里得稍微介绍下 Moka 图标集。事实上,我敢打赌阅读此文的绝大部分用户正在使用这款图标。
|
||||
|
||||
柔和的颜色、平滑的边缘以及简洁的图标艺术设计,Moka 是一款真正出色的覆盖全面的应用图标。它的兄弟 Faba 将这些特点展现得淋漓尽致,而 Moka 也将延续这些 —— 涵盖所有的系统图标、文件夹图标、面板图标,等等。
|
||||
|
||||
欲知在 Ubuntu 上的安装详情或访问项目官方网站,请点击下面的链接。
|
||||
|
||||
- [下载 Moka & Faba 图标主题][7]
|
||||
|
||||
### Compass ###
|
||||
|
||||

|
||||
|
||||
Compass 图标在 Numix Blue GTK 主题下
|
||||
|
||||
在本文最后推荐的是 Compass,但最后推荐当然不是最差的意思。这款图标主题至今保持着‘2D,双色’的 UI 设计风格。它也许不像本文推荐的其他图标那样鲜明,但这正是它的特色。Compass 坚持这种风格并将它贯彻到底 —— 看看文件夹的图标就知道了!
|
||||
|
||||
可以通过 GNOME-Look(下面有链接)进行下载和安装,或者通过添加 Nitrux Artwork 的 PPA 安装:
|
||||
|
||||
sudo add-apt-repository ppa:nitrux/nitrux-artwork
|
||||
|
||||
sudo apt-get update && sudo apt-get install compass-icon-theme
|
||||
|
||||
- [GNOME-Look.org 上的 Compass 图标主题][8]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2014/09/4-gorgeous-linux-icon-themes-download
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[SteveArcher](https://github.com/SteveArcher)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:http://www.omgubuntu.co.uk/2010/02/lucid-gets-new-icons-for-rhythmbox-ubuntuone-memenu-more
|
||||
[2]:http://www.omgubuntu.co.uk/2012/08/new-icon-theme-lands-in-lubuntu-12-10
|
||||
[3]:http://bokehlicia.deviantart.com/
|
||||
[4]:http://www.omgubuntu.co.uk/2014/06/unity-tweak-tool-0-7-development-download
|
||||
[5]:http://bokehlicia.deviantart.com/art/Captiva-Icon-Theme-479302805
|
||||
[6]:http://gnome-look.org/content/show.php/Square-Beam?content=165094
|
||||
[7]:http://mokaproject.com/moka-icon-theme/download/ubuntu/
|
||||
[8]:http://gnome-look.org/content/show.php/Compass?content=160629
|
@ -0,0 +1,65 @@
|
||||
7个杀手级的开源监测工具
|
||||
================================================================================
|
||||
想要更清晰的了解你的网络吗?没有比这几个免费的工具更好用的了。
|
||||
|
||||
网络和系统监控是一个很宽泛的范畴。有的方案用于监控服务器是否正常工作,有的用于监控网络设备和应用;也有跟踪这些系统和设备的性能、提供趋势性能分析的解决方案。有些工具像个闹钟一样,当发现问题的时候就会报警,而另外一些工具甚至可以在警报响起的时候触发相应的动作。这里收集了一些开源的工具,旨在解决上述的一些甚至大部分问题。
|
||||
|
||||
### Cacti ###
|
||||
|
||||

|
||||
|
||||
Cacti是一个应用广泛的图表和趋势分析工具,几乎任何可监测的指标都可以用它来跟踪并绘制成图表。从硬盘的利用率到风扇的转速,在一个电脑管理系统中,只要是可以被监测的指标,Cacti都可以监测,并快速地转换成可视化的图表。
|
||||
|
||||
### Nagios ###
|
||||
|
||||

|
||||
|
||||
Nagios是一个经典的老牌系统和网络监测工具,运行速度快、可靠,但需要针对应用进行定制。对于初学者,Nagios是一个挑战,但它极其复杂的配置也正好反映出它的强大,因为它几乎可以适用于任何监控任务。要说缺点的话就是不怎么耐看,但其强劲的动力和可靠性弥补了这个缺点。
|
||||
|
||||
### Icinga ###
|
||||
|
||||

|
||||
|
||||
Icinga是Nagios的一个重建分支,它提供了一个全面的监控和警报框架,致力于打造一个像Nagios一样开放和可扩展的平台,但使用了和Nagios不一样的Web界面。Icinga 1和Nagios非常相近,而Icinga 2则是重写的版本。两个版本都能很好地兼容,Nagios用户可以很轻松地迁移到Icinga 1平台。
|
||||
|
||||
### NeDi ###
|
||||
|
||||

|
||||
|
||||
NeDi可能不如其他工具那样闻名全世界,但它确实是一个跟踪网络接入的强大解决方案。它持续扫描网络基础设施并为设备编制目录,跟踪发生的每一个事件,并且可以提供任意设备的当前位置和历史位置。
|
||||
|
||||
只要设备出现在网络上,NeDi就可以用来定位被偷或者丢失的设备。它甚至可以在地图上显示所有已发现的节点,并清晰地展示网络是怎样连接到物理设备端口的。
|
||||
|
||||
### Observium ###
|
||||
|
||||

|
||||
|
||||
Observium在系统和网络的性能趋势监测上有很好的表现,它支持静态和动态发现来确认服务器和网络设备,并利用多种监测方法监测任何可用的指标。Web界面非常整洁、易用。
|
||||
|
||||
就如我们看到的,Observium也可以在地图上显示任何被监测节点的实际位置。需要注意的是面板上关于活跃设备和警报的计数。
|
||||
|
||||
### Zabbix ###
|
||||
|
||||

|
||||
|
||||
Zabbix利用范围广泛的指标收集工具来监测服务器和网络。Zabbix的代理(agent)适用于大多数操作系统,你也可以使用被动方式或包括SNMP在内的外部检查来监控主机和网络设备。你还会发现很多提醒和通知设施,以及一个非常人性化、支持自定义面板的Web界面。此外,Zabbix还拥有一些专门的工具来监测Web应用和虚拟化管理程序。
|
||||
|
||||
Zabbix 还可以提供详细的互联图,以便于我们了解某些对象是怎么连接的。这些图是可以定制的,并且,图也可以以被监测的服务器和主机的分组形式被创建。
|
||||
|
||||
### Ntop ###
|
||||
|
||||

|
||||
|
||||
Ntop是一个数据包嗅探工具,拥有整洁的Web界面,用来显示被监测网络的实时数据。即时的网络数据可以通过一个高级的绘图工具进行可视化,主机的流量以及主机间的通信信息也可以实时地呈现出来。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.networkworld.com/article/2686794/asset-management/164219-7-killer-open-source-monitoring-tools.html
|
||||
|
||||
作者:[Paul Venezia][a]
|
||||
译者:[barney-ro](https://github.com/barney-ro)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.networkworld.com/author/Paul-Venezia/
|
@ -0,0 +1,109 @@
|
||||
安卓编年史
|
||||
================================================================================
|
||||

|
||||
电子邮件应用的所有界面。前两张截图展示了标签/收件箱结合的视图,最后一张截图展示了一封邮件。
|
||||
Ron Amadeo供图
|
||||
|
||||
邮件视图是——令人惊讶的!——白色。安卓的电子邮件应用从历史角度来说算是个打了折扣的Gmail应用,你可以在这里看到紧密的联系。读邮件以及写邮件视图几乎没有任何修改地就从Gmail那里直接取过来使用。
|
||||
|
||||

|
||||
即时通讯应用。截图展示了服务提供商选择界面,朋友列表,以及一个对话。
|
||||
Ron Amadeo供图
|
||||
|
||||
在Google Hangouts之前,甚至是Google Talk之前,就有“IM”——安卓1.0带来的唯一一个即时通讯客户端。令人惊奇的是,它支持多种IM服务:用户可以从AIM,Google Talk,Windows Live Messenger以及Yahoo中挑选。还记得操作系统开发者什么时候关心过互通性吗?
|
||||
|
||||
朋友列表是黑色背景上带有白色聊天气泡的界面。状态用一个带颜色的圆形来指示,右侧的小安卓机器人表示某人正在使用移动设备。与Google Hangouts相比,IM应用传达的状态信息要丰富得多,这真是令人惊奇。绿色代表某人正在使用设备并且已经登录,黄色代表登录了但处于空闲状态,红色代表手动设置状态为忙、不想被打扰,灰色表示离线。而现在,Hangouts只显示用户是否打开了应用。
|
||||
|
||||
聊天对话界面明显基于信息应用,聊天的背景从白色和蓝色被换成了白色和绿色。但是没人更改信息输入框的颜色,所以加上橙色的高亮效果,界面共使用了白色,绿色,蓝色和橙色。
|
||||
|
||||

|
||||
安卓1.0上的YouTube。截图展示了主界面,打开菜单的主界面,分类界面,视频播放界面。
|
||||
Ron Amadeo供图
|
||||
|
||||
凭借G1的320p屏幕和3G网络速度,YouTube可能还无法像今天这样引领移动风潮,但谷歌的视频服务在安卓1.0上就已经内置了。主界面看起来就像是安卓市场调整过的版本,顶部带有一个横向滚动选择区域,下面有垂直滚动的分类列表。谷歌的一些分类选择还真是奇怪:“最热门”和“最多观看”有什么区别?
|
||||
|
||||
一个谷歌没有意识到YouTube最终能达到多庞大的标志——有一个视频分类是“最近更新”。在今天,每分钟有[100小时时长的视频][1]上传到Youtube上,如果这个分类能正常工作的话,它会是一个快速滚动的视频列表,快到以至于变为一片无法阅读的模糊。
|
||||
|
||||
菜单含有搜索,喜爱,分类,设置。设置(没有图片)是有史以来最简陋的,只有个清除搜索历史的选项。分类都是一样的平淡,仅仅是个黑色的文本列表。
|
||||
|
||||
最后一张截图展示了视频播放界面,只支持横屏模式。尽管自动隐藏的播放控制有个进度条,但它还是很奇怪地包含了后退和前进按钮。
|
||||
|
||||

|
||||
YouTube的视频菜单,描述页面,评论。
|
||||
Ron Amadeo供图
|
||||
|
||||
每个视频的更多选项可以通过点击菜单按钮来打开。在这里你可以把视频标记为喜爱,查看详细信息,以及阅读评论。所有的这些界面,和视频播放一样,是锁定横屏模式的。
|
||||
|
||||
然而“共享”不会打开一个对话框,它只是向Gmail邮件中加入了视频的链接。想要把链接通过短信或即时消息发送给别人是不可能的。你可以阅读评论,但是没办法评价他们或发表自己的评论。你同样无法给视频评分或赞。
|
||||
|
||||

|
||||
相机应用的拍照界面,菜单,照片浏览模式。
|
||||
Ron Amadeo供图
|
||||
|
||||
在实体机上跑上真正的安卓意味着相机功能可以正常运作,即便那里没什么太多可关注的。左边的黑色方块是相机的界面,原本应该显示取景器图像,但SDK的截图工具没办法捕捉下来。G1有个硬件实体的拍照键(还记得吗?),所以相机没必要有个屏幕上的快门键。相机没有曝光,白平衡,或HDR设置——你可以拍摄照片,仅此而已。
|
||||
|
||||
菜单按钮显示两个选项:跳转到相册应用和带有两个选项的设置界面。第一个设置选项是是否给照片加上地理标记,第二个是在每次拍摄后显示提示菜单,你可以在上面右边看到截图。同样的,你目前还只能拍照——还不支持视频拍摄。
|
||||
|
||||

|
||||
日历的月视图,打开菜单的周视图,日视图,以及日程。
|
||||
Ron Amadeo供图
|
||||
|
||||
就像这个时期的大多数应用一样,日历的主命令界面是菜单。菜单用来切换视图,添加新事件,导航至当天,选择要显示的日程,以及打开设置。菜单扮演着每个单独按钮的入口的作用。
|
||||
|
||||
月视图不能显示约会事件的文字。每个日期旁边有个侧边,约会会显示为侧边上的绿色部分,通过位置来表示约会是在一天中的什么时候。周视图同样不能显示预约文字——G1的320×480的显示屏像素还不够密——所以你会在日历中看到一个带有颜色指示条的白块。唯一一个显示文字的是日程和日视图。你可以用滑动来切换日期——左右滑动切换周和日,上下滑动切换月份和日程。
|
||||
|
||||

|
||||
设置主界面,无线设置,关于页面的底部。
|
||||
Ron Amadeo供图
|
||||
|
||||
安卓1.0最终带来了设置界面。这个界面是个带有文字的黑白界面,粗略地分为各个部分。每个列表项边的下箭头让人误以为点击它会展开折叠的更多东西,但是触摸列表项的任何位置只会加载下一屏幕。所有的界面看起来确实无趣,都差不多一样,但是嘿,这可是设置啊。
|
||||
|
||||
任何带有开/关状态的选项都使用了卡通风的复选框。安卓1.0最初的复选框真是奇怪——就算是在“未选中”状态时,它们还是有个灰色的勾选标记在里面。安卓把勾选标记当作了灯泡,打开时亮起来,关闭的时候变得黯淡,但这不是复选框的工作方式。然而我们最终还是见到了“关于”页面。安卓1.0运行Linux内核2.6.25版本。
|
||||
|
||||
设置界面意味着我们终于可以打开安全设置并更改锁屏。安卓1.0只有两种风格,安卓0.9那样的灰色方形锁屏,以及需要你在9个点组成的网格中画出图案的图形解锁。像这样的滑动图案相比PIN码更加容易记忆和输入,尽管它没有增加多少安全性。
|
||||
|
||||

|
||||
语音拨号,图形锁屏,电池低电量警告,时间设置。
|
||||
Ron Amadeo供图
|
||||
|
||||
语音功能和语音拨号一同在1.0中到来。这个特性的各种实现曾在AOSP中徘徊了一段时间,它就是一个简单的、通过语音命令拨打号码和联系人的应用。语音拨号和谷歌未来的语音产品完全无关,它的工作方式和非智能机上的语音拨号一样。
|
||||
|
||||
最后一个值得注意的是,当电池电量低于百分之十五的时候会触发低电量弹窗。这是个有趣的图案,它把电源线错误的一端插向手机。谷歌,那可不是(现在依然不是)手机应该的充电方式。
|
||||
|
||||
安卓1.0是个伟大的开头,但是功能上仍然有许多缺失。实体键盘和大量硬件按钮被强制要求配备,因为不带有十字方向键或轨迹球的安卓设备依然不被允许销售。另外,基本的智能手机功能比如自动旋转依然缺失。内置应用不可能像今天这样通过安卓市场来更新。所有的谷歌系应用和系统交织在一起。如果谷歌想要升级一个单独的应用,需要通过运营商推送整个系统的更新。安卓依然还有许多工作要做。
|
||||
|
||||
### 安卓1.1——第一个真正的增量更新 ###
|
||||
|
||||

|
||||
安卓1.1的所有新特性:语音搜索,安卓市场付费应用支持,谷歌纵横,设置中的新“系统更新”选项。
|
||||
Ron Amadeo供图
|
||||
|
||||
安卓1.0发布四个半月后,2009年2月,安卓在安卓1.1中得到了它的第一个公开更新。系统方面没有太多变化,谷歌向1.1中添加新东西现如今也都已被关闭。谷歌语音搜索是安卓向云端语音搜索的第一个突击,它在应用抽屉里有自己的图标。尽管这个应用已经不能与谷歌服务器通讯,你可以[在iPhone上][2]看到它以前是怎么工作的。它还没有语音操作,但你可以说出想要搜索的,结果会显示在一个简单的谷歌搜索中。
|
||||
|
||||
安卓市场添加了对付费应用的支持,但是就像beta客户端中一样,这个版本的安卓市场不再能够连接Google Play服务器。我们最多能够看到分类界面,你可以在免费应用,付费应用和全部应用中选择。
|
||||
|
||||
地图添加了[谷歌纵横][3],一个向朋友分享自己位置的方法。纵横在几个月前为了支持Google+而被关闭并且不再能够工作。地图菜单里有个纵横的选项,但点击它现在只会打开一个带载入中圆圈的画面,并永远停留在这里。
|
||||
|
||||
安卓世界的系统更新来得更加迅速——或者至少是一条在运营商和OEM推送之前获得更新的途径——谷歌向“关于手机”界面添加了检查系统更新按钮。
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||
[Ron Amadeo][a] / Ron是Ars Technica的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。
|
||||
|
||||
[@RonAmadeo][t]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/7/
|
||||
|
||||
译者:[alim0x](https://github.com/alim0x)

校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.youtube.com/yt/press/statistics.html
|
||||
[2]:http://www.youtube.com/watch?v=y3z7Tw1K17A
|
||||
[3]:http://arstechnica.com/information-technology/2009/02/google-tries-location-based-social-networking-with-latitude/
|
||||
[a]:http://arstechnica.com/author/ronamadeo
|
||||
[t]:https://twitter.com/RonAmadeo
|
@ -0,0 +1,466 @@
|
||||
Linux 教程:安装 Ansible 配置管理和 IT 自动化工具
|
||||
================================================================================
|
||||

|
||||
|
||||
今天我来谈谈 ansible,一个由 Python 编写的强大的配置管理解决方案。尽管市面上已经有很多可供选择的配置管理解决方案,但他们各有优劣,而 ansible 的特点就在于它的简洁。让 ansible 在主流的配置管理系统中与众不同的一点便是,它并不需要你在想要配置的每个节点上安装自己的组件。同时提供的一个优点在于,如果需要的话,你可以在不止一个地方控制你的整个基础结构。最后一点是它的正确性,或许这里有些争议,但是我认为在大多数时候这仍然可以作为它的一个优点。说得足够多了,让我们来着手在 RHEL/CentOS 和基于 Debian/Ubuntu 的系统中安装和配置 Ansible.
|
||||
|
||||
### 准备工作 ####
|
||||
|
||||
1. 发行版:RHEL/CentOS/Debian/Ubuntu Linux
|
||||
1. Jinja2:Python 的一个对设计师友好的现代模板语言
|
||||
1. PyYAML:Python 的一个 YAML 编码/反编码函数库
|
||||
1. paramiko:纯 Python 编写的 SSHv2 协议函数库
|
||||
1. httplib2:一个功能全面的 HTTP 客户端函数库
|
||||
1. 本文中列出的绝大部分操作已经假设你将在 bash 或者其他任何现代的 shell 中以 root 用户执行。
|
||||
|
||||
### Ansible 如何工作 ###
|
||||
|
||||
Ansible 工具并不使用守护进程,它也不需要任何额外的自定义安全架构,因此它的部署可以说是十分容易。你需要的全部东西便是 SSH 客户端和服务器了。
|
||||
|
||||
+-----------------+ +---------------+
|
||||
|安装了 Ansible 的| SSH | 文件服务器1 |
|
||||
|Linux/Unix 工作站|<------------------>| 数据库服务器2 | 在本地或远程
|
||||
+-----------------+ 模块 | 代理服务器3 | 数据中心的
|
||||
192.168.1.100 +---------------+ Unix/Linux 服务器
|
||||
|
||||
其中:
|
||||
|
||||
1. 192.168.1.100 - 在你本地的工作站或服务器上安装 Ansible。
|
||||
1. 文件服务器1到代理服务器3 - 使用 192.168.1.100 和 Ansible 来自动管理所有的服务器。
|
||||
1. SSH - 在 192.168.1.100 和本地/远程的服务器之间设置 SSH 密钥。
|
||||
|
||||
### Ansible 安装教程 ###
|
||||
|
||||
ansible 的安装轻而易举,许多发行版的第三方软件仓库中都有现成的软件包,可以直接安装。其他简单的安装方法包括使用 pip 安装它,或者从 github 里获取最新的版本。若想使用你的软件包管理器安装,在[基于 RHEL/CentOS Linux 的系统里你很可能需要 EPEL 仓库][1]。
|
||||
|
||||
#### 在基于 RHEL/CentOS Linux 的系统中安装 ansible ####
|
||||
|
||||
输入如下 [yum 命令][2]:
|
||||
|
||||
$ sudo yum install ansible
|
||||
|
||||
#### 在基于 Debian/Ubuntu Linux 的系统中安装 ansible ####
|
||||
|
||||
输入如下 [apt-get 命令][3]:
|
||||
|
||||
$ sudo apt-get install software-properties-common
|
||||
$ sudo apt-add-repository ppa:ansible/ansible
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install ansible
|
||||
|
||||
#### 使用 pip 安装 ansible ####
|
||||
|
||||
[pip 命令是一个安装和管理 Python 软件包的工具][4],比如它能管理 Python Package Index 中的那些软件包。如下方式在 Linux 和类 Unix 系统中通用:
|
||||
|
||||
$ sudo pip install ansible
|
||||
|
||||
#### 从源代码安装最新版本的 ansible ####
|
||||
|
||||
你可以通过如下命令从 github 中安装最新版本:
|
||||
|
||||
$ cd ~
|
||||
$ git clone git://github.com/ansible/ansible.git
|
||||
$ cd ./ansible
|
||||
$ source ./hacking/env-setup
|
||||
|
||||
当你从一个 git checkout 中运行 ansible 的时候,请记住你每次用它之前都需要设置你的环境,或者你可以把这个设置过程加入你的 bash rc 文件中:
|
||||
|
||||
# 加入 BASH RC
|
||||
$ echo "export ANSIBLE_HOSTS=~/ansible_hosts" >> ~/.bashrc
|
||||
$ echo "source ~/ansible/hacking/env-setup" >> ~/.bashrc
|
||||
|
||||
ansible 的 hosts 文件包括了一系列它能操作的主机。默认情况下 ansible 通过路径 /etc/ansible/hosts 查找 hosts 文件,不过这个行为也是可以更改的,这样当你想操作不止一个 ansible 或者针对不同的数据中心的不同客户操作的时候也是很方便的。你可以通过命令行参数 -i 指定 hosts 文件:
|
||||
|
||||
$ ansible all -m shell -a "hostname" --ask-pass -i /etc/some/other/dir/ansible_hosts
|
||||
|
||||
不过我更倾向于使用一个环境变量,这可以在你想要通过 source 一个不同的文件来切换工作目标的时候起到作用。这里的环境变量是 $ANSIBLE_HOSTS,可以这样设置:
|
||||
|
||||
$ export ANSIBLE_HOSTS=~/ansible_hosts
|
||||
|
||||
一旦所有需要的组件都已经安装完毕,而且你也准备好了你的 hosts 文件,你就可以来试一试它了。为了快速测试,这里我把 127.0.0.1 写到了 ansible 的 hosts 文件里:
|
||||
|
||||
$ echo "127.0.0.1" > ~/ansible_hosts
|
||||
|
||||
现在来测试一个简单的 ping:
|
||||
|
||||
$ ansible all -m ping
|
||||
|
||||
或者提示 ssh 密码:
|
||||
|
||||
$ ansible all -m ping --ask-pass
|
||||
|
||||
我在刚开始的设置中遇到过几次问题,因此这里强烈推荐为 ansible 设置 SSH 公钥认证。不过在刚刚的测试中我们使用了 --ask-pass,在一些机器上你会需要[安装 sshpass][5] 或者像这样指定 -c paramiko:
|
||||
|
||||
$ ansible all -m ping --ask-pass -c paramiko
|
||||
|
||||
当然你也可以[安装 sshpass][6],然而 sshpass 并不总是在标准的仓库中提供,因此 paramiko 可能更为简单。
|
||||
|
||||
### 设置 SSH 公钥认证 ###
|
||||
|
||||
于是我们有了一份配置,以及一些基础的其他东西。现在让我们来做一些实用的事情。ansible 的强大很大程度上体现在 playbooks 上,后者基本上就是一些写好的 ansible 脚本(大部分来说),不过在制作一个 playbook 之前,我们将先从一些一句话脚本开始。现在让我们创建和配置 SSH 公钥认证,以便省去 -c 和 --ask-pass 选项:
|
||||
|
||||
$ ssh-keygen -t rsa
|
||||
|
||||
样例输出:
|
||||
|
||||
Generating public/private rsa key pair.
|
||||
Enter file in which to save the key (/home/mike/.ssh/id_rsa):
|
||||
Enter passphrase (empty for no passphrase):
|
||||
Enter same passphrase again:
|
||||
Your identification has been saved in /home/mike/.ssh/id_rsa.
|
||||
Your public key has been saved in /home/mike/.ssh/id_rsa.pub.
|
||||
The key fingerprint is:
|
||||
94:a0:19:02:ba:25:23:7f:ee:6c:fb:e8:38:b4:f2:42 mike@ultrabook.linuxdork.com
|
||||
The key's randomart image is:
|
||||
+--[ RSA 2048]----+
|
||||
|... . . |
|
||||
|. . + . . |
|
||||
|= . o o |
|
||||
|.* . |
|
||||
|. . . S |
|
||||
| E.o |
|
||||
|.. .. |
|
||||
|o o+.. |
|
||||
| +o+*o. |
|
||||
+-----------------+
|
||||
|
||||
现在显然有很多种方式来把它放到远程主机上应该的位置。不过既然我们正在使用 ansible,就用它来完成这个操作吧:
|
||||
|
||||
$ ansible all -m copy -a "src=/home/mike/.ssh/id_rsa.pub dest=/tmp/id_rsa.pub" --ask-pass -c paramiko
|
||||
|
||||
样例输出:
|
||||
|
||||
SSH password:
|
||||
127.0.0.1 | success >> {
|
||||
"changed": true,
|
||||
"dest": "/tmp/id_rsa.pub",
|
||||
"gid": 100,
|
||||
"group": "users",
|
||||
"md5sum": "bafd3fce6b8a33cf1de415af432774b4",
|
||||
"mode": "0644",
|
||||
"owner": "mike",
|
||||
"size": 410,
|
||||
"src": "/home/mike/.ansible/tmp/ansible-tmp-1407008170.46-208759459189201/source",
|
||||
"state": "file",
|
||||
"uid": 1000
|
||||
}
|
||||
|
||||
下一步,把公钥文件添加到远程服务器里。输入:
|
||||
|
||||
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko
|
||||
|
||||
样例输出:
|
||||
|
||||
SSH password:
|
||||
127.0.0.1 | FAILED | rc=1 >>
|
||||
/bin/sh: /root/.ssh/authorized_keys: Permission denied
|
||||
|
||||
矮油,我们需要用 root 来执行这个命令,所以还是加上一个 -u 参数吧:
|
||||
|
||||
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko -u root
|
||||
|
||||
样例输出:
|
||||
|
||||
SSH password:
|
||||
127.0.0.1 | success | rc=0 >>
|
||||
|
||||
请注意,我刚才只是想演示如何通过 ansible 来传输文件。事实上 ansible 有一个更加方便的内置 SSH 密钥管理支持:
|
||||
|
||||
$ ansible all -m authorized_key -a "user=mike key='{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}' path=/home/mike/.ssh/authorized_keys manage_dir=no" --ask-pass -c paramiko
|
||||
|
||||
样例输出:
|
||||
|
||||
SSH password:
|
||||
127.0.0.1 | success >> {
|
||||
"changed": true,
|
||||
"gid": 100,
|
||||
"group": "users",
|
||||
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCq+Z8/usprXk0aCAPyP0TGylm2MKbmEsHePUOd7p5DO1QQTHak+9gwdoJJavy0yoUdi+C+autKjvuuS+vGb8+I+8mFNu5CvKiZzIpMjZvrZMhHRdNud7GuEanusTEJfi1pUd3NA2iXhl4a6S9a/4G2mKyf7QQSzI4Z5ddudUXd9yHmo9Yt48/ASOJLHIcYfSsswOm8ux1UnyeHqgpdIVONVFsKKuSNSvZBVl3bXzhkhjxz8RMiBGIubJDBuKwZqNSJkOlPWYN76btxMCDVm07O7vNChpf0cmWEfM3pXKPBq/UBxyG2MgoCGkIRGOtJ8UjC/daadBUuxg92/u01VNEB mike@ultrabook.linuxdork.com",
|
||||
"key_options": null,
|
||||
"keyfile": "/home/mike/.ssh/authorized_keys",
|
||||
"manage_dir": false,
|
||||
"mode": "0600",
|
||||
"owner": "mike",
|
||||
"path": "/home/mike/.ssh/authorized_keys",
|
||||
"size": 410,
|
||||
"state": "file",
|
||||
"uid": 1000,
|
||||
"unique": false,
|
||||
"user": "mike"
|
||||
}
|
||||
|
||||
现在这些密钥已经设置好了。我们来试着随便跑一个命令,比如 hostname,希望我们不会被提示要输入密码
|
||||
|
||||
$ ansible all -m shell -a "hostname" -u root
|
||||
|
||||
样例输出:
|
||||
|
||||
127.0.0.1 | success | rc=0 >>
|
||||
|
||||
成功!!!现在我们可以用 root 来执行命令,并且不会被输入密码的提示干扰了。我们现在可以轻易地配置任何在 ansible hosts 文件中的主机了。让我们把 /tmp 中的公钥文件删除:
|
||||
|
||||
$ ansible all -m file -a "dest=/tmp/id_rsa.pub state=absent" -u root
|
||||
|
||||
样例输出:
|
||||
|
||||
127.0.0.1 | success >> {
|
||||
"changed": true,
|
||||
"path": "/tmp/id_rsa.pub",
|
||||
"state": "absent"
|
||||
}
|
||||
|
||||
下面我们来做一些更复杂的事情,我要确定一些软件包已经安装了,并且已经是最新的版本:
|
||||
|
||||
$ ansible all -m zypper -a "name=apache2 state=latest" -u root
|
||||
|
||||
样例输出:
|
||||
|
||||
127.0.0.1 | success >> {
|
||||
"changed": false,
|
||||
"name": "apache2",
|
||||
"state": "latest"
|
||||
}
|
||||
|
||||
很好,我们刚才放在 /tmp 中的公钥文件已经消失了,而且我们已经安装好了最新版的 apache。下面我们来看看前面命令中的 -m zypper,一个让 ansible 非常灵活,并且给了 playbooks 更多能力的功能。如果你不使用 openSuse 或者 Suse enterprise 你可能还不熟悉 zypper, 它基本上就是 suse 世界中相当于 yum 的存在。在上面所有的例子中,我的 hosts 文件中都只有一台机器。除了最后一个命令外,其他所有命令都应该在任何标准的 *nix 系统和标准的 ssh 配置中使用,这造成了一个问题。如果我们想要同时管理多种不同的机器呢?这便是 playbooks 和 ansible 的可配置性闪闪发光的地方了。首先我们来少许修改一下我们的 hosts 文件:
|
||||
|
||||
$ cat ~/ansible_hosts
|
||||
|
||||
样例输出:
|
||||
|
||||
[RHELBased]
|
||||
10.50.1.33
|
||||
10.50.1.47
|
||||
|
||||
[SUSEBased]
|
||||
127.0.0.1
|
||||
|
||||
首先,我们创建了一些分组的服务器,并且给了他们一些有意义的标签。然后我们来创建一个为不同类型的服务器执行不同操作的 playbook。你可能已经发现这个 yaml 的数据结构和我们之前运行的命令行语句中的相似性了。简单来说,-m 是一个模块,而 -a 用来提供模块参数。在 YAML 表示中你可以先指定模块,然后插入一个冒号 :,最后指定参数。
|
||||
|
||||
---
|
||||
- hosts: SUSEBased
|
||||
remote_user: root
|
||||
tasks:
|
||||
- zypper: name=apache2 state=latest
|
||||
- hosts: RHELBased
|
||||
remote_user: root
|
||||
tasks:
|
||||
- yum: name=httpd state=latest
|
||||
|
||||
现在我们有一个简单的 playbook 了,我们可以这样运行它:
|
||||
|
||||
$ ansible-playbook testPlaybook.yaml -f 10
|
||||
|
||||
样例输出:
|
||||
|
||||
PLAY [SUSEBased] **************************************************************
|
||||
|
||||
GATHERING FACTS ***************************************************************
|
||||
ok: [127.0.0.1]
|
||||
|
||||
TASK: [zypper name=apache2 state=latest] **************************************
|
||||
ok: [127.0.0.1]
|
||||
|
||||
PLAY [RHELBased] **************************************************************
|
||||
|
||||
GATHERING FACTS ***************************************************************
|
||||
ok: [10.50.1.33]
|
||||
ok: [10.50.1.47]
|
||||
|
||||
TASK: [yum name=httpd state=latest] *******************************************
|
||||
changed: [10.50.1.33]
|
||||
changed: [10.50.1.47]
|
||||
|
||||
PLAY RECAP ********************************************************************
|
||||
10.50.1.33 : ok=2 changed=1 unreachable=0 failed=0
|
||||
10.50.1.47 : ok=2 changed=1 unreachable=0 failed=0
|
||||
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
|
||||
|
||||
注意,你会看到 ansible 联系到的每一台机器的输出。-f 参数让 ansible 在多台主机上同时运行指令。除了指定全部主机,或者一个主机分组的名字以外,你还可以把导入 ssh 公钥的操作从命令行里转移到 playbook 中,这将在设置新主机的时候提供很大的方便,甚至让新主机直接可以运行一个 playbook。为了演示,我们把我们之前的公钥例子放进一个 playbook 里:
|
||||
|
||||
---
|
||||
- hosts: SUSEBased
|
||||
remote_user: mike
|
||||
sudo: yes
|
||||
tasks:
|
||||
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
|
||||
- hosts: RHELBased
|
||||
remote_user: mdonlon
|
||||
sudo: yes
|
||||
tasks:
|
||||
- authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
|
||||
|
||||
除此之外还有很多可以做的事情,比如在启动的时候把公钥配置好,或者引入其他的流程来让你按需配置一些机器。不过只要 SSH 被配置成接受密码登陆,这些几乎可以用在所有的流程中。在你准备开始写太多 playbook 之前,另一个值得考虑的事情是,代码管理可以有效节省你的时间。机器需要不断变化,然而你并不需要在每次机器发生变化时都重新写一个 playbook,只需要更新相关的部分并提交这些修改。与此相关的另一个好处是,如同我之前所述,你可以从不同的地方管理你的整个基础结构。你只需要将你的 playbook 仓库 git clone 到新的机器上,就完成了管理所有东西的全部设置流程。
|
||||
|
||||
#### 现实中的 ansible 例子 ####
|
||||
|
||||
我知道很多用户经常使用 pastebin 这样的服务,以及很多公司基于显而易见的理由配置了他们内部使用的类似东西。最近,我遇到了一个叫做 showterm 的程序,巧合之下我被一个客户要求配置它用于内部使用。这里我不打算赘述这个应用程序的细节,不过如果你感兴趣的话,你可以使用 Google 搜索 showterm。作为一个合理的现实中的例子,我将会试图配置一个 showterm 服务器,并且配置使用它所需要的客户端应用程序。在这个过程中我们还需要一个数据库服务器。现在我们从配置客户端开始:
|
||||
|
||||
---
|
||||
- hosts: showtermClients
|
||||
remote_user: root
|
||||
tasks:
|
||||
- yum: name=rubygems state=latest
|
||||
- yum: name=ruby-devel state=latest
|
||||
- yum: name=gcc state=latest
|
||||
- gem: name=showterm state=latest user_install=no
|
||||
|
||||
这部分很简单。下面是主服务器:
|
||||
|
||||
---
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: ensure packages are installed
|
||||
yum: name={{item}} state=latest
|
||||
with_items:
|
||||
- postgresql
|
||||
- postgresql-server
|
||||
- postgresql-devel
|
||||
- python-psycopg2
|
||||
- git
|
||||
- ruby21
|
||||
- ruby21-passenger
|
||||
- name: showterm server from github
|
||||
git: repo=https://github.com/ConradIrwin/showterm.io dest=/root/showterm
|
||||
- name: Initdb
|
||||
command: service postgresql initdb
|
||||
creates=/var/lib/pgsql/data/postgresql.conf
|
||||
|
||||
- name: Start PostgreSQL and enable at boot
|
||||
service: name=postgresql
|
||||
enabled=yes
|
||||
state=started
|
||||
- gem: name=pg state=latest user_install=no
|
||||
handlers:
|
||||
- name: restart postgresql
|
||||
service: name=postgresql state=restarted
|
||||
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
sudo: yes
|
||||
sudo_user: postgres
|
||||
vars:
|
||||
dbname: showterm
|
||||
dbuser: showterm
|
||||
dbpassword: showtermpassword
|
||||
tasks:
|
||||
- name: create db
|
||||
postgresql_db: name={{dbname}}
|
||||
|
||||
- name: create user with ALL priv
|
||||
postgresql_user: db={{dbname}} name={{dbuser}} password={{dbpassword}} priv=ALL
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: database.yml
|
||||
template: src=database.yml dest=/root/showterm/config/database.yml
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: run bundle install
|
||||
shell: bundle install
|
||||
args:
|
||||
chdir: /root/showterm
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: run rake db tasks
|
||||
shell: 'bundle exec rake db:create db:migrate db:seed'
|
||||
args:
|
||||
chdir: /root/showterm
|
||||
- hosts: showtermServers
|
||||
remote_user: root
|
||||
tasks:
|
||||
- name: apache config
|
||||
template: src=showterm.conf dest=/etc/httpd/conf.d/showterm.conf
|
||||
|
||||
还凑合。请注意,从某种意义上来说这是一个任意选择的程序,然而我们现在已经可以持续地在任意数量的机器上部署它了,这便是配置管理的好处。此外,在大多数情况下这里的定义语法几乎是不言而喻的,wiki 页面也就不需要加入太多细节了。当然在我的观点里,一个有太多细节的 wiki 页面绝不会是一件坏事。
|
||||
|
||||
### 扩展配置 ###
|
||||
|
||||
我们并没有涉及到这里所有的细节。Ansible 有许多选项可以用来配置你的系统。你可以在你的 hosts 文件中内嵌变量,而 ansible 将会把它们应用到远程节点。如:
|
||||
|
||||
[RHELBased]
|
||||
10.50.1.33 http_port=443
|
||||
10.50.1.47 http_port=80 ansible_ssh_user=mdonlon
|
||||
|
||||
[SUSEBased]
|
||||
127.0.0.1 http_port=443
|
||||
|
||||
尽管这对于快速配置来说已经非常方便,你还可以将变量分成存放在 yaml 格式的多个文件中。在你的 hosts 文件路径里,你可以创建两个子目录 group_vars 和 host_vars。在这些路径里放置的任何文件,只要能对得上一个主机分组的名字,或者你的 hosts 文件中的一个主机名,它们都会在运行时被插入进来。所以前面的一个例子将会变成这样:
|
||||
|
||||
ultrabook:/etc/ansible # pwd
|
||||
/etc/ansible
|
||||
ultrabook:/etc/ansible # tree
|
||||
.
|
||||
├── group_vars
|
||||
│ ├── RHELBased
|
||||
│ └── SUSEBased
|
||||
├── hosts
|
||||
└── host_vars
|
||||
├── 10.50.1.33
|
||||
└── 10.50.1.47
|
||||
|
||||
----------
|
||||
|
||||
2 directories, 5 files
|
||||
ultrabook:/etc/ansible # cat hosts
|
||||
[RHELBased]
|
||||
10.50.1.33
|
||||
10.50.1.47
|
||||
|
||||
----------
|
||||
|
||||
[SUSEBased]
|
||||
127.0.0.1
|
||||
ultrabook:/etc/ansible # cat group_vars/RHELBased
|
||||
ultrabook:/etc/ansible # cat group_vars/SUSEBased
|
||||
---
|
||||
http_port: 443
|
||||
ultrabook:/etc/ansible # cat host_vars/10.50.1.33
|
||||
---
|
||||
http_port: 443
|
||||
ultrabook:/etc/ansible # cat host_vars/10.50.1.47
|
||||
---
|
||||
http_port:80
|
||||
ansible_ssh_user: mdonlon
|
||||
|
||||
### 改善 Playbooks ###
|
||||
|
||||
组织 playbooks 也已经有很多种现成的方式。在前面的例子中我们用了一个单独的文件,因此这方面被大幅地简化了。组织这些文件的一个常用方式是创建角色。简单来说,你将一个主文件加载为你的 playbook,而它将会从其它文件中导入所有的数据,这些其他的文件便是角色。举例来说,如果你有了一个 wordpress 网站,你需要一个 web 前端,和一个数据库。web 前端将包括一个 web 服务器,应用程序代码,以及任何需要的模块。数据库有时候运行在同一台主机上,有时候运行在远程的主机上,这时候角色就可以派上用场了。你创建一个目录,并对每个角色创建对应的小 playbook。在这个例子中我们需要一个 apache 角色,mysql 角色,wordpress 角色,mod_php,以及 php 角色。最大的好处是,并不是每个角色都必须被应用到同一台机器上。在这个例子中,mysql 可以被应用到一台单独的机器。这同样为代码重用提供了可能,比如你的 apache 角色还可以被用在 python 和其他相似的 php 应用程序中。展示这些已经有些超出了本文的范畴,而且做一件事总是有很多不同的方式,我建议搜索一些 ansible 的 playbook 例子。有很多人在 github 上贡献代码,当然还有其他一些网站。
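虽然完整展示角色超出了本文的范畴,这里还是给出一个最小的目录结构示意(apache、mysql、wordpress 这些角色名和 webservers、dbservers 这些主机组名都只是假设的例子;tasks/main.yml 则是 ansible 角色的固定约定):

    site.yml              # 主 playbook,按主机组引用各个角色
    roles/
    ├── apache/
    │   └── tasks/main.yml
    ├── mysql/
    │   └── tasks/main.yml
    └── wordpress/
        └── tasks/main.yml

对应的 site.yml 大致如下:

    ---
    - hosts: webservers
      remote_user: root
      roles:
        - apache
        - wordpress

    - hosts: dbservers
      remote_user: root
      roles:
        - mysql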
|
||||
|
||||
### 模块 ###
|
||||
|
||||
在 ansible 中,对于所有完成的工作,幕后的工作都是由模块主导的。Ansible 有一个非常丰富的内置模块仓库,其中包括软件包安装,文件传输,以及我们在本文中做的所有事情。但是对一部分人来说,这些并不能满足他们的配置需求,ansible 也提供了方法让你添加自己的模块。Ansible 的 API 有一个非常棒的事情是,它并没有限制模块也必须用编写它的语言 Python 来编写,也就是说,你可以用任何语言来编写模块。Ansible 模块通过传递 JSON 数据来工作,因此你只需要用想用的语言生成一段 JSON 数据。我很确定任何脚本语言都可以做到这一点,因此你现在就可以开始写点什么了。在 Ansible 的网站上有很多的文档,包括模块的接口是如何工作的,以及 Github 上也有很多模块的例子。注意一些小众的语言可能没有很好的支持,不过那只可能是因为没有多少人在用这种语言贡献代码。试着写点什么,然后把你的结果发布出来吧!
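例如,下面是一个用 bash 写的最小模块示意(模块名 hello 是假设的;对于非 Python 写的旧式模块,ansible 会把参数文件的路径作为第一个参数传入,模块只需向标准输出打印一段 JSON):

    #!/bin/bash
    # ansible 会把参数文件的路径作为 $1 传入,这个最小示例不需要参数,直接忽略
    # 模块的全部输出就是一段 JSON
    echo "{\"changed\": false, \"msg\": \"hello from $(hostname)\"}"

把它保存为 playbook 旁边的 library/hello 并加上可执行权限后,就可以像内置模块一样用 -m hello 来调用了(library/ 目录是 ansible 查找自定义模块的默认位置之一)。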
|
||||
|
||||
### 总结 ###
|
||||
|
||||
总的来说,虽然在配置管理方面已经有很多解决方案,我希望本文能显示出 ansible 简单的设置过程,在我看来这是它最重要的一个要点。请注意,因为我试图展示做一件事的不同方式,所以并不是前文中所有的例子都是适用于你的个别环境或者对于普遍情况的最佳实践。这里有一些链接能让你对 ansible 的了解进入下一个层次:
|
||||
|
||||
- [Ansible 项目][7]主页.
|
||||
- [Ansible 项目文档][8].
|
||||
- [多级环境与 Ansible][9].
|
||||
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.cyberciti.biz/python-tutorials/linux-tutorial-install-ansible-configuration-management-and-it-automation-tool/
|
||||
|
||||
作者:[Nix Craft][a]
|
||||
译者:[felixonmars](https://github.com/felixonmars)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.cyberciti.biz/tips/about-us
|
||||
[1]:http://www.cyberciti.biz/faq/fedora-sl-centos-redhat6-enable-epel-repo/
|
||||
[2]:http://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/
|
||||
[3]:http://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html
|
||||
[4]:http://www.cyberciti.biz/faq/debian-ubuntu-centos-rhel-linux-install-pipclient/
|
||||
[5]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
|
||||
[6]:http://www.cyberciti.biz/faq/noninteractive-shell-script-ssh-password-provider/
|
||||
[7]:http://www.ansible.com/
|
||||
[8]:http://docs.ansible.com/
|
||||
[9]:http://rosstuck.com/multistage-environments-with-ansible/
|
@ -0,0 +1,30 @@
|
||||
Linux 有问必答-- 如何在Perl中捕捉并处理信号
|
||||
================================================================================
|
||||
> **提问**: 我需要通过使用Perl的自定义信号处理程序来处理一个中断信号。在一般情况下,我怎么在Perl程序中捕获并处理各种信号(如INT,TERM)?
|
||||
|
||||
作为POSIX标准中的异步通知机制,信号是操作系统发送给进程、用来通知它发生了某个事件的消息。当信号产生时,操作系统会中断目标程序的执行,并把该信号交给该进程的信号处理程序处理。任何人都可以定义并注册自定义的信号处理程序,或者依赖默认的信号处理程序。
|
||||
|
||||
在Perl中,信号可以通过一个全局的%SIG哈希变量来捕获和处理。%SIG以信号名为键,其值是对应信号处理程序的引用。因此,如果你想为特定的信号定义自定义处理程序,直接更新%SIG中该信号对应的值即可。
|
||||
|
||||
下面的代码段使用自定义信号处理程序来处理中断(INT)和终止(TERM)信号。
|
||||
|
||||
$SIG{INT} = \&signal_handler;
|
||||
$SIG{TERM} = \&signal_handler;
|
||||
|
||||
sub signal_handler {
|
||||
print "This is a custom signal handler\n";
|
||||
die "Caught a signal $!";
|
||||
}
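可以这样简单地测试(假设把上面的代码保存成了 signal_test.pl,并在脚本末尾加了一个 sleep 之类的语句让它保持运行):

    $ perl signal_test.pl &
    $ kill -INT %1    # 向后台作业发送 INT 信号,触发自定义处理程序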
|
||||
|
||||

|
||||
|
||||
%SIG哈希中其他有效的值还有'IGNORE'和'DEFAULT'。当所赋的值是'IGNORE'(例如,$SIG{CHLD}='IGNORE')时,相应的信号将被忽略;赋值为'DEFAULT'(例如,$SIG{HUP}='DEFAULT')则意味着使用默认的信号处理程序。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/catch-handle-interrupt-signal-perl.html
|
||||
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
@ -0,0 +1,69 @@
|
||||
Linux有问必答 -- 如何在CentOS7上改变网络接口名
|
||||
================================================================================
|
||||
> **提问**: 在CentOS 7中,我想将分配的网络接口名更改为别的名字。有什么合适的方法来重命名CentOS或RHEL 7的网络接口?
|
||||
|
||||
传统上,Linux的网络接口被枚举为eth[0123...],但这些名称并不一定对应实际的硬件插槽、PCI位置、USB接口数量等,这带来了不可预知的命名问题(例如,由于不确定的设备探测行为),可能会导致各种网络配置错误(例如,由无意的接口改名引起的接口禁用或者防火墙旁路)。基于MAC地址的udev规则在虚拟化环境中也并不可靠,因为那里的MAC地址可能像端口数量一样变化无常。
|
||||
|
||||
CentOS/RHEL 6引入了[一致和可预测的网络设备命名][1]方法。这一特性可以唯一地确定网络接口的名称,使定位和区分设备更容易,并且即使经历重启、时间推移和硬件变更,接口名也能保持不变。然而,这种命名规则在CentOS/RHEL 6上并不是默认开启的。
|
||||
|
||||
从CentOS/RHEL 7起,这种可预测的命名规则成为了默认。根据这一规则,接口名称基于固件、拓扑结构和位置信息自动确定。现在,即使添加或移除网络设备,接口名称仍然保持固定,无需重新枚举,坏掉的硬件也可以无缝替换。
|
||||
|
||||
* 基于接口类型的两个字母前缀:
* en -- 以太网
* sl -- 串行线路IP (slip)
* wl -- wlan
* ww -- wwan

* 名称类型:
* b<number> -- BCMA总线核心编号
* ccw<name> -- CCW总线组名
* o<index> -- 板载设备的索引号
* s<slot>[f<function>][d<dev_port>] -- 热插拔插槽索引号
* x<MAC> -- MAC 地址
* [P<domain>]p<bus>s<slot>[f<function>][d<dev_port>] -- PCI 位置
* [P<domain>]p<bus>s<slot>[f<function>][u<port>][..][c<config>][i<interface>] -- USB端口号链
|
||||
|
||||
新命名方案的一个小缺点是,接口名称相比传统名称有点难以阅读。例如,你可能会看到像enp0s3这样的名字。再者,你也无法再自己控制接口名了。
|
||||
|
||||

|
||||
|
||||
如果由于某种原因你更喜欢旧的方式,希望能够给CentOS/RHEL 7上的设备分配任意名称,你就需要覆盖默认的可预测命名规则,并定义基于MAC地址的udev规则。
|
||||
|
||||
**下面是如何在CentOS或RHEL7命名网络接口。**
|
||||
|
||||
首先,让我们禁用可预测命名规则。为此,你可以在启动时传递“net.ifnames=0”的内核参数。这是通过编辑/etc/default/grub并把“net.ifnames=0”加入到GRUB_CMDLINE_LINUX变量中来实现的。
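修改后的那一行大致如下(引号内的其他参数请保留你系统上原有的内容,只需在末尾追加 net.ifnames=0,下面的其余参数只是示意):

    GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rhgb quiet net.ifnames=0"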
|
||||
|
||||

|
||||
|
||||
然后运行这条命令来重新生成GRUB配置并更新内核参数。
|
||||
|
||||
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
|
||||
|
||||

|
||||
|
||||
接下来,编辑(或创建)udev的网络命名规则文件(/etc/udev/rules.d/70-persistent-net.rules),并添加下面一行,把其中的MAC地址和接口名换成你自己想要的。
|
||||
|
||||
$ sudo vi /etc/udev/rules.d/70-persistent-net.rules
|
||||
|
||||
----------
|
||||
|
||||
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:a9:7a:e1", ATTR{type}=="1", KERNEL=="eth*", NAME="sushi"
|
||||
|
||||
最后,重启电脑并验证新的接口名。
|
||||
|
||||

|
||||
|
||||
请注意,重命名之后,配置接口仍然是你自己的责任。如果网络配置(例如,IPv4设置、防火墙规则)是基于旧名称(变更前)的,则需要更新网络配置以反映更改后的名称。
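例如,在基于 Red Hat 的系统上,接口配置文件大致需要这样同步改名(沿用上面改名为 sushi 的例子,旧接口名 eth0 只是假设):

    # cd /etc/sysconfig/network-scripts
    # mv ifcfg-eth0 ifcfg-sushi
    # sed -i 's/^DEVICE=.*/DEVICE=sushi/' ifcfg-sushi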
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/change-network-interface-name-centos7.html
|
||||
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/appe-Consistent_Network_Device_Naming.html
|
@ -0,0 +1,53 @@
|
||||
Linux有问必答-- 如何用Perl检测Linux的发行版本
|
||||
================================================================================
|
||||
> **提问**:我需要写一个Perl程序,它会包含Linux发行版相关的代码。为此,Perl程序需要能够自动检测运行中的Linux的发行版(如Ubuntu、CentOS、Debian、Fedora等等),以及它是什么版本号。如何用Perl检测Linux的发行版本?
|
||||
|
||||
如果要用Perl脚本检测Linux的发行版,你可以使用一个名为[Linux::Distribution][1]的Perl模块。该模块通过检查/etc/lsb-release以及/etc下其他发行版特有的文件和目录来猜测底层的Linux操作系统。它支持检测所有主要的Linux发行版,包括Fedora、CentOS、Arch Linux、Debian、Ubuntu、SUSE、Red Hat、Gentoo、Slackware、Knoppix和Mandrake。
|
||||
|
||||
要在Perl中使用这个模块,你首先需要安装它。
|
||||
|
||||
### 在Debian或者Ubuntu上安装 Linux::Distribution ###
|
||||
|
||||
基于Debian的系统直接用apt-get安装
|
||||
|
||||
$ sudo apt-get install liblinux-distribution-packages-perl
|
||||
|
||||
### 在Fedora、CentOS 或者RHEL上安装 Linux::Distribution ###
|
||||
|
||||
如果你的Linux没有Linux::Distribution模块的安装包(如基于红帽的系统),你可以使用CPAN来构建。
|
||||
|
||||
首先确保你的Linux系统安装了CPAN
|
||||
|
||||
$ sudo yum -y install perl-CPAN
|
||||
|
||||
使用这条命令来构建并安装模块:
|
||||
|
||||
$ sudo perl -MCPAN -e 'install Linux::Distribution'
|
||||
|
||||
### 用Perl确定Linux发行版 ###
|
||||
|
||||
Linux::Distribution模块安装完成之后,你可以使用下面的代码片段来确定你运行的Linux发行版本。
|
||||
|
||||
use Linux::Distribution qw(distribution_name distribution_version);
|
||||
|
||||
my $linux = Linux::Distribution->new;
|
||||
|
||||
if ($linux) {
|
||||
my $distro = $linux->distribution_name();
|
||||
my $version = $linux->distribution_version();
|
||||
print "Distro: $distro $version\n";
|
||||
}
|
||||
else {
|
||||
print "Distro: unknown\n";
|
||||
}
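把上面的代码保存为脚本运行(脚本名是假设的),在 Ubuntu 上的输出类似下面这样(具体内容取决于你的系统):

    $ perl detect_distro.pl
    Distro: ubuntu 14.04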
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/detect-linux-distribution-in-perl.html
|
||||
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:https://metacpan.org/pod/Linux::Distribution
|
@ -0,0 +1,39 @@
|
||||
Linux有问必答-- 如何在PDF中嵌入LaTex中的所有字体
|
||||
================================================================================
|
||||
> **提问**: 我通过编译LaTex源文件生成了一份PDF文档。然而,我注意到,并不是所有字体都嵌入到了PDF文档中。我怎样才能确保所有的字体嵌入在由LaTex生成的PDF文档中?
|
||||
|
||||
当你创建PDF文件时,在文件中嵌入字体是一个好主意。如果不嵌入字体,PDF阅读器可能会在计算机上缺少对应字体时用其他字体代替,这会导致同一份文档在不同的PDF阅读器或操作系统平台上呈现出不同的样式。而当你打印文档时,缺失的字体更是个问题。
|
||||
|
||||
当你从LaTeX生成PDF文档时(例如用pdflatex或dvipdfm),可能并不是所有的字体都被嵌入到了PDF文档中。例如,下面[pdffonts][1]的输出就提示PDF文档中缺少字体(如Helvetica)。
|
||||
|
||||

|
||||
|
||||
为了避免这样的问题,下面介绍在编译LaTeX文档时如何嵌入所有字体。
|
||||
|
||||
$ latex document.tex
|
||||
$ dvips -Ppdf -G0 -t letter -o document.ps document.dvi
|
||||
$ ps2pdf -dPDFSETTINGS=/prepress \
|
||||
-dCompatibilityLevel=1.4 \
|
||||
-dAutoFilterColorImages=false \
|
||||
-dAutoFilterGrayImages=false \
|
||||
-dColorImageFilter=/FlateEncode \
|
||||
-dGrayImageFilter=/FlateEncode \
|
||||
-dMonoImageFilter=/FlateEncode \
|
||||
-dDownsampleColorImages=false \
|
||||
-dDownsampleGrayImages=false \
|
||||
document.ps document.pdf
|
||||
|
||||
现在你可以看到所有的字体都被嵌入到PDF中了。
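生成之后,可以再次用 pdffonts 检查,emb 一列应该全部显示 yes:

    $ pdffonts document.pdf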
|
||||
|
||||

|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/embed-all-fonts-pdf-document-latex.html
|
||||
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://ask.xmodulo.com/check-which-fonts-are-used-pdf-document.html
|
@ -0,0 +1,215 @@
|
||||
在Linux中使用Openswan搭建站点到站点IPsec VPN 隧道
|
||||
================================================================================
|
||||
虚拟私有网络(VPN)隧道通过Internet隧道技术将两个不同地理位置的网络安全地连接起来。当两端都是使用私有IP地址的局域网时,两个网络之间本来是无法直接互相访问的,这时使用隧道技术就可以使子网间的主机进行通讯。例如,VPN隧道技术经常被用于连接大型机构中不同办公区域的子网。
|
||||
|
||||
有时,使用VPN隧道仅仅是因为它很安全。服务提供商与公司会采用这样一种方式设计网络:将重要的服务器(如数据库、VoIP、银行服务器)放置到一个子网内,仅允许有权限的用户通过VPN隧道进行访问。如果需要搭建一个安全的VPN隧道,通常会选用[IPsec][1],因为IPsec VPN隧道被多重安全层所保护。
|
||||
|
||||
这篇指导文章将会告诉你如何构建站点到站点的 VPN隧道。
|
||||
|
||||
### 拓扑结构 ###
|
||||
|
||||
这篇指导文章将按照以下的拓扑结构来构建一个IPsec隧道。
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
### 安装软件包以及准备VPN服务器 ###
|
||||
|
||||
一般情况下,你仅能管理A点,但是根据需求,你可能需要同时管理A点与B点。我们从安装Openswan软件开始。
|
||||
|
||||
基于Red Hat的系统(CentOS,Fedora,或RHEL):
|
||||
|
||||
# yum install openswan lsof
|
||||
|
||||
在基于Debian的系统(Debian,Ubuntu或Linux Mint):
|
||||
|
||||
# apt-get install openswan
|
||||
|
||||
现在,在服务器上禁用重定向功能(如果有的话),执行下列命令:
|
||||
|
||||
# for vpn in /proc/sys/net/ipv4/conf/*;
|
||||
# do echo 0 > $vpn/accept_redirects;
|
||||
# echo 0 > $vpn/send_redirects;
|
||||
# done
|
||||
|
||||
接下来,启用IP转发并且禁用重定向功能。
|
||||
|
||||
# vim /etc/sysctl.conf
|
||||
|
||||
----------
|
||||
|
||||
net.ipv4.ip_forward = 1
|
||||
net.ipv4.conf.all.accept_redirects = 0
|
||||
net.ipv4.conf.all.send_redirects = 0
|
||||
|
||||
重加载 /etc/sysctl.conf文件:
|
||||
|
||||
# sysctl -p
|
||||
|
||||
在防火墙中启用所需的端口,并保证不与系统当前的规则冲突。
|
||||
|
||||
# iptables -A INPUT -p udp --dport 500 -j ACCEPT
|
||||
# iptables -A INPUT -p tcp --dport 4500 -j ACCEPT
|
||||
# iptables -A INPUT -p udp --dport 4500 -j ACCEPT
|
||||
|
||||
最后,我们为NAT创建防火墙规则。
|
||||
|
||||
# iptables -t nat -A POSTROUTING -s site-A-private-subnet -d site-B-private-subnet -j SNAT --to site-A-Public-IP
|
||||
最后,确保这些防火墙规则能持久保存。
|
||||
|
||||
#### 注意: ####
|
||||
|
||||
- 你可以使用MASQUERADE替代SNAT(iptables)。理论上说它也能正常工作,但是有可能会与VPS发生冲突,所以我仍然建议使用SNAT。
|
||||
- 如果你同时在管理B点,那么在B点也设置同样的规则。
|
||||
- 直连路由则不需要SNAT。
|
||||
|
||||
### 准备配置文件 ###
|
||||
|
||||
我们要配置的第一个文件是ipsec.conf。不论你将要配置哪一台服务器,总是把你这端看成“左边”,而把远端看作“右边”。以下配置是在站点A的VPN服务器上做的。
|
||||
|
||||
# vim /etc/ipsec.conf
|
||||
|
||||
----------
|
||||
|
||||
## general configuration parameters ##
|
||||
|
||||
config setup
|
||||
plutodebug=all
|
||||
plutostderrlog=/var/log/pluto.log
|
||||
protostack=netkey
|
||||
nat_traversal=yes
|
||||
virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/16
|
||||
## disable opportunistic encryption in Red Hat ##
|
||||
oe=off
|
||||
|
||||
## disable opportunistic encryption in Debian ##
|
||||
## Note: this is a separate declaration statement ##
|
||||
include /etc/ipsec.d/examples/no_oe.conf
|
||||
|
||||
## connection definition in Red Hat ##
|
||||
conn demo-connection-redhat
|
||||
authby=secret
|
||||
auto=start
|
||||
ike=3des-md5
|
||||
## phase 1 ##
|
||||
keyexchange=ike
|
||||
## phase 2 ##
|
||||
phase2=esp
|
||||
phase2alg=3des-md5
|
||||
compress=no
|
||||
pfs=yes
|
||||
type=tunnel
|
||||
left=<siteA-public-IP>
|
||||
leftsourceip=<siteA-public-IP>
|
||||
leftsubnet=<siteA-private-subnet>/netmask
|
||||
## for direct routing ##
|
||||
leftsubnet=<siteA-public-IP>/32
|
||||
leftnexthop=%defaultroute
|
||||
right=<siteB-public-IP>
|
||||
rightsubnet=<siteB-private-subnet>/netmask
|
||||
|
||||
## connection definition in Debian ##
|
||||
conn demo-connection-debian
|
||||
authby=secret
|
||||
auto=start
|
||||
## phase 1 ##
|
||||
keyexchange=ike
|
||||
## phase 2 ##
|
||||
esp=3des-md5
|
||||
pfs=yes
|
||||
type=tunnel
|
||||
left=<siteA-public-IP>
|
||||
leftsourceip=<siteA-public-IP>
|
||||
leftsubnet=<siteA-private-subnet>/netmask
|
||||
## for direct routing ##
|
||||
leftsubnet=<siteA-public-IP>/32
|
||||
leftnexthop=%defaultroute
|
||||
right=<siteB-public-IP>
|
||||
rightsubnet=<siteB-private-subnet>/netmask
|
||||
|
||||
有许多方式实现身份验证。这里使用预共享密钥,并将它添加到文件 file /etc/ipsec.secrets。
|
||||
|
||||
# vim /etc/ipsec.secrets
|
||||
|
||||
----------
|
||||
|
||||
siteA-public-IP siteB-public-IP: PSK "pre-shared-key"
|
||||
## in case of multiple sites ##
|
||||
siteA-public-IP siteC-public-IP: PSK "corresponding-pre-shared-key"
|
||||
|
||||
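If you change a pre-shared key later, the secrets can be re-read without restarting the whole service; a small aside, assuming Openswan's standard ipsec auto interface:

    # ipsec auto --rereadsecrets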
### Starting the Service and Troubleshooting ###

The servers are now ready to create a site-to-site VPN tunnel. If you are managing site B as well, make sure that server B has been configured with the necessary parameters. For Red Hat based systems, also make sure the service is set to start on boot, using the chkconfig command.

    # /etc/init.d/ipsec restart
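For reference, enabling the service at boot might look like the following sketch (the Debian variant is an assumption for sysvinit-based setups):

    ## Red Hat: start ipsec at boot
    # chkconfig ipsec on

    ## Debian: register the init script with the default runlevels
    # update-rc.d ipsec defaults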

If there are no errors on either server, the tunnel should come up now. Keeping the following points in mind, you can test the tunnel with the ping command (a quick example follows the list below).

1. The site-B subnet is not reachable from site A, i.e., ping does not work while the tunnel is down.
1. Once the tunnel is up, ping a site-B subnet IP directly from site A. It should work.

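For example, a test from site A could look like the following sketch, using the article's placeholder convention (substitute real addresses); sourcing the ping from site A's private address forces the traffic through the tunnel:

    # ping -I <siteA-private-IP> <siteB-private-IP>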
Also, the route to the destination subnet should appear in the server's routing table. (Translator's note: the subnet refers to site B, and the server to site A.)

    # ip route

----------

    [siteB-private-subnet] via [siteA-gateway] dev eth0 src [siteA-public-IP]
    default via [siteA-gateway] dev eth0

In addition, we can check the state of the tunnel with the following commands.

    # service ipsec status

----------

    IPsec running - pluto pid: 20754
    pluto pid 20754
    1 tunnels up
    some eroutes exist

----------

    # ipsec auto --status

----------

    ## output truncated ##
    000 "demo-connection-debian": myip=<siteA-public-IP>; hisip=unset;
    000 "demo-connection-debian": ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0; nat_keepalive: yes
    000 "demo-connection-debian": policy: PSK+ENCRYPT+TUNNEL+PFS+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 32,28; interface: eth0;

    ## output truncated ##
    000 #184: "demo-connection-debian":500 STATE_QUICK_R2 (IPsec SA established); EVENT_SA_REPLACE in 1653s; newest IPSEC; eroute owner; isakmp#183; idle; import:not set

    ## output truncated ##
    000 #183: "demo-connection-debian":500 STATE_MAIN_I4 (ISAKMP SA established); EVENT_SA_REPLACE in 1093s; newest ISAKMP; lastdpd=-1s(seq in:0 out:0); idle; import:not set

The log file /var/log/pluto.log records information about authentication, key exchange and the different phases of the tunnel. If your tunnel fails to come up, check this file.

If you are sure all the configuration is correct but the tunnel still does not come up, check the following points (a quick port-reachability check is sketched after this list).

1. Many ISPs filter IPsec ports. Make sure your ISP allows UDP 500 and TCP/UDP 4500. You could try connecting to your server's IPsec ports from a remote host.
1. Make sure the necessary ports are allowed in the server's firewall rules.
1. Make sure the pre-shared keys are identical on both servers.
1. The left and right parameters must be properly configured on both servers.
1. If you are facing NAT issues, try using SNAT instead of MASQUERADING.

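For the first check, one way to probe those ports from a remote host is with netcat; a hedged sketch (a missing UDP response is not conclusive, since the check relies on ICMP errors coming back):

    $ nc -zv <siteA-public-IP> 4500
    $ nc -zvu <siteA-public-IP> 500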
To sum up, this tutorial focused on the procedure of creating a site-to-site IPsec VPN tunnel with Openswan. Admins can use a VPN to make important resources accessible only through the tunnel, which is effective in strengthening security. Also, a VPN ensures the data is protected from eavesdropping and interception.

Hope this helps. Let me know what you think.

--------------------------------------------------------------------------------

via: http://xmodulo.com/2014/08/create-site-to-site-ipsec-vpn-tunnel-openswan-linux.html

Author: [Sarmed Rahman][a]
Translator: [SPccman](https://github.com/SPccman)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).

[a]:http://xmodulo.com/author/sarmed
[1]:http://en.wikipedia.org/wiki/IPsec
[2]:https://www.openswan.org/
@ -0,0 +1,86 @@

Linux FAQ -- How to build a RPM or DEB package from source using checkinstall
================================================================================

> **Question**: I want to install a software program from source. Is there a way to build and install a package from the source instead of running "make install", so that I can easily uninstall the program later if I want to?

If you have installed a Linux program from source by running "make install", removing it completely becomes really tricky, unless the program's author provides an uninstall target in the Makefile. Otherwise, you would have to compare the complete list of files on your system before and after installing from source, and then manually remove every file that was added during the installation.

That is where checkinstall comes in handy. checkinstall keeps track of all the files created or modified by an install command line (e.g., "make install", "make install_modules", etc.) and builds a standard binary package, which you can install or uninstall with your distribution's standard package management system (e.g., yum on Red Hat or apt-get on Debian). According to the [official documentation][1], it is also known to work with Slackware, SuSe, Mandrake and Gentoo.

In this article, we focus on Red Hat and Debian based distributions only, and show how to build a RPM or DEB package from source using checkinstall.

### Install checkinstall on Linux ###

To install checkinstall on Debian derivatives:

    # aptitude install checkinstall

To install checkinstall on Red Hat based distributions, you need to download a pre-built checkinstall RPM (e.g., from [http://rpm.pbone.net][2]), as it has been removed from the Repoforge repository. The RPM built for CentOS 6 also works on CentOS 7.

    # wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/ikoinoba/CentOS_CentOS-6/x86_64/checkinstall-1.6.2-3.el6.1.x86_64.rpm
    # yum install checkinstall-1.6.2-3.el6.1.x86_64.rpm

Once checkinstall is installed, you can build a package for a particular piece of software using the following format:

    # checkinstall <install-command>

Without any argument, the default install command "make install" will be used.
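If you prefer to set the package metadata up front instead of answering prompts, checkinstall also accepts options for that; a sketch based on its documented flags (check checkinstall --help for the exact set your version supports):

    # checkinstall --pkgname=htop --pkgversion=1.0.3 --default make install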

### Build a RPM or DEB package with checkinstall ###

In this example, we will build a package for [htop][3], an interactive text-mode process viewer for Linux (like top on steroids).

First, let's download the source code from the project's official website. As a best practice, we store the source code in /usr/local/src, and untar it.

    # cd /usr/local/src
    # wget http://hisham.hm/htop/releases/1.0.3/htop-1.0.3.tar.gz
    # tar xzf htop-1.0.3.tar.gz
    # cd htop-1.0.3

Let's find out the install command for htop, so that we can invoke checkinstall with it. As shown below, htop is installed with "make install".

    # ./configure
    # make install

Therefore, to build a htop package, we can invoke checkinstall without any argument, which will build the package using "make install". Along the way, the checkinstall command will ask you a series of questions.

In short, these are the commands that build the **htop** package:

    # ./configure
    # checkinstall

Answer "Y" to the question "Should I create a default set of package docs?"

![]()

You can enter a brief description of the package, then press ENTER twice.

![]()

Enter a number to modify any of the values below, or press ENTER to continue.

![]()

Then checkinstall will automatically create a .rpm or a .deb package, depending on what your Linux system is:

On CentOS 7:

![]()

On Debian 7:

![]()

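The point of the whole exercise is that the result behaves like any other package; a minimal sketch of installing and later cleanly removing htop (the exact file names follow checkinstall's usual name-version pattern and may differ on your system):

    ## CentOS
    # rpm -ivh htop-1.0.3-1.x86_64.rpm
    # yum remove htop

    ## Debian
    # dpkg -i htop_1.0.3-1_amd64.deb
    # apt-get remove htop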
--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/build-rpm-deb-package-source-checkinstall.html

Translator: [luoyutiantang](https://github.com/luoyutiantang)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).

[1]:http://checkinstall.izto.org/docs/README
[2]:http://rpm.pbone.net/
[3]:http://ask.xmodulo.com/install-htop-centos-rhel.html
@ -1,20 +1,18 @@

20 Useful Commands of ‘Sysstat’ Utilities (mpstat, pidstat, iostat and sar) for Linux Performance Monitoring
================================================================================

In our last article, we learned how to install and upgrade the **sysstat** package, and took a brief look at the utilities that come with it.

Note: that article is provided along with this one, in the same source update.

- [Sysstat – Performance and Usage Activity Monitoring Tool For Linux][1]

![]()

20 Sysstat Commands for Linux Monitoring

Today, we are going to work through some interesting practical examples of the **mpstat**, **pidstat**, **iostat** and **sar** utilities, which can help us identify issues in the system. These utilities all come with different options, which means you can run the commands manually with different options for different kinds of work, or create customized scripts according to your requirements. You know sysadmins are always a bit lazy and always try to find the easiest way to get things done with minimum effort.

### mpstat – Processor Statistics ###

1. Using the mpstat command without any option will display the global average activities of all CPUs.

    tecmint@tecmint ~ $ mpstat

@ -23,7 +21,7 @@ Today, we are going to work with some interesting practical examples of **mpstat

    12:23:57 IST CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
    12:23:57 IST all 37.35 0.01 4.72 2.96 0.00 0.07 0.00 0.00 0.00 54.88

2. Using mpstat with the ‘**-P**’ (indicate processor number) option and ‘ALL’ will display statistics for each CPU one by one, starting from 0, where 0 is the first one.

    tecmint@tecmint ~ $ mpstat -P ALL

@ -34,7 +32,7 @@ Today, we are going to work with some interesting practical examples of **mpstat

    12:29:26 IST 0 37.90 0.01 4.96 2.62 0.00 0.03 0.00 0.00 0.00 54.48
    12:29:26 IST 1 36.75 0.01 4.19 2.54 0.00 0.11 0.00 0.00 0.00 56.40

3. To display the statistics **N** times at n second intervals, with the average of each CPU, use the following command.

    tecmint@tecmint ~ $ mpstat -P ALL 2 5

@ -55,11 +53,13 @@ Today, we are going to work with some interesting practical examples of **mpstat

    12:36:27 IST 0 34.34 0.00 4.04 0.00 0.00 0.00 0.00 0.00 0.00 61.62
    12:36:27 IST 1 32.82 0.00 6.15 0.51 0.00 0.00 0.00 0.00 0.00 60.51

(LCTT note: in the command above, ‘2’ means the ‘mpstat -P ALL’ command is executed every 2 seconds, and ‘5’ means it is executed 5 times in total.)

4. The ‘**-I**’ option will print the total number of interrupt statistics per processor.

    tecmint@tecmint ~ $ mpstat -I

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    12:39:56 IST CPU intr/s
    12:39:56 IST all 651.04

@ -72,11 +72,11 @@ Today, we are going to work with some interesting practical examples of **mpstat

    12:39:56 IST 0 0.00 116.49 0.05 0.27 7.33 0.00 1.22 10.44 0.13 37.47
    12:39:56 IST 1 0.00 111.65 0.05 0.41 7.07 0.00 56.36 9.97 0.13 41.38

5. Use the ‘**-A**’ option to get all the above information in one command, i.e. equivalent to “**-u -I ALL -P ALL**”.

    tecmint@tecmint ~ $ mpstat -A

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    12:41:39 IST CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
    12:41:39 IST all 38.70 0.01 4.47 2.01 0.00 0.06 0.00 0.00 0.00 54.76

@ -96,19 +96,19 @@ Today, we are going to work with some interesting practical examples of **mpstat

    12:41:39 IST 0 0.00 116.96 0.05 0.26 7.12 0.00 1.24 10.42 0.12 36.99
    12:41:39 IST 1 0.00 112.25 0.05 0.40 6.88 0.00 55.05 9.93 0.13 41.20

### pidstat – Process and Kernel Threads Statistics ###

This is used for monitoring processes and the current threads being managed by the kernel. pidstat can also check the status of child processes and threads.

#### Syntax ####

    # pidstat <OPTIONS> [INTERVAL] [COUNT]

6. Using the pidstat command without any argument will display all active tasks.

    tecmint@tecmint ~ $ pidstat

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    12:47:24 IST UID PID %usr %system %guest %CPU CPU Command
    12:47:24 IST 0 1 0.01 0.12 0.00 0.13 1 init

@ -126,11 +126,11 @@ This is used for process monitoring and current threads, which are being managed

    12:47:24 IST 0 365 0.01 0.00 0.00 0.01 0 systemd-udevd
    12:47:24 IST 0 476 0.00 0.00 0.00 0.00 0 kworker/u9:1

7. To print all active and non-active tasks, use the ‘**-p**’ (processes) option with ALL.

    tecmint@tecmint ~ $ pidstat -p ALL

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    12:51:55 IST UID PID %usr %system %guest %CPU CPU Command
    12:51:55 IST 0 1 0.01 0.11 0.00 0.12 1 init

@ -151,11 +151,11 @@ This is used for process monitoring and current threads, which are being managed

    12:51:55 IST 0 19 0.00 0.00 0.00 0.00 0 writeback
    12:51:55 IST 0 20 0.00 0.00 0.00 0.00 1 kintegrityd

8. Using pidstat with the ‘**-d 2**’ option, we can get I/O statistics, where 2 is the interval in seconds at which the statistics are refreshed. This option can be handy when your system is undergoing heavy I/O and you want clues about which processes are consuming high resources.

    tecmint@tecmint ~ $ pidstat -d 2

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    03:26:53 EDT PID kB_rd/s kB_wr/s kB_ccwr/s Command

@ -169,11 +169,12 @@ This is used for process monitoring and current threads, which are being managed

    03:27:03 EDT 25100 0.00 6.00 0.00 sendmail
    03:27:03 EDT 30829 0.00 6.00 0.00 java

9. To see the CPU statistics of process ID **4164**, along with all its threads, at an interval of **2** seconds for **3** times, use the following command with the ‘**-t**’ option (display statistics of the selected process).

    tecmint@tecmint ~ $ pidstat -t -p 4164 2 3

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    01:09:06 IST UID TGID TID %usr %system %guest %CPU CPU Command
    01:09:08 IST 1000 4164 - 22.00 1.00 0.00 23.00 1 firefox

@ -186,11 +187,11 @@ This is used for process monitoring and current threads, which are being managed

    01:09:08 IST 1000 - 4176 0.00 0.00 0.00 0.00 1 |__gdbus
    01:09:08 IST 1000 - 4177 0.00 0.00 0.00 0.00 1 |__gmain

10. Use the ‘**-rh**’ option to see the memory utilization of processes whose utilization changes frequently, at **2** second intervals.

    tecmint@tecmint ~ $ pidstat -rh 2 3

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    # Time UID PID minflt/s majflt/s VSZ RSS %MEM Command
    1409816695 1000 3958 3378.22 0.00 707420 215972 5.32 cinnamon

@ -209,21 +210,21 @@ This is used for process monitoring and current threads, which are being managed

    1409816699 1000 4164 599.00 0.00 1261944 476664 11.74 firefox
    1409816699 1000 6676 168.00 0.00 4436 1020 0.03 pidstat

11. To print statistics for all processes whose command name contains the string “**VB**”, use the ‘**-G**’ option; add the ‘**-t**’ option to see the threads as well.

    tecmint@tecmint ~ $ pidstat -G VB

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    01:09:06 IST UID PID %usr %system %guest %CPU CPU Command
    01:09:08 IST 1000 1492 22.00 1.00 0.00 23.00 1 VBoxService
    01:09:08 IST 1000 1902 4164 20.00 0.50 0.00 20.50 VBoxClient
    01:09:08 IST 1000 1922 4171 0.00 0.00 0.00 0.00 VBoxClient

----------

    tecmint@tecmint ~ $ pidstat -t -G VB

    Linux 2.6.32-431.el6.i686 (tecmint) 09/04/2014 _i686_ (2 CPU)

    03:19:52 PM UID TGID TID %usr %system %guest %CPU CPU Command
    03:19:52 PM 0 1479 - 0.01 0.12 0.00 0.13 1 VBoxService

@ -238,32 +239,32 @@ This is used for process monitoring and current threads, which are being managed

    03:19:52 PM 0 1933 - 0.04 0.89 0.00 0.93 0 VBoxClient
    03:19:52 PM 0 - 1936 0.04 0.89 0.00 0.93 1 |__X11-NOTIFY

12. To get real-time priority and scheduling information, use the ‘**-R**’ option.

    tecmint@tecmint ~ $ pidstat -R

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    01:09:06 IST UID PID prio policy Command
    01:09:08 IST 1000 3 99 FIFO migration/0
    01:09:08 IST 1000 5 99 FIFO migration/0
    01:09:08 IST 1000 6 99 FIFO watchdog/0

I am not going to cover the iostat utility here, as we have already covered it. Please have a look at “[Linux Performance Monitoring with Vmstat and Iostat][2]” (note: that article is also provided in the same source update) for all the details about iostat.

### sar – System Activity Reporter ###

Using the “**sar**” command, we can get reports about the whole system’s performance. This can help us locate system bottlenecks and find solutions to these annoying performance issues.

The Linux kernel maintains some internal counters that keep track of all requests, their completion times, I/O block counts, etc. From all this information, sar calculates the rates and ratios of these requests to identify bottleneck areas.

The main thing about sar is that it reports all activities over a period of time. So make sure that sar collects data at an appropriate time (not at lunch time or on a weekend :).

13. The following is a basic invocation of sar. It will create a file named “**sarfile**” in your current directory. The ‘**-u**’ option is for CPU details, and this collects **5** reports at an interval of **2** seconds.

    tecmint@tecmint ~ $ sar -u -o sarfile 2 5

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    01:42:28 IST CPU %user %nice %system %iowait %steal %idle
    01:42:30 IST all 36.52 0.00 3.02 0.00 0.00 60.45

@ -273,26 +274,26 @@ The main thing about the sar is that, it reports all activities over a period if

    01:42:38 IST all 50.75 0.00 3.75 0.00 0.00 45.50
    Average: all 46.30 0.00 3.93 0.00 0.00 49.77

14. In the above example, we invoked sar interactively. It can also be invoked non-interactively via cron, using the **/usr/local/lib/sa1** and **/usr/local/lib/sa2** scripts (if you used **/usr/local** as the prefix at installation time).

- **/usr/local/lib/sa1** is a shell script that can be scheduled via cron to create daily binary log files.
- **/usr/local/lib/sa2** is a shell script that converts the binary log files into a human-readable form.

Use the following cron entries to make this non-interactive:

    # Run the sa1 shell script every 2 minutes to collect data
    */2 * * * * /usr/local/lib/sa/sa1 2 10

    # Generate a daily report in human-readable format at 23:53
    53 23 * * * /usr/local/lib/sa/sa2 -A

In the back end, the sa1 script calls the **sadc** (System Activity Data Collector) utility to fetch the data at a particular interval. **sa2** calls sar to convert the binary log files into a human-readable form.

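As an aside, the daily binary files that sa1 writes can also be read back directly with sar's '-f' option; a small sketch (the file name is an example, and on some distributions the directory is /var/log/sysstat instead of /var/log/sa):

    tecmint@tecmint ~ $ sar -u -f /var/log/sa/sa04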
15. Check the run queue length, the total number of processes and the load average using the ‘**-q**’ option.

    tecmint@tecmint ~ $ sar -q 2 5

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    02:00:44 IST runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15 blocked
    02:00:46 IST 1 431 1.67 1.22 0.97 0

@ -302,11 +303,11 @@ At the back-end sa1 script will call **sadc** (System Activity Data Collector) u

    02:00:54 IST 0 431 1.64 1.23 0.97 0
    Average: 2 431 1.68 1.23 0.97 0

16. Check statistics about the mounted file systems using ‘**-F**’.

    tecmint@tecmint ~ $ sar -F 2 4

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    02:02:31 IST MBfsfree MBfsused %fsused %ufsused Ifree Iused %Iused FILESYSTEM
    02:02:33 IST 1001 449 30.95 1213790475088.85 18919505 364463 1.89 /dev/sda1

@ -323,11 +324,11 @@ At the back-end sa1 script will call **sadc** (System Activity Data Collector) u

    Summary MBfsfree MBfsused %fsused %ufsused Ifree Iused %Iused FILESYSTEM
    Summary 1001 449 30.95 1213790475088.86 18919505 364463 1.89 /dev/sda1

17. View network statistics using ‘**-n DEV**’.

    tecmint@tecmint ~ $ sar -n DEV 1 3 | egrep -v lo

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    02:11:59 IST IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
    02:12:00 IST wlan0 8.00 10.00 1.23 0.92 0.00 0.00 0.00

@ -335,11 +336,11 @@ At the back-end sa1 script will call **sadc** (System Activity Data Collector) u

    02:12:00 IST eth0 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    02:12:00 IST vmnet1 0.00 0.00 0.00 0.00 0.00 0.00 0.00

18. View block device statistics, similar to iostat, using ‘**-d**’.

    tecmint@tecmint ~ $ sar -d 1 3

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    02:13:17 IST DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
    02:13:18 IST dev8-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

@ -350,11 +351,11 @@ At the back-end sa1 script will call **sadc** (System Activity Data Collector) u

    02:13:19 IST DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
    02:13:20 IST dev8-0 7.00 32.00 80.00 16.00 0.11 15.43 15.43 10.80

19. To print memory statistics, use the ‘**-r**’ option.

    tecmint@tecmint ~ $ sar -r 1 3

    Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)

    02:14:29 IST kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
    02:14:30 IST 1465660 2594840 63.90 133052 1549644 3710800 45.35 1133148 1359792 392

@ -362,7 +363,7 @@ At the back-end sa1 script will call **sadc** (System Activity Data Collector) u

    02:14:32 IST 1469112 2591388 63.82 133060 1550036 3705288 45.28 1130252 1360168 804
    Average: 1469165 2591335 63.82 133057 1549824 3710531 45.34 1129739 1359987 677

20. Using ‘**sadf -d**’, we can extract data in a format that can be processed by databases.

    tecmint@tecmint ~ $ sadf -d /var/log/sa/sa20140903 -- -n DEV | grep -v lo

@ -382,20 +383,20 @@ At the back-end sa1 script will call **sadc** (System Activity Data Collector) u

    tecmint;2;2014-09-03 12:00:10 UTC;eth0;0.50;0.50;0.03;0.04;0.00;0.00;0.00;0.00
    tecmint;2;2014-09-03 12:00:12 UTC;eth0;1.00;0.50;0.12;0.04;0.00;0.00;0.00;0.00

You can also save this to a CSV file and then draw charts for presentations and the like, as shown below.

![]()

Network Graph

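Producing such a CSV file can be as simple as swapping sadf's semicolon separators for commas; a minimal sketch reusing the command from above (the output file name is arbitrary):

    tecmint@tecmint ~ $ sadf -d /var/log/sa/sa20140903 -- -n DEV | grep -v lo | tr ';' ',' > network.csv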
That's it for now. You can refer to the man pages for more information about each option, and don't forget to tell us what you think of the article in your valuable comments.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/sysstat-commands-to-monitor-linux/

Author: [Kuldeep Sharma][a]
Translator: [cvsher](https://github.com/cvsher)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).