Merge pull request #24 from LCTT/master

Update Repo
This commit is contained in:
joeren 2015-01-13 08:20:49 +08:00
commit a330de140d
34 changed files with 1633 additions and 1222 deletions

View File

@ -1,4 +1,4 @@
After the Apple Watch, will the next smartwatch run Ubuntu?
===
**With the launch of the Apple Watch, Apple has confirmed the long-standing rumors of its entry into the wearable electronics market.**

View File

@ -0,0 +1,82 @@
ChromeOS vs. Linux: The Good, the Bad and the Ugly
================================================================================
> In the battle between ChromeOS and Linux, each desktop has its strengths and weaknesses; so how do the two really compare?
Anyone paying even a little attention will agree that Google is hardly "just playing around" on the desktop. In recent years, the [Google Chromebooks][2] built on [ChromeOS][1] have made quite a splash. Much like Amazon's surging popularity over the same period, ChromeOS seems unstoppable.
In this article, we will look at the ChromeOS concept and its market, how ChromeOS affects Linux's share, and whether ChromeOS as a whole is good or bad for the Linux community. I will also touch on some major problems, and why nobody is doing anything about them.
### ChromeOS is not really Linux ###
Whenever friends ask me whether ChromeOS is a Linux distribution, I answer: ChromeOS is to Linux what OS X is to BSD. In other words, I regard ChromeOS as an operating system derived from Linux, running on the Linux kernel as its engine, while most of the OS itself consists of Google's proprietary code and software.
So even though ChromeOS uses the Linux kernel under the hood, it is still very different from today's popular Linux distributions.
In fact, what increasingly sets ChromeOS apart is what it offers end users: apps, including web apps. With everything on ChromeOS starting in a browser window, Linux users may find the experience quite foreign; but for people with no Linux background, it is not much different from the old computers they are used to.
For example, anyone living a "Google-dependent" lifestyle will feel right at home on ChromeOS, especially if they have already embraced the Chrome browser, Google Drive and Gmail. Over time, using ChromeOS comes naturally to them, because they readily take to the Chrome browser they already know.
For Linux enthusiasts, however, such constraints chafe immediately. Software choice is limited and locked down, and gaming and VoIP are out of the question entirely. Sorry, [Google+ Hangouts][3] is no substitute for real VoIP software, and that may remain true for quite some time.
### ChromeOS or the Linux desktop ###
Some assert that ChromeOS will only make a dent in the Linux desktop wherever Linux stops to come up for air, or wherever it fails to serve some non-technical user.
Yes, the Linux desktop is absolutely a fine thing for most casual users. However, someone has to install the operating system for them and provide "repair" service, just as we see in the Windows and OS X camps. Disappointingly, this is exactly where Linux falls short in the US. So we watch ChromeOS slowly edge into view.
I have found that the Linux desktop works best in environments where on-site technical support is available: power users who handle their own updates at home, IT departments in government and schools, and so on. In these environments, the Linux desktop can be set up for people of any skill level or background.
By contrast, ChromeOS was built from the start to be completely maintenance-free: no third party required, just allow the updates and let them finish silently. This is possible in part because ChromeOS was designed for specific hardware, much as Apple builds its own PCs. Because Google's ChromeOS ships together with its hardware, there is usually no need to worry about wrong drivers or compatibility. For some people, that is wonderful.
Some, however, consider this a serious problem, though amusingly, those people are not in ChromeOS's target market at all. In short, this is just zealous Linux enthusiasts nitpicking ChromeOS. If you ask me, it is time to drop the pointless criticism.
The point is this: the markets for ChromeOS and the Linux desktop have long been different ones. That may change in the future, but for now the two camps will remain in a standoff.
### ChromeOS usage is growing ###
Whatever you think of ChromeOS, the fact is that its usage is growing. Computers built specifically for ChromeOS keep appearing. Recently Dell, too, released a ChromeOS machine, named the [Dell Chromebox][5]. This ChromeOS device will be another blow to traditional hardware: it has no optical drive, needs no antivirus software, and delivers seamless automatic updates behind the scenes. For ordinary users, Chromeboxes and Chromebooks are becoming a reliable option for those who work inside a web browser.
Despite this rapid growth, ChromeOS devices still face one serious problem: storage. Constrained by small hard drives and a heavy reliance on cloud storage, ChromeOS falls short for anyone who needs more than basic browser functionality.
### Where ChromeOS and Linux differ, and where they don't ###
Earlier I noted that ChromeOS and the Linux desktop occupy two completely different markets. Things turned out this way because the Linux community has done an abysmal job at supporting its desktop offline.
Yes, now and again someone may stumble upon "the Linux thing" for the first time. But nobody follows up with them, making sure their questions get answered and helping them go further with Linux.
In fact, the offline failure might play out like this:
- Someone stumbles upon Linux at a local Linux event.
- They take home a DVD or USB stick and try to install the operating system.
- Some, of course, are lucky and finish the installation, but as far as I know most are not.
- Disappointingly, their only hope is searching web forums for help. They can hardly solve these problems through mainstream computing experience or video tutorials.
- Then they are fed up. Many frustrated users later carry their computers to a Windows shop for "repair". Along with a reinstalled copy of Windows, they often hear the line that "Linux isn't for you" and should be avoided.
Some will surely say the example above exaggerates. Let me tell you: it is real, and it happens around me all the time. Wake up, people of the Linux community: our advocacy model is long past its expiration date.
### Great platform, lousy marketing, and a conclusion ###
If ChromeOS and the Linux desktop share anything beyond the Linux kernel, it is that both are great products with dreadful marketing. Google counts as its advantage here that it can pour vast sums into building out huge amounts of online storage.
Google believes it holds the "online advantage" and that offline matters little. That is astonishingly short-sighted, and it ranks among Google's gravest mistakes. Local Linux retailers, meanwhile, are convinced that people who rarely go online will not be lured by Google's enormous online storage.
My advice: Linux can break into the ChromeOS market through an offline push for its desktop. That means the Linux community raising funds over the holidays to attend expos and mall showcases, and running free classes in the community. That would put the Linux desktop immediately in front of people who would otherwise, in the end, be facing a ChromeOS device.
And if the local offline market does not turn out the way I say, don't worry. The Linux desktop will still grow alongside ChromeOS. At worst, today's two-camp standoff will simply hold.
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/chromeos-vs-linux-the-good-the-bad-and-the-ugly-1.html
Author: [Matt Hartley][a]
Translator: [barney-ro](https://github.com/barney-ro)
Proofreader: [Mr小眼儿](https://github.com/tinyeyeser)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.datamation.com/author/Matt-Hartley-3080.html
[1]:http://en.wikipedia.org/wiki/Chrome_OS
[2]:http://www.google.com/chrome/devices/features/
[3]:https://plus.google.com/hangouts
[4]:http://en.wikipedia.org/wiki/Voice_over_IP
[5]:http://www.pcworld.com/article/2602845/dell-brings-googles-chrome-os-to-desktops.html

View File

@ -0,0 +1,66 @@
ESR: The Twilight Years of a Hacker
================================================================================
Lately I have been wrestling with several members of a certain veteran open-source development team. People who follow me closely will guess which organization it is by the end of this post, but I will not name it here.
Why is it so hard to drag some people into the 21st century? Honestly...
I am almost 56, the age at which most young people expect me to erupt now and then into hysterical "get off my lawn" rants. But that is not how it is; I find that, especially against a technical backdrop, I have become quite unlike my age.
Most people my age really have turned into grumbling, hidebound old codgers. And, awkwardly, I am occasionally the one who interrupts the conversation to point out that an approach of theirs that worked well in 1995 (or, in some special cases, 1985)... is no longer a good one decades later.
Why me? Because the young carry little persuasive weight with my peers. If anyone wants that crowd of old men to change their minds, it takes one of their own: someone of higher awareness with standing among his contemporaries. Even so, the fight against habit has cost me more time than it would appear.
The ignorant mistakes of the young are forgivable. They are young. Youth means inexperience, and inexperience usually leads to one-sided judgment. What I find hard to forgive are people who have been around long enough to know better, yet are so blinded by *long-entrenched patterns of thought* that they cannot see what is right in front of them.
(A side note: I am really not a conservative partisan. Those who argue politics with me, conservatives and non-conservatives alike, fail to notice this, which strikes me as rather ironic.)
So, now let us talk about the GNU ChangeLog file. Back in 1985 it was a good idea, arguably a necessity. The idea was to record changes spanning several related files with a single changelog entry. It was a decent way to impose version control where versioning was missing or extremely primitive. I was *there*, so I know.
Yet even by 1995, and into the early 21st century, many version control systems had not improved much. That is, rather than grouping a batch of file changes and saving them as one record, they recorded each changed file separately, in its own place. CVS, the widely used version control system of the day, merely simulated change logging, and did it so badly that most people stopped relying on the feature. Even then, the ChangeLog file remained necessary.
But then Subversion, which shipped its beta in 2003 and version 1.0 in 2004, implemented changelog recording properly and won broad acceptance. Together with the distributed version control systems (DVCS) that rose a year later, it touched off a fierce mainstream debate: if you use distributed version control and ChangeLog files on the same project, they fight for control of the same metadata, producing unpredictable conflicts.
There are several ways to compromise. One is to keep the ChangeLog as the authoritative record of code changes. Do that, and you basically end up with crude, perfunctory commit comments.
Another is to make the commit comments authoritative. Do that, and before long you start wondering why you still write ChangeLog entries at all. Commit metadata fits the changed code far better; that is, after all, what it was designed for.
(Now picture a project where, each side acting in the project's best interest, two factions choose differently. You then have to read both the ChangeLog and the comment log to figure out what actually happened. Best to sort that out before the contradiction boils over....)
The third way is to attempt both at once: duplicating the comment data, in a slightly altered format, as a ChangeLog entry included in each commit. This invites all sorts of unexpected problems, most notably a violation of the "single point of truth" principle: the moment one copy is corrupted or a ChangeLog entry is edited, the data no longer match, and anyone who arrives later trying to work out what people were thinking will be thoroughly confused. (LCTT translator's note: in *[The Pragmatic Programmer][1]*, every piece of knowledge should have a single, unambiguous, authoritative representation within a system. Following Brian Kernighan's suggestion, this principle is called the Single Point of Truth, or SPOT, principle.)
Or, as the *particular project I am not naming* has done, its senior developers recently declared in email that a commit may contain multiple ChangeLog entries, and that commit metadata has nothing to do with the ChangeLog. Which is why, to this day, we keep on writing these records.
Reading that email, I nearly vomited. What kind of fool fails to see this as asking for trouble, when in fact the DVCS world has excellent browsing tools built around reliable commit logs, and the whole custom apparatus around ChangeLog files is nothing but burden and drag?
Alas, this is a special kind of fool: the hacker grown old and mentally rigid. Every reasonable reform he resists with all his might. The methods he follows worked decades ago, but now they backfire. Try explaining that this is not merely about git summary lines, but about properly fitting the current toolset so ChangeLog entries can be retired... heh, then brace yourself for an unbearable, unimaginably maddening conversation.
It did, indeed, succeed in enraging me. Babble of this sort has made the project nearly impossible to work on. Just as bad, the same rot shows in how they attract young developers, which I think is the real problem. The project's Google+ community has reached four digits in membership, most of them kids who have not grown up yet. Apparently the message the outside world has received is: the developers of this project are tribal chieftains of deeply entrenched status, best revered from a respectful distance.
What strikes me hardest is this: every time I square off against one of these tribal elders, I wonder, will I be like that one day? Or worse, am I looking at a mirror image of my own true self without knowing it? I mean, the impression I get from his website is that this particular fool is younger than I am. Younger by a good 15 years.
I have always believed my thinking is clear. Dealing with people smarter than me does not frustrate me; I am only frustrated by those who cannot keep up or see the facts. But maybe that confidence is merely the Dunning-Kruger effect working its dark side on me, and I am not sure what that would mean. Very few things frighten me; this sits near the top of the list.
Another unsettling thing is that these clashes grow more frequent as I age. Somehow I had hoped my fellow hackers would age with more grace: bodies growing old while minds stay young. Some truly do; but sadly, the great majority do not. It is heartbreaking.
I am not sure my career will close well. If in the end I do escape mental rigidity (note that I said "if"), I think I will know part of the reason why, but I am not sure the pattern can be copied; getting there may take some complicated chemistry in your head. Even so, right or wrong, here is my advice to young hackers and other aspiring minds.
You (yes, you too) cannot keep a sound mind into middle and old age unless you take control of this. You must keep honing your mind and chasing your aspirations while you are young, and make these behaviors a habit that stays with you until you are old.
There is a saying that the best time for the middle-aged to start exercising is before 30. In the same way, I believe that keeping the habits above can leave your mind supple at 56, or even 65. Push your limits, and make challenging yourself a habit. Step out of the comfort zone now, so that later, when you truly need one, you can build your own.
You must be clear about this. An alternative challenge is to pick an attainable goal and keep working toward it. This month I am learning the Go language. Not the game, which I played long ago (though not very well). Not because work demands it, but because I figured it was time to stretch myself.
Keep the habit. Never give up.
--------------------------------------------------------------------------------
via: http://esr.ibiblio.org/?p=6485
Author: [Eric Raymond][a]
Translator: [Stevearzh](https://github.com/Stevearzh)
Proofreader: [Mr小眼儿](https://github.com/tinyeyeser)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://esr.ibiblio.org/?author=2
[1]:http://book.51cto.com/art/200809/88490.htm

View File

@ -1,10 +1,10 @@
Linux FAQs with Answers: How to install kernel headers on Linux
================================================================================
> **Question**: I need to install kernel headers before installing a device driver. How do I install the appropriate kernel headers?
When you compile a device driver module, you need kernel headers installed on your system. Kernel headers are also required when you compile a user-space program that links directly against the kernel. When installing kernel headers in these cases, you must make sure they exactly match your current kernel version (e.g., 3.13.0-24-generic).
If your kernel came with your distribution, or you upgraded it from the base repositories with the default package manager (e.g., apt-get, aptitude or yum), you can install the kernel headers with the package manager as well. On the other hand, if you downloaded the [kernel source][1] and compiled it manually, you can install matching kernel headers with the [make command][2].
Now, assuming your kernel came with the distribution, let's see how to install the matching headers.
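On the Debian family, for example, the matching headers can usually be installed in a single step (a minimal sketch; exact package names vary by distribution and kernel flavor):

$ sudo apt-get install linux-headers-$(uname -r)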
@ -41,7 +41,7 @@ On Debian, Ubuntu and Linux Mint, the default location for headers is **/usr/src**.
Assuming you did not compile the kernel manually, you can use the yum command to install matching kernel headers.
First, check whether the headers are already installed with the command below. If it prints nothing, no headers are installed yet.
$ rpm -qa | grep kernel-headers-$(uname -r)
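If nothing is printed, the headers can typically be pulled in with yum (a sketch; note that building modules on some releases requires the separate kernel-devel package instead):

$ sudo yum install kernel-headers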
@ -66,7 +66,7 @@ On Fedora, CentOS or RHEL, the default location of kernel headers is **/usr/include/li
via: http://ask.xmodulo.com/install-kernel-headers-linux.html
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)

View File

@ -1,25 +0,0 @@
Git 2.2.1 Released To Fix Critical Security Issue
================================================================================
![](http://www.phoronix.com/assets/categories/freesoftware.jpg)
Git 2.2.1 was released this afternoon to fix a critical security vulnerability in Git clients. Fortunately, the vulnerability doesn't plague Unix/Linux users but rather OS X and Windows.
Today's Git vulnerability affects those using the Git client on case-insensitive file-systems. On case-insensitive platforms like Windows and OS X, committing to .Git/config could overwrite the user's .git/config and could lead to arbitrary code execution. Fortunately with most Phoronix readers out there running Linux, this isn't an issue thanks to case-sensitive file-systems.
Besides the attack vector from case insensitive file-systems, Windows and OS X's HFS+ would map some strings back to .git too if certain characters are present, which could lead to overwriting the Git config file. Git 2.2.1 addresses these issues.
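As a quick, hedged sanity check of my own (not taken from the announcement), you can confirm your client version and see whether a repository is configured as case-insensitive:

$ git --version
$ git config core.ignorecase    # prints "true" on case-insensitive checkouts, if set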
More details via the [Git 2.2.1 release announcement][1] and [GitHub has additional details][2].
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=news_item&px=MTg2ODA
Author: [Michael Larabel][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.michaellarabel.com/
[1]:http://article.gmane.org/gmane.linux.kernel/1853266
[2]:https://github.com/blog/1938-git-client-vulnerability-announced

View File

@ -1,168 +0,0 @@
Translating By H-mudcup
Easy File Comparisons With These Great Free Diff Tools
================================================================================
by Frazer Kline
File comparison compares the contents of computer files, finding their common contents and their differences. The result of the comparison is often known as a diff.
diff is also the name of a famous console based file comparison utility that outputs the differences between two files. The diff utility was developed in the early 1970s on the Unix operating system. diff will output the parts of the files where they are different.
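For instance, the classic console workflow produces a unified diff between two versions of a file (the file names here are hypothetical):

$ diff -u old.c new.c

The GUI tools below present the same information visually.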
Linux has many good GUI tools that enable you to clearly see the difference between two files or two versions of the same file. This roundup selects 5 of my favourite GUI diff tools, with all but one released under an open source license.
These utilities are essential software development tools: they visualize the differences between files or directories, merge files with differences, resolve conflicts and save output to a new file or patch, and assist in reviewing file changes and producing comments (e.g. approving source code changes before they get merged into a source tree). They help developers work on a file, passing it back and forth between each other. The diff tools are not only useful for showing differences in source code files; they can be used on many text-based file types as well. The visualisations make it easier to compare files.
----------
![](http://www.linuxlinks.com/portal/content2/png/Meld.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Meld.png)
Meld is an open source graphical diff viewer and merge application for the Gnome desktop. It supports 2 and 3-file diffs, recursive directory diffs, diffing of directories under version control (Bazaar, Codeville, CVS, Darcs, Fossil SCM, Git, Mercurial, Monotone, Subversion), as well as the ability to manually and automatically merge file differences.
Meld's focus is on helping developers compare and merge source files, and get a visual overview of changes in their favourite version control system.
Features include
- Edit files in-place, and your comparison updates on-the-fly
- Perform two- and three-way diffs and merges
- Easily navigate between differences and conflicts
- Visualise global and local differences with insertions, changes and conflicts marked
- Built-in regex text filtering to ignore uninteresting differences
- Syntax highlighting (with optional gtksourceview)
- Compare two or three directories file-by-file, showing new, missing, and altered files
- Directly open file comparisons of any conflicting or differing files
- Filter out files or directories to avoid seeing spurious differences
- Auto-merge mode and actions on change blocks help make merges easier
- Simple file management is also available
- Supports many version control systems, including Git, Mercurial, Bazaar and SVN
- Launch file comparisons to check what changes were made, before you commit
- View file versioning statuses
- Simple version control actions are also available (i.e., commit/update/add/remove/delete files)
- Automatically merge two files using a common ancestor
- Mark and display the base version of all conflicting changes in the middle pane
- Visualise and merge independent modifications of the same file
- Lock down read-only merge bases to avoid mistakes
- Command line interface for easy integration with existing tools, including git mergetool (see the sketch after this list)
- Internationalization support
- Visualisations make it easier to compare your files
- Website: [meldmerge.org][1]
- Developer: Kai Willadsen
- License: GNU GPL v2
- Version Number: 1.8.5
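As noted in the feature list above, Meld integrates with git mergetool. A minimal sketch of wiring that up, assuming meld is on your PATH:

$ git config --global merge.tool meld
$ git mergetool    # opens meld on each conflicted file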
----------
![](http://www.linuxlinks.com/portal/content2/png/DiffMerge.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-DiffMerge.png)
Note: the large image above is not reachable; its URL is the small-image link from the original article. Verify it again before publishing, and if it is still unreachable, fall back to the small image or search for a larger one.
DiffMerge is an application to visually compare and merge files on Linux, Windows, and OS X.
Features include:
- Graphically shows the changes between two files. Includes intra-line highlighting and full support for editing
- Graphically shows the changes between 3 files. Allows automatic merging (when safe to do so) and full control over editing the resulting file
- Performs a side-by-side comparison of 2 folders, showing which files are only present in one file or the other, as well as file pairs which are identical, equivalent or different
- Rulesets and options provide for customized appearance and behavior
- Unicode-based application and can import files in a wide range of character encodings
- Cross-platform tool
- Website: [sourcegear.com/diffmerge][2]
- Developer: SourceGear LLC
- License: Licensed for use free of charge (not open source)
- Version Number: 4.2
----------
![](http://www.linuxlinks.com/portal/content2/png/xxdiff.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-xxdiff.png)
xxdiff is an open source graphical file and directories comparator and merge tool.
xxdiff can be used for viewing the differences between two or three files, or two directories, and can be used to produce a merged version. The texts of the two or three files are presented side by side with their differences highlighted with colors for easy identification.
This program is an essential software development tool that can be used to visualize the differences between files or directories, merge files with differences, resolve conflicts and save output to a new file or patch, and assist in reviewing file changes and producing comments (e.g. approving source code changes before they get merged into a source tree).
Features include:
- Compare two files, three files, or two directories (shallow and recursive)
- Horizontal diffs highlighting
- Files can be merged interactively and resulting output visualized and saved
- Features to assist in performing merge reviews/policing
- Unmerge CVS conflicts in automatically merged file and display them as two files, to help resolve conflicts
- Uses external diff program to compute differences: works with GNU diff, SGI diff and ClearCase's cleardiff, and any other diff whose output is similar to those
- Fully customizable with a resource file
- Look-and-feel similar to Rudy Wortel's/SGI xdiff, it is desktop agnostic
- Features and output that ease integration with scripts
- Website: [furius.ca/xxdiff][3]
- Developer: Martin Blais
- License: GNU GPL
- Version Number: 4.0
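A typical invocation is simply two files (or directories) side by side; the file names here are hypothetical:

$ xxdiff file1.txt file2.txt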
----------
![](http://www.linuxlinks.com/portal/content2/png/Diffuse.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Diffuse.png)
Diffuse is an open source graphical tool for merging and comparing text files. Diffuse is able to compare an arbitrary number of files side-by-side and offers the ability to manually adjust line-matching and directly edit files. Diffuse can also retrieve revisions of files from bazaar, CVS, darcs, git, mercurial, monotone, Subversion and GNU Revision Control System (RCS) repositories for comparison and merging.
Features include:
- Compare and merge an arbitrary number of files side-by-side (n-way merges)
- Line matching can be manually corrected by the user
- Directly edit files
- Syntax highlighting
- Bazaar, CVS, Darcs, Git, Mercurial, Monotone, RCS, Subversion, and SVK support
- Unicode support
- Unlimited undo
- Easy keyboard navigation
- Website: [diffuse.sourceforge.net][4]
- Developer: Derrick Moser
- License: GNU GPL v2
- Version Number: 0.4.7
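Since Diffuse accepts an arbitrary number of files, a three-way comparison is just three arguments (hypothetical names):

$ diffuse mine.txt base.txt theirs.txt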
----------
![](http://www.linuxlinks.com/portal/content2/png/Kompare.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Kompare.png)
Kompare is an open source GUI front-end program that enables differences between source files to be viewed and merged. Kompare can be used to compare differences on files or the contents of folders. Kompare supports a variety of diff formats and provides many options to customize the information level displayed.
Whether you are a developer comparing source code, or you just want to see the difference between that research paper draft and the final document, Kompare is a useful tool.
Kompare is part of the KDE desktop environment.
Features include:
- Compare two text files
- Recursively compare directories
- View patches generated by diff
- Merge a patch into an existing directory
- Entertain you during that boring compile
- Website: [www.caffeinated.me.uk/kompare/][5]
- Developer: The Kompare Team
- License: GNU GPL
- Version Number: Part of KDE
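Comparing two files from the command line is equally simple (hypothetical names):

$ kompare original.txt modified.txt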
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/2014062814400262/FileComparisons.html
Translator: [译者ID](https://github.com/译者ID) Proofreader: [校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://meldmerge.org/
[2]:https://sourcegear.com/diffmerge/
[3]:http://furius.ca/xxdiff/
[4]:http://diffuse.sourceforge.net/
[5]:http://www.caffeinated.me.uk/kompare/

View File

@ -0,0 +1,111 @@
Best GNOME Shell Themes For Ubuntu 14.04
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Best_Gnome_Shell_Themes.jpeg)
Themes are the best way to customize your Linux desktop. If you [install GNOME on Ubuntu 14.04][1] or 14.10, you might want to change the default theme and give it a different look. To help you in this task, I have compiled here a **list of the best GNOME Shell themes for Ubuntu** or any other Linux OS that has GNOME Shell installed on it. But before we see the list, let's first see how to install new themes in GNOME Shell.
### Install themes in GNOME Shell ###
To install new themes in GNOME with Ubuntu, you can use Gnome Tweak Tool, which is available in the Ubuntu software repositories. Open a terminal and use the following command:
sudo apt-get install gnome-tweak-tool
Alternatively, you can use themes by putting them in ~/.themes directory. I have written a detailed tutorial on [how to install and use themes in GNOME Shell][2], in case you need it.
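For the manual route, a minimal sketch looks like this (the archive name is hypothetical); the theme can then be selected in Gnome Tweak Tool:

mkdir -p ~/.themes
tar xf MyShellTheme.tar.gz -C ~/.themes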
### Best GNOME Shell themes ###
The themes listed here have been tested on GNOME Shell 3.10.4, but they should work on all versions of GNOME 3 and higher. For the record, the themes are not listed in any order of priority. Let's have a look at the best GNOME themes:
#### Numix ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/02/mockups_numix_5.jpeg)
No list can be complete without a mention of [Numix themes][3]. These themes became so popular that they encouraged the [Numix team to work on a new Linux OS, Ozon][4]. Considering their design work on the Numix theme, it won't be an exaggeration to call it one of the [most beautiful Linux OSes][5] to be released in the near future.
To install Numix theme in Ubuntu based distributions, use the following commands:
sudo apt-add-repository ppa:numix/ppa
sudo apt-get update
sudo apt-get install numix-icon-theme-circle
#### Elegance Colors ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Elegance_Colors_Theme_GNOME_Shell.jpeg)
Another beautiful theme from Satyajit Sahoo, who is also a member of the Numix team. [Elegance Colors][6] has its own PPA, so you can easily install it:
sudo add-apt-repository ppa:satyajit-happy/themes
sudo apt-get update
sudo apt-get install gnome-shell-theme-elegance-colors
#### Moka ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Moka_GNOME_Shell.jpeg)
[Moka][7] is another mesmerizing theme that is always included in the list of beautiful themes. Designed by the same developer who gave us Unity Tweak Tool, Moka is a must try:
sudo add-apt-repository ppa:moka/stable
sudo apt-get update
sudo apt-get install moka-gnome-shell-theme
#### Viva ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Viva_GNOME_Theme.jpg)
Based on GNOME's default Adwaita theme, Viva is a nice theme with shades of black and orange. You can download Viva from the link below.
- [Download Viva GNOME Shell Theme][8]
#### Ciliora-Prima ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Ciliora_Prima_Gnome_Shell.jpeg)
Previously known as Zukitwo Dark, Ciliora-Prima uses a square icon theme. The theme is available in three versions that are slightly different from each other. You can download it from the link below.
- [Download Ciliora-Prima GNOME Shell Theme][9]
#### Faience ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Faience_GNOME_Shell_Theme.jpeg)
Faience has been a popular theme for quite some time and rightly so. You can install Faience using the PPA below for GNOME 3.10 and higher.
sudo add-apt-repository ppa:tiheum/equinox
sudo apt-get update
sudo apt-get install faience-theme
#### Paper [Incomplete] ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Paper_GTK_Theme.jpeg)
Ever since Google talked about Material Design, people have been going gaga over it. The Paper GTK theme, by Sam Hewitt (of the Moka Project), is inspired by Google's Material Design and is currently under development. This means you will not have the best experience with Paper at the moment. But if you're a bit experimental, like me, you can definitely give it a try.
sudo add-apt-repository ppa:snwh/pulp
sudo apt-get update
sudo apt-get install paper-gtk-theme
That concludes my list. If you are trying to give a different look to your Ubuntu, you should also try the list of [best icon themes for Ubuntu 14.04][10].
How do you find this list of the **best GNOME Shell themes**? Which one is your favorite among those listed here? And if it's not listed here, do let us know which theme you think is the best GNOME Shell theme.
--------------------------------------------------------------------------------
via: http://itsfoss.com/gnome-shell-themes-ubuntu-1404/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://itsfoss.com/how-to-install-gnome-in-ubuntu-14-04/
[2]:http://itsfoss.com/install-switch-themes-gnome-shell/
[3]:https://numixproject.org/
[4]:http://itsfoss.com/numix-linux-distribution/
[5]:http://itsfoss.com/new-beautiful-linux-2015/
[6]:http://satya164.deviantart.com/art/Gnome-Shell-Elegance-Colors-305966388
[7]:http://mokaproject.com/
[8]:https://github.com/vivaeltopo/gnome-shell-theme-viva
[9]:http://zagortenay333.deviantart.com/art/Ciliora-Prima-Shell-451947568
[10]:http://itsfoss.com/best-icon-themes-ubuntu-1404/

View File

@ -1,3 +1,4 @@
Translating by ZTinoZ
20 Linux Commands Interview Questions & Answers
================================================================================
**Q:1 How to check the current run level of a Linux server?**
@ -140,4 +141,4 @@ via: http://www.linuxtechi.com/20-linux-commands-interview-questions-answers/
[17]:
[18]:
[19]:
[20]:

View File

@ -1,3 +1,4 @@
(translating by runningwater)
2015: Open Source Has Won, But It Isn't Finished
================================================================================
> After the wins of 2014, what's next?
@ -31,7 +32,7 @@ In other words, whatever amazing free software 2014 has already brought us, we c
via: http://www.computerworlduk.com/blogs/open-enterprise/open-source-has-won-3592314/
Author: [Glyn Moody][a]
Translator: [runningwater](https://github.com/runningwater)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
@ -44,4 +45,4 @@ via: http://www.computerworlduk.com/blogs/open-enterprise/open-source-has-won-35
[5]:http://timesofindia.indiatimes.com/tech/tech-news/Android-tablet-market-share-hits-70-in-Q2-iPads-slip-to-25-Survey/articleshow/38966512.cms
[6]:http://linuxgizmos.com/embedded-developers-prefer-linux-love-android/
[7]:http://www.computerworlduk.com/blogs/open-enterprise/allseen-3591023/
[8]:http://peerproduction.net/issues/issue-3-free-software-epistemics/debate/there-is-no-free-software/

View File

@ -0,0 +1,29 @@
Linus Tells Wired Leap Second Irrelevant
================================================================================
![](https://farm4.staticflickr.com/3852/14863156322_a354770b14_o.jpg)
Two major publications today featured Linux and the effect of the upcoming leap second. The Register said today that the leap-second problems of years past are no longer an issue. Coincidentally, Wired talked to Linus Torvalds about the same issue today as well.
**Linus Torvalds** spoke with Wired's Robert McMillan about the approaching leap second due to be added in June. The Register said the last leap second in 2012 took out Mozilla, StumbleUpon, Yelp, FourSquare, Reddit and LinkedIn as well as several major airlines and travel reservation services that ran Linux. Torvalds told Wired today that the kernel is patched and he doesn't expect too many issues this time around. [He said][1], "Just take the leap second as an excuse to have a small nonsensical party for your closest friends. Wear silly hats, get a banner printed, and get silly drunk. That's exactly how relevant it should be to most people."
**However**, The Register said not everyone agrees with Torvalds' sentiments. They quote the Daily Mail as saying, "The year 2015 will have an extra second — which could wreak havoc on the infrastructure powering the Internet," then remind us of the Y2K scare that ended up being a non-event. The Register's Gavin [Clarke concluded][2]:
> No reason the Penguins were caught sans pants.
> Now they've gone belt and braces.
The take-away is: move along, nothing to see here.
--------------------------------------------------------------------------------
via: http://ostatic.com/blog/linus-tells-wired-leap-second-irrelevant
Author: [Susan Linton][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://ostatic.com/member/susan-linton
[1]:http://www.wired.com/2015/01/torvalds_leapsecond/
[2]:http://www.theregister.co.uk/2015/01/09/leap_second_bug_linux_hysteria/

View File

@ -0,0 +1,36 @@
diff -u: What's New in Kernel Development
================================================================================
**David Drysdale** wanted to add Capsicum security features to Linux after he noticed that FreeBSD already had Capsicum support. Capsicum defines fine-grained security privileges, not unlike filesystem capabilities. But as David discovered, Capsicum also has some controversy surrounding it.
Capsicum has been around for a while and was described in a USENIX paper in 2010: [http://www.cl.cam.ac.uk/research/security/capsicum/papers/2010usenix-security-capsicum-website.pdf][1].
Part of the controversy is just because of the similarity with capabilities. As Eric Biederman pointed out during the discussion, it would be possible to implement features approaching Capsicum's as an extension of capabilities, but implementing Capsicum directly would involve creating a whole new (and extensive) abstraction layer in the kernel. David argued, though, that capabilities couldn't actually be extended far enough to match Capsicum's fine-grained security controls.
Capsicum also was controversial within its own developer community. For example, as Eric described, it lacked a specification for how to revoke privileges. And, David pointed out that this was because the community couldn't agree on how that could best be done. David quoted an e-mail sent by Ben Laurie to the cl-capsicum-discuss mailing list in 2011, where Ben said, "It would require additional book-keeping to find and revoke outstanding capabilities, which requires knowing how to reach capabilities, and then whether they are derived from the capability being revoked. It also requires an authorization model for revocation. The former two points mean additional overhead in terms of data structure operations and synchronisation."
Given the ongoing controversy within the Capsicum developer community and the corresponding lack of specification of key features, and given the existence of capabilities that already perform a similar function in the kernel and the invasiveness of Capsicum patches, Eric was opposed to David implementing Capsicum in Linux.
But, given the fact that capabilities are much coarser-grained than Capsicum's security features, to the point that capabilities can't really be extended far enough to mimic Capsicum's features, and given that FreeBSD already has Capsicum implemented in its kernel, showing that it can be done and that people might want it, it seems there will remain a lot of folks interested in getting Capsicum into the Linux kernel.
Sometimes it's unclear whether there's a bug in the code or just a bug in the written specification. Henrique de Moraes Holschuh noticed that the Intel Software Developer Manual (vol. 3A, section 9.11.6) said quite clearly that microcode updates required 16-byte alignment for the P6 family of CPUs, the Pentium 4 and the Xeon. But, the code in the kernel's microcode driver didn't enforce that alignment.
In fact, Henrique's investigation uncovered the fact that some Intel chips, like the Xeon X5550 and the second-generation i5 chips, needed only 4-byte alignment in practice, and not 16. However, to conform to the documented specification, he suggested fixing the kernel code to match the spec.
Borislav Petkov objected to this. He said Henrique was looking for problems where there weren't any. He said that Henrique simply had discovered a bug in Intel's documentation, because the alignment issue clearly wasn't a problem in the real world. He suggested alerting the Intel folks to the documentation problem and moving on. As he put it, "If the processor accepts the non-16-byte-aligned update, why do you care?"
But, as H. Peter Anvin remarked, the written spec was Intel's guarantee that certain behaviors would work. If the kernel ignored the spec, it could lead to subtle bugs later on. And, Bill Davidsen said that if the kernel ignored the alignment requirement, and "if the requirement is enforced in some future revision, and updates then fail in some insane way, the vendor is justified in claiming 'I told you so'."
The end result was that Henrique sent in some patches to make the microcode driver enforce the 16-byte alignment requirement.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/diff-u-whats-new-kernel-development-6
Author: [Zack Brown][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.linuxjournal.com/user/801501
[1]:http://www.cl.cam.ac.uk/research/security/capsicum/papers/2010usenix-security-capsicum-website.pdf

View File

@ -1,155 +0,0 @@
How to Backup and Restore Your Apps and PPAs in Ubuntu Using Aptik
================================================================================
![00_lead_image_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x300x00_lead_image_aptik.png.pagespeed.ic.n3TJwp8YK_.png)
If you need to reinstall Ubuntu or if you just want to install a new version from scratch, wouldn't it be useful to have an easy way to reinstall all your apps and settings? You can easily accomplish this using a free tool called Aptik.
Aptik (Automated Package Backup and Restore), an application available in Ubuntu, Linux Mint, and other Debian- and Ubuntu-based Linux distributions, allows you to backup a list of installed PPAs (Personal Package Archives), which are software repositories, downloaded packages, installed applications and themes, and application settings to an external USB drive, network drive, or a cloud service like Dropbox.
NOTE: When we say to type something in this article and there are quotes around the text, DO NOT type the quotes, unless we specify otherwise.
To install Aptik, you must add the PPA. To do so, press Ctrl + Alt + T to open a Terminal window. Type the following text at the prompt and press Enter.
sudo apt-add-repository -y ppa:teejee2008/ppa
Type your password when prompted and press Enter.
![01_command_to_add_repository](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x99x01_command_to_add_repository.png.pagespeed.ic.UfVC9QLj54.png)
Type the following text at the prompt to make sure the repository is up-to-date.
sudo apt-get update
![02_update_command](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x252x02_update_command.png.pagespeed.ic.m9pvd88WNx.png)
When the update is finished, you are ready to install Aptik. Type the following text at the prompt and press Enter.
sudo apt-get install aptik
NOTE: You may see some errors about packages that the update failed to fetch. If they are similar to the ones listed on the following image, you should have no problem installing Aptik.
![03_command_to_install_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x03_command_to_install_aptik.png.pagespeed.ic.1jtHysRO9h.png)
The progress of the installation displays and then a message displays saying how much disk space will be used. When asked if you want to continue, type a “y” and press Enter.
![04_do_you_want_to_continue](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x04_do_you_want_to_continue.png.pagespeed.ic.WQ15_UxK5Z.png)
When the installation is finished, close the Terminal window by typing "exit" and pressing Enter, or by clicking the "X" button in the upper-left corner of the window.
![05_closing_terminal_window](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x05_closing_terminal_window.png.pagespeed.ic.9QoqwM7Mfr.png)
Before running Aptik, you should set up a backup directory on a USB flash drive, a network drive, or in a cloud account, such as Dropbox or Google Drive. For this example, we will use Dropbox.
![06_creating_backup_folder](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x243x06_creating_backup_folder.png.pagespeed.ic.7HzR9KwAfQ.png)
Once your backup directory is set up, click the “Search” button at the top of the Unity Launcher bar.
![07_opening_search](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x177x07_opening_search.png.pagespeed.ic.qvFiw6_sXa.png)
Type “aptik” in the search box. Results of the search display as you type. When the icon for Aptik displays, click on it to open the application.
![08_starting_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x338x08_starting_aptik.png.pagespeed.ic.8fSl4tYR0n.png)
A dialog box displays asking for your password. Enter your password in the edit box and click “OK.”
![09_entering_password](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x337x09_entering_password.png.pagespeed.ic.yanJYFyP1i.png)
The main Aptik window displays. Select “Other…” from the “Backup Directory” drop-down list. This allows you to select the backup directory you created.
NOTE: The “Open” button to the right of the drop-down list opens the selected directory in a Files Manager window.
![10_selecting_other_for_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x533x10_selecting_other_for_directory.png.pagespeed.ic.dHbmYdAHYx.png)
On the “Backup Directory” dialog box, navigate to your backup directory and then click “Open.”
NOTE: If you haven't created a backup directory yet, or you want to add a subdirectory in the selected directory, use the "Create Folder" button to create a new directory.
![11_choosing_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x470x11_choosing_directory.png.pagespeed.ic.E-56x54cy9.png)
To backup the list of installed PPAs, click “Backup” to the right of “Software Sources (PPAs).”
![12_clicking_backup_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x13_selecting_all_software_sources.png.pagespeed.ic.zDFiDGfnks.png)
The “Backup Software Sources” dialog box displays. The list of installed packages and the associated PPA for each displays. Select the PPAs you want to backup, or use the “Select All” button to select all the PPAs in the list.
![13_selecting_all_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x13_selecting_all_software_sources.png.pagespeed.ic.zDFiDGfnks.png)
Click “Backup” to begin the backup process.
![14_clicking_backup_for_all_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x14_clicking_backup_for_all_software_sources.png.pagespeed.ic.n5h_KnQVZa.png)
A dialog box displays when the backup is finished telling you the backup was created successfully. Click “OK” to close the dialog box.
A file named “ppa.list” will be created in the backup directory.
![15_closing_finished_dialog_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x15_closing_finished_dialog_software_sources.png.pagespeed.ic.V25-KgSXdY.png)
The next item, “Downloaded Packages (APT Cache)”, is only useful if you are re-installing the same version of Ubuntu. It backs up the packages in your system cache (/var/cache/apt/archives). If you are upgrading your system, you can skip this step because the packages for the new version of the system will be newer than the packages in the system cache.
Backing up downloaded packages and then restoring them on the re-installed Ubuntu system will save time and Internet bandwidth when the packages are reinstalled. Because the packages will be available in the system cache once you restore them, the download will be skipped and the installation of the packages will complete more quickly.
If you are reinstalling the same version of your Ubuntu system, click the “Backup” button to the right of “Downloaded Packages (APT Cache)” to backup the packages in the system cache.
NOTE: When you backup the downloaded packages, there is no secondary dialog box. The packages in your system cache (/var/cache/apt/archives) are copied to an “archives” directory in the backup directory and a dialog box displays when the backup is finished, indicating that the packages were copied successfully.
![16_downloaded_packages_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x544x16_downloaded_packages_backed_up.png.pagespeed.ic.z8ysuwzQAK.png)
There are some packages that are part of your Ubuntu distribution. These are not checked, since they are automatically installed when you install the Ubuntu system. For example, Firefox is a package that is installed by default in Ubuntu and other similar Linux distributions. Therefore, it will not be selected by default.
Packages that you installed after installing the system, such as the [package for the Chrome web browser][1] or the package containing Aptik (yes, Aptik is automatically selected to back up), are selected by default. This allows you to easily back up the packages that are not included in the system when installed.
Select the packages you want to back up and de-select the packages you don't want to back up. Click "Backup" to the right of "Software Selections" to back up the selected top-level packages.
NOTE: Dependency packages are not included in this backup.
![18_clicking_backup_for_software_selections](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x18_clicking_backup_for_software_selections.png.pagespeed.ic.QI5D-IgnP_.png)
Two files, named “packages.list” and “packages-installed.list”, are created in the backup directory and a dialog box displays indicating that the backup was created successfully. Click “OK” to close the dialog box.
NOTE: The “packages-installed.list” file lists all the packages. The “packages.list” file also lists all the packages, but indicates which ones were selected.
![19_software_selections_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x19_software_selections_backed_up.png.pagespeed.ic.LVmgs6MKPL.png)
To backup settings for installed applications, click the “Backup” button to the right of “Application Settings” on the main Aptik window. Select the settings you want to back up and click “Backup”.
NOTE: Click the “Select All” button if you want to back up all application settings.
![20_backing_up_app_settings](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x20_backing_up_app_settings.png.pagespeed.ic.7_kgU3Dj_m.png)
The selected settings files are zipped into a file called “app-settings.tar.gz”.
![21_zipping_settings_files](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x21_zipping_settings_files.png.pagespeed.ic.dgoBj7egqv.png)
When the zipping is complete, the zipped file is copied to the backup directory and a dialog box displays telling you that the backups were created successfully. Click “OK” to close the dialog box.
![22_app_settings_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22_app_settings_backed_up.png.pagespeed.ic.Mb6utyLJ3W.png)
Themes from the "/usr/share/themes" directory and icons from the "/usr/share/icons" directory can also be backed up. To do so, click the "Backup" button to the right of "Themes and Icons". The "Backup Themes" dialog box displays with all the themes and icons selected by default. De-select any themes or icons you don't want to back up and click "Backup."
![22a_backing_up_themes_and_icons](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22a_backing_up_themes_and_icons.png.pagespeed.ic.KXa8W3YhyF.png)
The themes are zipped and copied to a “themes” directory in the backup directory and the icons are zipped and copied to an “icons” directory in the backup directory. A dialog box displays telling you that the backups were created successfully. Click “OK” to close the dialog box.
![22b_themes_and_icons_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22b_themes_and_icons_backed_up.png.pagespeed.ic.ejjRaymD39.png)
Once you've completed the desired backups, close Aptik by clicking the "X" button in the upper-left corner of the main window.
![23_closing_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x542x23_closing_aptik.png.pagespeed.ic.pNk9Vt3--l.png)
Your backup files are available in the backup directory you chose.
![24_backup_files_in_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x374x24_backup_files_in_directory.png.pagespeed.ic.vwblOfN915.png)
When you re-install your Ubuntu system or install a new version of Ubuntu, install Aptik on the newly installed system and make the backup files you generated available to the system. Run Aptik and use the “Restore” button for each item to restore your PPAs, applications, packages, settings, themes, and icons.
--------------------------------------------------------------------------------
via: http://www.howtogeek.com/206454/how-to-backup-and-restore-your-apps-and-ppas-in-ubuntu-using-aptik/
Author: Lori Kaufman
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[1]:http://www.howtogeek.com/203768

View File

@ -1,75 +0,0 @@
(translating by runningwater)
Linux FAQs with Answers--How to install 7zip on Linux
================================================================================
> **Question**: I need to extract files from an ISO image, and for that I want to use 7zip program. How can I install 7zip on [insert your Linux distro]?
7zip is an open-source archive program originally developed for Windows, which can pack or unpack a variety of archive formats including its native format 7z as well as XZ, GZIP, TAR, ZIP and BZIP2. 7zip is also popularly used to extract RAR, DEB, RPM and ISO files. Besides simple archiving, 7zip can support AES-256 encryption as well as self-extracting and multi-volume archiving. For POSIX systems (Linux, Unix, BSD), the original 7zip program has been ported as p7zip (short for "POSIX 7zip").
Here is how to install 7zip (or p7zip) on Linux.
### Install 7zip on Debian, Ubuntu or Linux Mint ###
Debian-based distributions come with three packages related to 7zip.
- **p7zip**: contains 7zr (a minimal 7zip archive tool) which can handle its native 7z format only.
- **p7zip-full**: contains 7z which can support 7z, LZMA2, XZ, ZIP, CAB, GZIP, BZIP2, ARJ, TAR, CPIO, RPM, ISO and DEB.
- **p7zip-rar**: contains a plugin for extracting RAR files.
It is recommended to install the p7zip-full package (not p7zip) since it is the most complete 7zip package, supporting many archive formats. In addition, if you want to extract RAR files, you need to install the p7zip-rar package as well. The reason for a separate plugin package is that RAR is a proprietary format.
$ sudo apt-get install p7zip-full p7zip-rar
### Install 7zip on Fedora or CentOS/RHEL ###
Red Hat-based distributions offer two packages related to 7zip.
- **p7zip**: contains 7za command which can support 7z, ZIP, GZIP, CAB, ARJ, BZIP2, TAR, CPIO, RPM and DEB.
- **p7zip-plugins**: contains 7z command and additional plugins to extend 7za command (e.g., ISO extraction).
On CentOS/RHEL, you need to enable the [EPEL repository][1] before running the yum command below. On Fedora, there is no need to set up an additional repository.
$ sudo yum install p7zip p7zip-plugins
Note that unlike Debian based distributions, Red Hat based distributions do not offer a RAR plugin. Therefore you will not be able to extract RAR files using 7z command.
### Create or Extract an Archive with 7z ###
Once you have installed 7zip, you can use the 7z command to pack or unpack various types of archives. The 7z command uses other plugins to handle the archives.
![](https://farm8.staticflickr.com/7583/15874000610_878a85b06a_b.jpg)
To create an archive, use "a" option. Supported archive types for creation are 7z, XZ, GZIP, TAR, ZIP and BZIP2. If the specified archive file already exists, it will "add" the files to the existing archive, instead of overwriting it.
$ 7z a <archive-filename> <list-of-files>
To extract an archive, use "e" option. It will extract the archive in the current directory. Supported archive types for extraction are a lot more than those for creation. The list includes 7z, XZ, GZIP, TAR, ZIP, BZIP2, LZMA2, CAB, ARJ, CPIO, RPM, ISO and DEB.
$ 7z e <archive-filename>
Another way to unpack an archive is to use "x" option. Unlike "e" option, it will extract the content with full paths.
$ 7z x <archive-filename>
To see a list of files in an archive, use "l" option.
$ 7z l <archive-filename>
You can update or remove file(s) in an archive with "u" and "d" options, respectively.
$ 7z u <archive-filename> <list-of-files-to-update>
$ 7z d <archive-filename> <list-of-files-to-delete>
To test the integrity of an archive:
$ 7z t <archive-filename>
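Putting it together, a minimal session might look like this (file names are hypothetical):

$ 7z a docs.7z notes.txt todo.txt
$ 7z l docs.7z
$ 7z x docs.7z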
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/install-7zip-linux.html
Translator: [runningwater](https://github.com/runningwater)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[1]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html

View File

@ -1,134 +0,0 @@
Translating by 小眼儿
Docker Image Insecurity
================================================================================
Recently while downloading an “official” container image with Docker I saw this line:
ubuntu:14.04: The image you are pulling has been verified
I assumed this referenced Docker's [heavily promoted][1] image signing system and didn't investigate further at the time. Later, while researching the cryptographic digest system that Docker tries to secure images with, I had the opportunity to explore further. What I found was a total systemic failure of all logic related to image security.
Docker's report that a downloaded image is "verified" is based solely on the presence of a signed manifest, and Docker never verifies the image checksum from the manifest. An attacker could provide any image alongside a signed manifest. This opens the door to a number of serious vulnerabilities.
Images are downloaded from an HTTPS server and go through an insecure streaming processing pipeline in the Docker daemon:
[decompress] -> [tarsum] -> [unpack]
This pipeline is performant but completely insecure. Untrusted input should not be processed before verifying its signature. Unfortunately Docker processes images three times before checksum verification is supposed to occur.
However, despite [Docker's claims][2], image checksums are never actually checked. This is the only section[0][3] of Docker's code related to verifying image checksums, and I was unable to trigger the warning even when presenting images with mismatched checksums.
if img.Checksum != "" && img.Checksum != checksum {
	log.Warnf("image layer checksum mismatch: computed %q, expected %q",
		checksum, img.Checksum)
}
### Insecure processing pipeline ###
**Decompress**
Docker supports three compression algorithms: gzip, bzip2, and xz. The first two use the Go standard library implementations, which are [memory-safe][4], so the exploit types I'd expect to see here are denial-of-service attacks like crashes and excessive CPU and memory usage.
The third compression algorithm, xz, is more interesting. Since there is no native Go implementation, Docker [execs][5] the `xz` binary to do the decompression.
The xz binary comes from the [XZ Utils][6] project, and is built from approximately[1][7] twenty thousand lines of C code. C is not a memory-safe language. This means malicious input to a C program, in this case the Docker image XZ Utils is unpacking, could potentially execute arbitrary code.
Docker exacerbates this situation by *running* `xz` as root. This means that if there is a single vulnerability in `xz`, a call to `docker pull` could result in the complete compromise of your entire system.
**Tarsum**
The use of tarsum is well-meaning but completely flawed. In order to get a deterministic checksum of the contents of an arbitrarily encoded tar file, Docker decodes the tar and then hashes specific portions, while excluding others, in a [deterministic order][8].
Since this processing is done in order to generate the checksum, it is decoding untrusted data which could be designed to exploit the tarsum code[2][9]. Potential exploits here are denial of service as well as logic flaws that could cause files to be injected, skipped, processed differently, modified, appended to, etc. without the checksum changing.
**Unpacking**
Unpacking consists of decoding the tar and placing files on the disk. This is extraordinarily dangerous as there have been three other vulnerabilities reported[3][10] in the unpack stage at the time of writing.
There is no situation where data that has not been verified should be unpacked onto disk.
### libtrust ###
[libtrust][11] is a Docker package that claims to provide “authorization and access control through a distributed trust graph.” Unfortunately no specification appears to exist, however it looks like it implements some parts of the [Javascript Object Signing and Encryption][12] specifications along with other unspecified algorithms.
Downloading an image with a manifest signed and verified using libtrust is what triggers this inaccurate message (only the manifest is checked, not the actual image contents):
ubuntu:14.04: The image you are pulling has been verified
Currently only “official” image manifests published by Docker, Inc are signed using this system, but from discussions I participated in at the last Docker Governance Advisory Board meeting[4][13], my understanding is that Docker, Inc is planning on deploying this more widely in the future. The intended goal is centralization with Docker, Inc controlling a Certificate Authority that then signs images and/or client certificates.
I looked for the signing key in Docker's code but was unable to find it. As it turns out, the key is not embedded in the binary as one would expect. Instead, the Docker daemon fetches it [over HTTPS from a CDN][14] before each image download. This is a terrible approach, as a variety of attacks could lead to trusted keys being replaced with malicious ones. These attacks include but are not limited to: compromise of the CDN vendor, compromise of the CDN origin serving the key, and man-in-the-middle attacks on clients downloading the keys.
### Remediation ###
I [reported][15] some of the issues I found with the tarsum system before I finished this research, but so far nothing I have reported has been fixed.
Some steps I believe should be taken to improve the security of the Docker image download system:
**Drop tarsum and actually verify image digests**
Tarsum should not be used for security. Instead, images must be fully downloaded and their cryptographic signatures verified before any processing takes place.
**Add privilege isolation**
Image processing steps that involve decompression or unpacking should be run in isolated processes (containers?) that have only the bare minimum required privileges to operate. There is no scenario where a decompression tool like `xz` should be run as root.
**Replace libtrust**
Libtrust should be replaced with [The Update Framework][16] which is explicitly designed to solve the real problems around signing software binaries. The threat model is very comprehensive and addresses many things that have not been considered in libtrust. There is a complete specification as well as a reference implementation written in Python, and I have begun work on a [Go implementation][17] and welcome contributions.
As part of adding TUF to Docker, a local keystore should be added that maps root keys to registry URLs so that users can have their own signing keys that are not managed by Docker, Inc.
I would like to note that using non-Docker, Inc hosted registries is a very poor user experience in general. Docker, Inc seems content with relegating third party registries to second class status when there is no technical reason to do so. This is a problem both for the ecosystem in general and the security of end users. A comprehensive, decentralized security model for third party registries is both necessary and desirable. I encourage Docker, Inc to take this into consideration when redesigning their security model and image verification system.
### Conclusion ###
Docker users should be aware that the code responsible for downloading images is shockingly insecure. Users should only download images whose provenance is without question. At present, this does *not* include “trusted” images hosted by Docker, Inc including the official Ubuntu and other base images.
The best option is to block `index.docker.io` locally, and download and verify images manually before importing them into Docker using `docker load`. Red Hat's security blog has [a good post about this][18].
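A minimal sketch of that manual workflow might look like the following; the download URL and checksum file are placeholders, and the digest must come from a channel you already trust:

    # hypothetical example: fetch an image tarball out of band,
    # verify its digest, and only then hand it to Docker
    wget https://example.com/images/ubuntu-14.04.tar
    sha256sum -c ubuntu-14.04.tar.sha256
    docker load < ubuntu-14.04.tar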
Thanks to Lewis Marshall for pointing out that the tarsums are never verified.
- [Checksum code context][19].
- [cloc][20] says 18,141 non-blank, non-comment lines of C and 5,900 lines of headers in v5.2.0.
- Very similar bugs have been [found in Android][21], which allowed arbitrary files to be injected into signed packages, and in [the Windows Authenticode][22] signature system, which allowed binary modification.
- Specifically: [CVE-2014-6407][23], [CVE-2014-9356][24], and [CVE-2014-9357][25]. There were two Docker [security releases][26] in response.
- See page 8 of the [notes from the 2014-10-28 DGAB meeting][27].
--------------------------------------------------------------------------------
via: https://titanous.com/posts/docker-insecurity
作者:[titanous][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://twitter.com/titanous
[1]:https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
[2]:https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
[3]:https://titanous.com/posts/docker-insecurity#fn:0
[4]:https://en.wikipedia.org/wiki/Memory_safety
[5]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/pkg/archive/archive.go#L91-L95
[6]:http://tukaani.org/xz/
[7]:https://titanous.com/posts/docker-insecurity#fn:1
[8]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/pkg/tarsum/tarsum_spec.md
[9]:https://titanous.com/posts/docker-insecurity#fn:2
[10]:https://titanous.com/posts/docker-insecurity#fn:3
[11]:https://github.com/docker/libtrust
[12]:https://tools.ietf.org/html/draft-ietf-jose-json-web-signature-11
[13]:https://titanous.com/posts/docker-insecurity#fn:4
[14]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/trust/trusts.go#L38
[15]:https://github.com/docker/docker/issues/9719
[16]:http://theupdateframework.com/
[17]:https://github.com/flynn/go-tuf
[18]:https://securityblog.redhat.com/2014/12/18/before-you-initiate-a-docker-pull/
[19]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/image/image.go#L114-L116
[20]:http://cloc.sourceforge.net/
[21]:http://www.saurik.com/id/17
[22]:http://blogs.technet.com/b/srd/archive/2013/12/10/ms13-098-update-to-enhance-the-security-of-authenticode.aspx
[23]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-6407
[24]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9356
[25]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9357
[26]:https://groups.google.com/d/topic/docker-user/nFAz-B-n4Bw/discussion
[27]:https://docs.google.com/document/d/1JfWNzfwptsMgSx82QyWH_Aj0DRKyZKxYQ1aursxNorg/edit?pli=1

View File

@ -1,207 +0,0 @@
How to configure fail2ban to protect Apache HTTP server
================================================================================
An Apache HTTP server in production environments can be under attack in various different ways. Attackers may attempt to gain access to unauthorized or forbidden directories by using brute-force attacks or executing evil scripts. Some malicious bots may scan your websites for security vulnerabilities, or harvest email addresses and web forms to send spam to.
Apache HTTP server comes with comprehensive logging capabilities capturing various abnormal events indicative of such attacks. However, it is still non-trivial to systematically parse detailed Apache logs and react to potential attacks quickly (e.g., ban/unban offending IP addresses) as they are perpetrated in the wild. That is when `fail2ban` comes to the rescue, making a sysadmin's life easier.
`fail2ban` is an open-source intrusion prevention tool which detects various attacks based on system logs and automatically initiates prevention actions, e.g., banning IP addresses with `iptables`, blocking connections via /etc/hosts.deny, or sending out notification emails. fail2ban comes with a set of predefined "jails" which use application-specific log filters to detect common attacks. You can also write custom jails to deter any specific attack on an arbitrary application.
In this tutorial, I am going to demonstrate how you can configure fail2ban to protect your Apache HTTP server. I assume that you have Apache HTTP server and fail2ban already installed. Refer to [another tutorial][1] for fail2ban installation.
### What is a Fail2ban Jail ###
Let me go over fail2ban jails in more detail. A jail defines an application-specific policy under which fail2ban triggers an action to protect a given application. fail2ban comes with several jails pre-defined in /etc/fail2ban/jail.conf, for popular applications such as Apache, Dovecot, Lighttpd, MySQL, Postfix, [SSH][2], etc. Each jail relies on application-specific log filters (found in /etc/fail2ban/filter.d) to detect common attacks. Let's check out one example jail: the SSH jail.
[ssh]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 6
banaction = iptables-multiport
This SSH jail configuration is defined with several parameters:
- **[ssh]**: the name of a jail with square brackets.
- **enabled**: whether the jail is activated or not.
- **port**: a port number to protect (either a numeric port number or a well-known service name).
- **filter**: a log parsing rule to detect attacks with.
- **logpath**: a log file to examine.
- **maxretry**: maximum number of failures before banning.
- **banaction**: a banning action.
Any parameter defined in a jail configuration will override the corresponding fail2ban-wide default parameter. Conversely, any missing parameter will be assigned a default value defined in the [DEFAULT] section.
Predefined log filters are found in /etc/fail2ban/filter.d, and available actions are in /etc/fail2ban/action.d.
![](https://farm8.staticflickr.com/7538/16076581722_cbca3c1307_b.jpg)
If you want to overwrite `fail2ban` defaults or define any custom jail, you can do so by creating the **/etc/fail2ban/jail.local** file. In this tutorial, I am going to use /etc/fail2ban/jail.local.
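For example, a minimal jail.local that overrides a couple of fail2ban-wide defaults could look like this (the values below are illustrative, not recommendations):

    [DEFAULT]
    # ban offenders for one hour instead of the stock default
    bantime  = 3600
    # never count failures from localhost or the internal LAN
    ignoreip = 127.0.0.1/8 192.168.0.0/24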
### Enable Predefined Apache Jails ###
Default installation of `fail2ban` offers several predefined jails and filters for Apache HTTP server. I am going to enable those built-in Apache jails. Due to slight differences between Debian and Red Hat configurations, let me provide fail2ban jail configurations for them separately.
#### Enable Apache Jails on Debian or Ubuntu ####
To enable predefined Apache jails on a Debian-based system, create /etc/fail2ban/jail.local as follows.
$ sudo vi /etc/fail2ban/jail.local
----------
# detect password authentication failures
[apache]
enabled = true
port = http,https
filter = apache-auth
logpath = /var/log/apache*/*error.log
maxretry = 6
# detect potential search for exploits and php vulnerabilities
[apache-noscript]
enabled = true
port = http,https
filter = apache-noscript
logpath = /var/log/apache*/*error.log
maxretry = 6
# detect Apache overflow attempts
[apache-overflows]
enabled = true
port = http,https
filter = apache-overflows
logpath = /var/log/apache*/*error.log
maxretry = 2
# detect failures to find a home directory on a server
[apache-nohome]
enabled = true
port = http,https
filter = apache-nohome
logpath = /var/log/apache*/*error.log
maxretry = 2
Since none of the jails above specifies an action, all of these jails will perform a default action when triggered. To find out the default action, look for "banaction" under [DEFAULT] section in /etc/fail2ban/jail.conf.
banaction = iptables-multiport
In this case, the default action is iptables-multiport (defined in /etc/fail2ban/action.d/iptables-multiport.conf). This action bans an IP address using iptables with the multiport module.
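If you are curious what a triggered ban actually looks like at the firewall level, you can list the rules fail2ban maintains once a jail has fired (exact chain names vary with the fail2ban version and jail name):

    $ sudo iptables -L -n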
After enabling jails, you must restart fail2ban to load the jails.
$ sudo service fail2ban restart
#### Enable Apache Jails on CentOS/RHEL or Fedora ####
To enable predefined Apache jails on a Red Hat based system, create /etc/fail2ban/jail.local as follows.
$ sudo vi /etc/fail2ban/jail.local
----------
# detect password authentication failures
[apache]
enabled = true
port = http,https
filter = apache-auth
logpath = /var/log/httpd/*error_log
maxretry = 6
# detect spammer robots crawling email addresses
[apache-badbots]
enabled = true
port = http,https
filter = apache-badbots
logpath = /var/log/httpd/*access_log
bantime = 172800
maxretry = 1
# detect potential search for exploits and php vulnerabilities
[apache-noscript]
enabled = true
port = http,https
filter = apache-noscript
logpath = /var/log/httpd/*error_log
maxretry = 6
# detect Apache overflow attempts
[apache-overflows]
enabled = true
port = http,https
filter = apache-overflows
logpath = /var/log/httpd/*error_log
maxretry = 2
# detect failures to find a home directory on a server
[apache-nohome]
enabled = true
port = http,https
filter = apache-nohome
logpath = /var/log/httpd/*error_log
maxretry = 2
# detect failures to execute non-existing scripts that
# are associated with several popular web services
# e.g. webmail, phpMyAdmin, WordPress
[apache-botsearch]
enabled = true
port = http,https
filter = apache-botsearch
logpath = /var/log/httpd/*error_log
maxretry = 2
Note that the default action for all these jails is iptables-multiport (defined as "banaction" under [DEFAULT] in /etc/fail2ban/jail.conf). This action bans an IP address using iptables with the multiport module.
After enabling the jails, you must restart fail2ban so that it loads them.
On Fedora or CentOS/RHEL 7:
$ sudo systemctl restart fail2ban
On CentOS/RHEL 6:
$ sudo service fail2ban restart
### Check and Manage Fail2ban Banning Status ###
Once jails are activated, you can monitor current banning status with fail2ban-client command-line tool.
To see a list of active jails:
$ sudo fail2ban-client status
To see the status of a particular jail (including banned IP list):
$ sudo fail2ban-client status [name-of-jail]
![](https://farm8.staticflickr.com/7572/15891521967_5c6cbc5f8f_c.jpg)
You can also manually ban or unban IP addresses.
To ban an IP address with a particular jail:
$ sudo fail2ban-client set [name-of-jail] banip [ip-address]
To unban an IP address blocked by a particular jail:
$ sudo fail2ban-client set [name-of-jail] unbanip [ip-address]
### Summary ###
This tutorial explains how a fail2ban jail works and how to protect an Apache HTTP server using the built-in Apache jails. Depending on your environment and the types of web services you need to protect, you may need to adapt the existing jails, or write custom jails and log filters. Check out fail2ban's [official GitHub page][3] for more up-to-date examples of jails and filters.
Are you using fail2ban in any production environment? Share your experience.
--------------------------------------------------------------------------------
via: http://xmodulo.com/configure-fail2ban-apache-http-server.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/how-to-protect-ssh-server-from-brute-force-attacks-using-fail2ban.html
[2]:http://xmodulo.com/how-to-protect-ssh-server-from-brute-force-attacks-using-fail2ban.html
[3]:https://github.com/fail2ban/fail2ban

View File

@ -1,3 +1,5 @@
hi ! 让我来翻译
How to debug a C/C++ program with Nemiver debugger
================================================================================
If you read [my post on GDB][1], you know how important and useful I think a debugger can be for a C/C++ program. However, if a command line debugger like GDB sounds more like a problem than a solution to you, you might be more interested in Nemiver. [Nemiver][2] is a GTK+-based standalone graphical debugger for C/C++ programs, using GDB as its back-end. Admirable for its speed and stability, Nemiver is a very reliable debugger filled with goodies.
@ -106,4 +108,4 @@ via: http://xmodulo.com/debug-program-nemiver-debugger.html
[1]:http://xmodulo.com/gdb-command-line-debugger.html
[2]:https://wiki.gnome.org/Apps/Nemiver
[3]:https://download.gnome.org/sources/nemiver/0.9/
[4]:http://xmodulo.com/recommend/linuxclibook
[4]:http://xmodulo.com/recommend/linuxclibook

View File

@ -1,76 +0,0 @@
How to deduplicate files on Linux with dupeGuru
================================================================================
Recently, I was given the task of cleaning up my father's files and folders. What made it difficult was the abnormal number of duplicate files with incorrect names. By keeping a backup on an external drive, simultaneously editing multiple versions of the same file, or even changing the directory structure, the same file can get copied many times, change names, change locations, and just clog disk space. Hunting down every single one of them can become a problem of gigantic proportions. Fortunately, there exists a nice little piece of software that can save you precious hours by finding and removing duplicate files on your system: [dupeGuru][1]. Written in Python, this file deduplication software switched to a GPLv3 license a few hours ago. So time to apply your new year's resolutions and clean up your stuff!
### Installation of dupeGuru ###
On Ubuntu, you can add the Hardcoded Software PPA:
$ sudo apt-add-repository ppa:hsoft/ppa
$ sudo apt-get update
And then install with:
$ sudo apt-get install dupeguru-se
On Arch Linux, the package is present in the [AUR][2].
If you prefer compiling it yourself, the sources are on [GitHub][3].
### Basic Usage of dupeGuru ###
dupeGuru is conceived to be fast and safe, which means that the program is not going to run berserk on your system: it has a very low risk of deleting stuff that you did not intend to delete. However, as we are still talking about file deletion, it is always a good idea to stay vigilant and cautious: a good backup is always necessary.
Once you have taken your precautions, you can launch dupeGuru via the command:
$ dupeguru_se
You should be greeted by the folder selection screen, where you can add folders to scan for deduplication.
![](https://farm9.staticflickr.com/8596/16199976251_f78b042fba.jpg)
Once you selected your directories and launched the scan, dupeGuru will show its results by grouping duplicate files together in a list.
![](https://farm9.staticflickr.com/8600/16016041367_5ab2834efb_z.jpg)
Note that by default dupeGuru matches files based on their content, not their name. To be sure that you do not accidentally delete something important, the match column shows you the accuracy of the matching algorithm. From there, you can select the duplicate files that you want to take action on, and click on the "Actions" button to see the available actions.
![](https://farm8.staticflickr.com/7516/16199976361_c8f919b06e_b.jpg)
The choice of actions is quite extensive. In short, you can delete the duplicates, move them to another location, ignore them, open them, rename them, or even invoke a custom command on them. If you choose to delete a duplicate, you might get as pleasantly surprised as I was by available deletion options.
![](https://farm8.staticflickr.com/7503/16014366568_54f70e3140.jpg)
You can not only send the duplicate files to the trash or delete them permanently, but you can also choose to leave a link to the original file (either a symlink or a hardlink). In other words, the duplicates will be erased, and a link to the original will be left instead, saving a lot of disk space. This can be particularly useful if you imported those files into a workspace, or have dependencies based on them.
Another fancy option: you can export the results to an HTML or CSV file. Not really sure why you would do that, but I suppose it can be useful if you prefer keeping track of duplicates rather than using any of dupeGuru's actions on them.
Finally, last but not least, the preferences menu will make all your dreams about duplicate busting come true.
![](https://farm8.staticflickr.com/7493/16015755749_a9f343b943_z.jpg)
There you can select the criterion for the scan, either content based or name based, and a threshold for duplicates to control the number of results. It is also possible to define the custom command that you can select in the actions. Among the myriad of other little options, it is good to notice that, by default, dupeGuru ignores files smaller than 10KB.
For more information, I suggest that you go check out the [official website][4], which is filled with documentation, support forums, and other goodies.
To conclude, dupeGuru is my go-to software whenever I have to prepare a backup or free some space. I find it powerful enough for advanced users, and yet intuitive to use for newcomers. Cherry on the cake: dupeGuru is cross-platform, which means that you can also use it on your Mac or Windows PC. If you have specific needs, and want to clean up music or image files, there exist two variations: [dupeguru-me][5] and [dupeguru-pe][6], which respectively find duplicate audio tracks and pictures. The main difference from the regular version is that they compare beyond file formats and take into account specific media metadata like quality and bit-rate.
What do you think of dupeGuru? Would you consider using it? Or do you have any alternative deduplication software to suggest? Let us know in the comments.
--------------------------------------------------------------------------------
via: http://xmodulo.com/dupeguru-deduplicate-files-linux.html
作者:[Adrien Brochard][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://www.hardcoded.net/dupeguru/
[2]:https://aur.archlinux.org/packages/dupeguru-se/
[3]:https://github.com/hsoft/dupeguru
[4]:http://www.hardcoded.net/dupeguru/
[5]:http://www.hardcoded.net/dupeguru_me/
[6]:http://www.hardcoded.net/dupeguru_pe/

View File

@ -1,71 +0,0 @@
How to Install SSL on Apache 2.4 in Ubuntu 14.0.4
================================================================================
Today I will show you how to install an **SSL certificate** on your personal website or blog, to help secure the communications between your visitors and your website.
Secure Sockets Layer, or SSL, is the standard security technology for creating an encrypted connection between a web server and a web browser. This ensures that all data passed between the web server and the web browser remains private and secure. It is used by millions of websites in the protection of their online communications with their customers. In order to be able to establish an SSL connection, a web server requires an SSL certificate.
You can create your own SSL certificate, but it will not be trusted by default in web browsers. To fix this, you have to buy a digital certificate from a trusted Certification Authority (CA); below we will show you how to get the certificate and install it in Apache.
### Generating a Certificate Signing Request ###
The Certification Authority (CA) will ask you for a Certificate Signing Request (CSR) generated on your web server. This is a simple step that only takes a minute; you will have to run the following command and input the requested information:
# openssl req -new -newkey rsa:2048 -nodes -keyout yourdomainname.key -out yourdomainname.csr
The output should look something like this:
![generate csr](http://blog.linoxide.com/wp-content/uploads/2015/01/generate-csr.jpg)
This generates two files: the private key file, used for the decryption of your SSL traffic, and the certificate signing request (CSR) file, used to apply for your SSL certificate.
Depending on the authority you apply to, you will either have to upload your CSR file or paste its content into a web form.
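Before submitting it, you may want to inspect the CSR to confirm the details you entered; this is a standard openssl invocation:

    # openssl req -in yourdomainname.csr -noout -text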
### Installing the actual certificate in Apache ###
After the generation process is finished, you will receive your new digital certificate; for this article we have used [Comodo SSL][1] and received the certificate in a zip file. To use it in Apache, you will first have to create a bundle of the certificates you received in the zip file, with the following command:
# cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > bundle.crt
![bundle](http://blog.linoxide.com/wp-content/uploads/2015/01/bundle.jpg)
Now make sure that the ssl module is loaded in apache by running the following command:
# a2enmod ssl
If you get the message "Module ssl already enabled" you are OK; if you get the message "Enabling module ssl." you will also have to run the following command to restart Apache:
# service apache2 restart
Finally modify your virtual host file (generally found in /etc/apache2/sites-enabled) to look something like this:
    <VirtualHost *:443>
        DocumentRoot /var/www/html/
        ServerName linoxide.com
        SSLEngine on
        SSLCertificateFile /usr/local/ssl/crt/yourdomainname.crt
        SSLCertificateKeyFile /usr/local/ssl/yourdomainname.key
        SSLCACertificateFile /usr/local/ssl/bundle.crt
    </VirtualHost>
You should now access your website using https://YOURDOMAIN/ (be careful to use 'https', not 'http') and see the SSL connection in progress (generally indicated by a lock icon in your web browser).
**NOTE:** All the links must now point to https. If some of the content on the website (like images or CSS files) still points to http links, you will get a warning in the browser; to fix this you have to make sure that every link points to https.
### Redirect HTTP requests to HTTPS version of your website ###
If you wish to redirect the normal HTTP requests to HTTPS version of your website, add the following text to either the virtual host you wish to apply it to or to the apache.conf if you wish to apply it for all websites hosted on the server:
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
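If you prefer an explicit permanent redirect that also stops further rule processing, you can optionally add flags to the rule, like this:

    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]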
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/install-ssl-apache-2-4-in-ubuntu/
作者:[Adrian Dinu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/adriand/
[1]:https://ssl.comodo.com/

View File

@ -1,129 +0,0 @@
How to Install Scrapy a Web Crawling Tool in Ubuntu 14.04 LTS
================================================================================
Scrapy is open source software used for extracting data from websites. The Scrapy framework is developed in Python, and it performs the crawling job in a fast, simple and extensible way. We have created a Virtual Machine (VM) in VirtualBox with Ubuntu 14.04 LTS installed on it.
### Install Scrapy ###
Scrapy depends on Python, the Python development libraries and pip. The latest version of Python is pre-installed on Ubuntu, so we only have to install pip and the Python developer libraries before installing Scrapy.
pip is the replacement for easy_install as the Python package installer. It is used for the installation and management of Python packages. Installation of the pip package is shown in Figure 1.
sudo apt-get install python-pip
![Fig:1 Pip installation](http://blog.linoxide.com/wp-content/uploads/2014/11/f1.png)
Fig:1 Pip installation
We have to install the Python development libraries using the following command. If this package is not installed, installation of the Scrapy framework will generate an error about the python.h header file.
sudo apt-get install python-dev
![Fig:2 Python Developer Libraries](http://blog.linoxide.com/wp-content/uploads/2014/11/f2.png)
Fig:2 Python Developer Libraries
The Scrapy framework can be installed either from a deb package or from source code. We have installed it using pip (the Python package manager), which is shown in Figure 3.
sudo pip install scrapy
![Fig:3 Scrapy Installation](http://blog.linoxide.com/wp-content/uploads/2014/11/f3.png)
Fig:3 Scrapy Installation
Successful installation of Scrapy takes some time, as shown in Figure 4.
![Fig:4 Successful installation of Scrapy Framework](http://blog.linoxide.com/wp-content/uploads/2014/11/f4.png)
Fig:4 Successful installation of Scrapy Framework
### Data extraction using Scrapy framework ###
**(Basic Tutorial)**
We will use Scrapy to extract the names of the stores (those providing Cards) from the fatwallet.com web site. First of all, we created a new Scrapy project "store_name" using the command given below, shown in Figure 5.
    $ sudo scrapy startproject store_name
![Fig:5 Creation of new project in Scrapy Framework](http://blog.linoxide.com/wp-content/uploads/2014/11/f5.png)
Fig:5 Creation of new project in Scrapy Framework
The above command creates a directory titled "store_name" at the current path. This main directory of the project contains the files/folders which are shown in the following Figure 6.
    $ sudo ls -lR store_name
![Fig:6 Contents of store_name project.](http://blog.linoxide.com/wp-content/uploads/2014/11/f6.png)
Fig:6 Contents of store_name project.
A brief description of each file/folder is given below:
- scrapy.cfg is the project configuration file
- store_name/ is another directory inside the main directory. This directory contains the Python code of the project.
- store_name/items.py contains those items which will be extracted by the spider.
- store_name/pipelines.py is the pipelines file.
- The settings of the store_name project are in the store_name/settings.py file.
- and the store_name/spiders/ directory contains the spiders for crawling
As we are interested in extracting the store names of the Cards from the fatwallet.com site, we updated the contents of the store_name/items.py file as shown below.
    import scrapy

    class StoreNameItem(scrapy.Item):
        name = scrapy.Field()  # extract the names of Cards store
After this, we have to write a new spider under the store_name/spiders/ directory of the project. A spider is a Python class which consists of the following mandatory attributes:
1. The name of the spider (name)
1. The starting URLs of the spider for crawling (start_urls)
1. And the parse method, which contains the regex for the extraction of the desired items from the page response. The parse method is the most important part of a spider.
We created the spider "store_name.py" under the store_name/spiders/ directory, and added the following Python code for the extraction of the store names from the fatwallet.com site. The output of the spider is written to a file (**StoreName.txt**), which is shown in Figure 7.
    from scrapy.selector import Selector
    from scrapy.spider import BaseSpider
    from scrapy.http import Request
    from scrapy.http import FormRequest
    import re

    class StoreNameItem(BaseSpider):
        name = "storename"
        allowed_domains = ["fatwallet.com"]
        start_urls = ["http://fatwallet.com/cash-back-shopping/"]

        def parse(self, response):
            # collect the rows of the store list table from the page
            output = open('StoreName.txt', 'w')
            resp = Selector(response)
            tags = resp.xpath('//tr[@class="storeListRow"]|\
                //tr[@class="storeListRow even"]|\
                //tr[@class="storeListRow even last"]|\
                //tr[@class="storeListRow last"]').extract()
            for i in tags:
                i = i.encode('utf-8', 'ignore').strip()
                store_name = ''
                if re.search(r"class=\"storeListStoreName\">.*?<", i, re.I | re.S):
                    store_name = re.search(r"class=\"storeListStoreName\">.*?<", i, re.I | re.S).group()
                    store_name = re.search(r">.*?<", store_name, re.I | re.S).group()
                    # strip the surrounding angle brackets and unescape ampersands
                    store_name = re.sub(r'>', "", re.sub(r'<', "", store_name))
                    store_name = re.sub(r'&amp;', "&", store_name)
                    output.write(store_name + "\n")
            output.close()
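With the project laid out as above, you can then run the spider from the project's root directory; the argument to `scrapy crawl` is the spider's `name` attribute (here "storename"):

    $ sudo scrapy crawl storename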
![Fig:7 Output of the Spider code .](http://blog.linoxide.com/wp-content/uploads/2014/11/f7.png)
Fig:7 Output of the Spider code .
*NOTE: The purpose of this tutorial is only to aid the understanding of the Scrapy framework.*
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/scrapy-install-ubuntu/
作者:[nido][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/naveeda/

View File

@ -0,0 +1,93 @@
How to Find and Remove Duplicate Files on Linux
================================================================================
Hi all, today we're gonna learn how to find and remove duplicate files on your Linux PC or server. Here are some tools; you may use any one of them according to your needs and comfort.
Whether you're using Linux on your desktop or a server, there are good tools that will scan your system for duplicate files and help you remove them to free up space. Solid graphical and command-line interfaces are both available. Duplicate files are an unnecessary waste of disk space. After all, if you really need the same file in two different locations you could always set up a symbolic link or hard link, storing the data in only one location on disk.
### FSlint ###
[FSlint][1] is available in various Linux distributions' binary repositories, including Ubuntu, Debian, Fedora, and Red Hat. Just fire up your package manager and install the "fslint" package. This utility provides a convenient graphical interface by default and it also includes command-line versions of its various functions.
Don't let that scare you away from using FSlint's convenient graphical interface, though. By default, it opens with the Duplicates pane selected and your home directory as the default search path.
To install fslint, as I am running ubuntu, here is the default command:
$ sudo apt-get install fslint
But here are installation commands for other linux distributions:
Debian:
svn checkout http://fslint.googlecode.com/svn/trunk/ fslint-2.45
cd fslint-2.45
dpkg-buildpackage -I.svn -rfakeroot -tc
sudo dpkg -i ../fslint_2.45-1_all.deb
Fedora:
sudo yum install fslint
For OpenSuse:
[ -f /etc/mandrake-release ] && pkg=rpm
[ -f /etc/SuSE-release ] && pkg=packages
wget http://www.pixelbeat.org/fslint/fslint-2.42.tar.gz
sudo rpmbuild -ta fslint-2.42.tar.gz
sudo rpm -Uvh /usr/src/$pkg/RPMS/noarch/fslint-2.42-1.*.noarch.rpm
For Other Distro:
wget http://www.pixelbeat.org/fslint/fslint-2.44.tar.gz
tar -xzf fslint-2.44.tar.gz
cd fslint-2.44
(cd po && make)
./fslint-gui
To run fslint in its GUI version, run fslint-gui; in Ubuntu, use the run command dialog (Alt+F2) or a terminal:
$ fslint-gui
By default, it opens with the Duplicates pane selected and your home directory as the default search path. All you have to do is click the Find button and FSlint will find a list of duplicate files in directories under your home folder.
![Delete Duplicate files with Fslint](http://blog.linoxide.com/wp-content/uploads/2015/01/delete-duplicates-fslint.png)
Use the buttons to delete any files you want to remove, and double-click them to preview them.
Finally, you are done. Hurray, we have successfully removed duplicate files from your system.
**Note** that the command-line utilities aren't in your path by default, so you can't run them like typical commands. On Ubuntu, you'll find them under /usr/share/fslint/fslint. So, if you wanted to run the entire fslint scan on a single directory, here are the commands you'd run on Ubuntu:
cd /usr/share/fslint/fslint
./fslint /path/to/directory
**This command won't actually delete anything. It will just print a list of duplicate files; you're on your own for the rest.**
$ /usr/share/fslint/fslint/findup --help
find dUPlicate files.
Usage: findup [[[-t [-m|-d]] | [--summary]] [-r] [-f] paths(s) ...]
If no path(s) specified then the current directory is assumed.
When -m is specified any found duplicates will be merged (using hardlinks).
When -d is specified any found duplicates will be deleted (leaving just 1).
When -t is specfied, only report what -m or -d would do.
When --summary is specified change output format to include file sizes.
You can also pipe this summary format to /usr/share/fslint/fslint/fstool/dupwaste
to get a total of the wastage due to duplicates.
![fslint help](http://blog.linoxide.com/wp-content/uploads/2015/01/fslint-help.png)
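Putting the help text above to use, a cautious dry run that only reports what deletion would do could look like this (the path is just an example):

    $ /usr/share/fslint/fslint/findup -t -d ~/Documents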
--------------------------------------------------------------------------------
via: http://linoxide.com/file-system/find-remove-duplicate-files-linux/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://www.pixelbeat.org/fslint/
[2]:http://www.pixelbeat.org/fslint/fslint-2.42.tar.gz

View File

@ -0,0 +1,135 @@
What are useful command-line network monitors on Linux
================================================================================
Network monitoring is a critical IT function for businesses of all sizes. The goal of network monitoring can vary. For example, the monitoring activity can be part of long-term network provisioning, security protection, performance troubleshooting, network usage accounting, and so on. Depending on its goal, network monitoring is done in many different ways, such as performing packet-level sniffing, collecting flow-level statistics, actively injecting probes into the network, parsing server logs, etc.
While there are many dedicated network monitoring systems capable of 24/7/365 monitoring, you can also leverage command-line network monitors in situations where a dedicated monitor is overkill. If you are a system admin, you are expected to have hands-on experience with some of the well-known CLI network monitors. Here is a list of **popular and useful command-line network monitors on Linux**.
### Packet-Level Sniffing ###
In this category, monitoring tools capture individual packets on the wire, dissect their content, and display decoded packet content or packet-level statistics. These tools conduct network monitoring from the lowest level, and as such, can possibly do the most fine-grained monitoring at the cost of network I/O and analysis efforts.
1. **dhcpdump**: a command-line DHCP traffic sniffer which captures DHCP request/response traffic and displays dissected DHCP protocol messages in a human-friendly format. It is useful when you are troubleshooting DHCP-related issues.
2. **[dsniff][1]**: a collection of command-line based sniffing, spoofing and hijacking tools designed for network auditing and penetration testing. They can sniff various information such as passwords, NFS traffic, email messages, website URLs, and so on.
3. **[httpry][2]**: an HTTP packet sniffer which captures and decodes HTTP request and response packets, and displays them in a human-readable format.
4. **IPTraf**: a console-based network statistics viewer. It displays packet-level, connection-level, interface-level, protocol-level packet/byte counters in real time. Packet capturing can be controlled by protocol filters, and its operation is fully menu-driven.
![](https://farm8.staticflickr.com/7519/16055246118_8ea182b413_c.jpg)
5. **[mysql-sniffer][3]**: a packet sniffer which captures and decodes packets associated with MySQL queries. It displays the most frequent or all queries in a human-readable format.
6. **[ngrep][4]**: grep over network packets. It can capture live packets, and match (filtered) packets against regular expressions or hexadecimal expressions. It is useful for detecting and storing any anomalous traffic, or for sniffing particular patterns of information from live traffic.
7. **[p0f][5]**: a passive fingerprinting tool which, based on packet sniffing, reliably identifies operating systems, NAT or proxy settings, network link types and various other properties associated with an active TCP connection.
8. **pktstat**: a command-line tool which analyzes live packets to display connection-level bandwidth usages as well as descriptive information of protocols involved (e.g., HTTP GET/POST, FTP, X11).
![](https://farm8.staticflickr.com/7477/16048970999_be60f74952_b.jpg)
9. **Snort**: an intrusion detection and prevention tool which can detect/prevent a variety of backdoor, botnets, phishing, spyware attacks from live traffic based on rule-driven protocol analysis and content matching.
10. **tcpdump**: a command-line packet sniffer which is capable of capturing network packets on the wire based on filter expressions, dissecting the packets, and dumping the packet content for packet-level analysis. It is widely used for all kinds of network-related troubleshooting, network application debugging, or [security][6] monitoring (see the example after this list).
11. **tshark**: a command-line packet sniffing tool that comes with Wireshark GUI program. It can capture and decode live packets on the wire, and show decoded packet content in a human-friendly fashion.
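To give a concrete taste of this category, here is a typical tcpdump invocation that captures HTTP traffic on one interface; the interface name is just an example and may need adjusting on your system:

    $ sudo tcpdump -i eth0 -nn 'tcp port 80'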
### Flow-/Process-/Interface-Level Monitoring ###
In this category, network monitoring is done by classifying network traffic into flows, associated processes or interfaces, and collecting per-flow, per-process or per-interface statistics. Source of information can be libpcap packet capture library or sysfs kernel virtual filesystem. Monitoring overhead of these tools is low, but packet-level inspection capabilities are missing.
12. **bmon**: a console-based bandwidth monitoring tool which shows various per-interface information, including not only aggregate/average RX/TX statistics, but also a historical view of bandwidth usage.
![](https://farm9.staticflickr.com/8580/16234265932_87f20c5d17_b.jpg)
13. **[iftop][7]**: a bandwidth usage monitoring tool that can show the bandwidth usage of individual network connections in real time. It comes with an ncurses-based interface that visualizes the bandwidth usage of all connections in sorted order. It is useful for monitoring which connections are consuming the most bandwidth.
14. **nethogs**: a process monitoring tool which offers a real-time view of upload/download bandwidth usage of individual processes or programs in an ncurses-based interface. This is useful for detecting bandwidth hogging processes.
15. **netstat**: a command-line tool that shows various statistics and properties of the networking stack, such as open TCP/UDP connections, network interface RX/TX statistics, routing tables, protocol/socket statistics. It is useful when you diagnose performance and resource usage related problems of the networking stack.
16. **[speedometer][8]**: a console-based traffic monitor which visualizes the historical trend of an interface's RX/TX bandwidth usage with ncurses-drawn bar charts.
![](https://farm8.staticflickr.com/7485/16048971069_31dd573a4f_c.jpg)
17. **[sysdig][9]**: a comprehensive system-level debugging tool with a unified interface for investigating different Linux subsystems. Its network monitoring module is capable of monitoring, either online or offline, various per-process/per-host networking statistics such as bandwidth usage, number of connections/requests, etc.
18. **tcptrack**: a TCP connection monitoring tool which displays information of active TCP connections, including source/destination IP addresses/ports, TCP state, and bandwidth usage.
![](https://farm8.staticflickr.com/7507/16047703080_5fdda2e811_b.jpg)
19. **vnStat**: a command-line traffic monitor which maintains a historical view of RX/TX bandwidth usage (e.g., current, daily, monthly) on a per-interface basis. Running as a background daemon, it collects and stores interface statistics on bandwidth rate and total bytes transferred.
### Active Network Monitoring ###
Unlike passive monitoring tools presented so far, tools in this category perform network monitoring by actively "injecting" probes into the network and collecting corresponding responses. Monitoring targets include routing path, available bandwidth, loss rates, delay, jitter, system settings or vulnerabilities, and so on.
20. **[dnsyo][10]**: a DNS monitoring tool which can conduct DNS lookup from open resolvers scattered across more than 1,500 different networks. It is useful when you check DNS propagation or troubleshoot DNS configuration.
21. **[iperf][11]**: a TCP/UDP bandwidth measurement utility which can measure the maximum available bandwidth between two end points. It measures available bandwidth by having two hosts pump out TCP/UDP probe traffic between them, either unidirectionally or bidirectionally. It is useful when you test network capacity or tune the parameters of the network stack (see the example after this list). A variant called [netperf][12] exists with more features and better statistics.
22. **[netcat][13]/socat**: versatile network debugging tools capable of reading from, writing to, or listening on TCP/UDP sockets. They are often used alongside other programs or scripts for backend network transfer or port listening.
23. **nmap**: a command-line port scanning and network discovery utility. It relies on a number of TCP/UDP based scanning techniques to detect open ports, live hosts, or existing operating systems on the local network. It is useful when you audit local hosts for vulnerabilities or build a host map for maintenance purposes. [zmap][14] is an alternative scanning tool with Internet-wide scanning capability.
24. **ping**: a network testing tool which works by exchanging ICMP echo and reply packets with a remote host. It is useful when you measure the round-trip-time (RTT) delay and loss rate of a routing path, as well as test the status or firewall rules of a remote system. Variations of ping exist with a fancier interface (e.g., [noping][15]), multi-protocol support (e.g., [hping][16]) or parallel probing capability (e.g., [fping][17]).
![](https://farm8.staticflickr.com/7466/15612665344_a4bb665a5b_c.jpg)
25. **[sprobe][18]**: a command-line tool that heuristically infers the bottleneck bandwidth between a local host and any arbitrary remote IP address. It uses TCP three-way handshake tricks to estimate the bottleneck bandwidth. It is useful when troubleshooting wide-area network performance and routing related problems.
26. **traceroute**: a network discovery tool which reveals a layer-3 routing/forwarding path from a local host to a remote host. It works by sending TTL-limited probe packets and collecting ICMP responses from intermediate routers. It is useful when troubleshooting slow network connections or routing related problems. Variations of traceroute exist with better RTT statistics (e.g., [mtr][19]).
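As an example of active probing, a basic iperf bandwidth test runs a server on one host and a client on the other; the server address below is illustrative:

    # on the server host
    $ iperf -s
    # on the client host
    $ iperf -c 192.168.1.10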
### Application Log Parsing ###
In this category, network monitoring is targeted at a specific server application (e.g., a web server or database server). Network traffic generated or consumed by a server application is monitored by analyzing its log file. Unlike the network-level monitors presented in earlier categories, tools in this category can analyze and monitor network traffic at the application level.
27. **[GoAccess][20]**: a console-based interactive viewer for Apache and Nginx web server traffic. Based on access log analysis, it presents real-time statistics of a number of metrics, including daily visits, top requests, client operating systems, client locations, and client browsers, in a scrollable view (see the example after this list).
![](https://farm8.staticflickr.com/7518/16209185266_da6c5c56eb_c.jpg)
28. **[mtop][21]**: a command-line MySQL/MariaDB server monitor which visualizes the most expensive queries and the current database server load. It is useful when you optimize MySQL server performance and tune server configurations.
![](https://farm8.staticflickr.com/7472/16047570248_bc996795f2_c.jpg)
29. **[ngxtop][22]**: a traffic monitoring tool for the Nginx and Apache web servers, which visualizes web server traffic in a top-like interface. It works by parsing a web server's access log file and collecting traffic statistics for individual destinations or requests.
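For instance, GoAccess (item 27) can be pointed straight at an access log; the log path below is a common default and may differ on your system:

    $ goaccess -f /var/log/apache2/access.log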
### Conclusion ###
In this article, I presented a wide variety of command-line network monitoring tools, ranging from the lowest packet-level monitors to the highest application-level network monitors. Knowing which tool does what is one thing, and choosing which tool to use is another, as any single tool cannot be a universal solution for your every need. A good system admin should be able to decide which tool is right for the circumstance at hand. Hopefully the list helps with that.
You are always welcome to improve the list with your comment!
--------------------------------------------------------------------------------
via: http://xmodulo.com/useful-command-line-network-monitors-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://www.monkey.org/~dugsong/dsniff/
[2]:http://xmodulo.com/monitor-http-traffic-command-line-linux.html
[3]:https://github.com/zorkian/mysql-sniffer
[4]:http://ngrep.sourceforge.net/
[5]:http://lcamtuf.coredump.cx/p0f3/
[6]:http://xmodulo.com/recommend/firewallbook
[7]:http://xmodulo.com/how-to-install-iftop-on-linux.html
[8]:https://excess.org/speedometer/
[9]:http://xmodulo.com/monitor-troubleshoot-linux-server-sysdig.html
[10]:http://xmodulo.com/check-dns-propagation-linux.html
[11]:https://iperf.fr/
[12]:http://www.netperf.org/netperf/
[13]:http://xmodulo.com/useful-netcat-examples-linux.html
[14]:https://zmap.io/
[15]:http://noping.cc/
[16]:http://www.hping.org/
[17]:http://fping.org/
[18]:http://sprobe.cs.washington.edu/
[19]:http://xmodulo.com/better-alternatives-basic-command-line-utilities.html#mtr_link
[20]:http://goaccess.io/
[21]:http://mtop.sourceforge.net/
[22]:http://xmodulo.com/monitor-nginx-web-server-command-line-real-time.html

View File

@ -0,0 +1,24 @@
Git 发布2.2.1版,修复严重安全问题
================================================================================
![](http://www.phoronix.com/assets/categories/freesoftware.jpg)
Git 今天下午发布了 2.2.1 版本修复了Git客户端中一个严重的安全漏洞。幸运的是这个漏洞虽然影响到了OS X 和Windows用户却没有影响到Unix/Linux用户。
这次的Git漏洞影响那些运行在不区分大小写的文件系统上的Git客户端。在对大小写不敏感的平台上像Windows和OS X提交 .Git/config 这样的路径可以覆盖用户的 .git/config从而可能导致执行任意代码。幸运的是大多数的Phoronix读者都在用Linux得益于大小写敏感的文件系统这就不是个问题了。
除了那些在不区分大小写的文件系统上可能导致覆盖git配置文件的字符之外Windows和OS X的HFS+ 还会把某些字符串映射回 .git 文件。Git 2.2.1 版本解决了这些问题。
更多的细节请戳[Git 2.2.1 release announcement][1] and [GitHub has additional details][2].
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=news_item&px=MTg2ODA
作者:[Michael Larabel][a]
译者:[kingname](https://github.com/kingname)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.michaellarabel.com/
[1]:http://article.gmane.org/gmane.linux.kernel/1853266
[2]:https://github.com/blog/1938-git-client-vulnerability-announced

View File

@ -0,0 +1,168 @@
Translated By H-mudcup
文件轻松比对,伟大而自由的比较软件们
================================================================================
作者 Frazer Kline
文件比较工具用于比较电脑文件的内容找到他们之间相同与不同之处。比较的结果通常被称为diff。
diff同时也是一个著名的基于控制台的能输出两个文件之间不同之处的文件比较程序的名字。diff是二十世纪70年代早期在Unix操作系统上被开发出来的。diff将会把两个文件之间不同之处的部分进行输出。
Linux拥有很多不错的能使你能清楚的看到两个文件或同一文件不同版本之间的不同之处的很棒的GUI工具。这次我从自己最喜欢的GUI比较工具中选出了五个推荐给大家。除了其中的一个其他的都有开源许可证。
这些应用程序可以让文件或目录的差别变得可见,能合并有差异的文件,可以解决冲突并将其输出成一个新的文件或补丁,还能帮助回顾文件被改动过的地方并评论最终产品(比如,在源代码合并到源文件树之前,要先批准源代码的改变)。因此它们是非常重要的软件开发工具。它们不停地把文件传来传去,帮助开发人员们在同一个文件上工作。这些比较工具不仅仅能用于显示源代码文件中的不同之处;它们还适用于很多种文本类文件。可视化的特性使文件比较变得容易、简单。
----------
![](http://www.linuxlinks.com/portal/content2/png/Meld.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Meld.png)
Meld是一个适用于Gnome桌面的开源的图形化的文件差异查看和合并的应用程序。它支持2到3个文件的同时比较递归式的目录比较版本控制(Bazaar, Codeville, CVS, Darcs, Fossil SCM, Git, Mercurial, Monotone, Subversion)之下的目录比较。还能够手动或自动合并文件差异。
Meld 的重点在于帮助开发人员比较和合并多个源文件,并在他们最喜欢的版本控制系统下直观地浏览改动过的地方。
功能包括
- 原地编辑文件,即时更新
- 进行两到三个文件的比较及合并
- 差异和冲突之间的导航
- 可视化本地和总体间的插入、改变和冲突这几种不同之处。
- 内置正则表达式文本过滤器,可以忽略不重要的差异
- 语法高亮度显示可选择gtksourceview)
- 将两到三个目录一个文件一个文件的进行比较,显示新建,缺失和替换过的文件。
- 可直接开启任何有冲突或差异的文件的比较
- 可以过滤文件或目录以避免出现假差异
- 被改动区域的自动合并模式使合并更容易
- 简单的文件管理
- 支持多种版本控制系统包括Git, Mercurial, Bazaar and SVN
- 在提交前开启文件比较来检查改动的地方和内容
- 查看文件版本状态
- 还能进行简单的版本控制操作(例如,提交、更新、添加、移动或删除文件)
- 继承自同一文件的两个文件进行自动合并
- 标注并在中间的窗格显示所有有冲突的变更的基础版本
- 显示并合并同一文件的各自独立的修改
- 锁定只读性质的基础文件以避免出错
- 可以整合到已有的命令行界面中包括gitmergetool
- 国际化支持
- 可视化使文件比较更简单
- 网址: [meldmerge.org][1]
- 开发人员: Kai Willadsen
- 证书: GNU GPL v2
- 版本号: 1.8.5
----------
![](http://www.linuxlinks.com/portal/content2/png/DiffMerge.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-DiffMerge.png)
DiffMerge是一个可以在Linux、Windows和OS X上运行的可以可视化文件的比较和合并的应用软件。
功能包括:
- 图形化的显示两个文件之间的差别。包括插入行,高亮标注以及对编辑的全面支持。
- 图形化的显示三个文件之间的差别。(安全的前提下)允许自动合并还完全拥有最终文件的编辑权。
- 并排显示两个文件夹的比较,显示哪一个文件只存在于其中一个文件夹而不存在于与之相比较的那个文件夹,还能一对一的将完全相同的、等价的或不同的文件配对。
- 规则设置和选项让你可以个性化它的外观和行为
- 基于Unicode可以导入多种编码的字符
- 跨平台工具
- 网址: [sourcegear.com/diffmerge][2]
- 开发人员: SourceGear LLC
- 证书: Licensed for use free of charge (not open source)
- 版本号: 4.2
----------
![](http://www.linuxlinks.com/portal/content2/png/xxdiff.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-xxdiff.png)
xxdiff是个开源的图形化的可进行文件、目录比较及合并的工具。
xxdiff可以用于显示两到三个文件或两个目录的差别还能产生一个合并后的版本。被比较的两到三个文件会并排显示并将有区别的文字内容用不同颜色高亮显示以便于识别。
这个程序是个非常重要的软件开发工具。他可以图形化的显示两个文件或目录之间的差别,合并有差异的文件,解决冲突并评论结果(例如在源代码合并到一个源文件树里之前必须先允许其改变)
功能包括:
- 比较两到三个文件,或是两个目录(浅层或递归)
- 水平差别高亮显示
- 文件可以被交互式的合并,可视化的输出和保存
- 可以可视化合并的评论/监管
- 保留自动合并文件中的冲突,并以两个文件显示以便于解决冲突
- 用额外的比较程序估算差异适用于GNU diff、SGI diff和ClearCase的cleardiff以及所有与这些程序输出相似的文件比较程序。
- 可以在源文件上实现完全的个性化设置
- 用起来感觉和Rudy Wortel或SGI的xdiff差不多不依赖于特定桌面环境
- 功能和输出可以和脚本轻松集成
- 网址: [furius.ca/xxdiff][3]
- 开发人员: Martin Blais
- 证书: GNU GPL
- 版本号: 4.0
----------
![](http://www.linuxlinks.com/portal/content2/png/Diffuse.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Diffuse.png)
Diffuse是个开源的图形化工具可用于合并和比较文本文件。Diffuse能够比较任意数量的文件并排显示并提供手动行匹配调整能直接编辑文件。Diffuse还能从Bazaar、CVS、Darcs、Git、Mercurial、Monotone、Subversion和GNU修订控制系统GNU Revision Control SystemRCS等版本控制系统中取出文件的不同版本进行比较和合并。
功能包括:
- 比较任意数量的文件,并排显示(多方合并)
- 行匹配可以被用户人工矫正
- 直接编辑文件
- 语法高亮
- 支持Bazaar, CVS, Darcs, Git, Mercurial, Monotone, RCS, Subversion和SVK
- 支持Unicode
- 可无限撤销
- 简易键盘导航
- 网址: [diffuse.sourceforge.net][4]
- 开发人员: Derrick Moser
- 证书: GNU GPL v2
- 版本号: 0.4.7
----------
![](http://www.linuxlinks.com/portal/content2/png/Kompare.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Kompare.png)
Kompare是个开源的GUI前端程序可以开启不同源文件之间差异的可视化和合并。Kompare可以比较文件或文件夹内容的差异。Kompare支持很多种diff格式并提供各种选项来设置显示的信息级别。
不论你是个想比较源代码的开发人员还是只想比较一下研究论文手稿与最终文档的差异Kompare都是个有用的工具。
Kompare是KDE桌面环境的一部分。
功能包括:
- 比较两个文本文件
- 递归式比较目录
- 显示diff产生的补丁
- 将补丁合并到一个已存在的目录
- 在无聊的编译时刻,逗你玩
- 网址: [www.caffeinated.me.uk/kompare/][5]
- 开发者: The Kompare Team
- 证书: GNU GPL
- 版本号: Part of KDE
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/2014062814400262/FileComparisons.html
译者:[H-mudcup](https://github.com/H-mudcup) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://meldmerge.org/
[2]:https://sourcegear.com/diffmerge/
[3]:http://furius.ca/xxdiff/
[4]:http://diffuse.sourceforge.net/
[5]:http://www.caffeinated.me.uk/kompare/

View File

@ -1,82 +0,0 @@
ChromeOS 对战 Linux : 孰优孰劣,仁者见仁,智者见智
================================================================================
> 在 ChromeOS 和 Linux 的斗争过程中,不管是哪一家的操作系统都是有优有劣。
任何不关注Google 的人都不会相信Google在桌面用户当中扮演着一个很重要的角色。在近几年我们见到的[ChromeOS][1]制造的[Google Chromebook][2]相当的轰动。和同期的人气火爆的Amazon 一样似乎ChromeOS 势不可挡。
在本文中我们要了解的是ChromeOS 的概念市场ChromeOS 怎么影响着Linux 的份额,和整个 ChromeOS 对于linux 社区来说,是好事还是坏事。另外,我将会谈到一些重大的事情,和为什么没人去为他做点什么事情。
### ChromeOS 并非真正的Linux ###
每当有朋友问我说是否ChromeOS 是否是Linux 的一个版本时我都会这样回答ChromeOS 对于Linux 就好像是 OS X 对于BSD 。换句话说我认为ChromeOS 是linux 的一个派生操作系统运行于Linux 内核的引擎之下。而很多操作系统就组成了Google 的专利代码和软件。
尽管ChromeOS 是利用了Linux 内核引擎但是它仍然有很大的不同和现在流行的Linux 分支版本。
尽管ChromeOS 的差异化越来越明显是在于它给终端用户提供的app包括Web 应用。因为ChromeOS 的每一个操作都是开始于浏览器窗口这对于Linux 用户来说可能会有很多不一样的感受但是对于没有Linux 经验的用户来说,这与他们使用的旧电脑并没有什么不同。
就是说每一个以Google-centric 为生活方式的人来说在ChromeOS上的感觉将会非常良好就好像是回家一样。这样的优势就是这个人已经接受了Chrome 浏览器Google 驱动器和Gmail 。久而久之他们的亲朋好友使用ChromeOs 也就是很自然的事情了就好像是他们很容易接受Chrome 浏览器,因为他们觉得早已经用过。
然而对于Linux 爱好者来说这样的约束就立即带来了不适应。因为软件的选择被限制有范围的在加上要想玩游戏和VoIP 是完全不可能的。那么对不起,因为[GooglePlus Hangouts][3]是代替不了VoIP 软件的。甚至在很长的一段时间里。
### ChromeOS 还是Linux 桌面 ###
有人断言ChromeOS 要是想在桌面系统的浪潮中对Linux 产生影响只有在Linux 停下来浮出水面栖息的时候或者是满足某个非技术用户的时候。
是的桌面Linux 对于大多数休闲型的用户来说绝对是一个好东西。然而它必须有专人帮助你安装操作系统并且提供“维修”服务从windows 和 OS X 的阵营来看。但是令人失望的是在美国Linux 正好在这个方面很缺乏。所以我们看到ChromeOS 正慢慢的走入我们的视线。
我发现Linux 桌面系统最适合做网上技术支持来管理。比如说家里的高级用户可以操作和处理更新政府和学校的IT 部门。Linux 还可以应用于这样的环境Linux桌面系统可以被配置给任何技能水平和背景的人使用。
相比之下ChromeOS 是建立在完全免维护的初衷之下的因此不需要第三者的帮忙你只需要允许更新然后让他静默完成即可。这在一定程度上可能是由于ChromeOS 是为某些特定的硬件结构设计的这与苹果开发自己的PC 电脑也有异曲同工之妙。因为Google 的ChromeOS 附带一个硬件脉冲,它允许“犯错误”。对于某些人来说,这是一个很奇妙的地方。
滑稽的是有些人却宣称ChomeOs 的远期的市场存在很多问题。简言之这只是一些Linux 激情的爱好者在找对于ChomeOS 的抱怨罢了。在我看来,停止造谣这些子虚乌有的事情才是关键。
问题是ChromeOS 的市场份额和Linux 桌面系统在很长的一段时间内是不同的。这个存在可能会在将来被打破,然而在现在,仍然会是两军对峙的局面。
### ChromeOS 的使用率正在增长 ###
不管你对ChromeOS 有怎么样的看法事实是ChromeOS 的使用率正在增长。专门针对ChromeOS 的电脑也一直有发布。最近戴尔Dell也发布了一款针对ChromeOS 的电脑。命名为[Dell Chromebox][5],这款ChromeOS 设备将会是另一些传统设备的终结者。它没有软件光驱没有反病毒软件offers 能够无缝的在屏幕后面自动更新。对于一般的用户Chromebox 和Chromebook 正逐渐成为那些工作在web 浏览器上的人的一个选择。
尽管增长速度很快ChromeOS 设备仍然面临着一个很严峻的问题 - 存储。受限于有限的硬盘的大小和严重依赖于云存储并且ChromeOS 不会为了任何使用它们电脑的人消减基本的web 浏览器的功能。
### ChromeOS 和Linux 的异同点 ###
以前我注意到ChromeOS 和Linux 桌面系统分别占有着两个完全不同的市场。出现这样的情况是源于Linux 社区的致力于提升Linux 桌面系统的脱机性能。
是的偶然的有些人可能会第一时间发现这个“Linux 的问题”。但是并没有一个人接着跟进这些问题确保得到问题的答案确保他们得到Linux 最多的帮助。
事实上,脱机故障可能是这样发现的:
- 有些用户偶然的在Linux 本地事件发现了Linux 的问题。
- 他们带回了DVD/USB 设备,并尝试安装这个操作系统。
- 当然,有些人很幸运的成功的安装成功了这个进程,但是,据我所知大多数的人并没有那么幸运。
- 令人失望的是,这些人希望在网上论坛里搜索帮助。很难做一个主计算机,没有网络和视频的问题。
- 我真的是受够了后来有很多失望的用户拿着他们的电脑到windows 商店来“维修”。除了重装一个windows 操作系统他们很多时候都会听到一句话“Linux 并不适合你们”,应该尽量避免。
有些人肯定会说上面的举例肯定夸大其词了。让我来告诉你这是发生在我身边真实的事的而且是经常发生。醒醒吧Linux 社区的人们,我们的这种模式已经过时了。
### 伟大的平台,强大的营销和结论 ###
如果非要说ChromeOS 和Linux 桌面系统相同的地方除了它们都使用了Linux 内核就是它们都伟大的产品却拥有极其差劲的市场营销。而Google 的好处就是,他们投入大量的资金在网上构建大面积存储空间。
Google 相信他们拥有“网上的优势”而线下的影响不是很重要。这真是一个让人难以置信的目光短浅这也成了Google 历史上最大的一个失误之一。相信,如果你没有接触到他们在线的努力,你不值得困扰,仅仅就当是他们在是在选择网上存储空间上做出反击。
我的建议是通过Google 的线下影响提供Linux 桌面系统给ChromeOS 的市场。这就意味着Linux 社区的人需要筹集资金来出席县博览会、商场展览在节日季节和在社区中进行免费的教学课程。这会立即使Linux 桌面系统走入人们的视线否则最终将会是一个ChromeOS 设备出现在人们的面前。
如果说本地的线下市场并没有想我说的这样别担心。Linux 桌面系统的市场仍然会像ChromeOS 一样增长。最坏也能保持现在这种两军对峙的市场局面。
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/chromeos-vs-linux-the-good-the-bad-and-the-ugly-1.html
作者:[Matt Hartley][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.datamation.com/author/Matt-Hartley-3080.html
[1]:http://en.wikipedia.org/wiki/Chrome_OS
[2]:http://www.google.com/chrome/devices/features/
[3]:https://plus.google.com/hangouts
[4]:http://en.wikipedia.org/wiki/Voice_over_IP
[5]:http://www.pcworld.com/article/2602845/dell-brings-googles-chrome-os-to-desktops.html

View File

@ -1,22 +1,22 @@
Linux用户应该了解一下开源硬件
Linux用户,你们真的了解开源硬件吗?
================================================================================
> Linux用户不了解一点开源硬件制造相关的事情他们将会很失望。
商业软件和免费软件已经互相纠缠很多年了,但是这俩经常误解对方。这并不奇怪 -- 对一方来说是生意,而另一方只是一种生活方式。但是,这种误解会给人带来痛苦,这也是为什么值得花精力去揭露这里面的内幕。
一个逐渐普遍的现象对开源硬件的不断尝试不管是CanonicalJollaMakePlayLive或者其他几个。不管是评论员或终端用户,一般的免费软件用户会为新的硬件平台发布表现出过分的狂热,然后因为不断延期有所醒悟,最终放弃整个产品。
一个逐渐普遍的现象对开源硬件的不断尝试不管是CanonicalJollaMakePlayLive或者其他公司。无论是评论员或是终端用户,一般的免费软件用户会为新的硬件平台发布表现出过分的狂热,然后因为不断延期有所醒悟,直到最终放弃整个产品。
这是一个没有人获益的怪圈,而且滋生出不信任 - 都是因为一般的Linux用户根本不知道这些新闻背后发生的事情。
这是一个没有人获益的怪圈,而且常常滋生出不信任 - 都是因为一般的Linux用户根本不知道这些新闻背后发生的事情。
我个人对于把产品推向市场的经验很有限。但是,我还不知道谁能有所突破。推出一个开源硬件或其他产品到市场仍然不仅仅是个残酷的生意,而且严重不利于新加入的厂商。
我个人对于把产品推向市场的经验很有限。但是,我还没听说谁能有所突破。推出一个开源硬件或其他产品到市场仍然不仅仅是个残酷的生意,而且严重不利于新厂商。
### 寻找合作伙伴 ###
不管是数码产品的生产还是分销,都被相对较少的一些公司控制着,有时需要提前数月预订。利润率也会很低,所以就像那些购买老旧情景喜剧的电影工作室一样,生产商一般也希望复制当前热销产品的成功。像Aaron Seigo在谈到他花精力开发Vivaldi平板时告诉我的生产商更希望能由其他人去承担开发新产品的风险。
不仅如此,他们更希望和那些有现成销售记录的有可能带来可复制生意的人合作。
不仅如此,他们更希望和那些有现成销售记录的有可能带来长期客户生意的人合作。
而且,一般新加入的厂商所关心的产品只有几千的量。芯片制造商更愿意和苹果或三星合作,因为它们的订单很可能是几百K
而且,一般新加入的厂商所关心的产品只有几千的量。芯片制造商更愿意和苹果或三星合作,因为它们的订单很可能是几十上百万的量
面对这种情形,开源硬件制造者们可能会发现他们在工厂的列表中被淹没了,除非能找到二线或三线厂愿意尝试一下小批量生产新产品。
@ -28,9 +28,9 @@ Linux用户应该了解一下开源硬件
这样必然会引起潜在用户的批评,但是开源硬件制造者没得选,只能折中他们的愿景。寻找其他生产商也不能解决问题,有一个原因是这样做意味着更多延迟,但是更多的是因为完全免授权费的硬件是不存在的。像三星这样的业内巨头对免费硬件没有任何兴趣,而作为新人,开源硬件制造者也没有影响力去要求什么。
更何况,就算有免费硬件,生产商也不能保证会用在下一批生产中。制造者们会轻易地发现他们每次需要生产的时候都要重打一样的仗。
更何况,就算有免费硬件,生产商也不能保证会用在下一批生产中。制造者们会轻易地发现他们每次需要生产的时候都要重打一次一模一样的仗。
这些都还不够这个时候开源硬件制造者们也许已经花了6-12个月时间来讨价还价。机会来了,产业标准已经变更,他们也许为了升级产品规格又要从头来过。
这些都还不够这个时候开源硬件制造者们也许已经花了6-12个月时间来讨价还价。等机会终于来了,产业标准却已经变更,于是他们可能为了升级产品规格又要从头来过。
### 短暂而且残忍的货架期 ###
@ -42,15 +42,15 @@ Linux用户应该了解一下开源硬件
### 衡量整件怪事 ###
在这里我只是粗略地概括了一下,但是任何涉足过制造的人会认出我形容成标准的东西。而更糟糕的是,开源硬件制造者们通常在这个过程中才会有所觉悟。不可避免,他们也会犯错,从而带来更多的延迟。
在这里我只是粗略地概括了一下,但是任何涉足过制造的人会认同我形容为行业标准的东西。而更糟糕的是,开源硬件制造者们通常只有在亲身经历过后才会有所觉悟。不可避免,他们也会犯错,从而带来更多的延迟。
但重点是,一旦你对整个过程有所了解,你对另一个开源硬件进行尝试的消息的反应就会改变。这个过程意味着除非哪家公司处于严格的保密模式对于产品将于六个月内发布的声明会很快会被证实是过期的推测。很可能是12-18个月而且面对之前提过的那些困难很可能意味着这个产品永远不会真正发布。
但重点是,一旦你对整个过程有所了解,你对另一个开源硬件进行尝试的新闻的反应就会改变。这个过程意味着除非哪家公司处于严格的保密模式对于产品将于六个月内发布的声明会很快会被证实是过期的推测。很可能是12-18个月而且面对之前提过的那些困难很可能意味着这个产品永远不会真正发布。
举个例子就像我写的人们等待第一代Steam Machines面世它是一台基于Linux的游戏主机。他们相信Steam Machines能彻底改变Linux和游戏。
作为一个市场分类Steam Machines也许比其他新产品更有优势因为参与开发的人员至少有开发软件产品的经验。然而整整一年过去了Steam Machines的开发成果都还只有原型机而且直到2015年中都不一定能买到。面对硬件生产的实际情况就算有一半能见到阳光都是很幸运了。而实际上能发布2-4台也许更实际。
我做出这个预测并没有考虑个体努力。但是对硬件生产的理解比起那些Linux和游戏的黄金年代之类的预言我估计这个更靠谱。如果我错了也会很开心但是事实不会改变让人吃惊的不是如此多的Linux相关硬件产品失败了而是那些即使是短暂的成功的产品。
我做出这个预测并没有考虑个体努力。但是对硬件生产的理解比起那些Linux和游戏的黄金年代之类的预言我估计这个更靠谱。如果我错了也会很开心但是事实不会改变让人吃惊的不是如此多的Linux相关硬件产品失败了而是那些虽然短暂但却成功的产品。
--------------------------------------------------------------------------------
@ -58,7 +58,7 @@ via: http://www.datamation.com/open-source/what-linux-users-should-know-about-op
作者:[Bruce Byfield][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Mr小眼儿](https://github.com/tinyeyeser)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,65 +0,0 @@
黑客年暮
================================================================================
近来我一直在与某资深开源组织的各成员进行争斗,尽管密切关注我的人们会在读完本文后猜到是哪个组织,但我不会在这里说出这个组织的名字。
怎么让某些人进入 21 世纪就这么难呢?真是的...
我快56 岁了,也就是大部分年轻人会以为的我将时不时朝他们发出诸如“滚出我的草坪”之类歇斯底里咆哮的年龄。但事实并非如此 —— 我发现,尤其是在技术背景之下,我变得与我的年龄非常不相称。
在我这个年龄的大部分人确实变成了爱发牢骚、墨守成规的老顽固。并且,尴尬的是,偶尔我会成为那个打断谈话的人,然后提出在 1995 年或者在某些特殊情况下1985 年)时很适合的方法... 但十年后就不是个好方法了。
为什么是我?因为我的同龄人里大部分人在孩童时期都没有什么名气。任何想要改变自己的人,就必须成为他们中具有较高思想觉悟的佼佼者。即便如此,在与习惯做斗争的过程中,我花费的力气也许比实际需要的还多。
年轻人犯下无知的错误是可以被原谅的。他们还年轻。年轻意味着缺乏经验,缺乏经验通常会导致片面的判断。我很难原谅那些经历了足够多本该有经验的人,却被<em>长期的固化思维</em>蒙蔽,无法发觉近在咫尺的东西。
(补充一下:我真的不是老顽固。那些和我争论政治的,无论保守派还是非保守派都没有注意到这点,我觉得这颇有点嘲讽的意味。)
那么,现在我们来讨论下 GNU 更新日志文件这件事。在 1985 年的时候,这是一个不错的主意,甚至可以说是必须的。当时的想法是用单独的更新日志文件来记录相关文件的变更情况。用这种方式来对那些存在版本缺失或者非常原始的版本进行版本控制确实不错。当时我也在场,所以我知道这些。
不过即使到了 1995 年,甚至 21 世纪早期许多版本控制系统仍然没有太大改进。也就是说这些版本控制系统并非对批量文件的变化进行分组再保存到一条记录上而是对每个变化的文件分别进行记录并保存到不同的地方。CVS当时被广泛使用的版本控制系统仅仅是模拟日志变更 —— 并且在这方面表现得很糟糕,导致大多数人不再依赖这个功能。即便如此,更新日志文件的出现依然是必要的。
但随后,版本控制系统 Subversion 于 2003 年发布 beta 版,并于 2004 年发布 1.0 正式版Subversion 真正实现了更新日志记录功能得到了人们的广泛认可。它与一年后兴起的分散式版本控制系统Distributed Version Control SystemDVCS共同引发了主流世界的激烈争论。因为如果你在项目上同时使用了分散式版本控制与更新日志文件记录的功能它们将会因为争夺相同元数据的控制权而产生不可预料的冲突。
另一种方法是把变更说明写进提交日志里。如果你这样做了,不久后你就会开始思忖,为什么自己仍然要维护所有的更新日志条目。提交的元数据与变化的代码本来就是一体的,毕竟这正是它当初设计的目的。
(现在,试想有这样一个项目,同样本着把项目做得最好的想法,但两拨人却做出了完全不同的选择。因此你必须同时阅读更新日志和评论日志以了解到底发生了什么。最好在矛盾激化前把问题解决....
第三种办法是尝试同时使用两种方法 —— 以另一种格式再次提交评论数据,作为更新日志提交的一部分。这会带来所有你能预料到的数据重复的问题:只要有一份拷贝被单独改动,两边就不再同步,这会让之后参与进来、试图搞清人们当时是怎么想的人非常困惑。
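用一个假想的小例子(人名、文件名均为虚构)来说明这种重复:同一处改动要写两遍,一份写成 GNU 风格的 ChangeLog 条目,一份写进提交说明:

    2004-03-17  J. Random Hacker  <jrh@example.org>
        * frob.c (frobnicate): 修复差一错误
    $ svn commit -m "frob.c: 修复 frobnicate() 中的差一错误"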
或者,如某个<em>我就不说出具体名字的特定项目</em>的高层开发者那样,直接通过电子邮件拍板:声明一次提交可以包含多条更新日志条目,并且提交的元数据与更新日志是两回事。这导致我们直到现在还得把两份记录都维护下去。
当我读到那条的时候,我的目光在那里停住了。什么样的傻瓜才会没有意识到这是在自找麻烦 —— 事实上,针对更新日志文件采取的定制措施完全是不必要的,尤其是在分散式版本控制系统中已经有很好的浏览工具、可以方便地阅读可靠的提交日志的时候。
这就是那种特殊的笨蛋:变老了并且思维僵化了的黑客。所有的合理化改革他都会极力反对。他所遵循的行事方法在十年前是有效的,但现在只能适得其反了。如果你试图向他解释:不只是 git 的提交摘要,只要正确使用当前的各种工具,就完全可以弃用更新日志... 呵呵,准备好迎接无法忍受、无法想象的疯狂对话吧。
幸运的是这激怒了我。因为这点还有其他相关的胡言乱语,使这个项目变成了很难完成的工作。而且,这类糟糕的事常常会吓跑年轻的开发者,这才是问题所在。相关 G+ 社群的成员数量已经达到了 4 位数,他们大部分都是年轻人,而他们也没有忍气吞声。显然消息已经传到了外面;这个项目的开发者都是些有着莫名拥趸的老牌黑客,同时还有很多对他们崇拜的人。
这件事给我的最大触动就是每当我要和这些老牌黑客较量时,我都会想:有一天我也会这样吗?或者更糟的是,我看到的只是如同镜子一般对我自己的真实写照,而我自己却浑然不觉吗?我的意思是,从他的网站来看,这位老兄看起来比我还要年轻十五岁左右。
我觉得我的思路还很清晰。当我和那些比我聪明的人打交道时我不会受挫,我只会因为那些跟不上我、也看不清事实的人而沮丧。但这种自信也许只是邓宁·克鲁格效应的消极影响,至少我明白这点。很少有什么事情会让我感到害怕;而这件事在让我害怕的事情名单上是名列前茅的。
另一件让人不安的事是当我逐渐变老的时候,这样的矛盾发生得越来越频繁。不知怎的,我希望我的黑客同行们能以更加优雅的姿态老去,即使身体老去也应该保持一颗年轻的心灵。有些人确实是这样;可是绝大多数人都不是。真令人悲哀。
我不确定我的职业生涯会不会完美收场。假如我最后成功避免了思维僵化(注意我说的是假如),我想我一定知道其中的部分原因,但我不确定这种模式是否可以被复制 —— 为了达成目的也许得在你的头脑中发生一些复杂的化学反应。尽管如此,无论对错,请听听我给年轻黑客以及其他有志青年的建议。
你们 —— 对的,也包括你 —— 一定无法在你中年老年的时候保持不错的心灵,除非你能很好的控制这点。你必须不断地去磨练你的内心、在你还年轻的时候完成自己的种种心愿,你必须把这些行为养成一种习惯直到你老去。
有种说法是中年人锻炼身体的最佳时机是他进入中年的 30 年前。我以为同样的方法,坚持我以上所说的习惯能让你在 56 岁,甚至 65 岁的时候仍然保持灵活的头脑。挑战你的极限,使不断地挑战自己成为一种习惯。立刻离开安乐窝,由此当你以后真正需要它的时候你可以建立起自己的安乐窝。
你必须要清楚的了解这点;还有一个可选择的挑战是你选择一个可以实现的目标并且为了这个目标不断努力。这个月我要学习 Go 语言。不是指游戏,我早就玩儿过了(虽然玩儿的不是太好)。并不是因为工作需要,而是因为我觉得是时候来扩展下我自己了。
保持这个习惯。永远不要放弃。
--------------------------------------------------------------------------------
via: http://esr.ibiblio.org/?p=6485
作者:[Eric Raymond][a]
译者:[Stevearzh](https://github.com/Stevearzh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://esr.ibiblio.org/?author=2

View File

@ -0,0 +1,155 @@
如何使用Aptik来备份和恢复Ubuntu中的Apps和PPAs
================================================================================
![00_lead_image_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x300x00_lead_image_aptik.png.pagespeed.ic.n3TJwp8YK_.png)
当你想重装Ubuntu 或者仅仅是想安装它的一个新版本的时候,如果有个便捷的方法能重新安装之前的应用并恢复相关设置,那会非常有用。此时 *Aptik* 粉墨登场,它可以帮助你轻松实现。
Aptik自动包备份和恢复是一个可以用在Ubuntu、Linux Mint 和其他基于Debian 以及Ubuntu 的Linux 发行版上的应用它允许你将已安装的PPA个人软件包存档即软件源、已下载的软件包、已安装的应用及其主题和设置备份到外部的U 盘、网络存储或者类似于Dropbox 的云服务上。
注意:当我们在此文章中说到输入某些东西的时候,如果被输入的内容被引号包裹,请不要将引号一起输入进去,除非我们有特殊说明。
想要安装Aptik需要先添加其PPA。使用Ctrl + Alt + T快捷键打开一个新的终端窗口。输入以下文字并按回车执行。
    sudo apt-add-repository -y ppa:teejee2008/ppa
当提示输入密码的时候,输入你的密码然后按回车。
![01_command_to_add_repository](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x99x01_command_to_add_repository.png.pagespeed.ic.UfVC9QLj54.png)
在提示符下输入下面的命令,来确保软件源信息是最新的:
sudo apt-get update
![02_update_command](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x252x02_update_command.png.pagespeed.ic.m9pvd88WNx.png)
更新完毕后你就完成了安装Aptik的准备工作。接下来输入以下命令并按回车
sudo apt-get install aptik
注意你可能会看到一些有关于获取不到包更新的错误提示。不过别担心如果这些提示看起来跟下边图片中类似的话你的Aptik的安装就没有任何问题。
![03_command_to_install_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x03_command_to_install_aptik.png.pagespeed.ic.1jtHysRO9h.png)
接下来会显示安装过程。其中一条消息会提到此次安装将使用多少磁盘空间并询问你是否要继续按下“y”再按回车即可继续安装。
![04_do_you_want_to_continue](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x04_do_you_want_to_continue.png.pagespeed.ic.WQ15_UxK5Z.png)
当安装完成后输入“exit”并按回车关闭终端窗口或者按下左上角的“X”按钮。
![05_closing_terminal_window](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x05_closing_terminal_window.png.pagespeed.ic.9QoqwM7Mfr.png)
在正式运行Aptik前你需要设置好备份目录到一个U盘、网络驱动器或者类似于Dropbox和Google Drive的云帐号上。这儿的例子中我们使用的是Dropbox。
![06_creating_backup_folder](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x243x06_creating_backup_folder.png.pagespeed.ic.7HzR9KwAfQ.png)
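例如,你也可以直接在终端里为备份建立一个专用目录(路径仅为示意,请换成你自己的备份位置):

    $ mkdir -p ~/Dropbox/aptik-backup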
一旦设置好备份目录点击启动栏上方的“Search”按钮。
![07_opening_search](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x177x07_opening_search.png.pagespeed.ic.qvFiw6_sXa.png)
在搜索框中键入 “aptik”。结果会随着你的输入显示出来。当Aptik图标显示出来的时候点击它打开应用。
![08_starting_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x338x08_starting_aptik.png.pagespeed.ic.8fSl4tYR0n.png)
此时一个对话框会显示出来要求你输入密码。输入你的密码并按“OK”按钮。
![09_entering_password](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x337x09_entering_password.png.pagespeed.ic.yanJYFyP1i.png)
Aptik的主窗口显示出来了。从“Backup Directory”下拉列表中选择“Other…”。这个操作允许你选择你已经建立好的备份目录。
注意:在下拉列表的右侧的 “Open” 按钮会在一个文件管理窗口中打开选择目录功能。
![10_selecting_other_for_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x533x10_selecting_other_for_directory.png.pagespeed.ic.dHbmYdAHYx.png)
在 “Backup Directory” 对话窗口中定位到你的备份目录然后按“Open”。
注意如果此时你尚未建立备份目录或者想在备份目录中新建个子目录你可以点“Create Folder”来新建目录。
![11_choosing_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x470x11_choosing_directory.png.pagespeed.ic.E-56x54cy9.png)
点击“Software Sources (PPAs).”右侧的 “Backup”来备份已安装的PPAs。
![12_clicking_backup_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x13_selecting_all_software_sources.png.pagespeed.ic.zDFiDGfnks.png)
然后“Backup Software Sources”对话窗口显示出来。已安装的包和对应的源PPA同时也显示出来了。选择你需要备份的源PPAs或者点“Select All”按钮选择所有源。
![13_selecting_all_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x13_selecting_all_software_sources.png.pagespeed.ic.zDFiDGfnks.png)
点击 “Backup” 开始备份。
![14_clicking_backup_for_all_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x14_clicking_backup_for_all_software_sources.png.pagespeed.ic.n5h_KnQVZa.png)
备份完成后,一个提示你备份完成的对话窗口会蹦出来。点击 “OK” 关掉。
一个名为“ppa.list”的文件出现在了备份目录中。
![15_closing_finished_dialog_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x15_closing_finished_dialog_software_sources.png.pagespeed.ic.V25-KgSXdY.png)
接下来“Downloaded Packages (APT Cache)”的项目只对重装同样版本的Ubuntu有用处。它会备份下你系统缓存(/var/cache/apt/archives)中的包。如果你是升级系统的话,可以跳过这个条目,因为针对新系统的包会比现有系统缓存中的包更加新一些。
备份和回复下载过的包这可以在重装Ubuntu并且重装包的时候节省时间和网络带宽。因为一旦你把这些包恢复到系统缓存中之后他们可以重新被利用起来这样下载过程就免了包的安装会更加快捷。
如果你是重装相同版本的Ubuntu系统的话点击 “Downloaded Packages (APT Cache)” 右侧的 “Backup” 按钮来备份系统缓存中的包。
注意:当你备份下载过的包的时候是没有二级对话框出现。你系统缓存 (/var/cache/apt/archives) 中的包会被拷贝到备份目录下一个名叫 “archives” 的文件夹中,当整个过程完成后会出现一个对话框来告诉你备份已经完成。
![16_downloaded_packages_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x544x16_downloaded_packages_backed_up.png.pagespeed.ic.z8ysuwzQAK.png)
有一些包是你的Ubuntu发行版的一部分。因为安装Ubuntu系统的时候会自动安装它们所以它们是不会被备份下来的。例如火狐浏览器在Ubuntu和其他类似Linux发行版上都是默认被安装的所以默认情况下它不会被选择备份。
像[Chrome 浏览器的包][1]这种系统装好之后才安装的包包括Aptik 自身所在的包在内,会默认被选中。这可以方便你备份这些后来安装的包。
按照需要选择想要备份的包。点击 “Software Selections” 右侧的 “Backup” 按钮备份顶层包。
注意:依赖包不会出现在这个备份中。
![18_clicking_backup_for_software_selections](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x18_clicking_backup_for_software_selections.png.pagespeed.ic.QI5D-IgnP_.png)
名为“packages.list”和“packages-installed.list”的两个文件出现在了备份目录中同时弹出一个通知你备份完成的对话框。点击“OK”关闭它。
注意“packages-installed.list”文件包含了所有的包而“packages.list”在包含所有包的同时还标明了哪些包被选中了。
![19_software_selections_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x19_software_selections_backed_up.png.pagespeed.ic.LVmgs6MKPL.png)
要备份已安装软件的设置的话,点击 Aptik 主界面 “Application Settings” 右侧的 “Backup” 按钮选择你要备份的设置点击“Backup”。
注意如果你要选择所有设置点击“Select All”按钮。
![20_backing_up_app_settings](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x20_backing_up_app_settings.png.pagespeed.ic.7_kgU3Dj_m.png)
被选择的配置文件统一被打包到一个名叫 “app-settings.tar.gz” 的文件中。
![21_zipping_settings_files](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x21_zipping_settings_files.png.pagespeed.ic.dgoBj7egqv.png)
当打包完成后打包后的文件被拷贝到备份目录下另外一个备份成功的对话框出现。点击”OK“关掉。
![22_app_settings_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22_app_settings_backed_up.png.pagespeed.ic.Mb6utyLJ3W.png)
来自 “/usr/share/themes” 目录的主题和来自 “/usr/share/icons” 目录的图标也可以备份。点击 “Themes and Icons” 右侧的 “Backup” 来进行此操作。“Backup Themes” 对话框默认选择了所有的主题和图标,你可以按需要取消选择一些,然后点击 “Backup” 进行备份。
![22a_backing_up_themes_and_icons](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22a_backing_up_themes_and_icons.png.pagespeed.ic.KXa8W3YhyF.png)
主题被打包拷贝到备份目录下的 “themes” 文件夹中,图标被打包拷贝到备份目录下的 “icons” 文件夹中。然后成功提示对话框出现点击”OK“关闭它。
![22b_themes_and_icons_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22b_themes_and_icons_backed_up.png.pagespeed.ic.ejjRaymD39.png)
一旦你完成了需要的备份点击主界面左上角的”X“关闭 Aptik 。
![23_closing_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x542x23_closing_aptik.png.pagespeed.ic.pNk9Vt3--l.png)
备份过的文件已存在于你选择的备份目录中,可以随时取阅。
![24_backup_files_in_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x374x24_backup_files_in_directory.png.pagespeed.ic.vwblOfN915.png)
当你重装Ubuntu或者安装新版本的Ubuntu后在新的系统中安装 Aptik 并且将备份好的文件置于新系统中让其可被使用。运行 Aptik并使用每个条目的 “Restore” 按钮来恢复你的软件源、应用、包、设置、主题以及图标。
--------------------------------------------------------------------------------
via: http://www.howtogeek.com/206454/how-to-backup-and-restore-your-apps-and-ppas-in-ubuntu-using-aptik/
作者Lori Kaufman
译者:[Ping](https://github.com/mr-ping)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.howtogeek.com/203768

View File

@ -6,7 +6,7 @@
虽然对于本教程,我只会演示怎样来添加**64位**网络安装镜像但对于Ubuntu或者Debian的**32位**系统或者其它架构的镜像操作步骤也基本相同。同时就我而言我会解释添加Ubuntu 32位源的方法但不会演示配置。
从PXE服务器安装 **Ubuntu**或者**Debian**要求你的客户机必须激活网络连接,最好是使用**DHCP**通过**NAT**来进行动态分配地址。以便安装器拉取需求包并完成安装进程。
从PXE服务器安装 **Ubuntu**或者**Debian**要求你的客户机必须激活网络连接,最好是使用**DHCP**通过**NAT**来进行动态分配地址。以便安装器拉取所需的包并完成安装过程。
#### 需求 ####
@ -14,11 +14,11 @@
## 步骤 1 添加Ubuntu 14.10和Ubuntu 14.04服务器到PXE菜单 ##
**1.** 为**Ubuntu 14.10****Ubuntu 14.04**添加网络安装源到PXE菜单可以通过两种方式实现其一是通过下载Ubuntu CD ISO镜像并挂载到PXE服务器机器上以便可以读取Ubuntu网络启动文件其二是通过直接下载Ubuntu网络启动归档包并将其解压缩到系统中。下面我将进一步讨论这两种方法
**1.** 为**Ubuntu 14.10****Ubuntu 14.04**添加网络安装源到PXE菜单可以通过两种方式实现其一是通过下载Ubuntu CD ISO镜像并挂载到PXE服务器机器上以便可以读取Ubuntu网络启动文件其二是通过直接下载Ubuntu网络启动归档包并将其解压缩到系统中。下面我将进一步讨论这两种方法
### 使用Ubuntu 14.10和Ubuntu 14.04 CD ISO镜像 ###
为了能使用此方法你的PXE服务器需要有一台可工作的CD/DVD驱动器。在一台专有计算机上转到[Ubuntu 14.10下载][2]和[Ubuntu 14.04 下载][3]页,取64位**服务器安装镜像**将它烧录到CD并将CD镜像放到PXE服务器DVD/CD驱动器然后使用以下命令挂载到系统。
为了能使用此方法你的PXE服务器需要有一台可工作的CD/DVD驱动器。在一台专有计算机上转到[Ubuntu 14.10下载][2]和[Ubuntu 14.04 下载][3]页,取64位**服务器安装镜像**将它烧录到CD并将CD镜像放到PXE服务器DVD/CD驱动器然后使用以下命令挂载到系统。
# mount /dev/cdrom /mnt
@ -26,28 +26,28 @@
#### 在Ubuntu 14.10上 ####
------------------ 32位 ------------------
------------------ 32位 ------------------
# wget http://releases.ubuntu.com/14.10/ubuntu-14.10-server-i386.iso
# mount -o loop /path/to/ubuntu-14.10-server-i386.iso /mnt
----------
------------------ 64位 ------------------
------------------ 64位 ------------------
# wget http://releases.ubuntu.com/14.10/ubuntu-14.10-server-amd64.iso
# mount -o loop /path/to/ubuntu-14.10-server-amd64.iso /mnt
#### 在Ubuntu 14.04上 ####
------------------ 32位 ------------------
------------------ 32位 ------------------
# wget http://releases.ubuntu.com/14.04/ubuntu-14.04.1-server-i386.iso
# mount -o loop /path/to/ubuntu-14.04.1-server-i386.iso /mnt
----------
------------------ 64位 ------------------
------------------ 64位 ------------------
# wget http://releases.ubuntu.com/14.04/ubuntu-14.04.1-server-amd64.iso
# mount -o loop /path/to/ubuntu-14.04.1-server-amd64.iso /mnt
@ -58,33 +58,33 @@
#### 在Ubuntu 14.10上 ####
------------------ 32位 ------------------
------------------ 32位 ------------------
# cd
# wget http://archive.ubuntu.com/ubuntu/dists/utopic/main/installer-i386/current/images/netboot/netboot.tar.gz
----------
------------------ 64位 ------------------
------------------ 64位 ------------------
# cd
    # wget http://archive.ubuntu.com/ubuntu/dists/utopic/main/installer-amd64/current/images/netboot/netboot.tar.gz
#### 在Ubuntu 14.04上 ####
------------------ 32位 ------------------
------------------ 32位 ------------------
# cd
# wget http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-i386/current/images/netboot/netboot.tar.gz
----------
------------------ 64位 ------------------
------------------ 64位 ------------------
# cd
# wget http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/netboot.tar.gz
对于其它处理器架构请访问下面的Ubuntu 14.10和Ubuntu 14.04网络启动官方页面,选择你的架构类型并下载需文件。
对于其它处理器架构请访问下面的Ubuntu 14.10和Ubuntu 14.04网络启动官方页面,选择你的架构类型并下载需文件。
- [http://cdimage.ubuntu.com/netboot/14.10/][4]
- [http://cdimage.ubuntu.com/netboot/14.04/][5]
@ -101,7 +101,7 @@
# tar xfz netboot.tar.gz
# cp -rf ubuntu-installer/ /var/lib/tftpboot/
如果你想要在PXE服务器上同时使用两种Ubuntu服务器架构先请下载然后根据不同的情况挂载并解压缩32位并拷贝**ubuntu-installer**目录到**/var/lib/tftpboot**然后卸载CD或删除网络启动归档以及解压缩的文件和文件夹。对于64位架构请重复上述步骤以便让最终的**tftp**路径形成以下结构。
如果你想要在PXE服务器上同时使用两种Ubuntu服务器架构先请下载然后根据不同的情况挂载或解压缩32位架构然后拷贝**ubuntu-installer**目录到**/var/lib/tftpboot**然后卸载CD或删除网络启动归档以及解压缩的文件和文件夹。对于64位架构请重复上述步骤以便让最终的**tftp**路径形成以下结构。
/var/lib/tftpboot/ubuntu-installer/amd64
/var/lib/tftpboot/ubuntu-installer/i386
@ -238,7 +238,7 @@ via: http://www.tecmint.com/add-ubuntu-to-pxe-network-boot/
作者:[Matei Cezar][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Mr小眼儿](https://github.com/tinyeyeser)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,74 @@
Linux 有问必答--Linux 中如何安装 7zip
================================================================================
> **问题**: 我需要从 ISO 映像中提取某些文件,为此我想要使用 7zip 程序。那么我应该如何在各个 Linux 发行版上正确安装 7zip 软件呢?
7zip 是一款开源的归档应用程序,开始是为 Windows 系统而开发的。它能对多种格式的档案文件进行打包或解包处理,除了支持原生的 7z 格式的文档外,还支持包括 XZ、GZIP、TAR、ZIP 和 BZIP2 等这些格式。 一般地7zip 也常用来解压 RAR、DEB、RPM 和 ISO 等格式的文件。除了简单的归档功能7zip 还具有支持 AES-256 算法加密以及自解压和建立多卷存档功能。在以 POSIX 协议为标准的系统上Linux、Unix、BSD原生的 7zip 程序被移植过来并被命名为 p7zip“POSIX 7zip” 的简称)。
下面介绍如何在 Linux 中安装 7zip (或 p7zip
### 在 Debian、Ubuntu 或 Linux Mint 系统中安装 7zip ###
在基于 Debian 的发行版中,有三种 7zip 软件包。
- **p7zip**: 包含 7zr最小的 7zip 归档工具),仅仅只能处理原生的 7z 格式。
- **p7zip-full**: 包含 7z ,支持 7z、LZMA2、XZ、ZIP、CAB、GZIP、BZIP2、ARJ、TAR、CPIO、RPM、ISO 和 DEB 格式。
- **p7zip-rar**: 包含一个能解压 RAR 文件的插件。
建议安装 p7zip-full 包(不是 p7zip因为这是最完全的 7zip 程序包,它支持很多归档格式。此外,如果您想处理 RAR 文件话,还需要安装 p7zip-rar 包,做成一个独立的插件包的原因是因为 RAR 是一种专有格式。
$ sudo apt-get install p7zip-full p7zip-rar
### 在 Fedora 或 CentOS/RHEL 系统中安装 7zip ###
基于红帽的发行版提供了两个 7zip 软件包。
- **p7zip**: 包含 7za 命令,支持 7z、ZIP、GZIP、CAB、ARJ、BZIP2、TAR、CPIO、RPM 和 DEB 格式。
- **p7zip-plugins**: 包含 7z 命令以及额外的插件,扩展了 7za 命令(例如:支持 ISO 格式的抽取)。
在 CentOS/RHEL 系统中,在运行下面命令前您需要确保 [EPEL 资源库][1] 可用,但在 Fedora 系统中就不需要额外的资源库了。
$ sudo yum install p7zip p7zip-plugins
注意,跟基于 Debian 的发行版不同的是,基于红帽的发行版没有提供 RAR 插件,所以您不能使用 7z 命令来解压 RAR 文件。
### 使用 7z 创建或提取归档文件 ###
一旦安装好 7zip 软件后,就可以使用 7z 命令来打包解包各式各样的归档文件了。7z 命令会使用不同的插件来辅助处理对应格式的归档文件。
![](https://farm8.staticflickr.com/7583/15874000610_878a85b06a_b.jpg)
使用 “a” 选项就可以创建一个归档文件,它可以创建 7z、XZ、GZIP、TAR、 ZIP 和 BZIP2 这几种格式的文件。如果指定的归档文件已经存在的话,它会把文件“添加”到存在的归档中,而不是覆盖原有归档文件。
$ 7z a <archive-filename> <list-of-files>
使用 “e” 选项可以解包一个归档文件,解出的文件会放在当前目录。支持解包的格式比创建时支持的格式要多得多,包括 7z、XZ、GZIP、TAR、ZIP、BZIP2、LZMA2、CAB、ARJ、CPIO、RPM、ISO 和 DEB 这些格式。
$ 7z e <archive-filename>
解包的另外一种方式是使用 “x” 选项。和 “e” 选项不同的是,它使用的是全路径来抽取归档的内容。
$ 7z x <archive-filename>
要查看归档的文件列表,使用 “l” 选项。
$ 7z l <archive-filename>
要更新或删除归档文件,分别使用 “u” 和 “d” 选项。
$ 7z u <archive-filename> <list-of-files-to-update>
$ 7z d <archive-filename> <list-of-files-to-delete>
要测试归档的完整性,使用:
$ 7z t <archive-filename>
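结合开头的问题,下面给出一组完整的示例,演示归档的创建以及从 ISO 映像中提取文件(其中的文件名仅为示意):

    $ 7z a docs.7z ~/Documents/*.txt                        # 创建一个 7z 归档
    $ 7z l ubuntu-14.04.1-server-amd64.iso                  # 列出 ISO 中的内容
    $ 7z x ubuntu-14.04.1-server-amd64.iso -oiso_contents   # 按完整路径解压到 iso_contents 目录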
--------------------------------------------------------------------------------
via:http://ask.xmodulo.com/install-7zip-linux.html
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html

View File

@ -0,0 +1,135 @@
Docker的镜像并不安全
================================================================================
最近使用Docker下载“官方”容器镜像的时候我发现这样一句话
ubuntu:14.04: The image you are pulling has been verified (您所拉取的镜像已经经过验证)
起初我以为这条信息引自Docker[大力推广][1]的镜像签名系统,因此也就没有继续跟进。后来,研究加密摘要系统的时候——Docker 用这套系统来对镜像进行安全加固——我才有机会深入研究,并发现在逻辑上,整个与镜像安全相关的部分存在一系列系统性问题。
Docker 报告一个已下载的镜像经过了“验证”这仅仅基于一份签名清单signed manifest而Docker 从未根据清单对镜像本身的校验和进行验证。攻击者因此可以附上一份签名清单,提供任意的镜像内容。一系列严重漏洞的大门就此敞开。
镜像经由HTTPS服务器下载后通过一个未加密的管道流进入Docker守护进程
[decompress] -> [tarsum] -> [unpack]
这条管道的性能没有问题,但却完全不安全。不可信的输入在签名验证之前是不应当进入管道的。不幸的是Docker 在上面处理镜像的三个步骤中,都没有对校验和进行验证。
然而不论Docker如何[声明][2]实际上镜像的校验和从未经过校验。下面是Docker与镜像校验和的验证相关的代码[片段][3],即使我提交了校验和不匹配的镜像,都无法触发警告信息。
    if img.Checksum != "" && img.Checksum != checksum {
        log.Warnf("image layer checksum mismatch: computed %q, expected %q", checksum, img.Checksum)
    }
### 不安全的处理管道 ###
**解压缩**
Docker 支持三种压缩算法gzip、bzip2 和xz。前两种使用Go 的标准库实现,是[内存安全memory-safe][4]的因此这里我预计的攻击类型应该是拒绝服务类的攻击包括CPU 和内存上的过载、宕机等等。
第三种压缩算法xz比较有意思。因为没有现成的Go实现Docker 通过[执行(exec)][5]`xz`二进制命令来实现解压缩。
xz二进制程序来自于[XZ Utils][6]项目,由[大概][7]2万行C代码生成而来。而C语言不是一门内存安全的语言。这意味着C程序的恶意输入在这里也就是Docker镜像的XZ Utils解包程序潜在地可能会执行任意代码。
Docker以root权限*运行* `xz` 命令,更加恶化了这一潜在威胁。这意味着如果在`xz`中出现了一个漏洞,对`docker pull`命令的调用就会导致用户整个系统的完全沦陷。
**Tarsum**
对tarsum 的使用其出发点是好的但却是最大的败笔。为了对任意一个tar 文件求出确定的校验和Docker 先对tar 文件进行解码,然后求出特定部分的哈希值,同时排除剩余的部分,而这些步骤的[顺序都是固定的][8]。
由于其生成校验和的步骤固定,它解码不可信数据的过程就有可能被设计成[攻破tarsum的代码][9]。这里潜在的攻击既包括拒绝服务攻击,还有逻辑上的漏洞攻击,可能导致文件被感染、忽略、进程被篡改、植入等等,这一切攻击的同时,校验和可能都是不变的。
**解包**
解包的过程包括tar解码和生成硬盘上的文件。这一过程尤其危险因为在解包写入硬盘的过程中有另外三个[已报告的漏洞][10]。
任何情形下未经验证的数据都不应当解包后直接写入硬盘。
### libtrust ###
Docker 的工具包[libtrust][11],号称“通过一个分布式的信任图表进行认证和访问控制”。很不幸,对此官方没有任何具体的说明,看起来它好像是实现了一些[JavaScript 对象签名和加密][12]规范,以及其他一些未说明的算法。
使用libtrust下载一个清单经过签名和认证的镜像就可以触发下面这条不准确的信息说不准确是因为事实上它验证的只是清单并非真正的镜像
ubuntu:14.04: The image you are pulling has been verified(您所拉取的镜像已经经过验证)
目前只有Docker公司“官方”发布的镜像清单使用了这套签名系统但是上次我参加Docker[管理咨询委员会][13]的会议讨论时我所理解的是Docker公司正计划在未来扩大部署这套系统。他们的目标是以Docker公司为中心控制一个认证授权机构对镜像进行签名和客户认证。
我试图从Docker 的代码中找到签名密钥,但是没找到。好像它并不像我们所期望的把密钥嵌在二进制代码中,而是在每次镜像下载前由Docker 守护进程[通过HTTPS 从CDN][14]远程获取。这是一个多么糟糕的方案有无数种攻击手段可能会将可信密钥替换成恶意密钥。这些攻击包括但不限于CDN 供应商出问题、CDN 初始密钥出现问题、客户端下载时的中间人攻击等等。
### 补救 ###
研究结束前,我[报告][15]了一些在tarsum 系统中发现的问题,但是截至目前,这些问题仍然没有得到修复。
要改进Docker镜像下载系统的安全问题我认为应当有以下措施
**摒弃tarsum并且真正对镜像本身进行验证**
出于安全原因tarsum应当被摒弃同时镜像在完整下载后、其他步骤开始前就对镜像的加密签名进行验证。
**添加权限隔离**
镜像的处理过程中涉及到解压缩或解包的步骤必须在隔离的进程容器中进行即只给予其操作所需的最小权限。任何场景下都不应当使用root运行`xz`这样的解压缩工具。
**替换 libtrust**
应当用[更新框架(The Update Framework)][16]替换掉libtrust这是专门设计用来解决软件二进制签名此类实际问题的。其威胁模型非常全方位能够解决libtrust中未曾考虑到的诸多问题目前已经有了完整的说明文档。除了已有的Python实现我已经开始着手用[Go语言实现][17]的工作,也欢迎大家的贡献。
作为将更新框架加入Docker的一部分还应当加入一个本地密钥存储池将root密钥与registry的地址进行映射这样用户就可以拥有他们自己的签名密钥而不必使用Docker公司的了。
我注意到使用Docker 公司官方以外的仓库往往是一种非常糟糕的用户体验。即使没有技术上的原因Docker 也会将第三方仓库的内容当作二等公民来看待。这个问题不仅仅是生态问题,还是一个终端用户的安全问题。针对第三方仓库的全方位、去中心化的安全模型既必须又迫切。我希望Docker 公司在重新设计他们的安全模型和镜像认证系统时能采纳这一点。
### 结论 ###
Docker用户应当意识到负责下载镜像的代码是非常不安全的。用户们应当只下载那些出处没有问题的镜像。目前这里的“没有问题”并不包括Docker公司的“可信trusted”镜像例如官方的Ubuntu和其他基础镜像。
最好的选择就是在本地屏蔽 `index.docker.io`,然后使用`docker load`命令在导入Docker之前手动下载镜像并对其进行验证。Red Hat的安全博客有一篇[很好的文章][18],大家可以看看。
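下面是按照这个思路手工操作的一个简单示意(其中的下载地址和文件名均为假设,你应当从自己信任的渠道获取镜像归档,并与对方带外发布的校验和进行比对):

    $ wget https://trusted.example.com/images/myimage.tar
    $ sha256sum myimage.tar
    # 将输出与可信渠道公布的校验和人工比对,确认一致后再导入
    $ docker load -i myimage.tar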
感谢Lewis Marshall指出tarsum从未真正验证。
- [校验和的代码][19]
- [cloc][20]统计出XZ Utils v5.2.0中有18141行不含空行和注释的C代码以及5900行头文件代码。
- [Android中也发现了][21]类似的bug可以篡改已签名包中的任意文件。同样出现问题的还有[Windows的Authenticode][22]认证系统,二进制文件会被篡改。
- 特别的:[CVE-2014-6407][23]、 [CVE-2014-9356][24]以及 [CVE-2014-9357][25]。目前已有两个Docker[安全发布][26]有了回应。
- 参见[2014-10-28 DGAB会议记录][27]的第8页。
--------------------------------------------------------------------------------
via: https://titanous.com/posts/docker-insecurity
作者:[titanous][a]
译者:[Mr小眼儿](http://blog.csdn.net/tinyeyeser)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://twitter.com/titanous
[1]:https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
[2]:https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
[3]:https://titanous.com/posts/docker-insecurity#fn:0
[4]:https://en.wikipedia.org/wiki/Memory_safety
[5]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/pkg/archive/archive.go#L91-L95
[6]:http://tukaani.org/xz/
[7]:https://titanous.com/posts/docker-insecurity#fn:1
[8]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/pkg/tarsum/tarsum_spec.md
[9]:https://titanous.com/posts/docker-insecurity#fn:2
[10]:https://titanous.com/posts/docker-insecurity#fn:3
[11]:https://github.com/docker/libtrust
[12]:https://tools.ietf.org/html/draft-ietf-jose-json-web-signature-11
[13]:https://titanous.com/posts/docker-insecurity#fn:4
[14]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/trust/trusts.go#L38
[15]:https://github.com/docker/docker/issues/9719
[16]:http://theupdateframework.com/
[17]:https://github.com/flynn/go-tuf
[18]:https://securityblog.redhat.com/2014/12/18/before-you-initiate-a-docker-pull/
[19]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/image/image.go#L114-L116
[20]:http://cloc.sourceforge.net/
[21]:http://www.saurik.com/id/17
[22]:http://blogs.technet.com/b/srd/archive/2013/12/10/ms13-098-update-to-enhance-the-security-of-authenticode.aspx
[23]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-6407
[24]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9356
[25]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9357
[26]:https://groups.google.com/d/topic/docker-user/nFAz-B-n4Bw/discussion
[27]:https://docs.google.com/document/d/1JfWNzfwptsMgSx82QyWH_Aj0DRKyZKxYQ1aursxNorg/edit?pli=1

View File

@ -0,0 +1,207 @@
如何配置fail2ban来保护Apache服务器
================================================================================
生产环境中的Apache 服务器可能会受到不同的攻击。攻击者或许会试图通过暴力攻击或者执行恶意脚本来获取未经授权或者禁止访问的目录。一些恶意爬虫或许会扫描你网站上的各种安全漏洞或者通过收集email 地址和web 表单来发送垃圾邮件。
Apache 服务器具有全面的日志功能可以记录下许多预示着攻击的异常事件。然而它还不能系统地解析具体的apache 日志并迅速地对潜在的攻击做出反应(比如禁止/解禁IP 地址)。这时候`fail2ban`可以解救这一切,减轻系统管理员的工作。
`fail2ban`是一款入侵防御工具,可以基于系统日志检测各种攻击,并且自动采取保护措施,比如:通过`iptables`禁止IP、阻止/etc/hosts.deny 中的连接、或者通过邮件通知事件。fail2ban 提供了一系列预定义的“监狱jail它们使用特定的程序日志过滤器来检测常见的攻击。你也可以编写自定义的规则来检测来自任意程序的攻击。
在本教程中我会演示如何配置fail2ban来保护你的apache服务器。我假设你已经安装了apache和fail2ban。对于安装请参考[另外一篇教程][1]。
### 什么是 Fail2ban 监狱 ###
让我们更深入地了解fail2ban 监狱。监狱定义了具体的应用策略它会为指定的程序触发保护措施。fail2ban 在/etc/fail2ban/jail.conf 中为一些流行程序如Apache、Dovecot、Lighttpd、MySQL、Postfix、[SSH][2]等预定义了一些监狱。每个监狱都依赖特定的程序日志过滤器(位于/etc/fail2ban/filter.d 下面来检测常见的攻击。让我们看一个例子SSH 监狱。
[ssh]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 6
banaction = iptables-multiport
SSH监狱的配置定义了这些参数
- **[ssh]** 方括号内是监狱的名字。
- **enabled**:是否启用监狱
- **port** 端口号(或者对应的服务名称)
- **filter** 检测攻击的检测规则
- **logpath** 检测的日志文件
- **maxretry** 触发禁止前允许失败的最大次数
- **banaction** 禁止操作
在监狱中定义的任意参数都会覆盖相应的fail2ban 全局默认配置。反之,缺少的参数会使用定义在[DEFAULT]字段中的默认值。
预定义的日志过滤器都位于/etc/fail2ban/filter.d可以采取的操作则定义在/etc/fail2ban/action.d 中。
![](https://farm8.staticflickr.com/7538/16076581722_cbca3c1307_b.jpg)
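你可以直接列出这两个目录,看看有哪些现成的过滤器和操作可用:

    $ ls /etc/fail2ban/filter.d
    $ ls /etc/fail2ban/action.d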
如果你想要覆盖`fail2ban`的默认操作,或者定义任何自定义监狱,你可以创建 **/etc/fail2ban/jail.local** 文件。本篇教程中,我会使用/etc/fail2ban/jail.local。
### 启用预定义的apache监狱 ###
`fail2ban`的默认安装为Apache 服务提供了一些预定义监狱以及过滤器。我要启用这些内建的Apache 监狱。由于Debian 系和红帽系的配置稍有不同,我会分别展示它们的配置文件。
#### 在Debian 或者 Ubuntu启用Apache监狱 ####
要在基于Debian的系统上启用预定义的apache监狱如下创建/etc/fail2ban/jail.local。
$ sudo vi /etc/fail2ban/jail.local
----------
# detect password authentication failures
[apache]
enabled = true
port = http,https
filter = apache-auth
logpath = /var/log/apache*/*error.log
maxretry = 6
# detect potential search for exploits and php vulnerabilities
[apache-noscript]
enabled = true
port = http,https
filter = apache-noscript
logpath = /var/log/apache*/*error.log
maxretry = 6
# detect Apache overflow attempts
[apache-overflows]
enabled = true
port = http,https
filter = apache-overflows
logpath = /var/log/apache*/*error.log
maxretry = 2
# detect failures to find a home directory on a server
[apache-nohome]
enabled = true
port = http,https
filter = apache-nohome
logpath = /var/log/apache*/*error.log
maxretry = 2
由于上面的监狱没有指定措施,这些监狱都将会触发默认的措施。要查看默认的措施,在/etc/fail2ban/jail.conf中的[DEFAULT]下找到“banaction”。
banaction = iptables-multiport
本例中默认的操作是iptables-multiport定义在/etc/fail2ban/action.d/iptables-multiport.conf 中。这个措施使用iptables 的多端口模块禁止一个IP 地址。
在启用监狱后你必须重启fail2ban来加载监狱。
$ sudo service fail2ban restart
#### 在CentOS/RHEL 或者 Fedora中启用Apache监狱 ####
要在基于红帽的系统中启用预定义的监狱,如下创建/etc/fail2ban/jail.local。
$ sudo vi /etc/fail2ban/jail.local
----------
# detect password authentication failures
[apache]
enabled = true
port = http,https
filter = apache-auth
logpath = /var/log/httpd/*error_log
maxretry = 6
# detect spammer robots crawling email addresses
[apache-badbots]
enabled = true
port = http,https
filter = apache-badbots
logpath = /var/log/httpd/*access_log
bantime = 172800
maxretry = 1
    # detect potential search for exploits and php vulnerabilities
[apache-noscript]
enabled = true
port = http,https
filter = apache-noscript
logpath = /var/log/httpd/*error_log
maxretry = 6
# detect Apache overflow attempts
[apache-overflows]
enabled = true
port = http,https
filter = apache-overflows
logpath = /var/log/httpd/*error_log
maxretry = 2
# detect failures to find a home directory on a server
[apache-nohome]
enabled = true
port = http,https
filter = apache-nohome
logpath = /var/log/httpd/*error_log
maxretry = 2
# detect failures to execute non-existing scripts that
# are associated with several popular web services
# e.g. webmail, phpMyAdmin, WordPress
    [apache-botsearch]
    enabled = true
    port = http,https
filter = apache-botsearch
logpath = /var/log/httpd/*error_log
maxretry = 2
注意这些监狱文件默认的操作是iptables-multiport定义在/etc/fail2ban/jail.conf 中[DEFAULT]字段下的“banaction”里。这个措施使用iptables 的多端口模块禁止一个IP 地址。
启用监狱后你必须重启fail2ban来加载监狱。
在 Fedora 或者 CentOS/RHEL 7中
$ sudo systemctl restart fail2ban
在 CentOS/RHEL 6中
$ sudo service fail2ban restart
### 检查和管理fail2ban禁止状态 ###
监狱一旦激活后你可以用fail2ban的客户端命令行工具来监测当前的禁止状态。
查看激活的监狱列表:
$ sudo fail2ban-client status
查看特定监狱的状态包含禁止的IP列表
$ sudo fail2ban-client status [监狱名]
![](https://farm8.staticflickr.com/7572/15891521967_5c6cbc5f8f_c.jpg)
你也可以手动禁止或者解禁IP地址
要用指定的监狱禁止IP
$ sudo fail2ban-client set [name-of-jail] banip [ip-address]
要解禁指定监狱屏蔽的IP
$ sudo fail2ban-client set [name-of-jail] unbanip [ip-address]
### 总结 ###
本篇教程解释了fail2ban 监狱如何工作以及如何使用内置的监狱来保护Apache 服务器。依赖于你的环境以及要保护的web 服务器类型你或许要调整已有的监狱或者编写自定义监狱和日志过滤器。查看fail2ban 的[官方Github 页面][3]来获取最新的监狱和过滤器示例。
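如果内置监狱不能满足需求,可以参考下面这个最小化的示意:先定义一个日志过滤器,再在 jail.local 中启用对应的监狱其中的过滤器名称、正则表达式和日志路径都只是假设需要根据你自己的Apache 日志格式调整):

    # /etc/fail2ban/filter.d/apache-wp-login.conf
    [Definition]
    failregex = ^<HOST> .*"POST /wp-login.php
    ignoreregex =
    # 追加到 /etc/fail2ban/jail.local
    [apache-wp-login]
    enabled = true
    port = http,https
    filter = apache-wp-login
    logpath = /var/log/apache*/*access.log
    maxretry = 5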
你有在生产环境中使用fail2ban么分享一下你的经验吧。
--------------------------------------------------------------------------------
via: http://xmodulo.com/configure-fail2ban-apache-http-server.html
作者:[Dan Nanni][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/how-to-protect-ssh-server-from-brute-force-attacks-using-fail2ban.html
[2]:http://xmodulo.com/how-to-protect-ssh-server-from-brute-force-attacks-using-fail2ban.html
[3]:https://github.com/fail2ban/fail2ban

View File

@ -0,0 +1,76 @@
如何在Linux上使用dupeGuru删除重复文件
================================================================================
最近,我被要求清理我父亲的文件和文件夹。有一个难题是,里面存在很多命名不当的重复文件:既有移动硬盘上的备份,也有同一个文件编辑出来的多个版本,目录结构还几经变动,同一个文件被复制了好几次,名字、位置各不相同,挤满了磁盘空间,追踪每一个文件成了最大的问题。万幸的是,有一个小巧的软件可以帮助你省下很多时间,找到并删除你系统中的重复文件:[dupeGuru][1]。它用Python 写成,这个去重软件几个小时前刚切换到了GPLv3 许可证。因此,是时候用它来清理你的文件了!
### dupeGuru的安装 ###
在Ubuntu 上你可以加入Hardcoded Software 的PPA
$ sudo apt-add-repository ppa:hsoft/ppa
$ sudo apt-get update
接着用下面的命令安装:
$ sudo apt-get install dupeguru-se
在ArchLinux中这个包在[AUR][2]中。
如果你想自己编译,源码在[GitHub][3]上。
### dupeGuru的基本使用 ###
DupeGuru的构想是既快又安全。这意味着程序不会在你的系统上疯狂地运行。它很少会删除你不想要删除的文件。然而既然在讨论文件删除保持谨慎和小心总是好的备份总是需要的。
你看完注意事项后就可以用下面的命令运行dupeGuru 了:
$ dupeguru_se
你应该会看到要你选择文件夹的欢迎界面,在这里加入你想要扫描的文件夹:
![](https://farm9.staticflickr.com/8596/16199976251_f78b042fba.jpg)
一旦你选择完文件夹并启动扫描dupeGuru 会以列表的形式显示一组组的重复文件:
![](https://farm9.staticflickr.com/8600/16016041367_5ab2834efb_z.jpg)
要注意的是默认情况下dupeGuru 基于文件的内容进行匹配,而不是文件名。为了防止意外删除重要的文件,“匹配”那一列列出了所使用的匹配算法。在这里你可以选择你想要删除的匹配文件并按下“Action”按钮来查看可用的操作。
![](https://farm8.staticflickr.com/7516/16199976361_c8f919b06e_b.jpg)
可用的选项相当丰富。简而言之,你可以删除重复文件、把它们移动到其他位置、忽略它们、打开它们、重命名它们,甚至对它们运行自定义命令。如果你选择删除重复文件,你可能会像我一样非常意外:删除方式竟然也有好几种选项。
![](https://farm8.staticflickr.com/7503/16014366568_54f70e3140.jpg)
你不仅可以将删除的文件移到垃圾箱或者永久删除,还可以选择留下指向原文件的链接(软链接或者硬链接)。也就是说,重复文件将会被删除,但会留下指向原文件的链接,这将省下大量的磁盘空间。如果你要将这些文件导入到工作空间,或者它们被其他文件依赖,这会很有用。
还有一个有趣的选项你可以将结果导出为HTML 或者CSV 文件。我不确定你是否用得上但我想当你只想记录重复文件、而不是让dupeGuru 直接处理它们时,这会很有用。
最后但同样重要的是,偏好菜单可以让你按自己的想法来完成去重这件事。
![](https://farm8.staticflickr.com/7493/16015755749_a9f343b943_z.jpg)
这里你可以选择扫描的标准:基于内容还是基于文件名,还有一个阈值,用来控制结果的数量。这里同样可以定义可在执行操作时选用的自定义命令。在无数其他小选项中要注意的是dupeGuru 默认忽略小于10KB 的文件。
要了解更多的信息,我建议你去[官方网站][4]看看,那里有很多文档、论坛支持和其他好东西。
总结一下dupeGuru 是我无论何时准备备份或者释放空间时都会想到的软件。我发现它对高级用户而言足够强大对新手而言也很直观。锦上添花的是dupeGuru 是跨平台的这意味着你在Mac 或者Windows PC 上也可以使用它。如果你有清理音乐或者图片的特定需求,可以看看两个变种:[dupeguru-me][5]和[dupeguru-pe][6],它们分别用来清理音频和图片文件。与常规版本不同的是,它们不仅比较文件格式,还会比较特定的媒体数据,像质量和码率。
你觉得dupeGuru 怎么样?你会考虑使用它么?或者你有什么可以替代它的软件建议么?请在评论区告诉我你们的想法。
--------------------------------------------------------------------------------
via: http://xmodulo.com/dupeguru-deduplicate-files-linux.html
作者:[Adrien Brochard][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://www.hardcoded.net/dupeguru/
[2]:https://aur.archlinux.org/packages/dupeguru-se/
[3]:https://github.com/hsoft/dupeguru
[4]:http://www.hardcoded.net/dupeguru/
[5]:http://www.hardcoded.net/dupeguru_me/
[6]:http://www.hardcoded.net/dupeguru_pe/

View File

@ -0,0 +1,73 @@
如何在Ubuntu 14.04 上为Apache 2.4 安装SSL
================================================================================
今天我会展示如何为你的个人网站或者博客安装**SSL 证书**,来保护访问者和网站之间通信的安全。
安全套接字层简称SSL是一种加密网站和浏览器之间连接的标准安全技术。它确保服务器和浏览器之间传输的数据保持隐私和安全被成千上万的网站用来保护它们与客户之间的通信。要启用SSL 连接web 服务器需要安装SSL 证书。
你可以创建你自己的SSL 证书但它默认不会被浏览器信任要解决这个问题你需要从受信任的证书机构CA处购买证书。我们会向你展示如何获得证书并将其安装到apache 中。
### 生成一个证书签名请求 ###
证书机构CA会要求你在你的服务器上生成一个证书签名请求CSR。这是一个很简单的过程只需要一会就行你需要运行下面的命令并输入需要的信息
# openssl req -new -newkey rsa:2048 -nodes -keyout yourdomainname.key -out yourdomainname.csr
输出看上去会像这样:
![generate csr](http://blog.linoxide.com/wp-content/uploads/2015/01/generate-csr.jpg)
这一步会生成两个文件一个用于解密SSL 证书的私钥文件一个证书签名请求CSR文件用于申请你的SSL 证书。
根据你申请证书的机构不同你可能需要上传CSR 文件,或者在网站表单中粘贴它的内容。
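在提交之前,你可以用下面的命令核对 CSR 中的信息是否正确:

    # openssl req -noout -text -in yourdomainname.csr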
### 在Apache中安装实际的证书 ###
生成步骤完成之后,你会收到新的数字证书。本篇教程中我们使用[Comodo SSL][1]收到的是一个zip 文件打包的证书。要在apache 中使用它,你首先需要用下面的命令把收到的几个证书文件合并成一个证书链:
# cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > bundle.crt
![bundle](http://blog.linoxide.com/wp-content/uploads/2015/01/bundle.jpg)
用下面的命令确保ssl模块已经加载进apache了
# a2enmod ssl
如果你看到了“Module ssl already enabled”这样的信息就说明你成功了如果你看到了“Enabling module ssl”那么你还需要用下面的命令重启apache
# service apache2 restart
最后像下面这样修改你的虚拟主机文件(通常在/etc/apache2/sites-enabled 下):
    <VirtualHost *:443>
        DocumentRoot /var/www/html/
        ServerName linoxide.com
        SSLEngine on
        SSLCertificateFile /usr/local/ssl/crt/yourdomainname.crt
        SSLCertificateKeyFile /usr/local/ssl/yourdomainname.key
        SSLCACertificateFile /usr/local/ssl/bundle.crt
    </VirtualHost>
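修改完虚拟主机文件后,建议先检查配置语法,再重启 apache 使其生效:

    # apache2ctl configtest
    # service apache2 restart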
你现在应该可以用 https://YOURDOMAIN/ 访问你的网站了注意使用https 而不是http并且可以看到SSL 的标志了(通常在你浏览器的地址栏中用一把锁来表示)。
**注意:** 现在所有的链接都必须指向https如果网站上的一些内容像图片或者css文件等仍旧指向http链接的话你会在浏览器中得到一个警告要修复这个问题请确保每个链接都指向了https。
### 在你的网站上重定向HTTP请求到HTTPS中 ###
如果你希望将常规的HTTP 请求重定向到HTTPS把下面的配置添加到相应的虚拟主机中或者如果希望对服务器上所有网站生效就加入到apache2.conf 中:
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
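注意上面的规则依赖rewrite 模块。如果它尚未启用,可以用下面的命令启用并重启 apache

    # a2enmod rewrite
    # service apache2 restart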
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/install-ssl-apache-2-4-in-ubuntu/
作者:[Adrian Dinu][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/adriand/
[1]:https://ssl.comodo.com/

View File

@ -0,0 +1,130 @@
如何在Ubuntu 14.04 LTS 上安装网络爬虫工具 Scrapy
================================================================================
Scrapy 是一款提取网站数据的开源工具。Scrapy 框架用Python 开发而成它使抓取工作又快又简单且可扩展。我们已经在virtual box 中创建了一台虚拟机VM并且在上面安装了Ubuntu 14.04 LTS。
### 安装 Scrapy ###
Scrapy依赖于Python、开发库和pip。Python最新的版本已经在Ubuntu上预装了。因此我们在安装Scrapy之前只需安装pip和python开发库就可以了。
pip 是easy_install 的替代品用于安装和管理Python 包。pip 包的安装过程见图1。
sudo apt-get install python-pip
![Fig:1 Pip installation](http://blog.linoxide.com/wp-content/uploads/2014/11/f1.png)
图:1 pip安装
我们必须要用下面的命令安装python开发库。如果包没有安装那么就会在安装scrapy框架的时候报关于python.h头文件的错误。
sudo apt-get install python-dev
![Fig:2 Python Developer Libraries](http://blog.linoxide.com/wp-content/uploads/2014/11/f2.png)
图:2 Python 开发库
scrapy 框架既可以从deb 包安装也可以从源码安装。在图3 中我们是用pipPython 包管理器)来安装的:
sudo pip install scrapy
![Fig:3 Scrapy Installation](http://blog.linoxide.com/wp-content/uploads/2014/11/f3.png)
图:3 Scrapy 安装
如图4 所示scrapy 的安装需要花费一些时间才能完成。
![Fig:4 Successful installation of Scrapy Framework](http://blog.linoxide.com/wp-content/uploads/2014/11/f4.png)
图:4 成功安装Scrapy框架
### 使用scrapy框架提取数据 ###
**(基础教程)**
我们将用scrapy 从fatwallet.com提供返现卡的网站上提取店铺名称。首先我们使用下面的命令新建一个名为“store_name”的scrapy 项目见图5
    $ sudo scrapy startproject store_name
![Fig:5 Creation of new project in Scrapy Framework](http://blog.linoxide.com/wp-content/uploads/2014/11/f5.png)
图:5 Scrapy框架新建项目
上面的命令在当前路径创建了一个“store_name”的目录。项目主目录下包含的文件/文件夹见图6。
    $ sudo ls -lR store_name
![Fig:6 Contents of store_name project.](http://blog.linoxide.com/wp-content/uploads/2014/11/f6.png)
图:6 store_name项目的内容
每个文件/文件夹的概要如下:
- scrapy.cfg 是项目配置文件
- store_name/ 主目录下的另一个文件夹。 这个目录包含了项目的python代码
- store_name/items.py 定义了蜘蛛要爬取的数据项item
- store_name/pipelines.py 是管道文件
- store_name/settings.py 是项目的配置文件
- store_name/spiders/ 包含了用于爬取的蜘蛛
由于我们要从fatwallet.com 上提取店铺名称因此我们如下修改items.py 文件:
    import scrapy

    class StoreNameItem(scrapy.Item):
        name = scrapy.Field()    # 用来保存返现卡店铺的名称
之后我们要在项目的store_name/spiders/文件夹下写一个新的蜘蛛。蜘蛛是一个python类它包含了下面几个必须的属性
1. 蜘蛛名 (name )
2. 爬取起点url (start_urls)
3. 解析方法parse它利用正则表达式等手段从响应中提取所需的内容。解析方法对爬虫而言很重要。
我们在store_name/spiders/ 目录下创建了“store_name.py”爬虫并添加如下代码来从fatwallet.com 上提取店铺名称。爬虫的输出写入文件 **StoreName.txt**见图7。
    from scrapy.selector import Selector
    from scrapy.spider import BaseSpider
    from scrapy.http import Request
    from scrapy.http import FormRequest
    import re

    class StoreNameItem(BaseSpider):
        name = "storename"
        allowed_domains = ["fatwallet.com"]
        start_urls = ["http://fatwallet.com/cash-back-shopping/"]

        def parse(self, response):
            output = open('StoreName.txt', 'w')
            resp = Selector(response)
            # 选出店铺列表中各行的四种 class 变体
            tags = resp.xpath('//tr[@class="storeListRow"]|\
                //tr[@class="storeListRow even"]|\
                //tr[@class="storeListRow even last"]|\
                //tr[@class="storeListRow last"]').extract()
            for i in tags:
                i = i.encode('utf-8', 'ignore').strip()
                store_name = ''
                if re.search(r"class=\"storeListStoreName\">.*?<", i, re.I | re.S):
                    store_name = re.search(r"class=\"storeListStoreName\">.*?<", i, re.I | re.S).group()
                    store_name = re.search(r">.*?<", store_name, re.I | re.S).group()
                    # 去掉残留的尖括号,并把 HTML 转义的 &amp; 还原为 &
                    store_name = re.sub(r'[<>]', "", store_name)
                    store_name = re.sub(r'&amp;', "&", store_name)
                    output.write(store_name + "\n")
![Fig:7 Output of the Spider code .](http://blog.linoxide.com/wp-content/uploads/2014/11/f7.png)
图:7 爬虫的输出
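写好爬虫后,进入项目主目录,用蜘蛛的 name 属性这里是storename即可运行它结果会写入 StoreName.txt

    $ cd store_name
    $ scrapy crawl storename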
*注意: 本教程的目的仅用于理解scrapy框架*
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/scrapy-install-ubuntu/
作者:[nido][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/naveeda/