Merge pull request #14 from LCTT/master

update
This commit is contained in:
DoubleC 2015-01-18 20:44:31 +08:00
commit 58620a3f18
96 changed files with 6270 additions and 3253 deletions


@ -0,0 +1,166 @@
文件轻松比对,伟大而自由的比较软件们
================================================================================
文件比较工具用于比较计算机上文件的内容,找出它们之间的相同与不同之处。比较的结果通常被称为 diff。
diff同时也是一个基于控制台的、能输出两个文件之间不同之处的著名的文件比较程序的名字。diff是于二十世纪70年代早期在Unix操作系统上被开发出来的。diff将会把两个文件之间不同之处的部分进行输出。
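diff 所输出的统一差异格式unified diff也可以用 Python 标准库的 difflib 模块来演示(这只是一个模拟 diff 风格输出的小例子,并非 diff 程序本身的实现,其中的文件名和文本内容均为虚构):

```python
import difflib

# 两段示例文本,模拟两个文件的内容(示例数据)
old = ["hello\n", "world\n", "foo\n"]
new = ["hello\n", "linux\n", "foo\n"]

# unified_diff 生成与 `diff -u` 类似的统一差异格式输出
diff = list(difflib.unified_diff(old, new, fromfile="a.txt", tofile="b.txt"))
print("".join(diff))
```

输出中以 "-" 开头的行表示被删除的内容,以 "+" 开头的行表示新增的内容,这正是各种图形化比较工具在底层呈现的信息。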
Linux拥有很多不错的GUI工具能使你能清楚的看到两个文件或同一文件不同版本之间的不同之处。这次我从自己最喜欢的GUI比较工具中选出了五个推荐给大家。除了其中的一个其他的都是开源的。
这些应用程序可以让你更清楚的看到文件或目录的差别,能合并有差异的文件,可以解决冲突并将其输出成一个新的文件或补丁,其也用于那些预览和备注文件改动的产品上(比如,在源代码合并到源文件树之前,要先接受源代码的改变)。因此它们是非常重要的软件开发工具。它们可以帮助开发人员们对文件进行处理,不停的把文件转来转去。这些比较工具不仅仅能用于显示源代码文件中的不同之处;他们还适用于很多种的文本文件。可视化的特性使文件比较变得容易、简单。
----------
###Meld
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Meld.png)
Meld是一个适用于Gnome桌面的、开源的、图形化的文件差异查看和合并的应用程序。它支持2到3个文件的同时比较、递归式的目录比较、处于版本控制(Bazaar, Codeville, CVS, Darcs, Fossil SCM, Git, Mercurial, Monotone, Subversion)之下的目录比较。还能够手动或自动合并文件差异。
Meld的重点在于帮助开发人员比较和合并多个源文件并在他们最喜欢的版本控制系统下能直观的浏览改动过的地方。
功能包括:
- 原地编辑文件,即时更新
- 进行两到三个文件的比较及合并
- 在显示的差异和冲突之间的导航
- 使用插入、改变和冲突这几种标记可视化展示本地和全局的差异
- 内置正则表达式文本过滤器,可以忽略不重要的差异
- 语法高亮显示使用可选的gtksourceview
- 将两到三个目录中的文件逐个进行比较,显示新建、缺失和替换过的文件
- 对任何有冲突或差异的文件直接打开比较界面
- 可以过滤文件或目录以忽略某些差异
- 被改动区域的自动合并模式使合并更容易
- 也有一个简单的文件管理
- 支持多种版本控制系统包括Git, Mercurial, Bazaar 和 SVN
- 在提交前开启文件比较来检查改动的地方和内容
- 查看文件版本状态
- 还能进行简单的版本控制操作(例如,提交、更新、添加、移动或删除文件)
- 继承自同一文件的两个文件进行自动合并
- 标注并在中间的窗格显示所有有冲突的变更的基础版本
- 显示并合并同一文件的无关的独立修改
- 锁定只读性质的基础文件以避免出错
- 可以整合到已有的命令行界面中包括git mergetool
- 国际化支持
- 可视化使文件比较更简单
- 网址: [meldmerge.org][1]
- 开发人员: Kai Willadsen
- 证书: GNU GPL v2
- 版本号: 1.8.5
----------
###DiffMerge
![](http://www.sourcegear.com/images/screenshots/diffmerge/img_merge_linux.png)
DiffMerge是一个可以在Linux、Windows和OS X上运行的可以可视化文件的比较和合并的应用软件。
功能包括:
- 图形化显示两个文件之间的差别。包括插入行,高亮标注以及对编辑的全面支持
- 图形化显示三个文件之间的差别。(安全的前提下)允许自动合并,并对最终文件可以随意编辑
- 并排显示两个文件夹的比较,显示哪一个文件只存在于其中一个文件夹而不存在于另外的一个文件夹,还能一对一的将完全相同的、等价的或不同的文件配对
- 规则设置和选项让你可以个性化它的外观和行为
- 基于Unicode可以导入多种编码的字符
- 跨平台工具
- 网址: [sourcegear.com/diffmerge][2]
- 开发人员: SourceGear LLC
- 证书: Licensed for use free of charge (not open source)
- 版本号: 4.2
----------
###xxdiff
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-xxdiff.png)
xxdiff是个开源的图形化的可进行文件、目录比较及合并的工具。
xxdiff可以用于显示两到三个文件或两个目录的差别还能产生一个合并后的版本。被比较的两到三个文件会并排显示并将有区别的文字内容用不同颜色高亮显示以便于识别。
这个程序是个非常重要的软件开发工具。他可以图形化的显示两个文件或目录之间的差别,合并有差异的文件,其也用于那些预览和备注文件改动的产品上(比如,在源代码合并到源文件树之前,要先接受源代码的改变)
功能包括:
- 比较两到三个文件,或是两个目录(浅层或递归)
- 横向高亮显示差异
- 交互式的文件合并,可视化的输出和保存
- 可以辅助合并的评论/监管
- 自动合并文件时不合并 CVS 冲突,而是以两个文件显示以便于解决冲突
- 可以用其它的比较程序计算差异适用于GNU diff、SGI diff和ClearCase的cleardiff以及所有与这些程序输出相似的文件比较程序。
- 可以使用资源文件实现完全的个性化设置
- 用起来感觉和Rudy Wortel或SGI的xdiff差不多与桌面系统无关
- 功能和输出可以和脚本轻松集成
- 网址: [furius.ca/xxdiff][3]
- 开发人员: Martin Blais
- 证书: GNU GPL
- 版本号: 4.0
----------
###Diffuse
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Diffuse.png)
Diffuse是个开源的图形化工具可用于合并和比较文本文件。Diffuse能够比较任意数量的文件并排显示并提供手动行匹配调整能直接编辑文件。Diffuse还能从bazaar、CVS、darcs, git, mercurial, monotone, Subversion和GNU RCS 库中获取版本用于比较及合并。
功能包括:
- 比较任意数量的文件,并排显示(多方合并)
- 行匹配可以被用户人工矫正
- 直接编辑文件
- 语法高亮
- 支持Bazaar, CVS, Darcs, Git, Mercurial, Monotone, RCS, Subversion和SVK
- 支持Unicode
- 可无限撤销
- 易用的键盘导航
- 网址: [diffuse.sourceforge.net][4]
- 开发人员: Derrick Moser
- 证书: GNU GPL v2
- 版本号: 0.4.7
----------
###Kompare
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Kompare.png)
Kompare是个开源的GUI前端程序可以对不同源文件之间的差异进行可视化比较和合并。Kompare可以比较文件或文件夹内容的差异。Kompare支持很多种diff格式并提供各种选项来设置显示的信息级别。
不论你是个想比较源代码的开发人员还是只想比较一下研究论文手稿与最终文档的差异Kompare都是个有用的工具。
Kompare是KDE桌面环境的一部分。
功能包括:
- 比较两个文本文件
- 递归式比较目录
- 显示diff产生的补丁
- 将补丁合并到一个已存在的目录
- 可以让你在编译时更轻松
- 网址: [www.caffeinated.me.uk/kompare/][5]
- 开发者: The Kompare Team
- 证书: GNU GPL
- 版本号: Part of KDE
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/2014062814400262/FileComparisons.html
作者Frazer Kline
译者:[H-mudcup](https://github.com/H-mudcup)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://meldmerge.org/
[2]:https://sourcegear.com/diffmerge/
[3]:http://furius.ca/xxdiff/
[4]:http://diffuse.sourceforge.net/
[5]:http://www.caffeinated.me.uk/kompare/


@ -0,0 +1,72 @@
Apple Watch之后下一个智能手表会是Ubuntu吗
===
**苹果借助Apple Watch的发布证实了其进军穿戴式电子设备市场的长期传言**
![Ubuntu Smartwatch good idea?](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/ubuntu-galaxy-gear-smartwatch.png)
Ubuntu智能手表 - 好主意?
拥有一系列稳定功能、硬件解决方案和应用合作伙伴关系的支持,手腕穿戴设备被许多公司预示为“人与技术关系的新篇章”。
它的到来以及用户兴趣的提升有可能意味着Ubuntu需要跟进一个为智能手表定制的Ubuntu版本。
### 大的方面还是成功的 ###
苹果在正确的时间加入了快速发展的智能手表行列。手腕穿戴设备功能的界限并不是一成不变的。失败的设计、简陋的用户界面以及面向主流用户的功能定制不佳这些都表明这类硬件产品仍然很脆弱这一因素使得Cupertino把时间花费在Apple Watch上。
> 分析师说超过2200万的智能手表将在今年销售
去年全球范围内可穿戴设备包括健身追踪器的销量仅有1000万台。今年分析师预计设备销量可以超过2200万台不包括苹果手表因为它直到2015年初才开始零售。
其实我们很容易就可以看出增长的来源。今年九月初柏林举办的IFA 2014展览会展示了一系列来自主要制造商们的可穿戴设备包括索尼和华硕。大多数搭载着Google最新发布的安卓穿戴平台。
更成熟的一个表现是:安卓穿戴设备已经超越了关于外形的新奇争论,进而呈现出一致且令人信服的用户方案。和新的苹果手表一样,它紧密地连接在一个现存的智能手机生态系统上。
但Ubuntu手腕穿戴系统是否能与之匹配成为一个实用案例目前还不清楚。
#### 目前还没有Ubuntu智能手表的计划 ####
Ubuntu操作系统的通用性将多种设备的严格标准与统一的未来目标联合在一起Canonical已经将目标指向了智能电视、平板电脑和智能手机。公司自家的显示服务Mir甚至被用来为各种尺寸的屏幕提供驱动接口虽然还没有覆盖到1.5英寸这么小的屏幕)。
今年年初Canonical社区负责人Jono Bacon被问到是否有制作Ubuntu智能手表的打算。Bacon提供了他对这个问题的看法“为[Ubuntu触摸设备]路线增加额外的设备形态只会减缓现有的进度”。
在Ubuntu手机发布两周年之际我们还是挺赞同他的想法的。
###除了A面还有B面###
但是并不是没有希望的。在[几个月之后的一次电话采访][1]中Ubuntu创始人Mark Shuttleworth提到可穿戴技术和智能电视、平板电脑、智能手机一样都在公司计划当中。
> “Ubuntu因其在电话中的完美设计变得独一无二但同时它的设计也能够满足其他生态系统从穿戴设备到PC机。”
然而这还没得到具体的证实,它更像一个指针,在某个方向给我们提供一个乐观的指引。
#### 不大可能 — 但这就是原因所在 ####
Canonical并不反对利用牢固的专利进军市场。事实上它恰恰是公司DNA基因的一部分 — 犹如服务器端的RHEL,桌面端的Windows,智能手机上的安卓...
设备上的Ubuntu系统被制作成可以在更小的屏幕上扩展和适配运行甚至在小如手表一样的屏幕上。当通用的代码基础已经在手机、平板电脑、桌面和TV上准备就绪如果在同样的方向上看不到社区的努力才是十分令人吃惊的。
但是我之所以不认为它会从Canonical发生至少目前还没有是基于今年早些时候Jono Bacon的个人思想得出的结论时间和努力。
Tim Cook在他的主题演讲中说道“*我们并没有追随iPhone也没有缩水用户界面将其强硬捆绑在你的手腕上。*”这是一个很明显的陈述。为如此小的屏幕设计UI和UX模型、通过交互原则工作、对硬件和输入模式的推崇这些都不是容易的事。
可穿戴技术仍然是一个新兴的市场。在这个阶段Canonical可能会在探寻的过程中浪费一些发展、设计和商业上的机会。如果在一些更为紧迫的领域落后了造成的后果远比眼前利益的损失更严重。
打一场持久战耐心等待看哪些努力成功哪些会失败这是一条更难的路线但是却更适合Ubuntu就如同今天它做的一样。在新产品出现之前Canonical把力量用在现存的产品上是更好的选择这是一些已经来迟的理论
想更进一步了解什么是Ubuntu智能手表点击下面的[视频][2]里面展示了一个交互的Unity主题皮肤Tizen(它已经支持Samsung Galaxy Gear智能手表)。
---
via: http://www.omgubuntu.co.uk/2014/09/ubuntu-smartwatch-apple-iwatch
作者:[Joey-Elijah Sneddon][a]
译者:[su-kaiyao](https://github.com/su-kaiyao)
校对:[Mr小眼儿](https://github.com/tinyeyeser)
本文由[LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2014/03/ubuntu-tablets-coming-year
[2]:https://www.youtube.com/embed/8Zf5dktXzEs?feature=oembed


@ -1,8 +1,7 @@
IPv6IPv4犯的错为什么要我来弥补
================================================================================
LCTT标题党了一把哈哈哈好过瘾求不拍砖
在过去的十年间IPv6 本来应该得到很大的发展,但事实上这种好事并没有降临。由此导致了一个结果,那就是大部分人都不了解 IPv6 的一些知识:它是什么,怎么使用,以及,为什么它会存在?LCTT这是要回答蒙田的“我是谁”哲学思考题吗
在过去的十年间IPv6 本来应该得到很大的发展,但事实上这种好事并没有降临。由此导致了一个结果,那就是大部分人都不了解 IPv6 的一些知识:它是什么,怎么使用,以及,为什么它会存在?
![IPv4 and IPv6 Comparison](http://www.tecmint.com/wp-content/uploads/2014/09/ipv4-ipv6.gif)
@ -12,15 +11,15 @@ IPv4 和 IPv6 的区别
自从1981年发布了 RFC 791 标准以来我们就一直在使用 **IPv4**。在那个时候,电脑又大又贵还不多见,而 IPv4 号称能提供**40亿条 IP 地址**,在当时看来,这个数字好大好大。不幸的是,这么多的 IP 地址并没有被充分利用起来,地址与地址之间存在间隙。举个例子,一家公司可能有**254(2^8-2)**条地址但只使用其中的25条剩下的229条被空占着以备将来之需。于是这些空闲着的地址不能服务于真正需要它们的用户原因就是网络路由规则的限制。最终的结果是在1981年看起来那个好大好大的数字在2014年看起来变得好小好小。
互联网工程任务组(**IETF**在90年代指出了这个问题并提供了两套解决方案无类型域间选路**CIDR**)以及私有地址。在 CIDR 出现之前,你只能选择三种网络地址长度:**24 位** (共可用16,777,214个地址), **20位** (共可用1,048,574个地址)以及**16位** (共可用65,534个地址)。CIDR 出现之后,你可以将一个网络再划分成多个子网。
互联网工程任务组(**IETF**在90年代指出了这个问题,并提供了两套解决方案:无类型域间选路(**CIDR**)以及私有IP地址。在 CIDR 出现之前,你只能选择三种网络地址长度:**24 位** (共16,777,214个可用地址), **20位** (共1,048,574个可用地址)以及**16位** (共65,534个可用地址)。CIDR 出现之后,你可以将一个网络再划分成多个子网。
举个例子,如果你需要**5个 IP 地址**,你的 ISP 会为你提供一个子网里面的主机地址长度为3位也就是说你最多能得到**6个地址**LCTT抛开子网的网络号3位主机地址长度可以表示07共8个地址但第0个和第7个有特殊用途不能被用户使用所以你最多能得到6个地址。这种方法让 ISP 能尽最大效率分配 IP 地址。“私有地址”这套解决方案的效果是你可以自己创建一个网络里面的主机可以访问外网的主机但外网的主机很难访问到你创建的那个网络上的主机因为你的网络是私有的、别人不可见的。你可以创建一个非常大的网络因为你可以使用16,777,214个主机地址并且你可以将这个网络分割成更小的子网方便自己管理。
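上面“3 位主机地址长度、最多得到 6 个可用地址”的算术,可以用 Python 的 ipaddress 模块验证一下(示例中的 192.0.2.0/29 是文档保留网段,仅作演示,并非文中 ISP 的真实分配):

```python
import ipaddress

# 3 位主机地址长度对应 /29 网络,共 2^3 = 8 个地址
net = ipaddress.ip_network("192.0.2.0/29")
print(net.num_addresses)   # 8

# hosts() 去掉网络地址和广播地址,剩下 6 个可用地址
usable = list(net.hosts())
print(len(usable))         # 6
```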
也许你现在正在使用私有地址。看看你自己的 IP 地址,如果这个地址在这些范围内:**10.0.0.0 10.255.255.255**、**172.16.0.0 172.31.255.255**或**192.168.0.0 192.168.255.255**就说明你在使用私有地址。这两套方案有效地将“IP 地址用尽”这个灾难延迟了好长时间,但这毕竟只是权宜之计,现在我们正面临最终的审判。
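判断一个地址是否落在上述私有地址段内,可以用 Python 的 ipaddress 模块(仅作演示,示例地址为随意挑选):

```python
import ipaddress

# RFC 1918 定义的私有地址段10.0.0.0/8、172.16.0.0/12、192.168.0.0/16
samples = ["10.1.2.3", "172.16.0.5", "192.168.1.1", "8.8.8.8"]
results = {a: ipaddress.ip_address(a).is_private for a in samples}
for addr, private in results.items():
    print(addr, "私有" if private else "公网")
```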
**IPv4** 还有另外一个问题,那就是这个协议的消息头长度可变。如果数据通过软件来路由,这个问题还好说。但现在路由器功能都是由硬件提供的,处理变长消息头对硬件来说是一件困难的事情。一个大的路由器需要处理来自世界各地的大量数据包,这个时候路由器的负载是非常大的。所以很明显,我们需要固定消息头的长度。
**IPv4** 还有另外一个问题,那就是这个协议的消息头长度可变。如果数据的路由通过软件来实现,这个问题还好说。但现在路由器功能都是由硬件提供的,处理变长消息头对硬件来说是一件困难的事情。一个大的路由器需要处理来自世界各地的大量数据包,这个时候路由器的负载是非常大的。所以很明显,我们需要固定消息头的长度。
还有一个问题,在分配 IP 地址的时候,美国人发了因特网LCTT这个万恶的资本主义国家占用了大量 IP 地址)。其他国家只得到了 IP 地址的碎片。我们需要重新定制一个架构,让连续的 IP 地址能在地理位置上集中分布这样一来路由表可以做的更小LCTT想想吧网速肯定更快
在分配 IP 地址的同时,还有一个问题,因特网是美国人发明的LCTT这个万恶的资本主义国家占用了大量 IP 地址)。其他国家只得到了 IP 地址的碎片。我们需要重新定制一个架构,让连续的 IP 地址能在地理位置上集中分布这样一来路由表可以做的更小LCTT想想吧网速肯定更快
还有一个问题,这个问题你听起来可能还不大相信,就是 IPv4 配置起来比较困难,而且还不好改变。你可能不会碰到这个问题,因为你的路由器为你做了这些事情,不用你去操心。但是你的 ISP 对此一直是很头疼的。
@ -28,10 +27,10 @@ IPv4 和 IPv6 的区别
### IPv6 和它的优点 ###
**IETF** 在1995年12月公布了下一代 IP 地址标准,名字叫 IPv6为什么不是 IPv5因为某个错误原因“版本5”这个编号被其他项目用去了。IPv6 的优点如下:
**IETF** 在1995年12月公布了下一代 IP 地址标准,名字叫 IPv6为什么不是 IPv5→_→ 因为某个错误原因“版本5”这个编号被其他项目用去了。IPv6 的优点如下:
- 128位地址长度共有3.402823669×10³⁸个地址
- 这个架构下的地址在逻辑上聚合
- 架构下的地址在逻辑上聚合
- 消息头长度固定
- 支持自动配置和修改你的网络。
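上面列表中“3.402823669×10³⁸个地址”这个数字可以自己算一下:

```python
# IPv6 地址为 128 位,总地址数为 2 的 128 次方
total = 2 ** 128
print(total)            # 340282366920938463463374607431768211456
print(f"{total:.9e}")   # 3.402823669e+38
```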
@ -43,7 +42,7 @@ IPv4 和 IPv6 的区别
#### 聚合 ####
有这么多的地址,这地址可以被稀稀拉拉地分配给主机,从而更高效地路由数据包。算一笔帐啊,你的 ISP 拿到一个**80位**地址长度的网络空间其中16位是 ISP 的子网地址剩下64位分给你作为主机地址。这样一来你的 ISP 可以分配65,534个子网。
然而,这些地址分配不是一成不变地,如果 ISP 想拥有更多的小子网,完全可以做到(当然,土豪 ISP 可能会要求再来一个80位网络空间。最高的48位地址是相互独立地也就是说 ISP 与 ISP 之间虽然可能分到相同地80位网络空间但是这两个空间是相互隔离的好处就是一个网络空间里面的地址会聚合在一起。
@ -51,25 +50,25 @@ IPv4 和 IPv6 的区别
**IPv4** 消息头长度可变,但 **IPv6** 消息头长度被固定为40字节。IPv4 会由于额外的参数导致消息头变长IPv6 中,如果有额外参数,这些信息会被放到一个紧挨着消息头的地方,不会被路由器处理,当消息到达目的地时,这些额外参数会被软件提取出来。
IPv6 消息头有一个部分叫“flow”是一个20位伪随机数用于简化路由器对数据包路由过程。如果一个数据包存在“flow”路由器就可以根据这个值作为索引查找路由表不必慢吞吞地遍历整张路由表来查询路由路径。这个优点使 **IPv6** 更容易被路由。
#### 自动配置 ####
**IPv6** 中,当主机开机时,会检查本地网络,看看有没有其他主机使用了自己的 IP 地址。如果地址没有被使用,就接着查询本地的 IPv6 路由器,找到后就向它请求一个 IPv6 地址。然后这台主机就可以连上互联网了 —— 它有自己的 IP 地址,和自己的默认路由器。
如果这台默认路由器宕机,主机就会接着找其他路由器,作为备用路由器。这个功能在 IPv4 协议里实现起来非常困难。同样地,假如路由器想改变自己的地址,自己改掉就好了。主机会自动搜索路由器,并自动更新路由器地址。路由器会同时保存新老地址,直到所有主机都把自己的路由器地址更新成新地址。
IPv6 自动配置还不是一个完整地解决方案。想要有效地使用互联网,一台主机还需要另外的东西:域名服务器、时间同步服务器、或者还需要一台文件服务器。于是 **dhcp6** 出现了,提供与 dhcp 一样的服务,唯一的区别是 dhcp6 的机器可以在可路由的状态下启动,一个 dhcp 进程可以为大量网络提供服务。
#### 唯一的大问题 ####
如果 IPv6 真的比 IPv4 好那么多为什么它还没有被广泛使用起来Google 在**2014年5月份**估计 IPv6 的市场占有率为**4%**)?一个最基本的原因是“先有鸡还是先有蛋”问题,用户需要让自己的服务器能为尽可能多的客户提供服务,这就意味着他们必须部署一个 **IPv4** 地址。
如果 IPv6 真的比 IPv4 好那么多为什么它还没有被广泛使用起来Google 在**2014年5月份**估计 IPv6 的市场占有率为**4%**)?一个最基本的原因是“先有鸡还是先有蛋”。服务商想让自己的服务器为尽可能多的客户提供服务,这就意味着他们必须部署一个 **IPv4** 地址。
当然,他们可以同时使用 IPv4 和 IPv6 两套地址,但很少有客户会用到 IPv6并且你还需要对你的软件做一些小修改来适应 IPv6。另外比较头疼的一点是很多家庭的路由器压根不支持 IPv6。还有就是 ISP 也不愿意支持 IPv6我问过我的 ISP 这个问题,得到的回答是:只有客户明确指出要部署这个时,他们才会用 IPv6。然后我问了现在有多少人有这个需求答案是包括我在内共有1个。
与这种现实状况呈明显对比的是所有主流操作系统Windows、OS X、Linux 都默认支持 IPv6 好多年了。这些操作系统甚至提供软件让 IPv6 的数据包披上 IPv4 的皮来骗过那些会丢弃 IPv6 数据包的主机,从而达到传输数据的目的LCTT这是高科技偷渡
与这种现实状况呈明显对比的是所有主流操作系统Windows、OS X、Linux 都默认支持 IPv6 好多年了。这些操作系统甚至提供软件让 IPv6 的数据包披上 IPv4 的皮来骗过那些会丢弃 IPv6 数据包的主机,从而达到传输数据的目的。
#### 总结 ####
### 总结 ###
IPv4 已经为我们服务了好长时间。但是它的缺陷会在不远的将来遭遇不可克服的困难。IPv6 通过改变地址分配规则、简化数据包路由过程、简化首次加入网络时的配置过程等策略,可以完美解决这个问题。
@ -81,7 +80,7 @@ via: http://www.tecmint.com/ipv4-and-ipv6-comparison/
作者:[Jeff Silverman][a]
译者:[bazz2](https://github.com/bazz2)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Mr小眼儿](https://github.com/tinyeyeser)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -0,0 +1,82 @@
ChromeOS 对战 Linux : 孰优孰劣,仁者见仁,智者见智
================================================================================
> 在 ChromeOS 和 Linux 的斗争过程中,两个桌面环境都有强有弱,这两者到底怎样呢?
只要稍加留意任何人都会相信Google 在桌面领域绝不是“玩玩而已”。在近几年,我们见到的 [ChromeOS][1] 制造的 [Google Chromebook][2] 相当的轰动。和同期人气火爆的 Amazon 一样ChromeOS 似乎势不可挡。
在本文中,我们要了解的是 ChromeOS 的概念市场ChromeOS 怎么影响着Linux 的份额,整个 ChromeOS 对于Linux 社区来说,是好事还是坏事。另外,我将会谈到一些重大问题,以及为什么没人针对这些问题做点什么。
### ChromeOS 并非真正的Linux ###
每当有朋友问我说 ChromeOS 是否是 Linux 的一个发行版时我都会这样回答ChromeOS 之于 Linux 就如同 OS X 之于 BSD。换句话说我认为ChromeOS 是 Linux 的一个派生操作系统,运行于 Linux 内核的引擎之下。而这个操作系统的大部分由 Google 的专利代码及软件组成。
尽管 ChromeOS 是利用了 Linux 的内核引擎,但是和现在流行的 Linux 分支版本相比,它仍然有很大的不同。
其实ChromeOS 的差异化越来越明显的原因,是在于它给终端用户提供的包括 Web 应用在内的 app。因为ChromeOS 的每一个操作都是开始于浏览器窗口,这对于 Linux 用户来说,可能会有很多不一样的感受,但是,对于没有 Linux 经验的用户来说,这与他们使用的旧电脑并没有什么不同。
比方说,每一个以“依赖 Google 产品”为生活方式的人来说,在 ChromeOS 上的感觉将会非常良好,就好像是回家一样,特别是这个人已经接受了 Chrome 浏览器、Google Drive 云存储和Gmail 的话。久而久之他们使用ChromeOS 也就是很自然的事情了,因为他们很容易接受使用早已习惯的 Chrome 浏览器。
然而,对于 Linux 爱好者来说,这样的约束就立即带来了不适应。因为软件的选择是被限制、被禁锢的,再加上要想玩游戏和 VoIP 是完全不可能的。对不起,因为 [GooglePlus Hangouts][3] 是代替不了VoIP 软件的。甚至这种情况将持续很长一段时间。
### ChromeOS 还是 Linux 桌面 ###
有人断言ChromeOS 要是想在桌面系统的浪潮中对 Linux 产生影响,只有在 Linux 停下来浮出水面喘气的时候,或者是满足某个非技术用户的时候。
是的,桌面 Linux 对于大多数休闲型的用户来说绝对是一个好东西。然而,它必须有专人帮助你安装操作系统,并且提供“维修”服务,就如同我们在 Windows 和 OS X 阵营看到的一样。但是,令人失望的是,在美国, Linux 恰恰在这个方面很缺乏。所以我们看到ChromeOS 正慢慢的走入我们的视线。
我发现 Linux 桌面系统最适合那些能够提供在线技术支持的环境中。比如说:可以在家里操作和处理更新的高级用户、政府和学校的 IT 部门等等。这些环境中Linux 桌面系统可以被配置给任何技能水平和背景的人使用。
相比之下ChromeOS 是建立在完全免维护的初衷之下的,因此,不需要第三者的帮忙,你只需要允许更新,然后让他静默完成即可。这在一定程度上可能是由于 ChromeOS 是为某些特定的硬件结构设计的这与苹果开发自己的PC 电脑也有异曲同工之妙。因为 Google 的 ChromeOS 伴随着其硬件一起提供,大部分情况下都无需担心错误的驱动、适配什么的问题。对于某些人来说,这太好了。
然而有些人则认为这是一个很严重的问题,不过滑稽的是,对 ChomeOS 来说,这些人压根就不在它的目标市场里。简言之,这只是一些狂热的 Linux 爱好者在对 ChomeOS 鸡蛋里挑骨头罢了。要我说,还是停止这些没必要的批评吧。
问题的关键在于ChromeOS 的市场份额和 Linux 桌面系统在很长的一段时间内是不同的。这个局面可能会在将来被打破,然而在现在,仍然会是两军对峙的局面。
### ChromeOS 的使用率正在增长 ###
不管你对ChromeOS 有怎么样的看法事实是ChromeOS 的使用率正在增长。专门针对 ChromeOS 的电脑也一直有发布。最近戴尔Dell也发布了一款针对 ChromeOS 的电脑。命名为 [Dell Chromebox][5],这款 ChromeOS 设备将会是对传统设备的又一次冲击。它没有软件光驱没有反病毒软件能够提供无缝的幕后自动更新。对于一般的用户Chromebox 和 Chromebook 正逐渐成为那些工作在 Web 浏览器上的人们的一个可靠选择。
尽管增长速度很快ChromeOS 设备仍然面临着一个很严峻的问题 - 存储。受限于有限的硬盘大小和严重依赖于云存储ChromeOS 对于那些需要使用基本的浏览器功能之外的人们来说还不够用。
### ChromeOS 和 Linux 的异同点 ###
以前,我注意到 ChromeOS 和 Linux 桌面系统分别占有着两个完全不同的市场。出现这样的情况是源于 Linux 社区在线下的桌面支持上一直都有着极其糟糕的表现。
是的偶然的有些人可能会第一时间发现这个“Linux特点”。但是并没有一个人接着跟进这些问题确保得到问题的答案以让他们得到 Linux 方面更多的帮助。
事实上,线下问题的出现可能是这样的:
- 有些用户偶然的在当地的 Linux 活动中发现了 Linux。
- 他们带回了 DVD/USB 设备,并尝试安装这个操作系统。
- 当然,有些人很幸运的成功完成了安装过程,但是,据我所知大多数的人并没有那么幸运。
- 令人失望的是,这些人只能寄希望于在网上论坛里搜索帮助。他们很难通过主流的计算机网络经验或视频教程解决这些问题。
- 于是这些人受够了。后来有很多失望的用户拿着他们的电脑到 Windows 商店来“维修”。除了重装一个 Windows 操作系统他们很多时候都会听到一句话“Linux 并不适合你们”,应该尽量避免。
有些人肯定会说上面的举例肯定夸大其词了。让我来告诉你这是发生在我身边的真事而且是经常发生。醒醒吧Linux 社区的人们,我们的推广模式早已过期无力了。
### 伟大的平台,糟糕的营销和最终结论 ###
如果非要找一个 ChromeOS 和 Linux 桌面系统的共同点,除了它们都使用了 Linux 内核那就是它们都是伟大的产品却拥有极其差劲的市场营销。对此Google 认为自己的优势是,它能投入大量的资金在网上构建大面积存储空间。
Google 相信他们拥有“网上的优势”而线下的问题不是很重要。这真是一个让人难以置信的目光短浅这也成了Google 最严重的失误之一。而当地的 Linux 零售商则坚信,对于不怎么上网的人,自然不必担心他们会受到 Google巨大的在线存储的诱惑。
我的建议是Linux 可以通过线下的努力,提供桌面系统,渗透 ChromeOS 市场。这就意味着 Linux 社区需要在节假日筹集资金来出席博览会、商场展览,并且在社区中进行免费的教学课程。这会立即使 Linux 桌面系统走入人们的视线,否则,最终将会是一个 ChromeOS 设备出现在人们的面前。
如果说本地的线下市场并没有像我说的这样别担心。Linux 桌面系统的市场仍然会像 ChromeOS 一样增长。最坏也能保持现在这种两军对峙的市场局面。
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/chromeos-vs-linux-the-good-the-bad-and-the-ugly-1.html
作者:[Matt Hartley][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[Mr小眼儿](https://github.com/tinyeyeser)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.datamation.com/author/Matt-Hartley-3080.html
[1]:http://en.wikipedia.org/wiki/Chrome_OS
[2]:http://www.google.com/chrome/devices/features/
[3]:https://plus.google.com/hangouts
[4]:http://en.wikipedia.org/wiki/Voice_over_IP
[5]:http://www.pcworld.com/article/2602845/dell-brings-googles-chrome-os-to-desktops.html


@ -0,0 +1,64 @@
Linux上几款好用的字幕编辑器
================================================================================
如果你经常看国外的大片,你应该会喜欢带字幕的版本,而不是有国语配音的版本。我在法国长大,童年的记忆里充满了迪斯尼电影。但是这些电影因为有了法语的配音而听起来很怪。如果现在有机会能看原始的版本,我想对于大多数人来说,字幕还是必须的。我很高兴能为家人制作字幕。让我感到欣慰的是Linux 上并不缺乏花哨的开源字幕编辑器。总之一句话文中Linux上字幕编辑器的列表并不详尽你可以告诉我哪一款是你认为最好的字幕编辑器。
### 1. Gnome Subtitles ###
![](https://farm6.staticflickr.com/5596/15323769611_59bc5fb4b7_z.jpg)
当有现有字幕需要快速编辑时,[Gnome Subtitles][1] 是我的一个选择。你可以载入视频,载入字幕文本,然后就可以即刻开始了。我很欣赏其对于易用性和高级特性之间的平衡。它带有一个同步工具以及一个拼写检查工具。最后但同样重要的的一点,这么好用最主要的是因为它的快捷键:当你编辑很多的台词的时候,你最好把你的手放在键盘上,使用其内置的快捷键来移动。
### 2. Aegisub ###
![](https://farm3.staticflickr.com/2944/15323964121_59e9b26ba5_z.jpg)
[Aegisub][2] 是一款更高级的复杂字幕编辑器。仅仅是界面就反映出了一定的学习曲线。但是除了它吓人的样子以外Aegisub 是一个非常完整的软件提供的工具远远超出你的想象。和Gnome Subtitles 一样Aegisub也采用了所见即所得WYSIWYGwhat you see is what you get的处理方式但是达到了一个全新的高度可以在屏幕上任意拖动字幕也可以在另一边查看音频的频谱并且可以利用快捷键做任何事情。除此以外它还带有一个汉字工具有一个卡拉OK模式并且你可以导入lua 脚本让它自动完成一些任务。我希望你在用之前,先去阅读下它的[指南][3]。
### 3. Gaupol ###
![](https://farm3.staticflickr.com/2942/15326817292_6702cc63fc_z.jpg)
另一个字幕编辑器是[Gaupol][4]不像Aegisub Gaupol 很容易上手而且采用了一个和Gnome Subtitles 很像的界面。但是在这相对简单的背后它拥有很多必要的工具快捷键、第三方扩展、拼写检查甚至是语音识别由[CMU Sphinx][5]提供。这里也提一个缺点我注意到有时候在测试的时候软件会有消极怠工的表现不是很严重但是也足以让我更有理由喜欢Gnome Subtitles了。
### 4. Subtitle Editor ###
![](https://farm4.staticflickr.com/3914/15323911521_8e33126610_z.jpg)
[Subtitle Editor][6]和 Gaupol 很像,但是它的界面有点不太直观,特性也只是稍微高级一点点。我很欣赏的一点是,它可以定义“关键帧”,而且提供所有的同步选项。然而,多一点的图标、少一点的文字会让界面更为友好。作为一个值得称赞的字幕编辑器Subtitle Editor 可以模仿“作家”打字的效果,虽然我不确定它是否特别有用。最后但同样重要的一点,重定义快捷键的功能很实用。
### 5. Jubler ###
![](https://farm4.staticflickr.com/3912/15323769701_3d94ca8884_z.jpg)
[Jubler][7]是一个用Java编写并支持多平台的字幕编辑器。我对它的界面印象特别深刻。在上面我确实看到了Java风格的东西但是它仍然是经过精心构造和构思的。像Aegisub 一样你可以在屏幕上任意拖动字幕让你有愉快的体验而不单单是打字。它也可以为字幕自定义一个风格在另外的一个音轨播放音频翻译字幕或者是做拼写检查。不过要注意的是你需要事先安装好媒体播放器并且正确配置如果你想完整地使用Jubler。我把这些归功于在[官方页面][8]下载了脚本以后其简便的安装方式。
### 6. Subtitle Composer ###
![](https://farm6.staticflickr.com/5578/15323769711_6c6dfbe405_z.jpg)
[Subtitle Composer][9]被视为“KDE里的字幕作曲家”它能够唤起对很多传统功能的回忆。伴随着KDE界面我们充满了期待。我们自然会说到快捷键我特别喜欢这个功能。除此之外Subtitle Composer 与上面提到的编辑器最大的不同地方就在于它可以执行用JavaScriptPython甚至是Ruby写成的脚本。软件带有几个例子肯定能够帮助你很好的学习使用这些特性的语法。
最后不管你是否喜欢都来为你的家庭编辑几个字幕吧重新同步整个轨道或者是一切从头开始那么Linux 有很好的工具给你。对我来说,快捷键和易用性使得各个工具有差异,想要更高级别的使用体验,脚本和语音识别就成了很便利的一个功能。
你会使用哪个字幕编辑器,为什么?你认为还有没有更好用的字幕编辑器这里没有提到的?在评论里告诉我们吧。
--------------------------------------------------------------------------------
via: http://xmodulo.com/good-subtitle-editor-linux.html
作者:[Adrien Brochard][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://gnomesubtitles.org/
[2]:http://www.aegisub.org/
[3]:http://docs.aegisub.org/3.2/Main_Page/
[4]:http://home.gna.org/gaupol/
[5]:http://cmusphinx.sourceforge.net/
[6]:http://home.gna.org/subtitleeditor/
[7]:http://www.jubler.org/
[8]:http://www.jubler.org/download.html
[9]:http://sourceforge.net/projects/subcomposer/


@ -1,22 +1,22 @@
Linux用户应该了解一下开源硬件
Linux用户,你们真的了解开源硬件吗?
================================================================================
> Linux用户不了解一点开源硬件制造相关的事情他们将会很失望
> Linux用户不了解一点开源硬件制造相关的事情他们就会经常陷入失望的情绪中
商业软件和免费软件已经互相纠缠很多年了,但是这俩经常误解对方。这并不奇怪 -- 对一方来说是生意,而另一方只是一种生活方式。但是,这种误解会给人带来痛苦,这也是为什么值得花精力去揭露这里面的内幕。
一个逐渐普遍的现象对开源硬件的不断尝试不管是CanonicalJollaMakePlayLive或者其他几个。不管是评论员或终端用户,一般的免费软件用户会为新的硬件平台发布表现出过分的狂热,然后因为不断延期有所醒悟,最终放弃整个产品。
一个逐渐普遍的现象对开源硬件的不断尝试不管是CanonicalJollaMakePlayLive或者其他公司。无论是评论员或是终端用户,通常免费软件用户都会为新的硬件平台发布表现出过分的狂热,然后因为不断延期有所醒悟,直到最终放弃整个产品。
这是一个没有人获益的怪圈,而且滋生出不信任 - 都是因为一般的Linux用户根本不知道这些新闻背后发生的事情。
这是一个没有人获益的怪圈,而且常常滋生出不信任 - 都是因为一般的Linux用户根本不知道这些新闻背后发生的事情。
我个人对于把产品推向市场的经验很有限。但是,我还不知道谁能有所突破。推出一个开源硬件或其他产品到市场仍然不仅仅是个残酷的生意,而且严重不利于新加入的厂商。
我个人对于把产品推向市场的经验很有限。但是,我还没听说谁能有所突破。推出一个开源硬件或其他产品到市场仍然不仅仅是个残酷的生意,而且严重不利于新厂商。
### 寻找合作伙伴 ###
不管是数码产品的生产还是分销都被相对较少的一些公司控制着有时需要数月的预订。利润率也会很低所以就像那些购买古老情景喜剧的电影工作室一样生产商一般也希望复制当前热销产品的成功。像Aaron Seigo在谈到他花精力开发Vivaldi平板时告诉我的生产商更希望能由其他人去承担开发新产品的风险。
不仅如此,他们更希望和那些有现成销售记录的有可能带来可复制生意的人合作。
不仅如此,他们更希望和那些有现成销售记录的有可能带来长期客户生意的人合作。
而且,一般新加入的厂商所关心的产品只有几千的量。芯片制造商更愿意和苹果或三星合作因为它们的订单很可能是几百K
而且,一般新加入的厂商所关心的产品只有几千的量。芯片制造商更愿意和苹果或三星这样的公司合作,因为它们的订单很可能是几十上百万的量
面对这种情形,开源硬件制造者们可能会发现他们在工厂的列表中被淹没了,除非能找到二线或三线厂愿意尝试一下小批量生产新产品。
@ -28,9 +28,9 @@ Linux用户应该了解一下开源硬件
这样必然会引起潜在用户的批评,但是开源硬件制造者没得选,只能折中他们的愿景。寻找其他生产商也不能解决问题,有一个原因是这样做意味着更多延迟,但是更多的是因为完全免授权费的硬件是不存在的。像三星这样的业内巨头对免费硬件没有任何兴趣,而作为新人,开源硬件制造者也没有影响力去要求什么。
更何况,就算有免费硬件,生产商也不能保证会用在下一批生产中。制造者们会轻易地发现他们每次需要生产的时候都要重打一样的仗。
更何况,就算有免费硬件,生产商也不能保证会用在下一批生产中。制造者们会轻易地发现他们每次需要生产的时候都要重打一次一模一样的仗。
这些都还不够这个时候开源硬件制造者们也许已经花了6-12个月时间来讨价还价。机会来了,产业标准已经变更,他们也许为了升级产品规格又要从头来过。
这些都还不够这个时候开源硬件制造者们也许已经花了6-12个月时间来讨价还价。等机会终于来了,产业标准却已经变更,于是他们可能为了升级产品规格又要从头来过。
### 短暂而且残忍的货架期 ###
@ -42,15 +42,15 @@ Linux用户应该了解一下开源硬件
### 衡量整件怪事 ###
在这里我只是粗略地概括了一下,但是任何涉足过制造的人会认出我形容成标准的东西。而更糟糕的是,开源硬件制造者们通常在这个过程中才会有所觉悟。不可避免,他们也会犯错,从而带来更多的延迟。
在这里我只是粗略地概括了一下,但是任何涉足过制造的人会认同我形容为行业标准的东西。而更糟糕的是,开源硬件制造者们通常只有在亲身经历过后才会有所觉悟。不可避免,他们也会犯错,从而带来更多的延迟。
但重点是,一旦你对整个过程有所了解,你对另一个开源硬件进行尝试的消息的反应就会改变。这个过程意味着除非哪家公司处于严格的保密模式对于产品将于六个月内发布的声明会很快会被证实是过期的推测。很可能是12-18个月而且面对之前提过的那些困难很可能意味着这个产品永远不会真正发布。
但重点是,一旦你对整个过程有所了解,你对另一个开源硬件进行尝试的新闻的反应就会改变。这个过程意味着除非哪家公司处于严格的保密模式对于产品将于六个月内发布的声明会很快会被证实是过期的推测。很可能是12-18个月而且面对之前提过的那些困难很可能意味着这个产品永远不会真正发布。
举个例子就像我写的人们等待第一代Steam Machines面世它是一台基于Linux的游戏主机。他们相信Steam Machines能彻底改变Linux和游戏。
作为一个市场分类Steam Machines也许比其他新产品更有优势因为参与开发的人员至少有开发软件产品的经验。然而整整一年过去了Steam Machines的开发成果都还只有原型机而且直到2015年中都不一定能买到。面对硬件生产的实际情况就算有一半能见到阳光都是很幸运了。而实际上能发布2-4台也许更实际。
我做出这个预测并没有考虑个体努力。但是对硬件生产的理解比起那些Linux和游戏的黄金年代之类的预言我估计这个更靠谱。如果我错了也会很开心但是事实不会改变让人吃惊的不是如此多的Linux相关硬件产品失败了而是那些即使是短暂的成功的产品。
我做出这个预测并没有考虑个体努力。但是对硬件生产的理解比起那些Linux和游戏的黄金年代之类的预言我估计这个更靠谱。如果我错了也会很开心但是事实不会改变让人吃惊的不是如此多的Linux相关硬件产品失败了而是那些虽然短暂但却成功的产品。
--------------------------------------------------------------------------------
@ -58,7 +58,7 @@ via: http://www.datamation.com/open-source/what-linux-users-should-know-about-op
作者:[Bruce Byfield][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Mr小眼儿](https://github.com/tinyeyeser)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -95,7 +95,7 @@ via: http://xmodulo.com/configure-peer-to-peer-vpn-linux.html
作者:[Dan Nanni][a]
译者:[felixonmars](https://github.com/felixonmars)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -0,0 +1,324 @@
使用 Quagga 将你的 CentOS 系统变成一个 BGP 路由器
================================================================================
在[之前的教程中][1]我对如何简单地使用Quagga把CentOS系统变成一个不折不扣的OSPF路由器做了一些介绍。Quagga是一个开源路由软件套件。在这个教程中我将会重点讲讲**如何把一个Linux系统变成一个BGP路由器还是使用Quagga**演示如何与其它BGP路由器建立BGP对等连接。
在我们进入细节之前一些BGP的背景知识还是必要的。边界网关协议即BGP是互联网的域间路由协议的实际标准。在BGP术语中全球互联网是由成千上万相关联的自治系统(AS)组成其中每一个AS代表每一个特定运营商提供的一个网络管理域[据说][2],美国前总统乔治.布什都有自己的 AS 编号)。
为了使其网络在全球范围内路由可达每一个AS需要知道如何在英特网中到达其它的AS。这时候就需要BGP出来扮演这个角色了。BGP是一个AS去与相邻的AS交换路由信息的语言。这些路由信息通常被称为BGP线路或者BGP前缀。包括AS号(ASN全球唯一号码)以及相关的IP地址块。一旦所有的BGP线路被当地的BGP路由表学习和记录每一个AS将会知道如何到达互联网的任何公网IP。
在不同域(AS)之间路由的能力是BGP被称为外部网关协议EGP或者域间路由协议的主要原因。相应地像OSPF、IS-IS、RIP和EIGRP这样的路由协议则是内部网关协议IGP或者域内路由协议用于处理一个域内的路由。
### 测试方案 ###
在这个教程中,让我们来使用以下拓扑。
![](https://farm6.staticflickr.com/5598/15603223841_4c76343313_z.jpg)
我们假设运营商A想要建立一个BGP来与运营商B对等交换路由。它们的AS号和IP地址空间的细节如下所示
- **运营商 A**: ASN (100) IP地址空间 (100.100.0.0/22) 分配给BGP路由器eth1网卡的IP地址(100.100.1.1)
- **运营商 B**: ASN (200) IP地址空间 (200.200.0.0/22) 分配给BGP路由器eth1网卡的IP地址(200.200.1.1)
路由器A和路由器B使用100.100.0.0/30子网来连接到对方。从理论上来说任何子网从运营商那里都是可达的、可互连的。在真实场景中建议使用掩码为30位的公网IP地址空间来实现运营商A和运营商B之间的连通。
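文中两台路由器互联使用的100.100.0.0/30网段恰好只有两个可用主机地址100.100.0.1和100.100.0.2可以用 Python 的 ipaddress 模块验证一下(仅作演示,与路由器配置本身无关):

```python
import ipaddress

# /30 网段共 4 个地址,去掉网络地址和广播地址后剩 2 个可用
net = ipaddress.ip_network("100.100.0.0/30")
hosts = [str(h) for h in net.hosts()]
print(hosts)   # ['100.100.0.1', '100.100.0.2']
```

这也是点对点互联普遍使用 /30(或 /31掩码的原因一个 /30 刚好容纳链路两端的两台设备。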
### 在 CentOS中安装Quagga ###
如果Quagga还没安装好我们可以使用yum来安装Quagga。
# yum install quagga
如果你正在使用的是CentOS7系统你需要应用一下策略来设置SELinux。否则SElinux将会阻止Zebra守护进程写入它的配置目录。如果你正在使用的是CentOS6你可以跳过这一步。
# setsebool -P zebra_write_config 1
Quagga软件套件包含几个守护进程这些进程可以协同工作。关于BGP路由我们将把重点放在建立以下2个守护进程。
- **Zebra**:一个核心守护进程,用于内核接口和静态路由。
- **BGPd**一个BGP守护进程。
### 配置日志记录 ###
在Quagga被安装后下一步就是配置Zebra来管理BGP路由器的网络接口。我们通过创建一个Zebra配置文件和启用日志记录来开始第一步。
# cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
在CentOS6系统中
# service zebra start
# chkconfig zebra on
在CentOS7系统中:
# systemctl start zebra
# systemctl enable zebra
Quagga提供了一个叫做vtysh特有的命令行工具你可以输入与路由器厂商(例如Cisco和Juniper)兼容和支持的命令。我们将使用vtysh shell来配置BGP路由在教程的其余部分。
启动vtysh shell 命令,输入:
# vtysh
提示符将变成主机名这表明你是在vtysh shell中。
Router-A#
现在我们将使用以下命令来为Zebra配置日志文件
Router-A# configure terminal
Router-A(config)# log file /var/log/quagga/quagga.log
Router-A(config)# exit
永久保存Zebra配置
Router-A# write
在路由器B操作同样的步骤。
### 配置对等的IP地址 ###
下一步我们将在可用的接口上配置对等的IP地址。
Router-A# show interface #显示接口信息
----------
Interface eth0 is up, line protocol detection is disabled
. . . . .
Interface eth1 is up, line protocol detection is disabled
. . . . .
配置eth0接口的参数
site-A-RTR# configure terminal
site-A-RTR(config)# interface eth0
site-A-RTR(config-if)# ip address 100.100.0.1/30
site-A-RTR(config-if)# description "to Router-B"
site-A-RTR(config-if)# no shutdown
site-A-RTR(config-if)# exit
继续配置eth1接口的参数
site-A-RTR(config)# interface eth1
site-A-RTR(config-if)# ip address 100.100.1.1/24
site-A-RTR(config-if)# description "test ip from provider A network"
site-A-RTR(config-if)# no shutdown
site-A-RTR(config-if)# exit
现在确认配置:
Router-A# show interface
----------
Interface eth0 is up, line protocol detection is disabled
Description: "to Router-B"
inet 100.100.0.1/30 broadcast 100.100.0.3
Interface eth1 is up, line protocol detection is disabled
Description: "test ip from provider A network"
inet 100.100.1.1/24 broadcast 100.100.1.255
----------
Router-A# show interface description #显示接口描述
----------
Interface Status Protocol Description
eth0 up unknown "to Router-B"
eth1 up unknown "test ip from provider A network"
如果一切看起来正常,别忘记保存配置。
Router-A# write
同样地在路由器B重复一次配置。
在我们继续下一步之前确认下彼此的IP是可以ping通的。
Router-A# ping 100.100.0.2
----------
PING 100.100.0.2 (100.100.0.2) 56(84) bytes of data.
64 bytes from 100.100.0.2: icmp_seq=1 ttl=64 time=0.616 ms
下一步我们将继续配置BGP对等和前缀设置。
### 配置BGP对等 ###
Quagga守护进程负责BGP的服务叫bgpd。首先我们来准备它的配置文件。
# cp /usr/share/doc/quagga-XXXXXXX/bgpd.conf.sample /etc/quagga/bgpd.conf
在CentOS6系统中
# service bgpd start
# chkconfig bgpd on
在CentOS7中
# systemctl start bgpd
# systemctl enable bgpd
现在让我们来进入Quagga 的shell。
# vtysh
第一步我们要确认当前没有已经配置的BGP会话。在一些版本我们可能会发现一个AS号为7675的BGP会话。由于我们不需要这个会话所以把它移除。
Router-A# show running-config
----------
... ... ...
router bgp 7675
bgp router-id 200.200.1.1
... ... ...
我们将移除一些预先配置好的BGP会话并建立我们所需的会话取而代之。
Router-A# configure terminal
Router-A(config)# no router bgp 7675
Router-A(config)# router bgp 100
Router-A(config-router)# no auto-summary
Router-A(config-router)# no synchronization
Router-A(config-router)# neighbor 100.100.0.2 remote-as 200
Router-A(config-router)# neighbor 100.100.0.2 description "provider B"
Router-A(config-router)# exit
Router-A(config)# exit
Router-A# write
路由器B将用同样的方式来进行配置以下配置提供作为参考。
Router-B# configure terminal
Router-B(config)# no router bgp 7675
Router-B(config)# router bgp 200
Router-B(config-router)# no auto-summary
Router-B(config-router)# no synchronization
Router-B(config-router)# neighbor 100.100.0.1 remote-as 100
Router-B(config-router)# neighbor 100.100.0.1 description "provider A"
Router-B(config-router)# exit
Router-B(config)# exit
Router-B# write
当相关的路由器都被配置好,两台路由器之间的对等将被建立。现在让我们通过运行下面的命令来确认:
Router-A# show ip bgp summary
![](https://farm6.staticflickr.com/5614/15420135700_e3568d2e5f_z.jpg)
从输出中我们可以看到“State/PfxRcd”这一列。如果对等关闭这一列将会显示“Idle”或者“Active”。请记住“Active”这个词在路由器中总是不好的意思它意味着路由器正在积极地寻找邻居、前缀或者路由。当对等处于up状态时“State/PfxRcd”列显示的是从相应邻居接收到的前缀数量。
在这个例子的输出中BGP对等只是在AS100和AS200之间呈up状态。由于双方还没有通告任何前缀所以最右边一列的数值是0。
### 配置前缀通告 ###
正如一开始提到AS 100将以100.100.0.0/22作为通告在我们的例子中AS 200将同样以200.200.0.0/22作为通告。这些前缀需要被添加到BGP配置如下。
在路由器-A中
Router-A# configure terminal
Router-A(config)# router bgp 100
Router-A(config)# network 100.100.0.0/22
Router-A(config)# exit
Router-A# write
在路由器-B中
Router-B# configure terminal
Router-B(config)# router bgp 200
Router-B(config)# network 200.200.0.0/22
Router-B(config)# exit
Router-B# write
在这一点上,两个路由器会根据需要开始通告前缀。
### 测试前缀通告 ###
首先,让我们来确认前缀的数量是否被改变了。
Router-A# show ip bgp summary
![](https://farm6.staticflickr.com/5608/15419095659_0ebb384eee_z.jpg)
为了查看前缀通告的更多细节,我们可以使用以下命令,这个命令用于显示向邻居100.100.0.2通告的前缀
Router-A# show ip bgp neighbors 100.100.0.2 advertised-routes
![](https://farm6.staticflickr.com/5597/15419618208_4604e5639a_z.jpg)
查看哪一个前缀是我们从邻居接收到的:
Router-A# show ip bgp neighbors 100.100.0.2 routes
![](https://farm4.staticflickr.com/3935/15606556462_e17eae7f49_z.jpg)
我们也可以查看所有的BGP路由
Router-A# show ip bgp
![](https://farm6.staticflickr.com/5609/15419618228_5c776423a5_z.jpg)
通过BGP学习到的路由是否已经加入路由表可以用以下命令检查
Router-A# show ip route
----------
代码: K - 内核路由, C - 已链接 , S - 静态 , R - 路由信息协议 , O - 开放式最短路径优先协议,
I - 中间系统到中间系统的路由选择协议, B - 边界网关协议, > - 选择路由, * - FIB 路由
C>* 100.100.0.0/30 is directly connected, eth0
C>* 100.100.1.0/24 is directly connected, eth1
B>* 200.200.0.0/22 [20/0] via 100.100.0.2, eth0, 00:06:45
----------
Router-A# show ip route bgp
----------
B>* 200.200.0.0/22 [20/0] via 100.100.0.2, eth0, 00:08:13
BGP学习到的路由也将会在Linux路由表中出现。
[root@Router-A~]# ip route
----------
100.100.0.0/30 dev eth0 proto kernel scope link src 100.100.0.1
100.100.1.0/24 dev eth1 proto kernel scope link src 100.100.1.1
200.200.0.0/22 via 100.100.0.2 dev eth0 proto zebra
最后我们将使用ping命令来测试连通。结果将成功ping通。
[root@Router-A~]# ping 200.200.1.1 -c 2
总而言之本教程将重点放在如何在CentOS系统中运行一个基本的BGP路由器。这个教程让你开始学习BGP的配置一些更高级的设置例如设置过滤器、BGP属性调整、本地优先级和预先路径准备等我将会在后续的教程中覆盖这些主题。
希望这篇教程能给大家一些帮助。
--------------------------------------------------------------------------------
via: http://xmodulo.com/centos-bgp-router-quagga.html
作者:[Sarmed Rahman][a]
译者:[disylee](https://github.com/disylee)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/sarmed
[1]:http://linux.cn/article-4232-1.html
[2]:http://weibo.com/3181671860/BngyXxEUF


@ -1,11 +1,10 @@
“ntpq -p”命令输出详解
网络时间的那些事情及 ntpq 详解
================================================================================
[Gentoo][1](也许其他发行版也是?)中 ["ntp -q" 的 man page][2] 只有简短的描述:“*打印出服务器已知的节点列表和它们的状态概要信息。*”
[Gentoo][1](也许其他发行版也是?)中 ["ntpq -p" 的 man page][2] 只有简短的描述:“*打印出服务器已知的节点列表和它们的状态概要信息。*”
我还没见到关于这个命令的说明文档,因此这里对此作一个总结,可以补充进 "[man ntpq][3]" man page 中。更多的细节见这里 “[ntpq standard NTP query program][4]”(原作者),和 [其他关于 man ntpq 的例子][5].
我还没见到关于这个命令的说明文档,因此这里对此作一个总结,可以补充进 "[man ntpq][3]" man page 中。更多的细节见这里 “[ntpq 标准 NTP 请求程序][4]”(原作者),和 [其他关于 man ntpq 的例子][5].
[NTP][6] 是一个设计用于通过 [udp][9] 网络 ([WAN][7] 或者 [LAN][8]) 来同步计算机时钟的协议。引用 [Wikipedia NTP][10]
[NTP][6] is a protocol designed to synchronize the clocks of computers over a ([WAN][7] or [LAN][8]) [udp][9] network. From [Wikipedia NTP][10]:
> 网络时间协议英语Network Time ProtocolNTP是一种协议和软件实现用于通过有网络延迟的报文交换网络来同步计算机系统间的时钟。最初由美国特拉华大学的 David L. Mills 设计,现在仍然由他和志愿者小组维护,它于 1985 年之前开始使用,是因特网中最老的协议之一。
@ -28,10 +27,10 @@
- **st** 远程节点或服务器的 [Stratum][17]级别NTP 时间同步是分层的)
- **t** 类型 (u: [unicast单播][18] 或 [manycast选播][19] 客户端, b: [broadcast广播][20] 或 [multicast多播][21] 客户端, l: 本地时钟, s: 对称节点(用于备份), A: 选播服务器, B: 广播服务器, M: 多播服务器, 参见“[Automatic Server Discovery][22]“)
- **when** 最后一次同步到现在的时间 (默认单位为秒, “h”表示小时“d”表示天)
- **poll** 同步的频率:[rfc5905][23]建议在 NTPv4 中这个值的范围在 4 (16s) 至 17 (36h) 之间(2的指数次秒然而观察发现这个值的实际大小在一个小的多的范围内 64 (2的6次方)秒 至 1024 (2的10次方)秒
- **poll** 同步的频率:[rfc5905][23]建议在 NTPv4 中这个值的范围在 4 (16秒) 至 17 (36小时) 之间(即2的指数次秒然而观察发现这个值的实际大小在一个小的多的范围内 64 (2^6 )秒 至 1024 (2^10 )秒
- **reach** 一个8位的左移移位寄存器值用来测试能否和服务器连接每成功连接一次它的值就会增加以 [8 进制][24]显示
- **delay** 从本地到远程节点或服务器通信的往返时间(毫秒)
- **offset** 主机与远程节点或服务器时间源的时间偏移量offset 越接近于0主机和 NTP 服务器的时间越接近([方均根][25]表示,单位为毫秒)
- **offset** 主机与远程节点或服务器时间源的时间偏移量offset 越接近于0主机和 NTP 服务器的时间越接近([方均根][25]表示,单位为毫秒)
- **jitter** 与远程节点同步的时间源的平均偏差(多个时间样本中的 offset 的偏差,单位是毫秒),这个数值的绝对值越小,主机的时间就越精确
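上面的 poll 和 reach 两个字段可以用一小段 Python 来演示其含义(这只是帮助理解的示意代码,并非 ntpq 的实现):

```python
# 演示 ntpq -p 输出中 poll 与 reach 字段的含义(示意代码)

def poll_interval(exponent):
    """poll 字段是 2 的指数,实际轮询间隔为 2**exponent 秒"""
    return 2 ** exponent

def update_reach(reach, success):
    """reach 是一个 8 位左移移位寄存器:每次轮询左移一位,
    连接成功则最低位置 1ntpq 以 8 进制显示该值"""
    return ((reach << 1) & 0xFF) | (1 if success else 0)

# 实际观察到的 poll 值多在 64 秒到 1024 秒之间
print(poll_interval(6), poll_interval(10))   # 64 1024

# 连续 8 次轮询都成功后reach 达到 8 进制的 377
reach = 0
for _ in range(8):
    reach = update_reach(reach, True)
print(oct(reach))   # 0o377
```

reach 不到 377 通常意味着最近有轮询失败过。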
#### 字段的统计代码 ####
@ -47,7 +46,7 @@
- “**-**” 已不再使用
- “**#**” 良好的远程节点或服务器但是未被使用 (不在按同步距离排序的前六个节点中,作为备用节点使用)
- “**+**” 良好的且优先使用的远程节点或服务器(包含在组合算法中)
- “***** 当前作为优先主同步对象的远程节点或服务器
- “*” 当前作为优先主同步对象的远程节点或服务器
- “**o**” PPS 节点 (当优先节点是有效时)。实际的系统同步是源于秒脉冲信号pulse-per-secondPPS可能通过PPS 时钟驱动或者通过内核接口。
参考 [Clock Select Algorithm][27].
@ -74,9 +73,9 @@
- **.WWV.** [WWV][46] (HF, Ft. Collins, CO, America) 标准时间无线电接收器
- **.WWVB.** [WWVB][47] (LF, Ft. Collins, CO, America) 标准时间无线电接收器
- **.WWVH.** [WWVH][48] (HF, Kauai, HI, America) 标准时间无线电接收器
- **.GOES.** 美国 [静止环境观测卫星][49];
- **.GOES.** 美国[静止环境观测卫星][49];
- **.GPS.** 美国 [GPS][50];
- **.GAL.** [伽利略定位系统][51] 欧洲 [GNSS][52];
- **.GAL.** [伽利略定位系统][51]欧洲 [GNSS][52];
- **.ACST.** 选播服务器
- **.AUTH.** 认证错误
- **.AUTO.** Autokey NTP 的一种认证机制)顺序错误
@ -105,7 +104,7 @@ NTP 协议是高精度的使用的精度小于纳秒2的 -32 次方)。
#### “ntpq -c rl”输出参数 ####
- **precision** 为四舍五入值,且为 2 的幂数。因此精度为 2*precision* 此幂(秒)
- **precision** 为四舍五入值,且为 2 的幂数。因此精度为 2^precision (秒)
- **rootdelay** 与同步网络中主同步服务器的总往返延时。注意这个值可以是正数或者负数,取决于时钟的精度。
- **rootdisp** 相对于同步网络中主同步服务器的偏差(秒)
- **tc** NTP 算法 [PLL][59] phase locked loop锁相环路 或 [FLL][60] (frequency locked loop锁频回路) 时间常量
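其中 precision 到秒的换算可以这样验证(示意代码precision 的取值以 -20 为例,并非来自上面的输出):

```python
# ntpq -c rl 中的 precision 是 2 的幂指数,换算成秒(示意代码)

def precision_seconds(p):
    return 2.0 ** p

# 例如 precision = -20 对应大约 1 微秒的精度
print(precision_seconds(-20))   # 9.5367431640625e-07
```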
@ -122,20 +121,20 @@ Jitter (也叫 timing jitter) 表示短期变化大于10HZ 的频率, wander
NTP 软件维护一系列连续更新的频率变化的校正值。对于设置正确的稳定系统,在非拥塞的网络中,现代硬件的 NTP 时钟同步通常与 UTC 标准时间相差在毫秒内。(在千兆 LAN 网络中可以达到何种精度?)
对于 UTC 时间,[闰秒][62] 可以每两年插入一次用于同步地球自传的变化。注意本地时间为[夏令时][63]时时间会有一小时的变化。在重同步之前客户端设备会使用独立的 UTC 时间,除非客户端使用了偏移校准。
对于 UTC 时间,[闰秒 leap second ][62] 可以每两年插入一次用于同步地球自转的变化。注意本地时间为[夏令时][63]时时间会有一小时的变化。在重同步之前客户端设备会使用独立的 UTC 时间,除非客户端使用了偏移校准。
#### [闰秒发生时会怎样][64] ####
> 闰秒发生时,会对当天时间增加或减少一秒。闰秒的调整在 UTC 时间当天的最后一秒。如果增加一秒UTC 时间会出现 23:59:60。即 23:59:59 到 0:00:00 之间实际上需要 2 秒钟。如果减少一秒,时间会从 23:59:58 跳至 0:00:00 。另见 [The Kernel Discipline][65].
好了… 间隔阈值step threshold的真实值是多少: 125ms 还是 128ms PLL/FLL tc 的单位是什么 (log2 s? ms?)?在非拥塞的千兆 LAN 中时间节点间的精度能达到多少?
那么… 间隔阈值step threshold的真实值是多少: 125ms 还是 128ms PLL/FLL tc 的单位是什么 (log2 s? ms?)?在非拥塞的千兆 LAN 中时间节点间的精度能达到多少?
感谢 Camilo M 和 Chris B的评论。 欢迎校正错误和更多细节的探讨。
谢谢
Martin
### 外传 ###
### 附录 ###
- [NTP 的纪元][66] 从 1900 年开始,而 UNIX 的从 1970 年开始。
- [时间校正][67] 是逐渐进行的,因此时间的完全同步可能会花上几个小时。
@ -152,7 +151,7 @@ Martin
- [ntpq 标准 NTP 查询程序][77]
- [The Network Time Protocol (NTP) 分布][78]
- NTP 的简明 [历史][79]
- NTP 的简明[历史][79]
- 一个更多细节的简明历史 “Mills, D.L., A brief history of NTP time: confessions of an Internet timekeeper. Submitted for publication; please do not cite or redistribute” ([pdf][80])
- [NTP RFC][81] 标准文档
- Network Time Protocol (Version 3) RFC [txt][82], or [pdf][83]. Appendix E, The NTP Timescale and its Chronometry, p70, 包含了对过去 5000 年我们的计时系统的变化和关系的有趣解释。
@ -165,7 +164,7 @@ Martin
### 其他 ###
SNTP Simple Network Time Protocol, [RFC 4330][91],简单未落协议基本上也是NTP但是缺少一些基于 [RFC 1305][92] 实现的 NTP 的一些不再需要的内部算法。
SNTP Simple Network Time Protocol, [RFC 4330][91],简单网络时间协议基本上也是NTP但是少了一些基于 [RFC 1305][92] 实现的 NTP 中不再需要的内部算法。
Win32 时间 [Windows Time Service][93] 是 SNTP 的非标准实现,不保证精度,其精度假定只有 1-2 秒的范围。(因为没有系统时间变化校正)
@ -184,7 +183,7 @@ via: http://nlug.ml1.co.uk/2012/01/ntpq-p-output/831
作者Martin L
译者:[Liao](https://github.com/liaosishere)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,6 +1,6 @@
Linux 和类Unix 系统上5个极品的开源软件备份工具
Linux 和类 Unix 系统上5个最佳开源备份工具
================================================================================
一个好的备份最基本的就是为了能够从一些错误中恢复
一个好的备份最基本的目的就是为了能够从一些错误中恢复
- 人为的失误
- 磁盘阵列或是硬盘故障
@ -13,7 +13,7 @@ Linux 和类Unix 系统上5个极品的开源软件备份工具
确定你正在部署的软件具有下面的特性
1. **开源软件** - 你务必要选择那些源码可以免费获得,并且可以修改的软件。确信可以恢复你的数据,即使是软件供应商或者/或是项目停止继续维护这个软件或者是拒绝继续为这个软件提供补丁。
1. **开源软件** - 你务必要选择那些源码可以免费获得,并且可以修改的软件。确信可以恢复你的数据,即使是软件供应商/项目停止继续维护这个软件或者是拒绝继续为这个软件提供补丁。
2. **跨平台支持** - 确定备份软件可以很好的运行各种需要部署的桌面操作系统和服务器系统。
@ -21,21 +21,21 @@ Linux 和类Unix 系统上5个极品的开源软件备份工具
4. **自动转换** - 自动转换功能本身并不起眼,但对于各种备份设备(包括磁带库、近线存储和自动加载机)来说,它可以自动完成加载、挂载和标记磁带等备份介质的任务。
5. **备份介质** - 确定你可以备份到磁带硬盘DVD 和云存储像AWS。
5. **备份介质** - 确定你可以备份到磁带硬盘DVD 和像 AWS 这样的云存储
6. **加密数据流** - 确定所有客户端到服务器的传输都被加密保证在LAN/WAN/Internet 中传输的安全性。
6. **加密数据流** - 确定所有客户端到服务器的传输都被加密,保证在 LAN/WAN/Internet 中传输的安全性。
7. **数据库支持** - 确定备份软件可以备份像 MySQL 或 Oracle 这样的数据库。
8. **备份可以跨越多个卷** - 备份软件(转存文件)可以把每个备份文件分成几个部分,允许将每个部分存在于不同的卷。这样可以保证一些数据量很大的备份(像100TB的文件)可以被存储在一些比单个部分大的设备中,比如说像硬盘和磁盘卷。
8. **备份可以跨越多个卷** - 备份软件(转储文件时)可以把每个备份文件分成几个部分,允许将每个部分存在于不同的卷。这样可以保证一些数据量很大的备份(像100TB的文件)可以被存储在一些单个容量较小的设备中,比如说像硬盘和磁盘卷。
9. **VSS (卷影复制)** - 这是[微软的卷影复制服务VSS][1]通过创建数据的快照来备份。确定备份软件支持VSS的MS-Windows 客户端/服务器。
10. **重复数据删除** - 这是一种数据压缩技术,用来消除重复数据的副本(比如,图片)。
11. **许可证和成本** - 确定你[理解和使用的开源许可证][3]下的软件源码你可以得到
11. **许可证和成本** - 确定你对备份软件所用的[许可证了解和明白其使用方式][3]
12. **商业支持** - 开源软件可以提供社区支持(像邮件列表和论坛)和专业的支持(像发行版提供额外的付费支持)。你可以使用付费的专业支持以培训和咨询为目的
12. **商业支持** - 开源软件可以提供社区支持(像邮件列表和论坛)和专业的支持(如发行版提供额外的付费支持)。你可以使用付费的专业支持为你提供培训和咨询
13. **报告和警告** - 最后,你必须能够看到备份的报告,当前的工作状态,也能够在备份出错的时候提供警告。
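上面第 8 条说的“备份跨越多个卷”,其基本思路就是把一个大的转储文件按固定大小切分、恢复时再按顺序拼回去,可以用下面的 Python 代码来理解(只是示意,实际备份软件的分卷实现要复杂得多):

```python
# 把一个大的备份文件按固定大小切成多个分卷(示意代码)

def split_file(path, chunk_size):
    """按 chunk_size 字节切分文件,返回生成的分卷文件名列表"""
    parts = []
    with open(path, "rb") as src:
        index = 0
        while True:
            data = src.read(chunk_size)
            if not data:
                break
            part_name = "%s.part%03d" % (path, index)
            with open(part_name, "wb") as part:
                part.write(data)
            parts.append(part_name)
            index += 1
    return parts

def join_parts(parts, out_path):
    """恢复时按顺序把分卷拼回原文件"""
    with open(out_path, "wb") as out:
        for name in parts:
            with open(name, "rb") as part:
                out.write(part.read())
```

例如把一个巨大的转储文件按 4GB 切分后,就可以分散存放到多个容量较小的卷上。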
@ -59,7 +59,7 @@ Linux 和类Unix 系统上5个极品的开源软件备份工具
### Amanda - 又一个客户端服务器备份工具 ###
AMANDA 是 Advanced Maryland Automatic Network Disk Archiver 的缩写。它允许系统管理员创建一个单独的服务器来备份网络上的其他主机到磁带驱动器或硬盘或者是自动转换器。
AMANDA 是 Advanced Maryland Automatic Network Disk Archiver 的缩写。它允许系统管理员创建一个单独的备份服务器来将网络上的其他主机的数据备份到磁带驱动器、硬盘或者是自动换盘器。
- 操作系统:支持跨平台运行。
- 备份级别:完全,差异,增量,合并。
@ -75,7 +75,7 @@ AMANDA 是 Advanced Maryland Automatic Network Disk Archiver 的缩写。它允
### Backupninja - 轻量级备份系统 ###
Backupninja 是一个简单易用的备份系统。你可以简单的拖放配置文件到 /etc/backup.d/ 目录来备份多个主机。
Backupninja 是一个简单易用的备份系统。你可以简单的拖放一个配置文件到 /etc/backup.d/ 目录来备份多个主机。
![](http://s0.cyberciti.org/uploads/cms/2014/11/ninjabackup-helper-script.jpg)
@ -93,7 +93,7 @@ Backupninja 是一个简单易用的备份系统。你可以简单的拖放配
### Backuppc - 高效的客户端服务器备份工具###
Backuppc 可以用来备份基于LInux 和Windows 系统的主服务器硬盘。它配备了一个巧妙的池计划来最大限度的减少磁盘储存,磁盘I/O 和网络I/O。
Backuppc 可以用来备份基于Linux 和Windows 系统的主服务器硬盘。它配备了一个巧妙的池计划来最大限度的减少磁盘储存、磁盘 I/O 和网络I/O。
![](http://s0.cyberciti.org/uploads/cms/2014/11/BackupPCServerStatus.jpg)
@ -111,7 +111,7 @@ Backuppc 可以用来备份基于LInux 和Windows 系统的主服务器硬盘。
### UrBackup - 最容易配置的客户端服务器系统 ###
UrBackup 是一个非常容易配置的开源客户端服务器备份系统,通过图像和文件备份的组合完成了数据安全性和快速的恢复。你的文件可以通过Web界面或者是在Windows资源管理器中恢复而驱动卷的备份用引导CD或者是USB 棒来恢复(逻辑恢复)。一个Web 界面使得配置你自己的备份服务变得非常简单。
UrBackup 是一个非常容易配置的开源客户端服务器备份系统通过镜像和文件备份的组合兼顾了数据安全性和快速的恢复。你的文件可以通过Web界面或者Windows资源管理器来恢复而磁盘卷备份则可以使用可引导CD或U盘来恢复裸机恢复。一个 Web 界面使得配置你自己的备份服务变得非常简单。
![](http://s0.cyberciti.org/uploads/cms/2014/11/urbackup.jpg)
@ -129,19 +129,19 @@ UrBackup 是一个非常容易配置的开源客户端服务器备份系统,
### 其他供你考虑的一些极好用的开源备份软件 ###
AmandaBacula 和上面所提到的软件都是功能丰富,但是配置比较复杂对于一些小的网络或者是单独的服务器。我建议你学习和使用一下的备份软件:
AmandaBacula 和上面所提到的这些软件功能都很丰富,但是对于一些小的网络或者是单独的服务器来说配置比较复杂。我建议你学习和使用一下下面这些备份软件:
1. [Rsnapshot][10] - 我建议用这个作为对本地和远程的文件系统快照工具。查看[怎么设置和使用这个工具在Debian 和Ubuntu linux][11]和[基于CentOSRHEL 的操作系统][12]。
1. [Rsnapshot][10] - 我建议用这个作为对本地和远程的文件系统快照工具。看看[在Debian 和Ubuntu linux][11]和[基于CentOSRHEL 的操作系统][12]怎么设置和使用这个工具
2. [rdiff-backup][13] - 另一个好用的类Unix 远程增量备份工具。
3. [Burp][14] - Burp 是一个网络备份和恢复程序。它使用了librsync来节省网络流量和节省每个备份占用的空间。它也使用了VSS卷影复制服务在备份Windows计算机时进行快照。
4. [Duplicity][15] - 适用于类Unix操作系统的出色的加密、高效的备份工具。查看如何[安装Duplicity来加密云备份][16]来获取更多的信息。
5. [SafeKeep][17] - SafeKeep是一个集中和易于使用的备份应用程序,结合了镜像和增量备份最佳功能的备份应用程序。
5. [SafeKeep][17] - SafeKeep是一个中心化的、易于使用的备份应用程序,结合了镜像和增量备份最佳功能的备份应用程序。
6. [DREBS][18] - DREBS 是EBS定期快照的工具。它被设计成在EBS快照所连接的EC2主机上运行。
7. 古老的 Unix 程序,像 rsync、tar、cpio、mt 和 dump。
###结论###
我希望你会发现这篇有用的文章来备份你的数据。不要忘了验证你的备份和创建多个数据备份。然而,对于磁盘阵列并不是一个备份解决方案。使用任何一个上面提到的程序来备份你的服务器,桌面和笔记本电脑和私人的移动设备。如果你知道其他任何开源的备份软件我没有提到的,请分享在评论里。
我希望你会发现这篇有用的文章来备份你的数据。不要忘了验证你的备份和创建多个数据备份。注意,磁盘阵列并不是一个备份解决方案!使用任何一个上面提到的程序来备份你的服务器、桌面和笔记本电脑和私人的移动设备。如果你知道其他任何开源的备份软件我没有提到的,请分享在评论里。
--------------------------------------------------------------------------------
@ -149,7 +149,7 @@ via: http://www.cyberciti.biz/open-source/awesome-backup-software-for-linux-unix
作者:[nixCraft][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,66 @@
ESR黑客年暮
================================================================================
近来我一直在与某资深开源开发团队中的多个成员缠斗,尽管密切关注我的人们会在读完本文后猜到是哪个组织,但我不会在这里说出这个组织的名字。
怎么让某些人进入 21 世纪就这么难呢?真是的...
我快 56 岁了,也就是大部分年轻人会以为的我将时不时朝他们发出诸如“滚出我的草坪”之类歇斯底里咆哮的年龄。但事实并非如此 —— 我发现,尤其是在技术背景之下,我变得与我的年龄非常不相称。
在我这个年龄的大部分人确实变成了爱发牢骚、墨守成规的老顽固。并且,尴尬的是,偶尔我会成为那个打断谈话的人,我会指出他们某个在 1995 年或者在某些特殊情况下1985 年)时很适合的方法... 几十年后的今天就不再是好方法了。
为什么是我?因为年轻人在我的同龄人中很难有什么说服力。如果有人想让那帮老头改变主意,首先他得是自己同龄人中具有较高思想觉悟的佼佼者。即便如此,在与习惯做斗争的过程中,我也比看起来花费了更多的时间。
年轻人犯下无知的错误是可以被原谅的。他们还年轻。年轻意味着缺乏经验,缺乏经验通常会导致片面的判断。我很难原谅那些经历了足够多本该有经验的人,却被*长期的固化思维*蒙蔽,无法发觉近在咫尺的东西。
(补充一下:我真的不是保守党拥护者。那些和我争论政治的,无论保守党还是非保守党都没有注意到这点,我觉得这颇有点嘲讽的意味。)
那么,现在我们来讨论下 GNU 更新日志文件ChangeLog这件事。在 1985 年的时候,这是一个不错的主意,甚至可以说是必须的。当时的想法是用单独的更新日志条目来记录多个相关文件的变更情况。用这种方式来对那些存在版本缺失或者非常原始的版本进行版本控制确实不错。当时我也*在场*,所以我知道这些。
不过即使到了 1995 年,甚至 21 世纪早期许多版本控制系统仍然没有太大改进。也就是说这些版本控制系统并非对批量文件的变化进行分组再保存到一条记录上而是对每个变化的文件分别进行记录并保存到不同的地方。CVS当时被广泛使用的版本控制系统仅仅是模拟日志变更 —— 并且在这方面表现得很糟糕,导致大多数人不再依赖这个功能。即便如此,更新日志文件的出现依然是必要的。
但随后,版本控制系统 Subversion 于 2003 年发布 beta 版,并于 2004 年发布 1.0 正式版Subversion 真正实现了更新日志记录功能得到了人们的广泛认可。它与一年后兴起的分布式版本控制系统Distributed Version Control SystemDVCS共同引发了主流世界的激烈争论。因为如果你在项目上同时使用了分布式版本控制与更新日志文件记录的功能它们将会因为争夺相同元数据的控制权而产生不可预料的冲突。
有几种不同的方法可以折衷解决这个问题。一种是继续将更新日志作为代码变更的授权记录。这样一来,你基本上只能得到简陋的、形式上的提交评论数据。
另一种方法是对提交的评论日志进行授权。如果你这样做了,不久后你就会开始思忖为什么自己仍然对所有的日志更新条目进行记录。提交元数据与变化的代码具有更好的相容性,毕竟这才是当初设计它的目的。
(现在,试想有这样一个项目,同样本着把项目做得最好的想法,但两拨人却做出了完全不同的选择。因此你必须同时阅读更新日志和评论日志以了解到底发生了什么。最好在矛盾激化前把问题解决....
第三种办法是尝试同时使用以上两种方法 —— 在更新日志条目中以稍微变化后的的格式复制一份评论数据将其作为评论提交的一部分。这会导致各种你意想不到的问题最具代表性的就是它不符合“真理的单点性single point of truth”原理只要其中有拷贝文件损坏或者日志文件条目被修改这就不再是同步时数据匹配的问题它将导致在其后参与进来的人试图搞清人们是怎么想的时候变得非常困惑。LCTT 译注:《[程序员修炼之道][1]》The Pragmatic Programmer任何一个知识点在系统内都应当有一个唯一、明确、权威的表述。根据Brian Kernighan的建议把这个原则称为“真理的单点性Single Point of Truth”或者SPOT原则。
或者,正如这个*我就不说出具体名字的特定项目*所做的,它的高层开发人员在电子邮件中最近声明说,提交可以包含多个更新日志条目,并且提交的元数据与更新日志是无关的。这导致我们直到现在还得不断进行记录。
当时我读到邮件的时候都要吐了。什么样的傻瓜才会意识不到这是自找麻烦 —— 事实上,在 DVCS 中针对可靠的提交日志有很好的浏览工具,围绕更新日志文件的整个定制措施只会成为负担和拖累。
唉,这是比较特殊的笨蛋:变老的并且思维僵化了的黑客。所有的合理化改革他都会极力反对。他所遵循的行事方法在几十年前是有效的,但现在只能适得其反。如果你试图向他解释这些不仅仅和 git 的摘要信息有关,同时还为了正确适应当前的工具集,以便实现更新日志的去条目化... 呵呵,那你就准备好迎接无法忍受、无法想象的疯狂对话吧。
的确,它成功激怒了我。这样那样的胡言乱语使这个项目变成了很难完成的工作。而且,同样的糟糕还体现在他们吸引年轻开发者的过程中,我认为这是真正的问题。相关 Google+ 社区的人员数量已经达到了 4 位数,他们大部分都是孩子,还没有成长起来。显然外界已经接受了这样的信息:这个项目的开发者都是部落中地位根深蒂固的崇高首领,最好的崇拜方式就是远远的景仰着他们。
这件事给我的最大触动就是每当我要和这些部落首领较量时,我都会想:有一天我也会这样吗?或者更糟的是,我看到的只是如同镜子一般对我自己的真实写照,而我自己却浑然不觉?我的意思是,我所得到的印象来自于他的网站,这个特殊的笨蛋要比我年轻。年轻至少 15 岁呢。
我总是认为自己的思路很清晰。当我和那些比我聪明的人打交道时我不会受挫我只会因为那些思路跟不上我、看不清事实的人而沮丧。但这种自信也许只是邓宁·克鲁格效应Dunning-Krueger effect在我身上的消极影响我并不确定这意味着什么。很少有什么事情会让我感到害怕而这件事在让我害怕的事情名单上是名列前茅的。
另一件让人不安的事是当我逐渐变老的时候,这样的矛盾发生得越来越频繁。不知怎的,我希望我的黑客同行们能以更加优雅的姿态老去,即使身体老去也应该保持一颗年轻的心灵。有些人确实是这样;但可惜绝大多数人都不是。真令人悲哀。
我不确定我的职业生涯会不会完美收场。假如我最后成功避免了思维僵化(注意我说的是假如),我想我一定知道其中的部分原因,但我不确定这种模式是否可以被复制 —— 为了达成目的也许得在你的头脑中发生一些复杂的化学反应。尽管如此,无论对错,请听听我给年轻黑客以及其他有志青年的建议。
你们——对的,也包括你——一定无法在你中年老年的时候保持不错的心灵,除非你能很好的控制这点。你必须不断地去磨练你的内心、在你还年轻的时候完成自己的种种心愿,你必须把这些行为养成一种习惯直到你老去。
有种说法是中年人锻炼身体的最佳时机是 30 岁以前。我以为同样的方法,坚持我以上所说的习惯能让你在 56 岁,甚至 65 岁的时候仍然保持灵活的头脑。挑战你的极限,使不断地挑战自己成为一种习惯。立刻离开安乐窝,由此当你以后真正需要它的时候你可以建立起自己的安乐窝。
你必须要清楚的了解这点;还有一个可选择的挑战是你选择一个可以实现的目标并且为了这个目标不断努力。这个月我要学习 Go 语言。不是指游戏,我早就玩儿过了(虽然玩儿的不是太好)。并不是因为工作需要,而是因为我觉得是时候来扩展下我自己了。
保持这个习惯。永远不要放弃。
--------------------------------------------------------------------------------
via: http://esr.ibiblio.org/?p=6485
作者:[Eric Raymond][a]
译者:[Stevearzh](https://github.com/Stevearzh)
校对:[Mr小眼儿](https://github.com/tinyeyeser)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://esr.ibiblio.org/?author=2
[1]:http://book.51cto.com/art/200809/88490.htm

View File

@ -2,7 +2,7 @@
================================================================================
你能快速定位CPU性能回退的问题么 如果你的工作环境非常复杂且变化快速,那么使用现有的工具来定位这类问题是很具有挑战性的。当你花掉数周时间把根因找到时,代码已经又变更了好几轮,新的性能问题又冒了出来。
辛亏有了[CPU火焰图][1]flame graphs,CPU使用率的问题一般都比较好定位。但要处理性能回退问题就要在修改前后的火焰图间不断切换对比来找出问题所在这感觉就是像在太阳系中搜寻冥王星。虽然这种方法可以解决问题但我觉得应该会有更好的办法。
幸亏有了[CPU火焰图][1]flame graphsCPU使用率的问题一般都比较好定位。但要处理性能回退问题就要在修改前后的火焰图间,不断切换对比,来找出问题所在,这感觉就是像在太阳系中搜寻冥王星。虽然,这种方法可以解决问题,但我觉得应该会有更好的办法。
所以,下面就隆重介绍**红/蓝差分火焰图red/blue differential flame graphs**
@ -14,7 +14,7 @@
这张火焰图中各火焰的形状和大小都是和第二次抓取的profile文件对应的CPU火焰图是相同的。其中y轴表示栈的深度x轴表示样本的总数栈帧的宽度表示了profile文件中该函数出现的比例最顶层表示正在运行的函数再往下就是调用它的栈
在下面这个案例展示了,在系统升级后,一个工作载的CPU使用率上升了。 下面是对应的CPU火焰图[SVG格式][4]
下面这个案例展示了,在系统升级后,一个工作负载的CPU使用率上升了。下面是对应的CPU火焰图[SVG格式][4]
<p><object data="http://www.brendangregg.com/blog/images/2014/zfs-flamegraph-after.svg" type="image/svg+xml" width=720 height=296>
<img src="http://www.brendangregg.com/blog/images/2014/zfs-flamegraph-after.svg" width=720 />
@ -22,7 +22,7 @@
通常,在标准的火焰图中栈帧和栈塔的颜色是随机选择的。 而在红/蓝差分火焰图中使用不同的颜色来表示两个profile文件中的差异部分。
在第二个profile中deflate_slow()函数以及它后续调用的函数运行的次数要比前一次更多所以在上图中这个栈帧被标为了红色。可以看出问题的原因是ZFS的压缩功能被使能了,而在系统升级前这项功能是关闭的。
在第二个profile中deflate_slow()函数以及它后续调用的函数运行的次数要比前一次更多所以在上图中这个栈帧被标为了红色。可以看出问题的原因是ZFS的压缩功能被启用了,而在系统升级前这项功能是关闭的。
这个例子过于简单我甚至可以不用差分火焰图也能分析出来。但想象一下如果是在分析一个微小的性能下降比如说小于5%,而且代码也更加复杂的时候,问题就没那么好处理了。
@ -69,7 +69,9 @@ difffolded.p只能对“折叠”过的堆栈profile文件进行操作折叠
在上面的例子中"func_a()->func_b()->func_c()" 代表调用栈这个调用栈在profile1文件中共出现了31次在profile2文件中共出现了33次。然后使用flamegraph.pl脚本处理这3列数据会自动生成一张红/蓝差分火焰图。
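difffolded.pl 所做的合并工作大致可以用下面的 Python 代码来理解(只是帮助理解的示意,并非原脚本的移植):

```python
# 把两份“折叠”过的栈 profile 合并成 3 列数据(示意代码)

def parse_folded(lines):
    """每行形如 "func_a;func_b;func_c 31",返回 {折叠栈: 次数}"""
    stacks = {}
    for line in lines:
        stack, count = line.rsplit(" ", 1)
        stacks[stack] = stacks.get(stack, 0) + int(count)
    return stacks

def diff_folded(profile1, profile2):
    """输出 "栈 次数1 次数2",在某一侧缺失的栈记为 0"""
    s1, s2 = parse_folded(profile1), parse_folded(profile2)
    out = []
    for stack in sorted(set(s1) | set(s2)):
        out.append("%s %d %d" % (stack, s1.get(stack, 0), s2.get(stack, 0)))
    return out

before = ["func_a;func_b;func_c 31"]
after  = ["func_a;func_b;func_c 33", "func_a;func_d 5"]
for line in diff_folded(before, after):
    print(line)
# func_a;func_b;func_c 31 33
# func_a;func_d 0 5
```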
### 其他选项 ###
再介绍一些有用的选项:
**difffolded.pl -n**这个选项会把两个profile文件中的数据规范化使其能相互匹配上。如果你不这样做抓取到所有栈的统计值肯定会不相同因为抓取的时间和CPU负载都不同。这样的话看上去要么就是一片红负载增加要么就是一片蓝负载下降。-n选项对第一个profile文件进行了平衡这样你就可以得到完整红/蓝图谱。
**difffolded.pl -x**: 这个选项会把16进制的地址删掉。 profiler时常会无法将地址转换为符号这样的话栈里就会有16进制地址。如果这个地址在两个profile文件中不同这两个栈就会认为是不同的栈而实际上它们是相同的。遇到这样的问题就用-x选项搞定。
@ -77,6 +79,7 @@ difffolded.p只能对“折叠”过的堆栈profile文件进行操作折叠
**flamegraph.pl --negate**: 用于颠倒红/蓝配色。 在下面的章节中,会用到这个功能。
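其中 -n 的归一化,大致相当于按两份 profile 的总样本数做缩放(示意代码,并非 difffolded.pl 的实现):

```python
# 按总样本数把第一份 profile 归一化到第二份的规模(示意代码)

def normalize(s1, s2):
    """s1、s2 为 {折叠栈: 样本数} 字典,返回缩放后的 s1"""
    factor = sum(s2.values()) / float(sum(s1.values()))
    return {stack: int(round(count * factor)) for stack, count in s1.items()}

before = {"a;b": 50, "a;c": 50}      # 共 100 个样本
after  = {"a;b": 120, "a;c": 80}     # 共 200 个样本
print(normalize(before, after))      # {'a;b': 100, 'a;c': 100}
```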
### 不足之处 ###
虽然我的红/蓝差分火焰图很有用但实际上还是有一个问题如果一个代码执行路径完全消失了那么在火焰图中就找不到地方来标注蓝色。你只能看到当前的CPU使用情况而不知道为什么会变成这样。
一个办法是,将对比顺序颠倒,画一个相反的差分火焰图。例如:
@ -95,12 +98,13 @@ difffolded.p只能对“折叠”过的堆栈profile文件进行操作折叠
这样把前面生成diff2.svg一并使用我们就能得到
- **diff1.svg**: 宽度是以修改前profile文件为基准, 颜色表明将要发生的情况
- **diff2.svg**: 宽度以修改后profile文件为基准颜色表明已经发生的情况
- **diff1.svg**: 宽度是以修改前profile文件为基准颜色表明将要发生的情况
- **diff2.svg**: 宽度以修改后profile文件为基准颜色表明已经发生的情况
如果是在做功能验证测试,我会同时生成这两张图。
### CPI 火焰图 ###
这些脚本开始是被使用在[CPI火焰图][8]的分析上。与比较修改前后的profile文件不同在分析CPI火焰图时可以分析CPU工作周期与停顿周期的差异变化这样可以凸显出CPU的工作状态来。
### 其他的差分火焰图 ###
@ -110,6 +114,7 @@ difffolded.p只能对“折叠”过的堆栈profile文件进行操作折叠
也有其他人做过类似的工作。[Robert Mustacchi][10]在不久前也做了一些尝试他使用的方法类似于代码检视时的标色风格只显示了差异的部分红色表示新增上升的代码路径蓝色表示删除下降的代码路径。一个关键的差别是栈帧的宽度只体现了差异的样本数。右边是一个例子。这个是个很好的主意但在实际使用中会感觉有点奇怪因为缺失了完整profile文件的上下文作为背景这张图显得有些难以理解。
[![](http://www.brendangregg.com/blog/images/2014/corpaul-flamegraph-diff.png)][12]
Cor-Paul Bezemer也制作了一种差分显示方法[flamegraphdiff][13]他同时将3张火焰图放在同一张图中修改前后的标准火焰图各一张下面再补充了一张差分火焰图但栈帧宽度也是差异的样本数。 上图是一个[例子][14]。在差分图中将鼠标移到栈帧上3张图中同一栈帧都会被高亮显示。这种方法中补充了两张标准的火焰图因此解决了上下文的问题。
我们3人的差分火焰图都各有所长。三者可以结合起来使用Cor-Paul方法中上方的两张图可以用我的diff1.svg 和 diff2.svg。下方的火焰图可以用Robert的方式。为保持一致性下方的火焰图可以用我的着色方式蓝->白->红。
@ -128,7 +133,7 @@ via: http://www.brendangregg.com/blog/2014-11-09/differential-flame-graphs.html
作者:[Brendan Gregg][a]
译者:[coloka](https://github.com/coloka)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -24,11 +24,11 @@
![](https://farm8.staticflickr.com/7486/15543918097_fbcf33ee6b.jpg)
然后 4 会被插入到文件中。
然后计算结果“4”会被插入到文件中。
### 查找重复的连续的单词 ###
当你很快地打字时,很有可能会连续输入同一个单词两次,就像 this this。这种错误可能骗过任何一个人即使是你自己重新阅读一也不可避免。幸运的是,有一个简单的正则表达式可以用来预防这个错误。使用搜索命令(默认时 `/`)然后输入:
当你很快地打字时,很有可能会连续输入同一个单词两次,就像 this this。这种错误可能骗过任何一个人即使是你自己重新阅读一遍也不可避免。幸运的是,有一个简单的正则表达式可以用来预防这个错误。使用搜索命令(默认是 `/`)然后输入:
\(\<\w\+\>\)\_s*\1
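这个 Vim 正则的思路是捕获一个单词,再用反向引用 \1 匹配紧随其后的同一个单词。用 Python 的 re 模块可以演示同样的思路(仅为示意,语法与 Vim 的正则并不相同):

```python
# 检测连续重复的单词(与上面 Vim 正则思路相同的 Python 示意)
import re

pattern = re.compile(r"\b(\w+)\s+\1\b")

print(bool(pattern.search("I typed this this sentence")))  # True
print(bool(pattern.search("I typed this sentence")))       # False
```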
@ -72,7 +72,7 @@
`gg` 把光标移动到 Vim 缓冲区的第一行,`V` 进入可视模式,`G` 把光标移动到缓冲区的最后一行。因此,`ggVG` 使可视模式覆盖这个当前缓冲区。最后 `g?` 使用 ROT13 对整个区域进行编码。
注意它应该被映射到一个最长使用的键。它对字母符号也可以很好地工作。要对它进行撤销,最好的方法就是使用撤销命令:`u`。
注意它可以被映射到一个最常使用的键。它对字母符号也可以很好地工作。要对它进行撤销,最好的方法就是使用撤销命令:`u`。
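`g?` 所做的 ROT13 变换把字母表旋转 13 位,编码两次即可还原,可以用 Python 来验证这一点(示意):

```python
# ROT13对字母表旋转 13 位,编码两次即还原(示意代码)
import codecs

encoded = codecs.encode("Hello Vim", "rot_13")
print(encoded)                             # Uryyb Ivz
print(codecs.encode(encoded, "rot_13"))    # Hello Vim
```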
###自动补全 ###
@ -110,7 +110,7 @@
### 按时间回退文件 ###
Vim 会记录文件的更改,你很容易可以回退到之前某个时间。该命令相当直观的。比如:
Vim 会记录文件的更改,你很容易可以回退到之前某个时间。该命令相当直观的。比如:
:earlier 1m
@ -122,7 +122,7 @@ Vim 会记录文件的更改,你很容易可以回退到之前某个时间。
### 删除标记内部的文字 ###
当我开始使用 Vim 时一件我总是想很方便做的事情是如何轻松的删除方括号或圆括号里的内容。转到开始的标记,然后使用下面的语法:
当我开始使用 Vim 时一件我总是想很方便做的事情是如何轻松的删除方括号或圆括号里的内容。转到开始的标记,然后使用下面的语法:
di[标记]
@ -164,11 +164,11 @@ Vim 会记录文件的更改,你很容易可以回退到之前某个时间。
### 把光标下的文字置于屏幕中央 ###
所有要做的事情都包含在标题中。如果你想强制滚动屏幕来把光标下的文字置于屏幕的中央,在可视模式中使用命令(译者注:在普通模式中也可以):
我们所要做的事情如标题所示。如果你想强制滚动屏幕来把光标下的文字置于屏幕的中央,在可视模式中使用命令(译者注:在普通模式中也可以):
zz
### 跳到上一个/下一个 位置 ###
### 跳到上一个/下一个位置 ###
当你编辑一个很大的文件时,经常要做的事是在某处进行修改,然后跳到另外一处。如果你想跳回之前修改的地方,使用命令:
@ -196,7 +196,7 @@ Vim 会记录文件的更改,你很容易可以回退到之前某个时间。
总的来说,这一系列命令是在我读了许多论坛主题和 [Vim Tips wiki][3](如果你想学习更多关于编辑器的知识,我非常推荐这篇文章) 之后收集起来的。
如果你还知道哪些非常有用但你认为大多数人并不知道的命令,可以随意在评论中分享出来。就像引言中所说的,一个“鲜为人知但很有用的”命令是很主观的,但分享出来总是好的。
如果你还知道哪些非常有用但你认为大多数人并不知道的命令,可以随意在评论中分享出来。就像引言中所说的,一个“鲜为人知但很有用的”命令也许只是你自己的看法,但分享出来总是好的。
--------------------------------------------------------------------------------
@ -204,7 +204,7 @@ via: http://xmodulo.com/useful-vim-commands.html
作者:[Adrien Brochard][a]
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,7 +1,6 @@
用Grub启动ISO镜像
================================================================================
如果你需要使用多个Linux发行版你没有那么多的选项。你可以安装到你的物理机或虚拟机中也可以以live模式从ISO文件启动。第二个选择如果对硬盘空间需求更少就有点麻烦因为你需要将ISO文件写入到USB棒或CD来启动。但是这里有另外一个可选的折中方案把ISO镜像放在硬盘中然后以live模式来启动。该方案比完全安装更省空间但是功能完备这对于缓慢的虚拟机而言是个不错的替代方案。下面我将介绍怎样使用流行的Grub启动加载器来实现该方案。
如果你想要使用多个Linux发行版你没有那么多的选择。你要么安装到你的物理机或虚拟机中要么以live模式从ISO文件启动。第二个选择对硬盘空间需求较小只是有点麻烦因为你需要将ISO文件写入到U盘或CD/DVD中来启动。不过这里还有另外一个可选的折中方案把ISO镜像放在硬盘中然后以live模式来启动。该方案比完全安装更省空间而且功能也完备这对于缓慢的虚拟机而言是个不错的替代方案。下面我将介绍怎样使用流行的Grub启动加载器来实现该方案。
很明显你将需要使用到Grub这是几乎所有现代Linux发行版都使用的。你也需要你所想用的Linux版本的ISO文件将它下载到本地磁盘。最后你需要知道启动分区在哪里并怎样在Grub中描述。对于此请使用以下命令
@ -31,7 +30,7 @@
[some specific] arguments
}
例如如果你想要从ISO文件启动Ubuntu那么你就是想要添加行到40_custom文件
例如如果你想要从ISO文件启动Ubuntu那么你就是想要添加如下行到40_custom文件
menuentry "Ubuntu 14.04 (LTS) Live Desktop amd64" {
set isofile="/boot/ubuntu-14.04-desktop-amd64.iso"
@ -62,7 +61,7 @@
initrd (loop)/isolinux/initrd0.img
}
注意,参数可根据发行版进行修改。有幸的是,有许多地方你可以查阅。我喜欢这一个,但是还有很多其它的。同时,请考虑你放置ISO文件的地方。如果你的家目录被加密或者无法被访问到你可能更喜欢将这些文件放到像例子中的启动分区。但是请首先确保有足够的空间。
注意参数可根据发行版进行修改。幸运的是有许多地方可以查阅到这些参数我喜欢其中一个但还有很多其它的资料。同时请注意你放置ISO文件的地方。如果你的家目录被加密或者无法被访问到你可能更喜欢将这些文件放到像例子中的启动分区。但是请首先确保启动分区有足够的空间。
最后不要忘了保存40_custom文件并使用以下命令来更新grub
@ -92,7 +91,7 @@
可以显示DBAN选项让你选择清除驱动器。**当心,因为它仍然十分危险**。
小结一下对于ISO文件和Grub有很多事情可做从快速live会话到用你的指尖来破坏一切,都可以满足你。下一步是启动一些关注隐私的发行版如[Tails][2]。
小结一下对于ISO文件和Grub有很多事情可做从快速live会话到一键毁灭,都可以满足你。之后,你也可以试试启动一些针对隐私方面的发行版,如[Tails][2]。
你认为从Grub启动一个ISO这个主意怎样这是不是你想要做的呢为什么呢请在下面留言。
@ -102,7 +101,7 @@ via: http://xmodulo.com/boot-iso-image-from-grub.html
作者:[Adrien Brochard][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,30 +1,34 @@
硬盘监控和分析神器——Smartctl
硬盘监控和分析工具——Smartctl
================================================================================
**Smartctl**自监控分析和报告技术是类Unix系统下实施SMART任务命令行套件或工具它用于打印SMART**自检**和**错误日志**启用并禁用SMRAT**自动检测**,以及初始化设备自检。
**Smartctl**S.M.A.R.T 自监控分析和报告技术是类Unix系统下执行SMART任务的命令行套件或工具它用于打印SMART**自检**和**错误日志**启用并禁用SMART**自动检测**,以及初始化设备自检。
Smartctl对于Linux物理服务器十分有用在这些服务器上可以对智能磁盘进行错误检查并将与**硬件RAID**相关的磁盘信息摘录下来。
Smartctl对于Linux物理服务器十分有用在这些服务器上可以对智能磁盘进行错误检查并将与**硬件RAID**相关的磁盘信息摘录下来。
在本帖中我们将讨论smartctl命令的一些实用样例。如果你的Linux上还没有安装smartctl请按以下步骤来安装。
### Ubuntu中smartctl的安装 ###
### 安装 Smartctl ###
**对于 Ubuntu**
$ sudo apt-get install smartmontools
### Redhat / CentOS中smartctl的安装 ###
**对于 CentOS & RHEL**
# yum install smartmontools
**启动Smartctl服务**
###启动Smartctl服务###
**对于Ubuntu**
**对于 Ubuntu**
$ sudo /etc/init.d/smartmontools start
**对于CentOS & RHEL**
**对于 CentOS & RHEL**
# service smartd start ; chkconfig smartd on
**样例1 检查针对磁盘的Smart负载量**
### 样例 ###
#### 样例1 检查磁盘的 Smart 功能是否启用
root@linuxtechi:~# smartctl -i /dev/sdb
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-32-generic] (local build)
@ -46,9 +50,9 @@ Smartctl对于Linux物理服务器十分有用在这些服务器上可以
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
这里‘/dev/sdb是你的硬盘。上面输出中的最后两行显示了SMART负载量已启用。
这里‘/dev/sdb是你的硬盘。上面输出中的最后两行显示了SMART功能已启用。
**样例2 为磁盘启用Smart负载量**
#### 样例2 启用磁盘的 Smart 功能
root@linuxtechi:~# smartctl -s on /dev/sdb
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-32-generic] (local build)
@ -57,7 +61,7 @@ Smartctl对于Linux物理服务器十分有用在这些服务器上可以
=== START OF ENABLE/DISABLE COMMANDS SECTION ===
SMART Enabled.
**样例3 为磁盘禁用Smart负载量**
#### 样例3 禁用磁盘的 Smart 功能
root@linuxtechi:~# smartctl -s off /dev/sdb
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-32-generic] (local build)
@ -66,12 +70,12 @@ Smartctl对于Linux物理服务器十分有用在这些服务器上可以
=== START OF ENABLE/DISABLE COMMANDS SECTION ===
SMART Disabled. Use option -s with argument 'on' to enable it.
**样例4 为磁盘显示详细Smart信息**
#### 样例4 显示磁盘的详细 Smart 信息
root@linuxtechi:~# smartctl -a /dev/sdb // For IDE drive
root@linuxtechi:~# smartctl -a -d ata /dev/sdb // For SATA drive
**样例5 显示磁盘总体健康状况**
#### 样例5 显示磁盘总体健康状况
root@linuxtechi:~# smartctl -H /dev/sdb
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-32-generic] (local build)
@ -84,7 +88,7 @@ Smartctl对于Linux物理服务器十分有用在这些服务器上可以
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
190 Airflow_Temperature_Cel 0x0022 067 045 045 Old_age Always In_the_past 33 (Min/Max 25/33)
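在需要监控多块磁盘时,可以在脚本里解析 smartctl -H 的输出来获取健康状态(示意代码,假设输出中包含标准的 overall-health 一行):

```python
# 解析 smartctl -H 输出中的总体健康状态(示意代码)

def parse_health(output):
    """在输出中查找 overall-health 一行,返回 PASSED/FAILED 等结果"""
    for line in output.splitlines():
        if "overall-health self-assessment test result" in line:
            return line.split(":", 1)[1].strip()
    return None

sample = """=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
"""
print(parse_health(sample))  # PASSED
```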
**样例6 使用long和short选项测试硬盘**
#### 样例6 使用long和short选项测试硬盘
**Long测试**
@ -126,7 +130,7 @@ Smartctl对于Linux物理服务器十分有用在这些服务器上可以
**注意**short测试将花费最多2分钟而在long测试中没有时间限制因为它会读取并验证磁盘的每个段。
**样例7 查看驱动器的自检结果**
#### 样例7 查看驱动器的自检结果
root@linuxtechi:~# smartctl -l selftest /dev/sdb
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-32-generic] (local build)
@ -138,7 +142,7 @@ Smartctl对于Linux物理服务器十分有用在这些服务器上可以
# 1 Short offline Completed: read failure 90% 492 210841222
# 2 Extended offline Completed: read failure 90% 492 210841222
**样例8 计算测试时间估值**
#### 样例8 计算测试时间估值
root@linuxtechi:~# smartctl -c /dev/sdb
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-32-generic] (local build)
@ -178,7 +182,7 @@ Smartctl对于Linux物理服务器十分有用在这些服务器上可以
SCT Feature Control supported.
SCT Data Table supported.
**样例9 显示磁盘错误日志**
#### 样例9 显示磁盘错误日志
root@linuxtechi:~# smartctl -l error /dev/sdb
@ -219,7 +223,7 @@ via: http://www.linuxtechi.com/smartctl-monitoring-analysis-tool-hard-drive/
作者:[Pradeep Kumar][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,37 +1,40 @@
四招搞定Linux内核热补丁
不重启不当机Linux内核热补丁的四种技术
================================================================================
![Credit: Shutterstock](http://images.techhive.com/images/article/2014/10/patch_f-100526950-primary.idge.jpeg)
Credit: Shutterstock
多种技术在竞争成为实现inux内核热补丁的最优方案。
供图: Shutterstock
有多种技术在竞争成为实现Linux内核热补丁的最优方案。
没人喜欢重启机器,尤其是涉及到一个内核问题的最新补丁程序。
为达到不重启的目的目前有3个项目在朝这方面努力将为大家提供对内核进行运行时打热补丁的机制这样就可以做到完全不重启机器。
为达到不重启的目的目前有3个项目在朝这方面努力将为大家提供内核升级时打热补丁的机制这样就可以做到完全不重启机器。
### Ksplice项目 ###
首先要介绍的项目是Ksplice它是热补丁技术的创始者并于2008年建立了与项目同名的公司。Ksplice在替换新内核时不需要预先修改只需要一个diff文件将内核的修改点列全即可。Ksplice公司免费提供软件但技术支持是需要收费的目前能够支持大部分常用的Linux发行版本。
首先要介绍的项目是Ksplice它是热补丁技术的创始者并于2008年建立了与项目同名的公司。Ksplice在替换新内核时不需要预先修改只需要一个diff文件列出内核即将接受的修改即可。Ksplice公司免费提供软件但技术支持是需要收费的目前能够支持大部分常用的Linux发行版本。
但在2011年[Oracle收购了这家公司][1]后,情况发生了变化。 这项功能被合入到Oracle的Linux发行版本中且只对Oralcle的版本提供技术更新。 这就导致其他内核hacker们开始寻找替代Ksplice的方法以避免缴纳Oracle税。
但在2011年[Oracle收购了这家公司][1]后,情况发生了变化。 这项功能被合入到Oracle自己的Linux发行版本中只对Oralcle自己提供技术更新。 这就导致其他内核hacker们开始寻找替代Ksplice的方法以避免缴纳Oracle税。
### Kgraft项目 ###
2014年2月Suse提供了一个很好的解决方案[Kgraft][2]该技术以GPLv2/GPLv3混合许可证发布且Suse不会将其作为一个专有的实现。Kgraft被[提交][3]到Linux内核主线很有可能被内核主线采用。目前Suse已经把此技术集成到[Suse Linux Enterprise Server 12][4]。
2014年2月Suse提供了一个很好的解决方案[Kgraft][2],该内核更新技术以GPLv2/GPLv3混合许可证发布且Suse不会将其作为一个专有发明封闭起来。Kgraft被[提交][3]到Linux内核主线很有可能被内核主线采用。目前Suse已经把此技术集成到[Suse Linux Enterprise Server 12][4]。
Kgraft和Ksplice在工作原理上很相似都是使用一组diff文件来计算内核中需要修改的部分。但与Ksplice不同的是Kgraft在做替换时不需要完全停止内核。 在打补丁时,正在运行的函数可以先使用老版本中对应的部分,当补丁打完后就可以切换新的版本。
Kgraft和Ksplice在工作原理上很相似都是使用一组diff文件来计算内核中需要修改的部分。但与Ksplice不同的是Kgraft在做替换时不需要完全停止内核。 在打补丁时,正在运行的函数可以先使用老版本或新内核中对应的部分,当补丁打完后就可以完全切换新的版本。
### Kpatch项目 ###
Red Hat也提出了他们的内核热补丁技术。同样是在今年年初 -- 与Suse在这方面的工作差不多 -- [Kpatch][5]的工作原理也和Kgraft相似。
Red Hat也提出了他们的内核热补丁技术。同样是在2014年初 -- 与Suse在这方面的工作差不多 -- [Kpatch][5]的工作原理也和Kgraft相似。
主要的区别点在于正如Red Hat的Josh Poimboeuf[总结][6]的那样Kpatch不将内核调用重定向到老版本。相反它会等待所有函数调用都停止时再切换到新内核。Red Hat的工程师认为这种方法更为安全且更容易维护缺点就是在打补丁的过程中会带来更大的延迟。
主要的区别点在于正如Red Hat的Josh Poimboeuf[总结][6]的那样Kpatch不将内核调用重定向到老版本。相反它会等待所有函数调用都停止时再切换到新内核。Red Hat的工程师认为这种方法更为安全且更容易维护缺点就是在打补丁的过程中会带来更大的延迟。
和Kgraft一样Kpatch不仅仅能在Red Hat的发行版本上可以使用,同时也被提交到了内核主线,作为一个可能的候选。 坏消息是Red Hat还未将此技术集成到产品中。 它只是被合入到了Red Hat Enterprise Linux 7的技术预览版中。
和Kgraft一样Kpatch不仅仅可以在Red Hat的发行版本上使用,同时也被提交到了内核主线,作为一个可能的候选。 坏消息是Red Hat还未将此技术集成到产品中。 它只是被合入到了Red Hat Enterprise Linux 7的技术预览版中。
### ...也许 Kgraft + Kpatch更合适? ###
Red Hat的工程师Seth Jennings在2014年11月初提出了[第四种解决方案][7]。将Kgraft和Kpatch结合起来, 补丁包用这两种方式都可以。在新的方法中Jennings提出“热补丁核心为其他内核模块提供了热补丁的注册机制”, 通过这种方法,打补丁的过程 -- 更准确的说,如何处理运行时内核调用 --可以被更加有序的进行
Red Hat的工程师Seth Jennings在2014年11月初提出了[第四种解决方案][7]。将Kgraft和Kpatch结合起来, 补丁包用这两种方式都可以。在新的方法中Jennings提出“热补丁核心为其他内核模块提供了一个热补丁的注册接口”, 通过这种方法,打补丁的过程 -- 更准确的说,如何处理运行时内核调用 --可以被更加有序的组织起来
这项新建议也意味着两个方案都还需要更长的时间才能被linux内核正式采纳。尽管Suse步子迈得更快并把Kgraft应用到了最新的enterprise版本中。让我们也关注一下Red Hat和Linux官方近期的动态
这项新建议也意味着两个方案都还需要更长的时间才能被linux内核正式采纳。尽管Suse步子迈得更快并把Kgraft应用到了最新的enterprise版本中。让我们也关注一下Red Hat和Canonical近期是否会跟进
--------------------------------------------------------------------------------
@ -40,7 +43,7 @@ via: http://www.infoworld.com/article/2851028/linux/four-ways-linux-is-headed-fo
作者:[Serdar Yegulalp][a]
译者:[coloka](https://github.com/coloka)
校对:[校对者ID](https://github.com/校对者ID)
校对:[tinyeyeser](https://github.com/tinyeyeser)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
@ -51,4 +54,4 @@ via: http://www.infoworld.com/article/2851028/linux/four-ways-linux-is-headed-fo
[4]:http://www.infoworld.com/article/2838421/linux/suse-linux-enterprise-12-goes-light-on-docker-heavy-on-reliability.html
[5]:https://github.com/dynup/kpatch
[6]:https://lwn.net/Articles/597123/
[7]:http://lkml.iu.edu/hypermail/linux/kernel/1411.0/04020.html
[7]:http://lkml.iu.edu/hypermail/linux/kernel/1411.0/04020.html

View File

@ -1,38 +1,39 @@
How to install Cacti (Monitoring tool) on ubuntu 14.10 server
怎样在 Ubuntu 14.10 Server 上安装 Cacti监控工具
怎样在 Ubuntu 14.10 Server 上安装 Cacti
================================================================================
Cacti 是一个网络绘图解决方案,它被设计用来管理 RRDTool (一个 Linux 数据存储和绘图工具的数据存储和绘图的强大功能。Cacti 提供一个快速的轮询器,高级的绘图模版,多种数据获取方法和用户管理功能,并且可以开箱即用。所有的这些都被打包进一个直观,易用的界面,可用于监控简单的 LAN 网络,乃至包含成百上千设备的复杂网络。
Cacti 是一个完善的网络监控的图形化解决方案,它被设计用来发挥 RRDTool (一个 Linux 数据存储和绘图工具的数据存储和绘图的强大功能。Cacti 提供一个快速的轮询器,高级的绘图模版,多种数据获取方法和用户管理功能,并且可以开箱即用。所有的这些都被打包进一个直观,易用的界面,可用于监控简单的 LAN 网络,乃至包含成百上千设备的复杂网络。
### 功能 ###
#### 绘图 ####
无上限的监控图条目graph item每个图形可以视情况使用 Cacti 中的 CDEFs Calculation Define可以对图形输出结果进行计算或者数据源。
没有数量限制的监控图条目graph item每个图形可以视情况使用 Cacti 中的 CDEFs Calculation Define可以对图形输出结果进行计算或者数据源。
自动将 GPRINT 条目分组至 AREASTACK 和 LINE[1-3] 中,可以对图形进行快速重排序。
自动将 GPRINT 条目分组至 AREASTACK 和 LINE[1-3] 中,来对监控图条目进行快速重排序。
自动填充功能使得图形的说明整齐排列
自动填充功能支持整齐排列图形内的说明项
可以使用 RRDTool 中内置的 CDEF 数学函数对图形数据进行处理。这些 CDEF 函数可以定义在 Cacti 中,并且每一个图形都可以使用它们。
支持所有的 RRDTool 图形类型包括 AREASTACKLINE[1-3]GPRINTCOMMENTVRULE 和 HRULE。
支持所有的 RRDTool 图形类型包括 AREASTACKLINE[1-3]GPRINTCOMMENTVRULE 和 HRULE。
#### 数据源 ####
数据源可以使用 RRDTool 的 "create" 和 "update" 功能创建。每一个数据源可以用来收集本地或者远程的数据,并将数据输出图形。
数据源可以使用 RRDTool 的 "create" 和 "update" 功能创建。每一个数据源可以用来收集本地或者远程的数据,并将数据输出图形。
支持包含多个数据源的 RRD 文件,并可以使用存储在本地文件系统中任何位置的 RRD 文件。
可以自定义轮询归档RRA设置用户可以在存储数据时使用非标准的时间间隔标准时间间隔是5分钟30分钟2小时 和 1天
可以自定义轮询归档RRA设置用户可以在存储数据时使用非标准的时间间隔标准时间间隔是5分钟30分钟2小时和 1天
#### 数据收集 ####
Cacti 包含一个 "data input" 机制,可以让用户定义自定义的脚本用来收集数据。每个脚本可以包含调用参数,每次创建调用此脚本的数据源时输入相应的调用参数(如 IP 地址)。
Cacti 包含一个 "data input" 机制,可以让用户定义自定义的脚本用来收集数据。每个脚本可以包含调用参数,每次创建调用此脚本的数据源时必须输入相应的调用参数(如 IP 地址)。
支持 SNMP 功能,可以使用 php-snmpucd-snmp 或者 net-snmp。
可以基于索引来使用 SNMP 或者脚本收集数据。例如,可以列出一个服务器上所有网卡接口或者已挂载分区的索引列表。集成的绘图模版可以用来一键为主机创建图形。
可以基于索引来使用 SNMP 或者脚本收集数据。例如,可以列出一个服务器上所有网卡接口或者已挂载分区的索引列表。集成的绘图模版可以用来为主机一键创建图形。
提供一个基于 PHP 的轮询器用于执行脚本,收集 SNMP数据并更新数据至 RRD 文件中。
提供一个基于 PHP 的轮询器来执行脚本,可以收集 SNMP 数据,并更新数据至 RRD 文件中。
#### 模版 ####
@ -44,7 +45,7 @@ Cacti 包含一个 "data input" 机制,可以让用户定义自定义的脚本
#### 图形展示 ####
图形树允许用户创建「图形层次结构」并将图形放至树中。这种方法可以方便的管理大量图形。
图形树模式允许用户创建「图形层次结构」并将图形放至树中。这种方法可以方便的管理大量图形。
列表模式将所有图形的链接在一个大列表中展示出来,链接指向用户创建的图形。
@ -55,7 +56,10 @@ Cacti 包含一个 "data input" 机制,可以让用户定义自定义的脚本
用户管理功能允许管理员创建用户并分配给用户访问 Cacti 接口的不同级别的权限。
权限可以为每个用户指定其对每个图形的权限,这适用于主机租用的场景。
每个用户可以保存他自己的图形显示模式。
### 安装 ###
#### 系统准备 ####
@ -123,7 +127,7 @@ via: http://www.ubuntugeek.com/how-to-install-cacti-monitoring-tool-on-ubuntu-14
作者:[ruchi][a]
译者:[Liao](https://github.com/liaoishere)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,20 +1,19 @@
Linux FAQs with Answers--How to disable Apport internal error reporting on Ubuntu
有问必答--如何禁止Ubuntu的Apport内部错误报告程序
Linux有问必答如何禁止Ubuntu的Apport内部错误报告程序
================================================================================
> **问题**在桌面版Ubuntu中我经常遇到一些弹窗窗口警告我Ubuntu发生了内部错误问我要不要发送错误报告。每次软件崩溃都要烦扰我我如何才能关掉这个错误报告功能呢
Ubuntu桌面版预装了Apport它是一个错误收集系统会收集软件崩溃、未处理异常和其他包括程序bug并为调试目的生成崩溃报告。当一个应用程序崩溃或者出现Bug时候Apport就会通过弹窗警告用户并且询问用户是否提交崩溃报告。你也许也看到过下面的消息。
Ubuntu桌面版预装了Apport它是一个错误收集系统会收集软件崩溃、未处理异常和其他程序问题包括程序bug并为调试目的生成崩溃报告。当一个应用程序崩溃或者出现Bug的时候Apport就会通过弹窗警告用户并且询问用户是否提交崩溃报告。你也许也看到过下面的消息。
- "Sorry, the application XXXX has closed unexpectedly."
- "对不起应用程序XXXX意外关闭了"
- "对不起应用程序XXXX意外关闭了"
- "Sorry, Ubuntu XX.XX has experienced an internal error."
- "对不起Ubuntu XX.XX 经历了一个内部错误"
- "对不起Ubuntu XX.XX 发生了一个内部错误。"
- "System program problem detected."
- "系统程序问题发现"
- "检测到系统程序问题。"
![](https://farm9.staticflickr.com/8635/15688551119_708b23b12a_z.jpg)
也许因为应用一直崩溃频繁的错误报告会使人心烦。也许你担心Apport会收集和上传你的Ubuntu系统的敏感信息。无论什么原因需要关掉Apport的错误报告功能。
也许因为应用一直崩溃频繁的错误报告会使人心烦。也许你担心Apport会收集和上传你的Ubuntu系统的敏感信息。无论出于什么原因下面来看看如何关掉Apport的错误报告功能。
### 临时关闭Apport错误报告 ###
@ -41,6 +40,6 @@ Ubuntu桌面版预安装了Apport它是一个错误收集系统会收集
via: http://ask.xmodulo.com/disable-apport-internal-error-reporting-ubuntu.html
译者:[VicYu/Vic020](http://www.vicyu.net/)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,6 +1,4 @@
Traslated by H-mudcup
美国海军陆战队想把雷达操作系统从Windows XP换成Linux
美国海军陆战队要把雷达操作系统从Windows XP换成Linux
================================================================================
**一个新的雷达系统已经被送回去升级了**
@ -18,13 +16,13 @@ Traslated by H-mudcup
>一谈到稳定性和性能没什么能真的比得过Linux。这就是为什么美国海军陆战队的领导们已经决定让Northrop Grumman Corp. Electronic Systems把新送到的地面/空中任务导向雷达G/ATOR的操作系统从Windows XP换成Linux。
地面/空中任务导向雷达G/ATOR系统已经研制了很多年。很可能在这项工程启动的时候Windows XP被认为是合理的选择。在研制的这段时间事情发生了变化。微软已经撤销了对Windows XP的支持而且只有极少的几个组织会使用它。操作系统要么升级要么被换掉。在这种情况下Linux成了合理的选择。特别是当替换的费用很可能远远少于更新的费用。
地面/空中任务导向雷达G/ATOR系统已经研制了很多年。很可能在这项工程启动的时候Windows XP被认为是合理的选择。在研制的这段时间事情发生了变化。微软已经撤销了对Windows XP的支持而且只有极少的几个组织会使用它。操作系统要么升级要么被换掉。在这种情况下Linux成了合理的选择。特别是当替换的费用很可能远远少于更新的费用。
有个很有趣的地方值得注意一下。地面/空中任务导向雷达G/ATOR才刚刚送到美国海军陆战队但是制造它的公司却还是选择了保留这个过时的操作系统。一定有人注意到的这样一个事实。这是一个糟糕的决定并且指挥系统已经被告知了可能出现的问题了。
### G/ATOR雷达的软件将是基于Linux的 ###
Unix类系统比如基于BSD或者基于Linux的操作系统通常会出现在条件苛刻的领域或者任何情况下都不失败的的技术中。例如这就是为什么大多数的服务器都运行着Linux。一个雷达系统配上一个几乎不可能崩溃的操作系统看起来非常相配。
Unix类系统比如基于BSD或者基于Linux的操作系统通常会出现在条件苛刻的领域或者任何情况下都不允许失败的技术中。例如这就是为什么大多数的服务器都运行着Linux。一个雷达系统配上一个几乎不可能崩溃的操作系统看起来非常相配。
“弗吉尼亚州Quantico海军基地海军陆战队系统司令部的官员在周三宣布了一项与Northrop Grumman Corp. Electronic Systems在林西科姆高地的部分的总经理签订的价值1020万美元的修正合同。这个合同的修改将包括这样一项把G/ATOR的控制电脑从微软的Windows XP操作系统换成与国防信息局DISA兼容的Linux操作系统。”
@ -40,7 +38,7 @@ via: http://news.softpedia.com/news/U-S-Marine-Corps-Want-to-Change-OS-for-Radar
作者:[Silviu Stahie][a]
译者:[H-mudcup](https://github.com/H-mudcup)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,10 +1,9 @@
也许是有史以来最好的游戏:NetHack
================================================================================
## 一直以来最好的游戏? ##
**这款游戏非常容易让你上瘾。你可能需要花费一生的时间来掌握它。许多人玩了几十年也没有通关。欢迎来到 NetHack 的世界...**
不管你信不信,在 NetHack 里你见到字母 **D** 的时候你会被吓着。但是当你看见一个 **%** 的时候,你将会欣喜若狂。(忘了说 **\^**,你看见它将会更激动)在你寻思我们的脑子是不是烧坏了并准备关闭浏览器标签之前,请给我们一点时间解释:这些符号分别代表龙、食物以及陷阱。欢迎来到 NetHack 的世界,在这里你的想象力需要发挥巨大的作用。
如你所见NetHack 是一款文字模式的游戏:它仅仅使用标准终端字符集来刻画玩家、敌人、物品还有环境。游戏的图形版是存在的,不过 NetHack 的骨灰级玩家们都倾向于不去使用它们,问题在于假如你使用图形界面,当你通过 SSH 登录到你的古董级的运行着 NetBSD 的 Amiga 3000 上时你还能进行游戏吗在某些方面NetHack 和 Vi 非常相似 - 几乎被移植到了现存的所有的操作系统上,并且依赖都非常少。
@ -16,68 +15,68 @@ NetHack
![NetHack 界面](http://www.linuxvoice.com/wp-content/uploads/2014/12/nh_annotated.png)
*NetHack 界面*
### 也许是最古老的仍在开发的游戏里 ###
名非其实NetHack 并不是一款网络游戏。它只不过是基于一款出现较早的名为 Hack 的地牢探险类游戏开发出来的,而这款 Hack 游戏是 1980 年的游戏 Rogue 的后代。NetHack 在 1987 年发布了第一个版本,并于 2003 年发布了 3.4.3 版本,尽管在这期间一直没有加入新的功能,但各种补丁、插件,以及衍生作品还是在网络上疯狂流传。这使得它可以说是最古老的、拥有众多对游戏乐此不疲的粉丝的游戏。当你访问 [www.reddit.com/r/nethack][1] 之后,你就会了解我们的意思了 - 骨灰级的 NetHack 的玩家们仍然聚集在一起讨论新的策略、发现和技巧。偶尔你也可以发现 NetHack 的元老级玩家在历经千辛万苦终于通关之后发出的欢呼。
但怎样才能通关呢首先NetHack 被设定在既大又深的地牢中。游戏开始时你在最顶层 - 第 1 层 - 你的目标是不断往下深入直到你找到一个非常宝贵的物品,护身符 Yendor。通常来说 Yendor 在 第 20 层或者更深的地方,但它是可以变化的。随着你在地牢的不断深入,你会遇到各种各样的怪物、陷阱以及 NPC有些会试图杀掉你有些会挡在你前进的路上还有些... 总而言之,在你靠近 TA 们之前你永远不知道 TA 们会怎样。
> 要学习的有太多太多,绝大多数物品只有在和其他物品同时使用的情况下才会发挥最好的效果。
使 NetHack 如此引人入胜的原因是游戏中所加入的大量物品。武器、盔甲、附魔书、戒指、宝石 - 要学习的有太多太多,绝大多数物品只有在和其他物品同时使用的情况下才会发挥最好的效果。怪物在死亡后经常会掉落一些有用的物品,而某些物品如果使用不当还会产生极其不良的作用。你可以在地牢找到商店,里面有许多看似平凡实则非常有用的物品,不过别指望店主能给你详细的描述。你只能靠自己的经验来了解各个物品的用途。有些物品确实没有太大用处NetHack 中有很多的恶搞元素 - 比如你可以把一块奶油砸到自己的脸上。
不过在你踏入地牢之前NetHack 会询问你要选择哪种角色进行游戏。你可以为你接下来的地牢之行选择骑士、修道士、巫师或者卑微的旅者还有许多其他的角色类型。每种角色都有其独特的优势与弱点NetHack 的重度玩家喜欢选择那些相对较弱的角色来挑战游戏。你懂的,这样可以向其他玩家炫耀自己的实力。
> **情报不会降低游戏的乐趣**
> 用 NetHack 的说法来讲,“情报员”给指其他玩家提供关于怪物、物品、武器和盔甲信息的玩家。理论上来说,完全可以不借助任何外来信息而通关,但几乎没有几个玩家能做到,游戏实在是太难了。因此使用情报并不会被视为一件糟糕的事情 - 但是一开始由你自己来探索游戏和解决难题,这样才会获得更多的乐趣,只有当你遇到瓶颈的时候再去使用那些情报。
> 在这里给出一个比较有名的情报站点 [www.statslab.cam.ac.uk/~eva/nethack/spoilerlist.html][2],其中的情报被分为了不同的类别。游戏中随机发生的事,比如在喷泉旁饮水可能导致的不同结果,从这里你可以得知已确定的不同结果的发生概率。
>
> ### 你的首次地牢之行 ###
### 你的首次地牢之行 ###
NetHack 几乎可以在所有的主流操作系统以及 Linux 发行版上运行,因此你可以通过 "apt-get install nethack" 或者 "yum install nethack" 等适合你用的发行版的命令来安装游戏。安装完毕后,在一个命令行窗口中键入 "nethack" 就可以开始游戏了。游戏开始时系统会询问是否为你随机挑选一位角色 - 但作为一个新手,你最好自己从里面挑选一位比较强的角色。所以,你应该点 "n",然后点 "v" 以选取女武神Valkyrie而点 "d" 会选择成为侏儒dwarf
接着 NetHack 上会显示出剧情,说你的神正在寻找护身符 Yendor你的目标就是找到它并将它带给神。阅读完毕后点击空格键其他任何时候当你见到屏幕上的 "-More-" 时都可以这样)。接着就让我们出发 - 开始地牢之行吧!
先前已经介绍过了,你的角色用 @ 来表示。你可以看见角色所出房间周围的墙壁,房间里显示的那些地方是你可以移动的空间。首先你得明白怎样移动角色h、j、k 以及 l。是的和 Vim 中移动光标的操作相同)这些操作分别会使角色向向左、向下、向上以及向右移动。你也可以通过 y、u、b 和 n 来使角色斜向移动。在你熟悉如何控制角色移动前你最好在房间里来回移动你的角色。
NetHack 采用了回合制,因此即使你不进行任何动作,游戏仍然在进行。这样你可以提前计划你的行动。你可以看见一个 "d" 字符或者 "f" 字符在房间里来回移动:这是你的宠物狗/猫,(通常情况下)它们不会伤害你而是帮助你击杀怪物。但是宠物也会被惹怒 - 它们偶尔也会抢在你接近食物或者怪物尸体之前吃掉它们。
![点击 “i” 列出你当前携带的物品清单](http://www.linuxvoice.com/wp-content/uploads/2014/12/nh_inventory.png)
*点击 “i” 列出你当前携带的物品清单*
### 门后有什么? ###
接下来,让我们离开房间。房间四周的墙壁某处会有缝隙,可能是 "+" 号。"+" 号表示一扇关闭的门,这时你应该靠近它然后点击 "o" 来开门。接着系统会询问你开门的方向,假如门在你的左方,就点击 "h"。(如果门被卡住了,就多试几次)然后你就可以看见门后的走廊了,它们由 "#" 号表示,沿着走廊前进直到你找到另一个房间。
地牢之行中你会见到各种各样的物品。某些物品,比如金币(由 "$" 号表示)会被自动捡起来;至于另一些物品,你只能站在上面按下逗号键手动拾起。如果同一位置有多个物品,系统会给你显示一个列表,你只要通过合适的按键选择列表中你想要的物品最后按下 "Enter" 键即可。任何时间你都可以点击 "i" 键在屏幕上列出你当前携带的物品清单。
如果看见了怪物该怎么办?在游戏早期,你可能会遇到的怪物会用符号 "d"、"x" 和 ":" 表示。想要攻击的话,只要简单地朝怪物的方向移动即可。系统会在屏幕顶部通过信息显示来告诉你攻击是否成功 - 以及怪物做出了何种反应。早期的怪物很容易击杀,所以你可以毫不费力地打败他们,但请留意底部状态栏里显示的角色的 HP 值。
> 早期的怪物很容易击杀,但请留意角色的 HP 值。
如果怪物死后掉落了一具尸体("%"),你可以点击逗号进行拾取,并点击 "e" 来食用。(在任何时候系统提示你选择一件物品,你都可以从物品列表中点击相应的按键,或者点击 "?" 来查询迷你菜单。)意!有些尸体是有毒的,这些知识你将在日后的冒险中逐渐学会掌握。
如果你在走廊里行进时遇到了死胡同,你可以点击 "s" 进行搜寻直到找到一扇门。这会花费时间,但是你可以这样加速游戏进程:输入 "10" 并点击 "s" 你将一下搜索 10 次。这将花费游戏中进行 10 次动作的时间,不过如果你正在饥饿状态,你将有可能会被饿死
通常你可以在地牢顶部找到 "{"(喷泉)以及 "!"(药水)。当你找到喷泉的时候,你可以站在上面并点击 "q" 键开始 “畅饮quaff” - 饮用后会得到从振奋到致命的多种效果。当你找到药水的时候,将其拾起并点击 "q" 来饮用。如果你找到一个商店,你可以拾取其中的物品并在离开前点击 "p" 键进行支付。当你负重过大时,你可以点击 "d" 键丢掉一些东西。
![现在已经有带音效的 3D 版 Nethack 了Falcons Eye](http://www.linuxvoice.com/wp-content/uploads/2014/12/falcon.jpg)
*现在已经有带音效的 3D 版 Nethack 了Falcons Eye*
> **愚蠢的死法**
> 在 NetHack 玩家中流行着一个缩写词 "YASD" - 又一种愚蠢的死法Yet Another Stupid Death。这个缩写词指的是玩家由于自身的愚蠢或者粗心大意导致了角色的死亡。我们搜集了很多这类死法但我们最喜欢的是下面这种死法
> 我们正在商店浏览商品,这时一条蛇突然从药剂后面跳了出来。在杀死蛇之后,系统弹出一条信息提醒我们角色饥饿值过低了,因此我们顺手食用了蛇的尸体。坏事了!这使得我们的角色失明,导致我们的角色再也不能看见商店里的其他角色及地上的商品了。我们试图离开商店,但在慌乱中却撞在了店主身上并攻击了他。这种做法激怒了店主:他立即向我们的角色使用了火球术。我们试图逃到商店外的走廊上,但却在逃亡的过程中被烧死。
> 如果你有类似的死法,一定要来我们的论坛告诉我们。不要担心 - 没有人会嘲笑你。经历这样的死法也是你在 NetHack 的世界里不断成长的一部分。哈哈。
### 武装自己 ###
@ -85,7 +84,7 @@ NetHack 采用了回合制,因此即使你不进行任何动作,游戏仍然
在靠近掉在地下的装备之前最好检查一下身上的东西。点击 ";"(分号)后,"Pick an object"(选择一样物品)选项将出现在屏幕顶部。选择该选项,使用移动键直到选中你想要检查的物品,然后点击 ":"(冒号)。接着屏幕顶部将出现这件物品的描述。
因为你的目标是不断深入地牢直到找到护身符 Yendor所以请随时留意周围的 "<" 和 ">" 符号。这两个符号分别表示向上和向下的楼梯,你可以用与之对应的按键来上楼或下楼。注意!如果你想让宠物跟随你进入下/上一层地牢,下/上楼前请确保你的宠物在你邻近的方格内。如果你想退出,点击 "S"(大写的 S来保存进度输入 #quit 退出游戏。当你再次运行 NetHack 时,系统将会自动读取你上次退出时的游戏进度。
我们就不继续剧透了,地牢深处还有更多的神秘细节、陌生的 NPC 以及不为人知的秘密等着你去发掘。那么,我们再给你点建议:当你遇到了让你困惑不已的物品时,你可以尝试去 NetHack 维基 [http://nethack.wikia.com][3] 进行搜索。你也可以在 [www.nethack.org/v343/Guidebook.html][4] 找到一本非常不错(尽管很长)的指导手册。最后,祝游戏愉快!
@ -95,7 +94,7 @@ via: http://www.linuxvoice.com/nethack/
作者:[Mike Saunders][a]
译者:[Stevearzh](https://github.com/Stevearzh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,76 @@
没错Linux是感染了木马这并非企鹅的末日。
================================================================================
![Is something watching you?](http://www.omgubuntu.co.uk/wp-content/uploads/2014/12/spyware.jpg)
译注原文标题中Tuxpocalypse是作者造的词由Tux和apocalypse组合而来。Tux是Linux的LOGO中那只企鹅的名字apocalypse意为末世、大灾变这里翻译成企鹅的末日。
你被监视了吗?
带上一箱罐头,挖一个深坑碉堡,准备进入一个完全不同的新世界吧:[一个强大的木马已经在Linux中被发现][1]。
没错,迄今为止最牢不可破的计算机世外桃源已经被攻破了,安全专家们都已成惊弓之鸟。
关掉电脑拔掉键盘然后再买只猫忘掉YouTube吧。企鹅末日已经降临我们的日子不多了。
我去?这是真的吗?依我看,不一定吧~
### 一次可怕的异常事件! ###
先声明,**我并没有刻意轻视此次威胁人们给这个木马起名为Turla的严重性**为了避免质疑我要强调的是作为Linux用户我们不应该为此次事件过分担心。
此次发现的木马能够在人们毫无察觉的情况下感染Linux系统这是非常可怕的。事实上它的主要工作是搜寻并向外发送各种类型的敏感信息这一点同样令人感到恐惧。据了解它已经存在至少4年时间而且无需root权限就能完成这些工作。呃这是要把人吓尿的节奏吗
But - 但是 - 新闻稿里常常这个时候该出现but了 - 要说恐慌正在横扫桌面Linux的粉丝那就有点断章取义、甚至不着边际了。
对我们中的有些人来说计算机安全隐患的确是一种新鲜事物然而我们应该对其审慎对待对桌面用户来说Linux仍然是一个天生安全的操作系统。一次瑕疵不应该否定它的一切我们没有必要慌忙地割断网线。
### 国家资助,目标政府 ###
![Is a penguin snake a Penguake or a Snaguin?](http://www.omgubuntu.co.uk/wp-content/uploads/2014/12/penguin-snakle-by-icao-292x300.jpg)
企鹅和蛇的组合该叫‘企蛇’还是‘蛇鹅’?
Turla木马是一个复杂、高级的持续威胁四年多来它以政府、大使馆以及制药公司的系统为目标其使用的攻击方式所基于的代码[至少在14年前][2]就已存在了。
在 Windows 系统中,来自赛门铁克和卡巴斯基实验室的安全研究领域的超级英雄们首先发现了这条黏黏的蛇他们发现Turla 及其组件已经**感染了45个国家的数百台个人电脑**其中许多都是通过未打补丁的0day漏洞感染的。
*微软,干得漂亮。*
经过卡巴斯基实验室的进一步努力他们发现同样的木马出现在了Linux上。
这款木马无需高权限就可以“拦截传入的数据包在系统中执行传入的命令”但是它的触角到底有多深有多少Linux系统被感染它的完整功能都有哪些这些目前都暂时还不明朗。
根据它选定的目标我们推断“Turla”及其变种是由某些民族国家资助的。美国和英国的读者不要想当然以为这些国家就是“那些国家”。不要忘了我们自己的政府也很乐于趟这摊浑水。
#### 观点与责任 ####
这次的发现从情感上、技术上、伦理上,都是一次严重的失利,但它远没有达到说我们已经进入一个病毒和恶意软件针对桌面自由肆虐的时代。
**Turla 并不是那种用户关注的“我想要你的信用卡”病毒**那些病毒往往绑定在一个伪造的软件下载链接中。Turla是一种复杂的、经过巧妙处理的、具有高度适应性的威胁它时刻都具有着特定的目标因此它绝不仅仅满足于搜集一些卖萌少女的网站账户密码sorry 绿茶婊们!)。
卡巴斯基实验室是这样介绍的:
> “Linux上的Turla模块是一个链接多个静态库的C/C++可执行文件,这大大增加了它的文件体积。但它并没有着重减小自身的文件体积,而是剥离了自身的符号信息,这样就增加了对它逆向分析的难度。它的功能主要包括隐藏网络通信、远程执行任意命令以及远程管理等等。它的大部分代码都基于公开源码。”
不管它的影响和感染率如何,它的技术优势都将不断给那些号称聪明的专家们留下一个又一个问题,就让他们花费大把时间去追踪、分析、解决这些问题吧。
我不是一个计算机安全专家,但我是一个理智的网络脑残粉,要我说,这次事件应该被看做是一个警告,而并非有些网站所标榜的世界末日。
在更多细节披露之前我们都不必恐慌。只需继续计算机领域的安全实践避免从不信任的网站或PPA源下载运行脚本、app或二进制文件更不要冒险进入web网络的黑暗领域。
如果你仍然十分担心,你可以前往[卡巴斯基的博客][1]查看更多细节,以确定自己是否感染。
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/12/government-spying-turla-linux-trojan-found
作者:[Joey-Elijah Sneddon][a]
译者:[Mr小眼儿](http://blog.csdn.net/tinyeyeser)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://securelist.com/blog/research/67962/the-penquin-turla-2/
[2]:https://twitter.com/joernchen/status/542060412188262400
[3]:https://securelist.com/blog/research/67962/the-penquin-turla-2/

View File

@ -0,0 +1,74 @@
Linux有问必答Linux 中如何安装 7zip
================================================================================
> **问题**: 我需要从 ISO 映像中获取某些文件,为此我想要使用 7zip 程序。那么在 Linux 发行版上,我应该如何安装 7zip 软件呢?
7zip 是一款开源的归档应用程序,开始是为 Windows 系统而开发的。它能对多种格式的档案文件进行打包或解包处理,除了支持其原生的 7z 格式的文档外,还支持包括 XZ、GZIP、TAR、ZIP 和 BZIP2 等这些格式。 通常7zip 也用来解压 RAR、DEB、RPM 和 ISO 等格式的文件。除了简单的归档功能7zip 还具有支持 AES-256 算法加密以及自解压和建立多卷存档功能。在支持 POSIX 标准的系统上Linux、Unix、BSD原生的 7zip 程序被移植过来并被命名为 p7zip“POSIX 7zip” 的简称)。
下面介绍如何在 Linux 中安装 7zip或 p7zip。
### 在 Debian、Ubuntu 或 Linux Mint 系统中安装 7zip ###
在基于 Debian 的发行版中,有三种 7zip 软件包。
- **p7zip**: 包含 7zr最小的 7zip 归档工具),仅仅只能处理原生的 7z 格式。
- **p7zip-full**: 包含 7z ,支持 7z、LZMA2、XZ、ZIP、CAB、GZIP、BZIP2、ARJ、TAR、CPIO、RPM、ISO 和 DEB 格式。
- **p7zip-rar**: 包含一个能解压 RAR 文件的插件。
建议安装 p7zip-full 包(而不是 p7zip因为这是最完整的 7zip 程序包,它支持很多归档格式。此外,如果您想处理 RAR 文件的话,还需要安装 p7zip-rar 包;之所以做成一个独立的插件包,是因为 RAR 是一种专有格式。
$ sudo apt-get install p7zip-full p7zip-rar
### 在 Fedora 或 CentOS/RHEL 系统中安装 7zip ###
基于红帽的发行版提供了两个 7zip 软件包。
- **p7zip**: 包含 7za 命令,支持 7z、ZIP、GZIP、CAB、ARJ、BZIP2、TAR、CPIO、RPM 和 DEB 格式。
- **p7zip-plugins**: 包含 7z 命令及额外的插件,它们扩展了 7za 命令(例如支持 ISO 格式的抽取)。
在 CentOS/RHEL 系统中,在运行下面命令前您需要确保 [EPEL 资源库][1] 可用,但在 Fedora 系统中就不需要额外的资源库了。
$ sudo yum install p7zip p7zip-plugins
注意,跟基于 Debian 的发行版不同的是,基于红帽的发行版没有提供 RAR 插件,所以您不能使用 7z 命令来解压 RAR 文件。
### 使用 7z 创建或提取归档文件 ###
一旦安装好 7zip 软件后,就可以使用 7z 命令来打包解包各式各样的归档文件了。7z 命令会使用不同的插件来辅助处理对应格式的归档文件。
![](https://farm8.staticflickr.com/7583/15874000610_878a85b06a_b.jpg)
使用 “a” 选项就可以创建一个归档文件,它可以创建 7z、XZ、GZIP、TAR、 ZIP 和 BZIP2 这几种格式的文件。如果指定的归档文件已经存在的话,它会把文件“附加”到存在的归档中,而不是覆盖原有归档文件。
$ 7z a <archive-filename> <list-of-files>
使用 “e” 选项可以抽取一个归档文件,抽取出的文件会放在当前目录。抽取支持的格式比创建时支持的格式要多得多,包括 7z、XZ、GZIP、TAR、ZIP、BZIP2、LZMA2、CAB、ARJ、CPIO、RPM、ISO 和 DEB 这些格式。
$ 7z e <archive-filename>
解包的另外一种方式是使用 “x” 选项。和 “e” 选项不同的是,它使用的是全路径来抽取归档的内容。
$ 7z x <archive-filename>
要查看归档的文件列表,使用 “l” 选项。
$ 7z l <archive-filename>
要更新或删除归档文件,分别使用 “u” 和 “d” 选项。
$ 7z u <archive-filename> <list-of-files-to-update>
$ 7z d <archive-filename> <list-of-files-to-delete>
要测试归档的完整性,使用:
$ 7z t <archive-filename>
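把上面的选项串起来,就是一个最小的“打包-列表-解包-校验”流程。下面是一个示意脚本(假设已按前文安装好 p7zip未安装 7z 时会自动跳过演示):

```shell
#!/bin/sh
# 示意:依次演示 7z 的 a打包、l列表和 x按全路径解包操作
set -e
work=$(mktemp -d)
cd "$work"
echo hello > a.txt
if command -v 7z >/dev/null 2>&1; then
    7z a demo.7z a.txt >/dev/null     # "a":创建归档
    7z l demo.7z | grep -q 'a.txt'    # "l":列出内容,确认文件在归档里
    mkdir out && cd out
    7z x ../demo.7z >/dev/null        # "x":按全路径解包到当前目录
    cmp a.txt ../a.txt                # 解包出的文件应与原文件完全一致
    result="7z demo ok"
else
    result="7z not installed, skipped"
fi
echo "$result"
```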
--------------------------------------------------------------------------------
via:http://ask.xmodulo.com/install-7zip-linux.html
译者:[runningwater](https://github.com/runningwater)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://linux.cn/article-2324-1.html

View File

@ -1,10 +1,10 @@
Linux有问必答如何在Linux上安装内核头文件
================================================================================
> **提问**:我在安装一个设备驱动前先要安装内核头文件。怎样安装合适的内核头文件?
当你在编译一个设备驱动模块时你需要在系统中安装内核头文件。内核头文件同样在你编译与内核直接链接的用户空间程序时需要。当你在这些情况下安装内核头文件时你必须确保内核头文件精确地与你当前内核版本匹配比如3.13.0-24-generic
如果你的内核发行版自带的内核版本或者使用默认的包管理器的基础仓库升级的比如apt-ger、aptitude或者yum你也可以使用包管理器来安装内核头文件。另一方面如果下载的是[kernel源码][1]并且手动编译的,你可以使用[make命令][2]来安装匹配的内核头文件。
现在我们假设你的内核是发行版自带的,让我们看下该如何安装匹配的头文件。
@ -41,7 +41,7 @@ Debian、Ubuntu、Linux Mint默认头文件在**/usr/src**下。
假设你没有手动编译内核你可以使用yum命令来安装匹配的内核头文件。
首先,用下面的命令检查系统是否已经安装了头文件。如果下面的命令没有任何输出,这就意味着还没有头文件。
$ rpm -qa | grep kernel-headers-$(uname -r)
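“先取当前内核版本、再拼出匹配的包名”这个思路可以写成一小段脚本(示意;实际安装需要 root安装命令以注释给出

```shell
#!/bin/sh
# 示意:取得正在运行的内核版本,并拼出与之匹配的头文件包名
ver=$(uname -r)
pkg="kernel-headers-$ver"
echo "running kernel: $ver"
echo "matching package: $pkg"
# 基于红帽的发行版sudo yum install "$pkg"
# 基于 Debian 的发行版sudo apt-get install "linux-headers-$ver"
```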
@ -66,7 +66,7 @@ Fedora、CentOS 或者 RHEL上默认内核头文件的位置是**/usr/include/li
via: http://ask.xmodulo.com/install-kernel-headers-linux.html
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,10 +1,8 @@
2014年Linux界发生的好事坏事和丑事
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/12/Buggest_Linux_Stories.jpeg)
2014年已经过去,现在正是盘点**2014年Linux大事件**的时候。整整一年我们关注了有关Linux和开源的一些好事坏事和丑事。让我们来快速回顾一下2014对于Linux是怎样的一年。
### 好事 ###
@ -14,7 +12,7 @@ Translated by H-mudcup
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/12/netflix-linux.jpg)
从使用Wine到[使用Chrome的测试功能][1]为了能让Netflix能在Linux上工作Linux用户曾尝试了各种方法。好消息是Netflix终于在2014年带来了Linux的本地支持。这让所有能使用Netflix的地区的Linux用户的脸上浮现出了微笑。不过,想在[美国以外的地区使用Netflix][2]或其他官方授权使用Netflix的国家之外的人还是得靠其他的方法。
#### 欧洲国家采用开源/Linux ####
@ -30,19 +28,19 @@ Translated by H-mudcup
### 坏事 ###
Linux在2014年并不是一帆风顺。某些事件的发生坏了Linux/开源的形象。
#### Heartbleed 心血漏洞 ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/12/heartbleed-bug.jpg)
在今年的四月份,检测到[OpenSSL][8]有一个缺陷。这个漏洞被命名为[Heartbleed心血漏洞][9]。它影响了包括Facebook和Google在内的50多万个“安全”网站。这个漏洞允许任何人读取系统的内存并能因此获得用于加密数据流的密匙的访问权限。[xkcd上的漫画以更简单的方式解释了心血漏洞][10]。自然这个漏洞在OpenSSL的更新中被修复了。
#### Shellshock 破壳漏洞 ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/shellshock_Linux_check.jpeg)
好像有个心血漏洞还不够似的在Bash里的一个缺陷更严重的震撼了Linux世界。这个漏洞被命名为[Shellshock 破壳漏洞][11]。这个漏洞把Linux往远程攻击的危险深渊又推了一把。这项漏洞是通过黑客的DDoS攻击暴露出来的。升级一下Bash版本应该能修复这个问题。
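当年广为流传的一条本地检测命令如下(示意;在已打补丁的 Bash 上只会输出 hello存在漏洞的 Bash 会先多输出一行 vulnerable

```shell
#!/bin/sh
# 经典的 Shellshock 本地检测:通过环境变量注入一段函数定义,看 Bash 是否会执行其后附带的命令
if command -v bash >/dev/null 2>&1; then
    out=$(env x='() { :;}; echo vulnerable' bash -c 'echo hello' 2>/dev/null)
else
    out="bash not installed"
fi
echo "$out"
```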
#### Ubuntu Phone和Steam控制台 ####
@ -52,13 +50,13 @@ Linux在2014年并不是一帆风顺。某些事件的发生损坏了Linux/开
### 丑事 ###
是否采用 systemd 的争论变得让人羞耻。
### systemd大论战 ###
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/12/Systemd_everywhere.jpg)
用init还是systemd的争吵已经进行了一段时间了。但是在2014年当systemd准备在包括Debian、Ubuntu、OpenSUSE、Arch Linux 和 Fedora几个主流Linux发行版中替代init时事情变得不知廉耻了起来。它是如此的一发不可收拾以至于它已经不限于boycottsystemd.org这类网站了。Lennart Poetteringsystemd的首席开发人员及作者在一条Google Plus状态上声明说那些反对systemd的人在“收集比特币来雇杀手杀他”。Lennart还声称开源社区“是个恶心得不能待的地方”。人们吵得越来越离谱以至于把Debian分裂成了一个新的操作系统称为[Devuan][15]。
### 还有诡异的事 ###
@ -81,10 +79,10 @@ via: http://itsfoss.com/biggest-linux-stories-2014/
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://linux.cn/article-3024-1.html
[2]:http://itsfoss.com/easiest-watch-netflix-hulu-usa/
[3]:http://linux.cn/article-3575-1.html
[4]:http://linux.cn/article-3602-1.html
[5]:http://itsfoss.com/170-primary-public-schools-geneva-switch-ubuntu/
[6]:http://itsfoss.com/german-town-gummersbach-completes-switch-open-source/
[7]:http://itsfoss.com/windows-10-inspired-linux/
@ -95,8 +93,8 @@ via: http://itsfoss.com/biggest-linux-stories-2014/
[12]:http://itsfoss.com/ubuntu-phone-specification-release-date-pricing/
[13]:http://www.tecmint.com/systemd-replaces-init-in-linux/
[14]:https://plus.google.com/+LennartPoetteringTheOneAndOnly/posts/J2TZrTvu7vd
[15]:http://linux.cn/article-4512-1.html
[16]:http://linux.cn/article-4056-1.html
[17]:http://www.theregister.co.uk/2001/06/02/ballmer_linux_is_a_cancer/
[18]:http://azure.microsoft.com/en-us/
[19]:http://www.zdnet.com/article/top-five-linux-contributor-microsoft/

View File

@ -4,7 +4,9 @@ Windows和Ubuntu双系统修复UEFI引导的两种办法
这里有两种修复EFI启动引导的方法使Ubuntu可以正常启动
![](http://0.tqn.com/y/linux/1/L/E/J/1/grub2.JPG)
*将GRUB2设置为启动引导*
### 1. 启用GRUB引导 ###
@ -18,22 +20,22 @@ Windows和Ubuntu双系统修复UEFI引导的两种办法
可以按照以下几个步骤将GRUB2设置为默认的引导程序
1. 登录Windows 8
2. 转到桌面
3. 右击开始按钮,选择管理员命令行
4. 输入 mountvol g: /s (将你的EFI目录结构映射到G盘)
5. 输入 cd g:\EFI
6. 当你输入 dir 列出文件夹内容时你可以看到一个Ubuntu的文件夹
7. 这里的参数可以是grubx64.efi或者shimx64.efi
8. 运行下列命令将grub64.efi设置为启动引导程序
bcdedit /set {bootmgr} path \EFI\ubuntu\grubx64.efi
9. 重启你的电脑
10. 你将会看到一个包含Ubuntu和Windows选项的GRUB菜单
11. 如果你的电脑仍然直接启动到Windows重复步骤1到7但是这次输入
bcdedit /set {bootmgr} path \EFI\ubuntu\shimx64.efi
12. 重启你的电脑
这里你做的事情是登录Windows管理员命令行将EFI引导区映射到磁盘上来查看Ubuntu的引导程序是否安装成功然后选择grubx64.efi或者shimx64.efi作为引导程序。
那么[grubx64.efi和shimx64.efi有什么区别呢][4]在安全启动serureboot关闭的情况下你可以使用grubx64.efi。如果安全启动打开则需要选择shimx64.efi。
@ -41,28 +43,30 @@ bcdedit /set {bootmgr} path \EFI\ubuntu\shimx64.efi
### 2.使用rEFInd引导Ubuntu和Windows双系统 ###
![](http://f.tqn.com/y/linux/1/L/F/J/1/refind.png)
[rEFInd引导程序][5]会以图标的方式列出你所有的操作系统。因此你可以通过点击相应的图标来启动Windows、Ubuntu或者优盘中的操作系统。
[点击这里][6]下载rEFInd for Windows 8。
下载和解压以后按照以下的步骤安装rEFInd。
1. 返回桌面
2. 右击开始按钮,选择管理员命令行
3. 输入 mountvol g: /s (将你的EFI目录结构映射到G盘)
4. 进入解压的rEFInd目录。例如
cd c:\users\gary\downloads\refind-bin-0.8.4\refind-bin-0.8.4
当你输入 dir 命令你可以看到一个refind目录
5. 输入如下命令将refind拷贝到EFI引导区
xcopy /E refind g:\EFI\refind\
6. 输入如下命令进入refind文件夹
cd g:\EFI\refind
7. 重命名示例配置文件
rename refind.conf-sample refind.conf
8. 运行如下命令将rEFind设置为引导程序
bcdedit /set {bootmgr} path \EFI\refind\refind_x64.efi
9. 重启你的电脑
10. 你将会看到一个包含Ubuntu和Windows的图形菜单
这个过程和选择GRUB引导程序十分相似。
@ -72,17 +76,20 @@ bcdedit /set {bootmgr} path \EFI\refind\refind_x64.efi
希望这篇文章可以解决有些人在安装Ubuntu和Windows 8.1双系统时出现的问题。如果你仍然有问题,可以通过上面的电邮和我进行交流。
---
via: http://linux.about.com/od/LinuxNewbieDesktopGuide/tp/3-Ways-To-Fix-The-UEFI-Bootloader-When-Dual-Booting-Windows-And-Ubuntu.htm
作者:[Gary Newell][a]
译者:[zhouj-sh](https://github.com/zhouj-sh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linux.about.com/bio/Gary-Newell-132058.htm
[1]:http://linux.cn/article-3178-1.html
[2]:http://linux.cn/article-3178-1.html#4_3289
[3]:http://linux.cn/article-3178-1.html#4_1717
[4]:https://wiki.ubuntu.com/SecurityTeam/SecureBoot
[5]:http://www.rodsbooks.com/refind/installing.html#windows
[6]:http://sourceforge.net/projects/refind/files/0.8.4/refind-bin-0.8.4.zip/download

View File

@ -0,0 +1,200 @@
如何在Ubuntu / CentOS 6.x上安装Bugzilla 4.4
================================================================================
这里我们将展示如何在一台Ubuntu 14.04或CentOS 6.5/7上安装Bugzilla。Bugzilla是一款基于Web的、用来记录和跟踪缺陷的bug跟踪软件它同时是一款自由及开源软件FOSS。它的bug跟踪系统允许个人和开发团体有效地记录下他们产品的一些突出问题。尽管是“免费”的Bugzilla依然有很多其它同类产品所没有的“珍贵”特性。因此Bugzilla很快就变成了全球范围内数以千计的组织最喜欢的bug管理工具。
Bugzilla对于不同使用场景的适应能力非常强。如今它们应用在各个不同的IT领域如系统管理中的部署管理、芯片设计及部署的问题跟踪(制造前期和后期)还有为那些诸如RedhatNASALinux-Mandrake和VA Systems这些著名公司提供软硬件bug跟踪。
### 1. 安装依赖程序 ###
安装Bugzilla相当**简单**。这篇文章特别针对Ubuntu 14.04和CentOS 6.5两个版本(不过也适用于更老的版本)。
为了获取并能在Ubuntu或CentOS系统中运行Bugzilla我们要安装Apache网络服务器(启用SSL)MySQL数据库服务器和一些需要来安装并配置Bugzilla的工具。
要在你的服务器上安装使用Bugzilla你需要安装好以下程序
- Perl(5.8.1 或以上)
- MySQL
- Apache2
- Bugzilla
- Perl模块
- 使用apache的Bugzilla
正如我们所提到的本文会阐述Ubuntu 14.04和CentOS 6.5/7两种发行版的安装过程为此我们会分成两部分来表示。
以下就是在你的Ubuntu 14.04 LTS和CentOS 7机器安装Bugzilla的步骤
**准备所需的依赖包:**
你需要运行以下命令来安装些必要的包:
**Ubuntu版本:**
$ sudo apt-get install apache2 mysql-server libapache2-mod-perl2 libapache2-mod-perl2-dev libapache2-mod-perl2-doc perl postfix make gcc g++
**CentOS版本:**
$ sudo yum install httpd mod_ssl mysql-server mysql php-mysql gcc perl* mod_perl-devel
**注意请在shell或者终端下运行所有的命令并且确保你用root用户sudo操作机器。**
### 2. 启动Apache服务 ###
你已经按照以上步骤安装好了apache服务那么我们现在需要配置apache服务并运行它。我们需要用sudo或root身份来敲命令完成下面的操作我们先切换到root身份。
$ sudo -s
我们需要在防火墙中打开80端口并保存改动。
# iptables -I INPUT -p tcp --dport 80 -j ACCEPT
# service iptables save
现在,我们需要启动服务:
CentOS版本:
# service httpd start
我们来确保Apache会在每次你重启机器的时候一并启动起来
# /sbin/chkconfig httpd on
Ubuntu版本:
# service apache2 start
现在由于我们已经启动了我们apache的http服务我们就能在默认的127.0.0.1地址下打开apache服务了。
### 3. 配置MySQL服务器 ###
现在我们需要启动我们的MySQL服务
CentOS版本:
# chkconfig mysqld on
# service mysqld start
Ubuntu版本:
# service mysql start
![mysql](http://blog.linoxide.com/wp-content/uploads/2014/12/mysql.png)
用root用户登录连接MySQL并给Bugzilla创建一个数据库把你的mysql密码更改成你想要的稍后配置Bugzilla的时候会用到它。
CentOS 6.5和Ubuntu 14.04 Trusty两个版本
# mysql -u root -p
# password: (You'll need to enter your password)
# mysql > create database bugs;
# mysql > grant all on bugs.* to root@localhost identified by "mypassword";
# mysql > quit
**注意请记住数据库名和mysql的密码我们稍后会用到它们。**
### 4. 安装并配置Bugzilla ###
现在我们所有需要的包已经设置完毕并运行起来了我们就要配置我们的Bugzilla。
那么首先我们要下载最新版的Bugzilla包这里我下载的是4.5.2版本。
使用wget工具在shell或终端上下载
wget http://ftp.mozilla.org/pub/mozilla.org/webtools/bugzilla-4.5.2.tar.gz
你也可以从官方网站进行下载。[http://www.bugzilla.org/download/][1]
**从下载下来的bugzilla压缩包中提取文件并重命名**
# tar zxvf bugzilla-4.5.2.tar.gz -C /var/www/html/
# cd /var/www/html/
# mv -v bugzilla-4.5.2 bugzilla
**注意**:这里,**/var/www/html/bugzilla/**就是**Bugzilla主目录**.
现在我们来配置buzilla
# cd /var/www/html/bugzilla/
# ./checksetup.pl --check-modules
![bugzilla-check-module](http://blog.linoxide.com/wp-content/uploads/2014/12/bugzilla2-300x198.png)
检查完成之后,我们会发现缺少了一些组件,我们需要安装它们,用以下命令即可实现:
# cd /var/www/html/bugzilla
# perl install-module.pl --all
这一步会花掉一点时间去下载安装所有依赖程序,然后再次运行**checksetup.pl --check-modules**命令来验证有没有漏装什么。
现在我们需要运行以下这条命令,它会在/var/www/html/bugzilla路径下自动生成一个名为localconfig的文件。
# ./checksetup.pl
确认一下你刚才在localconfig文件中所输入的数据库名、用户和密码是否正确。
# nano ./localconfig
# ./checksetup.pl
![bugzilla-success](http://blog.linoxide.com/wp-content/uploads/2014/12/bugzilla-success.png)
如果一切正常checksetup.pl现在应该就成功地配置Bugzilla了。
现在我们需要添加Bugzilla至我们的Apache配置文件中。那么我们需要用文本编辑器打开 /etc/httpd/conf/httpd.conf 文件(CentOS版本)或者 /etc/apache2/apache2.conf 文件(Ubuntu版本)
CentOS版本:
# nano /etc/httpd/conf/httpd.conf
Ubuntu版本:
# nano /etc/apache2/apache2.conf
现在我们需要配置Apache服务器我们要把以下配置添加到配置文件里
<VirtualHost *:80>
DocumentRoot /var/www/html/bugzilla/
</VirtualHost>
<Directory /var/www/html/bugzilla>
AddHandler cgi-script .cgi
Options +Indexes +ExecCGI
DirectoryIndex index.cgi
AllowOverride Limit FileInfo Indexes
</Directory>
接着,我们需要编辑 .htaccess 文件并用“#”注释掉顶部“Options -Indexes”这一行。
让我们重启我们的apache服务并测试下我们的安装情况。
CentOS版本:
# service httpd restart
Ubuntu版本:
# service apache2 restart
![bugzilla-install-success](http://blog.linoxide.com/wp-content/uploads/2014/12/bugzilla_apache.png)
这样我们的Bugzilla就准备好在我们的Ubuntu 14.04 LTS和CentOS 6.5上获取bug报告了你就可以通过本地回环地址或你网页浏览器上的IP地址来浏览bugzilla了。
--------------------------------------------------------------------------------
via: http://linoxide.com/tools/install-bugzilla-ubuntu-centos/
作者:[Arun Pyasi][a]
译者:[ZTinoZ](https://github.com/ZTinoZ)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://www.bugzilla.org/download/

View File

@ -1,4 +1,4 @@
systemd-nspawn 快速指南
===========================
我目前已从 chroot译者注chroot可以构建类似沙盒的环境建议各位同学先了解chroot 迁移到 systemd-nspawn同时我写了一篇快速指南。简单的说我强烈建议正在使用 systemd 的用户从 chroot 转为 systemd-nspawn因为只要你的内核配置正确的话它几乎没有什么缺点。
@ -6,11 +6,11 @@ systemd-nspawn 指南
###chroot 面临的挑战
大多数交互环境下仅运行chroot还不够。通常还要挂载 /proc /sys另外为了确保不会出现类似“丢失 ptys”之类的错误我们还得 bind译者注bind 是 mount 的一个选项) 挂载 /dev。如果你使用 tmpfs你可能想要以 tmpfs 类型挂载新的 tmp var/tmp。接下来你可能还想将其他的挂载点 bind 到 chroot 中。这些都不是特别难,但是一般情况下要写一个脚本来管理它。
现在我按照日常计划执行备份操作,当然有一些不必备份的数据,如 tmp 目录,或任何 bind 挂载的内容。当我配置了一个新的 chroot 后就意味着我要更新我的备份配置了,但我经常忘记这点,因为大多数时间里 chroot 挂载点并没有运行。当这些挂载点仍然存在的情况下执行备份的话,那么备份中会多出很多不需要的内容。
当 bind 挂载点包含其他挂载点时(比如挂载时使用 -rbind 选项),这种情况下 systemd 的默认处理方式略有不同。在 bind 挂载中卸载一些东西时systemd 会将处于 bind 另一边的目录也卸载掉。想像一下,如果我卸载了 chroot 中以 bind 挂载 /dev 的某个目录后发现主机上的 /dev/pts 与 /dev/shm 也不见了,我肯定会很吃惊。不过好像有其他方法可以避免,但是这不是我们此次讨论的重点。
### Systemd-nspawn 优点
@ -39,25 +39,25 @@ Systemd-nspawn 用于启动一个容器,并且它的最简模式就可以像 c
像 chroot 那样启动 namespace 是非常简单的:
systemd-nspawn -D .
也可以像 chroot 那样退出。在内部可以运行 mount 并且可以看到默认它已将 /dev 与 /tmp 准备好了。“.”就是 chroot 的路径,也就是当前路径。在它内部运行的是 bash。
如果要添加一些 bind 挂载点也非常简便:
systemd-nspawn -D . --bind /usr/portage
现在,容器中的 /usr/portage 就与主机的对应目录绑定起来了,我们无需 sync /etc。如果想要绑定到指定的路径只要在原路径后添加 “:dest” 即可(相当于 chroot 的 root--bind foo 与 --bind foo:foo 是一样的)。
如果容器具有 init 功能并且可以在内部运行,可以通过添加 -b 选项启动它:
systemd-nspawn -D . --bind /usr/portage -b
可以观察到 init 的运作。关闭容器会自动退出。
如果容器内运行了 systemd你可以使用 -j 选项将它的日志重定向到主机的 systemd 日志:
systemd-nspawn -D . --bind /usr/portage -j -b
使用 nspawn 注册容器以便它能够在 machinectl 中显示。如此可以方便的在主机上对它进行操作,如启动新的 getty ssh 连接,关机等。
@ -68,8 +68,8 @@ Systemd-nspawn 用于启动一个容器,并且它的最简模式就可以像 c
via: http://rich0gentoo.wordpress.com/2014/07/14/quick-systemd-nspawn-guide/
作者:[rich0][a]
译者:[SPccman](https://github.com/SPccman)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,35 +0,0 @@
Turla espionage operation infects Linux systems with malware
================================================================================
![](http://images.techhive.com/images/article/2014/12/open-source-linux-100533457-primary.idge.jpg)
> A newly identified Linux backdoor program is tied to the Turla cyberespionage campaign, researchers from Kaspersky Lab said
A newly discovered malware program designed to infect Linux systems is tied to a sophisticated cyberespionage operation of Russian origin dubbed Epic Turla, security researchers found.
The Turla campaign, also known as Snake or Uroburos, [was originally uncovered in February][1], but goes back several years. The massive operation infected computers at government organizations, embassies, military installations, education and research institutions and pharmaceutical companies in over 45 countries.
The newly identified Turla component for Linux was uploaded recently to a multi-engine antivirus scanning service and was described by security researchers from antivirus vendor Kaspersky Lab as "a previously unknown piece of a larger puzzle."
"So far, every single Turla sample we've encountered was designed for the Microsoft Windows family, 32 and 64 bit operating systems," the Kaspersky researchers said Monday in a [blog post][2]. "The newly discovered Turla sample is unusual in the fact that it's the first Turla sample targeting the Linux operating system that we have discovered."
The Turla Linux malware is based on an open-source backdoor program called cd00r developed in 2000. It allows attackers to execute arbitrary commands on a compromised system, but doesn't require elevated privileges or root access to function and listens to commands received via hidden TCP/UDP packets, making it stealthy.
"It can't be discovered via netstat, a commonly used administrative tool," said the Kaspersky researchers, who are still analyzing the malware's functionality.
"We suspect that this component was running for years at a victim site, but do not have concrete data to support that statement just yet," they said.
Since their blog post Monday, the Kaspersky researchers also found a second Turla Linux component that appears to be a separate malware program.
--------------------------------------------------------------------------------
via: http://www.computerworld.com/article/2857129/turla-espionage-operation-infects-linux-systems-with-malware.html
Author: [Lucian Constantin][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.computerworld.com/author/Lucian-Constantin/
[1]:http://news.techworld.com/security/3505688/invisible-russian-cyberweapon-stalked-us-and-ukraine-since-2005-new-research-reveals/
[2]:https://securelist.com/blog/research/67962/the-penquin-turla-2/

Translating by 小眼儿
Yes, This Trojan Infects Linux. No, It’s Not The Tuxpocalypse
================================================================================
![Is something watching you?](http://www.omgubuntu.co.uk/wp-content/uploads/2014/12/spyware.jpg)
Is something watching you?
Grab a crate of canned food, start digging a deep underground bunker and prepare to settle into a world that will never be the same again: [a powerful trojan has been uncovered on Linux][1].
Yes, the hitherto impregnable fortress of computing nirvana has been compromised in a way that has left security experts a touch perturbed.
Unplug your PC, disinfect your keyboard and buy a cat (no more YouTube). The Tuxpocalypse is upon us. We’ve reached the end of days.
Right? RIGHT? Nah, not quite.
### A Terrifying Anomalous Thing! ###
Let me set off by saying that **I am not underplaying the severity of this threat (known by the nickname Turla)** nor, for the avoidance of doubt, am I suggesting that we as Linux users shouldn’t be concerned by the implications.
The discovery of a silent trojan infecting Linux systems is terrifying. The fact it was tasked with sucking up and sending off all sorts of sensitive information is horrific. And to learn it’s been doing this for at least four years and doesn’t require root privileges? My seat is wet. I’m sorry.
But — and along with hyphens and typos, there’s always a but on this site — the panic currently sweeping desktop Linux fans, Mexican wave style, is a little out of context.
Vulnerability may be a new feeling for some of us, yet let’s keep it in check: Linux remains an inherently secure operating system for desktop users. One clever workaround does not negate that and shouldn’t send you scurrying offline.
### State Sponsored, Targeting Governments ###
![Is a penguin snake a Penguake or a Snaguin?](http://www.omgubuntu.co.uk/wp-content/uploads/2014/12/penguin-snakle-by-icao-292x300.jpg)
Is a penguin snake a Penguake or a Snaguin?
Turla is a complex APT (Advanced Persistent Threat) that has (thus far) targeted government, embassy and pharmaceutical company systems for around four years, using a method based on [14-year-old code, no less][2].
On Windows, where the superhero security researchers at Symantec and Kaspersky Lab first sighted the slimy snake, Turla and components of it were found to have **infected hundreds of PCs across 45 countries**, many through unpatched zero-day exploits.
*Nice one Microsoft.*
Further diligence by Kaspersky Lab has now uncovered that parts of the same trojan have also been active on Linux for some time.
The Trojan doesn’t require elevated privileges and can “intercept incoming packets and run incoming commands on the system”, but it’s not yet clear how deep its tentacles reach or how many Linux systems are infected, nor is the full extent of its capabilities known.
“Turla” (and its children) are presumed to be nation-state sponsored due to its choice of targets. US and UK readers shouldn’t assume it’s “*them*”, either. Our own governments are just as happy to play in the mud, too.
#### Perspective and Responsibility ####
As terrible a breach as this discovery is emotionally, technically and ethically, it remains far, far, far away from being an indication that we’re entering a new “free for all” era of viruses and malware aimed at the desktop.
**Turla is not a user-focused “i wantZ ur CredIt carD” virus** bundled inside a faux software download. It’s a complex, finessed and adaptable threat with specific targets in mind (ergo grander ambitions than collecting a bunch of fruity tube dot com passwords, sorry ego!).
Kaspersky Lab explains:
> “The Linux Turla module is a C/C++ executable statically linked against multiple libraries, greatly increasing its file size. It was stripped of symbol information, more likely intended to increase analysis effort than to decrease file size. Its functionality includes hidden network communications, arbitrary remote command execution, and remote management. Much of its code is based on public sources.”
Regardless of impact or infection rate, the precedent it sets will still raise big, big questions that clever, clever people will now spend time addressing, analysing and (importantly) solving.
IANACSE (I am not a computer security expert) but IAFOA (I am a fan of acronyms), and AFAICT (as far as I can tell) this news should be viewed more as a cautionary PSA or FYI than the kind of OMGGTFO that some sites are painting it as.
Until more details are known none of us should panic. Let’s continue to practice safe computing. Avoid downloading/running scripts, apps, or binaries from untrusted sites or PPAs, and don’t venture into dodgy dark parts of the web.
If you remain super concerned you can check out the [Kaspersky blog][1] for details on how to check that you’re not infected.
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/12/government-spying-turla-linux-trojan-found
Author: [Joey-Elijah Sneddon][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://securelist.com/blog/research/67962/the-penquin-turla-2/
[2]:https://twitter.com/joernchen/status/542060412188262400
[3]:https://securelist.com/blog/research/67962/the-penquin-turla-2/

Git 2.2.1 Released To Fix Critical Security Issue
================================================================================
![](http://www.phoronix.com/assets/categories/freesoftware.jpg)
Git 2.2.1 was released this afternoon to fix a critical security vulnerability in Git clients. Fortunately, the vulnerability doesn't plague Unix/Linux users but rather OS X and Windows.
Today's Git vulnerability affects those using the Git client on case-insensitive file-systems. On case-insensitive platforms like Windows and OS X, committing to .Git/config could overwrite the user's .git/config and could lead to arbitrary code execution. Fortunately with most Phoronix readers out there running Linux, this isn't an issue thanks to case-sensitive file-systems.
Besides the attack vector from case insensitive file-systems, Windows and OS X's HFS+ would map some strings back to .git too if certain characters are present, which could lead to overwriting the Git config file. Git 2.2.1 addresses these issues.
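Conceptually, the fix boils down to refusing any tree entry whose name would collide with `.git` after case folding and after dropping the characters HFS+ ignores when comparing names. Here is a hypothetical Python sketch of that kind of check (the real check lives in Git's C code, and the ignorable-character list below is only a small sample):

```python
# Illustrative sketch of the path check the Git 2.2.1 fix implies:
# a component collides with ".git" on case-insensitive or HFS+-like
# filesystems if it matches after lowercasing and after stripping
# characters HFS+ treats as ignorable (sample subset only).
HFS_IGNORABLE = {'\u200c', '\u200d', '\u200e', '\u200f', '\ufeff'}

def normalize_component(name):
    stripped = ''.join(ch for ch in name if ch not in HFS_IGNORABLE)
    return stripped.lower()

def is_forbidden_git_path(path):
    """Return True if any path component would collide with '.git'."""
    return any(normalize_component(c) == '.git' for c in path.split('/'))

print(is_forbidden_git_path('.Git/config'))        # case-folding collision
print(is_forbidden_git_path('.g\u200cit/config'))  # ignorable-character trick
print(is_forbidden_git_path('src/main.c'))
```

The first two calls print True, showing both attack vectors the article describes; a normal source path passes.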
More details via the [Git 2.2.1 release announcement][1] and [GitHub has additional details][2].
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=news_item&px=MTg2ODA
Author: [Michael Larabel][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.michaellarabel.com/
[1]:http://article.gmane.org/gmane.linux.kernel/1853266
[2]:https://github.com/blog/1938-git-client-vulnerability-announced

New 64-bit Linux Kernel Vulnerabilities Disclosed This Week
================================================================================
![](http://www.phoronix.com/assets/categories/linuxkernel.jpg)
For those that didn't hear the news yet, multiple Linux x86_64 vulnerabilities were made public this week.
With CVE-2014-9322 that's now public, there's a local privilege escalation issue affecting all kernel versions prior to Linux 3.17.5. CVE-2014-9322 is described as "privilege escalation due to incorrect handling of a #SS fault caused by an IRET instruction. In particular, if IRET executes on a writeable kernel stack (this was always the case before 3.16 and is sometimes the case on 3.16 and newer), the assembly function general_protection will execute with the user's gsbase and the kernel's gsbase swapped. This is likely to be easy to exploit for privilege escalation, except on systems with SMAP or UDEREF. On those systems, assuming that the mitigation works correctly, the impact of this bug may be limited to massive memory corruption and an eventual crash or reboot."
Fortunately, it's fixed [in Linux kernel Git since late November][1]. CVE-2014-9322 is linked to CVE-2014-9090, which is also corrected by the fixes in Git.
There are also two x86_64 kernel bugs related to espfix. "The next two bugs are related to espfix. The IRET instruction has IMO a blatant design flaw: IRET to a 16-bit user stack segment will leak bits 31:16 of the kernel stack pointer. This flaw exists on 32-bit and 64-bit systems. 32-bit Linux kernels have mitigated this leak for a long time, and 64-bit Linux kernels have mitigated this leak since 3.16. The mitigation is called espfix."
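The leak quoted above is easy to picture with a little arithmetic: on IRET to a 16-bit stack segment only the low 16 bits of ESP are replaced, so the high bits of the kernel stack pointer survive into user space. A toy illustration with made-up values:

```python
# Conceptual illustration of the espfix leak (numbers are invented).
# When IRET returns to a 16-bit stack segment, the CPU replaces only
# the low 16 bits of ESP, so bits 31:16 of the kernel stack pointer
# remain visible to user space.

KERNEL_ESP = 0xC1A3F7B0   # hypothetical kernel stack pointer
USER_SP16  = 0x1234       # 16-bit user stack pointer loaded by IRET

leaked_esp = (KERNEL_ESP & 0xFFFF0000) | USER_SP16

print(hex(leaked_esp))        # low 16 bits are the user's
print(hex(leaked_esp >> 16))  # high 16 bits leak from the kernel pointer
```

Leaking those bits weakens kernel address-space randomization, which is why the espfix mitigation matters even though this is "only" an information leak.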
Fixes for CVE-2014-8133 and CVE-2014-8134 are in KVM and Linux kernel Git as of a few days ago. More details on these x86_64 vulnerabilities via [this oss-sec posting][2]. These issues were uncovered by Andy Lutomirski at AMA Capital Management.
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=news_item&px=MTg2NzY
Author: [Michael Larabel][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.michaellarabel.com/
[1]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/arch/x86/kernel/entry_64.S?id=6f442be2fb22be02cafa606f1769fa1e6f894441
[2]:http://seclists.org/oss-sec/2014/q4/1052

Easy File Comparisons With These Great Free Diff Tools
================================================================================
by Frazer Kline
File comparison compares the contents of computer files, finding their common contents and their differences. The result of the comparison is often known as a diff.
diff is also the name of a famous console based file comparison utility that outputs the differences between two files. The diff utility was developed in the early 1970s on the Unix operating system. diff will output the parts of the files where they are different.
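As a quick illustration of what such a tool computes, the familiar unified-diff output (the same format `diff -u` prints) can be generated with Python's standard `difflib` module:

```python
import difflib

# Two versions of the same small file, as lists of lines.
old = "apple\nbanana\ncherry\n".splitlines(keepends=True)
new = "apple\nblueberry\ncherry\n".splitlines(keepends=True)

# unified_diff yields the same format as `diff -u old.txt new.txt`:
# removed lines prefixed with '-', added lines with '+'.
for line in difflib.unified_diff(old, new, fromfile="old.txt", tofile="new.txt"):
    print(line, end="")
```

This prints a hunk marking `banana` as removed and `blueberry` as added; the GUI tools below present the same information side by side with colour highlighting.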
Linux has many good GUI tools that enable you to clearly see the difference between two files or two versions of the same file. This roundup selects 5 of my favourite GUI diff tools, with all but one released under an open source license.
These utilities are essential software development tools, as they visualise the differences between files or directories, merge files with differences, resolve conflicts and save the output to a new file or patch, and assist in reviewing file changes and producing comments (e.g. approving source code changes before they get merged into a source tree). They help developers work on a file, passing it back and forth between each other. The diff tools are not only useful for showing differences in source code files; they can be used on many text-based file types as well. The visualisations make it easier to compare files.
----------
![](http://www.linuxlinks.com/portal/content2/png/Meld.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Meld.png)
Meld is an open source graphical diff viewer and merge application for the Gnome desktop. It supports 2 and 3-file diffs, recursive directory diffs, diffing of directories under version control (Bazaar, Codeville, CVS, Darcs, Fossil SCM, Git, Mercurial, Monotone, Subversion), as well as the ability to manually and automatically merge file differences.
Meld's focus is on helping developers compare and merge source files, and get a visual overview of changes in their favourite version control system.
Features include
- Edit files in-place, and your comparison updates on-the-fly
- Perform two- and three-way diffs and merges
- Easily navigate between differences and conflicts
- Visualise global and local differences with insertions, changes and conflicts marked
- Built-in regex text filtering to ignore uninteresting differences
- Syntax highlighting (with optional gtksourceview)
- Compare two or three directories file-by-file, showing new, missing, and altered files
- Directly open file comparisons of any conflicting or differing files
- Filter out files or directories to avoid seeing spurious differences
- Auto-merge mode and actions on change blocks help make merges easier
- Simple file management is also available
- Supports many version control systems, including Git, Mercurial, Bazaar and SVN
- Launch file comparisons to check what changes were made, before you commit
- View file versioning statuses
- Simple version control actions are also available (i.e., commit/update/add/remove/delete files)
- Automatically merge two files using a common ancestor
- Mark and display the base version of all conflicting changes in the middle pane
- Visualise and merge independent modifications of the same file
- Lock down read-only merge bases to avoid mistakes
- Command line interface for easy integration with existing tools, including git mergetool
- Internationalization support
- Visualisations make it easier to compare your files
- Website: [meldmerge.org][1]
- Developer: Kai Willadsen
- License: GNU GPL v2
- Version Number: 1.8.5
----------
![](http://www.linuxlinks.com/portal/content2/png/DiffMerge.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-DiffMerge.png)
DiffMerge is an application to visually compare and merge files on Linux, Windows, and OS X.
Features include:
- Graphically shows the changes between two files. Includes intra-line highlighting and full support for editing
- Graphically shows the changes between 3 files. Allows automatic merging (when safe to do so) and full control over editing the resulting file
- Performs a side-by-side comparison of 2 folders, showing which files are only present in one folder or the other, as well as file pairs which are identical, equivalent or different
- Rulesets and options provide for customized appearance and behavior
- Unicode-based application and can import files in a wide range of character encodings
- Cross-platform tool
- Website: [sourcegear.com/diffmerge][2]
- Developer: SourceGear LLC
- License: Licensed for use free of charge (not open source)
- Version Number: 4.2
----------
![](http://www.linuxlinks.com/portal/content2/png/xxdiff.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-xxdiff.png)
xxdiff is an open source graphical file and directories comparator and merge tool.
xxdiff can be used for viewing the differences between two or three files, or two directories, and can be used to produce a merged version. The texts of the two or three files are presented side by side with their differences highlighted with colors for easy identification.
This program is an essential software development tool that can be used to visualize the differences between files or directories, merge files with differences, resolve conflicts and save output to a new file or patch, and assist in reviewing file changes and producing comments (e.g. approving source code changes before they get merged into a source tree).
Features include:
- Compare two files, three files, or two directories (shallow and recursive)
- Horizontal diffs highlighting
- Files can be merged interactively and resulting output visualized and saved
- Features to assist in performing merge reviews/policing
- Unmerge CVS conflicts in automatically merged file and display them as two files, to help resolve conflicts
- Uses external diff program to compute differences: works with GNU diff, SGI diff and ClearCase's cleardiff, and any other diff whose output is similar to those
- Fully customizable with a resource file
- Look and feel similar to Rudy Wortel's/SGI xdiff; desktop agnostic
- Features and output that ease integration with scripts
- Website: [furius.ca/xxdiff][3]
- Developer: Martin Blais
- License: GNU GPL
- Version Number: 4.0
----------
![](http://www.linuxlinks.com/portal/content2/png/Diffuse.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Diffuse.png)
Diffuse is an open source graphical tool for merging and comparing text files. Diffuse is able to compare an arbitrary number of files side-by-side and offers the ability to manually adjust line-matching and directly edit files. Diffuse can also retrieve revisions of files from bazaar, CVS, darcs, git, mercurial, monotone, Subversion and GNU Revision Control System (RCS) repositories for comparison and merging.
Features include:
- Compare and merge an arbitrary number of files side-by-side (n-way merges)
- Line matching can be manually corrected by the user
- Directly edit files
- Syntax highlighting
- Bazaar, CVS, Darcs, Git, Mercurial, Monotone, RCS, Subversion, and SVK support
- Unicode support
- Unlimited undo
- Easy keyboard navigation
- Website: [diffuse.sourceforge.net][4]
- Developer: Derrick Moser
- License: GNU GPL v2
- Version Number: 0.4.7
----------
![](http://www.linuxlinks.com/portal/content2/png/Kompare.png)
![](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Kompare.png)
Kompare is an open source GUI front-end program that enables differences between source files to be viewed and merged. Kompare can be used to compare differences on files or the contents of folders. Kompare supports a variety of diff formats and provides many options to customize the information level displayed.
Whether you are a developer comparing source code, or you just want to see the difference between that research paper draft and the final document, Kompare is a useful tool.
Kompare is part of the KDE desktop environment.
Features include:
- Compare two text files
- Recursively compare directories
- View patches generated by diff
- Merge a patch into an existing directory
- Entertain you during that boring compile
- Website: [www.caffeinated.me.uk/kompare/][5]
- Developer: The Kompare Team
- License: GNU GPL
- Version Number: Part of KDE
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/2014062814400262/FileComparisons.html
Translator: [译者ID](https://github.com/译者ID) Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[1]:http://meldmerge.org/
[2]:https://sourcegear.com/diffmerge/
[3]:http://furius.ca/xxdiff/
[4]:http://diffuse.sourceforge.net/
[5]:http://www.caffeinated.me.uk/kompare/

Tomahawk Music Player Returns With New Look, Features
================================================================================
**After a quiet year Tomahawk, the Swiss Army knife of music players, is back with a brand new release to sing about.**
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/11/tomahawk-tile-1.jpg)
Version 0.8 of the open-source and cross-platform app adds **support for more online services**, refreshes its appearance, and doubles down on making sure its innovative social features work flawlessly.
### Tomahawk — The Best of Both Worlds ###
Tomahawk marries a traditional app structure with the modernity of our “on demand” culture. It can browse and play music from local libraries as well as online services like Spotify, Grooveshark, and SoundCloud. In its latest release it adds Google Play Music and Beats Music to its roster.
That may sound cumbersome or confusing on paper but in practice it all works fantastically.
When you want to play a song, and don’t care where it’s played back from, you just tell Tomahawk the track title and artist and it automatically finds a high-quality version from enabled sources — you don’t need to do anything.
![](http://i.imgur.com/nk5oixy.jpg)
The app also sports some additional features, like EchoNest profiling, Last.fm suggestions, and Jabber support so you can play friends’ music. There’s also a built-in messaging service so you can quickly share playlists and tracks with others.
> “This fundamentally different approach to music enables a range of new music consumption and sharing experiences previously not possible,” the project says on its website. And with little else like it, it’s not wrong.
![Tomahawk supports the Sound Menu](http://www.omgubuntu.co.uk/wp-content/uploads/2014/11/tomahawk-controllers.jpg)
Tomahawk supports the Sound Menu
### Tomahawk 0.8 Release Highlights ###
- New UI
- Support for Beats Music
- Support for Google Play Music (stored and Play All Access)
- Support for drag-and-drop of iTunes, Spotify, etc. web links
- Now Playing notifications
- Android app (beta)
- Inbox improvements
### Install Tomahawk 0.8 in Ubuntu ###
As a big music streaming user I’ll be using the app over the next few days to get a fuller appreciation of the changes on offer. In the meantime, you can go hands-on for yourself.
Tomahawk 0.8 is available for Ubuntu 14.04 LTS and Ubuntu 14.10 via an official PPA.
sudo add-apt-repository ppa:tomahawk/ppa
sudo apt-get update && sudo apt-get install tomahawk
Standalone installers, and more information, can be found on the official project website.
- [Visit the Official Tomahawk Website][1]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/11/tomahawk-media-player-returns-new-look-features
Author: [Joey-Elijah Sneddon][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://gettomahawk.com/

[Translating by Stevarzh]
How to Download Music from Grooveshark with a Linux OS
================================================================================
> The solution is actually much simpler than you think
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-2.jpg)
**Grooveshark is a great online platform for people who want to listen to music, and there are a number of ways to download music from there. Groovesquid is just one of the applications that let users get music from Grooveshark, and it's multiplatform.**
If there is a service that streams something online, then there is a way to download the stuff that you are just watching or listening. As it turns out, it's not that difficult and there are a ton of solutions, no matter the platform. For example, there are dozens of YouTube downloaders and it stands to reason that it's not all that difficult to get stuff from Grooveshark either.
Now, there is the problem of legality. Like many other applications out there, Groovesquid is not actually illegal. It's the user's fault if they do something illegal with an application. The same reasoning can be applied to apps like uTorrent or BitTorrent. As long as you don't touch copyrighted material, there are no problems in using Groovesquid.
### Groovesquid is fast and efficient ###
The only problem that you could find with Groovesquid is the fact that it's based on Java and that's never a good sign. This is a good way to ensure that an application runs on all the platforms, but it's an issue when it comes to the interface. It's not great, but it doesn't really matter all that much for users, especially since the app is doing a great job.
There is one caveat though. Groovesquid is a free application, but in order to remain free, it has to display an ad on the right side of the menu. This shouldn't be a problem for most people, but it's a good idea to mention that right from the start.
From a usability point of view, the application is pretty straightforward. Users can download a single song by entering the link in the top field, but the purpose of that field can be changed by accessing the small drop-down menu to its left. From there, it's possible to change to Song, Popular, Albums, Playlist, and Artist. Some of the options provide access to things like the most popular song on Grooveshark and other options allow you to download an entire playlist, for example.
You can download Groovesquid 0.7.0
- [jar][1] File size: 3.8 MB
- [tar.gz][2] File size: 549 KB
You will get a Jar file and all you have to do is to make it executable and let Java do the rest.
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-3.jpg)
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-4.jpg)
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-5.jpg)
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-6.jpg)
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268.shtml
Author: [Silviu Stahie][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:https://github.com/groovesquid/groovesquid/releases/download/v0.7.0/Groovesquid.jar
[2]:https://github.com/groovesquid/groovesquid/archive/v0.7.0.tar.gz

[zhouj-sh translating...]
2 Ways To Fix The UEFI Bootloader When Dual Booting Windows And Ubuntu
================================================================================
The main problem that users experience after following my [tutorials for dual booting Ubuntu and Windows 8][1] is that their computer continues to boot directly into Windows 8 with no option for running Ubuntu.
Here are two ways to fix the EFI boot loader to get the Ubuntu portion to boot correctly.
![Set GRUB2 As The Bootloader.](http://0.tqn.com/y/linux/1/L/E/J/1/grub2.JPG)
### 1. Make GRUB The Active Bootloader ###
There are a few things that may have gone wrong during the installation.
In theory if you have managed to install Ubuntu in the first place then you will have [turned off fast boot][2].
Hopefully you [followed this guide to create a bootable UEFI Ubuntu USB drive][3] as this installs the correct UEFI boot loader.
If you have done both of these things as part of the installation, the bit that may have gone wrong is the part where you set GRUB2 as the boot manager.
To set GRUB2 as the default bootloader follow these steps:
1. Login to Windows 8.
2. Go to the desktop.
3. Right-click on the start button and choose administrator command prompt.
4. Type mountvol g: /s (this maps your EFI folder structure to the G drive).
5. Type cd g:\EFI
6. When you do a directory listing you will see a folder for Ubuntu. Type dir.
7. There should be options for grubx64.efi and shimx64.efi.
8. Run the following command to set grubx64.efi as the bootloader:

        bcdedit /set {bootmgr} path \EFI\ubuntu\grubx64.efi

9. Reboot your computer.
10. You should now have a GRUB menu appear with options for Ubuntu and Windows.
11. If your computer still boots straight to Windows, repeat steps 1 through 7 again but this time type:

        bcdedit /set {bootmgr} path \EFI\ubuntu\shimx64.efi

12. Reboot your computer.
What you are doing here is logging into the Windows administration command prompt, mapping a drive to the EFI partition so that you can see where the Ubuntu bootloaders are installed and then either choosing grubx64.efi or shimx64.efi as the bootloader.
So [what is the difference between grubx64.efi and shimx64.efi][4]? You should choose grubx64.efi if secureboot is turned off. If secureboot is turned on you should choose shimx64.efi.
In my steps above I have suggested trying one and then trying another. The other option is to install one and then turn secure boot on or off within the UEFI firmware for your computer depending on the bootloader you chose.
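Since the grubx64/shimx64 choice is easy to get backwards, here it is restated as a tiny Python helper (the function name is mine and purely illustrative; it just encodes the rule from the paragraph above):

```python
# Restating the rule: Secure Boot on  -> shimx64.efi (signed shim chain-loads GRUB)
#                     Secure Boot off -> grubx64.efi (GRUB loaded directly)

def ubuntu_efi_loader(secure_boot_enabled):
    """Pick the bootloader path to pass to bcdedit (illustrative only)."""
    return (r"\EFI\ubuntu\shimx64.efi" if secure_boot_enabled
            else r"\EFI\ubuntu\grubx64.efi")

print(ubuntu_efi_loader(True))
print(ubuntu_efi_loader(False))
```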
### 2. Use rEFInd To Dual Boot Windows 8 And Ubuntu ###
The [rEFInd boot loader][5] works by listing all of your operating systems as icons. You will therefore be able to boot Windows, Ubuntu and operating systems from USB drives simply by clicking the appropriate icon.
To download rEFInd for Windows 8 [click here][6].
After you have downloaded the file extract the zip file.
Now follow these steps to install rEFInd.
1. Go to the desktop.
2. Right-click on the start button and choose administrator command prompt.
3. Type mountvol g: /s (this maps your EFI folder structure to the G drive).
4. Navigate to the extracted rEFInd folder. For example:

        cd c:\users\gary\downloads\refind-bin-0.8.4\refind-bin-0.8.4

    When you type dir you should see a folder for refind.
5. Type the following to copy refind to the EFI partition:

        xcopy /E refind g:\EFI\refind\

6. Type the following to navigate to the refind folder:

        cd g:\EFI\refind

7. Rename the sample configuration file:

        rename refind.conf-sample refind.conf

8. Run the following command to set rEFInd as the bootloader:

        bcdedit /set {bootmgr} path \EFI\refind\refind_x64.efi

9. Reboot your computer.
10. You should now have a menu similar to the image above with options to boot Windows and Ubuntu.
This process is fairly similar to choosing the GRUB bootloader.
Basically it involves downloading rEFInd, extracting the files, copying the files to the EFI partition, renaming the configuration file and then setting rEFInd as the boot loader.
### Summary ###
Hopefully this guide has solved the issues that some of you have been having with dual booting Ubuntu and Windows 8.1. If you are still having issues feel free to get back in touch using the email link above.
Author: [Gary Newell][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
via: http://linux.about.com/od/LinuxNewbieDesktopGuide/tp/3-Ways-To-Fix-The-UEFI-Bootloader-When-Dual-Booting-Windows-And-Ubuntu.htm
[a]:http://linux.about.com/bio/Gary-Newell-132058.htm
[1]:http://linux.about.com/od/LinuxNewbieDesktopGuide/ss/The-Ultimate-Windows-81-And-Ubuntu-
[2]:http://linux.about.com/od/howtos/ss/How-To-Create-A-UEFI-Bootable-Ubuntu-USB-Drive-Using-Windows_3.htm#step-heading
[3]:http://linux.about.com/od/howtos/ss/How-To-Create-A-UEFI-Bootable-Ubuntu-USB-Drive-Using-Windows.htm
[4]:https://wiki.ubuntu.com/SecurityTeam/SecureBoot
[5]:http://www.rodsbooks.com/refind/installing.html#windows
[6]:http://sourceforge.net/projects/refind/files/0.8.4/refind-bin-0.8.4.zip/download

This App Can Write a Single ISO to 20 USB Drives Simultaneously
================================================================================
**If I were to ask you to burn a single Linux ISO to 17 USB thumb drives how would you go about doing it?**
Code savvy folks would write a little bash script to automate the process, and a large number would use a GUI tool like the USB Startup Disk Creator to burn the ISO to each drive in turn, one by one. But the rest of us would fast conclude that neither method is ideal.
### Problem > Solution ###
![GNOME MultiWriter in action](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/gnome-multi-writer.jpg)
GNOME MultiWriter in action
Richard Hughes, a GNOME developer, faced a similar dilemma. He wanted to create a number of USB drives pre-loaded with an OS, but wanted a tool simple enough for someone like his dad to use.
His response was to create a **brand new app** that combines both approaches into one easy to use tool.
It's called “[GNOME MultiWriter][1]” and lets you write a single ISO or IMG to multiple USB drives at the same time.
It nixes the need to write or customize a command-line script, and spares you from wasting an afternoon performing an identical set of actions on repeat.
All you need is this app, an ISO, some thumb-drives and lots of empty USB ports.
### Use Cases and Installing ###
![The app can be installed on Ubuntu](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/mutli-writer-on-ubuntu.jpg)
The app can be installed on Ubuntu
The app has a pretty well-defined usage scenario: situations where USB sticks pre-loaded with an OS or live image are being distributed.
That being said, it should work just as well for anyone wanting to create a solitary bootable USB stick, too — and since I've never once successfully created a bootable image from Ubuntu's built-in disk creator utility, working alternatives are welcome news to me!
Hughes, the developer, says it **supports up to 20 USB drives**, each being between 1GB and 32GB in size.
The drawback (for now) is that GNOME MultiWriter is not a finished, stable product. It works, but at this early stage there are no pre-built binaries to install or a PPA to add to your overstocked software sources.
If you know your way around the usual configure/make process you can get it up and running in no time. On Ubuntu 14.10 you may also need to install the following packages first:
sudo apt-get install gnome-common yelp-tools libcanberra-gtk3-dev libudisks2-dev gobject-introspection
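For reference, the build itself follows the usual configure/make routine. The sketch below is a hedged illustration, not official instructions: the commented commands assume a standard GNOME autotools layout, and the only part that actually runs is a quick check that a basic toolchain is present.

```shell
# Hedged sketch of a from-source build, assuming a standard autotools layout:
#   git clone https://github.com/hughsie/gnome-multi-writer.git
#   cd gnome-multi-writer
#   ./autogen.sh && make && sudo make install
# Quick sanity check that a basic toolchain is available before starting:
missing=0
for tool in gcc make pkg-config; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: ok"
    else
        echo "$tool: missing"
        missing=1
    fi
done
```

If any tool is reported missing, install your distribution's build-essential (or equivalent) package first.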
If you get it up and running, give it a whirl and let us know what you think!
Bugs and pull requests can be logged on the GitHub page for the project, which is where you'll also find tarball downloads for manual installation.
- [GNOME MultiWriter on Github][2]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/01/gnome-multiwriter-iso-usb-utility
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://github.com/hughsie/gnome-multi-writer/
[2]:https://github.com/hughsie/gnome-multi-writer/

View File

@ -0,0 +1,111 @@
Best GNOME Shell Themes For Ubuntu 14.04
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Best_Gnome_Shell_Themes.jpeg)
Themes are the best way to customize your Linux desktop. If you [install GNOME on Ubuntu 14.04][1] or 14.10, you might want to change the default theme and give it a different look. To help you in this task, I have compiled here a **list of the best GNOME Shell themes for Ubuntu** or any other Linux OS that has GNOME Shell installed. But before we get to the list, let's first see how to install new themes in GNOME Shell.
### Install themes in GNOME Shell ###
To install new themes in GNOME on Ubuntu, you can use GNOME Tweak Tool, which is available in the Ubuntu software repository. Open a terminal and use the following command:
sudo apt-get install gnome-tweak-tool
Alternatively, you can install themes by putting them in the ~/.themes directory. I have written a detailed tutorial on [how to install and use themes in GNOME Shell][2], in case you need it.
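If you prefer the manual route, GNOME Tweak Tool picks up anything dropped into ~/.themes. A minimal sketch of the expected layout; "MyTheme" is a placeholder name, and a real shell theme would ship its own gnome-shell/gnome-shell.css file rather than the empty one created here for illustration:

```shell
# Manual install sketch: GNOME Shell themes live under ~/.themes/<name>/gnome-shell
# "MyTheme" is a placeholder; replace it with the unpacked theme's folder name
THEME_DIR="$HOME/.themes/MyTheme/gnome-shell"
mkdir -p "$THEME_DIR"
# a real theme provides this file; we create an empty one just for illustration
touch "$THEME_DIR/gnome-shell.css"
ls "$HOME/.themes"
```

After unpacking a real theme here, it should appear in GNOME Tweak Tool's Shell theme drop-down.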
### Best GNOME Shell themes ###
The themes listed here were tested on GNOME Shell 3.10.4, but they should work with all versions of GNOME 3 and higher. For the record, the themes are not listed in any particular order. Let's have a look at the best GNOME themes:
#### Numix ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/02/mockups_numix_5.jpeg)
No list can be complete without a mention of [Numix themes][3]. These themes got so popular that they encouraged the [Numix team to work on a new Linux OS, Ozon][4]. Considering their design work on the Numix theme, it won't be an exaggeration to call it one of the [most beautiful Linux OSes][5] due for release in the near future.
To install Numix theme in Ubuntu based distributions, use the following commands:
sudo apt-add-repository ppa:numix/ppa
sudo apt-get update
sudo apt-get install numix-icon-theme-circle
#### Elegance Colors ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Elegance_Colors_Theme_GNOME_Shell.jpeg)
Another beautiful theme from Satyajit Sahoo, who is also a member of Numix team. [Elegance Colors][6] has its own PPA so that you can easily install it:
sudo add-apt-repository ppa:satyajit-happy/themes
sudo apt-get update
sudo apt-get install gnome-shell-theme-elegance-colors
#### Moka ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Moka_GNOME_Shell.jpeg)
[Moka][7] is another mesmerizing theme that is always included in the list of beautiful themes. Designed by the same developer who gave us Unity Tweak Tool, Moka is a must try:
sudo add-apt-repository ppa:moka/stable
sudo apt-get update
sudo apt-get install moka-gnome-shell-theme
#### Viva ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Viva_GNOME_Theme.jpg)
Based on Gnome's default Adwaita theme, Viva is a nice theme with shades of black and orange. You can download Viva from the link below.
- [Download Viva GNOME Shell Theme][8]
#### Ciliora-Prima ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Ciliora_Prima_Gnome_Shell.jpeg)
Previously known as Zukitwo Dark, Ciliora-Prima uses square icons. The theme is available in three versions that differ slightly from each other. You can download it from the link below.
- [Download Ciliora-Prima GNOME Shell Theme][9]
#### Faience ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Faience_GNOME_Shell_Theme.jpeg)
Faience has been a popular theme for quite some time and rightly so. You can install Faience using the PPA below for GNOME 3.10 and higher.
sudo add-apt-repository ppa:tiheum/equinox
sudo apt-get update
sudo apt-get install faience-theme
#### Paper [Incomplete] ####
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Paper_GTK_Theme.jpeg)
Ever since Google talked about Material Design, people have been going gaga over it. The Paper GTK theme, by Sam Hewitt (of the Moka Project), is inspired by Google's Material Design and is currently under development. This means you will not have the best experience with Paper at the moment. But if you're a bit experimental, like me, you can definitely give it a try.
sudo add-apt-repository ppa:snwh/pulp
sudo apt-get update
sudo apt-get install paper-gtk-theme
That concludes my list. If you are trying to give a different look to your Ubuntu, you should also try the list of [best icon themes for Ubuntu 14.04][10].
How do you find this list of the **best GNOME Shell themes**? Which one is your favorite among the ones listed here? And if it's not listed here, do let us know which theme you think is the best GNOME Shell theme.
--------------------------------------------------------------------------------
via: http://itsfoss.com/gnome-shell-themes-ubuntu-1404/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://itsfoss.com/how-to-install-gnome-in-ubuntu-14-04/
[2]:http://itsfoss.com/install-switch-themes-gnome-shell/
[3]:https://numixproject.org/
[4]:http://itsfoss.com/numix-linux-distribution/
[5]:http://itsfoss.com/new-beautiful-linux-2015/
[6]:http://satya164.deviantart.com/art/Gnome-Shell-Elegance-Colors-305966388
[7]:http://mokaproject.com/
[8]:https://github.com/vivaeltopo/gnome-shell-theme-viva
[9]:http://zagortenay333.deviantart.com/art/Ciliora-Prima-Shell-451947568
[10]:http://itsfoss.com/best-icon-themes-ubuntu-1404/

View File

@ -0,0 +1,83 @@
What is a good IDE for C/C++ on Linux
================================================================================
"A real coder doesn't use an IDE, a real coder uses [insert a text editor name here] with such and such plugins." We all heard that somewhere. Yet, as much as one can agree with that statement, an IDE remains quite useful. An IDE is easy to set up and use out of the box. Hence there is no better way to start coding a project from scratch. So for this post, let me present you with my list of good IDEs for C/C++ on Linux. Why C/C++ specifically? Because C is my favorite language, and we need to start somewhere. Also note that there are in general a lot of ways to code in C, so in order to trim down the list, I only selected "real out-of-the-box IDEs," not text editors like Gedit or Vim pumped up with [plugins][1]. Not that this alternative is bad in any way, just that the list would go on forever if I included text editors.
### 1. Code::Blocks ###
![](https://farm8.staticflickr.com/7520/16089880989_10173db27b_c.jpg)
Starting off with my personal favorite, [Code::Blocks][2] is a simple and fast IDE for C/C++ exclusively. Like any respectable IDE, it integrates syntax highlighting, bookmarking, word completion, project management, and a debugger. Where it shines is its simple plugin system, which adds indispensable tools like Valgrind and CppCheck, and less indispensable ones like a Tetris mini-game. But my reason for liking it particularly is its coherent set of handy shortcuts and the large number of options that never feel too overwhelming.
### 2. Eclipse ###
![](https://farm8.staticflickr.com/7522/16276001255_66235a0a69_c.jpg)
I know that I said only "real out-of-the-box IDE" and not a text editor pumped with plugins, but [Eclipse][3] is a "real out-of-the-box IDE." It's just that Eclipse needs a little [plugin][4] (or a variant) to code in C. So I technically did not contradict myself. And it would have been impossible to make an IDE list without mentioning the behemoth that is Eclipse. Like it or not, Eclipse remains a great tool to code in Java. And thanks to the [CDT Project][5], it is possible to program in C/C++ too. You will benefit from all the power of Eclipse and its traditional features like word completion, code outline, code generator, and advanced refactoring. What it lacks in my opinion is the lightness of Code::Blocks. It is still very heavy and takes time to load. But if your machine can take it, or if you are a hardcore Eclipse fan, it is a very safe option.
### 3. Geany ###
![](https://farm9.staticflickr.com/8573/16088461968_c6a6c9e49a_c.jpg)
With a lot fewer features but a lot more flexibility, [Geany][6] is the opposite of Eclipse. But what it lacks (a debugger, for example), Geany makes up for with nice little features: a space for note taking, creation from templates, code outline, customizable shortcuts, and plugin management. Geany is closer to an extensive text editor than an IDE here. However, I keep it in the list for its lightness and its well-designed interface.
### 4. MonoDevelop ###
![](https://farm8.staticflickr.com/7515/16275175052_61487480ce_c.jpg)
Another monster to add to the list, [MonoDevelop][7] has a very unique feel derived from its look and interface. I personally love its project management and its integrated version control system. The plugin system is also pretty amazing. But for some reason, all the options and the support for all kinds of programming languages make it feel a bit overwhelming to me. It remains a great tool that I used many times in the past, but just not my number one when dealing with "simplistic" C.
### 5. Anjuta ###
![](https://farm8.staticflickr.com/7514/16088462018_7ee6e5b433_c.jpg)
With a very strong "GNOME feeling" attached to it, [Anjuta][8]'s appearance is a hit or miss. I tend to see it as an advanced version of Geany with a debugger included, but the interface is actually a lot more elaborate. I do enjoy the tab system to switch between the project, folders, and code outline view. I would have liked maybe a bit more shortcuts to move around in a file. However, it is a good tool, and offers outstanding compilation and build options, which can support the most specific needs.
### 6. Komodo Edit ###
![](https://farm8.staticflickr.com/7502/16088462028_81d1114c84_c.jpg)
I was not very familiar with [Komodo Edit][9], but after trying it a few days, it surprised me with many many good things. First, the tab-based navigation is always appreciable. Then the fancy looking code outline reminds me a lot of Sublime Text. Furthermore, the macro system and the file comparator make Komodo Edit very practical. Its plugin library makes it almost perfect. "Almost" because I do not find the shortcuts as nice as in other IDEs. Also, I would enjoy more specific C/C++ tools, and this is typically the flaw of general IDEs. Yet, very enjoyable software.
### 7. NetBeans ###
![](https://farm8.staticflickr.com/7569/16089881229_98beb0fce3_c.jpg)
Just like Eclipse, it is impossible to avoid this beast. With navigation via tabs, project management, code outline, change history tracking, and a plethora of tools, [NetBeans][10] might be the most complete IDE out there. I could list half a page of its amazing features. But that will tip you off too easily about its main disadvantage: it might be too big. As great as it is, I prefer plugin-based software because I doubt that anyone will need both Git and Mercurial integration for the same project. Call me crazy. But if you have the patience to master all of its options, you will pretty much become the master of IDEs everywhere.
### 8. KDevelop ###
![](https://farm8.staticflickr.com/7519/15653583824_e412f2ab1f_c.jpg)
For all KDE fans out there, [KDevelop][11] might be the answer to your prayers. With a lot of configuration options, KDevelop is yours if you manage to seize it. Call me superficial but I never really got past the interface. But it's too bad for me as the editor itself packs quite a punch with a lot of navigation options and customizable shortcuts. The debugger is also very advanced and will take a bit of practice to master. However, this patience will be rewarded with this very flexible IDE's full power. And it gets special credits for its amazing embedded documentation.
### 9. CodeLite ###
![](https://farm9.staticflickr.com/8594/16250066446_b5f654e63f_c.jpg)
Finally, last but not least, [CodeLite][12] shows that you can take a traditional formula and still get something with its own feeling attached to it. While the interface really reminded me of Code::Blocks and Anjuta at first, I was just blown away by the extensive plugin library. Whether you want to diff a file, insert a copyright block, define an abbreviation, or push your work to Git, there is a plugin for you. If I had to nitpick, I would say that it lacks a few navigation shortcuts for my taste, but that's really it.
To conclude, I hope this list helped you discover new IDEs for coding in your favorite language. While Code::Blocks remains my favorite, it has some serious challengers. Also, we are far from covering all the ways to code in C/C++ using an IDE on Linux. So if you have another one to propose, let us know in the comments. And if you would like me to cover IDEs for a different language next, also let us know in the comments section.
--------------------------------------------------------------------------------
via: http://xmodulo.com/good-ide-for-c-cpp-linux.html
作者:[Adrien Brochard][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://xmodulo.com/turn-vim-full-fledged-ide.html
[2]:http://www.codeblocks.org/
[3]:https://eclipse.org/
[4]:http://xmodulo.com/how-to-set-up-c-cpp-development-environment-in-eclipse.html
[5]:https://eclipse.org/cdt/
[6]:http://www.geany.org/
[7]:http://www.monodevelop.com/
[8]:http://anjuta.org/
[9]:http://komodoide.com/komodo-edit/
[10]:https://netbeans.org/
[11]:https://www.kdevelop.org/
[12]:http://codelite.org/

View File

@ -0,0 +1,157 @@
A Step By Step Guide To Installing Xubuntu Linux
================================================================
### Introduction To Installing Xubuntu Linux ###
![Xubuntu](http://f.tqn.com/y/linux/1/S/J/J/1/fulldesktop.png)
This guide shows how to install Xubuntu Linux using step by step instructions.
Why would you want to install Xubuntu? Here are three reasons:
1. You have a computer running Windows XP that is out of support
2. You have [a computer that is running really slowly][1] and you want a lightweight but modern operating system
3. You want to be able to customise your computing experience
The first thing you need to do is download Xubuntu and create a bootable USB drive.
After you have done this, boot into a live version of Xubuntu and click on the Install Xubuntu icon.
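For those who prefer the command line to a GUI tool, one common way to write the ISO is with dd. This is a hedged sketch only: the ISO filename and /dev/sdX are placeholders, and dd overwrites the target device without confirmation, so the destructive commands are shown commented out.

```shell
# Sketch only: writing a Xubuntu ISO to a USB stick with dd.
# /dev/sdX and the ISO name are placeholders -- dd wipes the target device,
# so verify the device with lsblk before uncommenting the dd line.
#   lsblk -d -o NAME,SIZE,MODEL
#   sudo dd if=xubuntu-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
msg="Identify the USB device with lsblk before running dd"
echo "$msg"
```

Double-checking the device name is the whole game here; writing to the wrong disk destroys its contents.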
### Choose Your Installation Language ###
![Choose Language](http://f.tqn.com/y/linux/1/S/K/J/1/xubuntuinstall1.png)
The first step is to choose your language.
Click on the language in the left pane and then click "Continue".
### Choose Wireless Connection ###
![Set Up Your Wireless Connection](http://f.tqn.com/y/linux/1/S/L/J/1/xubuntuinstall2.png)
The second step requires you to choose your internet connection. This is not a required step and there are reasons why you might choose not to set up your internet connection at this stage.
If you have a [poor internet connection][3] it is a good idea not to choose a wireless network because the installer will attempt to download updates as part of the installation. Your installation will therefore take a long time to complete.
If you have a really [good internet connection][4] choose your wireless network and enter the security key.
### Be Prepared ###
![Preparing To Install Xubuntu](http://f.tqn.com/y/linux/1/S/M/J/1/xubuntuinstall3.png)
You will now see a checklist which shows how well prepared you are for installing Xubuntu:
- Do you have at least 6.2 gigabytes of disk space?
- Are you connected to the internet?
- Are you connected to a power source?
The only one that is a necessity is the disk space.
As mentioned in the previous step you can install Xubuntu without being connected to the internet. You can install updates once the installation is complete.
You only need to be connected to a power source if you are likely to run out of battery power during the installation.
Note that if you are connected to the internet there is a checkbox to turn off the option to download updates while installing.
There is also a checkbox that lets you install third party software to enable you to [play MP3s][5] and watch [Flash videos][6]. This is a step that can be completed post installation as well.
### Choose Your Installation Type ###
![](http://f.tqn.com/y/linux/1/S/N/J/1/xubuntuinstall4.png)
The next step is to choose the installation type. The options available will depend on what is already installed on the computer.
In my case I was installing Xubuntu on a netbook over the top of [Ubuntu MATE][7] and so I had options to reinstall Ubuntu, erase and reinstall, install Xubuntu alongside Ubuntu or something else.
If you have Windows on your computer you will have options to install alongside, replace Windows with Xubuntu or something else.
This guide shows how to install Xubuntu on a computer and not how to dual boot. That is a completely different guide altogether.
Choose the option to replace your operating system with Xubuntu and click "Continue".
Note: This will cause your disk to be wiped, so you should back up all of your data before continuing
### Choose The Disk To Install To ###
![](http://f.tqn.com/y/linux/1/S/O/J/1/xubuntuinstall5.png)
Select the drive you wish to install Xubuntu to.
Click "Install Now".
A warning will appear telling you that the drive will be wiped and you will be shown a list of partitions that will be created.
Note: This is the very last chance to change your mind. If you click "Continue", the disk will be wiped and Xubuntu will be installed
Click "Continue" to install Xubuntu
### Choose Your Location ###
![](http://f.tqn.com/y/linux/1/S/P/J/1/xubuntuinstall7.png)
You are now required to choose your location by clicking on the map. This sets your timezone so that your clock is set to the right time.
After you have selected the correct location click "Continue".
### Choose Your Keyboard Layout ###
![](http://f.tqn.com/y/linux/1/S/Q/J/1/xubuntuinstall8.png)
Choose your keyboard layout.
To do this select the language of your keyboard in the left hand pane and then choose the exact layout in the right pane such as dialect, number of keys etc.
You can click the "Detect Keyboard Layout" button to automatically select the best keyboard layout.
To make sure the keyboard layout is set correctly, enter text into the "Type here to test your keyboard" box. Pay close attention to function keys and symbols such as the pound and dollar signs.
Don't worry if you don't get this right during installation. You can set the keyboard layout again within Xubuntu's system settings post installation.
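Besides the settings GUI, the layout can also be changed from a terminal after installation. A hedged sketch; "gb" is just an example layout code, and the commands are shown commented since they change system state:

```shell
# Sketch: setting the keyboard layout from a terminal after installation.
# "gb" is an example layout code; pick yours from the X keyboard database.
#   setxkbmap gb                                  # current session only
#   sudo dpkg-reconfigure keyboard-configuration  # persistent, Debian/Ubuntu family
layout="gb"   # placeholder value for illustration
echo "example layout: $layout"
```

setxkbmap changes take effect immediately but only last for the session; the dpkg-reconfigure route makes them permanent.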
### Add A User ###
![](http://f.tqn.com/y/linux/1/S/R/J/1/xubuntuinstall9.png)
In order to use Xubuntu you will need to have at least one user set up, and so the installer requires you to create a default user.
Enter your name and a name to distinguish the computer into the first two boxes.
Choose a username and [set up a password][8] for the user. You will need to type the password in twice to make sure you have set the password correctly.
If you want Xubuntu to automatically login without having to enter a password check the box marked "Log in automatically". Personally I would never recommend doing this though.
The better option is to check the "Require my password to log in" radio button and if you want to be completely secure check the "Encrypt my home folder" option.
Click "Continue" to move on.
### Wait For Installation To Complete ###
![](http://f.tqn.com/y/linux/1/S/S/J/1/xubuntuinstall10.png)
The files will now be copied to your computer and Xubuntu will be installed.
During this process you will see a short slide show. You can go and [make some coffee][9] at this point and relax.
A message will appear stating that you can continue to try Xubuntu or reboot to start using the newly installed Xubuntu.
When you are ready, reboot and remove the USB drive.
Note: Installing Xubuntu on a UEFI-based machine requires some extra steps not included here. These instructions will be added as a separate guide.
via: http://linux.about.com/od/howtos/ss/A-Step-By-Step-Guide-To-Installing-Xubuntu-Linux.htm#step-heading
作者:[Gary Newell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linux.about.com/bio/Gary-Newell-132058.htm
[1]:http://windows.about.com/od/maintainandfix/a/8ways2speedup.htm
[2]:http://linux.about.com/od/howtos/ss/How-To-Create-A-Persistent-Bootable-Xubuntu-Linux-USB-Drive.htm
[3]:http://netforbeginners.about.com/od/basicinternethardware/f/Why-Internet-Connections-Can-Be-Slow.htm
[4]:http://netforbeginners.about.com/b/2011/09/07/test-your-internet-connection-speed-here.htm
[5]:http://mp3.about.com/od/freebies/tp/freemusictp.htm
[6]:http://animation.about.com/od/2danimationtutorials/ss/2d_fla_lesson1.htm
[7]:http://www.everydaylinuxuser.com/2014/11/ubuntu-mate-vs-lubuntu-on-old-netbook.html
[8]:http://netsecurity.about.com/cs/generalsecurity/a/aa112103b.htm
[9]:http://coffeetea.about.com/od/preparationandrecipes/

View File

@ -1,116 +0,0 @@
The awesomely epic guide to KDE
================================================================================
**Everything you ever wanted to know about KDE (but were too afraid of the number of possible solutions to ask).**
Desktops on Linux. They're a concept completely alien to users of other operating systems because they never have to think about them. Desktops must feel like the abstract idea of time to the Amondawa tribe, a thought that doesn't have any use until you're in a different environment. But here it is: on Linux, you don't have to use the graphical environment lurking beneath your mouse cursor. You can change it for something completely different. If you don't like windows, switch to xmonad. If you like full-screen apps, try Gnome. And if you're after the most powerful and configurable point-and-click desktop, there's KDE.
KDE is wonderful, as they all are in their own way. But in our opinion, KDE in particular suffers from poor default configuration and a rather elusive learning curve. This is doubly frustrating: firstly because it has been quietly growing more brilliant over the last couple of years, and secondly because KDE should be the first choice for users unhappy with their old desktop; in particular, Windows 8 users pining for an interface that makes sense.
But fear not. We're going to use a decade's worth of KDE firefighting to bring you the definitive guide to making KDE look good and function slightly more like how you might expect it to. We're not going to look at KDE's applications, other than perhaps Dolphin; we're instead going to look at the functionality in the desktop environment itself. And while our guinea pig distribution is going to be Mageia, this guide will be equally applicable to any recent KDE desktop running on almost any distribution, so don't let the default Mageia background put you off.
### Fonts ###
A great first target for getting your system looking good is its selection of fonts. It used to be the case that many of us would routinely copy fonts across from a Windows installation, getting the professional Arial and Helvetica font rendering that was missing from Linux at the time. But thanks to generic quality fonts such as DejaVu and Nimbus Sans/Roman, this isn't a problem any more. It's still worth finding a font you prefer, though, as there are now so many great alternatives to choose between.
The best source of free fonts we've found is [www.fontsquirrel.com][1]: it hosts the Roboto, Roboto Slab (Hello!) and Roboto Condensed (Hello!) typefaces used throughout our magazine, and also on the Nexus 5 smartphone (Roboto was developed for use in the Ice Cream Sandwich version of the Android mobile operating system).
TrueType fonts, with their **.ttf** file extensions, are incredibly easy to install from KDE. Download the zip file, right-click and select something from the Extract menu. Now all you need to do is drag a selection across the TrueType fonts you want to install and select Install from the right-click Actions menu. KDE will take care of the rest.
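The same install can be done from a terminal if you prefer: user fonts traditionally live in ~/.fonts (or ~/.local/share/fonts on newer systems). A sketch, with the archive name as a placeholder:

```shell
# Command-line equivalent of KDE's font install, as a sketch.
# "roboto.zip" is a placeholder for whatever you downloaded from Font Squirrel.
mkdir -p "$HOME/.fonts"
#   unzip -q roboto.zip -d "$HOME/.fonts"   # drop the .ttf files here
# refresh the font cache if fontconfig's tool is available:
command -v fc-cache >/dev/null 2>&1 && fc-cache -f "$HOME/.fonts" || true
ls -d "$HOME/.fonts"
```

Once the cache is refreshed, the fonts show up in KDE's font selectors without a logout.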
Another brilliant thing about KDE is that you can change all the fonts at once. Open the System Settings panel and click on Application Appearances, followed by the fonts tab, and click on Adjust All Fonts. Now just select a font from the requester. Most KDE applications will update with your choice immediately, while other applications, such as Firefox, will require a restart. Either way, it's a quick and effective way of experimenting with your desktop's usability and appearance. We'd recommend either Open Sans or the thinner Aller fonts.
![Most distributions don't include decent fonts. But KDE enables you to quickly install new ones and apply them to your desktop.](http://www.linuxvoice.com/wp-content/uploads/2014/09/kde-4.png)
Most distributions don't include decent fonts. But KDE enables you to quickly install new ones and apply them to your desktop.
### Eye candy ###
One of KDE's most secret features is that backgrounds can be dynamic. We don't find much use for this when it comes to the desktop that tells us the weather outside the window, but we do like backgrounds that dynamically grab images from the internet. With most distributions you'll need to install something for this to work. Just search for **plasma-wallpaper** in your distribution's package manager. Our favourite is **plasma-wallpaper-potd**, as this installs easy access to updateable wallpaper images from a variety of sources.
Changing a desktop background is easy with KDE, but it's not intuitive. Mageia, for example, defaults to using Folder view, as this is closer to the traditional desktop where files from the Desktop folder in your home directory are displayed on the background, and the whole desktop works like a file manager. Right-click and select Folder Settings if this is the view you're using. Alternatively, KDE defaults to Desktop, where the background is clear apart from any widgets you add yourself, and files and folders are considered links to the sources. The menu item in this mode is labelled Desktop Settings. The View Configuration panel that changes the background is the same, however, and you need to make your changes in the Wallpaper drop-down menu. We'd recommend Picture Of The Day as the wallpaper, and the Astronomy Picture Of The Day as the image source.
Another default option we think is crazy is the blue glow that surrounds the active window. While every other desktop uses a slightly deeper drop-shadow, KDE's active window looks like it's bathed in radioactive light. The solution to this lies in the default theme, and this can be changed by going to KDE's System Settings control panel and selecting Workspace Appearance. On the first page, which is labelled Window Decorations, you'll find that Oxygen is nearly always selected, and it's this theme that contains the option to change the blue glow. Just click on the Configure Decoration button, flip to the Shadows tab and disable Active Window Glow. Alternatively, if you'd like active windows to have a more pronounced shadow, change the inner and outer colours to black.
You may have seen the option to download wallpapers, for example, from within a KDE window, and you can see this now by clicking on the Get New Decorations button. Themes are subjective, but our favourite combination is currently the Chrome window decoration (it looks identical to Google's default theme for its browser) with the Aya desktop theme. The term desktop theme is a bit of a misnomer, as it doesn't encapsulate every setting as you might expect. Instead it controls how generic desktop elements are rendered. The most visible of these elements is the launch panel, and changing the desktop theme will usually have a dramatic effect on its appearance, but you'll also notice a difference in the widgets system.
The final graphical flourish we'd suggest is to change the icon set that KDE uses. There's nothing wrong with the default Oxygen set, but there are better options. Unfortunately, this is where the Get New Themes download option often fails, probably because icon packages are large and can overwhelm the personal storage space often reserved for projects like these. We'd suggest going to [kde-look.org][2] and browsing its icon collections. Open up the Icons panel from KDE's System Settings, click on the Icons tab followed by Install Theme File and point the requester at the location of the archive you just downloaded. KDE will take it from there and add the icon set to the list in the panel. Try Kotenza for a flat theme, or keep an eye on Nitrux development.
![Remove the blue glow and change a few of the display options, and KDE starts to look pretty good in our opinion.](http://www.linuxvoice.com/wp-content/uploads/2014/09/kde-5.png)
Remove the blue glow and change a few of the display options, and KDE starts to look pretty good in our opinion.
### The panel ###
Our next target is going to be the panel at the bottom of the screen. This has become a little dated, especially if you're using KDE on a large or high-resolution display, so our first suggestion is to re-scale and centre it for your screen. The key to moving screen components in KDE is making sure they're unlocked, and this is accomplished by right-clicking on the plasma cashew in the top-right of the display where the current activity is listed. Only when widgets are unlocked can you re-size the panel, and even add new applications from the launch menu.
With widgets unlocked, click on the cashew on the side of the panel followed by More Settings and select Centre for panel alignment. With this enabled you can re-size the panel using the sliders on either side and the panel itself will always stay in the middle of your screen. Just pretend youre working on indentation on a word processor and youll get the idea. You can also change its height when the sliders are visible by dragging the central height widget, and to the left of this, you can drag the panel to a different edge on your screen. The top edge works quite well, but many of KDEs applets dont work well when stacked vertically on the left or right edges of the display.
There are two different kinds of task manager applets that come with KDE. The default displays each running application as a title bar in the panel, but this takes up quite a bit of space. The alternative task manager displays only the icon of the application, which we think is much more useful. Mageia defaults to the icon version, but most others and KDE itself prefer the title bar applet. To change this, click on the cashew again and hover over the old applet so that the X appears, then click on this X to remove the applet from the panel. Now click on Add Widgets, find the two task managers and drag the icon version on to your panel. You can re-arrange any other applets in this mode by dragging them to the left and right.
By default, the Icon-Only task manager will only display icons for tasks running on the current desktop, which we think is counterintuitive, as it's more convenient to see all of the applications you may have running and to quickly switch to whichever desktop they may be running on with a simple click. To change this behaviour, right-click on the applet and select the Settings menu option and the Behaviour tab in the next window. Deselect Only Show Tasks From The Current Desktop, and perhaps Only Show Tasks From The Current Activity if you use KDE's activities.
Another alteration we like to make is to reconfigure the virtual desktops applet from showing four desktops as a 2×2 grid, which doesn't look too good on a small panel, to 4×1. This can be done by right-clicking on the applet, selecting Pager Settings and then clicking on the Virtual Desktops tab and changing the number of rows to 1.
Finally, there's the launch menu. Mageia has switched this from the new style of application launcher to the old style originally seen in Microsoft Windows. We prefer the former because of its search field, but the two can be switched by right-clicking the icon and selecting the Switch To… menu option.
If you find the hover-select action of this mode annoying, where moving the mouse over one of the categories automatically selects it, you can disable it by right-clicking on the launcher, selecting Launcher Settings from the menu and disabling Switch Tabs On Hover from the General settings page. It's worth reiterating that many of these menu options are only available when widgets are unlocked, so don't despair if you don't see the correct menu entry at first.
> ### Activities ###
>
> No article on KDE would be complete without some discussion of what KDE calls Activities. In many ways, Activities are a solution waiting for a problem. They're meta-virtual desktops that allow you to group desktop configuration and applications together. You may have an activity for photo editing, for example, or one for working and another for the internet. If you've got a touchscreen laptop, activities could be used to switch between an Android-style app launcher (the Search and Launch mode from the Desktop Settings panel) and the regular desktop mode. We use a single activity as a default for screenshots, for instance, while another activity switches everything to the file manager desktop mode. But the truth is that you have to understand what they are before you can find a way of using them.
>
> Some installations of KDE will include the Activity applet in the toolbar. Its red, blue and green dots can be clicked on to open the activity manager, or you can click on the Plasma cashew in the top-right and select Activities. This will open the bar at the bottom of the screen, which lists activities installed and primed on your system. Clicking on any will switch between them, as will pressing the meta key (usually the Windows key) and Tab.
>
> We'd suggest that finding a fast way to switch between activities, such as with a keyboard shortcut or with the Activity Bar widget, is the key to using them more. With the Activity Manager open, clicking on Create Activity lets you either clone the current desktop, add a blank desktop or create a new activity from a list of templates. Clone works well if you want to add some default applications to the desktop for your current setup. To remove an activity, switch to another one and press the Stop and Delete buttons from the Activity Manager.
### Upgraded launch menu ###
You may want to look into replacing the default launch menu entirely. If you open the Add Widgets view, for instance, and search for menus, you'll see several results. Our current favourite is called Application Launcher (QML). It provides the same kind of functionality as the default menu, but has a cleaner interface after you've enlarged the initial window. But if we're being honest, we don't use the launcher that much. We prefer to do most launching through KRunner, which is the seemingly simple requester that appears when you hold Alt+F2.
KRunner is better than the default launcher, because you can type this shortcut from anywhere, regardless of which applications are running or where your mouse is located. When you start to type the name of the application you want to run into KRunner, you'll see the results filtered in real time beneath the entry field; press Enter to launch the top choice.
KRunner is capable of so much more. You can type in calculations like **=sin(90)**, for example, and see the result in real time. You can search Google with **gg**: or Wikipedia with **wp**: followed by the search terms, and add many other operations through installable modules. To make best use of this awesome KDE feature, make sure you've got the **plasma-addons** package installed, and search for **runner** in your distribution's package manager. When you next launch KRunner and click on the tool icon to the left of the search bar, you'll see a wide variety of plugins that can do all kinds of things with the text you type in. In classic KDE style, many don't include instructions on how to use them, so here's our breakdown of the most useful things you can do with KRunner:
![](http://www.linuxvoice.com/wp-content/uploads/2014/09/kde-3.png)
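As a concrete starting point, here is a hedged sketch of how you might find and install the runner add-ons from the command line. The article names the package **plasma-addons**, but exact package names vary between distributions (and the Debian/Ubuntu and Mageia names below are assumptions), so search first and treat these commands as examples rather than definitive instructions:

```shell
# Search your distribution's repositories for the KRunner add-ons.
# The article calls the package "plasma-addons"; your distro may
# split or rename it, so query before installing:
apt-cache search plasma-addons   # Debian/Ubuntu-based systems
urpmq --fuzzy plasma             # Mageia, the distro used in the article

# Then install whichever package the search turned up, e.g.:
sudo apt-get install plasma-addons
```

After installation, restart KRunner (or log out and back in) so the new runners appear behind the tool icon.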
### File management ###
File management may not be the most exciting subject in Linux, but it is one we all seem to spend a lot of time doing, whether that's moving a download into a better folder, or copying photos from a camera. The old file manager, Konqueror, was one of the best reasons for using KDE in the first place, and while Konqueror has been superseded by Dolphin in KDE 4.x, it's still knocking around, even if it is labelled a web browser.
If you open Konqueror and enter the URL as **file:/**, it turns back into that file manager of old, with many of its best features intact. You can click on the lower status bar, for example, and split the view vertically or horizontally, into other views. You can fill the view with proportionally sized blocks by selecting Preview File Size View from the right-click menu, and preview many other file types without ever leaving Konqueror.
Mageia uses a double-click for most options, whereas we prefer a single click. This can be changed from the System-Settings panel by opening Input Devices, clicking on Mouse and enabling Single-click To Open Files And Folders. If you've become used to Apple's reverse scroll, you'll also find an option here to reverse the scroll direction on Linux.
Konqueror is a great application, but it hasn't been a focus of KDE development for a considerable period of time. Dolphin has replaced it, and while this is a much simplified file manager, it does inherit some of Konqueror's best features. You can still split the view, for instance, albeit only once, and only horizontally, from the toolbar. You can also view lots of metadata. Select the Details View and right-click on the column headings for the files, and you can add columns that list the word counts in text files, or an image's size and orientation, or the artist, title and duration of an audio file, all from within the contents of the data. This is KDE's semantic desktop in action, and it's been growing in functionality for the last couple of years. Apple's OS X, for example, has only just started pushing its ability to tag files and applications; we've been able to do this from KDE for a long time. We don't know any other desktop that comes close to providing that level of control.
### Window management ###
KDE has a comprehensive set of windowing functions as well as graphical effects. They're all part of the window manager, KWin, rather than the desktop, which is what we've been dealing with so far. It's the window manager's job to handle the positioning, moving and rendering of your windows, which is why they can be replaced without switching the whole desktop. You might want to try KWin on the RazorQt desktop, for example, to get the best of both the minimal environment RazorQt offers and the power of KDE's window manager.
The easiest way to get to KWin's configuration settings is to right-click on the title bar of any window (this is usually the most visible element of any window manager), and select Window Manager Settings from the More Actions menu.
The Task Switcher is the tool that appears when you press Alt+Tab, and continually pressing those two keys will switch between all running applications on the current desktop. You can also use cursor keys to move left and right through the list. These settings are mostly sensibly configured, but you may want to include All Other Desktops in the Filter Windows By section, as that will allow you to quickly switch to applications running on other desktops. We also like the Cover Switch visualisation rather than the Thumbnails view, and you can even configure the perceived distance of the windows by clicking on the toolbar icon.
The next page on the window manager control module handles what happens at the edges of your screen. At the very least, we prefer to enable Switch Desktop On Edge by selecting Only When Moving Windows from the drop-down list. This means that when you drag a window to one edge, the virtual desktop will switch beneath, effectively dragging the window on to a new virtual desktop.
The great thing about enabling this only for dragged windows is that it doesn't interfere with KDE's fantastic window snapping feature. When you drag a window close to the left or right edge, for instance, KDE displays a ghosted window showing where your window will snap to if you release the mouse. This is a great way of turning KDE into a tiling window manager, where you can easily have two windows split down the middle of the screen area. Moving a window into any of the corners will also give you the ability to neatly arrange your windows to occupy a quarter of the screen, which is ideal for large displays.
We also enable a mode similar to Mission Control on OS X when the cursor is in the region of the top-left corner of the screen. On the screen edge layout, click on the dot in the top-left of the screen (or any other point you'd prefer) and select Desktop Grid from the drop-down menu that appears. Now when you move to the top-left of your display, you'll get an overview of all your virtual desktops, any of which can be chosen with a click.
Two pages down in the configuration module, there's a page called Focus. This is an old idea where you can change whether a window becomes active when you click on it, or when you roll your mouse cursor over it. KDE adds another twist to this by providing a slider that progresses from click to a strict hover policy, where the window under the cursor always becomes active. We prefer to use one of the middle options, Focus Follows Mouse, as this chooses the most obvious window to activate for us without making too many mistakes, and it means we seldom click to focus. We also reduce the focus delay to 200ms, but this will depend on how you feel about the feature after using it for a while.
KDE has so many features, many of which only come to light when you start to use the desktop. It really is a case of developers often adding things and then telling no one. But we feel KDE is worth the effort, and, unlike some other desktops, it is unlikely to change too much in the transition from 4.x to 5. That means the time you spend learning how to use KDE now is an investment. Dive in!
![KDE visual effects (click for larger)](http://www.linuxvoice.com/wp-content/uploads/2014/09/kde-1.png)
KDE visual effects (click for larger)
--------------------------------------------------------------------------------
via: http://www.linuxvoice.com/desktops/
Author: [Ben Everard][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.linuxvoice.com/author/ben_everard/
[1]:http://www.fontsquirrel.com/
[2]:http://kde-look.org/
Interview: Thomas Voß of Mir
================================================================================
**Mir was big during the space race and it's a big part of Canonical's unification strategy. We talk to one of its chief architects at mission control.**
Not since the days of 2004, when X.org split from XFree86, have we seen such exciting developments in the normally prosaic realms of display servers. These are the bits that run behind your desktop, making sure Gnome, KDE, Xfce and the rest can talk to your graphics hardware, your screen and even your keyboard and mouse. They have a profound effect on your system's performance and capabilities. And where we once had one, we now have two more, Wayland and Mir, and both are competing to win your affections in the battle for an X replacement.
We spoke to Wayland's Daniel Stone in issue 6 of Linux Voice, so we thought it was only fair to give equal coverage to Mir, Canonical's own in-house X replacement, and a project that has so far courted controversy with some of its decisions. Which is why we headed to Frankfurt and asked its Technical Architect, Thomas Voß, for some background context…
![](http://www.linuxvoice.com/wp-content/uploads/2014/10/voss_1.jpg)
**Linux Voice: Let's go right back to the beginning, and look at what X was originally designed for. X solved the problems that were present 30 years ago, where people had entirely different needs, right?**
**Thomas Voß**: It was mainframes. It was very expensive mainframe computers with very cheap terminals, trying to keep the price as low as possible. And one of the first and foremost goals was: “Hey, I want to be able to distribute my UI across the network, ideally compressed and using as little data as possible”. So a lot of the decisions in X were motivated by that.
A lot of the graphics languages that X supports even today have been motivated by that decision. The X developers started off in a 2D world; everything was a 2D graphics language, the X way of drawing rectangles. And it's present today. So X is not necessarily bad in that respect; it still solves a lot of use cases, but it's grown over time.
One of the reasons is that X is a protocol, in essence. So a lot of things got added to the protocol. The problem with adding things to a protocol is that they tend to stick. To use a 2D graphics language as an example, XVideo is something that no-one really likes today. It's difficult to support and the GPU vendors actually cry out in pain when you start talking about XVideo. It's somewhat bloated, and it's just old. It's an old proven technology and I'm all for that. I actually like X for a lot of things, and it was a good source of inspiration. But then you look at your current use cases and the current setup we are in, where convergence is one of the buzzwords (massively overrated, obviously), and at the heart of convergence lies the fact that you want to scale across different form factors.
**LV: And convergence is big for Canonical, isn't it?**
**Thomas**: It's big, I think, for everyone, especially over time. But convergence is a use case that was always of interest to us. So we always had this idea that we want one codebase. We don't want a situation like Apple has with OS X and iOS, which are two different codebases. We basically said “Look, whatever we want to do, we want to do it from one codebase, because it's more efficient.” We don't want to end up in the situation where we have to be maintaining two, three or four separate codebases.
That's where we were coming from when we were looking at X, and it was just too bloated. And we looked at a lot of alternatives. We started looking at how Mac OS X was doing things. We obviously didn't have access to the source code, but if you see the transition from OS 9 to OS X, it was as if they entirely switched to one graphics language. It was pre-PostScript at that time. But they chose one graphics language, and that's it. From that point on, when you choose a graphics language, things suddenly become more simple to do. Today's graphics language is EGL ES, so there was inspiration for us to say we converged on GL and EGL. From our perspective, that's the least common denominator.
> We basically said: whatever we want to do, we want to do it from one codebase, because it's more efficient.
Obviously there are disadvantages to having only one graphics language, but the benefits outweigh the disadvantages. And I think that's a common theme in the industry. Android made the same decision to go that way. Even Wayland to a certain degree has been doing that. They have to support EGL and GL, simply because it's very convenient for app developers and toolkit developers: an open graphics language. That was the part that inspired us, and we wanted to have this one graphics language and support it well. And that takes a lot of craft.
So, once you can say: no more weird 2D API, no more weird phong API, and everything is mapped out to GL, you're way better off. And you can distill down the scope of the overall project to something more manageable. So it went from being impossible to possible. And then there was me, being very opinionated. I don't believe in extensibility from the beginning; traditionally in Linux everything is super extensible, which has got benefits for a certain audience.
If you think about the audience of the display server, it's one of the few places in the system where you've got three audiences. So you've got the users, who don't care, or shouldn't care, about the display server.
**LV: It's transparent to them.**
**Thomas**: Yes, it's pixels, right? That's all they care about. It should be smooth. It should be super nice to use. But the display server is not their main concern. It obviously feeds into a user experience, quite significantly, but there are a lot of other parts in the system that are important as well.
Then you've got developers who care about the display server in terms of the API. Obviously we said we want to satisfy this audience, and we want to provide a super-fast experience for users. It should be rock solid and stable. People have been making fun of us and saying “yeah, every project wants to be rock solid and stable”. Cool; so many fail in doing that, so let's get that down and just write out what we really want to achieve.
And then you've got developers, and the moment you expose an API to them, or a protocol, you sign a contract with them, essentially. So they develop to your API (well, many app developers won't directly, because they'll be using toolkits) but at some point you've got developers who sign up to your API.
![](http://www.linuxvoice.com/wp-content/uploads/2014/10/voss_3.jpg)
**LV: The developers writing the toolkits, then?**
**Thomas**: We do a lot of work in that arena, but in general it's a contract that we have with normal app developers. And we said: look, we don't want the API or contract to be super extensible and trying to satisfy every need out there. We want to understand what people really want to do, and we want to commit to one API and contract. Not five different variants of the contract, but we want to say: look, this is what we support and we, as Canonical and as the Mir maintainers, will sign up to.
So I think that's a very good thing. You can buy into specific shells sitting on top of Mir, but you can always assume a certain base level of functionality that we will always provide in terms of window management, in terms of rendering capabilities, and so on and so forth. And funnily enough, that also helps with convergence. Because once you start thinking about the API as very important, you really start thinking about convergence. And what happens if we think about form factor and we transfer from a phone to a tablet to a desktop to a fridge?
**LV: And whatever might come!**
**Thomas**: Right, right. How do we account for future developments? And we said we don't feel comfortable making Mir super extensible, because it will just grow. Either it will just grow and grow, or you will end up with an organisation that just maintains your protocol and protocol extensions.
**LV: So that's looking at Mir in relation to X. The obvious question is comparing Mir to Wayland, so what is it that Mir does that Wayland doesn't?**
**Thomas**: This might sound picky, but we have to distinguish what Wayland really is. Wayland is a protocol specification, which is interesting because the value proposition is somewhat difficult. You've got a protocol and you've got a reference implementation. Specifically, when we started, Weston was still a test bed and everything being developed ended up in there.
No one was buying into that; no one was saying, “Look, we're moving this to production-level quality with a bona fide protocol layer that is frozen and stable for a specific version that caters to application authors”. If you look at the Ubuntu repository today, or in Debian, there's Wayland-cursor-whatever, so they have extensions already. So that's a bit different from our approach to Mir, from my perspective at least.
There was this protocol that the Wayland developers finished, and back then, before we did Mir and I looked into all of this, I wrote a Wayland compositor in Go, just to get to know things.
**LV: As you do!**
**Thomas**: And I said: you know, I don't think a protocol is a good way of approaching this, because versioning a protocol in a packaging scenario is super difficult. But versioning a C API, or any sort of API that has a binary stability contract, is way easier and we are way more experienced at that. So, in that respect, we are different in that we are saying the protocol is an implementation detail, at least up to a certain point.
I'm pretty sure for version 1.0, which we will call a golden release, we will open up the protocol for communication purposes. Under the covers it's Google protocol buffers and sockets. So we'll say: this is the API, work against that, and we're committed to it.
That's one thing, and then we said: OK, there's Weston, but we cannot use Weston because it's not working on Android, the driver model is not well defined, and there's so much work that we would have to do to actually implement a Wayland compositor. And then we are in a situation where we would have to cut out a set of functionality from the Wayland protocol and commit to that, no matter what happens, and ultimately that would be a fork, over time, right?
**LV: It's a difficult concept for many end users, who just want to see something working.**
**Thomas**: Right, and even from a developer's perspective (and let's jump to the political part) I find it somewhat difficult to have a party owning a protocol definition and another party building the reference implementations. Now, Gnome and KDE do two different Wayland compositors. I don't see the benefit in that, to be quite frank, so the value proposition is difficult to my mind.
The driver model in Mir and Wayland is ultimately not that different; it's GL/EGL based. That is kind of the denominator that you will find in both things, which is actually a good thing, because if you look at the contract to application developers and toolkit developers, most of them don't want Mir or Wayland. They talk EGL and GL, and at that point, it's not that much of a problem to support both.
> If there had been a full reference implementation of Wayland, our decision might have been different.
So we did this work for porting the Chromium browser to Mir. We actually took the Chromium Wayland back-end, factored out all the common pieces to EGL and GL ES, and split it up into Wayland and Mir.
And I think from a user's or application developer's perspective, the difference is not there. I think, in retrospect, if there had been something like a full reference implementation of Wayland, where a company had signed up to provide something that is working, and committed to a certain protocol version, our decision might have been different. But there just wasn't. It was five years out there, Wayland, Wayland, Wayland, and there was nothing that we could build upon.
**LV: The main experience we've had is with RebeccaBlackOS, which has Weston and Wayland, because, like you say, there's not that much out there running it.**
**Thomas**: Right. I find Wayland impressive, obviously, but I think Mir will be significantly more relevant than Wayland in two years' time. We just keep on bootstrapping everything, and we've got things working across multiple platforms. Are there issues, and are there open questions to solve? Most likely. We never said we would come up with the perfect solution in version 1. That was not our goal. I don't think software should be built that way. So it just should be iterated.
![](http://www.linuxvoice.com/wp-content/uploads/2014/10/voss_2.jpg)
**LV: When was Mir originally planned for? Which Ubuntu release? Because it has been pushed back a couple of times.**
**Thomas**: Well, we originally planned to have it by 14.04. That was the kind of stretch goal, because it highly depends on the availability of proprietary graphics drivers. So you can't ship an LTS [Long Term Support] release of Ubuntu on a new display server without supporting the hardware of the big guys.
**LV: We thought that would be quite ambitious anyway: a Long Term Support release with a whole new display server!**
**Thomas**: Yes, it was ambitious, but for a reason. If you don't set a stretch goal, and probably fail in reaching it, and then re-evaluate how you move forward, it's difficult to drive a project. So if you just keep it evolving and evolving and evolving, and you don't have a checkpoint at some point…
**LV: That's like a lot of open source projects. Inkscape is still on 0.48 or something, and it works, it's reliable, but they never get to 1.0. Because they always say: “Oh, let's add this feature, and that feature”, and the rest of us are left thinking: just release 1.0 already!**
**Thomas**: And I wouldn't actually tie it to a version number. To me, that is secondary. To me, the question is whether we call this ready for broad public consumption on all of the hardware versions we want to support.
In Canonical, as a company, we have OEM contracts and we are enabling Ubuntu on a host of devices, and laptops and whatever, so we have to deliver on those contracts. And the question is, can we do that? No. Well, you never like a no.
> The question is whether we call this ready for broad public consumption on the hardware we want to support.
Usually, when you encounter a problem and you tackle it, and you start thinking how to solve the problem, that's more beneficial than never hearing a no. That's kind of what we were aiming for. Ubuntu 14.04 was a stretch goal (everyone was aware of that) and we didn't reach it. Fine, cool. Let's go on.
So how do we stage ourselves for the next cycle, until an LTS? Now we have this initiative where we have a daily testable image with Unity 8 and Mir. It's not super usable, because it's just essentially the tethered UI that you are seeing there, but still it's something that we didn't have a year ago. And for me, that's a huge gain.
And ultimately, before we can ship something, before any new display server can ship in an LTS release, you need to have buy-in from the GPU vendors. That's what you need.
--------------------------------------------------------------------------------
via: http://www.linuxvoice.com/interview-thomas-vos-of-mir/
Author: [Mike Saunders][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.linuxvoice.com/author/mike/
FOSS and the Fear Factor
================================================================================
![](http://www.linuxinsider.com/ai/181807/foss-open-source-security.jpg)
> "'Many eyes' is a complete and total myth," said SoylentNews' hairyfeet. "I bet my last dollar that if you looked at every.single.package. that makes up your most popular distros and then looked at how many have actually downloaded the source for those various packages, you'd find that there is less than 30 percent ... that are downloaded by anybody but the guys that actually maintain the things."
In a world that's been dominated for far too long by the [Systemd Inferno][1], Linux fans will have to be forgiven if they seize perhaps a bit too gleefully upon the scraps of cheerful news that come along on any given day.
Of course, for cheerful news, there's never any better place to look than the [Reglue][2] effort. Run by longtime Linux advocate and all-around-hero-for-kids Ken Starks, as alert readers [may recall][3], Reglue just last week launched a brand-new [fundraising effort][4] on Indiegogo to support its efforts over the coming year.
Since 2005, Reglue has placed more than 1,600 donated and then refurbished computers into the homes of financially disadvantaged kids in Central Texas. Over the next year, it aims to place 200 more, as well as paying for the first 90 days of Internet connection for each of them.
"As overused as the term is, the 'Digital Divide' is alive and well in some parts of America," Starks explained. "We will bridge that divide where we can."
How's that for a heaping helping of hope and inspiration?
### Windows as Attack Vector ###
![](http://www.linuxinsider.com/images/article_images/linuxgirl_bg_pinkswirl_150x245.jpg)
Offering discouraged FOSS fans a bit of well-earned validation, meanwhile -- and perhaps even a bit of levity -- is the news that Russian hackers apparently have begun using Windows as a weapon against the rest of the world.
"Russian hackers use Windows against NATO" is the [headline][5] over at Fortune, making it plain for all the world to see that Windows isn't the bastion of security some might say it is.
The sarcasm is [knee-deep][6] in the comments section on Google+ over that one.
### 'Hackers Shake Confidence' ###
Of course, malicious hacking is no laughing matter, and the FOSS world has gotten a bitter taste of the effects for itself in recent months with the Heartbleed and Shellshock flaws, to name just two.
Has it been enough to scare Linux aficionados away?
That essentially is [the suggestion][7] over at Bloomberg, whose story, entitled "Hackers Shake Confidence in 1980s Free Software Idealism," has gotten more than a few FOSS fans' knickers in a twist.
### 'No Software Is Perfect' ###
"None of this has shaken my confidence in the slightest," asserted [Linux Rants][8] blogger Mike Stone down at the blogosphere's Broken Windows Lounge, for instance.
"I remember a time when you couldn't put a Windows machine on the network without firewall software or it would be infected with viruses/malware in seconds," he explained. "I don't recall the articles claiming that confidence had been shaken in Microsoft.
"The fact of the matter is that no software is perfect, not even FOSS, but it comes closer than the alternatives," Stone opined.
### 'My Faith Is Just Fine' ###
"It is hard to even begin to get into where the Bloomberg article fails," began consultant and [Slashdot][9] blogger Gerhard Mack.
"For one, decompilers have existed for ages and allow black hats to find flaws in proprietary software, so the black-hats can find problems but cannot admit they found them let alone fix them," Mack explained. "Secondly, it has been a long time since most open source was volunteer-written, and most contributions need to be paid.
"The author goes on to rip into people who use open source for not contributing monetarily, when most of the listed companies are already Linux Foundation members, so they are already contributing," he added.
In short, "my faith in open source is just fine, and no clickbait Bloomberg article will change that," Mack concluded.
### 'The Author Is Wrong' ###
"Clickbait" is also the term Google+ blogger Alessandro Ebersol chose to describe the Bloomberg account.
"I could not see the point the author was trying to make, except sensationalism and views," he told Linux Girl.
"The author is wrong," Ebersol charged. "He should educate himself on the topic. The flaws are results of lack of funding, and too many corporations taking advantage of free software and giving nothing back."
Moreover, "I still believe that a piece of code that can be studied and checked by many is far more secure than a piece made by a few," Google+ blogger Gonzalo Velasco C. chimed in.
"All the rumors that FLOSS is as weak as proprietary software are only [FUD][10] -- period," he said. "It is even more sad when it comes from private companies that drink in the FLOSS fountain."
### 'Source Helps Ensure Security' ###
Chris Travers, a [blogger][11] who works on the [LedgerSMB][12] project, had a similar view.
"I do think that having the source available helps ensure security for well-designed, well-maintained software," he began.
"Those of us who do development on such software must necessarily approach the security process under a different set of constraints than proprietary vendors do," Travers explained.
"Since our code changes are public, when we release a security fix this also provides effectively full disclosure," he said, "ensuring that the concerns for unpatched systems are higher than they would be for proprietary solutions absent full disclosure."
At the same time, "this disclosure cuts both ways, as software security vendors can use this to provide further testing and uncover more problems," Travers pointed out. "In the long run, this leads to more secure software, but in the short run it has security costs for users."
Bottom line: "If there is good communication with the community, if there is good software maintenance and if there is good design," he said, "then the software will be secure."
### 'Source Code Isn't Magic Fairy Dust' ###
SoylentNews blogger hairyfeet had a very different view.
"'Many eyes' is a complete and total myth," hairyfeet charged. "I bet my last dollar that if you looked at every.single.package. that makes up your most popular distros and then looked at how many have actually downloaded the source for those various packages, you'd find that there is less than 30 percent of the packages that are downloaded by anybody but the guys that actually maintain the things.
"How many people have done a code audit on Firefox? [LibreOffice][13]? Gimp? I bet you won't find a single one, because everybody ASSUMES that somebody else did it," he added.
"At the end of the day, Wall Street is finding out what guys like me have been saying for years: Source code isn't magic fairy dust that makes the bugs go away," hairyfeet observed.
### 'No One Actually Looked at It' ###
"The problem with [SSL][14] was that everyone assumed the code was good, but almost no one had actually looked at, so you never had the 'many eyeballs' making the bugs shallow," Google+ blogger Kevin O'Brien conceded.
Still, "I think the methodology and the idealism are separable," he suggested. "Open source is a way of writing software in which the value created for everyone is much greater than the value captured by any one entity, which is why it is so powerful.
"The idea that corporate contributions somehow sully the purity is a stupid idea," added O'Brien. "Corporate involvement is not inherently bad; what is bad is trying to lock other people out of the value created. Many companies handle this well, such as Red Hat."
### 'The Right Way to Do IT' ###
Last but not least, "my confidence in FLOSS is unshaken," blogger [Robert Pogson][15] declared.
"After all, I need software to run my computers, and as bad as some flaws are in FLOSS, that vulnerability pales into insignificance compared to the flaws in that other OS -- you know, the one that thinks images are executable and has so much complexity that no one, not even M$ with its $billions, can fix."
FOSS is "the right way to do IT," Pogson added. "The world can and does make its own software, and the world has more and better programmers than the big corporations.
"Those big corporations use FLOSS and should support FLOSS," he maintained, offering "thanks to the corporations who hire FLOSS programmers; sponsor websites, mirrors and projects; and who give back code -- the fuel in the FLOSS economy."
--------------------------------------------------------------------------------
via: http://www.linuxinsider.com/story/FOSS-and-the-Fear-Factor-81221.html
作者Katherine Noyes
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.linuxinsider.com/perl/story/80980.html
[2]:http://www.reglue.org/
[3]:http://www.linuxinsider.com/story/78422.html
[4]:https://www.indiegogo.com/projects/deleting-the-digital-divide-one-computer-at-a-time
[5]:http://fortune.com/video/2014/10/14/russian-hackers-use-windows-against-nato/
[6]:https://plus.google.com/+KatherineNoyes/posts/DQvRMekLHV4
[7]:http://www.bloomberg.com/news/2014-10-14/hackers-shake-confidence-in-1980s-free-software-idealism.html
[8]:http://linuxrants.com/
[9]:http://slashdot.org/
[10]:http://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt
[11]:http://ledgersmbdev.blogspot.com/
[12]:http://www.ledgersmb.org/
[13]:http://www.libreoffice.org/
[14]:http://en.wikipedia.org/wiki/Transport_Layer_Security
[15]:http://mrpogson.com/


@ -1,115 +0,0 @@
Calculate Linux Provides Consistency by Design
================================================================================
![](http://www.linuxinsider.com/ai/120560/linux-desktop-kde-xfce.jpg)
> Calculate Linux has a rather interesting strategy for desktop environments. It is characterized by two flavors with the same look and feel. That does not mean that the inherent functionality of the KDE and Xfce desktops is compromised. Rather, the Calculate Linux developers did what you seldom see within a Linux distribution with more than one desktop option: They unified the design.
Calculate Linux 14 is a distribution designed with home and SMB users in mind. It is optimized for rapid deployment in corporate environments as well.
Calculate gives users something no other Linux distro makes possible. The Xfce desktop session is customized to imitate the look of the [KDE][1] desktop environment.
This design approach goes a long way toward making Calculate Linux a one-distro-fits-all solution. Individual users or entire departments within an organization can fine-tune user preferences and features without changing the common appearance or performance.
Calculate Linux 14, developed by Alexander Tratsevskiy in Russia, is not your typical cookie-cutter type of Linux OS. This latest version, released Sept. 5, is a rolling-release distribution that provides a number of preconfigured features.
It uses a source-based approach to package management to optimize the software. This in part comes from its roots as a Gentoo Linux-based distribution.
Calculate Linux comes in three more versions to expand its reach: Calculate Directory Server is for servers, Calculate Linux Scratch is for building customized systems, and Calculate Media Center is a distro for running a home multimedia center.
### What's New ###
This latest version of Calculate ships with a few new features, including notification of software updates and an improved administration panel.
This release adds an improved graphical user interface for Calculate Utilities. It also provides various kernel and other software package updates.
It comes in 32-bit or 64-bit builds that include two desktop options for personal/business use: KDE and Xfce. A boot menu lets users choose to run the Calculate live desktop environment from RAM for added performance or with a command line interface only.
Why two choices? Users get better performance on low-end computers using the lightweight desktop environment that comes with Xfce. This is the second release containing this option. It solves the problem of not being able to run the KDE edition of Calculate Linux on underpowered hardware.
### Designing Details ###
Calculate Linux has a rather interesting strategy for desktop environments. It is characterized by two flavors with one common design.
That does not mean that the inherent functionality of the KDE and Xfce desktops is compromised. Rather, the Calculate Linux developers did what you seldom see within a Linux distribution with more than one desktop option: They unified the design.
Typically, KDE by design is much more animation based. By design, Xfce has fewer visual frills in keeping with its lightweight philosophy. Most KDE distributions place the panel bar at the bottom and do not have a Docky-style launcher anywhere in the desktop decor.
In Calculate Linux, a classic style application menu, task switcher and system tray are configured at the top of the screen in both desktop versions. At the bottom of the display, there is a hidden quick-launch bar that pops up when the mouse pointer strays toward the lower edge of the screen.
> ![](http://www.linuxinsider.com/article_images/2014/81242_990x557.jpg)
> Calculate Linux has a unified design that makes KDE and Xfce desktops look nearly the same. The panel and menu display are very nontraditional as seen in this KDE desktop view.
This duality ties the two desktops together. Both the KDE and the Xfce versions have right-click access to some of the most commonly used system commands and features.
### Look and Feel ###
Whether you run the KDE or the Xfce desktops, the panel design is the same. The menu falls from the top left corner as a single box with the same categories in both versions.
> ![](http://www.linuxinsider.com/article_images/2014/81242_990x540.jpg)
> The Xfce desktop in Calculate Linux is almost totally indistinguishable from its KDE counterpart.
Hover the mouse over the right edge of the menu box to see the category contents slide out to the right of the box. Only then do you see a varying range of applications to launch with a click.
The same operation governs the popup launcher bar hidden at the bottom of the screen. Some of the offerings are desktop-specific, however.
> ![](http://www.linuxinsider.com/article_images/2014/81242_990x556.jpg)
> Calculate Linux embeds a popup launch dock in both the KDE and Xfce desktop editions.
For example, the bottom dock in both desktop versions launches the Chromium Web browser, [LibreOffice][3], GIMP, SMPlayer and Leafpad (simple text editor). The KDE dock launches kcalc, digikam, Amarok and k3b disk burner. Xfce launches Galculator, Clementine and xfburn.
### Designed to Differ ###
One difference is the KDE version has an added button where expected along the upper right edge of the screen. It also has a Widgets button near the far right end of the top panel.
These provide access to the activities layout where you choose the style of desktop typical of KDE. These are: Grid, Newspaper, Folder, Grouping and Search & Launch.
A second style difference between the two desktop versions is the inclusion of widgets with the KDE version. These desktop widgets personalize the desktop items.
### Feature Folly ###
The Calculate Desktop edition, both KDE and Xfce, creates a user profile when it loads. This profile is fully integrated with Calculate Directory Server. Roaming profiles also are supported. Auto-tuning applications at logon are based on the server settings.
The approach greatly simplifies the setup and maintenance roles for users with no IT department to support the computer system. The desktop version functions simply as a standalone operating system. No server is needed. However, enterprise and SMB environments can pair the desktop version with the server version for seamless integration.
Either way, the common set of toolbars, desktop applications and basic settings are easier to configure for desktop and server use, regardless of the desktop environment choice.
You can install Calculate Linux on a USB thumb drive or a USB hard drive with a choice of these volume formats: ext4, ext3, ext2, reiserfs, btrfs, xfs, jfs, nilfs2 or fat32.
### Gentler Gentoo ###
The Gentoo distro in its own right installs applications compiled from source. It uses a software packaging system called "Portage" to semi-automate this process, driven by the emerge command-line tool.
Calculate's developers soften this Gentoo-based software compiling process somewhat, but it is still more complex than using a community-managed automated software binary repository.
Calculate Linux is fully compatible with Gentoo repositories and supports binary repository updates. System files are updated via Portage throughout the distribution life cycle.
### Bottom Line ###
Calculate Linux is a well-tooled Linux distro that makes consistency in design job number one. It is highly configurable and is optimized for nearly every computing circumstance.
It runs a full-blown KDE desktop on upper-end hardware, and provides the same look and feel with Xfce on low-end gear. Calculate Linux runs from a hard drive installation or by loading directly into RAM.
It could offer home and SMB users an effective distro alternative. However, typical for Gentoo-based distros, Calculate Linux's weak point is the lack of a full-fledged binary software repository system.
### Want to Suggest a Review? ###
Is there a Linux software application or distro you'd like to suggest for review? Something you love or would like to get to know?
Please [email your ideas to me][4], and I'll consider them for a future Linux Picks and Pans column.
And use the Talkback feature below to add your comments!
--------------------------------------------------------------------------------
via: http://www.linuxinsider.com/story/Calculate-Linux-Provides-Consistency-by-Design-81242.html
作者Jack M. Germain
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.calculate-linux.org/
[2]:http://www.kde.org/
[3]:http://www.libreoffice.org/
[4]:jack.germain@newsroom.ectnews.com


@ -1,136 +0,0 @@
Debian's Civil War: Has It Really Come to This?
================================================================================
![](http://www.linuxinsider.com/ai/254566/debian.jpg)
**"The 'new' Debian would be rather weak," said blogger Robert Pogson. "Would it have the hundreds of mirrors that make Debian wonderful? I doubt that. Debian is a great distro. Disemboweling it out of spite is just wrong. Why can't we come to some amicable agreement? Why do we have to race at full speed to the edge of a cliff when we don't know if we can stop?"**
Well it seems no matter how loudly we here in the Linux blogosphere try to hum a happy tune or discuss [cheerful FOSS matters][1], we just can't seem to drown out the shouts and screams coming from those standing too close to the Systemd Inferno.
Stand back, people! It's dangerous!
The embers, of course, [had been hot][2] for some time already before the blaze [flared sky-high][3] a few months ago. Now, the conflagration appears to be completely out of control.
Need proof? Two words: [Debian fork][4].
That's right: Debian, the granddaddy of Linux distributions and embodiment of everything so many FOSS fans hold dear, may be forked, and it's apparently all because of Systemd.
A more upsetting development would be hard to conceive.
### 'Roll Up Your Sleeves' ###
![](http://www.linuxinsider.com/images/article_images/linuxgirl_bg_pinkswirl_150x245.jpg)
"Debian today is haunted by the tendency to betray its own mandate, a base principle of the Free Software movement: put the user's rights first," explained the anonymous developers behind the Debian Fork site. "What is happening now instead is that through a so-called 'do-ocracy,' developers and package maintainers are imposing their choices on users."
Their conclusion: "Roll up your sleeves, we may need to fork Debian."
Quick as a flash, [word traveled][5] to [Slashdot][6], [LXer][7] and beyond.
Down at the Linux blogosphere's Punchy Penguin Saloon, a profound hush fell as soon as the news arrived. Fortunately, it lasted only a fraction of a second.
### 'I Say Go for It' ###
"Freedom of choice implies the freedom to be a complete idiot, and clearly Free Software has its share," Google+ blogger Kevin O'Brien said.
"I have been skeptical about Systemd, but I have trouble believing there are enough people this crazy to actually pull off a fork of Debian," O'Brien added. "I predict a year from now we won't remember what this was all about."
On the other hand: "I say go for it if you're that passionate about it," offered [Linux Rants][8] blogger Mike Stone. "This is Linux we're talking about, after all, and Linux is open source. Anybody should always feel free to do what they want with Linux, as long as they're willing to share.
"The fact that SysVinit will still be available on standard Debian kind of makes forking it over Systemd seem a little silly, but I'm not going to stand in the way of anybody that wants to fork any FOSS for their own use," Stone added.
Indeed, "Linux's strength is also its Achilles' Heel," Google+ blogger Rodolfo Saenz opined. 'In the Linux world, forking is inevitable. It is part of Linux's evolution."
### 'A Lot of Misinformation' ###
At the same time, "I think if they were likely to actually fork Debian, they would have just gone and done it rather than throw a massive public temper tantrum," consultant and Slashdot blogger Gerhard Mack suggested.
"Secondly, I think there is a lot of misinformation out there about what Systemd does and how it works," Mack added. 'At the beginning of all of this I was very worried about the stability and security of the systems I maintain after reading the nerd rage on Slashdot, The Register, and sites like [Boycott Systemd][9], so I looked into Systemd for myself.
"What I have discovered is that they seem to be confusing Systemd with things that are bundled with Systemd but run separately using a 'least privilege needed for the task' type design," he explained. "There are things I don't like, such as the binary logs, but then I can just configure it to run through syslogd as usual and ignore the binary logs."
Particularly "hilarious," Mack added, is that people "suggest that only desktops need to boot quickly," he said. "I have seen some automated systems that load VMs on demand, and they would be much more effective if they booted faster."
### 'I'm Really Confused' ###
It will be "a sad day if Debian forks over this Systemd thing," longtime Debian user [Robert Pogson][10] told Linux Girl.
"I am one of the haters, I guess," Pogson said. "I see adopting Systemd as something that kept Jessie's bug count high for months. I just don't see the need for it. I've read that some desktop users complain that Systemd is all for server users and I've read that some server users complain that Systemd is all for desktop users. I'm both and I'm really confused."
Meanwhile, "do I need to learn a lot about Systemd to use it?" Pogson wondered. "I'm too old to learn too many new tricks. Does it give me any benefits, or is it just a nuisance?
"I see faster booting as a rather small benefit for a lot of nuisance value like binary logs... what's with that?" he added. "I've learned to use grep on current logs to get what I need. Hiding them is just making GNU/Linux more like that other OS. Yuck!"
### 'Nonfree Software Is the Real Enemy' ###
Debian is an organization of roughly a thousand developers, Pogson pointed out.
"They work hard and make the world a better place," he said. "Forcing them to choose which fork to take is really cruel and unusual punishment for such generous people. If the fork is 50/50, Debian might take years of recruitment to recover. That does no one any good.
"The 'new' Debian would be rather weak," Pogson added. "Would it have the hundreds of mirrors that make Debian wonderful? I doubt that. Debian is a great distro. Disemboweling it out of spite is just wrong. Why can't we come to some amicable agreement? Why do we have to race at full speed to the edge of a cliff when we don't know if we can stop?"
Bottom line: "If this civil war gets any worse, I may switch back to Debian Stable/Wheezy, my 'bomb shelter,' in the hope that I can wait for peace to break out," he concluded. "I don't need the drama. Bill Gates must be laughing at this waste of energy. Nonfree software is the real enemy -- not folks building/using Debian GNU/Linux."
### 'It Is What Happens' ###
It is a sad development, Google+ blogger Gonzalo Velasco C. agreed.
At the same time, "it is what happens in the FLOSS world when you don't listen to your peers and users and listen to others that have their own (commercial) agenda and 'suggest' you use a tool as hungry as Systemd, regardless of its merits and modernism comparing to old sisVinit," he said.
"There are a lot of technical discussions and arguments out there, and Debian must show it is neither deaf nor blind and re-discuss the issue," he added.
### Red Hat's Influence ###
"Do the users wish to be beholden to [Red Hat's][11] corporate roadmap? If the answer is 'no,' then a fork is the only choice left open, as it's pretty plain to see that Debian will go Systemd whether their users like it or not," SoylentNews blogger hairyfeet said.
"It all comes down to cloud computing, and RH intends to foist its version of SVCHOSTS for Linux onto Debian and Ubuntu," he added. "The reason why is obvious: it gives them pretty much every major Linux distro, as they are nearly all built on RH, Debian or Ubuntu."
So, the answer is simple, hairyfeet said: "If you want RH calling the shots, then stay; if not, fork."
### 'Seems Like a Lot of Work' ###
Of course, there's nothing to prevent a fork, Google+ blogger Brett Legree pointed out.
"If someone wants to do it, that's their choice," he noted.
"Seems like a lot of work, though," Legree added. "I mean, I figure that most people wouldn't care either way what init system is being used, and those who do know can probably figure out how to configure Debian (or whatever) to use a different init system. That's been possible up to now, and I'd expect it will continue to be so."
Forks are a lot of work to maintain, agreed Chris Travers, a [blogger][12] who works on the [LedgerSMB][13] project.
"Trust me -- I know from experience, as LedgerSMB began life as a fork of SQL-Ledger," Travers said.
Still, "there are huge differences in philosophy between init scripts and Systemd, and this is an area where there is probably room for a good Unix-like distro to keep the old ways," Travers said. "There are certainly worse things than forks developing. This being said, I wonder if people who really want Unix should instead switch to the BSDs."
### 'Like Killing Mosquitoes With Shotguns' ###
The Debian community was not aware of everything the changes in the init system would bring, Google+ blogger Alessandro Ebersol suggested. "They thought it was a non-issue."
Now that "a large number of Debian sysadmins are not pleased," however, forking would be "an extreme measure," he said, "and a last resort. There are still a lot of things that can be done."
After all, Debian is "the GNU/Linux that runs on anything, in any *nix setup -- remember the Debian BSD flavor, and that Debian BSD will have to be accommodated to work with the new init system," Ebersol pointed out.
"So, I believe all is not lost for Debian, but a fork, right now, is too extreme, like killing mosquitoes with shotguns," he concluded. "There's still time and place to make peace and amendments in the Debian community."
--------------------------------------------------------------------------------
via: http://www.linuxinsider.com/story/81262.html
作者:[Katherine Noyes][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://twitter.com/noyesk
[1]:http://www.reglue.org/
[2]:http://www.linuxinsider.com/story/80472.html
[3]:http://www.linuxinsider.com/story/80980.html
[4]:http://debianfork.org/
[5]:http://linux.slashdot.org/story/14/10/20/1944226/debians-systemd-adoption-inspires-threat-of-fork
[6]:http://slashdot.org/
[7]:http://lxer.com/module/forums/t/35625/
[8]:http://linuxrants.com/
[9]:http://boycottsystemd.org/
[10]:http://mrpogson.com/
[11]:http://www.redhat.com/
[12]:http://ledgersmbdev.blogspot.com/
[13]:http://www.ledgersmb.org/


@ -1,92 +0,0 @@
[translating by KayGuoWhu]
A brief history of Linux malware
================================================================================
A look at some of the worms and viruses and Trojans that have plagued Linux throughout the years.
### Nobody's immune ###
![Image courtesy Shutterstock](http://images.techhive.com/images/article/2014/12/121114-linux-malware-1-100535381-orig.jpg)
Although not as common as malware targeting Windows or even OS X, security threats to Linux have become both more numerous and more severe in recent years. There are a couple of reasons for that: the mobile explosion has meant that Android (which is Linux-based) is among the most attractive targets for malicious hackers, and the use of Linux as a server OS in the data center has also grown. But Linux malware has been around in some form since well before the turn of the century. Have a look.
### Staog (1996) ###
![](http://images.techhive.com/images/article/2014/12/121114-stago-100535400-orig.gif)
The first recognized piece of Linux malware was Staog, a rudimentary virus that tried to attach itself to running executables and gain root access. It didn't spread very well, and it was quickly patched out in any case, but the concept of the Linux virus had been proved.
### Bliss (1997) ###
![](http://images.techhive.com/images/article/2014/12/121114-3new-100535402-orig.gif)
If Staog was the first, however, Bliss was the first to grab the headlines, though it was a similarly mild-mannered infection, trying to grab permissions via compromised executables, and it could be deactivated with a simple shell switch. It even kept a neat little log, [according to online documentation from Ubuntu][1].
### Ramen/Cheese (2001) ###
![](http://images.techhive.com/images/article/2014/12/121114-ramen-100535404-orig.jpg)
Cheese is the malware you actually want to get: certain Linux worms, like Cheese, may actually have been beneficial, patching the vulnerabilities the earlier Ramen worm used to infect computers in the first place. (Ramen was so named because it replaced web server homepages with a goofy image saying that “hackers looooove noodles.”)
### Slapper (2002) ###
![Image courtesy Wikimedia CommonsCC LicenseKevin Collins](http://images.techhive.com/images/article/2014/12/121114-linux-malware-5-100535389-orig.jpg)
The Slapper worm struck in 2002, infecting servers via an SSL bug in Apache. That predates Heartbleed by 12 years, if you're keeping score at home.
### Badbunny (2007) ###
![Image courtesy Shutterstock](http://images.techhive.com/images/article/2014/12/121114-linux-malware-6-100535384-orig.jpg)
Badbunny was an OpenOffice macro worm that carried a sophisticated script payload working on multiple platforms, even though the only effect of a successful infection was to download a raunchy pic of a guy in a bunny suit, er, doing what bunnies are known to do.
### Snakso (2012) ###
![](http://images.techhive.com/images/article/2014/12/121114-linux-malware-7-100535385-orig.jpg)
Image courtesy [TechWorld UK][2]
The Snakso rootkit targeted specific versions of the Linux kernel to directly mess with TCP packets, injecting iFrames into traffic generated by the infected machine and pushing drive-by downloads.
### Hand of Thief (2013) ###
![](http://images.techhive.com/images/article/2014/12/121114-thief-100535405-orig.jpg)
Hand of Thief is a commercial (sold on Russian hacker forums) Linux Trojan creator that made quite a splash when it was introduced last year. RSA researchers, however, discovered soon after that [it wasnt quite as dangerous as initially thought][3].
### Windigo (2014) ###
![](http://images.techhive.com/images/article/2014/12/121114-linux-malware-9-100535390-orig.jpg)
Image courtesy [freezelight][4]
Windigo is a complex, large-scale cybercrime operation that targeted tens of thousands of Linux servers, causing them to produce spam and serve drive-by malware and redirect links. It's still out there, according to ESET security, [so admins should tread carefully][5].
### Shellshock/Mayhem (2014) ###
![Shellshock/Mayhem (2014)](http://images.techhive.com/images/article/2014/12/121114-malware-mayhem-100535406-orig.gif)
Striking at the terminal strikes at the heart of Linux, which is why the recent Mayhem attacks, which targeted the so-called Shellshock vulnerabilities in Linux's Bash command-line interpreter using a specially crafted ELF library, were so noteworthy. Researchers at Yandex said that the network [had snared 1,400 victims as of July][6].
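For context, the widely circulated check for the original Shellshock bug (CVE-2014-6271) is a one-liner: a shell function definition placed in an environment variable, with extra commands appended after the closing brace.

```shell
# The classic Shellshock probe: if bash parses past the function body
# and executes the trailing command, the shell is vulnerable.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
# A patched bash prints only "this is a test"; an unpatched one also
# executes the injected command and prints "vulnerable" first.
```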
### Turla (2014) ###
![Image courtesy CW](http://images.techhive.com/images/article/2014/12/121114-linux-malware-11-100535391-orig.jpg)
A large-scale campaign of cyberespionage emanating from Russia, called Epic Turla by researchers, was found to have a new Linux-focused component earlier this week. It's apparently [based on a backdoor access program from all the way back in 2000 called cd00r][7].
--------------------------------------------------------------------------------
via: http://www.networkworld.com/article/2858742/linux/a-brief-history-of-linux-malware.html
作者:[Jon Gold][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.networkworld.com/author/Jon-Gold/
[1]:https://help.ubuntu.com/community/Linuxvirus
[2]:http://news.techworld.com/security/3412075/linux-users-targeted-by-mystery-drive-by-rootkit/
[3]:http://www.networkworld.com/article/2168938/network-security/dangerous-linux-trojan-could-be-sign-of-things-to-come.html
[4]:https://www.flickr.com/photos/63056612@N00/155554663
[5]:http://www.welivesecurity.com/2014/04/10/windigo-not-windigone-linux-ebury-updated/
[6]:http://www.pcworld.com/article/2825032/linux-botnet-mayhem-spreads-through-shellshock-exploits.html
[7]:http://www.computerworld.com/article/2857129/turla-espionage-operation-infects-linux-systems-with-malware.html


@ -1,3 +1,4 @@
Translating by ZTinoZ
20 Linux Commands Interview Questions & Answers
================================================================================
**Q:1 How to check current run level of a linux server ?**
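The article's answer is elided in this diff; a common way to check (a sketch, not necessarily the article's own answer):

```shell
# who -r reports the current runlevel on SysV-style systems
# (it may print nothing under a pure-systemd setup with no utmp record)
who -r

# Classic alternative where the sysvinit tools are installed:
# runlevel                # prints e.g. "N 3": previous and current level
# systemctl get-default   # systemd's analogous "default target"
```

On modern systemd distributions, `runlevel` is kept only as a compatibility shim.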
via: http://www.linuxtechi.com/20-linux-commands-interview-questions-answers/
[17]:
[18]:
[19]:
[20]:
Docker CTO Solomon Hykes to Devs: Have It Your Way
================================================================================
![](http://www.linuxinsider.com/ai/845971/docker-cloud.jpg)
**"We made a very conscious effort with Docker to insert the technology into an existing toolbox. We did not want to turn the developer's world upside down on the first day. ... We showed them incremental improvements so that over time the developers discovered more things they could do with Docker. So the developers could transition into the new architecture using the new tools at their own pace."**
[Docker][1] in the last two years has moved from an obscure Linux project to one of the most popular open source technologies in cloud computing.
Project developers have witnessed millions of Docker Engine downloads. Hundreds of Docker groups have formed in 40 countries. Many more companies are announcing Docker integration. Even Microsoft will ship Windows 10 with Docker preinstalled.
![](http://www.linuxinsider.com/article_images/2014/81504_330x260.jpg)
Solomon Hykes
Founder and CTO of Docker
"That caught a lot of people by surprise," Docker founder and CTO Solomon Hykes told LinuxInsider.
Docker is an open platform for developers and sysadmins to build, ship and run distributed applications. It combines the Docker Engine, a portable, lightweight runtime and packaging tool, with Docker Hub, a cloud service for sharing applications and automating workflows.
Docker provides a vehicle for developers to quickly assemble their applications from components. It eliminates the friction between development, quality assurance and production environments. Thus, IT can ship applications faster and run them unchanged on laptops, on data center virtual machines, and in any cloud.
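The build-ship-run cycle described above rests on an image recipe. A minimal, purely illustrative Dockerfile (the file names and base image are assumptions, not from the interview):

```dockerfile
# Illustrative only: package an app together with its runtime once,
# then run it unchanged on a laptop, a data center VM, or any cloud host.
FROM debian:stable-slim
COPY app.sh /usr/local/bin/app.sh
RUN chmod +x /usr/local/bin/app.sh
CMD ["app.sh"]
```

Such an image would be built with `docker build -t demo .`, shared via `docker push`, and run anywhere with `docker run demo` (names hypothetical).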
In this exclusive interview, LinuxInsider discusses with Solomon Hykes why Docker is revitalizing Linux and the cloud.
**LinuxInsider: You have said that Docker's success is more the result of being in the right place at the right time for a trend that's much bigger than Docker. Why is that important to users?**
**Solomon Hykes**: There is always an element of being in the right place at the right time. We worked on this concept for a long time. Until recently, the market was not ready for this kind of technology. Then it was, and we were there. Also, we were very deliberate to make the technology flexible and very easy to get started using.
**LI: Is Docker a new cloud technology or merely a new way to do cloud storage?**
**Hykes**: Containers in themselves are just an enabler. The really big story is how it changes the software model enormously. Developers are creating new kinds of applications. They are building applications that do not run on only one machine. There is a need for completely new architecture. At the heart of that is independence from the machine.
The problem for the developer is to create the kind of software that can run independently on any kind of machine. You need to package it up so it can be moved around. You need to cross that line. That is what containers do.
**LI: How analogous is the software technology to traditional cargo shipping in containers?**
**Hykes**: That is a very apt example. It is the same thing for shipping containers. The innovation is not in the box. It is in how the automation handles millions of those boxes moving around. That is what is important.
**LI: How is Docker affecting the way developers build their applications?**
**Hykes**: The biggest way is it helps them structure their applications for a better distributive system. Another distributive application is Gmail. It does not run on just one application. It is distributive. Developers can package the application as a series of services. That is their style of reasoning when they design. It brings the tooling up to the level of design.
**LI: What led you to this different architecture approach?**
**Hykes**: What is interesting about this process is that we did not invent this model. It was there. If you look around, you see this trend where developers are increasingly building distributive applications where the tooling is inadequate. Many people have tried to deal with the existing tooling level. This is a new architecture. When you come up with tools that support this new model, the logical thing to do is tell the developer that the tools are out of date and are inadequate. So throw away the old tools and here are the new tools.
**LI: How much friction did you encounter from developers not wanting to throw away their old tools?**
**Hykes**: That approach sounds perfectly reasonable and logical. But in fact it is very hard to get developers to throw away their tools. And for IT departments the same thing is very true. They have legacy performance to support. So most of these attempts to move into next-generation tools have failed. They ask too much of the developers from day one.
**LI: How did you combat that reaction from developers?**
**Hykes**: We made a very conscious effort with Docker to insert the technology into an existing toolbox. We did not want to turn the developer's world upside down on the first day. Instead, we showed them incremental improvements so that over time the developers discovered more things they could do with Docker. So the developers could transition into the new architecture using the new tools at their own pace. That makes all the difference in the world.
**LI: What reaction are you seeing from this strategy?**
**Hykes**: When I ask people using Docker today how revolutionary it is, some say they are not using it in a revolutionary way. It is just a little improvement in my toolbox. That is the point. Others say that they jumped all in on the first day. Both responses are OK. Everyone can take their time moving toward that new model.
**LI: So is it a case of integrating Docker into existing platforms, or is a complete swap of technology required to get the full benefit?**
**Hykes**: Developers can go either way. There is a lot of demand for Docker native. But there is a whole ecosystem of new tools and companies competing to build brand new platforms entirely built on top of Docker. Over time the world is trending towards Docker native, but there is no rush. We totally support the idea of developers using bits and pieces of Docker in their existing platform forever. We encourage that.
**LI: What about Docker's shared Linux kernel architecture?**
**Hykes**: There are two steps involved in answering that question. What Docker does is become a layer on top of the Linux kernel. It exposes an abstraction function. It takes advantage of the underlying system. It has access to all of the Linux features. It also takes advantage of the networking stack and the storage subsystem. It uses the abstraction feature to map what developers need.
**LI: How detailed a process is this for developers?**
**Hykes**: As a developer, when I make an application I need a run-time that can run my application in a sandbox environment. I need a packaging system that makes it easy to move it around to other machines. I need a networking model that allows my application to talk to the outside world. I need storage, etc. We abstract ... the gritty details of whatever the kernel does right now.
**LI: Why does this benefit the developer?**
**Hykes**: There are two really big advantages to that. The first is simplicity. Developers can actually be productive now because that abstraction is easier for them to comprehend and is designed for that. The system APIs are designed for the system. What the developer needs is a consistent abstraction that works everywhere.
The second advantage is that over time you can support more systems. For example, early on Docker could only work on a single distribution of Linux under very narrow versions of the kernel. Over time, we expanded the surface area for the number of systems out there that Docker supports natively. So now you can run Docker on every major Linux distribution and in combination with many more networking and storage features.
**LI: Does this functionality trickle down to nondevelopers, or is the benefit solely targeting developers?**
**Hykes**: Every time we expand that surface area, every single developer that uses the Docker abstraction benefits from that too. So every application running Docker gets the added functionality every time the Docker community adds to the expansion. That is the thing that benefits all users. Without that universal expansion, every single developer would not have time to invest to update. There is just too much to support.
**LI: What about Microsoft's recent announcement that it was shipping Docker support with Windows?**
**Hykes**: If you think of Docker as a very narrow and very simple tool, then why would you roll out support for Windows? The whole point is that over time, you can expand the reach of that abstraction. Windows works very differently, obviously. But now that Microsoft has committed to adding features to Windows 10, it exposes the functionality required to run Docker. That is real exciting.
Docker still has to be ported to Windows, but Microsoft has committed to contributing in a major way to the port. Realize how far Microsoft has come in doing this. Microsoft is doing this fully upstream in a completely native, open source way. Everyone installing Windows 10 will get Docker preinstalled.
**LI: What lies ahead for growing Docker's feature set and user base?**
**Hykes**: The community has a lot of features on the drawing board. Most of them have to do with more improved tools for developers to build better distributive applications. A toolkit implies having a series of tools with each tool designed for one job.
In each of these subsystems, there is a need for new tools. In each of these areas, you will see an enormous amount of activity in the community in terms of contributions and designs. In that regard, the Docker project is enormously ambitious. The ability to address each of these areas will ensure that developers have a huge array of choices without fragmentation.
--------------------------------------------------------------------------------
via: http://www.linuxinsider.com/story/Docker-CTO-Solomon-Hykes-to-Devs-Have-It-Your-Way-81504.html
作者:Jack M. Germain
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://www.docker.com/
(translating by runningwater)
2015: Open Source Has Won, But It Isn't Finished
================================================================================
> After the wins of 2014, what's next?
At the beginning of a new year, it's traditional to look back over the last 12 months. But as far as this column is concerned, it's easy to summarise what happened then: open source has won. Let's take it from the top:
**Supercomputers**. Linux is so dominant on the Top 500 Supercomputers lists it is almost embarrassing. The [November 2014 figures][1] show that 485 of the top 500 systems were running some form of Linux; Windows runs on just one. Things are even more impressive if you look at the numbers of cores involved. Here, Linux is to be found on 22,851,693 of them, while Windows is on just 30,720; what that means is that not only does Linux dominate, it is particularly strong on the bigger systems.
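The shares implied by those Top500 counts are easy to verify; this comparison uses only the Linux and Windows figures quoted above, not the full list:

```shell
# Linux vs. Windows on the November 2014 Top500 list
awk 'BEGIN {
  printf "Linux share of systems: %.1f%%\n", 485 / 500 * 100
  printf "Linux share of the quoted cores: %.2f%%\n", 22851693 / (22851693 + 30720) * 100
}'
# → Linux share of systems: 97.0%
# → Linux share of the quoted cores: 99.87%
```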
**Cloud computing**. The Linux Foundation produced an interesting [report][2] last year, which looked at the use of Linux in the cloud by large companies. It found that 75% of them use Linux as their primary platform there, against just 23% that use Windows. It's hard to translate that into market share, since the mix between cloud and non-cloud needs to be factored in; however, given the current popularity of cloud computing, it's safe to say that the use of Linux is high and increasing. Indeed, the same survey found Linux deployments in the cloud have increased from 65% to 79%, while those for Windows have fallen from 45% to 36%. Of course, some may not regard the Linux Foundation as totally disinterested here, but even allowing for that, and for statistical uncertainties, it's pretty clear which direction things are moving in.
**Web servers**. Open source has dominated this sector for nearly 20 years - an astonishing record. However, more recently there's been some interesting movement in market share: at one point, Microsoft's IIS managed to overtake Apache in terms of the total number of Web servers. But as Netcraft explains in its most recent [analysis][3], there's more than meets the eye here:
> This is the second month in a row where there has been a large drop in the total number of websites, giving this month the lowest count since January. As was the case in November, the loss has been concentrated at just a small number of hosting companies, with the ten largest drops accounting for over 52 million hostnames. The active sites and web facing computers metrics were not affected by the loss, with the sites involved being mostly advertising linkfarms, having very little unique content. The majority of these sites were running on Microsoft IIS, causing it to overtake Apache in the July 2014 survey. However the recent losses have resulted in its market share dropping to 29.8%, leaving it now over 10 percentage points behind Apache.
As that indicates, Microsoft's "surge" was more apparent than real, and largely based on linkfarms with little useful content. Indeed, Netcraft's figures for active sites paints a very different picture: Apache has 50.57% market share, with nginx second on 14.73%; Microsoft IIS limps in with a rather feeble 11.72%. This means that open source has around 65% of the active Web server market - not quite at the supercomputer level, but pretty good.
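Summing Netcraft's active-site figures for the two open-source servers reproduces that rough 65% (the quoted shares are as given above):

```shell
# Apache + nginx share of active sites, per the December 2014 Netcraft figures
awk 'BEGIN { printf "Apache + nginx: %.2f%%\n", 50.57 + 14.73 }'
# → Apache + nginx: 65.30%
```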
**Mobile systems**. Here, the march of open source as the foundation of Android continues. Latest figures show that Android accounted for [83.6%][4] of smartphone shipments in the third quarter of 2014, up from 81.4% in the same quarter the previous year. Apple achieved 12.3%, down from 13.4%. As far as tablets are concerned, Android is following a similar trajectory: for the second quarter of 2014, Android notched up around [75% of global tablet sales][5], while Apple was on 25%.
**Embedded systems**. Although it's much harder to quantify the market share of Linux in the important embedded system market, figures from one 2013 study indicated that around [half of planned embedded systems][6] would use it.
**Internet of Things**. In many ways this is simply another incarnation of embedded systems, with the difference that they are designed to be online, all the time. It's too early to talk of market share, but as I've [discussed][7] recently, AllSeen's open source framework is coming on apace. What's striking by their absence are any credible closed-source rivals; it therefore seems highly likely that the Internet of Things will see supercomputer-like levels of open source adoption.
Of course, this level of success always begs the question: where do we go from here? Given that open source is approaching saturation levels of success in many sectors, surely the only way is down? In answer to that question, I recommend a thought-provoking essay from 2013 written by Christopher Kelty for the Journal of Peer Production, with the intriguing title of "[There is no free software.][8]" Here's how it begins:
> Free software does not exist. This is sad for me, since I wrote a whole book about it. But it was also a point I tried to make in my book. Free software—and its doppelganger open source—is constantly becoming. Its existence is not one of stability, permanence, or persistence through time, and this is part of its power.
In other words, whatever amazing free software 2014 has already brought us, we can be sure that 2015 will be full of yet more of it, as it continues its never-ending evolution.
--------------------------------------------------------------------------------
via: http://www.computerworlduk.com/blogs/open-enterprise/open-source-has-won-3592314/
作者:[lyn Moody][a]
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.computerworlduk.com/author/glyn-moody/
[1]:http://www.top500.org/statistics/list/
[2]:http://www.linuxfoundation.org/publications/linux-foundation/linux-end-user-trends-report-2014
[3]:http://news.netcraft.com/archives/2014/12/18/december-2014-web-server-survey.html
[4]:http://www.cnet.com/news/android-stays-unbeatable-in-smartphone-market-for-now/
[5]:http://timesofindia.indiatimes.com/tech/tech-news/Android-tablet-market-share-hits-70-in-Q2-iPads-slip-to-25-Survey/articleshow/38966512.cms
[6]:http://linuxgizmos.com/embedded-developers-prefer-linux-love-android/
[7]:http://www.computerworlduk.com/blogs/open-enterprise/allseen-3591023/
[8]:http://peerproduction.net/issues/issue-3-free-software-epistemics/debate/there-is-no-free-software/
Linus Tells Wired Leap Second Irrelevant
================================================================================
![](https://farm4.staticflickr.com/3852/14863156322_a354770b14_o.jpg)
Two larger publications today featured Linux and the effect of the upcoming leap second. The Register today said that the leap second effects of the past are no longer an issue. Coincidentally, Wired talked to Linus Torvalds about the same issue today as well.
**Linus Torvalds** spoke with Wired's Robert McMillan about the approaching leap second due to be added in June. The Register said the last leap second in 2012 took out Mozilla, StumbleUpon, Yelp, FourSquare, Reddit and LinkedIn as well as several major airlines and travel reservation services that ran Linux. Torvalds told Wired today that the kernel is patched and he doesn't expect too many issues this time around. [He said][1], "Just take the leap second as an excuse to have a small nonsensical party for your closest friends. Wear silly hats, get a banner printed, and get silly drunk. That's exactly how relevant it should be to most people."
**However**, The Register said not everyone agrees with Torvalds' sentiments. They quote Daily Mail saying, "The year 2015 will have an extra second — which could wreak havoc on the infrastructure powering the Internet," then remind us of the Y2K scare that ended up being a non-event. The Register's Gavin [Clarke concluded][2]:
> No reason the Penguins were caught sans pants.
> Now they've gone belt and braces.
The take-away is: move along, nothing to see here.
--------------------------------------------------------------------------------
via: http://ostatic.com/blog/linus-tells-wired-leap-second-irrelevant
作者:[Susan Linton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://ostatic.com/member/susan-linton
[1]:http://www.wired.com/2015/01/torvalds_leapsecond/
[2]:http://www.theregister.co.uk/2015/01/09/leap_second_bug_linux_hysteria/
diff -u: What's New in Kernel Development
================================================================================
**David Drysdale** wanted to add Capsicum security features to Linux after he noticed that FreeBSD already had Capsicum support. Capsicum defines fine-grained security privileges, not unlike filesystem capabilities. But as David discovered, Capsicum also has some controversy surrounding it.
Capsicum has been around for a while and was described in a USENIX paper in 2010: [http://www.cl.cam.ac.uk/research/security/capsicum/papers/2010usenix-security-capsicum-website.pdf][1].
Part of the controversy is just because of the similarity with capabilities. As Eric Biederman pointed out during the discussion, it would be possible to implement features approaching Capsicum's as an extension of capabilities, but implementing Capsicum directly would involve creating a whole new (and extensive) abstraction layer in the kernel, although David argued that capabilities couldn't actually be extended far enough to match Capsicum's fine-grained security controls.
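For comparison, the existing Linux capability machinery is easy to inspect from a shell; the `setcap` lines below are illustrative only (they need root and a real binary):

```shell
# Every Linux process carries capability bitmasks, visible in /proc:
grep -E '^Cap(Inh|Prm|Eff)' /proc/self/status

# Files can carry fine-grained capabilities too, e.g. letting a server
# bind ports below 1024 without running as full root:
# setcap cap_net_bind_service+ep ./myserver
# getcap ./myserver
```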
Capsicum also was controversial within its own developer community. For example, as Eric described, it lacked a specification for how to revoke privileges. And, David pointed out that this was because the community couldn't agree on how that could best be done. David quoted an e-mail sent by Ben Laurie to the cl-capsicum-discuss mailing list in 2011, where Ben said, "It would require additional book-keeping to find and revoke outstanding capabilities, which requires knowing how to reach capabilities, and then whether they are derived from the capability being revoked. It also requires an authorization model for revocation. The former two points mean additional overhead in terms of data structure operations and synchronisation."
Given the ongoing controversy within the Capsicum developer community and the corresponding lack of specification of key features, and given the existence of capabilities that already perform a similar function in the kernel and the invasiveness of Capsicum patches, Eric was opposed to David implementing Capsicum in Linux.
But, given the fact that capabilities are much coarser-grained than Capsicum's security features, to the point that capabilities can't really be extended far enough to mimic Capsicum's features, and given that FreeBSD already has Capsicum implemented in its kernel, showing that it can be done and that people might want it, it seems there will remain a lot of folks interested in getting Capsicum into the Linux kernel.
Sometimes it's unclear whether there's a bug in the code or just a bug in the written specification. Henrique de Moraes Holschuh noticed that the Intel Software Developer Manual (vol. 3A, section 9.11.6) said quite clearly that microcode updates required 16-byte alignment for the P6 family of CPUs, the Pentium 4 and the Xeon. But, the code in the kernel's microcode driver didn't enforce that alignment.
In fact, Henrique's investigation uncovered the fact that some Intel chips, like the Xeon X5550 and the second-generation i5 chips, needed only 4-byte alignment in practice, and not 16. However, to conform to the documented specification, he suggested fixing the kernel code to match the spec.
Borislav Petkov objected to this. He said Henrique was looking for problems where there weren't any. He said that Henrique simply had discovered a bug in Intel's documentation, because the alignment issue clearly wasn't a problem in the real world. He suggested alerting the Intel folks to the documentation problem and moving on. As he put it, "If the processor accepts the non-16-byte-aligned update, why do you care?"
But, as H. Peter Anvin remarked, the written spec was Intel's guarantee that certain behaviors would work. If the kernel ignored the spec, it could lead to subtle bugs later on. And, Bill Davidsen said that if the kernel ignored the alignment requirement, and "if the requirement is enforced in some future revision, and updates then fail in some insane way, the vendor is justified in claiming 'I told you so'."
The end result was that Henrique sent in some patches to make the microcode driver enforce the 16-byte alignment requirement.
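The 16-byte rule in the spec boils down to a low-bits test; a sketch of the kind of check the driver now enforces (not the actual kernel code):

```shell
# An address is 16-byte aligned iff its four low bits are zero (addr % 16 == 0).
awk 'BEGIN {
  split("4096 4100 4112", addrs, " ")
  for (i = 1; i <= 3; i++)
    printf "0x%x %s\n", addrs[i], ((addrs[i] % 16 == 0) ? "aligned" : "misaligned")
}'
# → 0x1000 aligned
# → 0x1004 misaligned
# → 0x1010 aligned
```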
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/diff-u-whats-new-kernel-development-6
作者:[Zack Brown][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linuxjournal.com/user/801501
[1]:http://www.cl.cam.ac.uk/research/security/capsicum/papers/2010usenix-security-capsicum-website.pdf
Why Mac users don't switch to Linux
================================================================================
Linux and Mac users share at least one common thing: they prefer not to use Windows. But after that the two groups part company and tend to go their separate ways. But why don't more Mac users switch to Linux? Is there something that prevents Mac users from making the jump?
[Datamation took a look at these questions][1] and tried to answer them. Datamation's conclusion was that it's really about the applications and workflow, not the operating system:
> …there are some instances where replacing existing applications with new options isn't terribly practical both in workflow and in overall functionality. This is an area where, sadly, Apple has excelled in. So while it's hardly “impossible” to get around these issues, they are definitely a large enough challenge that it will give the typical Mac enthusiast pause.
>
> But outside of Web developers, honestly, I don't see Mac users “en masse,” seeking to disrupt their workflows for the mere idea of avoiding the upgrade to OS X Yosemite. Granted, having seen Yosemite up close Mac users who are considered power users will absolutely find this change-up to be hideous. However, despite poor OS X UI changes, the core workflow for existing Mac users will remain largely unchanged and unchallenged.
>
> No, I believe Linux adoption will continue to be sporadic and random. Ever-growing, but not something that is easily measured or accurately calculated.
I agree to a certain extent with Datamation's take on the importance of applications and workflows; both things are important and matter in the choice of a desktop operating system. But I think there's something more going on with Mac users than just that. I believe that there's a different mentality that exists between Linux and Mac users, and I think that's the real reason why many Mac users don't switch to Linux.
![](http://jimlynch.com/wp-content/uploads/2015/01/mac-users-switch-to-linux.jpeg)
### It's all about control for Linux users ###
Linux users tend to want control over their computing experience; they want to be able to change things to make them the way that they want them. One simply cannot do that in the same way with OS X or any other Apple products. With Apple you get what they give you for the most part.
For Mac (and iOS) users this is fine; they seem mostly content to stay within Apple's walled garden and live according to whatever standards and options Apple gives them. But this is totally unacceptable to most Linux users. People who move to Linux usually come from Windows, and it's there that they develop their loathing for someone else trying to define or control their computing experiences.
And once someone like that has tasted the freedom that Linux offers, it's almost impossible for them to want to go back to living under the thumb of Apple, Microsoft or anyone else. You'd have to pry Linux from their cold, dead fingers before they'd accept the computing experience created for them by Apple or Microsoft.
But you won't find that same determination to have control among most Mac users. For them it's mostly about getting the most out of whatever Apple has done with OS X in its latest update. They tend to adjust fairly quickly to new versions of OS X and even when unhappy with Apple's changes they seem content to continue living within Apple's walled garden.
So the need for control is a huge difference between Mac and Linux users. I don't see it as a problem though since it just reflects the reality of two very different attitudes toward using computers.
### Mac users need Apple's support mechanisms ###
Linux users are also different in the sense that they don't mind getting their hands dirty by getting “under the hood” of their computers. Along with control comes the personal responsibility of making sure that their Linux systems work well and efficiently, and digging into the operating system is something that many Linux users have no problem doing.
When a Linux user needs to fix something, chances are they will attempt to do so immediately themselves. If that doesn't work then they'll seek additional information online from other Linux users and work through the problem until it has been resolved.
But Mac users are most likely not going to do that to the same extent. That is probably one of the reasons why Apple stores are so popular and why so many Mac users opt to buy AppleCare when they get a new Mac. A Mac user can simply take his or her computer to the Apple store and ask someone to fix it for them. There they can belly up to the Genius Bar and have their computer looked at by someone Apple has paid to fix it.
Most Linux users would blanch at the thought of doing such a thing. Who wants some guy you don't even know to lay hands on your computer and start trying to fix it for you? Some Linux users would shudder at the very idea of such a thing happening.
So it would be hard for a Mac user to switch to Linux and suddenly be bereft of the support from Apple that he or she was used to getting in the past. Some Mac users might feel very vulnerable and uncertain if they were cut off from the Apple mothership in terms of support.
### Mac users love Apple's hardware ###
The Datamation article focused on software, but I believe that hardware also matters to Mac users. Most Apple customers tend to love Apple's hardware. When they buy a Mac, they aren't just buying it for OS X. They are also buying Apple's industrial design expertise and that can be an important differentiator for Mac users. Mac users are willing to pay more because they perceive that the overall value they are getting from Apple for a Mac is worth it.
Linux users, on the other hand, seem less concerned by such things. I think they tend to focus more on cost and less on the looks or design of their computer hardware. For them it's probably about getting the most value from the hardware at the lowest cost. They aren't in love with the way their computer hardware looks in the same way that some Mac users probably are, and so they don't make buying decisions based on it.
I think both points of view on hardware are equally valid. It ultimately gets down to the needs of the individual user and what matters to them when they choose to buy or, in the case of some Linux users, build their computer. Value is the key for both groups, and each has its own perceptions of what constitutes real value in a computer.
Of course it is [possible to run Linux on a Mac][2], directly or indirectly via a virtual machine. So a user who really likes Apple's hardware does have the option of keeping their Mac but installing Linux on it.
### Too many Linux distros to choose from? ###
Another reason that might make it hard for a Mac user to move to Linux is the sheer number of distributions to choose from in the world of Linux. While most Linux users probably welcome the huge diversity of distros available, it could also be very confusing for a Mac user who hasn't learned to navigate those choices.
Over time I think a Mac user would learn and adjust by figuring out which distribution worked best for him or her. But in the short term it might be a very daunting hurdle to overcome after being used to OS X for a long period of time. I don't think it's insurmountable, but it's definitely something that is worth mentioning here.
Of course we do have helpful resources like [DistroWatch][3] and even my own [Desktop Linux Reviews][4] blog that can help people find the right Linux distribution. Plus there are many articles available about “the best Linux distro” and that sort of thing that Mac users can use as resources when trying to figure out the distribution they want to use.
But one of the reasons why Apple customers buy Macs is the simplicity and all-in-one solution that they offer in terms of the hardware and software being unified by Apple. So I am not sure how many Mac users would really want to spend the time trying to find the right Linux distribution. It might be something that puts them off really considering the switch to Linux.
### Mac users are apples and Linux users are oranges ###
I see nothing wrong with Mac and Linux users going their separate ways. I think were just talking about two very different groups of people, and its a good thing that both groups can find and use the operating system and software that they prefer. Let Mac users enjoy OS X and let Linux users enjoy Linux, and hopefully both groups will be happy and content with their computers.
Every once in a while a Mac user might stray over to Linux or vice versa, but for the most part I think the two groups live in different worlds and mostly prefer to stay separate and apart from one another. I generally dont compare the two because when you get right down to it, its really just a case of apples and oranges.
--------------------------------------------------------------------------------
via: http://jimlynch.com/linux-articles/why-mac-users-dont-switch-to-linux/
Author: [Jim Lynch][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://jimlynch.com/author/Jim/
[1]:http://www.datamation.com/open-source/why-linux-isnt-winning-over-mac-users-1.html
[2]:http://www.howtogeek.com/187410/how-to-install-and-dual-boot-linux-on-a-mac/
[3]:http://distrowatch.com/
[4]:http://desktoplinuxreviews.com/

@ -1,155 +0,0 @@
How to Backup and Restore Your Apps and PPAs in Ubuntu Using Aptik
================================================================================
![00_lead_image_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x300x00_lead_image_aptik.png.pagespeed.ic.n3TJwp8YK_.png)
If you need to reinstall Ubuntu or if you just want to install a new version from scratch, wouldnt it be useful to have an easy way to reinstall all your apps and settings? You can easily accomplish this using a free tool called Aptik.
Aptik (Automated Package Backup and Restore), an application available in Ubuntu, Linux Mint, and other Debian- and Ubuntu-based Linux distributions, allows you to backup a list of installed PPAs (Personal Package Archives), which are software repositories, downloaded packages, installed applications and themes, and application settings to an external USB drive, network drive, or a cloud service like Dropbox.
NOTE: When we say to type something in this article and there are quotes around the text, DO NOT type the quotes, unless we specify otherwise.
To install Aptik, you must add the PPA. To do so, press Ctrl + Alt + T to open a Terminal window. Type the following text at the prompt and press Enter.
sudo apt-add-repository -y ppa:teejee2008/ppa
Type your password when prompted and press Enter.
![01_command_to_add_repository](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x99x01_command_to_add_repository.png.pagespeed.ic.UfVC9QLj54.png)
Type the following text at the prompt to make sure the repository is up-to-date.
sudo apt-get update
![02_update_command](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x252x02_update_command.png.pagespeed.ic.m9pvd88WNx.png)
When the update is finished, you are ready to install Aptik. Type the following text at the prompt and press Enter.
sudo apt-get install aptik
NOTE: You may see some errors about packages that the update failed to fetch. If they are similar to the ones listed on the following image, you should have no problem installing Aptik.
![03_command_to_install_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x03_command_to_install_aptik.png.pagespeed.ic.1jtHysRO9h.png)
The installation progress displays, followed by a message saying how much disk space will be used. When asked if you want to continue, type “y” and press Enter.
![04_do_you_want_to_continue](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x04_do_you_want_to_continue.png.pagespeed.ic.WQ15_UxK5Z.png)
When the installation is finished, close the Terminal window by typing “exit” and pressing Enter, or by clicking the “X” button in the upper-left corner of the window.
![05_closing_terminal_window](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x05_closing_terminal_window.png.pagespeed.ic.9QoqwM7Mfr.png)
Before running Aptik, you should set up a backup directory on a USB flash drive, a network drive, or a cloud account, such as Dropbox or Google Drive. For this example, we will use Dropbox.
![06_creating_backup_folder](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x243x06_creating_backup_folder.png.pagespeed.ic.7HzR9KwAfQ.png)
Once your backup directory is set up, click the “Search” button at the top of the Unity Launcher bar.
![07_opening_search](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x177x07_opening_search.png.pagespeed.ic.qvFiw6_sXa.png)
Type “aptik” in the search box. Results of the search display as you type. When the icon for Aptik displays, click on it to open the application.
![08_starting_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x338x08_starting_aptik.png.pagespeed.ic.8fSl4tYR0n.png)
A dialog box displays asking for your password. Enter your password in the edit box and click “OK.”
![09_entering_password](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x337x09_entering_password.png.pagespeed.ic.yanJYFyP1i.png)
The main Aptik window displays. Select “Other…” from the “Backup Directory” drop-down list. This allows you to select the backup directory you created.
NOTE: The “Open” button to the right of the drop-down list opens the selected directory in a Files Manager window.
![10_selecting_other_for_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x533x10_selecting_other_for_directory.png.pagespeed.ic.dHbmYdAHYx.png)
On the “Backup Directory” dialog box, navigate to your backup directory and then click “Open.”
NOTE: If you havent created a backup directory yet, or you want to add a subdirectory in the selected directory, use the “Create Folder” button to create a new directory.
![11_choosing_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x470x11_choosing_directory.png.pagespeed.ic.E-56x54cy9.png)
To backup the list of installed PPAs, click “Backup” to the right of “Software Sources (PPAs).”
![12_clicking_backup_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x13_selecting_all_software_sources.png.pagespeed.ic.zDFiDGfnks.png)
The “Backup Software Sources” dialog box displays. The list of installed packages and the associated PPA for each displays. Select the PPAs you want to backup, or use the “Select All” button to select all the PPAs in the list.
![13_selecting_all_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x13_selecting_all_software_sources.png.pagespeed.ic.zDFiDGfnks.png)
Click “Backup” to begin the backup process.
![14_clicking_backup_for_all_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x14_clicking_backup_for_all_software_sources.png.pagespeed.ic.n5h_KnQVZa.png)
A dialog box displays when the backup is finished telling you the backup was created successfully. Click “OK” to close the dialog box.
A file named “ppa.list” will be created in the backup directory.
![15_closing_finished_dialog_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x15_closing_finished_dialog_software_sources.png.pagespeed.ic.V25-KgSXdY.png)
The next item, “Downloaded Packages (APT Cache)”, is only useful if you are re-installing the same version of Ubuntu. It backs up the packages in your system cache (/var/cache/apt/archives). If you are upgrading your system, you can skip this step because the packages for the new version of the system will be newer than the packages in the system cache.
Backing up downloaded packages and then restoring them on the re-installed Ubuntu system will save time and Internet bandwidth when the packages are reinstalled. Because the packages will be available in the system cache once you restore them, the download will be skipped and the installation of the packages will complete more quickly.
If you are reinstalling the same version of your Ubuntu system, click the “Backup” button to the right of “Downloaded Packages (APT Cache)” to backup the packages in the system cache.
NOTE: When you backup the downloaded packages, there is no secondary dialog box. The packages in your system cache (/var/cache/apt/archives) are copied to an “archives” directory in the backup directory and a dialog box displays when the backup is finished, indicating that the packages were copied successfully.
![16_downloaded_packages_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x544x16_downloaded_packages_backed_up.png.pagespeed.ic.z8ysuwzQAK.png)
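The cache copy Aptik performs here can be sketched manually in a few lines of shell. This is a hypothetical equivalent for illustration, not Aptik's own code; the `backup_apt_cache` helper name and the backup destination are assumptions, and the source directory defaults to the APT cache path mentioned above.

```shell
# Hypothetical manual equivalent of Aptik's APT-cache backup: copy the
# cached .deb files into an "archives" directory under a backup location.
backup_apt_cache() {
    dest="$1"                              # backup directory (your choice)
    src="${2:-/var/cache/apt/archives}"    # APT cache (default path)
    mkdir -p "$dest/archives"
    # Copy top-level .deb files only; this skips the partial/ subdirectory.
    find "$src" -maxdepth 1 -name '*.deb' -exec cp -t "$dest/archives" {} + 2>/dev/null
    # Report how many packages were backed up.
    ls "$dest/archives" | wc -l
}

# Example: backup_apt_cache "$HOME/Dropbox/aptik-backup"
```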
There are some packages that are part of your Ubuntu distribution. These are not checked, since they are automatically installed when you install the Ubuntu system. For example, Firefox is a package that is installed by default in Ubuntu and other similar Linux distributions. Therefore, it will not be selected by default.
Packages that you installed after installing the system, such as the [package for the Chrome web browser][1] or the package containing Aptik (yes, Aptik is automatically selected for backup), are selected by default. This makes it easy to back up the packages that were not part of the original system installation.
Select the packages you want to back up and de-select the packages you dont want to backup. Click “Backup” to the right of “Software Selections” to back up the selected top-level packages.
NOTE: Dependency packages are not included in this backup.
![18_clicking_backup_for_software_selections](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x18_clicking_backup_for_software_selections.png.pagespeed.ic.QI5D-IgnP_.png)
Two files, named “packages.list” and “packages-installed.list”, are created in the backup directory and a dialog box displays indicating that the backup was created successfully. Click “OK” to close the dialog box.
NOTE: The “packages-installed.list” file lists all the packages. The “packages.list” file also lists all the packages, but indicates which ones were selected.
![19_software_selections_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x19_software_selections_backed_up.png.pagespeed.ic.LVmgs6MKPL.png)
To backup settings for installed applications, click the “Backup” button to the right of “Application Settings” on the main Aptik window. Select the settings you want to back up and click “Backup”.
NOTE: Click the “Select All” button if you want to back up all application settings.
![20_backing_up_app_settings](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x20_backing_up_app_settings.png.pagespeed.ic.7_kgU3Dj_m.png)
The selected settings files are zipped into a file called “app-settings.tar.gz”.
![21_zipping_settings_files](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x21_zipping_settings_files.png.pagespeed.ic.dgoBj7egqv.png)
When the zipping is complete, the zipped file is copied to the backup directory and a dialog box displays telling you that the backups were created successfully. Click “OK” to close the dialog box.
![22_app_settings_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22_app_settings_backed_up.png.pagespeed.ic.Mb6utyLJ3W.png)
Themes from the “/usr/share/themes” directory and icons from the “/usr/share/icons” directory can also be backed up. To do so, click the “Backup” button to the right of “Themes and Icons”. The “Backup Themes” dialog box displays with all the themes and icons selected by default. De-select any themes or icons you dont want to back up and click “Backup.”
![22a_backing_up_themes_and_icons](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22a_backing_up_themes_and_icons.png.pagespeed.ic.KXa8W3YhyF.png)
The themes are zipped and copied to a “themes” directory in the backup directory and the icons are zipped and copied to an “icons” directory in the backup directory. A dialog box displays telling you that the backups were created successfully. Click “OK” to close the dialog box.
![22b_themes_and_icons_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22b_themes_and_icons_backed_up.png.pagespeed.ic.ejjRaymD39.png)
Once youve completed the desired backups, close Aptik by clicking the “X” button in the upper-left corner of the main window.
![23_closing_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x542x23_closing_aptik.png.pagespeed.ic.pNk9Vt3--l.png)
Your backup files are available in the backup directory you chose.
![24_backup_files_in_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x374x24_backup_files_in_directory.png.pagespeed.ic.vwblOfN915.png)
When you re-install your Ubuntu system or install a new version of Ubuntu, install Aptik on the newly installed system and make the backup files you generated available to the system. Run Aptik and use the “Restore” button for each item to restore your PPAs, applications, packages, settings, themes, and icons.
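Before reinstalling, it can be worth checking that the backup directory actually contains the artifacts described above. Below is a minimal sketch: the file names are the ones this article mentions, but `check_aptik_backup` is a hypothetical helper, not part of Aptik, and your backup path will differ.

```shell
# Report which of the expected Aptik backup artifacts exist in a directory.
check_aptik_backup() {
    dir="$1"
    for f in ppa.list packages.list packages-installed.list app-settings.tar.gz; do
        if [ -e "$dir/$f" ]; then
            echo "present: $f"
        else
            echo "missing: $f"
        fi
    done
}

# Example: check_aptik_backup "$HOME/Dropbox/aptik-backup"
```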
--------------------------------------------------------------------------------
via: http://www.howtogeek.com/206454/how-to-backup-and-restore-your-apps-and-ppas-in-ubuntu-using-aptik/
Author: Lori Kaufman
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[1]:http://www.howtogeek.com/203768

@ -1,86 +0,0 @@
How to convert image, audio and video formats on Ubuntu
================================================================================
If you need to work with a variety of image, audio and video files encoded in all sorts of different formats, you are probably using more than one tool to convert among all those heterogeneous media formats. A versatile all-in-one media conversion tool capable of handling all the different image/audio/video formats would be awesome.
[Format Junkie][1] is one such all-in-one media conversion tool with an extremely user-friendly GUI. Better yet, it is free software! With Format Junkie, you can convert image, audio, video and archive files of pretty much all the popular formats simply with a few mouse clicks.
### Install Format Junkie on Ubuntu 12.04, 12.10 and 13.04 ###
Format Junkie is available for installation via Ubuntu PPA format-junkie-team. This PPA supports Ubuntu 12.04, 12.10 and 13.04. To install Format Junkie on one of those Ubuntu releases, simply run the following.
$ sudo add-apt-repository ppa:format-junkie-team/release
$ sudo apt-get update
$ sudo apt-get install formatjunkie
$ sudo ln -s /opt/extras.ubuntu.com/formatjunkie/formatjunkie /usr/bin/formatjunkie
### Install Format Junkie on Ubuntu 13.10 ###
If you are running Ubuntu 13.10 (Saucy Salamander), you can download and install the .deb package for Ubuntu 13.04 as follows. Since the .deb package for Format Junkie requires quite a few dependent packages, install it using the [gdebi deb installer][2].
On 32-bit Ubuntu 13.10:
$ wget https://launchpad.net/~format-junkie-team/+archive/release/+files/formatjunkie_1.07-1~raring0.2_i386.deb
$ sudo gdebi formatjunkie_1.07-1~raring0.2_i386.deb
$ sudo ln -s /opt/extras.ubuntu.com/formatjunkie/formatjunkie /usr/bin/formatjunkie
On 64-bit Ubuntu 13.10:
$ wget https://launchpad.net/~format-junkie-team/+archive/release/+files/formatjunkie_1.07-1~raring0.2_amd64.deb
$ sudo gdebi formatjunkie_1.07-1~raring0.2_amd64.deb
$ sudo ln -s /opt/extras.ubuntu.com/formatjunkie/formatjunkie /usr/bin/formatjunkie
### Install Format Junkie on Ubuntu 14.04 or Later ###
The currently available official Format Junkie .deb file requires libavcodec-extra-53 which has become obsolete starting from Ubuntu 14.04. Thus if you want to install Format Junkie on Ubuntu 14.04 or later, you can use the following third-party PPA repositories instead.
$ sudo add-apt-repository ppa:jon-severinsson/ffmpeg
$ sudo add-apt-repository ppa:noobslab/apps
$ sudo apt-get update
$ sudo apt-get install formatjunkie
### How to Use Format Junkie ###
To start Format Junkie after installation, simply run:
$ formatjunkie
#### Convert audio, video, image and archive formats with Format Junkie ####
The user interface of Format Junkie is pretty simple and intuitive, as shown below. To choose among audio, video, image and ISO media, click on one of the four tabs at the top. You can add as many files as you want for batch conversion. After you add files and select the output format, simply click the "Start Converting" button.
![](http://farm9.staticflickr.com/8107/8643695905_082b323059.jpg)
Format Junkie supports conversion among the following media formats:
- **Audio**: mp3, wav, ogg, wma, flac, m4r, aac, m4a, mp2.
- **Video**: avi, ogv, vob, mp4, 3gp, wmv, mkv, mpg, mov, flv, webm.
- **Image**: jpg, png, ico, bmp, svg, tif, pcx, pdf, tga, pnm.
- **Archive**: iso, cso.
#### Subtitle encoding with Format Junkie ####
Besides media conversion, Format Junkie also provides a GUI for subtitle encoding. The actual subtitle encoding is done by MEncoder, so in order to do subtitle encoding via the Format Junkie interface, you first need to install MEncoder.
$ sudo apt-get install mencoder
Then click on "Advanced" tab on Format Junkie. Choose AVI/subtitle files to use for encoding, as shown below.
![](http://farm9.staticflickr.com/8100/8644791396_bfe602cd16.jpg)
Overall, Format Junkie is an extremely easy-to-use and versatile media conversion tool. One drawback, though, is that it does not allow any sort of customization during conversion (e.g., bitrate, fps, sampling frequency, image quality, size). So this tool is recommended for newbies who are looking for a simple, easy-to-use media conversion tool.
Enjoyed this post? I will appreciate your like/share buttons on Facebook, Twitter and Google+.
--------------------------------------------------------------------------------
via: http://xmodulo.com/how-to-convert-image-audio-and-video-formats-on-ubuntu.html
Author: [Dan Nanni][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://xmodulo.com/author/nanni
[1]:https://launchpad.net/format-junkie
[2]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html

@ -1,75 +0,0 @@
(translating by runningwater)
Linux FAQs with Answers--How to install 7zip on Linux
================================================================================
> **Question**: I need to extract files from an ISO image, and for that I want to use 7zip program. How can I install 7zip on [insert your Linux distro]?
7zip is an open-source archive program originally developed for Windows, which can pack or unpack a variety of archive formats including its native format 7z as well as XZ, GZIP, TAR, ZIP and BZIP2. 7zip is also popularly used to extract RAR, DEB, RPM and ISO files. Besides simple archiving, 7zip can support AES-256 encryption as well as self-extracting and multi-volume archiving. For POSIX systems (Linux, Unix, BSD), the original 7zip program has been ported as p7zip (short for "POSIX 7zip").
Here is how to install 7zip (or p7zip) on Linux.
### Install 7zip on Debian, Ubuntu or Linux Mint ###
Debian-based distributions come with three packages related to 7zip.
- **p7zip**: contains 7zr (a minimal 7zip archive tool) which can handle its native 7z format only.
- **p7zip-full**: contains 7z which can support 7z, LZMA2, XZ, ZIP, CAB, GZIP, BZIP2, ARJ, TAR, CPIO, RPM, ISO and DEB.
- **p7zip-rar**: contains a plugin for extracting RAR files.
It is recommended to install the p7zip-full package (not p7zip), since it is the most complete 7zip package and supports many archive formats. In addition, if you want to extract RAR files, you also need to install the p7zip-rar package. The RAR plugin ships as a separate package because RAR is a proprietary format.
$ sudo apt-get install p7zip-full p7zip-rar
### Install 7zip on Fedora or CentOS/RHEL ###
Red Hat-based distributions offer two packages related to 7zip.
- **p7zip**: contains 7za command which can support 7z, ZIP, GZIP, CAB, ARJ, BZIP2, TAR, CPIO, RPM and DEB.
- **p7zip-plugins**: contains 7z command and additional plugins to extend 7za command (e.g., ISO extraction).
On CentOS/RHEL, you need to enable the [EPEL repository][1] before running the yum command below. On Fedora, there is no need to set up an additional repository.
$ sudo yum install p7zip p7zip-plugins
Note that unlike Debian based distributions, Red Hat based distributions do not offer a RAR plugin. Therefore you will not be able to extract RAR files using 7z command.
### Create or Extract an Archive with 7z ###
Once you have installed 7zip, you can use the 7z command to pack or unpack various types of archives. The 7z command uses other plugins to handle the archives.
![](https://farm8.staticflickr.com/7583/15874000610_878a85b06a_b.jpg)
To create an archive, use "a" option. Supported archive types for creation are 7z, XZ, GZIP, TAR, ZIP and BZIP2. If the specified archive file already exists, it will "add" the files to the existing archive, instead of overwriting it.
$ 7z a <archive-filename> <list-of-files>
To extract an archive, use "e" option. It will extract the archive into the current directory. Many more archive types are supported for extraction than for creation; the list includes 7z, XZ, GZIP, TAR, ZIP, BZIP2, LZMA2, CAB, ARJ, CPIO, RPM, ISO and DEB.
$ 7z e <archive-filename>
Another way to unpack an archive is to use "x" option. Unlike "e" option, it will extract the content with full paths.
$ 7z x <archive-filename>
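The difference between "e" and "x" is easy to miss: "e" flattens everything into the current directory, while "x" recreates the stored paths. A small sketch of the contrast, guarded in case p7zip is not installed (the file and directory names are made up for the demo):

```shell
# Contrast "e" (flatten) with "x" (preserve paths) on an archive that
# contains a nested file. Skips gracefully if 7z is not installed.
demo_e_vs_x() {
    command -v 7z >/dev/null 2>&1 || { echo "7z not available"; return 0; }
    workdir=$(mktemp -d)
    (
        cd "$workdir" || exit 1
        mkdir -p sub && echo hi > sub/nested.txt
        7z a demo.7z sub >/dev/null               # archive a directory
        mkdir flat && (cd flat && 7z e ../demo.7z >/dev/null)
        mkdir tree && (cd tree && 7z x ../demo.7z >/dev/null)
        [ -e flat/nested.txt ] && echo "e: flattened"
        [ -e tree/sub/nested.txt ] && echo "x: paths preserved"
    )
    rm -rf "$workdir"
}
```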
To see a list of files in an archive, use "l" option.
$ 7z l <archive-filename>
You can update or remove file(s) in an archive with "u" and "d" options, respectively.
$ 7z u <archive-filename> <list-of-files-to-update>
$ 7z d <archive-filename> <list-of-files-to-delete>
To test the integrity of an archive:
$ 7z t <archive-filename>
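Putting the options above together, here is a small round-trip sketch: create an archive with "a", list it with "l", test it with "t", and extract it with "x". It is guarded with a `command -v` check since p7zip may not be installed, and the file names are made up for the demo.

```shell
# Round-trip demo of the 7z options above: a (add), l (list), t (test),
# x (extract with paths). Skips gracefully if 7z is not installed.
demo_7z() {
    command -v 7z >/dev/null 2>&1 || { echo "7z not available"; return 0; }
    workdir=$(mktemp -d)
    (
        cd "$workdir" || exit 1
        echo "hello" > demo.txt
        7z a demo.7z demo.txt >/dev/null    # create the archive
        7z l demo.7z | grep -q demo.txt     # list its contents
        7z t demo.7z >/dev/null             # test integrity
        mkdir out && cd out || exit 1
        7z x ../demo.7z >/dev/null          # extract with full paths
        cat demo.txt
    )
    rm -rf "$workdir"
}
```

When p7zip-full is installed, `demo_7z` prints the extracted file's contents; otherwise it reports that 7z is unavailable.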
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/install-7zip-linux.html
Translator: [runningwater](https://github.com/runningwater)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[1]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html

@ -1,203 +0,0 @@
Translating by ZTinoZ
How to Install Bugzilla 4.4 on Ubuntu / CentOS 6.x
================================================================================
Here, we are going to show you how to install Bugzilla on Ubuntu 14.04 or CentOS 6.5/7. Bugzilla is free and open source software (FOSS): a web-based bug tracking tool used to log and track defects in a database. Its bug-tracking system allows individual developers or groups of developers to effectively keep track of outstanding problems with their product. Despite being "free", Bugzilla has many features its expensive counterparts lack. Consequently, Bugzilla has quickly become a favorite of thousands of organizations across the globe.
Bugzilla is very adaptable to various situations. It is used nowadays in IT support queues, systems administration and deployment management, chip design and development problem tracking (both pre- and post-fabrication), and software and hardware bug tracking for luminaries such as Red Hat, NASA, Linux-Mandrake, and VA Systems.
### 1. Installing dependencies ###
Setting up Bugzilla is fairly **easy**. This article is specific to Ubuntu 14.04 and CentOS 6.5 (though it might work with older versions too).
In order to get Bugzilla up and running in Ubuntu or CentOS, we are going to install Apache webserver ( SSL enabled ) , MySQL database server and also some tools that are required to install and configure Bugzilla.
To install Bugzilla in your server, you'll need to have the following components installed:
- Perl (5.8.1 or above)
- MySQL
- Apache2
- Bugzilla
- Perl modules
- Bugzilla using apache
As mentioned, this article explains the installation on both Ubuntu 14.04 and CentOS 6.5/7, so it has two separate sections for them.
Here are the steps you need to follow to set up Bugzilla on Ubuntu 14.04 LTS and CentOS 7:
**Preparing the required dependency packages:**
You need to install the essential packages by running the following command:
**For Ubuntu:**
$ sudo apt-get install apache2 mysql-server libapache2-mod-perl2
libapache2-mod-perl2-dev libapache2-mod-perl2-doc perl postfix make gcc g++
**For CentOS:**
$ sudo yum install httpd mod_ssl mysql-server mysql php-mysql gcc perl* mod_perl-devel
**Note: Please run all the commands in a shell or terminal and make sure you have root access (sudo) on the machine.**
### 2. Running Apache server ###
As you have already installed the Apache server in the step above, we now need to configure it and run it. All of the following commands require sudo or root privileges, so we will switch to root access.
$ sudo -s
Now, we need to open port 80 in the firewall and need to save the changes.
# iptables -I INPUT -p tcp --dport 80 -j ACCEPT
# service iptables save
Now, we need to run the service:
For CentOS:
# service httpd start
Lets make sure that Apache will restart every time you restart the machine:
# /sbin/chkconfig httpd on
For Ubuntu:
# service apache2 start
Now, as we have started our apache http server, we will be able to open apache server at IP address of 127.0.0.1 by default.
### 3. Configuring MySQL Server ###
Now, we need to start our MySQL server:
For CentOS:
# chkconfig mysqld on
# service mysqld start
For Ubuntu:
# service mysql start
![mysql](http://blog.linoxide.com/wp-content/uploads/2014/12/mysql.png)
Log in to MySQL with root access and create a database for Bugzilla. Change “mypassword” to anything you want for your MySQL password. You will need it later when configuring Bugzilla.
For Both CentOS 6.5 and Ubuntu 14.04 Trusty
# mysql -u root -p
# password: (You'll need to enter your password)
# mysql > create database bugs;
# mysql > grant all on bugs.* to root@localhost identified by "mypassword";
# mysql > quit
**Note: Please remember the DB name, passwords for mysql , we'll need it later.**
### 4. Installing and configuring Bugzilla ###
Now, as we have all the required packages set and running, we'll want to configure our Bugzilla.
So, first we'll download the latest Bugzilla package; here I am downloading version 4.5.2.
To download using wget in a shell or terminal:
wget http://ftp.mozilla.org/pub/mozilla.org/webtools/bugzilla-4.5.2.tar.gz
You can also download from their official site ie. [http://www.bugzilla.org/download/][1]
**Extracting and renaming the downloaded bugzilla tarball:**
# tar zxvf bugzilla-4.5.2.tar.gz -C /var/www/html/
# cd /var/www/html/
# mv -v bugzilla-4.5.2 bugzilla
**Note**: Here, **/var/www/html/bugzilla/** is the directory where we're gonna **host Bugzilla**.
Now, we'll configure Bugzilla:
# cd /var/www/html/bugzilla/
# ./checksetup.pl --check-modules
![bugzilla-check-module](http://blog.linoxide.com/wp-content/uploads/2014/12/bugzilla2-300x198.png)
After the check is done, we will see some missing modules that need to be installed. They can be installed with the command below:
# cd /var/www/html/bugzilla
# perl install-module.pl --all
This will take a bit of time to download and install all dependencies. Run the **./checksetup.pl --check-modules** command again to verify that there is nothing left to install.
Now we'll need to run the below command which will automatically generate a file called “localconfig” in the /var/www/html/bugzilla directory.
# ./checksetup.pl
Make sure you input the correct database name, user, and password we created earlier in the localconfig file
# nano ./localconfig
# ./checksetup.pl
![bugzilla-success](http://blog.linoxide.com/wp-content/uploads/2014/12/bugzilla-success.png)
If all is well, checksetup.pl should now successfully configure Bugzilla.
Now we need to add Bugzilla to our Apache config file, so we'll open /etc/httpd/conf/httpd.conf (for CentOS) or /etc/apache2/apache2.conf (for Ubuntu) with a text editor:
For CentOS:
# nano /etc/httpd/conf/httpd.conf
For Ubuntu:
# nano /etc/apache2/apache2.conf
Now, we'll add the configuration below to the config file:
<VirtualHost *:80>
DocumentRoot /var/www/html/bugzilla/
</VirtualHost>
<Directory /var/www/html/bugzilla>
AddHandler cgi-script .cgi
Options +Indexes +ExecCGI
DirectoryIndex index.cgi
AllowOverride Limit FileInfo Indexes
</Directory>
Lastly, we need to edit the .htaccess file and comment out the “Options -Indexes” line at the top by adding “#”.
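That edit can also be done non-interactively. Here is a sketch using GNU sed; the helper name is made up, and the path is the install directory used above:

```shell
# Comment out the "Options -Indexes" line at the top of Bugzilla's
# .htaccess by prefixing it with "#" (GNU sed in-place edit).
disable_indexes_option() {
    sed -i 's/^Options -Indexes/#&/' "$1"
}

# Example: disable_indexes_option /var/www/html/bugzilla/.htaccess
```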
Lets restart our Apache server and test our installation.
For CentOS:
# service httpd restart
For Ubuntu:
# service apache2 restart
![bugzilla-install-success](http://blog.linoxide.com/wp-content/uploads/2014/12/bugzilla_apache.png)
Finally, our Bugzilla is ready to receive bug reports on our Ubuntu 14.04 LTS or CentOS 6.5 system. You can browse to Bugzilla by going to the localhost page (i.e., 127.0.0.1) or to your IP address in a web browser.
--------------------------------------------------------------------------------
via: http://linoxide.com/tools/install-bugzilla-ubuntu-centos/
Author: [Arun Pyasi][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://linoxide.com/author/arunp/
[1]:http://www.bugzilla.org/download/

@ -1,132 +0,0 @@
Docker Image Insecurity
================================================================================
Recently while downloading an “official” container image with Docker I saw this line:
ubuntu:14.04: The image you are pulling has been verified
I assumed this referenced Dockers [heavily promoted][1] image signing system and didnt investigate further at the time. Later, while researching the cryptographic digest system that Docker tries to secure images with, I had the opportunity to explore further. What I found was a total systemic failure of all logic related to image security.
Dockers report that a downloaded image is “verified” is based solely on the presence of a signed manifest, and Docker never verifies the image checksum from the manifest. An attacker could provide any image alongside a signed manifest. This opens the door to a number of serious vulnerabilities.
Images are downloaded from an HTTPS server and go through an insecure streaming processing pipeline in the Docker daemon:
[decompress] -> [tarsum] -> [unpack]
This pipeline is performant but completely insecure. Untrusted input should not be processed before verifying its signature. Unfortunately Docker processes images three times before checksum verification is supposed to occur.
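The safe ordering the author is arguing for can be sketched in a few lines of shell: hash the downloaded blob and compare it against the digest from a verified manifest before any decompression or unpacking happens. This illustrates the principle only; it is not Docker's code, and the function name and arguments are hypothetical.

```shell
# Verify a blob's sha256 digest *before* feeding it to any processing
# (decompress/unpack). Refuses to touch the data on a mismatch.
verify_then_unpack() {
    blob="$1"; expected="$2"; dest="$3"
    actual=$(sha256sum "$blob" | awk '{print $1}')
    if [ "$actual" != "$expected" ]; then
        echo "checksum mismatch: refusing to process $blob" >&2
        return 1
    fi
    mkdir -p "$dest"
    tar -xzf "$blob" -C "$dest"
}
```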
However, despite [Dockers claims][2], image checksums are never actually checked. This is the only section[0][3] of Dockers code related to verifying image checksums, and I was unable to trigger the warning even when presenting images with mismatched checksums.
if img.Checksum != "" && img.Checksum != checksum {
log.Warnf("image layer checksum mismatch: computed %q,
expected %q", checksum, img.Checksum)
}
### Insecure processing pipeline ###
**Decompress**
Docker supports three compression algorithms: gzip, bzip2, and xz. The first two use the Go standard library implementations, which are [memory-safe][4], so the exploit types Id expect to see here are denial of service attacks like crashes and excessive CPU and memory usage.
The third compression algorithm, xz, is more interesting. Since there is no native Go implementation, Docker [execs][5] the `xz` binary to do the decompression.
The xz binary comes from the [XZ Utils][6] project, and is built from approximately[1][7] twenty thousand lines of C code. C is not a memory-safe language. This means malicious input to a C program, in this case the Docker image XZ Utils is unpacking, could potentially execute arbitrary code.
Docker exacerbates this situation by *running* `xz` as root. This means that if there is a single vulnerability in `xz`, a call to `docker pull` could result in the complete compromise of your entire system.
**Tarsum**
The use of tarsum is well-meaning but completely flawed. In order to get a deterministic checksum of the contents of an arbitrarily encoded tar file, Docker decodes the tar and then hashes specific portions, while excluding others, in a [deterministic order][8].
Since this processing is done in order to generate the checksum, it is decoding untrusted data which could be designed to exploit the tarsum code[2][9]. Potential exploits here are denial of service as well as logic flaws that could cause files to be injected, skipped, processed differently, modified, appended to, etc. without the checksum changing.
**Unpacking**
Unpacking consists of decoding the tar and placing files on the disk. This is extraordinarily dangerous as there have been three other vulnerabilities reported[3][10] in the unpack stage at the time of writing.
There is no situation where data that has not been verified should be unpacked onto disk.
### libtrust ###
[libtrust][11] is a Docker package that claims to provide “authorization and access control through a distributed trust graph.” Unfortunately, no specification appears to exist; however, it looks like it implements some parts of the [Javascript Object Signing and Encryption][12] specifications along with other unspecified algorithms.
Downloading an image with a manifest signed and verified using libtrust is what triggers this inaccurate message (only the manifest is checked, not the actual image contents):
ubuntu:14.04: The image you are pulling has been verified
Currently only “official” image manifests published by Docker, Inc are signed using this system, but from discussions I participated in at the last Docker Governance Advisory Board meeting[4][13], my understanding is that Docker, Inc is planning on deploying this more widely in the future. The intended goal is centralization with Docker, Inc controlling a Certificate Authority that then signs images and/or client certificates.
I looked for the signing key in Dockers code but was unable to find it. As it turns out the key is not embedded in the binary as one would expect. Instead the Docker daemon fetches it [over HTTPS from a CDN][14] before each image download. This is a terrible approach as a variety of attacks could lead to trusted keys being replaced with malicious ones. These attacks include but are not limited to: compromise of the CDN vendor, compromise of the CDN origin serving the key, and man in the middle attacks on clients downloading the keys.
### Remediation ###
I [reported][15] some of the issues I found with the tarsum system before I finished this research, but so far nothing I have reported has been fixed.
Some steps I believe should be taken to improve the security of the Docker image download system:
**Drop tarsum and actually verify image digests**
Tarsum should not be used for security. Instead, images must be fully downloaded and their cryptographic signatures verified before any processing takes place.
**Add privilege isolation**
Image processing steps that involve decompression or unpacking should be run in isolated processes (containers?) that have only the bare minimum required privileges to operate. There is no scenario where a decompression tool like `xz` should be run as root.
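One cheap mitigation along these lines (not a substitute for real process isolation) is to at least run the decompressor as an unprivileged user; a sketch, where the file names and the `nobody` account are placeholders:

```shell
# Decompress untrusted input as an unprivileged user instead of root.
# "nobody" and the file names are placeholders for illustration.
sudo -u nobody xz --decompress --stdout layer.tar.xz > layer.tar
```

Even if a crafted archive exploits a bug in `xz`, the attacker then holds an unprivileged account rather than root.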
**Replace libtrust**
Libtrust should be replaced with [The Update Framework][16] which is explicitly designed to solve the real problems around signing software binaries. The threat model is very comprehensive and addresses many things that have not been considered in libtrust. There is a complete specification as well as a reference implementation written in Python, and I have begun work on a [Go implementation][17] and welcome contributions.
As part of adding TUF to Docker, a local keystore should be added that maps root keys to registry URLs so that users can have their own signing keys that are not managed by Docker, Inc.
I would like to note that using non-Docker, Inc hosted registries is a very poor user experience in general. Docker, Inc seems content with relegating third party registries to second class status when there is no technical reason to do so. This is a problem both for the ecosystem in general and the security of end users. A comprehensive, decentralized security model for third party registries is both necessary and desirable. I encourage Docker, Inc to take this into consideration when redesigning their security model and image verification system.
### Conclusion ###
Docker users should be aware that the code responsible for downloading images is shockingly insecure. Users should only download images whose provenance is without question. At present, this does *not* include “trusted” images hosted by Docker, Inc including the official Ubuntu and other base images.
The best option is to block `index.docker.io` locally, and download and verify images manually before importing them into Docker using `docker load`. Red Hats security blog has [a good post about this][18].
Thanks to Lewis Marshall for pointing out that tarsums are never verified.
- [Checksum code context][19].
- [cloc][20] says 18,141 non-blank, non-comment lines of C and 5,900 lines of headers in v5.2.0.
- Very similar bugs have been [found in Android][21], which allowed arbitrary files to be injected into signed packages, and in [the Windows Authenticode][22] signature system, which allowed binary modification.
- Specifically: [CVE-2014-6407][23], [CVE-2014-9356][24], and [CVE-2014-9357][25]. There were two Docker [security releases][26] in response.
- See page 8 of the [notes from the 2014-10-28 DGAB meeting][27].
--------------------------------------------------------------------------------
via: https://titanous.com/posts/docker-insecurity
作者:[titanous][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://twitter.com/titanous
[1]:https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
[2]:https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
[3]:https://titanous.com/posts/docker-insecurity#fn:0
[4]:https://en.wikipedia.org/wiki/Memory_safety
[5]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/pkg/archive/archive.go#L91-L95
[6]:http://tukaani.org/xz/
[7]:https://titanous.com/posts/docker-insecurity#fn:1
[8]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/pkg/tarsum/tarsum_spec.md
[9]:https://titanous.com/posts/docker-insecurity#fn:2
[10]:https://titanous.com/posts/docker-insecurity#fn:3
[11]:https://github.com/docker/libtrust
[12]:https://tools.ietf.org/html/draft-ietf-jose-json-web-signature-11
[13]:https://titanous.com/posts/docker-insecurity#fn:4
[14]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/trust/trusts.go#L38
[15]:https://github.com/docker/docker/issues/9719
[16]:http://theupdateframework.com/
[17]:https://github.com/flynn/go-tuf
[18]:https://securityblog.redhat.com/2014/12/18/before-you-initiate-a-docker-pull/
[19]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/image/image.go#L114-L116
[20]:http://cloc.sourceforge.net/
[21]:http://www.saurik.com/id/17
[22]:http://blogs.technet.com/b/srd/archive/2013/12/10/ms13-098-update-to-enhance-the-security-of-authenticode.aspx
[23]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-6407
[24]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9356
[25]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9357
[26]:https://groups.google.com/d/topic/docker-user/nFAz-B-n4Bw/discussion
[27]:https://docs.google.com/document/d/1JfWNzfwptsMgSx82QyWH_Aj0DRKyZKxYQ1aursxNorg/edit?pli=1

How to configure fail2ban to protect Apache HTTP server
================================================================================
An Apache HTTP server in production environments can be under attack in various ways. Attackers may attempt to gain access to unauthorized or forbidden directories by using brute-force attacks or executing malicious scripts. Some malicious bots may scan your websites for security vulnerabilities, or collect email addresses and web forms to send spam to.
Apache HTTP server comes with comprehensive logging capabilities capturing various abnormal events indicative of such attacks. However, it is still non-trivial to systematically parse detailed Apache logs and react to potential attacks quickly (e.g., ban/unban offending IP addresses) as they are perpetrated in the wild. That is when `fail2ban` comes to the rescue, making a sysadmin's life easier.
`fail2ban` is an open-source intrusion prevention tool which detects various attacks based on system logs and automatically initiates prevention actions e.g., banning IP addresses with `iptables`, blocking connections via /etc/hosts.deny, or notifying the events via emails. fail2ban comes with a set of predefined "jails" which use application-specific log filters to detect common attacks. You can also write custom jails to deter any specific attack on an arbitrary application.
In this tutorial, I am going to demonstrate how you can configure fail2ban to protect your Apache HTTP server. I assume that you have Apache HTTP server and fail2ban already installed. Refer to [another tutorial][1] for fail2ban installation.
### What is a Fail2ban Jail ###
Let me go over more detail on fail2ban jails. A jail defines an application-specific policy under which fail2ban triggers an action to protect a given application. fail2ban comes with several jails pre-defined in /etc/fail2ban/jail.conf, for popular applications such as Apache, Dovecot, Lighttpd, MySQL, Postfix, [SSH][2], etc. Each jail relies on application-specific log filters (found in /etc/fail2ban/filter.d) to detect common attacks. Let's check out one example jail: SSH jail.
[ssh]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 6
banaction = iptables-multiport
This SSH jail configuration is defined with several parameters:
- **[ssh]**: the name of a jail with square brackets.
- **enabled**: whether the jail is activated or not.
- **port**: the port to protect (either a numeric port number or a well-known service name).
- **filter**: a log parsing rule to detect attacks with.
- **logpath**: a log file to examine.
- **maxretry**: maximum number of failures before banning.
- **banaction**: a banning action.
Any parameter defined in a jail configuration will override a corresponding `fail2ban-wide` default parameter. Conversely, any parameter missing will be assigned the default value defined in the [DEFAULT] section.
Predefined log filters are found in /etc/fail2ban/filter.d, and available actions are in /etc/fail2ban/action.d.
![](https://farm8.staticflickr.com/7538/16076581722_cbca3c1307_b.jpg)
If you want to overwrite `fail2ban` defaults or define any custom jail, you can do so by creating **/etc/fail2ban/jail.local** file. In this tutorial, I am going to use /etc/fail2ban/jail.local.
### Enable Predefined Apache Jails ###
Default installation of `fail2ban` offers several predefined jails and filters for Apache HTTP server. I am going to enable those built-in Apache jails. Due to slight differences between Debian and Red Hat configurations, let me provide fail2ban jail configurations for them separately.
#### Enable Apache Jails on Debian or Ubuntu ####
To enable predefined Apache jails on a Debian-based system, create /etc/fail2ban/jail.local as follows.
$ sudo vi /etc/fail2ban/jail.local
----------
# detect password authentication failures
[apache]
enabled = true
port = http,https
filter = apache-auth
logpath = /var/log/apache*/*error.log
maxretry = 6
# detect potential search for exploits and php vulnerabilities
[apache-noscript]
enabled = true
port = http,https
filter = apache-noscript
logpath = /var/log/apache*/*error.log
maxretry = 6
# detect Apache overflow attempts
[apache-overflows]
enabled = true
port = http,https
filter = apache-overflows
logpath = /var/log/apache*/*error.log
maxretry = 2
# detect failures to find a home directory on a server
[apache-nohome]
enabled = true
port = http,https
filter = apache-nohome
logpath = /var/log/apache*/*error.log
maxretry = 2
Since none of the jails above specifies an action, all of these jails will perform a default action when triggered. To find out the default action, look for "banaction" under [DEFAULT] section in /etc/fail2ban/jail.conf.
banaction = iptables-multiport
In this case, the default action is iptables-multiport (defined in /etc/fail2ban/action.d/iptables-multiport.conf). This action bans an IP address using iptables with multiport module.
After enabling jails, you must restart fail2ban to load the jails.
$ sudo service fail2ban restart
#### Enable Apache Jails on CentOS/RHEL or Fedora ####
To enable predefined Apache jails on a Red Hat based system, create /etc/fail2ban/jail.local as follows.
$ sudo vi /etc/fail2ban/jail.local
----------
# detect password authentication failures
[apache]
enabled = true
port = http,https
filter = apache-auth
logpath = /var/log/httpd/*error_log
maxretry = 6
# detect spammer robots crawling email addresses
[apache-badbots]
enabled = true
port = http,https
filter = apache-badbots
logpath = /var/log/httpd/*access_log
bantime = 172800
maxretry = 1
# detect potential search for exploits and php vulnerabilities
[apache-noscript]
enabled = true
port = http,https
filter = apache-noscript
logpath = /var/log/httpd/*error_log
maxretry = 6
# detect Apache overflow attempts
[apache-overflows]
enabled = true
port = http,https
filter = apache-overflows
logpath = /var/log/httpd/*error_log
maxretry = 2
# detect failures to find a home directory on a server
[apache-nohome]
enabled = true
port = http,https
filter = apache-nohome
logpath = /var/log/httpd/*error_log
maxretry = 2
# detect failures to execute non-existing scripts that
# are associated with several popular web services
# e.g. webmail, phpMyAdmin, WordPress
[apache-botsearch]
enabled = true
port = http,https
filter = apache-botsearch
logpath = /var/log/httpd/*error_log
maxretry = 2
Note that the default action for all these jails is iptables-multiport (defined as "banaction" under [DEFAULT] in /etc/fail2ban/jail.conf). This action bans an IP address using iptables with multiport module.
After enabling jails, you must restart fail2ban to load the jails in fail2ban.
On Fedora or CentOS/RHEL 7:
$ sudo systemctl restart fail2ban
On CentOS/RHEL 6:
$ sudo service fail2ban restart
### Check and Manage Fail2ban Banning Status ###
Once jails are activated, you can monitor current banning status with fail2ban-client command-line tool.
To see a list of active jails:
$ sudo fail2ban-client status
To see the status of a particular jail (including banned IP list):
$ sudo fail2ban-client status [name-of-jail]
![](https://farm8.staticflickr.com/7572/15891521967_5c6cbc5f8f_c.jpg)
You can also manually ban or unban IP addresses.
To ban an IP address with a particular jail:
$ sudo fail2ban-client set [name-of-jail] banip [ip-address]
To unban an IP address blocked by a particular jail:
$ sudo fail2ban-client set [name-of-jail] unbanip [ip-address]
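The two status commands above can be combined into a small loop; a sketch that assumes the usual `Jail list:` line in the `fail2ban-client status` output:

```shell
# Print the detailed status (including banned IPs) of every active jail.
jails=$(fail2ban-client status | grep 'Jail list' | sed -e 's/.*://' -e 's/,/ /g')
for jail in $jails; do
    fail2ban-client status "$jail"
done
```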
### Summary ###
This tutorial explains how a fail2ban jail works and how to protect an Apache HTTP server using built-in Apache jails. Depending on your environments and types of web services you need to protect, you may need to adapt existing jails, or write custom jails and log filters. Check out fail2ban's [official GitHub page][3] for more up-to-date examples of jails and filters.
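For reference, a hypothetical custom jail would follow the same pattern as the built-in ones; every name and path below is a placeholder to adapt:

```ini
# /etc/fail2ban/jail.local -- hypothetical custom jail
[my-custom-jail]
enabled  = true
port     = http,https
# expects a matching /etc/fail2ban/filter.d/my-custom-filter.conf
filter   = my-custom-filter
logpath  = /var/log/apache2/custom.log
maxretry = 3
bantime  = 3600
```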
Are you using fail2ban in any production environment? Share your experience.
--------------------------------------------------------------------------------
via: http://xmodulo.com/configure-fail2ban-apache-http-server.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/how-to-protect-ssh-server-from-brute-force-attacks-using-fail2ban.html
[2]:http://xmodulo.com/how-to-protect-ssh-server-from-brute-force-attacks-using-fail2ban.html
[3]:https://github.com/fail2ban/fail2ban

hi! Let me translate this one.
How to debug a C/C++ program with Nemiver debugger
================================================================================
If you read [my post on GDB][1], you know how important and useful I think a debugger can be for a C/C++ program. However, if a command line debugger like GDB sounds more like a problem than a solution to you, you might be more interested in Nemiver. [Nemiver][2] is a GTK+-based standalone graphical debugger for C/C++ programs, using GDB as its back-end. Admirable for its speed and stability, Nemiver is a very reliable debugger filled with goodies.
via: http://xmodulo.com/debug-program-nemiver-debugger.html
[1]:http://xmodulo.com/gdb-command-line-debugger.html
[2]:https://wiki.gnome.org/Apps/Nemiver
[3]:https://download.gnome.org/sources/nemiver/0.9/
[4]:http://xmodulo.com/recommend/linuxclibook
How To Install Kodi 14 (XBMC) In Ubuntu 14.04 & Linux Mint 17
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Kodi_Xmas.jpg)
[Kodi][1], formerly and popularly known as XBMC, has [released its latest version 14][2] which is code named Helix. It is fairly easy to **install Kodi 14 in Ubuntu 14.04** thanks to the official PPA provided by XBMC.
For those who do not know already, Kodi is a media center application available for all major platforms like Windows, Linux, Mac, Android etc. It turns your device into a full-screen media center where you can manage all your music and videos, either locally or on a network drive, watch YouTube, [Netflix][3], Hulu, Amazon Prime and other streaming services.
### Install XBMC 14 Kodi Helix in Ubuntu 14.04, 14.10 and Linux Mint 17 ###
Thanks to the official PPA, you can easily install Kodi 14 in Ubuntu 14.04, Ubuntu 12.04, Linux Mint 17, Pinguy OS 14.04, Deepin 2014, LXLE 14.04, Linux Lite 2.0, Elementary OS and other Ubuntu based Linux distributions. Open a terminal (Ctrl+Alt+T) and use the following commands:
sudo add-apt-repository ppa:team-xbmc/ppa
sudo apt-get update
sudo apt-get install kodi
The download size would be around 100 MB, which is not huge in my opinion. To install the audio encoder and PVR addons, use the command below:
sudo apt-get install kodi-audioencoder-* kodi-pvr-*
#### Remove Kodi 14 from Ubuntu ####
To uninstall Kodi 14 from your system, use the command below:
sudo apt-get remove kodi
You should also remove the PPA from the software sources:
sudo add-apt-repository --remove ppa:team-xbmc/ppa
I hope this quick post helped you to install Kodi 14 in Ubuntu, Linux Mint and other Linux distributions. How do you find Kodi 14 Helix? Do you use some other media center as an alternative to XBMC? Do share your views in the comment section.
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-kodi-14-xbmc-in-ubuntu-14-04-linux-mint-17/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://kodi.tv/
[2]:http://kodi.tv/kodi-14-0-helix-unwinds/
[3]:http://itsfoss.com/watch-netflix-in-ubuntu-14-04/

How To Install Winusb In Ubuntu 14.04
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/WinUSB_Ubuntu_1404.jpeg)
[WinUSB][1] is a simple and useful tool that lets you create a USB-stick Windows installer from a Windows ISO image or DVD. It comprises both a GUI and a command line tool, and you can choose whichever suits your preference.
In this quick post we shall see **how to install WinUSB in Ubuntu 14.04, 14.10 and Linux Mint 17**.
### Install WinUSB in Ubuntu 14.04 and Ubuntu 14.10 ###
Until Ubuntu 13.10, WinUSB was developed actively and was available for installation via its official PPA. The PPA has not been updated for Ubuntu 14.04 Trusty Tahr and 14.10, but the binaries are still there and work fine in newer versions of Ubuntu and Linux Mint. Based on [whether your Ubuntu system is 32 bit or 64 bit][2], use the commands below to download the binaries:
Open a terminal and use the following command for 32 bit system:
wget https://launchpad.net/~colingille/+archive/freshlight/+files/winusb_1.0.11+saucy1_i386.deb
For 64 bit systems, use the command below:
wget https://launchpad.net/~colingille/+archive/freshlight/+files/winusb_1.0.11+saucy1_amd64.deb
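If you'd rather not check the architecture by hand, the choice can be scripted; a sketch, where the file names are the two binaries linked above:

```shell
# Pick the right WinUSB package for this machine's architecture.
arch=$(uname -m)
case "$arch" in
    x86_64) deb="winusb_1.0.11+saucy1_amd64.deb" ;;
    *)      deb="winusb_1.0.11+saucy1_i386.deb" ;;
esac
wget "https://launchpad.net/~colingille/+archive/freshlight/+files/$deb"
```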
Once you have downloaded the correct binaries, you can install WinUSB using the command below:
sudo dpkg -i winusb*
Don't worry if you see errors when you try to install WinUSB. Fix the dependency errors with this command:
sudo apt-get -f install
Afterwards, you can search for WinUSB in Unity Dash and use it to create a live USB of Windows in Ubuntu 14.04.
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/WinUSB_Ubuntu.png)
I hope this quick post helped you to **install WinUSB in Ubuntu 14.04, 14.10 and Linux Mint 17**.
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-winusb-in-ubuntu-14-04/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://en.congelli.eu/prog_info_winusb.html
[2]:http://itsfoss.com/how-to-know-ubuntu-unity-version/

Interface (NICs) Bonding in Linux using nmcli
================================================================================
Today, we'll learn how to perform interface (NIC) bonding on CentOS 7.x using nmcli (the NetworkManager command line interface).
NIC (interface) bonding is a method for logically linking **NICs** together to allow fail-over or higher throughput. Using multiple network interfaces is one way to increase the network availability of a server. The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical bonded interface. Teaming, used here, is a newer implementation that does not affect the older bonding driver in the Linux kernel; it offers an alternative implementation.
**NIC bonding is done to provide two main benefits for us:**
1. **High bandwidth**
1. **Redundancy/resilience**
Now let's configure NIC bonding in CentOS 7. We'll need to decide which interfaces we would like to use for the team interface.
Run the **ip link** command to check the available interfaces in the system.
$ ip link
![ip link](http://blog.linoxide.com/wp-content/uploads/2015/01/ip-link.png)
Here we are using **eno16777736** and **eno33554960** NICs to create a team interface in **activebackup** mode.
Use the **nmcli** command to create a connection for the network team interface, with the following syntax:
# nmcli con add type team con-name CNAME ifname INAME [config JSON]
Where **CNAME** is the name used to refer to the connection, **INAME** is the interface name, and **JSON** (JavaScript Object Notation) specifies the runner to be used. **JSON** has the following syntax:
'{"runner":{"name":"METHOD"}}'
where **METHOD** is one of the following: **broadcast, activebackup, roundrobin, loadbalance** or **lacp**.
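For example, a hypothetical round-robin team (the connection and interface name `team1` are placeholders) would use the same syntax with a different runner:

```shell
# Create a team connection using the roundrobin runner instead of activebackup.
nmcli con add type team con-name team1 ifname team1 \
    config '{"runner":{"name":"roundrobin"}}'
```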
### 1. Creating Team Interface ###
Now let us create the team interface. Here is the command we used to create it:
# nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"activebackup"}}'
![nmcli con create](http://blog.linoxide.com/wp-content/uploads/2015/01/nmcli-con-create.png)
Run the **nmcli con show** command to verify the team configuration:
# nmcli con show
![Show Teamed Interface](http://blog.linoxide.com/wp-content/uploads/2015/01/show-team-interface.png)
### 2. Adding Slave Devices ###
Now let's add the slave devices to the master team0. Here is the syntax for adding the slave devices:
# nmcli con add type team-slave con-name CNAME ifname INAME master TEAM
Here we are adding **eno16777736** and **eno33554960** as slave devices for **team0** interface.
# nmcli con add type team-slave con-name team0-port1 ifname eno16777736 master team0
# nmcli con add type team-slave con-name team0-port2 ifname eno33554960 master team0
![adding slave devices to team](http://blog.linoxide.com/wp-content/uploads/2015/01/adding-to-team.png)
Verify the connection configuration using **nmcli con show** again. Now we can see the slave configuration:
# nmcli con show
![show slave config](http://blog.linoxide.com/wp-content/uploads/2015/01/show-slave-config.png)
### 3. Assigning IP Address ###
All the above commands will create the required configuration files under **/etc/sysconfig/network-scripts/**.
Let's assign an IP address to this team0 interface and bring the connection up. Here are the commands to perform the IP assignment:
# nmcli con mod team0 ipv4.addresses "192.168.1.24/24 192.168.1.1"
# nmcli con mod team0 ipv4.method manual
# nmcli con up team0
![ip assignment](http://blog.linoxide.com/wp-content/uploads/2015/01/ip-assignment.png)
### 4. Verifying the Bonding ###
Verify the IP address information with the **ip add show team0** command:
# ip add show team0
![verfiy ip address](http://blog.linoxide.com/wp-content/uploads/2015/01/verfiy-ip-adress.png)
Now let's check the **activebackup** configuration functionality using the **teamdctl** command.
# teamdctl team0 state
![teamdctl active backup check](http://blog.linoxide.com/wp-content/uploads/2015/01/teamdctl-activebackup-check.png)
Now let's disconnect the active port and check the state again, to confirm whether the active backup configuration is working as expected.
# nmcli dev dis eno33554960
![disconnect activeport](http://blog.linoxide.com/wp-content/uploads/2015/01/disconnect-activeport.png)
Having disconnected the active port, check the state again using **teamdctl team0 state**:
# teamdctl team0 state
![teamdctl check activeport disconnect](http://blog.linoxide.com/wp-content/uploads/2015/01/teamdctl-check-activeport-disconnect.png)
Yes, it's working nicely! We will reconnect the disconnected port to team0 using the following command:
# nmcli dev con eno33554960
![nmcli dev connect disconected](http://blog.linoxide.com/wp-content/uploads/2015/01/nmcli-dev-connect-disconected.png)
We have one more command, **teamnl**; let us show some of its options.
To check the ports in team0, run the following command:
# teamnl team0 ports
![teamnl check ports](http://blog.linoxide.com/wp-content/uploads/2015/01/teamnl-check-ports.png)
Display the currently active port of **team0**:
# teamnl team0 getoption activeport
![display active port team0](http://blog.linoxide.com/wp-content/uploads/2015/01/display-active-port-team0.png)
Hurray, we have successfully configured NIC bonding :-) Please share your feedback.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-command/interface-nics-bonding-linux/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/

What are useful command-line network monitors on Linux
================================================================================
Network monitoring is a critical IT function for businesses of all sizes. The goal of network monitoring can vary. For example, the monitoring activity can be part of long-term network provisioning, security protection, performance troubleshooting, network usage accounting, and so on. Depending on its goal, network monitoring is done in many different ways, such as performing packet-level sniffing, collecting flow-level statistics, actively injecting probes into the network, parsing server logs, etc.
While there are many dedicated network monitoring systems capable of 24/7/365 monitoring, you can also leverage command-line network monitors in certain situations, where a dedicated monitor is overkill. If you are a system admin, you are expected to have hands-on experience with some of the well-known CLI network monitors. Here is a list of **popular and useful command-line network monitors on Linux**.
### Packet-Level Sniffing ###
In this category, monitoring tools capture individual packets on the wire, dissect their content, and display decoded packet content or packet-level statistics. These tools conduct network monitoring from the lowest level, and as such, can possibly do the most fine-grained monitoring at the cost of network I/O and analysis efforts.
1. **dhcpdump**: a command-line DHCP traffic sniffer which captures DHCP request/response traffic and displays dissected DHCP protocol messages in a human-friendly format. It is useful when you are troubleshooting DHCP-related issues.
2. **[dsniff][1]**: a collection of command-line based sniffing, spoofing and hijacking tools designed for network auditing and penetration testing. They can sniff various information such as passwords, NFS traffic, email messages, website URLs, and so on.
3. **[httpry][2]**: an HTTP packet sniffer which captures and decodes HTTP request and response packets, and displays them in a human-readable format.
4. **IPTraf**: a console-based network statistics viewer. It displays packet-level, connection-level, interface-level and protocol-level packet/byte counters in real time. Packet capturing can be controlled by protocol filters, and its operation is fully menu-driven.
![](https://farm8.staticflickr.com/7519/16055246118_8ea182b413_c.jpg)
5. **[mysql-sniffer][3]**: a packet sniffer which captures and decodes packets associated with MySQL queries. It displays the most frequent or all queries in a human-readable format.
6. **[ngrep][4]**: grep over network packets. It can capture live packets, and match (filtered) packets against regular expressions or hexadecimal expressions. It is useful for detecting and storing any anomalous traffic, or for sniffing particular patterns of information from live traffic.
7. **[p0f][5]**: a passive fingerprinting tool which, based on packet sniffing, reliably identifies operating systems, NAT or proxy settings, network link types and various other properties associated with an active TCP connection.
8. **pktstat**: a command-line tool which analyzes live packets to display connection-level bandwidth usages as well as descriptive information of protocols involved (e.g., HTTP GET/POST, FTP, X11).
![](https://farm8.staticflickr.com/7477/16048970999_be60f74952_b.jpg)
9. **Snort**: an intrusion detection and prevention tool which can detect/prevent a variety of backdoor, botnet, phishing and spyware attacks in live traffic, based on rule-driven protocol analysis and content matching.
10. **tcpdump**: a command-line packet sniffer which is capable of capturing network packets on the wire based on filter expressions, dissecting the packets, and dumping the packet content for packet-level analysis. It is widely used for all kinds of network-related troubleshooting, network application debugging, or [security][6] monitoring.
11. **tshark**: a command-line packet sniffing tool that comes with Wireshark GUI program. It can capture and decode live packets on the wire, and show decoded packet content in a human-friendly fashion.
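A sniffer like httpry essentially locates the HTTP payload inside a captured packet and decodes its request line and headers; that decoding step can be sketched in the shell, with a here-document standing in for a captured payload (live capture would need root privileges):

```shell
# Decode the request line and Host header of an HTTP payload, the way an
# HTTP sniffer such as httpry presents them. The payload is a canned example.
awk -F': ' '
    NR == 1      { split($0, a, " "); print "method=" a[1], "path=" a[2] }
    $1 == "Host" { print "host=" $2 }
' <<'EOF'
GET /index.html HTTP/1.1
Host: example.com
User-Agent: curl/7.68.0
EOF
# output:
#   method=GET path=/index.html
#   host=example.com
```

A real sniffer does the same parsing after reassembling the TCP stream from captured packets.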
### Flow-/Process-/Interface-Level Monitoring ###
In this category, network monitoring is done by classifying network traffic into flows, associated processes or interfaces, and collecting per-flow, per-process or per-interface statistics. The source of information can be the libpcap packet capture library or the sysfs kernel virtual filesystem. The monitoring overhead of these tools is low, but packet-level inspection capabilities are missing.
12. **bmon**: a console-based bandwidth monitoring tool which shows various per-interface information, including not only aggregate/average RX/TX statistics, but also a historical view of bandwidth usage.
![](https://farm9.staticflickr.com/8580/16234265932_87f20c5d17_b.jpg)
13. **[iftop][7]**: a bandwidth usage monitoring tool that can show bandwidth usage for individual network connections in real time. It comes with an ncurses-based interface to visualize bandwidth usage of all connections in sorted order. It is useful for monitoring which connections are consuming the most bandwidth.
14. **nethogs**: a process monitoring tool which offers a real-time view of upload/download bandwidth usage of individual processes or programs in an ncurses-based interface. This is useful for detecting bandwidth hogging processes.
15. **netstat**: a command-line tool that shows various statistics and properties of the networking stack, such as open TCP/UDP connections, network interface RX/TX statistics, routing tables, protocol/socket statistics. It is useful when you diagnose performance and resource usage related problems of the networking stack.
16. **[speedometer][8]**: a console-based traffic monitor which visualizes the historical trend of an interface's RX/TX bandwidth usage with ncurses-drawn bar charts.
![](https://farm8.staticflickr.com/7485/16048971069_31dd573a4f_c.jpg)
17. **[sysdig][9]**: a comprehensive system-level debugging tool with a unified interface for investigating different Linux subsystems. Its network monitoring module is capable of monitoring, either online or offline, various per-process/per-host networking statistics such as bandwidth usage, number of connections/requests, etc.
18. **tcptrack**: a TCP connection monitoring tool which displays information of active TCP connections, including source/destination IP addresses/ports, TCP state, and bandwidth usage.
![](https://farm8.staticflickr.com/7507/16047703080_5fdda2e811_b.jpg)
19. **vnStat**: a command-line traffic monitor which maintains a historical view of RX/TX bandwidth usage (e.g., current, daily, monthly) on a per-interface basis. Running as a background daemon, it collects and stores interface statistics on bandwidth rate and total bytes transferred.
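Interface-level monitors like these typically obtain their counters from the kernel's /proc/net/dev; the parsing they do can be sketched with awk, run here on a canned sample of that file so the sketch works anywhere:

```shell
# Print per-interface RX/TX byte counters in the format of /proc/net/dev.
# On a real Linux host you would run the awk program on /proc/net/dev itself;
# a canned two-interface sample is piped in here.
awk 'NR > 2 { sub(":", "", $1); print $1, "rx_bytes=" $2, "tx_bytes=" $10 }' <<'EOF'
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo:   12345     100    0    0    0     0          0         0    12345     100    0    0    0     0       0          0
  eth0: 9876543    5000    0    0    0     0          0         0  1234567    4000    0    0    0     0       0          0
EOF
# output:
#   lo rx_bytes=12345 tx_bytes=12345
#   eth0 rx_bytes=9876543 tx_bytes=1234567
```

Sampling these counters twice and dividing the difference by the interval is how a tool derives a bandwidth rate.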
### Active Network Monitoring ###
Unlike passive monitoring tools presented so far, tools in this category perform network monitoring by actively "injecting" probes into the network and collecting corresponding responses. Monitoring targets include routing path, available bandwidth, loss rates, delay, jitter, system settings or vulnerabilities, and so on.
20. **[dnsyo][10]**: a DNS monitoring tool which can conduct DNS lookups from open resolvers scattered across more than 1,500 different networks. It is useful when you check DNS propagation or troubleshoot DNS configuration.
21. **[iperf][11]**: a TCP/UDP bandwidth measurement utility which can measure maximum available bandwidth between two end points. It measures available bandwidth by having two hosts pump out TCP/UDP probe traffic between them either unidirectionally or bi-directionally. It is useful when you test the network capacity, or tune the parameters of network stack. A variant called [netperf][12] exists with more features and better statistics.
22. **[netcat][13]/socat**: versatile network debugging tools capable of reading from, writing to, or listening on TCP/UDP sockets. They are often used alongside other programs or scripts for backend network transfer or port listening.
23. **nmap**: a command-line port scanning and network discovery utility. It relies on a number of TCP/UDP based scanning techniques to detect open ports, live hosts, and the operating systems they run on the local network. It is useful when you audit local hosts for vulnerabilities or build a host map for maintenance purposes. [zmap][14] is an alternative scanning tool with Internet-wide scanning capability.
24. **ping**: a network testing tool which works by exchanging ICMP echo and reply packets with a remote host. It is useful when you measure the round-trip time (RTT) and loss rate of a routing path, as well as test the status or firewall rules of a remote system. Variations of ping exist with a fancier interface (e.g., [noping][15]), multi-protocol support (e.g., [hping][16]) or parallel probing capability (e.g., [fping][17]).
![](https://farm8.staticflickr.com/7466/15612665344_a4bb665a5b_c.jpg)
25. **[sprobe][18]**: a command-line tool that heuristically infers the bottleneck bandwidth between a local host and any arbitrary remote IP address. It uses TCP three-way handshake tricks to estimate the bottleneck bandwidth. It is useful when troubleshooting wide-area network performance and routing related problems.
26. **traceroute**: a network discovery tool which reveals a layer-3 routing/forwarding path from a local host to a remote host. It works by sending TTL-limited probe packets and collecting ICMP responses from intermediate routers. It is useful when troubleshooting slow network connections or routing related problems. Variations of traceroute exist with better RTT statistics (e.g., [mtr][19]).
### Application Log Parsing ###
In this category, network monitoring is targeted at a specific server application (e.g., a web server or database server). Network traffic generated or consumed by a server application is monitored by analyzing its log file. Unlike the network-level monitors presented in earlier categories, tools in this category can analyze and monitor network traffic at the application level.
27. **[GoAccess][20]**: a console-based interactive viewer for Apache and Nginx web server traffic. Based on access log analysis, it presents real-time statistics for a number of metrics, including daily visits, top requests, client operating systems, client locations and client browsers, in a scrollable view.
![](https://farm8.staticflickr.com/7518/16209185266_da6c5c56eb_c.jpg)
28. **[mtop][21]**: a command-line MySQL/MariaDB server monitor which visualizes the most expensive queries and the current database server load. It is useful when you optimize MySQL server performance and tune server configurations.
![](https://farm8.staticflickr.com/7472/16047570248_bc996795f2_c.jpg)
29. **[ngxtop][22]**: a traffic monitoring tool for the Nginx and Apache web servers, which visualizes web server traffic in a top-like interface. It works by parsing a web server's access log file and collecting traffic statistics for individual destinations or requests.
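GoAccess, mtop and ngxtop all begin by parsing a server log; that first step can be sketched with awk for the combined access log format that Apache and Nginx write by default (the log line below is a made-up example, and on a real server you would feed the actual access log file instead):

```shell
# Extract the client IP, HTTP method, path and status code from one
# combined-format access log line. The line is a canned example.
awk '{ gsub(/"/, ""); print "ip=" $1, "method=" $6, "path=" $7, "status=" $9 }' <<'EOF'
203.0.113.7 - - [18/Jan/2015:20:44:31 +0800] "GET /index.html HTTP/1.1" 200 5120 "-" "curl/7.68.0"
EOF
# output:
#   ip=203.0.113.7 method=GET path=/index.html status=200
```

Aggregating these fields over every line of the log (counts per path, per status, per client) yields exactly the kind of statistics these monitors display.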
### Conclusion ###
In this article, I presented a wide variety of command-line network monitoring tools, ranging from the lowest packet-level monitors to the highest application-level network monitors. Knowing which tool does what is one thing, and choosing which tool to use is another, as no single tool can be a universal solution for your every need. A good system admin should be able to decide which tool is right for the circumstance at hand. Hopefully the list helps with that.
You are always welcome to improve the list with your comment!
--------------------------------------------------------------------------------
via: http://xmodulo.com/useful-command-line-network-monitors-linux.html
Author: [Dan Nanni][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://xmodulo.com/author/nanni
[1]:http://www.monkey.org/~dugsong/dsniff/
[2]:http://xmodulo.com/monitor-http-traffic-command-line-linux.html
[3]:https://github.com/zorkian/mysql-sniffer
[4]:http://ngrep.sourceforge.net/
[5]:http://lcamtuf.coredump.cx/p0f3/
[6]:http://xmodulo.com/recommend/firewallbook
[7]:http://xmodulo.com/how-to-install-iftop-on-linux.html
[8]:https://excess.org/speedometer/
[9]:http://xmodulo.com/monitor-troubleshoot-linux-server-sysdig.html
[10]:http://xmodulo.com/check-dns-propagation-linux.html
[11]:https://iperf.fr/
[12]:http://www.netperf.org/netperf/
[13]:http://xmodulo.com/useful-netcat-examples-linux.html
[14]:https://zmap.io/
[15]:http://noping.cc/
[16]:http://www.hping.org/
[17]:http://fping.org/
[18]:http://sprobe.cs.washington.edu/
[19]:http://xmodulo.com/better-alternatives-basic-command-line-utilities.html#mtr_link
[20]:http://goaccess.io/
[21]:http://mtop.sourceforge.net/
[22]:http://xmodulo.com/monitor-nginx-web-server-command-line-real-time.html


@ -0,0 +1,180 @@
How To Recover Windows 7 And Delete Ubuntu In 3 Easy Steps
================================================================================
### Introduction ###
This is a strange article for me to write as I am normally in a position where I would advocate installing Ubuntu and getting rid of Windows.
What makes writing this article today doubly strange is that I am choosing to write it on the day that Windows 7 mainstream support comes to an end.
So why am I writing this now?
I have been asked on so many occasions now how to remove Ubuntu from a dual booting Windows 7 or a dual booting Windows 8 system and it just makes sense to write the article.
I spent the Christmas period looking through the comments that people have left on articles and it is time to write the posts that are missing and update some of those that have become old and need attention.
I am going to spend the rest of January doing just that. This is the first step. If you have Windows 7 dual booting with Ubuntu and you want Windows 7 back without restoring to factory settings, follow this guide. (Note: a separate guide is required for Windows 8.)
### The Steps Required To Remove Ubuntu ###
1. Remove Grub By Fixing The Windows Boot Record
1. Delete The Ubuntu Partitions
1. Expand The Windows Partition
### Back Up Your System ###
Before you begin I recommend taking a backup of your system.
I also recommend not leaving this to chance, nor relying solely on Microsoft's own tools.
[Click here for a guide showing how to backup your drive using Macrium Reflect.][1]
If you have any data you wish to save within Ubuntu log into it now and back up the data to external hard drives, USB drives or DVDs.
### Step 1 - Remove The Grub Boot Menu ###
![](http://1.bp.blogspot.com/-arVqwMLpJRQ/VLWbHWkqYsI/AAAAAAAAHmw/kn3jDPOltX4/s1600/grubmenu.jpg)
When you boot your system you will see a menu similar to the one in the image.
To remove this menu and boot straight into Windows you have to fix the master boot record.
To do this I am going to show you how to create a system recovery disk, how to boot to the recovery disk and how to fix the master boot record.
![](http://2.bp.blogspot.com/-ML2JnNc8OWY/VLWcAovwGNI/AAAAAAAAHm4/KH778_MkU7U/s1600/recoverywindow1.PNG)
Press the "Start" button and search for "backup and restore". Click the icon that appears.
A window should open as shown in the image above.
Click on "Create a system repair disc".
You will need a [blank DVD][2].
![](http://2.bp.blogspot.com/-r0GUDZ4AAMI/VLWfJ0nuJLI/AAAAAAAAHnE/RloNqdXLLcY/s1600/recoverywindow2.PNG)
Insert the blank DVD in the drive and select your DVD drive from the dropdown list.
Click "Create Disc".
Restart your computer leaving the disk in and when the message appears to boot from CD press "Enter" on the keyboard.
![](http://2.bp.blogspot.com/-VPSD50bmk2E/VLWftBg7HxI/AAAAAAAAHnM/APVzvPg4rC0/s1600/recoveryoptionschooselanguage.jpg)
A set of "Systems Recovery Options" screens will appear.
You will be asked to choose your keyboard layout.
Choose the appropriate options from the lists provided and click "Next".
![](http://2.bp.blogspot.com/-klK4SihPv0E/VLWgLiPO1mI/AAAAAAAAHnU/DUgxH6N2SFE/s1600/RecoveryOptions.jpg)
The next screen lets you choose an operating system to attempt to fix.
Alternatively you can restore your computer using a system image saved earlier.
Leave the top option checked and click "Next".
![](http://2.bp.blogspot.com/-WOk-Unm6cCQ/VLWgvzoBgzI/AAAAAAAAHng/vfxm1jhW1Ms/s1600/RecoveryOptions2.jpg)
You will now see a screen with options to repair your disk and restore your system etc.
All you need to do is fix the master boot record and this can be done from the command prompt.
Click "Command Prompt".
![](http://4.bp.blogspot.com/-duT-EUC0yuo/VLWhHygCApI/AAAAAAAAHno/bO7UlouyR9M/s1600/FixMBR.jpg)
Now simply type the following command into the command prompt:
bootrec.exe /fixmbr
A message will appear stating that the operation has completed successfully.
You can now close the command prompt window.
Click the "Restart" button and remove the DVD.
Your computer should boot straight into Windows 7.
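For reference, bootrec.exe supports a few related switches besides /fixmbr; the others can help if the machine still fails to boot after this step (all run from the same recovery command prompt):

```
bootrec.exe /fixmbr      (rewrites the master boot record - the step used above)
bootrec.exe /fixboot     (writes a new boot sector to the system partition)
bootrec.exe /scanos      (scans all disks for Windows installations)
bootrec.exe /rebuildbcd  (rebuilds the Boot Configuration Data store)
```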
### Step 2 - Delete The Ubuntu Partitions ###
![](http://4.bp.blogspot.com/-1OM0b3qBeHk/VLWh89gtgVI/AAAAAAAAHn0/ECHIARNCRp8/s1600/diskmanagement1.PNG)
To delete Ubuntu you need to use the "Disk Management" tool from within Windows.
Press "Start" and type "Create and format hard disk partitions" into the search box. A window will appear similar to the image above.
Now my screen above isn't going to be quite the same as yours but it won't be much different. If you look at disk 0 there is 101 MB of unallocated space and then 4 partitions.
The 101 MB of space is a mistake I made when installing Windows 7 in the first place. The C: drive is Windows 7, the next partition (46.57 GB) is Ubuntu's root partition. The 287 GB partition is the /home partition and the 8 GB partition is the swap space.
The only one we really need for Windows is the C: drive so the rest can be deleted.
**Note: Be careful. You may have recovery partitions on the disk. Do not delete the recovery partitions. They should be labelled and will have file systems set to NTFS or FAT32**
![](http://3.bp.blogspot.com/-8YUE2p5Fj8Q/VLWlHXst6JI/AAAAAAAAHoQ/BJC57d9Nilg/s1600/deletevolume.png)
Right click on one of the partitions you wish to delete (i.e. the root, home and swap partitions) and from the menu click "Delete Volume".
**(Do not delete any partitions that have a file system of NTFS or FAT32)**
Repeat this process for the other two partitions.
![](http://3.bp.blogspot.com/-IGbJLkc_soY/VLWk1Vh0XAI/AAAAAAAAHoA/v7TVFT0rC0E/s1600/diskmanagement2.PNG)
After the partitions have been deleted you will have a large area of free space. Right click the free space and choose delete.
![](http://4.bp.blogspot.com/-2xUBkWHpnC4/VLWk9cYXGZI/AAAAAAAAHoI/8F2ANkorGeM/s1600/diskmanagement3.PNG)
Your disk will now contain your C drive and a large amount of unallocated space.
### Step 3 - Expand The Windows Partition ###
![](http://4.bp.blogspot.com/-pLV5L3CvQ1Y/VLWmh-5SKTI/AAAAAAAAHoc/7sJzITyvduo/s1600/diskmanagement4.png)
The final step is to expand Windows so that it is one large partition again.
To do this right click on the Windows partition (C: drive) and choose "Extend Volume".
![](http://1.bp.blogspot.com/-vgmw_N2WZWw/VLWm7i5oSxI/AAAAAAAAHok/k0q_gnIik9A/s1600/extendvolume1.PNG)
When the window shown above appears, click "Next".
![](http://3.bp.blogspot.com/-WLA86V-Au8g/VLWnTq5RpAI/AAAAAAAAHos/6vzjLNkrwRQ/s1600/extendvolume2.PNG)
The next screen shows a wizard whereby you can select the disks to expand to and change the size to expand to.
By default the wizard shows the maximum amount of disk space it can claim from unallocated space.
Accept the defaults and click "Next".
![](http://4.bp.blogspot.com/-1rhTJvwem0k/VLWnvx7fWFI/AAAAAAAAHo0/D-4HA8E8y2c/s1600/extendvolume3.PNG)
The final screen shows the settings that you chose from the previous screen.
Click "Finish" to expand the disk.
![](http://2.bp.blogspot.com/-CpuLXSYyPKY/VLWoEGU3sCI/AAAAAAAAHo8/7o5G4W4b7zU/s1600/diskmanagement5.PNG)
As you can see from the image above my Windows partition now takes up the entire disk (except for the 101 MB that I accidentally created before installing Windows in the first place).
### Summary ###
![](http://1.bp.blogspot.com/-h1Flo2aGFcI/VLWogr2zfMI/AAAAAAAAHpE/2ypTSgR8_iM/s1600/fullwindowsscreen.PNG)
That is all folks. A site dedicated to Linux has just shown you how to remove Linux and replace it with Windows 7.
Any questions? Use the comments section below.
--------------------------------------------------------------------------------
via: http://www.everydaylinuxuser.com/2015/01/how-to-recover-windows-7-and-delete.html
Author: Gary Newell
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[1]:http://linux.about.com/od/LinuxNewbieDesktopGuide/ss/Create-A-Recovery-Drive-For-All-Versions-Of-Windows.htm
[2]:http://www.amazon.co.uk/gp/product/B0006L2HTK/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&camp=1634&creative=6738&creativeASIN=B0006L2HTK&linkCode=as2&tag=evelinuse-21&linkId=3R363EA63XB4Z3IL

File diff suppressed because it is too large


@ -0,0 +1,93 @@
SPccman.......translating
How to Manage Network using nmcli Tool in RedHat / CentOS 7.x
================================================================================
A new feature of [**Red Hat Enterprise Linux 7**][1] and **CentOS 7** is that the default networking service is provided by **NetworkManager**, a dynamic network control and configuration daemon that attempts to keep network devices and connections up and active when they are available while still supporting the traditional ifcfg type configuration files. NetworkManager can be used with the following types of connections: Ethernet, VLANs, Bridges, Bonds, Teams, Wi-Fi, mobile broadband (such as cellular 3G), and IP-over-InfiniBand. For these connection types, NetworkManager can configure network aliases, IP addresses, static routes, DNS information, and VPN connections, as well as many connection-specific parameters.
The NetworkManager can be controlled with the command-line tool, nmcli.
### General nmcli usage ###
The general syntax for nmcli is:
# nmcli [ OPTIONS ] OBJECT { COMMAND | help }
One cool thing is that you can use the TAB key to complete actions when you write the command so if at any time you forget the syntax you can just press TAB to see a list of available options.
![nmcli tab](http://blog.linoxide.com/wp-content/uploads/2014/12/nmcli-tab.jpg)
Some examples of general nmcli usage:
# nmcli general status
Will display the overall status of NetworkManager.
# nmcli connection show
Will display all connections.
# nmcli connection show -a
Will display only the active connections.
# nmcli device status
Will display a list of devices recognized by NetworkManager and their current state.
![nmcli general](http://blog.linoxide.com/wp-content/uploads/2014/12/nmcli-gneral.jpg)
### Starting / stopping network interfaces ###
You can use the nmcli tool to start or stop network interfaces from the command line; this is the equivalent of ifconfig's up/down.
To stop an interface use the following syntax:
# nmcli device disconnect eno16777736
To start it you can use this syntax:
# nmcli device connect eno16777736
### Adding an ethernet connection with static IP ###
To add a new ethernet connection with a static IP address you can use the following command:
# nmcli connection add type ethernet con-name NAME_OF_CONNECTION ifname interface-name ip4 IP_ADDRESS gw4 GW_ADDRESS
replacing the NAME_OF_CONNECTION with the name you wish to apply to the new connection, the IP_ADDRESS with the IP address you wish to use and the GW_ADDRESS with the gateway address you use (if you don't use a gateway you can omit this last part).
# nmcli connection add type ethernet con-name NEW ifname eno16777736 ip4 192.168.1.141 gw4 192.168.1.1
To set the DNS servers for this connection you can use the following command:
# nmcli connection modify NEW ipv4.dns "8.8.8.8 8.8.4.4"
To bring up the new Ethernet connection, issue a command as follows:
# nmcli connection up NEW ifname eno16777736
To view detailed information about the newly configured connection, issue a command as follows:
# nmcli -p connection show NEW
![nmcli add static](http://blog.linoxide.com/wp-content/uploads/2014/12/nmcli-add-static.jpg)
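For reference, NetworkManager persists this profile as a traditional ifcfg file under /etc/sysconfig/network-scripts/; for the static connection above, the generated ifcfg-NEW looks roughly like the sketch below (exact key names vary slightly between NetworkManager versions, and the PREFIX value is an assumption, since no netmask was given in the command):

```
TYPE=Ethernet
NAME=NEW
DEVICE=eno16777736
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.141
PREFIX=24
GATEWAY=192.168.1.1
DNS1=8.8.8.8
DNS2=8.8.4.4
```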
### Adding a connection that will use DHCP ###
If you wish to add a new connection that uses DHCP to configure the interface IP address, gateway address and DNS servers, all you have to do is omit the ip/gw address part of the command, and NetworkManager will use DHCP to get the configuration details.
For example, to create a DHCP-configured connection profile named NEW_DHCP on device eno16777736, you can use the following command:
# nmcli connection add type ethernet con-name NEW_DHCP ifname eno16777736
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-command/nmcli-tool-red-hat-centos-7/
Author: [Adrian Dinu][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://linoxide.com/author/adriand/
[1]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.0_Release_Notes/


@ -0,0 +1,58 @@
Vic020
Tips for Apache Migration From 2.2 to 2.4 on Ubuntu 14.04
================================================================================
If you do a distribution upgrade from **Ubuntu** 12.04 to 14.04, the upgrade will bring among other things an important update to **Apache**, from [version 2.2][1] to version 2.4. The update brings many improvements but it may cause some errors when used with the old configuration file from 2.2.
### Access control in Apache 2.4 Virtual Hosts ###
Starting with **Apache 2.4**, authorization is applied in a way that is much more flexible than just a single check against a single data store, as it was in 2.2. In the past it was tricky to figure out how and in what order authorization would be applied, but with the introduction of authorization container directives such as <RequireAll> and <RequireAny>, the configuration also has control over when the authorization methods are called and over the criteria that determine when access is granted.
This is where most upgrades fail because of incorrect configuration: in 2.2, access control based on IP address, hostname or other characteristics was done using the directives Order, Allow, Deny and Satisfy, but in 2.4 this is done with authorization checks using the new modules.
To make this clear, let's look at some virtual host examples; these can be found in your /etc/apache2/sites-enabled/default or /etc/apache2/sites-enabled/YOUR_WEBSITE_NAME:
Old 2.2 virtual host configuration:
Order allow,deny
Allow from all
New 2.4 virtual host configuration:
Require all granted
![apache 2.4 config](http://blog.linoxide.com/wp-content/uploads/2014/12/apache-2.4-config.jpg)
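The directives above normally sit inside a container such as Directory; a fuller before-and-after sketch (the /var/www/html path here is an assumption; substitute your actual DocumentRoot):

```
# Apache 2.2 style
<Directory /var/www/html>
    Order allow,deny
    Allow from all
</Directory>

# Apache 2.4 style
<Directory /var/www/html>
    Require all granted
</Directory>
```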
### .htaccess problems ###
If some settings don't work after the upgrade, or you get redirect errors, check whether those settings live in a .htaccess file. If settings in the .htaccess file are not used by Apache, it's because in 2.4 the AllowOverride directive is set to None by default, thus ignoring .htaccess files. All you have to do is either change or add the AllowOverride All directive in your site configuration file.
You can also see the AllowOverride All directive set in the screenshot above.
### Missing config file or module ###
In my experience, another problem during upgrades is that the configuration file includes an old module or configuration file that is no longer needed or supported in 2.4. You will get a clear warning that Apache can't include the respective file, and all you have to do is go to your configuration file and remove the line that causes the problem. Afterwards you can search for or install a similar module.
### Other small changes you should know about ###
There are a few other changes that you should consider, although they generally result in a warning and not an error:
- MaxClients has been renamed to MaxRequestWorkers, which describes more accurately what it does. For async MPMs, like event, the maximum number of clients is not equivalent to the number of worker threads. The old name is still supported.
- The DefaultType directive no longer has any effect, other than to emit a warning if it's used with any value other than none. You need to use other configuration settings to replace it in 2.4.
- EnableSendfile now defaults to Off.
- FileETag now defaults to "MTime Size" (without INode).
- KeepAlive only accepts values of On or Off. Previously, any value other than "Off" or "0" was treated as "On".
- Directives AcceptMutex, LockFile, RewriteLock, SSLMutex, SSLStaplingMutex, and WatchdogMutexPath have been replaced with a single Mutex directive. You will need to evaluate any use of these removed directives in your 2.2 configuration to determine if they can just be deleted or will need to be replaced using Mutex.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/apache-migration-2-2-to-2-4-ubuntu-14-04/
Author: [Adrian Dinu][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://linoxide.com/author/adriand/
[1]:http://httpd.apache.org/docs/2.4/


@ -0,0 +1,63 @@
Tomahawk Music Player Returns With a New Look and New Features
================================================================================
**After a quiet year, Tomahawk, the Swiss Army knife of music players, is back with a brand new release worth singing about.**
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/11/tomahawk-tile-1.jpg)
Version 0.8 of the open-source, cross-platform application adds **support for more online services**, refreshes its look, and once again makes sure its innovative social features work flawlessly.
### Tomahawk: The Best of Both Worlds ###
Tomahawk marries a traditional application structure with our "instant" modern culture. It can browse and play local music as well as online music from services such as Spotify, Grooveshark and SoundCloud. In the latest release it adds Google Play Music and Beats Music to the roster.
This may sound convoluted or confusing, but in practice it works remarkably well.
If you want to play a song and don't care where it comes from, you just tell Tomahawk the title and artist, and it automatically finds a high-quality version from the available sources; you don't need to do anything else.
![](http://i.imgur.com/nk5oixy.jpg)
The app also packs in some extra features, such as EchoNest profiling, Last.fm suggestions and Jabber support, so you can play your friends' music. It also has a built-in messaging service so you can quickly share playlists and tracks with others.
> "This fundamentally different approach to listening to music enables a music consumption and sharing experience like no other", the project's website says. And unique as it is, that claim is not wrong.
![Tomahawk supports the Sound Menu](http://www.omgubuntu.co.uk/wp-content/uploads/2014/11/tomahawk-controllers.jpg)
Sound menu support
### Highlights of the Tomahawk 0.8 Release ###
- New user interface
- Support for Beats Music
- Support for Google Play Music (saved and "play all" links)
- Support for dragging and dropping links from sites such as iTunes and Spotify
- Now-playing notifications
- Android app (beta)
- Inbox improvements
### Installing Tomahawk 0.8 on Ubuntu ###
As a heavy user of streaming music, I will be trying the app out over the next few days and will then offer a fuller appraisal of the changes. In the meantime, you can give it a try yourself.
Tomahawk is available for Ubuntu 14.04 LTS and Ubuntu 14.10 through an official PPA.
sudo add-apt-repository ppa:tomahawk/ppa
sudo apt-get update && sudo apt-get install tomahawk
Standalone installers, and more information, can be found on the official project website.
- [Visit the Official Tomahawk Website][1]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/11/tomahawk-media-player-returns-new-look-features
Author: [Joey-Elijah Sneddon][a]
Translator: [H-mudcup](https://github.com/H-mudcup)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://gettomahawk.com/


@ -1,22 +0,0 @@
Getting Started With Ubuntu 14.04 (PDF Guide)
================================================================================
Get familiar with everyday tasks such as surfing the web, listening to music and scanning documents.
Enjoy this comprehensive beginner's guide to the Ubuntu operating system. This tutorial is suitable for people of any experience level; just follow the foolproof step-by-step instructions. Explore the potential of the Ubuntu system without getting bogged down in technical details.
- [**Getting Started With Ubuntu 14.04 (PDF Guide)**][1]
![](http://img.tradepub.com/free/w_ubun06/images/w_ubun06c.gif)
--------------------------------------------------------------------------------
via: http://www.ubuntugeek.com/getting-started-with-ubuntu-14-04-pdf-guide.html
Author: [ruchi][a]
Translator: [GOLinux](https://github.com/GOLinux)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.ubuntugeek.com/author/ubuntufix
[1]:http://ubuntugeek.tradepub.com/free/w_ubun06/


@ -0,0 +1,47 @@
How to Download Music From Grooveshark on Linux
================================================================================
> The solution is usually not that difficult
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-2.jpg)
**Grooveshark is a great online platform for people who love music, and there are multiple ways to download music from it. Groovesquid is one of many applications that let users download music from Grooveshark, and it is multi-platform.**
As long as online streaming services exist, there will be ways to grab the videos and music you have watched or listened to there. Even if a download API gets shut down, it's no big deal, because there are plenty of workarounds, whatever operating system you use. For example, the web is full of YouTube downloaders, and by the same token, downloading music from Grooveshark is not hard either.
Now, the question of legality has to be considered. Like many other applications, Groovesquid is not in itself illegal. If a user uses the application to do something illegal, the responsibility rests with the user. The same reasoning applies to uTorrent or BitTorrent. As long as you stay clear of copyright issues, you can use Groovesquid without worry.
### The Fast and Efficient Groovesquid ###
The only shortcoming you will find in Groovesquid is that it is written in Java, which is never a good sign. While that is a sound way to ensure portability, the result is a rather ugly interface. It really is quite an ugly interface, but this doesn't affect the user experience, especially given how useful the job the app does is.
One thing to keep in mind: Groovesquid is a free application, but in order to stay free it displays an ad on the right-hand side of the menu bar. This shouldn't be a problem for most people, but it's worth noting after you open the app.
From a usability point of view, the application is straightforward. Users can download a single track by entering a link into the address bar at the top; the role of the address bar can be changed through the drop-down menu to its left. Through that drop-down menu you can also switch to song title, popularity, album name, playlist and artist. Some of the options let you browse the most popular music on Grooveshark, or download entire playlists, and so on.
You can download Groovesquid 0.7.0 as:
- a [jar][1] file (3.8 MB)
- a [tar.gz][2] archive (549 KB)
Once you have downloaded the jar file, all you need to do is mark it executable and let Java take care of the rest.
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-3.jpg)
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-4.jpg)
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-5.jpg)
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-6.jpg)
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268.shtml
Author: [Silviu Stahie][a]
Translator: [Stevearzh](https://github.com/Stevearzh)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:https://github.com/groovesquid/groovesquid/releases/download/v0.7.0/Groovesquid.jar
[2]:https://github.com/groovesquid/groovesquid/archive/v0.7.0.tar.gz


@ -0,0 +1,90 @@
How to Start a Linux Application in Background Mode From the Terminal
========================================================================
![Linux terminal window](http://f.tqn.com/y/linux/1/W/r/G/1/terminal.JPG)
This short but useful tutorial shows you how to start a Linux application from the terminal without the terminal window losing focus.
There are many ways to open a terminal window on a Linux system, depending on your preference and your desktop environment.
On Ubuntu, you can open a terminal with the CTRL+ALT+T key combination. You can also press the super key (the Windows key) to [open the Dash][1], search for "TERM", and click the "Term" icon to open a terminal window.
In other desktop environments, such as XFCE, KDE, LXDE, Cinnamon and MATE, you will find a terminal in the menu. Some environments also include a terminal icon in the dock or on a panel.
Normally, you can start an application from the terminal simply by typing its name. For example, you can start Firefox by typing "firefox".
The advantage of starting an application from the terminal is that you can pass extra parameters.
For example, the following command opens a Firefox browser window and searches for the given term with the default search engine:
firefox -search "linux.cn"
You may notice that when you start Firefox this way, if focus returns to the terminal window once the program has opened, you can continue working in the terminal.
Normally, when you start an application from the terminal, focus switches to the newly launched application, and only returns to the terminal once that program is closed. This is because you started the program in the foreground.
To keep focus on the terminal window, you need to start the application as a background process.
As in the command listed below, we can start an application in the background by appending an ampersand (&):
libreoffice &
> Translator's note: if you need to pass arguments, remember to put the & at the end.
> Translator's note: normally, closing the terminal also terminates the background programs started from it. To keep a background program running after the terminal is closed, use the following command:
> nohup command [arg...] &
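The effect of & (and of waiting for a background job) can be tried safely with a throwaway command such as sleep, which stands in for a real application here:

```shell
# '&' starts the command in the background; the prompt returns at once.
sleep 1 &
bgpid=$!             # $! holds the PID of the most recent background job
echo "started background job $bgpid"
wait "$bgpid"        # block until that job exits
echo "background job finished"
```

The second echo only runs after the background job has exited, which is exactly the behaviour you avoid by not calling wait.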
如果应用程序目录没有安装在PATH变量包含的目录里面的话我们就没有办法直接通过应用程序名来启动程序必须输入应用程序的整个路径来启动它。
/path/to/yourprogram &
如果你不确定程序输入哪个Linux目录结构的话可以使用[find][2]或者[location][3]命令来定位它。
可以输入下列符号来找到一个文件:
find /path/to/start/from -name programname
例如你可以输入下列命令来找到Firefox
find / -name firefox
命令运行的结果会嗖的一下输出一大堆,别担心,你也可以通过[less][4]或者[more][5]来进行分页查看。
find / -name firefox | more
find / -name firefox | less
当find命令查找到没有权限访问的文件夹时会报出一条拒绝访问错误
你可以通过[sudo命令来提示权限][6]。当然如果你没有安装sudo的话就只能切换到一个拥有权限的用户了。
sudo find / -name firefox | more
如果你知道你要查找的文件在你的当前目录及其子目录中,那么你可以使用点来代替斜杠:
sudo find . -name firefox | more
你可能需要sudo来提升权限也可能根本就不需要如果这个文件在你的主目录里面那么就不需要使用sudo。
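下面用一个临时目录做个小实验,直观地验证“用点代表当前目录”的效果(目录名和文件名都是随意取的):

```shell
# 在当前目录下建一个测试目录和文件
mkdir -p find-demo
touch find-demo/firefox-test

# 以 "." 为起点只遍历当前目录及其子目录
# 这些都是我们自己的文件,所以不需要 sudo
find . -name firefox-test    # 输出: ./find-demo/firefox-test

# 清理
rm -r find-demo
```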
有些应用程序则必须要提升权限才能运行否则你就会得到一大堆拒绝访问错误除非你使用一个具有权限的用户或者使用sudo提升权限。
这里有个小窍门。如果你运行了一个程序,但是它报了权限错误以后,输入下面命令试试:
sudo !!
via: http://linux.about.com/od/commands/fl/How-To-Run-Linux-Programs-From-The-Terminal-In-Background-Mode.htm
作者:[Gary Newell][a]
译者:[zhouj-sh](https://github.com/zhouj-sh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linux.about.com/bio/Gary-Newell-132058.htm
[1]:http://linux.about.com/od/howtos/fl/Learn-Ubuntu-The-Unity-Dash.htm
[2]:http://linux.about.com/od/commands/l/blcmdl1_find.htm
[3]:http://linux.about.com/od/commands/l/blcmdl1_locate.htm
[4]:http://linux.about.com/library/cmd/blcmdl1_less.htm
[5]:http://linux.about.com/library/cmd/blcmdl1_more.htm
[6]:http://linux.about.com/od/commands/l/blcmdl8_sudo.htm


@ -1,72 +0,0 @@
伴随苹果手表的揭幕Ubuntu智能手表会成为下一个吗
===
**今天,苹果借助‘苹果手表’的发布,证实了其进军穿戴式计算设备市场的长期传言**
![Ubuntu Smartwatch good idea?](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/ubuntu-galaxy-gear-smartwatch.png)
Ubuntu智能手表 - 好的主意?
拥有一系列稳定功能,硬件解决方案和应用合作伙伴关系的支持,手腕穿戴设备被许多公司预示为“人与技术关系的新篇章”。
它的到来以及用户兴趣的提升有可能意味着Ubuntu需要跟进推出一个为智能手表定制的Ubuntu版本。
### 大的方面还是成功的 ###
苹果在合适的时机加入了快速发展的智能手表领域。手腕穿戴式电脑能做什么,其界限还远未固定。失败的设计、糟糕的用户界面,以及主流用户眼中穿戴技术薄弱的实用性,都使得这类硬件迟迟未能产生有效的影响力 —— 这也是让Cupertino得以从容打磨苹果手表的一个因素。
> 分析师说超过2200万的智能手表将在今年销售
去年全球范围内可穿戴设备包括健身追踪器的销量仅有1000万台。今年分析师预计出货量将超过2200万台 —— 这还不包括苹果手表因为它要到2015年初才开始零售。
很容易就可以看出增长的来源。今年九月初柏林举办的IFA 2014展览会展示了一系列来自主要制造商们的可穿戴设备包括索尼和华硕。大多数搭载着Google最新发布的安卓穿戴系统。
这是一种更成熟的表现:安卓穿戴系统摆脱了一直困扰这类设备形态的“新奇玩具”之争,呈现出一致而令人信服的用户方案。和新的苹果手表一样,它紧密地连接在一个现存的智能手机生态系统上。
可能它只是一个使用案例Ubuntu手腕穿戴系统是否能匹配它还不清楚。
#### 目前还没有Ubuntu智能手表的计划 ####
Ubuntu操作系统的通用性加上为多种设备和未来趋势定制的版本已经瞄准了典型目标智能电视、平板电脑和智能手机。Mir即Canonical自家的显示服务器被用来驱动各种尺寸屏幕上的界面虽然官方并未宣称支持1.5英寸这么小的屏幕)。
今年年初Canonical社区负责人Jono Bacon被询问是否有制作Ubuntu智能手表的打算。Bacon提供了他对这个问题的看法“增加另一个形式因素到[Ubuntu触摸设备]路线只会减缓其余的东西”。
在Ubuntu电话发布两周年之际我们还是挺赞同他的想法的。
### 滴答,滴答:两头下注 ###
但是并不是没有希望的。在一个[几个月之后的电话采访][1]中Ubuntu创始人Mark Shuttleworth提及到可穿戴技术和智能电视平板电脑智能手机一样都在公司计划当中。
> “Ubuntu因其在电话中的完美设计变得独一无二但是它同时也被设计成满足其余生态系统的样子比如从穿戴设备到PC机。”
然而这还没得到具体的证实,它更像一个指针,在这个方向是给我们提供一个乐观的指引。
### 不可能 — 这就是原因所在 ###
Canonical并不忌惮进军已有强势产品盘踞的市场。事实上这一点几乎写在公司的DNA里 —— 比如服务器上的RHEL、桌面上的Windows、智能手机上的安卓……
设备上的Ubuntu系统被制作成可以在更小的屏幕上扩展和适应性运行。甚至很有可能在和手表一样小的屏幕上运行。当普通的代码基础已经在手机平板电脑桌面和TV上准备就绪我想如果我们没有看到来自社区这一方向上的努力我会感到奇怪。
但是我之所以不认为它会由Canonical官方发起至少目前还没有正是呼应了今年早些时候Jono Bacon的想法时间和精力。
Tim Cook在他的主题演讲中说道“*我们并没有追随iPhone也没有缩水用户界面将其强硬捆绑在你的手腕上。*”这是一个很明显的陈述。为如此小的屏幕设计UI和UX模型;通过交互原则工作;对硬件和输入模式的恭维,都不是一件容易的事。
可穿戴技术仍然是一个新兴的市场。在这个阶段投入开发、设计以及后续的维护对Canonical来说将是一种浪费 —— 把这些精力投入到更紧迫的领域,收益会远远超过损失。
玩一局更长线的游戏,等着看别人的努力在哪里成功、在哪里失败,这是一条更难走的路线但也更适合今天的Ubuntu。在新产品出现之前让Canonical把力量用在现存的产品上是更好的选择有人会说这早就该做了
想更进一步了解Ubuntu智能手表可能的样子点击下面的[视频][2]。它展示了一个为Tizen设计的交互式概念皮肤Tizen已经运行在Samsung Galaxy Gear智能手表上
---
via: http://www.omgubuntu.co.uk/2014/09/ubuntu-smartwatch-apple-iwatch
作者:[Joey-Elijah Sneddon][a]
译者:[su-kaiyao](https://github.com/su-kaiyao)
校对:[校对者ID](https://github.com/校对者ID)
本文由[LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2014/03/ubuntu-tablets-coming-year
[2]:https://www.youtube.com/embed/8Zf5dktXzEs?feature=oembed


@ -1,82 +0,0 @@
ChromeOS 对战 Linux : 孰优孰劣,仁者见仁,智者见智
================================================================================
> 在 ChromeOS 和 Linux 的斗争过程中,不管是哪一家的操作系统都是有优有劣。
任何关注Google 的人都不会否认Google在桌面用户当中扮演着一个很重要的角色。近几年我们见到了搭载[ChromeOS][1]的[Google Chromebook][2]引起了相当大的轰动。凭借在Amazon 上火爆的销量ChromeOS 似乎势不可挡。
在本文中我们要了解的是ChromeOS 的概念市场ChromeOS 怎么影响着Linux 的份额,和整个 ChromeOS 对于linux 社区来说,是好事还是坏事。另外,我将会谈到一些重大的事情,和为什么没人去为他做点什么事情。
### ChromeOS 并非真正的Linux ###
每当有朋友问我ChromeOS 是否是Linux 的一个版本时我都会这样回答ChromeOS 之于Linux 就好像OS X 之于BSD。换句话说我认为ChromeOS 是Linux 的一个派生操作系统运行于Linux 内核引擎之上而上层则由Google 的专有代码和软件组成。
尽管ChromeOS 是利用了Linux 内核引擎但是它仍然有很大的不同和现在流行的Linux 分支版本。
ChromeOS 最明显的差异化在于它提供给终端用户的应用只有Web 应用。ChromeOS 的每一个操作都始于浏览器窗口这对于Linux 用户来说可能会很不习惯但是对于没有Linux 经验的用户来说,这与他们使用过的旧电脑并没有什么不同。
就是说每一个过着以Google 为中心的生活方式的人在ChromeOS 上的感觉都会非常良好,就好像是回家一样。这样的优势在于,这个人早已接受了 Chrome 浏览器、Google 云端硬盘和Gmail。久而久之他们的亲朋好友使用ChromeOS 也就是很自然的事情了,就好像他们很容易接受 Chrome 浏览器,因为觉得早已用过。
然而对于Linux 爱好者来说这样的约束会立即带来不适应软件的选择范围受限再加上玩游戏和VoIP 几乎完全不可能。那么对不起,[GooglePlus Hangouts][3]是代替不了[VoIP][4] 软件的,而且在很长一段时间里都不行。
### ChromeOS 还是Linux 桌面 ###
有人断言ChromeOS 要想在桌面领域对Linux 产生影响只有等到Linux 桌面停滞不前、无法满足非技术用户需求的时候。
是的桌面Linux 对于大多数休闲型的用户来说绝对是一个好东西。然而就像Windows 和OS X 阵营那样,它必须有专人帮助你安装操作系统,并且提供“维修”服务。但是令人失望的是在美国Linux 恰恰在这个方面很缺乏。所以我们看到ChromeOS 正慢慢地走入人们的视线。
我发现Linux 桌面系统最适合那些能获得现场技术支持的环境比如由家里的高级用户负责操作和处理更新或者由政府和学校的IT 部门来管理。在这样的环境下Linux 桌面系统可以被配置给任何技能水平和背景的人使用。
相比之下ChromeOS 是以完全免维护为初衷设计的,因此不需要第三方的帮忙,你只需要允许更新,然后让它静默完成即可。这在一定程度上是由于ChromeOS 是为特定的硬件打造的这与苹果开发自己的PC 电脑有异曲同工之妙。因为Google 对ChromeOS 的硬件有着紧密的掌控,几乎不会出什么“岔子”。对于某些人来说,这是一个很奇妙的地方。
滑稽的是有些人却宣称ChromeOS 的远期市场存在很多问题。简言之这只是一些狂热的Linux 爱好者在找ChromeOS 的茬罢了。在我看来,停止散布这些子虚乌有的说法才是关键。
问题是ChromeOS 的市场份额和Linux 桌面系统在很长的一段时间内是不同的。这个存在可能会在将来被打破,然而在现在,仍然会是两军对峙的局面。
### ChromeOS 的使用率正在增长 ###
不管你对ChromeOS 有怎样的看法事实是ChromeOS 的使用率正在增长。专门针对ChromeOS 的电脑也一直在发布。最近戴尔Dell也发布了一款ChromeOS 电脑,命名为[Dell Chromebox][5]这款ChromeOS 设备将会是又一个传统台式机的替代品。它没有软件光驱,不需要反病毒软件,还能够在后台无缝地自动更新。对于一般的用户Chromebox 和Chromebook 正逐渐成为那些主要在Web 浏览器中工作的人的一个选择。
尽管增长速度很快ChromeOS 设备仍然面临着一个很严峻的问题 —— 存储。受限于有限的硬盘大小ChromeOS 严重依赖云存储好在对于那些基本上把电脑当作Web 浏览器来用的人来说,这并不会削减什么功能。
### ChromeOS 和Linux 的异同点 ###
前面我提到过ChromeOS 和Linux 桌面系统分别占有两个完全不同的市场。出现这样的情况是因为Linux 社区在线下的用户体验支持上做得太差。
是的偶然的有些人可能会第一时间发现这个“Linux 的问题”。但是并没有一个人接着跟进这些问题确保得到问题的答案确保他们得到Linux 最多的帮助。
事实上,线下的失败可能是这样发生的:
- 有些用户偶然在本地的Linux 活动中发现了Linux。
- 他们带回了DVD/USB 设备,并尝试安装这个操作系统。
- 当然,有些人很幸运地安装成功了,但是,据我所知大多数的人并没有那么幸运。
- 令人失望的是,这些人只能寄希望于在网上论坛里寻求帮助。可是,当主力电脑本身就有网络或者显示的问题时,上网求助谈何容易。
- 最让我受不了的是后来有很多失望的用户拿着他们的电脑到Windows 商店去“维修”。除了被重装一个Windows 操作系统他们很多时候还会听到一句话“Linux 并不适合你”,以后应该尽量避免。
有些人肯定会说上面的举例肯定夸大其词了。让我来告诉你这是发生在我身边真实的事的而且是经常发生。醒醒吧Linux 社区的人们,我们的这种模式已经过时了。
### 伟大的平台,强大的营销和结论 ###
如果非要说ChromeOS 和Linux 桌面系统有什么相同的地方除了它们都使用了Linux 内核就是它们都是伟大的产品却都有着极其差劲的市场营销。而Google 的优势在于,他们投入了大量的资金,在线上打造出了巨大的影响力。
Google 相信只要拥有“线上影响力”,线下的影响就无关紧要。这真是令人难以置信的目光短浅这也会成为Google 历史上最大的失误之一。毕竟,如果你接触不到他们的线上宣传,你根本就不会考虑他们的产品,线上影响力也就无从谈起。
我的建议是趁Google 缺乏线下影响力之机把Linux 桌面系统推向ChromeOS 所瞄准的市场。这就意味着Linux 社区需要筹集资金出席县集市、商场展览,在节日期间以及在社区中开设免费的教学课程。这会立即使Linux 桌面系统走入普通人的视线否则最终出现在人们面前的将会是一台ChromeOS 设备。
如果说本地的线下市场并没有像我说的这样别担心。Linux 桌面系统的市场仍然会像ChromeOS 一样增长。最坏也能保持现在这种两军对峙的市场局面。
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/chromeos-vs-linux-the-good-the-bad-and-the-ugly-1.html
作者:[Matt Hartley][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.datamation.com/author/Matt-Hartley-3080.html
[1]:http://en.wikipedia.org/wiki/Chrome_OS
[2]:http://www.google.com/chrome/devices/features/
[3]:https://plus.google.com/hangouts
[4]:http://en.wikipedia.org/wiki/Voice_over_IP
[5]:http://www.pcworld.com/article/2602845/dell-brings-googles-chrome-os-to-desktops.html


@ -1,64 +0,0 @@
Linux 上好用的几款字幕编辑器介绍
================================================================================
如果你经常看国外的大片,你应该会喜欢带字幕的版本,而不是配音的版本。我在法国长大,童年记忆里充满了迪斯尼电影,但是这些电影因为有了法语配音而听起来很怪。如果现在有机会能看原始的版本我知道对于大多数的人来说字幕还是必须的。我很高兴能为家人制作字幕。让我感到欣慰的是Linux 上并不缺少这类工具有很多开源的字幕编辑器。总之一句话这篇文章并不是一个详尽的Linux 字幕编辑器列表,你可以告诉我哪一款是你认为最好的字幕编辑器。
### 1. Gnome Subtitles ###
![](https://farm6.staticflickr.com/5596/15323769611_59bc5fb4b7_z.jpg)
[Gnome Subtitles][1] 是我需要快速编辑现有字幕时的选择。你可以载入视频,载入字幕文本,然后就可以即刻开始了。我很欣赏其在易用性和高级特性之间的平衡。它带有一个同步工具以及一个拼写检查工具。最后同样重要的是,它如此好用最主要是因为它的快捷键:当你编辑很多台词的时候,你最好把手放在键盘上,使用内置的快捷键来移动。
### 2. Aegisub ###
![](https://farm3.staticflickr.com/2944/15323964121_59e9b26ba5_z.jpg)
[Aegisub][2] 的复杂程度则高出一个级别它的界面也反映了这一点。但是在吓人的外表之下Aegisub 是一个非常完整的软件提供的工具远远超出你的想象。和Gnome Subtitles 一样Aegisub 也采用了所见即所得WYSIWYGwhat you see is what you get的处理方式但是达到了一个全新的高度可以在屏幕上任意拖动字幕也可以在侧边查看音频的频谱并且可以利用快捷键做任何事情。除此以外它还带有一个汉字工具、一个卡拉OK模式并且你可以导入Lua 脚本让它自动完成一些任务。我希望你在使用之前,先去阅读它的[指南][3]。
### 3. Gaupol ###
![](https://farm3.staticflickr.com/2942/15326817292_6702cc63fc_z.jpg)
处于另一个复杂度级别的是[Gaupol][4]。不像Aegisub Gaupol 很容易上手而且采用了一个和Gnome Subtitles 很像的界面。但是在相对简单的外表背后,它拥有很多必要的工具:快捷键、第三方扩展、拼写检查,甚至是语音识别(由[CMU Sphinx][5]提供这里也提一个缺点我注意到在测试的时候软件偶尔会有反应迟钝的表现不是很严重但是也足以让我更有理由喜欢Gnome Subtitles 了。
### 4. Subtitle Editor ###
![](https://farm4.staticflickr.com/3914/15323911521_8e33126610_z.jpg)
[Subtitle Editor][6]和Gaupol 很像但是它的界面有点不太直观特性也只是稍微高级一点点。我很欣赏的一点是它可以定义“关键帧”而且提供所有的同步选项。不过如果多一些图标、少一些文字界面会更友好。Subtitle Editor 还有一个不错的彩蛋:它可以模仿“打字机”打字的效果,虽然我不确定它是否有用。最后但并非最不重要的是,重定义快捷键的功能很实用。
### 5. Jubler ###
![](https://farm4.staticflickr.com/3912/15323769701_3d94ca8884_z.jpg)
用Java 写的[Jubler][7]是一个支持多平台的字幕编辑器。我对它的界面印象特别深刻。虽然确实能看出一些Java 风格的东西但它仍然是经过精心构造和构思的。像Aegisub 一样,你可以在屏幕上任意拖动字幕,让你有愉快的体验而不单单是打字。它也可以为字幕自定义风格、在另外的轨道播放音频、翻译字幕,或者是做拼写检查。不过要注意,如果想完整地使用 Jubler你必须事先安装好媒体播放器并正确配置。这并不难在[官方页面][8]下载脚本以后,安装过程非常简便。
### 6. Subtitle Composer ###
![](https://farm6.staticflickr.com/5578/15323769711_6c6dfbe405_z.jpg)
被视为“KDE 里的字幕作曲家”的[Subtitle Composer][9]拥有前面提到的大部分传统功能。作为一款KDE 应用它带有我们所期望的KDE 风格界面。很自然地我们就会说到快捷键我特别喜欢这个功能。除此之外Subtitle Composer 与上面提到的编辑器最大的不同之处就在于它可以执行用JavaScript、Python甚至是Ruby 写成的脚本。软件带有几个例子,肯定能够帮助你很好地学习这些特性的语法。
最后不管你是想为家人编辑几句字幕、重新同步整个字幕轨还是想从头开始制作字幕Linux 都有很好的工具给你。对我来说,快捷键和易用性使各个工具产生了差异;而要获得更高级别的使用体验,脚本和语音识别就成了很便利的功能。
你会使用哪个字幕编辑器,为什么?你认为还有没有更好用的字幕编辑器这里没有提到的?在评论里告诉我们。
--------------------------------------------------------------------------------
via: http://xmodulo.com/good-subtitle-editor-linux.html
作者:[Adrien Brochard][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://gnomesubtitles.org/
[2]:http://www.aegisub.org/
[3]:http://docs.aegisub.org/3.2/Main_Page/
[4]:http://home.gna.org/gaupol/
[5]:http://cmusphinx.sourceforge.net/
[6]:http://home.gna.org/subtitleeditor/
[7]:http://www.jubler.org/
[8]:http://www.jubler.org/download.html
[9]:http://sourceforge.net/projects/subcomposer/


@ -1,65 +0,0 @@
黑客年暮
================================================================================
近来我一直在与某资深开源组织的各成员进行争斗,尽管密切关注我的人们会在读完本文后猜到是哪个组织,但我不会在这里说出这个组织的名字。
怎么让某些人进入 21 世纪就这么难呢?真是的...
我快56 岁了,也就是大部分年轻人会以为的我将时不时朝他们发出诸如“滚出我的草坪”之类歇斯底里咆哮的年龄。但事实并非如此 —— 我发现,尤其是在技术背景之下,我变得与我的年龄非常不相称。
在我这个年龄的大部分人确实变成了爱发牢骚、墨守成规的老顽固。并且,尴尬的是,偶尔我会成为那个打断谈话的人,然后提出在 1995 年或者在某些特殊情况下1985 年)时很适合的方法... 但十年后就不是个好方法了。
为什么是我?因为我的同龄人里大部分人在孩童时期都没有什么名气。任何想要改变自己的人,就必须成为他们中具有较高思想觉悟的佼佼者。即便如此,在与习惯做斗争的过程中,我也比实际上花费了更多的时间。
年轻人犯下无知的错误是可以被原谅的。他们还年轻。年轻意味着缺乏经验,缺乏经验通常会导致片面的判断。我很难原谅那些经历了足够多本该有经验的人,却被<em>长期的固化思维</em>蒙蔽,无法发觉近在咫尺的东西。
(补充一下:我真的不是老顽固。那些和我争论政治的,无论保守派还是非保守派都没有注意到这点,我觉得这颇有点嘲讽的意味。)
那么,现在我们来讨论下 GNU 更新日志文件这件事。在 1985 年的时候,这是一个不错的主意,甚至可以说是必须的。当时的想法是用单独的更新日志文件来记录相关文件的变更情况。用这种方式来对那些存在版本缺失或者非常原始的版本进行版本控制确实不错。当时我也在场,所以我知道这些。
不过即使到了 1995 年,甚至 21 世纪早期许多版本控制系统仍然没有太大改进。也就是说这些版本控制系统并非对批量文件的变化进行分组再保存到一条记录上而是对每个变化的文件分别进行记录并保存到不同的地方。CVS当时被广泛使用的版本控制系统仅仅是模拟日志变更 —— 并且在这方面表现得很糟糕,导致大多数人不再依赖这个功能。即便如此,更新日志文件的出现依然是必要的。
但随后,版本控制系统 Subversion 于 2003 年发布 beta 版,并于 2004 年发布 1.0 正式版Subversion 真正实现了更新日志记录功能得到了人们的广泛认可。它与一年后兴起的分散式版本控制系统Distributed Version Control SystemDVCS共同引发了主流世界的激烈争论。因为如果你在项目上同时使用了分散式版本控制与更新日志文件记录的功能它们将会因为争夺相同元数据的控制权而产生不可预料的冲突。
另一种方法是对提交的评论日志进行授权。如果你这样做了,不久后你就会开始思忖为什么自己仍然对所有的日志更新条目进行记录。提交的元数据与变化的代码具有更好的相容性,毕竟这就是它当初设计的目的。
(现在,试想有这样一个项目,同样本着把项目做得最好的想法,但两拨人却做出了完全不同的选择。因此你必须同时阅读更新日志和评论日志以了解到底发生了什么。最好在矛盾激化前把问题解决....
第三种办法是尝试同时使用两种方法 —— 以另一种格式把评论数据再拷贝一份,作为更新日志提交的一部分。这带来了你能想到的复制数据的所有问题 —— 两份拷贝会逐渐失去同步、互相矛盾,使其后参与进来的人在试图搞清人们是怎么想的时候变得非常困惑。
或者,如某个<em>我就不说出具体名字的特定项目</em>的高层开发只是通过电子邮件来完成这些,声明提交可以包含多个更新日志,以及提交的元数据与更新日志是无关的。这导致我们直到现在还得不断进行记录。
当我读到那条的时候我的眼光停在了那个地方。什么样的傻瓜才会没有意识到这是在自找麻烦 —— 事实上,当分散式版本控制系统中已经有很好的浏览工具来阅读可靠的提交日志时,针对更新日志文件采取的定制措施完全是不必要的。
这是比较特殊的笨蛋变老的并且思维僵化了的黑客。所有的合理化改革他都会极力反对。他所遵循的行事方法在十年前是有效的但现在只能使得其反了。如果你试图解释不只是git的总摘要还得正确掌握当前的各种工具才能完全弃用更新日志... 呵呵,准备好迎接无法忍受、无法想象的疯狂对话吧。
幸运的是这激怒了我。因为这点还有其他相关的胡言乱语使这个项目变成了很难完成的工作。而且,这类糟糕的事时常发生在年轻的开发者身上,这才是问题所在。相关 G+ 社群的数量已经达到了 4 位数,他们大部分都是孩子,他们也没有紧张起来。显然消息已经传到了外面;这个项目的开发者都是被莫名关注者的老牌黑客,同时还有很多对他们崇拜的人。
这件事给我的最大触动就是每当我要和这些老牌黑客较量时,我都会想:有一天我也会这样吗?或者更糟的是,我看到的只是如同镜子一般对我自己的真实写照,而我自己却浑然不觉吗?我的意思是,我的印象来自于他的网站,这个特殊的样本要比我年轻。通过十五年的仔细观察得出的结论。
我觉得思路很清晰。当我和那些比我聪明的人打交道时我不会受挫,我只会因为那些跟不上我、也看不清事实的人而沮丧。但这种自信也许只是邓宁·克鲁格效应的消极影响,至少我明白这点。很少有什么事情会让我感到害怕;而这件事在让我害怕的事情名单上是名列前茅的。
另一件让人不安的事是当我逐渐变老的时候,这样的矛盾发生得越来越频繁。不知怎的,我希望我的黑客同行们能以更加优雅的姿态老去,即使身体老去也应该保持一颗年轻的心灵。有些人确实是这样;但可是绝大多数人都不是。真令人悲哀。
我不确定我的职业生涯会不会完美收场。假如我最后成功避免了思维僵化(注意我说的是假如),我想我一定知道其中的部分原因,但我不确定这种模式是否可以被复制 —— 为了达成目的也许得在你的头脑中发生一些复杂的化学反应。尽管如此,无论对错,请听听我给年轻黑客以及其他有志青年的建议。
你们 —— 对的,也包括你 —— 一定无法在你中年老年的时候保持不错的心灵,除非你能很好的控制这点。你必须不断地去磨练你的内心、在你还年轻的时候完成自己的种种心愿,你必须把这些行为养成一种习惯直到你老去。
有种说法是中年人锻炼身体的最佳时机是他进入中年的 30 年前。我以为同样的方法,坚持我以上所说的习惯能让你在 56 岁,甚至 65 岁的时候仍然保持灵活的头脑。挑战你的极限,使不断地挑战自己成为一种习惯。立刻离开安乐窝,由此当你以后真正需要它的时候你可以建立起自己的安乐窝。
你必须要清楚的了解这点;还有一个可选择的挑战是你选择一个可以实现的目标并且为了这个目标不断努力。这个月我要学习 Go 语言。不是指游戏,我早就玩儿过了(虽然玩儿的不是太好)。并不是因为工作需要,而是因为我觉得是时候来扩展下我自己了。
保持这个习惯。永远不要放弃。
--------------------------------------------------------------------------------
via: http://esr.ibiblio.org/?p=6485
作者:[Eric Raymond][a]
译者:[Stevearzh](https://github.com/Stevearzh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://esr.ibiblio.org/?author=2


@ -0,0 +1,155 @@
如何使用Aptik来备份和恢复Ubuntu中的Apps和PPAs
================================================================================
![00_lead_image_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x300x00_lead_image_aptik.png.pagespeed.ic.n3TJwp8YK_.png)
当你想重装Ubuntu或者仅仅是想安装它的一个新版本的时候如果能找到一个便捷的方法去重新安装之前的应用并且重置其设置会是很有用的。此时 *Aptik* 粉墨登场,它可以帮助你轻松实现。
Aptik自动包备份和恢复是一个可以运行在Ubuntu、Linux Mint 以及其他基于Debian 和Ubuntu 的Linux 发行版上的应用它允许你将软件源PPA个人软件包存档、已下载的软件包、已安装的应用及其主题和设置备份到外部的U盘、网络存储或者类似于Dropbox 的云服务上。
注意:当我们在此文章中说到输入某些东西的时候,如果被输入的内容被引号包裹,请不要将引号一起输入进去,除非我们有特殊说明。
想要安装Aptik需要先添加其PPA。使用Ctrl + Alt + T快捷键打开一个新的终端窗口。输入以下文字并按回车执行。
sudo apt-add-repository -y ppa:teejee2008/ppa
当提示输入密码的时候,输入你的密码然后按回车。
![01_command_to_add_repository](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x99x01_command_to_add_repository.png.pagespeed.ic.UfVC9QLj54.png)
输入下边的命令到提示符旁边,来确保资源库已经是最新版本。
sudo apt-get update
![02_update_command](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x252x02_update_command.png.pagespeed.ic.m9pvd88WNx.png)
更新完毕后你就完成了安装Aptik的准备工作。接下来输入以下命令并按回车
sudo apt-get install aptik
注意你可能会看到一些有关于获取不到包更新的错误提示。不过别担心如果这些提示看起来跟下边图片中类似的话你的Aptik的安装就没有任何问题。
![03_command_to_install_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x03_command_to_install_aptik.png.pagespeed.ic.1jtHysRO9h.png)
安装过程会被显示出来。其中一个被显示出来的消息会提到此次安装会使用掉多少磁盘空间然后提示你是否要继续按下“y”再按回车继续安装。
![04_do_you_want_to_continue](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x04_do_you_want_to_continue.png.pagespeed.ic.WQ15_UxK5Z.png)
当安装完成后输入“exit”并按回车或者按下左上角的“X”按钮关闭终端窗口。
![05_closing_terminal_window](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x05_closing_terminal_window.png.pagespeed.ic.9QoqwM7Mfr.png)
在正式运行Aptik前你需要设置好备份目录到一个U盘、网络驱动器或者类似于Dropbox和Google Drive的云帐号上。这儿的例子中我们使用的是Dropbox。
![06_creating_backup_folder](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x243x06_creating_backup_folder.png.pagespeed.ic.7HzR9KwAfQ.png)
一旦设置好备份目录点击启动栏上方的“Search”按钮。
![07_opening_search](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x177x07_opening_search.png.pagespeed.ic.qvFiw6_sXa.png)
在搜索框中键入 “aptik”。结果会随着你的输入显示出来。当Aptik图标显示出来的时候点击它打开应用。
![08_starting_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x338x08_starting_aptik.png.pagespeed.ic.8fSl4tYR0n.png)
此时一个对话框会显示出来要求你输入密码。输入你的密码并按“OK”按钮。
![09_entering_password](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x337x09_entering_password.png.pagespeed.ic.yanJYFyP1i.png)
Aptik的主窗口显示出来了。从“Backup Directory”下拉列表中选择“Other…”。这个操作允许你选择你已经建立好的备份目录。
注意:在下拉列表的右侧的 “Open” 按钮会在一个文件管理窗口中打开选择目录功能。
![10_selecting_other_for_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x533x10_selecting_other_for_directory.png.pagespeed.ic.dHbmYdAHYx.png)
在 “Backup Directory” 对话窗口中定位到你的备份目录然后按“Open”。
注意如果此时你尚未建立备份目录或者想在备份目录中新建个子目录你可以点“Create Folder”来新建目录。
![11_choosing_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x470x11_choosing_directory.png.pagespeed.ic.E-56x54cy9.png)
点击“Software Sources (PPAs).”右侧的 “Backup”来备份已安装的PPAs。
![12_clicking_backup_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x13_selecting_all_software_sources.png.pagespeed.ic.zDFiDGfnks.png)
然后“Backup Software Sources”对话窗口显示出来。已安装的包和对应的源PPA同时也显示出来了。选择你需要备份的源PPAs或者点“Select All”按钮选择所有源。
![13_selecting_all_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x13_selecting_all_software_sources.png.pagespeed.ic.zDFiDGfnks.png)
点击 “Backup” 开始备份。
![14_clicking_backup_for_all_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x14_clicking_backup_for_all_software_sources.png.pagespeed.ic.n5h_KnQVZa.png)
备份完成后,一个提示你备份完成的对话窗口会蹦出来。点击 “OK” 关掉。
一个名为“ppa.list”的文件出现在了备份目录中。
![15_closing_finished_dialog_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x15_closing_finished_dialog_software_sources.png.pagespeed.ic.V25-KgSXdY.png)
接下来“Downloaded Packages (APT Cache)”的项目只对重装同样版本的Ubuntu有用处。它会备份下你系统缓存(/var/cache/apt/archives)中的包。如果你是升级系统的话,可以跳过这个条目,因为针对新系统的包会比现有系统缓存中的包更加新一些。
备份和恢复下载过的包可以在重装Ubuntu并且重装包的时候节省时间和网络带宽。因为一旦你把这些包恢复到系统缓存中之后他们可以重新被利用起来这样下载过程就免了包的安装会更加快捷。
如果你是重装相同版本的Ubuntu系统的话点击 “Downloaded Packages (APT Cache)” 右侧的 “Backup” 按钮来备份系统缓存中的包。
注意:当你备份下载过的包的时候是没有二级对话框出现。你系统缓存 (/var/cache/apt/archives) 中的包会被拷贝到备份目录下一个名叫 “archives” 的文件夹中,当整个过程完成后会出现一个对话框来告诉你备份已经完成。
![16_downloaded_packages_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x544x16_downloaded_packages_backed_up.png.pagespeed.ic.z8ysuwzQAK.png)
有一些包是你的Ubuntu发行版的一部分。因为安装Ubuntu系统的时候会自动安装它们所以它们是不会被备份下来的。例如火狐浏览器在Ubuntu和其他类似Linux发行版上都是默认被安装的所以默认情况下它不会被选择备份。
像[Chrome 浏览器的软件包][1]这种在系统安装完成后才安装的包,以及 Aptik 自身的包,会默认被选择上。这可以方便你备份这些后来安装的包。
按照需要选择想要备份的包。点击 “Software Selections” 右侧的 “Backup” 按钮备份顶层包。
注意:依赖包不会出现在这个备份中。
![18_clicking_backup_for_software_selections](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x18_clicking_backup_for_software_selections.png.pagespeed.ic.QI5D-IgnP_.png)
名为 “packages.list” 和 “packages-installed.list” 的两个文件出现在了备份目录中并且出现一个对话框通知你备份完成。点击“OK”关闭它。
注意“packages-installed.list”文件包含了所有的包而 “packages.list” 在包含所有包的同时,还标明了哪些包被选中了。
![19_software_selections_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x19_software_selections_backed_up.png.pagespeed.ic.LVmgs6MKPL.png)
要备份已安装软件的设置的话,点击 Aptik 主界面 “Application Settings” 右侧的 “Backup” 按钮选择你要备份的设置点击“Backup”。
注意如果你要选择所有设置点击“Select All”按钮。
![20_backing_up_app_settings](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x20_backing_up_app_settings.png.pagespeed.ic.7_kgU3Dj_m.png)
被选择的配置文件统一被打包到一个名叫 “app-settings.tar.gz” 的文件中。
![21_zipping_settings_files](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x21_zipping_settings_files.png.pagespeed.ic.dgoBj7egqv.png)
当打包完成后打包后的文件被拷贝到备份目录下另外一个备份成功的对话框出现。点击”OK“关掉。
![22_app_settings_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22_app_settings_backed_up.png.pagespeed.ic.Mb6utyLJ3W.png)
来自 “/usr/share/themes” 目录的主题和来自 “/usr/share/icons” 目录的图标也可以备份。点击 “Themes and Icons” 右侧的 “Backup” 来进行此操作。“Backup Themes” 对话框默认选择了所有的主题和图标。你可以按照需要取消选择一些,然后点击 “Backup” 进行备份。
![22a_backing_up_themes_and_icons](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22a_backing_up_themes_and_icons.png.pagespeed.ic.KXa8W3YhyF.png)
主题被打包拷贝到备份目录下的 “themes” 文件夹中,图标被打包拷贝到备份目录下的 “icons” 文件夹中。然后成功提示对话框出现点击”OK“关闭它。
![22b_themes_and_icons_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22b_themes_and_icons_backed_up.png.pagespeed.ic.ejjRaymD39.png)
一旦你完成了需要的备份点击主界面左上角的”X“关闭 Aptik 。
![23_closing_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x542x23_closing_aptik.png.pagespeed.ic.pNk9Vt3--l.png)
备份过的文件已存在于你选择的备份目录中,可以随时取阅。
![24_backup_files_in_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x374x24_backup_files_in_directory.png.pagespeed.ic.vwblOfN915.png)
当你重装Ubuntu或者安装新版本的Ubuntu后在新的系统中安装 Aptik 并且将备份好的文件置于新系统中让其可被使用。运行 Aptik并使用每个条目的 “Restore” 按钮来恢复你的软件源、应用、包、设置、主题以及图标。
--------------------------------------------------------------------------------
via: http://www.howtogeek.com/206454/how-to-backup-and-restore-your-apps-and-ppas-in-ubuntu-using-aptik/
作者Lori Kaufman
译者:[Ping](https://github.com/mr-ping)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.howtogeek.com/203768


@ -0,0 +1,87 @@
如何在Ubuntu上转换图片音频和视频格式
================================================================================
如果你的工作中需要接触到各种不同编码格式的图片、音频和视频,那么你或许正在使用多个工具来转换这些不同的媒介格式。如果存在一个能够处理所有图片/音频/视频格式的多合一转换工具,那就太好了。
[Format Junkie][1] 就是这样一个有着极其友好的用户界面的多合一媒介转换工具。更棒的是它是一个免费软件。你可以使用 Format Junkie 来转换几乎所有的流行格式的图像、音频、视频和归档文件(或称压缩文件),所有这些只需要简单地点击几下鼠标而已。
### 在Ubuntu 12.04, 12.10 和 13.04 上安装 Format Junkie ###
Format Junkie 可以通过 Ubuntu PPA format-junkie-team 进行安装。这个PPA支持Ubuntu 12.04, 12.10 和 13.04。在以上任意一种Ubuntu版本中安装Format Junkie的话简单地执行以下命令即可
$ sudo add-apt-repository ppa:format-junkie-team/release
$ sudo apt-get update
$ sudo apt-get install formatjunkie
$ sudo ln -s /opt/extras.ubuntu.com/formatjunkie/formatjunkie /usr/bin/formatjunkie
### 将 Format Junkie 安装到 Ubuntu 13.10 ###
如果你正在运行Ubuntu 13.10 (Saucy Salamander),你可以按照以下步骤下载 .deb 安装包来进行安装。由于Format Junkie 的 .deb 安装包只有很少的依赖包,所以可以使用 [gdebi deb installer][2] 来安装它。
在32位版Ubuntu 13.10上:
$ wget https://launchpad.net/~format-junkie-team/+archive/release/+files/formatjunkie_1.07-1~raring0.2_i386.deb
$ sudo gdebi formatjunkie_1.07-1~raring0.2_i386.deb
$ sudo ln -s /opt/extras.ubuntu.com/formatjunkie/formatjunkie /usr/bin/formatjunkie
在64位版Ubuntu 13.10上:
$ wget https://launchpad.net/~format-junkie-team/+archive/release/+files/formatjunkie_1.07-1~raring0.2_amd64.deb
$ sudo gdebi formatjunkie_1.07-1~raring0.2_amd64.deb
$ sudo ln -s /opt/extras.ubuntu.com/formatjunkie/formatjunkie /usr/bin/formatjunkie
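上面各组命令里反复出现的 ln -s 一步,是把安装在 /opt 下的可执行文件链接到 PATH 包含的目录里。符号链接的效果可以用一个小实验来演示(以下路径和文件名均为临时示例,与 Format Junkie 本身无关):

```shell
# 模拟:在临时目录里放一个“程序”,再从另一个目录链接过去
mkdir -p demo/opt demo/bin
printf '#!/bin/sh\necho formatjunkie-demo\n' > demo/opt/formatjunkie
chmod +x demo/opt/formatjunkie

# 创建符号链接,之后通过链接即可执行原程序
ln -s "$(pwd)/demo/opt/formatjunkie" demo/bin/formatjunkie
demo/bin/formatjunkie        # 输出: formatjunkie-demo

# 清理
rm -r demo
```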
### 将 Format Junkie 安装到 Ubuntu 14.04 或 之后版本 ###
官方现有的 Format Junkie .deb 安装包需要 libavcodec-extra-53而它从Ubuntu 14.04开始就已经被废弃了。所以如果你想在Ubuntu 14.04或之后版本上安装Format Junkie的话可以使用以下的第三方PPA来代替。
$ sudo add-apt-repository ppa:jon-severinsson/ffmpeg
$ sudo add-apt-repository ppa:noobslab/apps
$ sudo apt-get update
$ sudo apt-get install formatjunkie
### 如何使用 Format Junkie ###
安装完成后,只需运行以下命令即可启动 Format Junkie
$ formatjunkie
#### 使用 Format Junkie 来转换音频、视频、图像和归档格式 ####
就像下方展示的一样Format Junkie 的用户界面简单而直观。点击顶部四个标签中你需要的那个即可在音频、视频、图像和ISO 媒介之间切换。你可以根据需要添加任意数量的文件进行批量转换。添加文件后,选择输出格式,直接点击 "Start Converting" 按钮进行转换。
![](http://farm9.staticflickr.com/8107/8643695905_082b323059.jpg)
Format Junkie支持以下媒介格式间的转换
- **Audio**: mp3, wav, ogg, wma, flac, m4r, aac, m4a, mp2.
- **Video**: avi, ogv, vob, mp4, 3gp, wmv, mkv, mpg, mov, flv, webm.
- **Image**: jpg, png, ico, bmp, svg, tif, pcx, pdf, tga, pnm.
- **Archive**: iso, cso.
#### 用 Format Junkie 进行字幕编码 ####
除了媒介转换Format Junkie 还提供了字幕编码的图形界面。实际的字幕编码是由MEncoder来完成的。要使用Format Junkie的字幕编码界面首先你需要安装MEncoder。
$ sudo apt-get install mencoder
然后点击Format Junkie 中的 "Advanced"标签。选择 AVI/subtitle 文件来进行编码,如下所示:
![](http://farm9.staticflickr.com/8100/8644791396_bfe602cd16.jpg)
总而言之Format Junkie 是一个非常易用和多才多艺的媒介转换工具。但它也有一个缺陷:不允许对转换进行任何定制化(例如:比特率、帧率、采样频率、图像质量、尺寸)。所以这个工具推荐给正在寻找一款简单易用的媒介转换工具的新手使用。
喜欢这篇文章吗在facebook、twitter和google+上给我点赞吧。多谢!
--------------------------------------------------------------------------------
via: http://xmodulo.com/how-to-convert-image-audio-and-video-formats-on-ubuntu.html
作者:[Dan Nanni][a]
译者:[Ping](https://github.com/mr-ping)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:https://launchpad.net/format-junkie
[2]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html


@ -1,365 +0,0 @@
如何使用Quagga把你的CentOS系统变成一个BGP路由器
================================================================================
在[之前的教程中][1]此文原文做过文件名“20140928 How to turn your CentOS box into an OSPF router using Quagga.md”如果前面翻译发布了可以修改此链接我对如何简单地使用Quagga把CentOS系统变成一个不折不扣的OSPF路由器做了一些描述。Quagga是一个开源路由软件套件。在这个教程中我将会着重演示**如何同样使用Quagga把一个Linux系统变成一个BGP路由器**以及如何与其它BGP路由器建立对等。
在我们进入细节之前先了解一些BGP的背景知识还是必要的。边界网关协议BGP是互联网域间路由协议的事实标准。在BGP术语中全球互联网是由成千上万相互关联的自治系统AS组成其中每一个AS代表一个特定运营商管理的网络域。
为了使其网络在全球范围内路由可达每一个AS需要知道如何到达互联网中其它的AS这正是BGP所扮演的角色。BGP是一个AS用来与相邻AS交换路由信息的语言。这些路由信息通常被称为BGP路由或者BGP前缀包括AS号ASN全球唯一号码以及相关的IP地址块。一旦所有的BGP前缀被本地BGP路由表学习和填充每一个AS就会知道如何到达互联网上的任何公网IP。
能够在不同的域AS之间路由正是BGP被称为外部网关协议EGP或者域间路由协议的主要原因。而OSPF、IS-IS、RIP和EIGRP这样的路由协议则是内部网关协议IGP或者域内路由协议。
### 测试方案 ###
在这个教程中,让我们来关注以下拓扑.
![](https://farm6.staticflickr.com/5598/15603223841_4c76343313_z.jpg)
我们假设运营商A想要与运营商B建立BGP对等来交换路由。它们的AS号和IP地址空间等细节如下所示。
- **运营商 A**: ASN (100), IP地址空间 (100.100.0.0/22), 分配给BGP路由器eth1网卡的IP地址(100.100.1.1)
- **运营商 B**: ASN (200), IP地址空间 (200.200.0.0/22), 分配给BGP路由器eth1网卡的IP地址(200.200.1.1)
路由器A和路由器B使用100.100.0.0/30子网来连接对方。理论上互连可以使用任何子网但在真实场景中建议使用一个掩码为30位的公网IP地址段来实现运营商A和运营商B之间的互连。
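之所以选择掩码为30位的子网做互连是因为 /30 子网去掉网络地址和广播地址后正好剩下两个可用的主机地址刚够点对点链路的两端使用。可以用一点shell算术验证这一说法

```shell
# 一个IPv4前缀包含 2^(32-前缀长度) 个地址,
# 去掉网络地址和广播地址后,剩下的才能分配给主机
prefix=30
total=$(( 1 << (32 - prefix) ))              # 等价于 2 的 (32-30) 次方,即 4
echo "/${prefix} 地址总数: ${total}"
echo "/${prefix} 可用主机地址数: $(( total - 2 ))"   # 2正好分给路由器A和B
```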
### 在 CentOS中安装Quagga ###
如果Quagga还没被安装,我们可以使用yum来安装Quagga.
# yum install quagga
如果你正在使用的是CentOS7系统你需要应用以下策略来设置SELinux否则SELinux将会阻止Zebra守护进程写入它的配置目录。如果你正在使用的是CentOS6你可以跳过这一步。
# setsebool -P zebra_write_config 1
Quagga软件套件包含几个可以协同工作的守护进程。关于BGP路由我们将把重点放在建立以下2个守护进程上。
- **Zebra**:一个核心守护进程用于内核接口和静态路由.
- **BGPd**:一个BGP守护进程.
### 配置日志记录 ###
在Quagga被安装后,下一步就是配置Zebra来管理BGP路由器的网络接口.我们通过创建一个Zebra配置文件和启用日志记录来开始第一步.
# cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
在CentOS6系统中:
# service zebra start
# chkconfig zebra on
在CentOS7系统中:
# systemctl start zebra
# systemctl enable zebra
Quagga提供了一个叫做vtysh的特有命令行工具你可以在其中输入与路由器厂商例如Cisco和Juniper兼容的命令。在教程的其余部分我们将使用vtysh shell来配置BGP路由。
启动vtysh shell 命令,输入:
# vtysh
提示将被改成主机名,这表明你是在vtysh shell中.
Router-A#
现在我们将使用以下命令来为Zebra配置日志文件:
Router-A# configure terminal
Router-A(config)# log file /var/log/quagga/quagga.log
Router-A(config)# exit
永久保存Zebra配置:
Router-A# write
在路由器B操作同样的步骤.
### 配置对等的IP地址 ###
下一步,我们将在可用的接口上配置对等的IP地址.
Router-A# show interface #显示接口信息
----------
Interface eth0 is up, line protocol detection is disabled
. . . . .
Interface eth1 is up, line protocol detection is disabled
. . . . .
配置eth0接口的参数:
site-A-RTR# configure terminal
site-A-RTR(config)# interface eth0
site-A-RTR(config-if)# ip address 100.100.0.1/30
site-A-RTR(config-if)# description "to Router-B"
site-A-RTR(config-if)# no shutdown
site-A-RTR(config-if)# exit
继续配置eth1接口的参数:
site-A-RTR(config)# interface eth1
site-A-RTR(config-if)# ip address 100.100.1.1/24
site-A-RTR(config-if)# description "test ip from provider A network"
site-A-RTR(config-if)# no shutdown
site-A-RTR(config-if)# exit
现在确认配置:
Router-A# show interface
----------
Interface eth0 is up, line protocol detection is disabled
Description: "to Router-B"
inet 100.100.0.1/30 broadcast 100.100.0.3
Interface eth1 is up, line protocol detection is disabled
Description: "test ip from provider A network"
inet 100.100.1.1/24 broadcast 100.100.1.255
----------
Router-A# show interface description #显示接口描述
----------
Interface Status Protocol Description
eth0 up unknown "to Router-B"
eth1 up unknown "test ip from provider A network"
如果一切看起来正常,别忘记保存配置.
Router-A# write
同样地,在路由器B重复一次配置.
在我们继续下一步之前,确认下彼此的IP是可以ping通的.
Router-A# ping 100.100.0.2
----------
PING 100.100.0.2 (100.100.0.2) 56(84) bytes of data.
64 bytes from 100.100.0.2: icmp_seq=1 ttl=64 time=0.616 ms
下一步,我们将继续配置BGP对等和前缀设置.
### 配置BGP对等 ###
Quagga守护进程负责BGP的服务叫bgpd.首先我们来准备它的配置文件.
# cp /usr/share/doc/quagga-XXXXXXX/bgpd.conf.sample /etc/quagga/bgpd.conf
在CentOS6系统中:
# service bgpd start
# chkconfig bgpd on
在CentOS7中
# systemctl start bgpd
# systemctl enable bgpd
现在,让我们来进入Quagga 的shell.
# vtysh
第一步,我们要确认当前没有已经配置的BGP会话.在一些版本,我们可能会发现一个AS号为7675的BGP会话.由于我们不需要这个会话,所以把它移除.
Router-A# show running-config
----------
... ... ...
router bgp 7675
bgp router-id 200.200.1.1
... ... ...
我们将移除一些预先配置好的BGP会话,并建立我们所需的会话取而代之.
Router-A# configure terminal
Router-A(config)# no router bgp 7675
Router-A(config)# router bgp 100
Router-A(config)# no auto-summary
Router-A(config)# no synchronization
Router-A(config-router)# neighbor 100.100.0.2 remote-as 200
Router-A(config-router)# neighbor 100.100.0.2 description "provider B"
Router-A(config-router)# exit
Router-A(config)# exit
Router-A# write
路由器B将用同样的方式来进行配置,以下配置提供作为参考.
Router-B# configure terminal
Router-B(config)# no router bgp 7675
Router-B(config)# router bgp 200
Router-B(config)# no auto-summary
Router-B(config)# no synchronization
Router-B(config-router)# neighbor 100.100.0.1 remote-as 100
Router-B(config-router)# neighbor 100.100.0.1 description "provider A"
Router-B(config-router)# exit
Router-B(config)# exit
Router-B# write
当相关的路由器都被配置好,两台路由器之间的对等将被建立.现在让我们通过运行下面的命令来确认:
Router-A# show ip bgp summary
![](https://farm6.staticflickr.com/5614/15420135700_e3568d2e5f_z.jpg)
从输出中我们可以看到“State/PfxRcd”一列。如果对等关系没有建立这一列将会显示“Idle”或者“Active”。请记住“Active”这个词在路由器里总不是什么好兆头它意味着路由器正在积极地寻找邻居、前缀或者路由。当对等处于up状态时“State/PfxRcd”下将会显示从对应邻居接收到的前缀数量。
在这个例子的输出中AS100和AS200之间的BGP对等已经处于up状态。由于双方尚未交换任何前缀所以最右边一列的数值是0。
### 配置前缀通告 ###
正如一开始提到的AS 100将通告前缀100.100.0.0/22在我们的例子中AS 200将同样通告前缀200.200.0.0/22。这些前缀需要像下面这样添加到BGP配置中。
在路由器-A中:
Router-A# configure terminal
Router-A(config)# router bgp 100
Router-A(config)# network 100.100.0.0/22
Router-A(config)# exit
Router-A# write
在路由器-B中:
Router-B# configure terminal
Router-B(config)# router bgp 200
Router-B(config)# network 200.200.0.0/22
Router-B(config)# exit
Router-B# write
在这一点上,两个路由器会根据需要开始通告前缀.
### 测试前缀通告 ###
首先,让我们来确认前缀的数量是否被改变了.
Router-A# show ip bgp summary
![](https://farm6.staticflickr.com/5608/15419095659_0ebb384eee_z.jpg)
为了查看所通告前缀的更多细节我们可以使用以下命令这个命令用于显示向邻居100.100.0.2通告的前缀。
Router-A# show ip bgp neighbors 100.100.0.2 advertised-routes
![](https://farm6.staticflickr.com/5597/15419618208_4604e5639a_z.jpg)
查看哪一个前缀是我们从邻居接收到的:
Router-A# show ip bgp neighbors 100.100.0.2 routes
![](https://farm4.staticflickr.com/3935/15606556462_e17eae7f49_z.jpg)
我们也可以查看整个BGP路由表
Router-A# show ip bgp
![](https://farm6.staticflickr.com/5609/15419618228_5c776423a5_z.jpg)
以上的命令都可以用于检查哪些路由是通过BGP学习到并加入路由表的。
Router-A# show ip route
----------
代码: K - 内核路由, C - 已链接 , S - 静态 , R - 路由信息协议 , O - 开放式最短路径优先协议,
I - 中间系统到中间系统的路由选择协议, B - 边界网关协议, > - 选择路由, * - FIB 路由
C>* 100.100.0.0/30 is directly connected, eth0
C>* 100.100.1.0/24 is directly connected, eth1
B>* 200.200.0.0/22 [20/0] via 100.100.0.2, eth0, 00:06:45
----------
Router-A# show ip route bgp
----------
B>* 200.200.0.0/22 [20/0] via 100.100.0.2, eth0, 00:08:13
BGP学习到的路由也将会在Linux路由表中出现.
[root@Router-A~]# ip route
----------
100.100.0.0/30 dev eth0 proto kernel scope link src 100.100.0.1
100.100.1.0/24 dev eth1 proto kernel scope link src 100.100.1.1
200.200.0.0/22 via 100.100.0.2 dev eth0 proto zebra
最后,我们将使用ping命令来测试连通.结果将成功ping通.
[root@Router-A~]# ping 200.200.1.1 -c 2
总而言之本教程的重点是如何在CentOS系统上运行一个基本的BGP路由器。本教程能帮助你完成BGP的起步配置至于前缀过滤、BGP属性调整、本地优先级、AS路径预置等更高级的设置我将会在后续的教程中介绍。
希望这篇教程能给大家一些帮助.
--------------------------------------------------------------------------------
via: http://xmodulo.com/centos-bgp-router-quagga.html
作者:[Sarmed Rahman][a]
译者:[disylee](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/sarmed
[1]:http://xmodulo.com/turn-centos-box-into-ospf-router-quagga.html


@ -6,7 +6,7 @@
虽然对于本教程,我只会演示怎样来添加**64位**网络安装镜像但对于Ubuntu或者Debian的**32位**系统或者其它架构的镜像操作步骤也基本相同。同时就我而言我会解释添加Ubuntu 32位源的方法但不会演示配置。
从PXE服务器安装 **Ubuntu**或者**Debian**要求你的客户机必须激活网络连接,最好是使用**DHCP**通过**NAT**来进行动态分配地址。以便安装器拉取需求包并完成安装进程。
从PXE服务器安装 **Ubuntu**或者**Debian**要求你的客户机必须激活网络连接,最好是使用**DHCP**通过**NAT**来进行动态分配地址。以便安装器拉取所需的包并完成安装过程。
#### 需求 ####
@ -14,11 +14,11 @@
## 步骤 1 添加Ubuntu 14.10和Ubuntu 14.04服务器到PXE菜单 ##
**1.** 为**Ubuntu 14.10****Ubuntu 14.04**添加网络安装源到PXE菜单可以通过两种方式实现其一是通过下载Ubuntu CD ISO镜像并挂载到PXE服务器机器上以便可以读取Ubuntu网络启动文件其二是通过直接下载Ubuntu网络启动归档包并将其解压缩到系统中。下面我将进一步讨论这两种方法
**1.** 为**Ubuntu 14.10****Ubuntu 14.04**添加网络安装源到PXE菜单可以通过两种方式实现其一是通过下载Ubuntu CD ISO镜像并挂载到PXE服务器机器上以便可以读取Ubuntu网络启动文件其二是通过直接下载Ubuntu网络启动归档包并将其解压缩到系统中。下面我将进一步讨论这两种方法
### 使用Ubuntu 14.10和Ubuntu 14.04 CD ISO镜像 ###
为了能使用此方法你的PXE服务器需要有一台可工作的CD/DVD驱动器。在一台专有计算机上转到[Ubuntu 14.10下载][2]和[Ubuntu 14.04 下载][3]页,取64位**服务器安装镜像**将它烧录到CD并将CD镜像放到PXE服务器DVD/CD驱动器然后使用以下命令挂载到系统。
为了能使用此方法你的PXE服务器需要有一台可工作的CD/DVD驱动器。在一台专有计算机上转到[Ubuntu 14.10下载][2]和[Ubuntu 14.04 下载][3]页,取64位**服务器安装镜像**将它烧录到CD并将CD镜像放到PXE服务器DVD/CD驱动器然后使用以下命令挂载到系统。
# mount /dev/cdrom /mnt
@ -26,28 +26,28 @@
#### 在Ubuntu 14.10上 ####
------------------ 32位 ------------------
------------------ 32位 ------------------
# wget http://releases.ubuntu.com/14.10/ubuntu-14.10-server-i386.iso
# mount -o loop /path/to/ubuntu-14.10-server-i386.iso /mnt
----------
------------------ 64位 ------------------
------------------ 64位 ------------------
# wget http://releases.ubuntu.com/14.10/ubuntu-14.10-server-amd64.iso
# mount -o loop /path/to/ubuntu-14.10-server-amd64.iso /mnt
#### 在Ubuntu 14.04上 ####
------------------ 32位 ------------------
------------------ 32位 ------------------
# wget http://releases.ubuntu.com/14.04/ubuntu-14.04.1-server-i386.iso
# mount -o loop /path/to/ubuntu-14.04.1-server-i386.iso /mnt
----------
------------------ 64位 ------------------
------------------ 64位 ------------------
# wget http://releases.ubuntu.com/14.04/ubuntu-14.04.1-server-amd64.iso
# mount -o loop /path/to/ubuntu-14.04.1-server-amd64.iso /mnt
@ -58,33 +58,33 @@
#### 在Ubuntu 14.10上 ####
------------------ 32位 ------------------
------------------ 32位 ------------------
# cd
# wget http://archive.ubuntu.com/ubuntu/dists/utopic/main/installer-i386/current/images/netboot/netboot.tar.gz
----------
------------------ 64位 ------------------
------------------ 64位 ------------------
# cd
# wget http://archive.ubuntu.com/ubuntu/dists/utopic/main/installer-amd64/current/images/netboot/netboot.tar.gz
#### 在Ubuntu 14.04上 ####
------------------ 32位 ------------------
------------------ 32位 ------------------
# cd
# wget http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-i386/current/images/netboot/netboot.tar.gz
----------
------------------ 64位 ------------------
------------------ 64位 ------------------
# cd
# wget http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/netboot.tar.gz
对于其它处理器架构请访问下面的Ubuntu 14.10和Ubuntu 14.04网络启动官方页面,选择你的架构类型并下载需文件。
对于其它处理器架构请访问下面的Ubuntu 14.10和Ubuntu 14.04网络启动官方页面,选择你的架构类型并下载需文件。
- [http://cdimage.ubuntu.com/netboot/14.10/][4]
- [http://cdimage.ubuntu.com/netboot/14.04/][5]
@ -101,7 +101,7 @@
# tar xfz netboot.tar.gz
# cp -rf ubuntu-installer/ /var/lib/tftpboot/
如果你想要在PXE服务器上同时使用两种Ubuntu服务器架构先请下载然后根据不同的情况挂载并解压缩32位并拷贝**ubuntu-installer**目录到**/var/lib/tftpboot**然后卸载CD或删除网络启动归档以及解压缩的文件和文件夹。对于64位架构请重复上述步骤以便让最终的**tftp**路径形成以下结构。
如果你想要在PXE服务器上同时使用两种Ubuntu服务器架构先请下载然后根据不同的情况挂载或解压缩32位架构然后拷贝**ubuntu-installer**目录到**/var/lib/tftpboot**然后卸载CD或删除网络启动归档以及解压缩的文件和文件夹。对于64位架构请重复上述步骤以便让最终的**tftp**路径形成以下结构。
/var/lib/tftpboot/ubuntu-installer/amd64
/var/lib/tftpboot/ubuntu-installer/i386
@ -238,7 +238,7 @@ via: http://www.tecmint.com/add-ubuntu-to-pxe-network-boot/
作者:[Matei Cezar][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Mr小眼儿](https://github.com/tinyeyeser)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -0,0 +1,135 @@
Docker的镜像并不安全
================================================================================
最近使用Docker下载“官方”容器镜像的时候我发现这样一句话
ubuntu:14.04: The image you are pulling has been verified (您所拉取的镜像已经经过验证)
起初我以为这条信息引自Docker[大力推广][1]的镜像签名系统因此也就没有继续跟进。后来研究加密摘要系统的时候——Docker用这套系统来对镜像进行安全加固——我才有机会更深入的发现逻辑上整个与镜像安全相关的部分具有一系列系统性问题。
Docker报告一个已下载的镜像经过“验证”它所依据的仅仅是一份已签名的清单signed manifest而Docker从未根据这份清单对镜像本身的校验和进行验证。攻击者因此可以随意提供一个附带已签名清单的镜像。一系列严重漏洞的大门就此敞开。
镜像经由HTTPS服务器下载后通过一个未加密的管道流进入Docker守护进程
[decompress] -> [tarsum] -> [unpack]
这条管道的性能没有问题但是却完全没有经过加密。不可信的输入在签名验证之前是不应当进入管道的。不幸的是Docker在上面处理镜像的三个步骤中都没有对校验和进行验证。
然而不论Docker如何[声明][2]实际上镜像的校验和从未经过校验。下面是Docker与镜像校验和的验证相关的代码[片段][3],即使我提交了校验和不匹配的镜像,都无法触发警告信息。
if img.Checksum != "" && img.Checksum != checksum {
log.Warnf("image layer checksum mismatch: computed %q,
expected %q", checksum, img.Checksum)
}
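正确的顺序应当是先校验、后解压。下面用几行Python粗略示意这个思路仅为演示原理并非Docker的实际代码其中的数据和函数名都是虚构的

```python
import gzip
import hashlib

def safe_unpack(blob: bytes, expected_sha256: str) -> bytes:
    # 先对未解压的原始字节计算摘要——不可信输入在通过校验之前不进入解压管道
    digest = hashlib.sha256(blob).hexdigest()
    if digest != expected_sha256:
        raise ValueError("checksum mismatch")
    # 只有校验通过后才解压
    return gzip.decompress(blob)

# 演示:对一段压缩数据先验证、再解压
data = gzip.compress(b"layer contents")
good = hashlib.sha256(data).hexdigest()
print(safe_unpack(data, good))  # b'layer contents'
```

校验不通过时函数直接抛出异常,恶意数据根本没有机会进入解压器。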
### 不安全的处理管道 ###
**解压缩**
Docker支持三种压缩算法gzip、bzip2和xz。前两种使用Go的标准库实现是[内存安全memory-safe][4]的因此这里我预计的攻击类型应该是拒绝服务类的攻击包括CPU和内存使用上的宕机或过载等等。
第三种压缩算法xz比较有意思。因为没有现成的Go实现Docker 通过[执行(exec)][5]`xz`二进制命令来实现解压缩。
xz二进制程序来自于[XZ Utils][6]项目,由[大概][7]2万行C代码生成而来。而C语言不是一门内存安全的语言。这意味着C程序的恶意输入在这里也就是Docker镜像的XZ Utils解包程序潜在地可能会执行任意代码。
Docker以root权限*运行* `xz` 命令,更加恶化了这一潜在威胁。这意味着如果在`xz`中出现了一个漏洞,对`docker pull`命令的调用就会导致用户整个系统的完全沦陷。
**Tarsum**
对tarsum的使用其出发点是好的但却是最大的败笔。为了对任意tar文件计算出确定的校验和Docker先对tar文件进行解码然后按[固定的顺序][8]对特定部分求哈希值,同时排除其余的部分。
由于校验和的生成步骤固定,这一解码不可信数据的过程就可能成为[专门设计来攻破tarsum的代码][9]的攻击目标。这里潜在的攻击既包括拒绝服务攻击,还有逻辑上的漏洞攻击,可能导致文件被感染、忽略、篡改、植入等等,而这一切攻击发生的同时,校验和可能都保持不变。
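tarsum的做法可以用几行Python粗略示意仅为说明原理并非Docker的实际实现按固定顺序对tar成员的名字与内容求哈希——而这恰恰意味着校验和算出来之前必须先解码不可信的tar数据

```python
import hashlib
import io
import tarfile

def tarsum_like(tar_bytes: bytes) -> str:
    # 解码 tar按固定顺序对每个成员的名字和内容求哈希其余字段被排除在外
    h = hashlib.sha256()
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tf:
        for member in tf.getmembers():
            h.update(member.name.encode())
            f = tf.extractfile(member)
            if f is not None:
                h.update(f.read())
    return h.hexdigest()

def make_tar(files):
    # 在内存中构造一个 tar 归档,便于演示
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tf:
        for name, data in files:
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tf.addfile(info, io.BytesIO(data))
    return buf.getvalue()

a = make_tar([("hello.txt", b"hi")])
print(tarsum_like(a))
```

同样的输入总会得到同样的摘要,但计算过程本身就要求先解析不可信的归档结构。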
**解包**
解包的过程包括tar解码和生成硬盘上的文件。这一过程尤其危险因为在解包写入硬盘的过程中有另外三个[已报告的漏洞][10]。
任何情形下未经验证的数据都不应当解包后直接写入硬盘。
### libtrust ###
Docker的工具包[libtrust][11],号称“通过一个分布式的信任图表进行认证和访问控制”。很不幸,对此官方没有任何具体的说明,看起来它实现了一些[JSON对象签名与加密][12]规范以及其他一些未说明的算法。
使用libtrust下载一个清单经过签名和认证的镜像就可以触发下面这条不准确的信息说不准确是因为事实上它验证的只是清单并非真正的镜像
ubuntu:14.04: The image you are pulling has been verified(您所拉取的镜像已经经过验证)
目前只有Docker公司“官方”发布的镜像清单使用了这套签名系统但是上次我参加Docker[管理咨询委员会][13]的会议讨论时我所理解的是Docker公司正计划在未来扩大部署这套系统。他们的目标是以Docker公司为中心控制一个认证授权机构对镜像进行签名和客户认证。
我试图从Docker的代码中找到签名密钥但是没找到。它并不像我们所期望的那样把密钥嵌在二进制代码中而是在每次镜像下载前由Docker守护进程[通过HTTPS从CDN][14]远程获取。这是一个多么糟糕的方案有无数种攻击手段可能会将可信密钥替换成恶意密钥。这些攻击包括但不限于CDN供应商出问题、CDN初始密钥出现问题、客户端下载时的中间人攻击等等。
### 补救 ###
研究结束前,我[报告][15]了一些在tarsum系统中发现的问题但是截至目前我报告的这些问题仍然没有修复。
要改进Docker镜像下载系统的安全问题我认为应当有以下措施
**摒弃tarsum并且真正对镜像本身进行验证**
出于安全原因tarsum应当被摒弃同时镜像在完整下载后、其他步骤开始前就对镜像的加密签名进行验证。
**添加权限隔离**
镜像的处理过程中涉及到解压缩或解包的步骤必须在隔离的进程容器中进行即只给予其操作所需的最小权限。任何场景下都不应当使用root运行`xz`这样的解压缩工具。
**替换 libtrust**
应当用[更新框架(The Update Framework)][16]替换掉libtrust这是专门设计用来解决软件二进制签名此类实际问题的。其威胁模型非常全方位能够解决libtrust中未曾考虑到的诸多问题目前已经有了完整的说明文档。除了已有的Python实现我已经开始着手用[Go语言实现][17]的工作,也欢迎大家的贡献。
作为将更新框架加入Docker的一部分还应当加入一个本地密钥存储池将root密钥与registry的地址进行映射这样用户就可以拥有他们自己的签名密钥而不必使用Docker公司的了。
我注意到使用Docker公司官方以外的宿主仓库往往会是一种非常糟糕的用户体验。即使没有技术上的原因Docker也会将第三方的仓库内容降为二等地位来看待。这不仅仅是生态问题还是一个终端用户的安全问题。针对第三方仓库的全方位、去中心化的安全模型既必要又迫切。我希望Docker公司在重新设计他们的安全模型和镜像认证系统时能采纳这一点。
### 结论 ###
Docker用户应当意识到负责下载镜像的代码是非常不安全的。用户们应当只下载那些出处没有问题的镜像。目前这里的“没有问题”并不包括Docker公司的“可信trusted”镜像例如官方的Ubuntu和其他基础镜像。
最好的选择就是在本地屏蔽 `index.docker.io`,然后使用`docker load`命令在导入Docker之前手动下载镜像并对其进行验证。Red Hat的安全博客有一篇[很好的文章][18],大家可以看看。
感谢Lewis Marshall指出tarsum从未真正验证。
- [校验和的代码][19]
- [cloc][20]统计出18141行不含空行和注释的C代码以及5900行头文件代码版本号为v5.2.0。
- [Android中也发现了][21]类似的bug能够将任意文件注入到已签名的包中。同样出现问题的还有[Windows的Authenticode][22]认证系统,二进制文件会被篡改。
- 特别的:[CVE-2014-6407][23]、 [CVE-2014-9356][24]以及 [CVE-2014-9357][25]。目前已有两个Docker[安全发布][26]有了回应。
- 参见[2014-10-28 DGAB会议记录][27]的第8页。
--------------------------------------------------------------------------------
via: https://titanous.com/posts/docker-insecurity
作者:[titanous][a]
译者:[Mr小眼儿](http://blog.csdn.net/tinyeyeser)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://twitter.com/titanous
[1]:https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
[2]:https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
[3]:https://titanous.com/posts/docker-insecurity#fn:0
[4]:https://en.wikipedia.org/wiki/Memory_safety
[5]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/pkg/archive/archive.go#L91-L95
[6]:http://tukaani.org/xz/
[7]:https://titanous.com/posts/docker-insecurity#fn:1
[8]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/pkg/tarsum/tarsum_spec.md
[9]:https://titanous.com/posts/docker-insecurity#fn:2
[10]:https://titanous.com/posts/docker-insecurity#fn:3
[11]:https://github.com/docker/libtrust
[12]:https://tools.ietf.org/html/draft-ietf-jose-json-web-signature-11
[13]:https://titanous.com/posts/docker-insecurity#fn:4
[14]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/trust/trusts.go#L38
[15]:https://github.com/docker/docker/issues/9719
[16]:http://theupdateframework.com/
[17]:https://github.com/flynn/go-tuf
[18]:https://securityblog.redhat.com/2014/12/18/before-you-initiate-a-docker-pull/
[19]:https://github.com/docker/docker/blob/0874f9ab77a7957633cd835241a76ee4406196d8/image/image.go#L114-L116
[20]:http://cloc.sourceforge.net/
[21]:http://www.saurik.com/id/17
[22]:http://blogs.technet.com/b/srd/archive/2013/12/10/ms13-098-update-to-enhance-the-security-of-authenticode.aspx
[23]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-6407
[24]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9356
[25]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9357
[26]:https://groups.google.com/d/topic/docker-user/nFAz-B-n4Bw/discussion
[27]:https://docs.google.com/document/d/1JfWNzfwptsMgSx82QyWH_Aj0DRKyZKxYQ1aursxNorg/edit?pli=1


@ -0,0 +1,207 @@
如何配置fail2ban来保护Apache服务器
================================================================================
生产环境中的Apache服务器可能会受到不同的攻击。攻击者或许会试图通过暴力攻击或者执行恶意脚本来获取未经授权或者禁止访问的目录。一些恶意爬虫或许会扫描你网站上的安全漏洞或者收集email地址和web表单用来发送垃圾邮件。
Apache服务器具有全面的日志功能可以捕捉表明受到攻击的各种异常事件。然而它无法系统地解析apache日志并迅速对潜在攻击作出反应比如禁止/解禁IP地址。这时候`fail2ban`就可以解救这一切,减轻系统管理员的工作。
`fail2ban`是一款入侵防御工具,可以基于系统日志检测不同的攻击,并且可以自动采取保护措施,比如:通过`iptables`禁止IP、阻止/etc/hosts.deny中的连接、或者通过邮件通知事件。fail2ban具有一系列预定义的“监狱”它们使用特定程序的日志过滤器来检测常见的攻击。你也可以编写自定义的规则来检测来自任意程序的攻击。
在本教程中我会演示如何配置fail2ban来保护你的apache服务器。我假设你已经安装了apache和fail2ban。对于安装请参考[另外一篇教程][1]。
### 什么是 Fail2ban 监狱 ###
让我们更深入地了解fail2ban监狱。监狱定义了具体的应用策略它会为指定的程序触发保护措施。fail2ban在/etc/fail2ban/jail.conf下为一些流行程序如Apache、Dovecot、Lighttpd、MySQL、Postfix、[SSH][2]等预定义了一些监狱。每个监狱都依赖特定程序的日志过滤器位于/etc/fail2ban/filter.d下面来检测常见的攻击。让我们来看一个监狱的例子SSH监狱。
[ssh]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 6
banaction = iptables-multiport
SSH监狱的配置定义了这些参数
- **[ssh]** 方括号内是监狱的名字。
- **enabled**:是否启用监狱
- **port** 端口号(或者对应的服务名称)
- **filter** 检测攻击的检测规则
- **logpath** 检测的日志文件
- **maxretry** 触发屏蔽前允许失败的最大次数
- **banaction** 禁止操作
在某个监狱中定义的任何参数都会覆盖相应的fail2ban全局默认参数相反任何缺少的参数都会使用定义在[DEFAULT]字段中的默认值。
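举个例子下面这个假想的jail.local片段演示了这种覆盖关系[DEFAULT]中的取值对所有监狱生效,单个监狱里再定义的参数则覆盖默认值(其中的具体数值仅作示意):

```
[DEFAULT]
# 全局默认值,对所有监狱生效
bantime  = 600
maxretry = 5

[ssh]
enabled  = true
# 覆盖 [DEFAULT] 中的 maxretry = 5
maxretry = 3
```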
预定义的日志过滤器都放在/etc/fail2ban/filter.d中而可以采取的操作放在/etc/fail2ban/action.d中。
![](https://farm8.staticflickr.com/7538/16076581722_cbca3c1307_b.jpg)
如果你想要覆盖`fail2ban`的默认操作或者定义任何自定义监狱,你可以创建**/etc/fail2ban/jail.local**文件。本篇教程中,我会使用/etc/fail2ban/jail.local。
### 启用预定义的apache监狱 ###
`fail2ban`的默认安装为Apache服务提供了一些预定义监狱以及过滤器。我要启用这些内建的Apache监狱。由于Debian和红帽的配置稍有不同我会分别介绍它们的配置文件。
#### 在Debian 或者 Ubuntu启用Apache监狱 ####
要在基于Debian的系统上启用预定义的apache监狱如下创建/etc/fail2ban/jail.local。
$ sudo vi /etc/fail2ban/jail.local
----------
# detect password authentication failures
[apache]
enabled = true
port = http,https
filter = apache-auth
logpath = /var/log/apache*/*error.log
maxretry = 6
# detect potential search for exploits and php vulnerabilities
[apache-noscript]
enabled = true
port = http,https
filter = apache-noscript
logpath = /var/log/apache*/*error.log
maxretry = 6
# detect Apache overflow attempts
[apache-overflows]
enabled = true
port = http,https
filter = apache-overflows
logpath = /var/log/apache*/*error.log
maxretry = 2
# detect failures to find a home directory on a server
[apache-nohome]
enabled = true
port = http,https
filter = apache-nohome
logpath = /var/log/apache*/*error.log
maxretry = 2
由于上面的监狱没有指定措施,这些监狱都将会触发默认的措施。要查看默认的措施,在/etc/fail2ban/jail.conf中的[DEFAULT]下找到“banaction”。
banaction = iptables-multiport
本例中默认的操作是iptables-multiport定义在/etc/fail2ban/action.d/iptables-multiport.conf中。这个措施使用iptables的多端口模块禁止一个IP地址。
在启用监狱后你必须重启fail2ban来加载监狱。
$ sudo service fail2ban restart
#### 在CentOS/RHEL 或者 Fedora中启用Apache监狱 ####
要在基于红帽的系统中启用预定义的监狱,如下创建/etc/fail2ban/jail.local。
$ sudo vi /etc/fail2ban/jail.local
----------
# detect password authentication failures
[apache]
enabled = true
port = http,https
filter = apache-auth
logpath = /var/log/httpd/*error_log
maxretry = 6
# detect spammer robots crawling email addresses
[apache-badbots]
enabled = true
port = http,https
filter = apache-badbots
logpath = /var/log/httpd/*access_log
bantime = 172800
maxretry = 1
# detect potential search for exploits and php vulnerabilities
[apache-noscript]
enabled = true
port = http,https
filter = apache-noscript
logpath = /var/log/httpd/*error_log
maxretry = 6
# detect Apache overflow attempts
[apache-overflows]
enabled = true
port = http,https
filter = apache-overflows
logpath = /var/log/httpd/*error_log
maxretry = 2
# detect failures to find a home directory on a server
[apache-nohome]
enabled = true
port = http,https
filter = apache-nohome
logpath = /var/log/httpd/*error_log
maxretry = 2
# detect failures to execute non-existing scripts that
# are associated with several popular web services
# e.g. webmail, phpMyAdmin, WordPress
[apache-botsearch]
enabled = true
port = http,https
filter = apache-botsearch
logpath = /var/log/httpd/*error_log
maxretry = 2
注意这些监狱默认的操作是iptables-multiport定义在/etc/fail2ban/jail.conf中[DEFAULT]字段下的“banaction”中。这个措施使用iptables的多端口模块禁止一个IP地址。
启用监狱后你必须重启fail2ban来加载监狱。
在 Fedora 或者 CentOS/RHEL 7中
$ sudo systemctl restart fail2ban
在 CentOS/RHEL 6中
$ sudo service fail2ban restart
### 检查和管理fail2ban禁止状态 ###
监狱一旦激活后你可以用fail2ban的客户端命令行工具来监测当前的禁止状态。
查看激活的监狱列表:
$ sudo fail2ban-client status
查看特定监狱的状态包含禁止的IP列表
$ sudo fail2ban-client status [监狱名]
![](https://farm8.staticflickr.com/7572/15891521967_5c6cbc5f8f_c.jpg)
你也可以手动禁止或者解禁IP地址
要用指定监狱禁止IP
$ sudo fail2ban-client set [name-of-jail] banip [ip-address]
要解禁指定监狱屏蔽的IP
$ sudo fail2ban-client set [name-of-jail] unbanip [ip-address]
### 总结 ###
本篇教程解释了fail2ban监狱如何工作以及如何使用内置的监狱来保护Apache服务器。依赖于你的环境以及要保护的web服务器类型你或许要调整已有的监狱或者编写自定义监狱和日志过滤器。查看fail2ban的[官方Github页面][3]来获取最新的监狱和过滤器示例。
你有在生产环境中使用fail2ban么分享一下你的经验吧。
--------------------------------------------------------------------------------
via: http://xmodulo.com/configure-fail2ban-apache-http-server.html
作者:[Dan Nanni][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/how-to-protect-ssh-server-from-brute-force-attacks-using-fail2ban.html
[2]:http://xmodulo.com/how-to-protect-ssh-server-from-brute-force-attacks-using-fail2ban.html
[3]:https://github.com/fail2ban/fail2ban


@ -0,0 +1,51 @@
Ubuntu14.04或Mint17如何安装Kodi14XBMC
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Kodi_Xmas.jpg)
[Kodi][1]原名就是大名鼎鼎的XBMC发布了[最新版本14][2]命名为Helix。感谢官方XBMC提供的PPA现在可以很简单地在Ubuntu 14.04中安装了。
Kodi是一个优秀的、自由开源GPL许可证的媒体中心软件支持Windows、Linux、Mac、Android等所有平台。此软件拥有全屏的媒体中心界面可以管理所有音乐和视频不但支持本地文件还支持网络播放如YouTube、[Netflix][3]、Hulu、Amazon Prime和其他流媒体服务。
### Ubuntu 14.04, 14.10 和 Linux Mint 17 中安装XBMC 14 Kodi Helix ###
再次感谢官方的PPA让我们可以轻松安装Kodi 14。
支持Ubuntu 14.04、Ubuntu 12.04、Linux Mint 17、Pinguy OS 14.04、Deepin 2014、LXLE 14.04、Linux Lite 2.0、Elementary OS以及其他基于Ubuntu的Linux发行版。
打开终端Ctrl+Alt+T然后使用下列命令。
sudo add-apt-repository ppa:team-xbmc/ppa
sudo apt-get update
sudo apt-get install kodi
需要下载大约100MB在我看来这不算大。若需安装解码插件使用下列命令
sudo apt-get install kodi-audioencoder-* kodi-pvr-*
#### 从Ubuntu中移除Kodi 14 ####
从系统中移除Kodi 14 ,使用下列命令:
sudo apt-get remove kodi
同样也应该移除PPA软件源
sudo add-apt-repository --remove ppa:team-xbmc/ppa
我希望这个简单的文章可以帮助到你在Ubuntu, Linux Mint 和其他 Linux版本中轻松安装Kodi 14。
你觉得Kodi 14 Helix怎么样
你有没有使用其他的什么媒体中心?
可以在下面的评论区分享你的观点。
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-kodi-14-xbmc-in-ubuntu-14-04-linux-mint-17/
作者:[Abhishek][a]
译者:[Vic020/VicYu](http://www.vicyu.net)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://kodi.tv/
[2]:http://kodi.tv/kodi-14-0-helix-unwinds/
[3]:http://itsfoss.com/watch-netflix-in-ubuntu-14-04/


@ -0,0 +1,47 @@
如何在Ubuntu 14.04 中安装Winusb
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/WinUSB_Ubuntu_1404.jpeg)
[WinUSB][1]是一款简单的且有用的工具可以让你从Windows ISO镜像或者DVD中创建USB安装盘。它结合了GUI和命令行你可以根据你的喜好决定使用哪种。
在本篇中我们会展示**如何在Ubuntu 14.04、14.10 和 Linux Mint 17 中安装WinUSB**。
### 在Ubuntu 14.04、14.10 和 Linux Mint 17 中安装WinUSB ###
直到Ubuntu 13.10WinUSB都在积极开发并且可以在官方PPA中找到。这个PPA还没有为Ubuntu 14.04和14.10更新但其中的二进制文件在更新版本的Ubuntu和Linux Mint中仍旧可以运行。根据[你使用的系统是32位还是64位][2],使用下面的命令来下载二进制文件:
打开终端并在32位的系统下使用下面的命令
wget https://launchpad.net/~colingille/+archive/freshlight/+files/winusb_1.0.11+saucy1_i386.deb
对于64位的系统使用下面的命令
wget https://launchpad.net/~colingille/+archive/freshlight/+files/winusb_1.0.11+saucy1_amd64.deb
一旦你下载了正确的二进制包你可以用下面的命令安装WinUSB
sudo dpkg -i winusb*
如果在安装WinUSB时看见错误不要担心。使用这条命令修复依赖
sudo apt-get -f install
之后你就可以在Unity Dash中查找WinUSB并且用它在Ubuntu 14.04 中创建Windows的live USB了。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/WinUSB_Ubuntu.png)
我希望这篇文章能够帮到你**在Ubuntu 14.04、14.10 和 Linux Mint 17 中安装WinUSB**。
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-winusb-in-ubuntu-14-04/
作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://en.congelli.eu/prog_info_winusb.html
[2]:http://itsfoss.com/how-to-know-ubuntu-unity-version/


@ -1,8 +1,8 @@
Ubuntu apt-get & apt-cache commands with practical examples
实例展示Ubuntu中apt-get和apt-cache命令的使用
================================================================================
Apt-get & apt-cache are the command line **package management** utility in **Ubuntu Linux**. GUI version of apt-get command is the Synaptic Package Manager, in this post we are going to discuss 15 different examples of apt-get & apt-cache commands.
apt-get和apt-cache是**Ubuntu Linux**中的命令行下的**包管理**工具。apt-get的GUI版本是Synaptic包管理器本篇中我们会讨论apt-get和apt-cache命令的15个不同示例。
### Example:1 List of all the available packages ###
### 示例1 列出所有可用包 ###
linuxtechi@localhost:~$ apt-cache pkgnames
account-plugin-yahoojp
@ -14,9 +14,9 @@ Apt-get & apt-cache are the command line **package management** utility in **Ubu
gweled
.......................................
### Example:2 Search Packages using keywords ###
### 示例2 用关键字搜索包 ###
This command is very helpful when you are not sure about package name , just enter the keyword and apt-get command will list packages related to the keyword.
这个命令在你不确定包名时很有用只要在apt-cache这里原文是apt-get应为笔误后面输入与包相关的关键字即可。
linuxtechi@localhost:~$ apt-cache search "web server"
apache2 - Apache HTTP Server
@ -28,7 +28,7 @@ This command is very helpful when you are not sure about package name , just ent
apache2-utils - Apache HTTP Server (utility programs for web servers)
......................................................................
**Note**: If you have installed “**apt-file**” package then we can also search the package using config files as shown below :
**注意** 如果你安装了“**apt-file**”包,我们就可以像下面那样用配置文件搜索包。
linuxtechi@localhost:~$ apt-file search nagios.cfg
ganglia-nagios-bridge: /usr/share/doc/ganglia-nagios-bridge/nagios.cfg
@ -37,7 +37,7 @@ This command is very helpful when you are not sure about package name , just ent
pnp4nagios-bin: /etc/pnp4nagios/nagios.cfg
pnp4nagios-bin: /usr/share/doc/pnp4nagios/examples/nagios.cfg
### Example:3 Display the basic information of Specific package. ###
### 示例:3 显示特定包的基本信息 ###
linuxtechi@localhost:~$ apt-cache show postfix
Package: postfix
@ -51,7 +51,7 @@ This command is very helpful when you are not sure about package name , just ent
Provides: default-mta, mail-transport-agent
.....................................................
### Example:4 List the dependency of Package. ###
### 示例4 列出包的依赖 ###
linuxtechi@localhost:~$ apt-cache depends postfix
postfix
@ -69,7 +69,7 @@ This command is very helpful when you are not sure about package name , just ent
Depends: dpkg
............................................
### Example:5 Display the Cache Statistics using apt-cache. ###
### 示例5 使用apt-cache显示缓存统计 ###
linuxtechi@localhost:~$ apt-cache stats
Total package names: 60877 (1,218 k)
@ -90,9 +90,9 @@ This command is very helpful when you are not sure about package name , just ent
Total slack space: 37.3 k
Total space accounted for: 29.5 M
### Example:6 Update the package repository using “apt-get update” ###
### 示例6 使用 “apt-get update” 更新仓库 ###
Using the command “apt-get update” , we can resynchronize the package index files from their sources repository. Package index are retrieved from the file located at “/etc/apt/sources.list”
使用命令“apt-get update”我们可以从源仓库重新同步包索引文件。包索引的获取地址定义在“/etc/apt/sources.list”文件中。
linuxtechi@localhost:~$ sudo apt-get update
Ign http://extras.ubuntu.com utopic InRelease
@ -106,70 +106,70 @@ Using the command “apt-get update” , we can resynchronize the package index
Ign http://in.archive.ubuntu.com utopic-backports InRelease
................................................................
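“apt-get update”所读取的/etc/apt/sources.list中的条目大致是下面这种格式以utopic仓库为例具体的镜像地址因系统而异仅作示意

```
deb http://in.archive.ubuntu.com/ubuntu utopic main restricted universe multiverse
deb http://in.archive.ubuntu.com/ubuntu utopic-updates main restricted
deb http://security.ubuntu.com/ubuntu utopic-security main restricted
```

每一行指定一个仓库地址、发行版代号以及要启用的组件。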
### Example:7 Install a package using apt-get command. ###
### 示例:7 使用apt-get安装包 ###
linuxtechi@localhost:~$ sudo apt-get install icinga
In the above example we are installing a package named “icinga”
上面的命令会安装叫“icinga”的包。
### Example:8 Upgrade all the Installed Packages ###
### 示例8 升级所有已安装的包 ###
linuxtechi@localhost:~$ sudo apt-get upgrade
### Example:9 Upgrade a Particular Package. ###
### 示例9 更新特定的包 ###
“install” option along with “only-upgrade” in apt-get command is used to upgrade a particular package , example is shown below :
在apt-get命令中的“install”选项后面接上“--only-upgrade”用来更新一个特定的包如下所示
linuxtechi@localhost:~$ sudo apt-get install filezilla --only-upgrade
### Example:10 Removing a package using apt-get command. ###
### 示例10 使用apt-get卸载包 ###
linuxtechi@localhost:~$ sudo apt-get remove skype
Above command will remove or delete the skype package only , if you want to delete its config files then use the “purge” option in the apt-get command. Example is shown below :
上面的命令只会删除skype包如果你想要删除它的配置文件在apt-get命令中使用“purge”选项。如下所示
linuxtechi@localhost:~$ sudo apt-get purge skype
We can also use the combination of above commands :
我们可以结合使用上面的两个命令:
linuxtechi@localhost:~$ sudo apt-get remove --purge skype
### Example:11 Download the package in the Current Working Directory ###
### 示例11 在当前的目录中下载包 ###
linuxtechi@localhost:~$ sudo apt-get download icinga
Get:1 http://in.archive.ubuntu.com/ubuntu/ utopic/universe icinga amd64 1.11.6-1build1 [1,474 B]
Fetched 1,474 B in 1s (1,363 B/s)
Above command will download icinga package in your current working directory.
上面的命令会把icinga包下载到你当前的工作目录。
### Example:12 Clear disk Space used by retrieved package files. ###
### 示例12 清理本地包占用的磁盘空间 ###
linuxtechi@localhost:~$ sudo apt-get clean
Above Command will clear the disk space used by apt-get command while retrieving(download) packages.
上面的命令会清理apt-get在下载包时占用的磁盘空间。
We can also use “**autoclean**” option in place of “**clean**“, the main difference between them is that autoclean removes package files that can no longer be downloaded, and are largely useless.
我们也可以用“**autoclean**”选项来代替“**clean**”两者之间主要的区别是autoclean只删除那些已无法再下载、基本没用的包文件。
linuxtechi@localhost:~$ sudo apt-get autoclean
Reading package lists... Done
Building dependency tree
Reading state information... Done
### Example:13 Remove Packages using “autoremove” option. ###
### 示例13 使用“autoremove”删除包 ###
When we use “autoremove” option with apt-get command , then it will remove the packages that were installed to satisfy the dependency of other packages and are now no longer needed or used.
当在apt-get命令中使用“autoremove”时它会删除为了满足依赖而安装且现在没用的包。
linuxtechi@localhost:~$ sudo apt-get autoremove icinga
### Example:14 Display Changelog of a Package. ###
### 示例14 显示包的更新日志 ###
linuxtechi@localhost:~$ sudo apt-get changelog apache2
Get:1 Changelog for apache2 (http://changelogs.ubuntu.com/changelogs/pool/main/a/apache2/apache2_2.4.10-1ubuntu1/changelog) [195 kB]
Fetched 195 kB in 3s (60.9 kB/s)
Above Command will download the changelog of apache2 package and will display through sensible-pager on your screen.
上面的命令会下载apache2的更新日志并通过分页程序sensible-pager显示在你的屏幕上。
### Example:15 List broken dependencies using “check” option ###
### 示例15 使用 “check” 选项显示损坏的依赖 ###
linuxtechi@localhost:~$ sudo apt-get check
Reading package lists... Done
@ -181,9 +181,9 @@ Above Command will download the changelog of apache2 package and will display th
via: http://www.linuxtechi.com/ubuntu-apt-get-apt-cache-commands-examples/
作者:[Pradeep Kumar][a]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linuxtechi.com/author/pradeep/
[a]:http://www.linuxtechi.com/author/pradeep/


@ -0,0 +1,64 @@
如何在Ubuntu 14.04 和14.10 上安装新的字体
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/fonts.jpg)
Ubuntu默认自带了很多字体。但你或许对这些字体还不满意。因此你可以做的是在**Ubuntu 14.04、 14.10或者像Linux Mint其他的系统中安装额外的字体**。
### 第一步: 获取字体 ###
第一步也是最重要的下载你选择的字体。现在你或许在考虑从哪里下载字体。不要担心Google搜索可以给你提供几个免费的字体网站。你可以先去看看[ Lost Type 的字体][1]。[Squirrel的字体][2]同样也是一个下载字体的好地方。
### 第二步在Ubuntu中安装新字体 ###
下载的字体文件可能是一个压缩包。先解压它。大多数字体文件的格式是[TTF][3]TrueType Fonts或者[OTF][4]OpenType Fonts。无论是哪种只要双击字体文件它就会自动用字体查看器打开。在这里你可以在右上角看到安装选项
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Install_New_Fonts_Ubuntu.png)
在安装字体时不会显示其他信息。几秒钟后,你会看到状态变成“已安装”。不用说,此时字体已经装好了。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Install_New_Fonts_Ubuntu_1.png)
安装完毕后你就可以在GIMP、Pinta等应用中看到你新安装的字体了。
### 第二步在Linux上一次安装几个字体 ###
我没有打错。这仍旧是第二步但只是一个备选方案。上面介绍的在Ubuntu中安装字体的方法是不错的。但是这有一个小问题当你有20个新字体要安装时一个个单独双击既繁琐又麻烦。你不这么认为么
要在Ubuntu中一次安装几个字体你要做的是创建一个.fonts文件夹如果在你的家目录下还不存在这个目录的话。并把解压后的TTF和OTF文件复制到这个文件夹内。
在文件管理器中进入家目录。按下Ctrl+H[显示Ubuntu中的隐藏文件][5]。右键创建一个文件夹并命名为.fonts。这里开头的点很重要在Linux中文件名前面加上点意味着它在普通视图中会被隐藏。
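上面的图形界面步骤也可以在终端里完成(下面的下载路径只是假设,请换成你自己的实际路径):

```shell
# 创建 ~/.fonts 目录(若不存在)
mkdir -p ~/.fonts
# 把解压出来的 TTF/OTF 文件复制进去(路径按实际情况调整)
cp ~/Downloads/myfonts/*.ttf ~/.fonts/ 2>/dev/null || true
cp ~/Downloads/myfonts/*.otf ~/.fonts/ 2>/dev/null || true
# 如果系统带有 fontconfig刷新字体缓存让新字体立即可用
command -v fc-cache >/dev/null 2>&1 && fc-cache -f ~/.fonts || true
```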
#### 备选方案: ####
另外你可以安装字体管理程序来以GUI的形式管理字体。要在Ubuntu中安装字体管理程序打开终端并输入下面的命令
sudo apt-get install font-manager
从Unity Dash中打开字体管理器。你可以看到已安装的字体和安装新字体、删除字体等选项。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Font_Manager_Ubuntu.jpeg)
要卸载字体管理器,使用下面的命令:
sudo apt-get remove font-manager
我希望这篇文章可以帮助你在Ubuntu或其他Linux系统上安装字体。如果你有任何问题或建议请让我知道。
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-fonts-ubuntu-1404-1410/
作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://www.losttype.com/browse/
[2]:http://www.fontsquirrel.com/
[3]:http://en.wikipedia.org/wiki/TrueType
[4]:http://en.wikipedia.org/wiki/OpenType
[5]:http://itsfoss.com/hide-folders-and-show-hidden-files-in-ubuntu-beginner-trick/


@ -0,0 +1,76 @@
如何在Linux上使用dupeGuru删除重复文件
================================================================================
最近,我被要求清理我父亲的文件和文件夹。有一个难题是,里面存在很多命名不正确的重复文件:有移动硬盘的备份,同一个文件编辑过多个版本,甚至改变过目录结构,同一个文件被复制了好几次、改了名字、换了位置等,这些挤满了磁盘空间,而追踪每一个文件成了最大的问题。万幸的是,有一个小巧的软件可以帮助你省下很多时间,找到并删除系统中的重复文件:[dupeGuru][1]。它用Python写成这个去重软件在几个小时之前刚切换到了GPLv3许可证。因此是时候用它来清理你的文件了
### dupeGuru的安装 ###
在Ubuntu上 你可以加入Hardcoded的软件PPA
$ sudo apt-add-repository ppa:hsoft/ppa
$ sudo apt-get update
接着用下面的命令安装:
$ sudo apt-get install dupeguru-se
在ArchLinux中这个包在[AUR][2]中。
如果你想自己编译,源码在[GitHub][3]上。
### dupeGuru的基本使用 ###
DupeGuru的构想是既快又安全。这意味着程序不会在你的系统上疯狂地运行。它很少会删除你不想要删除的文件。然而既然在讨论文件删除保持谨慎和小心总是好的备份总是需要的。
看完注意事项后你可以用下面的命令运行dupeGuru了
$ dupeguru_se
你应该会看到要你选择文件夹的欢迎界面,在这里加入你想要扫描重复文件的文件夹。
![](https://farm9.staticflickr.com/8596/16199976251_f78b042fba.jpg)
一旦你选择完文件夹并启动扫描后dupeGuru会以列表的形式显示重复文件的组
![](https://farm9.staticflickr.com/8600/16016041367_5ab2834efb_z.jpg)
注意默认情况下dupeGuru基于文件的内容进行匹配而不是它们的名字。为了防止意外删除重要文件“匹配”那一列列出了所用的匹配算法。在这里你可以选择想要删除的匹配文件并按下“Action”按钮来查看可用的操作。
![](https://farm8.staticflickr.com/7516/16199976361_c8f919b06e_b.jpg)
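dupeGuru“基于内容匹配”的思路可以用几行Python粗略示意仅为演示原理并非dupeGuru的实际算法其中的文件名和数据均为虚构按内容哈希分组内容完全相同的文件落入同一组

```python
import hashlib
from collections import defaultdict

def find_duplicates(files):
    """files: {路径: 字节内容};按内容哈希分组,返回包含多个成员的重复组"""
    groups = defaultdict(list)
    for path, data in files.items():
        groups[hashlib.sha256(data).hexdigest()].append(path)
    return [paths for paths in groups.values() if len(paths) > 1]

dupes = find_duplicates({
    "a/photo.jpg": b"\x01\x02",
    "b/photo_copy.jpg": b"\x01\x02",  # 内容与 a/photo.jpg 相同,名字不同
    "c/other.jpg": b"\x03",
})
print(dupes)  # [['a/photo.jpg', 'b/photo_copy.jpg']]
```

可以看到,文件名完全不同也不影响匹配,这正是它能找出改过名的副本的原因。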
可用的选项是相当广泛的。简而言之,你可以删除重复、移动到另外的位置、忽略它们、打开它们、重命名它们甚至用自定义命令运行它们。如果你选择删除重复文件,你可能会像我一样非常意外竟然还有删除选项。
![](https://farm8.staticflickr.com/7503/16014366568_54f70e3140.jpg)
你不仅可以将删除的文件移到垃圾箱或者永久删除,还可以选择留下指向原文件的链接(软链接或者硬链接)。也就是说,重复文件将会被删除,但会保留下指向原文件的链接。这将会省下大量的磁盘空间。如果你需要将这些文件导入到工作空间,或者它们有一些依赖时,这很有用。
还有一个特别的选项你可以将结果导出为HTML或者CSV文件。不确定你会不会用到但我想当你需要追踪重复文件而不让dupeGuru直接处理它们时这会有用。
最后但并不是最不重要的是,偏好菜单可以让你对去重的想法成真。
![](https://farm8.staticflickr.com/7493/16015755749_a9f343b943_z.jpg)
在这里你可以选择扫描的标准是基于内容还是基于文件名并且可以用一个阈值来控制结果的数量。这里同样可以定义在执行操作时可选用的自定义命令。在无数其他小选项中要注意的是dupeGuru默认忽略小于10KB的文件。
要了解更多的信息,我建议你去[官方网站][4]看一下,那里有很多文档、论坛支持和其他好东西。
总结一下dupeGuru是我无论何时准备备份或者释放空间时都会想到的软件。我发现它对高级用户而言足够强大对新手而言也很直观。锦上添花的是dupeGuru是跨平台的这意味着你在Mac或Windows PC上也都可以使用它。如果你有特定的需求想要清理音乐或者图片这里有两个变种[dupeguru-me][5]和[dupeguru-pe][6],相应地可以清理音频和图片文件。与常规版本不同的是,它们不仅比较文件格式,还比较特定的媒体数据,像质量和码率。
你觉得dupeGuru怎么样你会考虑使用它么或者你有什么可以替代它的软件建议么在评论区让我知道你们的想法吧。
--------------------------------------------------------------------------------
via: http://xmodulo.com/dupeguru-deduplicate-files-linux.html
作者:[Adrien Brochard][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://www.hardcoded.net/dupeguru/
[2]:https://aur.archlinux.org/packages/dupeguru-se/
[3]:https://github.com/hsoft/dupeguru
[4]:http://www.hardcoded.net/dupeguru/
[5]:http://www.hardcoded.net/dupeguru_me/
[6]:http://www.hardcoded.net/dupeguru_pe/


@ -0,0 +1,340 @@
SaltStackLinux服务器配置管理神器
================================================================================
![](http://techarena51.com/wp-content/uploads/2015/01/SaltStack+logo+-+black+on+white.png)
我在搜索[Puppet][1]的替代品时偶然间碰到了Salt。我喜欢puppet但是我又爱上Salt了:)。我发现Salt在配置和使用上都要比Puppet简单当然这只是一家之言你大可不必介怀。另外一个爱上Salt的理由是它可以让你从命令行管理服务器配置比如
要通过Salt来更新所有服务器你只需运行以下命令
salt * pkg.upgrade
**安装SaltStack到Linux上。**
如果你是在CentOS 6/7上安装的话那么Salt可以通过EPEL仓库获取到。而对于Pi和Ubuntu Linux用户你可以从[这里][2]添加Salt仓库。Salt是基于python的所以你也可以使用pip来安装但是你得用yum-utils或是其它包管理器来自己处理它的依赖关系哦。
Salt遵循服务器-客户端模式,服务器端称为领主,而客户端则称为下属。
**安装并配置Salt领主**
[root@salt-master~]# yum install salt-master
Salt配置文件位于/etc/salt和/srv/salt。Salt虽然可以开箱即用但我还是建议你将日志配置得更详细点以方便日后排除故障。
[root@salt-master ~]# vim /etc/salt/master
# 默认是warning修改如下
log_level: debug
log_level_logfile: debug
[root@salt-master ~]# systemctl start salt-master
**安装并配置Salt下属**
[root@salt-minion~]#yum install salt-minion
# 添加你的Salt领主的主机名
[root@salt-minion~]#vim /etc/salt/minion
master: salt-master.com
# 启动下属
[root@salt-minion~] systemctl start salt-minion
在启动时下属客户机会生成一个密钥和一个id。然后它会连接到Salt领主服务器并验证自己的身份。Salt领主服务器在允许下属客户机下载配置之前必须接受下属的密钥。
**在Salt领主服务器上列出并接受密钥**
# 列出所有密钥
[root@salt-master~] salt-key -L
Accepted Keys:
Unaccepted Keys:
minion.com
Rejected Keys:
# 使用id 'minion.com'命令接受密钥
[root@salt-master~]salt-key -a minion.com
[root@salt-master~] salt-key -L
Accepted Keys:
minion.com
Unaccepted Keys:
Rejected Keys:
在接受下属客户机的密钥后你可以使用salt命令来立即获取信息。
**Salt命令行实例**
# 检查下属是否启动并运行
[root@salt-master~] salt 'minion.com' test.ping
minion.com:
True
# 在下属客户机上运行shell命令
[root@salt-master~]# salt 'minion.com' cmd.run 'ls -l'
minion.com:
total 2988
-rw-r--r--. 1 root root 1024 Jul 31 08:24 1g.img
-rw-------. 1 root root 940 Jul 14 15:04 anaconda-ks.cfg
-rw-r--r--. 1 root root 1024 Aug 14 17:21 test
# 安装/更新所有服务器上的软件
[root@salt-master ~]# salt '*' pkg.install git
salt命令需要一些组件来发送信息其中之一是minion id另一个是要在下属客户机上调用的函数。
在第一个实例中我使用test模块的ping函数来检查系统是否启动。该函数并不会真的执行一次ping它仅仅在下属客户机作出回应时返回True。
cmd.run用于执行远程命令pkg模块包含了包管理的函数。本文结尾提供了全部内建模块的列表。
**颗粒实例**
Salt使用一个名为**颗粒**grains的接口来获取系统信息。你可以利用颗粒在具有指定属性的系统上运行命令。
[root@vps4544 ~]# salt -G 'os:Centos' test.ping
minion:
True
更多颗粒实例请访问http://docs.saltstack.com/en/latest/topics/targeting/grains.html
**通过状态文件系统进行包管理。**
为了使软件配置自动化你需要使用状态系统并创建状态文件。这些文件使用YAML格式以python的字典、列表、字符串和数字构成数据结构。把这些文件从头到尾研读一遍将有助于你更好地理解它的配置。
**VIM状态文件实例**
[root@salt-master~]# vim /srv/salt/vim.sls
vim-enhanced:
pkg.installed
/etc/vimrc:
file.managed:
- source: salt://vimrc
- user: root
- group: root
- mode: 644
该文件的第一和第三行称为状态id它们必须包含需要管理的包或文件的确切名称或路径。状态id之后是状态和函数声明pkg和file是状态声明installed和managed是函数声明。函数接受参数用户、组、模式和源都是函数managed的参数。
要将该配置应用到下属客户机请把你的vimrc文件移动到/srv/salt然后运行以下命令。
[root@salt-master~]# salt 'minion.com' state.sls vim
minion.com:
----------
ID: vim-enhanced
Function: pkg.installed
Result: True
Comment: The following packages were installed/updated: vim-enhanced.
Started: 09:36:23.438571
Duration: 94045.954 ms
Changes:
----------
vim-enhanced:
----------
new:
7.4.160-1.el7
old:
Summary
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
你也可以添加依赖关系到你的配置中。
[root@salt-master~]# vim /srv/salt/ssh.sls
openssh-server:
pkg.installed
/etc/ssh/sshd_config:
file.managed:
- user: root
- group: root
- mode: 600
- source: salt://ssh/sshd_config
sshd:
service.running:
- require:
- pkg: openssh-server
这里的require声明是必需的它在service和pkg状态之间创建依赖关系该声明会首先检查包是否已安装然后再运行服务。
但是我更偏向于使用watch声明因为它也可以检查文件是否修改和重启服务。
[root@salt-master~]# vim /srv/salt/ssh.sls
openssh-server:
pkg.installed
/etc/ssh/sshd_config:
file.managed:
- user: root
- group: root
- mode: 600
- source: salt://sshd_config
sshd:
service.running:
- watch:
- pkg: openssh-server
- file: /etc/ssh/sshd_config
[root@vps4544 ssh]# salt 'minion.com' state.sls ssh
seven.leog.in:
Changes:
----------
ID: openssh-server
Function: pkg.installed
Result: True
Comment: Package openssh-server is already installed.
Started: 13:01:55.824367
Duration: 1.156 ms
Changes:
----------
ID: /etc/ssh/sshd_config
Function: file.managed
Result: True
Comment: File /etc/ssh/sshd_config updated
Started: 13:01:55.825731
Duration: 334.539 ms
Changes:
----------
diff:
---
+++
@@ -14,7 +14,7 @@
# SELinux about this change.
# semanage port -a -t ssh_port_t -p tcp #PORTNUMBER
#
-Port 22
+Port 422
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
----------
ID: sshd
Function: service.running
Result: True
Comment: Service restarted
Started: 13:01:56.473121
Duration: 407.214 ms
Changes:
----------
sshd:
True
Summary
------------
Succeeded: 4 (changed=2)
Failed: 0
------------
Total states run: 4
在单一目录中维护所有的配置文件是一项复杂的大工程因此你可以创建子目录并在其中添加名为init.sls的配置文件。
[root@salt-master~]# mkdir /srv/salt/ssh
[root@salt-master~]# vim /srv/salt/ssh/init.sls
openssh-server:
pkg.installed
/etc/ssh/sshd_config:
file.managed:
- user: root
- group: root
- mode: 600
- source: salt://ssh/sshd_config
sshd:
service.running:
- watch:
- pkg: openssh-server
- file: /etc/ssh/sshd_config
[root@vps4544 ssh]# cp /etc/ssh/sshd_config /srv/salt/ssh/
[root@vps4544 ssh]# salt 'minion.com' state.sls ssh
**Top文件和环境。**
top文件top.sls是用来定义环境的文件它可以把下属客户机映射到相应的配置包默认环境是base。你需要定义在base环境下哪个包会被安装到哪台服务器。
如果对于一台特定的下属客户机而言,有多个环境,并且有多于一个的定义,那么默认情况下,基本环境将取代其它环境。
要定义环境你需要把它添加到领主配置文件的file_roots配置项中。
[root@salt-master ~]# vim /etc/salt/master
file_roots:
base:
- /srv/salt
dev:
- /srv/salt/dev
现在添加一个top.sls文件到/srv/salt
[root@salt-master ~]# vim /srv/salt/top.sls
base:
'*':
- vim
'minion.com':
- ssh
应用top文件配置
[root@salt-master~]# salt '*' state.highstate
minion.com:
----------
ID: vim-enhanced
Function: pkg.installed
Result: True
Comment: Package vim-enhanced is already installed.
Started: 13:10:55
Duration: 1678.779 ms
Changes:
----------
ID: openssh-server
Function: pkg.installed
Result: True
Comment: Package openssh-server is already installed.
Started: 13:10:55.
Duration: 2.156 ms
下属客户机将下载top文件并搜索用于它的配置领主服务器也会将配置应用到所有下属客户机。
这仅仅是一个Salt的简明教程如果你想要深入学习并理解你可以访问以下链接。如果你已经在使用Salt那么请告诉我你的建议和意见吧。
更新: [Foreman][3] 已经通过[插件][4]支持salt。
阅读链接
- http://docs.saltstack.com/en/latest/ref/states/top.html#how-top-files-are-compiled
- http://docs.saltstack.com/en/latest/topics/tutorials/states_pt1.html
- http://docs.saltstack.com/en/latest/ref/states/highstate.html#state-declaration
颗粒
- http://docs.saltstack.com/en/latest/topics/targeting/grains.html
Salt模块列表
Salt和Puppet的充分比较
- https://mywushublog.com/2013/03/configuration-management-with-salt-stack/
内建执行模块的完全列表
- http://docs.saltstack.com/en/latest/ref/modules/all/
--------------------------------------------------------------------------------
via: http://techarena51.com/index.php/getting-started-with-saltstack/
作者:[Leo G][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://techarena51.com/
[1]:http://techarena51.com/index.php/a-simple-way-to-install-and-configure-a-puppet-server-on-linux/
[2]:http://docs.saltstack.com/en/latest/topics/installation/index.html
[3]:http://techarena51.com/index.php/using-foreman-opensource-frontend-puppet/
[4]:https://github.com/theforeman/foreman_salt/wiki

View File

@ -0,0 +1,73 @@
如何在Ubuntu 14.04 上为Apache 2.4 安装SSL
================================================================================
今天我会讲解如何为你的个人网站或者博客安装**SSL 证书**,以保护访问者和网站之间通信的安全。
安全套接字层或称SSL是一种加密网站和浏览器之间连接的标准安全技术。它确保服务器和浏览器之间传输的数据保持隐私和安全被成千上万的人用来保护与客户的通信。要启用SSL连接web服务器需要安装SSL证书。
你可以创建你自己的SSL证书但它默认不会被浏览器信任要解决这个问题你需要从受信任的证书机构CA处购买证书我们会向你展示如何获取证书并在apache中安装。
### 生成一个证书签名请求 ###
证书机构CA会要求你在你的服务器上生成一个证书签名请求CSR。这是一个很简单的过程只需要一会就行你需要运行下面的命令并输入需要的信息
# openssl req -new -newkey rsa:2048 -nodes -keyout yourdomainname.key -out yourdomainname.csr
输出看上去会像这样:
![generate csr](http://blog.linoxide.com/wp-content/uploads/2015/01/generate-csr.jpg)
这一步会生成两个文件一个是用于解密SSL证书的私钥文件一个是证书签名请求CSR文件用于申请你的SSL证书。
根据你申请的机构你需要上传csr文件或者在网站表单中粘贴它的内容。
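如果不想逐项回答交互式提示,也可以用 `-subj` 参数一次性给出证书主体信息。下面是一个可以自行验证的示例(域名和主体字段均为示例假设):

```shell
# 在临时目录中非交互式地生成私钥和 CSR-nodes 表示私钥不加密)
workdir=$(mktemp -d)
cd "$workdir"

openssl req -new -newkey rsa:2048 -nodes \
    -keyout example.com.key -out example.com.csr \
    -subj "/C=US/ST=State/L=City/O=Example/CN=example.com" 2>/dev/null

# 校验刚生成的 CSR 是否有效
openssl req -in example.com.csr -noout -verify
```

把 example.com 换成你自己的域名即可,生成的 .csr 就是要提交给 CA 的文件。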
### 在Apache中安装实际的证书 ###
生成步骤完成之后,你会收到新的数字证书,本篇教程中我们使用[Comodo SSL][1]并在一个zip文件中收到了证书。要在apache中使用它你首先需要用下面的命令为收到的证书创建一个组合的证书
# cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > bundle.crt
![bundle](http://blog.linoxide.com/wp-content/uploads/2015/01/bundle.jpg)
用下面的命令确保ssl模块已经加载进apache了
# a2enmod ssl
如果你看到了“Module ssl already enabled”这样的信息就说明你成功了如果你看到了“Enabling module ssl”那么你还需要用下面的命令重启apache
# service apache2 restart
最后像下面这样修改你的虚拟主机文件(通常在/etc/apache2/sites-enabled 下):
DocumentRoot /var/www/html/
ServerName linoxide.com
SSLEngine on
SSLCertificateFile /usr/local/ssl/crt/yourdomainname.crt
SSLCertificateKeyFile /usr/local/ssl/yourdomainname.key
SSLCACertificateFile /usr/local/ssl/bundle.crt
你现在应该可以用https://YOURDOMAIN注意使用https而不是http访问你的网站了并且可以看到SSL指示通常在你的浏览器中以一把锁表示
**注意:** 现在所有的链接都必须指向https如果网站上的一些内容像图片或者css文件等仍旧指向http链接的话你会在浏览器中得到一个警告要修复这个问题请确保每个链接都指向了https。
### 在你的网站上重定向HTTP请求到HTTPS中 ###
如果你希望重定向常规的HTTP请求到HTTPS添加下面的文本到你希望的虚拟主机或者如果希望给服务器上所有网站都添加的话就加入到apache.conf中
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/install-ssl-apache-2-4-in-ubuntu/
作者:[Adrian Dinu][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/adriand/
[1]:https://ssl.comodo.com/

View File

@ -0,0 +1,130 @@
如何在Ubuntu 14.04 LTS安装网络爬虫工具
================================================================================
Scrapy是一款提取网站数据的开源工具。Scrapy框架用Python开发而成它使抓取工作又快又简单且可扩展。我们已经在virtual box中创建一台虚拟机VM并且在上面安装了Ubuntu 14.04 LTS。
### 安装 Scrapy ###
Scrapy依赖于Python、开发库和pip。Python最新的版本已经在Ubuntu上预装了。因此我们在安装Scrapy之前只需安装pip和python开发库就可以了。
pip是easy_install的替代品用于安装和管理Python包。pip的安装见图1。
sudo apt-get install python-pip
![Fig:1 Pip installation](http://blog.linoxide.com/wp-content/uploads/2014/11/f1.png)
图:1 pip安装
我们必须要用下面的命令安装python开发库。如果包没有安装那么就会在安装scrapy框架的时候报关于python.h头文件的错误。
sudo apt-get install python-dev
![Fig:2 Python Developer Libraries](http://blog.linoxide.com/wp-content/uploads/2014/11/f2.png)
图:2 Python 开发库
scrapy框架既可以从deb包安装也可以从源码安装。在图3中我们用pipPython包管理器进行了安装。
sudo pip install scrapy
![Fig:3 Scrapy Installation](http://blog.linoxide.com/wp-content/uploads/2014/11/f3.png)
图:3 Scrapy 安装
如图4所示安装scrapy框架需要一些时间。
![Fig:4 Successful installation of Scrapy Framework](http://blog.linoxide.com/wp-content/uploads/2014/11/f4.png)
图:4 成功安装Scrapy框架
### 使用scrapy框架提取数据 ###
**(基础教程)**
我们将用scrapy从fatwallet.com上提取提供返现卡的商店的名字。首先我们使用下面的命令新建一个scrapy项目“store_name”见图5。
$sudo scrapy startproject store_name
![Fig:5 Creation of new project in Scrapy Framework](http://blog.linoxide.com/wp-content/uploads/2014/11/f5.png)
图:5 Scrapy框架新建项目
上面的命令在当前路径创建了一个“store_name”的目录。项目主目录下包含的文件/文件夹见图6。
$ sudo ls -lR store_name
![Fig:6 Contents of store_name project.](http://blog.linoxide.com/wp-content/uploads/2014/11/f6.png)
图:6 store_name项目的内容
每个文件/文件夹的概要如下:
- scrapy.cfg 是项目配置文件
- store_name/ 主目录下的另一个文件夹。 这个目录包含了项目的python代码
- store_name/items.py 包含了将由蜘蛛爬取的项目
- store_name/pipelines.py 是管道文件
- store_name/settings.py 是项目的配置文件
- store_name/spiders/ 包含了用于爬取的蜘蛛
由于我们要从fatwallet.com上提取店名因此我们如下修改items.py文件。
import scrapy
class StoreNameItem(scrapy.Item):
name = scrapy.Field() # extract the names of Cards store
之后我们要在项目的store_name/spiders/文件夹下写一个新的蜘蛛。蜘蛛是一个python类它包含了下面几个必须的属性
1. 蜘蛛名 (name )
2. 爬取起点url (start_urls)
3. 包含了从响应中提取需要内容相应的正则表达式的解析方法。解析方法对爬虫而言很重要。
我们在store_name/spiders/目录下创建了“store_name.py”爬虫并添加如下代码来从fatwallet.com上提取店名。爬虫的输出写到文件**StoreName.txt**见图7。
from scrapy.selector import Selector
from scrapy.spider import BaseSpider
from scrapy.http import Request
from scrapy.http import FormRequest
import re
class StoreNameItem(BaseSpider):
name = "storename"
allowed_domains = ["fatwallet.com"]
start_urls = ["http://fatwallet.com/cash-back-shopping/"]
def parse(self,response):
output = open('StoreName.txt','w')
resp = Selector(response)
tags = resp.xpath('//tr[@class="storeListRow"]|\
//tr[@class="storeListRow even"]|\
//tr[@class="storeListRow even last"]|\
//tr[@class="storeListRow last"]').extract()
for i in tags:
i = i.encode('utf-8', 'ignore').strip()
store_name = ''
if re.search(r"class=\"storeListStoreName\">.*?<",i,re.I|re.S):
store_name = re.search(r"class=\"storeListStoreName\">.*?<",i,re.I|re.S).group()
store_name = re.search(r">.*?<",store_name,re.I|re.S).group()
store_name = re.sub(r'>',"",re.sub(r'<',"",store_name,re.I))
store_name = re.sub(r'&amp;',"&",re.sub(r'&amp;',"&",store_name,re.I))
#print store_name
output.write(store_name+""+"\n")
![Fig:7 Output of the Spider code .](http://blog.linoxide.com/wp-content/uploads/2014/11/f7.png)
图:7 爬虫的输出
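上面爬虫里几行正则的核心思路是:先取出带 `storeListStoreName` class 的标签内容,再剥掉尖括号。这个思路也可以在 shell 里用一段示例 HTML 快速验证HTML 片段为演示用的假设数据):

```shell
# 演示店名提取思路:取出 class 之后、下一个 "<" 之前的文本
html='<td class="storeListStoreName">Amazon</td>'

store_name=$(printf '%s' "$html" \
  | grep -o 'class="storeListStoreName">[^<]*' \
  | sed 's/.*>//')

echo "$store_name"
```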
*注意: 本教程的目的仅用于理解scrapy框架*
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/scrapy-install-ubuntu/
作者:[nido][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/naveeda/

View File

@ -0,0 +1,93 @@
如何在Linux上找出并删除重复的文件
================================================================================
大家好今天我们会学习如何在Linux PC或者服务器上找出并删除重复文件。这里有一些工具你可以根据自己的需要使用。
无论你使用的是Linux桌面还是服务器都有一些很好的工具能帮你扫描系统中的重复文件并删除它们来释放空间图形界面和命令行界面的都有。重复文件是对磁盘空间不必要的浪费。毕竟如果你的确需要在不同的位置拥有同一个文件你可以使用软链接或者硬链接这样数据在磁盘上只需存储一处。
### FSlint ###
[FSlint][1] 在不同Linux发行版的二进制仓库中都有包括Ubuntu、Debian、Fedora和Red Hat。只需运行你的包管理器并安装“fslint”包就行。这个工具默认提供了一个简单的图形化界面同样也有包含各种功能的命令行版本。
不要害怕使用FSlint的图形化界面。默认情况下它会自动选中Duplicate窗格并以你的家目录作为搜索路径。
要安装fslint若像我这样运行的是Ubuntu这里是默认的命令
$ sudo apt-get install fslint
这里还有针对其他发行版的安装命令:
Debian
svn checkout http://fslint.googlecode.com/svn/trunk/ fslint-2.45
cd fslint-2.45
dpkg-buildpackage -I.svn -rfakeroot -tc
sudo dpkg -i ../fslint_2.45-1_all.deb
Fedora
sudo yum install fslint
OpenSuse
[ -f /etc/mandrake-release ] && pkg=rpm
[ -f /etc/SuSE-release ] && pkg=packages
wget http://www.pixelbeat.org/fslint/fslint-2.42.tar.gz
sudo rpmbuild -ta fslint-2.42.tar.gz
sudo rpm -Uvh /usr/src/$pkg/RPMS/noarch/fslint-2.42-1.*.noarch.rpm
对于其他发行版:
wget http://www.pixelbeat.org/fslint/fslint-2.44.tar.gz
tar -xzf fslint-2.44.tar.gz
cd fslint-2.44
(cd po && make)
./fslint-gui
要在Ubuntu中运行fslint的GUI版本fslint-gui, 使用Alt+F2运行命令或者在终端输入
$ fslint-gui
默认情况下它会自动选中Duplicate窗格并以你的家目录作为搜索路径。你要做的就是点击Find按钮FSlint会自动在你的家目录下找出重复文件列表。
![Delete Duplicate files with Fslint](http://blog.linoxide.com/wp-content/uploads/2015/01/delete-duplicates-fslint.png)
使用按钮来删除任何你要删除的文件,并且可以双击预览。
完成这一切后,我们就成功地删除你系统中的重复文件了。
**注意**命令行工具默认不在环境变量PATH中不能像普通命令那样直接运行。在Ubuntu中你可以在/usr/share/fslint/fslint下找到它。因此如果你要用fslint对单个目录做完整扫描下面是Ubuntu中的运行命令
cd /usr/share/fslint/fslint
./fslint /path/to/directory
**这个命令实际上并不会删除任何文件。它只会打印出重复文件的列表,接下来的事需要你自己做。**
$ /usr/share/fslint/fslint/findup --help
find dUPlicate files.
Usage: findup [[[-t [-m|-d]] | [--summary]] [-r] [-f] paths(s) ...]
If no path(s) specified then the current directory is assumed.
When -m is specified any found duplicates will be merged (using hardlinks).
When -d is specified any found duplicates will be deleted (leaving just 1).
When -t is specfied, only report what -m or -d would do.
When --summary is specified change output format to include file sizes.
You can also pipe this summary format to /usr/share/fslint/fslint/fstool/dupwaste
to get a total of the wastage due to duplicates.
![fslint help](http://blog.linoxide.com/wp-content/uploads/2015/01/fslint-help.png)
--------------------------------------------------------------------------------
via: http://linoxide.com/file-system/find-remove-duplicate-files-linux/
作者:[Arun Pyasi][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://www.pixelbeat.org/fslint/
[2]:http://www.pixelbeat.org/fslint/fslint-2.42.tar.gz

View File

@ -0,0 +1,74 @@
如何在Ubuntu Server 14.04 LTS(Trusty) 上安装Ghost
================================================================================
今天我们将会在Ubuntu Server 14.04 LTS (Trusty)上安装一个博客平台Ghost。
Ghost是一款设计优美的发布平台很容易使用且对任何人都免费。它是免费的开源软件FOSS它的源码在Github上。截至2014年1月它的界面很简单还有分析面板。编辑使用的是分屏显示。
因此有了这篇步骤明确的在Ubuntu Server上安装Ghost的教程
### 1. 升级Ubuntu ###
第一步是运行Ubuntu软件升级并安装一系列需要的额外包。
sudo apt-get update
sudo apt-get upgrade -y
sudo aptitude install -y build-essential zip vim wget
### 2. 下载并安装 Node.js 源码 ###
wget http://nodejs.org/dist/node-latest.tar.gz
tar -xzf node-latest.tar.gz
cd node-v*
现在我们使用下面的命令安装Node.js
./configure
make
sudo make install
### 3. 下载并安装Ghost ###
sudo mkdir -p /var/www/
cd /var/www/
sudo wget https://ghost.org/zip/ghost-latest.zip
sudo unzip -d ghost ghost-latest.zip
cd ghost/
sudo npm install --production
### 4. 配置Ghost ###
sudo nano config.example.js
在“Production”字段
host: '127.0.0.1',
修改成
host: '0.0.0.0',
### 创建Ghost用户 ###
sudo adduser --shell /bin/bash --gecos 'Ghost application' ghost
sudo chown -R ghost:ghost /var/www/ghost/
现在要启动Ghost你需要以“ghost”用户登录。
su - ghost
cd /var/www/ghost/
现在你已经以“ghost”用户登录可以启动Ghost了
npm start --production
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/install-ghost-ubuntu-server-14-04/
作者:[Arun Pyasi][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/

View File

@ -0,0 +1,92 @@
如何在RedHat/CentOS 7.x中使用nmcli管理网络
================================================================================
[**Red Hat Enterprise Linux 7**][1]和**CentOS 7**的一个新特性是默认的网络服务由**NetworkManager**提供这是一个动态的网络控制和配置守护进程它在网络设备和连接可用时保持链接正常同时也提供了典型的ifcfg类型的配置文件。NetworkManager可以用于下面这些连接Ethernet、 VLANs、桥接、Bonds、Teams、 Wi-Fi、 移动宽带 (比如 3G)和IP-over-InfiniBand(IPoIB)。
NetworkManager可以由命令行工具**nmcli**控制。
### nmcli的通常用法 ###
nmcli的通常语法是
# nmcli [ OPTIONS ] OBJECT { COMMAND | help }
一件很酷的事情是你可以使用tab键来补全操作这样你在何时忘记了语法你都可以用tab来看到可用的选项了。
![nmcli tab](http://blog.linoxide.com/wp-content/uploads/2014/12/nmcli-tab.jpg)
nmcli通常用法的一些例子
# nmcli general status
会显示NetworkManager的整体状态。
# nmcli connection show
会显示所有的连接
# nmcli connection show -a
仅显示活跃的连接
# nmcli device status
显示NetworkManager识别的设备列表和它们当前的状态。
![nmcli general](http://blog.linoxide.com/wp-content/uploads/2014/12/nmcli-gneral.jpg)
### 启动/停止网络设备 ###
你可以使用nmcli从命令行启动或者停止网络设备这等同于ifconfig中的up和down。
停止网络设备使用下面的语法:
# nmcli device disconnect eno16777736
要启动它使用下面的语法:
# nmcli device connect eno16777736
### 使用静态IP添加一个以太网连接 ###
要用静态IP添加一个以太网连接可以使用下面的命令
# nmcli connection add type ethernet con-name NAME_OF_CONNECTION ifname interface-name ip4 IP_ADDRESS gw4 GW_ADDRESS
将NAME_OF_CONNECTION替换成新的连接名IP_ADDRESS替换成你要的IP地址GW_ADDRESS替换成你使用的网关地址如果你并不使用网关你可以忽略这部分
# nmcli connection add type ethernet con-name NEW ifname eno16777736 ip4 192.168.1.141 gw4 192.168.1.1
要设置这个连接的DNS服务器使用下面的命令
# nmcli connection modify NEW ipv4.dns "8.8.8.8 8.8.4.4"
要启用新的以太网连接,使用下面的命令:
# nmcli connection up NEW ifname eno16777736
要查看新配置连接的详细信息,使用下面的命令:
# nmcli -p connection show NEW
![nmcli add static](http://blog.linoxide.com/wp-content/uploads/2014/12/nmcli-add-static.jpg)
### 添加一个使用DHCP的连接 ###
如果你想要添加一个使用DHCP来配置接口IP地址、网关地址和dns服务器地址的新的连接你要做的就是忽略命令中的ip/gw部分NetworkManager会自动使用DHCP来获取配置细节。
比如要创建一个新的叫NEW_DHCP的DHCP连接在设备eno16777736上你可以使用下面的命令
# nmcli connection add type ethernet con-name NEW_DHCP ifname eno16777736
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-command/nmcli-tool-red-hat-centos-7/
作者:[Adrian Dinu][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/adriand/
[1]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.0_Release_Notes/

View File

@ -0,0 +1,149 @@
在Ubuntu/Fedora/CentOS中安装Gitblit
================================================================================
**Git**是一款注重速度、数据完整性、分布式支持和非线性工作流的分布式版本控制工具。Git最初由Linus Torvalds在2005年为Linux内核开发而设计如今已经成为被广泛接受的版本控制系统。
和其他大多数分布式版本控制系统一样而不像大多数客户端-服务端系统每个Git工作目录都是一个完整的仓库带有完整的历史记录和完整的版本跟踪能力不需要依赖网络或者中心服务器。像Linux内核一样Git是在GPLv2许可证下发布的自由软件。
本篇教程我会演示如何安装Gitblit服务器。Gitblit的最新稳定版是1.6.2。[Gitblit][1]是一款开源、纯Java开发的用于管理、浏览和提供[Git][2]仓库服务的工具。它被设计成一款服务于希望托管中心仓库的小型工作组的工具。
mkdir -p /opt/gitblit; cd /opt/gitblit; wget http://dl.bintray.com/gitblit/releases/gitblit-1.6.2.tar.gz
### 列出目录: ###
root@vps124229 [/opt/gitblit]# ls
./ docs/ gitblit-stop.sh* LICENSE service-ubuntu.sh*
../ ext/ install-service-centos.sh* migrate-tickets.sh*
add-indexed-branch.sh* gitblit-1.6.2.tar.gz install-service-fedora.sh* NOTICE
authority.sh* gitblit.jar install-service-ubuntu.sh* reindex-tickets.sh*
data/ gitblit.sh* java-proxy-config.sh* service-centos.sh*
默认配置文件在data/gitblit.properties你可以根据需要自己修改。
### 启动Gitblit服务 ###
### 通过service命令 ###
root@vps124229 [/opt/gitblit]# cp service-centos.sh /etc/init.d/gitblit
root@vps124229 [/opt/gitblit]# chkconfig --add gitblit
root@vps124229 [/opt/gitblit]# service gitblit start
Starting gitblit server
.
### 手动启动: ###
root@vps124229 [/opt/gitblit]# java -jar gitblit.jar --baseFolder data
2015-01-10 09:16:53 [INFO ] *****************************************************************
2015-01-10 09:16:53 [INFO ] _____ _ _ _ _ _ _
2015-01-10 09:16:53 [INFO ] | __ \(_)| | | | | |(_)| |
2015-01-10 09:16:53 [INFO ] | | \/ _ | |_ | |__ | | _ | |_
2015-01-10 09:16:53 [INFO ] | | __ | || __|| '_ \ | || || __|
2015-01-10 09:16:53 [INFO ] | |_\ \| || |_ | |_) || || || |_
2015-01-10 09:16:53 [INFO ] \____/|_| \__||_.__/ |_||_| \__|
2015-01-10 09:16:53 [INFO ] Gitblit v1.6.2
2015-01-10 09:16:53 [INFO ]
2015-01-10 09:16:53 [INFO ] *****************************************************************
2015-01-10 09:16:53 [INFO ] Running on Linux (3.8.13-xxxx-grs-ipv6-64-vps)
2015-01-10 09:16:53 [INFO ] Logging initialized @842ms
2015-01-10 09:16:54 [INFO ] Using JCE Unlimited Strength Jurisdiction Policy files
2015-01-10 09:16:54 [INFO ] Setting up HTTPS transport on port 8443
2015-01-10 09:16:54 [INFO ] certificate alias = localhost
2015-01-10 09:16:54 [INFO ] keyStorePath = /opt/gitblit/data/serverKeyStore.jks
2015-01-10 09:16:54 [INFO ] trustStorePath = /opt/gitblit/data/serverTrustStore.jks
2015-01-10 09:16:54 [INFO ] crlPath = /opt/gitblit/data/certs/caRevocationList.crl
2015-01-10 09:16:54 [INFO ] Shutdown Monitor listening on port 8081
2015-01-10 09:16:54 [INFO ] jetty-9.2.3.v20140905
2015-01-10 09:16:55 [INFO ] NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet
2015-01-10 09:16:55 [INFO ]
2015-01-10 09:16:55 [INFO ] ----[com.gitblit.manager.IRuntimeManager]----
2015-01-10 09:16:55 [INFO ] Basefolder : /opt/gitblit/data
2015-01-10 09:16:55 [INFO ] Settings : /opt/gitblit/data/gitblit.properties
2015-01-10 09:16:55 [INFO ] JVM timezone: America/Montreal (EST -0500)
2015-01-10 09:16:55 [INFO ] App timezone: America/Montreal (EST -0500)
2015-01-10 09:16:55 [INFO ] JVM locale : en_US
2015-01-10 09:16:55 [INFO ] App locale : <client>
2015-01-10 09:16:55 [INFO ] PF4J runtime mode is 'deployment'
2015-01-10 09:16:55 [INFO ] Enabled plugins: []
2015-01-10 09:16:55 [INFO ] Disabled plugins: []
2015-01-10 09:16:55 [INFO ]
2015-01-10 09:16:55 [INFO ] ----[com.gitblit.manager.INotificationManager]----
2015-01-10 09:16:55 [WARN ] Mail service disabled.
2015-01-10 09:16:55 [INFO ]
2015-01-10 09:16:55 [INFO ] ----[com.gitblit.manager.IUserManager]----
2015-01-10 09:16:55 [INFO ] ConfigUserService(/opt/gitblit/data/users.conf)
2015-01-10 09:16:55 [INFO ]
2015-01-10 09:16:55 [INFO ] ----[com.gitblit.manager.IAuthenticationManager]----
2015-01-10 09:16:55 [INFO ] External authentication disabled.
2015-01-10 09:16:55 [INFO ]
2015-01-10 09:16:55 [INFO ] ---- [com.gitblit.transport.ssh.IPublicKeyManager]----
2015-01-10 09:16:55 [INFO ] FileKeyManager (/opt/gitblit/data/ssh)
2015-01-10 09:16:55 [INFO ]
2015-01-10 09:16:55 [INFO ] ----[com.gitblit.manager.IRepositoryManager]----
2015-01-10 09:16:55 [INFO ] Repositories folder : /opt/gitblit/data/git
2015-01-10 09:16:55 [INFO ] Identifying repositories...
2015-01-10 09:16:55 [INFO ] 0 repositories identified with calculated folder sizes in 11 msecs
2015-01-10 09:16:55 [INFO ] Lucene will process indexed branches every 2 minutes.
2015-01-10 09:16:55 [INFO ] Garbage Collector (GC) is disabled.
2015-01-10 09:16:55 [INFO ] Mirror service is disabled.
2015-01-10 09:16:55 [INFO ] Alias UTF-9 & UTF-18 encodings as UTF-8 in JGit
2015-01-10 09:16:55 [INFO ] Preparing 14 day commit cache. please wait...
2015-01-10 09:16:55 [INFO ] 0 repositories identified with calculated folder sizes in 0 msecs
2015-01-10 09:16:55 [INFO ] built 14 day commit cache of 0 commits across 0 repositories in 2 msecs
2015-01-10 09:16:55 [INFO ]
2015-01-10 09:16:55 [INFO ] ----[com.gitblit.manager.IProjectManager]----
2015-01-10 09:16:55 [INFO ]
2015-01-10 09:16:55 [INFO ] ----[com.gitblit.manager.IFederationManager]----
2015-01-10 09:16:55 [INFO ]
2015-01-10 09:16:55 [INFO ] ----[com.gitblit.manager.IGitblit]----
2015-01-10 09:16:55 [INFO ] Starting services manager...
2015-01-10 09:16:55 [INFO ] Federation passphrase is blank! This server can not be PULLED from.
2015-01-10 09:16:55 [INFO ] Fanout PubSub service is disabled.
2015-01-10 09:16:55 [INFO ] Git Daemon is listening on 0.0.0.0:9418
2015-01-10 09:16:55 [INFO ] SSH Daemon (NIO2) is listening on 0.0.0.0:29418
2015-01-10 09:16:55 [WARN ] No ticket service configured.
2015-01-10 09:16:55 [INFO ]
2015-01-10 09:16:55 [INFO ] ----[com.gitblit.manager.IPluginManager]----
2015-01-10 09:16:55 [INFO ] No plugins
2015-01-10 09:16:55 [INFO ]
2015-01-10 09:16:55 [INFO ] All managers started.
打开浏览器,依据你的配置进入**http://localhost:8080** 或者 **https://localhost:8443**。 输入默认的管理员授权:**admin / admin** 并点击**Login** 按钮
![snapshot2](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/snapshot2.png)
### 添加用户: ###
![snapshot1](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/snapshot1.png)
添加仓库:
![snapshot3](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/snapshot3.png)
### 用命令行创建新的仓库: ###
touch README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin ssh://admin@142.4.202.70:29418/Programming.git
git push -u origin master
### 从命令行推送已有的仓库: ###
git remote add origin ssh://admin@142.4.202.70:29418/Programming.git
git push -u origin master
完成!
--------------------------------------------------------------------------------
via: http://www.unixmen.com/install-gitblit-ubuntu-fedora-centos/
作者:[M.el Khamlichi][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/pirat9/
[1]:http://gitblit.com/
[2]:http://git-scm.com/

View File

@ -0,0 +1,159 @@
在CentOS/RHEL/Scientific Linux 6 & 7 上安装Telnet
================================================================================
#### 声明: ####
在安装和使用Telnet之前需要记住以下几点。
- 在公网(WAN)中使用Telnet是非常不好的想法。它会以明文的格式传输登入数据。每个人都可以看到明文。
- 如果你还是需要Telnet强烈建议你只在局域网内部使用。
- 你可以使用**SSH**作为替代方法。但是确保不要用root用户登录。
### Telnet是什么 ###
[Telnet][1] 是用于通过TCP/IP网络远程登录计算机的协议。一旦与远程计算机建立了连接它就会成为一个虚拟终端且允许你与远程计算机通信。
在本篇教程中我们会展示如何安装Telnet并且如何通过Telnet访问远程系统。
### 安装 ###
打开终端并输入下面的命令来安装telnet
yum install telnet telnet-server -y
现在telnet已经安装在你的服务器上了。接下来编辑文件**/etc/xinetd.d/telnet**
vi /etc/xinetd.d/telnet
设置 **disable = no**:
# default: on
# description: The telnet server serves telnet sessions; it uses \
# unencrypted username/password pairs for authentication.
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = no
}
保存并退出文件。记住我们不必在CentOS 7做这步。
接下来使用下面的命令重启telnet服务
在CentOS 6.x 系统中:
service xinetd start
让这个服务在每次重启时都会启动:
在CentOS 6上
chkconfig telnet on
chkconfig xinetd on
在CentOS 7上
systemctl start telnet.socket
systemctl enable telnet.socket
让telnet的默认端口**23**可以通过防火墙和路由器。要让telnet端口可以通过防火墙在CentOS 6.x系统中编辑下面的文件
vi /etc/sysconfig/iptables
加入下面这行原文中以红色显示即开放23端口的那条规则
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW --dport 23 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
保存并退出文件。重启iptables服务
service iptables restart
在CentOS 7中运行下面的命令让telnet服务可以通过防火墙。
firewall-cmd --permanent --add-port=23/tcp
firewall-cmd --reload
就是这样。现在telnet服务就可以使用了。
#### 创建用户 ####
创建一个测试用户,比如用户名是“**sk**”,密码是“**centos**“:
useradd sk
passwd sk
#### 客户端配置 ####
安装telnet包
yum install telnet
在基于DEB的系统中
sudo apt-get install telnet
现在,打开终端,尝试访问你的服务器(远程主机)。
如果你的客户端是Linux系统打开终端并输入下面的命令来连接到telnet服务器上。
telnet 192.168.1.150
输入服务器上已经创建的用户名和密码:
示例输出:
Trying 192.168.1.150...
Connected to 192.168.1.150.
Escape character is '^]'.
Kernel 3.10.0-123.13.2.el7.x86_64 on an x86_64
server1 login: sk
Password:
[sk@server1 ~]$
如你所见,已经成功从本地访问远程主机了。
如果你的系统是windows进入**开始 -> 运行 -> 命令提示符**。
在命令提示符中,输入命令:
telnet 192.168.1.150
**192.168.1.150**是远程主机IP地址。
现在你就可以连接到你的服务器上了。
就是这样。
干杯!
--------------------------------------------------------------------------------
via: http://www.unixmen.com/installing-telnet-centosrhelscientific-linux-6-7/
作者:[SK][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/sk/
[1]:http://en.wikipedia.org/wiki/Telnet

View File

@ -0,0 +1,305 @@
20个Unix命令技巧 - 第一部分
================================================================================
让我们用**这些Unix命令技巧**开启新的一年,提高在终端下的生产力。我已经找了很久了,现在就与你们分享。
![](http://s0.cyberciti.org/uploads/cms/2015/01/unix-command-line-tricks.001.jpg)
### 删除一个大文件 ###
我在生产服务器上有一个很大的200GB的日志文件需要删除。我的rm和ls命令已经崩溃我担心这是由于巨大的磁盘IO造成的要删除这个大文件输入
> /path/to/file.log
# 或者使用下面的语法
: > /path/to/file.log
# 最后删除它
rm /path/to/file.log
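先用 `>` 截断再删除的好处是:即使还有进程持有这个日志文件的文件描述符,空间也会立即释放。下面用一个小文件演示整个流程(路径为示例):

```shell
# 演示:先截断、再删除一个日志文件
tmpdir=$(mktemp -d)
biglog="$tmpdir/file.log"

# 生成 1MB 的示例“日志”
dd if=/dev/zero of="$biglog" bs=1024 count=1024 2>/dev/null
size_before=$(stat -c %s "$biglog")

# 截断为 0 字节
: > "$biglog"
size_after=$(stat -c %s "$biglog")
echo "before=$size_before after=$size_after"

# 最后删除
rm "$biglog"
```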
### 如何保存终端输出? ###
尝试使用script命令行工具来为你的终端输出创建typescript。
script my.terminal.session
输入命令:
ls
date
sudo service foo stop
要退出结束script会话输入*exit* 或者 *logout*,或者按下 *control-D*
exit
要浏览输入:
more my.terminal.session
less my.terminal.session
cat my.terminal.session
### 还原删除的 /tmp 文件夹 ###
我在文章《[Linux和Unix shell我犯了一些错误][1]》中提到我曾意外地删除了/tmp文件夹。要还原它我需要这么做
mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp
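这里 1777 中开头的“1”是粘滞位sticky bit它保证 /tmp 里的文件只能被属主自己删除。可以在沙盒目录里验证这组命令的效果(不触碰真正的 /tmp

```shell
# 在沙盒里模拟重建 /tmp
sandbox=$(mktemp -d)
mkdir "$sandbox/tmp"
chmod 1777 "$sandbox/tmp"

perms=$(stat -c %a "$sandbox/tmp")                # 数字形式,期望 1777
mode=$(ls -ld "$sandbox/tmp" | awk '{print $1}')  # 符号形式,末位应为 t
echo "$perms $mode"
```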
### 锁定一个文件夹 ###
为了我的数据隐私,我想要锁定我文件服务器下的/downloads文件夹。因此我运行
chmod 0000 /downloads
root用户仍旧可以访问但是ls和cd命令还不可用。要还原它用
chmod 0755 /downloads
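chmod 0000 的效果可以这样在临时目录上验证普通用户会被拒绝访问root 不受限制):

```shell
# 演示锁定和恢复一个目录的权限
lockdir=$(mktemp -d)/downloads
mkdir "$lockdir"

chmod 0000 "$lockdir"
locked=$(stat -c %a "$lockdir")    # 期望 0

chmod 0755 "$lockdir"
restored=$(stat -c %a "$lockdir")  # 期望 755
echo "$locked -> $restored"
```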
### 在vim中用密码保护文件 ###
害怕root用户或者其他人偷窥你的个人文件么尝试在vim中用密码保护输入
vim +X filename
或者在退出vim之前使用:X 命令来加密你的文件vim会提示你输入一个密码。
### 清除屏幕上的输出 ###
只要输入:
reset
### 成为人类 ###
传递*-h*或者*-H*和其他选项给GNU或者BSD工具可以让ls、df、du等命令以人类可读的格式输出
ls -lh
# 以人类可读的格式 (比如: 1K 234M 2G)
df -h
df -k
# 以字节、KB、MB或GB输出
free -b
free -k
free -m
free -g
# 以人类可读的格式打印 (比如 1K 234M 2G)
du -h
# 以人类可读的格式显示权限
stat -c %A /boot
# 排序人类可读形式的数字
sort -h file
# 在Linux上以人类可读的形式显示cpu信息
lscpu
lscpu -e
lscpu -e=cpu,node
# 以人类可读的形式显示每个文件的大小
tree -h
tree -h /boot
### 在Linux系统中显示已知用户的信息 ###
只要输入:
## linux 版本 ##
lslogins
## BSD 版本 ##
logins
示例输出:
UID USER PWD-LOCK PWD-DENY LAST-LOGIN GECOS
0 root 0 0 22:37:59 root
1 bin 0 1 bin
2 daemon 0 1 daemon
3 adm 0 1 adm
4 lp 0 1 lp
5 sync 0 1 sync
6 shutdown 0 1 2014-Dec17 shutdown
7 halt 0 1 halt
8 mail 0 1 mail
10 uucp 0 1 uucp
11 operator 0 1 operator
12 games 0 1 games
13 gopher 0 1 gopher
14 ftp 0 1 FTP User
27 mysql 0 1 MySQL Server
38 ntp 0 1
48 apache 0 1 Apache
68 haldaemon 0 1 HAL daemon
69 vcsa 0 1 virtual console memory owner
72 tcpdump 0 1
74 sshd 0 1 Privilege-separated SSH
81 dbus 0 1 System message bus
89 postfix 0 1
99 nobody 0 1 Nobody
173 abrt 0 1
497 vnstat 0 1 vnStat user
498 nginx 0 1 nginx user
499 saslauth 0 1 "Saslauthd user"
### 我如何删除意外在当前文件夹下解压的文件? ###
我意外地在/var/www/html/而不是/home/projects/www/current下解压了一个tarball弄乱了/var/www/html下的文件。修复这个问题最简单的方法是
cd /var/www/html/
/bin/rm -f "$(tar ztf /path/to/file.tar.gz)"
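这条命令的原理是:`tar ztf` 只列出归档内的文件名,把这份列表交给 rm 就能只删掉误解压的文件。下面在沙盒里完整演示一遍(这里改用 xargs 传递文件名,文件名含空格时需另作处理):

```shell
# 沙盒演示:按 tarball 内容清理误解压的文件
sandbox=$(mktemp -d)
cd "$sandbox"

# 制作一个包含 a.txt、b.txt 的示例 tarball
mkdir src && echo a > src/a.txt && echo b > src/b.txt
tar -C src -czf pkg.tar.gz a.txt b.txt

# 在“错误的”目录里解压,并放一个不相关的文件
mkdir wrong && cd wrong
tar xzf ../pkg.tar.gz
echo keep > keep.txt

# 只删除归档里列出的文件keep.txt 应当保留
tar ztf ../pkg.tar.gz | xargs rm -f --
ls
```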
### 对top命令的输出感到疑惑 ###
正经地说你应该试一下用htop代替top
sudo htop
### 想要再次运行相同的命令 ###
只需要输入!!。比如:
/myhome/dir/script/name arg1 arg2
# 要再次运行相同的命令
!!
## 以root用户运行最后运行的命令
sudo !!
!!会运行最近使用的命令。要运行最近一次以“foo”开头的命令
!foo
# 以root用户运行上一次以“service”开头的命令
sudo !service
!$会展开为上一条命令的最后一个参数例如
# 编辑 nginx.conf
sudo vi /etc/nginx/nginx.conf
# 测试 nginx.conf
/sbin/nginx -t -c /etc/nginx/nginx.conf
# 测试完 "/sbin/nginx -t -c /etc/nginx/nginx.conf"你可以用vi编辑了
sudo vi !$
### 在你要离开的时候留下一个提醒 ###
如果你需要提醒离开你的终端,输入下面的命令:
leave +hhmm
这里:
- **hhmm** - 时间以hhmm的形式表示hh是小时12或24小时制mm是分钟。所有时间都会被转换成12小时制并假定发生在接下来的12小时内。
### Home sweet home ###

Want to go back to the directory you were just in? Run:

    cd -

Need to get to your home directory quickly? Type:

    cd

The *CDPATH* variable defines the search path for the cd command:

    export CDPATH=/var/www:/nas10

Now, instead of typing cd /var/www/html/, I can simply type the following to get to /var/www/html:

    cd html
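The CDPATH lookup can be sketched end to end with throwaway directories (all paths here are hypothetical):

```shell
# Build a fake web root to search through
mkdir -p /tmp/cdpathdemo/var/www/html

# Tell cd to also search /tmp/cdpathdemo/var/www
export CDPATH=/tmp/cdpathdemo/var/www

# "cd html" now resolves through CDPATH;
# cd prints the full path it chose when CDPATH is used
cd html
pwd
```

Note that when cd finds the target through CDPATH it echoes the resulting directory, which is a handy confirmation of where you landed.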
### Edit a file being viewed with less ###

To edit a file you are viewing with less, press v. The file will open in the editor specified by the $EDITOR variable:

    less *.c
    less foo.html
    ## press v to edit the file ##
    ## quit the editor and you will be back in less ##
### List all files and directories on your system ###

To see all the directories on your system, run:

    find / -type d | less

    # List all directories under $HOME
    find $HOME -type d -ls | less

To see all the files, run:

    find / -type f | less

    # List all files under $HOME
    find $HOME -type f -ls | less
### Build a directory tree with a single command ###

You can create a whole directory tree at once with mkdir and its -p option:

    mkdir -p /jail/{dev,bin,sbin,etc,usr,lib,lib64}
    ls -l /jail/
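The same one-liner can be tried under a scratch root (the /tmp/jaildemo path is made up). Note that the {a,b,c} brace expansion is a bash/zsh feature, not plain POSIX sh:

```shell
#!/bin/bash
# Create a whole tree under a scratch root in one shot;
# -p creates missing parent directories and never complains if they exist
mkdir -p /tmp/jaildemo/{dev,bin,sbin,etc,usr/lib,usr/lib64}

# Verify the layout
ls /tmp/jaildemo /tmp/jaildemo/usr
```

Nested entries such as usr/lib work too, since -p creates every missing parent along the way.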
### Copy a file into multiple directories ###

Instead of running:

    cp /path/to/file /usr/dir1
    cp /path/to/file /var/dir2
    cp /path/to/file /nas/dir3

run the following command to copy the file into multiple directories:

    echo /usr/dir1 /var/dir2 /nas/dir3 | xargs -n 1 cp -v /path/to/file

[Creating a shell function][2] for this is left as an exercise for the reader.
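One possible shape for that exercise, as a sketch (the cpto name is made up, not from the article):

```shell
#!/bin/bash
# cpto FILE DIR...  -- copy FILE into each DIR given
cpto() {
    local file="$1"
    shift
    # xargs -n 1 invokes cp once per destination directory
    printf '%s\n' "$@" | xargs -n 1 cp -v "$file"
}

# Example: cpto /path/to/file /usr/dir1 /var/dir2 /nas/dir3
```

Using printf '%s\n' keeps each destination on its own line, so xargs feeds one directory per cp call.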
### Quickly find the differences between two directories ###

The diff command compares files line by line. It can also compare two directories:

    ls -l /tmp/r
    ls -l /tmp/s
    ## Compare two folders using diff ##
    diff /tmp/r/ /tmp/s/

[![Fig. : Finding differences between folders](http://s0.cyberciti.org/uploads/cms/2015/01/differences-between-folders.jpg)][3]

Figure: finding the differences between folders
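A self-contained rehearsal of the directory comparison (directory and file names are hypothetical):

```shell
# Two scratch directories: one shared file plus one extra file
mkdir -p /tmp/dir-r /tmp/dir-s
echo same > /tmp/dir-r/common.txt
echo same > /tmp/dir-s/common.txt
echo extra > /tmp/dir-r/only-in-r.txt

# diff reports the file that exists in only one directory
# (diff exits non-zero when it finds differences, hence the || true)
diff /tmp/dir-r /tmp/dir-s || true
```

Here diff prints an "Only in /tmp/dir-r: only-in-r.txt" line; files present in both directories are compared line by line as usual.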
### Text formatting ###

You can reformat each paragraph with the fmt command. In this example, overly long lines are split and short lines are refilled:

    fmt file.txt

You can also split long lines without refilling, i.e., split long lines but leave short lines alone:

    fmt -s file.txt
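The wrapping behavior is easy to see with a synthetic one-line file (the path is made up):

```shell
# Build a single very long line (roughly 270 characters)
long=""
for i in 1 2 3 4 5 6 7 8 9 10; do
    long="$long lorem ipsum dolor sit amet"
done
printf '%s\n' "$long" > /tmp/fmtdemo.txt

# fmt wraps it to the default width (about 75 columns)
fmt /tmp/fmtdemo.txt
```

The input is one long line; fmt's output spans several lines, each within the default width.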
### See the output on screen and also write it to a file ###

Use the tee command as follows to see the output on screen and also write it to a log file named my.log:

    mycoolapp arg1 arg2 input.file | tee my.log

tee ensures that you see the output of mycoolapp on screen and have it written to the file at the same time.
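A minimal stand-in for the pipeline above, with echo playing the role of the hypothetical mycoolapp:

```shell
# echo stands in for mycoolapp; tee passes the text through
# to the screen while also writing it to the log file
echo "build ok" | tee /tmp/tee-demo.log

# The same text is now also in the log file
cat /tmp/tee-demo.log
```

Both commands print "build ok": once from tee's pass-through, once from the file it wrote.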
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/open-source/command-line-hacks/20-unix-command-line-tricks-part-i/
Author: [nixCraft][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.cyberciti.biz/tips/about-us
[1]:http://www.cyberciti.biz/tips/my-10-unix-command-line-mistakes.html
[2]:http://bash.cyberciti.biz/guide/Writing_your_first_shell_function
[3]:http://www.cyberciti.biz/open-source/command-line-hacks/20-unix-command-line-tricks-part-i/attachment/differences-between-folders/

Configuring the Mate desktop with Mate Tweak
================================================================================

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Mate_Tweak.jpeg)

[Installing the Mate desktop in Ubuntu][1] is one thing, but you may be wondering how to **configure the Mate desktop**. Most desktop environments have their own tweaking tools; for example, Unity has Unity Tweak, Gnome has Gnome Tweak, and Elementary OS has Elementary OS Tweak. The good news is that the Mate desktop also has its own tweaking tool, called [Mate Tweak][2].

Mate Tweak is a fork of [mintDesktop][3], a configuration tool for Linux Mint.

### Install Mate Tweak to configure the Mate desktop ###

Mate Tweak can be installed easily on Ubuntu and Ubuntu-based systems through its official PPA. Open a terminal and use the following commands:

    sudo add-apt-repository ppa:ubuntu-mate-dev/ppa
    sudo apt-get update
    sudo apt-get install mate-tweak

You can control what is shown on the desktop, the button layout, and other interface tweaks and window behavior. Compared with the Unity and Gnome tweak tools, Mate Tweak does not offer that many options. For example, you cannot yet use it to [change themes][4], but at least it provides a simple way to change a few settings. I hope it gains more features in the near future.
--------------------------------------------------------------------------------
via: http://itsfoss.com/configure-mate-desktop-mate-tweak/
Author: [Abhishek][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://itsfoss.com/install-mate-desktop-ubuntu-14-04/
[2]:https://bitbucket.org/flexiondotorg/mate-tweak
[3]:https://github.com/linuxmint/mintdesktop
[4]:http://itsfoss.com/how-to-install-themes-in-ubuntu-13-10/

How to extract a tar file to a different directory on Linux/Unix-like systems
================================================================================

I want to extract a tar file into a specific directory called /tmp/data. How can I use the tar command to extract a tar file into a different directory on Linux or Unix-like systems?

You do not need to use the cd command to switch to another directory before extracting. Use the following syntax to extract a file:

### Syntax ###

Typical Unix tar syntax:

    tar -xf file.name.tar -C /path/to/directory

GNU tar syntax:

    tar xf file.tar -C /path/to/directory
    tar xf file.tar --directory /path/to/directory

### Example: extract files into another directory ###

In this example, I am extracting $HOME/etc.backup.tar into the directory /tmp/data. First, you have to create the directory manually; enter:

    mkdir /tmp/data

To extract $HOME/etc.backup.tar into /tmp/data, enter:

    tar -xf $HOME/etc.backup.tar -C /tmp/data

To see progress, pass the -v option:

    tar -xvf $HOME/etc.backup.tar -C /tmp/data

Sample output:
![Gif 01: tar Command Extract Archive To Different Directory Command](http://s0.cyberciti.org/uploads/faq/2015/01/tar-extract-archive-to-dir.gif)
Gif 01: the tar command extracting an archive into a different directory

You can also specify which files to extract:
    # Place -C before the member names so the directory change takes effect first
    tar -xvf $HOME/etc.backup.tar -C /tmp/data file1 file2 file3 dir1
To extract a foo.tar.gz (a .tgz extension file) archive into /tmp/bar, enter:

    mkdir /tmp/bar
    tar -zxvf foo.tar.gz -C /tmp/bar

To extract a foo.tar.bz2 (.tbz, .tbz2, and .tb2 extension files) archive into /tmp/bar, enter:

    mkdir /tmp/bar
    tar -jxvf foo.tar.bz2 -C /tmp/bar
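A complete round trip of the -C technique with scratch paths (all names hypothetical):

```shell
# Create a small archive from a scratch source directory
mkdir -p /tmp/tarsrc /tmp/tardest
echo data > /tmp/tarsrc/file.txt
tar -czf /tmp/demo.tar.gz -C /tmp/tarsrc file.txt

# Extract it into a different directory with -C, without cd-ing there
tar -xzf /tmp/demo.tar.gz -C /tmp/tardest
ls /tmp/tardest
```

Note that -C is also useful on the create side, as above: it archives file.txt without the /tmp/tarsrc prefix, so the extracted file lands directly in /tmp/tardest.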
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/faq/howto-extract-tar-file-to-specific-directory-on-unixlinux/
Author: [nixCraft][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.cyberciti.biz/tips/about-us