Merge pull request #23 from LCTT/master

Update Repo
This commit is contained in:
joeren 2015-01-09 09:02:49 +08:00
commit ff7a2b0b75
34 changed files with 2498 additions and 1170 deletions

View File

@ -1,8 +1,7 @@
IPv6IPv4犯的,为什么要我来弥补
================================================================================
LCTT标题党了一把哈哈哈好过瘾求不拍砖
在过去的十年间IPv6 本来应该得到很大的发展,但事实上这种好事并没有降临。由此导致了一个结果,那就是大部分人都不了解 IPv6 的一些知识:它是什么,怎么使用,以及,为什么它会存在?
![IPv4 and IPv6 Comparison](http://www.tecmint.com/wp-content/uploads/2014/09/ipv4-ipv6.gif)
@ -12,15 +11,15 @@ IPv4 和 IPv6 的区别
自从1981年发布了 RFC 791 标准以来我们就一直在使用 **IPv4**。在那个时候,电脑又大又贵还不多见,而 IPv4 号称能提供**40亿条 IP 地址**,在当时看来,这个数字好大好大。不幸的是,这么多的 IP 地址并没有被充分利用起来,地址与地址之间存在间隙。举个例子,一家公司可能有**254(2^8-2)**条地址但只使用其中的25条剩下的229条被空占着以备将来之需。于是这些空闲着的地址不能服务于真正需要它们的用户原因就是网络路由规则的限制。最终的结果是在1981年看起来那个好大好大的数字在2014年看起来变得好小好小。
互联网工程任务组(**IETF**在90年代指出了这个问题,并提供了两套解决方案:无类型域间选路(**CIDR**)以及私有IP地址。在 CIDR 出现之前,你只能选择三种网络地址长度:**24 位** (共16,777,214个可用地址), **20位** (共1,048,574个可用地址)以及**16位** (共65,534个可用地址)。CIDR 出现之后,你可以将一个网络再划分成多个子网。
举个例子,如果你需要**5个 IP 地址**,你的 ISP 会为你提供一个子网里面的主机地址长度为3位也就是说你最多能得到**6个地址**LCTT抛开子网的网络号3位主机地址长度可以表示07共8个地址但第0个和第7个有特殊用途不能被用户使用所以你最多能得到6个地址。这种方法让 ISP 能尽最大效率分配 IP 地址。“私有地址”这套解决方案的效果是你可以自己创建一个网络里面的主机可以访问外网的主机但外网的主机很难访问到你创建的那个网络上的主机因为你的网络是私有的、别人不可见的。你可以创建一个非常大的网络因为你可以使用16,777,214个主机地址并且你可以将这个网络分割成更小的子网方便自己管理。
也许你现在正在使用私有地址。看看你自己的 IP 地址,如果这个地址在这些范围内:**10.0.0.0 - 10.255.255.255**、**172.16.0.0 - 172.31.255.255**或**192.168.0.0 - 192.168.255.255**就说明你在使用私有地址。这两套方案有效地将“IP 地址用尽”这个灾难延迟了好长时间,但这毕竟只是权宜之计,现在我们正面临最终的审判。
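下面是一个简单的示例(假设系统带有 iproute2 工具),可以列出本机的 IPv4 地址,对照上面的范围就能判断自己是否在使用私有地址:

ip -4 addr show | grep "inet "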
**IPv4** 还有另外一个问题,那就是这个协议的消息头长度可变。如果数据的路由通过软件来实现,这个问题还好说。但现在路由器功能都是由硬件提供的,处理变长消息头对硬件来说是一件困难的事情。一个大的路由器需要处理来自世界各地的大量数据包,这个时候路由器的负载是非常大的。所以很明显,我们需要固定消息头的长度。
在 IP 地址分配方面还有一个问题因特网是美国人发明的LCTT这个万恶的资本主义国家占用了大量 IP 地址)。其他国家只得到了 IP 地址的碎片。我们需要重新定制一个架构,让连续的 IP 地址能在地理位置上集中分布这样一来路由表可以做得更小LCTT想想吧网速肯定更快
还有一个问题,这个问题你听起来可能还不大相信,就是 IPv4 配置起来比较困难,而且还不好改变。你可能不会碰到这个问题,因为你的路由器为你做了这些事情,不用你去操心。但是你的 ISP 对此一直是很头疼的。
@ -28,10 +27,10 @@ IPv4 和 IPv6 的区别
### IPv6 和它的优点 ###
**IETF** 在1995年12月公布了下一代 IP 地址标准,名字叫 IPv6为什么不是 IPv5→_→ 因为某个错误原因“版本5”这个编号被其他项目用去了。IPv6 的优点如下:
- 128位地址长度共有3.402823669×10³⁸个地址
- 这个架构下的地址在逻辑上聚合
- 消息头长度固定
- 支持自动配置和修改你的网络。
@ -43,7 +42,7 @@ IPv4 和 IPv6 的区别
#### 聚合 ####
有这么多的地址,这些地址可以被稀稀拉拉地分配给主机,从而更高效地路由数据包。算一笔账:你的 ISP 拿到一个**80位**地址长度的网络空间其中16位是 ISP 的子网地址剩下64位分给你作为主机地址。这样一来你的 ISP 可以分配65,534个子网。
然而,这些地址分配不是一成不变的,如果 ISP 想拥有更多的小子网,完全可以做到(当然,土豪 ISP 可能会要求再来一个80位网络空间。最高的48位地址是相互独立的也就是说 ISP 与 ISP 之间虽然可能分到相同的80位网络空间但是这两个空间是相互隔离的好处就是一个网络空间里面的地址会聚合在一起。
@ -51,25 +50,25 @@ IPv4 和 IPv6 的区别
**IPv4** 消息头长度可变,但 **IPv6** 消息头长度被固定为40字节。IPv4 会由于额外的参数导致消息头变长IPv6 中,如果有额外参数,这些信息会被放到一个紧挨着消息头的地方,不会被路由器处理,当消息到达目的地时,这些额外参数会被软件提取出来。
IPv6 消息头有一个部分叫“flow”是一个20位伪随机数用于简化路由器对数据包路由过程。如果一个数据包存在“flow”路由器就可以根据这个值作为索引查找路由表不必慢吞吞地遍历整张路由表来查询路由路径。这个优点使 **IPv6** 更容易被路由。
#### 自动配置 ####
**IPv6** 中,当主机开机时,会检查本地网络,看看有没有其他主机使用了自己的 IP 地址。如果地址没有被使用,就接着查询本地的 IPv6 路由器,找到后就向它请求一个 IPv6 地址。然后这台主机就可以连上互联网了 —— 它有自己的 IP 地址,和自己的默认路由器。
如果这台默认路由器宕机,主机就会接着找其他路由器,作为备用路由器。这个功能在 IPv4 协议里实现起来非常困难。同样地,假如路由器想改变自己的地址,自己改掉就好了。主机会自动搜索路由器,并自动更新路由器地址。路由器会同时保存新老地址,直到所有主机都把自己的路由器地址更新成新地址。
IPv6 自动配置还不是一个完整的解决方案。想要有效地使用互联网,一台主机还需要另外的东西:域名服务器、时间同步服务器、或者还需要一台文件服务器。于是 **dhcp6** 出现了,提供与 dhcp 一样的服务,唯一的区别是 dhcp6 的机器可以在可路由的状态下启动,一个 dhcp 进程可以为大量网络提供服务。
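作为参考,在一台启用了 IPv6 的 Linux 主机上,可以用下面的命令观察自动配置的结果(示例命令,网卡名 eth0 只是假设,请按实际环境替换):

ip -6 addr show dev eth0    # 查看自动配置得到的 IPv6 地址
ip -6 route show default    # 查看学习到的 IPv6 默认路由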
#### 唯一的大问题 ####
如果 IPv6 真的比 IPv4 好那么多为什么它还没有被广泛使用起来Google 在**2014年5月份**估计 IPv6 的市场占有率为**4%**)?一个最基本的原因是“先有鸡还是先有蛋”。服务商想让自己的服务器为尽可能多的客户提供服务,这就意味着他们必须部署一个 **IPv4** 地址。
当然,他们可以同时使用 IPv4 和 IPv6 两套地址,但很少有客户会用到 IPv6并且你还需要对你的软件做一些小修改来适应 IPv6。另外比较头疼的一点是很多家庭的路由器压根不支持 IPv6。还有就是 ISP 也不愿意支持 IPv6我问过我的 ISP 这个问题,得到的回答是:只有客户明确指出要部署这个时,他们才会用 IPv6。然后我问了现在有多少人有这个需求答案是包括我在内共有1个。
与这种现实状况呈明显对比的是所有主流操作系统Windows、OS X、Linux 都默认支持 IPv6 好多年了。这些操作系统甚至提供软件让 IPv6 的数据包披上 IPv4 的皮来骗过那些会丢弃 IPv6 数据包的主机,从而达到传输数据的目的。
### 总结 ###
IPv4 已经为我们服务了好长时间。但是它的缺陷会在不远的将来遭遇不可克服的困难。IPv6 通过改变地址分配规则、简化数据包路由过程、简化首次加入网络时的配置过程等策略,可以完美解决这个问题。
@ -81,7 +80,7 @@ via: http://www.tecmint.com/ipv4-and-ipv6-comparison/
作者:[Jeff Silverman][a]
译者:[bazz2](https://github.com/bazz2)
校对:[Mr小眼儿](https://github.com/tinyeyeser)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -95,7 +95,7 @@ via: http://xmodulo.com/configure-peer-to-peer-vpn-linux.html
作者:[Dan Nanni][a]
译者:[felixonmars](https://github.com/felixonmars)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,324 @@
使用 Quagga 将你的 CentOS 系统变成一个 BGP 路由器
================================================================================
在[之前的教程中][1]我对如何简单地使用Quagga把CentOS系统变成一个不折不扣的OSPF路由器做了一些介绍。Quagga是一个开源路由软件套件。在这个教程中我将会重点讲讲**如何把一个Linux系统变成一个BGP路由器还是使用Quagga**演示如何与其它BGP路由器建立对等连接。
在我们进入细节之前一些BGP的背景知识还是必要的。边界网关协议即BGP是互联网的域间路由协议的实际标准。在BGP术语中全球互联网是由成千上万相关联的自治系统(AS)组成其中每一个AS代表每一个特定运营商提供的一个网络管理域[据说][2],美国前总统乔治·布什都有自己的 AS 编号)。
为了使其网络在全球范围内路由可达每一个AS需要知道如何在互联网中到达其它的AS。这时候就需要BGP出来扮演这个角色了。BGP是一个AS与相邻AS交换路由信息所用的语言。这些路由信息通常被称为BGP路由或者BGP前缀包括AS号ASN全球唯一号码以及相关的IP地址块。一旦所有的BGP路由被本地的BGP路由表学习和记录每一个AS就能知道如何到达互联网上的任何公网IP。
在不同域AS之间路由的能力是BGP被称为外部网关协议EGP或者域间路由协议的主要原因。相应地像OSPF、IS-IS、RIP和EIGRP这样的路由协议则是内部网关协议IGP或者域内路由协议用于处理一个域内的路由。
### 测试方案 ###
在这个教程中,让我们来使用以下拓扑。
![](https://farm6.staticflickr.com/5598/15603223841_4c76343313_z.jpg)
我们假设运营商A想要建立一个BGP来与运营商B对等交换路由。它们的AS号和IP地址空间的细节如下所示
- **运营商 A**: ASN (100) IP地址空间 (100.100.0.0/22) 分配给BGP路由器eth1网卡的IP地址(100.100.1.1)
- **运营商 B**: ASN (200) IP地址空间 (200.200.0.0/22) 分配给BGP路由器eth1网卡的IP地址(200.200.1.1)
路由器A和路由器B使用100.100.0.0/30子网来连接对方。理论上用于互连的子网可以是双方运营商都可达的任何子网。在真实场景中建议使用掩码为30位的公网IP地址空间来实现运营商A和运营商B之间的互连。
### 在 CentOS中安装Quagga ###
如果Quagga还没安装好我们可以使用yum来安装Quagga。
# yum install quagga
如果你正在使用的是CentOS7系统你需要按如下方式调整SELinux策略。否则SELinux将会阻止Zebra守护进程写入它的配置目录。如果你正在使用的是CentOS6你可以跳过这一步。
# setsebool -P zebra_write_config 1
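如果想确认这个布尔值已经生效,可以用 getsebool 查看(示例命令,输出仅供参考):

# getsebool zebra_write_config
zebra_write_config --> on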
Quagga软件套件包含几个可以协同工作的守护进程。关于BGP路由我们将重点使用以下2个守护进程
- **Zebra**:核心守护进程,负责内核接口和静态路由。
- **BGPd**BGP守护进程。
### 配置日志记录 ###
在Quagga被安装后下一步就是配置Zebra来管理BGP路由器的网络接口。我们通过创建一个Zebra配置文件和启用日志记录来开始第一步。
# cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
在CentOS6系统中
# service zebra start
# chkconfig zebra on
在CentOS7系统中:
# systemctl start zebra
# systemctl enable zebra
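启动之后,可以顺手确认 zebra 服务确实在运行以CentOS7 为例的示例命令):

# systemctl status zebra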
Quagga提供了一个特有的命令行工具vtysh在其中你可以输入与主流路由器厂商例如Cisco和Juniper兼容的命令。在教程的其余部分我们将使用vtysh shell来配置BGP路由。
要启动vtysh shell输入
# vtysh
提示符将变成该主机名这表明你是在vtysh shell中。
Router-A#
现在我们将使用以下命令来为Zebra配置日志文件
Router-A# configure terminal
Router-A(config)# log file /var/log/quagga/quagga.log
Router-A(config)# exit
永久保存Zebra配置
Router-A# write
在路由器B操作同样的步骤。
### 配置对等的IP地址 ###
下一步我们将在可用的接口上配置对等的IP地址。
Router-A# show interface #显示接口信息
----------
Interface eth0 is up, line protocol detection is disabled
. . . . .
Interface eth1 is up, line protocol detection is disabled
. . . . .
配置eth0接口的参数
site-A-RTR# configure terminal
site-A-RTR(config)# interface eth0
site-A-RTR(config-if)# ip address 100.100.0.1/30
site-A-RTR(config-if)# description "to Router-B"
site-A-RTR(config-if)# no shutdown
site-A-RTR(config-if)# exit
继续配置eth1接口的参数
site-A-RTR(config)# interface eth1
site-A-RTR(config-if)# ip address 100.100.1.1/24
site-A-RTR(config-if)# description "test ip from provider A network"
site-A-RTR(config-if)# no shutdown
site-A-RTR(config-if)# exit
现在确认配置:
Router-A# show interface
----------
Interface eth0 is up, line protocol detection is disabled
Description: "to Router-B"
inet 100.100.0.1/30 broadcast 100.100.0.3
Interface eth1 is up, line protocol detection is disabled
Description: "test ip from provider A network"
inet 100.100.1.1/24 broadcast 100.100.1.255
----------
Router-A# show interface description #显示接口描述
----------
Interface Status Protocol Description
eth0 up unknown "to Router-B"
eth1 up unknown "test ip from provider A network"
如果一切看起来正常,别忘记保存配置。
Router-A# write
同样地在路由器B重复一次配置。
在我们继续下一步之前确认下彼此的IP是可以ping通的。
Router-A# ping 100.100.0.2
----------
PING 100.100.0.2 (100.100.0.2) 56(84) bytes of data.
64 bytes from 100.100.0.2: icmp_seq=1 ttl=64 time=0.616 ms
下一步我们将继续配置BGP对等和前缀设置。
### 配置BGP对等 ###
Quagga中负责BGP服务的守护进程叫bgpd。首先我们来准备它的配置文件。
# cp /usr/share/doc/quagga-XXXXXXX/bgpd.conf.sample /etc/quagga/bgpd.conf
在CentOS6系统中
# service bgpd start
# chkconfig bgpd on
在CentOS7中
# systemctl start bgpd
# systemctl enable bgpd
现在让我们来进入Quagga 的shell。
# vtysh
第一步我们要确认当前没有已经配置的BGP会话。在一些版本我们可能会发现一个AS号为7675的BGP会话。由于我们不需要这个会话所以把它移除。
Router-A# show running-config
----------
... ... ...
router bgp 7675
bgp router-id 200.200.1.1
... ... ...
我们将移除这个预先配置好的BGP会话并建立我们所需的会话取而代之。
Router-A# configure terminal
Router-A(config)# no router bgp 7675
Router-A(config)# router bgp 100
Router-A(config-router)# no auto-summary
Router-A(config-router)# no synchronization
Router-A(config-router)# neighbor 100.100.0.2 remote-as 200
Router-A(config-router)# neighbor 100.100.0.2 description "provider B"
Router-A(config-router)# exit
Router-A(config)# exit
Router-A# write
路由器B将用同样的方式来进行配置以下配置提供作为参考。
Router-B# configure terminal
Router-B(config)# no router bgp 7675
Router-B(config)# router bgp 200
Router-B(config-router)# no auto-summary
Router-B(config-router)# no synchronization
Router-B(config-router)# neighbor 100.100.0.1 remote-as 100
Router-B(config-router)# neighbor 100.100.0.1 description "provider A"
Router-B(config-router)# exit
Router-B(config)# exit
Router-B# write
当两台路由器都配置好之后它们之间的BGP对等就会建立起来。现在让我们通过运行下面的命令来确认
Router-A# show ip bgp summary
![](https://farm6.staticflickr.com/5614/15420135700_e3568d2e5f_z.jpg)
从输出中,我们可以看到"State/PfxRcd"一列。如果对等关闭,这一列将会显示"Idle"或者"Active"。请记住,"Active"这个词在路由器中总是不好的意思,它意味着路由器正在积极地寻找邻居、前缀或者路由。当对等处于up状态时"State/PfxRcd"一列将会显示从对应邻居接收到的前缀数量。
在这个例子的输出中BGP对等已经在AS 100和AS 200之间建立起来了。由于目前还没有交换任何前缀所以最右边一列的数值是0。
### 配置前缀通告 ###
正如一开始提到的AS 100将通告前缀100.100.0.0/22在我们的例子中AS 200将同样通告200.200.0.0/22。这些前缀需要像下面这样被添加到BGP配置中。
在路由器-A中
Router-A# configure terminal
Router-A(config)# router bgp 100
Router-A(config-router)# network 100.100.0.0/22
Router-A(config-router)# exit
Router-A(config)# exit
Router-A# write
在路由器-B中
Router-B# configure terminal
Router-B(config)# router bgp 200
Router-B(config-router)# network 200.200.0.0/22
Router-B(config-router)# exit
Router-B(config)# exit
Router-B# write
在这一点上,两个路由器会根据需要开始通告前缀。
### 测试前缀通告 ###
首先,让我们来确认前缀的数量是否发生了变化。
Router-A# show ip bgp summary
![](https://farm6.staticflickr.com/5608/15419095659_0ebb384eee_z.jpg)
要查看通告前缀的更多细节我们可以使用以下命令这个命令用于显示向邻居100.100.0.2通告的所有前缀。
Router-A# show ip bgp neighbors 100.100.0.2 advertised-routes
![](https://farm6.staticflickr.com/5597/15419618208_4604e5639a_z.jpg)
查看我们从邻居接收到了哪些前缀:
Router-A# show ip bgp neighbors 100.100.0.2 routes
![](https://farm4.staticflickr.com/3935/15606556462_e17eae7f49_z.jpg)
我们也可以查看所有的BGP路由
Router-A# show ip bgp
![](https://farm6.staticflickr.com/5609/15419618228_5c776423a5_z.jpg)
还可以用下面的命令检查路由表中哪些路由是通过BGP学习到的
Router-A# show ip route
----------
代码: K - 内核路由, C - 已连接, S - 静态, R - 路由信息协议, O - 开放式最短路径优先协议,
I - 中间系统到中间系统的路由选择协议, B - 边界网关协议, > - 已选择的路由, * - FIB 路由
C>* 100.100.0.0/30 is directly connected, eth0
C>* 100.100.1.0/24 is directly connected, eth1
B>* 200.200.0.0/22 [20/0] via 100.100.0.2, eth0, 00:06:45
----------
Router-A# show ip route bgp
----------
B>* 200.200.0.0/22 [20/0] via 100.100.0.2, eth0, 00:08:13
BGP学习到的路由也将会在Linux路由表中出现。
[root@Router-A~]# ip route
----------
100.100.0.0/30 dev eth0 proto kernel scope link src 100.100.0.1
100.100.1.0/24 dev eth1 proto kernel scope link src 100.100.1.1
200.200.0.0/22 via 100.100.0.2 dev eth0 proto zebra
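还可以用 ip route get 进一步确认内核实际会为某个目的地址选择哪条路由(示例命令,输出仅供参考):

[root@Router-A~]# ip route get 200.200.1.1
200.200.1.1 via 100.100.0.2 dev eth0 src 100.100.0.1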
最后我们将使用ping命令来测试两端的连通性ping应该能成功。
[root@Router-A~]# ping 200.200.1.1 -c 2
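正常情况下会看到类似下面的输出(示例输出):

PING 200.200.1.1 (200.200.1.1) 56(84) bytes of data.
64 bytes from 200.200.1.1: icmp_seq=1 ttl=64 time=0.559 ms
64 bytes from 200.200.1.1: icmp_seq=2 ttl=64 time=0.533 ms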
总而言之本教程将重点放在如何在CentOS系统上运行一个基本的BGP路由器。这个教程帮助你入门BGP的配置一些更高级的设置例如设置过滤器、调整BGP属性、设置本地优先级和AS路径前置等我将会在后续的教程中介绍这些主题。
希望这篇教程能给大家一些帮助。
--------------------------------------------------------------------------------
via: http://xmodulo.com/centos-bgp-router-quagga.html
作者:[Sarmed Rahman][a]
译者:[disylee](https://github.com/disylee)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/sarmed
[1]:http://linux.cn/article-4232-1.html
[2]:http://weibo.com/3181671860/BngyXxEUF

View File

@ -1,6 +1,6 @@
Linux 和类 Unix 系统上5个最佳开源备份工具
================================================================================
一个好的备份最基本的目的就是为了能够从一些错误中恢复:
- 人为的失误
- 磁盘阵列或是硬盘故障
@ -13,7 +13,7 @@ Linux 和类Unix 系统上5个极品的开源软件备份工具
确定你正在部署的软件具有下面的特性
1. **开源软件** - 你务必要选择那些源码可以免费获得,并且可以修改的软件。确信可以恢复你的数据,即使是软件供应商/项目停止继续维护这个软件或者是拒绝继续为这个软件提供补丁。
2. **跨平台支持** - 确定备份软件可以很好地运行在各种需要部署的桌面操作系统和服务器系统上。
@ -21,21 +21,21 @@ Linux 和类Unix 系统上5个极品的开源软件备份工具
4. **自动换盘器** - 自动换盘器其实就是各类备份设备的统称,包括磁带库、近线存储和自动加载器。自动换盘器可以自动完成加载、挂载和标记磁带等备份介质的任务。
5. **备份介质** - 确定你可以备份到磁带硬盘DVD 和像 AWS 这样的云存储。
6. **加密数据流** - 确定所有客户端到服务器的传输都被加密,保证在 LAN/WAN/Internet 中传输的安全性。
7. **数据库支持** - 确定备份软件可以备份像MySQL或Oracle这样的数据库。
8. **备份可以跨越多个卷** - 备份软件(转储文件时)可以把每个备份文件分成几个部分,允许将每个部分存放在不同的卷上。这样可以保证一些数据量很大的备份像100TB的文件可以被存储在一些单个容量较小的设备中比如说像硬盘和磁盘卷参见这个列表后面的示例
9. **VSS (卷影复制)** - 这是[微软的卷影复制服务VSS][1]通过创建数据的快照来备份。确定备份软件在MS-Windows客户端/服务器上支持VSS。
10. **重复数据删除** - 这是一种数据压缩技术,用来消除重复数据的副本(比如,图片)。
11. **许可证和成本** - 确定你对备份软件所用的[许可证了解和明白其使用方式][3]。
12. **商业支持** - 开源软件可以提供社区支持(像邮件列表和论坛)和专业的支持(如发行版提供额外的付费支持)。你可以使用付费的专业支持为你提供培训和咨询。
13. **报告和警告** - 最后,你必须能够看到备份的报告,当前的工作状态,也能够在备份出错的时候提供警告。
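就上面第8条提到的跨卷备份而言即使所选备份软件本身不支持也可以借助标准的 Unix 工具手工实现。下面是一个粗略的示例(假设使用 tar 和 split目录名和文件名均为假设把一个大目录的备份切分成 4GB 一个的分卷:

tar czf - /data | split -b 4G - backup.tar.gz.part-    # 打包压缩并切分为 4GB 分卷
cat backup.tar.gz.part-* | tar xzf -    # 恢复时按顺序拼接分卷再解包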
@ -59,7 +59,7 @@ Linux 和类Unix 系统上5个极品的开源软件备份工具
### Amanda - 又一个客户端服务器备份工具 ###
AMANDA 是 Advanced Maryland Automatic Network Disk Archiver 的缩写。它允许系统管理员创建一个单独的备份服务器来将网络上的其他主机的数据备份到磁带驱动器、硬盘或者是自动换盘器。
- 操作系统:支持跨平台运行。
- 备份级别:完全,差异,增量,合并。
@ -75,7 +75,7 @@ AMANDA 是 Advanced Maryland Automatic Network Disk Archiver 的缩写。它允
### Backupninja - 轻量级备份系统 ###
Backupninja 是一个简单易用的备份系统。你可以简单的拖放一个配置文件到 /etc/backup.d/ 目录来备份多个主机。
![](http://s0.cyberciti.org/uploads/cms/2014/11/ninjabackup-helper-script.jpg)
@ -93,7 +93,7 @@ Backupninja 是一个简单易用的备份系统。你可以简单的拖放配
### Backuppc - 高效的客户端服务器备份工具###
Backuppc 可以用来备份基于Linux 和Windows 系统的主服务器硬盘。它配备了一个巧妙的池计划来最大限度的减少磁盘储存、磁盘 I/O 和网络I/O。
![](http://s0.cyberciti.org/uploads/cms/2014/11/BackupPCServerStatus.jpg)
@ -111,7 +111,7 @@ Backuppc 可以用来备份基于LInux 和Windows 系统的主服务器硬盘。
### UrBackup - 最容易配置的客户端服务器系统 ###
UrBackup 是一个非常容易配置的开源客户端服务器备份系统,通过镜像和文件备份的组合兼顾了数据安全性和快速恢复。文件可以通过Web界面或者Windows资源管理器来恢复而磁盘卷的备份则可以用可引导CD或U盘来恢复裸机恢复。Web界面使得配置你自己的备份服务变得非常简单。
![](http://s0.cyberciti.org/uploads/cms/2014/11/urbackup.jpg)
@ -129,19 +129,19 @@ UrBackup 是一个非常容易配置的开源客户端服务器备份系统,
### 其他供你考虑的一些极好用的开源备份软件 ###
AmandaBacula 和上面所提到的这些软件功能都很丰富,但是对于一些小型网络或者单独的服务器来说,配置起来比较复杂。我建议你了解和使用一下下面这些备份软件:
1. [Rsnapshot][10] - 我建议用这个作为对本地和远程的文件系统快照工具。看看[在Debian 和Ubuntu linux][11]和[基于CentOSRHEL 的操作系统][12]怎么设置和使用这个工具。
2. [rdiff-backup][13] - 另一个好用的类Unix 远程增量备份工具。
3. [Burp][14] - Burp 是一个网络备份和恢复程序。它使用了librsync来节省网络流量和节省每个备份占用的空间。它也使用了VSS卷影复制服务在备份Windows计算机时进行快照。
4. [Duplicity][15] - 伟大的加密和高效的备份类Unix操作系统。查看如何[安装Duplicity来加密云备份][16]来获取更多的信息。
5. [SafeKeep][17] - SafeKeep是一个中心化的、易于使用的备份应用程序结合了镜像备份和增量备份的最佳功能。
6. [DREBS][18] - DREBS 是为EBS定期做快照的工具。它被设计成在挂载了待快照EBS卷的EC2主机上运行。
7. 古老的 Unix 程序例如rsync、tar、cpio、mt 和 dump。
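以其中最常用的 rsync 为例,一次简单的本地到远程增量备份大致如下(示例命令,主机名和路径均为假设):

rsync -avz --delete /home/ backup@backup-server:/backups/home/    # -a 保留权限和时间戳,-z 传输时压缩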
### 结论 ###
我希望这篇文章能帮助你备份自己的数据。不要忘了验证你的备份,并创建多份数据备份。注意,磁盘阵列并不是一个备份解决方案!请使用上面提到的任何一个程序来备份你的服务器、桌面和笔记本电脑以及私人的移动设备。如果你知道其他我没有提到的开源备份软件,请在评论里分享。
--------------------------------------------------------------------------------
@ -149,7 +149,7 @@ via: http://www.cyberciti.biz/open-source/awesome-backup-software-for-linux-unix
作者:[nixCraft][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,6 +1,4 @@
美国海军陆战队要把雷达操作系统从Windows XP换成Linux
================================================================================
**一个新的雷达系统已经被送回去升级了**
@ -18,13 +16,13 @@ Traslated by H-mudcup
>一谈到稳定性和性能没什么能真的比得过Linux。这就是为什么美国海军陆战队的领导们已经决定让Northrop Grumman Corp. Electronic Systems把新送到的地面/空中任务导向雷达G/ATOR的操作系统从Windows XP换成Linux。
地面/空中任务导向雷达G/ATOR系统已经研制了很多年。很可能在这项工程启动的时候Windows XP被认为是合理的选择。在研制的这段时间事情发生了变化。微软已经撤销了对Windows XP的支持而且只有极少的几个组织会使用它。操作系统要么升级要么被换掉。在这种情况下Linux成了合理的选择。特别是当替换的费用很可能远远少于更新的费用。
有个很有趣的地方值得注意一下。地面/空中任务导向雷达G/ATOR才刚刚送到美国海军陆战队但是制造它的公司却还是选择了保留这个过时的操作系统。一定有人注意到了这是一个糟糕的决定,并把可能出现的问题上报给了指挥系统。
### G/ATOR雷达的软件将是基于Linux的 ###
Unix类系统比如基于BSD或者基于Linux的操作系统通常会出现在条件苛刻的领域或者任何情况下都不允许失败的技术中。例如这就是为什么大多数的服务器都运行着Linux。一个雷达系统配上一个几乎不可能崩溃的操作系统看起来非常相配。
“弗吉尼亚州Quantico海军基地海军陆战队系统司令部的官员在周三宣布了一项价值1020万美元的合同修订合同对象是Northrop Grumman Corp. Electronic Systems位于Linthicum Heights的部门。这次合同修订包括这样一项把G/ATOR的控制电脑从微软的Windows XP操作系统换成与国防信息系统局DISA兼容的Linux操作系统。”
@ -40,7 +38,7 @@ via: http://news.softpedia.com/news/U-S-Marine-Corps-Want-to-Change-OS-for-Radar
作者:[Silviu Stahie][a]
译者:[H-mudcup](https://github.com/H-mudcup)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,76 @@
没错Linux是感染了木马这并非企鹅的末日。
================================================================================
![Is something watching you?](http://www.omgubuntu.co.uk/wp-content/uploads/2014/12/spyware.jpg)
译注原文标题中Tuxpocalypse是作者造的词由Tux和apocalypse组合而来。Tux是Linux的LOGO中那只企鹅的名字apocalypse意为末世、大灾变这里翻译成企鹅的末日。
你被监视了吗?
带上一箱罐头,挖一个深坑碉堡,准备进入一个完全不同的新世界吧:[一个强大的木马已经在Linux中被发现][1]。
没错,迄今为止最牢不可破的计算机世外桃源已经被攻破了,安全专家们都已成惊弓之鸟。
关掉电脑拔掉键盘然后再买只猫忘掉YouTube吧。企鹅末日已经降临我们的日子不多了。
我去?这是真的吗?依我看,不一定吧~
### 一次可怕的异常事件! ###
先声明,**我并没有刻意轻视此次威胁人们给这个木马起名为Turla的严重性**同时为了避免误解我也并不是说作为Linux用户我们可以对它的影响毫不在意。
此次发现的木马能够在人们毫无察觉的情况下感染Linux系统这是非常可怕的。事实上它的主要工作是搜寻并向外发送各种类型的敏感信息这一点同样令人感到恐惧。据了解它已经存在至少4年时间而且无需root权限就能完成这些工作。呃这是要把人吓尿的节奏吗
But - 但是 - 新闻稿里常常这个时候该出现but了 - 要说恐慌正在横扫桌面Linux的粉丝那就有点断章取义、甚至不着边际了。
对我们中的有些人来说计算机安全隐患的确是一种新鲜事物然而我们应该对其审慎对待对桌面用户来说Linux仍然是一个天生安全的操作系统。一次瑕疵不应该否定它的一切我们没有必要慌忙地割断网线。
### 国家资助,目标政府 ###
![Is a penguin snake a Penguake or a Snaguin?](http://www.omgubuntu.co.uk/wp-content/uploads/2014/12/penguin-snakle-by-icao-292x300.jpg)
企鹅和蛇的组合该叫‘企蛇’还是‘蛇鹅’?
Turla木马是一个复杂、高级的持续威胁四年多来它以政府、大使馆以及制药公司的系统为目标其使用的攻击方式所基于的代码[至少在14年前][2]就已存在了。
在Windows系统中安全研究领域来自赛门铁克和卡巴斯基实验室的超级英雄们首先发现了这条黏黏的蛇他们发现Turla及其组件已经**感染了45个国家的数百台个人电脑**其中许多都是通过未打补丁的0day漏洞感染的。
*微软,干得漂亮。*
经过卡巴斯基实验室的进一步努力他们发现同样的木马出现在了Linux上。
这款木马无需高权限就可以“拦截传入的数据包在系统中执行传入的命令”但是它的触角到底有多深有多少Linux系统被感染它的完整功能都有哪些这些目前都暂时还不明朗。
根据它选定的目标我们推断“Turla”及其变种是由某些国家支持的。美国和英国的读者不要想当然以为幕后黑手就是“那些国家”。不要忘了我们自己的政府也很乐于趟这摊浑水。
#### 观点与责任 ####
这次的发现从情感上、技术上、伦理上,都是一次严重的失利,但它远没有达到说我们已经进入一个病毒和恶意软件针对桌面自由肆虐的时代。
**Turla 并不是那种用户关注的“我想要你的信用卡”病毒**那些病毒往往绑定在一个伪造的软件下载链接中。Turla是一种复杂的、经过巧妙处理的、具有高度适应性的威胁它时刻都具有着特定的目标因此它绝不仅仅满足于搜集一些卖萌少女的网站账户密码sorry 绿茶婊们!)。
卡巴斯基实验室是这样介绍的:
> “Linux上的Turla模块是一个链接多个静态库的C/C++可执行文件,这大大增加了它的文件体积。但它并没有着重减小自身的文件体积,而是剥离了自身的符号信息,这样就增加了对它逆向分析的难度。它的功能主要包括隐藏网络通信、远程执行任意命令以及远程管理等等。它的大部分代码都基于公开源码。”
不管它的影响和感染率如何,它的技术优势都将不断给那些号称聪明的专家们留下一个又一个问题,就让他们花费大把时间去追踪、分析、解决这些问题吧。
我不是一个计算机安全专家,但我是一个理智的网络脑残粉,要我说,这次事件应该被看做是一个通告(警告),而并非有些网站所标榜的洪水猛兽(世界末日)。
在更多细节披露之前我们都不必恐慌。只需继续计算机领域的安全实践避免从不信任的网站或PPA源下载运行脚本、app或二进制文件更不要冒险进入web网络的黑暗领域。
如果你仍然十分担心,你可以前往[卡巴斯基的博客][1]查看更多细节,以确定自己是否感染。
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/12/government-spying-turla-linux-trojan-found
作者:[Joey-Elijah Sneddon][a]
译者:[Mr小眼儿](http://blog.csdn.net/tinyeyeser)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://securelist.com/blog/research/67962/the-penquin-turla-2/
[2]:https://twitter.com/joernchen/status/542060412188262400
[3]:https://securelist.com/blog/research/67962/the-penquin-turla-2/

View File

@ -1,74 +0,0 @@
Yes, This Trojan Infects Linux. No, It's Not The Tuxpocalypse
================================================================================
![Is something watching you?](http://www.omgubuntu.co.uk/wp-content/uploads/2014/12/spyware.jpg)
Is something watching you?
Grab a crate of canned food, start digging a deep underground bunker and prepare to settle into a world that will never be the same again: [a powerful trojan has been uncovered on Linux][1].
Yes, the hitherto impregnable fortress of computing nirvana has been compromised in a way that has left security experts a touch perturbed.
Unplug your PC, disinfect your keyboard and buy a cat (no more YouTube). The Tuxpocalypse is upon us. We've reached the end of days.
Right? RIGHT? Nah, not quite.
### A Terrifying Anomalous Thing! ###
Let me set off by saying that **I am not underplaying the severity of this threat (known by the nickname Turla)** nor, for the avoidance of doubt, am I suggesting that we as Linux users shouldn't be concerned by the implications.
The discovery of a silent trojan infecting Linux systems is terrifying. The fact it was tasked with sucking up and sending off all sorts of sensitive information is horrific. And to learn it's been doing this for at least four years and doesn't require root privileges? My seat is wet. I'm sorry.
But — and along with hyphens and typos, there's always a but on this site — the panic currently sweeping desktop Linux fans, Mexican wave style, is a little out of context.
Vulnerability may be a new feeling for some of us, yet let's keep it in check: Linux remains an inherently secure operating system for desktop users. One clever workaround does not negate that and shouldn't send you scurrying offline.
### State Sponsored, Targeting Governments ###
![Is a penguin snake a Penguake or a Snaguin?](http://www.omgubuntu.co.uk/wp-content/uploads/2014/12/penguin-snakle-by-icao-292x300.jpg)
Is a penguin snake a Penguake or a Snaguin?
Turla is a complex APT (Advanced Persistent Threat) that has (thus far) targeted government, embassy and pharmaceutical companies' systems for around four years using a method based on [14 year old code, no less][2].
On Windows, where the superhero security researchers at Symantec and Kaspersky Lab first sighted the slimy snake, Turla and components of it were found to have **infected hundreds (100s) of PCs across 45 countries**, many through unpatched zero-day exploits.
*Nice one Microsoft.*
Further diligence by Kaspersky Lab has now uncovered that parts of the same trojan have also been active on Linux for some time.
The Trojan doesn't require elevated privileges and can “intercept incoming packets and run incoming commands on the system”, but it's not yet clear how deep its tentacles reach or how many Linux systems are infected, nor is the full extent of its capabilities known.
“Turla” (and its children) are presumed to be nation-state sponsored due to its choice of targets. US and UK readers shouldn't assume it's “*them*”, either. Our own governments are just as happy to play in the mud, too.
#### Perspective and Responsibility ####
As terrible a breach as this discovery is emotionally, technically and ethically, it remains far, far, far away from being an indication that we're entering a new “free for all” era of viruses and malware aimed at the desktop.
**Turla is not a user-focused “i wantZ ur CredIt carD” virus** bundled inside a faux software download. It's a complex, finessed and adaptable threat with specific targets in mind (ergo grander ambitions than collecting a bunch of fruity tube dot com passwords, sorry ego!).
Kaspersky Lab explains:
> “The Linux Turla module is a C/C++ executable statically linked against multiple libraries, greatly increasing its file size. It was stripped of symbol information, more likely intended to increase analysis effort than to decrease file size. Its functionality includes hidden network communications, arbitrary remote command execution, and remote management. Much of its code is based on public sources.”
Regardless of impact or infection rate, its presence will still raise big, big questions that clever, clever people will now spend time addressing, analysing and (importantly) solving.
IANACSE (I am not a computer security expert) but IAFOA (I am a fan of acronyms), and AFAICT (as far as I can tell) this news should be viewed more as a cautionary PSA or FYI than the kind of OMGGTFO that some sites are painting it as.
Until more details are known none of us should panic. Let's continue to practice safe computing. Avoid downloading/running scripts, apps, or binaries from untrusted sites or PPAs, and don't venture into dodgy dark parts of the web.
If you remain super concerned you can check out the [Kaspersky blog][1] for details on how to check that youre not infected.
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/12/government-spying-turla-linux-trojan-found
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://securelist.com/blog/research/67962/the-penquin-turla-2/
[2]:https://twitter.com/joernchen/status/542060412188262400
[3]:https://securelist.com/blog/research/67962/the-penquin-turla-2/

View File

@ -1,3 +1,5 @@
Translating By H-mudcup
Easy File Comparisons With These Great Free Diff Tools
================================================================================
by Frazer Kline
@ -163,4 +165,4 @@ via: http://www.linuxlinks.com/article/2014062814400262/FileComparisons.html
[2]:https://sourcegear.com/diffmerge/
[3]:http://furius.ca/xxdiff/
[4]:http://diffuse.sourceforge.net/
[5]:http://www.caffeinated.me.uk/kompare/

View File

@ -1,62 +0,0 @@
Tomahawk Music Player Returns With New Look, Features
================================================================================
**After a quiet year Tomahawk, the Swiss Army knife of music players, is back with a brand new release to sing about.**
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/11/tomahawk-tile-1.jpg)
Version 0.8 of the open-source and cross-platform app adds **support for more online services**, refreshes its appearance, and doubles down on making sure its innovative social features work flawlessly.
### Tomahawk — The Best of Both Worlds ###
Tomahawk marries a traditional app structure with the modernity of our “on demand” culture. It can browse and play music from local libraries as well as online services like Spotify, Grooveshark, and SoundCloud. In its latest release it adds Google Play Music and Beats Music to its roster.
That may sound cumbersome or confusing on paper but in practice it all works fantastically.
When you want to play a song, and don't care where it's played back from, you just tell Tomahawk the track title and artist and it automatically finds a high-quality version from enabled sources — you don't need to do anything.
![](http://i.imgur.com/nk5oixy.jpg)
The app also sports some additional features, like EchoNest profiling, Last.fm suggestions, and Jabber support so you can play friends' music. There's also a built-in messaging service so you can quickly share playlists and tracks with others.
> “This fundamentally different approach to music enables a range of new music consumption and sharing experiences previously not possible,” the project says on its website. And with little else like it, it's not wrong.
![Tomahawk supports the Sound Menu](http://www.omgubuntu.co.uk/wp-content/uploads/2014/11/tomahawk-controllers.jpg)
Tomahawk supports the Sound Menu
### Tomahawk 0.8 Release Highlights ###
- New UI
- Support for Beats Music
- Support for Google Play Music (stored and Play All Access)
- Support for drag and drop iTunes, Spotify, etc. web links
- Now Playing notifications
- Android app (beta)
- Inbox improvements
### Install Tomahawk 0.8 in Ubuntu ###
As a big music streaming user I'll be using the app over the next few days to get a fuller appreciation of the changes on offer. In the mean time, you can go hands on for yourself.
Tomahawk 0.8 is available for Ubuntu 14.04 LTS and Ubuntu 14.10 via an official PPA.
sudo add-apt-repository ppa:tomahawk/ppa
sudo apt-get update && sudo apt-get install tomahawk
Standalone installers, and more information, can be found on the official project website.
- [Visit the Official Tomahawk Website][1]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/11/tomahawk-media-player-returns-new-look-features
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://gettomahawk.com/

View File

@ -1,48 +0,0 @@
[Translating by Stevarzh]
How to Download Music from Grooveshark with a Linux OS
================================================================================
> The solution is actually much simpler than you think
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-2.jpg)
**Grooveshark is a great online platform for people who want to listen to music, and there are a number of ways to download music from there. Groovesquid is just one of the applications that let users get music from Grooveshark, and it's multiplatform.**
If there is a service that streams something online, then there is a way to download the stuff that you are just watching or listening to. As it turns out, it's not that difficult and there are a ton of solutions, no matter the platform. For example, there are dozens of YouTube downloaders and it stands to reason that it's not all that difficult to get stuff from Grooveshark either.
Now, there is the problem of legality. Like many other applications out there, Groovesquid is not actually illegal. It's the user's fault if they do something illegal with an application. The same reasoning can be applied to apps like uTorrent or BitTorrent. As long as you don't touch copyrighted material, there are no problems in using Groovesquid.
### Groovesquid is fast and efficient ###
The only problem that you could find with Groovesquid is the fact that it's based on Java and that's never a good sign. This is a good way to ensure that an application runs on all the platforms, but it's an issue when it comes to the interface. It's not great, but it doesn't really matter all that much for users, especially since the app is doing a great job.
There is one caveat though. Groovesquid is a free application, but in order to remain free, it has to display an ad on the right side of the menu. This shouldn't be a problem for most people, but it's a good idea to mention that right from the start.
From a usability point of view, the application is pretty straightforward. Users can download a single song by entering the link in the top field, but the purpose of that field can be changed by accessing the small drop-down menu to its left. From there, it's possible to change to Song, Popular, Albums, Playlist, and Artist. Some of the options provide access to things like the most popular song on Grooveshark and other options allow you to download an entire playlist, for example.
You can download Groovesquid 0.7.0
- [jar][1] File size: 3.8 MB
- [tar.gz][2] File size: 549 KB
You will get a Jar file and all you have to do is to make it executable and let Java do the rest.
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-3.jpg)
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-4.jpg)
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-5.jpg)
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-6.jpg)
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:https://github.com/groovesquid/groovesquid/releases/download/v0.7.0/Groovesquid.jar
[2]:https://github.com/groovesquid/groovesquid/archive/v0.7.0.tar.gz

View File

@ -0,0 +1,59 @@
This App Can Write a Single ISO to 20 USB Drives Simultaneously
================================================================================
**If I were to ask you to burn a single Linux ISO to 17 USB thumb drives how would you go about doing it?**
Code savvy folks would write a little bash script to automate the process, and a large number would use a GUI tool like the USB Startup Disk Creator to burn the ISO to each drive in turn, one by one. But the rest of us would fast conclude that neither method is ideal.
### Problem > Solution ###
![GNOME MultiWriter in action](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/gnome-multi-writer.jpg)
GNOME MultiWriter in action
Richard Hughes, a GNOME developer, faced a similar dilemma. He wanted to create a number of USB drives pre-loaded with an OS, but wanted a tool simple enough for someone like his dad to use.
His response was to create a **brand new app** that combines both approaches into one easy to use tool.
It's called “[GNOME MultiWriter][1]” and lets you write a single ISO or IMG to multiple USB drives at the same time.
It nixes the need to customize or create a command line script and relinquishes the need to waste an afternoon performing an identical set of actions on repeat.
All you need is this app, an ISO, some thumb-drives and lots of empty USB ports.
### Use Cases and Installing ###
![The app can be installed on Ubuntu](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/mutli-writer-on-ubuntu.jpg)
The app can be installed on Ubuntu
The app has a pretty defined usage scenario, that being situations where USB sticks pre-loaded with an OS or live image are being distributed.
That being said, it should work just as well for anyone wanting to create a solitary bootable USB stick, too — and since I've never once successfully created a bootable image from Ubuntu's built-in disk creator utility, working alternatives are welcome news to me!
Hughes, the developer, says it **supports up to 20 USB drives**, each being between 1GB and 32GB in size.
The drawback (for now) is that GNOME MultiWriter is not a finished, stable product. It works, but at this early blush there are no pre-built binaries to install or a PPA to add to your overstocked software sources.
If you know your way around the usual configure/make process you can get it up and running in no time. On Ubuntu 14.10 you may also need to install the following packages first:
sudo apt-get install gnome-common yelp-tools libcanberra-gtk3-dev libudisks2-dev gobject-introspection
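For reference, a typical build-from-source session might look something like the following (a rough sketch only; the exact steps and dependencies can differ, so check the project's README first):

git clone https://github.com/hughsie/gnome-multi-writer.git
cd gnome-multi-writer
./autogen.sh && make && sudo make install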
If you get it up and running, give it a whirl and let us know what you think!
Bugs and pull requests can be logged on the GitHub page for the project, which is where you'll also find tarball downloads for manual installation.
- [GNOME MultiWriter on Github][2]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/01/gnome-multiwriter-iso-usb-utility
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://github.com/hughsie/gnome-multi-writer/
[2]:https://github.com/hughsie/gnome-multi-writer/

View File

@ -1,3 +1,5 @@
translating by barney-ro
2015 will be the year Linux takes over the enterprise (and other predictions)
================================================================================
> Jack Wallen removes his rose-colored glasses and peers into the crystal ball to predict what 2015 has in store for Linux.
@ -62,7 +64,7 @@ What are your predictions for Linux and open source in 2015? Share your thoughts
via: http://www.techrepublic.com/article/2015-will-be-the-year-linux-takes-over-the-enterprise-and-other-predictions/
作者:[Jack Wallen][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,47 @@
2015: Open Source Has Won, But It Isn't Finished
================================================================================
> After the wins of 2014, what's next?
At the beginning of a new year, it's traditional to look back over the last 12 months. But as far as this column is concerned, it's easy to summarise what happened then: open source has won. Let's take it from the top:
**Supercomputers**. Linux is so dominant on the Top 500 Supercomputers lists it is almost embarrassing. The [November 2014 figures][1] show that 485 of the top 500 systems were running some form of Linux; Windows runs on just one. Things are even more impressive if you look at the numbers of cores involved. Here, Linux is to be found on 22,851,693 of them, while Windows is on just 30,720; what that means is that not only does Linux dominate, it is particularly strong on the bigger systems.
**Cloud computing**. The Linux Foundation produced an interesting [report][2] last year, which looked at the use of Linux in the cloud by large companies. It found that 75% of them use Linux as their primary platform there, against just 23% that use Windows. It's hard to translate that into market share, since the mix between cloud and non-cloud needs to be factored in; however, given the current popularity of cloud computing, it's safe to say that the use of Linux is high and increasing. Indeed, the same survey found Linux deployments in the cloud have increased from 65% to 79%, while those for Windows have fallen from 45% to 36%. Of course, some may not regard the Linux Foundation as totaly disinterested here, but even allowing for that, and for statistical uncertainties, it's pretty clear which direction things are moving in.
**Web servers**. Open source has dominated this sector for nearly 20 years - an astonishing record. However, more recently there's been some interesting movement in market share: at one point, Microsoft's IIS managed to overtake Apache in terms of the total number of Web servers. But as Netcraft explains in its most recent [analysis][3], there's more than meets the eye here:
> This is the second month in a row where there has been a large drop in the total number of websites, giving this month the lowest count since January. As was the case in November, the loss has been concentrated at just a small number of hosting companies, with the ten largest drops accounting for over 52 million hostnames. The active sites and web facing computers metrics were not affected by the loss, with the sites involved being mostly advertising linkfarms, having very little unique content. The majority of these sites were running on Microsoft IIS, causing it to overtake Apache in the July 2014 survey. However the recent losses have resulted in its market share dropping to 29.8%, leaving it now over 10 percentage points behind Apache.
As that indicates, Microsoft's "surge" was more apparent than real, and largely based on linkfarms with little useful content. Indeed, Netcraft's figures for active sites paints a very different picture: Apache has 50.57% market share, with nginx second on 14.73%; Microsoft IIS limps in with a rather feeble 11.72%. This means that open source has around 65% of the active Web server market - not quite at the supercomputer level, but pretty good.
**Mobile systems**. Here, the march of open source as the foundation of Android continues. Latest figures show that Android accounted for [83.6%][4] of smartphone shipments in the third quarter of 2014, up from 81.4% in the same quarter the previous year. Apple achieved 12.3%, down from 13.4%. As far as tablets are concerned, Android is following a similar trajectory: for the second quarter of 2014, Android notched up around [75% of global tablet sales][5], while Apple was on 25%.
**Embedded systems**. Although it's much harder to quantify the market share of Linux in the important embedded system market, but figures from one 2013 study indicated that around [half of planned embedded systems][6] would use it.
**Internet of Things**. In many ways this is simply another incarnation of embedded systems, with the difference that they are designed to be online, all the time. It's too early to talk of market share, but as I've [discussed][7] recently, AllSeen's open source framework is coming on apace. What's striking by their absence are any credible closed-source rivals; it therefore seems highly likely that the Internet of Things will see supercomputer-like levels of open source adoption.
Of course, this level of success always begs the question: where do we go from here? Given that open source is approaching saturation levels of success in many sectors, surely the only way is down? In answer to that question, I recommend a thought-provoking essay from 2013 written by Christopher Kelty for the Journal of Peer Production, with the intriguing title of "[There is no free software.][8]" Here's how it begins:
> Free software does not exist. This is sad for me, since I wrote a whole book about it. But it was also a point I tried to make in my book. Free software—and its doppelganger open source—is constantly becoming. Its existence is not one of stability, permanence, or persistence through time, and this is part of its power.
In other words, whatever amazing free software 2014 has already brought us, we can be sure that 2015 will be full of yet more of it, as it continues its never-ending evolution.
--------------------------------------------------------------------------------
via: http://www.computerworlduk.com/blogs/open-enterprise/open-source-has-won-3592314/
作者:[lyn Moody][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.computerworlduk.com/author/glyn-moody/
[1]:http://www.top500.org/statistics/list/
[2]:http://www.linuxfoundation.org/publications/linux-foundation/linux-end-user-trends-report-2014
[3]:http://news.netcraft.com/archives/2014/12/18/december-2014-web-server-survey.html
[4]:http://www.cnet.com/news/android-stays-unbeatable-in-smartphone-market-for-now/
[5]:http://timesofindia.indiatimes.com/tech/tech-news/Android-tablet-market-share-hits-70-in-Q2-iPads-slip-to-25-Survey/articleshow/38966512.cms
[6]:http://linuxgizmos.com/embedded-developers-prefer-linux-love-android/
[7]:http://www.computerworlduk.com/blogs/open-enterprise/allseen-3591023/
[8]:http://peerproduction.net/issues/issue-3-free-software-epistemics/debate/there-is-no-free-software/

View File

@ -0,0 +1,155 @@
How to Backup and Restore Your Apps and PPAs in Ubuntu Using Aptik
================================================================================
![00_lead_image_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x300x00_lead_image_aptik.png.pagespeed.ic.n3TJwp8YK_.png)
If you need to reinstall Ubuntu or if you just want to install a new version from scratch, wouldn't it be useful to have an easy way to reinstall all your apps and settings? You can easily accomplish this using a free tool called Aptik.
Aptik (Automated Package Backup and Restore), an application available in Ubuntu, Linux Mint, and other Debian- and Ubuntu-based Linux distributions, allows you to backup a list of installed PPAs (Personal Package Archives), which are software repositories, downloaded packages, installed applications and themes, and application settings to an external USB drive, network drive, or a cloud service like Dropbox.
NOTE: When we say to type something in this article and there are quotes around the text, DO NOT type the quotes, unless we specify otherwise.
To install Aptik, you must add the PPA. To do so, press Ctrl + Alt + T to open a Terminal window. Type the following text at the prompt and press Enter.
sudo apt-add-repository -y ppa:teejee2008/ppa
Type your password when prompted and press Enter.
![01_command_to_add_repository](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x99x01_command_to_add_repository.png.pagespeed.ic.UfVC9QLj54.png)
Type the following text at the prompt to make sure the repository is up-to-date.
sudo apt-get update
![02_update_command](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x252x02_update_command.png.pagespeed.ic.m9pvd88WNx.png)
When the update is finished, you are ready to install Aptik. Type the following text at the prompt and press Enter.
sudo apt-get install aptik
NOTE: You may see some errors about packages that the update failed to fetch. If they are similar to the ones listed on the following image, you should have no problem installing Aptik.
![03_command_to_install_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x03_command_to_install_aptik.png.pagespeed.ic.1jtHysRO9h.png)
The progress of the installation displays and then a message displays saying how much disk space will be used. When asked if you want to continue, type a “y” and press Enter.
![04_do_you_want_to_continue](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x04_do_you_want_to_continue.png.pagespeed.ic.WQ15_UxK5Z.png)
When the installation is finished, close the Terminal window by typing “Exit” and pressing Enter, or by clicking the “X” button in the upper-left corner of the window.
![05_closing_terminal_window](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x416x05_closing_terminal_window.png.pagespeed.ic.9QoqwM7Mfr.png)
Before running Aptik, you should set up a backup directory on a USB flash drive, a network drive, or on a cloud account, such as Dropbox or Google Drive. For this example, we will use Dropbox.
![06_creating_backup_folder](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x243x06_creating_backup_folder.png.pagespeed.ic.7HzR9KwAfQ.png)
Once your backup directory is set up, click the “Search” button at the top of the Unity Launcher bar.
![07_opening_search](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x177x07_opening_search.png.pagespeed.ic.qvFiw6_sXa.png)
Type “aptik” in the search box. Results of the search display as you type. When the icon for Aptik displays, click on it to open the application.
![08_starting_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x338x08_starting_aptik.png.pagespeed.ic.8fSl4tYR0n.png)
A dialog box displays asking for your password. Enter your password in the edit box and click “OK.”
![09_entering_password](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x337x09_entering_password.png.pagespeed.ic.yanJYFyP1i.png)
The main Aptik window displays. Select “Other…” from the “Backup Directory” drop-down list. This allows you to select the backup directory you created.
NOTE: The “Open” button to the right of the drop-down list opens the selected directory in a Files Manager window.
![10_selecting_other_for_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x533x10_selecting_other_for_directory.png.pagespeed.ic.dHbmYdAHYx.png)
On the “Backup Directory” dialog box, navigate to your backup directory and then click “Open.”
NOTE: If you haven't created a backup directory yet, or you want to add a subdirectory in the selected directory, use the “Create Folder” button to create a new directory.
![11_choosing_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x470x11_choosing_directory.png.pagespeed.ic.E-56x54cy9.png)
To backup the list of installed PPAs, click “Backup” to the right of “Software Sources (PPAs).”
![12_clicking_backup_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x13_selecting_all_software_sources.png.pagespeed.ic.zDFiDGfnks.png)
The “Backup Software Sources” dialog box displays. The list of installed packages and the associated PPA for each displays. Select the PPAs you want to backup, or use the “Select All” button to select all the PPAs in the list.
![13_selecting_all_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x13_selecting_all_software_sources.png.pagespeed.ic.zDFiDGfnks.png)
Click “Backup” to begin the backup process.
![14_clicking_backup_for_all_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x14_clicking_backup_for_all_software_sources.png.pagespeed.ic.n5h_KnQVZa.png)
A dialog box displays when the backup is finished telling you the backup was created successfully. Click “OK” to close the dialog box.
A file named “ppa.list” will be created in the backup directory.
![15_closing_finished_dialog_software_sources](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x15_closing_finished_dialog_software_sources.png.pagespeed.ic.V25-KgSXdY.png)
The next item, “Downloaded Packages (APT Cache)”, is only useful if you are re-installing the same version of Ubuntu. It backs up the packages in your system cache (/var/cache/apt/archives). If you are upgrading your system, you can skip this step because the packages for the new version of the system will be newer than the packages in the system cache.
Backing up downloaded packages and then restoring them on the re-installed Ubuntu system will save time and Internet bandwidth when the packages are reinstalled. Because the packages will be available in the system cache once you restore them, the download will be skipped and the installation of the packages will complete more quickly.
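For comparison, what Aptik does for this item is roughly equivalent to copying the cache directory by hand, e.g. (a hypothetical manual sketch, with /path/to/backup standing in for your chosen backup directory):

sudo cp -a /var/cache/apt/archives /path/to/backup/archives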
If you are reinstalling the same version of your Ubuntu system, click the “Backup” button to the right of “Downloaded Packages (APT Cache)” to backup the packages in the system cache.
NOTE: When you backup the downloaded packages, there is no secondary dialog box. The packages in your system cache (/var/cache/apt/archives) are copied to an “archives” directory in the backup directory and a dialog box displays when the backup is finished, indicating that the packages were copied successfully.
![16_downloaded_packages_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x544x16_downloaded_packages_backed_up.png.pagespeed.ic.z8ysuwzQAK.png)
There are some packages that are part of your Ubuntu distribution. These are not checked, since they are automatically installed when you install the Ubuntu system. For example, Firefox is a package that is installed by default in Ubuntu and other similar Linux distributions. Therefore, it will not be selected by default.
Packages that you installed after installing the system, such as the [package for the Chrome web browser][1] or the package containing Aptik (yes, Aptik is automatically selected to back up), are selected by default. This allows you to easily back up the packages that are not included in the system when installed.
Select the packages you want to back up and de-select the packages you don't want to backup. Click “Backup” to the right of “Software Selections” to back up the selected top-level packages.
NOTE: Dependency packages are not included in this backup.
![18_clicking_backup_for_software_selections](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x18_clicking_backup_for_software_selections.png.pagespeed.ic.QI5D-IgnP_.png)
Two files, named “packages.list” and “packages-installed.list”, are created in the backup directory and a dialog box displays indicating that the backup was created successfully. Click “OK” to close the dialog box.
NOTE: The “packages-installed.list” file lists all the packages. The “packages.list” file also lists all the packages, but indicates which ones were selected.
![19_software_selections_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x19_software_selections_backed_up.png.pagespeed.ic.LVmgs6MKPL.png)
To backup settings for installed applications, click the “Backup” button to the right of “Application Settings” on the main Aptik window. Select the settings you want to back up and click “Backup”.
NOTE: Click the “Select All” button if you want to back up all application settings.
![20_backing_up_app_settings](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x20_backing_up_app_settings.png.pagespeed.ic.7_kgU3Dj_m.png)
The selected settings files are zipped into a file called “app-settings.tar.gz”.
![21_zipping_settings_files](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x21_zipping_settings_files.png.pagespeed.ic.dgoBj7egqv.png)
When the zipping is complete, the zipped file is copied to the backup directory and a dialog box displays telling you that the backups were created successfully. Click “OK” to close the dialog box.
![22_app_settings_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22_app_settings_backed_up.png.pagespeed.ic.Mb6utyLJ3W.png)
Themes from the “/usr/share/themes” directory and icons from the “/usr/share/icons” directory can also be backed up. To do so, click the “Backup” button to the right of “Themes and Icons”. The “Backup Themes” dialog box displays with all the themes and icons selected by default. De-select any themes or icons you don't want to back up and click “Backup.”
![22a_backing_up_themes_and_icons](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22a_backing_up_themes_and_icons.png.pagespeed.ic.KXa8W3YhyF.png)
The themes are zipped and copied to a “themes” directory in the backup directory and the icons are zipped and copied to an “icons” directory in the backup directory. A dialog box displays telling you that the backups were created successfully. Click “OK” to close the dialog box.
![22b_themes_and_icons_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22b_themes_and_icons_backed_up.png.pagespeed.ic.ejjRaymD39.png)
Once you've completed the desired backups, close Aptik by clicking the “X” button in the upper-left corner of the main window.
![23_closing_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x542x23_closing_aptik.png.pagespeed.ic.pNk9Vt3--l.png)
Your backup files are available in the backup directory you chose.
![24_backup_files_in_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x374x24_backup_files_in_directory.png.pagespeed.ic.vwblOfN915.png)
When you re-install your Ubuntu system or install a new version of Ubuntu, install Aptik on the newly installed system and make the backup files you generated available to the system. Run Aptik and use the “Restore” button for each item to restore your PPAs, applications, packages, settings, themes, and icons.
--------------------------------------------------------------------------------
via: http://www.howtogeek.com/206454/how-to-backup-and-restore-your-apps-and-ppas-in-ubuntu-using-aptik/
作者Lori Kaufman
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.howtogeek.com/203768

View File

@ -1,273 +0,0 @@
How to configure HTTP load balancer with HAProxy on Linux
================================================================================
Increased demand on web-based applications and services is putting more and more weight on the shoulders of IT administrators. When faced with unexpected traffic spikes, organic traffic growth, or internal challenges such as hardware failures and urgent maintenance, your web application must remain available, no matter what. Even modern devops and continuous delivery practices can threaten the reliability and consistent performance of your web service.
Unpredictable or inconsistent performance is not something you can afford. But how can we eliminate these downsides? In most cases a proper load balancing solution will do the job. And today I will show you how to set up an HTTP load balancer using [HAProxy][1].
### What is HTTP load balancing? ###
HTTP load balancing is a networking solution responsible for distributing incoming HTTP or HTTPS traffic among servers hosting the same application content. By balancing application requests across multiple available servers, a load balancer prevents any application server from becoming a single point of failure, thus improving overall application availability and responsiveness. It also allows you to easily scale in/out an application deployment by adding or removing extra application servers with changing workloads.
### Where and when to use load balancing? ###
As load balancers improve server utilization and maximize availability, you should use one whenever your servers start to come under high load. Or, if you are just planning the architecture for a bigger project, it's a good habit to plan for a load balancer upfront. It will prove itself useful in the future when you need to scale your environment.
### What is HAProxy? ###
HAProxy is a popular open-source load balancer and proxy for TCP/HTTP servers on GNU/Linux platforms. Designed with a single-threaded, event-driven architecture, HAProxy is capable of handling [10G NIC line rate][2] easily, and is used extensively in many production environments. Its features include automatic health checks, customizable load balancing algorithms, HTTPS/SSL support, session rate limiting, and more.
### What are we going to achieve in this tutorial? ###
In this tutorial, we will go through the process of configuring a HAProxy-based load balancer for HTTP web servers.
### Prerequisites ###
You will need at least one, or preferably two web servers to verify functionality of your load balancer. We assume that backend HTTP web servers are already [up and running][3].
### Install HAProxy on Linux ###
For most distributions, we can install HAProxy using your distribution's package manager.
#### Install HAProxy on Debian ####
In Debian we need to add backports for Wheezy. To do that, please create a new file called "backports.list" in /etc/apt/sources.list.d, with the following content:
deb http://cdn.debian.net/debian wheezy-backports main
Refresh your repository data and install HAProxy.
# apt-get update
# apt-get install haproxy
#### Install HAProxy on Ubuntu ####
# apt-get install haproxy
#### Install HAProxy on CentOS and RHEL ####
# yum install haproxy
### Configure HAProxy ###
In this tutorial, we assume that there are two HTTP web servers up and running with IP addresses 192.168.100.2 and 192.168.100.3. We also assume that the load balancer will be configured at a server with IP address 192.168.100.4.
To make HAProxy functional, you need to change a number of items in /etc/haproxy/haproxy.cfg. These changes are described in this section. In case some configuration differs for different GNU/Linux distributions, it will be noted in the paragraph.
#### 1. Configure Logging ####
One of the first things you should do is to set up proper logging for your HAProxy, which will be useful for future debugging. Log configuration can be found in the global section of /etc/haproxy/haproxy.cfg. The following are distro-specific instructions for configuring logging for HAProxy.
**CentOS or RHEL:**
To enable logging on CentOS/RHEL, replace:
log 127.0.0.1 local2
with:
log 127.0.0.1 local0
The next step is to set up separate log files for HAProxy in /var/log. For that, we need to modify our current rsyslog configuration. To make the configuration simple and clear, we will create a new file called haproxy.conf in /etc/rsyslog.d/ with the following content.
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%msg%\n"
local0.=info -/var/log/haproxy.log;Haproxy
local0.notice -/var/log/haproxy-status.log;Haproxy
local0.* ~
This configuration will separate all HAProxy messages based on the $template to log files in /var/log. Now restart rsyslog to apply the changes.
# service rsyslog restart
**Debian or Ubuntu:**
To enable logging for HAProxy on Debian or Ubuntu, replace:
log /dev/log local0
log /dev/log local1 notice
with:
log 127.0.0.1 local0
Next, to configure separate log files for HAProxy, edit a file called haproxy.conf (or 49-haproxy.conf in Debian) in /etc/rsyslog.d/ with the following content.
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%msg%\n"
local0.=info -/var/log/haproxy.log;Haproxy
local0.notice -/var/log/haproxy-status.log;Haproxy
local0.* ~
This configuration will separate all HAProxy messages based on the $template to log files in /var/log. Now restart rsyslog to apply the changes.
# service rsyslog restart
#### 2. Setting Defaults ####
The next step is to set default variables for HAProxy. Find the defaults section in /etc/haproxy/haproxy.cfg, and replace it with the following configuration.
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
maxconn 20000
contimeout 5000
clitimeout 50000
srvtimeout 50000
The configuration stated above is recommended for HTTP load balancer use, but it may not be the optimal solution for your environment. In that case, feel free to explore the HAProxy man pages to tweak it.
#### 3. Webfarm Configuration ####
The webfarm configuration defines the pool of available HTTP servers. Most of the settings for our load balancer will be placed here. Now we will create some basic configuration where our nodes will be defined. Replace all of the configuration from the frontend section to the end of the file with the following code:
listen webfarm *:80
mode http
stats enable
stats uri /haproxy?stats
stats realm Haproxy\ Statistics
stats auth haproxy:stats
balance roundrobin
cookie LBN insert indirect nocache
option httpclose
option forwardfor
server web01 192.168.100.2:80 cookie node1 check
server web02 192.168.100.3:80 cookie node2 check
The line "listen webfarm *:80" defines on which interfaces our load balancer will listen. For the sake of the tutorial, I've set that to "*" which makes the load balancer listen on all our interfaces. In a real world scenario, this might be undesirable and should be replaced with an interface that is accessible from the internet.
stats enable
stats uri /haproxy?stats
stats realm Haproxy\ Statistics
stats auth haproxy:stats
The above settings declare that our load balancer statistics can be accessed on http://<load-balancer-IP>/haproxy?stats. The access is secured with a simple HTTP authentication with login name "haproxy" and password "stats". These settings should be replaced with your own credentials. If you don't need to have these statistics available, then completely disable them.
Here is an example of HAProxy statistics.
![](https://farm4.staticflickr.com/3928/15416835905_a678c8f286_c.jpg)
The line "balance roundrobin" defines the type of load balancing we will use. In this tutorial we will use simple round robin algorithm, which is fully sufficient for HTTP load balancing. HAProxy also offers other types of load balancing:
- **leastconn**:­ gives connections to the server with the lowest number of connections.
- **source**: hashes the source IP address, and divides it by the total weight of the running servers to decide which server will receive the request.
- **uri**: the left part of the URI (before the question mark) is hashed and divided by the total weight of the running servers. The result determines which server will receive the request.
- **url_param**: the URL parameter specified in the argument will be looked up in the query string of each HTTP GET request. You can basically lock the request using crafted URL to specific load balancer node.
- **hdr(name**): the HTTP header <name> will be looked up in each HTTP request and directed to specific node.
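For instance, switching the webfarm above from round robin to least-connections only requires changing the balance line; this is a minimal sketch, assuming the rest of the listen section stays exactly as shown earlier:

    balance leastconn

The same one-line substitution works for any of the algorithms listed above.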
The line "cookie LBN insert indirect nocache" makes our load balancer store persistent cookies, which allows us to pinpoint which node from the pool is used for a particular session. These node cookies will be stored with a defined name. In our case, I used "LBN", but you can specify any name you like. The node will store its string as a value for this cookie.
server web01 192.168.100.2:80 cookie node1 check
server web02 192.168.100.3:80 cookie node2 check
The above part is the definition of our pool of web server nodes. Each server is represented by its internal name (e.g., web01, web02), IP address, and a unique cookie string. The cookie string can be defined as anything you want. I am using the simple node1, node2 ... node(n).
### Start HAProxy ###
When you are done with the configuration, it's time to start HAProxy and verify that everything is working as intended.
#### Start HAProxy on CentOS/RHEL ####
Enable HAProxy to be started after boot and turn it on using:
# chkconfig haproxy on
# service haproxy start
And of course don't forget to enable port 80 in the firewall as follows.
**Firewall on CentOS/RHEL 7:**
# firewall-cmd --permanent --zone=public --add-port=80/tcp
# firewall-cmd --reload
**Firewall on CentOS/RHEL 6:**
Add the following line into the ":OUTPUT ACCEPT" section of /etc/sysconfig/iptables:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
and restart **iptables**:
# service iptables restart
#### Start HAProxy on Debian ####
Start HAProxy with:
# service haproxy start
Don't forget to enable port 80 in the firewall by adding the following line into /etc/iptables.up.rules:
-A INPUT -p tcp --dport 80 -j ACCEPT
#### Start HAProxy on Ubuntu ####
Enable HAProxy to be started after boot by setting the "ENABLED" option to "1" in /etc/default/haproxy:
ENABLED=1
Start HAProxy:
# service haproxy start
and enable port 80 in the firewall:
# ufw allow 80
### Test HAProxy ###
To check whether HAProxy is working properly, we can do the following.
First, prepare test.php file with the following content:
<?php
header('Content-Type: text/plain');
echo "Server IP: ".$_SERVER['SERVER_ADDR'];
echo "\nX-Forwarded-for: ".$_SERVER['HTTP_X_FORWARDED_FOR'];
?>
This PHP file will tell us which server (i.e., load balancer) forwarded the request, and what backend web server actually handled the request.
Place this PHP file in the root directory of both backend web servers. Now use curl command to fetch this PHP file from the load balancer (192.168.100.4).
$ curl http://192.168.100.4/test.php
When we run this command multiple times, we should see the following two outputs alternate (due to the round robin algorithm).
Server IP: 192.168.100.2
X-Forwarded-for: 192.168.100.4
----------
Server IP: 192.168.100.3
X-Forwarded-for: 192.168.100.4
If we stop one of the two backend web servers, the curl command should still work, directing requests to the other available web server.
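To exercise both the round robin rotation and the failover in one go, you can wrap the curl call in a small shell loop; this is a sketch using the load balancer address from this tutorial:

    $ for i in 1 2 3 4; do curl -s http://192.168.100.4/test.php; echo; done

While both backends are up, the reported server IP should alternate on each iteration.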
### Summary ###
By now you should have a fully operational load balancer that supplies your web nodes with requests in round robin mode. As always, feel free to experiment with the configuration to make it more suitable for your infrastructure. I hope this tutorial helped you to make your web projects more resistant and available.
As most of you have already noticed, this tutorial contains settings for only one load balancer, which means that we have just replaced one single point of failure with another. In real life scenarios you should deploy at least two or three load balancers to cover for any failures that might happen, but that is out of the scope of this tutorial right now.
If you have any questions or suggestions feel free to post them in the comments and I will do my best to answer or advise.
--------------------------------------------------------------------------------
via: http://xmodulo.com/haproxy-http-load-balancer-linux.html
作者:[Jaroslav Štěpánek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/jaroslav
[1]:http://www.haproxy.org/
[2]:http://www.haproxy.org/10g.html
[3]:http://xmodulo.com/how-to-install-lamp-server-on-ubuntu.html

View File

@ -1,77 +0,0 @@
Translating by SPccman
Quick systemd-nspawn guide
================================================================================
I switched to using systemd-nspawn in place of chroot and wanted to give a quick guide to using it. The short version is that I’d strongly recommend that anybody running systemd who uses chroot switch over - there really are no downsides as long as your kernel is properly configured.
Chroot should be no stranger to anybody who works on distros, and I suspect that the majority of Gentoo users have need for it from time to time.
### The Challenges of chroot ###
For most interactive uses it isn’t sufficient to just run chroot. Usually you need to mount /proc and /sys, and bind mount /dev so that you don’t have issues like missing ptys, etc. If you use tmpfs you might also want to mount the new /tmp and /var/tmp as tmpfs. Then you might want to make other bind mounts into the chroot. None of this is particularly difficult, but you usually end up writing a small script to manage it.
Now, I routinely do full backups, and usually that involves excluding stuff like tmp dirs, and anything resembling a bind mount. When I set up a new chroot that means updating my backup config, which I usually forget to do since most of the time the chroot mounts arent running anyway. Then when I do leave it mounted overnight I end up with backups consuming lots of extra space (bind mounts of large trees).
Finally, systemd now by default handles bind mounts a little differently when they contain other mount points (such as when using -rbind). Apparently unmounting something in the bind mount will cause systemd to unmount the corresponding directory on the other side of the bind. Imagine my surprise when I unmounted my chroot bind to /dev and discovered /dev/pts and /dev/shm no longer mounted on the host. It looks like there are ways to change that, but this isnt the point of my post (it just spurred me to find another way).
### Systemd-nspawns Advantages ###
Systemd-nspawn is a tool that launches a container, and it can operate just like chroot in its simplest form. By default it automatically sets up most of the overhead like /dev, /tmp, etc. With a few options it can also set up other bind mounts as well. When the container exits all the mounts are cleaned up.
From the outside of the container nothing appears different when the container is running. In fact, you could spawn 5 different systemd-nspawn container instances from the same chroot and they wouldnt have any interaction except via the filesystem (and that excludes /dev, /tmp, and so on - only changes in /usr, /etc will propagate across). Your backup wont see the bind mounts, or tmpfs, or anything else mounted within the container.
The container also has all those other nifty container benefits like containment - a killall inside the container wont touch anything outside, and so on. The security isnt airtight - the intent is to prevent accidental mistakes.
Then, if you use a compatible sysvinit (which includes systemd, and I think recent versions of openrc), you can actually boot the container, which drops you to a getty inside. That means you can use fstab to do additional mounts inside the container, run daemons, and so on. You get almost all the benefits of virtualization for the cost of a chroot (no need to build a kernel, and so on). It is a bit odd to be running systemctl poweroff inside what looks just like a chroot, but it works.
Note that unless you do a bit more setup you will share the same network interface with the host, so don’t run sshd in the container if you have it running on the host, etc. I won’t get into this in detail, but it shouldn’t be hard to run a separate network namespace and bind the interfaces so that the new instance can run dhcp.
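As a minimal sketch, newer systemd versions can set this up for you: the --network-veth option (assuming your systemd is recent enough to include it) gives the container a private virtual Ethernet link to the host:

    systemd-nspawn -D . --network-veth -b

You still have to configure addressing on both ends of the veth pair yourself.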
### How to do it ###
So, getting it actually working will likely be the shortest bit in this post.
You need support for namespaces and multiple devpts instances in your kernel:
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_DEVPTS_MULTIPLE_INSTANCES=y
From there launching a namespace just like a chroot is really simple:
systemd-nspawn -D .
That’s it - you can exit from it just like a chroot. From inside you can run mount and see that it has taken care of /dev and /tmp for you. The “.” is the path to the chroot, which I assume is the current directory. With nothing further it runs bash inside.
If you want to add some bind mounts it is easy:
systemd-nspawn -D . --bind /usr/portage
Now your /usr/portage is bound to your host, so there is no need to sync, etc. If you want to bind to a different destination add a “:dest” after the source, relative to the root of the chroot (so --bind foo is the same as --bind foo:foo).
If the container has a functional init that can handle being run inside, you can add a -b to boot it:
systemd-nspawn -D . --bind /usr/portage -b
Watch the init do its job. Shut down the container to exit.
Now, if that container is running systemd you can direct its journal to the host journal with -j:
systemd-nspawn -D . --bind /usr/portage -j -b
Now, nspawn registers the container so that it shows up in machinectl. That makes it easy to launch a new getty on it, or ssh to it (if it is running ssh - see my note above about network namespaces), or power it off from the host.
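For example (a sketch; the machine name defaults to the directory name you spawned from, and the available machinectl subcommands vary with your systemd version):

    machinectl list
    machinectl login mycontainer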
That’s it. If you’re running systemd I’d suggest ditching chroot almost entirely in favor of nspawn.
--------------------------------------------------------------------------------
via: http://rich0gentoo.wordpress.com/2014/07/14/quick-systemd-nspawn-guide/
作者:[rich0][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://rich0gentoo.wordpress.com/

View File

@ -1,203 +0,0 @@
Translating by ZTinoZ
How to Install Bugzilla 4.4 on Ubuntu / CentOS 6.x
================================================================================
Here, we are going to show you how to install Bugzilla on Ubuntu 14.04 or CentOS 6.5/7. Bugzilla is free and open source software (FOSS): a web-based bug tracking tool used to log and track a defect database. Its bug-tracking features allow individual developers or groups of developers to effectively keep track of outstanding problems with their product. Despite being "free", Bugzilla has many features its expensive counterparts lack. Consequently, Bugzilla has quickly become a favorite of thousands of organizations across the globe.
Bugzilla is very adaptable to various situations. Nowadays it is used in IT support queues, systems administration deployment management, chip design and development problem tracking (both pre- and post-fabrication), and software and hardware bug tracking for luminaries such as Red Hat, NASA, Linux-Mandrake, and VA Systems.
### 1. Installing dependencies ###
Setting up Bugzilla is fairly **easy**. This post is specific to Ubuntu 14.04 and CentOS 6.5 (though it might work with older versions too).
In order to get Bugzilla up and running on Ubuntu or CentOS, we are going to install the Apache web server (SSL enabled), the MySQL database server, and some tools that are required to install and configure Bugzilla.
To install Bugzilla in your server, you'll need to have the following components installed:
- Perl (5.8.1 or above)
- MySQL
- Apache2
- Bugzilla
- Perl modules
- Bugzilla running under Apache
As mentioned, this article explains the installation on both Ubuntu 14.04 and CentOS 6.5/7, so we will have two different sections for them.
Here are the steps you need to follow to set up Bugzilla on your Ubuntu 14.04 LTS or CentOS 7 system:
**Preparing the required dependency packages:**
You need to install the essential packages by running the following command:
**For Ubuntu:**
$ sudo apt-get install apache2 mysql-server libapache2-mod-perl2
libapache2-mod-perl2-dev libapache2-mod-perl2-doc perl postfix make gcc g++
**For CentOS:**
$ sudo yum install httpd mod_ssl mysql-server mysql php-mysql gcc perl* mod_perl-devel
**Note: Please run all the commands in a shell or terminal and make sure you have root access (sudo) on the machine.**
### 2. Running Apache server ###
As we have already installed the Apache server in the step above, we now need to configure it and run it. We'll need sudo or root access to get all the commands working, so we'll switch to root:
$ sudo -s
Now, we need to open port 80 in the firewall and need to save the changes.
# iptables -I INPUT -p tcp --dport 80 -j ACCEPT
# service iptables save
Now, we need to run the service:
For CentOS:
# service httpd start
Let's make sure that Apache will restart every time you restart the machine:
# /sbin/chkconfig httpd on
For Ubuntu:
# service apache2 start
Now that we have started our Apache HTTP server, we will be able to reach it at the IP address 127.0.0.1 by default.
### 3. Configuring MySQL Server ###
Now, we need to start our MySQL server:
For CentOS:
# chkconfig mysqld on
# service mysqld start
For Ubuntu:
# service mysql start
![mysql](http://blog.linoxide.com/wp-content/uploads/2014/12/mysql.png)
Log in to MySQL with root access and create a database for Bugzilla. Change “mypassword” to anything you want for your MySQL password. You will need it later when configuring Bugzilla too.
For both CentOS 6.5 and Ubuntu 14.04 Trusty:
# mysql -u root -p
Enter password: (you'll need to enter your password)
mysql> create database bugs;
mysql> grant all on bugs.* to root@localhost identified by "mypassword";
mysql> quit
**Note: Please remember the database name and the MySQL password; we'll need them later.**
### 4. Installing and configuring Bugzilla ###
Now, as we have all the required packages set up and running, we'll configure our Bugzilla.
First we'll download the latest Bugzilla package; here I am downloading version 4.5.2.
To download using wget in a shell or terminal:
wget http://ftp.mozilla.org/pub/mozilla.org/webtools/bugzilla-4.5.2.tar.gz
You can also download it from the official site, i.e. [http://www.bugzilla.org/download/][1]
**Extracting and renaming the downloaded bugzilla tarball:**
# tar zxvf bugzilla-4.5.2.tar.gz -C /var/www/html/
# cd /var/www/html/
# mv -v bugzilla-4.5.2 bugzilla
**Note**: Here, **/var/www/html/bugzilla/** is the directory where we're going to **host Bugzilla**.
Now, we'll configure Bugzilla:
# cd /var/www/html/bugzilla/
# ./checksetup.pl --check-modules
![bugzilla-check-module](http://blog.linoxide.com/wp-content/uploads/2014/12/bugzilla2-300x198.png)
After the check is done, we will see some missing modules that need to be installed. They can be installed with the command below:
# cd /var/www/html/bugzilla
# perl install-module.pl --all
This will take a bit of time to download and install all dependencies. Run the **checksetup.pl --check-modules** command again to verify there is nothing left to install.
Now we'll need to run the below command, which will automatically generate a file called “localconfig” in the /var/www/html/bugzilla directory.
# ./checksetup.pl
Make sure you input the correct database name, user, and password we created earlier into the localconfig file, then run checksetup.pl again:
# nano ./localconfig
# ./checksetup.pl
![bugzilla-success](http://blog.linoxide.com/wp-content/uploads/2014/12/bugzilla-success.png)
If all is well, checksetup.pl should now successfully configure Bugzilla.
Now we need to add Bugzilla to our Apache config file, so we'll need to open /etc/httpd/conf/httpd.conf (for CentOS) or /etc/apache2/apache2.conf (for Ubuntu) with a text editor:
For CentOS:
# nano /etc/httpd/conf/httpd.conf
For Ubuntu:
# nano /etc/apache2/apache2.conf
Now, to configure the Apache server, we'll need to add the following configuration to the config file:
<VirtualHost *:80>
    DocumentRoot /var/www/html/bugzilla/
</VirtualHost>

<Directory /var/www/html/bugzilla>
    AddHandler cgi-script .cgi
    Options +Indexes +ExecCGI
    DirectoryIndex index.cgi
    AllowOverride Limit FileInfo Indexes
</Directory>
Lastly, we need to edit the .htaccess file and comment out the “Options -Indexes” line at the top by adding “#”.
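After the edit, the top of /var/www/html/bugzilla/.htaccess should contain the line in its commented form, roughly like this sketch:

    # Options -Indexes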
Let's restart our Apache server and test our installation.
For CentOS:
# service httpd restart
For Ubuntu:
# service apache2 restart
![bugzilla-install-success](http://blog.linoxide.com/wp-content/uploads/2014/12/bugzilla_apache.png)
Finally, our Bugzilla is ready to receive bug reports on our Ubuntu 14.04 LTS or CentOS 6.5 system, and you can browse to Bugzilla by going to the localhost page (i.e., 127.0.0.1) or to your IP address in your web browser.
--------------------------------------------------------------------------------
via: http://linoxide.com/tools/install-bugzilla-ubuntu-centos/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://www.bugzilla.org/download/

View File

@ -1,3 +1,5 @@
Translating by 小眼儿
Docker Image Insecurity
================================================================================
Recently while downloading an “official” container image with Docker I saw this line:
@ -129,4 +131,4 @@ via: https://titanous.com/posts/docker-insecurity
[24]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9356
[25]:https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9357
[26]:https://groups.google.com/d/topic/docker-user/nFAz-B-n4Bw/discussion
[27]:https://docs.google.com/document/d/1JfWNzfwptsMgSx82QyWH_Aj0DRKyZKxYQ1aursxNorg/edit?pli=1
[27]:https://docs.google.com/document/d/1JfWNzfwptsMgSx82QyWH_Aj0DRKyZKxYQ1aursxNorg/edit?pli=1

View File

@ -0,0 +1,76 @@
How to deduplicate files on Linux with dupeGuru
================================================================================
Recently, I was given the task of cleaning up my father's files and folders. What made it difficult was the abnormal amount of duplicate files with incorrect names. By keeping a backup on an external drive, simultaneously editing multiple versions of the same file, or even changing the directory structure, the same file can get copied many times, change names, change locations, and just clog disk space. Hunting down every single one of them can become a problem of gigantic proportions. Fortunately, there exists a nice little piece of software that can save your precious hours by finding and removing duplicate files on your system: [dupeGuru][1]. Written in Python, this file deduplication software switched to a GPLv3 license a few hours ago. So it is time to apply your new year's resolutions and clean up your stuff!
### Installation of dupeGuru ###
On Ubuntu, you can add the Hardcoded Software PPA:
$ sudo apt-add-repository ppa:hsoft/ppa
$ sudo apt-get update
And then install with:
$ sudo apt-get install dupeguru-se
On Arch Linux, the package is present in the [AUR][2].
If you prefer compiling it yourself, the sources are on [GitHub][3].
### Basic Usage of dupeGuru ###
DupeGuru is conceived to be fast and safe, which means that the program is not going to run berserk on your system. It has a very low risk of deleting stuff that you did not intend to delete. However, as we are still talking about file deletion, it is always a good idea to stay vigilant and cautious: a good backup is always necessary.
Once you have taken your precautions, you can launch dupeGuru via the command:
$ dupeguru_se
You should be greeted by the folder selection screen, where you can add the folders to scan for duplicates.
![](https://farm9.staticflickr.com/8596/16199976251_f78b042fba.jpg)
Once you have selected your directories and launched the scan, dupeGuru will show its results by grouping duplicate files together in a list.
![](https://farm9.staticflickr.com/8600/16016041367_5ab2834efb_z.jpg)
Note that by default dupeGuru matches files based on their content, not their name. To be sure that you do not accidentally delete something important, the match column shows you the accuracy of the matching algorithm. From there, you can select the duplicate files that you want to take action on, and click on the "Actions" button to see the available actions.
![](https://farm8.staticflickr.com/7516/16199976361_c8f919b06e_b.jpg)
The choice of actions is quite extensive. In short, you can delete the duplicates, move them to another location, ignore them, open them, rename them, or even invoke a custom command on them. If you choose to delete a duplicate, you might be as pleasantly surprised as I was by the available deletion options.
![](https://farm8.staticflickr.com/7503/16014366568_54f70e3140.jpg)
You can not only send the duplicate files to the trash or delete them permanently, but you can also choose to leave a link to the original file (either a symlink or a hardlink). In other words, the duplicates will be erased, and a link to the original will be left instead, saving a lot of disk space. This can be particularly useful if you imported those files into a workspace, or have dependencies based on them.
Another fancy option: you can export the results to an HTML or CSV file. Not really sure why you would do that, but I suppose it can be useful if you prefer keeping track of duplicates rather than using any of dupeGuru's actions on them.
Finally, last but not least, the preferences menu will make all your dreams about duplicate busting come true.
![](https://farm8.staticflickr.com/7493/16015755749_a9f343b943_z.jpg)
There you can select the criterion for the scan, either content based or name based, and a threshold for duplicates to control the number of results. It is also possible to define a custom command that you can select in the actions. Among the myriad of other little options, it is good to note that, by default, dupeGuru ignores files smaller than 10 KB.
For more information, I suggest that you go check out the [official website][4], which is filled with documentation, support forums, and other goodies.
To conclude, dupeGuru is my go-to software whenever I have to prepare a backup or free some space. I find it powerful enough for advanced users, and yet intuitive to use for newcomers. Cherry on the cake: dupeGuru is cross-platform, which means that you can also use it on your Mac or Windows PC. If you have specific needs and want to clean up music or image files, there exist two variations: [dupeguru-me][5] and [dupeguru-pe][6], which find duplicate audio tracks and pictures, respectively. The main difference from the regular version is that they compare beyond file formats and take into account specific media metadata like quality and bit rate.
What do you think of dupeGuru? Would you consider using it? Or do you have any alternative deduplication software to suggest? Let us know in the comments.
--------------------------------------------------------------------------------
via: http://xmodulo.com/dupeguru-deduplicate-files-linux.html
作者:[Adrien Brochard][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://www.hardcoded.net/dupeguru/
[2]:https://aur.archlinux.org/packages/dupeguru-se/
[3]:https://github.com/hsoft/dupeguru
[4]:http://www.hardcoded.net/dupeguru/
[5]:http://www.hardcoded.net/dupeguru_me/
[6]:http://www.hardcoded.net/dupeguru_pe/

View File

@ -0,0 +1,339 @@
Managing Linux server configs with the SaltStack
================================================================================
![](http://techarena51.com/wp-content/uploads/2015/01/SaltStack+logo+-+black+on+white.png)
I came across Salt while searching for an alternative to [Puppet][1]. I like Puppet, but I am falling in love with Salt :). This may be a personal opinion, but I found Salt easier to configure and get started with compared to Puppet. Another reason I like Salt is that it lets you manage your server configurations from the command line. For example:
To update all your servers with Salt, just run:
salt '*' pkg.upgrade
**Installing the SaltStack on Linux.**
Salt is available in the EPEL repo if you are installing it on CentOS 6/7; Pi and Ubuntu Linux users can add the Salt repository from [here][2]. Since Salt is Python based, you can also use pip to install it, but you have to take care of dependencies like yum-utils and other packages yourself.
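As a sketch, a pip-based install would look like this (assuming pip is already present; salt is the package name on PyPI):

    $ sudo pip install salt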
Salt follows a server-client model: the server is known as the master, whereas the clients are called minions.
**Installation and Configuration of a Salt Master**
[root@salt-master~]# yum install salt-master
Salt configuration files are stored in /etc/salt and /srv/salt. Salt is good to go out of the box, but I would recommend you configure a bit more verbose logging to help you troubleshoot:
[root@salt-master ~]# vim /etc/salt/master
# Default is warning; change to the following
log_level: debug
log_level_logfile: debug
[root@salt-master ~]# systemctl start salt-master
**Installation and Configuration of a Salt minion**
[root@salt-minion~]#yum install salt-minion
#Add the hostname of your Salt Master
[root@salt-minion~]#vim /etc/salt/minion
master: salt-master.com
#start the minion
[root@salt-minion~] systemctl start salt-minion
On startup, a minion will generate a cryptographic key and an ID. It will then connect to the Salt master and identify itself. The Salt master must accept the minion's key before allowing the minion to download a configuration.
**Listing and Accepting keys on the Salt Master**
#List all keys
[root@salt-master~] salt-key -L
Accepted Keys:
Unaccepted Keys:
minion.com
Rejected Keys:
#Accept key with id minion.com
[root@salt-master~]salt-key -a minion.com
[root@salt-master~] salt-key -L
Accepted Keys:
minion.com
Unaccepted Keys:
Rejected Keys:
Once you have accepted a minion's key, you can get information on it immediately using the salt command.
**Salt command line examples**
#Check if a minion is up and running
[root@salt-master~] salt 'minion.com' test.ping
minion.com:
True
# run shell commands on the minion
[root@salt-master~]# salt 'minion.com' cmd.run 'ls -l'
minion.com:
total 2988
-rw-r--r--. 1 root root 1024 Jul 31 08:24 1g.img
-rw-------. 1 root root 940 Jul 14 15:04 anaconda-ks.cfg
-rw-r--r--. 1 root root 1024 Aug 14 17:21 test
#install/update a software on all your servers
[root@salt-master ~]# salt '*' pkg.install git
The salt command needs a few components to send information. One of these components is the minion id and another is the function to be called on the minion.
In the first example I used the ping function of the test module to check if the system is up. This function does not perform an actual ICMP ping; it just returns true if the minion responds.
cmd.run is used to execute remote commands, and the pkg module contains functions for package management. The full list of built-in modules is at the end of this post.
**Grains example**
Salt uses an interface called **Grains** to get system information. You can use grains to run commands on systems with particular properties.
[root@vps4544 ~]# salt -G 'os:Centos' test.ping
minion:
True
More grain examples are available at http://docs.saltstack.com/en/latest/topics/targeting/grains.html
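As another sketch, grains can also target a whole OS family rather than a single distribution (assuming the os_family grain described in the linked documentation):

    [root@salt-master ~]# salt -G 'os_family:RedHat' test.ping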
**Package Management via the State File System.**
In order to automate software configuration you will need to use the state system and create state files. These files use the YAML format, with Python dictionaries, lists, strings, and numbers as data structures. Reading up on them will help you understand the configurations better.
**VIM state file example**
[root@salt-master~]# vim /srv/salt/vim.sls
vim-enhanced:
  pkg.installed

/etc/vimrc:
  file.managed:
    - source: salt://vimrc
    - user: root
    - group: root
    - mode: 644
The first and third lines in this file are called state IDs. They must contain the exact name or path of the package or file to be managed. After the state IDs come the state and function declarations: pkg and file are state declarations, whereas installed and managed are function declarations. Functions accept arguments: user, group, mode, and source are all arguments to the function managed.
To apply this configuration to a minion, move your vimrc file to /srv/salt and run:
[root@salt-master~]# salt 'minion.com' state.sls vim
minion.com:
----------
ID: vim-enhanced
Function: pkg.installed
Result: True
Comment: The following packages were installed/updated: vim-enhanced.
Started: 09:36:23.438571
Duration: 94045.954 ms
Changes:
----------
vim-enhanced:
----------
new:
7.4.160-1.el7
old:
Summary
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
You can also add dependencies to your configurations.
[root@salt-master~]# vim /srv/salt/ssh.sls
openssh-server:
  pkg.installed

/etc/ssh/sshd_config:
  file.managed:
    - user: root
    - group: root
    - mode: 600
    - source: salt://ssh/sshd_config

sshd:
  service.running:
    - require:
      - pkg: openssh-server
The require statement here is a requisite declaration; it creates a dependency between the service and pkg states. This declaration will first check that the package is installed and then run the service.
However, I prefer using the watch statement, as it also checks for file modifications and restarts the service.
[root@salt-master~]# vim /srv/salt/ssh.sls
openssh-server:
  pkg.installed

/etc/ssh/sshd_config:
  file.managed:
    - user: root
    - group: root
    - mode: 600
    - source: salt://sshd_config

sshd:
  service.running:
    - watch:
      - pkg: openssh-server
      - file: /etc/ssh/sshd_config
[root@vps4544 ssh]# salt 'minion.com' state.sls ssh
seven.leog.in:
Changes:
----------
ID: openssh-server
Function: pkg.installed
Result: True
Comment: Package openssh-server is already installed.
Started: 13:01:55.824367
Duration: 1.156 ms
Changes:
----------
ID: /etc/ssh/sshd_config
Function: file.managed
Result: True
Comment: File /etc/ssh/sshd_config updated
Started: 13:01:55.825731
Duration: 334.539 ms
Changes:
----------
diff:
---
+++
@@ -14,7 +14,7 @@
# SELinux about this change.
# semanage port -a -t ssh_port_t -p tcp #PORTNUMBER
#
-Port 22
+Port 422
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
----------
ID: sshd
Function: service.running
Result: True
Comment: Service restarted
Started: 13:01:56.473121
Duration: 407.214 ms
Changes:
----------
sshd:
True
Summary
------------
Succeeded: 4 (changed=2)
Failed: 0
------------
Total states run: 4
Maintaining all config files in a single directory can make scaling a complex task, hence you can create sub-directories and add your configuration in them with an init.sls file:
[root@salt-master~]# mkdir /srv/salt/ssh
[root@salt-master~]# vim /srv/salt/ssh/init.sls
openssh-server:
  pkg.installed

/etc/ssh/sshd_config:
  file.managed:
    - user: root
    - group: root
    - mode: 600
    - source: salt://ssh/sshd_config

sshd:
  service.running:
    - watch:
      - pkg: openssh-server
      - file: /etc/ssh/sshd_config
[root@vps4544 ssh]# cp /etc/ssh/sshd_config /srv/salt/ssh/
[root@vps4544 ssh]# salt 'minion.com' state.sls ssh
**Top File and Environments.**
A top file (top.sls) is where you define your environments. A top file allows you to map minions to state configurations. The default environment is base. You need to define which packages will be installed on which server under the base environment.
If there are multiple environments and more than one definition applies to a particular minion, then by default the base environment will supersede the others.
To define an environment, you need to add it to the file_roots directive in the master configuration file:
[root@salt-master ~]# vim /etc/salt/master
file_roots:
  base:
    - /srv/salt
  dev:
    - /srv/salt/dev
Now add a top.sls file in /srv/salt
[root@salt-master ~]# vim /srv/salt/top.sls
base:
  '*':
    - vim
  'minion.com':
    - ssh
Apply the top file configuration with
[root@salt-master~]# salt '*' state.highstate
minion.com:
----------
ID: vim-enhanced
Function: pkg.installed
Result: True
Comment: Package vim-enhanced is already installed.
Started: 13:10:55
Duration: 1678.779 ms
Changes:
----------
ID: openssh-server
Function: pkg.installed
Result: True
Comment: Package openssh-server is already installed.
Started: 13:10:55.
Duration: 2.156 ms
Each minion will download the top file and search for its own configuration; this way a single command applies the configuration to all minions at once.
This is just a brief introduction to Salt. For an in-depth understanding you can go through the links below, and if you are already using Salt and have any recommendations, do let me know.
Update: [Foreman][3] has support for salt via [plugins][4].
Read
- http://docs.saltstack.com/en/latest/ref/states/top.html#how-top-files-are-compiled
- http://docs.saltstack.com/en/latest/topics/tutorials/states_pt1.html
- http://docs.saltstack.com/en/latest/ref/states/highstate.html#state-declaration
Grains
- http://docs.saltstack.com/en/latest/topics/targeting/grains.html
Good comparison of Salt and Puppet
- https://mywushublog.com/2013/03/configuration-management-with-salt-stack/
Full list of built-in execution modules
- http://docs.saltstack.com/en/latest/ref/modules/all/
--------------------------------------------------------------------------------
via: http://techarena51.com/index.php/getting-started-with-saltstack/
作者:[Leo G][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://techarena51.com/
[1]:http://techarena51.com/index.php/a-simple-way-to-install-and-configure-a-puppet-server-on-linux/
[2]:http://docs.saltstack.com/en/latest/topics/installation/index.html
[3]:http://techarena51.com/index.php/using-foreman-opensource-frontend-puppet/
[4]:https://github.com/theforeman/foreman_salt/wiki

View File

@ -0,0 +1,71 @@
How to Install SSL on Apache 2.4 in Ubuntu 14.0.4
================================================================================
Today I will show you how to install an **SSL certificate** on your personal website or blog, to help secure the communications between your visitors and your website.
Secure Sockets Layer, or SSL, is the standard security technology for creating an encrypted connection between a web server and a web browser. It ensures that all data passed between the web server and the web browser remains private and secure. It is used by millions of websites to protect their online communications with their customers. In order to be able to establish an SSL connection, a web server requires an SSL certificate.
You can create your own SSL certificate, but it will not be trusted by default in web browsers. To fix this, you will have to buy a digital certificate from a trusted Certification Authority (CA). Below we will show you how to get the certificate and install it in Apache.
### Generating a Certificate Signing Request ###
The Certification Authority (CA) will ask you for a Certificate Signing Request (CSR) generated on your web server. This is a simple step and only takes a minute; you will have to run the following command and input the requested information:
# openssl req -new -newkey rsa:2048 -nodes -keyout yourdomainname.key -out yourdomainname.csr
The output should look something like this:
![generate csr](http://blog.linoxide.com/wp-content/uploads/2015/01/generate-csr.jpg)
This generates two files with OpenSSL: the private key file for the decryption of your SSL certificate, and a certificate signing request (CSR) file, used to apply for your SSL certificate.
Depending on the authority you apply to, you will either have to upload your CSR file or paste its content into a web form.
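To paste the content, you can simply print the file to the terminal and copy it from there, for example (a sketch using the file name generated above):

    # cat yourdomainname.csr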
### Installing the actual certificate in Apache ###
After the generation process is finished you will receive your new digital certificate; for this article we used [Comodo SSL][1] and received the certificate in a zip file. To use it in Apache, you will first have to create a bundle of the certificates you received in the zip file with the following command:
# cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > bundle.crt
![bundle](http://blog.linoxide.com/wp-content/uploads/2015/01/bundle.jpg)
Now make sure that the ssl module is loaded in Apache by running the following command:
# a2enmod ssl
If you get the message "Module ssl already enabled" you are OK; if you get the message "Enabling module ssl." you will also have to run the following command to restart Apache:
# service apache2 restart
Finally, modify your virtual host file (generally found in /etc/apache2/sites-enabled) to look something like this:

<VirtualHost *:443>
    DocumentRoot /var/www/html/
    ServerName linoxide.com
    SSLEngine on
    SSLCertificateFile /usr/local/ssl/crt/yourdomainname.crt
    SSLCertificateKeyFile /usr/local/ssl/yourdomainname.key
    SSLCACertificateFile /usr/local/ssl/bundle.crt
</VirtualHost>
You should now be able to access your website using https://YOURDOMAIN/ (be careful to use 'https', not 'http') and see the SSL in action (generally indicated by a lock icon in your web browser).
**NOTE:** All the links must now point to https. If some of the content on the website (like images or CSS files) still points to http links, you will get a warning in the browser; to fix this, you have to make sure that every link points to https.
### Redirect HTTP requests to HTTPS version of your website ###
If you wish to redirect normal HTTP requests to the HTTPS version of your website, add the following text either to the virtual host you wish to apply it to, or to the apache.conf if you wish to apply it to all websites hosted on the server:
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
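You can verify the redirect from the command line; this is a sketch with curl, assuming your domain already resolves to the server:

    $ curl -I http://YOURDOMAIN/

The response should carry a redirect status code and a Location: header pointing at the https:// URL.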
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/install-ssl-apache-2-4-in-ubuntu/
作者:[Adrian Dinu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/adriand/
[1]:https://ssl.comodo.com/

View File

@ -0,0 +1,129 @@
How to Install Scrapy a Web Crawling Tool in Ubuntu 14.04 LTS
================================================================================
Scrapy is open source software used for extracting data from websites. The Scrapy framework is developed in Python, and it performs the crawling job in a fast, simple, and extensible way. We have created a virtual machine (VM) in VirtualBox with Ubuntu 14.04 LTS installed on it.
### Install Scrapy ###
Scrapy depends on Python, development libraries, and the pip software. The latest version of Python is pre-installed on Ubuntu, so we only have to install pip and the Python developer libraries before installing Scrapy.
Pip is the replacement for easy_install as the Python package installer. It is used for the installation and management of Python packages. Installation of the pip package is shown in Figure 1:
sudo apt-get install python-pip
![Fig:1 Pip installation](http://blog.linoxide.com/wp-content/uploads/2014/11/f1.png)
Fig:1 Pip installation
We have to install the Python development libraries using the following command. If this package is not installed, the installation of the Scrapy framework will generate an error about the python.h header file.
sudo apt-get install python-dev
![Fig:2 Python Developer Libraries](http://blog.linoxide.com/wp-content/uploads/2014/11/f2.png)
Fig:2 Python Developer Libraries
The Scrapy framework can be installed either from a deb package or from source code. We installed it via pip (the Python package manager), as shown in Figure 3.
sudo pip install scrapy
![Fig:3 Scrapy Installation](http://blog.linoxide.com/wp-content/uploads/2014/11/f3.png)
Fig:3 Scrapy Installation
A successful Scrapy installation takes some time, as shown in Figure 4.
![Fig:4 Successful installation of Scrapy Framework](http://blog.linoxide.com/wp-content/uploads/2014/11/f4.png)
Fig:4 Successful installation of Scrapy Framework
### Data extraction using Scrapy framework ###
**(Basic Tutorial)**
We will use Scrapy to extract the store names (those providing Cards) from the fatwallet.com web site. First of all, we created a new Scrapy project "store_name" using the command given below, as shown in Figure 5:
$sudo scrapy startproject store_name
![Fig:5 Creation of new project in Scrapy Framework](http://blog.linoxide.com/wp-content/uploads/2014/11/f5.png)
Fig:5 Creation of new project in Scrapy Framework
The above command creates a directory with the title "store_name" at the current path. This main directory of the project contains the files/folders shown in the following Figure 6:
$ sudo ls -lR store_name
![Fig:6 Contents of store_name project.](http://blog.linoxide.com/wp-content/uploads/2014/11/f6.png)
Fig:6 Contents of store_name project.
A brief description of each file/folder is given below:
- scrapy.cfg is the project configuration file
- store_name/ is another directory inside the main directory. This directory contains python code of the project.
- store_name/items.py contains those items which will be extracted by the spider.
- store_name/pipelines.py is the pipelines file.
- Settings for the store_name project are in the store_name/settings.py file.
- and the store_name/spiders/ directory contains the spiders for crawling.
As we are interested in extracting the store names of the Cards from the fatwallet.com site, we updated the contents of the store_name/items.py file as shown below.
import scrapy

class StoreNameItem(scrapy.Item):
    name = scrapy.Field()   # extract the names of Cards store
After this, we have to write a new spider under the store_name/spiders/ directory of the project. A spider is a Python class which consists of the following mandatory attributes:
1. The name of the spider (name)
1. The starting URL of the spider for crawling (start_urls)
1. And a parse method, which consists of regex for the extraction of desired items from the page response. The parse method is the most important part of a spider.
We created the spider "store_name.py" under the store_name/spiders/ directory and added the following Python code to extract the store names from the fatwallet.com site. The output of the spider is written to the file (**StoreName.txt**) shown in Figure 7.
from scrapy.selector import Selector
from scrapy.spider import BaseSpider
from scrapy.http import Request
from scrapy.http import FormRequest
import re

class StoreNameItem(BaseSpider):
    name = "storename"
    allowed_domains = ["fatwallet.com"]
    start_urls = ["http://fatwallet.com/cash-back-shopping/"]

    def parse(self, response):
        output = open('StoreName.txt', 'w')
        resp = Selector(response)
        # grab every store listing row, including the alternating row classes
        tags = resp.xpath('//tr[@class="storeListRow"]|\
            //tr[@class="storeListRow even"]|\
            //tr[@class="storeListRow even last"]|\
            //tr[@class="storeListRow last"]').extract()
        for i in tags:
            i = i.encode('utf-8', 'ignore').strip()
            store_name = ''
            if re.search(r"class=\"storeListStoreName\">.*?<", i, re.I | re.S):
                store_name = re.search(r"class=\"storeListStoreName\">.*?<", i, re.I | re.S).group()
                store_name = re.search(r">.*?<", store_name, re.I | re.S).group()
                # strip the surrounding ">" and "<" left over from the match
                store_name = re.sub(r'>', "", re.sub(r'<', "", store_name))
                # decode HTML-escaped ampersands
                store_name = re.sub(r'&amp;', "&", store_name)
                output.write(store_name + "\n")
![Fig:7 Output of the Spider code .](http://blog.linoxide.com/wp-content/uploads/2014/11/f7.png)
Fig:7 Output of the spider code.
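The article does not show the run step, but assuming the standard Scrapy workflow, the spider would be launched from inside the project directory by the name attribute defined above (a sketch):

    $ cd store_name
    $ scrapy crawl storename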
*NOTE: The purpose of this tutorial is only to aid understanding of the Scrapy framework.*
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/scrapy-install-ubuntu/
作者:[nido][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/naveeda/

View File

@ -0,0 +1,136 @@
Interface (NICs) Bonding in Linux using nmcli
================================================================================
Today, we'll learn how to perform interface (NIC) bonding on CentOS 7.x using nmcli (the Network Manager Command Line Interface).
NIC (interface) bonding is a method of linking **NICs** together logically to allow fail-over or higher throughput. One of the ways to increase the network availability of a server is by using multiple network interfaces. The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical bonded interface. Teaming, used below, is a newer implementation that does not affect the older bonding driver in the Linux kernel; it offers an alternate implementation.
**NIC bonding is done to provide two main benefits for us:**
1. **High bandwidth**
1. **Redundancy/resilience**
Now let's configure NIC bonding on CentOS 7. First, we need to decide which interfaces we would like to configure as a team interface.
Run the **ip link** command to check the available interfaces in the system:
$ ip link
![ip link](http://blog.linoxide.com/wp-content/uploads/2015/01/ip-link.png)
Here we are using **eno16777736** and **eno33554960** NICs to create a team interface in **activebackup** mode.
Use the **nmcli** command to create a connection for the network team interface, with the following syntax:
# nmcli con add type team con-name CNAME ifname INAME [config JSON]
where **CNAME** is the name used to refer to the connection, **INAME** is the interface name, and **JSON** (JavaScript Object Notation) specifies the runner to be used. **JSON** has the following syntax:
'{"runner":{"name":"METHOD"}}'
where **METHOD** is one of the following: **broadcast, activebackup, roundrobin, loadbalance** or **lacp**.
### 1. Creating Team Interface ###
Now let us create the team interface. Here is the command we used to create the team interface:
# nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"activebackup"}}'
![nmcli con create](http://blog.linoxide.com/wp-content/uploads/2015/01/nmcli-con-create.png)
Run the **nmcli con show** command to verify the team configuration:
# nmcli con show
![Show Teamed Interace](http://blog.linoxide.com/wp-content/uploads/2015/01/show-team-interface.png)
### 2. Adding Slave Devices ###
Now let's add the slave devices to the master team0. Here is the syntax for adding the slave devices:
# nmcli con add type team-slave con-name CNAME ifname INAME master TEAM
Here we are adding **eno16777736** and **eno33554960** as slave devices for **team0** interface.
# nmcli con add type team-slave con-name team0-port1 ifname eno16777736 master team0
# nmcli con add type team-slave con-name team0-port2 ifname eno33554960 master team0
![adding slave devices to team](http://blog.linoxide.com/wp-content/uploads/2015/01/adding-to-team.png)
Verify the connection configuration using **nmcli con show** again; now we can see the slave configuration:
# nmcli con show
![show slave config](http://blog.linoxide.com/wp-content/uploads/2015/01/show-slave-config.png)
### 3. Assigning IP Address ###
All the above commands create the required configuration files under **/etc/sysconfig/network-scripts/**.
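As a sketch, you can list them to see what was generated; the exact file names follow the connection names used above:

    # ls /etc/sysconfig/network-scripts/ifcfg-team0*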
Let's assign an IP address to this team0 interface and enable the connection now. Here is the command to perform the IP assignment:
# nmcli con mod team0 ipv4.addresses "192.168.1.24/24 192.168.1.1"
# nmcli con mod team0 ipv4.method manual
# nmcli con up team0
![ip assignment](http://blog.linoxide.com/wp-content/uploads/2015/01/ip-assignment.png)
### 4. Verifying the Bonding ###
Verify the IP address information with the **ip addr show team0** command.
    # ip addr show team0
![verfiy ip address](http://blog.linoxide.com/wp-content/uploads/2015/01/verfiy-ip-adress.png)
Now let's check the **activebackup** configuration functionality using the **teamdctl** command.
# teamdctl team0 state
![teamdctl active backup check](http://blog.linoxide.com/wp-content/uploads/2015/01/teamdctl-activebackup-check.png)
Now let's disconnect the active port and check the state again, to confirm whether the active backup configuration is working as expected.
# nmcli dev dis eno33554960
![disconnect activeport](http://blog.linoxide.com/wp-content/uploads/2015/01/disconnect-activeport.png)
We disconnected the active port; now check the state again using **teamdctl team0 state**.
# teamdctl team0 state
![teamdctl check activeport disconnect](http://blog.linoxide.com/wp-content/uploads/2015/01/teamdctl-check-activeport-disconnect.png)
Yes, it's working fine! We will now connect the disconnected port back to team0 using the following command.
    # nmcli dev con eno33554960
![nmcli dev connect disconected](http://blog.linoxide.com/wp-content/uploads/2015/01/nmcli-dev-connect-disconected.png)
We have one more command called **teamnl**. Let us try a few of its options.
To check the ports in team0, run the following command.
# teamnl team0 ports
![teamnl check ports](http://blog.linoxide.com/wp-content/uploads/2015/01/teamnl-check-ports.png)
Display currently active port of **team0**.
# teamnl team0 getoption activeport
![display active port team0](http://blog.linoxide.com/wp-content/uploads/2015/01/display-active-port-team0.png)
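**teamnl** can also change runtime options with its **setoption** subcommand. A hedged sketch of forcing a different active port (the index 3 below is hypothetical; look up the real ifindex of the desired port with **ip link**):

    # Set the active port of team0 by its interface index (3 is an example value)
    teamnl team0 setoption activeport 3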
Hurray, we have successfully configured NIC bonding :-) Please share any feedback.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-command/interface-nics-bonding-linux/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/

View File

@ -0,0 +1,63 @@
Tomahawk音乐播放器带着新形象、新功能回来了
================================================================================
**在悄无声息地过了一年之后Tomahawk——音乐播放器中的瑞士军刀——带着值得称颂的全新发行版回归了。**
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/11/tomahawk-tile-1.jpg)
这个0.8版的开源跨平台应用增添了**更多在线服务的支持**,更新了它的外观,又一次确保了它创新的社交功能完美运行。
### Tomahawk——两个世界的极品 ###
Tomahawk将传统的应用架构与我们“即时”的现代文化嫁接在了一起。它可以浏览和播放本地的音乐也可以播放Spotify、Grooveshark以及SoundCloud这类线上服务的音乐。在最新的发行版中它又把Google Play Music和Beats Music列入了名册。
这可能听着很繁复或令人困惑,但实际上它表现得出奇的好。
若你想要播放一首歌而且不介意它是从哪里来的你只需告诉Tomahawk音乐的标题和作者它就会自动从可获取的源里找出高品质版本的音乐——你不需要做任何事。
![](http://i.imgur.com/nk5oixy.jpg)
这个应用还提供了一些附加功能比如EchoNest分析、Last.fm推荐还有对Jabber的支持这样你就能播放朋友的音乐。它还内置了消息服务以便于你能和其他人快速地分享播放列表和音乐。
>“这种从根本上就与众不同的听音乐方式,开启了前所未有的音乐消费和分享体验”,项目的网站上这样写道。尽管它如此独特,这话却并不为过。
![Tomahawk supports the Sound Menu](http://www.omgubuntu.co.uk/wp-content/uploads/2014/11/tomahawk-controllers.jpg)
支持声音菜单
### Tomahawk0.8发行版的亮点 ###
- 新的交互界面
- 对Beats Music的支持
- 对Google Play Music的支持保存的音乐和“播放全部”链接
- 支持拖拽来自iTunes、Spotify这类网站的链接
- 正在播放的提示
- Android应用测试版
- 收件箱的改进
### 在Ubuntu上安装Tomahawk0.8 ###
作为一个流媒体音乐的重度用户,我会在接下来的几天里体验一下这个应用,然后写一篇关于这些改变的更全面的评析。与此同时,你也可以先尝尝鲜。
在Ubuntu 14.04 LTS和Ubuntu 14.10上可以通过官方PPA获得Tomahawk。
sudo add-apt-repository ppa:tomahawk/ppa
sudo apt-get update && sudo apt-get install tomahawk
在官方项目网站上可以找到独立安装程序和更详细的信息。
- [Visit the Official Tomahawk Website][1]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/11/tomahawk-media-player-returns-new-look-features
作者:[Joey-Elijah Sneddon][a]
译者:[H-mudcup](https://github.com/H-mudcup)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://gettomahawk.com/

View File

@ -0,0 +1,47 @@
如何使用 Linux 从 Grooveshark 下载音乐
================================================================================
> 解决办法通常没有那么难
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-2.jpg)
**Grooveshark 对于喜欢音乐的人来说是一个不错的在线平台同时有多种从上面下载音乐的方法。Groovesquid 是众多允许用户从 Grooveshark 上下载音乐的应用之一,并且是支持多平台的。**
只要有在线流媒体服务,就一定有方法获取你之前看过或听过的视频及音乐。即使下载接口关闭了,也不是什么大不了的事,因为还有很多种解决方法,无论你用的是什么操作系统。比如,网络上就有许多种 YouTube 下载器,同样的道理,从 Grooveshark 上下载音乐也并非难事。
现在得考虑合法性的问题。与许多其他应用一样Groovesquid 并非完全不合法。如果有用户使用应用去做一些非法的事情,那责任应归咎于用户。同样的道理也适用于 uTorrent 或者 BitTorrent。只要你不触及版权问题那你就可以无所顾忌地使用 Groovesquid 了。
### 快捷高效的 Groovesquid ###
你能够找到的 Groovesquid 的唯一缺点是,它是基于 Java 编写的,这从来都不是一个好兆头。虽然为了确保应用的可移植性,这样做确实是一个好方法,但其结果就是糟糕的界面。确实是非常糟糕的界面,不过这一点并不会影响到用户的使用体验,特别是当这款应用所完成的工作是如此有用的时候。
有一点需要注意的地方。Groovesquid 是一款免费的应用,但为了将免费保持下去,它会在菜单栏的右侧显示一则广告。这对大多数人来说都应该不是问题,不过最好在打开应用后注意下菜单栏右侧。
从易用性的角度来看,这款应用非常简洁。用户可以通过在顶部地址栏里输入链接直接下载单曲,地址栏的位置可以通过其左侧的下拉菜单进行修改。在下拉菜单中,也可以修改为歌曲名称、流行度、专辑名称、播放列表以及艺术家。有些选项向你提供了诸如查看 Grooveshark 上最流行的音乐,或者下载整个播放列表等。
你可以下载 Groovesquid 0.7.0
- [jar][1] 文件大小3.8 MB
- [tar.gz][2] 文件大小549 KB
下载完 Jar 文件后,你所需要做的是将其权限修改为可执行,然后让 Java 来完成剩下的工作。
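下面是一个简单的示意假设下载得到的文件名为“Groovesquid.jar”

    chmod +x Groovesquid.jar    # 将 Jar 文件的权限修改为可执行
    java -jar Groovesquid.jar   # 让 Java 来运行它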
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-3.jpg)
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-4.jpg)
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-5.jpg)
![](http://i1-news.softpedia-static.com/images/news2/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268-6.jpg)
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/How-to-Download-Music-from-Grooveshark-with-a-Linux-OS-468268.shtml
作者:[Silviu Stahie][a]
译者:[Stevearzh](https://github.com/Stevearzh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:https://github.com/groovesquid/groovesquid/releases/download/v0.7.0/Groovesquid.jar
[2]:https://github.com/groovesquid/groovesquid/archive/v0.7.0.tar.gz

View File

@ -1,61 +1,61 @@
伴随苹果手表的揭幕Ubuntu智能手表会成为下一个吗?
伴随Apple Watch的揭幕下一个智能手表会是Ubuntu吗?
===
**今天,苹果借助‘苹果手表’的发布,证实了其进军穿戴式计算设备市场的长期传言**
**苹果借助Apple Watch的发布证实了其进军穿戴式电子设备市场的长期传言**
![Ubuntu Smartwatch good idea?](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/ubuntu-galaxy-gear-smartwatch.png)
Ubuntu智能手表 - 好主意?
Ubuntu智能手表 - 好主意?
拥有一系列稳定功能硬件解决方案和应用合作伙伴关系的支持,手腕穿戴设备被许多公司预示为“人与技术关系的新篇章”。
拥有一系列稳定功能硬件解决方案和应用合作伙伴关系的支持,手腕穿戴设备被许多公司预示为“人与技术关系的新篇章”。
它的到来以及用户兴趣的提升有可能意味着Ubuntu需要遵循一个为智能手表定制的Ubuntu版本。
它的到来以及用户兴趣的提升有可能意味着Ubuntu需要跟进一个为智能手表定制的Ubuntu版本。
### 大的方面还是成功的 ###
苹果在正确时间加入了快速发展的智能手表部门。约束手腕穿戴电脑功能的界限并不是一成不变。失败的设计,不良的用户界面以及主流用户使用穿戴技术功能的弱参数化,这些都见证了硬件种类保持着高效的影响力 一个准许Cupertino把时间花费在苹果手表上的因素
苹果在正确的时间加入了快速发展的智能手表行列。手腕穿戴设备功能的界限并不是一成不变。失败的设计、简陋的用户界面以及主流用户使用穿戴技术功能的弱定制化,这些都见证了硬件类产品仍然很脆弱 这一因素使得Cupertino把时间花费在Apple Watch上
> 分析师说超过2200万的智能手表将在今年销售
去年全球范围内可穿戴设备的销售数量包括健身追踪器仅仅1000万。今年分析师希望设备数量的改变可以超过2200万 不包括苹果手表因为其直到2015年初才开始零售。
去年全球范围内可穿戴设备的销售数量包括健身追踪器仅仅1000万。今年分析师希望设备的销量可以超过2200万 不包括苹果手表因为其直到2015年初才开始零售。
很容易就可以看出增长的来源。今年九月初柏林举办的IFA 2014展览会展示了一系列来自主要制造商们的可穿戴设备包括索尼和华硕。大多数搭载着Google最新发布的安卓穿戴系统
其实,我们很容易就可以看出增长的来源。今年九月初柏林举办的IFA 2014展览会展示了一系列来自主要制造商们的可穿戴设备包括索尼和华硕。大多数搭载着Google最新发布的安卓穿戴平台
一个成熟的表现:安卓穿戴设备打破了与形式因素保持一致的新奇争论,进而呈现出一致并令人折服的用户方案。和新的苹果手表一样,它紧密地连接在一个现存的智能手机生态系统上。
更成熟的一个表现:安卓穿戴设备打破了与形式因素保持一致的新奇争论,进而呈现出一致且令人信服的用户方案。和新的苹果手表一样,它紧密地连接在一个现存的智能手机生态系统上。
可能它只是一个使用案例Ubuntu手腕穿戴系统是否能匹配它还不清楚。
但Ubuntu手腕穿戴系统是否能与之匹配成为一个实用案例目前还不清楚。
#### 目前还没有Ubuntu智能手表的计划 ####
Ubuntu操作系统的通用性结合以为多装置设备和趋势性未来定制的严格版本已经产生了典型目标智能电视平板电脑和智能手机。Mir,公司的本土显示服务器被用来运转所有尺寸屏幕上的接口虽然不是公认1.5"的)
Ubuntu操作系统的通用性将多种设备的严格标准与统一的未来目标联合在一起Canonical已经将目标指向了智能电视平板电脑和智能手机。公司自家的显示服务Mir甚至被用来为所有尺寸的屏幕提供驱动接口虽然不是公认1.5"的)。
今年年初Canonical社区负责人Jono Bacon被问是否有制作Ubuntu智能手表的打算。Bacon提供了他对这个问题的看法增加另一个形式因素到[Ubuntu触摸设备]路线只会减缓其余的东西”。
今年年初Canonical社区负责人Jono Bacon被问是否有制作Ubuntu智能手表的打算。Bacon提供了他对这个问题的看法为[Ubuntu触摸设备]路线增加额外的形式因素只会减缓现有的进度”。
在Ubuntu电话发布两周年之际,我们还是挺赞同他的想法的。
在Ubuntu手机发布两周年之际,我们还是挺赞同他的想法的。
滴答,滴答,对冲你的赌注点(实在不懂什么意思...)
###除了A面还有B面###
但是并不是没有希望的。在一个[几个月之后的电话采访][1]中Ubuntu创始人Mark Shuttleworth提及到可穿戴技术和智能电视,平板电脑,智能手机一样,都在公司计划当中。
但是并不是没有希望的。在[几个月之后的一次电话采访][1]中Ubuntu创始人Mark Shuttleworth提到,可穿戴技术和智能电视、平板电脑、智能手机一样,都在公司计划当中。
> “Ubuntu因其在电话中的完美设计变得独一无二是它同时也被设计成满足其余生态系统的样子,比如从穿戴设备到PC机。”
> “Ubuntu因其在电话中的完美设计变得独一无二同时它的设计也能够满足其他生态系统,从穿戴设备到PC机。”
然而这还没得到具体的证实,它更像一个指针,在这个方向是给我们提供一个乐观的指引。
然而这还没得到具体的证实,它更像一个指针,在某个方向给我们提供一个乐观的指引。
### 不可能 — 这就是原因所在 ###
####可能 — 这就是原因所在 ####
Canonical并不反对利用牢固的专利进军市场。事实上的重要性犹如公司的DHA — 犹如服务器上的RHEL,桌面上的Windows,智能手机上的安卓...
Canonical并不反对利用牢固的专利进军市场。事实上恰恰是公司DNA基因的一部分 — 犹如服务器端的RHEL,桌面端的Windows,智能手机上的安卓...
设备上的Ubuntu系统被制作成可以在更小的屏幕上扩展和适应性运行。甚至很有可能在和手表一样小的屏幕上运行。当普通的代码基础已经在手机平板电脑桌面和TV上准备就绪我想如果我们没有看到来自社区这一方向上的努力我会感到奇怪
设备上的Ubuntu系统被制作成可以在更小的屏幕上扩展和适配运行甚至在小如手表一样的屏幕上。当普通的代码基础已经在手机、平板电脑、桌面和TV上准备就绪在同样的方向上如果看不到社区的努力是十分令人吃惊的
但是我之所以不认为它会从规范社区发生至少目前还没有是今年早些时候Jono Bacon个人思想的共鸣:时间和努力。
但是我之所以不认为它会从Canonical发生至少目前还没有是基于今年早些时候Jono Bacon的个人思想得出的结论:时间和努力。
Tim Cook在他的主题演讲中说道“*我们并没有追随iPhone也没有缩水用户界面将其强硬捆绑在你的手腕上。*”这是一个很明显的陈述。为如此小的屏幕设计UI和UX模型;通过交互原则工作;对硬件和输入模式的恭维,都不是一件容易的事。
Tim Cook在他的主题演讲中说道“*我们并没有追随iPhone也没有缩水用户界面将其强硬捆绑在你的手腕上。*”这是一个很明显的陈述。为如此小的屏幕设计UI和UX模型、通过交互原则工作、对硬件和输入模式的推崇,这些都不是容易的事。
可穿戴技术仍然是一个新兴的市场。在这个阶段Canonical将会浪费发展,设计以及进行中的业务。在一些更为紧迫的地区,任何利益的重要性将要超过损失
可穿戴技术仍然是一个新兴的市场。在这个阶段Canonical可能会在探寻的过程中浪费一些发展、设计和商业上的机会。如果在一些更为紧迫的领域落后了,造成的后果远比眼前利益的损失更严重
玩一局更久的游戏等待直到看出那些努力在何地成功和失败这是一条更难的路线。但是更适合Ubuntu的就是今天。在新产品出现之前Canonical把力量用在现存的产品上是更好的选择这是一些已经来迟的理论
打一场持久战耐心等待看哪些努力成功哪些会失败这是一条更难的路线但是却更适合Ubuntu就如同今天它做的一样。在新产品出现之前Canonical把力量用在现存的产品上是更好的选择这是一些已经来迟的理论
想更进一步了解什么是Ubuntu智能手表点击下面的[视频][2]。它展示了一个互动的主体性皮肤Tizen(它已经支持Samsung Galaxy Gear智能手表)。
想更进一步了解什么是Ubuntu智能手表点击下面的[视频][2]里面展示了一个交互的Unity主题皮肤Tizen(它已经支持Samsung Galaxy Gear智能手表)。
---
@ -63,7 +63,7 @@ via: http://www.omgubuntu.co.uk/2014/09/ubuntu-smartwatch-apple-iwatch
作者:[Joey-Elijah Sneddon][a]
译者:[su-kaiyao](https://github.com/su-kaiyao)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Mr小眼儿](https://github.com/tinyeyeser)
本文由[LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,275 @@
如何在 Linux 上使用 HAProxy 配置 HTTP 负载均衡器
================================================================================
随着基于 Web 的应用和服务的增多IT 系统管理员肩上的责任也越来越重。无论是遇到流量高峰这样不可预期的事件,还是硬件损坏、紧急维修这样的内部挑战,你的 Web 应用都必须保持可用。甚至现在流行的 DevOps 和持续交付实践也可能威胁到你的 Web 服务的可靠性和性能的一致性。
不可预测、不一致的性能表现是你无法接受的。但是我们怎样消除这些缺点呢?大多数情况下,一个合适的负载均衡解决方案可以解决这个问题。今天我会给你们介绍如何使用 [HAProxy][1] 配置 HTTP 负载均衡器。
###什么是 HTTP 负载均衡? ###
HTTP 负载均衡是一个网络解决方案,它将发入的 HTTP 或 HTTPs 请求分配至一组提供相同的 Web 应用内容的服务器用于响应。通过将请求在这样的多个服务器间进行均衡,负载均衡器可以防止服务器出现单点故障,可以提升整体的可用性和响应速度。它还可以让你能够简单的通过添加或者移除服务器来进行横向扩展或收缩,对工作负载进行调整。
### 什么时候,什么情况下需要使用负载均衡? ###
负载均衡可以提升服务器的使用性能和最大可用性,当你的服务器开始出现高负载时就可以使用负载均衡。或者你在为一个大型项目设计架构时,在前端使用负载均衡是一个很好的习惯。当你的环境需要扩展的时候它会很有用。
### 什么是 HAProxy ###
HAProxy 是一个流行的开源的 GNU/Linux 平台下的 TCP/HTTP 服务器负载均衡和代理软件。HAProxy 采用单线程、事件驱动架构,可以轻松处理 [10 Gbps 速率][2] 的流量,在生产环境中被广泛使用。它的功能包括自动健康状态检查、自定义负载均衡算法、HTTPS/SSL 支持、会话速率限制等等。
### 这个教程要实现怎样的负载均衡 ###
在这个教程中,我们会为 HTTP Web 服务器配置一个基于 HAProxy 的负载均衡。
### 准备条件 ###
你至少要有一台,或者最好是两台 Web 服务器来验证你的负载均衡的功能。我们假设后端的 HTTP Web 服务器已经配置好并[可以运行][3]。
### 在 Linux 中安装 HAProxy ###
对于大多数的发行版,我们可以使用发行版的包管理器来安装 HAProxy。
#### 在 Debian 中安装 HAProxy ####
在 Debian Wheezy 中我们需要添加源,在 /etc/apt/sources.list.d 下创建一个文件 "backports.list",写入下面的内容:
    deb http://cdn.debian.net/debian wheezy-backports main
刷新仓库的数据,并安装 HAProxy
    # apt-get update
    # apt-get install haproxy
#### 在 Ubuntu 中安装 HAProxy ####
    # apt-get install haproxy
#### 在 CentOS 和 RHEL 中安装 HAProxy ####
# yum install haproxy
### 配置 HAProxy ###
本教程假设有两台运行的 HTTP Web 服务器,它们的 IP 地址是 192.168.100.2 和 192.168.100.3。我们将负载均衡配置在 192.168.100.4 的这台服务器上。
为了让 HAProxy 工作正常,你需要修改 /etc/haproxy/haproxy.cfg 中的一些选项。我们会在这一节中解释这些修改。一些配置可能因 GNU/Linux 发行版的不同而变化,这些会被标注出来。
#### 1. 配置日志功能 ####
你要做的第一件事是为 HAProxy 配置日志功能,在排错时日志将很有用。日志配置可以在 /etc/haproxy/haproxy.cfg 的 global 段中找到。下面是针对不同 Linux 发行版的 HAProxy 日志配置。
**CentOS 或 RHEL:**
在 CentOS/RHEL 中启用日志,将下面的内容:
log 127.0.0.1 local2
替换为:
log 127.0.0.1 local0
然后配置 HAProxy 在 /var/log 中的日志分割,我们需要修改当前的 rsyslog 配置。为了简洁和明了,我们在 /etc/rsyslog.d 下创建一个叫 haproxy.conf 的文件,添加下面的内容:
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%msg%\n"
    local0.=info -/var/log/haproxy.log;Haproxy
    local0.notice -/var/log/haproxy-status.log;Haproxy
    local0.* ~
这个配置会基于 $template 在 /var/log 中分割 HAProxy 日志。现在重启 rsyslog 应用这些更改。
# service rsyslog restart
**Debian 或 Ubuntu:**
在 Debian 或 Ubuntu 中启用日志,将下面的内容
log /dev/log local0
log /dev/log local1 notice
替换为:
log 127.0.0.1 local0
然后为 HAProxy 配置日志分割,编辑 /etc/rsyslog.d/ 下的 haproxy.conf在 Debian 中可能叫 49-haproxy.conf),写入下面的内容:
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%msg%\n"
    local0.=info -/var/log/haproxy.log;Haproxy
    local0.notice -/var/log/haproxy-status.log;Haproxy
    local0.* ~
这个配置会基于 $template 在 /var/log 中分割 HAProxy 日志。现在重启 rsyslog 应用这些更改。
# service rsyslog restart
#### 2. 设置默认选项 ####
下一步是设置 HAProxy 的默认选项。在 /etc/haproxy/haproxy.cfg 的 default 段中,替换为下面的配置:
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
maxconn 20000
contimeout 5000
clitimeout 50000
srvtimeout 50000
上面的配置是当 HAProxy 为 HTTP 负载均衡时建议使用的,但是并不一定是你的环境的最优方案。你可以自己研究 HAProxy 的手册并配置它。
#### 3. Web 集群配置 ####
Web 集群配置定义了一组可用的 HTTP 服务器。我们的负载均衡中的大多数设置都在这里。现在我们会创建一些基本配置,定义我们的节点。将配置文件中从 frontend 段开始的内容全部替换为下面的:
listen webfarm *:80
mode http
stats enable
stats uri /haproxy?stats
stats realm Haproxy\ Statistics
stats auth haproxy:stats
balance roundrobin
cookie LBN insert indirect nocache
option httpclose
option forwardfor
server web01 192.168.100.2:80 cookie node1 check
server web02 192.168.100.3:80 cookie node2 check
“listen webfarm *:80” 定义了负载均衡器监听的地址和端口。为了教程的需要,我设置为 “\*” 表示监听所有接口。在真实的场景中,这样设置可能不太合适,应该替换为可以从 internet 访问的那个网卡接口的地址。
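例如,只在某个特定地址上监听的写法可以是下面这样(示意,这里的 203.0.113.10 只是一个假设的示例地址):

    listen webfarm 203.0.113.10:80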
stats enable
stats uri /haproxy?stats
stats realm Haproxy\ Statistics
stats auth haproxy:stats
上面的设置定义了负载均衡器的状态统计页面,可以通过 http://<load-balancer-IP>/haproxy?stats 访问。访问时需要简单的 HTTP 认证,用户名为 "haproxy",密码为 "stats"。这些设置可以替换为你自己的认证方式。如果你不需要状态统计信息,可以完全禁用掉。
下面是一个 HAProxy 统计信息的例子
![](https://farm4.staticflickr.com/3928/15416835905_a678c8f286_c.jpg)
"balance roundrobin" 这一行表明我们使用的负载均衡类型。这个教程中,我们使用简单的轮询算法,可以完全满足 HTTP 负载均衡的需要。HAProxy 还提供其他的负载均衡类型:
- **leastconn**:将请求调度至连接数最少的服务器
- **source**:对请求的客户端 IP 地址进行哈希计算,根据哈希值和服务器的权重将请求调度至后端服务器。
- **uri**:对 URI 的左半部分(问号之前的部分)进行哈希,根据哈希结果和服务器的权重对请求进行调度
- **url_param**:根据每个 HTTP GET 请求的 URL 查询参数进行调度,使用固定的请求参数将会被调度至指定的服务器上
- **hdr(name)**:根据 HTTP 首部中的 <name> 字段来进行调度
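想换用其他算法时,只需替换配置中的 balance 一行即可,例如(仅为示意):

    balance leastconn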
"cookie LBN insert indirect nocache" 这一行表示我们的负载均衡器会存储 cookie 信息,可以将后端服务器池中的节点与某个特定会话绑定。节点的 cookie 存储为一个自定义的名字。这里,我们使用的是 "LBN",你可以指定其他的名称。后端节点会保存这个 cookie 的会话。
server web01 192.168.100.2:80 cookie node1 check
server web02 192.168.100.3:80 cookie node2 check
上面是我们的 Web 服务器节点的定义。服务器有由内部名称如web01web02IP 地址和唯一的 cookie 字符串表示。cookie 字符串可以自定义,我这里使用的是简单的 node1node2 ... node(n)
### 启动 HAProxy ###
如果你完成了配置,现在启动 HAProxy 并验证是否运行正常。
#### 在 Centos/RHEL 中启动 HAProxy ####
让 HAProxy 开机自启,使用下面的命令
# chkconfig haproxy on
# service haproxy start
当然,防火墙需要开放 80 端口,像下面这样:
**CentOS/RHEL 7 的防火墙**
    # firewall-cmd --permanent --zone=public --add-port=80/tcp
    # firewall-cmd --reload
**CentOS/RHEL 6 的防火墙**
把下面内容加至 /etc/sysconfig/iptables 中的 ":OUTPUT ACCEPT" 段中:

    -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
重启**iptables**
# service iptables restart
#### 在 Debian 中启动 HAProxy ####
# service haproxy start
不要忘了防火墙开放 80 端口,在 /etc/iptables.up.rules 中加入:

    -A INPUT -p tcp --dport 80 -j ACCEPT
#### 在 Ubuntu 中启动HAProxy ####
让 HAProxy 开机自动启动在 /etc/default/haproxy 中配置
ENABLED=1
启动 HAProxy
# service haproxy start
防火墙开放 80 端口:
# ufw allow 80
### 测试 HAProxy ###
检查 HAProxy 是否工作正常,我们可以这样做
首先准备一个 test.php 文件,文件内容如下
<?php
header('Content-Type: text/plain');
echo "Server IP: ".$_SERVER['SERVER_ADDR'];
echo "\nX-Forwarded-for: ".$_SERVER['HTTP_X_FORWARDED_FOR'];
?>
这个 PHP 文件会告诉我们哪台服务器(如负载均衡)转发了请求,哪台后端 Web 服务器实际处理了请求。
将这个 PHP 文件放到两个后端 Web 服务器的 Web 根目录中。然后用 curl 命令通过负载均衡器192.168.100.4)访问这个文件
$ curl http://192.168.100.4/test.php
我们多次执行这个命令,会发现它交替输出下面的内容(因为使用了轮询算法):
Server IP: 192.168.100.2
X-Forwarded-for: 192.168.100.4
----------
Server IP: 192.168.100.3
X-Forwarded-for: 192.168.100.4
如果我们停掉一台后端 Web 服务curl 命令仍然正常工作,请求被分发至另一台可用的 Web 服务器。
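我们还可以顺便验证一下前面配置的 cookie 会话保持。下面是一个简单的示意(利用 curl 的 cookie 读写功能cookie 名就是前面配置的 LBN

    # 第一次请求,保存 HAProxy 插入的 LBN cookie
    curl -c /tmp/lbn.txt http://192.168.100.4/test.php
    # 之后带着同一个 cookie 重复请求,应当始终由同一台后端服务器响应
    curl -b /tmp/lbn.txt http://192.168.100.4/test.php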
### 总结 ###
现在你有了一个完全可用的负载均衡器,以轮询的模式对你的 Web 节点进行负载均衡。还可以去实验其他的配置选项以适应你的环境。希望这个教程可以帮助你的 Web 项目获得更好的可用性。
你可能已经发现了,这个教程只包含单台负载均衡的设置。这意味着我们仍然有单点故障的问题。在真实场景中,你应该至少部署 2 台或者 3 台负载均衡以防止意外发生,但这不是本教程的范围。
如果你有任何问题或建议,请在评论中提出,我会尽我的努力回答。
--------------------------------------------------------------------------------
via: http://xmodulo.com/haproxy-http-load-balancer-linux.html
作者:[Jaroslav Štěpánek][a]
译者:[Liao](https://github.com/liaoishere)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/jaroslav
[1]:http://www.haproxy.org/
[2]:http://www.haproxy.org/10g.html
[3]:http://xmodulo.com/how-to-install-lamp-server-on-ubuntu.html

View File

@ -1,365 +0,0 @@
How to turn your CentOS box into a BGP router using Quagga
如何使用Quagga把你的CentOS系统变成一个BGP路由器?
================================================================================
在[之前的教程中][1]LCTT译注原文文件名为“20140928 How to turn your CentOS box into an OSPF router using Quagga.md”如果该篇翻译已发布可以修改此链接我对如何简单地使用Quagga把CentOS系统变成一个不折不扣的OSPF路由器做了一些描述.Quagga是一个开源路由软件套件.在这个教程中,我将会着重演示**如何同样使用Quagga把一个Linux系统变成一个BGP路由器**,以及如何与其它BGP路由器建立对等连接.
在我们进入细节之前,先了解一些BGP的背景知识还是必要的.边界网关协议(BGP)是互联网域间路由协议的事实标准.在BGP术语中,全球互联网是由成千上万个相互连接的自治系统(AS)组成的,其中每一个AS代表一个特定运营商的网络管理域.
为了使其网络在全球范围内路由可达,每一个AS需要知道如何到达互联网中其它的AS,而这正是BGP发挥作用的地方.BGP是一个AS与相邻AS交换路由信息的语言.这些路由信息通常被称为BGP线路或者BGP前缀,包括AS号(ASN,全球唯一的号码)以及相关的IP地址块.一旦本地的BGP路由表学习并填充了所有的BGP线路,每一个AS就会知道如何到达互联网上的任何公网IP.
这种在不同域(AS)之间路由的能力,是BGP被称为外部网关协议(EGP)或者域间路由协议的主要原因.与之相对,OSPF、IS-IS、RIP和EIGRP这些路由协议则是内部网关协议(IGP)或者域内路由协议.
### 测试方案 ###
在这个教程中,让我们来关注以下拓扑.
![](https://farm6.staticflickr.com/5598/15603223841_4c76343313_z.jpg)
我们假设运营商A想要建立BGP来与运营商B对等交换路由.它们的AS号和IP地址空间等细节如下所示.
- **运营商 A**: ASN (100), IP地址空间 (100.100.0.0/22), 分配给BGP路由器eth1网卡的IP地址(100.100.1.1)
- **运营商 B**: ASN (200), IP地址空间 (200.200.0.0/22), 分配给BGP路由器eth1网卡的IP地址(200.200.1.1)
路由器A和路由器B使用100.100.0.0/30子网来连接对方.理论上,互连可以使用任何子网,只要双方运营商都能到达.在真实场景中,建议使用掩码为30位的公网IP地址空间来实现运营商A和运营商B之间的互连.
### 在 CentOS中安装Quagga ###
如果Quagga还没被安装,我们可以使用yum来安装Quagga.
# yum install quagga
如果你正在使用的是CentOS7系统,你需要应用以下策略来设置SELinux.否则,SELinux将会阻止Zebra守护进程写入它的配置目录.如果你正在使用的是CentOS6,你可以跳过这一步.
# setsebool -P zebra_write_config 1
Quagga软件套件包含几个可以协同工作的守护进程.关于BGP路由,我们将把重点放在建立以下2个守护进程上.
- **Zebra**:一个核心守护进程用于内核接口和静态路由.
- **BGPd**:一个BGP守护进程.
### 配置日志记录 ###
在Quagga被安装后,下一步就是配置Zebra来管理BGP路由器的网络接口.我们通过创建一个Zebra配置文件和启用日志记录来开始第一步.
# cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
在CentOS6系统中:
# service zebra start
# chkconfig zebra on
在CentOS7系统中:
# systemctl start zebra
# systemctl enable zebra
Quagga提供了一个特有的命令行工具vtysh,你可以在其中输入与路由器厂商(例如Cisco和Juniper)兼容的命令.在教程的其余部分,我们将使用vtysh shell来配置BGP路由.
要启动vtysh shell,输入:
# vtysh
提示将被改成主机名,这表明你是在vtysh shell中.
Router-A#
现在我们将使用以下命令来为Zebra配置日志文件:
Router-A# configure terminal
Router-A(config)# log file /var/log/quagga/quagga.log
Router-A(config)# exit
永久保存Zebra配置:
Router-A# write
在路由器B操作同样的步骤.
### 配置对等的IP地址 ###
下一步,我们将在可用的接口上配置对等的IP地址.
Router-A# show interface #显示接口信息
----------
Interface eth0 is up, line protocol detection is disabled
. . . . .
Interface eth1 is up, line protocol detection is disabled
. . . . .
配置eth0接口的参数:
    Router-A# configure terminal
    Router-A(config)# interface eth0
    Router-A(config-if)# ip address 100.100.0.1/30
    Router-A(config-if)# description "to Router-B"
    Router-A(config-if)# no shutdown
    Router-A(config-if)# exit
继续配置eth1接口的参数:
    Router-A(config)# interface eth1
    Router-A(config-if)# ip address 100.100.1.1/24
    Router-A(config-if)# description "test ip from provider A network"
    Router-A(config-if)# no shutdown
    Router-A(config-if)# exit
现在确认配置:
Router-A# show interface
----------
Interface eth0 is up, line protocol detection is disabled
Description: "to Router-B"
inet 100.100.0.1/30 broadcast 100.100.0.3
Interface eth1 is up, line protocol detection is disabled
Description: "test ip from provider A network"
inet 100.100.1.1/24 broadcast 100.100.1.255
----------
Router-A# show interface description #显示接口描述
----------
Interface Status Protocol Description
eth0 up unknown "to Router-B"
eth1 up unknown "test ip from provider A network"
如果一切看起来正常,别忘记保存配置.
Router-A# write
同样地,在路由器B重复一次配置.
在我们继续下一步之前,确认下彼此的IP是可以ping通的.
Router-A# ping 100.100.0.2
----------
PING 100.100.0.2 (100.100.0.2) 56(84) bytes of data.
64 bytes from 100.100.0.2: icmp_seq=1 ttl=64 time=0.616 ms
下一步,我们将继续配置BGP对等和前缀设置.
### 配置BGP对等 ###
Quagga守护进程负责BGP的服务叫bgpd.首先我们来准备它的配置文件.
# cp /usr/share/doc/quagga-XXXXXXX/bgpd.conf.sample /etc/quagga/bgpd.conf
在CentOS6系统中:
# service bgpd start
# chkconfig bgpd on
在CentOS7中
# systemctl start bgpd
# systemctl enable bgpd
现在,让我们来进入Quagga 的shell.
# vtysh
第一步,我们要确认当前没有已经配置的BGP会话.在一些版本,我们可能会发现一个AS号为7675的BGP会话.由于我们不需要这个会话,所以把它移除.
Router-A# show running-config
----------
... ... ...
router bgp 7675
bgp router-id 200.200.1.1
... ... ...
我们将移除一些预先配置好的BGP会话,并建立我们所需的会话取而代之.
Router-A# configure terminal
Router-A(config)# no router bgp 7675
Router-A(config)# router bgp 100
    Router-A(config-router)# no auto-summary
    Router-A(config-router)# no synchronization
Router-A(config-router)# neighbor 100.100.0.2 remote-as 200
Router-A(config-router)# neighbor 100.100.0.2 description "provider B"
Router-A(config-router)# exit
Router-A(config)# exit
Router-A# write
路由器B将用同样的方式来进行配置,以下配置提供作为参考.
Router-B# configure terminal
Router-B(config)# no router bgp 7675
Router-B(config)# router bgp 200
    Router-B(config-router)# no auto-summary
    Router-B(config-router)# no synchronization
Router-B(config-router)# neighbor 100.100.0.1 remote-as 100
Router-B(config-router)# neighbor 100.100.0.1 description "provider A"
Router-B(config-router)# exit
Router-B(config)# exit
Router-B# write
当相关的路由器都被配置好,两台路由器之间的对等将被建立.现在让我们通过运行下面的命令来确认:
Router-A# show ip bgp summary
![](https://farm6.staticflickr.com/5614/15420135700_e3568d2e5f_z.jpg)
从输出中,我们可以看到"State/PfxRcd"这一列.如果对等关闭,输出将会显示"Idle"(空闲)或者"Active"(活动).请记住,'Active'这个词在路由器中总是不好的意思,它意味着路由器正在积极地寻找邻居、前缀或者路由.当对等处于up状态时,"State/PfxRcd"列将显示从对应邻居接收到的前缀数量.
在这个例子的输出中,BGP对等已经在AS 100和AS 200之间建立起来.此时还没有交换任何前缀,所以最右边一列的数值是0.
### 配置前缀通告 ###
正如一开始提到的,AS 100将通告前缀100.100.0.0/22,在我们的例子中AS 200将同样通告前缀200.200.0.0/22.这些前缀需要被添加到BGP配置中,如下所示.
在路由器-A中:
Router-A# configure terminal
Router-A(config)# router bgp 100
    Router-A(config-router)# network 100.100.0.0/22
    Router-A(config-router)# exit
Router-A# write
在路由器-B中:
Router-B# configure terminal
Router-B(config)# router bgp 200
    Router-B(config-router)# network 200.200.0.0/22
    Router-B(config-router)# exit
Router-B# write
在这一点上,两个路由器会根据需要开始通告前缀.
### 测试前缀通告 ###
首先,让我们来确认交换的前缀数量是否发生了变化.
Router-A# show ip bgp summary
![](https://farm6.staticflickr.com/5608/15419095659_0ebb384eee_z.jpg)
为了查看通告给邻居的前缀细节,我们可以使用以下命令,它会显示向邻居100.100.0.2通告的全部前缀.
Router-A# show ip bgp neighbors 100.100.0.2 advertised-routes
![](https://farm6.staticflickr.com/5597/15419618208_4604e5639a_z.jpg)
查看我们从邻居那里接收到了哪些前缀:
Router-A# show ip bgp neighbors 100.100.0.2 routes
![](https://farm4.staticflickr.com/3935/15606556462_e17eae7f49_z.jpg)
我们也可以查看所有的BGP路由:
Router-A# show ip bgp
![](https://farm6.staticflickr.com/5609/15419618228_5c776423a5_z.jpg)
以上的命令都可以用于检查哪些路由是通过BGP学习到并加入路由表的.
Router-A# show ip route
----------
代码: K - 内核路由, C - 已链接 , S - 静态 , R - 路由信息协议 , O - 开放式最短路径优先协议,
I - 中间系统到中间系统的路由选择协议, B - 边界网关协议, > - 选择路由, * - FIB 路由
C>* 100.100.0.0/30 is directly connected, eth0
C>* 100.100.1.0/24 is directly connected, eth1
B>* 200.200.0.0/22 [20/0] via 100.100.0.2, eth0, 00:06:45
----------
Router-A# show ip route bgp
----------
B>* 200.200.0.0/22 [20/0] via 100.100.0.2, eth0, 00:08:13
BGP学习到的路由也将会在Linux路由表中出现.
[root@Router-A~]# ip route
----------
100.100.0.0/30 dev eth0 proto kernel scope link src 100.100.0.1
100.100.1.0/24 dev eth1 proto kernel scope link src 100.100.1.1
200.200.0.0/22 via 100.100.0.2 dev eth0 proto zebra
最后,我们将使用ping命令来测试连通.结果将成功ping通.
[root@Router-A~]# ping 200.200.1.1 -c 2
总而言之,该教程把重点放在了如何在CentOS系统上运行基本的BGP.它帮助你完成了BGP的入门配置,至于一些更高级的设置,例如设置前缀过滤器、调整BGP属性、本地优先级和AS路径预置(prepending)等,我将会在后续的教程中介绍这些主题.
希望这篇教程能给大家一些帮助.
--------------------------------------------------------------------------------
via: http://xmodulo.com/centos-bgp-router-quagga.html
作者:[Sarmed Rahman][a]
译者:[disylee](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/sarmed
[1]:http://xmodulo.com/turn-centos-box-into-ospf-router-quagga.html

View File

@ -0,0 +1,202 @@
如何在Ubuntu / CentOS 6.x上安装Bugzilla 4.4
================================================================================
这里我们将展示如何在一台Ubuntu 14.04或CentOS 6.5/7的机器上安装Bugzilla。Bugzilla是一款基于Web的bug跟踪软件它用一个缺陷数据库来记录和跟踪软件缺陷同时它也是一款自由开源软件FOSS。它的bug跟踪系统允许个人和开发团体有效地记录下他们产品的一些突出问题。尽管是“免费”的Bugzilla依然有很多其它同类产品所没有的“珍贵”特性。因此Bugzilla很快就成为了全球范围内数以千计的组织最喜欢的bug管理工具。
Bugzilla对于不同状况的适应能力非常强。如今它被应用在各个不同的IT领域例如系统管理中的部署管理、芯片设计与部署的问题跟踪制造前后还为Redhat、NASA、Linux-Mandrake和VA Systems这些知名厂商提供软硬件bug跟踪。
### 1. 安装依赖程序 ###
安装Bugzilla相当**简单**。这篇文章特别针对Ubuntu 14.04和CentOS 6.5两个版本(不过也适用于更老的版本)。
为了获取Bugzilla并使其能在Ubuntu或CentOS系统中运行我们要安装Apache网络服务器启用SSL、MySQL数据库服务器以及一些安装和配置Bugzilla所需的工具。
要在你的服务器上安装使用Bugzilla你需要安装好以下程序
- Perl(5.8.1 或以上)
- MySQL
- Apache2
- Bugzilla
- Perl模块
- 使用apache的Bugzilla
正如我们所提到的本文会阐述Ubuntu 14.04和CentOS 6.5/7两种发行版的安装过程为此我们会分成两部分来表示。
以下就是在你的Ubuntu 14.04 LTS和CentOS 7机器安装Bugzilla的步骤
**准备所需的依赖包:**
你需要运行以下命令来安装些必要的包:
**Ubuntu版本:**
$ sudo apt-get install apache2 mysql-server libapache2-mod-perl2
libapache2-mod-perl2-dev libapache2-mod-perl2-doc perl postfix make gcc g++
**CentOS版本:**
$ sudo yum install httpd mod_ssl mysql-server mysql php-mysql gcc perl* mod_perl-devel
**注意请在shell或者终端下运行所有的命令并且确保你用root用户sudo连接机器。**
### 2. 启动Apache服务 ###
你已经按照以上步骤安装好了apache服务那么我们现在需要配置apache服务并运行它。我们需要用sudo或root来敲命令去完成它我们先切换到root连接。
$ sudo -s
我们需要在防火墙中打开80端口并保存改动。
# iptables -I INPUT -p tcp --dport 80 -j ACCEPT
# service iptables save
现在,我们需要启动服务:
CentOS版本:
# service httpd start
我们来确保Apache会在每次你重启机器的时候一并启动起来
# /sbin/chkconfig httpd on
Ubuntu版本:
# service apache2 start
现在由于Apache的HTTP服务已经启动我们就能通过默认的127.0.0.1地址访问它了。
### 3. 配置MySQL服务器 ###
现在我们需要启动我们的MySQL服务
CentOS版本:
# chkconfig mysqld on
    # service mysqld start
Ubuntu版本:
    # service mysql start
![mysql](http://blog.linoxide.com/wp-content/uploads/2014/12/mysql.png)
用root用户登录连接MySQL并给Bugzilla创建一个数据库把你的mysql密码更改成你想要的稍后配置Bugzilla的时候会用到它。
CentOS 6.5和Ubuntu 14.04 Trusty两个版本
# mysql -u root -p
# password: (You'll need to enter your password)
    mysql> create database bugs;
    mysql> grant all on bugs.* to root@localhost identified by "mypassword";
    mysql> quit
**注意请记住数据库名和mysql的密码我们稍后会用到它们。**
### 4. 安装并配置Bugzilla ###
现在我们所有需要的包已经设置完毕并运行起来了我们就要配置我们的Bugzilla。
那么首先我们要下载最新版的Bugzilla包这里我下载的是4.5.2版本。
使用wget工具在shell或终端上下载
wget http://ftp.mozilla.org/pub/mozilla.org/webtools/bugzilla-4.5.2.tar.gz
你也可以从官方网站进行下载。[http://www.bugzilla.org/download/][1]
**从下载下来的bugzilla压缩包中提取文件并重命名**
# tar zxvf bugzilla-4.5.2.tar.gz -C /var/www/html/
# cd /var/www/html/
# mv -v bugzilla-4.5.2 bugzilla
**注意**:这里,**/var/www/html/bugzilla/**就是**Bugzilla主目录**.
现在我们来配置Bugzilla
# cd /var/www/html/bugzilla/
# ./checksetup.pl --check-modules
![bugzilla-check-module](http://blog.linoxide.com/wp-content/uploads/2014/12/bugzilla2-300x198.png)
检查完成之后,我们会发现缺少了一些组件,我们需要安装它们,用以下命令即可实现:
# cd /var/www/html/bugzilla
# perl install-module.pl --all
这一步会花掉一点时间去下载安装所有依赖程序,然后再次运行**checksetup.pl --check-modules**命令来验证有没有漏装什么。
现在我们需要运行以下这条命令,它会在/var/www/html/bugzilla路径下自动生成一个名为localconfig的文件。
# ./checksetup.pl
确认一下你刚才在localconfig文件中所输入的数据库名、用户和密码是否正确。
# nano ./localconfig
    # ./checksetup.pl
![bugzilla-success](http://blog.linoxide.com/wp-content/uploads/2014/12/bugzilla-success.png)
如果一切正常checksetup.pl现在应该就成功地配置Bugzilla了。
现在我们需要添加Bugzilla至我们的Apache配置文件中。那么我们需要用文本编辑器打开 /etc/httpd/conf/httpd.conf 文件(CentOS版本)或者 /etc/apache2/apache2.conf 文件(Ubuntu版本)
CentOS版本:
# nano /etc/httpd/conf/httpd.conf
Ubuntu版本:
    # nano /etc/apache2/apache2.conf
现在我们需要配置Apache服务器我们要把以下配置添加到配置文件里
<VirtualHost *:80>
DocumentRoot /var/www/html/bugzilla/
</VirtualHost>
<Directory /var/www/html/bugzilla>
AddHandler cgi-script .cgi
Options +Indexes +ExecCGI
DirectoryIndex index.cgi
AllowOverride Limit FileInfo Indexes
</Directory>
接着,我们需要编辑 .htaccess 文件并用“#”注释掉顶部“Options -Indexes”这一行。
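如果你喜欢用命令行来完成这一步下面是一个示意命令假设Bugzilla安装在 /var/www/html/bugzilla

    # 用 sed 把 .htaccess 顶部的 "Options -Indexes" 一行注释掉
    sed -i 's/^Options -Indexes/#Options -Indexes/' /var/www/html/bugzilla/.htaccess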
让我们重启我们的apache服务并测试下我们的安装情况。
CentOS版本:
# service httpd restart
Ubuntu版本:
# service apache2 restart
![bugzilla-install-success](http://blog.linoxide.com/wp-content/uploads/2014/12/bugzilla_apache.png)
这样我们的Bugzilla就准备好在Ubuntu 14.04 LTS和CentOS 6.5上接收bug报告了你可以在网页浏览器中通过本地回环地址或服务器的IP地址来访问它。
--------------------------------------------------------------------------------
via: http://linoxide.com/tools/install-bugzilla-ubuntu-centos/
作者:[Arun Pyasi][a]
译者:[ZTinoZ](https://github.com/ZTinoZ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://www.bugzilla.org/download/

View File

@ -0,0 +1,51 @@
Ubuntu14.04或Mint17如何安装Kodi14XBMC
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Kodi_Xmas.jpg)
[Kodi][1]原名就是大名鼎鼎的XBMC发布了[最新版本14][2]命名为Helix。感谢官方XBMC提供的PPA现在可以很简单地在Ubuntu 14.04中安装了。
Kodi是一个优秀的自由开源GPL协议媒体中心软件支持Windows、Linux、Mac、Android等所有主流平台。此软件拥有全屏幕的媒体中心可以管理所有音乐和视频不单支持本地文件还支持如YouTube、[Netflix][3]、Hulu、Amazon Prime等流媒体服务。
### Ubuntu 14.04, 14.10 和 Linux Mint 17 中安装XBMC 14 Kodi Helix ###
再次感谢官方的PPA让我们可以轻松安装Kodi 14。
支持Ubuntu 14.04、Ubuntu 12.04、Linux Mint 17、Pinguy OS 14.04、Deepin 2014、LXLE 14.04、Linux Lite 2.0、Elementary OS以及其他基于Ubuntu的Linux发行版。
打开终端Ctrl+Alt+T然后使用下列命令。
sudo add-apt-repository ppa:team-xbmc/ppa
sudo apt-get update
sudo apt-get install kodi
需要下载大约100MB的内容在我看来这不算大。若需安装音频编码器和PVR插件使用下列命令
sudo apt-get install kodi-audioencoder-* kodi-pvr-*
#### 从Ubuntu中移除Kodi 14 ####
从系统中移除Kodi 14 ,使用下列命令:
sudo apt-get remove kodi
同样也应该移除PPA软件源
sudo add-apt-repository --remove ppa:team-xbmc/ppa
我希望这个简单的文章可以帮助到你在Ubuntu, Linux Mint 和其他 Linux版本中轻松安装Kodi 14。
你觉得Kodi 14 Helix怎么样
你有没有使用其他的什么媒体中心?
可以在下面的评论区分享你的观点。
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-kodi-14-xbmc-in-ubuntu-14-04-linux-mint-17/
作者:[Abhishek][a]
译者:[Vic020/VicYu](http://www.vicyu.net)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://kodi.tv/
[2]:http://kodi.tv/kodi-14-0-helix-unwinds/
[3]:http://itsfoss.com/watch-netflix-in-ubuntu-14-04/

View File

@ -0,0 +1,47 @@
如何在Ubuntu 14.04 中安装Winusb
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/WinUSB_Ubuntu_1404.jpeg)
[WinUSB][1]是一款简单且有用的工具可以让你从Windows ISO镜像或者DVD中创建USB安装盘。它结合了GUI和命令行两种方式你可以根据你的喜好决定使用哪种。
在本篇中我们会展示**如何在Ubuntu 14.04、14.10 和 Linux Mint 17 中安装WinUSB**。
### 在Ubuntu 14.04、14.10 和 Linux Mint 17 中安装WinUSB ###
直到Ubuntu 13.10WinUSB一直都在积极开发并且可以在官方PPA中找到。这个PPA还没有为Ubuntu 14.04和14.10更新但是其中的二进制文件仍旧可以在更新版本的Ubuntu和Linux Mint中运行。根据[你使用的系统是32位还是64位][2],使用下面的命令来下载二进制文件:
打开终端并在32位的系统下使用下面的命令
wget https://launchpad.net/~colingille/+archive/freshlight/+files/winusb_1.0.11+saucy1_i386.deb
对于64位的系统使用下面的命令
wget https://launchpad.net/~colingille/+archive/freshlight/+files/winusb_1.0.11+saucy1_amd64.deb
一旦你下载了正确的二进制包你可以用下面的命令安装WinUSB
sudo dpkg -i winusb*
如果在安装WinUSB时看见错误不要担心。使用这条命令修复依赖
sudo apt-get -f install
之后你就可以在Unity Dash中查找WinUSB并且用它在Ubuntu 14.04 中创建Windows的live USB了。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/WinUSB_Ubuntu.png)
我希望这篇文章能够帮到你**在Ubuntu 14.04、14.10 和 Linux Mint 17 中安装WinUSB**。
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-winusb-in-ubuntu-14-04/
作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://en.congelli.eu/prog_info_winusb.html
[2]:http://itsfoss.com/how-to-know-ubuntu-unity-version/

View File

@ -0,0 +1,189 @@
实例展示Ubuntu中apt-get和apt-cache命令的使用
================================================================================
apt-get和apt-cache是**Ubuntu Linux**中的命令行下的**包管理**工具。 apt-get的GUI版本是Synaptic包管理器本篇中我们会讨论apt-get和apt-cache命令的不同。
### 示例1 列出所有可用包 ###
linuxtechi@localhost:~$ apt-cache pkgnames
account-plugin-yahoojp
ceph-fuse
dvd+rw-tools
e3
gnome-commander-data
grub-gfxpayload-lists
gweled
.......................................
### 示例2 用关键字搜索包 ###
这个命令在你不确定包名时很有用只要在apt-cacheLCTT译注原文是apt-get应为笔误后面输入与包相关的关键字即可。
linuxtechi@localhost:~$ apt-cache search "web server"
apache2 - Apache HTTP Server
apache2-bin - Apache HTTP Server (binary files and modules)
apache2-data - Apache HTTP Server (common files)
apache2-dbg - Apache debugging symbols
apache2-dev - Apache HTTP Server (development headers)
apache2-doc - Apache HTTP Server (on-site documentation)
apache2-utils - Apache HTTP Server (utility programs for web servers)
......................................................................
**注意** 如果你安装了“**apt-file**”包,我们就可以像下面那样用配置文件搜索包。
linuxtechi@localhost:~$ apt-file search nagios.cfg
ganglia-nagios-bridge: /usr/share/doc/ganglia-nagios-bridge/nagios.cfg
nagios3-common: /etc/nagios3/nagios.cfg
nagios3-common: /usr/share/doc/nagios3-common/examples/nagios.cfg.gz
pnp4nagios-bin: /etc/pnp4nagios/nagios.cfg
pnp4nagios-bin: /usr/share/doc/pnp4nagios/examples/nagios.cfg
### 示例3 显示特定包的基本信息 ###
linuxtechi@localhost:~$ apt-cache show postfix
Package: postfix
Priority: optional
Section: mail
Installed-Size: 3524
Maintainer: LaMont Jones <lamont@debian.org>
Architecture: amd64
Version: 2.11.1-1
Replaces: mail-transport-agent
Provides: default-mta, mail-transport-agent
.....................................................
### 示例4 列出包的依赖 ###
linuxtechi@localhost:~$ apt-cache depends postfix
postfix
Depends: libc6
Depends: libdb5.3
Depends: libsasl2-2
Depends: libsqlite3-0
Depends: libssl1.0.0
|Depends: debconf
Depends: <debconf-2.0>
cdebconf
debconf
Depends: netbase
Depends: adduser
Depends: dpkg
............................................
### 示例5 使用apt-cache显示缓存统计 ###
linuxtechi@localhost:~$ apt-cache stats
Total package names: 60877 (1,218 k)
Total package structures: 102824 (5,758 k)
Normal packages: 71285
Pure virtual packages: 1102
Single virtual packages: 9151
Mixed virtual packages: 1827
Missing: 19459
Total distinct versions: 74913 (5,394 k)
Total distinct descriptions: 93792 (2,251 k)
Total dependencies: 573443 (16.1 M)
Total ver/file relations: 78007 (1,872 k)
Total Desc/File relations: 93792 (2,251 k)
Total Provides mappings: 16583 (332 k)
Total globbed strings: 171 (2,263 )
Total dependency version space: 2,665 k
Total slack space: 37.3 k
Total space accounted for: 29.5 M
### 示例6 使用 “apt-get update” 更新仓库 ###
使用命令“apt-get update”, 我们可以重新从源仓库中同步文件索引。包的索引从“/etc/apt/sources.list”中检索
linuxtechi@localhost:~$ sudo apt-get update
Ign http://extras.ubuntu.com utopic InRelease
Hit http://extras.ubuntu.com utopic Release.gpg
Hit http://extras.ubuntu.com utopic Release
Hit http://extras.ubuntu.com utopic/main Sources
Hit http://extras.ubuntu.com utopic/main amd64 Packages
Hit http://extras.ubuntu.com utopic/main i386 Packages
Ign http://in.archive.ubuntu.com utopic InRelease
Ign http://in.archive.ubuntu.com utopic-updates InRelease
Ign http://in.archive.ubuntu.com utopic-backports InRelease
................................................................
### 示例7 使用apt-get安装包 ###
linuxtechi@localhost:~$ sudo apt-get install icinga
上面的命令会安装叫“icinga”的包。
### 示例8 升级所有已安装的包 ###
linuxtechi@localhost:~$ sudo apt-get upgrade
### 示例9 更新特定的包 ###
在apt-get命令的“install”选项后面接上“--only-upgrade”用来更新一个特定的包如下所示
linuxtechi@localhost:~$ sudo apt-get install filezilla --only-upgrade
### 示例10 使用apt-get卸载包 ###
linuxtechi@localhost:~$ sudo apt-get remove skype
上面的命令只会删除skype包如果你想要删除它的配置文件在apt-get命令中使用“purge”选项。如下所示
linuxtechi@localhost:~$ sudo apt-get purge skype
我们可以结合使用上面的两个命令:
linuxtechi@localhost:~$ sudo apt-get remove --purge skype
### 示例11 在当前的目录中下载包 ###
linuxtechi@localhost:~$ sudo apt-get download icinga
Get:1 http://in.archive.ubuntu.com/ubuntu/ utopic/universe icinga amd64 1.11.6-1build1 [1,474 B]
Fetched 1,474 B in 1s (1,363 B/s)
上面的命令会把icinga包下载到你当前的目录。
### 示例12 清理本地包占用的磁盘空间 ###
linuxtechi@localhost:~$ sudo apt-get clean
上面的命令会清理apt-get下载包时在本地缓存中占用的磁盘空间。
我们也可以使用“**autoclean**”选项来代替“**clean**”两者之间主要的区别是autoclean只清理那些已经过时、再也无法下载到的包文件。
linuxtechi@localhost:~$ sudo apt-get autoclean
Reading package lists... Done
Building dependency tree
Reading state information... Done
### 示例13 使用“autoremove”删除包 ###
当在apt-get命令中使用“autoremove”时它会删除为了满足依赖而安装且现在没用的包。
linuxtechi@localhost:~$ sudo apt-get autoremove icinga
### 示例14 显示包的更新日志 ###
linuxtechi@localhost:~$ sudo apt-get changelog apache2
Get:1 Changelog for apache2 (http://changelogs.ubuntu.com/changelogs/pool/main/a/apache2/apache2_2.4.10-1ubuntu1/changelog) [195 kB]
Fetched 195 kB in 3s (60.9 kB/s)
上面的命令会下载apache2的更新日志并在你屏幕上显示。
### 示例15 使用 “check” 选项显示损坏的依赖 ###
linuxtechi@localhost:~$ sudo apt-get check
Reading package lists... Done
Building dependency tree
Reading state information... Done
--------------------------------------------------------------------------------
via: http://www.linuxtechi.com/ubuntu-apt-get-apt-cache-commands-examples/
作者:[Pradeep Kumar][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linuxtechi.com/author/pradeep/

View File

@ -0,0 +1,64 @@
如何在Ubuntu 14.04 和14.10 上安装新的字体
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/fonts.jpg)
Ubuntu默认自带了很多字体。但你或许对这些字体还不满意。因此你可以在**Ubuntu 14.04、14.10或者像Linux Mint这样的其他系统中安装额外的字体**。
### 第一步: 获取字体 ###
第一步也是最重要的下载你选择的字体。现在你或许在考虑从哪里下载字体。不要担心在Google上搜索一下就能找到好几个免费的字体网站。你可以先去看看[Lost Type的字体][1][Font Squirrel的字体][2]同样也是一个下载字体的好地方。
### 第二步在Ubuntu中安装新字体 ###
下载的字体文件可能是一个压缩包先解压它。大多数字体文件的格式是[TTF][3]TrueType字体或者[OTF][4]OpenType字体。无论是哪种只要双击字体文件它就会自动用字体查看器打开。在这里你可以在右上角看到安装选项
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Install_New_Fonts_Ubuntu.png)
安装字体的过程中不会显示任何进度信息。几秒钟后,你会看到状态变成了已安装,这就表示字体已经安装成功。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Install_New_Fonts_Ubuntu_1.png)
安装完毕后你就可以在GIMP、Pinta等应用中看到你新安装的字体了。
### 第二步在Linux上一次安装几个字体 ###
我没有打错。这仍旧是第二步但只是一个备选方案。我们上面看到的在Ubuntu中安装字体的方法是不错的。但是这有一个小问题当你有20个新字体要安装时一个个单独双击既繁琐又麻烦。你不这么认为么
要在Ubuntu中一次安装多个字体你要做的是创建一个.fonts文件夹如果你的家目录下还不存在这个目录的话并把解压后的TTF和OTF文件复制到这个文件夹内见下面的命令示例
在文件管理器中进入家目录按下Ctrl+H来[显示Ubuntu中的隐藏文件][5]右键创建一个文件夹并命名为.fonts。这里的点很重要在Linux中文件名前面加上点意味着它在普通视图中会被隐藏。
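上述操作也可以在终端中完成,下面是一个示意(这里假设解压后的字体文件放在 ~/Downloads/fonts 目录,该路径仅为举例):

    mkdir -p ~/.fonts                                  # 若 .fonts 目录不存在则创建它
    cp ~/Downloads/fonts/*.ttf ~/Downloads/fonts/*.otf ~/.fonts/
    fc-cache -f -v                                     # 刷新字体缓存,让系统立即识别新字体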
#### 备选方案: ####
另外你可以安装字体管理程序来以GUI的形式管理字体。要在Ubuntu中安装字体管理程序打开终端并输入下面的命令
sudo apt-get install font-manager
从Unity Dash中打开字体管理器。你可以看到已安装的字体和安装新字体、删除字体等选项。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Font_Manager_Ubuntu.jpeg)
要卸载字体管理器,使用下面的命令:
sudo apt-get remove font-manager
我希望这篇文章可以帮助你在Ubuntu或其他Linux系统上安装字体。如果你有任何问题或建议请让我知道。
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-fonts-ubuntu-1404-1410/
作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/
[1]:http://www.losttype.com/browse/
[2]:http://www.fontsquirrel.com/
[3]:http://en.wikipedia.org/wiki/TrueType
[4]:http://en.wikipedia.org/wiki/OpenType
[5]:http://itsfoss.com/hide-folders-and-show-hidden-files-in-ubuntu-beginner-trick/

View File

@ -0,0 +1,77 @@
systemd-nspawn 指南
===========================
我目前已从 chroot译者注chroot 可以构建出类似沙盒的环境,建议各位同学先了解 chroot迁移到了 systemd-nspawn并写了一篇快速指南。简单的说我强烈建议正在使用 systemd 的用户从 chroot 转为 systemd-nspawn因为只要你的内核配置正确它几乎没有什么缺点。
想必在各大发行版中的用户对 chroot 都不陌生,而且我猜想 Gentoo 用户要时不时的使用它。
###chroot 面临的挑战
大多数交互环境下,仅运行 chroot 还不够。通常还要挂载 /proc 和 /sys另外为了确保不会出现类似“丢失 ptys”之类的错误我们还得 bind译者注bind 是 mount 的一个选项) 挂载 /dev。如果你使用 tmpfs你可能想要以 tmpfs 类型挂载新的 /tmp 和 /var/tmp。接下来你可能还想将其他的挂载点 bind 到 chroot 中。这些都不是特别难,但是一般情况下要写一个脚本来管理它。
现在我按照日常计划执行备份操作,当然有一些不必备份的数据,如 tmp 目录,或任何 bind 挂载的内容。当我配置了一个新的 chroot就意味着我要更新我的备份配置了但我经常忘记这点因为大多数时间里 chroot 挂载点并没有挂载。当这些挂载点仍然存在的情况下执行备份的话,备份中就会多出很多不需要的内容。
当 bind 挂载点包含其他挂载点时(比如挂载时使用 -rbind 选项),这种情况下 systemd 的默认处理方式略有不同。在 bind 挂载中卸载一些东西时systemd 会将处于 bind 另一边的目录也卸载掉。想像一下,如果我卸载了 chroot 中以bind 挂载 /dev 的某个目录后发现主机上的 /dev/pts 与 /dev/shm 也不见了,我肯定会很吃惊。不过好像有其他方法可以避免,但是这不是我们此次讨论的重点。
### Systemd-nspawn 优点
Systemd-nspawn 用于启动一个容器,它的最简模式就可以像 chroot 那样运行。默认情况下,它会自动配置容器所需的各种挂载点,如 /dev、/tmp 等等。通过配合一些选项,它也可以配置其他的 bind 挂载点。当容器退出后,所有的挂载点都会被清除。
容器运行时,从外部看上去没什么变化。事实上,可以从同一个 chroot 产生5个不同的 systemd-nspawn 容器实例,并且除了文件系统(不包括 /dev, /tmp等只有 /usr,/etc 的改变会传递)外它们之间没有任何联系。你的备份将会忽略 bind 挂载点、tmpfs 及任何挂载在容器中的内容。
它同时具有其它优秀容器的优点,比如 containment - 可以杀死容器内的所有活动但不影响外部,等等。它的安全性并不是无懈可击的-它的作用仅仅是防止意外的错误。
如果容器内使用的是兼容 sysvinit 的 initsystemd 和 openrc 都算),你可以直接引导这个容器。这意味着你可以在容器中使用 fstab 添加挂载点运行守护进程等。只需要一个 chroot 的开销,就几乎可以获得虚拟化的所有好处(不需要构建内核等)。在一个看起来像 chroot 的容器中运行 systemctl poweroff 看起来很奇怪,但这条命令确实能够起作用。
注意,如果不做额外配置的话,容器会共享主机的网络,所以如果主机上已经运行了 sshd容器中就不要再运行了。运行一个独立的网络 namespace 并不难(见下面的示意),新的实例可以使用 DHCP分离之后记得绑定接口。
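一个简单的示意(假设所用的 systemd 版本已经支持 --network-veth 选项):

    # 为容器创建独立的网络 namespace以及一对连接主机与容器的 veth 虚拟网卡
    systemd-nspawn -D . --network-veth -b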
###操作步骤
让它工作起来是此次讨论中最简短的部分了。
首先系统内核要支持 namespaces 与 devpts
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_DEVPTS_MULTIPLE_INSTANCES=y
像 chroot 那样启动 namespace 是非常简单的:
systemd-nspawn -D .
也可以像 chroot 那样退出。在内部可以运行 mount 并且可以看到默认它已将 /dev 与 /tmp 准备好了。 ”.“就是 chroot 的路径,也就是当前路径。在它内部运行的是 bash。
如果要添加一些 bind 挂载点也非常简便:
systemd-nspawn -D . --bind /usr/portage
现在,容器中的 /usr/portage 就与主机的对应目录绑定起来了,我们无需同步 portage 树。如果想要绑定到容器内的其他路径,只要在原路径后面添加 “:dest” 即可目标路径相对于容器的根--bind foo 与 --bind foo:foo 是一样的。
如果容器具有 init 功能并且可以在内部运行,可以通过添加 -b 选项启动它:
systemd-nspawn -D . --bind /usr/portage -b
可以观察到 init 的运作。关闭容器会自动退出。
如果容器内运行了 systemd你可以使用 -j 选项将它的日志重定向到主机的 systemd 日志:
systemd-nspawn -D . --bind /usr/portage -j -b
使用 nspawn 注册的容器能够在 machinectl 中显示,这样就可以方便地在主机上对它进行操作,如启动新的 getty、建立 ssh 连接、关机等。
如果你正在使用 systemd 那么甩开 chroot 拥抱 nspawn 吧。
---------------------
via: http://rich0gentoo.wordpress.com/2014/07/14/quick-systemd-nspawn-guide/
作者:[rich0][a]
译者:[SPccman](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://rich0gentoo.wordpress.com/