Merge pull request #2 from LCTT/master

Updated as of August 4, 2015
This commit is contained in:
struggling 2015-08-04 09:04:04 +08:00
commit 8fa301fb24
26 changed files with 1596 additions and 905 deletions

View File

@ -0,0 +1,205 @@
Interview Questions and Answers on the Linux Firewall 'iptables'
================================================================================
Nishita Agarwal, a Tecmint reader, shares her experience of an interview she recently had with a private company in Pune, India. She was asked many different questions during the interview, but since she is an expert in iptables she wanted to share the iptables questions and their answers with anyone who may face a similar interview in the future.
![Linux Firewall iptables Interview Questions](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-iptables-Interview-Questions.jpg)
All of the questions and answers below are based on Nishita Agarwal's recollection and have been rewritten.
> "Hi friends! My name is **Nishita Agarwal**. I hold a Bachelor of Science degree, and my studies focused on UNIX and its variants (BSD, Linux), which have always fascinated me. I have more than a year of experience in storage. I am looking for a career change and will be joining a company in Pune, India."
Below is a collection of the questions I was asked during the interview. I have written down the iptables questions and their answers as I remember them. I hope this helps you in your future interviews.
### 1. Have you heard of iptables and Firewalld on Linux? Do you know what they are and what they are used for? ###
**Answer**: I know both iptables and Firewalld, and I have been using iptables for quite some time. iptables is written mainly in C and released under the GNU GPL license. It is written from a system administrator's point of view, and its latest stable release is iptables 1.4.21. iptables is commonly used as the firewall on UNIX-like systems; more precisely, it should be called iptables/netfilter. Administrators interact with iptables through terminal/GUI tools to add and define firewall rules in predefined tables. Netfilter is a kernel module that performs the actual packet filtering.
Firewalld is the newest implementation of filtering rules in RHEL/CentOS 7 (and perhaps other distributions I am less familiar with). It has replaced the iptables interface and connects to netfilter.
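As a brief illustration (not from the original interview; the zone and port below are placeholders), day-to-day interaction with firewalld goes through the firewall-cmd tool:
# firewall-cmd --state                                        # is firewalld running?
# firewall-cmd --get-default-zone                             # show the default zone
# firewall-cmd --zone=public --add-port=8080/tcp --permanent  # open a placeholder port in a placeholder zone
# firewall-cmd --reload                                       # apply permanent rules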
### 2. Have you used any GUI or command-line tools for iptables? ###
**Answer**: Although I have used GUI tools such as Shorewall in combination with [Webmin][1], as well as direct access to iptables from the terminal, I must admit that working with iptables directly from the Linux terminal gives a user far greater flexibility and a better understanding of what is happening behind the scenes. GUI tools suit novice administrators; the terminal suits experienced ones.
### 3. So what are the basic differences between iptables and firewalld? ###
**Answer**: iptables and firewalld serve the same purpose (packet filtering) but take different approaches. Unlike firewalld, iptables flushes the entire rule set every time a change is made. The iptables configuration is typically stored in /etc/sysconfig/iptables, while the firewalld configuration lives in /etc/firewalld/ as a set of XML files. Configuring firewalld through its XML-based files is easier than configuring iptables, but both can accomplish the same tasks. For example, firewalld can use iptables under its own command-line interface as well as through its XML-based configuration files.
### 4. Given the chance, would you replace iptables with firewalld on all your servers? ###
**Answer**: I am familiar with iptables and it works well. If there is no requirement for the dynamic features of firewalld, there is no reason to migrate all the configuration from iptables to firewalld. So far I have not seen iptables cause any trouble, and the general rule of IT also says, "Why fix something that isn't broken?" That is my own opinion, but I would not mind if the organization decided to replace iptables with firewalld.
### 5. You seem quite confident with iptables, and as it happens our servers use iptables as well. ###
What tables does iptables use? Please briefly describe the tables iptables uses and the chains they support.
**Answer**: Thank you for the compliment. To answer your question, iptables uses four tables:
- Nat table
- Mangle table
- Filter table
- Raw table
Nat table: The Nat table is used mainly for Network Address Translation. The IP address of a packet is modified according to the rules in the table. Packets in a stream traverse the Nat table only once; for example, if a packet coming through an interface is NATed (has its IP address modified), the remaining packets in that stream will not traverse this table again. Filtering in this table is generally discouraged. The chains supported by the Nat table are the PREROUTING, POSTROUTING and OUTPUT chains.
Mangle table: As its name suggests, this table is used to mangle packets. It is used for special packet alterations and can modify the headers and contents of various packets. The Mangle table cannot be used for masquerading. The supported chains are the PREROUTING, OUTPUT, FORWARD, INPUT and POSTROUTING chains.
Filter table: The Filter table is the default table used by iptables and is used to filter packets. If no rules are defined, the Filter table is taken as the default and filtering is done based on it. The supported chains are the INPUT, OUTPUT and FORWARD chains.
Raw table: The Raw table is used when we want to configure packets to be exempted (typically from connection tracking). It supports the PREROUTING and OUTPUT chains.
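To make the table/chain mapping concrete, here is a short, hedged illustration of how each table can be listed from the shell (the -t option selects the table; output will vary from system to system):
# iptables -t nat -L -n -v      # list the Nat table with counters
# iptables -t mangle -L -n      # list the Mangle table
# iptables -t filter -L -n      # the Filter table; same as a plain 'iptables -L'
# iptables -t raw -L -n         # list the Raw table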
### 6. Briefly, what are targets in iptables (the values that can be specified as a target), and what are they used for? ###
**Answer**: The following values can be specified as a target in iptables (a short usage sketch follows the list):
- ACCEPT : accept the packet
- QUEUE : pass the packet to userspace (where applications and drivers live)
- DROP : drop the packet
- RETURN : return control to the calling chain and stop executing further rules of the current chain for this packet
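As a hedged usage sketch (these exact rules are illustrative, not from the interview; MYCHAIN is a placeholder name), each target is simply passed to a rule with -j:
# iptables -N MYCHAIN                            # create a user-defined chain (placeholder name)
# iptables -A MYCHAIN -s 10.0.0.0/8 -j RETURN    # hand matching packets back to the calling chain
# iptables -A INPUT -p icmp -j ACCEPT            # ACCEPT target
# iptables -A INPUT -p tcp --dport 23 -j DROP    # DROP target (drop telnet)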
### 7. Let's talk about the technical side of iptables -- I mean its practical use. ###
How would you check for the iptables RPM that is needed to install iptables on CentOS?
**Answer**: iptables is installed by default on CentOS; we do not need to install it separately. We can check for the RPM like this:
# rpm -qa iptables
iptables-1.4.21-13.el7.x86_64
If you need to install it, you can do so with yum:
# yum install iptables-services
### 8. How would you check and make sure that the iptables service is running? ###
**Answer**: You can check the status of iptables by running the following command in a terminal:
# service iptables status [On CentOS 6/5]
# systemctl status iptables [On CentOS 7]
If iptables is not running, you can use the following commands:
---------------- On CentOS 6/5 ----------------
# chkconfig --level 35 iptables on
# service iptables start
---------------- On CentOS 7 ----------------
# systemctl enable iptables
# systemctl start iptables
We can also check whether the iptables kernel module is loaded:
# lsmod | grep ip_tables
### 9. How would you check the rules currently defined in iptables? ###
**Answer**: The current rules can be viewed simply with the following command:
# iptables -L
Sample output:
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
### 10. How would you flush all iptables rules, or a specific chain? ###
**Answer**: You can flush a specific chain with the following command:
# iptables --flush OUTPUT
To flush all rules, use:
# iptables --flush
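Note that flushing only clears the in-memory rule set. As a hedged aside (not part of the original Q&A), the running rules are usually saved and restored on CentOS with:
# iptables-save > /etc/sysconfig/iptables      # dump the current rules to the file read at service start
# iptables-restore < /etc/sysconfig/iptables   # load them back
# service iptables save                        # CentOS 6/5 helper that does the same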
### 11. Add a rule in iptables to accept all packets coming from a trusted IP address, for example 192.168.0.7. ###
**Answer**: The above scenario can be accomplished by running the following command:
# iptables -A INPUT -s 192.168.0.7 -j ACCEPT
We can also use standard slash (CIDR) notation or a subnet mask with the source IP:
# iptables -A INPUT -s 192.168.0.7/24 -j ACCEPT
# iptables -A INPUT -s 192.168.0.7/255.255.255.0 -j ACCEPT
### 12. How would you add rules in iptables to ACCEPT, REJECT, DENY and DROP the ssh service? ###
**Answer**: Assuming ssh runs on port 22 (its default port), we can add a rule in iptables to ACCEPT tcp packets for ssh on port 22:
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
To REJECT tcp packets for the ssh service on port 22:
# iptables -A INPUT -p tcp --dport 22 -j REJECT
To DENY tcp packets for the ssh service on port 22 (note that stock iptables has no DENY target; in practice DROP serves this purpose):
# iptables -A INPUT -p tcp --dport 22 -j DENY
To DROP tcp packets for the ssh service on port 22:
# iptables -A INPUT -p tcp --dport 22 -j DROP
### 13. Let me give you another scenario. Suppose a machine has the local IP address 192.168.0.6 and you need to block connections on ports 21, 22, 23 and 80. What would you do? ###
**Answer**: All I need here is the multiport option of iptables followed by the port numbers to be blocked. The above scenario can be handled with a single rule:
# iptables -A INPUT -s 192.168.0.6 -p tcp -m multiport --dport 22,23,80,8080 -j DROP
You can review the rules that were written with the following command:
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
DROP tcp -- 192.168.0.6 anywhere multiport dports ssh,telnet,http,webcache
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
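As a hedged aside (not part of the original interview), rules can also be listed with line numbers and deleted individually, which is handy when cleaning up a chain; the rule number below is a placeholder:
# iptables -L INPUT -n --line-numbers    # show rule numbers in the INPUT chain
# iptables -D INPUT 3                    # delete rule number 3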
**Interviewer**: That is all I wanted to ask. You are a valuable employee and we would not want to miss out on you. I will recommend your name to HR. If you have any questions for me, go ahead.
As a candidate I did not want to keep asking about the project I would work on and other company matters, as that would have interrupted a pleasant conversation. Not to mention that the HR round might not be easy either. In any case, I got the opportunity.
I also want to thank Avishek and Ravi (my friends) for taking the time to help me put this interview write-up together.
Friends! If you have had a similar interview and would like to share your experience with millions of Tecmint readers, please send your questions and answers to admin@tecmint.com.
Thank you! Keep in touch. And if I could have answered any of my questions above better, please do let me know.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-firewall-iptables-interview-questions-and-answers/
Author: [Avishek Kumar][a]
Translator: [wwy-hust](https://github.com/wwy-hust)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/install-webmin-web-based-system-administration-tool-for-rhel-centos-fedora/

View File

@ -1,11 +1,14 @@
How to Deploy a Swarm Cluster Using Docker Machine
================================================================================
大家好今天我们来研究一下如何使用Docker Machine部署Swarm集群。Docker Machine提供了独立的Docker API所以任何与Docker守护进程进行交互的工具都可以使用Swarm来透明地扩增到多台主机上。Docker Machine可以用来在个人电脑、云端以及的数据中心里创建Docker主机。它为创建服务器安装Docker以及根据用户设定配置Docker客户端提供了便捷化的解决方案。我们可以使用任何驱动来部署swarm集群并且swarm集群将由于使用了TLS加密具有极好的安全性。
大家好今天我们来研究一下如何使用Docker Machine部署Swarm集群。Docker Machine提供了标准的Docker API 支持所以任何可以与Docker守护进程进行交互的工具都可以使用Swarm来透明地扩增到多台主机上。Docker Machine可以用来在个人电脑、云端以及的数据中心里创建Docker主机。它为创建服务器安装Docker以及根据用户设定来配置Docker客户端提供了便捷化的解决方案。我们可以使用任何驱动来部署swarm集群并且swarm集群将由于使用了TLS加密具有极好的安全性。
下面是我提供的简便方法。
### 1. 安装Docker Machine ###
Docker Machine 在任何Linux系统上都被支持。首先我们需要从Github上下载最新版本的Docker Machine。我们使用curl命令来下载最先版本Docker Machine ie 0.2.0。
Docker Machine 在各种Linux系统上都支持的很好。首先我们需要从Github上下载最新版本的Docker Machine。我们使用curl命令来下载最先版本Docker Machine ie 0.2.0。
64位操作系统
# curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine
@ -18,7 +21,7 @@ Docker Machine 在任何Linux系统上都被支持。首先我们需要从Git
# chmod +x /usr/local/bin/docker-machine
After doing the above, we need to make sure that docker-machine has been installed correctly. How do we check? Run `docker-machine -v`, which will print the version of docker-machine installed on the system.
# docker-machine -v
@ -31,14 +34,15 @@ Docker Machine 在任何Linux系统上都被支持。首先我们需要从Git
### 2. Create a Machine ###
After installing Docker Machine on our device, we need to use it to create a machine. In this article we will deploy it on the Digital Ocean platform, so we will use "digitalocean" as the driver API and run docker swarm on top of it. This Droplet will be set up as the Swarm master, and we will create another Droplet and configure it as a Swarm node agent.
The command to create the machine is as follows:
# docker-machine create --driver digitalocean --digitalocean-access-token <API-Token> linux-dev
**Note**: Suppose we want to create a machine named "linux-dev". <API-Token> is a key generated in the Digital Ocean control panel of the Digital Ocean cloud platform. To get this key, log in to your Digital Ocean control panel, click the API option, then click Generate New Token, give it a name, and check both the Read and Write options. You will then get a long hexadecimal key; that is the <API-Token>. Replace the API-Token field in the command above with it.
Now run the following command to load the machine's configuration variables into the shell:
# eval "$(docker-machine env linux-dev)"
@ -48,7 +52,7 @@ Docker Machine 在任何Linux系统上都被支持。首先我们需要从Git
# docker-machine active linux-dev
Now let's check whether this machine is marked as ACTIVE with an "*".
# docker-machine ls
@ -56,22 +60,21 @@ Docker Machine 在任何Linux系统上都被支持。首先我们需要从Git
### 3. Run the Swarm Docker Image ###
Now that we have created the machine, we need to deploy the swarm docker image on it. The machine will run this docker image and control the Swarm master and nodes. Run the image with the following command:
# docker run swarm create
![Docker Machine Swarm Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-create.png)
If you are trying to run the swarm docker image using a **32-bit operating system** on the computer where Docker Machine is running, you will need to SSH into the Droplet.
# docker-machine ssh
# docker run swarm create
# exit
### 4. Create the Swarm Master ###
Now that our swarm image is running on the machine, we will create a Swarm master (controller) node. Use the following command to add the master node.
# docker-machine create \
-d digitalocean \
@ -83,9 +86,9 @@ If you are trying to run swarm docker image using **32 bit Operating System** in
![Docker Machine Swarm Master Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-master-create.png)
### 5. Create the Swarm Node ###
Now we will create a swarm node that connects to the Swarm master. The following command creates a new Droplet named swarm-node and joins it to the Swarm master. With that we have a two-node swarm cluster.
# docker-machine create \
-d digitalocean \
@ -96,21 +99,19 @@ If you are trying to run swarm docker image using **32 bit Operating System** in
![Docker Machine Swarm Nodes](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-nodes.png)
### 6. Connecting to the Swarm Master ###
Now we connect to the Swarm master so that we can deploy Docker containers across the nodes according to our needs and configuration. Run the following command to load the Swarm master's machine configuration into the environment:
# eval "$(docker-machine env --swarm swarm-master)"
After that, we can run the containers we need across the nodes. Here we also want to check that everything is working, so run the **docker info** command to check the Swarm cluster information.
# docker info
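As an illustrative, hedged example (the image and container name are placeholders, not from the original article), once the shell points at the Swarm master you can schedule containers across the cluster with ordinary docker commands:
# docker run -d --name web-test nginx    # Swarm picks a node and starts the container there
# docker ps                              # list the running containers across the cluster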
### Conclusion ###
We can create a Swarm cluster with Docker Machine very easily. This method is highly efficient because it greatly reduces the time spent by system administrators and users. In this article we successfully deployed a cluster by creating a master node and a slave node with Digital Ocean as the driver. Other similar drivers include VirtualBox, Google Cloud Computing, Amazon Web Services, Microsoft Azure and so on. The connections are all encrypted with TLS, so they are quite secure. If you have any questions, suggestions or feedback, please leave them in the comment box below so that we can improve and update the article.
--------------------------------------------------------------------------------
@ -118,7 +119,7 @@ via: http://linoxide.com/linux-how-to/provision-swarm-clusters-using-docker-mach
Author: [Arun Pyasi][a]
Translator: [DongShuaike](https://github.com/DongShuaike)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

View File

@ -1,11 +1,13 @@
How to Configure a ProFTPD Server on Fedora 22
================================================================================
In this article we will see how to set up an FTP server with ProFTPD on a computer or server running Fedora 22. [ProFTPD][1] is a free and open-source FTP server released under the GPL, and one of the mainstream FTP servers on Linux. Its main design goals are to provide many advanced features and to give users plenty of configuration options for easy customization. It offers many configuration options that some other FTP server software still lacks. It was originally developed as a more secure and more easily configured replacement for the wu-ftpd server.
An FTP server is software that lets users upload and download files and directories, using an FTP client, from a remote server where it is installed. Below are some of the main features of the ProFTPD server; for more details you can visit [http://www.proftpd.org/features.html][2].
- Every directory can contain a ".ftpaccess" file for access control, similar to Apache's ".htaccess"
- Supports multiple virtual FTP servers as well as multi-user logins and anonymous FTP
- Can be started as a standalone service or via inetd/xinetd
- Its file/directory attributes, ownership and permissions are UNIX-based
- It can run unprivileged on its own, protecting the system from the damage root access could cause
- Its modular design makes it easy to extend with other modules, such as LDAP servers, SSL/TLS encryption, RADIUS support, and so on
- The ProFTPD server also supports IPv6
@ -38,7 +40,7 @@
### 3. Add an FTP User ###
After setting up the basic configuration file, we will naturally want to add an FTP user whose root is a specific directory. The users currently logged in can automatically use the FTP service and log in to the FTP server, but in this tutorial we will create a new user whose home directory is a specified directory on the ftp server.
Next, we will create a new group named ftpgroup.
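The exact commands for this step fall outside the diff hunk shown here; a minimal sketch, assuming the group ftpgroup and the user arunftp mentioned later in the article, with /ftp-dir/ as a purely illustrative home directory, would be:
$ sudo groupadd ftpgroup                          # the new group mentioned above
$ sudo useradd -G ftpgroup -d /ftp-dir/ arunftp   # user name from the article; path is a placeholder
$ sudo passwd arunftp                             # set the password used to log in later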
@ -57,7 +59,7 @@
Retype new password:
passwd: all authentication tokens updated successfully.
Now we will set read and write permissions on the home directory for this ftp user with the following commands (LCTT translator's note: these are SELinux-related settings; if SELinux is not enabled, they are not needed).
$ sudo setsebool -P allow_ftpd_full_access=1
$ sudo setsebool -P ftp_home_dir=1
@ -129,7 +131,7 @@
If **TLS/SSL encryption is enabled**, run the following commands.
$ sudo firewall-cmd --add-port=1024-65534/tcp
$ sudo firewall-cmd --add-port=1024-65534/tcp --permanent
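As a hedged aside (not in the original text), rules added with --permanent only take effect after firewalld reloads them, so it may be worth following up with:
$ sudo firewall-cmd --reload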
If **TLS/SSL encryption is not enabled**, run the following commands.
@ -158,7 +160,7 @@
### 7. Log in to the FTP Server ###
Now, if everything was set up as described in this tutorial, we should be able to connect to the ftp server and log in with the information configured above. Here we will configure the FTP client filezilla, using the **server's IP or name** as the host, **FTP** as the protocol, **arunftp** as the username, and the password set in step 3 above. If you enabled TLS support as described in step 4, you will also need to select **Require explicit FTP over TLS** as the encryption type; if you did not, and you do not want to use TLS encryption, select **Plain FTP**.
![FTP 登录细节](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-login-details.png)
@ -170,7 +172,7 @@
### Summary ###
Finally, we successfully installed and configured the ProFTPD FTP server on a Fedora 22 machine. ProFTPD is a super powerful, highly customizable and extensible FTP daemon. The tutorial above showed how to configure a secure FTP server with TLS encryption. Enabling TLS encryption on the FTP server is strongly recommended, since it allows data transfers and logins to be encrypted with SSL certificates. We did not configure anonymous FTP access here, because it is generally not recommended on a protected FTP system. FTP access makes uploading and downloading very simple and efficient. We can also change the user's port to increase security. Well, if you have any questions, suggestions or feedback, please leave a comment below so we can improve and update the content. Thank you! Have fun :-)
--------------------------------------------------------------------------------
@ -178,7 +180,7 @@ via: http://linoxide.com/linux-how-to/configure-ftp-proftpd-fedora-22/
Author: [Arun Pyasi][a]
Translator: [zpl1025](https://github.com/zpl1025)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

View File

@ -6,17 +6,17 @@
Every time you boot into an operating system, a series of applications starts automatically. These are called 'startup applications' or 'startup programs'. Over time, as you install more and more applications, you will find that too many startup applications launch automatically at boot, eating up a lot of system resources and slowing your system down. This can make things feel sluggish, which I am sure is not what you want.
One way to make Ubuntu faster is to take control of these startup applications. Ubuntu provides a GUI tool that lets you find these startup applications and then completely disable them or delay their launch, so that not every application runs at the same time at boot.
In this article we will see **how to control startup applications in Ubuntu, how to make an application start at boot, and how to find hidden startup applications.** The instructions here apply to all Ubuntu versions, such as Ubuntu 12.04, Ubuntu 14.04 and Ubuntu 15.04.
### Managing Startup Applications in Ubuntu ###
By default, Ubuntu provides a `Startup Applications` tool for this; you do not need to install anything. Just search for it in the Unity dash.
![Startup Applications tool in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/startup_applications_Ubuntu.jpeg)
Click it to launch it. Here is what my `Startup Applications` list looks like:
![Viewing startup programs in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Screenshot-from-2015-07-18-122550.png)
@ -84,7 +84,7 @@
That's it: the program will run automatically the next time you boot. This is everything you can do with startup applications in Ubuntu.
So far we have discussed the applications that are visible at boot, but there are many more services, daemons and programs that are not visible in the `Startup Applications` tool. In the next section we will see how to view these hidden startup programs in Ubuntu.
### Viewing Hidden Startup Programs in Ubuntu ###
@ -97,13 +97,14 @@
![Viewing hidden startup programs in Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Hidden_startup_program_Ubuntu.jpg)
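The command used in this step is cut off by the diff hunk above; a common way to reveal these entries (shown as a hedged sketch, not necessarily the article's exact command) is to flip the NoDisplay flag in the autostart .desktop files:
sudo sed -i 's/NoDisplay=true/NoDisplay=false/g' /etc/xdg/autostart/*.desktop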
You can manage these startup applications just as we discussed earlier. I hope this tutorial helps you control startup programs in Ubuntu. Questions and suggestions are always welcome.
--------------------------------------------------------------------------------
via: http://itsfoss.com/manage-startup-applications-ubuntu/
Author: [Abhishek][a]
Translator: [FSSlc](https://github.com/FSSlc)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

View File

@ -1,15 +1,15 @@
How to Compare PDF Files on Ubuntu
================================================================================
If you want to compare PDF files, you can use one of the tools below.
### Comparepdf ###
comparepdf is a command-line application for comparing two PDF files. The default comparison mode is text mode, which compares the text of each corresponding pair of pages. As soon as a difference is detected, the program terminates and displays a message (unless -v=0 is set) and an indicative return code.
The option for text mode comparison is -ct or --compare=text (the default); for visual comparison, which is useful when icons or other images have changed, use -ca or --compare=appearance. The -v=1 or --verbose=1 option reports differences (and says nothing about matching files); use -v=0 to suppress all reporting, or -v=2 to report both differing and matching files.
#### Install comparepdf on Ubuntu ####
Open a terminal and run the following command:
@ -19,17 +19,17 @@ comparepdf是一个命令行应用用于将两个PDF文件进行对比。默
comparepdf [OPTIONS] file1.pdf file2.pdf
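For instance (a hedged example with placeholder file names), comparing two files by appearance and reporting both matches and differences could look like this:
comparepdf --compare=appearance --verbose=2 report-v1.pdf report-v2.pdf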
### Diffpdf ###
DiffPDF is a graphical application for comparing two PDF files. By default it only compares the text of each corresponding pair of pages, but it can also compare pages visually (for example, if a chart has been changed or a paragraph reformatted). It can also compare specific pages or page ranges. For example, if there are two versions of the same PDF, one with pages 1-12 and the other with pages 1-13 because an extra page 4 was added, they can be compared by specifying two page ranges: 1-12 for the first and 1-3, 5-13 for the second. DiffPDF will then compare the pages in pairs: (1, 1), (2, 2), (3, 3), (4, 5), (5, 6), and so on, up to (12, 13).
#### Install diffpdf on Ubuntu ####
Open a terminal and run the following command:
sudo apt-get install diffpdf
#### Screenshots ####
![](http://www.ubuntugeek.com/wp-content/uploads/2015/07/14.png)
@ -41,7 +41,7 @@ via: http://www.ubuntugeek.com/compare-pdf-files-on-ubuntu.html
Author: [ruchi][a]
Translator: [GOLinux](https://github.com/GOLinux)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

View File

@ -1,88 +0,0 @@
FSSlc Translating
7 communities driving open source development
================================================================================
Not so long ago, the open source model was the rebellious kid on the block, viewed with suspicion by established industry players. Today, open initiatives and foundations are flourishing with long lists of vendor committers who see the model as a key to innovation.
![](http://images.techhive.com/images/article/2015/01/0_opensource-title-100539095-orig.jpg)
### Open Development of Tech Drives Innovation ###
Over the past two decades, open development of technology has come to be seen as a key to driving innovation. Even companies that once saw open source as a threat have come around — Microsoft, for example, is now active in a number of open source initiatives. To date, most open development has focused on software. But even that is changing as communities have begun to coalesce around open hardware initiatives. Here are seven organizations that are successfully promoting and developing open technologies, both hardware and software.
### OpenPOWER Foundation ###
![](http://images.techhive.com/images/article/2015/01/1_openpower-100539100-orig.jpg)
The [OpenPOWER Foundation][1] was founded by IBM, Google, Mellanox, Tyan and NVIDIA in 2013 to drive open collaboration hardware development in the same spirit as the open source software development which has found fertile ground in the past two decades.
IBM seeded the foundation by opening up its Power-based hardware and software technologies, offering licenses to use Power IP in independent hardware products. More than 70 members now work together to create custom open servers, components and software for Linux-based data centers.
In April, OpenPOWER unveiled a technology roadmap based on new POWER8 processor-based servers capable of analyzing data 50 times faster than the latest x86-based systems. In July, IBM and Google released a firmware stack. October saw the availability of NVIDIA GPU accelerated POWER8 systems and the first OpenPOWER reference server from Tyan.
### The Linux Foundation ###
![](http://images.techhive.com/images/article/2015/01/2_the-linux-foundation-100539101-orig.jpg)
Founded in 2000, [The Linux Foundation][2] is now the host for the largest open source, collaborative development effort in history, with more than 180 corporate members and many individual and student members. It sponsors the work of key Linux developers and promotes, protects and advances the Linux operating system and collaborative software development.
Some of its most successful collaborative projects include Code Aurora Forum (a consortium of companies with projects serving the mobile wireless industry), MeeGo (a project to build a Linux kernel-based operating system for mobile devices and IVI) and the Open Virtualization Alliance (which fosters the adoption of free and open source software virtualization solutions).
### Open Virtualization Alliance ###
![](http://images.techhive.com/images/article/2015/01/3_open-virtualization-alliance-100539102-orig.jpg)
The [Open Virtualization Alliance (OVA)][3] exists to foster the adoption of free and open source software virtualization solutions like Kernel-based Virtual Machine (KVM) through use cases and support for the development of interoperable common interfaces and APIs. KVM turns the Linux kernel into a hypervisor.
Today, KVM is the most commonly used hypervisor with OpenStack.
### The OpenStack Foundation ###
![](http://images.techhive.com/images/article/2015/01/4_the-openstack-foundation-100539096-orig.jpg)
Originally launched as an Infrastructure-as-a-Service (IaaS) product by NASA and Rackspace hosting in 2010, the [OpenStack Foundation][4] has become the home for one of the biggest open source projects around. It boasts more than 200 member companies, including AT&T, AMD, Avaya, Canonical, Cisco, Dell and HP.
Organized around a six-month release cycle, the foundation's OpenStack projects are developed to control pools of processing, storage and networking resources through a data center — all managed or provisioned through a Web-based dashboard, command-line tools or a RESTful API. So far, the collaborative development supported by the foundation has resulted in the creation of OpenStack components including OpenStack Compute (a cloud computing fabric controller that is the main part of an IaaS system), OpenStack Networking (a system for managing networks and IP addresses) and OpenStack Object Storage (a scalable redundant storage system).
### OpenDaylight ###
![](http://images.techhive.com/images/article/2015/01/5_opendaylight-100539097-orig.jpg)
Another collaborative project to come out of the Linux Foundation, [OpenDaylight][5] is a joint initiative of industry vendors, like Dell, HP, Oracle and Avaya founded in April 2013. Its mandate is the creation of a community-led, open, industry-supported framework consisting of code and blueprints for Software-Defined Networking (SDN). The idea is to provide a fully functional SDN platform that can be deployed directly, without requiring other components, though vendors can offer add-ons and enhancements.
### Apache Software Foundation ###
![](http://images.techhive.com/images/article/2015/01/6_apache-software-foundation-100539098-orig.jpg)
The [Apache Software Foundation (ASF)][6] is home to nearly 150 top level projects ranging from open source enterprise automation software to a whole ecosystem of distributed computing projects related to Apache Hadoop. These projects deliver enterprise-grade, freely available software products, while the Apache License is intended to make it easy for users, whether commercial or individual, to deploy Apache products.
ASF was incorporated in 1999 as a membership-based, not-for-profit corporation with meritocracy at its heart — to become a member you must first be actively contributing to one or more of the foundation's collaborative projects.
### Open Compute Project ###
![](http://images.techhive.com/images/article/2015/01/7_open-compute-project-100539099-orig.jpg)
An outgrowth of Facebook's redesign of its Oregon data center, the [Open Compute Project (OCP)][7] aims to develop open hardware solutions for data centers. The OCP is an initiative made up of cheap, vanity-free servers, modular I/O storage for Open Rack (a rack standard designed for data centers to integrate the rack into the data center infrastructure) and a relatively "green" data center design.
OCP board members include representatives from Facebook, Intel, Goldman Sachs, Rackspace and Microsoft.
OCP recently announced two options for licensing: an Apache 2.0-like license that allows for derivative works and a more prescriptive license that encourages changes to be rolled back into the original software.
--------------------------------------------------------------------------------
via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities-driving-open-source-development.html
Author: [Thor Olavsrud][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](http://linux.cn/)
[a]:http://www.networkworld.com/author/Thor-Olavsrud/
[1]:http://openpowerfoundation.org/
[2]:http://www.linuxfoundation.org/
[3]:https://openvirtualizationalliance.org/
[4]:http://www.openstack.org/foundation/
[5]:http://www.opendaylight.org/
[6]:http://www.apache.org/
[7]:http://www.opencompute.org/

View File

@ -1,260 +0,0 @@
translating...
How to set up IPv6 BGP peering and filtering in Quagga BGP router
================================================================================
In the previous tutorials, we demonstrated how we can set up a [full-fledged BGP router][1] and configure [prefix filtering][2] with Quagga. In this tutorial, we are going to show you how we can set up IPv6 BGP peering and advertise IPv6 prefixes through BGP. We will also demonstrate how we can filter IPv6 prefixes advertised or received by using prefix-list and route-map features.
### Topology ###
For this tutorial, we will be considering the following topology.
![](https://farm9.staticflickr.com/8599/15944659374_1c65852df2_c.jpg)
Service providers A and B want to establish an IPv6 BGP peering between them. Their IPv6 and AS information is as follows.
- Peering IP block: 2001:DB8:3::/64
- Service provider A: AS 100, 2001:DB8:1::/48
- Service provider B: AS 200, 2001:DB8:2::/48
### Installing Quagga on CentOS/RHEL ###
If Quagga has not already been installed, we can install it using yum.
# yum install quagga
On CentOS/RHEL 7, the default SELinux policy, which prevents /usr/sbin/zebra from writing to its configuration directory, can interfere with the setup procedure we are going to describe. Thus we want to disable this policy as follows. Skip this step if you are using CentOS/RHEL 6.
# setsebool -P zebra_write_config 1
### Creating Configuration Files ###
After installation, we start the configuration process by creating the zebra/bgpd configuration files.
# cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
# cp /usr/share/doc/quagga-XXXXX/bgpd.conf.sample /etc/quagga/bgpd.conf
Next, enable auto-start of these services.
**On CentOS/RHEL 6:**
# service zebra start; service bgpd start
# chkconfig zebra on; chkconfig bgpd on
**On CentOS/RHEL 7:**
# systemctl start zebra; systemctl start bgpd
# systemctl enable zebra; systemctl enable bgpd
Quagga provides a built-in shell called vtysh, whose interface is similar to those of major router vendors such as Cisco or Juniper. Launch vtysh command shell:
# vtysh
The prompt will be changed to:
router-a#
or
router-b#
In the rest of the tutorials, these prompts indicate that you are inside vtysh shell of either router.
### Specifying Log File for Zebra ###
Let's configure the log file for Zebra, which will be helpful for debugging.
First, enter the global configuration mode by typing:
router-a# configure terminal
The prompt will be changed to:
router-a(config)#
Now specify log file location. Then exit the configuration mode:
router-a(config)# log file /var/log/quagga/quagga.log
router-a(config)# exit
Save configuration permanently by:
router-a# write
### Configuring Interface IP Addresses ###
Let's now configure the IP addresses for Quagga's physical interfaces.
First, we check the available interfaces from inside vtysh.
router-a# show interfaces
----------
Interface eth0 is up, line protocol detection is disabled
## OUTPUT TRUNCATED ###
Interface eth1 is up, line protocol detection is disabled
## OUTPUT TRUNCATED ##
Now we assign necessary IPv6 addresses.
router-a# conf terminal
router-a(config)# interface eth0
router-a(config-if)# ipv6 address 2001:db8:3::1/64
router-a(config-if)# interface eth1
router-a(config-if)# ipv6 address 2001:db8:1::1/64
We use the same method to assign IPv6 addresses to router-B. I am summarizing the configuration below.
router-b# show running-config
----------
interface eth0
ipv6 address 2001:db8:3::2/64
interface eth1
ipv6 address 2001:db8:2::1/64
Since the eth0 interface of both routers are in the same subnet, i.e., 2001:DB8:3::/64, you should be able to ping from one router to another. Make sure that you can ping successfully before moving on to the next step.
router-a# ping ipv6 2001:db8:3::2
----------
PING 2001:db8:3::2(2001:db8:3::2) 56 data bytes
64 bytes from 2001:db8:3::2: icmp_seq=1 ttl=64 time=3.20 ms
64 bytes from 2001:db8:3::2: icmp_seq=2 ttl=64 time=1.05 ms
### Phase 1: IPv6 BGP Peering ###
In this section, we will configure IPv6 BGP between the two routers. We start by specifying BGP neighbors in router-A.
router-a# conf t
router-a(config)# router bgp 100
router-a(config-router)# no auto-summary
router-a(config-router)# no synchronization
router-a(config-router)# neighbor 2001:DB8:3::2 remote-as 200
Next, we define the address family for IPv6. Within the address family section, we will define the network to be advertised, and activate the neighbors as well.
router-a(config-router)# address-family ipv6
router-a(config-router-af)# network 2001:DB8:1::/48
router-a(config-router-af)# neighbor 2001:DB8:3::2 activate
We will go through the same configuration for router-B. I'm providing the summary of the configuration.
router-b# conf t
router-b(config)# router bgp 200
router-b(config-router)# no auto-summary
router-b(config-router)# no synchronization
router-b(config-router)# neighbor 2001:DB8:3::1 remote-as 100
router-b(config-router)# address-family ipv6
router-b(config-router-af)# network 2001:DB8:2::/48
router-b(config-router-af)# neighbor 2001:DB8:3::1 activate
If all goes well, an IPv6 BGP session should be up between the two routers. If not already done, please make sure that necessary ports (TCP 179) are [open in your firewall][3].
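As a hedged aside (commands illustrative, not from the original tutorial), opening TCP 179 typically looks like one of the following, depending on whether the router uses plain iptables or firewalld:
# iptables -A INPUT -p tcp --dport 179 -j ACCEPT                          # iptables-based systems
# firewall-cmd --add-port=179/tcp --permanent && firewall-cmd --reload    # CentOS/RHEL 7 with firewalld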
We can check IPv6 BGP session information using the following commands.
**For BGP summary:**
router-a# show bgp ipv6 unicast summary
**For BGP advertised routes:**
router-a# show bgp ipv6 neighbors <neighbor-IPv6-address> advertised-routes
**For BGP received routes:**
router-a# show bgp ipv6 neighbors <neighbor-IPv6-address> routes
![](https://farm8.staticflickr.com/7317/16379555088_6e29cb6884_b.jpg)
### Phase 2: Filtering IPv6 Prefixes ###
As we can see from the above output, the routers are advertising their full /48 IPv6 prefix. For demonstration purposes, we will consider the following requirements.
- Router-B will advertise one /64 prefix, one /56 prefix, as well as one full /48 prefix.
- Router-A will accept any IPv6 prefix owned by service provider B, which has a netmask length between /56 and /64.
We are going to filter the prefix as required, using prefix-list and route-map in router-A.
![](https://farm8.staticflickr.com/7367/16381297417_6549218289_c.jpg)
#### Modifying prefix advertisement for Router-B ####
Currently, router-B is advertising only one /48 prefix. We will modify router-B's BGP configuration so that it advertises additional /56 and /64 prefixes as well.
router-b# conf t
router-b(config)# router bgp 200
router-b(config-router)# address-family ipv6
router-b(config-router-af)# network 2001:DB8:2::/56
router-b(config-router-af)# network 2001:DB8:2::/64
We will verify that all prefixes are received at router-A.
![](https://farm9.staticflickr.com/8598/16379761980_7c083ae977_b.jpg)
Great! As we are receiving all prefixes in router-A, we will move forward and create prefix-list and route-map entries to filter these prefixes.
#### Creating Prefix-List ####
As described in the [previous tutorial][4], prefix-list is a mechanism that is used to match an IP address prefix with a subnet length. Once a matched prefix is found, we can apply filtering or other actions to the matched prefix. To meet our requirements, we will go ahead and create a necessary prefix-list entry in router-A.
router-a# conf t
router-a(config)# ipv6 prefix-list FILTER-IPV6-PRFX permit 2001:DB8:2::/56 le 64
The above commands will create a prefix-list entry named 'FILTER-IPV6-PRFX' which will match any prefix in the 2001:DB8:2:: pool with a netmask between 56 and 64.
#### Creating and Applying Route-Map ####
Now that the prefix-list entry is created, we will create a corresponding route-map rule which uses the prefix-list entry.
router-a# conf t
router-a(config)# route-map FILTER-IPV6-RMAP permit 10
router-a(config-route-map)# match ipv6 address prefix-list FILTER-IPV6-PRFX
The above commands will create a route-map rule named 'FILTER-IPV6-RMAP'. This rule will permit IPv6 addresses matched by the prefix-list 'FILTER-IPV6-PRFX' that we have created earlier.
Remember that a route-map rule is only effective when it is applied to a neighbor or an interface in a certain direction. We will apply the route-map in the BGP neighbor configuration. As the filter is meant for inbound prefixes, we apply the route-map in the inbound direction.
router-a# conf t
router-a(config)# router bgp 100
router-a(config-router)# address-family ipv6
router-a(config-router-af)# neighbor 2001:DB8:3::2 route-map FILTER-IPV6-RMAP in
Now when we check the routes received at router-A, we should see only two prefixes that are allowed.
![](https://farm8.staticflickr.com/7337/16379762020_ec2dc39b31_c.jpg)
**Note**: You may need to reset the BGP session for the route-map to take effect.
All IPv6 BGP sessions can be restarted using the following command:
router-a# clear bgp ipv6 *
I am summarizing the configuration of both routers so you get a clear picture at a glance.
![](https://farm9.staticflickr.com/8672/16567240165_eee4398dc8_c.jpg)
### Summary ###
To sum up, this tutorial focused on how to set up BGP peering and filtering using IPv6. We showed how to advertise IPv6 prefixes to a neighboring BGP router, and how to filter the prefixes that are advertised or received. Note that the process described in this tutorial may affect the production networks of a service provider, so please use caution.
Hope this helps.
--------------------------------------------------------------------------------
via: http://xmodulo.com/ipv6-bgp-peering-filtering-quagga-bgp-router.html
Author: [Sarmed Rahman][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](http://linux.cn/)
[a]:http://xmodulo.com/author/sarmed
[1]:http://xmodulo.com/centos-bgp-router-quagga.html
[2]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html
[3]:http://ask.xmodulo.com/open-port-firewall-centos-rhel.html
[4]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html

View File

@ -1,89 +0,0 @@
How to run Ubuntu Snappy Core on Raspberry Pi 2
================================================================================
The Internet of Things (IoT) is upon us. In a couple of years some of us might ask ourselves how we ever survived without it, just like we question our past without cellphones today. Canonical is a contender in this fast growing, but still wide open market. The company wants to claim their stakes in IoT just as they already did for the cloud. At the end of January, the company launched a small operating system that goes by the name of [Ubuntu Snappy Core][1] which is based on Ubuntu Core.
Snappy, the new component in the mix, is a package format derived from DEB and a frontend for updating the system that borrows its idea from the atomic upgrades used in CoreOS, Red Hat's Atomic and elsewhere. As soon as the Raspberry Pi 2 was marketed, Canonical released Snappy Core for that platform. The first edition of the Raspberry Pi was not able to run Ubuntu because Ubuntu's ARM images use the ARMv7 architecture, while the first Raspberry Pis were based on ARMv6. That has changed now, and Canonical, by releasing an RPI2 image of Snappy Core, took the opportunity to make clear that Snappy is meant for the cloud and especially for IoT.
Snappy also runs on other platforms like Amazon EC2, Microsoft's Azure, and Google's Compute Engine, and can also be virtualized with KVM, Virtualbox, or Vagrant. Canonical has embraced big players like Microsoft, Google, Docker or OpenStack and, at the same time, also included small projects from the maker scene as partners. Besides startups like Ninja Sphere and Erle Robotics, there are board manufacturers like Odroid, Banana Pro, Udoo, PCDuino and Parallella as well as Allwinner. Snappy Core will also run in routers soon to help with the poor upgrade policies that vendors maintain.
In this post, let's see how we can test Ubuntu Snappy Core on Raspberry Pi 2.
The image for Snappy Core for the RPI2 can be downloaded from the [Raspberry Pi website][2]. Unpacked from the archive, the resulting image should be [written to an SD card][3] of at least 8 GB. Even though the OS is small, atomic upgrades and the rollback function eat up quite a bit of space. After booting up your Raspberry Pi 2 with Snappy Core, you can log into the system with the default username and password being 'ubuntu'.
![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg)
sudo is already configured and ready for use. For security reasons you should change the username with:
$ sudo usermod -l <new name> <old name>
Alternatively, you can add a new user with the command `adduser`.
Because the RPi lacks a hardware clock, something the Snappy Core image does not take into account, the image has a small bug that throws a lot of errors when processing commands. It is easy to fix.
To find out if the bug affects you, use the command:
$ date
If the output is "Thu Jan 1 01:56:44 UTC 1970", you can fix it with:
$ sudo date --set="Sun Apr 04 17:43:26 UTC 2015"
adapted to your actual time.
![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg)
Now you might want to check if there are any updates available. Note that the usual commands:
$ sudo apt-get update && sudo apt-get dist-upgrade
will not get you very far though, as Snappy uses its own simplified package management system which is based on dpkg. This makes sense, as Snappy will run on a lot of embedded appliances, and you want things to be as simple as possible.
Let's dive into the engine room for a minute to understand how things work with Snappy. The SD card you run Snappy on has three partitions besides the boot partition. Two of those house a duplicated file system. Both of those parallel file systems are permanently mounted as "read only", and only one is active at any given time. The third partition holds a partial writable file system and the user's persistent data. With a fresh system, the partition labeled 'system-a' holds one complete file system, called a core, leaving the parallel partition still empty.
![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg)
If we run the following command now:
$ sudo snappy update
the system will install the update as a complete core, similar to an image, on 'system-b'. You will be asked to reboot your device afterwards to activate the new core.
After the reboot, run the following command to check if your system is up to date and which core is active.
$ sudo snappy versions -a
After rolling out the update and rebooting, you should see that the core that is now active has changed.
As we have not installed any apps yet, the following command:
$ sudo snappy update ubuntu-core
would have been sufficient, and is the way if you want to upgrade just the underlying OS. Should something go wrong, you can rollback by:
$ sudo snappy rollback ubuntu-core
which will take you back to the system's state before the update.
![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg)
Speaking of apps, they are what makes Snappy useful. There are not that many at this point, but the IRC channel #snappy on Freenode is humming along nicely and with a lot of people involved, the Snappy App Store gets new apps added on a regular basis. You can visit the shop by pointing your browser to http://<ip-address>:4200, and you can install apps right from the shop and then launch them with http://webdm.local in your browser. Building apps yourself for Snappy is not all that hard, and [well documented][4]. You can also port DEB packages into the snappy format quite easily.
![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg)
Ubuntu Snappy Core, due to the limited number of available apps, is not overly useful in a productive way at this point in time, although it invites us to dive into the new Snappy package format and play with atomic upgrades the Canonical way. Since it is easy to set up, this seems like a good opportunity to learn something new.
--------------------------------------------------------------------------------
via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html
Author: [Ferdinand Thommes][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](http://linux.cn/)
[a]:http://xmodulo.com/author/ferdinand
[1]:http://www.ubuntu.com/things
[2]:http://www.raspberrypi.org/downloads/
[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html
[4]:https://developer.ubuntu.com/en/snappy/

View File

@ -1,131 +0,0 @@
ictlyh Translating
How to access a Linux server behind NAT via reverse SSH tunnel
================================================================================
You are running a Linux server at home, which is behind a NAT router or restrictive firewall. Now you want to SSH to the home server while you are away from home. How would you set that up? SSH port forwarding will certainly be an option. However, port forwarding can become tricky if you are dealing with multiple nested NAT environment. Besides, it can be interfered with under various ISP-specific conditions, such as restrictive ISP firewalls which block forwarded ports, or carrier-grade NAT which shares IPv4 addresses among users.
### What is Reverse SSH Tunneling? ###
One alternative to SSH port forwarding is **reverse SSH tunneling**. The concept of reverse SSH tunneling is simple. For this, you will need another host (so-called "relay host") outside your restrictive home network, which you can connect to via SSH from where you are. You could set up a relay host using a [VPS instance][1] with a public IP address. What you do then is to set up a persistent SSH tunnel from the server in your home network to the public relay host. With that, you can connect "back" to the home server from the relay host (which is why it's called a "reverse" tunnel). As long as the relay host is reachable to you, you can connect to your home server wherever you are, or however restrictive your NAT or firewall is in your home network.
![](https://farm8.staticflickr.com/7742/17162647378_c7d9f10de8_b.jpg)
### Set up a Reverse SSH Tunnel on Linux ###
Let's see how we can create and use a reverse SSH tunnel. We assume the following. We will be setting up a reverse SSH tunnel from homeserver to relayserver, so that we can SSH to homeserver via relayserver from another computer called clientcomputer. The public IP address of **relayserver** is 1.1.1.1.
On homeserver, open an SSH connection to relayserver as follows.
homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1
Here the port 10022 is any arbitrary port number you can choose. Just make sure that this port is not used by other programs on relayserver.
The "-R 10022:localhost:22" option defines a reverse tunnel. It forwards traffic on port 10022 of relayserver to port 22 of homeserver.
With "-fN" option, SSH will go right into the background once you successfully authenticate with an SSH server. This option is useful when you do not want to execute any command on a remote SSH server, and just want to forward ports, like in our case.
After running the above command, you will be right back to the command prompt of homeserver.
Log in to relayserver, and verify that 127.0.0.1:10022 is bound to sshd. If so, that means a reverse tunnel is set up correctly.
relayserver~$ sudo netstat -nap | grep 10022
----------
tcp 0 0 127.0.0.1:10022 0.0.0.0:* LISTEN 8493/sshd
Now from any other computer (e.g., clientcomputer), log in to relayserver. Then access homeserver as follows.
relayserver~$ ssh -p 10022 homeserver_user@localhost
One thing to take note of is that the SSH login/password you type for localhost should be the one for homeserver, not for relayserver, since you are logging in to homeserver via the tunnel's local endpoint. So do not type the login/password for relayserver. After a successful login, you will be on homeserver.
### Connect Directly to a NATed Server via a Reverse SSH Tunnel ###
While the above method allows you to reach **homeserver** behind NAT, you need to log in twice: first to **relayserver**, and then to **homeserver**. This is because the end point of an SSH tunnel on relayserver is binding to loopback address (127.0.0.1).
But in fact, there is a way to reach NATed homeserver directly with a single login to relayserver. For this, you will need to let sshd on relayserver forward a port not only from loopback address, but also from an external host. This is achieved by specifying **GatewayPorts** option in sshd running on relayserver.
Open /etc/ssh/sshd_config of **relayserver** and add the following line.
relayserver~$ vi /etc/ssh/sshd_config
----------
GatewayPorts clientspecified
Restart sshd.
Debian-based system:
relayserver~$ sudo /etc/init.d/ssh restart
Red Hat-based system:
relayserver~$ sudo systemctl restart sshd
Now let's initiate a reverse SSH tunnel from homeserver as follows.
homeserver~$ ssh -fN -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
Log in to relayserver and confirm with netstat command that a reverse SSH tunnel is established successfully.
relayserver~$ sudo netstat -nap | grep 10022
----------
tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev
Unlike a previous case, the end point of a tunnel is now at 1.1.1.1:10022 (relayserver's public IP address), not 127.0.0.1:10022. This means that the end point of the tunnel is reachable from an external host.
Now from any other computer (e.g., clientcomputer), type the following command to gain access to NATed homeserver.
clientcomputer~$ ssh -p 10022 homeserver_user@1.1.1.1
In the above command, while 1.1.1.1 is the public IP address of relayserver, homeserver_user must be the user account associated with homeserver. This is because the real host you are logging in to is homeserver, not relayserver. The latter simply relays your SSH traffic to homeserver.
### Set up a Persistent Reverse SSH Tunnel on Linux ###
Now that you understand how to create a reverse SSH tunnel, let's make the tunnel "persistent", so that the tunnel is up and running all the time (regardless of temporary network congestion, SSH timeout, relay host rebooting, etc.). After all, if the tunnel is not always up, you won't be able to connect to your home server reliably.
For a persistent tunnel, I am going to use a tool called autossh. As the name implies, this program allows you to automatically restart an SSH session should it break for any reason. So it is useful for keeping a reverse SSH tunnel active.
As the first step, let's set up [passwordless SSH login][2] from homeserver to relayserver. That way, autossh can restart a broken reverse SSH tunnel without user's involvement.
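As a hedged sketch (key type and paths are just the usual defaults, not specified by the article), passwordless login is typically prepared on homeserver like this:
homeserver~$ ssh-keygen -t rsa                         # accept the defaults, leave the passphrase empty
homeserver~$ ssh-copy-id relayserver_user@1.1.1.1      # install the public key on relayserver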
Next, [install autossh][3] on homeserver where a tunnel is initiated.
From homeserver, run autossh with the following arguments to create a persistent SSH tunnel destined to relayserver.
homeserver~$ autossh -M 10900 -fN -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
The "-M 10900" option specifies a monitoring port on relayserver which will be used to exchange test data to monitor an SSH session. This port should not be used by any program on relayserver.
The "-fN" option is passed to ssh command, which will let the SSH tunnel run in the background.
The "-o XXXX" options tell ssh to:
- Use key authentication, not password authentication.
- Automatically accept (unknown) SSH host keys.
- Exchange keep-alive messages every 60 seconds.
- Send up to 3 keep-alive messages without receiving any response back.
The rest of reverse SSH tunneling related options remain the same as before.
If you want an SSH tunnel to be automatically up upon boot, you can add the above autossh command in /etc/rc.local.
### Conclusion ###
In this post, I talked about how you can use a reverse SSH tunnel to access a Linux server behind a restrictive firewall or NAT gateway from outside world. While I demonstrated its use case for a home network, you must be careful when applying it for corporate networks. Such a tunnel can be considered as a breach of a corporate policy, as it circumvents corporate firewalls and can expose corporate networks to outside attacks. There is a great chance it can be misused or abused. So always remember its implication before setting it up.
--------------------------------------------------------------------------------
via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html
Author: [Dan Nanni][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](http://linux.cn/)
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/go/digitalocean
[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html
[3]:http://ask.xmodulo.com/install-autossh-linux.html

View File

@ -1,55 +0,0 @@
Translating by XLCYun.
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 1 - Introduction
================================================================================
*Author's Note: If by some miracle you managed to click this article without reading the title then I want to re-iterate something... This is an editorial. These are my opinions. They are not representative of Phoronix, or Michael, these are my own thoughts.*
Additionally, yes... This is quite possibly a flame-bait article. I hope the community is better than that, because I do want to start a discussion and give feedback to both the KDE and Gnome communities. For that reason when I point out, what I see as, a flaw I will try to be specific and direct so that any discussion can be equally specific and direct. For the record: The alternative title for this article was "Death By A Thousand [Paper Cuts][1]".
Now, with that out of the way... Onto the article.
![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920)
When I sent the [Fedora 22 KDE Review][2] off to Michael I did it with a bit of a bad taste in my mouth. It wasn't because I didn't like KDE, or hadn't been enjoying Fedora, far from it. In fact, I started to transition my T450s over to Arch Linux but quickly decided against that, as I enjoyed the level of convenience that Fedora brings to me for many things.
The reason I had a bad taste in my mouth was because the Fedora developers put a lot of time and effort into their "Workstation" product and I wasn't seeing any of it. I wasn't using Fedora the way the main developers had intended it to be used and therefore wasn't getting the "Fedora Experience." It felt like someone reviewing Ubuntu by using Kubuntu, using a Hackintosh to review OS X, or reviewing Gentoo by using Sabayon. A lot of readers in the forums bash on Michael for reviewing distributions in their default configurations-- myself included. While I still do believe that reviews should be done under 'real-world' configurations, I do see the value in reviewing something in the condition it was given to you-- for better or worse.
It was with that attitude in mind that I decided to take a dip in the Gnome pool.
I do, however, need to add one more disclaimer... I am looking at KDE and Gnome as they are packaged in Fedora. OpenSUSE, Kubuntu, Arch, etc, might all have different implementations of each desktop that will change whether my specific 'pain points' are relevant to your distribution. Furthermore, despite the title, this is going to be a VERY KDE heavy article. I called the article what I did because it was actually USING Gnome that made me realize how many "paper cuts" KDE actually has.
### Login Screen ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920)
I normally don't mind Distributions shipping distro-specific themes, because most of them make the desktop look nicer. I finally found my exception.
First impressions count for a lot, right? Well, GDM definitely gets this one right. The login screen is incredibly clean with consistent design language through every single part of it. The use of common-language icons instead of text boxes helps in that regard.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920)
That is not to say that the Fedora 22 KDE login screen-- now SDDM rather than KDM-- looks 'bad' per se, but it's definitely more jarring.
Where's the fault? The top bar. Look at the Gnome screenshot-- you select a user and you get a tiny little gear symbol for selecting what session you want to log into. The design is clean, it gets out of your way, you could honestly miss it completely if you weren't paying attention. Now look at the blue KDE screenshot: the bar doesn't look like it was even rendered using the same widgets, and its entire placement feels like an afterthought of "Well shit, we need to throw this option somewhere..."
The same can be said for the Reboot and Shutdown options in the top right. Why not just a power button that opens a drop-down menu with Reboot, Shutdown, and Suspend? Having the buttons be different colors than the background certainly makes them stick out and be noticeable... but I don't think in a good way. Again, they feel like an afterthought.
GDM is also far more useful from a practical standpoint; look again along the top row. The time is listed, there's a volume control so that if you are trying to be quiet you can mute all sounds before you even log in, and there's an accessibility button for things like high contrast, zooming, text to speech, etc., all available via simple toggle buttons.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920)
Swap it to upstream's Breeze theme and... suddenly most of my complaints are fixed. Common-language icons, everything is in the center of the screen, but the less important stuff is off to the sides. This creates a nice harmony between the top and bottom of the screen since they are equally empty. You still have a text box for the session switcher, but I can forgive that since the power buttons are now common-language icons. The current time is shown, which is a nice touch, as is a battery life indicator. Sure, Gnome still has a few nice additions, such as the volume applet and the accessibility buttons, but Breeze is a step up from Fedora's KDE theme.
Go to Windows (pre-Windows 8 & 10...) or OS X and you will see similar things: very clean, get-out-of-your-way lock screens and login screens that are devoid of text boxes or other widgets that distract the eye. It's a design that works and that is non-distracting. Fedora... Ship Breeze by default. VDG got the design of the Breeze theme right. Don't mess it up.
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1
作者Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts
[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1

View File

@ -1,32 +0,0 @@
Translating by XLCYun.
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 2 - The GNOME Desktop
================================================================================
### The Desktop ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920)
I spent the first five days of my week logging into Gnome manually-- not turning on automatic login. On the night of the fifth day I got annoyed with having to log in by hand, so I went into the User Manager and turned on automatic login. The next time I logged in I got a prompt: "Your keychain was not unlocked. Please enter your password to unlock your keychain." That was when I realized something... Gnome had been automatically unlocking my keychain-- my wallet in KDE speak-- every time I logged in via GDM. It was only when I bypassed GDM's login that Gnome had to step in and make me do it manually.
Now, I am under the personal belief that if you enable automatic login then your keychain should be unlocked automatically as well-- otherwise what's the point? Either way you still have to type in your password, and at least if you hit the GDM login screen you have a chance to change your session if you want to.
But, regardless of that, it was at that moment that I realized it was such a simple thing that made the desktop feel so much more like it was working WITH ME. When I log into KDE via SDDM? Before the splash screen is even finished loading, there is a window popping up over top of the splash animation-- thereby disrupting the splash screen-- prompting me to unlock my KDE wallet or GPG keyring.
If a wallet doesn't exist already you get prompted to create a wallet-- why couldn't one have been created for me at user creation?-- and then get asked to pick between two encryption methods, one of which is even implied to be insecure (Blowfish). Why are you letting me pick something insecure for my security? Author's Note: If you install the actual KDE spin and don't just install KDE after the fact, then a wallet is created for you at user creation. Unfortunately it's not unlocked for you automatically, and it seems to use the older Blowfish method rather than the newer, more secure GPG method.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920)
If you DO pick the secure one (GPG) then it tries to load a GPG key... which I hope you had created already, because if you don't you get yelled at. How do you create one? Well, it doesn't offer to make one for you... nor does it tell you how... and if you do manage TO figure out that you are supposed to use KGpg to create the key, then you get taken through several menus and prompts that are nothing but confusing to new users. Why are you asking me where the GPG binary is located? How on earth am I supposed to know? Can't you just use the most recent one if there's more than one? And if there IS only one then, I ask again, why are you prompting me?
Why are you asking me what key size and encryption algorithm to use? You select 2048 and RSA/RSA by default, so why not just use those? If you want to have those options available then throw them under the "Expert mode" button that is right there. This isn't just about having configuration options available; it's about needless things that get thrown in the user's face by default. This is going to be a theme for the rest of the article... KDE needs better sane defaults. Configuration is great, and I love the configuration I get with KDE, but it needs to learn when to prompt and when not to. It also needs to learn that "Well, it's configurable" is no excuse for bad defaults. Defaults are what users see initially; bad defaults will lose users.
Let's move on from the key chain issue though, because I think I made my point.
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2
作者Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
translating wi-cuckoo
translation by strugglingyouth
How to monitor NGINX with Datadog - Part 3
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)
@ -148,4 +148,4 @@ via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up
[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md
[19]:https://github.com/DataDog/the-monitor/issues
[19]:https://github.com/DataDog/the-monitor/issues

View File

@ -1,4 +1,4 @@
KevinSJ Translating
translation by strugglingyouth
How to monitor NGINX - Part 1
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png)

View File

@ -0,0 +1,65 @@
translation by strugglingyouth
Handy commands for profiling your Unix file systems
================================================================================
![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png)
Credit: Sandra H-S
One of the problems that seems to plague nearly all file systems -- Unix and others -- is the continuous buildup of files. Almost no one takes the time to clean out files that they no longer use and file systems, as a result, become so cluttered with material of little or questionable value that keeping them running well, adequately backed up, and easy to manage is a constant challenge.
One way that I have seen to help encourage the owners of all that data detritus to address the problem is to create a summary report or "profile" of a file collection that reports on such things as the number of files; the oldest, newest, and largest of those files; and a count of who owns those files. If someone realizes that a collection of half a million files contains nothing newer than five years old, they might go ahead and remove them -- or, at least, archive and compress them. The basic problem is that huge collections of files are overwhelming and most people are afraid that they might accidentally delete something important. Having a way to characterize a file collection can help demonstrate the nature of the content and encourage those digital packrats to clean out their nests.
When I prepare a file system summary report on Unix, a handful of Unix commands easily provide some very useful statistics. To count the files in a directory, you can use a find command like this.
$ find . -type f | wc -l
187534
Finding the oldest and newest files is a bit more complicated, but still quite easy. In the commands below, we're using the find command again to find files, displaying the data with a year-month-day format that makes it possible to sort by file age, and then displaying the top -- thus the oldest -- file in that list.
In the second command, we do the same, but print the last line -- thus the newest -- file.
$ find -type f -printf '%T+ %p\n' | sort | head -n 1
2006-02-03+02:40:33 ./skel/.xemacs/init.el
$ find -type f -printf '%T+ %p\n' | sort | tail -n 1
2015-07-19+14:20:16 ./.bash_history
The %T (file date and time) and %p (file name with path) parameters used with find's -printf option make this work.
If we're looking at home directories, we're undoubtedly going to find that history files are the newest files and that isn't likely to be a very interesting bit of information. You can omit those files by "un-grepping" them or you can omit all files that start with dots as shown below.
$ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1
2015-07-19+13:02:12 ./isPrime
Finding the largest file involves using the %s (size) parameter and we include the file name (%f) since that's what we want the report to show.
$ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1
20183040 project.org.tar
To summarize file ownership, use the %u (owner) parameter:
$ find -type f -printf '%u \n' | grep -v "\./\." | sort | uniq -c
180034 shs
7500 jdoe
If your file system also records the last access date, it can be very useful to show that files haven't been accessed in, say, more than two years. This would give your reviewers an important insight into the value of those files. The last access parameter (%a) could be used like this:
$ find -type f -printf '%a+ %p\n' | sort | head -n 1
Fri Dec 15 03:00:30 2006+ ./statreport
Of course, if the most recently accessed file is also in the deep dark past, that's likely to get even more of a reaction.
$ find -type f -printf '%a+ %p\n' | sort | tail -n 1
Wed Nov 26 03:00:27 2007+ ./my-notes
Getting a sense of what's in a file system or large directory by creating a summary report showing the file date ranges, the largest files, the file owners, and the oldest and newest access times can help demonstrate how current and how important a file collection is, and can help its owners decide whether it's time to clean up.
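To pull these pieces together, here is a rough sketch of a profiling script built only from the commands shown above; it assumes GNU find (the -printf option is a GNU extension) and takes the directory to summarize as an optional argument.
    #!/bin/bash
    # Rough file-collection profile; assumes GNU find.
    dir=${1:-.}
    cd "$dir" || exit 1
    echo "Profile of $(pwd)"
    echo "File count:   $(find . -type f | wc -l)"
    echo "Oldest file:  $(find . -type f -printf '%T+ %p\n' | sort | head -n 1)"
    echo "Newest file:  $(find . -type f -printf '%T+ %p\n' | sort | tail -n 1)"
    echo "Largest file: $(find . -type f -printf '%s %p\n' | sort -n | tail -n 1)"
    echo "Owners:"
    find . -type f -printf '%u\n' | sort | uniq -c | sort -nr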
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html
作者:[Sandra Henry-Stocker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/

View File

@ -0,0 +1,92 @@
FSSlc translating
Linux Logging Basics
================================================================================
First we'll describe the basics of what Linux logs are, where to find them, and how they get created. If you already know this stuff, feel free to skip to the next section.
### Linux System Logs ###
Many valuable log files are automatically created for you by Linux. You can find them in your /var/log directory. Here is what this directory looks like on a typical Ubuntu system:
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Linux-system-log-terminal.png)
Some of the most important Linux system logs include:
- /var/log/syslog or /var/log/messages stores all global system activity data, including startup messages. Debian-based systems like Ubuntu store this in /var/log/syslog. RedHat-based systems like RHEL or CentOS store this in /var/log/messages.
- /var/log/auth.log or /var/log/secure stores logs from the Pluggable Authentication Module (pam) including successful logins, failed login attempts, and authentication methods. Ubuntu and Debian store authentication messages in /var/log/auth.log. RedHat and CentOS store this data in /var/log/secure.
- /var/log/kern stores kernel error and warning data, which is particularly helpful for troubleshooting custom kernels.
- /var/log/cron stores information about cron jobs. Use this data to verify that your cron jobs are running successfully.
Digital Ocean has a thorough [tutorial][1] on these files and how rsyslog creates them on common distributions like RedHat and CentOS.
Applications also write log files in this directory. For example, popular servers like Apache, Nginx, MySQL, and more can write log files here. Some of these log files are written by the application itself. Others are created through syslog (see below).
### What's Syslog? ###
How do Linux system log files get created? The answer is through the syslog daemon, which listens for log messages on the syslog socket /dev/log and then writes them to the appropriate log file.
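A quick way to watch that pipeline in action is the logger command, which writes a message to /dev/log; the sketch below assumes a Debian/Ubuntu-style system where the entry then shows up at the end of /var/log/syslog.
    $ logger -t mytest "hello from logger"
    $ tail -n 1 /var/log/syslog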
The word “syslog” is an overloaded term and is often used in short to refer to one of these:
1. **Syslog daemon** — a program to receive, process, and send syslog messages. It can [send syslog remotely][2] to a centralized server or write it to a local file. Common examples include rsyslogd and syslog-ng. In this usage, people will often say “sending to syslog.”
1. **Syslog protocol** — a transport protocol specifying how logs can be sent over a network and a data format definition for syslog messages (below). Its officially defined in [RFC-5424][3]. The standard ports are 514 for plaintext logs and 6514 for encrypted logs. In this usage, people will often say “sending over syslog.”
1. **Syslog messages** — log messages or events in the syslog format, which includes a header with several standard fields. In this usage, people will often say “sending syslog.”
Syslog messages or events include a header with several standard fields, making analysis and routing easier. They include the timestamp, the name of the application, the classification or location in the system where the message originates, and the priority of the issue.
Here is an example log message with the syslog header included. It's from the sshd daemon, which controls remote logins to the system. This message describes a failed login attempt:
<34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2
### Syslog Format and Fields ###
Each syslog message includes a header with fields. Fields are structured data that makes it easier to analyze and route the events. Here is the format we used to generate the above syslog example. You can match each value to a specific field name.
<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%n
Below, you'll find descriptions of some of the most commonly used syslog fields when searching or troubleshooting issues.
#### Timestamp ####
The [timestamp][4] field (2003-10-11T22:14:15.003Z in the example) indicates the time and date that the message was generated on the system sending the message. That time can be different from when another system receives the message. The example timestamp breaks down like this:
- **2003-10-11** is the year, month, and day.
- **T** is a required element of the TIMESTAMP field, separating the date and the time.
- **22:14:15.003** is the 24-hour format of the time, including the number of milliseconds (**003**) into the next second.
- **Z** is an optional element, indicating UTC time. Instead of Z, the example could have included an offset, such as -08:00, which indicates that the time is offset from UTC by 8 hours, PST.
#### Hostname ####
The [hostname][5] field (server1.com in the example above) indicates the name of the host or system that sent the message.
#### App-Name ####
The [app-name][6] field (sshd in the example) indicates the name of the application that sent the message.
#### Priority ####
The priority field or [pri][7] for short (<34> in the example above) tells you how urgent or severe the event is. It's a combination of two numerical fields: the facility and the severity. The severity ranges from the number 7 for debug events all the way to 0, which is an emergency. The facility describes which process created the event. It ranges from 0 for kernel messages to 23 for local application use.
Pri can be output in two ways. The first is as a single number, prival, which is calculated as the facility field value multiplied by 8 plus the severity field value: (facility × 8) + severity. The second is pri-text, which will output in the string format “facility.severity.” The latter format can often be easier to read and search but takes up more storage space.
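As a worked example, the <34> at the start of the sample message above decodes to facility 4 (security/authorization) and severity 2 (critical), since 4 × 8 + 2 = 34. The same arithmetic in a shell one-liner:
    $ pri=34; echo "facility=$((pri / 8)) severity=$((pri % 8))"
    facility=4 severity=2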
--------------------------------------------------------------------------------
via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/
作者:[Jason Skowronski][a1]
作者:[Amy Echeverri][a2]
作者:[Sadequl Hussain][a3]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a1]:https://www.linkedin.com/in/jasonskowronski
[a2]:https://www.linkedin.com/in/amyecheverri
[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
[1]:https://www.digitalocean.com/community/tutorials/how-to-view-and-configure-linux-logs-on-ubuntu-and-centos
[2]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.y2e9tdfk1cdb
[3]:https://tools.ietf.org/html/rfc5424
[4]:https://tools.ietf.org/html/rfc5424#section-6.2.3
[5]:https://tools.ietf.org/html/rfc5424#section-6.2.4
[6]:https://tools.ietf.org/html/rfc5424#section-6.2.5
[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1

View File

@ -0,0 +1,418 @@
Managing Linux Logs
================================================================================
A key best practice for logging is to centralize or aggregate your logs in one place, especially if you have multiple servers or tiers in your architecture. We'll tell you why this is a good idea and give tips on how to do it easily.
### Benefits of Centralizing Logs ###
It can be cumbersome to look at individual log files if you have many servers. Modern web sites and services often include multiple tiers of servers, distributed with load balancers, and more. It'd take a long time to hunt down the right file, and even longer to correlate problems across servers. There's nothing more frustrating than finding that the information you are looking for hasn't been captured, or that the log file that could have held the answer has just been lost after a restart.
Centralizing your logs makes them faster to search, which can help you solve production issues faster. You don't have to guess which server had the issue because all the logs are in one place. Additionally, you can use more powerful tools to analyze them, including log management solutions. These solutions can [transform plain text logs][1] into fields that can be easily searched and analyzed.
Centralizing your logs also makes them easier to manage:
- They are safer from accidental or intentional loss when they are backed up and archived in a separate location. If your servers go down or are unresponsive, you can use the centralized logs to debug the problem.
- You don't have to worry about ssh or inefficient grep commands requiring more resources on troubled systems.
- You don't have to worry about full disks, which can crash your servers.
- You can keep your production servers secure without giving your entire team access just to look at logs. It's much safer to give your team access to logs from the central location.
With centralized log management, you still must deal with the risk of being unable to transfer logs to the centralized location due to poor network connectivity or of using up a lot of network bandwidth. We'll discuss how to intelligently address these issues in the sections below.
### Popular Tools for Centralizing Logs ###
The most common way of centralizing logs on Linux is by using syslog daemons or agents. The syslog daemon supports local log collection, then transports logs through the syslog protocol to a central server. There are several popular daemons that you can use to centralize your log files:
- [rsyslog][2] is a light-weight daemon installed on most common Linux distributions.
- [syslog-ng][3] is the second most popular syslog daemon for Linux.
- [logstash][4] is a heavier-weight agent that can do more advanced processing and parsing.
- [fluentd][5] is another agent with advanced processing capabilities.
Rsyslog is the most popular daemon for centralizing your log data because it's installed by default in most common distributions of Linux. You don't need to download or install it, and it's lightweight, so it won't take up much of your system resources.
If you need more advanced filtering or custom parsing capabilities, Logstash is the next most popular choice if you don't mind the extra system footprint.
### Configure Rsyslog.conf ###
Since rsyslog is the most widely used syslog daemon, we'll show how to configure it to centralize logs. The global configuration file is located at /etc/rsyslog.conf. It loads modules, sets global directives, and has an include for application-specific files located in the directory /etc/rsyslog.d/. This directory contains /etc/rsyslog.d/50-default.conf, which instructs rsyslog to write the system logs to file. You can read more about the configuration files in the [rsyslog documentation][6].
The configuration language for rsyslog is [RainerScript][7]. You set up specific inputs for logs as well as actions to output them to another destination. Rsyslog already configures standard defaults for syslog input, so you usually just need to add an output to your log server. Here is an example configuration for rsyslog to output logs to an external server. In this example, **BEBOP** is the hostname of the server, so you should replace it with your own server name.
action(type="omfwd" protocol="tcp" target="BEBOP" port="514")
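After restarting rsyslog, one hedged way to confirm the new output works is to generate a test message with the logger command and check that it arrives on the central server (this assumes a systemd-based host and the util-linux logger):
    # systemctl restart rsyslog
    $ logger "forwarding test from $(hostname)"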
You could send your logs to a log server with ample storage to keep a copy for search, backup, and analysis. If you're storing logs in the file system, then you should set up [log rotation][8] to keep your disk from getting full.
Alternatively, you can send these logs to a log management solution. If your solution is installed locally you can send it to your local host and port as specified in your system documentation. If you use a cloud-based provider, you will send them to the hostname and port specified by your provider.
### Log Directories ###
You can centralize all the files in a directory or matching a wildcard pattern. The nxlog and syslog-ng daemons support both directories and wildcards (*).
Common versions of rsyslog can't monitor directories directly. As a workaround, you can set up a cron job to monitor the directory for new files, then configure rsyslog to send those files to a destination, such as your log management system. As an example, the log management vendor Loggly has an open source version of a [script to monitor directories][9].
### Which Protocol: UDP, TCP, or RELP? ###
There are three main protocols that you can choose from when you transmit data over the Internet. The most common is UDP for your local network and TCP for the Internet. If you cannot lose logs, then use the more advanced RELP protocol.
[UDP][10] sends a datagram packet, which is a single packet of information. It's an outbound-only protocol, so it doesn't send you an acknowledgement of receipt (ACK). It makes only one attempt to send the packet. UDP can be used to smartly degrade or drop logs when the network gets congested. It's most commonly used on reliable networks like localhost.
[TCP][11] sends streaming information in multiple packets and returns an ACK. TCP makes multiple attempts to send the packet, but is limited by the size of the [TCP buffer][12]. This is the most common protocol for sending logs over the Internet.
[RELP][13] is the most reliable of these three protocols but was created for rsyslog and has less industry adoption. It acknowledges receipt of data in the application layer and will resend if there is an error. Make sure your destination also supports this protocol.
### Reliably Send with Disk Assisted Queues ###
If rsyslog encounters a problem when storing logs, such as an unavailable network connection, it can queue the logs until the connection is restored. The queued logs are stored in memory by default. However, memory is limited and if the problem persists, the logs can exceed memory capacity.
**Warning: You can lose data if you store logs only in memory.**
Rsyslog can queue your logs to disk when memory is full. [Disk-assisted queues][14] make transport of logs more reliable. Here is an example of how to configure rsyslog with a disk-assisted queue:
$WorkDirectory /var/spool/rsyslog # where to place spool files
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
$ActionQueueType LinkedList # run asynchronously
$ActionResumeRetryCount -1 # infinite retries if host is down
### Encrypt Logs Using TLS ###
When the security and privacy of your data is a concern, you should consider encrypting your logs. Sniffers and middlemen could read your log data if you transmit it over the Internet in clear text. You should encrypt your logs if they contain private information, sensitive identification data, or government-regulated data. The rsyslog daemon can encrypt your logs using the TLS protocol and keep your data safer.
To set up TLS encryption, you need to do the following tasks:
1. Generate a [certificate authority][15] (CA). There are sample certificates in /contrib/gnutls, which are good only for testing, but you need to create your own for production. If you're using a log management service, it will have one ready for you.
1. Generate a [digital certificate][16] for your server to enable SSL operation, or use one from your log management service provider.
1. Configure your rsyslog daemon to send TLS-encrypted data to your log management system.
Here's an example rsyslog configuration with TLS encryption. Replace CERT and DOMAIN_NAME with your own server settings.
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com
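As a rough sketch of step 1 above, a test CA can be generated with GnuTLS's certtool, which prompts interactively for the certificate details; treat this as a starting point for testing rather than a production setup:
    $ certtool --generate-privkey --outfile ca-key.pem
    $ certtool --generate-self-signed --load-privkey ca-key.pem --outfile ca.pem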
### Best Practices for Application Logging ###
In addition to the logs that Linux creates by default, it's also a good idea to centralize logs from important applications. Almost all Linux-based server class applications write their status information in separate, dedicated log files. This includes database products like PostgreSQL or MySQL, web servers like Nginx or Apache, firewalls, print and file sharing services, directory and DNS servers, and so on.
The first thing administrators do after installing an application is to configure it. Linux applications typically have a .conf file somewhere within the /etc directory. It can be somewhere else too, but that's the first place where people look for configuration files.
Depending on how complex or large the application is, the number of settable parameters can range from a few to hundreds. As mentioned before, most applications write their status to some sort of log file, and the configuration file is where the log settings, among other things, are defined.
If you're not sure where it is, you can use the locate command to find it:
[root@localhost ~]# locate postgresql.conf
/usr/pgsql-9.4/share/postgresql.conf.sample
/var/lib/pgsql/9.4/data/postgresql.conf
#### Set a Standard Location for Log Files ####
Linux systems typically save their log files under the /var/log directory. This works fine, but check whether the application saves under a specific directory under /var/log. If it does, great; if not, you may want to create a dedicated directory for the app under /var/log. Why? That's because other applications save their log files under /var/log too, and if your app saves more than one log file (perhaps once every day, or after each service restart), it may be a bit difficult to trawl through a large directory to find the file you want.
If you have more than one instance of the application running in your network, this approach also comes in handy. Think about a situation where you may have a dozen web servers running in your network. When troubleshooting any one of the boxes, you would know exactly where to go.
#### Use A Standard Filename ####
Use a standard filename for the latest logs from your application. This makes it easy because you can monitor and tail a single file. Many applications add some sort of date and time stamp to the filename. This makes it much more difficult to find the latest file and to set up file monitoring with rsyslog. A better approach is to add timestamps to older log files using logrotate. This makes them easier to archive and search historically.
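A minimal sketch of this pattern with logrotate, assuming a hypothetical application log at /var/log/myapp/app.log: the live file keeps its name, and rotated copies get a date suffix via the dateext directive.
    # cat > /etc/logrotate.d/myapp <<'EOF'
    /var/log/myapp/app.log {
        daily
        rotate 30
        dateext
        compress
        missingok
        notifempty
    }
    EOF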
#### Append the Log File ####
Is the log file going to be overwritten after each application restart? If so, we recommend turning that off. After each restart the app should append to the log file. That way, you can always go back to the last log line before the restart.
#### Appending vs. Rotation of Log File ####
Even if the application writes a new log file after each restart, how is it saving to the current log? Is it appending to one single, massive file? Linux systems are not known for frequent reboots or crashes: applications can run for very long periods without even blinking, but that can also make the log file very large. If you are trying to analyze the root cause of a connection failure that happened last week, you could easily be searching through tens of thousands of lines.
We recommend you configure the application to rotate its log file once every day, say at midnight.
Why? Well, for a start, it becomes manageable. It's much easier to find a file name with a specific date-time pattern than to search through one file for that date's entries. Files are also much smaller: you don't think vi has frozen when you open a log file. Secondly, if you are sending the log file over the wire to a different location (perhaps a nightly backup job copying to a centralized log server), it doesn't chew up your network's bandwidth. Third and final, it helps with your log retention. If you want to cull old log entries, it's easier to delete files older than a particular date than to have an application parse one single large file.
#### Retention of Log File ####
How long do you keep a log file? That definitely comes down to business requirements. You could be asked to keep one week's worth of logging information, or it may be a regulatory requirement to keep ten years' worth of data. Whatever it is, logs need to go from the server at one time or another.
In our opinion, unless otherwise required, keep at least a month's worth of log files online, plus copy them to a secondary location like a logging server. Anything older than that can be offloaded to separate media. For example, if you are on AWS, your older logs can be copied to Glacier.
#### Separate Disk Location for Log Files ####
Linux best practice usually suggests mounting the /var directory on a separate file system. This is because of the high number of I/Os associated with this directory. We would recommend mounting the /var/log directory on a separate disk system. This can avoid I/O contention with the main application's data. Also, if the number of log files becomes too large or the single log file becomes too big, it doesn't fill up the entire disk.
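A hedged sketch of the idea, assuming a spare disk at /dev/sdb1 and that logging services are stopped while the existing files are copied across:
    # mkfs.ext4 /dev/sdb1
    # mount /dev/sdb1 /mnt
    # rsync -a /var/log/ /mnt/
    # umount /mnt
    # mount /dev/sdb1 /var/log
    # echo '/dev/sdb1  /var/log  ext4  defaults  0 2' >> /etc/fstab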
#### Log Entries ####
What information should be captured in each log entry?
That depends on what you want to use the log for. Do you want to use it only for troubleshooting purposes, or do you want to capture everything that's happening? Is it a legal requirement to capture what each user is running or viewing?
If you are using logs for troubleshooting purposes, save only errors, warnings or fatal messages. There's no reason to capture debug messages, for example. The app may log debug messages by default or another administrator might have turned this on for another troubleshooting exercise, but you need to turn this off because it can definitely fill up the space quickly. At a minimum, capture the date, time, client application name, source IP or client host name, action performed and the message itself.
#### A Practical Example for PostgreSQL ####
As an example, let's look at the main configuration file for a vanilla PostgreSQL 9.4 installation. It's called postgresql.conf and, unlike other config files on Linux systems, it's not saved under the /etc directory. In the code snippet below, we can see it's in the /var/lib/pgsql directory of our CentOS 7 server:
[root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf
...
#------------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr'
# Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on
# Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
log_directory = 'pg_log'
# directory where log files are written,
# can be absolute or relative to PGDATA
log_filename = 'postgresql-%a.log' # log file name pattern,
# can include strftime() escapes
# log_file_mode = 0600
# creation mode for log files,
# begin with 0 to use octal notation
log_truncate_on_rotation = on # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
log_rotation_age = 1d
# Automatic rotation of logfiles will happen after that time. 0 disables.
log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
# This is only relevant when logging to eventlog (win32):
#event_source = 'PostgreSQL'
# - When to Log -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default
# terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '< %m >' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_statement = 'none' # none, ddl, mod, all
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes; -1 disables, 0 logs all temp files
log_timezone = 'Australia/ACT'
Although most parameters are commented out, they assume default values. We can see the log file directory is pg_log (log_directory parameter), the file names should start with postgresql (log_filename parameter), the files are rotated once every day (log_rotation_age parameter) and the log entries start with a timestamp (log_line_prefix parameter). Of particular interest is the log_line_prefix parameter: there is a whole gamut of information you can include there.
Looking under /var/lib/pgsql/9.4/data/pg_log directory shows us these files:
[root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log
total 20
-rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log
-rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log
-rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log
-rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log
-rw-------. 1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log
So the log files only have the name of the weekday stamped in the file name. We can change it. How? Configure the log_filename parameter in postgresql.conf.
Looking inside one log file shows its entries start with date time only:
[root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log
...
< 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request
< 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions
< 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down
< 2015-02-27 01:21:27.036 EST >LOG: shutting down
< 2015-02-27 01:21:27.211 EST >LOG: database system is shut down
### Centralizing Application Logs ###
#### Log File Monitoring with Imfile ####
Traditionally, the most common way for applications to log their data is with files. Files are easy to search on a single machine but don't scale well with more servers. You can set up log file monitoring and send the events to a centralized server when new logs are appended to the bottom. Create a new configuration file in /etc/rsyslog.d/ then add a file input like this:
$ModLoad imfile
$InputFilePollInterval 10
$PrivDropToGroup adm
----------
# Input for FILE1
$InputFileName /FILE1
$InputFileTag APPNAME1
$InputFileStateFile stat-APPNAME1 #this must be unique for each file being polled
$InputFileSeverity info
$InputFilePersistStateInterval 20000
$InputRunFileMonitor
Replace FILE1 and APPNAME1 with your own file and application names. Rsyslog will send it to the outputs you have configured.
#### Local Socket Logs with Imuxsock ####
A socket is similar to a UNIX file handle except that the socket is read into memory by your syslog daemon and then sent to the destination. No file needs to be written. As an example, the logger command sends its logs to this UNIX socket.
This approach makes efficient use of system resources if your server is constrained by disk I/O or you have no need for local file logs. The disadvantage of this approach is that the socket has a limited queue size. If your syslog daemon goes down or can't keep up, then you could lose log data.
The rsyslog daemon will read from the /dev/log socket by default, but you can specifically enable it with the [imuxsock input module][17] using the following command:
$ModLoad imuxsock
#### UDP Logs with Imudp ####
Some applications output log data in UDP format, which is the standard syslog protocol when transferring log files over a network or your localhost. Your syslog daemon receives these logs and can process them or transmit them in a different format. Alternately, you can send the logs to your log server or to a log management solution.
Use the following command to configure rsyslog to accept syslog data over UDP on the standard port 514:
$ModLoad imudp
----------
$UDPServerRun 514
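After restarting rsyslog, a quick hedged check that the UDP input is listening is to send it a datagram with the util-linux logger command (-d selects UDP, -n and -P set the server and port):
    $ logger -d -n 127.0.0.1 -P 514 "udp input test"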
### Manage Logs with Logrotate ###
Log rotation is a process that archives log files automatically when they reach a specified age. Without intervention, log files keep growing, using up disk space. Eventually they will crash your machine.
The logrotate utility can truncate your logs as they age, freeing up space. Your new log file retains the filename. Your old log file is renamed with a number appended to the end of it. Each time the logrotate utility runs, a new file is created and the existing file is renamed in sequence. You determine the threshold when old files are deleted or archived.
When logrotate copies a file, the new file has a new inode, which can interfere with rsyslog's ability to monitor the new file. You can alleviate this issue by adding the copytruncate parameter to your logrotate cron job. This parameter copies existing log file contents to a new file and truncates these contents from the existing file. The inode never changes because the log file itself remains the same; its contents are in a new file.
The logrotate utility uses the main configuration file at /etc/logrotate.conf and application-specific settings in the directory /etc/logrotate.d/. DigitalOcean has a detailed [tutorial on logrotate][18].
### Manage Configuration on Many Servers ###
When you have just a few servers, you can manually configure logging on them. Once you have a few dozen or more servers, you can take advantage of tools that make this easier and more scalable. At a basic level, all of these copy your rsyslog configuration to each server, and then restart rsyslog so the changes take effect.
#### Pssh ####
This tool lets you run an ssh command on several servers in parallel. Use a pssh deployment for only a small number of servers. If one of your servers fails, then you have to ssh into the failed server and do the deployment manually. If you have several failed servers, then the manual deployment on them can take a long time.
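A hedged example using the pssh tool suite, which also ships prsync for parallel file copies; the hosts.txt file and the 60-forward.conf file name below are placeholders, not part of any standard setup:
    $ prsync -h hosts.txt -l root /etc/rsyslog.d/60-forward.conf /etc/rsyslog.d/
    $ pssh -h hosts.txt -l root -i "systemctl restart rsyslog"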
#### Puppet/Chef ####
Puppet and Chef are two different tools that can automatically configure all of the servers in your network to a specified standard. Their reporting tools let you know about failures and can resync periodically. Both Puppet and Chef have enthusiastic supporters. If you aren't sure which one is more suitable for your deployment's configuration management, you might appreciate [InfoWorld's comparison of the two tools][19].
Some vendors also offer modules or recipes for configuring rsyslog. Here is an example from Loggly's Puppet module. It offers a class for rsyslog to which you can add an identifying token:
node 'my_server_node.example.net' {
# Send syslog events to Loggly
class { 'loggly::rsyslog':
customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000',
}
}
#### Docker ####
Docker uses containers to run applications independent of the underlying server. Everything runs from inside a container, which you can think of as a unit of functionality. ZDNet has an in-depth article about [using Docker][20] in your data center.
There are several ways to log from Docker containers including linking to a logging container, logging to a shared volume, or adding a syslog agent directly inside the container. One of the most popular logging containers is called [logspout][21].
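As a sketch of the logging-container approach, logspout's documented invocation mounts the Docker socket and forwards container output over syslog; the destination host and port below are placeholders:
    $ docker run -d --name logspout \
        -v /var/run/docker.sock:/var/run/docker.sock \
        gliderlabs/logspout \
        syslog+tcp://logs.example.com:514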
#### Vendor Scripts or Agents ####
Most log management solutions offer scripts or agents to make sending data from one or more servers relatively easy. Heavyweight agents can use up extra system resources. Some vendors like Loggly offer configuration scripts to make using existing syslog daemons easier. Here is an example [script][22] from Loggly which can run on any number of servers.
--------------------------------------------------------------------------------
via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/
作者:[Jason Skowronski][a1]
作者:[Amy Echeverri][a2]
作者:[Sadequl Hussain][a3]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a1]:https://www.linkedin.com/in/jasonskowronski
[a2]:https://www.linkedin.com/in/amyecheverri
[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl
[2]:http://www.rsyslog.com/
[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system
[4]:http://logstash.net/
[5]:http://www.fluentd.org/
[6]:http://www.rsyslog.com/doc/rsyslog_conf.html
[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html
[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87
[9]:https://www.loggly.com/docs/file-monitoring/
[10]:http://www.networksorcery.com/enp/protocol/udp.htm
[11]:http://www.networksorcery.com/enp/protocol/tcp.htm
[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html
[13]:http://www.rsyslog.com/doc/relp.html
[14]:http://www.rsyslog.com/doc/queues.html
[15]:http://www.rsyslog.com/doc/tls_cert_ca.html
[16]:http://www.rsyslog.com/doc/tls_cert_machine.html
[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html
[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10
[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html
[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/
[21]:https://github.com/progrium/logspout
[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/

View File

@ -0,0 +1,117 @@
translation by strugglingyouth
Troubleshooting with Linux Logs
================================================================================
Troubleshooting is the main reason people create logs. Often you'll want to diagnose why a problem happened with your Linux system or application. An error message or a sequence of events can give you clues to the root cause, indicate how to reproduce the issue, and point out ways to fix it. Here are a few use cases for things you might want to troubleshoot in your logs.
### Cause of Login Failures ###
If you want to check if your system is secure, you can check your authentication logs for failed login attempts and unfamiliar successes. Authentication failures occur when someone passes incorrect or otherwise invalid login credentials, often to ssh for remote access or su for local access to another user's permissions. These are logged by the [pluggable authentication module][1], or pam for short. Look in your logs for strings like "Failed password" and "user unknown". Successful authentication records include strings like "Accepted password" and "session opened".
Failure Examples:
pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2
Failed password for invalid user hoover from 10.0.2.2 port 4791 ssh2
pam_unix(sshd:auth): check pass; user unknown
PAM service(sshd) ignoring max retries; 6 > 3
Success Examples:
Accepted password for hoover from 10.0.2.2 port 4792 ssh2
pam_unix(sshd:session): session opened for user hoover by (uid=0)
pam_unix(sshd:session): session closed for user hoover
You can use grep to find which users' accounts have the most failed logins. These are the accounts that potential attackers are trying and failing to access. This example is for an Ubuntu system.
$ grep "invalid user" /var/log/auth.log | cut -d ' ' -f 10 | sort | uniq -c | sort -nr
23 oracle
18 postgres
17 nagios
10 zabbix
6 test
You'll need to write a different command for each application and message because there is no standard format. Log management systems that automatically parse logs will effectively normalize them and help you extract key fields like username.
Log management systems can extract the usernames from your Linux logs using automated parsing. This lets you see an overview of the users and filter on them with a single click. In this example, we can see that the root user logged in over 2,700 times because we are filtering the logs to show login attempts only for the root user.
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.36-AM.png)
Log management systems also let you view graphs over time to spot unusual trends. If someone had one or two failed logins within a few minutes, it might be that a real user forgot his or her password. However, if there are hundreds of failed logins or they are all different usernames, it's more likely that someone is trying to attack the system. Here you can see that on March 12, someone tried to log in as test and nagios several hundred times. This is clearly not a legitimate use of the system.
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.12.18-AM.png)
### Cause of Reboots ###
Sometimes a server can stop due to a system crash or reboot. How do you know when it happened and who did it?
#### Shutdown Command ####
If someone ran the shutdown command manually, you can see it in the auth log file. Here you can see that someone remotely logged in from the IP 50.0.134.125 as the user ubuntu and then shut the system down.
Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: Accepted publickey for ubuntu from 50.0.134.125 port 52538 ssh
Mar 19 18:36:41 ip-172-31-11-231 23437]:sshd[ pam_unix(sshd:session): session opened for user ubuntu by (uid=0)
Mar 19 18:37:09 ip-172-31-11-231 sudo: ubuntu : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/sbin/shutdown -r now
#### Kernel Initializing ####
If you want to see when the server restarted regardless of reason (including crashes), you can search the logs from kernel initialization. You'd search for messages from the kernel facility and for the string "Initializing cpu".
Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpuset
Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpu
Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Linux version 3.8.0-44-generic (buildd@tipua) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #66~precise1-Ubuntu SMP Tue Jul 15 04:01:04 UTC 2014 (Ubuntu 3.8.0-44.66~precise1-generic 3.8.13.25)
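A hedged example of that search on an Ubuntu-style system, where kernel messages also land in /var/log/kern.log; the pattern matches the boot messages shown above:
    $ grep "Initializing cgroup subsys cpu" /var/log/kern.log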
### Detect Memory Problems ###
There are lots of reasons a server might crash, but one common cause is running out of memory.
When your system is low on memory, processes are killed, typically in the order of which ones will release the most resources. The error occurs when your system is using all of its memory and a new or existing process attempts to access additional memory. Look in your log files for strings like "Out of Memory" or for kernel warnings like "to kill". These strings indicate that your system intentionally killed the process or application rather than allowing the process to crash.
Examples:
[33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
[29923450.995084] select 5230 (docker), adj 0, size 708, to kill
You can find these logs using a tool like grep. This example is for Ubuntu:
$ grep "Out of memory" /var/log/syslog
[33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
Keep in mind that grep itself uses memory, so you might cause an out of memory error just by running grep. This is another reason it's a fabulous idea to centralize your logs!
### Log Cron Job Errors ###
The cron daemon is a scheduler that runs processes at specified dates and times. If the process fails to run or fails to finish, then a cron error appears in your log files. You can find these files in /var/log/cron, /var/log/messages, and /var/log/syslog depending on your distribution. There are many reasons a cron job can fail. Usually the problems lie with the process rather than the cron daemon itself.
By default, cron jobs output through email using Postfix. Here is a log showing that an email was sent. Unfortunately, you cannot see the contents of the message here.
Mar 13 16:35:01 PSQ110 postfix/pickup[15158]: C3EDC5800B4: uid=1001 from=<hoover>
Mar 13 16:35:01 PSQ110 postfix/cleanup[15727]: C3EDC5800B4: message-id=<20150310110501.C3EDC5800B4@PSQ110>
Mar 13 16:35:01 PSQ110 postfix/qmgr[15159]: C3EDC5800B4: from=<hoover@loggly.com>, size=607, nrcpt=1 (queue active)
Mar 13 16:35:05 PSQ110 postfix/smtp[15729]: C3EDC5800B4: to=<hoover@loggly.com>, relay=gmail-smtp-in.l.google.com[74.125.130.26]:25, delay=4.1, delays=0.26/0/2.2/1.7, dsn=2.0.0, status=sent (250 2.0.0 OK 1425985505 f16si501651pdj.5 - gsmtp)
You should consider logging the cron standard output to help debug problems. Here is how you can redirect your cron standard output to syslog using the logger command. Replace the echo command with your own script and helloCron with whatever you want to set the appName to.
*/5 * * * * echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron
This creates the log entries:
Apr 28 22:20:01 ip-172-31-11-231 CRON[15296]: (ubuntu) CMD (echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron)
Apr 28 22:20:01 ip-172-31-11-231 helloCron: Hello World!
Each cron job will log differently based on the specific type of job and how it outputs data. Hopefully there are clues to the root cause of problems within the logs, or you can add additional logging as needed.
--------------------------------------------------------------------------------
via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-logs/
作者:[Jason Skowronski][a1]
作者:[Amy Echeverri][a2]
作者:[Sadequl Hussain][a3]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a1]:https://www.linkedin.com/in/jasonskowronski
[a2]:https://www.linkedin.com/in/amyecheverri
[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
[1]:http://linux.die.net/man/8/pam.d

View File

@ -0,0 +1,86 @@
7 个驱动开源发展的社区
================================================================================
不久前,开源模式还被成熟的工业厂商以怀疑的态度认作是叛逆小孩的玩物。如今,开放的倡议和基金会在一长列的供应商提供者的支持下正蓬勃发展,而他们将开源模式视作创新的关键。
![](http://images.techhive.com/images/article/2015/01/0_opensource-title-100539095-orig.jpg)
### 技术的开放发展驱动着创新 ###
在过去的 20 几年间,技术的开放发展已被视作驱动创新的关键因素。即使那些以前将开源视作威胁的公司也开始接受这个观点 — 例如微软,如今它在一系列的开源倡议中表现活跃。到目前为止,大多数的开放发展都集中在软件方面,但甚至这个也正在改变,因为社区已经开始向开源硬件倡议方面聚拢。这里有 7 个成功地在硬件和软件方面同时促进和发展开源技术的组织。
### OpenPOWER 基金会 ###
![](http://images.techhive.com/images/article/2015/01/1_openpower-100539100-orig.jpg)
[OpenPOWER 基金会][2] 由 IBM, Google, Mellanox, Tyan 和 NVIDIA 于 2013 年共同创建,在与开源软件发展相同的精神下,旨在驱动开放协作硬件的发展,在过去的 20 几年间,开源软件发展已经找到了肥沃的土壤。
IBM 通过开放其基于 Power 架构的硬件和软件技术,向使用 Power IP 的独立硬件产品提供许可证等方式为基金会的建立播下种子。如今超过 70 个成员共同协作来为基于 Linux 的数据中心提供自定义的开放服务器,组件和硬件。
今年四月,在比最新基于 x86 系统快 50 倍的数据分析能力的新的 POWER8 处理器的服务器的基础上, OpenPOWER 推出了一个技术路线图。七月, IBM 和 Google 发布了一个固件堆栈。十月见证了 NVIDIA GPU 带来加速 POWER8 系统的能力和来自 Tyan 的第一个 OpenPOWER 参考服务器。
### Linux 基金会 ###
![](http://images.techhive.com/images/article/2015/01/2_the-linux-foundation-100539101-orig.jpg)
于 2000 年建立的 [Linux 基金会][2] 如今成为掌控着历史上最大的开源协同发展成果,它有着超过 180 个合作成员和许多独立成员及学生成员。它赞助核心 Linux 开发者的工作并促进、保护和推进 Linux 操作系统和协作软件的开发。
它最为成功的协作项目包括 Code Aurora Forum (一个拥有为移动无线产业服务的企业财团)MeeGo (一个为移动设备和 IVI (注:指的是车载消息娱乐设备,为 In-Vehicle Infotainment 的简称) 构建一个基于 Linux 内核的操作系统的项目) 和 Open Virtualization Alliance (开放虚拟化联盟,它促进自由和开源软件虚拟化解决方案的采用)。
### 开放虚拟化联盟 ###
![](http://images.techhive.com/images/article/2015/01/3_open-virtualization-alliance-100539102-orig.jpg)
[开放虚拟化联盟(OVA)][3] 的存在目的为:通过提供使用案例和对具有互操作性的通用接口和 API 的发展提供支持,来促进自由、开源软件的虚拟化解决方案例如 KVM 的采用。KVM 将 Linux 内核转变为一个虚拟机管理程序。
如今, KVM 已成为和 OpenStack 共同使用的最为常见的虚拟机管理程序。
### OpenStack 基金会 ###
![](http://images.techhive.com/images/article/2015/01/4_the-openstack-foundation-100539096-orig.jpg)
原本作为一个 IaaS(基础设施即服务) 产品由 NASA 和 Rackspace 于 2010 年启动,[OpenStack 基金会][4] 已成为最大的开源项目聚居地之一。它拥有超过 200 家公司成员,其中包括 AT&T, AMD, Avaya, Canonical, Cisco, Dell 和 HP。
大约以 6 个月为一个发行周期,基金会的 OpenStack 项目被发展用来通过一个基于 Web 的仪表盘,命令行工具或一个 RESTful 风格的 API 来控制或调配流经一个数据中心的处理存储池和网络资源。至今为止,基金会支持的协作发展已经孕育出了一系列 OpenStack 组件,其中包括 OpenStack Compute(一个云计算网络控制器,它是一个 IaaS 系统的主要部分)OpenStack Networking(一个用以管理网络和 IP 地址的系统) 和 OpenStack Object Storage(一个可扩展的冗余存储系统)。
### OpenDaylight ###
![](http://images.techhive.com/images/article/2015/01/5_opendaylight-100539097-orig.jpg)
作为来自 Linux 基金会的另一个协作项目, [OpenDaylight][5] 是一个由诸如 Dell, HP, Oracle 和 Avaya 等行业厂商于 2013 年 4 月建立的联合倡议。它的任务是建立一个由社区主导,开放,有工业支持的针对 Software-Defined Networking (SDN) 的包含代码和蓝图的框架。其思路是提供一个可直接部署的全功能 SDN 平台,而不需要其他组件,供应商可提供附件组件和增强组件。
### Apache 软件基金会 ###
![](http://images.techhive.com/images/article/2015/01/6_apache-software-foundation-100539098-orig.jpg)
[Apache 软件基金会 (ASF)][6] 是将近 150 个顶级项目的聚居地,这些项目涵盖从开源企业级自动化软件到与 Apache Hadoop 相关的分布式计算的整个生态系统。这些项目分发企业级、可免费获取的软件产品,而 Apache 协议则是为了让无论是商业用户还是个人用户都能更方便地部署 Apache 的产品。
ASF 于 1999 年作为一个会员制,非盈利公司注册,其核心为精英 — 要成为它的成员,你必须首先在基金会的一个或多个协作项目中做出积极贡献。
### 开放计算项目 ###
![](http://images.techhive.com/images/article/2015/01/7_open-compute-project-100539099-orig.jpg)
作为 Facebook 重新设计其 Oregon 数据中心的副产物, [开放计算项目][7] 旨在发展针对数据中心的开放硬件解决方案。 OCP 的成果包括廉价、无浪费的服务器,针对 Open Rack(为数据中心设计的机架标准,用来让机架集成到数据中心的基础设施中) 的模块化 I/O 存储,以及一个相对 "绿色" 的数据中心设计方案。
OCP 董事会成员包括来自 FacebookIntelGoldman SachsRackspace 和 Microsoft 的代表。
OCP 最近宣布了两种可选的许可证: 一个是类似 Apache 2.0 的允许衍生作品的许可证,另一个是更规范的、鼓励将更改回馈给原有软件的许可证。
--------------------------------------------------------------------------------
via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities-driving-open-source-development.html
作者:[Thor Olavsrud][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.networkworld.com/author/Thor-Olavsrud/
[1]:http://openpowerfoundation.org/
[2]:http://www.linuxfoundation.org/
[3]:https://openvirtualizationalliance.org/
[4]:http://www.openstack.org/foundation/
[5]:http://www.opendaylight.org/
[6]:http://www.apache.org/
[7]:http://www.opencompute.org/

View File

@ -0,0 +1,258 @@
如何在Quagga BGP路由器中设置IPv6的BGP对等体和过滤
================================================================================
在之前的教程中我们演示了如何使用Quagga建立一个[完备的BGP路由器][1]和配置[前缀过滤][2]。在本教程中我们会向你演示如何创建IPv6 BGP对等体并通过BGP通告IPv6前缀。同时我们也将演示如何使用前缀列表和路由映射特性来过滤通告的或者获取到的IPv6前缀。
### 拓扑 ###
教程中,我们主要参考如下拓扑。
![](https://farm9.staticflickr.com/8599/15944659374_1c65852df2_c.jpg)
服务供应商A和B希望在他们之间建立一个IPv6的BGP对等体。他们的IPv6地址和AS信息如下所示。
- 对等体IP块: 2001:DB8:3::/64
- 供应商A: AS 100, 2001:DB8:1::/48
- 供应商B: AS 200, 2001:DB8:2::/48
### CentOS/RHEL安装Quagga ###
如果Quagga还没有安装我们可以先使用yum安装。
# yum install quagga
在CentOS/RHEL 7中SELinux策略默认会阻止 /usr/sbin/zebra 写入它的配置目录这会对我们将要介绍的操作有所影响。因此我们需要像下面这样关闭这个策略。如果你使用的是CentOS/RHEL 6可以跳过这一步。
# setsebool -P zebra_write_config 1
### 创建配置文件 ###
在安装过后我们先创建配置文件zebra/bgpd作为配置流程的开始。
# cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
# cp /usr/share/doc/quagga-XXXXX/bgpd.conf.sample /etc/quagga/bgpd.conf
然后,允许这些服务开机自启。
**在 CentOS/RHEL 6:**
# service zebra start; service bgpd start
# chkconfig zebra on; chkconfig bgpd on
**在 CentOS/RHEL 7:**
# systemctl start zebra; systemctl start bgpd
# systemctl enable zebra; systemctl enable bgpd
Quagga内部提供一个叫作vtysh的shell其界面与那些主流路由厂商Cisco或Juniper十分相似。启动vtysh shell命令行
# vtysh
提示将改为:
router-a#
router-b#
在教程的其余部分这个提示可以表明你正身处在哪个路由的vtysh shell中。
### 为Zebra指定日志文件 ###
下面我们来为Zebra配置日志文件这会有助于调试。
首先,输入以下命令进入全局配置模式:
router-a# configure terminal
提示将变更成:
router-a(config)#
指定日志文件的位置。然后退出配置模式:
router-a(config)# log file /var/log/quagga/quagga.log
router-a(config)# exit
然后,通过以下命令保存配置:
router-a# write
### 配置接口IP地址 ###
现在让我们为Quagga的物理接口配置IP地址。
首先查看一下vtysh中现有的接口。
router-a# show interfaces
----------
Interface eth0 is up, line protocol detection is disabled
## OUTPUT TRUNCATED ###
Interface eth1 is up, line protocol detection is disabled
## OUTPUT TRUNCATED ##
现在我们配置IPv6地址。
router-a# conf terminal
router-a(config)# interface eth0
router-a(config-if)# ipv6 address 2001:db8:3::1/64
router-a(config-if)# interface eth1
router-a(config-if)# ipv6 address 2001:db8:1::1/64
在路由B上采用同样的方式分配IPv6地址。我将配置汇总成如下。
router-b# show running-config
----------
interface eth0
ipv6 address 2001:db8:3::2/64
interface eth1
ipv6 address 2001:db8:2::1/64
由于两台路由的eth0端口同属一个子网即2001:DB8:3::/64你应该可以相互ping通。在保证ping通的情况下我们开始下面的内容。
router-a# ping ipv6 2001:db8:3::2
----------
PING 2001:db8:3::2(2001:db8:3::2) 56 data bytes
64 bytes from 2001:db8:3::2: icmp_seq=1 ttl=64 time=3.20 ms
64 bytes from 2001:db8:3::2: icmp_seq=2 ttl=64 time=1.05 ms
### 步骤 1: IPv6 BGP 对等体 ###
本段我们将在两个路由之间配置IPv6 BGP。首先我们在路由A上指定BGP邻居。
router-a# conf t
router-a(config)# router bgp 100
router-a(config-router)# no auto-summary
router-a(config-router)# no synchronization
router-a(config-router)# neighbor 2001:DB8:3::2 remote-as 200
然后我们定义IPv6的地址族。在地址族中我们需要定义要通告的网段并激活邻居。
router-a(config-router)# address-family ipv6
router-a(config-router-af)# network 2001:DB8:1::/48
router-a(config-router-af)# neighbor 2001:DB8:3::2 activate
我们在路由B上也实施相同的配置。这里提供我归总后的配置。
router-b# conf t
router-b(config)# router bgp 200
router-b(config-router)# no auto-summary
router-b(config-router)# no synchronization
router-b(config-router)# neighbor 2001:DB8:3::1 remote-as 100
router-b(config-router)# address-family ipv6
router-b(config-router-af)# network 2001:DB8:2::/48
router-b(config-router-af)# neighbor 2001:DB8:3::1 activate
如果一切顺利在路由间将会形成一个IPv6 BGP会话。如果失败了请确保[在防火墙中开启了][3]必要的端口TCP 179
我们使用以下命令来确认IPv6 BGP会话的信息。
**查看BGP汇总**
router-a# show bgp ipv6 unicast summary
**查看BGP通告的路由**
router-a# show bgp ipv6 neighbors <neighbor-IPv6-address> advertised-routes
**查看BGP获得的路由**
router-a# show bgp ipv6 neighbors <neighbor-IPv6-address> routes
![](https://farm8.staticflickr.com/7317/16379555088_6e29cb6884_b.jpg)
### 步骤 2 过滤IPv6前缀 ###
正如我们在上面看到的输出信息那样,路由间通告了他们完整的/48 IPv6前缀。出于演示的目的我们会考虑以下要求。
- Router-B将通告一个/64前缀、一个/56前缀和一个完整的/48前缀。
- Router-A将接受由B提供的任何形式的IPv6前缀其中网络掩码长度需在/56和/64之间。
我们将根据需要过滤的前缀,来使用路由器的前缀列表和路由映射。
![](https://farm8.staticflickr.com/7367/16381297417_6549218289_c.jpg)
#### 为路由B修改通告的前缀 ####
目前路由B只通告一个/48前缀。我们修改路由B的BGP配置使它可以通告额外的/56和/64前缀。
router-b# conf t
router-b(config)# router bgp 200
router-b(config-router)# address-family ipv6
router-b(config-router-af)# network 2001:DB8:2::/56
router-b(config-router-af)# network 2001:DB8:2::/64
我们在路由A上验证一下是否收到了所有的前缀。
![](https://farm9.staticflickr.com/8598/16379761980_7c083ae977_b.jpg)
太好了我们在路由A上收到了所有的前缀那么我们可以更进一步创建前缀列表和路由映射来过滤这些前缀。
#### 创建前缀列表 ####
就像在[上则教程中][4]描述的那样前缀列表是一种机制用来匹配带有子网长度的IP地址前缀。按照我们指定的需求我们需要在路由A的前缀列表中创建一则必要的条目。
router-a# conf t
router-a(config)# ipv6 prefix-list FILTER-IPV6-PRFX permit 2001:DB8:2::/56 le 64
以上的命令会创建一个名为'FILTER-IPV6-PRFX'的前缀列表用以匹配任何2001:DB8:2::池内掩码在56和64之间的所有前缀。
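如果想确认刚创建的前缀列表,一般可以在 vtysh 中用下面的命令查看这条命令在常见的Quagga版本中可用不同版本的输出格式可能略有差异
router-a# show ipv6 prefix-list FILTER-IPV6-PRFX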
#### 创建并应用路由映射 ####
现在已经在前缀列表中创建了条目,我们也应该相应的创建一条使用此条目的路由映射规则了。
router-a# conf t
router-a(config)# route-map FILTER-IPV6-RMAP permit 10
router-a(config-route-map)# match ipv6 address prefix-list FILTER-IPV6-PRFX
以上的命令会创建一条名为'FILTER-IPV6-RMAP'的路由映射规则。这则规则将会允许与之前创建的前缀列表'FILTER-IPV6-PRFX'相匹配的IPv6前缀。
要记住路由映射规则只有在应用在邻居或者端口的指定方向时才有效。我们将把路由映射应用到BGP的邻居配置中。我们将路由映射应用于入方向作为进入路由端的前缀过滤器。
router-a# conf t
router-a(config)# router bgp 100
router-a(config-router)# address-family ipv6
router-a(config-router-af)# neighbor 2001:DB8:3::2 route-map FILTER-IPV6-RMAP in
现在我们在路由A上再查看一遍获取到的路由我们应该只能看见两个被允许的前缀了。
![](https://farm8.staticflickr.com/7337/16379762020_ec2dc39b31_c.jpg)
**注意** 你可能需要重置BGP会话来刷新路由表。
所有IPv6的BGP会话可以使用以下的命令重启
router-a# clear bgp ipv6 *
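如果不想中断所有会话许多Quagga版本还支持按方向做软清除该命令是否可用取决于你的bgpd版本请以实际的命令补全为准例如只重新接收对端的通告
router-a# clear bgp ipv6 * soft in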
我汇总了两个路由的配置,并做成了一张清晰的图片以便阅读。
![](https://farm9.staticflickr.com/8672/16567240165_eee4398dc8_c.jpg)
### 总结 ###
总结一下这篇教程重点在于如何创建BGP对等体和IPv6的过滤。我们演示了如何向邻居BGP路由通告IPv6前缀和如何过滤通告前缀或获得的通告。需要注意本教程使用的过程可能会对网络供应商的网络运作有所影响请谨慎参考。
希望这些对你有用。
--------------------------------------------------------------------------------
via: http://xmodulo.com/ipv6-bgp-peering-filtering-quagga-bgp-router.html
作者:[Sarmed Rahman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/sarmed
[1]:http://xmodulo.com/centos-bgp-router-quagga.html
[2]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html
[3]:http://ask.xmodulo.com/open-port-firewall-centos-rhel.html
[4]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html

View File

@ -0,0 +1,89 @@
如何在树莓派2 代上运行Ubuntu Snappy Core
================================================================================
物联网(Internet of Things, IoT) 时代即将来临。很快,过不了几年,我们就会问自己当初是怎么在没有物联网的情况下生存的,就像我们现在怀疑过去没有手机的年代。在物联网这个快速发展却仍然开放的市场中Canonical 也是竞争者之一。这家公司宣称自己把赌注压到了IoT 上就像他们之前在“云”上做过的一样。在今年一月底Canonical 推出了一个基于Ubuntu Core 的小型操作系统,名字叫做 [Ubuntu Snappy Core][1] 。
Snappy 是一种用来替代deb 的新的打包格式也是一个用来更新系统的前端它从CoreOS、红帽Red Hat和其他地方借鉴了原子更新这个想法。树莓派2 代一投入市场Canonical 就发布了用于树莓派的Snappy Core 版本。第一代树莓派因为是基于ARMv6 而Ubuntu 的ARM 镜像是基于ARMv7 所以不能运行Ubuntu 。不过这种状况现在改变了Canonical 通过发布用于RPI2 的镜像抓住机会澄清了Snappy 就是一个用于云计算特别是IoT 的系统。
Snappy 同样可以运行在其它像Amazon EC2、Microsoft Azure、Google Compute Engine 这样的云端上也可以虚拟化地运行在KVM、VirtualBox 和Vagrant 上。Canonical 已经拥抱了微软、谷歌、Docker、OpenStack 这些重量级选手同时也与一些小项目达成了合作关系其中既有Ninja Sphere、Erle Robotics 这样的创业公司也有Odroid、Banana Pro、Udoo、PCDuino、Parallella 和全志这样的开发板生产商。Snappy Core 也希望很快能运行在路由器上,来帮助改进路由器生产商目前很少更新固件的做法。
接下来让我们看看怎么样在树莓派2 上运行Snappy。
用于树莓派2 的Snappy 镜像可以从 [Raspberry Pi 网站][2] 上下载。解压缩出来的镜像必须[写到一个至少8GB 大小的SD 卡][3]。尽管原始系统很小但是原子升级和回滚功能会蚕食不小的空间。使用Snappy 启动树莓派2 后你就可以使用默认用户名和密码都是ubuntu登录系统。
![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg)
sudo 已经配置好了可以直接用,安全起见,你应该使用以下命令来修改你的用户名
$ sudo usermod -l <new name> <old name>
或者也可以使用`adduser` 为你添加一个新用户。
因为RPI缺少硬件时钟而Snappy 不知道这一点所以系统会有一个小bug处理命令时会报很多错。不过这个很容易解决
使用这个命令来确认这个bug 是否影响:
$ date
如果输出是 "Thu Jan 1 01:56:44 UTC 1970" 你可以这样做来改正:
$ sudo date --set="Sun Apr 04 17:43:26 UTC 2015"
改成你的实际时间。
![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg)
现在你可能打算检查一下,看看有没有可用的更新。注意通常使用的命令:
$ sudo apt-get update && sudo apt-get dist-upgrade
现在将不会让你通过因为Snappy 会使用它自己精简过的、基于dpkg 的包管理系统。这是做是应为Snappy 会运行很多嵌入式程序,而你也会想着所有事情尽可能的简化。
让我们来看看最关键的部分理解一下程序是如何与Snappy 工作的。运行Snappy 的SD 卡上除了boot 分区外还有3个分区。其中的两个构成了一个重复的文件系统。这两个平行文件系统被固定挂载为只读模式并且任何时刻只有一个是激活的。第三个分区是一个部分可写的文件系统用来让用户存储数据。通过更新系统标记为'system-a' 的分区会保持一个完整的文件系统,被称作核心,而另一个平行文件系统仍然会是空的。
![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg)
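如果想直观地确认这套分区布局可以在系统中用通用的块设备命令查看一下以下命令只是一般性示例具体的设备和分区名以你的SD卡为准
$ lsblk
$ df -h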
如果我们运行以下命令:
$ sudo snappy update
系统将会在'system-b' 上作为一个整体进行更新,这有点像是更新一个镜像文件。接下来你将会被告知要重启系统来激活新核心。
重启之后,运行下面的命令可以检查你的系统是否已经更新到最新版本,以及当前被激活的是那个核心
$ sudo snappy versions -a
经过更新-重启的操作,你应该可以看到被激活的核心已经被改变了。
因为到目前为止我们还没有安装任何软件,下面的命令:
$ sudo snappy update ubuntu-core
将会生效而且如果你打算仅仅更新特定的OS这也是一个办法。如果出了问题你可以使用下面的命令回滚
$ sudo snappy rollback ubuntu-core
这将会把系统状态回滚到更新之前。
![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg)
再来说说那些让Snappy 有用的软件。这里不会讲太多关于如何构建软件、向Snappy 应用商店添加软件的基础知识但是你可以通过Freenode 上的IRC 频道#snappy 了解更多信息那上面有很多人参与。你可以通过浏览器访问http://<ip-address>:4200 来浏览应用商店然后从商店安装软件再在浏览器里访问http://webdm.local 来启动程序。如何构建用于Snappy 的软件并不难,而且也有了现成的[参考文档][4] 。你也可以很容易地把DEB 安装包移植为Snappy 格式。
![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg)
尽管Ubuntu Snappy Core 吸引我们去研究新型的Snappy 安装包格式和Canonical 式的原子更新操作但是因为有限的可用应用它现在在生产环境里还不是很有用。但是既然搭建一个Snappy 环境如此简单,这看起来是一个学点新东西的好机会。
--------------------------------------------------------------------------------
via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html
作者:[Ferdinand Thommes][a]
译者:[译者ID](https://github.com/oska874)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/ferdinand
[1]:http://www.ubuntu.com/things
[2]:http://www.raspberrypi.org/downloads/
[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html
[4]:https://developer.ubuntu.com/en/snappy/

View File

@ -0,0 +1,131 @@
如何通过反向 SSH 隧道访问 NAT 后面的 Linux 服务器
================================================================================
你在家里运行着一台 Linux 服务器,访问它需要先经过 NAT 路由器或者限制性防火墙。现在你想不在家的时候用 SSH 登录到这台服务器。你如何才能做到呢SSH 端口转发当然是一种选择。但是,如果你需要处理多个嵌套的 NAT 环境,端口转发可能会变得非常棘手。另外,在多种 ISP 特定条件下可能会受到干扰,例如阻塞转发端口的限制性 ISP 防火墙、或者在用户间共享 IPv4 地址的运营商级 NAT。
### 什么是反向 SSH 隧道? ###
SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧道的概念非常简单。为此,你需要在限制性的家庭网络之外有另一台主机(所谓的“中继主机”),并且你能从当前所在地通过 SSH 登录到它。你可以用有公共 IP 地址的 [VPS 实例][1] 配置一个中继主机。然后要做的就是从你的家庭服务器建立一条到公共中继主机的永久 SSH 隧道。有了这个隧道,你就可以从中继主机连接“回”家庭服务器(这就是为什么称之为“反向”隧道)。不管你在哪里、你家庭网络中的 NAT 或防火墙限制多么严重,只要你可以访问中继主机,你就可以连接到家庭服务器。
![](https://farm8.staticflickr.com/7742/17162647378_c7d9f10de8_b.jpg)
### 在 Linux 上设置反向 SSH 隧道 ###
让我们来看看怎样创建和使用反向 SSH 隧道。我们有如下假设。我们会设置一个从家庭服务器到中继服务器的反向 SSH 隧道,然后我们可以通过中继服务器从客户端计算机 SSH 登录到家庭服务器。**中继服务器** 的公共 IP 地址是 1.1.1.1。
在家庭主机上,按照以下方式打开一个到中继服务器的 SSH 连接。
homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1
这里的 10022 可以是任何你想使用的端口号,只需要确保中继服务器上没有其它程序使用这个端口。
“-R 10022:localhost:22” 选项定义了一个反向隧道。它转发中继服务器 10022 端口的流量到家庭服务器的 22 号端口。
“-fN” 选项让 SSH 在成功通过服务器验证后转入后台运行。当你不想在远程 SSH 服务器上执行任何命令,而只是像我们的例子中这样转发端口时,这个选项非常有用。
运行上面的命令之后,你就会回到家庭主机的命令行提示符下。
登录到中继服务器,确认 127.0.0.1:10022 绑定到了 sshd。如果是的话就表示已经正确设置了反向隧道。
relayserver~$ sudo netstat -nap | grep 10022
----------
tcp 0 0 127.0.0.1:10022 0.0.0.0:* LISTEN 8493/sshd
现在就可以从任何其它计算机(客户端计算机)登录到中继服务器,然后按照下面的方法访问家庭服务器。
relayserver~$ ssh -p 10022 homeserver_user@localhost
需要注意的一点是,你在本地输入的 SSH 登录名/密码应该是家庭服务器的,而不是中继服务器的,因为你是通过隧道的本地端点登录到家庭服务器。因此不要输入中继服务器的登录名/密码。成功登录后,你就在家庭服务器上了。
### 通过反向 SSH 隧道直接连接到网络地址变换后的服务器 ###
上面的方法允许你访问 NAT 后面的 **家庭服务器**,但你需要登录两次:首先登录到 **中继服务器**,然后再登录到**家庭服务器**。这是因为中继服务器上 SSH 隧道的端点绑定到了回环地址127.0.0.1)。
事实上,有一种方法可以只需要登录到中继服务器就能直接访问网络地址变换之后的家庭服务器。要做到这点,你需要让中继服务器上的 sshd 不仅转发回环地址上的端口,还要转发外部主机的端口。这通过指定中继服务器上运行的 sshd 的 **网关端口** 实现。
打开**中继服务器**的 /etc/ssh/sshd_config 并添加下面的行。
relayserver~$ vi /etc/ssh/sshd_config
----------
GatewayPorts clientspecified
重启 sshd。
基于 Debian 的系统:
relayserver~$ sudo /etc/init.d/ssh restart
基于红帽的系统:
relayserver~$ sudo systemctl restart sshd
现在在家庭服务器中按照下面方式初始化一个反向 SSH 隧道。
homeserver~$ ssh -fN -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
登录到中继服务器然后用 netstat 命令确认成功建立的一个反向 SSH 隧道。
relayserver~$ sudo netstat -nap | grep 10022
----------
tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev
不像之前的情况,现在隧道的端点是 1.1.1.1:10022中继服务器的公共 IP 地址),而不是 127.0.0.1:10022。这就意味着从外部主机可以访问隧道的端点。
现在在任何其它计算机(客户端计算机),输入以下命令访问网络地址变换之后的家庭服务器。
clientcomputer~$ ssh -p 10022 homeserver_user@1.1.1.1
在上面的命令中1.1.1.1 是中继服务器的公共 IP 地址,家庭服务器用户必须是和家庭服务器相关联的用户账户。这是因为你真正登录到的主机是家庭服务器,而不是中继服务器。后者只是中继你的 SSH 流量到家庭服务器。
### 在 Linux 上设置一个永久反向 SSH 隧道 ###
现在你已经明白了怎样创建一个反向 SSH 隧道,接下来让我们把隧道设置为“永久”的,这样隧道启动后就会一直运行(不管是临时的网络拥塞、SSH 超时、中继主机重启,等等)。毕竟,如果隧道不是一直有效,你就无法可靠地登录到你的家庭服务器。
对于永久隧道,我打算使用一个叫 autossh 的工具。正如名字所暗示的,无论因为什么原因中断,这个程序都可以让你的 SSH 会话自动重启。因此对于保持一个反向 SSH 隧道有效,它非常有用。
第一步,我们要设置从家庭服务器到中继服务器的[无密码 SSH 登录][2]。这样的话autossh 可以不需要用户干预就能重启一个损坏的反向 SSH 隧道。
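无密码登录的具体步骤可以参考上面链接的教程;在家庭服务器上,常见的做法大致如下(假设还没有密钥对,以下命令仅作示意):
homeserver~$ ssh-keygen -t rsa
homeserver~$ ssh-copy-id relayserver_user@1.1.1.1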
下一步,在初始化隧道的家庭服务器上[安装 autossh][3]。
在家庭服务器上,用下面的参数运行 autossh 来创建一个连接到中继服务器的永久 SSH 隧道。
homeserver~$ autossh -M 10900 -fN -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
“-M 10900” 选项指定中继服务器上的监视端口,用于交换监视 SSH 会话的测试数据。中继服务器上的其它程序不能使用这个端口。
“-fN” 选项传递给 ssh 命令,让 SSH 隧道在后台运行。
“-o XXXX” 选项让 ssh
- 使用密钥验证,而不是密码验证。
- 自动接受未知SSH 主机密钥。
- 每 60 秒交换 keep-alive 消息。
- 没有收到任何响应时最多发送 3 条 keep-alive 消息。
其余 SSH 隧道相关的选项和之前介绍的一样。
如果你想系统启动时自动运行 SSH 隧道,你可以将上面的 autossh 命令添加到 /etc/rc.local。
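例如,可以在 /etc/rc.local 中(若有 exit 0 则加在其之前)添加类似下面的一行(用户名仅为示例;由于 rc.local 以 root 身份运行,这里用 su 切换到持有 SSH 密钥的用户来建立隧道):
su - homeserver_user -c 'autossh -M 10900 -fN -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1'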
### 总结 ###
在这篇博文中,我介绍了如何从外部通过反向 SSH 隧道访问限制性防火墙或 NAT 网关之后的 Linux 服务器。尽管我介绍的是家庭网络中的一个使用事例,但在企业网络中使用时你尤其要小心。这样的一个隧道可能被视为违反公司政策,因为它绕过了企业的防火墙并把企业网络暴露给外部攻击。它很可能被误用或者滥用。因此在使用之前一定要记清楚它会带来的后果。
--------------------------------------------------------------------------------
via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html
作者:[Dan Nanni][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/go/digitalocean
[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html
[3]:http://ask.xmodulo.com/install-autossh-linux.html

View File

@ -1,205 +0,0 @@
Nishita Agarwal分享它关于Linux防火墙'iptables'的面试经验
================================================================================
Nishita Agarwal是Tecmint的用户她将分享关于她刚刚经历的一家公司私人公司Pune印度的面试经验。在面试中她被问及许多不同的问题但她是iptables方面的专家因此她想分享这些关于iptables的问题和相应的答案给那些以后可能会进行相关面试的人。
![Linux防火墙Iptables面试问题](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-iptables-Interview-Questions.jpg)
所有的问题和相应的答案都基于Nishita Agarwal的记忆并经过了重写。
> “嗨,朋友!我叫**Nishita Agarwal**。我已经取得了理学学士学位我的专业集中在UNIX和它的变种BSDLinux。它们一直深深的吸引着我。我在存储方面有1年多的经验。我正在寻求职业上的变化并将供职于印度的Pune公司。”
下面是我在面试中被问到的问题的集合。我已经把我记忆中有关iptables的问题和它们的答案记录了下来。希望这会对您未来的面试有所帮助。
### 1. 你听说过Linux下面的iptables和Firewalld么知不知道它们是什么是用来干什么的 ###
> **答案** : iptables和Firewalld我都知道并且我已经使用iptables好一段时间了。iptables主要由C语言写成并且以GNU GPL许可证发布。它是从系统管理员的角度写的最新的稳定版是iptables 1.4.21。iptables通常被认为是类UNIX系统中的防火墙更准确的说可以称为iptables/netfilter。管理员通过终端/GUI工具与iptables打交道来添加和定义防火墙规则到预定义的表中。Netfilter是内核中的一个模块它执行过滤的任务。
>
> Firewalld是RHEL/CentOS 7也许还有其他发行版但我不太清楚中最新的过滤规则的实现。它已经取代了iptables接口并与netfilter相连接。
### 2. 你用过一些iptables的GUI或命令行工具么 ###
> **答案** : 虽然我既用过GUI工具比如与[Webmin][1]结合的Shorewall以及直接通过终端访问iptables。但我必须承认通过Linux终端直接访问iptables能给予用户更高级的灵活性、以及对其背后工作更好的理解的能力。GUI适合初级管理员而终端适合有经验的管理员。
### 3. 那么iptables和firewalld的基本区别是什么呢 ###
> **答案** : iptables和firewalld都有着同样的目的包过滤但它们使用不同的方式。iptables与firewalld不同在每次发生更改时都刷新整个规则集。通常iptables配置文件位于/etc/sysconfig/iptables而firewalld的配置文件位于/etc/firewalld/。firewalld的配置文件是一组XML文件。以XML为基础进行配置的firewalld比iptables的配置更加容易但是两者都可以完成同样的任务。例如firewalld可以在自己的命令行界面以及基于XML的配置文件下使用iptables。
### 4. 如果有机会的话你会在你所有的服务器上用firewalld替换iptables么 ###
> **答案** : 我对iptables很熟悉它也工作的很好。如果没有任何需求需要firewalld的动态特性那么没有理由把所有的配置都从iptables移动到firewalld。通常情况下目前为止我还没有看到iptables造成什么麻烦。IT技术的通用准则也说道“为什么要修一件没有坏的东西呢”。上面是我自己的想法但如果组织愿意用firewalld替换iptables的话我不介意。
### 5. 你看上去对iptables很有信心巧的是我们的服务器也在使用iptables。 ###
iptables使用的表有哪些请简要的描述iptables使用的表以及它们所支持的链。
> **答案** : 谢谢您的赞赏。至于您问的问题iptables使用的表有四个它们是
>
> Nat 表
> Mangle 表
> Filter 表
> Raw 表
>
> Nat表 : Nat表主要用于网络地址转换。根据表中的每一条规则修改网络包的IP地址。流中的包仅遍历一遍Nat表。例如如果一个通过某个接口的包被修饰修改了IP地址该流中其余的包将不再遍历这个表。通常不建议在这个表中进行过滤由NAT表支持的链称为PREROUTING ChainPOSTROUTING Chain和OUTPUT Chain。
>
> Mangle表 : 正如它的名字一样这个表用于校正网络包。它用来对特殊的包进行修改。它能够修改不同包的头部和内容。Mangle表不能用于地址伪装。支持的链包括PREROUTING ChainOUTPUT ChainForward ChainInputChain和POSTROUTING Chain。
>
> Filter表 : Filter表是iptables中使用的默认表它用来过滤网络包。如果没有定义任何规则Filter表则被当作默认的表并且基于它来过滤。支持的链有INPUT ChainOUTPUT ChainFORWARD Chain。
>
> Raw表 : Raw表在我们想要配置之前被豁免的包时被使用。它支持PREROUTING Chain 和OUTPUT Chain。
### 6. 简要谈谈什么是iptables中的目标值能被指定为目标他们有什么用 ###
> **答案** : 下面是在iptables中可以指定为目标的值
>
> ACCEPT : 接受包
> QUEUE : 将包传递到用户空间 (应用程序和驱动所在的地方)
> DROP : 丢弃包
> RETURN : 将控制权交回调用的链并且为当前链中的包停止执行下一调规则
### 7. 让我们来谈谈iptables技术方面的东西我的意思是说实际使用方面 ###
你怎么检测在CentOS中安装iptables时需要的iptables的rpm
> **答案** : iptables已经被默认安装在CentOS中我们不需要单独安装它。但可以这样检测rpm
>
> # rpm -qa iptables
>
> iptables-1.4.21-13.el7.x86_64
>
> 如果您需要安装它您可以用yum来安装。
>
> # yum install iptables-services
### 8. 怎样检测并且确保iptables服务正在运行 ###
> **答案** : 您可以在终端中运行下面的命令来检测iptables的状态。
>
> # service status iptables [On CentOS 6/5]
> # systemctl status iptables [On CentOS 7]
>
> 如果iptables没有在运行可以使用下面的语句
>
> ---------------- 在CentOS 6/5下 ----------------
> # chkconfig --level 35 iptables on
> # service iptables start
>
> ---------------- 在CentOS 7下 ----------------
> # systemctl enable iptables
> # systemctl start iptables
>
> 我们还可以检测iptables的模块是否被加载
>
> # lsmod | grep ip_tables
### 9. 你怎么检查iptables中当前定义的规则呢 ###
> **答案** : 当前的规则可以简单的用下面的命令查看:
>
> # iptables -L
>
> 示例输出
>
> Chain INPUT (policy ACCEPT)
> target prot opt source destination
> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
> ACCEPT icmp -- anywhere anywhere
> ACCEPT all -- anywhere anywhere
> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
>
> Chain FORWARD (policy ACCEPT)
> target prot opt source destination
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
>
> Chain OUTPUT (policy ACCEPT)
> target prot opt source destination
### 10. 你怎样刷新所有的iptables规则或者特定的链呢 ###
> **答案** : 您可以使用下面的命令来刷新一个特定的链。
>
> # iptables --flush OUTPUT
>
> 要刷新所有的规则,可以用:
>
> # iptables --flush
### 11. 请在iptables中添加一条规则接受所有从一个信任的IP地址例如192.168.0.7)过来的包。 ###
> **答案** : 上面的场景可以通过运行下面的命令来完成。
>
> # iptables -A INPUT -s 192.168.0.7 -j ACCEPT
>
> 我们还可以在源IP中使用标准的斜线和子网掩码
>
> # iptables -A INPUT -s 192.168.0.7/24 -j ACCEPT
> # iptables -A INPUT -s 192.168.0.7/255.255.255.0 -j ACCEPT
### 12. 怎样在iptables中添加规则以ACCEPTREJECTDENY和DROP ssh的服务 ###
> **答案** : 但愿ssh运行在22端口那也是ssh的默认端口我们可以在iptables中添加规则来ACCEPT ssh的tcp包在22号端口上
>
> # iptables -A INPUT -s -p tcp --dport 22 -j ACCEPT
>
> REJECT ssh服务22号端口的tcp包。
>
> # iptables -A INPUT -s -p tcp --dport 22 -j REJECT
>
> DENY ssh服务22号端口的tcp包。
>
>
> # iptables -A INPUT -s -p tcp --dport 22 -j DENY
>
> DROP ssh服务22号端口的tcp包。
>
>
> # iptables -A INPUT -s -p tcp --dport 22 -j DROP
### 13. 让我给你另一个场景假如有一台电脑的本地IP地址是192.168.0.6。你需要封锁在21、22、23和80号端口上的连接你会怎么做 ###
> **答案** : 这时我所需要的就是在iptables中使用multiport选项并将要封锁的端口号跟在它后面。上面的场景可以用下面的一条语句搞定
>
> # iptables -A INPUT -s 192.168.0.6 -p tcp -m multiport --dport 22,23,80,8080 -j DROP
>
> 可以用下面的语句查看写入的规则。
>
> # iptables -L
>
> Chain INPUT (policy ACCEPT)
> target prot opt source destination
> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
> ACCEPT icmp -- anywhere anywhere
> ACCEPT all -- anywhere anywhere
> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
> DROP tcp -- 192.168.0.6 anywhere multiport dports ssh,telnet,http,webcache
>
> Chain FORWARD (policy ACCEPT)
> target prot opt source destination
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
>
> Chain OUTPUT (policy ACCEPT)
> target prot opt source destination
**面试官** : 好了我问的就是这些。你是一个很有价值的雇员我们不会错过你的。我将会向HR推荐你的名字。如果你有什么问题请问我。
作为一个候选人我不愿不断的问将来要做的项目的事以及公司里其他的事这样会打断愉快的对话。更不用说HR轮会不会比较难总之我获得了机会。
同时我要感谢Avishek和Ravi我的朋友花时间帮我整理我的面试。
朋友如果您有过类似的面试并且愿意与数百万Tecmint读者一起分享您的面试经历请将您的问题和答案发送到admin@tecmint.com。
谢谢!保持联系。如果我能更好的回答我上面的问题的话,请记得告诉我。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-firewall-iptables-interview-questions-and-answers/
作者:[Avishek Kumar][a]
译者:[wwy-hust](https://github.com/wwy-hust)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/install-webmin-web-based-system-administration-tool-for-rhel-centos-fedora/

View File

@ -0,0 +1,55 @@
将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第一节 - 简介
================================================================================
*作者声明: 如果你是因为某种神迹而在没看标题的情况下点开了这篇文章,那么我想再重申一些东西...这是一篇评论文章。文中的观点都是我自己的不代表Phoronix和Michael的观点。它们完全是我自己的想法。
另外没错……这可能是一篇引战的文章。我希望社区成员们更沉稳一些因为我确实想在KDE和Gnome社区中发起讨论、获得反馈。因此当我想指出——我所看到的——一个瑕疵时我会尽量地做到具体而直接。这样相关的讨论也能做到同样的具体和直接。再次声明本文另一可选标题为“被[剪纸][1]千刀万剐”原文剪纸一词为papercuts, 指易修复而烦人的漏洞,译者注)。
现在,重申完毕……文章开始。
![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920)
当我把[《评价Fedora 22 KDE》][2]一文发给Michael时感觉很不是滋味。不是因为我不喜欢KDE或者不享受Fedora远非如此。事实上我刚开始想把我的T450s的系统换为Arch Linux时马上又决定放弃了因为我很享受fedora在很多方面所带来的便捷性。
我感觉很不是滋味的原因是Fedora的开发者花费了大量的时间和精力在他们的“工作站”产品上但是我却一点也没看到。在使用Fedora时我采用的并非那些主要开发者希望用户采用的那种使用方式因此我也就体验不到所谓的“Fedora体验”。它感觉就像一个人评价Ubuntu时用的却是Kubuntu评价OS X时用的却是Hackintosh或者评价Gentoo时用的却是Sabayon。根据大量Michael论坛的读者的说法它们在评价各种发行版时使用的都是默认设置的发行版——我也不例外。但是我还是认为这些评价应该在“真实”配置下完成当然我也知道在给定的情况下评论某些东西也的确是有价值的——无论是好是坏。
正是在怀着这种态度的情况下我决定到Gnome这个水坑里来泡泡澡。
但是我还要在此多加一个声明……我在这里所看到的KDE和Gnome都是打包在Fedora中的。OpenSUSE, Kubuntu, Arch等发行版的各个桌面可能有不同的实现方法使得我这里所说的具体的“痛处”跟你所用的发行版有所不同。还有虽然用了这个标题但这篇文章将会是一篇很沉重的非常“KDE”的文章。之所以这样称呼这篇文章是因为我在使用了Gnome之后才知道KDE的“剪纸”到底有多多。
### 登录界面 ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920)
我一般情况下都不会介意发行版装载它们自己的特别主题,因为一般情况下桌面看起来会更好看。可我今天可算是找到了一个例外。
第一印象很重要对吧那么GDMGnome Display ManageGnome显示管理器译者注下同。决对干得漂亮。它的登录界面看起来极度简洁每一部分都应用了一致的设计风格。使用通用图标而不是输入框为它的简洁加了分。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920)
这并不是说Fedora 22 KDE——现在已经是SDDM而不是KDM了——的登录界面不好看但是看起来绝对没有它这样和谐。
问题到底出来在哪顶部栏。看看Gnome的截图——你选择一个用户然后用一个很小的齿轮简单地选择想登入哪个会话。设计很简洁它不挡着你的道儿实话讲如果你没注意的话可能完全会看不到它。现在看看那蓝色 blue有忧郁之意一语双关译者注的KDE截图顶部栏看起来甚至不像是用同一个工具渲染出来的它的整个位置的安排好像是某人想着“哎哟妈呀我们需要把这个选项扔在哪个地方……”之后决定下来的。
对于右上角的重启和关机选项也一样。为什么不单单用一个电源按钮,点击后会下拉出一个菜单,里面包括重启,关机,挂起的功能?按钮的颜色跟背景色不同肯定会让它更加突兀和显眼……但我可不觉得这样子有多好。同样,这看起来可真像“苦思”后的决定。
从实用角度来看GDM还要实用得多再看看顶部一栏。时间被列了出来还有一个音量控制按钮如果你想保持周围安静你甚至可以在登录前设置静音还有一个无障碍按钮用来实现高对比度、缩放、语音转文字等功能所有这些功能通过一个简单的开关按钮就能启用。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920)
切换到上游的Breeze主题……突然间我抱怨的大部分问题都得到了解决。通用图标所有东西都放在了屏幕中央但不是那么重要的被放到了一边。因为屏幕顶部和底部都是同样的空白在中间也就酝酿出了一种美好的和谐。还是有一个输入框来切换会话但既然电源按钮被做成了通用图标那么这点还算可以原谅。当然gnome还是有一些很好的附加物例如音量小程序和无障碍按钮但Breeze总归是Fedora的KDE主题的一个进步。
再看看Windows(Windows 8和10之前或者OS X你会看到类似的东西——非常简洁的“不挡你道”的锁屏与登录界面它们都没有输入框或者其它分散视觉的小工具。这是一种有效的不分散人注意力的设计。Fedora……默认装有Breeze。VDG在Breeze主题设计上干得不错。可别糟蹋了它。
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1
作者Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts
[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1
[3]:https://launchpad.net/hundredpapercuts

View File

@ -0,0 +1,31 @@
将GNOME作为我的Linux桌面的一周他们做对的与做错的 - 第二节 - GNOME桌面
================================================================================
### 桌面 ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920)
在我这一周的前五天中我都是直接手动登录进Gnome的——没有打开自动登录功能。在第五天的晚上每一次都要手动登录让我觉得很厌烦所以我就到用户管理器中打开了自动登录功能。下一次我登录的时候收到了一个提示“你的密钥链(keychain)未解锁请输入你的密码解锁”。在这时我才意识到了什么……每一次我通过GDM登录时——我的KDB钱包提示——Gnome以前一直都在自动解锁我的密钥链当我绕开GDM的登录程序时Gnome才不得不介入让我手动解锁。
现在鄙人的陋见是如果你打开了自动登录功能那么你的密钥链也应当自动解锁——否则这功能还有何用无论如何你还是要输入你的密码况且在GDM登录界面你还能有机会选择要登录的会话。
但是,这点且不提,也就是在那一刻,我意识到要让这桌面感觉是它在**和我**一起工作是多么简单的一件事。当我通过SDDM登录KDE时甚至连启动界面都还没加载完成就有一个窗口弹出来遮挡了启动动画——因此启动动画也就被破坏了——提示我解锁我的KDE钱包或GPG钥匙环。
如果当前不存在钱包你就会收到一个创建钱包的提醒——就不能在创建用户的时候同时为我创建一个吗——接着它又让你在两种加密模式中选择一种甚至还暗示我们其中一种是不安全的Blowfish),既然是为了安全为什么还要我选择一个不安全的东西作者声明如果你安装了真正的KDE spin版本而不是仅仅安装了KDE的事后版本那么在创建用户时它就会为你创建一个钱包。但很不幸的是它不会帮你解锁并且它似乎还使用了更老的Blowfish加密模式而不是更新而且更安全的GPG模式。
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920)
如果你选择了那个安全的加密模式GPG那么它会尝试加载GPG密钥……我希望你已经创建过一个了因为如果你没有那么你可又要被批一顿了。怎么样才能创建一个额……它不帮你创建一个……也不告诉你怎么创建……假如你真的搞明白了你应该使用KGpg来创建一个密钥接着在你就会遇到一层层的菜单和一个个的提示而这些菜单和提示只能让新手感到困惑。为什么你要问我GPG的二进制文件在哪天知道在哪如果不止一个你就不能为我选择一个最新的吗如果只有一个我再问一次为什么你还要问我
为什么你要问我要使用多大的密钥大小和加密算法你既然默认选择了2048和RSA/RSA为什么不直接使用如果你想让这些选项能够被改变那就把它们扔在下面的"Expert mode专家模式"按钮里去。这不仅仅关于使配置可被用户改变而是关于默认地把多余的东西扔在了用户面前。这种问题将会成为剩下文章的主题之一……KDE需要更好更理智的默认配置。配置是好的我很喜欢在使用KDE时的配置但它还需要知道什么时候应该什么时候不应该去提示用户。而且它还需要知道“嗯它是可配置的”不能作为默认配置做得不好的借口。用户最先接触到的就是默认配置不好的默认配置注定要失去用户。
让我们抛开密钥链的问题,因为我想我已经表达出了我的想法。
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2
作者Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出