Merge pull request #2 from LCTT/master

Update my Repo
This commit is contained in:
joeren 2015-08-04 07:59:31 +08:00
commit 2514c1c03d
18 changed files with 1225 additions and 535 deletions


@ -0,0 +1,205 @@
关于Linux防火墙'iptables'的面试问答
================================================================================
Nishita Agarwal 是 Tecmint 的用户,她将分享她刚刚经历的一场面试(印度 Pune 的一家私人公司)的经验。在面试中她被问及许多不同的问题,但她是 iptables 方面的专家,因此她想把这些关于 iptables 的问题和相应的答案,分享给那些以后可能会遇到类似面试的人。
![Linux防火墙Iptables面试问题](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-iptables-Interview-Questions.jpg)
所有的问题和相应的答案都基于 Nishita Agarwal 的记忆,并经过了重写。
> “嗨,朋友!我叫 **Nishita Agarwal**。我已经取得了理学学士学位,我的专业方向是 UNIX 及其变种(BSD、Linux),它们一直深深地吸引着我。我在存储方面有 1 年多的经验。我正在寻求职业上的变化,希望进入位于印度 Pune 的一家私人公司供职。”
下面是我在面试中被问到的问题的集合。我已经把我记忆中有关iptables的问题和它们的答案记录了下来。希望这会对您未来的面试有所帮助。
### 1. 你听说过 Linux 下面的 iptables 和 Firewalld 么?知不知道它们是什么,是用来干什么的? ###
**答案** : iptables 和 Firewalld 我都知道,并且我已经使用 iptables 好一段时间了。iptables 主要由 C 语言写成,并且以 GNU GPL 许可证发布。它是从系统管理员的角度编写的,最新的稳定版是 iptables 1.4.21。iptables 通常被用作类 UNIX 系统中的防火墙,更准确的说,可以称为 iptables/netfilter。管理员通过终端/GUI 工具与 iptables 打交道,来添加和定义防火墙规则到预定义的表中。Netfilter 是内核中的一个模块,它执行包过滤的任务。
Firewalld 是 RHEL/CentOS 7(也许还有其他发行版,但我不太清楚)中最新的包过滤规则实现。它已经取代了 iptables 接口,并与 netfilter 相连接。
### 2. 你用过一些 iptables 的 GUI 或命令行工具么? ###
**答案** : 虽然我既用过 GUI 工具(比如与 [Webmin][1] 结合的 Shorewall),也直接通过终端访问过 iptables,但我必须承认,通过 Linux 终端直接访问 iptables 能给予用户更高的灵活性,以及对其幕后工作原理更好的理解。GUI 适合初级管理员,而终端适合有经验的管理员。
### 3. 那么 iptables 和 firewalld 的基本区别是什么呢? ###
**答案** : iptables 和 firewalld 都有着同样的目的(包过滤),但它们使用不同的方式。与 firewalld 不同,iptables 在每次发生更改时都会刷新整个规则集。通常 iptables 的配置文件位于 /etc/sysconfig/iptables,而 firewalld 的配置文件位于 /etc/firewalld/,是一组 XML 文件。以 XML 为基础进行配置的 firewalld 比 iptables 的配置更加容易,但是两者都可以完成同样的任务。例如,firewalld 在它自己的命令行界面和基于 XML 的配置文件之下,使用的仍然是 iptables。
### 4. 如果有机会的话,你会在你所有的服务器上用 firewalld 替换 iptables 么? ###
**答案** : 我对 iptables 很熟悉,它也工作得很好。如果没有需求用到 firewalld 的动态特性,那么没有理由把所有的配置都从 iptables 迁移到 firewalld。通常情况下,目前为止我还没有看到 iptables 造成什么麻烦。IT 技术的通用准则也说道:“为什么要去修理一件没有坏的东西呢?”。上面是我自己的想法,但如果组织愿意用 firewalld 替换 iptables 的话,我不介意。
### 5. 你看上去对 iptables 很有信心,巧的是,我们的服务器也在使用 iptables。 ###
iptables 使用的表有哪些?请简要描述 iptables 使用的表以及它们所支持的链。
**答案** : 谢谢您的赞赏。至于您问的问题,iptables 使用的表有四个,它们是:
- Nat 表
- Mangle 表
- Filter 表
- Raw 表
Nat 表 : Nat 表主要用于网络地址转换(NAT)。它根据表中的规则修改网络包的 IP 地址。流中的包仅遍历一遍 Nat 表,例如,如果流中第一个包在某个接口上被做了 NAT(修改了 IP 地址),该流中其余的包将不再遍历这个表。通常不建议在这个表中进行过滤。NAT 表支持的链有:PREROUTING 链、POSTROUTING 链和 OUTPUT 链。
Mangle 表 : 正如它的名字一样,这个表用于校正(mangle)网络包,对特殊的包进行修改。它能够修改不同包的头部和内容。Mangle 表不能用于地址伪装。支持的链包括:PREROUTING 链、OUTPUT 链、FORWARD 链、INPUT 链和 POSTROUTING 链。
Filter 表 : Filter 表是 iptables 中使用的默认表,它用来过滤网络包。如果没有指定任何表,Filter 表就会被当作默认的表,并基于它来过滤。支持的链有:INPUT 链、OUTPUT 链和 FORWARD 链。
Raw 表 : 当我们想要配置之前被豁免(不做连接跟踪)的包时,会用到 Raw 表。它支持 PREROUTING 链和 OUTPUT 链。
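上面“表 → 内建链”的对应关系可以写成一个速查小脚本来帮助记忆。下面是笔者补充的一个示意脚本(并非 iptables 自带的功能),用 bash 关联数组存储这一映射:

```shell
#!/usr/bin/env bash
# “表 -> 内建链”对应关系速查(示意脚本,仅用于记忆和自测)。
declare -A CHAINS=(
  [nat]="PREROUTING POSTROUTING OUTPUT"
  [mangle]="PREROUTING OUTPUT FORWARD INPUT POSTROUTING"
  [filter]="INPUT OUTPUT FORWARD"
  [raw]="PREROUTING OUTPUT"
)

# 打印指定表所支持的链。用法:chains_of <表名>
chains_of() { echo "${CHAINS[$1]}"; }

chains_of filter    # 输出:INPUT OUTPUT FORWARD
```

实际面试或排查时,也可以用 `iptables -t <表名> -L` 直接查看某个表的链与规则来验证。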
### 6. 简要谈谈 iptables 中的目标值(可以在规则中指定为目标的值),它们有什么用? ###
**答案** : 下面是在 iptables 中可以指定为目标的值:
- ACCEPT : 接受包
- QUEUE : 将包传递到用户空间 (应用程序和驱动所在的地方)
- DROP : 丢弃包
- RETURN : 将控制权交回调用链,并且不再对当前包执行本链中的后续规则
### 7. 让我们来谈谈 iptables 技术方面的东西,我的意思是说实际使用方面。 ###
你怎么检查在 CentOS 中安装 iptables 时所需要的 iptables rpm 包?
**答案** : iptables 已经默认安装在 CentOS 中,我们不需要单独安装它。但可以这样检查 rpm 包:
# rpm -qa iptables
iptables-1.4.21-13.el7.x86_64
如果您需要安装它,可以用 yum 来安装:
# yum install iptables-services
### 8. 怎样检查并且确保 iptables 服务正在运行? ###
**答案** : 您可以在终端中运行下面的命令来检查 iptables 的状态。
# service iptables status [On CentOS 6/5]
# systemctl status iptables [On CentOS 7]
如果 iptables 没有在运行,可以使用下面的命令启动它:
---------------- 在CentOS 6/5下 ----------------
# chkconfig --level 35 iptables on
# service iptables start
---------------- 在CentOS 7下 ----------------
# systemctl enable iptables
# systemctl start iptables
我们还可以检查 iptables 的内核模块是否被加载:
# lsmod | grep ip_tables
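这一检查也可以封装成一个小函数,便于在巡检脚本中复用。下面是笔者补充的一个示意脚本:为了在没有 root 权限的环境里也能演示,模块列表以参数传入(实际使用时可直接传入 `lsmod` 的输出):

```shell
#!/usr/bin/env bash
# 检查给定的 lsmod 式输出中,某个内核模块是否已加载(示意脚本)。
# 用法:module_loaded <模块名> <lsmod 输出文本>
module_loaded() {
  echo "$2" | awk '{print $1}' | grep -qx "$1"
}

# 用一段示例输出演示;实际使用时可写 module_loaded ip_tables "$(lsmod)"
sample='Module                  Size  Used by
ip_tables              27126  1 iptable_filter
x_tables               34059  2 ip_tables,iptable_filter'

module_loaded ip_tables "$sample" && echo "ip_tables 已加载"
```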
### 9. 你怎么检查 iptables 中当前定义的规则呢? ###
**答案** : 当前的规则可以简单地用下面的命令查看:
# iptables -L
示例输出
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
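读懂 `iptables -L` 的输出后,还可以用 awk 对其做简单的统计。下面是笔者补充的一个示意脚本,为了无需 root 权限即可运行,这里把上面的示例输出内联了进来(实际使用时可替换为 `$(iptables -L)`):

```shell
#!/usr/bin/env bash
# 统计 `iptables -L` 输出中各目标(target)出现的次数(示意脚本)。
rules='Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination'

# 只统计规则行(首列恰好等于目标名的行),跳过 Chain/表头行。
count_target() { echo "$rules" | awk -v t="$1" '$1 == t { n++ } END { print n+0 }'; }

count_target ACCEPT   # 输出:4
count_target REJECT   # 输出:2
```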
### 10. 你怎样刷新所有的 iptables 规则或者特定的链呢? ###
**答案** : 您可以使用下面的命令来刷新一个特定的链。
# iptables --flush OUTPUT
要刷新所有的规则,可以用:
# iptables --flush
### 11. 请在iptables中添加一条规则接受所有从一个信任的IP地址例如192.168.0.7)过来的包。 ###
**答案** : 上面的场景可以通过运行下面的命令来完成。
# iptables -A INPUT -s 192.168.0.7 -j ACCEPT
我们还可以在源 IP 中使用标准的斜线(CIDR)记法和子网掩码:
# iptables -A INPUT -s 192.168.0.7/24 -j ACCEPT
# iptables -A INPUT -s 192.168.0.7/255.255.255.0 -j ACCEPT
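需要注意,带上 /24 后匹配的是整个 192.168.0.0/24 网段,而不只是 192.168.0.7 这一个地址。下面是笔者补充的一个纯 bash 示意脚本,用位运算验证这一点(仅用于演示 CIDR 语义,与 iptables 本身无关):

```shell
#!/usr/bin/env bash
# 把点分十进制 IP 转成 32 位整数。
ip2int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

# 判断 <ip> 是否落在 <cidr>(如 192.168.0.7/24)所表示的网段内。
in_subnet() {
  local net=${2%/*} bits=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$1") & mask )) -eq $(( $(ip2int "$net") & mask )) ]
}

in_subnet 192.168.0.200 192.168.0.7/24 && echo "192.168.0.200 同属该网段"
```

因此,上面带 /24 的规则会放行整个子网的来包;若只想信任单个地址,应使用不带掩码的写法。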
### 12. 怎样在 iptables 中添加规则,以 ACCEPT、REJECT、DENY 和 DROP 处理 ssh 服务? ###
**答案** : 假设 ssh 运行在 22 端口(那也是 ssh 的默认端口),我们可以在 iptables 中添加规则,来 ACCEPT 22 号端口上 ssh 服务的 tcp 包:
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
REJECT 22 号端口上 ssh 服务的 tcp 包:
# iptables -A INPUT -p tcp --dport 22 -j REJECT
DENY 22 号端口上 ssh 服务的 tcp 包(LCTT 译注:iptables 并没有名为 DENY 的内建目标,静默拒绝实际上就是下面的 DROP):
DROP 22 号端口上 ssh 服务的 tcp 包:
# iptables -A INPUT -p tcp --dport 22 -j DROP
### 13. 让我给你另一个场景假如有一台电脑的本地IP地址是192.168.0.6。你需要封锁在21、22、23和80号端口上的连接你会怎么做 ###
**答案** : 这时我所需要的就是在 iptables 中使用 multiport 选项,并将要封锁的端口号跟在它后面。上面的场景可以用下面的一条语句搞定:
# iptables -A INPUT -s 192.168.0.6 -p tcp -m multiport --dport 21,22,23,80 -j DROP
可以用下面的语句查看写入的规则。
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
DROP tcp -- 192.168.0.6 anywhere multiport dports ftp,ssh,telnet,http
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
**面试官** : 好了,我问的就是这些。你是一个很有价值的雇员,我们不会错过你这样的人才。我将会向 HR 推荐你的名字。如果你有什么问题,请问我。
作为一个候选人,我不愿不断地问将来要做的项目的事以及公司里其他的事,那样会打断愉快的对话。更不用说 HR 轮也不难。总之,我获得了这个机会。
同时,我要感谢 Avishek 和 Ravi(我的朋友)花时间帮我整理这次面试的记录。
朋友!如果您有过类似的面试,并且愿意与数百万 Tecmint 读者一起分享您的面试经历,请将您的问题和答案发送到 admin@tecmint.com。
谢谢!保持联系。如果我能把上面的问题回答得更好,请记得告诉我。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-firewall-iptables-interview-questions-and-answers/
作者:[Avishek Kumar][a]
译者:[wwy-hust](https://github.com/wwy-hust)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/install-webmin-web-based-system-administration-tool-for-rhel-centos-fedora/


@ -1,11 +1,13 @@
如何在 Fedora 22 上配置 Proftpd 服务器
================================================================================
在本文中,我们将了解如何在运行 Fedora 22 的电脑或服务器上使用 Proftpd 架设 FTP 服务器。[ProFTPD][1] 是一款基于 GPL 授权的自由开源 FTP 服务器软件,是 Linux 上的主流 FTP 服务器。它的主要设计目标是提供许多高级功能以及给用户提供丰富的配置选项以轻松实现定制。它具备许多在其他一些 FTP 服务器软件里仍然没有的配置选项。最初它是被开发作为 wu-ftpd 服务器的一个更安全更容易配置的替代。
FTP 服务器是这样一个软件,用户可以通过 FTP 客户端从安装了它的远端服务器上传或下载文件和目录。下面是一些 ProFTPD 服务器的主要功能,更详细的资料可以访问 [http://www.proftpd.org/features.html][2]。
- 每个目录都可以包含 ".ftpaccess" 文件用于访问控制,类似 Apache 的 ".htaccess"
- 支持多个虚拟 FTP 服务器以及多用户登录和匿名 FTP 服务。
- 可以作为独立进程启动服务或者通过 inetd/xinetd 启动
- 它的文件/目录属性、属主和权限是基于 UNIX 方式的
- 它可以独立运行,保护系统避免 root 访问可能带来的损坏。
- 模块化的设计让它可以轻松扩展其他模块,比如 LDAP 服务器,SSL/TLS 加密,RADIUS 支持,等等。
- ProFTPD 服务器还支持 IPv6。
@ -38,7 +40,7 @@
### 3. 添加 FTP 用户 ###
在设定好了基本的配置文件后,我们很自然地希望添加一个以特定目录为根目录的 FTP 用户。目前登录的用户自动就可以使用 FTP 服务,可以用来登录到 FTP 服务器。但是,在这篇教程里,我们将创建一个以 ftp 服务器上指定目录为主目录的新用户。
下面,我们将建立一个名字是 ftpgroup 的新用户组。
@ -57,7 +59,7 @@
Retype new password:
passwd: all authentication tokens updated successfully.
现在,我们将通过下面命令为这个 ftp 用户设定主目录的读写权限(LCTT 译注:这是 SELinux 相关设置,如果未启用 SELinux,可以不用)。
$ sudo setsebool -P allow_ftpd_full_access=1
$ sudo setsebool -P ftp_home_dir=1
@ -158,7 +160,7 @@
### 7. 登录到 FTP 服务器 ###
现在,如果都是按照本教程设置好的,我们一定可以连接到 ftp 服务器并使用以上设置的信息登录上去。在这里,我们将配置一下 FTP 客户端 filezilla,使用 **服务器的 IP 或名称** 作为主机名,协议选择 **FTP**,用户名填入 **arunftp**,密码是在上面第 3 步中设定的密码。如果你按照第 4 步中的方式打开了 TLS 支持,还需要在加密类型中选择 **要求显式的基于 TLS 的 FTP**,如果没有打开,也不想使用 TLS 加密,那么加密类型选择 **简单 FTP**。
![FTP 登录细节](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-login-details.png)
@ -170,7 +172,7 @@
### 总结 ###
最后,我们成功地在 Fedora 22 机器上安装并配置好了 Proftpd FTP 服务器。Proftpd 是一个超级强大,能高度定制和扩展的 FTP 守护软件。上面的教程展示了如何配置一个采用 TLS 加密的安全 FTP 服务器。强烈建议设置 FTP 服务器支持 TLS 加密,因为它允许使用 SSL 凭证加密数据传输和登录。本文中,我们也没有配置 FTP 的匿名访问,因为一般受保护的 FTP 系统不建议这样做。FTP 访问让人们的上传和下载变得非常简单也更高效。我们还可以改变用户端口增加安全性。好吧,如果你有任何疑问、建议、反馈,请在下面评论区留言,这样我们就能够改善并更新文章内容。谢谢!玩的开心 :-)
--------------------------------------------------------------------------------
@ -178,7 +180,7 @@ via: http://linoxide.com/linux-how-to/configure-ftp-proftpd-fedora-22/
作者:[Arun Pyasi][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,15 +1,15 @@
如何在 Ubuntu 上比较 PDF 文件
================================================================================
如果你想要对 PDF 文件进行比较,你可以使用下面工具之一。
### Comparepdf ###
comparepdf 是一个命令行应用,用于将两个 PDF 文件进行对比。默认对比模式是文本模式,该模式会对各对相关页面进行文字对比。只要一检测到差异,该程序就会终止,并显示一条信息(除非设置了 -v0)和一个指示性的返回码。
用于文本模式对比的选项有 -ct 或 --compare=text(默认),用于视觉对比(这对图标或其它图像发生改变时很有用)的选项有 -ca 或 --compare=appearance。而 -v=1 或 --verbose=1 选项则用于报告差异(或者对匹配文件不作任何回应);使用 -v=0 选项取消报告,或者 -v=2 来同时报告不同的和匹配的文件。
#### 安装 comparepdf 到 Ubuntu ####
打开终端,然后运行以下命令:
@ -19,17 +19,17 @@ comparepdf是一个命令行应用,用于将两个PDF文件进行对比。默
comparepdf [OPTIONS] file1.pdf file2.pdf
### Diffpdf ###
DiffPDF 是一个图形化应用程序,用于对两个 PDF 文件进行对比。默认情况下,它只会对比两个相关页面的文字,但是也支持对图形化页面进行对比(例如,如果图表被修改过,或者段落被重新格式化过)。它也可以对特定的页面或者页面范围进行对比。例如,如果同一个 PDF 文件有两个版本,其中一个有页面 1-12,而另一个则有页面 1-13(因为添加了一个额外的页面 4),它们可以通过指定两个页面范围来进行对比:第一个是 1-12,而 1-3,5-13 则可以作为第二个页面范围。这将使得 DiffPDF 成对地对比这些页面:(1,1)、(2,2)、(3,3)、(4,5)、(5,6),以此类推,直到 (12,13)。
#### 安装 diffpdf 到 ubuntu ####
打开终端,然后运行以下命令:
sudo apt-get install diffpdf
#### 截图 ####
![](http://www.ubuntugeek.com/wp-content/uploads/2015/07/14.png)
@ -41,7 +41,7 @@ via: http://www.ubuntugeek.com/compare-pdf-files-on-ubuntu.html
作者:[ruchi][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,88 +0,0 @@
FSSlc Translating
7 communities driving open source development
================================================================================
Not so long ago, the open source model was the rebellious kid on the block, viewed with suspicion by established industry players. Today, open initiatives and foundations are flourishing with long lists of vendor committers who see the model as a key to innovation.
![](http://images.techhive.com/images/article/2015/01/0_opensource-title-100539095-orig.jpg)
### Open Development of Tech Drives Innovation ###
Over the past two decades, open development of technology has come to be seen as a key to driving innovation. Even companies that once saw open source as a threat have come around — Microsoft, for example, is now active in a number of open source initiatives. To date, most open development has focused on software. But even that is changing as communities have begun to coalesce around open hardware initiatives. Here are seven organizations that are successfully promoting and developing open technologies, both hardware and software.
### OpenPOWER Foundation ###
![](http://images.techhive.com/images/article/2015/01/1_openpower-100539100-orig.jpg)
The [OpenPOWER Foundation][2] was founded by IBM, Google, Mellanox, Tyan and NVIDIA in 2013 to drive open collaboration hardware development in the same spirit as the open source software development which has found fertile ground in the past two decades.
IBM seeded the foundation by opening up its Power-based hardware and software technologies, offering licenses to use Power IP in independent hardware products. More than 70 members now work together to create custom open servers, components and software for Linux-based data centers.
In April, OpenPOWER unveiled a technology roadmap based on new POWER8 process-based servers capable of analyzing data 50 times faster than the latest x86-based systems. In July, IBM and Google released a firmware stack. October saw the availability of NVIDIA GPU accelerated POWER8 systems and the first OpenPOWER reference server from Tyan.
### The Linux Foundation ###
![](http://images.techhive.com/images/article/2015/01/2_the-linux-foundation-100539101-orig.jpg)
Founded in 2000, [The Linux Foundation][2] is now the host for the largest open source, collaborative development effort in history, with more than 180 corporate members and many individual and student members. It sponsors the work of key Linux developers and promotes, protects and advances the Linux operating system and collaborative software development.
Some of its most successful collaborative projects include Code Aurora Forum (a consortium of companies with projects serving the mobile wireless industry), MeeGo (a project to build a Linux kernel-based operating system for mobile devices and IVI) and the Open Virtualization Alliance (which fosters the adoption of free and open source software virtualization solutions).
### Open Virtualization Alliance ###
![](http://images.techhive.com/images/article/2015/01/3_open-virtualization-alliance-100539102-orig.jpg)
The [Open Virtualization Alliance (OVA)][3] exists to foster the adoption of free and open source software virtualization solutions like Kernel-based Virtual Machine (KVM) through use cases and support for the development of interoperable common interfaces and APIs. KVM turns the Linux kernel into a hypervisor.
Today, KVM is the most commonly used hypervisor with OpenStack.
### The OpenStack Foundation ###
![](http://images.techhive.com/images/article/2015/01/4_the-openstack-foundation-100539096-orig.jpg)
Originally launched as an Infrastructure-as-a-Service (IaaS) product by NASA and Rackspace hosting in 2010, the [OpenStack Foundation][4] has become the home for one of the biggest open source projects around. It boasts more than 200 member companies, including AT&T, AMD, Avaya, Canonical, Cisco, Dell and HP.
Organized around a six-month release cycle, the foundation's OpenStack projects are developed to control pools of processing, storage and networking resources through a data center — all managed or provisioned through a Web-based dashboard, command-line tools or a RESTful API. So far, the collaborative development supported by the foundation has resulted in the creation of OpenStack components including OpenStack Compute (a cloud computing fabric controller that is the main part of an IaaS system), OpenStack Networking (a system for managing networks and IP addresses) and OpenStack Object Storage (a scalable redundant storage system).
### OpenDaylight ###
![](http://images.techhive.com/images/article/2015/01/5_opendaylight-100539097-orig.jpg)
Another collaborative project to come out of the Linux Foundation, [OpenDaylight][5] is a joint initiative of industry vendors, like Dell, HP, Oracle and Avaya founded in April 2013. Its mandate is the creation of a community-led, open, industry-supported framework consisting of code and blueprints for Software-Defined Networking (SDN). The idea is to provide a fully functional SDN platform that can be deployed directly, without requiring other components, though vendors can offer add-ons and enhancements.
### Apache Software Foundation ###
![](http://images.techhive.com/images/article/2015/01/6_apache-software-foundation-100539098-orig.jpg)
The [Apache Software Foundation (ASF)][7] is home to nearly 150 top level projects ranging from open source enterprise automation software to a whole ecosystem of distributed computing projects related to Apache Hadoop. These projects deliver enterprise-grade, freely available software products, while the Apache License is intended to make it easy for users, whether commercial or individual, to deploy Apache products.
ASF was incorporated in 1999 as a membership-based, not-for-profit corporation with meritocracy at its heart — to become a member you must first be actively contributing to one or more of the foundation's collaborative projects.
### Open Compute Project ###
![](http://images.techhive.com/images/article/2015/01/7_open-compute-project-100539099-orig.jpg)
An outgrowth of Facebook's redesign of its Oregon data center, the [Open Compute Project (OCP)][7] aims to develop open hardware solutions for data centers. The OCP is an initiative made up of cheap, vanity-free servers, modular I/O storage for Open Rack (a rack standard designed for data centers to integrate the rack into the data center infrastructure) and a relatively "green" data center design.
OCP board members include representatives from Facebook, Intel, Goldman Sachs, Rackspace and Microsoft.
OCP recently announced two options for licensing: an Apache 2.0-like license that allows for derivative works and a more prescriptive license that encourages changes to be rolled back into the original software.
--------------------------------------------------------------------------------
via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities-driving-open-source-development.html
作者:[Thor Olavsrud][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.networkworld.com/author/Thor-Olavsrud/
[1]:http://openpowerfoundation.org/
[2]:http://www.linuxfoundation.org/
[3]:https://openvirtualizationalliance.org/
[4]:http://www.openstack.org/foundation/
[5]:http://www.opendaylight.org/
[6]:http://www.apache.org/
[7]:http://www.opencompute.org/


@ -1,91 +0,0 @@
Translating by Ezio
How to run Ubuntu Snappy Core on Raspberry Pi 2
================================================================================
The Internet of Things (IoT) is upon us. In a couple of years some of us might ask ourselves how we ever survived without it, just like we question our past without cellphones today. Canonical is a contender in this fast growing, but still wide open market. The company wants to claim their stakes in IoT just as they already did for the cloud. At the end of January, the company launched a small operating system that goes by the name of [Ubuntu Snappy Core][1] which is based on Ubuntu Core.
Snappy, the new component in the mix, represents a package format that is derived from DEB, and is a frontend to update the system that borrows its idea from the atomic upgrades used in CoreOS, Red Hat's Atomic and elsewhere. As soon as the Raspberry Pi 2 was marketed, Canonical released Snappy Core for that platform. The first edition of the Raspberry Pi was not able to run Ubuntu because Ubuntu's ARM images use the ARMv7 architecture, while the first Raspberry Pis were based on ARMv6. That has changed now, and Canonical, by releasing an RPi 2 image of Snappy Core, took the opportunity to make clear that Snappy was meant for the cloud and especially for IoT.
Snappy also runs on other platforms like Amazon EC2, Microsoft's Azure, and Google's Compute Engine, and can also be virtualized with KVM, VirtualBox, or Vagrant. Canonical has embraced big players like Microsoft, Google, Docker or OpenStack and, at the same time, also included small projects from the maker scene as partners. Besides startups like Ninja Sphere and Erle Robotics, there are board manufacturers like Odroid, Banana Pro, Udoo, PCDuino and Parallella as well as Allwinner. Snappy Core will also run in routers soon to help with the poor upgrade policy that vendors perform.
In this post, let's see how we can test Ubuntu Snappy Core on Raspberry Pi 2.
The image for Snappy Core for the RPI2 can be downloaded from the [Raspberry Pi website][2]. Unpacked from the archive, the resulting image should be [written to an SD card][3] of at least 8 GB. Even though the OS is small, atomic upgrades and the rollback function eat up quite a bit of space. After booting up your Raspberry Pi 2 with Snappy Core, you can log into the system with the default username and password being 'ubuntu'.
![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg)
sudo is already configured and ready for use. For security reasons you should change the username with:
$ sudo usermod -l <new name> <old name>
Alternatively, you can add a new user with the command `adduser`.
Due to the lack of a hardware clock on the RPi, which the Snappy Core image does not take into account, the image has a small bug that will throw a lot of errors when processing commands. It is easy to fix.
To find out if the bug affects you, use the command:
$ date
If the output is "Thu Jan 1 01:56:44 UTC 1970", you can fix it with:
$ sudo date --set="Sun Apr 04 17:43:26 UTC 2015"
adapted to your actual time.
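The check-and-fix can also be scripted, so the clock is only touched when it is actually stuck at the epoch. The sketch below is an illustrative helper (not part of the Snappy image); the 2015-01-01 cutoff is an assumption chosen simply because any earlier date cannot be real on this system:

```shell
#!/usr/bin/env bash
# Detect the "clock stuck near the 1970 epoch" symptom described above.
# Takes seconds-since-epoch; succeeds (exit 0) if the value is clearly bogus.
clock_looks_unset() {
  # 1420070400 = 2015-01-01 00:00:00 UTC (assumed sanity cutoff)
  [ "$1" -lt 1420070400 ]
}

if clock_looks_unset "$(date +%s)"; then
    echo "system clock needs fixing"
    # sudo date --set="..." would go here, adapted to the actual time
fi
```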
![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg)
Now you might want to check if there are any updates available. Note that the usual commands:
$ sudo apt-get update && sudo apt-get dist-upgrade
will not get you very far though, as Snappy uses its own simplified package management system which is based on dpkg. This makes sense, as Snappy will run on a lot of embedded appliances, and you want things to be as simple as possible.
Let's dive into the engine room for a minute to understand how things work with Snappy. The SD card you run Snappy on has three partitions besides the boot partition. Two of those house a duplicated file system. Both of those parallel file systems are permanently mounted as "read only", and only one is active at any given time. The third partition holds a partial writable file system and the user's persistent data. With a fresh system, the partition labeled 'system-a' holds one complete file system, called a core, leaving the parallel partition still empty.
![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg)
If we run the following command now:
$ sudo snappy update
the system will install the update as a complete core, similar to an image, on 'system-b'. You will be asked to reboot your device afterwards to activate the new core.
After the reboot, run the following command to check if your system is up to date and which core is active.
$ sudo snappy versions -a
After rolling out the update and rebooting, you should see that the core that is now active has changed.
As we have not installed any apps yet, the following command:
$ sudo snappy update ubuntu-core
would have been sufficient, and is the way if you want to upgrade just the underlying OS. Should something go wrong, you can rollback by:
$ sudo snappy rollback ubuntu-core
which will take you back to the system's state before the update.
![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg)
Speaking of apps, they are what makes Snappy useful. There are not that many at this point, but the IRC channel #snappy on Freenode is humming along nicely and with a lot of people involved, the Snappy App Store gets new apps added on a regular basis. You can visit the shop by pointing your browser to http://<ip-address>:4200, and you can install apps right from the shop and then launch them with http://webdm.local in your browser. Building apps yourself for Snappy is not all that hard, and [well documented][4]. You can also port DEB packages into the snappy format quite easily.
![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg)
Ubuntu Snappy Core, due to the limited number of available apps, is not overly useful in a productive way at this point in time, although it invites us to dive into the new Snappy package format and play with atomic upgrades the Canonical way. Since it is easy to set up, this seems like a good opportunity to learn something new.
--------------------------------------------------------------------------------
via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html
作者:[Ferdinand Thommes][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/ferdinand
[1]:http://www.ubuntu.com/things
[2]:http://www.raspberrypi.org/downloads/
[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html
[4]:https://developer.ubuntu.com/en/snappy/


@ -1,131 +0,0 @@
ictlyh Translating
How to access a Linux server behind NAT via reverse SSH tunnel
================================================================================
You are running a Linux server at home, which is behind a NAT router or restrictive firewall. Now you want to SSH to the home server while you are away from home. How would you set that up? SSH port forwarding will certainly be an option. However, port forwarding can become tricky if you are dealing with multiple nested NAT environment. Besides, it can be interfered with under various ISP-specific conditions, such as restrictive ISP firewalls which block forwarded ports, or carrier-grade NAT which shares IPv4 addresses among users.
### What is Reverse SSH Tunneling? ###
One alternative to SSH port forwarding is **reverse SSH tunneling**. The concept of reverse SSH tunneling is simple. For this, you will need another host (so-called "relay host") outside your restrictive home network, which you can connect to via SSH from where you are. You could set up a relay host using a [VPS instance][1] with a public IP address. What you do then is to set up a persistent SSH tunnel from the server in your home network to the public relay host. With that, you can connect "back" to the home server from the relay host (which is why it's called a "reverse" tunnel). As long as the relay host is reachable to you, you can connect to your home server wherever you are, or however restrictive your NAT or firewall is in your home network.
![](https://farm8.staticflickr.com/7742/17162647378_c7d9f10de8_b.jpg)
### Set up a Reverse SSH Tunnel on Linux ###
Let's see how we can create and use a reverse SSH tunnel. We assume the following. We will be setting up a reverse SSH tunnel from homeserver to relayserver, so that we can SSH to homeserver via relayserver from another computer called clientcomputer. The public IP address of **relayserver** is 1.1.1.1.
On homeserver, open an SSH connection to relayserver as follows.
homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1
Here the port 10022 is any arbitrary port number you can choose. Just make sure that this port is not used by other programs on relayserver.
The "-R 10022:localhost:22" option defines a reverse tunnel. It forwards traffic on port 10022 of relayserver to port 22 of homeserver.
With "-fN" option, SSH will go right into the background once you successfully authenticate with an SSH server. This option is useful when you do not want to execute any command on a remote SSH server, and just want to forward ports, like in our case.
After running the above command, you will be right back to the command prompt of homeserver.
Log in to relayserver, and verify that 127.0.0.1:10022 is bound to sshd. If so, that means a reverse tunnel is set up correctly.
relayserver~$ sudo netstat -nap | grep 10022
----------
tcp 0 0 127.0.0.1:10022 0.0.0.0:* LISTEN 8493/sshd
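This verification step can be scripted as well. The sketch below is a hypothetical helper (not from the original article): to keep the example runnable without an actual tunnel, the netstat-style listing is inlined; on relayserver you would feed it `netstat -nap` output instead.

```shell
#!/usr/bin/env bash
# Check whether a given local addr:port appears as a listening socket
# in netstat-style output (column 4 is the local address).
# Usage: tunnel_bound <addr:port> <netstat output text>
tunnel_bound() {
  echo "$2" | awk '{print $4}' | grep -qx "$1"
}

listing='tcp 0 0 127.0.0.1:10022 0.0.0.0:* LISTEN 8493/sshd'

tunnel_bound 127.0.0.1:10022 "$listing" && echo "reverse tunnel is up"
```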
Now from any other computer (e.g., clientcomputer), log in to relayserver. Then access homeserver as follows.
relayserver~$ ssh -p 10022 homeserver_user@localhost
One thing to take note is that the SSH login/password you type for localhost should be for homeserver, not for relayserver, since you are logging in to homeserver via the tunnel's local endpoint. So do not type login/password for relayserver. After successful login, you will be on homeserver.
### Connect Directly to a NATed Server via a Reverse SSH Tunnel ###
While the above method allows you to reach **homeserver** behind NAT, you need to log in twice: first to **relayserver**, and then to **homeserver**. This is because the end point of the SSH tunnel on relayserver is bound to the loopback address (127.0.0.1).
But in fact, there is a way to reach NATed homeserver directly with a single login to relayserver. For this, you will need to let sshd on relayserver forward a port not only from loopback address, but also from an external host. This is achieved by specifying **GatewayPorts** option in sshd running on relayserver.
Open /etc/ssh/sshd_config of **relayserver** and add the following line.
relayserver~$ vi /etc/ssh/sshd_config
----------
GatewayPorts clientspecified
Restart sshd.
Debian-based system:
relayserver~$ sudo /etc/init.d/ssh restart
Red Hat-based system:
relayserver~$ sudo systemctl restart sshd
Now let's initiate a reverse SSH tunnel from homeserver as follows.
homeserver~$ ssh -fN -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
Log in to relayserver and confirm with netstat command that a reverse SSH tunnel is established successfully.
relayserver~$ sudo netstat -nap | grep 10022
----------
tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev
Unlike a previous case, the end point of a tunnel is now at 1.1.1.1:10022 (relayserver's public IP address), not 127.0.0.1:10022. This means that the end point of the tunnel is reachable from an external host.
Now from any other computer (e.g., clientcomputer), type the following command to gain access to NATed homeserver.
clientcomputer~$ ssh -p 10022 homeserver_user@1.1.1.1
In the above command, while 1.1.1.1 is the public IP address of relayserver, homeserver_user must be the user account associated with homeserver. This is because the real host you are logging in to is homeserver, not relayserver. The latter simply relays your SSH traffic to homeserver.
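To avoid retyping the port and user every time, the relay details can go into the client's SSH configuration. A minimal sketch, assuming the example addresses above (the `Host` alias name is our own invention, not from the article):

```
# ~/.ssh/config on clientcomputer (hypothetical alias)
Host homeserver
    HostName 1.1.1.1      # relayserver's public IP from the example
    Port 10022            # the port bound by the reverse tunnel
    User homeserver_user  # account on homeserver, not relayserver
```

With this in place, `ssh homeserver` is equivalent to the full command above.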
### Set up a Persistent Reverse SSH Tunnel on Linux ###
Now that you understand how to create a reverse SSH tunnel, let's make the tunnel "persistent", so that the tunnel is up and running all the time (regardless of temporary network congestion, SSH timeout, relay host rebooting, etc.). After all, if the tunnel is not always up, you won't be able to connect to your home server reliably.
For a persistent tunnel, I am going to use a tool called autossh. As the name implies, this program automatically restarts an SSH session should it break for any reason, which makes it useful for keeping a reverse SSH tunnel active.
As the first step, let's set up [passwordless SSH login][2] from homeserver to relayserver. That way, autossh can restart a broken reverse SSH tunnel without the user's involvement.
Next, [install autossh][3] on homeserver where a tunnel is initiated.
From homeserver, run autossh with the following arguments to create a persistent SSH tunnel destined to relayserver.
homeserver~$ autossh -M 10900 -fN -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
The "-M 10900" option specifies a monitoring port on relayserver which will be used to exchange test data to monitor an SSH session. This port should not be used by any program on relayserver.
The "-fN" option is passed to ssh command, which will let the SSH tunnel run in the background.
The "-o XXXX" options tell ssh to:
- Use key authentication, not password authentication.
- Automatically accept (unknown) SSH host keys.
- Exchange keep-alive messages every 60 seconds.
- Send up to 3 keep-alive messages without receiving any response back.
The rest of the reverse SSH tunneling options remain the same as before.
If you want an SSH tunnel to be automatically up upon boot, you can add the above autossh command in /etc/rc.local.
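For example, an /etc/rc.local entry could look like the following sketch. It simply reuses the exact autossh command and example IP from above; running it via `su` under homeserver_user (an assumption about your setup) keeps that user's SSH keys and known_hosts in effect.

```shell
# /etc/rc.local on homeserver (sketch): bring the tunnel up at boot.
# Adjust the account name and relay address to your own setup.
su - homeserver_user -c 'autossh -M 10900 -fN \
    -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" \
    -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" \
    -o "ServerAliveCountMax 3" \
    -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1'
exit 0
```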
### Conclusion ###
In this post, I talked about how you can use a reverse SSH tunnel to access a Linux server behind a restrictive firewall or NAT gateway from the outside world. While I demonstrated its use case for a home network, you must be careful when applying it to corporate networks. Such a tunnel can be considered a breach of corporate policy, as it circumvents corporate firewalls and can expose corporate networks to outside attacks, and there is a great chance it could be misused or abused. So always consider the implications before setting one up.
--------------------------------------------------------------------------------
via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/go/digitalocean
[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html
[3]:http://ask.xmodulo.com/install-autossh-linux.html

translation by strugglingyouth
How to monitor NGINX with Datadog - Part 3
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)

translation by strugglingyouth
How to monitor NGINX - Part 1
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png)

translation by strugglingyouth
Handy commands for profiling your Unix file systems
================================================================================
![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png)
Credit: Sandra H-S
One of the problems that seems to plague nearly all file systems -- Unix and others -- is the continuous buildup of files. Almost no one takes the time to clean out files that they no longer use and, as a result, file systems become so cluttered with material of little or questionable value that keeping them running well, adequately backed up, and easy to manage is a constant challenge.
One way that I have seen to help encourage the owners of all that data detritus to address the problem is to create a summary report or "profile" of a file collection that reports on such things as the number of files; the oldest, newest, and largest of those files; and a count of who owns those files. If someone realizes that a collection of half a million files contains nothing newer than five years old, they might go ahead and remove them -- or, at least, archive and compress them. The basic problem is that huge collections of files are overwhelming, and most people are afraid that they might accidentally delete something important. Having a way to characterize a file collection can help demonstrate the nature of the content and encourage those digital packrats to clean out their nests.
When I prepare a file system summary report on Unix, a handful of Unix commands easily provide some very useful statistics. To count the files in a directory, you can use a find command like this.
$ find . -type f | wc -l
187534
Finding the oldest and newest files is a bit more complicated, but still quite easy. In the commands below, we're using the find command again to find files, displaying the data with a year-month-day format that makes it possible to sort by file age, and then displaying the top -- thus the oldest -- file in that list.
In the second command, we do the same, but print the last line -- thus the newest -- file.
$ find -type f -printf '%T+ %p\n' | sort | head -n 1
2006-02-03+02:40:33 ./skel/.xemacs/init.el
$ find -type f -printf '%T+ %p\n' | sort | tail -n 1
2015-07-19+14:20:16 ./.bash_history
The %T (file date and time) and %p (file name with path) parameters to find's -printf option allow this to work.
If we're looking at home directories, we're undoubtedly going to find that history files are the newest files and that isn't likely to be a very interesting bit of information. You can omit those files by "un-grepping" them or you can omit all files that start with dots as shown below.
$ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1
2015-07-19+13:02:12 ./isPrime
Finding the largest file involves using the %s (size) parameter and we include the file name (%f) since that's what we want the report to show.
$ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1
20183040 project.org.tar
To summarize file ownership, use the %u (owner) parameter:
$ find -type f -printf '%u\n' | sort | uniq -c
180034 shs
7500 jdoe
If your file system also records the last access date, it can be very useful to show that files haven't been accessed in, say, more than two years. This would give your reviewers an important insight into the value of those files. The last-access parameter can be used like this (%A+ is preferable to %a, since its year-month-day format sorts chronologically):
$ find -type f -printf '%A+ %p\n' | sort | head -n 1
2006-12-15+03:00:30 ./statreport
Of course, if the most recently accessed file is also in the deep dark past, that's likely to get even more of a reaction.
$ find -type f -printf '%A+ %p\n' | sort | tail -n 1
2007-11-26+03:00:27 ./my-notes
Getting a sense of what's in a file system or large directory by creating a summary report showing the file date ranges, the largest files, the file owners, and the oldest and newest access times can help demonstrate how current and how important a file collection is, and can help its owners decide if it's time to clean up.
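The commands above can be stitched into a small report script. Here is one sketch (the script name and layout are our own; GNU find is assumed, since -printf is a GNU extension):

```shell
#!/bin/sh
# profile.sh (hypothetical name): summarize the directory given as $1.
# Relies on GNU find's -printf; BSD find would need different flags.
dir=${1:-.}
echo "File count : $(find "$dir" -type f | wc -l)"
echo "Oldest file: $(find "$dir" -type f -printf '%T+ %p\n' | sort | head -n 1)"
echo "Newest file: $(find "$dir" -type f -printf '%T+ %p\n' | sort | tail -n 1)"
echo "Largest    : $(find "$dir" -type f -printf '%s %p\n' | sort -n | tail -n 1)"
echo "Owners     :"
find "$dir" -type f -printf '%u\n' | sort | uniq -c
```

Run it as `sh profile.sh /some/directory` to get a one-screen profile of the collection.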
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html
作者:[Sandra Henry-Stocker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/

FSSlc translating
Linux Logging Basics
================================================================================
First we'll describe the basics of what Linux logs are, where to find them, and how they get created. If you already know this stuff, feel free to skip to the next section.
### Linux System Logs ###
Many valuable log files are automatically created for you by Linux. You can find them in your /var/log directory. Here is what this directory looks like on a typical Ubuntu system:
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Linux-system-log-terminal.png)
Some of the most important Linux system logs include:
- /var/log/syslog or /var/log/messages stores all global system activity data, including startup messages. Debian-based systems like Ubuntu store this in /var/log/syslog. RedHat-based systems like RHEL or CentOS store this in /var/log/messages.
- /var/log/auth.log or /var/log/secure stores logs from the Pluggable Authentication Module (pam) including successful logins, failed login attempts, and authentication methods. Ubuntu and Debian store authentication messages in /var/log/auth.log. RedHat and CentOS store this data in /var/log/secure.
- /var/log/kern stores kernel error and warning data, which is particularly helpful for troubleshooting custom kernels.
- /var/log/cron stores information about cron jobs. Use this data to verify that your cron jobs are running successfully.
Digital Ocean has a thorough [tutorial][1] on these files and how rsyslog creates them on common distributions like RedHat and CentOS.
Applications also write log files in this directory. For example, popular servers like Apache, Nginx, MySQL, and more can write log files here. Some of these log files are written by the application itself. Others are created through syslog (see below).
### What's Syslog? ###
How do Linux system log files get created? The answer is through the syslog daemon, which listens for log messages on the syslog socket /dev/log and then writes them to the appropriate log file.
The word “syslog” is an overloaded term and is often used as shorthand for one of these:
1. **Syslog daemon** — a program to receive, process, and send syslog messages. It can [send syslog remotely][2] to a centralized server or write it to a local file. Common examples include rsyslogd and syslog-ng. In this usage, people will often say “sending to syslog.”
1. **Syslog protocol** — a transport protocol specifying how logs can be sent over a network, and a data format definition for syslog messages (below). It's officially defined in [RFC-5424][3]. The standard ports are 514 for plaintext logs and 6514 for encrypted logs. In this usage, people will often say “sending over syslog.”
1. **Syslog messages** — log messages or events in the syslog format, which includes a header with several standard fields. In this usage, people will often say “sending syslog.”
Syslog messages or events include a header with several standard fields, making analysis and routing easier. They include the timestamp, the name of the application, the classification or location in the system where the message originates, and the priority of the issue.
Here is an example log message with the syslog header included. It's from the sshd daemon, which controls remote logins to the system. This message describes a failed login attempt:
<34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2
### Syslog Format and Fields ###
Each syslog message includes a header with fields. Fields are structured data that makes it easier to analyze and route the events. Here is the format we used to generate the above syslog example. You can match each value to a specific field name.
<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%\n
Below, you'll find descriptions of some of the most commonly used syslog fields when searching or troubleshooting issues.
#### Timestamp ####
The [timestamp][4] field (2003-10-11T22:14:15.003Z in the example) indicates the time and date that the message was generated on the system sending the message. That time can be different from when another system receives the message. The example timestamp breaks down like this:
- **2003-10-11** is the year, month, and day.
- **T** is a required element of the TIMESTAMP field, separating the date and the time.
- **22:14:15.003** is the 24-hour format of the time, including the number of milliseconds (**003**) into the next second.
- **Z** is an optional element, indicating UTC time. Instead of Z, the example could have included an offset, such as -08:00, which indicates that the time is offset from UTC by 8 hours, PST.
#### Hostname ####
The [hostname][5] field (server1.com in the example above) indicates the name of the host or system that sent the message.
#### App-Name ####
The [app-name][6] field (sshd in the example) indicates the name of the application that sent the message.
#### Priority ####
The priority field, or [pri][7] for short (<34> in the example above), tells you how urgent or severe the event is. It's a combination of two numerical fields: the facility and the severity. The severity ranges from the number 7 for debug events all the way down to 0, which is an emergency. The facility describes which process created the event. It ranges from 0 for kernel messages to 23 for local application use.
Pri can be output in two ways. The first is as a single number called prival, calculated as the facility field value multiplied by 8, plus the severity field value: prival = facility * 8 + severity. The second is pri-text, which is output in the string format “facility.severity.” The latter format can often be easier to read and search, but takes up more storage space.
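The arithmetic is easy to check in a shell. For the <34> in the example message above, dividing by 8 recovers the facility and the remainder is the severity (the variable names are our own):

```shell
#!/bin/sh
# Decode a syslog PRI value: prival = facility * 8 + severity.
pri=34
facility=$((pri / 8))   # 34 / 8 = 4 (security/authorization messages)
severity=$((pri % 8))   # 34 % 8 = 2 (critical)
echo "facility=$facility severity=$severity"
```

This prints `facility=4 severity=2`, i.e. a critical event from the security/authorization facility, which matches the failed-login message shown earlier.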
--------------------------------------------------------------------------------
via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/
作者:[Jason Skowronski][a1]
作者:[Amy Echeverri][a2]
作者:[Sadequl Hussain][a3]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a1]:https://www.linkedin.com/in/jasonskowronski
[a2]:https://www.linkedin.com/in/amyecheverri
[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
[1]:https://www.digitalocean.com/community/tutorials/how-to-view-and-configure-linux-logs-on-ubuntu-and-centos
[2]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.y2e9tdfk1cdb
[3]:https://tools.ietf.org/html/rfc5424
[4]:https://tools.ietf.org/html/rfc5424#section-6.2.3
[5]:https://tools.ietf.org/html/rfc5424#section-6.2.4
[6]:https://tools.ietf.org/html/rfc5424#section-6.2.5
[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1

Managing Linux Logs
================================================================================
A key best practice for logging is to centralize or aggregate your logs in one place, especially if you have multiple servers or tiers in your architecture. We'll tell you why this is a good idea and give tips on how to do it easily.
### Benefits of Centralizing Logs ###
It can be cumbersome to look at individual log files if you have many servers. Modern web sites and services often include multiple tiers of servers, distributed with load balancers, and more. It'd take a long time to hunt down the right file, and even longer to correlate problems across servers. There's nothing more frustrating than finding that the information you are looking for hasn't been captured, or that the log file that could have held the answer was lost after a restart.
Centralizing your logs makes them faster to search, which can help you solve production issues faster. You don't have to guess which server had the issue because all the logs are in one place. Additionally, you can use more powerful tools to analyze them, including log management solutions. These solutions can [transform plain text logs][1] into fields that can be easily searched and analyzed.
Centralizing your logs also makes them easier to manage:
- They are safer from accidental or intentional loss when they are backed up and archived in a separate location. If your servers go down or are unresponsive, you can use the centralized logs to debug the problem.
- You don't have to worry about ssh or inefficient grep commands requiring more resources on troubled systems.
- You don't have to worry about full disks, which can crash your servers.
- You can keep your production servers secure without giving your entire team access just to look at logs. It's much safer to give your team access to logs from the central location.
With centralized log management, you still must deal with the risk of being unable to transfer logs to the centralized location due to poor network connectivity, or of using up a lot of network bandwidth. We'll discuss how to intelligently address these issues in the sections below.
### Popular Tools for Centralizing Logs ###
The most common way of centralizing logs on Linux is by using syslog daemons or agents. The syslog daemon supports local log collection, then transports logs through the syslog protocol to a central server. There are several popular daemons that you can use to centralize your log files:
- [rsyslog][2] is a light-weight daemon installed on most common Linux distributions.
- [syslog-ng][3] is the second most popular syslog daemon for Linux.
- [logstash][4] is a heavier-weight agent that can do more advanced processing and parsing.
- [fluentd][5] is another agent with advanced processing capabilities.
Rsyslog is the most popular daemon for centralizing your log data because it's installed by default in most common distributions of Linux. You don't need to download it or install it, and it's lightweight so it won't take up much of your system resources.
If you need more advanced filtering or custom parsing capabilities, Logstash is the next most popular choice if you don't mind the extra system footprint.
### Configure Rsyslog.conf ###
Since rsyslog is the most widely used syslog daemon, we'll show how to configure it to centralize logs. The global configuration file is located at /etc/rsyslog.conf. It loads modules, sets global directives, and has an include for application-specific files located in the directory /etc/rsyslog.d/. This directory contains /etc/rsyslog.d/50-default.conf, which instructs rsyslog to write the system logs to file. You can read more about the configuration files in the [rsyslog documentation][6].
The configuration language for rsyslog is [RainerScript][7]. You set up specific inputs for logs as well as actions to output them to another destination. Rsyslog already configures standard defaults for syslog input, so you usually just need to add an output to your log server. Here is an example configuration for rsyslog to output logs to an external server. In this example, **BEBOP** is the hostname of the server, so you should replace it with your own server name.
action(type="omfwd" protocol="tcp" target="BEBOP" port="514")
You could send your logs to a log server with ample storage to keep a copy for search, backup, and analysis. If you're storing logs in the file system, then you should set up [log rotation][8] to keep your disk from getting full.
Alternatively, you can send these logs to a log management solution. If your solution is installed locally you can send it to your local host and port as specified in your system documentation. If you use a cloud-based provider, you will send them to the hostname and port specified by your provider.
### Log Directories ###
You can centralize all the files in a directory or matching a wildcard pattern. The nxlog and syslog-ng daemons support both directories and wildcards (*).
Common versions of rsyslog can't monitor directories directly. As a workaround, you can set up a cron job to monitor the directory for new files, then configure rsyslog to send those files to a destination, such as your log management system. As an example, the log management vendor Loggly has an open source version of a [script to monitor directories][9].
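For individual files (as opposed to whole directories), rsyslog's imfile module can tail a known path. A sketch in the legacy directive style used elsewhere in this guide; the file path, tag, and state-file name are made-up examples:

```
$ModLoad imfile                         # load the file-input module
$InputFileName /var/log/myapp/app.log   # hypothetical application log
$InputFileTag myapp:                    # tag prepended to each message
$InputFileStateFile stat-myapp          # tracks the last read position
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor                    # activate the monitor
```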
### Which Protocol: UDP, TCP, or RELP? ###
There are three main protocols that you can choose from when you transmit data over the Internet. The most common is UDP for your local network and TCP for the Internet. If you cannot lose logs, then use the more advanced RELP protocol.
[UDP][10] sends a datagram packet, which is a single packet of information. It's an outbound-only protocol, so it doesn't send you an acknowledgement of receipt (ACK). It makes only one attempt to send the packet. UDP can be used to smartly degrade or drop logs when the network gets congested. It's most commonly used on reliable networks like localhost.
[TCP][11] sends streaming information in multiple packets and returns an ACK. TCP makes multiple attempts to send the packet, but is limited by the size of the [TCP buffer][12]. This is the most common protocol for sending logs over the Internet.
[RELP][13] is the most reliable of these three protocols but was created for rsyslog and has less industry adoption. It acknowledges receipt of data in the application layer and will resend if there is an error. Make sure your destination also supports this protocol.
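In rsyslog's legacy selector syntax, the protocol is chosen by the prefix on the forwarding target. A sketch (the host names and ports are placeholders):

```
*.*  @logs.example.com:514       # single @  sends over UDP
*.*  @@logs.example.com:514      # double @@ sends over TCP
$ModLoad omrelp                  # RELP requires its own output module
*.*  :omrelp:logs.example.com:2514
```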
### Reliably Send with Disk Assisted Queues ###
If rsyslog encounters a problem when storing logs, such as an unavailable network connection, it can queue the logs until the connection is restored. The queued logs are stored in memory by default. However, memory is limited and if the problem persists, the logs can exceed memory capacity.
**Warning: You can lose data if you store logs only in memory.**
Rsyslog can queue your logs to disk when memory is full. [Disk-assisted queues][14] make transport of logs more reliable. Here is an example of how to configure rsyslog with a disk-assisted queue:
$WorkDirectory /var/spool/rsyslog # where to place spool files
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
$ActionQueueType LinkedList # run asynchronously
$ActionResumeRetryCount -1 # infinite retries if host is down
### Encrypt Logs Using TLS ###
When the security and privacy of your data is a concern, you should consider encrypting your logs. Sniffers and middlemen could read your log data if you transmit it over the Internet in clear text. You should encrypt your logs if they contain private information, sensitive identification data, or government-regulated data. The rsyslog daemon can encrypt your logs using the TLS protocol and keep your data safer.
To set up TLS encryption, you need to do the following tasks:
1. Generate a [certificate authority][15] (CA). There are sample certificates in /contrib/gnutls, which are good only for testing, but you need to create your own for production. If you're using a log management service, it will have one ready for you.
1. Generate a [digital certificate][16] for your server to enable SSL operation, or use one from your log management service provider.
1. Configure your rsyslog daemon to send TLS-encrypted data to your log management system.
Here's an example rsyslog configuration with TLS encryption. Replace CERT and DOMAIN_NAME with your own server settings.
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com
### Best Practices for Application Logging ###
In addition to the logs that Linux creates by default, it's also a good idea to centralize logs from important applications. Almost all Linux-based server-class applications write their status information in separate, dedicated log files. This includes database products like PostgreSQL or MySQL, web servers like Nginx or Apache, firewalls, print and file sharing services, directory and DNS servers, and so on.
The first thing administrators do after installing an application is to configure it. Linux applications typically have a .conf file somewhere within the /etc directory. It can be somewhere else too, but that's the first place where people look for configuration files.
Depending on how complex or large the application is, the number of settable parameters can range from a few to hundreds. As mentioned before, most applications write their status to some sort of log file, and the configuration file is where the log settings, among other things, are defined.
If you're not sure where it is, you can use the locate command to find it:
[root@localhost ~]# locate postgresql.conf
/usr/pgsql-9.4/share/postgresql.conf.sample
/var/lib/pgsql/9.4/data/postgresql.conf
#### Set a Standard Location for Log Files ####
Linux systems typically save their log files under the /var/log directory. This works fine, but check whether the application saves its logs under a specific subdirectory of /var/log. If it does, great; if not, you may want to create a dedicated directory for the app under /var/log. Why? Because other applications save their log files under /var/log too, and if your app saves more than one log file (perhaps one every day, or one after each service restart) it may be a bit difficult to trawl through a large directory to find the file you want.
If you have more than one instance of the application running in your network, this approach also comes in handy. Think about a situation where you have a dozen web servers running in your network. When troubleshooting any one of the boxes, you would know exactly where to go.
#### Use A Standard Filename ####
Use a standard filename for the latest logs from your application. This makes it easy because you can monitor and tail a single file. Many applications add some sort of date or time stamp to the name, which makes it much more difficult to find the latest file and to set up file monitoring with rsyslog. A better approach is to add timestamps to older log files using logrotate, which makes them easier to archive and search historically.
#### Append the Log File ####
Is the log file going to be overwritten after each application restart? If so, we recommend turning that off. After each restart the app should append to the log file. That way, you can always go back to the last log line before the restart.
#### Appending vs. Rotation of Log File ####
Even if the application writes a new log file after each restart, how is it saving entries to the current log file? Is it appending to one single, massive file? Linux systems are not known for frequent reboots or crashes: applications can run for very long periods without even blinking, but that can also make the log file very large. If you are trying to analyze the root cause of a connection failure that happened last week, you could easily be searching through tens of thousands of lines.
We recommend you configure the application to rotate its log file once every day, say at midnight.
Why? For starters, it becomes manageable. It's much easier to find a file name with a specific date-time pattern than to search through one file for that date's entries. The files are also much smaller: you won't think vi has frozen when you open a log file. Second, if you are sending the log file over the wire to a different location (perhaps a nightly backup job copying it to a centralized log server), it doesn't chew up your network's bandwidth. Third and finally, it helps with your log retention: if you want to cull old log entries, it's easier to delete files older than a particular date than to have an application parse one single large file.
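A daily rotation policy like the one described can be sketched with logrotate. The application directory and the retention count here are examples, not recommendations:

```
# /etc/logrotate.d/myapp (hypothetical application)
/var/log/myapp/*.log {
    daily             # rotate once a day
    rotate 30         # keep 30 rotated files
    dateext           # stamp rotated files with the date
    compress          # gzip old logs...
    delaycompress     # ...but leave the newest rotation uncompressed
    missingok         # don't complain if the log is absent
    notifempty        # skip rotation for empty files
}
```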
#### Retention of Log File ####
How long do you keep a log file? That definitely comes down to business requirements. You could be asked to keep one week's worth of logging information, or it may be a regulatory requirement to keep ten years' worth of data. Whatever it is, logs need to leave the server at one time or another.
In our opinion, unless otherwise required, keep at least a month's worth of log files online, plus copy them to a secondary location such as a logging server. Anything older than that can be offloaded to separate media. For example, if you are on AWS, your older logs can be copied to Glacier.
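Culling by age is a one-liner with find. A cautious sketch (the directory is hypothetical; run with -print first and only switch to -delete once the listing looks right):

```shell
# List rotated logs older than 30 days under a hypothetical directory.
find /var/log/myapp -type f -name '*.log*' -mtime +30 -print
# Once verified, the same command with -delete removes them:
# find /var/log/myapp -type f -name '*.log*' -mtime +30 -delete
```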
#### Separate Disk Location for Log Files ####
Linux best practice usually suggests mounting the /var directory on a separate file system because of the high number of I/Os associated with it. We would recommend mounting the /var/log directory on a separate disk system. This can avoid I/O contention with the main application's data. Also, if the number of log files becomes too large, or a single log file becomes too big, it doesn't fill up the entire disk.
#### Log Entries ####
What information should be captured in each log entry?
That depends on what you want to use the log for. Do you want to use it only for troubleshooting purposes, or do you want to capture everything that's happening? Is it a legal requirement to capture what each user is running or viewing?
If you are using logs for troubleshooting purposes, save only errors, warnings, and fatal messages. There's no reason to capture debug messages, for example. The app may log debug messages by default, or another administrator might have turned them on for a troubleshooting exercise, but you need to turn them off because they can definitely fill up the space quickly. At a minimum, capture the date and time, client application name, source IP or client host name, the action performed, and the message itself.
#### A Practical Example for PostgreSQL ####
As an example, let's look at the main configuration file for a vanilla PostgreSQL 9.4 installation. It's called postgresql.conf and, contrary to other config files in Linux systems, it's not saved under the /etc directory. In the code snippet below, we can see it's in the /var/lib/pgsql directory of our CentOS 7 server:
[root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf
...
#------------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr'
# Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on
# Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
log_directory = 'pg_log'
# directory where log files are written,
# can be absolute or relative to PGDATA
log_filename = 'postgresql-%a.log' # log file name pattern,
# can include strftime() escapes
# log_file_mode = 0600 .
# creation mode for log files,
# begin with 0 to use octal notation
log_truncate_on_rotation = on # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
log_rotation_age = 1d
# Automatic rotation of logfiles will happen after that time. 0 disables.
log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
# This is only relevant when logging to eventlog (win32):
#event_source = 'PostgreSQL'
# - When to Log -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default
# terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '< %m >' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_statement = 'none' # none, ddl, mod, all
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'Australia/ACT'
Although most parameters are commented out, they assume default values. We can see the log file directory is pg_log (log_directory parameter), the file names should start with postgresql (log_filename parameter), the files are rotated once every day (log_rotation_age parameter) and the log entries start with a timestamp (log_line_prefix parameter). Of particular interest is the log_line_prefix parameter: there is a whole gamut of information you can include there.
Looking under /var/lib/pgsql/9.4/data/pg_log directory shows us these files:
[root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log
total 20
-rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log
-rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log
-rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log
-rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log
-rw-------. 1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log
So the log files only have the weekday stamped in the file name. We can change that by configuring the log_filename parameter in postgresql.conf.
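For instance, a date-stamped file name with daily rotation might look like this in postgresql.conf (a sketch; pick a pattern that matches your retention needs — log_filename accepts the strftime() escapes listed above):

```
# one log file per calendar day, e.g. postgresql-2015-02-27.log
log_filename = 'postgresql-%Y-%m-%d.log'
log_rotation_age = 1d
```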
Looking inside one log file shows that its entries start with the date and time only:
[root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log
...
< 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request
< 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions
< 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down
< 2015-02-27 01:21:27.036 EST >LOG: shutting down
< 2015-02-27 01:21:27.211 EST >LOG: database system is shut down
### Centralizing Application Logs ###
#### Log File Monitoring with Imfile ####
Traditionally, the most common way for applications to log their data is with files. Files are easy to search on a single machine but don't scale well with more servers. You can set up log file monitoring and send the events to a centralized server when new logs are appended. Create a new configuration file in /etc/rsyslog.d/ and add a file input like this:
$ModLoad imfile
$InputFilePollInterval 10
$PrivDropToGroup adm
----------
# Input for FILE1
$InputFileName /FILE1
$InputFileTag APPNAME1
$InputFileStateFile stat-APPNAME1 #this must be unique for each file being polled
$InputFileSeverity info
$InputFilePersistStateInterval 20000
$InputRunFileMonitor
Replace FILE1 and APPNAME1 with your own file and application names. Rsyslog will send it to the outputs you have configured.
#### Local Socket Logs with Imuxsock ####
A socket is similar to a UNIX file handle except that the socket is read into memory by your syslog daemon and then sent to the destination. No file needs to be written. As an example, the logger command sends its logs to this UNIX socket.
This approach makes efficient use of system resources if your server is constrained by disk I/O or you have no need for local file logs. The disadvantage of this approach is that the socket has a limited queue size. If your syslog daemon goes down or can't keep up, then you could lose log data.
The rsyslog daemon will read from the /dev/log socket by default, but you can specifically enable it with the [imuxsock input module][17] using the following command:
$ModLoad imuxsock
#### UDP Logs with Imudp ####
Some applications output log data over UDP, which is the standard syslog transport protocol when transferring log files over a network or from your localhost. Your syslog daemon receives these logs and can process them or transmit them in a different format. Alternatively, you can send the logs to your log server or to a log management solution.
Use the following command to configure rsyslog to accept syslog data over UDP on the standard port 514:
$ModLoad imudp
----------
$UDPServerRun 514
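To verify that the daemon is listening, you can emit a test message over UDP yourself. A minimal sketch in Python (the tag and message are made up; in the RFC 3164 header, priority = facility × 8 + severity):

```python
import socket

def send_syslog(message, host="127.0.0.1", port=514,
                facility=1, severity=6):  # facility 1 = user, severity 6 = info
    """Send a minimal RFC 3164-style syslog datagram over UDP."""
    pri = facility * 8 + severity          # e.g. 1*8 + 6 = 14
    datagram = f"<{pri}>testapp: {message}".encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(datagram, (host, port))
    sock.close()
    return datagram

# send_syslog("hello over UDP")  # should appear in /var/log/syslog if imudp is on
```

Because UDP is fire-and-forget, the send succeeds even if nothing is listening, which is exactly the delivery caveat of plain UDP syslog.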
### Manage Logs with Logrotate ###
Log rotation is a process that archives log files automatically when they reach a specified age. Without intervention, log files keep growing, using up disk space. Eventually they will crash your machine.
The logrotate utility rotates your logs as they age, freeing up space. The new log file retains the original filename, while the old log file is renamed with a number appended to its end. Each time the logrotate utility runs, a new file is created and the existing file is renamed in sequence. You determine the threshold at which old files are deleted or archived.
When logrotate copies a file, the new file has a new inode, which can interfere with rsyslog's ability to monitor the new file. You can alleviate this issue by adding the copytruncate parameter to your logrotate configuration. This parameter copies the existing log file's contents to a new file and truncates these contents from the existing file. The inode never changes because the log file itself remains the same; its contents end up in a new file.
The logrotate utility uses the main configuration file at /etc/logrotate.conf and application-specific settings in the directory /etc/logrotate.d/. DigitalOcean has a detailed [tutorial on logrotate][18].
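A sketch of an application-specific logrotate file using copytruncate (the path and retention values are assumptions; adapt them to your app):

```
# /etc/logrotate.d/myapp (hypothetical)
/var/log/myapp/*.log {
    daily
    rotate 7          # keep one week of archives
    compress
    missingok
    notifempty
    copytruncate      # keep the same inode so rsyslog's imfile keeps working
}
```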
### Manage Configuration on Many Servers ###
When you have just a few servers, you can manually configure logging on them. Once you have a few dozen or more servers, you can take advantage of tools that make this easier and more scalable. At a basic level, all of these copy your rsyslog configuration to each server, and then restart rsyslog so the changes take effect.
#### Pssh ####
This tool lets you run an ssh command on several servers in parallel. Use a pssh deployment for only a small number of servers. If one of your servers fails, then you have to ssh into the failed server and do the deployment manually. If you have several failed servers, then the manual deployment on them can take a long time.
#### Puppet/Chef ####
Puppet and Chef are two different tools that can automatically configure all of the servers in your network to a specified standard. Their reporting tools let you know about failures and can resync periodically. Both Puppet and Chef have enthusiastic supporters. If you aren't sure which one is more suitable for your deployment's configuration management, you might appreciate [InfoWorld's comparison of the two tools][19].
Some vendors also offer modules or recipes for configuring rsyslog. Here is an example from Loggly's Puppet module. It offers a class for rsyslog to which you can add an identifying token:
node 'my_server_node.example.net' {
# Send syslog events to Loggly
class { 'loggly::rsyslog':
customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000',
}
}
#### Docker ####
Docker uses containers to run applications independent of the underlying server. Everything runs from inside a container, which you can think of as a unit of functionality. ZDNet has an in-depth article about [using Docker][20] in your data center.
There are several ways to log from Docker containers including linking to a logging container, logging to a shared volume, or adding a syslog agent directly inside the container. One of the most popular logging containers is called [logspout][21].
#### Vendor Scripts or Agents ####
Most log management solutions offer scripts or agents to make sending data from one or more servers relatively easy. Heavyweight agents can use up extra system resources. Some vendors like Loggly offer configuration scripts to make using existing syslog daemons easier. Here is an example [script][22] from Loggly which can run on any number of servers.
--------------------------------------------------------------------------------
via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/
Author: [Jason Skowronski][a1]
Author: [Amy Echeverri][a2]
Author: [Sadequl Hussain][a3]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a1]:https://www.linkedin.com/in/jasonskowronski
[a2]:https://www.linkedin.com/in/amyecheverri
[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl
[2]:http://www.rsyslog.com/
[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system
[4]:http://logstash.net/
[5]:http://www.fluentd.org/
[6]:http://www.rsyslog.com/doc/rsyslog_conf.html
[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html
[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87
[9]:https://www.loggly.com/docs/file-monitoring/
[10]:http://www.networksorcery.com/enp/protocol/udp.htm
[11]:http://www.networksorcery.com/enp/protocol/tcp.htm
[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html
[13]:http://www.rsyslog.com/doc/relp.html
[14]:http://www.rsyslog.com/doc/queues.html
[15]:http://www.rsyslog.com/doc/tls_cert_ca.html
[16]:http://www.rsyslog.com/doc/tls_cert_machine.html
[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html
[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10
[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html
[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/
[21]:https://github.com/progrium/logspout
[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/
translation by strugglingyouth
Troubleshooting with Linux Logs
================================================================================
Troubleshooting is the main reason people create logs. Often you'll want to diagnose why a problem happened with your Linux system or application. An error message or a sequence of events can give you clues to the root cause, indicate how to reproduce the issue, and point out ways to fix it. Here are a few use cases for things you might want to troubleshoot in your logs.
### Cause of Login Failures ###
If you want to check whether your system is secure, you can check your authentication logs for failed login attempts and unfamiliar successes. Authentication failures occur when someone passes incorrect or otherwise invalid login credentials, often to ssh for remote access or su for local access to another user's permissions. These are logged by the [pluggable authentication module][1], or pam for short. Look in your logs for strings like "Failed password" and "user unknown". Successful authentication records include strings like "Accepted password" and "session opened".
Failure Examples:
pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2
Failed password for invalid user hoover from 10.0.2.2 port 4791 ssh2
pam_unix(sshd:auth): check pass; user unknown
PAM service(sshd) ignoring max retries; 6 > 3
Success Examples:
Accepted password for hoover from 10.0.2.2 port 4792 ssh2
pam_unix(sshd:session): session opened for user hoover by (uid=0)
pam_unix(sshd:session): session closed for user hoover
You can use grep to find which user accounts have the most failed logins. These are the accounts that potential attackers are trying and failing to access. This example is for an Ubuntu system.
$ grep "invalid user" /var/log/auth.log | cut -d ' ' -f 10 | sort | uniq -c | sort -nr
23 oracle
18 postgres
17 nagios
10 zabbix
6 test
You'll need to write a different command for each application and message because there is no standard format. Log management systems that automatically parse logs will effectively normalize them and help you extract key fields like the username.
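The same extraction can be scripted. A sketch that mirrors the grep pipeline above (the regular expression is specific to sshd's "invalid user" message format):

```python
import re
from collections import Counter

# Matches sshd's "invalid user NAME from IP" message.
INVALID_USER = re.compile(r"invalid user (\S+) from")

def top_failed_users(lines):
    """Count failed-login usernames, most frequent first."""
    counts = Counter()
    for line in lines:
        m = INVALID_USER.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common()

sample = [
    "Failed password for invalid user oracle from 10.0.2.2 port 4791 ssh2",
    "Failed password for invalid user oracle from 10.0.2.2 port 4793 ssh2",
    "Failed password for invalid user nagios from 10.0.2.2 port 4795 ssh2",
]
print(top_failed_users(sample))  # [('oracle', 2), ('nagios', 1)]
```

Feeding it the lines of /var/log/auth.log would reproduce the counts the grep pipeline produces.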
Log management systems can extract the usernames from your Linux logs using automated parsing. This lets you see an overview of the users and filter on them with a single click. In this example, we can see that the root user logged in over 2,700 times because we are filtering the logs to show login attempts only for the root user.
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.36-AM.png)
Log management systems also let you view graphs over time to spot unusual trends. If someone had one or two failed logins within a few minutes, it might be that a real user forgot his or her password. However, if there are hundreds of failed logins or they are all different usernames, it's more likely that someone is trying to attack the system. Here you can see that on March 12, someone tried to log in as test and nagios several hundred times. This is clearly not a legitimate use of the system.
![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.12.18-AM.png)
### Cause of Reboots ###
Sometimes a server can stop due to a system crash or reboot. How do you know when it happened and who did it?
#### Shutdown Command ####
If someone ran the shutdown command manually, you can see it in the auth log file. Here you can see that someone remotely logged in from the IP 50.0.134.125 as the user ubuntu and then shut the system down.
Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: Accepted publickey for ubuntu from 50.0.134.125 port 52538 ssh
Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0)
Mar 19 18:37:09 ip-172-31-11-231 sudo: ubuntu : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/sbin/shutdown -r now
#### Kernel Initializing ####
If you want to see when the server restarted regardless of the reason (including crashes), you can search the logs from the kernel initializing. You'd search for the facility kernel messages and for "Initializing cpu".
Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpuset
Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpu
Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Linux version 3.8.0-44-generic (buildd@tipua) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #66~precise1-Ubuntu SMP Tue Jul 15 04:01:04 UTC 2014 (Ubuntu 3.8.0-44.66~precise1-generic 3.8.13.25)
### Detect Memory Problems ###
There are lots of reasons a server might crash, but one common cause is running out of memory.
When your system is low on memory, processes are killed, typically in the order of which ones will release the most resources. The error occurs when your system is using all of its memory and a new or existing process attempts to access additional memory. Look in your log files for strings like Out of Memory or for kernel warnings like to kill. These strings indicate that your system intentionally killed the process or application rather than allowing the process to crash.
Examples:
[33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
[29923450.995084] select 5230 (docker), adj 0, size 708, to kill
You can find these logs using a tool like grep. This example is for Ubuntu:
$ grep "Out of memory" /var/log/syslog
[33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
Keep in mind that grep itself uses memory, so you might cause an out-of-memory error just by running grep. This is another reason it's a fabulous idea to centralize your logs!
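If you need more than a count, the kill line can be parsed into fields. A sketch (the message layout matches the example above):

```python
import re

# Parses "Out of memory: Kill process PID (NAME) score SCORE ..."
OOM_KILL = re.compile(
    r"Out of memory: Kill process (?P<pid>\d+) \((?P<name>[^)]+)\) score (?P<score>\d+)"
)

def parse_oom(line):
    """Return (pid, process name, oom score), or None if the line doesn't match."""
    m = OOM_KILL.search(line)
    if not m:
        return None
    return int(m.group("pid")), m.group("name"), int(m.group("score"))

line = "[33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child"
print(parse_oom(line))  # (6230, 'firefox', 53)
```

The extracted process name and score make it easy to see which application the kernel sacrificed and why it was chosen.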
### Log Cron Job Errors ###
The cron daemon is a scheduler that runs processes at specified dates and times. If the process fails to run or fails to finish, then a cron error appears in your log files. You can find these files in /var/log/cron, /var/log/messages, and /var/log/syslog depending on your distribution. There are many reasons a cron job can fail. Usually the problems lie with the process rather than the cron daemon itself.
By default, cron sends job output through email using Postfix. Here is a log showing that an email was sent. Unfortunately, you cannot see the contents of the message here.
Mar 13 16:35:01 PSQ110 postfix/pickup[15158]: C3EDC5800B4: uid=1001 from=<hoover>
Mar 13 16:35:01 PSQ110 postfix/cleanup[15727]: C3EDC5800B4: message-id=<20150310110501.C3EDC5800B4@PSQ110>
Mar 13 16:35:01 PSQ110 postfix/qmgr[15159]: C3EDC5800B4: from=<hoover@loggly.com>, size=607, nrcpt=1 (queue active)
Mar 13 16:35:05 PSQ110 postfix/smtp[15729]: C3EDC5800B4: to=<hoover@loggly.com>, relay=gmail-smtp-in.l.google.com[74.125.130.26]:25, delay=4.1, delays=0.26/0/2.2/1.7, dsn=2.0.0, status=sent (250 2.0.0 OK 1425985505 f16si501651pdj.5 - gsmtp)
You should consider logging the cron standard output to help debug problems. Here is how you can redirect your cron standard output to syslog using the logger command. Replace the echo command with your own script and helloCron with whatever you want to set the appName to.
*/5 * * * * echo Hello World 2>&1 | /usr/bin/logger -t helloCron
This creates the following log entries:
Apr 28 22:20:01 ip-172-31-11-231 CRON[15296]: (ubuntu) CMD (echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron)
Apr 28 22:20:01 ip-172-31-11-231 helloCron: Hello World!
Each cron job will log differently based on the specific type of job and how it outputs data. Hopefully there are clues to the root cause of problems within the logs, or you can add additional logging as needed.
--------------------------------------------------------------------------------
via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-logs/
Author: [Jason Skowronski][a1]
Author: [Amy Echeverri][a2]
Author: [Sadequl Hussain][a3]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a1]:https://www.linkedin.com/in/jasonskowronski
[a2]:https://www.linkedin.com/in/amyecheverri
[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
[1]:http://linux.die.net/man/8/pam.d
7 Communities Driving Open Source Development
================================================================================
Not long ago, the open source model was still viewed with suspicion by established industry vendors as a plaything for rebellious kids. Today, open initiatives and foundations are flourishing with the backing of a long list of vendors who see the open source model as a key to innovation.
![](http://images.techhive.com/images/article/2015/01/0_opensource-title-100539095-orig.jpg)
### Open Development of Technology Drives Innovation ###
Over the past two decades, the open development of technology has come to be seen as a key driver of innovation. Even companies that once saw open source as a threat have come around; Microsoft, for example, is now active in a range of open source initiatives. So far, most open development has centered on software, but even that is changing as communities begin to coalesce around open hardware initiatives. Here are seven organizations that are successfully promoting and developing open technologies, in hardware as well as software.
### OpenPOWER Foundation ###
![](http://images.techhive.com/images/article/2015/01/1_openpower-100539100-orig.jpg)
The [OpenPOWER Foundation][1] was founded in 2013 by IBM, Google, Mellanox, Tyan and NVIDIA to drive the development of open, collaborative hardware in the same spirit in which open source software development has found fertile ground over the past two decades.
IBM seeded the foundation by opening up its Power-based hardware and software technologies and offering licenses to use Power IP in independent hardware products. Today, more than 70 members collaborate to provide customized open servers, components and software for Linux-based data centers.
In April, OpenPOWER unveiled a technology roadmap based on new servers with POWER8 processors capable of analyzing data 50 times faster than the latest x86-based systems. July brought a firmware stack released by IBM and Google, and October saw NVIDIA GPUs bring accelerated computing to POWER8 systems, along with the first OpenPOWER reference server from Tyan.
### The Linux Foundation ###
![](http://images.techhive.com/images/article/2015/01/2_the-linux-foundation-100539101-orig.jpg)
Founded in 2000, the [Linux Foundation][2] now oversees the largest collaborative open source development effort in history, with more than 180 corporate members alongside many individual and student members. It sponsors the work of key Linux developers and promotes, protects and advances the Linux operating system and collaborative software development.
Its most successful collaborative projects include the Code Aurora Forum (a consortium of companies serving the mobile wireless industry), MeeGo (a project to build a Linux-kernel-based operating system for mobile devices and in-vehicle infotainment (IVI) systems) and the Open Virtualization Alliance, which promotes the adoption of free and open source software virtualization solutions.
### Open Virtualization Alliance ###
![](http://images.techhive.com/images/article/2015/01/3_open-virtualization-alliance-100539102-orig.jpg)
The [Open Virtualization Alliance (OVA)][3] exists to foster the adoption of free and open source software virtualization solutions such as KVM, by providing use cases and supporting the development of interoperable common interfaces and APIs. KVM turns the Linux kernel into a hypervisor.
Today, KVM is the hypervisor most commonly used with OpenStack.
### The OpenStack Foundation ###
![](http://images.techhive.com/images/article/2015/01/4_the-openstack-foundation-100539096-orig.jpg)
Originally launched as an infrastructure-as-a-service (IaaS) product by NASA and Rackspace in 2010, the [OpenStack Foundation][4] has become home to one of the largest open source projects around. It counts more than 200 member companies, including AT&T, AMD, Avaya, Canonical, Cisco, Dell and HP.
Operating on a roughly six-month release cycle, the foundation's OpenStack projects are developed to control or provision pools of processing, storage and networking resources throughout a data center, managed through a web-based dashboard, command-line tools or a RESTful API. So far, the foundation-backed collaboration has spawned a series of OpenStack components, including OpenStack Compute (a cloud computing fabric controller that is the main part of an IaaS system), OpenStack Networking (a system for managing networks and IP addresses) and OpenStack Object Storage (a scalable, redundant storage system).
### OpenDaylight ###
![](http://images.techhive.com/images/article/2015/01/5_opendaylight-100539097-orig.jpg)
Another collaborative project hosted by the Linux Foundation, [OpenDaylight][5] is a joint initiative established in April 2013 by industry vendors such as Dell, HP, Oracle and Avaya. Its mission is to build a community-led, open, industry-supported framework of code and blueprints for software-defined networking (SDN). The idea is to provide a directly deployable, fully functional SDN platform that requires no other components, while vendors can offer add-ons and enhancements.
### Apache Software Foundation ###
![](http://images.techhive.com/images/article/2015/01/6_apache-software-foundation-100539098-orig.jpg)
The [Apache Software Foundation (ASF)][6] is home to nearly 150 top-level projects, ranging from open source enterprise automation software to the whole ecosystem of distributed computing around Apache Hadoop. These projects deliver enterprise-grade, freely available software products, and the Apache License is intended to make it easy for both commercial and individual users to deploy Apache products.
The ASF was incorporated in 1999 as a membership-based, non-profit corporation with meritocracy at its core: to become a member, you must first contribute actively to one or more of the foundation's collaborative projects.
### Open Compute Project ###
![](http://images.techhive.com/images/article/2015/01/7_open-compute-project-100539099-orig.jpg)
An outgrowth of Facebook's redesign of its Oregon data center, the [Open Compute Project][7] aims to develop open hardware solutions for data centers. OCP encompasses inexpensive, vanity-free servers, modular I/O storage for Open Rack (a rack standard designed to integrate racks into a data center's infrastructure) and a relatively "green" data center design.
OCP board members include representatives from Facebook, Intel, Goldman Sachs, Rackspace and Microsoft.
OCP recently announced a choice of two licenses: an Apache 2.0-like license that allows derivative works, and a more prescriptive license that encourages changes to be rolled back into the original software.
--------------------------------------------------------------------------------
via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities-driving-open-source-development.html
Author: [Thor Olavsrud][a]
Translator: [FSSlc](https://github.com/FSSlc)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.networkworld.com/author/Thor-Olavsrud/
[1]:http://openpowerfoundation.org/
[2]:http://www.linuxfoundation.org/
[3]:https://openvirtualizationalliance.org/
[4]:http://www.openstack.org/foundation/
[5]:http://www.opendaylight.org/
[6]:http://www.apache.org/
[7]:http://www.opencompute.org/
How to Run Ubuntu Snappy Core on a Raspberry Pi 2
================================================================================
The Internet of Things (IoT) era is upon us. In a few years we will be asking ourselves how we ever lived without it, just as we now wonder about the days before cell phones. Canonical is a contender in this fast-growing, but still wide-open, market. The company claims to be staking its bet on IoT, just as it has already done with "the cloud". At the end of January, Canonical launched a small operating system based on Ubuntu Core called [Ubuntu Snappy Core][1].
Snappy, a new packaging format meant to replace deb, is a frontend for updating the system that borrows the idea of atomic updates from CoreOS, Red Hat and others. As soon as the Raspberry Pi 2 hit the market, Canonical released Snappy Core for it. The first-generation Raspberry Pi could not run Ubuntu because it is based on ARMv6, while Ubuntu's ARM images require ARMv7. That has now changed: with an image for the RPI2, Canonical has seized the opportunity to make clear that Snappy is a system for the cloud, and especially for IoT.
Snappy also runs on other cloud platforms such as Amazon EC2, Microsoft Azure and Google Compute Engine, and can be virtualized on KVM, VirtualBox and Vagrant. Canonical has embraced heavyweights such as Microsoft, Google, Docker and OpenStack, and has also partnered with smaller projects. Besides startups such as Ninja Sphere and Erle Robotics, there are board makers such as Odroid, Banana Pro, Udoo, PCDuino, Parallella and Allwinner. Snappy Core is also expected to run on routers soon, which could help improve the situation where router makers rarely update their firmware.
Next, let's see how to get Snappy running on a Raspberry Pi 2.
The Snappy image for the Raspberry Pi 2 can be downloaded from the [Raspberry Pi website][2]. The uncompressed image must be [written to an SD card of at least 8 GB][3]. Although the base system is small, the atomic upgrade and rollback features eat up quite a bit of space. After booting the Raspberry Pi 2 with Snappy, you can log in with the default username and password (both "ubuntu").
![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg)
sudo is already configured and ready to use. For security reasons, you should change the username with:
$ sudo usermod -l <new name> <old name>
Alternatively, you can add a new user with `adduser`.
Because the RPI lacks a hardware clock, which Snappy does not know about, the system has a small bug: processing commands will throw lots of errors. Fortunately, this is easy to fix:
Use this command to check whether the bug affects you:
$ date
If the output is "Thu Jan 1 01:56:44 UTC 1970", you can correct it with:
$ sudo date --set="Sun Apr 04 17:43:26 UTC 2015"
substituting your actual date and time.
![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg)
Now you might want to check for available updates. Note that the usual commands:
$ sudo apt-get update && sudo apt-get dist-upgrade
will not get you far, because Snappy uses its own simplified, dpkg-based package management system. This is by design, since Snappy will run on many embedded appliances, and you want to keep everything as simple as possible.
Let's dive into the most crucial part and understand how applications work with Snappy. The SD card running Snappy has three partitions besides the boot partition. Two of them form a duplicated file system. Those two parallel file systems are permanently mounted read-only, and only one of them is active at any given time. The third partition is a partially writable file system where users store their data. With a system update, the partition labeled 'system-a' holds a complete file system, called a core, while the parallel one remains empty.
![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg)
If we run the following command:
$ sudo snappy update
the system will install the update as a whole on 'system-b', somewhat like flashing an image. You will then be told to reboot to activate the new core.
After the reboot, run the following command to check whether your system is up to date and which core is currently active:
$ sudo snappy versions -a
After an update-reboot cycle, you should see that the active core has changed.
Since we have not installed any software yet, the following command:
$ sudo snappy update ubuntu-core
would have been sufficient, and it is also the way to go if you only want to update a specific part of the OS. If anything goes wrong, you can roll back with:
$ sudo snappy rollback ubuntu-core
which will restore the system to its state before the update.
![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg)
Now for the software that makes Snappy useful. This is not the place to cover the basics of how to build software and add it to the Snappy store, but you can find out more on the #snappy IRC channel on Freenode, where plenty of people are involved. You can browse the store by pointing a browser to http://<ip-address>:4200, install software from the store, and then launch the apps by visiting http://webdm.local in your browser. Building software for Snappy is not hard, and there is [reference documentation][4] available. It is also easy to port DEB packages to the Snappy format.
![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg)
Although Ubuntu Snappy Core tempts us to explore the new Snappy package format and Canonical-style atomic updates, it is not yet very useful in production because of the limited number of available applications. But since setting up a Snappy environment is so easy, it seems like a good opportunity to learn something new.
--------------------------------------------------------------------------------
via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html
Author: [Ferdinand Thommes][a]
Translator: [译者ID](https://github.com/oska874)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://xmodulo.com/author/ferdinand
[1]:http://www.ubuntu.com/things
[2]:http://www.raspberrypi.org/downloads/
[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html
[4]:https://developer.ubuntu.com/en/snappy/
How to Access a Linux Server Behind NAT via a Reverse SSH Tunnel
================================================================================
You are running a Linux server at home, which sits behind a NAT router or a restrictive firewall. Now you want to SSH in to the home server while you are away. How would you set that up? SSH port forwarding is certainly an option. However, port forwarding can become tricky if you are dealing with multiple nested NAT environments. Besides, it can be interfered with under various ISP-specific conditions, such as restrictive ISP firewalls that block forwarded ports, or carrier-grade NAT that shares IPv4 addresses among users.
### What Is a Reverse SSH Tunnel? ###
An alternative to SSH port forwarding is a **reverse SSH tunnel**. The concept of a reverse SSH tunnel is very simple. For this, you need another host (a so-called "relay host") outside your restrictive home network, which you can SSH into from where you are. You could set up a relay host using a [VPS instance][1] with a public IP address. What you then do is establish a persistent SSH tunnel from the server in your home network to the public relay host. With that tunnel in place, you can connect "back" to the home server from the relay host (which is why it is called a "reverse" tunnel). No matter where you are, or how restrictive the NAT or firewall in your home network is, as long as you can reach the relay host, you can connect to your home server.
![](https://farm8.staticflickr.com/7742/17162647378_c7d9f10de8_b.jpg)
### Setting Up a Reverse SSH Tunnel on Linux ###
Let's see how to create and use a reverse SSH tunnel. We assume the following: we will set up a reverse SSH tunnel from the home server to the relay server, so that we can SSH into the home server via the relay server from a client computer. The public IP address of the **relay server** is 1.1.1.1.
On the home server, open an SSH connection to the relay server as follows:
homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1
Here, port 10022 is an arbitrary port number you choose. Just make sure the port is not used by any other program on the relay server.
The "-R 10022:localhost:22" option defines a reverse tunnel. It forwards traffic on port 10022 of the relay server to port 22 of the home server.
With the "-fN" option, SSH goes into the background once you have successfully authenticated with the SSH server. This is useful when you do not want to run any command on the remote SSH server and just want to forward ports, as in our case.
After running the command, you will be back at the command prompt of the home server.
Log in to the relay server and verify that 127.0.0.1:10022 is bound to sshd. If so, the reverse tunnel has been set up correctly.
relayserver~$ sudo netstat -nap | grep 10022
----------
tcp 0 0 127.0.0.1:10022 0.0.0.0:* LISTEN 8493/sshd
Now, from any other computer (a client computer), you can log in to the relay server and then reach the home server as follows:
relayserver~$ ssh -p 10022 homeserver_user@localhost
One thing to note is that the SSH login/password you type for localhost should be those of the home server, not of the relay server, since you are logging in to the home server via the tunnel's local endpoint. So do not enter the relay server's login/password. After a successful login, you will be on the home server.
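To avoid typing the two-step login every time, the hop can be scripted in the client's ~/.ssh/config. A sketch (the host aliases are made up; ProxyCommand with -W requires a reasonably recent OpenSSH client):

```
# ~/.ssh/config on the client computer
Host relay
    HostName 1.1.1.1
    User relayserver_user

Host home-via-relay
    HostName localhost
    Port 10022
    User homeserver_user
    # Open the connection through the relay host first
    ProxyCommand ssh relay -W %h:%p
```

With this in place, `ssh home-via-relay` logs you in to the home server in one step.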
### Connecting Directly to a NATed Server via a Reverse SSH Tunnel ###
The method above lets you reach the **home server** behind NAT, but you need to log in twice: first to the **relay server**, and then to the **home server**. This is because the endpoint of the SSH tunnel on the relay server is bound to the loopback address (127.0.0.1).
In fact, there is a way to reach the NATed home server directly with a single login to the relay server. For that, you need to let sshd on the relay server forward a port not only on the loopback address, but also on an external host address. This is achieved by specifying the **GatewayPorts** option for the sshd running on the relay server.
Open /etc/ssh/sshd_config on the **relay server** and add the following line:
relayserver~$ vi /etc/ssh/sshd_config
----------
GatewayPorts clientspecified
Restart sshd.
On Debian-based systems:
relayserver~$ sudo /etc/init.d/ssh restart
On Red Hat-based systems:
relayserver~$ sudo systemctl restart sshd
Now initiate a reverse SSH tunnel from the home server as follows:
homeserver~$ ssh -fN -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
Log in to the relay server and verify with the netstat command that a reverse SSH tunnel has been established successfully.
relayserver~$ sudo netstat -nap | grep 10022
----------
tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev
Unlike the previous case, the endpoint of the tunnel is now 1.1.1.1:10022 (the relay server's public IP address), not 127.0.0.1:10022. This means that the tunnel endpoint is reachable from external hosts.
Now, from any other computer (a client computer), type the following command to reach the home server behind NAT:
clientcomputer~$ ssh -p 10022 homeserver_user@1.1.1.1
In the command above, 1.1.1.1 is the public IP address of the relay server, and homeserver_user must be a user account associated with the home server. That is because the host you are really logging in to is the home server, not the relay server; the latter simply relays your SSH traffic to the home server.
### Setting Up a Persistent Reverse SSH Tunnel on Linux ###
Now that you understand how to create a reverse SSH tunnel, let's make the tunnel "persistent", so that it is up and running all the time (regardless of temporary network congestion, SSH timeouts, relay host reboots, and so on). After all, if the tunnel is not always up, you will not be able to log in to your home server reliably.
For a persistent tunnel, I am going to use a tool called autossh. As the name implies, this program lets an SSH session restart automatically should it break for any reason. So it is useful for keeping a reverse SSH tunnel alive.
The first step is to set up [passwordless SSH login][2] from the home server to the relay server, so that autossh can restart a broken reverse SSH tunnel without user intervention.
Next, [install autossh][3] on the home server, where the tunnel is initiated.
On the home server, run autossh with the following arguments to create a persistent SSH tunnel to the relay server:
homeserver~$ autossh -M 10900 -fN -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
The "-M 10900" option specifies a monitoring port on the relay server, used to exchange test data for monitoring the SSH session. The port must not be used by any other program on the relay server.
The "-fN" option is passed to the ssh command, letting the SSH tunnel run in the background.
The "-o XXXX" options tell ssh to:
- Use key authentication instead of password authentication.
- Automatically accept (unknown) SSH host keys.
- Exchange keep-alive messages every 60 seconds.
- Send up to 3 keep-alive messages without receiving any response back.
The rest of the tunnel-related options are the same as described earlier.
If you want the SSH tunnel to start automatically at boot, you can add the autossh command above to /etc/rc.local.
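On distributions with systemd, a service unit is a more robust alternative to rc.local for keeping the tunnel up across reboots. A minimal sketch (the unit name, user and key setup are assumptions; note that -f is dropped so systemd can supervise the foreground process):

```
# /etc/systemd/system/autossh-tunnel.service (hypothetical)
[Unit]
Description=Persistent reverse SSH tunnel to the relay server
After=network-online.target

[Service]
User=homeserver_user
# -M 0 disables the monitor port and relies on the ServerAlive* checks instead
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" \
    -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now autossh-tunnel.service`.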
### Conclusion ###
In this post, I described how you can access a Linux server behind a restrictive firewall or NAT gateway from the outside via a reverse SSH tunnel. While I demonstrated a home-network use case, you must be especially careful when applying it in a corporate network. Such a tunnel may be considered a violation of corporate policy, as it circumvents the corporate firewall and exposes the corporate network to outside attacks, and it could easily be misused or abused. So always remember its implications before setting one up.
--------------------------------------------------------------------------------
via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html
Author: [Dan Nanni][a]
Translator: [ictlyh](https://github.com/ictlyh)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/go/digitalocean
[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html
[3]:http://ask.xmodulo.com/install-autossh-linux.html

View File

@ -1,205 +0,0 @@
Nishita Agarwal分享它关于Linux防火墙'iptables'的面试经验
================================================================================
Nishita Agarwal是Tecmint的用户她将分享她刚刚在一家公司印度Pune的一家私人公司经历的面试经验。在面试中她被问及许多不同的问题但她是iptables方面的专家因此她想把这些关于iptables的问题和相应的答案分享给那些以后可能会参加相关面试的人。
![Linux防火墙Iptables面试问题](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-iptables-Interview-Questions.jpg)
所有的问题和相应的答案都基于Nishita Agarwal的记忆并经过了重写。
> “嗨,朋友!我叫**Nishita Agarwal**。我已经取得了理学学士学位我的专业集中在UNIX和它的变种BSDLinux。它们一直深深的吸引着我。我在存储方面有1年多的经验。我正在寻求职业上的变化并将供职于印度的Pune公司。”
下面是我在面试中被问到的问题的集合。我已经把我记忆中有关iptables的问题和它们的答案记录了下来。希望这会对您未来的面试有所帮助。
### 1. 你听说过Linux下面的iptables和Firewalld么知不知道它们是什么是用来干什么的 ###
> **答案** : iptables和Firewalld我都知道并且我已经使用iptables好一段时间了。iptables主要由C语言写成并且以GNU GPL许可证发布。它是从系统管理员的角度写的最新的稳定版是iptables 1.4.21。iptables通常被用作类UNIX系统中的防火墙更准确的说可以称为iptables/netfilter。管理员通过终端/GUI工具与iptables打交道来添加和定义防火墙规则到预定义的表中。Netfilter是内核中的一个模块它执行包过滤的任务。
>
> Firewalld是RHEL/CentOS 7也许还有其他发行版但我不太清楚中最新的过滤规则的实现。它已经取代了iptables接口并与netfilter相连接。
### 2. 你用过一些iptables的GUI或命令行工具么 ###
> **答案** : 虽然我既用过GUI工具比如与[Webmin][1]结合的Shorewall也直接通过终端访问过iptables但我必须承认通过Linux终端直接访问iptables能给予用户更高的灵活性以及对其背后工作原理更好的理解。GUI适合初级管理员而终端适合有经验的管理员。
### 3. 那么iptables和firewalld的基本区别是什么呢 ###
> **答案** : iptables和firewalld都有着同样的目的包过滤但它们使用不同的方式。iptables与firewalld不同在每次发生更改时都刷新整个规则集。通常iptables配置文件位于/etc/sysconfig/iptables而firewalld的配置文件位于/etc/firewalld/。firewalld的配置文件是一组XML文件。以XML为基础进行配置的firewalld比iptables的配置更加容易但是两者都可以完成同样的任务。例如firewalld可以在自己的命令行界面以及基于XML的配置文件下使用iptables。
### 4. 如果有机会的话你会在你所有的服务器上用firewalld替换iptables么 ###
> **答案** : 我对iptables很熟悉它也工作的很好。如果没有任何需求需要firewalld的动态特性那么没有理由把所有的配置都从iptables移动到firewalld。通常情况下目前为止我还没有看到iptables造成什么麻烦。IT技术的通用准则也说道“为什么要修一件没有坏的东西呢”。上面是我自己的想法但如果组织愿意用firewalld替换iptables的话我不介意。
### 5. 你看上去对iptables很有信心巧的是我们的服务器也在使用iptables。 ###
iptables使用的表有哪些请简要的描述iptables使用的表以及它们所支持的链。
> **答案** : 谢谢您的赞赏。至于您问的问题iptables使用的表有四个它们是
>
> Nat 表
> Mangle 表
> Filter 表
> Raw 表
>
> Nat表 : Nat表主要用于网络地址转换。根据表中的每一条规则修改网络包的IP地址。流中的包仅遍历一遍Nat表例如如果流中的第一个包被修改了IP地址该流中其余的包将不再遍历这个表。通常不建议在这个表中进行过滤。NAT表支持的链有PREROUTING Chain、POSTROUTING Chain和OUTPUT Chain。
>
> Mangle表 : 正如它的名字一样这个表用于校正网络包。它用来对特殊的包进行修改。它能够修改不同包的头部和内容。Mangle表不能用于地址伪装。支持的链包括PREROUTING Chain、OUTPUT Chain、FORWARD Chain、INPUT Chain和POSTROUTING Chain。
>
> Filter表 : Filter表是iptables中使用的默认表它用来过滤网络包。如果没有定义任何规则Filter表则被当作默认的表并且基于它来过滤。支持的链有INPUT ChainOUTPUT ChainFORWARD Chain。
>
> Raw表 : 当我们想要配置豁免于连接跟踪的包时使用Raw表。它支持PREROUTING Chain和OUTPUT Chain。
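这四个表的划分在 iptables-save 导出的配置格式里看得最清楚。下面是一个示意性的片段(其中的规则是虚构的例子),每个表以 * 开头、以 COMMIT 结束:

```
*raw
:PREROUTING ACCEPT [0:0]
COMMIT

*mangle
:PREROUTING ACCEPT [0:0]
COMMIT

*nat
:PREROUTING ACCEPT [0:0]
# 示例:把到达 8080 端口的连接重定向到 80 端口
-A PREROUTING -p tcp --dport 8080 -j REDIRECT --to-ports 80
COMMIT

*filter
:INPUT ACCEPT [0:0]
# 示例:放行已建立的连接
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT
```

在命令行上,用 -t 选项指定要操作的表(如 `iptables -t nat -L`),不加 -t 时默认操作 Filter 表。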
### 6. 简要谈谈什么是iptables中的目标值能被指定为目标他们有什么用 ###
> **答案** : 下面是在iptables中可以指定为目标的值
>
> ACCEPT : 接受包
> QUEUE : 将包传递到用户空间 (应用程序和驱动所在的地方)
> DROP : 丢弃包
> RETURN : 将控制权交回调用的链,并且为当前链中的包停止执行下一条规则
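下面用一个 iptables-save 格式的示意片段(规则均为虚构的例子)展示这几种目标的写法:

```
*filter
:INPUT ACCEPT [0:0]
# ACCEPT接受来自可信网段的包
-A INPUT -s 192.168.1.0/24 -j ACCEPT
# QUEUE交给用户空间程序处理
-A INPUT -p tcp --dport 9999 -j QUEUE
# DROP静默丢弃不可信网段的包
-A INPUT -s 10.10.0.0/16 -j DROP
# RETURN将控制权交回调用链常见于自定义链的末尾
-A INPUT -j RETURN
COMMIT
```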
### 7. 让我们来谈谈iptables技术方面的东西我的意思是说实际使用方面 ###
你怎么检查CentOS中iptables所需要的rpm包是否已经安装
> **答案** : iptables已经被默认安装在CentOS中我们不需要单独安装它。但可以这样检测rpm
>
> # rpm -qa iptables
>
> iptables-1.4.21-13.el7.x86_64
>
> 如果您需要安装它您可以用yum来安装。
>
> # yum install iptables-services
### 8. 怎样检测并且确保iptables服务正在运行 ###
> **答案** : 您可以在终端中运行下面的命令来检测iptables的状态。
>
> # service iptables status [On CentOS 6/5]
> # systemctl status iptables [On CentOS 7]
>
> 如果iptables没有在运行可以使用下面的语句
>
> ---------------- 在CentOS 6/5下 ----------------
> # chkconfig --level 35 iptables on
> # service iptables start
>
> ---------------- 在CentOS 7下 ----------------
> # systemctl enable iptables
> # systemctl start iptables
>
> 我们还可以检测iptables的模块是否被加载
>
> # lsmod | grep ip_tables
### 9. 你怎么检查iptables中当前定义的规则呢 ###
> **答案** : 当前的规则可以简单的用下面的命令查看:
>
> # iptables -L
>
> 示例输出
>
> Chain INPUT (policy ACCEPT)
> target prot opt source destination
> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
> ACCEPT icmp -- anywhere anywhere
> ACCEPT all -- anywhere anywhere
> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
>
> Chain FORWARD (policy ACCEPT)
> target prot opt source destination
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
>
> Chain OUTPUT (policy ACCEPT)
> target prot opt source destination
### 10. 你怎样刷新所有的iptables规则或者特定的链呢 ###
> **答案** : 您可以使用下面的命令来刷新一个特定的链。
>
> # iptables --flush OUTPUT
>
> 要刷新所有的规则,可以用:
>
> # iptables --flush
### 11. 请在iptables中添加一条规则接受所有从一个信任的IP地址例如192.168.0.7)过来的包。 ###
> **答案** : 上面的场景可以通过运行下面的命令来完成。
>
> # iptables -A INPUT -s 192.168.0.7 -j ACCEPT
>
> 我们还可以在源IP中使用标准的斜线和子网掩码
>
> # iptables -A INPUT -s 192.168.0.7/24 -j ACCEPT
> # iptables -A INPUT -s 192.168.0.7/255.255.255.0 -j ACCEPT
### 12. 怎样在iptables中添加规则以ACCEPTREJECTDENY和DROP ssh的服务 ###
> **答案** : 假设ssh运行在22号端口那也是ssh的默认端口我们可以在iptables中添加规则来ACCEPT 22号端口上ssh的tcp包
>
> # iptables -A INPUT -p tcp --dport 22 -j ACCEPT
>
> REJECT ssh服务22号端口的tcp包。
>
> # iptables -A INPUT -p tcp --dport 22 -j REJECT
>
> DENY ssh服务22号端口的tcp包。注意iptables并没有内建的DENY目标实际中用DROP静默丢弃来达到“拒绝且不作任何回应”的效果
>
>
> # iptables -A INPUT -p tcp --dport 22 -j DROP
>
> DROP ssh服务22号端口的tcp包。
>
>
> # iptables -A INPUT -p tcp --dport 22 -j DROP
### 13. 让我给你另一个场景假如有一台电脑的本地IP地址是192.168.0.6。你需要封锁在22、23、80和8080号端口上的连接你会怎么做 ###
> **答案** : 这时我所需要的就是在iptables中使用multiport选项并将要封锁的端口号跟在它后面。上面的场景可以用下面的一条语句搞定
>
> # iptables -A INPUT -s 192.168.0.6 -p tcp -m multiport --dport 22,23,80,8080 -j DROP
>
> 可以用下面的语句查看写入的规则。
>
> # iptables -L
>
> Chain INPUT (policy ACCEPT)
> target prot opt source destination
> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
> ACCEPT icmp -- anywhere anywhere
> ACCEPT all -- anywhere anywhere
> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
> DROP tcp -- 192.168.0.6 anywhere multiport dports ssh,telnet,http,webcache
>
> Chain FORWARD (policy ACCEPT)
> target prot opt source destination
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
>
> Chain OUTPUT (policy ACCEPT)
> target prot opt source destination
**面试官** : 好了我问的就是这些。你是一个很有价值的雇员我们不会错过你的。我将会向HR推荐你的名字。如果你有什么问题请问我。
作为一个候选人我不愿不断的问将来要做的项目的事以及公司里其他的事这样会打断愉快的对话。更不用说HR轮会不会比较难总之我获得了机会。
同时我要感谢Avishek和Ravi我的朋友花时间帮我整理我的面试。
朋友如果您有过类似的面试并且愿意与数百万Tecmint读者一起分享您的面试经历请将您的问题和答案发送到admin@tecmint.com。
谢谢!保持联系。如果上面的问题还能有更好的回答,请记得告诉我。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-firewall-iptables-interview-questions-and-answers/
作者:[Avishek Kumar][a]
译者:[wwy-hust](https://github.com/wwy-hust)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/install-webmin-web-based-system-administration-tool-for-rhel-centos-fedora/