mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-01-07 22:11:09 +08:00
Merge remote-tracking branch 'LCTT/master'
This commit is contained in:
commit
dfce1be723
如何在 Linux 上用 Fail2Ban 保护服务器免受暴力攻击
======

Linux 管理员的一个重要任务是保护服务器免受非法攻击或访问。默认情况下,Linux 系统带有配置良好的防火墙,比如 iptables、Uncomplicated Firewall(UFW)、ConfigServer Security Firewall(CSF)等,可以防止多种攻击。

任何连接到互联网的机器都是恶意攻击的潜在目标。有一个名为 Fail2Ban 的工具可用来缓解服务器上的非法访问。

### 什么是 Fail2Ban?

[Fail2Ban][1] 是一款入侵防御软件,可以保护服务器免受暴力攻击。它是用 Python 编程语言编写的。Fail2Ban 基于 auth 日志文件工作,默认情况下它会扫描所有 auth 日志文件,如 `/var/log/auth.log`、`/var/log/apache/access.log` 等,并禁止带有恶意标志的 IP,比如密码失败太多、寻找漏洞等等。

通常,Fail2Ban 用于更新防火墙规则,在指定的时间内拒绝 IP 地址。它也会发送邮件通知。Fail2Ban 为各种服务提供了许多过滤器,如 ssh、apache、nginx、squid、named、mysql、nagios 等。

Fail2Ban 能够降低错误认证尝试的速度,但是它不能消除弱认证带来的风险。这只是服务器防止暴力攻击的安全手段之一。

### 如何在 Linux 中安装 Fail2Ban

Fail2Ban 已经与大部分 Linux 发行版打包在一起了,所以只需使用你的发行版的包管理器来安装它。

对于 Debian / Ubuntu,使用 [APT-GET 命令][2]或 [APT 命令][3]安装。

```
$ sudo apt install fail2ban
```

对于 Fedora,使用 [DNF 命令][4]安装。

```
$ sudo dnf install fail2ban
```

对于 CentOS/RHEL,启用 [EPEL 库][5]或 [RPMForge][6] 库,使用 [YUM 命令][7]安装。

```
$ sudo yum install fail2ban
```

对于 Arch Linux,使用 [Pacman 命令][8]安装。

```
$ sudo pacman -S fail2ban
```

对于 openSUSE,使用 [Zypper 命令][9]安装。

```
$ sudo zypper in fail2ban
```

### 如何配置 Fail2Ban

默认情况下,Fail2Ban 将所有配置文件保存在 `/etc/fail2ban/` 目录中。主配置文件是 `jail.conf`,它包含一组预定义的过滤器。不要直接编辑该文件,因为只要软件有新的更新,其中的配置就会被重置为默认值。

只需在同一目录下创建一个名为 `jail.local` 的新配置文件,并根据您的意愿进行修改。

```
# cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
```

默认情况下,大多数选项都已经配置得很完美了。如果要允许任何特定 IP 的访问,则可以将 IP 地址添加到 `ignoreip` 区域,对于多个 IP 的情况,用空格隔开 IP 地址。

配置文件中的 `DEFAULT` 部分包含 Fail2Ban 遵循的基本规则集,您可以根据自己的意愿调整任何参数。

```
# nano /etc/fail2ban/jail.local

maxretry = 3
destemail = 2daygeek@gmail.com
```

  * `ignoreip`:本部分允许我们列出 IP 地址列表,Fail2Ban 不会禁止与列表中的地址匹配的主机
  * `bantime`:主机被禁止的秒数
  * `findtime`:如果在最近 `findtime` 秒内已经发生了 `maxretry` 次失败尝试,则主机会被禁止
  * `maxretry`:主机被禁止之前允许的失败次数
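把上面这些参数合起来看,一个最小的 `[DEFAULT]` 配置示意如下(其中的 IP 地址和数值仅为示例,请按自己的环境调整):

```
[DEFAULT]
# 不会被封禁的地址,多个地址用空格隔开
ignoreip = 127.0.0.1/8 192.168.1.0/24
# 封禁时长(秒)
bantime  = 600
# 统计失败次数的时间窗口(秒)
findtime = 600
# 窗口内允许的最大失败次数
maxretry = 3
```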
### 如何配置服务

Fail2Ban 带有一组预定义的过滤器,用于各种服务,如 ssh、apache、nginx、squid、named、mysql、nagios 等。我们不需要对配置文件做其他更改,只需在对应服务区域中添加 `enabled = true` 这一行就可以启用任何服务。禁用服务时将 `true` 改为 `false` 即可。

```
# SSH servers

logpath = %(sshd_log)s
backend = %(sshd_backend)s
```

  * `enabled`:确定服务是打开还是关闭。
  * `port`:指明特定的服务。如果使用默认端口,则服务名称可以放在这里。如果使用非传统端口,则应该是端口号。
  * `logpath`:提供服务日志的位置
  * `backend`:指定用于获取文件修改的后端。
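例如,要启用 sshd 服务的监狱,可以在 `jail.local` 中加入类似下面的小节(各项取值仅为示例):

```
[sshd]
enabled  = true
port     = ssh
logpath  = %(sshd_log)s
backend  = %(sshd_backend)s
```

保存后需要重启 Fail2Ban 服务使之生效。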
### 重启 Fail2Ban

进行更改后,重新启动 Fail2Ban 才能生效。

```
[For SysVinit Systems]
# service fail2ban restart

[For systemd Systems]
# systemctl restart fail2ban.service
```

### 验证 Fail2Ban iptables 规则

你可以使用下面的命令来确认是否在防火墙中成功添加了 Fail2Ban 的 iptables 规则。

```
# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

RETURN     all  --  anywhere             anywhere
```

### 如何测试 Fail2Ban

我做了一些失败的尝试来测试这个。为了证实这一点,我要查看 `/var/log/fail2ban.log` 文件。

```
2017-11-05 14:43:22,901 fail2ban.server [7141]: INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.9.6
```

要查看启用的监狱列表,请运行以下命令。

```
# fail2ban-client status
Status
`- Jail list: apache-auth, sshd
```

通过运行以下命令来获取禁止的 IP 地址。

```
# fail2ban-client status ssh
Status for the jail: ssh
`- Total banned: 1
```

要从 Fail2Ban 中删除禁止的 IP 地址,请运行以下命令。

```
# fail2ban-client set ssh unbanip 192.168.1.115
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-install-setup-configure-fail2ban-on-linux/

作者:[Magesh Maruthamuthu][a]
译者:[Flowsnow](https://github.com/Flowsnow)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
什么是 .bashrc,为什么要编辑 .bashrc?
======

![](https://www.maketecheasier.com/assets/uploads/2018/01/what-is-bashrc-hero.png)

你的 home 目录下藏着很多隐藏文件。如果你在运行 macOS 或者主流的 Linux 发行版的话,你就会在靠近隐藏文件列表的上方看见一个名为 `.bashrc` 的文件。那么什么是 `.bashrc`,编辑 `.bashrc` 又有什么用呢?

![finder-find-bashrc][1]

如果你运行一个基于 Unix 或者类 Unix 的操作系统,bash 很有可能是作为默认终端被安装的。虽然存在很多[不同的 shell][2],bash 却是最常见或许也是最主流的。如果你不明白那意味着什么,bash 是一个能解释你输入进终端程序的东西,并且基于你的输入来运行命令。它在一定程度上支持使用脚本来定制功能,这时候就要用到 `.bashrc` 了。

为了加载你的配置,bash 在每次启动时都会加载 `.bashrc` 文件的内容。每个用户的 home 目录都有这个 shell 脚本。它用来存储并加载你的终端配置和环境变量。

终端配置可以包含很多不同的东西。最常见的是,`.bashrc` 文件包含用户想要用的别名。别名允许用户通过更短的名字或替代的名字来指向命令,对于经常在终端下工作的人来说这可是一个省时利器。

![terminal-edit-bashrc-1][3]

你可以在任何终端文本编辑器上编辑 `.bashrc`。在接下来的例子中我们将使用 `nano`。

要使用 `nano` 来编辑 `.bashrc`,在终端中调用以下命令:

```
nano ~/.bashrc
```

如果你之前从没有编辑过 `.bashrc` 的话,你也许会发现它是空的。这没关系!如果不是的话,你可以随意在任一行添加你的配置。

你对 `.bashrc` 所做的任何修改将在下一次启动终端时生效。如果你想立刻生效的话,运行下面的命令:

```
source ~/.bashrc
```

你可以在 `.bashrc` 的任何位置添加内容,并随意使用注释(以 `#` 开头)来组织你的代码。

编辑 `.bashrc` 需要遵循 [bash 脚本格式][4]。如果你不知道如何用 bash 编写脚本的话,有很多在线资料可供查阅。这是一本相当全面的[介绍指南][5],包含一些我们没能在这里提及的 `.bashrc` 的方面。

**相关**: [如何在 Linux 启动时以 root 权限运行 bash 脚本][6]

有一些有用的小技巧能使你的终端体验更高效,也更用户友好。

#### bash 提示符

bash 提示符允许你自定义你的终端,并让它在你运行命令时显示提示。自定义的 bash 提示符着实能提高你在终端的工作效率。

看看这些既[有用][7]又[有趣][8]的 bash 提示符,你可以把它们添加到你的 `.bashrc` 里。

#### 别名

![terminal-edit-bashrc-3][9]

别名允许你使用简写的代码来执行你想要的某种格式的某个命令。让我们用 `ls` 命令来举个例子吧。`ls` 命令默认显示你目录里的内容。这挺有用的,不过显示目录的更多信息,或者显示目录下的隐藏内容,往往更加有用。因此,有个常见的别名就是 `ll`,用来运行 `ls -lha` 或者其他类似的命令。这样就能显示文件的大部分信息,找出隐藏的文件,并能以“能被人类阅读”的单位显示文件大小,而不是用“块”作为单位。

你需要按照下面这样的格式书写别名:

```
alias ll="ls -lha"
```

左边输入你想设置的别名,右边引号里是要执行的命令。你可以用这种方法来创建命令的短版本,防止出现常见的拼写错误,或者让一个命令总是带上你想要的参数来运行。你也可以用你喜欢的缩写来规避讨厌或容易忘记的语法。这是一些[常见的别名的用法][10],你可以添加到你的 `.bashrc` 里。
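下面是一个可以直接放进 `.bashrc` 的小片段(具体别名内容仅为示意,可按自己的习惯替换):

```shell
# 用更易读的格式列出目录内容(包括隐藏文件)
alias ll="ls -lha"
# 删除文件前先确认,防止误删
alias rm="rm -i"
```

保存后运行 `source ~/.bashrc`,新的别名就会立即可用。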
#### 函数

![terminal-edit-bashrc-2][11]

除了缩短命令名,你也可以用 bash 函数组合多个命令到一个操作。这些命令可以很复杂,但是它们大多遵循这种语法:

```
function_name () {
  command_1
  command_2
}
```

下面的命令组合了 `mkdir` 和 `cd` 命令。输入 `md folder_name` 可以在你的工作目录创建一个名为 “folder_name” 的目录并立刻进入。

```
md () {
  mkdir -p $1
  cd $1
}
```

--------------------------------------------------------------------------------

via: https://www.maketecheasier.com/what-is-bashrc/

作者:[Alexander Fox][a]
译者:[heart4lor](https://github.com/heart4lor)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
Quick Look at the Arch Based Indie Linux Distribution: MagpieOS
======

Most of the Linux distros that are in use today are either created and developed in the US or Europe. A young developer from Bangladesh wants to change all that.

### Who is Rizwan?

[Rizwan][1] is a computer science student from Bangladesh. He is currently studying to become a professional Python programmer. He started using Linux back in 2015. Working with Linux inspired him to create his own Linux distribution. He also wants to let the rest of the world know that Bangladesh is upgrading to Linux.

He has also worked on creating a [live version of Linux From Scratch][2].

![MagpieOS Linux][3]

### What is MagpieOS?

Rizwan's new distro is named MagpieOS. [MagpieOS][4] is very simple. It is basically Arch with the GNOME 3 desktop environment. MagpieOS also includes a custom repo with icons and themes (claimed to be) not available on other Arch-based distros or the AUR.

Here is a list of the software included with MagpieOS: Firefox, LibreOffice, Uget, Bleachbit, Notepadqq, SUSE Studio Image Writer, Pamac Package Manager, Gparted, Gimp, Rhythmbox, Simple Screen Recorder, all default GNOME software including Totem Video Player, and a new set of custom wallpaper.

Currently, MagpieOS only supports the GNOME desktop environment. Rizwan picked it because it is his favorite. However, he plans to add more desktop environments in the future.

Unfortunately, MagpieOS does not support the Bangla language or any other local languages. It supports GNOME's default languages, such as English, Hindi, etc.

Rizwan named his distro MagpieOS because the [magpie][5] is the official bird of Bangladesh.

![MagpieOS Linux][6]

### Why Arch?

Like most people, Rizwan started his Linux journey by using [Ubuntu][7]. In the beginning, he was happy with it. However, sometimes the software he wanted to install was not available in the repos and he had to hunt through Google looking for the correct PPA. He decided to switch to [Arch][8] because Arch has many packages that were not available on Ubuntu. Rizwan also liked the fact that Arch is a rolling release and would always be up-to-date.

The problem with Arch is that it is complicated and time-consuming to install. So, Rizwan tried out several Arch-based distros and was not happy with any of them. He didn't like [Manjaro][9] because they did not have permission to use Arch's repos. Also, Arch repo mirrors are faster than Manjaro's and have more software. He liked [Antergos][10], but to install it you need a constant internet connection. If your connection fails during installation, you have to start over.

Because of these issues, Rizwan decided to create a simple distro that would give him and others an Arch install without all the hassle. He also hopes to get developers from his home country to switch from Ubuntu to Arch by using his distro.

### How to Help Rizwan with MagpieOS

If you are interested in helping Rizwan develop MagpieOS, you can contact him via the [MagpieOS website][4]. You can also check out the project's [GitHub page][11]. Rizwan said that he is not looking for financial support at the moment.

![MagpieOS Linux][12]

### Final Thoughts

I installed MagpieOS to give it a quick once-over. It uses the [Calamares installer][13], which means installing it was relatively quick and painless. After I rebooted, I was greeted by an audio message welcoming me to MagpieOS.

To be honest, it was the first time I have heard a post-install greeting. (Windows 10 might have one, but I'm not sure.) There was also a macOS-esque application dock at the bottom of the screen. Other than that, it felt like any other GNOME 3 desktop I have used.

Considering that it's an indie project at a nascent stage, I won't recommend using it as your main OS. But if you are a distrohopper, you can surely give it a try.

That being said, this is a good first try for a student seeking to put his country on the technological map. All the best, Rizwan.

Have you already heard of MagpieOS? What is your favorite regional or locally made Linux distro? Please let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media.

--------------------------------------------------------------------------------

via: https://itsfoss.com/magpieos/

作者:[John Paul][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[1]:https://twitter.com/Linux_Saikat
[2]:https://itsfoss.com/linux-from-scratch-live-cd/
[3]:https://itsfoss.com/wp-content/uploads/2018/01/magpieos1.jpg
[4]:http://www.magpieos.net
[5]:https://en.wikipedia.org/wiki/Magpie
[6]:https://itsfoss.com/wp-content/uploads/2018/01/magpieos2.jpg
[7]:https://www.ubuntu.com
[8]:https://www.archlinux.org
[9]:http://manjaro.org
[10]:https://antergos.com
[11]:https://github.com/Rizwan-Hasan/MagpieOS
[12]:https://itsfoss.com/wp-content/uploads/2018/01/magpieos3.png
[13]:https://calamares.io
Linux Check IDE / SATA SSD Hard Disk Transfer Speed
======

So how do you find out how fast your hard disk is under Linux? Is it running at SATA I (150 MB/s), SATA II (300 MB/s) or SATA III (6.0 Gb/s) speed, without opening the computer case or chassis?

You can use the **hdparm or dd command** to check hard disk speed. hdparm provides a command line interface to various hard disk ioctls supported by the stock Linux ATA/IDE/SATA device driver subsystem. Some options may work correctly only with the latest kernels (make sure you have a cutting-edge kernel installed). I also recommend compiling hdparm with the included files from the most recent kernel source code.

### How to measure hard disk data transfer speed using hdparm

Log in as the root user and enter the following command:
`$ sudo hdparm -tT /dev/sda`
OR
`$ sudo hdparm -tT /dev/hda`
Sample outputs:
```
/dev/sda:
 Timing cached reads:   7864 MB in  2.00 seconds = 3935.41 MB/sec
 Timing buffered disk reads: 204 MB in  3.00 seconds = 67.98 MB/sec
```

For meaningful results, this operation should be **repeated 2-3 times**. The cached-reads figure displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the **throughput of the processor, cache, and memory** of the system under test. [Here is a for loop example][1], to run the test 3 times in a row:
`for i in 1 2 3; do hdparm -tT /dev/hda; done`
Where,

  * **-t** : perform device read timings
  * **-T** : perform cache read timings
  * **/dev/sda** : hard disk device file

To [find out SATA hard disk link speed][2], enter:
`sudo hdparm -I /dev/sda | grep -i speed`
Output:
```
    * Gen1 signaling speed (1.5Gb/s)
    * Gen2 signaling speed (3.0Gb/s)
    * Gen3 signaling speed (6.0Gb/s)
```

The above output indicates that my hard disk can use 1.5 Gb/s, 3.0 Gb/s, or 6.0 Gb/s speed. Please note that your BIOS / motherboard must have support for SATA-II/III:
`$ dmesg | grep -i sata | grep 'link up'`
[![Linux Check IDE SATA SSD Hard Disk Transfer Speed][3]][3]

### dd Command

You can use the dd command as follows to get speed info too:
```
dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
rm /tmp/output.img
```

Sample outputs:
```
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 23.6472 seconds, 90.8 MB/s
```
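The reported rate is simply the byte count divided by the elapsed time (this version of dd uses decimal megabytes), which is easy to sanity-check with awk:

```shell
# 2147483648 bytes copied in 23.6472 s, expressed in decimal MB/s
awk 'BEGIN { printf "%.1f MB/s\n", 2147483648 / 23.6472 / 1000000 }'
# prints: 90.8 MB/s
```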
The [recommended syntax for the dd command is as follows][4]:
```
dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync

## GNU dd syntax ##
dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync

## OR alternate syntax for GNU/dd ##
dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
```

Sample outputs from the last dd command:
```
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.23889 s, 253 MB/s
```

### Disks & storage - GUI tool

You can also use the disk utility located in the System > Administration > Disk utility menu. Please note that in the latest versions of GNOME it is simply called Disks.

#### How do I test the performance of my hard disk using Disks on Linux?

To test the speed of your hard disk:

  1. Open **Disks** from the **Activities** overview (press the Super key on your keyboard and type Disks)
  2. Choose the **disk** from the list in the **left pane**
  3. Select the menu button and select **Benchmark disk…** from the menu
  4. Click **Start Benchmark…** and adjust the Transfer Rate and Access Time parameters as desired.
  5. Choose **Start Benchmarking** to test how fast data can be read from the disk. Administrative privileges are required; enter your password.

A quick video demo of the above procedure:

https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/disks-performance.mp4

#### Read Only Benchmark (safe option)

Then, select > Read only:
![Fig.01: Linux Benchmarking Hard Disk Read Only Test Speed][5]
The above option will not destroy any data.

#### Read and Write Benchmark (all data will be lost, so be careful)

Visit System > Administration > Disk utility menu > Click Benchmark > Click the Start Read/Write Benchmark button:
![Fig.02: Linux Measuring read rate, write rate and access time][6]

### About the author

The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on [Twitter][7], [Facebook][8], [Google+][9].

--------------------------------------------------------------------------------

via: https://www.cyberciti.biz/tips/how-fast-is-linux-sata-hard-disk.html

作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.cyberciti.biz/
[1]:https://www.cyberciti.biz/faq/bash-for-loop/
[2]:https://www.cyberciti.biz/faq/linux-command-to-find-sata-harddisk-link-speed/
[3]:https://www.cyberciti.biz/tips/wp-content/uploads/2007/10/Linux-Check-IDE-SATA-SSD-Hard-Disk-Transfer-Speed.jpg
[4]:https://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/
[5]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Speed-Benchmark.png (Linux Benchmark Hard Disk Speed)
[6]:https://www.cyberciti.biz/media/new/tips/2007/10/Linux-Hard-Disk-Read-Write-Benchmark.png (Linux Hard Disk Benchmark Read / Write Rate and Access Time)
[7]:https://twitter.com/nixcraft
[8]:https://facebook.com/nixcraft
[9]:https://plus.google.com/+CybercitiBiz
Monitoring network bandwidth with iftop command
======

System admins are required to monitor IT infrastructure to make sure that everything is up & running. We have to monitor the performance of hardware, i.e. memory, HDDs & CPUs etc., & likewise we have to monitor our network. We need to make sure that our network is not being over-utilised, or our applications and websites might not work. In this tutorial, we are going to learn to use the iftop utility.

( **Recommended read**: [**Resource monitoring using Nagios**][1], [**Tools for checking system info**][2], [**Important logs to monitor**][3])

iftop is a network monitoring utility that provides real-time bandwidth monitoring. iftop measures the total data moving in & out of individual socket connections, i.e. it captures packets moving in and out via the network adapter & then sums those up to find the bandwidth being utilized.

## Installation on Debian/Ubuntu

iftop is available in the default repositories of Debian/Ubuntu & can simply be installed using the command below,

```
$ sudo apt-get install iftop
```

## Installation on RHEL/CentOS using yum

For installing iftop on CentOS or RHEL, we need to enable the EPEL repository. To enable the repository, run the following on your terminal,

### RHEL/CentOS 7

```
$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm
```

### RHEL/CentOS 6 (64 Bit)

```
$ rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
```

### RHEL/CentOS 6 (32 Bit)

```
$ rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
```

After the EPEL repository has been installed, we can now install iftop by running,

```
$ yum install iftop
```

This will install the iftop utility on your system. We will now use it to monitor our network,

## Using iftop

You can start using iftop by opening your terminal window & typing,

```
$ iftop
```

![network monitoring][5]

You will now be presented with the network activity happening on your machine. You can also use

```
$ iftop -n
```

which will present the network information on your screen, but with '-n' you will not be presented with the names related to IP addresses, only the IP addresses themselves. This option saves some of the bandwidth that would otherwise go into resolving IP addresses to names.

Now we can also see all the commands that can be used with iftop. Once you have run iftop, press the 'h' key on the keyboard to see all the commands that can be used with iftop.

![network monitoring][7]

To monitor a particular network interface, we can mention the interface with iftop,

```
$ iftop -i enp0s3
```

You can check further options that can be used with iftop using help, as mentioned above. But these examples should cover what you might need to monitor your network.

--------------------------------------------------------------------------------

via: http://linuxtechlab.com/monitoring-network-bandwidth-iftop-command/

作者:[SHUSAIN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/installing-configuring-nagios-server/
[2]:http://linuxtechlab.com/commands-system-hardware-info/
[3]:http://linuxtechlab.com/important-logs-monitor-identify-issues/
[4]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=661%2C424
[5]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/04/iftop-1.jpg?resize=661%2C424
[6]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=663%2C416
[7]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/04/iftop-help.jpg?resize=663%2C416
Rapid, Secure Patching: Tools and Methods
======

It was with some measure of disbelief that the computer science community greeted the recent [EternalBlue][1]-related exploits that have torn through massive numbers of vulnerable systems. The SMB exploits have kept coming (the most recent being [SMBLoris][2], presented at the last DEF CON, which impacts multiple SMB protocol versions, and for which Microsoft will issue no corrective patch). Attacks with these tools [incapacitated critical infrastructure][3] to the point that patients were even turned away from the British National Health Service.

It is with considerable sadness that, during this SMB catastrophe, we also have come to understand that the famous Samba server presented an exploitable attack surface on the public internet in sufficient numbers for a worm to propagate successfully. I previously [have discussed SMB security][4] in Linux Journal, and I am no longer of the opinion that SMB server processes should run on Linux.

In any case, systems administrators of all architectures must be able to down vulnerable network servers and patch them quickly. There is often a need for speed and competence when working with a large collection of Linux servers. Whether this is due to security situations or other concerns is immaterial—the hour of greatest need is not the time to begin to build administration tools. Note that in the event of an active intrusion by hostile parties, [forensic analysis][5] may be a legal requirement, and no steps should be taken on the compromised server without a careful plan and documentation. Especially in this new era of the black hats, computer professionals must step up their game and be able to secure vulnerable systems quickly.

### Secure SSH Keypairs

Tight control of a heterogeneous UNIX environment must begin with best-practice use of SSH authentication keys. I'm going to open this section with a simple requirement. SSH private keys must be one of three types: Ed25519, ECDSA using the E-521 curve, or RSA keys of 3072 bits. Any key that does not meet those requirements should be retired (in particular, DSA keys must be removed from service immediately).
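A quick way to audit what is already deployed is to list the size and type of every public key on a host with `ssh-keygen` (a minimal sketch; the `audit_ssh_keys` helper name is my own, and you may want to extend it to cover `authorized_keys` and system-wide key locations):

```shell
# Print bit size, fingerprint, comment and type for each public key in a
# directory, so that DSA and short RSA keys can be identified and retired.
audit_ssh_keys () {
    for k in "${1:-$HOME/.ssh}"/*.pub; do
        [ -f "$k" ] || continue   # skip when no .pub files are present
        ssh-keygen -l -f "$k"
    done
}

audit_ssh_keys    # defaults to ~/.ssh
```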
|
||||
|
||||
The [Ed25519][6] key format is associated with Daniel J. Bernstein, who has such a preeminent reputation in modern cryptography that the field is becoming a DJB [monoculture][7]. The Ed25519 format is deigned for speed, security and size economy. If all of your SSH servers are recent enough to support Ed25519, then use it, and consider nothing else.
|
||||
|
||||
[Guidance on creating Ed25519 keys][8] suggests 100 rounds for a work factor in the "-o" secure format. Raising the number of rounds raises the strength of the encrypted key against brute-force attacks (should a file copy of the private key fall into hostile hands), at the cost of more work and time in decrypting the key when ssh-add is executed. Although there always is [controversy and discussion][9] with security advances, I will repeat the guidance here and suggest that the best format for a newly created SSH key is this:
|
||||
|
||||
```
|
||||
|
||||
ssh-keygen -a 100 -t ed25519
|
||||
|
||||
```
|
||||
|
||||
Your systems might be too old to support Ed25519—Oracle/CentOS/Red Hat 7 have this problem (the 7.1 release introduced support). If you cannot upgrade your old SSH clients and servers, your next best option is likely E-521, available in the ECDSA key format.
|
||||
|
||||
The ECDSA curves came from the US government's National Institute of Standards (NIST). The best known and most implemented of all of the NIST curves are P-256, P-384 and E-521\. All three curves are approved for secret communications by a variety of government entities, but a number of cryptographers have [expressed growing suspicion][10] that the P-256 and P-384 curves are tainted. Well known cryptographer Bruce Schneier [has remarked][11]: "I no longer trust the constants. I believe the NSA has manipulated them through their relationships with industry." However, DJB [has expressed][12] limited praise of the E-521 curve: "To be fair I should mention that there's one standard NIST curve using a nice prime, namely 2521 – 1; but the sheer size of this prime makes it much slower than NIST P-256." All of the NIST curves have greater issues with "side channel" attacks than Ed25519—P-521 is certainly a step down, and many assert that none of the NIST curves are safe. In summary, there is a slight risk that a powerful adversary exists with an advantage over the P-256 and P-384 curves, so one is slightly inclined to avoid them. Note that even if your OpenSSH (source) release is capable of E-521, it may be [disabled by your vendor][13] due to patent concerns, so E-521 is not an option in this case. If you cannot use DJB's 2255 – 19 curve, this command will generate an E-521 key on a capable system:

```
ssh-keygen -o -a 100 -b 521 -t ecdsa
```

And, then there is the unfortunate circumstance with SSH servers that support neither ECDSA nor Ed25519. In this case, you must fall back to RSA with much larger key sizes. An absolute minimum is the modern default of 2048 bits, but 3072 is a wiser choice:

```
ssh-keygen -o -a 100 -b 3072 -t rsa
```

Then in the most lamentable case of all, when you must use old SSH clients that are not able to work with private keys created with the -o option, you can remove the password on id_rsa and create a naked key, then use OpenSSL to encrypt it with AES256 in the PKCS#8 format, as [first documented by Martin Kleppmann][14]. Provide a blank new password for the keygen utility below, then supply a new password when OpenSSL reprocesses the key:

```
$ cd ~/.ssh
$ cp id_rsa id_rsa-orig
$ ssh-keygen -p -t rsa
Enter file in which the key is (/home/cfisher/.ssh/id_rsa):
Enter old passphrase:
Key has comment 'cfisher@localhost.localdomain'
Enter new passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved with the new passphrase.

$ openssl pkcs8 -topk8 -v2 aes256 -in id_rsa -out id_rsa-strong
Enter Encryption Password:
Verifying - Enter Encryption Password:

$ mv id_rsa-strong id_rsa
$ chmod 600 id_rsa
```

After creating all of these keys on a newer system, you can compare the file sizes:

```
$ ll .ssh
total 32
-rw-------. 1 cfisher cfisher  801 Aug 10 21:30 id_ecdsa
-rw-r--r--. 1 cfisher cfisher  283 Aug 10 21:30 id_ecdsa.pub
-rw-------. 1 cfisher cfisher  464 Aug 10 20:49 id_ed25519
-rw-r--r--. 1 cfisher cfisher  111 Aug 10 20:49 id_ed25519.pub
-rw-------. 1 cfisher cfisher 2638 Aug 10 21:45 id_rsa
-rw-------. 1 cfisher cfisher 2675 Aug 10 21:42 id_rsa-orig
-rw-r--r--. 1 cfisher cfisher  583 Aug 10 21:42 id_rsa.pub
```

Although they are relatively enormous, all versions of OpenSSH that I have used have been compatible with the RSA private key in PKCS#8 format. The Ed25519 public key is now small enough to fit in 80 columns without word wrap, and it is as convenient as it is efficient and secure.
Note that PuTTY may have problems using various versions of these keys, and you may need to remove passwords for a successful import into the PuTTY agent.
These keys represent the most secure formats available for various OpenSSH revisions. They really aren't intended for PuTTY or other general interactive activity. Although one hopes that all users create strong keys for all situations, these are enterprise-class keys for major systems activities. It might be wise, however, to regenerate your system host keys to conform to these guidelines.
These key formats may soon change. Quantum computers are causing increasing concern for their ability to run [Shor's Algorithm][15], which can be used to find prime factors to break these keys in reasonable time. The largest commercially available quantum computer, the [D-Wave 2000Q][16], effectively [presents under 200 qubits][17] for this activity, which is not (yet) powerful enough for a successful attack. NIST [announced a competition][18] for a new quantum-resistant public key system with a deadline of November 2017. In response, a team including DJB has released source code for [NTRU Prime][19]. It does appear that we will likely see a post-quantum public key format for OpenSSH (and potentially TLS 1.3) released within the next two years, so take steps to ease migration now.
Also, it's important for SSH servers to restrict their allowed ciphers, MACs and key exchanges, lest strong keys be wasted on broken crypto (3DES, MD5 and arcfour should be long-disabled). My [previous guidance][20] on the subject involved the following three lines in the SSH client and server configuration (note that formatting in the sshd_config file requires all parameters on the same line with no spaces in the options; line breaks have been added here for clarity):

```
Ciphers chacha20-poly1305@openssh.com,
        aes256-gcm@openssh.com,
        aes128-gcm@openssh.com,
        aes256-ctr,
        aes192-ctr,
        aes128-ctr

MACs hmac-sha2-512-etm@openssh.com,
     hmac-sha2-256-etm@openssh.com,
     hmac-ripemd160-etm@openssh.com,
     umac-128-etm@openssh.com,
     hmac-sha2-512,
     hmac-sha2-256,
     hmac-ripemd160,
     umac-128@openssh.com

KexAlgorithms curve25519-sha256@libssh.org,
              diffie-hellman-group-exchange-sha256
```

Since the previous publication, RIPEMD160 is likely no longer safe and should be removed. Older systems, however, may support only SHA1, MD5 and RIPEMD160. Certainly remove MD5, but users of PuTTY likely will want to retain SHA1 when newer MACs are not an option. Older servers can present a challenge in finding a reasonable Cipher/MAC/KEX combination when working with modern systems.
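
When negotiating with such older peers, it helps to know exactly which algorithms your local OpenSSH build supports. The -Q query option (available in OpenSSH 6.3 and later) enumerates them; a minimal sketch:

```
# Enumerate the algorithms compiled into the local OpenSSH client,
# useful when composing Ciphers/MACs/KexAlgorithms lines that an
# old server will still accept.
ssh -Q cipher   # symmetric ciphers
ssh -Q mac      # message authentication codes
ssh -Q kex      # key exchange methods
```

On the server side, sshd -T (run as root) prints the effective configuration for comparison.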
At this point, you should have strong keys for secure clients and servers. Now let's put them to use.
### Scripting the SSH Agent
Modern OpenSSH distributions contain the ssh-copy-id shell script for easy key distribution. Below is an example of installing a specific, named key in a remote account:

```
$ ssh-copy-id -i ~/.ssh/some_key.pub person@yourserver.com
ssh-copy-id: INFO: Source of key(s) to be installed:
  "/home/cfisher/.ssh/some_key.pub"
ssh-copy-id: INFO: attempting to log in with the new key(s),
  to filter out any that are already installed
ssh-copy-id: INFO: 1 key(s) remain to be installed --
  if you are prompted now it is to install the new keys
person@yourserver.com's password:

Number of key(s) added: 1

Now try logging into the machine, with:
  "ssh 'person@yourserver.com'"
and check to make sure that only the key(s) you wanted were added.
```

If you don't have the ssh-copy-id script, you can install a key manually with the following command:

```
$ ssh person@yourserver.com 'cat >> ~/.ssh/authorized_keys' < \
    ~/.ssh/some_key.pub
```

If you have SELinux enabled, you might have to mark a newly created authorized_keys file with a security type; otherwise, the sshd server dæmon will be prevented from reading the key (the syslog may report this issue):

```
$ ssh person@yourserver.com 'chcon -t ssh_home_t ~/.ssh/authorized_keys'
```

Once your key is installed, test it in a one-time use with the -i option (note that you are entering a local key password, not a remote authentication password):

```
$ ssh -i ~/.ssh/some_key person@yourserver.com
Enter passphrase for key '/home/v-fishecj/.ssh/some_key':
Last login: Wed Aug 16 12:20:26 2017 from 10.58.17.14
yourserver $
```

General, interactive users likely will cache their keys with an agent. In the example below, the same password is used on all three types of keys that were created in the previous section:

```
$ eval $(ssh-agent)
Agent pid 4394

$ ssh-add
Enter passphrase for /home/cfisher/.ssh/id_rsa:
Identity added: ~cfisher/.ssh/id_rsa (~cfisher/.ssh/id_rsa)
Identity added: ~cfisher/.ssh/id_ecdsa (cfisher@init.com)
Identity added: ~cfisher/.ssh/id_ed25519 (cfisher@init.com)
```

The first command above launches a user agent process, which injects environment variables (named SSH_AUTH_SOCK and SSH_AGENT_PID) into the parent shell (via eval). The shell becomes aware of the agent and passes these variables to the programs that it runs from that point forward.
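
To see what eval actually consumes, note that ssh-agent simply prints shell commands on standard output; a minimal sketch (the temporary file name is illustrative only):

```
# ssh-agent -s emits Bourne-style commands exporting SSH_AUTH_SOCK
# (the agent's UNIX socket) and SSH_AGENT_PID; -k emits the
# corresponding unset and kill commands.
eval "$(ssh-agent -s)"
echo "sock=$SSH_AUTH_SOCK pid=$SSH_AGENT_PID" > /tmp/agent_demo.txt
eval "$(ssh-agent -k)"
```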
When launched, the ssh-agent has no credentials and is unable to facilitate SSH activity. It must be primed by adding keys, which is done with ssh-add. When called with no arguments, all of the default keys will be read. It also can be called to add a custom key:

```
$ ssh-add ~/.ssh/some_key
Enter passphrase for /home/cfisher/.ssh/some_key:
Identity added: /home/cfisher/.ssh/some_key (cfisher@localhost.localdomain)
```

Note that the agent will not retain the password on the key. ssh-add uses any and all passwords that you enter while it runs to decrypt keys that it finds, but the passwords are cleared from memory when ssh-add terminates (they are not sent to ssh-agent). This allows you to upgrade to new key formats with minimal inconvenience, while keeping the keys reasonably safe.
The current cached keys can be listed with ssh-add -l (from which you can deduce that "some_key" is an Ed25519 key):

```
$ ssh-add -l
3072 SHA256:cpVFMZ17oO5n/Jfpv2qDNSNcV6ffOVYPV8vVaSm3DDo /home/cfisher/.ssh/id_rsa (RSA)
521 SHA256:1L9/CglR7cstr54a600zDrBbcxMj/a3RtcsdjuU61VU cfisher@localhost.localdomain (ECDSA)
256 SHA256:Vd21LEM4lixY4rIg3/Ht/w8aoMT+tRzFUR0R32SZIJc cfisher@localhost.localdomain (ED25519)
256 SHA256:YsKtUA9Mglas7kqC4RmzO6jd2jxVNCc1OE+usR4bkcc cfisher@localhost.localdomain (ED25519)
```

While a "primed" agent is running, the SSH clients may use (trusting) remote servers fluidly, with no further prompts for credentials:

```
$ sftp person@yourserver.com
Connected to yourserver.com.
sftp> quit

$ scp /etc/passwd person@yourserver.com:/tmp
passwd                       100% 2269    65.8KB/s   00:00

$ ssh person@yourserver.com
(motd for yourserver.com)
$ ls -l /tmp/passwd
-rw-r--r--  1 root  wheel  2269 Aug 16 09:07 /tmp/passwd
$ rm /tmp/passwd
$ exit
Connection to yourserver.com closed.
```

The OpenSSH agent can be locked, preventing any further use of the credentials that it holds (this might be appropriate when suspending a laptop):

```
$ ssh-add -x
Enter lock password:
Again:
Agent locked.

$ ssh yourserver.com
Enter passphrase for key '/home/cfisher/.ssh/id_rsa': ^C
```

It will provide credentials again when it is unlocked:

```
$ ssh-add -X
Enter lock password:
Agent unlocked.
```

You also can set ssh-agent to expire keys after a time limit with the -t option, which may be useful for long-lived agents that must clear keys after a set daily shift.
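
A sketch of the -t behavior, using a hypothetical throwaway key without a passphrase rather than a real enterprise key:

```
# Cache a key with a lifetime; the agent discards it automatically
# when the timeout expires. The key below is a disposable example.
rm -f /tmp/demo_key /tmp/demo_key.pub
eval "$(ssh-agent -s)" > /dev/null
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_key
ssh-add -t 28800 /tmp/demo_key       # cached for an 8-hour shift
ssh-add -l > /tmp/demo_agent.lst     # listed until the timeout
eval "$(ssh-agent -k)" > /dev/null
rm -f /tmp/demo_key /tmp/demo_key.pub
```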
General shell users may cache many types of keys with a number of differing agent implementations. In addition to the standard OpenSSH agent, users may rely upon PuTTY's pageant.exe, GNOME Keyring or KDE KWallet, among others (the use of the PuTTY agent could likely fill an article on its own).
However, the goal here is to create "enterprise" keys for critical server controls. You likely do not want long-lived agents in order to limit the risk of exposure. When scripting with "enterprise" keys, you will run an agent only for the duration of the activity, then kill it at completion.
There are special options for accessing the root account with OpenSSH—the PermitRootLogin parameter can be added to the sshd_config file (usually found in /etc/ssh). It can be set to a simple yes or no, to forced-commands-only (which will allow only explicitly authorized programs to be executed), or to the equivalent options prohibit-password or without-password, both of which will allow access with the keys generated here.
Many hold that root should not be allowed any access. [Michael W. Lucas][21] addresses the question in SSH Mastery:
> Sometimes, it seems that you need to allow users to SSH in to the system as root. This is a colossally bad idea in almost all environments. When users must log in as a regular user and then change to root, the system logs record the user account, providing accountability. Logging in as root destroys that audit trail....It is possible to override the security precautions and make sshd permit a login directly as root. It's such a bad idea that I'd consider myself guilty of malpractice if I told you how to do it. Logging in as root via SSH almost always means you're solving the wrong problem. Step back and look for other ways to accomplish your goal.
When root action is required quickly on more than a few servers, the above advice can impose painful delays. Lucas' direct criticism can be addressed by allowing only a limited set of "bastion" servers to issue root commands over SSH. Administrators should be forced to log in to the bastions with unprivileged accounts to establish accountability.
However, one problem with remotely "changing to root" is the [statistical use of the Viterbi algorithm][22]. Short passwords, the su - command and remote SSH calls that use passwords to establish a trinary network configuration are all uniquely vulnerable to timing attacks on a user's keyboard movement. Those with the highest security concerns will need to compensate.
For the rest of us, I recommend that PermitRootLogin without-password be set for all target machines.
Finally, you can easily terminate ssh-agent interactively with the -k option:

```
$ eval $(ssh-agent -k)
Agent pid 4394 killed
```

With these tools and the intended use of them in mind, here is a complete script that runs an agent for the duration of a set of commands over a list of servers for a common named user (which is not necessarily root):

```
# cat artano

#!/bin/sh

if [[ $# -lt 1 ]]; then echo "$0 - requires commands"; exit; fi

R="-R5865:127.0.0.1:5865"  # set to "-2" if you don't want port forwarding

eval $(ssh-agent -s)

function cleanup { eval $(ssh-agent -s -k); }

trap cleanup EXIT

function remsh { typeset F="/tmp/${1}" h="$1" p="$2"; shift 2; echo "#$h"
 if [[ "$ARTANO" == "PARALLEL" ]]
 then ssh "$R" -p "$p" "$h" "$@" < /dev/null >>"${F}.out" 2>>"${F}.err" &
 else ssh "$R" -p "$p" "$h" "$@"
 fi } # HOST PORT CMD

if ssh-add ~/.ssh/master_key
then remsh yourserver.com 22 "$@"
     remsh container.yourserver.com 2200 "$@"
     remsh anotherserver.com 22 "$@"
     # Add more hosts here.
else echo Bad password - killing agent. Try again.
fi

wait

#######################################################################
# Examples:                # Artano is an epithet of a famous mythical being
# artano 'mount /patchdir' # you will need an fstab entry for this
# artano 'umount /patchdir'
# artano 'yum update -y 2>&1'
# artano 'rpm -Fvh /patchdir/\*.rpm'
#######################################################################
```

This script runs all commands in sequence on a collection of hosts by default. If the ARTANO environment variable is set to PARALLEL, it instead will launch them all as background processes simultaneously and append their STDOUT and STDERR to files in /tmp (this should be no problem when dealing with fewer than a hundred hosts on a reasonable server). The PARALLEL setting is useful not only for pushing changes faster, but also for collecting audit results.
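
The sequential-versus-parallel branch can be demonstrated in isolation with a harmless local command standing in for ssh (the function and file names here are illustrative, not part of the original script):

```
# Same control flow as artano's remsh: foreground execution by
# default, background jobs with output appended to files when
# ARTANO=PARALLEL; wait collects the background jobs.
rm -f /tmp/demo.out /tmp/demo.err
run() {
  if [ "$ARTANO" = "PARALLEL" ]
  then "$@" >> /tmp/demo.out 2>> /tmp/demo.err &
  else "$@"
  fi
}
ARTANO=PARALLEL
run echo first
run echo second
wait
```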
Below is an example using the yum update agent. The source of this particular invocation had to traverse a firewall and relied on a proxy setting in the /etc/yum.conf file, which used the port-forwarding option (-R) above:

```
# ./artano 'yum update -y 2>&1'
Agent pid 3458
Enter passphrase for /root/.ssh/master_key:
Identity added: /root/.ssh/master_key (/root/.ssh/master_key)
#yourserver.com
Loaded plugins: langpacks, ulninfo
No packages marked for update
#container.yourserver.com
Loaded plugins: langpacks, ulninfo
No packages marked for update
#anotherserver.com
Loaded plugins: langpacks, ulninfo
No packages marked for update
Agent pid 3458 killed
```

The script can be used for more general maintenance functions. Linux installations running the XFS filesystem should "defrag" periodically. Although this normally would be done with cron, it can be a centralized activity, stored in a separate script that includes only the appropriate hosts:

```
&1'
Agent pid 7897
Enter passphrase for /root/.ssh/master_key:
Identity added: /root/.ssh/master_key (/root/.ssh/master_key)
#yourserver.com
#container.yourserver.com
#anotherserver.com
Agent pid 7897 killed
```

An easy method to collect the contents of all authorized_keys files for all users is the following artano script (this is useful for system auditing and is coded to remove file duplicates):

```
artano 'awk -F: {print\$6\"/.ssh/authorized_keys\"} \
    /etc/passwd | sort -u | xargs grep . 2> /dev/null'
```

It is convenient to configure NFS mounts for file distribution to remote nodes. Bear in mind that NFS is clear text, and sensitive content should not traverse untrusted networks while unencrypted. After configuring an NFS server on host 1.2.3.4, I add the following line to the /etc/fstab file on all the clients and create the /patchdir directory. After the change, the artano script can be used to mass-mount the directory if the network configuration is correct:

```
# tail -1 /etc/fstab
1.2.3.4:/var/cache/yum/x86_64/7Server/ol7_latest/packages /patchdir nfs4 noauto,proto=tcp,port=2049 0 0
```

Assuming that the NFS server is mounted, RPMs can be upgraded from images stored upon it (note that Oracle Spacewalk or Red Hat Satellite might be a more capable patch method):

```
# ./artano 'rpm -Fvh /patchdir/\*.rpm'
Agent pid 3203
Enter passphrase for /root/.ssh/master_key:
Identity added: /root/.ssh/master_key (/root/.ssh/master_key)
#yourserver.com
Preparing...                       ########################
Updating / installing...
xmlsec1-1.2.20-7.el7_4             ########################
xmlsec1-openssl-1.2.20-7.el7_4     ########################
Cleaning up / removing...
xmlsec1-openssl-1.2.20-5.el7       ########################
xmlsec1-1.2.20-5.el7               ########################
#container.yourserver.com
Preparing...                       ########################
Updating / installing...
xmlsec1-1.2.20-7.el7_4             ########################
xmlsec1-openssl-1.2.20-7.el7_4     ########################
Cleaning up / removing...
xmlsec1-openssl-1.2.20-5.el7       ########################
xmlsec1-1.2.20-5.el7               ########################
#anotherserver.com
Preparing...                       ########################
Updating / installing...
xmlsec1-1.2.20-7.el7_4             ########################
xmlsec1-openssl-1.2.20-7.el7_4     ########################
Cleaning up / removing...
xmlsec1-openssl-1.2.20-5.el7       ########################
xmlsec1-1.2.20-5.el7               ########################
Agent pid 3203 killed
```

I am assuming that my audience is already experienced with package tools for their preferred platforms. However, to avoid criticism that I've included little actual discussion of patch tools, the following is a quick reference of RPM manipulation commands, which is the most common package format on enterprise systems:

* rpm -Uvh package.i686.rpm — install or upgrade a package file.
* rpm -Fvh package.i686.rpm — upgrade a package file, if an older version is installed.
* rpm -e package — remove an installed package.
* rpm -q package — list installed package name and version.
* rpm -q --changelog package — print full changelog for installed package (including CVEs).
* rpm -qa — list all installed packages on the system.
* rpm -ql package — list all files in an installed package.
* rpm -qpl package.i686.rpm — list files included in a package file.
* rpm -qi package — print detailed description of installed package.
* rpm -qpi package — print detailed description of package file.
* rpm -qf /path/to/file — list package that installed a particular file.
* rpm --rebuild package.src.rpm — unpack and build a binary RPM under /usr/src/redhat.
* rpm2cpio package.src.rpm | cpio -icduv — unpack all package files in the current directory.

Another important consideration for scripting the SSH agent is limiting the capability of an authorized key. There is a [specific syntax][23] for such limitations. Of particular interest is the from="" clause, which will restrict logins on a key to a limited set of hosts. It is likely wise to declare a set of "bastion" servers that will record non-root logins that escalate into controlled users who make use of the enterprise keys.
An example entry might be the following (note that I've broken this line, which is not allowed syntax but done here for clarity):

```
from="*.c2.security.yourcompany.com,4.3.2.1" ssh-ed25519
    AAAAC3NzaC1lZDI1NTE5AAAAIJSSazJz6A5x6fTcDFIji1X+
    svesidBonQvuDKsxo1Mx
```

A number of other useful restraints can be placed upon authorized_keys entries. The command="" option will restrict a key to a single program or script and will set the SSH_ORIGINAL_COMMAND environment variable to the client's attempted call—scripts can set alarms if the variable does not contain approved contents. The restrict option also is worth consideration, as it disables a large set of SSH features that can be both superfluous and dangerous.
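
A hypothetical entry combining these restraints might look like the following (the wrapper path is an example of my own, and the key material is abbreviated):

```
restrict,command="/usr/local/bin/audit-wrapper",from="*.c2.security.yourcompany.com" ssh-ed25519 AAAA...
```

The wrapper script can examine SSH_ORIGINAL_COMMAND and refuse or log anything outside an approved list.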
Although it is possible to set server identification keys in the known_hosts file to a @revoked status, this cannot be done with the contents of authorized_keys. However, a system-wide file for forbidden keys can be set in the sshd_config with RevokedKeys. This file overrides any user's authorized_keys. If set, this file must exist and be readable by the sshd server process; otherwise, no keys will be accepted at all (so use care if you configure it on a machine where there are obstacles to physical access). When this option is set, use the artano script to append forbidden keys to the file quickly when they should be disallowed from the network. A clear and convenient file location would be /etc/ssh/revoked_keys.
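
A sketch of the corresponding sshd_config fragment, using the file location suggested above:

```
# /etc/ssh/sshd_config
RevokedKeys /etc/ssh/revoked_keys
```

The file lists one banned public key per line in the same format as authorized_keys; remember that if the configured file is missing or unreadable, sshd will accept no keys at all.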
It is also possible to establish a local Certificate Authority (CA) for OpenSSH that will [allow keys to be registered with an authority][24] with expiration dates. These CAs can [become quite elaborate][25] in their control over an enterprise. Although the maintenance of an SSH CA is beyond the scope of this article, keys issued by such CAs should be strong by adhering to the requirements for Ed25519/P-521/RSA-3072.
### pdsh
Many higher-level tools for the control of collections of servers exist that are much more sophisticated than the script I've presented here. The most famous is likely [Puppet][26], which is a Ruby-based configuration management system for enterprise control. Puppet has a somewhat short list of supported operating systems. If you are looking for low-level control of Android, Tomato, Linux smart terminals or other "exotic" POSIX, Puppet is likely not the appropriate tool. Another popular Ruby-based tool is [Chef][27], which is known for its complexity. Both Puppet and Chef require Ruby installations on both clients and servers, and they both will catalog any SSH keys that they find, so this key strength discussion is completely applicable to them.
There are several similar Python-based tools, including [Ansible][28], [Bcfg2][29], [Fabric][30] and [SaltStack][31]. Of these, only Ansible can run "agentless" over a bare SSH connection; the rest will require agents that run on target nodes (and this likely includes a Python runtime).
Another popular configuration management tool is [CFEngine][32], which is coded in C and claims very high performance. [Rudder][33] has evolved from portions of CFEngine and has a small but growing user community.
Most of the previously mentioned packages are licensed commercially and some are closed source.
The closest low-level tool to the activities presented here is the Parallel Distributed Shell (pdsh), which can be found in the [EPEL repository][34]. The pdsh utilities grew out of an IBM-developed package named dsh designed for the control of compute clusters. Install the following packages from the repository to use pdsh:

```
# rpm -qa | grep pdsh
pdsh-2.31-1.el7.x86_64
pdsh-rcmd-ssh-2.31-1.el7.x86_64
```

An SSH agent must be running while using pdsh with encrypted keys, and there is no obvious way to control the destination port on a per-host basis as was done with the artano script. Below is an example using pdsh to run a command on three remote servers:

```
# eval $(ssh-agent)
Agent pid 17106

# ssh-add ~/.ssh/master_key
Enter passphrase for /root/.ssh/master_key:
Identity added: /root/.ssh/master_key (/root/.ssh/master_key)

# pdsh -w hosta.com,hostb.com,hostc.com uptime
hosta: 13:24:49 up 13 days, 2:13, 6 users, load avg: 0.00, 0.01, 0.05
hostb: 13:24:49 up 7 days, 21:15, 5 users, load avg: 0.05, 0.04, 0.05
hostc: 13:24:49 up 9 days, 3:26, 3 users, load avg: 0.00, 0.01, 0.05

# eval $(ssh-agent -k)
Agent pid 17106 killed
```

The -w option above defines a host list. It allows for limited arithmetic expansion and can take the list of hosts from standard input if the argument is a dash (-). The PDSH_SSH_ARGS and PDSH_SSH_ARGS_APPEND environment variables can be used to pass custom options to the SSH call. By default, 32 sessions will be launched in parallel, and this "fanout/sliding window" will be maintained by launching new host invocations as existing connections complete and close. You can adjust the size of the "fanout" either with the -f option or the FANOUT environment variable. It's interesting to note that there are two file copy commands: pdcp and rpdcp, which are analogous to scp.
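
A hypothetical invocation combining these controls (the host range and port are placeholders, not hosts from this article):

```
# Pass extra options to every underlying ssh call and widen the
# sliding window to 64 simultaneous sessions.
export PDSH_SSH_ARGS_APPEND="-p 2200 -o ConnectTimeout=5"
pdsh -f 64 -w 'host[01-20].yourcompany.com' uptime
```
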
Even a low-level utility like pdsh lacks some flexibility that is available by scripting OpenSSH, so prepare to feel even greater constraints as more complicated tools are introduced.
### Conclusion
Modern Linux touches us in many ways on diverse platforms. When the security of these systems is not maintained, others also may touch our platforms and turn them against us. It is important to realize the maintenance obligations when you add any Linux platform to your environment. This obligation always exists, and there are consequences when it is not met.
In a security emergency, simple, open and well understood tools are best. As tool complexity increases, platform portability certainly declines, the number of competent administrators also falls, and this likely impacts speed of execution. This may be a reasonable trade in many other aspects, but in a security context, it demands a much more careful analysis. Emergency measures must be documented and understood by a wider audience than is required for normal operations, and using more general tools facilitates that discussion.
I hope the techniques presented here will prompt that discussion for those who have not yet faced it.
### Disclaimer
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of Linux Journal.
### Note:
An exploit [compromising Ed25519][35] was recently demonstrated that relies upon custom hardware changes to derive a usable portion of a secret key. Physical hardware security is a basic requirement for encryption integrity, and many common algorithms are further vulnerable to cache timing or other side channel attacks that can be performed by the unprivileged processes of other users. Use caution when granting access to systems that process sensitive data.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/rapid-secure-patching-tools-and-methods
作者:[Charles Fisher][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxjournal.com/users/charles-fisher
[1]:https://en.wikipedia.org/wiki/EternalBlue
[2]:http://securityaffairs.co/wordpress/61530/hacking/smbloris-smbv1-flaw.html
[3]:http://www.telegraph.co.uk/news/2017/05/13/nhs-cyber-attack-everything-need-know-biggest-ransomware-offensive
[4]:http://www.linuxjournal.com/content/smbclient-security-windows-printing-and-file-transfer
[5]:https://staff.washington.edu/dittrich/misc/forensics
[6]:https://ed25519.cr.yp.to
[7]:http://www.metzdowd.com/pipermail/cryptography/2016-March/028824.html
[8]:https://blog.g3rt.nl/upgrade-your-ssh-keys.html
[9]:https://news.ycombinator.com/item?id=12563899
[10]:http://safecurves.cr.yp.to/rigid.html
[11]:https://en.wikipedia.org/wiki/Curve25519
[12]:http://blog.cr.yp.to/20140323-ecdsa.html
[13]:https://lwn.net/Articles/573166
[14]:http://martin.kleppmann.com/2013/05/24/improving-security-of-ssh-private-keys.html
[15]:https://en.wikipedia.org/wiki/Shor's_algorithm
[16]:https://www.dwavesys.com/d-wave-two-system
[17]:https://crypto.stackexchange.com/questions/40893/can-or-can-not-d-waves-quantum-computers-use-shors-and-grovers-algorithm-to-f
[18]:https://yro.slashdot.org/story/16/12/21/2334220/nist-asks-public-for-help-with-quantum-proof-cryptography
[19]:https://ntruprime.cr.yp.to/index.html
[20]:http://www.linuxjournal.com/content/cipher-security-how-harden-tls-and-ssh
[21]:https://www.michaelwlucas.com/tools/ssh
[22]:https://people.eecs.berkeley.edu/~dawnsong/papers/ssh-timing.pdf
[23]:https://man.openbsd.org/sshd#AUTHORIZED_KEYS_FILE_FORMAT
[24]:https://ef.gy/hardening-ssh
[25]:https://code.facebook.com/posts/365787980419535/scalable-and-secure-access-with-ssh
[26]:https://puppet.com
[27]:https://www.chef.io
[28]:https://www.ansible.com
[29]:http://bcfg2.org
[30]:http://www.fabfile.org
[31]:https://saltstack.com
[32]:https://cfengine.com
[33]:http://www.rudder-project.org/site
[34]:https://fedoraproject.org/wiki/EPEL
[35]:https://research.kudelskisecurity.com/2017/10/04/defeating-eddsa-with-faults

|
174
sources/tech/20180130 Ansible- Making Things Happen.md
Normal file
@ -0,0 +1,174 @@

Ansible: Making Things Happen
======

In my [last article][1], I described how to configure your server and clients so you could connect to each client from the server. Ansible is a push-based automation tool, so the connection is initiated from your "server", which is usually just a workstation or a server you ssh in to from your workstation. In this article, I explain how modules work and how you can use Ansible in ad-hoc mode from the command line.

Ansible is supposed to make your job easier, so the first thing you need to learn is how to do familiar tasks. For most sysadmins, that means some simple command-line work. Ansible has a few quirks when it comes to command-line utilities, but it's worth learning the nuances, because it makes for a powerful system.

### Command Module

This is the safest module to execute remote commands on the client machine. As with most Ansible modules, it requires Python to be installed on the client, but that's it. When Ansible executes commands using the Command Module, it does not process those commands through the user's shell. This means some variables like $HOME are not available. It also means stream functions (redirects, pipes) don't work. If you don't need to redirect output or to reference the user's home directory as a shell variable, the Command Module is what you want to use. To invoke the Command Module in ad-hoc mode, do something like this:

```
ansible host_or_groupname -m command -a "whoami"
```

Your output should show SUCCESS for each host referenced and then return the user name that was used to log in. You'll notice that the user is not root, unless that's the user you used to connect to the client computer.

If you want to see the elevated user, add another argument to the ansible command. You can add -b in order to "become" the elevated user (or the sudo user). So, if you were to run the same command as above with a "-b" flag:

```
ansible host_or_groupname -b -m command -a "whoami"
```

you should see a similar result, but the whoami results should say root instead of the user you used to connect. That flag is important to use, especially if you try to run remote commands that require root access!

### Shell Module

There's nothing wrong with using the Shell Module to execute remote commands. It's just important to know that since it uses the remote user's environment, if there's something goofy with the user's account, it might cause problems that the Command Module avoids. If you use the Shell Module, however, you're able to use redirects and pipes. You can use the whoami example to see the difference. This command:

```
ansible host_or_groupname -m command -a "whoami > myname.txt"
```

should result in an error about > not being a valid argument. Since the Command Module doesn't run inside any shell, it interprets the greater-than character as something you're trying to pass to the whoami command. If you use the Shell Module, however, you have no problems:

```
ansible host_or_groupname -m shell -a "whoami > myname.txt"
```

This should execute and give you a SUCCESS message for each host, but there should be nothing returned as output. On the remote machine, however, there should be a file called myname.txt in the user's home directory that contains the name of the user. My personal policy is to use the Command Module whenever possible and to use the Shell Module if needed.

### The Raw Module

Functionally, the Raw Module works like the Shell Module. The key difference is that Ansible doesn't do any error checking; STDERR, STDOUT and the return code are returned as-is. Other than that, Ansible has no idea what happens, because it just executes the command over SSH directly. So while the Shell Module will use /bin/sh by default, the Raw Module just uses whatever the user's personal default shell might be.

Why would a person decide to use the Raw Module? It doesn't require Python on the remote computer at all. Although it's true that most servers have Python installed by default, or easily could have it installed, many embedded devices don't and can't have Python installed. For most configuration management tools, not having an agent program installed means the remote device can't be managed. With Ansible, if all you have is SSH, you still can execute remote commands using the Raw Module. I've used the Raw Module to manage Bitcoin miners that have a very minimal embedded environment. It's a powerful tool, and when you need it, it's invaluable!

### Copy Module

Although it's certainly possible to do file and folder manipulation with the Command and Shell Modules, Ansible includes a module specifically for copying files to the remote machines. Even though it requires learning a new syntax for copying files, I like to use it because Ansible will check to see whether a file exists, and whether it's the same file. That means it copies the file only if it needs to, saving time and bandwidth. It even will make backups of existing files! I can't tell you how many times I've used scp and sshpass in a Bash FOR loop and dumped files on servers, even if they didn't need them. Ansible makes it easy and doesn't require FOR loops and IP iterations.

The syntax is a little more complicated than with Command, Shell or Raw. Thankfully, as with most things in the Ansible world, it's easy to understand. For example:

```
ansible host_or_groupname -b -m copy \
    -a "src=./updated.conf dest=/etc/ntp.conf \
    owner=root group=root mode=0644 backup=yes"
```

This will look in the current directory (on the Ansible server/workstation) for a file called updated.conf and then copy it to each host. On the remote system, the file will be put in /etc/ntp.conf, and if a file already exists and it's different, the original will be backed up with a date extension. If the files are the same, Ansible won't make any changes.

I tend to use the Copy Module when updating configuration files. It would be perfect for updating configuration files on Bitcoin miners, but unfortunately, the Copy Module does require that the remote machine has Python installed. Nevertheless, it's a great way to update common files on many remote machines with one simple command. It's also important to note that the Copy Module supports copying remote files to other locations on the remote filesystem using the remote_src=true directive.
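
As a quick sketch of that remote-to-remote copy behavior, with made-up paths that are illustrative rather than from the original article:

```shell
# Back up a config file in place on each remote host; with
# remote_src=true, src refers to the remote filesystem rather than
# the Ansible workstation, so no file is transferred over the network.
ansible host_or_groupname -b -m copy \
    -a "src=/etc/ntp.conf dest=/root/ntp.conf.bak remote_src=true"
```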

### File Module

The File Module has a lot in common with the Copy Module, but if you try to use the File Module to copy a file, it doesn't work as expected. The File Module does all its actions on the remote machine, so src and dest are all references to the remote filesystem. The File Module often is used for creating directories, creating links or deleting remote files and folders. The following will simply create a folder named /etc/newfolder on the remote servers and set the mode:

```
ansible host_or_groupname -b -m file \
    -a "path=/etc/newfolder state=directory mode=0755"
```

You can, of course, set the owner and group, along with a bunch of other options, which you can learn about on the Ansible doc site. I find I most often will either create a folder or symbolically link a file using the File Module. To create a symlink:

```
ansible host_or_groupname -b -m file \
    -a "src=/etc/ntp.conf dest=/home/user/ntp.conf \
    owner=user group=user state=link"
```

Notice that the state directive is how you inform Ansible what you actually want to do. There are several state options:

* link — create symlink.
* directory — create directory.
* hard — create hardlink.
* touch — create empty file.
* absent — delete file or directory recursively.

This might seem a bit complicated, especially when you easily could do the same with a Command or Shell Module command, but the clarity of using the appropriate module makes it more difficult to make mistakes. Plus, learning these commands in ad-hoc mode will make playbooks, which consist of many commands, easier to understand (I plan to cover this in my next article).
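
For instance, the touch and absent states could be exercised in ad-hoc mode like this (the /tmp path is just an example):

```shell
# Create an empty marker file on each host, then remove it again.
ansible host_or_groupname -b -m file -a "path=/tmp/marker state=touch"
ansible host_or_groupname -b -m file -a "path=/tmp/marker state=absent"
```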

### Package Management

Anyone who manages multiple distributions knows it can be tricky to handle the various package managers. Ansible handles this in a couple ways. There are specific modules for apt and yum, but there's also a generic module called "package" that will install on the remote computer regardless of whether it's Red Hat- or Debian/Ubuntu-based.
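
A minimal sketch of the generic module in ad-hoc mode (ntp is just an example of a package whose name happens to match across distributions):

```shell
# The package module delegates to apt or yum as appropriate, but you
# still have to supply the distribution's own package name.
ansible host_or_groupname -b -m package -a "name=ntp state=present"
```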

Unfortunately, while Ansible usually can detect the type of package manager it needs to use, it doesn't have a way to reconcile packages with different names. One prime example is Apache. On Red Hat-based systems, the package is "httpd", but on Debian/Ubuntu systems, it's "apache2". That means some more complex things need to happen in order to install the correct package automatically. The individual modules, however, are very easy to use. I find myself just using apt or yum as appropriate, just like when I manually manage servers. Here's an apt example:

```
ansible host_or_groupname -b -m apt \
    -a "update_cache=yes name=apache2 state=latest"
```

With this one simple line, all the host machines will run apt-get update (that's the update_cache directive at work), then install apache2's latest version including any dependencies required. Much like the File Module, the state directive has a few options:

* latest — get the latest version, upgrading existing if needed.
* absent — remove package if installed.
* present — make sure package is installed, but don't upgrade existing.
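
The absent state works the same way from the command line; for example, to remove the Apache package on Debian/Ubuntu hosts:

```shell
# Uninstall apache2; adding purge=yes would also remove its
# configuration files.
ansible host_or_groupname -b -m apt -a "name=apache2 state=absent"
```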

The Yum Module works similarly to the Apt Module, but I generally don't bother with the update_cache directive, because yum updates automatically. Although very similar, installing Apache on a Red Hat-based system looks like this:

```
ansible host_or_groupname -b -m yum \
    -a "name=httpd state=present"
```

The difference with this example is that if Apache is already installed, it won't update, even if an update is available. Sometimes updating to the latest version isn't what you want, so this stops that from accidentally happening.

### Just the Facts, Ma'am

One frustrating thing about using Ansible in ad-hoc mode is that you don't have access to the "facts" about the remote systems. In my next article, where I plan to explore creating playbooks full of various tasks, you'll see how you can reference the facts Ansible learns about the systems. It makes Ansible far more powerful, but again, it can be utilized only in playbook mode. Nevertheless, it's possible to use ad-hoc mode to peek at the sorts of information Ansible gathers. If you run the setup module, it will show you all the details from a remote system:

```
ansible host_or_groupname -b -m setup
```

That command will spew a ton of variables on your screen. You can scroll through them all to see the vast amount of information Ansible pulls from the host machines. In fact, it shows so much information, it can be overwhelming. You can filter the results:

```
ansible host_or_groupname -b -m setup -a "filter=*family*"
```

That should just return a single variable, ansible_os_family, which likely will be Debian or Red Hat. When you start building more complex Ansible setups with playbooks, it's possible to insert some logic and conditionals in order to use yum where appropriate and apt where the system is Debian-based. Really, the facts variables are incredibly useful and make building playbooks that much more exciting.

But, that's for another article, because you've come to the end of the second installment. Your assignment for now is to get comfortable using Ansible in ad-hoc mode, doing one thing at a time. Most people think ad-hoc mode is just a stepping stone to more complex Ansible setups, but I disagree. The ability to configure hundreds of servers consistently and reliably with a single command is nothing to scoff at. I love making elaborate playbooks, but just as often, I'll use an ad-hoc command in a situation that used to require me to ssh in to a bunch of servers to do simple tasks. Have fun with Ansible; it just gets more interesting from here!

--------------------------------------------------------------------------------

via: http://www.linuxjournal.com/content/ansible-making-things-happen

作者:[Shawn Powers][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.linuxjournal.com/users/shawn-powers
[1]:http://www.linuxjournal.com/content/ansible-automation-framework-thinks-sysadmin