Merge pull request #29425 from geekpi/translating

Translating
This commit is contained in:
geekpi 2023-05-22 09:08:55 +08:00 committed by GitHub
commit a18b22fe7f
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
3 changed files with 330 additions and 294 deletions

@@ -1,293 +0,0 @@
[#]: subject: "How to Setup High Availability Apache (HTTP) Cluster on RHEL 9/8"
[#]: via: "https://www.linuxtechi.com/high-availability-apache-cluster-on-rhel/"
[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Setup High Availability Apache (HTTP) Cluster on RHEL 9/8
======
In this post, we will cover how to set up a two-node high availability Apache cluster using Pacemaker on RHEL 9/8.
Pacemaker is high availability cluster software for Linux-like operating systems. Pacemaker is known as the cluster resource manager: it provides maximum availability of cluster resources by failing resources over between the cluster nodes. Pacemaker uses Corosync for heartbeat and internal communication among cluster components; Corosync also takes care of quorum in the cluster.
##### Prerequisites
Before we begin, make sure you have the following:
- Two RHEL 9/8 servers
- Red Hat Subscription or Locally Configured Repositories
- SSH access to both servers
- Root or sudo privileges
- Internet connectivity
##### Lab Details:
- Server 1: node1.example.com (192.168.1.6)
- Server 2: node2.example.com (192.168.1.7)
- VIP: 192.168.1.81
- Shared Disk: /dev/sdb (2GB)
Without any further delay, let's dive into the steps.
### 1) Update /etc/hosts file
Add the following entries to the /etc/hosts file on both nodes:
```
192.168.1.6  node1.example.com
192.168.1.7  node2.example.com
```
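If name resolution is in doubt, a quick check from each node can confirm the entries (an optional step, not in the original article):
```
$ ping -c 2 node1.example.com
$ ping -c 2 node2.example.com
```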
### 2) Install the high availability (pacemaker) packages
Pacemaker and the other required packages are not available in the default package repositories of RHEL 9/8, so we must enable the high availability repository. Run the following subscription-manager command on both nodes.
For RHEL 9 servers:
```
$ sudo subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms
```
For RHEL 8 servers:
```
$ sudo subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
```
After enabling the repository, run the command below to install the Pacemaker packages on both nodes:
```
$ sudo dnf install pcs pacemaker fence-agents-all -y
```
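To verify the installation, you can query the installed packages and the pcs version (an optional sanity check):
```
$ rpm -q pcs pacemaker
$ pcs --version
```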
### 3) Allow high availability ports in firewall
To allow the high availability ports through the firewall, run the commands below on each node:
```
$ sudo firewall-cmd --permanent --add-service=high-availability
$ sudo firewall-cmd --reload
```
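If you want to confirm that the rule took effect, the list of allowed services should now include high-availability:
```
$ sudo firewall-cmd --list-services
```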
### 4) Set a password for hacluster and start the pcsd service
Set a password for the hacluster user on both servers by running the following echo command:
```
$ echo "<Enter-Password>" | sudo passwd --stdin hacluster
```
Execute the following commands to start and enable the pcsd service on both servers:
```
$ sudo systemctl start pcsd.service
$ sudo systemctl enable pcsd.service
```
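The service state can be checked afterwards (an optional step):
```
$ sudo systemctl is-active pcsd.service
```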
### 5) Create high availability cluster
Authenticate both nodes using the pcs command; run the command below from either node. In my case, I am running it on node1:
```
$ sudo pcs host auth node1.example.com node2.example.com
```
Use the hacluster user to authenticate.
Add both nodes to the cluster using the "pcs cluster setup" command below; here I am using http_cluster as the cluster name. Run these commands on node1 only:
```
$ sudo pcs cluster setup http_cluster --start node1.example.com node2.example.com
$ sudo pcs cluster enable --all
```
The output of both commands will confirm that the cluster has been created and started.
Verify the initial cluster status from either node:
```
$ sudo pcs cluster status
```
Note: In our lab, we don't have any fencing device, so we are disabling STONITH. In a production environment, however, it is highly recommended to configure fencing.
```
$ sudo pcs property set stonith-enabled=false
$ sudo pcs property set no-quorum-policy=ignore
```
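To double-check both properties, recent pcs releases can print the configured values (older versions use "pcs property list" instead):
```
$ sudo pcs property config
```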
### 6) Configure a shared volume for the cluster
A shared 2GB disk (/dev/sdb) is attached to both servers. We will configure it as an LVM volume and format it with the XFS file system.
Before creating the LVM volume, edit the /etc/lvm/lvm.conf file on both nodes.
Change the commented parameter # system_id_source = "none" to system_id_source = "uname":
```
$ sudo sed -i 's/# system_id_source = "none"/ system_id_source = "uname"/g' /etc/lvm/lvm.conf
```
Execute the following commands one after another on node1 to create the LVM volume:
```
$ sudo pvcreate /dev/sdb
$ sudo vgcreate --setautoactivation n vg01 /dev/sdb
$ sudo lvcreate -L1.99G -n lv01 vg01
$ sudo lvs /dev/vg01/lv01
$ sudo mkfs.xfs /dev/vg01/lv01
```
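Because the volume group will be controlled through its system ID, you can optionally verify which host currently owns it; the systemid column is standard LVM reporting:
```
$ sudo vgs -o+systemid vg01
```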
Add the shared device to the LVM devices file on the second node of the cluster (node2.example.com); run the command below on node2 only:
```
[sysops@node2 ~]$ sudo lvmdevices --adddev /dev/sdb
[sysops@node2 ~]$
```
### 7) Install and Configure Apache Web Server (HTTP)
Install the Apache web server (httpd) on both servers by running the following dnf command:
```
$ sudo dnf install -y httpd wget
```
Also allow the Apache ports through the firewall; run the following firewall-cmd commands on both servers:
```
$ sudo firewall-cmd --permanent --zone=public --add-service=http
$ sudo firewall-cmd --permanent --zone=public --add-service=https
$ sudo firewall-cmd --reload
```
Create a status.conf file on both nodes so that the Apache resource agent can query Apache's status:
```
$ sudo bash -c 'cat <<-END > /etc/httpd/conf.d/status.conf
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
END'
$
```
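It may be worth validating the Apache configuration syntax on both nodes before handing httpd over to the cluster (an optional check):
```
$ sudo apachectl configtest
```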
Modify /etc/logrotate.d/httpd on both nodes.
Replace the line below:
```
/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
```
with these lines:
```
/usr/bin/test -f /run/httpd.pid >/dev/null 2>/dev/null &&
/usr/bin/ps -q $(/usr/bin/cat /run/httpd.pid) >/dev/null 2>/dev/null &&
/usr/sbin/httpd -f /etc/httpd/conf/httpd.conf \
-c "PidFile /run/httpd.pid" -k graceful > /dev/null 2>/dev/null || true
```
Save and exit the file.
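To confirm that logrotate accepts the edited configuration without actually rotating anything, a debug run can be used (optional):
```
$ sudo logrotate -d /etc/logrotate.d/httpd
```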
### 8) Create a sample web page for Apache
Run the following commands on node1 only:
```
$ sudo lvchange -ay vg01/lv01
$ sudo mount /dev/vg01/lv01 /var/www/
$ sudo mkdir /var/www/html
$ sudo mkdir /var/www/cgi-bin
$ sudo mkdir /var/www/error
$ sudo bash -c ' cat <<-END >/var/www/html/index.html
<html>
<body>High Availability Apache Cluster - Test Page </body>
</html>
END'
$
$ sudo umount /var/www
```
Note: If SELinux is enabled, then run the following on both servers:
```
$ sudo restorecon -R /var/www
```
### 9) Create cluster resources and resource group
Define the cluster resources and a resource group for the cluster. In this case, we are using "webgroup" as the resource group.
- web_lvm is the resource name for the shared LVM volume (/dev/vg01/lv01)
- web_fs is the name of the filesystem resource that will be mounted on /var/www
- VirtualIP is the resource for the VIP (IPaddr2) on NIC enp0s3
- Website is the resource for the Apache configuration file.
Execute the following set of commands from any node.
```
$ sudo pcs resource create web_lvm ocf:heartbeat:LVM-activate vgname=vg01 vg_access_mode=system_id --group webgroup
$ sudo pcs resource create web_fs Filesystem device="/dev/vg01/lv01" directory="/var/www" fstype="xfs" --group webgroup
$ sudo pcs resource create VirtualIP IPaddr2 ip=192.168.1.81 cidr_netmask=24 nic=enp0s3 --group webgroup
$ sudo pcs resource create Website apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group webgroup
```
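If you later want to review the full resource definitions, recent pcs releases can print them (older versions use "pcs resource show"):
```
$ sudo pcs resource config
```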
Now verify the status of the cluster resources; run:
```
$ sudo pcs status
```
Great, the output above shows that all the resources are started on node1.
### 10) Test Apache Cluster
Try to access the web page using the VIP (192.168.1.81).
Use the curl command or a web browser to access the web page:
```
$ curl http://192.168.1.81
```
or
Perfect, the output above confirms that we are able to access the web page of our highly available Apache cluster.
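If you want to watch availability from a client during the failover test that follows, a simple polling loop can be left running in another terminal (not part of the original steps):
```
$ while true; do curl -s http://192.168.1.81; sleep 2; done
```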
Let's try to move the cluster resources from node1 to node2; run:
```
$ sudo pcs node standby node1.example.com
$ sudo pcs status
```
Perfect, the output above confirms that the cluster resources have migrated from node1 to node2.
To take the node (node1.example.com) out of standby, run the command below:
```
$ sudo pcs node unstandby node1.example.com
```
That's all for this post. I hope you have found it informative; kindly post your queries and feedback in the comments section below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/high-availability-apache-cluster-on-rhel/
Author: [Pradeep Kumar][a]
Topic selection: [lkxed][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lkxed/

@@ -2,7 +2,7 @@
[#]: via: "https://news.itsfoss.com/reminders/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

@@ -0,0 +1,329 @@
[#]: subject: "How to Setup High Availability Apache (HTTP) Cluster on RHEL 9/8"
[#]: via: "https://www.linuxtechi.com/high-availability-apache-cluster-on-rhel/"
[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Setup High Availability Apache (HTTP) Cluster on RHEL 9/8
======
In this post, we will cover how to set up a two-node high availability Apache cluster using Pacemaker on RHEL 9/8.
Pacemaker is high availability cluster software for Linux-like operating systems. Pacemaker is known as the "cluster resource manager": it provides maximum availability of cluster resources by failing resources over between the cluster nodes. Pacemaker uses Corosync for heartbeat and internal communication among cluster components; Corosync also takes care of quorum in the cluster.
##### Prerequisites
Before we begin, make sure you have the following:
- Two RHEL 9/8 servers
- Red Hat Subscription or Locally Configured Repositories
- SSH access to both servers
- Root or sudo privileges
- Internet connectivity
##### Lab Details:
- Server 1: node1.example.com (192.168.1.6)
- Server 2: node2.example.com (192.168.1.7)
- VIP: 192.168.1.81
- Shared Disk: /dev/sdb (2GB)
Without any further delay, let's dive into the steps.
### 1) Update /etc/hosts file
Add the following entries to the /etc/hosts file on both nodes:
```
192.168.1.6 node1.example.com
192.168.1.7 node2.example.com
```
### 2) Install the high availability (pacemaker) packages
Pacemaker and the other required packages are not available in the default package repositories of RHEL 9/8, so we must enable the high availability repository. Run the following subscription-manager command on both nodes.
For RHEL 9 servers:
```
$ sudo subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms
```
For RHEL 8 servers:
```
$ sudo subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
```
After enabling the repository, run the command below to install the Pacemaker packages on both nodes:
```
$ sudo dnf install pcs pacemaker fence-agents-all -y
```
![][1]
### 3) Allow high availability ports in firewall
To allow the high availability ports through the firewall, run the commands below on each node:
```
$ sudo firewall-cmd --permanent --add-service=high-availability
$ sudo firewall-cmd --reload
```
### 4) Set a password for hacluster and start the pcsd service
Set a password for the hacluster user on both servers by running the following echo command:
```
$ echo "<Enter-Password>" | sudo passwd --stdin hacluster
```
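As an optional check that is not in the original steps, you can confirm the account now has a usable password:
```
$ sudo passwd -S hacluster
```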
Execute the following commands to start and enable the pcsd service on both servers:
```
$ sudo systemctl start pcsd.service
$ sudo systemctl enable pcsd.service
```
### 5) Create high availability cluster
Authenticate both nodes using the pcs command; run the command below from either node. In my case, I am running it on node1:
```
$ sudo pcs host auth node1.example.com node2.example.com
```
Use the hacluster user to authenticate.
![][2]
Add both nodes to the cluster using the "pcs cluster setup" command below; here I am using http_cluster as the cluster name. Run these commands on node1 only:
```
$ sudo pcs cluster setup http_cluster --start node1.example.com node2.example.com
$ sudo pcs cluster enable --all
```
The output of both commands will look like this:
![][3]
Verify the initial cluster status from either node:
```
$ sudo pcs cluster status
```
![][4]
Note: In our lab, we don't have any fencing device, so we are disabling STONITH. In a production environment, however, it is highly recommended to configure fencing.
```
$ sudo pcs property set stonith-enabled=false
$ sudo pcs property set no-quorum-policy=ignore
```
### 6) Configure a shared volume for the cluster
A shared 2GB disk (/dev/sdb) is attached to both servers. We will configure it as an LVM volume and format it with the XFS file system.
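Before touching the disk, it may be worth confirming that /dev/sdb is visible on both nodes (an optional check):
```
$ lsblk /dev/sdb
```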
Before creating the LVM volume, edit the /etc/lvm/lvm.conf file on both nodes.
Change the commented parameter # system_id_source = "none" to system_id_source = "uname":
```
$ sudo sed -i 's/# system_id_source = "none"/ system_id_source = "uname"/g' /etc/lvm/lvm.conf
```
Execute the following commands one after another on node1 to create the LVM volume:
```
$ sudo pvcreate /dev/sdb
$ sudo vgcreate --setautoactivation n vg01 /dev/sdb
$ sudo lvcreate -L1.99G -n lv01 vg01
$ sudo lvs /dev/vg01/lv01
$ sudo mkfs.xfs /dev/vg01/lv01
```
![][5]
Add the shared device to the LVM devices file on the second node of the cluster (node2.example.com); run the command below on node2 only:
```
[sysops@node2 ~]$ sudo lvmdevices --adddev /dev/sdb
[sysops@node2 ~]$
```
### 7) Install and Configure Apache Web Server (HTTP)
Install the Apache web server (httpd) on both servers by running the following dnf command:
```
$ sudo dnf install -y httpd wget
```
Also allow the Apache ports through the firewall; run the following firewall-cmd commands on both servers:
```
$ sudo firewall-cmd --permanent --zone=public --add-service=http
$ sudo firewall-cmd --permanent --zone=public --add-service=https
$ sudo firewall-cmd --reload
```
Create a status.conf file on both nodes so that the Apache resource agent can query Apache's status:
```
$ sudo bash -c 'cat <<-END > /etc/httpd/conf.d/status.conf
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
END'
$
```
Modify /etc/logrotate.d/httpd on both nodes.
Replace the line below:
```
/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
```
with these lines:
```
/usr/bin/test -f /run/httpd.pid >/dev/null 2>/dev/null &&
/usr/bin/ps -q $(/usr/bin/cat /run/httpd.pid) >/dev/null 2>/dev/null &&
/usr/sbin/httpd -f /etc/httpd/conf/httpd.conf \
-c "PidFile /run/httpd.pid" -k graceful > /dev/null 2>/dev/null || true
```
Save and exit the file.
![][6]
### 8) Create a sample web page for Apache
Run the following commands on node1 only:
```
$ sudo lvchange -ay vg01/lv01
$ sudo mount /dev/vg01/lv01 /var/www/
$ sudo mkdir /var/www/html
$ sudo mkdir /var/www/cgi-bin
$ sudo mkdir /var/www/error
$ sudo bash -c ' cat <<-END >/var/www/html/index.html
<html>
<body>High Availability Apache Cluster - Test Page </body>
</html>
END'
$
$ sudo umount /var/www
```
Note: If SELinux is enabled, then run the following command on both servers:
```
$ sudo restorecon -R /var/www
```
### 9) Create cluster resources and resource group
Define the cluster resources and a resource group for the cluster. In this case, we are using "webgroup" as the resource group.
- web_lvm is the resource name for the shared LVM volume (/dev/vg01/lv01)
- web_fs is the name of the filesystem resource that will be mounted on /var/www
- VirtualIP is the resource for the VIP (IPaddr2) on NIC enp0s3
- Website is the resource for the Apache configuration file.
Execute the following set of commands from any node.
```
$ sudo pcs resource create web_lvm ocf:heartbeat:LVM-activate vgname=vg01 vg_access_mode=system_id --group webgroup
$ sudo pcs resource create web_fs Filesystem device="/dev/vg01/lv01" directory="/var/www" fstype="xfs" --group webgroup
$ sudo pcs resource create VirtualIP IPaddr2 ip=192.168.1.81 cidr_netmask=24 nic=enp0s3 --group webgroup
$ sudo pcs resource create Website apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group webgroup
```
![][7]
Now verify the status of the cluster resources; run:
```
$ sudo pcs status
```
![][8]
Great, the output above shows that all the resources are started on node1.
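As an aside, the same information can be printed once with Pacemaker's own monitoring tool, which ships with the pacemaker package (optional):
```
$ sudo crm_mon -1
```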
### 10) Test Apache Cluster
Try to access the web page using the VIP (192.168.1.81).
Use the curl command or a web browser to access the web page:
```
$ curl http://192.168.1.81
```
![][9]
or
![][10]
Perfect, the output above confirms that we are able to access the web page of our highly available Apache cluster.
Let's try to move the cluster resources from node1 to node2; run:
```
$ sudo pcs node standby node1.example.com
$ sudo pcs status
```
![][11]
Perfect, the output above confirms that the cluster resources have migrated from node1 to node2.
To take the node (node1.example.com) out of standby, run the command below:
```
$ sudo pcs node unstandby node1.example.com
```
![][12]
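To confirm that both nodes are back online after the unstandby, you can also list the node states (an optional check):
```
$ sudo pcs status nodes
```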
That's all for this post. I hope you have found it informative; kindly post your queries and feedback in the comments section below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/high-availability-apache-cluster-on-rhel/
Author: [Pradeep Kumar][a]
Topic selection: [lkxed][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lkxed/
[1]: https://www.linuxtechi.com/wp-content/uploads/2016/02/DNF-Command-Install-Pacemake-PCS-RHEL.png
[2]: https://www.linuxtechi.com/wp-content/uploads/2016/02/pcs-cluster-auth-command-rhel.png
[3]: https://www.linuxtechi.com/wp-content/uploads/2016/02/pcs-cluster-setup-rhel-servers.png
[4]: https://www.linuxtechi.com/wp-content/uploads/2016/02/pcs-cluster-initial-status-rhel.png
[5]: https://www.linuxtechi.com/wp-content/uploads/2016/02/Create-LVM-Volume-for-Apache-Cluster-RHEL.png
[6]: https://www.linuxtechi.com/wp-content/uploads/2016/02/Apache-cluster-logrotate-httpd-rhel.png
[7]: https://www.linuxtechi.com/wp-content/uploads/2016/02/pcs-resource-create-command-rhel.png
[8]: https://www.linuxtechi.com/wp-content/uploads/2016/02/Cluster-Resources-Status-RHEL.png
[9]: https://www.linuxtechi.com/wp-content/uploads/2016/02/Curl-Command-Access-Apache-VIP.png
[10]: https://www.linuxtechi.com/wp-content/uploads/2016/02/HA-Apache-Page-RHEL.png
[11]: https://www.linuxtechi.com/wp-content/uploads/2016/02/pcs-node-statndby-rhel.png
[12]: https://www.linuxtechi.com/wp-content/uploads/2016/02/pcs-node-unstandby-rhel.png