How to Set Up a High Availability Apache (HTTP) Cluster on RHEL 9/8
======
In this post, we will cover how to set up a two-node high availability Apache cluster using Pacemaker on RHEL 9/8.
Pacemaker is high availability cluster software for Linux-like operating systems. Known as the 'Cluster Resource Manager', it maximizes the availability of cluster resources by failing resources over between the cluster nodes. Pacemaker uses Corosync for heartbeat and internal communication among cluster components; Corosync also takes care of quorum in the cluster.
##### Prerequisites
Before we begin, make sure you have the following:
- Two RHEL 9/8 servers
- Red Hat Subscription or Locally Configured Repositories
- SSH access to both servers
- Root or sudo privileges
- Internet connectivity
##### Lab Details:
- Server 1: node1.example.com (192.168.1.6)
- Server 2: node2.example.com (192.168.1.7)
- VIP: 192.168.1.81
- Shared Disk: /dev/sdb (2GB)
Without any further delay, let's dive into the steps.
### 1) Update /etc/hosts file
Add the following entries to the /etc/hosts file on both nodes:
```
192.168.1.6 node1.example.com
192.168.1.7 node2.example.com
```
### 2) Install high availability (pacemaker) package
Pacemaker and the other required packages are not available in the default package repositories of RHEL 9/8, so we must enable the High Availability repository. Run the following subscription-manager command on both nodes.
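A sketch of the commands involved, assuming a subscribed RHEL 9 system on x86_64 (on RHEL 8, swap in the rhel-8 repository name):

```shell
# Enable the High Availability repository (repo name assumes RHEL 9 on x86_64;
# on RHEL 8 use rhel-8-for-x86_64-highavailability-rpms)
sudo subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms

# Install pacemaker, the pcs management tool, and fence agents on both nodes
sudo dnf install -y pcs pacemaker fence-agents-all

# Start and enable the pcsd daemon, which pcs uses to talk to the nodes
sudo systemctl enable --now pcsd

# Set a password for the hacluster user (used later for node authentication)
sudo passwd hacluster

# Open the firewall for cluster traffic
sudo firewall-cmd --permanent --add-service=high-availability
sudo firewall-cmd --reload
```

Run every one of these commands on both nodes before moving on to cluster creation.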
Add both nodes to the cluster using the following "pcs cluster setup" command; here the cluster name is http_cluster. Run the commands below only on node1.
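A sketch of those commands, assuming the pcs 0.10+ syntax shipped with RHEL 8/9 and that the hacluster password has already been set on both nodes:

```shell
# Authenticate both nodes to pcsd as the hacluster user (prompts for its password)
sudo pcs host auth node1.example.com node2.example.com -u hacluster

# Create the cluster named http_cluster, then start and enable it on all nodes
sudo pcs cluster setup http_cluster node1.example.com node2.example.com
sudo pcs cluster start --all
sudo pcs cluster enable --all
```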
The output of both commands will look similar to the below,
Verify the initial cluster status from either node:
```
$ sudo pcs cluster status
```
Note: In our lab we don't have a fencing device, so we are disabling it. In a production environment, however, configuring fencing is strongly recommended.
```
$ sudo pcs property set stonith-enabled=false
$ sudo pcs property set no-quorum-policy=ignore
```
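To confirm the properties took effect, the cluster-wide settings can be listed (the subcommand name varies by pcs release; current RHEL 9 builds use `pcs property config`, while older releases use `pcs property list`):

```shell
# Show cluster-wide properties; stonith-enabled and no-quorum-policy
# should reflect the values set above
sudo pcs property config
```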
### 6) Configure shared Volume for the cluster
A shared disk (/dev/sdb) of size 2 GB is attached to both servers. We will configure it as an LVM volume and format it with the XFS file system.
Before creating the LVM volume, edit the /etc/lvm/lvm.conf file on both nodes.
Change the parameter `# system_id_source = "none"` to `system_id_source = "uname"`.
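The edit can also be scripted. The sketch below applies the same sed substitution to a scratch copy (a hypothetical path, so it is safe to run anywhere); on the real nodes, point it at /etc/lvm/lvm.conf instead:

```shell
# Demo file standing in for /etc/lvm/lvm.conf (scratch path is an assumption)
printf '# system_id_source = "none"\n' > /tmp/lvm.conf.demo

# Uncomment the parameter and switch its value from "none" to "uname"
sed -i 's/# system_id_source = "none"/system_id_source = "uname"/' /tmp/lvm.conf.demo

grep system_id_source /tmp/lvm.conf.demo
# prints: system_id_source = "uname"
```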