Ceph is an open source storage platform that stores data on a single distributed computer cluster. When you plan to build a cloud, one of the key decisions on top of the other requirements is how to implement your storage. Ceph is a mature, Red Hat-backed open source technology built on an object store called RADOS, with a set of gateway APIs that present the data in block, file, and object modes. Because it is open source, this portable storage platform can be installed and used in public or private clouds. The topology of a Ceph cluster is designed around replication and data distribution, which are intrinsic to the platform and provide data integrity. Ceph is fault-tolerant and runs on commodity hardware, but it can also run on more advanced systems with the right setup.
Ceph can be installed on any Linux distribution, but it requires a recent kernel and other up-to-date libraries in order to run properly. In this tutorial we will be using CentOS 7.0 with a minimal set of packages installed.
### System Resources ###
**CEPH-STORAGE**
OS: CentOS Linux 7 (Core)
RAM: 1 GB
CPU: 1 CPU
DISK: 20 GB
IP Address: 45.79.136.163
FQDN: ceph-storage.linoxide.com
**CEPH-NODE**
OS: CentOS Linux 7 (Core)
RAM: 1 GB
CPU: 1 CPU
DISK: 20 GB
IP Address: 45.79.171.138
FQDN: ceph-node.linoxide.com
### Pre-Installation Setup ###
There are a few steps that we need to perform on each of our nodes before setting up the CEPH storage. The first thing is to make sure that each node has its networking configured with an FQDN that is reachable from the other nodes.
**Configure Hosts**
To set up the hosts entries on each node, open the default hosts configuration file as shown below.
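Using the IP addresses and FQDNs from the System Resources section above, the entries would look like this (adjust them to match your own environment):
#vim /etc/hosts
45.79.136.163 ceph-storage.linoxide.com ceph-storage
45.79.171.138 ceph-node.linoxide.com ceph-node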
If you are working in a VMware virtual environment, it is recommended that you install the open VM tools. You can install them using the command below.
#yum install -y open-vm-tools
**Firewall Setup**
If you are working in a restrictive environment where your local firewall is enabled, then make sure the following ports are allowed on your CEPH storage admin node and client nodes.
You must open ports 80, 2003, and 4505-4506 on your Calamari admin node, and allow inbound traffic on port 80 to the CEPH admin or Calamari node so that clients in your network can access the Calamari web user interface.
You can start and enable the firewall on CentOS 7 with the commands given below.
#systemctl start firewalld
#systemctl enable firewalld
To allow the mentioned ports in the Admin Calamari node run the following commands.
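Assuming you are using the default firewalld service, the rules for the ports listed above would look like this:
#firewall-cmd --zone=public --add-port=80/tcp --permanent
#firewall-cmd --zone=public --add-port=2003/tcp --permanent
#firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
#firewall-cmd --reload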
It is quite reasonable to disable the firewall and SELinux if you are working in a non-production environment, so we are going to disable both in our test environment.
#systemctl stop firewalld
#systemctl disable firewalld
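To disable SELinux as well, switch it to permissive mode for the current session and then disable it permanently in its configuration file; assuming the default /etc/selinux/config location, this would be:
#setenforce 0
#sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config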
**System Update**
Now update your system and then reboot it to apply the required changes.
#yum update
#shutdown -r 0
### Setup CEPH User ###
Now we will create a separate sudo user that will be used to install the ceph-deploy utility on each node, and allow that user passwordless access on each node, because ceph-deploy needs to install software and configuration files without prompting for passwords on the CEPH nodes.
To create a new user with its own home directory, run the commands below on the ceph-storage host.
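A minimal sketch, assuming the new user is named 'ceph' (the name is our choice; any name works as long as you use it consistently): create the user with a home directory, set its password, and grant it passwordless sudo.
#useradd -d /home/ceph -m ceph
#passwd ceph
#echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph
#chmod 0440 /etc/sudoers.d/ceph
Then, from the admin node, generate an SSH key and copy it to the other node so that logins as this user do not prompt for a password:
#ssh-keygen
#ssh-copy-id ceph@ceph-storage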
To configure the PID count value, we will use the following command to check the default kernel value. By default it is a fairly small maximum number of threads, '32768'.
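The current value can be read from the proc filesystem:
#cat /proc/sys/kernel/pid_max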
So we will raise this value to a higher number of threads by editing the system configuration file as shown below.
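A minimal sketch of that edit, assuming the commonly used maximum of 4194303: append the setting to /etc/sysctl.conf and reload it.
#echo 'kernel.pid_max = 4194303' >> /etc/sysctl.conf
#sysctl -p
With the prerequisites in place, the cluster itself is defined from the admin node with ceph-deploy. Assuming the ceph-deploy package is already installed there from the Ceph release repository, and with ceph-node acting as the initial monitor node, that step would look like this:
#ceph-deploy new ceph-node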
Upon successful execution of the above command, you will see it create its configuration files in the current directory.
Now, to configure the default CEPH configuration file, open it with any editor and place the following two lines under its global parameters, reflecting your public network.
#vim ceph.conf
osd pool default size = 1
public network = 45.79.0.0/16
### Installing CEPH ###
We are now going to install CEPH on each of the nodes associated with our CEPH cluster. To do so, we use the following command to install CEPH on both of our nodes, ceph-storage and ceph-node, as shown below.
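Assuming ceph-deploy is run from the admin node as before, the installation command would be:
#ceph-deploy install ceph-node ceph-storage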
This will take some time while it processes all the required repositories and installs the required packages.
Once the CEPH installation is complete on both nodes, we will proceed to create the monitor and gather the keys by running the following command on the same node.
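With ceph-deploy, creating the initial monitor and gathering the keys is a single step:
#ceph-deploy mon create-initial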
Now we will set up the disk storage. First, run the command below to list all of your usable disks.
#ceph-deploy disk list ceph-storage
The output will list the disks on your storage node that you can use for creating the OSDs. Let's run the following commands, which include your disk names, as shown below.
#ceph-deploy disk zap ceph-storage:sda
#ceph-deploy disk zap ceph-storage:sdb
Now, to finalize the OSD setup, let's run the commands below to set up the journaling disk along with the data disk.
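A minimal sketch, assuming sda is used as the data disk and sdb holds the journal (the host:data:journal arguments are illustrative; substitute your own disk names): prepare the OSD and then activate the resulting partitions.
#ceph-deploy osd prepare ceph-storage:sda:sdb
#ceph-deploy osd activate ceph-storage:/dev/sda1:/dev/sdb1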
You will have to repeat the same commands on all the nodes, and they will wipe everything present on the disks. Afterwards, to have a functioning cluster, we need to copy the keys and configuration files from the admin ceph-node to all the associated nodes by using the following command.
#ceph-deploy admin ceph-node ceph-storage
### Testing CEPH ###
We have almost completed the CEPH cluster setup. Let's check the status of the running cluster by using the commands below on the admin ceph-node.
#ceph status
#ceph health
HEALTH_OK
So, if you did not get any error message from ceph status, that means you have successfully set up your CEPH storage cluster on CentOS 7.
### Conclusion ###
In this detailed article we learned how to set up a CEPH storage cluster using two virtual machines running CentOS 7, which can be used as a backup store or as local storage that serves your other virtual machines by creating pools on it. We hope you found this article helpful. Do share your experiences when you try this at your end.