Translating by struggling

Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4

In RAID 5, data is striped across multiple drives together with distributed parity. Striping with distributed parity means that both the data and the parity information are split and spread over all the disks in the array, which gives good data redundancy.

Setup Raid 5 in Linux

This RAID level needs at least three hard drives, or more. RAID 5 is used in large-scale production environments because it is cost-effective and provides both performance and redundancy.

What is Parity?

Parity is the simplest common method of detecting errors in data storage. Parity information is stored on every disk: say we have 4 disks, then the equivalent of one disk's worth of space is split across all 4 disks to store the parity information. If any one of the disks fails, we can still get the data back by rebuilding it from the parity information after replacing the failed disk.
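To see why that works, here is a toy illustration of parity recovery (not how mdadm literally lays data out on disk): treat three data blocks as single bytes, store their XOR as the parity, and recompute any one lost byte by XOR-ing the parity with the surviving bytes. A minimal shell sketch with made-up byte values:

    # Toy parity demo with single-byte "blocks" (illustrative values only)
    d1=0x3A; d2=0xC5; d3=0x7E
    parity=$(( d1 ^ d2 ^ d3 ))      # parity spread across the array
    rebuilt=$(( parity ^ d1 ^ d3 )) # reconstruct d2 after "losing" it
    printf 'parity=0x%02X rebuilt d2=0x%02X\n' "$parity" "$rebuilt"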

Pros and Cons of RAID 5

  • Gives better performance.
  • Supports redundancy and fault tolerance.
  • Supports hot spare options.
  • Loses a single disk's worth of capacity to the parity information (see the worked example after this list).
  • No data loss if a single disk fails; we can rebuild from parity after replacing the failed disk.
  • Suits transaction-oriented environments, as reads are faster.
  • Due to the parity overhead, writes are slow.
  • Rebuilds take a long time.
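To put a number on the parity cost: the usable capacity of an N-disk RAID 5 is (N - 1) times the size of one disk. With the three 20GB disks used later in this article, that works out as follows (quick shell arithmetic, assuming equally sized disks):

    # RAID 5 usable capacity = (disks - 1) x disk size
    echo "usable: $(( (3 - 1) * 20 ))GB, consumed by parity: 20GB"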

Requirements

A minimum of 3 hard drives is required to create RAID 5, but you can add more disks, provided you have a dedicated hardware RAID controller with multiple ports. Here, we are using software RAID and the mdadm package to create the array.

mdadm is the package that allows us to configure and manage RAID devices in Linux. By default there is no configuration file for the RAID, so we must save the configuration manually after creating and configuring the RAID setup, in a separate file called mdadm.conf.

Before moving further, I suggest you go through the following articles to understand the basics of RAID in Linux.

My Server Setup

Operating System : CentOS 6.5 Final
IP Address       : 192.168.0.227
Hostname         : rd5.tecmintlocal.com
Disk 1 [20GB]    : /dev/sdb
Disk 2 [20GB]    : /dev/sdc
Disk 3 [20GB]    : /dev/sdd

This article is Part 4 of a 9-tutorial RAID series. Here we are going to set up software RAID 5 with distributed parity on Linux systems or servers, using three 20GB disks named /dev/sdb, /dev/sdc and /dev/sdd.

Step 1: Installing mdadm and Verifying the Drives

  1. As we said earlier, we are using the CentOS 6.5 Final release for this RAID setup, but the same steps can be followed for a RAID setup on any Linux-based distribution.

    lsb_release -a

    ifconfig | grep inet

CentOS 6.5 Summary

  2. If you're following our RAID series, we assume that you've already installed the mdadm package; if not, use the following command according to your Linux distribution to install it.

    yum install mdadm [on RedHat systems]

    apt-get install mdadm [on Debian systems]

  3. After installing the mdadm package, let's list the three 20GB disks which we have added to our system using the fdisk command.

    fdisk -l | grep sd

Install mdadm Tool

  4. Now it's time to examine the three attached drives for any existing RAID blocks, using the following command.

    mdadm -E /dev/sd[b-d]

    mdadm --examine /dev/sdb /dev/sdc /dev/sdd

Examine Drives For Raid

Note: As the above image illustrates, no super-block has been detected yet, so no RAID is defined on any of the three drives. Let us start to create one now.

Step 2: Partitioning the Disks for RAID

  5. First and foremost, we have to partition the disks (/dev/sdb, /dev/sdc and /dev/sdd) before adding them to the RAID, so let us define the partitions using the fdisk command before moving on to the next steps.

    fdisk /dev/sdb

    fdisk /dev/sdc

    fdisk /dev/sdd

Create /dev/sdb Partition

Please follow the instructions below to create a partition on the /dev/sdb drive.

  • Press n to create a new partition.
  • Then choose P for a primary partition. We are choosing primary here because there are no partitions defined yet.
  • Then choose 1 to be the first partition number. By default it will be 1.
  • For the cylinder size we don't have to specify anything, because we need the whole disk for the RAID, so just press Enter two times to accept the default full size.
  • Next press p to print the created partition.
  • Press t to change the type; if we need to see every available type, press L.
  • Here we are selecting fd, as the type we need is Linux raid autodetect.
  • Next press p again to print the partition with the type we have defined.
  • Use w to write the changes.

Create sdb Partition

Note: We have to follow the same steps as above to create partitions on the sdc & sdd drives too (a scripted sketch covering all three disks follows below).
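For reference only, the same partitioning could in principle be scripted by feeding the keystrokes listed above to fdisk. This is a hedged sketch, not part of the original tutorial, and the exact prompts can vary between fdisk versions, so double-check it interactively first:

    # Hypothetical non-interactive equivalent of the fdisk steps above:
    # n (new), p (primary), 1, Enter, Enter (full size), t, fd (raid autodetect), w (write)
    for disk in /dev/sdb /dev/sdc /dev/sdd; do
        printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk "$disk"
    done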

Create /dev/sdc Partition

Now partition the sdc and sdd drives by following the steps given in the screenshots, or simply repeat the steps above.

# fdisk /dev/sdc

Create sdc Partition

Create /dev/sdd Partition

# fdisk /dev/sdd

Create sdd Partition

  6. After creating the partitions, check for the changes on all three drives: sdb, sdc, & sdd.

    mdadm --examine /dev/sdb /dev/sdc /dev/sdd

    or

    mdadm -E /dev/sd[b-d]

Check Partition Changes

Note: As the above picture depicts, the partition type is fd, i.e. Linux raid autodetect.

  7. Now check for RAID blocks on the newly created partitions. If no super-blocks are detected, then we can move forward and create a new RAID 5 setup on these drives.
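The command itself is not reproduced in the text here, but it is presumably the same examine check as before, run against the new partitions rather than the whole disks, e.g.:

    mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1
    mdadm -E /dev/sd[b-d]1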

Check Raid on Partition

Step 3: Creating md device md0

  8. Now create the RAID device md0 (i.e. /dev/md0) with RAID level 5 across all the newly created partitions (sdb1, sdc1 and sdd1), using the command below.

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

    or

    mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1
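Beyond what this article uses, mdadm also lets you set the chunk size or include a hot spare at creation time. A hedged sketch of such a variant, assuming a hypothetical fourth partitioned disk /dev/sde1 for the spare:

    # Optional variant (not used in this article): explicit 512KB chunk size
    # plus a hot spare on the hypothetical /dev/sde1
    mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=512 \
          --spare-devices=1 /dev/sd[b-d]1 /dev/sde1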

  9. After creating the RAID device, check and verify the RAID, the devices included and the RAID level from the mdstat output.

    cat /proc/mdstat

Verify Raid Device

If you want to monitor the current build process, you can use the watch command; just run cat /proc/mdstat through watch, and it will refresh the screen every second.

# watch -n1 cat /proc/mdstat

Monitor Raid 5 Process

Raid 5 Process Summary

  10. After creation of the RAID, verify the RAID devices using the following command.

    mdadm -E /dev/sd[b-d]1

Verify Raid Level

Note: The output of the above command will be a little long, as it prints the information for all three drives.

  11. Next, verify the RAID array to confirm that the devices we've included in it are running and have started to re-sync.

    mdadm --detail /dev/md0

Verify Raid Array

Step 4: Creating a File System for md0

  12. Create an ext4 file system on the md0 device before mounting.

    mkfs.ext4 /dev/md0

Create md0 Filesystem
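As an optional refinement not covered in this article, ext4 can be told about the RAID geometry so that its allocations line up with the stripes. Assuming mdadm's default 512KB chunk and 4KB filesystem blocks, the stride is 512/4 = 128 blocks and the stripe-width is 128 x 2 data disks = 256; a hedged sketch:

    # Optional: align ext4 to the RAID 5 stripe
    # (assumes the default 512KB chunk, 4KB blocks and 2 data disks)
    mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/md0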

  13. Now create a directory under /mnt, mount the created filesystem under /mnt/raid5 and check the files under the mount point; you will see the lost+found directory.

    mkdir /mnt/raid5

    mount /dev/md0 /mnt/raid5/

    ls -l /mnt/raid5/

  14. Create a few files under the mount point /mnt/raid5 and append some text to one of them to verify the content.

    touch /mnt/raid5/raid5_tecmint_{1..5}

    ls -l /mnt/raid5/

    echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1

    cat /mnt/raid5/raid5_tecmint_1

    cat /proc/mdstat

Mount Raid 5 Device

  15. We need to add an entry in fstab, otherwise the mount point will not come back after a system reboot. To add the entry, edit the fstab file and append the following line as shown below. The mount point will differ according to your environment.

    vim /etc/fstab

    /dev/md0 /mnt/raid5 ext4 defaults 0 0

Raid 5 Automount
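If you prefer, the fstab entry can reference the filesystem UUID instead of /dev/md0, which is a bit more robust if the md device ever comes up under a different number; a small sketch (the UUID placeholder must be replaced with the value blkid prints on your system):

    # Look up the filesystem UUID of the array
    blkid /dev/md0
    # Then use a line like this in /etc/fstab instead of the /dev/md0 entry:
    # UUID=<uuid-from-blkid>  /mnt/raid5  ext4  defaults  0 0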

  16. Next, run the mount -av command to check for any errors in the fstab entry.

    mount -av

Check Fstab Errors

Step 5: Save Raid 5 Configuration

  17. As mentioned earlier in the requirements section, by default RAID has no config file, so we have to save it manually. If this step is not followed, the RAID device will not come up as md0 but under some other, random number.

So, we must save the configuration before the system reboots. If the configuration is saved, it will be loaded by the kernel during the system reboot and the RAID will be loaded as well.

# mdadm --detail --scan --verbose >> /etc/mdadm.conf

Save Raid 5 Configuration
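To double-check that the scan actually landed in the file, you can simply print it back; note that on Debian-based systems the file usually lives at /etc/mdadm/mdadm.conf rather than /etc/mdadm.conf:

    # Confirm the ARRAY line was appended (path differs on Debian-based systems)
    cat /etc/mdadm.conf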

Note: Saving the configuration keeps the RAID device stable as md0 across reboots.

Step 6: Adding Spare Drives

  18. What is the use of adding a spare drive? It is very useful: if any one of the disks in our array fails, the spare drive becomes active, the rebuild process kicks in and the data is re-synced from the other disks, so we gain extra resilience here.
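The follow-up article covers this in detail, but for orientation, adding a spare to the running array is a single mdadm call; a hedged sketch assuming a hypothetical fourth disk /dev/sde partitioned the same way as sdb, sdc and sdd:

    # Add a hot spare (hypothetical /dev/sde1, prepared like the other disks)
    mdadm --add /dev/md0 /dev/sde1
    mdadm --detail /dev/md0    # the new device should be listed as a spare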

For more instructions on how to add spare drive and check Raid 5 fault tolerance, read #Step 6 and #Step 7 in the following article.

Conclusion

Here, in this article, we have seen how to set up RAID 5 using three disks. In my upcoming articles, we will see how to troubleshoot when a disk fails in RAID 5 and how to replace it for recovery.


via: http://www.tecmint.com/create-raid-5-in-linux/

Author: Babin Lonston  Translator: 译者ID  Proofreader: 校对者ID

This article is an original translation by LCTT and is proudly presented by Linux中国.