20141013-1 Article selection: the two latest LVM articles

DeadFire 2014-10-13 10:03:04 +08:00
parent 10f12635ab
commit b47049fba1
2 changed files with 366 additions and 0 deletions


@@ -0,0 +1,157 @@
Manage Multiple Logical Volume Management Disks using Striping I/O
================================================================================
In this article, we are going to see how logical volumes write data to disk by striping I/O. Logical Volume Management (LVM) has a handy feature that can write data across multiple disks by striping the I/O.
![Manage LVM Disks Using Striping I/O](http://www.tecmint.com/wp-content/uploads/2014/09/LVM-Striping.jpeg)
Manage LVM Disks Using Striping I/O
### What is LVM Striping? ###
**LVM Striping** is a feature that writes data across multiple disks, instead of writing constantly to a single physical volume.
#### Features of Striping ####
- It increases disk performance.
- It avoids hammering a single disk with constant writes.
- It reduces disk fill-up by spreading data over multiple disks.
In Logical Volume Management, when we create a logical volume, its extents are mapped to the volume group and its physical volumes. In that situation, if one of the **PVs** (Physical Volumes) fills up, we have to add more extents from another physical volume. Instead of simply adding more extents to the PV, we can point our logical volume at a particular set of physical volumes for its write I/O.
Assume we have **four disk** drives, each backing one physical volume. If each physical volume is capable of **100 I/O**, our volume group gets a total of **400 I/O**.
If we do not use the **stripe method**, the file system writes to a single underlying physical volume. For example, data worth 100 I/O will be written only to the first PV (**sdb1**). If we create the logical volume with the stripe option, the writes are split across all four drives, so the 100 I/O is divided and each of the four drives receives 25 I/O.
This is done in a round-robin fashion. If one of these striped logical volumes needs to be extended, we cannot add just 1 or 2 PVs; we have to add all 4 PVs to extend the logical volume size. This is one of the drawbacks of the stripe feature, and it means that while creating logical volumes we need to assign the same stripe layout to all of our logical volumes.
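As a hedged illustration of that constraint (the volume group and LV names below are hypothetical; the real volumes for this walkthrough are created further on), extending a 4-stripe logical volume keeps the 4-way layout, so the extra extents must be available on all four PVs:
# lvextend -L +400M vg_data/lv_stripe
If any one of the four striped PVs has no free extents left, the extension cannot be allocated until capacity is added to every PV in the stripe set, which is exactly the drawback described above.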
Logical Volume Management provides this striping feature, which lets us stripe data over multiple PVs at the same time. If you are familiar with logical volumes, you can go ahead and set up the logical volume stripe. If not, you first need to learn the basics of logical volume management; read the earlier articles of this series to learn more.
#### My Server Setup ####
Here I'm using **CentOS 6.5** for this exercise. The same steps can be used on RHEL, Oracle Linux, and most other distributions.
Operating System : CentOS 6.5
IP Address : 192.168.0.222
Hostname : tecmint.storage.com
### Logical Volume management using Striping I/O ###
For demonstration purposes, I've used 4 hard drives, each 1 GB in size. Let me show you the four drives using the **fdisk** command, as shown below.
# fdisk -l | grep sd
![List Hard Drives](http://www.tecmint.com/wp-content/uploads/2014/09/List-Hard-Drives.png)
List Hard Drives
Now we have to create partitions on these 4 hard drives, **sdb**, **sdc**, **sdd** and **sde**, using the **fdisk** command. To create the partitions, follow the **step #4** instructions given in **Part 1** of this series, and make sure you change the partition type to **LVM (8e)** while creating them; a sketch of the session is shown below.
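The exact prompts differ slightly between fdisk versions, but the keystrokes are the same on each drive; a hedged sketch of the session for /dev/sdb (repeat for sdc, sdd and sde) looks like this:
# fdisk /dev/sdb
n          <- create a new partition
p          <- make it a primary partition
1          <- partition number 1
<Enter>    <- accept the default first cylinder/sector
<Enter>    <- accept the default last cylinder/sector (use the whole disk)
t          <- change the partition type
8e         <- Linux LVM
w          <- write the partition table and exit
Once all four partitions are created and set to type 8e, create the physical volumes on them with pvcreate: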
# pvcreate /dev/sd[b-e]1 -v
![Create Physical Volumes in LVM](http://www.tecmint.com/wp-content/uploads/2014/09/Create-Physical-Volumes-in-LVM.png)
Create Physical Volumes in LVM
Once the PVs are created, you can list them using the **pvs** command.
# pvs
![Verify Physical Volumes](http://www.tecmint.com/wp-content/uploads/2014/09/Verify-Physical-Volumes.png)
Verify Physical Volumes
Now we need to define a volume group using those 4 physical volumes. Here I'm defining my volume group with a **16MB** physical extent (PE) size and naming it **vg_strip**.
# vgcreate -s 16M vg_strip /dev/sd[b-e]1 -v
Description of the options used in the above command:
- **[b-e]1** Expands to the partition names sdb1, sdc1, sdd1 and sde1.
- **-s** Defines the physical extent size.
- **-v** Verbose output.
Next, verify the newly created volume group with the following command.
# vgs vg_strip
![Verify Volume Group](http://www.tecmint.com/wp-content/uploads/2014/09/Verify-Volume-Group.png)
Verify Volume Group
To get more detailed information about the VG, use the -v switch with the **vgdisplay** command; it lists every physical volume used in the **vg_strip** volume group.
# vgdisplay vg_strip -v
![Volume Group Information](http://www.tecmint.com/wp-content/uploads/2014/09/Volume-Group-Information.png)
Volume Group Information
Back to our topic: while creating the logical volume, we need to define the stripe count, i.e. how the data should be written to our logical volume using the stripe method.
Here I'm creating a logical volume named **lv_tecmint_strp1** with a size of **900MB** in the **vg_strip** volume group, and I'm defining 4 stripes, which means the data written to my logical volume will be striped over 4 PVs.
# lvcreate -L 900M -n lv_tecmint_strp1 -i4 vg_strip
- **-L** Logical volume size
- **-n** Logical volume name
- **-i** Number of stripes
![Create Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/09/Create-Logical-Volumes.png)
Create Logical Volumes
In the above image, we can see that the default stripe size is **64 KB**; if we need to define our own stripe size, we can use **-I** (capital I). To confirm that the logical volume was created, use the following command.
# lvdisplay vg_strip/lv_tecmint_strp1
![Confirm Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/09/Confirm-Logical-Volumes.png)
Confirm Logical Volumes
The next question will be: how do we know that the stripes are written to 4 drives? Here we can use **lvdisplay** with **-m** (display the mapping of logical volumes) to verify.
# lvdisplay vg_strip/lv_tecmint_strp1 -m
![Check Logical Volumes](http://www.tecmint.com/wp-content/uploads/2014/09/Check-Logical-Volumes.png)
Check Logical Volumes
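As a quick alternative to lvdisplay (a hedged extra, assuming the stripes and stripe_size report fields of lvs), the stripe count and stripe size can also be read in one line:
# lvs -o +stripes,stripe_size vg_strip
The output adds columns showing the number of stripes and the stripe size for each logical volume in vg_strip.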
To use our own stripe size, let's create another logical volume of **1GB** with a stripe size of **256KB**. This time I'm going to stripe over only 3 PVs; here we can define which PVs we want the data striped across.
# lvcreate -L 1G -i3 -I 256 -n lv_tecmint_strp2 vg_strip /dev/sdb1 /dev/sdc1 /dev/sdd1
![Define Stripe Size](http://www.tecmint.com/wp-content/uploads/2014/09/Define-Stripe-Size.png)
Define Stripe Size
Next, check the stripe size and which volumes it stripes across.
# lvdisplay vg_strip/lv_tecmint_strp2 -m
![Check Stripe Size](http://www.tecmint.com/wp-content/uploads/2014/09/Check-Stripe-Size.png)
Check Stripe Size
It's time to look at the device mapper; for this we use the **dmsetup** command. It is a low-level logical volume management tool that manages logical devices which use the device-mapper driver. We can inspect the LVM information with dmsetup to see which striped volume depends on which drives.
# dmsetup deps /dev/vg_strip/lv_tecmint_strp[1-2]
![Device Mapper](http://www.tecmint.com/wp-content/uploads/2014/09/Device-Mapper.png)
Device Mapper
Here we can see that strp1 depends on 4 drives and strp2 depends on 3 devices.
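For an even lower-level view (a hedged extra, not part of the original walk-through), dmsetup can also print the device-mapper table, where a striped target lists the number of stripes and the chunk size in 512-byte sectors along with the underlying devices:
# dmsetup table /dev/vg_strip/lv_tecmint_strp[1-2]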
I hope you have learnt how to write data to logical volumes using striping. For this setup one must know the basics of logical volume management. In my next article, I will show you how to migrate storage in logical volume management; until then, stay tuned for updates and don't forget to leave your valuable comments about the article.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/manage-multiple-lvm-disks-using-striping-io/
Author: [Babin Lonston][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.tecmint.com/author/babinlonston/


@@ -0,0 +1,209 @@
Migrating LVM Partitions to New Logical Volume (Drive) Part VI
================================================================================
This is the 6th part of our ongoing Logical Volume Management series. In this article, we will show you how to migrate existing logical volumes to a new drive without any downtime. Before moving further, I would like to explain LVM migration and its features.
![LVM Storage Migration](http://www.tecmint.com/wp-content/uploads/2014/10/LVM-Migrations.png)
LVM Storage Migration
### What is LVM Migration? ###
**LVM** migration is an excellent feature that lets us move logical volumes to a new disk without data loss or downtime. The purpose of this feature is to move our data from an old disk to a new one. Usually, we migrate from one disk to other disk storage when an error occurs on some disk.
### Features of Migration ###
- Moves logical volumes from one disk to another.
- Works with any type of disk: SATA, SSD, SAS, or SAN storage (iSCSI or FC).
- Migrates disks without data loss or downtime.
In an LVM migration, we move the volumes, the file system and its data off the existing storage. For example, consider a single logical volume that has been mapped to one physical volume, and that physical volume is a physical hard drive.
Now, if we need to upgrade our server with an SSD hard drive, what would we think of first? Reformatting the disk? No! We don't have to reformat the server. LVM gives us the option to migrate from the old SATA drives to the new SSD drives. Live migration supports any kind of disk, be it a local drive, SAN, or Fibre Channel.
#### My Server Setup ####
Operating System : CentOS 6.5 Final
IP Address : 192.168.0.224
System Hostname : lvmmig.tecmintlocal.com
### Step 1: Check for Present Drives ###
**1.** Assume we already have one virtual drive named “**vdb**”, which is mapped to the logical volume “**tecmint_lv**”. Now we want to migrate this “**vdb**” drive to some other new storage. Before moving further, first verify the virtual drive and logical volume names with the help of the **fdisk** and **lvs** commands, as shown.
# fdisk -l | grep vd
# lvs
![Check Logical Volume Disk](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Logical-Volume-Disk.png)
Check Logical Volume Disk
### Step 2: Check for Newly added Drive ###
**2.** Once we confirm our existing drives, it's time to attach the new SSD drive to the system and verify the newly added drive with the help of the fdisk command.
# fdisk -l | grep dev
![Check New Added Drive](http://www.tecmint.com/wp-content/uploads/2014/10/Check-New-Added-Drive.png)
Check New Added Drive
**Note**: As you can see in the above screen, the new drive has been added successfully with the name “**/dev/sda**”.
### Step 3: Check Present Logical and Physical Volume ###
**3.** Now let's move forward and create the physical volume, volume group and logical volume for the migration. Before creating volumes, make sure to check the data currently present under the **/mnt/lvm** mount point. Use the following commands to list the mounts and check the data.
# df -h
# cd /mnt/lvm
# cat tecmint.txt
![Check Logical Volume Data](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Logical-Volume-Data.png)
Check Logical Volume Data
**Note**: For demonstration purposes, we've created two files under the **/mnt/lvm** mount point, and we will migrate this data to a new drive without any downtime.
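If you are following along with an empty mount point, equivalent test data can be created with ordinary commands (a hedged sketch; only tecmint.txt is referenced again later in this article):
# echo "LVM migration test" > /mnt/lvm/tecmint.txt
# cp /etc/hosts /mnt/lvm/
# ls -l /mnt/lvm/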
**4.** Before migrating, confirm the names of the logical volume and the volume group, and confirm which physical volume holds that volume group and logical volume.
# lvs
# vgs -o+devices | grep tecmint_vg
![Confirm Logical Volume Names](http://www.tecmint.com/wp-content/uploads/2014/10/Confirm-Logical-Volume-Names.png)
Confirm Logical Volume Names
**Note**: As you can see in the above screen, “**vdb**” holds the volume group **tecmint_vg**.
### Step 4: Create New Physical Volume ###
**5.** Before creating a physical volume on our newly added SSD drive, we need to define a partition on it using fdisk. Don't forget to change the type to LVM (8e) while creating the partition; a non-interactive alternative using parted is sketched below.
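If you prefer a non-interactive tool over fdisk, parted can label the new disk, create the partition, and set the LVM flag in one pass (a hedged alternative, not used in the original article; run it only on the new, empty drive, since mklabel wipes any existing partition table):
# parted -s /dev/sda mklabel msdos
# parted -s /dev/sda mkpart primary 1MiB 100%
# parted -s /dev/sda set 1 lvm on
Either way, once the partition exists, create the physical volume on it and verify: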
# pvcreate /dev/sda1 -v
# pvs
![Create Physical Volume](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Physical-Volume.png)
Create Physical Volume
**6.** Next, add the newly created physical volume to the existing volume group tecmint_vg using the vgextend command.
# vgextend tecmint_vg /dev/sda1
# vgs
![Add Physical Volume](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Physical-Volume.png)
Add Physical Volume
**7.** To get the full information about the volume group, use the vgdisplay command.
# vgdisplay tecmint_vg -v
![List Volume Group Info](http://www.tecmint.com/wp-content/uploads/2014/10/List-Volume-Group-Info.png)
List Volume Group Info
**Note**: In the above screen, we can see at the end of the output that our PV has been added to the volume group.
**8.** If we need to know more about which devices are mapped, use the **dmsetup** dependency command.
# lvs -o+devices
# dmsetup deps /dev/tecmint_vg/tecmint_lv
In the above results, there is **1** dependency (one PV, i.e. one drive), listed by its device numbers, ending in **17**. If you want to confirm this, look at the devices and their major and minor numbers, which identify the attached drives.
# ls -l /dev | grep vd
![List Device Information](http://www.tecmint.com/wp-content/uploads/2014/10/List-Device-Information.png)
List Device Information
**Note**: In the above output, we can see that major number **252** and minor number **17** correspond to **vdb1**. I hope that makes the earlier dmsetup output clear.
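Another way to resolve a major:minor pair to its device (a hedged extra, assuming the /sys/dev/block interface present on CentOS 6 kernels) is to follow the corresponding sysfs symlink, which points at the backing device, here vdb1:
# ls -l /sys/dev/block/252:17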
### Step 5: LVM Mirroring Method ###
**9.** Now it's time to do the migration using the mirroring method; use the **lvconvert** command to migrate the data from the old drive to the new one.
# lvconvert -m 1 /dev/tecmint_vg/tecmint_lv /dev/sda1
- **-m** = mirror
- **1** = adding a single mirror
![Mirroring Method Migration](http://www.tecmint.com/wp-content/uploads/2014/10/Mirroring-Method-Migration.png)
Mirroring Method Migration
**Note**: The above migration process will take a long time, depending on the volume size.
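While the mirror is syncing, the progress can be watched from another terminal (a hedged extra, assuming the copy_percent report field of lvs):
# lvs -a -o +devices,copy_percent tecmint_vg
When the sync percentage reaches 100, the copy on /dev/sda1 is complete and it is safe to proceed with the next step.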
**10.** Once the migration process is completed, verify the converted mirror.
# lvs -o+devices
![Verify Converted Mirror](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Converted-Mirror.png)
Verify Converted Mirror
**11.** Once you are sure that the converted mirror is fully in sync, you can remove the old virtual disk **vdb1**. The option **-m 0** removes the mirror; earlier we used **1** to add the mirror.
# lvconvert -m 0 /dev/tecmint_vg/tecmint_lv /dev/vdb1
![Remove Virtual Disk](http://www.tecmint.com/wp-content/uploads/2014/10/Remove-Virtual-Disk.png)
Remove Virtual Disk
**12.** Once the old virtual disk is removed, you can re-check the devices backing the logical volume using the following commands.
# lvs -o+devices
# dmsetup deps /dev/tecmint_vg/tecmint_lv
# ls -l /dev | grep sd
![Check New Mirrored Device](http://www.tecmint.com/wp-content/uploads/2014/10/Check-New-Mirrored-Device.png)
Check New Mirrored Device
In the above picture, you can see that our logical volume now depends on device **8,1**, i.e. **sda1**. This indicates that our migration is done.
**13.** Now verify the files that we've migrated from the old drive to the new one. If the same data is present on the new drive, it means we have done every step perfectly.
# cd /mnt/lvm/
# cat tecmint.txt
![Check Mirrored Data](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Mirrored-Data.png)
Check Mirrored Data
**14.** After everything is verified, it's time to remove **vdb1** from the volume group, and then confirm which devices the volume group depends on.
# vgreduce /dev/tecmint_vg /dev/vdb1
# vgs -o+devices
**15.** After removing vdb1 from the volume group **tecmint_vg**, our logical volume is still present, because we have migrated it from **vdb1** to **sda1**.
# lvs
![Delete Virtual Disk](http://www.tecmint.com/wp-content/uploads/2014/10/Delete-Virtual-Disk.png)
Delete Virtual Disk
### Step 6: LVM pvmove Mirroring Method ###
**16.** Instead of using the **lvconvert** mirroring command, we can use the **pvmove** command with the **-n** (logical volume name) option to move the data between two devices.
# pvmove -n /dev/tecmint_vg/tecmint_lv /dev/vdb1 /dev/sda1
This command is one of the simplest ways to move data between two devices, but in real environments **mirroring** is used more often than **pvmove**.
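For completeness, a hedged sketch of the whole-PV variant: if the **-n** option is omitted, pvmove relocates every extent that lives on the source PV, after which the emptied PV can be dropped from the volume group and its LVM label wiped.
# pvmove /dev/vdb1 /dev/sda1
# vgreduce tecmint_vg /dev/vdb1
# pvremove /dev/vdb1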
### Conclusion ###
In this article, we have seen how to migrate logical volumes from one drive to another. I hope you have learnt some new tricks in logical volume management. For such a setup, one must know the basics of logical volume management; for the basic setups, please refer to the links provided at the top of the article in the requirements section.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/lvm-storage-migration/#comment-331336
Author: [Babin Lonston][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](http://linux.cn/)
[a]:http://www.tecmint.com/author/babinlonston/