Migrating from Multipath to PowerPath on Red Hat
Introduction
PowerPath is EMC's multipathing solution for its disk arrays. The device-mapper-multipath package on Red Hat is buggy enough that you shouldn't run it in production environments. This migration was carried out on Red Hat Enterprise Linux 4.6.
Reminder: multipathing provides redundancy by using two links to a disk array, so that losing one path doesn't cause I/O errors.
Uninstalling multipath
The name of the multipath package (if it is installed) can be found this way:
rpm -qa | grep multipath
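On the machine used here, this returned the package that gets removed in the next step:
device-mapper-multipath-0.4.5-27.el4_6.3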
Next, just pass that package name to the rpm command:
rpm -e device-mapper-multipath-0.4.5-27.el4_6.3
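To confirm the removal worked, run the query again; it should now return nothing:
rpm -qa | grep multipath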
Now reboot!
Installation
Now we'll install the PowerPath package. Take it from the installation CD (or wherever you keep it) and install it:
rpm -ivh EMCpower.Linux*.rpm
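You can check that the package landed with another rpm query (the exact name depends on your PowerPath version):
rpm -qa | grep EMCpower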
Next, register your license key:
emcpreg -install
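If you want to double-check the registration afterwards, emcpreg can list the installed keys:
emcpreg -list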
Now update the initial ramdisk:
mkinitrd -f /boot/initrd-`uname -r`.img `uname -r`
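A quick way to confirm the ramdisk really was rebuilt is to check its timestamp:
ls -l /boot/initrd-`uname -r`.img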
Then reboot again!
PowerPath verification
You may need to start the PowerPath service:
/etc/init.d/PowerPath start
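You can also verify at this point that the license key you entered is valid:
powermt check_registration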
You may also have to reboot the server once more. Then run the powermt command to verify that your LUN shows up:
$ powermt display dev=all
Pseudo name=emcpowerb                                  <- all paths below are handled through this pseudo-device
CLARiiON ID=APM00023500472
Logical device ID=600601F0310A00006F80C3D32D69D711     <- unique ID of the LUN (LUN Properties)
state=alive; policy=CLAROpt; priority=0; queued-IOs=0  <- properties/status of the paths
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path --   -- Stats ---
###  HW Path                  I/O Paths   Interf.   Mode     State   Q-IOs Errors
==============================================================================
  2 QLogic Fibre Channel 2300    sdc      SP A1     active   alive      0      0
  2 QLogic Fibre Channel 2300    sde      SP B0     active   alive      0      0
  3 QLogic Fibre Channel 2300    sdg      SP B1     active   alive      0      0
  3 QLogic Fibre Channel 2300    sdi      SP A0     active   alive      0      0
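Once all paths show up as alive, it's a good idea to save the PowerPath configuration so it persists across reboots:
powermt save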
You can also check that the emcpower pseudo-devices exist:
$ ls /dev/emcpower*
emcpower emcpowera
Configuring LVM
Everything is in order now, but you still need to recover your existing LVM volumes. Edit /etc/lvm/lvm.conf and adjust the filter so that LVM scans the emcpower pseudo-devices instead of the underlying sd paths (the sdb entry below matches the local disk on this system; adapt it to yours):
...
filter = [ "a|/dev/emcpower.*|", "a|/dev/sdb[1-9]|", "a|/dev/mapper/.*|", "r|.*|" ]
...
Now you can reboot or rebuild the LVM cache:
vgscan -v
lvmdiskscan
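If a volume group still shows up as inactive after the scan, you can activate it by hand:
vgchange -ay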
Now you should see all your disks :-). Take a look at /dev/mapper and you'll find them there too.
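For example (the volume group and logical volume names below are placeholders for whatever your setup uses):
ls /dev/mapper
control  VolGroup00-LogVol00  VolGroup00-LogVol01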