04 July, 2009

LVM Advanced Installation Notes:

1) The problem
After the default installation (see previous post) I noticed that performance was not satisfactory.
I ran bonnie++ and other I/O benchmarking software.
The problem can be illustrated as follows:
hdparm -tT /dev/md0
gives reasonable performance (380 MB/s reads), while
hdparm -tT /dev/vgvol/dir
gives abysmal performance (120 MB/s, equivalent to that of a single drive).

This suggests a problem with the alignment between the RAID, LVM, and XFS layers.
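
For reference, a minimal sketch of how one might diagnose and fix this (assumptions: a 64k chunk and 2 data drives, hence a 128k full stripe -- substitute your own values; beware that pvcreate destroys the existing volume group, and --dataalignment needs a recent lvm2):

# mdadm --detail /dev/md0 | grep -i chunk
# pvs -o +pe_start /dev/md0
# pvcreate --dataalignment 128k /dev/md0
# mkfs.xfs -d su=64k,sw=2 /dev/vgvol/dir

The first two commands show the array's chunk size and where the LVM data area starts (ideally a multiple of the full stripe); the last one passes the stripe geometry to XFS (su = chunk size, sw = number of data disks).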

2) Raid Information
The following resource provides a lot of useful information regarding RAID installation:
RAID HOWTO

In particular, it defines the superblock and gives lots of useful information on mdadm and its use.

3) File mdadm.conf
/etc/mdadm.conf is mdadm's primary configuration file. Unlike /etc/raidtab, mdadm does not rely on /etc/mdadm.conf to create or manage arrays. Rather, mdadm.conf is simply an extra way of keeping track of software RAIDs. Using a configuration file with mdadm is useful, but not required. Having one means you can quickly manage arrays without spending extra time figuring out what the array properties are and where the disks belong. For example, if an array wasn't running and there was no mdadm.conf file describing it, then the system administrator would need to spend time examining individual disks to determine the array properties and member disks.

# mdadm --detail --scan
ARRAY /dev/md0 level=raid0 num-devices=2   \
    UUID=410a299e:4cdd535e:169d3df4:48b7144a

If there were multiple arrays running on the system, then mdadm would generate an array line for each one. So after you're done building arrays, you can redirect the output of mdadm --detail --scan to /etc/mdadm.conf. Just make sure that you manually create a DEVICE entry as well (note that in mdadm.conf a line beginning with whitespace is treated as a continuation of the previous line, so no trailing backslash is needed). Using the example above, we might have an /etc/mdadm.conf that looks like:

DEVICE    /dev/sdb1 /dev/sdc1
ARRAY     /dev/md0 level=raid0 num-devices=2
    UUID=410a299e:4cdd535e:169d3df4:48b7144a
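
Putting the two steps together (a sketch, reusing the device names from above):

# echo 'DEVICE /dev/sdb1 /dev/sdc1' > /etc/mdadm.conf
# mdadm --detail --scan >> /etc/mdadm.conf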


4) Choices
HW vs SoftRaid vs FakeRaid:
I had all three options available (I have a RAID controller, an ICH10R motherboard, and I only run Linux).
Pros and cons are discussed below.
I chose softraid because:
- I have a fast processor.
- I only run Linux.
- From what I have seen, it is reliable and fast; compared to fakeraid (dmraid), it is more stable and slightly faster.
See also the following for more discussion:
Link 1, Link 2, Link 3



Superblock:

It turns out there are multiple superblock (metadata) versions. The version in use is reported when running
mdadm --detail /dev/md0   (under "Version")
See link for more information.
Update: Add here choice ....
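
For the record, a quick sketch of checking the version of an existing array, and of picking one explicitly at creation time (the 1.2 value below is just an example, not a recommendation, and re-creating an array of course destroys its contents):

# mdadm --detail /dev/md0 | grep -i version
# mdadm --create /dev/md0 --metadata=1.2 --level=raid0 --raid-devices=2 /dev/sdb1 /dev/sdc1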


Swap file Location:
There is some debate about where to put swap if you have a RAID setup... Should it go on the RAID,
or on a separate partition?

Three solutions are proposed:
- A separate RAID 1 for swap across 2 drives (so that if a drive fails, swap survives on the other).
- A swap partition on each of the drives, letting the kernel decide where to place swapped pages.
- Swap on the RAID 5 itself.

After looking around the following discussion is the most convincing:
If you have everything on RAID on your server, it's often debated whether you want your swap partition on RAID as well. Some will state, correctly, that Linux optimally uses two swap partitions (e.g. on /dev/sda2 and /dev/sdb2) and that putting the swap on a RAID impacts swap performance. While this is technically correct, it is nonsense when it comes to availability.
First: if swap performance is an issue, the problem isn't RAID or no RAID, it is too little RAM. Under normal circumstances, swap should be used only sparingly -- if at all. From time to time the system might swap out something not used for a while. If a larger amount of swap is used on a regular basis, either there is a memory leak in one of the running applications, or you simply do not have enough RAM for the tasks at hand. Go buy some!
Second: while Linux can indeed distribute swapped pages across several swap partitions, once one of them suddenly disappears because the underlying disk died, the system simply crashes. And that's exactly what you don't want.

Conclusion: put the swap on RAID, just like everything else.

Swap on RAID 5 for me
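
In practice that means something like the following sketch (assuming a swap logical volume in the existing vgvol group; the 4G size is arbitrary):

# lvcreate -L 4G -n swap vgvol
# mkswap /dev/vgvol/swap
# swapon /dev/vgvol/swap

plus a matching entry in /etc/fstab so the swap comes up at boot.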


