On Tue, 19 Jan 2016 05:46:52 PM Piers Rowan via luv-main wrote:
> LVM was put in as a measure because of the default CentOS install
> running out of disk space.

If you have space for another virtual disk you can just copy it to a new
image. This is much easier than doing the same for a non-virtual machine.

> Previously to that I used NFS mounts for /home /var/spool/mail but this
> played up with shared mailboxes and IMAP/Dovecot on large (4GB+ mail
> files).

NFS also performs poorly for this sort of thing (and most other things).

On Tue, 19 Jan 2016 05:52:38 PM Piers Rowan via luv-main wrote:
> > 2. is /home RAID-5? i'm guessing it is since RAID-10 with 3 drives and
> > a hot-spare doesn't make any sense. RAID-5 can be dreadfully slow,
> > especially on random writes.
>
> RAID 5 + hot spare
>
> > what kind of virtual disks are you using for the VMs? qcow2 image
> > files? raw or LVM partitions? partitions are much faster than qcow2
> > files.
>
> Not sure - can't break up the HP array to give it a dedicated disk now
> tho. Too much risk of downtime.

http://etbe.coker.com.au/2008/08/05/new-hp-server/

The HP RAID admin commands probably haven't changed in the last 8 years,
so the above web page might help you discover what is going on with the
array.

Also note that for decent performance an HP RAID-5 or RAID-6 array needs
a battery for the write-back cache, and the write-back cache will be
disabled if the controller thinks the battery is worn out. Compared to
all the other options for making HP hardware perform well, buying a
battery for the cache is the cheapest way to improve performance. The
performance of your system suggests that you don't have a battery for
the cache, or that it's not enabled.

Of course a RAID-1 of SSDs will massively outperform the RAID-5 you
have. Given the size I guess it's one of the older HP servers that only
takes ~70G disks. If you buy a cheap Dell PowerEdge server and put a
couple of Intel SSDs in a RAID-1 configuration (not bought from Dell,
because Dell charges heaps for storage) it will massively outperform the
old HP server.
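On the battery point: the cache and battery state can be queried with HP's
Smart Array CLI (hpacucli on machines of that vintage; on newer boxes the
tool was renamed hpssacli and then ssacli). A sketch, where the status
block shown is an invented example rather than output captured from this
server:

```shell
# Query the Smart Array controller state (run as root):
#   hpacucli ctrl all show status
#
# A hypothetical status block of the kind it prints (assumed, not real):
status='Smart Array P400 in Slot 1
   Controller Status: OK
   Cache Status: Temporarily Disabled
   Battery/Capacitor Status: Failed (Replace Batteries/Capacitors)'

# If the battery line is anything other than OK, the write-back cache is
# off and RAID-5 random writes will crawl; these are the two lines to read:
echo "$status" | grep -E 'Cache Status|Battery'
```

A failed battery showing up there would explain the iostat numbers below
without any further digging.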
> >> iostat -x 10
> >> Linux 2.6.32-573.7.1.el6.x86_64    19/01/16    _x86_64_    (12 CPU)
> >>
> >> avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
> >>            5.27   0.00     1.74     3.98    0.00  89.01
> >>
> >> Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s avgrq-sz avgqu-sz  await  svctm  %util
> >> sda      144.40  398.34  92.54  95.71  5790.73  4493.15    54.63     0.32   1.71   1.38  25.92
> >
> > so you're running a backup at the moment? is that when the slowdown
> > occurs, or does it happen any old time?
>
> No it was used for a project a couple of years ago and never unplugged

Why is it being accessed then?

> >> dm-2       0.00    0.00   0.00    0.00     0.00     0.00     8.00     0.00   3.36   3.07   0.00
> >> dm-3       0.00    0.00   0.00    0.00     0.00     0.00     8.00     0.00   0.00   0.00   0.00
> >> dm-4       0.00    0.00   9.63    8.24    77.01    65.93     8.00     0.22  12.41   1.52   2.72
> >> dm-5       0.00    0.00  73.20  404.46  4478.78  3774.52    17.28     0.24   0.27   0.46  21.75
> >
> > it's odd that most of the I/O is on just one of these /home drives.
> >
> > craig
>
> I guessed that was how the RAID card presented itself to the OS

That's usually not how things work. HP arrays usually appear as
/dev/cciss/*. Run "ls -l /dev/mapper" and see what the dm devices are
for.

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/
_______________________________________________
luv-main mailing list
[email protected]
http://lists.luv.asn.au/listinfo/luv-main
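On the "ls -l /dev/mapper" suggestion: the dm-N nodes iostat reports are
device-mapper targets (here, LVM logical volumes), not anything the RAID
card presents, and /dev/mapper contains symlinks whose targets name the
dm node. A sketch of matching a busy dm device back to its LV, using an
invented LV name (vg00-home) purely for illustration:

```shell
# One hypothetical line of "ls -l /dev/mapper" output (LV name made up):
line='lrwxrwxrwx 1 root root 7 Jan 19 18:00 vg00-home -> ../dm-5'

# The symlink target is the dm node that shows up in iostat:
dm="${line##*/}"        # strip everything up to the last "/": dm-5
lv="${line%% ->*}"      # drop the " -> ../dm-5" tail
lv="${lv##* }"          # keep the last remaining field: the LV name
echo "$lv is $dm"
```

On a live box, "lsblk" or "dmsetup ls" gives the same mapping directly.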
