Neil Bothwick wrote:
> Hello Daniel Iliev,
>
>> Actually I'd be glad to read some results from "Fake RAID-0 vs LVM"
>> tests. My bet would be that RAID-0 w/o LVM would give the best speeds.
>
> Omitting LVM isn't an option; I'd lose all the flexibility that LVM
> offers. I don't see why RAID-0 should necessarily be more efficient than
> LVM, unless there's something superior about RAID-0's striping
> algorithms. I could do some before and after tests, but I'd first have to
> reformat the filesystems to remove any effects of fragmentation.
>
> If no one comes up with a good reason for keeping the RAID, I'll get rid
> of it, running bonnie++ before and after.
Hi, Neil!

Out of curiosity I ran some tests, which confirmed my expectations. What
about you - did you have the time (and the inclination) to run some
performance benchmarks? I would be glad to see some additional results.

I'm attaching my tests in a file called "bench.txt".

--
Best regards,
Daniel
echo y | mdadm -C /dev/md9 -n2 /dev/sda11 /dev/sdb11 -l0
mkfs.xfs /dev/md9
mkdir /test
mount /dev/md9 /test
dd if=/dev/urandom of=/test.rnd bs=1M count=1500

time cp /test.rnd /test

real    0m44.981s
user    0m0.036s
sys     0m6.967s

sync
time mv /test.rnd /test

real    0m47.514s
user    0m0.047s
sys     0m7.077s

sync
time mv /test/test.rnd /

real    0m53.863s
user    0m0.060s
sys     0m8.885s

mdadm --stop /dev/md9

pvcreate /dev/sda11
pvcreate /dev/sdb11
vgcreate test /dev/sda11
vgextend test /dev/sdb11
vgdisplay | grep 'Total PE'
  Total PE              1686
lvcreate -i2 -l1686 -nlogvol test
mkfs.xfs /dev/test/logvol
mount /dev/test/logvol /test

time cp /test.rnd /test

real    1m12.183s
user    0m0.039s
sys     0m9.570s

sync
time mv /test.rnd /test

real    0m51.643s
user    0m0.044s
sys     0m7.275s

sync
time mv /test/test.rnd /

real    1m54.937s
user    0m0.047s
sys     0m9.556s

=================
BOTTOM LINE:

cp /test.rnd /test
    LVM:    20.78 [MB/s]
    RAID-0: 33.34 [MB/s]

mv /test.rnd /test
    LVM:    29.04 [MB/s]
    RAID-0: 31.56 [MB/s]

mv /test/test.rnd /
    LVM:    13.05 [MB/s]
    RAID-0: 27.84 [MB/s]

Strange: I repeated the last LVM test because it looked like an outlier on
the low side, but the result was again very low:

time mv /test/test.rnd /

real    0m1m27.775s
user    0m0.050s
sys     0m9.813s

which is: 1500/87.775 = 17.089 [MB/s]
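The [MB/s] figures in the bottom line are just the 1500 MB test file divided
by the wall-clock ("real") time of each run, truncated to two decimal places.
They can be recomputed in one go with awk (the row labels here are mine):

```shell
# Throughput = 1500 MB / wall-clock seconds, truncated (not rounded)
# to two decimals, matching the convention of the figures above.
awk 'function t(x) { return int(x * 100) / 100 }
BEGIN {
    printf "cp into fs    RAID-0: %.2f  LVM: %.2f [MB/s]\n", t(1500/44.981),  t(1500/72.183)
    printf "mv into fs    RAID-0: %.2f  LVM: %.2f [MB/s]\n", t(1500/47.514),  t(1500/51.643)
    printf "mv out of fs  RAID-0: %.2f  LVM: %.2f [MB/s]\n", t(1500/53.863),  t(1500/114.937)
}'
```

Note that `dd` writes /test.rnd from /dev/urandom, so these runs measure the
target filesystem under an incompressible 1500 MB stream; the `sync` between
runs keeps the page cache from flattering the next result.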