> This looks wrong.

Remember I was only testing read performance then.

> The 6.0 kernel will never, under any circumstances, issue a transfer larger
> than 64K to a disk device, not even a pseudodevice like RAIDframe

I'm well aware of that.  That's why I'm now testing write performance.
> What fragment size are you using?

The default, whatever that is.

I can repeat the results of my read tests so you don't have to dig for
them.  Column titles are fsbsize/SPSU, all on a five-component Level 5
RAID.

Extraction (minutes):

        16k/32   16k/8    16k/128  32k/128  64k/128  64k/32
home    163      81*      151      326**    116      79
mail    146      113*     137      525**    109      94

*  without quota (by mistake)
** without WAPL (by mistake)

In the following, + means atime, - means noatime.

find (seconds):

        16k/32   16k/8    16k/128  32k/128  64k/128  64k/32
home+   158      178      167      128      98       103
home-   79       86       76       77       73       76
mail+   82       ???      81       59       47       52
mail-   56       59       54       44       39       41

???: forgot to measure

tar (seconds):

        16k/32   16k/8    16k/128  32k/128  64k/128  64k/32
home+   577      639      536      476      406      418
home-   422      466      369      401      362      361
mail+   600      690      587      450      371      395
mail-   411      484      375      348      318      337

parallel tars (seconds):

        16k/32   16k/8    16k/128  32k/128  64k/128  64k/32
home+   430+506  497+574  389+466  379+450  269+327  332+358
home-   302+360  355+412  275+333  319+380  242+290  294+288
mail+   431+444  496+505  419+431  343+350  252+254  285+288
mail-   246+255  289+290  234+237  245+254  203+204  231+233

The last test (parallel tars) was meant to simulate concurrent access by
users, in addition to the backup run that the other two tests simulated.
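In case anyone wants to recreate one of these setups, here is a rough
sketch of how the 64k/32 column (64 KB file system block size, 32
sectors per stripe unit) could be put together.  It assumes SPSU is
RAIDframe's sectPerSU; the component names (wd0e..wd4e), serial number
and mount point are made up, and the disklabel on the raid device is
not shown -- it is not the exact procedure I used, just the general
shape of it:

    # /etc/raid0.conf -- five-component RAID 5, 32 sectors (16 KB) per SU
    START array
    # numRow numCol numSpare
    1 5 0

    START disks
    /dev/wd0e
    /dev/wd1e
    /dev/wd2e
    /dev/wd3e
    /dev/wd4e

    START layout
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
    32 1 1 5

    START queue
    fifo 100

    # configure the set, label the components, rewrite parity
    raidctl -C /etc/raid0.conf raid0
    raidctl -I 20120001 raid0        # arbitrary serial number
    raidctl -iv raid0

    # 64 KB file system block size, default fragment size
    newfs -b 65536 /dev/rraid0a

    # WAPL via -o log; add noatime for the "-" rows
    mount -o log /dev/raid0a /home

The other columns differ only in the sectPerSU value in the layout
section and in the -b argument to newfs.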