Hello! (I'm not subscribed to this list, but I'm hoping to get a reply anyway.) While testing a SAN storage system, I needed a utility to erase disks quickly. I wrote my own, which mmap()s the block device, memset()s the area, msync()s the changes, and finally close()s the file descriptor.
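In essence the tool does the following (a stripped-down sketch, not the real flashzap source: timing output and most error handling are omitted, and I query the size via the BLKGETSIZE64 ioctl here because fstat() does not report a useful st_size for block devices; the real tool may do this differently):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/fs.h>   /* BLKGETSIZE64 */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <block-device>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* device size in bytes */
    unsigned long long size = 0;
    if (ioctl(fd, BLKGETSIZE64, &size) < 0) { perror("ioctl"); return 1; }

    void *map = mmap(NULL, (size_t)size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    memset(map, 0, (size_t)size);            /* "zap": dirty every page */
    if (msync(map, (size_t)size, MS_SYNC))   /* flush dirty pages to disk */
        perror("msync");

    munmap(map, (size_t)size);
    close(fd);
    return 0;
}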
On one disk I had a primary MBR partition spanning the whole disk, like this (output from one of my obscure tools):

disk /dev/disk/by-id/dm-name-FirstTest-32 has 20971520 blocks of size 512 (10737418240 bytes)
partition 1 (1-20971520)
Total Sectors = 20971519

When wiping, I started (for no good reason) with partition 1, then wiped the whole disk. The disk is 4-way multipathed to an 8Gb FC SAN, and the disk system is all-SSD (32x2TB). I am using kernel 3.0.101-80-default of SLES11 SP4. For the test I had reduced the amount of RAM via "mem=4G". The machine's RAM bandwidth is about 9GB/s.

To my surprise, I found that the partition costs significant throughput (not quite 50%, but a lot):

### Partition

h10:~ # ./flashzap -f -s /dev/disk/by-id/dm-name-FirstTest-32_part1
time to open /dev/disk/by-id/dm-name-FirstTest-32_part1: 0.000042s
time for fstat(): 0.000017s
time to map /dev/disk/by-id/dm-name-FirstTest-32_part1 (size 10.7Gib) at 0x7fbc86739000: 0.000039s
time to zap 10.7Gib: 52.474054s (204.62 MiB/s)
time to sync 10.7Gib: 4.148350s (2588.36 MiB/s)
time to unmap 10.7Gib at 0x7fbc86739000: 0.052170s
time to close /dev/disk/by-id/dm-name-FirstTest-32_part1: 0.770630s

### Whole disk

h10:~ # ./flashzap -f -s /dev/disk/by-id/dm-name-FirstTest-32
time to open /dev/disk/by-id/dm-name-FirstTest-32: 0.000022s
time for fstat(): 0.000061s
time to map /dev/disk/by-id/dm-name-FirstTest-32 (size 10.7Gib) at 0x7fa2434cc000: 0.000037s
time to zap 10.7Gib: 24.580162s (436.83 MiB/s)
time to sync 10.7Gib: 1.097502s (9783.51 MiB/s)
time to unmap 10.7Gib at 0x7fa2434cc000: 0.052385s
time to close /dev/disk/by-id/dm-name-FirstTest-32: 0.290470s

Reproducible:

h10:~ # ./flashzap -f -s /dev/disk/by-id/dm-name-FirstTest-32
time to open /dev/disk/by-id/dm-name-FirstTest-32: 0.000039s
time for fstat(): 0.000065s
time to map /dev/disk/by-id/dm-name-FirstTest-32 (size 10.7Gib) at 0x7f1cc17ab000: 0.000037s
time to zap 10.7Gib: 24.624000s (436.06 MiB/s)
time to sync 10.7Gib: 1.199741s (8949.79 MiB/s)
time to unmap 10.7Gib at 0x7f1cc17ab000: 0.069956s
time to close /dev/disk/by-id/dm-name-FirstTest-32: 0.327232s

So without the partition, the throughput is about twice as high! Why?

Regards,
Ulrich