Greetings, learned ZFS geeks & gurus,

Yet another question from my continued ZFS performance testing. This one has to 
do with zpool iostat and some strange behaviour I am seeing.
I’ve created an eight (8) disk raidz pool from a Sun 3510 fibre array, giving me 
a 465G volume.
# zpool create tp raidz c4t600 ... 8 disks worth of zpool
# zfs create tp/pool
# zfs set recordsize=8k tp/pool
# zfs set mountpoint=/pool tp/pool
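
To double-check the setup, the layout and the dataset properties can be verified
with the standard commands:
# zpool status tp
# zfs get recordsize,mountpoint tp/pool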

I then create a 100G data file by sequentially writing 64k blocks to it (a rough 
dd-style equivalent of the writer is at the end of this message). While that 
write is running, I issue
# zpool iostat -v tp 10
and I see the following strange behaviour: anywhere up to 16 iterations (i.e. 160 
seconds) of output like this, where there are writes to only 2 of the 8 disks:

                                           capacity     operations    bandwidth
pool                                     used  avail   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
testpool                                29.7G   514G      0  2.76K      0  22.1M
  raidz1                                29.7G   514G      0  2.76K      0  22.1M
    c4t600C0FF0000000000A74531B659C5C00d0s6      -      -      0      0      0      0
    c4t600C0FF0000000000A74533F3CF1AD00d0s6      -      -      0      0      0      0
    c4t600C0FF0000000000A74534C5560FB00d0s6      -      -      0      0      0      0
    c4t600C0FF0000000000A74535E50E5A400d0s6      -      -      0  1.38K      0  2.76M
    c4t600C0FF0000000000A74537C1C061500d0s6      -      -      0      0      0      0
    c4t600C0FF0000000000A745343B08C4B00d0s6      -      -      0      0      0      0
    c4t600C0FF0000000000A745379CB90B600d0s6      -      -      0      0      0      0
    c4t600C0FF0000000000A74530237AA9300d0s6      -      -      0  1.38K      0  2.76M
--------------------------------------  -----  -----  -----  -----  -----  -----

During these periods, my data file does not grow in size, but then I see writes 
to all of the disks like the following:

                                           capacity     operations    bandwidth
pool                                     used  avail   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
testpool                                64.0G   480G      0  1.45K      0  11.6M
  raidz1                                64.0G   480G      0  1.45K      0  11.6M
    c4t600C0FF0000000000A74531B659C5C00d0s6      -      -      0    246      0  8.22M
    c4t600C0FF0000000000A74533F3CF1AD00d0s6      -      -      0    220      0  8.23M
    c4t600C0FF0000000000A74534C5560FB00d0s6      -      -      0    254      0  8.20M
    c4t600C0FF0000000000A74535E50E5A400d0s6      -      -      0    740      0  1.45M
    c4t600C0FF0000000000A74537C1C061500d0s6      -      -      0    299      0  8.21M
    c4t600C0FF0000000000A745343B08C4B00d0s6      -      -      0    284      0  8.21M
    c4t600C0FF0000000000A745379CB90B600d0s6      -      -      0    266      0  8.22M
    c4t600C0FF0000000000A74530237AA9300d0s6      -      -      0    740      0  1.45M
--------------------------------------  -----  -----  -----  -----  -----  -----

And when that happens my data file does increase in size. Also notice in the 
above that the two disks that were being written to before still show a load 
consistent with the previous example.
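
In case anyone asks how I'm judging the file growth: nothing fancier than checking
the file size between iostat intervals, with something along the lines of
# while true; do ls -l /pool/testfile; sleep 10; done
(testfile is just a placeholder here for whatever the test program names its
output file).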

For background, the server and the storage are dedicated solely to this testing, 
and no other applications are running at this time.

I thought that RAID-Z would spread its load across all of the disks fairly 
evenly. Can someone explain this result? I can reproduce it consistently.
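
For reference, the writer just streams 64k blocks into the file back to back; a
minimal dd equivalent of that workload (not the exact tool I use, and the path is
just my mountpoint plus a placeholder file name) would be
# dd if=/dev/zero of=/pool/testfile bs=64k count=1638400
i.e. 1,638,400 x 64k = 100G of sequential writes.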

Thanks
-Tony
 
 