On Mon, 25 Sep 2006, Roch wrote:
It looks like, on the second run, you had a lot more free
memory and mkfile completed at near-memcpy speed.

   Both times the system was near idle.

Something is awry on the first pass, though. Running

        zpool iostat 1

can shed some light on this. In the second case, I/O will keep
going after the mkfile completes. For the first one, there may
have been an interaction with I/O loads that had not yet finished?

    The old drives aren't in the system any more, but I did try
this with the new drives.  I ran "mkfile -v 1g zeros-1g" a couple
of times while "zpool iostat -v 1" was running in another window
(the run is recapped after the output below).  There were seven
stats like the first one here, where it is writing to disk.  The
next-to-last is where the bandwidth drops, as there isn't enough
I/O to fill out that second, followed by all-zero lines of no
I/O.  I didn't see any "write behind" -- once the I/O was done I
didn't see any more until I started something else.

|                capacity     operations    bandwidth
| pool         used  avail   read  write   read  write
| ----------  -----  -----  -----  -----  -----  -----
| tank        26.1G  1.34T      0  1.13K      0   134M
|   raidz1    26.1G  1.34T      0  1.13K      0   134M
|     c0t1d0      -      -      0    367      0  33.6M
|     c0t2d0      -      -      0    377      0  35.5M
|     c0t3d0      -      -      0    401      0  35.0M
|     c0t4d0      -      -      0    411      0  36.0M
|     c0t5d0      -      -      0    424      0  34.9M
| ----------  -----  -----  -----  -----  -----  -----
|                capacity     operations    bandwidth
| pool         used  avail   read  write   read  write
| ----------  -----  -----  -----  -----  -----  -----
| tank        26.4G  1.34T      0  1.01K    560   118M
|   raidz1    26.4G  1.34T      0  1.01K    560   118M
|     c0t1d0      -      -      0    307      0  29.6M
|     c0t2d0      -      -      0    309      0  27.6M
|     c0t3d0      -      -      0    331      0  28.1M
|     c0t4d0      -      -      0    338  35.0K  27.0M
|     c0t5d0      -      -      0    338  35.0K  28.3M
| ----------  -----  -----  -----  -----  -----  -----
|                capacity     operations    bandwidth
| pool         used  avail   read  write   read  write
| ----------  -----  -----  -----  -----  -----  -----
| tank        26.4G  1.34T      0      0      0      0
|   raidz1    26.4G  1.34T      0      0      0      0
|     c0t1d0      -      -      0      0      0      0
|     c0t2d0      -      -      0      0      0      0
|     c0t3d0      -      -      0      0      0      0
|     c0t4d0      -      -      0      0      0      0
|     c0t5d0      -      -      0      0      0      0
| ----------  -----  -----  -----  -----  -----  -----
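
    For reference, the whole test boils down to two windows like
the ones below.  /tank is just the default mountpoint for a pool
named tank, and ptime is only a suggestion so the wall-clock time
of mkfile can be compared with how long the writes keep showing
up in iostat:

        # window 1: per-vdev stats at 1-second intervals
        zpool iostat -v 1

        # window 2: create a 1 GB file on the pool and time it
        cd /tank
        ptime mkfile -v 1g zeros-1g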

   As things stand now, I am happy.

   I do wonder what accounts for the improvement -- seek
time, transfer rate, disk cache, or something else?  Does
anyone have a dtrace script to measure this that they would
be willing to share?
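
    A sketch of the sort of thing I am after, using the io
provider's start/done probes -- it would measure per-device
service times and bytes moved rather than seek time as such:

    #!/usr/sbin/dtrace -s
    #pragma D option quiet

    /* Record issue time of each physical I/O, keyed on the buf pointer. */
    io:::start
    {
            start[arg0] = timestamp;
    }

    /*
     * On completion, build a per-device latency distribution (ns)
     * and total the bytes moved.  dev_statname is the driver
     * instance (e.g. sd1), not the c0t1d0 name.
     */
    io:::done
    /start[arg0]/
    {
            @latency[args[1]->dev_statname] = quantize(timestamp - start[arg0]);
            @bytes[args[1]->dev_statname] = sum(args[0]->b_bcount);
            start[arg0] = 0;
    }

Left running during the mkfile, it prints the aggregations when
interrupted with Ctrl-C.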

harley.