On Sun, Dec 27, 2009 at 8:40 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:

> On Sun, 27 Dec 2009, Tim Cook wrote:
>
>  How is that going to prevent blocks being spread all over the disk when
>> you've got files several GB in size being written concurrently and deleted
>> at random?  And then throw in a mix of small files as well, kiss that
>> goodbye.
>>
>
> There would certainly be blocks spread all over the disk, but a (possible)
> seek every 1MB of data is not too bad (not considering metadata seeks).  If
> the pool is allowed to get very full, then optimizations based on
> pre-allocated space stop working.
>
>
I guess it depends entirely on the space map :)
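
To put a rough number on "a seek every 1MB", here's a back-of-envelope
sketch in Python.  The drive figures are assumptions for a typical
7200RPM SATA disk (average seek, rotational latency, streaming rate),
not measurements from this pool or array:

    # Effective single-drive throughput if every 1MB of data costs one seek.
    # Assumed figures for a typical 7200RPM SATA drive, not measured values.
    seek_ms     = 8.5     # average seek time
    rotation_ms = 4.2     # average rotational latency (half a rev at 7200RPM)
    stream_mb_s = 100.0   # sustained media transfer rate

    transfer_ms = 1.0 / stream_mb_s * 1000.0   # ~10ms to move 1MB
    effective = 1.0 / ((seek_ms + rotation_ms + transfer_ms) / 1000.0)
    print(round(effective), "MB/s")            # ~44 MB/s with these assumptions

So with those assumptions, one seek per megabyte costs a single drive a
bit more than half of its streaming rate.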


>
>  Pre-allocating data blocks is also not going to cure head seek and the
>> latency it induces on slow 7200/5400RPM drives.
>>
>
> But if the next seek to a data block is on a different drive, that drive
> can be seeking for the next block while the current block is already being
> read.
>
>
Well of course.  The "just throw more disks at the problem" argument is
valid in almost all situations.  But expecting the same performance out of
drives when they're full and well used as when they're empty and new is, in
my experience, crazy.  My point from the start has been that you will see a
significant performance decrease as time passes and fragmentation sets in.
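
To illustrate the overlap Bob is describing, here's a toy model (same
assumed drive figures as the earlier sketch, and definitely not a model of
his actual STK2540 setup) of 1MB chunks laid out round-robin across N
drives, where a drive can already be seeking for its next chunk while
another drive is delivering data:

    # Toy model: round-robin 1MB chunks across N drives, with seeks on one
    # drive overlapped with transfers from the others.  Assumed figures only.
    seek_ms = 12.7        # average seek + rotational latency
    transfer_ms = 10.0    # time to transfer one 1MB chunk at ~100 MB/s

    def effective_mb_s(n_drives, n_chunks=1000):
        free_at = [0.0] * n_drives   # when each drive finishes its current chunk
        done = 0.0                   # when the latest chunk reached the host
        for i in range(n_chunks):
            d = i % n_drives
            seek_done = free_at[d] + seek_ms   # drive starts seeking once idle
            start = max(seek_done, done)       # but chunks are delivered in order
            done = start + transfer_ms
            free_at[d] = done
        return n_chunks / (done / 1000.0)      # MB/s, since chunks are 1MB

    for n in (1, 2, 4):
        print(n, "drive(s):", round(effective_mb_s(n)), "MB/s")
    # with these assumptions: ~44, ~88, and ~100 (transfer-limited) MB/s

Which is exactly the "more disks" effect: the seeks don't go away, they just
get hidden, and the hiding works less well as fragmentation drives the
contiguous chunk size down.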




>
>  On a new, empty pool, or a pool that's been filled completely and emptied
>> several times?  It's not amazing to me on a new pool.  I would be surprised
>> to see you accomplish this feat repeatedly after filling and emptying the
>> drives.  It's a drawback of every implementation of copy-on-write I've ever
>> seen.  By its very nature, I have no idea how you would avoid it.
>>
>
> This is a 2 year old pool which is typically filled (to about 80%) and
> "emptied" (reduced to 25%) many times.  However, when it is "emptied", all
> of the new files get removed since the extra space is used for testing.  I
> have only seen this pool get faster over time.
>
> For example, when the pool was first created, iozone only measured a
> single-thread large-file (64GB) write rate of 148MB/second but now it is up
> to 380MB/second with the same hardware.  The performance improvement is due
> to improvements to Solaris 10 software and array (STK2540) firmware.
>
> Original vs current:
>
>              KB  reclen   write rewrite    read    reread
>        67108864     256  148995  165041   463519   453896
>        67108864     256  380286  377397   551060   550414
>
>
C'mon, "all I did was change the software and firmware" doesn't make for a
valid comparison at all.  Ignoring that, I'm still referring to multiple
concurrent streams, which create random I/O to the back-end disks.
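
(For reference, a single-stream run like the one in Bob's table would be
something along the lines of "iozone -i 0 -i 1 -s 64g -r 256k -f
/pool/testfile"; those flags are from memory and his exact invocation isn't
shown.  The case I keep coming back to is closer to iozone's throughput
mode, e.g. "-t 8" with "-F" naming one scratch file per stream, which is
what turns a nice sequential load into random I/O at the back end.)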

--Tim
