David Collier-Brown wrote:
>>   ZFS copy-on-write results in tables' contents being spread across
>> the full width of their stripe, which is arguably a good thing
>> for transaction processing performance (or at least can be), but
>> makes sequential table-scan speed degrade.
>>  
>>   If you're doing sequential scans over large amounts of data
>> which isn't changing very rapidly, such as older segments, you
>> may want to re-sequentialize that data.
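A minimal sketch of that re-sequentializing step, at the file level (the paths are made-up stand-ins, and the demo file is created only so the commands run). Rewriting a cold segment in one sequential pass makes ZFS allocate fresh, largely contiguous blocks for the copy:

```shell
# Sketch: re-sequentialize a cold table segment by rewriting it in one
# sequential pass.  Under ZFS copy-on-write the copy is freshly
# allocated, so its blocks come out largely contiguous on disk.
# $SEG stands in for a real segment file; we create a small demo file
# here so the commands actually execute.
SEG=$(mktemp /tmp/segment.XXXXXX)
dd if=/dev/zero of="$SEG" bs=1024 count=4 2>/dev/null  # stand-in data
cp "$SEG" "$SEG.new"       # one sequential read + one sequential write
mv "$SEG.new" "$SEG"       # swap the contiguous copy into place
```

On a live database you would of course do this through the database itself (a dump/reload, or a table-rewriting command if the engine offers one) rather than on raw files, so the engine never sees a half-rewritten segment.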

Richard Elling <[EMAIL PROTECTED]> wrote 
> There is a general feeling that COW, as used by ZFS, will cause
> all sorts of badness for database scans.  Alas, there is a dearth of
> real-world data on any impacts (I'm anxiously awaiting...)
> There are cases where this won't be a problem at all, but it will
> depend on how you use the data.

I quite agree: at some point, the experts on Oracle, MySQL and
PostgreSQL will arrive at a clear understanding of how to get the
best performance for random database I/O on ZFS.  I'll be
interested to see how large, high-performance systems behave.
In the meantime...

> In this particular case, it would be cost effective to just buy a
> bunch of RAM and not worry too much about disk I/O during
> scans.  In the future, if you significantly outgrow the RAM, then
> there might be a case for a ZFS (L2ARC) cache LUN to smooth
> out the bumps.  You can probably defer that call until later.
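For reference, attaching an L2ARC device to a pool is a one-line operation; "tank" and the device name below are placeholders for your pool and a fast SSD LUN:

```shell
# Add a cache (L2ARC) device to an existing pool.  Reads that miss the
# in-RAM ARC can then be served from the cache device instead of the
# main spinning disks.  "tank" and "c0t5d0" are placeholder names.
zpool add tank cache c0t5d0
zpool status tank   # the device shows up under a separate "cache" heading
```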

... it's a Really Nice Thing that large memories only cost small 
dollars (;-))

--dave
-- 
David Collier-Brown            | Always do right. This will gratify
Sun Microsystems, Toronto      | some people and astonish the rest
[EMAIL PROTECTED]                 |                      -- Mark Twain
(905) 943-1983, cell: (647) 833-9377, (800) 555-9786 x56583
bridge: (877) 385-4099 code: 506 9191#
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss