Peter Rival writes:
 > Roch Bourbonnais - Performance Engineering wrote:
 > > Tao Chen writes:
 > >  > On 5/12/06, Roch Bourbonnais - Performance Engineering
 > >  > <[EMAIL PROTECTED]> wrote:
 > >  > >
 > >  > >   From: Gregory Shaw <[EMAIL PROTECTED]>
 > >  > >   Regarding directio and quickio, is there a way with ZFS to skip the
 > >  > >   system buffer cache?  I've seen big benefits for using directio when
 > >  > >   the data files have been segregated from the log files.
 > >  > >
 > >  > >
 > >  > > Were the benefits coming from extra concurrency (no single writer lock)
 > >  > 
 > >  > Does DIO bypass the "writer lock" on Solaris?
 > >
 > > Yep.
 > >
 > >  > Not on AIX, which uses CIO (concurrent I/O) to bypass lock management
 > >  > at the filesystem level:
 > >  > 
 > > http://oracle.ittoolbox.com/white-papers/improving-database-performance-with-aix-concurrent-io-2582
 > >  > 
 > >  > > or avoiding the extra copy to page cache
 > >  > 
 > >  > Certainly. Also to avoid VM overhead (databases do like raw devices).
 > >
 > > OK, but again, is it to avoid badly configured readahead, or
 > > to get extra concurrency, or something else?  I have a hard
 > > time believing that managing the page cache represents a real
 > > cost when you compare it to a 5ms I/O.
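
(For reference, the DIO under discussion is the advisory directio(3C)
call.  Here is a minimal sketch, not from the thread, of how an
application requests it; UFS honors the advice, ZFS does not:)

    /*
     * directio(3C) sketch: ask the filesystem to bypass the page
     * cache for this file descriptor.  Advisory only.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/fcntl.h>          /* directio(), DIRECTIO_ON */

    int
    main(int argc, char **argv)
    {
            int fd;

            if (argc != 2) {
                    (void) fprintf(stderr, "usage: %s file\n", argv[0]);
                    return (1);
            }
            if ((fd = open(argv[1], O_RDWR)) == -1) {
                    perror("open");
                    return (1);
            }
            /*
             * Fails (or is quietly ignored) on filesystems without
             * directio support; on UFS, subsequent suitably aligned
             * reads and writes skip the extra copy through the page
             * cache.
             */
            if (directio(fd, DIRECTIO_ON) == -1)
                    perror("directio");

            (void) close(fd);
            return (0);
    }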

 > I think you're missing one other thing - handling the memory overload of 
 > having orders of magnitude more accessed data than you have memory.  
 > Think about how you can handle having a couple hundred GB of dirty data 
 > being written by many threads (say either tablespace creates or temp 
 > table creation for a large table join) - fsflush and writebehind et al.

When you dirty enough data, ZFS will start to throttle those
writers, a bit like ufs_HW but at the system level. So most
data in the ARC cache should be evictable on demand. There
are issues in the current state of the code that make the
amount of dirty data greater than we'd like, but it is
bounded by design.
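
To make "throttle those writers" concrete, here is a rough userland
sketch of the hi/lo water-mark idea.  The names (DIRTY_HIWAT,
DIRTY_LOWAT, throttle_write) are made up for illustration; this is
not the actual ZFS code:

    #include <pthread.h>
    #include <stddef.h>

    static pthread_mutex_t dirty_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  dirty_cv   = PTHREAD_COND_INITIALIZER;
    static size_t dirty_bytes;              /* outstanding dirty data */
    #define DIRTY_HIWAT (256UL << 20)       /* block writers above this */
    #define DIRTY_LOWAT (128UL << 20)       /* wake writers below this */

    /* Write path: called before dirtying nbytes. */
    void
    throttle_write(size_t nbytes)
    {
            pthread_mutex_lock(&dirty_lock);
            /*
             * Sleep while over the high-water mark (but never block a
             * writer when nothing at all is dirty).
             */
            while (dirty_bytes > 0 && dirty_bytes + nbytes > DIRTY_HIWAT)
                    pthread_cond_wait(&dirty_cv, &dirty_lock);
            dirty_bytes += nbytes;
            pthread_mutex_unlock(&dirty_lock);
    }

    /* Syncer path: called after nbytes reach stable storage. */
    void
    write_done(size_t nbytes)
    {
            pthread_mutex_lock(&dirty_lock);
            dirty_bytes -= nbytes;
            if (dirty_bytes < DIRTY_LOWAT)
                    pthread_cond_broadcast(&dirty_cv);
            pthread_mutex_unlock(&dirty_lock);
    }

The gap between the two marks gives the syncer room to push data out
without writers oscillating between blocked and runnable.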

 > just can't keep up with it.  Of course, I know ZFS is "better" but to be 
 > usable in those situations it probably needs to be an order of 
 > magnitude better or so, and I haven't seen any data on a decently big 
 > rig with a proper storage config that shows that it is.  I'm not saying 
 > it's not, I'm just saying I haven't seen the data. :)
 >   Like you said, Roch, I've been down this road before and don't want to 
 > go down it again. ;)

Yes, performance-wise, ZFS is already fast on lots of tests
_and_ a big moving target.  That's another great thing about
it.  But keep those scenarios coming; it's always interesting
to make sure they're covered.

-r

 > 
 >  - Pete
