Anton B. Rang writes:
 > >Were the benefits coming from extra concurrency (no
 > >single writer lock) or avoiding the extra copy to page cache or 
 > >from too much readahead that is not used before pages need to 
 > >be recycled. 
 > 
 > With QFS, a major benefit we see for databases and direct I/O is an
 > effective doubling of the memory available to the database for
 > caching.  Without direct I/O, every page read winds up in the file
 > system cache and the database cache. For large databases, this is the
 > difference between retaining key indexes in memory, or not. 

For reads it is an interesting concept, since

        reading into the cache,
        then copying into user space,
        then keeping the data around but never using it

is not optimal. So there are two issues: the cost of the copy,
and the memory consumed.

Now, could we detect the pattern that makes holding on to the
cached block suboptimal, and do a quick free-behind after the
copyout? Something like random access + very large file + poor
cache hit ratio?

Now, about avoiding the copy: that would mean DMA straight into
user space? But if the checksum does not validate the data, what
do we do? If the storage is not RAID-protected and we have to
return EIO, I don't think we can do that _and_ corrupt the user
buffer as well; I'm not sure what POSIX says about this
situation.

Now, latency-wise, the cost of the copy is small compared to the
I/O, right? So it turns into an issue of saving some CPU cycles.


-r

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss