Spencer Shepler <[EMAIL PROTECTED]> wrote:

> On Thu, Joerg Schilling wrote:
> > Spencer Shepler <[EMAIL PROTECTED]> wrote:
> > 
> > > The close-to-open behavior of NFS clients is what ensures that the
> > > file data is on stable storage when close() returns.
> > 
> > In the 1980s this was definitely not the case. When did this change?
>
> It has not.  NFS clients have always flushed (written) modified file data
> to the server before returning from the application's close().  The NFS
> client also asks that the data be committed to disk in this case.

This is definitely wrong.

Our developers lost many files in the 1980s when the NFS file server
filled up the exported filesystem while several NFS clients were trying to
write back edited files at the same time.

VI at that time did not call fsync() and therefore did not notice when a
file could not be written back properly.

What happened: all clients called statfs() and assumed that there was
still space left on the server, so they all kept putting blocks into the
local client's buffer cache. VI called close(), but the client noticed the
out-of-space problem only after close() had returned, so VI never learned
that the file was damaged and allowed the user to quit.

Some time later, Sun enhanced VI to first call fsync() and then call
close(). Only if both return 0 is the file guaranteed to be on the server.
Sun also advised us to write applications this way in order to prevent
lost file content.
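
In C the recommended pattern looks roughly like this (a minimal sketch;
the file name and data are invented for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
	int fd = open("edited.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0) {
		perror("open");
		return (EXIT_FAILURE);
	}
	if (write(fd, "data\n", 5) != 5) {
		perror("write");
		(void) close(fd);
		return (EXIT_FAILURE);
	}
	/*
	 * fsync() forces the NFS client to write back all cached blocks
	 * and to report delayed errors such as ENOSPC from the server.
	 */
	if (fsync(fd) != 0) {
		perror("fsync");	/* file may be damaged on the server */
		(void) close(fd);
		return (EXIT_FAILURE);
	}
	/*
	 * close() may still fail; only if both calls return 0 is the
	 * file content known to be safe on the server.
	 */
	if (close(fd) != 0) {
		perror("close");
		return (EXIT_FAILURE);
	}
	return (EXIT_SUCCESS);
}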


> > > Having tar create/write/close files concurrently would be a 
> > > big win over NFS mounts on almost any system.
> > 
> > Do you have an idea on how to do this?
>
> My naive thought would be to have multiple threads that create and
> write file data upon extraction.  This multithreaded behavior would
> provide better overall throughput of an extraction given NFS' response
> time characteristics.  More outstanding requests result in better
> throughput.  It isn't only writing the file data to disk that makes
> extraction slow; the creation of the directories and files must also
> be committed to disk in the case of NFS.
> This is the other part that makes things slower than local access.

Doing this with tar (which fetches its data from a serial data stream)
would only make sense if there were threads whose sole task is to wait for
the final fsync()/close(), as sketched below.

It would also make error control harder, because a problem may be detected
late, while another large file is already being extracted. Star could not
simply quit with an error message but would need to delay the error-caused
exit.
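
A rough sketch of what such helper threads could look like (this is not
star's actual code; QSIZE and all names are invented, and error handling
is reduced to setting a flag for the delayed exit):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define	QSIZE	16

static int	queue[QSIZE];	/* fds waiting for fsync()/close() */
static int	qhead, qtail, qcount;
static int	done;		/* extraction finished, drain and stop */
static int	deferred_error;	/* set on any late write-back error */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  notempty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  notfull  = PTHREAD_COND_INITIALIZER;

/* The extractor calls this instead of close(fd) and continues reading. */
void
queue_close(int fd)
{
	pthread_mutex_lock(&lock);
	while (qcount == QSIZE)
		pthread_cond_wait(&notfull, &lock);
	queue[qtail] = fd;
	qtail = (qtail + 1) % QSIZE;
	qcount++;
	pthread_cond_signal(&notempty);
	pthread_mutex_unlock(&lock);
}

/* Helper thread: blocks in fsync()/close() so the extractor need not. */
void *
closer(void *arg)
{
	(void) arg;
	for (;;) {
		int fd, err = 0;

		pthread_mutex_lock(&lock);
		while (qcount == 0 && !done)
			pthread_cond_wait(&notempty, &lock);
		if (qcount == 0) {
			pthread_mutex_unlock(&lock);
			return (NULL);
		}
		fd = queue[qhead];
		qhead = (qhead + 1) % QSIZE;
		qcount--;
		pthread_cond_signal(&notfull);
		pthread_mutex_unlock(&lock);

		/*
		 * A late ENOSPC from the server shows up here, possibly
		 * while the main thread already extracts another file.
		 */
		if (fsync(fd) != 0) {
			perror("fsync");
			err = 1;
		}
		if (close(fd) != 0) {
			perror("close");
			err = 1;
		}
		if (err) {
			pthread_mutex_lock(&lock);
			deferred_error = 1;	/* exit later, not now */
			pthread_mutex_unlock(&lock);
		}
	}
}

/* Called once after the last file: drain the queue, then report errors. */
int
finish(pthread_t *tids, int ntids)
{
	int i;

	pthread_mutex_lock(&lock);
	done = 1;
	pthread_cond_broadcast(&notempty);
	pthread_mutex_unlock(&lock);
	for (i = 0; i < ntids; i++)
		pthread_join(tids[i], NULL);
	return (deferred_error);	/* the delayed, error-caused exit */
}

The extractor hands each finished fd to queue_close() and keeps reading the
archive; finish() joins the helpers and returns the deferred error status.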

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
       [EMAIL PROTECTED]                (uni)  
       [EMAIL PROTECTED]     (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily