On Thu, Joerg Schilling wrote:

> Spencer Shepler <[EMAIL PROTECTED]> wrote:
>
> > The close-to-open behavior of NFS clients is what ensures that the
> > file data is on stable storage when close() returns.
>
> In the 1980s this was definitely not the case.  When did this change?
It has not.  NFS clients have always flushed (written) modified file
data to the server before returning from the application's close().
The NFS client also asks that the data be committed to disk in this
case.

> > The meta-data requirements of NFS are what ensure that file creation,
> > removal, renames, etc. are on stable storage when the server
> > sends a response.
> >
> > So, unless the NFS server is behaving badly, the NFS client has
> > synchronous behavior, and for some that means more "safe" but
> > usually means that it is also slower than local access.
>
> In any case, calling fsync before close does not seem to be a problem.

Not for the NFS client, because the default behavior has the same
effect as fsync()/close().

> > > You tell me?  We have 2 issues:
> > >
> > > Can we make 'tar x' over direct attach safe (fsync)
> > > and POSIX compliant while staying close to current
> > > performance characteristics?  In other words, do we
> > > have the POSIX leeway to extract files in parallel?
> > >
> > > For NFS, can we make 'tar x' fast and reliable while
> > > keeping a principle of least surprise for users on
> > > this non-POSIX FS?
> >
> > Having tar create/write/close files concurrently would be a
> > big win over NFS mounts on almost any system.
>
> Do you have an idea on how to do this?

My naive thought would be to have multiple threads that create and
write file data upon extraction.  This multithreaded behavior would
provide better overall throughput of an extraction, given NFS'
response-time characteristics.  More outstanding requests result in
better throughput.

It isn't only the writing of file data that is the overhead of the
extraction; the creation of the directories and files must also be
committed to disk in the case of NFS.  This is the other part that
makes things slower than local access.
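To make the fsync()/close() equivalence concrete, here is a minimal
sketch of the "fsync before close" pattern being discussed.  The
helper name write_durable is my own invention, not anything from tar
or NFS; the point is only the os.fsync() call before os.close(),
which is what a local filesystem needs for durability and what the
NFS client's flush-on-close behavior already gives you implicitly.

```python
import os
import tempfile

def write_durable(path, data):
    """Write data and force it to stable storage before close().

    On a local filesystem, close() alone does not guarantee the data
    has reached disk; the explicit fsync() does.  On an NFS client,
    the flush-and-commit-on-close semantics described above make the
    fsync() largely redundant, but it is harmless.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # commit the file data to stable storage
    finally:
        os.close(fd)

# Example usage in a scratch directory.
path = os.path.join(tempfile.mkdtemp(), "example.txt")
write_durable(path, b"hello")
```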
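The multithreaded extraction idea above can be sketched roughly as
follows.  This is not how any shipping tar works; it is a hypothetical
illustration, with invented names (extract_one, extract_parallel),
of keeping several create/write/fsync/close sequences in flight so
the NFS client can overlap the synchronous round trips to the server
instead of serializing them.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def extract_one(root, relpath, data):
    """Create, write, fsync, and close a single extracted file."""
    dest = os.path.join(root, relpath)
    # exist_ok=True makes concurrent directory creation race-safe.
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    fd = os.open(dest, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)

def extract_parallel(root, entries, workers=8):
    """Extract (relpath, data) entries with several threads.

    Each thread spends most of its time waiting on server round
    trips, so more outstanding requests yield better throughput
    over NFS even without more CPU.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(extract_one, root, rel, data)
                   for rel, data in entries]
        for f in futures:
            f.result()  # surface any errors from the workers

# Example usage with synthetic archive entries.
root = tempfile.mkdtemp()
entries = [("dir%d/file%d" % (i, i), b"x" * 100) for i in range(16)]
extract_parallel(root, entries)
```

Note that this parallelizes the file-data writes and the file/directory
creations alike, which matters because, as noted above, the metadata
operations are also committed synchronously on the NFS server.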
Spencer

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss