Jeff Victor <[EMAIL PROTECTED]> wrote:
> >> Your wording did not match reality; this is why I wrote this. You did
> >> write that upon close() the client will first do something similar to
> >> fsync on that file. The problem is that this is done asynchronously and
> >> the close() [...]
Spencer Shepler <[EMAIL PROTECTED]> wrote:
> Sorry, the code in Solaris would behave as I described. Upon the
> application closing the file, modified data is written to the server.
> The client waits for completion of those writes. If there is an error,
> it is returned to the caller of close().
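To make Spencer's point concrete: on an NFS client a write() can succeed
against the local cache while the transfer to the server fails later, so
the return value of close() must be checked. A minimal sketch (mine, not
from the thread; the mount path is hypothetical):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/mnt/nfs/example.dat";  /* hypothetical NFS path */
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* write() may only dirty the client's cache and report success */
        if (write(fd, "hello\n", 6) < 0)
            perror("write");

        /* close() flushes the cached data to the server and waits; a
         * server-side error (ENOSPC, EDQUOT, EIO) is reported here */
        if (close(fd) < 0) {
            perror("close");
            return 1;
        }
        return 0;
    }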
Spencer Shepler wrote:
On Fri, Joerg Schilling wrote:
> > This doesn't change the fact that upon close() the NFS client will
> > write data back to the server. This is done to meet the
> > close-to-open semantics of NFS.
> Your wording did not match reality; this is why I wrote this. You did [...]
Spencer Shepler <[EMAIL PROTECTED]> wrote:
> I didn't comment on the error conditions that can occur during
> the writing of data upon close(). What you describe is the preferred
> method of obtaining any errors that occur during the writing of data.
> This occurs because the NFS client is writing [...]
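Joerg's exact words are cut off above, but the "preferred method" being
acknowledged is presumably an explicit fsync() before close(), which
reports write errors at a well-defined point rather than leaving them to
surface at close(). A hedged sketch of that idiom (safe_close is my own
name, not from the thread):

    #include <stdio.h>
    #include <unistd.h>

    /* Flush explicitly, then close, checking both return values.
     * After a clean fsync() the data is already on the server, so
     * close() should be cheap; either call can still report an error. */
    int safe_close(int fd)
    {
        int rc = 0;
        if (fsync(fd) < 0) {
            perror("fsync");
            rc = -1;
        }
        if (close(fd) < 0) {
            perror("close");
            rc = -1;
        }
        return rc;
    }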
On Thu, Joerg Schilling wrote:
> Spencer Shepler <[EMAIL PROTECTED]> wrote:
>
> > The close-to-open behavior of NFS clients is what ensures that the
> > file data is on stable storage when close() returns.
>
> In the 1980s this was definitely not the case. When did this change?
It has not. NFS [...]
"Frank Batschulat (Home)" <[EMAIL PROTECTED]> wrote:
> On Tue, 10 Oct 2006 01:25:36 +0200, Roch <[EMAIL PROTECTED]> wrote:
>
> > You tell me ? We have 2 issues
> >
> > can we make 'tar x' over direct attach, safe (fsync)
> > and posix compliant while staying close to current
> > perfor
Spencer Shepler <[EMAIL PROTECTED]> wrote:
> The close-to-open behavior of NFS clients is what ensures that the
> file data is on stable storage when close() returns.

In the 1980s this was definitely not the case. When did this change?

> The meta-data requirements of NFS are what ensure that fi [...]
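For readers following the dispute, a sketch of what the close-to-open
contract promises (a single process for brevity; the guarantee matters
most between clients): data flushed by close() is on the server, and the
next open() revalidates the client's cache against it. The path is
hypothetical.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/mnt/nfs/shared.dat";  /* hypothetical mount */
        char buf[2] = {0};

        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        write(fd, "v2", 2);
        if (close(fd) < 0)          /* flush: data reaches the server here */
            perror("close");

        fd = open(path, O_RDONLY);  /* open revalidates the cached file */
        if (fd < 0) { perror("open"); return 1; }
        read(fd, buf, 2);           /* close-to-open: must observe "v2" */
        close(fd);
        return 0;
    }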
Roch <[EMAIL PROTECTED]> wrote:
> > Neither Sun tar nor GNU tar call fsync, which is the only way to
> > enforce data integrity over NFS.
>
> I tend to agree with this, although I'd say that in practice, from a
> performance perspective, calling fsync should be more relevant for
> direct attach.

I think the original point of NFS being better WRT data making it to
the disk was this: NFS follows the SYNC-ON-CLOSE semantics. You will
not see an explicit fsync() being called by the tar...

-- Sanjeev.
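Sanjeev's point can be restated in code. A sketch of extracting one
archive member (not actual tar source; the names are mine): tar performs
no fsync(), so over NFS the close() itself commits the data
(sync-on-close), while on direct-attached storage nothing forces the
data to disk unless the optional fsync() below is compiled in.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Sketch of a tar-style extract of one file; tar omits the fsync(). */
    int extract_member(const char *name, const char *data, size_t len)
    {
        int fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, data, len) < 0) {
            close(fd);
            return -1;
        }
    #ifdef SAFE_EXTRACT
        if (fsync(fd) < 0) {        /* would make direct attach safe too */
            close(fd);
            return -1;
        }
    #endif
        return close(fd);           /* over NFS this commits the data */
    }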
On Tue, 10 Oct 2006 01:25:36 +0200, Roch <[EMAIL PROTECTED]> wrote:

You tell me? We have 2 issues:

Can we make 'tar x' over direct attach safe (fsync) and POSIX
compliant while staying close to current performance characteristics?
In other words, do we have the [...]
Roch <[EMAIL PROTECTED]> wrote:
> I would add that this is not a bug or deficiency in the
> implementation. Any NFS implementation tweak to make 'tar x'
> go as fast as direct attached will lead to silent data
> corruption (tar x succeeds but the files don't checksum ok).
>
> [...]
eric kustarz <[EMAIL PROTECTED]> wrote:
> Ben Rockwood wrote: [...]
>
> I imagine what's happening is that tar is a single-threaded application
> and it's basically doing: open, asynchronous write, close. This will go
> really fast locally. But for NFS, due to the way it does cache
> consistency, on [...]
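The pattern eric describes, as a standalone sketch: one thread looping
open/write/close over many small files. Locally each close() is nearly
free; over NFS each close() blocks until the server has committed that
file, so the whole loop is paced by server latency.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char name[32];
        char data[4096] = {0};      /* one small file's payload */

        for (int i = 0; i < 100; i++) {
            snprintf(name, sizeof name, "file.%03d", i);
            int fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }
            write(fd, data, sizeof data);  /* fast: dirties client cache */
            if (close(fd) < 0)             /* NFS: waits for server commit */
                perror("close");
        }
        return 0;
    }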
Ben Rockwood wrote:
I was really hoping for some option other than ZIL_DISABLE, but finally
gave up the fight. Some people suggested NFSv4 helping over NFSv3, but
it didn't... at least not enough to matter.

ZIL_DISABLE was the solution, sadly. I'm running B43/X86 and hoping to
get up to 48 or so soonish (I BFU'd [...]
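For context on the tunable Ben mentions: on OpenSolaris builds of that
era the ZIL was turned off via the zil_disable variable, typically (if
memory serves; verify against your build before relying on it) with a
line in /etc/system:

    * /etc/system -- disable the ZFS intent log (boot-time tunable of
    * that era; defeats synchronous-write guarantees, use with care)
    set zfs:zil_disable = 1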