On Thu, Feb 11, 1999 at 10:24:51AM +0100, D. Rock wrote:
> Anyone with /usr/obj NFS mounted should have the same problems.
I've just updated my firewall machine using an NFSv3 hard-mounted /usr/obj
over a 10BaseT connection (slow!). I did a make installworld and a kernel
rebuild over it last night with
> >As I wrote some time before, I always get the wrong results if I generate
> >the termcap.db in an NFSv3-mounted directory. It doesn't matter which machine
> >is the NFS server (tried Solaris 7 and the NFS client machine itself). The
> >generated file has *always* the wrong size (always the same: 107776
On a related note, I've been unable to reproduce the `hanging' in
3.0-CURRENT (without Matt's patch)...
-- Niels.
To Unsubscribe: send mail to majord...@freebsd.org
with "unsubscribe freebsd-current" in the body of the message
>As I wrote some time before, I always get the wrong results if I generate
>the termcap.db in an NFSv3-mounted directory. It doesn't matter which machine
>is the NFS server (tried Solaris 7 and the NFS client machine itself). The
>generated file has *always* the wrong size (always the same: 107776
This doesn't fix my problem (mine isn't even rename- or delete-related).
As I wrote some time before, I always get the wrong results if I generate
the termcap.db in an NFSv3-mounted directory. It doesn't matter which machine
is the NFS server (tried Solaris 7 and the NFS client machine itself). The
generated file has *always* the wrong size (always the same: 107776
> I tried to track down some of the problems doing a network snoop and noticed
> something interesting:
> NFSv3 seems to produce more than twice as many packets during file writes as
> NFSv2.
> Is this true? There are many more getattr() calls with NFSv3 than with NFSv2.
You may mean "ACCESS", not "getattr".
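One way to check the GETATTR-versus-ACCESS mix for yourself is to tally procedure names out of tcpdump's text output. The sketch below is hypothetical: the capture file name and the sample input lines are made up for demonstration (in practice you would feed `tcpdump -n -r nfs.pcap` into the awk program instead of the `printf`).

```shell
# Tally NFS procedure names from tcpdump-style text output.
# Real usage (capture file name is an assumption):
#   tcpdump -n -r nfs.pcap | awk '...same program...' | sort -rn
# Synthetic sample lines stand in for a real capture here:
printf '%s\n' \
  '10:24:51.1 IP client > server: NFS request xid 0x1 access fh ...' \
  '10:24:51.2 IP client > server: NFS request xid 0x2 getattr fh ...' \
  '10:24:51.3 IP client > server: NFS request xid 0x3 access fh ...' |
awk '{
    # Count any field that is a known NFS procedure name.
    for (i = 1; i <= NF; i++)
        if ($i ~ /^(getattr|access|write|commit|rename|remove)$/)
            count[$i]++
}
END { for (op in count) print count[op], op }' | sort -rn
```

Running the two NFS versions side by side and diffing the tallies would make the "twice the packets" claim concrete.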
"Daniel C. Sobral" wrote:
> N wrote:
> >
> > On a totally unrelated note, su(1) never works if you're not in group
> > wheel, Kerberos or no Kerberos, as far as I can tell.
>
> Standard BSD behavior.
Hmm... for Kerberos, this ought to be relaxed, really.
M
--
Mark Murray
Join the anti-SPAM movement
N wrote:
>
> On a totally unrelated note, su(1) never works if you're not in group
> wheel, Kerberos or no Kerberos, as far as I can tell.
Standard BSD behavior.
--
Daniel C. Sobral(8-DCS)
d...@newsguy.com
d...@freebsd.org
Well, as a computer geek, I have to believe
Ok, I think I found the problem.
I believe what is going on is that dirty buffers are being held across
an nfs_rename() call under NFS V3. Under NFS V2, most such buffers are
written synchronously.
Under NFS V3, however, it appears to work differently. One of the
following
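The failure mode described above (dirty buffers surviving across the rename) suggests an obvious userland mitigation while the kernel bug is chased: force the data out to the server before renaming the file into place. A minimal sketch, with made-up filenames and using sync(8) since plain sh has no per-file fsync:

```shell
# Hypothetical workaround sketch: flush dirty buffers before renaming.
# Filenames are invented for the demo; on the affected systems these
# would sit on the NFSv3-mounted /usr/obj or DESTDIR.
tmp=$(mktemp -d)
printf 'new binary contents\n' > "$tmp/prog.new"  # write the new file (async under V3)
sync                                              # flush dirty buffers system-wide
mv "$tmp/prog.new" "$tmp/prog"                    # rename into place only after the flush
cat "$tmp/prog"
rm -rf "$tmp"
```

This is only an analogy to what the kernel should guarantee, not a claim about where the fix belongs.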
Matt wrote:
> This is very odd. This is the approximate backtrace that I get
> when I throw my test machine into DDB:
[..]
> What is happening is that I am doing a 'make installworld' on my
> test machine with / and /usr NFS V3 mounted R+W.
I have also come to the conclusion that
I think I've found the general area where the problem is occurring.
It appears to be some sort of weird write()/remove() interaction.
In the make install, the problem appears to occur only when
install -s ( install and strip the binary ) is used.
This causes an NFS file to be written
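The install -s pattern can be mimicked outside of NFS to see the sequence of operations involved. The sketch below runs on a local tmpdir and simulates the strip step with a second in-place rewrite, since it is the write-then-rewrite-then-rename pattern that matters here, not the actual binary stripping; the exact syscall order of install(1) is an assumption.

```shell
# Sketch of the operation sequence install -s generates -- the pattern
# that seems to trigger the NFSv3 write()/remove() interaction.
dir=$(mktemp -d)
printf 'original binary\n' > "$dir/prog.src"

cp "$dir/prog.src" "$dir/prog.tmp"            # 1. write the copy (async writes under V3)
printf 'stripped binary\n' > "$dir/prog.tmp"  # 2. "strip" rewrites the target in place
chmod 555 "$dir/prog.tmp"                     # 3. set the final mode
mv "$dir/prog.tmp" "$dir/prog"                # 4. rename over any old version
cat "$dir/prog"
rm -rf "$dir"
```

On an NFSv3 mount, step 2's dirty buffers may still be in flight when step 4's rename lands, which matches the backtrace quoted later in this thread.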
:> on the NFS server before you run make install on the NFS client ).
:
:Actually, I did, this broke somewhere at the end of January in 3.0-STABLE.
:Unfortunately I first built a kernel, rebooted with that, and tried to
:build the world, but /usr/bin/ps was the wrong version so I couldn't
:easily
Quoth Matt Dillon:
[make world over nfs breakage]
> It is very odd. I don't suppose very many people try to make install
> over NFS ( it works, you just have to chflags -R noschg the destination
> on the NFS server before you run make install on the NFS client ).
Actually, I did, this broke somewhere at the end of January in 3.0-STABLE.
This is very odd. This is the approximate backtrace that I get
when I throw my test machine into DDB:
--- interrupt, ...
nfs_* routines
cluster_wbuild
vfs_bio_awrite
flushdirtybuffers
bdwrite
nfs_write
vn_write
write
syscall
What is happening is that I am doing a 'make installworld' on my
test machine with / and /usr NFS V3 mounted R+W.
Here's more. I seem to be getting stale NFS handle replies on the NFS-V3
commits and writes.
I am guessing that the problem is somehow due to install writing to or
renaming over the original file, probably after stat'ing it.
doing a mount -u -o acdirmin