On Saturday 15 July 2006 00:08, User Freebsd wrote:
> On Sat, 15 Jul 2006, Kostik Belousov wrote:
> > On Sat, Jul 15, 2006 at 12:10:29AM -0300, User Freebsd wrote:
> >> On Wed, 5 Jul 2006, Robert Watson wrote:
> >>> If you can get into DDB when the hang has occurred, output via
> >>> serial console
On 14/07/2006 6:08 PM, User Freebsd wrote:
Just in case, do you use mlocked mappings? Also, why does such a huge number
of crons exist in the system? They are all forking now. It may be (I cannot
say definitely without further investigation) just a fork bomb.
re: crons ... this, I'm not sure of, but my
On Sat, 15 Jul 2006, Kostik Belousov wrote:
On Sat, Jul 15, 2006 at 12:10:29AM -0300, User Freebsd wrote:
On Wed, 5 Jul 2006, Robert Watson wrote:
If you can get into DDB when the hang has occurred, output via serial
console for the following commands would be very helpful:
show pcpu
show
On Sat, Jul 15, 2006 at 12:10:29AM -0300, User Freebsd wrote:
>
>
> On Wed, 5 Jul 2006, Robert Watson wrote:
>
> >If you can get into DDB when the hang has occurred, output via serial
> >console for the following commands would be very helpful:
> >
> >show pcpu
> >show allpcpu
> >ps
> >trace
>
On Sat, 15 Jul 2006, User Freebsd wrote:
On Wed, 5 Jul 2006, Robert Watson wrote:
If you can get into DDB when the hang has occurred, output via serial
console for the following commands would be very helpful:
show pcpu
show allpcpu
ps
trace
traceall
show locks
show alllocks
show uma
show malloc
On Wed, 5 Jul 2006, Robert Watson wrote:
If you can get into DDB when the hang has occurred, output via serial console
for the following commands would be very helpful:
show pcpu
show allpcpu
ps
trace
traceall
show locks
show alllocks
show uma
show malloc
show lockedvnods
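For anyone setting this up, capturing those commands over a serial console might look like this (a sketch; the serial device, speed, and break method are assumptions about your particular setup):

```
# on a second machine attached to the hung box's serial port, log everything:
script ddb-output.txt cu -l /dev/cuad0 -s 9600
# then break into DDB on the hung box and run the commands at the prompt:
#   db> show pcpu
#   db> show allpcpu
#   db> ps
#   ...
#   db> show lockedvnods
```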
'k, after 16 days
On Wed, 05 Jul 2006 02:49:26 +0200, Scott Long <[EMAIL PROTECTED]> wrote:
Michel Talon wrote:
BTW, I noticed yesterday that that IPv6 support commit to rpc.lockd
was never backed out. An immediate question for people experiencing
new rpc.lockd problems with 6.x should be whether or not ba
On Wed, 5 Jul 2006, Francisco Reyes wrote:
Scott Long writes:
For what it's worth, I recently spent a lot of time putting FreeBSD 6.1
to the test as both an NFS client and server in a mixed OS environment.
I have a few debugging settings/suggestions that have been sent my way and I
plan to
User Freebsd writes:
What are others using for ethernet?
Of our two machines having the problem 1 has BGE and the other one has EM
(Intel). Doesn't seem to make much of a difference.
Except for the network cards, these two machines are identical. Same
motherboard, same RAID controller, sam
On Wed, 5 Jul 2006, Francisco Reyes wrote:
can you trigger it using work on just one client against a server, without
client<->client interactions? This makes tracking and reproduction a lot
easier
Personally I am experiencing two problems.
1- NFS clients freeze/hang if the server goes away
User Freebsd writes:
I believe, in Francisco's case, they are willing to pay someone to fix the
NFS issues they are having, which, i'd assume, means easy access to the
problematic server(s) to do proper testing in a "real life scenario" ...
Correct. As long as the person is someone "trusted i
Robert Watson writes:
It's not impossible. It would be interesting to see if ps axl reports that
rpc.lockd is in the kqread state
Found my post in another thread.
0   354     1   0  96  0  1412  1032 select  Ss    ??    0:07.06
/usr/sbin/rpcbind
It was not in kqread state.. and that was fro
Robert Watson writes:
It's not impossible. It would be interesting to see if ps axl reports that
rpc.lockd is in the kqread state, which would suggest it was blocked in the
resolver.
Just tried "ps axl | grep rpc" in the machine giving us the most grief..
Only got one line back:
root 367
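The state check above can be sketched as a one-liner against the `ps axl` output (the sample line below is the rpcbind line quoted earlier in the thread; the column positions are an assumption about the `ps axl` layout):

```shell
# Extract the MWCHAN (wait-channel) column from a `ps axl`-style line.
# Assumed column layout: UID PID PPID CPU PRI NI VSZ RSS MWCHAN STAT TT TIME COMMAND.
# A wait channel of "kqread" would suggest the daemon is blocked in kevent,
# consistent with the resolver theory above.
sample='0   354     1   0  96  0  1412  1032 select Ss    ??    0:07.06 /usr/sbin/rpcbind'
wchan=$(printf '%s\n' "$sample" | awk '{ print $9 }')
echo "$wchan"   # prints "select" for this sample, i.e. not kqread
```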
Robert Watson writes:
can you trigger it using work on just one client against a server, without
client<->client interactions? This makes tracking and reproduction a lot
easier
Personally I am experiencing two problems.
1- NFS clients freeze/hang if the server goes away.
We have clients with
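Not something suggested in the thread, but the classic mitigation for clients hanging when the server goes away is an interruptible NFS mount; a sketch of a client fstab entry (server name and paths are placeholders):

```
# /etc/fstab on the client: "intr" lets processes stuck on a dead server be
# killed; "bg" retries the mount in the background at boot. "soft" would
# instead fail I/O after retries, with its own data-integrity caveats.
server:/export  /mnt/nfs  nfs  rw,intr,bg  0  0
```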
Scott Long writes:
For what it's worth, I recently spent a lot of time putting FreeBSD 6.1
to the test as both an NFS client and server in a mixed OS environment.
I have a few debugging settings/suggestions that have been sent my way and I
plan to try them tonight, but this is just another re
> with the bge driver ... could we be possibly talking internet vs nfs
> issues?
Pursuing investigations, I have discovered that for people with
workstations whose home directories are on an NFS server, and who run
Gnome or KDE, there is a program with horrible NFS behavior:
it is gam_serv
On Wed, 5 Jul 2006, Michel Talon wrote:
So it may be relevant to say that I have kernels without IPv6 support.
Recall that I have absolutely no problem with the client in FreeBSD-6.1.
Tomorrow I will test one of the 6.1 machines as an NFS server and the other as
a client, and will let you know i
On Mon, 3 Jul 2006, Michael Collette wrote:
-
Let's start with the simplest. The scenario here involves 2 machines, mach01
and mach02. Both are running 6-STABLE, and both are running rpcbind,
rpc.statd, and rpc.lockd. mach
On Wed, 5 Jul 2006, Robert Watson wrote:
On Wed, 5 Jul 2006, Danny Braniss wrote:
In my case our main servers are NetApp, and the problems are more related
to am-utils running into some race condition (need more time to debug this
:-) the other problem is related to throughput, freebsd is slo
> So it may be relevant to say that I have kernels without IPv6 support.
> Recall that I have absolutely no problem with the client in FreeBSD-6.1.
> Tomorrow I will test one of the 6.1 machines as an NFS server and the other as
> a client, and will let you know if I see something.
Well, I have ch
On Wed, Jul 05, 2006 at 02:04:59PM +0100, Robert Watson wrote:
>
> On Wed, 5 Jul 2006, Kostik Belousov wrote:
>
> >>Also, both lockd processes now put identification information in the
> >>proctitle (srv and kern). SIGUSR1 shall be sent to the srv process.
> >
> >Hmm, after looking at the dump t
On Wed, 5 Jul 2006, Kostik Belousov wrote:
Also, both lockd processes now put identification information in the
proctitle (srv and kern). SIGUSR1 shall be sent to the srv process.
Hmm, after looking at the dump there and some code reading, I have noted the
following:
1. NLM lock request co
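Given that the role now appears in the proctitle, the srv process could be targeted by matching the full command line, along these lines (a sketch; the exact proctitle string is an assumption):

```
# -f matches against the whole argument list, which includes the proctitle
# text; adjust the pattern to whatever the srv process actually displays
pkill -USR1 -f 'rpc.lockd.*srv'
```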
On Wed, Jul 05, 2006 at 02:38:22PM +0300, Kostik Belousov wrote:
> On Wed, Jul 05, 2006 at 10:09:24AM +0100, Robert Watson wrote:
> > The most significant problem working with rpc.lockd is creating easy to
> > reproduce test cases. Not least because they can potentially involve
> > multiple clie
On Wed, Jul 05, 2006 at 10:09:24AM +0100, Robert Watson wrote:
> The most significant problem working with rpc.lockd is creating easy to
> reproduce test cases. Not least because they can potentially involve
> multiple clients. If you can help to produce simple test cases to
> reproduce the bu
Quoting Michel Talon <[EMAIL PROTECTED]>:
So it would appear that you cured the NFS problems inherent with FBSD-6
by replacing FBSD with Fedora Linux. Nice to know that NFSd works in Linux.
But won't help those on the FBSD list fix their FBSD-6 boxen. :/
First, NFS is designed to make machines
On Wed, 5 Jul 2006, Danny Braniss wrote:
In my case our main servers are NetApp, and the problems are more related to
am-utils running into some race condition (need more time to debug this :-)
the other problem is related to throughput, freebsd is slower than linux,
and while freebsd/nfs/tcp
Mornin'
On Tue, Jul 04, 2006 at 09:47:21PM +0100, Robert Watson wrote:
> BTW, I noticed yesterday that that IPv6 support commit to rpc.lockd was
> never backed out. An immediate question for people experiencing new
> rpc.lockd problems with 6.x should be whether or not backing out that
> chan
> Michel Talon wrote:
>
> >>Using Ubuntu as the server I connected a FreeBSD 5.4 and 6-stable box as
> >>clients on a 100Mb/s network. The time trial used a dummy 100Meg file
> >>transferred from the server to the client.
> >>
> >
> >
> > I have similar experiences here. With FreeBSD-6.1 as c
Michel Talon wrote:
BTW, I noticed yesterday that that IPv6 support commit to rpc.lockd was never
backed out. An immediate question for people experiencing new rpc.lockd
problems with 6.x should be whether or not backing out that change helps.
So it may be relevant to say that I have kerne
> BTW, I noticed yesterday that that IPv6 support commit to rpc.lockd was
> never
> backed out. An immediate question for people experiencing new rpc.lockd
> problems with 6.x should be whether or not backing out that change helps.
So it may be relevant to say that I have kernels without IPv6
On Tue, 4 Jul 2006, Scott Long wrote:
For what it's worth, I recently spent a lot of time putting FreeBSD 6.1 to
the test as both an NFS client and server in a mixed OS environment. By far
and away, the biggest problems that I encountered with it were due to linux
NFS bugs. CentOS, FC, and S
Michel Talon wrote:
Using Ubuntu as the server I connected a FreeBSD 5.4 and 6-stable box as
clients on a 100Mb/s network. The time trial used a dummy 100Meg file
transferred from the server to the client.
I have similar experiences here. With FreeBSD-6.1 as client (using an Intel
etherexp
On Mon, Jul 03, 2006 at 03:40:01PM -0700, Michael Collette wrote:
> User Freebsd wrote:
> >On Sat, 1 Jul 2006, Francisco Reyes wrote:
> >
> >>John Hay writes:
> >>
> >>>I only started to see the lockd problems when upgrading the server side
> >>>to FreeBSD 6.x and later. I had various FreeBSD clien
On Mon, 3 Jul 2006, Michael Collette wrote:
http://www.freebsd.org/cgi/query-pr.cgi?pr=80389
If you locally back out the referenced change lock_proc.c:1.18 in rpc.lockd on
the server, do things improve?
Robert N M Watson
Computer Laboratory
University of Cambridge
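Since the tree was in CVS at the time, locally backing out lock_proc.c rev 1.18 would be a reverse merge along these lines (a sketch, assuming a checked-out /usr/src):

```
cd /usr/src/usr.sbin/rpc.lockd
# reverse-apply the 1.17 -> 1.18 delta onto the working copy, then rebuild
cvs update -j 1.18 -j 1.17 lock_proc.c
make && make install
```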
> Using Ubuntu as the server I connected a FreeBSD 5.4 and 6-stable box as
> clients on a 100Mb/s network. The time trial used a dummy 100Meg file
> transferred from the server to the client.
>
I have similar experiences here. With FreeBSD-6.1 as client (using an Intel
etherexpress card at 100
User Freebsd wrote:
On Sat, 1 Jul 2006, Francisco Reyes wrote:
John Hay writes:
I only started to see the lockd problems when upgrading the server side
to FreeBSD 6.x and later. I had various FreeBSD clients, between 4.x
and 7-current and the lockd problem only showed up when upgrading the
se
Garance A Drosihn wrote:
At 9:13 PM -0400 7/1/06, Francisco Reyes wrote:
John Hay writes:
I only started to see the lockd problems when upgrading
the server side to FreeBSD 6.x and later. I had various
FreeBSD clients, between 4.x and 7-current and the lockd
problem only showed up when upgradi
At 9:13 PM -0400 7/1/06, Francisco Reyes wrote:
John Hay writes:
I only started to see the lockd problems when upgrading
the server side to FreeBSD 6.x and later. I had various
FreeBSD clients, between 4.x and 7-current and the lockd
problem only showed up when upgrading the server from
5.x to
Michel Talon wrote:
[ ...a long email snipped... ]
My only conclusion is that these NFS stories are very
tricky. The only moment everything worked fine was when we were running
Solaris on the server.
I can't speak to the earlier part about NFS with Linux, but at least I very
much agree with yo
On Mon, 3 Jul 2006, Francisco Reyes wrote:
Kostik Belousov writes:
I think that 6.2 and 6.3 are not for you either, then. Problems
cannot be fixed until enough information is given.
I am trying.. but so far only other users who are having the same problem are
commenting on this and other simm
> So it would appear that you cured the NFS problems inherent with FBSD-6
> by replacing FBSD with Fedora Linux. Nice to know that NFSd works in Linux.
> But won't help those on the FBSD list fix their FBSD-6 boxen. :/
>
First, NFS is designed to make machines of different OSs interact properly.
If
On Mon, Jul 03, 2006 at 10:06:52AM +0100, Robert Watson wrote:
> It sounds like there is also an NFS client race condition or other bug of
> some sort.
It may not be related, directly, but one thing that I noticed,
while trying to sort out my own recently commissioned NFS setup,
is that the -r102
On Mon, Jul 03, 2006 at 10:06:52AM +0100, Robert Watson wrote:
>
> On Mon, 3 Jul 2006, Kostik Belousov wrote:
>
> >On Mon, Jul 03, 2006 at 12:50:11AM -0400, Francisco Reyes wrote:
> >>Kostik Belousov writes:
> >Since nobody except you experiences these problems (at least, only you
> >>>notified
>
On Mon, 3 Jul 2006, Kostik Belousov wrote:
On Mon, Jul 03, 2006 at 12:50:11AM -0400, Francisco Reyes wrote:
Kostik Belousov writes:
Since nobody except you experiences these problems (at least, only you
notified
about the problem existence)
Did you miss the part of:
User Freebsd writes:
Si
Quoting Michel Talon <[EMAIL PROTECTED]>:
I guess I'm still just a bit stunned that a bug this obvious not only
found its way into the STABLE branch, but is still there. Maybe it's
not as obvious as I think, or not many folks are using it? All I know
for sure here is that if I had upgraded to
On Mon, Jul 03, 2006 at 12:50:11AM -0400, Francisco Reyes wrote:
> Kostik Belousov writes:
> >Since nobody except you experiences these problems (at least, only you
> >notified
> >about the problem existence)
>
> Did you miss the part of:
>
> >User Freebsd writes:
> >>Since there are several of us
Kostik Belousov writes:
I think that 6.2 and 6.3 are not for you either, then. Problems
cannot be fixed until enough information is given.
I am trying.. but so far only other users who are having the same problem
are commenting on this and other similar threads.
We just need some guidance..
On Sun, Jul 02, 2006 at 05:49:44PM -0400, Francisco Reyes wrote:
> User Freebsd writes:
>
> >Since there are several of us experiencing what looks to be the same sort
> >of deadlock issue, I beseech you not to give up
>
> I will try to setup the environment, but to be honest no more 6.X for us
User Freebsd writes:
Since there are several of us experiencing what looks to be the same sort
of deadlock issue, I beseech you not to give up
I will try to setup the environment, but to be honest no more 6.X for us
until 6.2 or 6.3.. We have lost clients already.
Is this a problem that yo
On Sat, 1 Jul 2006, Francisco Reyes wrote:
John Hay writes:
I only started to see the lockd problems when upgrading the server side
to FreeBSD 6.x and later. I had various FreeBSD clients, between 4.x
and 7-current and the lockd problem only showed up when upgrading the
server from 5.x to 6.x.
> John Hay writes:
>
> > I only started to see the lockd problems when upgrading the server side
> > to FreeBSD 6.x and later. I had various FreeBSD clients, between 4.x
> > and 7-current and the lockd problem only showed up when upgrading the
> > server from 5.x to 6.x.
>
> It confirms the same
John Hay writes:
I only started to see the lockd problems when upgrading the server side
to FreeBSD 6.x and later. I had various FreeBSD clients, between 4.x
and 7-current and the lockd problem only showed up when upgrading the
server from 5.x to 6.x.
It confirms the same we are experiencing..
On 6/29/06, Michael Collette <[EMAIL PROTECTED]> wrote:
This last week I had been working on a test network to test out 6.1
prior to upgrading our production boxes from 5.4. That's when I ran
across the rpc.lockd issues that have been discussed earlier.
Our production setup has diskless clients
> Based on prior reading about this problem, I'd venture to guess that the
> file locking between FC5 and FreeBSD simply isn't. See, between just 2
> machines sharing files without rpc.lockd running you won't see a
> problem. Both the client and the server must not only be running
> rpc.lockd
> I only started to see the lockd problems when upgrading the server side
> to FreeBSD 6.x and later. I had various FreeBSD clients, between 4.x
> and 7-current and the lockd problem only showed up when upgrading the
> server from 5.x to 6.x.
As far as I remember, FreeBSD-4 did not have a true lock
> the one thing that sticks out to me about this report is that they
> upgraded the NFS server to FC5 ... what was the server running before? if
> FreeBSD, could the problem be an interaction problem between the NFS
> server and client, vs just the client side?
Previously the server used Fed
Michel Talon wrote:
I guess I'm still just a bit stunned that a bug this obvious not only
found its way into the STABLE branch, but is still there. Maybe it's
not as obvious as I think, or not many folks are using it? All I know
for sure here is that if I had upgraded to 6.1 my network would
On Fri, Jun 30, 2006 at 01:03:09AM +0200, Michel Talon wrote:
> > I guess I'm still just a bit stunned that a bug this obvious not only
> > found its way into the STABLE branch, but is still there. Maybe it's
> > not as obvious as I think, or not many folks are using it? All I know
> > for su
User Freebsd writes:
the one thing that sticks out to me about this report is that they
upgraded the NFS server to FC5
I wonder if the FreeBSD 6.X client would freeze with a non FreeBSD NFS
server. Would be interesting to have that info for comparison.
__
On Thu, 29 Jun 2006, Francisco Reyes wrote:
Michel Talon writes:
Strange, since I upgraded to FreeBSD-6.1 and the NFS server to Fedora Core 5,
my machine, the NFS client, is happy, and lockd works.
What volume are we talking about?
My own problems and other reports I see are all under heavy load
Michel Talon writes:
Strange, since I upgraded to FreeBSD-6.1 and the NFS server to Fedora Core 5,
my machine, the NFS client, is happy, and lockd works.
What volume are we talking about?
My own problems and other reports I see are all under heavy load.
On Thu, 29 Jun 2006 22:25:30 +0200, Michael Collette
<[EMAIL PROTECTED]> wrote:
Rong-en Fan wrote:
On 6/29/06, Michael Collette <[EMAIL PROTECTED]> wrote:
This last week I had been working on a test network to test out 6.1
prior to upgrading our production boxes from 5.4. That's when I ran
Rong-en Fan wrote:
On 6/29/06, Michael Collette <[EMAIL PROTECTED]> wrote:
This last week I had been working on a test network to test out 6.1
prior to upgrading our production boxes from 5.4. That's when I ran
across the rpc.lockd issues that have been discussed earlier.
Our production setup
Michael Collette writes:
This last week I had been working on a test network to test out 6.1
prior to upgrading our production boxes from 5.4.
I wish I had done that.. :-(
That's when I ran
across the rpc.lockd issues that have been discussed earlier.
I am not familiar with that, but I c
me - too ...
2006/6/29, Michael Collette <[EMAIL PROTECTED]>:
This last week I had been working on a test network to test out 6.1
prior to upgrading our production boxes from 5.4. That's when I ran
across the rpc.lockd issues that have been discussed earlier.
Our production setup has diskless