DO NOT READ ANY FURTHER IF YOU HAVE NOT SETUP AND ADMINED A NIS DOMAIN!
THIS IS NOT FOR YOU!

To those of us experiencing problems with ypserv: I have made a copy of
my binary available at:

http://www.cs.rpi.edu/~crossd/FreeBSD/ypserv
MD5 (ypserv) = 1f1c6c01eafd690059b32e615e5b6efc

It is binary
I am apparently bug-compatible with the original too, though it took
longer to trip over it (and the code runs LOTS faster :)... So probably
not tonight. I am going to be placing debugging statements in the code to
see if I can figure out where information is being stepped on.
--
David Cross
Ok... I have just finished the first step in a rewrite of the hash routines
for Berkeley DB (read-only at this point), and I have ypserv compiled using
them. So far so good :). And ypserv uses a _lot_ less CPU now.
(I have totally removed all of the buffer management code in Berkeley DB
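
For anyone following along who has not poked at the map code: a read-only
lookup through the stock db(3) hash interface looks roughly like the sketch
below. The map path and key are made-up examples, and the real yp_dblookup.c
wraps this in its own caching; it is only meant to show the layer the
rewritten routines stand in for.

#include <sys/types.h>
#include <limits.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <db.h>

int
main(void)
{
        DB *dbp;
        DBT key, data;

        /* Open the map read-only; DB_HASH is the on-disk format the
         * NIS maps use.  The path is just an example. */
        dbp = dbopen("/var/yp/example.com/passwd.byname",
            O_RDONLY, 0, DB_HASH, NULL);
        if (dbp == NULL) {
                perror("dbopen");
                return (1);
        }

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        key.data = "crossd";
        key.size = strlen("crossd");

        /* get() returns 0 on success, 1 if the key is not found. */
        if ((dbp->get)(dbp, &key, &data, 0) == 0)
                printf("%.*s\n", (int)data.size, (char *)data.data);

        (void)(dbp->close)(dbp);
        return (0);
}
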
At 5:54 PM -0400 5/15/01, David E. Cross wrote:
>I saw this the other day:
>
>http://www.sleepycat.com/historic.html
>
>Down at the bottom:
>
> > Finally, you should not upgrade your GNU gcc or Solaris compiler.
> > Optimizations in versions of gcc 2 that were in alpha test in
> > the summer of
I saw this the other day:
http://www.sleepycat.com/historic.html
Down at the bottom:
> Finally, you should not upgrade your GNU gcc or Solaris compiler.
> Optimizations in versions of gcc 2 that were in alpha test in the
> summer of 1997, and a version of the standard Solaris WorkShop Compiler
At 8:06 PM -0400 4/16/01, David E. Cross wrote:
[...skipping over some important stuff...]
>My second solution was to have the child call yp_init_dbs()
>instead of yp_flush_all() (the former would just nuke the
>references to the FDs, but actually keep them open). This
>didn't work. Can
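
To make the two approaches concrete, here is a rough sketch; the cache
layout is a made-up stand-in, not the real ypserv code. yp_flush_all()
closes every cached handle in the child, while the yp_init_dbs() idea just
drops the references and leaves the descriptors open for _exit() to reclaim:

#include <sys/types.h>
#include <limits.h>
#include <db.h>

#define MAXDBS  20                      /* illustrative cache size */
static DB *db_cache[MAXDBS];            /* stand-in for ypserv's handle cache */

/* yp_flush_all()-style child: close every cached handle. */
static void
child_close_all(void)
{
        int i;

        for (i = 0; i < MAXDBS; i++) {
                if (db_cache[i] != NULL) {
                        (void)(db_cache[i]->close)(db_cache[i]);
                        db_cache[i] = NULL;
                }
        }
}

/* yp_init_dbs()-style child: just drop the references; the descriptors
 * stay open and are reclaimed when the child eventually calls _exit(). */
static void
child_forget_all(void)
{
        int i;

        for (i = 0; i < MAXDBS; i++)
                db_cache[i] = NULL;
}
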
I'm open to the idea of fixing it, but I wouldn't mind just another
day or two of testing first, hopefully with other folks involved.
I didn't see a diff attached?
- Jordan
know some others who use ypserv heavily have run into these problems; if
you need the patch, I can provide it if you are willing to give it a test ;)
JKH: I think this _really_ needs to get into 4.3-RELEASE; this has been
a vexing bug for over a year. The current solution may be sub-optimal, bu
Ok... I am coming to the conclusion that there is some sort of kernel
issue that is causing this problem. Here is what I have done and discovered
to date (this is all with 4.3-RC2 FWIW):
At some point the 'qhead' CIRCLEQ structure in yp_dblookup.c gets corrupted.
This is declared as a static, an
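
For reference, the structure in question is a cache of open maps kept on a
CIRCLEQ from <sys/queue.h>. Roughly its shape, with guessed field names
rather than the actual yp_dblookup.c declarations:

#include <sys/types.h>
#include <sys/queue.h>          /* CIRCLEQ_* macros (4.x-era queue.h) */
#include <limits.h>
#include <db.h>

/* Guessed layout of one cache entry. */
struct cache_ent {
        char *name;                             /* map name */
        DB   *dbp;                              /* open db(3) handle */
        int   flags;
        CIRCLEQ_ENTRY(cache_ent) links;
};

/* The static list head ("qhead") that is reportedly being corrupted. */
static CIRCLEQ_HEAD(cache_head, cache_ent) qhead;

/* Called once at startup, before any lookups touch the list. */
static void
cache_init(void)
{
        CIRCLEQ_INIT(&qhead);
}
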
I have found _a_ bug in ypserv (I think I may be stumbling over multiple
different bugs, but this one is very reproducible).
It is dying in the yp_testflags routine, in the for loop that goes through
the CIRCLEQ. The loop dies with qptr pointing to a struct that is all NULL
(my reading of
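
A sketch of that loop with a sanity check bolted on, roughly the kind of
debugging statement mentioned earlier; the declarations are the same
illustrative ones as in the previous sketch, not the real yp_dblookup.c code:

#include <sys/types.h>
#include <sys/queue.h>
#include <limits.h>
#include <string.h>
#include <syslog.h>
#include <db.h>

struct cache_ent {                      /* same illustrative layout as above */
        char *name;
        DB   *dbp;
        int   flags;
        CIRCLEQ_ENTRY(cache_ent) links;
};
static CIRCLEQ_HEAD(cache_head, cache_ent) qhead;

/* Walk the cache the way a yp_testflags()-style routine would, with an
 * extra check for the all-NULL entry seen in the crash. */
static int
testflags_checked(const char *map)
{
        struct cache_ent *qptr;

        for (qptr = CIRCLEQ_FIRST(&qhead);
            qptr != (void *)&qhead;
            qptr = CIRCLEQ_NEXT(qptr, links)) {
                if (qptr == NULL || qptr->name == NULL || qptr->dbp == NULL) {
                        /* Corrupted entry; log it (or abort() for a core). */
                        syslog(LOG_ERR, "corrupted cache entry at %p",
                            (void *)qptr);
                        return (0);
                }
                if (strcmp(qptr->name, map) == 0)
                        return (qptr->flags);
        }
        return (0);
}
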
I have traced the problem in ypserv down to the RPC dispatch routines...
I am digging further and I hope to have it found and eliminated today
(in time for -RELEASE ;)
If anyone has any idea how it could be tripping up here, please let me
know. My 2 guesses are a corrupted svc_callback entry (no
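
For anyone who has not read the RPC side of ypserv: the dispatch routine is
the function handed to svc_register() that svc_run() calls for every
incoming request. A stripped-down skeleton, showing only the null
procedure; apart from the standard svc_* library calls, the names here are
illustrative:

#include <rpc/rpc.h>

/* Skeleton SunRPC dispatcher: the real ypserv one decodes arguments and
 * calls the yp_* service routines from here. */
static void
example_dispatch(struct svc_req *rqstp, SVCXPRT *transp)
{
        switch (rqstp->rq_proc) {
        case NULLPROC:
                /* Ping: reply with a void result. */
                (void)svc_sendreply(transp, (xdrproc_t)xdr_void, NULL);
                break;
        default:
                /* Unknown procedure number. */
                svcerr_noproc(transp);
                break;
        }
}
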
the yp_all function it calls yp_fork() to fork a new ypserv, the
parent then calls return(NULL); and the child handles the request.
Looking at the ktraces, I notice that the parent does not close
the socket connection, but after the child finishes the transaction
the parent gets a rea
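
Roughly the pattern being described, as a compilable sketch. yp_fork() is
assumed here to be a thin wrapper around fork(), and the child-side handler
is a stub, since the point under discussion is only what happens to the
client socket in each process:

#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in for ypserv's yp_fork(); assumed to be a fork() wrapper. */
static pid_t
yp_fork(void)
{
        return (fork());
}

/* Stand-in for the child-side work of yp_all(): streaming the map. */
static void
child_send_whole_map(void)
{
        printf("child %d: sending map...\n", (int)getpid());
}

/* The parent returns NULL (no immediate reply) while the child answers
 * the request; the accepted client socket stays open in the parent as
 * well, so the parent sees activity on it again once the child is done.
 * (Error handling omitted.) */
static void *
handle_yp_all(void)
{
        if (yp_fork() != 0)
                return (NULL);          /* parent */

        child_send_whole_map();         /* child */
        _exit(0);
        /* NOTREACHED */
}
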
> The ypserv bug (the one where ypserv randomly stops responding or
> just seg-faults) is still very much alive. I had to restart it
> about 11 times in the course of 20 minutes this morning. That's
> the bad news, the good news is that I started it each time with
> '
The ypserv bug (the one where ypserv randomly stops responding or
just seg-faults) is still very much alive. I had to restart it
about 11 times in the course of 20 minutes this morning. That's
the bad news, the good news is that I started it each time with
'ktrace -i'.
Going b