On Mon, 11 Sep 2000, Marcelo Tosatti wrote:
> On Mon, 11 Sep 2000, Marcelo Tosatti wrote:
> > On Mon, 11 Sep 2000, octave klaba wrote:
> > 
> > > Hello,
> > > while upgrading a software-RAID config (Adaptec, 5x36GB) from 2.2.16 to 2.2.17,
> > > 
> > > /sbin/lilo gave a D process ( :) )
> > > 
> > > root     14823  0.0  0.1  1184  496 ?        D    Sep10   0:00 /sbin/lilo
> > > 
> > > one question:
> > > reboot or not to reboot ?
> > > 
> > > I have no floppy disk on this server.
> > 
> > Could you please see where lilo is sleeping? 
> > 
> > You can do this with: ps xeo "command wchan" | grep lilo
> > 
> > Note that System.map must be in the correct place. 
> 
> Another thing: are you using stock raid or 0.90? 

Ooh, this seems familiar.  My configuration is very similar to the one
mentioned above (an Adaptec controller with the AIC7xxx driver) and I
have seen this problem of LILO hanging with RAID 0.90.  It happened
after 'popper' got stuck in the R state as an unkillable process and
'kupdate' and 'kswapd' were stuck in the D state.  I originally ran
2.2.17 with stock RAID and the same thing happened with the 'popper'
daemon there, although I did not try LILO that time.

Well, the machine with the problem is a production server, so I have
reverted to 2.2.16 with RAID 0.90.  All is well there so far and,
hopefully, I will get the same stability I had with 2.2.16 + stock
RAID.  I can't help with much further debugging, but both times the
'popper' and 'kupdate' got stuck, it happened right after a user with a
large mailbox connected.  'popper' copies the mailbox to a temporary
file on the same RAID partition (/var), so my guess is that this burst
of heavy I/O is the trigger.  I would like to know if there is a patch
for this so I can reconsider upgrading from 2.2.16.
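If it helps anyone trying to reproduce this, a crude approximation of
what popper does (a large sequential copy onto the same RAID partition)
might be something like the following; the sizes and paths are made up,
and this is only a sketch of the suspected trigger, not a confirmed
reproducer:

  # write ~512MB to /var and copy it on the same partition, then
  # watch whether kupdate/kswapd wedge in the D state
  dd if=/dev/zero of=/var/tmp/bigmbox bs=1024k count=512
  cp /var/tmp/bigmbox /var/tmp/bigmbox.copy
  ps axo pid,stat,wchan,comm | egrep 'kupdate|kswapd|cp'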

Thanks,

-- 
Roy Bixler
The University of Chicago Press
[EMAIL PROTECTED]

