On Sun, Sep 21, 2014 at 02:48:59AM +0200, lee wrote:
> Hi,
> 
> any idea why an NFS volume is being unmounted when a VM runs out of
> memory and kills some processes?  These processes use files on the NFS
> volume, but that's no reason to unmount it.
> 

The OOM killer is not necessarily aware of what it is killing: it will kill
processes until memory usage is manageable again - and going OOM is, itself,
a sign that something is fairly drastically wrong.
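You can check whether it really was the OOM killer that fired - the kernel
logs every kill. A minimal sketch (assuming dmesg or the saved kernel log is
readable on your VM):

```shell
# Did the OOM killer fire?  The kernel log records each kill.
# (dmesg may need root on some systems; fall back to the saved log.)
{ dmesg 2>/dev/null || cat /var/log/kern.log 2>/dev/null; } \
  | grep -iE 'out of memory|killed process' \
  || echo "no OOM-killer events found"
```

If that turns up nothing, the unmount came from somewhere else.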

Suppose:

There's heavy NFS usage - stuff queueing to get on and off the mount, high
disk I/O - and the kernel hits a stuck "something": memory usage may spike
and the OOM killer will eventually kill processes. If something is still
using the NFS mount, or there are stale mounts, that prevents a clean
unmount ... further problems. Under high I/O I've occasionally seen
informational messages where the kernel notes that a task has been blocked
for 120s, together with a note that you can disable further displays of the
message by writing something into /proc. [Hey, it's a Sunday morning and I
can't remember everything :)]
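If memory serves, that 120s warning comes from the kernel's hung-task
watchdog, and the knob is the kernel.hung_task_timeout_secs sysctl - a
sketch, assuming your kernel was built with hung-task detection:

```shell
# The "task blocked for more than 120 seconds" warning comes from the
# hung-task watchdog; its timeout is (assumption) this sysctl:
f=/proc/sys/kernel/hung_task_timeout_secs
if [ -r "$f" ]; then
  cat "$f"    # current timeout in seconds; writing 0 disables the warning
else
  echo "hung-task watchdog not built into this kernel"
fi
```

Seeing those warnings at all is a hint that I/O to the mount is stalling,
which fits the picture above.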

If a disk has been left unclean / marked as such, it won't be automatically
mounted, of course. If there's a problem with a remote mount / dead file
handles, that will also clobber it.


> Also annoying: The volume doesn't get mounted when booting despite being
> in /etc/fstab.  I have to log in to the VM and mount it manually.  The
> entry in fstab is correct: I can just say 'mount /mountpoint' and it's
> mounted.  Is that some Debian-specific bug?
> 

What's the entry? [Too many times, I've seen a typo or something anomalous only
once I've called a colleague in to have a look :) ]
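For comparison, a typical NFS line looks like this (server, export and
mountpoint here are made up - substitute your own):

```
# hypothetical server/export/mountpoint - adjust to taste
nfsserver:/export  /mountpoint  nfs  rw,hard,bg,_netdev  0  0
```

The options worth checking are bg, which retries a failed mount in the
background, and _netdev, which tells the boot scripts to wait for the
network. An NFS mount attempted before the interface is up can fail at boot
yet work fine by hand afterwards - which would match your symptom.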

Hope this helps - all the best,

AndyC



-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: https://lists.debian.org/20140921110354.ga4...@galactic.demon.co.uk
