"Andrew M.A. Cater" <amaca...@galactic.demon.co.uk> writes:

> On Sun, Sep 21, 2014 at 02:48:59AM +0200, lee wrote:
>> Hi,
>> 
>> any idea why an NFS volume is being unmounted when a VM runs out of
>> memory and kills some processes?  These processes use files on the NFS
>> volume, but that's no reason to unmount it.
>> 
>
> The OOM process killer is not necessarily aware: it will kill processes
> until memory usage works again - and going OOM is, itself, a sign that
> something is fairly drastically wrong.

The VM needs more memory than the server has.  For now, I assigned more
swap space to the VM.

The process killed was seamonkey.  Killing seamonkey frees at least 1GB.

> Suppose:
>
> There's heavy NFS usage and stuff is queueing to get on and off the mount / 
> high disk I/O

NFS usage isn't heavy.

> and the kernel hits a stuck "something" - memory usage may spike and
> the OOM will eventually kill processes.

Even NFS processes, after freeing 1GB+ by killing seamonkey?
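
As an aside, the OOM killer's choice of victim can be biased per process
through /proc, which is one way to keep NFS-related daemons off its list.
A minimal sketch, run against the current shell ($$) only so it works
unprivileged; on a real system you would substitute the PID of the process
you want to protect, and negative values require root:

```shell
# Use the current shell as a stand-in target; in practice substitute the
# PID of the process to protect (e.g. from pidof).
pid=$$
# oom_score_adj ranges from -1000 (never kill) to 1000 (kill first).
# Raising your own score needs no privileges; -1000 (exempt) needs root.
echo 500 > /proc/$pid/oom_score_adj
# The kernel's current "badness" estimate for this process:
cat /proc/$pid/oom_score
cat /proc/$pid/oom_score_adj
```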

> If something is using the NFS and there are stale mounts, that prevents 
> clean unmounting ... further problems.

There's only one mount, and it isn't stale.

> Under high I/O I've occasionally seen informational messages where the
> kernel is backing off for 120s and the note that you can disable further
> displays of this message by cat'ing something into proc. [Hey, it's a
> Sunday morning and I can't remember everything :)]
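
For reference, the half-remembered message is almost certainly the
kernel's hung-task warning, and the message itself names the knob.  A
sketch; the proc file only exists when the kernel was built with
hung-task detection:

```shell
# The warning looks like:
#   INFO: task foo:1234 blocked for more than 120 seconds.
#   "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
# Show the current timeout, if hung-task detection is compiled in:
cat /proc/sys/kernel/hung_task_timeout_secs 2>/dev/null \
    || echo "hung-task detection not enabled on this kernel"
# To silence the warning entirely (as root; 0 means never warn):
#   echo 0 > /proc/sys/kernel/hung_task_timeout_secs
```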

There wasn't much I/O going on.  Seamonkey was actually idling while I
was doing something else that didn't in any way touch the VM seamonkey
was running in.

> If a disk is being left unclean / marked as such, it won't be
> automatically mounted, of course. If there's a problem with a remote
> mount / dead file handles that will also clobber it.

The VM seamonkey was running in doesn't export anything via NFS.  It
only mounts a volume exported by another VM.  The export was fine or I
would have noticed on my desktop because it has the same volume mounted
the same way (/home) and would have frozen up.

>> Also annoying: the volume doesn't get mounted at boot even though it's
>> in /etc/fstab.  I have to log into the VM and mount it manually.  The
>> entry in fstab is correct: I can just run 'mount /mountpoint' and it's
>> mounted.  Is that some Debian-specific bug?
>> 
>
> What's the entry? [Too many times, I've seen a typo or something
> anomalous only once I've called a colleague in to have a look :)]

jupiter:/srv/data_home /home nfs defaults 0 0
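
Since the entry itself works with a manual 'mount /home', the likely
culprit is ordering rather than syntax: at boot, the mount can be
attempted before the network (or the jupiter VM serving the export) is
up, and it then fails silently.  A hedged sketch of the usual
fstab-level workaround; whether ordering is really the cause here is an
assumption:

```
# /etc/fstab -- 'bg' makes a failed NFS mount retry in the background
# instead of giving up at boot; the rest is unchanged.
jupiter:/srv/data_home /home nfs defaults,bg 0 0
```

On a sysvinit Debian system, network filesystems are normally mounted by
the mountnfs.sh boot script after networking comes up, so it's also
worth checking that that script hasn't been disabled.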


-- 
Knowledge is volatile and fluid.  Software is power.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: https://lists.debian.org/87zjdsdftf....@yun.yagibdah.de
