Primary storage is on a SAN (Fibre Channel attached); secondary storage is on NFS.

I've submitted CLOUDSTACK-105.
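
For anyone else who runs into this, here's a rough sketch of the one-time
cleanup and the interim cron job described in the quoted message below.
The /tmp location, the stream-unix.* name pattern, and the 7-day age
threshold come from that message; the toolstack restart command, the cron
schedule, and the cron file path are approximations, not a verified
procedure.

  # One-time cleanup on an affected host: remove the stale sockets, then
  # restart xapi, then bring the host back out of maintenance mode.
  find /tmp -name 'stream-unix.*' -mtime +7 -exec rm -v {} \;
  xe-toolstack-restart    # assumed equivalent of "kill and restart xapi"
  # ...then clear emergency maintenance mode so CloudStack marks the host up

  # Interim workaround, e.g. dropped in /etc/cron.d/clean-stale-sockets
  # (runs daily at 03:00; the schedule is a placeholder):
  0 3 * * * root find /tmp -name 'stream-unix.*' -mtime +7 -exec rm -v {} \;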

On Sep 14, 2012, at 11:18 AM, Prasanna Santhanam 
<prasanna.santha...@citrix.com> wrote:

> Caleb - what kind of storage are you using, XenServer local store or NFS 
> shared store? We faced this with local store but only worked around the issue.
> 
> 
> 
> ----- Original Message -----
> From: Ahmad Emneina [mailto:ahmad.emne...@citrix.com]
> Sent: Friday, September 14, 2012 10:39 PM
> To: cloudstack-us...@incubator.apache.org; cloudstack-dev@incubator.apache.org
> Subject: Re: Xenserver 6.0.2/Cloudstack 3.0.2 stale socket files
> 
> This looks like a prime candidate for a bug. There might be time to get it
> in before 4.0 goes out!
> 
> On 9/14/12 9:54 AM, "Caleb Call" <calebc...@me.com> wrote:
> 
>> We came across an interesting issue yesterday in one of our clusters.  We
>> ran out of inodes on all of our cluster members (since when does that
>> happen in 2012?).  When it did, the / filesystem became read-only, which
>> in turn sent all the hosts into emergency maintenance mode and got them
>> marked down by CloudStack.  The cause was hundreds of thousands of stale
>> socket files in /tmp named "stream-unix.####.######".  To resolve the
>> issue, we had to delete those stale socket files (find /tmp -name
>> "*stream*" -mtime +7 -exec rm -v {} \;), then kill and restart xapi,
>> then take the hosts back out of emergency maintenance mode.  These hosts
>> had only been up for 45 days before the issue occurred.
>> 
>> In our scouring of the interwebs, the only other instance we've been
>> able to find of this (or anything similar) happening is on the same
>> setup we are currently running: XenServer 6.0.2 with CS 3.0.2.  Do these
>> stream-unix sockets have anything to do with CloudStack?  I would think
>> that if this were a XenServer bug, there would be a lot more on the
>> internet about it.  As a temporary workaround we've added a cron job to
>> clean up these files, but we'd really like to address the actual issue
>> that's causing these sockets to go stale and never get cleaned up.
>> 
>> Thoughts?
>> 
>> Thanks,
>> Caleb
>> 
> 
> 
> -- 
> Æ
> 
> 
> 
