Re: [ceph-users] health_err on osd full

2014-07-18 Thread James Eckersall
Thanks Greg. I appreciate the advice, and very quick replies too :)

On 18 July 2014 23:35, Gregory Farnum wrote:
> On Fri, Jul 18, 2014 at 3:29 PM, James Eckersall wrote:
> > Thanks Greg.
> >
> > Can I suggest that the documentation makes this much clearer? It might
> > just be me, but I couldn't glean this from the docs, so I expect I'm not
> > the only one. [...]

Re: [ceph-users] health_err on osd full

2014-07-18 Thread Gregory Farnum
On Fri, Jul 18, 2014 at 3:29 PM, James Eckersall wrote:
> Thanks Greg.
>
> Can I suggest that the documentation makes this much clearer? It might just
> be me, but I couldn't glean this from the docs, so I expect I'm not the only
> one.
>
> Also, can I clarify how many pg's you would suggest is [...]
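The preview cuts off before Greg's answer; for reference, the rule of thumb in the Ceph documentation of that era was to target roughly 100 PGs per OSD, divided by the pool's replica count and rounded up to the next power of two. A sketch of the arithmetic (the cluster size is illustrative, not taken from this thread):

    # total PGs ~= (num_osds * 100) / replica_count, rounded up to a power of 2
    # e.g. a hypothetical 12-OSD cluster with 3x replication:
    #   (12 * 100) / 3 = 400  ->  round up  ->  512 PGs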

Re: [ceph-users] health_err on osd full

2014-07-18 Thread James Eckersall
[...] easily rectified.

J

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Gregory Farnum
Sent: 18 July 2014 23:25
To: James Eckersall
Cc: ceph-users
Subject: Re: [ceph-users] health_err on osd full

Yes, that's expected behavior. Since the cluster can't move data around on its own, [...]
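The preview doesn't show how the full OSD was actually cleared. A plausible sequence with the tooling of that era (treat the exact steps as an assumption, not what James ran; the osd id is an example):

    ceph health detail              # identify which OSD(s) tripped the full ratio
    ceph pg set_full_ratio 0.97     # temporarily raise the full threshold
    ceph osd reweight 3 0.9         # example: push data off osd.3
    # ...then free or delete data, and restore the full ratio to its default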

Re: [ceph-users] health_err on osd full

2014-07-18 Thread Gregory Farnum
Yes, that's expected behavior. Since the cluster can't move data around on its own, and lots of things will behave *very badly* if some of their writes go through but others don't, the cluster goes read-only once any OSD is full. That's why nearfull is a warn condition; you really want to even out [...]
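The thresholds Greg describes are controlled by two monitor options. A minimal ceph.conf sketch with the documented defaults of that era (defaults shown for illustration, not settings quoted from this thread):

    [mon]
        mon osd nearfull ratio = .85   # any OSD past 85% -> HEALTH_WARN
        mon osd full ratio = .95       # any OSD past 95% -> HEALTH_ERR, writes blocked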