Thanks Greg. I appreciate the advice, and very quick replies too :)
On 18 July 2014 23:35, Gregory Farnum wrote:
> On Fri, Jul 18, 2014 at 3:29 PM, James Eckersall wrote:
> > Thanks Greg.
> >
> > Can I suggest that the documentation makes this much clearer? It might just
> > be me, but I couldn't glean this from the docs, so I expect I'm not the only
> > one.
> >
> > Also, can I clarify how many pg's you would suggest is
ily rectified.
J
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Gregory Farnum
Sent: 18 July 2014 23:25
To: James Eckersall
Cc: ceph-users
Subject: Re: [ceph-users] health_err on osd full
Yes, that's expected behavior. Since the cluster can't move data
around on its own, and lots of things will behave *very badly* if some
of their writes go through but others don't, the cluster goes
read-only once any OSD is full. That's why nearfull is a warn
condition; you really want to even out the data across your OSDs
before any of them actually fills up.
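
A minimal sketch of what that looks like in practice on a Firefly-era (0.80.x) cluster, assuming the stock thresholds of 0.85 (nearfull) and 0.95 (full); the 0.97 bump and the pre-Luminous set_full_ratio syntax are illustrative, not a recommendation:

  # List exactly which OSDs have tripped the nearfull/full thresholds
  ceph health detail

  # Per-OSD usage on 0.80.x ("ceph osd df" only arrived in later releases)
  ceph pg dump osds

  # The cutoffs are the mon options "mon osd nearfull ratio" and
  # "mon osd full ratio".  A temporary bump can clear HEALTH_ERR long
  # enough to rebalance -- revert it once the full OSD has drained.
  ceph pg set_full_ratio 0.97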
Hi,
I have a ceph cluster running on 0.80.1 with 80 OSDs.
I've had fairly uneven distribution of the data and have been keeping it
ticking along with "ceph osd reweight XX 0.x" commands on a few OSDs while
I try to increase the pg count of the pools to hopefully better balance
the data.
Tonig
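
For reference, the reweight-and-split approach described above boils down to commands along these lines; the OSD id, the override weight, the pool name "rbd" and the target of 2048 PGs are all placeholders, and pgp_num has to be raised to match pg_num before any data actually moves:

  # Push data off an over-full OSD by lowering its override weight
  # (1.0 is the default; the CRUSH weight itself is left alone)
  ceph osd reweight 23 0.85

  # Split the pool's PGs so placement evens out
  ceph osd pool set rbd pg_num 2048
  ceph osd pool set rbd pgp_num 2048

  # Watch the rebalance progress
  ceph -w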