On Wed, Oct 9, 2013 at 10:19 PM, Kees Bos wrote:

Hi,

I've managed to get ceph into an unhealthy state, from which it will not
recover automatically. I've done some 'ceph osd out X' and stopped
ceph-osd processes before the rebalancing was completed. (All in a test
environment :-) )

Now I see:

# ceph -w
cluster 7fac9ad3-455e-4570-ae24-5c431176
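
In concrete terms, that amounts to something like the sequence below on a
test cluster (osd.3 is only an example id, and the exact service command
depends on the init system in use); ceph health detail and ceph osd tree
then show which placement groups are stuck and which osds are down or out:

# ceph osd out 3              (mark osd.3 out; rebalancing starts)
# service ceph stop osd.3     (stop the daemon before rebalancing finishes)
# ceph health detail          (lists the degraded/stuck PGs and why)
# ceph osd tree               (shows which osds are up/down and in/out)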

On 09/10/13 13:38, Kees Bos wrote:

Hi,

What is the estimated storage usage for a monitor (i.e. the amount of
data stored in /var/lib/ceph/mon/ceph-mon01)?

Currently, in my freshly set up test system, it's something like 40M
(du -s -h /var/lib/ceph/mon/ceph-mon01), but that will probably grow
with the number of osds.

Are there some numbers
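
For reference, the on-disk usage can be checked per monitor, and the store
can be compacted when it grows; something like the following (the paths
assume the default /var/lib/ceph/mon/ceph-<id> layout, with mon01 as the
example id from above):

# du -sh /var/lib/ceph/mon/ceph-mon01             (whole monitor data directory)
# du -sh /var/lib/ceph/mon/ceph-mon01/store.db    (the leveldb store itself)
# ceph tell mon.mon01 compact                     (ask that monitor to compact its store)

There is also a 'mon compact on start = true' option for ceph.conf, which
compacts the store each time the monitor starts.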

On Tue, 2013-10-08 at 11:55 +0200, Kees Bos wrote:
> Hi,
>
> It seems to me that ceph-deploy doesn't consider the [mon] and [osd]
> sections of {cluster}.conf. Is this intentional, or will this be
> implemented down the road?
>
> - Kees

Well, at least some of the [osd] section sett
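
For illustration, the per-daemon settings in question live in ceph.conf
sections roughly like the following (the values are placeholders, not
recommendations); whatever ceph-deploy does with individual options, it can
at least distribute the resulting file to the nodes with
'ceph-deploy config push <host>':

[global]
fsid = <uuid>
mon initial members = mon01
mon host = 192.0.2.10            ; placeholder address

[mon]
mon osd down out interval = 300  ; example of a mon-side setting

[osd]
osd journal size = 1024          ; example of an osd-side setting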