Symptoms like those in http://tracker.ceph.com/issues/4699:
the ceph-osd process crashes with a segfault on all OSDs.
If I stop the MON daemons I can start the OSDs, but as soon as I start
the MONs again all the OSDs die.
More detailed log:
0> 2013-07-15 16:42:05.001242 7ffe5a6fc700 -1 *** Caught signal
(Segmentation fault)
Hi John,
Could you try without the cat'ing and such?
Could you also try this:
$ virsh secret-define secret.xml
$ virsh secret-set-value
$ virsh pool-create ceph.xml
Could you post both XML files and not use any Xen commands like 'xe'?
I want to verify where this problem is.
Wido
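For reference, a minimal sketch of the two files being asked about. The pool name, monitor address, cephx user and UUID below are placeholders and assume a client.libvirt cephx user already exists; none of these values come from this thread:

$ cat secret.xml
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>

$ cat ceph.xml
<pool type='rbd'>
  <name>rbdpool</name>
  <source>
    <name>rbd</name>
    <host name='192.168.0.1' port='6789'/>
    <auth username='libvirt' type='ceph'>
      <secret uuid='UUID-PRINTED-BY-SECRET-DEFINE'/>
    </auth>
  </source>
</pool>

$ virsh secret-define secret.xml
$ virsh secret-set-value --secret <uuid> --base64 "$(ceph auth get-key client.libvirt)"
$ virsh pool-create ceph.xml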
On Sun, 14 Jul 2013, Stefan Priebe wrote:
> Hello list,
>
> might this be a problem caused by having too many PGs? I have 370 per OSD instead
> of having 33 / OSD (OSDs*100/3).
That might exacerbate it.
Can you try setting
osd min pg log entries = 50
osd max pg log entries = 100
across your cluster
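A minimal sketch of applying those two values, assuming a standard /etc/ceph/ceph.conf; they take effect on the OSDs after a daemon restart (injectargs can also push them at runtime):

# /etc/ceph/ceph.conf, on every OSD host
[osd]
    osd min pg log entries = 50
    osd max pg log entries = 100

# then restart the ceph-osd daemons so the new limits are picked up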
On 07/14/2013 04:27 AM, James Harper wrote:
My cluster is in HEALTH_WARN state because one of my monitors has low disk
space on /var/lib/ceph. Looking into this in more detail, there are a bunch of
.sst files dating back to Jul 7, and then a lot more at Jun 30 and older.
Since cuttlefish's re
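A quick way to see how much of that space is the monitor's leveldb store (the .sst files); the path below assumes the default mon data directory and a monitor id of "a":

$ du -sh /var/lib/ceph/mon/ceph-a/store.db

# optionally, ask the monitor to compact its store on its next start
# (ceph.conf, [mon] section):
mon compact on start = true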
Hello list,
might this be a problem caused by having too many PGs? I have 370 per OSD
instead of the roughly 33 per OSD given by OSDs*100/3.
Is there any plan for PG merging?
Stefan
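For context, the rule of thumb referenced above, worked through for a hypothetical cluster; the 24-OSD figure is only an example and not from this thread:

$ osds=24; replicas=3
$ echo $(( osds * 100 / replicas ))   # ~800 PGs in total across all pools
$ echo $(( 100 / replicas ))          # ~33 primary PGs per OSD, vs. the 370 reported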
Hello list,
does anyone else here regularly have problems bringing an offline OSD back online?
Since cuttlefish I'm seeing slow requests for