Moving to ceph-users.
Ian R. Colle
Director of Engineering
Inktank
Delivering the Future of Storage
http://www.linkedin.com/in/ircolle
http://www.twitter.com/ircolle
Cell: +1.303.601.7713
Email: i...@inktank.com
On 4/16/14, 7:52 AM, "Ilya Storozhilov" wrote:
>Hello Ceph developers,
>
>we enco
On Sun, 14 Jul 2013, Stefan Priebe wrote:
> Hello list,
>
> might this be a problem due to having too many PGs? I've 370 per OSD instead
> of having 33 / OSD (OSDs*100/3).

That might exacerbate it.

Can you try setting

osd min pg log entries = 50
osd max pg log entries = 100

across your cluster?
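For reference, a minimal sketch of how those two settings could be applied (the ceph.conf placement persists across restarts; the injectargs form changes them on a live cluster and assumes the `ceph` CLI has an admin keyring on the node you run it from):

```shell
# Persistent form -- add to the [osd] section of ceph.conf, then restart OSDs:
#
#   [osd]
#   osd min pg log entries = 50
#   osd max pg log entries = 100

# Runtime form -- inject into all running OSDs without a restart
# (takes effect immediately, but is lost on the next OSD restart):
ceph tell 'osd.*' injectargs '--osd-min-pg-log-entries 50 --osd-max-pg-log-entries 100'
```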
Hello list,

might this be a problem due to having too many PGs? I've 370 per OSD
instead of having 33 / OSD (OSDs*100/3).

Is there any plan for PG merging?

Stefan
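The arithmetic behind that target can be sketched as follows. This follows the common ~100-PGs-per-OSD rule of thumb; `pool_size` is the replication factor, and the "33 / OSD" figure in the message counts each PG once rather than once per replica. The OSD count below is illustrative, since the thread doesn't state it:

```python
def recommended_pg_num(num_osds: int, pool_size: int) -> int:
    """Rule-of-thumb total PG count for a pool: roughly 100 PGs per
    OSD, divided by the replication factor (in practice this is then
    usually rounded up to a power of two)."""
    return num_osds * 100 // pool_size

def pgs_per_osd(total_pgs: int, num_osds: int) -> float:
    """PGs per OSD, counting each PG once (not once per replica)."""
    return total_pgs / num_osds

# Illustrative example: 12 OSDs with 3x replication.
print(recommended_pg_num(12, 3))         # 400 total PGs
print(round(pgs_per_osd(400, 12)))       # ~33 per OSD, as in the thread
```

By the same measure, the poster's 370 PGs per OSD is roughly eleven times the suggested figure, which is why Sage suggests shrinking the per-PG log as a mitigation.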
Hello list,

anyone else here who always has problems bringing back an offline OSD?
Since cuttlefish I'm seeing slow requests for the first 2-5 minutes
after bringing an OSD online again, but that's so long that the VMs
crash as they think their disk is offline...

Under bobtail I never had any problems.