Hi,
I’m looking for some help figuring out why there are 2 PGs in our cluster
in 'active+undersized+degraded' state. They don’t seem to get assigned a 3rd
OSD to place data on. I’m not sure why; everything looks ‘ok’ to me. Our ceph
cluster consists of 3 nodes and has been upgraded from fir
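As a starting point, here is a sketch of the commands that should show where the third replica mapping is failing (the pool name and PG id below are placeholders, not values from this cluster):

  ceph health detail                  # lists the degraded/undersized PGs by id
  ceph pg dump_stuck unclean          # stuck PGs with their up/acting OSD sets
  ceph osd tree                       # check that all 3 hosts and their OSDs are up and in
  ceph osd pool get <poolname> size   # confirm the pool really wants 3 replicas
  ceph osd crush rule dump            # check the rule the pool uses (host vs osd failure domain)
  ceph pg <pgid> query                # per-PG detail, including why a 3rd OSD is missing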
>
>> And there is no good answer; it depends on your needs and use case.
>> For example, if your main goal is space and not performance, fewer but
>> larger HDDs will be a better fit.
>
>In my deployment, I get slow requests when starting an OSD with 2.5+ TB used
>on it.
>Due to slowdowns on start, I
Hi,
I upgraded to 10.2.1 and noticed that lttng is a dependency for the RHEL
packages in that version. Since I have no intention of doing traces on ceph, I
find myself wondering why ceph now requires these libraries to be
installed. The lttng packages are not included in RHEL/CentOS 7, so this means
pulling in extra stuff I won’t be using.
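For what it’s worth, this is roughly how I’d expect the dependency to show up on an installed system (the exact package names below are an assumption on my part, so adjust to whatever ceph RPMs you actually have installed):

  rpm -q --requires ceph-osd | grep -i lttng       # does the RPM declare an lttng dependency?
  rpm -q --requires ceph-common | grep -i lttng
  ldd /usr/bin/ceph-osd | grep -i lttng            # is the binary actually linked against liblttng-ust?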
Thanks,
Max
On 25/05/16 9:57 PM, "Ken Dreyer" wrote:
>On Wed, May 25, 2016 at 8:00 AM, kefu chai wrote:
>> On Tue, May 24, 2016 at 5:23 AM, Max Vernimmen
>> wrote:
>>> Hi,
>>>
>>> I upgraded to 10.2.1 and noticed tha
knowledge on debugging starts to run thin. I'd love to
learn how to continue, but I'm in need of some help here. Thank you for
your time!
Best regards,
--
Max Vernimmen
Senior DevOps Engineer
Textkernel
--
= bitmap
>
> The issues you are reporting sound like an issue many of us have seen on
> luminous and mimic clusters, which has been identified as being caused by the
> "stupid allocator" memory allocator.
>
> Gr. Stefan
>
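For reference, the workaround usually mentioned for this is switching the BlueStore allocator from stupid to bitmap in ceph.conf and then restarting the OSDs so the new allocator is used. The exact option names below are my assumption of what the "= bitmap" snippet above refers to, so please verify them against the documentation for your release before applying:

  [osd]
  bluestore_allocator = bitmap
  bluefs_allocator = bitmap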