If we do not have any OSD matching the device class, it won't start trying to
allocate PGs to an OSD that doesn't match it, will it?
Is there any way to prevent pool 1 from using the OSD?
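A minimal sketch of the usual way to pin a pool to a device class, assuming a hypothetical class name "nvme" and pool name "pool1" (names are illustrative, not taken from this cluster):

  # tag the OSDs the pool is allowed to use with the device class
  ceph osd crush set-device-class nvme osd.0 osd.1 osd.2
  # create a replicated rule restricted to that class
  ceph osd crush rule create-replicated nvme_rule default host nvme
  # point the pool at the restricted rule
  ceph osd pool set pool1 crush_rule nvme_rule

With a rule like this, CRUSH only considers OSDs carrying the named class when mapping the pool's PGs, so an OSD with a different (or no) class should never receive them.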
--
Dr. Benoit Hudzia
Mobile (UK): +44 (0) 75 346 78673
Mobile (IE): +353 (0) 89 219 3675
Email: ben...@strat
at 17:11, John Spray wrote:
> On Thu, Jul 26, 2018 at 4:57 PM Benoit Hudzia wrote:
>
>> Hi,
>>
>> We currently segregate Ceph pool PG allocation using the CRUSH device
>> class rule set, as described here:
>> https://ceph.com/community/new-luminous-crush-device-c
4205M 9310G 9315G
For some reason it seems that some PGs are allocated to OSD 3 (but stale +
peering).
This is kind of odd.
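For reference, a quick way to cross-check where a PG is mapped versus where it is actually active (the PG id 2.1 below is just the one visible in the later dump):

  # show the up set and acting set CRUSH computes for one PG
  ceph pg map 2.1
  # list PGs stuck stale or inactive
  ceph pg dump_stuck stale
  ceph pg dump_stuck inactive

If the removed or mismatched OSD still shows up in the up/acting sets, the mapping problem is on the CRUSH side rather than in the daemons.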
On Thu, 26 Jul 2018 at 20:50, Benoit Hudzia wrote:
> You are correct, the PGs are stale (not allocated).
>
> [root@stratonode1 /]# ceph status
> clus
above scenario without the deletion step).
It seems that it is only when deleting the OSD that the whole
calculation gets screwed up.
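For comparison, this is the removal sequence that is generally expected to leave the CRUSH map and PG mappings consistent on Luminous and later (OSD id 3 is used purely as an illustration):

  # stop placing data on the OSD and let PGs migrate off it
  ceph osd out 3
  # once the cluster is healthy again, stop the daemon and purge the OSD
  systemctl stop ceph-osd@3
  ceph osd purge 3 --yes-i-really-mean-it

"ceph osd purge" removes the OSD from the CRUSH map, deletes its auth key and drops it from the OSD map in one step; if PGs still reference the removed id afterwards, the CRUSH rule or the device class assignment is worth re-checking.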
On Thu, 26 Jul 2018 at 20:52, Benoit Hudzia wrote:
> Sorry, missing the pg dump:
>
> 2.1 0 00 0 0
98: 119388 Segmentation fault (core dumped)
/usr/bin/ceph-osd -f --cluster "${CEPH_CLUSTERNAME}" --id "${OSD_ID}"
--setuser root --setgroup root
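One way to get more detail out of a crash like this, assuming the core file is kept on the node and the ceph debuginfo packages are installed (paths are illustrative):

  # load the core dump into gdb and print the backtrace of the crashing thread
  gdb /usr/bin/ceph-osd /path/to/core
  (gdb) bt

The resulting backtrace usually narrows the failure down to a specific BlueStore code path, which is much easier to match against existing tracker issues.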
ceph.com/issues/24639,
> if you have anything in common with that deployment. (But you probably
> don't; an error on read generally is about bad state on disk that was
> created somewhere else.)
> -Greg
>
> On Sun, Aug 5, 2018 at 3:19 PM Benoit Hudzia wrote:
>
>&
, in every failed run these two lines are missing. Any idea why this
would occur?
Last but not least: I have set the log level to 20; however, it seems
that BlueStore crashes before it even gets to the point where things are
logged.
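A minimal sketch of settings that should take effect early enough to catch this, assuming they are placed in ceph.conf on the affected node (the subsystem list is illustrative, not exhaustive):

  [osd]
      debug osd = 20
      debug bluestore = 20
      debug bdev = 20
      log to stderr = true

The same can be passed on the ceph-osd command line, e.g. --debug_bluestore 20 --log-to-stderr, so the extra output is visible even if the daemon dies before the log file is flushed.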
Regards
Benoit
On Mon, 6 Aug 2018 at 13:07, Benoit Hudzia