Running 14.2.4 (but the same issue was observed on 14.2.2), we have a
problem with, thankfully, a testing cluster where all PGs are failing to
peer and are stuck in peering, unknown, stale, etc. states.
My working theory is that this is because the OSDs don't seem to be
utilizing msgr v2 as "ceph osd find os
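A quick way to check that theory (a sketch only, not a confirmed
diagnosis; osd.0 is just a placeholder id) is to look at the addresses
the OSDs have registered:

# ceph osd find 0
# ceph osd dump | grep '^osd\.'

If the address vectors only contain v1: entries and no v2: entries, the
OSDs are not advertising msgr v2; ms_bind_msgr2 and a restart of the
daemons are the usual things to look at.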
On Mon, Nov 11, 2019 at 5:09 PM Karsten Nielsen wrote:
>
> I started a job that moved some files around in the cephfs cluster that
> resulted in the mds going back into the crash loop.
> Logs are here:
> http://s3.foo-bar.dk/mds-dumps/mds.log-2019
>
> Any help would be
OK, thanks. So that is how it works.
I did not expect that it would not work while the service is running.
11.11.2019, 15:54, "Igor Fedotov" :
> On 11/11/2019 3:51 PM, Andrey Groshev wrote:
>> Hi, Igor!
>> Service is UP.
>
> This prevents ceph-bluestore-tool from starting; you should shut it down
> first.
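For reference, a minimal sketch of the sequence being suggested here,
reusing osd.8 from the original post:

# systemctl stop ceph-osd@8
# ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-8
# systemctl start ceph-osd@8

ceph-bluestore-tool needs exclusive access to the OSD's devices, which
is why the running daemon has to be stopped first.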
On 11/11/2019 3:51 PM, Andrey Groshev wrote:
Hi, Igor!
Service is UP.
This prevents ceph-bluestore-tool from starting; you should shut it down
first.
I did not make separate devices.
blocks.db and blocks.wal are created only if they are on separate devices?
in general - yes. They make se
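A hedged way to check this on a given OSD is to list its data directory;
block.db and block.wal only appear as separate symlinks when they sit on
their own devices (osd.8 taken from this thread):

# ls -l /var/lib/ceph/osd/ceph-8/

If there is only a block symlink and no block.db or block.wal, the DB
and WAL are colocated on the main device.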
Hi, Igor!
Service is UP.
I did not make separate devices.
blocks.db and blocks.wal are created only if they are on separate devices?
# systemctl status ceph-osd@8.service
● ceph-osd@8.service - Ceph object storage daemon osd.8
Hi Andrey,
this log output rather looks like some other process is using
/var/lib/ceph/osd/ceph-8
Have you stopped the OSD.8 daemon?
And are you sure you deployed standalone DB/WAL devices for this OSD?
Thanks,
Igor
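A hedged way to answer both questions with standard tools (nothing
Ceph-specific; osd.8 as above):

# systemctl is-active ceph-osd@8
# lsof +D /var/lib/ceph/osd/ceph-8

The first shows whether the daemon is still running; the second lists
anything that still has files open under the OSD directory.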
On 11/11/2019 3:10 PM, Andrey Groshev wrote:
Hello,
Some time ago I deploye
Hello,
Some time ago I deployed a ceph cluster.
It works great.
Today I collected some statistics and found that the BlueFS utility is not
working.
# ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-8
inferring bluefs devices from bluestore path
slot 1 /var/lib/ceph/osd/ceph-8
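If the goal is only to gather usage statistics, a hedged alternative
that does not require stopping the OSD is the admin socket; the "bluefs"
section of perf dump contains counters such as db_total_bytes and
db_used_bytes:

# ceph daemon osd.8 perf dump

bluefs-bdev-sizes, by contrast, opens the devices directly and therefore
needs the daemon stopped, as discussed earlier in the thread.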
Good day
We currently have 12 nodes in 4 racks (3x4) and are getting another 3
nodes to complete the 5th rack, on version 12.2.12, using ceph-ansible & docker
containers.
With the 3 new nodes (1 rack bucket) we would like to make use of a
non-containerised setup, since our long-term plan is to complete
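Separate from the container question, since the new nodes form a 5th
rack bucket, a rough sketch of adding it to CRUSH (rack5 and node13 are
placeholder names):

# ceph osd crush add-bucket rack5 rack
# ceph osd crush move rack5 root=default
# ceph osd crush move node13 rack=rack5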
I started a job that moved some files around in the cephfs cluster that
resulted in the mds going back into the crash loop.
Logs are here:
http://s3.foo-bar.dk/mds-dumps/mds.log-2019
Any help would be appreciated.
- Karsten
-Original message-
From: Yan, Zheng
Sent: Thu 07-11