[ceph-users] OSD's addrvec, not getting msgr v2 address, PGs stuck unknown or peering

2019-11-11 Thread Wesley Dillingham
Running 14.2.4 (but the same issue was observed on 14.2.2) we have a problem with, thankfully, a testing cluster, where all PGs are failing to peer and are stuck in peering, unknown, stale, etc. states. My working theory is that this is because the OSDs don't seem to be utilizing msgr v2, as "ceph osd find os
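A hedged checklist for confirming whether v2 addresses are being advertised (osd.0 is illustrative, and whether enable-msgr2 is needed depends on how the mons were upgraded):
# ceph osd find 0                 (shows the addrvec the cluster has recorded for osd.0)
# ceph mon dump                   (the mon map should list v2: as well as v1: addresses)
# ceph mon enable-msgr2           (only if the mons still advertise v1 addresses only)
# systemctl restart ceph-osd@0    (OSDs re-register their addresses on restart)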

[ceph-users] Re: mds crash loop

2019-11-11 Thread Yan, Zheng
On Mon, Nov 11, 2019 at 5:09 PM Karsten Nielsen wrote: > > I started a job that moved some files around in the cephfs cluster that > resulted in the mds to go back into the crash loop. > Logs are here: > http://s3.foo-bar.dk/mds-dumps/mds.log-2019 > > Any help would be
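On Nautilus the crash module usually records a backtrace for a crash-looping MDS, which can be pulled without digging through the raw logs; a hedged sketch (the crash id is a placeholder):
# ceph crash ls                   (lists recent daemon crashes with their ids)
# ceph crash info <crash-id>      (prints the backtrace and daemon metadata)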

[ceph-users] Re: Where rocksdb on my OSD's?

2019-11-11 Thread Andrey Groshev
OK, thanks. That is how it works. I did not realize it would not work while the service was running. 11.11.2019, 15:54, "Igor Fedotov" : > On 11/11/2019 3:51 PM, Andrey Groshev wrote: >> Hi, Igor! >> Service is UP. > > This prevents ceph-bluestore-tool from starting, you should shut it down > first.
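Putting the exchange together, the working sequence is roughly the following; setting noout first is an extra precaution not mentioned in the thread:
# ceph osd set noout
# systemctl stop ceph-osd@8
# ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-8
# systemctl start ceph-osd@8
# ceph osd unset noout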

[ceph-users] Re: Where rocksdb on my OSD's?

2019-11-11 Thread Igor Fedotov
On 11/11/2019 3:51 PM, Andrey Groshev wrote: > Hi, Igor! Service is UP. This prevents ceph-bluestore-tool from starting; you should shut it down first. > I did not make separate devices. Are block.db and block.wal created only if they are on separate devices? In general - yes. They make se
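One way to see whether an OSD actually has standalone DB/WAL devices, without stopping anything (paths are the ones from this thread; output details vary by release):
# ls -l /var/lib/ceph/osd/ceph-8/    (block.db / block.wal symlinks only exist when separate devices were deployed)
# ceph-volume lvm list               (shows which LVs or partitions back block, block.db and block.wal)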

[ceph-users] Re: Where rocksdb on my OSD's?

2019-11-11 Thread Andrey Groshev
Hi, Igor! The service is UP. I did not make separate devices. Are block.db and block.wal created only if they are on separate devices?
# systemctl status ceph-osd@8.service
● ceph-osd@8.service - Ceph object storage daemon osd.8

[ceph-users] Re: Where rocksdb on my OSD's?

2019-11-11 Thread Igor Fedotov
Hi Andrey, this log output rather looks like some other process is using /var/lib/ceph/osd/ceph-8. Have you stopped the osd.8 daemon? And are you sure you deployed standalone DB/WAL devices for this OSD? Thanks, Igor On 11/11/2019 3:10 PM, Andrey Groshev wrote: Hello, Some time ago I deploye
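A quick way to verify that nothing still holds the store open before running ceph-bluestore-tool (lsof may need to be installed; the path is the one from the thread):
# systemctl is-active ceph-osd@8
# lsof $(readlink -f /var/lib/ceph/osd/ceph-8/block)    (any ceph-osd process listed here still has the device open)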

[ceph-users] Where rocksdb on my OSD's?

2019-11-11 Thread Andrey Groshev
Hello, some time ago I deployed a Ceph cluster. It works great. Today I collected some statistics and found that the BlueFS utility is not working:
# ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-8
inferring bluefs devices from bluestore path
slot 1 /var/lib/ceph/osd/ceph-8
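Independently of the tool, the monitor already knows how each OSD's BlueStore is laid out; a hedged alternative that does not require touching the daemon (the exact bluefs_* field names differ slightly between releases):
# ceph osd metadata 8 | grep -i bluefs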

[ceph-users] Adding new non-containerised hosts to current contanerised environment and moving away from containers forward

2019-11-11 Thread Jeremi Avenant
Good day. We currently have 12 nodes in 4 racks (3x4) and are getting another 3 nodes to complete the 5th rack, on version 12.2.12, using ceph-ansible & Docker containers. With the 3 new nodes (1 rack bucket) we would like to make use of a non-containerised setup, since our long-term plan is to complete
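For what it's worth, ceph-ansible keys the deployment mode off the containerized_deployment variable, so one hedged approach is to scope a run to the new hosts only (the host group name and playbook name are assumptions, and mixing containerised and non-containerised daemons in one cluster should be tested first):
# ansible-playbook -i hosts site.yml --limit rack5 -e containerized_deployment=false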

[ceph-users] Re: mds crash loop

2019-11-11 Thread Karsten Nielsen
I started a job that moved some files around in the CephFS cluster, which resulted in the MDS going back into the crash loop. Logs are here: http://s3.foo-bar.dk/mds-dumps/mds.log-2019 Any help would be appreciated. - Karsten -Original message- From: Yan, Zheng Sent: Thu 07-11
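If the existing log does not show the assertion clearly, raising the MDS debug level before reproducing the move usually does; a minimal sketch using Nautilus's centralized config (remember to lower the levels again afterwards):
# ceph config set mds debug_mds 20
# ceph config set mds debug_ms 1
(reproduce the crash, then collect /var/log/ceph/ceph-mds.*.log)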