[ceph-users] Re: [BULK] Re: Re: which is definitive: /var/lib/ceph symlinks or ceph-volume?

2021-04-06 Thread Eugen Block
Here's an example of an OSD with separate block.db: ---snip--- osd-db-0cd3d532-a03a-4a17-bc0c-ae3e8c710ace ceph.block_device=/dev/ceph-63cf931a-f7a8-4ab8-bdd2-4c01c8e10f4d/osd-block-a30f85ab-b2b9-4f75-a1f9-48c2ef6db1c1,ceph.block_uuid=8wWfCT-4aQw-2VKR-I1Lf-a0e3-rRRA-v3pLqL,ceph.cephx_lockbox_sec
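Those tags live on the logical volumes themselves and can be inspected directly; a minimal sketch, assuming a standard ceph-volume deployment:
---snip---
# list all OSDs known to ceph-volume, including their block.db associations
ceph-volume lvm list
# or read the LV tags straight from LVM
lvs -o lv_name,vg_name,lv_tags
---snip---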

[ceph-users] Re: Increase of osd space usage on cephfs heavy load

2021-04-06 Thread Olivier AUDRY
Hello, many thanks for the time you're taking to help me with this. I restarted one of the backups and now the space usage for the CephFS metadata goes from 17 GB to 70 GB, so the hints you gave me don't seem to help here. cephfs-metadata/mds1_openfiles.0 mtime 2021-04-06 18:27:08.00, size 0 cephfs-metadata/m
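The open file table objects mentioned above can be inspected directly in the metadata pool; a minimal sketch, with pool and object names taken from the listing:
---snip---
# stat the MDS open file table object in the metadata pool
rados -p cephfs-metadata stat mds1_openfiles.0
---snip---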

[ceph-users] Re: RGW failed to start after upgrade to pacific

2021-04-06 Thread Casey Bodley
Thanks for the details. This is a regression from changes to the datalog storage for multisite - this -5 error is coming from the new 'fifo' backend. As a workaround, you can set the new 'rgw_data_log_backing' config variable back to 'omap'. Adam has fixes already merged to the pacific branch; be a
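A minimal sketch of that workaround, assuming centralized config and that the gateways are restarted afterwards (the config section name may differ per deployment):
---snip---
# switch the datalog backend back to omap
ceph config set client.rgw rgw_data_log_backing omap
# restart the gateways so the change takes effect
systemctl restart ceph-radosgw.target
---snip---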

[ceph-users] Re: [BULK] Re: Re: which is definitive: /var/lib/ceph symlinks or ceph-volume?

2021-04-06 Thread Philip Brown
Where does it read it from? Does it keep it in the block.db LV, or the block dev LV, or both? I removed the VG from the block dev and did wipefs, if I recall.

[ceph-users] Re: which is definitive: /var/lib/ceph symlinks or ceph-volume?

2021-04-06 Thread Eugen Block
Did you recreate the OSD with the same UUID? I'm guessing ceph-volume just reads the LV tags and thinks there's a block.db for that OSD. Before creating the OSD you should also wipe the LV containing the block.db of the failed OSD. Quoting Philip Brown: It was up and running. I should
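A minimal sketch of that wipe, with placeholder VG/LV names for the old DB volume:
---snip---
# clear the LV tags and data so ceph-volume stops associating it with the dead OSD
ceph-volume lvm zap /dev/ceph-db-vg/osd-db-old
---snip---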

[ceph-users] Re: which is definitive: /var/lib/ceph symlinks or ceph-volume?

2021-04-06 Thread Philip Brown
It was up and running. I should mention that the situation came about when I was testing OSD rebuild. So I had a previously autogenerated hybrid OSD, with an SSD db and an HDD block dev. I then stopped the OSD, wiped the HDD, did "ceph osd rm" and "ceph auth rm", and then rebuilt with ceph-volume.. w
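A hedged reconstruction of that rebuild sequence, with placeholder OSD id and device paths:
---snip---
systemctl stop ceph-osd@7
ceph osd rm 7
ceph auth rm osd.7
# wipe the HDD block device only -- the SSD db LV keeps its old tags
wipefs -a /dev/sdX
ceph-volume lvm create --data /dev/sdX --block.db ssd-vg/db-lv
---snip---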

[ceph-users] Re: Changing IP addresses

2021-04-06 Thread Mark Lehrer
I just did this recently. The only painful part was using "monmaptool" to change the monitor IP addresses on disk. Once you do that, and change the monitor IPs in ceph.conf everywhere, it should come up just fine. Mark On Tue, Apr 6, 2021 at 8:08 AM Jean-Marc FONTANA wrote: > > Hello everyon
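A minimal sketch of that monmap edit, assuming mon id 'a' and a placeholder address; the mon must be stopped first, and this is repeated for each monitor:
---snip---
# extract the current monmap from the stopped mon
ceph-mon -i a --extract-monmap /tmp/monmap
# replace the mon's address
monmaptool --rm a /tmp/monmap
monmaptool --add a 192.0.2.11:6789 /tmp/monmap
# inject the edited map back and start the mon again
ceph-mon -i a --inject-monmap /tmp/monmap
---snip---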

[ceph-users] Re: Problem using advanced OSD layout in octopus

2021-04-06 Thread Gary Molenkamp
Thanks Robert, So if the DB device is _not_ empty due to existing running OSDs, auto orchestration will not be an option until all OSDs on that DB device are converted. That makes sense. FWIW,  I did confirm that this process https://tracker.ceph.com/issues/46691 does work for replacing an O
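For reference, a minimal sketch of the orchestrator-driven replacement referenced above (the OSD id is a placeholder; see the tracker issue for the full procedure):
---snip---
# evacuate and remove the OSD but keep its id reserved for the replacement disk
ceph orch osd rm 7 --replace
---snip---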

[ceph-users] Changing IP addresses

2021-04-06 Thread Jean-Marc FONTANA
Hello everyone, We have installed a Nautilus Ceph cluster with 3 monitors, 5 OSDs and 1 RGW gateway. It works, but now we need to change the IP addresses of these machines to put them in a DMZ. Are there any recommendations on how to go about doing this? Best regards,

[ceph-users] Re: Problem using advanced OSD layout in octopus

2021-04-06 Thread Robert Sander
Hi, The DB device needs to be empty for an automatic OSD service. The service will then create N db slots using logical volumes and not partitions. Regards -- Robert Sander Heinlein Support GmbH https://www.heinlein-support.de
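A minimal sketch of such an automatic OSD service spec, with placeholder service id, placement, and slot count:
---snip---
# placeholder spec: split each shared SSD DB device into 4 LV slots
cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: hybrid_osds
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
db_slots: 4
EOF
ceph orch apply osd -i osd_spec.yml
---snip---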

[ceph-users] Re: Increase of osd space usage on cephfs heavy load

2021-04-06 Thread Burkhard Linke
Hi, On 4/6/21 2:20 PM, Olivier AUDRY wrote: hello, the backup has now been running for 3 hours and the cephfs metadata went from 20 GB to 479 GB... POOL ID STORED OBJECTS USED %USED MAX AVAIL cephfs-metadata 12 479 GiB 642.26k 1.4 TiB 18.79 2.0 TiB cephfs-data0

[ceph-users] Problem using advanced OSD layout in octopus

2021-04-06 Thread Gary Molenkamp
Before I upgrade our Nautilus Ceph cluster to Octopus, I would like to make sure I am able to replace existing OSDs when they fail. However, I have not been able to create an OSD in Octopus with the layout we are using in Nautilus. I am testing this on a VM cluster so as not to touch any production s

[ceph-users] Re: bug in ceph-volume create

2021-04-06 Thread Konstantin Shalygin
Philip, good suggestion! I created a ticket for that [1] [1] https://tracker.ceph.com/issues/50163 K > On 5 Apr 2021, at 23:47, Philip Brown wrote: > > You guys might consider a feature request of doing some kind of check on long > device path names getting passed in, to see if the util sho

[ceph-users] Re: which is definitive: /var/lib/ceph symlinks or ceph-volume?

2021-04-06 Thread Eugen Block
Is the OSD up and running? Do you see IO on the dedicated block.db LV when running a 'ceph tell osd.7 bench'? It sounds very strange; I haven't seen that before. Quoting Philip Brown: I am in a situation where I see conflicting information. On the one hand, ls -l /var/lib/ceph/osd/ceph-7 shows
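A minimal sketch of that check, with placeholder OSD id and LV mapper name:
---snip---
# generate write load on the OSD
ceph tell osd.7 bench
# in a second shell, watch the dedicated DB LV for IO
iostat -x /dev/mapper/ceph--db--vg-osd--db--7 1
---snip---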

[ceph-users] Re: Increase of osd space usage on cephfs heavy load

2021-04-06 Thread Olivier AUDRY
Hello, the backup has now been running for 3 hours and the cephfs metadata went from 20 GB to 479 GB... POOL ID STORED OBJECTS USED %USED MAX AVAIL cephfs-metadata 12 479 GiB 642.26k 1.4 TiB 18.79 2.0 TiB cephfs-data0 13 2.9

[ceph-users] mkfs.xfs -f /dev/rbd0 hangs

2021-04-06 Thread 展荣臻(信泰)
Hi all, I'm running a 15.2.10 Ceph cluster on Ubuntu 18.04. I created an RBD device and mapped it into a host (Ubuntu 18.04, Ceph 15.2.10). When I run mkfs.xfs -f /dev/rbd0 it hangs, but writing data with "rados -p {poolname} put obj myfile" works fine. Has anyone encountered this kind of problem?
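A hedged reconstruction of the reported sequence, with placeholder pool and image names:
---snip---
rbd create rbd/test --size 10G
rbd map rbd/test
# this is the step that reportedly hangs
mkfs.xfs -f /dev/rbd0
# while a plain object write works fine
rados -p rbd put obj myfile
---snip---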

[ceph-users] What is the upper limit of the numer of PGs in a ceph cluster

2021-04-06 Thread huxia...@horebdata.cn
Dear Cephers, My client has 10 volumes, each assigned 8192 PGs, for 81920 PGs in total. The cluster runs Luminous with BlueStore. During a power outage the cluster restarted, and we observed that OSD peering consumed a lot of CPU and memory resources, even leading to some OSD flapping. My q

[ceph-users] RGW: Corrupted Bucket index with nautilus 14.2.16

2021-04-06 Thread by morphin
Good morning. I have a bucket with 50M objects in it. The bucket was created with multisite sync; it is the master zone and the only zone now. After a health check, I saw weird objects in a pending attr state. I've tried to remove them with "radosgw-admin object rm --bypass-gc" but I couldn't delete
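One common first step for a suspect bucket index (a general sketch, not from this thread; the bucket name is a placeholder) is a consistency check:
---snip---
# report inconsistencies in the bucket index
radosgw-admin bucket check --bucket=mybucket
# attempt a repair, verifying object presence
radosgw-admin bucket check --bucket=mybucket --check-objects --fix
---snip---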