[ceph-users] Re: [EXTERNAL] Re: Can't connect to MDS admin socket after updating to cephadm

2022-11-10 Thread Luis Calero Muñoz
Hello Eugen, thanks for your answer. I was able to connect like you showed me until I updated my cluster to ceph version 16.2.10 (pacific). But now it doesn't work anymore: root@ceph-mds2:~# cephadm ls | grep ceph-mds | grep name "name": "mds.cephfs.ceph-mds2.cjpsjm", root@ceph-mds2:~# ce…
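A minimal sketch of reaching the MDS admin socket under cephadm on pacific, assuming the daemon name shown in the preview (mds.cephfs.ceph-mds2.cjpsjm); socket paths inside the container can differ between releases:

  # enter the daemon's container using the dotted daemon name, not the hyphenated container name
  cephadm enter --name mds.cephfs.ceph-mds2.cjpsjm
  # inside the container, the usual admin-socket commands should work
  ceph daemon mds.cephfs.ceph-mds2.cjpsjm status
  # alternatively, from any node with an admin keyring, bypass the socket entirely
  ceph tell mds.cephfs.ceph-mds2.cjpsjm session ls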

[ceph-users] Re: How to force PG merging in one step?

2022-11-10 Thread Frank Schilder
Hi Eugen, I created https://tracker.ceph.com/issues/58002 Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Eugen Block Sent: 03 November 2022 11:41 To: Frank Schilder Cc: ceph-users@ceph.io Subject: Re: [ceph-user…

[ceph-users] Re: LVM OSDs lose connection to disk

2022-11-10 Thread Frank Schilder
Hi all, I have some kind of update on the matter of stuck OSDs. It seems not to be an LVM issue and it also seems not to be connected to the OSD size. After moving all data from the tiny 100G OSDs to spare SSDs, I redeployed the 400G disks with 1 OSD per disk and started to move data from the s…
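One possible way to drain and redeploy a single disk as one OSD in a non-cephadm cluster, roughly along the lines described above; osd.N and /dev/sdX are placeholders, not values from the thread:

  ceph osd crush reweight osd.N 0              # let data migrate off the OSD first
  ceph osd out N
  systemctl stop ceph-osd@N
  ceph osd purge N --yes-i-really-mean-it
  ceph-volume lvm zap /dev/sdX --destroy
  ceph-volume lvm create --data /dev/sdX       # redeploy the whole disk as one OSD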

[ceph-users] Re: [EXTERNAL] Re: Can't connect to MDS admin socket after updating to cephadm

2022-11-10 Thread Eugen Block
Hi, If I look at the container name in docker it has the dots changed to hyphens, but if I try to connect to the name with hyphens it doesn't work either: that is correct, that switch from dots to hyphens was introduced in pacific [1]. Can you share the content of the unit.run file for tha…
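In a cephadm deployment the unit.run file normally lives under /var/lib/ceph/<fsid>/<daemon-name>/; a sketch, assuming the daemon name from this thread:

  cat /var/lib/ceph/$(ceph fsid)/mds.cephfs.ceph-mds2.cjpsjm/unit.run
  # the fsid can also be taken from the output of 'cephadm ls'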

[ceph-users] Re: LVM OSDs lose connection to disk

2022-11-10 Thread Igor Fedotov
Hi Frank, unfortunately IMO it's not an easy task to identify what the relevant differences between mimic and octopus are in this respect... At least the question would be what minor Ceph releases are/were in use. I recall there were some tricks with setting/clearing bluefs_buffered_io somewhe…
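To check which minor releases the daemons are actually running and how bluefs_buffered_io is currently set (a sketch; whether to change the value depends on the release, as discussed above):

  ceph versions                                    # per-daemon release breakdown
  ceph config get osd bluefs_buffered_io           # cluster-wide setting
  ceph tell osd.0 config get bluefs_buffered_io    # value a specific OSD is using right now
  ceph config set osd bluefs_buffered_io true      # toggle if needed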

[ceph-users] Re: all monitors deleted, state recovered using documentation... at what point to start OSDs?

2022-11-10 Thread Tyler Brekke
Hi Shashi, I think you need to have a mgr running to get updated reporting, which would explain the incorrect ceph status output. Since you have a monitor quorum (1 out of 1), you can start up OSDs, but I would recommend getting all your mons/mgrs back up first. On Tue, Nov 8, 2022 at 5:56 PM Shash…
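A possible order of operations, assuming a systemd-managed, non-cephadm deployment (unit names will differ under cephadm):

  systemctl start ceph-mon@$(hostname -s)      # on each monitor host
  systemctl start ceph-mgr@$(hostname -s)      # on each manager host
  ceph -s                                      # confirm quorum and an active mgr
  systemctl start ceph-osd.target              # then bring up the OSDs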

[ceph-users] Re: Recent ceph.io Performance Blog Posts

2022-11-10 Thread Mark Nelson
Interesting, I see all of the usual suspects here with InlineSkipList KeyComparator being the big one. I've rarely seen it this bad though. What model CPU are you running on? There's a very good chance that you would benefit from the new (experimental) tuning in the RocksDB article. Smalle…
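To answer the CPU question and see which RocksDB options the OSDs currently use (a sketch; the experimental tuning values themselves are in the blog article and not reproduced here):

  lscpu | grep 'Model name'
  ceph config get osd bluestore_rocksdb_options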

[ceph-users] Re: all monitors deleted, state recovered using documentation... at what point to start OSDs?

2022-11-10 Thread Shashi Dahal
Thanks for the info. I was able to get everything up and running. Just to mention, this particular cluster had around 50 VMs from OpenStack (nova, cinder, glance all using OpenStack), 4 OSD nodes with 10 disks each. There was no stop/start/delete operation. The cluster ran fine headless without…

[ceph-users] User + Dev Monthly Meeting Coming Up on November 17th

2022-11-10 Thread Laura Flores
Hi Ceph Users, The User + Dev Monthly Meeting is coming up next week on *Thursday, November 17th* *@* *3:00pm UTC* (time conversions below). See meeting details at the bottom of this email. Please add any topics you'd like to discuss to the agenda: https://pad.ceph.com/p/ceph-user-dev-monthly-min

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2022-11-10 Thread Olli Rajala
Hi Venky, I have indeed observed the output of the different sections of perf dump like so: watch -n 1 ceph tell mds.`hostname` perf dump objecter watch -n 1 ceph tell mds.`hostname` perf dump mds_cache ...etc... ...but without any proper understanding of what is a normal rate for some number to…
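One way to see per-second deltas instead of raw counters is ceph daemonperf, run where the MDS admin socket is reachable; a sketch, assuming the MDS id matches the hostname as in the commands above (objecter.op_w is only an example counter name):

  ceph daemonperf mds.$(hostname)
  # or watch how fast a single counter grows
  watch -n 1 "ceph tell mds.$(hostname) perf dump objecter | jq .objecter.op_w"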

[ceph-users] Error initializing cluster client

2022-11-10 Thread Sagittarius-A Black Hole
Hi, I have a Ceph cluster with 3 nodes, and only one of them still lets me execute commands in the shell; the other two give me this error message: Error initializing cluster client: OSError('error calling conf_read_file',) This is the Ceph version that comes in containers, about which I can't f…
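That error usually means librados could not read a ceph.conf on the node; a sketch of things worth checking, assuming a containerized (cephadm) deployment as described (/path/to/ceph.conf is a placeholder):

  ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring   # present and readable?
  cephadm shell -- ceph -s          # a shell that mounts config and keyring from the cluster
  ceph -c /path/to/ceph.conf -s     # or point the CLI at an explicit conf file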

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2022-11-10 Thread Venky Shankar
On Fri, Nov 11, 2022 at 3:06 AM Olli Rajala wrote: > > Hi Venky, > > I have indeed observed the output of the different sections of perf dump like > so: > watch -n 1 ceph tell mds.`hostname` perf dump objecter > watch -n 1 ceph tell mds.`hostname` perf dump mds_cache > ...etc... > > ...but withou…