[ceph-users] Re: 17.2.5 snap_schedule module error (cephsqlite: cannot open temporary database)

2022-11-20 Thread phandaal
On 2022-11-17 14:19, Milind Changire wrote: On Thu, Nov 17, 2022 at 6:02 PM phandaal wrote: On 2022-11-17 12:58, Milind Changire wrote: The error occurs when trying to restart the old schedules (schedule_client.py, line 169) and to look up the old store, which does not exist; the schedules have
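
For reference, a minimal sketch of inspecting and re-creating schedules with the
'ceph fs snap-schedule' CLI once the module loads again; the subvolume path and
the 1h/24h values below are placeholders, not values taken from this thread:

    # list schedules known for the filesystem root and below
    ceph fs snap-schedule list / --recursive

    # re-add a schedule and a retention policy if the old store is gone
    ceph fs snap-schedule add /volumes/group/subvol 1h
    ceph fs snap-schedule retention add /volumes/group/subvol h 24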

[ceph-users] Re: backfilling kills rbd performance

2022-11-20 Thread Sridhar Seshasayee
Hi Martin, In Quincy the osd_op_queue defaults to 'mclock_scheduler'. It was set to 'wpq' before Quincy. > on a 3-node hyper-converged PVE cluster with 12 SSD OSD devices I do > experience stalls in RBD performance during normal backfill > operations, e.g. moving a pool from 2/1 to 3/2. > > I
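
For reference, a sketch of how the scheduler can be checked and adjusted; the
option names are standard Ceph config options, osd.0 is just an example target,
and changing osd_op_queue only takes effect after an OSD restart:

    # see which op queue an OSD is currently using
    ceph config show osd.0 osd_op_queue

    # either bias mclock towards client I/O ...
    ceph config set osd osd_mclock_profile high_client_ops

    # ... or fall back to the pre-Quincy scheduler
    ceph config set osd osd_op_queue wpq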

[ceph-users] Re: Recent ceph.io Performance Blog Posts

2022-11-20 Thread Stefan Kooman
On 11/9/22 14:30, Mark Nelson wrote: On 11/9/22 4:48 AM, Stefan Kooman wrote: On 11/8/22 21:20, Mark Nelson wrote: Hi Folks, I thought I would mention that I've released a couple of performance articles on the Ceph blog recently that might be of interest to people: For sure, thanks a lot,

[ceph-users] Re: iscsi target lun error

2022-11-20 Thread Xiubo Li
On 15/11/2022 23:44, Randy Morgan wrote: You are correct, I am using cephadm to create the iSCSI portals. The cluster was one I had been learning a lot with, and I wondered if the problem came from the number of creations and deletions of things, so I rebuilt the cluster; now I am getting this r

[ceph-users] RBD migration between pools looks to be stuck on commit

2022-11-20 Thread Jozef Matický
Hello, I was migrating several RBDs between the two pools - from replicated to EC. I have managed to migrate about twenty images without any issues, all relatively the same size. It generally took about an hour to execute and 10 minutes to commit each one. The last image however got stuck
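
For context, the usual live-migration flow with the 'rbd migration' commands,
as a sketch; pool and image names are placeholders:

    rbd migration prepare src-pool/image dst-pool/image
    rbd migration execute dst-pool/image     # copies the data, prints progress
    rbd migration commit dst-pool/image      # removes the source when done

    # 'rbd status' also reports the migration state and progress for the image,
    # plus any watchers still attached to it
    rbd status dst-pool/image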

[ceph-users] Re: Scheduled RBD volume snapshots without mirrioring (-schedule)

2022-11-20 Thread Ilya Dryomov
On Fri, Nov 18, 2022 at 3:46 PM Tobias Bossert wrote: > > Dear List > > I'm searching for a way to automate the snapshot creation/cleanup of RBD > volumes. Ideally, there would be something like the "Snapshot Scheduler for > cephfs"[1] but I understand > this is not as "easy" with RBD devices si
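
If nothing built-in fits, one common workaround is a cron job around plain
'rbd snap create'; a minimal sketch, with pool/image names and the retention
count of 7 as placeholders (the prune line assumes GNU head and that
'rbd snap ls' lists the oldest snapshots first):

    # crontab: nightly snapshot at 01:00, named after the date
    0 1 * * *  rbd snap create mypool/myimage@nightly-$(date +\%Y\%m\%d)

    # prune everything but the newest 7 "nightly-" snapshots
    rbd snap ls mypool/myimage | awk '/nightly-/{print $2}' | head -n -7 \
        | xargs -r -I{} rbd snap rm mypool/myimage@{}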

[ceph-users] Re: backfilling kills rbd performance

2022-11-20 Thread Frank Schilder
Hi Martin, did you change from rep=2+min_size=1 to rep=3+min_size=2 in one go? I'm wondering if the missing extra shard could cause PGs to go read-only occasionally. Maybe keep min_size=1 until all PGs have 3 shards and then set min_size=2. You can set recovery_sleep to a non-zero value. It is
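
As a sketch of the two knobs mentioned above (the pool name is a placeholder;
note that the Quincy default mclock_scheduler does not honor the recovery
sleep options, see the other reply in this thread):

    # keep PGs serving I/O while the third replica is still being created
    ceph osd pool set mypool min_size 1

    # once all PGs report three shards, restore the safer value
    ceph osd pool set mypool min_size 2

    # add a small sleep between recovery/backfill ops to protect client I/O
    ceph config set osd osd_recovery_sleep 0.1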