Hi,

>Are the two clusters the same size? Pools both replicated? Do the nodes where
>rbd-mirror is running have the same CPU/RAM resources? Is one more heavily
>loaded than the other?

Do the two clusters have to be exactly the same for mirroring to work well?

>rbd_mirror_journal_max_fetch_bytes
>and rbd_journal_max_payload_bytes=8388608


I didn't see this part in the docs, my bad. I added these settings to ceph.conf
and mirroring is now a lot faster.
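
For reference, this is roughly what went into the [client] section (the
fetch-bytes value is just the one I tried, not a tuned recommendation):

[client]
        # bigger journal entries before splitting (8 MiB, value from the docs)
        rbd_journal_max_payload_bytes = 8388608
        # read by the rbd-mirror daemon; the 32 MiB here is just an example value
        rbd_mirror_journal_max_fetch_bytes = 33554432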



One more thing: do you have more info about the Ceph subreddit ban, and whether
it will be lifted soon?


Thanks Anthony and Eugen


Vivien


________________________________
From: Eugen Block <[email protected]>
Sent: Thursday, October 23, 2025 07:58:08
To: [email protected]
Subject: [ceph-users] Re: Very slow mirroring operation

Hi,

I haven't dealt with journal-based mirroring in quite a while, but
this section [0] in the docs seems relevant:

> rbd-mirror tunables are set by default to values suitable for
> mirroring an entire pool. When using rbd-mirror to migrate single
> volumes between clusters you may achieve substantial performance
> gains by setting rbd_journal_max_payload_bytes=8388608 within the
> [client] config section of the local or centralized configuration.
> Note that this setting may allow rbd-mirror to present a substantial
> write workload to the destination cluster: monitor cluster
> performance closely during migrations and test carefully before
> running multiple migrations in parallel.


# ceph config help rbd_journal_max_payload_bytes
rbd_journal_max_payload_bytes - maximum journal payload size before splitting
   (size, advanced)
   Default: 16384
   Can update at runtime: true
   Services: [rbd]
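
Since it can be updated at runtime, you could also push the value through the
centralized config instead of editing ceph.conf, for example:

# ceph config set client rbd_journal_max_payload_bytes 8388608
# ceph config get client rbd_journal_max_payload_bytes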

Regards,
Eugen

[0]
https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#enable-image-journaling-feature

Quoting "GLE, Vivien" <[email protected]>:

> Hi,
>
>
> I'm testing mirroring in Ceph and I'm running into very slow mirroring operations.
>
> Cluster A (primary) is a hyperconverged Ceph cluster on Proxmox, and
> cluster B (secondary) is a Ceph cluster managed by cephadm. The two
> have been peered via bootstrap.
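>
> The peering was done with the standard rbd mirror bootstrap commands, roughly
> like this (the site names and token path below are just placeholders):
>
>     # on cluster A (primary)
>     rbd mirror pool peer bootstrap create --site-name site-a ceph_proxmox > /tmp/bootstrap_token
>     # on cluster B (secondary), after copying the token over
>     rbd mirror pool peer bootstrap import --site-name site-b ceph_proxmox /tmp/bootstrap_token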
>
> Moving a 60 GiB VM to the mirrored pool in cluster A, it took 5-6
> hours to fully sync to cluster B (SSD devices only).
>
> Moving a 60 GiB VM from cluster A to cluster B into a replica 3 SSD pool
> took 2 minutes (SSD devices only).
>
>
> It looks like the mirroring throughput won't go above 6-7
> MiB/s. Is this normal behaviour?
>
>
> ceph -s output during mirroring and rbd info below:
>
>
> ceph -s
>   cluster:
>     id:     ID
>     health: HEALTH_OK
>
>   services:
>     mon:        3 daemons, quorum r620-13-1,r620-13-9,r620-13-4 (age 12d)
>     mgr:        r620-13-7.zxzajo(active, since 24h), standbys:
> r620-13-10.wmzodp
>     mds:        1/1 daemons up
>     osd:        24 osds: 24 up (since 6d), 24 in (since 6d)
>     rbd-mirror: 1 daemon active (1 hosts)
>     rgw:        1 daemon active (1 hosts, 1 zones)
>
>   data:
>     volumes: 1/1 healthy
>     pools:   16 pools, 1287 pgs
>     objects: 151.21k objects, 551 GiB
>     usage:   1.6 TiB used, 6.3 TiB / 7.9 TiB avail
>     pgs:     1287 active+clean
>
>   io:
>     client:   15 KiB/s rd, 6.5 MiB/s wr, 11 op/s rd, 419 op/s wr
>
> --------------------------------------------
>
> rbd info ceph_proxmox/vm-118-disk-0
> rbd image 'vm-118-disk-0':
>     size 60 GiB in 15360 objects
>     order 22 (4 MiB objects)
>     snapshot_count: 0
>     id: abe8c560f69498
>     block_name_prefix: rbd_data.abe8c560f69498
>     format: 2
>     features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten, journaling
>     op_features:
>     flags:
>     create_timestamp: Wed Oct 22 07:52:31 2025
>     access_timestamp: Wed Oct 22 07:52:31 2025
>     modify_timestamp: Wed Oct 22 07:52:31 2025
>     journal: abe8c560f69498
>     mirroring state: enabled
>     mirroring mode: journal
>     mirroring global id: id
>     mirroring primary: false
>
> -----------------------------------------
>
>
> In cluster A ceph.conf
>
>
>         rbd_default_features = 125
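>
> For reference, assuming the standard RBD feature bit values, 125 is simply
> the sum of the features listed in the rbd info output above:
>
>     layering (1) + exclusive-lock (4) + object-map (8)
>     + fast-diff (16) + deep-flatten (32) + journaling (64) = 125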
>
>
>
> Thanks !
>
>
> Vivien
>
>


_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]