Hello,

Ceph has RBD and CephFS mirroring mechanisms, but I don't know whether they would work for your cold-backup scenario (I also don't know whether Proxmox supports them):
https://docs.ceph.com/en/squid/dev/cephfs-mirroring/
https://docs.ceph.com/en/squid/rbd/rbd-mirroring/
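For completeness, snapshot-based RBD mirroring (as opposed to journal-based) only transfers data when a mirror snapshot is taken, so it may tolerate a peer that is offline between runs. A rough sketch of enabling it; the pool name "proxmox_data" matches your setup, but the image name and schedule are hypothetical:

```shell
# On the primary cluster: enable per-image mirroring on the pool
rbd mirror pool enable proxmox_data image

# Enable snapshot-based mirroring for one image (image name is an example)
rbd mirror image enable proxmox_data/vm-100-disk-0 snapshot

# Create mirror snapshots on a schedule (e.g. daily) instead of manually
rbd mirror snapshot schedule add --pool proxmox_data 24h
```

You would still need an rbd-mirror daemon and a peer relationship on the backup cluster, so this is more of a warm-standby mechanism than a true cold backup.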
Since you're using Proxmox, the best way to back up RBD is probably their Proxmox Backup Server. You could also use an ordinary filesystem backup tool to back up the non-Proxmox data from CephFS.
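If you want something closer to ZFS send/receive without extra infrastructure, rbd export-diff / import-diff gives you incremental, snapshot-based transfers, and CephFS snapshots can be synced with rsync. A rough sketch; the image name, snapshot names, paths, and "backup-host" are all placeholders, not part of your setup:

```shell
# --- RBD: incremental backup via snapshot diffs ---
POOL=proxmox_metadata       # your rbd pool from storage.cfg
IMG=vm-100-disk-0           # example image name
TODAY=$(date +%F)
PREV=2025-05-25             # name of the previous snapshot, if any

# Take today's snapshot
rbd snap create "$POOL/$IMG@$TODAY"

# First run: full export to the (cold) backup host as a file
rbd export "$POOL/$IMG@$TODAY" - \
  | ssh backup-host "cat > /backup/$IMG.$TODAY.full"

# Later runs: export only the delta since the previous snapshot
rbd export-diff --from-snap "$PREV" "$POOL/$IMG@$TODAY" - \
  | ssh backup-host "cat > /backup/$IMG.$TODAY.diff"

# To restore, apply the full export with 'rbd import' and then
# replay each diff in order with 'rbd import-diff'.

# --- CephFS: sync the latest snapshot with rsync ---
# CephFS snapshots appear under the hidden .snap directory
rsync -a --delete "/mnt/cephfs/home/.snap/$TODAY/" backup-host:/backup/home/
```

Because the diffs are plain files, the backup host doesn't need to run Ceph at all, which fits a single cold node that isn't powered on 24/7.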
Best regards,
Adam Prycki

On 26.05.2025 at 18:28, Nmz wrote:
Hello everyone,

I came to Ceph from the ZFS world. In ZFS I can use snapshots as a backup mechanism: keep them locally or send them to another server/pool. Is it possible to do the same with Ceph? Right now my cluster uses RBD and CephFS.

RBD data structure:

Ceph pools:
  pool proxmox_metadata  Rep SSD
  pool proxmox_data      EC SSD

storage.cfg:
rbd: ceph_rbd
        content images
        data-pool proxmox_data
        pool proxmox_metadata

Ceph pools:
  pool proxmox_data_1_ssd_other_hdd  R 1 SSD 2 HDD

storage.cfg:
rbd: ceph_rbd_1ssd_hdd
        content images
        pool proxmox_data_1_ssd_other_hdd

CephFS data structure:

Ceph pools:
  pool cephfs_metadata  Rep SSD
  pool cephfs_data      Rep SSD
  pool web_data         EC SSD
  pool db_data          EC SSD
  pool data             EC HDD

# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data web_data db_data data]

subvolumegroup | data_pool | subvolume
home           | web_data  | nmz ....
db             | db_data   | sql
persoanl       | data      | Video ....

As you can see, the subvolumes use specific Ceph pools, and I take snapshots of every subvolume. The Ceph backup server will work in cold mode: a single node, not running 24/7. What is the best way to send data to the backup server? I want to use incremental backup sync. Can I achieve it?

Thanks

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io