subvolumegroup | data_pool | subvolume
home           | web_data  | nmz
db             | db_data   | sql
persoanl       | data      | Video
As you can see, the subvolumes use specific Ceph pools. For every subvolume I use
snapshots.
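If it helps, this is roughly how I create them from the CLI (the filesystem name
'cephfs' and the snapshot name below are just placeholders, adjust to your setup):

    # create a group and a subvolume placed on a specific data pool
    ceph fs subvolumegroup create cephfs db --pool_layout db_data
    ceph fs subvolume create cephfs sql --group_name db --pool_layout db_data
    # take a snapshot of the subvolume
    ceph fs subvolume snapshot create cephfs sql daily_snap --group_name db
    # list existing snapshots
    ceph fs subvolume snapshot ls cephfs sql --group_name db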
Ceph backup server will work
md/system/ceph-osd@.service.
> Running command: /usr/bin/systemctl start ceph-osd@0
>> ceph-volume lvm activate successful for osd ID: 0
>> ceph-volume lvm create successful for: vgsdb/sdb1
Hi,
after one PG finishes backfilling, another PG will start to backfill. You can raise
osd_max_backfills if you want more PGs to backfill at the same time.
The number of PGs in backfill_toofull will decrease over time.
Why do you see toofull? Ceph removes the old data only once the new copies are in
place. Until that has happened, it counts the space of both the old and the new
copies, so the PG can be reported as backfill_toofull.
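If you want to try it, something along these lines raises the limit at runtime
(the value 2 and osd.0 are just examples; and if your OSDs run the mClock scheduler,
recent releases only accept the change after you allow overriding recovery settings):

    # allow changing recovery/backfill limits while mClock is active (recent releases)
    ceph config set osd osd_mclock_override_recovery_settings true
    # let each OSD participate in more concurrent backfills
    ceph config set osd osd_max_backfills 2
    # confirm what one OSD is actually using
    ceph config show osd.0 osd_max_backfills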
Hello,
Can you try setting 'ceph config set osd osd_mclock_profile high_recovery_ops'
and see how it affects you?
For some PGs the deep scrub ran for about 20 hours for me. After I gave it more
priority, 1-2 hours was enough to finish.
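For completeness, roughly what that looks like (the 'balanced' profile at the end
is just the usual default to switch back to, check which profile you were on first):

    # favour recovery, backfill and scrubs over client IO
    ceph config set osd osd_mclock_profile high_recovery_ops
    # verify the active profile
    ceph config get osd osd_mclock_profile
    # switch back once the cluster has caught up
    ceph config set osd osd_mclock_profile balanced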