[ceph-users] Re: Time Estimation for cephfs-data-scan scan_links

2023-10-18 Thread Peter Grandi
[...] > What is being done is a serial tree walk and copy in 3 replicas of all objects in the CephFS metadata pool, so it depends on both the read and write IOPS rate for the metadata pool, but mostly on the write IOPS. [...]
Wild guess: metadata is on 10x 3.84TB SSDs without persistent ca[...]
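
The quoted reasoning reduces to simple arithmetic: a serial walk is latency-bound, so wall-clock time is roughly the number of metadata objects times the per-object round trip, dominated by the replicated write. A back-of-the-envelope sketch; the object count and latencies below are made-up illustrative numbers, not figures from the thread:

    # Rough estimate for a serial metadata-pool walk.
    # All inputs are hypothetical; substitute values measured on your cluster.

    num_metadata_objects = 20_000_000   # e.g. roughly one object per inode/dirfrag
    write_latency_s = 0.002             # per-object replicated write latency (2 ms)
    read_latency_s = 0.0005             # per-object read latency (0.5 ms)

    # Serial walk: each object costs one read plus one replicated write,
    # and the write usually dominates, as the quoted reply notes.
    per_object_s = read_latency_s + write_latency_s
    total_s = num_metadata_objects * per_object_s

    print(f"~{total_s / 3600:.1f} hours")   # 20M objects at ~2.5 ms each -> ~14 hours

The point of the exercise is that the estimate scales with object count and per-op latency, not with the 35 TB / 100 TB data figures quoted below.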

[ceph-users] Re: Time Estimation for cephfs-data-scan scan_links

2023-10-13 Thread Peter Grandi
>> However, I've observed that the cephfs-data-scan scan_links step has been running for over 24 hours on 35 TB of data, which is replicated across 3 OSDs, resulting in more than 100 TB of raw data.
What matters is the number of "inodes" (and secondarily their size), that is the number of me[...]
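
Since the driver is the number of metadata objects rather than the data size, one way to gauge the scale up front is to read the object count of the metadata pool. A hedged sketch using `rados df --format json`; the pool name is a placeholder and the JSON field names are assumptions that may differ between Ceph releases:

    import json
    import subprocess

    METADATA_POOL = "cephfs_metadata"   # hypothetical pool name; use your own

    # Ask rados for per-pool statistics in JSON form.
    out = subprocess.run(
        ["rados", "df", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for pool in json.loads(out).get("pools", []):
        if pool.get("name") == METADATA_POOL:
            # Field observed as "num_objects" in recent releases; adjust if needed.
            print(f"{METADATA_POOL}: {pool.get('num_objects')} objects")
            break
    else:
        print(f"pool {METADATA_POOL!r} not found in 'rados df' output")

Feeding that object count into the per-object latency estimate above gives a crude but useful lower bound on the scan time.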

[ceph-users] Re: Time Estimation for cephfs-data-scan scan_links

2023-10-12 Thread Venky Shankar
Hi Odair,
On Thu, Oct 12, 2023 at 11:58 PM Odair M. wrote:
> Hello,
> I've encountered an issue where the metadata pool has corrupted a cache inode, leading to an MDS rank abort in the 'reconnect' state. To address this, I'm following the "USING AN ALTERNATE METADATA POOL FOR RECOVERY" [...]
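
For anyone trying to put numbers on the individual phases, a simple approach is to time each cephfs-data-scan step as it runs. This is only a minimal sketch: it times the scan phases named in the thread, omits the surrounding alternate-metadata-pool setup described in the Ceph disaster-recovery documentation, and uses a placeholder data pool name:

    import subprocess
    import time

    DATA_POOL = "cephfs_data"   # placeholder; substitute the real data pool name

    # The scan phases discussed in the thread, run serially.
    # scan_extents and scan_inodes walk the data pool; scan_links walks metadata.
    phases = [
        ["cephfs-data-scan", "scan_extents", DATA_POOL],
        ["cephfs-data-scan", "scan_inodes", DATA_POOL],
        ["cephfs-data-scan", "scan_links"],
    ]

    for cmd in phases:
        start = time.monotonic()
        subprocess.run(cmd, check=True)
        elapsed = time.monotonic() - start
        print(f"{' '.join(cmd)}: {elapsed / 3600:.2f} h")

Recording per-phase times this way at least makes it clear which phase is the bottleneck on a given cluster.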