[...]
> What is being done is a serial tree walk and copy in 3
> replicas of all objects in the CephFS metadata pool, so it
> depends on both the read and write IOPS rates for the metadata
> pools, but mostly on the write IOPS. [...] Wild guess:
> metadata is on 10x 3.84TB SSDs without persistent cache.
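If it helps to confirm that, the per-pool client I/O rates can be
watched while scan_links runs. A minimal sketch, assuming the recovery
metadata pool is called cephfs_recovery_meta (substitute your actual
pool names; run once per metadata pool you want to watch):

  # Per-pool client I/O rates (read/write ops per second), refreshed every 5s
  watch -n 5 'ceph osd pool stats cephfs_recovery_meta'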
>> However, I've observed that the cephfs-data-scan scan_links step has
>> been running for over 24 hours on 35 TB of data, which is replicated
>> across 3 OSDs, resulting in more than 100 TB of raw data.
What matters is the number of "inodes" (and secondarily their
size), that is the number of metadata objects to be walked and copied.
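So a rough feel for how long the walk will take comes from the object
count of the metadata pool rather than from the 35 TB / 100 TB of data.
A quick sketch (pool names in the output will be your own):

  # The OBJECTS column gives the number of objects in each pool;
  # rados df also shows cumulative read/write op totals per pool.
  ceph df
  rados df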
Hi Odair,
On Thu, Oct 12, 2023 at 11:58 PM Odair M. wrote:
>
> Hello,
>
> I've encountered an issue where the metadata pool has a corrupted cache
> inode, leading to an MDS rank abort in the 'reconnect' state. To address
> this, I'm following the "USING AN ALTERNATE METADATA POOL FOR RECOVERY"
>
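As far as I can tell from the disaster-recovery docs, scan_links is the
one step of that procedure that cannot be split across workers:
scan_extents and scan_inodes accept --worker_n/--worker_m so several
instances can divide the data pool between them, while scan_links is a
single serial pass, which is consistent with the point made earlier in
the thread about it being an IOPS-bound walk. A rough sketch of the
invocations (filesystem and pool names are placeholders; double-check
the exact flags against the docs for your Ceph release):

  # scan_extents / scan_inodes can be run as N parallel workers, e.g. worker 0 of 4,
  # writing into the alternate (recovery) metadata pool:
  cephfs-data-scan scan_extents --worker_n 0 --worker_m 4 \
      --alternate-pool cephfs_recovery_meta --filesystem <original fs name> <original data pool>

  # scan_links runs as a single process against the recovery filesystem:
  cephfs-data-scan scan_links --filesystem cephfs_recovery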