Hi,
On 4/6/21 2:20 PM, Olivier AUDRY wrote:
> hello
> now the backup has been running for 3 hours and the cephfs-metadata
> pool has grown from 20 GB to 479 GB...
> POOL             ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
> cephfs-metadata  12  479 GiB  642.26k  1.4 TiB  18.79    2.0 TiB
> cephfs-data0     13  2.9 TiB    9.23M  9.4 TiB  10.67     26 TiB
> Is that normal behaviour?

The MDS maintains a list of open files in the metadata pool. If your
backup is scanning a lot of files, and caps are not reclaimed by the
MDS, this list will become large.
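You can get an idea which client is holding most of the caps (usually
the backup host in a case like yours) from the MDS session list. A
rough sketch, assuming rank 0 is your active MDS and jq is available;
the exact field names may differ slightly between releases:

   # show per-client session info, including the number of held caps
   ceph tell mds.0 session ls | jq '.[] | {id: .id, num_caps: .num_caps}'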
The objects backing this open-file list are called
'mds<rank>_openfiles.<chunk>', e.g. mds0_openfiles.0, mds0_openfiles.1,
etc. You can check the size of these objects with the rados command.
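For example, something along these lines (a sketch; the pool name is
taken from the df output above, and since the open-file table is stored
mostly in omap, the plain object size reported by 'stat' can look
deceptively small):

   # list the open-file table objects in the metadata pool
   rados -p cephfs-metadata ls | grep openfiles

   # size and mtime of one chunk
   rados -p cephfs-metadata stat mds0_openfiles.0

   # number of omap entries (roughly, tracked open inodes) in that chunk
   rados -p cephfs-metadata listomapkeys mds0_openfiles.0 | wc -l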
If this is the reason for the large pool, I would recommend restricting
the number of caps per client; otherwise you might run into
out-of-memory problems if the MDS is restarted during the backup.
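On recent releases this can be done with the mds_max_caps_per_client
option. A sketch only; the value below is just an example and should be
chosen to fit your client count and MDS memory:

   # limit the number of caps a single client may hold
   # (the default is on the order of 1M)
   ceph config set mds mds_max_caps_per_client 500000

If 'ceph config set' is not available on your version, the same option
can go into the [mds] section of ceph.conf on the MDS hosts instead.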
Regards,
Burkhard
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io