I had this problem once in the past and found that it was related to a
particular osd. To identify it, I ran the command "ceph pg dump | grep snaptrim
| grep -v 'snaptrim_wait'" and found that the osd displayed in the UP_PRIMARY
column was almost always the same.
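To see at a glance which osd that is, something like this can help (the awk
field number is only an assumption, check the column headers of "ceph pg dump"
on your version first):

# count how often each osd shows up as UP_PRIMARY for PGs that are
# actively snaptrimming (not just waiting); adjust $29 to match the
# UP_PRIMARY column in your "ceph pg dump" header line
ceph pg dump 2>/dev/null | grep snaptrim | grep -v 'snaptrim_wait' \
  | awk '{print $29}' | sort | uniq -c | sort -rn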
So I restarted this osd and t
Hi all,
I have some trouble with my backup script because there are a few files, in a
deep sub-directory, with a creation/modification date in the future (for
example: 2040-02-06 18:00:00). As my script uses the ceph.dir.rctime extended
attribute to identify the files and directories to back up,
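For reference, the recursive ctime and the future-dated files can be checked
like this (the path is just an example):

# ceph.dir.rctime is the newest ctime of anything below the directory,
# reported as "<seconds>.<nanoseconds>" since the epoch
getfattr -n ceph.dir.rctime /mnt/cephfs/test
# list files whose modification time is in the future
find /mnt/cephfs/test -newermt now -print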
getfacl: Removing leading '/' from absolute path names
# file: mnt/cephfs/test/.snap/
# owner: root
# group: root
user::rwx
group::rwx
other::---
So in my tests it never actually shows the "users" group ACL. But you
wrote that it worked with Pacific for you, so I'm confused...
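For reference, a minimal way to test this (path and group name are just
examples) is:

setfacl -m g:users:rwx /mnt/cephfs/test    # set a group ACL on the directory
mkdir /mnt/cephfs/test/.snap/snap1         # take a snapshot of it
getfacl /mnt/cephfs/test/.snap/            # check what the snapshot dir reports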
Hi,
I'm facing the same situation as described in bug #57084
(https://tracker.ceph.com/issues/57084) since I upgraded from 16.2.13 to 17.2.6.
For example:
root@faiserver:~# getfacl /mnt/ceph/default/
# file: mnt/ceph/default/
# owner: 99
# group: nogroup
# flags: -s-
user::rwx
user:s-sac-acquisi
Hi,
Or you can query the MDS(s) with:
ceph tell mds.* dump inode <inode number> 2>/dev/null | grep path
for example:
user@server:~$ ceph tell mds.* dump inode 1099836155033 2>/dev/null | grep path
"path": "/ec42/default/joliot/gipsi/gpu_burn.sif",
"stray_prior_path": "",
Arnaud
On 01/05/2023 15:07
Hi Venky,
> Also, at one point the kclient wasn't able to handle more than 400 snapshots
> (per file system), but we have come a long way from that and that is not a
> constraint right now.
Does that mean there is no longer a limit on the number of snapshots per
filesystem? And, if not, do you k