> osd_memory_target = 2147483648
>
> Based on some reading, I'm starting to understand a little about what can be
> tweaked. For example, I think the osd_memory_target looks low. I also think
> the DB/WAL should be on dedicated disks or partitions, but have no idea what
> procedure
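For what it's worth, a minimal sketch of adjusting osd_memory_target at runtime, assuming a release with the centralized config database (Mimic or newer); the 4 GiB value below is purely illustrative, not a recommendation:

  # ceph config set osd osd_memory_target 4294967296
  # ceph config get osd.0 osd_memory_target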
Hi Frank,
I can't advise much on the disk issue - just an obvious thought about
upgrading the firmware and/or contacting the vendor. IIUC the disk is
totally inaccessible at this point, i.e. you're unable to read from it
bypassing LVM either, right? If so, this definitely looks like a
low-level problem
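In case it helps, one way to test reads while bypassing LVM is to hit the raw
block device directly (a sketch; /dev/sdX is a placeholder for the affected disk):

  # smartctl -a /dev/sdX
  # dd if=/dev/sdX of=/dev/null bs=1M count=100 iflag=direct

If even a direct read like this fails or the drive drops off the bus, that points
to the disk or its firmware rather than to Ceph or LVM.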
Hi All,
I am using Ceph Luminous with the librados2-14.2.11-0 library in my application,
and I am getting the random segfault below from time to time. Here is the stack
trace from the core dump (librados2-14.2.11-0):
Program terminated with signal 11, Segmentation fault.
(gdb)
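To make the trace more useful, it usually helps to load the core together with
matching debug symbols and capture full backtraces for all threads (a sketch;
the paths are illustrative, and debuginfo-install assumes an RPM-based system):

  $ sudo debuginfo-install librados2
  $ gdb /path/to/your/application /path/to/corefile
  (gdb) bt full
  (gdb) thread apply all bt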
Hi all,
I need to reduce the number of PGs in a pool from 2048 to 512 and would really
like to do that in a single step. I executed the set pg_num 512 command, but
the PGs are not all merged. Instead I get this intermediate state:
pool 13 'con-fs2-meta2' replicated size 4 min_size 2 crush_rule
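If this is Nautilus or newer, PG merging is deliberately gradual: pg_num only
steps down towards the new target, and the mgr throttles the merges (see
target_max_misplaced_ratio), so an intermediate state like the above is
expected rather than a single jump. A sketch for issuing the change and
watching it converge (pool name taken from the output above):

  # ceph osd pool set con-fs2-meta2 pg_num 512
  # ceph osd pool ls detail | grep con-fs2-meta2   (shows pg_num and pg_num_target)
  # ceph status                                    (repeat until pg_num reaches 512)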
Since you've decided to *pin* them, they'll stay pinned to whichever ranks
they've been pinned to.
The pinning strategy can be overridden for subdirs. But as long as you
don't touch the pinning strategy for a subdir, the subdirs stay pinned to
the rank that the closest parent dir has been pinned to
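For reference, a subdir's policy can be overridden with an explicit export pin
via the ceph.dir.pin xattr (a sketch; the mount path is illustrative):

  # setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/some/subdir     (pin this subtree to rank 2)
  # setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/some/subdir    (drop the explicit pin again)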
Hi Milind,
super, thanks! That's good enough for me.
Just one stupid question: will the ephemerally pinned sub-trees always
stay entirely on the same MDS rank? I really want to avoid sub-tree exports for
folders under /home etc., meaning also avoiding sub-tree exports for a path like
/home
On Sat, Oct 8, 2022 at 7:27 PM Frank Schilder wrote:
> Hi all,
>
> I believe I enabled ephemeral pinning on a home dir, but I can't figure
> out how to check that it's working. Here is my attempt:
>
> Set the flag:
> # setfattr -n ceph.dir.pin.distributed -v 1 /mnt/admin/cephfs/hpc/home
>
> Try to
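One way to verify the setting and the resulting distribution, sketched under
the assumption that the xattr was set as above and that you can reach the MDS
admin sockets (the mds name is illustrative):

  # getfattr -n ceph.dir.pin.distributed /mnt/admin/cephfs/hpc/home
  # ceph daemon mds.<name> get subtrees | less    (run on the active MDS host; look for the subtrees under /hpc/home and the ranks they are pinned to)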