Hi all,
seeking your help with this.
We are using Luminous 12.2.12 and have enabled 3 active MDS daemons.
When I run "ceph daemon mds.<name> dump loads" on any active MDS, I always see
something like the following:
"mds_load": {
"mds.0": {
"request_rate": 526.045993,
"cache_hit_rate": 0.000
Hi Konstantin,
the situation after moving the PGs with osdmaptool is not really better than
it was without them:
$ ceph osd df class hdd
[…]
MIN/MAX VAR: 0.86/1.08 STDDEV: 2.04
The OSD with the fewest PGs has 66 of them; the one with the most has 83.
Is this the expected result? I'm unsure how much unus
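For context, the usual offline upmap workflow with osdmaptool looks roughly like this; a sketch only, with a placeholder pool name and limits rather than the values actually used here:

# Grab the current OSD map and let osdmaptool compute pg-upmap-items entries
$ ceph osd getmap -o osdmap
$ osdmaptool osdmap --upmap upmap.sh --upmap-pool <pool> --upmap-max 100 --upmap-deviation 1
$ bash upmap.sh   # applies the generated "ceph osd pg-upmap-items ..." commands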
I think you can do a find for the inode (-inum n). At least I hope you can.
However, I vaguely remember that there was a thread where someone gave a really
nice MDS command for finding the path to an inode in no time.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
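A minimal sketch of the find approach mentioned above, assuming the file system is mounted at /mnt/cephfs and using the inode number from the snapshot record quoted below:

# Walk the mounted file system and print the path of the file with that inode number (slow on large trees)
$ find /mnt/cephfs -inum 1099519875627 2>/dev/null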
Thanks, good tip! If I do not know where I created these, is there a way
to get their location in the filesystem? Or maybe a command that deletes
by snapid?
{
"snapid": 54,
"ino": 1099519875627,
"stamp": "2017-09-13 21:21:35.769863",
"na
You can create multiple realms in the same .rgw.root pool. The only
limitation is that you can't use the same names for zones/zonegroups
between the realms in a single cluster.
On 12/17/19 12:46 AM, 黄明友 wrote:
I want to run two realms on one Ceph cluster, but I found that RGW will use
only one .rgw.root
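To illustrate, standing up a second realm next to an existing one might look roughly like this; a sketch with placeholder realm/zonegroup/zone names, not a complete multisite setup:

# Create a second realm plus its own zonegroup and zone (names are placeholders)
$ radosgw-admin realm create --rgw-realm=second
$ radosgw-admin zonegroup create --rgw-realm=second --rgw-zonegroup=zg2 --master
$ radosgw-admin zone create --rgw-realm=second --rgw-zonegroup=zg2 --rgw-zone=zone2 --master
$ radosgw-admin period update --rgw-realm=second --commit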
Have you tried "ceph daemon mds.NAME dump snaps" (available since mimic)?
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Lars Täuber
Sent: 17 December 2019 12:32:34
To: Stephan Mueller
Cc: ceph-users@ceph.io
Subject: [ceph-use
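For reference, the command suggested above is run against one active MDS daemon; a minimal example, with mds.a standing in for an actual daemon name:

# Dump the snapshot table known to this MDS (Mimic or later)
$ ceph daemon mds.a dump snaps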
Have you already tried to adjust the "mds_cache_memory_limit" and/or
"ceph tell mds.* cache drop"? I really wonder how the MDS copes with
that with millions of caps.
I played with the cache size, yeah. I kind of need a large cache;
otherwise everything is just slow and I'm constantly getting cac
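For reference, the two knobs mentioned above are typically exercised like this; the 8 GiB value is only an example, and the "ceph config set" form assumes the centralized config store (Mimic or later):

# Raise the MDS cache memory target to 8 GiB (value is an example)
$ ceph config set mds mds_cache_memory_limit 8589934592

# Ask all MDS ranks to trim their caches and recall client capabilities
$ ceph tell mds.* cache drop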
Hi Janek,
Quoting Janek Bevendorff (janek.bevendo...@uni-weimar.de):
> Hey Patrick,
>
> I just wanted to give you some feedback about how 14.2.5 is working for me.
> I've had the chance to test it for a day now and overall, the experience is
> much better, although not perfect (perhaps far from it).
Hi Michael,
thanks for your gist.
This is at least a way to do it. But there are many directories in our cluster.
The "find $1 -type d" lasts for about 90 minutes to find all 2.6 million
directories.
Is there another (faster) way e.g. via mds?
Cheers,
Lars
Mon, 16 Dec 2019 17:03:41 +
Step
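For comparison, a client-side walk along those lines might look like this; a sketch only, assuming a mount point of /mnt/cephfs and listing each directory's .snap entries, which is why it gets slow with 2.6 million directories:

# Walk every directory and report the snapshots attached to it (one readdir per directory)
$ find /mnt/cephfs -type d 2>/dev/null | while read -r d; do
      s=$(ls -A "$d/.snap" 2>/dev/null)
      [ -n "$s" ] && echo "$d: $s"
  done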
Hey Patrick,
I just wanted to give you some feedback about how 14.2.5 is working for
me. I've had the chance to test it for a day now and overall, the
experience is much better, although not perfect (perhaps far from it).
I have two active MDS daemons (I figured that'd spread the metadata load a
li