Hi.
I just changed some of my data on CephFS to go to the EC pool instead
of the 3x replicated pool. The data is "write rare / read heavy" data
being served to an HPC cluster.
To my surprise, it looks like the OSD memory caching is done at the
"split object level", not at the "assembled object level".
@Paul Emmerich Thank you very much! That did the trick!
On Sat, Jun 8, 2019 at 4:59 AM huang jun wrote:
>
> Were your OSDs OOM-killed while the cluster was doing recovery/backfill, or
> just during client I/O?
> The config items you mentioned are for BlueStore, and OSD memory
> includes many other
> things
On Fri, Jun 7, 2019 at 11:35 PM Sergei Genchev wrote:
> Hi,
> My OSD processes are constantly getting killed by OOM killer. My
> cluster has 5 servers, each with 18 spinning disks, running 18 OSD
> daemons in 48GB of memory.
> I was trying to limit OSD cache, according to
> http://docs.ceph.co
On Thu, 6 Jun 2019 at 03:01, Josh Haft wrote:
>
> Hi everyone,
>
> On my 13.2.5 cluster, I recently enabled the ceph balancer module in
> crush-compat mode.
Why did you choose compat mode? Don't you want to try another one instead?
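In case it is useful, a minimal sketch of moving to upmap mode (this assumes
every connected client is Luminous or newer, otherwise the min-compat step
below will refuse):

    # current mode and distribution score
    ceph balancer status
    ceph balancer eval

    # upmap needs Luminous-or-newer clients only
    ceph osd set-require-min-compat-client luminous

    # switch mode and (re)enable
    ceph balancer mode upmap
    ceph balancer on

upmap remaps individual PGs instead of adjusting compat weight-sets, so it
generally reaches a more even distribution.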
What does your 'ceph osd df tree' output show? Does the OSD have the expected PGs?
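A quick sketch of what to look at there (column layout is from Mimic and may
shift slightly between releases):

    # per-OSD utilization and PG count, laid out along the CRUSH tree;
    # the PGS column is the number of PGs each OSD currently holds
    ceph osd df tree

A wide spread between the smallest and largest PGS values is usually what
shows up later as uneven %USE.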
Josh Haft wrote on Fri, Jun 7, 2019 at 9:23 PM:
>
> 95% of usage is CephFS. The rest is split between RGW and RBD.
>
> On Wed, Jun 5, 2019 at 3:05 PM Gregory Farnum wrote:
> >
> > I think the mimic balancer doesn't include omap data wh
I think the written data will also go to osd.4 in this case.
Because osd.4 is not down, Ceph does not consider the PG to have a down OSD,
and it will replicate the data to all OSDs in the actingbackfill set.
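If it helps, a small sketch of checking which OSDs a PG is actually mapped to
(the pgid 2.1f and the OSD ids in the sample output are just placeholders):

    # up set and acting set for one PG
    ceph pg map 2.1f
    # e.g.: osdmap e1234 pg 2.1f (2.1f) -> up [1,4,7] acting [1,4,7]

    # full peering detail, including recovery/backfill state
    ceph pg 2.1f query

As long as osd.4 shows up in the acting set, writes will keep being
replicated to it.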
Tarek Zegar wrote on Fri, Jun 7, 2019 at 10:37 PM:
> Paul / All
>
> I'm not sure what warning your a
Were your OSDs OOM-killed while the cluster was doing recovery/backfill, or
just during client I/O?
The config items you mentioned are for BlueStore, and OSD memory includes
many other things, like the pglog, so it's important to know whether your
cluster is doing recovery.
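A rough way to see where an OSD's memory actually goes (run on the OSD host;
osd.0 stands in for one of your OSD ids):

    # per-mempool breakdown: bluestore caches, pglog, buffers, ...
    ceph daemon osd.0 dump_mempools

    # the osd_pglog entry can grow well past the bluestore cache limits
    # you tuned, especially while recovery or backfill is running

That is why capping only the bluestore cache options often isn't enough to
stop the OOM kills.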
Sergei Genchev wrote on Sat, Jun 8, 2019 at 5:35 AM:
>
> Hi
From the error message, I'm inclined to think that 'mon_max_pg_per_osd' was
exceeded. You can check its value; the default is 250, so you can have at
most 1500 PG instances (250 * 6 OSDs), and for replicated pools with size=3
that means 500 PGs across all pools. You already have 448 PGs, so the next
pool can only get the remaining 52.
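A quick sketch of the check and the arithmetic (mon.a is a placeholder for
one of your monitors, and 'ceph daemon' has to run on that monitor's host):

    # current limit (default 250)
    ceph daemon mon.a config get mon_max_pg_per_osd

    # 250 PGs/OSD * 6 OSDs          = 1500 PG instances
    # 1500 / 3 replicas (size=3)    =  500 PGs total across all pools
    #  500 - 448 already used       =   52 PGs left for a new pool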
On Sat, 8 Jun 2019 at 04:35, Sergei Genchev wrote:
>
> Hi,
> My OSD processes are constantly getting killed by OOM killer. My
> cluster has 5 servers, each with 18 spinning disks, running 18 OSD
> daemons in 48GB of memory.
> I was trying to limit OSD cache, according to
> http://docs.ceph.com/
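For the memory limiting above, a minimal sketch using osd_memory_target
instead of the individual bluestore cache options (this assumes a release new
enough to have osd_memory_target, and the 2 GB figure is just 48 GB spread
over 18 OSDs with some headroom kept back):

    # ceph.conf on the OSD hosts
    [osd]
    osd_memory_target = 2147483648   # ~2 GiB per OSD daemon

    # or apply at runtime without restarting the OSDs
    ceph tell osd.* injectargs '--osd_memory_target 2147483648'

The target is best-effort: OSDs can still overshoot it during
recovery/backfill, which is why dividing the full 48 GB evenly by 18 is
riskier than leaving a few GB unallocated.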
(adding ceph-users instead)
On 19/06/07 12:53pm, Lluis Arasanz i Nonell - Adam wrote:
> Hi all,
>
> I know I have a very old ceph version, but I need some help.
> Also, please understand that English is not my native language, so keep that
> in mind if something is not explained very well.
>
>