Hi
We have a new mimic (13.2.6, will upgrade to nautilus next month) cluster
where the rados gateway pool currently has many more objects per PG than
the other pools. This leads to a warning in ceph status:
1 pools have many more objects per pg than average
I tried to get rid of this warning by
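(For reference, not necessarily what was attempted here: the usual ways to clear this warning are to raise pg_num on the object-heavy pool or to relax the skew threshold. Pool name and values below are placeholders, and depending on the release the option has to be set on the mgr rather than the mon.)
$ ceph osd pool set default.rgw.buckets.index pg_num 128
$ ceph osd pool set default.rgw.buckets.index pgp_num 128
$ ceph config set mgr mon_pg_warn_max_object_skew 20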
Hi there;
I've got a ceph cluster with 4 nodes, each with 9 4TB drives.
Last night a disk failed, and unfortunately this led to a kernel panic on the
hosting server
(supermicro: never again).
One reboot later, the cluster rebalances.
This morning, I'm in this situation:
root@s3:~# ceph status
Hello,
I added a DB device to my OSDs running Nautilus. The DB data migrated
over some days from the HDD to the SSD (DB device).
But now it seems all are stuck at:
# ceph health detail
HEALTH_WARN BlueFS spillover detected on 8 OSD(s)
BLUEFS_SPILLOVER BlueFS spillover detected on 8 OSD(s)
osd.0
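(To see how much DB data is still sitting on the slow device of a flagged OSD, the bluefs perf counters are one place to look; osd.0 is just the first OSD listed above, and the command has to run on its host:)
# ceph daemon osd.0 perf dump bluefs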
I have some OSDs with a bluefs label formatted like this:
{
    "/dev/sdc2": {
        "osd_uuid": "cfb2eaa3-1811-4108-b9bb-aad49555246c",
        "size": 4000681099264,
        "btime": "2017-07-14 14:58:09.627614",
        "description": "main",
        "require_osd_release": "14"
    }
}
A
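(A label like the one above is normally read back with ceph-bluestore-tool, using the device path from the listing:)
# ceph-bluestore-tool show-label --dev /dev/sdc2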
Easiest way I know would be to use
$ ceph tell osd.X compact
This is what cures that whenever I have metadata spillover.
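If all eight flagged OSDs need it, a simple loop works (substitute your actual OSD ids):
$ for id in 0 1 2 3 4 5 6 7; do ceph tell osd.$id compact; done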
Reed
> On Mar 2, 2020, at 3:32 AM, Stefan Priebe - Profihost AG
> wrote:
>
> Hello,
>
> I added a DB device to my OSDs running Nautilus. The DB data migrated
> over some d
Hello,
I was looking for an official announcement for the Octopus release, as the
latest update (back in Q3/2019) on the subject said it was scheduled for
March 1st.
Any updates on that?
BR,
--
Alex Chalkias
Product Manager
alex.chalk...@canonical.com
+33 766599367
Canonical | Ubuntu
It's getting close. My guess is 1-2 weeks away.
On Mon, 2 Mar 2020, Alex Chalkias wrote:
> Hello,
>
> I was looking for an official announcement for the Octopus release, as the
> latest update (back in Q3/2019) on the subject said it was scheduled for
> March 1st.
>
> Any updates on that?
>
> BR,
Thanks for the update. Are you doing a beta-release prior to the official
launch?
On Mon, Mar 2, 2020 at 7:12 PM Sage Weil wrote:
> It's getting close. My guess is 1-2 weeks away.
>
> On Mon, 2 Mar 2020, Alex Chalkias wrote:
>
> > Hello,
> >
> > I was looking for an official announcement for O
On 02.03.20 at 18:16, Reed Dier wrote:
> Easiest way I know would be to use
> $ ceph tell osd.X compact
>
> This is what cures that whenever I have metadata spillover.
No, that does not help. Also keep in mind that in my case the metadata
hasn't spilled over; instead, after OSD creation, I added a
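(If compaction really does not shift the data, one approach that has been used in similar situations is to migrate the BlueFS data explicitly with ceph-bluestore-tool while the OSD is stopped. The paths below are the usual defaults for osd.0 and should be double-checked before running anything:)
# systemctl stop ceph-osd@0
# ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 \
      --devs-source /var/lib/ceph/osd/ceph-0/block \
      --dev-target /var/lib/ceph/osd/ceph-0/block.db
# systemctl start ceph-osd@0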
I just upgraded a cluster that I inherited from Jewel to Luminous and am
trying to work through the new warnings/errors.
I got the message about 3 OMAP objects being too big, all of them in the
default.rgw.buckets.index pool. I expected that dynamic sharding would
kick in, but no luck after several
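(Assuming rgw_dynamic_resharding has not been disabled, these are the usual commands to see whether resharding is queued or has failed and to trigger it by hand; bucket name and shard count are placeholders:)
$ radosgw-admin reshard list
$ radosgw-admin bucket limit check
$ radosgw-admin reshard add --bucket=mybucket --num-shards=32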
Hello all,
I'm maintaining a small 12-OSD Nautilus cluster (36 TB raw). My mon nodes have
the mgr/mds collocated with the mon. Each node is allocated 10 GB of RAM.
During a recent single-disk failure and the corresponding recovery, I noticed my
mgr/mons were starting to get OOM-killed/restarted
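(One knob worth a look in a collocated setup like this, purely as a sketch, is the MDS cache limit, since the MDS shares RAM with the mon/mgr; the MDS name and the 2 GB value below are only examples:)
# ceph daemon mds.$(hostname -s) cache status
# ceph config set mds mds_cache_memory_limit 2147483648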
Can you share the "ceph pg 6.36a query" output?
Steve
On 3/2/20, 2:53 AM, "Simone Lazzaris" wrote:
Hi there;
I've got a ceph cluster with 4 nodes, each with 9 4TB drives.
Last night a disk failed, and unfortunately this led to a kernel panic on
the hosting server
(supermicro: ne
Hello list,
Does anybody have a guide to build Ceph Nautilus for Debian stretch? I
wasn't able to find a backported gcc-8 for stretch.
Otherwise I would start one.
Greets,
Stefan
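(For what it's worth, once a suitable compiler is in place the usual in-tree Debian package build is roughly the following; the tag is only an example and this is untested on stretch:)
$ git clone --branch v14.2.7 --recursive https://github.com/ceph/ceph.git
$ cd ceph
$ ./install-deps.sh
$ dpkg-buildpackage -us -uc -j$(nproc)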
Hi,
On 3/3/20 8:01 AM, Stefan Priebe - Profihost AG wrote:
> Does anybody have a guide to build Ceph Nautilus for Debian stretch? I
> wasn't able to find a backported gcc-8 for stretch.
That's because a gcc backport isn't too trivial; it may even require rebuilds
of very basic libraries such as libc/li