> When I first migrated to Ceph, my servers were all running CentOS 7, which I
> (wrongly) thought could not handle anything above Octopus,
Containerized deployments do have the advantage of less coupling to the
underlying OS for dependencies, though the very latest CentOS 9 containers may
ha
Rochester Institute of Technology
From: Tim Holloway
Sent: Saturday, April 12, 2025 1:13:05 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: nodes with high density of OSDs
When I first migrated to Ceph, my servers were all running CentOS 7,
which I (wrongly) thought could not
setups and not containers. YMMV
--
Paul Mezzanini
Platform Engineer III
Research Computing
Rochester Institute of Technology
From: Tim Holloway
Sent: Saturday, April 12, 2025 1:13:05 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: nodes with high density of OSDs
When I first migrated to Ceph, my servers were all running CentOS 7,
which I (wrongly) thought could not handle anything above Octopus, and
on top of that, I initially did legacy installs. So in order to run
Pacific and to keep the overall clutter in the physical box
configuration down, I made
One possibility would be to have ceph simply set aside space on the OSD
and echo the metadata there automatically. Then a mechanism could scan
for un-adopted drives and import as needed. So even a dead host would be
OK as long as the device/LV was still usable. I've migrated non-ceph
LVs, after
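ceph-volume already records enough metadata in LVM tags for something close to this: on a replacement host, ceph-volume lvm activate --all will scan for and bring up any OSD LVs it can see (cephadm has ceph cephadm osd activate <host>, IIRC). A rough sketch of the scan-and-report half of that idea, assuming a non-containerized /var/lib/ceph/osd layout and the current JSON output of ceph-volume lvm list:

#!/usr/bin/env python3
"""Sketch of the 'scan for un-adopted OSD drives' idea above.

Assumptions (not guarantees): ceph-volume is installed, the OSD LVs carry
the usual ceph.* LVM tags, the host uses the classic non-containerized
/var/lib/ceph/osd/ceph-<id> layout, and the JSON layout of
'ceph-volume lvm list' matches current releases.
"""
import json
import os
import subprocess


def discovered_osds():
    """Return {osd_id: osd_fsid} for every OSD LV ceph-volume can see here."""
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    osds = {}
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            fsid = dev.get("tags", {}).get("ceph.osd_fsid")
            if fsid:
                osds[osd_id] = fsid
    return osds


def activated_osds():
    """OSD ids that already have a data directory on this host."""
    base = "/var/lib/ceph/osd"
    if not os.path.isdir(base):
        return set()
    return {d.split("-", 1)[1] for d in os.listdir(base) if d.startswith("ceph-")}


if __name__ == "__main__":
    active = activated_osds()
    for osd_id, fsid in sorted(discovered_osds().items()):
        if osd_id not in active:
            # A drive moved in from a dead host would show up here.
            print(f"un-adopted: osd.{osd_id} -> ceph-volume lvm activate {osd_id} {fsid}")

Run on the host the drives were moved to, it just prints the activation command for any OSD LV that is visible but not yet mounted there.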
> Apparently those UUIDs aren't as reliable as I thought.
>
> I've had problems with a server box that hosts a ceph VM.
VM?
> Looks like the mobo disk controller is unreliable
Lemme guess, it is an IR / RoC / RAID type? As opposed to JBOD / IT?
If the former and it’s an LSI SKU as most are,
On 12/4/25 20:56, Tim Holloway wrote:
> Which brings up something I've wondered about for some time. Shouldn't
> it be possible for OSDs to be portable? That is, if a box goes bad, in
> theory I should be able to remove the drive and jack it into a hot-swap
> bay on another server and have that server
Apparently those UUIDs aren't as reliable as I thought.
I've had problems with a server box that hosts a ceph VM. Looks like the
mobo disk controller is unreliable AND one of the disks passes SMART but
has interface problems. So I moved the disks to an alternate box.
Between relocation and dr
Filestore, pre-ceph-volume may have been entirely different. IIRC LVM is used
these days to exploit persistent metadata tags.
> On Apr 11, 2025, at 4:03 PM, Tim Holloway wrote:
>
> I just checked an OSD and the "block" entry is indeed linked to storage using
> a /dev/mapper uuid LV, not a /dev/device.
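For anyone curious what those persistent tags look like, a small sketch that dumps the ceph.* LVM tags via lvs; the tag names shown (ceph.osd_id, ceph.osd_fsid, ...) are what recent ceph-volume releases write, so treat them as an assumption on older clusters:

#!/usr/bin/env python3
"""Dump the ceph.* LVM tags that ceph-volume stores on OSD LVs.

Only a sketch: it shells out to lvs (lvm2) and parses the comma-separated
lv_tags column.  Tag names such as ceph.osd_id / ceph.osd_fsid are what
recent ceph-volume releases write; older clusters may differ.
"""
import json
import subprocess

out = subprocess.run(
    ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_tags"],
    check=True, capture_output=True, text=True,
).stdout

for lv in json.loads(out)["report"][0]["lv"]:
    tags = dict(t.split("=", 1) for t in lv["lv_tags"].split(",") if "=" in t)
    if any(k.startswith("ceph.") for k in tags):
        print(f"{lv['vg_name']}/{lv['lv_name']}:")
        for key in ("ceph.osd_id", "ceph.osd_fsid", "ceph.cluster_fsid", "ceph.type"):
            if key in tags:
                print(f"  {key} = {tags[key]}")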
I just checked an OSD and the "block" entry is indeed linked to storage
using a /dev/mapper uuid LV, not a /dev/device. When ceph builds an
LV-based OSD, it creates a VG whose name is "ceph-<uuid>", where "<uuid>"
is a UUID, and an LV named "osd-block-<uuid>", where "<uuid>" is also a
UUID. So although
> I think one of the scariest things about your setup is that there are only 4
> nodes (I'm assuming that means Ceph hosts carrying OSDs). I've been bouncing
> around different configurations lately between some of my deployment issues
> and cranky old hardware and I presently am down to 4 hosts
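To see that naming on a live host, a pure-Python sketch that follows each OSD's block symlink down to its device-mapper node and reads the VG/LV name back from sysfs (paths assume the conventional non-containerized /var/lib/ceph/osd layout):

#!/usr/bin/env python3
"""Follow each OSD's 'block' symlink back to its device-mapper name.

Pure Python, no lvm2 tools needed.  Paths assume the conventional
non-containerized layout under /var/lib/ceph/osd; the dm name in sysfs is
the VG/LV name with hyphens doubled, e.g. ceph--<uuid>-osd--block--<uuid>.
"""
import glob
import os
from pathlib import Path

for block in sorted(glob.glob("/var/lib/ceph/osd/ceph-*/block")):
    target = os.path.realpath(block)            # e.g. /dev/dm-7
    name_file = Path("/sys/block") / os.path.basename(target) / "dm" / "name"
    dm_name = name_file.read_text().strip() if name_file.exists() else "?"
    print(f"{block}\n  -> {target}  ({dm_name})")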
I thought those links were to the by-uuid paths for that reason?
> On Apr 11, 2025, at 6:39 AM, Janne Johansson wrote:
>
> Den fre 11 apr. 2025 kl 09:59 skrev Anthony D'Atri :
>>
>> Filestore IIRC used partitions, with cute hex GPT types for various states
>> and roles. Udev activation was so
Hi,
> On 11 Apr 2025, at 10:53, Alex from North wrote:
>
> Hello Tim! First of all, thanks for the detailed answer!
> Yes, a setup of 4 nodes with 116 OSDs each probably looks a bit overloaded, but
> what if I have 10 nodes? Yes, the nodes themselves are still heavy, but in a row it
> seems not that
Hi Alex,
I think one of the scariest things about your setup is that there are
only 4 nodes (I'm assuming that means Ceph hosts carrying OSDs). I've
been bouncing around different configurations lately between some of my
deployment issues and cranky old hardware and I presently am down to 4
hosts
Den fre 11 apr. 2025 kl 09:59 skrev Anthony D'Atri :
>
> Filestore IIRC used partitions, with cute hex GPT types for various states
> and roles. Udev activation was sometimes problematic, and LVM tags are more
> flexible and reliable than the prior approach. There no doubt is more to it
> but that's what I recall.
Hello Tim! First of all, thanks for the detailed answer!
Yes, a setup of 4 nodes with 116 OSDs each probably looks a bit overloaded, but
what if I have 10 nodes? Yes, the nodes themselves are still heavy, but in a row it
seems not that dramatic, no?
However, in the documentation I see that it is quite common for
Filestore IIRC used partitions, with cute hex GPT types for various states and
roles. Udev activation was sometimes problematic, and LVM tags are more
flexible and reliable than the prior approach. There no doubt is more to it
but that’s what I recall.
> On Apr 10, 2025, at 9:11 PM, Tim Holloway wrote:
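For reference, a sketch that flags partitions still carrying the old ceph-disk GPT type GUIDs; the two GUIDs in it are the ones I recall ceph-disk using for Filestore data and journal partitions, so double-check them against the ceph-disk source for your release:

#!/usr/bin/env python3
"""Flag partitions that still carry the old ceph-disk GPT type GUIDs.

The two GUIDs below are the ones I recall ceph-disk using for Filestore
data and journal partitions; treat them as assumptions and check the
ceph-disk source for your release.  Uses lsblk's JSON output.
"""
import json
import subprocess

CEPH_GPT_TYPES = {
    "4fbd7e29-9d25-41b8-afd0-062c0ceff05d": "ceph-disk OSD data",
    "45b0969e-9b03-4f30-b4c6-b4b80ceff106": "ceph-disk journal",
}

out = subprocess.run(
    ["lsblk", "--json", "-o", "NAME,PARTTYPE"],
    check=True, capture_output=True, text=True,
).stdout


def walk(devices):
    for dev in devices:
        label = CEPH_GPT_TYPES.get((dev.get("parttype") or "").lower())
        if label:
            print(f"/dev/{dev['name']}: {label}")
        walk(dev.get("children", []))


walk(json.loads(out)["blockdevices"])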
Peter,
I don't think udev factors in based on the original question. Firstly,
because I'm not sure udev deals with permanently-attached devices (it's
more for hot-swap items). Secondly, because the original complaint
mentioned LVM specifically.
I agree that the hosts seem overloaded, by the
> I have 4 nodes with 112 OSDs each [...]
As an aside I reckon that is not such a good idea, as Ceph was
designed for one-small-OSD per small-server and lots of them,
but lots of people of course know better.
> Maybe you can gimme a hint how to struggle it over?
That is not so much a Ceph question
That's quite a large number of storage units per machine.
My suspicion is that, since you apparently have an unusually high number
of LVs coming online at boot, the time it takes to linearly activate
them is long enough to overlap with the point in time that ceph starts
bringing up its storage-
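If that race is the culprit, one blunt mitigation on non-containerized clusters is a systemd drop-in that makes each ceph-osd@ instance wait for its data LV before starting. A sketch that writes such a drop-in (unit names and paths assume legacy ceph-osd@<id>.service units, not cephadm containers):

#!/usr/bin/env python3
"""Write a systemd drop-in so each ceph-osd@ instance waits for its LV.

A sketch only, and only for non-containerized clusters that run
ceph-osd@<id>.service directly (cephadm deployments use different unit
names).  The drop-in polls for the OSD's 'block' symlink target before the
daemon starts, papering over slow LVM activation at boot.  Run as root,
then: systemctl daemon-reload
"""
import os

DROPIN_DIR = "/etc/systemd/system/ceph-osd@.service.d"
DROPIN = """\
[Service]
# Wait up to 2 minutes for the OSD's data LV to appear before starting.
ExecStartPre=/usr/bin/timeout 120 /bin/sh -c 'until test -e /var/lib/ceph/osd/ceph-%i/block; do sleep 2; done'
"""

os.makedirs(DROPIN_DIR, exist_ok=True)
with open(os.path.join(DROPIN_DIR, "10-wait-for-lv.conf"), "w") as f:
    f.write(DROPIN)
print("wrote", os.path.join(DROPIN_DIR, "10-wait-for-lv.conf"))

Ordering against the lvm2 activation units would be cleaner, but their names vary between event-activated and generator-based setups, which is why the sketch just polls.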
Hello Dominique!
OS is quite new: Ubuntu 22.04 with all the latest upgrades.
Hi Alex,
Which OS? I had the same problem with LVs not being activated automatically on
an older version of Ubuntu. I never found a workaround except by upgrading to a
newer release.
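Until such an upgrade, a quick way to spot the symptom is to ask lvm2 which ceph VGs came up with inactive LVs and activate them by hand; a sketch (field names follow current lvs, adjust for older releases):

#!/usr/bin/env python3
"""List ceph VGs whose LVs did not come up, and print the fix-up command.

A sketch for the 'LVs not auto-activated at boot' situation above: it asks
lvm2 for the activation state and suggests vgchange -ay for anything
inactive.  Field names follow current lvs(8); adjust for older releases.
"""
import json
import subprocess

out = subprocess.run(
    ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_active"],
    check=True, capture_output=True, text=True,
).stdout

inactive_vgs = set()
for lv in json.loads(out)["report"][0]["lv"]:
    if lv["vg_name"].startswith("ceph-") and lv["lv_active"] != "active":
        inactive_vgs.add(lv["vg_name"])

for vg in sorted(inactive_vgs):
    print(f"inactive ceph VG, try: vgchange -ay {vg}")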
> -----Original Message-----
> From: Alex from North
> Sent: Thursday, 10 April 2025 13:17
> To: ceph-users@ceph.io