>
> You could check if your devices support NVMe namespaces and create more
> than one namespace on the device.
Wow, tricky. Will give it a try.
Thanks!
Łukasz Borek
luk...@borek.org.pl
On Tue, 4 Jun 2024 at 16:26, Robert Sander wrote:
Hi,

On 6/4/24 16:15, Anthony D'Atri wrote:
> I've wondered for years what the practical differences are between using a
> namespace and a conventional partition.

Namespaces show up as separate block devices in the kernel.
The orchestrator will not touch any devices that contain a partition
table.
Or partition, or use LVM.
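If it helps, cephadm shows which devices it considers available and why it
rejects the others; a quick check (flags as in recent releases, output
columns may vary):

    # List devices on all hosts as the orchestrator sees them; the wide
    # output includes a "reject reasons" column (partitions, LVM, etc.)
    ceph orch device ls --wide

    # Force a re-scan after you change partitions or namespaces
    ceph orch device ls --refresh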
On Jun 4, 2024, at 07:59, Robert Sander wrote:
On 6/4/24 12:47, Lukasz Borek wrote:
> Using cephadm, is it possible to cut part of the NVMe drive for OSD and
> leave the rest of the space for RocksDB/WAL?

Not out of the box.

You could check if your devices support NVMe namespaces and create more
than one namespace on the device. The kernel then sees multiple block
devices.
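In case it is useful, this is roughly what that looks like with nvme-cli.
Untested sketch: the sizes, LBA format and controller id below are
placeholders, the drive must support namespace management, and deleting the
existing namespace destroys all data on it:

    # Does the controller support namespace management, and how many
    # namespaces does it allow?
    nvme id-ctrl /dev/nvme0 | grep -Ei 'oacs|^nn '

    # Remove the existing namespace and carve two new ones (sizes are in
    # blocks of the chosen LBA format and purely an example)
    nvme delete-ns /dev/nvme0 --namespace-id=1
    nvme create-ns /dev/nvme0 --nsze=0x10000000 --ncap=0x10000000 --flbas=0
    nvme create-ns /dev/nvme0 --nsze=0x30000000 --ncap=0x30000000 --flbas=0

    # Attach them to the controller (cntlid comes from nvme id-ctrl) and
    # let the kernel rescan
    nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0
    nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=0
    nvme ns-rescan /dev/nvme0

    # /dev/nvme0n1 and /dev/nvme0n2 should now show up as independent
    # block devices
    lsblk /dev/nvme0n1 /dev/nvme0n2

From there a cephadm OSD spec should be able to reference the two namespaces
separately (for example one as a data device and the other under db_devices).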
>
> I have certainly seen cases where the OMAPs have not stayed within the
> RocksDB/WAL NVMe space and have been going down to disk.

How can I monitor the OMAP size and check that it does not spill out of the
NVMe?
The OP's numbers suggest IIRC something like 120GB-ish for WAL+DB, though,
depending on workload, spillover could of course still be a thing.
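A few places to watch for that (OSD id 0 below is just an example, the
daemon command needs to run on the OSD's host, e.g. from a cephadm shell,
and the exact health warnings vary a bit by release):

    # Per-OSD data, OMAP and metadata usage
    ceph osd df tree

    # BlueFS spillover onto the slow (HDD) device raises a health warning
    ceph health detail | grep -i spillover

    # Per-OSD view: a non-zero slow_used_bytes means the DB has spilled
    # over from the NVMe onto the HDD
    ceph daemon osd.0 perf dump bluefs | grep -E 'db_used_bytes|slow_used_bytes'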
I have certainly seen cases where the OMAPs have not stayed within the
RocksDB/WAL NVMe space and have been going down to disk.

This was on a large cluster with a lot of objects, but the disks that were
being used for the non-EC pool were seeing a lot more actual disk activity
than the other disks.
> You also have the metadata pools used by RGW that ideally need to be on NVMe.

The OP seems to intend shared NVMe for WAL+DB, so the OMAPs end up on NVMe
that way.
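If part of the NVMe does end up as its own OSD (device class nvme), pinning
the index pool to it is straightforward; the rule and pool names below are
only examples and the index pool name depends on your zone:

    # Replicated CRUSH rule that only picks OSDs with device class "nvme"
    ceph osd crush rule create-replicated nvme-only default host nvme

    # Put the RGW bucket index pool on that rule
    ceph osd pool set default.rgw.buckets.index crush_rule nvme-only

    # Check which OSDs sit behind each device class
    ceph osd crush tree --show-shadow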
So a few questions I have around this.
What is the network you have for this cluster?
Changing the bluestore_min_alloc_size would be the last thing I would even
consider. In fact I wouldn’t be changing it, as you are in untested territory.
The challenge with making these sort of things perform i
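For what it's worth, the values can at least be inspected without changing
anything (OSD id 0 is just an example, and whether the metadata field is
present depends on the release):

    # Defaults that will be applied to newly created OSDs
    ceph config get osd bluestore_min_alloc_size_hdd
    ceph config get osd bluestore_min_alloc_size_ssd

    # The value an existing OSD was actually built with (fixed at mkfs time)
    ceph osd metadata 0 | grep -i min_alloc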
>
>> Is this a chassis with universal slots, or is that NVMe device maybe M.2
>> or rear-cage?
>
> 12 * HDD via LSI jbod + 1 PCI NVME.
All NVMe devices are PCI ;).
> Now it's 1.6TB; for production the plan
> is to use 3.2TB.
>
>
>> `ceph df`
>> `ceph osd dump | grep pool`
>> So we can see wh
Anthony, Darren

Thanks for the response. Answering your questions:

> What is the network you have for this cluster?

25GB/s

> Is this a chassis with universal slots, or is that NVMe device maybe M.2
> or rear-cage?

12 * HDD via LSI jbod + 1 PCI NVME. Now it's 1.6TB; for production the plan
is to use 3.2TB.
> Hi Everyone,
>
> I'm putting together a HDD cluster with an EC pool dedicated to the backup
> environment. Traffic via s3. Version 18.2, 7 OSD nodes, 12 * 12TB HDD +
> 1 NVMe each,

QLC, man. QLC. That said, I hope you're going to use that single NVMe SSD for
at least the index pool. Is this a chassis with universal slots, or is that
NVMe device maybe M.2 or rear-cage?
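For the EC side, with 7 OSD hosts a 4+2 profile with host failure domain
leaves one host of headroom; the names and k/m below are only an
illustration (RGW normally creates its own data pool per the zone config),
not a recommendation:

    # Erasure-code profile restricted to HDD OSDs, one chunk per host
    ceph osd erasure-code-profile set backup-ec k=4 m=2 \
        crush-failure-domain=host crush-device-class=hdd

    # Example data pool using that profile, handed to RGW
    ceph osd pool create default.rgw.buckets.data erasure backup-ec
    ceph osd pool application enable default.rgw.buckets.data rgw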