On 8/23/24 10:58, Phong Tran Thanh wrote:
> Hi Ceph users
>
> I am designing a Ceph system with 6 all-NVMe servers. Do I need to
> use 3 separate servers to run the MON services that communicate with
> OpenStack, or should I integrate the MON services into the OSD servers?
> What is the r
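For reference, with a cephadm-managed cluster either layout is just a placement change, so colocating the MONs with three of the six OSD nodes is easy to express; a minimal sketch, where node1-node3 are placeholder hostnames:

    # pin three monitor daemons onto three of the OSD hosts (placeholder names)
    ceph orch apply mon --placement="node1,node2,node3"
    # confirm where the mon service ended up
    ceph orch ls mon

Whether dedicated MON hosts are worth it mostly comes down to leaving the monitors enough CPU/RAM headroom on busy OSD nodes; plenty of small clusters run them colocated.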
On 17.01.24 11:13, Tino Todino wrote:
> Hi folks.
>
> I had a quick search but found nothing concrete on this so thought I would
> ask.
>
> We currently have a 4-host Ceph cluster with an NVMe pool (1 OSD per host)
> and an HDD pool (1 OSD per host). Both OSDs use a separate NVMe for DB/WAL.
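For context, that kind of NVMe/HDD split is normally pinned down with device-class CRUSH rules so each pool only ever lands on the intended media; a short sketch, with placeholder rule and pool names:

    # one replicated rule per device class
    ceph osd crush rule create-replicated replicated_nvme default host nvme
    ceph osd crush rule create-replicated replicated_hdd  default host hdd
    # point each pool at its rule
    ceph osd pool set nvme_pool crush_rule replicated_nvme
    ceph osd pool set hdd_pool  crush_rule replicated_hdd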
Hi,
this question has come up once in the past[0] afaict, but it was kind of
inconclusive so I'm taking the liberty of bringing it up again.
I'm looking into implementing a key rotation scheme for Ceph client keys. As it
potentially takes some non-zero amount of time to update key material ther
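For concreteness, the usual way to rotate a client key without leaving a window where consumers hold a key the cluster no longer accepts is to stage a second entity with identical caps, roll the new keyring out, and only then retire the old one; a rough sketch, where client.foo and its caps are placeholders:

    # create a replacement entity with the same caps as the old one
    ceph auth get-or-create client.foo-new mon 'allow r' osd 'allow rw pool=foo' \
        -o /etc/ceph/ceph.client.foo-new.keyring
    # ...distribute the new keyring and repoint consumers at client.foo-new...
    ceph auth rm client.foo

Newer releases also have a "ceph auth rotate" command that regenerates an entity's key in place, but that reintroduces exactly this propagation window, and whether your release has it is something to verify rather than assume.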
On 14.02.24 06:59, Vladimir Sigunov wrote:
> Hi Ronny,
> This is a good starting point for your design.
> https://docs.ceph.com/en/latest/rados/operations/stretch-mode/
>
> My personal experience says that a 2-DC Ceph deployment could suffer from a
> 'split brain' situation. If you have any chance
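For anyone skimming this later, the monitor-side setup from that stretch-mode page is fairly compact; a sketch assuming five MONs (a-e), two datacenter buckets dc1/dc2 already populated in the CRUSH map, a third site dc3 for the tiebreaker, and a CRUSH rule named stretch_rule that stores two copies per datacenter:

    ceph mon set election_strategy connectivity
    ceph mon set_location a datacenter=dc1
    ceph mon set_location b datacenter=dc1
    ceph mon set_location c datacenter=dc2
    ceph mon set_location d datacenter=dc2
    # the tiebreaker monitor sits outside both datacenters
    ceph mon set_location e datacenter=dc3
    ceph mon enable_stretch_mode e stretch_rule datacenter

The tiebreaker monitor in a third location is what keeps a 2-DC layout from ending up in the split-brain situation described above when the inter-DC link drops.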
Hi,
is there a ballpark timeline for a Squid release candidate / release?
I'm aware of this pad that tracks blockers; is that still accurate, or should I
be looking at another resource?
https://pad.ceph.com/p/squid-upgrade-failures
Thanks!
peter.
On 22.10.21 11:29, Mevludin Blazevic wrote:
> Dear Ceph users,
>
> I have a small Ceph cluster where each host consists of a small number of SSDs
> and a larger number of HDDs. Is there a way to use the SSDs as a performance
> optimization, such as putting OSD journals on the SSDs and/or using SSDs
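In case it is useful, the BlueStore equivalent of the old journal-on-SSD layout is putting the DB (and with it the WAL) on the SSDs; with cephadm that is usually expressed as an OSD service spec, roughly like the sketch below (spec name and placement are placeholders):

    cat > osd_spec.yml <<'EOF'
    service_type: osd
    service_id: hdd_data_ssd_db
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1     # HDDs carry the object data
      db_devices:
        rotational: 0     # SSDs carry the BlueStore DB; the WAL lives there too by default
    EOF
    ceph orch apply -i osd_spec.yml

On non-cephadm clusters the same split can be done with ceph-volume lvm batch and its --db-devices option.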
On 2025-07-09 23:12, Özkan Göksu wrote:
> Hello Burkhard.
>
> Yes you are right indeed.
>
> Currently I'm using an Arch Linux-based custom development OS running Nautilus.
> For Reef, I have to develop a new custom OS from scratch and revisit all of
> my dependencies.
> I had too many issues on my cust