Is this known to be working:
Setting up the Ceph cluster on ARM and then using the storage from x86
machines, for example with LXC, Docker and KVM?
Is this possible?
Greetings
filip
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
Why not? The hardware architecture doesn't matter.
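An x86 client only needs a reachable ceph.conf and keyring; the CPU
architecture of the MON/OSD nodes is invisible to it. As a minimal sketch
(the pool and image names here are made up for illustration):

    # On the x86 KVM host, assuming /etc/ceph/ceph.conf and a client
    # keyring pointing at the ARM cluster are already in place:
    rbd create libvirt-pool/vm-disk1 --size 10240   # 10 GiB image
    rbd map libvirt-pool/vm-disk1                   # shows up as /dev/rbdN

Client and cluster speak the same wire protocol regardless of whether the
daemons run on ARM or x86.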
> On May 25, 2024, at 07:35, filip Mutterer wrote:
>
> Is this known to be working:
>
> Setting up the Ceph cluster on ARM and then using the storage from x86
> machines, for example with LXC, Docker and KVM?
>
> Is this possible?
>
> Greetings
>
> filip
> Hi Everyone,
>
> I'm putting together an HDD cluster with an EC pool dedicated to the backup
> environment. Traffic via S3. Version 18.2, 7 OSD nodes, 12 x 12 TB HDDs +
> 1 NVMe each,
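A minimal sketch of how an EC pool along those lines might be created,
assuming a profile sized for 7 hosts (k=4, m=2 leaves one host of headroom
with a host failure domain); the profile and pool names are made up:

    ceph osd erasure-code-profile set backup-ec k=4 m=2 crush-failure-domain=host
    ceph osd pool create backup-data erasure backup-ec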
QLC, man. QLC. That said, I hope you're going to use that single NVMe SSD for
at least the index pool. Is t
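A minimal sketch of steering the index pool onto the NVMe devices with a
device-class CRUSH rule, assuming the stock zone's pool name and that those
OSDs report the nvme device class; the rule name is made up:

    ceph osd crush rule create-replicated nvme-only default host nvme
    ceph osd pool set default.rgw.buckets.index crush_rule nvme-only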
Well this was an interesting journey through the bowels of Ceph. I put
about 6 hours into tweaking every setting imaginable, just to circle back to
my basic configuration and a 2 GB memory target per OSD. I was never able to
exceed 22 MiB/s of recovery throughput during that journey.
I did end up fixing th
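For reference, a sketch of the knobs that usually bound recovery speed on
Reef (18.2) with the default mClock scheduler; the values are illustrative,
not recommendations:

    # Favor recovery/backfill over client I/O:
    ceph config set osd osd_mclock_profile high_recovery_ops
    # Let the classic recovery limits take effect under mClock:
    ceph config set osd osd_mclock_override_recovery_settings true
    ceph config set osd osd_max_backfills 3
    ceph config set osd osd_recovery_max_active 5
    # The 2 GB per-OSD memory target mentioned above, in bytes:
    ceph config set osd osd_memory_target 2147483648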
Hi!
Could you please elaborate on what you meant by "adding another disc to the
recovery process"?
/Z
On Sat, 25 May 2024, 22:49 Mazzystr wrote:
> Well this was an interesting journey through the bowels of Ceph. I put
> about 6 hours into tweaking every setting imaginable, just to circle back