As far as I can see, cephadm on Debian Buster works until it attempts to
start too many OSD daemons at once.
I've a feeling it's running out of resources, but I'm not sure which ones;
it does not look like memory. I need to get back on site to play with the
hardware, there is only so much one can t
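In the meantime, a quick sanity check of the usual suspects when many OSDs
start at once (open file descriptors, processes/threads, kernel async I/O
contexts). This is just a minimal sketch, assuming standard /proc paths and
the stock Python resource module, not something from the thread:

import resource

def read_proc(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "unavailable"

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("RLIMIT_NOFILE soft=%s hard=%s" % (soft, hard))
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("RLIMIT_NPROC  soft=%s hard=%s" % (soft, hard))
# Each BlueStore OSD consumes async I/O contexts, so a low fs.aio-max-nr
# is a classic reason later OSDs fail to start once enough are running.
print("fs.aio-max-nr      =", read_proc("/proc/sys/fs/aio-max-nr"))
print("kernel.pid_max     =", read_proc("/proc/sys/kernel/pid_max"))
print("kernel.threads-max =", read_proc("/proc/sys/kernel/threads-max"))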
You may be running into the same issue we ran into (make sure to read
the first issue, there's a few mingled in there), for which we
submitted a patch:
https://tracker.ceph.com/issues/50526
https://github.com/alfredodeza/remoto/issues/62
If you're brave (YMMV, test in non-prod first), we pushed an i
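For background (my assumption of the failure mode, not a restatement of the
tickets): it looks like the classic subprocess pipe-buffer deadlock, where
the parent reads one stream to EOF while the child blocks writing the other.
A minimal sketch of the pattern and the usual fix:

import subprocess

# Child writes ~1 MB to stderr before finishing stdout.
cmd = ["python3", "-c",
       "import sys; sys.stderr.write('x' * 1000000); print('done')"]

# Deadlock-prone: parent blocks reading stdout to EOF while the child
# blocks writing stderr into a full pipe buffer.
# proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# out = proc.stdout.read()

# Safe: communicate() drains both pipes concurrently.
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print(len(out), len(err))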
Hello.
I have a virtualization environment and I'm looking for new SSDs to replace
my HDDs.
What are the best price/performance SSDs on the market right now?
I'm looking at 1TB, 512GB, 480GB, 256GB and 240GB sizes.
Is there an SSD recommendation list for Ceph?
Hi,
I have created initramfs scripts for booting from RBD and CephFS:
https://launchpad.net/~trickkiste/+archive/ubuntu/ceph-initramfs
https://gitlab.com/trickkiste-at/ceph-initramfs
I noticed that the CephFS kernel module seems to be named "ceph" not
"cephfs", so I would like to inquire wheth
Thanks, David.
We will investigate the bugs as per your suggestion, and then will look to
test with the custom image.
Appreciate it.
On Sat, May 29, 2021, 4:11 PM David Orman wrote:
The choice depends on scale, chassis / form factor, budget, workload and
needs.
The sizes you list seem awfully small. Tell us more about your use-case.
OpenStack? Proxmox? QEMU? VMware? Converged? Dedicated?
—aad
> On May 29, 2021, at 2:10 PM, by morphin wrote:
Hello Anthony.
I use QEMU and I don't need much capacity.
I have 1000 VMs, and usually they're clones of the same RBD image. The
image is 30GB.
Right now I have 7TB of stored data; with rep x3 that's roughly 20TB of raw
data. It's mostly read intensive. Usage is stable and does not grow.
So I need I/O more than capacity. That's why I'm
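To make the layout concrete: the clones all come from one protected snapshot
of the golden image, roughly like this with the python rbd bindings (the pool,
image and snapshot names here are made up for the example, not my real ones):

import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")              # assumed pool name
    rbd_inst = rbd.RBD()

    # One protected snapshot of the ~30GB golden image...
    with rbd.Image(ioctx, "vm-golden") as base:    # assumed image name
        snaps = [s["name"] for s in base.list_snaps()]
        if "gold" not in snaps:
            base.create_snap("gold")
            base.protect_snap("gold")

    # ...and every VM disk is a thin copy-on-write clone of that snapshot,
    # which is why stored data stays around 7TB instead of 1000 x 30GB.
    for i in range(3):                             # vm-0000 .. vm-0002
        rbd_inst.clone(ioctx, "vm-golden", "gold", ioctx, "vm-%04d" % i)

    ioctx.close()
finally:
    cluster.shutdown()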