> Why the limit of 6 OSDs per SSD?
SATA/SAS throughput generally.
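To put rough numbers on that (a minimal sketch in Python; all drive throughput figures are illustrative assumptions, not measurements from this thread): with filestore, every write is committed to the journal before it goes to the data disk, so a shared journal SSD has to sustain the combined write rate of every OSD behind it.

# Back-of-envelope: how many HDD-backed OSDs can share one journal
# device before the journal becomes the write bottleneck. With
# filestore, every write hits the journal first, so the journal must
# sustain the summed write rate of its OSDs' data disks.

def max_osds_per_journal(journal_write_mbs, osd_disk_write_mbs):
    return int(journal_write_mbs // osd_disk_write_mbs)

SATA_SSD = 500   # assumed sustained write, MB/s, typical SATA/SAS SSD
PCIE_SSD = 1800  # assumed sustained write, MB/s, typical PCIe flash card
HDD = 80         # assumed sustained write, MB/s, 7200 rpm data disk

print(max_osds_per_journal(SATA_SSD, HDD))   # 6  -> the usual rule of thumb
print(max_osds_per_journal(PCIE_SSD, HDD))   # 22 -> why 15 OSDs can keep up

On those assumed numbers a SATA-class SSD saturates at around six spinning OSDs, while a PCIe card has headroom for fifteen or more, which matches both the rule of thumb and the PCI-e test quoted below.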
> I am doing testing with a PCI-e based SSD, and showing that even with 15
> OSD disk drives per SSD, the SSD is keeping up.

That will probably be fine performance-wise, but it's worth noting that all
OSDs journaling to that SSD will fail if the flash device fails.
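A quick sketch of that failure-domain trade-off (cluster and group sizes here are made-up numbers for illustration): the data disks survive, but every OSD whose journal was on the dead flash device stops and must backfill at once.

# One dead journal device takes down every OSD journaling to it, even
# though their data disks are intact; all of them recover at the same
# time. Cluster and group sizes below are assumptions.

CLUSTER_OSDS = 90

for osds_per_ssd in (6, 15):
    lost = osds_per_ssd  # one failed journal SSD stops its whole group
    print(f"{osds_per_ssd} OSDs/SSD: one flash failure degrades "
          f"{lost}/{CLUSTER_OSDS} OSDs ({lost / CLUSTER_OSDS:.0%})")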
Why the limit of 6 OSDs per SSD?

>> Where does Ceph tail off in performance when having too many OSDs in
>> servers?
>
> When your Journal isn't able to keep up. If you use SSDs for
> journaling, use 6 OSDs per SSD at max.

I am doing testing with a PCI-e based SSD, and showing that even with
15 OSD disk drives per SSD, the SSD is keeping up.
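If it helps with provisioning such a shared journal device, here is a minimal sizing sketch based on the standard filestore guideline (osd journal size of at least 2 x expected throughput x filestore max sync interval); the per-disk write rate is an assumption for illustration.

# Journal partition sizing per the filestore guideline:
#   osd journal size >= 2 * expected throughput * filestore max sync interval
# (filestore max sync interval defaults to 5 s). Disk rate is assumed.

def journal_size_gb(throughput_mb_s, sync_interval_s=5.0):
    return 2 * throughput_mb_s * sync_interval_s / 1024

OSD_DISK_MB_S = 80  # assumed per-OSD data-disk write rate
per_osd = journal_size_gb(OSD_DISK_MB_S)

for n in (6, 15):
    print(f"{n} OSDs: ~{n * per_osd:.1f} GB of journal space on the SSD "
          f"(~{per_osd:.2f} GB each; round up generously)")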
On 03/09/2014 12:19 PM, Pieter Koorts wrote:

> Hello,
>
> Just a general question really. What is the recommended node size for
> Ceph storage clusters? The Ceph documentation does say to use more
> smaller nodes rather than fewer large nodes, but what constitutes
> large in terms of Ceph? Is it 16 OSDs, or more like 32? Where does
> Ceph tail off in performance when having too many OSDs in servers?
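As a gloss on the "more smaller nodes" advice the documentation gives (the counts below are assumptions, not from this thread): the practical ceiling is less about a magic OSD count per server and more about how much of the cluster one node failure degrades, since the surviving nodes must absorb that recovery traffic.

# Why "more smaller nodes": when a node fails, roughly its share of the
# cluster's data must re-replicate, and the surviving nodes carry that
# recovery load. OSD counts below are assumptions for illustration.

TOTAL_OSDS = 96

for osds_per_node in (16, 32):
    nodes = TOTAL_OSDS // osds_per_node
    share = osds_per_node / TOTAL_OSDS
    print(f"{osds_per_node} OSDs/node ({nodes} nodes): one node failure "
          f"degrades {share:.0%} of the cluster's OSDs at once")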