From my experience you’ll be better off planning exactly how many OSDs and
nodes you’re going to have and, if possible, equipping them fully from the start.
By just adding a new drive to the same pool, Ceph will start to rearrange data
across the whole cluster, which might reduce client I/O depending on how
heavily the cluster is loaded and how recovery is tuned.
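For what it’s worth, the usual mitigation is to throttle backfill and
recovery so client I/O keeps priority. A minimal sketch — the values are
illustrative, tune them for your hardware:

    # throttle recovery on all OSDs at runtime
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'

    # or persist it in ceph.conf under [osd]:
    #   osd max backfills = 1
    #   osd recovery max active = 1
    #   osd recovery op priority = 1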
Yes, rebuild in case of a whole chassis failure is indeed an issue. That
depends on what the failure domain looks like.
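For example, to make a whole chassis the failure domain, the hosts have to
sit under chassis buckets and the rule has to chooseleaf across them. A
sketch with made-up bucket/host names:

    # group the server sleds of each chassis under a chassis bucket
    ceph osd crush add-bucket dss7k-a chassis
    ceph osd crush move dss7k-a root=default
    ceph osd crush move node1 chassis=dss7k-a
    ceph osd crush move node2 chassis=dss7k-a

    # decompiled CRUSH rule that spreads replicas across chassis
    rule replicated_chassis {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type chassis
        step emit
    }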
I'm currently thinking of not running fully equipped nodes initially.
Let's say four of these machines with 60x 6TB drives each, so each is
only loaded to 2/3.
That's 1440TB raw, distributed across four nodes.
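If the empty bays get filled later, one way to soften the resulting
rebalance — my own suggestion, not something tested on this hardware — is
to bring new OSDs in at zero CRUSH weight and ramp them up in steps
(osd.240 here is a hypothetical new OSD):

    # ceph.conf, [osd] section: new OSDs start with no data placement
    osd crush initial weight = 0

    # then raise the weight gradually, letting the cluster settle in between
    ceph osd crush reweight osd.240 1.0
    ceph osd crush reweight osd.240 3.0
    ceph osd crush reweight osd.240 5.46   # ~full weight for a 6TB (5.46 TiB) drive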
I used a unit a little like this
(https://www.sgi.com/products/storage/servers/mis_server.html) for a SATA
pool in Ceph - rebuilds after a node failure can be painful without a
fair amount of testing & tuning.
I have opted for more units with fewer disks for future builds, using the R730XD.
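What helped me most for planned node work (and short unplanned outages) is
stopping the monitors from marking OSDs out while a node is down — standard
Ceph commands, the subtree limit is optional:

    # before taking a node down for service
    ceph osd set noout
    # ... reboot / replace parts ...
    ceph osd unset noout

    # optionally, never auto-out a whole host at once (ceph.conf, [mon]):
    #   mon osd down out subtree limit = host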
Sounds like you’ll have a field day waiting for a rebuild in case of a node
failure or an update of the CRUSH map ;)
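At least CRUSH map changes can be dry-run offline before committing to the
data movement — roughly like this (rule and replica numbers are examples):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt, then recompile and test the mappings
    crushtool -c crush.txt -o crush.new
    crushtool -i crush.new --test --show-statistics --rule 0 --num-rep 3
    ceph osd setcrushmap -i crush.new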
David
> On 21 Mar 2016, at 09:55, Bastian Rosner wrote:
>
> Hi,
>
> any chance that somebody here already got hands on Dell DSS 7000 machines?
>
> 4U chassis containing 90x 3.5" drive bays