Hello,

On Sat, 5 Apr 2014 08:44:02 +0100 Ian Marshall wrote:

> Hi All
> 
> I am struggling to gain information relating to whether Ceph without SSD
> drives will give sufficient performance in my planned infrastructure
> refresh using Openstack. I was keen to go with Ceph, with its support in
> Openstack and Ubuntu, but it has been suggested that a SAN solution would
> provide better performance. Unfortunately, since I have a limited budget, I
> cannot consider the Ceph Enterprise route at present.
> 
> Planned infrastructure -
> 2 x Hardware load balancer
I don't know what you're serving there beyond the web bits you mention
below, but if it is just simple layer 4 balancing of HTTP traffic, then
LVS will work fine on cheap commodity hardware and free up your budget
for more goodies further down the list.
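If it helps, a bare-bones keepalived/LVS sketch for that would look
roughly like this (VIP, real server IPs and the scheduler are all made
up, adjust to your setup):

  virtual_server 192.0.2.10 80 {
      delay_loop 6
      lb_algo wlc        # weighted least-connection scheduling
      lb_kind DR         # direct routing, return traffic bypasses the LB
      protocol TCP

      real_server 10.0.0.11 80 {
          weight 1
          TCP_CHECK {
              connect_timeout 3
          }
      }
      real_server 10.0.0.12 80 {
          weight 1
          TCP_CHECK {
              connect_timeout 3
          }
      }
  }

Run the same config on both balancers with a VRRP instance on top and
you get active/passive failover for free.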

> 2 x Controller nodes [would run a Ceph MON on each]
> -- Dual CPU, 32Gb RAM, 4 x 600gb SAS 10k drives
Dedicated MON nodes are nice, but they might be overkill in your case,
and half the memory and drives would do either way.
Then again, you're planning on using Openstack, which comes with all the
kitchen sinks in the universe, so maybe that is a good setup if you plan
to put Openstack bits on those nodes, too.
And as mentioned by Dan, make that 3 MONs.
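For the record, that is just a matter of listing all three in ceph.conf
(hostnames and IPs below are made up, of course):

  [global]
  mon initial members = mon01, mon02, mon03
  mon host = 10.0.0.21, 10.0.0.22, 10.0.0.23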

> 2 x Compute nodes [ could run a Ceph MON on one of these]
> -- Dual CPU, 256gb RAM, 4 x 600gb SAS 10k drives
Seeing that you seem to buy Dell, what CPUs would those be? Again, we
don't know exactly how compute-intensive your VMs are, etc.
But 2 Xeons with, what, 8 cores total(?) and 256GB RAM seems a bit
mismatched to me.
For example, the compute nodes I'm building right now have only 128GB RAM
but 64 cores (4x Opteron) and will hold about 150 VMs, thanks to KSM and
the fact that these VMs are basically identical.
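KSM is a stock kernel feature, by the way; on a plain setup enabling and
checking it is just this (your distro's qemu-kvm or ksmtuned packages may
already do it for you):

  echo 1 > /sys/kernel/mm/ksm/run        # turn on kernel samepage merging
  cat /sys/kernel/mm/ksm/pages_sharing   # pages currently being deduplicated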

> EITHER SAN or if Ceph, the storage servers would be
> 2 x R70xd with dual CPU, 64Gb RAM, 24 x 600gb SAS 10k drives, each drive
> as RAID0 with write back cache on controller.
> each of these drives would have a partition for the journals.
>
OK, this is where the crumpet crumbles. ^o^
Firstly, I presume you value that data, so a replica count of 2 is right
out, meaning you will need 3 storage nodes if you do it the Ceph way.
But you're stating 20GB volumes and 100 VMs, which is only 2TB.
What is all the rest of that space for?
Keep in mind that RBD is sparsely allocated, too.
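Back of the envelope, with the numbers straight from your mail and the
ceph.conf knob to make sure you actually get 3 copies:

  100 VMs x 20GB volumes = 2TB logical
  2TB x 3 replicas       = 6TB raw (less in practice, RBD being sparse)

  [global]
  osd pool default size = 3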

Consider using 3 2U nodes with 12 drives each (OS on separate drives).
If you can afford spendy 10K SAS drives, surely you can afford a DC S3700
SSD for every 3 spindles, as in 3 SSDs and 9 HDDs in that 12-disk box.
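With ceph-deploy, putting each OSD's journal on the shared SSD is roughly
one line per spindle; hostname and device names below are made up, so
check the syntax against your ceph-deploy version:

  # HOST:DATA_DISK:JOURNAL_DEVICE - it should carve a journal partition
  # off the SSD for each OSD it creates
  ceph-deploy osd create stor01:sdd:/dev/sda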

Lose the SAS drives in favour of 1TB Velociraptors: more storage and
(test it yourself) better performance than the SAS drives I compared them
with.
That would give you a very snappy 9TB storage cluster.
 
Dan pretty much wrote all there is to say about IOPS and latency.
Caching can help immensely; combined with a layout like the one above you
should be more than fine.
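The cheapest win there is the RBD client-side cache, e.g. in the [client]
section of ceph.conf on the compute nodes (the size is just a starting
point):

  [client]
  rbd cache = true
  rbd cache writethrough until flush = true   ; writethrough until the first guest flush
  rbd cache size = 67108864                   ; 64MB per client, default is 32MB

Just make sure the qemu/Nova disk cache mode is writeback (disk_cachemodes
in nova.conf, if memory serves) or the cache won't actually kick in.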

> Network is all 10gbe
> 
Make that a front and a back network with redundant switches if your
budget allows.
Supposedly some 10GbE switches are faster (latency-wise) than others.
You might find that using Infiniband (IPoIB) turns out cheaper AND faster
than 10GbE. I'm using IB for the Ceph network and live migration in the
cluster I'm building right now.
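Once the interfaces are up, the front/back split is just two lines in
ceph.conf (subnets made up):

  [global]
  public network  = 10.0.0.0/24        ; clients / VMs, 10GbE
  cluster network = 192.168.100.0/24   ; replication and recovery, IPoIB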

> I have a couple of Dell 2950s with only 1Gb network ports and could
> consider these for the Ceph MONs.
> 
That's perfectly adequate.

> This setup would need to be able to run about 80-100 VMs running web
> based applications, each is stateless but boot from block storage and
> each would be using 2-4gb RAM and 20Gb volumes.
> 
> I have read lots of information on the internet and raised questions on
> various forums and have seen positive and negative feedback on Ceph
> performance without SSDs. This has confused me as I am unsure whether
> using Ceph is appropriate for my requirements.
> 
> NOTE, I would be willing to reduce the quantity of drives on the storage
> server
> - say 8-12 x 1TB, as I have also read that performance can be better with
> a lower quantity of drives per host.
> 
More storage nodes are always better, because in a 24-OSD node on 10GbE
you are definitely saturating the network link with just 10 of those
disks (numbers below).
In your case 6 of the storage nodes I'm describing above would be swell,
but would probably totally blow your budget.
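The back-of-the-envelope reason for that:

  10 spindles x ~120-150MB/s sequential  = 1.2-1.5GB/s off the disks alone
  1 x 10GbE link                         = ~1.25GB/s at wire speed

So past roughly 10 disks per node a single 10GbE link, not the spindles,
becomes the bottleneck.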

Regards,

Christian
> 
> 
> Regards
> Ian


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
