Spell check fail, that of course should have read CRUSH map.

Sent from Samsung Mobile



-------- Original message --------
From: harri <ha...@madshark.co.uk>
Date: 18/06/2013 19:21 (GMT+00:00)
To: Gregory Farnum <g...@inktank.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Single Cluster / Reduced Failure Domains


Thanks Greg,

The concern I have is an "all eggs in one basket" approach to storage design. 
Is it feasible, however unlikely, that a single Ceph cluster could be brought 
down (obviously yes)? And what if you wanted to operate different storage 
networks?

It feels right to build virtual environments in a modular design, with compute
and storage sized to run a set number of VMs, and then scale that modular
design by building new separate modules or pods when you need more VMs.

The benefits of Ceph seem to get better when more commodity hardware is added 
but I'm wondering if it would be workable to build multiple Ceph clusters 
according to a modular design (still getting replication and self healing 
features but on a smaller scale per pod - assume there would be enough hardware 
to achieve performance).

I wonder, does DreamHost run all VMs on the same Ceph cluster?

I appreciate Ceph is a different mindset from traditional SAN design and I want
to explore design concepts before implementation. I understand you can create
separation using placement groups and CRUSU mapping, but that's all in the same
cluster.

Regards,

Lee.




Sent from Samsung Mobile



-------- Original message --------
From: Gregory Farnum <g...@inktank.com>
Date: 18/06/2013 17:02 (GMT+00:00)
To: harri <ha...@madshark.co.uk>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Single Cluster / Reduced Failure Domains


On Tuesday, June 18, 2013, harri wrote:

Hi,



I wondered what best practice is recommended for reducing failure domains on a
virtual server platform. If I wanted to run multiple virtual server clusters,
would it be feasible to serve storage from one large Ceph cluster?

I'm a bit confused by your question here. Normally you want as many defined 
failure domains as possible to best tolerate those failures without data loss.




I am concerned that, in the unlikely event the whole Ceph cluster fails, then
ALL my VMs would be offline.

Well, yes?




Is there any way to ring-fence failure domains within a logical Ceph cluster, or
would you instead look to build multiple Ceph clusters (but then that defeats
the object of the technology, doesn't it?)?

You can separate your OSDs into different CRUSH buckets and then assign
different pools to draw from those buckets if you're trying to split up your
storage somehow. But I'm still a little confused about what you're after. :)
-Greg


--
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
