Re: [ceph-users] Introductions

2014-08-13 Thread Mikaël Cluseau
On 08/11/2014 01:14 PM, Zach Hill wrote: Thanks for the info! Great data points. We will still recommend a separated solution, but it's good to know that some have tried to unify compute and storage and have had some success. Yes, and using drives on the compute nodes for backup is a seductive idea

Re: [ceph-users] Introductions

2014-08-10 Thread Zach Hill
Thanks for the info! Great data points. We will still recommend a separated solution, but it's good to know that some have tried to unify compute and storage and have had some success. On Sat, Aug 9, 2014 at 5:50 PM, Mikaël Cluseau wrote: > Hi Zach, > > > On 08/09/2014 11:33 AM, Zach Hill wrote

Re: [ceph-users] Introductions

2014-08-09 Thread Mikaël Cluseau
Hi Zach, On 08/09/2014 11:33 AM, Zach Hill wrote: Generally, we recommend strongly against such a deployment in order to ensure performance and failure isolation between the compute and storage sides of the system. But, I'm curious if anyone is doing this in practice and if they've found reaso

Re: [ceph-users] Introductions

2014-08-08 Thread debian Only
As I know, it is not recommended to run a Ceph OSD (RBD server) on the same host as a VM host such as KVM. On the other hand, with more services on one host, maintenance becomes harder and each service performs worse. 2014-08-09 7:33 GMT+07:00 Zach Hill : > Hi all, > > I'm Zach Hill, the storage lead at Euc
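For anyone who does co-locate OSDs with KVM guests despite this advice, a common mitigation for the contention problem raised above is to cap the resources the OSD daemons may consume. A minimal sketch using a systemd drop-in for the `ceph-osd@` template unit (the specific limit values are illustrative assumptions, not recommendations from this thread):

```ini
# /etc/systemd/system/ceph-osd@.service.d/limits.conf
# Illustrative caps only -- tune for your hardware and workload.
[Service]
CPUQuota=200%       ; at most ~2 cores per OSD daemon
MemoryLimit=4G      ; hard memory ceiling per OSD
```

After adding the drop-in, `systemctl daemon-reload` and restarting the OSD units applies the limits. This does not address the failure-isolation concern, only resource contention between OSDs and guests.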

[ceph-users] Introductions

2014-08-08 Thread Zach Hill
Hi all, I'm Zach Hill, the storage lead at Eucalyptus. We're working on adding Ceph RBD support for our scale-out block storage (EBS API). Things are going well, and we've been happy with Ceph thus far. We are a RHEL/CentOS shop mostly, so any other tips there would be
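For an EBS-style block service backed by RBD, one knob worth knowing on the hypervisor (client) side is the RBD cache. A minimal ceph.conf sketch, assuming librbd clients such as qemu/KVM (values shown are the common defaults, included here for illustration):

```ini
# ceph.conf on the client (hypervisor) side -- illustrative sketch
[client]
rbd cache = true
rbd cache writethrough until flush = true  ; stay writethrough until the guest issues a flush
rbd cache size = 33554432                  ; 32 MiB per-image cache
```

The `writethrough until flush` option keeps the cache safe for guests whose drivers may not send flushes, switching to writeback only once a flush is observed.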