I don't have many of the details (our engineering group handled most of the 
testing), but we currently have 10 Dell PowerEdge R720xd systems, each with 
24 600GB 10k SAS OSDs. Each system has a RAID controller with 2GB NVRAM; in 
testing, performance was better with that than with 6 SSD drives for journals. 
The cluster is configured with separate public and cluster (private) networks, 
both 10GbE. The NAS systems (there are 2, in Active/Passive mode) are connected 
to the 10GbE public network, along with the VMware hypervisor nodes. 

Performance is acceptable (nothing earth-shattering; latency can be a concern 
during peak I/O periods, particularly backups), but we have a relatively small 
VMware environment, primarily for legacy application systems that either aren't 
supported on, or that we're afraid to move to, our larger private cloud 
infrastructure (which also uses Ceph, but with direct RBD access via QEMU+KVM). 

The iSCSI testing was about 2 years ago; I believe it was done against 
Cuttlefish, and we were using tgtd for the target. I'm sure there have been 
enhancements in both stability and performance since then; we just haven't 
gotten around to re-evaluating or changing it, as what we have is working well 
for us (we have mixed workloads, but generally hover around 500-800 active IOPS 
during the day, with peaks of 2-3k during off-hours maintenance windows). We've 
been running this setup for about 1.5 years with no major issues. 
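
For what it's worth, the tgtd side was just a kernel-mapped RBD device exported 
through /etc/tgt/targets.conf; roughly something like the following (the IQN, 
pool/image names, and subnet are illustrative, not our exact config): 

    # map the image with the kernel RBD client first:  rbd map rbd/vmware-lun0
    <target iqn.2013-07.com.example:vmware-lun0>
        # the mapped RBD block device
        backing-store /dev/rbd/rbd/vmware-lun0
        # restrict access to the ESXi hosts on the public network
        initiator-address 10.10.1.0/24
    </target>
    # then reload the target definitions, e.g. "tgt-admin --update ALL"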

----- Original Message -----

From: "Nikhil Mitra (nikmitra)" <nikmi...@cisco.com> 
To: "Bill Campbell" <bcampb...@axcess-financial.com> 
Cc: ceph-users@lists.ceph.com 
Sent: Monday, July 20, 2015 3:05:25 PM 
Subject: Re: [ceph-users] CEPH RBD with ESXi 

Hi Bill, 

Would you be kind enough to share what your setup looks like today, as we are 
planning to back ESXi with Ceph storage? When you tested iSCSI, what issues did 
you notice? What version of Ceph were you running then? What iSCSI software did 
you use for the setup? 

Regards, 
Nikhil Mitra 


From: "Campbell, Bill" < bcampb...@axcess-financial.com > 
Reply-To: "Campbell, Bill" < bcampb...@axcess-financial.com > 
Date: Monday, July 20, 2015 at 11:52 AM 
To: Nikhil Mitra < nikmi...@cisco.com > 
Cc: " ceph-users@lists.ceph.com " < ceph-users@lists.ceph.com > 
Subject: Re: [ceph-users] CEPH RBD with ESXi 

We use VMware with Ceph, but we don't use RBD directly: we have an NFS server 
that exports RBD volumes as datastores to VMware. We did attempt iSCSI with RBD 
to connect to VMware but ran into stability issues (which could have been the 
target software we were using); we have found NFS to be pretty reliable. 
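
At a high level, the gateway just maps an RBD image with the kernel client, 
puts a filesystem on it, and exports it over NFS; ESXi then mounts it as an NFS 
datastore. A rough sketch (pool/image names, size, mount point, and subnet are 
illustrative, not our exact config): 

    # on the NFS gateway
    rbd create rbd/vmware-ds01 --size 2097152     # size in MB (~2TB)
    rbd map rbd/vmware-ds01                       # appears as /dev/rbd0 (udev symlink /dev/rbd/rbd/vmware-ds01)
    mkfs.xfs /dev/rbd/rbd/vmware-ds01
    mkdir -p /exports/vmware-ds01
    mount /dev/rbd/rbd/vmware-ds01 /exports/vmware-ds01

    # /etc/exports -- ESXi needs rw + no_root_squash
    /exports/vmware-ds01  10.10.1.0/24(rw,no_root_squash,sync)

    # apply the export, then add it in vSphere as an NFS datastore
    exportfs -ra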

----- Original Message -----

From: "Nikhil Mitra (nikmitra)" < nikmi...@cisco.com > 
To: ceph-users@lists.ceph.com 
Sent: Monday, July 20, 2015 2:07:13 PM 
Subject: [ceph-users] CEPH RBD with ESXi 

Hi, 

Has anyone implemented Ceph RBD with the VMware ESXi hypervisor? I'm just 
looking to use it as a native VMFS datastore to host VMDKs. Please let me know 
if there are any documents out there that might point me in the right direction 
to get started on this. Thank you. 

Regards, 
Nikhil Mitra 


_______________________________________________ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 



