Re: [ceph-users] ceph , VMWare , NFS-ganesha

2018-05-29 Thread Steven Vacaroaia
Thank you all. My goal is to have an SSD-based Ceph (NVMe + SSD) cluster, so I need to consider performance as well as reliability (although I do realize that a performant cluster that breaks my VMware is not ideal ;-)). It appears that NFS is the safe way to do it, but will it be the bottleneck f

Re: [ceph-users] ceph , VMWare , NFS-ganesha

2018-05-29 Thread Dennis Benndorf
Hi, we use PetaSAN for our VMware cluster. It provides a web interface for management and does clustered active-active iSCSI. For us, the easy management was the reason to choose it, so we don't need to think about how to configure iSCSI... Regards, Dennis On 28.05.2018 at 21:42, Steve

Re: [ceph-users] ceph , VMWare , NFS-ganesha

2018-05-29 Thread Alex Gorbachev
On Mon, May 28, 2018 at 3:42 PM, Steven Vacaroaia wrote: > Hi, > > I need to design and build a storage platform that will be "consumed" mainly > by VMWare > > CEPH is my first choice > > As far as I can see, there are 3 ways CEPH storage can be made available to > VMWare > > 1. iSCSI > 2. NFS-Gan

Re: [ceph-users] ceph , VMWare , NFS-ganesha

2018-05-29 Thread Heðin Ejdesgaard Møller
We are using the iSCSI gateway in ceph-12.2 with vsphere-6.5 as the client. It's an active/passive setup per LUN. We chose this solution because that's what we could get RH support for and it sticks to the "no SPOF" philosophy. Performance is ~25-30% slower than krbd mounting the same rbd imag
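For readers considering the same route, a rough gwcli sketch of a two-gateway, single-LUN setup. All IQNs, hostnames, IPs and sizes below are hypothetical, and the exact gwcli paths and arguments differ a little between ceph-iscsi releases, so treat it as illustrative only:

gwcli
/> cd /iscsi-targets
/iscsi-targets> create iqn.2003-01.org.example.ceph-gw:vmware
/> cd /iscsi-targets/iqn.2003-01.org.example.ceph-gw:vmware/gateways
> create ceph-gw-1 192.168.1.101
> create ceph-gw-2 192.168.1.102
/> cd /disks
/disks> create pool=rbd image=esx_lun0 size=500G
# then create the ESXi initiator IQNs under .../hosts and attach the disk to them

ESXi's path selection policy (typically MRU for ALUA active/passive targets) then handles failover per LUN.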

[ceph-users] ceph , VMWare , NFS-ganesha

2018-05-28 Thread Steven Vacaroaia
Hi, I need to design and build a storage platform that will be "consumed" mainly by VMware. Ceph is my first choice. As far as I can see, there are 3 ways Ceph storage can be made available to VMware: 1. iSCSI 2. NFS-Ganesha 3. an rbd mounted on a Linux NFS server. Any suggestions / advice as to whic
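For context on option 3, a minimal sketch of mapping an RBD image on a Linux gateway and exporting it to ESXi over NFSv3. Pool, image, paths and addresses are hypothetical:

# on the Linux NFS gateway
rbd create vmware/nfs_backing --size 2048000        # ~2 TB image (size is in MB)
rbd map vmware/nfs_backing                          # shows up as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir -p /export/vmware && mount /dev/rbd0 /export/vmware
echo '/export/vmware 10.0.0.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra

# on each ESXi host, mount it as an NFS datastore
esxcli storage nfs add -H 10.0.0.10 -s /export/vmware -v ceph-nfs

Note that the NFS server itself becomes the single point of failure, which is the recurring concern in the threads below.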

Re: [ceph-users] ceph , VMWare , NFS-ganesha

2018-05-28 Thread Brady Deetz
You might look into Open vStorage as a gateway into Ceph. On Mon, May 28, 2018, 2:42 PM Steven Vacaroaia wrote: > Hi, > > I need to design and build a storage platform that will be "consumed" > mainly by VMWare > > CEPH is my first choice > > As far as I can see, there are 3 ways CEPH storage ca

Re: [ceph-users] Ceph + VMWare

2016-10-18 Thread Alex Gorbachev
On Tuesday, October 18, 2016, Frédéric Nass wrote: > Hi Alex, > > Just to know, what kind of backstore are you using within Storcium ? > vdisk_fileio > or vdisk_blockio ? > > I see your agents can handle both : http://www.spinics.net/lists/ > ceph-users/msg27817.html > Hi Frédéric, We use all

Re: [ceph-users] Ceph + VMWare

2016-10-18 Thread Frédéric Nass
Hi Alex, Just to know, what kind of backstore are you using within Storcium? vdisk_fileio or vdisk_blockio? I see your agents can handle both: http://www.spinics.net/lists/ceph-users/msg27817.html Regards, Frédéric. On 06/10/2016 at 16:01, Alex Gorbachev wrote: On Wed, Oct 5, 2016

Re: [ceph-users] Ceph + VMWare

2016-10-11 Thread Frédéric Nass
Hi Patrick, 1) Université de Lorraine (7,000 researchers and staff members, 60,000 students, 42 schools and education structures, 60 research labs). 2) RHCS cluster: 144 OSDs on 12 nodes for 520 TB raw capacity. VMware: 7 clusters (40 ESXi hosts). First need is to provide

Re: [ceph-users] Ceph + VMWare

2016-10-07 Thread Jake Young
Hey Patrick, I work for Cisco. We have a 200TB cluster (108 OSDs on 12 OSD Nodes) and use the cluster for both OpenStack and VMware deployments. We are using iSCSI now, but it really would be much better if VMware did support RBD natively. We present a 1-2TB Volume that is shared between 4-8 ES

Re: [ceph-users] Ceph + VMWare

2016-10-06 Thread Alex Gorbachev
On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry wrote: > Hey guys, > > Starting to buckle down a bit in looking at how we can better set up > Ceph for VMWare integration, but I need a little info/help from you > folks. > > If you currently are using Ceph+VMWare, or are exploring the option, > I'd

Re: [ceph-users] Ceph + VMWare

2016-10-06 Thread Oliver Dzombic
-D-70437 Stuttgart > Geschäftsführer: Daniel Schwager, Stefan Hörz - HRB Stuttgart 19870 > Tel: +49-711-849910-32, Fax: -932 - Mailto:daniel.schwa...@dtnet.de > >> -Original Message- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of

Re: [ceph-users] Ceph + VMWare

2016-10-05 Thread Daniel Schwager
tnet.de > -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Patrick McGarry > Sent: Wednesday, October 05, 2016 8:33 PM > To: Ceph-User; Ceph Devel > Subject: [ceph-users] Ceph + VMWare > > Hey guys, > > Startin

Re: [ceph-users] Ceph + VMWare

2016-10-05 Thread Oliver Dzombic
Hi Patrick, we are currently trying to get ceph running with it for a customer (means our stuff = cephfs, customer stuff = vmware, on ONE ceph cluster). Unluckily iSCSI sucks (one OSD fails = iSCSI locks up -> need to restart the iSCSI daemon on the ceph servers). NFS sucks (no natural HA). So if you ca

[ceph-users] Ceph + VMWare

2016-10-05 Thread Patrick McGarry
Hey guys, Starting to buckle down a bit in looking at how we can better set up Ceph for VMWare integration, but I need a little info/help from you folks. If you currently are using Ceph+VMWare, or are exploring the option, I'd like some simple info from you: 1) Company 2) Current deployment size

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-11 Thread Nick Fisk
> -Original Message- > From: Alex Gorbachev [mailto:a...@iss-integration.com] > Sent: 11 September 2016 03:17 > To: Nick Fisk > Cc: Wilhelm Redbrake ; Horace Ng ; > ceph-users > Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > > C

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-11 Thread Alex Gorbachev
*Subject:* Re: [ceph-users] Ceph + VMware + Single Thread Performance > > > > > > On Sun, Sep 4, 2016 at 4:48 PM, Nick Fisk wrote: > > > > > > *From:* Alex Gorbachev [mailto:a...@iss-integration.com] > *Sent:* 04 September 2016 04:45 > *To:* Nick Fisk > *

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-11 Thread Nick Fisk
From: Alex Gorbachev [mailto:a...@iss-integration.com] Sent: 11 September 2016 16:14 To: Nick Fisk Cc: Wilhelm Redbrake ; Horace Ng ; ceph-users Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance On Sun, Sep 4, 2016 at 4:48 PM, Nick Fisk mailto:n...@fisk.me.uk

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-11 Thread Alex Gorbachev
On Sun, Sep 4, 2016 at 4:48 PM, Nick Fisk wrote: > > > > > *From:* Alex Gorbachev [mailto:a...@iss-integration.com] > *Sent:* 04 September 2016 04:45 > *To:* Nick Fisk > *Cc:* Wilhelm Redbrake ; Horace Ng ; > ceph-users > *Subject:* Re: [ceph-users] Ceph + VMwa

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-10 Thread Alex Gorbachev
-on-nfs-vs.html ) Alex > > From: Alex Gorbachev [mailto:a...@iss-integration.com] > Sent: 04 September 2016 04:45 > To: Nick Fisk > Cc: Wilhelm Redbrake ; Horace Ng ; > ceph-users > Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > > > > &

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-04 Thread Nick Fisk
From: Alex Gorbachev [mailto:a...@iss-integration.com] Sent: 04 September 2016 04:45 To: Nick Fisk Cc: Wilhelm Redbrake ; Horace Ng ; ceph-users Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance On Saturday, September 3, 2016, Alex Gorbachev mailto:a...@iss

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-03 Thread Alex Gorbachev
> >> *Cc:* n...@fisk.me.uk ; >> Horace Ng > >; ceph-users < >> ceph-users@lists.ceph.com >> > >> *Subject:* Re: [ceph-users] Ceph + VMware + Single Thread Performance >> >> >> >> >> >> On Sunday, August 21, 2016, Wilhelm Red

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-03 Thread Alex Gorbachev
e a really good fit. > > > > Thanks for your very valuable info on analysis and hw build. > > > > Alex > > > > > > > Am 21.08.2016 um 09:31 schrieb Nick Fisk : > > >> -Original Message- > >> From: Alex Gorbachev [mail

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-31 Thread Nick Fisk
From: w...@globe.de [mailto:w...@globe.de] Sent: 31 August 2016 08:56 To: n...@fisk.me.uk; 'Alex Gorbachev' ; 'Horace Ng' Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance Nick, what do you think about Infiniband? I have read that with Infiniband the

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-31 Thread Nick Fisk
From: w...@globe.de [mailto:w...@globe.de] Sent: 30 August 2016 18:40 To: n...@fisk.me.uk; 'Alex Gorbachev' Cc: 'Horace Ng' Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance Hi Nick, here are my answers and questions... Am 30.08.16 um 19:

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-22 Thread Christian Balzer
Hello, On Mon, 22 Aug 2016 20:34:54 +0100 Nick Fisk wrote: > > -Original Message- > > From: Christian Balzer [mailto:ch...@gol.com] > > Sent: 22 August 2016 03:00 > > To: 'ceph-users' > > Cc: Nick Fisk > > Subject: Re: [ceph-us

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-22 Thread Nick Fisk
From: Alex Gorbachev [mailto:a...@iss-integration.com] Sent: 22 August 2016 20:30 To: Nick Fisk Cc: Wilhelm Redbrake ; Horace Ng ; ceph-users Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance On Sunday, August 21, 2016, Wilhelm Redbrake mailto:w...@globe.de

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-22 Thread Nick Fisk
> -Original Message- > From: Christian Balzer [mailto:ch...@gol.com] > Sent: 22 August 2016 03:00 > To: 'ceph-users' > Cc: Nick Fisk > Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > > > Hello, > > On Sun,

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-22 Thread Alex Gorbachev
your very valuable info on analysis and hw build. > > > > Alex > > > > > > > Am 21.08.2016 um 09:31 schrieb Nick Fisk : > > >> -Original Message- > >> From: Alex Gorbachev [mailto:a...@iss-integration.com] > >> Sent: 21 August 2016

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-21 Thread Christian Balzer
Hello, On Sun, 21 Aug 2016 09:57:40 +0100 Nick Fisk wrote: > > > > -Original Message- > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > > Christian Balzer > > Sent: 21 August 2016 09:32 > > To: ceph-users > &g

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-21 Thread Nick Fisk
From: Alex Gorbachev [mailto:a...@iss-integration.com] Sent: 21 August 2016 15:27 To: Wilhelm Redbrake Cc: n...@fisk.me.uk; Horace Ng ; ceph-users Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance On Sunday, August 21, 2016, Wilhelm Redbrake mailto:w...@globe.de

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-21 Thread Alex Gorbachev
2016 04:15 > >> To: Nick Fisk > > >> Cc: w...@globe.de ; Horace Ng >; ceph-users > > >> Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > >> > >> Hi Nick, > >> > >> On Thu, Jul 21, 2016 at 8:33 AM, Nick Fisk

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-21 Thread Nick Fisk
> -Original Message- > From: Wilhelm Redbrake [mailto:w...@globe.de] > Sent: 21 August 2016 09:34 > To: n...@fisk.me.uk > Cc: Alex Gorbachev ; Horace Ng ; > ceph-users > Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > > Hi Nick,

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-21 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Christian Balzer > Sent: 21 August 2016 09:32 > To: ceph-users > Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > > > Hello, > > O

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-21 Thread Christian Balzer
on.com] > >> Sent: 21 August 2016 04:15 > >> To: Nick Fisk > >> Cc: w...@globe.de; Horace Ng ; ceph-users > >> > >> Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > >> > >> Hi Nick, > >> > >> On Thu

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-21 Thread Brian ::
achev [mailto:a...@iss-integration.com] >> Sent: 21 August 2016 04:15 >> To: Nick Fisk >> Cc: w...@globe.de; Horace Ng ; ceph-users >> >> Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance >> >> Hi Nick, >> >> On Thu, Jul 21, 20

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-21 Thread Nick Fisk
> -Original Message- > From: Alex Gorbachev [mailto:a...@iss-integration.com] > Sent: 21 August 2016 04:15 > To: Nick Fisk > Cc: w...@globe.de; Horace Ng ; ceph-users > > Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > > Hi Nick, >

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-20 Thread Alex Gorbachev
Hi Nick, On Thu, Jul 21, 2016 at 8:33 AM, Nick Fisk wrote: >> -Original Message- >> From: w...@globe.de [mailto:w...@globe.de] >> Sent: 21 July 2016 13:23 >> To: n...@fisk.me.uk; 'Horace Ng' >> Cc: ceph-users@lists.ceph.com >> Subject:

Re: [ceph-users] ceph + vmware

2016-07-26 Thread Jake Young
On Thursday, July 21, 2016, Mike Christie wrote: > On 07/21/2016 11:41 AM, Mike Christie wrote: > > On 07/20/2016 02:20 PM, Jake Young wrote: > >> > >> For starters, STGT doesn't implement VAAI properly and you will need to > >> disable VAAI in ESXi. > >> > >> LIO does seem to implement VAAI prop

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Nick Fisk
From: Frédéric Nass [mailto:frederic.n...@univ-lorraine.fr] Sent: 22 July 2016 15:13 To: n...@fisk.me.uk Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph + vmware Le 22/07/2016 14:10, Nick Fisk a écrit : From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Frédéric Nass
On 22/07/2016 at 14:10, Nick Fisk wrote: *From:*ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf Of *Frédéric Nass *Sent:* 22 July 2016 11:19 *To:* n...@fisk.me.uk; 'Jake Young' ; 'Jan Schermer' *Cc:* ceph-users@lists.ceph.com *Subject:* Re: [ceph-user

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Nick Fisk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Frédéric Nass Sent: 22 July 2016 11:19 To: n...@fisk.me.uk; 'Jake Young' ; 'Jan Schermer' Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph + vmware Le 22/07/2016 11:4

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Frédéric Nass
On 22/07/2016 at 11:48, Nick Fisk wrote: *From:*Frédéric Nass [mailto:frederic.n...@univ-lorraine.fr] *Sent:* 22 July 2016 10:40 *To:* n...@fisk.me.uk; 'Jake Young' ; 'Jan Schermer' *Cc:* ceph-users@lists.ceph.com *Subject:* Re: [ceph-users] ceph + vmware On 22/07/201

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Nick Fisk
From: Frédéric Nass [mailto:frederic.n...@univ-lorraine.fr] Sent: 22 July 2016 10:40 To: n...@fisk.me.uk; 'Jake Young' ; 'Jan Schermer' Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph + vmware Le 22/07/2016 10:23, Nick Fisk a écrit :

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Frédéric Nass
On 22/07/2016 at 10:23, Nick Fisk wrote: *From:*ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf Of *Frédéric Nass *Sent:* 22 July 2016 09:10 *To:* n...@fisk.me.uk; 'Jake Young' ; 'Jan Schermer' *Cc:* ceph-users@lists.ceph.com *Subject:* Re: [ceph-user

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Nick Fisk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Frédéric Nass Sent: 22 July 2016 09:10 To: n...@fisk.me.uk; 'Jake Young' ; 'Jan Schermer' Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph + vmware Le 22/07/2016 09:4

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Frédéric Nass
On 22/07/2016 at 09:47, Nick Fisk wrote: *From:*ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf Of *Frédéric Nass *Sent:* 22 July 2016 08:11 *To:* Jake Young ; Jan Schermer *Cc:* ceph-users@lists.ceph.com *Subject:* Re: [ceph-users] ceph + vmware On 20/07/2016 21:20, Jake

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Nick Fisk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Frédéric Nass Sent: 22 July 2016 08:11 To: Jake Young ; Jan Schermer Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph + vmware Le 20/07/2016 21:20, Jake Young a écrit : On Wednesday, July 20

Re: [ceph-users] ceph + vmware

2016-07-22 Thread Frédéric Nass
On 20/07/2016 at 21:20, Jake Young wrote: On Wednesday, July 20, 2016, Jan Schermer > wrote: > On 20 Jul 2016, at 18:38, Mike Christie > wrote: > > On 07/20/2016 03:50 AM, Frédéric Nass wrote: >> >> Hi Mike, >> >> Thanks for the update o

Re: [ceph-users] ceph + vmware

2016-07-21 Thread Mike Christie
On 07/21/2016 11:41 AM, Mike Christie wrote: > On 07/20/2016 02:20 PM, Jake Young wrote: >> >> For starters, STGT doesn't implement VAAI properly and you will need to >> disable VAAI in ESXi. >> >> LIO does seem to implement VAAI properly, but performance is not nearly >> as good as STGT even with

Re: [ceph-users] ceph + vmware

2016-07-21 Thread Mike Christie
On 07/20/2016 02:20 PM, Jake Young wrote: > > For starters, STGT doesn't implement VAAI properly and you will need to > disable VAAI in ESXi. > > LIO does seem to implement VAAI properly, but performance is not nearly > as good as STGT even with VAAI's benefits. The assumption for the cause > is

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Nick Fisk
...@fisk.me.uk; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance Okay that should be the answer... I think it would be great to use Intel P3700 1.6TB as bcache in the iscsi rbd client gateway nodes. caching device: Intel P3700 1.6TB backing device
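A rough sketch of what that bcache layering on a gateway node could look like. Device names are hypothetical, udev is assumed to register the devices, and note that a writeback cache local to one gateway becomes part of that gateway's failure domain:

make-bcache -C /dev/nvme0n1                          # P3700 as the caching device
make-bcache -B /dev/rbd0                             # mapped RBD image as the backing device
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode
# the iSCSI target would then export /dev/bcache0 instead of /dev/rbd0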

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Nick Fisk
: 21 July 2016 14:05 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance That can not be correct. Check it at your cluster with dstat as i said... You will see at every node parallel IO on every OSD and journal Am 21.07.16 um 15:02 schrieb

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread w...@globe.de
. You get exactly the same problems with reading when you don’t set the readahead above 4MB. *From:*ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf Of *w...@globe.de *Sent:* 21 July 2016 14:05 *To:* ceph-users@lists.ceph.com *Subject:* Re: [ceph-users] Ceph + VMware + Single
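For reference, a hedged example of raising the readahead on a mapped RBD on the gateway (device name and values are illustrative; read_ahead_kb is in KB, --setra is in 512-byte sectors):

echo 16384 > /sys/block/rbd0/queue/read_ahead_kb     # 16 MB readahead
blockdev --setra 32768 /dev/rbd0                     # equivalent setting via blockdev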

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread w...@globe.de
lto:w...@globe.de ] Sent: 21 July 2016 13:23 To: n...@fisk.me.uk ; 'Horace Ng' Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance Okay and what is your plan now to speed up ? Now I have come up wi

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Jake Young
mance improve. > > > Am 21.07.16 um 14:33 schrieb Nick Fisk: > > -Original Message- > From: w...@globe.de [ > mailto:w...@globe.de ] > Sent: 21 July 2016 13:23 > To: n...@fisk.me.uk ; > 'Horace Ng' > > Cc: ceph-users@lists.ceph.com > > Subject

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Nick Fisk
t you will get really high numbers, unfortunately you can't do this with iSCSI though. > -Original Message- > From: w...@globe.de [mailto:w...@globe.de] > Sent: 21 July 2016 13:39 > To: n...@fisk.me.uk; 'Horace Ng' > Cc: ceph-users@lists.ceph.com > Subje

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread w...@globe.de
From: w...@globe.de [mailto:w...@globe.de] Sent: 21 July 2016 13:23 To: n...@fisk.me.uk; 'Horace Ng' Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance Okay and what is your plan now to speed up ? Now I have come up with a lower latency

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread w...@globe.de
ce Ng' Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance Okay and what is your plan now to speed up ? Now I have come up with a lower latency hardware design, there is not much further improvement until persistent RBD caching is implemented, as y

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Nick Fisk
From: Jake Young [mailto:jak3...@gmail.com] Sent: 21 July 2016 13:24 To: n...@fisk.me.uk; w...@globe.de Cc: Horace Ng ; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance My workaround to your single threaded performance issue was to increase the

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Nick Fisk
> -Original Message- > From: w...@globe.de [mailto:w...@globe.de] > Sent: 21 July 2016 13:23 > To: n...@fisk.me.uk; 'Horace Ng' > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > > Okay and what is

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Jake Young
---Original Message- > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com > ] On Behalf Of Horace > > Sent: 21 July 2016 10:26 > > To: w...@globe.de > > Cc: ceph-users@lists.ceph.com > > Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance &

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread w...@globe.de
...@lists.ceph.com] On Behalf Of w...@globe.de Sent: 21 July 2016 13:04 To: n...@fisk.me.uk; 'Horace Ng' Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance Hi, hmm I think 200 MByte/s is really bad. Is your cluster in production right now? It

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > w...@globe.de > Sent: 21 July 2016 13:04 > To: n...@fisk.me.uk; 'Horace Ng' > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Ceph + VMware + Si

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread w...@globe.de
lists.ceph.com] On Behalf Of Horace Sent: 21 July 2016 10:26 To: w...@globe.de Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance Hi, Same here, I've read some blog saying that vmware will frequently verify the locking on VMFS over iSCSI, hence i
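The locking chatter referred to here is most likely the VMFS ATS heartbeat. If that is the suspected culprit, VMware exposes a per-host switch to fall back to plain SCSI-reservation heartbeating on VMFS5; this is an assumption about what the poster meant, so test before relying on it:

esxcli system settings advanced set -i 0 -o /VMFS3/UseATSForHBOnVMFS5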

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Nick Fisk
iginal Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Horace > Sent: 21 July 2016 10:26 > To: w...@globe.de > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance > > Hi, > > Same her

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Horace
users@lists.ceph.com Sent: Thursday, July 21, 2016 5:11:21 PM Subject: [ceph-users] Ceph + VMware + Single Thread Performance Hi everyone, we see at our cluster relatively slow Single Thread Performance on the iscsi Nodes. Our setup: 3 Racks: 18x Data Nodes, 3 Mon Nodes, 3 iscsi Gateway Nodes with tgt

[ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread w...@globe.de
Hi everyone, we see relatively slow single-thread performance on the iSCSI nodes of our cluster. Our setup: 3 racks, 18x data nodes, 3 mon nodes, 3 iSCSI gateway nodes with tgt (rbd cache off). 2x Samsung SM863 enterprise SSD for journal (3 OSDs per SSD) and 6x WD Red 1TB per data node as
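A hedged way to quantify that single-thread number at the RBD layer, independent of tgt (pool and image names are hypothetical; fio must be built with the rbd ioengine):

fio --name=singlethread --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
    --rw=write --bs=4M --iodepth=1 --numjobs=1 --runtime=60 --time_based
# run the same job through the tgt-exported LUN (--ioengine=libaio --filename=/dev/sdX) to isolate the gateway overhead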

Re: [ceph-users] ceph + vmware

2016-07-20 Thread Mike Christie
On 07/20/2016 11:52 AM, Jan Schermer wrote: > >> On 20 Jul 2016, at 18:38, Mike Christie wrote: >> >> On 07/20/2016 03:50 AM, Frédéric Nass wrote: >>> >>> Hi Mike, >>> >>> Thanks for the update on the RHCS iSCSI target. >>> >>> Will RHCS 2.1 iSCSI target be compliant with VMWare ESXi client ? (or

Re: [ceph-users] ceph + vmware

2016-07-20 Thread Jake Young
On Wednesday, July 20, 2016, Jan Schermer wrote: > > > On 20 Jul 2016, at 18:38, Mike Christie > wrote: > > > > On 07/20/2016 03:50 AM, Frédéric Nass wrote: > >> > >> Hi Mike, > >> > >> Thanks for the update on the RHCS iSCSI target. > >> > >> Will RHCS 2.1 iSCSI target be compliant with VMWare

Re: [ceph-users] ceph + vmware

2016-07-20 Thread Jan Schermer
> On 20 Jul 2016, at 18:38, Mike Christie wrote: > > On 07/20/2016 03:50 AM, Frédéric Nass wrote: >> >> Hi Mike, >> >> Thanks for the update on the RHCS iSCSI target. >> >> Will RHCS 2.1 iSCSI target be compliant with VMWare ESXi client ? (or is >> it too early to say / announce). > > No HA

Re: [ceph-users] ceph + vmware

2016-07-20 Thread Mike Christie
On 07/20/2016 03:50 AM, Frédéric Nass wrote: > > Hi Mike, > > Thanks for the update on the RHCS iSCSI target. > > Will RHCS 2.1 iSCSI target be compliant with VMWare ESXi client ? (or is > it too early to say / announce). No HA support for sure. We are looking into non HA support though. > >

Re: [ceph-users] ceph + vmware

2016-07-20 Thread Frédéric Nass
Hi Mike, Thanks for the update on the RHCS iSCSI target. Will RHCS 2.1 iSCSI target be compliant with VMWare ESXi client ? (or is it too early to say / announce). Knowing that HA iSCSI target was on the roadmap, we chose iSCSI over NFS so we'll just have to remap RBDs to RHCS targets when i

Re: [ceph-users] ceph + vmware

2016-07-16 Thread Jake Young
On Saturday, July 16, 2016, Oliver Dzombic wrote: > Hi Jake, > > thank you very much both was needed, MTU and VAAI deactivated ( i hope > that wont interfere with vmotion or other features ). > > I changed now the MTU of vmkernel and vswitch. That solved this problem. Try turning VAAI back on a

Re: [ceph-users] ceph + vmware

2016-07-16 Thread Oliver Dzombic
Hi Jake, thank you very much, both were needed: MTU and VAAI deactivated (I hope that won't interfere with vMotion or other features). I changed the MTU of the vmkernel and vswitch now. That solved this problem. So I could make an ext4 filesystem and mount it. Running dd if=/dev/zero of=/mnt/8G_tes
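For anyone reproducing the VAAI part, the three primitives can be toggled per ESXi host with esxcli (0 = disabled). vMotion itself does not depend on VAAI, though cloning and zeroing lose their offload:

esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0   # XCOPY (clone offload)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0   # WRITE SAME (zero offload)
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0    # ATS locking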

Re: [ceph-users] ceph + vmware

2016-07-15 Thread Jake Young
I had some odd issues like that due to MTU mismatch. Keep in mind that the vSwitch and vmkernel port have independent MTU settings. Verify you can ping with large size packets without fragmentation between your host and iscsi target. If that's not it, you can try to disable VAAI options to see i
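A quick way to check the MTU end to end in both directions (a 9000-byte MTU leaves 8972 bytes of ICMP payload once headers are subtracted; addresses are placeholders):

vmkping -d -s 8972 <iscsi-target-ip>      # on the ESXi host, -d sets the don't-fragment bit
ping -M do -s 8972 <esxi-vmkernel-ip>     # on the Linux iSCSI gateway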

Re: [ceph-users] ceph + vmware

2016-07-15 Thread Oliver Dzombic
Hi, I am currently trying this out. My tgt config:

# cat tgtd.conf
# The default config file
include /etc/tgt/targets.conf
# Config files from other packages etc.
include /etc/tgt/conf.d/*.conf
nr_iothreads=128

# cat iqn.2016-07.tgt.esxi-test.conf
initiator-address ALL
scsi_

Re: [ceph-users] ceph + vmware

2016-07-15 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Oliver Dzombic > Sent: 15 July 2016 08:35 > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] ceph + vmware > > Hi Nick, > > yeah i understand the p

Re: [ceph-users] ceph + vmware

2016-07-15 Thread Oliver Dzombic
f >> Oliver Dzombic >> Sent: 12 July 2016 20:59 >> To: ceph-users@lists.ceph.com >> Subject: Re: [ceph-users] ceph + vmware >> >> Hi Jack, >> >> thank you! >> >> What has reliability to do with rbd_cache = true ? >> >>

Re: [ceph-users] ceph + vmware

2016-07-15 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Oliver Dzombic > Sent: 12 July 2016 20:59 > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] ceph + vmware > > Hi Jack, > > thank you! >

Re: [ceph-users] ceph + vmware

2016-07-12 Thread Oliver Dzombic
Hi Jack, thank you! What does reliability have to do with rbd_cache = true? I mean, aside from the fact that if a host powers down, the in-flight data is lost. Are there any special limitations / issues with rbd_cache = true and iscsi tgt? -- Mit freundlichen Gruessen / Best regards Oliver Dzombic
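For reference, the client-side cache being discussed is configured in ceph.conf on the tgt gateway, roughly along the lines of the hedged snippet below (the size value is purely illustrative):

[client]
    rbd cache = true
    rbd cache writethrough until flush = true   # stays write-through until the initiator issues a flush
    rbd cache size = 67108864                   # 64 MB, hypothetical sizing

With tgt's rbd backend this cache lives inside the tgtd process on the gateway, which is exactly why losing that host loses whatever was still in flight.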

Re: [ceph-users] ceph + vmware

2016-07-11 Thread Alex Gorbachev
Hi Oliver, On Friday, July 8, 2016, Oliver Dzombic wrote: > Hi, > > does anyone have experience how to connect vmware with ceph smart ? > > iSCSI multipath does not really worked well. > NFS could be, but i think thats just too much layers in between to have > some useable performance. > > Syste

Re: [ceph-users] ceph + vmware

2016-07-11 Thread Jake Young
I'm using this setup with ESXi 5.1 and I get very good performance. I suspect you have other issues. Reliability is another story (see Nick's posts on tgt and HA to get an idea of the awful problems you can have), but for my test labs the risk is acceptable. One change I found helpful is to run

Re: [ceph-users] ceph + vmware

2016-07-11 Thread Oliver Dzombic
Hi Mike, I was trying https://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/ : ONE target, exported from different OSD servers directly, to multiple VMware ESXi servers. A config looked like:

#cat iqn.ceph-cluster_netzlaboranten-storage.conf
driver iscsi
bs-type rbd
backing-store rbd/vmware-storag
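For readers trying to reproduce this, a targets.conf stanza for tgt with the rbd backing-store type typically looks like the sketch below. The <target> wrapper, the IQN and the full image name are assumptions (the preview above is truncated, and angle-bracket lines tend to be stripped by the archive's HTML rendering):

<target iqn.2016-07.ceph-cluster:vmware-storage>
    driver iscsi
    bs-type rbd
    backing-store rbd/vmware-storage    # pool/image of the RBD to export
    initiator-address ALL               # or restrict to the ESXi initiators
</target>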

Re: [ceph-users] ceph + vmware

2016-07-11 Thread Mike Christie
On 07/08/2016 02:22 PM, Oliver Dzombic wrote: > Hi, > > does anyone have experience how to connect vmware with ceph smart ? > > iSCSI multipath does not really worked well. Are you trying to export rbd images from multiple iscsi targets at the same time or just one target? For the HA/multiple t

Re: [ceph-users] ceph + vmware

2016-07-09 Thread Nick Fisk
Of Jan > Schermer > Sent: 08 July 2016 20:53 > To: Oliver Dzombic > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] ceph + vmware > > There is no Ceph plugin for VMware (and I think you need at least an > Enterprise license for storage plugins, much $$$). > The

Re: [ceph-users] ceph + vmware

2016-07-08 Thread Jan Schermer
There is no Ceph plugin for VMware (and I think you need at least an Enterprise license for storage plugins, much $$$). The "VMware" way to do this without the plugin would be to have a VM running on every host serving RBD devices over iSCSI to the other VMs (the way their storage appliances wo

[ceph-users] ceph + vmware

2016-07-08 Thread Oliver Dzombic
Hi, does anyone have experience with a smart way to connect VMware with Ceph? iSCSI multipath did not really work well. NFS could work, but I think that's just too many layers in between to get usable performance. Systems like ScaleIO have developed a VMware addon to talk to them. Is there som