Re: [ceph-users] tgt and krbd

2015-03-17 Thread Nick Fisk

Re: [ceph-users] tgt and krbd

2015-03-17 Thread Mike Christie
On 03/15/2015 08:42 PM, Mike Christie wrote: > On 03/15/2015 07:54 PM, Mike Christie wrote: >> On 03/09/2015 11:15 AM, Nick Fisk wrote: >>> Hi Mike, >>> >>> I was using bs_aio with the krbd and still saw a small caching effect. I'm >>> not sure if it was on the ESXi or tgt/krbd page cache side, but

Re: [ceph-users] tgt and krbd

2015-03-15 Thread Mike Christie
On 03/15/2015 07:54 PM, Mike Christie wrote: > On 03/09/2015 11:15 AM, Nick Fisk wrote: >> Hi Mike, >> >> I was using bs_aio with the krbd and still saw a small caching effect. I'm >> not sure if it was on the ESXi or tgt/krbd page cache side, but I was >> definitely seeing the IOs being coalesced

Re: [ceph-users] tgt and krbd

2015-03-15 Thread Mike Christie
On 03/09/2015 11:15 AM, Nick Fisk wrote: > Hi Mike, > > I was using bs_aio with the krbd and still saw a small caching effect. I'm > not sure if it was on the ESXi or tgt/krbd page cache side, but I was > definitely seeing the IOs being coalesced into larger ones on the krbd I am not sure what y
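
A quick way to tell whether that caching is happening in the proxy node's page cache (rather than on the ESXi side) is to watch the dirty page counters on the tgt host while the write test runs. This is a rough check only, not something taken from the thread:

    # on the iSCSI proxy node, while the write test is running:
    watch -n1 "grep -E 'Dirty|Writeback' /proc/meminfo"
    # a steadily growing Dirty value means writes are being buffered in the
    # proxy's page cache before they reach the krbd device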

Re: [ceph-users] tgt and krbd

2015-03-09 Thread Nick Fisk
Hi Mike, I was using bs_aio with the krbd and still saw a small caching effect. I'm not sure if it was on the ESXi or tgt/krbd page cache side, but I was definitely seeing the IOs being coalesced into larger ones on the krbd device in iostat. Either way, it would make me potentially nervous to ru
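
The coalescing itself is easy to confirm with iostat against the mapped device: the average request size grows as smaller writes are merged. A minimal sketch; the pool, image and device names below are placeholders, not the ones used in the thread:

    # map the image through the kernel RBD client (maps to e.g. /dev/rbd0)
    rbd map rbd/iscsi-lun0
    # watch request sizes on the mapped device; a rising avgrq-sz
    # (average request size in sectors) shows IOs being merged before Ceph
    iostat -x 2 /dev/rbd0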

Re: [ceph-users] tgt and krbd

2015-03-07 Thread Steffen W Sørensen
On 06/03/2015, at 22.47, Jake Young wrote: > > I wish there was a way to incorporate a local cache device into tgt with > > librbd backends. > What about a RAM disk device like RapidDisk + cache in front of your RBD block > device > > http://www.rapiddisk.org/?page_id=15#rapiddisk > > /Steffe

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Mike Christie
On 03/06/2015 06:51 AM, Jake Young wrote: > > > On Thursday, March 5, 2015, Nick Fisk wrote: > > Hi All, > > Just a heads up after a day’s experimentation. > > I believe tgt with its default settings has a small write
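
For anyone wanting to experiment, the write-cache behaviour can be pinned down in targets.conf rather than left at the default. A minimal sketch, assuming a kernel-mapped device at /dev/rbd0 and a made-up IQN; check your tgt version's tgt-admin/targets.conf documentation before relying on it:

    <target iqn.2015-03.com.example:krbd-lun0>
        # export the kernel-mapped RBD device through the async I/O backing store
        backing-store /dev/rbd0
        bs-type aio
        # report no volatile write cache on this LUN to the initiators
        write-cache off
    </target>

After editing the file, tgt-admin --update ALL applies the change to a running tgtd.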

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Jake Young
On Friday, March 6, 2015, Steffen W Sørensen wrote: > > On 06/03/2015, at 16.50, Jake Young wrote: > > > > After seeing your results, I've been considering experimenting with > that. Currently, my iSCSI proxy nodes are VMs. > > > > I would like to build a few dedicated servers with fast SSDs

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Steffen W Sørensen
On 06/03/2015, at 16.50, Jake Young wrote: > > After seeing your results, I've been considering experimenting with that. > Currently, my iSCSI proxy nodes are VMs. > > I would like to build a few dedicated servers with fast SSDs or fusion-io > devices. It depends on my budget, it's hard t

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Nick Fisk
> Hi Jake, > > Good to see it’s not just me. > > I’m guessing that the fact you are doing 1MB writes means that the latency > difference is having a less noticeable impact on the overall write bandwidth. > What I have been discovering with Ceph + iSCSI is that due to all the extra > hops (client-
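
The block-size effect is straightforward to demonstrate with fio against the exported LUN: at 4k the throughput is dominated by per-IO round-trip latency, while at 1M the extra hops are amortised over far more data. A sketch only; the device path is a placeholder and the test overwrites it, so point it at a scratch LUN:

    # small sequential writes: bandwidth here is limited by per-IO latency
    fio --name=lat4k --filename=/dev/sdX --rw=write --bs=4k --direct=1 \
        --ioengine=libaio --iodepth=1 --runtime=30 --time_based
    # large writes: fewer round trips, so the added latency matters far less
    fio --name=bw1m --filename=/dev/sdX --rw=write --bs=1M --direct=1 \
        --ioengine=libaio --iodepth=1 --runtime=30 --time_based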

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Jake Young
bd sync writes which I suppose might explain the > default difference, but this should be the expected behaviour. > > Nick

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Nick Fisk
from the client. Nick On 06 March 2015 15:07, Jake Young wrote: My initiator is also VMware software iSCSI. I had my tgt iSCSI targets'

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Jake Young
> On Thursday, March 5, 2015, Nick Fisk wrote: > Hi All, > > Just a heads up after a day’s experimentation. > > I believ

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Nick Fisk
On Thursday, March 5, 2015, Nick Fisk wrote: Hi All, Just a heads up after a day’s

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Jake Young
On Thursday, March 5, 2015, Nick Fisk wrote: > Hi All, > > > > Just a heads up after a day’s experimentation. > > > > I believe tgt with its default settings has a small write cache when > exporting a kernel mapped RBD. Doing some write tests I saw 4 times the > write throughput when using tgt ai

[ceph-users] tgt and krbd

2015-03-05 Thread Nick Fisk
Hi All, Just a heads up after a day's experimentation. I believe tgt with its default settings has a small write cache when exporting a kernel mapped RBD. Doing some write tests I saw 4 times the write throughput when using tgt aio + krbd compared to tgt with the built-in librbd. After r
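
For reference, the two setups being compared would look roughly like this with tgtadm. This is a sketch under assumptions: the IQNs, pool and image names are invented, and the rbd backing store (variant 2) is only available when tgt is built with Ceph/librbd support:

    # variant 1: kernel-mapped RBD exported through the aio backing store
    rbd map rbd/iscsi-lun0        # maps to e.g. /dev/rbd0
    tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2015-03.com.example:krbd-aio
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --bstype aio --backing-store /dev/rbd0
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

    # variant 2: the same image exported through tgt's built-in librbd backend
    tgtadm --lld iscsi --op new --mode target --tid 2 -T iqn.2015-03.com.example:librbd
    tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 --bstype rbd --backing-store rbd/iscsi-lun0
    tgtadm --lld iscsi --op bind --mode target --tid 2 -I ALL

With variant 1 the data path goes through the kernel block layer (and its request merging) before reaching Ceph, while variant 2 keeps everything inside the tgt process via librbd, which is the difference the throughput comparison in this thread is measuring.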