> On 18 Aug 2015, at 15:50, Mark Nelson <mnel...@redhat.com> wrote:
> 
> 
> 
> On 08/18/2015 06:47 AM, Nick Fisk wrote:
>> Just to chime in, I gave dm-cache a limited test but its lack of a proper 
>> write-back cache ruled it out for me. It only performs write-back caching on 
>> blocks already on the SSD, whereas I need something that works like a 
>> battery-backed RAID controller, caching all writes.
>> 
>> It's amazing the 100x performance increase you get with RBDs when doing 
>> sync writes if you give them something like just 1 GB of write-back cache 
>> with flashcache.
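For reference, a rough sketch of that kind of setup, assuming flashcache is already installed; the device paths and cache name below are placeholders, not anything from this thread:

```python
#!/usr/bin/env python3
# Sketch: carve out a ~1 GB write-back flashcache in front of a mapped RBD
# image. SSD_PART, BACKING and CACHE are hypothetical names; flashcache_create
# must be installed and this must run as root.
import subprocess

SSD_PART = "/dev/sdb1"   # small SSD partition used as the cache
BACKING  = "/dev/rbd0"   # the mapped RBD image being accelerated
CACHE    = "rbd0_wb"     # name of the resulting device-mapper device

# "-p back" selects write-back mode; "-s 1g" caps the cache at ~1 GB.
subprocess.run(
    ["flashcache_create", "-p", "back", "-s", "1g", CACHE, SSD_PART, BACKING],
    check=True,
)
print(f"Write-back cached device available at /dev/mapper/{CACHE}")
```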
> 
> For your use case, is it OK that data may live on the flashcache for some 
> amount of time before making it to Ceph to be replicated?  We've wondered 
> internally whether this kind of trade-off is acceptable to customers should 
> the flashcache SSD fail.
> 

Was it me pestering you about it? :-)
All my customers need this desperately - people don't care about having RPO=0 
seconds when all hell breaks loose.
People care about their apps being slow all the time, which is effectively an 
"outage".
I (the sysadmin) care about having consistent data, so that all I have to do is 
start up the VMs.

Any ideas how to approach this? I think even checkpoints (like reverting to a 
known point in the past) would be great and sufficient for most people...
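To make the checkpoint idea concrete, a rough sketch of periodic RBD snapshots plus a rollback is below; the pool/image/snapshot names are made up, and this only gives crash-consistent points in time, it does nothing for whatever is still sitting dirty in an SSD cache:

```python
#!/usr/bin/env python3
# Sketch of "checkpoints" via RBD snapshots. IMAGE and the snapshot naming are
# examples only; rolling back requires the image to be unmapped / the VM stopped.
import datetime
import subprocess

IMAGE = "rbd/vm-disk-01"   # pool/image, hypothetical

def checkpoint() -> str:
    """Create a timestamped snapshot and return its name."""
    snap = datetime.datetime.now().strftime("ckpt-%Y%m%d-%H%M%S")
    subprocess.run(["rbd", "snap", "create", f"{IMAGE}@{snap}"], check=True)
    return snap

def revert(snap: str) -> None:
    """Roll the image back to a known point in the past."""
    subprocess.run(["rbd", "snap", "rollback", f"{IMAGE}@{snap}"], check=True)

if __name__ == "__main__":
    print("created checkpoint", checkpoint())
```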


>> 
>> 
>>> -----Original Message-----
>>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>>> Jan Schermer
>>> Sent: 18 August 2015 12:44
>>> To: Mark Nelson <mnel...@redhat.com>
>>> Cc: ceph-users@lists.ceph.com
>>> Subject: Re: [ceph-users] any recommendation of using EnhanceIO?
>>> 
>>> I did not. Not sure why now - probably for the same reason I didn't
>>> extensively test bcache.
>>> I'm not a real fan of device mapper though, so if I had to choose I'd 
>>> still go for bcache :-)
>>> 
>>> Jan
>>> 
>>>> On 18 Aug 2015, at 13:33, Mark Nelson <mnel...@redhat.com> wrote:
>>>> 
>>>> Hi Jan,
>>>> 
>>>> Out of curiosity did you ever try dm-cache?  I've been meaning to give it 
>>>> a spin but haven't had the spare cycles.
>>>> 
>>>> Mark
>>>> 
>>>> On 08/18/2015 04:00 AM, Jan Schermer wrote:
>>>>> I already evaluated EnhanceIO in combination with CentOS 6 (and 
>>>>> backported 3.10 and 4.0 kernel-lt, if I remember correctly).
>>>>> It worked fine during benchmarks and stress tests, but once we ran DB2 on 
>>>>> it, it panicked within minutes and took all the data with it (almost 
>>>>> literally - files that weren't touched, like OS binaries, were b0rked and 
>>>>> the filesystem was unsalvageable).
>>>>> If you disregard this warning, the performance gains weren't that great 
>>>>> either, at least in a VM. It had problems when flushing to disk after 
>>>>> reaching the dirty watermark, and the block size has some 
>>>>> not-well-documented implications (not sure now, but I think it only 
>>>>> cached IO _larger_ than the block size, so if your database keeps 
>>>>> incrementing an XX-byte counter it will go straight to disk).
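One way to check that behaviour for yourself, nothing EnhanceIO-specific: run small synchronous writes against the cached device with fio and compare latency against writes larger than the cache block size. A sketch (the target path and sizes are placeholders):

```python
#!/usr/bin/env python3
# Sketch: compare sync-write latency at a small vs. large block size on a
# cached device, to see which writes actually land in the SSD cache.
# TARGET is a placeholder; fio must be installed, and this will write to it.
import subprocess

TARGET = "/dev/mapper/cached_dev"   # hypothetical cached block device

for bs in ("512", "64k"):
    subprocess.run(
        ["fio", "--name", f"syncwrite-{bs}", "--filename", TARGET,
         "--rw=randwrite", f"--bs={bs}", "--ioengine=sync", "--fsync=1",
         "--size=256m", "--runtime=30", "--time_based"],
        check=True,
    )
```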
>>>>> 
>>>>> Flashcache doesn't respect barriers (or does it now?) - if that's OK for 
>>>>> you then go for it; it should be stable, and I used it in the past in 
>>>>> production without problems.
>>>>> 
>>>>> bcache seemed to work fine, but I needed to
>>>>> a) use it for root
>>>>> b) disable and enable it on the fly (doh)
>>>>> c) make it non-persistent (flush it) before reboot - not sure if that was 
>>>>> possible either
>>>>> d) do all that in a customer's VM, and that customer didn't have a strong 
>>>>> technical background to be able to fiddle with it...
>>>>> So I haven't tested it heavily.
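For points b) and c), bcache does expose the relevant knobs through sysfs; a rough, untested sketch of flushing and detaching the cache before a reboot (assuming a single /dev/bcache0, run as root) might look like this:

```python
#!/usr/bin/env python3
# Sketch: drain a bcache write-back cache and detach it before a reboot,
# using the standard sysfs knobs. Assumes a single /dev/bcache0 device.
import time
from pathlib import Path

BCACHE = Path("/sys/block/bcache0/bcache")

# Switch to writethrough so no new dirty data accumulates.
(BCACHE / "cache_mode").write_text("writethrough\n")

# Wait for existing dirty data to drain to the backing device
# (the sysfs value is human-readable, e.g. "0.0k" when empty).
while (BCACHE / "dirty_data").read_text().strip() not in ("0", "0.0k"):
    time.sleep(1)

# Detach the cache set; the backing device keeps working, just uncached.
(BCACHE / "detach").write_text("1\n")
```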
>>>>> 
>>>>> Bcache should be the obvious choice if you are in control of the
>>>>> environment. At least you can cry on LKML's shoulder when you lose
>>>>> data :-)
>>>>> 
>>>>> Jan
>>>>> 
>>>>> 
>>>>>> On 18 Aug 2015, at 01:49, Alex Gorbachev <a...@iss-integration.com>
>>> wrote:
>>>>>> 
>>>>>> What about https://github.com/Frontier314/EnhanceIO?  Last commit 2
>>>>>> months ago, but no external contributors :(
>>>>>> 
>>>>>> The nice thing about EnhanceIO is that there is no need to change the 
>>>>>> device name, unlike bcache, flashcache, etc.
>>>>>> 
>>>>>> Best regards,
>>>>>> Alex
>>>>>> 
>>>>>> On Thu, Jul 23, 2015 at 11:02 AM, Daniel Gryniewicz <d...@redhat.com>
>>> wrote:
>>>>>>> I did some (non-Ceph) work on these, and concluded that bcache was 
>>>>>>> the best supported, most stable, and fastest.  This was ~1 year ago, 
>>>>>>> so take it with a grain of salt, but that's what I would recommend.
>>>>>>> 
>>>>>>> Daniel
>>>>>>> 
>>>>>>> 
>>>>>>> ________________________________
>>>>>>> From: "Dominik Zalewski" <dzalew...@optlink.net>
>>>>>>> To: "German Anders" <gand...@despegar.com>
>>>>>>> Cc: "ceph-users" <ceph-users@lists.ceph.com>
>>>>>>> Sent: Wednesday, July 1, 2015 5:28:10 PM
>>>>>>> Subject: Re: [ceph-users] any recommendation of using EnhanceIO?
>>>>>>> 
>>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>> I asked the same question a week or so ago (just search the mailing 
>>>>>>> list archives for EnhanceIO :) and got some interesting answers.
>>>>>>> 
>>>>>>> Looks like the project is pretty much dead since it was bought out by 
>>>>>>> HGST. Even their website has some broken links with regard to 
>>>>>>> EnhanceIO.
>>>>>>> 
>>>>>>> I’m keen to try flashcache or bcache (the latter has been in the 
>>>>>>> mainline kernel for some time).
>>>>>>> 
>>>>>>> Dominik
>>>>>>> 
>>>>>>> On 1 Jul 2015, at 21:13, German Anders <gand...@despegar.com>
>>> wrote:
>>>>>>> 
>>>>>>> Hi cephers,
>>>>>>> 
>>>>>>>   Has anyone out there implemented EnhanceIO in a production 
>>>>>>> environment? Any recommendations? Any perf output to share showing the 
>>>>>>> difference between using it and not?
>>>>>>> 
>>>>>>> Thanks in advance,
>>>>>>> 
>>>>>>> German

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
