On Wed, Oct 1, 2014 at 9:21 AM, Mark Nelson <mark.nel...@inktank.com> wrote:
> On 10/01/2014 11:18 AM, Gregory Farnum wrote:
>>
>> Everything I'm aware of is part of the testing we're doing for
>> Giant. There is probably more work in the pipeline, but the fast
>> dispatch, sharded work queues, and sharded internal locking structures
>> that Somnath has discussed have all made it in.
>
>
> I seem to recall there was a deadlock issue or something with fast dispatch.
> Were we able to get that solved for Giant?

Fast dispatch is not enabled in librados, but I don't think most users
will be able to tell the difference on that end. If they can, it will be
switched on at some point during the Hammer development cycle. If the
change turns out to be small enough, we may backport it eventually (we
know how to go about it, but it would require more testing than we were
comfortable committing to at this stage of an LTS release).
-Greg
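
P.S. For anyone who hasn't followed Somnath's threads: "sharded work
queues" and "sharded internal locking" just mean splitting one big queue
and one big lock into many smaller ones, so that worker threads hash
their work to a shard and contend only on that shard's mutex rather than
on a single global one. The sketch below is purely illustrative (it is
not the actual OSD code, and the class and names are invented for this
example), but it shows the general shape of the technique:

#include <cstddef>
#include <deque>
#include <functional>
#include <mutex>
#include <vector>

// Illustrative only: a work queue split into N shards, each with its
// own mutex, so producers and consumers contend per shard instead of
// on one global lock.
class ShardedWorkQueue {
  struct Shard {
    std::mutex lock;
    std::deque<std::function<void()>> items;
  };
  std::vector<Shard> shards;

public:
  explicit ShardedWorkQueue(size_t nshards) : shards(nshards) {}

  void enqueue(size_t key, std::function<void()> work) {
    Shard &s = shards[key % shards.size()];  // e.g. key = hash of the PG
    std::lock_guard<std::mutex> g(s.lock);
    s.items.push_back(std::move(work));
  }

  bool try_dequeue(size_t shard_id, std::function<void()> &out) {
    Shard &s = shards[shard_id % shards.size()];
    std::lock_guard<std::mutex> g(s.lock);
    if (s.items.empty())
      return false;
    out = std::move(s.items.front());
    s.items.pop_front();
    return true;
  }
};

The real implementation is considerably more involved, but the reason it
helps on fast SSDs is the same: at high IOPS the bottleneck shifts from
the disk to lock contention, and sharding spreads that contention out.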

>
> Mark
>
>> -Greg
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>
>>
>> On Wed, Oct 1, 2014 at 7:07 AM, Andrei Mikhailovsky <and...@arhont.com>
>> wrote:
>>>
>>>
>>> Greg, are they going to be a part of the next stable release?
>>>
>>> Cheers
>>> ________________________________
>>>
>>> From: "Gregory Farnum" <g...@inktank.com>
>>> To: "Andrei Mikhailovsky" <and...@arhont.com>
>>> Cc: "Timur Nurlygayanov" <tnurlygaya...@mirantis.com>, "ceph-users"
>>> <ceph-us...@ceph.com>
>>> Sent: Wednesday, 1 October, 2014 3:04:51 PM
>>> Subject: Re: [ceph-users] Why performance of benchmarks with small blocks is extremely small?
>>>
>>> On Wed, Oct 1, 2014 at 5:24 AM, Andrei Mikhailovsky <and...@arhont.com>
>>> wrote:
>>>>
>>>> Timur,
>>>>
>>>> As far as I know, the latest master has a number of improvements for SSD
>>>> disks. If you check the mailing list discussion from a couple of weeks
>>>> back, you can see that the latest stable Firefly is not that well
>>>> optimised for SSD drives and IO is limited. However, changes are being
>>>> made to address that.
>>>>
>>>> I am quite surprised that you can get 10K IOPS, as in my tests I was not
>>>> getting over 3K IOPS on SSD disks that are capable of doing 90K IOPS.
>>>>
>>>> P.S. Does anyone know if the SSD optimisation code will be added to the
>>>> next maintenance release of Firefly?
>>>
>>>
>>> Not a chance. The changes enabling that improved throughput are very
>>> invasive and sprinkled all over the OSD; they aren't the sort of thing
>>> that one can backport, or that one could put on top of a stable
>>> release for any meaningful definition of "stable". :)
>>> -Greg
>>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>>
>>>
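
For anyone trying to reproduce the small-block numbers discussed above,
one way to generate 4 KB writes without a guest VM in the picture is to
drive librados directly. The sketch below is only an illustration, not a
proper benchmark: the pool name, object names, and op count are arbitrary
assumptions, and a serious test would keep many ops in flight via the
async API (as fio or rados bench does) rather than running at queue
depth 1.

// Minimal sketch: time synchronous 4 KB writes through librados.
// Assumes a reachable cluster described by /etc/ceph/ceph.conf, a pool
// named "rbd", and client.admin credentials; adjust to taste.
#include <rados/librados.hpp>
#include <chrono>
#include <iostream>
#include <string>

int main() {
  librados::Rados cluster;
  if (cluster.init("admin") < 0) return 1;            // client.admin
  cluster.conf_read_file("/etc/ceph/ceph.conf");
  if (cluster.connect() < 0) return 1;

  librados::IoCtx io;
  if (cluster.ioctx_create("rbd", io) < 0) return 1;  // assumed pool

  librados::bufferlist bl;
  bl.append(std::string(4096, 'x'));                  // one 4 KB block

  const int ops = 1000;
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < ops; ++i) {
    // One synchronous write per iteration: latency-bound, queue depth 1.
    if (io.write("bench-obj-" + std::to_string(i % 16), bl,
                 bl.length(), 0) < 0)
      return 1;
  }
  double secs = std::chrono::duration<double>(
      std::chrono::steady_clock::now() - start).count();
  std::cout << ops / secs << " IOPS at 4 KB, queue depth 1\n";
  return 0;
}

Build it with something like "g++ smallwrite.cc -lrados" and point it at
a test pool; at queue depth 1 the result is dominated by per-op latency,
which is exactly what the dispatch and locking changes discussed above
are meant to improve.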
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
