When I look into the profiler, I see that the actual work with portables takes
only a relatively small fraction of the time. The only significant hotspot I
saw was query parsing, but we already discussed this in another topic and
Sergi created a ticket.

To improve performance further, we need to start working on
micro-optimizations, because I see that query execution produces lots of
garbage due to dozens of wrappers, primitive boxing, etc. Some of it comes
from portables, some from indexing. I do not think that working solely on
portables can give us a breakthrough in performance.
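To make the boxing point concrete, here is a minimal, hypothetical Java sketch (not taken from the Ignite code base; class and method names are made up) contrasting an accumulator that boxes keys and values with a primitive-based one that allocates no wrapper objects in the hot loop:

    // Hypothetical illustration of boxing garbage in a hot aggregation loop.
    import java.util.HashMap;
    import java.util.Map;

    public class BoxingGarbageDemo {
        // Boxed version: keys and values go through Integer/Long wrappers;
        // values outside the small-value cache are fresh allocations on every merge.
        static long sumBoxed(int[] keys) {
            Map<Integer, Long> acc = new HashMap<>();
            for (int k : keys)
                acc.merge(k, 1L, Long::sum);
            long total = 0;
            for (long v : acc.values())
                total += v;
            return total;
        }

        // Primitive version: a plain long[] accumulator, no wrapper objects at all.
        static long sumPrimitive(int[] keys, int maxKey) {
            long[] acc = new long[maxKey + 1];
            for (int k : keys)
                acc[k]++;
            long total = 0;
            for (long v : acc)
                total += v;
            return total;
        }

        public static void main(String[] args) {
            int[] keys = {1, 2, 2, 3, 3, 3};
            System.out.println(sumBoxed(keys));        // 6
            System.out.println(sumPrimitive(keys, 3)); // 6
        }
    }

The real query engine is of course more involved, but the same pattern (wrappers and autoboxing on every row) is what shows up as allocation pressure in the profiler.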

On Thu, Nov 5, 2015 at 12:50 AM, Dmitriy Setrakyan <dsetrak...@apache.org>
wrote:

> On Wed, Nov 4, 2015 at 10:25 AM, Vladimir Ozerov <voze...@gridgain.com>
> wrote:
>
> >
> > Also I measured query performance on some local benchmarks and got
> > acceptable results - queries are about 5-7% slower with portables than
> > with OptimizedMarshaller. Looks very promising to me provided that we
> > work with deserialized objects now.
>
>
> Vladimir, I don’t think we can treat these results as acceptable. So far,
> Ignite has been doing very well on all competitive benchmarks, and we
> cannot afford to start losing any of them.
>
> Now, I remember seeing emails about many more performance optimizations we
> can add, like aligning String representation with binary representation,
> etc. Do you think after adding all the optimizations we will still be
> slower or faster?
>
> D.
>