Nick,

Can you explain why we would ever want to have a synchronous callback?
Aren’t all filter notifications supposed to be asynchronous, especially if
there is no performance degradation?

D.

On Thu, Apr 14, 2016 at 11:17 AM, Nikolay Tikhonov <ntikho...@gridgain.com>
wrote:

> The following code snippet shows how to make an asynchronous filter in a
> continuous query. The only configuration difference between sync and async
> filters is the annotation on the filter class.
>
> IgniteCache<Key, Value> cache = ...;
>
> ContinuousQuery<Key, Value> qry = new ContinuousQuery<>();
>
> qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(Filter.class));
>
> cache.query(qry);
>
> @IgniteAsyncCallback
> class Filter implements CacheEntryEventFilter<Key, Value> {
>     @IgniteInstanceResource
>     private Ignite ignite;
>
>     @Override public boolean evaluate(CacheEntryEvent<? extends Key, ? extends Value> evt) {
>         IgniteCache<Key, Value> cache = ignite.cache(...);
>
>         // Because this filter is annotated with @IgniteAsyncCallback, cache
>         // operations are allowed and safe here; without the annotation they
>         // could lead to a deadlock.
>         Value val = cache.get(...);
>         ...
>     }
> }
>
> The size of the thread pool used for executing callbacks can be configured
> via the IgniteConfiguration.setAsyncCallbackPoolSize(...) method.
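>
> A minimal sketch of that configuration (the pool size value here is
> arbitrary, just for illustration):
>
> IgniteConfiguration cfg = new IgniteConfiguration();
>
> // Callbacks marked with @IgniteAsyncCallback are executed in this pool.
> cfg.setAsyncCallbackPoolSize(16);
>
> Ignite ignite = Ignition.start(cfg);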
>
> On Thu, Apr 14, 2016 at 8:10 PM, Dmitriy Setrakyan <dsetrak...@apache.org>
> wrote:
>
> > Do we have a coding example for this functionality somewhere? It would be
> > nice to review the changes from a usability standpoint.
> >
> > On Thu, Apr 14, 2016 at 3:58 AM, Nikolay Tikhonov <ntikho...@gridgain.com> wrote:
> >
> > > We are close to completing the IGNITE-2004 ticket.
> > > As part of this ticket, the following changes were made to the public API:
> > > - if a callback has the @IgniteAsyncCallback annotation, it is run
> > > asynchronously
> > > - these callbacks are executed in a special pool (the callback thread pool),
> > > which is configured by IgniteConfiguration.asyncCallbackThreadPoolSize
> > >
> > > Any comments on this?
> > >
> > > On Wed, Mar 30, 2016 at 12:45 PM, Yakov Zhdanov <yzhda...@gridgain.com> wrote:
> > >
> > > > I think this approach works as long as the user does not initiate more
> > > > concurrent cache operations than MSG_QUEUE_SIZE, whose default is 1024
> > > > but which is still configurable.
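> > > >
> > > > A sketch of how that limit can be raised (assuming the MSG_QUEUE_SIZE in
> > > > question is TcpCommunicationSpi's message queue limit; the value 4096 is
> > > > arbitrary):
> > > >
> > > > IgniteConfiguration cfg = new IgniteConfiguration();
> > > >
> > > > TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
> > > >
> > > > // Backpressure is applied once this many unprocessed messages are
> > > > // queued for a connection.
> > > > commSpi.setMessageQueueLimit(4096);
> > > >
> > > > cfg.setCommunicationSpi(commSpi);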
> > > >
> > > > Thanks!
> > > > --
> > > > Yakov Zhdanov, Director R&D
> > > > *GridGain Systems*
> > > > www.gridgain.com
> > > >
> > > > 2016-03-30 11:44 GMT+03:00 Vladimir Ozerov <voze...@gridgain.com>:
> > > >
> > > > > Does it mean that if the cache update rate is greater than the filter
> > > > > execution rate, then at some point we will stop reading messages from the
> > > > > socket? If yes, then it seems we still cannot execute cache operations:
> > > > > 1) A filter starts a cache operation for a key. The current node is a
> > > > > backup for this key.
> > > > > 2) A cache message is sent to the primary node.
> > > > > 3) The primary sends a message back to the current node.
> > > > > 4) The message is never read because of backpressure. The cache operation
> > > > > and the filter never complete.
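> > > > >
> > > > > Roughly, the scenario assumes a filter like the one below, i.e. without
> > > > > the async annotation (a sketch only; the cache name and the Key/Value
> > > > > types are placeholders):
> > > > >
> > > > > class SyncFilter implements CacheEntryEventFilter<Key, Value> {
> > > > >     @IgniteInstanceResource
> > > > >     private Ignite ignite;
> > > > >
> > > > >     @Override public boolean evaluate(CacheEntryEvent<? extends Key, ? extends Value> evt) {
> > > > >         // Without @IgniteAsyncCallback this runs synchronously on an
> > > > >         // internal thread; if the reply from the primary node is stuck
> > > > >         // behind backpressure, this get() never returns and the thread
> > > > >         // never frees up.
> > > > >         IgniteCache<Key, Value> cache = ignite.cache("myCache");
> > > > >         Value val = cache.get(evt.getKey());
> > > > >         return val != null;
> > > > >     }
> > > > > }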
> > > > >
> > > > > Am I missing something?
> > > > >
> > > > > > On Wed, Mar 30, 2016 at 11:23 AM, Yakov Zhdanov <yzhda...@apache.org> wrote:
> > > > >
> > > > > > Vladimir,
> > > > > >
> > > > > > Communication should stop reading from the connection if there are too
> > > > > > many unprocessed messages. The sender will be blocked on putting a
> > > > > > message into the queue.
> > > > > >
> > > > > > --Yakov
> > > > > >
> > > > > > 2016-03-30 11:11 GMT+03:00 Vladimir Ozerov <voze...@gridgain.com>:
> > > > > >
> > > > > > > Guys,
> > > > > > >
> > > > > > > Can you explain how backpressure control is implemented? What if the
> > > > > > > event arrival speed is greater than the filter processing speed?
> > > > > > >
> > > > > > > On Wed, Mar 30, 2016 at 10:37 AM, Semyon Boikov <sboi...@gridgain.com> wrote:
> > > > > > >
> > > > > > > > Andrey,
> > > > > > > >
> > > > > > > > I agree that the current situation with threading in Ignite is very
> > > > > > > > inconvenient when user callbacks execute some non-trivial code. But
> > > > > > > > changing this to async dispatch is a huge refactoring; even changing
> > > > > > > > it just for continuous query callbacks is not an easy task.
> > > > > > > >
> > > > > > > > We can start with https://issues.apache.org/jira/browse/IGNITE-2004,
> > > > > > > > and if more user complaints arise we can think about changing other
> > > > > > > > parts of the system.
> > > > > > > >
> > > > > > > > For now we need decisions on these points:
> > > > > > > > - how to specify that a callback should be run asynchronously
> > > > > > > > (Nikolay suggested a marker interface IgniteAsyncCallback, or the
> > > > > > > > @IgniteAsyncCallback annotation)
> > > > > > > > - where these callbacks are executed; AFAIK Nikolay added a special
> > > > > > > > pool which is configured in IgniteConfiguration (something like
> > > > > > > > IgniteConfiguration.asyncCallbackThreadPoolSize)
> > > > > > > >
> > > > > > > > Regards
> > > > > > > >
> > > > > > > >
> > > > > > > > On Tue, Mar 29, 2016 at 10:45 PM, Andrey Kornev <andrewkor...@hotmail.com> wrote:
> > > > > > > >
> > > > > > > > > Vladimir, Igniters
> > > > > > > > >
> > > > > > > > > Here are my 2 cents.
> > > > > > > > >
> > > > > > > > > The current situation with threading when it comes to executing
> > > > > > > > > user callbacks -- the CQ filters (either local or remote), the CQ
> > > > > > > > > listeners, the event listeners, the messaging listeners, the entry
> > > > > > > > > processors (did I miss anything?) -- is pretty sad. The callbacks
> > > > > > > > > may get executed on a system pool's thread, the public pool's, the
> > > > > > > > > utility pool's, a discovery worker thread, an application thread,
> > > > > > > > > to name a few. It causes a lot of grief and suffering, hard-to-fix
> > > > > > > > > races, deadlocks and other bugs.
> > > > > > > > >
> > > > > > > > > I guess it's always possible to come up with a more or less
> > > > > > > > > reasonable explanation for such a predicament (which usually boils
> > > > > > > > > down to "It is so because this is how it's implemented"), but I,
> > > > > > > > > as a user, could not care less. I want consistency. I want all my
> > > > > > > > > callbacks (including Entry Processors!) to be executed on the
> > > > > > > > > public pool's threads, to be precise. This is not the first time I
> > > > > > > > > complain about this, and I really think it's time to fix this mess.
> > > > > > > > >
> > > > > > > > > For a good example of how to implement ordered async dispatch of
> > > > > > > > > callbacks on a large scale, one only needs to look at Akka (or
> > > > > > > > > Reactor, https://github.com/reactor/reactor). Coherence also
> > > > > > > > > managed to get it right (in my opinion, that is).
> > > > > > > > >
> > > > > > > > > Regards
> > > > > > > > > Andrey
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
