Hi, thanks for the reply again!

1. @AffinityKeyMapped is not deprecated as you mentioned, but
AffinityKeyMapper is (it seems AffinityKeyMapper is used in places where
the annotation cannot be - e.g. our case). If we use the AFFINITY_KEY clause
in the table definition, we don't want to select a field of the table as the
key - instead we want to use the cache name. Can this be a string literal
here, e.g. AFFINITY_KEY='MY_CACHE', so that the same affinity key is
generated for every entry in the table?
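For reference, as far as I understand the AFFINITY_KEY parameter in Ignite's CREATE TABLE WITH clause has to name a column that is part of the primary key, not an arbitrary string literal. One possible workaround (just a sketch, not something I've verified) would be a dedicated column that the writer always fills with the same constant - the table and column names below are hypothetical:

```sql
-- Sketch: AFFINITY_KEY must reference a primary-key column, so a constant
-- affinity value would have to live in a column the writer always sets
-- to the same value (e.g. 'MY_CACHE') for every row.
CREATE TABLE MY_TABLE (
  ID VARCHAR,
  AFF VARCHAR,          -- writer inserts the same constant for every row
  PAYLOAD VARCHAR,
  PRIMARY KEY (ID, AFF)
) WITH "template=partitioned, affinity_key=AFF";
```

Whether that behaves sensibly with every key colocated on one node is exactly the question in point 2 below.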

2. "If it's the same node for all keys, all processing will happen on that
node" - This may be OK in our case. Are there any issues that may affect the
"correctness" of the data, as opposed to the performance of the processing?

3. "It depends on what you are trying to do." - we just want to be able to
write e.g. 2 records in a transaction via some writing process, and to read
them somewhere else as soon as they are written to the cluster, so they can
be used.
We could probably write some custom logic to wait for all entries to arrive
at the client and then batch them up - possibly by versioning them, or by
maintaining some other state about the transaction in a separate cache on
the cluster - but we were hoping there would be some way of doing this out
of the box with a distributed cache solution - e.g. 2 records are written in
a transaction, and the client is updated with those 2 records in one callback.
The docs for ContinuousQueryWithTransformer.EventListener imply this kind of
thing should be possible (e.g. "called after one or more entries have been
updated"), and the onUpdated method receives an iterable:

    public interface EventListener<T> {
        /**
         * Called after one or more entries have been updated.
         *
         * @param events The entries just updated that transformed with
         * remote transformer of {@link ContinuousQueryWithTransformer}.
         */
        void onUpdated(Iterable<? extends T> events);
    }
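In case it helps, here is a rough sketch of the kind of client-side batching logic we had in mind as a fallback. TxBatcher is a hypothetical class (not an Ignite API): it assumes the writer stores a transaction id and the expected record count alongside each entry, and it delivers the whole transaction in one callback once all entries have arrived at the client:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical client-side batcher: groups incoming entries by a
// transaction id until the expected count for that transaction arrives,
// then delivers the complete batch in a single callback.
class TxBatcher<V> {
    // Partial batches, keyed by transaction id.
    private final Map<String, List<V>> pending = new HashMap<>();
    private final Consumer<List<V>> onBatch;

    TxBatcher(Consumer<List<V>> onBatch) {
        this.onBatch = onBatch;
    }

    // Called once per entry as it arrives (e.g. from a continuous query
    // listener). expectedCount is the number of records written in the
    // transaction, assumed to be carried alongside each record.
    synchronized void onEntry(String txId, int expectedCount, V value) {
        List<V> batch = pending.computeIfAbsent(txId, k -> new ArrayList<>());
        batch.add(value);
        if (batch.size() == expectedCount) {
            pending.remove(txId);
            onBatch.accept(batch); // deliver the whole transaction at once
        }
    }
}
```

Entries from onUpdated could be fed into onEntry one at a time; the downstream consumer then only ever sees complete transactions. It obviously needs extra handling for timeouts, failed transactions, and redelivery, which is the state-keeping we were hoping to avoid.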


--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
