Hello!
1. I don't think that AffinityKeyMapped is deprecated, but there are cases
when it is ignored :(
You can use the affinity_key clause in CREATE TABLE ... WITH.
2. If it's the same node for all keys, all processing will happen on that
node.
3. It depends on what you are trying to do.
4. I don't t
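The affinity_key clause mentioned in point 1 can be sketched like this (a hypothetical table; names are invented for illustration). Rows sharing the same CUSTOMER_ID value are colocated on one node, and note that the affinity column must be part of the primary key:

```sql
-- Hypothetical example: colocate orders by customer so that joins
-- and aggregations per customer can run node-locally.
CREATE TABLE orders (
  ID INT,
  CUSTOMER_ID INT,
  AMOUNT DECIMAL,
  PRIMARY KEY (ID, CUSTOMER_ID)
) WITH "template=partitioned, affinity_key=CUSTOMER_ID";
```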
Hello!
Please use Apache Ignite JIRA to file tickets:
https://issues.apache.org/jira/projects/IGNITE/issues
Does this issue reproduce on any Apache Ignite version? Please try a
2.9 nightly build:
https://ci.ignite.apache.org/buildConfiguration/Releases_NightlyRelease_RunApacheIgniteNightlyRel
You can see https://github.com/gridgain/gridgain/issues/1490
2020-10-13 0:00 GMT+08:00, LixinZhang :
> sqlOffloadingEnabled : true
>
> create table tests(
> ID int,
> CREATETIME TIMESTAMP(29,9),
> PRIMARY KEY (ID)
> ) WITH "template=partitioned, CACHE_NAME=tests";
>
> select AVG(EXTRACT(MILLISECOND from CREATETIME))
sqlOffloadingEnabled : true
create table tests(
ID int,
CREATETIME TIMESTAMP(29,9),
PRIMARY KEY (ID)
) WITH "template=partitioned, CACHE_NAME=tests";
select AVG(EXTRACT(MILLISECOND from CREATETIME))
from tests;
Failed to run reduce query locally. General error:
"java.lang.ArrayIndexOutOfBoundsException"
One per table, whenever the table definition/data changes
hourly/daily/weekly.
Please suggest if there is any other way to do joins without using caches?
Thank You
Swara
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hello!
1) Both the 3rd party and the native persistent store are always flushed (not
counting writeBehind). There's no option to force a flush on either one.
2) This sounds like a task for K8s operator (a separate tool), not Apache
Ignite API. You can check out the GridGain's operator progress.
Regards,
--
Hello!
How many caches do you have? How often are they created?
Regards,
--
Ilya Kasnacheev
Mon, 12 Oct 2020 at 13:35, swara :
> We are creating caches for each table and joining tables.
>
> Thank You
> Swara
>
>
>
>
Hi All,
Kindly let me know if Ignite provides the following REST endpoints:
1) To force flush data from cache to persistent store.
2) In a Kubernetes environment, to scale the number of baseline nodes up or
down.
Regards
Wasim Bari
Thanks, this is what I have ended up doing. However, it looks like
AffinityKeyMapper is deprecated?
I am adding an implementation of this (which returns the binary typename of
the key BinaryObject) - and this does seem to have the desired effect (e.g.
all keys with the same typename are marked as p
static class ICCall implements IgniteCallable {
    private static final long serialVersionUID = 4278959731940740185L;

    @IgniteInstanceResource
    private Ignite ignite;

    @SpringResource(resourceName = "testService")
    private T
We are creating caches for each table and joining tables.
Thank You
Swara
Hello!
Looks like you have an SQL query which executes for ages and ran out of
topology history.
That, or some other weird issue (you have minorTopVer of 250+ which is
unusual). Can you provide logs for your server nodes?
Do you have any short-lived caches which are created and destroyed
frequently?
Hello!
In this case you could use an affinity function that puts all these
entries on the same node, but it means you no longer get any
distribution benefits.
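In SQL terms the same trade-off can be sketched as follows (a hedged example with invented table and column names): giving every row the same affinity key value maps all entries to one partition, hence one node.

```sql
-- Every row is inserted with GROUP_ID = 1, so all entries land on the
-- same node. One node sees all the data, but scalability is lost.
CREATE TABLE events (
  ID INT,
  GROUP_ID INT,
  PAYLOAD VARCHAR,
  PRIMARY KEY (ID, GROUP_ID)
) WITH "template=partitioned, affinity_key=GROUP_ID";
```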
I don't think it is a good design if you expect the local listener to get a
tx worth of entries at once. Listener should ideally