It should work fine. I've run Ignite on various non-Intel platforms,
including ARM. There were some issues in the past but modern versions work
well. But you do have to keep in mind that release testing for Ignite is
done on Intel platforms. Also, ARM can bring some surprises in terms of
perf
Hi Igniters,
I'll be giving a webinar titled Networking in Apache Ignite. We'll look at
Apache Ignite's networking components - Discovery and Communication - to see
how they work together to implement various networking functions.
The webinar will be held at 10 AM PT / 1 PM ET / 5 PM GMT. Detail
Hi,
This looks weird but with the right logs we should figure it out.
One thing that I don't like about these settings is the asymmetry of the
server's and client's timeouts.
The server will use clientFailureDetectionTimeout=30s when talking to the
client.
The client will use failureDetectionti
Hi,
It's true that currently you need to implement something for Continuous Queries
failover in your application code.
Continuous Queries have setInitialQuery API to help with that.
How it's supposed to work is: you define an initial query (e.g. an SQL query)
which fetches the data that's alrea
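The setInitialQuery pattern described above can be sketched roughly like this. It's a minimal sketch, not the poster's actual code; the cache name, key/value types, and the use of ScanQuery as the initial query are assumptions for illustration.

```java
// Sketch: catch up on existing data via the initial query, then keep
// receiving updates through the continuous query listener.
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class CqInitialQuerySketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Long, String> cache = ignite.getOrCreateCache("myCache"); // assumed name

        ContinuousQuery<Long, String> qry = new ContinuousQuery<>();

        // Fetch the data that's already in the cache when the query starts.
        qry.setInitialQuery(new ScanQuery<>());

        // Receive all subsequent updates.
        qry.setLocalListener(events ->
            events.forEach(e -> System.out.println("update: " + e.getKey())));

        // Note: closing the cursor stops the continuous query, so in real
        // failover code you'd keep it open for the lifetime of the listener.
        QueryCursor<Cache.Entry<Long, String>> cur = cache.query(qry);

        // Iterate the initial query results to process pre-existing data.
        for (Cache.Entry<Long, String> e : cur)
            System.out.println("existing: " + e.getKey());
    }
}
```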
Yes, you can upgrade from an older version to a newer one and keep the data, it
will just work.
You don't really need snapshots for that, although I assume snapshots would
also work.
> On 10 Dec 2020, at 16:20, xero wrote:
>
> Hi Dimitry and community,
> Is this still true? My intention is to
The options I see
1. Register a local listener on each node; you can call localListen() from a
broadcast() job or when the node starts.
2. Deploy a cluster-singleton service that calls remoteListen() in its
initialize().
I guess the first one will perform better.
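Option 1 could look roughly like the sketch below. The event type and listener body are assumptions; note that the chosen event types also have to be enabled in IgniteConfiguration for listeners to fire.

```java
// Sketch: broadcast a job that registers a local listener on every node.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.events.EventType;

public class LocalListenerSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Run on every node in the cluster; each node registers its own
        // local listener against its local Ignite instance.
        ignite.compute().broadcast(() -> {
            Ignite local = Ignition.localIgnite();

            local.events().localListen(evt -> {
                System.out.println("event: " + evt);
                return true; // keep the listener registered
            }, EventType.EVT_CACHE_OBJECT_PUT); // assumed event type
        });
    }
}
```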
Stan
From: maros.urbanec
Sen
The memory leak looks very much like
https://issues.apache.org/jira/browse/IGNITE-7918.
Can you check on 2.7?
Stan
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
First, this is a mailing list for Apache Ignite, although the results would be
more or less the same, as GridGain is based on Ignite.
Second, the question is too broad.
You shouldn’t really think about running on 1 core as Ignite is for scaling to
many cores and machines.
The performance will vary g
I've put a full answer on SO -
https://stackoverflow.com/questions/55752357/possible-memory-leak-in-ignite-datastreamer/55786023#55786023
.
In short, so far it doesn't look like a memory leak to me - just a
misconfiguration.
There is a memory pool in JVM for direct memory buffers which is by defau
Can you share your full configuration (Ignite config and JVM options) and
the server logs of Ignite?
Which version of Ignite do you use?
Can you confirm that on this version and configuration simply disabling
Ignite persistence removes the problem?
If yes, can you try running with walMode=NONE? It w
Hi,
You have to also fetch values to do a "compare-and-delete". Before deleting
each entry you check if it has been concurrently modified. If it was then
it's possible that the entry doesn't match your WHERE anymore.
So yes, for now deleting a large number of entries is heap-intensive.
It should
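The compare-and-delete pattern described above maps onto the two-argument JCache remove. This is a minimal sketch; cache name and types are assumptions.

```java
// Sketch: fetch the value first, then delete only if it hasn't changed
// concurrently (i.e. it still matches what the WHERE clause selected).
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class CompareAndDeleteSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Long, String> cache = ignite.getOrCreateCache("myCache"); // assumed name

        cache.put(1L, "old");

        // Fetch the current value - this is the heap-intensive part.
        String val = cache.get(1L);

        // Conditional remove: a no-op if the entry was concurrently modified.
        boolean removed = cache.remove(1L, val);
    }
}
```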
GridGain Snapshots allow you to take a backup on a live, working cluster.
If you can afford to stop cluster activity while the snapshot is being taken,
you can:
- Deactivate the cluster (e.g. control.sh --deactivate)
- Copy the persistence files (you would need work/binary_meta,
work/marshaller, work/d
Hi,
Please share
- Ignite version you're running
- Exact steps and events (a node was restarted, a client joined, etc)
- Logs of all three servers
Thanks,
Stan
On Mon, Aug 19, 2019 at 3:27 PM radha jai wrote:
> Hi ,
> Ignite being deployed on the kubernetes, there were 3 replicas of ignite
>
Hi Abhishek,
What's your Ignite version? Anything else to note about the cluster? E.g.
frequent topology changes (clients or servers joining and leaving, caches
starting and stopping)? What was the topology version when this happened?
Regarding the GC. Try adding -XX:+PrintGCApplicationStoppedTim
Hi,
It looks like the issue is that you're ending up sending an instance of
your gRPC server inside your service. This approach is generally incorrect.
What you should do is
- not pass gRPC to the service instance
- add an init() method implementation to your service
- in your init() start your gR
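The outline above could look like the following sketch. The server field type and the startGrpcServer() helper are hypothetical stand-ins for the actual gRPC server classes, which aren't shown in the thread.

```java
// Sketch: start the gRPC server inside Service.init() on the node where
// the service is deployed, instead of serializing a server instance.
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class MyGrpcService implements Service {
    // transient: the service instance is serialized and sent to the target
    // node, and a live server object would not survive that.
    private transient AutoCloseable grpcServer; // stand-in for your server type

    @Override public void init(ServiceContext ctx) throws Exception {
        // Build and start the server on the node the service landed on.
        grpcServer = startGrpcServer(); // hypothetical helper
    }

    @Override public void execute(ServiceContext ctx) {
        // Service body; the gRPC server runs on its own threads.
    }

    @Override public void cancel(ServiceContext ctx) {
        try {
            if (grpcServer != null)
                grpcServer.close();
        }
        catch (Exception ignored) {
            // Shutdown is best-effort.
        }
    }

    private AutoCloseable startGrpcServer() {
        // Placeholder: create, configure and start your actual server here.
        return () -> {};
    }
}
```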
Hi,
I'm thinking this could be related to differences in the binary marshaller
configuration.
Are you using Java thin client? What version? What is the cache key type?
Are you setting a BinaryConfiguration explicitly on the client or server?
Thanks,
Stan
On Fri, Aug 23, 2019 at 3:38 PM wrote:
Hi,
I believe support for MongoDB 4.x is already implemented in
https://issues.apache.org/jira/browse/IGNITE-10847.
Also, I believe Ignite doesn't require a specific version of MongoDB. Have
you tried to install the latest 3.4.x version?
Thanks,
Stan
On Sun, Aug 25, 2019 at 7:04 PM Ashfaq Ahamed
Hi,
AFAICS this is not about the *protocol*, this is about *implementations* of
the protocol. I've followed the links and found this matrix of vulnerable
technologies:
https://vuls.cert.org/confluence/pages/viewpage.action?pageId=56393752
From this matrix, Ignite uses only Node.js in WebConsole,
Hi,
In normal circumstances checkpoint is triggered on timeout, e.g. every 3
minutes (controlled by checkpointFrequency). So, the size of the checkpoint
is the amount of data written/updated in a 3-minute interval.
The best way to estimate it in your system is to enable data storage
metrics (DataS
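The two settings mentioned can be sketched like this; the 3-minute value comes from the example above, everything else is an assumption.

```java
// Sketch: configure checkpoint frequency and enable storage metrics
// to estimate checkpoint sizes.
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CheckpointConfigSketch {
    public static IgniteConfiguration config() {
        DataStorageConfiguration storage = new DataStorageConfiguration();

        // Checkpoint on timeout; 3 minutes, as in the example above.
        storage.setCheckpointFrequency(3 * 60 * 1000L);

        // Enable data storage metrics to observe how much is written
        // between checkpoints.
        storage.setMetricsEnabled(true);

        return new IgniteConfiguration().setDataStorageConfiguration(storage);
    }
}
```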
f these custom classes into the
> libs folder in ignite but that has also not helped.
>
> On Fri, Aug 23, 2019 at 6:56 PM Stanislav Lukyanov
> wrote:
>
>> Hi,
>>
>> It looks like the issue is that you're ending up sending an instance of
>> your gRPC serve
Hi,
What version do you use?
There was an issue with recycling pages between data and indexes which has
been fixed in 2.7 https://issues.apache.org/jira/browse/IGNITE-4958.
In AI 2.7 and later this should be working fine.
Stan
On Sat, Oct 26, 2019 at 5:22 PM yann Blazart wrote:
> Yes the data
The right answer to this is probably not to use getAll in such cases.
If you want to load data in batches then you should either split the keys
yourself or use Query APIs, like ScanQuery or SqlQuery.
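For example, a ScanQuery streams results in pages instead of materializing everything at once the way getAll does. Cache name, types, and page size below are assumptions.

```java
// Sketch: iterate a large cache in pages via ScanQuery rather than getAll.
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class BatchedLoadSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Long, String> cache = ignite.getOrCreateCache("myCache"); // assumed name

        ScanQuery<Long, String> qry = new ScanQuery<>();
        qry.setPageSize(512); // entries are fetched from servers in pages

        try (QueryCursor<Cache.Entry<Long, String>> cur = cache.query(qry)) {
            for (Cache.Entry<Long, String> e : cur) {
                // Process one entry at a time; the full result set is never
                // held in memory at once.
            }
        }
    }
}
```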
Stan
On Mon, Oct 28, 2019 at 10:36 PM Abhishek Gupta (BLOOMBERG/ 919 3RD A) <
agupta...@bloomberg
This is not exactly correct.
When you do an SQL query with only PARTITIONED tables, or with a mix of
PARTITIONED and REPLICATED, the data will be taken from the primary
partitions of PARTITIONED tables and *all* partitions of REPLICATED tables.
When you do an SQL query with only REPLICATED tables, th
It's best to have the number of partitions being a power of two, so better
to go with 32768 I think.
There are big clusters with hundreds of nodes out there, and they do use
large partition numbers sometimes - as large as 16k or 32k.
Note that it will bring some overhead on the metadata being stor
I believe that the correct answer to your question - don't do that.
The strength of distributed systems is that you have a number of identical
pieces which you can scale out virtually with no limits.
If your cluster is heterogeneous - i.e. all the nodes are different in size,
amount of data and po
Each node is supposed to add its own IP and port to the S3 bucket when it
starts. That said, I wouldn't check the cluster state based on the contents
of the bucket alone.
Check your logs for errors. Try using some tools (e.g. check out Web
Console - either the one in Ignite
https://apacheignite-too
First, 1700 TPS given your transaction structure is 17 simple
operations per second, which is quite substantial - especially if you're
doing that from a single thread / single ODBC client.
Second, note that TRANSACTIONAL_SNAPSHOT is in beta and is not ready for
production use. There are no cla
Hi,
Web Console requires the ignite-rest-http module to be enabled. It is not
enabled by default in either the Ignite binaries or the Docker image.
The steps that you've taken are done while the container is running - so,
AFTER the Ignite process has started. That's why copying the module has no
effect.
Try settin
There are multiple ways to configure a cache to use SQL. The easiest is to
use @QuerySqlField annotation. Check out this doc
https://www.gridgain.com/docs/8.7.6/developers-guide/SQL/sql-api#querysqlfield-annotation
.
On Tue, Nov 5, 2019 at 5:52 PM BorisBelozerov
wrote:
> I have 3 nodes, and I co
This message actually looks worrisome:
[2019-10-22 10:31:42,441][WARN ][data-streamer-stripe-3-#52][PageMemoryImpl] Parking
thread=data-streamer-stripe-3-#52 for timeout (ms)=771038
It means that Ignite's throttling algorithm has decided to put a thread to
sleep for 771 seconds.
Can you share
Not out of the box but you could use SQL or ScanQuery for that.
With SQL:
SELECT _key FROM mycache
(given that your cache is SQL-enabled).
With ScanQuery:
cache.query(new ScanQuery<K, V>(), Cache.Entry::getKey)
(substitute your cache's actual key and value types for K and V)
Stan
On Wed, Dec 4, 2019 at 2:36 AM
Ok, there is a lot to dig through here but let me try with establishing
simple things first.
1. If two nodes (client or server) have the same cache specified in the
configuration, the configs must be identical.
2. If one node has a cache configuration, it will be shared between all
nodes automatica
In Ignite a node can go into "segmented" state in two cases really:
1. A node was unavailable (sleeping, hanging in full GC, etc) for a long time
2. Cluster detected a possible split-brain situation and marked the node as
"segmented".
Yes, split-brain protection (in GridGain implementation and in
This is a very common pitfall with distributed systems - comparing 1 node
vs 3 nodes. In short, this is not correct to compare them.
When you write to one node each write does the following:
1) client sends the request to the server
2) server updates data
3) server sends the response to the client
Hi Igniters,
Tomorrow I'll be talking at an online meetup of the Bay Area In-Memory
Computing community. The subject is The Role and Specifics of Networking in
Distributed Systems. We'll use Apache Ignite's protocols as an example -
experienced Ignite users will guess that we'll be looking at D
I don't really have much to say except to try reducing/balancing the on-heap
cache size.
If you have a lot of objects on heap, you need to have a large heap,
obviously.
If you need to constantly add/remove on-heap objects, you'll have a lot of
work for GC.
Perhaps you can review your architecture to avoi
Hi Calvin,
It should work the same for all queries. Ideally OptimizedMarshaller shouldn’t
even be used (except for JDK classes).
How did you check which marshaller is used in each case?
Can you share the code of your POJO? Or perhaps a runnable reproducer?
Can you also share the logs/check th
[?:1.8.0_152]
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryResponseEntry.readExternal(GridCacheQueryResponseEntry.java:90)
~[ignite-core-2.3.0-clsa.20180130.59.jar:2.3.0-clsa.20180130.59]
Thanks,
Calvin
From: Stanislav Lukyanov [mailto:stanlukya...@gmail.com]
Sent: Monday, July 0
Hi Gregory,
> So I would need a function that returns a node id from a Cache Key then a
> function returning a node of the cluster give its id
I don’t quite get how getting a node for a key fits here (you’d need to know
some key then),
but ignite.affinity().mapKeyToNode() does this.
How abou
Ignite transactions support this with REPEATABLE_READ isolation level.
More info here: https://apacheignite.readme.io/docs/transactions
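A minimal sketch of such a transaction, assuming a TRANSACTIONAL cache and made-up names and types:

```java
// Sketch: reads inside a REPEATABLE_READ transaction are consistent -
// repeated reads of the same key return the same value.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class RepeatableReadSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        // Note: the cache atomicity mode must be TRANSACTIONAL for this
        // to take effect.
        IgniteCache<Long, String> cache = ignite.getOrCreateCache("txCache"); // assumed name

        try (Transaction tx = ignite.transactions().txStart(
            TransactionConcurrency.PESSIMISTIC,
            TransactionIsolation.REPEATABLE_READ)) {

            String v1 = cache.get(1L);
            String v2 = cache.get(1L); // same as v1, even under concurrent updates

            tx.commit();
        }
    }
}
```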
Stan
From: Prasad Bhalerao
Sent: 9 июля 2018 г. 14:50
To: user@ignite.apache.org
Subject: Does ignite support Read-consistent queries?
Read-consistent queries:
The functionality you’re looking for is generally called Rolling Upgrade.
Ignite doesn’t support clusters with mixed versions out of the box.
There are third-party solutions on top of Ignite, such as GridGain, that do
have that.
Thanks,
Stan
From: KR Kumar
Sent: 16 июля 2018 г. 12:44
To: user@ig
I’d look into calling control.sh or ignitevisorcmd.sh and parsing their output.
E.g. check that control.sh --cache can connect to the local node and return one
of your caches.
However, this check is not purely for the local node, as the command will
connect to the cluster as a whole.
A more loca
Most likely it’s either
https://issues.apache.org/jira/browse/IGNITE-8023
or
https://issues.apache.org/jira/browse/IGNITE-7753
Until these issues are fixed, avoid starting/restarting nodes while cluster
activation is in progress.
Thanks,
Stan
From: Calvin KL Wong, CLSA
Sent: 19 июля 2018 г. 5:
If you’re using GridGain, it would be better to contact their customer support.
Also, check out this docs page about queries in WebConsole:
https://apacheignite-tools.readme.io/docs/queries-execution
Stan
From: UmeshPandey
Sent: 19 июля 2018 г. 12:54
To: user@ignite.apache.org
Subject: Insert Qu
If you’re using GridGain, it would be better to contact their customer support.
Stan
From: UmeshPandey
Sent: 19 июля 2018 г. 12:59
To: user@ignite.apache.org
Subject: How to increase memory of Gridgrain database.
Can anyone tells, how to increase a memory of Gridgrain database?
I tried below sni
You can write INSERT the same way as you write SELECT.
Syntax for INSERT is described here:
https://apacheignite-sql.readme.io/docs/insert
Thanks,
Stan
From: UmeshPandey
Sent: 19 июля 2018 г. 13:43
To: user@ignite.apache.org
Subject: RE: Insert Query in Gridgrain WebConsole
In this blog insert
Hi,
I think all you need is to store (a+b)/c in a separate field of the key and
annotate it with @AffinityKeyMapped:
class Key {
    private long a, b, c;

    @AffinityKeyMapped
    private long affKey;

    public Key(long a, long b, long c) {
        this.a = a;
        this.b = b;
        this.c = c;
        this.affKey = (a + b) / c;
    }
}
Hi,
I’d suggest try upgrading from 2.2 to 2.6.
One thing that stands out in your configs is expiryPolicy settings.
expiryPolicy property was deprecated, you need to use expiryPolicyFactory
instead.
I happen to have recently shared a sample of setting it in an XML config on SO:
https://stackoverf
Random-LRU selects the minimum of 5 timestamps (5 pages, 1 timestamp for each
page).
Random-2-LRU selects the minimum of 10 timestamps (5 pages, 2 timestamps for
each page).
My advice is not to go that deep. Random-2-LRU is protected from the “one-hit
wonder” and has a very tiny overhead compar
Are you trying to use MS SQL tools to connect to Ignite?
I don’t think that’s possible – at least, Ignite doesn’t claim to support that.
Stan
From: wt
Sent: 24 июля 2018 г. 16:01
To: user@ignite.apache.org
Subject: odbc - conversion failed because data value overflowed
I have a SQL server instan
I don’t know how that MSSQL studio works, but it might be dependent on T-SQL or
some system views/tables specific for MSSQL.
DBeaver doesn’t use any DB-specific features and is known to work with Ignite.
Take a look at this page: https://apacheignite-sql.readme.io/docs/sql-tooling
Stan
From: wt
What do you mean by “execute select query on cache using affinity key”
and what is the problem you’re trying to solve?
Stan
From: Prasad Bhalerao
Sent: 25 июля 2018 г. 10:03
To: user@ignite.apache.org
Subject: ***UNCHECKED*** Executing SQL on cache using affinnity key
Hi,
Is there any way to ex
the sql using a
affinity key so that it gets executed only on a node which owns that data?
Thanks,
Prasad
On Wed, Jul 25, 2018 at 3:01 PM Stanislav Lukyanov
wrote:
What do you mean by “execute select query on cache using affinity key”
and what is the problem you’re trying to solve?
Stan
Please don’t send messages like “?”. Mailing list is not a chatroom, messages
need to be complete and meaningful.
Reproducer is a code snippet or project that can be run standalone to reproduce
the problem.
The behavior you see isn’t expected nor known, so we need to see it before we
can commen
Hi,
You don’t need to use DDL here.
It’s better to
- Define Key class to be like
class Key { [QuerySqlField] String CacheKey; [AffinityKeyMapped] long
AffinityKey; }
- set your Key and UserData to be indexed
cacheCfg.QueryEntities =
{
new QueryEntity(typeof(Key), typeo
Hi,
I guess this is the problem: https://issues.apache.org/jira/browse/IGNITE-8987
Stan
From: zhouxy1123
Sent: 2 августа 2018 г. 9:57
To: user@ignite.apache.org
Subject: in a four node ignite cluster,use atomicLong and process is stuck
incountdown latch
hi ,
in a four node ignite cluster,use a
Seems to be answered in a nearby thread:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-yarn-cache-store-class-not-found-tt23016.html
Stan
From: debashissinha
Sent: 28 июля 2018 г. 13:01
To: user@ignite.apache.org
Subject: Ignite client and server on yarn with cache read through
Hi ,
I h
FYI the issue https://issues.apache.org/jira/browse/IGNITE-8774 is now fixed,
the fix will be available in Ignite 2.7.
What's your version?
Do you use native persistence?
Stan
This is the commit https://github.com/apache/ignite/commit/bab61f1.
Fixed in 2.7.
Stan
With writeThrough an entry in the cache will never be "dirty" in that sense -
cache store will update the backing DB at the same time the cache update
happens.
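A write-through setup might be configured like this; MyCacheStore here is a hypothetical CacheStore implementation, and the cache name is an assumption.

```java
// Sketch: with writeThrough enabled, every cache update synchronously
// updates the backing DB via the cache store, so entries are never "dirty".
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteThroughSketch {
    public static CacheConfiguration<Long, String> config() {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");

        ccfg.setWriteThrough(true);
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStore.class));

        return ccfg;
    }

    // Hypothetical store; load/write/delete would talk to the backing DB.
    public static class MyCacheStore extends CacheStoreAdapter<Long, String> {
        @Override public String load(Long key) { return null; }
        @Override public void write(
            javax.cache.Cache.Entry<? extends Long, ? extends String> e) { }
        @Override public void delete(Object key) { }
    }
}
```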
From: monstereo
Sent: 14 августа 2018 г. 22:39
To: user@ignite.apache.org
Subject: Re: Eviction Policy on Dirty data
yes, using cachest
Hi,
The thing is that the PK index is currently created roughly as
CREATE INDEX T(_key)
and not
CREATE INDEX T(customer_id, date).
You can’t use the _key column in the WHERE clause directly, so the query
optimizer can’t use the index.
After the IGNITE-8386 is fixed the index will be cre
I see a node in the topology flapping up and down every minute in the
restart1.log:
TcpDiscoveryNode [id=d6e52510-3380-4258-8a8e-798640b1786c, addrs=[10.29.42.49,
127.0.0.1], sockAddrs=[/10.29.42.49:47500, /127.0.0.1:47500], discPort=47500,
order=596, intOrder=302, lastExchangeTime=153715439345
It definitely does.
IGNITE-7153 seems to be applicable for objects greater than 8 kb. Is that your
case?
If so then I guess that has to be the same issue.
Stan
From: Michael Fong
Sent: 9 октября 2018 г. 15:54
To: user@ignite.apache.org
Subject: BufferUnderflowException on GridRedisProtocolParser
Hi,
Clients should be able to connect to the cluster normally.
Please share logs if you’re still seeing this.
Stan
From: hulitao198758
Sent: 27 июля 2018 г. 14:32
To: user@ignite.apache.org
Subject: Re: The problem after the ignite upgrade
I opened the persistent storage cluster, version 2.3 a
Hi,
Generally, your initial thoughts on the expected latency are correct –
PRIMARY_SYNC allows you not to wait for the writes
to complete on the backups. However, some of the operations still have to be
completed on all nodes (e.g. acquiring key locks),
so increasing the number of backups does a
Hi,
No, the behavior you’re describing is not expected.
Moreover, there is a bunch of tests in Ignite that make sure the reads are
consistent during the rebalance,
so this shouldn’t be possible.
Can you create a small reproducer project and share it?
Thanks,
Stan
From: Shrikant Haridas Sonone
Hi,
If the issue is that the node is starting with a new persistent storage each
time, there could be some issue with the file system.
The algorithm to choose a persistent folder is
1) If IgniteConfiguration.consistentId is specified, use the folder of that name
2) Otherwise, check if there are any exis
Hi,
AFAICS Ignite doesn’t even use json4s itself. I assume it’s only in the
dependencies and binary distribution for Spark to work.
So, if Spark actually needs 3.2.X you can try using that.
You can remove/replace the Ignite’s json4s jar with the required one, or use
your-favorite-build-system’s
The call
setIndexedTypes(Long.class, Person.class)
will search for all `@QuerySqlField` fields in Person and create indexes for
those of them that have `index = true`.
For example, if your class looks like this

class Person {
    @QuerySqlField(index = true)
    private String name;
Hi,
Looks like a JVM bug.
The message means that bytecode generated for a lambda in the Ignite code is
structurally incorrect,
but the bytecode is generated in parts by the javac and JVM, so the issue must
be there.
I suggest you upgrade to the latest JDK 8 (currently 8u181) and see if it helps
Well, exactly what it says – Ignite doesn’t guarantee consistency between
CacheStore and Native Persistence.
Because of this you may end up with different data in the 3rd party DB and
Ignite’s persistence,
so using such configuration is not advised.
See this page
https://apacheignite.readme.io/d
Hi,
I’ve tried your test and it works as expected, with some partitions lost and
the final size being ~850 (~150 less than on the start).
Am I missing something?
Thanks,
Stan
From: Roman Novichenok
Sent: 2 октября 2018 г. 22:21
To: user@ignite.apache.org
Subject: Re: Partition Loss Policy optio
Hi,
Please share configurations and full logs from all nodes.
Stan
From: ApacheUser
Sent: 8 октября 2018 г. 17:49
To: user@ignite.apache.org
Subject: Spark Ignite Data Load failing On Large Cache
Hi,I am testing large Ignite Cache of 900GB, on 4 node VM(96GB RAM, 8CPU and
500GB SAN Storage) Sp
Hi,
Can’t say much about Java EE usage exactly but overall the idea is
1) Start an Ignite node named “foo”, in the same JVM as the application, before
Hibernate is initialized
2) Specify `org.apache.ignite.hibernate.ignite_instance_name=foo` in the
Hibernate session properties
3) Do NOT specify
Ignite 2.0 is FAR from being recent :)
Try Ignite 2.6. If it doesn’t work, share configuration, logs and code that
starts Ignite.
Thanks,
Stan
From: ignite_user2016
Sent: 9 октября 2018 г. 22:00
To: user@ignite.apache.org
Subject: Re: Ignite on Spring Boot 2.0
Small update -
I also upgraded i
In short, Ignite replaces H2’s storage level with its own.
For example, Ignite implements H2’s Index interface with its own off-heap data
structures underneath.
When Ignite executes an SQL query, it will ask H2 to process it, then H2 will
callback to Ignite’s implementations
of H2’s interfaces (
. Simpler
implementation could just raise exceptions on queries when policy is ..._SAFE
and some partitions are unavailable.
Thanks again,
Roman
On Tue, Oct 9, 2018 at 2:54 PM Stanislav Lukyanov
wrote:
Hi,
I’ve tried your test and it works as expected, with some partitions lost and
the final
Looks like you’re not actually starting Ignite there.
You need to either
- Provide Ignite configuration path via SpringCacheManager.configurationPath
- Provide Ignite configuration bean via SpringCacheManager.configuration
- Start Ignite manually in the same JVM prior to the SB app initialization
I
Well, you need to wait for the IGNITE-7153 fix then.
Or contribute it! :)
I checked the code, and it seems to be a relatively easy fix. One needs to
alter the GridRedisProtocolParser
to use ParserState in the way GridTcpRestParser::parseMemcachePacket does.
Stan
From: Michael Fong
Sent: 11 октя
It looks like your use case is having Ignite as a cache for HDFS, as described
here https://ignite.apache.org/use-cases/hadoop/hdfs-cache.html.
Try using this guide
https://apacheignite-fs.readme.io/docs/secondary-file-system.
Stan
From: Divya Darshan DD
Sent: 26 сентября 2018 г. 9:19
To: user@
I think this describes what you want:
https://apacheignite-fs.readme.io/docs/secondary-file-system
Stan
From: Divya Darshan DD
Sent: 26 сентября 2018 г. 13:51
To: user@ignite.apache.org
Subject: Load data into particular Ignite cache from HDFS
Can you tell me a method to load files from HDFS in
Hi,
Nope, it doesn’t work like that.
Names of fields in the Java class are always the same as the names of the
fields in BinaryObject.
Frankly, I don’t really see a strong use case for the customization you’re
asking for.
If it is just to support different naming conventions in different places
Hi,
// Sidenote: better not to ask two unrelated questions in a single email. It
complicates things if the threads grow.
Roughly speaking, REPLICATED cache is the same as PARTITIONED with an infinite
number of backups.
The behavior is supposed to always be the same. Some components cut corners
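The equivalence above can be written out in configuration terms, as a sketch with assumed cache names:

```java
// Sketch: REPLICATED behaves like PARTITIONED with backups covering
// every remaining node.
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheModeSketch {
    public static void main(String[] args) {
        // A REPLICATED cache keeps a full copy of the data on every node...
        CacheConfiguration<Long, String> replicated =
            new CacheConfiguration<Long, String>("replicatedCache")
                .setCacheMode(CacheMode.REPLICATED);

        // ...which is conceptually a PARTITIONED cache whose backup count
        // always covers all other nodes.
        CacheConfiguration<Long, String> partitioned =
            new CacheConfiguration<Long, String>("partitionedCache")
                .setCacheMode(CacheMode.PARTITIONED)
                .setBackups(Integer.MAX_VALUE);
    }
}
```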
No, seems there are no methods for that.
I assume it could be added to the metadata response. But why do you need it?
Stan
From: wt
Sent: 11 октября 2018 г. 15:12
To: user@ignite.apache.org
Subject: client probe cache metadata
Hi
The rest service has a meta method that returns fields and index
`limit` and `offset` should work, with usual semantics.
Thanks,
Stan
From: kcheng.mvp
Sent: 11 октября 2018 г. 18:59
To: user@ignite.apache.org
Subject: Can I use limit and offset these two h2 features for pagination
I know in h2 standalone mode, we can use *limit* and *offset*
features(functio
Refer to this:
https://apacheignite.readme.io/docs/baseline-topology#section-usage-scenarios
Stan
From: the_palakkaran
Sent: 9 октября 2018 г. 22:59
To: user@ignite.apache.org
Subject: Re: Configuration does not guarantee strict
consistencybetweenCacheStore and Ignite data storage upon restarts
Yep, `order by` is usually needed for `limit`, otherwise you’ll get random rows
of the dataset
as by default there is no ordering.
If I’m not mistaken, in Ignite these options work on both map and reduce steps.
E.g. `limit 100` will first take the 100 rows from each node, then combine them
in a t
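Pagination with these options might look like the sketch below; the table, columns, and page size are assumptions.

```java
// Sketch: LIMIT/OFFSET pagination over a SQL-enabled cache.
import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class PaginationSketch {
    public static List<List<?>> page(IgniteCache<?, ?> cache, int pageSize, int pageNum) {
        // ORDER BY makes pages deterministic; without it row order is undefined.
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT id, name FROM Person ORDER BY id LIMIT ? OFFSET ?")
            .setArgs(pageSize, pageSize * pageNum);

        return cache.query(qry).getAll();
    }
}
```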
You’re creating a new cache on each health check call and never
destroy them – of course, that leads to a memory leak; it’s also awful for
performance.
Don’t create a new cache each time. If you really want to check that cache
operations work,
use the same one every time.
Thanks,
Stan
Fr
Hi,
It is a rather lengthy thread and I can’t dive into details right now,
but AFAICS the issue now is making affinity key index to work with a secondary
index.
The important things to understand are
1) Ignite will only use one index per table
2) In case of a composite index, it will apply the co
Uhm, don’t have a tested example but it seems pretty trivial.
It would be something like
@Bean
public SpringCacheManager springCacheManager() {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setIgniteInstanceName("WebGrid");
    // set more Ignite parameters if
tree, then merge them.
Stan
From: eugene miretsky
Sent: 11 октября 2018 г. 20:58
To: user@ignite.apache.org
Subject: Re: Role of H2 datbase in Apache Ignite
Thanks!
So does it mean that CacheConfiguration.queryParallelism is really an H2
settings?
On Tue, Oct 9, 2018 at 4:27 PM Stanislav
Yes, sure.
From: Dave Harvey
Sent: 11 октября 2018 г. 23:59
To: user@ignite.apache.org
Subject: Re: Query 3x slower with index
"Ignite will only use one index per table"
I assume you mean "Ignite will only use one index per table per query"?
On Thu, Oct 11, 2018 at 1:55 P
Yes, there is a direct support for UUID.
If you don’t know where the error is coming from, please share the code and the
logs.
Stan
From: wt
Sent: 12 октября 2018 г. 13:00
To: user@ignite.apache.org
Subject: data streamer - failed to update keys (GUID)
hi
I just wanted to check something. I ha
There is an error “Failed to update index, incorrect key class”.
Any chance you’ve changed an integer field to a string one, or something like
that?
Changing field types is generally not supported.
Stan
From: wt
Sent: 12 октября 2018 г. 14:06
To: user@ignite.apache.org
Subject: RE: data streamer
Nope.
Here is the JIRA to add that: https://issues.apache.org/jira/browse/IGNITE-1683.
Seems like the functionality was broken at some point, and because of that it
was removed from Ignite.
Someone needs to address that and bring the API back to C#.
Stan
From: Hemasundara Rao
Sent: 12 октября 20
Hi,
Were you able to find the root cause of this?
If yes, what was it?
The error indicates a network connection issue, so I guess
the solution should be about the network configuration.
Stan
From: eugene miretsky
Sent: 21 августа 2018 г. 23:35
To: user@ignite.apache.org
Subject: Spark Dataframe
Hi,
If you want help, you have to explain what is your problem.
Sharing code is great, but people also need to understand what exactly you’re
trying to do
and what to look for.
Also, your code seems to contain private classes (com.inn.*) so no one will be
able to run it anyway.
Stan
From: shru
On your questions
> 1) How does one increase write throuput without increasing number of clients
> (the server nodes are underutilized at the moment)
Actually, adding more clients is the supposed way of increasing throughput if
servers have capacity.
> 2) We have use cases where we many have man
A guess: the value is being saved, but due to an issue with name or type
matching in the QueryEntity
SQL engine doesn’t return it.
Look for the problem in the cache config (queryEntities property), pay
attention to the names, etc.
Stan
From: wt
Sent: 15 октября 2018 г. 12:47
To: user@ignite.ap