Re: Creating multiple Ignite grids on same machine

2018-01-10 Thread Alexey Popov
Hi Raymond,

In your case you should configure:

1. different TcpDiscoverySpi local ports
2. different ports for TcpDiscoveryVmIpFinder (Vm = Static in .NET). You
should not use the default ipFinder.
3. different TcpCommunicationSpi local ports

Please see the sample Java XML configs below as a reference. You can do the
same with the Ignite.NET 2.3 configuration.

Sample cluster 1 cfg (the XML body was stripped from the archive; below is a
minimal reconstruction of the three settings above, with an illustrative
communication port):

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <!-- Non-default discovery port for cluster 1. -->
            <property name="localPort" value="48500"/>
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                    <property name="addresses">
                        <list>
                            <value>127.0.0.1:48500..48509</value>
                        </list>
                    </property>
                </bean>
            </property>
        </bean>
    </property>
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <!-- Illustrative non-default communication port. -->
            <property name="localPort" value="48100"/>
        </bean>
    </property>
</bean>

Sample cluster 2 cfg (same reconstruction, using the default port ranges):

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="localPort" value="47500"/>
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                    <property name="addresses">
                        <list>
                            <value>127.0.0.1:47500..47509</value>
                        </list>
                    </property>
                </bean>
            </property>
        </bean>
    </property>
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <property name="localPort" value="47100"/>
        </bean>
    </property>
</bean>

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Connection problem between client and server

2018-01-10 Thread Denis Mekhanikov
Hi Jeff.

Looks like my letter wasn't noticed by the developer community.

I sent a message to the dev list one more time:
http://apache-ignite-developers.2346864.n4.nabble.com/Irrelevant-data-in-discovery-messages-td25927.html

In the meantime, make sure that this is really the cause of the slow
discovery process. Try deploying the nodes in the same environment but
without the additional jar files on the classpath. Does that make discovery
faster?

Denis

Wed, Jan 10, 2018 at 8:39, Jeff Jiao :

> Hi Denis,
>
> Did the Ignite dev team give any feedback on this?
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Are services serialized ?

2018-01-10 Thread Mikael

Hi!

How are services started on a node? Say a node goes down, so a service
must be started on another node: will Ignite send the serialized service
object I used at deployment, or will it create a new instance of it on
the other node?


And are services serialized for any other reason? Do I need to mark
member variables as transient? Say, for example, I have a socket object as
a member of the service implementation that I use for something.


Mikael




Re: Are services serialized ?

2018-01-10 Thread Denis Mekhanikov
Hi Mikael!

Services, along with their *ServiceConfiguration*, are put into a replicated
system cache, so they are serialized at the moment of deployment and sent to
all nodes in the cluster.
When a service is deployed, it is deserialized.

So, if you have a field that stores a socket instance, it's better to make
it transient and create it in the *init()* method.
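
For example, a sketch of such a service (class and member names are
illustrative):

import java.io.IOException;
import java.net.Socket;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class MyService implements Service {
    // Sockets are not serializable, so keep them out of the serialized state.
    private transient Socket socket;

    @Override public void init(ServiceContext ctx) throws Exception {
        // Recreate transient state on the node the service is deployed to.
        socket = new Socket("localhost", 9999);
    }

    @Override public void execute(ServiceContext ctx) throws Exception {
        // Main service logic using the socket goes here.
    }

    @Override public void cancel(ServiceContext ctx) {
        try {
            socket.close();
        }
        catch (IOException ignored) {
            // Best-effort cleanup on undeploy.
        }
    }
}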

Denis

Wed, Jan 10, 2018 at 11:44, Mikael :

> Hi!
>
> How are services started on a node ? say a node goes down so a service
> must be started on another node, will it send the serialized service
> object I used at deployment or will it create a new instance of it in
> the other node ?
>
> And are services serialized for some other reasons ? do I need to set
> transient member variables ?, say for example I have a socket object as
> a member of the service implementation that I use for something ?
>
> Mikael
>
>
>


Re: How possible we can get the reason why the ignite shut down itself?

2018-01-10 Thread Denis Mekhanikov
Hi Aaron!

Does the whole node stop, or only a single cache?

Make sure that you don't call *Ignite.close()* or *IgniteCache.close()*
anywhere, and that you don't use them as resources in a try-with-resources
block.
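
For reference, this is the pattern to look for (a sketch; the config path and
cache name are illustrative):

// The node is stopped automatically when this block exits:
try (Ignite ignite = Ignition.start("ignite-config.xml")) {
    IgniteCache<String, Double> prices = ignite.getOrCreateCache("prices");
    prices.put("open", 100.0);
} // close() runs here; later cache calls fail with CacheStoppedException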

Logs may contain useful information about the node failure. Try analysing
them or attach them to the letter (please don't just paste the log into the
body; use a file attachment).
You may need to enable INFO/DEBUG logging. Here is the documentation on
Ignite logging: https://apacheignite.readme.io/docs/logging

Denis

Wed, Jan 10, 2018 at 7:21, aa...@tophold.com :

> Hi All,
>
> We have a cache node (only one node, not a cluster) with native persistence
> enabled; this cache is updated frequently.
>
> But not that frequently: we use this cache to aggregate the open/close and
> high/low prices, currently with only about <1000 updates per second.
>
> We use cache#invoke to update the price for a key every time.
>
> But every hour the cache just shuts down by itself, and we get
> CacheStoppedException: Fail to perform cache operation (cache is stopped).
>
> The underlying server actually has 64 GB of memory with 30+ GB still free,
> and GC is normal with no full GC triggered.
>
> The exception stack includes no specific reason why it shut down by itself.
> Is there any place that prints the reason why the node shut itself down?
>
>
>
> Regards
> Aaron
>
> The data storage configuration (the XML was stripped; a minimal
> reconstruction from the surviving fragments, property names assumed):
>
> <bean id="dataStorageConfiguration"
>       class="org.apache.ignite.configuration.DataStorageConfiguration">
>     <property name="storagePath" value="${persistent.store.path:/var/tmp/market/store}"/>
>     <property name="walArchivePath" value="${wal.archive.path:/var/tmp/market/store/wal/archive}"/>
>     <property name="walPath" value="${wal.store.path:/var/tmp/market/store/wal}"/>
> </bean>
>
> --
> aa...@tophold.com
>


Re: Transaction operations using the Ignite Thin Client Protocol

2018-01-10 Thread Pavel Tupitsyn
Hi Denis,

Yes, transactions in the thin client protocol are certainly very important; I
think we should add them soon (in 2.5).
Ticket: https://issues.apache.org/jira/browse/IGNITE-7369

And yes, the server node will handle everything; the client just performs
TX_START, TX_COMMIT and TX_ROLLBACK operations.

Pavel

On Wed, Jan 10, 2018 at 1:08 AM, Denis Magda  wrote:

> + dev list
>
> Igniters, Pavel,
>
> I think we need to bring support for key-value transactions to one of the
> future versions. As far as I understand, the server node a thin client is
> connected to will be the transaction coordinator, and the client will simply
> offload everything to it. What do you think?
>
> —
> Denis
>
> > On Jan 8, 2018, at 1:12 AM, kotamrajuyashasvi <
> kotamrajuyasha...@gmail.com> wrote:
> >
> > Hi
> >
> > I would like to perform Ignite Transaction operations from a C++ program
> > using the Ignite Thin Client Protocol. Is it possible to do so ? If this
> > feature is not available now, will it be added in future ?
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>


Re: Migrating from Oracle to Apache Ignite.

2018-01-10 Thread Andrey Mashenkov
Hi,

1. SQL will support transactions after the following ticket is resolved [1].
You can track its state via the "start watching this issue" link on the issue
page.

2. Connection pooling, failover and load balancing are not supported for
JDBC. Moreover, a JDBC connection is not thread safe.
Feel free to create a ticket for the features you need.

Most likely, c3p0 can be used for connection pooling once transaction
support is added [2].


[1] https://issues.apache.org/jira/browse/IGNITE-4191
[2]
https://stackoverflow.com/questions/40498191/apache-ignite-jdbc-driver-jdbc-connection-pool-options
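
For reference, wiring c3p0 over the thin JDBC driver would look roughly like
the sketch below (the URL and pool size are illustrative, and transactional
SQL would still be unavailable until [1] is resolved):

ComboPooledDataSource pool = new ComboPooledDataSource();
pool.setDriverClass("org.apache.ignite.IgniteJdbcThinDriver"); // throws PropertyVetoException
pool.setJdbcUrl("jdbc:ignite:thin://127.0.0.1");
pool.setMaxPoolSize(16);

// Each thread should borrow its own connection, since a single JDBC
// connection is not thread safe.
try (Connection conn = pool.getConnection()) {
    // execute statements as usual
}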


On Wed, Jan 10, 2018 at 6:05 AM, rizal123  wrote:

> Hi,
>
> I have a project/poc, about migrating database oracle into in memory apache
> ignite.
>
> First of all, this is my topology.
>
> In case the image is not showing: https://ibb.co/cbi5cR
>
> I have done the following:
> 1. Created a server node cluster and imported the Oracle schema into it.
> 2. Loaded data from Oracle into the server cluster using LoadCache.
> 3. From my application, changed the datasource to the Ignite cluster (just
> one IP address). Currently I am using the JDBC thin driver.
> 4. Started my application, and it's up and running well.
>
> I have the following problems:
> 1. The JDBC thin driver does not support transactional SQL. I really need
> this ticket to be fixed.
> 2. Multiple connection IP addresses for the JDBC thin driver, or a load
> balancer for it.
> 3. Automatic failover. I have tested 1 machine with a 3-node server cluster.
> If the first node (the one that was turned on first) goes down, the
> connection goes down too, even though 2 nodes are still alive.
>
> Please let me know if there is any solution...
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Adding custom processor in Ignite Node

2018-01-10 Thread kotamrajuyashasvi
Hi

I would like to start a custom processor on an Ignite node. After exploring
the source code, I created a custom processor class that extends
GridProcessorAdapter. I added the code to start the custom processor just
like the other processors in IgniteKernal.java, passing the
GridKernalContextImpl ctx object. The custom processor starts a separate
service on a separate thread. I built the project and was able to see that my
processor started when the Ignite node started.

This custom service uses separate threads which do certain processing of data
on the Ignite nodes in the cluster. The main operations performed by each
thread are: SQL queries, cache gets and puts, and Ignite transactions. I
tried executing a query, but the method gets stuck forever. I used
QueryCursor<List<?>> qryCur = ctx.query().querySqlFieldsNoCache(Sql, false);
where ctx is the GridKernalContextImpl object passed while starting the
processor.

I would like to know how to perform query, cache, and transaction operations
using the GridKernalContextImpl object. Can this object be used by multiple
threads safely?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ClassCastException using cache and continuous query

2018-01-10 Thread Alexey Popov
Hi Diego,

It seems that your error is related to different class loaders being used.

I don't have an idea why this happens, but please try cleaning your "work"
directory in the Ignite home (IGNITE_HOME) after the 1.8 -> 2.3 upgrade, or
set up a new IGNITE_HOME.

Please share your node configs and the IgniteFetchItem class if you still
face the issue.

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How possible we can get the reason why the ignite shut down itself?

2018-01-10 Thread aa...@tophold.com
Hi All,

My mistake: someone included an enterprise jar in the classpath by mistake,
which caused the automatic shutdown. We eventually found it in the log!



Regards
Aaron


aa...@tophold.com
 
From: aa...@tophold.com
Date: 2018-01-10 12:21
To: user
Subject: How possible we can get the reason why the ignite shut down itself?
Hi All, 

We have a cache node (only one node, not a cluster) with native persistence
enabled; this cache is updated frequently.

But not that frequently: we use this cache to aggregate the open/close and
high/low prices, currently with only about <1000 updates per second.

We use cache#invoke to update the price for a key every time.

But every hour the cache just shuts down by itself, and we get
CacheStoppedException: Fail to perform cache operation (cache is stopped).

The underlying server actually has 64 GB of memory with 30+ GB still free,
and GC is normal with no full GC triggered.

The exception stack includes no specific reason why it shut down by itself.
Is there any place that prints the reason why the node shut itself down?



Regards
Aaron

aa...@tophold.com


Index on a Long ?

2018-01-10 Thread Mikael

Hi!

How do I create an index on a cache key that is a Long? I can't use
annotations, and QueryEntity looks like it requires a class and field to set
an index.


Mikael




Re: Index on a Long ?

2018-01-10 Thread slava.koptilin
Hi Mikael,

You can specify indexed types via the CacheConfiguration#setIndexedTypes()
method.
For instance:

CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>();
ccfg.setIndexedTypes(Long.class, Person.class);

Another possible way is a DDL statement:

// Create table based on PARTITIONED template with one backup.
cache.query(new SqlFieldsQuery(
    "CREATE TABLE person (id LONG, name VARCHAR, city_id LONG, " +
    "PRIMARY KEY (id, city_id)) " +
    "WITH \"backups=1, affinity_key=city_id\"")).getAll();

// Create an index.
cache.query(new SqlFieldsQuery("CREATE INDEX ON Person (city_id)")).getAll();

[1]
https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-registering-indexed-types
[2] https://apacheignite-sql.readme.io/docs/create-index
[3]
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/sql/SqlDdlExample.java

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Index on a Long ?

2018-01-10 Thread Andrey Mashenkov
Hi Mikael,

There is no need to create an index on keys, as the Ignite key-value storage
already has a natural index for them underneath.
It should be enough to register the indexed types [1].

Would you please clarify the use case if there are any other questions?


[1]
https://apacheignite.readme.io/v1.8/docs/indexes#section-registering-indexed-types

On Wed, Jan 10, 2018 at 3:20 PM, Mikael  wrote:

> Hi!
>
> How do I create an index on a cache key that is a Long, I can't use
> annotations and the QueryEntity look like it requires a class and field to
> set an index ?
>
> Mikael
>
>
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Apache Ignite & unixODBC and truncating text

2018-01-10 Thread bagsiur
Ok, thank you very much for your time and reply.

So, if I understand correctly, this is a bug in Apache Ignite. How long will
it take to resolve and fix this problem?

I will track the progress :)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: dotnet thin client - multiple hosts?

2018-01-10 Thread Colin Green
Thanks.


Re: Apache Ignite best practice

2018-01-10 Thread Ilya Kasnacheev
Hello Sergey!

You are using "NOT IN" in your query. This may cause a performance drawback.
Using JOIN TABLE() is recommended instead, as per
https://apacheignite.readme.io/docs/sql-performance-and-debugging#section-sql-performance-and-usability-considerations

Not sure about DELETE. I guess it might be OK with DELETE.
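
If it helps, here is a sketch of the NOT IN rewrite (table and column names
follow the TEST table from your message; the excluded ids are passed as a
parameter array):

SqlFieldsQuery qry = new SqlFieldsQuery(
    "SELECT t.id FROM TEST t " +
    "LEFT JOIN TABLE(id VARCHAR = ?) excl ON t.id = excl.id " +
    "WHERE excl.id IS NULL")
    .setArgs(new Object[] { new Object[] {"id1", "id2", "id3"} });

// The inlined TABLE() of excluded ids replaces the NOT IN ('', '', ...) list.
cache.query(qry).getAll();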

Regards,

-- 
Ilya Kasnacheev

2018-01-09 14:01 GMT+03:00 Borisov Sergey :

> Hi,
> Sorry for my bad English.
> I need advice on configuring Ignite, which is used as a SQL grid.
> The task is rather simple: store realtime information about connections to
> the services and be able to search it quickly.
> Please tell me in what direction to diagnose and what the options are for
> optimizing performance?
> The workload in the production mode is expected to be about ~ 100-150k RPS
> and ~ 1 million rows in the cache.
>
> *Test Infrastructure:*
> 3 Ignite nodes (version 2.3) in kubernetes on 3 servers (4 CPUs, 16 GB RAM)
> *JVM_OPTS* = -Xms8g -Xmx8g -server -XX:+AlwaysPreTouch -XX:+UseG1GC
> -XX:+DisableExplicitGC -XX:MaxDirectMemorySize=1024M
> -XX:+ScavengeBeforeFullGC
> *IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE* = 1
> *IGNITE_QUIET* = false
>
> *Cache structure:*
> CREATE TABLE IF NOT EXISTS TEST
> (
> id varchar (8),
> g_id varchar (17),
> update_at bigint,
> tag varchar (8),
> ver varchar (4),
> size smallint,
> score real,
> PRIMARY KEY (id)
> ) WITH "TEMPLATE = PARTITIONED, CACHE_NAME = TEST,
> WRITE_SYNCHRONIZATION_MODE = FULL_ASYNC, BACKUPS = 0, ATOMICITY = ATOMIC";
> CREATE INDEX IF NOT EXISTS idx_g_id_v ON TEST (ver ASC, g_id ASC);
> CREATE INDEX IF NOT EXISTS idx_size ON TEST (size ASC);
> CREATE INDEX IF NOT EXISTS idx_update_at ON TEST (update_at DESC);
> CREATE INDEX IF NOT EXISTS idx_tag ON TEST (tag ASC);
>
> *Queries executed while the application is running:*
>
> 1) Updating row data (60% of the workload):
>    MERGE INTO TEST (id, g_id, update_at, tag, ver, size, score) VALUES ()
>
> 2) Removing rows (3% of the workload):
>    DELETE FROM TEST WHERE id = ?
>
> 3) Once a minute, removing stale rows (TTL):
>    DELETE FROM TEST WHERE update_at <= ?
>
> 4) Getting requested rows (37% of the workload):
>    ( SELECT a.k FROM (
>        SELECT id AS k, t.score AS s FROM TEST t
>        WHERE t.update_at >= ${u} AND t.ver = ${v} AND t.g_id = '${g}'
>          AND t.size >= ${cc1} AND t.size <= ${cc2} AND t.tag = `${t}`
>          AND id NOT IN ('', '', '', '', '')
>        ORDER BY RAND() LIMIT 64
>      ) a ORDER BY POWER(${pp} - a.s, 2) ASC LIMIT 16 )
>    UNION ALL
>    ( SELECT b.k FROM (
>        SELECT id AS k, t.score AS s FROM TEST t
>        WHERE t.update_at >= ${u} AND t.ver = ${v} AND t.g_id = '${g}'
>          AND t.size >= ${cc1} AND t.size <= ${cc2}
>          AND (t.tag <> `${t}` OR t.tag IS NULL)
>          AND id NOT IN ('', '', '', '', '')
>        ORDER BY RAND() LIMIT 64
>      ) b ORDER BY POWER(${pp} - b.s, 2) ASC LIMIT 16 )
>    LIMIT 16
>
> *The first iteration was through the REST API*:
> https://apacheignite.readme.io/docs#section-sql-fields-query-execute
> At <= 20k requests per minute, response times were 4ms for MERGE and 30ms
> for SELECT; over 20k: merge & select 300ms - *9ms*, then complete
> degradation and failure.
>
> *The second iteration was through JDBC and batches*:
> 1) every 3 seconds, 500 to 1000 rows:
>    MERGE INTO T VALUES (...), (...), ... (...);
> 2) every 3 seconds, 0 to 150 rows:
>    DELETE FROM T WHERE ID IN ('', '', ... '');
>
> The performance increase was approximately 2.5 - 3 times, which is very
> small.
> --
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-10 Thread Alexey Popov
Hi,

You are right: cache.putAll() can't evict the entries from the batch it is
working on, and you can get an Ignite OOME.
This is expected behavior, because putAll takes locks for all provided entry
keys. That is critical:
1) for transactional caches, and
2) for any caches backed by a 3rd-party persistence store.

There was an intention to optimize this behavior for atomic caches without a
cache store [1], but it seems it will not be implemented. So, you can rely on
this behavior staying as it is.

[1] https://issues.apache.org/jira/browse/IGNITE-514.
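
If you hit this, a common workaround is to split the batch yourself; a
minimal sketch (the method name, types and batch size are illustrative):

// Split one huge putAll() into smaller batches so eviction can keep up.
void putInBatches(IgniteCache<Long, String> cache, Map<Long, String> data, int batchSize) {
    Map<Long, String> batch = new HashMap<>(batchSize);

    for (Map.Entry<Long, String> e : data.entrySet()) {
        batch.put(e.getKey(), e.getValue());

        if (batch.size() == batchSize) {
            cache.putAll(batch); // locks only this batch's keys
            batch.clear();
        }
    }

    if (!batch.isEmpty())
        cache.putAll(batch);
}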

Thank you,
Alexey




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Adding custom processor in Ignite Node

2018-01-10 Thread afedotov
Hi,

1) Why do you want to implement your own processor?
2) Have you considered using Ignite's compute grid and other features
instead? 
If yes, what problems do you anticipate?

Kind regards,
Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


failed to find sql table for type:

2018-01-10 Thread kenn_thomp...@qat.com
Ignite.NET 2.3.0

I'm pulling data from another DB and loading it into an Ignite node started
under a .NET Core console app. After converting the data rows into binary
objects and putting them into the cache, I do a count as well as debug, and I
see the objects in the cache with the correct data.

I try pulling the objects out of the cache using SqlQuery and get the
exception below. I'm not sure what to check, but at one point yesterday I was
able to do it successfully. I'm not sure what I changed, and I have been
unable to walk back whatever change I thought I made.

Where should I dig to get this resolved? This is all running on a vanilla
Ignite node.


Exception has occurred: CLR/Apache.Ignite.Core.Common.IgniteException
An unhandled exception of type 'Apache.Ignite.Core.Common.IgniteException'
occurred in Apache.Ignite.Core.dll: 'Failed to find SQL table for type:
trebuchetsettings'
 Inner exceptions found, see $exception in variables window for more
details.
 Innermost exception Apache.Ignite.Core.Common.JavaException : class
org.apache.ignite.IgniteCheckedException: Failed to find SQL table for type:
trebuchetsettings
at
org.apache.ignite.internal.processors.platform.utils.PlatformUtils.unwrapQueryException(PlatformUtils.java:519)
at
org.apache.ignite.internal.processors.platform.cache.PlatformCache.runQuery(PlatformCache.java:1220)
at
org.apache.ignite.internal.processors.platform.cache.PlatformCache.processInStreamOutObject(PlatformCache.java:874)
at
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
Caused by: javax.cache.CacheException: class
org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to
find SQL table for type: trebuchetsettings
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:597)
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:368)
at
org.apache.ignite.internal.processors.platform.cache.PlatformCache.runQuery(PlatformCache.java:1214)
... 2 more
Caused by: class
org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to
find SQL table for type: trebuchetsettings
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSql(IgniteH2Indexing.java:1248)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$8.applyx(GridQueryProcessor.java:2068)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$8.applyx(GridQueryProcessor.java:2066)
at
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2445)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.queryDistributedSql(GridQueryProcessor.java:2065)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySql(GridQueryProcessor.java:2045)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:582)
... 4 more



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Adding custom processor in Ignite Node

2018-01-10 Thread kotamrajuyashasvi
Hi

Thanks for your response.

Actually I want to implement a custom thin client processor/server which will
process requests from multiple C++ thin clients that use TCP socket
communication. The client requests are to perform SQL queries, cache gets and
puts, and Ignite transaction operations.

I have explored different options, but they could not meet my requirements:
1. Ignite C++: since there will be a lot of clients and I'm restricted to
using a separate Ignite C++ client node for each C++ client (I need a
separate transaction context for each client), it is memory intensive. There
are also some other issues which could not be reproduced/replicated here.

2. ODBC: I need cache get and put operations and transactions, which are not
supported currently and are planned for version 2.5. The same is the case for
the thin client protocol.

Due to time constraints I want to try a custom thin client processor where a
new thread is created for each client connection to process custom
messages/requests, perform custom logic and the appropriate operations on the
grid, and send a custom message/response. Also, from a knowledge perspective,
I would like to know how Ignite processors work internally.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


3rd Party Persistence and two phase commit

2018-01-10 Thread Andrey Nestrogaev
Hi all! 

We have a cache "Accounts".
We use 3rd party persistence with write-through mode.

In the cache there are two accounts.
The account with id = 1 is located on node A and the account with id = 2 is
located on node B.
In a single transaction we want to transfer $100 from account 1 to account 2.

How does Ignite support ACID transactions in the described case?
What happens if the CacheStore on node A successfully executes the commit in
the overridden sessionEnd method, but when sessionEnd is executed on node B
an error occurs and the data is not committed in the database?

In this case, we get inconsistent data in the database, right?
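
For concreteness, the transfer we have in mind looks like this sketch (key
and value types are illustrative):

try (Transaction tx = ignite.transactions().txStart()) {
    IgniteCache<Integer, Double> accounts = ignite.cache("Accounts");

    double from = accounts.get(1);
    double to = accounts.get(2);

    accounts.put(1, from - 100.0);
    accounts.put(2, to + 100.0);

    // With write-through, the CacheStore is invoked as part of this commit.
    tx.commit();
}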

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 3rd Party Persistence and two phase commit

2018-01-10 Thread Denis Mekhanikov
Hi Andrey!

Actually, the commit to the 3rd party database will be performed from the
node that initiated the transaction.
So, there will be only one transaction on the backing database.
Otherwise it would be impossible to guarantee data consistency, as you
noticed.

Denis

Wed, Jan 10, 2018 at 18:58, Andrey Nestrogaev :

> Hi all!
>
> We have a cache "Accounts".
> We use 3rd Party Persistence with Write-Through mode
>
> In the cache there are two accounts.
> Account with id = 1 is located on node A and account with id = 2 is
> located
> on node B.
> In a single transaction we want to transfer the amount of $100 from account
> 1 to account 2.
>
> How Ignite in the described case supports ACID transactions?
> What happens if the CacheStore on node A successfully executes the commit
> in
> the overrided method sessionEnd
> and when the sessionEnd method is executed on node B, an error occurs and
> the data is not commit in the database.
>
> In this case, we get inconsistent data in the database, right?
>
> Thanks!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-10 Thread Larry Mark
Thanks for the quick response. I have observed similar behavior with 3rd
party persistence read-through IF I set indexed types for the cache.

Test case: load up the cache using put with 35,000 entries (keys 1 ->
35,000), then read every key using get(key).

This is the use case I want in my application, where I have an active subset
of my data in memory, and if a key is accessed that is not in memory, it is
read in from a Postgres database. I am only doing read-through, not
write-through, since there is a different path for the data to get into
Postgres.

I can see from the cache metrics (shown below) that I perform 34,011 reads,
of which 6,093 are hits and 27,918 are misses, and then I get the OOM error.
This only happens if indexed types are set on the cache. Is this expected
behavior? If I am not using SQL queries on the cache, only get and put, does
it matter if I do not set indexedTypes? Does it help or hurt performance in
any way?

Cache metrics and stack trace shown below.

CacheMetricsSnapshot [reads=34011, puts=35000, hits=6093, misses=27918,
txCommits=0, txRollbacks=0, evicts=0, removes=0, putAvgTimeNanos=190.56557,
getAvgTimeNanos=53.07989, rmvAvgTimeNanos=0.0, commitAvgTimeNanos=0.0,
rollbackAvgTimeNanos=0.0, cacheName=fubar, offHeapGets=0, offHeapPuts=0,
offHeapRemoves=0, offHeapEvicts=0, offHeapHits=0, offHeapMisses=0,
offHeapEntriesCnt=33726, heapEntriesCnt=2, offHeapPrimaryEntriesCnt=33726,
offHeapBackupEntriesCnt=0, offHeapAllocatedSize=0, size=33726,
keySize=33726, isEmpty=false, dhtEvictQueueCurrSize=-1, txThreadMapSize=0,
txXidMapSize=0, txCommitQueueSize=0, txPrepareQueueSize=0,
txStartVerCountsSize=0, txCommittedVersionsSize=0,
txRolledbackVersionsSize=0, txDhtThreadMapSize=0, txDhtXidMapSize=-1,
txDhtCommitQueueSize=0, txDhtPrepareQueueSize=0, txDhtStartVerCountsSize=0,
txDhtCommittedVersionsSize=-1, txDhtRolledbackVersionsSize=-1,
isWriteBehindEnabled=false, writeBehindFlushSize=-1,
writeBehindFlushThreadCnt=-1, writeBehindFlushFreq=-1,
writeBehindStoreBatchSize=-1, writeBehindTotalCriticalOverflowCnt=-1,
writeBehindCriticalOverflowCnt=-1, writeBehindErrorRetryCnt=-1,
writeBehindBufSize=-1, totalPartitionsCnt=1024, rebalancingPartitionsCnt=0,
keysToRebalanceLeft=0, rebalancingKeysRate=0, rebalancingBytesRate=0,
rebalanceStartTime=0, rebalanceFinishTime=0, keyType=java.lang.Object,
valType=java.lang.Object, isStoreByVal=true, isStatisticsEnabled=true,
isManagementEnabled=false, isReadThrough=true, isWriteThrough=false]


[14:46:05,572][ERROR][sys-#52][GridPartitionedSingleGetFuture] Failed to
get values from dht cache [fut=GridFutureAdapter [ignoreInterrupts=false,
state=DONE, res=class o.a.i.IgniteCheckedException: Not enough memory
allocated (consider increasing data region size or enabling evictions)
[policyName=RefData, size=22.0 MB], hash=315679498]]
class org.apache.ignite.IgniteCheckedException: Not enough memory allocated
(consider increasing data region size or enabling evictions)
[policyName=RefData, size=22.0 MB]
at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7252)
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:975)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.internal.mem.IgniteOutOfMemoryException:
Not enough memory allocated (consider increasing data region size or
enabling evictions) [policyName=RefData, size=22.0 MB]
at
org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.allocatePage(PageMemoryNoStoreImpl.java:292)
at
org.apache.ignite.internal.processors.cache.persistence.DataStructure.allocatePageNoReuse(DataStructure.java:117)
at
org.apache.ignite.internal.processors.cache.persistence.DataStructure.allocatePage(DataStructure.java:105)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$8400(BPlusTree.java:81)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.insertWithSplit(BPlusTree.java:2703)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.insert(BPlusTree.java:2665)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.access$2500(BPlusTree.java:2547)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Insert.run0(BPlusTree.java:411)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Insert.run0(BPlusTree.java:392)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4697)
at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4682)
at
org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandl

String as value problem ?

2018-01-10 Thread Mikael

Hi!

I must be doing something very wrong. I have a cache where I use a
String as the value, and Ignite goes bonkers on me when it tries to create
the cache:


SEVERE: Failed to initialize cache. Will try to rollback cache start 
routine. [cacheName=RTU_CFGS]
class org.apache.ignite.IgniteCheckedException: Failed to register query 
type: QueryTypeDescriptorImpl [cacheName=RTU_CFGS, name=String, 
schemaName=RTU_CFGS, tblName=STRING, fields={}, 
idxs={STRING__VAL_IDX=QueryIndexDescriptorImpl [name=STRING__VAL_IDX, 
type=SORTED, inlineSize=-1]}, fullTextIdx=null, keyCls=class 
java.lang.Long, valCls=class java.lang.String, 
keyTypeName=java.lang.Long, valTypeName=java.lang.String, 
valTextIdx=false, typeId=9, affKey=null, keyFieldName=null, 
valFieldName=null, obsolete=false]
    at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.registerType(IgniteH2Indexing.java:1709)
    at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.registerCache0(GridQueryProcessor.java:1512)
    at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart0(GridQueryProcessor.java:779)
    at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart(GridQueryProcessor.java:840)
    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCache(GridCacheProcessor.java:1113)
    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1816)
    at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:751)
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:882)
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:588)
    at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2279)
    at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)


All other caches work fine, but I have two that use the java.lang.String
class for the value and both of them do this. Am I doing something silly
here, or is it some problem with using a String as the value for the cache (I
guess it is related to indexing in H2 for some reason)? If I change to some
other class for the value, the caches work just fine.


Mikael




Re: Force flush of IGFS to secondary file system in DUAL_ASYNC mode

2018-01-10 Thread Juan Rodríguez Hortalá
Hi Ilya,

Thanks a lot for the detailed answer. It's nice to know there is a clear
path to achieve that flush.

Greetings,

Juan

On Mon, Jan 8, 2018 at 4:33 AM, ilya.kasnacheev 
wrote:

> Hello!
>
> After reviewing IGFS code, I think that you can do the following:
>
> You should save all file paths that are being migrated, and then call
> await(collectionWithAllFilePaths) on IgfsImpl. If it's a huge number of
> files, I imagine you can do this in batches.
>
> It will do the same synchronous wait that DUAL_SYNC would do, just from a
> different entry point. After await() returns you are safe to close IgfsImpl
> and shutdown your cluster.
>
> Note that I would like to have the same behaviour for
> IgfsImpl.close(cancel:
> false), but it's NOT there yet. I have filed
> https://issues.apache.org/jira/browse/IGNITE-7356 - do not hesitate to
> comment.
>
> Regards,
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: String as value problem ?

2018-01-10 Thread vkulichenko
Mikael,

First of all, the trace should contain the cause with more details. What
does it tell? If this doesn't help to figure out the reason for the failure,
please show the cache configuration.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Los Angeles Area Apache Ignite/Kubernetes Meetup

2018-01-10 Thread Dani Traphagen
Hi everyone,


On *January 31st* I will be speaking in Los Angeles at *Verizon Digital Media
Services* on using Apache Ignite with Kubernetes!


You can read all the meetup details here, along with the abstract
description. Free food and drinks will be provided.


Hope to see you there if you are in the area!


Cheers,
Dani


-- 
Dani Traphagen | d...@gridgain.com
Solutions Architect
*GridGain*


Re: Los Angeles Area Apache Ignite/Kubernetes Meetup

2018-01-10 Thread Denis Magda
cross-posting to the user list

> On Jan 10, 2018, at 3:17 PM, Dani Traphagen  wrote:
> 
> Hi everyone,
> 
> 
> *January 31st *I will be speaking in Los Angeles at *Verizon Digital Media
> Services* on using Apache Ignite with Kubernetes!
> 
> 
> You can read all the meetup details here
>  along
> with the abstract description. Free Food and Drinks will be provided.
> 
> 
> Hope to see you there if you are in the area!
> 
> 
> Cheers,
> Dani
> 
> 
> -- 
> Dani Traphagen | d...@gridgain.com
> Solutions Architect
> *GridGain*



Re: Migrating from Oracle to Apache Ignite.

2018-01-10 Thread rizal123
Hi Andrew,

Thanks for your reply.

Hope the ticket will make it into 2.4.

Next question, about replication: I have 3 server nodes on different
machines/IPs. How does Ignite replicate/distribute data between them, given
that my application only accesses one node?

Please let me know if there is something I missed...

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Migrating from Oracle to Apache Ignite.

2018-01-10 Thread Denis Magda
The data will be distributed evenly among the nodes. You can read more on
this here [1] or watch this video [2].

[1] https://apacheignite.readme.io/v2.3/docs/data-grid 

[2] https://www.youtube.com/watch?v=G22L2KW9gEQ
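
For example, a partitioned cache with one backup spreads primary and backup
copies across all 3 server nodes (a sketch; the cache name is illustrative):

CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setCacheMode(CacheMode.PARTITIONED); // each node owns a share of the partitions
ccfg.setBackups(1);                       // plus backup copies of other nodes' partitions
ignite.getOrCreateCache(ccfg);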


—
Denis

> On Jan 10, 2018, at 5:18 PM, rizal123  wrote:
> 
> Hi Andrew,
> 
> Thanks for your reply.
> 
> Hope the ticket will be on 2.4
> 
> Next question about replication. I have 3 node server with different
> machine/ip. How ignite replicate/distribution data between them? Whereas my
> application only access into one node.
> 
> Please let me know if there something I miss...
> 
> Thanks
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: 3rd Party Persistence and two phase commit

2018-01-10 Thread Denis Magda
Covered here:
http://gridgain.blogspot.com/2014/09/two-phase-commit-for-in-memory-caches.html 


—
Denis

> On Jan 10, 2018, at 8:27 AM, Denis Mekhanikov  wrote:
> 
> Hi Andrey!
> 
> Actually, commit to the 3rd party database will be performed from the node, 
> that initiated the transaction.
> So, there will be only one transaction on the backing database.
> Otherwise it is impossible to guarantee data consistency, as you noticed.
> 
> Denis
> 
> Wed, Jan 10, 2018 at 18:58, Andrey Nestrogaev :
> Hi all!
> 
> We have a cache "Accounts".
> We use 3rd Party Persistence with Write-Through mode
> 
> In the cache there are two accounts.
> Account width id = 1 is located on node A and account with id = 2 is located
> on node B.
> In a single transaction we want to transfer the amount of $100 from account
> 1 to account 2.
> 
> How Ignite in the described case supports ACID transactions?
> What happens if the CacheStore on node A successfully executes the commit in
> the overrided method sessionEnd
> and when the sessionEnd method is executed on node B, an error occurs and
> the data is not commit in the database.
> 
> In this case, we get inconsistent data in the database, right?
> 
> Thanks!
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/ 
> 



How to identify if the data returned from cache is partial data due to multiple node failures in partitioned cache

2018-01-10 Thread aMark
Hi,

We are using Ignite version 2.3, with persistent caches in partitioned mode
and 4 cluster nodes running. We have configured the caches to have 1 backup.

I understand that if more than one node fails at a time, the data present in
the live cluster may not be the complete data for a given cache.

In the above setup, when all four nodes are running, I get close to ~650K
key-value pairs for a cache. But if I bring down three nodes, then I get
close to ~300K key-value pairs for the same cache.

If I don't have the initial count of entries in the cache, I don't know if
the entries returned are a full set or a partial set.

Is there an API/configuration in Ignite to identify that a cache might not
have complete data in the cluster for the time being (due to any reason)?



Thanks,

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 3rd Party Persistence and two phase commit

2018-01-10 Thread Andrey Nestrogaev
Hi Denis, thanks for clarifying!

Did I understand you correctly that in any case all interactions with the
3rd party database will occur only on the initiating node, and the sessionEnd
method will be called only once, on the initiating node?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 3rd Party Persistence and two phase commit

2018-01-10 Thread Andrey Nestrogaev
Hi Denis, 

Yes, I have already read the article you mentioned.

But it shows an example where the primary data being changed is located on
the same node, at least as I understand it.
My original understanding was that each node creates its own connection to
the 3rd party database. But perhaps this works only in the case of
read-through, and distributed transactions use another approach.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/