Re: Timeout while running checkpoint

2018-02-12 Thread Vinokurov Pavel
How often does the "Skipping checkpoint" message occur in the logs?

2018-02-12 10:47 GMT+03:00 Josephine Barboza :

> No I haven’t overridden checkpointFreq value.
>
>
>
> *From:* Vinokurov Pavel [mailto:vinokurov.pa...@gmail.com]
> *Sent:* Monday, February 12, 2018 1:03 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Timeout while running checkpoint
>
>
>
> Hi,
>
>
>
> The timeout could be caused by the value of the
> PersistentStoreConfiguration#checkpointFreq parameter.
>
> Have you overridden the *checkpointFreq* config parameter?
>
>
>
> 2018-02-12 10:05 GMT+03:00 Josephine Barboza :
>
> Hi,
>
> I’m constantly seeing a lot of information logs after setting up a cluster
> in ignite of two nodes
>
>
>
> Skipping checkpoint (no pages were modified) [checkpointLockWait=0ms,
> checkpointLockHoldTime=1ms, reason='timeout']
>
>
>
> Why could the process be timing out? I am using persistent store
> configuration with v2.1.
>
>
>
> Thanks,
>
> Josephine
>
> *IMPORTANT NOTICE: This email and any files transmitted with it are
> confidential and intended solely for the use of the individual or entity to
> whom they are addressed. If you have received this email in error, please
> notify the system manager and/or the sender immediately.*
>
>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>



-- 

Regards

Pavel Vinokurov


RE: Timeout while running checkpoint

2018-02-12 Thread Josephine Barboza
3 mins on both nodes

2018-02-12 09:10:37  [db-checkpoint-thread-#33%nvIDNSGN7CR%] INFO  
GridCacheDatabaseSharedManager:463 - Skipping checkpoint (no pages were 
modified) [checkpointLockWait=0ms, checkpointLockHoldTime=3ms, reason='timeout']
2018-02-12 09:13:37  [db-checkpoint-thread-#33%nvIDNSGN7CR%] INFO  
GridCacheDatabaseSharedManager:463 - Skipping checkpoint (no pages were 
modified) [checkpointLockWait=0ms, checkpointLockHoldTime=2ms, reason='timeout']


From: Vinokurov Pavel [mailto:vinokurov.pa...@gmail.com]
Sent: Monday, February 12, 2018 2:23 PM
To: user@ignite.apache.org
Subject: Re: Timeout while running checkpoint

How often the "Skipping checkpoint" message occurred in logs?

2018-02-12 10:47 GMT+03:00 Josephine Barboza <josephine.barb...@nviz.com>:
No I haven’t overridden checkpointFreq value.

From: Vinokurov Pavel 
[mailto:vinokurov.pa...@gmail.com]
Sent: Monday, February 12, 2018 1:03 PM
To: user@ignite.apache.org
Subject: Re: Timeout while running checkpoint

Hi,

The timeout could be caused by the value of the
PersistentStoreConfiguration#checkpointFreq parameter.
Have you overridden the checkpointFreq config parameter?

2018-02-12 10:05 GMT+03:00 Josephine Barboza <josephine.barb...@nviz.com>:
Hi,
I’m constantly seeing a lot of information logs after setting up a cluster in 
ignite of two nodes

Skipping checkpoint (no pages were modified) [checkpointLockWait=0ms, 
checkpointLockHoldTime=1ms, reason='timeout']

Why could the process be timing out? I am using persistent store configuration 
with v2.1.

Thanks,
Josephine



--

Regards

Pavel Vinokurov



--

Regards

Pavel Vinokurov



Re: Versioning services

2018-02-12 Thread colinc
Thanks for this. I'll keep an eye on that ticket - it seems to be exactly
what we are looking for.

In the meantime, we think we have a work-around. Does the following sound
viable, or do you think it might it cause problems?

* Create a custom intercepting ClassLoader
* Start Ignite by directly calling IgnitionEx.start(URL springCfgUrl,
@Nullable ClassLoader ldr)
* Create the service instance using the new ClassLoader and deploy the
service as a node singleton.
* Ignite re-instantiates the service - but this is now handled by our
intercepting ClassLoader too. The ClassLoader ensures that the correct
version of the service class is loaded from the appropriate jar.

For the moment, we will use the service name to distinguish between versions
- though, as per the jira ticket, an explicit version number would be
welcome too.
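A minimal sketch of such an intercepting ClassLoader; the class, package prefix, and jar handling here are hypothetical, not the actual implementation:

```java
import java.net.URL;
import java.net.URLClassLoader;

// Classes under a configured package prefix are loaded from a
// version-specific jar; everything else is delegated to the parent.
public class VersionedServiceClassLoader extends ClassLoader {
    private final String interceptPrefix;
    private final URLClassLoader versionedJarLoader;

    public VersionedServiceClassLoader(ClassLoader parent, String interceptPrefix, URL versionedJar) {
        super(parent);
        this.interceptPrefix = interceptPrefix;
        // Child loader with a null parent so intercepted lookups cannot
        // fall back to the default classpath.
        this.versionedJarLoader = new URLClassLoader(new URL[] { versionedJar }, null);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        if (name.startsWith(interceptPrefix))
            return versionedJarLoader.loadClass(name); // versioned service classes

        return super.loadClass(name, resolve); // normal parent-first delegation
    }
}
```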

Regards,
Colin.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Timeout while running checkpoint

2018-02-12 Thread Vinokurov Pavel
This is normal behavior.
According to the documentation, the checkpointing process can be triggered by
a timeout (3 min by default) or by the size of the checkpointing buffer.
In your case, every 3 minutes Ignite starts the checkpointing process to sync
dirty pages from RAM to disk.
The log message indicates that there are no dirty pages in RAM.

https://apacheignite.readme.io/v2.3/docs/persistence-checkpointing#section-checkpointing-tuning
https://apacheignite.readme.io/docs/durable-memory-tuning#section-checkpointing-buffer-size
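For reference, the checkpoint frequency can be raised if the log noise is a concern. A sketch, assuming the 2.1/2.3 API (the setter is named setCheckpointingFrequency in that javadoc; later releases moved this to DataStorageConfiguration#setCheckpointFrequency):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

public class CheckpointFreqExample {
    public static void main(String[] args) {
        PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

        // Default is 180000 ms (3 minutes); raising it makes the
        // timeout-triggered "Skipping checkpoint" message less frequent.
        psCfg.setCheckpointingFrequency(10 * 60 * 1000);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setPersistentStoreConfiguration(psCfg);

        Ignition.start(cfg);
    }
}
```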


2018-02-12 12:16 GMT+03:00 Josephine Barboza :

> 3 mins on both nodes
>
>
>
> 2018-02-12 09:10:37  [db-checkpoint-thread-#33%nvIDNSGN7CR%] INFO
> GridCacheDatabaseSharedManager:463 - Skipping checkpoint (no pages were
> modified) [checkpointLockWait=0ms, checkpointLockHoldTime=3ms,
> reason='timeout']
>
> 2018-02-12 09:13:37  [db-checkpoint-thread-#33%nvIDNSGN7CR%] INFO
> GridCacheDatabaseSharedManager:463 - Skipping checkpoint (no pages were
> modified) [checkpointLockWait=0ms, checkpointLockHoldTime=2ms,
> reason='timeout']
>
>
>
>
>
> *From:* Vinokurov Pavel [mailto:vinokurov.pa...@gmail.com]
> *Sent:* Monday, February 12, 2018 2:23 PM
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Timeout while running checkpoint
>
>
>
> How often the "Skipping checkpoint" message occurred in logs?
>
>
>
> 2018-02-12 10:47 GMT+03:00 Josephine Barboza :
>
> No I haven’t overridden checkpointFreq value.
>
>
>
> *From:* Vinokurov Pavel [mailto:vinokurov.pa...@gmail.com]
> *Sent:* Monday, February 12, 2018 1:03 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Timeout while running checkpoint
>
>
>
> Hi,
>
>
>
> The timeout could be caused by the value of the
> PersistentStoreConfiguration#checkpointFreq parameter.
>
> Have you overridden the *checkpointFreq* config parameter?
>
>
>
> 2018-02-12 10:05 GMT+03:00 Josephine Barboza :
>
> Hi,
>
> I’m constantly seeing a lot of information logs after setting up a cluster
> in ignite of two nodes
>
>
>
> Skipping checkpoint (no pages were modified) [checkpointLockWait=0ms,
> checkpointLockHoldTime=1ms, reason='timeout']
>
>
>
> Why could the process be timing out? I am using persistent store
> configuration with v2.1.
>
>
>
> Thanks,
>
> Josephine
>
>
>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>
>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>



-- 

Regards

Pavel Vinokurov


Distributed transaction (Executing task on client as well as on key owner node)

2018-02-12 Thread Prasad Bhalerao
Hi,

I am trying to test the distributed transaction support using the following
piece of code. While debugging the code I observed that the code executes on
the client node first, and after doing the commit the code executes on the
node which owns that key.

What I am trying to do is to collocate the data to avoid the network call,
as my data in the real use case is going to be big. But while debugging the
code, I observed that the entry processor first executes on the client node,
gets all the data and executes the task, and after commit executes the same
code on the remote node.

Can someone please explain this behavior? My use case is to execute the task
on the nodes which own the data in a single transaction.

private static void executeEntryProcessorTransaction(IgniteCache<Long, Person> cache) {
    Person val = null;
    try (Transaction tx = Ignition.ignite().transactions()
            .txStart(TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
        long myid = 6L;
        CacheEntryProcessor entryProcessor = new MyEntryProcessor();
        cache.invoke(myid, entryProcessor);
        System.out.println("Overwrote old value: " + val);
        val = cache.get(myid);
        System.out.println("Read value: " + val);

        tx.commit();
        System.out.println("Read value after commit: " + cache.get(myid));
    }
}



Thanks,
Prasad


Re: Question on ports exposed in kubernetes setup

2018-02-12 Thread Roman Guseinov
Hi Vishwas,

11211 is the default port for JDBC connections [1]. The default REST port is 8080.

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteJdbcDriver.html

Regards,
Roman




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Autowire in CacheStore implementation

2018-02-12 Thread Prasad Bhalerao
Hi,

Does anyone have any idea, how to autowire dependency in CacheStore
implementation?
To set cacheStore implementation in cache configuration we use following
code.

CacheConfiguration.setCacheStoreFactory().

Now this method accepts a Factory implementation which instantiates the
CacheStore implementation, but this bean is not a Spring-managed bean. How can
I autowire a datasource in the cache store implementation?

Thanks,
Prasad


Re: Distributed transaction (Executing task on client as well as on key owner node)

2018-02-12 Thread Ilya Lantukh
Hi,

The fact that code from invoke(...) is executed on the node that initiated
the transaction ("near node" in Ignite terminology) is a known issue. There is
a ticket for it (https://issues.apache.org/jira/browse/IGNITE-3471), but it
hasn't been fixed yet.

To solve your initial goal, you might want to start the transaction on the
primary node for your key. This can be achieved by using
ignite.compute().affinityRun(...), but in this case you have to start the
transaction inside the affinityRun closure.

Like this:
ignite.compute().affinityRun(cacheName, key,
    () -> {
        try (Transaction tx =
                 Ignition.ignite().transactions().txStart(...)) {
            cache.invoke(key, entryProcessor);

            tx.commit();
        }
    }
);

In this case you will minimize overhead to modify entry in cache -
entryProcessor will be executed only on nodes that own the key, and stored
value shouldn't be transferred between nodes at all.

Hope this helps.



On Mon, Feb 12, 2018 at 1:57 PM, Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Hi,
>
> I am trying to test the distributed transaction support using following
> piece of code. While debugging the code I observed that code executes on
> client node first and after doing commit the code executes on a node which
> owns that kay.
>
> What I am trying to do is, to collocate the data to avoid the network call
> as my data in real use case is going to big. But while debugging the code,
> I observed that entry processor first executes on client node, gets all the
> data executes the task. and after commit executes the same code on remote
> node.
>
> Can someone please explain this behavior? My use case to execute the task
> on nodes which owns the data in single transaction.
>
> private static void executeEntryProcessorTransaction(IgniteCache Person> cache) {
> Person val=null;
> try (Transaction tx = Ignition.ignite().transactions().txStart(
> TransactionConcurrency.OPTIMISTIC,TransactionIsolation.SERIALIZABLE)) {
>   long myid =6l;
> CacheEntryProcessor entryProcessor = new MyEntryProcessor();
> cache.invoke(myid, entryProcessor);
> System.out.println("Overwrote old value: " + val);
> val = cache.get(myid);
> System.out.println("Read value: " + val);
>
> tx.commit();
> System.out.println("Read value after commit: " +
> cache.get(myid));
> }
> }
>
>
>
> Thanks,
> Prasad
>



-- 
Best regards,
Ilya


Re: Question on ports exposed in kubernetes setup

2018-02-12 Thread vbm
Hi Roman,

Thanks for the reply. 
I think the guide needs to be updated, to reflect the default port mapping.


Regards,
Vishwas



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Distributed transaction (Executing task on client as well as on key owner node)

2018-02-12 Thread Prasad Bhalerao
Hi,

My use case is to collocate the data and execute the task on that node. But
I want to execute more than 1,000 such tasks in collocated mode, and all
such tasks should be executed in a single transaction. If any one of the
tasks fails, I want to roll back the complete transaction. The reason to
execute the tasks in collocated mode is that my input data is going to be
huge. Can you please explain how to implement this behavior using your
solution? As per your solution I will be executing the transaction commit
1,000 times if I have 1,000 such compute tasks, but if any one of them fails
I do not have a way to roll back the complete transaction.

Is there any alternative to implement this?



Thanks,
Prasad

On Mon, Feb 12, 2018 at 6:22 PM, Ilya Lantukh  wrote:

> Hi,
>
> The fact that code from invoke(...) is executed on node that initiated
> transaction ("near node" in ignite terminology) is a known issue. There is
> a ticket for it (https://issues.apache.org/jira/browse/IGNITE-3471), but
> it hasn't been fixed yet.
>
> To solve your initial goal, you might want to start transaction on the
> primary node for your key. It can be achieved by using
> ignite.compute().affinityRun(...), but in this case you have to start
> transaction inside affinityRun closure.
>
> Like this:
> ignite.compute().affinityRun(cacheName, key,
> () -> {
> try (Transaction tx = 
> Ignition.ignite().transactions().txStart(...))
> {
> cache.invoke(key, entryProcessor);
>
> tx.commit();
> }
> }
> );
> }
>
> In this case you will minimize overhead to modify entry in cache -
> entryProcessor will be executed only on nodes that own the key, and stored
> value shouldn't be transferred between nodes at all.
>
> Hope this helps.
>
>
>
> On Mon, Feb 12, 2018 at 1:57 PM, Prasad Bhalerao <
> prasadbhalerao1...@gmail.com> wrote:
>
>> Hi,
>>
>> I am trying to test the distributed transaction support using following
>> piece of code. While debugging the code I observed that code executes on
>> client node first and after doing commit the code executes on a node which
>> owns that kay.
>>
>> What I am trying to do is, to collocate the data to avoid the network
>> call as my data in real use case is going to big. But while debugging the
>> code, I observed that entry processor first executes on client node, gets
>> all the data executes the task. and after commit executes the same code on
>> remote node.
>>
>> Can someone please explain this behavior? My use case to execute the task
>> on nodes which owns the data in single transaction.
>>
>> private static void executeEntryProcessorTransaction(IgniteCache> Person> cache) {
>> Person val=null;
>> try (Transaction tx = Ignition.ignite().transactions
>> ().txStart(TransactionConcurrency.OPTIMISTIC,TransactionIsolation.SERIALIZABLE))
>> {
>>   long myid =6l;
>> CacheEntryProcessor entryProcessor = new MyEntryProcessor();
>> cache.invoke(myid, entryProcessor);
>> System.out.println("Overwrote old value: " + val);
>> val = cache.get(myid);
>> System.out.println("Read value: " + val);
>>
>> tx.commit();
>> System.out.println("Read value after commit: " +
>> cache.get(myid));
>> }
>> }
>>
>>
>>
>> Thanks,
>> Prasad
>>
>
>
>
> --
> Best regards,
> Ilya
>


Re: Autowire in CacheStore implementation

2018-02-12 Thread dkarachentsev
Hi Prasad,

If you started Ignite with IgniteSpringBean or IgniteSpring, try the
@SpringApplicationContextResource [1] annotation. Ignite's resource injector
will use the Spring context to set a dependency annotated with it. But I'm not
sure that this will work with CacheStore; it should be rechecked.

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/resources/SpringApplicationContextResource.html
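A sketch of what Dmitry describes might look like this. As he notes, it is untested with CacheStore; the store class, value type, and bean lookup are illustrative assumptions:

```java
import javax.cache.Cache;
import javax.sql.DataSource;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.resources.SpringApplicationContextResource;
import org.springframework.context.ApplicationContext;

public class JdbcBackedCacheStore extends CacheStoreAdapter<Long, String> {
    // Set by Ignite's resource injector when the node is started
    // via IgniteSpring.start(...) or IgniteSpringBean.
    @SpringApplicationContextResource
    private ApplicationContext appCtx;

    private DataSource dataSource() {
        // Lookup by type; adjust to your own context / bean name.
        return appCtx.getBean(DataSource.class);
    }

    @Override public String load(Long key) {
        // Query the database through dataSource() here.
        return null;
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
        // Upsert through dataSource() here.
    }

    @Override public void delete(Object key) {
        // Delete through dataSource() here.
    }
}
```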

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Logging using Log4Net

2018-02-12 Thread ozgurnevres
Hi,

I want to use log4net for logging. Ignite starts from
ApplicationConfiguration like this:
Ignition.StartFromApplicationConfiguration()

It seems it isn't logging in C:\Logs. What am I doing wrong? Ignite
configuration is like

  <igniteConfiguration xmlns="http://ignite.apache.org/schema/dotnet/IgniteConfigurationSection"
      gridName="myGrid1" clientMode="false" jvmMaxMemoryMb="6144">
  ...
  </igniteConfiguration>

And the Log4Net configuration in app.config:

  [log4net configuration section -- markup not preserved in the archive]




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


20 minute 12x throughput drop using data streamer and Ignite persistence

2018-02-12 Thread David Harvey
I have a 8 node cluster with 244GB/node, and I see a behavior I don't have
any insight into, and which doesn't make sense.

I'm using a custom StreamReceiver which starts a transaction and updates 4
partitioned caches, 2 of which should be local updates. Ignite persistence is
on, and there is 1 sync backup per cache.


I start out with no caches.   I'm normally getting about 16K
transactions/sec, and that drops to about 1K/s for about 20 minutes, and
then recovers.

One node starts transmitting/receiving with peaks up to 260 MB/s vs. the
normal peaks which are about 60MB/s.  The thread count on that node hits a
peak and stays there for the duration of the event.   The SSD write times
are very low.  This is prior to filling up the cache, so there are no
reads.   The transmit BW drops off


The logs show nothing interesting, only checkpoints, and their frequency is
low.  The checkpoint times don't get worse, and their frequency drops off,
due to throughput drop.

I have 6 threads feeding the DataStreamer from a client node. When each
finishes a batch of 200,000 transactions, it waits for the Futures to
complete, and will issue a tryFlush if it waits too long. (The DataStreamer
API is not ideal for the case where there are multiple threads using the same
stream: when there are multiple streams, the choice is to flush, which
degrades the throughput of the other streams, or to wait, in which case the
data is not sent if the buffers aren't filling.)
Normally each batch would take 2 minutes or so; in this case the flush did
not complete for 20 minutes. At the low point, I was seeing 260 futures
completing per second, vs. the normal ~16K.
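The batching pattern described above might look roughly like this. This is a sketch; the batch size, key/value types, and class names are illustrative, not the poster's actual code:

```java
import java.util.ArrayList;
import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.lang.IgniteFuture;

public class StreamerBatcher {
    private static final int BATCH = 200_000;

    static void feed(Ignite ignite, String cacheName, Iterable<long[]> records) {
        try (IgniteDataStreamer<Long, long[]> streamer = ignite.dataStreamer(cacheName)) {
            Collection<IgniteFuture<?>> batch = new ArrayList<>(BATCH);

            for (long[] rec : records) {
                batch.add(streamer.addData(rec[0], rec));

                if (batch.size() == BATCH) {
                    // Ask the streamer to push buffered data without blocking,
                    // then wait for this batch's futures to complete.
                    streamer.tryFlush();
                    for (IgniteFuture<?> f : batch)
                        f.get();
                    batch.clear();
                }
            }
        } // close() flushes any remaining data
    }
}
```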

I've attached the current configuration file.  This originally occurred
when using 64 DataStreamer threads with no other thread counts changed.  It
also seemed to cause peer class loading to fail and I needed to increase
the timeout to avoid that.

Thanks,
Dave Harvey

Disclaimer

The information contained in this communication from the sender is 
confidential. It is intended solely for use by the recipient and others 
authorized to receive it. If you are not the recipient, you are hereby notified 
that any disclosure, copying, distribution or taking action in relation of the 
contents of this information is strictly prohibited and may be unlawful.

This email has been scanned for viruses and malware, and may have been 
automatically archived by Mimecast Ltd, an innovator in Software as a Service 
(SaaS) for business. Providing a safer and more useful place for your human 
generated data. Specializing in; Security, archiving and compliance. To find 
out more visit the Mimecast website.




[Attachment: Spring beans XML configuration file (schema
http://www.springframework.org/schema/beans) defining the IgniteConfiguration;
the element markup was not preserved in the archive.]

Re: slow query performance against berkley db

2018-02-12 Thread Mikael

Hi!

What is it you are trying to do? I assume you have a working solution
with BDB now; why do you want to change it to Ignite? Do you want/need
redundancy/HA? Do you plan to run this on a single node or on multiple nodes?


Mikael


Den 2018-02-12 kl. 03:45, skrev Rajesh Kishore:

Dear all

Request you to kindly suggest if my approach is wrong. The idea of
replacing Berkeley DB with Ignite would not work out if the query is
slow. What's the best model to be used with Ignite for my use case?


Thanks,
Rajesh

On Fri, Feb 9, 2018 at 9:38 AM, Rajesh Kishore <rajesh10si...@gmail.com> wrote:


Igniters any pointers pls.

Regards,
Rajesh

On Wed, Feb 7, 2018 at 9:15 AM, Rajesh Kishore <rajesh10si...@gmail.com> wrote:

Hi Dmitry,

Thanks a ton.

What is not convincing to me is that with just 0.1 M records in the
main table and 2 M records in the other table, the SQL query is taking
around 24 sec; that is worrisome.
In local cache mode I tried both partitioned and non-partitioned
mode, and the result is the same.
All I wanted to know is whether my approach is wrong somewhere. I
am sure igniters would correct me on the approach used.

Regards,
-Rajesh

On Wed, Feb 7, 2018 at 8:23 AM, Dmitriy Setrakyan <dsetrak...@apache.org> wrote:

Hi Rajesh,

Please allow the community some time to test your code.

As far as testing single node vs. distributed, when you
have more than one node, Ignite will split your data set
evenly across multiple nodes. This means that when running
the query, it will be executed on each node on smaller
data sets in parallel, which should provide better
performance. If your query does some level of scanning,
then the more nodes you add, the faster it will get.

D.

On Tue, Feb 6, 2018 at 5:02 PM, Rajesh Kishore <rajesh10si...@gmail.com> wrote:

Hi All,
Please help me with some pointers; this is a deciding factor for us to
further evaluate Ignite.
Somehow we are not convinced: with just 0.1 M records
it's not as responsive as Berkeley DB.
Let me know the strategy to be adopted, and point out where
I am going wrong.

Thanks
Rajesh

On 6 Feb 2018 6:11 p.m., "Rajesh Kishore" <rajesh10si...@gmail.com> wrote:

Further to this,

I am re-framing what I have; please tell me whether my
approach is correct or not.

As of now, I am using only one node as a local cache with
native persistence on the file system. The system
has a small number of records: around 0.1 M in the main
table and 2 M in the supporting table.

Using SQL to retrieve the records with a join; the
SQL used is

---
final String query1 = "SELECT "
    + "f.entryID, f.attrName, f.attrValue, "
    + "f.attrsType "
    + "FROM "
    + "( select st.entryID, st.attrName, st.attrValue, st.attrsType from "
    + "(SELECT at1.entryID FROM \"objectclass\".Ignite_ObjectClass"
    + " at1 WHERE "
    + " at1.attrValue = ? ) t"
    + " INNER JOIN \"Ignite_DSAttributeStore\".IGNITE_DSATTRIBUTESTORE st ON st.entryID = t.entryID "
    + " WHERE st.attrKind IN ('u','o') "
    + " ) f "
    + " INNER JOIN "
    + " ( "
    + " SELECT entryID from \"dn\".Ignite_DN where parentDN like ? "
    + " ) "
    + " dnt"
    + " ON f.entryID = dnt.entryID"
    + " order by f.entryID";

String queryWithType = query1;
QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery(
    queryWithType).setEnforceJoinOrder(true).setArgs("person", "dc=ignite,%"));
System.out.println("SUBTREE " + cursor.getAll());
---

The corresponding EXPLAIN plan is
-

Re: Text Query question

2018-02-12 Thread dkarachentsev
Jet,

Yep, this should work, but meanwhile this ticket remains unresolved [1].

[1] https://issues.apache.org/jira/browse/IGNITE-5371

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Logging using Log4Net

2018-02-12 Thread Alexey Popov
Hi,

There could be several issues. Unfortunately, you just provided some config
snippets.

First of all, please add  to your appender
RollingLogFileAppender config.

Then, please ensure that your log4net configuration section is
actually used.
It is better to have a separate log4net.config file.

Please share a simple reproducer project if you still face any issue.

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Distributed transaction (Executing task on client as well as on key owner node)

2018-02-12 Thread dkarachentsev
Hi Prasad,

This approach will work with multiple keys if they are collocated on the
same node and you start/stop the transaction in the same thread/task. There
is no other workaround.
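A sketch of how that might look: several keys sharing one affinity key are updated in a single transaction inside affinityRun on their primary node. The key/value types, cache name, and helper method are illustrative assumptions; the closure must also be serializable along with anything it captures:

```java
import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheEntryProcessor;
import org.apache.ignite.cache.affinity.AffinityKey;
import org.apache.ignite.transactions.Transaction;

public class CollocatedTxExample {
    static void updateCollocated(Ignite ignite, String cacheName, Object affKey,
            Collection<AffinityKey<Long>> keys,
            CacheEntryProcessor<AffinityKey<Long>, String, Object> proc) {
        // The closure runs on the primary node for affKey; all keys sharing
        // that affinity key live there too, so one transaction covers them.
        ignite.compute().affinityRun(cacheName, affKey, () -> {
            IgniteCache<AffinityKey<Long>, String> cache =
                Ignition.localIgnite().cache(cacheName);

            try (Transaction tx = Ignition.localIgnite().transactions().txStart()) {
                for (AffinityKey<Long> k : keys)
                    cache.invoke(k, proc);

                // If any invoke throws, commit is skipped and the
                // transaction rolls back on close().
                tx.commit();
            }
        });
    }
}
```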

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Loading cache from Oracle Table

2018-02-12 Thread Pavel Vinokurov
Yes, you could use a standard Java ExecutorService within your
CacheStore.loadCache implementation.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Is there any work around for indexing list in Apache ignite and use in where clause?

2018-02-12 Thread Roman Guseinov
Hi Mani,

It looks like indexes for list/array field types are not supported.

Could you describe your use case in more detail? Could you send examples of
data and search parameters? Maybe there are some workarounds.

Regards,
Roman



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Unable to connect ignite pods in Kubernetes using Ip-finder

2018-02-12 Thread Ryan Samo
I just ran into this same issue recently and it turns out that the
permissions given to the default ServiceAccount in Kubernetes Role-based
access control (RBAC) are not high enough to allow for the
TcpDiscoveryKubernetesIpFinder to talk to the kubernetes service at
"https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/default/endpoints/ignite";
which is why you get a 403 unauthorized exception. I found a work around in
the link below which grants the default ServiceAccount a ClusterRole of
"cluster-admin" in K8, then the Ignite PODs can communicate.

https://github.com/fluent/fluentd-kubernetes-daemonset/issues/14
   

My question is, does the community have any documentation or knowledge in
the Ignite space for what permissions are required in Kubernetes in order
for an Ignite cluster to operate properly? It seems like granting
"cluster-admin" could be a bit risky for a production solution, especially
if you plan to have many Ignite clusters, each with their own K8 namespace
for example. I read through the Kubernetes Deployment documentation for
Ignite and did not see any reference to RBAC which was implemented in K8
v1.8. I suspect that maybe the Ignite documentation was written prior to
this release?

Thanks in advance for any light you could shed on the subject.

Kubernetes v1.9.2
Ignite v2.3.0




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Adding new fields without server restart

2018-02-12 Thread Tim Newman
Hi,

I have a POJO that I am caching written like:
public class Person implements Serializable {
    private static final long serialVersionUID = 1537032807962869676L;

    @QuerySqlField(index = true)
    private final Long personId;

    @QuerySqlField
    private final String firstName;

    ...
}

This is working well and good, but now I want to add a new field: "homeState":
public class Person implements Serializable {
    private static final long serialVersionUID = 1537032807962869676L;

    @QuerySqlField(index = true)
    private final Long personId;

    @QuerySqlField
    private final String firstName;

    @QuerySqlField(index = true)
    private final String homeState;

    ...
}

If I update the entries in the cache to have the new "homeState" value and then 
dump out the contents of the cache, the data is as I would expect. However, 
when I try to run a query against the new column (for example: "DELETE FROM 
person WHERE homeState = 'CA'") I get the error:
org.h2.jdbc.JdbcSQLException: Column "HOMESTATE" not found; SQL statement: 
DELETE FROM person WHERE homeState = ? [42122-195]

My CacheConfiguration used to get the cache:
CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>();
cfg.setName("person");
cfg.setIndexedTypes(Long.class, Person.class);

If I print out the QueryEntity objects, it looks like everything is good:
[QueryEntity [keyType=java.lang.Long, valType=com.calabrio.igtest.Person, 
keyFieldName=null, valueFieldName=null, fields={personId=java.lang.Long, 
firstName=java.lang.String, homeState=java.lang.String}, keyFields=[], 
aliases={firstName=firstName, personId=personId, homeState=homeState}, 
idxs=[QueryIndex [name=Person_personId_idx, fields={personId=true}, 
type=SORTED, inlineSize=-1], QueryIndex [name=Person_homeState_idx, 
fields={homeState=true}, type=SORTED, inlineSize=-1]], tableName=null]]

When I call the metadata REST API, the new field is not listed in the "fields" 
section for my cache object.

I've tried this on v2.1 and v2.3. Am I missing something, or is this simply not 
possible without including the relevant ALTER TABLE ... statement too? The goal 
is to not need to restart the Ignite server cluster.

Thanks
-Tim


Re: Is there any work around for indexing list in Apache ignite and use in where clause?

2018-02-12 Thread vkulichenko
This is also discussed on StackOverflow:
https://stackoverflow.com/questions/48723261/is-there-any-work-around-for-indexing-list-in-apche-ignite-and-use-in-where-clau

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Distributed transaction (Executing task on client as well as on key owner node)

2018-02-12 Thread vkulichenko
Hi Prasad,

I understand that the example you provided may be a simplified one; however,
I wanted to mention that this particular piece of code does not require a
transaction at all. You can just execute a single invoke() operation,
optionally returning the required value (its API allows that). This will work
even with an ATOMIC cache, which is much faster than TRANSACTIONAL, and will
properly collocate everything.
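A sketch of what Val suggests: a single invoke() that updates the entry on its primary node and returns the old value, with no explicit transaction. The processor class and values here are hypothetical:

```java
import javax.cache.processor.EntryProcessorException;
import javax.cache.processor.MutableEntry;
import org.apache.ignite.cache.CacheEntryProcessor;

// invoke() is atomic per entry, so this works on an ATOMIC cache.
public class ReplaceAndReturnOld implements CacheEntryProcessor<Long, String, String> {
    @Override public String process(MutableEntry<Long, String> e, Object... args)
            throws EntryProcessorException {
        String old = e.getValue();
        e.setValue((String) args[0]); // collocated update on the primary node
        return old;                   // returned to the caller of invoke()
    }
}
```

Usage would be something like `String old = cache.invoke(6L, new ReplaceAndReturnOld(), "newValue");`.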

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Add ignite jars using maven

2018-02-12 Thread vkulichenko
Rajarshi,

You don't have to add dependencies explicitly, of course. Ignite is a
standard Maven project, and there is no additional magic you need to
consider. Just add the Ignite modules your project requires to the POM file
and Maven will do the rest.
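For example, a minimal POM fragment might look like the following; the version shown is only an example, and optional modules (ignite-spring, ignite-indexing, etc.) follow the same pattern:

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>2.3.0</version>
</dependency>
```

Maven then resolves the transitive dependencies automatically.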

-Val





Re: Adding new fields without server restart

2018-02-12 Thread vkulichenko
Hi Tim,

Cache configuration is defined when the cache is started, so a @QuerySqlField
annotation on the new field does not take effect unless you restart the
cluster or at least destroy the cache and re-create it with the new
configuration. Fields are added at the object level transparently, but to
modify the SQL schema at runtime you need to use ALTER TABLE and CREATE INDEX:
https://apacheignite-sql.readme.io/docs/ddl
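For the Person table from this thread, that might look like the following sketch (the "age" column is a hypothetical new field; ALTER TABLE ... ADD COLUMN requires a recent Ignite version):

```sql
-- Add the new column to the existing SQL schema at runtime
ALTER TABLE Person ADD COLUMN age INT;

-- Optionally index it, mirroring the table's existing indexes
CREATE INDEX Person_age_idx ON Person (age);
```

Both statements can be executed through JDBC/ODBC or the SQL REST API without restarting the cluster.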

-Val





What happens if Primary Node fails during the Commit Phase

2018-02-12 Thread John Wilson
Hi,

Assume the Prepare phase has completed and that the primary node has
received a commit message from the coordinator.

Two questions:

   1. A primary node commits a transaction before it forwards a commit
   message to the backup nodes. True?
   2. What happens if a Primary Node fails while it is committing but
   before the commit message is sent to backup nodes? Do the backup nodes
   commit after some timeout or will they send a fail message to the
   coordinator?

The doc below provides a nice description but doesn't exactly answer my
question.

https://www.gridgain.com/resources/blog/apache-ignite-transactions-architecture-failover-and-recovery

Thanks,


Re: What happens if Primary Node fails during the Commit Phase

2018-02-12 Thread vkulichenko
Hi John,

1. True.

2. The blog actually provides the answer:

When the Backup Nodes detect the failure, they will notify the Transaction
coordinator that they committed the transaction successfully. In this
scenario, there is no data loss because the data are backed up and can still
be accessed and used by applications.

In other words, if the primary node fails, the backups will not wait for a
message but will commit right away and send an ack to the coordinator. Once
the coordinator gets all required acks, the transaction completes.

-Val





Re: What happens if Primary Node fails during the Commit Phase

2018-02-12 Thread John Wilson
You're always helpful Val. Thanks!


I have a question regarding Optimistic Locking


   1. The documentation here,
      https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Key-Value+Transactions+Architecture,
      states that locks, for optimistic locking, are acquired during the
      "prepare" phase. But the graphic depicted there and the tutorial here,
      https://www.gridgain.com/resources/blog/apache-ignite-transactions-architecture-concurrency-modes-and-isolation-levels,
      clearly indicate that locks are acquired during the commit phase, with
      version information passed along with the key by the coordinator to the
      primary nodes. Can you please explain the discrepancy?

And three questions regarding pages and page locking:

   1. Does Ignite use a lock-free algorithm for its B+ tree structure?
      Looking at the source code, the use of a tag field to avoid the ABA
      problem suggests that.
   2. Ignite documentation talks about entry-level locks and page locks.
      When exactly is a page locked and released? Also, when an entry is
      inserted or modified in a page, is the whole page locked, forbidding
      other threads from inserting other entries into it, or is only the
      entry's offset locked, allowing other threads to insert other entries
      into the page?
   3. What is the difference between a directCount and an indirectCount
      for a page?

Thanks

On Mon, Feb 12, 2018 at 7:33 PM, vkulichenko 
wrote:



Re: What happens if Primary Node fails during the Commit Phase

2018-02-12 Thread John Wilson
I got the answer for #3 here
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Durable+Memory+-+under+the+hood#IgniteDurableMemory-underthehood-Pages&links.
I will post the remaining questions in a separate thread.

On Mon, Feb 12, 2018 at 8:03 PM, John Wilson 
wrote:

>


Re: Logging using Log4Net

2018-02-12 Thread ozgurnevres
"please ensure that your log4net configuration section  is 
actually used"

How can I ensure that? It seems there's no property in logger configuration
to tell which appender will be used.
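For reference, a typical app.config wires log4net up roughly as follows; this is a generic sketch (appender name and pattern are examples), and log4net only reads it if configuration is triggered, e.g. via XmlConfigurator.Configure() or the [assembly: log4net.Config.XmlConfigurator] attribute:

```xml
<configuration>
  <configSections>
    <section name="log4net"
             type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
  <log4net>
    <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date %-5level %logger - %message%newline" />
      </layout>
    </appender>
    <!-- The root element is what ties appenders to the logger hierarchy -->
    <root>
      <level value="INFO" />
      <appender-ref ref="ConsoleAppender" />
    </root>
  </log4net>
</configuration>
```

If the root element does not reference an appender, or configuration is never triggered, no output is produced even though the section exists.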





Re: Logging using Log4Net

2018-02-12 Thread ozgurnevres
You can download a simple reproducer project here: 
https://1drv.ms/u/s!ApZeEREhT0aVxHwU56ywJxhuVWvR
  


