Re: IgniteCache.loadCache improvement proposal

2016-11-15 Thread Yakov Zhdanov
As far as I can understand, Alex was trying to avoid the scenario where a user
needs to bring a 1 TB dataset to each node of a 50-node cluster and then
discard 49/50 of the loaded data. This seems like a very good catch to me.

However, I agree with Val that this can be implemented apart from the store:
the user can continue using the store for read/write-through, and there is
probably no need to alter any API.

Maybe we need to outline Val's suggestion in the documentation and describe
this as one of the possible scenarios. Thoughts?

--Yakov


[GitHub] ignite pull request #1233: Ignite 3075 2

2016-11-15 Thread kdudkov
GitHub user kdudkov opened a pull request:

https://github.com/apache/ignite/pull/1233

Ignite 3075 2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-3075-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1233.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1233


commit 1093819ac0f3e7a0faacde59919117b8977e6d5b
Author: Igor Sapego 
Date:   2016-11-09T15:19:01Z

IGNITE-4201: Fixed version fix maven step.

commit 2e551343e306921bd907219f51775c8d695f2f49
Author: Konstantin Dudkov 
Date:   2016-11-10T10:46:06Z

IGNITE-2325

commit 29a9e919654ffe86fa6a24eee3ed916276a4bebf
Author: Konstantin Dudkov 
Date:   2016-11-10T14:09:05Z

IGNITE-3074

commit baa752660c6eddf27d15a812252b01b5872385de
Author: iveselovskiy 
Date:   2016-11-10T15:47:09Z

IGNITE-4208: Hadoop: Fixed a bug preventing normal secondary file system 
start. This closes #1228.

commit 53f3b7be2558cc20fc848db0f1603358bb279ec6
Author: Konstantin Dudkov 
Date:   2016-11-10T16:09:31Z

IGNITE-3075

commit 5a4ebd5de8751dcf32a26c96bf4f39e43bcbb341
Author: Alexey Kuznetsov 
Date:   2016-11-11T02:49:41Z

Fixed classnames.properties generation for ignite-hadoop module.

commit 73a8fa8b635cce3b9d8dcad364a32d29f12d4398
Author: Alexey Kuznetsov 
Date:   2016-11-11T03:20:32Z

Fixed classnames.properties generation for ignite-hadoop module.

commit f8aa957327312d76f90231b9bfe6d386d1d4ec37
Author: Alexey Kuznetsov 
Date:   2016-11-11T08:56:42Z

Reverted wrong commit.

commit 5563bc36efe13c9ae8e65a4a7bffbe70ba495ba5
Author: Konstantin Dudkov 
Date:   2016-11-11T10:23:19Z

IGNITE-2523 fix broken near request compatibility

commit caa043f38b29b358b9e942e3ffcc47307d902311
Author: Konstantin Dudkov 
Date:   2016-11-11T11:45:23Z

Merge branch 'ignite-2523-2' into ignite-3074-2

commit 84834782553cbdb9796264f69980f4f45651f5c5
Author: Konstantin Dudkov 
Date:   2016-11-11T12:45:56Z

Merge branch 'ignite-3074-2' into ignite-3075-2

commit 9499bff39fb29099caf357b263e6a94dd21bd7c6
Author: Konstantin Dudkov 
Date:   2016-11-11T13:59:44Z

IGNITE-3075 restore compatible fields order

commit e0b9da7f76e876a2576b78d6f1669f64745feead
Author: Konstantin Dudkov 
Date:   2016-11-11T14:37:10Z

IGNITE-3075 check expiry policy, not ttl

commit cc48987aebbb018b267b9b137e94d593b9d3ad0b
Author: Konstantin Dudkov 
Date:   2016-11-14T07:54:22Z

IGNITE-3075 fix test

commit b253c6ebdd4f4948d0b6514a501a07ada19c567d
Author: Konstantin Dudkov 
Date:   2016-11-14T08:37:39Z

Merge branch 'ignite-1.7.3' into ignite-2523-2

commit ad881a0c99e64eb82613c1004615e0bf04a44603
Author: Konstantin Dudkov 
Date:   2016-11-14T08:38:19Z

Merge branch 'ignite-2523-2' into ignite-3074-2

commit b71e6304de7f8a6be365ac39256243e43cf02bbc
Author: Konstantin Dudkov 
Date:   2016-11-14T08:38:50Z

Merge branch 'ignite-3074-2' into ignite-3075-2




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] ignite pull request #1232: Ignite 3074 2

2016-11-15 Thread kdudkov
GitHub user kdudkov opened a pull request:

https://github.com/apache/ignite/pull/1232

Ignite 3074 2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-3074-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1232.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1232


commit 1093819ac0f3e7a0faacde59919117b8977e6d5b
Author: Igor Sapego 
Date:   2016-11-09T15:19:01Z

IGNITE-4201: Fixed version fix maven step.

commit 2e551343e306921bd907219f51775c8d695f2f49
Author: Konstantin Dudkov 
Date:   2016-11-10T10:46:06Z

IGNITE-2325

commit 29a9e919654ffe86fa6a24eee3ed916276a4bebf
Author: Konstantin Dudkov 
Date:   2016-11-10T14:09:05Z

IGNITE-3074

commit baa752660c6eddf27d15a812252b01b5872385de
Author: iveselovskiy 
Date:   2016-11-10T15:47:09Z

IGNITE-4208: Hadoop: Fixed a bug preventing normal secondary file system 
start. This closes #1228.

commit 5a4ebd5de8751dcf32a26c96bf4f39e43bcbb341
Author: Alexey Kuznetsov 
Date:   2016-11-11T02:49:41Z

Fixed classnames.properties generation for ignite-hadoop module.

commit 73a8fa8b635cce3b9d8dcad364a32d29f12d4398
Author: Alexey Kuznetsov 
Date:   2016-11-11T03:20:32Z

Fixed classnames.properties generation for ignite-hadoop module.

commit f8aa957327312d76f90231b9bfe6d386d1d4ec37
Author: Alexey Kuznetsov 
Date:   2016-11-11T08:56:42Z

Reverted wrong commit.

commit 5563bc36efe13c9ae8e65a4a7bffbe70ba495ba5
Author: Konstantin Dudkov 
Date:   2016-11-11T10:23:19Z

IGNITE-2523 fix broken near request compatibility

commit caa043f38b29b358b9e942e3ffcc47307d902311
Author: Konstantin Dudkov 
Date:   2016-11-11T11:45:23Z

Merge branch 'ignite-2523-2' into ignite-3074-2

commit b253c6ebdd4f4948d0b6514a501a07ada19c567d
Author: Konstantin Dudkov 
Date:   2016-11-14T08:37:39Z

Merge branch 'ignite-1.7.3' into ignite-2523-2

commit ad881a0c99e64eb82613c1004615e0bf04a44603
Author: Konstantin Dudkov 
Date:   2016-11-14T08:38:19Z

Merge branch 'ignite-2523-2' into ignite-3074-2






[jira] [Created] (IGNITE-4223) Joining node should fetch affinity for all caches using single message

2016-11-15 Thread Semen Boikov (JIRA)
Semen Boikov created IGNITE-4223:


 Summary: Joining node should fetch affinity for all caches using 
single message
 Key: IGNITE-4223
 URL: https://issues.apache.org/jira/browse/IGNITE-4223
 Project: Ignite
  Issue Type: Task
  Components: cache
Reporter: Semen Boikov
Assignee: Konstantin Dudkov
 Fix For: 2.0


Currently, when a new node joins the cluster and 'late affinity assignment' mode is 
enabled, it requests cache affinity using one message per cache (see 
CacheAffinitySharedManager.fetchAffinityOnJoin). In 'late affinity assignment' 
mode the coordinator actually has affinity information for all caches, so a single 
request can be sent to the coordinator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: IgniteCache.loadCache improvement proposal

2016-11-15 Thread Alexandr Kuramshin
Hi all,

I think the discussion is going in the wrong direction. Certainly it's not a big
deal to implement some custom user logic to load the data into caches. But
the Ignite framework gives the user reusable code built on top of the
basic system.

So the main question is: why does the framework offer the user a convenient way
to load caches that is backed by a totally non-optimal implementation?

We could talk at length about different persistent storage types, but
whenever we initiate the loading with IgniteCache.loadCache, the current
implementation imposes significant overhead on the network.

Partition-aware data loading may be used in some scenarios to avoid this
network overhead, but users are compelled to take additional steps to
achieve this optimization: adding a partition column to tables, adding compound
indices that include the added column, writing a piece of repetitive code to
load the data into different caches in a fault-tolerant fashion, etc.

Let's give the user reusable code which is convenient, reliable and
fast.
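The "additional steps" described above can be sketched with a toy model. This is an illustrative assumption, not Ignite's actual affinity implementation: the `part` column, the modulo-based affinity and the `Person` table name are all hypothetical, and real deployments would take the partition-to-node mapping from the cluster's affinity function.

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionAwareQueries {
    /** Toy affinity: map a partition to an owning node (Ignite's real function differs). */
    static int nodeForPartition(int part, int nodeCount) {
        return part % nodeCount;
    }

    /** Build SELECTs for only the partitions owned by the local node. */
    static List<String> localQueries(String table, int parts, int nodeCount, int localNode) {
        List<String> queries = new ArrayList<>();
        for (int p = 0; p < parts; p++) {
            if (nodeForPartition(p, nodeCount) == localNode)
                queries.add("SELECT * FROM " + table + " WHERE part = " + p);
        }
        return queries;
    }

    public static void main(String[] args) {
        // 1024 partitions over 50 nodes: node 0 loads only its own partitions.
        List<String> qs = localQueries("Person", 1024, 50, 0);
        System.out.println(qs.size());
        System.out.println(qs.get(0));
    }
}
```

Each node would then run only its own query list against the backing store, which is exactly the fault-tolerant boilerplate the email argues the framework should provide once, reusably.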

2016-11-14 20:56 GMT+03:00 Valentin Kulichenko <
valentin.kuliche...@gmail.com>:

> Hi Aleksandr,
>
> Data streamer is already outlined as one of the possible approaches for
> loading the data [1]. Basically, you start a designated client node or
> chose a leader among server nodes [1] and then use IgniteDataStreamer API
> to load the data. With this approach there is no need to have the
> CacheStore implementation at all. Can you please elaborate what additional
> value are you trying to add here?
>
> [1] https://apacheignite.readme.io/docs/data-loading#ignitedatastreamer
> [2] https://apacheignite.readme.io/docs/leader-election
>
> -Val
>
> On Mon, Nov 14, 2016 at 8:23 AM, Dmitriy Setrakyan 
> wrote:
>
> > Hi,
> >
> > I just want to clarify a couple of API details from the original email to
> > make sure that we are making the right assumptions here.
> >
> > *"Because of none keys are passed to the CacheStore.loadCache methods,
> the
> > > underlying implementation is forced to read all the data from the
> > > persistence storage"*
> >
> >
> > According to the javadoc, the loadCache(...) method receives an optional
> > argument from the user. You can pass anything you like, including a list
> of
> > keys, or an SQL where clause, etc.
> >
> > *"The partition-aware data loading approach is not a choice. It requires
> > > persistence of the volatile data depended on affinity function
> > > implementation and settings."*
> >
> >
> > This is only partially true. While Ignite allows plugging in custom
> > affinity functions, the affinity function is not something that changes
> > dynamically and should always return the same partition for the same key.
> > So, the partition assignments are not volatile at all. If, in some very
> > rare case, the partition assignment logic needs to change, then you could
> > update the partition assignments that you may have persisted elsewhere as
> > well, e.g. a database.
> >
> > D.
> >
> > On Mon, Nov 14, 2016 at 10:23 AM, Vladimir Ozerov 
> > wrote:
> >
> > > Alexandr, Alexey,
> > >
> > > While I agree with you that current cache loading logic is far from
> > ideal,
> > > it would be cool to see API drafts based on your suggestions to get
> > better
> > > understanding of your ideas. How exactly users are going to use your
> > > suggestions?
> > >
> > > My main concern is that the initial load is not a trivial task in the
> > > general case. Some users have centralized RDBMS systems, some have
> > > NoSQL, others work with distributed persistent stores (e.g. HDFS).
> > > Sometimes we have Ignite nodes "near" the persistent data, sometimes we
> > > don't. Sharding, affinity, co-location, etc. If we try to support all
> > > (or many) cases out of the box, we may end up with a very messy and
> > > difficult API. So we should carefully balance simplicity, usability and
> > > feature richness here.
> > >
> > > Personally, I think that if a user is not satisfied with the
> > > "loadCache()" API, he can just write a simple closure with a blackjack
> > > streamer and queries and send it to whatever node he finds convenient.
> > > Not a big deal. Only very common cases should be added to the Ignite
> > > API.
> > >
> > > Vladimir.
> > >
> > >
> > > On Mon, Nov 14, 2016 at 12:43 PM, Alexey Kuznetsov <
> > > akuznet...@gridgain.com>
> > > wrote:
> > >
> > > > Looks good to me.
> > > >
> > > > But I would suggest considering one more use case:
> > > >
> > > > If the user knows their data, they could manually split the loading.
> > > > For example: table Persons contains 10M rows.
> > > > The user could provide something like:
> > > > cache.loadCache(null, "Person", "select * from Person where id <
> > > > 1_000_000",
> > > > "Person", "select * from Person where id >=  1_000_000 and id <
> > > 2_000_000",
> > > > 
> > > > "Person", "select * from Person where id >= 9_000_000 and id <
> > > 10_000_000",
> > > > );
> > > >
> > > > or may be it could be some descriptor object like
> > > >
> > > >  {
> > > >sql: select * f

Re: IgniteCache.loadCache improvement proposal

2016-11-15 Thread Dmitriy Setrakyan
On Tue, Nov 15, 2016 at 9:07 AM, Yakov Zhdanov  wrote:

> As far as I can understand, Alex was trying to avoid the scenario where a
> user needs to bring a 1 TB dataset to each node of a 50-node cluster and
> then discard 49/50 of the loaded data. This seems like a very good catch to
> me.
>

Yakov, I agree that such a scenario should be avoided. I also think that the
loadCache(...) method, as it is right now, provides a way to avoid it.

DataStreamer also seems like an option here, but in this case, the
loadCache(...) method should not be used at all, to my understanding.
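The discard problem being debated above can be quantified with a toy model. The modulo-based partition and owner functions and all the numbers here are illustrative assumptions, not Ignite's real affinity function: the point is only that a naive loadCache run fetches every row on every node and then throws most of it away.

```java
public class LoadCacheOverhead {
    /** Toy affinity: partition of a key, and the node owning a partition. */
    static int partition(long key, int parts) { return (int) (key % parts); }
    static int owner(int part, int nodes)     { return part % nodes; }

    /** Rows a node keeps after scanning the full dataset; the rest is discarded. */
    static long keptRows(long rows, int parts, int nodes, int localNode) {
        long kept = 0;
        for (long k = 0; k < rows; k++)
            if (owner(partition(k, parts), nodes) == localNode)
                kept++;
        return kept;
    }

    public static void main(String[] args) {
        long rows = 1_000_000;
        // Node 0 on a 50-node cluster with 1024 partitions.
        long kept = keptRows(rows, 1024, 50, 0);
        // Every node fetched all rows over the network, but keeps only ~1/50.
        System.out.printf("fetched=%d kept=%d discarded=%.1f%%%n",
            rows, kept, 100.0 * (rows - kept) / rows);
    }
}
```

A partition-aware load would instead fetch only the `kept` fraction per node, which is what the earlier messages in the thread propose making reusable.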


Re: IgniteCache.loadCache improvement proposal

2016-11-15 Thread Vladimir Ozerov
Hi Alex,

>>> Let's give the user the reusable code which is convenient, reliable and
fast.
Convenience is exactly why I asked for an example of how the API could look
and how users are going to use it.

Vladimir.

On Tue, Nov 15, 2016 at 11:18 AM, Alexandr Kuramshin 
wrote:

> Hi all,
>
> I think the discussion is going in the wrong direction. Certainly it's not
> a big deal to implement some custom user logic to load the data into
> caches. But the Ignite framework gives the user reusable code built on top
> of the basic system.
>
> So the main question is: why does the framework offer the user a convenient
> way to load caches that is backed by a totally non-optimal implementation?
>
> We could talk at length about different persistent storage types, but
> whenever we initiate the loading with IgniteCache.loadCache, the current
> implementation imposes significant overhead on the network.
>
> Partition-aware data loading may be used in some scenarios to avoid this
> network overhead, but users are compelled to take additional steps to
> achieve this optimization: adding a partition column to tables, adding
> compound indices that include the added column, writing a piece of
> repetitive code to load the data into different caches in a fault-tolerant
> fashion, etc.
>
> Let's give the user reusable code which is convenient, reliable and
> fast.
>
> 2016-11-14 20:56 GMT+03:00 Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > Hi Aleksandr,
> >
> > Data streamer is already outlined as one of the possible approaches for
> > loading the data [1]. Basically, you start a designated client node or
> > chose a leader among server nodes [1] and then use IgniteDataStreamer API
> > to load the data. With this approach there is no need to have the
> > CacheStore implementation at all. Can you please elaborate what
> additional
> > value are you trying to add here?
> >
> > [1] https://apacheignite.readme.io/docs/data-loading#ignitedatastreamer
> > [2] https://apacheignite.readme.io/docs/leader-election
> >
> > -Val
> >
> > On Mon, Nov 14, 2016 at 8:23 AM, Dmitriy Setrakyan <
> dsetrak...@apache.org>
> > wrote:
> >
> > > Hi,
> > >
> > > I just want to clarify a couple of API details from the original email
> to
> > > make sure that we are making the right assumptions here.
> > >
> > > *"Because of none keys are passed to the CacheStore.loadCache methods,
> > the
> > > > underlying implementation is forced to read all the data from the
> > > > persistence storage"*
> > >
> > >
> > > According to the javadoc, loadCache(...) method receives an optional
> > > argument from the user. You can pass anything you like, including a
> list
> > of
> > > keys, or an SQL where clause, etc.
> > >
> > > *"The partition-aware data loading approach is not a choice. It
> requires
> > > > persistence of the volatile data depended on affinity function
> > > > implementation and settings."*
> > >
> > >
> > > This is only partially true. While Ignite allows plugging in custom
> > > affinity functions, the affinity function is not something that changes
> > > dynamically and should always return the same partition for the same
> > > key. So, the partition assignments are not volatile at all. If, in some
> > > very rare case, the partition assignment logic needs to change, then
> > > you could update the partition assignments that you may have persisted
> > > elsewhere as well, e.g. a database.
> > >
> > > D.
> > >
> > > On Mon, Nov 14, 2016 at 10:23 AM, Vladimir Ozerov <
> voze...@gridgain.com>
> > > wrote:
> > >
> > > > Alexandr, Alexey,
> > > >
> > > > While I agree with you that current cache loading logic is far from
> > > ideal,
> > > > it would be cool to see API drafts based on your suggestions to get
> > > better
> > > > understanding of your ideas. How exactly users are going to use your
> > > > suggestions?
> > > >
> > > > My main concern is that the initial load is not a trivial task in
> > > > the general case. Some users have centralized RDBMS systems, some
> > > > have NoSQL, others work with distributed persistent stores (e.g.
> > > > HDFS). Sometimes we have Ignite nodes "near" the persistent data,
> > > > sometimes we don't. Sharding, affinity, co-location, etc. If we try
> > > > to support all (or many) cases out of the box, we may end up with a
> > > > very messy and difficult API. So we should carefully balance
> > > > simplicity, usability and feature richness here.
> > > >
> > > > Personally, I think that if a user is not satisfied with the
> > > > "loadCache()" API, he can just write a simple closure with a
> > > > blackjack streamer and queries and send it to whatever node he finds
> > > > convenient. Not a big deal. Only very common cases should be added to
> > > > the Ignite API.
> > > >
> > > > Vladimir.
> > > >
> > > >
> > > > On Mon, Nov 14, 2016 at 12:43 PM, Alexey Kuznetsov <
> > > > akuznet...@gridgain.com>
> > > > wrote:
> > > >
> > > > > Looks good to me.
> > > > >
> > > > > But I would suggest considering one more use case:
> > > >

Re: Apache Ignite 1.8 Release

2016-11-15 Thread Pavel Tupitsyn
Denis, [1] depends on [2], and [2] (.NET: CacheEntryProcessor binary mode)
is not a simple thing. We won't be able to do that for 1.8.
Other than that, I'll try to fit as many of them as I can. But I can't
answer your question since I don't see any date yet.

By the way, you were going to help with the reviews.

[1] https://issues.apache.org/jira/browse/IGNITE-4128
[2] https://issues.apache.org/jira/browse/IGNITE-3825

On Tue, Nov 15, 2016 at 4:03 AM, Denis Magda  wrote:

> *Alexander P., Igor S.,*
>
> When will you merge all the DML and ODBC (PDO) related changes into the 1.8
> branch? I'm looking forward to going through the PDO [1] documentation and
> making sure that everything works as described on my side.
>
> *Pavel,*
>
> Do you think it will be possible to complete all the .NET usability
> tickets [2] under 1.8 and roll them out to the Apache Ignite users?
>
> [1] https://issues.apache.org/jira/browse/IGNITE-3921
> [2] https://issues.apache.org/jira/browse/IGNITE-4114
>
> —
> Denis
>
> On Nov 9, 2016, at 6:55 AM, Denis Magda  wrote:
>
> Do we have a branch for ignite-1.8? Is there anyone who can take over the
> release process of 1.8?
>
> —
> Denis
>
> On Nov 8, 2016, at 9:01 PM, Alexander Paschenko <
> alexander.a.pasche...@gmail.com> wrote:
>
> Current status on DML:
>
> - Basic data streamer support implemented (basicness is mostly about
> configuration - say, currently there's no way to specify streamer's
> batch size via JDBC driver, but this can be improved easily).
>
> - Fixed all minor stuff agreed with Vladimir.
>
> - There are some tests that started failing after binary hash codes
> generation rework made by Vladimir in ignite-4011-1 branch, I will ask
> him to look into it and fix those. Failing tests live in
> GridCacheBinaryObjectsAbstractSelfTest, and are as follows:
> - testPutWithFieldsHashing
> - testCrossFormatObjectsIdentity
> - testPutWithCustomHashing
> I added them personally while working on the first version of auto
> hashing a few weeks ago, and what they do is test these very hashing
> features. Again, prior to Vlad's rework those tests passed. So could
> you please take a look?
>
> - Working on Sergey V.'s comments about current code.
>
> - Alex
>
>
>
>


Re: IGNITE-3066 Set of Redis commands that can be easily implemented via existing REST commands

2016-11-15 Thread Andrey Novikov
Roman,

I reviewed your code and added comments in JIRA.

Maybe we can try to use Upsource (http://reviews.ignite.apache.org/) for code
review?


On Tue, Nov 15, 2016 at 1:22 PM, Roman Shtykh 
wrote:

> Alexey,
> Thank you for your thorough reviews! I fixed the issues.
> -Roman
>
>
> On Tuesday, November 15, 2016 12:32 PM, Alexey Kuznetsov <
> akuznet...@apache.org> wrote:
>
>
>  Roman,
>
> I reviewed your code and now it looks good to me.
> But I added two minor comments in JIRA.
>
> Also I think Andrey Novikov should take a look, as he has some experience
> in ignite-rest module.
>
> Andrey, take a look:
>
> Issue: https://issues.apache.org/jira/browse/IGNITE-3066
> PR:  https://github.com/apache/ignite/pull/1212
>
>
> On Tue, Nov 15, 2016 at 9:27 AM, Roman Shtykh 
> wrote:
>
> > Alexey,
> > Thank you! I answered and pushed the changes.
> > -Roman
> >
> >
> >On Tuesday, November 15, 2016 12:14 AM, Alexey Kuznetsov <
> > akuznet...@apache.org> wrote:
> >
> >
> >  Roman,
> >
> > I made one more review,  see my comments in JIRA issue.
> >
> > On Mon, Nov 7, 2016 at 1:30 PM, Alexey Kuznetsov 
> > wrote:
> >
> > > I will take a look on PR today.
> > >
> > > On Mon, Nov 7, 2016 at 11:35 AM, Roman Shtykh
>  > >
> > > wrote:
> > >
> > >>  Denis,
> > >> It is https://github.com/apache/ignite/pull/1212
> > >>
> > >> Thank you,
> > >> Roman
> > >>
> > >>
> > >>On Saturday, November 5, 2016 4:56 AM, Denis Magda <
> > >> dma...@gridgain.com> wrote:
> > >>
> > >>
> > >>  Roman,
> > >>
> > >> Would you mind making a pull request? It's neither clear nor easy to
> > >> review using the branch you provided
> > >> https://github.com/apache/ignite/tree/ignite-2788 <
> > >> https://github.com/apache/ignite/tree/ignite-2788>
> > >>
> > >> This link provides details how to achieve this
> > >> https://cwiki.apache.org/confluence/display/IGNITE/How+to+
> > >> Contribute#HowtoContribute-1.CreateGitHubpull-request <
> > >> https://cwiki.apache.org/confluence/display/IGNITE/How+to+
> > >> Contribute#HowtoContribute-1.CreateGitHubpull-request>
> > >>
> > >> Let us know if you have any issue preparing the pull-request.
> > >>
> > >> —
> > >> Denis
> > >>
> > >> > On Nov 3, 2016, at 6:24 PM, Roman Shtykh  >
> > >> wrote:
> > >> >
> > >> > Igniters,
> > >> > Please review the issue: https://issues.apache.org/jira/browse/IGNITE-3066
> > >> >
> > >> > Thank you,
> > >> > Roman
> > >>
> > >>
> > >>
> > >>
> > >
> > >
> > >
> > > --
> > > Alexey Kuznetsov
> > >
> >
> >
> >
> > --
> > Alexey Kuznetsov
> >
> >
> >
>
>
>
> --
> Alexey Kuznetsov
>
>
>


[GitHub] ignite pull request #1217: IGNITE-4120: Deadlock Detection Example

2016-11-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/1217




[jira] [Created] (IGNITE-4224) Resolve JdbcQueryTask compatibility issues

2016-11-15 Thread Alexander Paschenko (JIRA)
Alexander Paschenko created IGNITE-4224:
---

 Summary: Resolve JdbcQueryTask compatibility issues
 Key: IGNITE-4224
 URL: https://issues.apache.org/jira/browse/IGNITE-4224
 Project: Ignite
  Issue Type: Sub-task
Reporter: Alexander Paschenko
Assignee: Alexander Paschenko


The suggested solution is to move the disturbing changes into a separate class, 
deprecate the old one (destined for deletion in Ignite 2.0), and send the new 
kind of task only to compatible nodes (those having version 1.8.0 or newer).
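The version gating described here can be sketched as follows. The version-comparison helper and the task names are illustrative assumptions for this ticket, not the actual Ignite classes or its internal version type:

```java
public class VersionGate {
    /** Compare dotted version strings numerically, e.g. "1.7.3" vs "1.8.0". */
    static int compare(String a, String b) {
        String[] x = a.split("\\."), y = b.split("\\.");
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
            int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }

    /** Pick the task class to send: the new one for 1.8.0+, the deprecated legacy one otherwise. */
    static String taskFor(String nodeVersion) {
        return compare(nodeVersion, "1.8.0") >= 0
            ? "JdbcQueryTaskV2"   // hypothetical name for the new class
            : "JdbcQueryTask";    // deprecated legacy class
    }

    public static void main(String[] args) {
        System.out.println(taskFor("1.7.3")); // legacy task for an old node
        System.out.println(taskFor("1.8.0")); // new task for a compatible node
    }
}
```

The sender would perform this check per target node, so a mixed-version cluster keeps working while old nodes remain on the deprecated task format.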





[jira] [Created] (IGNITE-4225) DataStreamer can hang on changing topology

2016-11-15 Thread Anton Vinogradov (JIRA)
Anton Vinogradov created IGNITE-4225:


 Summary: DataStreamer can hang on changing topology
 Key: IGNITE-4225
 URL: https://issues.apache.org/jira/browse/IGNITE-4225
 Project: Ignite
  Issue Type: Bug
Reporter: Anton Vinogradov
Assignee: Anton Vinogradov
Priority: Critical


Hang reason:

Exchange cannot happen because some data streamer futures are not finished:

{noformat}
Pending data streamer futures:
[12:17:28,427][WARN 
][exchange-worker-#106%distributed.CacheLoadingConcurrentGridStartSelfTest2%][GridCachePartitionExchangeManager]
 >>> DataStreamerFuture [topVer=AffinityTopologyVersion [topVer=5, 
minorTopVer=0], super=GridFutureAdapter [resFlag=0, res=null, 
startTime=1479201428401, endTime=0, ignoreInterrupts=false, state=INIT]]
{noformat}

Reason the futures are not finished:

{noformat}
- parking to wait for  <0x000792e050b0> (a 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache$AffinityReadyFuture)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:160)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:118)
at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.awaitTopologyVersion(GridAffinityAssignmentCache.java:538)
at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:449)
at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.nodes(GridAffinityAssignmentCache.java:402)
at 
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.nodes(GridCacheAffinityManager.java:259)
at 
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primary(GridCacheAffinityManager.java:295)
at 
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primary(GridCacheAffinityManager.java:286)
at 
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primary(GridCacheAffinityManager.java:310)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:1948)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:370)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:297)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:56)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:86)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1080)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:708)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:101)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:671)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

Possible solution:
Use the topology instead of affinity to detect whether the node is primary:
{noformat}
boolean primary = cctx.affinity().primary(cctx.localNode(), entry.key(), 
topVer);
{noformat}





[GitHub] ignite pull request #1230: IGNITE-4134 .NET: Add CacheConfiguration.ExpiryPo...

2016-11-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/1230




Re: Apache Ignite 1.8 Release

2016-11-15 Thread Igor Sapego
Denis,

I can merge the PDO-related changes into 1.8, but without DML they will break
tests and even compilation, so I don't see any sense in doing that before DML
is merged.

After DML is ready and merged, I'll need some time to merge my changes and
check that everything works as intended. The code itself, tests and examples
are ready.


Best Regards,
Igor

On Tue, Nov 15, 2016 at 11:31 AM, Pavel Tupitsyn 
wrote:

> Denis, [1] depends on [2], and [2] (.NET: CacheEntryProcessor binary mode)
> is not a simple thing. We won't be able to do that for 1.8.
> Other than that, I'll try to fit as many of them as I can. But I can't
> answer your question since I don't see any date yet.
>
> By the way, you were going to help with the reviews.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-4128
> [2] https://issues.apache.org/jira/browse/IGNITE-3825
>
> On Tue, Nov 15, 2016 at 4:03 AM, Denis Magda  wrote:
>
> > *Alexander P., Igor S.,*
> >
> > When will you merge all the DML and ODBC (PDO) related changes into the
> > 1.8 branch? I'm looking forward to going through the PDO [1]
> > documentation and making sure that everything works as described on my
> > side.
> >
> > *Pavel,*
> >
> > Do you think it will be possible to complete all the .NET usability
> > tickets [2] under 1.8 and roll them out to the Apache Ignite users?
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-3921
> > [2] https://issues.apache.org/jira/browse/IGNITE-4114
> >
> > —
> > Denis
> >
> > On Nov 9, 2016, at 6:55 AM, Denis Magda  wrote:
> >
> > Do we have a branch for ignite-1.8? Is there anyone who can take over the
> > release process of 1.8?
> >
> > —
> > Denis
> >
> > On Nov 8, 2016, at 9:01 PM, Alexander Paschenko <
> > alexander.a.pasche...@gmail.com> wrote:
> >
> > Current status on DML:
> >
> > - Basic data streamer support implemented (basicness is mostly about
> > configuration - say, currently there's no way to specify streamer's
> > batch size via JDBC driver, but this can be improved easily).
> >
> > - Fixed all minor stuff agreed with Vladimir.
> >
> > - There are some tests that started failing after binary hash codes
> > generation rework made by Vladimir in ignite-4011-1 branch, I will ask
> > him to look into it and fix those. Failing tests live in
> > GridCacheBinaryObjectsAbstractSelfTest, and are as follows:
> > - testPutWithFieldsHashing
> > - testCrossFormatObjectsIdentity
> > - testPutWithCustomHashing
> > I added them personally while working on the first version of auto
> > hashing a few weeks ago, and what they do is test these very hashing
> > features. Again, prior to Vlad's rework those tests passed. So could
> > you please take a look?
> >
> > - Working on Sergey V.'s comments about current code.
> >
> > - Alex
> >
> >
> >
> >
>


[jira] [Created] (IGNITE-4226) Redis SET command should handle expirations

2016-11-15 Thread Roman Shtykh (JIRA)
Roman Shtykh created IGNITE-4226:


 Summary: Redis SET command should handle expirations
 Key: IGNITE-4226
 URL: https://issues.apache.org/jira/browse/IGNITE-4226
 Project: Ignite
  Issue Type: Sub-task
Reporter: Roman Shtykh






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: IgniteCache.loadCache improvement proposal

2016-11-15 Thread Alexandr Kuramshin
Hi Vladimir,

I'm not proposing any API changes. The usage scenario is the same as
described in
https://apacheignite.readme.io/docs/persistent-store#section-loadcache-

The cache preload logic invokes IgniteCache.loadCache() with some
additional arguments, depending on the CacheStore implementation, and then
the loading occurs in the way I've already described.


2016-11-15 11:26 GMT+03:00 Vladimir Ozerov :

> Hi Alex,
>
> >>> Let's give the user the reusable code which is convenient, reliable and
> fast.
> Convenience - this is why I asked for example on how API can look like and
> how users are going to use it.
>
> Vladimir.
>
> On Tue, Nov 15, 2016 at 11:18 AM, Alexandr Kuramshin  >
> wrote:
>
> > Hi all,
> >
> > I think the discussion is going in the wrong direction. Certainly it's not a big
> > deal to implement some custom user logic to load the data into caches.
> But
> > Ignite framework gives the user some reusable code build on top of the
> > basic system.
> >
> > So the main question is: why do the developers let the user load caches
> > in a convenient way that has a totally non-optimal implementation?
> >
> > We could talk too much about different persistence storage types, but
> > whenever we initiate the loading with IgniteCache.loadCache the current
> > implementation imposes much overhead on the network.
> >
> > Partition-aware data loading may be used in some scenarios to avoid this
> > network overhead, but the users are compelled to do additional steps to
> > achieve this optimization: adding the column to tables, adding compound
> > indices including the added column, write a piece of repeatable code to
> > load the data in different caches in fault-tolerant fashion, etc.
> >
> > Let's give the user the reusable code which is convenient, reliable and
> > fast.
> >
> > 2016-11-14 20:56 GMT+03:00 Valentin Kulichenko <
> > valentin.kuliche...@gmail.com>:
> >
> > > Hi Aleksandr,
> > >
> > > Data streamer is already outlined as one of the possible approaches for
> > > loading the data [1]. Basically, you start a designated client node or
> > choose a leader among server nodes [1] and then use IgniteDataStreamer
> API
> > > to load the data. With this approach there is no need to have the
> > > CacheStore implementation at all. Can you please elaborate what
> > additional
> > > value are you trying to add here?
> > >
> > > [1] https://apacheignite.readme.io/docs/data-loading#
> ignitedatastreamer
> > > [2] https://apacheignite.readme.io/docs/leader-election
> > >
> > > -Val
> > >
> > > On Mon, Nov 14, 2016 at 8:23 AM, Dmitriy Setrakyan <
> > dsetrak...@apache.org>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I just want to clarify a couple of API details from the original
> email
> > to
> > > > make sure that we are making the right assumptions here.
> > > >
> > > > *"Because of none keys are passed to the CacheStore.loadCache
> methods,
> > > the
> > > > > underlying implementation is forced to read all the data from the
> > > > > persistence storage"*
> > > >
> > > >
> > > > According to the javadoc, loadCache(...) method receives an optional
> > > > argument from the user. You can pass anything you like, including a
> > list
> > > of
> > > > keys, or an SQL where clause, etc.
> > > >
> > > > *"The partition-aware data loading approach is not a choice. It
> > requires
> > > > > persistence of the volatile data depended on affinity function
> > > > > implementation and settings."*
> > > >
> > > >
> > > > This is only partially true. While Ignite allows to plugin custom
> > > affinity
> > > > functions, the affinity function is not something that changes
> > > dynamically
> > > and should always return the same partition for the same key. So, the
> > > > partition assignments are not volatile at all. If, in some very rare
> > > case,
> > > > the partition assignment logic needs to change, then you could update
> > the
> > > > partition assignments that you may have persisted elsewhere as well,
> > e.g.
> > > > database.
> > > >
> > > > D.
> > > >
> > > > On Mon, Nov 14, 2016 at 10:23 AM, Vladimir Ozerov <
> > voze...@gridgain.com>
> > > > wrote:
> > > >
> > > > > Alexandr, Alexey,
> > > > >
> > > > > While I agree with you that current cache loading logic is far from
> > > > ideal,
> > > > > it would be cool to see API drafts based on your suggestions to get
> > > > better
> > > > > understanding of your ideas. How exactly users are going to use
> your
> > > > > suggestions?
> > > > >
> > > > > My main concern is that initial load is not very trivial task in
> > > general
> > > > > case. Some users have centralized RDBMS systems, some have NoSQL,
> > > others
> > > > > work with distributed persistent stores (e.g. HDFS). Sometimes we
> > have
> > > > > Ignite nodes "near" persistent data, sometimes we don't. Sharding,
> > > > > affinity, co-location, etc.. If we try to support all (or many)
> cases
> > > out
> > > > > of the box, we may end up in very messy and difficult API. So we
> > should
> > > > > caref

Re: IgniteCache.loadCache improvement proposal

2016-11-15 Thread Alexey Kuznetsov
Hi, All!

I think we do not need to change the API at all.

public void loadCache(@Nullable IgniteBiPredicate<K, V> p, @Nullable
Object... args) throws CacheException;

We could pass any args to loadCache();

So we could create a class:

class IgniteCacheLoadDescriptor {
    // fields that describe what to load and how
}


and modify POJO store to detect and use such arguments.


All we need is to implement this and write good documentation and examples.

Thoughts?
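[Editorial note: for illustration, the descriptor-as-argument idea can be sketched in plain Java. Everything below is hypothetical: IgniteCacheLoadDescriptor and its fields are the proposed (not yet existing) class, and the store is reduced to a minimal self-contained stand-in rather than the real CacheStore API.]

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

/** Hypothetical descriptor from the proposal above; the fields are illustrative. */
class IgniteCacheLoadDescriptor {
    final String table;     // source table to read from
    final int partition;    // partition to load, or -1 for all

    IgniteCacheLoadDescriptor(String table, int partition) {
        this.table = table;
        this.partition = partition;
    }
}

/** Minimal stand-in for a POJO store showing how loadCache(clo, args) could detect descriptors. */
class PojoStoreSketch {
    static Map<String, String> loadCache(BiConsumer<String, String> clo, Object... args) {
        Map<String, String> loaded = new HashMap<>();

        for (Object arg : args) {
            if (arg instanceof IgniteCacheLoadDescriptor) {
                IgniteCacheLoadDescriptor d = (IgniteCacheLoadDescriptor) arg;

                // A real store would build a partition-aware query from the
                // descriptor; here we only record what would be queried.
                loaded.put(d.table, "partition=" + d.partition);
            }
            // Non-descriptor arguments are ignored, preserving backward compatibility.
        }

        loaded.forEach(clo);
        return loaded;
    }
}
```

The point of the sketch is that no signature changes; only the interpretation of the existing varargs.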

On Tue, Nov 15, 2016 at 5:22 PM, Alexandr Kuramshin 
wrote:

> Hi Vladimir,
>
> I don't offer any changes in API. Usage scenario is the same as it was
> described in
> https://apacheignite.readme.io/docs/persistent-store#section-loadcache-
>
> The preload cache logic invokes IgniteCache.loadCache() with some
> additional arguments, depending on a CacheStore implementation, and then
> the loading occurs in the way I've already described.
>
>
> 2016-11-15 11:26 GMT+03:00 Vladimir Ozerov :
>
> > Hi Alex,
> >
> > >>> Let's give the user the reusable code which is convenient, reliable
> and
> > fast.
> > Convenience - this is why I asked for example on how API can look like
> and
> > how users are going to use it.
> >
> > Vladimir.
> >
> > On Tue, Nov 15, 2016 at 11:18 AM, Alexandr Kuramshin <
> ein.nsk...@gmail.com
> > >
> > wrote:
> >
> > > Hi all,
> > >
> > > I think the discussion goes a wrong direction. Certainly it's not a big
> > > deal to implement some custom user logic to load the data into caches.
> > But
> > > Ignite framework gives the user some reusable code build on top of the
> > > basic system.
> > >
> > > So the main question is: Why developers let the user to use convenient
> > way
> > > to load caches with totally non-optimal solution?
> > >
> > > We could talk too much about different persistence storage types, but
> > > whenever we initiate the loading with IgniteCache.loadCache the current
> > > implementation imposes much overhead on the network.
> > >
> > > Partition-aware data loading may be used in some scenarios to avoid
> this
> > > network overhead, but the users are compelled to do additional steps to
> > > achieve this optimization: adding the column to tables, adding compound
> > > indices including the added column, write a piece of repeatable code to
> > > load the data in different caches in fault-tolerant fashion, etc.
> > >
> > > Let's give the user the reusable code which is convenient, reliable and
> > > fast.
> > >
> > > 2016-11-14 20:56 GMT+03:00 Valentin Kulichenko <
> > > valentin.kuliche...@gmail.com>:
> > >
> > > > Hi Aleksandr,
> > > >
> > > > Data streamer is already outlined as one of the possible approaches
> for
> > > > loading the data [1]. Basically, you start a designated client node
> or
> > > > chose a leader among server nodes [1] and then use IgniteDataStreamer
> > API
> > > > to load the data. With this approach there is no need to have the
> > > > CacheStore implementation at all. Can you please elaborate what
> > > additional
> > > > value are you trying to add here?
> > > >
> > > > [1] https://apacheignite.readme.io/docs/data-loading#
> > ignitedatastreamer
> > > > [2] https://apacheignite.readme.io/docs/leader-election
> > > >
> > > > -Val
> > > >
> > > > On Mon, Nov 14, 2016 at 8:23 AM, Dmitriy Setrakyan <
> > > dsetrak...@apache.org>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I just want to clarify a couple of API details from the original
> > email
> > > to
> > > > > make sure that we are making the right assumptions here.
> > > > >
> > > > > *"Because of none keys are passed to the CacheStore.loadCache
> > methods,
> > > > the
> > > > > > underlying implementation is forced to read all the data from the
> > > > > > persistence storage"*
> > > > >
> > > > >
> > > > > According to the javadoc, loadCache(...) method receives an
> optional
> > > > > argument from the user. You can pass anything you like, including a
> > > list
> > > > of
> > > > > keys, or an SQL where clause, etc.
> > > > >
> > > > > *"The partition-aware data loading approach is not a choice. It
> > > requires
> > > > > > persistence of the volatile data depended on affinity function
> > > > > > implementation and settings."*
> > > > >
> > > > >
> > > > > This is only partially true. While Ignite allows to plugin custom
> > > > affinity
> > > > > functions, the affinity function is not something that changes
> > > > dynamically
> > > > > and should always return the same partition for the same key.So,
> the
> > > > > partition assignments are not volatile at all. If, in some very
> rare
> > > > case,
> > > > > the partition assignment logic needs to change, then you could
> update
> > > the
> > > > > partition assignments that you may have persisted elsewhere as
> well,
> > > e.g.
> > > > > database.
> > > > >
> > > > > D.
> > > > >
> > > > > On Mon, Nov 14, 2016 at 10:23 AM, Vladimir Ozerov <
> > > voze...@gridgain.com>
> > > > > wrote:
> > > > >
> > > > > > Alexandr, Alexey,
> > > > > >
> > > > > > While I agree with you that c

[GitHub] ignite pull request #1234: IGNITE-4125 .NET: MultiTieredCacheExample added

2016-11-15 Thread ptupitsyn
GitHub user ptupitsyn opened a pull request:

https://github.com/apache/ignite/pull/1234

IGNITE-4125 .NET: MultiTieredCacheExample added



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ptupitsyn/ignite ignite-4125

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1234.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1234


commit 034bb05a85f64a829a14b48a0166437e2118fe70
Author: Pavel Tupitsyn 
Date:   2016-11-02T13:09:41Z

IGNITE-4125 .NET: Tiered cache example

commit 1ecc7859e5d1abde2569535a354dc635299591cc
Author: Pavel Tupitsyn 
Date:   2016-11-02T13:14:47Z

wip

commit 1d243ece67f3ccea82f2de6da100b9a1336da108
Author: Pavel Tupitsyn 
Date:   2016-11-02T13:32:09Z

blocked by IGNITE-4126

commit dc70e67179ee24b8f5f9556c2476f7fc1d08cf7c
Author: Pavel Tupitsyn 
Date:   2016-11-02T15:09:30Z

wip

commit 7a0dca480ddf961fcbd8d895c3e2241eadc7a14b
Author: Pavel Tupitsyn 
Date:   2016-11-15T09:24:08Z

Merge branch 'master' into ignite-4125

# Conflicts:
#   
modules/platforms/dotnet/examples/Apache.Ignite.Examples/Apache.Ignite.Examples.csproj

commit 737d05258805c4ccf0a39b05dd0ca65763e5f199
Author: Pavel Tupitsyn 
Date:   2016-11-15T09:46:15Z

Merge branch 'master' into ignite-4125

commit 803bccf2552fd997f1ee3968c7f86065711f9db5
Author: Pavel Tupitsyn 
Date:   2016-11-15T09:54:12Z

Configure file swap

commit cfa090bd6cab91feaf0ac1570d85882bab88106e
Author: Pavel Tupitsyn 
Date:   2016-11-15T10:39:34Z

Remove swap




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] ignite pull request #1197: IGNITE-3191 BinaryObjectBuilder: binary schema id...

2016-11-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/1197




Re: Apache Ignite 1.8 Release

2016-11-15 Thread Vladimir Ozerov
Folks,
As DML is the main feature for now, I propose to create the *ignite-1.8* branch
when DML is ready. If nobody minds, I will do that as soon as DML is merged
and start the vote once the remaining minor things are cleared.
Also, I would like to remind you that we still have a lot of tickets assigned
to version 1.8. Please look through the tickets assigned to you and move them
to a later version if you do not think they will be ready soon.

Alexander,
Do you have any estimates on when we can expect DML to be ready for merge?

Igor,
I looked at DML code and it appears to be almost ready at the moment. I
think you can simply merge it into your PRs and continue with testing even
before DML is officially merged.

Vladimir.

On Tue, Nov 15, 2016 at 12:45 PM, Igor Sapego  wrote:

> Denis,
>
> I can merge PDO-related changes into 1.8 but without DML they will break
> tests
> and even compilation so I don't see any sense in doing that before DML is
> merged.
>
> After DML is ready and merged I'll need some time to merge my changes and
> check
> that everything works as intended. The code itself, tests and examples are
> ready.
>
>
> Best Regards,
> Igor
>
> On Tue, Nov 15, 2016 at 11:31 AM, Pavel Tupitsyn 
> wrote:
>
> > Denis, [1] depends on [2], and [2](.NET: CacheEntryProcessor binary mode)
> > is not a simple thing. We won't be able to do that for 1.8.
> > Other than that, I'll try to fit as many of them as I can. But I can't
> > answer your question since I don't see any date yet.
> >
> > By the way, you were going to help with the reviews.
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-4128
> > [2] https://issues.apache.org/jira/browse/IGNITE-3825
> >
> > On Tue, Nov 15, 2016 at 4:03 AM, Denis Magda  wrote:
> >
> > > *Alexander P., Igor S.,*
> > >
> > > When will you merge all DML and ODBC (PDO) related changes into 1.8
> > > branch? I’m looking forward to go through PDO [1] documentation and be
> > sure
> > > that everything works as described on my side.
> > >
> > > *Pavel,*
> > >
> > > Do you think it will be possible to complete all the .NET usability
> > > tickets [2] under 1.8 and roll them out to the Apache Ignite users?
> > >
> > > [1] https://issues.apache.org/jira/browse/IGNITE-3921
> > > [2] https://issues.apache.org/jira/browse/IGNITE-4114
> > >
> > > —
> > > Denis
> > >
> > > On Nov 9, 2016, at 6:55 AM, Denis Magda  wrote:
> > >
> > > Do we have a branch for ignite-1.8? Is there anyone who can take over
> the
> > > release process of 1.8?
> > >
> > > —
> > > Denis
> > >
> > > On Nov 8, 2016, at 9:01 PM, Alexander Paschenko <
> > > alexander.a.pasche...@gmail.com> wrote:
> > >
> > > Current status on DML:
> > >
> > > - Basic data streamer support implemented (basicness is mostly about
> > > configuration - say, currently there's no way to specify streamer's
> > > batch size via JDBC driver, but this can be improved easily).
> > >
> > > - Fixed all minor stuff agreed with Vladimir.
> > >
> > > - There are some tests that started failing after binary hash codes
> > > generation rework made by Vladimir in ignite-4011-1 branch, I will ask
> > > him to look into it and fix those. Failing tests live in
> > > GridCacheBinaryObjectsAbstractSelfTest, and are as follows:
> > > - testPutWithFieldsHashing
> > > - testCrossFormatObjectsIdentity
> > > - testPutWithCustomHashing
> > > I added them personally during working on first version of auto
> > > hashing few weeks ago, and what they do is test these very hashing
> > > features. Again, prior to Vlad's rework those tests passed. So could
> > > you please take a look?
> > >
> > > - Working on Sergey V.'s comments about current code.
> > >
> > > - Alex
> > >
> > >
> > >
> > >
> >
>


[jira] [Created] (IGNITE-4227) ODBC: Implement SQLError function

2016-11-15 Thread Igor Sapego (JIRA)
Igor Sapego created IGNITE-4227:
---

 Summary: ODBC: Implement SQLError function
 Key: IGNITE-4227
 URL: https://issues.apache.org/jira/browse/IGNITE-4227
 Project: Ignite
  Issue Type: Task
  Components: odbc
Affects Versions: 1.7
Reporter: Igor Sapego
Assignee: Igor Sapego
 Fix For: 1.8


Some driver managers use this function even though {{SQLGetDiagRec}} was called.





Service proxy API changes

2016-11-15 Thread Dmitriy Karachentsev
Hi Igniters!

I'd like to modify our public API and add an IgniteServices.serviceProxy()
method overload with a timeout argument as part of the task
https://issues.apache.org/jira/browse/IGNITE-3862

In short, without a timeout, in case of a serialization error or the like,
service acquisition may hang and log errors indefinitely.

Do you have any concerns about this change?

Thanks!
Dmitry.
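[Editorial note: the timeout semantics proposed above could look like the following self-contained sketch. This is not Ignite code; the retry loop, the deadline check, and the wrapping of the last failure are assumptions about how a timeout-aware proxy invocation might behave.]

```java
import java.util.concurrent.Callable;

/** Sketch of the retry loop a timeout-aware serviceProxy() invocation could use. */
class ProxyRetrySketch {
    /**
     * Retries {@code call} until it succeeds or {@code timeoutMs} elapses.
     * Hypothetical semantics: the last failure becomes the cause of the timeout error.
     */
    static <T> T invokeWithTimeout(Callable<T> call, long timeoutMs) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        Exception last = null;

        do {
            try {
                return call.call(); // e.g. the remote service invocation
            }
            catch (Exception e) {
                last = e; // service not deployed yet, topology change, etc.
            }
        }
        while (System.currentTimeMillis() < deadline);

        // Instead of retrying (and logging) forever, surface the failure to the caller.
        throw new Exception("Service invocation timed out after " + timeoutMs + " ms", last);
    }
}
```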


Re: Service proxy API changes

2016-11-15 Thread Vladimir Ozerov
Dmitriy,

Shouldn't we report the serialization problem properly instead? We already had
a problem where a node hung during job execution when it was impossible
to deserialize the job on the receiver side. It was resolved properly: we
caught the exception on the receiver side and reported it back to the sender.

I believe we must do the same for services. Otherwise we may end up with a
messy API which doesn't solve the original problem.

Vladimir.

On Tue, Nov 15, 2016 at 4:38 PM, Dmitriy Karachentsev <
dkarachent...@gridgain.com> wrote:

> Hi Igniters!
>
> I'd like to modify our public API and add IgniteServices.serviceProxy()
> method with timeout argument as part of the task
> https://issues.apache.org/jira/browse/IGNITE-3862
>
> In short, without timeout, in case of serialization error, or so, service
> acquirement may hang and infinitely log errors.
>
> Do you have any concerns about this change?
>
> Thanks!
> Dmitry.
>


Re: Service proxy API changes

2016-11-15 Thread Vladimir Ozerov
Also we implemented the same thing for platforms some time ago. In short,
job result processing was implemented as follows (pseudocode):

// Execute.
Object res;

try {
    res = job.run();
}
catch (Exception e) {
    res = e;
}

// Serialize the result.
try {
    SERIALIZE(res);
}
catch (Exception e) {
    try {
        // Serialize the serialization error itself.
        SERIALIZE(new IgniteException("Failed to serialize result.", e));
    }
    catch (Exception e2) {
        // Cannot serialize the serialization error, so pass only its string form.
        SERIALIZE(new IgniteException("Failed to serialize result: " + e.getMessage()));
    }
}

On Tue, Nov 15, 2016 at 5:05 PM, Vladimir Ozerov 
wrote:

> Dmitriy,
>
> Shouldn't we report serialization problem properly instead? We already had
> a problem when node hanged during job execution in case it was impossible
> to deserialize the job on receiver side. It was resolved properly - we
> caught exception on receiver side and reported it back sender side.
>
> I believe we must do the same for services. Otherwise we may end up in
> messy API which doesn't resolve original problem.
>
> Vladimir.
>
> On Tue, Nov 15, 2016 at 4:38 PM, Dmitriy Karachentsev <
> dkarachent...@gridgain.com> wrote:
>
>> Hi Igniters!
>>
>> I'd like to modify our public API and add IgniteServices.serviceProxy()
>> method with timeout argument as part of the task
>> https://issues.apache.org/jira/browse/IGNITE-3862
>>
>> In short, without timeout, in case of serialization error, or so, service
>> acquirement may hang and infinitely log errors.
>>
>> Do you have any concerns about this change?
>>
>> Thanks!
>> Dmitry.
>>
>
>


Re: Request for review IGNITE-4198

2016-11-15 Thread Vladimir Ozerov
Hi Roman,

Reviewed. One minor comment from my side in the ticket.

Vladimir.

On Tue, Nov 15, 2016 at 5:30 AM, Roman Shtykh 
wrote:

> Hi Vladimir,
> Thanks a lot for the review! I pushed the changes. Please let me know if it
> is good to merge now.
> -Roman
>
>
> On Friday, November 11, 2016 11:50 PM, Vladimir Ozerov <
> voze...@gridgain.com> wrote:
>
>
>  Hi Roman,
>
> Reviewed. My comments are in the ticket.
>
> Vladimir.
>
> On Thu, Nov 10, 2016 at 8:00 AM, Roman Shtykh 
> wrote:
>
> > Igniters,
> > Can anyone have a look and comment on this issue?
> > IGNITE-4198: Kafka Connect sink option to transform Kafka values.
> > https://issues.apache.org/jira/browse/IGNITE-4198
> >
> > -Roman
> >
>
>
>
>


[GitHub] ignite pull request #1216: IGNITE-4175: SQL: JdbcResultSet class wasNull() m...

2016-11-15 Thread AMashenkov
Github user AMashenkov closed the pull request at:

https://github.com/apache/ignite/pull/1216




[GitHub] ignite pull request #1235: IGNITE-4175: SQL: JdbcResultSet class wasNull() m...

2016-11-15 Thread AMashenkov
GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/1235

IGNITE-4175: SQL: JdbcResultSet class wasNull() method should return true 
on NULL fields 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-4175

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1235.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1235


commit 9aea27a6d0f772b0b74d2e5779b3a0fbd45b85a2
Author: Andrey V. Mashenkov 
Date:   2016-11-03T19:10:41Z

Trivial fix.

JdbcResultSet.wasNull() method should return null on column NULL value.

commit 10b1b0abf9e83fc65b421dd91ef66889af8d3282
Author: Andrey V. Mashenkov 
Date:   2016-11-10T11:30:51Z

Minors






Re: Service proxy API changes

2016-11-15 Thread Dmitriy Karachentsev
Vladimir, thanks for your reply!

What you suggest definitely makes sense and looks like the more reasonable
solution.
But one other thing remains. The second issue that this solution solves is
preventing log pollution with GridServiceNotFoundException. It happens because
GridServiceProxy#invokeMethod() is designed to catch that exception and
ClusterTopologyCheckedException and retry over and over until the service
becomes available, but in the meantime tons of stack traces are printed on
the remote node.

If we want to avoid changing the API, we probably need to add an option to
mute those exceptions in GridJobWorker.

What do you think?

On Tue, Nov 15, 2016 at 5:10 PM, Vladimir Ozerov 
wrote:

> Also we implemented the same thing for platforms some time ago. In short,
> job result processing was implemented as follows (pseudocode):
>
> // Execute.
> Object res;
>
> try {
> res = job.run();
> }
> catch (Exception e) {
> res = e
> }
>
> // Serialize result.
> try {
> SERIALIZE(res);
> }
> catch (Exception e) {
> try{
> // Serialize serialization error.
> SERIALIZE(new IgniteException("Failed to serialize result.", e));
> }
> catch (Exception e) {
> // Cannot serialize serialization error, so pass only string to
> exception.
> SERIALIZE(new IgniteException("Failed to serialize result: " +
> e.getMessage());
> }
> }
>
> On Tue, Nov 15, 2016 at 5:05 PM, Vladimir Ozerov 
> wrote:
>
> > Dmitriy,
> >
> > Shouldn't we report serialization problem properly instead? We already
> had
> > a problem when node hanged during job execution in case it was impossible
> > to deserialize the job on receiver side. It was resolved properly - we
> > caught exception on receiver side and reported it back sender side.
> >
> > I believe we must do the same for services. Otherwise we may end up in
> > messy API which doesn't resolve original problem.
> >
> > Vladimir.
> >
> > On Tue, Nov 15, 2016 at 4:38 PM, Dmitriy Karachentsev <
> > dkarachent...@gridgain.com> wrote:
> >
> >> Hi Igniters!
> >>
> >> I'd like to modify our public API and add IgniteServices.serviceProxy()
> >> method with timeout argument as part of the task
> >> https://issues.apache.org/jira/browse/IGNITE-3862
> >>
> >> In short, without timeout, in case of serialization error, or so,
> service
> >> acquirement may hang and infinitely log errors.
> >>
> >> Do you have any concerns about this change?
> >>
> >> Thanks!
> >> Dmitry.
> >>
> >
> >
>


Re: Service proxy API changes

2016-11-15 Thread Vladimir Ozerov
To avoid log pollution we usually use the LT class (an alias for
GridLogThrottle). Please check whether it can help you.
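[Editorial note: for readers unfamiliar with GridLogThrottle, the idea can be sketched without Ignite internals as a map from message to last-logged timestamp. This is an illustrative stand-in, not the actual LT implementation or its API.]

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal log-throttle sketch in the spirit of GridLogThrottle: suppress repeats within an interval. */
class LogThrottleSketch {
    private final long intervalMs;
    private final Map<String, Long> lastLogged = new ConcurrentHashMap<>();

    LogThrottleSketch(long intervalMs) {
        this.intervalMs = intervalMs;
    }

    /** Returns true if the message should actually be logged now, false if it is throttled. */
    boolean shouldLog(String msg, long nowMs) {
        Long prev = lastLogged.get(msg);

        if (prev != null && nowMs - prev < intervalMs)
            return false; // Same message seen recently: suppress it.

        lastLogged.put(msg, nowMs);

        return true;
    }
}
```

With a throttle like this, the retry loop in the service proxy would still log the failure, but only once per interval instead of on every retry.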

On Tue, Nov 15, 2016 at 5:48 PM, Dmitriy Karachentsev <
dkarachent...@gridgain.com> wrote:

> Vladimir, thanks for your reply!
>
> What you suggest definitely makes sense and it looks more reasonable
> solution.
> But there remains other thing. The second issue, that this solution solves,
> is prevent log pollution with GridServiceNotFoundException. The reason why
> it happens is because GridServiceProxy#invokeMethod() designed to catch it
> and ClusterTopologyCheckedException and retry over and over again unless
> service is become available, but on the same time there are tons of stack
> traces printed on remote node.
>
> If we want to avoid changing API, probably we need to add option to mute
> that exceptions in GridJobWorker.
>
> What do you think?
>
> On Tue, Nov 15, 2016 at 5:10 PM, Vladimir Ozerov 
> wrote:
>
> > Also we implemented the same thing for platforms some time ago. In short,
> > job result processing was implemented as follows (pseudocode):
> >
> > // Execute.
> > Object res;
> >
> > try {
> > res = job.run();
> > }
> > catch (Exception e) {
> > res = e
> > }
> >
> > // Serialize result.
> > try {
> > SERIALIZE(res);
> > }
> > catch (Exception e) {
> > try{
> > // Serialize serialization error.
> > SERIALIZE(new IgniteException("Failed to serialize result.", e));
> > }
> > catch (Exception e) {
> > // Cannot serialize serialization error, so pass only string to
> > exception.
> > SERIALIZE(new IgniteException("Failed to serialize result: " +
> > e.getMessage());
> > }
> > }
> >
> > On Tue, Nov 15, 2016 at 5:05 PM, Vladimir Ozerov 
> > wrote:
> >
> > > Dmitriy,
> > >
> > > Shouldn't we report serialization problem properly instead? We already
> > had
> > > a problem when node hanged during job execution in case it was
> impossible
> > > to deserialize the job on receiver side. It was resolved properly - we
> > > caught exception on receiver side and reported it back sender side.
> > >
> > > I believe we must do the same for services. Otherwise we may end up in
> > > messy API which doesn't resolve original problem.
> > >
> > > Vladimir.
> > >
> > > On Tue, Nov 15, 2016 at 4:38 PM, Dmitriy Karachentsev <
> > > dkarachent...@gridgain.com> wrote:
> > >
> > >> Hi Igniters!
> > >>
> > >> I'd like to modify our public API and add
> IgniteServices.serviceProxy()
> > >> method with timeout argument as part of the task
> > >> https://issues.apache.org/jira/browse/IGNITE-3862
> > >>
> > >> In short, without timeout, in case of serialization error, or so,
> > service
> > >> acquirement may hang and infinitely log errors.
> > >>
> > >> Do you have any concerns about this change?
> > >>
> > >> Thanks!
> > >> Dmitry.
> > >>
> > >
> > >
> >
>


[GitHub] ignite pull request #1236: IGNITE-4137 .NET: Atomic examples added

2016-11-15 Thread ptupitsyn
GitHub user ptupitsyn opened a pull request:

https://github.com/apache/ignite/pull/1236

IGNITE-4137 .NET: Atomic examples added



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ptupitsyn/ignite ignite-4137

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1236.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1236








Re: Service proxy API changes

2016-11-15 Thread Dmitriy Karachentsev
Perfect, thanks!

On Tue, Nov 15, 2016 at 5:56 PM, Vladimir Ozerov 
wrote:

> To avoid log pollution we usually use LT class (alias for GridLogThrottle).
> Please check if it can help you.
>
> On Tue, Nov 15, 2016 at 5:48 PM, Dmitriy Karachentsev <
> dkarachent...@gridgain.com> wrote:
>
> > Vladimir, thanks for your reply!
> >
> > What you suggest definitely makes sense and it looks more reasonable
> > solution.
> > But there remains other thing. The second issue, that this solution
> solves,
> > is prevent log pollution with GridServiceNotFoundException. The reason
> why
> > it happens is because GridServiceProxy#invokeMethod() designed to catch
> it
> > and ClusterTopologyCheckedException and retry over and over again unless
> > service is become available, but on the same time there are tons of stack
> > traces printed on remote node.
> >
> > If we want to avoid changing API, probably we need to add option to mute
> > that exceptions in GridJobWorker.
> >
> > What do you think?
> >
> > On Tue, Nov 15, 2016 at 5:10 PM, Vladimir Ozerov 
> > wrote:
> >
> > > Also we implemented the same thing for platforms some time ago. In
> short,
> > > job result processing was implemented as follows (pseudocode):
> > >
> > > // Execute.
> > > Object res;
> > >
> > > try {
> > > res = job.run();
> > > }
> > > catch (Exception e) {
> > > res = e
> > > }
> > >
> > > // Serialize result.
> > > try {
> > > SERIALIZE(res);
> > > }
> > > catch (Exception e) {
> > > try{
> > > // Serialize serialization error.
> > > SERIALIZE(new IgniteException("Failed to serialize result.",
> e));
> > > }
> > > catch (Exception e) {
> > > // Cannot serialize serialization error, so pass only string to
> > > exception.
> > > SERIALIZE(new IgniteException("Failed to serialize result: " +
> > > e.getMessage());
> > > }
> > > }
> > >
> > > On Tue, Nov 15, 2016 at 5:05 PM, Vladimir Ozerov  >
> > > wrote:
> > >
> > > > Dmitriy,
> > > >
> > > > Shouldn't we report serialization problem properly instead? We
> already
> > > had
> > > > a problem when node hanged during job execution in case it was
> > impossible
> > > > to deserialize the job on receiver side. It was resolved properly -
> we
> > > > caught exception on receiver side and reported it back sender side.
> > > >
> > > > I believe we must do the same for services. Otherwise we may end up
> in
> > > > messy API which doesn't resolve original problem.
> > > >
> > > > Vladimir.
> > > >
> > > > On Tue, Nov 15, 2016 at 4:38 PM, Dmitriy Karachentsev <
> > > > dkarachent...@gridgain.com> wrote:
> > > >
> > > >> Hi Igniters!
> > > >>
> > > >> I'd like to modify our public API and add an
> > > >> IgniteServices.serviceProxy() method with a timeout argument as
> > > >> part of the task
> > > >> https://issues.apache.org/jira/browse/IGNITE-3862
> > > >>
> > > >> In short, without a timeout, in case of a serialization error or
> > > >> similar, service acquisition may hang and log errors infinitely.
> > > >>
> > > >> Do you have any concerns about this change?
> > > >>
> > > >> Thanks!
> > > >> Dmitry.
> > > >>
> > > >
> > > >
> > >
> >
>


[jira] [Created] (IGNITE-4228) Missing documentation on CacheJdbcPojoStore

2016-11-15 Thread Alexander (JIRA)
Alexander created IGNITE-4228:
-

 Summary: Missing documentation on CacheJdbcPojoStore
 Key: IGNITE-4228
 URL: https://issues.apache.org/jira/browse/IGNITE-4228
 Project: Ignite
  Issue Type: Wish
  Components: documentation
Affects Versions: 1.7
Reporter: Alexander
Assignee: Alexey Kuznetsov
Priority: Minor


Missing documentation on CacheJdbcPojoStore class in Javadoc.

Especially on the loadCache() method - what kinds of optional arguments are 
supported?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4229) Loading configuration from XML into Console for further editing

2016-11-15 Thread Alexander (JIRA)
Alexander created IGNITE-4229:
-

 Summary: Loading configuration from XML into Console for further 
editing
 Key: IGNITE-4229
 URL: https://issues.apache.org/jira/browse/IGNITE-4229
 Project: Ignite
  Issue Type: Wish
Reporter: Alexander
Assignee: Alexey Kuznetsov
Priority: Minor


It would be great to have the ability to load an external XML file into the 
Console as the initial configuration and then edit it.

At this point it is only possible to create a configuration from scratch and 
then export it to XML.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] ignite pull request #1237: IGNITE-4227: Implemented SQLError.

2016-11-15 Thread isapego
GitHub user isapego opened a pull request:

https://github.com/apache/ignite/pull/1237

IGNITE-4227: Implemented SQLError.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-4227

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1237.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1237


commit 7290d88e14a15a3d030b7381dbd0a3f14cb65a12
Author: Pavel Tupitsyn 
Date:   2016-10-18T14:17:17Z

IGNITE-4030 Streamline PlatformTarget operation methods

This closes #1167

commit 66c76d1f30f024b58db8cab07ba9e7d429f596f8
Author: tledkov-gridgain 
Date:   2016-10-18T15:45:06Z

IGNITE-2355 Fixed the test HadoopClientProtocolMultipleServersSelfTest. 
Clear connection poll after the test, cosmetic.

commit f37fbcab1ae2c7553696e96b7a9c3194a570d7af
Author: isapego 
Date:   2016-10-19T10:06:42Z

IGNITE-3705: Fixed compiliation warnings. This closes #1169.

commit 7ed2bb7e341701d052220a36a2b2f8f0a46fd644
Author: AMRepo 
Date:   2016-10-19T15:33:59Z

IGNITE-3448 Support SQL queries with distinct aggregates added. This closes 
#3448.

commit 551a4dfae6169a07a5e28f9b266f90311f3216b7
Author: tledkov-gridgain 
Date:   2016-10-21T10:25:57Z

IGNITE-2355 Fixed the test HadoopClientProtocolMultipleServersSelfTest. 
Clear connection poll before and after  the test

commit ec12a9db2265180f96be72e2217e60ced856164e
Author: vozerov-gridgain 
Date:   2016-10-24T14:52:36Z

Minor fix for flags passed to GridCacheMapEntry.initialValue from data 
streamer isolated updater.

commit 44740465677c39068dc813dabd464e60f09e5f49
Author: tledkov-gridgain 
Date:   2016-10-26T13:00:11Z

IGNITE-4062: fix BinaryObject.equals: compare only bytes containing the 
fields' data (without header and footer). This closes  #1182.

commit 9ddb8be1243df8e489f7ebc716d315415775439a
Author: Dmitriy Govorukhin 
Date:   2016-10-27T14:52:22Z

IGNITE-2079 GridCacheIoManager eats exception trail if it falls into the 
directed case
merger from ignite-2079-2

# Conflicts:
#   
modules/core/src/main/java/org/apache/ignite/internal/processors/cache/query/GridCacheQueryMetricsAdapter.java

commit 6f160728c544d252f77bdb85c0ff2857559707a3
Author: Valentin Kulichenko 
Date:   2016-10-28T23:18:14Z

IGNITE-4110 - Fixed BUFFER_UNDERFLOW and BUFFER_OVERFLOW handling in 
BlockingSslHandler

commit 6b78ad0cbbcf286cb083136c49cebd5dd85de58c
Author: sboikov 
Date:   2016-10-31T07:35:44Z

TcoDiscovery: reduced amount of debug logging (heartbeat/connection check 
messages are logged trace level).

commit 175da6b7e394dd76c27d5155ff98a5b2ef03bb9d
Author: tledkov-gridgain 
Date:   2016-11-07T06:16:58Z

IGNITE-3432:  check data/meta cache names are different for different IGFS 
instances. This closes #1201

commit 40ef2f5ae42826fe8fd077e3013e8f55c8512bdd
Author: Dmitriy Govorukhin 
Date:   2016-11-07T09:09:41Z

ignite-4178 support permission builder

commit fc7ce5a4d72145f2e8a86debeda264ef0a5b37e3
Author: isapego 
Date:   2016-11-07T10:26:05Z

IGNITE-4090: Added flags so stdint and limits can be used in C++.

commit a98804a249496ba9bafbc96daa7aaf25b3d36724
Author: Igor Sapego 
Date:   2016-11-07T11:00:00Z

IGNITE-4113: Added tests. Added Statement::Set/GetAttribute.

commit 950bad474ef29f9b808e74034c49a69d57eb2740
Author: dkarachentsev 
Date:   2016-11-08T11:03:34Z

GG-11655 - Restore service compatibility with releases before 1.5.30.

commit 3d19bfc2b66574e3945ce17c7a4dfe77d0070b8d
Author: dkarachentsev 
Date:   2016-11-08T11:04:36Z

Merge remote-tracking branch 'origin/ignite-1.6.11' into ignite-1.6.11

commit e821dc0083003bc81058b1cb223d8a8a2ee44daf
Author: Dmitriy Govorukhin 
Date:   2016-11-08T12:09:21Z

IGNITE-2079 (revert commit) GridCacheIoManager eats exception trail if it 
falls into the directed case

commit c2c82ca44befe4570325dd6cf2ba885e0d90596c
Author: Dmitriy Govorukhin 
Date:   2016-11-08T12:10:10Z

Merge remote-tracking branch 'professional/ignite-1.6.11' into ignite-1.6.11

commit 865bbcf0f41a0c4944e0928f1758d43a0eae82c5
Author: Dmitriy Govorukhin 
Date:   2016-11-08T12:18:29Z

Revert "Merge remote-tracking branch 'professional/ignite-1.6.11' into 
ignite-1.6.11"

This reverts commit c2c82ca44befe4570325dd6cf2ba885e0d90596c, reversing
changes made to e821dc0083003bc81058b1cb223d8a8a2ee44daf.

commit 9726421ff9efb2b19813b2fd6ad27a3728b5ab1a
Author: Dmitriy Govorukhin 
Date:   2016-11-08T12:59:00Z

  Revert  Revert  Merge remote-tracking branch 'professional/ignite-1.6.11'

commit 5a3a1960fff1dcf32961c45c0ba5149d6748d2fc
Author: Igor Sapego 
Date:   2016-11-08T14:36:35Z

Added license header.

commit d88f422aeb02738d676d86ce416551b805ad154e
Author: Andrey Novikov 
Date:   2016-11-09T07:25:38Z

GG-1

Re: Service proxy API changes

2016-11-15 Thread Valentin Kulichenko
I would still add a timeout there. In my view, it makes sense to have such
option. Currently user thread can block indefinitely or loop in while(true)
forever.

-Val

On Tue, Nov 15, 2016 at 7:10 AM, Dmitriy Karachentsev <
dkarachent...@gridgain.com> wrote:

> Perfect, thanks!
>
> On Tue, Nov 15, 2016 at 5:56 PM, Vladimir Ozerov 
> wrote:
>
> > To avoid log pollution we usually use LT class (alias for
> GridLogThrottle).
> > Please check if it can help you.
> >
> > On Tue, Nov 15, 2016 at 5:48 PM, Dmitriy Karachentsev <
> > dkarachent...@gridgain.com> wrote:
> >
> > > Vladimir, thanks for your reply!
> > >
> > > What you suggest definitely makes sense and it looks like a more
> > > reasonable solution.
> > > But there remains another thing. The second issue that this solution
> > > solves is preventing log pollution with GridServiceNotFoundException.
> > > The reason it happens is that GridServiceProxy#invokeMethod() is
> > > designed to catch it and ClusterTopologyCheckedException and retry
> > > over and over again until the service becomes available, but at the
> > > same time there are tons of stack traces printed on the remote node.
> > >
> > > If we want to avoid changing the API, we probably need to add an
> > > option to mute those exceptions in GridJobWorker.
> > >
> > > What do you think?
> > >
> > > On Tue, Nov 15, 2016 at 5:10 PM, Vladimir Ozerov  >
> > > wrote:
> > >
> > > > Also we implemented the same thing for platforms some time ago. In
> > short,
> > > > job result processing was implemented as follows (pseudocode):
> > > >
> > > > // Execute.
> > > > Object res;
> > > >
> > > > try {
> > > >     res = job.run();
> > > > }
> > > > catch (Exception e) {
> > > >     res = e;
> > > > }
> > > >
> > > > // Serialize result.
> > > > try {
> > > >     SERIALIZE(res);
> > > > }
> > > > catch (Exception e) {
> > > >     try {
> > > >         // Serialize the serialization error.
> > > >         SERIALIZE(new IgniteException("Failed to serialize result.", e));
> > > >     }
> > > >     catch (Exception e2) {
> > > >         // Cannot serialize the serialization error, so pass only its
> > > >         // message string to the exception.
> > > >         SERIALIZE(new IgniteException("Failed to serialize result: " +
> > > >             e2.getMessage()));
> > > >     }
> > > > }
> > > >
> > > > On Tue, Nov 15, 2016 at 5:05 PM, Vladimir Ozerov <
> voze...@gridgain.com
> > >
> > > > wrote:
> > > >
> > > > > Dmitriy,
> > > > >
> > > > > Shouldn't we report the serialization problem properly instead?
> > > > > We already had a problem where a node hung during job execution
> > > > > when it was impossible to deserialize the job on the receiver
> > > > > side. It was resolved properly - we caught the exception on the
> > > > > receiver side and reported it back to the sender side.
> > > > >
> > > > > I believe we must do the same for services. Otherwise we may end
> > > > > up with a messy API which doesn't resolve the original problem.
> > > > >
> > > > > Vladimir.
> > > > >
> > > > > On Tue, Nov 15, 2016 at 4:38 PM, Dmitriy Karachentsev <
> > > > > dkarachent...@gridgain.com> wrote:
> > > > >
> > > > >> Hi Igniters!
> > > > >>
> > > > >> I'd like to modify our public API and add an
> > > > >> IgniteServices.serviceProxy() method with a timeout argument as
> > > > >> part of the task
> > > > >> https://issues.apache.org/jira/browse/IGNITE-3862
> > > > >>
> > > > >> In short, without a timeout, in case of a serialization error or
> > > > >> similar, service acquisition may hang and log errors infinitely.
> > > > >>
> > > > >> Do you have any concerns about this change?
> > > > >>
> > > > >> Thanks!
> > > > >> Dmitry.
> > > > >>
> > > > >
> > > > >
> > > >
> > >
> >
>


Re: IgniteCache.loadCache improvement proposal

2016-11-15 Thread Valentin Kulichenko
It sounds like Aleksandr is basically proposing to support automatic
persistence [1] for loading through the data streamer, and we really don't
have this. However, I think I have a more generic solution in mind.

What if we add one more IgniteCache.loadCache overload like this:

loadCache(@Nullable IgniteBiPredicate<K, V> p, IgniteBiInClosure<K, V> clo,
@Nullable Object... args)

It's the same as the existing one, but with the key-value closure provided
as a parameter. This closure will be passed to the CacheStore.loadCache
along with the arguments and will allow overriding the logic that actually
saves the loaded entry in the cache (currently this logic is always provided
by the cache itself and the user can't control it).

We can then provide the implementation of this closure that will create a
data streamer and call addData() within its apply() method.

I see the following advantages:

   - Any existing CacheStore implementation can be reused to load through
   the streamer (our JDBC and Cassandra stores, or anything else that the
   user has).
   - Loading code is always part of the CacheStore implementation, so it's
   very easy to switch between different ways of loading.
   - The user is not limited to the two approaches we provide out of the
   box; they can always implement a new one.

Thoughts?

[1] https://apacheignite.readme.io/docs/automatic-persistence

-Val
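For illustration only, here is a self-contained Java sketch of the shape of this proposal, with no Ignite dependencies: a load method hands each persisted entry to a caller-supplied key-value closure, so the same store logic can either put entries directly into the cache or batch them the way a data streamer would. All names here (LoadCacheSketch, the plain-Java loadCache, etc.) are hypothetical models of the proposed API, not the real Ignite signatures.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

class LoadCacheSketch {
    /** Models CacheStore.loadCache: reads "persistence" and hands each entry to the closure. */
    static void loadCache(Map<Integer, String> persistence,
                          BiConsumer<Integer, String> clo) {
        // The store only knows how to iterate its data; what happens to each
        // entry is decided by the caller-supplied closure.
        persistence.forEach(clo);
    }

    public static void main(String[] args) {
        Map<Integer, String> db = Map.of(1, "a", 2, "b", 3, "c");

        // Closure #1: put entries directly into the cache (current behavior).
        Map<Integer, String> cache = new HashMap<>();
        loadCache(db, cache::put);

        // Closure #2: batch entries the way a data streamer would (proposed option).
        List<Map.Entry<Integer, String>> batch = new ArrayList<>();
        loadCache(db, (k, v) -> batch.add(Map.entry(k, v)));

        System.out.println(cache.size() + " " + batch.size()); // prints "3 3"
    }
}
```

The point of the design is that switching between direct puts and streamer-based loading is a one-line change at the call site, while the store implementation stays untouched.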

On Tue, Nov 15, 2016 at 2:27 AM, Alexey Kuznetsov 
wrote:

> Hi, All!
>
> I think we do not need to change the API at all.
>
> public void loadCache(@Nullable IgniteBiPredicate<K, V> p, @Nullable
> Object... args) throws CacheException;
>
> We could pass any args to loadCache();
>
> So we could create a class
>  IgniteCacheLoadDescriptor {
>  some fields that will describe how to load
> }
>
> and modify the POJO store to detect and use such arguments.
>
>
> All we need is to implement this and write good documentation and examples.
>
> Thoughts?
>
> On Tue, Nov 15, 2016 at 5:22 PM, Alexandr Kuramshin 
> wrote:
>
> > Hi Vladimir,
> >
> > I don't offer any changes in API. Usage scenario is the same as it was
> > described in
> > https://apacheignite.readme.io/docs/persistent-store#section-loadcache-
> >
> > The preload cache logic invokes IgniteCache.loadCache() with some
> > additional arguments, depending on a CacheStore implementation, and then
> > the loading occurs in the way I've already described.
> >
> >
> > 2016-11-15 11:26 GMT+03:00 Vladimir Ozerov :
> >
> > > Hi Alex,
> > >
> > > >>> Let's give the user the reusable code which is convenient, reliable
> > and
> > > fast.
> > > Convenience - this is why I asked for an example of how the API could
> > > look and how users are going to use it.
> > >
> > > Vladimir.
> > >
> > > On Tue, Nov 15, 2016 at 11:18 AM, Alexandr Kuramshin <
> > ein.nsk...@gmail.com
> > > >
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I think the discussion is going in the wrong direction. Certainly
> > > > it's not a big deal to implement some custom user logic to load the
> > > > data into caches. But the Ignite framework gives the user reusable
> > > > code built on top of the basic system.
> > > >
> > > > So the main question is: why do the developers let the user use a
> > > > convenient way to load caches with a totally non-optimal solution?
> > > >
> > > > We could talk at length about different persistence storage types,
> > > > but whenever we initiate loading with IgniteCache.loadCache, the
> > > > current implementation imposes significant overhead on the network.
> > > >
> > > > Partition-aware data loading may be used in some scenarios to avoid
> > > > this network overhead, but users are compelled to take additional
> > > > steps to achieve this optimization: adding a column to the tables,
> > > > adding compound indices that include the added column, writing a
> > > > piece of repeatable code to load the data into different caches in a
> > > > fault-tolerant fashion, etc.
> > > >
> > > > Let's give the user reusable code that is convenient, reliable and
> > > > fast.
> > > >
> > > > 2016-11-14 20:56 GMT+03:00 Valentin Kulichenko <
> > > > valentin.kuliche...@gmail.com>:
> > > >
> > > > > Hi Aleksandr,
> > > > >
> > > > > The data streamer is already outlined as one of the possible
> > > > > approaches to loading the data [1]. Basically, you start a
> > > > > designated client node or choose a leader among the server nodes
> > > > > [2] and then use the IgniteDataStreamer API to load the data. With
> > > > > this approach there is no need to have a CacheStore implementation
> > > > > at all. Can you please elaborate on what additional value you are
> > > > > trying to add here?
> > > > >
> > > > > [1] https://apacheignite.readme.io/docs/data-loading#
> > > ignitedatastreamer
> > > > > [2] https://apacheignite.readme.io/docs/leader-election
> > > > >
> > > > > -Val
> > > > >
> > > > > On Mon, Nov 14, 2016 at 8:23 AM, Dmitriy Setrakyan <
> > > > dsetrak...@apache.org>
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I just want to clar

Re: Apache Ignite 1.8 Release

2016-11-15 Thread Denis Magda
Pavel,

Got you, thanks. I’ve reviewed all the tickets that were waiting for my review. 
Please have a look at my minor comments.

—
Denis

> On Nov 15, 2016, at 12:31 AM, Pavel Tupitsyn  wrote:
> 
> Denis, [1] depends on [2], and [2](.NET: CacheEntryProcessor binary mode) is 
> not a simple thing. We won't be able to do that for 1.8.
> Other than that, I'll try to fit as many of them as I can. But I can't answer 
> your question since I don't see any date yet.
> 
> By the way, you were going to help with the reviews.
> 
> [1] https://issues.apache.org/jira/browse/IGNITE-4128 
> 
> [2] https://issues.apache.org/jira/browse/IGNITE-3825 
> 
> 
> On Tue, Nov 15, 2016 at 4:03 AM, Denis Magda  > wrote:
> Alexander P., Igor S.,
> 
> When will you merge all DML- and ODBC (PDO)-related changes into the 1.8 
> branch? I’m looking forward to going through the PDO [1] documentation and 
> making sure that everything works as described on my side.
> 
> Pavel,
> 
> Do you think it will be possible to complete all the .NET usability tickets 
> [2] under 1.8 and roll them out to the Apache Ignite users?
> 
> [1] https://issues.apache.org/jira/browse/IGNITE-3921 
> 
> [2] https://issues.apache.org/jira/browse/IGNITE-4114 
> 
> 
> —
> Denis
> 
>> On Nov 9, 2016, at 6:55 AM, Denis Magda > > wrote:
>> 
>> Do we have a branch for ignite-1.8? Is there anyone who can take over the 
>> release process of 1.8?
>> 
>> —
>> Denis
>> 
>>> On Nov 8, 2016, at 9:01 PM, Alexander Paschenko <alexander.a.pasche...@gmail.com> wrote:
>>> 
>>> Current status on DML:
>>> 
>>> - Basic data streamer support implemented (basicness is mostly about
>>> configuration - say, currently there's no way to specify streamer's
>>> batch size via JDBC driver, but this can be improved easily).
>>> 
>>> - Fixed all minor stuff agreed with Vladimir.
>>> 
>>> - There are some tests that started failing after binary hash codes
>>> generation rework made by Vladimir in ignite-4011-1 branch, I will ask
>>> him to look into it and fix those. Failing tests live in
>>> GridCacheBinaryObjectsAbstractSelfTest, and are as follows:
>>> - testPutWithFieldsHashing
>>> - testCrossFormatObjectsIdentity
>>> - testPutWithCustomHashing
>>> I added them personally while working on the first version of auto
>>> hashing a few weeks ago, and what they do is test these very hashing
>>> features. Again, prior to Vlad's rework those tests passed. So could
>>> you please take a look?
>>> 
>>> - Working on Sergey V.'s comments about current code.
>>> 
>>> - Alex
>> 
> 
> 



Re: Apache Ignite 1.8 Release

2016-11-15 Thread Denis Magda
Igor,

It makes sense for me to wait until everything gets merged into 1.8 then. 
Please let me know in this discussion when the overall merge happens.

—
Denis

> On Nov 15, 2016, at 1:45 AM, Igor Sapego  wrote:
> 
> Denis,
> 
> I can merge PDO-related changes into 1.8 but without DML they will break tests
> and even compilation so I don't see any sense in doing that before DML is 
> merged.
> 
> After DML is ready and merged I'll need some time to merge my changes and 
> check
> that everything works as intended. The code itself, tests and examples are 
> ready.
> 
> 
> Best Regards,
> Igor
> 
> On Tue, Nov 15, 2016 at 11:31 AM, Pavel Tupitsyn  > wrote:
> Denis, [1] depends on [2], and [2](.NET: CacheEntryProcessor binary mode)
> is not a simple thing. We won't be able to do that for 1.8.
> Other than that, I'll try to fit as many of them as I can. But I can't
> answer your question since I don't see any date yet.
> 
> By the way, you were going to help with the reviews.
> 
> [1] https://issues.apache.org/jira/browse/IGNITE-4128 
> 
> [2] https://issues.apache.org/jira/browse/IGNITE-3825 
> 
> 
> On Tue, Nov 15, 2016 at 4:03 AM, Denis Magda  > wrote:
> 
> > *Alexander P., Igor S.,*
> >
> > When will you merge all DML- and ODBC (PDO)-related changes into the 1.8
> > branch? I’m looking forward to going through the PDO [1] documentation
> > and making sure that everything works as described on my side.
> >
> > *Pavel,*
> >
> > Do you think it will be possible to complete all the .NET usability
> > tickets [2] under 1.8 and roll them out to the Apache Ignite users?
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-3921 
> > 
> > [2] https://issues.apache.org/jira/browse/IGNITE-4114 
> > 
> >
> > —
> > Denis
> >
> > On Nov 9, 2016, at 6:55 AM, Denis Magda  > > wrote:
> >
> > Do we have a branch for ignite-1.8? Is there anyone who can take over the
> > release process of 1.8?
> >
> > —
> > Denis
> >
> > On Nov 8, 2016, at 9:01 PM, Alexander Paschenko <alexander.a.pasche...@gmail.com> wrote:
> >
> > Current status on DML:
> >
> > - Basic data streamer support implemented (basicness is mostly about
> > configuration - say, currently there's no way to specify streamer's
> > batch size via JDBC driver, but this can be improved easily).
> >
> > - Fixed all minor stuff agreed with Vladimir.
> >
> > - There are some tests that started failing after binary hash codes
> > generation rework made by Vladimir in ignite-4011-1 branch, I will ask
> > him to look into it and fix those. Failing tests live in
> > GridCacheBinaryObjectsAbstractSelfTest, and are as follows:
> > - testPutWithFieldsHashing
> > - testCrossFormatObjectsIdentity
> > - testPutWithCustomHashing
> > I added them personally while working on the first version of auto
> > hashing a few weeks ago, and what they do is test these very hashing
> > features. Again, prior to Vlad's rework those tests passed. So could
> > you please take a look?
> >
> > - Working on Sergey V.'s comments about current code.
> >
> > - Alex
> >
> >
> >
> >
> 



Re: Service proxy API changes

2016-11-15 Thread Dmitry Karachentsev
Valentin, as I understand it, this situation is abnormal, and if a user 
thread hangs on getting a service proxy, it's worth checking the logs for 
the failure instead of making a workaround.

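For reference, the timeout behavior being debated can be modeled generically, without any Ignite classes: an acquire loop retries the lookup until it succeeds or a deadline passes, then fails fast instead of hanging the calling thread forever. This is only a hypothetical sketch of the pattern, not the actual GridServiceProxy implementation; all names here are made up.

```java
import java.util.function.Supplier;

class ProxyTimeoutSketch {
    /**
     * Retries the lookup until it returns non-null or the timeout elapses.
     * Models the proposed serviceProxy(..., timeout) behavior generically.
     */
    static <T> T acquire(Supplier<T> lookup, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;

        while (System.currentTimeMillis() < deadline) {
            T svc = lookup.get();

            if (svc != null)
                return svc; // Service became available - stop retrying.

            try {
                Thread.sleep(10); // Back off instead of spinning and spamming the log.
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }

        // Fail fast instead of blocking the user thread indefinitely.
        throw new IllegalStateException("Failed to acquire service within " + timeoutMs + " ms");
    }

    public static void main(String[] args) {
        // A lookup that succeeds on the third attempt.
        int[] calls = {0};
        String svc = acquire(() -> ++calls[0] >= 3 ? "svc" : null, 1_000);
        System.out.println(svc); // prints "svc"
    }
}
```

With a timeout of zero (or a never-successful lookup) the call throws instead of looping, which is exactly the failure mode the proposal is trying to make visible.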

On 15.11.2016 19:47, Valentin Kulichenko wrote:

I would still add a timeout there. In my view, it makes sense to have such
option. Currently user thread can block indefinitely or loop in while(true)
forever.

-Val

On Tue, Nov 15, 2016 at 7:10 AM, Dmitriy Karachentsev <
dkarachent...@gridgain.com> wrote:


Perfect, thanks!

On Tue, Nov 15, 2016 at 5:56 PM, Vladimir Ozerov wrote:

To avoid log pollution we usually use the LT class (an alias for
GridLogThrottle). Please check if it can help you.
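The throttling idea is simple to model: remember when a given message was last logged and drop repeats inside a quiet period. The following is a hypothetical, Ignite-independent sketch of that pattern; the real LT/GridLogThrottle internals may differ (for instance, they key on exception class and message).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class LogThrottleSketch {
    private final long quietPeriodMs;
    private final Map<String, Long> lastLogged = new ConcurrentHashMap<>();

    LogThrottleSketch(long quietPeriodMs) {
        this.quietPeriodMs = quietPeriodMs;
    }

    /** Logs the message only if it was not logged within the quiet period; returns true if logged. */
    boolean warn(String msg) {
        long now = System.currentTimeMillis();
        boolean[] logged = {false};

        // Atomically decide whether this message is outside its quiet period.
        lastLogged.compute(msg, (k, last) -> {
            if (last == null || now - last >= quietPeriodMs) {
                logged[0] = true;
                return now; // Remember when we last emitted this message.
            }
            return last; // Suppress: keep the old timestamp.
        });

        if (logged[0])
            System.out.println("WARN: " + msg);

        return logged[0];
    }

    public static void main(String[] args) {
        LogThrottleSketch lt = new LogThrottleSketch(60_000);
        System.out.println(lt.warn("Service not found")); // prints "WARN: ..." then "true"
        System.out.println(lt.warn("Service not found")); // suppressed: prints "false"
    }
}
```

Applied to the retry loop above, this would turn thousands of identical GridServiceNotFoundException stack traces into one warning per quiet period.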

On Tue, Nov 15, 2016 at 5:48 PM, Dmitriy Karachentsev <
dkarachent...@gridgain.com> wrote:


Vladimir, thanks for your reply!

What you suggest definitely makes sense and it looks like a more
reasonable solution.
But there remains another thing. The second issue that this solution
solves is preventing log pollution with GridServiceNotFoundException.
The reason it happens is that GridServiceProxy#invokeMethod() is
designed to catch it and ClusterTopologyCheckedException and retry over
and over again until the service becomes available, but at the same
time there are tons of stack traces printed on the remote node.

If we want to avoid changing the API, we probably need to add an option
to mute those exceptions in GridJobWorker.

What do you think?

On Tue, Nov 15, 2016 at 5:10 PM, Vladimir Ozerov wrote:

Also we implemented the same thing for platforms some time ago. In
short, job result processing was implemented as follows (pseudocode):

// Execute.
Object res;

try {
    res = job.run();
}
catch (Exception e) {
    res = e;
}

// Serialize result.
try {
    SERIALIZE(res);
}
catch (Exception e) {
    try {
        // Serialize the serialization error.
        SERIALIZE(new IgniteException("Failed to serialize result.", e));
    }
    catch (Exception e2) {
        // Cannot serialize the serialization error, so pass only its
        // message string to the exception.
        SERIALIZE(new IgniteException("Failed to serialize result: " +
            e2.getMessage()));
    }
}
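The pseudocode above can be made concrete with plain Java serialization: SERIALIZE becomes an ObjectOutputStream write, and a non-serializable job result triggers the fallback path. This is a hedged sketch of the pattern using standard exceptions, not the actual platforms code (which would use IgniteException and its own marshaller):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

class ResultSerializationSketch {
    static byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    /** Serializes the result, falling back to an error wrapper, then to a plain message. */
    static byte[] serializeResult(Object res) {
        try {
            return serialize(res);
        }
        catch (Exception e) {
            try {
                // Serialize the serialization error itself.
                return serialize(new RuntimeException("Failed to serialize result.", e));
            }
            catch (Exception e2) {
                try {
                    // Last resort: the cause may itself be non-serializable,
                    // so keep only its message string.
                    return serialize(new RuntimeException(
                        "Failed to serialize result: " + e2.getMessage()));
                }
                catch (IOException unreachable) {
                    // A plain RuntimeException with a String message always serializes.
                    return new byte[0];
                }
            }
        }
    }

    public static void main(String[] args) {
        // A job result that cannot be serialized (java.lang.Object is not Serializable).
        byte[] bytes = serializeResult(new Object());
        System.out.println(bytes.length > 0); // prints "true" - the fallback wrapper was sent
    }
}
```

The key property is that the sender always gets *something* back: either the result, or a serializable description of why the result could not be sent.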

On Tue, Nov 15, 2016 at 5:05 PM, Vladimir Ozerov <voze...@gridgain.com> wrote:


Dmitriy,

Shouldn't we report the serialization problem properly instead? We
already had a problem where a node hung during job execution when it
was impossible to deserialize the job on the receiver side. It was
resolved properly - we caught the exception on the receiver side and
reported it back to the sender side.

I believe we must do the same for services. Otherwise we may end up
with a messy API which doesn't resolve the original problem.

Vladimir.

On Tue, Nov 15, 2016 at 4:38 PM, Dmitriy Karachentsev <
dkarachent...@gridgain.com> wrote:


Hi Igniters!

I'd like to modify our public API and add an
IgniteServices.serviceProxy() method with a timeout argument as part of
the task
https://issues.apache.org/jira/browse/IGNITE-3862

In short, without a timeout, in case of a serialization error or
similar, service acquisition may hang and log errors infinitely.

Do you have any concerns about this change?

Thanks!
Dmitry.







Re: IgniteCache.loadCache improvement proposal

2016-11-15 Thread Denis Magda
How would your proposal resolve the main point Aleksandr is trying to convey, 
which is extensive network utilization?

As I see it, the loadCache method will still be triggered on every node, and 
as before all the nodes will pre-load the entire data set from the database. 
That was Aleksandr’s reasonable concern.

If we make up a way to call loadCache on a specific node only and implement 
some fault-tolerant mechanism, then your suggestion should work perfectly 
fine.

—
Denis
 
> On Nov 15, 2016, at 12:05 PM, Valentin Kulichenko 
>  wrote:
> 
> It sounds like Aleksandr is basically proposing to support automatic
> persistence [1] for loading through the data streamer, and we really don't
> have this. However, I think I have a more generic solution in mind.
> 
> What if we add one more IgniteCache.loadCache overload like this:
> 
> loadCache(@Nullable IgniteBiPredicate<K, V> p, IgniteBiInClosure<K, V> clo,
> @Nullable Object... args)
> 
> It's the same as the existing one, but with the key-value closure provided
> as a parameter. This closure will be passed to the CacheStore.loadCache
> along with the arguments and will allow overriding the logic that actually
> saves the loaded entry in the cache (currently this logic is always provided
> by the cache itself and the user can't control it).
> 
> We can then provide the implementation of this closure that will create a
> data streamer and call addData() within its apply() method.
> 
> I see the following advantages:
> 
>   - Any existing CacheStore implementation can be reused to load through
>   the streamer (our JDBC and Cassandra stores, or anything else that the
>   user has).
>   - Loading code is always part of the CacheStore implementation, so it's
>   very easy to switch between different ways of loading.
>   - The user is not limited to the two approaches we provide out of the
>   box; they can always implement a new one.
> 
> Thoughts?
> 
> [1] https://apacheignite.readme.io/docs/automatic-persistence
> 
> -Val
> 
> On Tue, Nov 15, 2016 at 2:27 AM, Alexey Kuznetsov 
> wrote:
> 
>> Hi, All!
>> 
>> I think we do not need to change the API at all.
>> 
>> public void loadCache(@Nullable IgniteBiPredicate<K, V> p, @Nullable
>> Object... args) throws CacheException;
>> 
>> We could pass any args to loadCache();
>> 
>> So we could create class
>> IgniteCacheLoadDescriptor {
>> some fields that will describe how to load
>> }
>> 
>> 
>> and modify POJO store to detect and use such arguments.
>> 
>> 
>> All we need is to implement this and write good documentation and examples.
>> 
>> Thoughts?
>> 
>> On Tue, Nov 15, 2016 at 5:22 PM, Alexandr Kuramshin 
>> wrote:
>> 
>>> Hi Vladimir,
>>> 
>>> I don't offer any changes in API. Usage scenario is the same as it was
>>> described in
>>> https://apacheignite.readme.io/docs/persistent-store#section-loadcache-
>>> 
>>> The preload cache logic invokes IgniteCache.loadCache() with some
>>> additional arguments, depending on a CacheStore implementation, and then
>>> the loading occurs in the way I've already described.
>>> 
>>> 
>>> 2016-11-15 11:26 GMT+03:00 Vladimir Ozerov :
>>> 
 Hi Alex,
 
>>> Let's give the user the reusable code which is convenient, reliable
>>> and fast.
 Convenience - this is why I asked for an example of how the API could look
 and how users are going to use it.
 
 Vladimir.
 
 On Tue, Nov 15, 2016 at 11:18 AM, Alexandr Kuramshin <
>>> ein.nsk...@gmail.com
> 
 wrote:
 
> Hi all,
> 
> I think the discussion is going in the wrong direction. Certainly it's not
> a big deal to implement some custom user logic to load the data into
> caches. But the Ignite framework gives the user reusable code built on top
> of the basic system.
> 
> So the main question is: Why developers let the user to use
>> convenient
 way
> to load caches with totally non-optimal solution?
> 
> We could talk at length about different persistence storage types, but
> whenever we initiate loading with IgniteCache.loadCache, the current
> implementation imposes significant overhead on the network.
> 
> Partition-aware data loading may be used in some scenarios to avoid this
> network overhead, but users are compelled to take additional steps to
> achieve this optimization: adding a column to the tables, adding compound
> indices that include the added column, writing a piece of repeatable code
> to load the data into different caches in a fault-tolerant fashion, etc.
> 
> Let's give the user reusable code that is convenient, reliable and fast.
> 
> 2016-11-14 20:56 GMT+03:00 Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
> 
>> Hi Aleksandr,
>> 
>> The data streamer is already outlined as one of the possible approaches
>> to loading the data [1]. Basically, you start a designated client node
>> or choose a leader among the server nodes [2] and then use the
>> IgniteDataStreamer API
>

Re: Code Review Tool Proposal: Upsource

2016-11-15 Thread Denis Magda
Pavel,

Makes sense to me. Let’s start the voting process then, adding a link to this 
discussion to the voting thread. 
I would wait no less than 5 days, giving everyone a chance to share their 
opinion.

—
Denis

> On Nov 14, 2016, at 11:16 AM, Pavel Tupitsyn  wrote:
> 
> Denis,
> 
> Contributors will have to start a review on a branch or pull request manually
> (a couple of clicks really), then attach a URL to the JIRA ticket.
> Example: https://issues.apache.org/jira/browse/IGNITE-4116
> 
>> are there any examples of Apache projects that used some 3rd party tool
> for review process
> Some projects use Crucible: https://fisheye6.atlassian.com/
> Apache Hive used Phabricator in the past.
> 
> 
> Mike,
> 
>> Why not / what is wrong with GitHub?
> Nothing is wrong with GitHub, I think it is the second best option.
> Still, Upsource is much nicer, so I'd like to explore this possibility.
> 
>> commercial tool I have to pay for
> They provide open source license. We license TeamCity this way.
> 
> On Mon, Nov 14, 2016 at 8:32 PM, Michael André Pearce <
> michael.andre.pea...@me.com> wrote:
> 
>> Why not / what is wrong with GitHub?
>> 
>> Code is there anyhow...
>> 
>> I've found this seems to be the way a lot of projects have gone.
>> 
>> It allows me to review the code without checkout
>> 
>> I can comment inline with a pr or code commit
>> 
>> I can fork a project to my own space and create a pr back to the main repo
>> 
>> It updates when I make a commit
>> 
>> Supports multiple reviewers.
>> 
>> Eco system of bots
>> 
>> It doesn't tie me into a commercial IDE tool (I love IntelliJ like the
>> next person, but appreciate it is a commercial tool I have to pay for to
>> get all the bells and whistles)
>> 
>> Rgds
>> Mike
>> 
>>> On 14 Nov 2016, at 17:03, Denis Magda  wrote:
>>> 
>>> Pavel,
>>> 
>>> How will the contribution process be affected if the community switches
>>> to Upsource? Will Upsource introduce additional steps for those who want
>>> to ask someone to review a branch, or will the tool simply intercept all
>>> pull requests automatically?
>>> 
>>> Cos, Raul, Others,
>>> 
>>> How is this intention aligned with Apache at all? In your experience, are
>>> there any examples of Apache projects that used some 3rd-party tool for
>>> the review process?
>>> 
>>> —
>>> Denis
>>> 
 On Nov 14, 2016, at 4:08 AM, Pavel Tupitsyn 
>> wrote:
 
 Igniters,
 
 We have set up Upsource code review tool at
 http://reviews.ignite.apache.org/
 
 I propose to evaluate it and see if it works for us.
 
 
 * Why?
 The current JIRA-based process is not very efficient. Anyone who has used a
 review tool will probably agree:
 
 - No need to switch branches locally and interrupt your current work.
>> You
 can see the code in one click.
 - All current reviews are easily accessible
 - Multiple reviewers
 - Much better discussions: comments are right in the code; each point
>> can
 be discussed and accepted separately
 - Integrates with IDEA - open the diff in IDEA in one click, or see the
 reviews there without opening the browser at all
 
 
 * Why Upsource?
 I've evaluated a bunch of tools (CodeCollaborator, ReviewBoard,
 Phabricator, Crucible),
 and Upsource looks like the best fit for us:
 - PR-based code reviews. This is a major advantage: a review for a PR can be
 created in one click, and it updates automatically when you push more
 commits (to fix review issues)
 - Good Java support and IDEA integration
 - Good performance (our code base is big, and tools like Crucible really
 struggle with it)
 
 
 Thoughts and suggestions are welcome.
 
 Thanks,
 
 Pavel
>>> 
>> 



Re: IgniteCache.loadCache improvement proposal

2016-11-15 Thread Valentin Kulichenko
You can use the localLoadCache method for this (it should be overloaded as
well, of course). Basically, if you provide a closure based on
IgniteDataStreamer and call localLoadCache on one of the nodes (client or
server), it's the same approach as described in [1], but with the possibility
to reuse existing persistence code. Makes sense?

[1] https://apacheignite.readme.io/docs/data-loading#ignitedatastreamer

-Val
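
A dependency-free sketch of the approach Val describes: a store-style loader hands each entry to a caller-supplied closure (standing in for IgniteDataStreamer.addData) instead of writing into the cache itself. All Ignite types are modeled with plain JDK interfaces here, and the names (`loadFromStore`, the fake database map) are illustrative, not actual Ignite API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;

public class LoadThroughStreamerSketch {
    // Stand-in for CacheStore.loadCache: reads "persistence" and hands each
    // entry to the supplied closure instead of putting it into the cache.
    static void loadFromStore(BiConsumer<Integer, String> clo) {
        Map<Integer, String> db = Map.of(1, "a", 2, "b", 3, "c"); // fake database
        db.forEach(clo);
    }

    public static void main(String[] args) {
        // The closure models IgniteDataStreamer.addData(): entries get buffered
        // and routed to primary nodes instead of being broadcast everywhere.
        Map<Integer, String> streamed = new ConcurrentHashMap<>();
        loadFromStore(streamed::put);
        System.out.println("streamed=" + streamed.size());
    }
}
```

The point of the design is that the same store-reading code works for both read/write-through and bulk loading; only the sink changes.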

On Tue, Nov 15, 2016 at 1:15 PM, Denis Magda  wrote:

> How would your proposal resolve the main point Aleksandr is trying to
> convey, which is excessive network utilization?
>
> As I see it, the loadCache method will still be triggered on every node and,
> as before, all the nodes will pre-load the whole data set from the database.
> That was Aleksandr’s reasonable concern.
>
> If we come up with a way to call loadCache on a specific node only and
> implement some fault-tolerant mechanism, then your suggestion should work
> perfectly fine.
>
> —
> Denis
>
> > On Nov 15, 2016, at 12:05 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
> >
> > It sounds like Aleksandr is basically proposing to support automatic
> > persistence [1] for loading through the data streamer, and we really
> > don't have this. However, I think I have a more generic solution in mind.
> >
> > What if we add one more IgniteCache.loadCache overload like this:
> >
> > loadCache(@Nullable IgniteBiPredicate p, IgniteBiInClosure
> > clo, @Nullable
> > Object... args)
> >
> > It's the same as the existing one, but with the key-value closure
> > provided as a parameter. This closure will be passed to
> > CacheStore.loadCache along with the arguments and will allow overriding
> > the logic that actually saves the loaded entry in the cache (currently
> > this logic is always provided by the cache itself and the user can't
> > control it).
> >
> > We can then provide the implementation of this closure that will create a
> > data streamer and call addData() within its apply() method.
> >
> > I see the following advantages:
> >
> >   - Any existing CacheStore implementation can be reused to load through
> >   streamer (our JDBC and Cassandra stores or anything else that user
> has).
> >   - Loading code is always part of CacheStore implementation, so it's
> very
> >   easy to switch between different ways of loading.
> >   - The user is not limited to the two approaches we provide out of the
> >   box; they can always implement a new one.
> >
> > Thoughts?
> >
> > [1] https://apacheignite.readme.io/docs/automatic-persistence
> >
> > -Val
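
The proposed overload can be illustrated with a dependency-free model. Ignite's IgniteBiPredicate and IgniteBiInClosure are approximated with JDK functional interfaces, and the store contents are made up, so this is only a sketch of the contract, not the actual Ignite API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;
import java.util.function.BiPredicate;

public class LoadCacheOverloadSketch {
    // Model of the proposed loadCache(p, clo, args): the predicate decides
    // which entries to load, the closure decides where loaded entries go.
    static void loadCache(BiPredicate<Integer, String> p,
                          BiConsumer<Integer, String> clo) {
        Map<Integer, String> store = Map.of(1, "one", 2, "two", 3, "three");
        store.forEach((k, v) -> {
            if (p == null || p.test(k, v))
                clo.accept(k, v); // user-controlled sink: cache put or streamer
        });
    }

    public static void main(String[] args) {
        Map<Integer, String> sink = new ConcurrentHashMap<>();
        // Load only odd keys through a streamer-like sink.
        loadCache((k, v) -> k % 2 == 1, sink::put);
        System.out.println("loaded=" + sink.size());
    }
}
```

Swapping the sink closure is what lets the same CacheStore implementation serve both direct cache puts and streamer-based loading.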
> >
> > On Tue, Nov 15, 2016 at 2:27 AM, Alexey Kuznetsov  >
> > wrote:
> >
> >> Hi, All!
> >>
> >> I think we do not need to change the API at all.
> >>
> >> public void loadCache(@Nullable IgniteBiPredicate p, @Nullable
> >> Object... args) throws CacheException;
> >>
> >> We could pass any args to loadCache();
> >>
> >> So we could create class
> >> IgniteCacheLoadDescriptor {
> >> some fields that will describe how to load
> >> }
> >>
> >>
> >> and modify POJO store to detect and use such arguments.
> >>
> >>
> >> All we need is to implement this and write good documentation and
> examples.
> >>
> >> Thoughts?
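
The descriptor idea above can be sketched in plain Java; the descriptor class, its fields, and the default value are all made up for illustration and are not part of any Ignite API:

```java
public class LoadDescriptorSketch {
    // Hypothetical descriptor passed through loadCache's Object... args.
    static class CacheLoadDescriptor {
        final int batchSize;
        CacheLoadDescriptor(int batchSize) { this.batchSize = batchSize; }
    }

    // Model of a store's loadCache(Object... args) that detects the descriptor
    // among the arguments and adjusts its behavior accordingly.
    static int loadCache(Object... args) {
        for (Object arg : args) {
            if (arg instanceof CacheLoadDescriptor)
                return ((CacheLoadDescriptor) arg).batchSize; // use its settings
        }
        return 512; // default behavior when no descriptor is passed
    }

    public static void main(String[] args) {
        System.out.println("batch=" + loadCache("other-arg", new CacheLoadDescriptor(1024)));
        System.out.println("batch=" + loadCache());
    }
}
```

The appeal of this variant is backward compatibility: stores that do not know about the descriptor simply ignore the extra argument.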
> >>
> >> On Tue, Nov 15, 2016 at 5:22 PM, Alexandr Kuramshin <
> ein.nsk...@gmail.com>
> >> wrote:
> >>
> >>> Hi Vladimir,
> >>>
> >>> I don't propose any changes to the API. The usage scenario is the same
> >>> as it was described in
> >>> https://apacheignite.readme.io/docs/persistent-store#
> section-loadcache-
> >>>
> >>> The preload cache logic invokes IgniteCache.loadCache() with some
> >>> additional arguments, depending on a CacheStore implementation, and
> then
> >>> the loading occurs in the way I've already described.
> >>>
> >>>
> >>> 2016-11-15 11:26 GMT+03:00 Vladimir Ozerov :
> >>>
>  Hi Alex,
> 
> >>> Let's give the user reusable code which is convenient, reliable and
> >>> fast.
>  Convenience - this is why I asked for an example of how the API can look
>  and how users are going to use it.
> 
>  Vladimir.
> 
>  On Tue, Nov 15, 2016 at 11:18 AM, Alexandr Kuramshin <
> >>> ein.nsk...@gmail.com
> >
>  wrote:
> 
> > Hi all,
> >
> > I think the discussion is going in a wrong direction. Certainly it's not a
> > big deal to implement some custom user logic to load the data into caches.
> > But the Ignite framework gives the user some reusable code built on top of
> > the basic system.
> >
> > So the main question is: Why do the developers let the user use a
> > convenient way to load caches with a totally non-optimal solution?
> >
> > We could talk too much about different persistence storage types, but
> > whenever we initiate the loading with IgniteCache.loadCache, the current
> > implementation imposes much overhead on the network.
> >
> > Partition-aware data loading may be used in some scenarios to avoid this
> > network overhead, but the users are compelled t

Re: Service proxy API changes

2016-11-15 Thread Valentin Kulichenko
Dmitry,

I absolutely agree with this and I'm not saying that a timeout solves any of
these problems. However, it provides a bit of safety for such cases (the
client continues working instead of hanging forever) and also allows not
waiting for the service too long, which can also be useful sometimes (e.g. the
service is redeployed and initialization takes a lot of time for some reason).
It makes perfect sense to have it and it should not be difficult to add. Am I
missing something?

-Val
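
The wait-with-deadline behavior such a timeout would give can be sketched with plain JDK code; `acquire` and the backoff interval are made-up names for illustration, not the actual IgniteServices API:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class ProxyTimeoutSketch {
    // Generic bounded wait: retries a lookup until the service appears or the
    // deadline passes, instead of looping in while(true) forever.
    static <T> T acquire(Supplier<T> lookup, long timeoutMs) throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        T svc;
        while ((svc = lookup.get()) == null) {
            if (System.nanoTime() >= deadline)
                throw new IllegalStateException("Service not available within " + timeoutMs + " ms");
            Thread.sleep(10); // back off before retrying
        }
        return svc;
    }

    public static void main(String[] args) throws InterruptedException {
        try {
            acquire(() -> null, 50); // models a service that never deploys
        }
        catch (IllegalStateException e) {
            System.out.println("timed out");
        }
    }
}
```

A timeout like this turns an indefinite hang into a diagnosable exception on the caller's side.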

On Tue, Nov 15, 2016 at 1:12 PM, Dmitry Karachentsev <
dkarachent...@gridgain.com> wrote:

> Valentin, as I understand, this situation is abnormal, and if a user thread
> hangs on getting a service proxy, it's worth checking the logs for a failure
> instead of making a workaround.
>
>
> On 15.11.2016 19:47, Valentin Kulichenko wrote:
>
>> I would still add a timeout there. In my view, it makes sense to have such
>> an option. Currently a user thread can block indefinitely or loop in
>> while(true) forever.
>>
>> -Val
>>
>> On Tue, Nov 15, 2016 at 7:10 AM, Dmitriy Karachentsev <
>> dkarachent...@gridgain.com> wrote:
>>
>> Perfect, thanks!
>>>
>>> On Tue, Nov 15, 2016 at 5:56 PM, Vladimir Ozerov 
>>> wrote:
>>>
>>> To avoid log pollution we usually use the LT class (an alias for
>>> GridLogThrottle). Please check if it can help you.

 On Tue, Nov 15, 2016 at 5:48 PM, Dmitriy Karachentsev <
 dkarachent...@gridgain.com> wrote:

 Vladimir, thanks for your reply!
>
> What you suggest definitely makes sense and it looks like a more reasonable
> solution.
> But there remains another thing. The second issue that this solution solves
> is preventing log pollution with GridServiceNotFoundException. The reason
> why it happens is that GridServiceProxy#invokeMethod() is designed to catch
> it and ClusterTopologyCheckedException and retry over and over again until
> the service becomes available, but at the same time there are tons of stack
> traces printed on the remote node.
>
> If we want to avoid changing the API, we probably need to add an option to
> mute those exceptions in GridJobWorker.
>
> What do you think?
>
> On Tue, Nov 15, 2016 at 5:10 PM, Vladimir Ozerov  wrote:
>
> Also we implemented the same thing for platforms some time ago. In
>>
> short,

> job result processing was implemented as follows (pseudocode):
>>
>> // Execute.
>> Object res;
>>
>> try {
>>     res = job.run();
>> }
>> catch (Exception e) {
>>     res = e;
>> }
>>
>> // Serialize result.
>> try {
>>     SERIALIZE(res);
>> }
>> catch (Exception e) {
>>     try {
>>         // Serialize serialization error.
>>         SERIALIZE(new IgniteException("Failed to serialize result.", e));
>>     }
>>     catch (Exception e2) {
>>         // Cannot serialize serialization error, so pass only a string to the exception.
>>         SERIALIZE(new IgniteException("Failed to serialize result: " + e2.getMessage()));
>>     }
>> }
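
The two-level fallback from the pseudocode can be turned into runnable Java. Here IgniteException is replaced with RuntimeException and SERIALIZE with plain JDK object serialization, so this is only a model of the approach, not the platform implementation:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class SerializeResultSketch {
    static byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj); // throws NotSerializableException for non-Serializable objects
        }
        return bos.toByteArray();
    }

    // Try the result itself, then a wrapped error, and finally a message-only
    // exception that is guaranteed to serialize.
    static byte[] serializeResult(Object res) throws IOException {
        try {
            return serialize(res);
        }
        catch (Exception e) {
            try {
                return serialize(new RuntimeException("Failed to serialize result.", e));
            }
            catch (Exception e2) {
                return serialize(new RuntimeException("Failed to serialize result: " + e2.getMessage()));
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Object nonSerializable = new Object(); // java.lang.Object is not Serializable
        System.out.println("bytes=" + (serializeResult(nonSerializable).length > 0));
    }
}
```

The key property is that the sender always receives *something* deserializable, so the caller never hangs waiting for a result that could not be shipped.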
>>
>> On Tue, Nov 15, 2016 at 5:05 PM, Vladimir Ozerov <
>>
> voze...@gridgain.com
>>>
 wrote:
>>
>> Dmitriy,
>>>
>>> Shouldn't we report the serialization problem properly instead? We already
>>> had a problem when a node hanged during job execution in case it was
>>> impossible to deserialize the job on the receiver side. It was resolved
>>> properly - we caught the exception on the receiver side and reported it
>>> back to the sender side.
>>>
>>> I believe we must do the same for services. Otherwise we may end up with
>>> a messy API which doesn't resolve the original problem.
>>>
>>> Vladimir.
>>>
>>> On Tue, Nov 15, 2016 at 4:38 PM, Dmitriy Karachentsev <
>>> dkarachent...@gridgain.com> wrote:
>>>
>>> Hi Igniters!

 I'd like to modify our public API and add an
 IgniteServices.serviceProxy() method with a timeout argument as part of the
 task https://issues.apache.org/jira/browse/IGNITE-3862

 In short, without a timeout, in case of a serialization error or similar,
 service acquisition may hang and infinitely log errors.

 Do you have any concerns about this change?

 Thanks!
 Dmitry.


>>>
>


Re: IgniteCache.loadCache improvement proposal

2016-11-15 Thread Denis Magda
Well, that’s clear. However, with localLoadCache the user still has to take
care of fault tolerance if the node that loads the data goes down. What if we
provide an overloaded version of loadCache that accepts a number of nodes
where the closure has to be executed? If the number decreases, then the engine
will re-execute the closure on a node that is alive.

—
Denis 


> On Nov 15, 2016, at 2:06 PM, Valentin Kulichenko 
>  wrote:
> 
> You can use localLoadCache method for this (it should be overloaded as well
> of course). Basically, if you provide closure based on IgniteDataStreamer
> and call localLoadCache on one of the nodes (client or server), it's the
> same approach as described in [1], but with the possibility to reuse
> existing persistence code. Makes sense?
> 
> [1] https://apacheignite.readme.io/docs/data-loading#ignitedatastreamer
> 
> -Val
> 
> On Tue, Nov 15, 2016 at 1:15 PM, Denis Magda  wrote:
> 
>> How would your proposal resolve the main point Aleksandr is trying to
>> convey, which is excessive network utilization?
>> 
>> As I see it, the loadCache method will still be triggered on every node
>> and, as before, all the nodes will pre-load the whole data set from the
>> database. That was Aleksandr’s reasonable concern.
>> 
>> If we come up with a way to call loadCache on a specific node only and
>> implement some fault-tolerant mechanism, then your suggestion should work
>> perfectly fine.
>> 
>> —
>> Denis
>> 
>>> On Nov 15, 2016, at 12:05 PM, Valentin Kulichenko <
>> valentin.kuliche...@gmail.com> wrote:
>>> 
>>> It sounds like Aleksandr is basically proposing to support automatic
>>> persistence [1] for loading through the data streamer, and we really
>>> don't have this. However, I think I have a more generic solution in mind.
>>> 
>>> What if we add one more IgniteCache.loadCache overload like this:
>>> 
>>> loadCache(@Nullable IgniteBiPredicate p, IgniteBiInClosure
>>> clo, @Nullable
>>> Object... args)
>>> 
>>> It's the same as the existing one, but with the key-value closure
>>> provided as a parameter. This closure will be passed to
>>> CacheStore.loadCache along with the arguments and will allow overriding
>>> the logic that actually saves the loaded entry in the cache (currently
>>> this logic is always provided by the cache itself and the user can't
>>> control it).
>>> 
>>> We can then provide the implementation of this closure that will create a
>>> data streamer and call addData() within its apply() method.
>>> 
>>> I see the following advantages:
>>> 
>>>  - Any existing CacheStore implementation can be reused to load through
>>>  streamer (our JDBC and Cassandra stores or anything else that user
>> has).
>>>  - Loading code is always part of CacheStore implementation, so it's
>> very
>>>  easy to switch between different ways of loading.
>>>  - The user is not limited to the two approaches we provide out of the
>>>  box; they can always implement a new one.
>>> 
>>> Thoughts?
>>> 
>>> [1] https://apacheignite.readme.io/docs/automatic-persistence
>>> 
>>> -Val
>>> 
>>> On Tue, Nov 15, 2016 at 2:27 AM, Alexey Kuznetsov >> 
>>> wrote:
>>> 
 Hi, All!
 
 I think we do not need to change the API at all.
 
 public void loadCache(@Nullable IgniteBiPredicate p, @Nullable
 Object... args) throws CacheException;
 
 We could pass any args to loadCache();
 
 So we could create class
 IgniteCacheLoadDescriptor {
 some fields that will describe how to load
 }
 
 
 and modify POJO store to detect and use such arguments.
 
 
 All we need is to implement this and write good documentation and
>> examples.
 
 Thoughts?
 
 On Tue, Nov 15, 2016 at 5:22 PM, Alexandr Kuramshin <
>> ein.nsk...@gmail.com>
 wrote:
 
> Hi Vladimir,
> 
> I don't offer any changes in API. Usage scenario is the same as it was
> described in
> https://apacheignite.readme.io/docs/persistent-store#
>> section-loadcache-
> 
> The preload cache logic invokes IgniteCache.loadCache() with some
> additional arguments, depending on a CacheStore implementation, and
>> then
> the loading occurs in the way I've already described.
> 
> 
> 2016-11-15 11:26 GMT+03:00 Vladimir Ozerov :
> 
>> Hi Alex,
>> 
> Let's give the user reusable code which is convenient, reliable and fast.
>> Convenience - this is why I asked for an example of how the API can look
>> and how users are going to use it.
>> 
>> Vladimir.
>> 
>> On Tue, Nov 15, 2016 at 11:18 AM, Alexandr Kuramshin <
> ein.nsk...@gmail.com
>>> 
>> wrote:
>> 
>>> Hi all,
>>> 
>>> I think the discussion is going in a wrong direction. Certainly it's not
>>> a big deal to implement some custom user logic to load the data into
>>> caches. But the Ignite framework gives the user some reusable code built
>>> on top of the basic system.
>>> 
>>

Re: IgniteCache.loadCache improvement proposal

2016-11-15 Thread Valentin Kulichenko
Denis,

The loading will most likely be initiated by the application anyway, even
if you call localLoadCache on one of the server nodes. I.e. the flow is the
following:

   1. Client sends a closure to a server node (e.g. oldest or random).
   2. The closure calls localLoadCache method.
   3. If this server node fails (or if the loading process fails), client
   gets an exception and retries if needed.
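
The three steps above can be modeled with a minimal, dependency-free retry loop; the compute call is simulated with an injected failure counter, and all names here are illustrative rather than actual Ignite API:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ClientRetrySketch {
    // Models the flow: the client sends the load closure to a server node and,
    // if that node (or the loading process) fails, retries on another one.
    static String runWithRetries(int maxAttempts, AtomicInteger failuresLeft) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                // Stand-in for ignite.compute().run(loadClosure) on one node.
                if (failuresLeft.getAndDecrement() > 0)
                    throw new RuntimeException("node failed");
                return "loaded on attempt " + attempt;
            }
            catch (RuntimeException e) {
                // Node failed: pick another node and retry.
            }
        }
        throw new IllegalStateException("all attempts failed");
    }

    public static void main(String[] args) {
        // First two "nodes" fail; the third attempt succeeds.
        System.out.println(runWithRetries(3, new AtomicInteger(2)));
    }
}
```

This is the fault-tolerance story Val refers to: the retry logic lives in ordinary client code on top of the compute API, so loadCache itself needs no extra parameters.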

I would not complicate the API and implementation even more. We have the
compute grid API that already allows handling the things you're describing.
It's very flexible and easy to use.

-Val

On Tue, Nov 15, 2016 at 2:20 PM, Denis Magda  wrote:

> Well, that’s clear. However, with localLoadCache the user still has to
> care about the fault-tolerance if the node that loads the data goes down.
> What if we provide an overloaded version of loadCache that will accept a
> number of nodes where the closure has to be executed? If the number
> decreases then the engine will re-execute the closure on a node that is
> alive.
>
> —
> Denis
>
>
> > On Nov 15, 2016, at 2:06 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
> >
> > You can use localLoadCache method for this (it should be overloaded as
> well
> > of course). Basically, if you provide closure based on IgniteDataStreamer
> > and call localLoadCache on one of the nodes (client or server), it's the
> > same approach as described in [1], but with the possibility to reuse
> > existing persistence code. Makes sense?
> >
> > [1] https://apacheignite.readme.io/docs/data-loading#ignitedatastreamer
> >
> > -Val
> >
> > On Tue, Nov 15, 2016 at 1:15 PM, Denis Magda  wrote:
> >
> >> How would your proposal resolve the main point Aleksandr is trying to
> >> convey, which is excessive network utilization?
> >>
> >> As I see it, the loadCache method will still be triggered on every node
> >> and, as before, all the nodes will pre-load the whole data set from the
> >> database. That was Aleksandr’s reasonable concern.
> >>
> >> If we come up with a way to call loadCache on a specific node only and
> >> implement some fault-tolerant mechanism, then your suggestion should work
> >> perfectly fine.
> >>
> >> —
> >> Denis
> >>
> >>> On Nov 15, 2016, at 12:05 PM, Valentin Kulichenko <
> >> valentin.kuliche...@gmail.com> wrote:
> >>>
> >>> It sounds like Aleksandr is basically proposing to support automatic
> >>> persistence [1] for loading through the data streamer, and we really
> >>> don't have this. However, I think I have a more generic solution in mind.
> >>>
> >>> What if we add one more IgniteCache.loadCache overload like this:
> >>>
> >>> loadCache(@Nullable IgniteBiPredicate p, IgniteBiInClosure
> >>> clo, @Nullable
> >>> Object... args)
> >>>
> >>> It's the same as the existing one, but with the key-value closure
> >>> provided as a parameter. This closure will be passed to
> >>> CacheStore.loadCache along with the arguments and will allow overriding
> >>> the logic that actually saves the loaded entry in the cache (currently
> >>> this logic is always provided by the cache itself and the user can't
> >>> control it).
> >>>
> >>> We can then provide the implementation of this closure that will
> create a
> >>> data streamer and call addData() within its apply() method.
> >>>
> >>> I see the following advantages:
> >>>
> >>>  - Any existing CacheStore implementation can be reused to load through
> >>>  streamer (our JDBC and Cassandra stores or anything else that user
> >> has).
> >>>  - Loading code is always part of CacheStore implementation, so it's
> >> very
> >>>  easy to switch between different ways of loading.
> >>>  - The user is not limited to the two approaches we provide out of the
> >>>  box; they can always implement a new one.
> >>>
> >>> Thoughts?
> >>>
> >>> [1] https://apacheignite.readme.io/docs/automatic-persistence
> >>>
> >>> -Val
> >>>
> >>> On Tue, Nov 15, 2016 at 2:27 AM, Alexey Kuznetsov <
> akuznet...@apache.org
> >>>
> >>> wrote:
> >>>
>  Hi, All!
> 
>  I think we do not need to change the API at all.
> 
>  public void loadCache(@Nullable IgniteBiPredicate p, @Nullable
>  Object... args) throws CacheException;
> 
>  We could pass any args to loadCache();
> 
>  So we could create class
>  IgniteCacheLoadDescriptor {
>  some fields that will describe how to load
>  }
> 
> 
>  and modify POJO store to detect and use such arguments.
> 
> 
>  All we need is to implement this and write good documentation and
> >> examples.
> 
>  Thoughts?
> 
>  On Tue, Nov 15, 2016 at 5:22 PM, Alexandr Kuramshin <
> >> ein.nsk...@gmail.com>
>  wrote:
> 
> > Hi Vladimir,
> >
> > I don't offer any changes in API. Usage scenario is the same as it
> was
> > described in
> > https://apacheignite.readme.io/docs/persistent-store#
> >> section-loadcache-
> >
> > The preload cache logic invokes IgniteCache.loadCache() with some
> > additional argument

[jira] [Created] (IGNITE-4230) Create documentation for the integration with Tableau

2016-11-15 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-4230:
---

 Summary: Create documentation for the integration with Tableau
 Key: IGNITE-4230
 URL: https://issues.apache.org/jira/browse/IGNITE-4230
 Project: Ignite
  Issue Type: Task
Reporter: Denis Magda
Assignee: Igor Sapego
 Fix For: 2.0


Let's create simple documentation that will show how to connect from Tableau
to Ignite and what steps have to be performed to fulfil this.

Refer to this discussion for more details:
http://apache-ignite-developers.2346864.n4.nabble.com/Connecting-to-Ignite-from-Tableau-td12065.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Connecting to Ignite from Tableau

2016-11-15 Thread Denis Magda
Created the ticket for this task
https://issues.apache.org/jira/browse/IGNITE-4230 


Igor, it would be nice if you could prepare a draft of the page at some point.

—
Denis

> On Nov 8, 2016, at 7:40 PM, Dmitriy Setrakyan  wrote:
> 
> I do agree, though, that we should have a separate page for Tableau
> documentation.
> 
> On Tue, Nov 8, 2016 at 10:53 AM, Denis Magda  wrote:
> 
>> I’m planning to put the PHP PDO documentation under a kind of “Runs Everywhere”
>> section. Under this section you will be able to find a reference to our
>> .NET, C++ client docs as well as information about REST API.
>> 
>> Tableau should be added under the new “Tools Integrations” section.
>> 
>> Could you create a ticket for Tableau and create a guide for it? After
>> that we will add the page under specific section.
>> 
>> —
>> Denis
>> 
>>> On Nov 8, 2016, at 4:16 AM, Igor Sapego  wrote:
>>> 
>>> Maybe we should have a page with notes for any widespread product
>>> which we tested with Ignite. We are going to have a page for PDO soon,
>>> so why not make a page for Tableau?
>>> 
>>> Best Regards,
>>> Igor
>>> 
>>> On Tue, Nov 8, 2016 at 12:02 PM, Dmitriy Setrakyan <
>> dsetrak...@apache.org>
>>> wrote:
>>> 
 I think we should have minimal documentation about Tableau, describing the
 steps, and a screenshot. Even though it seems trivial to us, it will not be
 as trivial to users.
 
 On Mon, Nov 7, 2016 at 10:51 PM, Vladimir Ozerov 
 wrote:
 
> Denis,
> 
> I am not sure it requires specific documentation. Tableau is just one
>> of
> many applications which use ODBC to connect to some data source.
> 
> On Tue, Nov 8, 2016 at 1:50 AM, Denis Magda 
>> wrote:
> 
>> Guys,
>> 
>> As far as I am aware, with the help of the Ignite ODBC driver it’s feasible to
>> connect to Ignite cluster from Tableau [1].
>> However I didn’t find any documentation related to this support.
>> 
>> Igor, are there any specific steps and hints I should follow if I want to
>> connect to Ignite from the above-mentioned tool?
>> 
>> [1] http://www.tableau.com 
>> 
>> —
>> Denis
> 
 
 
>> 
>> 



Re: IgniteCache.loadCache improvement proposal

2016-11-15 Thread Denis Magda
Val,

Then I would create a blog post on how to use the new API proposed by you to 
accomplish the scenario described by Alexandr. Are you willing to write the 
post once the API is implemented?

Alexandr, do you think the API proposed by Val will resolve your case when it’s 
used as listed below? If so, are you interested in taking over the 
implementation and contributing to Apache Ignite?

—
Denis

> On Nov 15, 2016, at 2:30 PM, Valentin Kulichenko 
>  wrote:
> 
> Denis,
> 
> > The loading will most likely be initiated by the application anyway, even
> if you call localLoadCache on one of the server nodes. I.e. the flow is the
> following:
> 
>   1. Client sends a closure to a server node (e.g. oldest or random).
>   2. The closure calls localLoadCache method.
>   3. If this server node fails (or if the loading process fails), client
>   gets an exception and retries if needed.
> 
> > I would not complicate the API and implementation even more. We have the
> > compute grid API that already allows handling the things you're describing.
> It's very flexible and easy to use.
> 
> -Val
> 
> On Tue, Nov 15, 2016 at 2:20 PM, Denis Magda  wrote:
> 
>> Well, that’s clear. However, with localLoadCache the user still has to
>> care about the fault-tolerance if the node that loads the data goes down.
>> What if we provide an overloaded version of loadCache that will accept a
>> number of nodes where the closure has to be executed? If the number
>> decreases then the engine will re-execute the closure on a node that is
>> alive.
>> 
>> —
>> Denis
>> 
>> 
>>> On Nov 15, 2016, at 2:06 PM, Valentin Kulichenko <
>> valentin.kuliche...@gmail.com> wrote:
>>> 
>>> You can use localLoadCache method for this (it should be overloaded as
>> well
>>> of course). Basically, if you provide closure based on IgniteDataStreamer
>>> and call localLoadCache on one of the nodes (client or server), it's the
>>> same approach as described in [1], but with the possibility to reuse
>>> existing persistence code. Makes sense?
>>> 
>>> [1] https://apacheignite.readme.io/docs/data-loading#ignitedatastreamer
>>> 
>>> -Val
>>> 
>>> On Tue, Nov 15, 2016 at 1:15 PM, Denis Magda  wrote:
>>> 
 How would your proposal resolve the main point Aleksandr is trying to
 convey, which is excessive network utilization?
 
 As I see it, the loadCache method will still be triggered on every node and,
 as before, all the nodes will pre-load the whole data set from the database.
 That was Aleksandr’s reasonable concern.
 
 If we come up with a way to call loadCache on a specific node only and
 implement some fault-tolerant mechanism, then your suggestion should work
 perfectly fine.
 
 —
 Denis
 
> On Nov 15, 2016, at 12:05 PM, Valentin Kulichenko <
 valentin.kuliche...@gmail.com> wrote:
> 
> It sounds like Aleksandr is basically proposing to support automatic
> persistence [1] for loading through the data streamer, and we really don't
> have this. However, I think I have a more generic solution in mind.
> 
> What if we add one more IgniteCache.loadCache overload like this:
> 
> loadCache(@Nullable IgniteBiPredicate p, IgniteBiInClosure
> clo, @Nullable
> Object... args)
> 
> It's the same as the existing one, but with the key-value closure
 provided
> as a parameter. This closure will be passed to the CacheStore.loadCache
> along with the arguments and will allow overriding the logic that actually
> saves the loaded entry in the cache (currently this logic is always provided
> by the cache itself and the user can't control it).
> 
> We can then provide the implementation of this closure that will
>> create a
> data streamer and call addData() within its apply() method.
> 
> I see the following advantages:
> 
> - Any existing CacheStore implementation can be reused to load through
> streamer (our JDBC and Cassandra stores or anything else that user
 has).
> - Loading code is always part of CacheStore implementation, so it's
 very
> easy to switch between different ways of loading.
> - The user is not limited to the two approaches we provide out of the box;
> they can always implement a new one.
> 
> Thoughts?
> 
> [1] https://apacheignite.readme.io/docs/automatic-persistence
> 
> -Val
> 
> On Tue, Nov 15, 2016 at 2:27 AM, Alexey Kuznetsov <
>> akuznet...@apache.org
> 
> wrote:
> 
>> Hi, All!
>> 
>> I think we do not need to change the API at all.
>> 
>> public void loadCache(@Nullable IgniteBiPredicate p, @Nullable
>> Object... args) throws CacheException;
>> 
>> We could pass any args to loadCache();
>> 
>> So we could create class
>> IgniteCacheLoadDescriptor {
>> some fields that will describe how to load
>> }
>> 
>> 
>> and modify POJO store to detect and use such arguments.
>> 
>> 

[GitHub] ignite pull request #1225: IGNITE-4198: Kafka Connect sink option to transfo...

2016-11-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/1225


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: IGNITE-3066 Set of Redis commands that can be easily implemented via existing REST commands

2016-11-15 Thread Roman Shtykh
Andrey,
Sure, but I couldn't sign up -- "Cannot complete request due to license 
limitations." Let's use GitHub for now.
Thank you for your review!
-Roman
 

On Tuesday, November 15, 2016 6:12 PM, Andrey Novikov  
wrote:
 

 Roman,

I reviewed your code and added comments in JIRA.

Maybe we can try to use Upsource (http://reviews.ignite.apache.org/) for code
review?


On Tue, Nov 15, 2016 at 1:22 PM, Roman Shtykh 
wrote:

> Alexey,
> Thank you for your thorough reviews! I fixed the issues.
> -Roman
>
>
>    On Tuesday, November 15, 2016 12:32 PM, Alexey Kuznetsov <
> akuznet...@apache.org> wrote:
>
>
>  Roman,
>
> I reviewed your code and now it looks good to me.
> But I added two minor comments in JIRA.
>
> Also I think Andrey Novikov should take a look, as he has some experience
> in ignite-rest module.
>
> Andrey, take a look:
>
> Issue: https://issues.apache.org/jira/browse/IGNITE-3066
> PR:  https://github.com/apache/ignite/pull/1212
>
>
> On Tue, Nov 15, 2016 at 9:27 AM, Roman Shtykh 
> wrote:
>
> > Alexey,
> Thank you! I answered and pushed the changes.
> > -Roman
> >
> >
> >    On Tuesday, November 15, 2016 12:14 AM, Alexey Kuznetsov <
> > akuznet...@apache.org> wrote:
> >
> >
> >  Roman,
> >
> > I made one more review,  see my comments in JIRA issue.
> >
> > On Mon, Nov 7, 2016 at 1:30 PM, Alexey Kuznetsov 
> > wrote:
> >
> > > I will take a look on PR today.
> > >
> > > On Mon, Nov 7, 2016 at 11:35 AM, Roman Shtykh
>  > >
> > > wrote:
> > >
> > >>  Denis,
> > >> It is https://github.com/apache/ignite/pull/1212
> > >>
> > >> Thank you,
> > >> Roman
> > >>
> > >>
> > >>    On Saturday, November 5, 2016 4:56 AM, Denis Magda <
> > >> dma...@gridgain.com> wrote:
> > >>
> > >>
> > >>  Roman,
> > >>
> > >> Would you mind making a pull-request? It’s not clear and easy to
> review
> > >> using the branch you provided
> > >> https://github.com/apache/ignite/tree/ignite-2788 <
> > >> https://github.com/apache/ignite/tree/ignite-2788>
> > >>
> > >> This link provides details how to achieve this
> > >> https://cwiki.apache.org/confluence/display/IGNITE/How+to+
> > >> Contribute#HowtoContribute-1.CreateGitHubpull-request <
> > >> https://cwiki.apache.org/confluence/display/IGNITE/How+to+
> > >> Contribute#HowtoContribute-1.CreateGitHubpull-request>
> > >>
> > >> Let us know if you have any issue preparing the pull-request.
> > >>
> > >> —
> > >> Denis
> > >>
> > >> > On Nov 3, 2016, at 6:24 PM, Roman Shtykh  >
> > >> wrote:
> > >> >
> > >> > Igniters,
> > >> > Please review the issue: https://issues.apache.org/jira/browse/IGNITE-3066
> > >> >
> > >> > Thank you, Roman
> > >>
> > >>
> > >>
> > >>
> > >
> > >
> > >
> > > --
> > > Alexey Kuznetsov
> > >
> >
> >
> >
> > --
> > Alexey Kuznetsov
> >
> >
> >
>
>
>
> --
> Alexey Kuznetsov
>
>
>