Re: Errors with TCPCommunicationSpi when using zookeeper discovery

2018-07-25 Thread Larry Mark
The logs do not indicate any connectivity problem, unless I am missing it, in which case please point it out to me. The messages seem to be getting through fine, but the server thinks there is a connection which does not exist, so it rejects it. This seems to happen because the communication SPI

Re: Errors with TCPCommunicationSpi when using zookeeper discovery

2018-08-02 Thread Larry Mark
ms for a long time and yet
> the connection won't be established.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-07-25 17:30 GMT+03:00 Larry Mark :
>
>> The logs do not indicate any connectivity problem, unless I am missing
>> it, in which case please poi
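For readers hitting the same symptom, a sketch of the knobs that usually matter when TcpCommunicationSpi is paired with ZooKeeper discovery: the handshake/connect timeouts on the communication SPI and the overall failure-detection budget. This is a hedged illustration, not the configuration from this thread; the ZooKeeper connection string and all timeout values are placeholders.

```java
// Hypothetical timeout tuning for TcpCommunicationSpi + ZooKeeper discovery.
// Hosts and values are placeholders, not taken from this thread.
IgniteConfiguration cfg = new IgniteConfiguration();

ZookeeperDiscoverySpi disco = new ZookeeperDiscoverySpi();
disco.setZkConnectionString("zk1:2181,zk2:2181"); // placeholder ZK hosts
disco.setSessionTimeout(30_000);                  // ZK session timeout, ms
cfg.setDiscoverySpi(disco);

TcpCommunicationSpi comm = new TcpCommunicationSpi();
comm.setConnectTimeout(10_000);    // initial handshake timeout, ms
comm.setMaxConnectTimeout(60_000); // cap for the backed-off retries, ms
comm.setReconnectCount(5);         // handshake attempts before giving up
cfg.setCommunicationSpi(comm);

cfg.setFailureDetectionTimeout(30_000); // overall failure-detection budget, ms
```

This fragment assumes the `ignite-zookeeper` module is on the classpath and is meant as a starting point for experiments, not a recommended setting.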

Re: write behind performance impacting main thread. Write behind buffer is never full

2017-11-03 Thread Larry Mark
Alexey, with our use case, setting coalesce off will probably make it worse; for at least some caches we are doing many updates to the same key, which is one of the reasons I am setting the batch size to 500. I will send the cachestore implementation and some logs that show the phenomenon early next we
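For context, a hedged sketch of the write-behind cache settings under discussion (batch size 500, coalescing on). The cache name, store factory, and flush values are placeholders, not the poster's actual configuration.

```java
// Hypothetical write-behind setup mirroring the settings discussed;
// "myCache", MyValue and MyCacheStore are placeholders.
CacheConfiguration<Integer, MyValue> ccfg = new CacheConfiguration<>("myCache");

ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStore.class));
ccfg.setWriteThrough(true);
ccfg.setWriteBehindEnabled(true);
ccfg.setWriteBehindBatchSize(500);        // entries handed to writeAll() per batch
ccfg.setWriteBehindFlushSize(10_240);     // buffer size that triggers a flush
ccfg.setWriteBehindFlushFrequency(5_000); // ms between background flushes
// Coalescing is toggleable since Ignite 2.0; with many updates to one key,
// leaving it on collapses those updates into a single store write.
ccfg.setWriteBehindCoalescing(true);
```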

Re: write behind performance impacting main thread. Write behind buffer is never full

2017-11-07 Thread Larry Mark
same key unique to us, or is this common enough that there should be a fix to the coalesce code? Best, Larry

On Fri, Nov 3, 2017 at 5:14 PM, Larry Mark wrote:
> Alexey,
>
> With our use case setting the coalesce off will probably make it worse,
> for at least some caches we are doing

Re: write behind performance impacting main thread. Write behind buffer is never full

2017-11-13 Thread Larry Mark
Getting rid of all locking would be great. Assuming my reading of the code is correct, and the locking is just to make sure the value does not change, I was thinking of a sub-optimization for my specific problem (there being very few unique keys in the cache). Loop through the cache in reverse or
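The coalescing behavior being debated can be illustrated without Ignite at all: a write-behind buffer keyed by cache key keeps only the latest value per key, so a workload with few unique keys collapses to very few store writes. This is a toy model of the idea, not the actual Ignite write-behind store code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy model: why coalescing helps when most updates hit the same key. */
public class CoalesceDemo {
    public static void main(String[] args) {
        int updates = 1000;

        // Without coalescing, every update becomes its own store write.
        int plainWrites = updates;

        // With coalescing, a buffer keyed by cache key keeps only the latest
        // value, so repeated updates to one key collapse into one pending write.
        Map<Integer, String> buffer = new LinkedHashMap<>();
        for (int i = 0; i < updates; i++)
            buffer.put(42, "value-" + i); // same key every time

        System.out.println("plain=" + plainWrites + " coalesced=" + buffer.size());
        // prints: plain=1000 coalesced=1
    }
}
```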

Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-10 Thread Larry Mark
Thanks for the quick response. I have observed similar behavior with 3rd party persistence read-through IF I set indexed types for the cache. Test case: load up the cache using put with 35,000 entries (keys 1 -> 35,000), then read every key using get(key). This is the use case that I want to use in
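One common way to keep putAll from buffering a large data set in one operation is to feed it in fixed-size chunks. The helper below is a generic sketch (the chunk size of 3 is only for the demo); in the real code each chunk would be passed to `cache.putAll(chunk)` in turn.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchedPutAll {
    /** Split a map into insertion-ordered chunks of at most batchSize entries. */
    static <K, V> List<Map<K, V>> chunks(Map<K, V> src, int batchSize) {
        List<Map<K, V>> out = new ArrayList<>();
        Map<K, V> cur = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : src.entrySet()) {
            cur.put(e.getKey(), e.getValue());
            if (cur.size() == batchSize) {
                out.add(cur);
                cur = new LinkedHashMap<>();
            }
        }
        if (!cur.isEmpty())
            out.add(cur); // trailing partial chunk
        return out;
    }

    public static void main(String[] args) {
        Map<Integer, String> data = new LinkedHashMap<>();
        for (int i = 1; i <= 10; i++)
            data.put(i, "v" + i);

        // Each chunk would be handed to cache.putAll(chunk), bounding
        // how much data any single operation has to hold at once.
        for (Map<Integer, String> c : chunks(data, 3))
            System.out.println(c.size());
        // prints: 3, 3, 3, 1 (one per line)
    }
}
```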

Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-11 Thread Larry Mark
Here are the configurations:

DataRegionConfiguration = (new DataRegionConfiguration)
    .setName("RefData")
    .setInitialSize(21 * 1024 * 1024)
    .setMaxSize(21 * 1024 * 1024)
    .setPersistenceEnabled(false)
    .setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU)
    .setMe
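The snippet above is cut off mid-call, so for readers following along, here is a hedged sketch of what a complete non-persistent region of this shape can look like. Everything past the chained setters shown above is illustrative, not taken from the thread: RANDOM_LRU page eviction only applies to non-persistent regions, and the eviction threshold controls how full the region gets before pages are evicted.

```java
// Illustrative non-persistent data region with page eviction enabled;
// the threshold and the wiring into IgniteConfiguration are example code,
// not the poster's actual settings.
DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("RefData")
    .setInitialSize(21 * 1024 * 1024)
    .setMaxSize(21 * 1024 * 1024)
    .setPersistenceEnabled(false)
    .setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU)
    .setEvictionThreshold(0.9); // start evicting pages at 90% fill

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(
    new DataStorageConfiguration().setDataRegionConfigurations(region));
```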

Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-12 Thread Larry Mark
Alexey, the runtime class is used so I can have a common method to create any cache type and index the key and value types of the cache. To simplify things, attached is a tar file containing a small program that throws an OOM exception for me. I get the OOM when loading from the cache store on miss

Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-16 Thread Larry Mark
No problem, this is not a short-term blocker; it is just something I need to understand better to make sure that I do not configure things in a way that gets unexpected OOM in production.

On Mon, Jan 15, 2018 at 1:18 PM, Alexey Popov wrote:
> Hi Larry,
>
> I am without my PC for a while. I will ch