Binary types configuration and cluster configuration consistency between server and client nodes

2021-11-30 Thread Rotondi, Antonio
Hello, I have a question regarding an application we already have in production, for which we are trying to remove as many dependencies as possible between server node and client node resources. In particular, we are seeing that it is not possible to set the binary types configuration
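
For context, a static binary type configuration of the kind being discussed is typically declared on the IgniteConfiguration. The sketch below is a minimal, hypothetical example (the com.example.model.Trade type name is illustrative, not from the original message):

import java.util.Collections;

import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryTypeConfiguration;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StaticBinaryConfig {
    public static void main(String[] args) {
        // Statically declared binary type: both server and client nodes carry
        // this configuration, which creates the kind of coupling described above.
        BinaryTypeConfiguration typeCfg =
            new BinaryTypeConfiguration("com.example.model.Trade");

        BinaryConfiguration binaryCfg = new BinaryConfiguration();
        binaryCfg.setTypeConfigurations(Collections.singletonList(typeCfg));

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setBinaryConfiguration(binaryCfg);

        Ignition.start(cfg);
    }
}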

Re: Several gigabytes in org.apache.ignite.spi.communication.tcp.internal.GridNioServerWrapper

2021-11-30 Thread Stephen Darlington
I have not dug into the code, but judging from the property name, the data structure is related to recovering from connection failures. Are these out-of-memory errors happening around the time of other problems? Are you seeing network issues? Do you see “long JVM pauses” in the logs? > On 30 No

Re: Binary types configuration and cluster configuration consistency between server and client nodes

2021-11-30 Thread Pavel Tupitsyn
Hello Antonio, You can solve this by removing all types from the static BinaryConfiguration and relying on dynamic type registration instead. In most cases you don't need to do anything extra: types are registered automatically on first use (e.g. a cache.put call). If you still encounter an e
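
A minimal sketch of the dynamic-registration approach Pavel describes, assuming a client node and a hypothetical Trade value class; no BinaryConfiguration is set, and the type is registered cluster-wide on the first cache.put:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DynamicRegistration {
    public static void main(String[] args) {
        // No BinaryConfiguration and no static type list on the configuration.
        Ignite ignite = Ignition.start(new IgniteConfiguration().setClientMode(true));

        IgniteCache<Integer, Trade> cache = ignite.getOrCreateCache("trades");

        // The first put registers the Trade binary type automatically.
        cache.put(1, new Trade(1, "ACME", 42.0));
    }

    // Hypothetical value class; only the node that writes it needs it on the classpath.
    static class Trade {
        int id;
        String symbol;
        double price;

        Trade(int id, String symbol, double price) {
            this.id = id;
            this.symbol = symbol;
            this.price = price;
        }
    }
}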

Re: Several gigabytes in org.apache.ignite.spi.communication.tcp.internal.GridNioServerWrapper

2021-11-30 Thread Eduard Llull Pou
Hi Stephen, I have not gathered the logs produced around the time I generated the memory dump. I will dump the memory again when we reach the 5 GB warning threshold, and I'll also gather the log files on the server so we have all the related information. It should take less than a week to have anoth

Re: Several gigabytes in org.apache.ignite.spi.communication.tcp.internal.GridNioServerWrapper

2021-11-30 Thread Eduard Llull Pou
I'm glad I double-checked: we have the logs from the last 5 minutes before the heap dump. Most of the log lines (59265 of 68296) are "Client node outbound message queue size exceeded slowClientQueueLimit, the client will be dropped (consider changing 'slowClientQueueLimit' configuration property)"
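
That warning corresponds to the slow-client protection in TcpCommunicationSpi. A minimal configuration sketch follows; the limits are placeholders rather than recommendations, with slowClientQueueLimit kept below messageQueueLimit as the documentation suggests:

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class SlowClientConfig {
    public static void main(String[] args) {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();

        // Cap the unsent-message queue per connection and drop clients that
        // fall behind, instead of letting outbound buffers grow on the server.
        commSpi.setMessageQueueLimit(1024);
        commSpi.setSlowClientQueueLimit(1000);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCommunicationSpi(commSpi);
    }
}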

[2.8.1] Having more backups makes SQL queries slower.

2021-11-30 Thread Maximiliano Gazquez
Hello everyone. We are doing some testing on a 10-node cluster which we use as a distributed database with persistence enabled. Each node has a 6 GB region size + 5 GB heap. All caches are partitioned, and I connect to the cluster using the thin client. I’ve found a performance issue: • With 2 back
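
For reference, the backup count being varied in this test is set per cache, and the thin-client connection looks roughly like the sketch below (cache name and addresses are placeholders):

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.ClientConfiguration;

public class BackupSetup {
    public static void main(String[] args) {
        // Server-side cache configuration: partitioned, with the backup count
        // that is being varied in the test (2 vs 4).
        CacheConfiguration<Integer, Object> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setCacheMode(CacheMode.PARTITIONED);
        ccfg.setBackups(2);

        // Thin-client connection, as used for the SQL queries in the test.
        ClientConfiguration clientCfg = new ClientConfiguration()
            .setAddresses("node1:10800", "node2:10800");

        try (IgniteClient client = Ignition.startClient(clientCfg)) {
            // ... run SqlFieldsQuery here ...
        }
    }
}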

AW: [2.8.1] Having more backups makes SQL queries slower.

2021-11-30 Thread Henrik
With more backups the cluster has worse write performance, since the data is copied multiple times. But read performance should improve, since each node can answer requests from its local backup. Thanks Sent with the Telekom Mail App
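
The "read from local backup" behaviour Henrik mentions is controlled by the readFromBackup flag on the cache configuration; as far as I understand it mainly affects key-value reads, while SQL queries are normally executed against primary partitions. A minimal sketch with a placeholder cache name:

import org.apache.ignite.configuration.CacheConfiguration;

public class ReadFromBackupSketch {
    public static void main(String[] args) {
        CacheConfiguration<Integer, Object> ccfg = new CacheConfiguration<>("myCache");

        // Allow key-value reads to be served from a backup copy on the local
        // node instead of always going to the primary. Defaults to true.
        ccfg.setReadFromBackup(true);
    }
}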

Re: AW: [2.8.1] Having more backups makes SQL queries slower.

2021-11-30 Thread Maximiliano Gazquez
My problem is 100% with queries, not writes. It’s the same cluster, same hardware, but a LOT slower when using 4 backups instead of 2. Is there any metric that I could check to find out what’s happening? Thanks! On 30 Nov 2021 15:42 -0300, Henrik wrote: > With more backups the cluster has the
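
Not a metric as such, but one low-effort check is to compare the query plans under the two configurations with EXPLAIN through the same thin client used for the test; the sketch below uses a placeholder table, column and address:

import java.util.List;

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ExplainCheck {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("node1:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            // EXPLAIN shows the two-phase (map/reduce) plan the SQL engine will
            // use, which helps spot extra scans or missing indexes when timings change.
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "EXPLAIN SELECT * FROM MyTable WHERE id = ?").setArgs(42);

            for (List<?> row : client.query(qry).getAll())
                System.out.println(row.get(0));
        }
    }
}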

access streaming

2021-11-30 Thread Henrik
Does anyone know how to access the streaming API from the Ruby client? https://github.com/ankane/ignite-ruby Thank you.

Re[2]: AW: [2.8.1] Having more backups makes SQL queries slower.

2021-11-30 Thread Zhenya Stanilovsky
Hello Maximiliano Gazquez, good question! But there is no single definitive answer to it. > I assumed that queries are distributed and each node answers the query only > with its primary partitions, and adding backups wouldn’t affect performance. OK, but what about overall system performance degrad
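
One way to see how the primary/backup split changes with the backup count is the Affinity API; a sketch assuming a placeholder cache name "myCache" and an already-started Ignite instance:

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public class PartitionDistributionCheck {
    public static void check(Ignite ignite) {
        Affinity<Object> aff = ignite.affinity("myCache");

        // Primary vs backup partition counts per server node; with more backups
        // each node holds (and must keep in sync) more backup partitions.
        for (ClusterNode node : ignite.cluster().forServers().nodes()) {
            System.out.printf("%s: %d primary, %d backup%n",
                node.consistentId(),
                aff.primaryPartitions(node).length,
                aff.backupPartitions(node).length);
        }
    }
}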