Hi Mike. In your case, try one of two approaches.

In the cache configuration:
1. Increase the writeBehindFlushSize parameter (default is 10240 entries),
or
2. Decrease the writeBehindFlushFrequency parameter (default is 5000 ms).

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html
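For example, a minimal Java sketch (the cache name, value types, and the specific numbers below are placeholders, not recommendations; a cache store factory also has to be configured for write-behind to actually persist anything):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindTuning {
    public static void main(String[] args) {
        // Placeholder cache name and types; adjust to your setup.
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");

        // Write-behind requires write-through plus a cache store factory
        // (set via setCacheStoreFactory, omitted here).
        ccfg.setWriteThrough(true);
        ccfg.setWriteBehindEnabled(true);

        // Option 1: allow a larger write-behind buffer before a forced flush.
        // cacheCriticalSize is derived from this value, so raising it also
        // raises the point at which values start being dropped.
        ccfg.setWriteBehindFlushSize(20_480);

        // Option 2: flush more often by lowering the interval (milliseconds).
        ccfg.setWriteBehindFlushFrequency(1_000);

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(ccfg);
        }
    }
}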

Hope this helps.

----------------------
Ilya

Fri, Nov 12, 2021 at 03:29, Mike Wiesenberg <mike.wiesenb...@gmail.com>:

> Hi,
>  Using GridCacheWriteBehindStore (Version 2.10.0), we are occasionally
> receiving this error message
>
> "Failed to update store (value will be lost as current buffer size is
> greater than 'cacheCriticalSize.."
>
> Looking at the code to try to determine how to raise the buffer size, I
> observe that
>
> 1. This only occurs when there is an exception writing to the store
> ("Unable to update"). I observe 18 such failures in my log but 512 "Failed
> to update messages", which indicates that the sql errors are negatively
> impacting other values which happen to have been in the same batch with
> store errors. Is there a way to ignore or skip the 'Unable to update" fails
> so other values are not lost when those occur? (It seems kind of random
> that we only discard values due to the cacheCriticalSize when an unrelated
> exception occurs. Why?)
>
> 2. I was looking to see if I can increase cacheCriticalSize, but it's
> calculated by multiplying cacheMaxSize by 1.5, and increasing
> cacheMaxSize would also increase the number of values held before storing,
> so it wouldn't help much. Is there another way to adjust the configuration to
> mitigate this problem?
>
>
> Thanks,
>  Mike
>
