Yes, see the root issue [1], but it [2] happened during recovery.
> AFAIK all required parts of txn processing are already properly linearized
I see DCL replacements like IgniteTxAdapter#finishFuture() which call
this into question.
Anyway, this may be the correct statement, but TX proce
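For readers following along: assuming "DCL" above means double-checked locking, a lazy initialization of a finish future in that style looks roughly like the following. This is a hedged sketch; TxAdapterSketch and the CompletableFuture field are stand-ins for illustration, not Ignite's actual IgniteTxAdapter code.

```java
import java.util.concurrent.CompletableFuture;

public class TxAdapterSketch {
    // volatile is required for safe publication in double-checked locking
    private volatile CompletableFuture<Void> finishFut;

    /** Lazily creates the finish future; hypothetical analogue of finishFuture(). */
    public CompletableFuture<Void> finishFuture() {
        CompletableFuture<Void> fut = finishFut;  // first (unlocked) check
        if (fut == null) {
            synchronized (this) {
                fut = finishFut;                  // second (locked) check
                if (fut == null)
                    finishFut = fut = new CompletableFuture<>();
            }
        }
        return fut;
    }
}
```

The point of the pattern is that, once initialized, callers never contend on the monitor; whether such a replacement is equivalent to full synchronization is exactly what the message above questions.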
Do we have a real reproducer for thread-unsafe behavior that causes data
inconsistency?
AFAIK all required parts of txn processing are already properly linearized,
and other parts are ready to be processed in parallel (like txn recovery)
Mon, Jun 19, 2023 at 22:25, Anton Vinogradov:
> Folks,
Folks, the idea to synchronize all methods unfortunately failed :(
1) TxState has 4 implementations, so a lot of changes are required
2) IgniteTxEntry is not synchronized either ...
3) And even IgniteInternalTx implementations (1+ lines) are not
synchronized either ...
It seems to be unreal to refac
>> could you please point to this in the code? It may not be needed after the
fix and could bring performance growth.
BTW, I found the trick.
It is still necessary to keep the copying.
On Wed, May 24, 2023 at 2:44 PM Anton Vinogradov wrote:
> Andrey,
>
> Thanks for the tip.
> Of course, I'll benchmark the fix
Andrey,
Thanks for the tip.
Of course, I'll benchmark the fix before the merge.
According to the comment,
>> and return entries copy from tx state in order to avoid
ConcurrentModificationException.
could you please point to this in the code? It may not be needed after the
fix and could bring the performance growth.
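The copying the comment refers to is presumably a defense against HashMap's fail-fast iterators: mutating the map while iterating its live views throws ConcurrentModificationException, while iterating a snapshot does not. A minimal illustration, with a plain HashMap standing in for the tx state:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class EntriesCopyDemo {
    /** Mutating the map while iterating its live key set fails fast. */
    static boolean liveIterationFails() {
        Map<Integer, String> txMap = new HashMap<>();
        txMap.put(1, "a");
        txMap.put(2, "b");
        try {
            for (Integer k : txMap.keySet())
                txMap.put(3, "c");          // structural modification mid-iteration
            return false;
        } catch (ConcurrentModificationException e) {
            return true;                    // fail-fast iterator detected it
        }
    }

    /** Iterating a defensive copy tolerates mutation of the original map. */
    static boolean copyIterationSucceeds() {
        Map<Integer, String> txMap = new HashMap<>();
        txMap.put(1, "a");
        txMap.put(2, "b");
        for (Integer k : new ArrayList<>(txMap.keySet()))
            txMap.put(3, "c");              // safe: we iterate a snapshot
        return txMap.containsKey(3);
    }
}
```

Note this fail-fast behavior is single-threaded already; under real concurrency the copy also shields readers from writers, which is why the copy may become unnecessary once access is properly synchronized.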
Please run benchmarks after fixing the problem. E.g. replacing HashMap with
ConcurrentHashMap can significantly affect performance.
See for example comments to IGNITE-2968 issue (
https://issues.apache.org/jira/browse/IGNITE-2968?focusedCommentId=15415170&page=com.atlassian.jira.plugin.system.issue
I checked the approach of using msg.version() as a striped pool index for tx
message processing.
It seems this fixes the problem for cases when the originating node is not a
primary (tx creation happens in the striped pool).
But what about cases when the transaction is started from a user thread, not a
striped
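The routing idea being tested above can be sketched as follows; StripeRoutingSketch and its hashing are illustrative assumptions, not Ignite's actual StripedExecutor code. The property that matters is that every message carrying the same version lands on the same stripe, so all processing of one transaction is serialized on one thread.

```java
public class StripeRoutingSketch {
    private final int stripeCnt;

    public StripeRoutingSketch(int stripeCnt) {
        this.stripeCnt = stripeCnt;
    }

    /**
     * Maps a tx version's order (or a partition derived from it) to a stripe
     * index, so that all messages of one transaction are handled by the same
     * striped pool thread.
     */
    public int stripeIdx(long verOrder) {
        int h = Long.hashCode(verOrder);
        // Mask to non-negative before the modulo so negative hashes
        // still map into [0, stripeCnt).
        return (h & Integer.MAX_VALUE) % stripeCnt;
    }
}
```

The open question in the message stands: this only linearizes work that actually goes through the striped pool, not operations initiated directly from user threads.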
>> This invariant is violated in many places.
At least in half of the messages:
org.apache.ignite.internal.processors.cache.distributed.GridDistributedLockRequest#partition
org.apache.ignite.internal.processors.cache.distributed.GridDistributedUnlockRequest#partition
org.apache.ignite.internal.pro
>> Tx processing is supposed to be thread bound by hashing the version to a
partition
This invariant is violated in many places. The most notorious example is tx
recovery.
Another example: I just added an assertion that compares the tId of the
creator thread with the tId of an accessor thread.
TxMultiCacheAsy
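A creator-vs-accessor thread check of the kind described can be sketched like this; ThreadBoundState is hypothetical, and the real assertion would sit inside Ignite's tx classes rather than a standalone helper:

```java
public class ThreadBoundState {
    /** Id of the thread that created this state. */
    private final long creatorTid = Thread.currentThread().getId();

    /**
     * Fails when the state is touched from a thread other than its creator,
     * exposing violations of the "tx is thread-bound" invariant.
     */
    public void checkThread() {
        long tid = Thread.currentThread().getId();
        if (tid != creatorTid)
            throw new AssertionError("Foreign thread access [creator=" +
                creatorTid + ", accessor=" + tid + ']');
    }
}
```

Instrumenting accessors with such a check is a cheap way to make the invariant violations mentioned above (recovery, user-thread starts) fire deterministically in tests.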
Tx processing is supposed to be thread-bound by hashing the version to a
partition; see methods like [1].
If this invariant is broken in some cases, it should be fixed.
[1]
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareRequest#partition
Fri, May 19, 2023 at 15:57,
Igniters,
My team faced a node failure [1] caused by the usage of non-thread-safe
collections.
IgniteTxStateImpl's fields
- activeCacheIds
- txMap
are not thread-safe, but are widely used from different threads without
proper synchronization.
The main question is ... why?
According to the research, we
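One candidate direction, subject to the benchmarking concerns raised elsewhere in the thread, is swapping those fields for concurrent collections. A minimal hypothetical sketch with simplified field types (the real Ignite fields differ):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class TxStateSketch {
    // Hypothetical thread-safe counterparts of IgniteTxStateImpl's
    // activeCacheIds and txMap fields; types simplified for illustration.
    private final Set<Integer> activeCacheIds = ConcurrentHashMap.newKeySet();
    private final Map<Long, String> txMap = new ConcurrentHashMap<>();

    public void addActiveCache(int cacheId) {
        activeCacheIds.add(cacheId);  // safe from any thread
    }

    public void addEntry(long key, String entry) {
        txMap.put(key, entry);        // safe from any thread
    }

    public int entryCount() {
        return txMap.size();
    }
}
```

Whether this costs too much on the hot path versus properly routing all access onto one stripe is exactly the trade-off the thread is debating.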