Hi.

You can try OPTIMISTIC SERIALIZABLE isolation; it might give better
throughput in contended scenarios.
But this is not the same as a RW lock, because a tx can be invalidated at
commit time if a lock conflict is detected.
No RW lock of any kind is planned, AFAIK.
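
In case it helps, a minimal sketch of the retry pattern I mean (the cache
name, key, and increment logic are just placeholders): an optimistic
serializable read-modify-write that retries on TransactionOptimisticException
instead of blocking on entry locks.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionOptimisticException;

import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

public class OptimisticSerializableExample {
    /** Read-modify-write of a single entry; cache name and key are placeholders. */
    static void increment(Ignite ignite) {
        IgniteCache<String, Long> cache = ignite.cache("myTxCache");

        while (true) {
            try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                Long val = cache.get("counter");

                cache.put("counter", val == null ? 1L : val + 1);

                // No entry locks are held until this point; conflicts are detected
                // on commit and reported as TransactionOptimisticException.
                tx.commit();

                return;
            }
            catch (TransactionOptimisticException e) {
                // Another transaction updated the same entry concurrently; retry.
            }
        }
    }
}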

Tue, 7 Dec 2021 at 23:22, <jay.et...@gmx.de>:

> Dear all,
>
>
>
> we’ve been running in circles with Ignite for a long time now. Can anyone
> please help? All our attempts to custom-build a reader-writer lock (or
> re-entrant lock) for use inside transactions have failed.
>
>
>
> Background:
>
> - Multi-node setup
>
> - Very high throughput mixed read/write cache access
>
> - Key-Value API using transactional caches
>
> - Strong consistency absolute requirement
>
> - Transactional context required for guarantees and fault-tolerance
>
>
>
> Using pessimistic repeatable-read transactions gives strong consistency
> but kills performance when there is a large number of operations on the
> same cache entry (and they tend to introduce performance penalties for
> whole-cache operations and difficulties with cross-cache locking as well).
> All other transactional modes violate the strong consistency requirement,
> as far as we can see and have been able to test so far.
>
>
>
> In other distributed environments we use reader-writer locks to gain both
> strong consistency and high performance with mixed workloads. In Ignite,
> however, explicit locks apparently cannot be used inside transactions: the
> documentation states this clearly (
> https://ignite.apache.org/docs/latest/distributed-locks), and whenever we
> try to custom-build a reader-writer lock for use inside transactions we end
> up concluding that this may not be achievable when there are multiple ways
> to implicitly acquire locks but none to release them.
>
>
>
> Are we out of luck here or
>
> - did we miss something?
>
> - are there workarounds you know of?
>
> - are there plans to implement transactional re-entrant locks in future
> releases?
>
>
>
> Jay
>
>
>
>
>


-- 

Best regards,
Alexei Scherbakov
