Igniters,

I’ve implemented a new fast reentrant lock in the scope of an issue [1].

Could someone review the prepared PR [2, 3, 4]?

Some details are described below.

The main idea:

The current lock implementation is based on shared state in a cache, while
the new lock uses IgniteCache#invoke* methods to update the shared state
atomically. The new lock implementation doesn’t use a continuous query, so
the cache can now be atomic.
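To illustrate the idea, here is a minimal self-contained sketch of updating a shared lock state through a single atomic callback. It uses plain JDK only: ConcurrentHashMap#compute stands in for IgniteCache#invoke (in Ignite the closure would run on the primary node for the key), and the LockState class and method names are hypothetical, not the ones in the PR.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch: atomic update of a shared reentrant-lock state via one
// entry-processor-style callback. ConcurrentHashMap#compute stands in
// for IgniteCache#invoke here.
public class InvokeLockSketch {
    // Hypothetical shared state: current owner and reentrancy count.
    static final class LockState {
        final String owner;
        final int count;
        LockState(String owner, int count) { this.owner = owner; this.count = count; }
    }

    private final ConcurrentHashMap<String, LockState> cache = new ConcurrentHashMap<>();

    // Try to acquire (or re-enter) the lock named `key` on behalf of `nodeId`.
    public boolean tryAcquire(String key, String nodeId) {
        boolean[] acquired = {false};
        cache.compute(key, (k, st) -> {
            if (st == null) {                  // free: take it
                acquired[0] = true;
                return new LockState(nodeId, 1);
            }
            if (st.owner.equals(nodeId)) {     // reentrant acquire
                acquired[0] = true;
                return new LockState(nodeId, st.count + 1);
            }
            return st;                         // held by another owner
        });
        return acquired[0];
    }

    // Release one level of reentrancy; remove the entry when fully released.
    public void release(String key, String nodeId) {
        cache.compute(key, (k, st) -> {
            if (st == null || !st.owner.equals(nodeId))
                throw new IllegalMonitorStateException("not owner");
            return st.count > 1 ? new LockState(nodeId, st.count - 1) : null;
        });
    }
}
```

The point is that all state transitions happen inside one closure executed atomically against the entry, so no transactions or continuous queries are needed.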

The new lock implementation comes in two flavors, fair and unfair, split
into separate classes for performance reasons.

Some benchmark results (hardware: Core i5 (2nd gen) + 6 GB RAM):

Speed-up, single thread, fair:   21.9x (1 node), 3.4x (2 nodes), 9.9x (5 nodes), 17.9x (10 nodes)

Speed-up, single thread, unfair: 22.4x (1 node), 3.2x (2 nodes), 8.0x (5 nodes), 19.0x (10 nodes)

Speed-up, multi-thread, fair:    3.9x (1 node, 2 threads), 3.5x (1 node, 10 threads), 13.5x (5 nodes, 2 threads), 15.0x (5 nodes, 10 threads)

Speed-up, multi-thread, unfair:  33.5x (1 node, 2 threads), 210x (1 node, 10 threads), 318x (5 nodes, 2 threads), 389x (5 nodes, 10 threads)

Benchmark summary:

1) The unfair lock has a local reentrant lock which is used for local
synchronization as a guard in front of the shared state. This allows it to
reach performance close to that of a local reentrant lock.

2) Only one server node can be the primary node for the shared state, which
gives us a performance boost on a single node only.

3) The speedup grows with the number of nodes.
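The local-guard idea from point 1 can be sketched with plain JDK primitives. This is only an illustration of the pattern, not the PR's code: the ReentrantLock is the per-node guard, and an AtomicBoolean stands in for the distributed shared state that Ignite would keep in a cache entry.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the "local guard" behind the unfair lock: threads on the same
// node serialize on a cheap JVM-local ReentrantLock first, so at most one
// thread per node ever touches the shared (distributed) state.
public class UnfairLockSketch {
    private final ReentrantLock localGuard = new ReentrantLock();  // per-node guard
    private final AtomicBoolean sharedState = new AtomicBoolean(); // stand-in for the cache entry

    public void lock() {
        localGuard.lock();                       // local contention is resolved here
        if (localGuard.getHoldCount() == 1) {
            // First (non-reentrant) acquisition: also take the shared lock.
            // In Ignite this step would be a cache invoke plus a wait.
            while (!sharedState.compareAndSet(false, true))
                Thread.onSpinWait();
        }
        // Reentrant acquisitions never touch the shared state at all.
    }

    public void unlock() {
        if (localGuard.getHoldCount() == 1)
            sharedState.set(false);              // last local release frees the shared lock
        localGuard.unlock();
    }

    // Exposed for illustration: whether the shared state is currently held.
    public boolean sharedHeld() {
        return sharedState.get();
    }
}
```

Because reentrant and same-node acquisitions never leave the JVM, local throughput approaches that of java.util.concurrent's ReentrantLock, which matches the multi-thread unfair numbers above.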


[1] JIRA: https://issues.apache.org/jira/browse/IGNITE-4908

[2] PR: https://github.com/apache/ignite/pull/2360

[3] Upsource review:
https://reviews.ignite.apache.org/ignite/review/IGNT-CR-248

[4] Team City:
https://ci.ignite.apache.org/project.html?projectId=Ignite20Tests&branch_Ignite20Tests=pull%2F2360%2Fhead
