Hi,

While reviewing the PR for this issue: https://hibernate.atlassian.net/browse/HHH-10649 I realized that the READ_WRITE cache concurrency strategy has a flaw that permits "read uncommitted" anomalies.

The RW cache concurrency strategy guards any modification with Lock entries, as explained in this post that I wrote some time ago: http://vladmihalcea.com/2015/05/25/how-does-hibernate-read_write-cacheconcurrencystrategy-work/

Every time we update or delete an entry, a Lock is put in the cache under the entity key, and this way "read uncommitted" anomalies should be prevented.

The problem comes when entries are evicted, either explicitly:

    session.getSessionFactory().getCache().evictEntity( CacheableItem.class, item1.getId() );

or implicitly:

    session.refresh( item1 );

During eviction, the second-level cache removes the Lock entry, so if the user then loads the entity anew (in the same transaction that modified the entity but has not yet committed), the uncommitted change can be propagated into the second-level cache.

This issue is reproduced by the PR associated with this Jira issue, and I also reproduced it with manual eviction and entity loading (a sketch of that reproducer follows at the end of this message).

To fix it, the RW cache concurrency strategy should not delete entries from the second-level cache upon eviction; instead, it should turn them into Lock entries (a sketch of that idea also follows below). For the evict method this is not really a problem, but evictAll would imply taking all entries and replacing them with Locks, and that might not perform very well in a distributed-cache scenario.

Ideally, Lock entries would be stored separately from the actual cached value entries, and this problem would be fixed in a much cleaner fashion.

Let me know what you think about this.

Vlad
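
P.S. For reference, here is a minimal sketch of the manual eviction/reload reproducer described above. The entity name CacheableItem and the evictEntity call come from the snippet earlier in this message; everything else (the setName property, the class and method names, the session handling) is assumed scaffolding for illustration only, and it presumes a SessionFactory with the second-level cache enabled and CacheableItem mapped with the READ_WRITE strategy:

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;

    public class EvictionAnomalySketch {

        static void reproduce(SessionFactory sessionFactory) {
            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                CacheableItem item1 = session.get( CacheableItem.class, 1L );
                item1.setName( "uncommitted change" ); // assumed property
                session.flush(); // the change hits the DB; tx is still open

                // eviction removes the Lock entry guarding the modification
                session.getSessionFactory().getCache()
                        .evictEntity( CacheableItem.class, item1.getId() );

                // detach everything so the next get() bypasses the persistence context
                session.clear();

                // loading anew caches the flushed-but-uncommitted state, which a
                // concurrent session can now read from the second-level cache
                session.get( CacheableItem.class, 1L );
            }
            finally {
                tx.rollback(); // rolls back the DB, but the stale cache entry survives
                session.close();
            }
        }
    }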
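
P.P.S. And here is a toy model of the proposed fix, i.e. eviction leaving a Lock marker behind instead of removing the entry. This is deliberately not Hibernate's actual internal API; the class, the LOCK marker, and the method names are all illustrative:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class LockOnEvictRegionSketch {

        // marker left behind by eviction; a real implementation would carry a
        // timestamp/version so that the lock can eventually expire
        private static final Object LOCK = new Object();

        private final ConcurrentMap<Object, Object> entries = new ConcurrentHashMap<>();

        public void evict(Object key) {
            // today's behavior amounts to entries.remove( key ), which drops the
            // guard; the proposal is to leave a Lock in its place instead
            entries.put( key, LOCK );
        }

        public boolean putFromLoad(Object key, Object value) {
            // a put against a locked (or already populated) key is refused, so a
            // transaction that evicted and reloaded its own uncommitted state
            // cannot propagate that state into the cache
            return entries.putIfAbsent( key, value ) == null;
        }

        public void unlock(Object key) {
            // called after transaction completion; only then may new values be
            // cached under this key again
            entries.remove( key, LOCK );
        }

        public Object get(Object key) {
            Object entry = entries.get( key );
            return entry == LOCK ? null : entry; // locked keys read as a cache miss
        }
    }

Under this model, evictAll would have to iterate the region and replace every entry with a Lock, which is exactly the distributed-cache cost mentioned above; storing the Locks in a separate structure from the value entries would avoid rewriting the values altogether.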
https://hibernate.atlassian.net/browse/HHH-10649 I realized that the ReadWrite cache concurrency strategy has a flaw that permits "read uncommitted" anomalies. The RW cache concurrency strategy guards any modifications with Lock entries, as explained in this post that I wrote some time ago: http://vladmihalcea.com/2015/05/25/how-does-hibernate-read_write-cacheconcurrencystrategy-work/ Every time we update/delete an entry, a Lock is put in the cache under the entity key, and, this way, "read uncommitted" anomalies should be prevented. The problem comes when entries are evicted either explicitly: session.getSessionFactory().getCache().evictEntity( CacheableItem.class, item1.getId() ); or implicitly: session.refresh( item1 ); During eviction, the 2PL will remove the Lock entry, and if the user attempts to load the entity anew (in the same transaction that has modified the entity but which is not committed yet), an uncommitted change could be propagated to the 2PL. This issue is replicated by the PR associated to this Jira issue, and I also replicated it with manual eviction and entity loading. To fix it, the RW cache concurrency strategy should not delete entries from 2PL upon eviction, but instead it should turn them in Lock entries. For the evict method, this is not really a problem, but evictAll would imply taking all entries and replacing them with Locks, and that might not perform very well in a distributed-cache scenario. Ideally, lock entries would be stored separately than actual cached value entries, and this problem would be fixed in a much cleaner fashion. Let me know what you think about this. Vlad _______________________________________________ hibernate-dev mailing list hibernate-dev@lists.jboss.org https://lists.jboss.org/mailman/listinfo/hibernate-dev