[ https://issues.apache.org/jira/browse/KUDU-613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17926993#comment-17926993 ]
ASF subversion and git services commented on KUDU-613:
-------------------------------------------------------

Commit 5967ab153fc0e22b42ae0308ee557a68a9ca6014 in kudu's branch refs/heads/branch-1.18.x from Mahesh Reddy
[ https://gitbox.apache.org/repos/asf?p=kudu.git;h=5967ab153 ]

KUDU-613: Fix BlockCache Constructor

The capacity constraints are not calculated properly when creating
a block cache with the slru eviction policy. This patch fixes
this miscalculation.

Change-Id: Icfde56fd766ba7160052e88ca09a63845f3297c6
Reviewed-on: http://gerrit.cloudera.org:8080/22478
Reviewed-by: Alexey Serbin <ale...@apache.org>
Tested-by: Alexey Serbin <ale...@apache.org>
(cherry picked from commit 1d46b2fcdba6b30c52ebbba8725a16d749e4f857)
Reviewed-on: http://gerrit.cloudera.org:8080/22484
Reviewed-by: Abhishek Chennaka <achenn...@cloudera.com>
Reviewed-by: Mahesh Reddy <mre...@cloudera.com>

> Scan-resistant cache replacement algorithm for the block cache
> --------------------------------------------------------------
>
>                 Key: KUDU-613
>                 URL: https://issues.apache.org/jira/browse/KUDU-613
>             Project: Kudu
>          Issue Type: Improvement
>          Components: perf
>    Affects Versions: M4.5
>            Reporter: Andrew Wang
>            Assignee: Mahesh Reddy
>            Priority: Major
>              Labels: performance, roadmap-candidate
>
> The block cache currently uses LRU, which is vulnerable to large scan
> workloads. It'd be good to implement something like 2Q.
> ARC (patent encumbered, but good for ideas):
> https://www.usenix.org/conference/fast-03/arc-self-tuning-low-overhead-replacement-cache
> HBase (2Q like):
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
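
For context on the kind of capacity miscalculation the patch addresses, here is a minimal C++ sketch of how a segmented-LRU block cache might derive its per-segment capacities from a single configured total at construction time. The class and member names (SlruCacheSketch, protected_fraction, and so on) are illustrative assumptions, not Kudu's actual BlockCache code; the point is only that both segment capacities must be computed consistently from the same configured budget.

// Hypothetical sketch, not Kudu's BlockCache: an SLRU cache splits one
// configured budget between a probationary segment (blocks seen once) and a
// protected segment (blocks seen again). Both capacities are derived from
// the same total so they always sum to the configured budget.
#include <cassert>
#include <cstddef>

class SlruCacheSketch {
 public:
  // total_capacity: overall cache budget in bytes.
  // protected_fraction: share of the budget reserved for the protected segment.
  SlruCacheSketch(size_t total_capacity, double protected_fraction)
      : protected_capacity_(static_cast<size_t>(total_capacity * protected_fraction)),
        probationary_capacity_(total_capacity - protected_capacity_) {
    assert(protected_fraction > 0.0 && protected_fraction < 1.0);
  }

  size_t protected_capacity() const { return protected_capacity_; }
  size_t probationary_capacity() const { return probationary_capacity_; }

 private:
  // Declaration order matters here: protected_capacity_ is initialized first
  // and is then used to compute probationary_capacity_.
  size_t protected_capacity_;
  size_t probationary_capacity_;
};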
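
The issue itself asks for a 2Q-like, scan-resistant policy. As a rough illustration of that idea (again a sketch under assumed names, not Kudu's implementation), the access path of a segmented LRU looks like this: a first-touch block lands in the probationary segment, and only a repeat access promotes it to the protected segment, so a one-pass scan churns probationary entries while the frequently re-read working set stays protected.

// Illustrative 2Q/SLRU-style access path (sketch only). Keys seen once live
// in a probationary LRU; a second access promotes them to the protected LRU.
// A large one-pass scan therefore cycles keys through the probationary
// segment without evicting frequently re-accessed keys from the protected one.
#include <cstddef>
#include <iterator>
#include <list>
#include <string>
#include <unordered_map>

class SegmentedLruSketch {
 public:
  SegmentedLruSketch(size_t probationary_cap, size_t protected_cap)
      : probationary_cap_(probationary_cap), protected_cap_(protected_cap) {}

  // Records an access to 'key'; returns true if the key was already cached.
  bool Access(const std::string& key) {
    if (auto it = protected_idx_.find(key); it != protected_idx_.end()) {
      // Hit in the protected segment: move the entry to its MRU end.
      protected_lru_.splice(protected_lru_.end(), protected_lru_, it->second);
      return true;
    }
    if (auto it = probationary_idx_.find(key); it != probationary_idx_.end()) {
      // Second access: promote from probationary to protected.
      probationary_lru_.erase(it->second);
      probationary_idx_.erase(it);
      Insert(key, &protected_idx_, &protected_lru_, protected_cap_);
      return true;
    }
    // First access: only the probationary segment is touched.
    Insert(key, &probationary_idx_, &probationary_lru_, probationary_cap_);
    return false;
  }

 private:
  using Lru = std::list<std::string>;
  using Index = std::unordered_map<std::string, Lru::iterator>;

  // Inserts 'key' at the MRU end of a segment, evicting that segment's LRU
  // entry if the segment is over capacity.
  static void Insert(const std::string& key, Index* idx, Lru* lru, size_t cap) {
    lru->push_back(key);
    (*idx)[key] = std::prev(lru->end());
    if (lru->size() > cap) {
      idx->erase(lru->front());
      lru->pop_front();
    }
  }

  size_t probationary_cap_;
  size_t protected_cap_;
  Index probationary_idx_;
  Index protected_idx_;
  Lru probationary_lru_;
  Lru protected_lru_;
};

A real block cache would track entry sizes in bytes rather than entry counts and decide whether protected-segment evictions fall back into the probationary segment; those details are omitted from the sketch.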