[
https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903831#comment-15903831
]
Edward Bortnikov commented on HBASE-16417:
------------------------------------------
bq. On the 90th percentile degradation when BASIC, how many segments we talking
.... 2 or 3 or more than this?
Taking the liberty of answering for [~eshcar]. The current default size cap for
the active segment (which triggers an in-memory flush) is 1/4 of the memstore
size cap that triggers a disk flush, so the expected number of segments in the
pipeline is 4/2 = 2. However, since a disk flush is not immediate, new segments
can sometimes pile up, especially under a very high write rate like the one
exercised in our test. We don't have easily trackable metrics installed (maybe
we should), but we are probably talking about many more segments here. The
number cannot exceed 30; at that point a forced merge happens. We guess that
looking up the key in every single segment (to initialize the scan) is what
leads to the high tail latency. We are taking a closer look at merge (index
compaction only, no data copy); hopefully we'll show there is no material
damage from it. Even EAGER does not look too bad. A matter of a few more days
of experimentation. Thanks.
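For concreteness, the pipeline arithmetic above can be sketched as follows. The class, method, and constant names are hypothetical illustrations, not HBase APIs; the numbers (1/4 cap ratio, forced merge at 30) come from the comment itself:

```java
// Hypothetical sketch of the in-memory flush pipeline arithmetic.
public class PipelineEstimate {

    // The active segment is flushed in-memory when it reaches 1/4 of the
    // memstore disk-flush cap, so up to 4 segments accumulate per
    // disk-flush cycle; on average the pipeline holds half of that.
    static int expectedSegments(int capRatioDenominator) {
        return capRatioDenominator / 2;  // e.g. 4 / 2 = 2
    }

    // The pipeline is bounded: once it reaches 30 segments,
    // a forced merge collapses them into one.
    static final int FORCED_MERGE_THRESHOLD = 30;

    // Initializing a scan requires seeking the key in every segment,
    // so its cost grows linearly with the pipeline length -- the
    // suspected source of the 90th-percentile latency degradation.
    static int scanInitSeeks(int pipelineSegments) {
        return Math.min(pipelineSegments, FORCED_MERGE_THRESHOLD);
    }
}
```

Under the default configuration this gives 2 expected seeks per scan init, but under write pressure the seek count can climb toward the merge threshold of 30.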
> In-Memory MemStore Policy for Flattening and Compactions
> --------------------------------------------------------
>
> Key: HBASE-16417
> URL: https://issues.apache.org/jira/browse/HBASE-16417
> Project: HBase
> Issue Type: Sub-task
> Reporter: Anastasia Braginsky
> Assignee: Eshcar Hillel
> Fix For: 2.0.0
>
> Attachments: HBASE-16417-benchmarkresults-20161101.pdf,
> HBASE-16417-benchmarkresults-20161110.pdf,
> HBASE-16417-benchmarkresults-20161123.pdf,
> HBASE-16417-benchmarkresults-20161205.pdf,
> HBASE-16417-benchmarkresults-20170309.pdf
>
>
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)