[ https://issues.apache.org/jira/browse/HBASE-19496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288901#comment-16288901 ]

Chia-Ping Tsai commented on HBASE-19496:
----------------------------------------

bq. So this issue comes with Netty default server only?
The ByteBuffer passed from Netty is always reusable, and the simple server uses a reusable ByteBuffer only under the conditions shown below.
{code}
    // We create random on heap buffers are read into those when
    // 1. ByteBufferPool is not there.
    // 2. When the size of the req is very small. Using a large sized (64 KB) buffer from pool is
    // waste then. Also if all the reqs are of this size, we will be creating larger sized
    // buffers and pool them permanently. This include Scan/Get request and DDL kind of reqs like
    // RegionOpen.
    // 3. If it is an initial handshake signal or initial connection request. Any way then
    // condition 2 itself will match
    // 4. When SASL use is ON.
    if (this.rpcServer.reservoir == null || skipInitialSaslHandshake || !connectionHeaderRead ||
        useSasl || length < this.rpcServer.minSizeForReservoirUse) {
      this.data = new SingleByteBuff(ByteBuffer.allocate(length));
    } else {
      Pair<ByteBuff, CallCleanup> pair = RpcServer.allocateByteBuffToReadInto(
        this.rpcServer.reservoir, this.rpcServer.minSizeForReservoirUse, length);
      this.data = pair.getFirst();
      this.callCleanup = pair.getSecond();
    }
{code}

bq. So seems for ServerLoad, the request size is more and the PB data size may be more.
If a ServerLoad request is larger than {{minSizeForReservoirUse}}, the pool is used. The threshold is configurable, so the bug is easy to reproduce by lowering it.
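A simplified sketch of the threshold branch above (a hypothetical standalone method, not the actual HBase code): lowering the minimum makes even a small request take the pooled, reusable-buffer path, which is what makes the bug reproducible.

```java
public class ReservoirChoice {
    // Mirrors the branch in the simple server's read path: requests below the
    // threshold get a fresh on-heap buffer; larger ones borrow a reusable
    // buffer from the pool (SASL/handshake cases omitted for brevity).
    static boolean usesPool(int length, int minSizeForReservoirUse, boolean poolExists) {
        return poolExists && length >= minSizeForReservoirUse;
    }

    public static void main(String[] args) {
        // With a larger threshold, a small request stays on heap...
        System.out.println(usesPool(1024, 10 * 1024, true)); // false
        // ...but shrinking the threshold forces it onto the pool.
        System.out.println(usesPool(1024, 512, true));       // true
    }
}
```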

bq. We should ideally have an inspection at a stage after the read at Rpc server (where we already know the request is for which method) and take a call abt the copy to new data structure.
The rpc server has no idea how the server will use the pb object, even if we parse the method from the request. Cloning all request data in the rpc layer may burn out the server. I prefer to assume that every pb object passed from the rpc layer is modifiable, and the rs/master should clone the pb object if it wants to keep it after the call is done.
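A minimal illustration of the aliasing hazard, using plain ByteBuffers rather than the HBase/PB types (the method names here are hypothetical): an object that keeps a view into the shared request buffer is silently rewritten when the rpc layer recycles that buffer, so the callee must copy the bytes it wants to retain.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ReusedBufferDemo {
    // Hypothetical "parse" that keeps a view into the shared request buffer,
    // like a pb object backed by the rpc layer's pooled ByteBuffer.
    static ByteBuffer parseWithoutCopy(ByteBuffer reqBuffer) {
        return reqBuffer.duplicate(); // independent position, SAME backing storage
    }

    // Defensive copy: what a region server / master should do before
    // retaining the object past the end of the call.
    static byte[] deepCopy(ByteBuffer view) {
        byte[] copy = new byte[view.remaining()];
        view.duplicate().get(copy);
        return copy;
    }

    public static void main(String[] args) {
        ByteBuffer pooled = ByteBuffer.allocate(8);
        pooled.put("request1".getBytes(StandardCharsets.UTF_8)).flip();

        ByteBuffer aliased = parseWithoutCopy(pooled);
        byte[] safe = deepCopy(pooled);

        // The rpc layer reuses the pooled buffer for the next request.
        pooled.clear();
        pooled.put("garbage!".getBytes(StandardCharsets.UTF_8)).flip();

        // The aliased view is corrupted; the deep copy survives.
        System.out.println(new String(deepCopy(aliased), StandardCharsets.UTF_8)); // garbage!
        System.out.println(new String(safe, StandardCharsets.UTF_8));              // request1
    }
}
```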

bq. Ideally in HM side, the BBpool itself should not get created.
HM can host normal regions, so I think the BBpool is still useful for HM.

> Reusing the ByteBuffer in rpc layer corrupt the ServerLoad and RegionLoad
> -------------------------------------------------------------------------
>
>                 Key: HBASE-19496
>                 URL: https://issues.apache.org/jira/browse/HBASE-19496
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Chia-Ping Tsai
>            Assignee: Chia-Ping Tsai
>            Priority: Blocker
>             Fix For: 2.0.0-beta-1
>
>         Attachments: HBASE-19496.wip.patch
>
>
> {{ServerLoad}} and {{RegionLoad}} store the pb object internally, but the bytebuffer backing the pb object may be reused in the rpc layer. Hence, the {{ServerLoad}} and {{RegionLoad}} saved by {{HMaster}} will be corrupted if the backing bytebuffer is modified.
> This issue doesn't happen on branch-1:
> # the netty server was introduced in 2.0 (see HBASE-17263)
> # reusing bytebuffers to read RPC requests was introduced in 2.0 (see HBASE-15788)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
