If your files are on the same cluster, then this is not the issue. Do you
have the command line you use to launch your bulk load?
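For reference, a bulk load of pre-generated HFiles is typically launched with the LoadIncrementalHFiles tool, along these lines (the HFile directory below is an illustrative placeholder, not a path from this thread; the table name is taken from the region in your trace):

```shell
# Sketch of a typical bulk load invocation; /user/jianshi/hfiles is a
# hypothetical placeholder for wherever the generated HFiles live.
HFILE_DIR="/user/jianshi/hfiles"
TABLE="grapple_edges_v2"

# On a real cluster you would execute this command directly; here we only
# assemble and print it so the sketch can run anywhere.
CMD="hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles $HFILE_DIR $TABLE"
echo "$CMD"
```

If you use something different (e.g. the `completebulkload` driver or a custom client), please paste that instead.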



2014-09-05 11:54 GMT-04:00 Jianshi Huang <jianshi.hu...@gmail.com>:

> Hi JM,
>
> What do you mean by the 'destination cluster'? The files are in the same
> Hadoop/HDFS cluster where HBase is running.
>
> Do you mean running the bulk import on the HBase Master node?
>
>
> Jianshi
>
>
> On Fri, Sep 5, 2014 at 11:18 PM, Jean-Marc Spaggiari <
> jean-m...@spaggiari.org> wrote:
>
> > Hi Jianshi,
> >
> > You might want to upload the files to the destination cluster first and
> > then re-run your bulk load from there. That way the transfer time will
> > not be counted against the timeout since the files will be local.
> >
> > JM
> >
> >
> > 2014-09-05 11:15 GMT-04:00 Jianshi Huang <jianshi.hu...@gmail.com>:
> >
> > > I'm importing 2TB of generated HFiles to HBase and I constantly get the
> > > following errors:
> > >
> > > Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.RegionTooBusyException):
> > > org.apache.hadoop.hbase.RegionTooBusyException: failed to get a lock in 60000 ms.
> > > regionName=grapple_edges_v2,ff000000,1409817320781.6d2955c780b39523de733f3565642d96.,
> > > server=xxxxx.xxx.xxx,60020,1404854700728
> > >         at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5851)
> > >         at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5837)
> > >         at org.apache.hadoop.hbase.regionserver.HRegion.startBulkRegionOperation(HRegion.java:5795)
> > >         at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3543)
> > >         at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3525)
> > >         at org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3277)
> > >         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28863)
> > >         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> > >         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> > >         at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> > >         at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> > >         at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> > >         at java.lang.Thread.run(Thread.java:724)
> > >
> > >         at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1498)
> > >         at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684)
> > >         at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737)
> > >         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.bulkLoadHFile(ClientProtos.java:29276)
> > >         at org.apache.hadoop.hbase.protobuf.ProtobufUtil.bulkLoadHFile(ProtobufUtil.java:1548)
> > >         ... 11 more
> > >
> > >
> > > What makes the region too busy? Is there a way to avoid this?
> > >
> > > Does that also mean some part of my data was not correctly imported?
> > >
> > >
> > > Thanks,
> > >
> > > --
> > > Jianshi Huang
> > >
> > > LinkedIn: jianshi
> > > Twitter: @jshuang
> > > Github & Blog: http://huangjs.github.com/
> > >
> >
>
>
>
> --
> Jianshi Huang
>
> LinkedIn: jianshi
> Twitter: @jshuang
> Github & Blog: http://huangjs.github.com/
>
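One more note on the 60000 ms in the trace: that figure matches HBase's region busy-wait duration. Assuming your version reads the same property (worth verifying against your HBase release before relying on it), you could raise it in hbase-site.xml for the duration of the load:

```xml
<!-- Sketch only: property name assumed from the HRegion lock code path;
     check your HBase version's defaults before applying. -->
<property>
  <name>hbase.busy.wait.duration</name>
  <!-- default is 60000 ms, matching the timeout in the trace above -->
  <value>180000</value>
</property>
```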
