Sent: July 08, 2013 5:49 AM
To: user@hbase.apache.org
Subject: Re: Hbase random read performance

Moving to the HBase user mailing list.

Can you upgrade to a newer release such as 0.94.8?

Cheers

On Jul 8, 2013, at 4:36 AM, Boris Emelyanov wrote:
> I'm trying to configure HBase for fully random read performance; my cluster
> parameters are:
>
> 9 servers as slaves, each has two 1TB HDD as had...
From: Ted Yu
To: user@hbase.apache.org
Sent: Monday, April 15, 2013 10:03 AM
Subject: Re: Reply: HBase random read performance

This is a related JIRA which should provide a noticeable speed-up:

HBASE-1935 Scan in parallel

Cheers

On Mon, Apr 15, 2013 at 7:13 AM, ...
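[Editor's note: HBASE-1935 is about having the client scan in parallel. For the multi-get pattern discussed in this thread, a similar effect can be approximated today by sharding the Get list over a small thread pool. The sketch below is a rough illustration against the 0.94-era client API, not code from the thread; the chunking, pool size, and usage are assumptions, and each worker opens its own HTable because HTable is not thread-safe.]

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;

public class ParallelMultiGet {
  // Split one large multi-get into chunks and run the chunks on a thread pool.
  public static List<Result> get(final Configuration conf, final String tableName,
      List<Get> gets, int threads) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      int chunk = (gets.size() + threads - 1) / threads;
      List<Future<Result[]>> futures = new ArrayList<Future<Result[]>>();
      for (int i = 0; i < gets.size(); i += chunk) {
        final List<Get> slice = gets.subList(i, Math.min(i + chunk, gets.size()));
        futures.add(pool.submit(new Callable<Result[]>() {
          public Result[] call() throws Exception {
            HTable table = new HTable(conf, tableName);  // one HTable per thread
            try {
              return table.get(slice);                   // batched multi-get per chunk
            } finally {
              table.close();
            }
          }
        }));
      }
      List<Result> results = new ArrayList<Result>();
      for (Future<Result[]> f : futures) {
        for (Result r : f.get()) {
          results.add(r);
        }
      }
      return results;
    } finally {
      pool.shutdown();
    }
  }
}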
> We are retrieving all the 1 rows in one call.
>
> Ans3:
> Disk detail:
> Model Number: ST2000DM001-1CH164
> Serial Number: Z1E276YF
>
> Ankit Jain
>
> On Mon, Apr 15, 2013 at 5:11 PM, 谢良 wrote:
>
>> First, it probably won't help to set the block size to 4KB; please
>> refer to the beginning of HFile.java:
>>
>> Smaller blocks are good for random access, but require more memory to
>> hold the block index, and may be slower to create (because we must
>> flush the compressor stream at the conclusion of each data block, which
>> leads to an FS I/O flush). Further, due to the internal caching in
>> Compression codec, the smallest possible block size would be around
>> 20KB-30KB.
>>
>> Second, is it a single-thread test client or multi-threads? We couldn't
>> expect too much if the requests are one by one.
>>
>> Third, could you provide more info about your DN disk numbers and IO
>> utils?
>>
>> Thanks,
>> Liang
>>
>> From: Ankit Jain [ankitjainc...@gmail.com]
>> Sent: April 15, 2013 18:53
>> To: user@hbase.apache.org
>> Subject: Re: HBase random read performance
>>
>> Hi Anoop,
>>
>> Thanks for the reply.
>>
>> I tried by setting HFile...
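[Editor's note: a hedged sketch of how the block-size experiment above could be carried out with the 0.94-era admin API; "mytable" and "cf" are made-up names, and 32 KB simply follows the 20KB-30KB floor quoted from HFile.java. Existing HFiles keep their old block size until they are rewritten, e.g. by a major compaction, which is what the next message alludes to.]

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class AdjustBlockSize {
  public static void main(String[] args) throws Exception {
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    // Fetch the current schema of the (hypothetical) family "cf" in "mytable".
    HColumnDescriptor cf = admin.getTableDescriptor("mytable".getBytes())
        .getFamily("cf".getBytes());
    cf.setBlocksize(32 * 1024);          // ~32 KB instead of the 64 KB default
    admin.disableTable("mytable");       // online schema change is off by default in 0.94
    admin.modifyColumn("mytable", cf);
    admin.enableTable("mytable");
    // New HFiles use the new block size; old ones keep theirs until compacted.
    admin.close();
  }
}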
(if major compaction was not done at the time of testing)

-Anoop-

From: Ankit Jain [ankitjainc...@gmail.com]
Sent: Saturday, April 13, 2013 11:01 AM
To: user@hbase.apache.org
Subject: HBase random read performance
Interesting. Can you explain why this happens?

-----Original Message-----
From: Anoop Sam John [mailto:anoo...@huawei.com]
Sent: Monday, April 15, 2013 3:47 PM
To: user@hbase.apache.org
Subject: RE: HBase random read performance

Ankit,

I guess you might be having default HFile...
Hello Ankit,

How exactly are you trying to fetch the data? Some tips to enhance the
reads could be:

Use of scan caching.
Good rowkey design.
Use of block cache.
Properly closing HTable and ResultScanner.
Use of bloom filters.
Use of Filters to limit the search.
Proper use of compression.
Use JB...
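[Editor's note: for the first few tips (scan caching, block cache, properly closing HTable and ResultScanner), a minimal sketch against the 0.94-era client API might look like the following; "mytable", "cf", and the caching value of 100 are illustrative assumptions, not values from the thread.]

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScanWithCaching {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");
    try {
      Scan scan = new Scan();
      scan.addFamily("cf".getBytes());
      scan.setCaching(100);        // fetch 100 rows per RPC instead of the default
      scan.setCacheBlocks(true);   // keep fetched blocks in the block cache
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result r : scanner) {
          // process r ...
        }
      } finally {
        scanner.close();           // always release the scanner
      }
    } finally {
      table.close();               // and the table
    }
  }
}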
Your setup is rather basic with 8 GB memory per server. You should run
Hadoop/HBase on better hardware than this.

On Sat, Apr 13, 2013 at 7:31 AM, Ankit Jain wrote:

> Hi All,
>
> We are using HBase 0.94.5 and Hadoop 1.0.4.
>
> We have HBase cluster of 5 nodes (5 regionservers and 1 master node). Each...
Hi Ankit,

Reads might be impacted by many specifications in your system.

As proposed above, bloom filters can help, but also caching, region sizes
and splits, etc. If you have only this table in your cluster, and so only
16 regions, you might want to split your table into smaller pieces. Also,
what...
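[Editor's note: on the "split your table into smaller pieces" point, a minimal hedged sketch with the 0.94-era admin API; the table name is an assumption. Pre-splitting at creation time (sketched at the end of this thread) is usually preferable, but an existing table can also be asked to split.]

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class SplitTable {
  public static void main(String[] args) throws Exception {
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    // Requests a split of each region of the table; the actual splits
    // happen asynchronously on the region servers.
    admin.split("mytable");
    admin.close();
  }
}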
> We are getting very low random read performance while performing multi-get
> from HBase.

What are you exactly trying to test here though? 1 random rows in a single
multi-get action from a single application thread, returning the assembled
list from across 5 servers in 17s, is an indicator...
Using bloom filters is almost mandatory there; you might also want to try
Short Circuit Reads and be sure you get 100% data locality (major_compact
your table first).
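[Editor's note: a hedged sketch of both suggestions with the 0.94-era admin API: turning on a ROW bloom filter for the family and then major-compacting so existing HFiles are rewritten with the blooms (the rewrite also helps restore locality). "mytable" and "cf" are made-up names, not from the thread.]

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.regionserver.StoreFile;

public class EnableBloomAndCompact {
  public static void main(String[] args) throws Exception {
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    HColumnDescriptor cf = admin.getTableDescriptor("mytable".getBytes())
        .getFamily("cf".getBytes());
    cf.setBloomFilterType(StoreFile.BloomType.ROW);  // row-level blooms suit point gets
    admin.disableTable("mytable");
    admin.modifyColumn("mytable", cf);
    admin.enableTable("mytable");
    admin.majorCompact("mytable");   // rewrite HFiles so they carry the blooms
    admin.close();
  }
}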
On Sat, Apr 13, 2013 at 5:16 PM, Ted Yu wrote:
> Did you enable bloom filters ?
> See http://hbase.apache.org/book.html#schema.bloom
Did you enable bloom filters ?
See http://hbase.apache.org/book.html#schema.bloom
Cheers
On Fri, Apr 12, 2013 at 10:31 PM, Ankit Jain wrote:
Hi All,

We are using HBase 0.94.5 and Hadoop 1.0.4.

We have HBase cluster of 5 nodes (5 regionservers and 1 master node). Each
regionserver has 8 GB RAM.

We have loaded 25 million records into an HBase table; regions are pre-split
into 16 regions and all the regions are equally loaded.

We are getting very low random read performance while performing multi-get
from HBase.
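[Editor's note: for reference, a pre-split table like the one described (16 regions) could be created roughly like this with the 0.94-era API; the table and family names and the single-byte split keys are assumptions, and the split points should follow the real row-key distribution.]

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreatePreSplitTable {
  public static void main(String[] args) throws Exception {
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());

    HTableDescriptor desc = new HTableDescriptor("mytable");
    desc.addFamily(new HColumnDescriptor("cf"));

    // 15 split keys give 16 regions; assumes row keys spread evenly over
    // single-byte prefixes 0x00..0xFF (adjust to the real key distribution).
    byte[][] splits = new byte[15][];
    for (int i = 0; i < 15; i++) {
      splits[i] = new byte[] { (byte) ((i + 1) * 16) };
    }
    admin.createTable(desc, splits);
    admin.close();
  }
}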