Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Adrien Mogenet
Are your GC logs enabled? Can you see any long pauses in them? On Fri, Aug 30, 2013 at 4:45 AM, Kiru Pakkirisamy wrote: > I just moved from 0.94.10 to 0.94.11. Tremendous improvement in our app's > query response. Went down to 1.3 sec from 1.7 sec. > Concurrent tests are also good, but it still ex

Re: counter Increment gives DonotRetryException

2013-08-29 Thread yeshwanth kumar
Hi Ted, columnar:column1 contains an integer value. If I perform the increment on a new row, then the increment operation is successful, like * incr 't1','newrow','cloumnar:column1',1* Hi Jean, I am not incrementing with a string value, I am giving an integer value. Thanks for the response guys, looking for sol

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Kiru Pakkirisamy
I just moved from 0.94.10 to 0.94.11. Tremendous improvement in our app's query response. Went down to 1.3 sec from 1.7 sec. Concurrent tests are also good, but it still exponentially degrades to 10 secs for 8 concurrent clients. There might be a bug lurking in there somewhere that is probabl

Re: observation while running hbase under load

2013-08-29 Thread Ted Yu
This JIRA is related: HBASE-8836. On Thu, Aug 29, 2013 at 2:22 PM, RK S wrote: > Does HBase give higher preference to writes than reads, if one tries to > do both operations for the same rowkey at the same time? > My scenario > > I am new to HBase and I am testing HBase for our datawarehouse s

RE: HBase client with security

2013-08-29 Thread Lanati, Matteo
Hi Harsh, thanks for the suggestion. I added HADOOP_PREFIX so that the conf folder is in the path. It still doesn't work, so I suppose Hadoop's core-site.xml is faulty (though I need a Kerberos ticket to use Hadoop, so security is working). In fact, when I try to list from HBase shell I get 13/0

Re: observation while running hbase under load

2013-08-29 Thread RK S
Does HBase give higher preference to writes than reads, if one tries to do both operations for the same rowkey at the same time? My scenario: I am new to HBase and I am testing HBase for our data warehouse solution. I am trying the following 2 scenarios. > > 10 Rows > Each of the Rowkey has 5000 C

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Saurabh Yahoo
Thanks Adrien. Based on the HBase book, it is listed as an experimental item ( http://hbase.apache.org/book/upgrade0.92.html), even though it was implemented back in 2011. Is anyone running this in production? Any feedback? Thanks, Saurabh. On Aug 29, 2013, at 4:07 PM, Adrien Mogenet wrote: > Ano

Re: Region server exception

2013-08-29 Thread Kiru Pakkirisamy
Ted, when there are more than 32 concurrent clients (in a 4-node x 8-core cluster), I keep getting responseTooSlow for my coprocessors. Our app is built mainly using coprocessors and a few multi-gets. (responseTooSlow): {"processingtimems":10682,"call":"execCoprocessor([B@511c627c, getFoo({T_520

Re: Region server exception

2013-08-29 Thread Ted Yu
This exception means some other thread was holding the lock for an extended period of time. Can you tell us more about your coprocessor? Thanks On Thu, Aug 29, 2013 at 12:55 PM, Kiru Pakkirisamy < kirupakkiris...@yahoo.com> wrote: > > > This exception stack happens from within my coprocessor cod

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Adrien Mogenet
Another point that could help you stay under the "1s SLA": enable direct byte buffers for LruBlockCache. Have a look at HBASE-4027. On Thu, Aug 29, 2013 at 9:27 PM, Kiru Pakkirisamy wrote: > Yes, in that case, it matters. I was talking about a case where you are > mostly serving from cache. > >

Region server exception

2013-08-29 Thread Kiru Pakkirisamy
This exception stack happens from within my coprocessor code on concurrent reads. Any ideas?

    java.io.InterruptedIOException
        at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5894)
        at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5875)
        at org.apache.hadoop.hbase

Re: Default balancer status

2013-08-29 Thread Jean-Marc Spaggiari
Thanks Bryan. That's what I was looking for. If I have time I will see if I can backport that into 0.94. For now I will go with the period option... JM 2013/8/29 Bryan Beaudreault > This was fixed in 0.95.2. > https://issues.apache.org/jira/browse/HBASE-6260 > > In the meantime you can set the

Re: Default balancer status

2013-08-29 Thread Bryan Beaudreault
This was fixed in 0.95.2. https://issues.apache.org/jira/browse/HBASE-6260 In the meantime you can set the hbase.balancer.period to a very large number. On Thu, Aug 29, 2013 at 3:32 PM, Jean-Marc Spaggiari < jean-m...@spaggiari.org> wrote: > Hi, > > Is there a way to have the balancer off by d
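Bryan's interim workaround can be sketched as an hbase-site.xml entry. The property name comes from his message; the value here is just an arbitrarily large interval, chosen so the balancer effectively never runs:

```xml
<!-- hbase-site.xml: effectively disable the periodic balancer by making
     its run interval very large (milliseconds; the 0.94 default is 300000,
     i.e. 5 minutes). -->
<property>
  <name>hbase.balancer.period</name>
  <value>300000000</value>
</property>
```

Note this only stops the periodic runs; an explicit balance_switch in the shell still controls on-demand balancing.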

Re: Error running hbase

2013-08-29 Thread Ted Yu
There was an answer at the end of the Stack Overflow URL you posted. If your problem isn't solved, please let us know some more details of your deployment: HBase version, config parameters, etc. Thanks On Thu, Aug 29, 2013 at 10:49 AM, jamal sasha wrote: > Hi, > I am trying to write directly to

Default balancer status

2013-08-29 Thread Jean-Marc Spaggiari
Hi, Is there a way to have the balancer off by default? We can turn it off using balancer_switch but when we restart the cluster, it's back to on. Any way to turn it off by default? Thanks, JM

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Kiru Pakkirisamy
Yes, in that case, it matters. I was talking about a case where you are mostly serving from cache.   Regards, - kiru Kiru Pakkirisamy | webcloudtech.wordpress.com From: Saurabh Yahoo To: "user@hbase.apache.org" Cc: "user@hbase.apache.org" Sent: Thursday,

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Saurabh Yahoo
Thanks Kiru. We have 10TB of data on disk; it would not fit in memory. Also, for the first read, HBase needs to go to disk, and it has to go through the network to read the blocks which are stored at other data nodes. So in my opinion, locality matters. Thanks, Saurabh. On Aug 29, 20

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Kiru Pakkirisamy
But the locality index should not matter, right, if you are in IN_MEMORY mode and you are running the test after a few runs to make sure the blocks are already in memory (i.e. blockCacheHit is high or blockCacheMiss is low)? Regards, - kiru Kiru Pakkirisamy | webcloudtech.wordpress.com _

RE: experiencing high latency for few reads in HBase

2013-08-29 Thread Vladimir Rodionov
Usually, either a cluster restart or a major compaction helps improve the locality index. There is an issue in region assignment after table disable/enable in 0.94.x (x < 11) which breaks HDFS locality. Fixed in 0.94.11. You can write your own routine to manually "localize" a particular table using pub

Re: Never ending "Doing distributed log split" task.,

2013-08-29 Thread Jean-Marc Spaggiari
I have not yet found an easy way to upgrade to hadoop 1.2.1 so I'm waiting for my 2nd cluster to be ready to install it with 1.2.1 and distcp to it. But that's outside of the scope of this discussion ;) 2013/8/29 Ted Yu > So you have HBASE-8670 in your deployment. > > Suggest upgrading hadoop t

observation while running hbase under load

2013-08-29 Thread Rahul Singh
Hi, I am new to HBase and I am testing HBase for our data warehouse solution. I am trying the following 2 scenarios. 10 Rows. Each of the rowkeys has 5000 column qualifiers spread across 3 column families. I generate the following 2 kinds of load. 1.1 Generate 10 rows, with sequential INS

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Bryan Beaudreault
If you lost a RS or otherwise moved a region around (hbase balancer?) then it would throw off the locality. You'll want to compact any time a region moves for any reason (unless it moved to another RS with which it had locality). Try major compacting again now and see if the locality index goes u

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Saurabh Yahoo
Thanks Vlad. Quick question: I notice hdfsBlocksLocalityIndex is around 50 on all region servers. Could that be a problem? If it is, how do we solve it? We already ran a major compaction after ingesting the data. Thanks, Saurabh. On Aug 29, 2013, at 12:17 PM, Vladimir Rodionov w

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Saurabh Yahoo
Thanks Kiru. Yes. Our read is going across the region servers evenly. I did not see any issue with that. On Aug 29, 2013, at 11:59 AM, Kiru Pakkirisamy wrote: > Saurabh, > I have a suspicion that the few high latency responses are happening because > of "hot" region.(s) > I vaguely rememb

Error running hbase

2013-08-29 Thread jamal sasha
Hi, I am trying to write directly to HBase from MapReduce code, but I am getting an issue similar to what is reported here: http://stackoverflow.com/questions/12607349/cant-connect-to-zookeeper-and-then-hbase-master-shuts-down How do I solve this? I think I am running an hbase instance alr

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Saurabh Yahoo
Thanks Federico. I will look into this patch. We are using version 0.94.6.1. On Aug 29, 2013, at 8:37 AM, Federico Gaule wrote: > In 0.94.11 Release, has been included an optimization for MultiGets: > https://issues.apache.org/jira/browse/HBASE-9087 > > What version have you deployed? > > >

Re: Hbase RowKey design schema

2013-08-29 Thread Doug Meil
Hi there, One thing to mention about the BigTable paper is they reverse the URL so that scans work with subdomains. www.subdomain1.cnn.com -> com.cnn.subdomain1.www www.subdomain2.cnn.com -> com.cnn.subdomain2.www If you don't reverse the URL there isn't an easy scan (short of creating another
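The reversed-URL row key Doug describes can be sketched in plain Java. Reversing the host components groups all subdomains of a domain into a contiguous row-key range, so a single prefix scan on "com.cnn." covers every cnn.com subdomain (the method name here is illustrative, not from the thread):

```java
// Sketch of the BigTable-style reversed-host row key.
public class RowKeys {
    // "www.subdomain1.cnn.com" -> "com.cnn.subdomain1.www"
    public static String reverseHost(String host) {
        String[] parts = host.split("\\.");
        StringBuilder sb = new StringBuilder(host.length());
        for (int i = parts.length - 1; i >= 0; i--) {
            sb.append(parts[i]);
            if (i > 0) sb.append('.');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // All cnn.com rows now sort together under the "com.cnn." prefix.
        System.out.println(reverseHost("www.subdomain1.cnn.com")); // com.cnn.subdomain1.www
        System.out.println(reverseHost("www.subdomain2.cnn.com")); // com.cnn.subdomain2.www
    }
}
```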

Re: java.io.IOException: Type mismatch in value from map: expected org.apache.hadoop.hbase.client.Put, recieved org.apache.hadoop.io.BytesWritable

2013-08-29 Thread praveenesh kumar
Thanks a lot for helping, guys. The code is working now. I haven't changed much on the code side; it was more of a data types issue. Cheers :) On Thu, Aug 29, 2013 at 6:31 PM, Shahab Yunus wrote: > Exactly I had the same though as Ashwanth too, that is why I asked whether > @Override annot

Re: java.io.IOException: Type mismatch in value from map: expected org.apache.hadoop.hbase.client.Put, recieved org.apache.hadoop.io.BytesWritable

2013-08-29 Thread Shahab Yunus
Exactly, I had the same thought as Ashwanth; that is why I asked whether the @Override annotation is being used or not. Regards, Shahab On Thu, Aug 29, 2013 at 1:09 PM, Ashwanth Kumar < ashwanthku...@googlemail.com> wrote: > Hey Praveenesh, I am not sure if this would help. > > But can you try mo

Re: java.io.IOException: Type mismatch in value from map: expected org.apache.hadoop.hbase.client.Put, recieved org.apache.hadoop.io.BytesWritable

2013-08-29 Thread Ashwanth Kumar
Hey Praveenesh, I am not sure if this would help, but can you try moving your mapper to an inner class / separate class and try the code? I somehow get the feeling that the default Mapper (IdentityMapper) is being used (maybe you can check the mapreduce.map.class value?); that would be the only reason

Re: Never ending "Doing distributed log split" task.,

2013-08-29 Thread Ted Yu
So you have HBASE-8670 in your deployment. Suggest upgrading hadoop to newer release, e.g. 1.2.1 so that the new HDFS improvements can be utilized. Cheers On Thu, Aug 29, 2013 at 9:50 AM, Jean-Marc Spaggiari < jean-m...@spaggiari.org> wrote: > Hadoop 1.0.4 with HBase 0.94.12-SNAPSHOT > > The f

Re: Never ending "Doing distributed log split" task.,

2013-08-29 Thread Jean-Marc Spaggiari
Hadoop 1.0.4 with HBase 0.94.12-SNAPSHOT The file name changed since I have restarted HBase but here is what I have: hadoop@node3:~/hadoop-1.0.3$ bin/hadoop fs -ls hdfs://node3:9000/hbase/.logs/node1,60020,1377793020654/ Found 1 items -rw-r--r-- 3 hbase supergroup 0 2013-08-29 12:17 /hb

Re: Never ending "Doing distributed log split" task.,

2013-08-29 Thread Ted Yu
What is your HBase / Hadoop version ? Can you check namenode log looking for lines related to hdfs://node3:9000/hbase/.logs/node1,60020,1377789460683- splitting/node1%2C60020%2C1377789460683.1377789462024 ? Thanks On Thu, Aug 29, 2013 at 9:03 AM, Jean-Marc Spaggiari < jean-m...@spaggiari.org> w

Re: counter Increment gives DonotRetryException

2013-08-29 Thread Jean-Daniel Cryans
You probably put a string in there that was a number, and increment expects an 8-byte long. For example, if you did: put 't1', '9row27', 'columnar:column1', '1' Then did an increment on that, it would fail. J-D On Thu, Aug 29, 2013 at 4:42 AM, yeshwanth kumar wrote: > i am newbie to Hbase, >
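The size mismatch J-D describes can be shown with plain Java: the shell's put stores '1' as a one-byte UTF-8 string, while increment requires exactly an 8-byte big-endian long, which is what HBase's Bytes.toBytes(long) produces. A stdlib-only sketch (the class and method names are illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CounterBytes {
    // What `put ... '1'` stores: the string "1", which is 1 byte.
    public static byte[] asString(long v) {
        return Long.toString(v).getBytes(StandardCharsets.UTF_8);
    }

    // What increment expects: an 8-byte big-endian long, equivalent
    // to HBase's Bytes.toBytes(long).
    public static byte[] asLong(long v) {
        return ByteBuffer.allocate(Long.BYTES).putLong(v).array();
    }

    public static void main(String[] args) {
        // 1 byte -> fails the SIZEOF_LONG check, DoNotRetryIOException on incr
        System.out.println(asString(1).length); // 1
        // 8 bytes -> incr succeeds
        System.out.println(asLong(1).length);   // 8
    }
}
```

This is why a cell written with the Java client via Bytes.toBytes(1L) can be incremented, while the same logical value written as a shell string cannot.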

Re: java.io.IOException: Type mismatch in value from map: expected org.apache.hadoop.hbase.client.Put, recieved org.apache.hadoop.io.BytesWritable

2013-08-29 Thread Shahab Yunus
You are also using the @Override annotation to make sure that your overridden method is being called? Regards, Shahab On Thu, Aug 29, 2013 at 12:03 PM, praveenesh kumar wrote: > Thanks Shahab for replying. Sorry, that was typo, while writing the code > snippet. Even keeping the keys as NullWri

RE: experiencing high latency for few reads in HBase

2013-08-29 Thread Vladimir Rodionov
Yes. HBase won't guarantee strict sub-second latency. Best regards, Vladimir Rodionov Principal Platform Engineer Carrier IQ, www.carrieriq.com e-mail: vrodio...@carrieriq.com From: Saurabh Yahoo [saurabh...@yahoo.com] Sent: Thursday, August 29, 2013 2:49

Re: counter Increment gives DonotRetryException

2013-08-29 Thread Ted Yu
The exception came from HRegion#increment():

    if (kv.getValueLength() == Bytes.SIZEOF_LONG) {
      amount += Bytes.toLong(kv.getBuffer(), kv.getValueOffset(), Bytes.SIZEOF_LONG);
    } else {
      // throw DoNotRetryIOException instead of Ille

Re: Hbase thrift client's privilege control

2013-08-29 Thread Kangle Yu
Sorry, I clicked the send button early. Since "Thrift gateway will authenticate with HBase using the supplied credential. No authentication will be performed by the Thrift gateway itself. All client access via the Thrift gateway will use the Thrift gateway's credential and have its privilege."(

Re: java.io.IOException: Type mismatch in value from map: expected org.apache.hadoop.hbase.client.Put, recieved org.apache.hadoop.io.BytesWritable

2013-08-29 Thread praveenesh kumar
Thanks Shahab for replying. Sorry, that was a typo while writing the code snippet. Even keeping the keys as NullWritable or LongWritable, i.e. by keeping the same types of keys, I am getting the same error. I don't think the error is on the map input side; it's saying "value from map". Can't understand w

counter Increment gives DonotRetryException

2013-08-29 Thread yeshwanth kumar
I am a newbie to HBase, going through the Counters topic. Whenever I perform an increment like """incr 't1','9row27','columnar:column1',1""" it gives an ERROR: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Attempted to increment field that isn't 64 bits wid

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Kiru Pakkirisamy
Saurabh, I have a suspicion that the few high-latency responses are happening because of "hot" region(s). I vaguely remember you mentioning that the data is evenly distributed across all regions. I hope your test also goes across them evenly. You may want to check the read requests to the region

Never ending "Doing distributed log split" task.,

2013-08-29 Thread Jean-Marc Spaggiari
I have restarted my cluster and I'm now waiting for this task to end: Doing distributed log split in [hdfs://node3:9000/hbase/.logs/node1,60020,1377789460683-splitting] It's been running for 30 minutes now. There was nothing running on the cluster. No reads, no writes, nothing, for days... I got that o

Re: java.io.IOException: Type mismatch in value from map: expected org.apache.hadoop.hbase.client.Put, recieved org.apache.hadoop.io.BytesWritable

2013-08-29 Thread Shahab Yunus
" public class MYHBaseLoader extends Mapper<*NullWritable*,BytesWritable,NullWritable,Put> { protected void map (*LongWritable* key, BytesWritable value, Context context) throws IOException, InterruptedException { ..." Why is the difference in types of the keys? Regards, Shahab On Thu
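The @Override question matters because a map() declared with the wrong parameter types silently overloads rather than overrides, so the framework keeps calling the parent's default (identity) implementation. A stdlib-only sketch of that failure mode, with stand-in classes rather than the real Hadoop Mapper:

```java
public class OverrideDemo {
    // Stand-in for Hadoop's Mapper<KEYIN, ...>: the default map is the identity.
    static class BaseMapper<K> {
        public String map(K key) { return "identity"; }
    }

    // The parent is parameterized with String, but map() here is declared
    // for Long: this OVERLOADS instead of overriding. Adding @Override to
    // this method would be a compile error, which is exactly how the
    // annotation catches the bug at build time.
    static class MyMapper extends BaseMapper<String> {
        public String map(Long key) { return "custom"; }
    }

    public static void main(String[] args) {
        BaseMapper<String> m = new MyMapper();
        // The parent's identity implementation runs, not the "custom" one.
        System.out.println(m.map("row-1")); // identity
    }
}
```

In the MR job this shows up as the identity mapper emitting its input type (e.g. BytesWritable) where the reducer expects a Put, matching the "Type mismatch in value from map" error.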

Re: Writing map outputs to HBase

2013-08-29 Thread Ted Yu
See http://hbase.apache.org/book.html#mapreduce.example.readwrite On Thu, Aug 29, 2013 at 7:26 AM, praveenesh kumar wrote: > Hi, > > What is the easiest and efficient way to write a sequence file into HBase. > I want to parse the sequence file. My sequence file has records in the form > of . >

Writing map outputs to HBase

2013-08-29 Thread praveenesh kumar
Hi, what is the easiest and most efficient way to write a sequence file into HBase? I want to parse the sequence file. My sequence file has records in the form of . I want to parse each value, generate keys and values in the map() function and write the output into HBase. I am trying to use HBaseTableUt

Re: Hbase RowKey design schema

2013-08-29 Thread Shahab Yunus
What advantage will you gain by compressing? Less space? But then it will add compression/decompression performance overhead. A trade-off, but not an especially significant one, as space is cheap and redundancy is OK with such data stores. Having said that, more importantly, what are your read use-case

Re: HBase client with security

2013-08-29 Thread Harsh J
Two things come to mind: 1. Is HADOOP_CONF_DIR also on HBase's classpath? If it or HADOOP_PREFIX/HADOOP_HOME is defined, it usually is. But re-check via "hbase classpath" 2. Assuming (1) is good, does your core-site.xml have kerberos authentication settings for hadoop as well? On Thu, Aug 29, 201

HBase client with security

2013-08-29 Thread Lanati, Matteo
Hi all, I set up Hadoop (1.2.0), Zookeeper (3.4.5) and HBase (0.94.8-security) with security. HBase works if I launch the shell from the node running the master, but I'd like to use it from an external machine. I prepared one, copying the Hadoop and HBase installation folders and adapting the p

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Federico Gaule
An optimization for MultiGets has been included in the 0.94.11 release: https://issues.apache.org/jira/browse/HBASE-9087 What version have you deployed? On 08/29/2013 01:29 AM, lars hofhansl wrote: A 1s SLA is tough in HBase (or any large-memory JVM application). Maybe, if you presplit your ta

Re: RowLocks

2013-08-29 Thread Michael Segel
Thanks for the update. Actually they worked OK for what they were. IMHO they should never have been made public, because they aren't the RLL that people think of as part of transactions and isolation levels found in RDBMSs. Had me worried there for a sec... Thx On Aug 28, 2013, at 11:22 PM, lars

Re: how to export data from hbase to mysql?

2013-08-29 Thread Mohammad Tariq
My 2 cents : 1- Map your table to a Hive table and do the export using Sqoop. 2- Export the table to a file first, and then export it using Sqoop. Warm Regards, Tariq cloudfront.blogspot.com On Wed, Aug 28, 2013 at 7:12 PM, Shahab Yunus wrote:

Re: experiencing high latency for few reads in HBase

2013-08-29 Thread Saurabh Yahoo
Hi Vlad, we do have a strict latency requirement, as it is financial data requiring direct access from clients. Are you saying that it is not possible to achieve sub-second latency using HBase (because it is based on Java)? On Aug 28, 2013, at 8:10 PM, Vladimir Rodionov wrote: > Increa

java.io.IOException: Type mismatch in value from map: expected org.apache.hadoop.hbase.client.Put, recieved org.apache.hadoop.io.BytesWritable

2013-08-29 Thread praveenesh kumar
Hi all, I am trying to write MR code to load an HBase table. I have a mapper that emits (null, put object) and I am using TableMapReduceUtil.initTableReducerJob() to write it into an HBase table. Following is my code snippet: public class MYHBaseLoader extends Mapper { protected void map (L

Hbase RowKey design schema

2013-08-29 Thread Wasim Karani
I am using HBase to store webtable content, like how Google uses Bigtable (see the Google Bigtable paper for reference). My question is on the RowKey and how we should be forming it. What Google does is save the URL in reverse order, as you can see in the PDF document, "com.cnn.www", so that all the links ass

issue debug hbase

2013-08-29 Thread kun yan
Hi all, I use Maven to compile the HBase source; the HBase version is 0.94. I can debug HBase, for example create table, as a Java Application (I set a breakpoint), which is a nice way to learn how HBase creates a table. But I found I cannot debug HBase as a remote Java application; when I breakpoint into the src (in my local client) th
