Online/Realtime query with filter and join?

2013-11-28 Thread Ramon Wang
Hi Folks, It seems to be impossible, but I still want to check if there is a way we can do a "complex" query on HBase with "Order By", "JOIN", etc. like we have with a normal RDBMS; we are asked to provide such a solution, any ideas? Thanks for your help. BTW, I think maybe Impala from CDH wou

RE: HBase table delete from Phoenix

2013-11-28 Thread Job Thomas
SQL for deleting the table? I used ' drop table table_name '. Best Regards, Job M Thomas From: Azuryy Yu [mailto:azury...@gmail.com] Sent: Fri 11/29/2013 11:35 AM To: user@hbase.apache.org Subject: Re: HBase table delete from Phoenix Job, You'd better paste you
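The DDL Job is describing, as it would be entered in Phoenix's SQL client (e.g. sqlline); MY_TABLE is a placeholder name, not from the thread:

```sql
-- Run from the Phoenix SQL client; drops the Phoenix table metadata
-- and (in matching versions) the backing HBase table.
DROP TABLE MY_TABLE;
```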

Re: HBase table delete from Phoenix

2013-11-28 Thread Azuryy Yu
Job, You'd better paste your Phoenix SQL, thanks. On Fri, Nov 29, 2013 at 12:47 PM, Job Thomas wrote: > > Hi Ted, > > My table contains 10 million rows and 1 column family, which contains 15 > columns. > > > > > From: Ted Yu [mailto:yuzhih...@gmail.com] > Sen

RE: HBase table delete from Phoenix

2013-11-28 Thread Job Thomas
Hi Ted, My table contains 10 million rows and 1 column family, which contains 15 columns. From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Fri 11/29/2013 10:06 AM To: phoenix-hbase-...@googlegroups.com Subject: Re: HBase table delete from Phoenix Job: I am

Re: HBase table delete from Phoenix

2013-11-28 Thread Ted Yu
Job: I am including the phoenix dev mailing list. How many rows are there in your table? Cheers On Thu, Nov 28, 2013 at 8:25 PM, Job Thomas wrote: > I have created a table in HBase via Phoenix. > > Why is the HBase table not dropped after deleting it from Phoenix, and vice > versa? > > Why deleting a

HBase table delete from Phoenix

2013-11-28 Thread Job Thomas
I have created a table in HBase via Phoenix. Why is the HBase table not dropped after deleting it from Phoenix, and vice versa? Why does deleting a table from Phoenix take considerably more time than deleting it from HBase directly? Thanks in advance, Job M Thomas

Re: HBase and HDFS replication

2013-11-28 Thread Pablo Medina
Take a look at the zookeeper session timeout. The ephemeral node of the RS going down will be deleted when the session expires, and then the other regionservers will race to take ownership of the regions that are down. The default session timeout is too high, so I think it may be related to the problem you a
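The setting Pablo is pointing at can be lowered in hbase-site.xml; the 90-second value below is illustrative only, and the effective timeout is also bounded by the ZooKeeper server's tickTime-derived minimum and maximum session limits:

```xml
<!-- hbase-site.xml: how long ZooKeeper keeps a region server's session
     (and its ephemeral node) alive after it stops heartbeating. -->
<property>
  <name>zookeeper.session.timeout</name>
  <value>90000</value> <!-- milliseconds; illustrative, not a recommendation -->
</property>
```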

Re: Replication metrics with more than one Peer/Slave

2013-11-28 Thread Pablo Medina
I know that in 0.95 metrics are reported per peer. I want to understand what the expected behavior is in 0.94.13. Can you share your experience when looking at the metrics of more than one peer? On 28/11/2013 19:05, "Asaf Mesika" wrote: > I tackled the same problem, and was answered that it was fixed

Re: Replication metrics with more than one Peer/Slave

2013-11-28 Thread Asaf Mesika
I tackled the same problem, and was answered that it was fixed in 0.95. On Thursday, November 28, 2013, Pablo Medina wrote: > Hi all, > > Knowing that replication metrics are global at the region server level in > HBase 0.94.13, what is the meaning of a metric like sizeOfLogQueue when > replicating to mo

Re: HBase value design

2013-11-28 Thread Asaf Mesika
On our project we store nested record structures with 10-40 fields. We decided to save on storage and write throughput by writing a serialized Avro record as the value. We place one byte before it to allow versioning. We did it since each column is written with its rowkey, cq, cf and timestamp. Your
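The framing Asaf describes (a single version byte ahead of the serialized record) can be sketched without Avro on the classpath; the payload bytes below stand in for an Avro-encoded record, and the class and method names are illustrative, not from any HBase or Avro API:

```java
import java.util.Arrays;

// Sketch of a version-byte-framed cell value: byte 0 is the schema
// version, the rest is the serialized record (e.g. Avro binary).
class VersionedValue {

    // Prepend a one-byte version tag to the serialized payload.
    static byte[] wrap(byte version, byte[] payload) {
        byte[] out = new byte[payload.length + 1];
        out[0] = version;
        System.arraycopy(payload, 0, out, 1, payload.length);
        return out;
    }

    // Read the version tag back out of a stored cell value.
    static byte version(byte[] cell) {
        return cell[0];
    }

    // Recover the payload, to hand to the matching reader schema.
    static byte[] payload(byte[] cell) {
        return Arrays.copyOfRange(cell, 1, cell.length);
    }
}
```

The version byte lets a reader pick the right Avro reader schema later without re-writing old cells.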

Re: How to improve the write performance?

2013-11-28 Thread Asaf Mesika
Start by installing Ganglia or Graphite and figure out which part of HBase is slower. On Wednesday, November 27, 2013, jingych wrote: > Thanks for the reply! > > I'm running with a single thread. > > Actually I want to know: how fast can an HBase write really be with each > thread? > > And how to opt

Re: HBase value design

2013-11-28 Thread Ted Yu
It means you may have a new member other than the following two: > Integer countInt > Float countFloat On Thu, Nov 28, 2013 at 7:40 AM, Amit Sela wrote: > I am using some sort of schema that allows me to expand my data blob if > needed. > However, I'm considering testing Phoenix (or maybe pres

Re: HBase value design

2013-11-28 Thread Amit Sela
I am using some sort of schema that allows me to expand my data blob if needed. However, I'm considering testing Phoenix (or maybe prestoDB once it gets an HBase connector) and I was wondering if the common practice is "simple type" values and not data blobs because I saw that Phoenix doesn't suppo

Re: HBase value design

2013-11-28 Thread Ted Yu
Amit: In your example you use Writable for serialization. In 0.96 and beyond, protobuf is used in place of Writable. If there is a possibility that a new member will be added to the tuple, consider using some scheme that allows expansion. Please take a look at this as well: HBASE-8089 Add type su

Re: Region server block cache and memstore size

2013-11-28 Thread Kevin O'dell
I agree with what Anoop said here; just because they are scans, it doesn't make a lot of sense to turn off your block cache. Are you trying to save memory? As for the global memstore limits, you will want to set those to something like upper .11, lower .10. You have to leave at the minimum .10,
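Kevin's suggested limits map to these hbase-site.xml properties (the 0.92/0.94 names used in this thread; later releases renamed them):

```xml
<!-- hbase-site.xml: fraction of the region server heap the memstores
     may use before flushes are forced, as suggested above. -->
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.11</value>
</property>
<property>
  <name>hbase.regionserver.global.memstore.lowerLimit</name>
  <value>0.10</value>
</property>
```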

Re: HBase value design

2013-11-28 Thread Jean-Marc Spaggiari
Hi Amit, It all depends on your use case ;) If you always access countInt and countFloat when you access a value, then put them together to avoid having to do 2 calls, or a scan, or a multiget. But if you never access them together, you might want to separate them to reduce RPC transfer, etc. JM

HBase value design

2013-11-28 Thread Amit Sela
There are a lot of discussions here regarding row design, but I have a question about value design: Say I have a table t1 with rows r1,r2...rn and family f. I also have qualifiers q1,q2...,qm. For each (ri,fi,qi) tuple I want to store a value vi that is a data blob that implements Writable
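A minimal sketch of such a value blob, using the (countInt, countFloat) pair mentioned elsewhere in the thread and plain DataOutput/DataInput instead of Hadoop's Writable interface so it runs stand-alone; the class and field names are illustrative:

```java
import java.io.*;

// A small tuple serialized to a byte[] the way a Writable's
// write()/readFields() pair would: fixed order, fixed-width fields.
class CountBlob {
    final int countInt;
    final float countFloat;

    CountBlob(int countInt, float countFloat) {
        this.countInt = countInt;
        this.countFloat = countFloat;
    }

    // Serialize to the bytes that would become the HBase cell value.
    byte[] toBytes() {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bos)) {
            out.writeInt(countInt);     // read back in the same order
            out.writeFloat(countFloat);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    // Deserialize a cell value written by toBytes().
    static CountBlob fromBytes(byte[] raw) {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(raw))) {
            return new CountBlob(in.readInt(), in.readFloat());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

As the replies note, a fixed-order layout like this has no room for new members; Avro or protobuf handles that evolution for you.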

Re: HBase and HDFS replication

2013-11-28 Thread Andrea
I'm glad to follow up on my post telling you that the regions which were down came up on the other nodes after about 20 minutes! I only had to wait a little longer than I did... But is there a way to reduce this time? I mean, about 20 minutes of waiting for the regions to be recovered is too long! There is no

Re: HMaster Aborted for HBase-0.94.13

2013-11-28 Thread Jiajun Chen
Stop the HBase cluster, clear the data in ZooKeeper, start the HBase cluster, and the error seems to disappear. Hope this helps someone. Otherwise, there is a solution for somebody's problem: http://stackoverflow.com/questions/17792619/fatal-master-hmaster-unexpected-state-cannot-transit-it-to-offli

Re: HBase and HDFS replication

2013-11-28 Thread Andrea
This is the HMaster 2013-11-28 10:21:11,926 INFO org.apache.hadoop.hbase.zookeeper.RegionServerTracker: RegionServer ephemeral node deleted, processing expiration [ip-10$ 2013-11-28 10:21:11,927 DEBUG org.apache.hadoop.hbase.master.AssignmentManager: based on AM, current region=-ROOT-,,0.70236

Re: HBase and HDFS replication

2013-11-28 Thread Ted Yu
So 1 region of usertable got lost? Can you pastebin the master server log around the time you killed the region server? Thanks On Nov 28, 2013, at 2:13 AM, Andrea wrote: > Hi, I'm using HBase 0.94.12 on top of Hadoop 1.2.1 and I have one node for > zookeeper, one node for a Namenode/Hmaster and th

HBase and HDFS replication

2013-11-28 Thread Andrea
Hi, I'm using HBase 0.94.12 on top of Hadoop 1.2.1, and I have one node for ZooKeeper, one node for a Namenode/HMaster, and three Datanode/RegionServers. All the machines are on Amazon EC2, instance type m2.xlarge. I set the replication to two, so I'm expecting that if I kill a HRegionServer/Datanode (for exam
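The replication factor Andrea mentions is an HDFS-side setting; a factor of two would look like this in hdfs-site.xml (it applies to files written after the change):

```xml
<!-- hdfs-site.xml: number of copies HDFS keeps of each block,
     including the blocks backing HBase's HFiles and WALs. -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```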

Re: Region server block cache and memstore size

2013-11-28 Thread Anoop John
So you use bulk load with HFileOutputFormat for writing data? Then you can reduce hbase.regionserver.global.memstore.upperLimit and hbase.regionserver.global.memstore.lowerLimit and give more heap % to the block cache. Not getting why you try to reduce that also. -Anoop- On Thu, Nov 28, 2013 a

Region server block cache and memstore size

2013-11-28 Thread Ivan Tretyakov
Hi! We are using HBase 0.92.1-cdh4.1.1. To import data, the only way we use is bulk load. And our common access pattern is sequential scans of different parts of the tables. Because of that, we are considering disabling the block cache by setting hbase.block.cache.size to zero. But we've found the following in
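The change Ivan is considering would look like this in hbase-site.xml; note that the replies in this thread advise against going all the way to zero:

```xml
<!-- hbase-site.xml: fraction of the region server heap given to the
     block cache. Zero disables caching, as the poster proposes;
     the replies in this thread caution against it. -->
<property>
  <name>hbase.block.cache.size</name>
  <value>0</value>
</property>
```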

Re: Suddenly NameNode stopped responding

2013-11-28 Thread Azuryy Yu
Sandeep, please also take a look here: http://hbase.apache.org/book.html#hadoop PS: HDFSv2 supports HA. On Thu, Nov 28, 2013 at 2:31 PM, Sandeep L wrote: > Hi, > Thanks for the update. > After spending quite a bit of time on Hadoop/HBase I couldn't find > anything awkward in the logs. > At last what I

Re: HMaster Aborted for HBase-0.94.13

2013-11-28 Thread Jiajun Chen
https://issues.apache.org/jira/browse/HBASE-8912 On 28 November 2013 14:43, Jiajun Chen wrote: > > > 2013-11-27 18:24:33,375 INFO > org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter > type for hdfs:// > master.uc.uuc.com:9000/hbase/H/18c9cb11b3e673dec07038f166fb3ef7/.tm