Hi Folks
It seems to be impossible, but I still want to check whether there is a way we
can run "complex" queries on HBase with "ORDER BY", "JOIN", etc., like we can
with a normal RDBMS. We have been asked to provide such a solution; any ideas?
Thanks for your help.
BTW, I think maybe Impala from CDH wou
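One common answer in this era is a SQL layer such as Phoenix; here is a minimal
sketch of an ORDER BY query issued through its JDBC driver (the quorum host,
table, and column names are all invented for illustration, and JOIN support in
Phoenix releases of this period was limited, so verify against the version you
deploy):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixOrderBySketch {
    public static void main(String[] args) throws Exception {
        // Depending on the Phoenix version, you may need to load the
        // driver class explicitly before asking DriverManager for it.
        // "localhost" stands in for your ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             // events, user_id and event_time are hypothetical names.
             ResultSet rs = stmt.executeQuery(
                     "SELECT user_id, event_time FROM events "
                     + "ORDER BY event_time DESC LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " " + rs.getTimestamp(2));
            }
        }
    }
}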
SQL for deleting a table?
I used 'drop table table_name'
Best Regards,
Job M Thomas
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: Fri 11/29/2013 11:35 AM
To: user@hbase.apache.org
Subject: Re: HBase table delete from Phoenix
Job,
You'd better paste your Phoenix SQL, thanks.
On Fri, Nov 29, 2013 at 12:47 PM, Job Thomas wrote:
> Hi Ted,
>
> My table contains 10 million rows and 1 column family with 15 columns.
>
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> Sen
Hi Ted,
My table contains 10 million rows and 1 column family with 15 columns.
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Fri 11/29/2013 10:06 AM
To: phoenix-hbase-...@googlegroups.com
Subject: Re: HBase table delete from Phoenix
Job:
I am including phoenix dev mailing list.
How many rows are there in your table?
Cheers
On Thu, Nov 28, 2013 at 8:25 PM, Job Thomas wrote:
> I have created a table in HBase via Phoenix.
>
> Why is the HBase table not dropped after deleting it from Phoenix, and vice
> versa?
>
> Why does deleting a
I have created a table in HBase via Phoenix.
Why is the HBase table not dropped after deleting it from Phoenix, and vice versa?
Why does deleting a table from Phoenix take considerably more time than deleting
it from HBase directly?
Thanks in advance,
Job M Thomas
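For reference, a minimal sketch of how the drop is issued through the Phoenix
JDBC driver (the quorum host and table name are placeholders; one plausible but
unconfirmed reason the Phoenix drop is slower is that Phoenix also has to clean
up its own metadata rows, on top of disabling and dropping the HBase table):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PhoenixDropSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // mytable is a placeholder name.
            stmt.execute("DROP TABLE IF EXISTS mytable");
        }
    }
}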
Take a look at the ZooKeeper session timeout. The ephemeral node of the
region server going down will be deleted when the session expires, and then the
other region servers will race to take ownership of the regions that went down.
The default session timeout is quite high, so I think it may be related to the
problem you a
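For reference, a hedged sketch of where that timeout lives; the property is
normally set in hbase-site.xml on every node, and the Configuration API is used
here only to spell out the key and an example value (60 s is illustrative, not
a recommendation):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SessionTimeoutSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // A lower timeout means a dead region server is detected sooner,
        // but long GC pauses are more likely to cause false expirations.
        conf.setInt("zookeeper.session.timeout", 60000); // 60 s, example only
        System.out.println(conf.get("zookeeper.session.timeout"));
    }
}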
I know that in 0.95 metrics are reported per peer. I want to understand what
the expected behavior is in 0.94.13. Can you share your experience when
looking at the metrics of more than one peer?
On 28/11/2013 19:05, "Asaf Mesika" wrote:
> I tackled the same problem, and was answered that it was fixed
I tackled the same problem, and was answered that it was fixed in 0.95.
On Thursday, November 28, 2013, Pablo Medina wrote:
> Hi all,
>
> Knowing that replication metrics are global at the region server level in
> HBase 0.94.13, what is the meaning of a metric like sizeOfLogQueue when
> replicating to mo
On our project we store nested record structures with 10-40 fields. We have
decided to save on storage and write throughput by writing a serialized
Avro record as the value. We place one byte in front to allow versioning. We did
it since each column is written with its rowkey, cq, cf and timestamp. Your
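A minimal sketch of that value encoding, assuming the Avro GenericRecord API
(the schema handling around it is omitted, and the version constant is
illustrative):

import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class VersionedAvroValue {
    static final byte SCHEMA_VERSION = 1; // bumped whenever the record evolves

    // Returns the bytes to store as the cell value in a Put.
    public static byte[] encode(GenericRecord record, Schema schema) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(SCHEMA_VERSION); // one version byte in front of the Avro body
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
        encoder.flush();
        return out.toByteArray();
    }
}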
Start by installing Ganglia or Graphite and figure out which part is slow
in HBase.
On Wednesday, November 27, 2013, jingych wrote:
> Thanks for the reply!
>
> I'm running with a single thread.
>
> Actually I want to know: how fast can HBase writes really be with each
> thread?
>
> And how to opt
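Until the metrics are in place, a crude single-thread timing sketch like this
at least puts a raw puts-per-second number on it (0.94-era client API; the
table and column names are placeholders, and client-side buffering skews the
result toward raw throughput):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class SingleThreadWriteTimer {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "testtable"); // placeholder table name
        table.setAutoFlush(false); // buffer puts on the client side
        int n = 100000;
        long start = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            Put put = new Put(Bytes.toBytes(String.format("row%09d", i)));
            put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v" + i));
            table.put(put);
        }
        table.flushCommits(); // push out whatever is still buffered
        long ms = System.currentTimeMillis() - start;
        System.out.println(n + " puts in " + ms + " ms (~"
                + (n * 1000L / Math.max(ms, 1)) + " puts/s)");
        table.close();
    }
}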
It means you may have a new member other than the following two:
> Integer countInt
> Float countFloat
On Thu, Nov 28, 2013 at 7:40 AM, Amit Sela wrote:
> I am using some sort of schema that allows me to expand my data blob if
> needed.
> However, I'm considering testing Phoenix (or maybe pres
I am using some sort of schema that allows me to expand my data blob if
needed.
However, I'm considering testing Phoenix (or maybe PrestoDB once it gets an
HBase connector) and I was wondering whether the common practice is "simple
type" values rather than data blobs, because I saw that Phoenix doesn't suppo
Amit:
In your example you use Writable for serialization.
In 0.96 and beyond, protobuf is used in place of Writable.
If there is a possibility that a new member will be added to the tuple,
consider using a scheme that allows for expansion.
Please take a look at this as well:
HBASE-8089 Add type su
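One sketch of such a scheme, reusing the countInt/countFloat names from this
thread (the long member is hypothetical, just to show how a version-2 reader
stays compatible with version-1 bytes):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class VersionedCounts implements Writable {
    private static final byte CURRENT_VERSION = 2;

    private int countInt;
    private float countFloat;
    private long countLong; // hypothetical member added in version 2

    public void write(DataOutput out) throws IOException {
        out.writeByte(CURRENT_VERSION); // version goes first
        out.writeInt(countInt);
        out.writeFloat(countFloat);
        out.writeLong(countLong);
    }

    public void readFields(DataInput in) throws IOException {
        byte version = in.readByte();
        countInt = in.readInt();
        countFloat = in.readFloat();
        // Values written before version 2 simply lack the new member.
        countLong = version >= 2 ? in.readLong() : 0L;
    }
}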
I agree with what Anoop said here; just because your accesses are scans
doesn't mean it makes sense to turn off your block cache. Are you trying to save
memory? As for the memstore global limits, you will want to set those to
something like
upper .11
lower .10
You have to leave at the minimum .10,
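Spelled out against the property keys (a sketch only; these are server-side
settings that belong in hbase-site.xml on the region servers, and the block
cache key name should be checked against your exact version):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemstoreLimitsSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Shrink the memstore share of the heap for a bulk-load-only workload...
        conf.setFloat("hbase.regionserver.global.memstore.upperLimit", 0.11f);
        conf.setFloat("hbase.regionserver.global.memstore.lowerLimit", 0.10f);
        // ...and give the reclaimed share to the block cache instead of
        // turning the cache off entirely (0.5f is an example value only).
        conf.setFloat("hfile.block.cache.size", 0.5f);
    }
}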
Hi Amit,
It all depends on your use case ;)
If you always access countInt and countFloat together when you access a value,
then put them together, to avoid having to do two calls, or a scan, or a
multiget; see the sketch below.
But if you never access them together, you might want to separate them to
reduce RPC transfer, etc.
JM
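A sketch of the "together" case with the 0.94-era client API, borrowing t1, f,
and r1 from the original question in this thread (the stored byte layouts are
assumed to be Bytes-encoded int and float):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class ColocatedQualifiers {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "t1");
        Get get = new Get(Bytes.toBytes("r1"));
        // Both counters come back in one RPC instead of two calls.
        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("countInt"));
        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("countFloat"));
        Result result = table.get(get);
        int countInt = Bytes.toInt(
                result.getValue(Bytes.toBytes("f"), Bytes.toBytes("countInt")));
        float countFloat = Bytes.toFloat(
                result.getValue(Bytes.toBytes("f"), Bytes.toBytes("countFloat")));
        System.out.println(countInt + " / " + countFloat);
        table.close();
    }
}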
There are a lot of discussions here regarding row design, but I have a
question about value design:
Say I have a table t1 with rows r1,r2...rn and family f.
I also have qualifiers q1,q2...,qm
For each (ri,fi,qi) tuple I want to store a value vi that is a data blob
that implements Writable
I'm glad to follow up on my post to tell you that the regions which were down
come up on the other nodes after about 20 minutes! I only had to wait a little
longer than I did...
But is there no way to reduce this time? I mean, waiting about 20 minutes for
the regions to be recovered is too long! There is no
Stop the HBase cluster,
clear the ZooKeeper data,
start the HBase cluster,
and then the error seems to disappear.
Hope this helps someone.
Otherwise, there is a solution to a similar problem here:
http://stackoverflow.com/questions/17792619/fatal-master-hmaster-unexpected-state-cannot-transit-it-to-offli
This is from the HMaster log:
2013-11-28 10:21:11,926 INFO
org.apache.hadoop.hbase.zookeeper.RegionServerTracker: RegionServer
ephemeral node deleted, processing expiration [ip-10$
2013-11-28 10:21:11,927 DEBUG
org.apache.hadoop.hbase.master.AssignmentManager: based on AM, current
region=-ROOT-,,0.70236
So 1 region of usertable got lost?
Can you pastebin the master server log from around the time you killed the
region server?
Thanks
On Nov 28, 2013, at 2:13 AM, Andrea wrote:
> Hi, I'm using HBase 0.94.12 on top of Hadoop 1.2.1, and I have one node for
> ZooKeeper, one node for a NameNode/HMaster, and th
Hi, I'm using HBase 0.94.12 on top of Hadoop 1.2.1, and I have one node for
ZooKeeper, one node for a NameNode/HMaster, and three DataNode/RegionServers.
All the machines are on Amazon EC2, instance type m2.xlarge.
I set the replication factor to two, so I'm expecting that if I kill a
HRegionServer/DataNode (for exam
So you use bulk load with HFileOutputFormat for writing data? Then you can
reduce hbase.regionserver.global.memstore.upperLimit and
hbase.regionserver.global.memstore.lowerLimit and give a larger heap percentage
to the block cache. I am not getting why you try to reduce that as well.
-Anoop-
On Thu, Nov 28, 2013 a
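For context, a sketch of the bulk-load path being discussed (0.92/0.94-era
MapReduce API; the table name and HFile output path are placeholders, and the
mapper/job wiring is omitted):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.mapreduce.Job;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable"); // placeholder name
        Job job = new Job(conf, "bulk-load-prepare");
        // Wires in total-order partitioning so reducers emit region-aligned HFiles.
        HFileOutputFormat.configureIncrementalLoad(job, table);
        // ... mapper setup and job.waitForCompletion(true) omitted ...
        // Afterwards, hand the finished HFiles over to the region servers:
        new LoadIncrementalHFiles(conf).doBulkLoad(new Path("/tmp/hfiles"), table);
        table.close();
    }
}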
Hi!
We are using HBase 0.92.1-cdh4.1.1. The only way we import data is bulk
load, and our common access pattern is sequential scans over different
parts of the tables.
Because of that, we are considering disabling the block cache by setting
hbase.block.cache.size to zero.
But we've found the following in
Sandeep,
Also, please take a look here: http://hbase.apache.org/book.html#hadoop
PS: HDFSv2 supports HA.
On Thu, Nov 28, 2013 at 2:31 PM, Sandeep L wrote:
> Hi,
> Thanks for the update.
> After spending quite a bit of time on Hadoop/HBase I couldn't find
> anything awkward in the logs.
> At last what I
https://issues.apache.org/jira/browse/HBASE-8912
On 28 November 2013 14:43, Jiajun Chen wrote:
>
>
> 2013-11-27 18:24:33,375 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter
> type for hdfs://
> master.uc.uuc.com:9000/hbase/H/18c9cb11b3e673dec07038f166fb3ef7/.tm