The 2nd HBase 0.98.0 release candidate (RC1) is available for download at
http://people.apache.org/~apurtell/0.98.0RC1/ and Maven artifacts are also
available in the temporary repository
https://repository.apache.org/content/repositories/orgapachehbase-1003
Signed with my code signing key D5365CCD
Yup, that would be the question.
Having played around with endpoint coprocessors to implement real-time
time-series aggregations in 0.92, it would seem worth the effort to
implement it if it didn't already exist, now that the coprocessor API has
become somewhat stable.
On 29.01.2014 18:59 wrote:
M/R + timeRange filter will be unnecessarily slow and heavy on the cluster. If
you can, lead the row key with the time; that way you can very quickly find any
changes within an interval.
(But you need to watch for region hotspotting then; you might need to prefix
the row with a few bits from a hash of the row key.)
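The salting idea above can be sketched in plain Java. This is a hypothetical helper, not code from the thread: the bucket count, the use of `Arrays.hashCode` as the hash, and the key layout are all illustrative assumptions.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class SaltedTimeKey {
    static final int SALT_BUCKETS = 16; // assumption: number of salt prefixes

    // Builds a row key laid out as [1-byte salt][8-byte timestamp][original key].
    // The salt is derived from the original key, so the same key always lands in
    // the same bucket, while writes at the same time spread across regions.
    static byte[] build(byte[] originalKey, long timestampMillis) {
        byte salt = (byte) Math.floorMod(Arrays.hashCode(originalKey), SALT_BUCKETS);
        ByteBuffer buf = ByteBuffer.allocate(1 + 8 + originalKey.length);
        buf.put(salt).putLong(timestampMillis).put(originalKey);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] key = build("row-42".getBytes(), 1391000000000L);
        System.out.println(key.length);                          // 1 + 8 + 6
        System.out.println(key[0] >= 0 && key[0] < SALT_BUCKETS);
    }
}
```

To scan one time interval you would then issue one scan per salt bucket and merge the results, which is the usual cost of this trade-off.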
Not sure if this will help, but it is worth taking a look at the master
hostname/IP registered in ZooKeeper and making sure the same hostname/IP is in
your /etc/hosts. For example,
hbase zkcli
get /hbase/master
Kim
On Wed, Jan 29, 2014 at 12:11 PM, Fernando Iwamoto - Plannej <
fernando.iwam...@plannej.com.br> wrote:
Hi Ted and Vladimir, thanks!
I was wondering whether using an index is a good idea. My scan/get criterion is
something like "get all rows I inserted since the end of yesterday". I may
have to use MapReduce + a timeRange filter.
Lars and all, I will try to report back some performance data later.
Thanks for th
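A quick sketch of the timestamp window for "everything since the end of yesterday", i.e. the half-open [min, max) pair one would hand to HBase's `Scan.setTimeRange`. The time zone and the fixed "now" instant are assumptions for illustration:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class SinceYesterday {
    // Returns {minTimestamp, maxTimestamp} in epoch millis: the window from
    // today's midnight (= the end of yesterday) up to "now".
    static long[] window(Instant now, ZoneId zone) {
        long min = now.atZone(zone).toLocalDate()
                      .atStartOfDay(zone).toInstant().toEpochMilli();
        return new long[] { min, now.toEpochMilli() };
    }

    public static void main(String[] args) {
        // Fixed instant so the example is deterministic.
        Instant now = Instant.parse("2014-01-29T18:00:00Z");
        long[] w = window(now, ZoneOffset.UTC);
        System.out.println(w[0]); // millis for 2014-01-29T00:00:00Z
        System.out.println(w[1]); // millis for "now"
    }
}
```

Note that `setTimeRange` filters on cell timestamps, which only match insertion time if the cells were written with default (server-assigned) timestamps.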
Thanks. The network settings are managed by a dedicated group of staff.
I can ssh between any two nodes using "obelix*.local" and directly
using IPs without any problem. I don't know how to further test
whether the settings are problematic.
On Wed, Jan 29, 2014 at 3:31 PM, Jean-Marc Spaggiari
wrote:
Thank you very much. It's hard to remember these details.
Yes, I uncommented HBASE_MANAGES_ZK. The following lines are
uncommented in my hbase-env.sh:
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_37
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_MANAGES_ZK=true
On Wed, Jan 29, 2014 at 3:11
No, there's no firewall.
On Wed, Jan 29, 2014 at 3:38 PM, Dhaval Shah
wrote:
> Do you have a firewall between the master and the slaves?
>
> Regards,
>
>
>
> Dhaval
>
>
>
> From: Fernando Iwamoto - Plannej
> To: user@hbase.apache.org
> Sent: Wednesday, 29 January
Does this include capability to work with secure HBase as well?
Thanks
On Friday, January 24, 2014 4:28:16 PM UTC-6, tsuna wrote:
>
> Hi all,
> after 3 months in RC2 and 2 more months past that, I'm happy to
> announce that AsyncHBase 1.5.0 is now officially out. AsyncHBase
> remains true to
> table:family2 holds only row keys (no data) from table:family1.
Wei:
You can designate family2 as the essential column family so that family1 is
brought into the heap only when needed.
On Wed, Jan 29, 2014 at 1:33 PM, Vladimir Rodionov
wrote:
> Yes, your row will be split by KV boundaries - no need to
Yes, your row will be split at KV boundaries - no need to increase the default
block size, except possibly for performance.
You will need to try different sizes to find optimal performance for your use
case.
I would not use a combination of scan and get on the same table:family with
very large rows.
Either
I see.
In that case it seems larger block sizes would be beneficial. In the end you
need to performance-test this, though.
You can load your data once and then change the block size to various values;
if you force a major compaction, your data will be rewritten with the new block
size.
If you ha
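The load-once-then-recompact experiment described above might look like this in the HBase shell; the table and family names are placeholders, not from the thread:

```
# Change the block size for one column family (here: 256KB)...
alter 'mytable', {NAME => 'f1', BLOCKSIZE => '262144'}
# ...then force a rewrite of the data with the new block size.
major_compact 'mytable'
```

Repeating this with a few sizes and timing the workload against each is the kind of test Lars suggests.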
Sorry, 1000 columns, each 2K, so each row is 2M. I guess HBase will keep a
single KV (i.e., a column rather than a row) in a block, so a row will
span multiple blocks?
My scan pattern is: I will do a range scan, find the matching row keys, and
fetch the whole row for each row that matches my criteria.
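The arithmetic behind "a row will span multiple blocks", using the numbers from this thread (64KB is HBase's stock HFile block size; the split-on-KV-boundary assumption follows Vladimir's earlier reply):

```java
public class BlockMath {
    public static void main(String[] args) {
        int columns = 1000;
        int cellBytes = 2 * 1024;        // ~2K per cell (KV)
        int blockBytes = 64 * 1024;      // default HFile block size
        int rowBytes = columns * cellBytes;              // bytes per row
        // Blocks are cut on KV boundaries, so roughly:
        int cellsPerBlock = blockBytes / cellBytes;      // KVs per block
        int blocksPerRow = (columns + cellsPerBlock - 1) / cellsPerBlock;
        System.out.println(rowBytes);       // ~2MB per row
        System.out.println(cellsPerBlock);
        System.out.println(blocksPerRow);   // blocks touched to read one row
    }
}
```

So fetching a whole row touches a few dozen blocks at the default size, which is why a larger block size can pay off for this access pattern.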
To be clearer: each KV (cell) is a couple of KB, but each row is a couple
of MB. If I need to search by row key but always fetch rows as a whole,
should I use a block size larger than the default 64KB?
Thanks,
Wei
-
Wei Tan, PhD
Research Staff Member
IBM T
You have 1000 columns? Not 1000k = 1M columns, I assume.
So you'll have 2MB KVs. That's a bit on the large side.
HBase will "grow" the block to fit the KV into it. It means you have basically
one block per KV.
I guess you address these rows via point gets (GET), and do not typically scan
through them,
Do you have a firewall between the master and the slaves?
Regards,
Dhaval
From: Fernando Iwamoto - Plannej
To: user@hbase.apache.org
Sent: Wednesday, 29 January 2014 3:11 PM
Subject: Re: RegionServer unable to connect to master
I am new to HBase too, but I
Hi, I have an HBase table where each row has ~1000k columns, ~2K each. My
table scan pattern is to use a row key filter, but I need to fetch the
whole row (~1000k columns) back.
Shall I set the HFile block size to be larger than the default 64K?
Thanks,
Wei
-
Wei Tan,
Is that correct?
$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 obelix105.local xx.yy.netobelix105
192.168.245.1 obelix.local
In the regionserver list you have obelix105.local,
which points to 127.0.1.1. Should it not be 192.168.245.1 instead?
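A corrected /etc/hosts along these lines might look like the sketch below. The real cluster IP of obelix105 is not given in the thread, so `192.168.245.x` is a placeholder; the point is that the node's own hostname must map to its reachable address rather than to 127.0.1.1:

```
127.0.0.1      localhost
192.168.245.x  obelix105.local  obelix105   # real cluster IP, not 127.0.1.1
192.168.245.1  obelix.local
```

The Debian/Ubuntu-style 127.0.1.1 line is a common cause of a region server registering an address the master cannot reach.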
2014-01-29 Fernando Iwamoto - Plannej
I am new to HBase too, but I had the same problem a long time ago and I don't
remember how I fixed it; I will keep troubleshooting with you...
How about ZooKeeper? Have you uncommented HBASE_MANAGES_ZK (something
like that) in hbase-env.sh and set it to true?
2014-01-29 Guang Gao
> You mean the SSH key? Yes, an
You mean the SSH key? Yes, any two nodes can ssh to each other without a password.
On Wed, Jan 29, 2014 at 2:10 PM, Fernando Iwamoto - Plannej
wrote:
> Did you try to pass the key to the machines?
>
>
> 2014-01-29 birdeeyore
>
>> Thanks for your reply. Here's some additional info. Thanks.
>>
>> $ c
Did you try to pass the key to the machines?
2014-01-29 birdeeyore
> Thanks for your reply. Here's some additional info. Thanks.
>
> $ cat hbase-site.xml
>
> <property>
>   <name>hbase.cluster.distributed</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.rootdir</name>
>   <value>hdfs://obelix8.local:9001/hbase</value>
> </property>
> hbas
Thanks for your reply. Here's some additional info. Thanks.
$ cat hbase-site.xml

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://obelix8.local:9001/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>obelix105.local,obelix106.local,obelix107.local</value>
</property>
hbase.zookeeper.pro
Are you looking to trigger endpoint coprocessors ?
-Viral
On Wed, Jan 29, 2014 at 2:33 AM, ripsacCTO wrote:
> Hi Tsuna,
>
> Is there any support for coprocessors ?
>
>
> On Saturday, January 25, 2014 3:58:16 AM UTC+5:30, tsuna wrote:
>>
>> Hi all,
>> after 3 months in RC2 and 2 more months pas
I'm using Pig 0.11.1. I don't recall having to bundle a specific hbase-site.xml
in my pig jar in the past, but it's certainly worth checking.
Ted Yu wrote:
>From the stack trace, looks like TableOutputFormat had trouble reading
>from
>zookeeper.
>
>What pig version were you using ?
>
>Cheers
>
>
Hi,
can you please share your config files and your host file?
Thanks,
JM
2014-01-29 Guang Gao
> Hi all,
>
> This is my first time to try to setup HBase on a 10-node cluster. I tried
> two settings: HBase 0.94.16+Hadoop 1.2.1, and HBase 0.96.1.1+Hadoop 2.2.0.
> In both cases, the region serv
My approach (with Maven) is the following:
1. The co-processor package is built as a separate artifact, and it is tested
in the context of another module as part of integration testing.
2. The module is built with the co-processor package placed via the dependency
plugin. Its location is passed to the test through syst
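Step 2 above might be wired up roughly as below with maven-dependency-plugin. The coordinates, output directory, and execution id are placeholders, not from the thread:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-coprocessor</id>
      <phase>pre-integration-test</phase>
      <goals><goal>copy</goal></goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <groupId>com.example</groupId>
            <artifactId>my-coprocessors</artifactId>
            <outputDirectory>${project.build.directory}/coprocessor</outputDirectory>
          </artifactItem>
        </artifactItems>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The integration tests can then receive that path as a system property (e.g. via the failsafe plugin's `systemPropertyVariables`) and load the jar into the test cluster.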
Thanks for the link. It helped. I wish I had seen it before, but the problem
with the jersey dependency in the client's pom file is still open. It appeared
after upgrading from HBase 0.94.7 to hbase-0.96.1-hadoop2.
thx and regards,
shapoor