Hi Mohammad Tariq,
Thanks for the reply. I followed your instructions and changed the hosts files to this:
master:
127.0.0.1 localhost
127.0.0.1 localhost ubuntu
10.66.201.243 master
10.66.201.244 slave1
10.66.201.245 slave2
slave1:
127.0.0.1 localhost
127.0.0.1 slave1 ubuntu
10.66.201.243 master
10.66.201.2
Hi exp,
Do not remove this line, instead make it 127.0.0.1..and copy the
hadoop-core-0.20.204.0.jar from your HADOOP_HOME and
commons-configuration-1.6.jar from the HADOOP_HOME/lib folder to the
HBASE_HOME/lib folder. It should work then..Please let me know if it
works for you.
Regards,
M
Hi Stuti,
I ran into the same problem. The solution I found was to add
zookeeper-3.3.2.jar and log4j-1.2.5.jar to the HBase classpath, and it worked
fine.
Thank you, Stuti.
While executing one more MapReduce program I am getting the following error
on the Eclipse console, even though I added commons-cli-
Stuart Smith-8 wrote:
>
> Hello,
> How did you query HBase via a statement object? Are you using Hive?
>
> Or is this some new interface I don't know about.. I always had to use
> Get() or Scan().
> And hbase stores everything as bytes, not strings.. unlike C, in Java,
> there is a difference
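For reference, a minimal sketch of that byte-oriented client API (0.90-era classes; the table, family and qualifier names here are placeholders):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.util.Bytes;

  public class GetExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      HTable table = new HTable(conf, "mytable");    // placeholder table name
      Get get = new Get(Bytes.toBytes("row1"));      // row keys are byte[], not String
      Result result = table.get(get);
      byte[] raw = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
      System.out.println(Bytes.toString(raw));       // convert the bytes back explicitly
      table.close();
    }
  }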
Hi,
Some disks of one node in my HBase cluster were broken, and after I mounted
some new ones and started the regionserver/datanode on that node again, there
is no data locality anymore unless I trigger a major_compaction on the table
manually (the datanode and regionserver share the same physical node).
My que
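For what it's worth, a minimal sketch of triggering that major compaction programmatically instead of from the shell (0.90-era admin API; the table name is a placeholder):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HBaseAdmin;

  public class CompactTable {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      HBaseAdmin admin = new HBaseAdmin(conf);
      // Rewriting the region files is what restores locality here, per the
      // report above, once the regionserver and datanode are co-located again.
      admin.majorCompact("mytable");   // placeholder table name
    }
  }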
hi,
I'm using Hadoop 0.20.204.0
After I remove the 127.0.1.1 lines, the HMaster cannot start. I get this
exception:
2011-12-16 13:37:11,899 ERROR
org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.lang.RuntimeException: Failed construction of Master: class
org.
Got it. Thank you.
-Original Message-
From: Shrijeet Paliwal [mailto:shrij...@rocketfuel.com]
Sent: Thursday, December 15, 2011 9:36 PM
To: user@hbase.apache.org
Subject: Re: understanding hbase client timeout settings
Steve,
Karthick has given an explanation here :
https://reviews.apac
Steve,
Karthick has given an explanation here :
https://reviews.apache.org/r/755/(also in Jira, but it gets lost in
comments)
On Thu, Dec 15, 2011 at 9:34 PM, Shrijeet Paliwal
wrote:
> We needed calls to come back (or timeout) in less than 50ms. That was low
> for hbase.rpc.timeout.
>
>
> On Thu
We needed calls to come back (or timeout) in less than 50ms. That was low
for hbase.rpc.timeout.
On Thu, Dec 15, 2011 at 9:30 PM, Steve Boyle wrote:
> Shrijeet,
>
> What do you consider a very low value for hbase.rpc.timeout?
>
> Thanks,
> Steve
>
> -Original Message-
> From: Shrijeet Pa
Shrijeet,
What do you consider a very low value for hbase.rpc.timeout?
Thanks,
Steve
-Original Message-
From: Shrijeet Paliwal [mailto:shrij...@rocketfuel.com]
Sent: Thursday, December 15, 2011 9:03 PM
To: user@hbase.apache.org
Subject: Re: understanding hbase client timeout settings
S
Steve,
We have been using timeouts in production via two different methods:
1. Use RPC timeout mechanism provided by HBase client. For this you will
need both 2937 and 3154. We backported 2937 to 0.90.3. The reason 3154
alone does not help is that if you set the conf parameter introduced in
31
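A minimal sketch of method 1 on the client side, assuming a build that already carries those patches (the 50 ms figure is just the number from this thread, and the table name is a placeholder):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;

  public class LowRpcTimeoutClient {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Cap each client RPC at 50 ms; per the message above, without both the
      // 2937 and 3154 changes this setting alone is not enough.
      conf.setInt("hbase.rpc.timeout", 50);
      HTable table = new HTable(conf, "mytable");   // placeholder table name
      // gets/puts/scans issued through this table now use the lower timeout
      table.close();
    }
  }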
Hi Vamshi,
Are you sure you have added the relevant jars to your classpath correctly?
The error is: Caused by: java.lang.ClassNotFoundException:
org.apache.zookeeper.KeeperException
Your code is not able to find the proper jars on the classpath.
Hope this helps!
-Original Message-
From: Vamshi Krishna [
I added a note to the troubleshooting section. Thanks Joey.
On Dec 15, 2011, at 11:04 AM, Joey Echeverria wrote:
> For posterity, Mohammad fixed his problem with the following:
>
> "I copied the hadoop-core-0.20.205.0.jar and
> commons-configuration-1.6.jar from hadoop folder to the hbase/lib
>
Can you try w/o a limit, or with an upped limit, and see if there is a diff? Sounds
plausible, yes, but looks like you could make it a fact with some small experiments.
Thanks
On Dec 15, 2011, at 12:25 PM, Ben West wrote:
> Digging into this further, I see the following in HTablePool:
>
> public void putTable
Hi,
I'm trying to understand what timeout controls are available in the hbase
client. I'm using hbase version 0.90.4-cdh3u2. I have a client application
that does gets, puts, increments and scans. I'd like to be able to have a
client-side timeout such that the client can clean up in a case w
Hi folks-
The HBase book (soon to be renamed "Reference Guide") has been updated. Most
notably, in the Architecture chapter about regions (region assignment and
locality).
http://hbase.apache.org/book.html#regions.arch
Enjoy!
Doug Meil
Chief Software Architect, Explorys
doug.m...@explorys.c
Hi Michael,
Not a problem..I'll try to act according to your advice..Thanks a
lot for all the support.
Regards,
Mohammad Tariq
On Fri, Dec 16, 2011 at 2:31 AM, Michael Segel
wrote:
>
> Mohammad,
> I'm tight on time... Short answer...
> Strip out the xml in to some object and then con
Mohammad,
I'm tight on time... Short answer...
Strip out the XML into an object and then consider using Avro to write the
object to HBase.
This could probably shrink your footprint per record/row.
Note: I don't know anything about your data so you really have to take what I
say with a larg
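A rough sketch of what Michael is suggesting, assuming Avro's GenericRecord API; the two-field schema and all the names here are invented stand-ins for whatever you pull out of the XML:

  import java.io.ByteArrayOutputStream;
  import org.apache.avro.Schema;
  import org.apache.avro.generic.GenericData;
  import org.apache.avro.generic.GenericDatumWriter;
  import org.apache.avro.generic.GenericRecord;
  import org.apache.avro.io.BinaryEncoder;
  import org.apache.avro.io.EncoderFactory;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;

  public class AvroToHBase {
    public static void main(String[] args) throws Exception {
      // Invented schema standing in for the fields parsed out of the XML.
      Schema schema = new Schema.Parser().parse(
          "{\"type\":\"record\",\"name\":\"Doc\",\"fields\":["
        + "{\"name\":\"id\",\"type\":\"string\"},"
        + "{\"name\":\"body\",\"type\":\"string\"}]}");

      GenericRecord rec = new GenericData.Record(schema);
      rec.put("id", "doc-1");
      rec.put("body", "...parsed xml content...");

      // Serialize the record to a compact byte[].
      ByteArrayOutputStream out = new ByteArrayOutputStream();
      BinaryEncoder enc = EncoderFactory.get().binaryEncoder(out, null);
      new GenericDatumWriter<GenericRecord>(schema).write(rec, enc);
      enc.flush();

      // Store the whole blob in a single cell (table/family names are placeholders).
      HTable table = new HTable(HBaseConfiguration.create(), "docs");
      Put put = new Put(Bytes.toBytes("doc-1"));
      put.add(Bytes.toBytes("d"), Bytes.toBytes("avro"), out.toByteArray());
      table.put(put);
      table.close();
    }
  }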
Digging into this further, I see the following in HTablePool:
public void putTable(HTableInterface table) {
  LinkedList<HTableInterface> queue =
      tables.get(Bytes.toString(table.getTableName()));
  synchronized (queue) {
    if (queue.size() >= maxSize) {
      // release table instance since we're not reusing it
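For context, a small sketch of how the pool is normally used with an explicit maxSize (the field the check above guards); the table name is a placeholder:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.HTableInterface;
  import org.apache.hadoop.hbase.client.HTablePool;
  import org.apache.hadoop.hbase.util.Bytes;

  public class PoolExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      HTablePool pool = new HTablePool(conf, 10);        // keep at most 10 tables per name
      HTableInterface table = pool.getTable("mytable");  // placeholder table name
      try {
        table.get(new Get(Bytes.toBytes("row1")));
      } finally {
        pool.putTable(table);  // returned to the queue, or dropped if the queue is full
      }
    }
  }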
Hi Lars,
The files are not really big..they might go up to 20 kB..Initially we were
thinking about HDFS as storage, but due to the lack of random data
access we are now planning to use Hbase..Please guide me if you think
there is some way that can help us, as we are new to the hadoop world.
Regard
How big are these XML files?
You might want to consider storing them in HDFS directly and keeping only the
meta information in HBase.
-- Lars
Mohammad Tariq wrote:
>Hello list,
>
> I want to store xml files in Hbase and these files may have
>tags within tags..And for that I have to create severa
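A minimal sketch of the split Lars describes, with made-up paths, table and column names: the file body goes to HDFS and HBase keeps only a pointer row:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;

  public class XmlToHdfsWithMeta {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();

      // Write the raw XML file to HDFS (path is a placeholder).
      Path path = new Path("/data/xml/doc-1.xml");
      FileSystem fs = FileSystem.get(conf);
      FSDataOutputStream out = fs.create(path);
      out.write(Bytes.toBytes("<doc>...</doc>"));
      out.close();

      // Keep only the metadata (here just the HDFS path) in HBase.
      HTable table = new HTable(conf, "doc_meta");   // placeholder table/family names
      Put put = new Put(Bytes.toBytes("doc-1"));
      put.add(Bytes.toBytes("meta"), Bytes.toBytes("hdfs_path"),
              Bytes.toBytes(path.toString()));
      table.put(put);
      table.close();
    }
  }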
For posterity, Mohammad fixed his problem with the following:
"I copied the hadoop-core-0.20.205.0.jar and
commons-configuration-1.6.jar from hadoop folder to the hbase/lib
folder, and after that Hbase was working fine."
On Thu, Dec 15, 2011 at 11:30 AM, Joey Echeverria wrote:
> (-hdfs-u...@hado
I agree with J-D and Shashwat. BTW, which version of Hadoop are you using??
Regards,
Mohammad Tariq
On Thu, Dec 15, 2011 at 11:56 PM, shashwat shriparv
wrote:
> Change 127.0.1.1 to 127.0.0.1; that will solve a lot of problems.
>
> On Thu, Dec 15, 2011 at 11:54 PM, Jean-Daniel Cryans
> wrote:
>
>
Hello Michael,
First of all I would like to thank you for such a good
reply...Yes, you are absolutely right, this was regarding hierarchical
data. Actually we have a web service that collects information from a
server through xml files. Till now we have used Oracle for the
storage. But, sinc
Change 127.0.1.1 to 127.0.0.1; that will solve a lot of problems.
On Thu, Dec 15, 2011 at 11:54 PM, Jean-Daniel Cryans wrote:
> Hi,
>
> A few notes:
>
> Remove the 127.0.1.1 lines, they usually mess things up.
>
> The hbase.master configuration has been removed from the HBase code
> more than 2 years a
Hi,
A few notes:
Remove the 127.0.1.1 lines, they usually mess things up.
The hbase.master configuration was removed from the HBase code more than
2 years ago, so you can remove it too.
Setting hbase.master.dns.interface alone without
hbase.master.dns.nameserver doesn't do anything if I remem
Thanks for the reply.
But that is from Java..I am looking for a way to do it from the HBase shell.
- Original Message -
From: Stack
To: user@hbase.apache.org; Sreeram K
Cc:
Sent: Thursday, December 15, 2011 10:10 AM
Subject: Re: HBase- Scan with wildcard character
On Thu, Dec 15, 2011 at 8:59 AM, Sreer
On Thu, Dec 15, 2011 at 8:59 AM, Sreeram K wrote:
> I have one more question..
> Can we have a query in the HBase shell based on a column value?
>
> I am looking at scan with a column ID? Is that possible..the way we are doing it
> with STARTROW?
> Can you please point me to an example..
>
>
You need to use a v
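Stack's reply is cut off above; for reference, in the Java API this kind of value-based scan is typically done with a filter such as SingleColumnValueFilter. A minimal sketch with placeholder table, family, qualifier and value:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
  import org.apache.hadoop.hbase.util.Bytes;

  public class ScanByColumnValue {
    public static void main(String[] args) throws Exception {
      HTable table = new HTable(HBaseConfiguration.create(), "mytable");
      Scan scan = new Scan();
      // Only return rows whose cf:qual cell equals "someValue".
      scan.setFilter(new SingleColumnValueFilter(
          Bytes.toBytes("cf"), Bytes.toBytes("qual"),
          CompareOp.EQUAL, Bytes.toBytes("someValue")));
      ResultScanner scanner = table.getScanner(scan);
      for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow()));
      }
      scanner.close();
      table.close();
    }
  }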
I have one more question..
Can we have a query in the HBase shell based on a column value?
I am looking at scan with a column ID? Is that possible..the way we are doing it
with STARTROW?
Can you please point me to an example..
- Original Message -
From: Sreeram K
To: "user@hbase.apache.org" ; lars ho
(-hdfs-u...@hadoop.apache.org, +user@hbase.apache.org)
What error message are you seeing when you try to start HBase?
-Joey
On Fri, Dec 9, 2011 at 12:14 PM, Mohammad Tariq wrote:
> Hi Joey,
> Thanks a lot for the response. Hadoop is working fine in pseudo-distributed
> mode. I am able to use H
Hi Mohammad,
It sounds like you want to implement a hierarchical data model within HBase. You
can do this, albeit there are some drawbacks...
In terms of drawbacks...
The best example that I can think of is implementing a point of sale solution
in Dick Pick's Revelation system.
Here you store
Hello Doug,
Thanks a lot for the links.
Regards,
Mohammad Tariq
On Thu, Dec 15, 2011 at 7:34 PM, Doug Meil
wrote:
>
> Hi there-
>
> See...
>
> http://hbase.apache.org/book.html#supported.datatypes
>
> http://hbase.apache.org/book.html#datamodel
>
> ... Hbase is "bytes-in bytes-out" in
Hi there-
See...
http://hbase.apache.org/book.html#supported.datatypes
http://hbase.apache.org/book.html#datamodel
... Hbase is "bytes-in bytes-out" in terms of what it stores.
On 12/15/11 7:25 AM, "Mohammad Tariq" wrote:
>Hello list,
>
> I want to store xml files in Hbase and thes
Hello list,
I want to store xml files in Hbase and these files may have
tags within tags..And for that I have to create several columns within
a column family..How can I do that..Sorry if my question is
childish..And if that is the case please provide me some link where I
can get the proper
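A minimal sketch of the "several columns within a column family" part, with invented names: qualifiers do not have to be declared up front, so each (possibly nested) tag can simply become its own qualifier under one family:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;

  public class StoreXmlFields {
    public static void main(String[] args) throws Exception {
      HTable table = new HTable(HBaseConfiguration.create(), "xmldocs");  // placeholder
      Put put = new Put(Bytes.toBytes("doc-1"));
      // One family ("x") is declared at table-creation time; the qualifiers are
      // created on the fly, one per tag path.
      put.add(Bytes.toBytes("x"), Bytes.toBytes("order"), Bytes.toBytes("<order>...</order>"));
      put.add(Bytes.toBytes("x"), Bytes.toBytes("order.item"), Bytes.toBytes("book"));
      put.add(Bytes.toBytes("x"), Bytes.toBytes("order.item.price"), Bytes.toBytes("10"));
      table.put(put);
      table.close();
    }
  }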
In our tests, filtering rows by timestamp was much faster than using
a filter, which results in a full table scan. But I question the reliability
of using the internal timestamp to detect new data, and whether this still
scales with a growing amount of data over the years.
Regards,
Thomas
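A minimal sketch of the timestamp-based variant Thomas mentions, with a placeholder table name and time window; setTimeRange restricts the scan to cells whose internal timestamps fall in the given range:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.util.Bytes;

  public class ScanByTimeRange {
    public static void main(String[] args) throws Exception {
      HTable table = new HTable(HBaseConfiguration.create(), "mytable");  // placeholder
      long since = System.currentTimeMillis() - 24L * 3600 * 1000;        // e.g. last 24 hours
      Scan scan = new Scan();
      // Only cells whose timestamp falls in [since, now) are returned.
      scan.setTimeRange(since, System.currentTimeMillis());
      ResultScanner scanner = table.getScanner(scan);
      for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow()));
      }
      scanner.close();
      table.close();
    }
  }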
-Origina
Hi all,
I am installing HBase on a small cluster of 3 machines. The RegionServer is
unable to connect to the master. This is the log:
2011-12-15 13:46:43,415 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect to
Master server at localhost:6
2011-12-15 13:47:43,473 WARN