I am trying 0.90.1 (hbase-0.90.1-CDH3B4) in pseudo-distributed mode, and ran
into the problem of HMaster crashing. Here is what I did.
I. First I installed a Hadoop pseudo cluster (hadoop-0.20.2-CDH3B4) with the
following conf edited.
1) core-site.xml ==>
fs.default.name
hdfs://localhost:9000
2) h
Ok, but our app has online/realtime processing requirements. My
understanding is that bulk importing requires an M/R job and is only good
for batch processing?
The Javadoc says HBaseAdmin flush is an async operation. How do I get
confirmation of whether it succeeded or not?
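For reference, a table flush can also be requested from the HBase shell; a minimal sketch follows (the table name 'mytable' is a placeholder). Either way the request is asynchronous: the call returns once the flush is queued, not once the memstores are actually on HDFS.

```shell
# Ask HBase to flush table 'mytable' (placeholder name) via the shell.
# The flush is asynchronous: the command returns when the request is
# accepted, not when the memstores have been written to HDFS.
echo "flush 'mytable'" | hbase shell
```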
On 5/29/11, Todd Lipcon wrote:
Or actually flush the table rather than just flushing commits:
http://archive.cloudera.com/cdh/3/hbase-0.90.1-cdh3u0/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html#flush(byte[])
-Todd
On Sat, May 28, 2011 at 12:29 PM, Joey Echeverria wrote:
> You might want to look into bulk loading.
You might want to look into bulk loading.
-Joey
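For context, the CDH3-era bulk load usually runs in two steps: an importtsv M/R job writes HFiles, then completebulkload hands them to the RegionServers. A hedged sketch, where the jar path, column spec, table name, and HDFS paths are all placeholders for illustration:

```shell
# Step 1: run the importtsv M/R job to generate HFiles instead of doing
# puts (column spec, table name, and paths below are placeholders).
hadoop jar /usr/lib/hbase/hbase-0.90.1-cdh3u0.jar importtsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,f1:c1 \
  -Dimporttsv.bulk.output=/tmp/mytable-hfiles \
  mytable /tmp/input.tsv

# Step 2: move the generated HFiles into the table's regions.
hadoop jar /usr/lib/hbase/hbase-0.90.1-cdh3u0.jar completebulkload \
  /tmp/mytable-hfiles mytable
```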
On May 28, 2011 9:47 AM, "Qing Yan" wrote:
> Well, I realized myself that the RS flush to HDFS is not designed for
> incremental changes. So there is no way around the WAL? Man.. I just wish
> it could run a bit faster :-P
>
> On Sat, May 28, 2011 at 9:36 PM, Qin
Never got the "-c" argument to work, but when I set up the following
environment vars, it was happy:
export HBASE_HOME
added hbase conf dir to CLASSPATH
added hbase conf dir to HADOOP_CLASSPATH
not sure which of those did the trick, but I'm good now and I guess
hadoop now "knows about my hbase" v
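The workaround above, spelled out as explicit exports (the install path /usr/lib/hbase is just an example; adjust to your layout):

```shell
# Point at the HBase install and put its conf dir on both classpaths
# (example path; adjust for your installation).
export HBASE_HOME=/usr/lib/hbase
export CLASSPATH=$CLASSPATH:$HBASE_HOME/conf
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HBASE_HOME/conf
```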
Thanks, but I've still been unable to get completebulkload to recognize
the '-c ' argument. I downloaded the latest
stable 0.90.3 version of hbase, just to make sure I had a current
version. When I provide the -c argument, the completebulkload utility
prints its usage:
'completebulkload /path/to/h
Well, I realized myself that the RS flush to HDFS is not designed for
incremental changes. So there is no way around the WAL? Man.. I just wish
it could run a bit faster :-P
On Sat, May 28, 2011 at 9:36 PM, Qing Yan wrote:
> Ok, thanks for the explanation. So data loss is normal in this case.
> Yeah, I did
Ok, thanks for the explanation. So data loss is normal in this case.
Yeah, I did a "kill -9". I did wait till the RS got reassigned and
actually let process B keep retrying over the night...
Is the WAL the only way to guarantee data safety in HBase? We want a high
insert rate though.
Is there a middle