On Thu, Jun 28, 2012 at 1:40 AM, samar kumar wrote:
> Thanks for the replies. I am aware of the APIs, but can anyone give me a
> little more insight into the details? After creating the HFiles and
> calling IncrementalLoadHFile, how does it internally change the RS, catalog
> tables, etc.?
> Can any … of time.
>
>
> -Anoop-
>
> From: Jerry Lam [chiling...@gmail.com]
> Sent: Wednesday, June 27, 2012 10:52 PM
> To: user@hbase.apache.org
> Subject: Re: direct Hfile Read and Writes
>
> Hi Samar:
>
> I have used IncrementalLoadHFile successfully in the past.
Hi Samar:
I have used IncrementalLoadHFile successfully in the past. Basically, once
you have written the HFile yourself, you can use IncrementalLoadHFile to
merge it with the HFiles currently managed by HBase. Once it is loaded into
HBase, the records in the incremental HFile are accessible to clients.
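The bulk-load step described above can be sketched in Java. This is a minimal sketch, assuming the 0.94-era API that was current at the time of this thread; the table name "my_table" and the directory "/tmp/hfiles" are hypothetical placeholders, and it needs a running HBase cluster plus the HBase jars on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // "my_table" and "/tmp/hfiles" are placeholders. The directory must
        // contain one subdirectory per column family, each holding HFiles
        // (the layout HFileOutputFormat produces).
        HTable table = new HTable(conf, "my_table");
        try {
            LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
            // Moves (or splits, if a file straddles a region boundary) each
            // HFile into the proper region's store directory.
            loader.doBulkLoad(new Path("/tmp/hfiles"), table);
        } finally {
            table.close();
        }
    }
}
```

Note that if an HFile spans more than one region's key range, the loader splits it before loading, which is why pre-sorting and pre-partitioning the output (as HFileOutputFormat's TotalOrderPartitioner setup does) matters for performance.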
1. Since the data we might need would be distributed across regions, how
would direct reading of HFiles be helpful?
You can read the HFilePrettyPrinter source; it shows how to create an
HFile.Reader and use it to read an HFile.
Or you can use: ./hbase org.apache.hadoop.hbase.io.hfile.HFile -p -f
hdf…
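The programmatic route mentioned above (what HFilePrettyPrinter does internally) can be sketched as follows. This is a hedged sketch assuming the 0.94-era HFile.createReader signature; the file path is a hypothetical placeholder, and the program needs HDFS access and the HBase jars:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;

public class HFileReadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        FileSystem fs = FileSystem.get(conf);

        // Placeholder path; point this at a real HFile on HDFS.
        Path hfilePath = new Path("/tmp/hfiles/cf/myhfile");

        HFile.Reader reader =
            HFile.createReader(fs, hfilePath, new CacheConfig(conf));
        try {
            reader.loadFileInfo();
            // getScanner(cacheBlocks, positional-read)
            HFileScanner scanner = reader.getScanner(false, false);
            if (scanner.seekTo()) {          // position at the first KV
                do {
                    KeyValue kv = scanner.getKeyValue();
                    System.out.println(kv);  // print each key/value
                } while (scanner.next());
            }
        } finally {
            reader.close();
        }
    }
}
```

This reads a single HFile only; it bypasses the RegionServer entirely, so it sees none of the MemStore contents or the other HFiles in the store, which is why it is mainly useful for debugging and offline inspection rather than serving reads.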
Hi Hbase Users,
I have seen APIs supporting direct HFile reads and writes. I do understand
that this would create HFiles in the specified location, and it should be
much faster since we would skip all the lookups to ZK, the catalog table,
and the RS, but can anyone point me to a particular case when we would like