Anil,
It's not necessary to use a comma. You may use any other character as
the delimiter.
And you are right: the number of splits must match the number of columns.
Thanks,
Sergey
On Tue, Mar 22, 2016 at 6:31 AM, Anil wrote:
> Thanks Sergey for the response. I cannot change the delimiter in my file as ...
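A quick illustration of the workaround being discussed: when the data itself
contains commas, the file can be parsed with another delimiter and upserted
over plain JDBC (the bundled psql.py loader also accepts a delimiter option,
if memory serves). A minimal sketch, assuming the Phoenix client jar is on the
classpath; the table, columns (MY_TABLE, ID, NAME, PRICE), and connect string
are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch: load pipe-delimited records through the Phoenix JDBC driver
// instead of the CSV loader. All names below are placeholders.
public class PipeDelimitedLoad {
    public static void main(String[] args) throws Exception {
        // Assumes a local ZooKeeper quorum; adjust the connect string.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
            PreparedStatement ps = conn.prepareStatement(
                "UPSERT INTO MY_TABLE (ID, NAME, PRICE) VALUES (?, ?, ?)");
            String record = "1|widget, deluxe|9"; // '|' as the field delimiter
            String[] fields = record.split("\\|");
            ps.setInt(1, Integer.parseInt(fields[0]));
            ps.setString(2, fields[1]); // commas in the data are now harmless
            ps.setInt(3, Integer.parseInt(fields[2]));
            ps.executeUpdate();
            conn.commit(); // Phoenix batches mutations until commit
        }
    }
}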
Hi Mohammad,
The right class to look into is PInteger. It has a static class, IntCodec,
which is used to encode/decode integers.
Thanks,
Sergey
On Tue, Mar 22, 2016 at 7:15 AM, Mohammad Adnan Raza wrote:
> I am changing my question a bit to be more precise...
> Given a phoenix table with INTEGER column type ...
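For the curious, a minimal sketch of what that encoding amounts to: Phoenix
stores an INTEGER as 4 big-endian bytes with the sign bit flipped, so that
unsigned byte-wise comparison preserves numeric order. It assumes phoenix-core
on the classpath; the manual encode() is only an illustration of what
PInteger's IntCodec does internally:

import java.util.Arrays;
import org.apache.phoenix.schema.types.PInteger;

// Sketch of Phoenix's INTEGER serialization: 4 big-endian bytes with the
// sign bit flipped so byte-wise ordering matches numeric ordering.
public class IntEncodingDemo {
    static byte[] encode(int v) {
        int flipped = v ^ Integer.MIN_VALUE; // flip the sign bit
        return new byte[] {
            (byte) (flipped >> 24), (byte) (flipped >> 16),
            (byte) (flipped >> 8),  (byte) flipped
        };
    }

    public static void main(String[] args) {
        int value = 42;
        // What Phoenix itself produces for an INTEGER column value:
        byte[] fromPhoenix = PInteger.INSTANCE.toBytes(value);
        // The manual sketch above should agree with it:
        System.out.println(Arrays.equals(fromPhoenix, encode(value))); // true
    }
}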
Thanks Josh and everyone else. Shall try this suggestion.
On 22 Mar 2016 09:36, "Josh Elser" wrote:
> Keytab-based logins do not automatically spawn a renewal thread in
> Hadoop's UserGroupInformation library, IIRC. HBase's RPC implementation
> does try to automatically re-login, but if you are ...
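A minimal sketch of the manual relogin loop that suggestion amounts to, using
Hadoop's UserGroupInformation API; the principal, keytab path, and renewal
interval below are placeholders:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: keytab logins get no automatic renewal thread from UGI, so the
// application schedules its own periodic relogin check.
public class KeytabRelogin {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
            "user@EXAMPLE.COM", "/etc/security/keytabs/user.keytab");

        ScheduledExecutorService renewer =
            Executors.newSingleThreadScheduledExecutor();
        renewer.scheduleAtFixedRate(() -> {
            try {
                // No-op while the TGT is fresh; re-logs in from the keytab
                // when the ticket is close to expiring.
                UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 1, 1, TimeUnit.MINUTES);
    }
}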
I don't know the meaning of IN_MEMORY=false in the context of Phoenix. In the
context of HBase, the entire data will be kept in memory (not just the block
cache). I'm guessing Phoenix is doing the same thing.
From the data size on disk, you can roughly estimate the memory footprint. For
example, you can load one index ...
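For reference, a sketch of setting the property at index creation time. My
understanding (worth verifying) is that Phoenix passes properties given in the
DDL through to the underlying HBase table/column family, so IN_MEMORY can be
supplied when the index is created; the index, table, and column names here
are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: supply the HBase IN_MEMORY property in the Phoenix CREATE INDEX
// DDL so it applies to the index's backing table from the start.
public class InMemoryIndex {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // IN_MEMORY marks the column family for the in-memory section
            // of the HBase block cache.
            stmt.execute("CREATE INDEX MY_IDX ON MY_TABLE (COL1, COL2, COL3) "
                       + "IN_MEMORY=true");
        }
    }
}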
Thank you Anil.
Any inputs on what IN_MEMORY = false does at index creation time?
I get the separate table part, but I am not sure I understood why the memory
footprint would be equal to the data size. Would this secondary index reside
entirely in memory? I guess only parts of it would reside in the block cache ...
Global indexes are stored in a separate HBase table. So, you can estimate the
memory footprint by looking at the current data size of that index.
HTH,
Anil Gupta
On Tue, Mar 22, 2016 at 7:19 AM, Sumit Nigam wrote:
> Hi,
>
> I am trying to estimate what (if any) are the implications of accumulating ...
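One rough way to do the estimate Anil describes programmatically: sum the
index table's files under the HBase root on HDFS. A sketch, assuming the
default /hbase root directory and the default namespace; the index table name
MY_IDX is a placeholder, and the actual in-memory footprint will differ since
the block cache holds decompressed blocks:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: on-disk size of a global index's backing HBase table, read from
// its directory under the HBase root. Path layout is an assumption.
public class IndexSizeEstimate {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path indexDir = new Path("/hbase/data/default/MY_IDX");
        ContentSummary cs = fs.getContentSummary(indexDir);
        // First-order estimate only; compression and block cache behavior
        // both move the real memory number.
        System.out.println("Index size on disk: " + cs.getLength() + " bytes");
    }
}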
Hi,
I am trying to estimate what (if any) are the implications of accumulating data
in a Phoenix secondary index. I have a secondary index on 3 columns and would
like to know if anyone has an idea of how to estimate the memory footprint of a
secondary index (if any) based on the number of entries in the data table ...
I am changing my question a bit to be more precise...
Given a Phoenix table with an INTEGER column type: if I fire an upsert
statement with an integer value, how does Phoenix convert it to a byte array
and put it into the HBase table?
Or, if anyone can tell me which class is responsible for that conversion,
I can ...
Thanks Sergey for the response. I cannot change the delimiter in my file, as
the comma is used as a valid character in my data.
From my understanding of the code, the number of splits in a CSV record must
match the number of columns. Agree?
Regards,
Anil
On 21 March 2016 at 23:52, Sergey Soldatov wrote:
> Hi
Hello Everyone,
I have created a Phoenix table like this:
CREATE TABLE PRODUCT_DETAILS(
    NAME VARCHAR NOT NULL PRIMARY KEY,
    CF.VOLUME INTEGER, CF.PRICE INTEGER, CF.DISCOUNT INTEGER,
    CF.BASELINE INTEGER, CF.UPLIFT INTEGER,
    CF.FINALPRICE INTEGER, CF.SALEPRICE INTEGER);
Now, the datatype INTEGER is a 4-byte signed ...
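As a concrete example of writing such INTEGER values, a minimal upsert sketch
against the PRODUCT_DETAILS table above; the Phoenix client jar, the connect
string, and the sample values are assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch: upsert one row into PRODUCT_DETAILS. Each INTEGER column is
// stored by Phoenix as 4 big-endian bytes with the sign bit flipped.
public class ProductUpsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
            PreparedStatement ps = conn.prepareStatement(
                "UPSERT INTO PRODUCT_DETAILS (NAME, CF.VOLUME, CF.PRICE, "
              + "CF.DISCOUNT, CF.BASELINE, CF.UPLIFT, CF.FINALPRICE, "
              + "CF.SALEPRICE) VALUES (?, ?, ?, ?, ?, ?, ?, ?)");
            ps.setString(1, "widget");
            for (int i = 2; i <= 8; i++) {
                ps.setInt(i, 100); // any value in the signed 32-bit range
            }
            ps.executeUpdate();
            conn.commit();
        }
    }
}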