Re: Question regarding designing row keys

2016-10-03 Thread Krishna
You have two options:
- Modify your primary key to include metric_type & timestamp as leading columns.
- Create an index on metric_type & timestamp.

On Monday, October 3, 2016, Kanagha wrote:
> Sorry for the confusion.
>
> metric_type,
> timestamp,
> metricId is defined as the primary key via Phoenix ...
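A minimal sketch of both options in Phoenix DDL, assuming illustrative column types; only metric_type, timestamp, and metricId are named in the thread, so the metric_value column is hypothetical and "ts" stands in for the timestamp column:

    -- Option 1: redefine the table so metric_type and ts lead the primary key
    CREATE TABLE metric_table (
        metric_type  VARCHAR NOT NULL,
        ts           TIMESTAMP NOT NULL,
        metric_id    VARCHAR NOT NULL,
        metric_value DOUBLE   -- hypothetical value column, not named in the thread
        CONSTRAINT pk PRIMARY KEY (metric_type, ts, metric_id)
    );

    -- Option 2: keep the existing key and add a secondary index on the two columns
    CREATE INDEX metric_type_ts_idx ON metric_table (metric_type, ts);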

Re: Implement Custom Aggregate Functions in Phoenix

2016-10-03 Thread Akhil
Hi Swapna, I happened to run into the same issue. I have written a custom aggregate UDF and successfully compiled phoenix-core and the full phoenix project too. But after replacing the phoenix-client, phoenix-server and phoenix-core jars I still get the following error:
Error: ERROR 6001 (42F01): Function undefine...
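If the function is meant to go through Phoenix's UDF framework rather than be added as a new built-in, the step that usually triggers "Function undefined" is the missing registration. A hedged sketch, with placeholder class name and jar path, assuming hbase-site.xml has phoenix.functions.allowUserDefinedFunctions set to true:

    -- Register a user-defined function; class name and jar location are placeholders
    CREATE FUNCTION my_metric_fn(VARCHAR) RETURNS VARCHAR
        AS 'com.example.udf.MyMetricFunction'
        USING JAR 'hdfs://namenode:8020/phoenix/udfs/my-udfs.jar';

As the "Phoenix custom UDF" thread below notes, this path covers scalar UDFs; aggregate functions are not registered this way.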

Re: Question regarding designing row keys

2016-10-03 Thread Kanagha
Sorry for the confusion. metric_type, timestamp, metricId is defined as the primary key via Phoenix for metric_table.

Thanks
Kanagha

On Mon, Oct 3, 2016 at 3:41 PM, Michael McAllister wrote:
> > there is no indexing available on this table yet.
>
> So you haven’t defined a prim...

Re: Question regarding designing row keys

2016-10-03 Thread Michael McAllister
> there is no indexing available on this table yet.

So you haven’t defined a primary key constraint? Can you share your table creation DDL?

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com | C: 512.423.7447 | skype: ...

Re: Question regarding designing row keys

2016-10-03 Thread Kanagha
I did an explain plan in Phoenix and it returned "Parallel 1-way round robin full scan over metric_table". A full scan is done.

Thanks
Kanagha

On Mon, Oct 3, 2016 at 1:38 PM, Kanagha wrote:
> + user@phoenix.apache.org
>
> Kanagha
>
> On Mon, Oct 3, 2016 at 1:31 PM, Kanagha wrote:
>
>> Hi, ...
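A hedged example of re-checking the plan once the leading-column key or index discussed earlier in the thread is in place; the predicate, literal, and column names are illustrative, and the goal is to see a range scan rather than the full scan reported above:

    -- With metric_type (and the timestamp column) leading the key or covered by
    -- an index, a predicate on those columns should avoid the full table scan.
    EXPLAIN
    SELECT metric_id
    FROM metric_table
    WHERE metric_type = 'cpu'
      AND ts >= TO_TIMESTAMP('2016-10-01 00:00:00');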

Re: Question regarding designing row keys

2016-10-03 Thread Kanagha
+ user@phoenix.apache.org

Kanagha

On Mon, Oct 3, 2016 at 1:31 PM, Kanagha wrote:
> Hi,
>
> We have designed a metric_table, for ex:
>
> metric_type,
> timestamp,
> metricId
>
> in HBase using Apache Phoenix. And there is no indexing available on this
> table yet.
>
> Our access patterns are us...

Re: Phoenix ResultSet.next() takes a long time for first row

2016-10-03 Thread Sasikumar Natarajan
Thanks Ankit for the response. We have tried adding salt buckets and we didn't get much improvement, so we are trying to pre-split the regions. If we are pre-splitting on col1 and we have 100,000 values of it in hand, I have the questions below:
1) How do we make a create table script with 100,000 ...
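A hedged sketch of a pre-split table definition; the table, column names, and split points are placeholders, and the idea is to list a sample of sorted boundary values of col1 rather than all 100,000 distinct values:

    -- Pre-split at a handful of col1 boundary values (e.g. every Nth sorted value),
    -- not at every distinct value; the split points below are placeholders.
    CREATE TABLE my_table (
        col1 VARCHAR NOT NULL,
        col2 VARCHAR NOT NULL,
        val  DOUBLE
        CONSTRAINT pk PRIMARY KEY (col1, col2)
    ) SPLIT ON ('c010', 'c020', 'c030', 'c040');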

Re: Phoenix custom UDF

2016-10-03 Thread akhil jain
Hi Rajesh, Thanks for the reply. But surely we can write UDAFs along the lines of the built-in SUM and AVG functions in Phoenix. We need a good tutorial or documentation for this.

Thanks,
AJ

On Mon, Oct 3, 2016 at 2:05 PM, rajeshb...@apache.org <chrajeshbab...@gmail.com> wrote:

Re: Phoenix custom UDF

2016-10-03 Thread rajeshb...@apache.org
Hi Akhil, There is no support for UDAFs in Phoenix at present.

Thanks,
Rajeshbabu.

On Sun, Oct 2, 2016 at 6:57 PM, akhil jain wrote:
> Thanks James. It worked.
>
> Can you please provide me pointers to write UDAFs in phoenix like we
> have GenericUDAFEvaluator for writing Hive UDAFs.
> I am lo...