Re: Scanning big region parallely

2016-10-21 Thread Sanooj Padmakumar
Thanks Sergey. Let me check this. Regards, Sanooj. On 22 Oct 2016 1:55 a.m., "Sergey Soldatov" wrote: > Hi Sanooj, > > You may take a look at BaseResultIterators.getIterators() and > BaseResultIterators.getParallelScans() > > Thanks, > Sergey > > On Fri, Oct 21, 2016 at 6:02 AM, Sanooj Padmakumar

Re: Creating Covering index on Phoenix

2016-10-21 Thread Sergey Soldatov
Hi Mich, It really depends on the query that you are going to use. If conditions will be applied only on the time column, you may create an index like: create index I on "marketDataHbase" ("timecreated") include ("ticker", "price"); If the conditions will be applied on other columns as well, you may use
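
A minimal sketch of that second case, assuming the filters also touch "ticker" (column names are taken from the thread; the index name is illustrative): keying the index on both columns and carrying "price" as an included column keeps such queries entirely inside the index.

    -- Hypothetical alternative: key on ticker and timecreated, include price,
    -- so queries filtering on ticker/timecreated and selecting price never
    -- have to read the data table.
    CREATE INDEX idx_ticker_time ON "marketDataHbase" ("ticker", "timecreated")
        INCLUDE ("price");

Note that Phoenix maintains such an index automatically only for writes that go through Phoenix (UPSERT); rows put directly into the underlying HBase table bypass index maintenance.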

Re: Scanning big region parallely

2016-10-21 Thread Sergey Soldatov
Hi Sanooj, You may take a look at BaseResultIterators.getIterators() and BaseResultIterators.getParallelScans(). Thanks, Sergey. On Fri, Oct 21, 2016 at 6:02 AM, Sanooj Padmakumar wrote: > Hi all > > If anyone can provide some information as to which part of the phoenix > code we need to check to
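
As a practical aside (not from the thread itself): the parallel scans that getParallelScans() builds are split along the table's statistics guideposts, so the scan granularity can be influenced from SQL by refreshing statistics; the guidepost width itself comes from the phoenix.stats.guidepost.width setting. The table name below is a placeholder.

    -- Recollect statistics so BaseResultIterators.getParallelScans() has
    -- up-to-date guideposts to split a large region into smaller scans.
    UPDATE STATISTICS MY_BIG_TABLE ALL;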

Problem with UDFS

2016-10-21 Thread Yang, Eric
I'm just trying to get the basics of User Defined Functions working. What I have so far: the following was copied from https://www.snip2code.com/Snippet/572287/Phoenix-User-Defined-Function-Test---Add My main concern was that I was screwing up package definitions and jar creation, hence the di
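
For reference, the registration side usually looks like the hedged sketch below; the function name, class, and jar path are placeholders (not from the thread), and phoenix.functions.allowUserDefinedFunctions must be set to true in the client-side hbase-site.xml for CREATE FUNCTION to be accepted.

    -- Hypothetical UDF registration; the class and HDFS jar path are illustrative.
    CREATE FUNCTION ADD_ONE(INTEGER) RETURNS INTEGER
        AS 'com.example.phoenix.AddOneFunction'
        USING JAR 'hdfs:///hbase/lib/my-udfs.jar';

    -- Once registered, the function can be called like a built-in:
    SELECT ADD_ONE(my_col) FROM my_table;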

Re: Scanning big region parallely

2016-10-21 Thread Sanooj Padmakumar
Hi all, can anyone provide some information as to which part of the Phoenix code we need to check to see how parallel execution is performed? Thanks again, Sanooj. On 20 Oct 2016 11:31 a.m., "Sanooj Padmakumar" wrote: > Hi James, > > We are loading data from Phoenix tables into in-memory datab

Creating Covering index on Phoenix

2016-10-21 Thread Mich Talebzadeh
Hi, I have a Phoenix table on HBase as follows: [image: Inline images 1] I want to create a covered index to cover the three columns: ticker, timecreated, price. More importantly, I want the index to be maintained when new rows are added to the HBase table. What is the best way of achieving this?

Avoid Full Scan of RHS table in Left Outer Join

2016-10-21 Thread Vikash Talanki
Hi All, I am running a query that has multiple (4) left outer joins. I have one main table which is around 400K records with 750 columns. There are two more tables, which I'll call code tables. My query has 3 left outer joins of the main table to the same code table1 and 1 left outer join of the main table to
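
One workaround often suggested for this pattern (an assumption here, not something stated in the thread) is to filter the code table inside a derived table, so the right-hand side is reduced before the hash join scans and caches it:

    -- Illustrative only: table, column, and literal names are placeholders.
    SELECT m.ID, c.DESCRIPTION
    FROM MAIN_TABLE m
    LEFT OUTER JOIN (
        SELECT CODE, DESCRIPTION
        FROM CODE_TABLE1
        WHERE CODE_TYPE = 'STATUS'   -- predicate pushed into the RHS
    ) c ON m.STATUS_CODE = c.CODE;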

[jira] Vivek Paranthaman shared "PHOENIX-3395: ResultSet .next() throws commons-io exception" with you

2016-10-21 Thread Vivek Paranthaman (JIRA)
Vivek Paranthaman shared an issue with you > ResultSet .next() throws commons-io exception > Key: PHOENIX-3395 > URL: https://issues.apache.org/jira/browse/P