Re: [ANNOUNCE] Rushabh Shah as Phoenix Committer

2023-08-23 Thread Ankit Singhal
Congratulations Rushabh !! On Wed, Aug 23, 2023 at 10:15 PM Istvan Toth wrote: > Congratulations Rushabh! > > > On Thu, Aug 24, 2023 at 2:11 AM rajeshb...@apache.org < > chrajeshbab...@gmail.com> wrote: > >> Congratulations Rushabh! >> >> Thanks, >> Rajeshbabu. >> >> On Thu, Aug 24, 2023, 5:37 A

[ANNOUNCE] New VP Apache Phoenix

2022-07-18 Thread Ankit Singhal
in this role. Please join me in congratulating Rajeshbabu! Thank you all for the opportunity to serve you as the VP for these last years. Regards, Ankit Singhal

Re: [ANNOUNCE] Tanuj Khurana as Phoenix Committer

2021-12-16 Thread Ankit Singhal
Congratulations and Welcome, Tanuj!! On Thu, Dec 16, 2021 at 11:17 AM Geoffrey Jacoby wrote: > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Tanuj > Khurana has accepted the PMC's invitation to become a committer on Apache > Phoenix. > > We appreciate all of the great contrib

Re: Major problem for us with Phoenix joins with certain aggregations

2021-10-11 Thread Ankit Singhal
refactor your query/schema to use indices automatically. Regards, Ankit Singhal On Mon, Oct 11, 2021 at 2:08 PM Josh Elser wrote: > No worries. Thanks for confirming! > > On 10/10/21 1:43 PM, Simon Mottram wrote: > > Hi > > > > Thanks for the reply, I posted here by mistake

Re: [ANNOUNCE] New Phoenix committer Viraj Jasani

2021-02-09 Thread Ankit Singhal
Congratulations, Viraj! On Tue, Feb 9, 2021 at 2:18 PM Xinyi Yan wrote: > Congratulations and welcome, Viraj! > > On Tue, Feb 9, 2021 at 2:07 PM Geoffrey Jacoby wrote: > >> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Viraj >> Jasani has accepted the PMC's invitation to bec

Re: [Discuss] Phoenix Tech Talks

2021-02-08 Thread Ankit Singhal
This is excellent, thanks Kadir for initiating it, I am keen to listen to your presentation on "strongly consistent global indexes". Logistics look great; however, they can be adjusted per the feedback after the first meetup if required. Some of the current topics come to my mind (in case someone

Re: [Discuss] Dropping support for older HBase version

2021-01-31 Thread Ankit Singhal
the user who has been waiting long for the release. Regards, Ankit Singhal On Fri, Jan 29, 2021 at 10:59 PM Istvan Toth wrote: > I'm not sure I understand, let me rephrase > > So we drop support right after we release a Phoenix minor version, > if the Phoenix release date is

[ANNOUNCE] New Phoenix committer Richárd Antal

2021-01-04 Thread Ankit Singhal
On behalf of the Apache Phoenix PMC, I'm pleased to announce that Richárd Antal has accepted the PMC's invitation to become a committer on Apache Phoenix. We appreciate all of the great contributions Richárd has made to the community thus far and we look forward to his continued involvement. Cong

Re: [DISCUSS] EOL support for HBase 1.3

2020-08-24 Thread Ankit Singhal
set of commits which brings back this shim layer, instead can use the tag and go with it. Regards, Ankit Singhal On Mon, Aug 24, 2020 at 10:04 AM Geoffrey Jacoby wrote: > The HBase community has just unanimously EOLed HBase 1.3. > > As I recall, 1.3 has some significant differences with 1.4

Re: How to bulk load a csv file into a phoenix table created in lowercase

2020-05-14 Thread Ankit Singhal
Can you please try using CSVBulkLoad (as a workaround; we have recently fixed it) and raise a bug for psql not handling table-name case sensitivity. https://issues.apache.org/jira/browse/PHOENIX-3541 https://issues.apache.org/jira/browse/PHOENIX-5319 Regards, Ankit Singhal On Thu, May 14
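
The case-sensitivity issue discussed here stems from how Phoenix treats identifiers: unquoted names are upper-cased, while double-quoted names are kept verbatim. A minimal sketch (the table and column names are made up for illustration):

```sql
-- Quoted identifiers preserve case; unquoted ones are upper-cased.
CREATE TABLE "my_table" ("id" VARCHAR PRIMARY KEY, "val" VARCHAR);

-- The name must be quoted on every reference; an unquoted my_table
-- would resolve to MY_TABLE instead.
SELECT * FROM "my_table";
```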

Re: Performance degradation on query analysis

2019-09-19 Thread Ankit Singhal
Please schedule compaction on SYSTEM.STATS table to clear the old entries. On Thu, Sep 19, 2019 at 1:48 PM Stepan Migunov < stepan.migu...@firstlinesoftware.com> wrote: > Thanks, Josh. The problem was really related to reading the SYSTEM.STATS > table. > There were only 8,000 rows in the table, b

Re: Multi-Tenancy and shared records

2019-09-04 Thread Ankit Singhal
em with that. Regards, Ankit Singhal On Wed, Sep 4, 2019 at 6:24 PM Simon Mottram wrote: > Hi Ankit > > Thats very useful, many thanks. > > Before I dive into using Phoenix (which has given me a torrid time over > the last few days!), is using Phoenix the best option given that I'

Re: Multi-Tenancy and shared records

2019-09-03 Thread Ankit Singhal
>> If not possible I guess we have to look at doing something at the HBase level. As Josh said, it's not yet supported in Phoenix, though you may try using cell-level security of HBase with some Phoenix internal API and let us know if it works for you. Sharing a sample code if you want to try. /** *

Re: Buckets VS regions

2019-08-20 Thread Ankit Singhal
sensitive information about your environment and data like hbase:meta has ip-address/hostname and system.stats has data row keys, so upload only if you think it's a test data and hostnames have no significance). Thanks, Ankit Singhal On Mon, Aug 19, 2019 at 11:17 PM venkata subbarayudu

Re: Is there any way to using appropriate index automatically?

2019-08-20 Thread Ankit Singhal
code to fix the issue, so the patch would really be appreciated. And, also can you try running "select a,b,c,d,e,f,g,h,i,j,k,m,n from TEST_PHOENIX.APP where c=2 and h = 1 limit 5", and see if index is getting used. Regards, Ankit Singhal On Tue, Aug 20, 2019 at 1:49 AM you Zhuang wrote

Re: split count for mapreduce jobs with PhoenixInputFormat

2019-01-30 Thread Ankit Singhal
As Thomas said, no. of splits will be equal to the number of guideposts available for the table or the ones required to cover the filter. if you are seeing one split per region then either stats are disabled or guidePostwidth is set higher than the size of the region , so try reducing the guidepost

Re: Hbase vs Phoenix column names

2019-01-08 Thread Ankit Singhal
String phoenixColumnName = pTable.getColumnForColumnQualifier("0".getBytes(), hbaseColumnQualifierBytes).getName(); Regards, Ankit Singhal On Tue, Jan 8, 2019 at 10:03 AM Josh Elser wrote: > (from the peanut-gallery) > > That sounds to me like a useful utility to share with others if you're > going to w

Re: JDBC Connection to Apache Phoenix failing

2018-12-04 Thread Ankit Singhal
Is connecting and running some commands through the HBase shell working? As per the stack trace, it seems your HBase is not up. Look at the master and regionserver logs for errors. On Tue, Dec 4, 2018 at 4:17 AM Raghavendra Channarayappa < raghavendra.channaraya...@myntra.com> wrote: > Can someone

Re: ON DUPLICATE KEY with Global Index

2018-10-09 Thread Ankit Singhal
We do not allow atomic upsert and throw the corresponding exception in the cases documented under the limitations section of http://phoenix.apache.org/atomic_upsert.html. The documentation probably needs a little touch to convey this clearly. On Tue, Oct 9, 2018 at 10:05 AM Josh Elser wrote: > Ca
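
For reference, the atomic upsert syntax discussed in this thread looks like the following; per the limitations page cited above, it is rejected on tables with a global secondary index (the table and column names are illustrative):

```sql
-- Atomic increment: if the row already exists the UPDATE expression
-- is applied; otherwise the row is created with cnt = 0.
UPSERT INTO counters (k, cnt) VALUES ('page1', 0)
ON DUPLICATE KEY UPDATE cnt = cnt + 1;
```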

Re: Table dead lock: ERROR 1120 (XCL20): Writes to table blocked until index can be updated

2018-09-26 Thread Ankit Singhal
You might be hitting PHOENIX-4785 <https://jira.apache.org/jira/browse/PHOENIX-4785>, you can apply the patch on top of 4.14 and see if it fixes your problem. Regards, Ankit Singhal On Wed, Sep 26, 2018 at 2:33 PM Batyrshin Alexander <0x62...@gmail.com> wrote: > Any advices?

Re: MutationState size is bigger than maximum allowed number of bytes

2018-09-20 Thread Ankit Singhal
't help. Regards, Ankit Singhal On Thu, Sep 20, 2018 at 10:24 AM Batyrshin Alexander <0x62...@gmail.com> wrote: > Nope, it was client side config. > Thank you for response. > > On 20 Sep 2018, at 05:36, Jaanai Zhang wrote: > > Are you configuring these on the

Re: Slow query on Secondary Index

2018-09-18 Thread Ankit Singhal
To better understand the problem, we may require your DDL for both the indexes and the data table, and also the query using your secondary index. And please try some of the tuning documented on https://phoenix.apache.org/secondary_indexing.html and see if it helps. On Tue, Sep 18, 2018 at 11:25 AM Josh Else

Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-13 Thread Ankit Singhal
Configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks" Regards, Ankit Singhal On Sun, Aug 12, 2018 at 6:46 PM, 倪项菲 wrote:

Re: java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.

2018-05-24 Thread Ankit Singhal
Probably you are affected by https://issues.apache.org/jira/browse/HBASE-20172. Are you on JDK 1.7 or lower? Can you upgrade to JDK 1.8 and check? On Sun, May 6, 2018 at 9:29 AM, anil gupta wrote: > As per following line: > "Caused by: java.lang.RuntimeException: Could not create interface >

Re: [DISCUSS] Include python-phoenixdb into Phoenix

2018-03-08 Thread Ankit Singhal
't > >> know what, if any, infrastructure exists to distribute Python modules. > >> https://packaging.python.org/glossary/#term-built-distribution > >> > >> I feel like a sub-directory in the phoenix repository would be the > >> easiest to make this w

[DISCUSS] Include python-phoenixdb into Phoenix

2018-03-01 Thread Ankit Singhal
python-phoenixdb [3] https://github.com/Pirionfr/pyPhoenix <https://github.com/Pirionfr/pyPhoenix> [4] https://issues.apache.org/jira/browse/PHOENIX-4636 <https://issues.apache.org/jira/browse/PHOENIX-4636> Regards, Ankit Singhal On Tue, Apr 11, 2017 at 1:30 AM, James Taylor wrote: >

Re: Phoenix system tables in multitenant setup

2017-10-23 Thread Ankit Singhal
bq. But for tables inside, I am assuming the user needs access to the Phoenix SYSTEM tables (and CREATE rights for the namespace in question on the HBase level)? Is that the case? And if so, what are they able to see, as in, only their information, or all information from other tenants as well? If

Re: PHOENIX - Could not find or load main class $PHOENIX_OPTS

2017-10-05 Thread Ankit Singhal
You can open the script and remove $PHOENIX_OPTS, as it may be illegal to access environment variables like this on an NT system. $PHOENIX_OPTS is generally used to pass JVM parameters (like -Xmn, -Xmx) by setting it through the shell. On Fri, Oct 6, 2017 at 9:49 AM, Mallieswari Dineshbabu < dmalliesw.

Re: Row count

2017-09-13 Thread Ankit Singhal
Best is to do "SELECT COUNT(*) FROM MYTABLE" with an index; as the index table will have less data, it can be read faster. If you have time-series data, or your data is always incremental with some ID, then you can do an incremental count with row_timestamp filters or an ID filter. bq. however the result could
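
The incremental-count idea can be sketched as follows, assuming a hypothetical table whose leading ROW_TIMESTAMP column lets the count be restricted to rows newer than a checkpoint (schema and names are invented for illustration):

```sql
-- Hypothetical schema: created_at maps to the HBase cell timestamp.
CREATE TABLE events (
    created_at DATE NOT NULL,
    id         VARCHAR NOT NULL,
    payload    VARCHAR
    CONSTRAINT pk PRIMARY KEY (created_at ROW_TIMESTAMP, id)
);

-- Count only rows newer than the last checkpoint, then add the
-- result to a previously stored running total.
SELECT COUNT(*) FROM events
WHERE created_at > TO_DATE('2017-09-01 00:00:00');
```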

Re: Phoenix CSV Bulk Load Tool Date format for TIMESTAMP

2017-09-07 Thread Ankit Singhal
Yes, you can write your own custom mapper to do conversions (look at CsvToKeyValueMapper, CsvUpsertExecutor#createConversionFunction) or consider using chaining of jobs(where the first Job with multiple inputs standardizing the date format followed by CSVBulkLoadTool) or writing a custom TextInputF

Re: Phoenix CSV Bulk Load fails to load a large file

2017-09-07 Thread Ankit Singhal
bq. This runs successfully if I split this into 2 files, but I'd like to avoid doing that. Do you run a different job for each file? If your HBase cluster is not co-located with your YARN cluster then it may be possible that copying of the large HFile is timing out (this may happen due to the fewer reg

Re: Can I reset a row's TTL?

2017-08-07 Thread Ankit Singhal
You can do this by UPSERT SELECT. On Mon, Aug 7, 2017 at 4:13 PM, Ankit Singhal wrote: > You can read KVs for that user and write them again with current time. > > On Sun, Aug 6, 2017 at 8:44 PM, Cheyenne Forbes < > cheyenne.osanu.for...@gmail.com> wrote: > >> I

Re: Can I reset a row's TTL?

2017-08-07 Thread Ankit Singhal
You can read KVs for that user and write them again with current time. On Sun, Aug 6, 2017 at 8:44 PM, Cheyenne Forbes < cheyenne.osanu.for...@gmail.com> wrote: > I'm using phoenix to store user sessions. The table's TTL is set to 3 days > and I'd like to have the 3 days start over if the user co
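
The read-and-rewrite approach can be expressed as a single statement: an UPSERT SELECT over the same table rewrites the cells with the current server timestamp, which restarts the TTL clock. The table and column names below are assumed for illustration:

```sql
-- Rewrite one user's session row; the new cell timestamps reset
-- the 3-day TTL countdown for that row.
UPSERT INTO user_sessions
SELECT * FROM user_sessions WHERE session_id = 'abc-123';
```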

Re: Apache Spark Integration

2017-07-17 Thread Ankit Singhal
You can take a look at our IT tests for phoenix-spark module. https://github.com/apache/phoenix/blob/master/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala On Mon, Jul 17, 2017 at 9:20 PM, Luqman Ghani wrote: > > -- Forwarded message -- > From: Luqman Gha

Re: RegionNotServingException when using Phoenix

2017-07-14 Thread Ankit Singhal
Yes, value 1 for "hbase.client.retries.number" is the root cause of the above exception. A general guideline/formula could be (not official): (time taken for region movement in your cluster + zookeeper timeout) / hbase.client.pause. Or, with intuition, you can set it to at least 10. On Fri, Jul 14, 201

Re: Can set default value for column in phoenix ?

2017-07-14 Thread Ankit Singhal
From Phoenix 4.9 onwards, you can specify an expression as a column's default value (I'm not sure if there is any limitation called out). For syntax: https://phoenix.apache.org/language/index.html#column_def For examples: https://github.com/apache/phoenix/blob/2d40241b5c5c0287dca3dd2e2476db328bb7e7de/phoenix-
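
A small sketch of the 4.9+ DEFAULT syntax referenced here, using constant defaults (all names invented; see the linked grammar for the authoritative form and any restrictions on expressions):

```sql
CREATE TABLE tasks (
    id      BIGINT NOT NULL PRIMARY KEY,
    status  VARCHAR DEFAULT 'NEW',   -- constant default
    retries INTEGER DEFAULT 0
);

-- Columns omitted from the UPSERT pick up their defaults.
UPSERT INTO tasks (id) VALUES (1);
```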

Re: Safest migration path for Apache Phoenix 4.5 to 4.9

2017-06-26 Thread Ankit Singhal
If you don't have secondary indexes, views or immutable tables, then an upgrade from 4.5 to 4.9 will just add some new columns in SYSTEM.CATALOG and re-create the STATS table. But we still have not tested an upgrade from 4.5 to 4.9; it is always advisable to stop after every two versions (especially in

Re: phoenix query modtime

2017-06-24 Thread Ankit Singhal
Yes, and also to avoid returning an incomplete row for the same primary key because of different timestamp for the column's cell. On Sat, Jun 24, 2017 at 4:20 AM, Randy Hu wrote: > First HBase does not have a concept of "row timestamp". Timestamp is part > of each cell. The closest to row timest

Re: Phoenix UDF jar cache?

2017-06-24 Thread Ankit Singhal
Yes, this is a limitation[1] of the current implementation of UDFs and the class loader used. It is recommended either to reboot the cluster if the implementation changes or to use a new jar name. [1] https://phoenix.apache.org/udf.html On Wed, May 3, 2017 at 4:41 AM, Randy Hu wrote: > Developed and test U

Re: phoenix query modtime

2017-06-23 Thread Ankit Singhal
> Nan > > On Fri, Jun 23, 2017 at 1:23 AM, Ankit Singhal > wrote: > >> If you have composite columns in your row key of HBase table and they are >> not formed through Phoenix then you can't access an individual column of >> primary key by Phoenix SQL too. >

Re: How to create new table as existing table with same structure and data ??

2017-06-22 Thread Ankit Singhal
You can map an existing table to a view or table in Phoenix, but we expect the name of the table to match the Phoenix table name. (However, you can rename your existing HBase table with snapshot and restore.) The DDLs you are using to map the table are not correct or are not supported. You can refe

Re: Getting too many open files during table scan

2017-06-22 Thread Ankit Singhal
bq. A leading date column is in our schema model:- Don't you have any other column which is obligatory in read queries but not monotonically increasing during ingestion? A pre-split can help you avoid hot-spotting. For a parallelism/performance comparison, have you tried running a query on a non-salted

Re: phoenix query modtime

2017-06-22 Thread Ankit Singhal
If you have composite columns in your row key of the HBase table and they are not formed through Phoenix, then you can't access an individual column of the primary key through Phoenix SQL either. Try composing the whole PK and using it in a filter, or check if you can use regex functions[1] or the LIKE operator. [1
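
A sketch of the two suggestions, assuming the HBase rowkey was mapped into Phoenix as a single VARCHAR primary-key column and the components are '|'-separated (all names and the separator are hypothetical):

```sql
-- Filter on the composed key with a leading-prefix LIKE ...
SELECT * FROM raw_events WHERE rowkey LIKE 'tenant42|%';

-- ... or pull one component out with a regex function.
SELECT REGEXP_SUBSTR(rowkey, '^[^|]+') AS tenant_id FROM raw_events;
```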

Re: Renaming table schema in hbase

2017-06-08 Thread Ankit Singhal
already be there and you will just save time by doing so. Regards, Ankit Singhal On Thu, Jun 8, 2017 at 1:34 PM, Michael Young wrote: > I have a doubt about step 2 from Ankit Singhal's response in > http://apache-phoenix-user-list.1124778.n5.nabble.com/Phoeni > x-4-4-Rename-table-Suppo

Re: checking-in on hbase 1.3.1 support

2017-05-25 Thread Ankit Singhal
Next release of Phoenix(v4.11.0) will be supporting HBase 1.3.1(see PHOENIX-3603) and there is no timeline yet decided for the release. But you may expect some updates in next 1-2 months. On Thu, May 25, 2017 at 3:32 AM, Anirudha Jadhav wrote: > hi, > > just checking in, any idea what kind of a

Re: Why can Cache of region boundaries are out of date be happening in 4.5.x?

2017-05-20 Thread Ankit Singhal
It could be because of stale stats due to the merging of regions or something similar; you can try deleting the stats from SYSTEM.STATS. http://apache-phoenix-user-list.1124778.n5.nabble.com/Cache-of-region-boundaries-are-out-of-date-during-index-creation-td1213.html On Sat, May 20, 2017 at 8:29 PM, Pedro
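
The stats cleanup suggested here is a plain DELETE against the stats table; Phoenix repopulates the guideposts on the next major compaction or an explicit statistics run. The physical table name below is a placeholder:

```sql
-- Drop the stale guidepost rows for one table.
DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME = 'MY_TABLE';

-- Optionally regenerate the guideposts right away.
UPDATE STATISTICS MY_TABLE;
```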

Re: Upsert-Select NullPointerException

2017-05-04 Thread Ankit Singhal
I think you have a salted table and you are hitting a below bug. https://issues.apache.org/jira/browse/PHOENIX-3800 Do you mind trying out the patch, we will have this fixed in 4.11 at least(probably 4.10.1 too). On Fri, May 5, 2017 at 11:06 AM, Bernard Quizon < bernard.qui...@stellarloyalty.com>

Re: Fwd: Apache Phoenix

2017-05-03 Thread Ankit Singhal
g SQL for some other purpose. Regards, Ankit Singhal On Tue, May 2, 2017 at 7:55 PM, Josh Elser wrote: > Planning for unexpected outages with HBase is a very good idea. At a > minimum, there will likely be points in time where you want to change HBase > configuration, apply some patched j

Re: Passing arguments (schema name) to .sql file while executing from command line

2017-04-20 Thread Ankit Singhal
If you are using Phoenix 4.8 onwards then you can try giving the zookeeper string appended with a schema like below. psql.py <zookeeper>;schema=<schema_name> /create_table.sql psql.py zookeeper1;schema=TEST_SCHEMA /create_table.sql On Sat, Apr 15, 2017 at 2:25 AM, sid red wrote: > Hi, > > I am trying to find a solution, w

Re: Limit of phoenix connections on client side

2017-04-20 Thread Ankit Singhal
bq. 1. How many concurrent phoenix connections the application can open? I don't think there is any limit on this. bq. 2. Is there any limitations regarding the number of connections I should consider? I think as many as your JVM permits. bq. 3. Is the client side config parameter phoenix.que

Re: load kafka to phoenix

2017-04-20 Thread Ankit Singhal
It seems we don't pack the dependencies in phoenix-kafka jar yet. Try including flume-ng-configuration-1.3.0.jar in your classpath to resolve the above issue. On Thu, Apr 20, 2017 at 9:27 AM, lk_phoenix wrote: > hi,all: > I try to read data from kafka_2.11-0.10.2.0 , I get error: > > Exception

Re: Bad performance of the first resultset.next()

2017-04-20 Thread Ankit Singhal
+1 for Jonathan's comment. -- Take multiple jstacks of the client during the query time and check which thread is working for long. If you find merge sort is the bottleneck, then removing salting and using a SERIAL scan will help for the query given above. Ensure that your queries are not causing hotspott

Re: View timestamp on existing table (potential defect)

2017-04-20 Thread Ankit Singhal
This is because we cap the scan with the current timestamp so anything beyond the current time will not be seen. This is needed mainly to avoid UPSERT SELECT to see its own new writes. https://issues.apache.org/jira/browse/PHOENIX-3176 On Thu, Apr 20, 2017 at 11:52 PM, Randy wrote: > I was try

Re: phoenix.schema.isNamespaceMappingEnabled

2017-04-20 Thread Ankit Singhal
Sudhir, Relevant JIRA for the same. https://issues.apache.org/jira/browse/PHOENIX-3288 Let me see if I can crack this for the coming release. On Fri, Apr 21, 2017 at 8:42 AM, Josh Elser wrote: > Sudhir, > > Didn't meant to imply that asking the question was a waste of time. > Instead, I want

Re: Accessing existing schema, creating schema

2017-03-09 Thread Ankit Singhal
Can you please share the exception stack trace? On Fri, Mar 10, 2017 at 12:25 PM, mferlay wrote: > Hi everybody , I up this message because I think i'm in the same situation. > I'm using Phoenix 4.9 - Hbase 1.2. > I have found some stuff like setting > "phoenix.schema.isNamespaceMappingEnabled" at t

Re: Missing support for HBase 1.0 in Phoenix 4.9 ?

2017-02-07 Thread Ankit Singhal
The relevant thread where the decision was made to drop HBase 1.0 support from v4.9 onwards. http://search-hadoop.com/m/Phoenix/9UY0h21AGFh2sgGFK?subj=Re+DISCUSS+Drop+HBase+1+0+support+for+4+9 On Tue, Feb 7, 2017 at 8:13 PM, Mark Heppner wrote: > Pedro, > I can't answer your question, but if y

Re: ROW_TIMESTAMP weird behaviour

2017-02-07 Thread Ankit Singhal
+ > | 2017-01-01 15:02:21.050 | > | 2017-01-02 15:02:21.050 | > | 2017-01-13 15:02:21.050 | > | 2017-02-06 15:02:21.050 | > | 2017-02-07 11:02:21.050 | > | 2017-02-07 11:03:21.050 | > | 2017-02-07 12:02:21.050 | > | 2017-02-07 12:03:21.050 | > +---

Re: ROW_TIMESTAMP weird behaviour

2017-02-07 Thread Ankit Singhal
I think you are also hitting https://issues.apache.org/jira/browse/PHOENIX-3176. On Tue, Feb 7, 2017 at 2:18 PM, Dhaval Modi wrote: > Hi Pedro, > > Upserted key are different. One key is for July month & other for January > month. > 1. '2017-*07*-02T15:02:21.050' > 2. '2017-*01*-02T15:02:21.050'

Re: Phoenix tracing did not start

2017-01-19 Thread Ankit Singhal
Hi Pradheep, It seems tracing is not distributed as a part of HDP 2.4.3.0, please work with your vendor for an appropriate solution. Regards, Ankit Singhal On Thu, Jan 19, 2017 at 4:48 AM, Pradheep Shanmugam < pradheep.shanmu...@infor.com> wrote: > Hi, > > I am using hdp 2.

Re: Phoenix Spark plug in cannot find table with a Namespace prefix

2016-12-29 Thread Ankit Singhal
at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:433) at Regards, Ankit Singhal. On Mon, Nov 7, 2016 at 11:38 PM, Long, Xindian wrote: > Hi, Josh: > > > > Thanks for your reply. I just added a Jira issue, but I am not familiar >

Re: slow response on large # of columns

2016-12-28 Thread Ankit Singhal
Have you checked your query performance without sqlline? As Jonathan also mentioned, sqlline has its own performance issues in terms of reading metadata (so probably the time is actually spent by sqlline reading metadata for 3600 columns and printing the header). On Wed, Dec 28, 2016 at 12:04 A

Re: Inconsistent null behavior

2016-12-06 Thread Ankit Singhal
@James, is this similar to https://issues.apache.org/jira/browse/PHOENIX-3112? @Mac, can you try if increasing hbase.client.scanner.max.result.size helps? On Tue, Dec 6, 2016 at 10:53 PM, James Taylor wrote: > Looks like a bug to me. If you can reproduce the issue outside of Python > phoenixdb,

Re: huge query result miss some fields

2016-11-24 Thread Ankit Singhal
Do you have big rows? If yes, it may be similar to https://issues.apache.org/jira/browse/PHOENIX-3112, and increasing hbase.client.scanner.max.result.size can help. On Thu, Nov 24, 2016 at 6:00 PM, 金砖 wrote: > thanks Abel. > > > I tried update statistics, it did not work. > > > But after so

Re: PhoenixIOException: Table 'unionSchemaName.unionTableName' was not found

2016-10-23 Thread Ankit Singhal
You need to increase the Phoenix timeout as well (phoenix.query.timeoutMs). https://phoenix.apache.org/tuning.html On Sun, Oct 23, 2016 at 3:47 PM, Parveen Jain wrote: > hi All, > > I just realized that phoneix doesn't provide "group by" and "distinct" > methods if we use phoenix map reduce. It seem

Re: Index in Phoenix view on Hbase is not updated

2016-10-23 Thread Ankit Singhal
bq. Will bulk load from Phoenix update the underlying Hbase table? Yes. Instead of using importTSV, try to use the CSV bulk load only. bq. Do I need to replace Phoenix view on Hbase as with CREATE TABLE? You can still keep the VIEW. Regards, Ankit Singhal On Sun, Oct 23, 2016 at 6:37 PM, Mich Talebzadeh

Re: Setting default timezone for Phoenix

2016-10-23 Thread Ankit Singhal
ctions.html [2]https://phoenix.apache.org/language/functions.html#to_date [3] https://phoenix.apache.org/tuning.html Regards, Ankit Singhal On Sun, Oct 23, 2016 at 6:15 PM, Mich Talebzadeh wrote: > Hi, > > My queries in Phoenix pickup GMT timezone as default. > > I need them to defaul

Re: Index in Phoenix view on Hbase is not updated

2016-10-23 Thread Ankit Singhal
Rebuild is currently a costly operation as it will rebuild the complete index again, and should be used when you think your index is corrupted and you are not aware of the timestamp when it got out of sync. Why can't you use the bulk loading tool[1] provided by Phoenix instead of using importTSV, as

Re: Analytic functions in Phoenix

2016-10-23 Thread Ankit Singhal
I think a query with an OVER clause can be re-written by using SELF JOINs in many cases. Regards, Ankit Singhal On Sun, Oct 23, 2016 at 3:11 PM, Mich Talebzadeh wrote: > Hi, > > I was wondering whether analytic functions work in Phoenix. For example > something equivalent to below in H

Re: Ordering of numbers generated by a sequence

2016-10-17 Thread Ankit Singhal
JFYI, phoenix.query.rowKeyOrderSaltedTable is deprecated and is not honored from v4.4, so please use phoenix.query.force.rowkeyorder instead. I have updated the docs(http://localhost:8000/tuning.html) now accordingly. On Mon, Oct 17, 2016 at 3:14 AM, Josh Elser wrote: > Not 100% sure, but yes, I

Re: Creating view on a phoenix table throws Mismatched input error

2016-10-07 Thread Ankit Singhal
Currently, Phoenix doesn't support projecting selective columns of a table, or expressions, in a view. You need to project all the columns with (SELECT *). Please see the section "Limitations" on this page or PHOENIX-1507. https://phoenix.apache.org/views.html On Thu, Oct 6, 2016 at 10:05 PM, Mich Ta
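
To illustrate the limitation (the table and view names are invented):

```sql
-- Supported: a view must project all columns via SELECT *.
CREATE VIEW active_users AS
SELECT * FROM users WHERE status = 'ACTIVE';

-- Not supported at the time of this thread (see PHOENIX-1507):
-- CREATE VIEW user_names AS SELECT id, name FROM users;
```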

Re: Phoenix ResultSet.next() takes a long time for first row

2016-09-28 Thread Ankit Singhal
'2016-08-05 23:59:59.000'))) >>> DDL: >>> >>> CREATE TABLE IF NOT EXISTS SPL_FINAL >>> (col1 VARCHAR NOT NULL, >>> col2 VARCHAR NOT NULL, >>> col3 INTEGER NOT NULL, >>> col4 INTEGER NOT NULL, >>> col5 VARCHAR NOT NULL, >>

Re: Phoenix ResultSet.next() takes a long time for first row

2016-09-22 Thread Ankit Singhal
Share some more details about the query, DDL and explain plan. In Phoenix, there are cases where we do some server-side processing when rs.next() is called the first time, but subsequent next() calls should be faster. On Thu, Sep 22, 2016 at 9:52 AM, Sasikumar Natarajan wrote: > Hi, > I'm using

Re: can I prevent rounding of a/b when a and b are integers

2016-09-21 Thread Ankit Singhal
Adding some more workarounds if you are working on a column: select cast(col_int1 as decimal)/col_int2; select col_int1*1.0/3; On Wed, Sep 21, 2016 at 8:33 PM, James Taylor wrote: > Hi Noam, > Please file a JIRA. As a workaround, you can do SELECT 1.0/3. > Thanks, > James > > On Wed, Sep 21, 2
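
The workarounds from this thread side by side (the column and table names are assumed):

```sql
SELECT 1/3;                                          -- integer division truncates to 0
SELECT 1.0/3;                                        -- decimal arithmetic
SELECT CAST(col_int1 AS DECIMAL) / col_int2 FROM t;  -- cast one operand
SELECT col_int1 * 1.0 / col_int2 FROM t;             -- or scale by 1.0
```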

Re: CONVERT_TZ for TIMESTAMP column

2016-09-02 Thread Ankit Singhal
test ("iso_8601" TIMESTAMP NOT NULL PRIMARY KEY); upsert into test values(TO_DATE('2016-04-01 22:45:00')); select * from test; +--------------------------+ | iso_8601 | +--------------------------+ | 2016-04-01 22:45:00.000 | +--------------------------+ Regards,

Re: error after upgrade to 4.8-hbase-1.1

2016-09-01 Thread Ankit Singhal
and. Regards, Ankit Singhal On Fri, Aug 26, 2016 at 7:52 PM, jinzh...@wacai.com wrote: > after upgraded , a lot of WARN logs in hbase-regionserver.log: > > > 2016-08-26 22:12:39,682 WARN [pool-287-thread-1] coprocessor. > MetaDataRegionObserver: ScheduledBui

Re: Cannot select data from a system table

2016-08-31 Thread Ankit Singhal
this page: >> http://phoenix.apache.org/language/index.html -- it feels like where I >> should find a list like that, but I don't see it explicitly called out. >> >> -Aaron >> >> On Aug 21, 2016, at 09:04, Ted Yu wrote: >> >> Ankit: >> Is

Re: RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time

2016-08-31 Thread Ankit Singhal
can you check whether your hbase is stable or not (you can use hbck tool to see any inconsistencies). On Sat, Aug 27, 2016 at 10:41 PM, Sanooj Padmakumar wrote: > Hi All, > > I am getting the same exception , this time when running a Phoenix MR ( > https://phoenix.apache.org/phoenix_mr.html) ..

Re: Help w/ table that suddenly keeps timing out

2016-08-31 Thread Ankit Singhal
Yes, Ted is right , "Error 1102 (XCL02): Cannot get all table regions" happens when Phoenix is not able to get locations of all regions. Assigning that offline region should help. On Mon, Aug 29, 2016 at 10:22 PM, Ted Yu wrote: > I searched for "Cannot get all table regions" in hbase repo - no h

Re: java.lang.IllegalStateException: Number of bytes to resize to must be greater than zero, but instead is -1984010164

2016-08-31 Thread Ankit Singhal
can you confirm what values are set for phoenix.groupby.estimatedDistinctValues(Integer) and phoenix.groupby.maxCacheSize(long)? On Wed, Aug 31, 2016 at 12:24 PM, Dong-iL, Kim wrote: > Hi. > > when I’m using simple groupby query, exception occured as below. > What shall I do? > > Thanks. > > Err

Re: [ANNOUNCE] Apache Phoenix 4.8.0 released

2016-08-21 Thread Ankit Singhal
available @ > http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/ ? > > > Many thanks, > > Tom > > > ------ > *From:* Ankit Singhal > *Sent:* 12 August 2016 18:25 > *To:* d...@phoenix.apache.org; user; annou...@apache.org; >

Re: Cannot select data from a system table

2016-08-21 Thread Ankit Singhal
Aaron, you can escape the check for a reserved keyword with double quotes "": SELECT * FROM SYSTEM."FUNCTION" Regards, Ankit Singhal On Fri, Aug 19, 2016 at 10:47 PM, Aaron Molitor wrote: > Looks like the SYSTEM.FUNCTION table is names with a reserved word. Is

Re: How to troubleshoot 'Could not find hash cache for joinId' which is failing always for some users and never for others

2016-08-15 Thread Ankit Singhal
the searched data is different. Yes, it could be possible because some users are hitting certain key range only depending upon the first column(prefix) of the row key. Regards, Ankit Singhal On Mon, Aug 15, 2016 at 6:29 PM, Chabot, Jerry wrote: > I’ve added the hint to the SELECT. Does an

[ANNOUNCE] Apache Phoenix 4.8.0 released

2016-08-12 Thread Ankit Singhal
Apache Phoenix enables OLTP and operational analytics for Hadoop through SQL support and integration with other projects in the ecosystem such as Spark, HBase, Pig, Flume, MapReduce and Hive. We're pleased to announce our 4.8.0 release which includes: - Local Index improvements[1] - Integration wi

Re: Problems with Phoenix bulk loader when using row_timestamp feature

2016-08-11 Thread Ankit Singhal
Samarth, filed PHOENIX-3176 for the same. On Wed, Aug 10, 2016 at 11:42 PM, Ryan Templeton wrote: > 0: jdbc:phoenix:localhost:2181> explain select count(*) from > historian.data; > > +--+ > > | PLAN | > > +--

Re: ERROR 201 (22000) illegal data error, expected length at least 4 but had ...

2016-08-09 Thread Ankit Singhal
#How_I_map_Phoenix_table_to_an_existing_HBase_table If you have a composite key, it is always better to insert data through Phoenix only. Regards, Ankit Singhal On Fri, Aug 5, 2016 at 8:00 PM, Dong-iL, Kim wrote: > oh. phoenix version is 4.7.0 and on EMR. > Thx. > > > On Aug 5, 2016, at 11:27 PM, Dong-iL, Kim wrote: > >

Re: Java Query timeout

2016-08-09 Thread Ankit Singhal
within a timeout period. You need to increase the scanner timeout period along with the above properties you mentioned: hbase.client.scanner.timeout.period 6 Regards, Ankit Singhal On Mon, Aug 8, 2016 at 6:55 PM, wrote: > Thanks Brian. I have added HBASE_CONF_DIR and it’s still ti

Re: Errors while launching sqlline

2016-07-13 Thread Ankit Singhal
Hi Vasanth, The RC for 4.8 (with support for HBase 1.2) is just out today; you can try the latest build. Regards, Ankit Singhal On Thu, Jul 14, 2016 at 10:06 AM, Vasanth Bhat wrote: > Thanks James. > > When are the early builds going to be available for Phoenix > 4.8.0

Re: phoenix explain plan not showing any difference after adding a local index on the table column that is used in query filter

2016-06-29 Thread Ankit Singhal
bc'; For covered indexes , you can read https://phoenix.apache.org/secondary_indexing.html Regards, Ankit Singhal On Tue, Jun 28, 2016 at 4:25 AM, Vamsi Krishna wrote: > Team, > > I'm using HDP 2.3.2 (HBase : 1.1.2, Phoenix : 4.4.0). > *Question: *phoenix explain plan no
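A hedged sketch of the kind of DDL and check being discussed; `MY_TABLE`, `MY_IDX`, `COL1`, and `COL2` are placeholder names:

```sql
-- Create a local index on the filtered column, then verify with EXPLAIN
-- that the plan switches to a range scan over the shared local-index
-- data rather than a full scan of the data table.
CREATE LOCAL INDEX MY_IDX ON MY_TABLE (COL1);
EXPLAIN SELECT COL1, COL2 FROM MY_TABLE WHERE COL1 = 'abc';
```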

Re: For multiple local indexes on Phoenix table only one local index table is being created in HBase

2016-06-29 Thread Ankit Singhal
Hi Vamsi, Phoenix uses a single local index table for all the local indexes created on a particular data table. Rows are differentiated by a local index sequence id and filtered during the query so that only the requested index is read. Regards, Ankit Singhal On Tue, Jun 28, 2016 at 4:18 AM, Vamsi

Re: PhoenixFunction

2016-06-29 Thread Ankit Singhal
ssion(FloorYearExpression.class), CeilWeekExpression(CeilWeekExpression.class), CeilMonthExpression(CeilMonthExpression.class), CeilYearExpression(CeilYearExpression.class); Regards, Ankit Singhal On Wed, Jun 29, 2016 at 9:08 AM, Yang Zhang wrote: > when I use the functions described on your

Re: Bulk loading and index

2016-06-25 Thread Ankit Singhal
(v) ASYNC But if you are only using CSVBulkLoadTool for bulk load, then it will automatically prepare and bulk load index data also. So Index maintaining would not be required. Regards, Ankit Singhal On Sat, Jun 25, 2016 at 4:13 PM, Tongzhou Wang (Simon) < tongzhou.wang.1...@gmail.com>
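A sketch of a CsvBulkLoadTool invocation; the client jar name, table name, input path, and ZooKeeper quorum are placeholders:

```shell
# Bulk loads the CSV into MY_TABLE and, in the same MapReduce run,
# generates and loads the HFiles for MY_TABLE's index tables as well.
hadoop jar phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
    --table MY_TABLE \
    --input /data/my_table.csv \
    --zookeeper zk-host:2181
```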

Re: Dropping of Index can still leave some non-replayed writes Phoenix-2915

2016-06-15 Thread Ankit Singhal
Yes, restart your cluster On Wed, Jun 15, 2016 at 8:17 AM, anupama agarwal wrote: > I have created async index with same name. But I am still getting the same > error. Should I restart my cluster for changes to reflect? > On Jun 15, 2016 8:38 PM, "Ankit Singhal" wro

Re: Dropping of Index can still leave some non-replayed writes Phoenix-2915

2016-06-15 Thread Ankit Singhal
Hi Anupama, Option 1:- You can create a ASYNC index so that WAL can be replayed. And once your regions are up , remember to do the flush of data table before dropping the index. Option 2:- Create a table in hbase with the same name as index table name by using hbase shell. Regards, Ankit

Re: phoenix : timeouts for long queries

2016-05-13 Thread Ankit Singhal
You can try increasing phoenix.query.timeoutMs (and hbase.client.scanner.timeout.period) on the client . https://phoenix.apache.org/tuning.html On Fri, May 13, 2016 at 1:51 PM, 景涛 <844300...@qq.com> wrote: > When I query from a very big table > It get errors as follow: > > java.lang.RuntimeExcep

Re: Global Index stuck in BUILDING state

2016-05-11 Thread Ankit Singhal
Try recreating your index with ASYNC and populating it with the IndexTool, so that you don't hit timeouts or a stuck build during the initial load of a huge data set. https://phoenix.apache.org/secondary_indexing.html On Tue, May 10, 2016 at 7:26 AM, anupama agarwal wrote: > Hi All, > > I have
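The suggested two-step flow might look like this; the table, index, and output-path names are placeholders:

```sql
-- Step 1: the DDL returns immediately; the index stays in BUILDING
-- state until it is populated out of band.
CREATE INDEX MY_IDX ON MY_TABLE (COL1) ASYNC;
```

followed by a MapReduce run of the IndexTool (e.g. `hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table MY_TABLE --index-table MY_IDX --output-path /tmp/my_idx`) to build and activate the index without client-side timeouts.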

Re: [Spark 1.5.2]Check Foreign Key constraint

2016-05-11 Thread Ankit Singhal
You can use Joins as a substitute to subqueries. On Wed, May 11, 2016 at 1:27 PM, Divya Gehlot wrote: > Hi, > I am using Spark 1.5.2 with Apache Phoenix 4.4 > As Spark 1.5.2 doesn't support subquery in where conditions . > https://issues.apache.org/jira/browse/SPARK-4226 > > Is there any altern
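The suggested rewrite can be sketched as follows; the table and column names are invented for illustration, and the plain join assumes the inner query's key is unique (otherwise add DISTINCT to avoid duplicated rows):

```sql
-- Instead of a WHERE-clause subquery (unsupported in Spark 1.5.2):
--   SELECT * FROM ORDERS
--   WHERE CUST_ID IN (SELECT ID FROM CUSTOMERS WHERE REGION = 'EU');
-- use an equivalent join:
SELECT O.*
FROM ORDERS O
JOIN CUSTOMERS C ON O.CUST_ID = C.ID
WHERE C.REGION = 'EU';
```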

Re: [Phoenix 4.4]Rename table Supported ?

2016-05-09 Thread Ankit Singhal
ts in CurrentSCN ./sqlline.py "localhost;CurrentSCN=") and create table with the exact DDL used for old table but with the table name changed to new table. 3. confirm that your new table is working fine as expected . 4. Then drop the old table from phoenix and snapshot from hbase sh
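The snapshot-based part of this procedure might look like the following; the table and snapshot names are placeholders:

```shell
# HBase shell: snapshot the old physical table and clone it under the
# new name, so no data is copied row by row.
snapshot 'OLD_TABLE', 'old_table_snap'
clone_snapshot 'old_table_snap', 'NEW_TABLE'
```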

Re: SYSTEM.CATALOG table - VERSIONS attribute

2016-05-08 Thread Ankit Singhal
Yes, you can, as long as you don't need to go back in time to a schema version older than the 5 retained versions. On Mon, May 9, 2016 at 8:16 AM, Bavineni, Bharata < bharata.bavin...@epsilon.com> wrote: > Hi, > > SYSTEM.CATALOG table is created with VERSIONS => '1000' by default. Can we > change this value to 5 or
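A sketch of the corresponding HBase shell command; this assumes (as is the Phoenix default) that the catalog table's single column family is named '0':

```shell
# HBase shell: cap the retained schema versions on the catalog at 5,
# matching the value asked about above.
alter 'SYSTEM.CATALOG', {NAME => '0', VERSIONS => 5}
```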

Re: [while doing select] getting exception - ERROR 1108 (XCL08): Cache of region boundaries are out of date.

2016-05-08 Thread Ankit Singhal
Yes Vishnu, you may be hitting https://issues.apache.org/jira/browse/PHOENIX-2249 so you can try deleting the stats for the table 'EVENTS_PROD'. On Mon, May 9, 2016 at 10:56 AM, vishnu rao wrote: > hi guys need help ! > > i was getting this exception while doing a select. hbase 1.1 with phoenix >
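The stats-deletion workaround can be sketched as a single statement; the physical table name is taken from the thread above:

```sql
-- Drop the cached guideposts so Phoenix stops using stale region
-- boundaries for this table; they are regenerated on the next stats
-- collection (e.g. after a major compaction or UPDATE STATISTICS).
DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME = 'EVENTS_PROD';
```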

Re: Undefined column. columnName=IS_ROW_TIMESTAMP

2016-05-02 Thread Ankit Singhal
=> "SYSTEM.CATALOG\x000", > STOPROW => "SYSTEM.CATALOG\x001"} > > Could this row be causing the issue? > > > > Thank you, > > Bharathi. > > > > > > *From:* Ankit Singhal [mailto:ankitsingha...@gmail.com] > *Sent:* Sunday, Ma
