Congratulations Rushabh !!
On Wed, Aug 23, 2023 at 10:15 PM Istvan Toth wrote:
> Congratulations Rushabh!
>
>
> On Thu, Aug 24, 2023 at 2:11 AM rajeshb...@apache.org <
> chrajeshbab...@gmail.com> wrote:
>
>> Congratulations Rushabh!
>>
>> Thanks,
>> Rajeshbabu.
>>
>> On Thu, Aug 24, 2023, 5:37 A
in this role.
Please join me in congratulating Rajeshbabu!
Thank you all for the opportunity to serve you as the VP for
these last years.
Regards,
Ankit Singhal
Congratulations and Welcome, Tanuj!!
On Thu, Dec 16, 2021 at 11:17 AM Geoffrey Jacoby wrote:
> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Tanuj
> Khurana has accepted the PMC's invitation to become a committer on Apache
> Phoenix.
>
> We appreciate all of the great contrib
actor your query/schema to use indices
automatically.
Regards,
Ankit Singhal
On Mon, Oct 11, 2021 at 2:08 PM Josh Elser wrote:
> No worries. Thanks for confirming!
>
> On 10/10/21 1:43 PM, Simon Mottram wrote:
> > Hi
> >
> > Thanks for the reply, I posted here by mistake
Congratulations, Viraj!
On Tue, Feb 9, 2021 at 2:18 PM Xinyi Yan wrote:
> Congratulations and welcome, Viraj!
>
> On Tue, Feb 9, 2021 at 2:07 PM Geoffrey Jacoby wrote:
>
>> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Viraj
>> Jasani has accepted the PMC's invitation to bec
This is excellent; thanks, Kadir, for initiating it. I am keen to listen to
your presentation on "strongly consistent global indexes".
The logistics look great; if required, they can be adjusted based on the
feedback after the first meetup.
Some of the current topics that come to my mind (in case someone
e user who has been waiting for a long time for the release.
Regards,
Ankit Singhal
On Fri, Jan 29, 2021 at 10:59 PM Istvan Toth wrote:
> I'm not sure I understand, let me rephrase
>
> So we drop support right after we release a Phoenix minor version,
> if the Phoenix release date is
On behalf of the Apache Phoenix PMC, I'm pleased to announce that Richárd
Antal
has accepted the PMC's invitation to become a committer on Apache Phoenix.
We appreciate all of the great contributions Richárd has made to the
community thus far and we look forward to his continued involvement.
Cong
et of commits which brings back this shim layer,
instead can use the tag and go with it.
Regards,
Ankit Singhal
On Mon, Aug 24, 2020 at 10:04 AM Geoffrey Jacoby wrote:
> The HBase community has just unanimously EOLed HBase 1.3.
>
> As I recall, 1.3 has some significant differences with 1.4
Can you please try using CSVBulkLoad as a workaround (we have recently
fixed it) and raise a bug for psql not handling table-name case
sensitivity?
https://issues.apache.org/jira/browse/PHOENIX-3541
https://issues.apache.org/jira/browse/PHOENIX-5319
Regards,
Ankit Singhal
On Thu, May 14
Please schedule a compaction on the SYSTEM.STATS table to clear out the old entries.
On Thu, Sep 19, 2019 at 1:48 PM Stepan Migunov <
stepan.migu...@firstlinesoftware.com> wrote:
> Thanks, Josh. The problem was really related to reading the SYSTEM.STATS
> table.
> There were only 8,000 rows in the table, b
em with that.
Regards,
Ankit Singhal
On Wed, Sep 4, 2019 at 6:24 PM Simon Mottram
wrote:
> Hi Ankit
>
> Thats very useful, many thanks.
>
> Before I dive into using Phoenix (which has given me a torrid time over
> the last few days!), is using Phoenix the best option given that I'
>> If not possible I guess we have to look at doing something at the HBase
level.
As Josh said, it's not yet supported in Phoenix, though you may try using
HBase's cell-level security with some Phoenix internal APIs and let us know
if it works for you.
Sharing some sample code in case you want to try it.
/**
*
sensitive information about your environment and data, like hbase:meta
having IP addresses/hostnames and SYSTEM.STATS having data row keys, so upload only
if you think it's test data and the hostnames have no significance).
Thanks,
Ankit Singhal
On Mon, Aug 19, 2019 at 11:17 PM venkata subbarayudu
code to fix the issue, so the patch
would really be appreciated.
Also, can you try running "select a,b,c,d,e,f,g,h,i,j,k,m,n from
TEST_PHOENIX.APP where c=2 and h = 1 limit 5" and see if the index is getting
used.
Regards,
Ankit Singhal
On Tue, Aug 20, 2019 at 1:49 AM you Zhuang
wrote
As Thomas said, the number of splits will be equal to the number of guideposts
available for the table, or to the ones required to cover the filter.
If you are seeing one split per region, then either stats are disabled or
the guidepost width is set higher than the size of the region, so try reducing
the guidepost
ng phoenixColumnName =
pTable.getColumnForColumnQualifier("0".getBytes(),
hbaseColumnQualifierBytes).getName();
Regards,
Ankit Singhal
On Tue, Jan 8, 2019 at 10:03 AM Josh Elser wrote:
> (from the peanut-gallery)
>
> That sounds to me like a useful utility to share with others if you're
> going to w
Is connecting and running some commands through the HBase shell working? As per
the stack trace, it seems your HBase is not up. Look at the master and
regionserver logs for errors.
On Tue, Dec 4, 2018 at 4:17 AM Raghavendra Channarayappa <
raghavendra.channaraya...@myntra.com> wrote:
> Can someone
We do not allow an atomic upsert, and throw the corresponding exception, in the
cases documented under the limitations section of
http://phoenix.apache.org/atomic_upsert.html. The documentation probably
needs a little touch-up to convey this clearly.
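For reference, a supported atomic upsert looks roughly like the sketch below (table and column names are made up for illustration):

-- Inserts the row if it is missing, otherwise atomically bumps the counter.
UPSERT INTO PAGE_COUNTERS (URL, HITS)
VALUES ('/home', 1)
ON DUPLICATE KEY UPDATE HITS = HITS + 1;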
On Tue, Oct 9, 2018 at 10:05 AM Josh Elser wrote:
> Ca
You might be hitting PHOENIX-4785
<https://jira.apache.org/jira/browse/PHOENIX-4785>, you can apply the
patch on top of 4.14 and see if it fixes your problem.
Regards,
Ankit Singhal
On Wed, Sep 26, 2018 at 2:33 PM Batyrshin Alexander <0x62...@gmail.com>
wrote:
> Any advices?
't help.
Regards,
Ankit Singhal
On Thu, Sep 20, 2018 at 10:24 AM Batyrshin Alexander <0x62...@gmail.com>
wrote:
> Nope, it was client side config.
> Thank you for response.
>
> On 20 Sep 2018, at 05:36, Jaanai Zhang wrote:
>
> Are you configuring these on the
To better understand the problem, we may need the DDL for both the
indexes and the data table, and also the query that uses your secondary index.
Also, please try some of the tuning documented at
https://phoenix.apache.org/secondary_indexing.html and see if it helps.
On Tue, Sep 18, 2018 at 11:25 AM Josh Else
gured region split policy
'org.apache.phoenix.schema.MetaDataSplitPolicy'
for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
or table descriptor if you want to bypass sanity checks"
Regards,
Ankit Singhal
On Sun, Aug 12, 2018 at 6:46 PM, 倪项菲 wrote:
You are probably affected by
https://issues.apache.org/jira/browse/HBASE-20172. Are you on JDK 1.7 or
lower? Can you upgrade to JDK 1.8 and check?
On Sun, May 6, 2018 at 9:29 AM, anil gupta wrote:
> As per following line:
> "Caused by: java.lang.RuntimeException: Could not create interface
>
't
> >> know what, if any, infrastructure exists to distribute Python modules.
> >> https://packaging.python.org/glossary/#term-built-distribution
> >>
> >> I feel like a sub-directory in the phoenix repository would be the
> >> easiest to make this w
ython-phoenixdb
[3] https://github.com/Pirionfr/pyPhoenix
[4] https://issues.apache.org/jira/browse/PHOENIX-4636
Regards,
Ankit Singhal
On Tue, Apr 11, 2017 at 1:30 AM, James Taylor
wrote:
>
bq. But for tables inside, I am assuming the user needs access to the
Phoenix SYSTEM tables (and CREATE rights for the namespace in question
on the HBase level)? Is that the case? And if so, what are they able
to see, as in, only their information, or all information from other
tenants as well? If
you can open the script and remove $PHOENIX_OPTS, as it may be illegal to
access environment variables like this on an NT system.
$PHOENIX_OPTS is generally used to pass JVM parameters (like -Xmn, -Xmx) by
setting it through the shell.
On Fri, Oct 6, 2017 at 9:49 AM, Mallieswari Dineshbabu <
dmalliesw.
Best is to do "SELECT COUNT(*) FROM MYTABLE" with index. As index table
will have less data so it can be read faster.
if you have time series data or your data is always incremental with some
ID then you can do incremental count with row_timestamp filters or ID filter
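For example, a rough sketch of the incremental variant (table, column, and watermark value are just placeholders):

-- Count only rows newer than the last stored watermark and add the result
-- to the previously stored total.
SELECT COUNT(*) FROM MYTABLE
WHERE CREATED_DATE > TO_DATE('2016-10-01 00:00:00');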
bq. however the result could
Yes, you can write your own custom mapper to do the conversions (look at
CsvToKeyValueMapper and CsvUpsertExecutor#createConversionFunction), or
consider chaining jobs (where a first job with multiple inputs
standardizes the date format, followed by the CSVBulkLoadTool), or writing a
custom TextInputF
bq. This runs successfully if I split this into 2 files, but I'd like to
avoid doing that.
Do you run a different job for each file?
If your HBase cluster is not co-located with your YARN cluster, then it may
be that copying the large HFile is timing out (this may happen due
to the fewer reg
You can do this by UPSERT SELECT.
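Something along these lines, assuming a hypothetical USER_SESSIONS table keyed by USER_ID:

-- Rewriting the user's rows gives the cells a fresh timestamp,
-- which effectively restarts the TTL for that user.
UPSERT INTO USER_SESSIONS
SELECT * FROM USER_SESSIONS WHERE USER_ID = 'some-user-id';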
On Mon, Aug 7, 2017 at 4:13 PM, Ankit Singhal
wrote:
> You can read KVs for that user and write them again with current time.
>
> On Sun, Aug 6, 2017 at 8:44 PM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> I
You can read KVs for that user and write them again with current time.
On Sun, Aug 6, 2017 at 8:44 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:
> I'm using phoenix to store user sessions. The table's TTL is set to 3 days
> and I'd like to have the 3 days start over if the user co
You can take a look at our IT tests for phoenix-spark module.
https://github.com/apache/phoenix/blob/master/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
On Mon, Jul 17, 2017 at 9:20 PM, Luqman Ghani wrote:
>
> -- Forwarded message --
> From: Luqman Gha
Yes, the value 1 for "hbase.client.retries.number" is the root cause of the above
exception.
A general guideline/formula could be (not official):
(time taken for region movement in your cluster + ZooKeeper timeout) /
hbase.client.pause
Or, going by intuition, you can set it to at least 10.
On Fri, Jul 14, 201
From Phoenix 4.9 onwards you can specify any expression as a column default. (I'm
not sure if there is any limitation called out.)
For syntax:
https://phoenix.apache.org/language/index.html#column_def
For examples:
https://github.com/apache/phoenix/blob/2d40241b5c5c0287dca3dd2e2476db328bb7e7de/phoenix-
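A minimal sketch of what the 4.9+ syntax allows (table and column names are made up; check the linked grammar for the exact rules):

CREATE TABLE IF NOT EXISTS EVENT_LOG (
    ID BIGINT NOT NULL PRIMARY KEY,
    CREATED_AT DATE DEFAULT CURRENT_DATE(),  -- expression default
    STATUS VARCHAR DEFAULT 'NEW'             -- constant default
);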
If you don't have secondary indexes, views, or immutable tables, then an upgrade
from 4.5 to 4.9 will just add some new columns to SYSTEM.CATALOG and
re-create the STATS table.
But we still have not tested an upgrade from 4.5 to 4.9; it is always
advisable to stop after every two versions (especially in
Yes, and also to avoid returning an incomplete row for the same primary key
because of different timestamps on the columns' cells.
On Sat, Jun 24, 2017 at 4:20 AM, Randy Hu wrote:
> First HBase does not have a concept of "row timestamp". Timestamp is part
> of each cell. The closest to row timest
Yes, this is a limitation[1] of the current implementation of UDFs and the
class loader used. It is recommended either to restart the cluster if the
implementation changes, or to use a new jar name.
[1] https://phoenix.apache.org/udf.html
On Wed, May 3, 2017 at 4:41 AM, Randy Hu wrote:
> Developed and test U
> Nan
>
> On Fri, Jun 23, 2017 at 1:23 AM, Ankit Singhal
> wrote:
>
>> If you have composite columns in your row key of HBase table and they are
>> not formed through Phoenix then you can't access an individual column of
>> primary key by Phoenix SQL too.
>
You can map an existing HBase table to a view or table in Phoenix, but we expect
the name of the table to match the Phoenix table name. (However, you can
rename your existing HBase table with snapshot and restore.)
The DDLs you are using to map the table are not correct or are not
supported. You can refe
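As a rough illustration of the expected shape (hypothetical HBase table "t1" with column family "cf"):

-- The view name must match the existing HBase table name; quoting preserves
-- case, and non-PK columns are mapped as "family"."qualifier".
CREATE VIEW "t1" (
    pk VARCHAR PRIMARY KEY,
    "cf"."col1" VARCHAR,
    "cf"."col2" VARCHAR
);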
bq. A leading date column is in our schema model:-
Don't you have any other column which is obligatory in read queries but not
monotonically increasing during ingestion? A pre-split can help you
avoid hot-spotting.
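For illustration only, a pre-split at table creation time could look like this (the schema and split points are made up and depend on your leading key column):

-- Pre-split on the leading key column so ingest spreads across regions
-- without salting.
CREATE TABLE METRICS (
    DEVICE_ID VARCHAR NOT NULL,
    EVENT_DATE DATE NOT NULL,
    VAL DOUBLE,
    CONSTRAINT PK PRIMARY KEY (DEVICE_ID, EVENT_DATE)
) SPLIT ON ('d', 'm', 't');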
For parallelism/performance comparison, have you tried running a query on a
non-salted
If you have composite columns in the row key of your HBase table and they were
not formed through Phoenix, then you can't access an individual column of the
primary key through Phoenix SQL either.
Try composing the whole PK and using it in a filter, or check whether you can
use the regex functions[1] or the LIKE operator.
[1
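For instance, if your view exposes the raw row key as a single PK column (names here are hypothetical):

-- Prefix match on the composite key, or pull a piece out of it with a regex.
SELECT * FROM "raw_table" WHERE PK LIKE 'customer123|%';
SELECT REGEXP_SUBSTR(PK, '[^|]+', 1) AS first_part FROM "raw_table";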
ady be there and you will just save
time by doing so.
Regards,
Ankit Singhal
On Thu, Jun 8, 2017 at 1:34 PM, Michael Young wrote:
> I have a doubt about step 2 from Ankit Singhal's response in
> http://apache-phoenix-user-list.1124778.n5.nabble.com/Phoeni
> x-4-4-Rename-table-Suppo
The next release of Phoenix (v4.11.0) will support HBase 1.3.1 (see
PHOENIX-3603), and no timeline has been decided yet for the release. But you
may expect some updates in the next 1-2 months.
On Thu, May 25, 2017 at 3:32 AM, Anirudha Jadhav wrote:
> hi,
>
> just checking in, any idea what kind of a
It could be because of stale stats due to region merges or something
similar; you can try deleting the stats from SYSTEM.STATS.
http://apache-phoenix-user-list.1124778.n5.nabble.com/Cache-of-region-boundaries-are-out-of-date-during-index-creation-td1213.html
On Sat, May 20, 2017 at 8:29 PM, Pedro
I think you have a salted table and you are hitting the bug below:
https://issues.apache.org/jira/browse/PHOENIX-3800
Do you mind trying out the patch? We will have this fixed in 4.11 at
least (probably 4.10.1 too).
On Fri, May 5, 2017 at 11:06 AM, Bernard Quizon <
bernard.qui...@stellarloyalty.com>
g SQL for some other purpose.
Regards,
Ankit Singhal
On Tue, May 2, 2017 at 7:55 PM, Josh Elser wrote:
> Planning for unexpected outages with HBase is a very good idea. At a
> minimum, there will likely be points in time where you want to change HBase
> configuration, apply some patched j
If you are using Phoenix 4.8 onwards, then you can try giving the ZooKeeper
string appended with a schema, like below:
psql.py <zookeeper>;schema=<schema_name> /create_table.sql
psql.py zookeeper1;schema=TEST_SCHEMA /create_table.sql
On Sat, Apr 15, 2017 at 2:25 AM, sid red wrote:
> Hi,
>
> I am trying to find a solution, w
bq. 1. How many concurrent phoenix connections the application can open?
I don't think there is any limit on this.
bq. 2. Is there any limitations regarding the number of connections I should
consider?
I think as many as your JVM permits.
bq. 3. Is the client side config parameter phoenix.que
It seems we don't pack the dependencies in phoenix-kafka jar yet. Try
including flume-ng-configuration-1.3.0.jar in your classpath to resolve the
above issue.
On Thu, Apr 20, 2017 at 9:27 AM, lk_phoenix wrote:
> hi,all:
> I try to read data from kafka_2.11-0.10.2.0 , I get error:
>
> Exception
+1 to Jonathan's comment.
-- Take multiple jstacks of the client during query time and check which
thread is working for a long time. If you find that merge sort is the bottleneck,
then removing salting and using a SERIAL scan will help for the query given above.
Ensure that your queries are not causing hotspott
This is because we cap the scan at the current timestamp, so anything
beyond the current time will not be seen. This is needed mainly to prevent
an UPSERT SELECT from seeing its own new writes.
https://issues.apache.org/jira/browse/PHOENIX-3176
On Thu, Apr 20, 2017 at 11:52 PM, Randy wrote:
> I was try
Sudhir,
Relevant JIRA for the same.
https://issues.apache.org/jira/browse/PHOENIX-3288
Let me see if I can crack this for the coming release.
On Fri, Apr 21, 2017 at 8:42 AM, Josh Elser wrote:
> Sudhir,
>
> Didn't meant to imply that asking the question was a waste of time.
> Instead, I want
Can you please share the exception stack trace?
On Fri, Mar 10, 2017 at 12:25 PM, mferlay wrote:
> Hi everybody , I up this message because I think i'm in the same situation.
> I'm using Phoenix 4.9 - Hbase 1.2.
> I have found some stuff like setting
> "phoenix.schema.isNamespaceMappingEnabled" at t
The relevant thread where the decision was made to drop HBase 1.0 support
from v4.9 onwards.
http://search-hadoop.com/m/Phoenix/9UY0h21AGFh2sgGFK?subj=Re+DISCUSS+Drop+HBase+1+0+support+for+4+9
On Tue, Feb 7, 2017 at 8:13 PM, Mark Heppner wrote:
> Pedro,
> I can't answer your question, but if y
+
> | 2017-01-01 15:02:21.050 |
> | 2017-01-02 15:02:21.050 |
> | 2017-01-13 15:02:21.050 |
> | 2017-02-06 15:02:21.050 |
> | 2017-02-07 11:02:21.050 |
> | 2017-02-07 11:03:21.050 |
> | 2017-02-07 12:02:21.050 |
> | 2017-02-07 12:03:21.050 |
> +---
I think you are also hitting
https://issues.apache.org/jira/browse/PHOENIX-3176.
On Tue, Feb 7, 2017 at 2:18 PM, Dhaval Modi wrote:
> Hi Pedro,
>
> Upserted key are different. One key is for July month & other for January
> month.
> 1. '2017-*07*-02T15:02:21.050'
> 2. '2017-*01*-02T15:02:21.050'
Hi Pradheep,
It seems tracing is not distributed as a part of HDP 2.4.3.0, please work
with your vendor for an appropriate solution.
Regards,
Ankit Singhal
On Thu, Jan 19, 2017 at 4:48 AM, Pradheep Shanmugam <
pradheep.shanmu...@infor.com> wrote:
> Hi,
>
> I am using hdp 2.
at
org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:433)
at
Regards,
Ankit Singhal.
On Mon, Nov 7, 2016 at 11:38 PM, Long, Xindian
wrote:
> Hi, Josh:
>
>
>
> Thanks for your reply. I just added a Jira issue, but I am not familiar
>
Have you checked your query performance without sqlline? As Jonathan also
mentioned, sqlline has its own performance issues in terms of reading
metadata (so probably the time is actually spent by sqlline in reading
metadata for 3600 columns and printing the header).
On Wed, Dec 28, 2016 at 12:04 A
@James, is this similar to
https://issues.apache.org/jira/browse/PHOENIX-3112?
@Mac, can you try if increasing hbase.client.scanner.max.result.size helps?
On Tue, Dec 6, 2016 at 10:53 PM, James Taylor
wrote:
> Looks like a bug to me. If you can reproduce the issue outside of Python
> phoenixdb,
Do you have big rows? If yes, it may be similar to
https://issues.apache.org/jira/browse/PHOENIX-3112, and
increasing hbase.client.scanner.max.result.size can help.
On Thu, Nov 24, 2016 at 6:00 PM, 金砖 wrote:
> thanks Abel.
>
>
> I tried update statistics, it did not work.
>
>
> But after so
You need to increase the Phoenix timeout as well (phoenix.query.timeoutMs).
https://phoenix.apache.org/tuning.html
On Sun, Oct 23, 2016 at 3:47 PM, Parveen Jain wrote:
> hi All,
>
> I just realized that phoneix doesn't provide "group by" and "distinct"
> methods if we use phoenix map reduce. It seem
bq. Will bulk load from Phoenix update the underlying Hbase table?
Yes. Instead of using ImportTsv, try to use the CSV bulk load tool only.
bq. Do I need to replace Phoenix view on Hbase as with CREATE TABLE?
You can still keep VIEW.
Regards,
Ankit Singhal
On Sun, Oct 23, 2016 at 6:37 PM, Mich Talebzadeh
ctions.html
[2]https://phoenix.apache.org/language/functions.html#to_date
[3] https://phoenix.apache.org/tuning.html
Regards,
Ankit Singhal
On Sun, Oct 23, 2016 at 6:15 PM, Mich Talebzadeh
wrote:
> Hi,
>
> My queries in Phoenix pickup GMT timezone as default.
>
> I need them to defaul
Rebuild is currently a costly operation, as it rebuilds the complete
index, and should be used when you think your index is corrupted and
you are not aware of the timestamp when it went out of sync.
Why can't you use the bulk loading tool[1] provided by Phoenix instead of
ImportTsv, as
I think a query with an OVER clause can be rewritten using SELF JOINs
in many cases.
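For instance, something like COUNT(*) OVER (PARTITION BY acct ORDER BY ts) can often be expressed as below (purely illustrative schema, not tuned):

-- Running count per account, emulated with a self join plus GROUP BY.
SELECT a.ACCT, a.TS, COUNT(*) AS RUNNING_CNT
FROM TXNS a JOIN TXNS b ON b.ACCT = a.ACCT
WHERE b.TS <= a.TS
GROUP BY a.ACCT, a.TS;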
Regards,
Ankit Singhal
On Sun, Oct 23, 2016 at 3:11 PM, Mich Talebzadeh
wrote:
> Hi,
>
> I was wondering whether analytic functions work in Phoenix. For example
> something equivalent to below in H
JFYI, phoenix.query.rowKeyOrderSaltedTable is deprecated and is
not honored from v4.4 onwards, so please use phoenix.query.force.rowkeyorder
instead.
I have now updated the docs (https://phoenix.apache.org/tuning.html) accordingly.
On Mon, Oct 17, 2016 at 3:14 AM, Josh Elser wrote:
> Not 100% sure, but yes, I
Currently, Phoenix doesn't support projecting selected columns of a table, or
expressions, in a view. You need to project all the columns with (SELECT *).
Please see the "Limitations" section on the page below, or PHOENIX-1507.
https://phoenix.apache.org/views.html
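In other words, only the first form below is supported (names are made up):

-- Supported: project everything from the base table.
CREATE VIEW ACTIVE_USERS AS SELECT * FROM USERS WHERE STATUS = 'ACTIVE';
-- Not supported: selecting specific columns or expressions in the view definition.
-- CREATE VIEW ACTIVE_USERS AS SELECT ID, UPPER(NAME) FROM USERS WHERE STATUS = 'ACTIVE';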
On Thu, Oct 6, 2016 at 10:05 PM, Mich Ta
'2016-08-05 23:59:59.000')))
>>> DDL:
>>>
>>> CREATE TABLE IF NOT EXISTS SPL_FINAL
>>> (col1 VARCHAR NOT NULL,
>>> col2 VARCHAR NOT NULL,
>>> col3 INTEGER NOT NULL,
>>> col4 INTEGER NOT NULL,
>>> col5 VARCHAR NOT NULL,
>>
Please share some more details about the query, the DDL, and the explain plan. In
Phoenix, there are cases where we do some server-side processing the first time
rs.next() is called, but subsequent next() calls should be faster.
On Thu, Sep 22, 2016 at 9:52 AM, Sasikumar Natarajan
wrote:
> Hi,
> I'm using
Adding some more workarounds, if you are working on columns:
select cast(col_int1 as decimal)/col_int2;
select col_int1*1.0/3;
On Wed, Sep 21, 2016 at 8:33 PM, James Taylor
wrote:
> Hi Noam,
> Please file a JIRA. As a workaround, you can do SELECT 1.0/3.
> Thanks,
> James
>
> On Wed, Sep 21, 2
test (*"iso_8601" TIMESTAMP NOT NULL* PRIMARY
KEY);
upsert into test values(TO_DATE('2016-04-01 22:45:00'));
select * from test;
+--+
| iso_8601 |
+--+
| 2016-04-01 22:45:00.000 |
+------+
Regards,
and.
Regards,
Ankit Singhal
On Fri, Aug 26, 2016 at 7:52 PM, jinzh...@wacai.com
wrote:
> after upgraded , a lot of WARN logs in hbase-regionserver.log:
>
>
> 2016-08-26 22:12:39,682 WARN [pool-287-thread-1] coprocessor.
> MetaDataRegionObserver: ScheduledBui
his page:
>> http://phoenix.apache.org/language/index.html -- it feels like where I
>> should find a list like that, but I don't see it explicitly called out.
>>
>> -Aaron
>>
>> On Aug 21, 2016, at 09:04, Ted Yu wrote:
>>
>> Ankit:
>> Is
Can you check whether your HBase is stable or not (you can use the hbck tool to
see any inconsistencies)?
On Sat, Aug 27, 2016 at 10:41 PM, Sanooj Padmakumar
wrote:
> Hi All,
>
> I am getting the same exception , this time when running a Phoenix MR (
> https://phoenix.apache.org/phoenix_mr.html) ..
Yes, Ted is right; "Error 1102 (XCL02): Cannot get all table regions"
happens when Phoenix is not able to get the locations of all regions. Assigning
that offline region should help.
On Mon, Aug 29, 2016 at 10:22 PM, Ted Yu wrote:
> I searched for "Cannot get all table regions" in hbase repo - no h
can you confirm what values are set for
phoenix.groupby.estimatedDistinctValues(Integer)
and phoenix.groupby.maxCacheSize(long)?
On Wed, Aug 31, 2016 at 12:24 PM, Dong-iL, Kim wrote:
> Hi.
>
> when I’m using simple groupby query, exception occured as below.
> What shall I do?
>
> Thanks.
>
> Err
available @
> http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/ ?
>
>
> Many thanks,
>
> Tom
>
>
> ------
> From: Ankit Singhal
> Sent: 12 August 2016 18:25
> To: d...@phoenix.apache.org; user; annou...@apache.org;
>
Aaron,
you can escape the reserved-keyword check by enclosing the name in double quotes:
SELECT * FROM SYSTEM."FUNCTION"
Regards,
Ankit Singhal
On Fri, Aug 19, 2016 at 10:47 PM, Aaron Molitor
wrote:
> Looks like the SYSTEM.FUNCTION table is names with a reserved word. Is
>
the searched data is different.
Yes, that could be possible, because some users are hitting only a certain key
range, depending upon the first column (prefix) of the row key.
Regards,
Ankit Singhal
On Mon, Aug 15, 2016 at 6:29 PM, Chabot, Jerry
wrote:
> I’ve added the hint to the SELECT. Does an
Apache Phoenix enables OLTP and operational analytics for Hadoop through
SQL support and integration with other projects in the ecosystem such as
Spark, HBase, Pig, Flume, MapReduce and Hive.
We're pleased to announce our 4.8.0 release which includes:
- Local Index improvements[1]
- Integration wi
Samarth, I filed PHOENIX-3176 for this.
On Wed, Aug 10, 2016 at 11:42 PM, Ryan Templeton wrote:
> 0: jdbc:phoenix:localhost:2181> explain select count(*) from
> historian.data;
>
> +------------------+
>
> |       PLAN       |
>
> +--
#How_I_map_Phoenix_table_to_an_existing_HBase_table
If you have a composite key, it is always better to insert data through
Phoenix only.
Regards,
Ankit Singhal
On Fri, Aug 5, 2016 at 8:00 PM, Dong-iL, Kim wrote:
> oh. phoenix version is 4.7.0 and on EMR.
> Thx.
>
> > On Aug 5, 2016, at 11:27 PM, Dong-iL, Kim wrote:
> >
&
hin a timeout period. You need to increase the scanner timeout
period along with the properties you mentioned above.
hbase.client.scanner.timeout.period
6
Regards,
Ankit Singhal
On Mon, Aug 8, 2016 at 6:55 PM, wrote:
> Thanks Brian. I have added HBASE_CONF_DIR and it’s still ti
Hi Vasanth,
The RC for 4.8 (with support for HBase 1.2) is just out today; you can try
with the latest build.
Regards,
Ankit Singhal
On Thu, Jul 14, 2016 at 10:06 AM, Vasanth Bhat wrote:
> Thanks James.
>
> When are the early builds going to be available for Phoenix
> 4.8.0
bc';
For covered indexes, you can read
https://phoenix.apache.org/secondary_indexing.html
Regards,
Ankit Singhal
On Tue, Jun 28, 2016 at 4:25 AM, Vamsi Krishna
wrote:
> Team,
>
> I'm using HDP 2.3.2 (HBase : 1.1.2, Phoenix : 4.4.0).
> *Question: *phoenix explain plan no
Hi Vamsi,
Phoenix uses a single local index table for all the local indexes created on
a particular data table.
Rows are differentiated by the local index sequence ID and are filtered, when
requested during a query, for the particular index.
Regards,
Ankit Singhal
Re
On Tue, Jun 28, 2016 at 4:18 AM, Vamsi
ssion(FloorYearExpression.class),
CeilWeekExpression(CeilWeekExpression.class),
CeilMonthExpression(CeilMonthExpression.class),
CeilYearExpression(CeilYearExpression.class);
Regards,
Ankit Singhal
On Wed, Jun 29, 2016 at 9:08 AM, Yang Zhang wrote:
> when I use the functions described on your
(v) ASYNC
But if you are only using the CSVBulkLoadTool for bulk load, then it will
automatically prepare and bulk load the index data as well, so separate index
maintenance would not be required.
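For reference, the ASYNC variant looks roughly like this (index, table, and column names are placeholders); the index data is then built separately, e.g. by the IndexTool MR job or the bulk loader:

-- Creates only the index definition; existing rows are not built synchronously.
CREATE INDEX MY_IDX ON MY_TABLE (V) ASYNC;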
Regards,
Ankit Singhal
On Sat, Jun 25, 2016 at 4:13 PM, Tongzhou Wang (Simon) <
tongzhou.wang.1...@gmail.com>
Yes, restart your cluster
On Wed, Jun 15, 2016 at 8:17 AM, anupama agarwal wrote:
> I have created async index with same name. But I am still getting the same
> error. Should I restart my cluster for changes to reflect?
> On Jun 15, 2016 8:38 PM, "Ankit Singhal" wro
Hi Anupama,
Option 1:
You can create an ASYNC index so that the WAL can be replayed. And once your
regions are up, remember to flush the data table before dropping the
index.
Option 2:
Create a table in HBase with the same name as the index table by using the
hbase shell.
Regards,
Ankit
You can try increasing phoenix.query.timeoutMs (and
hbase.client.scanner.timeout.period) on the client.
https://phoenix.apache.org/tuning.html
On Fri, May 13, 2016 at 1:51 PM, 景涛 <844300...@qq.com> wrote:
> When I query from a very big table
> It get errors as follow:
>
> java.lang.RuntimeExcep
Try recreating your index with ASYNC and updating the index using the IndexTool,
so that you don't face issues related to timeouts or getting stuck during the
initial load of a huge amount of data.
https://phoenix.apache.org/secondary_indexing.html
On Tue, May 10, 2016 at 7:26 AM, anupama agarwal wrote:
> Hi All,
>
> I have
You can use joins as a substitute for subqueries.
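For example, a WHERE ... IN (subquery) can usually be rewritten as a join (tables here are hypothetical; the rewrite assumes the join key is unique on the right side):

-- Instead of: SELECT * FROM ORDERS WHERE CUST_ID IN (SELECT ID FROM VIP_CUSTOMERS)
SELECT o.* FROM ORDERS o JOIN VIP_CUSTOMERS v ON o.CUST_ID = v.ID;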
On Wed, May 11, 2016 at 1:27 PM, Divya Gehlot
wrote:
> Hi,
> I am using Spark 1.5.2 with Apache Phoenix 4.4
> As Spark 1.5.2 doesn't support subquery in where conditions .
> https://issues.apache.org/jira/browse/SPARK-4226
>
> Is there any altern
ts in CurrentSCN ./sqlline.py
"localhost;CurrentSCN=") and create table with the exact DDL used for
old table but with the table name changed to the new table name.
3. Confirm that your new table is working as expected.
4. Then drop the old table from Phoenix and the snapshot from the hbase sh
Yes, you can, provided you don't need to go back in time to a schema state
older than 5 versions.
On Mon, May 9, 2016 at 8:16 AM, Bavineni, Bharata <
bharata.bavin...@epsilon.com> wrote:
> Hi,
>
> SYSTEM.CATALOG table is created with VERSIONS => '1000' by default. Can we
> change this value to 5 or
Yes, Vishnu, you may be hitting
https://issues.apache.org/jira/browse/PHOENIX-2249, so you can try deleting the
stats for the table 'EVENTS_PROD'.
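Roughly like this (adjust the physical table name if yours differs):

-- Drops the possibly stale guideposts; they are regenerated by
-- UPDATE STATISTICS or the next major compaction.
DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME = 'EVENTS_PROD';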
On Mon, May 9, 2016 at 10:56 AM, vishnu rao wrote:
> hi guys need help !
>
> i was getting this exception while doing a select. hbase 1.1 with phoenix
>
=> "SYSTEM.CATALOG\x000",
> STOPROW => "SYSTEM.CATALOG\x001"}
>
> Could this row be causing the issue?
>
>
>
> Thank you,
>
> Bharathi.
>
>
>
>
>
> From: Ankit Singhal [mailto:ankitsingha...@gmail.com]
> Sent: Sunday, Ma