Hello,
1. We are currently using Phoenix 4.0.0-incubating for both client and server.
2. We upgraded to 4.3.1 (the most recent release).
3. When connecting with the command-line client (./sqlline.py), the
connection fails with the following error.
Error: ERROR 1013 (42M04): Table
We upgraded Apache Phoenix 4.1.0 to 4.3.1. When the Phoenix client connects
for the first time, it takes much longer to connect than usual.
Is there any workaround to avoid the long connection time?
Note: There are a large number of Phoenix views and tables (around 4K) in the
system.
Th
When adding the Phoenix 4.3.1 dependency to a project and building it, the
phoenix-4.3.1 client jars are not in the Maven Central repository. What is
the right place to request that the jars be added to the repository? Or is
this list the right place?
Thanks,
Arun
It is from both code and sqlline.
On Jun 1, 2015 5:52 PM, "Nick Dimiduk" wrote:
> Is this from code, or sqlline?
>
> On Fri, May 29, 2015 at 2:42 PM, Arun Kumaran Sabtharishi <
> arun1...@gmail.com> wrote:
>
>> Upgraded Apache-phoenix-4.1.0 to 4.3.1. When
Hello Phoenix users and developers,
We recently upgraded Phoenix to 4.3.1, and the following things are unclear.
1. What is the right way to use the Phoenix JDBC client?
2. Since there is no phoenix-4.3.1-client.jar in the Maven repository, is
it wise to use the jar as a library within the proje
James,
Thanks for your reply. It worked!
But can you help me understand how it makes a difference even when
there is no data in the SYSTEM.SEQUENCE table?
Thanks,
Arun
Hello phoenix users and developers,
After upgrading to Phoenix 4.3.1, dropping a view times out (for views with
huge data as well as views with little or no data).
Before upgrading to the 4.3.1 client, the system was using the
4.0.0-incubating client, which had a similar issue where dropping a view
took a long time. But at some p
To add some information on this:
SYSTEM.CATALOG has around 1.3 million views, and the region count is 1.
Is it safe to split the SYSTEM.CATALOG table?
Would splitting improve the performance of dropping?
Thanks,
Arun
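For reference, a manual split can be issued from the HBase shell. This is only a sketch of the mechanics, not an answer to whether it is safe for SYSTEM.CATALOG (that is exactly the open question above), and the explicit split key shown is a hypothetical example:

```shell
# From the HBase shell; try this on a test cluster first.
hbase shell
> split 'SYSTEM.CATALOG'                    # let HBase pick the midpoint
> split 'SYSTEM.CATALOG', "\x00MY_SCHEMA"   # or split at an explicit (made-up) key
```

Splitting spreads subsequent scans across region servers, but whether the Phoenix metadata coprocessors tolerate a multi-region SYSTEM.CATALOG is version-dependent, so the safety question stands.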
Hello James,
Thanks for the reply. Here are the answers for the questions you have asked.
*1.) What's different between the two environments (i.e. the working and
not-working ones)?*
The not-working ones have more views than the working ones.
*2.) Do you mean 1.3M views or 1.3M rows?*
Hi James,
When I tried twice, the issue could be reproduced the first time but not the
second time in my local environment. However, the degradation in performance
as the number of views/columns grows is evident and has been logged.
I have added the GitHub URL for the test project, which does the followin
Hello Phoenix/Hbase users and developers,
I have a few questions regarding how table delete works in HBase.
*What I know:*
If an HBase table is deleted (after disabling it), the SYSTEM.CATALOG
entries related to that table will be deleted.
If a view is created using Phoenix (assuming there are
Hello phoenix users and developers,
Is it possible to delete Phoenix tables/views using the HBase shell (by
deleting specific columns in SYSTEM.CATALOG)? If so, based on what row key,
and which rows have to be deleted in the SYSTEM.CATALOG table?
Thanks,
Arun
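As a sketch of what that would look like (the row-key layout here is an assumption: SYSTEM.CATALOG rows are keyed by the null-byte-joined tenant id, schema name, and table name, so an untenanted view in the default schema starts with two `\x00` bytes — verify against your own `scan` output before deleting anything):

```shell
hbase shell
# Inspect the rows first; a view's metadata spans several rows
# (a header row plus one row per column and per link record).
> scan 'SYSTEM.CATALOG', {ROWPREFIXFILTER => "\x00\x00MY_VIEW"}
# deleteall removes one row per call, so each row found above
# would need its own deleteall.
> deleteall 'SYSTEM.CATALOG', "\x00\x00MY_VIEW"
```

Deleting catalog rows from the HBase shell bypasses all Phoenix bookkeeping, so this should be treated as a last resort on a backed-up table.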
Because there is a problem in a particular environment where the Phoenix
drop is timing out.
James,
Thanks for the reply. Since I am new to profiling in Phoenix, it would be
great if you could point me to a location where profiling in Phoenix is
explained, or explain it in this email.
Thanks,
Arun
James,
We dug deeper and found that the time is spent in the
MetaDataEndpointImpl.findChildViews() method. It runs a scan on the
SYSTEM.CATALOG table looking for the link record. Since the link record is
in the format CHILD-PARENT, it has to scan the entire table to find the
parent suffix.
In our
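To illustrate why that scan is expensive (the column family and qualifier names below are assumptions from memory and may differ by Phoenix version): the link rows are keyed under the child table, with the parent only in the key's suffix, so finding the children of a given parent cannot use a row-key prefix and degenerates to a filtered scan of the whole catalog, roughly:

```shell
hbase shell
# Every link row must be read and its key suffix checked against the
# parent name; with ~1.3M views this touches the entire table.
> scan 'SYSTEM.CATALOG', {COLUMNS => ['0:LINK_TYPE']}
```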
James,
Filed the bugs.
https://issues.apache.org/jira/browse/PHOENIX-2050
https://issues.apache.org/jira/browse/PHOENIX-2051
Thanks,
Arun
James,
Do you see any issues in using the delete statement below as a workaround
for dropping views until the JIRAs are fixed and released?
delete from SYSTEM.CATALOG where table_name = 'MY_VIEW'
Thanks,
Arun
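If others try this workaround, a slightly more scoped variant may reduce the blast radius. This is only a sketch (the extra predicates assume the standard SYSTEM.CATALOG columns TENANT_ID and TABLE_SCHEM), it does not remove the parent table's link rows, and it should be tested in a non-production environment first:

```shell
./sqlline.py localhost
# Scoping by tenant and schema avoids deleting a same-named view
# belonging to another schema or tenant:
> DELETE FROM SYSTEM.CATALOG
>   WHERE TABLE_NAME = 'MY_VIEW'
>   AND TABLE_SCHEM IS NULL AND TENANT_ID IS NULL;
```

After a manual catalog delete, bouncing the client (or otherwise clearing the server-side metadata cache) is advisable so stale view metadata is not served.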
Hello all,
While restarting the HBase servers after upgrading from Apache Phoenix
4.5.1-HBase-1.0 to 4.6.0-HBase-1.0, the master server fails to start with
the following exception.
2015-12-16 14:11:32,330 FATAL org.apache.hadoop.hbase.master.HMaster:
Failed to become active master
java.lang.Runtim
Hi Phoenix users and Developers,
Does Phoenix use HBase's BulkDeleteProtocol, or does it delete rows one at a
time? Kindly direct me to the appropriate class in the Phoenix source code
as well.
Thanks,
Arun
After trying to dig through and debug the Phoenix source code for several
hours, I could not find the place where the actual Phoenix delete happens.
Kindly point me to where the delete starts in the Phoenix core and where the
actual delete happens.
Note: I have checked classes like Delet
Thanks, James.
But I do not see Phoenix using HBase's BulkDeleteProtocol. Does this mean
Phoenix deletes rows one by one, in linear time?
Thanks,
Arun
To add details to the original problem mentioned in this email: we migrated
to Phoenix 4.6.1 very recently, and this problem started occurring only
after that.
1. Checking SYSTEM.CATALOG for some older Phoenix views in the same
environment, some of the Phoenix views did not have the IS_RO
r phoenix views, created pre-4.6, shouldn't have the ROW_TIMESTAMP
>> column. Was the upgrade done correctly i.e. the server jar upgraded before
>> the client jar? Is it possible to get the complete stack trace? Would be
>> great if you could come up with a test case here to und
James,
To add more information on this issue, this happens in new phoenix views
associated with brand new tables as well. So, this cannot be an
upgrade/migration issue. Not figured out a specific way to reproduce this
issue yet. Could you throw some ideas on what direction this problem could
be ap
before 4.6 upgrade?"
We do see that clearCache() is being called for 4.7, and for upgrades to
4.7, in the ConnectionQueryServicesImpl class, but not for 4.6.
Thanks,
Arun
On Tue, Apr 19, 2016 at 10:22 AM, Arun Kumaran Sabtharishi <
arun1...@gmail.com> wrote:
> James,
>
> To add more i
stopping upgrade
> code to add a new column.
>
> scan 'SYSTEM.CATALOG', {RAW=>true}
>
>
>
> Regards,
> Ankit Singhal
>
> On Wed, Apr 20, 2016 at 4:25 AM, Arun Kumaran Sabtharishi <
> arun1...@gmail.com> wrote:
>
>> After further in
', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
1 row in 0.6060 seconds
The above is the output of describing SYSTEM.CATALOG. The output of scan
'SYSTEM.CATALOG', {RAW=>true} is too large to include.
Thanks,
Arun
On Wed, Apr 20, 2016 at 11:19 AM, James Taylor
wrote:
> Arun,
>
\x00default
Thanks,
Arun
On Wed, Apr 20, 2016 at 11:31 AM, Arun Kumaran Sabtharishi <
arun1...@gmail.com> wrote:
> James,
>
> Table SYSTEM.CATALOG is ENABLED
> SYSTEM.CATALOG, {TABLE_ATTRIBUTES => {coprocessor$1 =>
> '|org.apache.phoenix.coprocessor.ScanR
st after grep for CATALOG in a command output (scan
> 'SYSTEM.CATALOG', {RAW=>true}).
>
> On Wed, Apr 20, 2016 at 10:07 PM, Arun Kumaran Sabtharishi <
> arun1...@gmail.com> wrote:
>
>> One more question to add,
>> Do we need to have 1000 versions, and KE
OOLEAN;
> >!quit
>
> Quit the shell and start new session without CurrentSCN.
> > ./sqlline.py localhost
> > !describe system.catalog
>
> this should resolve the issue of missing column.
>
> Regards,
> Ankit Singhal
>
>
> On Fri, Apr 22, 2016 at 3:02 AM
columns manually by following below
>> step;
>>
>> > ./sqlline.py localhost;CurrentSCN=9
>> > ALTER TABLE SYSTEM.CATALOG ADD BASE_COLUMN_COUNT INTEGER,
>> IS_ROW_TIMESTAMP BOOLEAN;
>> >!quit
>>
>> Quit the shell and start new session withou
the dependent features
> work correctly.(Remember use correct INTEGER byte representation for
> DATA_TYPE column).
>
> And, can you also please share output of
> > scan 'SYSTEM.SEQUENCE'
>
> Regards,
> Ankit
>
> On Fri, Apr 22, 2016 at 9:14 PM, Arun Kumaran Sabth
Hi Ankit,
Just following up with the question: when the ALTER statement was issued
with CurrentSCN=9, the current timestamp was not set to 9.
Will this cause an issue in the future if timestamps have to be compared?
Thanks,
Arun
On Mon, Apr 25, 2016 at 10:32 AM, Arun Kumaran Sabtharishi
eed to upgrade on the time
> stamp of the system catalog table.
> Thanks,
> James
>
>
> On Tuesday, April 26, 2016, Arun Kumaran Sabtharishi
> wrote:
>
>> Hi Ankit,
>>
>> Just following with the question that when the alter statement was issued
>> wit
action on SYSTEM.CATALOG
> - when you don't see those columns and open connection at currentSCN=9 and
> alter table to add both the columns.
> - you may set keep_deleted_cells back to true in SYSTEM.CATALOG
>
> Regards,
> Ankit Singhal
>
>
>
> Regards,
> Ank