Re: Phoenix and Hive

2015-06-16 Thread 김영우
Looks like the Hive storage handler for Phoenix is a work in progress: https://issues.apache.org/jira/browse/PHOENIX-331 https://github.com/nmaillard/Phoenix-Hive HTH - Youngwoo On Wed, Jun 17, 2015 at 3:26 PM, Buntu Dev wrote: > I got quite a bit of data in Hive managed tables and I am looking for ways > to join those tables

Re: Phoenix and Hive

2015-06-16 Thread Fulin Sun
Hi there, AFAIK Spark can smoothly read Hive data using HiveContext, or use the DataFrame and Data Source APIs to read any external data source and transform it into a DataFrame. So I would recommend using the phoenix-spark module to achieve this goal. And you can simply choose to write a Spark DataFrame to

Phoenix and Hive

2015-06-16 Thread Buntu Dev
I got quite a bit of data in Hive managed tables and I am looking for ways to join those tables with the ones I create in Phoenix. I'm aware of the HBase and Hive integration, but not sure if there is any current support for Phoenix and Hive integration. Please let me know. Thanks!

How to set the URL for Hbase with custom zookeeper.znode.parent

2015-06-16 Thread guxiaobo1982
Hi, I am trying to use SQuirreL SQL to connect to a remote single-node HBase instance with a custom zookeeper.znode.parent of /hbase-unsecure; the hbase.zookeeper.quorum is lix2.bh.com with port 2181. What should the JDBC URL be? The steps at http://phoenix.apache.org/installation.html
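The Phoenix JDBC URL takes the ZooKeeper quorum, client port, and znode parent as colon-separated parts, so with the values in this question it would presumably be:

```
jdbc:phoenix:lix2.bh.com:2181:/hbase-unsecure
```

The same connect string (minus the jdbc:phoenix: prefix) can be passed to sqlline.py as its argument.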

Re: Which jar to put into Hbase region servers' classpath?

2015-06-16 Thread 丁桂涛(桂花)
Please follow the instructions here: 1. Add phoenix-[version]-server.jar to the classpath of every HBase region server and master, and remove any previous version. For Phoenix 4.4.0, the file should be phoenix-4.4.0-HBase-1.0-server.jar or
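Step 1 above amounts to something like the following on each region server and master node (a sketch only; $HBASE_HOME and the jar location are assumptions for your installation):

```sh
# On each HBase region server and master:
rm -f $HBASE_HOME/lib/phoenix-*-server.jar           # remove any previous version
cp phoenix-4.4.0-HBase-1.0-server.jar $HBASE_HOME/lib/
# Restart the HBase services so the new jar is picked up.
```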

Which jar to put into Hbase region servers' classpath?

2015-06-16 Thread guxiaobo1982
Hi, For Phoenix 4.4.0 there are so many jars, but which one should be put into the HBase region servers' classpath, and which one should be added on the client side, which does not have the HBase client library installed? Thanks.

Re: Re: table alias

2015-06-16 Thread Fulin Sun
Hi there, I think the root cause of the problem has been explained explicitly by James: you may need to remove the double quotes, since unquoted identifiers are normalized to upper case. The BI tool may need to be adjusted to do that. Best, Sun. CertusNet From: Yufan Liu Date: 2015-06-17 07:45 To:

Re: Deleting phoenix tables and views using hbase shell

2015-06-16 Thread James Taylor
Arun, We need to get to the bottom of the issue you're seeing. It'd be great if you could narrow it down or profile it to get more information. Thanks, James On Tuesday, June 16, 2015, Arun Kumaran Sabtharishi wrote: > Because there is some problem with a particular environment where phoenix > d

Re: Deleting phoenix tables and views using hbase shell

2015-06-16 Thread Arun Kumaran Sabtharishi
Because there is a problem in a particular environment where the Phoenix drop is timing out.

Re: Deleting phoenix tables and views using hbase shell

2015-06-16 Thread anil gupta
Curious to know, is there any reason you don't want to use Phoenix for deleting tables? ~Anil On Tue, Jun 16, 2015 at 3:45 PM, Arun Kumaran Sabtharishi < arun1...@gmail.com> wrote: > Hello phoenix users and developers, > > Is it possible to delete phoenix tables/views using hbase shell(by > delet

Re: table alias

2015-06-16 Thread Yufan Liu
Hi James, I changed everything in the query to upper case: select "FACT"."C1" as "C0" from (select COL1 as C1 from T1) as "FACT", and it returns the same error: "Undefined column family. familyName=FACT.null." But when I change it to FACT."C1", it works fine. It seems there is a problem with double quot

Deleting phoenix tables and views using hbase shell

2015-06-16 Thread Arun Kumaran Sabtharishi
Hello phoenix users and developers, Is it possible to delete phoenix tables/views using hbase shell (by deleting specific columns in SYSTEM.CATALOG)? If so, based on what row key, or which rows have to be deleted in the SYSTEM.CATALOG table? Thanks, Arun

Re: Join create OOM with java heap space on phoenix client

2015-06-16 Thread Krunal Varajiya
Thanks Maryann, I have tried the hint "USE_SORT_MERGE_JOIN", but it just gets stuck for 2-3 hours and doesn't show any errors. It does copy data into ResultSpooler on the client machine, and then I see it merge some data into ResultSpooler and delete some of the ResultSpooler files, but after that I don't get an

Re: Join create OOM with java heap space on phoenix client

2015-06-16 Thread Maryann Xue
Hi Krunal, Can you try a merge join by specifying the hint "USE_SORT_MERGE_JOIN"? And if it still does not work, would you mind posting the exact error message from running this merge join? Thanks, Maryann On Tue, Jun 16, 2015 at 6:12 PM, Krunal Varajiya wrote: > Does anybody has any idea w
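The hint Maryann mentions goes in a comment immediately after SELECT. A minimal sketch (the table and column names are made up for illustration):

```sql
SELECT /*+ USE_SORT_MERGE_JOIN */ o.order_id, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id;
```

Unlike the default hash join, which needs to hold one side of the join in memory, a sort-merge join streams both sides, which is why it is worth trying when a join hits OOM errors.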

Re: Join create OOM with java heap space on phoenix client

2015-06-16 Thread Krunal Varajiya
Does anybody have any idea why the join below is throwing an OOM error? I would really appreciate any help here. We are stuck, as none of our joins works, even with 5M rows. From: Krunal Reply-To: "user@phoenix.apache.org"

Re: table alias

2015-06-16 Thread James Taylor
Hi Yanlin, That's a legit error as well. Putting double quotes around an identifier makes it case sensitive. Without double quotes, identifiers are normalized by upper casing them. So you don't have a "c1" column, but you do have a "C1" column. Thanks, James On Tue, Jun 16, 2015 at 2:23 PM, yanlin
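James's normalization rule can be seen in a small sketch against the t1 table used elsewhere in this thread: unquoted identifiers are upper-cased at parse time, while double-quoted ones must match the stored case exactly.

```sql
CREATE TABLE t1 (k VARCHAR PRIMARY KEY, col1 VARCHAR);  -- stored as K, COL1

SELECT col1 FROM t1;    -- works: col1 is normalized to COL1
SELECT "COL1" FROM t1;  -- works: quoted, matches the stored upper-case name
SELECT "col1" FROM t1;  -- fails: no column literally named "col1"
```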

Re: table alias

2015-06-16 Thread yanlin wang
Hi James, Sorry about the bad test case. The actual test case should be this: select "fact"."c1" as "c0" from (select col1 as c1 from t1) as "fact"; Error: ERROR 1001 (42I01): Undefined column family. familyName=fact.null SQLState: 42I01 ErrorCode: 1001 The BI tool I am using tries to generat

Re: table alias

2015-06-16 Thread James Taylor
Hi Yufan, The outer query should use the alias name (c1). If not, please file a JIRA when you have a chance. Thanks, James On Tue, Jun 16, 2015 at 2:03 PM, yanlin wang wrote: > Thanks James. My example is bad … > > >> On Jun 16, 2015, at 1:39 PM, James Taylor wrote: >> >> Hi Yanlin, >> The first

Re: table alias

2015-06-16 Thread yanlin wang
Thanks James. My example is bad … > On Jun 16, 2015, at 1:39 PM, James Taylor wrote: > > Hi Yanlin, > The first error is legit: you're aliasing col1 as c1 in the inner > query but then trying to select it as col1 in the outer query. > > The second error is a known limitation of derived tables

Re: table alias

2015-06-16 Thread Yufan Liu
Hi James, I have tried the query you guys were using above (select fact.c1 from (select k as k1, col1 as c1 from t1) as fact), and it works. But in the result set it displays the original column name (col1) instead of the alias (c1). Is that expected behavior? 2015-06-16 13:39 GMT-07:00 James Tay

Re: table alias

2015-06-16 Thread James Taylor
Hi Yanlin, The first error is legit: you're aliasing col1 as c1 in the inner query but then trying to select it as col1 in the outer query. The second error is a known limitation of derived tables (PHOENIX-2041). Thanks, James On Tue, Jun 16, 2015 at 11:48 AM, yanlin wang wrote: > Hi James, > >
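Putting James's two points together, the working form of the query from this thread refers to the inner alias in the outer query:

```sql
-- The inner query aliases col1 as c1, so the outer query must select c1,
-- not col1.
SELECT fact.c1 FROM (SELECT k AS k1, col1 AS c1 FROM t1) AS fact;
```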

Re: PhoenixIOException :Setting the query timeout

2015-06-16 Thread Thomas D'Silva
Bahubali, hbase-site.xml needs to be on the client's CLASSPATH in order to get picked up, or else it will use the default timeout. When using sqlline, it sets the CLASSPATH to the HBASE_CONF_PATH environment variable, which defaults to the current directory. Try running sqlline directly from the bin d
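A minimal client-side hbase-site.xml entry for the query timeout would look like this (the value shown is just an example):

```xml
<configuration>
  <property>
    <name>phoenix.query.timeoutMs</name>
    <value>600000</value> <!-- 10 minutes, in milliseconds -->
  </property>
</configuration>
```

This file must sit on the client's CLASSPATH (e.g. the directory HBASE_CONF_PATH points to) for the setting to take effect.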

Re: table alias

2015-06-16 Thread yanlin wang
Hi James, I figured the error I got was not a Phoenix version issue, and here is a test case with which you can reproduce it: create table t1 (k varchar primary key, col1 varchar); select fact.col1 from (select k as k1, col1 as c1 from t1) as fact; Error: ERROR 1001 (42I01): Undefined column family.

Re: table alias

2015-06-16 Thread yanlin wang
Hi James, Thanks for the info. I am using the Cloudera distribution CLABS_PHOENIX-4.3.0-1.clabs_phoenix1.0.0.p0.78; that could be the issue. I will try to play with other versions. Thx Yanlin > On Jun 16, 2015, at 9:34 AM, James Taylor wrote: > > Hi Yanlin, > What version of Phoenix are you using

Re: table alias

2015-06-16 Thread James Taylor
Hi Yanlin, What version of Phoenix are you using? I tried the following in sqlline, and it worked fine: 0: jdbc:phoenix:localhost> create table t1 (k varchar primary key, col1 varchar); No rows affected (10.29 seconds) 0: jdbc:phoenix:localhost> select fact.col1 from (select col1 from t1) as fact;

Re: Bulk loading through HFiles

2015-06-16 Thread Dawid Wysakowicz
Hi Yiannis, I've resolved the issue I hit when running the code on a bigger set of data. I will try to post the code once I polish it a bit. The partitions should be sorted with a KeyValue sorter before bulk-saving them. 2015-06-16 15:10 GMT+02:00 Yiannis Gkoufas : > Hi, > > didn't realize that I only sent
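Dawid's sorting step can be sketched roughly as follows. This is a sketch only, not his actual code: it assumes a Spark pair RDD of (ImmutableBytesWritable, KeyValue) named kvRdd, an HBase Configuration named hbaseConf, and HBase's HFileOutputFormat2; the staging path is made up.

```scala
import org.apache.hadoop.hbase.KeyValue
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2

// HFiles require cells in strictly increasing key order -- writing
// unsorted data is what produces "Added a key not lexically larger
// than previous key". Sort globally by row key before writing.
// (You may need to supply an implicit Ordering[ImmutableBytesWritable].)
val sorted = kvRdd.sortByKey()

sorted.saveAsNewAPIHadoopFile(
  "/tmp/hfiles",                    // staging dir, later bulk-loaded into HBase
  classOf[ImmutableBytesWritable],
  classOf[KeyValue],
  classOf[HFileOutputFormat2],
  hbaseConf)
```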

Re: Bulk loading through HFiles

2015-06-16 Thread Yiannis Gkoufas
Hi, didn't realize that I only sent to Dawid. Resending to the entire list in case someone else has encountered this error before: 15/06/10 23:45:16 WARN TaskSetManager: Lost task 34.48 in stage 0.0 (TID 816, iriclusnd20): java.io.IOException: Added a key not lexically larger than previous key=\

PhoenixIOException :Setting the query timeout

2015-06-16 Thread Bahubali Jain
Hi, I am running into the below exception when I run a select query on a table with 10 million rows (using sqlline.py). I added the parameter phoenix.query.timeoutMs to the hbase-site.xml present in the bin directory where sqlline.py is located, but for some reason it doesn't seem to be taking effect.

RE: How to increase call timeout/count rows?

2015-06-16 Thread Riesland, Zack
Thanks James, So to clarify: changing phoenix.query.timeoutMs via Ambari is insufficient. I have to add/adjust this value by hand on each of my region servers (I'm guessing that's what client-side means here). Is that correct? -Original Message- From: James Taylor [mailto:jamestay...@a

Re: Mapping existing hbase table with phoenix view/table

2015-06-16 Thread Nishant Patel
Hi James, Thanks for your response. It is working now. Regards, Nishant On Mon, Jun 15, 2015 at 11:33 PM, James Taylor wrote: > Hi Nishant, > Have you seen this: > > https://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table > > Your row key is a byte[] in HBase. It