Query on mapping Hbase table with Phoenix where rowkey is composite of multiple columns

2015-06-10 Thread Nishant Patel
Hi All, I have an HBase table where the rowkey is a composite key of 4 columns. Rowkey example: qualifier1|qualifier2|qualifier3|qualifier4. I want to create a Phoenix table mapping this HBase table. I always receive qualifier1, qualifier2 and qualifier3 as filter conditions in queries. How can I map Phoenix
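If the rowkey components were written with Phoenix's own composite-key encoding, the usual answer is to declare them as a composite primary key over a view. A minimal sketch over JDBC, assuming a hypothetical table name, column family, and column types (none of these are from the original message):

    import java.sql.DriverManager

    object MapCompositeRowkey {
      def main(args: Array[String]): Unit = {
        // Placeholder ZooKeeper quorum; adjust to your cluster.
        val conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")
        // Declaring the four rowkey components as a composite PRIMARY KEY lets
        // filters on the leading columns (qualifier1..3) become rowkey range scans.
        conn.createStatement().execute(
          """CREATE VIEW "my_hbase_table" (
            |  qualifier1 VARCHAR NOT NULL,
            |  qualifier2 VARCHAR NOT NULL,
            |  qualifier3 VARCHAR NOT NULL,
            |  qualifier4 VARCHAR NOT NULL,
            |  "cf"."data" VARCHAR,
            |  CONSTRAINT pk PRIMARY KEY (qualifier1, qualifier2, qualifier3, qualifier4)
            |)""".stripMargin)
        conn.close()
      }
    }

One caveat: Phoenix separates variable-length key columns with a zero byte, so an existing rowkey that is literally a single pipe-delimited string will not split into four columns this way. In that case the fallback is to map the whole rowkey as one VARCHAR primary-key column and filter with LIKE 'q1|q2|q3|%'.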

Re: Apache Phoenix (4.3.1 and 4.4.0-HBase-0.98) on Spark 1.3.1 ClassNotFoundException

2015-06-10 Thread Jeroen Vlek
Hi Josh, That worked! Thank you so much! (I can't believe it was something so obvious ;) ) If you care about such a thing, you could answer my question here for the bounty: http://stackoverflow.com/questions/30639659/apache-phoenix-4-3-1-and-4-4-0-hbase-0-98-on-spark-1-3-1-classnotfoundexceptio Hav

Re: Re: Error using sqlline.py

2015-06-10 Thread Fulin Sun
Hi there, if you are using CDH 5.4.x and trying to integrate with Phoenix 4.4, you may want to recompile Phoenix 4.4 as described in the thread below. http://mail-archives.apache.org/mod_mbox/phoenix-user/201506.mbox/%3c2015060410170208509...@certusnet.com.cn%3E Best, Sun. CertusNet From: Bahubali J

Apache Phoenix Flume plugin usage

2015-06-10 Thread Buntu Dev
I plan on using Apache Phoenix on CDH and installed the parcel (4.3.0-clabs-phoenix-1.0.0-all). Does the Flume client still need to be built for the client jar as mentioned in the install and setup steps here: http://phoenix.apache.org/flume.html I'm also looking for some examples of ingesting e
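For reference, the sink described on that page is configured along these lines. This is a sketch only: the agent, sink, table, and column names are hypothetical placeholders, and it assumes the phoenix-flume JAR is on Flume's classpath:

    # flume.conf sketch -- agent/sink/table/column names are placeholders
    agent.sinks.phx.type = org.apache.phoenix.flume.sink.PhoenixSink
    agent.sinks.phx.channel = memChannel
    agent.sinks.phx.batchSize = 100
    agent.sinks.phx.zookeeperQuorum = localhost
    agent.sinks.phx.table = EVENTS
    agent.sinks.phx.ddl = CREATE TABLE IF NOT EXISTS EVENTS (uid VARCHAR NOT NULL, msg VARCHAR CONSTRAINT pk PRIMARY KEY (uid))
    agent.sinks.phx.serializer = regex
    agent.sinks.phx.serializer.regex = ([^,]*),([^,]*)
    agent.sinks.phx.serializer.columns = uid,msg

The regex serializer splits each Flume event body with the given pattern and upserts the capture groups into the listed columns.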

Re: Error using sqlline.py

2015-06-10 Thread Bahubali Jain
I have CDH 5.4, and the problem seems to be due to an incompatibility, as per the thread below. http://mail-archives.apache.org/mod_mbox/phoenix-user/201506.mbox/%3CCAAF1JdhQUXwDwOJm6e38jjiBo-Kkm1=igagdrzmw-fxyrwk...@mail.gmail.com%3E On Wed, Jun 10, 2015 at 10:33 PM, Bahubali Jain wrote: > I d

Re: Bulk loading through HFiles

2015-06-10 Thread Yiannis Gkoufas
Hi Dawid, yes, I have been using your code. Probably I am invoking the classes incorrectly. val data = readings.map(e => e.split(",")) .map(e => (e(0),e(1).toLong,e(2).toDouble,e(3).toDouble)) val tableName = "TABLE"; val columns = Seq("SMID","DT","US","GEN"); val zkUrl = Some("localhost:2181

Re: Join create OOM with java heap space on phoenix client

2015-06-10 Thread Krunal Varajiya
Hey Maryann, Sorry, I just realized that I am using Phoenix 4.3.0. I am using sample tables generated by the performance.py script from the Phoenix package! I have generated 5M, 25M, and 100M row tables using this script and am running joins on these tables! Here is the table definition: TABLE_CAT
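When the build side of Phoenix's default hash join outgrows the available heap, one workaround often suggested (in Phoenix versions that support it) is forcing a sort-merge join with a query hint. A hedged sketch over JDBC; the table and column names below follow the performance.py naming pattern but are illustrative, not taken from the original message:

    import java.sql.DriverManager

    // Illustrative only: the USE_SORT_MERGE_JOIN hint streams both sides in
    // sorted order instead of building an in-memory hash table for one side.
    val conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")
    val rs = conn.createStatement().executeQuery(
      "SELECT /*+ USE_SORT_MERGE_JOIN */ COUNT(*) " +
      "FROM PERFORMANCE_5000000 a JOIN PERFORMANCE_25000000 b ON a.HOST = b.HOST")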

Re: Phoenix drop view not working after 4.3.1 upgrade

2015-06-10 Thread Arun Kumaran Sabtharishi
Hi James, When I tried twice, the issue could be reproduced the first time but not the second time in my local environment. However, the degradation in performance as the number of views/columns grows is evident, and it is logged. I have added the GitHub URL for the test project, which does the followin

Re: Bulk loading through HFiles

2015-06-10 Thread Dawid
Thx a lot James. That's the case. On 10.06.2015 19:50, James Taylor wrote: David, It might be timestamp related. Check the timestamp of the rows/cells you imported from the HBase shell. Are the timestamps later than the server timestamp? In that case, you wouldn't see that data. If this is the

Re: Bulk loading through HFiles

2015-06-10 Thread James Taylor
David, It might be timestamp related. Check the timestamp of the rows/cells you imported from the HBase shell. Are the timestamps later than the server timestamp? In that case, you wouldn't see that data. If this is the case, you can try specifying the CURRENT_SCN property at connection time with a
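The message is cut off there, but the property it refers to is read under the JDBC connection key CurrentSCN: queries on such a connection see cells up to that timestamp, so a value safely beyond the bulk-loaded cells' timestamps makes them visible. A minimal sketch (quorum and offset are placeholders):

    import java.sql.DriverManager
    import java.util.Properties

    // Connect "as of" a future timestamp so cells stamped later than the
    // server's current time (e.g. from a bulk load) fall inside the scan range.
    val props = new Properties()
    props.setProperty("CurrentSCN", String.valueOf(System.currentTimeMillis() + 3600L * 1000))
    val conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)

Beware that upserts issued on a CurrentSCN connection are also written at that timestamp.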

Re: Bulk loading through HFiles

2015-06-10 Thread Dawid
Yes, that's right: I have generated HFiles that I managed to load so they are visible in HBase, but I can't make them 'visible' to Phoenix. What I noticed today: I have rows loaded from the generated HFiles and rows upserted through sqlline; when I run 'DELETE FROM TABLE', only the upserted ones disappear

Re: pherf question

2015-06-10 Thread Cody Marcel
There are no limits to the syntax. Pherf just sends whatever is in the files it reads directly to the Phoenix client. Can you post the exact file you are using? Also, a couple of pointers. - Use a different file than the ones provided. Just put it in the config directory created in the installation. Pherf will

Re: Error using sqlline.py

2015-06-10 Thread Bahubali Jain
I downloaded it from the link below: http://supergsego.com/apache/phoenix/phoenix-4.4.0-HBase-1.0/ I have a single-node install of Hadoop and had restarted the HBase daemons. On Jun 10, 2015 10:22 PM, "Nick Dimiduk" wrote: > Just double-check: you're using the HBase-1.0 Phoenix build with HBase > 1.

Re: Error using sqlline.py

2015-06-10 Thread Nick Dimiduk
Just double-check: you're using the HBase-1.0 Phoenix build with HBase 1.0? Did you replace the Phoenix jar with the new 4.4.0 bits on all HBase hosts and restart HBase daemons? -n On Wed, Jun 10, 2015 at 6:51 AM, Bahubali Jain wrote: > Hi, > I am running into below issue while trying to connec

pherf question

2015-06-10 Thread Perko, Ralph J
Hi, I am experimenting with the Pherf utility and had some questions. Running the utility with the provided scenario and data model works really well. However, when I try to add parameters to the CREATE TABLE statement or do something else like add a secondary index, I get what appears to be syn

Re: Bulk loading through HFiles

2015-06-10 Thread Yiannis Gkoufas
Hi Dawid, I am trying to do the same thing, but I hit a wall while writing the HFiles, getting the following error: java.io.IOException: Added a key not lexically larger than previous key=\x00\x168675230967GMP\x00\x00\x00\x01=\xF4h)\xE0\x010GEN\x00\x00\x01M\xDE.\xB4T\x04, lastkey=\x00\x168675230967
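That exception is the HFile writer enforcing total lexicographic key order: every KeyValue must sort strictly after the previous one (by row, then column family, then qualifier). In Spark the usual fix is a global byte-wise sort of the pairs before they reach the HFile writer. A sketch of the idea, independent of Dawid's helper (whose API is not shown in this thread):

    import org.apache.spark.rdd.RDD
    import org.apache.hadoop.hbase.util.Bytes

    // HFiles must be fed keys in strictly increasing byte order, so sort the
    // (rowkey, value) pairs with HBase's byte comparator before writing them.
    // Cells within one row must also be emitted in column-qualifier order.
    def lexicallyOrdered(pairs: RDD[(Array[Byte], Array[Byte])]): RDD[(Array[Byte], Array[Byte])] = {
      implicit val byteOrdering: Ordering[Array[Byte]] =
        Ordering.comparatorToOrdering(Bytes.BYTES_COMPARATOR)
      pairs.sortByKey()
    }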

Error using sqlline.py

2015-06-10 Thread Bahubali Jain
Hi, I am running into the below issue while trying to connect using sqlline.py; can you please shed some light on this? HBase version is 1.0 and Phoenix version is 4.4. SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4
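As an aside, the SLF4J lines are harmless logging warnings; the actual failure is whatever follows them. A minimal connectivity smoke test from code, with a placeholder ZooKeeper quorum:

    import java.sql.DriverManager

    // If the Phoenix client and server versions match, this connects and prints
    // the server version; a mismatch surfaces here as the underlying exception.
    val conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")
    println(conn.getMetaData.getDatabaseProductVersion)
    conn.close()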

Re: Apache Phoenix (4.3.1 and 4.4.0-HBase-0.98) on Spark 1.3.1 ClassNotFoundException

2015-06-10 Thread Josh Mahonin
Hi Jeroen, Rather than bundle the Phoenix client JAR with your app, are you able to include it in a static location, either via the SPARK_CLASSPATH or via the conf values below (I use SPARK_CLASSPATH myself, though it's deprecated): spark.driver.extraClassPath spark.executor.extraClassPath Jo
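A sketch of the second option; the JAR path is a placeholder. Note that the driver-side setting must be in place before the driver JVM starts, so it belongs in spark-defaults.conf (or on the spark-submit command line) rather than in a SparkConf built inside the job:

    # spark-defaults.conf sketch -- the JAR path is a placeholder
    spark.driver.extraClassPath   /opt/phoenix/phoenix-4.4.0-HBase-0.98-client.jar
    spark.executor.extraClassPath /opt/phoenix/phoenix-4.4.0-HBase-0.98-client.jar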

Re: Apache Phoenix (4.3.1 and 4.4.0-HBase-0.98) on Spark 1.3.1 ClassNotFoundException

2015-06-10 Thread Jeroen Vlek
Hi Josh, Thank you for your effort. Looking at your code, I feel that mine is semantically the same, except written in Java. The dependencies in the pom.xml all have the scope provided. The job is submitted as follows: $ rm spark.log && MASTER=spark://maprdemo:7077 /opt/mapr/spark/spark-1.3.1/