Re: Exception while executing batch

2015-10-02 Thread Sumit Nigam
You can print the stack trace to see what the issue is. I ran into a similar problem when my connection got shared between multiple threads. From: Hafiz Mujadid To: user@phoenix.apache.org Sent: Friday, October 2, 2015 11:24 PM Subject: Exception while executing batch I am using prepared

Re: Schema design

2015-10-02 Thread Sumit Nigam
Hi Buntu, possibly the following schema can help? Rowkey with columns user, X, Y, timestamp (composite PK with user as the leading column). You can MD5 each field to make it fixed length if you want. Then also make the timestamp column your secondary index. Salt the table. I think a single table is enough here
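The suggestion above might look like the following Phoenix DDL. This is a sketch only: the table, column, and index names and the bucket count are hypothetical, not from the thread.

```sql
-- Hypothetical layout for the suggested schema: composite PK with the
-- user as the leading column, salted to spread writes across regions.
CREATE TABLE USER_ATTRS (
    USR  VARCHAR NOT NULL,   -- could be MD5-hashed to a fixed length
    ATTR VARCHAR NOT NULL,   -- attribute name (X)
    VAL  VARCHAR NOT NULL,   -- attribute value (Y)
    TS   DATE    NOT NULL,   -- observation timestamp
    CONSTRAINT PK PRIMARY KEY (USR, ATTR, VAL, TS)
) SALT_BUCKETS = 8;

-- Secondary index so time-range queries need not lead with the user.
CREATE INDEX USER_ATTRS_TS_IDX ON USER_ATTRS (TS);
```

With this layout the first question (user, X, Y at t1) is a point lookup on the primary key, and the time-range question can be served from the index.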

Schema design

2015-10-02 Thread Buntu Dev
I'm trying to design the Phoenix/HBase table schema in order to answer these questions: * Does the given user have attribute X with value Y? and at given time t1. * Get list of users who had attribute X with value Y between timestamps t1 and t2? * Get all the attributes of user at or around a give

Re: Estimating the "cost" of a query

2015-10-02 Thread James Taylor
Hi Alok, Yes, you could calculate an estimate for this information, but it isn't currently exposed through JDBC or through the explain plan (which would be a good place for it to live). You'd need to dip down to the implementation to get it. Something like this: PhoenixStatement statement = connec
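James's suggestion involves unwrapping the JDBC objects down to Phoenix's implementation classes. A rough sketch of what that might look like follows; the exact QueryPlan accessors vary by Phoenix version, so treat the method names and the URL/table as assumptions to verify against your version.

```java
// Sketch only: dips below JDBC into Phoenix internals, as suggested above.
// Requires phoenix-core on the classpath; accessor names may differ by version.
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.phoenix.compile.QueryPlan;
import org.apache.phoenix.jdbc.PhoenixStatement;

public class QueryCost {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
        PhoenixStatement stmt =
            conn.createStatement().unwrap(PhoenixStatement.class);
        // Compile (but do not execute) the query so the plan can be inspected.
        QueryPlan plan = stmt.optimizeQuery("SELECT * FROM MY_TABLE WHERE K > 10");
        // The number of parallel scans is one rough proxy for query cost.
        System.out.println("Parallel scans: " + plan.getScans().size());
        conn.close();
    }
}
```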

Estimating the "cost" of a query

2015-10-02 Thread Alok Singh
Is there a way to figure out how many rows/cells were scanned in HBase to perform a Phoenix query? I tried using the explain command, but it is not clear how to estimate the number of rows touched by looking at the explain plan. Essentially, I want to be able to report back to users the "cost" of per

Re: HBase MOB

2015-10-02 Thread James Taylor
Forgot to mention, for syntax examples, see http://phoenix.apache.org/language/index.html#create_table On Fri, Oct 2, 2015 at 9:24 AM, James Taylor wrote: > Hi Cristofer, > > Though I haven't explicitly tried this, in theory you should be able to > set the IS_MOB and MOB_THRESHOLD on a column family

Re: table metaData

2015-10-02 Thread Hafiz Mujadid
Thanks, JDBC worked for me. On Sat, Oct 3, 2015 at 1:56 AM, James Heather wrote: > The JDBC methods work just fine. > > You're really better off using them, rather than querying the internal > tables, because if implementation details change, your code will break. > On 2 Oct 2015 21:29, "Konstanti

Re: table metaData

2015-10-02 Thread James Heather
The JDBC methods work just fine. You're really better off using them, rather than querying the internal tables, because if implementation details change, your code will break. On 2 Oct 2015 21:29, "Konstantinos Kougios" wrote: > I didn't try the jdbc getMetaData methods. If those don't work, you
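The JDBC route recommended above is just the standard java.sql.DatabaseMetaData API; a minimal sketch (the connection URL and table name are placeholders):

```java
// Standard JDBC metadata lookup; Phoenix supports this like any other driver.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ShowColumns {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
        DatabaseMetaData md = conn.getMetaData();
        // null catalog/schema patterns match everything; table name is a pattern.
        try (ResultSet rs = md.getColumns(null, null, "MY_TABLE", null)) {
            while (rs.next()) {
                System.out.println(rs.getString("COLUMN_NAME")
                        + " : " + rs.getString("TYPE_NAME"));
            }
        }
        conn.close();
    }
}
```

Unlike a raw query against SYSTEM.CATALOG, this keeps working even if Phoenix changes how it stores metadata internally.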

Re: table metaData

2015-10-02 Thread Konstantinos Kougios
I didn't try the JDBC getMetaData methods. If those don't work, you can always query the system tables, e.g.: select * from system.catalog; On 02/10/15 21:26, Hafiz Mujadid wrote: Hi all! How can I get table metadata, like column types, in Phoenix using Phoenix JDBC? Thanks

table metaData

2015-10-02 Thread Hafiz Mujadid
Hi all! How can I get table metadata, like column types, in Phoenix using Phoenix JDBC? Thanks

Exception while executing batch

2015-10-02 Thread Hafiz Mujadid
I am using a prepared statement to execute a batch upsert into HBase via Phoenix JDBC. When executeBatch is called, the following exception occurs: ERROR 1106 (XCL06): Exception while executing batch. Any idea?
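ERROR 1106 (XCL06) wraps whatever actually failed inside the batch, so (as Sumit suggests) the usual way to see the root cause is to print the chained SQLExceptions. A hypothetical batch upsert with that diagnostic pattern, assuming a placeholder table and URL:

```java
// Hypothetical batch upsert; the catch block shows how to surface the
// underlying cause behind ERROR 1106 (XCL06) by walking the exception chain.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchUpsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost");
             PreparedStatement ps = conn.prepareStatement(
                 "UPSERT INTO MY_TABLE (K, V) VALUES (?, ?)")) {
            for (int i = 0; i < 100; i++) {
                ps.setInt(1, i);
                ps.setString(2, "val-" + i);
                ps.addBatch();
            }
            ps.executeBatch();
            conn.commit();   // Phoenix buffers mutations until commit
        } catch (SQLException e) {
            // The chained exceptions usually carry the real error.
            for (SQLException cur = e; cur != null; cur = cur.getNextException()) {
                cur.printStackTrace();
            }
        }
    }
}
```

Also note Sumit's point in the reply: sharing one Connection across threads can produce this kind of failure, so give each thread its own connection.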

Re: HBase MOB

2015-10-02 Thread James Taylor
Hi Cristofer, Though I haven't explicitly tried this, in theory you should be able to set the IS_MOB and MOB_THRESHOLD on a column family in the CREATE TABLE or ALTER TABLE calls. You can prefix the property with the column family name if you want it to only apply to that column family. Phoenix ju
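Per James's (explicitly untested) suggestion, the column-family-scoped properties might be set like this. The syntax is an assumption to verify: the property names come from HBase's MOB support, and the table, column family, and threshold are hypothetical.

```sql
-- Hypothetical, per the note above: HBase MOB properties passed through
-- CREATE TABLE, scoped to column family CF by prefixing the property name.
CREATE TABLE DOCS (
    ID BIGINT NOT NULL PRIMARY KEY,
    CF.DATA VARBINARY
) CF.IS_MOB = true, CF.MOB_THRESHOLD = 102400;
```

Without the CF. prefix, the property would apply to all column families in the table.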

Re: Fixed bug in PMetaDataImpl

2015-10-02 Thread James Heather
Commit message updated. On 02/10/15 17:02, James Taylor wrote: Patch looks great - thanks so much, James. Would you mind prefixing the commit message with "PHOENIX-2256" as that's what ties the pull to the JIRA? I'll get this committed today. James On Fri, Oct 2, 2015 at 7:34 AM, James H

HBase MOB

2015-10-02 Thread Cristofer Weber
Hi there! Is there any plan to allow mapping binary/varchar columns to HBase MOB, or is this already possible somehow? I did some searches on the web and found nothing related to MOBs. Regards, Cristofer

Re: Fixed bug in PMetaDataImpl

2015-10-02 Thread Cody Marcel
I think that's a duplicate bug. Can we consolidate them? https://issues.apache.org/jira/browse/PHOENIX-2172 On Fri, Oct 2, 2015 at 9:02 AM, James Taylor wrote: > Patch looks great - thanks so much, James. Would you mind prefixing the > commit message with "PHOENIX-2256" as that's what ties the

Re: Append data in hbase using spark-phoenix

2015-10-02 Thread Konstantinos Kougios
Hi, use rdd.saveToPhoenix(), where rdd must be a tuple RDD. I.e., create a table: CREATE TABLE OUTPUT_TEST_TABLE (id BIGINT NOT NULL PRIMARY KEY, col1 VARCHAR, col2 INTEGER) SALT_BUCKETS = 8; and run this job: package com.aktit.phoenix import org.apache.spark.{Logging, SparkConf, Spar

Re: Fixed bug in PMetaDataImpl

2015-10-02 Thread James Taylor
Patch looks great - thanks so much, James. Would you mind prefixing the commit message with "PHOENIX-2256" as that's what ties the pull to the JIRA? I'll get this committed today. James On Fri, Oct 2, 2015 at 7:34 AM, James Heather wrote: > Hi all (@James T in particular), > > I've submitte

Append data in hbase using spark-phoenix

2015-10-02 Thread Hafiz Mujadid
Hi all! I want to append data to an HBase table using the spark-phoenix connector. How can I append data into an existing table? Thanks

Fixed bug in PMetaDataImpl

2015-10-02 Thread James Heather
Hi all (@James T in particular), I've submitted a pull request to fix the bug I reported in https://issues.apache.org/jira/browse/PHOENIX-2256 concerning a failing unit test in Java 8. It was a genuine bug in PMetaDataImpl that just happened to sneak through the tests in Java 7 but not Java