Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Jonathan Leech
You could also take a snapshot in hbase just prior to the drop table, then restore it afterward.
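The snapshot idea above could look roughly like this in the HBase shell (a sketch only; the table and snapshot names are made-up examples, and whether you restore or clone afterward depends on whether the Phoenix DROP TABLE actually deleted the underlying HBase table):

```
# Before the drop, snapshot the backing HBase table:
snapshot 'MY_TABLE', 'my_table_before_drop'

# ... run DROP TABLE from Phoenix ...

# If the drop deleted the HBase table, clone_snapshot recreates it:
clone_snapshot 'my_table_before_drop', 'MY_TABLE'

# If the table still exists but was emptied, disable/restore/enable instead:
# disable 'MY_TABLE'
# restore_snapshot 'my_table_before_drop'
# enable 'MY_TABLE'
```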

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Steve Terrell
Thanks for your quick and accurate responses!

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Ankit Singhal
Yes, data will be deleted during the DROP TABLE command, and currently there is no parameter to control that. You may raise a JIRA for this. A workaround you may try is opening a connection at a timestamp a little greater than the last modified timestamp of the table and then running the DROP TABLE command, but rem…
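A minimal sketch of that workaround, assuming "opening a connection at a timestamp" refers to Phoenix's CurrentSCN connection property (the JDBC URL, table name, and the last-modified timestamp value are placeholders; the real timestamp would come from the table's metadata):

```java
import java.util.Properties;

public class DropAtTimestamp {

    // Hypothetical helper: builds connection properties pinned to a point
    // in time just after the table's last modification.
    static Properties scnProps(long lastModifiedTs) {
        Properties props = new Properties();
        // CurrentSCN makes the connection operate "as of" this timestamp.
        props.setProperty("CurrentSCN", Long.toString(lastModifiedTs + 1));
        return props;
    }

    public static void main(String[] args) {
        Properties props = scnProps(1456340000000L);
        System.out.println(props.getProperty("CurrentSCN"));

        // Against a live cluster, the drop would then be run over a
        // connection opened with these properties, e.g.:
        // try (Connection conn = DriverManager.getConnection(
        //         "jdbc:phoenix:zk-host", props);
        //      Statement stmt = conn.createStatement()) {
        //     stmt.execute("DROP TABLE MY_TABLE");
        // }
    }
}
```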

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Steve Terrell
Good idea! I was using the SQuirreL SQL client, and I did put hbase-site.xml in the same directory as the driver jar. However, I did not know how to check that the property was being found. So I switched to using the sqlline command line, and this time the table remained in the hbase s…

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Ankit Singhal
Hi Steve, can you check whether the properties are being picked up by the SQL/application client? Regards, Ankit Singhal

Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Steve Terrell
Hi, I hope someone can tell me what I'm doing wrong… I set *phoenix.schema.dropMetaData* to *false* in hbase-site.xml on both the client and server side. I restarted the HBase master service. I used Phoenix to create a table and upsert some values. I used Phoenix to drop the table. I expected…
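For reference, the setting described above would be a fragment like this in hbase-site.xml on both client and server (a sketch of the configuration the thread discusses, not a complete file):

```xml
<property>
  <name>phoenix.schema.dropMetaData</name>
  <value>false</value>
</property>
```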

Re: leveraging hive.hbase.generatehfiles

2016-02-24 Thread Gabriel Reid
Hi Zack, If bulk loading is currently slow or error-prone, I don't think this approach would improve the situation. From what I understand from that link, this is a way to copy the contents of a Hive table into HFiles. Hive operates via MapReduce jobs, so this is technically a MapReduce jo…

Re: ORDER BY Error on Windows

2016-02-24 Thread Ankit Singhal
Hi Yiannis, You may need to set phoenix.spool.directory to a valid Windows folder, as by default it is set to /tmp. It is fixed in 4.7: https://issues.apache.org/jira/browse/PHOENIX-2348 Regards, Ankit Singhal
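On a pre-4.7 client, the suggested override would be a fragment like this in the client-side hbase-site.xml (the directory path is an example; any writable Windows folder should do):

```xml
<property>
  <name>phoenix.spool.directory</name>
  <value>C:\tmp\phoenix-spool</value>
</property>
```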

ORDER BY Error on Windows

2016-02-24 Thread Yiannis Gkoufas
Hi there, we have been using the Phoenix client without a problem on Linux systems, but we have encountered some problems on Windows. We run the queries through SQuirreL SQL using the 4.5.2 client jar. A query like SELECT * FROM TABLE WHERE ID='TEST' works without a problem. But when w…

Cache of region boundaries are out of date - during index creation

2016-02-24 Thread Jaroslav Šnajdr
Hello everyone, while creating an index on my Phoenix table: CREATE LOCAL INDEX idx_media_next_update_at ON media (next_metadata_update_at); I'm getting an exception every time the command is run, after it has been running for a while: *Error: ERROR 1108 (XCL08): Cache of region boundaries are…

leveraging hive.hbase.generatehfiles

2016-02-24 Thread Riesland, Zack
We continue to have issues getting large amounts of data from Hive into Phoenix. Bulk loading is very slow and often fails for very large data sets. I stumbled upon this article that seems to present an interesting alternative: https://community.hortonworks.com/articles/2745/creating-hbase-hfiles
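For context, the approach in that article boils down to a Hive session along these lines (a sketch under assumptions: the table names, column family, and output path are made-up, and the final bulk-load step is run outside Hive):

```sql
-- Tell the Hive HBase storage handler to emit HFiles instead of Puts,
-- and where to write them (path ends with the column-family name):
SET hive.hbase.generatehfiles=true;
SET hfile.family.path=/tmp/my_table_hfiles/cf;

-- HFiles must be written in sorted row-key order, hence CLUSTER BY:
INSERT OVERWRITE TABLE my_hbase_backed_table
SELECT * FROM my_hive_table
CLUSTER BY rowkey;

-- Afterward, load the generated files with the HBase bulk-load tool, e.g.:
-- hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
--   /tmp/my_table_hfiles MY_TABLE
```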