You could also take a snapshot in hbase just prior to the drop table, then
restore it afterward.
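For reference, a rough sketch of that in the hbase shell (the table and snapshot names below are made up; if the Phoenix metadata was dropped you would also re-run the Phoenix CREATE TABLE afterward to map the restored table again):

    # take the snapshot before dropping the Phoenix table
    hbase> snapshot 'MY_TABLE', 'my_table_before_drop'
    hbase> list_snapshots

    # ... run DROP TABLE in Phoenix ...

    # the snapshot survives the drop, so the table can be brought back from it
    hbase> clone_snapshot 'my_table_before_drop', 'MY_TABLE'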
Thanks for your quick and accurate responses!
Yes, the data will be deleted by the drop table command, and currently there is
no parameter to control that. You may raise a JIRA for this.
A workaround you may try is opening a connection at a timestamp a little
greater than the last-modified timestamp of the table and then running the
drop table command, but remember …
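For anyone wanting to try that, a rough JDBC sketch using Phoenix's CurrentSCN connection property (the ZooKeeper host, table name, and timestamp below are placeholders, not from this thread):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.Properties;

    public class DropAtTimestamp {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Pin the connection to an explicit timestamp a little greater than
            // the table's last-modified timestamp (placeholder value below).
            props.setProperty("CurrentSCN", Long.toString(1456300000000L));
            try (Connection conn =
                         DriverManager.getConnection("jdbc:phoenix:zk-host", props);
                 Statement stmt = conn.createStatement()) {
                stmt.execute("DROP TABLE MY_TABLE");
            }
        }
    }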
Good idea!
I was using SQuirreL Client, and I did put the hbase-site.xml in the same
directory as the driver jar. However, I did not know how to check to see
that the property was being found.
So, I switched to using the sqlline query command line, and this time the
table remained in HBase.
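(For anyone following along, the sqlline route is roughly the following; the paths and host are examples, and the assumption is that the directory holding the client-side hbase-site.xml ends up on sqlline's classpath, e.g. via HBASE_CONF_DIR:)

    export HBASE_CONF_DIR=/etc/hbase/conf   # dir containing the client hbase-site.xml
    $PHOENIX_HOME/bin/sqlline.py zk-host:2181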
Hi Steve,
Can you check whether the properties are being picked up by the SQL/application
client?
Regards,
Ankit Singhal
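(Not from the thread, but one quick way to check is to load the client configuration the same way an HBase/Phoenix client does and print the property:)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CheckDropMetaData {
        public static void main(String[] args) {
            // HBaseConfiguration.create() reads hbase-site.xml from the classpath,
            // which is where the Phoenix client picks up its properties.
            Configuration conf = HBaseConfiguration.create();
            System.out.println("phoenix.schema.dropMetaData = "
                    + conf.get("phoenix.schema.dropMetaData", "<not set>"));
        }
    }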
Hi, I hope someone can tell me what I'm doing wrong…
I set *phoenix.schema.dropMetaData* to *false* in hbase-site.xml on both
the client and server side.
I restarted the HBase master service.
I used Phoenix to create a table and upsert some values.
I used Phoenix to drop the table.
I expected the underlying HBase table to remain after the drop, but it was
gone.
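For anyone reproducing this, the setup amounts to something like the following (the table name and values are invented; the property name is the one from this thread):

    <!-- hbase-site.xml, on both client and server -->
    <property>
      <name>phoenix.schema.dropMetaData</name>
      <value>false</value>
    </property>

    -- then, from a Phoenix client such as sqlline:
    CREATE TABLE test_drop (id VARCHAR PRIMARY KEY, val VARCHAR);
    UPSERT INTO test_drop VALUES ('a', '1');
    DROP TABLE test_drop;
    -- expectation: the underlying HBase table is left in place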
Hi Zack,
If bulk loading is currently slow or error prone, I don't think that
this approach would improve the situation.
From what I understand from that link, this is a way to copy the
contents of a Hive table into HFiles. Hive operates via MapReduce
jobs, so this is technically a MapReduce job as well.
Hi Yiannis,
You may need to set phoenix.spool.directory to a valid Windows folder, as by
default it is set to /tmp.
It is fixed in 4.7.
https://issues.apache.org/jira/browse/PHOENIX-2348
Regards,
Ankit Singhal
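Concretely, that is a client-side hbase-site.xml entry along these lines (the path is only an example):

    <property>
      <name>phoenix.spool.directory</name>
      <value>C:/Temp/phoenix-spool</value>
    </property>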
Hi there,
We have been using the Phoenix client without a problem on Linux systems, but
we have encountered some problems on Windows.
We run the queries through SQuirreL SQL using the 4.5.2 client jar.
A query like SELECT * FROM TABLE WHERE ID='TEST' works without a problem, but
when we …
Hello everyone,
while creating an index on my Phoenix table:
CREATE LOCAL INDEX idx_media_next_update_at ON media
(next_metadata_update_at);
I'm getting an exception every time the command is run, after it's been
running for a while:
*Error: ERROR 1108 (XCL08): Cache of region boundaries are out of date.*
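(Not suggested in this thread, but when an index build keeps failing part-way through, one commonly used alternative, assuming the Phoenix version in use supports ASYNC for this index type, is to create the index asynchronously and populate it with the MapReduce IndexTool; the output path below is a placeholder:)

    CREATE LOCAL INDEX idx_media_next_update_at ON media (next_metadata_update_at) ASYNC;

    hbase org.apache.phoenix.mapreduce.index.IndexTool \
        --data-table MEDIA --index-table IDX_MEDIA_NEXT_UPDATE_AT \
        --output-path /tmp/idx_media_next_update_at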
We continue to have issues getting large amounts of data from Hive into Phoenix.
BulkLoading is very slow and often fails for very large data sets.
I stumbled upon this article that seems to present an interesting alternative:
https://community.hortonworks.com/articles/2745/creating-hbase-hfiles
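(For context, the bulk loading referred to here is presumably Phoenix's MapReduce CSV bulk load tool, invoked roughly like this; the jar version, table name, and input path are placeholders:)

    hadoop jar phoenix-<version>-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
        --table EXAMPLE_TABLE --input /data/example.csv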