I sort-of ran out of time for the near term, but will look into all these
suggestions at a later date. Thanks!
On Thu, Feb 25, 2016 at 1:01 PM, James Taylor wrote:
+1 to Ankit's suggestion. If you haven't altered the table, then you can
just connect at the timestamp of one more than the timestamp at which the
table was created (see [1]), and issue the DROP TABLE command from that
connection. If you have altered the table, then you have to be more careful
as th
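The timestamp trick described above can be sketched in JDBC. This is a hedged sketch, not the thread's exact code: the table name, ZooKeeper quorum, and timestamp value are placeholders, and it assumes Phoenix's `CurrentSCN` connection property; a reachable Phoenix cluster is required for it to actually run.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class DropAtTimestamp {
    public static void main(String[] args) throws Exception {
        // Hypothetical table-creation timestamp; in practice you would look
        // this up (e.g. from SYSTEM.CATALOG) rather than hard-code it.
        long createTs = 1456344000000L;

        Properties props = new Properties();
        // Connect "as of" one millisecond after table creation, so the
        // DROP TABLE only sees metadata written at or before that point.
        props.setProperty("CurrentSCN", Long.toString(createTs + 1));

        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zk-host:2181", props);
             Statement stmt = conn.createStatement()) {
            stmt.execute("DROP TABLE MY_TABLE"); // placeholder table name
        }
    }
}
```

As the message notes, this only works cleanly if the table has not been altered since creation; otherwise later metadata rows would survive the drop.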
If I understand what you're attempting to do, I think a snapshot will work. A
snapshot is not a copy of the data and doesn't take much time. It's really just
a copy of the region-to-store-file mapping at a point in time. It will prevent
the store files from getting deleted, so drop the snapshot o
I like your outside-the-box thinking. Unfortunately, my end goal was to
convert a table created with dynamic fields into a table or view with all
static fields so that I could avoid having to specify the data type of
every dynamic field in the SQL. And I wanted to avoid having to rewrite
the data
1) Take a snapshot of the HBase table
2) Drop the Phoenix table
3) Restore the HBase snapshot
Now the table will exist in HBase again.
Thanks
Sandeep
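The snapshot sequence above can be sketched with the HBase shell and sqlline. Table name, snapshot name, and the ZooKeeper host are placeholders; this assumes the standard HBase shell snapshot commands and a running cluster.

```shell
# 1) Take a snapshot of the HBase table (names are placeholders)
hbase shell <<'EOF'
snapshot 'MY_TABLE', 'my_table_snap'
EOF

# 2) Drop the Phoenix table (by default this also deletes the HBase data)
echo "DROP TABLE MY_TABLE;" | sqlline.py zk-host

# 3) Recreate the table from the snapshot. If the drop deleted the HBase
#    table, clone_snapshot recreates it under the same name; if the table
#    still exists, disable it and use restore_snapshot instead.
hbase shell <<'EOF'
clone_snapshot 'my_table_snap', 'MY_TABLE'
EOF
```

Note that after this the data is back in HBase, but the Phoenix metadata is gone, so Phoenix would see it only as a plain HBase table (or via a newly created view/table mapping).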
On Thu, Feb 25, 2016 at 6:41 AM, Jonathan Leech wrote:
You could also take a snapshot in hbase just prior to the drop table, then
restore it afterward.
Thanks for your quick and accurate responses!
On Wed, Feb 24, 2016 at 1:18 PM, Ankit Singhal wrote:
Yes, data will be deleted during the DROP TABLE command, and currently there
is no parameter to control that. You may raise a JIRA for this.
A workaround you may try is opening a connection at a timestamp a little
greater than the last modified timestamp of the table and then running the
DROP TABLE command, but rem
Good idea!
I was using SQuirreL Client, and I did put hbase-site.xml in the same
directory as the driver jar. However, I did not know how to verify that
the property was being found.
So I switched to using the sqlline command line, and this time the
table remained in the hbase s
Hi Steve,
Can you check whether the properties are being picked up by the
SQL/application client?
Regards,
Ankit Singhal
On Wed, Feb 24, 2016 at 11:09 PM, Steve Terrell wrote:
Hi, I hope someone can tell me what I'm doing wrong…
I set *phoenix.schema.dropMetaData* to *false* in hbase-site.xml on both
the client and server side.
I restarted the HBase master service.
I used Phoenix to create a table and upsert some values.
I used Phoenix to drop the table.
I expected
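For reference, the setting described in this message goes into hbase-site.xml as a config fragment like the following; as the message says, it must be present on both the client and the server side, and the HBase master restarted afterward.

```xml
<!-- Keep the underlying HBase table and data when a Phoenix table is dropped -->
<property>
  <name>phoenix.schema.dropMetaData</name>
  <value>false</value>
</property>
```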