On Sat, Sep 15, 2018 at 09:19:12PM -0700, Gopal Vijayaraghavan wrote:
> Since we added a sequence + locking in Hive ACID, there's a Surrogate
> Key prototype (for Hive 3.0)
Great. I did not mention that I needed an ACID-compliant sequence.
> This is not an auto_increment key, but the numbering is for
On Sat, Sep 15, 2018 at 09:38:01PM +0000, Vineet Garg wrote:
> Not exactly a sequence, but the ability to generate unique numbers (with
> limitations) is under development:
> https://issues.apache.org/jira/browse/HIVE-20536
Unique numbers are sufficient for a sequence. However, the limitation
looks huge.
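For anyone following along, my reading is that usage would end up looking roughly
like the sketch below. The SURROGATE_KEY() function name, the DEFAULT-clause
syntax, and the connection URL are my assumptions about how the feature might
surface, since it is still under development:

$ beeline -u "jdbc:hive2://localhost:10000/default" -e "
    -- transactional table whose id column is filled in by the key generator
    CREATE TABLE t_keys (
      id   BIGINT DEFAULT SURROGATE_KEY(),
      name STRING
    ) STORED AS ORC TBLPROPERTIES ('transactional'='true');
    -- leaving id out of the column list lets the DEFAULT expression assign a unique value per row
    INSERT INTO t_keys (name) VALUES ('example');"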
Hi,
We are also facing the same issue: /user/hive/warehouse always reaches its hard
quota and jobs fail. Often we reach out to users asking them to delete old
tables/DBs. Is there a good way to handle this at the enterprise level
(100s of users and 1000s of databases)?
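Right now the closest thing we have is periodic reporting plus per-database
quotas, roughly along these lines (the database name and quota size below are
placeholders for our environment):

$ hdfs dfs -du /user/hive/warehouse | sort -nr | head -20
$ sudo -u hdfs hdfs dfsadmin -setSpaceQuota 500g /user/hive/warehouse/big_db.db
$ hdfs dfs -count -q -h /user/hive/warehouse/big_db.db

The first command lists the largest databases (sizes in bytes) so we know whom
to contact, the second caps a single database instead of the whole warehouse,
and the third shows how much of that quota is left.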
On Sun, Sep 16, 2018 at 00:31 Mahender Sarang wrote:
Hi,
Our storage is holding TBs of data in the \User folder; it contains users and
their logs. Is there a way to set a limit or quota and automatically clean up
the folder if it grows beyond a certain limit?
$ sudo -u hdfs hdfs dfsadmin -setSpaceQuota 10g /user
I know the above command sets the limit, but is there a better way to automate the cleanup?
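Something like the sketch below, run from cron, is what I had in mind (the 10 GB
threshold and 30-day retention are only placeholders), but I would prefer a
built-in mechanism if one exists:

#!/bin/bash
# Sketch: if /user consumes more than LIMIT bytes, remove files older than RETENTION_DAYS.
# hdfs dfs -rm moves files to the HDFS trash when fs.trash.interval is enabled.
LIMIT=$((10 * 1024 * 1024 * 1024))   # 10 GB, matching the setSpaceQuota above
RETENTION_DAYS=30
USED=$(hdfs dfs -du -s /user | awk '{print $1}')
if [ "$USED" -gt "$LIMIT" ]; then
  CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%Y-%m-%d)   # GNU date
  # -ls -R prints: perms repl owner group size date time path;
  # keep non-directories whose modification date is older than the cutoff.
  # Note: paths containing spaces would need extra handling.
  hdfs dfs -ls -R /user | awk -v c="$CUTOFF" '$1 !~ /^d/ && $6 < c {print $8}' \
    | xargs -r -n 100 hdfs dfs -rm
fi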