Try this:

hive.limit.optimize.fetch.max

   - Default Value: 50000
   - Added In: Hive 0.8.0

Maximum number of rows allowed for the smaller subset of data that Hive
tries first for a simple LIMIT, when the query runs as a fetch task.
Insert queries are not restricted by this limit.
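
For instance, to apply it in a session (a minimal sketch: the table name is
a placeholder, and note that hive.limit.optimize.enable must be switched on
for the cap to take effect):

    -- turn on the LIMIT optimization, then cap how many rows a
    -- fetch-style LIMIT query may pull (50000 is the default)
    set hive.limit.optimize.enable=true;
    set hive.limit.optimize.fetch.max=50000;

    -- a simple fetch query like this is then bounded by the cap
    select * from some_table limit 100;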


HTH

Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 31 August 2016 at 13:42, ravi teja <raviort...@gmail.com> wrote:

> Hi Community,
>
> Many users run ad-hoc Hive queries on our platform.
> Some rogue queries managed to fill up the HDFS space, causing
> mainstream queries to fail.
>
> We wanted to limit the data generated by these ad-hoc queries.
> We are aware of the strict parameter, which limits the data being scanned,
> but it is of little help since a large number of user tables aren't
> partitioned.
>
> Is there a way we can limit the data generated by Hive per query, such as
> a Hive parameter for setting HDFS quotas on the job-level *scratch*
> directory, or any other approach (see the quota sketch below)?
> What's the general approach to guard-rail such multi-tenant cases?
>
> Thanks in advance,
> Ravi
>
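
Following up on the quota idea in the question above: a common guard-rail is
an HDFS space quota on the per-user Hive scratch directories. The sketch
below is an assumption-laden example, not a recipe from this thread: it
assumes the default scratch location (hive.exec.scratchdir, /tmp/hive by
default) with per-user subdirectories, and the user name and quota size are
made up. The dfsadmin commands need HDFS superuser rights.

    # Cap one user's scratch space at 500 GB (example value)
    hdfs dfsadmin -setSpaceQuota 500g /tmp/hive/some_user

    # Inspect current usage against the quota
    hdfs dfs -count -q -h /tmp/hive/some_user

    # Remove the quota again if it gets in the way
    hdfs dfsadmin -clrSpaceQuota /tmp/hive/some_user

Note that HDFS charges the space quota at the replicated size, so with the
usual replication factor of 3 a 500g quota allows roughly 166 GB of raw
intermediate data; a query that exceeds it fails its own writes with a
DSQuotaExceededException instead of filling the cluster.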
