Hello Gopal,

I have been looking further into this issue, and have found that the
non-deterministic behavior of Hive in generating DAGs is actually due
to the logic in AggregateStatsCache.findBestMatch(), called from
AggregateStatsCache.get(), combined with the disproportionate number
of nulls in __HIVE_DEFAULT_PARTITION__ (in the case of the TPC-DS
dataset).

Here is what is happening. Let me use the web_sales table and the
ws_web_site_sk column in the 10TB TPC-DS dataset as a running example.

1. In the course of running TPC-DS queries, Hive asks the MetaStore
for the column statistics of 1823 partNames in the
web_sales/ws_web_site_sk combination, either without
__HIVE_DEFAULT_PARTITION__ or with it.

  --- Without __HIVE_DEFAULT_PARTITION__, it reports a total of 901180
nulls.

  --- With __HIVE_DEFAULT_PARTITION__, however, it reports a total of
1800087 nulls, almost twice as many.

2. The first call to the MetaStore returns the correct result, but all
subsequent requests are likely to be answered from the cache with that
same result, irrespective of whether __HIVE_DEFAULT_PARTITION__ is
included. This is because AggregateStatsCache.findBestMatch() treats
__HIVE_DEFAULT_PARTITION__ the same way as any other partName, and the
difference in the size of partNames[] is just 1. Which result a query
gets depends on the timing of intervening queries, so everything
becomes non-deterministic.
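As far as I can tell, the match behaves roughly like the following
sketch (simplified Python, not Hive's actual code; the partName
scheme, the helper names, and the variance default are just
illustrative assumptions):

```python
# Simplified sketch (NOT Hive's actual code) of how a variance-based
# partName match can ignore __HIVE_DEFAULT_PARTITION__.

DEFAULT_PARTITION = "__HIVE_DEFAULT_PARTITION__"

def find_best_match(requested, cached_entries, max_variance=0.01):
    """Return the first cached entry 'close enough' to the request,
    where closeness tolerates a bounded fraction of requested
    partNames being absent from the cached entry."""
    for cached_names, cached_stats in cached_entries:
        misses = sum(1 for p in requested if p not in cached_names)
        if misses <= max_variance * len(requested):
            return cached_stats
    return None

# 1822 regular partitions, plus the default partition = 1823 names.
parts = {f"ws_sold_date_sk={d}" for d in range(1822)}
with_default = parts | {DEFAULT_PARTITION}

# The first call (WITH the default partition) populates the cache.
cache = [(with_default, {"numNulls": 1800087})]

# A later request WITHOUT the default partition still matches: every
# requested name is present in the cached set, so misses == 0.
stats = find_best_match(parts, cache)
print(stats)  # {'numNulls': 1800087} -- the wrong count for this request
```

The request differing from the cache entry by a single partName is
well within the default variance, so the stale aggregate is returned.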

3. If a wrong value of numNulls is returned, Hive generates a
different DAG, which usually takes much longer than the correct one
(e.g., 150s vs. 1000s for the first part of Query 24, and 40s vs. 120s
for Query 5). I guess the problem is particularly pronounced here
because of the huge number of nulls in __HIVE_DEFAULT_PARTITION__. It
is ironic that the query optimizer is so effective that a single wrong
guess of numNulls yields a very inefficient DAG.

Note that this behavior cannot be avoided by setting
hive.metastore.aggregate.stats.cache.max.variance to zero, because the
difference in the size of partNames[] between the argument and the
cache entry is just 1.
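My guess (an assumption on my part, not verified against the Hive
source) is that the closeness test only counts requested partNames
that are missing from the cached entry, as a Bloom-filter membership
test would, so extra names in the cached entry are never penalized. If
that is the check, then even a variance of 0.0 still matches in one
direction:

```python
# Sketch of my ASSUMED matching condition (not Hive's exact code):
# only requested names absent from the cached entry count as misses,
# so a cached entry with extra names is never penalized.

def matches(requested, cached_names, max_variance):
    misses = sum(1 for p in requested if p not in cached_names)
    return misses <= max_variance * len(requested)

parts = {f"p={d}" for d in range(1822)}
cached = parts | {"__HIVE_DEFAULT_PARTITION__"}  # built WITH the default partition

# Even with max.variance = 0.0, the request without the default
# partition is a subset of the cached names: zero misses, so it matches.
print(matches(parts, cached, max_variance=0.0))   # True

# Only the reverse direction (request has one name the cache lacks)
# is rejected at zero variance.
print(matches(cached, parts, max_variance=0.0))   # False
```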

I think AggregateStatsCache.findBestMatch() should treat
__HIVE_DEFAULT_PARTITION__ specially: it should not return a cached
result if the request and the cache entry disagree on the inclusion of
__HIVE_DEFAULT_PARTITION__ (or it should at least provide the user
with an option to enable this behavior). However, I am testing only
with the TPC-DS data, so please take my claim with a grain of salt.
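A minimal sketch of the guard I have in mind (hypothetical code, not a
patch against Hive; names and numbers carried over from the sketch
above):

```python
# Hypothetical guard (NOT a Hive patch): refuse a cached entry when it
# and the request disagree on whether __HIVE_DEFAULT_PARTITION__ is
# included, before applying the usual variance test.

DEFAULT_PARTITION = "__HIVE_DEFAULT_PARTITION__"

def find_best_match(requested, cached_entries, max_variance=0.01):
    want_default = DEFAULT_PARTITION in requested
    for cached_names, cached_stats in cached_entries:
        if (DEFAULT_PARTITION in cached_names) != want_default:
            continue  # never mix with/without the default partition
        misses = sum(1 for p in requested if p not in cached_names)
        if misses <= max_variance * len(requested):
            return cached_stats
    return None

parts = {f"p={d}" for d in range(1822)}
cache = [(parts | {DEFAULT_PARTITION}, {"numNulls": 1800087})]

# The mismatched request now misses the cache and forces a fresh
# aggregation from the MetaStore.
print(find_best_match(parts, cache))  # None

# A request that agrees on the default partition still hits the cache.
print(find_best_match(parts | {DEFAULT_PARTITION}, cache))
```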

--- Sungwoo


On Fri, Jul 20, 2018 at 2:54 PM Gopal Vijayaraghavan <gop...@apache.org>
wrote:

> > My conclusion is that a query can update some internal states of
> HiveServer2, affecting DAG generation for subsequent queries.
>
> Other than the automatic reoptimization feature, there's two other
> potential suspects.
>
> First one would be to disable the in-memory stats cache's variance param,
> which might be triggering some residual effects.
>
> hive.metastore.aggregate.stats.cache.max.variance
>
> I set it to 0.0 when I suspect that feature is messing with the runtime
> plans or just disable the cache entirely with
>
> set hive.metastore.aggregate.stats.cache.enabled=false;
>
> Other than that, query24 is an interesting query.
>
> Is probably one of the corner cases where the predicate push-down is
> actually hurting the shared work optimizer.
>
> Also cross-check if you have accidentally loaded store_sales with
> ss_item_sk(int) and if the item i_item_sk is a bigint (type mismatches will
> trigger a slow join algorithm, but without any consistency issues).
>
> Cheers,
> Gopal
>
>
>
