This is another argument for getting the code to the point where this can
default to "true":
SQLConf.scala: val ADAPTIVE_EXECUTION_ENABLED = buildConf("spark.sql.adaptive.enabled")
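
Until that default changes, a session has to opt in explicitly. As a minimal
sketch (the app name and master below are placeholders, not from this thread):

    import org.apache.spark.sql.SparkSession

    // Illustrative test session that turns adaptive execution on explicitly
    // while the default is still false.
    val spark = SparkSession.builder()
      .appName("adaptive-execution-example")
      .master("local[*]")
      .config("spark.sql.adaptive.enabled", "true")
      .getOrCreate()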
On Tue, Aug 22, 2017 at 12:27 PM, Reynold Xin wrote:
> +1
>
>
> On Tue, Aug 22, 2017 at 12:25 PM, Maciej Szymkiewicz wrote:
+1 (non-binding)
I am specifically interested in setting up a testing environment for my
company's Spark use, and I am also hoping for more comprehensive documentation
on setting up a development environment for bug fixes or new feature
development; at the moment this is only briefly documented in
https://github.com/apache-
+1
On Tue, Aug 22, 2017 at 12:25 PM, Maciej Szymkiewicz wrote:
> Hi,
>
> From my experience it is possible to cut quite a lot by reducing
> spark.sql.shuffle.partitions to some reasonable value (let's say
> comparable to the number of cores). 200 is a serious overkill for most of
> the test cases anyway.
Hi,
From my experience it is possible to cut quite a lot by reducing
spark.sql.shuffle.partitions to some reasonable value (let's say comparable
to the number of cores). 200 is a serious overkill for most of the test
cases anyway.
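
As a minimal sketch (the values below are illustrative, not from this
thread), a local test session could cap the setting like this:

    import org.apache.spark.sql.SparkSession

    // Illustrative test setup: match shuffle partitions to the number of
    // local cores instead of the default 200.
    val spark = SparkSession.builder()
      .appName("shuffle-partitions-test")
      .master("local[4]")
      .config("spark.sql.shuffle.partitions", "4")
      .getOrCreate()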
Best,
Maciej
On 21 August 2017 at 03:00, Dong Joon Hyun wrote: