I'm not entirely sure what the behaviour is when writing to a remote cluster.
It could be that connections are being established for every element in
your dataframe; using foreachPartition instead may reduce the number of
connections. You may have to look at what the executors do when they
perform the write.
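A minimal sketch of that foreachPartition pattern, assuming a DataFrame
named df and with RemoteClient / writeRecord as invented stand-ins for
whatever client library actually talks to the remote cluster:

    import org.apache.spark.sql.Row

    df.foreachPartition { rows: Iterator[Row] =>
      // one connection per partition instead of one per element
      val client = RemoteClient.connect("remote-cluster:9000") // hypothetical client
      try rows.foreach(client.writeRecord)
      finally client.close()
    }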
I managed to make the failing stage work by increasing memoryOverhead to
something ridiculous, > 50%. Our spark.executor.memory = 12g, and I bumped
spark.mesos.executor.memoryOverhead=8G.
*Can someone explain why this solved the issue?* As I understand it,
memoryOverhead is for VM overheads and other native, off-heap allocations.
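For reference, a sketch of the settings being discussed, expressed as
SparkConf entries; note that the Mesos overhead key is documented in MiB
for Spark 2.x, so a plain number may be safer than "8G":

    import org.apache.spark.SparkConf

    // normally passed via spark-submit --conf; shown in code only for illustration
    val conf = new SparkConf()
      .set("spark.executor.memory", "12g")
      .set("spark.mesos.executor.memoryOverhead", "8192") // MiB, i.e. 8 GiB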
Not for the stage that fails; all it does is read and write. The number of
tasks is (# of cores) x (# of executor instances), which for us is 60
(3 cores x 20 executors).
As for the input partition size of the failing stage: when Spark reads the
20 files of ~132M each, it comes out to 40 partitions.
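That 40 is consistent with spark.sql.files.maxPartitionBytes, which
defaults to 128 MiB: if each 132M file is just over the limit, each splits
into two input partitions, and 20 x 2 = 40. A quick check, with an
illustrative path:

    spark.conf.get("spark.sql.files.maxPartitionBytes") // "134217728" (128 MiB) by default
    spark.read.parquet("hdfs:///intermediate/output").rdd.getNumPartitions // 40 in this case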
If you're using Spark SQL, that configuration setting causes a shuffle if
the number of input partitions going into the write is larger than that
setting's value.
Is there anything in the executor logs or the Spark UI DAG that indicates a
shuffle? I wouldn't expect a shuffle if it is a straight write.
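One way to check, with an illustrative path: print the physical plan and
look for Exchange operators, which mark shuffles:

    // a straight read-then-write plan should contain no Exchange nodes
    spark.read.parquet("hdfs:///intermediate/output").explain()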
Could you explain why shuffle partitions might be a good starting point?
Some more details: when I write the output the first time, after the logic
is complete, I repartition the files down to 20 (having run with
spark.sql.shuffle.partitions = 2000) so we don't end up with too many small
files. The data is small, about 130M.
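A sketch of that write pattern; resultDf and the path are invented for
illustration:

    resultDf
      .repartition(20) // collapse the 2000 shuffle partitions into 20 output files
      .write
      .mode("overwrite")
      .parquet("hdfs:///output/path")

coalesce(20) would avoid the full shuffle that repartition(20) triggers, at
the cost of less evenly sized output files.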
spark.sql.shuffle.partitions might be a start.
Is there a difference between the number of partitions when the parquet is
read back and spark.sql.shuffle.partitions? Is the read-side count much
higher than spark.sql.shuffle.partitions?
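A quick way to compare the two, assuming the active SparkSession is named
spark and the path is illustrative:

    val readPartitions    = spark.read.parquet("hdfs:///output/path").rdd.getNumPartitions
    val shufflePartitions = spark.conf.get("spark.sql.shuffle.partitions").toInt
    println(s"read-side partitions = $readPartitions, shuffle setting = $shufflePartitions")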
On Fri, 20 Dec 2019, 7:34 pm Ruijing Li, wrote:
Hi all,
I have encountered a strange executor OOM error. I have a data pipeline
using Spark 2.3 / Scala 2.11.12. This pipeline writes its output to one
HDFS location as parquet, then reads the files back in and writes them to
multiple Hadoop clusters (all co-located in the same datacenter). It should be
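A minimal sketch of the pipeline shape described above; finalDf, the paths,
and the cluster URIs are all invented for illustration:

    // 1. write the computed output once, as parquet, to the primary HDFS location
    finalDf.write.mode("overwrite").parquet("hdfs://clusterA/tmp/pipeline-output")

    // 2. read the files back in
    val readBack = spark.read.parquet("hdfs://clusterA/tmp/pipeline-output")

    // 3. fan the same data out to the other co-located clusters
    Seq("hdfs://clusterB/data/out", "hdfs://clusterC/data/out").foreach { path =>
      readBack.write.mode("overwrite").parquet(path)
    }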