l Spitzer
Sent: Thursday, April 8, 2021 15:24
To: Weiand, Markus, NMA-CFD
Cc: user@spark.apache.org
Subject: Re: possible bug
Could be that the driver JVM cannot handle the metadata required to store the
partition information of a 70k-partition RDD. I see you say you have a 100 GB
driver b
I'm trying to coalesce an empty RDD with 7 partitions into an empty RDD with
1 partition; why is this a problem without shuffling?
From: Sean Owen
Sent: Thursday, April 8, 2021 15:00
To: Weiand, Markus, NMA-CFD
Cc: user@spark.apache.org
Subject: Re: possible bug
That's a very low l
Hi all,
I'm using Spark on a c5a.16xlarge machine in the Amazon cloud (64 cores and
128 GB RAM), running Spark 3.0.1.
The following Python code leads to an exception; is this a bug, or is my
understanding of the API incorrect?
import pyspark
conf=pyspark.SparkConf().setMaster("l