Hi, community!
I noticed a change in how executor container memory is calculated between spark-2.3.0
and spark-3.2.1 when requesting containers from YARN.
org.apache.spark.deploy.yarn.Client.scala # verifyClusterResources
```
// spark-2.3.0
val executorMem = executorMemory + executorMemoryOverhead
```
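For comparison, my reading of the 3.2.1 source is that the same check also counts off-heap and PySpark worker memory; a sketch of the 3.2.1 expression (quoted from memory, so please treat the exact variable names as assumptions and check the source):

```
// spark-3.2.1 (approximate; verify against Client.scala)
val executorMem = executorMemory + executorOffHeapMemory +
  executorMemoryOverhead + pysparkWorkerMemory
```

So a job that sets spark.memory.offHeap.size or spark.executor.pyspark.memory may request noticeably larger containers on 3.2.1 than it did on 2.3.0.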
Hi devs,
Question: how can I convert the Hive output format to the Spark SQL datasource format?
Spark version: spark 2.3.0
Scenario: many small files are generated on HDFS (Hive tables) by Spark SQL
applications when dynamic partitioning is enabled or when
spark.sql.shuffle.partitions is set above 200. So I am t
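While waiting for a proper answer, here is a sketch of one common approach: let Spark read/write Hive Parquet/ORC tables through its native datasource path, and repartition before writing to limit the file count. The config names are real Spark SQL options; the table names (`src_table`, `target_table`) and partition column (`dt`) are made up for illustration:

```
import org.apache.spark.sql.functions.col

// Use Spark's native datasource reader/writer for Hive tables instead of
// Hive SerDes (convertMetastoreParquet is on by default; Orc is not in 2.3.0).
spark.conf.set("spark.sql.hive.convertMetastoreParquet", "true")
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "true")

// Repartition by the dynamic-partition column before the insert so each
// Hive partition is written by few tasks, reducing small files.
spark.sql("SELECT * FROM src_table")
  .repartition(col("dt"))
  .write
  .mode("overwrite")
  .insertInto("target_table")
```

Note that `repartition(col("dt"))` trades write parallelism for fewer files per partition; if single partitions are large, repartitioning by `dt` plus a salt column is a common refinement.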