Hi Nasrulla,

Not sure what your new code is doing, but from the symptom it looks like
you're creating a new data source that wraps the built-in Parquet data
source?

The problem here is that whole-stage codegen generated code expecting
row-based input, but the actual input is columnar.
In other words, in your setup the vectorized Parquet reader is enabled
(which produces columnar output), and you probably wrote a new operator
that doesn't properly participate in Spark's columnar support, so WSCG
thought it should generate row-based code instead of columnar code.
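To make the mismatch concrete, here's a minimal, self-contained Scala sketch of what the generated row-based code effectively does when it's handed columnar batches. Note that `InternalRow` and `ColumnarBatch` below are stand-in stubs for illustration, not the real Spark classes, so this runs without a Spark dependency:

```scala
// Stand-ins for Spark's row/batch types (hypothetical stubs, not the real classes).
trait InternalRow
class ColumnarBatch

object CodegenMismatch extends App {
  // The vectorized Parquet reader hands the downstream operator batches...
  val input: Iterator[AnyRef] = Iterator(new ColumnarBatch)

  // ...but row-based generated code blindly casts each element to a row,
  // which is exactly the ClassCastException in your stack trace.
  val threw =
    try { input.next().asInstanceOf[InternalRow]; false }
    catch { case _: ClassCastException => true }

  println(s"ClassCastException thrown: $threw")
}
```

In Spark itself, whether WSCG emits columnar or row-based code follows from whether the scan reports columnar support (for file sources that's `FileFormat.supportBatch`, if I remember correctly), so a wrapper around `ParquetFileFormat` needs to report that consistently. As a narrower workaround than disabling codegen entirely, you could also try turning off the vectorized reader with `spark.sql.parquet.enableVectorizedReader=false`.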

Hope it helps,
Kris
--

Kris Mok

Software Engineer Databricks Inc.

kris....@databricks.com

databricks.com


On Thu, Jun 11, 2020 at 5:41 PM Nasrulla Khan Haris
<nasrulla.k...@microsoft.com.invalid> wrote:

> Hi Spark developers,
>
>
>
> I have a new BaseRelation which initializes a ParquetFileFormat object, and
> when reading the data I encounter the ClassCastException below. However,
> when I disable codegen support with the config
> "spark.sql.codegen.wholeStage" = false, I do not encounter this exception.
>
>
>
>
>
> 20/06/11 17:35:39 INFO FileScanRDD: Reading File path: file:///D:/jvm/src/test/scala/resources/pems_sorted/station=402260/part-r-00245-ddaee723-f3f6-4f25-a34b-3312172aa6d7.snappy.parquet, range: 0-50936, partition values: [402260]
>
> 20/06/11 17:35:39 INFO CodecPool: Got brand-new decompressor [.snappy]
>
> 20/06/11 17:35:40 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
>
> java.lang.ClassCastException: org.apache.spark.sql.vectorized.ColumnarBatch cannot be cast to org.apache.spark.sql.catalyst.InternalRow
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
>     at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>     at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
>     at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
>     at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
>     at org.apache.spark.scheduler.Task.run(Task.scala:123)
>     at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
>
>
>
>
>
> Appreciate your inputs.
>
>
>
> Thanks,
>
> NKH
>
