Hi Khalid,

Elango mentioned the file works fine in another of our environments with the same driver and executor memory.
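
Since the trace fails inside Snappy.uncompress, one thing worth comparing across the two environments is the snappy-java build each JVM actually loads. A quick check from the PySpark shell (this goes through the private _jvm handle, so treat it as a debugging trick rather than a supported API):

# Print the version of the native snappy library this JVM loaded
print(spark._jvm.org.xerial.snappy.Snappy.getNativeLibraryVersion())

If the working and failing environments report different versions, that would point at a codec mismatch rather than at the file itself.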

Brian

On Jul 7, 2023, at 10:18 AM, Khalid Mammadov <khalidmammad...@gmail.com> wrote:


Perhaps that parquet file is corrupted, or there is a corrupt file somewhere in that folder?
To check, try reading the file with pandas or another tool to see whether it can be read without Spark.
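
For example, something like this, assuming pandas with a parquet engine such as pyarrow is installed (the file name below is the one your log shows Spark reading when it fails):

import pandas as pd

# Try the suspect part file outside Spark; a genuinely corrupt
# snappy page should fail to decompress here as well.
df = pd.read_parquet("/data/202301/account_cycle/account_cycle-202301-53.parquet")
print(df.head())

If pandas reads it cleanly, the file is probably fine and the problem is more likely in that environment's Spark/parquet/snappy jars.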

On Wed, 5 Jul 2023, 07:25 elango vaidyanathan, <elango...@gmail.com> wrote:

Hi team,

Any updates on the issue below?

On Mon, 3 Jul 2023 at 6:18 PM, elango vaidyanathan <elango...@gmail.com> wrote:

Hi all,

I am reading a parquet file as shown below, and it throws java.lang.IllegalArgumentException. However, I can work with other parquet files (such as the NYC taxi parquet files) without any issue. I have copied the full error log as well. Could you please take a look and let me know how to fix this?

import pyspark
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("testPyspark")
    .config("spark.executor.memory", "20g")
    .config("spark.driver.memory", "50g")
    .getOrCreate()
)

df = spark.read.parquet("/data/202301/account_cycle")

df.printSchema()  # works fine
df.count()        # works fine
df.show()         # raises the error below

>>> df.show()

23/07/03 18:07:20 INFO FileSourceStrategy: Pushed Filters:
23/07/03 18:07:20 INFO FileSourceStrategy: Post-Scan Filters:
23/07/03 18:07:20 INFO FileSourceStrategy: Output Data Schema: struct<account_cycle_serial: bigint, account_serial: bigint, account_status: string, currency_code: string, opened_dt: date ... 30 more fields>
23/07/03 18:07:20 INFO MemoryStore: Block broadcast_19 stored as values in memory (estimated size 540.6 KiB, free 26.5 GiB)
23/07/03 18:07:20 INFO MemoryStore: Block broadcast_19_piece0 stored as bytes in memory (estimated size 46.0 KiB, free 26.5 GiB)
23/07/03 18:07:20 INFO BlockManagerInfo: Added broadcast_19_piece0 in memory on mynode:41055 (size: 46.0 KiB, free: 26.5 GiB)
23/07/03 18:07:20 INFO SparkContext: Created broadcast 19 from showString at NativeMethodAccessorImpl.java:0
23/07/03 18:07:20 INFO FileSourceScanExec: Planning scan with bin packing, max size: 134217728 bytes, open cost is considered as scanning 4194304 bytes.
23/07/03 18:07:20 INFO SparkContext: Starting job: showString at NativeMethodAccessorImpl.java:0
23/07/03 18:07:20 INFO DAGScheduler: Got job 13 (showString at NativeMethodAccessorImpl.java:0) with 1 output partitions
23/07/03 18:07:20 INFO DAGScheduler: Final stage: ResultStage 14 (showString at NativeMethodAccessorImpl.java:0)
23/07/03 18:07:20 INFO DAGScheduler: Parents of final stage: List()
23/07/03 18:07:20 INFO DAGScheduler: Missing parents: List()
23/07/03 18:07:20 INFO DAGScheduler: Submitting ResultStage 14 (MapPartitionsRDD[42] at showString at NativeMethodAccessorImpl.java:0), which has no missing parents
23/07/03 18:07:20 INFO MemoryStore: Block broadcast_20 stored as values in memory (estimated size 38.1 KiB, free 26.5 GiB)
23/07/03 18:07:20 INFO MemoryStore: Block broadcast_20_piece0 stored as bytes in memory (estimated size 10.5 KiB, free 26.5 GiB)
23/07/03 18:07:20 INFO BlockManagerInfo: Added broadcast_20_piece0 in memory on mynode:41055 (size: 10.5 KiB, free: 26.5 GiB)
23/07/03 18:07:20 INFO SparkContext: Created broadcast 20 from broadcast at DAGScheduler.scala:1478
23/07/03 18:07:20 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 14 (MapPartitionsRDD[42] at showString at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0))
23/07/03 18:07:20 INFO TaskSchedulerImpl: Adding task set 14.0 with 1 tasks resource profile 0
23/07/03 18:07:20 INFO TaskSetManager: Starting task 0.0 in stage 14.0 (TID 48) (mynode, executor driver, partition 0, PROCESS_LOCAL, 4890 bytes) taskResourceAssignments Map()
23/07/03 18:07:20 INFO Executor: Running task 0.0 in stage 14.0 (TID 48)
23/07/03 18:07:20 INFO FileScanRDD: Reading File path: file:///data/202301/account_cycle/account_cycle-202301-53.parquet, range: 0-134217728, partition values: [empty row]

23/07/03 18:07:20 ERROR Executor: Exception in task 0.0 in stage 14.0 (TID 48)
java.lang.IllegalArgumentException
        at java.nio.Buffer.limit(Buffer.java:275)
        at org.xerial.snappy.Snappy.uncompress(Snappy.java:553)
        at org.apache.parquet.hadoop.codec.SnappyDecompressor.decompress(SnappyDecompressor.java:71)
        at org.apache.parquet.hadoop.codec.NonBlockedDecompressorStream.read(NonBlockedDecompressorStream.java:51)
        at java.io.DataInputStream.readFully(DataInputStream.java:195)
        at java.io.DataInputStream.readFully(DataInputStream.java:169)
        at org.apache.parquet.bytes.BytesInput$StreamBytesInput.toByteArray(BytesInput.java:286)
        at org.apache.parquet.bytes.BytesInput.toByteBuffer(BytesInput.java:237)
        at org.apache.parquet.bytes.BytesInput.toInputStream(BytesInput.java:246)
        at org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainLongDictionary.<init>(PlainValuesDictionary.java:154)
        at org.apache.parquet.column.Encoding$1.initDictionary(Encoding.java:96)
        at org.apache.parquet.column.Encoding$5.initDictionary(Encoding.java:163)
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.<init>(VectorizedColumnReader.java:114)
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:352)
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:293)
        at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:196)
        at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:104)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:191)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:104)
        at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:522)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:350)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:131)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1491)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)

23/07/03 18:07:20 WARN TaskSetManager: Lost task 0.0 in stage 14.0 (TID 48) (mynode executor driver): java.lang.IllegalArgumentException
        [same executor stack trace as above]

23/07/03 18:07:20 ERROR TaskSetManager: Task 0 in stage 14.0 failed 1 times; aborting job
23/07/03 18:07:20 INFO TaskSchedulerImpl: Removed TaskSet 14.0, whose tasks have all completed, from pool
23/07/03 18:07:20 INFO TaskSchedulerImpl: Cancelling stage 14
23/07/03 18:07:20 INFO TaskSchedulerImpl: Killing all running tasks in stage 14: Stage cancelled

23/07/03 18:07:20 INFO DAGScheduler: ResultStage 14 (showString at NativeMethodAccessorImpl.java:0) failed in 0.278 s due to Job aborted due to stage failure: Task 0 in stage 14.0 failed 1 times, most recent failure: Lost task 0.0 in stage 14.0 (TID 48) (mynode executor driver): java.lang.IllegalArgumentException
        [same executor stack trace as above]
Driver stacktrace:

23/07/03 18:07:20 INFO DAGScheduler: Job 13 failed: showString at NativeMethodAccessorImpl.java:0, took 0.280998 s
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_5_piece0 on mynode:41055 in memory (size: 10.5 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_16_piece0 on mynode:41055 in memory (size: 46.0 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_10_piece0 on mynode:41055 in memory (size: 46.0 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_15_piece0 on mynode:41055 in memory (size: 46.9 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_18_piece0 on mynode:41055 in memory (size: 46.9 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_8_piece0 on mynode:41055 in memory (size: 10.5 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_6_piece0 on mynode:41055 in memory (size: 46.9 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_11_piece0 on mynode:41055 in memory (size: 10.5 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_14_piece0 on mynode:41055 in memory (size: 10.5 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_12_piece0 on mynode:41055 in memory (size: 46.9 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_7_piece0 on mynode:41055 in memory (size: 46.0 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_13_piece0 on mynode:41055 in memory (size: 46.0 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_3_piece0 on mynode:41055 in memory (size: 5.5 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_17_piece0 on mynode:41055 in memory (size: 10.5 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_4_piece0 on mynode:41055 in memory (size: 46.0 KiB, free: 26.5 GiB)
23/07/03 18:07:21 INFO BlockManagerInfo: Removed broadcast_9_piece0 on mynode:41055 in memory (size: 46.9 KiB, free: 26.5 GiB)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/nix/store/jkyamgd3bd97bjy8vd4nawlnyz23lk2w-spark-3.2.2/lib/spark-3.2.2/python/pyspark/sql/dataframe.py", line 494, in show
    print(self._jdf.showString(n, 20, vertical))
  File "/nix/store/jkyamgd3bd97bjy8vd4nawlnyz23lk2w-spark-3.2.2/lib/spark-3.2.2/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1321, in __call__
  File "/nix/store/jkyamgd3bd97bjy8vd4nawlnyz23lk2w-spark-3.2.2/lib/spark-3.2.2/python/pyspark/sql/utils.py", line 111, in deco
    return f(*a, **kw)
  File "/nix/store/jkyamgd3bd97bjy8vd4nawlnyz23lk2w-spark-3.2.2/lib/spark-3.2.2/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o64.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14.0 failed 1 times, most recent failure: Lost task 0.0 in stage 14.0 (TID 48) (mynode executor driver): java.lang.IllegalArgumentException

        [same executor stack trace as above]

Driver stacktrace:

        at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2454)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2403)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2402)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2402)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1160)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1160)
        at scala.Option.foreach(Option.scala:407)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1160)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2642)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2584)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2573)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:938)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2235)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2254)
        at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:492)
        at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:445)
        at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:48)
        at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3715)
        at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2728)
        at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3706)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3704)
        at org.apache.spark.sql.Dataset.head(Dataset.scala:2728)
        at org.apache.spark.sql.Dataset.take(Dataset.scala:2935)
        at org.apache.spark.sql.Dataset.getRows(Dataset.scala:287)
        at org.apache.spark.sql.Dataset.showString(Dataset.scala:326)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
        at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
        at java.lang.Thread.run(Thread.java:750)

Caused by: java.lang.IllegalArgumentException
        [same executor stack trace as above]
        ... 1 more
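
If it helps, I can also try reading the part files one at a time to check whether only that one file is affected, something like this (untested sketch):

import glob

# Probe each part file individually; show(1) forces an actual
# data-page read, which evidently count() alone did not.
for path in sorted(glob.glob("/data/202301/account_cycle/*.parquet")):
    try:
        spark.read.parquet(path).show(1)
        print("OK:", path)
    except Exception as exc:
        print("FAILED:", path, type(exc).__name__)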


Thanks,
Elango