Hi All,

I'm seeing the WARNING messages below on stdout when using Spark SQL in
Spark 1.1.0 -- is this warning a known issue? I don't see any open JIRA
tickets for it.

Sep 22, 2014 10:13:54 AM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Sep 22, 2014 10:13:54 AM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 6 ms. row count = 453875
Sep 22, 2014 10:13:55 AM WARNING: parquet.hadoop.ParquetRecordReader: Can not initialize counter due to context is not a instance of TaskInputOutputContext, but is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
Sep 22, 2014 10:13:55 AM INFO: parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 454101 records.
Sep 22, 2014 10:13:55 AM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Sep 22, 2014 10:13:55 AM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 6 ms. row count = 454101
Sep 22, 2014 10:13:55 AM WARNING: parquet.hadoop.ParquetRecordReader: Can not initialize counter due to context is not a instance of TaskInputOutputContext, but is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl


Thanks!
Andrew
