Thanks for the info, Michael.  I see this in a few other places in the
Impala+Parquet context, but a quick scan didn't reveal any leads on
this warning.  I'll ignore it for now.
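For anyone who finds the noise distracting: the timestamps in the messages quoted below look like java.util.logging output, and the old parquet-mr library did log through JUL under the `parquet` namespace. Assuming that's the case here, a minimal sketch for raising that logger's threshold before the first Parquet read (class name is just for illustration):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SilenceParquetLogs {
    public static void main(String[] args) {
        // Raise the threshold on the "parquet" JUL logger so INFO/WARNING
        // messages from parquet.hadoop.* are suppressed; SEVERE still shows.
        // Keep a strong reference so the logger isn't garbage-collected.
        Logger parquetLogger = Logger.getLogger("parquet");
        parquetLogger.setLevel(Level.SEVERE);

        // Child loggers inherit the effective level from "parquet",
        // so the reader's WARNING messages are no longer loggable.
        Logger reader = Logger.getLogger("parquet.hadoop.ParquetRecordReader");
        System.out.println(reader.isLoggable(Level.WARNING)); // prints "false"
    }
}
```

This only hides the messages; it doesn't change the counter behavior the warning is complaining about.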

Andrew

On Mon, Sep 22, 2014 at 12:16 PM, Michael Armbrust <mich...@databricks.com>
wrote:

> These are coming from the parquet library and as far as I know can be
> safely ignored.
>
> On Mon, Sep 22, 2014 at 3:27 AM, Andrew Ash <and...@andrewash.com> wrote:
>
>> Hi All,
>>
>> I'm seeing the WARNINGs below in stdout when using Spark SQL in Spark 1.1.0 --
>> is this a known issue?  I don't see any open Jira tickets for it.
>>
>> Sep 22, 2014 10:13:54 AM INFO:
>> parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
>> Sep 22, 2014 10:13:54 AM INFO:
>> parquet.hadoop.InternalParquetRecordReader: block read in memory in 6 ms.
>> row count = 453875
>> Sep 22, 2014 10:13:55 AM WARNING: parquet.hadoop.ParquetRecordReader: Can
>> not initialize counter due to context is not a instance of
>> TaskInputOutputContext, but is
>> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
>> Sep 22, 2014 10:13:55 AM INFO:
>> parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will
>> read a total of 454101 records.
>> Sep 22, 2014 10:13:55 AM INFO:
>> parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
>> Sep 22, 2014 10:13:55 AM INFO:
>> parquet.hadoop.InternalParquetRecordReader: block read in memory in 6 ms.
>> row count = 454101
>> Sep 22, 2014 10:13:55 AM WARNING: parquet.hadoop.ParquetRecordReader: Can
>> not initialize counter due to context is not a instance of
>> TaskInputOutputContext, but is
>> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
>>
>>
>> Thanks!
>> Andrew
>>
>
>
