https://issues.apache.org/jira/browse/SPARK-19680
and
https://issues.apache.org/jira/browse/KAFKA-3370
have a good explanation.
Verify that it works correctly with auto.offset.reset set to latest, to rule
out other issues.
Then try providing explicit starting offsets reasonably near the
beginning of
I don't know of any workaround. Maybe ask on the Spark mailing list?
-Matthias
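The two suggestions above can be sketched with the DStream API from the spark-streaming-kafka-0-10 package the thread is using. This is only a sketch under assumptions: the broker address, group id, and starting offset value are placeholders (only the topic/partition, topic1-0, comes from the error message), and running it needs a live Spark and Kafka environment.

```scala
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

object ExplicitOffsetsSketch {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("offset-check"), Seconds(5))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "localhost:9092",          // assumption: your broker address
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "offset-check",            // assumption: any fresh group id
      // Step 1: first verify the job runs with "latest" before pinning offsets.
      "auto.offset.reset"  -> "latest"
    )

    // Step 2: pin an explicit starting offset that actually exists on the broker.
    // The error showed topic1-0 failing at offset 304337, so use an offset the
    // broker reports as valid (0L here is a placeholder, not a recommendation).
    val fromOffsets = Map(new TopicPartition("topic1", 0) -> 0L)

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Assign[String, String](fromOffsets.keys.toList, kafkaParams, fromOffsets)
    )

    stream.foreachRDD(rdd => println(s"batch size: ${rdd.count()}"))
    ssc.start()
    ssc.awaitTermination()
  }
}
```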
On 2/12/18 1:20 PM, Ted Yu wrote:
> Have you looked at SPARK-19888 ?
>
> Please give the full stack trace of the exception you saw.
>
> Cheers
>
> On Mon, Feb 12, 2018 at 12:38 PM, Mina Aslani wrote:
>
>> Hi Matthias,
> Are you referring to https://issues.apache.org/jira/browse/SPARK-19976?
> It doesn't look like that jira was fixed. (e.g
> AFAIK, Spark does not pass this config to the consumer on purpose...
> It's not a Kafka issue -- IIRC, there is a Spark JIRA ticket for this.
>
> -Matthias
>
> On 2/12/18 11:04 AM, Mina Aslani wrote:
> > Hi,
> >
> > I am getting below error
> > Caused by: org.apache.kafka.clients.consumer.OffsetOutOfRangeException:
> > Offsets out of range with no configured reset policy for partitions:
> > {topic1-0=304337}
> > as soon as I submit a spark app to my cluster.
> >
> > I am using below dependency
> > name: 'spark-streaming-kafka-0-10_2.11', version: '2.
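To make the mechanism behind the exception in this thread concrete: when no reset policy is in effect (per the JIRAs cited, Spark deliberately pins the consumer so it cannot silently reset), a fetch at an offset outside the partition's retained range fails instead of jumping to earliest/latest. Below is a pure-Scala model of that check; the type names and offset values are illustrative, not Kafka's actual internals.

```scala
// Illustrative model of Kafka's offset reset check; NOT Kafka's real code.
sealed trait ResetPolicy
case object ResetEarliest extends ResetPolicy
case object ResetLatest extends ResetPolicy
case object NoReset extends ResetPolicy

// Retained offset range of one partition on the broker.
final case class PartitionLog(earliest: Long, latest: Long)

def resolveFetchOffset(requested: Long, log: PartitionLog, policy: ResetPolicy): Long =
  if (requested >= log.earliest && requested <= log.latest) requested
  else policy match {
    case ResetEarliest => log.earliest
    case ResetLatest   => log.latest
    case NoReset =>
      // This is the situation reported in the thread:
      throw new IllegalStateException(
        s"Offsets out of range with no configured reset policy: requested=$requested")
  }

// The committed offset from the error ({topic1-0=304337}) after retention has
// deleted everything below 305000 (a hypothetical earliest offset):
val log = PartitionLog(earliest = 305000L, latest = 400000L)
println(resolveFetchOffset(304337L, log, ResetLatest)) // prints 400000
// resolveFetchOffset(304337L, log, NoReset)           // throws, like the error above
```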