The Spark version is 2.2, and I think I am running into this issue,
https://issues.apache.org/jira/browse/SPARK-18016, as the dataset schema is
very large and deeply nested.
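SPARK-18016 tracks failures where the code Spark generates for very wide or deeply nested schemas exceeds JVM class-file limits. One commonly suggested mitigation (a sketch here, not a confirmed fix for this particular job) is to disable whole-stage code generation so Spark falls back to the interpreted evaluation path; the jar and class names below are placeholders:

```shell
# Hedged workaround sketch: turn off whole-stage codegen so Spark does not
# generate one large class per stage for the wide/nested schema.
# `com.example.StreamingJob` and `streaming-job.jar` are hypothetical names.
spark-submit \
  --class com.example.StreamingJob \
  --conf spark.sql.codegen.wholeStage=false \
  streaming-job.jar
```

This usually costs some per-row performance, so it is worth treating as a stopgap while the schema or Spark version issue is addressed.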
From: ARAVIND SETHURATHNAM
Date: Monday, June 18, 2018 at 4:00 PM
To: "user@spark.apache.org"
Subject: Spark batch job:
Hi,
We have several structured streaming jobs (Spark version 2.2.0) consuming from
Kafka and writing to S3. They ran fine for a month, but since yesterday a few
jobs have started failing, and I see the exception below in the failed jobs' logs:
```
Tried to fetch 473151075 but the returned record offs
```