>> In case we want to use a pseudo file-system (like S3) which does not
>> support append, what are our options? I am not familiar with the code yet,
>> but is it possible to generate a new file whenever a conflict of this sort
>> happens?
>>
>> Thanks again, Arijit
________________________________
From: Dirceu Semighini Filho
Sent: Thursday, November 17, 2016 6:50:28 AM
To: Arijit
Cc: Tathagata Das; user@spark.apache.org
Subject: Re: Spark Streaming Data loss on failure to write BlockAdditionEvent failure to WAL
Hi Arijit,
Have you found a solution for this? I'
> Thanks again, Arijit
> --
> From: Tathagata Das
> Sent: Monday, November 7, 2016 7:59:06 PM
> To: Arijit
> Cc: user@spark.apache.org
> Subject: Re: Spark Streaming Data loss on failure to write BlockAdditionEvent failure to WAL
>
> For WAL in Spark to work with HDFS, the HDFS version you are running must
> support file appends. Contact your HDFS package/installation provider to
> figure out whether this is supported by your HDFS installation.
In case we want to use a pseudo file-system (like S3) which does not support
append, what are our options? I am not familiar with the code yet, but is it
possible to generate a new file whenever a conflict of this sort happens?
Thanks again, Arijit
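
If I read the Spark 1.6 configuration docs correctly, there is a pair of
flags for exactly this case: closeFileAfterWrite makes the WAL close the
current file after every write and roll to a new one instead of appending,
which is the usual workaround for stores like S3 that lack append. A minimal
sketch (the app name, batch interval, and bucket path are made up):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("wal-on-s3-sketch") // hypothetical app name
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")
  // roll to a new WAL file per write instead of appending
  .set("spark.streaming.receiver.writeAheadLog.closeFileAfterWrite", "true")
  .set("spark.streaming.driver.writeAheadLog.closeFileAfterWrite", "true")

val ssc = new StreamingContext(conf, Seconds(10))
ssc.checkpoint("s3a://my-bucket/checkpoints") // hypothetical bucket/path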
From: Tathagata Das
Sent: Monday, November 7, 2016 7:59:06 PM
To: Arijit
Cc: user@spark.apache.org
Subject: Re: Spark Streaming Data loss on failure to write BlockAdditionEvent failure to WAL
For WAL in Spark to work with HDFS, the HDFS version you are running must
support file appends. Contact your HDFS package/installation provider to
figure out whether this is supported by your HDFS installation.
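
A quick way to probe this from spark-shell, using only the stock Hadoop
FileSystem API (a sketch; the probe path is made up):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new Configuration())
val probe = new Path("/tmp/wal-append-probe") // hypothetical scratch path
fs.create(probe, true).close() // create an empty file to append to
try {
  fs.append(probe).close() // throws if the filesystem rejects appends
  println("append is supported")
} catch {
  case e: Exception => println(s"append is NOT supported: ${e.getMessage}")
} finally {
  fs.delete(probe, false)
}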
On Mon, Nov 7, 2016 at 2:04 PM, Arijit wrote:
Hello All,
We are using Spark 1.6.2 with WAL enabled and encountering data loss when the
following exception/warning happens. We are using HDFS as our checkpoint
directory.
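
For context, a setup of the kind described here boils down to roughly the
following (a sketch; the app name, batch interval, and checkpoint path are
made up):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("wal-example") // hypothetical app name
  // enable the receiver write-ahead log; requires a checkpoint directory
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")

val ssc = new StreamingContext(conf, Seconds(5))
ssc.checkpoint("hdfs:///user/spark/checkpoints") // hypothetical HDFS path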
Questions are:
1. Is this a bug in Spark or an issue with our configuration? Source looks
like the following. Which file