Cool~ Thanks Kang! I will check and let you know.
Sorry for the delay; there was an urgent customer issue today.
Best
Martin
2017-07-24 22:15 GMT-07:00 周康 :
* If the file exists but is a directory rather than a regular file, does
* not exist but cannot be created, or cannot be opened for any other
* reason then a FileNotFoundException is thrown.
After looking into FileOutputStream, I saw this note in its Javadoc. So you can
check the executor node first.
You can also check whether there is enough space left on the executor node to
store the shuffle files.
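For the space check, a minimal sketch along these lines (the path argument is an
assumption; point it at whatever spark.local.dir resolves to on the executor,
/tmp by default) reports the usable space:

    import java.io.File

    object DiskCheck {
      def main(args: Array[String]): Unit = {
        // Assumed path: the executor's spark.local.dir (default /tmp)
        val dir = new File(args.headOption.getOrElse("/tmp"))
        // getUsableSpace reports the bytes this JVM can still write on that volume
        val freeGb = dir.getUsableSpace / (1024L * 1024 * 1024)
        println(s"Usable space under ${dir.getPath}: $freeGb GB")
      }
    }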
2017-07-25 13:01 GMT+08:00 周康 :
First, Spark will handle task failures, so if the job ended normally this error
can be ignored.
Second, when using BypassMergeSortShuffleWriter, it first writes the data
file and then writes an index file.
You can check for "Failed to delete temporary index file at" or "fail to rename
file" in the related executor node's logs.
Could anyone shed some light on this issue?
Thanks
Martin
2017-07-21 18:58 GMT-07:00 Martin Peng :
Hi,
I have several Spark jobs, including both batch and streaming jobs, to
process the system logs and analyze them. We are using Kafka as the pipeline
to connect the jobs.
After upgrading to Spark 2.1.0 + Spark Kafka Streaming 010, I found that some
of the jobs (both batch and streaming) throw the exception below