or state mgmt is used.
>
> I've pored over the documentation and tried setting the following
> properties, but they have not helped.
> As a workaround we're using a cron script that periodically cleans up
> old files, but this has a bad smell to it.
>
> SPARK_WORKER_OPTS in spark-env.sh on every worker node:
> spark.worker.cleanup.enabled true
> spark.worker.cleanup.interval
> spark.worker.cleanup.appDataTtl
>
> Also tried on the driver side:
> spark.cleaner.ttl
> spark.shuffle.consolidateFiles true
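For reference, the worker-side cleanup settings listed above are passed to the worker JVM as system properties through SPARK_WORKER_OPTS. A minimal spark-env.sh sketch (the interval and TTL values here are illustrative, not taken from the original message) might look like:

```shell
# spark-env.sh on each worker node (sketch; values are illustrative).
# cleanup.enabled turns on periodic cleanup of application work dirs,
# cleanup.interval is how often the cleanup check runs (seconds),
# cleanup.appDataTtl is how long per-application data is kept (seconds).
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
  -Dspark.worker.cleanup.interval=1800 \
  -Dspark.worker.cleanup.appDataTtl=86400"
```

Note that, per the standalone-mode docs, this cleanup only covers the work directories of *stopped* applications, which may be why it does not help for a long-running streaming job.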
>
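The driver-side settings mentioned above would normally go in spark-defaults.conf rather than code. A sketch (the 3600-second TTL is an illustrative value, not from the original message) could be:

```shell
# spark-defaults.conf sketch for the driver (values illustrative):
# spark.cleaner.ttl periodically forgets old metadata (seconds);
# spark.shuffle.consolidateFiles reduces the number of shuffle files
# created, which helps with inode exhaustion.
cat > spark-defaults.conf <<'EOF'
spark.cleaner.ttl               3600
spark.shuffle.consolidateFiles  true
EOF
```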
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-Worker-runs-out-of-inodes-tp22355.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
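The cron-script workaround the poster describes might look roughly like this; the work-directory path and the 7-day retention are assumptions, not details from the thread:

```shell
# Sketch of the cron-style workaround (path and 7-day retention are
# assumptions): delete per-application work directories older than
# 7 days from a Spark worker's work directory.
clean_spark_work() {
  # $1: the worker's work directory, e.g. /opt/spark/work
  find "$1" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +
}
# Run from a small script invoked daily by cron, e.g.:
#   0 3 * * *  /usr/local/bin/clean-spark-work.sh
```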
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-Worker-runs-out-of-inodes-tp22355.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.