I am using the latest Spark version, 1.6.
I have increased the maximum number of open files using this command:
*sysctl -w fs.file-max=3275782*
I also increased the limit for the user who runs the Spark job by updating
the /etc/security/limits.conf file. The soft limit is 1024 and the hard limit
is 65536.
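
For reference, a minimal sketch of what the nofile entries in
/etc/security/limits.conf look like for those values (the username
sparkuser is only a placeholder for whichever account launches the job):

    # placeholder username; use the account that runs the Spark job
    sparkuser   soft   nofile   1024
    sparkuser   hard   nofile   65536

These entries normally take effect only for new login sessions, so the
shell or service that launches the job has to be restarted after editing
the file.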
We have similar jobs consuming from Kafka and writing to Elasticsearch, and the
culprit is usually not enough memory for the executor or driver, or not enough
executors in general to process the job. Try using dynamic allocation if you're
not too sure about how many cores/executors you actually need.
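
Not a definitive recipe, but a minimal sketch of the spark-submit flags
involved; the jar name and the executor counts/memory are placeholders you
would tune, and the external shuffle service must be running on the workers:

    spark-submit \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.shuffle.service.enabled=true \
      --conf spark.dynamicAllocation.minExecutors=2 \
      --conf spark.dynamicAllocation.maxExecutors=20 \
      --conf spark.executor.memory=4g \
      your-job.jar

With this, Spark adds or removes executors based on the backlog of pending
tasks instead of requiring you to guess a fixed executor count up front.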
Which version of Spark are you using?
How did you increase the open file limit?
Which operating system do you use?
Please see Example 6. ulimit Settings on Ubuntu under:
http://hbase.apache.org/book.html#basic.prerequisites
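
If it helps, one way to check which limits the running processes actually
got on a Linux box (the process name pattern is an assumption about the
standard Spark executor JVM):

    # soft and hard open-file limits of the current shell
    ulimit -Sn
    ulimit -Hn

    # open-file limit of a running executor (process name is an assumption)
    cat /proc/$(pgrep -f CoarseGrainedExecutorBackend | head -1)/limits | grep 'open files'

The soft limit is what a process gets by default, so a soft limit of 1024
can still cause "too many open files" even when the hard limit is 65536.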
On Sun, Apr 24, 2016 at 2:34 AM, fanooos wrote:
> I have a spark s