Could you share whether you've been able to run the streaming job over a
long period of time? I did something very similar, and the executors seemed
to run out of memory (how fast depends on how much data/memory they get).
Just curious what your experience is.
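For context, the per-partition connection pattern described in the quoted
message below can be sketched as follows. This is a simulation without
Spark: `DummyPool` and `write_partition` are illustrative names, not taken
from the linked gist, but the structure mirrors what `foreachPartition`
does — one pool per partition, reused across all records in that partition.

```python
class DummyPool:
    """Stand-in for a DB connection pool; counts how many pools get built."""
    instances = 0

    def __init__(self):
        DummyPool.instances += 1
        self.writes = 0

    def write(self, record):
        # Stand-in for an INSERT through a pooled connection.
        self.writes += 1


def write_partition(records):
    # foreachPartition-style: build the pool once per partition,
    # then reuse it for every record, instead of once per record.
    pool = DummyPool()
    for record in records:
        pool.write(record)


# Three simulated partitions of 100 records each.
partitions = [range(100), range(100), range(100)]
for part in partitions:
    write_partition(part)

print(DummyPool.instances)  # one pool per partition, not per record
```

The point of the pattern is that connection setup cost (and pool count)
scales with the number of partitions rather than the number of records.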

On Fri, Sep 26, 2014 at 12:31 AM, maddenpj <madde...@gmail.com> wrote:

> Yup it's all in the gist:
> https://gist.github.com/maddenpj/5032c76aeb330371a6e6
>
> Lines 6-9 deal with setting up the driver specifically. This sets the
> driver up once per partition, which keeps the connection pool around
> instead of creating a connection per record.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-No-parallelism-in-writing-to-database-MySQL-tp15174p15202.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>