>>>> .rdd
>>>> .map {
>>>>   case Row(a: String, b: String) => Processor.process(a, b)
>>>> }
>>>> .cache()
>>>> }
>>>>
>>>> The process() method uses the client I initialised in the driver
>>>> code. Currently this works and I am able to send millions of data
>>>> points. I was just wondering how it works internally. Does it share
>>>> the db connection or create a new connection every time?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Understanding-how-spark-share-db-connections-created-on-driver-tp28806.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
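What usually happens here, as a sketch rather than a statement of Spark's documented internals: when the map closure references a client built on the driver, Spark serialises the closure (client included) and ships it with each task, so every executor deserialises its own copy and the driver's open connection is not shared. Clients that "work" this way typically reopen their sockets lazily after deserialisation. The common alternative is a per-JVM lazy singleton (or `mapPartitions` with one connection per partition). The names below (`DbClient`, the endpoint string) are hypothetical, for illustration only:

```scala
// Hypothetical stand-in for a real database client.
class DbClient(endpoint: String) {
  def send(a: String, b: String): Unit = ()  // no-op for the sketch
}

// Keeping the client in an object means each executor JVM builds its own
// connection lazily on first access, instead of serialising the driver's.
object ConnectionPool {
  var created = 0                 // counts how many clients this JVM built
  lazy val client: DbClient = {   // initialised once per JVM, on first use
    created += 1
    new DbClient("db-host:5432")  // hypothetical endpoint
  }
}

// Inside the map, each executor JVM then reuses its own single client:
//   rdd.map { case Row(a: String, b: String) =>
//     ConnectionPool.client.send(a, b)
//   }
```

Calling `ConnectionPool.client` repeatedly in one JVM constructs the client only once, which is the property that makes this pattern cheaper than opening a connection per record.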