Hi Kelvin,

Yes. I am creating an uber jar with the Postgres driver included, but 
nevertheless tried both the --jars and --driver-class-path flags. It didn't help.
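For completeness, the submission command being attempted looks roughly like the following sketch; the jar name, application class, and master URL are placeholders, not values from this thread:

```shell
# Sketch of a spark-submit invocation that ships the JDBC driver to both the
# driver and the executors. All names below are placeholders.
spark-submit \
  --class com.example.MyApp \
  --master spark://master-host:7077 \
  --jars postgresql-9.3-1102.jdbc41.jar \
  --driver-class-path postgresql-9.3-1102.jdbc41.jar \
  my-app-uber.jar
```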

Interestingly, I can’t use BoneCP even in the driver program when I run my 
application with spark-submit. I am getting the same exception when the 
application initializes BoneCP before creating SparkContext. It looks like 
Spark is loading a different version of the Postgres JDBC driver than the one 
that I am linking.
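One workaround that sometimes helps with "No suitable driver" errors caused by classloader mismatches is to bypass java.sql.DriverManager (which refuses drivers loaded by a different classloader than the caller's) and obtain a connection from the driver class directly. This is only a sketch for plain JDBC; the host, database, and credentials are placeholders, and it does not by itself fix pooling libraries that call DriverManager internally:

```scala
import java.util.Properties

// Instantiate the Postgres driver directly instead of going through
// DriverManager. Host, port, database, and credentials are placeholders.
val driver = new org.postgresql.Driver()
val props = new Properties()
props.setProperty("user", "dbuser")
props.setProperty("password", "secret")
val conn = driver.connect("jdbc:postgresql://hostname:5432/dbname", props)
```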

Mohammed

From: Kelvin Chu [mailto:2dot7kel...@gmail.com]
Sent: Thursday, February 19, 2015 7:56 PM
To: Mohammed Guller
Cc: user@spark.apache.org
Subject: Re: using a database connection pool to write data into an RDBMS from 
a Spark application

Hi Mohammed,

Did you use --jars to specify your jdbc driver when you submitted your job? 
Take a look at this link: 
http://spark.apache.org/docs/1.2.0/submitting-applications.html

Hope this helps!

Kelvin

On Thu, Feb 19, 2015 at 7:24 PM, Mohammed Guller 
<moham...@glassbeam.com<mailto:moham...@glassbeam.com>> wrote:
Hi –
I am trying to use BoneCP (a database connection pooling library) to write data 
from my Spark application to an RDBMS. The database inserts are inside a 
foreachPartition code block. I am getting this exception when the code tries to 
insert data using BoneCP:

java.sql.SQLException: No suitable driver found for 
jdbc:postgresql://hostname:5432/dbname
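For context, the per-partition pooling pattern being described is roughly the following sketch; the pool settings, table, URL, and credentials are placeholders, and the driver jar must be visible on the executor classpath for any of it to work:

```scala
import com.jolbox.bonecp.{BoneCP, BoneCPConfig}

rdd.foreachPartition { rows =>
  // This closure runs on the worker JVM, so the Postgres driver and BoneCP
  // jars must be on the executor classpath. All settings are placeholders.
  Class.forName("org.postgresql.Driver") // register the driver explicitly
  val config = new BoneCPConfig()
  config.setJdbcUrl("jdbc:postgresql://hostname:5432/dbname")
  config.setUsername("dbuser")
  config.setPassword("secret")
  val pool = new BoneCP(config)
  val conn = pool.getConnection
  try {
    val stmt = conn.prepareStatement("INSERT INTO mytable (col) VALUES (?)")
    rows.foreach { row =>
      stmt.setString(1, row.toString)
      stmt.executeUpdate()
    }
  } finally {
    conn.close()
    pool.shutdown() // one pool per partition; shut it down when done
  }
}
```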

I tried explicitly loading the Postgres driver on the worker nodes by adding 
the following line inside the foreachPartition code block:

Class.forName("org.postgresql.Driver")

It didn’t help.

Has anybody been able to get a database connection pool library to work with 
Spark? If you got it working, could you please share the steps?

Thanks,
Mohammed
