Thanks Eran.

But what about including the Teradata jars in the classpath? That is where I
am having problems. How did you include the JAR for the Oracle JDBC driver?
Did you use %dep, set any env variables, or set any interpreter parameters?
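
For context, here is roughly what I have in zeppelin-env.sh now (jar paths below are placeholders, not my real paths), plus an alternative I have seen suggested -- handing the jars to spark-submit via SPARK_SUBMIT_OPTIONS, assuming a Zeppelin build that honors that variable when SPARK_HOME is set:

```shell
# conf/zeppelin-env.sh -- placeholder paths, adjust to the real jar locations

# What I currently have: prepends the jars to the Spark classpath, but it
# ends up under spark.driver.extraClassPath rather than the plain classpath
export SPARK_CLASSPATH=/path/to/terajdbc4.jar:/path/to/tdgssconfig.jar

# Alternative (untested here): pass the jars to spark-submit directly so
# they are distributed to both the driver and the executors
export SPARK_SUBMIT_OPTIONS="--jars /path/to/terajdbc4.jar,/path/to/tdgssconfig.jar"
```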



On Fri, Jul 31, 2015 at 5:46 PM, IT CTO <[email protected]> wrote:

> I was just saying that you don't need %pyspark; just override the hive
> connection string and use it. We did that for Oracle.
> Eran
>
> On Sat, Aug 1, 2015, 00:33 Dhaval Patel <[email protected]> wrote:
>
>> Hi Eran,
>>
>> That is what I am trying to figure out - how can I put the jar on the class
>> path? I tried adding an export variable in the setup script as mentioned
>> below, and it doesn't seem to get added to the classpath. Or did I
>> misunderstand something?
>>
>> Thanks,
>> Dhaval
>>
>> On Fri, Jul 31, 2015 at 5:14 PM, IT CTO <[email protected]> wrote:
>>
>>> As a simple hack you can put the jar on the class path and then set the
>>> jdbc parameters in the hive interpreter parameters. Then use %hive and just
>>> write SQL against Teradata.
>>> Eran
>>>
>>> On Sat, Aug 1, 2015, 00:08 Dhaval Patel <[email protected]> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am trying to connect to Teradata from Spark and am getting the below
>>>> error about no suitable driver being found.
>>>>
>>>> : java.sql.SQLException: No suitable driver found for
>>>> jdbc:teradata://XXXXXX
>>>>
>>>>
>>>> I have tried adding the jar files using %dep, as well as setting the
>>>> SPARK_CLASSPATH variable in zeppelin-env.sh, but instead of being added
>>>> to the classpath, it ends up under spark.driver.extraClassPath:
>>>> SPARK_CLASSPATH=/...path/terajdbc4.jar:/..path/tdgssconfig.jar
>>>>
>>>>
>>>> Below is the code I tried from Zeppelin:
>>>>
>>>> %pyspark
>>>> df = sqlContext.load(source="jdbc", url="jdbc:teradata://XXXXX,
>>>> user=XXXXXX, password=XXXXXXX", dbtable="XXXXXXX")
>>>>
>>>> I have tried adding the driver and connecting from the shell, and there
>>>> it worked like a charm.
>>>>
>>>> Thanks in advance!
>>>>
>>>> -Dhaval
>>>>
>>>
>>
