Hi,
There are four tables, ranging from 10 million to 100 million rows, but they all
have primary keys.
The network is fine, but our Oracle is a RAC cluster and we can only connect to a
designated Oracle node (where we only have a DQ account).
We have a limited window of a few hours to get the required data out.
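Since the tables all have primary keys, a partitioned read keyed on them is what
we have in mind, roughly like the sketch below, extending the one in my first
mail further down. The node, account, schema, table and column names here are
placeholders, not our real ones.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("OracleBounds").getOrCreate()

val url = "jdbc:oracle:thin:@//dqnode:1521/SERVICE"  // placeholder: the one RAC node we can reach
val (user, pass) = ("dq_user", "***")                // placeholder DQ account

// Push a single-row MIN/MAX query down to Oracle to get the real key range
// (fast on an indexed PK). Oracle table aliases take no AS keyword, hence ") t".
val bounds = spark.read
  .format("jdbc")
  .option("url", url)
  .option("dbtable", "(SELECT MIN(ID) AS lo, MAX(ID) AS hi FROM MYSCHEMA.BIG_TABLE) t")
  .option("user", user)
  .option("password", pass)
  .load()
  .collect()(0)

// Oracle NUMBER columns surface as decimals in Spark.
val (lo, hi) = (bounds.getDecimal(0).longValue, bounds.getDecimal(1).longValue)

// One JDBC connection per partition, all against the designated node.
val df = spark.read
  .format("jdbc")
  .option("url", url)
  .option("dbtable", "MYSCHEMA.BIG_TABLE")
  .option("user", user)
  .option("password", pass)
  .option("partitionColumn", "ID")
  .option("lowerBound", lo.toString)
  .option("upperBound", hi.toString)
  .option("numPartitions", "16")   // balance speed against load on the single node
  .option("fetchsize", "10000")    // the Oracle driver defaults to fetching 10 rows at a time
  .load()

Since we can only hit the one designated node, numPartitions is the knob we
would use to trade load speed against load on that node within our window.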
Thanks
On Sunday, 14 August 2016, 21:07, Mich Talebzadeh
<[email protected]> wrote:
How big are your tables, and is there any issue with the network between your
Spark nodes and your Oracle DB that could slow things down?
HTH
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
http://talebzadehmich.wordpress.com
Disclaimer: Use it at your own risk. Any and all responsibility for any loss,
damage or destruction of data or any other property which may arise from relying
on this email's technical content is explicitly disclaimed. The author will in
no case be liable for any monetary damages arising from such loss, damage or
destruction.
On 14 August 2016 at 20:50, Ashok Kumar <[email protected]> wrote:
Hi Gurus,
I have a few large tables in an RDBMS (ours is Oracle). We want to access these
tables through Spark JDBC.
What is the quickest way of getting the data into a Spark DataFrame, say with
multiple connections from Spark?
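For example, would a partitioned JDBC read along these lines be the right
approach? The URL, credentials and table names below are made up for
illustration.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("OracleBulkRead")
  .getOrCreate()

// Spark splits the key range [lowerBound, upperBound] into numPartitions
// slices and opens one JDBC connection per slice, so the read runs in parallel.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//dbhost:1521/SERVICE")  // made-up URL
  .option("dbtable", "MYSCHEMA.BIG_TABLE")                   // made-up table
  .option("user", "scott")                                   // made-up account
  .option("password", "tiger")
  .option("partitionColumn", "ID")   // a numeric column; the PK is the usual choice
  .option("lowerBound", "1")
  .option("upperBound", "100000000")
  .option("numPartitions", "8")
  .load()

println(df.count())   // forces the read; in practice we would transform and save instead

As I understand it, rows outside the bounds are not filtered out; they just land
in the first or last partition, so the bounds only control where the range is
split.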
thanking you