Hi All,

I'm new to Spark and Scala, having only recently started using the language, and I 
love it, but I've hit a small coding problem while converting my existing MapReduce 
code from Java to Spark...

In Java, I create a class extending org.apache.hadoop.mapreduce.Mapper and 
override its setup(), map() and cleanup() methods.
In Spark, however, there is no setup() method, so I moved the setup() code into 
map(), but it performs badly.
The reason is that setup() creates a database connection once, map() executes SQL 
queries over that connection, and cleanup() closes it; with the setup() code moved 
into map(), a new connection is opened for every single record.
Could someone tell me how to do it in Spark?
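To make the pattern concrete, the closest thing I found is RDD.mapPartitions, which lets me do per-partition work around the record loop. Below is only a sketch of what I am trying to achieve; the JDBC URL, table, and query are placeholders, not my real code:

```scala
import java.sql.{Connection, DriverManager}
import org.apache.spark.rdd.RDD

// Hypothetical JDBC URL and query, standing in for my real settings.
val jdbcUrl = "jdbc:mysql://dbhost/mydb"

def lookup(keys: RDD[String]): RDD[(String, String)] =
  keys.mapPartitions { iter =>
    // "setup": open one connection per partition, not per record
    val conn: Connection = DriverManager.getConnection(jdbcUrl)
    val stmt = conn.prepareStatement("SELECT value FROM t WHERE key = ?")

    // "map": run the query for each record in this partition
    val out = iter.map { key =>
      stmt.setString(1, key)
      val rs = stmt.executeQuery()
      val value = if (rs.next()) rs.getString(1) else null
      rs.close()
      (key, value)
    }.toList  // materialize before closing, since iter is lazy

    // "cleanup": close the statement and connection once per partition
    stmt.close()
    conn.close()
    out.iterator
  }
```

Is this the idiomatic way to do it, or is there a better equivalent of setup()/cleanup()?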

Best regards,
Henry Hung

________________________________
The privileged confidential information contained in this email is intended for 
use only by the addressees as indicated by the original sender of this email. 
If you are not the addressee indicated in this email or are not responsible for 
delivery of the email to such a person, please kindly reply to the sender 
indicating this fact and delete all copies of it from your computer and network 
server immediately. Your cooperation is highly appreciated. It is advised that 
any unauthorized use of confidential information of Winbond is strictly 
prohibited; and any information in this email irrelevant to the official 
business of Winbond shall be deemed as neither given nor endorsed by Winbond.
