It somewhat depends on what you mean by "optimized". There is a good thread on the topic at http://search-hadoop.com/m/q3RTtJor7QBnWT42/Spark+and+SQL+server/v=threaded
If you have an archival-type strategy, you could do daily BCP extracts to load the data into HDFS / S3 / etc. This would have minimal impact on SQL Server during the extracts (in that scenario, that was of primary importance).

On Thu, Jul 23, 2015 at 16:42 vinod kumar <vinodsachin...@gmail.com> wrote:
> Hi Everyone,
>
> I need to use a table from MS SQL Server in Spark. Could anyone please
> share the optimized way to do that?
>
> Thanks in advance,
> Vinod
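For the direct-read path (as opposed to BCP extracts), Spark's JDBC data source can pull a SQL Server table straight into a DataFrame. Here is a minimal sketch; the host, database, table, and credential values are placeholders, and it assumes the Microsoft JDBC driver jar is on the Spark classpath:

```python
# Sketch: configuring Spark's JDBC data source for SQL Server.
# All connection values below are hypothetical placeholders.
def sqlserver_jdbc_options(host, database, table, user, password):
    """Build the option map Spark's DataFrameReader expects for JDBC."""
    return {
        "url": f"jdbc:sqlserver://{host}:1433;databaseName={database}",
        "dbtable": table,
        "user": user,
        "password": password,
        "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
    }

# With a live SparkSession this would be used roughly as:
#   df = (spark.read.format("jdbc")
#              .options(**sqlserver_jdbc_options("dbhost", "sales",
#                                                "dbo.Orders", "etl", "pw"))
#              .load())
opts = sqlserver_jdbc_options("dbhost", "sales", "dbo.Orders", "etl", "pw")
print(opts["url"])
```

Note that a plain JDBC read puts query load on SQL Server itself, which is exactly what the BCP-extract approach above avoids; partitioning options (partitionColumn, lowerBound, upperBound, numPartitions) can spread that load across parallel reads if you go this route.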