They're not simply interchangeable: Sqoop is written to use MapReduce. I actually implemented my own replacement for sqoop-export in Spark, which was extremely simple. It wasn't any faster, because the bottleneck was the receiving database.
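For what it's worth, a minimal sketch of that kind of replacement looks like the following: read a Hive table and push it to an RDBMS through the DataFrame JDBC writer. This is not the code from my job, just an illustration; the table name, JDBC URL, credentials, and partition count are placeholders, and it uses the Spark 2.x SparkSession API rather than the 1.x HiveContext.

import java.util.Properties
import org.apache.spark.sql.{SaveMode, SparkSession}

object SparkExportSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-export-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Source: an existing Hive table (hypothetical name).
    val df = spark.table("default.source_table")

    // Target: a JDBC-reachable database (hypothetical URL and credentials).
    val props = new Properties()
    props.setProperty("user", "dbuser")
    props.setProperty("password", "dbpass")

    // Write parallelism equals the number of DataFrame partitions; the
    // receiving database is usually the real bottleneck, so keep it modest.
    df.coalesce(8)
      .write
      .mode(SaveMode.Append)
      .jdbc("jdbc:postgresql://dbhost:5432/warehouse", "public.target_table", props)

    spark.stop()
  }
}

The point stands either way: the engine doing the export rarely matters, because the target database sets the pace.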
Is your motivation here speed? Or correctness?

On Sat, Apr 30, 2016 at 8:45 AM, Mich Talebzadeh <mich.talebza...@gmail.com> wrote:

> Hi,
>
> What is the simplest way of making sqoop import use the Spark engine as
> opposed to the default MapReduce when putting data into a Hive table? I did
> not see any parameter for this in the sqoop command-line doc.
>
> Thanks
>
> Dr Mich Talebzadeh
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
> http://talebzadehmich.wordpress.com