It's worth trying, though you might hit a number of technical issues;
for example, I'm not sure that Spark's JDBCRelation fully supports the
Oracle dialect (e.g., type mappings).
I'm also not sure Spark can handle the queries you use in your use
case.
At a minimum, you should check whether Spark supports the queries and types you need.
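As a starting point for that check, here is a minimal sketch of pulling an Oracle table through Spark's JDBC data source and appending it to an existing Hive table, using the Spark 1.x API (HiveContext). The connection URL, credentials, table names, and the `last_updated` incremental predicate are all hypothetical placeholders for your environment:

```scala
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.hive.HiveContext

// Assumes an existing SparkContext named sc; all connection details
// below are hypothetical and must be adapted to your environment.
val sqlContext = new HiveContext(sc)

// Read one Oracle table via JDBC. Passing a subquery as "dbtable"
// lets you push an incremental filter down to Oracle; here
// last_updated is a hypothetical change-tracking column.
val df = sqlContext.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
  .option("dbtable",
    "(SELECT * FROM src_table WHERE last_updated > DATE '2016-04-01')")
  .option("user", "app_user")
  .option("password", "secret")
  .option("driver", "oracle.jdbc.OracleDriver")
  .load()

// Append into an existing Hive table (create it first if needed).
df.write.mode(SaveMode.Append).saveAsTable("hive_db.dst_table")
```

Running something like this against one of your tables would quickly surface any unsupported Oracle types or query shapes.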

// maropu

On Wed, Apr 6, 2016 at 1:47 PM, ayan guha <guha.a...@gmail.com> wrote:

> Hi
>
> Thanks for the reply. My use case is to query ~40 tables from Oracle (using
> indexed and incremental reads only) and add the data to existing Hive
> tables. It would also be good to have an option to create the Hive table,
> driven by job-specific configuration.
>
> What do you think?
>
> Best
> Ayan
>
> On Wed, Apr 6, 2016 at 2:30 PM, Takeshi Yamamuro <linguin....@gmail.com>
> wrote:
>
>> Hi,
>>
>> It depends on your Sqoop use case.
>> What does it look like?
>>
>> // maropu
>>
>> On Wed, Apr 6, 2016 at 1:26 PM, ayan guha <guha.a...@gmail.com> wrote:
>>
>>> Hi All
>>>
>>> Asking for opinions: is it possible/advisable to use Spark to replace what
>>> Sqoop does? Are there any existing projects along similar lines?
>>>
>>> --
>>> Best Regards,
>>> Ayan Guha
>>>
>>
>>
>>
>> --
>> ---
>> Takeshi Yamamuro
>>
>
>
>
> --
> Best Regards,
> Ayan Guha
>



-- 
---
Takeshi Yamamuro
