One of the reasons, to my mind, is to avoid MapReduce applications entirely
during ingestion, if possible. Also, I could then use a Spark standalone
cluster to ingest even when my Hadoop cluster is heavily loaded. What do you
guys think?
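
To make this concrete, here's a minimal sketch of what a Sqoop-style
incremental import could look like with the Spark JDBC data source
(Spark 1.4+). All the names below -- JDBC URL, table, columns, the
watermark value -- are placeholders, not a tested implementation:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("oracle-to-hive"))
    val sqlContext = new HiveContext(sc)  // Hive support, to write into Hive tables

    // In a real job the watermark would come from job metadata, not a literal
    val lastWatermark = "2016-04-01 00:00:00"

    val df = sqlContext.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
      .option("driver", "oracle.jdbc.OracleDriver")
      .option("user", "etl_user")
      .option("password", "***")
      // Incremental slice pushed down to Oracle as a subquery
      .option("dbtable",
        s"(SELECT * FROM my_schema.orders WHERE updated_at > TIMESTAMP '$lastWatermark') q")
      // Parallel read over 8 JDBC connections, like Sqoop's --split-by / --num-mappers
      .option("partitionColumn", "order_id")
      .option("lowerBound", "1")
      .option("upperBound", "10000000")
      .option("numPartitions", "8")
      .load()

    // Append the new rows to an existing Hive table
    df.write.mode("append").insertInto("warehouse.orders")

A config-driven version for the multi-table case is sketched below the
quoted thread.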

On Wed, Apr 6, 2016 at 3:13 PM, Jörn Franke <jornfra...@gmail.com> wrote:

> Why do you want to reimplement something which is already there?
>
> On 06 Apr 2016, at 06:47, ayan guha <guha.a...@gmail.com> wrote:
>
> Hi
>
> Thanks for the reply. My use case is to query ~40 tables from Oracle (using
> index-based and incremental loads only) and add the data to existing Hive
> tables. It would also be good to have an option to create the Hive tables,
> driven by job-specific configuration.
>
> What do you think?
>
> Best
> Ayan
>
> On Wed, Apr 6, 2016 at 2:30 PM, Takeshi Yamamuro <linguin....@gmail.com>
> wrote:
>
>> Hi,
>>
>> It depends on your use case for Sqoop.
>> What's it like?
>>
>> // maropu
>>
>> On Wed, Apr 6, 2016 at 1:26 PM, ayan guha <guha.a...@gmail.com> wrote:
>>
>>> Hi All
>>>
>>> Asking for opinions: is it possible/advisable to use Spark to replace what
>>> Sqoop does? Are there any existing projects along similar lines?
>>>
>>> --
>>> Best Regards,
>>> Ayan Guha
>>>
>>
>>
>>
>> --
>> ---
>> Takeshi Yamamuro
>>
>
>
>
> --
> Best Regards,
> Ayan Guha
>
>
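
For the config-driven, multi-table part of the use case quoted above,
something like the sketch below could work (the config shape and all names
are made up, it assumes every table has an updated_at watermark column, and
it reuses the sqlContext from the earlier sketch):

    import org.apache.spark.sql.DataFrame

    // Hypothetical per-table configuration; a real job would load this
    // from a config file rather than hard-coding it.
    case class TableConf(source: String, target: String, splitBy: String,
                         watermark: String, createIfMissing: Boolean)

    def ingest(t: TableConf): DataFrame =
      sqlContext.read.format("jdbc")
        .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")  // placeholder
        .option("user", "etl_user")
        .option("password", "***")
        .option("dbtable",
          s"(SELECT * FROM ${t.source} WHERE updated_at > TIMESTAMP '${t.watermark}') q")
        .option("partitionColumn", t.splitBy)
        .option("lowerBound", "1")
        .option("upperBound", "10000000")
        .option("numPartitions", "8")
        .load()

    val tables = Seq(
      TableConf("my_schema.orders", "warehouse.orders", "order_id",
                "2016-04-01 00:00:00", createIfMissing = false),
      TableConf("my_schema.customers", "warehouse.customers", "cust_id",
                "2016-04-01 00:00:00", createIfMissing = true)
      // ... ~40 entries in the real job
    )

    for (t <- tables) {
      val df = ingest(t)
      if (t.createIfMissing)
        df.write.mode("append").saveAsTable(t.target)  // creates the Hive table if absent
      else
        df.write.mode("append").insertInto(t.target)   // table must already exist
    }

This only shows the shape of the job; error handling, watermark bookkeeping,
and per-table partition bounds would all need real treatment.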


-- 
Best Regards,
Ayan Guha
