Pretty simple, assuming you have ojdbc6.jar in $SQOOP_HOME/lib:
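
If the driver is not there yet, it is just a copy into Sqoop's lib directory (a sketch, assuming you have already downloaded ojdbc6.jar from Oracle and it sits in the current directory):

cp ojdbc6.jar $SQOOP_HOME/lib/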

sqoop import --connect "jdbc:oracle:thin:@rhes564:1521:mydb" \
        --username hddtester -P \
        --query "select * from hddtester.t where \$CONDITIONS" \
        --split-by object_id \
        --hive-import \
        --create-hive-table \
        --hive-table "OracleHadoop.t" \
        --target-dir "t"
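
Once that finishes, a quick sanity check from the Hive CLI (assuming the table name used above and that hive is on your path):

hive -e "SELECT COUNT(*) FROM OracleHadoop.t"

The second example skips Hive and does a compressed, incremental Avro import instead: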

sqoop import --connect "jdbc:oracle:thin:@rhes564:1521:mydb" \
--username hddtester -P \
--query "select * from hddtester.t where \$CONDITIONS" \
--split-by object_id \
--as-avrodatafile \
--target-dir "/tmp/myt" \
--compress \
--num-mappers 20 \
--incremental append \
--check-column object_id \
--last-value 1000000
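
For repeated incremental runs you can let Sqoop track --last-value itself by wrapping the import in a saved job (a sketch; the job name t_incr is made up):

sqoop job --create t_incr -- import \
--connect "jdbc:oracle:thin:@rhes564:1521:mydb" \
--username hddtester -P \
--query "select * from hddtester.t where \$CONDITIONS" \
--split-by object_id \
--as-avrodatafile \
--target-dir "/tmp/myt" \
--incremental append \
--check-column object_id \
--last-value 1000000

sqoop job --exec t_incr

After each successful run the job's stored last value is updated in the Sqoop metastore, so subsequent --exec calls pick up only the new rows without you bumping --last-value by hand.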

HTH

Dr Mich Talebzadeh

LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

On 6 April 2016 at 05:47, ayan guha <guha.a...@gmail.com> wrote:

> Hi
>
> Thanks for the reply. My use case is to query ~40 tables from Oracle
> (using index and incremental loads only) and add the data to existing Hive
> tables. Also, it would be good to have an option to create the Hive table,
> driven by job-specific configuration.
>
> What do you think?
>
> Best
> Ayan
>
> On Wed, Apr 6, 2016 at 2:30 PM, Takeshi Yamamuro <linguin....@gmail.com>
> wrote:
>
>> Hi,
>>
>> It depends on your use case for Sqoop.
>> What's it like?
>>
>> // maropu
>>
>> On Wed, Apr 6, 2016 at 1:26 PM, ayan guha <guha.a...@gmail.com> wrote:
>>
>>> Hi All
>>>
>>> Asking for opinions: is it possible/advisable to use Spark to replace
>>> what Sqoop does? Any existing projects along similar lines?
>>>
>>> --
>>> Best Regards,
>>> Ayan Guha
>>>
>>
>>
>>
>> --
>> ---
>> Takeshi Yamamuro
>>
>
>
>
> --
> Best Regards,
> Ayan Guha
>
