[ https://issues.apache.org/jira/browse/SQOOP-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Henrique Alves Junqueira Nogueira Branco updated SQOOP-3476:
------------------------------------------------------------
    Description: 
Hi there, 

I'm trying to import all the tables of a MySQL database into Hive. To achieve that, I first used the following command line:


{code}
sqoop-import-all-tables
  --connect jdbc:mysql://IP:port/mydatabase
  --username myusername
  -p
  --hive-import
  --direct
{code}


But there are some incompatibilities between MySQL and Hive data types, and I received the following error message:

{code}
ERROR tool.ImportAllTablesTool: Encountered IOException running import job: java.io.IOException: Hive does not support the SQL type for column rowguid.
{code}
 So, looking at [Sqoop 1.4.7 
documentation|https://sqoop.apache.org/docs/1.4.7/SqoopUserGuide.html], I 
noticed that there is a parameter 

{{--map-column-hive <name-of-column-to-map>}}.

So, I added this to my command line: 

{{--map-column-hive rowguid=binary}} 

and got this new error:

{code}
ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.IllegalArgumentException: No column by the name rowguid found while importing data
java.lang.IllegalArgumentException: No column by the name rowguid found while importing data
{code}


This error happens when Sqoop is importing the next table after the one containing the rowguid column.
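
For reference, the full failing invocation looks like this (placeholders are the same as in the command above):

{code}
# placeholders (IP, port, mydatabase, myusername) as in the original command
sqoop-import-all-tables
  --connect jdbc:mysql://IP:port/mydatabase
  --username myusername
  -p
  --hive-import
  --direct
  --map-column-hive rowguid=binary
{code}

Note that the mapping carries no table qualifier, so Sqoop applies it to every table in the run.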

My database has several tables and tons of columns. When I execute the above command line, it runs normally until it throws an error about the missing column rowguid. Of course, this mapping should be applied only to the specific table where this column exists, not to all tables.
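
In the meantime, a possible workaround is to split the job: skip the offending table in the bulk import and bring it in with a dedicated sqoop-import that carries the mapping. This is only a sketch; it assumes the table holding rowguid is named mytable (a hypothetical name, the real one is not given here) and relies on the --exclude-tables option documented for sqoop-import-all-tables in 1.4.7:

{code}
# Import everything except the table containing rowguid (mytable is a placeholder)
sqoop-import-all-tables
  --connect jdbc:mysql://IP:port/mydatabase
  --username myusername
  -p
  --hive-import
  --direct
  --exclude-tables mytable

# Import the remaining table on its own, where the mapping applies unambiguously
sqoop-import
  --connect jdbc:mysql://IP:port/mydatabase
  --username myusername
  -p
  --hive-import
  --direct
  --table mytable
  --map-column-hive rowguid=binary
{code}

This runs two jobs instead of one, but it confines the column mapping to the single table that needs it.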

Is there a way to specify {{table-name + column-name}} in the {{--map-column-hive}} parameter? Or is there another way to work around this issue? Maybe importing all tables while converting the data types automatically?
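
A sketch of what the requested syntax might look like (hypothetical; no released Sqoop version accepts a table qualifier here, and mytable is a placeholder name):

{code}
# hypothetical qualified syntax, not supported by any released Sqoop
--map-column-hive mytable.rowguid=binary
{code}

With a qualifier like this, the mapping would only be consulted while importing mytable and silently ignored for every other table in the database.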

> Allow choosing the table in the --map-column-hive parameter when importing from MySQL
> ---------------------------------------------------------------------------------------
>
>                 Key: SQOOP-3476
>                 URL: https://issues.apache.org/jira/browse/SQOOP-3476
>             Project: Sqoop
>          Issue Type: Improvement
>          Components: connectors/mysql
>    Affects Versions: 1.4.7
>         Environment: MySQL 8.0, Sqoop 1.4.7, Hive 3.1.1
>            Reporter: Henrique Alves Junqueira Nogueira Branco
>            Priority: Major
>              Labels: features
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)