Hi,
I remember running into a similar problem that was caused by an old MySQL
client driver.
You could try upgrading your MySQL connector.

Daniel

On Mon, Nov 30, 2015 at 8:12 PM, Timothy Garza <
timothy.ga...@collinsongroup.com> wrote:

> We’ve been playing with the MySQL Global Settings: (Hive metastore)
>
>
>
> *mysql*> set global innodb_large_prefix = ON;  (← this was set to OFF
> previously)
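One detail that may matter here: on MySQL 5.5/5.6, innodb_large_prefix only raises the index key prefix limit to 3072 bytes for tables stored in the Barracuda file format with DYNAMIC or COMPRESSED rows; tables in the Antelope format keep the old 767-byte per-column prefix limit. A minimal sketch of the settings involved (assuming MySQL 5.5/5.6; existing tables are not converted automatically):

```sql
-- Large index prefixes need both of these, plus a Barracuda row format
-- on the table itself:
SET GLOBAL innodb_file_format = Barracuda;
SET GLOBAL innodb_large_prefix = ON;
-- Newly created tables must then opt in explicitly, e.g.:
-- CREATE TABLE t (...) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
```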
>
>
>
> …and now the ERROR is thus:
>
> Specified key was too long; max key length is 3072 bytes
>
>
>
> So it’s still ‘failing’ (but the HDFS operation itself succeeds). This
> must be the problem area as the message has changed from:
>
>
>
> Specified key was too long; max key length is 767 bytes
>
> to
>
> Specified key was too long; max key length is 3072 bytes
>
>
>
> …simply by altering the MySQL Global settings. So is hiveserver2 trying to
> use a key larger than MySQL supports (v5.5.2, file format Antelope)?
>
>
>
> NB. This only occurs when executing beeline INSERT statements, not CREATE or
> SELECT statements, on a Hive Table (in this case a Sequence File).
>
>
>
> My colleague thinks this is SSL related (because of the use of the word
> ‘key’ in the error), is HiveServer2 connecting to the Metastore using SSL?
>
>
> ------------------------------
>
> Weirdly I’m experiencing exactly the same issue when trying to populate a
> Hive Table using INSERT OVERWRITE TABLE. We’ve recently upgraded from Hive
> 0.13 to 1.2.1. NB. The Hive Table populates but the map-reduce returns an
> error code. I have run the hive Schema Tool:   schematool -dbType mysql
> -upgradeSchemaFrom 0.13
>
>
>
> The only table I can see with a 767-size column is “PART_COL_STATS” –
> implemented in one of the metastore upgrade scripts. Column name:
> PARTITION_NAME | varchar(767). (← I changed this column to varchar(1000)
> but get the same message afterwards:)
>
>
>
> *ERROR jdbc.JDBCStatsPublisher:* Error during JDBC initialization.
>
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
>         at com.mysql.jdbc.Util.getInstance(Util.java:384)
>         at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
>         at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4232)
>         at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4164)
>         at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2615)
>         at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2776)
>         at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2832)
>         at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1755)
>         at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1679)
>         at org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher.init(JDBCStatsPublisher.java:292)
>         at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:411)
>         at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
>         at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1054)
>         at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154)
>         at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71)
>         at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>         at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
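A possible explanation for why widening the column did not help: in a utf8 metastore database, MySQL reserves 3 bytes per character in an index key, so an index on a utf8 VARCHAR(767) already needs 767 × 3 = 2301 bytes, and VARCHAR(1000) only makes the key longer. A sketch for checking the character set actually in use (the database name 'metastore' here is an assumption):

```sql
-- Show the default character set of the metastore database:
SELECT default_character_set_name
FROM information_schema.SCHEMATA
WHERE schema_name = 'metastore';
```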
>
>
>
>
>
>
>
> *From:* San Luo [mailto:luo...@damai.cn <luo...@damai.cn>]
> *Sent:* 30 November 2015 10:06
> *To:* user@hive.apache.org
> *Subject:* com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException:
> Specified key was too long; max key length is 767 bytes
>
>
>
> hi guys,
>
>      when I am trying to use mysql to manage the hive metastore, I get a
> problem like the one below:
>
>
>
> [root@master hive]# hive
>
>
>
> Logging initialized using configuration in jar:file:/usr/lib/hive/lib/hive-common-1.1.1.jar!/hive-log4j.properties
>
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/usr/lib/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/hive-jdbc-1.1.1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>
> hive> show tables;
>
> OK
>
> invites
>
> pokes
>
> test
>
> Time taken: 2.573 seconds, Fetched: 3 row(s)
>
> hive> LOAD DATA LOCAL INPATH './examples/files/kv2.txt' OVERWRITE INTO
> TABLE invites PARTITION (ds='2008-08-15');
>
> Loading data to table default.invites partition (ds=2008-08-15)
>
> Interrupting... Be patient, this might take some time.
>
> Press Ctrl+C again to kill JVM
>
> Failed with exception MetaException(message:javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es) : Specified key was too long; max key length is 767 bytes
>
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes
>         at sun.reflect.GeneratedConstructorAccessor32.newInstance(Unknown Source)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>         at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
>         at com.mysql.jdbc.Util.getInstance(Util.java:386)
>
> …….
>
>
>
>
>
> I tried hive 1.2.1 and hive 1.1.1; neither works. I also tried modifying
> the mysql character set of the hive database to latin1 and to utf8, but it
> doesn’t help.
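For this 767-byte error, the commonly suggested fix is to keep the metastore database in latin1 (1 byte per character), so the schema's VARCHAR(767) index keys fit under the limit. A hedged sketch (the database name 'metastore' is an assumption):

```sql
-- Only affects tables created AFTER this statement; existing metastore
-- tables keep their original character set, which may be why an earlier
-- ALTER appeared not to work. Dropping and re-running the Hive schema
-- scripts against a latin1 database avoids that pitfall.
ALTER DATABASE metastore CHARACTER SET latin1 COLLATE latin1_bin;
```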
>
>
>
> Any ideas will be appreciated.
>
>
>
>
>
