Deegue created HIVE-23710:
-----------------------------

             Summary: Add table meta cache limit when starting Hive server2
                 Key: HIVE-23710
                 URL: https://issues.apache.org/jira/browse/HIVE-23710
             Project: Hive
          Issue Type: Improvement
         Environment: Hive 2.3.6
            Reporter: Deegue


When we start up HiveServer2, it connects to the metastore to fetch table meta
info database by database and caches it. If a database contains many tables,
however, a single request can exceed `hive.metastore.client.socket.timeout`.
Then an exception is thrown like:
{noformat}
2020-06-17T11:38:27,595  WARN [main] metastore.RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect (1 of 1) after 1s. getTableObjectsByName
org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129) ~[hive-exec-2.3.6.jar:2.3.6]
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[hive-exec-2.3.6.jar:2.3.6]
        at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) ~[hive-exec-2.3.6.jar:2.3.6]
        at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) ~[hive-exec-2.3.6.jar:2.3.6]
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) ~[hive-exec-2.3.6.jar:2.3.6]
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77) ~[hive-exec-2.3.6.jar:2.3.6]
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table_objects_by_name_req(ThriftHiveMetastore.java:1596) ~[hive-exec-2.3.6.jar:2.3.6]
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table_objects_by_name_req(ThriftHiveMetastore.java:1583) ~[hive-exec-2.3.6.jar:2.3.6]
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTableObjectsByName(HiveMetaStoreClient.java:1370) ~[hive-exec-2.3.6.jar:2.3.6]
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.getTableObjectsByName(SessionHiveMetaStoreClient.java:238) ~[hive-exec-2.3.6.jar:2.3.6]
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source) ~[?:?]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:206) ~[hive-exec-2.3.6.jar:2.3.6]
        at com.sun.proxy.$Proxy38.getTableObjectsByName(Unknown Source) ~[?:?]
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source) ~[?:?]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2336) ~[hive-exec-2.3.6.jar:2.3.6]
        at com.sun.proxy.$Proxy38.getTableObjectsByName(Unknown Source) ~[?:?]
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllTableObjects(Hive.java:1343) ~[hive-exec-2.3.6.jar:2.3.6]
        at org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.init(HiveMaterializedViewsRegistry.java:127) ~[hive-exec-2.3.6.jar:2.3.6]
        at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:167) ~[hive-service-2.3.6.jar:2.3.6]
        at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:607) ~[hive-service-2.3.6.jar:2.3.6]
        at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:100) ~[hive-service-2.3.6.jar:2.3.6]
        at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:855) ~[hive-service-2.3.6.jar:2.3.6]
        at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:724) ~[hive-service-2.3.6.jar:2.3.6]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_121]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_121]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
        at org.apache.hadoop.util.RunJar.run(RunJar.java:226) ~[hadoop-common-2.6.0-cdh5.16.1.jar:?]
        at org.apache.hadoop.util.RunJar.main(RunJar.java:141) ~[hadoop-common-2.6.0-cdh5.16.1.jar:?]
Caused by: java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method) ~[?:1.8.0_121]
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) ~[?:1.8.0_121]
        at java.net.SocketInputStream.read(SocketInputStream.java:171) ~[?:1.8.0_121]
        at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[?:1.8.0_121]
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) ~[?:1.8.0_121]
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) ~[?:1.8.0_121]
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345) ~[?:1.8.0_121]
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127) ~[hive-exec-2.3.6.jar:2.3.6]
        ... 32 more
{noformat}
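
For context, the timeout being hit is the metastore client socket timeout. As a
stopgap it can be raised in hive-site.xml, but that only delays the problem for
very large databases (the value below is purely illustrative):
{noformat}
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>600s</value>
</property>
{noformat}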


The proposed solution is to add `hive.server2.init.load.table.limit` to control 
the maximum number of tables fetched in a single request from HiveServer2 to the 
metastore.
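
A minimal sketch of how such a limit could be applied when loading table objects 
at startup, assuming the new config key above; the key name, default value, and 
helper class are illustrative only, not the actual patch:
{noformat}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Table;

public class BatchedTableLoader {

  public static List<Table> loadAllTableObjects(IMetaStoreClient msc, HiveConf conf,
      String dbName) throws Exception {
    // Proposed limit per metastore request (hypothetical key and default).
    int limit = conf.getInt("hive.server2.init.load.table.limit", 1000);

    List<String> tableNames = msc.getAllTables(dbName);
    List<Table> tables = new ArrayList<>(tableNames.size());

    // Fetch table objects in slices of at most `limit` names per Thrift call,
    // so no single call has to stay within hive.metastore.client.socket.timeout
    // for the whole database.
    for (int start = 0; start < tableNames.size(); start += limit) {
      int end = Math.min(start + limit, tableNames.size());
      tables.addAll(msc.getTableObjectsByName(dbName, tableNames.subList(start, end)));
    }
    return tables;
  }
}
{noformat}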


