[ https://issues.apache.org/jira/browse/HIVE-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

apachehadoop updated HIVE-6212:
-------------------------------

    Description: 
Hi friends:
I can't open the page https://groups.google.com/forum/#!forum/presto-users at the moment, so I am posting my question here.
I started HiveServer and the Presto server on one machine with the commands below:
hive --service hiveserver -p 9083
./launcher run
When I run the Presto CLI as ./presto --server localhost:9083 --catalog hive --schema default, the console shows the presto:default> prompt, but entering a command such as show tables prints:
Error running command: java.nio.channels.ClosedChannelException
and the HiveServer console prints the following:
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "pool-1-thread-1" java.lang.OutOfMemoryError: Java heap space
        at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:353)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:215)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
        at java.lang.Thread.run(Thread.java:662)
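
If I read the Thrift code in that trace correctly, readMessageBegin() takes the first four bytes on the wire as a big-endian i32, and when the value is non-negative, readStringBody() allocates a byte array of that size. My CLI is pointed at port 9083, which is the Thrift port I gave to hiveserver, while as far as I can tell from the docs the Presto CLI speaks plain HTTP to the coordinator's http-server.http.port (8080 in my config). Here is a minimal sketch of what I think happens when an HTTP request lands on the Thrift port (the class name is just mine, for illustration):

public class ThriftOomDemo {
    public static void main(String[] args) {
        // First four bytes that an HTTP client such as the Presto CLI sends.
        byte[] firstBytes = "POST".getBytes(java.nio.charset.StandardCharsets.US_ASCII);
        // TBinaryProtocol.readMessageBegin() reads these as a big-endian i32.
        int assumedLength = ((firstBytes[0] & 0xFF) << 24)
                          | ((firstBytes[1] & 0xFF) << 16)
                          | ((firstBytes[2] & 0xFF) << 8)
                          |  (firstBytes[3] & 0xFF);
        System.out.println("\"POST\" read as a Thrift string length: " + assumedLength);
        // Prints 1347375956; readStringBody() would then attempt
        // new byte[1347375956], an allocation of about 1.3 GB, which is
        // exactly the kind of request that dies with a heap OOM.
    }
}

If that reading is right, a single ~1.3 GB allocation per connection attempt would explain why raising the heap did not make the error go away.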

My configuration files are below:
node.properties
node.environment=production
node.id=cc4a1bbf-5b98-4935-9fde-2cf1c98e8774
node.data-dir=/home/hadoop/cloudera-5.0.0/presto-0.56/presto/data

config.properties
coordinator=true
datasources=jmx
http-server.http.port=8080
presto-metastore.db.type=h2
presto-metastore.db.filename=/home/hadoop/cloudera-5.0.0/presto-0.56/presto/db/MetaStore
task.max-memory=1GB
discovery-server.enabled=true
discovery.uri=http://slave4:8080

jvm.config
-server
-Xmx16G
-XX:+UseConcMarkSweepGC
-XX:+ExplicitGCInvokesConcurrent
-XX:+CMSClassUnloadingEnabled
-XX:+AggressiveOpts
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
-XX:PermSize=150M
-XX:MaxPermSize=150M
-XX:ReservedCodeCacheSize=150M
-Xbootclasspath/p:/home/hadoop/cloudera-5.0.0/presto-0.56/presto-server-0.56/lib/floatingdecimal-0.1.jar

log.properties
com.facebook.presto=DEBUG

catalog/hive.properties
connector.name=hive-cdh4
hive.metastore.uri=thrift://slave4:9083

Hadoop environment: CDH5 + CDH5 Hive 0.11 + Presto 0.56

Finally, I increased the Java heap size for the Hive metastore, but it still gives me the same error. Please help me check whether this is a bug in CDH5; I have run out of ideas. Reading the Presto Hive connector docs, I also wonder (this is just my guess) whether hive.metastore.uri should point at the Hive metastore Thrift service rather than at hiveserver, which is what I started on port 9083.

Please help me, thanks.
========================================================================================================
Some additional information below. I really need help with this.
I have tested presto-server 0.55, 0.56, and 0.57 on CDH4 with hive-0.10 and with hive-0.11, but they all show the same errors as above.
On the coordinator machine, the etc directory and its configuration files are as below:
=================coordinator================================ 
 ----------------config.properties:
coordinator=true
datasources=jmx
http-server.http.port=8080
presto-metastore.db.type=h2
presto-metastore.db.filename=/home/hadoop/cloudera-5.0.0/presto-0.55/presto/db/MetaStore
task.max-memory=1GB
discovery-server.enabled=true
discovery.uri=http://name:8080
--------------jvm.config:
-server
-Xmx4G
-XX:+UseConcMarkSweepGC
-XX:+ExplicitGCInvokesConcurrent
-XX:+CMSClassUnloadingEnabled
-XX:+AggressiveOpts
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
-XX:PermSize=150M
-XX:MaxPermSize=150M
-XX:ReservedCodeCacheSize=150M
-Xbootclasspath/p:/home/hadoop/cloudera-5.0.0/presto-0.55/presto-server-0.55/lib/floatingdecimal-0.1.jar
-----------------log.properties:
com.facebook.presto=DEBUG
-----------------node.properties:
node.environment=production
#node.id=0699bf8f-ac4e-48f4-92a9-28c0d8862923
node.id=name
node.data-dir=/home/hadoop/cloudera-5.0.0/presto-0.55/presto/data
---------------hive.properties:
connector.name=hive-cdh4
hive.metastore.uri=thrift://name:10000
---------------jmx.properties:
connector.name=jmx

=================coordinator================================ 
=================worker====================================
----------------config.properties:
coordinator=false
datasources=jmx,hive
http-server.http.port=8080
presto-metastore.db.type=h2
presto-metastore.db.filename=/home/hadoop/cloudera-5.0.0/presto-0.55/presto/db/MetaStore
task.max-memory=1GB
discovery-server.enabled=true
discovery.uri=http://name:8080
--------------jvm.config:
-server
-Xmx4G
-XX:+UseConcMarkSweepGC
-XX:+ExplicitGCInvokesConcurrent
-XX:+CMSClassUnloadingEnabled
-XX:+AggressiveOpts
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
-XX:PermSize=150M
-XX:MaxPermSize=150M
-XX:ReservedCodeCacheSize=150M
-Xbootclasspath/p:/home/hadoop/cloudera-5.0.0/presto-0.55/presto-server-0.55/lib/floatingdecimal-0.1.jar
-----------------log.properties:
com.facebook.presto=DEBUG
-----------------node.properties:
node.environment=production
#node.id=773a6b4e-9fd0-4342-8d96-59d1f58ac7cd
node.id=data1
node.data-dir=/home/hadoop/cloudera-5.0.0/presto-0.55/presto/data
---------------hive.properties:
connector.name=hive-cdh4
hive.metastore.uri=thrift://name:10000
---------------jmx.properties:
connector.name=jmx
=================worker====================================
=================discovery-server===========================
--------------config.properties:
http-server.http.port=8411
discovery.uri=http://name:8080
--------------node.properties:
node.environment=production
#node.id=b45edeee-4870-420f-919b-9f8487b9750e
#node.id=0699bf8f-ac4e-48f4-92a9-28c0d8862923
node.id=name
node.data-dir=/home/hadoop/cloudera-5.0.0/presto-discovery-data
----------------jvm.config
-server
-Xmx1G
-XX:+UseConcMarkSweepGC
-XX:+ExplicitGCInvokesConcurrent
-XX:+AggressiveOpts
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
=================discovery-server===========================
I started hiveserver, the coordinator, and the discovery server on the name machine, and a worker on the data1 machine; all of them came up normally. Then I ran the command below:
./presto --server name:10000 --catalog hive --schema default
and the hiveserver console showed:
Exception in thread "pool-1-thread-1" java.lang.OutOfMemoryError: Java heap space
        at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:353)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:215)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
        at java.lang.Thread.run(Thread.java:662)

Meanwhile, I increased the Java heap size for Hive to 2 GB and then to 4 GB, but it still throws the same exception as above.
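
One more observation, in case it helps: both of my attempts point the CLI at a Thrift port (9083 the first time, 10000 here) rather than at the coordinator's HTTP port 8080. Below is a small probe I put together to check what protocol a given port actually speaks; the default host and the /v1/info path are my assumptions about the coordinator's HTTP API, so treat it as a sketch. An HTTP listener such as the coordinator should answer with an HTTP status line, while a Thrift listener returns nothing readable:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "name";              // placeholder host
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 8080; // coordinator HTTP port
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 3000);
            socket.setSoTimeout(3000);
            OutputStream out = socket.getOutputStream();
            // A bare HTTP request; /v1/info is what I believe the coordinator serves.
            out.write("GET /v1/info HTTP/1.0\r\n\r\n".getBytes("US-ASCII"));
            out.flush();
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[256];
            int n = in.read(buffer);
            System.out.println(n > 0 ? new String(buffer, 0, n, "US-ASCII") : "<no readable reply>");
        }
    }
}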
Please give me some ideas. Thanks.


> Using Presto 0.56 for SQL queries, but the HiveServer console prints 
> java.lang.OutOfMemoryError: Java heap space
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-6212
>                 URL: https://issues.apache.org/jira/browse/HIVE-6212
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>    Affects Versions: 0.11.0
>         Environment: HADOOP ENVIRONMENT IS CDH5+CDH5-HIVE-0.11+PRESTO-0.56
>            Reporter: apachehadoop
>             Fix For: 0.11.0
>
>



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
