I'm executing some queries against HiveServer2 over JDBC. Once in a while my
application gets stuck with output like this (on a random query,
nondeterministically):

07:24:54.910 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 53
07:24:54.910 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 100
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 53
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 100
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 53
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 100
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 53
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 100
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 53
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 100
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 53
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 100
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 53
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 100
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 53
07:24:54.911 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 100
07:24:54.912 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 53
07:24:54.912 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 100
07:24:54.912 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 53
07:24:54.912 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 100
07:24:54.912 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 53
07:24:54.912 [pool-4-thread-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 100

and so on, until I kill it manually. My code:

private void runSingleStreamOfQueries(String hiveServerHost, int hiveServerPort, List<String> queryNames,
        BenchmarkResultDTO singleStreamOfQueriesExecutionTimeDTO, BenchmarkResultDTO singleStreamOfQueriesErrorCountDTO,
        List<BenchmarkResultDTO> queriesExecutionTimesDTOs, String databaseName) throws IOException, SQLException {
    Connection con = null;
    Statement stmt = null;
    String username = "hive";
    try {
        con = DriverManager.getConnection("jdbc:hive2://" + hiveServerHost + ":" + hiveServerPort + "/" + databaseName,
                username, "");
        stmt = con.createStatement();
        int numberOfErrors = 0;
        log.info("Starting single stream of queries...");
        log.info("Number of queries = " + (queryNames.size()));
        StopWatch stopwatchStream = new StopWatch();
        StopWatch stopwatchQuery = new StopWatch();
        stopwatchStream.start();
        singleStreamOfQueriesExecutionTimeDTO.setTimestampStart(new DateTime());
        int executedQueries = 0;
        for (String queryName : queryNames) {
            log.info("Executing " + queryName + "...");
            String queryFilePath = BENCHMARK_QUERIES_DIR + File.separator + queryName;
            String query = fileToString(queryFilePath);
            log.info(query);
            try {
                stopwatchQuery.start();
                stmt.executeQuery(query);
                stopwatchQuery.stop();
                int queryIndex = findQueryIndex(queryName, queriesExecutionTimesDTOs);
                queriesExecutionTimesDTOs.get(queryIndex).setValue(new BigDecimal(stopwatchQuery.getTime() / 1000));
            } catch (SQLException e) {
                numberOfErrors++;
                log.warn("SQLException on " + queryName, e);
            } finally {
                stopwatchQuery.reset();
            }
            log.info(queryName + " executed.");
            executedQueries++;
            log.info(queryNames.size() - executedQueries + " queries left");
        }
        stopwatchStream.stop();
        singleStreamOfQueriesExecutionTimeDTO.setValue(new BigDecimal(stopwatchStream.getTime() / 1000));
        singleStreamOfQueriesExecutionTimeDTO.setTimestampEnd(new DateTime());
        singleStreamOfQueriesErrorCountDTO.setValue(new BigDecimal(numberOfErrors));
    } finally {
        if (null != stmt) stmt.close();
        if (null != con) con.close();
    }
    log.info("Single stream of queries finished.");
}

Usually it works the first time. When I execute the method again, the problem
arises at some nondeterministic point. After I kill the process and run the
method once more, it doesn't work even from the beginning, and only restarting
HiveServer2 helps.

What is the problem? Do I have memory leaks?
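One thing I suspect, though I haven't verified it: stmt.executeQuery(query)
returns a ResultSet that I never consume or close, so each query may leave an
operation open on the server side. Wrapping the ResultSet in try-with-resources
would guarantee it is closed even when the statement throws. Since showing this
against a live HiveServer2 isn't possible in a snippet, here is a minimal
self-contained sketch of the pattern with a stand-in resource (CloseDemo,
FakeResultSet, and runQuery are hypothetical names, not from my code):

```java
import java.util.ArrayList;
import java.util.List;

public class CloseDemo {
    static final List<String> events = new ArrayList<>();

    // Stand-in for a JDBC ResultSet: only records when it is closed.
    static class FakeResultSet implements AutoCloseable {
        @Override
        public void close() {
            events.add("closed");
        }
    }

    // Mirrors the per-query try/catch in the benchmark loop above.
    static void runQuery(boolean fail) {
        try (FakeResultSet rs = new FakeResultSet()) {
            events.add("executed");
            if (fail) {
                throw new RuntimeException("query failed");
            }
        } catch (RuntimeException e) {
            events.add("handled");
        }
        // close() has already run here, whether or not an exception was thrown
    }

    public static void main(String[] args) {
        runQuery(false);
        runQuery(true); // even on failure, the resource is still closed
        System.out.println(events); // prints [executed, closed, executed, closed, handled]
    }
}
```

In the real loop this would mean something like
`try (ResultSet rs = stmt.executeQuery(query)) { while (rs.next()) { /* discard rows */ } }`
inside the timed block, so the driver can release the server-side handle.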

Regards,
Pawel
