Hi Prasanth,

I see there are some logs in the system that are too big and are taking up a lot of space. Jenkins will delete those logs eventually. These are some of the logs bigger than 1G that I found:
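(In case anyone wants to re-check the Jenkins workspace later, a quick scan along these lines should produce a listing like the one below. This is just a sketch; it assumes it is run from the directory that contains ./logs.)

#!/usr/bin/env python
# Sketch: list files over 1 GiB under ./logs, largest first.
# Assumes the current directory is the Jenkins workspace root containing ./logs.
import os

THRESHOLD = 1 << 30  # 1 GiB

big = []
for dirpath, _, filenames in os.walk("./logs"):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            size = os.path.getsize(path)
        except OSError:
            continue  # file may disappear while we scan
        if size >= THRESHOLD:
            big.append((size, path))

for size, path in sorted(big, reverse=True):
    print("%.1fG %s" % (size / float(1 << 30), path))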
*13G  ./logs/PreCommit-HIVE-TRUNK-Build-4789/succeeded/TestJdbcWithMiniHS2/hive.log*
*9.9G ./logs/PreCommit-HIVE-TRUNK-Build-4790/succeeded/TestJdbcWithMiniHS2/hive.log  <<< HIVE-11416*
*5.5G ./logs/PreCommit-HIVE-TRUNK-Build-4790/succeeded/TestSchedulerQueue/hive.log*
*4.9G ./logs/PreCommit-HIVE-TRUNK-Build-4789/succeeded/TestSchedulerQueue/hive.log*
*4.6G ./logs/PreCommit-HIVE-TRUNK-Build-4792/succeeded/TestSchedulerQueue/hive.log*
*4.1G ./logs/PreCommit-HIVE-TRUNK-Build-Upload-10/succeeded/TestSchedulerQueue/hive.log*
2.0G  ./logs/PreCommit-HIVE-TRUNK-Build-4792/succeeded/TestSSL/hive.log
1.9G  ./logs/PreCommit-HIVE-TRUNK-Build-4790/failed/TestSSL/hive.log
1.8G  ./logs/PreCommit-HIVE-TRUNK-Build-4789/succeeded/TestSSL/hive.log
1.8G  ./logs/PreCommit-HIVE-TRUNK-Build-Upload-10/succeeded/TestJdbcWithMiniHS2/hive.log
1.7G  ./logs/HIVE-TRUNK-HADOOP-2-1/succeeded/TestSparkCliDriver-date_udf.q-join23.q-auto_join4.q-and-12-more/spark.log
1.7G  ./logs/PreCommit-HIVE-TRUNK-Build-4789/succeeded/TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more/spark.log
1.7G  ./logs/PreCommit-HIVE-TRUNK-Build-4790/succeeded/TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more/spark.log
1.7G  ./logs/PreCommit-HIVE-TRUNK-Build-4792/succeeded/TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more/spark.log

*TestJdbcWithMiniHS2* is the one causing this issue. Is debug logging enabled for this test?

- Sergio

On Sun, Aug 2, 2015 at 7:01 PM, Prasanth Jayachandran <
pjayachand...@hortonworks.com> wrote:

> Looks like there is something wrong with the precommit tests.
> The tests run through but throw IOException or run out of disk.
> https://issues.apache.org/jira/browse/HIVE-11416
> https://issues.apache.org/jira/browse/HIVE-11304
>
> Can someone take a look at what's going on?
>
> Thanks
> Prasanth
>