Looks like a disk space issue on the database server that stores the Hive metastore: Errcode 28 is the OS "No space left on device" error. Can you check the disk usage of the /tmp directory (the directory the DB server is writing its temporary files to)?
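Something like the commands below should confirm it (a rough sketch, assuming the metastore is backed by MySQL, which the backtick-quoted SQL in your error suggests, and that you have shell access to that server):

    # free space on the filesystem holding /tmp
    df -h /tmp

    # what is consuming the space under /tmp
    du -sh /tmp/*

    # which temp directory the MySQL server actually uses
    mysql -e "SHOW VARIABLES LIKE 'tmpdir';"

If /tmp is full, freeing space there (or pointing the DB server's temp directory at a larger filesystem) should let the query run again.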
Date: Wed, 6 Feb 2013 18:34:31 +0530
Subject: Getting Error while executing "show partitions TABLE_NAME"
From: chunky.gu...@vizury.com
To: user@hive.apache.org

Hi All,

I ran this:

hive> show partitions tab_name;

and got this error:

FAILED: Error in metadata: javax.jdo.JDODataStoreException: Error executing JDOQL query "SELECT `THIS`.`PART_NAME` AS NUCORDER0 FROM `PARTITIONS` `THIS` LEFT OUTER JOIN `TBLS` `THIS_TABLE_DATABASE` ON `THIS`.`TBL_ID` = `THIS_TABLE_DATABASE`.`TBL_ID` LEFT OUTER JOIN `DBS` `THIS_TABLE_DATABASE_DATABASE_NAME` ON `THIS_TABLE_DATABASE`.`DB_ID` = `THIS_TABLE_DATABASE_DATABASE_NAME`.`DB_ID` LEFT OUTER JOIN `TBLS` `THIS_TABLE_TABLE_NAME` ON `THIS`.`TBL_ID` = `THIS_TABLE_TABLE_NAME`.`TBL_ID` WHERE `THIS_TABLE_DATABASE_DATABASE_NAME`.`NAME` = ? AND `THIS_TABLE_TABLE_NAME`.`TBL_NAME` = ? ORDER BY NUCORDER0 " : Error writing file '/tmp/MY0TOZFT' (Errcode: 28)
NestedThrowables:
java.sql.SQLException: Error writing file '/tmp/MY0TOZFT' (Errcode: 28)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

Yesterday we had fewer partitions; today I added around 3000 more partitions for my data, which is stored in S3 for Hive. I think this caused the error above, but I don't know how to solve it. Please help me with this.

Thanks,
Chunky.