Hi Koert,

That's because the Hive metastore doesn't know about the partitions you added. I was in a similar situation, but I use Amazon EMR, and in their version of Hive one can run the command "alter table src recover partitions", which walks the directory structure of the table (src, in this case) and adds the partitions to the Hive metastore. You could look into getting a patch for that, if one is available.
Instead of running the command through the shell, you could try to connect to Hive from within Java (I personally use the Hive JDBC driver to do so) and then issue the desired add-partition command, but it might be overkill if this is the only reason you want to connect to Hive from Java (a rough sketch is at the bottom of this mail, below the quoted message).

Mark

Mark Grover, Business Intelligence Analyst
OANDA Corporation
www: oanda.com
www: fxtrade.com
e: mgro...@oanda.com
"Best Trading Platform" - World Finance's Forex Awards 2009.
"The One to Watch" - Treasury Today's Adam Smith Awards 2009.

----- Original Message -----
From: "Koert Kuipers" <ko...@tresata.com>
To: user@hive.apache.org
Sent: Wednesday, February 8, 2012 11:04:18 AM
Subject: external partitioned table

hello all,
We have an external partitioned table in Hive. We add to this table by having map-reduce jobs (so not from Hive) create new subdirectories in the right format (partitionid=partitionvalue). However, Hive doesn't pick them up automatically; we have to go into the Hive shell and run "alter table sometable add partition (partitionid=partitionvalue)". To make matters worse, Hive doesn't really lend itself to running such an add-partition operation from Java (or, for that matter, to any easy programmatic manipulation... grrr. But I will stop now before I go on a rant). Any suggestions on how to approach this? Thanks!

best, koert
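For what it's worth, here is a rough sketch of the JDBC route mentioned above. It assumes the HiveServer-style driver that ships with current Hive releases (org.apache.hadoop.hive.jdbc.HiveDriver); the connection URL, database, table name, and partition spec are placeholders you'd swap for your own setup:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AddPartition {
        public static void main(String[] args) throws Exception {
            // Load the Hive JDBC driver (as bundled with the Hive distribution).
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");

            // Placeholder URL: host, port, and database depend on where your
            // Hive server is running.
            Connection conn = DriverManager.getConnection(
                    "jdbc:hive://localhost:10000/default", "", "");
            try {
                Statement stmt = conn.createStatement();
                // Same statement you would run in the Hive shell; table name and
                // partition column/value are placeholders. For an external table
                // whose subdirectory already follows the partitionid=partitionvalue
                // convention, this just registers it in the metastore.
                stmt.execute(
                    "ALTER TABLE sometable ADD PARTITION (partitionid='partitionvalue')");
                stmt.close();
            } finally {
                conn.close();
            }
        }
    }

Your MR job (or a small driver around it) could issue this right after it finishes writing the new subdirectory, so the partition shows up in Hive without anyone touching the shell.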