Hey Matt,

We did something similar at Facebook to capture the information on who ran what on the clusters and dumped that out to an audit db. Specifically, we were using Hive post-execution hooks to achieve that:
http://hive.apache.org/docs/r0.7.0/api/org/apache/hadoop/hive/ql/hooks/PostExecute.html

This gets called from the Hive CLI, mostly. I am not sure if the particular hook that we had implemented was contributed back, but this could potentially be a cool contribution :)

Ashish

On Wed, Sep 12, 2012 at 11:10 AM, Matt Goeke <goeke.matt...@gmail.com> wrote:

> All,
>
> I looked in the Hive JIRA and saw nothing like what we are looking to
> implement, so I am interested in getting feedback as to whether there is
> any overlap between this and any other current efforts.
>
> Currently our Hive warehouse is open to querying from any of our business
> analysts, and we pool them by user in the fair scheduler to prevent someone
> from hogging cluster resources. We are looking to start summarizing
> details of their queries so that we can view the common questions they ask
> in order to find ways to optimize our tables / submission process. One
> thought was to patch the Hive client / Thrift server to write the submitted
> queries out to the DB that our metastore is on; from there we can perform
> some simple analytics to roll up a view of how they use the warehouse over
> time. This doesn't seem like it would be too difficult an effort, as the
> needed infrastructure is already in place, but any suggestions or comments
> on this would be greatly appreciated. Also, if this is interesting to anyone
> else, we are happy to keep you in the loop as to any patches we create.
>
> --
> Matt Goeke
>
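P.S. For anyone who wants to try this route, here is a minimal sketch of what such a post-execution hook could look like, assuming the PostExecute interface from the 0.7.0 javadoc linked above. The class name, package, JDBC URL, credentials, and audit table/columns are all hypothetical placeholders, not the hook we actually ran at Facebook.

package com.example.hooks;  // hypothetical package

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Set;

import org.apache.hadoop.hive.ql.hooks.PostExecute;
import org.apache.hadoop.hive.ql.hooks.ReadEntity;
import org.apache.hadoop.hive.ql.hooks.WriteEntity;
import org.apache.hadoop.hive.ql.session.SessionState;
import org.apache.hadoop.security.UserGroupInformation;

public class QueryAuditHook implements PostExecute {

  // Placeholder audit db location and credentials -- point these at your own db.
  private static final String AUDIT_DB_URL = "jdbc:mysql://metastore-host/audit_db";
  private static final String AUDIT_DB_USER = "audit";
  private static final String AUDIT_DB_PASS = "secret";

  public void run(SessionState sess, Set<ReadEntity> inputs,
                  Set<WriteEntity> outputs, UserGroupInformation ugi)
      throws Exception {
    String user = (ugi != null) ? ugi.getUserName() : "unknown";
    // The submitted HiveQL for the command that just finished.
    String query = sess.getCmd();

    Connection conn =
        DriverManager.getConnection(AUDIT_DB_URL, AUDIT_DB_USER, AUDIT_DB_PASS);
    try {
      // Hypothetical audit table: query_audit(user_name, query_string, submit_time).
      PreparedStatement ps = conn.prepareStatement(
          "INSERT INTO query_audit (user_name, query_string, submit_time) "
              + "VALUES (?, ?, NOW())");
      ps.setString(1, user);
      ps.setString(2, query);
      ps.executeUpdate();
      ps.close();
    } finally {
      conn.close();
    }
  }
}

Wiring it up should then just be a matter of putting the jar on the CLI classpath and pointing the post-hook property at the class, e.g. set hive.exec.post.hooks=com.example.hooks.QueryAuditHook; (or the equivalent entry in hive-site.xml).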