I feel #4 is the better option.

Regards,
Ravi Teja
-----Original Message-----
From: Alejandro Abdelnur [mailto:t...@cloudera.com]
Sent: Wednesday, October 12, 2011 9:38 PM
To: common-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org; hdfs-...@hadoop.apache.org
Subject: 0.23 & trunk tars, will we be publishing 1 tar per component or a single tar? What about a source tar?

Currently common, hdfs and mapred create partial tars which are not usable unless they are stitched together into a single tar. With HADOOP-7642 the stitching happens as part of the build.

The build currently produces the following tars:

1* common TAR
2* hdfs (partial) TAR
3* mapreduce (partial) TAR
4* hadoop (full, the stitched one) TAR

#1 on its own does not run anything; #2 and #3 on their own don't run either. #4 runs hdfs & mapreduce.

Questions:

Q1. Does it make sense to publish #1, #2 & #3? Or is #4 sufficient, with users starting only the services they want (e.g. HBase would just use HDFS)?

Q2. And what about a source TAR: does it make sense to have a source TAR per component, or a single TAR for the whole?

For simplicity (for the build system and for users) I'd prefer a single binary TAR and a single source TAR.

Thanks.

Alejandro
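For readers unfamiliar with the "stitching" being discussed: below is a minimal shell sketch of the idea of overlaying partial component tars into one full distribution tree and re-tarring it. The file and directory names are made up for illustration; this is not the actual HADOOP-7642 build logic, just the general shape of the operation.

```shell
#!/bin/sh
# Hypothetical sketch: overlay partial component tars into one full tar.
# All names (demo/, hadoop-*.tar.gz, lib/*.jar) are illustrative only.
set -e
mkdir -p demo && cd demo

# Simulate the three component tarballs the build would produce.
for c in common hdfs mapreduce; do
  mkdir -p "$c/lib"
  echo "$c jar" > "$c/lib/$c.jar"
  tar -czf "hadoop-$c.tar.gz" "$c"
done

# "Stitch": unpack each component tar and merge its tree into a single
# hadoop/ directory, then package that directory as the full tar.
mkdir -p hadoop
for c in common hdfs mapreduce; do
  tar -xzf "hadoop-$c.tar.gz"
  cp -R "$c"/. hadoop/
done
tar -czf hadoop-full.tar.gz hadoop
ls hadoop/lib
```

The merged hadoop/ tree ends up with the contents of all three components, which is why #2 and #3 are unusable alone but #4 runs everything.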