I think it's simplest to publish a single Hadoop tarball and let users
start the services they want. This is the model we have always followed
up to now.
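
As a rough sketch of that model (the script name below assumes the
current trunk tarball layout, so treat it as illustrative rather than
exact):

    # unpack the single release tarball, then start only what you need
    $ tar xzf hadoop-X.Y.Z.tar.gz && cd hadoop-X.Y.Z
    $ sbin/start-dfs.sh    # e.g. HDFS only, for an HBase-style setup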

Cheers,
Tom

On Wed, Oct 12, 2011 at 9:07 AM, Alejandro Abdelnur <t...@cloudera.com> wrote:
> Currently the common, hdfs and mapred builds each create a partial tar,
> and these are not usable unless they are stitched together into a single
> tar.
>
> With HADOOP-7642 the stitching happens as part of the build.
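>
> (For reference, the invocation is roughly the following; this is a sketch
> of the mavenized trunk build, so the exact flags may differ:)
>
>   $ mvn package -Pdist -Dtar -DskipTests
>
> The stitched tar should then land under hadoop-dist/target/, per the
> layout HADOOP-7642 introduces.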
>
> The build currently produces the following tars:
>
> #1 common TAR
> #2 hdfs (partial) TAR
> #3 mapreduce (partial) TAR
> #4 hadoop (full, the stitched one) TAR
>
> #1 on its own does not run anything, and #2 and #3 cannot run on their own
> either. Only #4 runs hdfs & mapreduce.
>
> Questions:
>
> Q1. Does it make sense to publish #1, #2 & #3? Or is #4 sufficient on its
> own, with users starting just the services they want (e.g. HBase would just
> use HDFS)?
>
> Q2. And what about a source TAR: does it make sense to have a source TAR per
> component, or a single source TAR for the whole project?
>
>
> For simplicity (for the build system and for users) I'd prefer a single
> binary TAR and a single source TAR.
>
> Thanks.
>
> Alejandro
>
