The following script implements the hack mentioned in my previous message.

Hope this helps.
Nicholas


#!/bin/bash
set -e -x

#Set homes for the sub-projects
COMMON_HOME=/cygdrive/d/@sze/hadoop/a/common
HDFS_HOME=/cygdrive/d/@sze/hadoop/a/hdfs
MAPREDUCE_HOME=/cygdrive/d/@sze/hadoop/a/mapreduce
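#Note: these are Cygwin-style paths from my own checkout; adjust them to
#wherever your common, hdfs and mapreduce working copies live.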

#Compile each sub-project
#Change the ant command if necessary
cd $COMMON_HOME
ant clean compile-core-test
cd $HDFS_HOME
ant clean compile-hdfs-test
cd $MAPREDUCE_HOME
ant clean compile-mapred-test examples

#Copy everything to common
cp -R $HDFS_HOME/build/* $MAPREDUCE_HOME/build/* $COMMON_HOME/build
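
#Optional sanity check (my own addition, not part of the original hack):
#after the copy, the common build directory should contain the hdfs and
#mapreduce jars alongside the common ones.  The "|| true" keeps set -e from
#aborting if your jar names differ.
ls $COMMON_HOME/build/*.jar || true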

#Then, you may use the scripts in $COMMON_HOME/bin to run hadoop as before.
#For example,
#
# > cd $COMMON_HOME
#
# > ./bin/start-dfs.sh
#
# > ./bin/start-mapred.sh
#
# > ./bin/hadoop fs -ls
#
# > ./bin/hadoop jar build/hadoop-mapred-examples-0.21.0-dev.jar pi 10 10000
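#
#When you are done, the matching stop scripts in $COMMON_HOME/bin should shut
#the daemons down again (assuming the usual Hadoop script layout):
#
# > ./bin/stop-mapred.sh
#
# > ./bin/stop-dfs.sh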

----- Original Message ----
> From: "Tsz Wo (Nicholas), Sze" <s29752-hadoop...@yahoo.com>
> To: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
> mapreduce-...@hadoop.apache.org
> Sent: Monday, August 10, 2009 5:25:08 PM
> Subject: Question: how to run hadoop after the project split?
> 
> I have to admit that I don't know the official answer.  The hack below seems 
> to work:
> - compile all 3 sub-projects;
> - copy everything in hdfs/build and mapreduce/build to common/build;
> - then run hadoop by the scripts in common/bin as before.
> 
> Any better idea?
> 
> Nicholas Sze
