Hi,
All the examples I have found execute a MapReduce job on a single file, but
in my situation I have more than one.

Suppose I have a folder on HDFS that contains several files:

    /my_hadoop_hdfs/my_folder:
                /my_hadoop_hdfs/my_folder/file1.txt
                /my_hadoop_hdfs/my_folder/file2.txt
                /my_hadoop_hdfs/my_folder/file3.txt


How can I execute a Hadoop MapReduce job on file1.txt, file2.txt, and file3.txt?

Is it possible to provide the folder as a parameter to the Hadoop job so that
all of its files will be processed by the MapReduce job?
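
Here is a minimal driver sketch of what I was thinking of trying. The class
name, the output path, and the identity Mapper/Reducer are just placeholders
to keep the example self-contained; I am not sure whether passing the
directory as the input path is the right approach:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class FolderJob {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "folder job");
            job.setJarByClass(FolderJob.class);

            // Identity Mapper/Reducer just to keep the sketch self-contained;
            // my real mapper and reducer would go here.
            job.setMapperClass(Mapper.class);
            job.setReducerClass(Reducer.class);
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);

            // Passing the directory itself as the input path -- will this pick
            // up file1.txt, file2.txt and file3.txt automatically?
            FileInputFormat.addInputPath(job, new Path("/my_hadoop_hdfs/my_folder"));
            FileOutputFormat.setOutputPath(job, new Path("/my_hadoop_hdfs/my_folder_out"));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }
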

Thanks In Advance
Oleg.
