Sorry for replying so late. I just created a JIRA ticket:
https://issues.apache.org/jira/browse/MAPREDUCE-2419.
Gerald
On Wed, Jan 5, 2011 at 7:34 PM, Greg Roelofs wrote:
> Zhenhua Guo wrote:
>
>> It seems that mapred.task.cache.levels is used by JobTracker to create
I have a blog post that may be helpful to you. You can read it here:
http://tech.zhenhua.info/2010/11/svn-subclipse.html.
Gerald
On Tue, Jan 11, 2011 at 9:34 PM, hemanth.murthy wrote:
>
> Hi,
>
> I tried to build the Eclipse files using "ant eclipse", but I get this error:
> eclipse:
> [eclipse] There we
r-facing API) and AbstractFileSystem (the fs
> implementation API). So the AbstractFileSystem derived classes replace
> the FileSystem derived classes.
>
> Thanks,
> Eli
>
> On Thu, Dec 30, 2010 at 8:16 PM, Zhenhua Guo wrote:
>> I noticed that in HDFS there are two sets
I noticed that in HDFS there are two sets of API classes - *FileSystem
and *FS, e.g. FtpFileSystem vs. FtpFs, ChecksumFileSystem vs.
ChecksumFS. I wonder what the difference is. Is one a replacement for
the other?
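For reference, here is roughly how I read files today through the
*FileSystem side (just a sketch; the path is made up):

    // Sketch only: reading a file through the user-facing FileSystem API.
    // The path below is made up for illustration.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReadViaFileSystem {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);   // e.g. resolves to DistributedFileSystem for hdfs://
        FSDataInputStream in = fs.open(new Path("/user/gerald/sample.txt"));
        try {
          System.out.println(in.read());        // read one byte just to exercise the stream
        } finally {
          in.close();
        }
      }
    }

I don't see where the *FS classes fit into this picture.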
Thanks
Gerald
It seems that mapred.task.cache.levels is used by JobTracker to create
task caches for nodes at various levels. This makes data-locality
scheduling possible.
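For what it's worth, I believe the property is a plain integer and can
be set programmatically like this (sketch; the value 0 is the case I
describe below, and an equivalent entry in mapred-site.xml should
behave the same):

    // Sketch: setting the cache-levels property on a job configuration.
    // The value 0 is the setting that leads to the stall described below.
    import org.apache.hadoop.mapred.JobConf;

    public class CacheLevelsExample {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        conf.setInt("mapred.task.cache.levels", 0);
        System.out.println(conf.get("mapred.task.cache.levels"));
      }
    }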
If I set mapred.task.cache.levels to 0 and use the default network
topology, then the MapReduce job stalls forever. The reason is
JobInProgress::
I know mappers can take files in HDFS as input. I wonder whether they
can take local files as input.
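For example, would something along these lines work (just a sketch;
the local path is hypothetical, and I have not tried it on a real
cluster)?

    // Sketch: pointing the job input at the local file system with a file:// URI.
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.JobConf;

    public class LocalInputSketch {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        FileInputFormat.setInputPaths(conf, new Path("file:///tmp/local-input"));
        System.out.println(FileInputFormat.getInputPaths(conf)[0]);
      }
    }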
Thanks.
Gerald
confirm)
>
> Not that I am aware of. The task's input location is used directly to
> read the data.
>
> Thanks
> Hemanth
>>
>> Hope this will help.
>>
>> Chen
>>
>> On Sun, Oct 31, 2010 at 9:49 PM, Zhenhua Guo wrote:
>>
>>> Th
response is to be sent back
> to a TaskTracker that called JobTracker.heartbeat(...).
>
>>
>>
>> On Thu, Oct 28, 2010 at 2:52 PM, Zhenhua Guo wrote:
>>> Hi, all
>>> I wonder how Hadoop schedules mappers and reducers (e.g. consider
>>> load balan
Hi, all
I wonder how Hadoop schedules mappers and reducers (e.g., does it
consider load balancing and affinity to data?). For example, how does
it decide on which nodes mappers and reducers are to be executed, and
when?
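Is my mental model roughly right? I imagine something like the
following happening on each heartbeat (naive sketch, not actual Hadoop
code; all names here are made up for discussion only):

    // Naive guess at what happens when a TaskTracker heartbeats in --
    // NOT actual Hadoop code; every identifier is hypothetical.
    import java.util.List;

    public class SchedulingGuess {
      // Pick a task for the tracker that just heartbeated, preferring data locality.
      static String assign(String trackerHost, List<List<String>> pendingSplitHosts) {
        for (int i = 0; i < pendingSplitHosts.size(); i++) {
          if (pendingSplitHosts.get(i).contains(trackerHost)) {
            return "task-" + i;                          // input split is local to this node
          }
        }
        // No local work: hand out any pending task so the slot is not left idle.
        return pendingSplitHosts.isEmpty() ? null : "task-0";
      }
    }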
Thanks!
Gerald