http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html

We do not currently cache blocks which are under construction, corrupt, or
otherwise incomplete.
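That page also shows how to check whether bytes actually got cached. A minimal sketch of the workflow, with "testPool" and "/path/to/file" as placeholder names:

    # create a pool and a directive for the file (names are placeholders)
    hdfs cacheadmin -addPool testPool
    hdfs cacheadmin -addDirective -path /path/to/file -pool testPool

    # -stats prints BYTES_NEEDED vs BYTES_CACHED per directive;
    # BYTES_CACHED stuck at 0 usually means the DataNodes cannot lock
    # memory (check dfs.datanode.max.locked.memory and ulimit -l)
    hdfs cacheadmin -listDirectives -stats -pool testPool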

Have you tried with a file that has more than one block?
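You can confirm the block count with fsck (the path is a placeholder):

    # prints the file's length and its block list
    hdfs fsck /path/to/file -files -blocks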

And could dfs.namenode.path.based.cache.refresh.interval.ms be set too large?
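The NameNode only rescans cache directives at that interval, so a new directive is not acted on until the next rescan. One way to see the value the NameNode is actually using:

    hdfs getconf -confKey dfs.namenode.path.based.cache.refresh.interval.ms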

You might want to ask on a broader mailing list; this is not related to Spark.

Bertrand


On Fri, May 16, 2014 at 2:56 AM, hequn cheng <chenghe...@gmail.com> wrote:

> I tried centralized cache step by step following the official Apache Hadoop
> website, but it seems the centralized cache doesn't work. See:
> http://stackoverflow.com/questions/22293358/centralized-cache-failed-in-hadoop-2-3
> Has anyone succeeded?
>
>
> 2014-05-15 5:30 GMT+08:00 William Kang <weliam.cl...@gmail.com>:
>
>> Hi,
>> Any comments or thoughts on the implications of the newly released
>> centralized cache feature in Hadoop 2.3? How does it differ from RDD?
>>
>> Many thanks.
>>
>>
>> Cao
>>
>
>
