On Apr 10, 2009, at 1:54 PM, Stas Oskin wrote:

Actually, now I remember that you posted some time ago about your University
losing about 300 files.
So since then the situation has improved I presume?

Yup! The only files we lose now are due to multiple simultaneous hardware losses. Since January: 11 files lost to accidentally reformatting 2 nodes at once, and 35 to a night with 2 dead nodes. Make no mistake - HDFS with 2 replicas is *not* an archive-quality file system. HDFS does not replace tape storage for long-term archiving.
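For anyone reading along who wants more durability than the 2-replica setup described above: the cluster-wide default replication factor is controlled by the standard `dfs.replication` property in `hdfs-site.xml`. A minimal sketch (the value of 3 is Hadoop's usual default, shown here explicitly):

```xml
<!-- hdfs-site.xml: default number of block replicas for new files -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

Existing files keep their old replication factor; it can be raised per-path with `hadoop fs -setrep -w 3 /path`, where `-w` waits for the re-replication to finish.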

Brian



2009/4/10 Stas Oskin <[email protected]>

2009/4/10 Brian Bockelman <[email protected]>

Most of the issues were resolved in 0.19.1 -- I think 0.20.0 is going to
be even better.

We run about 300TB @ 2 replicas, and haven't had file loss that was
Hadoop's fault since about January.

Brian


And you're running 0.19.1?

Regards.

