Andy Hayward writes:
> On 2/1/06, Greg Oster <[EMAIL PROTECTED]> wrote:
> > "Peter Fraser" writes:
> > > and as a result all file writes to the failed
> > > drive queued up in memory,
> >
> > I've never seen that behaviour... I find it hard to believe that
> > you'd be able to queue up 2 days worth of writes without a) any reads
> > being done or b) not noticing that the filesystem was completely
> > unresponsive when a write of associated meta-data never returned...
> > (on the first write of meta-data that didn't return, pretty much all
> > IO to that filesystem should grind to a halt. Sorry.. I'm not buying
> > the "it queued up things for two days"... )
>
> I've seen similar on a machine with a filesystem on a raid-1 partition
> mounted with softdeps enabled. From what I remember the scenario
> was something like:
>
> * copied 10Gb or so of data to a new raid-1 filesystem
> * system then left idle for 30mins or so
> * being an idiot, pulled the wrong plug out of the wall
> * upon reboot, and after raid resync and fsck, most of the copied data
>   was no longer there
RAIDframe can only write what it's given. If, after 30 minutes, the filesystem layers haven't synced all the data, RAIDframe can't do anything about that... Left idle for 30 minutes, that filesystem should have synced itself many times over, to the point that fsck shouldn't have found anything to complain about... (I strongly suspect you'd see exactly the same behaviour without RAIDframe involved here... I also suspect you wouldn't see the same behaviour without softdeps, RAIDframe or not.)

Later...

Greg Oster