On 31.08.2016 16:33, Michael Mol wrote:
>
> In data=journal mode, the contents of files pass through the journal as well,
> ensuring that, at least as far as the filesystem's responsibility is concerned,
> the data will be intact in the event of a crash.

A common misconception, but not true at all. Google a bit.
>
> Now, I can still think of ways you can lose data in data=journal mode:
>
> * You mounted the filesystem with barrier=0 or with nobarrier; this can result
> in data writes going to disk out of order, if the I/O stack supports barriers.

not needed.

> If you say "my file is ninety bytes" "here are ninety bytes of data, all 9s", 
> "my file is now thirty bytes", "here are thirty bytes of data, all 3s", then 
> in 
> the end you should have a thirty-byte file filled with 3s. If you have 
> barriers 
> enabled and you crash halfway through the whole process, you should find a 
> file 
> of ninety bytes, all 9s. But if you have barriers disabled, the data may hit 
> disk as though you'd said "my file is ninety bytes, here are ninety bytes of 
> data, all 9s, here are thirty bytes of data, all 3s, now my file is thirty 
> bytes." If that happens, and you crash partway through the commit to disk, 
> you 
> may see a ninety-byte file consisting of  thirty 3s and sixty 9s. Or things 
> may 
> landthat you see a thirty-byte file of 9s.
>
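
For reference, the sequence described above looks roughly like this from the
application's side. A minimal sketch in C, with a hypothetical "testfile" path;
the fsync() calls mark where each step is supposed to become durable before the
next one starts:

/* Minimal sketch of the sequence described above.  "testfile" is a
 * hypothetical path; each fsync() asks for the preceding step to be
 * durable before the next one begins. */
#define _XOPEN_SOURCE 700
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char nines[90], threes[30];
    int fd;

    memset(nines, '9', sizeof nines);
    memset(threes, '3', sizeof threes);

    fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    /* "my file is ninety bytes, here are ninety bytes of data, all 9s" */
    if (pwrite(fd, nines, sizeof nines, 0) != (ssize_t) sizeof nines)
        return 1;
    if (fsync(fd) != 0)
        return 1;

    /* "my file is now thirty bytes, here are thirty bytes of data, all 3s" */
    if (pwrite(fd, threes, sizeof threes, 0) != (ssize_t) sizeof threes)
        return 1;
    if (ftruncate(fd, (off_t) sizeof threes) != 0)
        return 1;
    if (fsync(fd) != 0)
        return 1;

    return close(fd) != 0;
}

With barriers on, each fsync() also gets flushed out of the drive's cache; with
barrier=0 or nobarrier that last part is what goes away.
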
> * Your application didn't flush its writes to disk when it should have.

not needed either.

>
> * Your vm.dirty_bytes or vm.dirty_ratio are too high, you've been writing a
> lot to disk, and the kernel still has a lot of data buffered waiting to be
> written. (Well, that can always lead to data loss regardless of how high those
> settings are, which is why applications should flush their writes.)
>
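
If you want to see which of those two knobs is actually in effect on a box, a
quick sketch that just reads them back from /proc (standard paths on Linux; the
output format is only this example's):

/* Quick look at the writeback thresholds mentioned above.  A
 * dirty_bytes of 0 means dirty_ratio (a percentage of reclaimable
 * memory) is the one in effect. */
#include <stdio.h>

static void show(const char *path)
{
    char line[64];
    FILE *f = fopen(path, "r");

    if (f != NULL) {
        if (fgets(line, sizeof line, f) != NULL)
            printf("%s: %s", path, line);
        fclose(f);
    }
}

int main(void)
{
    show("/proc/sys/vm/dirty_bytes");
    show("/proc/sys/vm/dirty_ratio");
    return 0;
}
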
> * You've used hdparm to enable write buffers in your hard disks, and your hard
> disks lose power while their buffers have data waiting to be written.
>
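
On the write-cache point, hdparm -W is the usual way to check or toggle it from
userspace; newer kernels also expose the block layer's view in sysfs. A rough
sketch, assuming the queue/write_cache attribute is present and that sda is the
disk in question:

/* Reads the block layer's idea of the drive's cache mode.  Assumes a
 * kernel new enough to expose queue/write_cache and that sda is the
 * disk in question; prints "write back" or "write through". */
#include <stdio.h>

int main(void)
{
    char mode[32];
    FILE *f = fopen("/sys/block/sda/queue/write_cache", "r");

    if (f == NULL) {
        perror("write_cache attribute not available");
        return 1;
    }
    if (fgets(mode, sizeof mode, f) != NULL)
        printf("sda write cache: %s", mode);
    fclose(f);
    return 0;
}
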
> * You're using a buggy disk device that does a poor job of handling power
> loss. Such as some SSDs which don't have large enough capacitors for their own
> write reordering. Or just about any flash drive.
>
> * There's a bug in some code, somewhere.

nope.
> In-memory corruption of data is a universal hazard. ECC should be the norm,
> not the exception, honestly.
>

