Generally, there should not be "corruption", only a roll-back to a previous state. *HOWEVER*, it's possible that an application which has state outside of the filesystem (such as effects on network peers, or even state written to *other* filesystems) will encounter a consistency problem, because the application will not be expecting this potentially "partial" rollback of state. This state *could* be state tracked in remote systems, or in VMs, for example.
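To make the hazard concrete, here is a minimal sketch in Python. Everything here is hypothetical and invented for illustration (the path, the job/peer names); it is not taken from any real application, just the shape of the problem:

    import socket

    def process_job(job_id: int, peer: socket.socket,
                    log_path: str = "/tank/app/journal") -> None:
        # 1. Record completion locally. With sync=disabled this write only
        #    becomes durable when the open transaction group commits -- even
        #    an explicit os.fsync() would be ignored, because the ZIL, which
        #    services synchronous requests, is out of the picture.
        with open(log_path, "a") as log:
            log.write("%d done\n" % job_id)

        # 2. Acknowledge to the network peer. The peer's state is now ahead
        #    of anything the local pool is guaranteed to keep.
        peer.sendall(("ACK %d\n" % job_id).encode())

If the server crashes before that transaction group commits, the journal rolls back by as much as ~30 seconds: the peer remembers an ACK for a job the local journal says never ran. That disagreement is exactly the "partial" rollback of state described above.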
Generally, I discourage disabling the sync unless you know *exactly* what you are doing. On my build filesystems I do it, because I can regenerate all the data, and a loss of up to 30 seconds of data is no problem for me. But I don't do this on home directories, or on filesystems used for "arbitrary" application storage. And I would *never* do this for a filesystem that is backing a database. As they say, better safe than sorry.

	- Garrett

On Nov 10, 2011, at 11:12 AM, Tomas Forsman wrote:

> On 10 November, 2011 - Bob Friesenhahn sent me these 1,6K bytes:
>
>> On Wed, 9 Nov 2011, Tomas Forsman wrote:
>>>>
>>>> At all times, if there's a server crash, ZFS will come back along at next
>>>> boot or mount, and the filesystem will be in a consistent state, that was
>>>> indeed a valid state which the filesystem actually passed through at some
>>>> moment in time. So as long as all the applications you're running can
>>>> accept the possibility of "going back in time" as much as 30 sec, following
>>>> an ungraceful ZFS crash, then it's safe to disable ZIL (set sync=disabled).
>>>
>>> Client writes block 0, server says OK and writes it to disk.
>>> Client writes block 1, server says OK and crashes before it's on disk.
>>> Client writes block 2.. waaiits.. waiits.. server comes up, says OK, and
>>> writes it to disk.
>>>
>>> Now, from the view of the client, blocks 0-2 are all OK'd by the server
>>> with no visible errors.
>>> On the server, block 1 never arrived on disk and you've got silent
>>> corruption.
>>
>> The silent corruption (of zfs) does not occur, for the simple reason that
>> all of the block writes are flushed and acknowledged by the disks before a
>> new transaction starts the next transaction group. The previous
>> transaction is not closed until the next transaction has been successfully
>> started by writing the previous TXG group record to disk. Given properly
>> working hardware, the worst-case scenario is losing the whole transaction
>> group, and no "corruption" occurs.
>>
>> Loss of data as seen by the client can definitely occur.
>
> When a client writes something, and something else ends up on disk - I
> call that corruption. It doesn't matter whose fault it is or what the
> technical details are; the wrong data was stored despite the client
> being careful when writing.
>
> /Tomas
> --
> Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
> |- Student at Computing Science, University of Umeå
> `- Sysadmin at {cs,acc}.umu.se
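To make the transaction-group ordering Bob describes above concrete, here is a minimal model in Python. This is not ZFS source; `disk` and its methods are hypothetical stand-ins, and real TXG commit involves far more machinery (uberblock rings, labels, checksums):

    def commit_txg(disk, txg_number, dirty_blocks):
        # Write out every block belonging to this transaction group, then
        # wait until the disks have acknowledged all of them.
        for block in dirty_blocks:
            disk.write(block)
        disk.flush()

        # Only now write the record that makes this TXG the current
        # on-disk state. This single write is the atomic commit point.
        disk.write_txg_record(txg_number)
        disk.flush()

A crash before write_txg_record() leaves the pool at the previous TXG, intact; a crash after it leaves the new TXG, intact. Either way the pool lands on a state it genuinely passed through, which is why the worst case is losing the whole in-flight group (up to ~30 seconds of writes) rather than a torn, half-written one.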