Without having looked it up, I believe the flag you are setting
controls whether ZFS flushes the disks' write caches, not any cache of
ZFS's own.

It's moderately unsafe - normally ZFS writes blocks to the hard
drives, then issues a command forcing the drives to flush their write
caches, guaranteeing the blocks actually reach the disks. Disabling
that means your IO may still be sitting in the drives' write caches,
and ZFS may think the blocks are on disk when, in fact, they're not.

This has...a number of possible consequences. In practice, it means
it's possible for ZFS to tell the NFS client that data is on disk and
sync'd when it is, in fact, not.
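
For what it's worth, as far as I remember zfs_nocacheflush is a kernel
tunable rather than a pool or dataset property, so it applies
system-wide. From memory (so please double-check before relying on
it), something like this should let you inspect and change it on
OpenIndiana:

  # show the current value (0 = issue cache flushes, 1 = don't)
  echo zfs_nocacheflush/D | mdb -k

  # flip it on the running kernel
  echo zfs_nocacheflush/W0t1 | mdb -kw

  # make it persistent across reboots: add this line to /etc/system
  set zfs:zfs_nocacheflush = 1

If that's right, it would also partly answer your last question: as
far as I know the switch is global, not per-pool or per-dataset.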

Of course, all this is predicated on my initial assumption of what the
flag does - please, someone chime in and correct me if I'm full of it.
:)

- Rich

On Thu, Oct 6, 2011 at 5:45 AM, Gabriele Bulfon <gbul...@sonicle.com> wrote:
> Hi,
> NFS on zfs can be quite a pain with a large number of small files.
> After playing around with it, I discovered this zfs_nocacheflush
> flag, which brought me back to high performance on NFS.
> Questions:
> - How unsafe is this?
> - How can I check the ZFS caching status, to really understand what
> is staying in the cache and for how long?
> - Are we talking about a long-term cache, or syncs within fractions
> of a second?
> - Is there a sync tool to force a ZFS cache flush?
> - Is there a way to set this flag on a per-pool or per-dataset basis?
> (I may want to enable this only on NFS shares.)
> Thanx for any help.
> Gabriele.

_______________________________________________
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss
