We've got an interesting application which involves receiving lots of
multicast groups, and writing the data to disc as a cache.  We're
currently using ZFS for this cache, as we're potentially dealing with a
couple of TB at a time.

The threads writing to the filesystem have real-time SCHED_FIFO priorities
set to 25.  The processes recovering data from the cache and moving it
elsewhere are niced at +10.
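
For reference, the priorities are set along these lines (a simplified
sketch of the relevant POSIX calls, not our actual code; the function
names are made up):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>

/* Writer thread: SCHED_FIFO at priority 25, as described above. */
static void make_writer_realtime(pthread_t tid)
{
    struct sched_param sp;
    int err;

    memset(&sp, 0, sizeof sp);
    sp.sched_priority = 25;

    err = pthread_setschedparam(tid, SCHED_FIFO, &sp);
    if (err != 0)
        fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));
}

/* Reader/cache-drain process: drops itself to nice +10. */
static void make_reader_niced(void)
{
    if (setpriority(PRIO_PROCESS, 0, 10) != 0)
        perror("setpriority");
}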

We're seeing the writes stall in favour of the reads.  For normal
workloads I can understand the reasons, but I was under the impression
that real-time processes essentially trump all others, and I'm surprised
by this behaviour; I had a dozen or so RT processes sat waiting for disc
for about 20s.

My questions:

  *  Is this a ZFS issue?  Would we be better off using another filesystem?

  *  Is there any way to mitigate it?  Reduce the number of IOPS
     available for reading, say?  (There's a rough sketch of the sort of
     thing I mean after this list.)

  *  Is there any way to disable or invert this behaviour?

  *  Is this a bug, or should it be considered one?
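
By way of illustration for the mitigation question: if there's no
filesystem-level knob, we could presumably throttle ourselves in the
reader processes, something like this (a crude sketch only; the chunk
size and read cap are placeholder numbers, and throttled_copy is a
made-up helper):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define READ_CHUNK         (64 * 1024)   /* bytes per read(); placeholder */
#define MAX_READS_PER_SEC  50            /* self-imposed cap; placeholder */

/* Copy fd_in to fd_out, sleeping between reads so we never exceed the cap. */
static void throttled_copy(int fd_in, int fd_out)
{
    char buf[READ_CHUNK];
    struct timespec pause = {
        .tv_sec  = 0,
        .tv_nsec = 1000000000L / MAX_READS_PER_SEC
    };
    ssize_t n;

    while ((n = read(fd_in, buf, sizeof buf)) > 0) {
        if (write(fd_out, buf, (size_t)n) != n) {
            perror("write");
            break;
        }
        nanosleep(&pause, NULL);   /* leave the disc alone between reads */
    }
    if (n < 0)
        perror("read");
}

Obviously that just trades read throughput for write latency, which may
or may not be acceptable for the cache drain.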

Thanks.

-- 
Dickon Hood

Due to digital rights management, my .sig is temporarily unavailable.
Normal service will be resumed as soon as possible.  We apologise for the
inconvenience in the meantime.

No virus was found in this outgoing message as I didn't bother looking.