On Mon, Apr 23, 2007 at 20:27:56 +0100, Peter Tribble wrote:
: On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:

: >Relatively low traffic to the pool but sync takes too long to complete
: >and other operations are also not that fast.

: >Disks are on 3510 array. zil_disable=1.

: >bash-3.00# ptime sync

: >real     1:21.569
: >user        0.001
: >sys         0.027

: Hey, that is *quick*!

: On Friday I typed sync in the middle of the afternoon. Nothing had
: happened a couple of hours later when I went home. It looked as though
: it had finished by 11pm, when I checked in from home.

: This was on a thumper running S10U3. As far as I could tell, all writes
: to the pool stopped completely. There were applications trying to write,
: but they had just stopped (and picked up later in the evening). A fairly
: consistent few hundred K per second of reads; no writes; and pretty low
: system load.

I'm glad I'm not the only one to have seen this.

I'm currently playing with ZFS on a T2000 with 24x500GB SATA discs in an
external array that presents as SCSI.  After having much 'fun' with the
Solaris SCSI driver not handling LUNs >2TB, I reconfigured the array to
present as one target with 24 LUNs, one per disc, and threw ZFS at it in a
raidz2 configuration.  I admit this isn't optimal, but it has the
behaviour I wanted: namely lots of space with a little redundancy for
safety.
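
For reference, the pool creation was roughly along these lines; the pool
name and device names below are illustrative (one target, 24 LUNs), not a
transcript of what I actually typed:

  # single raidz2 vdev across all 24 LUNs
  zpool create tank raidz2 \
      c2t0d0  c2t0d1  c2t0d2  c2t0d3  c2t0d4  c2t0d5  c2t0d6  c2t0d7  \
      c2t0d8  c2t0d9  c2t0d10 c2t0d11 c2t0d12 c2t0d13 c2t0d14 c2t0d15 \
      c2t0d16 c2t0d17 c2t0d18 c2t0d19 c2t0d20 c2t0d21 c2t0d22 c2t0d23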

Having had said 'fun' with the SD driver, I thought I'd thoroughly check
large-object handling, and started eight 'dd if=/dev/zero's before
retiring to the pub and leaving it overnight.
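
The test itself was nothing clever: several of these in parallel (the
output path and block size are illustrative; I don't remember the exact
values I used):

  # eight large sequential writers, left running overnight
  for n in 0 1 2 3 4 5 6 7; do
      dd if=/dev/zero of=/tank/bigfile.$n bs=1024k &
  done
  wait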

The next morning I discovered a bunch of rather large files, 340GB in
size.  Everything seemed OK, so I issued an 'rm *', expecting it to
return rather quickly.  How wrong I was.

It took a minute (61s from memory) to delete a single 320GB file, which
flattened the SCSI bus with 4.5MB/s/disc of reads (as reported by iostat
-x); all writes were suspended for the duration.  This is not good.  Once
that had finished, a 'ptime sync' sat for 25 minutes running at about
1MB/s/disc.  Again, all reads.
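
For anyone wanting to reproduce it, the timings above came from nothing
more sophisticated than this (a sketch; the filename is illustrative):

  ptime rm /tank/bigfile.0   # ~61s for one ~320GB file
  iostat -x 5                # per-disc throughput, watched in another window
  ptime sync                 # sat for ~25 minutes afterwards, all reads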

Given what I intend to use this filesystem for -- dropping all the BBC's
Freeview muxes to disc in 24-hour chunks -- performance on large objects
is rather important to me.  I've reconfigured to 3x(7+1) raidz, and this
has helped a lot (as I expected it would), but it's still not great having
multi-second write locks when deleting 16GB objects.
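
The 3x(7+1) layout is just three raidz vdevs in the one pool, i.e.
something like this (device names illustrative again):

  zpool create tank \
      raidz c2t0d0  c2t0d1  c2t0d2  c2t0d3  c2t0d4  c2t0d5  c2t0d6  c2t0d7  \
      raidz c2t0d8  c2t0d9  c2t0d10 c2t0d11 c2t0d12 c2t0d13 c2t0d14 c2t0d15 \
      raidz c2t0d16 c2t0d17 c2t0d18 c2t0d19 c2t0d20 c2t0d21 c2t0d22 c2t0d23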

100MB/s write speed and 200MB/s read speed aren't bad, though.  Quite
impressed with that.

: It did recover, but write latencies of a few hours is rather undesirable.

To put it mildly.

: What on earth was it doing?

I wish I knew.  Has anyone any ideas on how to optimise it further?  I'm
using the defaults (whatever's created by an 8GB RAM T2000 with eight
1GHz cores); no compression, no nothing.
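
For what it's worth, this is how I've been checking what the defaults
actually are (pool name illustrative; the mdb line looks at the
zil_disable tunable Robert mentioned, which may or may not be relevant
here):

  zfs get compression,recordsize,checksum,atime tank
  echo "zil_disable/D" | mdb -k   # 0 = ZIL enabled (the default)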

-- 
Dickon Hood

Due to digital rights management, my .sig is temporarily unavailable.
Normal service will be resumed as soon as possible.  We apologise for the
inconvenience in the meantime.

No virus was found in this outgoing message as I didn't bother looking.