Hi Richard,

I've been watching the stats on the array, and the cache hit rate is
< 3% on these volumes. We're very write-heavy and rarely write similar
enough data twice. With random-access database data and sequential
database log data sharing the same volume groups, it seems to me this
was causing a lot of head repositioning.
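
To make that concrete, what I'm leaning toward is splitting the two
workloads onto separate pools, roughly like this (the device names are
made up, not our actual LUNs):

    # Keep random-I/O table data and sequential log writes on pools
    # backed by different volume groups, so their access patterns
    # stop fighting over the same spindles.
    zpool create datapool mirror c2t0d0 c2t1d0   # random database data
    zpool create logpool  mirror c3t0d0 c3t1d0   # sequential log data

    # Optionally match the data filesystem's recordsize to the DB
    # page size (16k here is only an example value):
    zfs create datapool/db
    zfs set recordsize=16k datapool/db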

By shutting down the slave database servers we cut the latency
tremendously, which would seem to me to indicate a lot of contention.
But I'm trying to come up to speed on this, so I may be wrong.

"iostat -xtcnz 5" showed the latency dropped from 200 to 20 once we
cut the replication. Since the masters and slaves were using the same
the volume groups and RAID-Z was striping across all of them on both
the masters and slaves, I think this was a big problem.
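
For reference, that's the command I've been sampling with; the
service-time columns (reported in milliseconds) are the usual place
to read latency from:

    # Solaris iostat: -x extended per-device stats, -t tty stats,
    # -c CPU stats, -n descriptive device names, -z suppress
    # all-zero lines, sampled every 5 seconds.
    iostat -xtcnz 5
    # Columns to watch: wsvc_t (ms spent queued), asvc_t (ms being
    # actively serviced), and %b (percent of time the device is busy).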

Any comments?

Best Regards,
Jason

On 11/29/06, Richard Elling <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams wrote:
> Hi Richard,
>
> Originally, my thinking was I'd like to drop one member out of a
> 3-member RAID-Z and turn it into a RAID-1 zpool.

You would need to destroy the pool to do this -- requiring the data to
be copied twice.

> Although, at the moment I'm not sure.

So many options, so little time... :-)

> Currently, I have 3 volume groups in my array with 4 disks each (total
> 12 disks). These VGs are sliced into 3 volumes each. I then have two
> database servers using one LUN from each of the 3 VGs RAID-Z'd
> together. For redundancy it's great, for performance it's pretty bad.
>
> One of the major issues is the disk seek contention between the
> servers since they're all using the same disks, and RAID-Z tries to
> utilize all the devices it has access to on every write.
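
To spell out what that pool looks like on each server (the LUN names
below are invented for illustration):

    # One LUN from each of the three volume groups, tied together in
    # a single RAID-Z vdev, so every full-stripe write touches all
    # three VGs at once:
    zpool create dbpool raidz c4t0d0 c4t1d0 c4t2d0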

This is difficult to pin down.  The disks cache and the RAID controller
caches.  So while it is true that you would have contention, it is
difficult to predict what effect, if any, the hosts would see.

> What I thought I'd move to was 6 RAID-1 VGs on the array, and assign
> the VGs to each server via a 1-device striped zpool. However, given
> that ZFS will kernel panic in the event of bad data, I'm
> reconsidering how to lay it out.
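
Concretely, that proposal amounts to one array-mirrored LUN per pool
per server, with ZFS adding no redundancy of its own (device name
again made up):

    # The array's RAID-1 VG handles the mirroring; ZFS sees a single
    # LUN, so there is no ZFS-level redundancy or self-healing here.
    zpool create dbpool c5t0d0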

NB. all other file systems will similarly panic.  We get spoiled to
some extent because there are errors where ZFS won't panic.  In the
future, there will be more errors that ZFS can handle without panic.

> Essentially I've got 12 disks to work with.
>
> Anyway, that's the long way of saying I'm trying to convert from
> RAID-Z to RAID-1. Any help is much appreciated.

send/receive = copy/copy = backup/restore
It may be possible to do this as a rolling reconfiguration.
  -- richard
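
A sketch of the send/receive path Richard describes, per filesystem
(pool, filesystem, and snapshot names below are hypothetical):

    # Initial full copy from the old pool to the new layout:
    zfs snapshot olddata/db@migrate
    zfs send olddata/db@migrate | zfs receive newdata/db

    # Quiesce the database, then send a final incremental pass before
    # switching over (the target must not be modified in between):
    zfs snapshot olddata/db@final
    zfs send -i @migrate olddata/db@final | zfs receive newdata/db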
