Update ...

iostat output during "zpool scrub":

                    extended device statistics
device       r/s    w/s   Mr/s   Mw/s  wait  actv  svc_t  %w  %b
sd34         2.0  395.2    0.1    0.6   0.0  34.8   87.7   0 100
sd35        21.0  312.2    1.2    2.9   0.0  26.0   78.0   0  79
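The exact iostat flags aren't shown in the thread, but the Mr/s and
Mw/s columns match Solaris extended device statistics reported in
megabytes per second. A minimal sketch of how to capture output like
this while a scrub runs, reusing the pool name zpool1 from later in
the thread:

# zpool scrub zpool1
# iostat -xM 5

The trailing 5 makes iostat print a fresh sample every 5 seconds.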
> Looks like you have compression turned on?

We ran tests with compression both on and off and found almost no
difference. CPU load was under 3% ...
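Compression in ZFS is a per-filesystem property, so an on/off
comparison like the one described is just a property flip; a minimal
sketch, using a hypothetical filesystem name zpool1/fs1:

# zfs set compression=on zpool1/fs1
# zfs get compression zpool1/fs1
# zfs set compression=off zpool1/fs1

Note that toggling the property only affects blocks written after the
change, so each test run needs to rewrite its data.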
> We use FSS, but CPU load was really load under the tests.

errata: We use FSS, but CPU load was really LOW under the tests.
> Hi Gino,
>
> Can you post the 'zpool status' for each pool and 'zfs get all' for
> each fs? Any interesting data in the dmesg output?

Sure.
1) Nothing in dmesg (are you thinking about shared IRQ?)
2) Only using one pool for the tests:

# zpool status
  pool: zpool1
 state: ONLINE
 scrub:
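The remaining diagnostics that were asked for can be gathered along
the same lines; a minimal sketch (these flags are one reasonable
choice, not necessarily what was actually run):

# zpool status -v zpool1
# zfs get all zpool1
# dmesg | tail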
Hi Chris,

both servers have the same setup: OS on a local hw RAID mirror, the
other filesystems on a SAN.
We found really bad performance, but also that under that heavy I/O
the zfs pool was more or less frozen. I mean, a zone living on the
same zpool was completely unusable because of the I/O load.
We use FSS, but CPU load was really low under the tests.
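FSS here is the Solaris Fair Share Scheduler. A minimal sketch of how
to confirm it is the default scheduling class and inspect a zone's
shares, assuming a hypothetical zone name myzone:

# dispadmin -d
# prctl -n zone.cpu-shares -i zone myzone

dispadmin -d prints the default scheduling class, and prctl reports
the zone.cpu-shares resource control for the running zone.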