> matt@vault:~$ zpool status
>   pool: rpool
>  state: ONLINE
>   scan: resilvered 588M in 0h3m with 0 errors on Fri Jan  7 07:38:06 2011
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         rpool         ONLINE       0     0     0
>           mirror-0    ONLINE       0     0     0
>             c8t1d0s0  ONLINE       0     0     0
>             c8t0d0s0  ONLINE       0     0     0
>         cache
>           c12d0s0     ONLINE       0     0     0
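(An aside on the layout above: c12d0s0 under "cache" is an L2ARC device, separate from the mirror. As a minimal sketch, assuming the pool and device names shown here, such a device is attached with:

    $ zpool add rpool cache c12d0s0    # attach c12d0s0 as an L2ARC read cache

The cache vdev carries no pool data redundancy, so losing it does not endanger the pool.)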
On Feb 9, 2011, at 2:51 AM, Matt Connolly wrote:
> Thanks Richard - interesting...
>
> The c8 controller is the motherboard SATA controller on an Intel D510
> motherboard.
>
> I've read over the man page for iostat again, and I don't see anything in
> there that makes a distinction between the controller and the device.
On Wed, February 9, 2011 04:51, Matt Connolly wrote:
> Nonetheless, I still find it odd that the whole I/O system effectively
> hangs up when one drive's queue fills up. Since the purpose of a mirror is
> to continue operating in the case of one drive's failure, I find it
> frustrating that the s…
Thanks Richard - interesting...
The c8 controller is the motherboard SATA controller on an Intel D510
motherboard.
I've read over the man page for iostat again, and I don't see anything in there
that makes a distinction between the controller and the device.
If it is the controller, would it m…
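(One way to make that distinction on Solaris-derived systems: iostat's -C flag aggregates statistics per controller in addition to per device. A minimal sketch, assuming the controller and a 5-second interval from this thread:

    $ iostat -Cxn 5    # -C adds a per-controller row (e.g. c8) above the per-device rows (c8t0d0, c8t1d0)

If only one device row under a controller shows saturation, the device rather than the controller is the likelier bottleneck.)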
It is a 4k-sector drive, but I thought ZFS recognised those drives and didn't
need any special configuration...?
4k drives are a big problem for ZFS; much has been posted/written
about it. Basically, if the 4k drives report 512 byte blocks, as they
almost all do, then ZFS does not detect and…
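(The detection issue being described: a drive that reports 512-byte sectors gets its vdev created with ashift=9, i.e. 512-byte write alignment, so each 4k physical sector can suffer read-modify-write cycles. A minimal sketch of how to check an existing pool, assuming the rpool from this thread:

    $ zdb -C rpool | grep ashift    # ashift=9 means 512-byte alignment; ashift=12 would be 4k

At the time of this thread, (Open)Solaris offered no pool-creation flag to force ashift=12; workarounds generally involved recreating the pool on devices that report their true 4k sector size.)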
Observation below...
On Feb 4, 2011, at 7:10 PM, Matt Connolly wrote:
> Hi, I have a low-power server with three drives in it, like so:
>
> matt@vault:~$ zpool status
>   pool: rpool
>  state: ONLINE
>   scan: resilvered 588M in 0h3m with 0 errors on Fri Jan  7 07:38:06 2011
> config:
> …
matt.connolly...@gmail.com said:
> After putting the drive online (and letting the resilver complete) I took the
> slow drive (c8t1d0, a Western Digital Green) offline and the system ran very
> nicely.
>
> It is a 4k-sector drive, but I thought ZFS recognised those drives and didn't
> need any special configuration...?
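(The procedure being described, as a minimal sketch with the device names from this thread:

    $ zpool offline rpool c8t1d0s0    # stop issuing I/O to the slow drive; the mirror runs degraded
    $ zpool online rpool c8t1d0s0     # bring it back; ZFS resilvers only the out-of-date blocks

A two-way mirror stays available with one side offline, at the cost of redundancy while it is out.)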
Thanks, Marion.
(I actually got the drive labels mixed up in the original post... I edited it
on the forum page:
http://opensolaris.org/jive/thread.jspa?messageID=511057#511057 )
My suspicion was the same: the drive doing the slow I/O is the problem.
I managed to confirm that by taking the oth…
matt.connolly...@gmail.com said:
> extended device statistics
>     r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
>     1.2   36.0  153.6  4608.0   1.2   0.3    31.9     9.3  16  18  c12d0
>     0.0  113.4    0.0  7446.7   0.8   0.1     7.0     0.5  15   5  c…
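(This is Solaris extended device statistics output; a minimal sketch of the invocation, assuming a 5-second interval:

    $ iostat -xn 5    # -x extended statistics, -n logical device names

wsvc_t is the average time a request spends queued and asvc_t the average active service time, both in milliseconds; a wsvc_t much larger than asvc_t, as on c12d0 above, means requests are stacking up in front of the device.)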
Hi, I have a low-power server with three drives in it, like so:

matt@vault:~$ zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 588M in 0h3m with 0 errors on Fri Jan  7 07:38:06 2011
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c8t1d0s0  ONLINE       0     0     0
            c8t0d0s0  ONLINE       0     0     0
        cache
          c12d0s0     ONLINE       0     0     0