I tried to use a ZVOL from rpool (on fast 15k rpm drives) as a cache device
for another pool (on slower 7.2k rpm drives). It worked great up until it
hit the race condition and hung the system. It would have been nice if ZFS
had issued a warning, or at least if this fact were better documented.
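For reference, the sequence was essentially the following (the volume and
pool names and the size are illustrative placeholders, not the exact values
from my setup):

# zfs create -V 32G rpool/cachevol
# zpool add datapool cache /dev/zvol/dsk/rpool/cachevol

zpool accepted the ZVOL as a cache device without complaint, which is why a
warning about the deadlock potential would have been welcome.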
Scott
> Thus far there is no evidence that there is anything wrong with your
> storage arrays, or even with zfs. The problem seems likely to be
> somewhere else in the kernel.
Agreed. And I tend to think that the problem lies somewhere in the LDOM
software. I mainly just wanted to get some experience
No errors reported on any disks.
$ iostat -xe
                 extended device statistics       ---- errors ---
device    r/s   w/s   kr/s   kw/s wait actv  svc_t  %w  %b s/w h/w trn tot
vdc0      0.6   5.6   25.0   33.5  0.0  0.1   17.3   0   2   0   0   0   0
vdc1     78.1  24.4
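The last four columns are the soft/hard/transport/total error counters. For
a long-form per-device view of the same counters (plus the device
identification strings), iostat -En is also handy:

$ iostat -En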
[Cross-posting to ldoms-discuss]
We are occasionally seeing massive time-to-completions for I/O requests on ZFS
file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200,
and using an SSD drive as a ZIL device. Primary access to this system is via
NFS, and with NFS COMMITs b
amped with such a load.
We're running Solaris 10, not OpenSolaris, so it could also be the case that
there is a regression somewhere in there.
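For anyone wanting to watch for the same symptom, it shows up clearly if you
monitor the pool (including the separate log device) and the per-disk
service times while the NFS load is running; the pool name below is just a
placeholder:

$ zpool iostat -v tank 5
$ iostat -xnz 5

The outliers show up as very large asvc_t values in the iostat output.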
Scott Duckworth, Systems Programmer II
Clemson University School of Computing
On Tue, May 12, 2009 at 10:10 PM, Rince wrote:
> Hi world,
> I h