Thanks to everyone who has tried to help. This has gotten a bit crazier: I
removed the 'faulty' drive and let the pool run in degraded mode. It would
appear that another drive has now decided to play up:
bash-4.0# zpool status
  pool: data
 state: DEGRADED
status: One or more devices has b
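For anyone following along, once the bad disk is confirmed the replacement itself is simple enough. This is only a sketch, and the device names below (c1t3d0, c1t7d0) are made up rather than taken from my pool:

bash-4.0# zpool offline data c1t3d0          # take the suspect disk out of service
bash-4.0# zpool replace data c1t3d0 c1t7d0   # resilver its data onto the new disk
bash-4.0# zpool status -x                    # check that the resilver is running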
OK, I changed the cable and also tried swapping the port on the motherboard.
The drive continued to show huge asvc_t and also started to show huge wsvc_t. I
unplugged it and the pool is now performing as expected.
See the 'storage' forum for any further updates as I am now
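For reference, this is roughly how I was watching the service times; the
5-second interval is just what I happened to use:

bash-4.0# iostat -xn 5
# wsvc_t is the average time spent on the wait queue and asvc_t the average
# time being actively serviced, both in milliseconds; the misbehaving disk
# stood out clearly against the other members of the pool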
>
> I'd say your easiest two options are swap ports and see if the problem
> follows the drive. If it does, swap the drive out.
>
> --Tim
Yep, that sounds like a plan.
Thanks for your suggestion.
--
This message posted from opensolaris.org
While not strictly a ZFS issue, I thought I'd post here, as this and the
storage forums are my best bet for getting some help.
I have a machine that I recently set up with b130, b131 and b132. With each
build I have been playing around with ZFS raidz2 and mirroring to do a little
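The layouts I've been comparing are nothing exotic; a rough sketch, with made-up disk names:

bash-4.0# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
bash-4.0# zpool destroy tank
bash-4.0# zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0

The raidz2 layout gives double-parity protection across all six disks, while the set of three two-disk mirrors trades capacity for better small random I/O.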
Hi all,
I have a home server based on SNV_127 with 8 disks:
2 x 500GB mirrored root pool
6 x 1TB raidz2 data pool
This server performs a few functions:
NFS: for several 'lab' ESX virtual machines
NFS: mythtv storage (videos, music, recordings, etc.)
Samba: for home directories for all networked machines
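Roughly how the datasets are carved up and shared; the dataset names here are illustrative rather than my exact layout:

bash-4.0# zfs create data/esx
bash-4.0# zfs set sharenfs=on data/esx       # exported to the 'lab' ESX hosts
bash-4.0# zfs create data/mythtv
bash-4.0# zfs set sharenfs=on data/mythtv    # recordings, videos, music
bash-4.0# zfs create data/home               # shared out to the clients via Samba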
Thanks for the help.
I was curious whether zfs send|receive was considered suitable, given a few
things I've read which said something along the lines of "don't count on being
able to restore this stuff". Ideally that is what I would use with the
'incremental' option so as to only back up changes.
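To be concrete, the incremental form I had in mind looks something like the
following; the pool, dataset, and snapshot names are only placeholders:

bash-4.0# zfs snapshot data/vms@today
bash-4.0# zfs send -i data/vms@yesterday data/vms@today | zfs receive backup/vms

That only transfers the blocks changed between the two snapshots, which is why
I was hoping it would be workable as a backup mechanism.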
Hello all,
Are there any best practices / recommendations for ways of doing this?
In this case the ZVOLs would be iSCSI LUNs containing ESX VMs. I am aware
of the need for the VMs to be quiesced for the backups to be useful.
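A sketch of the sort of thing I had in mind for a single LUN; the zvol path
and the target pool are made up, and this assumes the VM on it has already
been quiesced:

bash-4.0# zfs snapshot data/iscsi/lun0@backup
bash-4.0# zfs send data/iscsi/lun0@backup | zfs receive backup/lun0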
Cheers.
--
This message posted from opensolaris.org