I have several of these messages from fmdump:
fmdump -v -u 98abae95-8053-4cdc-d91a-dad89b125db4
TIME                 UUID                                 SUNW-MSG-ID
Sep 18 00:45:23.7621 98abae95-8053-4cdc-d91a-dad89b125db4 ZFS-8000-FD
  100%  fault.fs.zfs.vdev.io
Proble
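A rough way to cross-check and clear a ZFS-8000-FD diagnosis like the one above (the pool name tank below is only a placeholder) is:

# fmadm faulty                     # list outstanding faults and their UUIDs
# zpool status -x                  # see which pool/vdev FMA has flagged
# zpool clear tank                 # reset the error counters once the device is healthy again
# fmadm repair 98abae95-8053-4cdc-d91a-dad89b125db4   # mark the FMA case repaired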
I recently ran into a problem for the second time with ZFS mirrors. I mirror
between two different physical arrays for some of my data. One array (SE3511)
had a catastrophic failure and was unresponsive. Thus, with the ZFS in
s10u3, it just basically waits for the array to come back and h
Well, I have a zpool created that contains four vdevs. Each vdev is a mirror of
a T3B LUN and a corresponding LUN from an SE3511 brick. I did this since I was
new to ZFS and wanted to ensure that my data would survive an array failure. It
turns out that I was smart for doing this :)
I had a hard
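For context, a layout like that is normally built one mirror per T3B/SE3511 LUN pair; the device names below are made up. If one array drops out, the surviving side can keep serving once the dead LUN is offlined:

# zpool create tank \
      mirror c4t0d0 c6t0d0 \
      mirror c4t1d0 c6t1d0 \
      mirror c4t2d0 c6t2d0 \
      mirror c4t3d0 c6t3d0
# zpool offline tank c6t0d0     # stop waiting on the failed SE3511 LUN
# zpool online tank c6t0d0      # reattach and resilver when the array returns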
I have a test cluster running HA-NFS that shares both UFS- and ZFS-based file
systems. However, the behavior that I am seeing is a little perplexing.
The Setup: I have Sun Cluster 3.2 on a pair of SunBlade 1000s connecting to
two T3B partner groups through a QLogic switch. All four bricks of the
We are currently running Sun Cluster 3.2 on Solaris 10u3. We are using UFS/VxVM
4.1 for our shared file systems. However, I would like to migrate to HA-NFS on
ZFS. Since there is no conversion process from UFS to ZFS other than copying, I
would like to migrate on my own time. To do this I am plannin
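One possible sketch of the ZFS side under Sun Cluster 3.2 is HAStoragePlus with its Zpools property; the resource, group, and pool names here are placeholders, not a tested recipe:

# clresourcetype register SUNW.HAStoragePlus
# clresourcegroup create nfs-rg
# clresource create -g nfs-rg -t SUNW.HAStoragePlus -p Zpools=tank hasp-rs
# clresourcegroup online -M nfs-rg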
There was a lot of talk about ZFS and NFS shares being a problem when there was
a large number of filesystems. There was a fix that in part included an
in-kernel sharetab (I think :) Does anyone know if this has made it into S10u4?
Thanks,
BlueUmp
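For reference on the per-filesystem sharing this refers to, the usual pattern looks roughly like this (dataset names are placeholders):

# zfs set sharenfs=on tank/export/home          # share one filesystem over NFS
# zfs set sharenfs=rw,anon=0 tank/export/data   # share_nfs options go in the property value
# zfs share -a                                  # reshare everything, e.g. after an import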
I am playing with ZFS on a JetStor 516F with nine 1TB E-SATA drives. These are
our first real tests with ZFS, and I am working on how to replace our HA-NFS UFS
file systems with ZFS counterparts. One of the things I am concerned with is how
do I replace a disk array/vdev in a pool? It appears that is not
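In a mirrored pool the array swap itself can usually be done LUN by LUN; a rough sketch with made-up device names (old array c6*, new array c7*):

# zpool attach tank c4t0d0 c7t0d0   # add the new array's LUN as an extra mirror side
# zpool status tank                 # wait for the resilver to complete
# zpool detach tank c6t0d0          # then drop the old array's LUN
(or, as a one-step form: zpool replace tank c6t0d0 c7t0d0)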
We are currently working on a plan to upgrade our HA-NFS cluster that uses
HA-StoragePlus and VxVM 3.2 on Solaris 9 to Solaris 10 and ZFS. Is there a
known procedure or best practice for this? I have enough free disk space to
recreate all the filesystems and copy the data if necessary, but would
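If it does come down to copying, one hedged sketch of moving a UFS filesystem into a new ZFS dataset (device and dataset names are placeholders):

# zfs create tank/export/data
# cd /tank/export/data
# ufsdump 0f - /dev/rdsk/c2t0d0s6 | ufsrestore rf -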
Eric,
To ask the obvious but crucial question :) What is the best way to truncate the
file on ZFS?
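For what it's worth, zero-length truncation in place works the same on ZFS as on any other filesystem; with a placeholder path:

# : > /tank/data/bigfile              # truncate to zero length, keeps the file
# cp /dev/null /tank/data/bigfile     # equivalent

Truncating to a specific nonzero size would go through ftruncate(2) instead.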
>
> What does vmstat look like?
> Also zpool iostat 1.
>
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         291M  9.65G      0     11   110K   694K
tank         301M  9.64G
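If it helps, a per-vdev breakdown instead of the pool totals above comes from adding -v:

# zpool iostat -v tank 1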
I just installed build 41 of Nevada on a SunBlade 1500 with 2GB of RAM. I
wanted to check out ZFS; with the delay of S10U2 I really could not wait any
longer :)
I installed it on my system and created a zpool out of an approximately 40GB
disk slice. I then wanted to build a version of thunderbi
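For reference, a single-slice pool like that is just the following (the slice name is a placeholder):

# zpool create tank c0t1d0s7
# zfs create tank/build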