Hi,
I am having a problem with zpool import when we import multiple storage
pools at one time. Below are the details of the setup:
- We are using a SAN with Sun 6140 storage arrays.
- Each server has a dual-port QLogic HBA using the qlc driver, with Sun
mpxio (SFCSM) enabled.
- We have 400
This is an old discussion but maybe someone could help me clarify a couple of
things.
Not having known that zone root file systems can't be ZFS, we built several of
these zones in our production environment. We are running Solaris 10 8/07. We
now see that zone roots are not supported on ZFS fil
The 250KB below was confusing to one reader.
What I mean is that over the interval of the file write, it transfers 250KB of
traffic. See man iostat and you can verify that this is correct.
250KB per second is not the bandwidth.
I also understand that 'mkfile' is not an acceptable performance benchmar
Hi,
The question is a ZFS performance question regarding SAN traffic.
We are trying to benchmark ZFS vs VxFS file systems, and I get the following
performance results.
Test Setup:
Solaris 10 11/06
Dual-port QLogic HBA with SFCSM (for ZFS) and DMP (for VxFS)
Sun Fire v490 server
LSI Raid 3
Hi,
Does anyone know how to force the zpool command to perform an overlay mount
when mounting the file system?
I get the following error:
# zpool import testpool
cannot mount '/testpool': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
I can mount an ove
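If I read that error right, there are two ways around it (testpool and
/testpool here are just the names from the example above): overlay-mount the
file system by hand after the import, or switch it to a legacy mountpoint and
use the overlay option of mount(1M). Roughly:
# zpool import testpool               # the pool imports; only the top-level mount fails as shown
# zfs mount -O testpool               # -O asks zfs mount to overlay the non-empty /testpool
or, with a legacy mountpoint:
# zfs set mountpoint=legacy testpool
# mount -F zfs -O testpool /testpool  # -O is the overlay flag of the generic mount command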
Hey,
#First question to ask -- are you using the emlxs driver for
#the Emulex card?
I'm using what I believe is the latest version of SFS. I got it from a link on
the Emulex website to http://www.sun.com/download/products.xml?id=42c4317d
#Second question -- are you up to date on the SAN Found
Jason,
I am no longer looking at running without STMS multipathing, because without
STMS you lose the binding to the array and I lose all transmissions between the
server and the array. The binding does come back after a few minutes, but this
is not acceptable in our environment.
Load times vary depe
I simply created a zpool with an array disk like
hosta# zpool create testpool c6td0 // runs within a second
hosta# zpool export testpool // runs within a second
hostb# zpool import testpool // takes 5-7 minutes
If STMS (mpxio) is disabled, it takes 45-60 seconds. I tested this with
LUN
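For what it's worth, one way to narrow down where that import time goes (the
/localdisks directory below is just a made-up name): zpool import's -d option
limits the device scan to a single directory, so pointing it at a directory of
symlinks to only the paths that back the pool avoids probing every device the
HBA and mpxio expose. A rough sketch:
hostb# mkdir /localdisks
hostb# ln -s /dev/dsk/c6t* /localdisks/          # link only the controller-6 devices that hold the pool
hostb# time zpool import -d /localdisks testpool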
I, too, experienced a long delay while importing a zpool on a second machine. I
do not have any file systems in the pool; just the Solaris 10 operating system,
an Emulex 10002DC HBA, and a 4884 LSI array (dual attached).
I don't have any file systems created, but when STMS (mpxio) is enabled I see
Hi,
I am running Solaris 10 ZFS and I do not have STMS multipathing enabled. I have
dual FC connections to storage using two ports on an Emulex HBA.
The Solaris ZFS Admin Guide says that a ZFS file system monitors disks by their
path and their device ID. If a disk is switched between co
Hi,
In migrating from **VM to ZFS, am I going to have an issue with major/minor
numbers on NFS mounts? Take the following scenario.
1. NFS clients are connected to an active NFS server that has SAN shared
storage between the active and standby nodes in a cluster.
2. The NFS clients are using t
Hi,
I am wondering how ZFS ensures that a storage pool isn't imported by two
machines at the same time. Does it stamp the disks with the hostid or hostname?
Below is a snippet from the ZFS Admin Guide. It appears that this check can be
overridden with import -f.
importing a pool that is currently in use b
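As far as I can tell from that same section of the guide, the pool's on-disk
labels record the hostid/hostname of the machine that last had it imported, so
a second machine refuses the import unless the pool was cleanly exported or -f
is given. Something like (testpool being an example pool name):
hosta# zpool export testpool     # clears the in-use state so another host may import the pool
hostb# zpool import testpool     # refuses if the pool still appears active on hosta
hostb# zpool import -f testpool  # overrides the check; only safe if hosta truly no longer has it imported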