I'm using ZFS on both EMC and Pillar arrays with PowerPath and MPxIO,
respectively. Both work fine - the only caveat is to drop your sd queue depth
(sd_max_throttle) to around 20 or so, otherwise you can run into an ugly run of
bus resets.
Yes. Works fine, though it's an interim solution until I can get rid of
PowerPath.
I'm using only sd_max_throttle and disabling the transient-error warnings.
Without the max_throttle setting, the system on Pillar becomes unusable and
spends its life in bus resets.
* set the max number of queued commands to 20 (default/maximum is 256)
set sd:sd_max_throttle=20
* prevent warning messages for non-disruptive operations
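If you want to confirm the value actually took effect after the reboot, a quick
sanity check (a sketch; assumes you can run mdb -k on the box) is to read the
variable straight out of the running kernel:
# print the live value of sd_max_throttle as a decimal
echo "sd_max_throttle/D" | mdb -k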
For a quick overview of setting up MPxIO and the other configs:
# fcinfo hba-port
HBA Port WWN: 1000c952776f
OS Device Name: /dev/cfg/c8
Manufacturer: Sun Microsystems, Inc.
Model: LP1-S
Type: N-port
State: online
Support
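For reference, the basic MPxIO enablement on Solaris 10 boils down to a couple
of commands (a rough sketch; mpathadm may not be present on every update, and
you should verify the LUN paths on your own box before trusting it):
# enable MPxIO on all FC ports; stmsboot prompts for the required reboot
stmsboot -e
# after the reboot, verify the multipathed LUNs and the HBA state
mpathadm list lu
fcinfo hba-port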
Actually, I'm using ZFS in a SAN environment, often importing LUNs to save
management overhead and make snapshots easily available, among other things. I
would love zfs remove because it would allow me, in conjunction with containers,
to build up a single manageable pool for a number of local host systems.
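The container side of that is just dataset delegation; a minimal sketch with
hypothetical pool and zone names:
# delegate a dataset out of the shared pool to a container
zonecfg -z dbzone1
zonecfg:dbzone1> add dataset
zonecfg:dbzone1:dataset> set name=tank/dbzone1
zonecfg:dbzone1:dataset> end
zonecfg:dbzone1> commit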
I've been seeing this failure to cap on a number of (Solaris 10 update 2 and 3)
machines since the script came out (ARC hogging is a huge problem for me,
especially on Oracle). This is probably a red herring, but my v490 testbed
seemed to actually cap on 3 separate tests, while my t2000 testbed doesn't.
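A quick way to watch whether a cap has stuck is to read the ARC numbers
directly (a sketch; the arcstats kstat may not exist on older updates, in which
case ::arc from mdb -k shows the same figures):
# current ARC size and ceiling, in bytes
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max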
I thought I'd share some lessons learned testing Oracle APS on Solaris 10 using
ZFS as backend storage. I just got done running two months' worth of performance
tests on a v490 (32 GB, 4 x 1.8 GHz dual-core processors, with 2 x Sun 2Gb HBAs
on separate fabrics) and varying how I managed storage. Storage us
General Oracle zpool/zfs tuning, from my tests with Oracle 9i and the APS
Memory Based Planner and filebench. All tests were completed using Solaris 10
update 2 and update 3 (a short setup sketch follows the list):
- use an 8k recordsize on the ZFS filesystems holding data
- don't use ZFS for redo logs - use UFS with directio and noatime. Building
redo log
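A minimal sketch of that layout, with hypothetical pool, filesystem, and device
names:
# data files on ZFS, recordsize matched to the Oracle db_block_size
zfs create tank/oradata
zfs set recordsize=8k tank/oradata
# redo logs on a separate UFS slice with directio and noatime
newfs /dev/rdsk/c5t1d0s0
mount -F ufs -o forcedirectio,noatime /dev/dsk/c5t1d0s0 /oraredo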
My biggest concern has been making sure that Oracle doesn't have to fight to
get memory, which it does now. There's a definite performance hit while the ARC
releases memory so Oracle can get what it's asking for, and this is passed on
to the application. The problem
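On updates where the tunable exists, the usual answer is to pin the ARC below
what the SGAs need in /etc/system (a sketch with an assumed 4 GB cap; on u2/u3
zfs_arc_max may not be present and the cap has to be set through mdb instead):
* cap the ZFS ARC at 4 GB so the Oracle SGA isn't squeezed
set zfs:zfs_arc_max=0x100000000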
I currently run 6 Oracle 9i and 10g dbs using 8GB SGA apiece in containers on a
v890 and find no difficulties starting Oracle (though we don't start all the
dbs truly simultaneously). The ARC cache doesn't ramp up until a lot of IO has
passed through after a reboot (typically a steady rise over
The big problem is that if you don't do your redundancy in the zpool, then the
loss of a single device flatlines the system. This occurs with single-device
pools, stripes, and concats. Sun support has said in support calls and Sunsolve
docs that this is by design, but I've never seen the loss of a
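Keeping the redundancy in the pool itself is straightforward; a sketch with
hypothetical LUN names, mirroring each vdev across two arrays so a single-LUN
loss just degrades the pool:
zpool create tank mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0
zpool status tank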
I'd definitely prefer owning a sort of SAN solution that would basically just
be trays of JBODs exported through redundant controllers, with enterprise level
service. The world is still playing catch up to integrate with all the
possibilities of zfs.
I didn't see an advantage in this scenario, though I use zfs/compression
happily on my NFS user directory.
> We are currently recommending separate (ZFS) file systems for redo logs.
Did you try that? Or did you go straight to a separate UFS file system for
redo logs?
I'd answered this directly in email originally.
The answer was that yes, I tested using ZFS for log pools among a number of
disk layouts.
Try throttling back the maximum number of outstanding I/Os. I saw a number of
errors similar to this on both Pillar and EMC.
In /etc/system, set:
set sd:sd_max_throttle=20
and reboot.
The thought is to start with the throttle and tune it up or down depending on
whether the errors keep showing up. I don't know of a specific Nexsan-recommended
throttle (we use a SATABoy and go with 20).
Any chance these fixes will make it into the normal Solaris Recommended &
Security patch clusters?
For the particular HDS array you're working on, or also on Nexsan storage?
Why did you choose to deploy the database on ZFS?
- On-disk consistency was big - one of our datacenters was having power problems
and the systems would sometimes drop while live. I had a couple of instances of
data errors with VxVM/VxFS and we had to restore from tape.
- zfs snapshots save us many hours
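For anyone who hasn't leaned on them yet, the snapshot workflow is only a
couple of commands (a sketch with hypothetical dataset names):
# cheap point-in-time copy before risky maintenance
zfs snapshot tank/oradata@pre-upgrade
# roll the filesystem back if the work goes badly
zfs rollback tank/oradata@pre-upgrade
# or clone the snapshot to get a writable copy for restore testing
zfs clone tank/oradata@pre-upgrade tank/oradata-verify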
> What's the maximum filesystem size you've used in a production environment?
> How did the experience come out?
I have a 26 TB pool that will be upgraded to 39 TB in the next couple of months.
This is the backend for backup images. The ease of managing this sort of
expanding storage is a little b
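Growing that kind of pool is a single online operation (a sketch with
hypothetical device names):
# add another mirrored pair of LUNs to the existing pool
zpool add tank mirror c4t0d0 c5t0d0
zpool list tank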
Solaris 10 u3, ZFS version 3, kernel patch 118833-36.
I'm running into a weird problem where I attach a mirror to a large, existing
filesystem. The attach occurs and then the system starts swallowing all the
available memory and system performance chokes while the filesystems sync.
In some cases all memory
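For clarity, the operation that triggers it is just a plain attach (hypothetical
device names):
# mirror an existing device; the resilver then kicks off in the background
zpool attach tank c2t0d0 c3t0d0
zpool status tank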
> Shows up as lpfc (is that Emulex?)
lpfc (or fibre-channel) is the driver for an Emulex-branded Emulex card; a
Sun-branded Emulex HBA uses the emlxs driver instead.
I run ZFS (v2 and v3) on both Emulex and Sun-branded Emulex HBAs on SPARC with
PowerPath 4.5.0 (and MPxIO in other cases) and Clariion arrays and have never se
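If you're unsure which driver has claimed the card, a quick check (a sketch;
lpfc only shows up if the Emulex-supplied driver package is installed) is:
# see which Emulex driver modules are loaded
modinfo | egrep 'lpfc|emlxs'
# and which driver the PCI IDs are aliased to
egrep 'lpfc|emlxs' /etc/driver_aliases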
Is there a way to take advantage of this in Sol 10 u3?
"sorry, variable 'zfs_vdev_cache_max' is not defined in the 'zfs' module"
If anyone is running this configuration, I have some questions for you about
Page83 data errors.
I've been running ZFS against EMC Clariion CX-600s and CX-500s in various
configurations, mostly exported-disk setups, and have hit a number of kernel
flatlines. Most of these incidents include Page83 data errors in
/var/adm/messages during the kernel crashes.
As we're outgrowing the speed
Sun has seen all of this during various problem cases over the past year and a
half. The environment:
CX600 FLARE code 02.07.600.5.027
CX500 FLARE code 02.19.500.5.044
Brocade Fabric, relevant switch models are 4140 (core), 200e (edge), 3800
(edge).
Sun-branded Emulex HBAs in the following models:
SG-XPCI1FC-