Jeff,
This is great information. Thanks for sharing.
Quick I/O is almost required if you want vxfs with Oracle. We ran a
benchmark a few years back and found that vxfs is fairly cache-hungry,
and ufs with directio beats vxfs without Quick I/O hands down.
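For reference, a directio ufs mount for Oracle data might look like the sketch below (the device and mount point are hypothetical; adjust for your system):

```shell
# Mount a ufs filesystem for Oracle datafiles with direct I/O,
# bypassing the page cache, and without access-time updates.
# Device path and mount point here are examples only.
mount -F ufs -o forcedirectio,noatime /dev/dsk/c0t1d0s6 /u01/oradata

# Equivalent persistent entry in /etc/vfstab (one line):
# /dev/dsk/c0t1d0s6 /dev/rdsk/c0t1d0s6 /u01/oradata ufs 2 yes forcedirectio,noatime
```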
Take a look at what mpstat says on xcalls.
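For example, the xcal column in mpstat shows cross-calls per second; a rough way to watch it, and to attribute the cross-calls with DTrace, is sketched below (assumes a Solaris 10 host and root access):

```shell
# Sample per-CPU statistics every 5 seconds; the xcal column
# reports cross-calls per second on each CPU.
mpstat 5

# Aggregate kernel stacks that are generating cross-calls
# (sysinfo provider; run as root, Ctrl-C to print the summary).
dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); }'
```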
That's really an NFS question, not a ZFS one — ZFS simply uses whatever UID the
NFS server passes through to it. That said, Solaris doesn’t offer this
functionality, as far as I know. Perhaps NFSv4 domains could be used to
achieve something similar.
This message posted from opensolaris.org
My biggest concern has been making sure that Oracle doesn't have to fight
to get memory, which it does now. There's a definite performance hit while
the ARC releases cache memory to let Oracle get what it's asking for, and
this is passed on to the application. The problem
General Oracle zpool/zfs tuning, from my tests with Oracle 9i, the APS
Memory Based Planner, and filebench. All tests were run on Solaris 10
update 2 and update 3:
- use zpools with an 8k blocksize for data
- don't use zfs for redo logs; use ufs with directio and noatime. Building
redo log
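The first point above could be set up roughly as follows (pool and filesystem names are made up; note that recordsize only affects newly written files, so set it before loading any data):

```shell
# Create a pool and a dataset for Oracle datafiles with an 8k
# recordsize to match a typical Oracle db_block_size of 8k.
# Pool name, device, and dataset name are hypothetical.
zpool create orapool c0t2d0
zfs create -o recordsize=8k orapool/oradata

# Verify the property took effect.
zfs get recordsize orapool/oradata
```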
Thanks for the feedback. Please see below.
> ZFS should give back memory used for cache to the system
> if applications are demanding it. Right, it should, but sometimes it
> won't.
>
> However, with databases there's a simple workaround - as
> you know how much ram all databases will consume at least you
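One common form of that workaround is capping the ARC via /etc/system; the 4 GB value below is purely an example, and should be sized to leave headroom for the SGA:

```shell
# /etc/system fragment - cap the ZFS ARC at 4 GB (0x100000000 bytes).
# Example value only; takes effect after a reboot.
set zfs:zfs_arc_max = 0x100000000
```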
I'm sorry dude, I can't make head or tail of your post. What is your point?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> > My question is not related directly to ZFS but
> maybe
> > you know the answer.
> > Currently I can run the ZFS Web administration
> > interface only locally, by pointing my browser to
> > https://localhost:6789/zfs/
> > What should be done to enable access to
> > https://zfshost:
I'm sharing a zfs filesystem with sharenfs=on, and I'm
facing the problem that the user ids of the clients do not
exist on the zfs file server; also, different clients
can connect using the same uid.
What I would like to do is map client IP + uid to a local
uid on the zfs server.
e.g.
192.168.1.2,