> Bag-o-tricks-r-us, I suggest the following in such a case:
>
> - Two ZFS pools
>   - One for production
>   - One for Education
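(For concreteness, a two-pool layout along those lines would look roughly
like this -- the pool and device names here are hypothetical, and the
recordsize=8k setting assumes the usual 8 KB Oracle block size:)

  zpool create prodpool mirror c2t0d0 c2t1d0
  zpool create edupool  mirror c3t0d0 c3t1d0
  zfs create prodpool/oradata
  zfs create edupool/oradata
  zfs set recordsize=8k prodpool/oradata
  zfs set recordsize=8k edupool/oradata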
The DBAs are very resistant to splitting our whole environments. There are
nine on the test/devl server! So we're going to put the DB files and redo
logs on separate (UFS with directio) LUNs. Binaries and backups will go onto
two separate ZFS LUNs. With production, they can do their cloning at night
to minimize impact. Not sure what they'll do on test/devl.

The two ZFS file systems will probably also be separate zpools (for
political reasons, as well as for juggling Hitachi disk space).

BTW, it wasn't the storage guys who decided on the "one filesystem to rule
them all" strategy, but my predecessors. It was part of the move from
Clariion arrays to Hitachi. The storage folks know about, understand, and
agree with us when we talk about these kinds of issues (at least, they do
now). We've pushed the caching and other subsystems often enough to make
this painfully clear.

> Another thought is while ZFS works out its kinks why
> not use the BCV or ShadowCopy or whatever IBM calls
> it to create Education instance. This will reduce a
> tremendous amount of I/O.

That means buying more software to alleviate a short-term problem (with RAC,
the whole design will be different, including a move to ASM). We already
have RMAN and OEM, so that argument won't fly.

> BTW, I'm curious what application using Oracle is
> creating more than a million files?

Oracle Financials. The application includes everything but the kitchen sink
(but the bathroom sink is there!).

Thanks for all of your feedback and suggestions. They all sound bang on. If
we can just get all the pieces in place to move forward now, I think we'll
be OK. One big issue for us will be finding the Hitachi disk space--we're
pretty full up right now. :-(

Rainer

This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss