The 4 database servers are part of an Oracle RAC configuration. 3 databases are hosted on these servers: BIGDB1 on all 4, littledb1 on the first 2, and littledb2 on the last 2. The Oracle backup system spawns database backup jobs that can land on any node depending on traffic and load. All nodes are fibre-attached to a SAN, and they all have FC access to the same set of SAN disks where the nightly dumps must go. The plan all along was to save the gigE network for regular network traffic and have the nightly backups go over the dedicated FC network.
Originally we tried using our tape backup software to read the Oracle flash recovery area (an Oracle raw device on a separate set of SAN disks), but our backup software has a known issue with the particular version of Oracle we are using. So we scavenged up a SAN disk to be mounted with a filesystem, so that the tape backup software could just read the Oracle dump file like a regular file. However, this does not work because all four hosts need access to the backup indexes, which are stored on the shared disk. As I mentioned earlier, this is not working with ZFS and apparently is fostering corruption in the ZFS, presumably because ZFS is not a cluster filesystem and can't safely be mounted by more than one host at a time.

We haven't given each host its own dedicated disk because dividing the available disk space four ways would not leave enough space on any one of them. Our failover capabilities for backups would also be gone: if the host that happens to have the disk attached for a certain database fails, no other host can step in and do the backup. The original plan was that all 4 servers read/write the same set of shared storage, so any host can back up any of the three databases, and the next night a different host could do the backup with no problem because it would have access to the shared indexes on the shared disk.

Now it seems our only options are to switch to NFS (and use the network) while the dedicated fibre laid to each of these four hosts goes unused, or to buy QFS for tens of thousands of dollars. All the physical infrastructure is there for a dedicated backup FC network; all that's missing is a shared filesystem to lay on top of the v490's to arbitrate between them and the shared disk. Too bad the SAN we are using can't export NFS shares directly over the FC to the host HBAs. I am all for storage servers that have FC but publish NFS over the network; we just don't want to use the network in this case, we want to use the FC. I still wonder if NFS could be used over the FC network in some way, similar to how NFS works over an ethernet/TCP network.

Let me know if I am overlooking something. The last hope here is to see if GlusterFS can run reliably on the Solaris 10 v490's talking to our SAN. Maybe IP over Fibre Channel, and just treat the FC as if it were a network.
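
For what it's worth, here is a rough sketch of what that last idea (IP over FC plus NFS) might look like on Solaris 10, assuming the Leadville fcip driver is available and the HBAs/switches will pass IP traffic; the interface instance, addresses, hostnames and the /backup path are made up for illustration:

    # on the host that has the backup filesystem mounted (acts as the NFS server)
    ifconfig fcip0 plumb                       # plumb an IPFC instance on the first FC port
    ifconfig fcip0 192.168.50.1 netmask 255.255.255.0 up
    echo "share -F nfs -o rw=dbnode2:dbnode3:dbnode4 /backup" >> /etc/dfs/dfstab
    svcadm enable -r svc:/network/nfs/server   # start the NFS server, which shares /backup

    # on each of the other three RAC nodes
    ifconfig fcip0 plumb
    ifconfig fcip0 192.168.50.2 netmask 255.255.255.0 up    # .3 and .4 on the others
    mount -F nfs 192.168.50.1:/backup /backup  # dump files and indexes now reachable over FC

That still leaves one host as a single point of failure for the backup filesystem, so it doesn't give back the "any surviving node can do the backup" behaviour a real shared filesystem like QFS would, but at least the traffic stays off the gigE.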