The volumes sit on an HDS SAN. The only reason for the VxVM volumes is to prevent inadvertent import of the zpool on two nodes of a cluster simultaneously. Since we're on a SAN with RAID internally, it didn't seem we would need ZFS to provide that redundancy as well.
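For context, a minimal sketch of what giving ZFS its own redundancy would look like, reusing the device names from the status output quoted below (the mirror layout here is illustrative, not the pool's actual one). Hardware RAID underneath protects against device failure, but ZFS can only repair a block that fails its checksum when it holds a redundant copy itself, e.g. a mirror vdev or copies=2:

    # Illustrative only -- with mirrored vdevs ZFS can self-heal:
    zpool create ABC0101 mirror /dev/vx/dsk/ABC01dg/ABC0101_01 \
                                /dev/vx/dsk/ABC01dg/ABC0101_02
    # Or keep two copies of each data block on an existing pool
    # (protects only data written after the property is set):
    zfs set copies=2 ABC0101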
On Tue, Jun 21, 2011 at 4:17 AM, Remco Lengers <re...@lengers.com> wrote:
>
> Todd,
>
> Is that ZFS on top of VxVM? Are those volumes okay? I wonder if this is
> really a sensible combination?
>
> ..Remco
>
> On 6/21/11 7:36 AM, Todd Urie wrote:
>
> I have a zpool that shows the following from a zpool status -v <zpool name>:
>
> brsnnfs0104 [/var/spool/cron/scripts]# zpool status -v ABC0101
>   pool: ABC0101
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
>         corruption. Applications may be affected.
> action: Restore the file in question if possible. Otherwise restore the
>         entire pool from backup.
>    see: http://www.sun.com/msg/ZFS-8000-8A
>  scrub: none requested
> config:
>
>         NAME                             STATE     READ WRITE CKSUM
>         ABC0101                          ONLINE       0     0    10
>           /dev/vx/dsk/ABC01dg/ABC0101_01 ONLINE       0     0     2
>           /dev/vx/dsk/ABC01dg/ABC0101_02 ONLINE       0     0     8
>           /dev/vx/dsk/ABC01dg/ABC0101_03 ONLINE       0     0    10
>
> errors: Permanent errors have been detected in the following files:
>
>         /clients/ABC0101/rep/local/bfm/web/htdocs/tmp/rscache/717b52282ea059452621587173561360
>         /clients/ABC0101/rep/local/bfm/web/htdocs/tmp/rscache/6e6a9f37c4d13fdb3dcb8649272a2a49
>         /clients/ABC0101/rep/d0/prod1/reports/ReutersCMOLoad/ReutersCMOLoad.ABCntss001.20110620.141330.26496.ROLLBACK_FOR_UPDATE_COUPONS.html
>         /clients/ABC0101/rep/local/bfm/web/htdocs/tmp/G2_0.related_detail_loader.1308593666.54643.n5cpoli3355.data
>         /clients/ABC0101/rep/d0/prod1/reports/gp_reports/ALLMNG/20110429/F_OLPO82_A.gp.ABCIM_GA.nlaf.xml.gz
>         /clients/ABC0101/rep/d0/prod1/reports/gp_reports/ALLMNG/20110429/UNVLXCIAFI.gp.ABCIM_GA.nlaf.xml.gz
>         /clients/ABC0101/rep/d0/prod1/reports/gp_reports/ALLMNG/20110429/UNIVLEXCIA.gp.BARCRATING_ABC.nlaf.xml.gz
>
> I think that a scrub at least has the possibility to clear this up. A
> quick search suggests that others have had some good experience with using
> scrub in similar circumstances. I was wondering if anyone could share some
> of their experiences, good and bad, so that I can assess the risk and
> probability of success with this approach. Also, any other ideas would
> certainly be appreciated.
>
> -----RTU

--
-----RTU
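A minimal sketch of the scrub sequence under discussion, assuming the pool name from the status output above. Note that on a pool with no vdev-level redundancy a scrub can verify every checksum and repair redundantly stored metadata, but it cannot rewrite corrupted user-data blocks, so the listed files generally have to be restored from backup first:

    zpool scrub ABC0101        # read and verify every allocated block
    zpool status -v ABC0101    # watch progress; re-lists damaged files
    # After restoring or deleting the affected files, clear the
    # error counters; a subsequent scrub should then come back clean:
    zpool clear ABC0101
    zpool scrub ABC0101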
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss