We've had the Acopia around for over a year and are also moving away from it, or at least limiting its influence. The biggest issue we've encountered is metadata corruption. All filesystem changes (including ownership and permission changes) must be made through the Acopia, or you risk corrupting the metadata. And once the metadata for a share is corrupt, Acopia's typical answer is to delete the share and re-import it; that means an outage for the customers while the work is in progress. In our environment, that's not acceptable. We've also had the Acopias crash a couple of times, both the primary and the standby simultaneously.
I'm not directly involved with the Acopias we have, but I hear the grumbling over the cube walls. If you'd like some more details on the issues, let me know and I'll ask.

Matt

===========================================================
"If they are the pillars of our community,
 We better keep a sharp eye on the roof."
===========================================================

----- Original Message ----
> From: John Stoffel <j...@stoffel.org>
> To: Adam Levin <lev...@westnet.com>
> Cc: LOPSA Discuss List <discuss@lopsa.org>
> Sent: Wed, May 26, 2010 3:01:01 PM
> Subject: Re: [lopsa-discuss] Acopia
>
> Adam> Hey all. Does anyone have any experience with Acopia? I've got
> Adam> brief and unpleasant knowledge of Rainfinity, but I've never
> Adam> worked with Acopia, and I hear it's good stuff for NAS namespace
> Adam> virtualization, so I'm curious about any success stories or
> Adam> failure stories you might have.
>
> We got one at my current work, but we've moved away from it
> completely. The big, big, big silent issue with it (or any NAS
> redirector like this) is backups. Specifically, restores. Unless your
> backup software and your NAS box talk very well together, you're
> going to be hosed on restores.
>
> Now, I will admit this was all a couple of years ago, and it may have
> been fixed or improved since then, but think of this situation and
> ask Acopia how it is handled. Their answer will be illuminating (and
> I'd love to hear it if you'll share it!).
>
> Have two Netapp filers: one is a fast FAS3xxx, the other a nice big
> Nearstore with bigger but slower disks. Call them F and S. You keep
> tier-one data on F and tier-two data on S, with say 3 TB on F and
> 16 TB on S. Now you do a backup. How? NDMP? How do you synchronize
> F & S to have a completely consistent view into the total pool of
> files? Now assume that F dies and needs to be restored completely.
> How do you do this efficiently and quickly, without having files
> appear in two places?
> 1. You can back up through the Acopia, giving you a single image to
>    back up, but you a) hit the Acopia hard, b) have to restore
>    through it, and c) how do you re-lay-out your files onto S & F
>    properly? Again, they may have good answers to this question.
>
> 2. You can back up behind the Acopia's back, using NDMP (or whatever
>    you want), so that you get the speed and the use of snapshots for
>    consistent backups. But now you need to restore data, and some of
>    it has moved from one system to another. How do you *find* the
>    data to restore? And if it's split across both the F & S systems,
>    you now need to recover it manually.
>
> 3. How many files will you have? Both Rainfinity and Acopia blew up
>    for us because we had millions of small files. The Rainfinity was
>    before my time at $WORK, but the Acopia also ran into problems
>    with 8+ million files. I suspect that this is fixed now... but I
>    wonder.
>
> It's a great idea, but the downsides (at the time) turned out to be
> just too much. My opinion these days is that an HSM system which is
> part of the backup system is the way to go, because it integrates two
> critical data-management tools so that they work (hopefully!)
> together. CommVault is pretty good in this respect with Netapps and
> their HSM module. Not perfect, but not bad either.
>
> Good luck, and let us know how it works out.
>
> John

_______________________________________________
Discuss mailing list
Discuss@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
http://lopsa.org/