>>>>> "Adam" == Adam Levin <lev...@westnet.com> writes:
Adam> On Wed, 26 May 2010, John Stoffel wrote:
>> Unless your backup software and your NAS box talk very well together,
>> you're going to be hosed on restores.

Adam> Thanks, John.  That's a concern of mine as well.

To us, it turned out to be the critical problem, because it was hard to
be sure that a restore would bring back all the files in a directory,
since you could have files spread across multiple backends, depending
on the access patterns and the Acopia's migration rules.

>> Now, I will admit this was all a couple of years ago, and it may have
>> been fixed or improved since then, but think of this situation and
>> ask Acopia how it is handled.  Their answer will be illuminating (and
>> I'd love to hear it if you'll share it!).

Adam> We will have them coming in shortly, so hopefully I can get some
Adam> current answers.

Please share!

Adam> It may depend on which features and what architecture we go
Adam> with.  If it's possible to use the box as simply a virtualizer,
Adam> where the data is still stored and controlled by the filers
Adam> (NetApps in our case as well), then we can just restore to the
Adam> original location.

That might work, but the real advantage we saw was getting away from
the 16TB volume limit on NetApp, which would let us simplify the paths
users take to reach their data and let us grow large projects without
having to shuffle data around by hand and then update wherever it was
referenced.  We used to have a forest of mount points and symlinks,
etc.  Total pain.  FlexVols have helped a lot, but are still annoying.

Adam> If the Acopia owns the filesystem and is moving files around on
Adam> its own (like some policy-based DLM systems do), then we have
Adam> bigger worries about the restores.  If we have the option, then
Adam> at least we get to decide for ourselves whether to complicate
Adam> our backups.  :)

Basically, that's what you can have the Acopia do: assign a single
virtual mount point to one or more backend storage pools, which the
Acopia then manages transparently to the end users.

One goal was to make refreshes of our NetApps transparent to end users,
but that didn't really work out.  You have to take downtime to put the
Acopia in front of the storage, and then take downtime again if you
need to pull the Acopia back out to do another site or set of volumes.
I'll admit we never had performance problems, at least none that we
noticed.

Adam> At the moment what we really want is simply a box that sits
Adam> between the filers and the users and when the user wants to
Adam> connect to a share, the box in the middle directs them to the
Adam> right filer.  As long as we can define where to point the user,
Adam> we can migrate the data around without disrupting the users.
Adam> Once you move to a true global namespace, the backup/restore
Adam> issues mount up quickly.

Exactly.  A single global namespace is a wonderful thing to work
towards, but backups and restores become painful if they aren't
planned for carefully.

Now, moving on to NetApp: I really, really wish they'd let me create a
single aggregate spanning all the disks in a filer, so I could create
as many volumes as I want and grow or shrink them easily.  Right now
we have to balance volumes across aggregates, and if you guess
wrong... it's a complete hassle to move data, even if you have
terabytes free on another aggregate.  I'd be happier with 16TB volumes
if I could just have them all sitting on a 64TB aggregate without the
hassle.
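
To make the restore problem above a bit more concrete, here's a rough
Python sketch (nothing Acopia-specific, and the mount paths and names
are made up) of what a backup or restore tool really has to figure
out: which backend filer actually holds each file that shows up under
the one virtual directory the users see.

#!/usr/bin/env python
# Rough sketch only: walk a set of backend filer mounts and record
# which backend actually holds each file that appears under a single
# virtual directory.  All paths below are hypothetical.
import os

# Hypothetical NFS mounts of the individual backend filers.
BACKENDS = {
    "filer1": "/backends/filer1/proj",
    "filer2": "/backends/filer2/proj",
    "filer3": "/backends/filer3/proj",
}

def map_virtual_namespace(backends):
    """Return {virtual_path: backend_name} for every file found.

    Assumes each file lives on exactly one backend at a time, which is
    the usual case when a virtualizer has migrated it.
    """
    location = {}
    for name, root in backends.items():
        for dirpath, dirnames, filenames in os.walk(root):
            for fname in filenames:
                rel = os.path.relpath(os.path.join(dirpath, fname), root)
                # The single path the users see through the virtualizer.
                virtual = os.path.join("/proj", rel)
                location[virtual] = name
    return location

if __name__ == "__main__":
    where = map_virtual_namespace(BACKENDS)
    for path in sorted(where):
        print("%-40s %s" % (path, where[path]))

If your backup software only knows about one of those backends, a
"full" restore of a directory can quietly come back incomplete, which
is exactly the hole we kept running into.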
John