> "Adam" == Adam Levin writes:
Adam> In other words, if I have backups for the past month and the VM
Adam> was on datastore 1, but then last week it moved to datastore 2,
Adam> then when the restore is needed they'll grab it from datastore
Adam> 2, but if they don't get back what they need, a
Honestly, NBU isn't a bad product. It's been around a long time, and if
you use it in the traditional backup and recovery sense, it works quite
well. Its database doesn't scale the way DB2 does, naturally.
When we were rebuilding our datacenters 6 years ago, TSM was in the
running. We got the n
I haven't heard *anything* nice about NBU, sorry.
Are you able to say why they dropped TSM? The DB2 backend implementation
is much tighter now, so database update speed has been vastly improved
along with stability. Also implemented in the past 6 years was incremental
block-level backup and stab
Be aware that the cost of TSM may be structured much differently from
Veeam. Paying for consumption with TSM is fairly costly (anecdotal).
The trade-off for paying by socket on the hypervisors is that you will also
need to consume resources in the backup infrastructure (per vCenter in our
topology).
I'll second the recommendation for TSM. We don't use TDP VE but I can
definitely vouch for the scalability. We use it to track ~1 billion
onsite file versions and 5PB on tape, with an equivalent number offsite.
Our daily backup volumes vary from 10TB all the way to 60TB. For a storage
system of
The big problem with NBU has been that they are always playing catchup to
the VMWare feature set, and still haven't fully caught up to doing good
snapshot-based backups integrated with NetApp in 7.6. 7.7 supposedly fixes
some of that, but that's what we've been hearing since 7.0.
-Adam
One more vote of confidence for NBU. It's been a while since I've used
it, but it was most definitely The Awesome.
D
Thanks, Mike. We were a TSM shop until we switched to NBU 6 years ago. I
don't think they're looking to go back, but there's no question it scales.
It's probably the biggest, baddest backup system there is. :)
-Adam
On Thu, Oct 29, 2015 at 12:11 PM, Michael Ryder wrote:
Adam
Have you looked at Tivoli (now Spectrum Protect)? My environment isn't as
large as yours, but from what I've heard it scales well. We use it to back
up both Linux and Windows VMs and physical hosts.
The add-on Tivoli Data Protector for Virtual Environments is the magic
smoke that adds featu
> From: Adam Levin [mailto:levi...@gmail.com]
>
> It certainly deserves
> a second look as to whether this quiescing stuff is necessary.
FWIW, I don't advise *not* quiescing. At worst, it does no harm, and at best,
it might be important. But I don't do snapshots in vmware - and don't do
quiesci
> From: tech-boun...@lists.lopsa.org [mailto:tech-boun...@lists.lopsa.org]
> On Behalf Of Steve VanDevender
>
> Database systems still often seem to have this problem, though, and
> doing filesystem-level backups of systems with running databases will
> often get inconsistent database state.
You
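To make the inconsistent-database-state point concrete, here's a minimal sketch of
one way to get an application-consistent copy at the guest level, assuming a MySQL
guest and the mysql-connector-python package (neither is mentioned in the thread;
take_storage_snapshot() is a placeholder for whatever the storage layer provides):

    # Hypothetical sketch: hold a MySQL read lock while the storage snapshot
    # runs, so the snapshot captures a consistent on-disk state. Host,
    # credentials and take_storage_snapshot() are placeholders.
    import mysql.connector

    def take_storage_snapshot():
        # Stand-in for a NetApp/ZFS/array snapshot call; not implemented here.
        raise NotImplementedError

    conn = mysql.connector.connect(host="db.example.com", user="backup",
                                   password="secret")
    cur = conn.cursor()
    try:
        cur.execute("FLUSH TABLES WITH READ LOCK")  # quiesce writes, flush to disk
        take_storage_snapshot()                     # snapshot while the lock is held
    finally:
        cur.execute("UNLOCK TABLES")                # resume normal operation
        cur.close()
        conn.close()

The VSS/VMware Tools quiescing discussed elsewhere in the thread automates roughly
this kind of hand-shake inside the guest.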
Yeah, I'm not sure there's a great answer to this. Even just choosing
random blocks of public IP's can get you into trouble if the other company
has guys that think just like you. :)
-Adam
On Wed, Oct 28, 2015 at 4:41 PM, David Lang wrote:
On Tue, 27 Oct 2015, John Stoffel wrote:
> And using public IP spaces... really dumb outside a lab environment.
> I mean how hard is it to use 10.x.x.x for everything these days?
That depends: how hard is it to change your IPs when you merge with someone else
who is already using the 10.x.x.x and
On Wed, Oct 28, 2015 at 4:17 PM, Edward Ned Harvey (lopser) <
lop...@nedharvey.com> wrote:
> Unless I miss my guess, the discussions you're remembering are *not*
> filesystem-eats-itself-because-of-power-failure. Every filesystem can
> become corrupt via hardware failure (CPU or memory errors, etc
This is a very interesting discussion for me, and probably warrants some
more research and testing. I readily admit that I've always worked under
the operating assumption that pulling the plug *could* lead to corruption,
even after "upgrading" from ufs to xfs those many years ago. It certainly
deserves a second look as to whether this quiescing stuff is necessary.
> From: Brandon Allbery [mailto:allber...@gmail.com]
>
> Mostly discussion/"help plz!" in #macports IRC. It's not especially common
> but there've been enough (3-4) instances to make me wary of relying on it.
>
> xfs has been known to eat itself under some circumstances as well; that one
> has be
At a previous employer we used Avamar for this and I recall it working well. I
didn't operate it myself, but restores and clones I requested from backups
always came out just as expected.
I believe the product is now owned by EMC.
--
Brad Beyenhof . . . . . . . . . . . . . . . . http://augment
> From: Adam Levin [mailto:levi...@gmail.com]
>
> VMWare Tools allows VMWare to tell
> the VM, through VSS, to quiesce, and then VMWare can take its snapshot --
> it knows to quiesce when it takes its own snapshot. Once that snapshot
> exists, it's 100% safe
Actually, this is incorrect.
In ord
> From: Brandon Allbery [mailto:allber...@gmail.com]
>
> And in general, relying on being able to walk away from a bad landing just
> seems like an open invitation for things to go wrong. *Especially* for
> backups.
I think the right approach is to snapshot and replicate the machines in their
ru
On Wed, Oct 28, 2015 at 9:51 AM, Edward Ned Harvey (lopser) <
lop...@nedharvey.com> wrote:
> Link?
>
> I've never experienced that, and I haven't been able to find any
> supporting information from the hive mind.
>
Mostly discussion/"help plz!" in #macports IRC. It's not especially common
but the
> From: Brandon Allbery [mailto:allber...@gmail.com]
>
> Sadly HFS+ *is* known to sometimes corrupt itself in unfixable ways on hard
> powerdown.
Link?
I've never experienced that, and I haven't been able to find any supporting
information from the hive mind.
> From: Brandon Allbery [mailto:allber...@gmail.com]
>
> OSes, maybe ("designed to" and "it works" are often not on speaking terms
> with each other). Applications, far too often not so much.
Perhaps "Designed and tested" would be a more compelling way to phrase that? I
know crash consistency te
On Wed, Oct 28, 2015 at 9:41 AM, Edward Ned Harvey (lopser) <
lop...@nedharvey.com> wrote:
> Dunno what filesystems or applications you support, but these aren't
> concerns for the *filesystems* ext3/4, btrfs, ntfs, xfs, zfs, hfs+... Which
> is all the filesystems I can think of, in current usage
> From: Adam Levin [mailto:levi...@gmail.com]
>
> I'm not sure I understand exactly what you're doing. Are you using RDMs
> and giving each VM a direct LUN to the storage system, or are you presenting
> datastores via iSCSI? Are you saying you're presenting one datastore per
> VM?
Yeah, iscsi,
On Wed, Oct 28, 2015 at 6:52 AM, Edward Ned Harvey (lopser) <
lop...@nedharvey.com> wrote:
> What I've always done was to make individual zvol's in ZFS, and export
> them over iscsi. Then vmware simply uses that "disk" as the disk for the
> VM. Let ZFS do snapshotting, and don't worry about vmware
I'm not sure I understand exactly what you're doing. Are you using RDMs
and giving each VM a direct LUN to the storage system, or are you
presenting datastores via iSCSI? Are you saying you're presenting one
datastore per VM?
Managing RDMs for 2500 VMs is simply impractical, and there's a limit
I'm hearing a lot of people here saying "quiesce" the VM, and how many VM's do
you have per volume... I am surprised by both of these.
What I've always done was to make individual zvol's in ZFS, and export them
over iscsi. Then vmware simply uses that "disk" as the disk for the VM. Let ZFS
do snapshotting, and don't worry about vmware
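For anyone who hasn't seen this pattern, here's a minimal sketch of the per-VM zvol
idea, driving the standard zfs CLI from Python (pool and VM names are made up, and
the iSCSI export step is platform-specific, so it's only noted in a comment):

    # Sketch of "one zvol per VM, let ZFS do the snapshots". Pool/VM names are
    # illustrative; run on the storage host with sufficient privileges.
    import subprocess

    POOL = "tank"   # example pool name
    VM = "web01"    # example VM name

    def zfs(*args):
        subprocess.run(["zfs", *args], check=True)

    # 1) Create a 100G zvol to act as this VM's virtual disk.
    zfs("create", "-V", "100G", f"{POOL}/{VM}")

    # 2) Export it over iSCSI so vmware uses it as the VM's disk
    #    (platform-specific: COMSTAR on illumos, targetcli/LIO on Linux, ...).

    # 3) Nightly: snapshot just that VM's disk -- no vmware snapshot or
    #    datastore-wide quiesce involved.
    zfs("snapshot", f"{POOL}/{VM}@nightly-2015-10-27")

Restore then becomes a zfs rollback or clone of that one VM's volume rather than
anything datastore-wide.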
One of the problems with using lots of datastores is the IP issue with
cDOT, which as you point out isn't a problem with RFC 1918 address spaces...
...unless your network team long ago got really tired of mergers and
acquisitions causing all sorts of problems with overlapping address spaces,
and d
With Clustered OnTap, it actually makes sense to have lots of volumes
and lots of IPs, one per datastore, so you can move them around the
cluster and also move the interface to follow the volume as well. You
burn through IPs, but that's what the 192.168.x.y space is for, right?
Just dedicate a cl
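To put rough numbers on the "burn through IPs" point, here's a small illustrative
sketch (not from the thread) of how far 192.168.0.0/16 goes if you dedicate
addresses to one data LIF per datastore; the 75-datastore figure echoes the count
mentioned elsewhere in the thread:

    # Illustrative only: how much room RFC 1918 space leaves for
    # one-LIF-per-datastore addressing.
    import ipaddress

    space = ipaddress.ip_network("192.168.0.0/16")
    subnets = list(space.subnets(new_prefix=24))    # 256 /24s to carve up

    datastores = 75
    lif_subnet = subnets[0]                         # say, 192.168.0.0/24 for LIFs
    lifs = list(lif_subnet.hosts())[:datastores]    # one data LIF per datastore

    print(f"{len(subnets)} /24s available, {len(list(lif_subnet.hosts()))} usable "
          f"addresses per /24")
    print("first LIF:", lifs[0], "last LIF:", lifs[-1])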
Yeah that seems to be the easiest answer, even if it's not ideal. That'll
naturally limit the number of VMs per datastore. If we can manage to
change policies to go with crash-consistent instead of app-consistent on
most of our service levels, that'll help a lot too.
-Adam
Cool, thanks for the pointer. Can I ask what the size of your environment
is? Is the product scaling well?
Thanks,
-Adam
On Tue, Oct 27, 2015 at 11:49 AM, Dave Caplinger <davecaplin...@solutionary.com> wrote:
Yeah, it's a hard balance to strike. Having one tool to do it all
makes training and support simpler and easier. But... if that tool
can't do it all as well as a specific tool, then maybe it's not a good
tradeoff to make.
I don't have a good answer, but in some cases just pure $$$ costs
argues
We use Unitrends Virtual Backup as well; it has been around for quite a while
since it was previously known as PHD Virtual Backup before Unitrends acquired
them. It snapshots and backs up individual VM's virtual disks at a time (with
de-duplication and compression) so we don't have the issue of
The more we look into this, the more I think that trying to use just one
tool is going to mean that some part of the environment isn't going to work
well. Different tools have different strengths. Our management is pushing
for this one tool solution as well, but it's causing some difficulties
bec
> "Ray" == Ray Van Dolson writes:
Ray> On Tue, Oct 27, 2015 at 09:58:12AM -0400, John Stoffel wrote:
Ray>
>> We're going to have the same type of problem down the line too, and
>> I've used CommVault (on FC SAN volumes), a little bit of Veeam, and
>> we're moving to Netbackup with Snapmanag
Thanks Ski. We also recently spoke to a new, young vendor named Rubrik.
It's an interesting product, but our company tends to avoid technology
that's less than 5 years or so in the market (pah! boring! :) ).
-Adam
On Tue, Oct 27, 2015 at 10:40 AM, Ski Kacoroski wrote:
Adam,
I am a much smaller shop, but we really like Unitrends Virtual Backup.
It is like Veeam, was less expensive for us, and just plain works. I
have no idea if it could handle your solution.
cheers,
ski
On 10/27/2015 06:14 AM, Adam Levin wrote:
Hey all, I've got a question about how
On Tue, Oct 27, 2015 at 09:58:12AM -0400, John Stoffel wrote:
> We're going to have the same type of problem down the line too, and
> I've used CommVault (on FC SAN volumes), a little bit of Veeam, and
> we're moving to Netbackup with Snapmanager on NFS datastores.
Out of curiosity, what ma
At the moment we are using the native tools too. We are looking into doing
the *really important* MS SQL systems with our Actifio appliance, but so
far we haven't done enough testing for me to form an opinion, and it's
certainly not a cheap solution. :)
-Adam
Curious what your approach with the SQL VM's is. We've struggled
getting this right w/ CommVault. Backups are consistently fast, but
restore speeds can vary *wildly* (whereas other datasets don't seem to
have this variability). Our DBA's are preferring to stick with the
native SQL backup/restore
Thanks, this is what we were thinking of doing, so it's good to hear it's
working for others. I'm hearing from vendors that 50 VM's per datastore is
a good number. We were hoping for larger datastores (most of our VMs are
Windows <50GB or Linux <100GB), but that would probably end up with too
many VMs per datastore.
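A quick back-of-the-envelope check on those numbers, using the 2500 VMs Adam
mentions elsewhere in the thread and the guest sizes above (illustrative
arithmetic, not a recommendation):

    # Rough sizing: 50 VMs per datastore at ~100GB worst case per guest lands
    # right around the ~5TB FlexVol size mentioned in the next message.
    total_vms = 2500          # figure quoted elsewhere in the thread
    vms_per_datastore = 50    # vendor guidance quoted above
    max_guest_gb = 100        # Linux guests <100GB; Windows <50GB

    datastores = total_vms / vms_per_datastore                      # 50 datastores
    datastore_size_tb = vms_per_datastore * max_guest_gb / 1000.0   # ~5 TB

    print(f"{datastores:.0f} datastores, ~{datastore_size_tb:.0f} TB each at worst case")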
Yes, we still do quiesce the VM's -- but perhaps avoid some of the
issues you've seen by having more numerous, smaller FlexVol datastores
(usually around 5TB max). We had to work with our Compute team to kind
of re-work things to accommodate backup workflows to minimize
disruption.
I'd guess ther
We use CommVault + NetApp + FlexClone (Intellisnap). As long as you
schedule things so you only end up taking one FlexClone per volume per
cycle, it works quite well. The time to roll back the VMware level
snapshots becomes very small.
Ray
Interesting. How is that different from just taking regular snapshots?
Don't you still have to quiesce the VMs before taking the flexclone? How
many VMs per volume do you have?
-Adam
On Tue, Oct 27, 2015 at 9:32 AM, Ray Van Dolson wrote:
> We use CommVault + NetApp + FlexClone (Intellisnap).
Right now at $work we use a combination of Veeam weeklies, because
Veeam takes a week to do all of our backups, and ZFS snapshots for
nightlies. Getting onto Nexenta's SAN product with ZFS helped fix a
lot of small issues we had around backups and SAN migration.
--
Jason Barbier | E: jab...@se
Thanks, Matt.
The challenge right now isn't so much on the NetApp side but on the VMWare
side.
Typical sequence of events:
1) get list of VMs on datastore X
2) quiesce all VMs on datastore X
3) snapshot datastore X via NetApp mechanism
4) un-quiesce all VMs on datastore X
What happens is that st
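For anyone who wants to see what that sequence looks like in code, here is a rough
sketch using pyVmomi (an assumption -- the thread doesn't say what tooling is in
use; the vCenter name, credentials and the NetApp call are placeholders):

    # Rough sketch of steps 1-4 above with pyVmomi. Names, credentials and the
    # NetApp call are placeholders; error handling and SSL verification omitted.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def netapp_snapshot(volume_name):
        # Placeholder for the NetApp-side volume snapshot (e.g. via the ONTAP API).
        pass

    si = SmartConnect(host="vcenter.example.com", user="backup", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        # 1) find datastore X and the powered-on VMs that live on it
        ds = next(d for dc in content.rootFolder.childEntity
                  if isinstance(dc, vim.Datacenter)
                  for d in dc.datastore if d.name == "datastore_X")
        vms = [vm for vm in ds.vm
               if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn]

        # 2) quiesce: take a quiesced, memory-less VMware snapshot of each VM
        for vm in vms:
            WaitForTask(vm.CreateSnapshot_Task(name="backup", description="pre-netapp",
                                               memory=False, quiesce=True))

        # 3) snapshot the underlying volume on the NetApp side
        netapp_snapshot("vol_datastore_X")

        # 4) un-quiesce: delete the VMware snapshots again
        for vm in vms:
            for snap in vm.snapshot.rootSnapshotList:
                if snap.name == "backup":
                    WaitForTask(snap.snapshot.RemoveSnapshot_Task(removeChildren=False))
    finally:
        Disconnect(si)

The VMware-level snapshots only need to exist for the moment the NetApp snapshot is
cut, which is why step 4 follows immediately.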
When on a NetApp, I've seen most people use the NetApp VMware connector for
snapvaulting. I don't know how it operates at that scale, but I imagine it
could scale out. You have 75 datastores... I just don't know what would be
required to make it performant at that extreme.
The NetApp dude at our c