> From: Brandon Allbery [mailto:allber...@gmail.com]
>
>> On Wed, Oct 28, 2015 at 4:17 PM, Edward Ned Harvey (lopser)
>> wrote:
>> Unless I miss my guess, the discussions you're remembering are *not*
>> filesystem-eats-itself-because-of-power-failure. Every filesystem can
>> become corrupt via hardware failure (CPU or memory errors, etc.)
> From: Adam Levin [mailto:levi...@gmail.com]
>
> It certainly deserves
> a second look as to whether this quiescing stuff is necessary.
FWIW, I don't advise *not* quiescing. At worst, it does no harm, and at best,
it might be important. But I don't do snapshots in vmware - and don't do
quiescing.
> From: tech-boun...@lists.lopsa.org [mailto:tech-boun...@lists.lopsa.org]
> On Behalf Of Steve VanDevender
>
> Database systems still often seem to have this problem, though, and
> doing filesystem-level backups of systems with running databases will
> often get inconsistent database state.
You
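The usual workaround for the database problem quoted above is to quiesce the
database for the few seconds the snapshot takes. A rough sketch of that
pattern, assuming a MySQL/MariaDB server, credentials already set up in
~/.my.cnf, and a placeholder where the real storage or VM snapshot call would
go:

    # Sketch only: FLUSH TABLES WITH READ LOCK is released the moment the
    # client session ends, so the same session has to stay open across the
    # snapshot.
    import subprocess, time

    mysql = subprocess.Popen(["mysql"], stdin=subprocess.PIPE, text=True)
    mysql.stdin.write("FLUSH TABLES WITH READ LOCK;\n")
    mysql.stdin.flush()
    time.sleep(2)        # crude; a real script would confirm the lock landed

    # storage/VM snapshot goes here (ZFS, the array, vSphere, ...)
    subprocess.run(["true"], check=True)   # stand-in so the sketch runs

    mysql.stdin.write("UNLOCK TABLES;\n")
    mysql.stdin.close()
    mysql.wait()

Transactional databases also recover from their own write-ahead logs on
restore, so some shops skip the lock entirely and accept a crash-consistent
copy instead.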
Yeah, I'm not sure there's a great answer to this. Even just choosing
random blocks of public IPs can get you into trouble if the other company
has guys that think just like you. :)
-Adam
On Wed, Oct 28, 2015 at 4:41 PM, David Lang wrote:
> On Tue, 27 Oct 2015, John Stoffel wrote:
>
> And using public IP spaces... really dumb outside a lab environment.
Adam Levin writes:
> This is a very interesting discussion for me, and probably warrants
> some more research and testing. I readily admit that I've always
> worked under the operating assumption that pulling the plug *could*
> lead to corruption, even after "upgrading" from ufs to xfs those many years ago.
On Tue, 27 Oct 2015, John Stoffel wrote:
And using public IP spaces... really dumb outside a lab environment.
I mean how hard is it to use 10.x.x.x for everything these days?
that depends, how hard is it to change your IPs when you merge with someone else
who is already using the 10.x.x.x and
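Whatever the merger scenario, checking whether two address plans collide is
easy to script up front. A small sketch using Python's standard ipaddress
module; the networks here are invented purely for illustration:

    import ipaddress

    ours = ipaddress.ip_network("10.20.0.0/16")      # our allocation
    theirs = ipaddress.ip_network("10.20.128.0/18")  # the other company's

    if ours.overlaps(theirs):
        print("Address plans collide; somebody is renumbering.")
    else:
        print("No overlap, at least numerically.")

It doesn't settle the political question of who renumbers, but it makes the
collision visible before the VPN between the two companies comes up.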
On Wed, Oct 28, 2015 at 4:17 PM, Edward Ned Harvey (lopser) <
lop...@nedharvey.com> wrote:
> Unless I miss my guess, the discussions you're remembering are *not*
> filesystem-eats-itself-because-of-power-failure. Every filesystem can
> become corrupt via hardware failure (CPU or memory errors, etc.)
This is a very interesting discussion for me, and probably warrants some
more research and testing. I readily admit that I've always worked under
the operating assumption that pulling the plug *could* lead to corruption,
even after "upgrading" from ufs to xfs those many years ago. It certainly
deserves a second look as to whether this quiescing stuff is necessary.
> From: Brandon Allbery [mailto:allber...@gmail.com]
>
> Mostly discussion/"help plz!" in #macports IRC. It's not especially common
> but there've been enough (3-4) instances to make me wary of relying on it.
>
> xfs has been known to eat itself under some circumstances as well; that one
> has be
Mario, for the use case in question, the one step migration is to burn
updates to optical media, both so that they have a record of the
transfer, and so they don't use re-writable media.
I believe that the approval process will involve updating reference
non-air-gapped host(s) to prove the updates
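For the sync-and-burn step described above, a common starting point is
reposync plus createrepo on a connected staging host. A loose sketch; the
repo id and paths are placeholders, and the exact flags differ between the
yum-utils and dnf versions of reposync:

    import subprocess

    STAGING = "/srv/airgap/rhel7"
    REPO = "rhel-7-server-rpms"

    # Mirror the repo onto the connected staging box.
    subprocess.run(["reposync", f"--repoid={REPO}",
                    "--download_path", STAGING,
                    "--downloadcomps", "--download-metadata"], check=True)

    # Rebuild metadata so the copy works as a standalone yum repo.
    subprocess.run(["createrepo", f"{STAGING}/{REPO}"], check=True)

    # From here the tree gets mastered onto optical media, carried across
    # the gap, then mounted and referenced from a .repo file on the
    # isolated hosts.

Satellite adds content views, errata tracking, and host registration on top
of that, which is where the cost/complexity tradeoff comes in.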
I am interested in chatting off-list with anyone who has deployed an
air-gapped Red Hat Satellite server. A unit at $WORK has a need to
update RHEL boxes in their air-gap systems, and they're looking for
information on the most straight forward way to do so.
If there's a simpler / cheaper means th
At a previous employer we used Avamar for this and I recall it working well. I
didn't operate it myself, but restores and clones I requested from backups
always came out just as expected.
I believe the product is now owned by EMC.
--
Brad Beyenhof . . . . . . . . . . . . . . . . http://augment
> From: Adam Levin [mailto:levi...@gmail.com]
>
> VMWare Tools allows VMWare to tell
> the VM, through VSS, to quiesce, and then VMWare can take its snapshot --
> it knows to quiesce when it takes its own snapshot. Once that snapshot
> exists, it's 100% safe
Actually, this is incorrect.
In ord
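Whichever way that disagreement resolves, the quiesce request itself is just
a flag on the snapshot call. A sketch using pyVmomi (my choice for
illustration, not something the posters above mention); the vCenter host,
credentials, and VM name are all placeholders:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only; verify certs in prod
    si = SmartConnect(host="vcenter.example.com", user="svc_backup",
                      pwd="********", sslContext=ctx)

    # Naive inventory walk to find the VM by name.
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app01")

    # quiesce=True asks VMware Tools (VSS on Windows guests) to flush writers
    # before the snapshot is cut; memory=False keeps it crash-style otherwise.
    WaitForTask(vm.CreateSnapshot_Task(name="pre-backup",
                                       description="quiesced by backup job",
                                       memory=False, quiesce=True))
    Disconnect(si)

If VMware Tools isn't running in the guest, the quiesced snapshot will
generally fail rather than silently fall back, so it's worth checking tools
status first.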
> From: Brandon Allbery [mailto:allber...@gmail.com]
>
> And in general, relying on being able to walk away from a bad landing just
> seems like an open invitation for things to go wrong. *Especially* for
> backups.
I think the right approach is to snapshot and replicate the machines in their
running state.
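If the storage under the VMs is ZFS (as in the zvol setup discussed elsewhere
in the thread), that replication is a send/receive pipeline. A sketch assuming
ZFS on both ends, ssh keys already in place, and made-up dataset and host
names:

    import subprocess

    SRC = "tank/vms/app01@nightly-2015-10-28"
    DEST_HOST = "backup01"
    DEST_DS = "tank/replica/app01"

    # zfs send | ssh remote zfs recv, the classic replication pipe.
    send = subprocess.Popen(["zfs", "send", SRC], stdout=subprocess.PIPE)
    recv = subprocess.Popen(["ssh", DEST_HOST, "zfs", "recv", "-F", DEST_DS],
                            stdin=send.stdout)
    send.stdout.close()    # so recv sees EOF when send finishes
    recv.communicate()

    # Nightly runs would use incremental sends (zfs send -i previous@snap)
    # instead of a full stream each time.

The VM sees none of this; the open question is the same one running through
the thread, namely whether a crash-consistent image of the running disk is
good enough.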
On Wed, Oct 28, 2015 at 9:51 AM, Edward Ned Harvey (lopser) <
lop...@nedharvey.com> wrote:
> Link?
>
> I've never experienced that, and I haven't been able to find any
> supporting information from the hive mind.
>
Mostly discussion/"help plz!" in #macports IRC. It's not especially common
but there've been enough (3-4) instances to make me wary of relying on it.
> From: Brandon Allbery [mailto:allber...@gmail.com]
>
> Sadly HFS+ *is* known to sometimes corrupt itself in unfixable ways on hard
> powerdown.
Link?
I've never experienced that, and I haven't been able to find any supporting
information from the hive mind.
> From: Brandon Allbery [mailto:allber...@gmail.com]
>
> OSes, maybe ("designed to" and "it works" are often not on speaking terms
> with each other). Applications, far too often not so much.
Perhaps "Designed and tested" would be a more compelling way to phrase that? I
know crash consistency testing
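A toy example of what a crash consistency test looks like in practice: a
writer that only counts a record durable after fsync, and a verifier that,
after the plug is pulled, accepts intact records and treats a torn tail as
expected rather than as corruption. This is a sketch of the general idea,
not any particular test suite:

    import os, struct, zlib

    LOG = "consistency.log"

    def append_record(payload: bytes) -> None:
        rec = struct.pack(">II", zlib.crc32(payload), len(payload)) + payload
        with open(LOG, "ab") as f:
            f.write(rec)
            f.flush()
            os.fsync(f.fileno())   # only now may the writer claim durability

    def count_intact_records() -> int:
        with open(LOG, "rb") as f:
            data = f.read()
        good, off = 0, 0
        while off + 8 <= len(data):
            crc, length = struct.unpack(">II", data[off:off + 8])
            payload = data[off + 8:off + 8 + length]
            if len(payload) < length or zlib.crc32(payload) != crc:
                break              # torn tail from the crash: expected
            good, off = good + 1, off + 8 + length
        return good

    if __name__ == "__main__":
        append_record(b"hello")
        print(count_intact_records(), "intact records")

A filesystem that passes this kind of test after a power cut is behaving as
designed; losing un-fsync'd data is not the same thing as eating itself.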
On Wed, Oct 28, 2015 at 9:41 AM, Edward Ned Harvey (lopser) <
lop...@nedharvey.com> wrote:
> Dunno what filesystems or applications you support, but these aren't
> concerns for the *filesystems* ext3/4, btrfs, ntfs, xfs, zfs, hfs+... Which
> is all the filesystems I can think of, in current usage
> From: Adam Levin [mailto:levi...@gmail.com]
>
> I'm not sure I understand exactly what you're doing. Are you using RDMs
> and giving each VM a direct LUN to the storage system, or are you presenting
> datastores via iSCSI? Are you saying you're presenting one datastore per
> VM?
Yeah, iscsi,
On Wed, Oct 28, 2015 at 6:52 AM, Edward Ned Harvey (lopser) <
lop...@nedharvey.com> wrote:
> What I've always done was to make individual zvols in ZFS, and export
> them over iscsi. Then vmware simply uses that "disk" as the disk for the
> VM. Let ZFS do snapshotting, and don't worry about vmware
I'm not sure I understand exactly what you're doing. Are you using RDMs
and giving each VM a direct LUN to the storage system, or are you
presenting datastores via iSCSI? Are you saying you're presenting one
datastore per VM?
Managing RDMs for 2500 VMs is simply impractical, and there's a limit
I'm hearing a lot of people here saying "quiesce" the VM, and how many VMs do
you have per volume... I am surprised by both of these.
What I've always done was to make individual zvols in ZFS, and export them
over iscsi. Then vmware simply uses that "disk" as the disk for the VM. Let ZFS
do snapshotting, and don't worry about vmware.
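For concreteness, the per-VM zvol setup described above is only a couple of
commands. A sketch with made-up pool, dataset, and size values, assuming the
zfs CLI is available (the iSCSI export itself is platform-specific, e.g.
COMSTAR or LIO/targetcli, and left out):

    import subprocess

    def zfs(*args: str) -> None:
        subprocess.run(["zfs", *args], check=True)

    vm = "app01"
    zvol = f"tank/vms/{vm}"

    # One zvol per VM; this block device gets exported over iSCSI and handed
    # to vSphere as that VM's disk.
    zfs("create", "-V", "40G", zvol)

    # Snapshots (and rollback, clone, send/recv) then happen on the storage
    # side, with no vSphere snapshot involved.
    zfs("snapshot", f"{zvol}@nightly-2015-10-28")

Whether to quiesce the guest around that snapshot is exactly the disagreement
running through the rest of the thread.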
As kind of a follow on to my response to the question Tom posted over on
discuss, has anyone ever gone through a formal project to operationalize
Satellite 6 (or, the underlying Puppet and Foreman components) and would
be willing/able to share the plan documentation? I'm talking things
like ti