On Feb 20, 2014, at 1:48 PM, Jimmy Hess <mysi...@gmail.com> wrote:

> The locking restrictions are for your own protection. If the filesystem
> inside your virtual disks is not a clustered filesystem;
> two instances of a VM simultaneously mounting the same NTFS volume and
> writing some things is an absolute disaster.
> 
> 
> Under normal circumstances,  two applications should never be writing to
> the same file.  This is true on clustered filesystems.
> This is true when running multiple applications on a single computer.

Why should two applications never write to the same file? In a real clustered 
*file*system this is exactly what you want: the same logical volume mounted 
across host cluster members, perhaps geodistantly located, each having 
record-level access to the data. This permits HA and lets the application be 
distributed across cluster nodes. If a host node is lost, the application 
stays running. If the physical volume is unavailable, logically shadowing the 
volume across node members or storage controllers / SANs provides fault 
tolerance. You don’t need to “fail disks over” (really logical volumes) 
because they are resilient from the start; they just don’t fail. When the 
shadow members return they replay journals, or resilver if the journals are 
lost. 
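
A rough sketch of what that record-level access looks like from the 
application side, using plain POSIX byte-range locks rather than any 
particular cluster filesystem’s API (the path, record size, and helper names 
here are made up for illustration; on a real clustered filesystem the same 
locks would be coordinated across nodes by the distributed lock manager):

    # Two writers can update different records of the same shared file
    # safely by locking only the byte range they touch.
    import fcntl, os

    RECORD_SIZE = 64
    PATH = "/shared/volume/records.dat"   # hypothetical shared file

    def write_record(fd, record_no, payload):
        offset = record_no * RECORD_SIZE
        # Lock just this record's range, not the whole file, so writers
        # working on other records can proceed in parallel.
        fcntl.lockf(fd, fcntl.LOCK_EX, RECORD_SIZE, offset, os.SEEK_SET)
        try:
            os.pwrite(fd, payload.ljust(RECORD_SIZE, b"\0"), offset)
            os.fsync(fd)               # flush to the (shadowed) volume
        finally:
            fcntl.lockf(fd, fcntl.LOCK_UN, RECORD_SIZE, offset, os.SEEK_SET)

    fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o644)
    write_record(fd, 0, b"node-a update")
    write_record(fd, 7, b"node-b update")
    os.close(fd)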

I’d note that this can be accomplished just so long as you have a common disk 
format across the OS nodes. 

These problems were all resolved 40 years ago in mainframe and supermini 
systems. They’re not new. VMware has been slowly reinventing — more accurately 
rediscovering — well known HA techniques as it’s trying to mature. And it still 
has a lot of catching up to do. It’s the same story microcomputers have been 
repeating for decades as they’ve come into use as servers. 

However I’m not sure what all of this has to do with network operations. ;)



-d 

-----

Dan Shoop
sh...@iwiring.net
1-646-402-5293 (GoogleVoice)




