You could do that, but as mentioned I think it's a mistake to go to the
trouble of creating a 1:1 mapping of CS volumes to LUNs and then putting a
filesystem on it, mounting it, and then putting a QCOW2 or even RAW disk
image on that filesystem. You'll lose a lot of IOPS along the way, and have
more overhead.
Better to wire up the LUN directly to the VM unless there is a good reason
not to.
On Sep 13, 2013 7:40 PM, "Marcus Sorensen" wrote:
When you say, "wire up the lun directly to the vm," do you mean
circumventing the hypervisor? I didn't think we could do that in CS.
OpenStack, on the other hand, always circumvents the hypervisor, as far as
I know.
On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen wrote:
No, as that would rely on a virtualized network/iSCSI initiator inside the
VM, which also sucks. I mean attach /dev/sdx (your LUN on the hypervisor) as
a disk to the VM, rather than attaching some image file that resides on a
filesystem, mounted on the host, living on a target.
Yeah, I think it would be nice if it supported Live Migration.
That's kind of why I was initially leaning toward SharedMountPoint and just
doing the work ahead of time to get things in a state where the current
code could run with it.
On Fri, Sep 13, 2013 at 8:00 PM, Marcus Sorensen wrote:
Look in LibvirtVMDef.java (I think) for the disk definitions. There are
ones that work for block devices rather than files. You can piggyback off
of the existing disk definitions and attach one to the VM as a block device.
The definition is an XML string in libvirt's XML format. You may want to use
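For illustration, here is roughly what the two kinds of disk definitions look like in libvirt's XML (the paths, device names, and IQN are made-up examples, not from this thread):

```xml
<!-- file-backed image: the extra layer being argued against -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt/primary/volume-uuid.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- direct block device: the iSCSI LUN as seen by the host -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-path/ip-192.168.1.100:3260-iscsi-iqn.2013-09.com.example:vol1-lun-0'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```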
If you wire up the block device, you won't have to require users to manage a
clustered filesystem or LVM, along with all of the work of maintaining those
clustered services and quorum management; CloudStack will ensure only one
VM is using the disk at any given time, and where. It would be cake
compared to
Yeah, that would be ideal.
So, I would still need to discover the iSCSI target, log in to it, then
figure out what /dev/sdX was created as a result (and leave it as is - do
not format it with any file system...clustered or not). I would pass that
device into the VM.
Kind of accurate?
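Concretely, those host-side steps would look something like this (the portal address and IQN are made-up examples):

```shell
# Discover targets on the portal, then log in to the one backing the CS volume.
iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260
iscsiadm -m node -T iqn.2013-09.com.example:vol1 -p 192.168.1.100:3260 --login
# Rather than guessing which /dev/sdX appeared, resolve the stable by-path link:
readlink -f /dev/disk/by-path/ip-192.168.1.100:3260-iscsi-iqn.2013-09.com.example:vol1-lun-0
```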
Perfect. You'll have a domain def (the VM), a disk def, and then attach the
disk def to the VM. You may need to write your own StorageAdaptor and run
iscsiadm commands to accomplish that, depending on how the libvirt iSCSI
support works. My impression is that a 1:1:1 pool/lun/volume isn't how it
works on Xen
Yes, this KVP is for the Hyper-V system VM. As with the system VM on
XenServer, the boot args show up in /proc/cmdline.
The boot args will be passed from Hyper-V to the system VM using this KVP,
then read by cloud-early-config to configure the system VM.
Thanks
Rajesh Battala
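cloud-early-config is a shell script, so the parsing described above amounts to splitting /proc/cmdline into KEY=VALUE pairs; a minimal sketch (the boot-arg names and values here are illustrative, not taken from this thread):

```shell
# Normally this would be: cmdline=$(cat /proc/cmdline)
cmdline="template=domP type=secstorage host=10.1.1.1 port=8250"

# Split on whitespace and pick out the KEY=VALUE pairs we care about.
for kv in $cmdline; do
  case "$kv" in
    type=*) vmtype="${kv#type=}" ;;
    host=*) host="${kv#host=}" ;;
  esac
done

echo "$vmtype $host"   # prints: secstorage 10.1.1.1
```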
-----Original Message-----
OK, yeah, the ACL part will be interesting. That is a bit different from
how it works with XenServer and VMware.
Just to give you an idea of how it works in 4.2 with XenServer:
* The user creates a CS volume (this is just recorded in the cloud.volumes
table).
* The user attaches the volume as a disk
OK, KVM will be close to that, of course, because only the hypervisor
classes differ; the rest is all mgmt server. Creating a volume is just
a db entry until it's deployed for the first time. AttachVolumeCommand
is handled on the agent side (LibvirtStorageAdaptor.java is analogous to
CitrixResourceBase.java).
Looks like things might be slightly different now in 4.2, with
KVMStorageProcessor.java in the mix. This looks more or less like some
of the commands were ripped out verbatim from LibvirtComputingResource
and placed here, so in general what I've said is probably still true,
just that the location of
It looks like this KVMStorageProcessor is meant to handle
StorageSubSystemCommand commands. Probably to handle the new storage
framework for things that are now triggered via the mgmt server's
storage stuff.
On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen wrote:
Yeah, I remember that StorageProcessor stuff being put in the codebase and
having to merge my code into it in 4.2.
Thanks for all the details, Marcus! :)
I can start digging into what you were talking about now.
On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen wrote: