Sounds good
I can:
1) sudo apt-get remove --purge cloudstack-agent
2) sudo apt-get clean
3) Switch to 4.2 branch
4) mvn -P developer,systemvm clean install
5) mvn -P developer -pl developer,tools/devcloud -Ddeploydb
6) Regenerate DEBs and install them
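For step 6, something along these lines should work (a rough sketch based on the 4.2 build docs; package names/versions may differ on your box):

cd /path/to/cloudstack   # the 4.2 source tree
dpkg-buildpackage -uc -us
cd ..
sudo dpkg -i cloudstack-common_4.2*.deb cloudstack-agent_4.2*.deb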
On Wed, Sep 25, 2013 at 6:13 PM, Marcus Sorensen wrote:
You'll need to either remove the old debs or force install the new
ones. Also, if any jars have moved location, you may have to delete
the old ones in case they end up in the classpath of your jsvc
command. I'd first try to generate 4.2 debs (or use the release
artifacts), remove the old packages,
By simply switching to 4.2, will CS use the proper version of Libvirt or is
there more I need to do since I've already run 4.3 on this Ubuntu install?
Thanks
On Wed, Sep 25, 2013 at 6:07 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
It's been a bit rough getting this up and running, but at least I've been
learning about how CloudStack works on KVM, so that's really good.
On Wed, Sep 25, 2013 at 6:03 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
I mean switch over to 4.2 from master. :)
On Wed, Sep 25, 2013 at 6:03 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
I can switch my branch over to master. I'm afraid master is not working
with Libvirt on Ubuntu, as well.
On Wed, Sep 25, 2013 at 5:55 PM, Marcus Sorensen wrote:
It's harder still that you're trying to use master. I know 4.2 works
on ubuntu, but master is a minefield sometimes. Maybe that's not the
problem, but I do see emails going back and forth about libvirt/jna
versions, just need to read them in detail.
It's a shame that you haven't gotten a working
ok, just a guess. I'm assuming it's still this:
Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
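If it is still the jna mismatch, it may help to check which jna jars the agent can see; a rough check (paths are the usual Ubuntu/CloudStack locations, adjust as needed):

dpkg -L libjna-java | grep '\.jar$'
find /usr/share/java /usr/share/cloudstack-agent/lib -name 'jna*.jar' 2>/dev/null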
On Wed, Sep 25, 2013 at 5:48 PM, Mike Tutkowski wrote:
mtutkowski@ubuntu:~$ sudo apt-get install libjna-java
Reading package lists... Done
Building dependency tree
Reading state information... Done
libjna-java is already the newest version.
libjna-java set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 468 not upgraded.
Again, not so familiar with Ubuntu. I'd imagine that jna would be set
up as a dependency to the .deb packages.
sudo apt-get install libjna-java
On Wed, Sep 25, 2013 at 5:46 PM, Mike Tutkowski wrote:
Was there a step in the docs I may have missed where I was to install them?
I don't recall installing them, but there are several steps and I might
have forgotten that I did install them, too.
I can check.
On Wed, Sep 25, 2013 at 5:44 PM, Marcus Sorensen wrote:
are you missing the jna packages?
On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski wrote:
I basically just leveraged the code you provided to redirect the output on
Ubuntu.
Here is the standard err:
log4j:WARN No appenders could be found for logger
(org.apache.commons.httpclient.params.DefaultHttpParams).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://lo
Sounds good.
Thanks, Marcus! :)
On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen wrote:
In the past, prior to the addition to the CentOS init script that I
mentioned, I'd modify the init script to echo out the jsvc command it
was going to run, then I'd run that manually instead of the init. Then
I could see where it died.
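On Ubuntu, a similar trick is to trace the init script so the full jsvc invocation gets printed (assuming a sysvinit-style script at /etc/init.d/cloudstack-agent):

sudo sh -x /etc/init.d/cloudstack-agent start 2>&1 | grep jsvc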
On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen wrote:
Ok, so the next step is to track that stdout and see if you can see
what jsvc complains about when it fails to start up the service.
On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski wrote:
These also look good:
mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
x86_64
mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c qemu:///system list
Id Name State
--
mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
/var/run/libvirt/libvirt-sock
This is my new agent.properties file (with comments removed...looks decent):
guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
workers=5
host=192.168.233.1
port=8250
cluster=1
pod=1
zone=1
local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec
So you:
1. run that command
2. get a brand new agent.properties as a result
3. start the service
but you don't see it in the process table?
The agent's STDOUT doesn't go to the agent log, only log4j stuff. So
if there were an error not printed via logger you'd not see it. I'm
not as familiar wi
These results look good:
mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m 192.168.233.1 -z 1 -p 1
-c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
--prvNic=cloudbr0 --guestNic=cloudbr0
Starting to configure your system:
Configure Apparmor ...[OK]
Configure Network ...
This is what a fresh agent.properties file looks like on my system.
I expect if I try to add it to a cluster, the empty, localhost, and default
values below should be filled in.
I plan to try to add it to a cluster in a bit.
# The GUID to identify the agent with, this is mandatory!
# Generate wi
I still haven't seen your agent.properties. This would tell me if your
setup succeeded. At this point my best guess is that
"cloudstack-setup-agent -m 192.168.233.1 -z 1 -p 1 -c 1 -g
6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
--prvNic=cloudbr0 --guestNic=cloudbr0" failed in some fa
I've been narrowing it down by putting in a million print-to-log statements.
Do you know if it is a problem that value ends up null (in a constructor
for Agent)?
String value = _shell.getPersistentProperty(getResourceName(), "id");
In that same constructor, this line never finishes:
if (!_resou
It might be a CentOS-specific thing. These are created by the init scripts.
Check your agent init script on Ubuntu and see if you can decipher where
it sends stdout.
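A quick way to look for that (again assuming the script is /etc/init.d/cloudstack-agent):

grep -nE 'jsvc|outfile|errfile|\.out' /etc/init.d/cloudstack-agent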
On Sep 23, 2013 5:21 PM, "Mike Tutkowski" wrote:
Weird...no such file exists.
On Mon, Sep 23, 2013 at 4:54 PM, Marcus Sorensen wrote:
maybe cloudstack-agent.out
On Mon, Sep 23, 2013 at 4:44 PM, Mike Tutkowski wrote:
OK, so, nothing is screaming out in the logs. I did notice the following:
From setup.log:
DEBUG:root:execute:apparmor_status |grep libvirt
DEBUG:root:Failed to execute:
DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
DEBUG:root:Failed to execute: * could not access PID file
Thanks, Marcus
I've been developing on Windows for most of my time, so a bunch of these
Linux-type commands are new to me and I don't always interpret the output
correctly. Getting there. :)
On Mon, Sep 23, 2013 at 2:37 PM, Marcus Sorensen wrote:
Nope, not running. That's just your grep process. It would look like:
root 24429 24428 1 14:25 ? 00:00:08 jsvc.exec -cp
/usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/
Looks like it's running, though:
mtutkowski@ubuntu:~$ ps -ef | grep jsvc
1000 7097 7013 0 14:32 pts/1 00:00:00 grep --color=auto jsvc
On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
Hey Marcus,
Maybe you could give me a better idea of what the "flow" is when adding a
KVM host.
It looks like we SSH into the potential KVM host and execute a startup
script (giving it necessary info about the cloud and the management server
it should talk to).
After this, is the Java VM started
Well, if you've swapped out all of the INFO to DEBUG in
/etc/cloudstack/agent/log4j-cloud.xml and restarted the agent, the
agent will either spew messages about being unable to connect to the
mgmt server, or crash, or run just fine (in which case you have no
problem). The logs in debug should tell
Hey Marcus,
I've been investigating my issue with not being able to add a KVM host to
CS.
For what it's worth, this comes back successful:
SSHCmdHelper.sshExecuteCmd(sshConnection, "cloudstack-setup-agent " +
parameters, 3);
This is what the command looks like:
cloudstack-setup-agent -m 192.1
First step is for me to get this working for KVM, though. :)
Once I do that, I can perhaps make modifications to the storage framework
and hypervisor plug-ins to refactor the logic and such.
On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
Same would work for KVM.
If CreateCommand and DestroyCommand were called at the appropriate times by
the storage framework, I could move my connect and disconnect logic out of
the attach/detach logic.
On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
Conversely, if the storage framework called the DestroyCommand for managed
storage after the DetachCommand, then I could have had my remove
SR/datastore logic placed in the DestroyCommand handling rather than in the
DetachCommand handling.
On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
Edison's plug-in calls the CreateCommand. Mine does not.
The initial approach that was discussed during 4.2 was for me to modify the
attach/detach logic only in the XenServer and VMware hypervisor plug-ins.
Now that I think about it more, though, I kind of would have liked to have
the storage fra
My code does not yet support copying from a template.
Edison's default plug-in does, though (I believe):
CloudStackPrimaryDataStoreProviderImpl
Adding a connectPhysicalDisk method sounds good.
I probably should add a disconnectPhysicalDisk method, as well, and not use
the deletePhysicalDisk method.
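Roughly, the additions to StorageAdaptor could look like this (illustrative signatures only, not necessarily what will land in the codebase):

boolean connectPhysicalDisk(String volumeUuid, KVMStoragePool pool);

boolean disconnectPhysicalDisk(String volumeUuid, KVMStoragePool pool);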
Yeah, I think it probably is as well, but I figured you'd be in a
better position to tell.
I see that copyAsync is unsupported in your current 4.2 driver, does
that mean that there's no template support? Or is it some other call
that does templating now? I'm still getting up to speed on all of the
That's an interesting comment, Marcus.
It was my intent that it should work with any CloudStack "managed" storage
that uses an iSCSI target. Even though I'm using CHAP, I wrote the code so
CHAP didn't have to be used.
As I'm doing my testing, I can try to think about whether it is generic
enough
I added a comment to your diff. In general I think it looks good,
though I obviously can't vouch for whether or not it will work. One
thing I do have reservations about is the adaptor/pool naming. If you
think the code is generic enough that it will work for anyone who does
an iscsi LUN-per-volume
Great - thanks!
Just to give you an overview of what my code does (for when you get a
chance to review it):
SolidFireHostListener is registered in SolidfirePrimaryDataStoreProvider.
Its hostConnect method is invoked when a host connects with the CS MS. If
the host is running KVM, the listener sen
It's the log4j properties file in /etc/cloudstack/agent; change all INFO to
DEBUG. I imagine the agent just isn't starting, you can tail the log when
you try to start the service, or maybe it will spit something out into one
of the other files in /var/log/cloudstack/agent
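Roughly (a sketch; back up the file first):

sudo sed -i.bak 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
sudo service cloudstack-agent restart
tail -f /var/log/cloudstack/agent/agent.log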
On Sep 21, 2013 5:19 PM, "Mike Tutkowski" wrote:
This is how I've been trying to query for the status of the service (I
assume it could be started this way, as well, by changing "status" to
"start" or "restart"?):
mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
cloudstack-agent status
I get this back:
Failed to execute: * could
OK, will check it out in the next few days. As mentioned, you can set up
your Ubuntu vm as the management server as well if all else fails. If you
can get to the mgmt server on 8250 from the KVM host, then you need to
enable debug on the agent. It won't run without complaining loudly if it
can't g
Hey Marcus,
I haven't yet been able to test my new code, but I thought you would be a
good person to ask to review it:
https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
All it is supposed to do is attach and detach a data disk (that has
guaran
When I re-deployed the DEBs, I didn't remove cloudstack-agent first. Would
that be a problem? I just did a sudo apt-get install cloudstack-agent.
On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
I get the same error running the command manually:
mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
cloudstack-agent status
* could not access PID file for cloudstack-agent
On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
agent.log looks OK to me:
2013-09-20 19:35:39,010 INFO [cloud.agent.AgentShell] (main:null) Agent
started
2013-09-20 19:35:39,011 INFO [cloud.agent.AgentShell] (main:null)
Implementation Version is 4.3.0-SNAPSHOT
2013-09-20 19:35:39,015 INFO [cloud.agent.AgentShell] (main:null)
agent.properties
Sorry, I saw that in the log, I thought it was the agent log for some
reason. Is the agent started? That might be the place to look. There is an
agent log for the agent and one for the setup when it adds the host, both
in /var/log
On Sep 20, 2013 10:42 PM, "Mike Tutkowski" wrote:
Is it saying that the MS is at the IP address or the KVM host?
The KVM host is at 192.168.233.10.
The MS host is at 192.168.233.1.
I see this for my host Global Settings parameter:
host | The ip address of management server | 192.168.233.1
/etc/cloudstack/agent/agent.properties has a host=192.168.233
The log says your mgmt server is 192.168.233.10? But you tried to telnet to
192.168.233.1? It might be enough to change that in
/etc/cloudstack/agent/agent.properties, but you may want to edit the config
as well to tell it the real ms IP.
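If the MS really is at 192.168.233.1 (as in your other mails), a quick sketch of that fix:

sudo sed -i 's/^host=.*/host=192.168.233.1/' /etc/cloudstack/agent/agent.properties
sudo service cloudstack-agent restart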
On Sep 20, 2013 10:12 PM, "Mike Tutkowski" wrote:
Here's what my /etc/network/interfaces file looks like, if that is of
interest (the 192.168.233.0 network is the NAT network VMware Fusion set
up):
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto cloudbr0
iface cloudbr0 inet static
address 192.168.233.10
netmask 255.
You appear to be correct. This is from the MS log (below). Discovery timed
out.
I'm not sure why this would be. My network settings shouldn't have changed
since the last time I tried this.
I am able to ping the KVM host from the MS host and vice versa.
I'm even able to manually kick off a VM on
I'm surprised there's no mention of pool on the SAN in your description of
the framework. I had assumed this was specific to your implementation,
because normally SANs host multiple disk pools, maybe multiple RAID 50s and
10s, or however the SAN admin wants to split it up. Maybe a pool intended
for
I see where you're coming from.
John Burwell and I took a different approach for this kind of storage.
If you want to add capacity and/or IOPS to primary storage that's based on
my plug-in, you invoke the updateStoragePool API command and pass in the
new capacity and/or IOPS.
Your existing volum
OK. Most other storage types interrogate the storage for the
capacity, whether directly or through the hypervisor. This makes it dynamic
(user could add capacity and CloudStack notices), and provides accurate
accounting for things like thin provisioning. I would be surprised if Edison
didn't allow for t
For what it's worth, OpenStack is quite a bit different.
All storage volumes are dynamically created (like what was enabled in 4.2)
and these volumes are directly attached to VMs (without going through the
hypervisor).
Since we go through the hypervisor, to enable a 1:1 mapping between a CS
volum
What you're saying here is definitely something we should talk about.
Hopefully my previous e-mail has clarified how this works a bit.
It mainly comes down to this:
For the first time in CS history, primary storage is no longer required to
be preallocated by the admin and then handed to CS. CS v
This should answer your question, I believe:
* When you add primary storage that is based on the SolidFire plug-in, you
specify info like host, port, number of bytes from the SAN that CS can use,
number of IOPS from the SAN that CS can use, among other info.
* When a volume is attached for the fi
I guess whether or not a SolidFire device is capable of hosting
multiple disk pools is irrelevant; we'd hope that we could get the
stats (maybe 30TB available, and 15TB allocated in LUNs). But if these
stats aren't collected, I can't as an admin define multiple pools and
expect cloudstack to alloca
Ok, on most storage pools it shows how many GB free/used when listing
the pool both via API and in the UI. I'm guessing those are empty then
for the solid fire storage, but it seems like the user should have to
define some sort of pool that the luns get carved out of, and you
should be able to get
Yeah, I should have clarified what I was referring to.
As you mentioned in your last sentence, I was just talking about on the
hypervisor side (responding to attach and detach commands).
On the storage side, the storage framework invokes my plug-in when it needs
a volume created, deleted, etc. By
I think the way people bill for this kind of storage is simply by seeing
how many volumes are in use for a given CS account and tracing a volume
back to the Disk Offering it was created from, which contains info about
guaranteed IOPS.
I am not aware of what stats may be collected for this for XenS
You respond to more than attach and detach, right? Don't you create luns as
well? Or are you just referring to the hypervisor stuff?
OK, if you log in per lun, then just saving the info for future reference
is fine.
Does CS provide storage stats at all, then, for other platforms?
On Sep 17, 2013 8:01 PM, "Mike Tutkowski" wrote:
Woops...I named this incorrectly:
_mapUuidToAdaptor.put(name, storagePool);
should be
_mapUuidToPool.put(name, storagePool);
On Tue, Sep 17, 2013 at 8:01 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:
Plus, when you log in to a LUN, you need the CHAP info and this info is
required for each LUN (as opposed to being for the SAN).
This is how my createStoragePool currently looks, so I think we're on the
same page.
public KVMStoragePool createStoragePool(String name, String host, int port,
String
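For anyone following along, the rest of that method is roughly this shape (the pool class name below is a placeholder, not the actual plug-in code; the map is the _mapUuidToPool mentioned earlier):

public KVMStoragePool createStoragePool(String name, String host, int port,
        String path, String userInfo, StoragePoolType type) {
    // Just record the SAN details; individual LUNs are logged in to later,
    // per volume, when the attach logic runs.
    KVMStoragePool storagePool = new MyIscsiStoragePool(name, host, port, this);
    _mapUuidToPool.put(name, storagePool);
    return storagePool;
}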
"I imagine the user enter the SAN details when
registering the pool?"
When primary storage is added to CS that is based on the SolidFire plug-in,
these details (host, port, etc.) are provided. The primary storage then
represents the SAN and not a preallocated volume (i.e. not a particular
LUN).
Hi Marcus,
I never need to respond to a CreateStoragePool call for either XenServer or
VMware.
What happens is I respond only to the Attach- and Detach-volume commands.
Let's say an attach comes in:
In this case, I check to see if the storage is "managed." Talking XenServer
here, if it is, I lo
What do you do with Xen? I imagine the user enters the SAN details when
registering the pool? And the pool details are basically just instructions on
how to log into a target, correct?
You can choose to log in a KVM host to the target during createStoragePool
and save the pool in a map, or just save
Hey Marcus,
I'm reviewing your e-mails as I implement the necessary methods in new
classes.
"So, referencing StorageAdaptor.java, createStoragePool accepts all of
the pool data (host, port, name, path) which would be used to log the
host into the initiator."
Can you tell me, in my case, since a
Well, you'd use neither of the two pool types, because you are not letting
libvirt handle the pool, you are doing it with your own pool and adaptor
class. Libvirt will be unaware of everything but the disk XML you attach to
a VM. You'd only use those if libvirt's functions were advantageous, i.e. if
That's right
On Sep 16, 2013 12:31 PM, "Mike Tutkowski" wrote:
I understand what you're saying now, Marcus.
I wasn't sure if the Libvirt iSCSI Storage Pool was still an option
(looking into that still), but I see what you mean: If it is, we don't need
a new adaptor; otherwise, we do.
If Libvirt's iSCSI Storage Pool does work, I could update the current
adap
Hey Marcus,
Thanks for that clarification.
Sorry if this is a redundant question:
When the AttachVolumeCommand comes in, it sounds like we thought the best
approach would be for me to discover and log in to the iSCSI target using
iscsiadm.
This will create a new device: /dev/sdX.
We would then
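For reference, the discover/login sequence would look roughly like this (portal address and IQN are placeholders):

sudo iscsiadm -m discovery -t sendtargets -p <san-ip>:3260
sudo iscsiadm -m node -T <target-iqn> -p <san-ip>:3260 --login
ls -l /dev/disk/by-path/ | grep <target-iqn>   # one way to see which /dev/sdX appeared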
Thanks, Marcus
About this:
"When the agent connects to the
management server, it registers all pools in the cluster with the
agent."
So, my plug-in allows you to create zone-wide primary storage. This just
means that any cluster can use the SAN (the SAN was registered as primary
storage as oppos
It will still register the pool. You still have a primary storage
pool that you registered, whether it's local, cluster or zone wide.
NFS is optionally zone wide as well (I'm assuming customers can launch
your storage only cluster-wide if they choose for resource
partitioning), but it registers th
Yes, see my previous email from the 13th. You can create your own
KVMStoragePool class, and StorageAdaptor class, like the libvirt ones
have. The previous email outlines how to add your own StorageAdaptor
alongside LibvirtStorageAdaptor to take over all of the calls
(createStoragePool, getStoragePo
I see right now LibvirtComputingResource.java has the following method that
I might be able to leverage (it's probably not called at present and would
need to be implemented in my case to discover my iSCSI target and log in to
it):
protected Answer execute(CreateStoragePoolCommand cmd) {
Hey Marcus,
When I implemented support in the XenServer and VMware plug-ins for
"managed" storage, I started at the execute(AttachVolumeCommand) methods in
both plug-ins.
The code there was changed to check the AttachVolumeCommand instance for a
"managed" property.
If managed was false, the norm
Yeah, I remember that StorageProcessor stuff being put in the codebase and
having to merge my code into it in 4.2.
Thanks for all the details, Marcus! :)
I can start digging into what you were talking about now.
On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen wrote:
It looks like this KVMStorageProcessor is meant to handle
StorageSubSystemCommand commands. Probably to handle the new storage
framework for things that are now triggered via the mgmt server's
storage stuff.
On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen wrote:
Looks like things might be slightly different now in 4.2, with
KVMStorageProcessor.java in the mix. This looks more or less like some
of the commands were ripped out verbatim from LibvirtComputingResource
and placed here, so in general what I've said is probably still true,
just that the location of
Ok, KVM will be close to that, of course, because only the hypervisor
classes differ, the rest is all mgmt server. Creating a volume is just
a db entry until it's deployed for the first time. AttachVolumeCommand
on the agent side (LibvirtStorageAdaptor.java is analogous to
CitrixResourceBase.java)
OK, yeah, the ACL part will be interesting. That is a bit different from
how it works with XenServer and VMware.
Just to give you an idea how it works in 4.2 with XenServer:
* The user creates a CS volume (this is just recorded in the cloud.volumes
table).
* The user attaches the volume as a dis
Perfect. You'll have a domain def (the VM), a disk def, and then attach the
disk def to the VM. You may need to do your own StorageAdaptor and run
iscsiadm commands to accomplish that, depending on how the libvirt iscsi
works. My impression is that a 1:1:1 pool/lun/volume isn't how it works on
xen
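For what it's worth, the disk def for a raw block device is small; something like this (the source dev path is a placeholder for whatever /dev/sdX the iscsiadm login produced):

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
</disk>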
Yeah, that would be ideal.
So, I would still need to discover the iSCSI target, log in to it, then
figure out what /dev/sdX was created as a result (and leave it as is - do
not format it with any file system...clustered or not). I would pass that
device into the VM.
Kind of accurate?
If you wire up the block device you won't have to require users to manage a
clustered filesystem or LVM, or take on all of the work of maintaining those
clustered services and quorum management; CloudStack will ensure only one
vm is using the disks at any given time and where. It would be cake
compared to
Look in LibvirtVMDef.java (I think) for the disk definitions. There are
ones that work for block devices rather than files. You can piggy back off
of the existing disk definitions and attach it to the vm as a block device.
The definition is an XML string per libvirt XML format. You may want to use
Yeah, I think it would be nice if it supported Live Migration.
That's kind of why I was initially leaning toward SharedMountPoint and just
doing the work ahead of time to get things in a state where the current
code could run with it.
On Fri, Sep 13, 2013 at 8:00 PM, Marcus Sorensen wrote:
No, as that would rely on virtualized network/iscsi initiator inside the
vm, which also sucks. I mean attach /dev/sdx (your lun on hypervisor) as a
disk to the VM, rather than attaching some image file that resides on a
filesystem, mounted on the host, living on a target.
Actually, if you plan on
When you say, "wire up the lun directly to the vm," do you mean
circumventing the hypervisor? I didn't think we could do that in CS.
OpenStack, on the other hand, always circumvents the hypervisor, as far as
I know.
On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen wrote:
Better to wire up the lun directly to the vm unless there is a good reason
not to.
On Sep 13, 2013 7:40 PM, "Marcus Sorensen" wrote:
You could do that, but as mentioned I think it's a mistake to go to the
trouble of creating a 1:1 mapping of CS volumes to luns and then putting a
filesystem on it, mounting it, and then putting a QCOW2 or even RAW disk
image on that filesystem. You'll lose a lot of iops along the way, and have
more
This would require that they put a clustered filesystem on the lun, right?
Seems like it would be better for them to use CLVM and make a volume group
from the LUNs. I'll bet some of your customers are doing that unless they
are explicitly instructed otherwise; that's how others are doing iSCSI or
f
Ah, OK, I didn't know that was such new ground in KVM with CS.
So, the way people use our SAN with KVM and CS today is by selecting
SharedMountPoint and specifying the location of the share.
They can set up their share using Open iSCSI by discovering their iSCSI
target, logging in to it, then mou
Oh, hypervisor snapshots are a bit different. I need to catch up on the
work done in KVM, but this is basically just disk snapshots + memory dump.
I still think disk snapshots would preferably be handled by the SAN, and
then memory dumps can go to secondary storage or something else. This is
relati
Let me back up and say I don't think you'd use a vdi style on an iscsi lun.
I think you'd want to treat it as a RAW format. Otherwise you're putting a
filesystem on your lun, mounting it, creating a QCOW2 disk image, and that
seems unnecessary and a performance killer.
So probably attaching the ra