On 09/17/2013 08:35 PM, Mike Tutkowski wrote:
I'm not familiar with how we package these binding classes in CloudStack.
Is there a new JAR I need to download or source code?
Sorry, forgot this one! Nothing to do on your side. Maven will take care
of this.
The RPM and DEB packaging will a
Hey Wido,
Did they publish the updated jar with the same version number? If that is the
case maven will not take care of it as by definition release artefacts will be
cached indefinitely and never be replaced. Only snapshot dependencies will be
updated.
Cheers,
Hugo
On Sep 21, 2013, at 3:0
On 09/21/2013 09:15 AM, Hugo Trippaers wrote:
Hey Wido,
Did they publish the updated jar with the same version number? If that is the
case maven will not take care of it as by definition release artefacts will be
cached indefinitely and never be replaced. Only snapshot dependencies will be
Ahh ok.
Any idea how long that is going to take? At the moment our automated builds are
basically non-functional, as they all fail to build and thus also don't kick off
the downstream builds like the noredist build. Can we revert to an older
version of libvirt in the meantime to work around this i
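As a sanity check, this is a quick way to see which version of the bindings the
build actually resolves (a sketch, assuming the groupId is org.libvirt):

# Print the dependency tree, filtered to the libvirt-java binding
mvn dependency:tree -Dincludes=org.libvirt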
jenkins.buildacloud.org
On Fri, Sep 20, 2013 at 1:03 PM, Rayees Namathponnan
wrote:
>
Hi,
Is it possible to get Jenkins to automatically run a build with patches
uploaded to Review Board? It would be especially useful to make sure we do
not break something in the non-OSS projects or other projects that need some
special settings and/or do not build by default.
Thank you,
Laszlo
On
Hey Laszlo,
We have that already :-)
On Jenkins.buildacloud.org you can find the job
http://jenkins.buildacloud.org/view/management/job/mgmt-build-reviewboard-requests/
This job will regularly check Review Board for new reviews and use the
Jenkins patch plugin to build master with that patch.
Cool, then I only have to learn how to use it :-)
On Sat, Sep 21, 2013 at 5:54 PM, Hugo Trippaers wrote:
> Hey Laszlo,
>
> We have that already :-)
>
> On Jenkins.buildacloud.org you can find the job
> http://jenkins.buildacloud.org/view/management/job/mgmt-build-reviewboard-requests/
>
> This
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14281/
---
Review request for cloudstack.
Repository: cloudstack-git
Description
---
> On Sep 20, 2013, at 1:27 PM, Animesh Chaturvedi
> wrote:
>
>
>
>> -----Original Message-----
>> From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
>> Sent: Friday, September 20, 2013 12:51 AM
>> To: dev
>> Subject: Re: Call for 4.3 and 4.2.1 Release Managers!
>>
>> Hi Animesh and the rest,
>>
Hey Marcus,
I haven't yet been able to test my new code, but I thought you would be a
good person to ask to review it:
https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
All it is supposed to do is attach and detach a data disk (that has
guaranteed IOPS).
Guys, one consideration on the side:
I read that people are planning to separate CSS for projects and then
unify them at build time. When we have a pluggable system, how are UI
parts of plugins going to be integrated? If they are supposed to be
integrated into a single file at build time, that's goi
OK, will check it out in the next few days. As mentioned, you can set up
your Ubuntu VM as the management server as well if all else fails. If you
can get to the mgmt server on 8250 from the KVM host, then you need to
enable debug on the agent. It won't run without complaining loudly if it
can't g
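For the 8250 check, a one-liner from the KVM host is enough (the management
server hostname here is a placeholder):

# Probe the management server's agent port; if this fails,
# the agent can never register
nc -zv mgmt-server.example.com 8250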
This is how I've been trying to query for the status of the service (I
assume it could be started this way, as well, by changing "status" to
"start" or "restart"?):
mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
cloudstack-agent status
I get this back:
Failed to execute: * could
It's the log4j properties file in /etc/cloudstack/agent; change all INFO to
DEBUG. I imagine the agent just isn't starting. You can tail the log when
you try to start the service, or maybe it will spit something out into one
of the other files in /var/log/cloudstack/agent
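Something along these lines (a sketch; the exact log4j filename varies by
version, assumed here to be log4j-cloud.xml):

# Flip the agent's log level from INFO to DEBUG, keeping a backup
sudo sed -i.bak 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
# Restart the agent and watch the log for the failure
sudo service cloudstack-agent restart
tail -f /var/log/cloudstack/agent/agent.log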
On Sep 21, 2013 5:19 PM, "M
Great - thanks!
Just to give you an overview of what my code does (for when you get a
chance to review it):
SolidFireHostListener is registered in SolidfirePrimaryDataStoreProvider.
Its hostConnect method is invoked when a host connects to the CS MS. If
the host is running KVM, the listener sen
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14284/
---
Review request for cloudstack.
Repository: cloudstack-git
Description
---
I fixed the job. The URL still pointed to jenkins.cloudstack.org; it's now
pointing to jenkins.buildacloud.org.
The source for the script doing most of the magic is on GitHub:
https://github.com/CloudStack-extras/reviewboard-tools
Cheers,
Hugo
On Sep 22, 2013, at 1:34 AM, Laszlo Hornyak
All,
We've had our own storage plugins based on the 4.1 branch for
a while now. Basically everything was done in KVM on the Agent side.
With the new storage framework in place for 4.2, I'm working on
splitting this code between Agent-specific (attach to VM, etc) and the
code that talks to the SAN
Oh, one more question. Is grantAccess/revokeAccess called as I'd
expect for migration, e.g. when PrepareForMigrationCommand is called
on the target host we can grantAccess to the new host, and then when
MigrateCommand returns successfully from the old host we revokeAccess
from the old host?
On Sat
I noticed that the
tools/appliance/definitions/systemvmtemplate/postinstall.sh script was not
updated to the new path. Review Board seems to be down, so I'll just
attempt to attach a patch to this email.
Darren
Hey Marcus,
As far as I remember, grantAccess and revokeAccess are not invoked at all
in 4.2. Edison may be able to elaborate more on this, but I don't believe
the framework ever calls them.
Talk to you later
On Sat, Sep 21, 2013 at 11:02 PM, Marcus Sorensen wrote:
> Oh, one more question. Is
Also, we can bring John Burwell into this as he had related comments
several months ago, but we did not want to have the storage plug-ins
calling into the hypervisors. The idea was to get away from having any
hypervisor dependencies in the storage plug-ins.
The default storage plug-in does not fol
Attachments are filtered.
Either send it in plain text or send it to my email directly and I'll commit it.
Cheers,
Hugo
On Sep 22, 2013, at 1:03 PM, Darren Shepherd
wrote:
> I noticed that the
> tools/appliance/definitions/systemvmtemplate/postinstall.sh script was not
> updated to the n
I believe, though, we should look into adding calls to grantAccess and
revokeAccess for 4.3.
For 4.2 I didn't worry about it because my plug-in didn't work with KVM. I
set XenServer and VMware up to use shared SRs/datastores using CHAP
credentials that I added to all hosts in the cluster (programm
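For reference, setting up such a shared SR with CHAP on XenServer looks roughly
like this (a sketch; target, IQN, and credentials are placeholders):

# Create an iSCSI SR shared by all hosts in the pool, authenticating
# with the CHAP credentials registered on the SAN
xe sr-create name-label=managed-sr shared=true type=lvmoiscsi \
  device-config:target=192.168.1.100 \
  device-config:targetIQN=iqn.2013-09.com.example:vol-1 \
  device-config:chapuser=chap-user device-config:chappassword=chap-secret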
I added a comment to your diff. In general I think it looks good,
though I obviously can't vouch for whether or not it will work. One
thing I do have reservations about is the adaptor/pool naming. If you
think the code is generic enough that it will work for anyone who does
an iscsi LUN-per-volume
It's fine, I can leave that code as-is from 4.1 to 4.2 in my plugin,
but if the capability is there to move it I'd prefer to do so. I'm not
sure how we'd get away from calling into any of the hypervisors if we
need to attach disks, or manage default storage types. I can't create
storage on Xen with
That's an interesting comment, Marcus.
It was my intent that it should work with any CloudStack "managed" storage
that uses an iSCSI target. Even though I'm using CHAP, I wrote the code so
CHAP didn't have to be used.
As I'm doing my testing, I can try to think about whether it is generic
enough
I agree you'd want to get away from going from the agent to the SAN API.
My storage plug-in doesn't have any hypervisor dependencies, though. It
creates volumes, deletes them, and performs other SAN-related activities, and lets
the storage framework coordinate orchestration activities (like ask the
storage
Yeah, I think it probably is as well, but I figured you'd be in a
better position to tell.
I see that copyAsync is unsupported in your current 4.2 driver; does
that mean that there's no template support? Or is it some other call
that does templating now? I'm still getting up to speed on all of the
Adding a connectPhysicalDisk method sounds good.
I probably should add a disconnectPhysicalDisk method, as well, and not use
the deletePhysicalDisk method.
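On KVM, such a connect/disconnect pair would presumably just wrap the
open-iscsi calls, roughly like this (IQN and portal are placeholders):

# connectPhysicalDisk: discover the target and log in to the LUN
iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260
iscsiadm -m node -T iqn.2013-09.com.example:vol-1 -p 192.168.1.100:3260 --login
# disconnectPhysicalDisk: log out of the session; the LUN's data is untouched
iscsiadm -m node -T iqn.2013-09.com.example:vol-1 -p 192.168.1.100:3260 --logout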
On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:
> That's an interesting comment, Marcus.
>
> It was
My code does not yet support copying from a template.
Edison's default plug-in does, though (I believe):
CloudStackPrimaryDataStoreProviderImpl
On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen wrote:
> Yeah, I think it probably is as well, but I figured you'd be in a
> better position to tell.
Edison's plug-in calls the CreateCommand. Mine does not.
The initial approach that was discussed during 4.2 was for me to modify the
attach/detach logic only in the XenServer and VMware hypervisor plug-ins.
Now that I think about it more, though, I kind of would have liked to have
the storage fra
Conversely, if the storage framework called the DestroyCommand for managed
storage after the DetachCommand, then I could have had my remove
SR/datastore logic placed in the DestroyCommand handling rather than in the
DetachCommand handling.
On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
mike.t
Same would work for KVM.
If CreateCommand and DestroyCommand were called at the appropriate times by
the storage framework, I could move my connect and disconnect logic out of
the attach/detach logic.
On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:
> Conv
First step is for me to get this working for KVM, though. :)
Once I do that, I can perhaps make modifications to the storage framework
and hypervisor plug-ins to refactor the logic and such.
On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:
> Same would wor
"and lets the storage framework coordinate orchestration activities (like
ask the storage plug-in for a volume, then send a message to the hypervisor
to attach the resultant volume)."
I said that a little incorrectly. The storage framework shouldn't be
sending those messages to the hypervisors (as