Re: [ANNOUNCE] New committer: Laszlo Hornyak

2013-10-09 Thread Daan Hoogland
welcome, Laszlo,

have fun


On Mon, Oct 7, 2013 at 10:12 PM, Laszlo Hornyak wrote:

> Thank you all :)
>
>
> On Mon, Oct 7, 2013 at 8:51 PM, Sebastien Goasguen 
> wrote:
>
> > Congrats Laszlo,
> >
> >
> > On Oct 7, 2013, at 12:21 PM, Frankie Onuonga  wrote:
> >
> > > Congratulations.
> > > On a light note , when I grow up I want to be like you . :-)
> > >
> > > Kind regards
> > >
> > > Sent from my Windows Phone
> > > 
> > > From: Chip Childers
> > > Sent: 10/7/2013 5:27 PM
> > > To: dev@cloudstack.apache.org
> > > Subject: [ANNOUNCE] New committer: Laszlo Hornyak
> > >
> > > The Project Management Committee (PMC) for Apache CloudStack
> > > has asked Laszlo Hornyak to become a committer and we are pleased to
> > > announce that he has accepted.
> > >
> > > Being a committer allows many contributors to contribute more
> > > autonomously. For developers, it makes it easier to submit changes and
> > > eliminates the need to have contributions reviewed via the patch
> > > submission process. Whether contributions are development-related or
> > > otherwise, it is a recognition of a contributor's participation in the
> > > project and commitment to the project and the Apache Way.
> > >
> > > Please join me in congratulating Laszlo!
> > >
> > > -chip
> > > on behalf of the CloudStack PMC
> >
> >
>
>
> --
>
> EOF
>


Re: [4.2] [xenserver] [system vms] Xentools

2013-10-09 Thread Daan Hoogland
Paul, Harikrishna,

A colleague at Schuberg Philis created a template with the xenserver
tools.  He was going to give the template back to the communitee. I
think he is too busy but I'll remind him/ask him for the status.

regards,
Daan

On Tue, Oct 8, 2013 at 1:14 PM, Harikrishna Patnala
 wrote:
> Hi,
> Earlier there was discussion on putting xen tools in systemvms. Please look 
> into that. 
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201309.mbox/%3cce676527.4398b%25abhinandan.prat...@citrix.com%3E
>
>
> From: Paul Angus [mailto:paul.an...@shapeblue.com]
> Sent: Tuesday, October 08, 2013 1:36 PM
> To: dev@cloudstack.apache.org
> Subject: [4.2] [xenserver] [system vms] Xentools
>
> Hi all,
>
> The current XenServer system VM template doesn't seem to include XenServer 
> Tools - is this by design?
>
>
> systemvmtemplate-2013-07-12-master-xen.vhd.bz2
>
>
> Regards
>
> Paul Angus
> Senior Consultant / Cloud Architect
>
> [cid:image002.png@01CE1071.C6CC9C10]
>
> S: +44 20 3603 0540 | M: +447711418784 
> | T: CloudyAngus
> paul.an...@shapeblue.com | 
> www.shapeblue.com | Twitter:@shapeblue
> ShapeBlue Ltd, 53 Chandos Place, Covent Garden, London, WC2N 4HS
>
> Apache CloudStack Bootcamp training courses
> 02/03 October, 
> London
> 13/14 November, 
> London
> 27/28 November, 
> Bangalore
> 08/09 January 2014, 
> London
>
> This email and any attachments to it may be confidential and are intended 
> solely for the use of the individual to whom it is addressed. Any views or 
> opinions expressed are solely those of the author and do not necessarily 
> represent those of Shape Blue Ltd or related companies. If you are not the 
> intended recipient of this email, you must neither take any action based upon 
> its contents, nor copy or show it to anyone. Please contact the sender if you 
> believe you have received this email in error. Shape Blue Ltd is a company 
> incorporated in England & Wales. ShapeBlue Services India LLP is a company 
> incorporated in India and is operated under license from Shape Blue Ltd. 
> Shape Blue Brasil Consultoria Ltda is a company incorporated in Brasil and is 
> operated under license from Shape Blue Ltd. ShapeBlue is a registered 
> trademark.


RE: [4.2] [xenserver] [system vms] Xentools

2013-10-09 Thread Paul Angus
Thanks Daan.


Regards,

Paul Angus
S: +44 20 3603 0540 | M: +447711418784 | T: CloudyAngus
paul.an...@shapeblue.com

-Original Message-
From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
Sent: 09 October 2013 08:15
To: dev
Cc: Joris van Lieshout
Subject: Re: [4.2] [xenserver] [system vms] Xentools

Paul, Harikrishna,

A colleague at Schuberg Philis created a template with the xenserver tools.  He 
was going to give the template back to the communitee. I think he is too busy 
but I'll remind him/ask him for the status.

regards,
Daan

On Tue, Oct 8, 2013 at 1:14 PM, Harikrishna Patnala 
 wrote:
> Hi,
> Earlier there was discussion on putting xen tools in systemvms. Please
> look into that.
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201309.mbox/%3
> cce676527.4398b%25abhinandan.prat...@citrix.com%3E
>
>
> From: Paul Angus [mailto:paul.an...@shapeblue.com]
> Sent: Tuesday, October 08, 2013 1:36 PM
> To: dev@cloudstack.apache.org
> Subject: [4.2] [xenserver] [system vms] Xentools
>
> Hi all,
>
> The current XenServer system VM template doesn't seem to include XenServer 
> Tools - is this by design?
>
>
> systemvmtemplate-2013-07-12-master-xen.vhd.bz2
>
>
> Regards
>
> Paul Angus
> Senior Consultant / Cloud Architect
>
> [cid:image002.png@01CE1071.C6CC9C10]
>
> S: +44 20 3603 0540 | M:
> +447711418784 | T: CloudyAngus
> paul.an...@shapeblue.com |
> www.shapeblue.com | Twitter:@shapeblue
> ShapeBlue Ltd, 53 Chandos Place, Covent Garden, London, WC2N 4HS
>
> Apache CloudStack Bootcamp training courses
> 02/03 October,
> London
> 13/14 November,
> London
> 27/28 November,
> Bangalore />
> 08/09 January 2014,
> London
>
> This email and any attachments to it may be confidential and are intended 
> solely for the use of the individual to whom it is addressed. Any views or 
> opinions expressed are solely those of the author and do not necessarily 
> represent those of Shape Blue Ltd or related companies. If you are not the 
> intended recipient of this email, you must neither take any action based upon 
> its contents, nor copy or show it to anyone. Please contact the sender if you 
> believe you have received this email in error. Shape Blue Ltd is a company 
> incorporated in England & Wales. ShapeBlue Services India LLP is a company 
> incorporated in India and is operated under license from Shape Blue Ltd. 
> Shape Blue Brasil Consultoria Ltda is a company incorporated in Brasil and is 
> operated under license from Shape Blue Ltd. ShapeBlue is a registered 
> trademark.
This email and any attachments to it may be confidential and are intended 
solely for the use of the individual to whom it is addressed. Any views or 
opinions expressed are solely those of the author and do not necessarily 
represent those of Shape Blue Ltd or related companies. If you are not the 
intended recipient of this email, you must neither take any action based upon 
its contents, nor copy or show it to anyone. Please contact the sender if you 
believe you have received this email in error. Shape Blue Ltd is a company 
incorporated in England & Wales. ShapeBlue Services India LLP is a company 
incorporated in India and is operated under license from Shape Blue Ltd. Shape 
Blue Brasil Consultoria Ltda is a company incorporated in Brasil and is 
operated under license from Shape Blue Ltd. ShapeBlue is a registered trademark.


Re: ACS 4.2 - Error when trying to declare a LB rule in a vpc to a tier network with lb offering

2013-10-09 Thread benoit lair
Hi Murali,

Thanks for your help. It resolved my problem. All is going clear for my
public lb tiers and internal lb tiers.

Regards,

Benoit.


2013/10/8 Murali Reddy 

> On 08/10/13 7:41 PM, "benoit lair"  wrote:
>
> >Hello!
> >
> >I don't understand wht is going wrong :
> >
> >When i'm looking into the official docs, i see that vpc is still declared
> >to be able to do lb only on one tier ??
> >
> >However, https://issues.apache.org/jira/browse/CLOUDSTACK-2367 says that
> >this feature is implemented.
>
> Both external and internal LB are supported. Please see [1]. Both both
> functionality are mutually exclusive with in a tier. From the exception it
> appears that you are trying to do external LB on tier created with
> 'DefaultIsolatedNetworkOfferingForVpcNetworksWithInternalLB' offering
> which does not support it. Try creating a tier with a network offering
> with lb type as 'public lb'
>
> [1]
> https://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Insta
> llation_Guide/configure-vpc.html#add-loadbalancer-rule-vpc
>
> >
> >I have already configured several tiers in a vpc with internal lb service
> >for each, i deployed several vms into 2 differents tiers.
> >
> >But when i try to create a lb rule choosing two vm in a tier, i got the
> >error message i noticed two messages ago.
> >
> >If somebody has an idea, i would really appreciate.
> >
> >Thanks.
> >
> >Benoit.
> >
> >
> >2013/10/8 benoit lair 
> >
> >> Hello!
> >>
> >> Any ideas for this problem ?
> >>
> >> Thanks for your help.
> >>
> >> Regards,
> >>
> >> Benoit.
> >>
> >>
> >> 2013/10/7 benoit lair 
> >>
> >>> Hi,
> >>>
> >>> I'm working with a CS 4.2, Xenserver 6.2 in a centos 6.3
> >>>
> >>> Deployed a VPC, multiples tiers, each with a Network offering with LB
> >>> activated.
> >>>
> >>> When i navigate on the vpc summary page, i click on the button "Public
> >>>ip
> >>> adresses" on the Vpc virtual router item,
> >>>
> >>> I click on acquire new ip, this one is 10.14.6.5, i click on this one
> >>>and
> >>> go to configuration tab. I click on load balacing, try to create a lb
> >>>rule
> >>> very simple :
> >>>
> >>> just a name, port public 80, private port 80, algorithm least
> >>> connections, no stickiness, no health check, no autoscale, just select
> >>>2
> >>> vms already deployed and running :
> >>>
> >>> I try to create my lb rule, i got this error message in the UI :
> >>>
> >>> Failed to create load balancer rule: lb_rule_mano_frontal1
> >>>
> >>>
> >>> When i look into my mgmt server log :
> >>>
> >>> 2013-10-07 11:54:46,591 DEBUG [cloud.network.NetworkManagerImpl]
> >>> (catalina-exec-21:null) Associating ip Ip[10.14.6.5-1] to network
> >>> Ntwk[204|Guest|13]
> >>> 2013-10-07 11:54:46,598 DEBUG [cloud.network.NetworkManagerImpl]
> >>> (catalina-exec-21:null) Successfully associated ip address 10.14.6.5 to
> >>> network Ntwk[204|Guest|13]
> >>> 2013-10-07 11:54:46,604 WARN
> >>>[network.lb.LoadBalancingRulesManagerImpl]
> >>> (catalina-exec-21:null) Failed to create load balancer due to
> >>> com.cloud.exception.InvalidParameterValueException: Scheme Public is
> >>>not
> >>> supported by the network offering [Network Offering
> >>> [13-Guest-DefaultIsolatedNetworkOfferingForVpcNetworksWithInternalLB]
> >>> at
> >>>
> >>>com.cloud.network.lb.LoadBalancingRulesManagerImpl.isLbServiceSupportedI
> >>>nNetwork(LoadBalancingRulesManagerImpl.java:2136)
> >>> at
> >>>
> >>>com.cloud.network.lb.LoadBalancingRulesManagerImpl.createPublicLoadBalan
> >>>cer(LoadBalancingRulesManagerImpl.java:1432)
> >>> at
> >>>
> >>>com.cloud.utils.component.ComponentInstantiationPostProcessor$Intercepto
> >>>rDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
> >>> at
> >>>
> >>>com.cloud.network.lb.LoadBalancingRulesManagerImpl.createPublicLoadBalan
> >>>cerRule(LoadBalancingRulesManagerImpl.java:1360)
> >>> at
> >>>
> >>>com.cloud.utils.component.ComponentInstantiationPostProcessor$Intercepto
> >>>rDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
> >>> at
> >>>
> >>>org.apache.cloudstack.api.command.user.loadbalancer.CreateLoadBalancerRu
> >>>leCmd.create(CreateLoadBalancerRuleCmd.java:282)
> >>> at
> >>> com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:104)
> >>> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:460)
> >>> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372)
> >>> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:305)
> >>> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
> >>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
> >>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> >>> at
> >>>
> >>>org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(Applica
> >>>tionFilterChain.java:290)
> >>> at
> >>>
> >>>org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilt
> >>>erChain.java:206)
> >>> at
> >>>
> >>>org.apache.catalin

[DISCUSS] Transaction Hell

2013-10-09 Thread Darren Shepherd
Okay, please read this all, this is important...  I want you all to
know that its personally important to me to attempt to get rid of ACS
custom stuff and introduce patterns, frameworks, libraries, etc that I
feel are more consistent with modern Java development and are
understood by a wider audience.  This is one of the basic reasons I
started the spring-modularization branch.  I just want to be able to
leverage Spring in a sane way.  The current implementation in ACS is
backwards and broken and abuses Spring to the point that leveraging
Spring isn't really all that possible.

So while I did the Spring work, I also started laying the ground work
to get rid of the ACS custom transaction management.  The custom DAO
framework and the corresponding transaction management has been a huge
barrier to me extending ACS in the past.  When you look at how you are
supposed to access the database, it's all very custom and what I feel
isn't really all that straight forward.  I was debugging an issue
today and figured out there is a huge bug in what I've done and that
has lead me down this rabbit hole of what the correct solution is.
Additionally ACS custom transaction mgmt is done in a way that
basically breaks Spring too.

At some point on the mailing list there was a small discussion about
removing the @DB interceptor.  The @DB interceptor does txn.open() and
txn.close() around a method.  If a method forgets to commit or
rollback the txn, txn.close() will rollback the transaction for the
method.  So the general idea of the change was to instead move that
logic to the bottom of the call stack.  The assumption being that the
@DB code was just an additional check to ensure the programmer didn't
forget something and we could instead just do that once at the bottom
of the stack.  Oh how wrong I was.

The problem is that developers have relied on the @DB interceptor to
handle rollback for them.  So you see the following code quite a bit

txn.start()
...
txn.commit()

And there is no sign of a rollback anywhere.  So the rollback will
happen if some exception is thrown.  By moving the @DB logic to the
bottom of stack what happens is the transaction is not rolled back
when the developer thought it would and madness ensues.  So that
change was bad.  So what to do  Here's my totally bias description
of solutions:

Option A or "Custom Forever!":  Go back to custom ACS AOP and the @DB.
 This is what one would think is the simplest and safest solution.
We'll it ain't really.  Here's the killer problem, besides that fact
that it makes me feel very sad inside, the current rollback behavior
is broken in certain spots in ACS.  While investigating possible
solutions I started looking at all the places that do programmatic txn
management.  It's important to realize that the txn framework only
works properly if the method in which you do txn.start() has @DB on
it.  There is a java assert in currentTxn() that attempts to make sure
that @DB is there.  But nobody runs with asserts on.  So there are
places in ACS where transactions are started and no @DB is there, but
it happens to work because some method in the stack has @DB.  So to
properly go back to option A we really need to fix all places that
don't have @DB, plus make sure people always run with asserts on.  And
then give up making the ACS world a better place and just do things
how we always have...

Option B or "Progress is Good":  The current transaction management
approach (especially rollback) doesn't match how the majority of
frameworks out there work.  This option is to change the Transaction
class API to be more consistent with standard frameworks out there.  I
propose the following APIs (if you know Spring TX mgmt, this will look
familiar)

1) remove start(), commit(), rollback() - The easiest way to ensure we
up date everything properly is to break the API and fix everything
that is broken (about 433 places)
2) Add execute(TransactionCallback) where TransactionCallback has one
method doInTransaction().  For somebody to run a transaction you would
need to do

txn.execute(new TransactionCallback() {
Object doInTransaction() {
  // do stuff
}
})
3) add "Object startTransaction()," commit(Object), and
rollback(Object) - These methods are for callers who really really
want to do thing programmatically.  To run a transaction you would do

Object status = txn.startTransaction()
try {
  //.. do stuff
  txn.commit(status)
} catch (Exception e) {
  txn.rollback(status)
}

I'm perfectly willing to go and change all the code for this.  It will
just take a couple hours or so.  Option B is purposely almost exactly
like Spring PlatformTransactionManager.  The reason being if we switch
all the code to this style, we can later drop the implementation of
Transaction and move to 100% fully Spring TX managed.

Just as a final point, every custom approach or framework we have adds
a barrier to people extending ACS and additionally puts more burden on
the ACS community as that is more co

Re: Latest Master DB issue

2013-10-09 Thread Darren Shepherd
Kelven,

So the issue is the combination of my code with VmwareContextPool.
With the ManagedContext framework, what I've done is replace every
Runnable and TimerTask with ManagedContextRunnable and
ManagedContextTimerTask.  Those classes will run the onEnter, onLeave
logic which will setup CallContext as a result.  I purposely changed
every reference so that everything was consistent.  I didn't want to
have developers have to consider when or when not to use
ManagedContext, so just always use it.  So as a result even though
your code has nothing to do with the DB, the ManageContextListener for
CallContext does.

So I'm sure your thinking that Resources shouldn't call the DB.  The
ManagedContext framework only does things when deployed in a managed
JVM.  The only managed JVM is the mgmt server.  If it's AWSAPI, Usage,
or an Agent JVM, then that framework does nothing.

So there are three possible solutions I see

1) Change VmwareContextPool to be initialized from the @PostConstruct
or start() method.
2) Revert the change to VmwareContextPool to use TimerTask and not
ManageContextTimerTask
3) Merge spring modularization.

The simplest stop gap would be option 2.

Darren

On Tue, Oct 8, 2013 at 5:35 PM, Kelven Yang  wrote:
> The problem seems to me is whether or not a background job that touches
> with database respects the bootstrap initialization order. As of
> VmwareContextPool itself, its background job does something fully within
> its own territory (no database, no reference outside). and vmware-base
> package was originally designed to be running on its own without assuming
> any container that offers unified lifecycle management. I don't think this
> type of background job has anything to do with the failure in this
> particular case.
>
> However, I do agree that we need to clean up and unify a few things inside
> the CloudStack, especially on life-cycle management and all
> background-jobs that their execution path touches with component
> life-cyle, auto-wiring, AOP etc.
>
> To live with the time before the spring modularization merge, we just need
> to figure out which background job that triggers all these and get it
> fixed, it used to work before even it is fragile, I don't think the fix of
> the problem is impossible. Is anyone working on this issue?
>
> Kelven
>
>
>
>
> On 10/8/13 2:35 PM, "Darren Shepherd"  wrote:
>
>>Some more info about this.  What specifically is happening is that the
>>VmwareContextPool call is creating a Timer during the constructor of
>>the class which is being constructed in a static block from
>>VmwareContextFactory.  So when the VmwareContextFactory class is
>>loaded by the class loader, the background thread is created.  Which
>>is way, way before the Database upgrade happens.  This will still be
>>fixed if we merge the spring modularization, but this vmware code
>>should change regardless.  Background threads should only be launched
>>from a @PostConstruct or ComponentLifecycle.start() method.  They
>>should not be started when a class is constructed or loaded.
>>
>>Darren
>>
>>
>>On Tue, Oct 8, 2013 at 2:22 PM, Darren Shepherd
>> wrote:
>>> Hey, I half way introduced this issue in a really long and round about
>>> way.  I don't think there's a good simple fix unless we merge the
>>> spring-modularization branch.  I'm going to look further into it.  But
>>> here's the background of why we are seeing this.
>>>
>>> I introduced "Managed Context" framework that will wrap all the
>>> background threads and manage the thread locals.  This was the union
>>> of CallContext, ServerContext, and AsyncJob*Context into one simple
>>> framework.  The problem with ACS though is that A LOT of background
>>> threads are spawned at all different random times of the
>>> initialization.  So what is happening is that during the
>>> initialization of some bean its kicking off a background thread that
>>> tries to access the database before the database upgrade has ran.  Now
>>> the CallContext has a strange suicidal behaviour (this was already
>>> there, I didn't change this), if it can't find account 1, it does a
>>> System.exit(1).  So since this one background thread is failing, the
>>> whole JVM shuts down.  Before CallContext only existed on some
>>> threads, but the addition of the Managed Context framework, it is now
>>> on almost all threads.
>>>
>>> Now in the spring-modularization branch there is a very strict and
>>> (mostly) deterministic initialization order.  The database upgrade
>>> class will be initialized and ran before any other bean in CloudStack
>>> is even initiated.  So this works around all these DB problems.  The
>>> current spring setup in master is very, very fragile.  As I said
>>> before, it is really difficult to ensure certain aspects are
>>> initialized before others, and since we moved to (which I don't really
>>> agree with) doing DB schema upgrades purely on startup of the mgmt
>>> server, we now have to be extra careful about initialization order.
>>

Re: [4.2] [xenserver] [system vms] Xentools

2013-10-09 Thread Abhinandan Prateek

On 09/10/13 12:45 pm, "Daan Hoogland"  wrote:

>Paul, Harikrishna,
>
>A colleague at Schuberg Philis created a template with the xenserver
>tools.  He was going to give the template back to the communitee. I
>think he is too busy but I'll remind him/ask him for the status.

That would be great, can we also generate a 64 bit one ?


-abhi



Re: [jira] [Updated] (CLOUDSTACK-4829) vnc access instance's console through apikey failed

2013-10-09 Thread sunko2014
I have the same problem


On Tue, Oct 8, 2013 at 5:28 PM, huyao (JIRA)  wrote:

>
>  [
> https://issues.apache.org/jira/browse/CLOUDSTACK-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel]
>
> huyao updated CLOUDSTACK-4829:
> --
>
> Affects Version/s: (was: 4.2.0)
>4.1.1
>
> > vnc access instance's console through apikey failed
> > ---
> >
> > Key: CLOUDSTACK-4829
> > URL:
> https://issues.apache.org/jira/browse/CLOUDSTACK-4829
> > Project: CloudStack
> >  Issue Type: Bug
> >  Security Level: Public(Anyone can view this level - this is the
> default.)
> >  Components: VNC Proxy
> >Affects Versions: 4.1.1
> > Environment: windows 7 + cygwin + xenserver 6.1.0 + cloudstack
> 4.1.1
> >Reporter: huyao
> >Priority: Critical
> >
> > I compiled cloudstack 4.1.1 source code in cygwin, then test it using
> jetty, it works fine. But, when I access instance's console through vnc
> using apikey, it fails, the browser shows the follow message:
> > Access denied. Invalid web session or API key in request
> > my url:
> >
> http://localhost:8080/client/console?cmd=access&vm=b194369f-e0d4-45d8-a50f-09ec51095e68&apikey=fmS7oyThP6MGxN5X_CgeOCxQIqgTu5QFDz46r2Pv5kLp88EYYBquSu6_3s3d9MXdbUHPpxj5qDDy1jvhEpQWvQ&signature=y3dNHn580NJiCVRGwrBTR4JHImo%3D
> > I test the listAccounts api, it's ok.
> > my url:
> >
> http://localhost:8080/client/api?command=listAccounts&apikey=fmS7oyThP6MGxN5X_CgeOCxQIqgTu5QFDz46r2Pv5kLp88EYYBquSu6_3s3d9MXdbUHPpxj5qDDy1jvhEpQWvQ&signature=ALhJtw%2Bzi7Rcmo%2Bkk3xH3cTJgp4%3D
> > then, I debug the source code, find where it fails.
> > file: ConsoleProxyServlet.java
> > private boolean verifyRequest(Map requestParameters) {
> >   try {
> >   ...
> >   ...
> >   unsignedRequest = unsignedRequest.toLowerCase();
> >   Mac mac = Mac.getInstance("HmacSHA1");
> >   SecretKeySpec keySpec = new
> SecretKeySpec(secretKey.getBytes(), "HmacSHA1");
> >   mac.init(keySpec);
> >   mac.update(unsignedRequest.getBytes());
> >   byte[] encryptedBytes = mac.doFinal();
> >   String computedSignature =
> Base64.encodeBase64URLSafeString(encryptedBytes);
> >   boolean equalSig = signature.equals(computedSignature);
> >   if (!equalSig) {
> >   s_logger.debug("User signature: " + signature + "
> is not equaled to computed signature: " + computedSignature);
> >   }
> >   ...
> >   ...
> >   return equalSig;
> >   } catch (Exception ex) {
> >   s_logger.error("unable to verifty request signature", ex);
> >   }
> >   return false;
> > }
> > in this method, signature not equals to computedSignature, so it returns
> false
> > then, I view ApiServer.java,the verifyRequest method:
> > public boolean verifyRequest(Map requestParameters,
> Long userId)  throws ServerApiException {
> >   try {
> >   ...
> >   ...
> >   unsignedRequest = unsignedRequest.toLowerCase();
> >   Mac mac = Mac.getInstance("HmacSHA1");
> >   SecretKeySpec keySpec = new
> SecretKeySpec(secretKey.getBytes(), "HmacSHA1");
> >   mac.init(keySpec);
> >   mac.update(unsignedRequest.getBytes());
> >   byte[] encryptedBytes = mac.doFinal();
> >   String computedSignature =
> Base64.encodeBase64String(encryptedBytes);
> >   boolean equalSig = signature.equals(computedSignature);
> >   if (!equalSig) {
> >   s_logger.debug("User signature: " + signature + "
> is not equaled to computed signature: " + computedSignature);
> >   }
> >   ...
> >   ...
> >   return equalSig;
> >   } catch (Exception ex) {
> >   s_logger.error("unable to verifty request signature", ex);
> >   }
> >   return false;
> > }
> > these two verifyRequest method produce different signature, because the
> former use :
> > String computedSignature =
> Base64.encodeBase64URLSafeString(encryptedBytes);
> > while the later use:
> > String computedSignature = Base64.encodeBase64String(encryptedBytes);
> > this is why listAccouts works fine, but vnc console is failed.
> > when I replace Base64.encodeBase64URLSafeString by
> Base64.encodeBase64String, vnc console is ok too.
> > so I am confused, why use different encode method? It is a bug?
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.1#6144)
>


RE: [DISCUSS] Transaction Hell

2013-10-09 Thread Donal Lafferty
Hi Darren,

Thank you for explain this issue in huge detail.

Could I respond with some questions?

1. If @DB declarations are not detected, because Java 'assert' is not turned 
on, can we use a different mechanism to check for @DB?

2. What standard framework do you recommend using for DB transactions?  Is 
there a guide available?

3. Going back to 'assert'.  Since they're not being used, should we get rid of 
all instances of Java 'assert' in the code base?  Or should we turn on 'assert' 
in non-release builds?


DL


> -Original Message-
> From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
> Sent: 09 October 2013 09:45
> To: dev@cloudstack.apache.org
> Subject: [DISCUSS] Transaction Hell
> 
> Okay, please read this all, this is important...  I want you all to know that 
> its
> personally important to me to attempt to get rid of ACS custom stuff and
> introduce patterns, frameworks, libraries, etc that I feel are more consistent
> with modern Java development and are understood by a wider audience.
> This is one of the basic reasons I started the spring-modularization branch.  
> I
> just want to be able to leverage Spring in a sane way.  The current
> implementation in ACS is backwards and broken and abuses Spring to the
> point that leveraging Spring isn't really all that possible.
> 
> So while I did the Spring work, I also started laying the ground work to get 
> rid
> of the ACS custom transaction management.  The custom DAO framework
> and the corresponding transaction management has been a huge barrier to
> me extending ACS in the past.  When you look at how you are supposed to
> access the database, it's all very custom and what I feel isn't really all 
> that
> straight forward.  I was debugging an issue today and figured out there is a
> huge bug in what I've done and that has lead me down this rabbit hole of
> what the correct solution is.
> Additionally ACS custom transaction mgmt is done in a way that basically
> breaks Spring too.
> 
> At some point on the mailing list there was a small discussion about removing
> the @DB interceptor.  The @DB interceptor does txn.open() and
> txn.close() around a method.  If a method forgets to commit or rollback the
> txn, txn.close() will rollback the transaction for the method.  So the general
> idea of the change was to instead move that logic to the bottom of the call
> stack.  The assumption being that the @DB code was just an additional check
> to ensure the programmer didn't forget something and we could instead just
> do that once at the bottom of the stack.  Oh how wrong I was.
> 
> The problem is that developers have relied on the @DB interceptor to handle
> rollback for them.  So you see the following code quite a bit
> 
> txn.start()
> ...
> txn.commit()
> 
> And there is no sign of a rollback anywhere.  So the rollback will happen if
> some exception is thrown.  By moving the @DB logic to the bottom of stack
> what happens is the transaction is not rolled back when the developer
> thought it would and madness ensues.  So that change was bad.  So what to
> do  Here's my totally bias description of solutions:
> 
> Option A or "Custom Forever!":  Go back to custom ACS AOP and the @DB.
>  This is what one would think is the simplest and safest solution.
> We'll it ain't really.  Here's the killer problem, besides that fact that it 
> makes
> me feel very sad inside, the current rollback behavior is broken in certain
> spots in ACS.  While investigating possible solutions I started looking at 
> all the
> places that do programmatic txn management.  It's important to realize that
> the txn framework only works properly if the method in which you do
> txn.start() has @DB on it.  There is a java assert in currentTxn() that 
> attempts
> to make sure that @DB is there.  But nobody runs with asserts on.  So
> there are places in ACS where transactions are started and no @DB is there,
> but it happens to work because some method in the stack has @DB.  So to
> properly go back to option A we really need to fix all places that don't have
> @DB, plus make sure people always run with asserts on.  And then give up
> making the ACS world a better place and just do things how we always
> have...
> 
> Option B or "Progress is Good":  The current transaction management
> approach (especially rollback) doesn't match how the majority of frameworks
> out there work.  This option is to change the Transaction class API to be more
> consistent with standard frameworks out there.  I propose the following APIs
> (if you know Spring TX mgmt, this will look
> familiar)
> 
> 1) remove start(), commit(), rollback() - The easiest way to ensure we up
> date everything properly is to break the API and fix everything that is broken
> (about 433 places)
> 2) Add execute(TransactionCallback) where TransactionCallback has one
> method doInTransaction().  For somebody to run a transaction you would
> need to do
> 
> txn.execute(new TransactionCallback() {

RE: [DISCUSS] Transaction Hell

2013-10-09 Thread Frankie Onuonga
Hi Darren,
Greetings from Nairobi.

First, let me start by saying that you for the email .
I personally think the options were explained very well.

Now on a serious note, I think that it would be good practice to ensure that we 
go with a standardized method of doing things.
I therefore support the option to go with a framework.
It should make management of code easier.
Future development should also be much better.

Kind Regards,


Sent from my Windows Phone

From: Darren Shepherd
Sent: ‎10/‎9/‎2013 11:46 AM
To: dev@cloudstack.apache.org
Subject: [DISCUSS] Transaction Hell

Okay, please read this all, this is important...  I want you all to
know that its personally important to me to attempt to get rid of ACS
custom stuff and introduce patterns, frameworks, libraries, etc that I
feel are more consistent with modern Java development and are
understood by a wider audience.  This is one of the basic reasons I
started the spring-modularization branch.  I just want to be able to
leverage Spring in a sane way.  The current implementation in ACS is
backwards and broken and abuses Spring to the point that leveraging
Spring isn't really all that possible.

So while I did the Spring work, I also started laying the ground work
to get rid of the ACS custom transaction management.  The custom DAO
framework and the corresponding transaction management has been a huge
barrier to me extending ACS in the past.  When you look at how you are
supposed to access the database, it's all very custom and what I feel
isn't really all that straight forward.  I was debugging an issue
today and figured out there is a huge bug in what I've done and that
has lead me down this rabbit hole of what the correct solution is.
Additionally ACS custom transaction mgmt is done in a way that
basically breaks Spring too.

At some point on the mailing list there was a small discussion about
removing the @DB interceptor.  The @DB interceptor does txn.open() and
txn.close() around a method.  If a method forgets to commit or
rollback the txn, txn.close() will rollback the transaction for the
method.  So the general idea of the change was to instead move that
logic to the bottom of the call stack.  The assumption being that the
@DB code was just an additional check to ensure the programmer didn't
forget something and we could instead just do that once at the bottom
of the stack.  Oh how wrong I was.

The problem is that developers have relied on the @DB interceptor to
handle rollback for them.  So you see the following code quite a bit

txn.start()
...
txn.commit()

And there is no sign of a rollback anywhere.  So the rollback will
happen if some exception is thrown.  By moving the @DB logic to the
bottom of stack what happens is the transaction is not rolled back
when the developer thought it would and madness ensues.  So that
change was bad.  So what to do  Here's my totally bias description
of solutions:

Option A or "Custom Forever!":  Go back to custom ACS AOP and the @DB.
 This is what one would think is the simplest and safest solution.
We'll it ain't really.  Here's the killer problem, besides that fact
that it makes me feel very sad inside, the current rollback behavior
is broken in certain spots in ACS.  While investigating possible
solutions I started looking at all the places that do programmatic txn
management.  It's important to realize that the txn framework only
works properly if the method in which you do txn.start() has @DB on
it.  There is a java assert in currentTxn() that attempts to make sure
that @DB is there.  But nobody runs with asserts on.  So there are
places in ACS where transactions are started and no @DB is there, but
it happens to work because some method in the stack has @DB.  So to
properly go back to option A we really need to fix all places that
don't have @DB, plus make sure people always run with asserts on.  And
then give up making the ACS world a better place and just do things
how we always have...

Option B or "Progress is Good":  The current transaction management
approach (especially rollback) doesn't match how the majority of
frameworks out there work.  This option is to change the Transaction
class API to be more consistent with standard frameworks out there.  I
propose the following APIs (if you know Spring TX mgmt, this will look
familiar)

1) remove start(), commit(), rollback() - The easiest way to ensure we
up date everything properly is to break the API and fix everything
that is broken (about 433 places)
2) Add execute(TransactionCallback) where TransactionCallback has one
method doInTransaction().  For somebody to run a transaction you would
need to do

txn.execute(new TransactionCallback() {
Object doInTransaction() {
  // do stuff
}
})
3) add "Object startTransaction()," commit(Object), and
rollback(Object) - These methods are for callers who really really
want to do thing programmatical

Review Request 14556: CLOUDSTACK-4766: Add timeout if vm does not reach running state

2013-10-09 Thread Girish Shilamkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14556/
---

Review request for cloudstack, sanjeev n and venkata swamy babu  budumuru.


Bugs: CLOUDSTACK-4766


Repository: cloudstack-git


Description
---

The test use to wait for ever for the vm to attain Running state.
Added a timeout so it does not get into infinite loop.


Diffs
-

  test/integration/component/test_reset_ssh_keypair.py ace4499 

Diff: https://reviews.apache.org/r/14556/diff/


Testing
---

Verified that the timeout works. 

test_01_reset_keypair_normal_user 
(test_reset_ssh_keypair.TestResetSSHKeyUserRights)Verify API 
resetSSHKeyForVirtualMachine for non admin non root ... FAIL
test_02_reset_keypair_domain_admin 
(test_reset_ssh_keypair.TestResetSSHKeyUserRights)
Verify API resetSSHKeyForVirtualMachine for domain admin non root ... ok
test_03_reset_keypair_root_admin 
(test_reset_ssh_keypair.TestResetSSHKeyUserRights)
Verify API resetSSHKeyForVirtualMachine for domain admin root ... ok
test_01_reset_ssh_keys (test_reset_ssh_keypair.TestResetSSHKeypair)
Test Reset SSH keys for VM  already having SSH key ... ok
test_02_reset_ssh_key_password_enabled_template 
(test_reset_ssh_keypair.TestResetSSHKeypair)
Reset SSH keys for VM  created from password enabled template and ... ok
test_03_reset_ssh_with_no_key (test_reset_ssh_keypair.TestResetSSHKeypair)
Reset SSH key for VM  having no SSH key ... ok
test_04_reset_key_passwd_enabled_no_key 
(test_reset_ssh_keypair.TestResetSSHKeypair)
Reset SSH keys for VM  created from password enabled template and ... ok
test_05_reset_key_in_running_state (test_reset_ssh_keypair.TestResetSSHKeypair)
Reset SSH keys for VM  already having SSH key when VM is in running ... ok
test_06_reset_key_passwd_enabled_vm_running 
(test_reset_ssh_keypair.TestResetSSHKeypair)
Reset SSH keys for VM  created from password enabled template and ... ok
test_07_reset_keypair_invalid_params 
(test_reset_ssh_keypair.TestResetSSHKeypair)
Verify API resetSSHKeyForVirtualMachine with incorrect parameters … ok

==
FAIL: test_01_reset_keypair_normal_user 
(test_reset_ssh_keypair.TestResetSSHKeyUserRights)
Verify API resetSSHKeyForVirtualMachine for non admin non root
--
Traceback (most recent call last):
  File "/root/girish/test_reset_ssh_keypair.py", line 1218, in 
test_01_reset_keypair_normal_user
% (vms[0].name, self.services["timeout"]))
AssertionError: The virtual machine 974d1275-b747-441c-83b2-1795de4d87df failed 
to start even after 10 minutes

--


Thanks,

Girish Shilamkar



Re: Unable to reboot instance: "Unable to find service offering: [NUMBER] corresponding to the vm"

2013-10-09 Thread Indra Pramana
Dear all,

Anyone can advise on this?

Looking forward to your reply, thank you.

Cheers.



On Mon, Oct 7, 2013 at 9:49 AM, Indra Pramana  wrote:

> Dear all,
>
> After upgrading to CloudStack 4.2.0, I am not able to reboot a VM
> instance. Error message: "Unable to find service offering:
> [SOME-RANDOM-NUMBER] corresponding to the vm". This affects all the VM
> instances that I want to reboot.
>
> Any idea what could be the cause of the problem? Seems that it cannot find
> a certain service offering which corresponds to the VM, anyone can shed a
> light on which service offering it is referring to?
>
> Looking forward to your reply, thank you.
>
> Cheers.
>


Re: what's the reason for the placeholder nic in VPC/VR?

2013-10-09 Thread Murali Reddy
On 09/10/13 11:33 AM, "Darren Shepherd" 
wrote:

>Why is a placeholder nic created before the VRs for the VPC are created?
>
>Darren
>

Generally place holder nic is used in cases where cloudstack uses a subnet
IP from the guest subnet, but ip is not used for any VM nic's. In most of
the external network devices, needs a subnet IP from the guest network
CIDR, cloudstack creates a place holder nic and allocates a subnet ip.



Re: [New Feature FS] SSL Offload Support for Cloudstack

2013-10-09 Thread Murali Reddy
Thanks Syed for the FS.

Couple of comments:

- any reason why you choose assignTo/RemoveFrom load balancer rule API's
to assign/remove certificate to LB rules? These api's are basically for
controlling VM membership with a load balancer rule. Can
create/updateLoadBalancerRule api's b used for registering and
de-registering certificate with load balancer rule?

- to me SSL termination is value added service from providers perspective,
its better we expose service differentiation in the network offering (e.g
dedicated load balancer capability of LB service in the network offering).
So only if network offering permits, SSL termination can be used.

- does adding SSL termination support to load balancer affect/complement
current session persistence, health monitoring, auto scale functionality
anyway? I see session persistence based on SSL session id's please see if
this can supported.

- as commented by other, fail fast at service layer on invalid certificate.

- on the requirement #4, don't infer protocol based on the public/private
ports and impose restrictions. Current createLoadBalancer API does not
take protocol parameter so its inferred at device layer. NetScaler seems
to support SSL with other TCP ports as well.

One general implementation note, network rules can be reprogrammed. So
operations to configure SSL cert, binding cert to virtual server etc need
to be idempotent at NetScaler resource.

[1] 
http://support.citrix.com/proddocs/topic/netscaler-ssl-93/ns-ssl-offloading
-other-tcp-protocols-tsk.html

On 08/10/13 11:44 PM, "Syed Ahmed"  wrote:

>Hi,
>
>I have been working on adding SSL offload functionality to cloudstack
>and make it work for Netscaler. I have an initial design documented at
>https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Offloading+Supp
>ort 
>and I would really love your feedback. The bug for this is
>https://issues.apache.org/jira/browse/CLOUDSTACK-4821 .
>
>Thanks,
>-Syed
>
>
>




Dynamically scalable?

2013-10-09 Thread Indra Pramana
Dear all,

What does "dynamically scalable" means on a VM instance or a template?

Looking forward to your reply, thank you.

Cheers.


RE: Dynamically scalable?

2013-10-09 Thread Harikrishna Patnala
Hi ,
While registering template we can specify whether xenserver/vmware tools 
installed in the template using this flag(dynamically scalable). This is 
required while deploying vm to set some parameters for dynamic scaling of cpu 
and ram feature to work. The same flag is reflected in the vm instance details.

Thanks
HArikrishna

-Original Message-
From: Indra Pramana [mailto:in...@sg.or.id] 
Sent: Wednesday, October 09, 2013 3:52 PM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Dynamically scalable?

Dear all,

What does "dynamically scalable" means on a VM instance or a template?

Looking forward to your reply, thank you.

Cheers.


Review Request 14557: CLOUDSTACK-4747: Rename testcase name to use lesser characters

2013-10-09 Thread Girish Shilamkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14557/
---

Review request for cloudstack and Sowmya Krishnan.


Repository: cloudstack-git


Description
---

Renamed testcase name and also initialised _cleanup so that
it does not break on non-NS Cloudstack setup.


Diffs
-

  test/integration/component/test_netscaler_nw_off.py 3139257 

Diff: https://reviews.apache.org/r/14557/diff/


Testing
---


Thanks,

Girish Shilamkar



Re: Review Request 14227: CLOUDSTACK-4707: "sourcetemplateid" field is not getting set for derived templates

2013-10-09 Thread Harikrishna Patnala

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14227/
---

(Updated Oct. 9, 2013, 11:03 a.m.)


Review request for cloudstack and Kishan Kavala.


Bugs: CLOUDSTACK-4707


Repository: cloudstack-git


Description
---

CLOUDSTACK-4707:  "sourcetemplateid" field is not getting set for derived 
templates 
 Template created from a volume or snapshot did not have the sourcetemplateid 
field set in vm_template table. 


Diffs (updated)
-

  engine/schema/src/com/cloud/storage/VolumeVO.java ea3d6bf 

Diff: https://reviews.apache.org/r/14227/diff/


Testing
---

Tested by 
1) creating template from root volume of VM
2) created snapshot of root volume and create template from that snapshot
In both the cases sourcetemplateId is set in vm_template table


Thanks,

Harikrishna Patnala



Re: Review Request 14556: CLOUDSTACK-4766: Add timeout if vm does not reach running state

2013-10-09 Thread sanjeev n

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14556/#review26816
---


Patch looks good. However still test is failing. Is it a product issue?

- sanjeev n


On Oct. 9, 2013, 9:50 a.m., Girish Shilamkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14556/
> ---
> 
> (Updated Oct. 9, 2013, 9:50 a.m.)
> 
> 
> Review request for cloudstack, sanjeev n and venkata swamy babu  budumuru.
> 
> 
> Bugs: CLOUDSTACK-4766
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> The test use to wait for ever for the vm to attain Running state.
> Added a timeout so it does not get into infinite loop.
> 
> 
> Diffs
> -
> 
>   test/integration/component/test_reset_ssh_keypair.py ace4499 
> 
> Diff: https://reviews.apache.org/r/14556/diff/
> 
> 
> Testing
> ---
> 
> Verified that the timeout works. 
> 
> test_01_reset_keypair_normal_user 
> (test_reset_ssh_keypair.TestResetSSHKeyUserRights)Verify API 
> resetSSHKeyForVirtualMachine for non admin non root ... FAIL
> test_02_reset_keypair_domain_admin 
> (test_reset_ssh_keypair.TestResetSSHKeyUserRights)
> Verify API resetSSHKeyForVirtualMachine for domain admin non root ... ok
> test_03_reset_keypair_root_admin 
> (test_reset_ssh_keypair.TestResetSSHKeyUserRights)
> Verify API resetSSHKeyForVirtualMachine for domain admin root ... ok
> test_01_reset_ssh_keys (test_reset_ssh_keypair.TestResetSSHKeypair)
> Test Reset SSH keys for VM  already having SSH key ... ok
> test_02_reset_ssh_key_password_enabled_template 
> (test_reset_ssh_keypair.TestResetSSHKeypair)
> Reset SSH keys for VM  created from password enabled template and ... ok
> test_03_reset_ssh_with_no_key (test_reset_ssh_keypair.TestResetSSHKeypair)
> Reset SSH key for VM  having no SSH key ... ok
> test_04_reset_key_passwd_enabled_no_key 
> (test_reset_ssh_keypair.TestResetSSHKeypair)
> Reset SSH keys for VM  created from password enabled template and ... ok
> test_05_reset_key_in_running_state 
> (test_reset_ssh_keypair.TestResetSSHKeypair)
> Reset SSH keys for VM  already having SSH key when VM is in running ... ok
> test_06_reset_key_passwd_enabled_vm_running 
> (test_reset_ssh_keypair.TestResetSSHKeypair)
> Reset SSH keys for VM  created from password enabled template and ... ok
> test_07_reset_keypair_invalid_params 
> (test_reset_ssh_keypair.TestResetSSHKeypair)
> Verify API resetSSHKeyForVirtualMachine with incorrect parameters … ok
> 
> ==
> FAIL: test_01_reset_keypair_normal_user 
> (test_reset_ssh_keypair.TestResetSSHKeyUserRights)
> Verify API resetSSHKeyForVirtualMachine for non admin non root
> --
> Traceback (most recent call last):
>   File "/root/girish/test_reset_ssh_keypair.py", line 1218, in 
> test_01_reset_keypair_normal_user
> % (vms[0].name, self.services["timeout"]))
> AssertionError: The virtual machine 974d1275-b747-441c-83b2-1795de4d87df 
> failed to start even after 10 minutes
> 
> --
> 
> 
> Thanks,
> 
> Girish Shilamkar
> 
>



Re: Review Request 14556: CLOUDSTACK-4766: Add timeout if vm does not reach running state

2013-10-09 Thread Girish Shilamkar


> On Oct. 9, 2013, 11:03 a.m., sanjeev n wrote:
> > Patch looks good. However still test is failing. Is it a product issue?

The problem was if vm did not come up, the test went into infinite loop. Now it 
times out gracefully and therefore we see the failure, which is the expected 
output. 


- Girish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14556/#review26816
---


On Oct. 9, 2013, 9:50 a.m., Girish Shilamkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14556/
> ---
> 
> (Updated Oct. 9, 2013, 9:50 a.m.)
> 
> 
> Review request for cloudstack, sanjeev n and venkata swamy babu  budumuru.
> 
> 
> Bugs: CLOUDSTACK-4766
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> The test use to wait for ever for the vm to attain Running state.
> Added a timeout so it does not get into infinite loop.
> 
> 
> Diffs
> -
> 
>   test/integration/component/test_reset_ssh_keypair.py ace4499 
> 
> Diff: https://reviews.apache.org/r/14556/diff/
> 
> 
> Testing
> ---
> 
> Verified that the timeout works. 
> 
> test_01_reset_keypair_normal_user 
> (test_reset_ssh_keypair.TestResetSSHKeyUserRights)Verify API 
> resetSSHKeyForVirtualMachine for non admin non root ... FAIL
> test_02_reset_keypair_domain_admin 
> (test_reset_ssh_keypair.TestResetSSHKeyUserRights)
> Verify API resetSSHKeyForVirtualMachine for domain admin non root ... ok
> test_03_reset_keypair_root_admin 
> (test_reset_ssh_keypair.TestResetSSHKeyUserRights)
> Verify API resetSSHKeyForVirtualMachine for domain admin root ... ok
> test_01_reset_ssh_keys (test_reset_ssh_keypair.TestResetSSHKeypair)
> Test Reset SSH keys for VM  already having SSH key ... ok
> test_02_reset_ssh_key_password_enabled_template 
> (test_reset_ssh_keypair.TestResetSSHKeypair)
> Reset SSH keys for VM  created from password enabled template and ... ok
> test_03_reset_ssh_with_no_key (test_reset_ssh_keypair.TestResetSSHKeypair)
> Reset SSH key for VM  having no SSH key ... ok
> test_04_reset_key_passwd_enabled_no_key 
> (test_reset_ssh_keypair.TestResetSSHKeypair)
> Reset SSH keys for VM  created from password enabled template and ... ok
> test_05_reset_key_in_running_state 
> (test_reset_ssh_keypair.TestResetSSHKeypair)
> Reset SSH keys for VM  already having SSH key when VM is in running ... ok
> test_06_reset_key_passwd_enabled_vm_running 
> (test_reset_ssh_keypair.TestResetSSHKeypair)
> Reset SSH keys for VM  created from password enabled template and ... ok
> test_07_reset_keypair_invalid_params 
> (test_reset_ssh_keypair.TestResetSSHKeypair)
> Verify API resetSSHKeyForVirtualMachine with incorrect parameters … ok
> 
> ==
> FAIL: test_01_reset_keypair_normal_user 
> (test_reset_ssh_keypair.TestResetSSHKeyUserRights)
> Verify API resetSSHKeyForVirtualMachine for non admin non root
> --
> Traceback (most recent call last):
>   File "/root/girish/test_reset_ssh_keypair.py", line 1218, in 
> test_01_reset_keypair_normal_user
> % (vms[0].name, self.services["timeout"]))
> AssertionError: The virtual machine 974d1275-b747-441c-83b2-1795de4d87df 
> failed to start even after 10 minutes
> 
> --
> 
> 
> Thanks,
> 
> Girish Shilamkar
> 
>



Re: Review Request 14058: Including tests for VPC VM Lifecycle on Tagged hosts

2013-10-09 Thread Ashutosh Kelkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14058/
---

(Updated Oct. 9, 2013, 11:28 a.m.)


Review request for cloudstack, Girish Shilamkar, venkata swamy babu  budumuru, 
and Sheng Yang.


Changes
---

Added host tags to hosts through test cases. No need to manually tag the hosts.
Also, changed the "tags" in service offering to "hosttags", now it is 
interpreted as host tag and not storage tag.

Ran all test cases locally with a 3 host KVM advanced setup.

All new test cases running successfully (please see log in the "Testing Done" 
section.


Repository: cloudstack-git


Description
---

Added 10 tests for VPC VM liftcycle on tagged hosts

New class added  : TestVMLifeCycleDiffHosts

def test_01_deploy_instance_in_network(self):
def test_02_stop_instance_in_network(self):
def test_03_start_instance_in_network(self):
def test_04_reboot_instance_in_network(self):
def test_05_destroy_instance_in_network(self):
def test_06_recover_instance_in_network(self):
def test_07_migrate_instance_in_network(self):
def test_08_user_data(self):
def test_09_meta_data(self):
def test_10_expunge_instance_in_network(self):

This set of tests requires a multi host tagged setup (3 hosts - 2 hosts with 
tag 'host1' and  1 host with tag 'host2')


Diffs (updated)
-

  test/integration/component/test_vpc_vm_life_cycle.py 9844c1f 

Diff: https://reviews.apache.org/r/14058/diff/


Testing (updated)
---

Tested locally on KVM advanced setup.

Log:

test_01_deploy_instance_in_network 
(test_vpc_vm_life_cycle.TestVMLifeCycleDiffHosts)
Test deploy an instance in VPC networks ... ok
test_02_stop_instance_in_network 
(test_vpc_vm_life_cycle.TestVMLifeCycleDiffHosts)
Test stop an instance in VPC networks ... ok
test_03_start_instance_in_network 
(test_vpc_vm_life_cycle.TestVMLifeCycleDiffHosts)
Test start an instance in VPC networks ... ok
test_04_reboot_instance_in_network 
(test_vpc_vm_life_cycle.TestVMLifeCycleDiffHosts)
Test reboot an instance in VPC networks ... ok
test_05_destroy_instance_in_network 
(test_vpc_vm_life_cycle.TestVMLifeCycleDiffHosts)
Test destroy an instance in VPC networks ... ok
test_06_migrate_instance_in_network 
(test_vpc_vm_life_cycle.TestVMLifeCycleDiffHosts)
Test migrate an instance in VPC networks ... ok
test_07_user_data (test_vpc_vm_life_cycle.TestVMLifeCycleDiffHosts)
Test user data in virtual machines ... ok
test_08_meta_data (test_vpc_vm_life_cycle.TestVMLifeCycleDiffHosts)
Test meta data in virtual machines ... ok
test_09_expunge_instance_in_network 
(test_vpc_vm_life_cycle.TestVMLifeCycleDiffHosts)
Test expunge an instance in VPC networks ... ok


Thanks,

Ashutosh Kelkar



Re: [DISCUSS] Transaction Hell

2013-10-09 Thread Daan Hoogland
Darren,

Thank you for the great investigative work. As for the assert, I think
that if no one else at least Jenkins should run with asserts on, so we
can see if a commit breaks something.

As for the framework, I think your proposed solution sound good. I am
not a big fan of spring. It does some things very good but it tries to
do to much. If you say it is a good framework for transaction
management I have to trust you as I didn't use that for more then five
years. I would certainly not let you in the pit as sole owner of the
change though as I have been in a short struggle with what you are
trying to solve as well. I would like to hear from the originators of
the present transaction management though.

regards,
Daan

On Wed, Oct 9, 2013 at 11:38 AM, Frankie Onuonga  wrote:
> Hi Darren,
> Greetings from Nairobi.
>
> First, let me start by saying that you for the email .
> I personally think the options were explained very well.
>
> Now on a serious note, I think that it would be good practice to ensure that 
> we go with a standardized method of doing things.
> I therefore support the option to go with a framework.
> It should make management of code easier.
> Future development should also be much better.
>
> Kind Regards,
>
>
> Sent from my Windows Phone
> 
> From: Darren Shepherd
> Sent: 10/9/2013 11:46 AM
> To: dev@cloudstack.apache.org
> Subject: [DISCUSS] Transaction Hell
>
> Okay, please read this all, this is important...  I want you all to
> know that its personally important to me to attempt to get rid of ACS
> custom stuff and introduce patterns, frameworks, libraries, etc that I
> feel are more consistent with modern Java development and are
> understood by a wider audience.  This is one of the basic reasons I
> started the spring-modularization branch.  I just want to be able to
> leverage Spring in a sane way.  The current implementation in ACS is
> backwards and broken and abuses Spring to the point that leveraging
> Spring isn't really all that possible.
>
> So while I did the Spring work, I also started laying the ground work
> to get rid of the ACS custom transaction management.  The custom DAO
> framework and the corresponding transaction management has been a huge
> barrier to me extending ACS in the past.  When you look at how you are
> supposed to access the database, it's all very custom and what I feel
> isn't really all that straight forward.  I was debugging an issue
> today and figured out there is a huge bug in what I've done and that
> has lead me down this rabbit hole of what the correct solution is.
> Additionally ACS custom transaction mgmt is done in a way that
> basically breaks Spring too.
>
> At some point on the mailing list there was a small discussion about
> removing the @DB interceptor.  The @DB interceptor does txn.open() and
> txn.close() around a method.  If a method forgets to commit or
> rollback the txn, txn.close() will rollback the transaction for the
> method.  So the general idea of the change was to instead move that
> logic to the bottom of the call stack.  The assumption being that the
> @DB code was just an additional check to ensure the programmer didn't
> forget something and we could instead just do that once at the bottom
> of the stack.  Oh how wrong I was.
>
> The problem is that developers have relied on the @DB interceptor to
> handle rollback for them.  So you see the following code quite a bit
>
> txn.start()
> ...
> txn.commit()
>
> And there is no sign of a rollback anywhere.  So the rollback will
> happen if some exception is thrown.  By moving the @DB logic to the
> bottom of stack what happens is the transaction is not rolled back
> when the developer thought it would and madness ensues.  So that
> change was bad.  So what to do  Here's my totally bias description
> of solutions:
>
> Option A or "Custom Forever!":  Go back to custom ACS AOP and the @DB.
>  This is what one would think is the simplest and safest solution.
> We'll it ain't really.  Here's the killer problem, besides that fact
> that it makes me feel very sad inside, the current rollback behavior
> is broken in certain spots in ACS.  While investigating possible
> solutions I started looking at all the places that do programmatic txn
> management.  It's important to realize that the txn framework only
> works properly if the method in which you do txn.start() has @DB on
> it.  There is a java assert in currentTxn() that attempts to make sure
> that @DB is there.  But nobody runs with asserts on.  So there are
> places in ACS where transactions are started and no @DB is there, but
> it happens to work because some method in the stack has @DB.  So to
> properly go back to option A we really need to fix all places that
> don't have @DB, plus make sure people always run with asserts on.  And
> then give up making the ACS world a better place and just do things
> how we

Re: [MERGE] spring-modularization to master - Spring Modularization

2013-10-09 Thread Prasanna Santhanam
On Tue, Oct 08, 2013 at 10:20:01AM -0700, Darren Shepherd wrote:
> >From what I can gather it seems that master currently fails the BVT
> (and know when I say BVT I mean that black box that apparently exists
> somewhere doing something, but I have no clue what it really means).
> So in turn my spring modularization branch will additionally fail BVT.
>  Citrix internal QA ran some tests against my branch and they mostly
> passed but some failed.  Its quite difficult to sort through this all
> because tests are failing on master.  So I don't know what to do at
> this point.  At least my branch won't completely blow up everything.
> I just know the longer it takes to merge this the more painful it will
> be

Darren, Sorry about the frustrations. I haven't been able to keep
track these last few weeks about your work.

I have run the tests against your branch and everything looks good so
far. Whatever is failing is failing on master as well, so we'll ignore
that. I will share the report shortly.

They are the result of the job here, which I will summarize in a more
readable format and mail out later tonight once it completes:
http://jenkins.buildacloud.org/job/test-matrix/571/

Each bubble in that grid represents a hypervisor profile against which
all the tests were run.
 
> Honestly this is all quite frustrating for myself being new to
> contributing to ACS.  I feel somewhat lost in the whole process of how
> to get features in.  I'll refrain from venting my frustrations.
> 
> Darren

-- 
Prasanna.,


Powered by BigRock.com



RE: [4.2] [xenserver] [system vms] Xentools

2013-10-09 Thread Joris van Lieshout
Hi Paul,

I will defiantly make sure this improvement gets committed back. I haven't been 
able to do this because we are also testing some other customizations that 
might be interesting to commit back. In short these tweaks will allow the svm 
to handle higher traffic loads.

Kind regards, 
Joris van Lieshout
Schuberg Philis
schubergphilis.com 

-Original Message-
From: Paul Angus [mailto:paul.an...@shapeblue.com] 
Sent: woensdag 9 oktober 2013 10:00
To: dev@cloudstack.apache.org
Cc: Joris van Lieshout
Subject: RE: [4.2] [xenserver] [system vms] Xentools

Thanks Daan.


Regards,

Paul Angus
S: +44 20 3603 0540 | M: +447711418784 | T: CloudyAngus paul.an...@shapeblue.com

-Original Message-
From: Daan Hoogland [mailto:daan.hoogl...@gmail.com]
Sent: 09 October 2013 08:15
To: dev
Cc: Joris van Lieshout
Subject: Re: [4.2] [xenserver] [system vms] Xentools

Paul, Harikrishna,

A colleague at Schuberg Philis created a template with the xenserver tools.  He 
was going to give the template back to the communitee. I think he is too busy 
but I'll remind him/ask him for the status.

regards,
Daan

On Tue, Oct 8, 2013 at 1:14 PM, Harikrishna Patnala 
 wrote:
> Hi,
> Earlier there was discussion on putting xen tools in systemvms. Please 
> look into that.
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201309.mbox/%3
> cce676527.4398b%25abhinandan.prat...@citrix.com%3E
>
>
> From: Paul Angus [mailto:paul.an...@shapeblue.com]
> Sent: Tuesday, October 08, 2013 1:36 PM
> To: dev@cloudstack.apache.org
> Subject: [4.2] [xenserver] [system vms] Xentools
>
> Hi all,
>
> The current XenServer system VM template doesn't seem to include XenServer 
> Tools - is this by design?
>
>
> systemvmtemplate-2013-07-12-master-xen.vhd.bz2
>
>
> Regards
>
> Paul Angus
> Senior Consultant / Cloud Architect
>
> [cid:image002.png@01CE1071.C6CC9C10]
>
> S: +44 20 3603 0540 | M:
> +447711418784 | T: CloudyAngus
> paul.an...@shapeblue.com | 
> www.shapeblue.com | Twitter:@shapeblue
> ShapeBlue Ltd, 53 Chandos Place, Covent Garden, London, WC2N 4HS
>
> Apache CloudStack Bootcamp training courses
> 02/03 October,
> London
> 13/14 November,
> London
> 27/28 November,
> Bangalore />
> 08/09 January 2014,
> London
>
> This email and any attachments to it may be confidential and are intended 
> solely for the use of the individual to whom it is addressed. Any views or 
> opinions expressed are solely those of the author and do not necessarily 
> represent those of Shape Blue Ltd or related companies. If you are not the 
> intended recipient of this email, you must neither take any action based upon 
> its contents, nor copy or show it to anyone. Please contact the sender if you 
> believe you have received this email in error. Shape Blue Ltd is a company 
> incorporated in England & Wales. ShapeBlue Services India LLP is a company 
> incorporated in India and is operated under license from Shape Blue Ltd. 
> Shape Blue Brasil Consultoria Ltda is a company incorporated in Brasil and is 
> operated under license from Shape Blue Ltd. ShapeBlue is a registered 
> trademark.
This email and any attachments to it may be confidential and are intended 
solely for the use of the individual to whom it is addressed. Any views or 
opinions expressed are solely those of the author and do not necessarily 
represent those of Shape Blue Ltd or related companies. If you are not the 
intended recipient of this email, you must neither take any action based upon 
its contents, nor copy or show it to anyone. Please contact the sender if you 
believe you have received this email in error. Shape Blue Ltd is a company 
incorporated in England & Wales. ShapeBlue Services India LLP is a company 
incorporated in India and is operated under license from Shape Blue Ltd. Shape 
Blue Brasil Consultoria Ltda is a company incorporated in Brasil and is 
operated under license from Shape Blue Ltd. ShapeBlue is a registered trademark.


Re: Contrail plugin

2013-10-09 Thread Murali Reddy
On 09/10/13 6:10 AM, "Pedro Roque Marques" 
wrote:

>Darren,
>Using ActionEvents is not desirable for the plugin either... today
>CloudStack lacks the ability for a component/plugin to associate itself
>to the life-cycle of an object. It would be ideal if there was a generic
>way to accomplish that...
>The contrail plugin wants to know about project creation and deletion.
>Projects need to be reflected in the contrail-api server; the project
>delete notification is necessary to understand that the project is not
>longer used.
>
>When it comes to network objects it would also be nice if there could a
>for a component/plugin to associate itself with network creation /
>implementation... Currently there are calls from the NetworkManager to
>components such as firewall/LB/etc which should be optional.
>
>Ideally, i would like to have a common notification mechanism for a
>cloudstack object (e.g. project, network, nic). This could optionally
>allow a plugin to "veto" an operation and/or just get notified that it
>occurred.

For 'veto' to operations, yes you are right that CloudStack core does not
give hooks into workflow for plug-ins to veto. But there are defined
abstraction (Guru, Planner, Investigator etc.) through which plug-ins can
hook into orchestration. Also there is a publisher/subscriber modelled
EventBus that was added in 4.1 by which CloudStack components can get the
entity life cycle and state changes. I see that contrail plug-in
implements an Interceptor for achieving this, but I will leave a comment
in the review if 'EventBus' can leveraged.

Thanks,
Murali

>
>thanks,
>  Pedro.
>
>On Oct 8, 2013, at 10:25 AM, Darren Shepherd wrote:
>
>> I'll take some time and review this code too.  I already know there's
>> going to be a conflict with the stuff I did in the spring
>> modularization branch.  Moving to full spring we have gotten rid of
>> the custom ACS AOP for the mgmt server.  This code relies on that
>> framework so it will have to move to being a standard
>> org.aopalliance.intercept.MethodInteceptor.  I don't particularly care
>> for the fact that functionally it being keyed off of ActionEvents (or
>> AOP in general).  I'll need to review the code further to provide more
>> useful feedback, but just giving the heads up that the AOP stuff will
>> have to change a bit.
>> 
>> Darren
>
>




Re: LXC and Networking

2013-10-09 Thread Francois Gaudreault

I posted on the users list, but no one responded. I am trying here :)

In addition to this, I tried to add an LXC cluster in an existing Zone, 
and I got an error about the LXC resource manager not being found.


I do have some questions regarding LXC containers and the networking. 
First, should I put the LXC clusters on a separate zone or I can use 
an existing zone (which I built for Xen) and just create a new LXC 
cluster? Second, I saw in the doc that bridges are manually created... 
what happens if I have hundreds/thousands of guests VLANs? Will the 
agent automate that part (planning to use OVS here)?



Thanks!

--
Francois Gaudreault
Architecte de Solution Cloud | Cloud Solutions Architect
fgaudrea...@cloudops.com
514-629-6775
- - -
CloudOps
420 rue Guy
Montréal QC  H3J 1S6
www.cloudops.com
@CloudOps_



Re: LXC and Networking

2013-10-09 Thread Chip Childers
Adding Phong, who was the original author of the LXC plugin.  Phong, can
you help Francois out?

-chip

On Wed, Oct 09, 2013 at 10:04:31AM -0400, Francois Gaudreault wrote:
> I posted on the users list, but no one responded. I am trying here :)
> 
> In addition to this, I tried to add an LXC cluster in an existing
> Zone, and I got an error about the LXC resource manager not being
> found.
> >
> >I do have some questions regarding LXC containers and the
> >networking. First, should I put the LXC clusters on a separate
> >zone or I can use an existing zone (which I built for Xen) and
> >just create a new LXC cluster? Second, I saw in the doc that
> >bridges are manually created... what happens if I have
> >hundreds/thousands of guests VLANs? Will the agent automate that
> >part (planning to use OVS here)?
> >
> Thanks!
> 
> -- 
> Francois Gaudreault
> Architecte de Solution Cloud | Cloud Solutions Architect
> fgaudrea...@cloudops.com
> 514-629-6775
> - - -
> CloudOps
> 420 rue Guy
> Montréal QC  H3J 1S6
> www.cloudops.com
> @CloudOps_
> 
> 


Re: [New Feature FS] SSL Offload Support for Cloudstack

2013-10-09 Thread Syed Ahmed
> Additionally, I don't see that the code handles the
> chain also.  I could be wrong, but just from reading the code it seems
> to assume the "cert" string produces a single cert.  Correct me if I'm
> wrong.

You are right. This just handles a single certificate. It does not
handle trust chains.

>  (Don't let people remove a cert that is currently used).

You bring up an good point. I assumed that if the user deletes a
certificate, we internally go ahead and remove all the bindings from
LB rules. This happens with VM instances. The difference that I see is
if you remove the certificate, the LB goes down but if you remove an
instance the LB may still work.

>  I'm guessing you won't need to create any
> new managers, so that probably won't apply.

If I am giving the functionality of updating a certificate then my
CertificateService should call the resource layer for updating on the
device as well. Now from you mail what I understand is this has to be
done by creating a manager. So either we go ahead and drop the
updateCertificate call or we will have to add a manager for this. I
prefer dropping the updateCert call as I don't see anyone updating
cert contents frequently.

Thanks
-Syed







On Wed, Oct 9, 2013 at 1:02 AM, Darren Shepherd
 wrote:
> I'm not too sure that its going to be worth it to reuse
> KeyStoreManager.  That code is for storage of certs for the systemvm.
> So you need to ensure that your changes don't overlap with the
> systemvm code.  Additionally, I don't see that the code handles the
> chain also.  I could be wrong, but just from reading the code it seems
> to assume the "cert" string produces a single cert.  Correct me if I'm
> wrong.
>
> The absolute key thing for this feature, in my mind, is getting the
> input validation right.  If you don't give useful errors, you'll be
> handling requests from people not being able to insert a cert, and the
> default errors from java are typically not very useful.
>
> Regarding design.  API Commands should synchronously call a Service
> class, this is the create() method of an async command or execute() of
> a non async command.  That service method should do no more than input
> validation and saving things to the database.  If you need to
> communicate to resources, then it should be an async api command.  The
> async portion of the API command, this would be the execute() method,
> should also call the service class.  Since you ideally did all the
> input validation in the sync portion, not much validation should
> happen at this point.  But there may be some more intensive validation
> you want to do at this point.  After validation, the service class
> should call the manager.  The manager does the real business logic.
>
> So you have two groups of functionality.  Managing SSL certs and then
> apply SSL to LB.  For the SSL cert management, I'd probably create a
> new CertificateService.  I think all the functionality is really just
> manipulating the DB, so all the calls can be sync.  (Don't let people
> remove a cert that is currently used).

> Now one of my pet peeves in ACS is that Service interface and Manager
> interface are always implemented by the same class.  This is bad, you
> end up bluring the lines of the architecture and code becomes a big
> blob, so avoid doing that.
>
> Darren
>
> On Tue, Oct 8, 2013 at 4:44 PM, Syed Ahmed  wrote:
>> Thanks Edison for the reply.
>>
>> I see that there is already an implementation of KeystoreManager  which does
>> certificate validation and saves it in the keystore table. Also, the API
>> (UploadCustomCertificate) is only  callable from admin. I could add
>> functionality to this class for handling certificate chain  and also make
>> sure the table stores the account_id as well. We could reduce creating one
>> table by reusing the keystore table.
>>
>> I have a question about terminology. What is a service and a manager because
>> I see them both being used. In my case, I assume that my CertificateService
>> will have the KeystoreManager injected and the Service will serve as a proxy
>> between the Resource layer and the KeystoreManager which is the Db layer.
>> Will this approach work?
>>
>> Thanks
>> -Syed
>>
>>
>>
>> On Tue 08 Oct 2013 06:56:34 PM EDT, Edison Su wrote:
>>>
>>> There is command in ACS, UploadCustomCertificateCmd, which can receive ssl
>>> cert, key can chain as input. Maybe can share some code?
>>>
 -Original Message-
 From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
 Sent: Tuesday, October 08, 2013 1:54 PM
 To: dev@cloudstack.apache.org
 Subject: Re: [New Feature FS] SSL Offload Support for Cloudstack

 The API should do input validation on the SSL cert, key and chain.
 Getting those three pieces of info is usually difficult for most people
 to get
 right as they don't really know what those three things are.  There's
 about a
 80% chance most calls will fail.  If you rely on the provider it will
 proba

Re: [DISCUSS] Return ssh publickeys in listSSHKeyPairs

2013-10-09 Thread Ian Duffy
Great thanks for the feedback. Will get this applied at the weekend.

Just out of interest. In an account we have users. Those users have access
to all the VMs via the Cloudstack Management interface. However they don't
necessarily have access to the VMs(i.e. They do not know its password or
their public key is not contained within the machines authorized_keys).

Is there any way to add multiple SSH Public keys to a VM without powering
it down?

Basically, I want a way for all users of an account to share access to all
VMs owned by that account without having to manually store
passwords/private-ssh-keys on a separate system. Or by being able to inject
a SSH key or password reset without changing the power state of the VM.

Thanks.


On 8 October 2013 16:06, Chip Childers  wrote:

> On Tue, Oct 08, 2013 at 01:05:32PM +, Frankie Onuonga wrote:
> > Hi guys ,
> > From my fundamentals of security I do not think returning a public key
> is wrong .
> > What is sensitive is the private key.
> > As long as that is bit exposed in any way then all should be well.
>
> +1 to Frankie's comment
>


Re: Latest Master DB issue

2013-10-09 Thread Darren Shepherd
I think I'll solve this with a fourth option.  I don't particularly
like two thing.  Threads that never access the DB before are now doing
it (albeit just once) and if there is any error calling the DB it does
exit(1).  I'm going to make the CallingContext instance that gets
registered for the system account do lazy loading of the User and
Account objects.  So the system CallContext will still be available on
all thread, but if you never access it, it will never hit the
database.  Second, I'm going to remove the exit(1) that was already in
the code.  Its just a bad idea.  If you were to ever exhauster your DB
connection pool (or the number of connections on mysql), your entire
mgmt stack will kill itself and by default we don't run ACS under
process supervision.

Darren

On Wed, Oct 9, 2013 at 1:59 AM, Darren Shepherd
 wrote:
> Kelven,
>
> So the issue is the combination of my code with VmwareContextPool.
> With the ManagedContext framework, what I've done is replace every
> Runnable and TimerTask with ManagedContextRunnable and
> ManagedContextTimerTask.  Those classes will run the onEnter, onLeave
> logic which will setup CallContext as a result.  I purposely changed
> every reference so that everything was consistent.  I didn't want to
> have developers have to consider when or when not to use
> ManagedContext, so just always use it.  So as a result even though
> your code has nothing to do with the DB, the ManageContextListener for
> CallContext does.
>
> So I'm sure your thinking that Resources shouldn't call the DB.  The
> ManagedContext framework only does things when deployed in a managed
> JVM.  The only managed JVM is the mgmt server.  If it's AWSAPI, Usage,
> or an Agent JVM, then that framework does nothing.
>
> So there are three possible solutions I see
>
> 1) Change VmwareContextPool to be initialized from the @PostConstruct
> or start() method.
> 2) Revert the change to VmwareContextPool to use TimerTask and not
> ManageContextTimerTask
> 3) Merge spring modularization.
>
> The simplest stop gap would be option 2.
>
> Darren
>
> On Tue, Oct 8, 2013 at 5:35 PM, Kelven Yang  wrote:
>> The problem seems to me is whether or not a background job that touches
>> with database respects the bootstrap initialization order. As of
>> VmwareContextPool itself, its background job does something fully within
>> its own territory (no database, no reference outside). and vmware-base
>> package was originally designed to be running on its own without assuming
>> any container that offers unified lifecycle management. I don't think this
>> type of background job has anything to do with the failure in this
>> particular case.
>>
>> However, I do agree that we need to clean up and unify a few things inside
>> the CloudStack, especially on life-cycle management and all
>> background-jobs that their execution path touches with component
>> life-cyle, auto-wiring, AOP etc.
>>
>> To live with the time before the spring modularization merge, we just need
>> to figure out which background job that triggers all these and get it
>> fixed, it used to work before even it is fragile, I don't think the fix of
>> the problem is impossible. Is anyone working on this issue?
>>
>> Kelven
>>
>>
>>
>>
>> On 10/8/13 2:35 PM, "Darren Shepherd"  wrote:
>>
>>>Some more info about this.  What specifically is happening is that the
>>>VmwareContextPool call is creating a Timer during the constructor of
>>>the class which is being constructed in a static block from
>>>VmwareContextFactory.  So when the VmwareContextFactory class is
>>>loaded by the class loader, the background thread is created.  Which
>>>is way, way before the Database upgrade happens.  This will still be
>>>fixed if we merge the spring modularization, but this vmware code
>>>should change regardless.  Background threads should only be launched
>>>from a @PostConstruct or ComponentLifecycle.start() method.  They
>>>should not be started when a class is constructed or loaded.
>>>
>>>Darren
>>>
>>>
>>>On Tue, Oct 8, 2013 at 2:22 PM, Darren Shepherd
>>> wrote:
 Hey, I half way introduced this issue in a really long and round about
 way.  I don't think there's a good simple fix unless we merge the
 spring-modularization branch.  I'm going to look further into it.  But
 here's the background of why we are seeing this.

 I introduced "Managed Context" framework that will wrap all the
 background threads and manage the thread locals.  This was the union
 of CallContext, ServerContext, and AsyncJob*Context into one simple
 framework.  The problem with ACS though is that A LOT of background
 threads are spawned at all different random times of the
 initialization.  So what is happening is that during the
 initialization of some bean its kicking off a background thread that
 tries to access the database before the database upgrade has ran.  Now
 the CallContext has a strange suicidal behaviour (this was already
 there,

Re: Review Request 14549: Rename net.juniper.contrail to org.apache.cloudstack.network.contrail

2013-10-09 Thread Murali Reddy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14549/#review26819
---


- Is there a reason why no new isolation type was not added for 'contrail 
controller'. For other overlay technologies (STT, GRE, VXLAN) that CloudStack 
support there is an isolation type and corresponding Guru that handles 
isolation type. 

- There is a 'EventBus' on to which all events generated by CloudStack gets 
published. Right approach would be to plug-in to subscribe to interested events 
from event bus. But the current implementation of EventBus expects external 
AMQP server, so it may not be ideal. EventInterceptor approach implemented in 
the plug-in works fine to get the notification. But enabling it by default in 
ApplicationContext does not seem right thing to do. 

- If you can add some more details in to the FS on deployment model it will 
give more perspective to reviewers

+ is the VRouter mentioned in the FS is a appliance provisioned by 
CloudStack for each guest network by servicemanager? Or its a logical router on 
the dataplane that does the forwarding?
+ is BGP/MPLS required on the IP fabric and the Hypervisors


- Murali Reddy


On Oct. 8, 2013, 11:58 p.m., Pedro Marques wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14549/
> ---
> 
> (Updated Oct. 8, 2013, 11:58 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> Rename net.juniper.contrail to org.apache.cloudstack.network.contrail.
> 
> 
> Diffs
> -
> 
>   client/tomcatconf/applicationContext.xml.in 0ab2515 
>   client/tomcatconf/componentContext.xml.in 157ad5a 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/api/command/CreateServiceInstanceCmd.java
>  92f5eeb 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/api/response/ServiceInstanceResponse.java
>  1b7a7d8 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailElement.java
>  885a60f 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailElementImpl.java
>  3a38020 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailGuru.java
>  c655b0b 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailManager.java
>  5195793 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ContrailManagerImpl.java
>  8a3ca1b 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/DBSyncGeneric.java
>  d169b37 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/EventUtils.java
>  acd1bed 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ManagementNetworkGuru.java
>  bad2502 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ModelDatabase.java
>  f9e7c24 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServerDBSync.java
>  4c8c2e9 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServerDBSyncImpl.java
>  06daf12 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServerEventHandler.java
>  6f0ecf2 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServerEventHandlerImpl.java
>  aa4e9d5 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServiceManager.java
>  f3884fb 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServiceManagerImpl.java
>  b90792c 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/management/ServiceVirtualMachine.java
>  9c8b61d 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/FloatingIpModel.java
>  ca90666 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/FloatingIpPoolModel.java
>  8e238fd 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/InstanceIpModel.java
>  ff08560 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/ModelController.java
>  7abb40a 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/ModelObject.java
>  7cd420c 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/ModelObjectBase.java
>  4b05e96 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/ServiceInstanceModel.java
>  f65bfc7 
>   
> plugins/network-elements/juniper-contrail/src/net/juniper/contrail/model/VMInterfaceModel

Re: [DISCUSS] Return ssh publickeys in listSSHKeyPairs

2013-10-09 Thread Wei ZHOU
I need this as well.

AFAIK, an agent is needed in user vms.

> Is there any way to add multiple SSH Public keys to a VM without powering
it down?


Re: [New Feature FS] SSL Offload Support for Cloudstack

2013-10-09 Thread Syed Ahmed
Thanks Murali for your response.

> - any reason why you choose assignTo/RemoveFrom load balancer rule API's

I thought this made more sense than create/updateLoadbalancerRule as
we would have to call update to delete a cert which I find somewhat
confusing. Also this is semantically similar to attaching instances as
in you have a separate entity which is being bound to different LBs.

> - to me SSL termination is value added service from providers perspective,
> So only if network offering permits, SSL termination can be used.

Got it. This seems the logical way. Good point.

> I see session persistence based on SSL session id's please see if
> this can supported.

I was looking at persistence based on SSL session id's [1]  and found
that this is supported for SSL bridge type of configuration where
netscaler just bridges the data without any encryption/decryption. I
am not sure about health checks and autoscale. I will look that up.


> - on the requirement #4, don't infer protocol based on the public/private
> ports and impose restrictions. Current createLoadBalancer API does not
> take protocol parameter so its inferred at device layer. NetScaler seems
> to support SSL with other TCP ports as well.


Would it be a good Idea to add protocol to the createLoadBalancer API.
I think this makes sense in the long run as currently I cannot create
a HTTP loadbalncer for port 8080 from cloudstack.

> One general implementation note, network rules can be reprogrammed. So
> operations to configure SSL cert, binding cert to virtual server etc need
> to be idempotent at NetScaler resource.

Thanks. I'll keep that in  mind when implementing the resource layer.

Thank a lot again for the replies. This is really helpful.

-- REFERENCES --

[1]  
http://support.citrix.com/proddocs/topic/netscaler-load-balancing-93/ns-lb-persistence-configuring-ssl-session-id-tsk.html




-Syed

On Wed, Oct 9, 2013 at 5:57 AM, Murali Reddy  wrote:
> Thanks Syed for the FS.
>
> Couple of comments:
>
> - any reason why you choose assignTo/RemoveFrom load balancer rule API's
> to assign/remove certificate to LB rules? These api's are basically for
> controlling VM membership with a load balancer rule. Can
> create/updateLoadBalancerRule api's b used for registering and
> de-registering certificate with load balancer rule?
>
> - to me SSL termination is value added service from providers perspective,
> its better we expose service differentiation in the network offering (e.g
> dedicated load balancer capability of LB service in the network offering).
> So only if network offering permits, SSL termination can be used.
>
> - does adding SSL termination support to load balancer affect/complement
> current session persistence, health monitoring, auto scale functionality
> anyway? I see session persistence based on SSL session id's please see if
> this can supported.
>
> - as commented by other, fail fast at service layer on invalid certificate.
>
> - on the requirement #4, don't infer protocol based on the public/private
> ports and impose restrictions. Current createLoadBalancer API does not
> take protocol parameter so its inferred at device layer. NetScaler seems
> to support SSL with other TCP ports as well.
>
> One general implementation note, network rules can be reprogrammed. So
> operations to configure SSL cert, binding cert to virtual server etc need
> to be idempotent at NetScaler resource.
>
> [1]
> http://support.citrix.com/proddocs/topic/netscaler-ssl-93/ns-ssl-offloading
> -other-tcp-protocols-tsk.html
>
> On 08/10/13 11:44 PM, "Syed Ahmed"  wrote:
>
>>Hi,
>>
>>I have been working on adding SSL offload functionality to cloudstack
>>and make it work for Netscaler. I have an initial design documented at
>>https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Offloading+Supp
>>ort
>>and I would really love your feedback. The bug for this is
>>https://issues.apache.org/jira/browse/CLOUDSTACK-4821 .
>>
>>Thanks,
>>-Syed
>>
>>
>>
>
>


Re: [DISCUSS] Transaction Hell

2013-10-09 Thread Darren Shepherd
Some random responses.

Spring is good at two things.  The core IoC container and TX mgmt.
The way I use Spring for IoC is I always ensure there is no dependency
on Spring itself.  This means I don't really care too much about the
Spring core IoC container and would be open to using a different
container, but just for practical reasons its safer to stick with
Spring because of the huge user base and excellent quality.

Even if you don't use Spring IoC, you can still use Spring TX.  I have
never found a better TX mgmt framework than Spring.  It is extremely
comprehensive.  So if your not using container managed transactions
(JEE and EJB nonsense), then I really think Spring TX is the only
option.

Spring MVC, Spring Batch, Spring WS, Spring [Whatever], are mostly
nonsense in my mind and I don't care to use them.  I don't subscribe
to the whole Spring world.  Just IoC (because IoC truly revolutionized
Java) and TX.

Regarding asserts.  I don't care for Java asserts.  Computers are fast
and extra null checks here and there are fine to just have in the
code.  It is not like we are writing a stock trading platform.  I
would prefer we never use asserts.  I hate to find bugs that only
happen when asserts are on or off.  If your not aware, Java asserts
are turned on with a runtime option "-ea."  They aren't a build time
thing like you see in C macro based ASSERTS.

Regarding what standard framework we should use.  At the end of the
day I'm trying to move towards Spring TX.  The best way to use Spring
TX is to use declarative transaction management.  The existing ACS
code base uses a lot of programmatic tx mgmt.  Well, in reality its
100% programmatic.  What I mean by programmatic is that in the code
you do tx.start(), tx.commit().  With declarative tx mgmt you use AOP
to isolate portions of code (like dao methods) to
start/commit/rollback the TX.  I don't think we will move away from
programmatic tx mgmt anytime soon, so if we move to Spring I don't
want us to suddenly start using Spring APIs everywhere.  Instead I
will propose we stay with a light wrapper, which will be the API I
already proposed in Option B.  It won't be 100% Spring, as we still
have a custom light wrapper, but at least at that point, anybody who
knows spring TX (or reads the docs) will understand what is going on.
Also protects us in the future if some hot new framework comes along
we should ideally be able to switch to it.

(Random thing, I've noticed in the code instances where the code is
doing txn.start() and then multiple txn.commit().  The transaction
framework AFAIK wasn't built for this.  The first commit will commit
or not depending on if the tx was nested.  The second commit will
either not commit or will destroy the internal state stack of the
transaction class as it will commit the parent tx.  So, for example, I
don't think AlertDaoImpl.archiveAlert() does at all what the
programmer who wrote it thought.  It's stuff like this that makes me
think we just need to bite the bullet, do the hard work, and clean
this up.)

Darren


Re: Contrail plugin

2013-10-09 Thread Darren Shepherd
Pedro,

I completely understand what you are saying as I think this is a gap
too I would like to address.  I'm thinking of something a little
bigger and grander than what you would need right now so that's not
helpful, as I won't get around to doing anything for a couple months.
I do not think EventBus is the correct approach.  That is complete
overkill for what you need.  I hate to tell people that in order to
run contrail they need to setup a HA instance of rabbitmq.  That is a
lot of headache.

I was thinking maybe a small listener framework on API commands would
suffice, but that leads me to my next concern.  I'm a stickler on
reliability.  So currently with ACS AOP approach, or some listener
framework, you can't achieve complete reliability.  99% of the time it
will work, but since there is no way to tie your code to the
transaction of the API code, there is a small window if the JVM dies,
you won't be called.  I know this sounds nit picky, but I hate when
there is a situation that could happen that there is no recovery from.

Is it possible for the contrail plugin to on-demand register the
account/projects?  So only when the element/guru/whatever is called,
you try to sync up the two systems?

Darren


Re: [MERGE] spring-modularization to master - Spring Modularization

2013-10-09 Thread Prasanna Santhanam
Here's the BVT result from spring-modularization branch: Failures that
are listed exist on master as well. So the branch doesn't break
anything additionally.

+1 to merge

http://jenkins.buildacloud.org/job/test-matrix/571/

Xen: Test Run: #892
Logs: 
http://jenkins.buildacloud.org/job/test-matrix/571/distro=centos63,hypervisor=xen,profile=xen62/

Total:85
Fail :22
Skip :1


name   passfailskip
test_internal_lb/ 1   0   0
test_iso/ 1   1   0
test_pvlan/   1   0   0
test_deploy_vm/   1   0   0
test_volumes/ 0   2   0
test_deploy_vms_with_varied_deploymentplanners/   3   0   0
test_guest_vlan_range/1   0   0
test_regions/ 1   0   0
test_vm_life_cycle/   8   2   0
test_non_contigiousvlan/  1   0   0
test_ssvm/9   1   0
test_disk_offerings/  3   0   0
test_reset_vm_on_reboot/  1   0   0
test_network_acl/ 1   0   0
test_global_settings/ 0   2   0
test_privategw_acl/   0   1   0
test_affinity_groups/ 0   1   0
test_multipleips_per_nic/ 1   0   0
test_deploy_vm_with_userdata/ 2   0   0
test_loadbalance/ 0   3   0
test_templates/   5   2   1
test_public_ip_range/ 1   0   0
test_vm_snapshots/0   3   0
test_resource_detail/ 0   1   0
test_portable_publicip/   2   0   0
test_network/ 5   2   0
test_routers/ 9   0   0
test_nic/ 1   0   0
test_scale_vm/1   0   0
test_service_offerings/   3   1   0



Regressions

name   
durationage
integration.smoke.test_ssvm.TestSSVMs.test_07_reboot_ssvm   
 86.497  1
integration.smoke.test_templates.TestCreateTemplate.test_01_create_template 
 61.837  1

Failures

name
durationage
:setup 
 128.276  1
integration.smoke.test_volumes.TestCreateVolume.test_01_create_volume   
   0.083 22
:setup 
 263.343 21
integration.smoke.test_vm_life_cycle.TestVMLifeCycle.test_09_expunge_vm 
 125.244 21
integration.smoke.test_vm_life_cycle.TestVMLifeCycle.test_10_attachAndDetach_iso
1059.25   2
integration.smoke.test_global_settings.TestUpdateConfigWithScope.test_UpdateConfigParamWithScope
   0.028 21
integration.smoke.test_global_settings.TestUpdateConfigWithScope.test_UpdateConfigParamWithScope
   0.059 21
integration.smoke.test_privategw_acl.TestPrivateGwACL.test_privategw_acl
 208.673 21
:setup   
   0  2
integration.smoke.test_loadbalance.TestLoadBalance.test_01_create_lb_rule_src_nat
   1023.78   3
integration.smoke.test_loadbalance.TestLoadBalance.test_02_create_lb_rule_non_nat
   1003.45   3
integration.smoke.test_loadbalance.TestLoadBalance.test_assign_and_removal_lb   
1003.46   3
integration.smoke.test_templates.TestTemplates.test_03_delete_template  
   5.12  21
integration.smoke.test_vm_snapshots.TestVmSnapshot.test_01_create_vm_snapshots  
 993.25   2
integration.smoke.test_vm_snapshots.TestVmSnapshot.test_02_revert_vm_snapshots  
 993.239  2
integration.smoke.test_vm_snapshots.TestVmSnapshot.test_03_delete_vm_snapshots  
   1.23   2
:setup  
   0 21
integration.smoke.test_network.TestPortForwarding.test_01_port_fwd_on_src_nat   
1003.63   2
integration.

Re: Contrail plugin

2013-10-09 Thread Pedro Roque Marques
Darren,

On Oct 9, 2013, at 8:35 AM, Darren Shepherd wrote:

> Pedro,
> 
> I completely understand what you are saying as I think this is a gap
> too I would like to address.  I'm thinking of something a little
> bigger and grander than what you would need right now so that's not
> helpful, as I won't get around to doing anything for a couple months.
> I do not think EventBus is the correct approach.  That is complete
> overkill for what you need.  I hate to tell people that in order to
> run contrail they need to setup a HA instance of rabbitmq.  That is a
> lot of headache.
> 
> I was thinking maybe a small listener framework on API commands would
> suffice, but that leads me to my next concern.  I'm a stickler on
> reliability.  So currently with ACS AOP approach, or some listener
> framework, you can't achieve complete reliability.  99% of the time it
> will work, but since there is no way to tie your code to the
> transaction of the API code, there is a small window if the JVM dies,
> you won't be called.  I know this sounds nit picky, but I hate when
> there is a situation that could happen that there is no recovery from.

The contrail plugin can resynchronize itself on failure. It assumes that the 
API connection between the management server and the contrail-api server can 
have transient failures... when that API connection comes up the code 
synchronizes the databases. 2/3 of the code in the plugin is actually to 
perform this task: being able to deal with transient failures.

> 
> Is it possible for the contrail plugin to on-demand register the
> account/projects?

Currently we do that via the ActionEvent mechanism but also assume the 
possibility of transient failure / timing issues.

>  So only when the element/guru/whatever is called,
> you try to sync up the two systems?

The plugin re-sync when the API session is established and it also runs a 
periodic check between the databases in order to detect  any synchronization 
failure... which typically implies a bug on the plugin side.
The model is one where the CloudStack DB is consider the master and the 
Contrail API is updated with the contents that are present in the CloudStack DB.

> 
> Darren



Re: [MERGE] spring-modularization to master - Spring Modularization

2013-10-09 Thread Chip Childers
+1 from me too.


On Wed, Oct 9, 2013 at 12:13 PM, Prasanna Santhanam  wrote:

> Here's the BVT result from spring-modularization branch: Failures that
> are listed exist on master as well. So the branch doesn't break
> anything additionally.
>
> +1 to merge
>
> http://jenkins.buildacloud.org/job/test-matrix/571/
>
> Xen: Test Run: #892
> Logs:
>
> http://jenkins.buildacloud.org/job/test-matrix/571/distro=centos63,hypervisor=xen,profile=xen62/
> 
> Total:85
> Fail :22
> Skip :1
> 
>
> name   passfailskip
> test_internal_lb/ 1   0   0
> test_iso/ 1   1   0
> test_pvlan/   1   0   0
> test_deploy_vm/   1   0   0
> test_volumes/ 0   2   0
> test_deploy_vms_with_varied_deploymentplanners/   3   0   0
> test_guest_vlan_range/1   0   0
> test_regions/ 1   0   0
> test_vm_life_cycle/   8   2   0
> test_non_contigiousvlan/  1   0   0
> test_ssvm/9   1   0
> test_disk_offerings/  3   0   0
> test_reset_vm_on_reboot/  1   0   0
> test_network_acl/ 1   0   0
> test_global_settings/ 0   2   0
> test_privategw_acl/   0   1   0
> test_affinity_groups/ 0   1   0
> test_multipleips_per_nic/ 1   0   0
> test_deploy_vm_with_userdata/ 2   0   0
> test_loadbalance/ 0   3   0
> test_templates/   5   2   1
> test_public_ip_range/ 1   0   0
> test_vm_snapshots/0   3   0
> test_resource_detail/ 0   1   0
> test_portable_publicip/   2   0   0
> test_network/ 5   2   0
> test_routers/ 9   0   0
> test_nic/ 1   0   0
> test_scale_vm/1   0   0
> test_service_offerings/   3   1   0
> 
>
>
> Regressions
> 
> name
> durationage
> integration.smoke.test_ssvm.TestSSVMs.test_07_reboot_ssvm
>86.497  1
> integration.smoke.test_templates.TestCreateTemplate.test_01_create_template
>  61.837  1
>
> Failures
> 
> name
>  durationage
> :setup
>128.276  1
> integration.smoke.test_volumes.TestCreateVolume.test_01_create_volume
>  0.083 22
> :setup
>263.343 21
> integration.smoke.test_vm_life_cycle.TestVMLifeCycle.test_09_expunge_vm
>125.244 21
> integration.smoke.test_vm_life_cycle.TestVMLifeCycle.test_10_attachAndDetach_iso
>1059.25   2
> integration.smoke.test_global_settings.TestUpdateConfigWithScope.test_UpdateConfigParamWithScope
>   0.028 21
> integration.smoke.test_global_settings.TestUpdateConfigWithScope.test_UpdateConfigParamWithScope
>   0.059 21
> integration.smoke.test_privategw_acl.TestPrivateGwACL.test_privategw_acl
>   208.673 21
> :setup
>  0  2
> integration.smoke.test_loadbalance.TestLoadBalance.test_01_create_lb_rule_src_nat
>   1023.78   3
> integration.smoke.test_loadbalance.TestLoadBalance.test_02_create_lb_rule_non_nat
>   1003.45   3
> integration.smoke.test_loadbalance.TestLoadBalance.test_assign_and_removal_lb
>   1003.46   3
> integration.smoke.test_templates.TestTemplates.test_03_delete_template
> 5.12  21
> integration.smoke.test_vm_snapshots.TestVmSnapshot.test_01_create_vm_snapshots
>   993.25   2
> integration.smoke.test_vm_snapshots.TestVmSnapshot.test_02_revert_vm_snapshots
>   993.239  2
> integration.smoke.test_vm_snapshots.TestVmSnapshot.test_03_delete_vm_snapshots
> 1.23   2
> :setup
> 0 21
> integration.smoke.test_network.TestPortForwarding.test_01_port_fwd_on_src_nat
>   1003.63  

Re: System VM template caching

2013-10-09 Thread kel...@backbonetechnology.com
I was able to create a work around and several community builders tested it out 
for me and it works.

I will not submit to docs as it's a hack, but I have updated the JIRA ticket.

Work around can be found at:

http://cloud.kelceydamage.com/cloudfire/blog/2013/10/08/conquering-the-cloudstack-4-2-dragon-kvm/

Thanks,

-Kelcey

Sent from my HTC

- Reply message -
From: "Soheil Eizadi" 
To: "dev@cloudstack.apache.org" 
Subject: System VM template caching
Date: Tue, Oct 8, 2013 9:49 PM

This seems similar to a problem I had on 4.3 Master with System VM creation. If 
it is the same problem you can check from API command ListTemplateCommand(), 
from CloudMonkey and see if it returns a bogus cached value. Then you know it 
is the same problem.
-Soheil 

http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201309.mbox/%3c6717ec2e5a665a40a5af626d7d4fa90625e2e...@x2008mb1.infoblox.com%3E


From: Kelcey Jamison Damage [kel...@backbonetechnology.com]
Sent: Tuesday, October 08, 2013 12:19 PM
To: Cloud Dev
Subject: [ACS 4.2][Upgrade Issue] System VM template caching

Hi,

Several of us in the community have found that with the 4.2 upgrade, when we 
download and install the latest system VM template, CloudStack refuses to use 
this template for new system VM creation. CloudStack appears to be usin a 
cached or master-clone variant of the old template.

This is causing may KVM+ 4.2 users to have broken clouds, A bug report has been 
filed: https://issues.apache.org/jira/browse/CLOUDSTACK-4826

My question is: Does anyone know where this cached template is stored? when 
CloudStack goes to make a new system VM, where does it look first for the 
template? We have observed through testing that this is no secondary storage.

Thanks in advance.

Kelcey Damage | Infrastructure Systems Architect
Strategy | Automation | Cloud Computing | Technology Development

Backbone Technology, Inc
604-331-1152 ext. 114

Re: System VM template caching

2013-10-09 Thread Sebastien Goasguen
Are you sure about this ? I thought we needed to register them as user vm and 
that the upgrade would convert them to systemVM automatically

-Sebastien

On 9 Oct 2013, at 17:22, 
"kel...@backbonetechnology.com" wrote:

> I was able to create a work around and several community builders tested it 
> out for me and it works.
> 
> I will not submit to docs as it's a hack, but I have updated the JIRA ticket.
> 
> Work around can be found at:
> 
> http://cloud.kelceydamage.com/cloudfire/blog/2013/10/08/conquering-the-cloudstack-4-2-dragon-kvm/
> 
> Thanks,
> 
> -Kelcey
> 
> Sent from my HTC
> 
> - Reply message -
> From: "Soheil Eizadi" 
> To: "dev@cloudstack.apache.org" 
> Subject: System VM template caching
> Date: Tue, Oct 8, 2013 9:49 PM
> 
> This seems similar to a problem I had on 4.3 Master with System VM creation. 
> If it is the same problem you can check from API command 
> ListTemplateCommand(), from CloudMonkey and see if it returns a bogus cached 
> value. Then you know it is the same problem.
> -Soheil 
> 
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201309.mbox/%3c6717ec2e5a665a40a5af626d7d4fa90625e2e...@x2008mb1.infoblox.com%3E
> 
> 
> From: Kelcey Jamison Damage [kel...@backbonetechnology.com]
> Sent: Tuesday, October 08, 2013 12:19 PM
> To: Cloud Dev
> Subject: [ACS 4.2][Upgrade Issue] System VM template caching
> 
> Hi,
> 
> Several of us in the community have found that with the 4.2 upgrade, when we 
> download and install the latest system VM template, CloudStack refuses to use 
> this template for new system VM creation. CloudStack appears to be usin a 
> cached or master-clone variant of the old template.
> 
> This is causing may KVM+ 4.2 users to have broken clouds, A bug report has 
> been filed: https://issues.apache.org/jira/browse/CLOUDSTACK-4826
> 
> My question is: Does anyone know where this cached template is stored? when 
> CloudStack goes to make a new system VM, where does it look first for the 
> template? We have observed through testing that this is no secondary storage.
> 
> Thanks in advance.
> 
> Kelcey Damage | Infrastructure Systems Architect
> Strategy | Automation | Cloud Computing | Technology Development
> 
> Backbone Technology, Inc
> 604-331-1152 ext. 114


Re: [DISCUSS] Transaction Hell

2013-10-09 Thread Pedro Roque Marques
Darren,
My assumption when I tried to make sense of the transaction code is that the 
underlying motivation is that the code is trying to create a transaction per 
API call and then allow multiple modules to implement that API call...
i.e. the intent is do use a bit of what i would call a "web-server logic"...

1. API call starts.
2. Module X starts transaction...
3. Module Y does some other changes in the DB...
4. Either the API call completes successfully or not... commit or error back to 
the user.

 I suspect that this was probably the starting point... but it doesn't really 
work as i describe above. Often when the plugin i'm working on screws up (or 
XenServer is misconfigured) one ends up with DB objects in inconsistent state.

I suspect that the DB Transaction design needs to include what is the 
methodology for the design of the management server.

In an ideal world, i would say that API calls just check authorization and 
quotas and should store the intent of the management server to reach the 
desired state. State machines that can then deal with transient failures should 
then attempt to move the state of the system to the state intended by the user. 
That however doesn't seem to reflect the current state of the management server.

I may be completely wrong... Can you give an example in proposal B of how a 
transaction would span multiple modules of code ?

  Pedro.

On Oct 9, 2013, at 1:44 AM, Darren Shepherd wrote:

> Okay, please read this all, this is important...  I want you all to
> know that its personally important to me to attempt to get rid of ACS
> custom stuff and introduce patterns, frameworks, libraries, etc that I
> feel are more consistent with modern Java development and are
> understood by a wider audience.  This is one of the basic reasons I
> started the spring-modularization branch.  I just want to be able to
> leverage Spring in a sane way.  The current implementation in ACS is
> backwards and broken and abuses Spring to the point that leveraging
> Spring isn't really all that possible.
> 
> So while I did the Spring work, I also started laying the ground work
> to get rid of the ACS custom transaction management.  The custom DAO
> framework and the corresponding transaction management has been a huge
> barrier to me extending ACS in the past.  When you look at how you are
> supposed to access the database, it's all very custom and what I feel
> isn't really all that straight forward.  I was debugging an issue
> today and figured out there is a huge bug in what I've done and that
> has lead me down this rabbit hole of what the correct solution is.
> Additionally ACS custom transaction mgmt is done in a way that
> basically breaks Spring too.
> 
> At some point on the mailing list there was a small discussion about
> removing the @DB interceptor.  The @DB interceptor does txn.open() and
> txn.close() around a method.  If a method forgets to commit or
> rollback the txn, txn.close() will rollback the transaction for the
> method.  So the general idea of the change was to instead move that
> logic to the bottom of the call stack.  The assumption being that the
> @DB code was just an additional check to ensure the programmer didn't
> forget something and we could instead just do that once at the bottom
> of the stack.  Oh how wrong I was.
> 
> The problem is that developers have relied on the @DB interceptor to
> handle rollback for them.  So you see the following code quite a bit
> 
> txn.start()
> ...
> txn.commit()
> 
> And there is no sign of a rollback anywhere.  So the rollback will
> happen if some exception is thrown.  By moving the @DB logic to the
> bottom of stack what happens is the transaction is not rolled back
> when the developer thought it would and madness ensues.  So that
> change was bad.  So what to do  Here's my totally bias description
> of solutions:
> 
> Option A or "Custom Forever!":  Go back to custom ACS AOP and the @DB.
> This is what one would think is the simplest and safest solution.
> We'll it ain't really.  Here's the killer problem, besides that fact
> that it makes me feel very sad inside, the current rollback behavior
> is broken in certain spots in ACS.  While investigating possible
> solutions I started looking at all the places that do programmatic txn
> management.  It's important to realize that the txn framework only
> works properly if the method in which you do txn.start() has @DB on
> it.  There is a java assert in currentTxn() that attempts to make sure
> that @DB is there.  But nobody runs with asserts on.  So there are
> places in ACS where transactions are started and no @DB is there, but
> it happens to work because some method in the stack has @DB.  So to
> properly go back to option A we really need to fix all places that
> don't have @DB, plus make sure people always run with asserts on.  And
> then give up making the ACS world a better place and just do things
> how we always have...
> 
> Option B or "Progress is G

Re: [DISCUSS] Return ssh publickeys in listSSHKeyPairs

2013-10-09 Thread Ian Duffy
> AFAIK, an agent is needed in user vms.

I was hoping it'd be possible via the file sharing capabilities many
of the hypervisor tools offer.
Although I would imagine security issues could arise from that.

On 9 October 2013 15:51, Wei ZHOU  wrote:
> I need this as well.
>
> AFAIK, an agent is needed in user vms.
>
>> Is there any way to add multiple SSH Public keys to a VM without powering
> it down?


Re: System VM template caching

2013-10-09 Thread kel...@backbonetechnology.com
This process you mention for registering as a user VM I can't find in the 
upgrade guide. Do you have a link?

The work around works because CloudStack defaults to re-download the system 
template is it is in NOT_DOWNLOADED status. How ever the database never gets 
updated for the life of the build.

CS is designed it seems to only ever have a single unaltered template_id '3' 
record. And I guess the template download script just overwrites the sane GUID 
filename.

Seems like a solution that could be handled in a better way.

Either way, this is what has been working for us in the community.

Sent from my HTC

- Reply message -
From: "Sebastien Goasguen" 
To: "dev@cloudstack.apache.org" 
Cc: "dev@cloudstack.apache.org" 
Subject: System VM template caching
Date: Wed, Oct 9, 2013 9:30 AM

Are you sure about this ? I thought we needed to register them as user vm and 
that the upgrade would convert them to systemVM automatically

-Sebastien

On 9 Oct 2013, at 17:22, 
"kel...@backbonetechnology.com" wrote:

> I was able to create a work around and several community builders tested it 
> out for me and it works.
> 
> I will not submit to docs as it's a hack, but I have updated the JIRA ticket.
> 
> Work around can be found at:
> 
> http://cloud.kelceydamage.com/cloudfire/blog/2013/10/08/conquering-the-cloudstack-4-2-dragon-kvm/
> 
> Thanks,
> 
> -Kelcey
> 
> Sent from my HTC
> 
> - Reply message -
> From: "Soheil Eizadi" 
> To: "dev@cloudstack.apache.org" 
> Subject: System VM template caching
> Date: Tue, Oct 8, 2013 9:49 PM
> 
> This seems similar to a problem I had on 4.3 Master with System VM creation. 
> If it is the same problem you can check from API command 
> ListTemplateCommand(), from CloudMonkey and see if it returns a bogus cached 
> value. Then you know it is the same problem.
> -Soheil 
> 
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201309.mbox/%3c6717ec2e5a665a40a5af626d7d4fa90625e2e...@x2008mb1.infoblox.com%3E
> 
> 
> From: Kelcey Jamison Damage [kel...@backbonetechnology.com]
> Sent: Tuesday, October 08, 2013 12:19 PM
> To: Cloud Dev
> Subject: [ACS 4.2][Upgrade Issue] System VM template caching
> 
> Hi,
> 
> Several of us in the community have found that with the 4.2 upgrade, when we 
> download and install the latest system VM template, CloudStack refuses to use 
> this template for new system VM creation. CloudStack appears to be usin a 
> cached or master-clone variant of the old template.
> 
> This is causing may KVM+ 4.2 users to have broken clouds, A bug report has 
> been filed: https://issues.apache.org/jira/browse/CLOUDSTACK-4826
> 
> My question is: Does anyone know where this cached template is stored? when 
> CloudStack goes to make a new system VM, where does it look first for the 
> template? We have observed through testing that this is no secondary storage.
> 
> Thanks in advance.
> 
> Kelcey Damage | Infrastructure Systems Architect
> Strategy | Automation | Cloud Computing | Technology Development
> 
> Backbone Technology, Inc
> 604-331-1152 ext. 114

Re: System VM template caching

2013-10-09 Thread Ahmad Emneina
there might be a more sound way than swapping the template on secondary
storage and hacking the db. I figure one should be able to register the
template, via the documented route... wait for download to succeed, upgrade
the binary bits. then when the system vm's fail to launch. delete the
cached template on primary storage. That should be enough to trigger a new
system vm propagated to the primary storage. I find it hard to believe this
passed QA...


On Wed, Oct 9, 2013 at 9:39 AM, kel...@backbonetechnology.com <
kel...@backbonetechnology.com> wrote:

> This process you mention for registering as a user VM I can't find in the
> upgrade guide. Do you have a link?
>
> The work around works because CloudStack defaults to re-download the
> system template is it is in NOT_DOWNLOADED status. How ever the database
> never gets updated for the life of the build.
>
> CS is designed it seems to only ever have a single unaltered template_id
> '3' record. And I guess the template download script just overwrites the
> sane GUID filename.
>
> Seems like a solution that could be handled in a better way.
>
> Either way, this is what has been working for us in the community.
>
> Sent from my HTC
>
> - Reply message -
> From: "Sebastien Goasguen" 
> To: "dev@cloudstack.apache.org" 
> Cc: "dev@cloudstack.apache.org" 
> Subject: System VM template caching
> Date: Wed, Oct 9, 2013 9:30 AM
>
> Are you sure about this ? I thought we needed to register them as user vm
> and that the upgrade would convert them to systemVM automatically
>
> -Sebastien
>
> On 9 Oct 2013, at 17:22, "kel...@backbonetechnology.com"<
> kel...@backbonetechnology.com> wrote:
>
> > I was able to create a work around and several community builders tested
> it out for me and it works.
> >
> > I will not submit to docs as it's a hack, but I have updated the JIRA
> ticket.
> >
> > Work around can be found at:
> >
> >
> http://cloud.kelceydamage.com/cloudfire/blog/2013/10/08/conquering-the-cloudstack-4-2-dragon-kvm/
> >
> > Thanks,
> >
> > -Kelcey
> >
> > Sent from my HTC
> >
> > - Reply message -
> > From: "Soheil Eizadi" 
> > To: "dev@cloudstack.apache.org" 
> > Subject: System VM template caching
> > Date: Tue, Oct 8, 2013 9:49 PM
> >
> > This seems similar to a problem I had on 4.3 Master with System VM
> creation. If it is the same problem you can check from API command
> ListTemplateCommand(), from CloudMonkey and see if it returns a bogus
> cached value. Then you know it is the same problem.
> > -Soheil
> >
> >
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201309.mbox/%3c6717ec2e5a665a40a5af626d7d4fa90625e2e...@x2008mb1.infoblox.com%3E
> >
> > 
> > From: Kelcey Jamison Damage [kel...@backbonetechnology.com]
> > Sent: Tuesday, October 08, 2013 12:19 PM
> > To: Cloud Dev
> > Subject: [ACS 4.2][Upgrade Issue] System VM template caching
> >
> > Hi,
> >
> > Several of us in the community have found that with the 4.2 upgrade,
> when we download and install the latest system VM template, CloudStack
> refuses to use this template for new system VM creation. CloudStack appears
> to be usin a cached or master-clone variant of the old template.
> >
> > This is causing may KVM+ 4.2 users to have broken clouds, A bug report
> has been filed: https://issues.apache.org/jira/browse/CLOUDSTACK-4826
> >
> > My question is: Does anyone know where this cached template is stored?
> when CloudStack goes to make a new system VM, where does it look first for
> the template? We have observed through testing that this is no secondary
> storage.
> >
> > Thanks in advance.
> >
> > Kelcey Damage | Infrastructure Systems Architect
> > Strategy | Automation | Cloud Computing | Technology Development
> >
> > Backbone Technology, Inc
> > 604-331-1152 ext. 114
>


Re: what's the reason for the placeholder nic in VPC/VR?

2013-10-09 Thread Alena Prokharchyk
I've just tested it on the latest master, don't see placeholder nic
created for the VPC VR.

In addition to the case Murali explained, placeholder nic is being created
per Shared network case using VR as DHCP provider. Its done to preserve
the same ip address for the case when VR is being expunged/re-created
during the network restart/Vrdestroy. As a result of expunge VR its nic is
being cleaned up - and ip released - , so we had to make sure that the new
VR would get the same ip. More details are in
26b892daf3cdccc2e25711730c7e1efcdec7d2dc, CLOUDSTACK-1771.

-Alena.


On 10/9/13 2:57 AM, "Murali Reddy"  wrote:

>On 09/10/13 11:33 AM, "Darren Shepherd" 
>wrote:
>
>>Why is a placeholder nic created before the VRs for the VPC are created?
>>
>>Darren
>>
>
>Generally place holder nic is used in cases where cloudstack uses a subnet
>IP from the guest subnet, but ip is not used for any VM nic's. In most of
>the external network devices, needs a subnet IP from the guest network
>CIDR, cloudstack creates a place holder nic and allocates a subnet ip.
>
>




Re: System VM template caching

2013-10-09 Thread kel...@backbonetechnology.com
We tested deleting the template on primary storage, and it failed to regenerate.

What is the documented method for registering the new template, can you link to 
it?

It seems many of us failed to find any documentation about updating the 
template period, not just in the 4.2 release doc under upgrades from 4.1

Thanks.

Sent from my HTC

- Reply message -
From: "Ahmad Emneina" 
To: "dev@cloudstack.apache.org" 
Subject: System VM template caching
Date: Wed, Oct 9, 2013 9:48 AM

there might be a more sound way than swapping the template on secondary
storage and hacking the db. I figure one should be able to register the
template, via the documented route... wait for download to succeed, upgrade
the binary bits. then when the system vm's fail to launch. delete the
cached template on primary storage. That should be enough to trigger a new
system vm propagated to the primary storage. I find it hard to believe this
passed QA...


On Wed, Oct 9, 2013 at 9:39 AM, kel...@backbonetechnology.com <
kel...@backbonetechnology.com> wrote:

> This process you mention for registering as a user VM I can't find in the
> upgrade guide. Do you have a link?
>
> The work around works because CloudStack defaults to re-download the
> system template is it is in NOT_DOWNLOADED status. How ever the database
> never gets updated for the life of the build.
>
> CS is designed it seems to only ever have a single unaltered template_id
> '3' record. And I guess the template download script just overwrites the
> sane GUID filename.
>
> Seems like a solution that could be handled in a better way.
>
> Either way, this is what has been working for us in the community.
>
> Sent from my HTC
>
> - Reply message -
> From: "Sebastien Goasguen" 
> To: "dev@cloudstack.apache.org" 
> Cc: "dev@cloudstack.apache.org" 
> Subject: System VM template caching
> Date: Wed, Oct 9, 2013 9:30 AM
>
> Are you sure about this ? I thought we needed to register them as user vm
> and that the upgrade would convert them to systemVM automatically
>
> -Sebastien
>
> On 9 Oct 2013, at 17:22, "kel...@backbonetechnology.com"<
> kel...@backbonetechnology.com> wrote:
>
> > I was able to create a work around and several community builders tested
> it out for me and it works.
> >
> > I will not submit to docs as it's a hack, but I have updated the JIRA
> ticket.
> >
> > Work around can be found at:
> >
> >
> http://cloud.kelceydamage.com/cloudfire/blog/2013/10/08/conquering-the-cloudstack-4-2-dragon-kvm/
> >
> > Thanks,
> >
> > -Kelcey
> >
> > Sent from my HTC
> >
> > - Reply message -
> > From: "Soheil Eizadi" 
> > To: "dev@cloudstack.apache.org" 
> > Subject: System VM template caching
> > Date: Tue, Oct 8, 2013 9:49 PM
> >
> > This seems similar to a problem I had on 4.3 Master with System VM
> creation. If it is the same problem you can check from API command
> ListTemplateCommand(), from CloudMonkey and see if it returns a bogus
> cached value. Then you know it is the same problem.
> > -Soheil
> >
> >
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201309.mbox/%3c6717ec2e5a665a40a5af626d7d4fa90625e2e...@x2008mb1.infoblox.com%3E
> >
> > 
> > From: Kelcey Jamison Damage [kel...@backbonetechnology.com]
> > Sent: Tuesday, October 08, 2013 12:19 PM
> > To: Cloud Dev
> > Subject: [ACS 4.2][Upgrade Issue] System VM template caching
> >
> > Hi,
> >
> > Several of us in the community have found that with the 4.2 upgrade,
> when we download and install the latest system VM template, CloudStack
> refuses to use this template for new system VM creation. CloudStack appears
> to be usin a cached or master-clone variant of the old template.
> >
> > This is causing may KVM+ 4.2 users to have broken clouds, A bug report
> has been filed: https://issues.apache.org/jira/browse/CLOUDSTACK-4826
> >
> > My question is: Does anyone know where this cached template is stored?
> when CloudStack goes to make a new system VM, where does it look first for
> the template? We have observed through testing that this is no secondary
> storage.
> >
> > Thanks in advance.
> >
> > Kelcey Damage | Infrastructure Systems Architect
> > Strategy | Automation | Cloud Computing | Technology Development
> >
> > Backbone Technology, Inc
> > 604-331-1152 ext. 114
>

master simulator build broken on ManagedContext

2013-10-09 Thread Prasanna Santhanam
Hi,

on the simulator build in jenkins [1] on starting jetty I see the
following issue:

ERROR [o.s.w.c.ContextLoader] (main:null) Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name 'ManagedContext' defined in class path resource
[applicationContext.xml]: Cannot create inner bean
'org.apache.cloudstack.context.CallContextListener#2fd0f745' of type
[org.apache.cloudstack.context.CallContextListener] while setting bean property
'listeners' with key [0]; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name 'org.apache.cloudstack.context.CallContextListener#2fd0f745':
Injection of autowired dependencies failed; nested exception is
org.springframework.beans.factory.BeanCreationException: Could not autowire
field: com.cloud.utils.db.EntityManager
org.apache.cloudstack.context.CallContextListener.entityMgr; nested exception
is org.springframework.beans.factory.CannotLoadBeanClassException: Cannot find
class [org.apache.cloudstack.framework.config.ConfigDepotImpl] for bean with
name 'configDepot' defined in class path resource
[simulatorComponentContext.xml]; nested exception is
java.lang.ClassNotFoundException:
org.apache.cloudstack.framework.config.ConfigDepotImpl


Can someone throw some light on this and how I can get the simulator build to
run some basic tests per checkin again?

Here's the steps to run the simulator on the dev environment:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Simulator+integration


[1] http://jenkins.buildacloud.org/view/simulator/job/start-jetty/280/console
-- 
Prasanna.,


Powered by BigRock.com



Re: [DISCUSS] Transaction Hell

2013-10-09 Thread Darren Shepherd
Pedro,

>From a high level I think we'd probably agree.  Generally I feel an
IaaS platform is largely a metadata management framework that stores
the "desired" state of the infrastructure and then pro-actively tries
to reconcile the desired state with reality.  So failures should be
recovered from easily as inconsistency will be discovered and
reconciled.  Having sad that, ACS is not at all like that.  It is very
task oriented.  Hopefully I/we/everyone can change that, its a huge
concern of mine.  The general approach in ACS I see is do task X and
hopefully it works.  If it doesn't work, well hopefully we didn't
leave things in an inconsistent state.  If we find it does leave
things in an inconsistent state, write a cleanup thread to fix bad
things in bad states

Regarding TX specifically.  This is a huge topic.  I really don't know
where to start.  I have so many complaints with the data access in
ACS.  There's what I'd like to see, but its so far from what it really
is.  Instead I'll address specifically your question.

I wish we were doing transaction per API, but I don't think that was
ever a consideration.  I do think the sync portion of API commands
should be wrapped in a single transaction.  I really think the
original intention of the Transaction framework was to assist in
cleaning up resources that people always forget to close.  I think
that is mostly it.

The general guidelines of how I'd like transactions to work would be

1) Synchronous portions of API commands are wrapped in a single
transaction.  Transaction propagation capability from spring tx can
then handle nesting transaction as more complicated transaction
management may be need in certain places.

2) Async jobs that run in a background threads should do small fine
grained transaction management.  Ideally no transactions.
Transactions should not be used as a locking mechanism.

Having said that, there are currently so many technical issues in
getting to that.  For example, with point 1, because IPC/MessageBus
and EventBus were added recently, that makes it difficult to do 1.
The problem is that you can't send a message while a DB tx is open
because the reciever may get the message before the commit.  So
messaging frameworks have to be written in consideration of the
transaction management.  Not saying you need to do complex XA style
transactions, there's simpler ways to do that.  So regarding points 1
and 2 I said.  That's what I'd like to see, but I know its a long road
to that.

Option B is really about introducing an API that will eventually serve
as a lightweight wrapper around Spring TX.  In the short term, if I do
option B, the implementation of the code will still be the custom ACS
TX mgmt.  So across modules, its sorta kinda works but not really.
But if I do the second step of replacing custom ACS TX impl with
Spring TX, it will follow how Spring TX works.  If we have Sprint TX
we can then leverage the transaction propagation features of it to
more sanely handle transaction nesting.

I feel I went a bit the weeds with that response, but maybe something
in there made sense.

Darren

On Wed, Oct 9, 2013 at 9:31 AM, Pedro Roque Marques
 wrote:
> Darren,
> My assumption when I tried to make sense of the transaction code is that the 
> underlying motivation is that the code is trying to create a transaction per 
> API call and then allow multiple modules to implement that API call...
> i.e. the intent is do use a bit of what i would call a "web-server logic"...
>
> 1. API call starts.
> 2. Module X starts transaction...
> 3. Module Y does some other changes in the DB...
> 4. Either the API call completes successfully or not... commit or error back 
> to the user.
>
>  I suspect that this was probably the starting point... but it doesn't really 
> work as i describe above. Often when the plugin i'm working on screws up (or 
> XenServer is misconfigured) one ends up with DB objects in inconsistent state.
>
> I suspect that the DB Transaction design needs to include what is the 
> methodology for the design of the management server.
>
> In an ideal world, i would say that API calls just check authorization and 
> quotas and should store the intent of the management server to reach the 
> desired state. State machines that can then deal with transient failures 
> should then attempt to move the state of the system to the state intended by 
> the user. That however doesn't seem to reflect the current state of the 
> management server.
>
> I may be completely wrong... Can you give an example in proposal B of how a 
> transaction would span multiple modules of code ?
>
>   Pedro.
>
> On Oct 9, 2013, at 1:44 AM, Darren Shepherd wrote:
>
>> Okay, please read this all, this is important...  I want you all to
>> know that its personally important to me to attempt to get rid of ACS
>> custom stuff and introduce patterns, frameworks, libraries, etc that I
>> feel are more consistent with modern Java development and are
>> understood by a wid

Re: master simulator build broken on ManagedContext

2013-10-09 Thread Darren Shepherd
I'll look at that.  Gimme about 15 minutes.

Darren

On Wed, Oct 9, 2013 at 10:12 AM, Prasanna Santhanam  wrote:
> Hi,
>
> on the simulator build in jenkins [1] on starting jetty I see the
> following issue:
>
> ERROR [o.s.w.c.ContextLoader] (main:null) Context initialization failed
> org.springframework.beans.factory.BeanCreationException: Error creating bean
> with name 'ManagedContext' defined in class path resource
> [applicationContext.xml]: Cannot create inner bean
> 'org.apache.cloudstack.context.CallContextListener#2fd0f745' of type
> [org.apache.cloudstack.context.CallContextListener] while setting bean 
> property
> 'listeners' with key [0]; nested exception is
> org.springframework.beans.factory.BeanCreationException: Error creating bean
> with name 'org.apache.cloudstack.context.CallContextListener#2fd0f745':
> Injection of autowired dependencies failed; nested exception is
> org.springframework.beans.factory.BeanCreationException: Could not autowire
> field: com.cloud.utils.db.EntityManager
> org.apache.cloudstack.context.CallContextListener.entityMgr; nested exception
> is org.springframework.beans.factory.CannotLoadBeanClassException: Cannot find
> class [org.apache.cloudstack.framework.config.ConfigDepotImpl] for bean with
> name 'configDepot' defined in class path resource
> [simulatorComponentContext.xml]; nested exception is
> java.lang.ClassNotFoundException:
> org.apache.cloudstack.framework.config.ConfigDepotImpl
>
>
> Can someone throw some light on this and how I can get the simulator build to
> run some basic tests per checkin again?
>
> Here's the steps to run the simulator on the dev environment:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Simulator+integration
>
>
> [1] http://jenkins.buildacloud.org/view/simulator/job/start-jetty/280/console
> --
> Prasanna.,
>
> 
> Powered by BigRock.com
>


Re: [DISCUSS] Transaction Hell

2013-10-09 Thread Chiradeep Vittal
+1 to option B (for a lot of the reasons enunciated by Darren).
Also, let's get this in right away so that by 1/31/2014 we are confident
about the change and fixed any bugs uncovered by the new scheme.

On 10/9/13 10:29 AM, "Darren Shepherd"  wrote:

>Pedro,
>
>From a high level I think we'd probably agree.  Generally I feel an
>IaaS platform is largely a metadata management framework that stores
>the "desired" state of the infrastructure and then pro-actively tries
>to reconcile the desired state with reality.  So failures should be
>recovered from easily as inconsistency will be discovered and
>reconciled.  Having sad that, ACS is not at all like that.  It is very
>task oriented.  Hopefully I/we/everyone can change that, its a huge
>concern of mine.  The general approach in ACS I see is do task X and
>hopefully it works.  If it doesn't work, well hopefully we didn't
>leave things in an inconsistent state.  If we find it does leave
>things in an inconsistent state, write a cleanup thread to fix bad
>things in bad states
>
>Regarding TX specifically.  This is a huge topic.  I really don't know
>where to start.  I have so many complaints with the data access in
>ACS.  There's what I'd like to see, but its so far from what it really
>is.  Instead I'll address specifically your question.
>
>I wish we were doing transaction per API, but I don't think that was
>ever a consideration.  I do think the sync portion of API commands
>should be wrapped in a single transaction.  I really think the
>original intention of the Transaction framework was to assist in
>cleaning up resources that people always forget to close.  I think
>that is mostly it.
>
>The general guidelines of how I'd like transactions to work would be
>
>1) Synchronous portions of API commands are wrapped in a single
>transaction.  Transaction propagation capability from spring tx can
>then handle nesting transaction as more complicated transaction
>management may be need in certain places.
>
>2) Async jobs that run in a background threads should do small fine
>grained transaction management.  Ideally no transactions.
>Transactions should not be used as a locking mechanism.
>
>Having said that, there are currently so many technical issues in
>getting to that.  For example, with point 1, because IPC/MessageBus
>and EventBus were added recently, that makes it difficult to do 1.
>The problem is that you can't send a message while a DB tx is open
>because the reciever may get the message before the commit.  So
>messaging frameworks have to be written in consideration of the
>transaction management.  Not saying you need to do complex XA style
>transactions, there's simpler ways to do that.  So regarding points 1
>and 2 I said.  That's what I'd like to see, but I know its a long road
>to that.
>
>Option B is really about introducing an API that will eventually serve
>as a lightweight wrapper around Spring TX.  In the short term, if I do
>option B, the implementation of the code will still be the custom ACS
>TX mgmt.  So across modules, its sorta kinda works but not really.
>But if I do the second step of replacing custom ACS TX impl with
>Spring TX, it will follow how Spring TX works.  If we have Sprint TX
>we can then leverage the transaction propagation features of it to
>more sanely handle transaction nesting.
>
>I feel I went a bit the weeds with that response, but maybe something
>in there made sense.
>
>Darren
>
>On Wed, Oct 9, 2013 at 9:31 AM, Pedro Roque Marques
> wrote:
>> Darren,
>> My assumption when I tried to make sense of the transaction code is
>>that the underlying motivation is that the code is trying to create a
>>transaction per API call and then allow multiple modules to implement
>>that API call...
>> i.e. the intent is do use a bit of what i would call a "web-server
>>logic"...
>>
>> 1. API call starts.
>> 2. Module X starts transaction...
>> 3. Module Y does some other changes in the DB...
>> 4. Either the API call completes successfully or not... commit or error
>>back to the user.
>>
>>  I suspect that this was probably the starting point... but it doesn't
>>really work as i describe above. Often when the plugin i'm working on
>>screws up (or XenServer is misconfigured) one ends up with DB objects in
>>inconsistent state.
>>
>> I suspect that the DB Transaction design needs to include what is the
>>methodology for the design of the management server.
>>
>> In an ideal world, i would say that API calls just check authorization
>>and quotas and should store the intent of the management server to reach
>>the desired state. State machines that can then deal with transient
>>failures should then attempt to move the state of the system to the
>>state intended by the user. That however doesn't seem to reflect the
>>current state of the management server.
>>
>> I may be completely wrong... Can you give an example in proposal B of
>>how a transaction would span multiple modules of code ?
>>
>>   Pedro.
>>
>> On Oct 9, 2013, at 1:44 AM, D

RE: questions about registerIso API and updateIsoPermissions API

2013-10-09 Thread Jessica Wang
Alena,

> Jessica, did you mean updateIso? As updateIsoPermissions updates permissions 
> only. 
Alena, I meant UpdateIsoPermissions API, not updateIso API.
If you check Java file, it's UpdateIsoPermissionsCmd.java that extends 
BaseUpdateTemplateOrIsoPermissionsCmd.java which takes in isextractable 
parameter.

> Answering  your question - if user can specify the flag when registering the 
> template, he should be allowed to update it.
Thanks.

> Again, should be updateIso.
should be updateIsoPermissions

> its a bug if he can update the flag on existing object, but can't create the 
> object with this flag by default.
Thanks

Jessica

-Original Message-
From: Alena Prokharchyk 
Sent: Tuesday, October 08, 2013 5:21 PM
To: Jessica Wang; 
Cc: Nitin Mehta; Shweta Agarwal
Subject: Re: questions about registerIso API and updateIsoPermissions API

On 10/8/13 5:10 PM, "Jessica Wang"  wrote:

>Hi,
> 
>I have questions about registerIso API and updateIsoPermissions API.
> 
>(1) A normal user is allowed to specify isextractable property when
>registering an ISO (through registerIso API),
>
>but NOT allowed to update isextractable property when updating an ISO
>(through updateIsoPermissions API).
>Is this by design or it's just an API bug?

Jessica, did you mean updateIso? As UpdateIsoPermissions updates
permissions only. 
Answering  your question - if user can specify the flag when registering
the template, he should be allowed to update it.



> 
>(2) A normal user is NOT allowed to specify isfeatured property when
>registering an ISO (through registerIso API),
>
>but allowed to update isfeatured property when updating an ISO (through
>updateIsoPermissions API)?
>Is this by design or it's just an API bug?

Again, should be updateIso. And yes, its a bug if he can update the flag
on existing object, but can't create the object with this flag by default.


> 
>Jessica
>
>




Re: [MERGE] spring-modularization to master - Spring Modularization

2013-10-09 Thread Kelven Yang
+1

Kelven 

On 10/9/13 9:18 AM, "Chip Childers"  wrote:

>+1 from me too.
>
>
>On Wed, Oct 9, 2013 at 12:13 PM, Prasanna Santhanam 
>wrote:
>
>> Here's the BVT result from spring-modularization branch: Failures that
>> are listed exist on master as well. So the branch doesn't break
>> anything additionally.
>>
>> +1 to merge
>>
>> http://jenkins.buildacloud.org/job/test-matrix/571/
>>
>> Xen: Test Run: #892
>> Logs:
>>
>> 
>>http://jenkins.buildacloud.org/job/test-matrix/571/distro=centos63,hyperv
>>isor=xen,profile=xen62/
>> 
>> Total:85
>> Fail :22
>> Skip :1
>> 
>>
>> name   passfailskip
>> test_internal_lb/ 1   0   0
>> test_iso/ 1   1   0
>> test_pvlan/   1   0   0
>> test_deploy_vm/   1   0   0
>> test_volumes/ 0   2   0
>> test_deploy_vms_with_varied_deploymentplanners/   3   0   0
>> test_guest_vlan_range/1   0   0
>> test_regions/ 1   0   0
>> test_vm_life_cycle/   8   2   0
>> test_non_contigiousvlan/  1   0   0
>> test_ssvm/9   1   0
>> test_disk_offerings/  3   0   0
>> test_reset_vm_on_reboot/  1   0   0
>> test_network_acl/ 1   0   0
>> test_global_settings/ 0   2   0
>> test_privategw_acl/   0   1   0
>> test_affinity_groups/ 0   1   0
>> test_multipleips_per_nic/ 1   0   0
>> test_deploy_vm_with_userdata/ 2   0   0
>> test_loadbalance/ 0   3   0
>> test_templates/   5   2   1
>> test_public_ip_range/ 1   0   0
>> test_vm_snapshots/0   3   0
>> test_resource_detail/ 0   1   0
>> test_portable_publicip/   2   0   0
>> test_network/ 5   2   0
>> test_routers/ 9   0   0
>> test_nic/ 1   0   0
>> test_scale_vm/1   0   0
>> test_service_offerings/   3   1   0
>> 
>>
>>
>> Regressions
>> 
>> name
>> durationage
>> integration.smoke.test_ssvm.TestSSVMs.test_07_reboot_ssvm
>>86.497  1
>> 
>>integration.smoke.test_templates.TestCreateTemplate.test_01_create_templa
>>te
>>  61.837  1
>>
>> Failures
>> 
>> name
>>  durationage
>> :setup
>>128.276  1
>> integration.smoke.test_volumes.TestCreateVolume.test_01_create_volume
>>  0.083 22
>> :setup
>>263.343 21
>> integration.smoke.test_vm_life_cycle.TestVMLifeCycle.test_09_expunge_vm
>>125.244 21
>> 
>>integration.smoke.test_vm_life_cycle.TestVMLifeCycle.test_10_attachAndDet
>>ach_iso
>>1059.25   2
>> 
>>integration.smoke.test_global_settings.TestUpdateConfigWithScope.test_Upd
>>ateConfigParamWithScope
>>   0.028 21
>> 
>>integration.smoke.test_global_settings.TestUpdateConfigWithScope.test_Upd
>>ateConfigParamWithScope
>>   0.059 21
>> integration.smoke.test_privategw_acl.TestPrivateGwACL.test_privategw_acl
>>   208.673 21
>> :setup
>>  0  2
>> 
>>integration.smoke.test_loadbalance.TestLoadBalance.test_01_create_lb_rule
>>_src_nat
>>   1023.78   3
>> 
>>integration.smoke.test_loadbalance.TestLoadBalance.test_02_create_lb_rule
>>_non_nat
>>   1003.45   3
>> 
>>integration.smoke.test_loadbalance.TestLoadBalance.test_assign_and_remova
>>l_lb
>>   1003.46   3
>> integration.smoke.test_templates.TestTemplates.test_03_delete_template
>> 5.12  21
>> 
>>integration.smoke.test_vm_snapshots.TestVmSnapshot.test_01_create_vm_snap
>>shots
>>   993.25   2
>> 
>>integration.smoke.test_vm_snapshots.TestVmSnapshot.test_02_revert_vm_snap
>>shots
>>   993.239  2
>> 
>>integration.smoke.test_vm_snapshots.TestVmSnapshot.test_03_delet

Re: master simulator build broken on ManagedContext

2013-10-09 Thread Darren Shepherd
Prasanna,

Try now.  I pushed 0d7aa931b4892661df733cd1ba20fe139d13e59b for this.
The issue was that at some point the package of ConfigDepotImpl was
changed in some refactoring.  The applicationContext was changed for
this, but apparently somebody didn't check simulatorContext. The bean
should never have been in simulatorContext, so I deleted it.

Darren

On Wed, Oct 9, 2013 at 10:30 AM, Darren Shepherd
 wrote:
> I'll look at that.  Gimme about 15 minutes.
>
> Darren
>
> On Wed, Oct 9, 2013 at 10:12 AM, Prasanna Santhanam  wrote:
>> Hi,
>>
>> on the simulator build in jenkins [1] on starting jetty I see the
>> following issue:
>>
>> ERROR [o.s.w.c.ContextLoader] (main:null) Context initialization failed
>> org.springframework.beans.factory.BeanCreationException: Error creating bean
>> with name 'ManagedContext' defined in class path resource
>> [applicationContext.xml]: Cannot create inner bean
>> 'org.apache.cloudstack.context.CallContextListener#2fd0f745' of type
>> [org.apache.cloudstack.context.CallContextListener] while setting bean 
>> property
>> 'listeners' with key [0]; nested exception is
>> org.springframework.beans.factory.BeanCreationException: Error creating bean
>> with name 'org.apache.cloudstack.context.CallContextListener#2fd0f745':
>> Injection of autowired dependencies failed; nested exception is
>> org.springframework.beans.factory.BeanCreationException: Could not autowire
>> field: com.cloud.utils.db.EntityManager
>> org.apache.cloudstack.context.CallContextListener.entityMgr; nested exception
>> is org.springframework.beans.factory.CannotLoadBeanClassException: Cannot 
>> find
>> class [org.apache.cloudstack.framework.config.ConfigDepotImpl] for bean with
>> name 'configDepot' defined in class path resource
>> [simulatorComponentContext.xml]; nested exception is
>> java.lang.ClassNotFoundException:
>> org.apache.cloudstack.framework.config.ConfigDepotImpl
>>
>>
>> Can someone throw some light on this and how I can get the simulator build to
>> run some basic tests per checkin again?
>>
>> Here's the steps to run the simulator on the dev environment:
>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Simulator+integration
>>
>>
>> [1] http://jenkins.buildacloud.org/view/simulator/job/start-jetty/280/console
>> --
>> Prasanna.,
>>
>> 
>> Powered by BigRock.com
>>


Re: [DISCUSS] Transaction Hell

2013-10-09 Thread Pedro Roque Marques
Darren,
I generally agree with you... just trying to point out what could be pitfalls 
on the way to evolve the system.

On Oct 9, 2013, at 10:29 AM, Darren Shepherd wrote:
> 
> I wish we were doing transaction per API, but I don't think that was
> ever a consideration.  I do think the sync portion of API commands
> should be wrapped in a single transaction.  I really think the
> original intention of the Transaction framework was to assist in
> cleaning up resources that people always forget to close.  I think
> that is mostly it.

My understanding is that for instance when a VM is created you have a call flow 
that looks a bit like:

1. UserVmManagerImpl.createVirtualMachine (@DB, persist)
2. VirtualMachineManagerImpl.allocate (@DB, persist)
3. NetworkOrchestrator.allocate (@DB, persist)

My understanding is that an check in NetworkOrchestrator (e.g. nic parameters 
not being kosher) is supposed to rollback the transaction and remove the VM in 
the database...

There are some errors for which this mechanism works OK today I believe it 
would be desirable to have a proposal of how to deal with such an example and 
then attempt to implement it consistently. Even if it requires the programmer 
to understand that it needs to explicitly rollback the VM if the underlying 
layers throw an exception.

  Pedro.

RE: questions about registerIso API and updateIsoPermissions API

2013-10-09 Thread Jessica Wang
Nitin,

>  At the moment, I think that for Isos we should allow to edit it so would 
> call it an API bug.
Thanks.

> Register Iso does provide an option to mark an ISO featured. I see that in 
> the latest master.
That only works for admin, but NOT normal user.

If you log in as a normal user, then pass "isfeatured=true" to registerIso API, 
API will ignore it.
The newly registered template will have "isfeatured: false".

e.g.
http://10.215.3.26:8080/client/api?command=registerIso&response=json&sessionkey=u%2FVIHPJuPohidGKFd0lh6csG%2BfM%3D&name=normalUserIso1&displayText=normalUserIso1&url=http%3A%2F%2F10.223.110.231%2Fisos_64bit%2Fdummy.iso&zoneid=6bcd3bd9-591c-4d99-a164-d05b87df1b04&isfeatured=true&isextractable=false&bootable=true&osTypeId=b8cbfd6c-2d40-11e3-86aa-3c970e739c3e&ispublic=false&_=1381340961641
{
"registerisoresponse": {
"count": 1,
"iso": [
{
"id": "9b903876-f17c-4634-8463-8e3025259956",
"name": "normalUserIso1",
"displaytext": "normalUserIso1",
"ispublic": false,
"created": "2013-10-09T10:52:38-0700",
"isready": false,
"bootable": true,
"isfeatured": false,
"crossZones": false,
"ostypeid": "b8cbfd6c-2d40-11e3-86aa-3c970e739c3e",
"ostypename": "Apple Mac OS X 10.6 (32-bit)",
"account": "aaa_user",
"zoneid": "6bcd3bd9-591c-4d99-a164-d05b87df1b04",
"zonename": "jw-adv",
"status": "",
"domain": "aaa",
"domainid": "47b09d73-84ef-48dc-9b73-1720bad600cb",
"isextractable": false,
"tags": []
}
]
}
}

Jessica

From: Nitin Mehta
Sent: Tuesday, October 08, 2013 5:27 PM
To: Jessica Wang; 
Cc: Alena Prokharchyk; Shweta Agarwal
Subject: Re: questions about registerIso API and updateIsoPermissions API

Answers inline.

From: Jessica Wang mailto:jessica.w...@citrix.com>>
Date: Tuesday 8 October 2013 5:10 PM
To: "mailto:dev@cloudstack.apache.org>>" 
mailto:dev@cloudstack.apache.org>>
Cc: Alena Prokharchyk 
mailto:alena.prokharc...@citrix.com>>, Nitin 
Mehta mailto:nitin.me...@citrix.com>>, Shweta Agarwal 
mailto:shweta.agar...@citrix.com>>
Subject: questions about registerIso API and updateIsoPermissions API

Hi,

I have questions about registerIso API and updateIsoPermissions API.

(1) A normal user is allowed to specify isextractable property when registering 
an ISO (through registerIso API),
but NOT allowed to update isextractable property when updating an ISO (through 
updateIsoPermissions API).
Is this by design or it's just an API bug?

Nitin>> This is a grey area. This was done for templates (Isos just inherited 
it) because derived templates may or may not belong to the same user and we 
want to follow the principle of least privilege.
At the moment, I think that for Isos we should allow to edit it so would call 
it an API bug.

(2) A normal user is NOT allowed to specify isfeatured property when 
registering an ISO (through registerIso API),
but allowed to update isfeatured property when updating an ISO (through 
updateIsoPermissions API)?
Is this by design or it's just an API bug?

Nitin>> Register Iso does provide an option to mark an ISO featured. I see that 
in the latest master.

Jessica


Re: [DISCUSS] Transaction Hell

2013-10-09 Thread Kelven Yang
+1

Original Transaction class also has many tightly-coupled assumptions about
the underlying data source, lock master. Developers are usually lost on
when and where they should use @DB, for nested transactions, it does not
really work as expected.

Kelven


On 10/9/13 10:38 AM, "Chiradeep Vittal" 
wrote:

>+1 to option B (for a lot of the reasons enunciated by Darren).
>Also, let's get this in right away so that by 1/31/2014 we are confident
>about the change and fixed any bugs uncovered by the new scheme.
>
>On 10/9/13 10:29 AM, "Darren Shepherd" 
>wrote:
>
>>Pedro,
>>
>>From a high level I think we'd probably agree.  Generally I feel an
>>IaaS platform is largely a metadata management framework that stores
>>the "desired" state of the infrastructure and then pro-actively tries
>>to reconcile the desired state with reality.  So failures should be
>>recovered from easily as inconsistency will be discovered and
>>reconciled.  Having sad that, ACS is not at all like that.  It is very
>>task oriented.  Hopefully I/we/everyone can change that, its a huge
>>concern of mine.  The general approach in ACS I see is do task X and
>>hopefully it works.  If it doesn't work, well hopefully we didn't
>>leave things in an inconsistent state.  If we find it does leave
>>things in an inconsistent state, write a cleanup thread to fix bad
>>things in bad states
>>
>>Regarding TX specifically.  This is a huge topic.  I really don't know
>>where to start.  I have so many complaints with the data access in
>>ACS.  There's what I'd like to see, but its so far from what it really
>>is.  Instead I'll address specifically your question.
>>
>>I wish we were doing transaction per API, but I don't think that was
>>ever a consideration.  I do think the sync portion of API commands
>>should be wrapped in a single transaction.  I really think the
>>original intention of the Transaction framework was to assist in
>>cleaning up resources that people always forget to close.  I think
>>that is mostly it.
>>
>>The general guidelines of how I'd like transactions to work would be
>>
>>1) Synchronous portions of API commands are wrapped in a single
>>transaction.  Transaction propagation capability from spring tx can
>>then handle nesting transaction as more complicated transaction
>>management may be need in certain places.
>>
>>2) Async jobs that run in a background threads should do small fine
>>grained transaction management.  Ideally no transactions.
>>Transactions should not be used as a locking mechanism.
>>
>>Having said that, there are currently so many technical issues in
>>getting to that.  For example, with point 1, because IPC/MessageBus
>>and EventBus were added recently, that makes it difficult to do 1.
>>The problem is that you can't send a message while a DB tx is open
>>because the reciever may get the message before the commit.  So
>>messaging frameworks have to be written in consideration of the
>>transaction management.  Not saying you need to do complex XA style
>>transactions, there's simpler ways to do that.  So regarding points 1
>>and 2 I said.  That's what I'd like to see, but I know its a long road
>>to that.
>>
>>Option B is really about introducing an API that will eventually serve
>>as a lightweight wrapper around Spring TX.  In the short term, if I do
>>option B, the implementation of the code will still be the custom ACS
>>TX mgmt.  So across modules, its sorta kinda works but not really.
>>But if I do the second step of replacing custom ACS TX impl with
>>Spring TX, it will follow how Spring TX works.  If we have Sprint TX
>>we can then leverage the transaction propagation features of it to
>>more sanely handle transaction nesting.
>>
>>I feel I went a bit the weeds with that response, but maybe something
>>in there made sense.
>>
>>Darren
>>
>>On Wed, Oct 9, 2013 at 9:31 AM, Pedro Roque Marques
>> wrote:
>>> Darren,
>>> My assumption when I tried to make sense of the transaction code is
>>>that the underlying motivation is that the code is trying to create a
>>>transaction per API call and then allow multiple modules to implement
>>>that API call...
>>> i.e. the intent is do use a bit of what i would call a "web-server
>>>logic"...
>>>
>>> 1. API call starts.
>>> 2. Module X starts transaction...
>>> 3. Module Y does some other changes in the DB...
>>> 4. Either the API call completes successfully or not... commit or error
>>>back to the user.
>>>
>>>  I suspect that this was probably the starting point... but it doesn't
>>>really work as i describe above. Often when the plugin i'm working on
>>>screws up (or XenServer is misconfigured) one ends up with DB objects in
>>>inconsistent state.
>>>
>>> I suspect that the DB Transaction design needs to include what is the
>>>methodology for the design of the management server.
>>>
>>> In an ideal world, i would say that API calls just check authorization
>>>and quotas and should store the intent of the management server to reach
>>>the desired state. State machi

Re: System VM template caching

2013-10-09 Thread Chiradeep Vittal
http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Releas
e_Notes/upgrade-instructions.html


On 10/9/13 10:11 AM, "kel...@backbonetechnology.com"
 wrote:

>We tested deleting the template on primary storage, and it failed to
>regenerate.
>
>What is the documented method for registering the new template, can you
>link to it?
>
>It seems many of us failed to find any documentation about updating the
>template period, not just in the 4.2 release doc under upgrades from 4.1
>
>Thanks.
>
>Sent from my HTC
>
>- Reply message -
>From: "Ahmad Emneina" 
>To: "dev@cloudstack.apache.org" 
>Subject: System VM template caching
>Date: Wed, Oct 9, 2013 9:48 AM
>
>there might be a more sound way than swapping the template on secondary
>storage and hacking the db. I figure one should be able to register the
>template, via the documented route... wait for download to succeed,
>upgrade
>the binary bits. then when the system vm's fail to launch. delete the
>cached template on primary storage. That should be enough to trigger a new
>system vm propagated to the primary storage. I find it hard to believe
>this
>passed QA...
>
>
>On Wed, Oct 9, 2013 at 9:39 AM, kel...@backbonetechnology.com <
>kel...@backbonetechnology.com> wrote:
>
>> This process you mention for registering as a user VM I can't find in
>>the
>> upgrade guide. Do you have a link?
>>
>> The work around works because CloudStack defaults to re-download the
>> system template is it is in NOT_DOWNLOADED status. How ever the database
>> never gets updated for the life of the build.
>>
>> CS is designed it seems to only ever have a single unaltered template_id
>> '3' record. And I guess the template download script just overwrites the
>> sane GUID filename.
>>
>> Seems like a solution that could be handled in a better way.
>>
>> Either way, this is what has been working for us in the community.
>>
>> Sent from my HTC
>>
>> - Reply message -
>> From: "Sebastien Goasguen" 
>> To: "dev@cloudstack.apache.org" 
>> Cc: "dev@cloudstack.apache.org" 
>> Subject: System VM template caching
>> Date: Wed, Oct 9, 2013 9:30 AM
>>
>> Are you sure about this ? I thought we needed to register them as user
>>vm
>> and that the upgrade would convert them to systemVM automatically
>>
>> -Sebastien
>>
>> On 9 Oct 2013, at 17:22, "kel...@backbonetechnology.com"<
>> kel...@backbonetechnology.com> wrote:
>>
>> > I was able to create a work around and several community builders
>>tested
>> it out for me and it works.
>> >
>> > I will not submit to docs as it's a hack, but I have updated the JIRA
>> ticket.
>> >
>> > Work around can be found at:
>> >
>> >
>> 
>>http://cloud.kelceydamage.com/cloudfire/blog/2013/10/08/conquering-the-cl
>>oudstack-4-2-dragon-kvm/
>> >
>> > Thanks,
>> >
>> > -Kelcey
>> >
>> > Sent from my HTC
>> >
>> > - Reply message -
>> > From: "Soheil Eizadi" 
>> > To: "dev@cloudstack.apache.org" 
>> > Subject: System VM template caching
>> > Date: Tue, Oct 8, 2013 9:49 PM
>> >
>> > This seems similar to a problem I had on 4.3 Master with System VM
>> creation. If it is the same problem you can check from API command
>> ListTemplateCommand(), from CloudMonkey and see if it returns a bogus
>> cached value. Then you know it is the same problem.
>> > -Soheil
>> >
>> >
>> 
>>http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201309.mbox/%3C67
>>17ec2e5a665a40a5af626d7d4fa90625e2e...@x2008mb1.infoblox.com%3E
>> >
>> > 
>> > From: Kelcey Jamison Damage [kel...@backbonetechnology.com]
>> > Sent: Tuesday, October 08, 2013 12:19 PM
>> > To: Cloud Dev
>> > Subject: [ACS 4.2][Upgrade Issue] System VM template caching
>> >
>> > Hi,
>> >
>> > Several of us in the community have found that with the 4.2 upgrade,
>> when we download and install the latest system VM template, CloudStack
>> refuses to use this template for new system VM creation. CloudStack
>>appears
>> to be usin a cached or master-clone variant of the old template.
>> >
>> > This is causing may KVM+ 4.2 users to have broken clouds, A bug report
>> has been filed: https://issues.apache.org/jira/browse/CLOUDSTACK-4826
>> >
>> > My question is: Does anyone know where this cached template is stored?
>> when CloudStack goes to make a new system VM, where does it look first
>>for
>> the template? We have observed through testing that this is no secondary
>> storage.
>> >
>> > Thanks in advance.
>> >
>> > Kelcey Damage | Infrastructure Systems Architect
>> > Strategy | Automation | Cloud Computing | Technology Development
>> >
>> > Backbone Technology, Inc
>> > 604-331-1152 ext. 114



Re: System VM template caching

2013-10-09 Thread Sebastien Goasguen
and that one:

http://markmail.org/message/kdmvc3frngdki5ho


On Oct 9, 2013, at 2:15 PM, Chiradeep Vittal  
wrote:

> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Releas
> e_Notes/upgrade-instructions.html
> 
> 
> On 10/9/13 10:11 AM, "kel...@backbonetechnology.com"
>  wrote:
> 
>> We tested deleting the template on primary storage, and it failed to
>> regenerate.
>> 
>> What is the documented method for registering the new template, can you
>> link to it?
>> 
>> It seems many of us failed to find any documentation about updating the
>> template period, not just in the 4.2 release doc under upgrades from 4.1
>> 
>> Thanks.
>> 
>> Sent from my HTC
>> 
>> - Reply message -
>> From: "Ahmad Emneina" 
>> To: "dev@cloudstack.apache.org" 
>> Subject: System VM template caching
>> Date: Wed, Oct 9, 2013 9:48 AM
>> 
>> there might be a more sound way than swapping the template on secondary
>> storage and hacking the db. I figure one should be able to register the
>> template, via the documented route... wait for download to succeed,
>> upgrade
>> the binary bits. then when the system vm's fail to launch. delete the
>> cached template on primary storage. That should be enough to trigger a new
>> system vm propagated to the primary storage. I find it hard to believe
>> this
>> passed QA...
>> 
>> 
>> On Wed, Oct 9, 2013 at 9:39 AM, kel...@backbonetechnology.com <
>> kel...@backbonetechnology.com> wrote:
>> 
>>> This process you mention for registering as a user VM I can't find in
>>> the
>>> upgrade guide. Do you have a link?
>>> 
>>> The work around works because CloudStack defaults to re-download the
>>> system template is it is in NOT_DOWNLOADED status. How ever the database
>>> never gets updated for the life of the build.
>>> 
>>> CS is designed it seems to only ever have a single unaltered template_id
>>> '3' record. And I guess the template download script just overwrites the
>>> sane GUID filename.
>>> 
>>> Seems like a solution that could be handled in a better way.
>>> 
>>> Either way, this is what has been working for us in the community.
>>> 
>>> Sent from my HTC
>>> 
>>> - Reply message -
>>> From: "Sebastien Goasguen" 
>>> To: "dev@cloudstack.apache.org" 
>>> Cc: "dev@cloudstack.apache.org" 
>>> Subject: System VM template caching
>>> Date: Wed, Oct 9, 2013 9:30 AM
>>> 
>>> Are you sure about this ? I thought we needed to register them as user
>>> vm
>>> and that the upgrade would convert them to systemVM automatically
>>> 
>>> -Sebastien
>>> 
>>> On 9 Oct 2013, at 17:22, "kel...@backbonetechnology.com"<
>>> kel...@backbonetechnology.com> wrote:
>>> 
 I was able to create a work around and several community builders
>>> tested
>>> it out for me and it works.
 
 I will not submit to docs as it's a hack, but I have updated the JIRA
>>> ticket.
 
 Work around can be found at:
 
 
>>> 
>>> http://cloud.kelceydamage.com/cloudfire/blog/2013/10/08/conquering-the-cl
>>> oudstack-4-2-dragon-kvm/
 
 Thanks,
 
 -Kelcey
 
 Sent from my HTC
 
 - Reply message -
 From: "Soheil Eizadi" 
 To: "dev@cloudstack.apache.org" 
 Subject: System VM template caching
 Date: Tue, Oct 8, 2013 9:49 PM
 
 This seems similar to a problem I had on 4.3 Master with System VM
>>> creation. If it is the same problem you can check from API command
>>> ListTemplateCommand(), from CloudMonkey and see if it returns a bogus
>>> cached value. Then you know it is the same problem.
 -Soheil
 
 
>>> 
>>> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201309.mbox/%3C67
>>> 17ec2e5a665a40a5af626d7d4fa90625e2e...@x2008mb1.infoblox.com%3E
 
 
 From: Kelcey Jamison Damage [kel...@backbonetechnology.com]
 Sent: Tuesday, October 08, 2013 12:19 PM
 To: Cloud Dev
 Subject: [ACS 4.2][Upgrade Issue] System VM template caching
 
 Hi,
 
 Several of us in the community have found that with the 4.2 upgrade,
>>> when we download and install the latest system VM template, CloudStack
>>> refuses to use this template for new system VM creation. CloudStack
>>> appears
>>> to be usin a cached or master-clone variant of the old template.
 
 This is causing may KVM+ 4.2 users to have broken clouds, A bug report
>>> has been filed: https://issues.apache.org/jira/browse/CLOUDSTACK-4826
 
 My question is: Does anyone know where this cached template is stored?
>>> when CloudStack goes to make a new system VM, where does it look first
>>> for
>>> the template? We have observed through testing that this is no secondary
>>> storage.
 
 Thanks in advance.
 
 Kelcey Damage | Infrastructure Systems Architect
 Strategy | Automation | Cloud Computing | Technology Development
 
 Backbone Technology, Inc
 604-331-1152 ext. 114
> 



Re: [DISCUSS] Transaction Hell

2013-10-09 Thread Daan Hoogland
Darren,

Happy to hear your view on Spring.

+1 for option B


On Wed, Oct 9, 2013 at 8:06 PM, Kelven Yang  wrote:

> +1
>
> Original Transaction class also has many tightly-coupled assumptions about
> the underlying data source, lock master. Developers are usually lost on
> when and where they should use @DB, for nested transactions, it does not
> really work as expected.
>
> Kelven
>
>
> On 10/9/13 10:38 AM, "Chiradeep Vittal" 
> wrote:
>
> >+1 to option B (for a lot of the reasons enunciated by Darren).
> >Also, let's get this in right away so that by 1/31/2014 we are confident
> >about the change and fixed any bugs uncovered by the new scheme.
> >
> >On 10/9/13 10:29 AM, "Darren Shepherd" 
> >wrote:
> >
> >>Pedro,
> >>
> >>From a high level I think we'd probably agree.  Generally I feel an
> >>IaaS platform is largely a metadata management framework that stores
> >>the "desired" state of the infrastructure and then pro-actively tries
> >>to reconcile the desired state with reality.  So failures should be
> >>recovered from easily as inconsistency will be discovered and
> >>reconciled.  Having sad that, ACS is not at all like that.  It is very
> >>task oriented.  Hopefully I/we/everyone can change that, its a huge
> >>concern of mine.  The general approach in ACS I see is do task X and
> >>hopefully it works.  If it doesn't work, well hopefully we didn't
> >>leave things in an inconsistent state.  If we find it does leave
> >>things in an inconsistent state, write a cleanup thread to fix bad
> >>things in bad states
> >>
> >>Regarding TX specifically.  This is a huge topic.  I really don't know
> >>where to start.  I have so many complaints with the data access in
> >>ACS.  There's what I'd like to see, but its so far from what it really
> >>is.  Instead I'll address specifically your question.
> >>
> >>I wish we were doing transaction per API, but I don't think that was
> >>ever a consideration.  I do think the sync portion of API commands
> >>should be wrapped in a single transaction.  I really think the
> >>original intention of the Transaction framework was to assist in
> >>cleaning up resources that people always forget to close.  I think
> >>that is mostly it.
> >>
> >>The general guidelines of how I'd like transactions to work would be
> >>
> >>1) Synchronous portions of API commands are wrapped in a single
> >>transaction.  Transaction propagation capability from spring tx can
> >>then handle nesting transaction as more complicated transaction
> >>management may be need in certain places.
> >>
> >>2) Async jobs that run in a background threads should do small fine
> >>grained transaction management.  Ideally no transactions.
> >>Transactions should not be used as a locking mechanism.
> >>
> >>Having said that, there are currently so many technical issues in
> >>getting to that.  For example, with point 1, because IPC/MessageBus
> >>and EventBus were added recently, that makes it difficult to do 1.
> >>The problem is that you can't send a message while a DB tx is open
> >>because the reciever may get the message before the commit.  So
> >>messaging frameworks have to be written in consideration of the
> >>transaction management.  Not saying you need to do complex XA style
> >>transactions, there's simpler ways to do that.  So regarding points 1
> >>and 2 I said.  That's what I'd like to see, but I know its a long road
> >>to that.
> >>
> >>Option B is really about introducing an API that will eventually serve
> >>as a lightweight wrapper around Spring TX.  In the short term, if I do
> >>option B, the implementation of the code will still be the custom ACS
> >>TX mgmt.  So across modules, its sorta kinda works but not really.
> >>But if I do the second step of replacing custom ACS TX impl with
> >>Spring TX, it will follow how Spring TX works.  If we have Sprint TX
> >>we can then leverage the transaction propagation features of it to
> >>more sanely handle transaction nesting.
> >>
> >>I feel I went a bit the weeds with that response, but maybe something
> >>in there made sense.
> >>
> >>Darren
> >>
> >>On Wed, Oct 9, 2013 at 9:31 AM, Pedro Roque Marques
> >> wrote:
> >>> Darren,
> >>> My assumption when I tried to make sense of the transaction code is
> >>>that the underlying motivation is that the code is trying to create a
> >>>transaction per API call and then allow multiple modules to implement
> >>>that API call...
> >>> i.e. the intent is do use a bit of what i would call a "web-server
> >>>logic"...
> >>>
> >>> 1. API call starts.
> >>> 2. Module X starts transaction...
> >>> 3. Module Y does some other changes in the DB...
> >>> 4. Either the API call completes successfully or not... commit or error
> >>>back to the user.
> >>>
> >>>  I suspect that this was probably the starting point... but it doesn't
> >>>really work as i describe above. Often when the plugin i'm working on
> >>>screws up (or XenServer is misconfigured) one ends up with DB objects in
> >>>inconsistent state.
> >>>

Re: [DISCUSS] Transaction Hell

2013-10-09 Thread Darren Shepherd
Pedro,

The @DB annotation adds a guard.  It doesn't actually start or end a
transaction.  It will check that the transaction should be rolled back
if one was started and not closed and that's about it.  So the only
time transactions are really started is when you see txn.start() in
the code.  So the flow of 1, 2, 3 that you wrote, has almost nothing
to do with the actual transactions in play.

And this further illustrates how confusing this framework is and why I
would like to abolish it.  The @DB annotation should go away.

The example you gave is code that runs in the background as part of an
async job.  What I would propose is that we do not rely on transaction
as much as possible.  Transactions in background code should be as
small as possible.  There is no point in trying to manage failures
with transactions when the external resources we are interacting with
are not transactional.  So backend code should be written mostly as

1) set to transitioning state
2) make external thing so
3) set to done/finished/non-transitioning-state

At any point there is no long running DB transaction.  DB transactions
are only used for atomic commits if your change spans a couple rows
and needs to be atomic.  If there is failure at any point because of
the states we can recover.

The only time I'd like DB transactions to be used is on the
synchronous portion of APIs.  The sync portion of APIs typically just
writes data to the DB and nothing else, it is not supposed to interact
with external resources.  For sync APIs, one big transaction typically
handles 90% of the failure cases.  But I don't think this is fully
possible right now for sync APIs.

Darren

Darren

On Wed, Oct 9, 2013 at 10:58 AM, Pedro Roque Marques
 wrote:
> Darren,
> I generally agree with you... just trying to point out what could be pitfalls 
> on the way to evolve the system.
>
> On Oct 9, 2013, at 10:29 AM, Darren Shepherd wrote:
>>
>> I wish we were doing transaction per API, but I don't think that was
>> ever a consideration.  I do think the sync portion of API commands
>> should be wrapped in a single transaction.  I really think the
>> original intention of the Transaction framework was to assist in
>> cleaning up resources that people always forget to close.  I think
>> that is mostly it.
>
> My understanding is that for instance when a VM is created you have a call 
> flow that looks a bit like:
>
> 1. UserVmManagerImpl.createVirtualMachine (@DB, persist)
> 2. VirtualMachineManagerImpl.allocate (@DB, persist)
> 3. NetworkOrchestrator.allocate (@DB, persist)
>
> My understanding is that an check in NetworkOrchestrator (e.g. nic parameters 
> not being kosher) is supposed to rollback the transaction and remove the VM 
> in the database...
>
> There are some errors for which this mechanism works OK today I believe 
> it would be desirable to have a proposal of how to deal with such an example 
> and then attempt to implement it consistently. Even if it requires the 
> programmer to understand that it needs to explicitly rollback the VM if the 
> underlying layers throw an exception.
>
>   Pedro.


Re: [DISCUSS] Transaction Hell

2013-10-09 Thread Darren Shepherd
This is blocking my spring-modularization branch as I coupled changing
the DB transactions stuff with the spring stuff (maybe not the best
idea...).  So if everyone is on board with option 2, I'll look to get
it done by probably Tuesday.  I'll create a branch from master,
independent of the spring stuff I did.  If all is swell with that,
merge the DB txn change, and merge the spring changes.

Darren

On Wed, Oct 9, 2013 at 10:38 AM, Chiradeep Vittal
 wrote:
> +1 to option B (for a lot of the reasons enunciated by Darren).
> Also, let's get this in right away so that by 1/31/2014 we are confident
> about the change and fixed any bugs uncovered by the new scheme.
>
> On 10/9/13 10:29 AM, "Darren Shepherd"  wrote:
>
>>Pedro,
>>
> >From a high level I think we'd probably agree.  Generally I feel an
>>IaaS platform is largely a metadata management framework that stores
>>the "desired" state of the infrastructure and then pro-actively tries
>>to reconcile the desired state with reality.  So failures should be
>>recovered from easily as inconsistency will be discovered and
>>reconciled.  Having sad that, ACS is not at all like that.  It is very
>>task oriented.  Hopefully I/we/everyone can change that, its a huge
>>concern of mine.  The general approach in ACS I see is do task X and
>>hopefully it works.  If it doesn't work, well hopefully we didn't
>>leave things in an inconsistent state.  If we find it does leave
>>things in an inconsistent state, write a cleanup thread to fix bad
>>things in bad states
>>
>>Regarding TX specifically.  This is a huge topic.  I really don't know
>>where to start.  I have so many complaints with the data access in
>>ACS.  There's what I'd like to see, but its so far from what it really
>>is.  Instead I'll address specifically your question.
>>
>>I wish we were doing transaction per API, but I don't think that was
>>ever a consideration.  I do think the sync portion of API commands
>>should be wrapped in a single transaction.  I really think the
>>original intention of the Transaction framework was to assist in
>>cleaning up resources that people always forget to close.  I think
>>that is mostly it.
>>
>>The general guidelines of how I'd like transactions to work would be
>>
>>1) Synchronous portions of API commands are wrapped in a single
>>transaction.  Transaction propagation capability from spring tx can
>>then handle nesting transaction as more complicated transaction
>>management may be need in certain places.
>>
>>2) Async jobs that run in a background threads should do small fine
>>grained transaction management.  Ideally no transactions.
>>Transactions should not be used as a locking mechanism.
>>
>>Having said that, there are currently so many technical issues in
>>getting to that.  For example, with point 1, because IPC/MessageBus
>>and EventBus were added recently, that makes it difficult to do 1.
>>The problem is that you can't send a message while a DB tx is open
>>because the reciever may get the message before the commit.  So
>>messaging frameworks have to be written in consideration of the
>>transaction management.  Not saying you need to do complex XA style
>>transactions, there's simpler ways to do that.  So regarding points 1
>>and 2 I said.  That's what I'd like to see, but I know its a long road
>>to that.
>>
>>Option B is really about introducing an API that will eventually serve
>>as a lightweight wrapper around Spring TX.  In the short term, if I do
>>option B, the implementation of the code will still be the custom ACS
>>TX mgmt.  So across modules, its sorta kinda works but not really.
>>But if I do the second step of replacing custom ACS TX impl with
>>Spring TX, it will follow how Spring TX works.  If we have Sprint TX
>>we can then leverage the transaction propagation features of it to
>>more sanely handle transaction nesting.
>>
>>I feel I went a bit the weeds with that response, but maybe something
>>in there made sense.
>>
>>Darren
>>
>>On Wed, Oct 9, 2013 at 9:31 AM, Pedro Roque Marques
>> wrote:
>>> Darren,
>>> My assumption when I tried to make sense of the transaction code is
>>>that the underlying motivation is that the code is trying to create a
>>>transaction per API call and then allow multiple modules to implement
>>>that API call...
>>> i.e. the intent is do use a bit of what i would call a "web-server
>>>logic"...
>>>
>>> 1. API call starts.
>>> 2. Module X starts transaction...
>>> 3. Module Y does some other changes in the DB...
>>> 4. Either the API call completes successfully or not... commit or error
>>>back to the user.
>>>
>>>  I suspect that this was probably the starting point... but it doesn't
>>>really work as i describe above. Often when the plugin i'm working on
>>>screws up (or XenServer is misconfigured) one ends up with DB objects in
>>>inconsistent state.
>>>
>>> I suspect that the DB Transaction design needs to include what is the
>>>methodology for the design of the management server.
>>>
>>> In an ideal world, i w

Re: [DISCUSS] Transaction Hell

2013-10-09 Thread Mike Tutkowski
I think Option B is a good move.


On Wed, Oct 9, 2013 at 1:00 PM, Darren Shepherd  wrote:

> This is blocking my spring-modularization branch as I coupled changing
> the DB transactions stuff with the spring stuff (maybe not the best
> idea...).  So if everyone is on board with option 2, I'll look to get
> it done by probably Tuesday.  I'll create a branch from master,
> independent of the spring stuff I did.  If all is swell with that,
> merge the DB txn change, and merge the spring changes.
>
> Darren
>
> On Wed, Oct 9, 2013 at 10:38 AM, Chiradeep Vittal
>  wrote:
> > +1 to option B (for a lot of the reasons enunciated by Darren).
> > Also, let's get this in right away so that by 1/31/2014 we are confident
> > about the change and fixed any bugs uncovered by the new scheme.
> >
> > On 10/9/13 10:29 AM, "Darren Shepherd" 
> wrote:
> >
> >>Pedro,
> >>
> > >From a high level I think we'd probably agree.  Generally I feel an
> >>IaaS platform is largely a metadata management framework that stores
> >>the "desired" state of the infrastructure and then pro-actively tries
> >>to reconcile the desired state with reality.  So failures should be
> >>recovered from easily as inconsistency will be discovered and
> >>reconciled.  Having sad that, ACS is not at all like that.  It is very
> >>task oriented.  Hopefully I/we/everyone can change that, its a huge
> >>concern of mine.  The general approach in ACS I see is do task X and
> >>hopefully it works.  If it doesn't work, well hopefully we didn't
> >>leave things in an inconsistent state.  If we find it does leave
> >>things in an inconsistent state, write a cleanup thread to fix bad
> >>things in bad states
> >>
> >>Regarding TX specifically.  This is a huge topic.  I really don't know
> >>where to start.  I have so many complaints with the data access in
> >>ACS.  There's what I'd like to see, but its so far from what it really
> >>is.  Instead I'll address specifically your question.
> >>
> >>I wish we were doing transaction per API, but I don't think that was
> >>ever a consideration.  I do think the sync portion of API commands
> >>should be wrapped in a single transaction.  I really think the
> >>original intention of the Transaction framework was to assist in
> >>cleaning up resources that people always forget to close.  I think
> >>that is mostly it.
> >>
> >>The general guidelines of how I'd like transactions to work would be
> >>
> >>1) Synchronous portions of API commands are wrapped in a single
> >>transaction.  Transaction propagation capability from spring tx can
> >>then handle nesting transaction as more complicated transaction
> >>management may be need in certain places.
> >>
> >>2) Async jobs that run in a background threads should do small fine
> >>grained transaction management.  Ideally no transactions.
> >>Transactions should not be used as a locking mechanism.
> >>
> >>Having said that, there are currently so many technical issues in
> >>getting to that.  For example, with point 1, because IPC/MessageBus
> >>and EventBus were added recently, that makes it difficult to do 1.
> >>The problem is that you can't send a message while a DB tx is open
> >>because the reciever may get the message before the commit.  So
> >>messaging frameworks have to be written in consideration of the
> >>transaction management.  Not saying you need to do complex XA style
> >>transactions, there's simpler ways to do that.  So regarding points 1
> >>and 2 I said.  That's what I'd like to see, but I know its a long road
> >>to that.
> >>
> >>Option B is really about introducing an API that will eventually serve
> >>as a lightweight wrapper around Spring TX.  In the short term, if I do
> >>option B, the implementation of the code will still be the custom ACS
> >>TX mgmt.  So across modules, its sorta kinda works but not really.
> >>But if I do the second step of replacing custom ACS TX impl with
> >>Spring TX, it will follow how Spring TX works.  If we have Sprint TX
> >>we can then leverage the transaction propagation features of it to
> >>more sanely handle transaction nesting.
> >>
> >>I feel I went a bit the weeds with that response, but maybe something
> >>in there made sense.
> >>
> >>Darren
> >>
> >>On Wed, Oct 9, 2013 at 9:31 AM, Pedro Roque Marques
> >> wrote:
> >>> Darren,
> >>> My assumption when I tried to make sense of the transaction code is
> >>>that the underlying motivation is that the code is trying to create a
> >>>transaction per API call and then allow multiple modules to implement
> >>>that API call...
> >>> i.e. the intent is do use a bit of what i would call a "web-server
> >>>logic"...
> >>>
> >>> 1. API call starts.
> >>> 2. Module X starts transaction...
> >>> 3. Module Y does some other changes in the DB...
> >>> 4. Either the API call completes successfully or not... commit or error
> >>>back to the user.
> >>>
> >>>  I suspect that this was probably the starting point... but it doesn't
> >>>really work as i describe above. Often when t

Re: Review Request 14522: [CLOUDSTACK-4771] Support Revert VM Disk from Snapshot

2013-10-09 Thread SuichII, Christopher
Just bumping this since there haven't been any responses.

Does anyone have any thoughts on this? I'm ready and prepared to do the work, 
but I don't want to move on if people have concerns with this approach or can 
think of a better solution.

-Chris
--
Chris Suich
chris.su...@netapp.com
NetApp Software Engineer
Data Center Platforms – Cloud Solutions
Citrix, Cisco & Red Hat

On Oct 8, 2013, at 4:53 PM, Chris Suich 
mailto:chris.su...@netapp.com>> wrote:

This is an automatically generated e-mail. To reply, visit: 
https://reviews.apache.org/r/14522/


On October 8th, 2013, 8:18 p.m. UTC, edison su wrote:

ui/scripts/storage.js
 (Diff revision 1)

getActionFilter: function() {


1763

revertSnapshot: {


The ui change here, is there way to disable it from ui, if the storage provider 
is not NetApp? Or move the ui change into your plugin?

This raises the question of whether people expect to see the revert snapshot 
functionality for hypervisors or just storage providers. I figured that the 
hypervisor functionality would be desired, but it sounds like that may not be 
the case for all hypervisors.

Has there been any thoughts to allow storage providers to indicate which 
features they support? Maybe part of the VolumeResponse can be a set of flags 
for which operations are supported (take snapshot, revert snapshot, etc.). This 
way, the UI can dynamically show/hide supported actions without knowing who the 
volume's storage provider actually is. This should be a fairly straight forward 
UI change, but would require adding methods to the storage provider interface. 
If we don't want to always load this information just for the VolumeResponse, 
we could expose new APIs to query which operations are supported for a given 
volume, but we may not want to go exposing APIs for this.

Any thoughts?


- Chris


On October 7th, 2013, 8:26 p.m. UTC, Chris Suich wrote:

Review request for cloudstack, Brian Federle and edison su.
By Chris Suich.

Updated Oct. 7, 2013, 8:26 p.m.

Repository: cloudstack-git
Description

After the last batch of work to the revertSnapshot API, SnapshotServiceImpl was 
not tied into the workflow to be used by storage providers. I have added the 
logic in a similar fashion to takeSnapshot(), backupSnapshot() and 
deleteSnapshot().

I have also added a 'Revert to Snapshot' action to the volume snapshots list in 
the UI.


Testing

I have tested all of this locally with a custom storage provider.

Unfortunately, I'm still in the middle of figuring out how to properly unit 
test this type of code. If anyone has any recommendations, please let me know.


Diffs

  *   
api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java 
(946eebd)
  *   client/WEB-INF/classes/resources/messages.properties (f92b85a)
  *   client/tomcatconf/commands.properties.in (58c770d)
  *   
engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/SnapshotServiceImpl.java
 (c09adca)
  *   server/src/com/cloud/server/ManagementServerImpl.java (0a0fcdc)
  *   server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java (0b53cfd)
  *   ui/dictionary.jsp (f93f9dc)
  *   ui/scripts/storage.js (88fb9f2)

View Diff




Re: [PROPOSAL] Remove Setters from *JoinVO

2013-10-09 Thread Daan Hoogland
Chris,

Since I see no objections, why don't you test your idea and submit a patch?

regards,
Daan

On Fri, Oct 4, 2013 at 7:29 PM, SuichII, Christopher
 wrote:
> *JoinVOs are used to store entries from MySQL views, which are not editable. 
> I think removing setters from the *JoinVOs may help avoid some potential 
> confusion as setters seem to imply that the fields are editable, which they 
> really aren't.
>
> I started looking around and it looks like most setters in *JoinVOs aren't 
> actually used since the creation of *VOs is handled by java reflection. 
> Please let me know if this is not the case or if I'm misunderstanding the way 
> the MySQL views work.
>
> -Chris
> --
> Chris Suich
> chris.su...@netapp.com
> NetApp Software Engineer
> Data Center Platforms – Cloud Solutions
> Citrix, Cisco & Red Hat
>


Re: [PROPOSAL] Remove Setters from *JoinVO

2013-10-09 Thread Mike Tutkowski
Yeah, I agree with you, Chris. I think these setters should be removed.


On Wed, Oct 9, 2013 at 1:33 PM, Daan Hoogland wrote:

> Chris,
>
> Since I see no objections, why don't you test your idea and submit a patch?
>
> regards,
> Daan
>
> On Fri, Oct 4, 2013 at 7:29 PM, SuichII, Christopher
>  wrote:
> > *JoinVOs are used to store entries from MySQL views, which are not
> editable. I think removing setters from the *JoinVOs may help avoid some
> potential confusion as setters seem to imply that the fields are editable,
> which they really aren't.
> >
> > I started looking around and it looks like most setters in *JoinVOs
> aren't actually used since the creation of *VOs is handled by java
> reflection. Please let me know if this is not the case or if I'm
> misunderstanding the way the MySQL views work.
> >
> > -Chris
> > --
> > Chris Suich
> > chris.su...@netapp.com
> > NetApp Software Engineer
> > Data Center Platforms – Cloud Solutions
> > Citrix, Cisco & Red Hat
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


Re: Review Request 14522: [CLOUDSTACK-4771] Support Revert VM Disk from Snapshot

2013-10-09 Thread Mike Tutkowski
"Has there been any thoughts to allow storage providers to indicate which
features they support?"

We talked about this for a while at the CloudStack Collaboration Conference
in Santa Clara.

Right now, this is not supported and that's a serious problem.

This kind of ties in with Storage Tagging and how that is problematic, as
well.

With Storage Tagging, there is no indication of what storage provider
supports the Compute or Disk Offering in question and, as such, we don't
know what fields to show to or hide from users.


On Wed, Oct 9, 2013 at 1:32 PM, SuichII, Christopher  wrote:

> Just bumping this since there haven't been any responses.
>
> Does anyone have any thoughts on this? I'm ready and prepared to do the
> work, but I don't want to move on if people have concerns with this
> approach or can think of a better solution.
>
> -Chris
> --
> Chris Suich
> chris.su...@netapp.com
> NetApp Software Engineer
> Data Center Platforms – Cloud Solutions
> Citrix, Cisco & Red Hat
>
> On Oct 8, 2013, at 4:53 PM, Chris Suich  chris.su...@netapp.com>> wrote:
>
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14522/
>
>
> On October 8th, 2013, 8:18 p.m. UTC, edison su wrote:
>
> ui/scripts/storage.js<
> https://reviews.apache.org/r/14522/diff/1/?file=362033#file362033line1763>
> (Diff revision 1)
>
> getActionFilter: function() {
>
>
> 1763
>
> revertSnapshot: {
>
>
> The ui change here, is there way to disable it from ui, if the storage
> provider is not NetApp? Or move the ui change into your plugin?
>
> This raises the question of whether people expect to see the revert
> snapshot functionality for hypervisors or just storage providers. I figured
> that the hypervisor functionality would be desired, but it sounds like that
> may not be the case for all hypervisors.
>
> Has there been any thoughts to allow storage providers to indicate which
> features they support? Maybe part of the VolumeResponse can be a set of
> flags for which operations are supported (take snapshot, revert snapshot,
> etc.). This way, the UI can dynamically show/hide supported actions without
> knowing who the volume's storage provider actually is. This should be a
> fairly straight forward UI change, but would require adding methods to the
> storage provider interface. If we don't want to always load this
> information just for the VolumeResponse, we could expose new APIs to query
> which operations are supported for a given volume, but we may not want to
> go exposing APIs for this.
>
> Any thoughts?
>
>
> - Chris
>
>
> On October 7th, 2013, 8:26 p.m. UTC, Chris Suich wrote:
>
> Review request for cloudstack, Brian Federle and edison su.
> By Chris Suich.
>
> Updated Oct. 7, 2013, 8:26 p.m.
>
> Repository: cloudstack-git
> Description
>
> After the last batch of work to the revertSnapshot API,
> SnapshotServiceImpl was not tied into the workflow to be used by storage
> providers. I have added the logic in a similar fashion to takeSnapshot(),
> backupSnapshot() and deleteSnapshot().
>
> I have also added a 'Revert to Snapshot' action to the volume snapshots
> list in the UI.
>
>
> Testing
>
> I have tested all of this locally with a custom storage provider.
>
> Unfortunately, I'm still in the middle of figuring out how to properly
> unit test this type of code. If anyone has any recommendations, please let
> me know.
>
>
> Diffs
>
>   *
> api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java
> (946eebd)
>   *   client/WEB-INF/classes/resources/messages.properties (f92b85a)
>   *   client/tomcatconf/commands.properties.in (58c770d)
>   *
> engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/SnapshotServiceImpl.java
> (c09adca)
>   *   server/src/com/cloud/server/ManagementServerImpl.java (0a0fcdc)
>   *   server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java
> (0b53cfd)
>   *   ui/dictionary.jsp (f93f9dc)
>   *   ui/scripts/storage.js (88fb9f2)
>
> View Diff
>
>
>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


RE: questions about registerIso API and updateIsoPermissions API

2013-10-09 Thread Jessica Wang
Currently, at API level, a normal user is not allowed to specify "isfeatured" 
when registering ISO (API will ignore "isfeatured" parameter when a normal user 
passes it),
but a normal user is allowed to specify "isfeatured" when updating ISO.

Should we fix API to:
(1) allow a normal user to specify "isfeatured" when registering ISO (API won't 
ignore "isfeatured" parameter when a normal user passes it)

OR

(2) disallow a normal user to specify "isfeatured" when updating ISO

?


p.s. I'll do corresponding UI change after API is fixed.


From: Jessica Wang
Sent: Wednesday, October 09, 2013 11:01 AM
To: Nitin Mehta; 
Cc: Alena Prokharchyk; Shweta Agarwal
Subject: RE: questions about registerIso API and updateIsoPermissions API

Nitin,

>  At the moment, I think that for Isos we should allow to edit it so would 
> call it an API bug.
Thanks.

> Register Iso does provide an option to mark an ISO featured. I see that in 
> the latest master.
That only works for admin, but NOT normal user.

If you log in as a normal user, then pass "isfeatured=true" to registerIso API, 
API will ignore it.
The newly registered template will have "isfeatured: false".

e.g.
http://10.215.3.26:8080/client/api?command=registerIso&response=json&sessionkey=u%2FVIHPJuPohidGKFd0lh6csG%2BfM%3D&name=normalUserIso1&displayText=normalUserIso1&url=http%3A%2F%2F10.223.110.231%2Fisos_64bit%2Fdummy.iso&zoneid=6bcd3bd9-591c-4d99-a164-d05b87df1b04&isfeatured=true&isextractable=false&bootable=true&osTypeId=b8cbfd6c-2d40-11e3-86aa-3c970e739c3e&ispublic=false&_=1381340961641
{
"registerisoresponse": {
"count": 1,
"iso": [
{
"id": "9b903876-f17c-4634-8463-8e3025259956",
"name": "normalUserIso1",
"displaytext": "normalUserIso1",
"ispublic": false,
"created": "2013-10-09T10:52:38-0700",
"isready": false,
"bootable": true,
"isfeatured": false,
"crossZones": false,
"ostypeid": "b8cbfd6c-2d40-11e3-86aa-3c970e739c3e",
"ostypename": "Apple Mac OS X 10.6 (32-bit)",
"account": "aaa_user",
"zoneid": "6bcd3bd9-591c-4d99-a164-d05b87df1b04",
"zonename": "jw-adv",
"status": "",
"domain": "aaa",
"domainid": "47b09d73-84ef-48dc-9b73-1720bad600cb",
"isextractable": false,
"tags": []
}
]
}
}

Jessica

From: Nitin Mehta
Sent: Tuesday, October 08, 2013 5:27 PM
To: Jessica Wang; 
Cc: Alena Prokharchyk; Shweta Agarwal
Subject: Re: questions about registerIso API and updateIsoPermissions API

Answers inline.

From: Jessica Wang mailto:jessica.w...@citrix.com>>
Date: Tuesday 8 October 2013 5:10 PM
To: "mailto:dev@cloudstack.apache.org>>" 
mailto:dev@cloudstack.apache.org>>
Cc: Alena Prokharchyk 
mailto:alena.prokharc...@citrix.com>>, Nitin 
Mehta mailto:nitin.me...@citrix.com>>, Shweta Agarwal 
mailto:shweta.agar...@citrix.com>>
Subject: questions about registerIso API and updateIsoPermissions API

Hi,

I have questions about registerIso API and updateIsoPermissions API.

(1) A normal user is allowed to specify isextractable property when registering 
an ISO (through registerIso API),
but NOT allowed to update isextractable property when updating an ISO (through 
updateIsoPermissions API).
Is this by design or it's just an API bug?

Nitin>> This is a grey area. This was done for templates (Isos just inherited 
it) because derived templates may or may not belong to the same user and we 
want to follow the principle of least privilege.
At the moment, I think that for Isos we should allow to edit it so would call 
it an API bug.

(2) A normal user is NOT allowed to specify isfeatured property when 
registering an ISO (through registerIso API),
but allowed to update isfeatured property when updating an ISO (through 
updateIsoPermissions API)?
Is this by design or it's just an API bug?

Nitin>> Register Iso does provide an option to mark an ISO featured. I see that 
in the latest master.

Jessica


Re: [DOC] 4.2.0 Templates

2013-10-09 Thread Daan Hoogland
Marty,

I am not sure what you mean. Do you mean the doc dir in the repo? I
think you need to look in
https://git-wip-us.apache.org/repos/asf/cloudstack-docs.git for the
4.2 docs.

regards,
Daan

On Sun, Oct 6, 2013 at 10:51 PM, Marty Sweet  wrote:
> Hi guys,
>
> I created a document for creating Linux documentation for the 4.2.0
> release. Checking the documentation it seems that it is not there? Is there
> any reason for this?
>
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Admin_Guide/working-with-templates.html
>
> https://github.com/apache/cloudstack/commit/922ef76224d4a8534f67f47b97cf664e5c65ecba
> https://issues.apache.org/jira/browse/CLOUDSTACK-4329
>
> Thanks,
> Marty


Re: [DISCUSS] make commands.properties the exception, not the rule

2013-10-09 Thread SuichII, Christopher
I just wanted to add a little clarification from a plugin perspective.

Having commands.properties as a whitelist just adds another place that plugins 
have to register with CloudStack. For plugins that do not intend on being a 
part of the CloudStack source, this is actually quite tricky. Currently, to add 
entries to commands.properties, any plugin like this would either need to tell 
the CS administrator to manually modify this file (error prone, laborious and 
an uncommon installation practice) or develop an installation script to modify 
commands.properties when installing, updating or uninstalling the plugin (also 
error prone and scary).

-- 
Chris Suich
chris.su...@netapp.com
NetApp Software Engineer
Data Center Platforms – Cloud Solutions
Citrix, Cisco & Red Hat

On Oct 9, 2013, at 1:08 AM, Darren Shepherd  wrote:

> So I'm saying if you want to disable a command you put myBadCmd=0 in
> the commands.properties.  So yes, a blacklist over a whitelist.  For
> people paranoid about maybe some command exists that they don't know
> about, we can even add a "blacklist=false to the command properties.
> Then the commands.properites becomes the all mighty master of what is
> allowed (a whitelist).  But by default, I think the file should be
> empty and default to what is defined by the API annotation.
> 
> Darren
> 
> On Tue, Oct 8, 2013 at 5:45 PM, SuichII, Christopher
>  wrote:
>> Maybe we could consider switching from a whitelist to a blacklist, then. A 
>> whitelist is certainly easier in terms of a one-step configuration, but a 
>> blacklist would allow for much easier plugin development, installation and 
>> removal. Perhaps we could find write a script that generates the complete 
>> list of APIs to create the blacklist from (I know this API exists currently, 
>> but not in the format of commands.properties).
>> 
>> --
>> Chris Suich
>> chris.su...@netapp.com
>> NetApp Software Engineer
>> Data Center Platforms – Cloud Solutions
>> Citrix, Cisco & Red Hat
>> 
>> On Oct 8, 2013, at 7:11 PM, Prachi Damle  wrote:
>> 
>>> I think commands.properties is not just providing ACL on the API - but it 
>>> also serves as a whitelist of APIs available on the deployment.
>>> It can be a one-step configuration option to disable certain functionality.
>>> 
>>> Prachi
>>> 
>>> 
>>> -Original Message-
>>> From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
>>> Sent: Tuesday, October 08, 2013 3:24 PM
>>> To: dev@cloudstack.apache.org
>>> Subject: [DISCUSS] make commands.properties the exception, not the rule
>>> 
>>> I would like to largely remove commands.properties.  I think most API 
>>> commands naturally have a default ACL that should be applied.  I think it 
>>> makes sense to add to the @APICommand flags for user, domain, admin.  Then, 
>>> as an override mechanism, people can edit commands.properties to change the 
>>> default ACL.  This would make it such that people could add new commands 
>>> without the need to edit commands.properties.
>>> 
>>> Thoughts?  How will this play with whatever is being done with rbac?
>>> 
>>> Darren
>> 



Re: what's the reason for the placeholder nic in VPC/VR?

2013-10-09 Thread Darren Shepherd
Okay, that makes sense.  Another random question.  Why does
router_network_ref exist?  Is it not sufficient to just find a
DomainRouter VM that has a nic attached to the network?  It seems to
be for RvR?  Is that right?  Still I don't understand why the table is
needed.

Darren

On Wed, Oct 9, 2013 at 9:50 AM, Alena Prokharchyk
 wrote:
> I've just tested it on the latest master, don't see placeholder nic
> created for the VPC VR.
>
> In addition to the case Murali explained, placeholder nic is being created
> per Shared network case using VR as DHCP provider. Its done to preserve
> the same ip address for the case when VR is being expunged/re-created
> during the network restart/Vrdestroy. As a result of expunge VR its nic is
> being cleaned up - and ip released - , so we had to make sure that the new
> VR would get the same ip. More details are in
> 26b892daf3cdccc2e25711730c7e1efcdec7d2dc, CLOUDSTACK-1771.
>
> -Alena.
>
>
> On 10/9/13 2:57 AM, "Murali Reddy"  wrote:
>
>>On 09/10/13 11:33 AM, "Darren Shepherd" 
>>wrote:
>>
>>>Why is a placeholder nic created before the VRs for the VPC are created?
>>>
>>>Darren
>>>
>>
>>Generally place holder nic is used in cases where cloudstack uses a subnet
>>IP from the guest subnet, but ip is not used for any VM nic's. In most of
>>the external network devices, needs a subnet IP from the guest network
>>CIDR, cloudstack creates a place holder nic and allocates a subnet ip.
>>
>>
>
>


Re: [PROPOSAL] Modularize Spring

2013-10-09 Thread SuichII, Christopher
I think I'll look into a version of (2). The difference being that I think 
using an int is too large of a range and provides unnecessary granularity. If 
two strategies or providers both have snapshot strategies, they are both simply 
going to return the max int. However, if we use an enum with values like:

HIGHEST, PLUGIN, HYPERVISOR, DEFAULT and NO, (HIGHEST would be reserved for 
unforeseen future use, testing, simulators, etc.)

then we allow strategies and providers to fall in the same bucket. All 
strategies and providers would be sorted and asked to handle operations in that 
order. Ultimately, this requires that plugins do their best to determine 
whether they can actually handle an operation, because if two say they can, 
there is no way for the MS to intelligently choose between the two.

-- 
Chris Suich
chris.su...@netapp.com
NetApp Software Engineer
Data Center Platforms – Cloud Solutions
Citrix, Cisco & Red Hat

On Oct 4, 2013, at 6:10 PM, Darren Shepherd  wrote:

> Sure, I'm open to suggestions.  Basically I think we've discussed
> 
> 1) Global Setting
> 2) canHandle() returns an int
> 3) Strategy has an enum type assigned
> 
> I'm open to all three, I don't have much vested interest in this.
> 
> Darren
> 
> On Fri, Oct 4, 2013 at 3:00 PM, SuichII, Christopher
>  wrote:
>> Well, it seems OK, but I think we should keep on discussing our options. One 
>> concern I have with the global config approach is that it adds manual steps 
>> for 'installing' extensions. Each extension must have installation 
>> instructions to indicate which global configurations it must be included in 
>> and where in that list it should be put (and of course, many extension are 
>> going to say that they should be at the front of the list).
>> 
>> -Chris
>> --
>> Chris Suich
>> chris.su...@netapp.com
>> NetApp Software Engineer
>> Data Center Platforms – Cloud Solutions
>> Citrix, Cisco & Red Hat
>> 
>> On Oct 4, 2013, at 12:12 PM, Darren Shepherd  
>> wrote:
>> 
>>> On 10/04/2013 11:58 AM, SuichII, Christopher wrote:
 Darren,
 
 I think one of the benefits of allowing the priority to be specified in 
 the xml is that it can be configured after deployment. If for some reason 
 two strategies or providers conflict, then their priorities can be changed 
 in XML to resolve the conflict. I believe the Spring @Order annotation an 
 be specified in XML, not just as an annotation.
 
 -Chris
 
>>> 
>>> I would *prefer* extensions to be order independent, but if we determine 
>>> they are order dependant, then that is fine too.  So if we conclude that 
>>> the simplest way to address this is to order the Strategies based on 
>>> configuration, then I will add an ordering "global configuration" as 
>>> described at 
>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Extensions.
>>> 
>>> Does the order configuration setting approach seem fine?
>>> 
>>> Darren
>> 



{Responsiveness report] users 2013w40

2013-10-09 Thread Daan Hoogland
http://markmail.org/message/vm2rmrlqdkv7fscz SecondaryStorage VM on
4.1.1 by nick.sindlin...@appcore.com
http://markmail.org/message/whks4wvzuaewvnsw qemu-kvm tweaking
(CloudStack-2.2.14) by Vladimir Melnik
http://markmail.org/message/d7b4vozgi5gs3a5g Migrating/managing
secondary storages by mailingli...@isg.si
http://markmail.org/message/lfyrzjdvxxwnjx4e Bare Metal Docs for
4.2.0? by Jason Davis
http://markmail.org/message/ugn3j6ikrekflrw6 Cloudstack 4.0 after
power|cooling loss by Curtis Old
http://markmail.org/message/mo2hjtcufzrfobrj Zone wide Primary Storage
after upgrade to CS 4.2 by Andrei Mikhailovsky
http://markmail.org/message/xwuhbipiarvcufbn Problem when booting vm
CS 4.2 by Koen Vanoppen


Re: Review Request 14522: [CLOUDSTACK-4771] Support Revert VM Disk from Snapshot

2013-10-09 Thread SuichII, Christopher
Well then, I think sending back a list of supported operations with volumes 
would be a good start. Eventually, this could be extended to have supported 
fields as well. While it does cost some overhead up front to load the supported 
operations from storage providers when listing volumes, I think it is simpler 
overall than introducing new APIs for querying for that information.

-- 
Chris Suich
chris.su...@netapp.com
NetApp Software Engineer
Data Center Platforms – Cloud Solutions
Citrix, Cisco & Red Hat

On Oct 9, 2013, at 3:45 PM, Mike Tutkowski  wrote:

> "Has there been any thoughts to allow storage providers to indicate which
> features they support?"
> 
> We talked about this for a while at the CloudStack Collaboration Conference
> in Santa Clara.
> 
> Right now, this is not supported and that's a serious problem.
> 
> This kind of ties in with Storage Tagging and how that is problematic, as
> well.
> 
> With Storage Tagging, there is no indication of what storage provider
> supports the Compute or Disk Offering in question and, as such, we don't
> know what fields to show to or hide from users.
> 
> 
> On Wed, Oct 9, 2013 at 1:32 PM, SuichII, Christopher > wrote:
> 
>> Just bumping this since there haven't been any responses.
>> 
>> Does anyone have any thoughts on this? I'm ready and prepared to do the
>> work, but I don't want to move on if people have concerns with this
>> approach or can think of a better solution.
>> 
>> -Chris
>> --
>> Chris Suich
>> chris.su...@netapp.com
>> NetApp Software Engineer
>> Data Center Platforms – Cloud Solutions
>> Citrix, Cisco & Red Hat
>> 
>> On Oct 8, 2013, at 4:53 PM, Chris Suich > chris.su...@netapp.com>> wrote:
>> 
>> This is an automatically generated e-mail. To reply, visit:
>> https://reviews.apache.org/r/14522/
>> 
>> 
>> On October 8th, 2013, 8:18 p.m. UTC, edison su wrote:
>> 
>> ui/scripts/storage.js<
>> https://reviews.apache.org/r/14522/diff/1/?file=362033#file362033line1763>
>> (Diff revision 1)
>> 
>> getActionFilter: function() {
>> 
>> 
>>1763
>> 
>>revertSnapshot: {
>> 
>> 
>> The ui change here, is there way to disable it from ui, if the storage
>> provider is not NetApp? Or move the ui change into your plugin?
>> 
>> This raises the question of whether people expect to see the revert
>> snapshot functionality for hypervisors or just storage providers. I figured
>> that the hypervisor functionality would be desired, but it sounds like that
>> may not be the case for all hypervisors.
>> 
>> Has there been any thoughts to allow storage providers to indicate which
>> features they support? Maybe part of the VolumeResponse can be a set of
>> flags for which operations are supported (take snapshot, revert snapshot,
>> etc.). This way, the UI can dynamically show/hide supported actions without
>> knowing who the volume's storage provider actually is. This should be a
>> fairly straight forward UI change, but would require adding methods to the
>> storage provider interface. If we don't want to always load this
>> information just for the VolumeResponse, we could expose new APIs to query
>> which operations are supported for a given volume, but we may not want to
>> go exposing APIs for this.
>> 
>> Any thoughts?
>> 
>> 
>> - Chris
>> 
>> 
>> On October 7th, 2013, 8:26 p.m. UTC, Chris Suich wrote:
>> 
>> Review request for cloudstack, Brian Federle and edison su.
>> By Chris Suich.
>> 
>> Updated Oct. 7, 2013, 8:26 p.m.
>> 
>> Repository: cloudstack-git
>> Description
>> 
>> After the last batch of work to the revertSnapshot API,
>> SnapshotServiceImpl was not tied into the workflow to be used by storage
>> providers. I have added the logic in a similar fashion to takeSnapshot(),
>> backupSnapshot() and deleteSnapshot().
>> 
>> I have also added a 'Revert to Snapshot' action to the volume snapshots
>> list in the UI.
>> 
>> 
>> Testing
>> 
>> I have tested all of this locally with a custom storage provider.
>> 
>> Unfortunately, I'm still in the middle of figuring out how to properly
>> unit test this type of code. If anyone has any recommendations, please let
>> me know.
>> 
>> 
>> Diffs
>> 
>>  *
>> api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java
>> (946eebd)
>>  *   client/WEB-INF/classes/resources/messages.properties (f92b85a)
>>  *   client/tomcatconf/commands.properties.in (58c770d)
>>  *
>> engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/SnapshotServiceImpl.java
>> (c09adca)
>>  *   server/src/com/cloud/server/ManagementServerImpl.java (0a0fcdc)
>>  *   server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java
>> (0b53cfd)
>>  *   ui/dictionary.jsp (f93f9dc)
>>  *   ui/scripts/storage.js (88fb9f2)
>> 
>> View Diff
>> 
>> 
>> 
> 
> 
> -- 
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the 

Fwd: [DISCUSS/PROPOSAL] Upgrading Driver Model

2013-10-09 Thread Mike Tutkowski
Hey Chris,

This e-mail chain might be of interest to you.

Talk to you later,
Mike

-- Forwarded message --
From: John Burwell 
Date: Tue, Aug 20, 2013 at 3:43 PM
Subject: [DISCUSS/PROPOSAL] Upgrading Driver Model
To: "dev@cloudstack.apache.org" 
Cc: Daan Hoogland , Hugo Trippaers <
htrippa...@schubergphilis.com>, "La Motta, David" 


All,

In capturing my thoughts on storage, my thinking backed into the driver
model.  While we have the beginnings of such a model today, I see the
following deficiencies:


   1. *Multiple Models*: The Storage, Hypervisor, and Security layers each
   have a slightly different model for allowing system functionality to be
   extended/substituted.  These differences increase the barrier of entry for
   vendors seeking to extend CloudStack and accrete code paths to be
   maintained and verified.
   2. *Leaky Abstraction*:  Plugins are registered through a Spring
   configuration file.  In addition to being operator unfriendly (most
   sysadmins are not Spring experts nor do they want to be), we expose the
   core bootstrapping mechanism to operators.  Therefore, a misconfiguration
   could negatively impact the injection/configuration of internal management
   server components.  Essentially handing them a loaded shotgun pointed at
   our right foot.
   3. *Nondeterministic Load/Unload Model*:  Because the core loading
   mechanism is Spring, the management has little control over the timing and
   order of component loading/unloading.  Changes to the Management Server's
   component dependency graph could break a driver by causing it to be started
   at an unexpected time.
   4. *Lack of Execution Isolation*: As a Spring component, plugins are
   loaded into the same execution context as core management server
   components.  Therefore, an errant plugin can corrupt the entire management
   server.


For next revision of the plugin/driver mechanism, I would like see us
migrate towards a standard pluggable driver model that supports all of the
management server's extension points (e.g. network devices, storage
devices, hypervisors, etc) with the following capabilities:


   - *Consolidated Lifecycle and Startup Procedure*:  Drivers share a
   common state machine and categorization (e.g. network, storage, hypervisor,
   etc) that permits the deterministic calculation of initialization and
   destruction order (i.e. network layer drivers -> storage layer drivers ->
   hypervisor drivers).  Plugin inter-dependencies would be supported between
   plugins sharing the same category.
   - *In-process Installation and Upgrade*: Adding or upgrading a driver
   does not require the management server to be restarted.  This capability
   implies a system that supports the simultaneous execution of multiple
   driver versions and the ability to suspend continued execution work on a
   resource while the underlying driver instance is replaced.
   - *Execution Isolation*: The deployment packaging and execution
   environment supports different (and potentially conflicting) versions of
   dependencies to be simultaneously used.  Additionally, plugins would be
   sufficiently sandboxed to protect the management server against driver
   instability.
   - *Extension Data Model*: Drivers provide a property bag with a metadata
   descriptor to validate and render vendor specific data.  The contents of
   this property bag will provided to every driver operation invocation at
   runtime.  The metadata descriptor would be a lightweight description that
   provides a label resource key, a description resource key, data type
   (string, date, number, boolean), required flag, and optional length limit.
   - *Introspection: Administrative APIs/UIs allow operators to understand
   the configuration of the drivers in the system, their configuration, and
   their current state.*
   - *Discoverability*: Optionally, drivers can be discovered via a project
   repository definition (similar to Yum) allowing drivers to be remotely
   acquired and operators to be notified regarding update availability.  The
   project would also provide, free of charge, certificates to sign plugins.
This mechanism would support local mirroring to support air gapped
   management networks.


Fundamentally, I do not want to turn CloudStack into an erector set with
more screws than nuts which is a risk with highly pluggable architectures.
 As such, I think we would need to tightly bound the scope of drivers and
their behaviors to prevent the loss system usability and stability.  My
thinking is that drivers would be packaged into a custom JAR, CAR
(CloudStack ARchive), that would be structured as followed:


   - META-INF
  - MANIFEST.MF
  - driver.yaml (driver metadata(e.g. version, name, description, etc)
  serialized in YAML format)
  - LICENSE (a text file containing the driver's license)
   - lib (driver dependencies)
   - classes (driver implementation)
   - resources (driver message files a

Re: Review Request 14522: [CLOUDSTACK-4771] Support Revert VM Disk from Snapshot

2013-10-09 Thread Mike Tutkowski
Just sent you an e-mail chain under the subject: [DISCUSS/PROPOSAL]
Upgrading Driver Model


On Wed, Oct 9, 2013 at 2:17 PM, SuichII, Christopher  wrote:

> Well then, I think sending back a list of supported operations with
> volumes would be a good start. Eventually, this could be extended to have
> supported fields as well. While it does cost some overhead up front to load
> the supported operations from storage providers when listing volumes, I
> think it is simpler overall than introducing new APIs for querying for that
> information.
>
> --
> Chris Suich
> chris.su...@netapp.com
> NetApp Software Engineer
> Data Center Platforms – Cloud Solutions
> Citrix, Cisco & Red Hat
>
> On Oct 9, 2013, at 3:45 PM, Mike Tutkowski 
> wrote:
>
> > "Has there been any thoughts to allow storage providers to indicate which
> > features they support?"
> >
> > We talked about this for a while at the CloudStack Collaboration
> Conference
> > in Santa Clara.
> >
> > Right now, this is not supported and that's a serious problem.
> >
> > This kind of ties in with Storage Tagging and how that is problematic, as
> > well.
> >
> > With Storage Tagging, there is no indication of what storage provider
> > supports the Compute or Disk Offering in question and, as such, we don't
> > know what fields to show to or hide from users.
> >
> >
> > On Wed, Oct 9, 2013 at 1:32 PM, SuichII, Christopher <
> chris.su...@netapp.com
> >> wrote:
> >
> >> Just bumping this since there haven't been any responses.
> >>
> >> Does anyone have any thoughts on this? I'm ready and prepared to do the
> >> work, but I don't want to move on if people have concerns with this
> >> approach or can think of a better solution.
> >>
> >> -Chris
> >> --
> >> Chris Suich
> >> chris.su...@netapp.com
> >> NetApp Software Engineer
> >> Data Center Platforms – Cloud Solutions
> >> Citrix, Cisco & Red Hat
> >>
> >> On Oct 8, 2013, at 4:53 PM, Chris Suich  >> chris.su...@netapp.com>> wrote:
> >>
> >> This is an automatically generated e-mail. To reply, visit:
> >> https://reviews.apache.org/r/14522/
> >>
> >>
> >> On October 8th, 2013, 8:18 p.m. UTC, edison su wrote:
> >>
> >> ui/scripts/storage.js<
> >>
> https://reviews.apache.org/r/14522/diff/1/?file=362033#file362033line1763>
> >> (Diff revision 1)
> >>
> >> getActionFilter: function() {
> >>
> >>
> >>1763
> >>
> >>revertSnapshot: {
> >>
> >>
> >> The ui change here, is there way to disable it from ui, if the storage
> >> provider is not NetApp? Or move the ui change into your plugin?
> >>
> >> This raises the question of whether people expect to see the revert
> >> snapshot functionality for hypervisors or just storage providers. I
> figured
> >> that the hypervisor functionality would be desired, but it sounds like
> that
> >> may not be the case for all hypervisors.
> >>
> >> Has there been any thoughts to allow storage providers to indicate which
> >> features they support? Maybe part of the VolumeResponse can be a set of
> >> flags for which operations are supported (take snapshot, revert
> snapshot,
> >> etc.). This way, the UI can dynamically show/hide supported actions
> without
> >> knowing who the volume's storage provider actually is. This should be a
> >> fairly straight forward UI change, but would require adding methods to
> the
> >> storage provider interface. If we don't want to always load this
> >> information just for the VolumeResponse, we could expose new APIs to
> query
> >> which operations are supported for a given volume, but we may not want
> to
> >> go exposing APIs for this.
> >>
> >> Any thoughts?
> >>
> >>
> >> - Chris
> >>
> >>
> >> On October 7th, 2013, 8:26 p.m. UTC, Chris Suich wrote:
> >>
> >> Review request for cloudstack, Brian Federle and edison su.
> >> By Chris Suich.
> >>
> >> Updated Oct. 7, 2013, 8:26 p.m.
> >>
> >> Repository: cloudstack-git
> >> Description
> >>
> >> After the last batch of work to the revertSnapshot API,
> >> SnapshotServiceImpl was not tied into the workflow to be used by storage
> >> providers. I have added the logic in a similar fashion to
> takeSnapshot(),
> >> backupSnapshot() and deleteSnapshot().
> >>
> >> I have also added a 'Revert to Snapshot' action to the volume snapshots
> >> list in the UI.
> >>
> >>
> >> Testing
> >>
> >> I have tested all of this locally with a custom storage provider.
> >>
> >> Unfortunately, I'm still in the middle of figuring out how to properly
> >> unit test this type of code. If anyone has any recommendations, please
> let
> >> me know.
> >>
> >>
> >> Diffs
> >>
> >>  *
> >>
> api/src/org/apache/cloudstack/api/command/user/snapshot/RevertSnapshotCmd.java
> >> (946eebd)
> >>  *   client/WEB-INF/classes/resources/messages.properties (f92b85a)
> >>  *   client/tomcatconf/commands.properties.in (58c770d)
> >>  *
> >>
> engine/storage/snapshot/src/org/apache/cloudstack/storage/snapshot/SnapshotServiceImpl.java
> >> (c09adca)
> >>  *   server/src/com/clo

Re: what's the reason for the placeholder nic in VPC/VR?

2013-10-09 Thread Alena Prokharchyk
Valid question and concern. Here is the history:

V1.0 No notion of the NaaS yet, no networks/nics tables; guest type and
all network related info - public ip, mac, netmask - for the VR is stored
in domain_router table

V2.1 NaaS added. Although each router has entries in the nics table, and
its possible to retrieve the guest network info from there, network_id
holding guestNetworkId for the VR, was added to the domain_router table. I
can't recall who added it, and the reason why it was done this way, but
most of the code for the VR reads guest network info from
domain_router.network_id. And it's always 1-1 relationship between VR and
guest network. We also continue storing public_ip/netmask/mac_address of
the VR public nic in domain_router table.

V3.0 VPC was added. Now VR can have more than one network, so we created
this table to maintain VR to guest network refs there, and moved
network_id to VR ref from domain_router table to router_network_ref. I
agree the right fix should have been - change all the code reading
network_id info from domain_router, to read it from nics instead. But back
then we just needed the quick fix for VPC to support 1 VR - n guest
networks case, and didn't want to introduce the regressions in existing
code, so we've done it this way.

Following needs to be fixed (DB upgrade too)

* get rid of router_network_ref table and always retrieve guest network
info from nics table.
* vm_instance/domain_router/console_proxy/secondary_storage_vm table - get
rid of 
private_ip_address/private_mac_address/public_mac_address/public_mac_addres
s fields that exist since 1.0. Again, because all this info can be
retrieved via nics. And right now we just duplicate this data across
vm_instance/domain_router/console_proxy/secondary_storage_vm tables, and
sometimes read it from there, and sometimes from nics. We should always
read it from nics, as this table has the most reliable and current info.


-Alena.


On 10/9/13 12:59 PM, "Darren Shepherd"  wrote:

>Okay, that makes sense.  Another random question.  Why does
>router_network_ref exist?  Is it not sufficient to just find a
>DomainRouter VM that has a nic attached to the network?  It seems to
>be for RvR?  Is that right?  Still I don't understand why the table is
>needed.
>
>Darren
>
>On Wed, Oct 9, 2013 at 9:50 AM, Alena Prokharchyk
> wrote:
>> I've just tested it on the latest master, don't see placeholder nic
>> created for the VPC VR.
>>
>> In addition to the case Murali explained, placeholder nic is being
>>created
>> per Shared network case using VR as DHCP provider. Its done to preserve
>> the same ip address for the case when VR is being expunged/re-created
>> during the network restart/Vrdestroy. As a result of expunge VR its nic
>>is
>> being cleaned up - and ip released - , so we had to make sure that the
>>new
>> VR would get the same ip. More details are in
>> 26b892daf3cdccc2e25711730c7e1efcdec7d2dc, CLOUDSTACK-1771.
>>
>> -Alena.
>>
>>
>> On 10/9/13 2:57 AM, "Murali Reddy"  wrote:
>>
>>>On 09/10/13 11:33 AM, "Darren Shepherd" 
>>>wrote:
>>>
Why is a placeholder nic created before the VRs for the VPC are
created?

Darren

>>>
>>>Generally place holder nic is used in cases where cloudstack uses a
>>>subnet
>>>IP from the guest subnet, but ip is not used for any VM nic's. In most
>>>of
>>>the external network devices, needs a subnet IP from the guest network
>>>CIDR, cloudstack creates a place holder nic and allocates a subnet ip.
>>>
>>>
>>
>>
>




Re: Doc Updates

2013-10-09 Thread Daan Hoogland
Great rant Carlos,

You should get it to the dev list. Actually I'll add the dev list in
now. It makes sense to update the docs also after a release, when bug
in the docs are found these can easily be changed without a full
release cycle of the code itself.

regards,
Daan

On Wed, Oct 9, 2013 at 10:24 PM, Carlos Reategui  wrote:
> It seems like the only way that docs (
> http://cloudstack.apache.org/docs/en-US/index.html) are updated is when a
> release is done.  Is it not possible to have these updated otherwise?
>  Waiting for the next patch release of the software so that the docs get
> updated is causing problems with folks not being able to get CloudStack
> installed properly and therefore gives them a bad impression of the
> maturity of CloudStack.
>
> It makes no sense to me why there are multiple versions of documents for
> each of the point releases (currently there is 4.0.0, 4.0.1, 4.0.2, 4.1.0,
> 4.1.1 and 4.0.2 docs) when the feature set has not changed within each of
> these.  I understand that the docs are built as part of the build and
> release process but why does that have to impact the rate at which the
> primary doc site is updated.  Can't the patch releases simply update the
> release notes?  Personally I think there should be a single 4.x version of
> the docs (I would be ok with a 4.0, 4.1 and 4.2 versions too if major
> features are going to be added to them).  Maybe the doc site should have
> wiki like capabilities so that it can be more easily maintained.
>
> ok, I am done ranting...


Re: [DISCUSS/PROPOSAL] Upgrading Driver Model

2013-10-09 Thread John Burwell
Kelven,

As I stated in my proposal, I think it is important to recognize the 
distinction between components that control/interact with infrastructure and 
components that represent orchestration abstractions/mechanisms within the 
management server.  Currently, these two concepts are conflated -- complicating 
the effort to modularize the system.  Therefore, in my view, any effort going 
forward must make this important distinction.

The first type, in my vocabulary, are device drivers.  In my view, these are 
essential system extension points that require greater modularity and isolation 
due to their potential to require conflicting dependencies and their external 
QA.  My proposal pertains only to these types of components, and I think it 
important to continue the discussion as it has far reaching implications to 
both the system architecture and our release process.  IN particular, I think 
it is possible for us to achieve the ability to have completely segregated 
device drivers shipped separately from CloudStack releases.  

Thanks,
-John

On Sep 9, 2013, at 1:49 PM, Kelven Yang  wrote:

> John,
> 
> I understand. The effort we did in 4.1 was mainly to free developers from
> the needs to work at low-level plumbing layer, prior to 4.1, not every
> developer knows how to modify ComponentLocator safely, switching to a
> standard framework can let us focus on Cloud operating business logic.
> 
> Breaking CloudStack into a more modularized architecture is a long journey
> which we are still striving to get there, Daren's work will again bring us
> one step closer, I think this incremental refactoring approach can help
> reduce the turbulence during the flight and ensure smoother releases along
> the way.
> 
> kelven
> 
> 
> On 8/25/13 8:35 PM, "John Burwell"  wrote:
> 
>> Kelven,
>> 
>> Please don't take my proposal as a criticism of the approach taken in
>> 4.1.  I think the current model is a big improvement over the previous
>> approach.  Given the time constraints and ambitions of that work, I think
>> it was a solid, pragmatic first step.  I believe we are at a point to
>> assess our needs, and determine a good next step that (hopefully) further
>> improves the model.
>> 
>> Thanks,
>> -John
>> 
>> On Aug 22, 2013, at 7:44 PM, Kelven Yang  wrote:
>> 
>>> Spring is not meant to be used as a solution for run-time "plug-ins".
>>> Darren is correct that Spring XML should be treated as code (ideal place
>>> for it is the resource section inside the jar). Why we end up the way
>>> now
>>> is mainly for practical reason. Since most of our current pluggable
>>> features are not yet designed to be fully run-time loadable, most of
>>> them
>>> have compile time linkage to other framework components that are solved
>>> at
>>> loading time by Spring.
>>> 
>>> Only after we have cleaned up all these tightly coupled loading time
>>> bindings, can we have a much simpler plugin configuration. And this
>>> run-time loadable framework does not necessary to be based on any
>>> complex
>>> ones (i.e., OSGi).
>>> 
>>> Kelven 
>>> 
>>> On 8/21/13 8:42 AM, "Darren Shepherd" 
>>> wrote:
>>> 
 I also agree with this.  Spring XML should always be treated as code
 not
 really configuration.  It's not good to have a sysadmin touch spring
 config and frankly it's just mean to force them to.
 
 I would ideally like to see that registering a module is as simple as
 putting a jar in a directory.  If its in the directory it gets loaded.
 Then additionally you should have a way such that you can explicitly
 tell
 it not to load modules based on some configuration.  That way, if for
 some reason moving the jar is not possible, you can still disallow it.
 
 So for example the directory based approach works well with rpm/deb's
 so
 "yum install mycoolplugin" will just place jar somewhere.  But say your
 troubleshooting or whatever, you don't really want to have to do "yum
 remove..." just to troubleshoot.  It would be nice to just edit some
 file
 and say "plugin.mycoolplugin.load=false" (or env variable or whatever)
 
 Darren
 
 On Aug 21, 2013, at 6:51 AM, Prasanna Santhanam  wrote:
 
> On Tue, Aug 20, 2013 at 05:43:17PM -0400, John Burwell wrote:
>> Leaky Abstraction:  Plugins are registered through a Spring
>> configuration file.  In addition to being operator unfriendly (most
>> sysadmins are not Spring experts nor do they want to be), we expose
>> the core bootstrapping mechanism to operators.  Therefore, a
>> misconfiguration could negatively impact the injection/configuration
>> of internal management server components.  Essentially handing them
>> a loaded shotgun pointed at our right foot.
> 
> This has been my pet-peeve too and I was told you can write properties
> files
> above the spring contexts to make it simpler for operators to look at.
> 
> Overall a great proposal a

Re: Review Request 14320: add boolean option httpModeEnabled to the service offering for use in haproxy conf

2013-10-09 Thread Daan Hoogland
H Chiradeep,

Would you considder this if I rename the option to keepAlive, adding a
API description field stating that it only has effect on HAProxy (for
now)?

regards,
Daan

On Tue, Oct 1, 2013 at 10:50 AM, Daan Hoogland  wrote:
> Ok Chiradeep,
>
> I see where you worries are. I'll study the stickiness implementation. If it
> is not a zone wide thing I'll consider it.
>
> I disagree that the feature is implementation specific. The tuning is. And
> the tuning the feature are not the same. The abstraction of the feature
> httpClose, which is only implemented by haproxy (let's assume) as a set of
> options is the reason for someone to choose for this implementation of a
> load balancer. This should be leveraged.
>
> Actually in the Schuberg Philis implementation it must. The solution that is
> now done at the actual site is hacked into the running VR. This will then
> lead to an emergency if the router is recreated for some reason.
>
> regards,
> Daan
>
> On Mon, Sep 30, 2013 at 11:50 PM, Chiradeep Vittal
>  wrote:
>>
>> My point is that it is a tuning that is specific for HAProxy and shouldn't
>> be exposed in an abstraction like the CS API.
>> (After all, how do I choose, as an end-user Offering A with httpClose or
>> offering B without httpClose). If there is another desirable feature Y in
>> Netscaler, do you anticipate changing another dozen files for that
>> feature?
>>
>> If you look at the stickiness policy feature, it isn't tied to the service
>> offering despite there being some differences between stickiness
>> capabilities between different LB providers.
>>
>>
>>
>> On 9/28/13 4:18 AM, "Daan Hoogland"  wrote:
>>
>> >Chiradeep,
>> >
>> >the network offerings are created by the cloud operator aren't they? The
>> >netscaler  en f5 modules will have to implement it's own behavior on
>> >httpClose. in case of haproxy it means no mode http and option httpclose
>> >(and some other things)
>> >
>> >If you define it zone wide every tenant has the same setting whilst you
>> >want this to tune setting (like with maxConnections) for a tenant.
>> >
>> >regards,
>> >Daan
>> >
>> >
>> >On Thu, Sep 26, 2013 at 10:57 PM, Chiradeep Vittal
>> >wrote:
>> >
>> >>This is an automatically generated e-mail. To reply, visit:
>> >> https://reviews.apache.org/r/14320/
>> >>
>> >> Not sure if this should be in the API since it is a HAProxy-specific
>> >>configuration. This wouldn't apply to Netscaler or F5.
>> >> After all the end user has no idea if he is using HAProxy of Netscaler
>> >>or F5.
>> >>
>> >> Likely this flag is of interest to the cloud operator only, so why not
>> >>put it in zone-wide config instead of the network offering.
>> >> Do you really see someone creating 2 offerings: one with HttpClose and
>> >>one without HttpClose?
>> >>
>> >>
>> >> - Chiradeep Vittal
>> >>
>> >> On September 26th, 2013, 7:01 p.m. UTC, daan Hoogland wrote:
>> >>   Review request for cloudstack and Wei Zhou.
>> >> By daan Hoogland.
>> >>
>> >> *Updated Sept. 26, 2013, 7:01 p.m.*
>> >>  *Bugs: * CLOUDSTACK-4328
>> >>  *Repository: * cloudstack-git
>> >> Description
>> >>
>> >> add boolean option httpModeEnabled to the service offering for use in
>> >>haproxy conf
>> >>
>> >>   Testing
>> >>
>> >> created unit test.
>> >> instantiated a network with some loadbalancer rule based on a netoffer
>> >>with the option to true/false and maxconnections to a non default value
>> >>-> checked haproxy.cfg on the router
>> >>
>> >>   Diffs
>> >>
>> >>- api/src/com/cloud/offering/NetworkOffering.java (6c5573e)
>> >>- api/src/org/apache/cloudstack/api/ApiConstants.java (f85784b)
>> >>-
>>
>> >> >>api/src/org/apache/cloudstack/api/command/admin/network/CreateNetworkOffe
>> >>ringCmd.java
>> >>(bdad904)
>> >>-
>>
>> >> >>api/src/org/apache/cloudstack/api/command/admin/network/UpdateNetworkOffe
>> >>ringCmd.java
>> >>(c9c4c8a)
>> >>-
>> >> core/src/com/cloud/agent/api/routing/LoadBalancerConfigCommand.java
>> >>(ee29290)
>> >>- core/src/com/cloud/network/HAProxyConfigurator.java (2309125)
>> >>- core/test/com/cloud/network/HAProxyConfiguratorTest.java
>> >>(PRE-CREATION)
>> >>-
>>
>> >> >>engine/components-api/src/com/cloud/configuration/ConfigurationManager.ja
>> >>va
>> >>(5e1b9b5)
>> >>-
>>
>> >> >>engine/orchestration/src/org/apache/cloudstack/engine/orchestration/Netwo
>> >>rkOrchestrator.java
>> >>(53f64fd)
>> >>- engine/schema/src/com/cloud/offerings/NetworkOfferingVO.java
>> >>(eefdc94)
>> >>-
>>
>> >> >>plugins/network-elements/elastic-loadbalancer/src/com/cloud/network/lb/El
>> >>asticLoadBalancerManagerImpl.java
>> >>(ecd6006)
>> >>-
>>
>> >> >>plugins/network-elements/internal-loadbalancer/src/org/apache/cloudstack/
>> >>network/lb/InternalLoadBalancerVMManagerImpl.java
>> >>(587ae99)
>> >>- server/src/com/cloud/configuration/ConfigurationManagerImpl.java
>> >>(8a0f7a6)
>> >>-
>>
>> >> >>server/src/com/cloud/network/router/VirtualNetwork

Re: LXC and Networking

2013-10-09 Thread Phong Nguyen
The LXC cluster does not require a zone of it's own, so you should be able
to run an LXC cluster with other clusters. Regarding the LXC resource
manager issue, it sounds like you're missing something during server
initialization.

Under the hood, LXC uses Libvirt and leverages most of the KVM Libvirt
code. So any networking abilities that were possible with KVM should work
with LXC. I'm not too familiar with OVS + Cloudstack, so maybe someone from
the community can help there.

-Phong


On Wed, Oct 9, 2013 at 10:09 AM, Chip Childers wrote:

> Adding Phong, who was the original author of the LXC plugin.  Phong, can
> you help Francois out?
>
> -chip
>
> On Wed, Oct 09, 2013 at 10:04:31AM -0400, Francois Gaudreault wrote:
> > I posted on the users list, but no one responded. I am trying here :)
> >
> > In addition to this, I tried to add an LXC cluster in an existing
> > Zone, and I got an error about the LXC resource manager not being
> > found.
> > >
> > >I do have some questions regarding LXC containers and the
> > >networking. First, should I put the LXC clusters on a separate
> > >zone or I can use an existing zone (which I built for Xen) and
> > >just create a new LXC cluster? Second, I saw in the doc that
> > >bridges are manually created... what happens if I have
> > >hundreds/thousands of guests VLANs? Will the agent automate that
> > >part (planning to use OVS here)?
> > >
> > Thanks!
> >
> > --
> > Francois Gaudreault
> > Architecte de Solution Cloud | Cloud Solutions Architect
> > fgaudrea...@cloudops.com
> > 514-629-6775
> > - - -
> > CloudOps
> > 420 rue Guy
> > Montréal QC  H3J 1S6
> > www.cloudops.com
> > @CloudOps_
> >
> >
>


RE: Doc Updates

2013-10-09 Thread Christopher M. Ryan
+1 on this. 
I find management hard to please when I persuade changing to a new technology 
only to have issues related to documentation. This prolongs deployment and 
doesn't help with the already difficult management decision. It took us a month 
to switch to CloudStack and almost a week to begin defending the choice because 
of outdated documentation. This was of course before the donation to apache, 
since then it's been a lot easier and management isn't so concerned. but none 
the less, publicly facing documentation, I feel, should be kept current, to 
include bug fixes. 

Chris Ryan
Harmonia Holdings Group, LLC
404 People Place, Suite 402
Charlottesville, VA 22911
Office: (434) 244-4002



-Original Message-
From: Daan Hoogland [mailto:daan.hoogl...@gmail.com] 
Sent: Wednesday, October 09, 2013 4:34 PM
To: us...@cloudstack.apache.org; car...@reategui.com; dev
Subject: Re: Doc Updates

Great rant Carlos,

You should get it to the dev list. Actually I'll add the dev list in now. It 
makes sense to update the docs also after a release, when bug in the docs are 
found these can easily be changed without a full release cycle of the code 
itself.

regards,
Daan

On Wed, Oct 9, 2013 at 10:24 PM, Carlos Reategui  wrote:
> It seems like the only way that docs (
> http://cloudstack.apache.org/docs/en-US/index.html) are updated is 
> when a release is done.  Is it not possible to have these updated otherwise?
>  Waiting for the next patch release of the software so that the docs 
> get updated is causing problems with folks not being able to get 
> CloudStack installed properly and therefore gives them a bad 
> impression of the maturity of CloudStack.
>
> It makes no sense to me why there are multiple versions of documents 
> for each of the point releases (currently there is 4.0.0, 4.0.1, 
> 4.0.2, 4.1.0,
> 4.1.1 and 4.0.2 docs) when the feature set has not changed within each 
> of these.  I understand that the docs are built as part of the build 
> and release process but why does that have to impact the rate at which 
> the primary doc site is updated.  Can't the patch releases simply 
> update the release notes?  Personally I think there should be a single 
> 4.x version of the docs (I would be ok with a 4.0, 4.1 and 4.2 
> versions too if major features are going to be added to them).  Maybe 
> the doc site should have wiki like capabilities so that it can be more easily 
> maintained.
>
> ok, I am done ranting...


Re: questions about registerIso API and updateIsoPermissions API

2013-10-09 Thread Nitin Mehta
I think (1) is the right way to go.

From: Jessica Wang mailto:jessica.w...@citrix.com>>
Date: Wednesday 9 October 2013 12:47 PM
To: Nitin Mehta mailto:nitin.me...@citrix.com>>, 
"mailto:dev@cloudstack.apache.org>>" 
mailto:dev@cloudstack.apache.org>>, Alena 
Prokharchyk mailto:alena.prokharc...@citrix.com>>
Cc: Shweta Agarwal mailto:shweta.agar...@citrix.com>>
Subject: RE: questions about registerIso API and updateIsoPermissions API

Currently, at API level, a normal user is not allowed to specify “isfeatured” 
when registering ISO (API will ignore “isfeatured” parameter when a normal user 
passes it),
but a normal user is allowed to specify “isfeatured” when updating ISO.

Should we fix API to:
(1) allow a normal user to specify “isfeatured” when registering ISO (API won’t 
ignore “isfeatured” parameter when a normal user passes it)

OR

(2) disallow a normal user to specify “isfeatured” when updating ISO

?


p.s. I’ll do corresponding UI change after API is fixed.


From: Jessica Wang
Sent: Wednesday, October 09, 2013 11:01 AM
To: Nitin Mehta; mailto:dev@cloudstack.apache.org>>
Cc: Alena Prokharchyk; Shweta Agarwal
Subject: RE: questions about registerIso API and updateIsoPermissions API

Nitin,

>  At the moment, I think that for Isos we should allow to edit it so would 
> call it an API bug.
Thanks.

> Register Iso does provide an option to mark an ISO featured. I see that in 
> the latest master.
That only works for admin, but NOT normal user.

If you log in as a normal user, then pass “isfeatured=true” to registerIso API, 
API will ignore it.
The newly registered template will have “isfeatured: false”.

e.g.
http://10.215.3.26:8080/client/api?command=registerIso&response=json&sessionkey=u%2FVIHPJuPohidGKFd0lh6csG%2BfM%3D&name=normalUserIso1&displayText=normalUserIso1&url=http%3A%2F%2F10.223.110.231%2Fisos_64bit%2Fdummy.iso&zoneid=6bcd3bd9-591c-4d99-a164-d05b87df1b04&isfeatured=true&isextractable=false&bootable=true&osTypeId=b8cbfd6c-2d40-11e3-86aa-3c970e739c3e&ispublic=false&_=1381340961641
{
"registerisoresponse": {
"count": 1,
"iso": [
{
"id": "9b903876-f17c-4634-8463-8e3025259956",
"name": "normalUserIso1",
"displaytext": "normalUserIso1",
"ispublic": false,
"created": "2013-10-09T10:52:38-0700",
"isready": false,
"bootable": true,
"isfeatured": false,
"crossZones": false,
"ostypeid": "b8cbfd6c-2d40-11e3-86aa-3c970e739c3e",
"ostypename": "Apple Mac OS X 10.6 (32-bit)",
"account": "aaa_user",
"zoneid": "6bcd3bd9-591c-4d99-a164-d05b87df1b04",
"zonename": "jw-adv",
"status": "",
"domain": "aaa",
"domainid": "47b09d73-84ef-48dc-9b73-1720bad600cb",
"isextractable": false,
"tags": []
}
]
}
}

Jessica

From: Nitin Mehta
Sent: Tuesday, October 08, 2013 5:27 PM
To: Jessica Wang; mailto:dev@cloudstack.apache.org>>
Cc: Alena Prokharchyk; Shweta Agarwal
Subject: Re: questions about registerIso API and updateIsoPermissions API

Answers inline.

From: Jessica Wang mailto:jessica.w...@citrix.com>>
Date: Tuesday 8 October 2013 5:10 PM
To: "mailto:dev@cloudstack.apache.org>>" 
mailto:dev@cloudstack.apache.org>>
Cc: Alena Prokharchyk 
mailto:alena.prokharc...@citrix.com>>, Nitin 
Mehta mailto:nitin.me...@citrix.com>>, Shweta Agarwal 
mailto:shweta.agar...@citrix.com>>
Subject: questions about registerIso API and updateIsoPermissions API

Hi,

I have questions about registerIso API and updateIsoPermissions API.

(1) A normal user is allowed to specify isextractable property when registering 
an ISO (through registerIso API),
but NOT allowed to update isextractable property when updating an ISO (through 
updateIsoPermissions API).
Is this by design or it's just an API bug?

Nitin>> This is a grey area. This was done for templates (Isos just inherited 
it) because derived templates may or may not belong to the same user and we 
want to follow the principle of least privilege.
At the moment, I think that for Isos we should allow to edit it so would call 
it an API bug.

(2) A normal user is NOT allowed to specify isfeatured property when 
registering an ISO (through registerIso API),
but allowed to update isfeatured property when updating an ISO (through 
updateIsoPermissions API)?
Is this by design or it's just an API bug?

Nitin>> Register Iso does provide an option to mark an ISO featured. I see that 
in the latest master.

Jessica


Re: LXC and Networking

2013-10-09 Thread Francois Gaudreault

Thanks Phong about the networking answers. I will follow the KVM docs.

Regarding the resource manager, we did compile the nonoss package on our 
own.  Are you saying we require an extra deps?



--
Francois Gaudreault
Architecte de Solution Cloud | Cloud Solutions Architect
fgaudrea...@cloudops.com
514-629-6775
- - -
CloudOps
420 rue Guy
Montréal QC  H3J 1S6
www.cloudops.com
@CloudOps_



Re: [DISCUSS/PROPOSAL] Upgrading Driver Model

2013-10-09 Thread SuichII, Christopher
Interesting. I'm not sure how I missed this thread... I'll try to chime in 
where I can, then. However, everything going on in here sounds like work for 
post-4.3, but if we are adding revert volume snapshot to 4.3, we will need a 
solution to that before then. It seems like the idea I've got for this is 
fairly lightweight and could be either extended or removed depending on what 
comes out of that discussion. If there are other ideas, I'm more than happy to 
continue discussions.

-Chris
-- 
Chris Suich
chris.su...@netapp.com
NetApp Software Engineer
Data Center Platforms – Cloud Solutions
Citrix, Cisco & Red Hat

On Oct 9, 2013, at 4:39 PM, John Burwell  wrote:

> Kelven,
> 
> As I stated in my proposal, I think it is important to recognize the 
> distinction between components that control/interact with infrastructure and 
> components that represent orchestration abstractions/mechanisms within the 
> management server.  Currently, these two concepts are conflated -- 
> complicating the effort to modularize the system.  Therefore, in my view, any 
> effort going forward must make this important distinction.
> 
> The first type, in my vocabulary, are device drivers.  In my view, these are 
> essential system extension points that require greater modularity and 
> isolation due to their potential to require conflicting dependencies and 
> their external QA.  My proposal pertains only to these types of components, 
> and I think it important to continue the discussion as it has far reaching 
> implications to both the system architecture and our release process.  IN 
> particular, I think it is possible for us to achieve the ability to have 
> completely segregated device drivers shipped separately from CloudStack 
> releases.  
> 
> Thanks,
> -John
> 
> On Sep 9, 2013, at 1:49 PM, Kelven Yang  wrote:
> 
>> John,
>> 
>> I understand. The effort we did in 4.1 was mainly to free developers from
>> the needs to work at low-level plumbing layer, prior to 4.1, not every
>> developer knows how to modify ComponentLocator safely, switching to a
>> standard framework can let us focus on Cloud operating business logic.
>> 
>> Breaking CloudStack into a more modularized architecture is a long journey
>> which we are still striving to get there, Daren's work will again bring us
>> one step closer, I think this incremental refactoring approach can help
>> reduce the turbulence during the flight and ensure smoother releases along
>> the way.
>> 
>> kelven
>> 
>> 
>> On 8/25/13 8:35 PM, "John Burwell"  wrote:
>> 
>>> Kelven,
>>> 
>>> Please don't take my proposal as a criticism of the approach taken in
>>> 4.1.  I think the current model is a big improvement over the previous
>>> approach.  Given the time constraints and ambitions of that work, I think
>>> it was a solid, pragmatic first step.  I believe we are at a point to
>>> assess our needs, and determine a good next step that (hopefully) further
>>> improves the model.
>>> 
>>> Thanks,
>>> -John
>>> 
>>> On Aug 22, 2013, at 7:44 PM, Kelven Yang  wrote:
>>> 
 Spring is not meant to be used as a solution for run-time "plug-ins".
 Darren is correct that Spring XML should be treated as code (ideal place
 for it is the resource section inside the jar). Why we end up the way
 now
 is mainly for practical reason. Since most of our current pluggable
 features are not yet designed to be fully run-time loadable, most of
 them
 have compile time linkage to other framework components that are solved
 at
 loading time by Spring.
 
 Only after we have cleaned up all these tightly coupled loading time
 bindings, can we have a much simpler plugin configuration. And this
 run-time loadable framework does not necessary to be based on any
 complex
 ones (i.e., OSGi).
 
 Kelven 
 
 On 8/21/13 8:42 AM, "Darren Shepherd" 
 wrote:
 
> I also agree with this.  Spring XML should always be treated as code
> not
> really configuration.  It's not good to have a sysadmin touch spring
> config and frankly it's just mean to force them to.
> 
> I would ideally like to see that registering a module is as simple as
> putting a jar in a directory.  If its in the directory it gets loaded.
> Then additionally you should have a way such that you can explicitly
> tell
> it not to load modules based on some configuration.  That way, if for
> some reason moving the jar is not possible, you can still disallow it.
> 
> So for example the directory based approach works well with rpm/deb's
> so
> "yum install mycoolplugin" will just place jar somewhere.  But say your
> troubleshooting or whatever, you don't really want to have to do "yum
> remove..." just to troubleshoot.  It would be nice to just edit some
> file
> and say "plugin.mycoolplugin.load=false" (or env variable or whatever)
> 
> Darren
> 
> On Aug 21, 2013, at 6:51 AM, 

Re: Review Request 14320: add boolean option httpModeEnabled to the service offering for use in haproxy conf

2013-10-09 Thread Chiradeep Vittal
Sure.

On 10/9/13 1:55 PM, "Daan Hoogland"  wrote:

>H Chiradeep,
>
>Would you considder this if I rename the option to keepAlive, adding a
>API description field stating that it only has effect on HAProxy (for
>now)?
>
>regards,
>Daan
>
>On Tue, Oct 1, 2013 at 10:50 AM, Daan Hoogland 
>wrote:
>> Ok Chiradeep,
>>
>> I see where you worries are. I'll study the stickiness implementation.
>>If it
>> is not a zone wide thing I'll consider it.
>>
>> I disagree that the feature is implementation specific. The tuning is.
>>And
>> the tuning the feature are not the same. The abstraction of the feature
>> httpClose, which is only implemented by haproxy (let's assume) as a set
>>of
>> options is the reason for someone to choose for this implementation of a
>> load balancer. This should be leveraged.
>>
>> Actually in the Schuberg Philis implementation it must. The solution
>>that is
>> now done at the actual site is hacked into the running VR. This will
>>then
>> lead to an emergency if the router is recreated for some reason.
>>
>> regards,
>> Daan
>>
>> On Mon, Sep 30, 2013 at 11:50 PM, Chiradeep Vittal
>>  wrote:
>>>
>>> My point is that it is a tuning that is specific for HAProxy and
>>>shouldn't
>>> be exposed in an abstraction like the CS API.
>>> (After all, how do I choose, as an end-user Offering A with httpClose
>>>or
>>> offering B without httpClose). If there is another desirable feature Y
>>>in
>>> Netscaler, do you anticipate changing another dozen files for that
>>> feature?
>>>
>>> If you look at the stickiness policy feature, it isn't tied to the
>>>service
>>> offering despite there being some differences between stickiness
>>> capabilities between different LB providers.
>>>
>>>
>>>
>>> On 9/28/13 4:18 AM, "Daan Hoogland"  wrote:
>>>
>>> >Chiradeep,
>>> >
>>> >the network offerings are created by the cloud operator aren't they?
>>>The
>>> >netscaler  en f5 modules will have to implement it's own behavior on
>>> >httpClose. in case of haproxy it means no mode http and option
>>>httpclose
>>> >(and some other things)
>>> >
>>> >If you define it zone wide every tenant has the same setting whilst
>>>you
>>> >want this to tune setting (like with maxConnections) for a tenant.
>>> >
>>> >regards,
>>> >Daan
>>> >
>>> >
>>> >On Thu, Sep 26, 2013 at 10:57 PM, Chiradeep Vittal
>>> >wrote:
>>> >
>>> >>This is an automatically generated e-mail. To reply, visit:
>>> >> https://reviews.apache.org/r/14320/
>>> >>
>>> >> Not sure if this should be in the API since it is a HAProxy-specific
>>> >>configuration. This wouldn't apply to Netscaler or F5.
>>> >> After all the end user has no idea if he is using HAProxy of
>>>Netscaler
>>> >>or F5.
>>> >>
>>> >> Likely this flag is of interest to the cloud operator only, so why
>>>not
>>> >>put it in zone-wide config instead of the network offering.
>>> >> Do you really see someone creating 2 offerings: one with HttpClose
>>>and
>>> >>one without HttpClose?
>>> >>
>>> >>
>>> >> - Chiradeep Vittal
>>> >>
>>> >> On September 26th, 2013, 7:01 p.m. UTC, daan Hoogland wrote:
>>> >>   Review request for cloudstack and Wei Zhou.
>>> >> By daan Hoogland.
>>> >>
>>> >> *Updated Sept. 26, 2013, 7:01 p.m.*
>>> >>  *Bugs: * CLOUDSTACK-4328
>>> >>  *Repository: * cloudstack-git
>>> >> Description
>>> >>
>>> >> add boolean option httpModeEnabled to the service offering for use
>>>in
>>> >>haproxy conf
>>> >>
>>> >>   Testing
>>> >>
>>> >> created unit test.
>>> >> instantiated a network with some loadbalancer rule based on a
>>>netoffer
>>> >>with the option to true/false and maxconnections to a non default
>>>value
>>> >>-> checked haproxy.cfg on the router
>>> >>
>>> >>   Diffs
>>> >>
>>> >>- api/src/com/cloud/offering/NetworkOffering.java (6c5573e)
>>> >>- api/src/org/apache/cloudstack/api/ApiConstants.java (f85784b)
>>> >>-
>>>
>>> >> 
>api/src/org/apache/cloudstack/api/command/admin/network/CreateNetworkO
>ffe
>>> >>ringCmd.java
>>> >>(bdad904)
>>> >>-
>>>
>>> >> 
>api/src/org/apache/cloudstack/api/command/admin/network/UpdateNetworkO
>ffe
>>> >>ringCmd.java
>>> >>(c9c4c8a)
>>> >>-
>>> >> core/src/com/cloud/agent/api/routing/LoadBalancerConfigCommand.java
>>> >>(ee29290)
>>> >>- core/src/com/cloud/network/HAProxyConfigurator.java (2309125)
>>> >>- core/test/com/cloud/network/HAProxyConfiguratorTest.java
>>> >>(PRE-CREATION)
>>> >>-
>>>
>>> >> 
>engine/components-api/src/com/cloud/configuration/ConfigurationManager
>.ja
>>> >>va
>>> >>(5e1b9b5)
>>> >>-
>>>
>>> >> 
>engine/orchestration/src/org/apache/cloudstack/engine/orchestration/Ne
>two
>>> >>rkOrchestrator.java
>>> >>(53f64fd)
>>> >>- engine/schema/src/com/cloud/offerings/NetworkOfferingVO.java
>>> >>(eefdc94)
>>> >>-
>>>
>>> >> 
>plugins/network-elements/elastic-loadbalancer/src/com/cloud/network/lb
>/El
>>> >>asticLoadBalancerManagerImpl.java
>>> >>(ecd6006)
>>> >>-
>>>
>>> >> 
>plugins/network-eleme

Re: Doc Updates

2013-10-09 Thread Sebastien Goasguen
The docs have recently been moved into their main repo.

This potentially means that we are heading towards documentation releases on a 
different cycle than the code release. 

We are not there yet but it's in the works, anyone interested and willing to 
contribute patches and ideas should register to the dev list and participate in 
[DOCS] threads.

-sebastien

On Oct 9, 2013, at 5:25 PM, Harm Boertien  wrote:

> I see at least 1 topic for the cloudstack collab.
> 
> +1
> 
> Sent from my iPhone
> 
> On 9 okt. 2013, at 23:01, "Christopher M. Ryan"  wrote:
> 
>> +1 on this. 
>> I find management hard to please when I persuade changing to a new 
>> technology only to have issues related to documentation. This prolongs 
>> deployment and doesn't help with the already difficult management decision. 
>> It took us a month to switch to CloudStack and almost a week to begin 
>> defending the choice because of outdated documentation. This was of course 
>> before the donation to apache, since then it's been a lot easier and 
>> management isn't so concerned. but none the less, publicly facing 
>> documentation, I feel, should be kept current, to include bug fixes. 
>> 
>> Chris Ryan
>> Harmonia Holdings Group, LLC
>> 404 People Place, Suite 402
>> Charlottesville, VA 22911
>> Office: (434) 244-4002
>> 
>> 
>> 
>> -Original Message-
>> From: Daan Hoogland [mailto:daan.hoogl...@gmail.com] 
>> Sent: Wednesday, October 09, 2013 4:34 PM
>> To: us...@cloudstack.apache.org; car...@reategui.com; dev
>> Subject: Re: Doc Updates
>> 
>> Great rant Carlos,
>> 
>> You should get it to the dev list. Actually I'll add the dev list in now. It 
>> makes sense to update the docs also after a release, when bug in the docs 
>> are found these can easily be changed without a full release cycle of the 
>> code itself.
>> 
>> regards,
>> Daan
>> 
>> On Wed, Oct 9, 2013 at 10:24 PM, Carlos Reategui  wrote:
>>> It seems like the only way that docs (
>>> http://cloudstack.apache.org/docs/en-US/index.html) are updated is 
>>> when a release is done.  Is it not possible to have these updated otherwise?
>>> Waiting for the next patch release of the software so that the docs 
>>> get updated is causing problems with folks not being able to get 
>>> CloudStack installed properly and therefore gives them a bad 
>>> impression of the maturity of CloudStack.
>>> 
>>> It makes no sense to me why there are multiple versions of documents 
>>> for each of the point releases (currently there is 4.0.0, 4.0.1, 
>>> 4.0.2, 4.1.0,
>>> 4.1.1 and 4.0.2 docs) when the feature set has not changed within each 
>>> of these.  I understand that the docs are built as part of the build 
>>> and release process but why does that have to impact the rate at which 
>>> the primary doc site is updated.  Can't the patch releases simply 
>>> update the release notes?  Personally I think there should be a single 
>>> 4.x version of the docs (I would be ok with a 4.0, 4.1 and 4.2 
>>> versions too if major features are going to be added to them).  Maybe 
>>> the doc site should have wiki like capabilities so that it can be more 
>>> easily maintained.
>>> 
>>> ok, I am done ranting...



Re: Review Request 14381: KVM: add connect/disconnect capabilities to StorageAdaptors so that external storage services can attach/detach devices on-demand

2013-10-09 Thread Mike Tutkowski
Hey Marcus,

I'm merging your changes into mine.

It looks like I can remove your call to getPhysicalDisk (below) (and the
associated KVMPhysicalDisk variable, as well).

I assume you would never want to produce any side effects in your
getPhysicalDisk implementation, right? Seems like that would be unintuitive.

public boolean connectPhysicalDisksViaVmSpec(VirtualMachineTO vmSpec) {

boolean result = false;


final String vmName = vmSpec.getName();


List disks = Arrays.asList(vmSpec.getDisks());


for (DiskTO disk : disks) {

KVMPhysicalDisk physicalDisk = null;

KVMStoragePool pool = null;


if (disk.getType() != Volume.Type.ISO) {

VolumeObjectTO vol = (VolumeObjectTO) disk.getData();

PrimaryDataStoreTO store = (PrimaryDataStoreTO)
vol.getDataStore();


pool = getStoragePool(store.getPoolType(), store.getUuid());

physicalDisk = pool.getPhysicalDisk(vol.getPath());


StorageAdaptor adaptor = getStorageAdaptor(pool.getType());


result = adaptor.connectPhysicalDisk(vol.getPath(), pool);


if (! result) {

s_logger.error("Failed to connect disks via vm spec for
vm:" + vmName + " volume:" + vol.toString());


return result;

}

}

}


return result;

}


On Wed, Oct 9, 2013 at 12:01 AM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> Excellent...that template worked like a charm.
>
> Thanks!
>
>
> On Tue, Oct 8, 2013 at 11:50 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
>> Perfect
>>
>> Almost done downloading...I'll give it a try in a moment.
>>
>>
>> On Tue, Oct 8, 2013 at 11:47 PM, Marcus Sorensen wrote:
>>
>>> Not password enabled, root password is just 'password'. I mainly just
>>> use it for devcloud-kvm testing.
>>>  On Oct 8, 2013 11:45 PM, "Mike Tutkowski" 
>>> wrote:
>>>
 Great - thanks!


 On Tue, Oct 8, 2013 at 11:38 PM, Marcus Sorensen 
 wrote:

> Use my tiny centos image. I'm not sure what's up with that crufty old
> default centos template.
>
> Register this qcow2 template:
> http://marcus.mlsorensen.com/cloudstack-extras/tiny-centos-63.qcow2
>
> Needs a service offering with at least 192MB to run.
> On Oct 8, 2013 11:36 PM, "Mike Tutkowski" <
> mike.tutkow...@solidfire.com> wrote:
>
>> Perhaps you might know something about this, Marcus.
>>
>> My instance suffers a Kernel panic while booting up.
>>
>> I'm just using the built-in KVM template (CentOS 5.5(64-bit) no GUI
>> (KVM)) with 1 CPU and 512 MB memory.
>>
>> http://i.imgur.com/QuPH2Ub.png
>>
>> I tried to just use an ISO instead, but apparently that functionality
>> is broken, as well (related to Disk Offerings).
>>
>>
>> On Tue, Oct 8, 2013 at 10:39 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com> wrote:
>>
>>> OK, all is good now.
>>>
>>> I have both system VMs up and running and the Agent States read as
>>> "Up," as well.
>>>
>>>
>>> On Tue, Oct 8, 2013 at 9:50 PM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com> wrote:
>>>
 I believe we've been down this road before:

 2013-10-09 03:47:41,281 ERROR [cloud.agent.AgentShell] (main:null)
 Unable to start agent: Resource class not found:
 com.cloud.storage.resource.PremiumSecondaryStorageResource due to:
 java.lang.ClassNotFoundException:
 com.cloud.storage.resource.PremiumSecondaryStorageResource

 The solution was to compile without -Dnoredist.

 I will try that now.


 On Tue, Oct 8, 2013 at 9:36 PM, Marcus Sorensen <
 shadow...@gmail.com> wrote:

> You may be able to find a stack trace for the java process in
> /var/log/cloud or the messages file, on the system vm.
>  On Oct 8, 2013 9:21 PM, "Mike Tutkowski" <
> mike.tutkow...@solidfire.com> wrote:
>
>> Interesting...I ran the following:
>>
>> /usr/local/cloud/systemvm/ssvm-check.sh
>>
>> It says the Java process is not running.
>>
>> This is the KVM system template I'm using:
>>
>>
>> http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2
>>
>> I just picked the one that was referenced in the VM_Template
>> table in 4.3.
>>
>>
>> On Tue, Oct 8, 2013 at 9:05 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com> wrote:
>>
>>> Found it.
>>>
>>> As an FYI, this is the doc I was referring to:
>>>
>>>
>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSVM%2C+templates%2C+Secondar

RE: questions about registerIso API and updateIsoPermissions API

2013-10-09 Thread Jessica Wang
Thanks, Nitin.

I just filed an API bug for this:
https://issues.apache.org/jira/i#browse/CLOUDSTACK-4843


From: Nitin Mehta
Sent: Wednesday, October 09, 2013 2:08 PM
To: Jessica Wang; ; Alena Prokharchyk
Cc: Shweta Agarwal
Subject: Re: questions about registerIso API and updateIsoPermissions API

I think (1) is the right way to go.

From: Jessica Wang mailto:jessica.w...@citrix.com>>
Date: Wednesday 9 October 2013 12:47 PM
To: Nitin Mehta mailto:nitin.me...@citrix.com>>, 
"mailto:dev@cloudstack.apache.org>>" 
mailto:dev@cloudstack.apache.org>>, Alena 
Prokharchyk mailto:alena.prokharc...@citrix.com>>
Cc: Shweta Agarwal mailto:shweta.agar...@citrix.com>>
Subject: RE: questions about registerIso API and updateIsoPermissions API

Currently, at API level, a normal user is not allowed to specify "isfeatured" 
when registering ISO (API will ignore "isfeatured" parameter when a normal user 
passes it),
but a normal user is allowed to specify "isfeatured" when updating ISO.

Should we fix API to:
(1) allow a normal user to specify "isfeatured" when registering ISO (API won't 
ignore "isfeatured" parameter when a normal user passes it)

OR

(2) disallow a normal user to specify "isfeatured" when updating ISO

?


p.s. I'll do corresponding UI change after API is fixed.


From: Jessica Wang
Sent: Wednesday, October 09, 2013 11:01 AM
To: Nitin Mehta; mailto:dev@cloudstack.apache.org>>
Cc: Alena Prokharchyk; Shweta Agarwal
Subject: RE: questions about registerIso API and updateIsoPermissions API

Nitin,

>  At the moment, I think that for Isos we should allow to edit it so would 
> call it an API bug.
Thanks.

> Register Iso does provide an option to mark an ISO featured. I see that in 
> the latest master.
That only works for admin, but NOT normal user.

If you log in as a normal user, then pass "isfeatured=true" to registerIso API, 
API will ignore it.
The newly registered template will have "isfeatured: false".

e.g.
http://10.215.3.26:8080/client/api?command=registerIso&response=json&sessionkey=u%2FVIHPJuPohidGKFd0lh6csG%2BfM%3D&name=normalUserIso1&displayText=normalUserIso1&url=http%3A%2F%2F10.223.110.231%2Fisos_64bit%2Fdummy.iso&zoneid=6bcd3bd9-591c-4d99-a164-d05b87df1b04&isfeatured=true&isextractable=false&bootable=true&osTypeId=b8cbfd6c-2d40-11e3-86aa-3c970e739c3e&ispublic=false&_=1381340961641
{
"registerisoresponse": {
"count": 1,
"iso": [
{
"id": "9b903876-f17c-4634-8463-8e3025259956",
"name": "normalUserIso1",
"displaytext": "normalUserIso1",
"ispublic": false,
"created": "2013-10-09T10:52:38-0700",
"isready": false,
"bootable": true,
"isfeatured": false,
"crossZones": false,
"ostypeid": "b8cbfd6c-2d40-11e3-86aa-3c970e739c3e",
"ostypename": "Apple Mac OS X 10.6 (32-bit)",
"account": "aaa_user",
"zoneid": "6bcd3bd9-591c-4d99-a164-d05b87df1b04",
"zonename": "jw-adv",
"status": "",
"domain": "aaa",
"domainid": "47b09d73-84ef-48dc-9b73-1720bad600cb",
"isextractable": false,
"tags": []
}
]
}
}

Jessica

From: Nitin Mehta
Sent: Tuesday, October 08, 2013 5:27 PM
To: Jessica Wang; mailto:dev@cloudstack.apache.org>>
Cc: Alena Prokharchyk; Shweta Agarwal
Subject: Re: questions about registerIso API and updateIsoPermissions API

Answers inline.

From: Jessica Wang mailto:jessica.w...@citrix.com>>
Date: Tuesday 8 October 2013 5:10 PM
To: "mailto:dev@cloudstack.apache.org>>" 
mailto:dev@cloudstack.apache.org>>
Cc: Alena Prokharchyk 
mailto:alena.prokharc...@citrix.com>>, Nitin 
Mehta mailto:nitin.me...@citrix.com>>, Shweta Agarwal 
mailto:shweta.agar...@citrix.com>>
Subject: questions about registerIso API and updateIsoPermissions API

Hi,

I have questions about registerIso API and updateIsoPermissions API.

(1) A normal user is allowed to specify isextractable property when registering 
an ISO (through registerIso API),
but NOT allowed to update isextractable property when updating an ISO (through 
updateIsoPermissions API).
Is this by design or it's just an API bug?

Nitin>> This is a grey area. This was done for templates (Isos just inherited 
it) because derived templates may or may not belong to the same user and we 
want to follow the principle of least privilege.
At the moment, I think that for Isos we should allow to edit it so would call 
it an API bug.

(2) A normal user is NOT allowed to specify isfeatured property when 
registering an ISO (through registerIso API),
but allowed to update isfeatured property when updating an ISO (through 
updateIsoPermissions API)?
Is this by design or it's just an API bug?

Nitin>> Register Iso does provide an option to mark an ISO featured. I see that 
in the latest master.

Jessica


Re: Review Request 14381: KVM: add connect/disconnect capabilities to StorageAdaptors so that external storage services can attach/detach devices on-demand

2013-10-09 Thread Marcus Sorensen
Yeah, that looks like leftovers from refactoring our 4.1 code into 4.2 and
making it something more generic. It looks like it could be removed.


On Wed, Oct 9, 2013 at 3:40 PM, Mike Tutkowski  wrote:

> Hey Marcus,
>
> I'm merging your changes into mine.
>
> It looks like I can remove your call to getPhysicalDisk (below) (and the
> associated KVMPhysicalDisk variable, as well).
>
> I assume you would never want to produce any side effects in your
> getPhysicalDisk implementation, right? Seems like that would be unintuitive.
>
> public boolean connectPhysicalDisksViaVmSpec(VirtualMachineTO vmSpec)
> {
>
> boolean result = false;
>
>
> final String vmName = vmSpec.getName();
>
>
> List disks = Arrays.asList(vmSpec.getDisks());
>
>
> for (DiskTO disk : disks) {
>
> KVMPhysicalDisk physicalDisk = null;
>
> KVMStoragePool pool = null;
>
>
> if (disk.getType() != Volume.Type.ISO) {
>
> VolumeObjectTO vol = (VolumeObjectTO) disk.getData();
>
> PrimaryDataStoreTO store = (PrimaryDataStoreTO)
> vol.getDataStore();
>
>
> pool = getStoragePool(store.getPoolType(),
> store.getUuid());
>
> physicalDisk = pool.getPhysicalDisk(vol.getPath());
>
>
> StorageAdaptor adaptor = getStorageAdaptor(pool.getType());
>
>
> result = adaptor.connectPhysicalDisk(vol.getPath(), pool);
>
>
> if (! result) {
>
> s_logger.error("Failed to connect disks via vm spec
> for vm:" + vmName + " volume:" + vol.toString());
>
>
> return result;
>
> }
>
> }
>
> }
>
>
> return result;
>
> }
>
>
> On Wed, Oct 9, 2013 at 12:01 AM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
>> Excellent...that template worked like a charm.
>>
>> Thanks!
>>
>>
>> On Tue, Oct 8, 2013 at 11:50 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com> wrote:
>>
>>> Perfect
>>>
>>> Almost done downloading...I'll give it a try in a moment.
>>>
>>>
>>> On Tue, Oct 8, 2013 at 11:47 PM, Marcus Sorensen wrote:
>>>
 Not password enabled, root password is just 'password'. I mainly just
 use it for devcloud-kvm testing.
  On Oct 8, 2013 11:45 PM, "Mike Tutkowski" <
 mike.tutkow...@solidfire.com> wrote:

> Great - thanks!
>
>
> On Tue, Oct 8, 2013 at 11:38 PM, Marcus Sorensen 
> wrote:
>
>> Use my tiny centos image. I'm not sure what's up with that crufty old
>> default centos template.
>>
>> Register this qcow2 template:
>> http://marcus.mlsorensen.com/cloudstack-extras/tiny-centos-63.qcow2
>>
>> Needs a service offering with at least 192MB to run.
>> On Oct 8, 2013 11:36 PM, "Mike Tutkowski" <
>> mike.tutkow...@solidfire.com> wrote:
>>
>>> Perhaps you might know something about this, Marcus.
>>>
>>> My instance suffers a Kernel panic while booting up.
>>>
>>> I'm just using the built-in KVM template (CentOS 5.5(64-bit) no GUI
>>> (KVM)) with 1 CPU and 512 MB memory.
>>>
>>> http://i.imgur.com/QuPH2Ub.png
>>>
>>> I tried to just use an ISO instead, but apparently that
>>> functionality is broken, as well (related to Disk Offerings).
>>>
>>>
>>> On Tue, Oct 8, 2013 at 10:39 PM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com> wrote:
>>>
 OK, all is good now.

 I have both system VMs up and running and the Agent States read as
 "Up," as well.


 On Tue, Oct 8, 2013 at 9:50 PM, Mike Tutkowski <
 mike.tutkow...@solidfire.com> wrote:

> I believe we've been down this road before:
>
> 2013-10-09 03:47:41,281 ERROR [cloud.agent.AgentShell] (main:null)
> Unable to start agent: Resource class not found:
> com.cloud.storage.resource.PremiumSecondaryStorageResource due to:
> java.lang.ClassNotFoundException:
> com.cloud.storage.resource.PremiumSecondaryStorageResource
>
> The solution was to compile without -Dnoredist.
>
> I will try that now.
>
>
> On Tue, Oct 8, 2013 at 9:36 PM, Marcus Sorensen <
> shadow...@gmail.com> wrote:
>
>> You may be able to find a stack trace for the java process in
>> /var/log/cloud or the messages file, on the system vm.
>>  On Oct 8, 2013 9:21 PM, "Mike Tutkowski" <
>> mike.tutkow...@solidfire.com> wrote:
>>
>>> Interesting...I ran the following:
>>>
>>> /usr/local/cloud/systemvm/ssvm-check.sh
>>>
>>> It says the Java process is not running.
>>>
>>> This is the KVM system template I'm using:
>>>
>>>
>>> http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2
>>>
>>> I just 

Re: Review Request 14381: KVM: add connect/disconnect capabilities to StorageAdaptors so that external storage services can attach/detach devices on-demand

2013-10-09 Thread Mike Tutkowski
Will do...I'll remove it (there is similar code in two methods).


On Wed, Oct 9, 2013 at 3:49 PM, Marcus Sorensen  wrote:

> Yeah, that looks like leftovers from refactoring our 4.1 code into 4.2 and
> making it something more generic. It looks like it could be removed.
>
>
> On Wed, Oct 9, 2013 at 3:40 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
>> Hey Marcus,
>>
>> I'm merging your changes into mine.
>>
>> It looks like I can remove your call to getPhysicalDisk (below) (and the
>> associated KVMPhysicalDisk variable, as well).
>>
>> I assume you would never want to produce any side effects in your
>> getPhysicalDisk implementation, right? Seems like that would be unintuitive.
>>
>> public boolean connectPhysicalDisksViaVmSpec(VirtualMachineTO
>> vmSpec) {
>>
>> boolean result = false;
>>
>>
>> final String vmName = vmSpec.getName();
>>
>>
>> List disks = Arrays.asList(vmSpec.getDisks());
>>
>>
>> for (DiskTO disk : disks) {
>>
>> KVMPhysicalDisk physicalDisk = null;
>>
>> KVMStoragePool pool = null;
>>
>>
>> if (disk.getType() != Volume.Type.ISO) {
>>
>> VolumeObjectTO vol = (VolumeObjectTO) disk.getData();
>>
>> PrimaryDataStoreTO store = (PrimaryDataStoreTO)
>> vol.getDataStore();
>>
>>
>> pool = getStoragePool(store.getPoolType(),
>> store.getUuid());
>>
>> physicalDisk = pool.getPhysicalDisk(vol.getPath());
>>
>>
>> StorageAdaptor adaptor =
>> getStorageAdaptor(pool.getType());
>>
>>
>> result = adaptor.connectPhysicalDisk(vol.getPath(), pool);
>>
>>
>> if (! result) {
>>
>> s_logger.error("Failed to connect disks via vm spec
>> for vm:" + vmName + " volume:" + vol.toString());
>>
>>
>> return result;
>>
>> }
>>
>> }
>>
>> }
>>
>>
>> return result;
>>
>> }
>>
>>
>> On Wed, Oct 9, 2013 at 12:01 AM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com> wrote:
>>
>>> Excellent...that template worked like a charm.
>>>
>>> Thanks!
>>>
>>>
>>> On Tue, Oct 8, 2013 at 11:50 PM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com> wrote:
>>>
 Perfect

 Almost done downloading...I'll give it a try in a moment.


 On Tue, Oct 8, 2013 at 11:47 PM, Marcus Sorensen 
 wrote:

> Not password enabled, root password is just 'password'. I mainly just
> use it for devcloud-kvm testing.
>  On Oct 8, 2013 11:45 PM, "Mike Tutkowski" <
> mike.tutkow...@solidfire.com> wrote:
>
>> Great - thanks!
>>
>>
>> On Tue, Oct 8, 2013 at 11:38 PM, Marcus Sorensen > > wrote:
>>
>>> Use my tiny centos image. I'm not sure what's up with that crufty
>>> old default centos template.
>>>
>>> Register this qcow2 template:
>>> http://marcus.mlsorensen.com/cloudstack-extras/tiny-centos-63.qcow2
>>>
>>> Needs a service offering with at least 192MB to run.
>>> On Oct 8, 2013 11:36 PM, "Mike Tutkowski" <
>>> mike.tutkow...@solidfire.com> wrote:
>>>
 Perhaps you might know something about this, Marcus.

 My instance suffers a Kernel panic while booting up.

 I'm just using the built-in KVM template (CentOS 5.5(64-bit) no
 GUI (KVM)) with 1 CPU and 512 MB memory.

 http://i.imgur.com/QuPH2Ub.png

 I tried to just use an ISO instead, but apparently that
 functionality is broken, as well (related to Disk Offerings).


 On Tue, Oct 8, 2013 at 10:39 PM, Mike Tutkowski <
 mike.tutkow...@solidfire.com> wrote:

> OK, all is good now.
>
> I have both system VMs up and running and the Agent States read as
> "Up," as well.
>
>
> On Tue, Oct 8, 2013 at 9:50 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
>> I believe we've been down this road before:
>>
>> 2013-10-09 03:47:41,281 ERROR [cloud.agent.AgentShell]
>> (main:null) Unable to start agent: Resource class not found:
>> com.cloud.storage.resource.PremiumSecondaryStorageResource due to:
>> java.lang.ClassNotFoundException:
>> com.cloud.storage.resource.PremiumSecondaryStorageResource
>>
>> The solution was to compile without -Dnoredist.
>>
>> I will try that now.
>>
>>
>> On Tue, Oct 8, 2013 at 9:36 PM, Marcus Sorensen <
>> shadow...@gmail.com> wrote:
>>
>>> You may be able to find a stack trace for the java process in
>>> /var/log/cloud or the messages file, on the system vm.
>>>  On Oct 8, 2013 9:21 PM, "Mike Tutkowski" <
>>> mike.tutkow...@solidfire.com> wrote:
>>>
 Interesting...I ran the following:

>

Re: Review Request 14381: KVM: add connect/disconnect capabilities to StorageAdaptors so that external storage services can attach/detach devices on-demand

2013-10-09 Thread Mike Tutkowski
I've got the code all merged, by the way.

I should be able to start in on testing soon (tonight or tomorrow).


On Wed, Oct 9, 2013 at 3:51 PM, Mike Tutkowski  wrote:

> Will do...I'll remove it (there is similar code in two methods).
>
>
> On Wed, Oct 9, 2013 at 3:49 PM, Marcus Sorensen wrote:
>
>> Yeah, that looks like leftovers from refactoring our 4.1 code into 4.2
>> and making it something more generic. It looks like it could be removed.
>>
>>
>> On Wed, Oct 9, 2013 at 3:40 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com> wrote:
>>
>>> Hey Marcus,
>>>
>>> I'm merging your changes into mine.
>>>
>>> It looks like I can remove your call to getPhysicalDisk (below) (and the
>>> associated KVMPhysicalDisk variable, as well).
>>>
>>> I assume you would never want to produce any side effects in your
>>> getPhysicalDisk implementation, right? Seems like that would be unintuitive.
>>>
>>> public boolean connectPhysicalDisksViaVmSpec(VirtualMachineTO
>>> vmSpec) {
>>>
>>> boolean result = false;
>>>
>>>
>>> final String vmName = vmSpec.getName();
>>>
>>>
>>> List disks = Arrays.asList(vmSpec.getDisks());
>>>
>>>
>>> for (DiskTO disk : disks) {
>>>
>>> KVMPhysicalDisk physicalDisk = null;
>>>
>>> KVMStoragePool pool = null;
>>>
>>>
>>> if (disk.getType() != Volume.Type.ISO) {
>>>
>>> VolumeObjectTO vol = (VolumeObjectTO) disk.getData();
>>>
>>> PrimaryDataStoreTO store = (PrimaryDataStoreTO)
>>> vol.getDataStore();
>>>
>>>
>>> pool = getStoragePool(store.getPoolType(),
>>> store.getUuid());
>>>
>>> physicalDisk = pool.getPhysicalDisk(vol.getPath());
>>>
>>>
>>> StorageAdaptor adaptor =
>>> getStorageAdaptor(pool.getType());
>>>
>>>
>>> result = adaptor.connectPhysicalDisk(vol.getPath(),
>>> pool);
>>>
>>>
>>> if (! result) {
>>>
>>> s_logger.error("Failed to connect disks via vm spec
>>> for vm:" + vmName + " volume:" + vol.toString());
>>>
>>>
>>> return result;
>>>
>>> }
>>>
>>> }
>>>
>>> }
>>>
>>>
>>> return result;
>>>
>>> }
>>>
>>>
>>> On Wed, Oct 9, 2013 at 12:01 AM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com> wrote:
>>>
 Excellent...that template worked like a charm.

 Thanks!


 On Tue, Oct 8, 2013 at 11:50 PM, Mike Tutkowski <
 mike.tutkow...@solidfire.com> wrote:

> Perfect
>
> Almost done downloading...I'll give it a try in a moment.
>
>
> On Tue, Oct 8, 2013 at 11:47 PM, Marcus Sorensen 
> wrote:
>
>> Not password enabled, root password is just 'password'. I mainly just
>> use it for devcloud-kvm testing.
>>  On Oct 8, 2013 11:45 PM, "Mike Tutkowski" <
>> mike.tutkow...@solidfire.com> wrote:
>>
>>> Great - thanks!
>>>
>>>
>>> On Tue, Oct 8, 2013 at 11:38 PM, Marcus Sorensen <
>>> shadow...@gmail.com> wrote:
>>>
 Use my tiny centos image. I'm not sure what's up with that crufty
 old default centos template.

 Register this qcow2 template:
 http://marcus.mlsorensen.com/cloudstack-extras/tiny-centos-63.qcow2

 Needs a service offering with at least 192MB to run.
 On Oct 8, 2013 11:36 PM, "Mike Tutkowski" <
 mike.tutkow...@solidfire.com> wrote:

> Perhaps you might know something about this, Marcus.
>
> My instance suffers a Kernel panic while booting up.
>
> I'm just using the built-in KVM template (CentOS 5.5(64-bit) no
> GUI (KVM)) with 1 CPU and 512 MB memory.
>
> http://i.imgur.com/QuPH2Ub.png
>
> I tried to just use an ISO instead, but apparently that
> functionality is broken, as well (related to Disk Offerings).
>
>
> On Tue, Oct 8, 2013 at 10:39 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
>> OK, all is good now.
>>
>> I have both system VMs up and running and the Agent States read
>> as "Up," as well.
>>
>>
>> On Tue, Oct 8, 2013 at 9:50 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com> wrote:
>>
>>> I believe we've been down this road before:
>>>
>>> 2013-10-09 03:47:41,281 ERROR [cloud.agent.AgentShell]
>>> (main:null) Unable to start agent: Resource class not found:
>>> com.cloud.storage.resource.PremiumSecondaryStorageResource due to:
>>> java.lang.ClassNotFoundException:
>>> com.cloud.storage.resource.PremiumSecondaryStorageResource
>>>
>>> The solution was to compile without -Dnoredist.
>>>
>>> I will try that now.
>>>
>>>
>>> On Tue, Oct 8, 2013 at 9:36 PM, Marcus Sorensen <
>>> shadow...

Re: [DOC] 4.2.0 Templates

2013-10-09 Thread Marty Sweet
Hi Daan,

Yeah the doc directory in the repo commited to master, where is the current
http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Admin_Guide/working-with-templates.html
being
built from?

Marty


On Wed, Oct 9, 2013 at 8:55 PM, Daan Hoogland wrote:

> Marty,
>
> I am not sure what you mean. Do you mean the doc dir in the repo? I
> think you need to look in
> https://git-wip-us.apache.org/repos/asf/cloudstack-docs.git for the
> 4.2 docs.
>
> regards,
> Daan
>
> On Sun, Oct 6, 2013 at 10:51 PM, Marty Sweet  wrote:
> > Hi guys,
> >
> > I created a document for creating Linux documentation for the 4.2.0
> > release. Checking the documentation it seems that it is not there? Is
> there
> > any reason for this?
> >
> >
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Admin_Guide/working-with-templates.html
> >
> >
> https://github.com/apache/cloudstack/commit/922ef76224d4a8534f67f47b97cf664e5c65ecba
> > https://issues.apache.org/jira/browse/CLOUDSTACK-4329
> >
> > Thanks,
> > Marty
>


Re: Doc Updates

2013-10-09 Thread Harm Boertien
I see at least 1 topic for the cloudstack collab.

+1

Sent from my iPhone

On 9 okt. 2013, at 23:01, "Christopher M. Ryan"  wrote:

> +1 on this. 
> I find management hard to please when I persuade changing to a new technology 
> only to have issues related to documentation. This prolongs deployment and 
> doesn't help with the already difficult management decision. It took us a 
> month to switch to CloudStack and almost a week to begin defending the choice 
> because of outdated documentation. This was of course before the donation to 
> apache, since then it's been a lot easier and management isn't so concerned. 
> but none the less, publicly facing documentation, I feel, should be kept 
> current, to include bug fixes. 
> 
> Chris Ryan
> Harmonia Holdings Group, LLC
> 404 People Place, Suite 402
> Charlottesville, VA 22911
> Office: (434) 244-4002
> 
> 
> 
> -Original Message-
> From: Daan Hoogland [mailto:daan.hoogl...@gmail.com] 
> Sent: Wednesday, October 09, 2013 4:34 PM
> To: us...@cloudstack.apache.org; car...@reategui.com; dev
> Subject: Re: Doc Updates
> 
> Great rant Carlos,
> 
> You should get it to the dev list. Actually I'll add the dev list in now. It 
> makes sense to update the docs also after a release, when bug in the docs are 
> found these can easily be changed without a full release cycle of the code 
> itself.
> 
> regards,
> Daan
> 
> On Wed, Oct 9, 2013 at 10:24 PM, Carlos Reategui  wrote:
>> It seems like the only way that docs (
>> http://cloudstack.apache.org/docs/en-US/index.html) are updated is 
>> when a release is done.  Is it not possible to have these updated otherwise?
>> Waiting for the next patch release of the software so that the docs 
>> get updated is causing problems with folks not being able to get 
>> CloudStack installed properly and therefore gives them a bad 
>> impression of the maturity of CloudStack.
>> 
>> It makes no sense to me why there are multiple versions of documents 
>> for each of the point releases (currently there is 4.0.0, 4.0.1, 
>> 4.0.2, 4.1.0,
>> 4.1.1 and 4.0.2 docs) when the feature set has not changed within each 
>> of these.  I understand that the docs are built as part of the build 
>> and release process but why does that have to impact the rate at which 
>> the primary doc site is updated.  Can't the patch releases simply 
>> update the release notes?  Personally I think there should be a single 
>> 4.x version of the docs (I would be ok with a 4.0, 4.1 and 4.2 
>> versions too if major features are going to be added to them).  Maybe 
>> the doc site should have wiki like capabilities so that it can be more 
>> easily maintained.
>> 
>> ok, I am done ranting...


Re: [PROPOSAL] Remove Setters from *JoinVO

2013-10-09 Thread Min Chen
+1 on this. As Chris mentioned, the intention of *JoinVOs are
representation of MySQL views, which should not be editable after search.

-min

On 10/4/13 10:29 AM, "SuichII, Christopher"  wrote:

>*JoinVOs are used to store entries from MySQL views, which are not
>editable. I think removing setters from the *JoinVOs may help avoid some
>potential confusion as setters seem to imply that the fields are
>editable, which they really aren't.
>
>I started looking around and it looks like most setters in *JoinVOs
>aren't actually used since the creation of *VOs is handled by java
>reflection. Please let me know if this is not the case or if I'm
>misunderstanding the way the MySQL views work.
>
>-Chris
>-- 
>Chris Suich
>chris.su...@netapp.com
>NetApp Software Engineer
>Data Center Platforms ­ Cloud Solutions
>Citrix, Cisco & Red Hat
>



why are RvR routers not HA

2013-10-09 Thread Darren Shepherd
I don't quite understand why in the redundant VR use case you wouldn't
want the individual VRs to have HA enabled.  It seems the code will
always set ha=false for RvR.  I know if I loose one of the VRs, the
other takes over, so that is redundant.  But don't you want the lost
VR to come back to life if it can?

Darren


Re: [PROPOSAL] Modularize Spring

2013-10-09 Thread Darren Shepherd
I think I'm fine with that.  Is the enum type return dynamically at
runtime.  So the API would be something like "PlugInPriority
canHandle(...)"?

Darren

On Wed, Oct 9, 2013 at 1:13 PM, SuichII, Christopher
 wrote:
> I think I'll look into a version of (2). The difference being that I think 
> using an int is too large of a range and provides unnecessary granularity. If 
> two strategies or providers both have snapshot strategies, they are both 
> simply going to return the max int. However, if we use an enum with values 
> like:
>
> HIGHEST, PLUGIN, HYPERVISOR, DEFAULT and NO, (HIGHEST would be reserved for 
> unforeseen future use, testing, simulators, etc.)
>
> then we allow strategies and providers to fall in the same bucket. All 
> strategies and providers would be sorted and asked to handle operations in 
> that order. Ultimately, this requires that plugins do their best to determine 
> whether they can actually handle an operation, because if two say they can, 
> there is no way for the MS to intelligently choose between the two.
>
> --
> Chris Suich
> chris.su...@netapp.com
> NetApp Software Engineer
> Data Center Platforms – Cloud Solutions
> Citrix, Cisco & Red Hat
>
> On Oct 4, 2013, at 6:10 PM, Darren Shepherd  
> wrote:
>
>> Sure, I'm open to suggestions.  Basically I think we've discussed
>>
>> 1) Global Setting
>> 2) canHandle() returns an int
>> 3) Strategy has an enum type assigned
>>
>> I'm open to all three, I don't have much vested interest in this.
>>
>> Darren
>>
>> On Fri, Oct 4, 2013 at 3:00 PM, SuichII, Christopher
>>  wrote:
>>> Well, it seems OK, but I think we should keep on discussing our options. 
>>> One concern I have with the global config approach is that it adds manual 
>>> steps for 'installing' extensions. Each extension must have installation 
>>> instructions to indicate which global configurations it must be included in 
>>> and where in that list it should be put (and of course, many extension are 
>>> going to say that they should be at the front of the list).
>>>
>>> -Chris
>>> --
>>> Chris Suich
>>> chris.su...@netapp.com
>>> NetApp Software Engineer
>>> Data Center Platforms – Cloud Solutions
>>> Citrix, Cisco & Red Hat
>>>
>>> On Oct 4, 2013, at 12:12 PM, Darren Shepherd  
>>> wrote:
>>>
 On 10/04/2013 11:58 AM, SuichII, Christopher wrote:
> Darren,
>
> I think one of the benefits of allowing the priority to be specified in 
> the xml is that it can be configured after deployment. If for some reason 
> two strategies or providers conflict, then their priorities can be 
> changed in XML to resolve the conflict. I believe the Spring @Order 
> annotation an be specified in XML, not just as an annotation.
>
> -Chris
>

 I would *prefer* extensions to be order independent, but if we determine 
 they are order dependant, then that is fine too.  So if we conclude that 
 the simplest way to address this is to order the Strategies based on 
 configuration, then I will add an ordering "global configuration" as 
 described at 
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/Extensions.

 Does the order configuration setting approach seem fine?

 Darren
>>>
>


RE: [DOC] 4.2.0 Templates

2013-10-09 Thread Radhika Puthiyetath
4.2 docs are from 4.2 branch.

-Original Message-
From: Marty Sweet [mailto:msweet@gmail.com] 
Sent: Thursday, October 10, 2013 3:27 AM
To: dev@cloudstack.apache.org
Subject: Re: [DOC] 4.2.0 Templates

Hi Daan,

Yeah the doc directory in the repo commited to master, where is the current 
http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Admin_Guide/working-with-templates.html
being
built from?

Marty


On Wed, Oct 9, 2013 at 8:55 PM, Daan Hoogland wrote:

> Marty,
>
> I am not sure what you mean. Do you mean the doc dir in the repo? I 
> think you need to look in 
> https://git-wip-us.apache.org/repos/asf/cloudstack-docs.git for the
> 4.2 docs.
>
> regards,
> Daan
>
> On Sun, Oct 6, 2013 at 10:51 PM, Marty Sweet  wrote:
> > Hi guys,
> >
> > I created a document for creating Linux documentation for the 4.2.0 
> > release. Checking the documentation it seems that it is not there? 
> > Is
> there
> > any reason for this?
> >
> >
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/A
> dmin_Guide/working-with-templates.html
> >
> >
> https://github.com/apache/cloudstack/commit/922ef76224d4a8534f67f47b97
> cf664e5c65ecba
> > https://issues.apache.org/jira/browse/CLOUDSTACK-4329
> >
> > Thanks,
> > Marty
>


Re: why are RvR routers not HA

2013-10-09 Thread Alena Prokharchyk
On 10/9/13 4:55 PM, "Darren Shepherd"  wrote:

>I don't quite understand why in the redundant VR use case you wouldn't
>want the individual VRs to have HA enabled.  It seems the code will
>always set ha=false for RvR.  I know if I loose one of the VRs, the
>other takes over, so that is redundant.  But don't you want the lost
>VR to come back to life if it can?
>
>Darren
>

Darren, refer to the email thread "HA redundant virtual router" (started
8/23/2013), Sheng Yang gave an explanation there.



Re: [PROPOSAL] Modularize Spring

2013-10-09 Thread SuichII, Christopher
Yep, that's the idea. In fact, we really need it to be dynamic so that the 
strategy or provider can respond depending on the action. For example, 
snapshotting may be supported while reverting may not be.

-- 
Chris Suich
chris.su...@netapp.com
NetApp Software Engineer
Data Center Platforms – Cloud Solutions
Citrix, Cisco & Red Hat

On Oct 9, 2013, at 8:01 PM, Darren Shepherd 
 wrote:

> I think I'm fine with that.  Is the enum type return dynamically at
> runtime.  So the API would be something like "PlugInPriority
> canHandle(...)"?
> 
> Darren
> 
> On Wed, Oct 9, 2013 at 1:13 PM, SuichII, Christopher
>  wrote:
>> I think I'll look into a version of (2). The difference being that I think 
>> using an int is too large of a range and provides unnecessary granularity. 
>> If two strategies or providers both have snapshot strategies, they are both 
>> simply going to return the max int. However, if we use an enum with values 
>> like:
>> 
>> HIGHEST, PLUGIN, HYPERVISOR, DEFAULT and NO, (HIGHEST would be reserved for 
>> unforeseen future use, testing, simulators, etc.)
>> 
>> then we allow strategies and providers to fall in the same bucket. All 
>> strategies and providers would be sorted and asked to handle operations in 
>> that order. Ultimately, this requires that plugins do their best to 
>> determine whether they can actually handle an operation, because if two say 
>> they can, there is no way for the MS to intelligently choose between the two.
>> 
>> --
>> Chris Suich
>> chris.su...@netapp.com
>> NetApp Software Engineer
>> Data Center Platforms – Cloud Solutions
>> Citrix, Cisco & Red Hat
>> 
>> On Oct 4, 2013, at 6:10 PM, Darren Shepherd  
>> wrote:
>> 
>>> Sure, I'm open to suggestions.  Basically I think we've discussed
>>> 
>>> 1) Global Setting
>>> 2) canHandle() returns an int
>>> 3) Strategy has an enum type assigned
>>> 
>>> I'm open to all three, I don't have much vested interest in this.
>>> 
>>> Darren
>>> 
>>> On Fri, Oct 4, 2013 at 3:00 PM, SuichII, Christopher
>>>  wrote:
 Well, it seems OK, but I think we should keep on discussing our options. 
 One concern I have with the global config approach is that it adds manual 
 steps for 'installing' extensions. Each extension must have installation 
 instructions to indicate which global configurations it must be included 
 in and where in that list it should be put (and of course, many extension 
 are going to say that they should be at the front of the list).
 
 -Chris
 --
 Chris Suich
 chris.su...@netapp.com
 NetApp Software Engineer
 Data Center Platforms – Cloud Solutions
 Citrix, Cisco & Red Hat
 
 On Oct 4, 2013, at 12:12 PM, Darren Shepherd  
 wrote:
 
> On 10/04/2013 11:58 AM, SuichII, Christopher wrote:
>> Darren,
>> 
>> I think one of the benefits of allowing the priority to be specified in 
>> the xml is that it can be configured after deployment. If for some 
>> reason two strategies or providers conflict, then their priorities can 
>> be changed in XML to resolve the conflict. I believe the Spring @Order 
>> annotation an be specified in XML, not just as an annotation.
>> 
>> -Chris
>> 
> 
> I would *prefer* extensions to be order independent, but if we determine 
> they are order dependant, then that is fine too.  So if we conclude that 
> the simplest way to address this is to order the Strategies based on 
> configuration, then I will add an ordering "global configuration" as 
> described at 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Extensions.
> 
> Does the order configuration setting approach seem fine?
> 
> Darren
 
>> 



Re: why are RvR routers not HA

2013-10-09 Thread Darren Shepherd
I didn't read the whole thread yet, but at the end of the day it's sounds like 
an implementation issue.  So I'll just naively say I'll fix that :)

Darren

> On Oct 9, 2013, at 5:58 PM, Alena Prokharchyk  
> wrote:
> 
> HA redundant virtual router


  1   2   >