Re: KVM development, libvirt

2013-06-06 Thread Ove Ewerlid

On 06/06/2013 08:37 AM, Prasanna Santhanam wrote:

On Thu, Jun 06, 2013 at 08:29:26AM +0200, Ove Ewerlid wrote:

On 06/06/2013 07:10 AM, Prasanna Santhanam wrote:

On Wed, Jun 05, 2013 at 05:39:16PM +, Edison Su wrote:

I think we're missing a VOTE from Jenkins; the vote from Jenkins should
be taken as the highest priority in each release. This kind of
regression should be easily identified in Jenkins (if we have a
regression test for each environment).



+1 - need more people focussed on cloudstack-infra in general.


The 4.1 regression with local storage, which required 2 or more hosts
to reproduce, would be one example of an issue that would be
detected by automatic testing, provided the testing is done on a
sufficiently big test fixture.

Q: How many hosts are used in daily testing now?


3 (2 in a cluster, 1 in a second pod) and 1 in a second zone -
totalling 4 hosts in the test rig.

But I don't enable local storage on it. It's occupied testing XCP,
Xen and KVM with shared storage. The more configurations the longer
the test run time.



Not sure if you use multiple run queues: one queue with a more extensive 
job that runs once per day to capture issues in a larger test fixture 
that is not suitable to build for every single commit. This test needs 
to complete within 24 hours.


/Ove


--
Ove Ewerlid
System Administrator / Architect / SDN & Linux hacker
Mobile: +46706662363
Office: +4618656913 (note EMEA Time Zone)


Re: [VOTE][RESULTS] Release Apache CloudStack 4.1.0 (fifth round)

2013-06-06 Thread Hiroaki KAWAI

(2013/06/06 14:24), Prasanna Santhanam wrote:

On Thu, Jun 06, 2013 at 08:01:10AM +0900, Hiroaki KAWAI wrote:

It took time to investigate what's happening on Ubuntu.


Thanks for taking the time to investigate!



On Ubuntu, we don't have an apparent problem with the current setup.
We don't have to touch catalina.out simply because it is not used.
# CloudStack java stdout is sent to /dev/null.

In tomcat6, java stdout,stderr is redirected to catalina.out
(in /usr/share/tomcat6/bin/catalina.sh)


Does /usr/share/cloudstack-management/ link to /usr/share/tomcat6?
This is the case with RPMs at least.


No direct symlink, some dirs under /usr/share/cloudstack-management
are linked to tomcat6's.
# By the way, as the source is open, you can see `ln -s` in
# debian/rules.


In which case the permission
should be for cloud:cloud and not tomcat:tomcat as the init script
might have done?


The log directory is linked to /var/log/cloudstack/management, so the
`tomcat` user does not come into play here.


So, the init script touches catalina.out before invoking
catalina.sh. Additionally, there's logrotate configuration
in tomcat6 package, which will be installed at
/etc/logrotate.d/tomcat6 during postinstall.

IMHO, it is preferable to have our catalina.out, but
there's no need to fix it in haste.


Yes, this time we had better test it before release or it's going to be
embarrassing to have a supported OS not work.



What I meant here is: "the catalina.out issues are now
resolved. We're ready for 4.1.1."

# We should modify Ubuntu init script to have java's stdout,
# stderr redirected to catalina.out. But it is another issue.




Re: Template of systemvm

2013-06-06 Thread Takaaki Suzuki
Thank you for your help!

The connection is pretty good now.
I can download the system VM template.

On Thu, Jun 6, 2013 at 3:23 PM, Prasanna Santhanam  wrote:
> On Thu, Jun 06, 2013 at 03:13:21PM +0900, Takaaki Suzuki wrote:
>> Hi all
>>
>> I want to download the system VM template from
>> "jenkins.cloudstack.org" (URL:
>> http://jenkins.cloudstack.org/job/build-systemvm-master/lastSuccessfulBuild/artifact/tools/appliance/dist/systemvmtemplate-2013-06-04-master-kvm.qcow2.bz2)
>>
>> but, the connection is extremely slow.  Does anyone know of other
>> mirror servers or resources? Any idea what's happening with the
>> jenkins server?
>>
>
> Currently, that's the only location. We don't have mirrors for the
> system VM images. The jenkins instance is experiencing high load
> avgs.
>
>
> --
> Prasanna.,
>
> 
> Powered by BigRock.com
>


Re: KVM development, libvirt

2013-06-06 Thread Prasanna Santhanam
On Thu, Jun 06, 2013 at 09:04:55AM +0200, Ove Ewerlid wrote:
> On 06/06/2013 08:37 AM, Prasanna Santhanam wrote:
> >On Thu, Jun 06, 2013 at 08:29:26AM +0200, Ove Ewerlid wrote:
> >>On 06/06/2013 07:10 AM, Prasanna Santhanam wrote:
> >>>On Wed, Jun 05, 2013 at 05:39:16PM +, Edison Su wrote:
> I think we're missing a VOTE from Jenkins; the vote from Jenkins should
> be taken as the highest priority in each release. This kind of
> regression should be easily identified in Jenkins (if we have a
> regression test for each environment).
> 
> >>>
> >>>+1 - need more people focussed on cloudstack-infra in general.
> >>
> >>The 4.1 regression with local storage, which required 2 or more hosts
> >>to reproduce, would be one example of an issue that would be
> >>detected by automatic testing, provided the testing is done on a
> >>sufficiently big test fixture.
> >>
> >>Q: How many hosts are used in daily testing now?
> >
> >3 (2 in a cluster, 1 in a second pod) and 1 in a second zone -
> >totalling 4 hosts in the test rig.
> >
> >But I don't enable local storage on it. It's occupied testing XCP,
> >Xen and KVM with shared storage. The more configurations the longer
> >the test run time.
> >
> 
> Not sure if you use multiple run queues: one queue with a more
> extensive job that runs once per day to capture issues in a larger
> test fixture that is not suitable to build for every single commit.
> This test needs to complete within 24 hours.
> 

We don't run tests for every commit. The tests run every four to five
hours for the three hypervisors, so each test run covers the group of
commits made during that window. Each test run splits into multiple
sub-jobs that run in parallel.

The extensive jobs that test for regressions run on Wednesday and
Saturday. These can take ~6 hours to finish.

ASCII representation of how the jobs split up:

test-matrix
|___ test-packaging (new centos VM with latest packaged CloudStack)
|___ test-environment-refresh (kickstarts fresh hypervisors)
|___ test-setup-advanced-zone
     |___ test-smoke-matrix (Weekdays, except Wed)
     |    |___ test#1
     |    |___ test#2
     |    |___ test#3
     |    |___ ...
     |    |___ test#n
     |___ test-regression-matrix (Wed, Sat)


HTH

-- 
Prasanna.,


Re: Template of systemvm

2013-06-06 Thread Wido den Hollander



On 06/06/2013 08:23 AM, Prasanna Santhanam wrote:

On Thu, Jun 06, 2013 at 03:13:21PM +0900, Takaaki Suzuki wrote:

Hi all

I want to download the system VM template from
"jenkins.cloudstack.org" (URL:
http://jenkins.cloudstack.org/job/build-systemvm-master/lastSuccessfulBuild/artifact/tools/appliance/dist/systemvmtemplate-2013-06-04-master-kvm.qcow2.bz2)

but, the connection is extremely slow.  Does anyone know of other
mirror servers or resources? Any idea what's happening with the
jenkins server?



Currently, that's the only location. We don't have mirrors for the
system VM images. The jenkins instance is experiencing high load
avgs.


I was already hosting some of these templates on my CloudStack mirror: 
http://cloudstack.apt-get.eu/systemvm/


I just added two extra. Right now this is all done manually, so if one 
is missing that's due to me not downloading them.


Wido


Re: Template of systemvm

2013-06-06 Thread Prasanna Santhanam
On Thu, Jun 06, 2013 at 10:21:33AM +0200, Wido den Hollander wrote:
> 
> 
> On 06/06/2013 08:23 AM, Prasanna Santhanam wrote:
> >On Thu, Jun 06, 2013 at 03:13:21PM +0900, Takaaki Suzuki wrote:
> >>Hi all
> >>
> >>I want to download the system VM template from
> >>"jenkins.cloudstack.org" (URL:
> >>http://jenkins.cloudstack.org/job/build-systemvm-master/lastSuccessfulBuild/artifact/tools/appliance/dist/systemvmtemplate-2013-06-04-master-kvm.qcow2.bz2)
> >>
> >>but, the connection is extremely slow.  Does anyone know of other
> >>mirror servers or resources? Any idea what's happening with the
> >>jenkins server?
> >>
> >
> >Currently, that's the only location. We don't have mirrors for the
> >system VM images. The jenkins instance is experiencing high load
> >avgs.
> 
> I was already hosting some of these templates on my CloudStack
> mirror: http://cloudstack.apt-get.eu/systemvm/
> 
> I just added two extra. Right now this is all done manually, so if
> one is missing that's due to me not downloading them.
> 

Thanks Wido! The more mirrors the better, since these are large downloads.

-- 
Prasanna.,


Powered by BigRock.com



Re: Template of systemvm

2013-06-06 Thread Takaaki Suzuki
Cool! I'm going to use this mirror :)

On Thu, Jun 6, 2013 at 5:27 PM, Prasanna Santhanam  wrote:
> On Thu, Jun 06, 2013 at 10:21:33AM +0200, Wido den Hollander wrote:
>>
>>
>> On 06/06/2013 08:23 AM, Prasanna Santhanam wrote:
>> >On Thu, Jun 06, 2013 at 03:13:21PM +0900, Takaaki Suzuki wrote:
>> >>Hi all
>> >>
>> >>I want to download the system VM template from
>> >>"jenkins.cloudstack.org" (URL:
>> >>http://jenkins.cloudstack.org/job/build-systemvm-master/lastSuccessfulBuild/artifact/tools/appliance/dist/systemvmtemplate-2013-06-04-master-kvm.qcow2.bz2)
>> >>
>> >>but, the connection is extremely slow.  Does anyone know of other
>> >>mirror servers or resources? Any idea what's happening with the
>> >>jenkins server?
>> >>
>> >
>> >Currently, that's the only location. We don't have mirrors for the
>> >system VM images. The jenkins instance is experiencing high load
>> >avgs.
>>
>> I was already hosting some of these templates on my CloudStack
>> mirror: http://cloudstack.apt-get.eu/systemvm/
>>
>> I just added two extra. Right now this is all done manually, so if
>> one is missing that's due to me not downloading them.
>>
>
> Thanks Wido! The more mirrors the better, since these are large downloads.
>
> --
> Prasanna.,
>
> 
> Powered by BigRock.com
>


Re: Object based Secondary storage.

2013-06-06 Thread Thomas O'Dowd
Thanks Min. I've printed out the material and am reading the new threads.
I can't comment much yet until I understand things a bit more.

Meanwhile, feel free to hit me up with any S3 questions you have. I'm
looking forward to playing with the object_store branch and testing it
out.

Tom.

On Wed, 2013-06-05 at 16:14 +, Min Chen wrote:
> Welcome Tom. You can check out this FS
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Backup+Object+Store+Plugin+Framework
> for secondary storage architectural work done in the object_store branch.
> You may also check out the following recent threads regarding 3 major
> technical questions raised by the community, as well as our answers and
> clarification.
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3C77B337AF224FD84CBF8401947098DD87036A76%40SJCPEX01CL01.citrite.net%3E
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3CCDD22955.3DDDC%25min.chen%40citrix.com%3E
> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3CCDD2300D.3DE0C%25min.chen%40citrix.com%3E
> 
> 
> That branch is mainly worked on by Edison and me, and we are at PST
> timezone. 
> 
> Thanks
> -min
-- 
Cloudian KK - http://www.cloudian.com/get-started.html
Fancy 100TB of full featured S3 Storage?
Checkout the Cloudian® Community Edition!



Re: Review Request: Automation: Add testcases for Affinity/Anti-Affinity Rules

2013-06-06 Thread Prasanna Santhanam

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11067/#review21511
---


Unable to apply this :(

~/workspace/cloudstack/incubator-cloudstack(branch:master*) » git am -s patch/11067.patch
Applying: CLOUDSTACK-2254: Automation: Add automation for Affinity and Anti Affinity rules
/Users/tsp/workspace/cloudstack/incubator-cloudstack/.git/rebase-apply/patch:16: trailing whitespace.
def setUpClass(cls):
/Users/tsp/workspace/cloudstack/incubator-cloudstack/.git/rebase-apply/patch:53: trailing whitespace.
/Users/tsp/workspace/cloudstack/incubator-cloudstack/.git/rebase-apply/patch:70: trailing whitespace.
/Users/tsp/workspace/cloudstack/incubator-cloudstack/.git/rebase-apply/patch:77: trailing whitespace.
/Users/tsp/workspace/cloudstack/incubator-cloudstack/.git/rebase-apply/patch:78: trailing whitespace.
def create_aff_grp(self, api_client=None, aff_grp=None, 
error: test/integration/component/test_affinity_groups.py: does not exist in index
Patch failed at 0001 CLOUDSTACK-2254: Automation: Add automation for Affinity and Anti Affinity rules


- Prasanna Santhanam


On May 29, 2013, 7:56 a.m., Girish Shilamkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11067/
> ---
> 
> (Updated May 29, 2013, 7:56 a.m.)
> 
> 
> Review request for cloudstack, Prachi Damle, Prasanna Santhanam, and 
> sangeetha hariharan.
> 
> 
> Description
> ---
> 
> Add testcases for Affinity/Anti-Affinity Rules
> 
> 
> This addresses bug CLOUDSTACK-2254.
> 
> 
> Diffs
> -
> 
>   test/integration/component/test_affinity_groups.py PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/11067/diff/
> 
> 
> Testing
> ---
> 
> The tests which are not skipped are working.
> 
> 
> Thanks,
> 
> Girish Shilamkar
> 
>



RE: networkACLList

2013-06-06 Thread Kishan Kavala
The preferred API name is NetworkACL, which cannot be used (NetworkACL is
already used for items within the list). As for naming the API NetworkACLList
/ Group / Container: when you expand them, all are equally redundant.

> -Original Message-
> From: Prasanna Santhanam [mailto:t...@apache.org]
> Sent: Thursday, 6 June 2013 8:03 AM
> To: dev@cloudstack.apache.org
> Subject: Re: networkACLList
> 
> On Wed, Jun 05, 2013 at 05:43:31PM +, Kishan Kavala wrote:
> > Agree that it is redundant. They should be create/list/delete
> > NetworkACL. But these API names are already used for rules (ACL
> > items) within the ACL List.
> > This cannot be fixed without breaking backward compatibility.
> 
> I was talking about the new API (NetworkACLList) that groups the
> NetworkACLs. We can always rename that to something sensible before it
> gets out and we think about backward compat issues.
> 
> --
> Prasanna.,
> 
> 
> Powered by BigRock.com



RE: [DISCUSS] code-freeze and integration tests

2013-06-06 Thread Sudha Ponnaganti
+1. Feature freeze and even RC timelines should not apply to automation tests
at this time. In reality, BVT and other regression tests will continue to be
developed beyond the release date as well, so those dates should not hold for
automation.

However, having some timeline restriction would help bring automation to the
same level as the code base.

-Original Message-
From: Prasanna Santhanam [mailto:t...@apache.org] 
Sent: Wednesday, June 05, 2013 10:20 PM
To: CloudStack Dev
Subject: [DISCUSS] code-freeze and integration tests

Hi,

I would like to get everyone's opinions on the timeline and policies for 
bringing in automated tests into the repo. Integration tests are written in 
marvin by various people today within and without Citrix.
Similar to docs I'd like to propose that tests can be committed to the 
repository beyond the freeze date.

Right now all tests are being committed to master since that's the branch that 
we cut our releases out of. But after the branch for a release has been cut 
tests will be committed to both release branch and master if everyone agrees 
this is a good thing. 

Thoughts?

--
Prasanna.,


Powered by BigRock.com



Review Request: CLOUDSTACK-2288: NPE while creating volume from snapshot when the primary storage is in maintenance state.

2013-06-06 Thread Sanjay Tripathi

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11670/
---

Review request for cloudstack and Devdeep Singh.


Description
---

CLOUDSTACK-2288: NPE while creating volume from snapshot when the primary 
storage is in maintenance state.


This addresses bug CLOUDSTACK-2288.


Diffs
-

  server/src/com/cloud/storage/VolumeManagerImpl.java 43f3681 

Diff: https://reviews.apache.org/r/11670/diff/


Testing
---

Tests:
1. In CS setup, put all primary storage in maintenance mode.
2. Create a volume from snapshot.

Verified the fix locally.


Thanks,

Sanjay Tripathi



Review Request: Freshness Check for doc xml on section Storage Setup

2013-06-06 Thread Gavin Lee
For CLOUDSTACK-1597,
I made the changes weeks ago.
The commit id on master is: 78ffb7ae5ef9c4383f86fa97cd647316dde507be

It would be great if someone could help review it in case some material is
missing or out of date, since the storage part changed a lot.

Thanks.
-- 
Gavin


Re: [DISCUSS] code-freeze and integration tests

2013-06-06 Thread David Nalley
On Thu, Jun 6, 2013 at 1:20 AM, Prasanna Santhanam  wrote:
> Hi,
>
> I would like to get everyone's opinions on the timeline and policies
> for bringing in automated tests into the repo. Integration tests are
> written in marvin by various people today within and without Citrix.
> Similar to docs I'd like to propose that tests can be committed to the
> repository beyond the freeze date.
>
> Right now all tests are being committed to master since that's the
> branch that we cut our releases out of. But after the branch for a
> release has been cut tests will be committed to both release branch
> and master if everyone agrees this is a good thing.
>

I am in full agreement - code freeze shouldn't affect tests IMO.

--David


RE: StoragePoolForMigrationResponse and StoragePoolResponse

2013-06-06 Thread Devdeep Singh
Hi,

StoragePoolResponse should really only be used for listing storage pools.
Putting a suitableformigration flag etc. in it makes it awkward for other
APIs. If tomorrow the response object is updated to include more statistics
for the admin user to make a better decision, that information gets pushed in
there too, which makes it unnatural for APIs that just need the list of
storage pools. I am planning to update StoragePoolForMigrationResponse to
include the StoragePoolResponse object plus any other flags -
suitableformigration in this case. I'll file a bug for the same.
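
For what it's worth, a minimal sketch of that composition (field and accessor
names are my assumptions, not the actual code):

// Sketch only: reuse StoragePoolResponse unchanged for listings, and keep
// migration-specific details in their own wrapper type.
public class StoragePoolForMigrationResponse {
    private StoragePoolResponse storagePool;  // generic pool details
    private boolean suitableForMigration;     // migration-specific flag

    public StoragePoolForMigrationResponse(StoragePoolResponse pool,
                                           boolean suitable) {
        this.storagePool = pool;
        this.suitableForMigration = suitable;
    }

    public StoragePoolResponse getStoragePool() { return storagePool; }
    public boolean isSuitableForMigration() { return suitableForMigration; }
}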

Regards,
Devdeep

> -Original Message-
> From: Prasanna Santhanam [mailto:t...@apache.org]
> Sent: Tuesday, June 04, 2013 2:28 PM
> To: dev@cloudstack.apache.org
> Subject: Re: StoragePoolForMigrationResponse and StoragePoolResponse
> 
> On Fri, May 31, 2013 at 06:28:39PM +0530, Prasanna Santhanam wrote:
> > On Fri, May 31, 2013 at 12:24:20PM +, Pranav Saxena wrote:
> > > Hey Prasanna ,
> > >
> > > I see that the response  object name is
> > > findstoragepoolsformigrationresponse , which is correct as shown
> > > below .  Are you referring to this API or something else  ?
> > >
> > > http://MSIP:8096/client/api?command=findStoragePoolsForMigration
> > >
> > >  > > cloud-stack-version="4.2.0-SNAPSHOT">
> > >
> > >  
> > >
> >
> > No that's what is shown to the user. I meant the class within
> > org.apache.cloudstack.api.response
> >
> Fixed with 0401774a09483354f5b8532a30943351755da93f
> 
> --
> Prasanna.,
> 
> 
> Powered by BigRock.com



Re: [DISCUSS] code-freeze and integration tests

2013-06-06 Thread Joe Brockmeier
On Thu, Jun 6, 2013, at 12:20 AM, Prasanna Santhanam wrote:
> I would like to get everyone's opinions on the timeline and policies
> for bringing in automated tests into the repo. Integration tests are
> written in marvin by various people today within and without Citrix.
> Similar to docs I'd like to propose that tests can be committed to the
> repository beyond the freeze date.
> 
> Right now all tests are being committed to master since that's the
> branch that we cut our releases out of. But after the branch for a
> release has been cut tests will be committed to both release branch
> and master if everyone agrees this is a good thing. 
> 
> Thoughts?

Unless there's something I'm missing, I can't think of any reason why
tests would need to be frozen. I'm +1 for this. Thanks for raising it!

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


[GSOC] informal communication and blogs

2013-06-06 Thread Sebastien Goasguen
Hi folks,

For GSoC informal communication we can also use Twitter (I am @sebgoa).
I am already connected with Dharmesh and Ian.

It would also be nice to see you guys start a blog about your experience. 
Something like blogger.com is easy to use and get started.
You can then tweet your blog posts to the community (@CloudStack)

Cheers,

-sebastien

Review Request: Add docbook of ldap proposal

2013-06-06 Thread Ian Duffy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11672/
---

Review request for cloudstack and Sebastien Goasguen.


Description
---

Add the proposal for the ldap user provisioning project to the docs.


Diffs
-

  docs/en-US/CloudStack_GSoC_Guide.xml 91c2967 
  docs/en-US/gsoc-imduffy15.xml PRE-CREATION 

Diff: https://reviews.apache.org/r/11672/diff/


Testing
---

The added xml file was built with publican successfully.


Thanks,

Ian Duffy



Re: Build failed in Jenkins: cloudstack-rat-master #1468

2013-06-06 Thread David Nalley
Why is jenkins trying to create a tag in our repo?

--David

On Thu, Jun 6, 2013 at 9:00 AM, Apache Jenkins Server
 wrote:
> See 
>
> --
> Started by an SCM change
> Building remotely on ubuntu2 in workspace 
> Checkout:cloudstack-rat-master /  - hudson.remoting.Channel@9907404:ubuntu2
> Using strategy: Default
> Last Built Revision: Revision d98289baca7fbc8a793adadfa386e6ab234952f7 (origin/master)
> Fetching changes from 1 remote Git repository
> Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/cloudstack.git
> Commencing build of Revision c0d894346a57e61626f332a9ef25efa9b5e77646 (origin/master)
> Checking out Revision c0d894346a57e61626f332a9ef25efa9b5e77646 (origin/master)
> FATAL: Could not apply tag jenkins-cloudstack-rat-master-1468
> hudson.plugins.git.GitException: Could not apply tag jenkins-cloudstack-rat-master-1468
> at hudson.plugins.git.GitAPI.tag(GitAPI.java:829)
> at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1270)
> at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1231)
> at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2348)
> at hudson.remoting.UserRequest.perform(UserRequest.java:118)
> at hudson.remoting.UserRequest.perform(UserRequest.java:48)
> at hudson.remoting.Request$2.run(Request.java:326)
> at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:679)
> Caused by: hudson.plugins.git.GitException: Command "git tag -a -f -m Jenkins Build #1468 jenkins-cloudstack-rat-master-1468" returned status code 128:
> stdout:
> stderr:
> *** Please tell me who you are.
>
> Run
>
>   git config --global user.email "y...@example.com"
>   git config --global user.name "Your Name"
>
> to set your account's default identity.
> Omit --global to set the identity only in this repository.
>
> fatal: empty ident   not allowed
>
> at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:897)
> at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:858)
> at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:868)
> at hudson.plugins.git.GitAPI.tag(GitAPI.java:827)
> ... 12 more


Re: Review Request: Add docbook of ldap proposal

2013-06-06 Thread Sebastien Goasguen

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11672/#review21520
---

Ship it!


Patch applied to master with commit cc7e9eed7e1340729109983f79200557df22296b
Make sure that you define a [user] in your .gitconfig so that we can keep track 
of the authorship of the patch properly.

- Sebastien Goasguen


On June 6, 2013, 1 p.m., Ian Duffy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11672/
> ---
> 
> (Updated June 6, 2013, 1 p.m.)
> 
> 
> Review request for cloudstack and Sebastien Goasguen.
> 
> 
> Description
> ---
> 
> Add the proposal for the ldap user provisioning project to the docs.
> 
> 
> Diffs
> -
> 
>   docs/en-US/CloudStack_GSoC_Guide.xml 91c2967 
>   docs/en-US/gsoc-imduffy15.xml PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/11672/diff/
> 
> 
> Testing
> ---
> 
> The added xml file was built with publican successfully.
> 
> 
> Thanks,
> 
> Ian Duffy
> 
>



Re: Review Request: Add docbook of ldap proposal

2013-06-06 Thread Sebastien Goasguen


> On June 6, 2013, 1:32 p.m., Sebastien Goasguen wrote:
> > Patch applied to master with commit cc7e9eed7e1340729109983f79200557df22296b
> > Make sure that you define a [user] in your .gitconfig so that we can keep 
> > track of the authorship of the patch properly.

And you can mark the review as "submitted"


- Sebastien


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11672/#review21520
---


On June 6, 2013, 1 p.m., Ian Duffy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11672/
> ---
> 
> (Updated June 6, 2013, 1 p.m.)
> 
> 
> Review request for cloudstack and Sebastien Goasguen.
> 
> 
> Description
> ---
> 
> Add the proposal for the ldap user provisioning project to the docs.
> 
> 
> Diffs
> -
> 
>   docs/en-US/CloudStack_GSoC_Guide.xml 91c2967 
>   docs/en-US/gsoc-imduffy15.xml PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/11672/diff/
> 
> 
> Testing
> ---
> 
> The added xml file was built with publican successfully.
> 
> 
> Thanks,
> 
> Ian Duffy
> 
>



Re: [VOTE][RESULTS] Release Apache CloudStack 4.1.0 (fifth round)

2013-06-06 Thread John Burwell
Joe,

To the best of my knowledge, we have not placed a size restriction on bug
fixes for a minor release.  As I understand our versioning policy, minor
releases are scoped to non-interface-breaking bug fixes.  Time drift is a
non-interface-breaking bug, and it is a significant operational issue that
should be fixed as soon as feasible.  Admittedly, the test effort will be
significant, but based on the previous conversation, the test plan is well
understood.  As such, fixing this defect appears to be within the scope of
our versioning policy, and it can be integrated and tested with an acceptable
level of risk.

Seeing as users may be experiencing other system VM problems, it might be
wise to take a step back for 4.1.1 and completely re-test the images anyway.
Would it be acceptable to test the 4.2 system VMs against 4.1 and, based on
the results, determine if/when they could be included in a 4.1 minor release?

Thanks,
-John

On Jun 5, 2013, at 10:42 AM, Joe Brockmeier  wrote:

> Hi John, 
> 
> On Tue, Jun 4, 2013, at 09:42 PM, John Burwell wrote:
>> I would like to get clock drift fixed for 4.1.1 as well.  What needs
>> to be done to test the 4.2 system VMs?  How can folks assist with the
>> testing process?
> 
> I'd like to get it fixed as well. However, I think that updating system
> VMs is a pretty big leap for a point release. We might have an issue
> with the fix that has gone in so far for KVM:
> 
> http://markmail.org/message/xoy2wn4ypxpdek4r
> 
> (Note, I am not saying I'm happy about letting the issue sit until 4.2 -
> but I'm not sure that a fix that potentially disruptive should go into a
> point release.) 
> 
> Best,
> 
> jzb
> -- 
> Joe Brockmeier
> j...@zonker.net
> Twitter: @jzb
> http://www.dissociatedpress.net/



Re: [DISCUSS] NFS cache storage issue on object_store

2013-06-06 Thread John Burwell
Edison,

Please see my comments in-line below.

Thanks,
-John

On Jun 5, 2013, at 6:55 PM, Edison Su  wrote:

> 
> 
>> -Original Message-
>> From: John Burwell [mailto:jburw...@basho.com]
>> Sent: Wednesday, June 05, 2013 1:04 PM
>> To: dev@cloudstack.apache.org
>> Subject: Re: [DISCUSS] NFS cache storage issue on object_store
>> 
>> Edison,
>> 
>> You have provided some great information below which helps greatly to
>> understand the role of the "NFS cache" mechanism.  To summarize, this
>> mechanism is only currently required for Xen snapshot operations driven by
>> Xen's coalescing operations.  Is my understanding correct?  Just out of
> 
> I think Ceph may still need "NFS cache", for example, during delta snapshot 
> backup:
> http://ceph.com/dev-notes/incremental-snapshots-with-rbd/
> You need to create a delta snapshot into a file, then upload the file into S3.
> 
> For KVM, if the snapshot is taken on qcow2, then we need to copy the
> snapshot into a file system, then back it up to S3.
> 
> Another use case for the "NFS cache" is caching templates stored on S3 when 
> there is no zone-wide primary storage. We need to download the template from 
> S3 into every primary storage; without a cache, each download takes a while. 
> Comparing downloading a template directly from S3 (if the S3 is region-wide) 
> against downloading from a zone-wide "cache" storage, I would say the 
> download from the zone-wide cache storage should be faster. If there is no 
> zone-wide primary storage, we would download the template from S3 several 
> times, which is quite time consuming.
> 
> 
> There may be other places that use the "NFS cache", but the point is that 
> as long as the mgt server can be decoupled from this "cache" storage, we can 
> decide when/how to use cache storage based on different kinds of 
> hypervisor/storage combinations in the future.

I think we would do well to re-orient the way we think about roles and 
requirements.  Ceph doesn't need a file system to perform a delta snapshot 
operation; Xen, KVM, and/or VMWare need access to a file system to perform 
these operations.  The hypervisor plugin should request a reservation of x size 
as a file handle from the Storage subsystem.  The Ceph driver implements this 
request by using a staging area + transfer operation.  This approach hides the 
operation/rules around the staging area from clients, protects against 
concurrent requests flooding a resource, and allows hypervisor-specific 
behavior/rules to be encapsulated in the appropriate plugin.
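
A rough sketch of that reservation-style contract (all names hypothetical;
this is a shape, not an actual CloudStack interface):

// Hypothetical sketch: the hypervisor plugin asks for scratch space; the
// driver (e.g. the Ceph driver) fulfills it with a staging area plus a
// transfer operation, and can throttle or fail when capacity is exhausted.
interface StagingStorage {
    StagingReservation reserve(long sizeInBytes) throws StagingFullException;
}

interface StagingReservation {
    java.io.File getFile();  // file handle backed by the staging area
    void release();          // free the space once the transfer completes
}

class StagingFullException extends Exception { }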

> 
>> curiosity, is there a Xen expert on the list who can provide a high-level
>> description of the coalescing operation -- in particular, the way it
>> interacts with storage?  I have Googled a bit, and found very little
>> information about it.
>> Has the object_store branch been tested with VMWare and KVM?  If so,
>> what operations on these hypervisors have been tested?
> 
> Both VMware and KVM are tested, but without S3 support. We haven't had time 
> to look at how to use S3 on either hypervisor yet. 
> For example, we should take a look at how to import a template from a URL 
> into a VMware data store, so we can eliminate the "NFS cache" during 
> template import.

Given the release extension and the impact of these tests on the 
implementation, we need to test S3 with VMWare and KVM pre-merge.

> 
>> 
>> In reading through the description below, my operational concerns remain
>> regarding potential race conditions and resource exhaustion.  Also, in
>> reading through the description, I think we should find a new name for this
>> mechanism.  As Chip has previously mentioned, a cache implies the following
>> characteristics:
>> 
>>1. Optional: Systems can operate without caches, just more slowly.
>> However, with this mechanism, snapshots on Xen will not function.
> 
> 
> I agree on this one.
> 
>>2. Volatility: Caches are backed by durable, non-volatile storage.  Therefore,
>> Therefore,
>> if the cache's data is lost, it can be rebuilt from the backing store and no 
>> data
>> will be permanently lost from the system.  However, this mechanism
>> contains snapshots in-transit to an object store.  If the data contained in 
>> this
>> "cache" were lost before its transfer to the object store completed, the
>> snapshot data would be lost.
> 
> It's the same thing as the file cache on a Linux file system: if the file 
> cache is not flushed to disk when the machine loses power, the data in the 
> file cache is lost.
> When we back up the snapshot from primary storage to S3, the snapshot is 
> copied to the "NFS cache", then immediately copied from the "NFS cache" into 
> S3. If the snapshot on the "NFS cache" is lost, then the snapshot backup 
> fails. The user can issue another backup snapshot command in this case. 
> So I don't think it's an issue.

The window of opportunity for data loss from a file system sync is much 
narrower for the Linux filesystem than for this staging area.  Furthermore, 

Re: deleteVolume is sync

2013-06-06 Thread Marcus Sorensen
So does it just need to be async, or is deleteVolume doing too much in
both moving the volume to destroy state and expunging? If I transition
a volume to 'Destroy' state, the storage cleanup thread comes along
and deletes it for me later, similar to how the VMs are expunged. This
seems preferable, because one could potentially undelete a volume
within the window.
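
For illustration, the mark-and-sweep shape I'm describing might look like
this (illustrative names only, not CloudStack's actual code):

// deleteVolume can stay sync because it only flips a DB state; the storage
// cleanup thread does the expensive expunge work later, off the API thread.
enum VolumeState { READY, DESTROY, EXPUNGED }

class Volume {
    VolumeState state = VolumeState.READY;
}

class VolumeService {
    // Sync path: a cheap state change, and the volume is still recoverable.
    boolean deleteVolume(Volume v) {
        v.state = VolumeState.DESTROY;
        return true;
    }

    // Periodic cleanup thread: sends the real delete to the agent, then
    // marks the volume expunged.
    void cleanup(Iterable<Volume> volumes) {
        for (Volume v : volumes) {
            if (v.state == VolumeState.DESTROY) {
                // ... send DeleteVolumeCommand to the agent here ...
                v.state = VolumeState.EXPUNGED;
            }
        }
    }
}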

On Wed, Jun 5, 2013 at 7:47 PM, Mike Tutkowski
 wrote:
> Hey Marcus,
>
> To me, it seems like it should be async, as well.
>
> As far as I know (at least in pre 4.2), unless you are deleting a volume
> that has never been attached to a VM, the CS MS would have to have the
> hypervisor perform some operation upon the deletion of a CloudStack
> volume...and that could take a bit of time.
>
>
>
>
> On Wed, Jun 5, 2013 at 7:24 PM, Marcus Sorensen  wrote:
>
>> Oh, I should add that I traced it through the system, and it actually
>> sends a DeleteVolumeCommand to the agent. That has to finish before
>> the sync call completes.
>>
>> This is on 4.1, if it changes significantly with the storage refactor,
>> that's fine, but I'd like to know if there was a reason for it in case
>> we want to make it async for us.
>>
>> On Wed, Jun 5, 2013 at 7:21 PM, Marcus Sorensen 
>> wrote:
>> > Just wondering why deleteVolume is a sync call. It doesn't seem to
>> > adhere to the 'mark it removed, let a worker expunge it later after X
>> > seconds' paradigm.  I only noticed this when a storage system was
>> > taking a bit to do the work and thus blocking the API call.
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *™*


Handling Self Signed Certs

2013-06-06 Thread Will Stevens
Hey All,
I am building integration between CS and an external Palo Alto Firewall
device.  The API calls to the PA device are done over HTTPS.  In some cases
(like testing or a POC), it makes sense to use a self signed cert for this
connection.

Currently I have a little http client wrapper which allows the use of a
self signed cert.  Obviously, I do not want to use the wrapper when a real
cert is used.

What I am thinking of doing is adding a checkbox on the 'Add Palo Alto
Device' configuration overlay with an option for 'Using a self signed
cert'.  If this checkbox is checked, then the http client wrapper is used
so the self signed cert will not throw errors, if it is not checked, the
the http client wrapper will not be used and errors will be thrown if the
cert is not valid.

Is this a realistic approach to this problem?  Is this problem handled in
other parts of the system in a different way?

Thanks,

Will


[ACS42][DONATED FEATURE] CloudStack Advanced Password Management Engine

2013-06-06 Thread Musayev, Ilya
ISWest contracted CloudSand to develop the Advanced Password Management Engine 
(APME). ISWest, the owner and sponsor of APME, would like to donate the APME 
feature to the Apache CloudStack community. Special thanks go to ISWest's 
Clayton Weise for supporting the Apache CloudStack community and choosing to 
donate this feature.

For technical design questions, please reach out to me directly via this 
thread, or email me and CC Clayton Weise from ISWest.

Thanks
-ilya

Abstract:

Present versions of Apache CloudStack, up to and including 4.2, lack secure 
and granular password management controls for domain admins and domain users.
Specifically, there is no way for a domain admin to enforce complex password 
rules, password expiration, or password history for domain users. Moreover, 
basic domain users cannot change their own passwords, and a domain admin 
cannot lock or reset the password of a domain user within the same domain.

Current state:

This feature has been developed on the 4.0 code base and will be thoroughly 
tested in multiple environments. It will then be ported to the latest 4.2 
code base and tested yet again by ISWest and CloudSand.

Feature details and Specifications:

Exceptions:

0) Don't use APME if CloudStack is configured to use an external source 
(LDAP/AD); display a friendly message on the password manager page that this 
environment is using an external user authentication mechanism.

1. Create a page under the domain user admin tab for the domain admin to
   enforce password complexity for domain users (see the sketch after this
   list):
   1. Enforce usage of:
      1. Upper case characters, lower case characters, and digits
      2. Special characters such as !@#$%^&*()
      3. A password character count greater than "x"
      4. Password expiration every x days for all users in the domain
      5. Avoidance of the last X previously used passwords, kept in a
         password history table
      6. Not applying the password manager rule set to specific users,
         comma-separated in a field (with service accounts in mind)
2. Enable the ability for a domain admin to change the passwords of domain
   users
3. Enable the ability for a domain user to reset his own password
4. The APME task is configurable via global settings
5. A global customizable email notification is configured via global
   settings, with username, domain, and password expiration date in the
   email body - passed on as attributes, i.e. , , etc.
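
As an illustration of complexity rules 1.1-1.3 above, a minimal check might
look like this (an illustrative sketch, not the APME implementation):

import java.util.regex.Pattern;

// Sketch only: checks upper/lower case, digit, special character, and
// minimum length, per the rules listed above.
public final class PasswordComplexityCheck {
    private static final Pattern UPPER   = Pattern.compile("[A-Z]");
    private static final Pattern LOWER   = Pattern.compile("[a-z]");
    private static final Pattern DIGIT   = Pattern.compile("[0-9]");
    private static final Pattern SPECIAL = Pattern.compile("[!@#$%^&*()]");

    public static boolean isCompliant(String password, int minLength) {
        return password != null
                && password.length() >= minLength
                && UPPER.matcher(password).find()
                && LOWER.matcher(password).find()
                && DIGIT.matcher(password).find()
                && SPECIAL.matcher(password).find();
    }
}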



Conditions:

Rules apply to each CloudStack domain; each domain may have different rules.

If a new password complexity is defined for an applicable existing user base, 
it will take effect on the next APME job execution. The password complexity 
rules are effective immediately if the user changes his password in the UI.

All users will get an email notification that they have to change their 
password upon login to CS within the grace period (set it to -1 if you need 
an immediate change; this takes effect the next time the APME task is run).

If a user changes the password prior to expiration, mark in the table that 
the user has reset the password.

If the password management complexity has been relaxed from a more 
restrictive set - do nothing.

If a new user is added and APME is enabled, the user must adhere to the APME 
rule set.

Notification rules:

Email the user daily before the password expires to notify them that they 
need to reset the password. The advanced email notification rule is 
configured in global settings.

Display an event on the user's page that the password is expiring in X days.



Re: deleteVolume is sync

2013-06-06 Thread Mike Tutkowski
If it's a long-running op to delete a volume (which is can be), I would say
it should be async.


On Thu, Jun 6, 2013 at 9:25 AM, Marcus Sorensen  wrote:

> So does it just need to be async, or is deleteVolume doing too much in
> both moving the volume to destroy state and expunging? If I transition
> a volume to 'Destroy' state, the storage cleanup thread comes along
> and deletes it for me later, similar to how the VMs are expunged. This
> seems preferable, because one could potentially undelete a volume
> within the window.
>
> On Wed, Jun 5, 2013 at 7:47 PM, Mike Tutkowski
>  wrote:
> > Hey Marcus,
> >
> > To me, it seems like it should be async, as well.
> >
> > As far as I know (at least in pre 4.2), unless you are deleting a volume
> > that has never been attached to a VM, the CS MS would have to have the
> > hypervisor perform some operation upon the deletion of a CloudStack
> > volume...and that could take a bit of time.
> >
> >
> >
> >
> > On Wed, Jun 5, 2013 at 7:24 PM, Marcus Sorensen 
> wrote:
> >
> >> Oh, I should add that I traced it through the system, and it actually
> >> sends a DeleteVolumeCommand to the agent. That has to finish before
> >> the sync call completes.
> >>
> >> This is on 4.1, if it changes significantly with the storage refactor,
> >> that's fine, but I'd like to know if there was a reason for it in case
> >> we want to make it async for us.
> >>
> >> On Wed, Jun 5, 2013 at 7:21 PM, Marcus Sorensen 
> >> wrote:
> >> > Just wondering why deleteVolume is a sync call. It doesn't seem to
> >> > adhere to the 'mark it removed, let a worker expunge it later after X
> >> > seconds' paradigm.  I only noticed this when a storage system was
> >> > taking a bit to do the work and thus blocking the API call.
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud
> > *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


Re: [ACS42][DONATED FEATURE] CloudStack Advanced Password Management Engine

2013-06-06 Thread David Nalley
On Thu, Jun 6, 2013 at 12:10 PM, Musayev, Ilya  wrote:
> ISWest contracted CloudSand to develop the Advanced Password Management 
> Engine (APME). ISWest  the owner and sponsor of APME, would like to donate 
> the APME feature to Apache CloudStack Community.  Special thanks goes to 
> ISWest - Clayton Weise for supporting the Apache CloudStack Community and 
> choosing to donate this feature.
>
>

First - awesome of both of you to work on this and to be interested in
donating the work.

Second - is this up publicly anywhere for review?

--David


Re: networkACLList

2013-06-06 Thread Prasanna Santhanam
"ACL group", "ACL container", or "ACL collection" sounds a lot better than
"ACL list" - "Access Control List List" just reads oddly. Feel free to ignore
my nitpick though :)


On Thu, Jun 06, 2013 at 11:11:50AM +, Kishan Kavala wrote:
> The preferred API name is NetworkACL, which cannot be used (NetworkACL
> is already used for items within the list). As for naming the API
> NetworkACLList / Group / Container: when you expand them, all are
> equally redundant.
> 
> > -Original Message-
> > From: Prasanna Santhanam [mailto:t...@apache.org]
> > Sent: Thursday, 6 June 2013 8:03 AM
> > To: dev@cloudstack.apache.org
> > Subject: Re: networkACLList
> > 
> > On Wed, Jun 05, 2013 at 05:43:31PM +, Kishan Kavala wrote:
> > > Agree that it is redundant. They should be create/list/delete
> > > NetworkACL. But these API names are already used for rules (ACL
> > > items) within the ACL List.
> > > This cannot be fixed without breaking backward compatibility.
> > 
> > I was talking about the new API (NetworkACLList) that groups the
> > NetworkACLs. We can always rename that to something sensible before it
> > gets out and we think about backward compat issues.
> > 
> > --
> > Prasanna.,
> > 
> > 
> > Powered by BigRock.com

-- 
Prasanna.,


Powered by BigRock.com



Re: [GSOC] informal communication and blogs

2013-06-06 Thread Joe Brockmeier
On Thu, Jun 6, 2013, at 07:15 AM, Sebastien Goasguen wrote:
> For GSoC informal communication we can also use Twitter (I am @sebgoa).
> I am already connected with Dharmesh and Ian.
> 
> It would also be nice to see you guys start a blog about your experience.
> Something like blogger.com is easy to use and get started.
> You can then tweet your blog posts to the community (@CloudStack)

If you write something up, a note to the marketing list would be a Good
Thing (TM). 

If you haven't blogged before or want extra eyeballs on something before
posting, happy to help...

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


RE: [ACS42][DONATED FEATURE] CloudStack Advanced Password Management Engine

2013-06-06 Thread Musayev, Ilya
> -Original Message-
> From: David Nalley [mailto:da...@gnsa.us]
> Sent: Thursday, June 06, 2013 12:18 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [ACS42][DONATED FEATURE] CloudStack Advanced Password
> Management Engine
> 
> On Thu, Jun 6, 2013 at 12:10 PM, Musayev, Ilya 
> wrote:
> > ISWest contracted CloudSand to develop the Advanced Password
> Management Engine (APME). ISWest  the owner and sponsor of APME,
> would like to donate the APME feature to Apache CloudStack Community.
> Special thanks goes to ISWest - Clayton Weise for supporting the Apache
> CloudStack Community and choosing to donate this feature.
> >
> >
> 
> First - awesome of both of you to work on this and to be interested in
> donating the work.
> 
> Second - is this up publicly anywhere for review?
> 
> --David

David,

The feature has been developed; once we pass all internal QA rounds between 
ISWest and CloudSand, we will port it to a separate 4.2 branch on the ACS ASF 
git, go through more QA, and eventually merge it to master.

For now, I've just sent this email with the specs to make the community aware 
of what's coming soon.

Thanks
ilya



Review Request: fix the occurrences of account.account. to account.

2013-06-06 Thread SrikanteswaraRao Talluri

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11677/
---

Review request for cloudstack and Prasanna Santhanam.


Description
---

fix the occurrences of account.account. to account.


Diffs
-

  test/integration/component/test_advancedsg_networks.py e24254d 
  test/integration/component/test_custom_hostname.py a85f619 
  test/integration/component/test_netscaler_configs.py 1c67bc4 
  test/integration/component/test_netscaler_lb.py 80b3f0b 
  test/integration/component/test_netscaler_lb_algo.py 4a2d1fe 
  test/integration/component/test_netscaler_lb_sticky.py 7f391d0 
  test/integration/component/test_shared_networks.py 5f96419 

Diff: https://reviews.apache.org/r/11677/diff/


Testing
---

tested


Thanks,

SrikanteswaraRao Talluri



Review Request: Add docbook of GSOC native SDN controller proposal

2013-06-06 Thread tuna

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11678/
---

Review request for cloudstack.


Description
---

This is the docbook for my GSOC project: "Add Xen/XCP support for native GRE 
SDN controller"


Diffs
-

  docs/en-US/gsoc-tuna.xml 68032a8 

Diff: https://reviews.apache.org/r/11678/diff/


Testing
---

The added xml file was built with publican successfully.


Thanks,

tuna



RE: [ACS42][DONATED FEATURE] CloudStack Advanced Password Management Engine

2013-06-06 Thread Animesh Chaturvedi


> -Original Message-
> From: David Nalley [mailto:da...@gnsa.us]
> Sent: Thursday, June 06, 2013 9:18 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [ACS42][DONATED FEATURE] CloudStack Advanced Password
> Management Engine
> 
> On Thu, Jun 6, 2013 at 12:10 PM, Musayev, Ilya 
> wrote:
> > ISWest contracted CloudSand to develop the Advanced Password
> Management Engine (APME). ISWest  the owner and sponsor of APME, would
> like to donate the APME feature to Apache CloudStack Community.  Special
> thanks goes to ISWest - Clayton Weise for supporting the Apache
> CloudStack Community and choosing to donate this feature.
> >
> >
> 
> First - awesome of both of you to work on this and to be interested in
> donating the work.
> 
> Second - is this up publicly anywhere for review?
> 
> --David
[Animesh>] David, this will have to go through IP clearance, right?


Re: [ACS42][DONATED FEATURE] CloudStack Advanced Password Management Engine

2013-06-06 Thread David Nalley
On Thu, Jun 6, 2013 at 1:36 PM, Animesh Chaturvedi
 wrote:
>
>
>> -Original Message-
>> From: David Nalley [mailto:da...@gnsa.us]
>> Sent: Thursday, June 06, 2013 9:18 AM
>> To: dev@cloudstack.apache.org
>> Subject: Re: [ACS42][DONATED FEATURE] CloudStack Advanced Password
>> Management Engine
>>
>> On Thu, Jun 6, 2013 at 12:10 PM, Musayev, Ilya 
>> wrote:
>> > ISWest contracted CloudSand to develop the Advanced Password
>> Management Engine (APME). ISWest  the owner and sponsor of APME, would
>> like to donate the APME feature to Apache CloudStack Community.  Special
>> thanks goes to ISWest - Clayton Weise for supporting the Apache
>> CloudStack Community and choosing to donate this feature.
>> >
>> >
>>
>> First - awesome of both of you to work on this and to be interested in
>> donating the work.
>>
>> Second - is this up publicly anywhere for review?
>>
>> --David
> [Animesh>] David this will have to go through IP clearance right?


Probably.
Ilya has already brought up the IP issue on private@ before beginning
the work, so it's not really a surprise.

--David


RE: [DISCUSS] code-freeze and integration tests

2013-06-06 Thread Alex Huang
It should not affect tests.  We want to move to mostly if not all automated 
tests.  If we freeze the tests that means no more testing for the release.  :P

--Alex

> -Original Message-
> From: Prasanna Santhanam [mailto:t...@apache.org]
> Sent: Wednesday, June 5, 2013 10:20 PM
> To: CloudStack Dev
> Subject: [DISCUSS] code-freeze and integration tests
> 
> Hi,
> 
> I would like to get everyone's opinions on the timeline and policies for
> bringing in automated tests into the repo. Integration tests are written in
> marvin by various people today within and without Citrix.
> Similar to docs I'd like to propose that tests can be committed to the
> repository beyond the freeze date.
> 
> Right now all tests are being committed to master since that's the branch that
> we cut our releases out of. But after the branch for a release has been cut
> tests will be committed to both release branch and master if everyone
> agrees this is a good thing.
> 
> Thoughts?
> 
> --
> Prasanna.,
> 
> 
> Powered by BigRock.com



Re: deleteVolume is sync

2013-06-06 Thread Marcus Sorensen
Well, if it doesn't actually delete the volume, just mark it 'destroy'
so that the cleanup thread takes care of it, then the api call can
stay sync, since it's just changing a database entry. If it does
actually do the work right then, then we will need to make it async. I
haven't even looked at 4.2 though to see if this was addressed.

On Thu, Jun 6, 2013 at 10:14 AM, Mike Tutkowski
 wrote:
> If it's a long-running op to delete a volume (which is can be), I would say
> it should be async.
>
>
> On Thu, Jun 6, 2013 at 9:25 AM, Marcus Sorensen  wrote:
>
>> So does it just need to be async, or is deleteVolume doing too much in
>> both moving the volume to destroy state and expunging? If I transition
>> a volume to 'Destroy' state, the storage cleanup thread comes along
>> and deletes it for me later, similar to how the VMs are expunged. This
>> seems preferable, because one could potentially undelete a volume
>> within the window.
>>
>> On Wed, Jun 5, 2013 at 7:47 PM, Mike Tutkowski
>>  wrote:
>> > Hey Marcus,
>> >
>> > To me, it seems like it should be async, as well.
>> >
>> > As far as I know (at least in pre 4.2), unless you are deleting a volume
>> > that has never been attached to a VM, the CS MS would have to have the
>> > hypervisor perform some operation upon the deletion of a CloudStack
>> > volume...and that could take a bit of time.
>> >
>> >
>> >
>> >
>> > On Wed, Jun 5, 2013 at 7:24 PM, Marcus Sorensen 
>> wrote:
>> >
>> >> Oh, I should add that I traced it through the system, and it actually
>> >> sends a DeleteVolumeCommand to the agent. That has to finish before
>> >> the sync call completes.
>> >>
>> >> This is on 4.1, if it changes significantly with the storage refactor,
>> >> that's fine, but I'd like to know if there was a reason for it in case
>> >> we want to make it async for us.
>> >>
>> >> On Wed, Jun 5, 2013 at 7:21 PM, Marcus Sorensen 
>> >> wrote:
>> >> > Just wondering why deleteVolume is a sync call. It doesn't seem to
>> >> > adhere to the 'mark it removed, let a worker expunge it later after X
>> >> > seconds' paradigm.  I only noticed this when a storage system was
>> >> > taking a bit to do the work and thus blocking the API call.
>> >>
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkow...@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud
>> > *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *™*


RE: [DISCUSS] code-freeze and integration tests

2013-06-06 Thread Animesh Chaturvedi


> -Original Message-
> From: Joe Brockmeier [mailto:j...@zonker.net]
> Sent: Thursday, June 06, 2013 5:14 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS] code-freeze and integration tests
> 
> On Thu, Jun 6, 2013, at 12:20 AM, Prasanna Santhanam wrote:
> > I would like to get everyone's opinions on the timeline and policies
> > for bringing in automated tests into the repo. Integration tests are
> > written in marvin by various people today within and without Citrix.
> > Similar to docs I'd like to propose that tests can be committed to the
> > repository beyond the freeze date.
> >
> > Right now all tests are being committed to master since that's the
> > branch that we cut our releases out of. But after the branch for a
> > release has been cut tests will be committed to both release branch
> > and master if everyone agrees this is a good thing.
> >
> > Thoughts?
> 
> Unless there's something I'm missing, I can't think of any reason why
> tests would need to be frozen. I'm +1 for this. Thanks for raising it!
> 
> Best,
> 
> jzb
> --
[Animesh>] Yes, it is obvious that tests should be allowed.
> Joe Brockmeier
> j...@zonker.net
> Twitter: @jzb
> http://www.dissociatedpress.net/


Re: Handling Self Signed Certs

2013-06-06 Thread Kelven Yang
Will,

We have several other integrated components with a similar situation; it
makes sense to consolidate the HTTPS client we use across CloudStack and
have a global configuration to deal with self-signed certificates for all
of them in testing or POC.

To keep the testing/POC process smooth, we may allow self-signed
certificates by default (which is the current behavior); security-sensitive
customers should disallow self-signed certificates in their production
environments.

Kelven 

On 6/6/13 9:08 AM, "Will Stevens"  wrote:

>Hey All,
>I am building integration between CS and an external Palo Alto Firewall
>device.  The API calls to the PA device are done over HTTPS.  In some
>cases
>(like testing or a POC), it makes sense to use a self signed cert for this
>connection.
>
>Currently I have a little http client wrapper which allows the use of a
>self signed cert.  Obviously, I do not want to use the wrapper when a real
>cert is used.
>
>What I am thinking of doing is adding a checkbox on the 'Add Palo Alto
>Device' configuration overlay with an option for 'Using a self signed
>cert'.  If this checkbox is checked, then the http client wrapper is used
>so the self signed cert will not throw errors, if it is not checked, the
>the http client wrapper will not be used and errors will be thrown if the
>cert is not valid.
>
>Is this a realistic approach to this problem?  Is this problem handled in
>other parts of the system in a different way?
>
>Thanks,
>
>Will



Re: deleteVolume is sync

2013-06-06 Thread Mike Tutkowski
I see what you're saying, Marcus.

That makes sense. If it's just marked as deleted, sync is the right way to
go.

I do know for 4.2 in Edison's storage framework that my plug-in is invoked
upon deletion of a CloudStack volume to delete the volume on the SAN (so it
appears to be more than marking the CloudStack volume as deleted).


On Thu, Jun 6, 2013 at 12:40 PM, Marcus Sorensen wrote:

> Well, if it doesn't actually delete the volume, just mark it 'destroy'
> so that the cleanup thread takes care of it, then the api call can
> stay sync, since it's just changing a database entry. If it does
> actually do the work right then, then we will need to make it async. I
> haven't even looked at 4.2 though to see if this was addressed.
>
> On Thu, Jun 6, 2013 at 10:14 AM, Mike Tutkowski
>  wrote:
> > If it's a long-running op to delete a volume (which is can be), I would
> say
> > it should be async.
> >
> >
> > On Thu, Jun 6, 2013 at 9:25 AM, Marcus Sorensen 
> wrote:
> >
> >> So does it just need to be async, or is deleteVolume doing too much in
> >> both moving the volume to destroy state and expunging? If I transition
> >> a volume to 'Destroy' state, the storage cleanup thread comes along
> >> and deletes it for me later, similar to how the VMs are expunged. This
> >> seems preferable, because one could potentially undelete a volume
> >> within the window.
> >>
> >> On Wed, Jun 5, 2013 at 7:47 PM, Mike Tutkowski
> >>  wrote:
> >> > Hey Marcus,
> >> >
> >> > To me, it seems like it should be async, as well.
> >> >
> >> > As far as I know (at least in pre 4.2), unless you are deleting a
> volume
> >> > that has never been attached to a VM, the CS MS would have to have the
> >> > hypervisor perform some operation upon the deletion of a CloudStack
> >> > volume...and that could take a bit of time.
> >> >
> >> >
> >> >
> >> >
> >> > On Wed, Jun 5, 2013 at 7:24 PM, Marcus Sorensen 
> >> wrote:
> >> >
> >> >> Oh, I should add that I traced it through the system, and it actually
> >> >> sends a DeleteVolumeCommand to the agent. That has to finish before
> >> >> the sync call completes.
> >> >>
> >> >> This is on 4.1, if it changes significantly with the storage
> refactor,
> >> >> that's fine, but I'd like to know if there was a reason for it in
> case
> >> >> we want to make it async for us.
> >> >>
> >> >> On Wed, Jun 5, 2013 at 7:21 PM, Marcus Sorensen  >
> >> >> wrote:
> >> >> > Just wondering why deleteVolume is a sync call. It doesn't seem to
> >> >> > adhere to the 'mark it removed, let a worker expunge it later
> after X
> >> >> > seconds' paradigm.  I only noticed this when a storage system was
> >> >> > taking a bit to do the work and thus blocking the API call.
> >> >>
> >> >
> >> >
> >> >
> >> > --
> >> > *Mike Tutkowski*
> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> > e: mike.tutkow...@solidfire.com
> >> > o: 303.746.7302
> >> > Advancing the way the world uses the
> >> > cloud
> >> > *™*
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud
> > *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*
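
For reference, a minimal sketch of the "mark Destroy, expunge later" pattern
under discussion -- all names here are illustrative, not the actual CloudStack
4.x classes:

import java.util.List;

public class VolumeLifecycleSketch {
    enum State { Ready, Destroy }

    static class Volume { long id; State state; }

    interface VolumeDao {                        // hypothetical DAO
        Volume findById(long id);
        List<Volume> listByState(State state);
        void update(Volume v);
        void expunge(long id);
    }

    private final VolumeDao dao;
    VolumeLifecycleSketch(VolumeDao dao) { this.dao = dao; }

    // Sync API path: just flips the DB state, so the call returns quickly
    // and a volume can still be "undeleted" inside the cleanup window.
    public void deleteVolume(long volumeId) {
        Volume v = dao.findById(volumeId);
        v.state = State.Destroy;
        dao.update(v);
    }

    // Periodic cleanup thread (cf. storage.cleanup.interval): the
    // long-running hypervisor/storage work, e.g. sending a
    // DeleteVolumeCommand to the agent, happens here, off the API path.
    public void cleanupStorage() {
        for (Volume v : dao.listByState(State.Destroy)) {
            // send DeleteVolumeCommand / invoke the storage plug-in here
            dao.expunge(v.id);
        }
    }
}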


Re: Handling Self Signed Certs

2013-06-06 Thread Will Stevens
Hey Kelven,
I am using the same https client libraries as elsewhere in CloudStack (well,
one of them, because there is more than one version of the http client libs
currently available in CS).

I am using this client:
import org.apache.http.impl.client.DefaultHttpClient;

I initialize it like this:
_httpclient = new DefaultHttpClient();

Then if self signed certs are allowed, I currently have a utility library
in my plugin which allows me to do this:
// Allows you to connect via SSL using unverified certs
_httpclient = HttpClientWrapper.wrapClient(_httpclient);

Is there a class that already exists in CloudStack which I can use to wrap
my client to enable unverified certs, or will I need to add one?  Should I
create a global setting such as 'Allow unverified SSL certs' which would be
checked by the code to determine if the http client should be wrapped?

Thx, Will


On Thu, Jun 6, 2013 at 2:43 PM, Kelven Yang  wrote:

> Will,
>
> We have several other integrated components that have the similar
> situation, it makes sense to consolidate the HTTPS client we used across
> CloudStack and have a global configuration to deal with self-signed
> certificate for all in testing or POC.
>
> To help testing/POC process to be smooth, we may allow self-signed
> certificate by default(which is the current behave), security sensitive
> customers should disallow self-signed certificates in their production
> environment.
>
> Kelven
>
> On 6/6/13 9:08 AM, "Will Stevens"  wrote:
>
> >Hey All,
> >I am building integration between CS and an external Palo Alto Firewall
> >device.  The API calls to the PA device are done over HTTPS.  In some
> >cases
> >(like testing or a POC), it makes sense to use a self signed cert for this
> >connection.
> >
> >Currently I have a little http client wrapper which allows the use of a
> >self signed cert.  Obviously, I do not want to use the wrapper when a real
> >cert is used.
> >
> >What I am thinking of doing is adding a checkbox on the 'Add Palo Alto
> >Device' configuration overlay with an option for 'Using a self signed
> >cert'.  If this checkbox is checked, then the http client wrapper is used
> >so the self signed cert will not throw errors, if it is not checked, the
> >the http client wrapper will not be used and errors will be thrown if the
> >cert is not valid.
> >
> >Is this a realistic approach to this problem?  Is this problem handled in
> >other parts of the system in a different way?
> >
> >Thanks,
> >
> >Will
>
>
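
For readers following the thread, a minimal sketch of what such a wrapper
typically looks like against the 4.x Apache HttpClient Will is using (the
HttpClientWrapper name comes from his plugin; the body below is a generic
trust-everything shim and is suitable for testing/POC only):

import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import org.apache.http.conn.scheme.Scheme;
import org.apache.http.conn.ssl.SSLSocketFactory;
import org.apache.http.impl.client.DefaultHttpClient;

public final class HttpClientWrapper {
    // WARNING: disables all certificate and hostname validation.
    public static DefaultHttpClient wrapClient(DefaultHttpClient client) {
        try {
            TrustManager trustAll = new X509TrustManager() {
                public void checkClientTrusted(X509Certificate[] chain, String authType) { }
                public void checkServerTrusted(X509Certificate[] chain, String authType) { }
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            };
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, new TrustManager[] { trustAll }, null);
            SSLSocketFactory ssf =
                new SSLSocketFactory(ctx, SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);
            client.getConnectionManager().getSchemeRegistry()
                  .register(new Scheme("https", 443, ssf));
            return client;
        } catch (Exception e) {
            throw new RuntimeException("Unable to relax SSL validation", e);
        }
    }
}

A global setting like the one Will proposes would then simply gate the
wrapClient() call.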


Re: StoragePoolForMigrationResponse and StoragePoolResponse

2013-06-06 Thread Min Chen
I agree with Prasanna on this. We don't need to introduce several
storage-pool-related responses just for specific APIs. In a way,
suitableformigration is an attribute that may or may not be set on a
storage pool. If you don't want to show it in the listStoragePools
response, you can leave it null so that JSON serialization will ignore it.

Just my two cents.
-min

On 6/6/13 5:07 AM, "Devdeep Singh"  wrote:

>Hi,
>
>StoragePoolResponse should really only be used for listing storage pools.
>Putting a suitableformigration flag etc. makes it weird for other apis.
>If tomorrow the response object is updated to include more statistics for
>the admin user to make a better decision, such information gets pushed
>in there, which makes it unnatural for APIs that just need the list of
>storage pools. I am planning to update StoragePoolForMigrationResponse to
>include the StoragePoolResponse object and any other flag;
>suitableformigration in this case. I'll file a bug for the same.
>
>Regards,
>Devdeep
>
>> -Original Message-
>> From: Prasanna Santhanam [mailto:t...@apache.org]
>> Sent: Tuesday, June 04, 2013 2:28 PM
>> To: dev@cloudstack.apache.org
>> Subject: Re: StoragePoolForMigrationResponse and StoragePoolResponse
>> 
>> On Fri, May 31, 2013 at 06:28:39PM +0530, Prasanna Santhanam wrote:
>> > On Fri, May 31, 2013 at 12:24:20PM +, Pranav Saxena wrote:
>> > > Hey Prasanna ,
>> > >
>> > > I see that the response  object name is
>> > > findstoragepoolsformigrationresponse , which is correct as shown
>> > > below .  Are you referring to this API or something else  ?
>> > >
>> > > http://MSIP:8096/client/api?command=findStoragePoolsForMigration
>> > >
>> > > > > > cloud-stack-version="4.2.0-SNAPSHOT">
>> > >
>> > >  
>> > >
>> >
>> > No that's what is shown to the user. I meant the class within
>> > org.apache.cloudstack.api.response
>> >
>> Fixed with 0401774a09483354f5b8532a30943351755da93f
>> 
>> --
>> Prasanna.,
>> 
>> 
>> Powered by BigRock.com
>
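
To illustrate Min's point: CloudStack's JSON responses go through Gson, which
omits null fields by default, so a shared response class can carry the extra
flag without it ever appearing in listStoragePools output. A minimal sketch
(field names illustrative, not the real StoragePoolResponse):

import com.google.gson.Gson;
import com.google.gson.annotations.SerializedName;

public class StoragePoolResponseSketch {
    @SerializedName("id")
    private String id;

    // Set only by findStoragePoolsForMigration; left null for
    // listStoragePools, so Gson drops the key from the JSON entirely.
    @SerializedName("suitableformigration")
    private Boolean suitableForMigration;

    public static void main(String[] args) {
        StoragePoolResponseSketch pool = new StoragePoolResponseSketch();
        pool.id = "pool-1";
        System.out.println(new Gson().toJson(pool));
        // -> {"id":"pool-1"}
        pool.suitableForMigration = Boolean.TRUE;
        System.out.println(new Gson().toJson(pool));
        // -> {"id":"pool-1","suitableformigration":true}
    }
}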



Re: Object based Secondary storage.

2013-06-06 Thread Min Chen
Thanks Tom. Indeed I have an S3 question that needs some advice from S3
experts. To support uploading objects > 5G, I have used TransferManager.upload
to upload the object to S3; the upload went fine and the object was
successfully put to S3. However, later on when I use "s3cmd get " to
retrieve this object, I always get this exception:

"MD5 signatures do not match: computed=Y, received="X"

It seems that Amazon S3 keeps a different MD5 sum for a multi-part
uploaded object. We have been using Riak CS for our S3 testing. If I
change to not using multi-part upload and directly invoke S3 putObject,
I do not run into this issue. Have you had such an experience before?

-min

On 6/6/13 1:56 AM, "Thomas O'Dowd"  wrote:

>Thanks Min. I've printed out the material and am reading new threads.
>Can't comment much yet until I understand things a bit more.
>
>Meanwhile, feel free to hit me up with any S3 questions you have. I'm
>looking forward to playing with the object_store branch and testing it
>out.
>
>Tom.
>
>On Wed, 2013-06-05 at 16:14 +, Min Chen wrote:
>> Welcome Tom. You can check out this FS
>> 
>>https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Backup+Obj
>>ec
>> t+Store+Plugin+Framework for secondary storage architectural work done
>>in
>> object_store branch.You may also check out the following recent threads
>> regarding 3 major technical questions raised by community as well as our
>> answers and clarification.
>> 
>>http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3C77
>>B3
>> 37AF224FD84CBF8401947098DD87036A76%40SJCPEX01CL01.citrite.net%3E
>> 
>>http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3CCD
>>D2
>> 2955.3DDDC%25min.chen%40citrix.com%3E
>> 
>>http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3CCD
>>D2
>> 300D.3DE0C%25min.chen%40citrix.com%3E
>> 
>> 
>> That branch is mainly worked on by Edison and me, and we are at PST
>> timezone. 
>> 
>> Thanks
>> -min
>-- 
>Cloudian KK - http://www.cloudian.com/get-started.html
>Fancy 100TB of full featured S3 Storage?
>Checkout the Cloudian® Community Edition!
>
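
For context, the high-level call Min describes looks roughly like this with
the AWS SDK for Java of that era (bucket, key, endpoint and credentials below
are placeholders). TransferManager switches to multipart upload automatically
above a size threshold, which is what produces the non-MD5 ETag discussed in
the replies:

import java.io.File;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class S3UploadSketch {
    public static void main(String[] args) throws InterruptedException {
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        s3.setEndpoint("http://s3.example.com");   // e.g. a Riak CS endpoint

        TransferManager tm = new TransferManager(s3);
        // Files above the SDK's multipart threshold are split into parts
        // and uploaded in parallel.
        Upload upload = tm.upload("imagestore",
                "tmpl/1/1/routing-1/test", new File("/tmp/template.vhd"));
        upload.waitForCompletion();   // blocks until done, throws on failure
        tm.shutdownNow();
    }
}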



Re: Object based Secondary storage.

2013-06-06 Thread John Burwell
Min,

Are you calculating the MD5 or letting the Amazon client do it?

Thanks,
-John

On Jun 6, 2013, at 4:54 PM, Min Chen  wrote:

> Thanks Tom. Indeed I have a S3 question that need some advise from some S3
> experts. To support upload object > 5G, I have used TransferManager.upload
> to upload object to S3, upload went fine and object are successfully put
> to S3. However, later on when I am using "s3cmd get " to
> retrieve this object, I always got this exception:
> 
> "MD5 signatures do not match: computed=Y, received="X"
> 
> It seems that Amazon S3 kept a different Md5 sum for the multi-part
> uploaded object. We have been using Riak CS for our S3 testing. If I
> changed to not using multi-part upload and directly invoking S3 putObject,
> I will not run into this issue. Do you have such experience before?
> 
> -min
> 
> On 6/6/13 1:56 AM, "Thomas O'Dowd"  wrote:
> 
>> Thanks Min. I've printed out the material and am reading new threads.
>> Can't comment much yet until I understand things a bit more.
>> 
>> Meanwhile, feel free to hit me up with any S3 questions you have. I'm
>> looking forward to playing with the object_store branch and testing it
>> out.
>> 
>> Tom.
>> 
>> On Wed, 2013-06-05 at 16:14 +, Min Chen wrote:
>>> Welcome Tom. You can check out this FS
>>> 
>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Backup+Obj
>>> ec
>>> t+Store+Plugin+Framework for secondary storage architectural work done
>>> in
>>> object_store branch.You may also check out the following recent threads
>>> regarding 3 major technical questions raised by community as well as our
>>> answers and clarification.
>>> 
>>> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3C77
>>> B3
>>> 37AF224FD84CBF8401947098DD87036A76%40SJCPEX01CL01.citrite.net%3E
>>> 
>>> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3CCD
>>> D2
>>> 2955.3DDDC%25min.chen%40citrix.com%3E
>>> 
>>> http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3CCD
>>> D2
>>> 300D.3DE0C%25min.chen%40citrix.com%3E
>>> 
>>> 
>>> That branch is mainly worked on by Edison and me, and we are at PST
>>> timezone. 
>>> 
>>> Thanks
>>> -min
>> -- 
>> Cloudian KK - http://www.cloudian.com/get-started.html
>> Fancy 100TB of full featured S3 Storage?
>> Checkout the Cloudian® Community Edition!
>> 
> 



Re: Object based Secondary storage.

2013-06-06 Thread Min Chen
Hi John,

I am not actually calculating the MD5 explicitly. I traced the code to the
ServiceUtils.downloadObjectToFile method in the Amazon S3 SDK; my invocation
of S3Utils.getObject failed at the following code in ServiceUtils:

byte[] clientSideHash = null;
byte[] serverSideHash = null;
try {
    // Multipart Uploads don't have an MD5 calculated on the service side
    if (ServiceUtils.isMultipartUploadETag(s3Object.getObjectMetadata().getETag()) == false) {
        clientSideHash = Md5Utils.computeMD5Hash(new FileInputStream(destinationFile));
        serverSideHash = BinaryUtils.fromHex(s3Object.getObjectMetadata().getETag());
    }
} catch (Exception e) {
    log.warn("Unable to calculate MD5 hash to validate download: " + e.getMessage(), e);
}

if (performIntegrityCheck && clientSideHash != null && serverSideHash != null
        && !Arrays.equals(clientSideHash, serverSideHash)) {
    throw new AmazonClientException("Unable to verify integrity of data download.  " +
            "Client calculated content hash didn't match hash calculated by Amazon S3.  " +
            "The data stored in '" + destinationFile.getAbsolutePath() + "' may be corrupt.");
}

Some web discussions mention that this is related to multi-part copy:
http://sourceforge.net/p/s3tools/discussion/618865/thread/50a00c18. But
the resolution there does not seem to work for me.

Any advise?

Thanks
-min




On 6/6/13 2:02 PM, "John Burwell"  wrote:

>Min,
>
>Are you calculating the MD5 or letting the Amazon client do it?
>
>Thanks,
>-John
>
>On Jun 6, 2013, at 4:54 PM, Min Chen  wrote:
>
>> Thanks Tom. Indeed I have a S3 question that need some advise from some
>>S3
>> experts. To support upload object > 5G, I have used
>>TransferManager.upload
>> to upload object to S3, upload went fine and object are successfully put
>> to S3. However, later on when I am using "s3cmd get " to
>> retrieve this object, I always got this exception:
>> 
>> "MD5 signatures do not match: computed=Y, received="X"
>> 
>> It seems that Amazon S3 kept a different Md5 sum for the multi-part
>> uploaded object. We have been using Riak CS for our S3 testing. If I
>> changed to not using multi-part upload and directly invoking S3
>>putObject,
>> I will not run into this issue. Do you have such experience before?
>> 
>> -min
>> 
>> On 6/6/13 1:56 AM, "Thomas O'Dowd"  wrote:
>> 
>>> Thanks Min. I've printed out the material and am reading new threads.
>>> Can't comment much yet until I understand things a bit more.
>>> 
>>> Meanwhile, feel free to hit me up with any S3 questions you have. I'm
>>> looking forward to playing with the object_store branch and testing it
>>> out.
>>> 
>>> Tom.
>>> 
>>> On Wed, 2013-06-05 at 16:14 +, Min Chen wrote:
 Welcome Tom. You can check out this FS
 
 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Backup+O
bj
 ec
 t+Store+Plugin+Framework for secondary storage architectural work done
 in
 object_store branch.You may also check out the following recent
threads
 regarding 3 major technical questions raised by community as well as
our
 answers and clarification.
 
 
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3C
77
 B3
 37AF224FD84CBF8401947098DD87036A76%40SJCPEX01CL01.citrite.net%3E
 
 
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3C
CD
 D2
 2955.3DDDC%25min.chen%40citrix.com%3E
 
 
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201306.mbox/%3C
CD
 D2
 300D.3DE0C%25min.chen%40citrix.com%3E
 
 
 That branch is mainly worked on by Edison and me, and we are at PST
 timezone. 
 
 Thanks
 -min
>>> -- 
>>> Cloudian KK - http://www.cloudian.com/get-started.html
>>> Fancy 100TB of full featured S3 Storage?
>>> Checkout the Cloudian® Community Edition!
>>> 
>> 
>
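
One observation on the code above: isMultipartUploadETag presumably just looks
for the "-<partCount>" suffix AWS puts on multipart ETags, so a multipart ETag
in a different form (such as Riak CS's base64 one) slips past the guard and
then fails the hex MD5 comparison. A hedged sketch of that guard (the actual
SDK internals may differ):

public class ETagGuardSketch {
    // Assumed behavior: AWS multipart ETags look like "<hex-md5>-<partCount>".
    static boolean isMultipartUploadETag(String etag) {
        return etag.contains("-");
    }

    public static void main(String[] args) {
        // AWS multipart ETag: guard matches, the client skips the MD5 check.
        System.out.println(isMultipartUploadETag("70e1860be687d43c039873adef4280f2-3")); // true
        // A base64 multipart ETag with no dash: guard misses, so the client
        // tries to treat it as a hex MD5 and the comparison fails.
        System.out.println(isMultipartUploadETag("WxEUkiQzTWm_2C8A92fLQg=="));           // false
    }
}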



RE: Handling Self Signed Certs

2013-06-06 Thread Soheil Eizadi
What is missing is a facility to import a certificate into the store. If it
were available, you could use it for self-signed certs. Ideally it should be
part of the GUI for adding devices.

I am implementing a similar HTTP client. You are using DefaultHttpClient, so it
is based on the newer Apache libraries. The ones I found in CloudStack were the
older Commons HttpClient, which is EOL.

In my case I planned to wrap the client as you have for development, and for
production to have an API to import an SSL certificate into the certificate
store.

I would call AuthScope(host, 443) to limit access to only the specific host
and port.

-Soheil

From: williamstev...@gmail.com [williamstev...@gmail.com] on behalf of Will 
Stevens [wstev...@cloudops.com]
Sent: Thursday, June 06, 2013 1:08 PM
To: dev@cloudstack.apache.org
Subject: Re: Handling Self Signed Certs

Hey Kelven,
I am using the same https client libraries as elsewhere in Cloudstack (well
one of them because there is more than one version of http client libs
currently available in CS).

I am using this client:
import org.apache.http.impl.client.DefaultHttpClient;

I initialize it like this:
_httpclient = new DefaultHttpClient();

Then if self signed certs are allowed, I currently have a utility library
in my plugin which allows me to do this:
// Allows you to connect via SSL using unverified certs
_httpclient = HttpClientWrapper.wrapClient(_httpclient);

Is there a class that already exists in CloudStack which I can use to wrap
my client to enable unverified certs, or will I need to add one?  Should I
create a global setting such as 'Allow unverified SSL certs' which would be
checked by the code to determine if the http client should be wrapped?

Thx, Will


On Thu, Jun 6, 2013 at 2:43 PM, Kelven Yang  wrote:

> Will,
>
> We have several other integrated components that have the similar
> situation, it makes sense to consolidate the HTTPS client we used across
> CloudStack and have a global configuration to deal with self-signed
> certificate for all in testing or POC.
>
> To help testing/POC process to be smooth, we may allow self-signed
> certificate by default(which is the current behave), security sensitive
> customers should disallow self-signed certificates in their production
> environment.
>
> Kelven
>
> On 6/6/13 9:08 AM, "Will Stevens"  wrote:
>
> >Hey All,
> >I am building integration between CS and an external Palo Alto Firewall
> >device.  The API calls to the PA device are done over HTTPS.  In some
> >cases
> >(like testing or a POC), it makes sense to use a self signed cert for this
> >connection.
> >
> >Currently I have a little http client wrapper which allows the use of a
> >self signed cert.  Obviously, I do not want to use the wrapper when a real
> >cert is used.
> >
> >What I am thinking of doing is adding a checkbox on the 'Add Palo Alto
> >Device' configuration overlay with an option for 'Using a self signed
> >cert'.  If this checkbox is checked, then the http client wrapper is used
> >so the self signed cert will not throw errors, if it is not checked, the
> >the http client wrapper will not be used and errors will be thrown if the
> >cert is not valid.
> >
> >Is this a realistic approach to this problem?  Is this problem handled in
> >other parts of the system in a different way?
> >
> >Thanks,
> >
> >Will
>
>


RE: Storage VM to Management Server Connectivity Problem

2013-06-06 Thread Soheil Eizadi
The configuration database is definitely wrong, just not sure how it got that 
way.
-Soheil


mysql> select * from configuration where name= "host";
+----------+----------+-------------------+------+--------------+-------------+
| category | instance | component         | name | value        | description |
+----------+----------+-------------------+------+--------------+-------------+
| Advanced | DEFAULT  | management-server | host | 192.168.56.1 | NULL        |
+----------+----------+-------------------+------+--------------+-------------+
1 row in set (0.10 sec)

mysql> select * from configuration where name= "management.network.cidr";
+----------+----------+-------------------+--------------------------+-----------------+-------------+
| category | instance | component         | name                     | value           | description |
+----------+----------+-------------------+--------------------------+-----------------+-------------+
| Advanced | DEFAULT  | management-server | management.network.cidr | 192.168.56.0/24 | NULL        |
+----------+----------+-------------------+--------------------------+-----------------+-------------+
1 row in set (0.00 sec)

mysql> select * from configuration where name= "secstorage.allowed.internal.sites";
+----------+----------+-------------------+-----------------------------------+----------------+-------------+
| category | instance | component         | name                              | value          | description |
+----------+----------+-------------------+-----------------------------------+----------------+-------------+
| Advanced | DEFAULT  | management-server | secstorage.allowed.internal.sites | 192.168.56.0/8 | NULL        |
+----------+----------+-------------------+-----------------------------------+----------------+-------------+
1 row in set (0.03 sec)


From: Wei ZHOU [ustcweiz...@gmail.com]
Sent: Wednesday, June 05, 2013 11:16 PM
To: dev@cloudstack.apache.org
Subject: Re: Storage VM to Management Server Connectivity Problem

What is the value in cloud.configuration table with name='host'?

-Wei

2013/6/6, Soheil Eizadi :
> For now I patched this by editing the file /var/cache/cloud/cmdline and
> fixing the IP Address and restarting the Cloud Service on Storage VM. Now it
> is communicating with MS:
>
> INFO  [storage.secondary.SecondaryStorageListener] (AgentConnectTaskPool-1:)
> Received a host startup notification
> com.cloud.agent.api.StartupSecondaryStorageCommand
> INFO  [network.security.SecurityGroupListener] (AgentConnectTaskPool-1:)
> Received a host startup notification
> INFO  [storage.download.DownloadMonitorImpl] (AgentConnectTaskPool-1:)
> Template Sync found SystemVM Template (XenServer) already in the template
> host table
> INFO  [storage.download.DownloadMonitorImpl] (AgentConnectTaskPool-1:)
> Template Sync did not find CentOS 5.5(64-bit) no GUI (KVM) on the server 2,
> will request download shortly
> ...
> 
> From: Soheil Eizadi [seiz...@infoblox.com]
> Sent: Wednesday, June 05, 2013 10:44 PM
> To: dev@cloudstack.apache.org
> Subject: Storage VM to Management Server Connectivity Problem
>
> I am bringing up a Storage VM on my XenServer that is running on My MAC. I
> noticed that the Management Server IP Address is incorrect 192.168.56.1
> versus 172.16.197.1. Checking if this is something others have seen? This
> address is in the last boot record for the Storage VM. The IP Address
> 192.168.56.1 is not valid on my MAC, at least not right now!
> -Soheil
>
>>> Storage VM
>
> root@s-1-VM:~# /usr/local/cloud/systemvm/ssvm-check.sh
> 
> First DNS server is  172.16.197.135
> PING 172.16.197.135 (172.16.197.135): 56 data bytes
> 64 bytes from 172.16.197.135: icmp_seq=0 ttl=64 time=17.099 ms
> 64 bytes from 172.16.197.135: icmp_seq=1 ttl=64 time=0.783 ms
> --- 172.16.197.135 ping statistics ---
> 2 packets transmitted, 2 packets received, 0% packet loss
> round-trip min/avg/max/stddev = 0.783/8.941/17.099/8.158 ms
> Good: Can ping DNS server
> 
> Good: DNS resolves download.cloud.com
> 
> NFS is currently mounted
> 
> Management server is 192.168.56.1. Checking connectivity.
> ERROR: Cannot connect to 192.168.56.1 port 8250
> 2013/05/10 14:23:02 socat[9304] E connecting to AF=2 192.168.56.1:8250:
> Connection timed out
> root@s-1-VM:~# ifconfig
> eth0  Link encap:Ethernet  HWaddr 0e:00:a9:fe:02:74
>   inet addr:169.254.2.116  Bcast:169.254.255.255  Mask:255.255.0.0
>   inet6 addr: fe80::c00:a9ff:fefe:274/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:234 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:131 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 

[ACS42] Release Status Update

2013-06-06 Thread Animesh Chaturvedi

Folks

The new feature freeze date is 6/28 and the RC is 8/19

Out of 104 proposed features /improvements, the status is

|-+-|
| New Features / Improvements | |
|-+-|
| Closed  |   6 |
| Resolved|  46 |
| In Progress |  19 |
| Reopened|   3 |
| Ready To Review |   2 |
| Open|  28 |
|-+-|
| Total   | 104 |
|-+-|

Thanks to folks who updated their tickets, those who have not please take a 
moment to update your feature/improvement tickets

As for bugs here is a summary for this week: 
|-+-+--+---+---|
| Bugs| Blocker | Critical | Major | Total |
|-+-+--+---+---|
| Incoming|   5 |   12 |16 |40 |
| Outgoing|  13 |   20 |38 |77 |
| Open Unassigned |   8 |   19 |96 |   153 |
| Open Total  |  21 |   55 |   207 |   344 |
|-+-+--+---+---|

Given that we have a large number of unassigned and open defects, if you are
interested in helping out on defects please check the release dashboard widget
on issues by components: http://s.apache.org/M5k

One more thing that I want to call out is that we have 342 resolved / fixed 
bugs that are not closed yet. This number is big (61 blocker, 90 critical, 171 
majors) and we need to start closing these issues.

Thanks
Animesh


Re: Storage VM to Management Server Connectivity Problem

2013-06-06 Thread Wei ZHOU
Soheil,

I think your machine has multiple NICs. When you deployed CloudStack,
192.168.56.1 was probably the IP of the first NIC, so CloudStack regarded it
as the management IP.
You need to change these values manually, restart the management server, and
destroy the system VMs (SSVM and CPVM).

-Wei


2013/6/6 Soheil Eizadi 

> The configuration database is definitely wrong, just not sure how it got
> that way.
> -Soheil
>
>
> mysql> select * from configuration where name= "host";
>
> +----------+----------+-------------------+------+--------------+-------------+
> | category | instance | component         | name | value        | description |
> +----------+----------+-------------------+------+--------------+-------------+
> | Advanced | DEFAULT  | management-server | host | 192.168.56.1 | NULL        |
> +----------+----------+-------------------+------+--------------+-------------+
> 1 row in set (0.10 sec)
>
> mysql> select * from configuration where name= "management.network.cidr";
>
> +----------+----------+-------------------+--------------------------+-----------------+-------------+
> | category | instance | component         | name                     | value           | description |
> +----------+----------+-------------------+--------------------------+-----------------+-------------+
> | Advanced | DEFAULT  | management-server | management.network.cidr | 192.168.56.0/24 | NULL        |
> +----------+----------+-------------------+--------------------------+-----------------+-------------+
> 1 row in set (0.00 sec)
>
> mysql> select * from configuration where name= "secstorage.allowed.internal.sites";
>
> +----------+----------+-------------------+-----------------------------------+----------------+-------------+
> | category | instance | component         | name                              | value          | description |
> +----------+----------+-------------------+-----------------------------------+----------------+-------------+
> | Advanced | DEFAULT  | management-server | secstorage.allowed.internal.sites | 192.168.56.0/8 | NULL        |
> +----------+----------+-------------------+-----------------------------------+----------------+-------------+
> 1 row in set (0.03 sec)
>
> 
> From: Wei ZHOU [ustcweiz...@gmail.com]
> Sent: Wednesday, June 05, 2013 11:16 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Storage VM to Management Server Connectivity Problem
>
> What is the value in cloud.configuration table with name='host'?
>
> -Wei
>
> 2013/6/6, Soheil Eizadi :
> > For now I patched this by editing the file /var/cache/cloud/cmdline and
> > fixing the IP Address and restarting the Cloud Service on Storage VM.
> Now it
> > is communicating with MS:
> >
> > INFO  [storage.secondary.SecondaryStorageListener]
> (AgentConnectTaskPool-1:)
> > Received a host startup notification
> > com.cloud.agent.api.StartupSecondaryStorageCommand
> > INFO  [network.security.SecurityGroupListener] (AgentConnectTaskPool-1:)
> > Received a host startup notification
> > INFO  [storage.download.DownloadMonitorImpl] (AgentConnectTaskPool-1:)
> > Template Sync found SystemVM Template (XenServer) already in the template
> > host table
> > INFO  [storage.download.DownloadMonitorImpl] (AgentConnectTaskPool-1:)
> > Template Sync did not find CentOS 5.5(64-bit) no GUI (KVM) on the server
> 2,
> > will request download shortly
> > ...
> > 
> > From: Soheil Eizadi [seiz...@infoblox.com]
> > Sent: Wednesday, June 05, 2013 10:44 PM
> > To: dev@cloudstack.apache.org
> > Subject: Storage VM to Management Server Connectivity Problem
> >
> > I am bringing up a Storage VM on my XenServer that is running on My MAC.
> I
> > noticed that the Management Server IP Address is incorrect 192.168.56.1
> > versus 172.16.197.1. Checking if this is something others have seen? This
> > address is in the last boot record for the Storage VM. The IP Address
> > 192.168.56.1 is not valid on my MAC, at least not right now!
> > -Soheil
> >
> >>> Storage VM
> >
> > root@s-1-VM:~# /usr/local/cloud/systemvm/ssvm-check.sh
> > 
> > First DNS server is  172.16.197.135
> > PING 172.16.197.135 (172.16.197.135): 56 data bytes
> > 64 bytes from 172.16.197.135: icmp_seq=0 ttl=64 time=17.099 ms
> > 64 bytes from 172.16.197.135: icmp_seq=1 ttl=64 time=0.783 ms
> > --- 172.16.197.135 ping statistics ---
> > 2 packets transmitted, 2 packets received, 0% packet loss
> > round-trip min/avg/max/stddev = 0.783/8.941/17.099/8.158 ms
> > Good: Can ping DNS server
> > 
> > Good: DNS resolves download.cloud.com
> > 
> > NFS is currently mounted
> > 
> > Management server is 192.168.56.1. Checking connectivity.
> > ERROR: Cannot connect to 192.168.56.1 port 8250
> > 2013/05/10 14:23:02 socat[9304] E connecting to 

Re: Handling Self Signed Certs

2013-06-06 Thread Kelven Yang
Will,

We don't have a common HTTPS client yet. As far as I know, different
module developers are probably using slightly different ways to deal with
self-signed certificates; it is a good time to consolidate now, if it is
not too late. You may make the facility available in the cloud-utils package
and encourage adoption by these modules.

Some modules, i.e., download manager, API module to hypervisor hosts have
the similar situation.


Kelven

On 6/6/13 2:33 PM, "Soheil Eizadi"  wrote:

>What is missing is a facility to import a certificate into the store. If
>it was available you could use it for self signed CERTS. Ideally it
>should be part of GUI to add devices.
>
>I am implementing a similar HTTP Client. You are using DefaultHttpClient
>so it is based on the newer Apache libraries. The ones I found in
>CloudStack were older Commons HttpClient which was EOL.
>
>In my case I planned to wrap the Client as you have for development and
>for production have an API to import a certificate for SSL into the
>Certificate Store.
>
>I would call to AuthScope(host, 443) to limit access to only the specific
>host and port.
>
>-Soheil
>
>From: williamstev...@gmail.com [williamstev...@gmail.com] on behalf of
>Will Stevens [wstev...@cloudops.com]
>Sent: Thursday, June 06, 2013 1:08 PM
>To: dev@cloudstack.apache.org
>Subject: Re: Handling Self Signed Certs
>
>Hey Kelven,
>I am using the same https client libraries as elsewhere in Cloudstack
>(well
>one of them because there is more than one version of http client libs
>currently available in CS).
>
>I am using this client:
>import org.apache.http.impl.client.DefaultHttpClient;
>
>I initialize it like this:
>_httpclient = new DefaultHttpClient();
>
>Then if self signed certs are allowed, I currently have a utility library
>in my plugin which allows me to do this:
>// Allows you to connect via SSL using unverified certs
>_httpclient = HttpClientWrapper.wrapClient(_httpclient);
>
>Is there a class that already exists in CloudStack which I can use to wrap
>my client to enable unverified certs, or will I need to add one?  Should I
>create a global setting such as 'Allow unverified SSL certs' which would
>be
>checked by the code to determine if the http client should be wrapped?
>
>Thx, Will
>
>
>On Thu, Jun 6, 2013 at 2:43 PM, Kelven Yang 
>wrote:
>
>> Will,
>>
>> We have several other integrated components that have the similar
>> situation, it makes sense to consolidate the HTTPS client we used across
>> CloudStack and have a global configuration to deal with self-signed
>> certificate for all in testing or POC.
>>
>> To help testing/POC process to be smooth, we may allow self-signed
>> certificate by default(which is the current behave), security sensitive
>> customers should disallow self-signed certificates in their production
>> environment.
>>
>> Kelven
>>
>> On 6/6/13 9:08 AM, "Will Stevens"  wrote:
>>
>> >Hey All,
>> >I am building integration between CS and an external Palo Alto Firewall
>> >device.  The API calls to the PA device are done over HTTPS.  In some
>> >cases
>> >(like testing or a POC), it makes sense to use a self signed cert for
>>this
>> >connection.
>> >
>> >Currently I have a little http client wrapper which allows the use of a
>> >self signed cert.  Obviously, I do not want to use the wrapper when a
>>real
>> >cert is used.
>> >
>> >What I am thinking of doing is adding a checkbox on the 'Add Palo Alto
>> >Device' configuration overlay with an option for 'Using a self signed
>> >cert'.  If this checkbox is checked, then the http client wrapper is
>>used
>> >so the self signed cert will not throw errors, if it is not checked,
>>the
>> >the http client wrapper will not be used and errors will be thrown if
>>the
>> >cert is not valid.
>> >
>> >Is this a realistic approach to this problem?  Is this problem handled
>>in
>> >other parts of the system in a different way?
>> >
>> >Thanks,
>> >
>> >Will
>>
>>



RE: [DISCUSS] NFS cache storage issue on object_store

2013-06-06 Thread Edison Su


> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Thursday, June 06, 2013 7:47 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS] NFS cache storage issue on object_store
> 
> Edison,
> 
> Please my comments in-line below.
> 
> Thanks,
> -John
> 
> On Jun 5, 2013, at 6:55 PM, Edison Su  wrote:
> 
> >
> >
> >> -Original Message-
> >> From: John Burwell [mailto:jburw...@basho.com]
> >> Sent: Wednesday, June 05, 2013 1:04 PM
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: [DISCUSS] NFS cache storage issue on object_store
> >>
> >> Edison,
> >>
> >> You have provided some great information below which helps greatly to
> >> understand the role of the "NFS cache" mechanism.  To summarize, this
> >> mechanism is only currently required for Xen snapshot operations
> >> driven by Xen's coalescing operations.  Is my understanding correct?
> >> Just out of
> >
> > I think Ceph may still need "NFS cache", for example, during delta snapshot
> backup:
> > http://ceph.com/dev-notes/incremental-snapshots-with-rbd/
> > You need to create a delta snapshot into a file, then upload the file into 
> > S3.
> >
> > For KVM, if the snapshot is taken on qcow2, then need to copy the
> snapshot into a file system, then backup it to S3.
> >
> > Another usage case for "NFS cache " is to cache template stored on S3, if
> there is no zone-wide primary storage. We need to download template from
> S3 into every primary storage, if there is no cache, each download will take a
> while: comparing download template directly from S3(if the S3 is region wide)
> with download from a zone wide "cache" storage, I would say, the download
> from zone wide cache storage should be faster than from region wide S3. If
> there is no zone wide primary storage, then we will download the template
> from S3 several times, which is quite time consuming.
> >
> >
> > There may have other places to use "NFS cache", but the point is as
> > long as mgt server can be decoupled from this "cache" storage, then we
> can decide when/how to use cache storage based on different kind of
> hypervisor/storage combinations in the future.
> 
> I think we would do well to re-orient the way we think about roles and
> requirements.  Ceph doesn't need a file system to perform a delta snapshot
> operation.  Xen, KVM, and/or VMWare need access to a file system to

For the Ceph delta snapshot case, it's Ceph that has the requirement of a file
system to perform the delta snapshot (http://ceph.com/docs/next/man/8/rbd/):

export-diff [image-name] [dest-path] [--from-snap snapname]
Exports an incremental diff for an image to dest path (use - for stdout). If an 
initial snapshot is specified, only changes since that snapshot are included; 
otherwise, any regions of the image that contain data are included. The end 
snapshot is specified using the standard --snap option or @snap syntax (see 
below). The image diff format includes metadata about image size changes, and 
the start and end snapshots. It efficiently represents discarded or 'zero' 
regions of the image.

The dest-path is either a file or stdout; if using stdout, we need a lot of
memory. If using the hypervisor's local file system, the local file system may
not have enough space to store the delta diff.

> perform these operations.  The hypervisor plugin should request a
> reservation of x size as a file handle from the Storage subsystem.  The Ceph
> driver implements this request by using a staging area + transfer operation.
> This approach encapsulates the operation/rules around the staging area from
> clients, protects against concurrent requests flooding a resource, and allows
> hypervisor-specific behavior/rules to be encapsulated in the appropriate plugin.
> 
> >
> >> curiosity, is their a Xen expert on the list who can provide a
> >> high-level description of the coalescing operation -- in particular,
> >> the way it interacts with storage?  I have Googled a bit, and found very
> little information about it.
> >> Has the object_store branch been tested with VMWare and KVM?  If so,
> >> what operations on these hypervisors have been tested?
> >
> > Both vmware and KVM is tested, but without S3 support. Haven't have
> time to take a look at how to use S3 in both hypervisors yet.
> > For example, we should take a look at how to import a template from url
> into vmware data store, thus, we can eliminate "NFS cache" during template
> import.
> 
> Given the release extension and the impact of these tests on the
> implementation, we need to test S3 with VMWare and KVM pre-merge.

I would like to hand over the implementation of S3 (directly using S3 without
an NFS staging area) on both VMware and KVM to the community, or to the next
release, or until after the merge.
The reason is simple: we need to get the mgt server refactor done first; the
hypervisor-side implementation or optimization can be done after the mgt
server side refactor. I think what we are doing at the mgt s
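
A minimal sketch of the reservation idea John outlines above -- the interface
names are hypothetical, not an agreed CloudStack API:

import java.io.File;

interface StagingAreaManager {
    // Reserve scratch space on the zone-wide staging (cache) storage.
    // Implementations can queue or reject requests when the area is full,
    // which throttles concurrent snapshot/template transfers.
    StagingReservation reserve(long sizeInBytes);
}

interface StagingReservation extends AutoCloseable {
    File getFile();   // scratch file backing the reservation
    void close();     // release the space once the transfer finishes
}

// A driver (e.g. an RBD one) would then, conceptually, export the diff into
// the reserved file and upload it to S3 before closing the reservation:
//
//   try (StagingReservation r = stagingAreaManager.reserve(diffSize)) {
//       exportDiffTo(r.getFile());          // hypothetical rbd export-diff step
//       s3Client.putObject(bucket, key, r.getFile());
//   }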

RE: Object based Secondary storage.

2013-06-06 Thread Edison Su
The ETags created by Riak CS and Amazon S3 look a little different in the
case of multipart upload.

Here is the result I tested on both RIAK CS and Amazon S3, with s3cmd.
Test environment:
S3cmd: version: version 1.5.0-alpha1
Riak cs:
Name: riak
Arch: x86_64
Version : 1.3.1
Release : 1.el6
Size: 40 M
Repo: installed
From repo   : basho-products

The command I used to put:
s3cmd put some-file s3://some-path --multipart-chunk-size-mb=100 -v -d

The etag created for the file, when using Riak CS is WxEUkiQzTWm_2C8A92fLQg==

EBUG: Sending request method_string='POST', 
uri='http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/test?uploadId=kfDkh7Q_QCWN7r0ZTqNq4Q==',
 headers={'content-length': '309', 'Authorization': 'AWS 
OYAZXCAFUC1DAFOXNJWI:xlkHI9tUfUV/N+Ekqpi7Jz/pbOI=', 'x-amz-date': 'Thu, 06 Jun 
2013 22:54:28 +'}, body=(309 bytes)
DEBUG: Response: {'status': 200, 'headers': {'date': 'Thu, 06 Jun 2013 22:40:09 
GMT', 'content-length': '326', 'content-type': 'application/xml', 'server': 
'Riak CS'}, 'reason': 'OK', 'data': 'http://s3.amazonaws.com/doc/2006-03-01/";>http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/testimagestoretmpl/1/1/routing-1/testkfDkh7Q_QCWN7r0ZTqNq4Q=='}

While the etag created by Amazon S3 is: 
"70e1860be687d43c039873adef4280f2-3"

DEBUG: Sending request method_string='POST', 
uri='/fixes/icecake/systdfdfdfemvm.iso1?uploadId=vdkPSAtaA7g.fdfdfdfdf..iaKRNW_8QGz.bXdfdfdfdfdfkFXwUwLzRcG5obVvJFDvnhYUFdT6fYr1rig--',
 
DEBUG: Response: {'status': 200, 'headers': {, 'server': 'AmazonS3', 
'transfer-encoding': 'chunked', 'connection': 'Keep-Alive', 'x-amz-request-id': 
'8DFF5D8025E58E99', 'cache-control': 'proxy-revalidate', 'date': 'Thu, 06 Jun 
2013 22:39:47 GMT', 'content-type': 'application/xml'}, 'reason': 'OK', 'data': 
'\n\nhttp://s3.amazonaws.com/doc/2006-03-01/";>http://fdfdfdfdfdfdfKey>fixes/icecake/systemvm.iso1"70e1860be687d43c039873adef4280f2-3"'}

So the ETag created on Amazon S3 has a "-" (dash) in it, but there is only a
"_" (underscore) on Riak CS.

Do you know the reason? What do we need to do to make it compatible with the
Amazon S3 SDK?

> -Original Message-
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Thursday, June 06, 2013 2:03 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Object based Secondary storage.
> 
> Min,
> 
> Are you calculating the MD5 or letting the Amazon client do it?
> 
> Thanks,
> -John
> 
> On Jun 6, 2013, at 4:54 PM, Min Chen  wrote:
> 
> > Thanks Tom. Indeed I have a S3 question that need some advise from
> > some S3 experts. To support upload object > 5G, I have used
> > TransferManager.upload to upload object to S3, upload went fine and
> > object are successfully put to S3. However, later on when I am using
> > "s3cmd get " to retrieve this object, I always got this 
> > exception:
> >
> > "MD5 signatures do not match: computed=Y, received="X"
> >
> > It seems that Amazon S3 kept a different Md5 sum for the multi-part
> > uploaded object. We have been using Riak CS for our S3 testing. If I
> > changed to not using multi-part upload and directly invoking S3
> > putObject, I will not run into this issue. Do you have such experience
> before?
> >
> > -min
> >
> > On 6/6/13 1:56 AM, "Thomas O'Dowd"  wrote:
> >
> >> Thanks Min. I've printed out the material and am reading new threads.
> >> Can't comment much yet until I understand things a bit more.
> >>
> >> Meanwhile, feel free to hit me up with any S3 questions you have. I'm
> >> looking forward to playing with the object_store branch and testing
> >> it out.
> >>
> >> Tom.
> >>
> >> On Wed, 2013-06-05 at 16:14 +, Min Chen wrote:
> >>> Welcome Tom. You can check out this FS
> >>>
> >>>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Backu
> >>> p+Obj
> >>> ec
> >>> t+Store+Plugin+Framework for secondary storage architectural work
> >>> t+Store+Plugin+done
> >>> in
> >>> object_store branch.You may also check out the following recent
> >>> threads regarding 3 major technical questions raised by community as
> >>> well as our answers and clarification.
> >>>
> >>> http://mail-archives.apache.org/mod_mbox/cloudstack-
> dev/201306.mbox/
> >>> %3C77
> >>> B3
> >>>
> 37AF224FD84CBF8401947098DD87036A76%40SJCPEX01CL01.citrite.net%3E
> >>>
> >>> http://mail-archives.apache.org/mod_mbox/cloudstack-
> dev/201306.mbox/
> >>> %3CCD
> >>> D2
> >>> 2955.3DDDC%25min.chen%40citrix.com%3E
> >>>
> >>> http://mail-archives.apache.org/mod_mbox/cloudstack-
> dev/201306.mbox/
> >>> %3CCD
> >>> D2
> >>> 300D.3DE0C%25min.chen%40citrix.com%3E
> >>>
> >>>
> >>> That branch is mainly worked on by Edison and me, and we are at PST
> >>> timezone.
> >>>
> >>> Thanks
> >>> -min
> >> --
> >> Cloudian KK - http://www.cloudian.com/get-started.html
> >> Fancy 100TB of full featured S3 Storage?
> >> Checkout the Cloudian(r) Community Edition!
> >>
> >



RE: [ACS42] Release Status Update

2013-06-06 Thread Sudha Ponnaganti
Animesh,

A couple of things:

- I'm thinking of having a Tuesday defect-close day, to see if there would be
any interest from folks in participating.  This would be a dedicated day to
focus on closing defects.  Once FF is done, the community will likely be
working on closing defects; until then the main focus is completing features.

- There are quite a high number of stories for which QA validation is complete
but docs are pending. For those, I haven't closed the tickets. I will review
and close the ones which are complete.

Thanks
/Sudha

-Original Message-
From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com] 
Sent: Thursday, June 06, 2013 3:11 PM
To: dev@cloudstack.apache.org
Subject: [ACS42] Release Status Update


Folks

The new feature freeze date is 6/28 and the RC is 8/19

Out of 104 proposed features /improvements, the status is

|-+-|
| New Features / Improvements | |
|-+-|
| Closed  |   6 |
| Resolved|  46 |
| In Progress |  19 |
| Reopened|   3 |
| Ready To Review |   2 |
| Open|  28 |
|-+-|
| Total   | 104 |
|-+-|

Thanks to folks who updated their tickets, those who have not please take a 
moment to update your feature/improvement tickets

As for bugs here is a summary for this week: 
|-+-+--+---+---|
| Bugs| Blocker | Critical | Major | Total |
|-+-+--+---+---|
| Incoming|   5 |   12 |16 |40 |
| Outgoing|  13 |   20 |38 |77 |
| Open Unassigned |   8 |   19 |96 |   153 |
| Open Total  |  21 |   55 |   207 |   344 |
|-+-+--+---+---|

Given that we have a large number of unassigned and open defects, If you are 
interested in helping out on defects please check the release dashboard widget 
on issues by components  http://s.apache.org/M5k 

One more thing that I want to call out is that we have 342 resolved / fixed 
bugs that are not closed yet. This number is big (61 blocker, 90 critical, 171 
majors) and we need to start closing these issues.

Thanks
Animesh


RE: [ACS42] Release Status Update

2013-06-06 Thread Animesh Chaturvedi


> -Original Message-
> From: Sudha Ponnaganti [mailto:sudha.ponnaga...@citrix.com]
> Sent: Thursday, June 06, 2013 4:30 PM
> To: dev@cloudstack.apache.org
> Subject: RE: [ACS42] Release Status Update
> 
> Animesh,
> 
> Couple of things :
> 
> - Thinking to have a Tuesday defect close day to see if there would be
> any interest to have folks participate.  This is dedicated day to focus
> on closure of defects.  Once FF is done, community might be working on
> closing the defects and till then main focus is to close the features.
[Animesh>] Yes we should start one from next Tuesday
> 
> - There are quite high number of stories for which QA validation is
> complete but docs are prnding. For those, I haven't closed the tickets.
> I will review and close the ones which are complete.
> 
> Thanks
> /Sudha
> 
> -Original Message-
> From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com]
> Sent: Thursday, June 06, 2013 3:11 PM
> To: dev@cloudstack.apache.org
> Subject: [ACS42] Release Status Update
> 
> 
> Folks
> 
> The new feature freeze date is 6/28 and the RC is 8/19
> 
> Out of 104 proposed features /improvements, the status is
> 
> |-+-|
> | New Features / Improvements | |
> |-+-|
> | Closed  |   6 |
> | Resolved|  46 |
> | In Progress |  19 |
> | Reopened|   3 |
> | Ready To Review |   2 |
> | Open|  28 |
> |-+-|
> | Total   | 104 |
> |-+-|
> 
> Thanks to folks who updated their tickets, those who have not please
> take a moment to update your feature/improvement tickets
> 
> As for bugs here is a summary for this week:
> |-+-+--+---+---|
> | Bugs| Blocker | Critical | Major | Total |
> |-+-+--+---+---|
> | Incoming|   5 |   12 |16 |40 |
> | Outgoing|  13 |   20 |38 |77 |
> | Open Unassigned |   8 |   19 |96 |   153 |
> | Open Total  |  21 |   55 |   207 |   344 |
> |-+-+--+---+---|
> 
> Given that we have a large number of unassigned and open defects, If you
> are interested in helping out on defects please check the release
> dashboard widget on issues by components  http://s.apache.org/M5k
> 
> One more thing that I want to call out is that we have 342 resolved /
> fixed bugs that are not closed yet. This number is big (61 blocker, 90
> critical, 171 majors) and we need to start closing these issues.
> 
> Thanks
> Animesh


RE: Storage VM to Management Server Connectivity Problem

2013-06-06 Thread Soheil Eizadi
My machine has multiple NICs but none of them has that IP address (see below).
That IP address comes from the deploydb stage as the default value. For some
reason, when I ran the Management Server it did not update these values; maybe
this is normal behavior for my use case, where I have many potential
management interfaces.

For now I fixed these values manually for my configuration and restarted the
Management Server.
-Soheil

Administrators-MacBook-Pro-7:~ seizadi$ ifconfig
lo0: flags=8049 mtu 16384
options=3
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 
inet 127.0.0.1 netmask 0xff00 
inet6 ::1 prefixlen 128 
gif0: flags=8010 mtu 1280
stf0: flags=0<> mtu 1280
en0: flags=8863 mtu 1500
options=2b
ether 10:9a:dd:6a:2b:03 
media: autoselect (none)
status: inactive
en1: flags=8863 mtu 1500
ether e4:ce:8f:12:ff:1c 
inet6 fe80::e6ce:8fff:fe12:ff1c%en1 prefixlen 64 scopeid 0x5 
inet 10.102.28.164 netmask 0xfe00 broadcast 10.102.29.255
media: autoselect
status: active
p2p0: flags=8843 mtu 2304
ether 06:ce:8f:12:ff:1c 
media: autoselect
status: inactive
fw0: flags=8863 mtu 4078
lladdr 70:cd:60:ff:fe:b8:15:bc 
media: autoselect 
status: inactive
vmnet1: flags=8863 mtu 1500
ether 00:50:56:c0:00:01 
inet 192.168.217.1 netmask 0xff00 broadcast 192.168.217.255
vmnet8: flags=8863 mtu 1500
ether 00:50:56:c0:00:08 
inet 172.16.197.1 netmask 0xff00 broadcast 172.16.197.255


From: Wei ZHOU [ustcweiz...@gmail.com]
Sent: Thursday, June 06, 2013 3:24 PM
To: dev@cloudstack.apache.org
Subject: Re: Storage VM to Management Server Connectivity Problem

Soheil,

I think, your machine have multiple nics. When you deployed cloudstack,
192.168.56.1 should be the ip of first nic, so cloudstack regarded this as
the management ip.
You need to change these values manually, restart management server,
destroy the systemvms (SSVM and CPVM).

-Wei


2013/6/6 Soheil Eizadi 

> The configuration database is definitely wrong, just not sure how it got
> that way.
> -Soheil
>
>
> mysql> select * from configuration where name= "host";
>
> +----------+----------+-------------------+------+--------------+-------------+
> | category | instance | component         | name | value        | description |
> +----------+----------+-------------------+------+--------------+-------------+
> | Advanced | DEFAULT  | management-server | host | 192.168.56.1 | NULL        |
> +----------+----------+-------------------+------+--------------+-------------+
> 1 row in set (0.10 sec)
>
> mysql> select * from configuration where name= "management.network.cidr";
>
> +----------+----------+-------------------+--------------------------+-----------------+-------------+
> | category | instance | component         | name                     | value           | description |
> +----------+----------+-------------------+--------------------------+-----------------+-------------+
> | Advanced | DEFAULT  | management-server | management.network.cidr | 192.168.56.0/24 | NULL        |
> +----------+----------+-------------------+--------------------------+-----------------+-------------+
> 1 row in set (0.00 sec)
>
> mysql> select * from configuration where name= "secstorage.allowed.internal.sites";
>
> +----------+----------+-------------------+-----------------------------------+----------------+-------------+
> | category | instance | component         | name                              | value          | description |
> +----------+----------+-------------------+-----------------------------------+----------------+-------------+
> | Advanced | DEFAULT  | management-server | secstorage.allowed.internal.sites | 192.168.56.0/8 | NULL        |
> +----------+----------+-------------------+-----------------------------------+----------------+-------------+
> 1 row in set (0.03 sec)
>
> 
> From: Wei ZHOU [ustcweiz...@gmail.com]
> Sent: Wednesday, June 05, 2013 11:16 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Storage VM to Management Server Connectivity Problem
>
> What is the value in cloud.configuration table with name='host'?
>
> -Wei
>
> 2013/6/6, Soheil Eizadi :
> > For now I patched this by editing the file /var/cache/cloud/cmdline and
> > fixing the IP Address and restarting the Cloud Service on Storage VM.
> Now it
> > is communicating with MS:
> >
> > INFO  [storage.secondary.SecondaryStorageListener]
> (AgentConnectTaskPool-1:)
> > Received a host startup notification
> > com.cloud.agent.api.StartupSecondaryStorageCommand
> > INFO  [network.security.SecurityGroupListener] (AgentConnectTaskPool-1:)
> > Received a host startup notification
> > INFO  [storage.download.DownloadMonitorImpl] (AgentConnectTaskPool-1:)
> > Template Sync found SystemVM Template (XenServer) already in the template
> > host 

Re: Object based Secondary storage.

2013-06-06 Thread John Burwell
Edison,

Riak CS and S3 seed their hashes differently -- causing the form to appear 
slightly different.  In particular, Riak CS uses URI-safe base64 encoding, which 
explains why its ETag values can contain "-"s and "_"s (the URI-safe substitutes 
for "+" and "/").  From a client 
perspective, the ETags are treated as opaque strings that are passed through to 
the server for processing and compared strictly for equality.  Therefore, the 
form of the hash will not cause the client to choke, and the Riak CS behavior 
you are seeing is S3 API compatible (see 
http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html for more 
details).  

Were you able to successfully download the file from Riak CS using s3cmd?

Thanks,
-John


On Jun 6, 2013, at 6:57 PM, Edison Su  wrote:

> The Etag created by both RIAK CS and Amazon S3 seems a little bit different, 
> in case of multi part upload.
> 
> Here is the result I tested on both RIAK CS and Amazon S3, with s3cmd.
> Test environment:
> S3cmd: version: version 1.5.0-alpha1
> Riak cs:
> Name: riak
> Arch: x86_64
> Version : 1.3.1
> Release : 1.el6
> Size: 40 M
> Repo: installed
> From repo   : basho-products
> 
> The command I used to put:
> s3cmd put some-file s3://some-path --multipart-chunk-size-mb=100 -v -d
> 
> The etag created for the file, when using Riak CS is WxEUkiQzTWm_2C8A92fLQg==
> 
> EBUG: Sending request method_string='POST', 
> uri='http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/test?uploadId=kfDkh7Q_QCWN7r0ZTqNq4Q==',
>  headers={'content-length': '309', 'Authorization': 'AWS 
> OYAZXCAFUC1DAFOXNJWI:xlkHI9tUfUV/N+Ekqpi7Jz/pbOI=', 'x-amz-date': 'Thu, 06 
> Jun 2013 22:54:28 +'}, body=(309 bytes)
> DEBUG: Response: {'status': 200, 'headers': {'date': 'Thu, 06 Jun 2013 
> 22:40:09 GMT', 'content-length': '326', 'content-type': 'application/xml', 
> 'server': 'Riak CS'}, 'reason': 'OK', 'data': ' encoding="UTF-8"?> xmlns="http://s3.amazonaws.com/doc/2006-03-01/";>http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/testimagestoretmpl/1/1/routing-1/testkfDkh7Q_QCWN7r0ZTqNq4Q=='}
> 
> While the etag created by Amazon S3 is: 
> "70e1860be687d43c039873adef4280f2-3"
> 
> DEBUG: Sending request method_string='POST', 
> uri='/fixes/icecake/systdfdfdfemvm.iso1?uploadId=vdkPSAtaA7g.fdfdfdfdf..iaKRNW_8QGz.bXdfdfdfdfdfkFXwUwLzRcG5obVvJFDvnhYUFdT6fYr1rig--',
>  
> DEBUG: Response: {'status': 200, 'headers': {, 'server': 'AmazonS3', 
> 'transfer-encoding': 'chunked', 'connection': 'Keep-Alive', 
> 'x-amz-request-id': '8DFF5D8025E58E99', 'cache-control': 'proxy-revalidate', 
> 'date': 'Thu, 06 Jun 2013 22:39:47 GMT', 'content-type': 'application/xml'}, 
> 'reason': 'OK', 'data': ' encoding="UTF-8"?>\n\n xmlns="http://s3.amazonaws.com/doc/2006-03-01/";>http://fdfdfdfdfdfdfKey>fixes/icecake/systemvm.iso1"70e1860be687d43c039873adef4280f2-3"'}
> 
> So the etag created on Amazon S3 has "-"(dash) in it, but there is only "_" 
> (underscore) on Riak cs. 
> 
> Do you know the reason? What should we need to do to make it compatible with 
> Amazon S3 SDK?
> 
>> -Original Message-
>> From: John Burwell [mailto:jburw...@basho.com]
>> Sent: Thursday, June 06, 2013 2:03 PM
>> To: dev@cloudstack.apache.org
>> Subject: Re: Object based Secondary storage.
>> 
>> Min,
>> 
>> Are you calculating the MD5 or letting the Amazon client do it?
>> 
>> Thanks,
>> -John
>> 
>> On Jun 6, 2013, at 4:54 PM, Min Chen  wrote:
>> 
>>> Thanks Tom. Indeed I have a S3 question that need some advise from
>>> some S3 experts. To support upload object > 5G, I have used
>>> TransferManager.upload to upload object to S3, upload went fine and
>>> object are successfully put to S3. However, later on when I am using
>>> "s3cmd get " to retrieve this object, I always got this 
>>> exception:
>>> 
>>> "MD5 signatures do not match: computed=Y, received="X"
>>> 
>>> It seems that Amazon S3 kept a different Md5 sum for the multi-part
>>> uploaded object. We have been using Riak CS for our S3 testing. If I
>>> changed to not using multi-part upload and directly invoking S3
>>> putObject, I will not run into this issue. Do you have such experience
>> before?
>>> 
>>> -min
>>> 
>>> On 6/6/13 1:56 AM, "Thomas O'Dowd"  wrote:
>>> 
 Thanks Min. I've printed out the material and am reading new threads.
 Can't comment much yet until I understand things a bit more.
 
 Meanwhile, feel free to hit me up with any S3 questions you have. I'm
 looking forward to playing with the object_store branch and testing
 it out.
 
 Tom.
 
 On Wed, 2013-06-05 at 16:14 +, Min Chen wrote:
> Welcome Tom. You can check out this FS
> 
> 
>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Backu
> p+Obj
> ec
> t+Store+Plugin+Framework for secondary storage architectural work
> t+Store+Plugin+done
> in
> object_store branch.You may also check out the following recent
> threads regarding 3 major technical questions raised by com

Re: Object based Secondary storage.

2013-06-06 Thread Thomas O'Dowd
Hi guys,

The ETAG is an interesting subject. AWS currently maintains 2 different
types of ETAGS for objects that I know of.

  a) PUT OBJECT - assigned ETAG will be calculated from the MD5 checksum
of the data content that you are uploading. When uploading you should
also always set the Content-MD5 header so that AWS (or other S3 Stores)
can verify your MD5 checksum against what it receives. The ETAG for such
objects will be the MD5 checksum of the content for AWS but doesn't have
to be I guess for other S3 stores. What's important is that AWS will
reject your upload if the MD5 checksum it calculates is not the same as
your Content-MD5 header.

  b) MULTIPART OBJECTS - A multipart object is an object which is
uploaded using multiple PUT requests, each of which uploads some part. Parts
can be uploaded out of order and in parallel so AWS cannot calculate the
MD5 checksum for the entire object without actually waiting until all
parts have been uploaded and finally reprocessing all the data. This
would be very heavy for various reasons so they don't do this. The ETAG
therefore can not be calculated from the MD5 checksum of the content
either. I don't know exactly how AWS calculates their ETAG for multipart
objects but the ETAG will always take the form of X-YYY where the
X part looks like a regular MD5 checksum of sorts and the Y part is the
number of parts that made up the upload. Therefore you can always tell
that an object was uploaded using a multipart upload by checking its
ETAG ends with -YYY. This however may be only true for AWS - other S3
stores may do it differently. You should just treat the etag as opaque
really.

Some more best practices about multipart uploads.
1. Always calculate the MD5 checksum of each part and send the
Content-MD5 header. This way AWS can verify the content of each part as
you upload it.
2. Always retain the ETAG for each part as returned by the response of
each part upload. You should have an etag for each part you uploaded.
3. Refrain from asking the server for a list of parts in order to create
the final Multipart Upload complete request. Always use your list of
parts and your list of ETAGS (from point 2). The exception is when you
are doing recovery after some client crash.

The main reason for this is that AWS and most other S3 stores are based
on eventual consistency and the server may not always (but mostly does)
give you a correct list of parts. The Multipart upload complete request
allows you to drop parts also so if you ask the server for a list of
parts and it misses one temporarily, you may end up with an object that
is missing a part also.
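
To make points 1-3 concrete, here is a minimal sketch against the
low-level multipart API of the AWS SDK for Java; the bucket, key and
file names are made up, and error handling (including aborting the
upload on failure) is left out:

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;
import com.amazonaws.services.s3.model.UploadPartResult;

public class MultipartSketch {
    public static void main(String[] args) throws Exception {
        AmazonS3Client s3 = new AmazonS3Client();
        String bucket = "imagestore";
        String key = "tmpl/example";
        File file = new File("template.vhd");
        long partSize = 100L * 1024 * 1024; // 100 MB parts, as in the s3cmd test

        InitiateMultipartUploadResult init = s3.initiateMultipartUpload(
                new InitiateMultipartUploadRequest(bucket, key));

        // Point 2: retain our own list of part ETags as each response arrives.
        List<PartETag> partETags = new ArrayList<PartETag>();
        long offset = 0;
        for (int partNumber = 1; offset < file.length(); partNumber++) {
            long size = Math.min(partSize, file.length() - offset);
            // Point 1: the SDK can also send a per-part Content-MD5 via
            // UploadPartRequest.withMD5Digest(base64Md5OfThisPart).
            UploadPartResult result = s3.uploadPart(new UploadPartRequest()
                    .withBucketName(bucket).withKey(key)
                    .withUploadId(init.getUploadId())
                    .withPartNumber(partNumber)
                    .withFile(file).withFileOffset(offset)
                    .withPartSize(size));
            partETags.add(result.getPartETag());
            offset += size;
        }

        // Point 3: complete with *our* part list, never one listed back
        // from the (eventually consistent) server.
        s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
                bucket, key, init.getUploadId(), partETags));
    }
}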

Btw, shameless plug but Cloudian has very good compatibility with AWS
and has a community edition version that is free for up to 100TB. I'll
test against it but you may also like to. You can run it on a single
node with not much fuss. Feel free to ask me about it offline.

Anyway hope that helps,

Tom.

On Thu, 2013-06-06 at 22:57 +, Edison Su wrote:
> The ETags created by RIAK CS and Amazon S3 seem a little bit different
> in the case of multipart upload.
> 
> Here is the result I tested on both RIAK CS and Amazon S3, with s3cmd.
> Test environment:
> S3cmd: version: version 1.5.0-alpha1
> Riak cs:
> Name: riak
> Arch: x86_64
> Version : 1.3.1
> Release : 1.el6
> Size: 40 M
> Repo: installed
> From repo   : basho-products
> 
> The command I used to put:
> s3cmd put some-file s3://some-path --multipart-chunk-size-mb=100 -v -d
> 
> The etag created for the file, when using Riak CS is WxEUkiQzTWm_2C8A92fLQg==
> 
> DEBUG: Sending request method_string='POST', 
> uri='http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/test?uploadId=kfDkh7Q_QCWN7r0ZTqNq4Q==',
>  headers={'content-length': '309', 'Authorization': 'AWS 
> OYAZXCAFUC1DAFOXNJWI:xlkHI9tUfUV/N+Ekqpi7Jz/pbOI=', 'x-amz-date': 'Thu, 06 
> Jun 2013 22:54:28 +'}, body=(309 bytes)
> DEBUG: Response: {'status': 200, 'headers': {'date': 'Thu, 06 Jun 2013 
> 22:40:09 GMT', 'content-length': '326', 'content-type': 'application/xml', 
> 'server': 'Riak CS'}, 'reason': 'OK', 'data': ' encoding="UTF-8"?> xmlns="http://s3.amazonaws.com/doc/2006-03-01/";>http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/testimagestoretmpl/1/1/routing-1/testkfDkh7Q_QCWN7r0ZTqNq4Q=='}
> 
> While the etag created by Amazon S3 is: 
> "70e1860be687d43c039873adef4280f2-3"
> 
> DEBUG: Sending request method_string='POST', 
> uri='/fixes/icecake/systdfdfdfemvm.iso1?uploadId=vdkPSAtaA7g.fdfdfdfdf..iaKRNW_8QGz.bXdfdfdfdfdfkFXwUwLzRcG5obVvJFDvnhYUFdT6fYr1rig--',
>  
> DEBUG: Response: {'status': 200, 'headers': {, 'server': 'AmazonS3', 
> 'transfer-encoding': 'chunked', 'connection': 'Keep-Alive', 
> 'x-amz-request-id': '8DFF5D8025E58E99', 'cache-control': 'proxy-revalidate', 
> 'date': 'Thu, 06 Jun 2013 22:39:47 GMT', 'content-type': 'application/xml'}, 
> 'reason': 'OK', 'data': ' encoding="UTF-8"?>\n\n xmlns="http://s3.amazonaws.com/doc/2006-03-01/"

Re: Review Request: CLOUDSTACK-2758: touch file for tomcat6 package change CVE-2013-1976

2013-06-06 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11620/#review21553
---


Commit 2dc25df62364d4adfe264e6158dcb4b471bc4eb1 in branch refs/heads/4.1 from 
Hiroaki KAWAI
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=2dc25df ]

CLOUDSTACK-2758: touch file for tomcat6 package change CVE-2013-1976

catalina.out must be prepared by package installation.
This is the same fix as in the tomcat6 package.

Signed-off-by: Hiroaki KAWAI 


- ASF Subversion and Git Services


On June 4, 2013, 6:11 a.m., Hiroaki Kawai wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11620/
> ---
> 
> (Updated June 4, 2013, 6:11 a.m.)
> 
> 
> Review request for cloudstack, Prasanna Santhanam and Hugo Trippaers.
> 
> 
> Description
> ---
> 
> This patch proposes a better fix, the same as the one in the tomcat6 spec file.
> 
> 
> This addresses bug CLOUDSTACK-2758.
> 
> 
> Diffs
> -
> 
>   client/tomcatconf/classpath.conf.in f2aeeba 
>   packaging/centos63/cloud.spec 83ccae8 
> 
> Diff: https://reviews.apache.org/r/11620/diff/
> 
> 
> Testing
> ---
> 
> I've tested on fresh centos 6.4 installation
> * install centos 6.4
> * yum upgrade -y
> * yum install mysql-server
> * cloudstack-setup-database
> * cloudstack-setup-management
> 
> The management server started up and I could open the webUI.
> 
> 
> Thanks,
> 
> Hiroaki Kawai
> 
>



Re: ACS 4.1.1 release - bugfixes to backport

2013-06-06 Thread Hiroaki KAWAI

CLOUDSTACK-2758, I already pushed a patch into 4.1 branch. :-)


(2013/06/06 3:22), Musayev, Ilya wrote:

Hi All,

Sorry I was a bit disconnected from the community - as my $dayjob kept me very 
busy.

I would like to start off this thread to keep track of bugfixes we need to 
backport from 4.1 to the 4.1.1 release.

Please use this thread to reference bug fixes we need to add into 4.1.1; I 
will be creating a new 4.1.1 branch/tag shortly.

Regards
ilya





Re: Object based Secondary storage.

2013-06-06 Thread John Burwell
Thomas,

When using TransferManager, as we are in CloudStack, the MD5 hashes are 
calculated by the Amazon AWS Java client.  It also determines how best to 
utilize multi-part upload, if at all.  I just want to ensure that folks 
understand the information below applies when interacting with the HTTP API, 
but that the Amazon AWS Java client handles most of these details for the 
developer.
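
For reference, here is a minimal sketch of that high-level path (bucket,
key and file names are made up; error handling omitted):

import java.io.File;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class TransferManagerSketch {
    public static void main(String[] args) throws Exception {
        // TransferManager decides whether to use multipart upload based on
        // the file size and its configured thresholds, and it computes the
        // MD5 checksums itself.
        TransferManager tm = new TransferManager(new AmazonS3Client());
        Upload upload = tm.upload("imagestore", "tmpl/example",
                new File("template.vhd"));
        upload.waitForCompletion(); // blocks until all parts are done
        tm.shutdownNow();
    }
}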

Thanks,
-John

On Jun 6, 2013, at 9:10 PM, Thomas O'Dowd  wrote:

> Hi guys,
> 
> The ETAG is an interesting subject. AWS currently maintains 2 different
> types of ETAGS for objects that I know of.
> 
>  a) PUT OBJECT - assigned ETAG will be calculated from the MD5 checksum
> of the data content that you are uploading. When uploading you should
> also always set the Content-MD5 header so that AWS (or other S3 Stores)
> can verify your MD5 checksum against what it receives. The ETAG for such
> objects will be the MD5 checksum of the content for AWS but doesn't have
> to be I guess for other S3 stores. What's important is that AWS will
> reject your upload if the MD5 checksum it calculates is not the same as
> your Content-MD5 header.
> 
>  b) MULTIPART OBJECTS - A multipart object is an object which is
> uploaded using multiple PUT requests, each of which uploads some part. Parts
> can be uploaded out of order and in parallel, so AWS cannot calculate the
> MD5 checksum for the entire object without actually waiting until all
> parts have been uploaded and finally reprocessing all the data. This
> would be very heavy for various reasons so they don't do this. The ETAG
> therefore cannot be calculated from the MD5 checksum of the content
> either. I don't know exactly how AWS calculates their ETAG for multipart
> objects, but the ETAG will always take the form of XXX-YYY where the
> X part looks like a regular MD5 checksum of sorts and the Y part is the
> number of parts that made up the upload. Therefore you can always tell
> that an object was uploaded using a multipart upload by checking that its
> ETAG ends with -YYY. This however may be only true for AWS - other S3
> stores may do it differently. You should just treat the etag as opaque
> really.
> 
> Some more best practices about multipart uploads.
> 1. Always calculate the MD5 checksum of each part and send the
> Content-MD5 header. This way AWS can verify the content of each part as
> you upload it.
> 2. Always retain the ETAG for each part as returned by the response of
> each part upload. You should have an etag for each part you uploaded.
> 3. Refrain from asking the server for a list of parts in order to create
> the final Multipart Upload complete request. Always use your list of
> parts and your list of ETAGS (from point 2). The exception is when you
> are doing recovery after some client crash.
> 
> The main reason for this is that AWS and most other S3 stores are based
> on eventual consistency and the server may not always (but mostly does)
> give you a correct list of parts. The Multipart upload complete request
> allows you to drop parts also so if you ask the server for a list of
> parts and it misses one temporarily, you may end up with an object that
> is missing a part also.
> 
> Btw, shameless plug but Cloudian has very good compatibility with AWS
> and has a community edition version that is free for up to 100TB. I'll
> test against it but you may also like to. You can run it on a single
> node with not much fuss. Feel free to ask me about it offline.
> 
> Anyway hope that helps,
> 
> Tom.
> 
> On Thu, 2013-06-06 at 22:57 +, Edison Su wrote:
>> The ETags created by RIAK CS and Amazon S3 seem a little bit different
>> in the case of multipart upload.
>> 
>> Here is the result I tested on both RIAK CS and Amazon S3, with s3cmd.
>> Test environment:
>> S3cmd: version: version 1.5.0-alpha1
>> Riak cs:
>> Name: riak
>> Arch: x86_64
>> Version : 1.3.1
>> Release : 1.el6
>> Size: 40 M
>> Repo: installed
>> From repo   : basho-products
>> 
>> The command I used to put:
>> s3cmd put some-file s3://some-path --multipart-chunk-size-mb=100 -v -d
>> 
>> The etag created for the file, when using Riak CS is WxEUkiQzTWm_2C8A92fLQg==
>> 
>> DEBUG: Sending request method_string='POST', 
>> uri='http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/test?uploadId=kfDkh7Q_QCWN7r0ZTqNq4Q==',
>>  headers={'content-length': '309', 'Authorization': 'AWS 
>> OYAZXCAFUC1DAFOXNJWI:xlkHI9tUfUV/N+Ekqpi7Jz/pbOI=', 'x-amz-date': 'Thu, 06 
>> Jun 2013 22:54:28 +'}, body=(309 bytes)
>> DEBUG: Response: {'status': 200, 'headers': {'date': 'Thu, 06 Jun 2013 
>> 22:40:09 GMT', 'content-length': '326', 'content-type': 'application/xml', 
>> 'server': 'Riak CS'}, 'reason': 'OK', 'data': '> encoding="UTF-8"?>> xmlns="http://s3.amazonaws.com/doc/2006-03-01/";>http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/testimagestoretmpl/1/1/routing-1/testkfDkh7Q_QCWN7r0ZTqNq4Q=='}
>> 
>> While the etag created by Amazon S3 is: 
>> "70e1860be687d4

Hello (Upgrade to 4.1)

2013-06-06 Thread Maurice Lawler

Greetings,

I am utilizing KVM / CentOS 6.3 / CS 4.0.2. Upon issuing yum update, I 
am getting a slew of updates for the OS itself. Previously I was 
informed that CentOS 6.4 was NOT supported, so I have backed off on 
updating my OS due to this fact.


I have made a pastebin of what my system is attempting to update: 
http://pastebin.com/3NtHrUVd


Is it okay if I proceed with this upgrade? I also notice that the cloud 
upgrade (http://pastebin.com/b2Th18SB) seems very easy/small.


Please tell me if I can proceed with both, without issues on my VMs. 
It seems easy, but I just like to double-check.


Appreciate your time!

- Maurice



Re: Hello (Upgrade to 4.1)

2013-06-06 Thread Hiroaki KAWAI

Hi,

(2013/06/07 11:40), Maurice Lawler wrote:

Greetings,

I am utilizing KVM / CentOS 6.3 / CS 4.0.2. Upon issuing yum update, I
am getting a slew of updates for the OS itself. Previously I was
informed that CentOS 6.4 was NOT supported, so I have backed off on
updating my OS due to this fact.


Umm... IMHO, CentOS 6.4 must be supported. In fact, the CentOS team is
maintaining CentOS "6", not CentOS "6.3". You should upgrade for
security reasons. If you have any trouble with the updated CentOS, please
do raise a ticket in our JIRA.
https://issues.apache.org/jira/browse/CLOUDSTACK



I have made a pastebin of what my system is attempting to update:
http://pastebin.com/3NtHrUVd

Is it okay if I proceed with this upgrade? I also notice that the cloud
upgrade (http://pastebin.com/b2Th18SB) seems very easy/small.

Please tell me if I can proceed with both, without issues on my VMs. It
seems easy, but I just like to double-check.


The process should be successful. If you got some trouble, please
let us know.



Appreciate your time!

- Maurice


Good luck!





Re: Hello (Upgrade to 4.1)

2013-06-06 Thread Maurice Lawler

Thank you for the update...

However, upon installing CloudStack MONTHS ago, it was said NOT to 
utilize the qemu-img / qemu-kvm that were included in the CentOS repos.


qemu-img   x86_64   2:0.12.1.2-2.355.0.1.el6.centos.5   updates   471 k
qemu-kvm   x86_64   2:0.12.1.2-2.355.0.1.el6.centos.5   updates   1.3 M


Those packages want to be updated from the CentOS repos; will that cause my 
VMs to not come back online?


Re: Hello (Upgrade to 4.1)

2013-06-06 Thread Maurice Lawler
Thank you. I suspect, since it will be updating the qemu-* packages, that I 
should perhaps stop all VMs prior to upgrading, correct?




On 2013-06-06 22:41, David Nalley wrote:

That advice is now deprecated. I believe RHT began shipping the
patches in 6.2 or 6.3 - so you should be fine with 4.0.x or 4.1 and
those versions of qemu-*

--David

On Thu, Jun 6, 2013 at 11:15 PM, Maurice Lawler  
wrote:

Thank you for the update...

However, upon installing CloudStack MONTHS ago, it was said NOT to utilize
qemu-img / qemu-kvm that was included in the CentOS repos.

qemu-img   x86_64   2:0.12.1.2-2.355.0.1.el6.centos.5   updates   471 k
qemu-kvm   x86_64   2:0.12.1.2-2.355.0.1.el6.centos.5   updates   1.3 M

Those packages want to be updated by the CentOS repos; will that cause
my VMs to not come back online?


Re: Hello (Upgrade to 4.1)

2013-06-06 Thread David Nalley
That advice is now deprecated. I believe RHT began shipping the
patches in 6.2 or 6.3 - so you should be fine with 4.0.x or 4.1 and
those versions of qemu-*

--David

On Thu, Jun 6, 2013 at 11:15 PM, Maurice Lawler  wrote:
> Thank you for the update...
>
> However, upon installing CloudStack MONTHS ago, it was said NOT to utilize
> qemu-img / qemu-kvm that was included in the CentOS repos.
>
> qemu-img   x86_64   2:0.12.1.2-2.355.0.1.el6.centos.5   updates   471 k
> qemu-kvm   x86_64   2:0.12.1.2-2.355.0.1.el6.centos.5   updates   1.3 M
>
> Those packages want to be updated by the CentOS repos; will that cause my VMs
> to not come back online?


Re: Object based Secondary storage.

2013-06-06 Thread Min Chen
John,
  We are not able to successfully download file that was uploaded to Riak CS 
with TransferManager using S3cmd. Same error as we encountered using amazon s3 
java client due to the incompatible ETAG format ( - and _ difference).

Thanks
-min



On Jun 6, 2013, at 5:40 PM, "John Burwell"  wrote:

> Edison,
> 
> Riak CS and S3 seed their hashes differently -- causing the form to appear 
> slightly different.  In particular, Riak CS uses URI-safe base64 encoding 
> which explains why the ETag values contain "-"s instead of "_"s.  From a 
> client perspective, the ETags are treated as opaque strings that are passed 
> through to the server for processing and compared strictly for equality.  
> Therefore, the form of the hash will not cause the client to choke, and the 
> Riak CS behavior you are seeing is S3 API compatible (see 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html for more 
> details).  
> 
> Were you able to successfully download the file from Riak CS using s3cmd?
> 
> Thanks,
> -John
> 
> 
> On Jun 6, 2013, at 6:57 PM, Edison Su  wrote:
> 
>> The ETags created by RIAK CS and Amazon S3 seem a little bit different
>> in the case of multipart upload.
>> 
>> Here is the result I tested on both RIAK CS and Amazon S3, with s3cmd.
>> Test environment:
>> S3cmd: version: version 1.5.0-alpha1
>> Riak cs:
>> Name: riak
>> Arch: x86_64
>> Version : 1.3.1
>> Release : 1.el6
>> Size: 40 M
>> Repo: installed
>> From repo   : basho-products
>> 
>> The command I used to put:
>> s3cmd put some-file s3://some-path --multipart-chunk-size-mb=100 -v -d
>> 
>> The etag created for the file, when using Riak CS is WxEUkiQzTWm_2C8A92fLQg==
>> 
>> DEBUG: Sending request method_string='POST', 
>> uri='http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/test?uploadId=kfDkh7Q_QCWN7r0ZTqNq4Q==',
>>  headers={'content-length': '309', 'Authorization': 'AWS 
>> OYAZXCAFUC1DAFOXNJWI:xlkHI9tUfUV/N+Ekqpi7Jz/pbOI=', 'x-amz-date': 'Thu, 06 
>> Jun 2013 22:54:28 +'}, body=(309 bytes)
>> DEBUG: Response: {'status': 200, 'headers': {'date': 'Thu, 06 Jun 2013 
>> 22:40:09 GMT', 'content-length': '326', 'content-type': 'application/xml', 
>> 'server': 'Riak CS'}, 'reason': 'OK', 'data': '> encoding="UTF-8"?>> xmlns="http://s3.amazonaws.com/doc/2006-03-01/";>http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/testimagestoretmpl/1/1/routing-1/testkfDkh7Q_QCWN7r0ZTqNq4Q=='}
>> 
>> While the etag created by Amazon S3 is: 
>> "70e1860be687d43c039873adef4280f2-3"
>> 
>> DEBUG: Sending request method_string='POST', 
>> uri='/fixes/icecake/systdfdfdfemvm.iso1?uploadId=vdkPSAtaA7g.fdfdfdfdf..iaKRNW_8QGz.bXdfdfdfdfdfkFXwUwLzRcG5obVvJFDvnhYUFdT6fYr1rig--',
>>  
>> DEBUG: Response: {'status': 200, 'headers': {, 'server': 'AmazonS3', 
>> 'transfer-encoding': 'chunked', 'connection': 'Keep-Alive', 
>> 'x-amz-request-id': '8DFF5D8025E58E99', 'cache-control': 'proxy-revalidate', 
>> 'date': 'Thu, 06 Jun 2013 22:39:47 GMT', 'content-type': 'application/xml'}, 
>> 'reason': 'OK', 'data': '> encoding="UTF-8"?>\n\n> xmlns="http://s3.amazonaws.com/doc/2006-03-01/";>http://fdfdfdfdfdfdfKey>fixes/icecake/systemvm.iso1"70e1860be687d43c039873adef4280f2-3"'}
>> 
>> So the etag created on Amazon S3 has "-"(dash) in it, but there is only "_" 
>> (underscore) on Riak cs. 
>> 
>> Do you know the reason? What should we need to do to make it compatible with 
>> Amazon S3 SDK?
>> 
>>> -Original Message-
>>> From: John Burwell [mailto:jburw...@basho.com]
>>> Sent: Thursday, June 06, 2013 2:03 PM
>>> To: dev@cloudstack.apache.org
>>> Subject: Re: Object based Secondary storage.
>>> 
>>> Min,
>>> 
>>> Are you calculating the MD5 or letting the Amazon client do it?
>>> 
>>> Thanks,
>>> -John
>>> 
>>> On Jun 6, 2013, at 4:54 PM, Min Chen  wrote:
>>> 
 Thanks Tom. Indeed I have an S3 question that needs some advice from
 some S3 experts. To support uploading objects > 5G, I have used
 TransferManager.upload to upload objects to S3; the upload went fine and
 objects were successfully put to S3. However, later on when I am using
 "s3cmd get " to retrieve such an object, I always get this 
 exception:
 
 "MD5 signatures do not match: computed=Y, received=X"
 
 It seems that Amazon S3 kept a different MD5 sum for the multi-part
 uploaded object. We have been using Riak CS for our S3 testing. If I
 change to not using multi-part upload and directly invoke S3
 putObject, I do not run into this issue. Do you have such experience
>>> before?
 
 -min
 
 On 6/6/13 1:56 AM, "Thomas O'Dowd"  wrote:
 
> Thanks Min. I've printed out the material and am reading new threads.
> Can't comment much yet until I understand things a bit more.
> 
> Meanwhile, feel free to hit me up with any S3 questions you have. I'm
> looking forward to playing with the object_store branch and testing
> it out.
> 
> Tom.
> 
> On Wed,

Re: KVM development, libvirt

2013-06-06 Thread Marcus Sorensen
Ok. Do we need to call a vote or something to change our rules to
solidify that we should require at least two votes from each supported
platform, whether they be automated tests or contributor tests?

On Thu, Jun 6, 2013 at 2:20 AM, Prasanna Santhanam
 wrote:
> On Thu, Jun 06, 2013 at 09:04:55AM +0200, Ove Ewerlid wrote:
>> On 06/06/2013 08:37 AM, Prasanna Santhanam wrote:
>> >On Thu, Jun 06, 2013 at 08:29:26AM +0200, Ove Ewerlid wrote:
>> >>On 06/06/2013 07:10 AM, Prasanna Santhanam wrote:
>> >>>On Wed, Jun 05, 2013 at 05:39:16PM +, Edison Su wrote:
>> I think we miss  a VOTE from Jenkins, the vote from Jenkins should
>> be taken as highest priority in each release. This kind of
>> regression should be easily identified in Jenkins(If we have a
>> regression test for each environment).
>> 
>> >>>
>> >>>+1 - need more people focussed on cloudstack-infra in general.
>> >>
>> >>The 41 regression with local storage, that required 2 or more hosts
>> >>to duplicate, would be one example of an issue that would be
>> >>detected by automatic testing provided the testing is done on a
>> >>sufficiently big test fixture.
>> >>
>> >>Q: How many hosts are used in daily testing now?
>> >
>> >3 (2 in a cluster, 1 in a second pod) and 1 in a second zone -
>> >totalling 4 hosts in the test rig.
>> >
>> >But I don't enable local storage on it. It's occupied testing XCP,
>> >Xen and KVM with shared storage. The more configurations the longer
>> >the test run time.
>> >
>>
>> Not sure if you use multiple run queues, one queue with a more
>> extensive job that runs once per day to capture issues in a larger
>> test fixture that is not suitable to build for every single commit.
>> This test needs to complete within 24 hours.
>>
>
> We don't run tests for every commit. The tests run every four-five
> hours for the three hypervisors. So each test run has collated a group
> of commits made during the time window. Each test run splits into
> multiple sub-jobs that are running in parallel.
>
> The extensive jobs that test for regression run on Wednesday and
> Saturday. These can take ~6hours to finish.
>
> ASCII representation of how the jobs split up:
>
> test-matrix
> |___ test-packaging (new centos VM with latest packaged CloudStack)
> |___ test-environment-refresh (kickstarts fresh hypervisors)
> |_test-setup-advanced-zone
>   |test-smoke-matrix (Weekdays, except Wed)
>   |___ test#1
>   |___ test#2
>   |___ test#3
>   |___
>   |___ test#n
>   |test-regression-matrix (Wed, Sat)
>
>
> HTH
>
> --
> Prasanna.,


quick systemvm question

2013-06-06 Thread Marcus Sorensen
How does cloudstack know which template is the latest system vm? Does
it match on name or something?  From what I have gathered in the
upgrade docs, you simply register a new template, like any other, and
run a convenience script that restarts your system vms. But I don't
gather from this how cloudstack knows it's a system template (and
further THE system template).


Re: Object based Secondary storage.

2013-06-06 Thread Thomas O'Dowd
Min,

This looks like an s3cmd problem. I just downloaded the latest s3cmd to
check the source code.

In S3/FileLists.py:

compare_md5 = 'md5' in cfg.sync_checks
# Multipart-uploaded files don't have a valid md5 sum - it ends with "...-nn"
if compare_md5:
    if (src_remote == True and src_list[file]['md5'].find("-") >= 0) \
       or (dst_remote == True and dst_list[file]['md5'].find("-") >= 0):

Basically, s3cmd is trying to verify that the checksum of the data that
it downloads is the same as the etag, unless the etag ends with "-YYY".
This is an AWS convention (as I mentioned in an earlier mail) so it
works, but RiakCS has a different ETAG format which doesn't match -YYY,
so s3cmd assumes the other type of ETAG, which is the same as the MD5
checksum. For RiakCS, however, this is not the case. This is why you
get the checksum error.

Chances are that Riak is doing the right thing here and the data file
will be the same as what you uploaded. You could change the s3cmd code
to be more lenient for Riak. The Basho guys might either like to change
their format or talk to the different tool vendors about changing the
tools to work with Riak. For Cloudian, we chose to keep it similar to
AWS so we could avoid stuff like this.
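
If one wanted to make a client lenient in this way, a hypothetical check
(illustration only; this is not actual s3cmd or CloudStack code) could
verify the downloaded bytes against the ETag only when the ETag actually
looks like a plain MD5 hex digest:

public final class ETagUtil {
    // AWS multipart ETags ("...-3") and Riak CS ETags ("...==") both
    // fail this test and are then treated as opaque strings, which is
    // the safe behavior for any non-MD5 ETag.
    public static boolean isVerifiableMd5(String etag) {
        String e = etag.replace("\"", ""); // ETags are often returned quoted
        return e.matches("[0-9a-fA-F]{32}");
    }
}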

Tom.

On Fri, 2013-06-07 at 04:02 +, Min Chen wrote:
> John,
>   We are not able to successfully download a file that was uploaded to Riak CS 
> with TransferManager using s3cmd. Same error as we encountered using the 
> Amazon S3 Java client, due to the incompatible ETAG format (the - and _ 
> difference).
> 
> Thanks
> -min
> 
> 
> 
> On Jun 6, 2013, at 5:40 PM, "John Burwell"  wrote:
> 
> > Edison,
> > 
> > Riak CS and S3 seed their hashes differently -- causing the form to appear 
> > slightly different.  In particular, Riak CS uses URI-safe base64 encoding 
> > which explains why the ETag values contain "-"s instead of "_"s.  From a 
> > client perspective, the ETags are treated as opaque strings that are passed 
> > through to the server for processing and compared strictly for equality.  
> > Therefore, the form of the hash will not cause the client to choke, and the 
> > Riak CS behavior you are seeing is S3 API compatible (see 
> > http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html for more 
> > details).  
> > 
> > Were you able to successfully download the file from Riak CS using s3cmd?
> > 
> > Thanks,
> > -John
> > 
> > 
> > On Jun 6, 2013, at 6:57 PM, Edison Su  wrote:
> > 
> >> The ETags created by RIAK CS and Amazon S3 seem a little bit 
> >> different in the case of multipart upload.
> >> 
> >> Here is the result I tested on both RIAK CS and Amazon S3, with s3cmd.
> >> Test environment:
> >> S3cmd: version: version 1.5.0-alpha1
> >> Riak cs:
> >> Name: riak
> >> Arch: x86_64
> >> Version : 1.3.1
> >> Release : 1.el6
> >> Size: 40 M
> >> Repo: installed
> >> From repo   : basho-products
> >> 
> >> The command I used to put:
> >> s3cmd put some-file s3://some-path --multipart-chunk-size-mb=100 -v -d
> >> 
> >> The etag created for the file, when using Riak CS is 
> >> WxEUkiQzTWm_2C8A92fLQg==
> >> 
> >> DEBUG: Sending request method_string='POST', 
> >> uri='http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/test?uploadId=kfDkh7Q_QCWN7r0ZTqNq4Q==',
> >>  headers={'content-length': '309', 'Authorization': 'AWS 
> >> OYAZXCAFUC1DAFOXNJWI:xlkHI9tUfUV/N+Ekqpi7Jz/pbOI=', 'x-amz-date': 'Thu, 06 
> >> Jun 2013 22:54:28 +'}, body=(309 bytes)
> >> DEBUG: Response: {'status': 200, 'headers': {'date': 'Thu, 06 Jun 2013 
> >> 22:40:09 GMT', 'content-length': '326', 'content-type': 'application/xml', 
> >> 'server': 'Riak CS'}, 'reason': 'OK', 'data': ' >> encoding="UTF-8"?> >> xmlns="http://s3.amazonaws.com/doc/2006-03-01/";>http://imagestore.s3.amazonaws.com/tmpl/1/1/routing-1/testimagestoretmpl/1/1/routing-1/testkfDkh7Q_QCWN7r0ZTqNq4Q=='}
> >> 
> >> While the etag created by Amazon S3 is: 
> >> "70e1860be687d43c039873adef4280f2-3"
> >> 
> >> DEBUG: Sending request method_string='POST', 
> >> uri='/fixes/icecake/systdfdfdfemvm.iso1?uploadId=vdkPSAtaA7g.fdfdfdfdf..iaKRNW_8QGz.bXdfdfdfdfdfkFXwUwLzRcG5obVvJFDvnhYUFdT6fYr1rig--',
> >>  
> >> DEBUG: Response: {'status': 200, 'headers': {, 'server': 'AmazonS3', 
> >> 'transfer-encoding': 'chunked', 'connection': 'Keep-Alive', 
> >> 'x-amz-request-id': '8DFF5D8025E58E99', 'cache-control': 
> >> 'proxy-revalidate', 'date': 'Thu, 06 Jun 2013 22:39:47 GMT', 
> >> 'content-type': 'application/xml'}, 'reason': 'OK', 'data': ' >> version="1.0" encoding="UTF-8"?>\n\n >> xmlns="http://s3.amazonaws.com/doc/2006-03-01/";>http://fdfdfdfdfdfdfKey>fixes/icecake/systemvm.iso1"70e1860be687d43c039873adef4280f2-3"'}
> >> 
> >> So the etag created on Amazon S3 has "-"(dash) in it, but there is only 
> >> "_" (underscore) on Riak cs. 
> >> 
> >> Do you know the reason? What should we need to do to make it compatible 
> >> with Amazon S3 SDK?
> >> 
> >>> -Original 

Re: git commit: updated refs/heads/master to 9fe7846

2013-06-06 Thread Wei ZHOU
In my point of view, we ask users to register the new template in the
upgrade instructions in the release notes. If they do not register it,
it is their fault. If they do but the upgrade fails, it is our fault.

I admit that it is a good way to change each upgrade process and remove
old templates when we use a new template. It is not a large amount of work.
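
To make the trade-off concrete, here is a minimal sketch of the two
lookups being debated, modelled on the prepareStatement calls in
Upgrade410to420.java (simplified; the class and helper names here are
mine, not CloudStack's):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class TemplateLookupSketch {
    // Exact match: returns null on a 4.1 -> 4.3 jump, because the user
    // registered systemvm-kvm-4.3 and systemvm-kvm-4.2 was never added,
    // so Upgrade41to42 fails.
    static Long findExact(Connection conn) throws SQLException {
        PreparedStatement pstmt = conn.prepareStatement(
            "select id from `cloud`.`vm_template` where name = 'systemvm-kvm-4.2'"
            + " and removed is null order by id desc limit 1");
        ResultSet rs = pstmt.executeQuery();
        return rs.next() ? rs.getLong(1) : null;
    }

    // Wildcard match: always finds *some* systemvm template, so the 4.3
    // upgrade would "pass" even when the required 4.3 template is missing.
    static Long findWildcard(Connection conn) throws SQLException {
        PreparedStatement pstmt = conn.prepareStatement(
            "select id from `cloud`.`vm_template` where name like 'systemvm-kvm-%'"
            + " and removed is null order by id desc limit 1");
        ResultSet rs = pstmt.executeQuery();
        return rs.next() ? rs.getLong(1) : null;
    }
}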

-Wei

2013/6/6, Kishan Kavala :
> In the mentioned example, when the new template for 4.3 is introduced, we
> should remove the template upgrade code in Upgrade41to42. This will make the
> upgrade succeed even when systemvm-kvm-4.2 is not in the database.
> On the other hand, if we allow 'systemvm-kvm-%', the upgrade to 4.3 will
> succeed even though the required systemvm-kvm-4.3 is not in the database.
>
> So, every time a new system vm template is added, the template upgrade from
> the previous version should be removed.
>
> 
> From: Wei ZHOU [ustcweiz...@gmail.com]
> Sent: Wednesday, June 05, 2013 3:56 PM
> To: dev@cloudstack.apache.org
> Subject: Re: git commit: updated refs/heads/master to 9fe7846
>
> Kishan,
>
> I know.
>
> If we upgrade from 4.1 to 4.3 ( assume the systemvm template is
> systemvm-kvm-4.3). We need to add systemvm-kvm-4.3 instead of
> systemvm-kvm-4.2. Maybe systemvm-kvm-4.2 is not in database.
> The upgrade includes Upgrade41to42 and Upgrade42to43. It will fail in the
> Upgrade41to42.
>
> -Wei
>
>
> 2013/6/5 Kishan Kavala 
>
>> Wei,
>>  If we use other templates, system Vms may not work. Only 4.2 templates
>> should be used when upgrading to 4.2.
>>
>> > -Original Message-
>> > From: Wei ZHOU [mailto:ustcweiz...@gmail.com]
>> > Sent: Wednesday, 5 June 2013 3:26 PM
>> > To: dev@cloudstack.apache.org
>> > Subject: Re: git commit: updated refs/heads/master to 9fe7846
>> >
>> > Kishan,
>> >
>> > What do you think about change some codes to "name like 'systemvm-
>> > xenserver-%' " ?
>> > If we use other templates, the upgrade maybe fail.
>> >
>> > -Wei
>> >
>> >
>> > 2013/6/5 
>> >
>> > > Updated Branches:
>> > >   refs/heads/master 91b15711b -> 9fe7846d7
>> > >
>> > >
>> > > CLOUDSTACK-2728: 41-42 DB upgrade: add step to upgrade system
>> > > templates
>> > >
>> > >
>> > > Project: http://git-wip-us.apache.org/repos/asf/cloudstack/repo
>> > > Commit:
>> > > http://git-wip-us.apache.org/repos/asf/cloudstack/commit/9fe7846d
>> > > Tree: http://git-wip-us.apache.org/repos/asf/cloudstack/tree/9fe7846d
>> > > Diff: http://git-wip-us.apache.org/repos/asf/cloudstack/diff/9fe7846d
>> > >
>> > > Branch: refs/heads/master
>> > > Commit: 9fe7846d72e401720e1dcbce52d021e2646429f1
>> > > Parents: 91b1571
>> > > Author: Harikrishna Patnala 
>> > > Authored: Mon Jun 3 12:33:58 2013 +0530
>> > > Committer: Kishan Kavala 
>> > > Committed: Wed Jun 5 15:14:04 2013 +0530
>> > >
>> > > --
>> > >  .../src/com/cloud/upgrade/dao/Upgrade410to420.java |  209
>> > >   -
>> > >  1 files changed, 204 insertions(+), 5 deletions(-)
>> > > --
>> > >
>> > >
>> > >
>> > > http://git-wip-us.apache.org/repos/asf/cloudstack/blob/9fe7846d/engine
>> > > /schema/src/com/cloud/upgrade/dao/Upgrade410to420.java
>> > > --
>> > > diff --git
>> > > a/engine/schema/src/com/cloud/upgrade/dao/Upgrade410to420.java
>> > > b/engine/schema/src/com/cloud/upgrade/dao/Upgrade410to420.java
>> > > index 1584973..955ea56 100644
>> > > --- a/engine/schema/src/com/cloud/upgrade/dao/Upgrade410to420.java
>> > > b/engine/schema/src/com/cloud/upgrade/dao/Upgrade410to420.java
>> > > @@ -112,16 +112,215 @@ public class Upgrade410to420 implements
>> > DbUpgrade {
>> > >  }
>> > >
>> > >  private void updateSystemVmTemplates(Connection conn) {
>> > > -   PreparedStatement sql = null;
>> > >
>> > >  PreparedStatement pstmt = null;
>> > >  ResultSet rs = null;
>> > >  boolean xenserver = false;
>> > >  boolean kvm = false;
>> > >  boolean VMware = false;
>> > >  boolean Hyperv = false;
>> > >  boolean LXC = false;
>> > >  s_logger.debug("Updating System Vm template IDs");
>> > >  try{
>> > >  //Get all hypervisors in use
>> > >  try {
>> > >  pstmt = conn.prepareStatement("select
>> > > distinct(hypervisor_type) from `cloud`.`cluster` where removed is
>> > > null");
>> > >  rs = pstmt.executeQuery();
>> > >  while(rs.next()){
>> > >  if("XenServer".equals(rs.getString(1))){
>> > >  xenserver = true;
>> > >  } else if("KVM".equals(rs.getString(1))){
>> > >  kvm = true;
>> > >  } else if("VMware".equals(rs.getString(1))){
>> > >  VMware = true;
>> > >  } else if("Hyperv".equals(rs.getString(1))) {
>> 

RE: StoragePoolForMigrationResponse and StoragePoolResponse

2013-06-06 Thread Devdeep Singh
suitableformigration isn't an attribute of the storage pool. It just tells 
whether a particular pool is suitable for migrating a particular volume. For 
example, if volume A has to be migrated to another pool, the available pools 
are listed, and if the tags on the pool and the volume do not match, the pool 
is flagged as unsuitable. For another volume it may be flagged as suitable. So 
it really isn't an attribute of a storage pool, and I believe it doesn't belong 
in the StoragePoolResponse object.
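
For illustration, a minimal sketch of the composition I have in mind;
the field and accessor names here are assumptions, not the actual
CloudStack code:

// Sketch only: wrap the generic pool response and add the per-volume
// migration verdict, instead of pushing that flag into
// StoragePoolResponse itself (the existing response class used by
// listStoragePools).
public class StoragePoolForMigrationResponse extends BaseResponse {
    private StoragePoolResponse storagePool;   // generic pool listing data
    private Boolean suitableForMigration;      // verdict for *this* volume only

    public void setStoragePool(StoragePoolResponse storagePool) {
        this.storagePool = storagePool;
    }

    public void setSuitableForMigration(Boolean suitableForMigration) {
        this.suitableForMigration = suitableForMigration;
    }
}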

Regards,
Devdeep

> -Original Message-
> From: Min Chen [mailto:min.c...@citrix.com]
> Sent: Friday, June 07, 2013 2:20 AM
> To: dev@cloudstack.apache.org
> Subject: Re: StoragePoolForMigrationResponse and StoragePoolResponse
> 
> I agree with Prasanna on this. We don't need to introduce several Storage
> pool related responses just for some specific apis. In some way,
> suitableFormigration is some kind of attribute that can be set on a storage
> pool or not. If you don't want to show it to listStoragePool call, you can 
> set that
> as null so that json serialization will ignore it.
> 
> Just my two cents.
> -min
> 
> On 6/6/13 5:07 AM, "Devdeep Singh"  wrote:
> 
> >Hi,
> >
> >StoragePoolResponse should really only be used for listing storage pools.
> >Putting a suitableformigration flag etc. makes it weird for other apis.
> >If tomorrow the response object is updated to include more statistics
> >for admin user to make a better decision, then such information gets
> >pushed in there which makes it unnatural for apis that just need the
> >list of storage pools. I am planning to update
> >StoragePoolForMigrationResponse to include the StoragePoolResponse
> >object and any other flag; suitableformigration in this case. I'll file a 
> >bug for
> the same.
> >
> >Regards,
> >Devdeep
> >
> >> -Original Message-
> >> From: Prasanna Santhanam [mailto:t...@apache.org]
> >> Sent: Tuesday, June 04, 2013 2:28 PM
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: StoragePoolForMigrationResponse and StoragePoolResponse
> >>
> >> On Fri, May 31, 2013 at 06:28:39PM +0530, Prasanna Santhanam wrote:
> >> > On Fri, May 31, 2013 at 12:24:20PM +, Pranav Saxena wrote:
> >> > > Hey Prasanna ,
> >> > >
> >> > > I see that the response  object name is
> >> > > findstoragepoolsformigrationresponse , which is correct as shown
> >> > > below .  Are you referring to this API or something else  ?
> >> > >
> >> > > http://MSIP:8096/client/api?command=findStoragePoolsForMigration
> >> > >
> >> > >  >> > > cloud-stack-version="4.2.0-SNAPSHOT">
> >> > >
> >> > >  
> >> > >
> >> >
> >> > No that's what is shown to the user. I meant the class within
> >> > org.apache.cloudstack.api.response
> >> >
> >> Fixed with 0401774a09483354f5b8532a30943351755da93f
> >>
> >> --
> >> Prasanna.,
> >>
> >> 
> >> Powered by BigRock.com
> >



Re: quick systemvm question

2013-06-06 Thread Wei ZHOU
Marcus,

(1) cloud-install-sys-tmplt updates the template with max(id):

select max(id) from cloud.vm_template where type = \"SYSTEM\" and
hypervisor_type = \"KVM\" and removed is null"`

(2) The upgrade process updates the template with a specific name, in
Upgrade410to420.java:
pstmt = conn.prepareStatement("select id from `cloud`.`vm_template` where
name like 'systemvm-xenserver-4.2' and removed is null order by id desc
limit 1");

We are discussing this in another thread, "git commit: updated
refs/heads/master to 9fe7846". Please join us.

-Wei


2013/6/7 Marcus Sorensen 

> How does cloudstack know which template is the latest system vm? Does
> it match on name or something?  From what I have gathered in the
> upgrade docs, you simply register a new template, like any other, and
> run a convenience script that restarts your system vms. But I don't
> gather from this how cloudstack knows it's a system template (and
> further THE system template).
>