Re: [DISCUSS] Disaster Recovery solution for CloudStack

2013-06-03 Thread Nguyen Anh Tu
Hi,

Any ideas?


2013/5/31 Wido den Hollander 

> On 05/31/2013 11:02 AM, Nguyen Anh Tu wrote:
>
>> Hi folks,
>>
>> I'm looking for a Disaster Recovery solution on CS. Looking around I found
>> an article with some great information, but not enough. Personally I
>> think:
>>
>> + Host: CS already implements migration, which can move VMs to another
>> suitable host
>> + Database: we have replication
>>
>
> Replication is not enough. Your database with CloudStack is key, it's your
> most precious metadata, so make sure you have GOOD, very good backups of it.
>
> The best would be a dump of the database every X hours, plus the binary
> logs as well, so that you can do point-in-time recovery of your database.
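(A minimal sketch of what Wido describes, not part of CloudStack itself; the
credentials, paths and schedule below are placeholders, and it assumes
mysqldump is on the PATH and log_bin is enabled in my.cnf:)

    #!/usr/bin/env python
    # Dump the `cloud` database and record the binary-log position, so binlogs
    # can be replayed from the dump point for point-in-time recovery.
    import datetime
    import subprocess

    DB_USER, DB_PASS, DB_NAME = "cloud", "secret", "cloud"   # placeholders
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    dump_file = "/backup/cloud-%s.sql" % stamp

    with open(dump_file, "w") as out:
        # --master-data=2 writes the current binlog file/position into the dump
        # as a comment; --single-transaction keeps the dump consistent.
        subprocess.check_call(
            ["mysqldump", "-u", DB_USER, "-p" + DB_PASS,
             "--single-transaction", "--master-data=2", DB_NAME],
            stdout=out)
    print("dump written to %s; keep all binary logs created after it" % dump_file)

(Run something like this from cron every X hours; a restore is then "load the
latest dump, replay the binlogs up to the desired point in time".)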
>
>
>  + Management Server: we can use multi MS
>> + Primary Storage: it's the most important component when Disaster Recovery
>> happens. It contains ROOT and DATA volumes, and nobody is happy if they're
>> lost.
>> We need a mirror (or replication) solution here. Many distributed file systems
>> can help (GlusterFS, Ceph, Hadoop...). An interesting solution I found on
>> XenServer is Portable SR, which makes the SR fully self-contained. We
>> can detach it and re-attach it to a new host. Nothing lost.
>>
>
> CloudStack can't tell you anything about how safe the data is on your
> primary storage, so you just want to make sure you never lose data on it.
>
> Ceph is a great example (I'm a big time fan!) of how you can store your
> data on multiple machines. But even when not using Ceph, just make sure you
> don't lose data on it.
>
> ZFS with zfs send|receive is a great way to back up your data to a
> secondary location in case something goes wrong and you need to restore.
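(A rough sketch of that pattern, assuming a dataset named tank/primary and SSH
access to a standby host; the names are placeholders, not CloudStack settings:)

    # Snapshot the primary-storage dataset and stream it to a second box.
    import datetime
    import subprocess

    snap = "tank/primary@backup-" + datetime.datetime.now().strftime("%Y%m%d%H%M")
    subprocess.check_call(["zfs", "snapshot", snap])

    # zfs send | ssh backup-host zfs receive
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.check_call(
        ["ssh", "backup-host", "zfs", "receive", "-F", "tank/backup"],
        stdin=send.stdout)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

(Incremental sends with `zfs send -i` keep the transfers small after the first
full copy.)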
>
> Wido
>
>
>  + Secondary Storage: Easy to back up.
>>
>> What do you think? Do you have a plan for Disaster Recovery?
>>
>> Thanks,
>>
>>
>


-- 

N.g.U.y.e.N.A.n.H.t.U


RE: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Hugo Trippaers
-1

Extending the release will mean even more features will be packed into 4.2,
which already has quite a lot of changes. The delays with 4.1 show that
testing is a big job already, and more features will make it worse. I'm
convinced that allowing more time for 4.2 would not improve the overall
quality of the release, and it risks lowering the quality due to a
pre-freeze rush.

Cheers,

Hugo

> -Original Message-
> From: Musayev, Ilya [mailto:imusa...@webmd.net]
> Sent: Sunday, June 02, 2013 6:33 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [VOTE] Pushback 4.2.0 Feature Freeze
> 
> +1 for pushing the freeze back 1-2 weeks. We've developed advanced password
> management features for IsWest and would like to merge them in as per
> Clayton's approval.
> 
> 
>  Original message 
> From: Wei ZHOU 
> Date:
> To: dev@cloudstack.apache.org
> Subject: Re: [VOTE] Pushback 4.2.0 Feature Freeze
> 
> 
> -0
> 
> Changed to -0, as I suggest waiting a few days (48 or 72 hours) for the
> merge of the existing review requests.
> 
> -Wei
> 
> 
> 2013/5/31 Wei ZHOU 
> 
> > -1
> > Almost all new features for 4.2 have been merged or are being reviewed.
> > From now on, we'd better not accept new feature review requests, and
> > create the 4.2 branch shortly after committing the existing requests.
> >
> > -Wei
> >
> > 2013/5/31, Chip Childers :
> > > Following our discussion on the proposal to push back the feature
> > > freeze date for 4.2.0 [1], we have not yet achieved a clear consensus.
> > > Well... we have already defined the "project rules" for figuring out what to do.
> > > In our project by-laws [2], we have defined a "release plan"
> > > decision as
> > > follows:
> > >
> > >> 3.4.2. Release Plan
> > >>
> > >> Defines the timetable and work items for a release. The plan also
> > >> nominates a Release Manager.
> > >>
> > >> A lazy majority of active committers is required for approval.
> > >>
> > >> Any active committer or PMC member may call a vote. The vote must
> > >> occur on a project development mailing list.
> > >
> > > And our lazy majority is defined as:
> > >
> > >> 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> > >> votes and more binding +1 votes than binding -1 votes.
> > >
> > > Our current plan is the starting point, so this VOTE is a vote to
> > > change the current plan.  We require a 72 hour window for this vote,
> > > so IMO we are in an odd position where the feature freeze date is at
> > > least extended until Tuesday of next week.
> > >
> > > Our current plan of record for 4.2.0 is at [3].
> > >
> > > [1] http://markmail.org/message/vi3nsd2yo763kzua
> > > [2] http://s.apache.org/csbylaws
> > > [3] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+Release
> > >
> > > 
> > >
> > > I'd like to call a VOTE on the following:
> > >
> > > Proposal: Extend the feature freeze date for our 4.2.0 feature
> > > release from today (2013-05-31) to 2013-06-28.  All other dates
> > > following the feature freeze date in the plan would be pushed out 4
> > > weeks as well.
> > >
> > > Please respond with one of the following:
> > >
> > > +1 : change the plan as listed above
> > > +/-0 : no strong opinion, but leaning + or -
> > > -1 : do not change the plan
> > >
> > > This vote will remain open until Tuesday morning US eastern time.
> > >
> > > -chip
> > >
> >


Re: [VOTE][RESULTS] Release Apache CloudStack 4.1.0 (fifth round)

2013-06-03 Thread Hiroaki KAWAI

I don't want to see normal users failing to run
CloudStack and then emailing the list to ask for the workaround.
# Even if you think we're not wrong.

If you're going to release a5214bee99f6c5582d755c9499f7d99fd7b5b701
as 4.1.0, I'd like to suggest releasing 4.1.1 asap.

# I know the voting window has closed, but I'm -1 on releasing
# 4.1.0 at this moment.


(2013/06/02 2:35), Chip Childers wrote:

The vote has *passed* with the following results (binding PMC votes
indicated with a "*" next to their name):

+1 : Edison*, Hugo*, Marcus*, David*, Wido*, Ilya, Animesh, Milamber,
  Joe*, Simon, Prasanna*
-0 : John
-1 : Ove

I'm going to proceed with moving the release into the distribution repo
now, and will do the DEB / RPM builds to push Wido's repo site / push
cloudmonkey to pypi on Monday.

I do note Ove's -1, due to upstream Tomcat changes.  I know Prasanna
mentioned that he was going to check with that project to see why the
change happened.  We will need to discuss what (if anything) this
project should do to resolve the issue for users.  This issue will block
users of all prior versions as well, so it's nothing *in* our code that
causes the bug.  This is my logic for not cancelling the vote.

-chip

On Tue, May 28, 2013 at 09:47:40AM -0400, Chip Childers wrote:

Hi All,

I've created a 4.1.0 release, with the following artifacts up for a
vote.

The changes from round 4 are related to DEB packaging, some
translation strings, and a functional patch to make bridge type
optional during the agent setup (for backward compatibility).

Git Branch and Commit SH:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.1
Commit: a5214bee99f6c5582d755c9499f7d99fd7b5b701

List of changes:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.1

Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.1.0/

PGP release keys (signed using A99A5D58):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

Testing instructions are here:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+test+procedure

Vote will be open for 72 hours.

For sanity in tallying the vote, can PMC members please be sure to
indicate "(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)




Re: Review Request: CLOUDSTACK-2648 [Multiple_IP_Ranges] Reboot or start/stop router vm deletes the ip aliases created on VR in case of multiple subnets

2013-06-03 Thread bharat kumar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11437/
---

(Updated June 3, 2013, 10:01 a.m.)


Review request for cloudstack, Abhinandan Prateek and Koushik Das.


Description
---

[Multiple_IP_Ranges] Reboot or start/stop router vm deletes the ip aliases 
created on VR in case of multiple subnets
https://issues.apache.org/jira/browse/CLOUDSTACK-2648


This addresses bug Cloudstack-2648.


Diffs (updated)
-

  server/src/com/cloud/configuration/ConfigurationManagerImpl.java 214e292 
  server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
b969be2 

Diff: https://reviews.apache.org/r/11437/diff/


Testing
---

tested on master.


Thanks,

bharat kumar



Re: Review Request: CLOUDSTACK-2648 [Multiple_IP_Ranges] Reboot or start/stop router vm deletes the ip aliases created on VR in case of multiple subnets

2013-06-03 Thread bharat kumar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11437/#review21309
---



server/src/com/cloud/configuration/ConfigurationManagerImpl.java


Please ignore this revision of the diff. I uploaded the wrong one.


- bharat kumar


On June 3, 2013, 10:01 a.m., bharat kumar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11437/
> ---
> 
> (Updated June 3, 2013, 10:01 a.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek and Koushik Das.
> 
> 
> Description
> ---
> 
> [Multiple_IP_Ranges] Reboot or start/stop router vm deletes the ip aliases 
> created on VR in case of multiple subnets
> https://issues.apache.org/jira/browse/CLOUDSTACK-2648
> 
> 
> This addresses bug Cloudstack-2648.
> 
> 
> Diffs
> -
> 
>   server/src/com/cloud/configuration/ConfigurationManagerImpl.java 214e292 
>   server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
> b969be2 
> 
> Diff: https://reviews.apache.org/r/11437/diff/
> 
> 
> Testing
> ---
> 
> tested on master.
> 
> 
> Thanks,
> 
> bharat kumar
> 
>



Re: Review Request: CLOUDSTACK-2648 [Multiple_IP_Ranges] Reboot or start/stop router vm deletes the ip aliases created on VR in case of multiple subnets

2013-06-03 Thread bharat kumar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11437/
---

(Updated June 3, 2013, 10:10 a.m.)


Review request for cloudstack, Abhinandan Prateek and Koushik Das.


Changes
---

Incorporated the review comments and updated the diff.


Description
---

[Multiple_IP_Ranges] Reboot or start/stop router vm deletes the ip aliases 
created on VR in case of multiple subnets
https://issues.apache.org/jira/browse/CLOUDSTACK-2648


This addresses bug Cloudstack-2648.


Diffs (updated)
-

  core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java 
8b996d1 
  patches/systemvm/debian/config/root/deleteIpAlias.sh 865ff3b 
  
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
 5f99a15 
  
plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
 a2cceb1 
  server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
c71d037 

Diff: https://reviews.apache.org/r/11437/diff/


Testing
---

tested on master.


Thanks,

bharat kumar



Re: [VOTE][RESULTS] Release Apache CloudStack 4.1.0 (fifth round)

2013-06-03 Thread Prasanna Santhanam
On Sat, Jun 01, 2013 at 01:35:06PM -0400, Chip Childers wrote:
> The vote has *passed* with the following results (binding PMC votes
> indicated with a "*" next to their name):
> 
> +1 : Edison*, Hugo*, Marcus*, David*, Wido*, Ilya, Animesh, Milamber,
>  Joe*, Simon, Prasanna*
> -0 : John
> -1 : Ove
> 
> I'm going to proceed with moving the release into the distribution repo
> now, and will do the DEB / RPM builds to push Wido's repo site / push
> cloudmonkey to pypi on Monday.
> 
> I do note Ove's -1, due to upstream Tomcat changes.  I know Prasanna
> mentioned that he was going to check with that project to see why the
> change happened.  We will need to discuss what (if anything) this
> project should do to resolve the issue for users.  This issue will block
> users of all prior versions as well, so it's nothing *in* our code that
> causes the bug.  This is my logic for not cancelling the vote.
> 

I couldn't find a reasonably good solution for this. The vulnerability
was fixed in Tomcat more than a year ago, and the fix was only applied
recently in the distros, as Ove pointed out. While this doesn't affect
those upgrading, it is problematic for those installing CloudStack
afresh.  Any version: 3.0.2, ($insert_commercial_version), 4.0,
4.0.1, 4.0.2, 4.1 and even the 4.2-SNAPSHOT RPMs.

I've applied a fix on master (54127f8) that I think is reasonable:
changing the permissions on the file so it is owned by the `cloud` user,
which is the user cloudstack-management runs as. To understand why
this is not an obvious hack, please see [1]. If there's an even more elegant
way, please let the list know.
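(The workaround, option a) further down, amounts to roughly the following; this
is only an illustration of the idea, not the actual 54127f8 commit, and the log
path is an assumption that may differ per distro/packaging:)

    # Make catalina.out owned by the `cloud` user that cloudstack-management runs as.
    import grp
    import os
    import pwd

    path = "/var/log/cloudstack/management/catalina.out"   # assumed location
    uid = pwd.getpwnam("cloud").pw_uid
    gid = grp.getgrnam("cloud").gr_gid

    if not os.path.exists(path):
        open(path, "a").close()       # create it empty if Tomcat hasn't yet
    os.chown(path, uid, gid)
    os.chmod(path, 0o640)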

I'm also not quite sure how and when the deb packages will be
affected. It looks like the Debian users haven't reported this
problem yet. We started seeing issues of this right after May 25th;
we should've paid more attention then (/me facepalm).

It's an awkward situation, so I'm not sure what the next course of
action will be, since our src release is ready to be published.

The options are:
a) Publish the workaround of giving `cloud` permissions to catalina.out
b) Release a new source package with the fix cherry-picked to 4.1 and
wherever applicable.

b) shouldn't take long; just testing the packaging should be
sufficient. CloudStack's overall functionality is satisfactory from
the tests done so far.

[1] http://markmail.org/thread/wuknrv3ml5lfdq7c

-- 
Prasanna.,





Re: Review Request: Cloudstack-2621 [Multiple_IP_Ranges] Failed to delete guest IP range from a new subnet/C

2013-06-03 Thread bharat kumar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11435/
---

(Updated June 3, 2013, 10:27 a.m.)


Review request for cloudstack, Abhinandan Prateek and Koushik Das.


Changes
---

updated the diff as per the review comments.


Description
---

[Multiple_IP_Ranges] Failed to delete guest IP range from a new subnet/C
https://issues.apache.org/jira/browse/CLOUDSTACK-2621


This addresses bug Cloudstack-2621.


Diffs (updated)
-

  server/src/com/cloud/configuration/ConfigurationManagerImpl.java 59e70cf 
  server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
c71d037 

Diff: https://reviews.apache.org/r/11435/diff/


Testing
---

tested on master.


Thanks,

bharat kumar



Re: Review Request: CLOUDSTACK-2620 [Multiple_IP_Ranges] Guest vm's nameserver is not set to VRs guest IP address in case of multiple subnets

2013-06-03 Thread bharat kumar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11436/
---

(Updated June 3, 2013, 11:10 a.m.)


Review request for cloudstack, Abhinandan Prateek and Koushik Das.


Changes
---

update the diff with review comments.


Description
---

[Multiple_IP_Ranges] Guest vm's nameserver is not set to VRs guest IP address 
in case of multiple subnets
https://issues.apache.org/jira/browse/CLOUDSTACK-2620


This addresses bug Cloudstack-2620.


Diffs (updated)
-

  api/src/com/cloud/agent/api/to/DnsmasqTO.java f99878c 
  core/src/com/cloud/network/DnsMasqConfigurator.java ee8e5fc 
  server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
c71d037 

Diff: https://reviews.apache.org/r/11436/diff/


Testing
---

tested on master.


Thanks,

bharat kumar



[ACS41] Upgrade from 2.2.13

2013-06-03 Thread nicolas.lamirault

Hi,
We are trying to upgrade from 2.2.14 to 4.1,
and we failed with these logs:

2013-06-03 13:15:24,367 DEBUG [utils.db.ScriptRunner] (Timer-1:null) 
UPDATE `cloud`.`user` SET PASSWORD=RAND() WHERE id=1
2013-06-03 13:15:24,367 DEBUG [utils.db.ScriptRunner] (Timer-1:null) 
ALTER TABLE `cloud_usage`.`account` ADD COLUMN `default_zone_id` bigint 
unsigned
2013-06-03 13:15:24,552 DEBUG [upgrade.dao.Upgrade302to40] 
(Timer-1:null) Updating VMware System Vms
2013-06-03 13:15:24,556 DEBUG [db.Transaction.Transaction] 
(Timer-1:null) Rolling back the transaction: Time = 9675 Name = 
Upgrade; called by 
-Transaction.rollback:890-Transaction.removeUpTo:833-Transaction.close

:657-DatabaseUpgradeChecker.upgrade:263-DatabaseUpgradeChecker.check:358-ComponentContext.initComponentsLifeCycle:90-CloudStartupServlet$1.run:50-TimerThread.mainLoop:512-TimerThread.run:462
2013-06-03 13:15:24,558 ERROR [utils.component.ComponentContext] 
(Timer-1:null) System integrity check failed. Refuse to startup


According to the code:

https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob;f=server/src/com/cloud/upgrade/dao/Upgrade302to40.java;h=6f31fdd2b8eda8e15c223adceed52c70a6457349;hb=a5214bee99f6c5582d755c9499f7d99fd7b5b701

// Just update the VMware system template. Other hypervisor templates are unchanged from previous 3.0.x versions.
s_logger.debug("Updating VMware System Vms");
try {
    // Get 4.0 VMware system Vm template Id
    pstmt = conn.prepareStatement("select id from `cloud`.`vm_template` where name = 'systemvm-vmware-4.0' and removed is null");
    rs = pstmt.executeQuery();
    if (rs.next()) {
        long templateId = rs.getLong(1);
        rs.close();
        pstmt.close();
        // change template type to SYSTEM
        pstmt = conn.prepareStatement("update `cloud`.`vm_template` set type='SYSTEM' where id = ?");
        pstmt.setLong(1, templateId);
        pstmt.executeUpdate();
        pstmt.close();
        // update templete ID of system Vms
        pstmt = conn.prepareStatement("update `cloud`.`vm_instance` set vm_template_id = ? where type <> 'User' and hypervisor_type = 'VMware'");
        pstmt.setLong(1, templateId);
        pstmt.executeUpdate();
        pstmt.close();
    } else {
        if (VMware) {
            throw new CloudRuntimeException("4.0 VMware SystemVm template not found. Cannot upgrade system Vms");
        } else {
            s_logger.warn("4.0 VMware SystemVm template not found. VMware hypervisor is not used, so not failing upgrade");
        }
    }
} catch (SQLException e) {
    throw new CloudRuntimeException("Error while updating VMware systemVm template", e);
}

But in the release PDF, it is written:

VMware
Name: systemvm-vmware-3.0.5
Description: systemvm-vmware-3.0.5
URL: http://download.cloud.com/templates/burbank/burbank-
systemvm-08012012.ova
Zone: Choose the zone where this hypervisor is used
Hypervisor: VMware
Format: OVA
OS Type: Debian GNU/Linux 5.0 (32-bit)
Extractable: no
Password Enabled: no
Public: no
Featured: no


So, is it a documentation bug?
Regards.

--
Nicolas Lamirault
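(For anyone hitting the same failure: a minimal sketch to check whether the row
that the Upgrade302to40 code above looks for exists; it assumes shell access to
the management-server database and the usual `cloud` DB user, both placeholders
for your setup:)

    # Query the template the upgrade code selects; an empty result while VMware
    # hosts exist is what makes the upgrade throw
    # "4.0 VMware SystemVm template not found".
    import subprocess

    query = ("select id, name, type, removed from `cloud`.`vm_template` "
             "where name = 'systemvm-vmware-4.0' and removed is null")
    subprocess.check_call(["mysql", "-u", "cloud", "-p", "-e", query])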




Re: Review Request: CLOUDSTACK-2620 [Multiple_IP_Ranges] Guest vm's nameserver is not set to VRs guest IP address in case of multiple subnets

2013-06-03 Thread bharat kumar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11436/
---

(Updated June 3, 2013, 12:07 p.m.)


Review request for cloudstack, Abhinandan Prateek and Koushik Das.


Description
---

[Multiple_IP_Ranges] Guest vm's nameserver is not set to VRs guest IP address 
in case of multiple subnets
https://issues.apache.org/jira/browse/CLOUDSTACK-2620


This addresses bug Cloudstack-2620.


Diffs (updated)
-

  api/src/com/cloud/agent/api/to/DnsmasqTO.java f99878c 
  core/src/com/cloud/network/DnsMasqConfigurator.java ee8e5fc 
  server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
c71d037 

Diff: https://reviews.apache.org/r/11436/diff/


Testing
---

tested on master.


Thanks,

bharat kumar



Re: Review Request: CLOUDSTACK-2648 [Multiple_IP_Ranges] Reboot or start/stop router vm deletes the ip aliases created on VR in case of multiple subnets

2013-06-03 Thread Koushik Das

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11437/#review21311
---

Ship it!


Ship It!

- Koushik Das


On June 3, 2013, 10:10 a.m., bharat kumar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11437/
> ---
> 
> (Updated June 3, 2013, 10:10 a.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek and Koushik Das.
> 
> 
> Description
> ---
> 
> [Multiple_IP_Ranges] Reboot or start/stop router vm deletes the ip aliases 
> created on VR in case of multiple subnets
> https://issues.apache.org/jira/browse/CLOUDSTACK-2648
> 
> 
> This addresses bug Cloudstack-2648.
> 
> 
> Diffs
> -
> 
>   
> core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java 
> 8b996d1 
>   patches/systemvm/debian/config/root/deleteIpAlias.sh 865ff3b 
>   
> plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
>  5f99a15 
>   
> plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
>  a2cceb1 
>   server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
> c71d037 
> 
> Diff: https://reviews.apache.org/r/11437/diff/
> 
> 
> Testing
> ---
> 
> tested on master.
> 
> 
> Thanks,
> 
> bharat kumar
> 
>



Re: Review Request: CLOUDSTACK-2620 [Multiple_IP_Ranges] Guest vm's nameserver is not set to VRs guest IP address in case of multiple subnets

2013-06-03 Thread bharat kumar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11436/
---

(Updated June 3, 2013, 1:08 p.m.)


Review request for cloudstack, Abhinandan Prateek and Koushik Das.


Description
---

[Multiple_IP_Ranges] Guest vm's nameserver is not set to VRs guest IP address 
in case of multiple subnets
https://issues.apache.org/jira/browse/CLOUDSTACK-2620


This addresses bug Cloudstack-2620.


Diffs (updated)
-

  api/src/com/cloud/agent/api/to/DnsmasqTO.java f99878c 
  core/src/com/cloud/network/DnsMasqConfigurator.java ee8e5fc 
  server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
c71d037 

Diff: https://reviews.apache.org/r/11436/diff/


Testing
---

tested on master.


Thanks,

bharat kumar



Re: Review Request: CLOUDSTACK-2620 [Multiple_IP_Ranges] Guest vm's nameserver is not set to VRs guest IP address in case of multiple subnets

2013-06-03 Thread Koushik Das

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11436/#review21314
---

Ship it!


Ship It!

- Koushik Das


On June 3, 2013, 1:08 p.m., bharat kumar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11436/
> ---
> 
> (Updated June 3, 2013, 1:08 p.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek and Koushik Das.
> 
> 
> Description
> ---
> 
> [Multiple_IP_Ranges] Guest vm's nameserver is not set to VRs guest IP address 
> in case of multiple subnets
> https://issues.apache.org/jira/browse/CLOUDSTACK-2620
> 
> 
> This addresses bug Cloudstack-2620.
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/agent/api/to/DnsmasqTO.java f99878c 
>   core/src/com/cloud/network/DnsMasqConfigurator.java ee8e5fc 
>   server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
> c71d037 
> 
> Diff: https://reviews.apache.org/r/11436/diff/
> 
> 
> Testing
> ---
> 
> tested on master.
> 
> 
> Thanks,
> 
> bharat kumar
> 
>



Re: [VOTE][RESULTS] Release Apache CloudStack 4.1.0 (fifth round)

2013-06-03 Thread Chip Childers
On Mon, Jun 03, 2013 at 03:40:04PM +0530, Prasanna Santhanam wrote:
> On Sat, Jun 01, 2013 at 01:35:06PM -0400, Chip Childers wrote:
> > The vote has *passed* with the following results (binding PMC votes
> > indicated with a "*" next to their name):
> > 
> > +1 : Edison*, Hugo*, Marcus*, David*, Wido*, Ilya, Animesh, Milamber,
> >  Joe*, Simon, Prasanna*
> > -0 : John
> > -1 : Ove
> > 
> > I'm going to proceed with moving the release into the distribution repo
> > now, and will do the DEB / RPM builds to push Wido's repo site / push
> > cloudmonkey to pypi on Monday.
> > 
> > I do note Ove's -1, due to upstream Tomcat changes.  I know Prasanna
> > mentioned that he was going to check with that project to see why the
> > change happened.  We will need to discuss what (if anything) this
> > project should do to resolve the issue for users.  This issue will block
> > users of all prior versions as well, so it's nothing *in* our code that
> > causes the bug.  This is my logic for not cancelling the vote.
> > 
> 
> I couldn't find a reasonably good solution for this. The vulnerability
> was fixed in Tomcat more than a year ago, and the fix was only applied
> recently in the distros, as Ove pointed out. While this doesn't affect
> those upgrading, it is problematic for those installing CloudStack
> afresh.  Any version: 3.0.2, ($insert_commercial_version), 4.0,
> 4.0.1, 4.0.2, 4.1 and even the 4.2-SNAPSHOT RPMs.
> 
> I've applied a fix on master (54127f8) that I think is reasonable:
> changing the permissions on the file so it is owned by the `cloud` user,
> which is the user cloudstack-management runs as. To understand why
> this is not an obvious hack, please see [1]. If there's an even more elegant
> way, please let the list know.

This seems like a reasonable fix to me.  I'll cherry-pick it over to
4.0.

> 
> I'm also not quite sure how and when the deb packages will be
> affected. It looks like the Debian users haven't reported this
> problem yet. We started seeing issues of this right after May 25th;
> we should've paid more attention then (/me facepalm).

CLOUDSTACK-2758 should probably stay open, pending a DEB fix to pre-empt
the issue occurring in those distros.

> 
> It's an awkward situation, so I'm not sure what the next course of
> action will be, since our src release is ready to be published.
> 
> The options are:
> a) Publish the workaround of giving `cloud` permissions to catalina.out
> b) Release a new source package with the fix cherry-picked to 4.1 and
> wherever applicable.
> 
> b) shouldn't take long; just testing the packaging should be
> sufficient. CloudStack's overall functionality is satisfactory from
> the tests done so far.

Unfortunately, perhaps I made a big mistake by not cancelling the VOTE
and performing the release copy.  At this point, 4.1.0 is *frozen* from
changes per ASF release policies (we can't change the bits after I put
them in the release dir).

So...  I'm actually going to propose 2 things:

1) I'm going to build the RPM's that we'll host from Wido's repo server
*with* the fix Prasanna provided.

2) Someone (not me, due to vacation starting Wed) needs to spin a 4.1.1 release
ASAP to include the fix for this.

> 
> [1] http://markmail.org/thread/wuknrv3ml5lfdq7c
> 
> -- 
> Prasanna.,
> 
> 
> 
> 


Re: [VOTE][RESULTS] Release Apache CloudStack 4.1.0 (fifth round)

2013-06-03 Thread Chip Childers
On Mon, Jun 03, 2013 at 06:40:55PM +0900, Hiroaki KAWAI wrote:
> I don't want to see normal users failing to run
> CloudStack and then emailing the list to ask for the workaround.
> # Even if you think we're not wrong.
> 
> If you're going to release a5214bee99f6c5582d755c9499f7d99fd7b5b701
> as 4.1.0, I'd like to suggest releasing 4.1.1 asap.
> 
> # I know the voting window has closed, but I'm -1 on releasing
> # 4.1.0 at this moment.

I replied to Prasanna's email on this issue with the path forward, and
apologize for moving forward without figuring out what a fix would be
for this.

We have to do a 4.1.1 to officially fix it.  The hosted RPM's can
contain the fix, since they are not an official ASF artifact (and
therefore don't require VOTEing, etc...).

-chip


Re: Review Request: CLOUDSTACK-2620 [Multiple_IP_Ranges] Guest vm's nameserver is not set to VRs guest IP address in case of multiple subnets

2013-06-03 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11436/#review21317
---


Commit 0a69b828993088487876ce859e6c00e96e4b545c in branch refs/heads/master 
from Abhinandan Prateek 
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=0a69b82 ]

CLOUDSTACK-2620 [Multiple_IP_Ranges] Guest vm's nameserver is not set to VRs 
guest IP address in case of multiple subnets

Signed-off-by: Abhinandan Prateek 


- ASF Subversion and Git Services


On June 3, 2013, 1:08 p.m., bharat kumar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11436/
> ---
> 
> (Updated June 3, 2013, 1:08 p.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek and Koushik Das.
> 
> 
> Description
> ---
> 
> [Multiple_IP_Ranges] Guest vm's nameserver is not set to VRs guest IP address 
> in case of multiple subnets
> https://issues.apache.org/jira/browse/CLOUDSTACK-2620
> 
> 
> This addresses bug Cloudstack-2620.
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/agent/api/to/DnsmasqTO.java f99878c 
>   core/src/com/cloud/network/DnsMasqConfigurator.java ee8e5fc 
>   server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
> c71d037 
> 
> Diff: https://reviews.apache.org/r/11436/diff/
> 
> 
> Testing
> ---
> 
> tested on master.
> 
> 
> Thanks,
> 
> bharat kumar
> 
>



[ACS41] Upgrade from 2.2.14 failed

2013-06-03 Thread nicolas.lamirault

I created an issue: https://issues.apache.org/jira/browse/CLOUDSTACK-2822

--
Nicolas Lamirault




Re: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Chip Childers
Reminder to please VOTE here.  This vote will close tomorrow, and your
opinion counts.

-chip

On Fri, May 31, 2013 at 11:00:21AM -0400, Chip Childers wrote:
> Following our discussion on the proposal to push back the feature freeze
> date for 4.2.0 [1], we have not yet achieved a clear consensus.  Well...  
> we have already defined the "project rules" for figuring out what to do.
> In our project by-laws [2], we have defined a "release plan" decision as
> follows:
> 
> > 3.4.2. Release Plan
> > 
> > Defines the timetable and work items for a release. The plan also
> > nominates a Release Manager.
> > 
> > A lazy majority of active committers is required for approval.
> > 
> > Any active committer or PMC member may call a vote. The vote must occur
> > on a project development mailing list.
> 
> And our lazy majority is defined as:
> 
> > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> > votes and more binding +1 votes than binding -1 votes.
> 
> Our current plan is the starting point, so this VOTE is a vote to change
> the current plan.  We require a 72 hour window for this vote, so IMO we are
> in an odd position where the feature freeze date is at least extended until 
> Tuesday of next week.
> 
> Our current plan of record for 4.2.0 is at [3].
> 
> [1] http://markmail.org/message/vi3nsd2yo763kzua
> [2] http://s.apache.org/csbylaws
> [3] 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+Release
> 
> 
> 
> I'd like to call a VOTE on the following:
> 
> Proposal: Extend the feature freeze date for our 4.2.0 feature release
> from today (2013-05-31) to 2013-06-28.  All other dates following the
> feature freeze date in the plan would be pushed out 4 weeks as well.
> 
> Please respond with one of the following:
> 
> +1 : change the plan as listed above
> +/-0 : no strong opinion, but leaning + or -
> -1 : do not change the plan
> 
> This vote will remain open until Tuesday morning US eastern time.
> 
> -chip


Review Request: Cloudstack-2511 Multiple_Ip_Ranges: Adding guest ip range in subset/superset to existing CIDR is allowed https://issues.apache.org/jira/browse/CLOUDSTACK-2511, Cloudstack-2651 [Shared

2013-06-03 Thread bharat kumar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11600/
---

Review request for cloudstack, Abhinandan Prateek and Koushik Das.


Description
---

Cloudstack-2511 Multiple_Ip_Ranges: Adding guest ip range in subset/superset to 
existing CIDR is allowed
https://issues.apache.org/jira/browse/CLOUDSTACK-2511

Cloudstack-2651 [Shared n/w]Add IP range should ask for gateway and netmask
https://issues.apache.org/jira/browse/CLOUDSTACK-2651


This addresses bugs Cloudstack-2511 and Cloudstack-2651.


Diffs
-

  server/src/com/cloud/configuration/ConfigurationManagerImpl.java 59e70cf 
  utils/src/com/cloud/utils/net/NetUtils.java 8c094c8 

Diff: https://reviews.apache.org/r/11600/diff/


Testing
---

Tested with master.


Thanks,

bharat kumar



Re: Review Request: CLOUDSTACK-2648 [Multiple_IP_Ranges] Reboot or start/stop router vm deletes the ip aliases created on VR in case of multiple subnets

2013-06-03 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11437/#review21318
---


Commit 48913679e80e50228b1bd4b3d17fe5245461626a in branch refs/heads/master 
from Abhinandan Prateek 
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=4891367 ]

CLOUDSTACK-2648 [Multiple_IP_Ranges] Reboot or start/stop router vm deletes the 
ip aliases created on VR in case of multiple subnets

Signed-off-by: Abhinandan Prateek 


- ASF Subversion and Git Services


On June 3, 2013, 10:10 a.m., bharat kumar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11437/
> ---
> 
> (Updated June 3, 2013, 10:10 a.m.)
> 
> 
> Review request for cloudstack, Abhinandan Prateek and Koushik Das.
> 
> 
> Description
> ---
> 
> [Multiple_IP_Ranges] Reboot or start/stop router vm deletes the ip aliases 
> created on VR in case of multiple subnets
> https://issues.apache.org/jira/browse/CLOUDSTACK-2648
> 
> 
> This addresses bug Cloudstack-2648.
> 
> 
> Diffs
> -
> 
>   
> core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java 
> 8b996d1 
>   patches/systemvm/debian/config/root/deleteIpAlias.sh 865ff3b 
>   
> plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
>  5f99a15 
>   
> plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
>  a2cceb1 
>   server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java 
> c71d037 
> 
> Diff: https://reviews.apache.org/r/11437/diff/
> 
> 
> Testing
> ---
> 
> tested on master.
> 
> 
> Thanks,
> 
> bharat kumar
> 
>



netscaler jars hosted in maven central

2013-06-03 Thread Vijay Venkatachalam
Hi,

The Apache-licensed NetScaler jars are now hosted in Maven Central.
It is now possible to make the NetScaler plug-in part of the OSS build.

Please check the details below.

Nitro jar entry
***
groupId:    com.citrix.netscaler.nitro
artifactId: nitro
version:    10.0.e

SDX Nitro jar entry
*
groupId:    com.citrix.netscaler.nitro
artifactId: sdx_nitro
version:    10.0

I will not be able to work on the OSS upgrade of the NetScaler plugin in the
immediate future.

But here are my thoughts.

It is better to split the OSS migration into 2 tasks.
1. Upgrading the NetScaler plugin to use the Maven Central repository jars and
testing it.
>>> The current CloudStack NetScaler plugin code is compiled with Nitro jar
10.0.e and SDX jar 9.3.
>>> So during the OSS migration, please anticipate compilation problems for
SDX-related code.
>>> Someone who is familiar with the NetScaler plugin can get this done. Maybe
Murali or Rajesh B.?
2. Moving the NetScaler plugin to OSS.
>>> If I remember right, Prasanna volunteered for this a long time back.

Thanks,
Vijay V.


Re: netscaler jars hosted in maven central

2013-06-03 Thread David Nalley
Wonderful news Vijay. Glad to see this accomplished.
On Jun 3, 2013 10:13 AM, "Vijay Venkatachalam" <
vijay.venkatacha...@citrix.com> wrote:

> Hi,
>
> The Apache licensed netscaler jars are now hosted in maven central.
> It is possible to make the netscaler plug-in as part of the OSS.
>
> Please check the details below.
>
> Nitro jar entry
> ***
> groupId:    com.citrix.netscaler.nitro
> artifactId: nitro
> version:    10.0.e
>
> SDX Nitro jar entry
> *
> groupId:    com.citrix.netscaler.nitro
> artifactId: sdx_nitro
> version:    10.0
>
> I would not be able to work on the OSS upgrade of the NetScaler plugin in
> the immediate future.
>
> But here are my thoughts.
>
> It is better to split the OSS migration into 2 tasks.
> 1.  Upgrading the netscaler plugin to use the maven central repository
> jars and testing it.
> >>>The current cloudstack netscaler plugin code is compiled with Nitro
> jar 10.0.e and SDX jar 9.3.
> >>> So during the OSS migration please anticipate compilation problems
> for SDX related code.
>  >>> Someone who is familiar with NetScaler plugin can get this done.
> May be Murali or Rajesh B.?
> 2. Moving the netscaler plugin to OSS.
>  >>> If I remember right Prasanna volunteered for this long time back.
>
> Thanks,
> Vijay V.
>


Re: [MERGE]object_store branch into master

2013-06-03 Thread John Burwell
Edison/Chip,

Please see my comments in-line.

Thanks,
-John

On May 31, 2013, at 4:04 PM, Chip Childers  wrote:

> Comments inline:
> 
> On Thu, May 30, 2013 at 09:42:29PM +, Edison Su wrote:
>> 
>> 
>>> -Original Message-
>>> From: John Burwell [mailto:jburw...@basho.com]
>>> Sent: Thursday, May 30, 2013 7:43 AM
>>> To: dev@cloudstack.apache.org
>>> Subject: Re: [MERGE]object_store branch into master
>>> 
>>> It feels like we have jumped to a solution without completely understanding
>>> the scope of the problem and the associated assumptions.  We have a
>>> community of hypervisor experts who we should consult to ensure we have
>>> the best solution.  As such, I recommend mailing the list with the specific
>>> hypervisors and functions that you have been unable to interface to storage
>>> that does not present a filesystem.  I do not recall seeing such a 
>>> discussion on
>>> the list previously.
>> 
>> If people use zone-wide primary storage, like Ceph/SolidFire, then
>> suddenly there is no need for NFS cache storage, as zone-wide storage can
>> be treated as both primary and secondary storage, with S3 as the backup
>> storage. It's a simple but powerful solution.
>> Why can't we just add code to support these exciting new solutions? It's
>> hard to do it on the master branch; that's why Min and I worked hard to
>> refactor the code and remove the NFS secondary storage dependency from the
>> management server as much as possible. As we all know, NFS secondary
>> storage is not scalable, no matter how fancy an aging policy or how
>> advanced a capacity planner you have.
>> 
>> And that's one of the reasons I don't care that much about the issue with
>> NFS cache storage. Couldn't we put our energy into cloud-style storage
>> solutions, instead of into the un-scalable storage?
> 
> Per your comment about you and Min working hard on this: nobody is
> saying that you didn't.  This isn't personal (or shouldn't be).  These
> are questions that are part of a consensus-based approach to
> development.
> 
>>> As I understand the goals of this enhancement, we will support additional
>>> secondary storage types and removing the assumption that secondary
>>> storage will always be NFS or have a filesystem.  As such, when a non-NFS
>>> type of secondary storage is employed, NFS is no longer the repository of
>>> record for this data.  We can always exceed available space in the 
>>> repository
>>> of record, and the failure scenarios are relatively well understood (4.1.0) 
>>> --
>>> operations will fail quickly and obviously.  However, as a transitory 
>>> staging
>>> storage mechanism (4.2.0), the expectation of the user is the NFS storage 
>>> will
>>> not be as reliable or large.  If the only solution we can provide for this
>>> problem is to recommend an NFS "cache" that is equal to the size of the
>>> object store itself then we have little to no progress addressing our user's
>> 
>> No, that's not true.  Admins can add multiple NFS cache storages if they
>> want; there is no requirement that the NFS storage be the same size as the
>> object store. I can't be that stupid.
>> It's the same thing we are doing on the master branch: admins know that
>> one NFS secondary storage is not enough, so they can add multiple NFS
>> secondary storages. And on the master branch
>> there is no capacity planner for NFS secondary storage; the code just
>> randomly chooses one of the NFS secondary storages, even if one of them is
>> full. Yes, NFS secondary storage on master can be full, and there is no way
>> to age anything out.
>> 
>> The current object_store branch has the same behavior: admins can add
>> multiple NFS cache storages, and there is no capacity planner. But if an NFS
>> cache storage is full, the admin can simply remove the db entries related to
>> the cached objects and clean up the NFS cache storage, and then suddenly
>> everything just works.
>> 
>> From an implementation point of view, I don't think there is any difference.
> 
> It's an expectation issue.  Operators expect to be able to manage their
> storage capacity.  So the question is, for the NFS "Cache", how do they
> plan size requirements and manage that capacity?

The driver for employing an object store is to reduce the cost per GB of 
storage while maintaining reliability and availability.  Requiring NFS reduces, 
if not eliminates, this benefit because system architectures must ensure that 
the NFS "cache" (staging area) has sufficient capacity and reliability to hold 
data until it can be transferred to object storage.  How does adding multiple 
staging areas decrease complexity and cost?  As implemented, the NFS "cache" is 
unbounded meaning that an operator would need to have a NFS "cache" as large as 
object storage to avoid data loss and/or operational failures.

> 
>> 
>> 
>>> needs.  Fundamentally, the role of the NFS is different in 4.2.0 than 4.1.0.
>>> Therefore, I disagree with the assertion that issue is present in 4.1.0.
>> 
>> The role of NFS can 

Re: [VOTE][RESULTS] Release Apache CloudStack 4.1.0 (fifth round)

2013-06-03 Thread Joe Brockmeier
On Mon, Jun 3, 2013, at 08:39 AM, Chip Childers wrote:
> 2) Someone (not me, due to vacation starting Wed) needs to spin a 4.1.1
> release
> ASAP to include the fix for this.

I'm happy to help get this together if Ilya needs any assistance. I'm
flying today but will be around tomorrow and Wednesday. 

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread John Burwell
Wei,


On Jun 3, 2013, at 2:13 AM, Wei ZHOU  wrote:

> Hi John, Mike
> 
> I hope Mike's answer helps you. I am trying to add more.
> 
> (1) I think billing should depend on IO statistics rather than IOPS
> limitation. Please review disk_io_stat if you have time.   disk_io_stat can
> get the IO statistics including bytes/iops read/write for an individual
> virtual machine.

Going by the AWS model, customers are billed more for volumes with provisioned 
IOPS, as well as for those operations (http://aws.amazon.com/ebs/).  I would 
imagine our users would like the option to employ similar cost models.  Could 
an operator implement such a billing model in the current patch?
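(Not the disk_io_stat implementation itself, just an illustration of the kind
of per-disk counters involved, assuming libvirt-python is installed; the
domain and disk names are placeholders:)

    # Read block I/O counters for one running KVM guest via libvirt.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("i-2-10-VM")            # placeholder instance name
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")
    print("read ops=%d, read bytes=%d, write ops=%d, write bytes=%d"
          % (rd_req, rd_bytes, wr_req, wr_bytes))
    conn.close()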

> 
> (2) Do you mean IOPS runtime change? KVM supports setting IOPS/BPS
> limitations for a running virtual machine through the command line. However,
> CloudStack does not support changing the parameters of a created offering
> (compute offering or disk offering).

I meant at the Java interface level.  I apologize for being unclear.  Can we
generalize the allocation algorithms with a set of interfaces that describe
the service guarantees provided by a resource?
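(For reference, the KVM-side capability Wei mentions can be exercised from the
command line roughly like this; a sketch only, with placeholder domain/disk
names, and flag support depends on the libvirt version:)

    # Cap IOPS and bytes/sec on a disk of a running guest via virsh.
    import subprocess

    subprocess.check_call([
        "virsh", "blkdeviotune", "i-2-10-VM", "vda",
        "--total-iops-sec", "500",           # or --read-iops-sec / --write-iops-sec
        "--total-bytes-sec", str(50 * 1024 * 1024),
        "--live",
    ])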

> 
> (3) It is a good question. Maybe it is better to commit Mike's patch after
> disk_io_throttling as Mike needs to consider the limitation in hypervisor
> type, I think.

I will expand on my thoughts in a later response to Mike regarding the touch 
points between these two features.  I think that disk_io_throttling will need 
to be merged before SolidFire, but I think we need closer coordination between 
the branches (possibly have solidfire track disk_io_throttling) to 
coordinate on this issue.

> 
> - Wei
> 
> 
> 2013/6/3 John Burwell 
> 
>> Mike,
>> 
>> The things I want to understand are the following:
>> 
>>   1) Is there value in capturing IOPS policies in a common
>> data model (e.g. for billing/usage purposes, expressing offerings)?
>>   2) Should there be a common interface model for reasoning about IOPS
>> provisioning at runtime?
>>   3) How are conflicting provisioned IOPS configurations between a
>> hypervisor and a storage device reconciled?  In particular, a scenario where
>> a user is led to believe (and is billed for) more IOPS configured for a VM
>> than the storage device has been configured to deliver.  Another scenario
>> could be a consistent configuration between a VM and a storage device at
>> creation time, where a later modification to the storage device introduces a
>> logical inconsistency.
>> 
>> Thanks,
>> -John
>> 
>> On Jun 2, 2013, at 8:38 PM, Mike Tutkowski 
>> wrote:
>> 
>> Hi John,
>> 
>> I believe Wei's feature deals with controlling the max number of IOPS from
>> the hypervisor side.
>> 
>> My feature is focused on controlling IOPS from the storage system side.
>> 
>> I hope that helps. :)
>> 
>> 
>> On Sun, Jun 2, 2013 at 6:35 PM, John Burwell  wrote:
>> 
>>> Wei,
>>> 
>>> My opinion is that no features should be merged until all functional
>>> issues have been resolved and it is ready to turn over to test.  Until
>> the
>>> total Ops vs discrete read/write ops issue is addressed and re-reviewed
>> by
>>> Wido, I don't think this criteria has been satisfied.
>>> 
>>> Also, how does this work intersect/compliment the SolidFire patch (
>>> https://reviews.apache.org/r/11479/)?  As I understand it that work is
>>> also involves provisioned IOPS.  I would like to ensure we don't have a
>>> scenario where provisioned IOPS in KVM and SolidFire are unnecessarily
>>> incompatible.
>>> 
>>> Thanks,
>>> -John
>>> 
>>> On Jun 1, 2013, at 6:47 AM, Wei ZHOU  wrote:
>>> 
>>> Wido,
>>> 
>>> 
>>> Sure. I will change it next week.
>>> 
>>> 
>>> -Wei
>>> 
>>> 
>>> 
>>> 2013/6/1 Wido den Hollander 
>>> 
>>> 
>>> Hi Wei,
>>> 
>>> 
>>> 
>>> On 06/01/2013 08:24 AM, Wei ZHOU wrote:
>>> 
>>> 
>>> Wido,
>>> 
>>> 
>>> Exactly. I have pushed the features into master.
>>> 
>>> 
>>> If anyone object thems for technical reason till Monday, I will revert
>>> 
>>> them.
>>> 
>>> 
>>> For the sake of clarity I just want to mention again that we should
>> change
>>> 
>>> the total IOps to R/W IOps asap so that we never release a version with
>>> 
>>> only total IOps.
>>> 
>>> 
>>> You laid the groundwork for the I/O throttling and that's great! We
>> should
>>> 
>>> however prevent that we create legacy from day #1.
>>> 
>>> 
>>> Wido
>>> 
>>> 
>>> -Wei
>>> 
>>> 
>>> 
>>> 2013/5/31 Wido den Hollander 
>>> 
>>> 
>>> On 05/31/2013 03:59 PM, John Burwell wrote:
>>> 
>>> 
>>> Wido,
>>> 
>>> 
>>> +1 -- this enhancement must to discretely support read and write IOPS.
>>> 
>>> I
>>> 
>>> don't see how it could be fixed later because I don't see how we
>>> 
>>> correctly
>>> 
>>> split total IOPS into read and write.  Therefore, we would be stuck
>>> 
>>> with a
>>> 
>>> total unless/until we decided to break backwards compatibility.
>>> 
>>> 
>>> 
>>> What Wei meant was merging it into master now so that it will go in the
>>> 
>>> 4.2 branch and add Read / Write IOps before the 4.2 r

Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread Wei ZHOU
John,

For the billing: as no one works on billing now, users need to calculate
the billing by themselves. They can get the service_offering and
disk_offering of VMs and volumes for the calculation. Of course it would be
better to tell users the exact limitation value of an individual volume, and
the network rate limitation for NICs as well. I can work on that later. Do
you think it is part of I/O throttling?

Sorry, I misunderstood the second question.

I agree with what you said about the two features.

-Wei


2013/6/3 John Burwell 

> Wei,
>
>
> On Jun 3, 2013, at 2:13 AM, Wei ZHOU  wrote:
>
> > Hi John, Mike
> >
> > I hope Mike's aswer helps you. I am trying to adding more.
> >
> > (1) I think billing should depend on IO statistics rather than IOPS
> > limitation. Please review disk_io_stat if you have time.   disk_io_stat
> can
> > get the IO statistics including bytes/iops read/write for an individual
> > virtual machine.
>
> Going by the AWS model, customers are billed more for volumes with
> provisioned IOPS, as well as, for those operations (
> http://aws.amazon.com/ebs/).  I would imagine our users would like the
> option to employ similar cost models.  Could an operator implement such a
> billing model in the current patch?
>
> >
> > (2) Do you mean IOPS runtime change? KVM supports setting IOPS/BPS
> > limitation for a running virtual machine through command line. However,
> > CloudStack does not support changing the parameters of a created offering
> > (computer offering or disk offering).
>
> I meant at the Java interface level.  I apologize for being unclear.  Can
> we more generalize allocation algorithms with a set of interfaces that
> describe the service guarantees provided by a resource?
>
> >
> > (3) It is a good question. Maybe it is better to commit Mike's patch
> after
> > disk_io_throttling as Mike needs to consider the limitation in hypervisor
> > type, I think.
>
> I will expand on my thoughts in a later response to Mike regarding the
> touch points between these two features.  I think that disk_io_throttling
> will need to be merged before SolidFire, but I think we need closer
> coordination between the branches (possibly have have solidfire track
> disk_io_throttling) to coordinate on this issue.
>
> >
> > - Wei
> >
> >
> > 2013/6/3 John Burwell 
> >
> >> Mike,
> >>
> >> The things I want to understand are the following:
> >>
> >>   1) Is there value in capturing IOPS policies be captured in a common
> >> data model (e.g. for billing/usage purposes, expressing offerings).
> >>2) Should there be a common interface model for reasoning about IOP
> >> provisioning at runtime?
> >>3) How are conflicting provisioned IOPS configurations between a
> >> hypervisor and storage device reconciled?  In particular, a scenario
> where
> >> is lead to believe (and billed) for more IOPS configured for a VM than a
> >> storage device has been configured to deliver.  Another scenario could a
> >> consistent configuration between a VM and a storage device at creation
> >> time, but a later modification to storage device introduces logical
> >> inconsistency.
> >>
> >> Thanks,
> >> -John
> >>
> >> On Jun 2, 2013, at 8:38 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com>
> >> wrote:
> >>
> >> Hi John,
> >>
> >> I believe Wei's feature deals with controlling the max number of IOPS
> from
> >> the hypervisor side.
> >>
> >> My feature is focused on controlling IOPS from the storage system side.
> >>
> >> I hope that helps. :)
> >>
> >>
> >> On Sun, Jun 2, 2013 at 6:35 PM, John Burwell 
> wrote:
> >>
> >>> Wei,
> >>>
> >>> My opinion is that no features should be merged until all functional
> >>> issues have been resolved and it is ready to turn over to test.  Until
> >> the
> >>> total Ops vs discrete read/write ops issue is addressed and re-reviewed
> >> by
> >>> Wido, I don't think this criteria has been satisfied.
> >>>
> >>> Also, how does this work intersect/compliment the SolidFire patch (
> >>> https://reviews.apache.org/r/11479/)?  As I understand it that work is
> >>> also involves provisioned IOPS.  I would like to ensure we don't have a
> >>> scenario where provisioned IOPS in KVM and SolidFire are unnecessarily
> >>> incompatible.
> >>>
> >>> Thanks,
> >>> -John
> >>>
> >>> On Jun 1, 2013, at 6:47 AM, Wei ZHOU  wrote:
> >>>
> >>> Wido,
> >>>
> >>>
> >>> Sure. I will change it next week.
> >>>
> >>>
> >>> -Wei
> >>>
> >>>
> >>>
> >>> 2013/6/1 Wido den Hollander 
> >>>
> >>>
> >>> Hi Wei,
> >>>
> >>>
> >>>
> >>> On 06/01/2013 08:24 AM, Wei ZHOU wrote:
> >>>
> >>>
> >>> Wido,
> >>>
> >>>
> >>> Exactly. I have pushed the features into master.
> >>>
> >>>
> >>> If anyone object thems for technical reason till Monday, I will revert
> >>>
> >>> them.
> >>>
> >>>
> >>> For the sake of clarity I just want to mention again that we should
> >> change
> >>>
> >>> the total IOps to R/W IOps asap so that we never release a version with
> >>>
> >>> only total IOps.
> >>>
> >>>
> >>> You laid the groundwork for

StaticNatRule vs StaticNat

2013-06-03 Thread Koushik Das
What is the difference between these interfaces? I see that StaticNat is used
in network elements, and StaticNatRule is used elsewhere, including in the APIs.
Given that PF and FW rules use a single interface everywhere, should a similar
thing be done for static NAT rules as well?

-Koushik


Chip Childers Keynote and Full Schedule Announced for the Second CloudStack Collaboration Conference!

2013-06-03 Thread Karen Vuong
## The Program Has Been Announced!
 
There’s a stellar line-up of talks from various Apache
CloudStack committers such as, "How to Run from a Zombie: CloudStack
Distributed Process Management" by John Burwell, "SDN in
CloudStack" by Hugo Trippaers, "CloudStack University” by Sebastian
Goasguen and "High Availability and Disaster Recovery for Cloud
Workloads" by Venkata Budumuru. 
 
It doesn’t end there! There’s also plenty of exciting
topics from organizations that integrate with or use CloudStack including,
"Stackmate: your friend in the cloud business" by Kishore
Yerrapragada, "Calling CloudStack: Building a Phone Company One Zone at a
Time" Evan McGee (Ringplus), "Putting the PaaS in CloudStack" by
Diane Mueller (Redhat) and "Whats the Use!? (Real Customer Use-Cases)"
by Paul Angus ShapeBlue. 
 
But that’s not all! The agenda is jam-packed with even
more talks. Check out the topics and the featured panel discussion, “CloudStack
& Cloud Storage: Where are we at? & Where do we need to go?” Make a
note of which talks you won’t want to miss! To view the schedule: [1] 
http://www.cloudstackcollab.org/schedule/ 
 
## Chip Childers to Keynote the CloudStack Collaboration
Conference
 
Chip Childers, Vice President of Apache CloudStack at the
Apache Software Foundation, will be kicking off the conference with “State of
the Project: Apache CloudStack in 2013”.  In this talk, he will highlight 
success stories from around the
community, describe the scope of our software's impact in the industry, and
share his thoughts on the project's future direction; setting the stage for the
conference, and year, to come.
 
In case you missed it, Gene Kim will be speaking on day
one of the conference, with a new talk called "Why We Need DevOps Now: A
Fourteen Year Study of High Performing IT Organizations." He will also be
taking time to sign copies of The Phoenix Project for attendees. It’s the one
book on DevOps that you should read this year, by Gene Kim, Kevin Behr, and
George Spafford. (A limited supply of the book
will be given out free to folks who have registered for the conference, so
you’ll want to register before supplies run out!). To view Gene Kim’s keynote:
[2] http://www.cloudstackcollab.org/keynote1 
 
## Important Dates
 
* June 23rd - CloudStack Hackathon. This is your place
and opportunity to share ideas, collaborate, discuss plans for Apache
CloudStack and put those bright ideas into practice guided by the CloudStack
development team! This will be a full day of hacking, learning and having some
fun! The day will start with brainstorming ideas & forming teams. You bring
your laptop, appetite, skills and ideas. Get ready to hack!
 
* June 24th - Conference talks and planned sessions
begin.
 
* June 25th - Day Two of the conference talks and planned
sessions. Conference ends.
 
## Evening Events
 
* Sunday, June 23rd: Welcome Cocktail Party sponsored by
SolidFire at the Santa Clara Convention Center. Attendees can register early
and head over to the welcome party to mingle over cocktails and hors d'oeuvres
with members of the CloudStack community! 
 
* Monday, June 24th: Are you a daredevil? Join the
CloudStack community at California’s Great America for the CloudStack Roller
Coaster Party from 6:30pm - 10:00pm! Private access for CloudStackers will be
given on the top five rides - Flight Deck, Rue le Dodge Bumper Cars,
Celebration Swings, Vortex, and the brand new roller coaster, Gold Striker. Not
a fan of roller coasters? There will be caricature artists, bumper cars, ice
cream, BBQ, beer and wine, music and so much more to do! Register for the 
evening
events here: [3] http://www.cloudstackcollab.org/evenings/  
 
## Location and Pricing
 
The CloudStack Collaboration Conference 2013 will be held
at the Santa Clara Convention Center.
 
We encourage attendees to stay at the Hyatt Regency Santa
Clara to participate in after-hours events with the Apache CloudStack
community. Book your room today to take advantage of the CloudStack
Collaboration Conference 2013 special rate of $199.00 per night. There are a
limited number of rooms available at this special rate so book early at [4] 
www.cloudstackcollab.com/register   
 
## See You in California!
 
The CloudStack community is excited to reunite this year
and exchange ideas, discuss plans for Apache CloudStack, learn how others are
using it, and participate in workshops and sprints about CloudStack. We hope
you'll join us! Spread the word, and join us in Santa Clara, CA on June 23rd.
 
Want more information? Check out [5] www.cloudstackcollab.org for further
information and updates on the conference.
 
## Sponsorships
 
Not sure if you can afford the CloudStack Collaboration
Conference admission ticket? If you are an individual developer who
contributes to the Apache CloudStack community, don’t let that stop you! Send
an email to plann...@cloudstackcollab.org and we would be glad to make an 
arrangement for you. 
 
## Follow Conf

CloudStack Community User Survey

2013-06-03 Thread Chip Childers
Hi all,

You may have seen Giles send out a note about a user survey that we
are conducting for the community.  I'd love if everyone could take a
moment (it's short, I promise) to fill out the survey form to share
some information about your use of CloudStack (or commercial
derivatives) with us.  We will be using the data in *aggregate* to get
to know more about how it's being deployed out there.

The survey is here:

https://www.surveymonkey.com/s/28BV97D


Board report for June board meeting...

2013-06-03 Thread Chip Childers
Hi all,

Since I'm going to be on vacation until next Monday (starting Tuesday
evening), I'd like to ask for help in creating the board report for
this month.

I've created the template here:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/2013-06+Board+Report+for+Apache+CloudStack

I'll have a bit of time to help finalize it, but would really love if
another (or more) community member would take the lead in authoring the
report this month.  It's due by Wed, June 12...  so ideally it would
be drafted by Friday, and a note sent to this list for comments /
updates.

-chip


Re: RPM building tests in jenkins

2013-06-03 Thread David Nalley
On Mon, Jun 3, 2013 at 12:10 PM, Prasanna Santhanam  wrote:
> On Fri, May 31, 2013 at 04:58:45AM -0400, David Nalley wrote:
>> On Fri, May 31, 2013 at 2:59 AM, Prasanna Santhanam  wrote:
>> > On Thu, May 30, 2013 at 10:51:12AM -0400, David Nalley wrote:
>> >> Hi folks:
>> >>
>> >> I came across an interesting problem today, and think it's one that
>> >> deserves fixing.
>> >>
>> >> I looked at our jenkins config and found that we build packages in a
>> >> manner that is different from how our documentation tells users to
>> >> build those packages. IMO while there may be more than one way to skin
>> >> the cat, we should at least be testing the manner we tell others to
>> >> use. (or perhaps changing the method we tell users to use to match the
>> >> tests we are doing)
>> >>
>> >> Such disconnects between how our user base consumes ACS and how we
>> >> test it are bound to cause us problems.
>> >>
>> >
>> > What was different? The package.sh script in our repo was modified to
>> > take options but performs the same steps as does the package job on
>> > jenkins.  We only write the full complete `rpmbuild` command on our
>> > jenkins job.
>> >
>>
>> How do you know they will always remain the same?
>> Being the same is not the point - we aren't testing the way we tell
>> users to do it. We wouldn't know if the way we tell users is broken or
>> not, because we aren't exercising that path. One of them needs to
>> change so that they are the same - even if they are the same in
>> effect.
>>
>
> Oh, I fixed this btw. We're in sync with the docs now.
>
> --


Awesome - thanks

--David


Re: Board report for June board meeting...

2013-06-03 Thread David Nalley
On Mon, Jun 3, 2013 at 12:22 PM, Chip Childers
 wrote:
> Hi all,
>
> Since I'm going to be on vacation until next Monday (starting Tuesday
> evening), I'd like to ask for help in creating the board report for
> this month.
>
> I've created the template here:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/2013-06+Board+Report+for+Apache+CloudStack
>
> I'll have a bit of time to help finalize it, but would really love if
> another (or more) community member would take the lead in authoring the
> report this month.  It's due by Wed, June 12...  so ideally it would
> be drafted by Friday, and a note sent to this list for comments /
> updates.
>
> -chip

I'll be happy to make sure this happens.

--David


Re: Board report for June board meeting...

2013-06-03 Thread Chip Childers
On Mon, Jun 03, 2013 at 12:35:43PM -0400, David Nalley wrote:
> On Mon, Jun 3, 2013 at 12:22 PM, Chip Childers
>  wrote:
> > Hi all,
> >
> > Since I'm going to be on vacation until next Monday (starting Tuesday
> > evening), I'd like to ask for help in creating the board report for
> > this month.
> >
> > I've created the template here:
> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/2013-06+Board+Report+for+Apache+CloudStack
> >
> > I'll have a bit of time to help finalize it, but would really love if
> > another (or more) community member would take the lead in authoring the
> > report this month.  It's due by Wed, June 12...  so ideally it would
> > be drafted by Friday, and a note sent to this list for comments /
> > updates.
> >
> > -chip
> 
> I'll be happy to make sure this happens.
> 
> --David
>

Thanks David!


[ACS41] Release process update

2013-06-03 Thread Chip Childers
Hi,

I've edited downloads.mdtext, and committed to staging.  I'll be
building the DEB's and RPM's today.

We have that new tomcat issue to address, which I'll do in the RPM
build.  I think we have a decision to make...  announce 4.1 release with
the permission defect or wait for a 4.1.1 to officially announce the release.

IMO, I think we announce but note the issue and fix for those attempting
to build their own RPMs from source.


Re: RPM building tests in jenkins

2013-06-03 Thread Prasanna Santhanam
On Fri, May 31, 2013 at 04:58:45AM -0400, David Nalley wrote:
> On Fri, May 31, 2013 at 2:59 AM, Prasanna Santhanam  wrote:
> > On Thu, May 30, 2013 at 10:51:12AM -0400, David Nalley wrote:
> >> Hi folks:
> >>
> >> I came across an interesting problem today, and think it's one that
> >> deserves fixing.
> >>
> >> I looked at our jenkins config and found that we build packages in a
> >> manner that is different from how our documentation tells users to
> >> build those packages. IMO while there may be more than one way to skin
> >> the cat, we should at least be testing the manner we tell others to
> >> use. (or perhaps changing the method we tell users to use to match the
> >> tests we are doing)
> >>
> >> Such disconnects between how our user base consumes ACS and how we
> >> test it are bound to cause us problems.
> >>
> >
> > What was different? The package.sh script in our repo was modified to
> > take options but performs the same steps as does the package job on
> > jenkins.  We only write the full complete `rpmbuild` command on our
> > jenkins job.
> >
> 
> How do you know they will always remain the same?
> Being the same is not the point - we aren't testing the way we tell
> users to do it. We wouldn't know if the way we tell users is broken or
> not, because we aren't exercising that path. One of them needs to
> change so that they are the same - even if they are the same in
> effect.
> 

Oh, I fixed this btw. We're in sync with the docs now.

-- 
Prasanna.,


Powered by BigRock.com



Re: Trouble with deployDataCenter.py

2013-06-03 Thread Will Stevens
Has anyone else experienced this?  I just pulled in the master code into my
branch and now I am getting this in my dev environment.

[DEBUG] Executing command line: python ../marvin/marvin/deployDataCenter.py
-i devcloud.cfg
Traceback (most recent call last):
  File "../marvin/marvin/deployDataCenter.py", line 517, in 
deploy.deploy()
  File "../marvin/marvin/deployDataCenter.py", line 500, in deploy
self.loadCfg()
  File "../marvin/marvin/deployDataCenter.py", line 451, in loadCfg
apiKey, securityKey = self.registerApiKey()
  File "../marvin/marvin/deployDataCenter.py", line 390, in registerApiKey
listuserRes = self.testClient.getApiClient().listUsers(listuser)
  File
"/mnt/hgfs/palo_alto/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
line 2385, in listUsers
response = self.connection.marvin_request(command, data=postdata,
response_type=response)
TypeError: marvin_request() got an unexpected keyword argument 'data'

Thanks,

ws


On Mon, May 6, 2013 at 5:13 PM, Mike Tutkowski  wrote:

> It looks like the marvin_request method in cloudstackConnection.py does not
> have a parameter named 'data'.
>
> I changed the signature locally to the following and it works now:
>
> def marvin_request(self, cmd, response_type=None, method='GET', data=''):
>
>
> On Mon, May 6, 2013 at 2:59 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com
> > wrote:
>
> > I don't have much Python experience, but it looks like we're trying to
> > pass in a named parameter that doesn't exist on the receiving side.
> >
> > Perhaps I need to update a Python package?
> >
> > def listUsers(self, command, postdata={}):
> >
> > response = listUsersResponse()
> >
> > response = self.connection.marvin_request(command, data=postdata,
> > response_type=response)
> >
> > return response
> >
> >
> > On Mon, May 6, 2013 at 12:04 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com> wrote:
> >
> >> Hi,
> >>
> >> I just updated to the latest today and ran deployDataCenter.py to build
> a
> >> DevCloud2 environment.
> >>
> >> The script is having trouble. Any thoughts on this? Has this worked
> >> recently for anyone else?
> >>
> >> Thanks!
> >>
> >> mtutkowski-LT:devcloud mtutkowski$ python
> >> ../marvin/marvin/deployDataCenter.py -i devcloud.cfg
> >> Traceback (most recent call last):
> >>   File "../marvin/marvin/deployDataCenter.py", line 476, in 
> >> deploy.deploy()
> >>   File "../marvin/marvin/deployDataCenter.py", line 459, in deploy
> >> self.loadCfg()
> >>   File "../marvin/marvin/deployDataCenter.py", line 410, in loadCfg
> >> apiKey, securityKey = self.registerApiKey()
> >>   File "../marvin/marvin/deployDataCenter.py", line 349, in
> registerApiKey
> >> listuserRes = self.testClient.getApiClient().listUsers(listuser)
> >>   File
> >>
> "/Users/mtutkowski/Documents/CloudStack/src/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
> >> line 433, in listUsers
> >> response = self.connection.marvin_request(command, data=postdata,
> >> response_type=response)
> >> TypeError: marvin_request() got an unexpected keyword argument 'data'
> >>
> >> --
> >> *Mike Tutkowski*
> >> *Senior CloudStack Developer, SolidFire Inc.*
> >> e: mike.tutkow...@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the cloud<
> http://solidfire.com/solution/overview/?video=play>
> >> *™*
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the cloud<
> http://solidfire.com/solution/overview/?video=play>
> > *™*
> >
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *™*
>


Re: [ACS41] Release process update

2013-06-03 Thread David Nalley
On Mon, Jun 3, 2013 at 12:41 PM, Chip Childers
 wrote:
> Hi,
>
> I've edited downloads.mdtext, and committed to staging.  I'll be
> building the DEB's and RPM's today.
>
> We have that new tomcat issue to address, which I'll do in the RPM
> build.  I think we have a decision to make...  announce 4.1 release with
> the permission defect or wait for a 4.1.1 to officially announce the release.
>
> IMO, I think we announce but note the issue and fix for those attempting
> to build their own RPMs from source.

I'd do the 4.1.0 release as planned with patched RPMs for the binaries
and notes about the problem, and at the same time, prepare a 4.1.1 and
get it underway.

--David


Re: [ACS41] Release process update

2013-06-03 Thread Joe Brockmeier
On Mon, Jun 3, 2013, at 11:41 AM, Chip Childers wrote:
> I've edited downloads.mdtext, and committed to staging.  I'll be
> building the DEB's and RPM's today.
> 
> We have that new tomcat issue to address, which I'll do in the RPM
> build.  I think we have a decision to make...  announce 4.1 release with
> the permission defect or wait for a 4.1.1 to officially announce the
> release.
> 
> IMO, I think we announce but note the issue and fix for those attempting
> to build their own RPMs from source.

I've already sent out the media alert to let folks know we're announcing
tomorrow, so I think the best course is to announce 4.1 with the caveats
and work on getting 4.1.1 out quickly. 

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


Re: Trouble with deployDataCenter.py

2013-06-03 Thread Mike Tutkowski
I have fixed this in a patch I submitted last week.

I'm not sure when it began, but I noticed it a long time ago and had just
sent out an e-mail then and corrected it in my sandbox.

Let me see if I can find what I did to fix it.


On Mon, Jun 3, 2013 at 10:09 AM, Will Stevens  wrote:

> Has anyone else experienced this?  I just pulled in the master code into my
> branch and now I am getting this in my dev environment.
>
> [DEBUG] Executing command line: python ../marvin/marvin/deployDataCenter.py
> -i devcloud.cfg
> Traceback (most recent call last):
>   File "../marvin/marvin/deployDataCenter.py", line 517, in 
> deploy.deploy()
>   File "../marvin/marvin/deployDataCenter.py", line 500, in deploy
> self.loadCfg()
>   File "../marvin/marvin/deployDataCenter.py", line 451, in loadCfg
> apiKey, securityKey = self.registerApiKey()
>   File "../marvin/marvin/deployDataCenter.py", line 390, in registerApiKey
> listuserRes = self.testClient.getApiClient().listUsers(listuser)
>   File
>
> "/mnt/hgfs/palo_alto/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
> line 2385, in listUsers
> response = self.connection.marvin_request(command, data=postdata,
> response_type=response)
> TypeError: marvin_request() got an unexpected keyword argument 'data'
>
> Thanks,
>
> ws
>
>
> On Mon, May 6, 2013 at 5:13 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com
> > wrote:
>
> > It looks like the marvin_request method in cloudstackConnection.py does
> not
> > have a parameter named 'data'.
> >
> > I changed the signature locally to the following and it works now:
> >
> > def marvin_request(self, cmd, response_type=None, method='GET', data=''):
> >
> >
> > On Mon, May 6, 2013 at 2:59 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com
> > > wrote:
> >
> > > I don't have much Python experience, but it looks like we're trying to
> > > pass in a named parameter that doesn't exist on the receiving side.
> > >
> > > Perhaps I need to update a Python package?
> > >
> > > def listUsers(self, command, postdata={}):
> > >
> > > response = listUsersResponse()
> > >
> > > response = self.connection.marvin_request(command,
> data=postdata,
> > > response_type=response)
> > >
> > > return response
> > >
> > >
> > > On Mon, May 6, 2013 at 12:04 PM, Mike Tutkowski <
> > > mike.tutkow...@solidfire.com> wrote:
> > >
> > >> Hi,
> > >>
> > >> I just updated to the latest today and ran deployDataCenter.py to
> build
> > a
> > >> DevCloud2 environment.
> > >>
> > >> The script is having trouble. Any thoughts on this? Has this worked
> > >> recently for anyone else?
> > >>
> > >> Thanks!
> > >>
> > >> mtutkowski-LT:devcloud mtutkowski$ python
> > >> ../marvin/marvin/deployDataCenter.py -i devcloud.cfg
> > >> Traceback (most recent call last):
> > >>   File "../marvin/marvin/deployDataCenter.py", line 476, in 
> > >> deploy.deploy()
> > >>   File "../marvin/marvin/deployDataCenter.py", line 459, in deploy
> > >> self.loadCfg()
> > >>   File "../marvin/marvin/deployDataCenter.py", line 410, in loadCfg
> > >> apiKey, securityKey = self.registerApiKey()
> > >>   File "../marvin/marvin/deployDataCenter.py", line 349, in
> > registerApiKey
> > >> listuserRes = self.testClient.getApiClient().listUsers(listuser)
> > >>   File
> > >>
> >
> "/Users/mtutkowski/Documents/CloudStack/src/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
> > >> line 433, in listUsers
> > >> response = self.connection.marvin_request(command, data=postdata,
> > >> response_type=response)
> > >> TypeError: marvin_request() got an unexpected keyword argument 'data'
> > >>
> > >> --
> > >> *Mike Tutkowski*
> > >> *Senior CloudStack Developer, SolidFire Inc.*
> > >> e: mike.tutkow...@solidfire.com
> > >> o: 303.746.7302
> > >> Advancing the way the world uses the cloud<
> > http://solidfire.com/solution/overview/?video=play>
> > >> *™*
> > >>
> > >
> > >
> > >
> > > --
> > > *Mike Tutkowski*
> > > *Senior CloudStack Developer, SolidFire Inc.*
> > > e: mike.tutkow...@solidfire.com
> > > o: 303.746.7302
> > > Advancing the way the world uses the cloud<
> > http://solidfire.com/solution/overview/?video=play>
> > > *™*
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud
> > *™*
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


RE: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Musayev, Ilya
How would this vote work? Is it consensus that wins?

> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Monday, June 03, 2013 9:47 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [VOTE] Pushback 4.2.0 Feature Freeze
> 
> Reminder to please VOTE here.  This vote will close tomorrow, and your
> opinion counts.
> 
> -chip
> 
> On Fri, May 31, 2013 at 11:00:21AM -0400, Chip Childers wrote:
> > Following our discussion on the proposal to push back the feature
> > freeze date for 4.2.0 [1], we have not yet achieved a clear consensus.
> Well...
> > we have already defined the "project rules" for figuring out what to do.
> > In out project by-laws [2], we have defined a "release plan" decision
> > as
> > follows:
> >
> > > 3.4.2. Release Plan
> > >
> > > Defines the timetable and work items for a release. The plan also
> > > nominates a Release Manager.
> > >
> > > A lazy majority of active committers is required for approval.
> > >
> > > Any active committer or PMC member may call a vote. The vote must
> > > occur on a project development mailing list.
> >
> > And our lazy majority is defined as:
> >
> > > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> > > votes and more binding +1 votes than binding -1 votes.
> >
> > Our current plan is the starting point, so this VOTE is a vote to
> > change the current plan.  We require a 72 hour window for this vote,
> > so IMO we are in an odd position where the feature freeze date is at
> > least extended until Tuesday of next week.
> >
> > Our current plan of record for 4.2.0 is at [3].
> >
> > [1] http://markmail.org/message/vi3nsd2yo763kzua
> > [2] http://s.apache.org/csbylaws
> > [3]
> >
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+
> > Release
> >
> > 
> >
> > I'd like to call a VOTE on the following:
> >
> > Proposal: Extend the feature freeze date for our 4.2.0 feature release
> > from today (2013-05-31) to 2013-06-28.  All other dates following the
> > feature freeze date in the plan would be pushed out 4 weeks as well.
> >
> > Please respond with one of the following:
> >
> > +1 : change the plan as listed above
> > +/-0 : no strong opinion, but leaning + or -
> > -1 : do not change the plan
> >
> > This vote will remain open until Tuesday morning US eastern time.
> >
> > -chip




Re: Trouble with deployDataCenter.py

2013-06-03 Thread Mike Tutkowski
In cloudstackConnection.py, I made the following change:

-def marvin_request(self, cmd, response_type=None, method='GET'):

+def marvin_request(self, cmd, response_type=None, method='GET',
data=''):
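
For anyone hitting the same TypeError, here is a minimal, self-contained
Python sketch of why the extra keyword matters. It is illustrative only (the
stub class below is an assumption, not the real cloudstackConnection code):
the generated cloudstackAPIClient methods call
marvin_request(command, data=postdata, response_type=response), so the
signature has to accept 'data' even when it is unused for GET requests.

# Illustrative stub only -- the real marvin_request builds and sends the
# HTTP request to the management server.
class StubConnection:
    def marvin_request(self, cmd, response_type=None, method='GET', data=''):
        # Accepting 'data' keeps calls like listUsers(), which pass
        # data=postdata, from raising the TypeError shown in this thread.
        return {'cmd': cmd, 'method': method, 'data': data}

conn = StubConnection()
print(conn.marvin_request('listUsers', response_type=None, data={}))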


On Mon, Jun 3, 2013 at 11:03 AM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> I have fixed this in a patch I submitted last week.
>
> I'm not sure when it began, but I noticed it a long time ago and had just
> sent out an e-mail then and corrected it in my sandbox.
>
> Let me see if I can find what I did to fix it.
>
>
> On Mon, Jun 3, 2013 at 10:09 AM, Will Stevens wrote:
>
>> Has anyone else experienced this?  I just pulled in the master code into my
>> branch and now I am getting this in my dev environment.
>>
>> [DEBUG] Executing command line: python
>> ../marvin/marvin/deployDataCenter.py
>> -i devcloud.cfg
>> Traceback (most recent call last):
>>   File "../marvin/marvin/deployDataCenter.py", line 517, in 
>> deploy.deploy()
>>   File "../marvin/marvin/deployDataCenter.py", line 500, in deploy
>> self.loadCfg()
>>   File "../marvin/marvin/deployDataCenter.py", line 451, in loadCfg
>> apiKey, securityKey = self.registerApiKey()
>>   File "../marvin/marvin/deployDataCenter.py", line 390, in registerApiKey
>> listuserRes = self.testClient.getApiClient().listUsers(listuser)
>>   File
>>
>> "/mnt/hgfs/palo_alto/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
>> line 2385, in listUsers
>> response = self.connection.marvin_request(command, data=postdata,
>> response_type=response)
>> TypeError: marvin_request() got an unexpected keyword argument 'data'
>>
>> Thanks,
>>
>> ws
>>
>>
>> On Mon, May 6, 2013 at 5:13 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com
>> > wrote:
>>
>> > It looks like the marvin_request method in cloudstackConnection.py does
>> not
>> > have a parameter named 'data'.
>> >
>> > I changed the signature locally to the following and it works now:
>> >
>> > def marvin_request(self, cmd, response_type=None, method='GET',
>> data=''):
>> >
>> >
>> > On Mon, May 6, 2013 at 2:59 PM, Mike Tutkowski <
>> > mike.tutkow...@solidfire.com
>> > > wrote:
>> >
>> > > I don't have much Python experience, but it looks like we're trying to
>> > > pass in a named parameter that doesn't exist on the receiving side.
>> > >
>> > > Perhaps I need to update a Python package?
>> > >
>> > > def listUsers(self, command, postdata={}):
>> > >
>> > > response = listUsersResponse()
>> > >
>> > > response = self.connection.marvin_request(command,
>> data=postdata,
>> > > response_type=response)
>> > >
>> > > return response
>> > >
>> > >
>> > > On Mon, May 6, 2013 at 12:04 PM, Mike Tutkowski <
>> > > mike.tutkow...@solidfire.com> wrote:
>> > >
>> > >> Hi,
>> > >>
>> > >> I just updated to the latest today and ran deployDataCenter.py to
>> build
>> > a
>> > >> DevCloud2 environment.
>> > >>
>> > >> The script is having trouble. Any thoughts on this? Has this worked
>> > >> recently for anyone else?
>> > >>
>> > >> Thanks!
>> > >>
>> > >> mtutkowski-LT:devcloud mtutkowski$ python
>> > >> ../marvin/marvin/deployDataCenter.py -i devcloud.cfg
>> > >> Traceback (most recent call last):
>> > >>   File "../marvin/marvin/deployDataCenter.py", line 476, in 
>> > >> deploy.deploy()
>> > >>   File "../marvin/marvin/deployDataCenter.py", line 459, in deploy
>> > >> self.loadCfg()
>> > >>   File "../marvin/marvin/deployDataCenter.py", line 410, in loadCfg
>> > >> apiKey, securityKey = self.registerApiKey()
>> > >>   File "../marvin/marvin/deployDataCenter.py", line 349, in
>> > registerApiKey
>> > >> listuserRes = self.testClient.getApiClient().listUsers(listuser)
>> > >>   File
>> > >>
>> >
>> "/Users/mtutkowski/Documents/CloudStack/src/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
>> > >> line 433, in listUsers
>> > >> response = self.connection.marvin_request(command, data=postdata,
>> > >> response_type=response)
>> > >> TypeError: marvin_request() got an unexpected keyword argument 'data'
>> > >>
>> > >> --
>> > >> *Mike Tutkowski*
>> > >> *Senior CloudStack Developer, SolidFire Inc.*
>> > >> e: mike.tutkow...@solidfire.com
>> > >> o: 303.746.7302
>> > >> Advancing the way the world uses the cloud<
>> > http://solidfire.com/solution/overview/?video=play>
>> > >> *™*
>> > >>
>> > >
>> > >
>> > >
>> > > --
>> > > *Mike Tutkowski*
>> > > *Senior CloudStack Developer, SolidFire Inc.*
>> > > e: mike.tutkow...@solidfire.com
>> > > o: 303.746.7302
>> > > Advancing the way the world uses the cloud<
>> > http://solidfire.com/solution/overview/?video=play>
>> > > *™*
>> > >
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkow...@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud
>> > *™*
>> >
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior

Re: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Chip Childers
On Mon, Jun 03, 2013 at 05:04:54PM +, Musayev, Ilya wrote:
> How would this vote work? Is it consensus that wins?

As stated below:

> > > > 3.4.2. Release Plan
> > > >
> > > > Defines the timetable and work items for a release. The plan also
> > > > nominates a Release Manager.
> > > >
> > > > A lazy majority of active committers is required for approval.
> > > >
> > > > Any active committer or PMC member may call a vote. The vote must
> > > > occur on a project development mailing list.


> > > > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> > > > votes and more binding +1 votes than binding -1 votes.


Re: Trouble with deployDataCenter.py

2013-06-03 Thread Mike Tutkowski
Surprisingly this has been like this for a long time.

It kind of makes me wonder if anyone uses DevCloud. I use it all the time.
If others were using it, I would have expected this to be corrected like a
month or two ago.

I am "alone" in using DevCloud?


On Mon, Jun 3, 2013 at 11:05 AM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> In cloudstackConnection.py, I made the following change:
>
> -def marvin_request(self, cmd, response_type=None, method='GET'):
>
> +def marvin_request(self, cmd, response_type=None, method='GET',
> data=''):
>
>
> On Mon, Jun 3, 2013 at 11:03 AM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
>> I have fixed this in a patch I submitted last week.
>>
>> I'm not sure when it began, but I noticed it a long time ago and had just
>> sent out an e-mail then and corrected it in my sandbox.
>>
>> Let me see if I can find what I did to fix it.
>>
>>
>> On Mon, Jun 3, 2013 at 10:09 AM, Will Stevens wrote:
>>
>>> Has anyone else experienced this?  I just pulled in the master code into
>>> my
>>> branch and now I am getting this in my dev environment.
>>>
>>> [DEBUG] Executing command line: python
>>> ../marvin/marvin/deployDataCenter.py
>>> -i devcloud.cfg
>>> Traceback (most recent call last):
>>>   File "../marvin/marvin/deployDataCenter.py", line 517, in 
>>> deploy.deploy()
>>>   File "../marvin/marvin/deployDataCenter.py", line 500, in deploy
>>> self.loadCfg()
>>>   File "../marvin/marvin/deployDataCenter.py", line 451, in loadCfg
>>> apiKey, securityKey = self.registerApiKey()
>>>   File "../marvin/marvin/deployDataCenter.py", line 390, in
>>> registerApiKey
>>> listuserRes = self.testClient.getApiClient().listUsers(listuser)
>>>   File
>>>
>>> "/mnt/hgfs/palo_alto/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
>>> line 2385, in listUsers
>>> response = self.connection.marvin_request(command, data=postdata,
>>> response_type=response)
>>> TypeError: marvin_request() got an unexpected keyword argument 'data'
>>>
>>> Thanks,
>>>
>>> ws
>>>
>>>
>>> On Mon, May 6, 2013 at 5:13 PM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com
>>> > wrote:
>>>
>>> > It looks like the marvin_request method in cloudstackConnection.py
>>> does not
>>> > have a parameter named 'data'.
>>> >
>>> > I changed the signature locally to the following and it works now:
>>> >
>>> > def marvin_request(self, cmd, response_type=None, method='GET',
>>> data=''):
>>> >
>>> >
>>> > On Mon, May 6, 2013 at 2:59 PM, Mike Tutkowski <
>>> > mike.tutkow...@solidfire.com
>>> > > wrote:
>>> >
>>> > > I don't have much Python experience, but it looks like we're trying
>>> to
>>> > > pass in a named parameter that doesn't exist on the receiving side.
>>> > >
>>> > > Perhaps I need to update a Python package?
>>> > >
>>> > > def listUsers(self, command, postdata={}):
>>> > >
>>> > > response = listUsersResponse()
>>> > >
>>> > > response = self.connection.marvin_request(command,
>>> data=postdata,
>>> > > response_type=response)
>>> > >
>>> > > return response
>>> > >
>>> > >
>>> > > On Mon, May 6, 2013 at 12:04 PM, Mike Tutkowski <
>>> > > mike.tutkow...@solidfire.com> wrote:
>>> > >
>>> > >> Hi,
>>> > >>
>>> > >> I just updated to the latest today and ran deployDataCenter.py to
>>> build
>>> > a
>>> > >> DevCloud2 environment.
>>> > >>
>>> > >> The script is having trouble. Any thoughts on this? Has this worked
>>> > >> recently for anyone else?
>>> > >>
>>> > >> Thanks!
>>> > >>
>>> > >> mtutkowski-LT:devcloud mtutkowski$ python
>>> > >> ../marvin/marvin/deployDataCenter.py -i devcloud.cfg
>>> > >> Traceback (most recent call last):
>>> > >>   File "../marvin/marvin/deployDataCenter.py", line 476, in 
>>> > >> deploy.deploy()
>>> > >>   File "../marvin/marvin/deployDataCenter.py", line 459, in deploy
>>> > >> self.loadCfg()
>>> > >>   File "../marvin/marvin/deployDataCenter.py", line 410, in loadCfg
>>> > >> apiKey, securityKey = self.registerApiKey()
>>> > >>   File "../marvin/marvin/deployDataCenter.py", line 349, in
>>> > registerApiKey
>>> > >> listuserRes = self.testClient.getApiClient().listUsers(listuser)
>>> > >>   File
>>> > >>
>>> >
>>> "/Users/mtutkowski/Documents/CloudStack/src/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
>>> > >> line 433, in listUsers
>>> > >> response = self.connection.marvin_request(command,
>>> data=postdata,
>>> > >> response_type=response)
>>> > >> TypeError: marvin_request() got an unexpected keyword argument
>>> 'data'
>>> > >>
>>> > >> --
>>> > >> *Mike Tutkowski*
>>> > >> *Senior CloudStack Developer, SolidFire Inc.*
>>> > >> e: mike.tutkow...@solidfire.com
>>> > >> o: 303.746.7302
>>> > >> Advancing the way the world uses the cloud<
>>> > http://solidfire.com/solution/overview/?video=play>
>>> > >> *™*
>>> > >>
>>> > >
>>> > >
>>> > >
>>> > > --
>>> > > *Mike Tutkowski*
>>> > > *Senior CloudStack Developer, SolidFire Inc

Re: StaticNatRule vs StaticNat

2013-06-03 Thread Alena Prokharchyk
On 6/3/13 8:33 AM, "Koushik Das"  wrote:

>What is the difference between these interfaces? I see that StaticNat is
>used in network elements.

StaticNat maps a user VM to a public IP address.


>And StaticNatRule is used elsewhere, including in the APIs.

Legacy code. In the 2.1.x version of CS there were no firewall rules, so
to give access to a certain port of a VM mapped to a public IP via
static NAT, the createIpForwardingRule API command had to be called. The rule
created through this command had the purpose StaticNat in the firewall_rules
table, and used the StaticNatRule interface.

> Given that PF and FW rules use a single interface everywhere, should a
>similar thing be done for static NAT rules as well?

Be careful: customers upgraded from 2.1.x CS might have rules with the
StaticNat purpose. If you decide to revoke the corresponding code, make sure
to fix the DB upgrade scripts to transform rules with the StaticNat
purpose into rules with the Firewall purpose.
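
As a purely hypothetical illustration of such an upgrade step (the column
name, values, and function below are assumptions, not the actual CloudStack
schema or upgrade scripts), the transformation could look roughly like this:

# Hypothetical sketch only -- not taken from the real upgrade code.
def migrate_legacy_static_nat_rules(cursor):
    # Rewrite 2.1.x-era rules created via createIpForwardingRule so they
    # carry the Firewall purpose expected by the newer code paths.
    cursor.execute(
        "UPDATE firewall_rules SET purpose = 'Firewall' "
        "WHERE purpose = 'StaticNat'"
    )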

>
>-Koushik
>




Re: [MERGE]object_store branch into master

2013-06-03 Thread Min Chen
Chip/John,

This thread has become very hard to follow due to several technical
debates mixed together. Chip earlier made a good suggestion that we should
start separate threads for several important architectural issues raised
by John so that the community can get a clear grasp of the issues under debate and
reach a wise conclusion. If there is no objection, we are going to do that
right now. If we understood this thread correctly, we
boiled it down to the following 3 major technical issues:
1. Missing capacity planning in NFS cache storage implementation.
2. Error handling in case of S3 as native secondary storage.
3. S3TemplateDownloader implementation issue.
If we didn't miss anything, we will start these 3 DISCUSS threads shortly.

Thanks
-min

On 6/3/13 7:18 AM, "John Burwell"  wrote:

>Edison/Chip,
>
>Please see my comments in-line.
>
>Thanks,
>-John
>
>On May 31, 2013, at 4:04 PM, Chip Childers 
>wrote:
>
>> Comments inline:
>> 
>> On Thu, May 30, 2013 at 09:42:29PM +, Edison Su wrote:
>>> 
>>> 
 -Original Message-
 From: John Burwell [mailto:jburw...@basho.com]
 Sent: Thursday, May 30, 2013 7:43 AM
 To: dev@cloudstack.apache.org
 Subject: Re: [MERGE]object_store branch into master
 
 It feels like we have jumped to a solution without completely
understanding
 the scope of the problem and the associated assumptions.  We have a
 community of hypervisor experts who we should consult to ensure we
have
 the best solution.  As such, I recommend mailing the list with the
specific
 hypervisors and functions that you have been unable to interface to
storage
 that does not present a filesystem.  I do not recall seeing such a
discussion on
 the list previously.
>>> 
>>> If people using zone-wide primary storage, like, ceph/solidfire, then
>>>suddenly, there is no need for nfs cache storage, as zone-wide storage
>>>can be treated as both primary/secondary storage, S3 as the backup
>>>storage. It's a simple but powerful solution.
>>> Why we can't just add code to support this exciting new solutions?
>>>It's hard to do it on master branch, that's why Min and I worked hard
>>>to refactor the code, and remove nfs secondary storage dependency from
>>>management server as much as possible. All we know, nfs secondary
>>>storage is not scalable, not matter how fancy aging policy you have,
>>>how advanced capacity planner you have.
>>> 
>>> And that's one of reason I don't care that much about the issue with
>>>nfs cache storage, couldn't we put our energy on cloud style storage
>>>solution, instead of on the un-scalable storage?
>> 
>> Per your comment about you and Min working hard on this: nobody is
>> saying that you didn't.  This isn't personal (or shouldn't be).  These
>> are questions that are part of a consensus-based approach to
>> development.
>> 
 As I understand the goals of this enhancement, we will support
additional
 secondary storage types and removing the assumption that secondary
 storage will always be NFS or have a filesystem.  As such, when a
non-NFS
 type of secondary storage is employed, NFS is no longer the
repository of
 record for this data.  We can always exceed available space in the
repository
 of record, and the failure scenarios are relatively well understood
(4.1.0) --
 operations will fail quickly and obviously.  However, as a transitory
staging
 storage mechanism (4.2.0), the expectation of the user is the NFS
storage will
 not be as reliable or large.  If the only solution we can provide for
this
 problem is to recommend an NFS "cache" that is equal to the size of
the
 object store itself then we have little to no progress addressing our
user's
>>> 
>>> No, it's not true.  Admin can add multiple NFS cache storages if they
>>>want, there is no such requirement that NFS storage will be the same
>>>size of object store, I can't be that stupid.
>>> It's the same thing that we are doing on the master branch: admin
>>>knows that one NFS secondary storage is not enough, so they can add
>>>multiple NFS secondary storage. And on the master branch,
>>> There is no capacity planner for NFS secondary storage, if the code
>>>just randomly chooses one of NFS secondary storages, even if one of
>>>them are full. Yes, NFS secondary storage on master can be full, there
>>>is no way to aging out.
>>> 
>>> On the current object_store branch, it has the same behavior, admin
>>>can add multiple NFS cache storages, no capacity planner. While, in
>>>case nfs cache storage is full, admin can just simply remove the db
>>>entry related to cached object, and cleanup NFS cache storage, then
>>>suddenly, everything just works.
>>> 
>>> From implementation point of view, I don't think there is any
>>>difference. 
>> 
>> It's an expectation issue.  Operators expect to be able to manage their
>> 

Re: [ACS41] Upgrade from 2.2.14 failed

2013-06-03 Thread Alena Prokharchyk
I will look into it.

On 6/3/13 6:47 AM, "nicolas.lamira...@orange.com"
 wrote:

>I create an issue : https://issues.apache.org/jira/browse/CLOUDSTACK-2822
>
>-- 
>Nicolas Lamirault
>
>__
>___
>
>This message and its attachments may contain confidential or privileged
>information that may be protected by law;
>they should not be distributed, used or copied without authorisation.
>If you have received this email in error, please notify the sender and
>delete this message and its attachments.
>As emails may be altered, France Telecom - Orange is not liable for
>messages that have been modified, changed or falsified.
>Thank you.
>
>




Re: Trouble with deployDataCenter.py

2013-06-03 Thread Will Stevens
Thanks Mike.  Ya, I also did the same change locally and then did the
following to not track the hack in my branch.

git update-index --assume-unchanged
tools/marvin/marvin/cloudstackConnection.py

Thanks for submitting a patch for that.

Cheers,

Will




On Mon, Jun 3, 2013 at 1:05 PM, Mike Tutkowski  wrote:

> In cloudstackConnection.py, I made the following change:
>
> -def marvin_request(self, cmd, response_type=None, method='GET'):
>
> +def marvin_request(self, cmd, response_type=None, method='GET',
> data=''):
>
>
> On Mon, Jun 3, 2013 at 11:03 AM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
> > I have fixed this in a patch I submitted last week.
> >
> > I'm not sure when it began, but I noticed it a long time ago and had just
> > sent out an e-mail then and corrected it in my sandbox.
> >
> > Let me see if I can find what I did to fix it.
> >
> >
> > On Mon, Jun 3, 2013 at 10:09 AM, Will Stevens  >wrote:
> >
> >> Has anyone else experienced this?  I just pulled in the master code into
> my
> >> branch and now I am getting this in my dev environment.
> >>
> >> [DEBUG] Executing command line: python
> >> ../marvin/marvin/deployDataCenter.py
> >> -i devcloud.cfg
> >> Traceback (most recent call last):
> >>   File "../marvin/marvin/deployDataCenter.py", line 517, in 
> >> deploy.deploy()
> >>   File "../marvin/marvin/deployDataCenter.py", line 500, in deploy
> >> self.loadCfg()
> >>   File "../marvin/marvin/deployDataCenter.py", line 451, in loadCfg
> >> apiKey, securityKey = self.registerApiKey()
> >>   File "../marvin/marvin/deployDataCenter.py", line 390, in
> registerApiKey
> >> listuserRes = self.testClient.getApiClient().listUsers(listuser)
> >>   File
> >>
> >>
> "/mnt/hgfs/palo_alto/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
> >> line 2385, in listUsers
> >> response = self.connection.marvin_request(command, data=postdata,
> >> response_type=response)
> >> TypeError: marvin_request() got an unexpected keyword argument 'data'
> >>
> >> Thanks,
> >>
> >> ws
> >>
> >>
> >> On Mon, May 6, 2013 at 5:13 PM, Mike Tutkowski <
> >> mike.tutkow...@solidfire.com
> >> > wrote:
> >>
> >> > It looks like the marvin_request method in cloudstackConnection.py
> does
> >> not
> >> > have a parameter named 'data'.
> >> >
> >> > I changed the signature locally to the following and it works now:
> >> >
> >> > def marvin_request(self, cmd, response_type=None, method='GET',
> >> data=''):
> >> >
> >> >
> >> > On Mon, May 6, 2013 at 2:59 PM, Mike Tutkowski <
> >> > mike.tutkow...@solidfire.com
> >> > > wrote:
> >> >
> >> > > I don't have much Python experience, but it looks like we're trying
> to
> >> > > pass in a named parameter that doesn't exist on the receiving side.
> >> > >
> >> > > Perhaps I need to update a Python package?
> >> > >
> >> > > def listUsers(self, command, postdata={}):
> >> > >
> >> > > response = listUsersResponse()
> >> > >
> >> > > response = self.connection.marvin_request(command,
> >> data=postdata,
> >> > > response_type=response)
> >> > >
> >> > > return response
> >> > >
> >> > >
> >> > > On Mon, May 6, 2013 at 12:04 PM, Mike Tutkowski <
> >> > > mike.tutkow...@solidfire.com> wrote:
> >> > >
> >> > >> Hi,
> >> > >>
> >> > >> I just updated to the latest today and ran deployDataCenter.py to
> >> build
> >> > a
> >> > >> DevCloud2 environment.
> >> > >>
> >> > >> The script is having trouble. Any thoughts on this? Has this worked
> >> > >> recently for anyone else?
> >> > >>
> >> > >> Thanks!
> >> > >>
> >> > >> mtutkowski-LT:devcloud mtutkowski$ python
> >> > >> ../marvin/marvin/deployDataCenter.py -i devcloud.cfg
> >> > >> Traceback (most recent call last):
> >> > >>   File "../marvin/marvin/deployDataCenter.py", line 476, in
> 
> >> > >> deploy.deploy()
> >> > >>   File "../marvin/marvin/deployDataCenter.py", line 459, in deploy
> >> > >> self.loadCfg()
> >> > >>   File "../marvin/marvin/deployDataCenter.py", line 410, in loadCfg
> >> > >> apiKey, securityKey = self.registerApiKey()
> >> > >>   File "../marvin/marvin/deployDataCenter.py", line 349, in
> >> > registerApiKey
> >> > >> listuserRes =
> self.testClient.getApiClient().listUsers(listuser)
> >> > >>   File
> >> > >>
> >> >
> >>
> "/Users/mtutkowski/Documents/CloudStack/src/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
> >> > >> line 433, in listUsers
> >> > >> response = self.connection.marvin_request(command,
> data=postdata,
> >> > >> response_type=response)
> >> > >> TypeError: marvin_request() got an unexpected keyword argument
> 'data'
> >> > >>
> >> > >> --
> >> > >> *Mike Tutkowski*
> >> > >> *Senior CloudStack Developer, SolidFire Inc.*
> >> > >> e: mike.tutkow...@solidfire.com
> >> > >> o: 303.746.7302
> >> > >> Advancing the way the world uses the cloud<
> >> > http://solidfire.com/solution/overview/?video=play>
> >> > >> *™*
> >> > >>
> >> > >
> >> > >
> >> > 

Re: [MERGE]object_store branch into master

2013-06-03 Thread Chip Childers
On Mon, Jun 03, 2013 at 05:09:24PM +, Min Chen wrote:
> Chip/John,
> 
>   This thread has become very hard to follow due to several technical
> debates mixed together. Chip earlier made a good suggestion that we should
> start separate threads for several important architectural issues raised
> by John so that the community can get a clear grasp of the issues under debate and
> reach a wise conclusion. If there is no objection, we are going to do that
> right now. If we understood this thread correctly, we
> boiled it down to the following 3 major technical issues:
>   1. Missing capacity planning in NFS cache storage implementation.
>   2. Error handling in case of S3 as native secondary storage.
>   3. S3TemplateDownloader implementation issue.
> If we didn't miss anything, we will start these 3 DISCUSS threads shortly.
> 
>   Thanks
>   -min

+1 - do it!


Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread Mike Tutkowski
I agree on merging Wei's feature first, then mine.

If his feature is for KVM only, then it is a non-issue as I don't support
KVM in 4.2.
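
For readers who want to see the hypervisor-side mechanism Wei's KVM work maps
to, here is a small libvirt-python sketch. It is an assumption for
illustration only (domain name, disk alias, and values are made up), not code
from the disk_io_throttling branch:

# Illustrative only: per-disk IOPS limits on a running KVM guest via libvirt.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('i-2-10-VM')   # hypothetical instance name

# Discrete read/write limits (the 'total_iops_sec' key is the alternative
# and cannot be combined with the read/write keys in the same call).
params = {
    'read_iops_sec': 500,
    'write_iops_sec': 250,
}
dom.setBlockIoTune('vda', params, libvirt.VIR_DOMAIN_AFFECT_LIVE)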


On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU  wrote:

> John,
>
> For the billing, as no one works on billing now, users need to calculate
> the billing by themselves. They can get the service_offering and
> disk_offering of VMs and volumes for calculation. Of course it is better
> to tell user the exact limitation value of individual volume, and network
> rate limitation for nics as well. I can work on it later. Do you think it
> is a part of I/O throttling?
>
> Sorry, I misunderstood the second question.
>
> Agree with what you said about the two features.
>
> -Wei
>
>
> 2013/6/3 John Burwell 
>
> > Wei,
> >
> >
> > On Jun 3, 2013, at 2:13 AM, Wei ZHOU  wrote:
> >
> > > Hi John, Mike
> > >
> > > I hope Mike's answer helps you. I am trying to add more.
> > >
> > > (1) I think billing should depend on IO statistics rather than IOPS
> > > limitation. Please review disk_io_stat if you have time.   disk_io_stat
> > can
> > > get the IO statistics including bytes/iops read/write for an individual
> > > virtual machine.
> >
> > Going by the AWS model, customers are billed more for volumes with
> > provisioned IOPS, as well as, for those operations (
> > http://aws.amazon.com/ebs/).  I would imagine our users would like the
> > option to employ similar cost models.  Could an operator implement such a
> > billing model in the current patch?
> >
> > >
> > > (2) Do you mean IOPS runtime change? KVM supports setting IOPS/BPS
> > > limitation for a running virtual machine through command line. However,
> > > CloudStack does not support changing the parameters of a created
> offering
> > > (computer offering or disk offering).
> >
> > I meant at the Java interface level.  I apologize for being unclear.  Can
> > we more generalize allocation algorithms with a set of interfaces that
> > describe the service guarantees provided by a resource?
> >
> > >
> > > (3) It is a good question. Maybe it is better to commit Mike's patch
> > after
> > > disk_io_throttling as Mike needs to consider the limitation in
> hypervisor
> > > type, I think.
> >
> > I will expand on my thoughts in a later response to Mike regarding the
> > touch points between these two features.  I think that disk_io_throttling
> > will need to be merged before SolidFire, but I think we need closer
> > coordination between the branches (possibly have have solidfire track
> > disk_io_throttling) to coordinate on this issue.
> >
> > >
> > > - Wei
> > >
> > >
> > > 2013/6/3 John Burwell 
> > >
> > >> Mike,
> > >>
> > >> The things I want to understand are the following:
> > >>
> > >>   1) Is there value in having IOPS policies captured in a common
> > >> data model (e.g. for billing/usage purposes, expressing offerings).
> > >>2) Should there be a common interface model for reasoning about IOP
> > >> provisioning at runtime?
> > >>3) How are conflicting provisioned IOPS configurations between a
> > >> hypervisor and storage device reconciled?  In particular, a scenario where
> > >> a user is led to believe (and billed) for more IOPS configured for a VM than a
> > >> storage device has been configured to deliver.  Another scenario could be a
> > >> consistent configuration between a VM and a storage device at creation
> > >> time, but a later modification to the storage device introduces logical
> > >> inconsistency.
> > >>
> > >> Thanks,
> > >> -John
> > >>
> > >> On Jun 2, 2013, at 8:38 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com>
> > >> wrote:
> > >>
> > >> Hi John,
> > >>
> > >> I believe Wei's feature deals with controlling the max number of IOPS
> > from
> > >> the hypervisor side.
> > >>
> > >> My feature is focused on controlling IOPS from the storage system
> side.
> > >>
> > >> I hope that helps. :)
> > >>
> > >>
> > >> On Sun, Jun 2, 2013 at 6:35 PM, John Burwell 
> > wrote:
> > >>
> > >>> Wei,
> > >>>
> > >>> My opinion is that no features should be merged until all functional
> > >>> issues have been resolved and it is ready to turn over to test.
>  Until
> > >> the
> > >>> total Ops vs discrete read/write ops issue is addressed and
> re-reviewed
> > >> by
> > >>> Wido, I don't think this criteria has been satisfied.
> > >>>
> > >>> Also, how does this work intersect/compliment the SolidFire patch (
> > >>> https://reviews.apache.org/r/11479/)?  As I understand it that work
> is
> > >>> also involves provisioned IOPS.  I would like to ensure we don't
> have a
> > >>> scenario where provisioned IOPS in KVM and SolidFire are
> unnecessarily
> > >>> incompatible.
> > >>>
> > >>> Thanks,
> > >>> -John
> > >>>
> > >>> On Jun 1, 2013, at 6:47 AM, Wei ZHOU  wrote:
> > >>>
> > >>> Wido,
> > >>>
> > >>>
> > >>> Sure. I will change it next week.
> > >>>
> > >>>
> > >>> -Wei
> > >>>
> > >>>
> > >>>
> > >>> 2013/6/1 Wido den Hollander 
> > >>>
> > >>>
> > >>> Hi Wei,
> > >>>
> > >>>
> > >>>
> > >>> On 06

Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread Kelcey Jamison Damage
Is there any plan on supporting KVM in the patch cycle post 4.2?

- Original Message -
From: "Mike Tutkowski" 
To: dev@cloudstack.apache.org
Sent: Monday, June 3, 2013 10:12:32 AM
Subject: Re: [MERGE] disk_io_throttling to MASTER

I agree on merging Wei's feature first, then mine.

If his feature is for KVM only, then it is a non-issue as I don't support
KVM in 4.2.


On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU  wrote:

> John,
>
> For the billing, as no one works on billing now, users need to calculate
> the billing by themselves. They can get the service_offering and
> disk_offering of VMs and volumes for calculation. Of course it is better
> to tell user the exact limitation value of individual volume, and network
> rate limitation for nics as well. I can work on it later. Do you think it
> is a part of I/O throttling?
>
> Sorry, I misunderstood the second question.
>
> Agree with what you said about the two features.
>
> -Wei
>
>
> 2013/6/3 John Burwell 
>
> > Wei,
> >
> >
> > On Jun 3, 2013, at 2:13 AM, Wei ZHOU  wrote:
> >
> > > Hi John, Mike
> > >
> > > I hope Mike's answer helps you. I am trying to add more.
> > >
> > > (1) I think billing should depend on IO statistics rather than IOPS
> > > limitation. Please review disk_io_stat if you have time.   disk_io_stat
> > can
> > > get the IO statistics including bytes/iops read/write for an individual
> > > virtual machine.
> >
> > Going by the AWS model, customers are billed more for volumes with
> > provisioned IOPS, as well as, for those operations (
> > http://aws.amazon.com/ebs/).  I would imagine our users would like the
> > option to employ similar cost models.  Could an operator implement such a
> > billing model in the current patch?
> >
> > >
> > > (2) Do you mean IOPS runtime change? KVM supports setting IOPS/BPS
> > > limitation for a running virtual machine through command line. However,
> > > CloudStack does not support changing the parameters of a created
> offering
> > > (computer offering or disk offering).
> >
> > I meant at the Java interface level.  I apologize for being unclear.  Can
> > we more generalize allocation algorithms with a set of interfaces that
> > describe the service guarantees provided by a resource?
> >
> > >
> > > (3) It is a good question. Maybe it is better to commit Mike's patch
> > after
> > > disk_io_throttling as Mike needs to consider the limitation in
> hypervisor
> > > type, I think.
> >
> > I will expand on my thoughts in a later response to Mike regarding the
> > touch points between these two features.  I think that disk_io_throttling
> > will need to be merged before SolidFire, but I think we need closer
> > coordination between the branches (possibly have have solidfire track
> > disk_io_throttling) to coordinate on this issue.
> >
> > >
> > > - Wei
> > >
> > >
> > > 2013/6/3 John Burwell 
> > >
> > >> Mike,
> > >>
> > >> The things I want to understand are the following:
> > >>
> > >>   1) Is there value in having IOPS policies captured in a common
> > >> data model (e.g. for billing/usage purposes, expressing offerings).
> > >>2) Should there be a common interface model for reasoning about IOP
> > >> provisioning at runtime?
> > >>3) How are conflicting provisioned IOPS configurations between a
> > >> hypervisor and storage device reconciled?  In particular, a scenario where
> > >> a user is led to believe (and billed) for more IOPS configured for a VM than a
> > >> storage device has been configured to deliver.  Another scenario could be a
> > >> consistent configuration between a VM and a storage device at creation
> > >> time, but a later modification to the storage device introduces logical
> > >> inconsistency.
> > >>
> > >> Thanks,
> > >> -John
> > >>
> > >> On Jun 2, 2013, at 8:38 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com>
> > >> wrote:
> > >>
> > >> Hi John,
> > >>
> > >> I believe Wei's feature deals with controlling the max number of IOPS
> > from
> > >> the hypervisor side.
> > >>
> > >> My feature is focused on controlling IOPS from the storage system
> side.
> > >>
> > >> I hope that helps. :)
> > >>
> > >>
> > >> On Sun, Jun 2, 2013 at 6:35 PM, John Burwell 
> > wrote:
> > >>
> > >>> Wei,
> > >>>
> > >>> My opinion is that no features should be merged until all functional
> > >>> issues have been resolved and it is ready to turn over to test.
>  Until
> > >> the
> > >>> total Ops vs discrete read/write ops issue is addressed and
> re-reviewed
> > >> by
> > >>> Wido, I don't think this criteria has been satisfied.
> > >>>
> > >>> Also, how does this work intersect/compliment the SolidFire patch (
> > >>> https://reviews.apache.org/r/11479/)?  As I understand it that work
> is
> > >>> also involves provisioned IOPS.  I would like to ensure we don't
> have a
> > >>> scenario where provisioned IOPS in KVM and SolidFire are
> unnecessarily
> > >>> incompatible.
> > >>>
> > >>> Thanks,
> > >>> -John
> > >>>
> > >>> On Jun 1, 2013, at 6:47 AM, Wei ZHOU  wrote:

RE: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Clayton Weise
+1 to extend the feature freeze date.

-Original Message-
From: Chip Childers [mailto:chip.child...@sungard.com] 
Sent: Friday, May 31, 2013 8:00 AM
To: dev@cloudstack.apache.org
Subject: [VOTE] Pushback 4.2.0 Feature Freeze

Following our discussion on the proposal to push back the feature freeze
date for 4.2.0 [1], we have not yet achieved a clear consensus.  Well...  
we have already defined the "project rules" for figuring out what to do.
In out project by-laws [2], we have defined a "release plan" decision as
follows:

> 3.4.2. Release Plan
> 
> Defines the timetable and work items for a release. The plan also
> nominates a Release Manager.
> 
> A lazy majority of active committers is required for approval.
> 
> Any active committer or PMC member may call a vote. The vote must occur
> on a project development mailing list.

And our lazy majority is defined as:

> 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> votes and more binding +1 votes than binding -1 votes.

Our current plan is the starting point, so this VOTE is a vote to change
the current plan.  We require a 72 hour window for this vote, so IMO we are
in an odd position where the feature freeze date is at least extended until 
Tuesday of next week.

Our current plan of record for 4.2.0 is at [3].

[1] http://markmail.org/message/vi3nsd2yo763kzua
[2] http://s.apache.org/csbylaws
[3] 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+Release



I'd like to call a VOTE on the following:

Proposal: Extend the feature freeze date for our 4.2.0 feature release
from today (2013-05-31) to 2013-06-28.  All other dates following the
feature freeze date in the plan would be pushed out 4 weeks as well.

Please respond with one of the following:

+1 : change the plan as listed above
+/-0 : no strong opinion, but leaning + or -
-1 : do not change the plan

This vote will remain open until Tuesday morning US eastern time.

-chip


Re: Trouble with deployDataCenter.py

2013-06-03 Thread Will Stevens
I think a lot of people use DevCloud but they don't redeploy very often so
bugs like this don't get noticed.  I use DevCloud all the time.


On Mon, Jun 3, 2013 at 1:07 PM, Mike Tutkowski  wrote:

> Surprisingly this has been like this for a long time.
>
> It kind of makes me wonder if anyone uses DevCloud. I use it all the time.
> If others were using it, I would have expected this to be corrected like a
> month or two ago.
>
> I am "alone" in using DevCloud?
>
>
> On Mon, Jun 3, 2013 at 11:05 AM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
> > In cloudstackConnection.py, I made the following change:
> >
> > -def marvin_request(self, cmd, response_type=None, method='GET'):
> >
> > +def marvin_request(self, cmd, response_type=None, method='GET',
> > data=''):
> >
> >
> > On Mon, Jun 3, 2013 at 11:03 AM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com> wrote:
> >
> >> I have fixed this in a patch I submitted last week.
> >>
> >> I'm not sure when it began, but I noticed it a long time ago and had
> just
> >> sent out an e-mail then and corrected it in my sandbox.
> >>
> >> Let me see if I can find what I did to fix it.
> >>
> >>
> >> On Mon, Jun 3, 2013 at 10:09 AM, Will Stevens  >wrote:
> >>
> >>> Has anyone else experienced this?  I just pulled in the master code into
> >>> my
> >>> branch and now I am getting this in my dev environment.
> >>>
> >>> [DEBUG] Executing command line: python
> >>> ../marvin/marvin/deployDataCenter.py
> >>> -i devcloud.cfg
> >>> Traceback (most recent call last):
> >>>   File "../marvin/marvin/deployDataCenter.py", line 517, in 
> >>> deploy.deploy()
> >>>   File "../marvin/marvin/deployDataCenter.py", line 500, in deploy
> >>> self.loadCfg()
> >>>   File "../marvin/marvin/deployDataCenter.py", line 451, in loadCfg
> >>> apiKey, securityKey = self.registerApiKey()
> >>>   File "../marvin/marvin/deployDataCenter.py", line 390, in
> >>> registerApiKey
> >>> listuserRes = self.testClient.getApiClient().listUsers(listuser)
> >>>   File
> >>>
> >>>
> "/mnt/hgfs/palo_alto/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
> >>> line 2385, in listUsers
> >>> response = self.connection.marvin_request(command, data=postdata,
> >>> response_type=response)
> >>> TypeError: marvin_request() got an unexpected keyword argument 'data'
> >>>
> >>> Thanks,
> >>>
> >>> ws
> >>>
> >>>
> >>> On Mon, May 6, 2013 at 5:13 PM, Mike Tutkowski <
> >>> mike.tutkow...@solidfire.com
> >>> > wrote:
> >>>
> >>> > It looks like the marvin_request method in cloudstackConnection.py
> >>> does not
> >>> > have a parameter named 'data'.
> >>> >
> >>> > I changed the signature locally to the following and it works now:
> >>> >
> >>> > def marvin_request(self, cmd, response_type=None, method='GET',
> >>> data=''):
> >>> >
> >>> >
> >>> > On Mon, May 6, 2013 at 2:59 PM, Mike Tutkowski <
> >>> > mike.tutkow...@solidfire.com
> >>> > > wrote:
> >>> >
> >>> > > I don't have much Python experience, but it looks like we're trying
> >>> to
> >>> > > pass in a named parameter that doesn't exist on the receiving side.
> >>> > >
> >>> > > Perhaps I need to update a Python package?
> >>> > >
> >>> > > def listUsers(self, command, postdata={}):
> >>> > >
> >>> > > response = listUsersResponse()
> >>> > >
> >>> > > response = self.connection.marvin_request(command,
> >>> data=postdata,
> >>> > > response_type=response)
> >>> > >
> >>> > > return response
> >>> > >
> >>> > >
> >>> > > On Mon, May 6, 2013 at 12:04 PM, Mike Tutkowski <
> >>> > > mike.tutkow...@solidfire.com> wrote:
> >>> > >
> >>> > >> Hi,
> >>> > >>
> >>> > >> I just updated to the latest today and ran deployDataCenter.py to
> >>> build
> >>> > a
> >>> > >> DevCloud2 environment.
> >>> > >>
> >>> > >> The script is having trouble. Any thoughts on this? Has this
> worked
> >>> > >> recently for anyone else?
> >>> > >>
> >>> > >> Thanks!
> >>> > >>
> >>> > >> mtutkowski-LT:devcloud mtutkowski$ python
> >>> > >> ../marvin/marvin/deployDataCenter.py -i devcloud.cfg
> >>> > >> Traceback (most recent call last):
> >>> > >>   File "../marvin/marvin/deployDataCenter.py", line 476, in
> 
> >>> > >> deploy.deploy()
> >>> > >>   File "../marvin/marvin/deployDataCenter.py", line 459, in deploy
> >>> > >> self.loadCfg()
> >>> > >>   File "../marvin/marvin/deployDataCenter.py", line 410, in
> loadCfg
> >>> > >> apiKey, securityKey = self.registerApiKey()
> >>> > >>   File "../marvin/marvin/deployDataCenter.py", line 349, in
> >>> > registerApiKey
> >>> > >> listuserRes =
> self.testClient.getApiClient().listUsers(listuser)
> >>> > >>   File
> >>> > >>
> >>> >
> >>>
> "/Users/mtutkowski/Documents/CloudStack/src/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
> >>> > >> line 433, in listUsers
> >>> > >> response = self.connection.marvin_request(command,
> >>> data=postdata,
> >>> > >> response_type=response)
> >>> > >> TypeError:
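
For reference, a minimal standalone sketch (not the actual Marvin source; the class
and function names here are illustrative) of why the TypeError above appears and why
the extra keyword parameter Mike describes resolves it:

# The generated cloudstackAPIClient stubs pass a 'data' keyword that the older
# marvin_request() signature does not accept.
class OldConnection(object):
    def marvin_request(self, cmd, response_type=None, method='GET'):
        return response_type

class NewConnection(object):
    # Adding the extra keyword parameter (defaulting to '') keeps older callers working.
    def marvin_request(self, cmd, response_type=None, method='GET', data=''):
        return response_type

def list_users(connection):
    # Mirrors what the generated listUsers() stub does with its postdata argument.
    return connection.marvin_request('listUsers', response_type={}, data={})

try:
    list_users(OldConnection())
except TypeError as e:
    print('old signature fails: %s' % e)   # unexpected keyword argument 'data'
print('new signature works: %s' % list_users(NewConnection()))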

Re: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread David Nalley
On Mon, Jun 3, 2013 at 1:04 PM, Musayev, Ilya  wrote:
> How would this vote work? Is it consensus that wins?

Consensus would win if we had it. However, we don't, thus we have a vote.


Re: [ACS41] Upgrade from 2.2.13

2013-06-03 Thread Alena Prokharchyk
Nicolas, in order to upgrade to 4.0, you need to have systemvm-vmware-4.0
template pre-installed. Apache CS release notes mention it (section 3.2):

http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Releas
e_Notes/upgrade-instructions.html#upgrade-from-2.2.x-to-4.0


What pdf you are referring to?

-Alena.

On 6/3/13 4:28 AM, "nicolas.lamira...@orange.com"
 wrote:

>Hi,
>we try to upgrade from 2.2.14 to 4.1
>And we failed with these logs:
>
>2013-06-03 13:15:24,367 DEBUG [utils.db.ScriptRunner] (Timer-1:null)
>UPDATE `cloud`.`user` SET PASSWORD=RAND() WHERE id=1
>2013-06-03 13:15:24,367 DEBUG [utils.db.ScriptRunner] (Timer-1:null)
>ALTER TABLE `cloud_usage`.`account` ADD COLUMN `default_zone_id` bigint
>unsigned
>2013-06-03 13:15:24,552 DEBUG [upgrade.dao.Upgrade302to40]
>(Timer-1:null) Updating VMware System Vms
>2013-06-03 13:15:24,556 DEBUG [db.Transaction.Transaction]
>(Timer-1:null) Rolling back the transaction: Time = 9675 Name =
>Upgrade; called by
>-Transaction.rollback:890-Transaction.removeUpTo:833-Transaction.close
>:657-DatabaseUpgradeChecker.upgrade:263-DatabaseUpgradeChecker.check:358-C
>omponentContext.initComponentsLifeCycle:90-CloudStartupServlet$1.run:50-Ti
>merThread.mainLoop:512-TimerThread.run:462
>2013-06-03 13:15:24,558 ERROR [utils.component.ComponentContext]
>(Timer-1:null) System integrity check failed. Refuse to startup
>
>According to the code :
>
>https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob;f=server/s
>rc/com/cloud/upgrade/dao/Upgrade302to40.java;h=6f31fdd2b8eda8e15c223adceed
>52c70a6457349;hb=a5214bee99f6c5582d755c9499f7d99fd7b5b701
>
>// Just update the VMware system template. Other hypervisor templates
>are unchanged from previous 3.0.x versions.
>  105 s_logger.debug("Updating VMware System Vms");
>  106 try {
>  107 //Get 4.0 VMware system Vm template Id
>  108 pstmt = conn.prepareStatement("select id from
>`cloud`.`vm_template` where name = 'systemvm-vmware-4.0' and removed is
>null");
>  109 rs = pstmt.executeQuery();
>  110 if(rs.next()){
>  111 long templateId = rs.getLong(1);
>  112 rs.close();
>  113 pstmt.close();
>  114 // change template type to SYSTEM
>  115 pstmt = conn.prepareStatement("update
>`cloud`.`vm_template` set type='SYSTEM' where id = ?");
>  116 pstmt.setLong(1, templateId);
>  117 pstmt.executeUpdate();
>  118 pstmt.close();
>  119 // update templete ID of system Vms
>  120 pstmt = conn.prepareStatement("update
>`cloud`.`vm_instance` set vm_template_id = ? where type <> 'User' and
>hypervisor_type = 'VMware'");
>  121 pstmt.setLong(1, templateId);
>  122 pstmt.executeUpdate();
>  123 pstmt.close();
>  124 } else {
>  125 if (VMware){
>  126 throw new CloudRuntimeException("4.0 VMware
>SystemVm template not found. Cannot upgrade system Vms");
>  127 } else {
>  128 s_logger.warn("4.0 VMware SystemVm template
>not found. VMware hypervisor is not used, so not failing upgrade");
>  129 }
>  130 }
>  131 } catch (SQLException e) {
>  132 throw new CloudRuntimeException("Error while updating
>VMware systemVm template", e);
>  133 }
>
>but in the release PDF, it is written:
>
>VMware
>Name: systemvm-vmware-3.0.5
>Description: systemvm-vmware-3.0.5
>URL: http://download.cloud.com/templates/burbank/burbank-
>systemvm-08012012.ova
>Zone: Choose the zone where this hypervisor is used
>Hypervisor: VMware
>Format: OVA
>OS Type: Debian GNU/Linux 5.0 (32-bit)
>Extractable: no
>Password Enabled: no
>Public: no
>Featured: no
>
>
>So, is it a documentation bug?
>Regards.
>
>-- 
>Nicolas Lamirault
>
>
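
As Alena notes above, the upgrade path expects the systemvm-vmware-4.0 template to be
registered before the management server restarts. A pre-upgrade sanity check could look
roughly like the sketch below (the connection details and the pymysql choice are
placeholders, not part of the documented procedure); it runs the same query that
Upgrade302to40 issues:

# Hypothetical pre-upgrade check: confirm the template row Upgrade302to40 looks for.
import pymysql

conn = pymysql.connect(host='localhost', user='cloud', passwd='secret', db='cloud')
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT id, type FROM vm_template "
        "WHERE name = 'systemvm-vmware-4.0' AND removed IS NULL")
    row = cur.fetchone()
    if row is None:
        print('systemvm-vmware-4.0 template is not registered; '
              'the upgrade will roll back on VMware setups.')
    else:
        print('Found template id=%s type=%s; the upgrade check should pass.' % row)
    cur.close()
finally:
    conn.close()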



Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread Mike Tutkowski
Yes, ultimately I would like to support all hypervisors that CloudStack
supports. I think I'm just out of time for 4.2 to get KVM in.

Right now this plug-in supports XenServer. Depending on what we do with
regards to 4.2 feature freeze, I have it working for VMware in my sandbox,
as well.

Also, just to be clear, this is all in regards to Disk Offerings. I plan to
support Compute Offerings post 4.2.


On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage wrote:

> Is there any plan on supporting KVM in the patch cycle post 4.2?
>
> - Original Message -
> From: "Mike Tutkowski" 
> To: dev@cloudstack.apache.org
> Sent: Monday, June 3, 2013 10:12:32 AM
> Subject: Re: [MERGE] disk_io_throttling to MASTER
>
> I agree on merging Wei's feature first, then mine.
>
> If his feature is for KVM only, then it is a non issue as I don't support
> KVM in 4.2.
>
>
> On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU  wrote:
>
> > John,
> >
> > For the billing, as no one works on billing now, users need to calculate
> > the billing by themselves. They can get the service_offering and
> > disk_offering of VMs and volumes for calculation. Of course it is
> better
> > to tell the user the exact limitation value of an individual volume, and network
> > rate limitation for nics as well. I can work on it later. Do you think it
> > is a part of I/O throttling?
> >
> > Sorry, I misunderstood the second question.
> >
> > Agree with what you said about the two features.
> >
> > -Wei
> >
> >
> > 2013/6/3 John Burwell 
> >
> > > Wei,
> > >
> > >
> > > On Jun 3, 2013, at 2:13 AM, Wei ZHOU  wrote:
> > >
> > > > Hi John, Mike
> > > >
> > > > I hope Mike's aswer helps you. I am trying to adding more.
> > > >
> > > > (1) I think billing should depend on IO statistics rather than IOPS
> > > > limitation. Please review disk_io_stat if you have time.
> disk_io_stat
> > > can
> > > > get the IO statistics including bytes/iops read/write for an
> individual
> > > > virtual machine.
> > >
> > > Going by the AWS model, customers are billed more for volumes with
> > > provisioned IOPS, as well as, for those operations (
> > > http://aws.amazon.com/ebs/).  I would imagine our users would like the
> > > option to employ similar cost models.  Could an operator implement
> such a
> > > billing model in the current patch?
> > >
> > > >
> > > > (2) Do you mean IOPS runtime change? KVM supports setting IOPS/BPS
> > > > limitation for a running virtual machine through command line.
> However,
> > > > CloudStack does not support changing the parameters of a created
> > offering
> > > > (compute offering or disk offering).
> > >
> > > I meant at the Java interface level.  I apologize for being unclear.
>  Can
> > > we more generalize allocation algorithms with a set of interfaces that
> > > describe the service guarantees provided by a resource?
> > >
> > > >
> > > > (3) It is a good question. Maybe it is better to commit Mike's patch
> > > after
> > > > disk_io_throttling as Mike needs to consider the limitation in
> > hypervisor
> > > > type, I think.
> > >
> > > I will expand on my thoughts in a later response to Mike regarding the
> > > touch points between these two features.  I think that
> disk_io_throttling
> > > will need to be merged before SolidFire, but I think we need closer
> > > coordination between the branches (possibly have solidfire track
> > > disk_io_throttling) to coordinate on this issue.
> > >
> > > >
> > > > - Wei
> > > >
> > > >
> > > > 2013/6/3 John Burwell 
> > > >
> > > >> Mike,
> > > >>
> > > >> The things I want to understand are the following:
> > > >>
> > > >>   1) Is there value in capturing IOPS policies in a
> > > >> common data model (e.g. for billing/usage purposes, expressing offerings)?
> > > >>2) Should there be a common interface model for reasoning about
> IOP
> > > >> provisioning at runtime?
> > > >>3) How are conflicting provisioned IOPS configurations between a
> > > >> hypervisor and storage device reconciled?  In particular, a scenario where a
> > > >> user is led to believe in (and billed for) more IOPS configured for a VM than a
> > > >> storage device has been configured to deliver.  Another scenario could be a
> > > >> consistent configuration between a VM and a storage device at
> creation
> > > >> time, but a later modification to storage device introduces logical
> > > >> inconsistency.
> > > >>
> > > >> Thanks,
> > > >> -John
> > > >>
> > > >> On Jun 2, 2013, at 8:38 PM, Mike Tutkowski <
> > > mike.tutkow...@solidfire.com>
> > > >> wrote:
> > > >>
> > > >> Hi John,
> > > >>
> > > >> I believe Wei's feature deals with controlling the max number of
> IOPS
> > > from
> > > >> the hypervisor side.
> > > >>
> > > >> My feature is focused on controlling IOPS from the storage system
> > side.
> > > >>
> > > >> I hope that helps. :)
> > > >>
> > > >>
> > > >> On Sun, Jun 2, 2013 at 6:35 PM, John Burwell 
> > > wrote:
> > > >>
> > > >>> Wei,
> > > >>>
> > > >>> My opinion is that no f
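
As background to the hypervisor-side half of this discussion: the KVM limits Wei refers
to ultimately come down to libvirt's block I/O tuning. A rough illustration via the
libvirt Python bindings is below (the domain and disk names are placeholders, and this
is not code from the disk_io_throttling branch):

# Illustrative only: cap IOPS and throughput on a running KVM guest's disk.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('i-2-10-VM')            # placeholder instance name

params = {
    'total_iops_sec': 500,                      # combined read+write IOPS cap
    'total_bytes_sec': 20 * 1024 * 1024,        # 20 MB/s throughput cap
}

# AFFECT_LIVE applies the limits to the running guest without a restart.
dom.setBlockIoTune('vda', params, libvirt.VIR_DOMAIN_AFFECT_LIVE)
print(dom.blockIoTune('vda', libvirt.VIR_DOMAIN_AFFECT_LIVE))

conn.close()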

Re: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread David Nalley
On Fri, May 31, 2013 at 11:00 AM, Chip Childers
 wrote:
> Following our discussion on the proposal to push back the feature freeze
> date for 4.2.0 [1], we have not yet achieved a clear consensus.  Well...
> we have already defined the "project rules" for figuring out what to do.
> In our project by-laws [2], we have defined a "release plan" decision as
> follows:
>
>> 3.4.2. Release Plan
>>
>> Defines the timetable and work items for a release. The plan also
>> nominates a Release Manager.
>>
>> A lazy majority of active committers is required for approval.
>>
>> Any active committer or PMC member may call a vote. The vote must occur
>> on a project development mailing list.
>
> And our lazy majority is defined as:
>
>> 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
>> votes and more binding +1 votes than binding -1 votes.
>
> Our current plan is the starting point, so this VOTE is a vote to change
> the current plan.  We require a 72 hour window for this vote, so IMO we are
> in an odd position where the feature freeze date is at least extended until
> Tuesday of next week.
>
> Our current plan of record for 4.2.0 is at [3].
>
> [1] http://markmail.org/message/vi3nsd2yo763kzua
> [2] http://s.apache.org/csbylaws
> [3] 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+Release
>
> 
>
> I'd like to call a VOTE on the following:
>
> Proposal: Extend the feature freeze date for our 4.2.0 feature release
> from today (2013-05-31) to 2013-06-28.  All other dates following the
> feature freeze date in the plan would be pushed out 4 weeks as well.
>
> Please respond with one of the following:
>
> +1 : change the plan as listed above
> +/-0 : no strong opinion, but leaning + or -
> -1 : do not change the plan
>
> This vote will remain open until Tuesday morning US eastern time.
>
> -chip


-1 (binding)

Lets stick with the current plan of record. IMO - we accepted the 4.2
timeline knowing we were late, and we could have easily adjusted it
then.

--David


Instructions for pushing DEB's to cloudstack.apt-get.eu

2013-06-03 Thread Chip Childers
Wido,

I have the access, and I have the results of building from the release
source, but I don't have the knowledge to specifically know what to put
where and what to run to get the non-OSS DEB's I just built into the
repo.

Can you share some instructions please?

-chip


Re: Trouble with deployDataCenter.py

2013-06-03 Thread Mike Tutkowski
Ah, OK. I tend to re-deploy daily. :)


On Mon, Jun 3, 2013 at 11:19 AM, Will Stevens  wrote:

> I think a lot of people use DevCloud but they don't redeploy very often so
> bugs like this don't get noticed.  I use DevCloud all the time.
>
>
> On Mon, Jun 3, 2013 at 1:07 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com
> > wrote:
>
> > Surprisingly this has been like this for a long time.
> >
> > It kind of makes me wonder if anyone uses DevCloud. I use it all the
> time.
> > If others were using it, I would have expected this to be corrected like
> a
> > month or two ago.
> >
> > I am "alone" in using DevCloud?
> >
> >
> > On Mon, Jun 3, 2013 at 11:05 AM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com> wrote:
> >
> > > In cloudstackConnection.py, I made the following change:
> > >
> > > -def marvin_request(self, cmd, response_type=None, method='GET'):
> > >
> > > +def marvin_request(self, cmd, response_type=None, method='GET',
> > > data=''):
> > >
> > >
> > > On Mon, Jun 3, 2013 at 11:03 AM, Mike Tutkowski <
> > > mike.tutkow...@solidfire.com> wrote:
> > >
> > >> I have fixed this in a patch I submitted last week.
> > >>
> > >> I'm not sure when it began, but I noticed it a long time ago and had
> > just
> > >> sent out an e-mail then and corrected it in my sandbox.
> > >>
> > >> Let me see if I can find what I did to fix it.
> > >>
> > >>
> > >> On Mon, Jun 3, 2013 at 10:09 AM, Will Stevens  > >wrote:
> > >>
> > >>> Has anyone else experienced this?  I just pulled in the master code
> into
> > >>> my
> > >>> branch and now I am getting this in my dev environment.
> > >>>
> > >>> [DEBUG] Executing command line: python
> > >>> ../marvin/marvin/deployDataCenter.py
> > >>> -i devcloud.cfg
> > >>> Traceback (most recent call last):
> > >>>   File "../marvin/marvin/deployDataCenter.py", line 517, in 
> > >>> deploy.deploy()
> > >>>   File "../marvin/marvin/deployDataCenter.py", line 500, in deploy
> > >>> self.loadCfg()
> > >>>   File "../marvin/marvin/deployDataCenter.py", line 451, in loadCfg
> > >>> apiKey, securityKey = self.registerApiKey()
> > >>>   File "../marvin/marvin/deployDataCenter.py", line 390, in
> > >>> registerApiKey
> > >>> listuserRes = self.testClient.getApiClient().listUsers(listuser)
> > >>>   File
> > >>>
> > >>>
> >
> "/mnt/hgfs/palo_alto/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
> > >>> line 2385, in listUsers
> > >>> response = self.connection.marvin_request(command, data=postdata,
> > >>> response_type=response)
> > >>> TypeError: marvin_request() got an unexpected keyword argument 'data'
> > >>>
> > >>> Thanks,
> > >>>
> > >>> ws
> > >>>
> > >>>
> > >>> On Mon, May 6, 2013 at 5:13 PM, Mike Tutkowski <
> > >>> mike.tutkow...@solidfire.com
> > >>> > wrote:
> > >>>
> > >>> > It looks like the marvin_request method in cloudstackConnection.py
> > >>> does not
> > >>> > have a parameter named 'data'.
> > >>> >
> > >>> > I changed the signature locally to the following and it works now:
> > >>> >
> > >>> > def marvin_request(self, cmd, response_type=None, method='GET',
> > >>> data=''):
> > >>> >
> > >>> >
> > >>> > On Mon, May 6, 2013 at 2:59 PM, Mike Tutkowski <
> > >>> > mike.tutkow...@solidfire.com
> > >>> > > wrote:
> > >>> >
> > >>> > > I don't have much Python experience, but it looks like we're
> trying
> > >>> to
> > >>> > > pass in a named parameter that doesn't exist on the receiving
> side.
> > >>> > >
> > >>> > > Perhaps I need to update a Python package?
> > >>> > >
> > >>> > > def listUsers(self, command, postdata={}):
> > >>> > >
> > >>> > > response = listUsersResponse()
> > >>> > >
> > >>> > > response = self.connection.marvin_request(command,
> > >>> data=postdata,
> > >>> > > response_type=response)
> > >>> > >
> > >>> > > return response
> > >>> > >
> > >>> > >
> > >>> > > On Mon, May 6, 2013 at 12:04 PM, Mike Tutkowski <
> > >>> > > mike.tutkow...@solidfire.com> wrote:
> > >>> > >
> > >>> > >> Hi,
> > >>> > >>
> > >>> > >> I just updated to the latest today and ran deployDataCenter.py
> to
> > >>> build
> > >>> > a
> > >>> > >> DevCloud2 environment.
> > >>> > >>
> > >>> > >> The script is having trouble. Any thoughts on this? Has this
> > worked
> > >>> > >> recently for anyone else?
> > >>> > >>
> > >>> > >> Thanks!
> > >>> > >>
> > >>> > >> mtutkowski-LT:devcloud mtutkowski$ python
> > >>> > >> ../marvin/marvin/deployDataCenter.py -i devcloud.cfg
> > >>> > >> Traceback (most recent call last):
> > >>> > >>   File "../marvin/marvin/deployDataCenter.py", line 476, in
> > 
> > >>> > >> deploy.deploy()
> > >>> > >>   File "../marvin/marvin/deployDataCenter.py", line 459, in
> deploy
> > >>> > >> self.loadCfg()
> > >>> > >>   File "../marvin/marvin/deployDataCenter.py", line 410, in
> > loadCfg
> > >>> > >> apiKey, securityKey = self.registerApiKey()
> > >>> > >>   File "../marvin/marvin/deployDataCenter.py", line 349, in
> > >>> > registerApiKey
> > >>> > >>

Re: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Mike Tutkowski
+1 (not sure if my vote counts for anything since I'm not a committer)

To me it seems that many people spent a lot more time on 4.1 than expected,
so I think an extra 2 - 4 weeks for 4.2 would make sense.


On Mon, Jun 3, 2013 at 11:21 AM, David Nalley  wrote:

> On Fri, May 31, 2013 at 11:00 AM, Chip Childers
>  wrote:
> > Following our discussion on the proposal to push back the feature freeze
> > date for 4.2.0 [1], we have not yet achieved a clear consensus.  Well...
> > we have already defined the "project rules" for figuring out what to do.
> > In our project by-laws [2], we have defined a "release plan" decision as
> > follows:
> >
> >> 3.4.2. Release Plan
> >>
> >> Defines the timetable and work items for a release. The plan also
> >> nominates a Release Manager.
> >>
> >> A lazy majority of active committers is required for approval.
> >>
> >> Any active committer or PMC member may call a vote. The vote must occur
> >> on a project development mailing list.
> >
> > And our lazy majority is defined as:
> >
> >> 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> >> votes and more binding +1 votes than binding -1 votes.
> >
> > Our current plan is the starting point, so this VOTE is a vote to change
> > the current plan.  We require a 72 hour window for this vote, so IMO we
> are
> > in an odd position where the feature freeze date is at least extended
> until
> > Tuesday of next week.
> >
> > Our current plan of record for 4.2.0 is at [3].
> >
> > [1] http://markmail.org/message/vi3nsd2yo763kzua
> > [2] http://s.apache.org/csbylaws
> > [3]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+Release
> >
> > 
> >
> > I'd like to call a VOTE on the following:
> >
> > Proposal: Extend the feature freeze date for our 4.2.0 feature release
> > from today (2013-05-31) to 2013-06-28.  All other dates following the
> > feature freeze date in the plan would be pushed out 4 weeks as well.
> >
> > Please respond with one of the following:
> >
> > +1 : change the plan as listed above
> > +/-0 : no strong opinion, but leaning + or -
> > -1 : do not change the plan
> >
> > This vote will remain open until Tuesday morning US eastern time.
> >
> > -chip
>
>
> -1 (binding)
>
> Lets stick with the current plan of record. IMO - we accepted the 4.2
> timeline knowing we were late, and we could have easily adjusted it
> then.
>
> --David
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


Re: [ACS41] Upgrade from 2.2.13

2013-06-03 Thread Chip Childers
On Mon, Jun 03, 2013 at 05:21:40PM +, Alena Prokharchyk wrote:
> Nicolas, in order to upgrade to 4.0, you need to have systemvm-vmware-4.0
> template pre-installed. Apache CS release notes mention it (section 3.2):
> 
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Releas
> e_Notes/upgrade-instructions.html#upgrade-from-2.2.x-to-4.0
> 
> 
> What pdf you are referring to?

Perhaps this is a docs bug that I introduced?  See that same section in
the 4.1 release notes:

http://jenkins.buildacloud.org/job/docs-4.1-releasenotes/

Download the 4.1.0 version.  I think it's confusing that the older artifacts are
still there, but just pick the correct link.  ;-)



RE: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Animesh Chaturvedi

+1 to move feature freeze date to 6/28 to get in the features proposed earlier 
for 4.2 and have a longer bug fix cycle.

After moving 103 open 4.1-targeted defects to 4.2, we will have a total of 367 
open defects for 4.2. I hope with this change we are able to resolve a lot more 
defects before RC and can get 4.2 out the door with fewer RC re-spins.

Since all the other dates move out by 4 weeks, the feature proposal freeze 
date, which was originally 5/04, would have been 6/1. We are already 
past that, so there are no new feature proposals for 4.2. Any new feature 
proposal will have to be targeted for 4.3 or beyond. 


Thanks

Animesh





> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Friday, May 31, 2013 8:00 AM
> To: dev@cloudstack.apache.org
> Subject: [VOTE] Pushback 4.2.0 Feature Freeze
> 
> Following our discussion on the proposal to push back the feature freeze
> date for 4.2.0 [1], we have not yet achieved a clear consensus.  Well...
> we have already defined the "project rules" for figuring out what to do.
> In our project by-laws [2], we have defined a "release plan" decision as
> follows:
> 
> > 3.4.2. Release Plan
> >
> > Defines the timetable and work items for a release. The plan also
> > nominates a Release Manager.
> >
> > A lazy majority of active committers is required for approval.
> >
> > Any active committer or PMC member may call a vote. The vote must
> > occur on a project development mailing list.
> 
> And our lazy majority is defined as:
> 
> > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> > votes and more binding +1 votes than binding -1 votes.
> 
> Our current plan is the starting point, so this VOTE is a vote to change
> the current plan.  We require a 72 hour window for this vote, so IMO we
> are in an odd position where the feature freeze date is at least
> extended until Tuesday of next week.
> 
> Our current plan of record for 4.2.0 is at [3].
> 
> [1] http://markmail.org/message/vi3nsd2yo763kzua
> [2] http://s.apache.org/csbylaws
> [3]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+Re
> lease
> 
> 
> 
> I'd like to call a VOTE on the following:
> 
> Proposal: Extend the feature freeze date for our 4.2.0 feature release
> from today (2013-05-31) to 2013-06-28.  All other dates following the
> feature freeze date in the plan would be pushed out 4 weeks as well.
> 
> Please respond with one of the following:
> 
> +1 : change the plan as listed above
> +/-0 : no strong opinion, but leaning + or -
> -1 : do not change the plan
> 
> This vote will remain open until Tuesday morning US eastern time.
> 
> -chip


Re: [MERGE]object_store branch into master

2013-06-03 Thread John Burwell
Chip/Min,

For thread 1, I would like to see an expanded discussion regarding the need for 
the staging area.  In particular, what features on which hypervisors created 
the need for it.  With the wider expertise of the list, we may be able to find 
solutions to these issues that either reduce or eliminate the need for the 
cache.

Thanks,
-John

On Jun 3, 2013, at 1:11 PM, Chip Childers  wrote:

> On Mon, Jun 03, 2013 at 05:09:24PM +, Min Chen wrote:
>> Chip/John,
>> 
>>  This thread has become very hard to follow due to several technical
>> debates mixed together. Chip earlier made a good suggestion that we should
>> start separate threads for several important architectural issues raised
>> by John so that community can get clear grasp on the debating issues and
>> reach a wise conclusion. If there is no objection, we are going to do that
>> right now. If we understood correctly by following through this thread, we
>> boiled down to the following 3 major technical issues:
>>  1. Missing capacity planning in NFS cache storage implementation.
>>  2. Error handling in case of S3 as native secondary storage.
>>  3. S3TemplateDownloader implementation issue.
>> If we didn't miss anything, we will start these 3 DISCUSS threads shortly.
>> 
>>  Thanks
>>  -min
> 
> +1 - do it!



RE: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Animesh Chaturvedi


> -Original Message-
> From: Hugo Trippaers [mailto:htrippa...@schubergphilis.com]
> Sent: Monday, June 03, 2013 2:24 AM
> To: dev@cloudstack.apache.org
> Subject: RE: [VOTE] Pushback 4.2.0 Feature Freeze
> 
> -1
> 
> Extending the release will mean even more features will be packed into
> the 4.2, which already has quite  a lot of changes. The delays with 4.1
> shows that testing is a big job already and more features will make it
> worse. I'm convinced that allowing for more time in 4.2 would not
> improve the overall quality of the release and has a risk of lowering
> the quality due to a pre-freeze rush.
> 
[Animesh>] Hugo, I share your concern, but with this proposal all the other dates 
move out by 4 weeks. The feature proposal freeze date, which was 
originally 5/04, would have been 6/1, which means we are already past that, so 
there are no new feature proposals for 4.2. Any new feature proposal will have to be 
targeted for 4.3 or beyond. 

> Cheers,
> 
> Hugo
> 
> > -Original Message-
> > From: Musayev, Ilya [mailto:imusa...@webmd.net]
> > Sent: Sunday, June 02, 2013 6:33 AM
> > To: dev@cloudstack.apache.org
> > Subject: Re: [VOTE] Pushback 4.2.0 Feature Freeze
> >
> > +1 for freeze request for 1-2 weeks. We've developed advanced password
> > management features for IsWest  and would like to merge it in as per
> > Claytons approval.
> >
> >
> >  Original message 
> > From: Wei ZHOU 
> > Date:
> > To: dev@cloudstack.apache.org
> > Subject: Re: [VOTE] Pushback 4.2.0 Feature Freeze
> >
> >
> > -0
> >
> > Change to -0 as I suggest to wait for the merge of existing review
> > requests in days (48 or 72 hours).
> >
> > -Wei
> >
> >
> > 2013/5/31 Wei ZHOU 
> >
> > > -1
> > > Almost all new features for 4.2 have been merged or being reviewed.
> > > From now, we'd better not accept new feature review requests, and
> > > create 4.2 branch after committing existed requests in short time.
> > >
> > > -Wei
> > >
> > > 2013/5/31, Chip Childers :
> > > > Following our discussion on the proposal to push back the feature
> > > > freeze date for 4.2.0 [1], we have not yet achieved a clear
> consensus.
> > Well...
> > > > we have already defined the "project rules" for figuring out what
> to do.
> > > > In our project by-laws [2], we have defined a "release plan"
> > > > decision as
> > > > follows:
> > > >
> > > >> 3.4.2. Release Plan
> > > >>
> > > >> Defines the timetable and work items for a release. The plan also
> > > >> nominates a Release Manager.
> > > >>
> > > >> A lazy majority of active committers is required for approval.
> > > >>
> > > >> Any active committer or PMC member may call a vote. The vote must
> > > >> occur on a project development mailing list.
> > > >
> > > > And our lazy majority is defined as:
> > > >
> > > >> 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> > > >> votes and more binding +1 votes than binding -1 votes.
> > > >
> > > > Our current plan is the starting point, so this VOTE is a vote to
> > > > change the current plan.  We require a 72 hour window for this
> > > > vote, so IMO we
> > > are
> > > > in an odd position where the feature freeze date is at least
> > > > extended
> > > until
> > > >
> > > > Tuesday of next week.
> > > >
> > > > Our current plan of record for 4.2.0 is at [3].
> > > >
> > > > [1] http://markmail.org/message/vi3nsd2yo763kzua
> > > > [2] http://s.apache.org/csbylaws
> > > > [3]
> > > > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+Release
> > > >
> > > > 
> > > >
> > > > I'd like to call a VOTE on the following:
> > > >
> > > > Proposal: Extend the feature freeze date for our 4.2.0 feature
> > > > release from today (2013-05-31) to 2013-06-28.  All other dates
> > > > following the feature freeze date in the plan would be pushed out
> > > > 4
> > weeks as well.
> > > >
> > > > Please respond with one of the following:
> > > >
> > > > +1 : change the plan as listed above
> > > > +/-0 : no strong opinion, but leaning + or -
> > > > -1 : do not change the plan
> > > >
> > > > This vote will remain open until Tuesday morning US eastern time.
> > > >
> > > > -chip
> > > >
> > >


RE: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Sudha Ponnaganti
+1 [binding]

Given that there are around 47 features that are still in Open state, the 
community can focus on cleaning up that part before the feature freeze date. As 
indicated in Chip's original mail and in Animesh's mail below, I 
assume that no new proposals will be accepted into 4.2.  So my vote is to 
agree with the extension. 

-Original Message-
From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com] 
Sent: Monday, June 03, 2013 10:32 AM
To: dev@cloudstack.apache.org
Subject: RE: [VOTE] Pushback 4.2.0 Feature Freeze


+1 to move feature freeze date to 6/28 to get in the features proposed earlier 
for 4.2 and have a longer bug fix cycle.

After moving 103 open 4.1 targeted defects to 4.2 we will have total of 367 
open defects for 4.2. I hope with this change we are able to resolve lot more 
defects before RC and we can get 4.2 out the door with fewer RC re-spin.

Since all the other dates move out by 4 weeks means the feature proposal freeze 
date which was originally at 5/04 would have been 6/1 which means we are 
already past that, so no new feature proposals for 4.2. Any new feature 
proposal will have to be targeted for 4.3 or beyond. 


Thanks

Animesh





> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Friday, May 31, 2013 8:00 AM
> To: dev@cloudstack.apache.org
> Subject: [VOTE] Pushback 4.2.0 Feature Freeze
> 
> Following our discussion on the proposal to push back the feature 
> freeze date for 4.2.0 [1], we have not yet achieved a clear consensus.  
> Well...
> we have already defined the "project rules" for figuring out what to do.
> In our project by-laws [2], we have defined a "release plan" decision 
> as
> follows:
> 
> > 3.4.2. Release Plan
> >
> > Defines the timetable and work items for a release. The plan also 
> > nominates a Release Manager.
> >
> > A lazy majority of active committers is required for approval.
> >
> > Any active committer or PMC member may call a vote. The vote must 
> > occur on a project development mailing list.
> 
> And our lazy majority is defined as:
> 
> > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1 
> > votes and more binding +1 votes than binding -1 votes.
> 
> Our current plan is the starting point, so this VOTE is a vote to 
> change the current plan.  We require a 72 hour window for this vote, 
> so IMO we are in an odd position where the feature freeze date is at 
> least extended until Tuesday of next week.
> 
> Our current plan of record for 4.2.0 is at [3].
> 
> [1] http://markmail.org/message/vi3nsd2yo763kzua
> [2] http://s.apache.org/csbylaws
> [3]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+
> Re
> lease
> 
> 
> 
> I'd like to call a VOTE on the following:
> 
> Proposal: Extend the feature freeze date for our 4.2.0 feature release 
> from today (2013-05-31) to 2013-06-28.  All other dates following the 
> feature freeze date in the plan would be pushed out 4 weeks as well.
> 
> Please respond with one of the following:
> 
> +1 : change the plan as listed above
> +/-0 : no strong opinion, but leaning + or -
> -1 : do not change the plan
> 
> This vote will remain open until Tuesday morning US eastern time.
> 
> -chip


Re: [ACS41] Upgrade from 2.2.13

2013-06-03 Thread Alena Prokharchyk
On 6/3/13 10:30 AM, "Chip Childers"  wrote:

>On Mon, Jun 03, 2013 at 05:21:40PM +, Alena Prokharchyk wrote:
>> Nicolas, in order to upgrade to 4.0, you need to have
>>systemvm-vmware-4.0
>> template pre-installed. Apache CS release notes mention it (section
>>3.2):
>> 
>> 
>>http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Rele
>>as
>> e_Notes/upgrade-instructions.html#upgrade-from-2.2.x-to-4.0
>> 
>> 
>> What pdf you are referring to?
>
>Perhaps this is a docs bug that I introduced?  See that same section in
>the 4.1 release notes:
>
>http://jenkins.buildacloud.org/job/docs-4.1-releasenotes/
>
>Download the 4.1.0 version.  I think the older artifacts are confusing
>to be there, but just pick the correct link.  ;-)
>
>


Yes, looks like a doc bug to me. Have to replace systemvm-vmware-3.0.5
with systemvm-vmware-4.0



Re: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Joe Brockmeier
On Fri, May 31, 2013, at 10:00 AM, Chip Childers wrote:
> Please respond with one of the following:
> 
> +1 : change the plan as listed above
> +/-0 : no strong opinion, but leaning + or -
> -1 : do not change the plan
> 
> This vote will remain open until Tuesday morning US eastern time.

-1 do not change the plan. 

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


Re: [ACS41] Upgrade from 2.2.13

2013-06-03 Thread Joe Brockmeier
On Mon, Jun 3, 2013, at 12:46 PM, Alena Prokharchyk wrote:
> Yes, looks like a doc bug to me. Have to replace systemvm-vmware-3.0.5
> with systemvm-vmware-4.0

I can update this in the docs if it's not correct before I upload them
tonight. 

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


wrong dns server config?

2013-06-03 Thread Shane Witbeck
I was troubleshooting my ssvm the other day and found '4.4.4.4' defined as the 
secondary dns server in  /tools/devcloud/devcloud.cfg and in some other 
scripts. 

Should this be '8.8.4.4' instead, since '8.8.8.8' is one of the Google DNS servers 
[1]? 


Thanks, 
Shane

[1] https://developers.google.com/speed/public-dns/
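
For context, the DNS entries live in the zone section of the Marvin config. A fragment
with 8.8.4.4 swapped in might look like the following (shown as a Python dict; the field
names follow Marvin's config format, and the values are examples only):

# Illustrative zone fragment; 8.8.4.4 is Google's secondary public resolver,
# which is presumably what '4.4.4.4' was meant to be.
zone = {
    'name': 'testzone',
    'dns1': '8.8.8.8',
    'dns2': '8.8.4.4',            # was 4.4.4.4 in devcloud.cfg
    'internaldns1': '192.168.56.10',
    'internaldns2': '8.8.4.4',
    'networktype': 'Basic',
}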

Re: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Min Chen
+1 

We have spent the past one and a half months working on the object_store feature;
it is very close to merge, and we just need some time to address review feedback
and resolve any technical concerns.

Thanks
-min

On 6/3/13 10:35 AM, "Sudha Ponnaganti"  wrote:

>+1 [binding]
>
>Given that there are around 47 features that are still in Open state,
>community can focus on cleaning up that part before feature freeze date.
>As indicated in Chip's original mail and indicated in Animesh' s mail
>below, I assume that no new proposals will be accepted in to 4.2.  So my
>vote is to agree with the extension.
>
>-Original Message-
>From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com]
>Sent: Monday, June 03, 2013 10:32 AM
>To: dev@cloudstack.apache.org
>Subject: RE: [VOTE] Pushback 4.2.0 Feature Freeze
>
>
>+1 to move feature freeze date to 6/28 to get in the features proposed
>earlier for 4.2 and have a longer bug fix cycle.
>
>After moving 103 open 4.1 targeted defects to 4.2 we will have total of
>367 open defects for 4.2. I hope with this change we are able to resolve
>lot more defects before RC and we can get 4.2 out the door with fewer RC
>re-spin.
>
>Since all the other dates move out by 4 weeks means the feature proposal
>freeze date which was originally at 5/04 would have been 6/1 which means
>we are already past that, so no new feature proposals for 4.2. Any new
>feature proposal will have to be targeted for 4.3 or beyond.
>
>
>Thanks
>
>Animesh
>
>
>
>
>
>> -Original Message-
>> From: Chip Childers [mailto:chip.child...@sungard.com]
>> Sent: Friday, May 31, 2013 8:00 AM
>> To: dev@cloudstack.apache.org
>> Subject: [VOTE] Pushback 4.2.0 Feature Freeze
>> 
>> Following our discussion on the proposal to push back the feature
>> freeze date for 4.2.0 [1], we have not yet achieved a clear consensus.
>>Well...
>> we have already defined the "project rules" for figuring out what to do.
>> In our project by-laws [2], we have defined a "release plan" decision
>> as
>> follows:
>> 
>> > 3.4.2. Release Plan
>> >
>> > Defines the timetable and work items for a release. The plan also
>> > nominates a Release Manager.
>> >
>> > A lazy majority of active committers is required for approval.
>> >
>> > Any active committer or PMC member may call a vote. The vote must
>> > occur on a project development mailing list.
>> 
>> And our lazy majority is defined as:
>> 
>> > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
>> > votes and more binding +1 votes than binding -1 votes.
>> 
>> Our current plan is the starting point, so this VOTE is a vote to
>> change the current plan.  We require a 72 hour window for this vote,
>> so IMO we are in an odd position where the feature freeze date is at
>> least extended until Tuesday of next week.
>> 
>> Our current plan of record for 4.2.0 is at [3].
>> 
>> [1] http://markmail.org/message/vi3nsd2yo763kzua
>> [2] http://s.apache.org/csbylaws
>> [3]
>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+
>> Re
>> lease
>> 
>> 
>> 
>> I'd like to call a VOTE on the following:
>> 
>> Proposal: Extend the feature freeze date for our 4.2.0 feature release
>> from today (2013-05-31) to 2013-06-28.  All other dates following the
>> feature freeze date in the plan would be pushed out 4 weeks as well.
>> 
>> Please respond with one of the following:
>> 
>> +1 : change the plan as listed above
>> +/-0 : no strong opinion, but leaning + or -
>> -1 : do not change the plan
>> 
>> This vote will remain open until Tuesday morning US eastern time.
>> 
>> -chip



RE: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Edison Su
+1[binding] on pushing back feature freeze date.

> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Friday, May 31, 2013 8:00 AM
> To: dev@cloudstack.apache.org
> Subject: [VOTE] Pushback 4.2.0 Feature Freeze
> 
> Following our discussion on the proposal to push back the feature freeze
> date for 4.2.0 [1], we have not yet achieved a clear consensus.  Well...
> we have already defined the "project rules" for figuring out what to do.
> In our project by-laws [2], we have defined a "release plan" decision as
> follows:
> 
> > 3.4.2. Release Plan
> >
> > Defines the timetable and work items for a release. The plan also
> > nominates a Release Manager.
> >
> > A lazy majority of active committers is required for approval.
> >
> > Any active committer or PMC member may call a vote. The vote must
> > occur on a project development mailing list.
> 
> And our lazy majority is defined as:
> 
> > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> > votes and more binding +1 votes than binding -1 votes.
> 
> Our current plan is the starting point, so this VOTE is a vote to change the
> current plan.  We require a 72 hour window for this vote, so IMO we are in an
> odd position where the feature freeze date is at least extended until
> Tuesday of next week.
> 
> Our current plan of record for 4.2.0 is at [3].
> 
> [1] http://markmail.org/message/vi3nsd2yo763kzua
> [2] http://s.apache.org/csbylaws
> [3]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+
> Release
> 
> 
> 
> I'd like to call a VOTE on the following:
> 
> Proposal: Extend the feature freeze date for our 4.2.0 feature release from
> today (2013-05-31) to 2013-06-28.  All other dates following the feature
> freeze date in the plan would be pushed out 4 weeks as well.
> 
> Please respond with one of the following:
> 
> +1 : change the plan as listed above
> +/-0 : no strong opinion, but leaning + or -
> -1 : do not change the plan
> 
> This vote will remain open until Tuesday morning US eastern time.
> 
> -chip


Re: Trouble with deployDataCenter.py

2013-06-03 Thread Wei ZHOU
I just built the latest source code and deployed on DevCloud, and everything
is OK.

-Wei


2013/6/3 Will Stevens 

> Has anyone else experienced this?  I just pulled in the master code into my
> branch and now I am getting this in my dev environment.
>
> [DEBUG] Executing command line: python ../marvin/marvin/deployDataCenter.py
> -i devcloud.cfg
> Traceback (most recent call last):
>   File "../marvin/marvin/deployDataCenter.py", line 517, in 
> deploy.deploy()
>   File "../marvin/marvin/deployDataCenter.py", line 500, in deploy
> self.loadCfg()
>   File "../marvin/marvin/deployDataCenter.py", line 451, in loadCfg
> apiKey, securityKey = self.registerApiKey()
>   File "../marvin/marvin/deployDataCenter.py", line 390, in registerApiKey
> listuserRes = self.testClient.getApiClient().listUsers(listuser)
>   File
>
> "/mnt/hgfs/palo_alto/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
> line 2385, in listUsers
> response = self.connection.marvin_request(command, data=postdata,
> response_type=response)
> TypeError: marvin_request() got an unexpected keyword argument 'data'
>
> Thanks,
>
> ws
>
>
> On Mon, May 6, 2013 at 5:13 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com
> > wrote:
>
> > It looks like the marvin_request method in cloudstackConnection.py does
> not
> > have a parameter named 'data'.
> >
> > I changed the signature locally to the following and it works now:
> >
> > def marvin_request(self, cmd, response_type=None, method='GET', data=''):
> >
> >
> > On Mon, May 6, 2013 at 2:59 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com
> > > wrote:
> >
> > > I don't have much Python experience, but it looks like we're trying to
> > > pass in a named parameter that doesn't exist on the receiving side.
> > >
> > > Perhaps I need to update a Python package?
> > >
> > > def listUsers(self, command, postdata={}):
> > >
> > > response = listUsersResponse()
> > >
> > > response = self.connection.marvin_request(command,
> data=postdata,
> > > response_type=response)
> > >
> > > return response
> > >
> > >
> > > On Mon, May 6, 2013 at 12:04 PM, Mike Tutkowski <
> > > mike.tutkow...@solidfire.com> wrote:
> > >
> > >> Hi,
> > >>
> > >> I just updated to the latest today and ran deployDataCenter.py to
> build
> > a
> > >> DevCloud2 environment.
> > >>
> > >> The script is having trouble. Any thoughts on this? Has this worked
> > >> recently for anyone else?
> > >>
> > >> Thanks!
> > >>
> > >> mtutkowski-LT:devcloud mtutkowski$ python
> > >> ../marvin/marvin/deployDataCenter.py -i devcloud.cfg
> > >> Traceback (most recent call last):
> > >>   File "../marvin/marvin/deployDataCenter.py", line 476, in 
> > >> deploy.deploy()
> > >>   File "../marvin/marvin/deployDataCenter.py", line 459, in deploy
> > >> self.loadCfg()
> > >>   File "../marvin/marvin/deployDataCenter.py", line 410, in loadCfg
> > >> apiKey, securityKey = self.registerApiKey()
> > >>   File "../marvin/marvin/deployDataCenter.py", line 349, in
> > registerApiKey
> > >> listuserRes = self.testClient.getApiClient().listUsers(listuser)
> > >>   File
> > >>
> >
> "/Users/mtutkowski/Documents/CloudStack/src/incubator-cloudstack/tools/marvin/marvin/cloudstackAPI/cloudstackAPIClient.py",
> > >> line 433, in listUsers
> > >> response = self.connection.marvin_request(command, data=postdata,
> > >> response_type=response)
> > >> TypeError: marvin_request() got an unexpected keyword argument 'data'
> > >>
> > >> --
> > >> *Mike Tutkowski*
> > >> *Senior CloudStack Developer, SolidFire Inc.*
> > >> e: mike.tutkow...@solidfire.com
> > >> o: 303.746.7302
> > >> Advancing the way the world uses the cloud<
> > http://solidfire.com/solution/overview/?video=play>
> > >> *™*
> > >>
> > >
> > >
> > >
> > > --
> > > *Mike Tutkowski*
> > > *Senior CloudStack Developer, SolidFire Inc.*
> > > e: mike.tutkow...@solidfire.com
> > > o: 303.746.7302
> > > Advancing the way the world uses the cloud<
> > http://solidfire.com/solution/overview/?video=play>
> > > *™*
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkow...@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud
> > *™*
> >
>


Re: [MERGE]object_store branch into master

2013-06-03 Thread Min Chen
Sure. Edison will start one soon with this context information.

Thanks
-min

On 6/3/13 10:33 AM, "John Burwell"  wrote:

>Chip/Min,
>
>For thread 1, I would like to see an expanded discussion regarding the
>need for the staging area.  In particular, what features on which
>hypervisors created the need for it.  With the wider expertise of the
>list, we may be able to find solutions to these issues that either reduce
>or eliminate the need for the cache.
>
>Thanks,
>-John
>
>On Jun 3, 2013, at 1:11 PM, Chip Childers 
>wrote:
>
>> On Mon, Jun 03, 2013 at 05:09:24PM +, Min Chen wrote:
>>> Chip/John,
>>> 
>>> This thread has become very hard to follow due to several technical
>>> debates mixed together. Chip earlier made a good suggestion that we
>>>should
>>> start separate threads for several important architectural issues
>>>raised
>>> by John so that community can get clear grasp on the debating issues
>>>and
>>> reach a wise conclusion. If there is no objection, we are going to do
>>>that
>>> right now. If we understood correctly by following through this
>>>thread, we
>>> boiled down to the following 3 major technical issues:
>>> 1. Missing capacity planning in NFS cache storage implementation.
>>> 2. Error handling in case of S3 as native secondary storage.
>>> 3. S3TemplateDownloader implementation issue.
>>> If we didn't miss anything, we will start these 3 DISCUSS threads
>>>shortly.
>>> 
>>> Thanks
>>> -min
>> 
>> +1 - do it!
>



[DISCUSS]OBJECT_STORE branch design: Error handling in case of S3 as native secondary storage

2013-06-03 Thread Min Chen
Hi there,
This thread is to address John's comments about the missing error handling for S3 as 
secondary storage in the object_store branch implementation. From the previous merge 
email thread, I realize that we may not have explained clearly in the FS how S3 should 
work in the new object_store branch, which caused several confusions. Let's make it 
clear here.

1. The goal of the object_store branch is to make S3 serve as NATIVE secondary 
storage, not just a backup device behind NFS secondary storage as in the master branch. 
We want users to be able to trust that their data (templates, snapshots, volumes) 
are stored in the S3 object store if they choose S3 as their CloudStack secondary 
storage. When users register a template to S3, we directly issue S3 API calls to 
download the template into the S3 object store, instead of downloading it to 
NFS secondary storage and then syncing it to S3 on a schedule as the master branch does. 
When we tell users that their data is READY on their S3 secondary storage, it 
really means that it is ready to use from S3. Master offers no such guarantee: with 
S3 as a backup device, a snapshot may only be ready on NFS secondary storage, not 
in S3, due to a network connection issue, yet we mislead users into thinking that 
their snapshot is ready on S3.

2. The NFS cache only comes into the picture when users choose S3 as their native 
secondary storage. The data stored in the NFS cache is temporary and serves 
as an intermediate transfer stage for CloudStack to manipulate data stored in 
S3; our design does not require that this intermediate data 
persist in the NFS cache forever for CloudStack to stay functional. This is 
quite different from the role of NFS secondary storage for S3 in the master branch, 
where we have to keep data in NFS secondary storage since we cannot 
guarantee that data is READY on S3, due to the background sync issue I will mention 
in a minute. Theoretically speaking, we should be able to implement a simple 
LRU or FIFO cache algorithm (assuming the 4.2 
feature freeze extension vote passes) to age out old cache data without impacting any 
CloudStack functionality that uses S3. I am not sure the same is true for the NFS 
secondary storage data for S3 in the master branch; based on my reading of the code it 
is not, but maybe I am just too new to this part of master.

3. We have to admit that in the current object_store implementation, we only try 
the S3 operations (put, get, etc.) once, and if an operation fails we just report an 
error and the user has to retry manually. On this aspect, we can definitely make 
it better by adding a retry mechanism based on a globally configured retry 
parameter. However, from my past experience, infinite retries against these external 
devices are always a bad idea. Also, we disagree with John's 
comment that dropping the previous background sync process is "a step back from 
the current Swift and S3 implementations present in 4.1.0". We agree that 
the current master background sync process relieves the admin from manual retries for 
some S3 errors (BTW, some errors will never recover even with a background 
process, for example when capacity is full), but it also causes another severe 
drawback: it gives users the misconception that their data is READY in S3 when it 
actually is not. Here is a simple example: users take a snapshot in one zone and 
back it up to S3; given S3's region-wide nature, it is very natural for them to 
think that they can immediately restore this snapshot in another zone. However, 
with the current master implementation this may fail. Due to an S3 network connection 
issue at backup time, the snapshot may not be ready on S3 and may only be stored in 
the zone-wide NFS secondary storage, and the next background sync has not kicked in 
yet. If users then try to restore, the operation is doomed to fail because the proper 
snapshot cannot be found. In our opinion, enhancing the current object_store 
implementation with some configured retry logic is a good compromise.

Thanks.
-min
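
To make point 3 concrete, the configured retry logic could look roughly like the helper
below. This is only a sketch: the setting names and the boto calls are illustrative and
are not taken from the object_store branch.

# Bounded retry around an S3 upload; surfaces an error instead of reporting READY.
import time
import boto
from boto.s3.key import Key

S3_OPERATION_RETRIES = 3            # would come from a global configuration value
S3_RETRY_BACKOFF_SECONDS = 5

def put_with_retry(bucket, key_name, local_path):
    last_error = None
    for attempt in range(1, S3_OPERATION_RETRIES + 1):
        try:
            key = Key(bucket, key_name)
            key.set_contents_from_filename(local_path)
            return True                              # the object really is in S3 now
        except Exception as e:                       # timeouts, resets, 5xx, ...
            last_error = e
            time.sleep(S3_RETRY_BACKOFF_SECONDS * attempt)
    raise RuntimeError('S3 upload failed after %d attempts: %s'
                       % (S3_OPERATION_RETRIES, last_error))

# Usage sketch:
#   conn = boto.connect_s3(access_key, secret_key)
#   put_with_retry(conn.get_bucket('secondary-storage'), 'template/2/200.vhd', '/tmp/200.vhd')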



RE: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Will Chan
+1 [Binding]

It looks like there are a couple of last minute features that would make 4.1 a 
superb release.   I would say that we should not allow any new features that 
haven't already been proposed and that the extension does not go beyond 4 
weeks.  If beyond that, I'm a  -1.

Will

> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Friday, May 31, 2013 8:00 AM
> To: dev@cloudstack.apache.org
> Subject: [VOTE] Pushback 4.2.0 Feature Freeze
> 
> Following our discussion on the proposal to push back the feature freeze
> date for 4.2.0 [1], we have not yet achieved a clear consensus.  Well...
> we have already defined the "project rules" for figuring out what to do.
> In out project by-laws [2], we have defined a "release plan" decision as
> follows:
> 
> > 3.4.2. Release Plan
> >
> > Defines the timetable and work items for a release. The plan also
> > nominates a Release Manager.
> >
> > A lazy majority of active committers is required for approval.
> >
> > Any active committer or PMC member may call a vote. The vote must
> > occur on a project development mailing list.
> 
> And our lazy majority is defined as:
> 
> > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> > votes and more binding +1 votes than binding -1 votes.
> 
> Our current plan is the starting point, so this VOTE is a vote to change the
> current plan.  We require a 72 hour window for this vote, so IMO we are in
> an odd position where the feature freeze date is at least extended until
> Tuesday of next week.
> 
> Our current plan of record for 4.2.0 is at [3].
> 
> [1] http://markmail.org/message/vi3nsd2yo763kzua
> [2] http://s.apache.org/csbylaws
> [3]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2
> +Release
> 
> 
> 
> I'd like to call a VOTE on the following:
> 
> Proposal: Extend the feature freeze date for our 4.2.0 feature release from
> today (2013-05-31) to 2013-06-28.  All other dates following the feature
> freeze date in the plan would be pushed out 4 weeks as well.
> 
> Please respond with one of the following:
> 
> +1 : change the plan as listed above
> +/-0 : no strong opinion, but leaning + or -
> -1 : do not change the plan
> 
> This vote will remain open until Tuesday morning US eastern time.
> 
> -chip


RE: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Animesh Chaturvedi

+1 [binding]
> -Original Message-
> From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com]
> Sent: Monday, June 03, 2013 10:32 AM
> To: dev@cloudstack.apache.org
> Subject: RE: [VOTE] Pushback 4.2.0 Feature Freeze
> 
> 
> +1 to move feature freeze date to 6/28 to get in the features proposed
> earlier for 4.2 and have a longer bug fix cycle.
> 
> After moving 103 open 4.1 targeted defects to 4.2 we will have total of
> 367 open defects for 4.2. I hope with this change we are able to resolve
> lot more defects before RC and we can get 4.2 out the door with fewer RC
> re-spin.
> 
> Since all the other dates move out by 4 weeks means the feature proposal
> freeze date which was originally at 5/04 would have been 6/1 which means
> we are already past that, so no new feature proposals for 4.2. Any new
> feature proposal will have to be targeted for 4.3 or beyond.
> 
> 
> Thanks
> 
> Animesh
> 
> 
> 
> 
> 
> > -Original Message-
> > From: Chip Childers [mailto:chip.child...@sungard.com]
> > Sent: Friday, May 31, 2013 8:00 AM
> > To: dev@cloudstack.apache.org
> > Subject: [VOTE] Pushback 4.2.0 Feature Freeze
> >
> > Following our discussion on the proposal to push back the feature
> > freeze date for 4.2.0 [1], we have not yet achieved a clear consensus.
> Well...
> > we have already defined the "project rules" for figuring out what to
> do.
> > In out project by-laws [2], we have defined a "release plan" decision
> > as
> > follows:
> >
> > > 3.4.2. Release Plan
> > >
> > > Defines the timetable and work items for a release. The plan also
> > > nominates a Release Manager.
> > >
> > > A lazy majority of active committers is required for approval.
> > >
> > > Any active committer or PMC member may call a vote. The vote must
> > > occur on a project development mailing list.
> >
> > And our lazy majority is defined as:
> >
> > > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> > > votes and more binding +1 votes than binding -1 votes.
> >
> > Our current plan is the starting point, so this VOTE is a vote to
> > change the current plan.  We require a 72 hour window for this vote,
> > so IMO we are in an odd position where the feature freeze date is at
> > least extended until Tuesday of next week.
> >
> > Our current plan of record for 4.2.0 is at [3].
> >
> > [1] http://markmail.org/message/vi3nsd2yo763kzua
> > [2] http://s.apache.org/csbylaws
> > [3]
> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+
> > Re
> > lease
> >
> > 
> >
> > I'd like to call a VOTE on the following:
> >
> > Proposal: Extend the feature freeze date for our 4.2.0 feature release
> > from today (2013-05-31) to 2013-06-28.  All other dates following the
> > feature freeze date in the plan would be pushed out 4 weeks as well.
> >
> > Please respond with one of the following:
> >
> > +1 : change the plan as listed above
> > +/-0 : no strong opinion, but leaning + or -
> > -1 : do not change the plan
> >
> > This vote will remain open until Tuesday morning US eastern time.
> >
> > -chip


RE: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Alex Huang
+1 [binding]

--Alex

> -Original Message-
> From: Will Chan [mailto:will.c...@citrix.com]
> Sent: Monday, June 3, 2013 11:08 AM
> To: dev@cloudstack.apache.org
> Subject: RE: [VOTE] Pushback 4.2.0 Feature Freeze
> 
> +1 [Binding]
> 
> It looks like there are a couple of last minute features that would make 4.1 a
> superb release.   I would say that we should not allow any new features that
> haven't already been proposed and that the extension does not go beyond 4
> weeks.  If beyond that, I'm a  -1.
> 
> Will
> 
> > -Original Message-
> > From: Chip Childers [mailto:chip.child...@sungard.com]
> > Sent: Friday, May 31, 2013 8:00 AM
> > To: dev@cloudstack.apache.org
> > Subject: [VOTE] Pushback 4.2.0 Feature Freeze
> >
> > Following our discussion on the proposal to push back the feature
> > freeze date for 4.2.0 [1], we have not yet achieved a clear consensus.
> Well...
> > we have already defined the "project rules" for figuring out what to do.
> > In out project by-laws [2], we have defined a "release plan" decision
> > as
> > follows:
> >
> > > 3.4.2. Release Plan
> > >
> > > Defines the timetable and work items for a release. The plan also
> > > nominates a Release Manager.
> > >
> > > A lazy majority of active committers is required for approval.
> > >
> > > Any active committer or PMC member may call a vote. The vote must
> > > occur on a project development mailing list.
> >
> > And our lazy majority is defined as:
> >
> > > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> > > votes and more binding +1 votes than binding -1 votes.
> >
> > Our current plan is the starting point, so this VOTE is a vote to
> > change the current plan.  We require a 72 hour window for this vote,
> > so IMO we are in an odd position where the feature freeze date is at
> > least extended until Tuesday of next week.
> >
> > Our current plan of record for 4.2.0 is at [3].
> >
> > [1] http://markmail.org/message/vi3nsd2yo763kzua
> > [2] http://s.apache.org/csbylaws
> > [3]
> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2
> > +Release
> >
> > 
> >
> > I'd like to call a VOTE on the following:
> >
> > Proposal: Extend the feature freeze date for our 4.2.0 feature release
> > from today (2013-05-31) to 2013-06-28.  All other dates following the
> > feature freeze date in the plan would be pushed out 4 weeks as well.
> >
> > Please respond with one of the following:
> >
> > +1 : change the plan as listed above
> > +/-0 : no strong opinion, but leaning + or -
> > -1 : do not change the plan
> >
> > This vote will remain open until Tuesday morning US eastern time.
> >
> > -chip


Re: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Kelven Yang
+1.

4.2 carries some important changes for the long run; giving it more time
would help ensure a smooth release in the end.

Kelven

On 6/3/13 10:55 AM, "Edison Su"  wrote:

>+1[binding] on pushing back feature freeze date.
>
>> -Original Message-
>> From: Chip Childers [mailto:chip.child...@sungard.com]
>> Sent: Friday, May 31, 2013 8:00 AM
>> To: dev@cloudstack.apache.org
>> Subject: [VOTE] Pushback 4.2.0 Feature Freeze
>> 
>> Following our discussion on the proposal to push back the feature freeze
>> date for 4.2.0 [1], we have not yet achieved a clear consensus.  Well...
>> we have already defined the "project rules" for figuring out what to do.
>> In out project by-laws [2], we have defined a "release plan" decision as
>> follows:
>> 
>> > 3.4.2. Release Plan
>> >
>> > Defines the timetable and work items for a release. The plan also
>> > nominates a Release Manager.
>> >
>> > A lazy majority of active committers is required for approval.
>> >
>> > Any active committer or PMC member may call a vote. The vote must
>> > occur on a project development mailing list.
>> 
>> And our lazy majority is defined as:
>> 
>> > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
>> > votes and more binding +1 votes than binding -1 votes.
>> 
>> Our current plan is the starting point, so this VOTE is a vote to
>>change the
>> current plan.  We require a 72 hour window for this vote, so IMO we are
>>in an
>> odd position where the feature freeze date is at least extended until
>> Tuesday of next week.
>> 
>> Our current plan of record for 4.2.0 is at [3].
>> 
>> [1] http://markmail.org/message/vi3nsd2yo763kzua
>> [2] http://s.apache.org/csbylaws
>> [3]
>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+
>> Release
>> 
>> 
>> 
>> I'd like to call a VOTE on the following:
>> 
>> Proposal: Extend the feature freeze date for our 4.2.0 feature release
>>from
>> today (2013-05-31) to 2013-06-28.  All other dates following the feature
>> freeze date in the plan would be pushed out 4 weeks as well.
>> 
>> Please respond with one of the following:
>> 
>> +1 : change the plan as listed above
>> +/-0 : no strong opinion, but leaning + or -
>> -1 : do not change the plan
>> 
>> This vote will remain open until Tuesday morning US eastern time.
>> 
>> -chip



Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread John Burwell
Mike,

Can you explain why the the storage driver is hypervisor specific?

Thanks,
-John

On Jun 3, 2013, at 1:21 PM, Mike Tutkowski  wrote:

> Yes, ultimately I would like to support all hypervisors that CloudStack
> supports. I think I'm just out of time for 4.2 to get KVM in.
> 
> Right now this plug-in supports XenServer. Depending on what we do with
> regards to 4.2 feature freeze, I have it working for VMware in my sandbox,
> as well.
> 
> Also, just to be clear, this is all in regards to Disk Offerings. I plan to
> support Compute Offerings post 4.2.
> 
> 
> On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage wrote:
> 
>> Is there any plan on supporting KVM in the patch cycle post 4.2?
>> 
>> - Original Message -
>> From: "Mike Tutkowski" 
>> To: dev@cloudstack.apache.org
>> Sent: Monday, June 3, 2013 10:12:32 AM
>> Subject: Re: [MERGE] disk_io_throttling to MASTER
>> 
>> I agree on merging Wei's feature first, then mine.
>> 
>> If his feature is for KVM only, then it is a non issue as I don't support
>> KVM in 4.2.
>> 
>> 
>> On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU  wrote:
>> 
>>> John,
>>> 
>>> For the billing, as no one works on billing now, users need to calculate
>>> the billing by themselves. They can get the service_offering and
>>> disk_offering of a VMs and volumes for calculation. Of course it is
>> better
>>> to tell user the exact limitation value of individual volume, and network
>>> rate limitation for nics as well. I can work on it later. Do you think it
>>> is a part of I/O throttling?
>>> 
>>> Sorry my misunstand the second the question.
>>> 
>>> Agree with what you said about the two features.
>>> 
>>> -Wei
>>> 
>>> 
>>> 2013/6/3 John Burwell 
>>> 
 Wei,
 
 
 On Jun 3, 2013, at 2:13 AM, Wei ZHOU  wrote:
 
> Hi John, Mike
> 
> I hope Mike's aswer helps you. I am trying to adding more.
> 
> (1) I think billing should depend on IO statistics rather than IOPS
> limitation. Please review disk_io_stat if you have time.
>> disk_io_stat
 can
> get the IO statistics including bytes/iops read/write for an
>> individual
> virtual machine.
 
 Going by the AWS model, customers are billed more for volumes with
 provisioned IOPS, as well as, for those operations (
 http://aws.amazon.com/ebs/).  I would imagine our users would like the
 option to employ similar cost models.  Could an operator implement
>> such a
 billing model in the current patch?
 
> 
> (2) Do you mean IOPS runtime change? KVM supports setting IOPS/BPS
> limitation for a running virtual machine through command line.
>> However,
> CloudStack does not support changing the parameters of a created
>>> offering
> (computer offering or disk offering).
 
 I meant at the Java interface level.  I apologize for being unclear.
>> Can
 we more generalize allocation algorithms with a set of interfaces that
 describe the service guarantees provided by a resource?
 
> 
> (3) It is a good question. Maybe it is better to commit Mike's patch
 after
> disk_io_throttling as Mike needs to consider the limitation in
>>> hypervisor
> type, I think.
 
 I will expand on my thoughts in a later response to Mike regarding the
 touch points between these two features.  I think that
>> disk_io_throttling
 will need to be merged before SolidFire, but I think we need closer
 coordination between the branches (possibly have have solidfire track
 disk_io_throttling) to coordinate on this issue.
 
> 
> - Wei
> 
> 
> 2013/6/3 John Burwell 
> 
>> Mike,
>> 
>> The things I want to understand are the following:
>> 
>>  1) Is there value in capturing IOPS policies be captured in a
>> common
>> data model (e.g. for billing/usage purposes, expressing offerings).
>>   2) Should there be a common interface model for reasoning about
>> IOP
>> provisioning at runtime?
>>   3) How are conflicting provisioned IOPS configurations between a
>> hypervisor and storage device reconciled?  In particular, a scenario
 where
>> is lead to believe (and billed) for more IOPS configured for a VM
>>> than a
>> storage device has been configured to deliver.  Another scenario
>>> could a
>> consistent configuration between a VM and a storage device at
>> creation
>> time, but a later modification to storage device introduces logical
>> inconsistency.
>> 
>> Thanks,
>> -John
>> 
>> On Jun 2, 2013, at 8:38 PM, Mike Tutkowski <
 mike.tutkow...@solidfire.com>
>> wrote:
>> 
>> Hi John,
>> 
>> I believe Wei's feature deals with controlling the max number of
>> IOPS
 from
>> the hypervisor side.
>> 
>> My feature is focused on controlling IOPS from the storage system
>>> side.
>> 
>> I hope that helps. :)
>> 
>> 
>> On Sun, Jun 2, 2013 at 6:35 PM, John

Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread Mike Tutkowski
Hi John,

The storage plug-in - by itself - is hypervisor agnostic.

The issue is with the volume-attach logic (in the agent code). The storage
framework calls into the plug-in to have it create a volume as needed, but
when the time comes to attach the volume to a hypervisor, the attach logic
has to be smart enough to recognize it's being invoked on zone-wide storage
(where the volume has just been created) and create, say, a storage
repository (for XenServer) or a datastore (for VMware) to make use of the
volume that was just created.

I've been spending most of my time recently making the attach logic work in
the agent code.

Does that clear it up?

Thanks!


On Mon, Jun 3, 2013 at 12:48 PM, John Burwell  wrote:

> Mike,
>
> Can you explain why the the storage driver is hypervisor specific?
>
> Thanks,
> -John
>
> On Jun 3, 2013, at 1:21 PM, Mike Tutkowski 
> wrote:
>
> > Yes, ultimately I would like to support all hypervisors that CloudStack
> > supports. I think I'm just out of time for 4.2 to get KVM in.
> >
> > Right now this plug-in supports XenServer. Depending on what we do with
> > regards to 4.2 feature freeze, I have it working for VMware in my
> sandbox,
> > as well.
> >
> > Also, just to be clear, this is all in regards to Disk Offerings. I plan
> to
> > support Compute Offerings post 4.2.
> >
> >
> > On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage  >wrote:
> >
> >> Is there any plan on supporting KVM in the patch cycle post 4.2?
> >>
> >> - Original Message -
> >> From: "Mike Tutkowski" 
> >> To: dev@cloudstack.apache.org
> >> Sent: Monday, June 3, 2013 10:12:32 AM
> >> Subject: Re: [MERGE] disk_io_throttling to MASTER
> >>
> >> I agree on merging Wei's feature first, then mine.
> >>
> >> If his feature is for KVM only, then it is a non issue as I don't
> support
> >> KVM in 4.2.
> >>
> >>
> >> On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU  wrote:
> >>
> >>> John,
> >>>
> >>> For the billing, as no one works on billing now, users need to
> calculate
> >>> the billing by themselves. They can get the service_offering and
> >>> disk_offering of a VMs and volumes for calculation. Of course it is
> >> better
> >>> to tell user the exact limitation value of individual volume, and
> network
> >>> rate limitation for nics as well. I can work on it later. Do you think
> it
> >>> is a part of I/O throttling?
> >>>
> >>> Sorry my misunstand the second the question.
> >>>
> >>> Agree with what you said about the two features.
> >>>
> >>> -Wei
> >>>
> >>>
> >>> 2013/6/3 John Burwell 
> >>>
>  Wei,
> 
> 
>  On Jun 3, 2013, at 2:13 AM, Wei ZHOU  wrote:
> 
> > Hi John, Mike
> >
> > I hope Mike's aswer helps you. I am trying to adding more.
> >
> > (1) I think billing should depend on IO statistics rather than IOPS
> > limitation. Please review disk_io_stat if you have time.
> >> disk_io_stat
>  can
> > get the IO statistics including bytes/iops read/write for an
> >> individual
> > virtual machine.
> 
>  Going by the AWS model, customers are billed more for volumes with
>  provisioned IOPS, as well as, for those operations (
>  http://aws.amazon.com/ebs/).  I would imagine our users would like
> the
>  option to employ similar cost models.  Could an operator implement
> >> such a
>  billing model in the current patch?
> 
> >
> > (2) Do you mean IOPS runtime change? KVM supports setting IOPS/BPS
> > limitation for a running virtual machine through command line.
> >> However,
> > CloudStack does not support changing the parameters of a created
> >>> offering
> > (computer offering or disk offering).
> 
>  I meant at the Java interface level.  I apologize for being unclear.
> >> Can
>  we more generalize allocation algorithms with a set of interfaces that
>  describe the service guarantees provided by a resource?
> 
> >
> > (3) It is a good question. Maybe it is better to commit Mike's patch
>  after
> > disk_io_throttling as Mike needs to consider the limitation in
> >>> hypervisor
> > type, I think.
> 
>  I will expand on my thoughts in a later response to Mike regarding the
>  touch points between these two features.  I think that
> >> disk_io_throttling
>  will need to be merged before SolidFire, but I think we need closer
>  coordination between the branches (possibly have have solidfire track
>  disk_io_throttling) to coordinate on this issue.
> 
> >
> > - Wei
> >
> >
> > 2013/6/3 John Burwell 
> >
> >> Mike,
> >>
> >> The things I want to understand are the following:
> >>
> >>  1) Is there value in capturing IOPS policies be captured in a
> >> common
> >> data model (e.g. for billing/usage purposes, expressing offerings).
> >>   2) Should there be a common interface model for reasoning about
> >> IOP
> >> provisioning at runtime?
> >>   3) How are conflicting provi

Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread Mike Tutkowski
To delve into this in a bit more detail:

Prior to 4.2 and aside from one setup method for XenServer, the admin had
to first create a volume on the storage system, then go into the hypervisor
to set up a data structure to make use of the volume (ex. a storage
repository on XenServer or a datastore on ESX(i)). VMs and data disks then
shared this storage system's volume.

With Edison's new storage framework, storage need no longer be so static
and you can easily create a 1:1 relationship between a storage system's
volume and the VM's data disk (necessary for storage Quality of Service).

You can now write a plug-in that is called to dynamically create and delete
volumes as needed.

The problem that the storage framework did not address is in creating and
deleting the hypervisor-specific data structure when performing an
attach/detach.

That being the case, I've been enhancing it to do so. I've got XenServer
worked out and submitted. I've got ESX(i) in my sandbox and can submit this
if we extend the 4.2 freeze date.

Does that help a bit? :)


On Mon, Jun 3, 2013 at 1:03 PM, Mike Tutkowski  wrote:

> Hi John,
>
> The storage plug-in - by itself - is hypervisor agnostic.
>
> The issue is with the volume-attach logic (in the agent code). The storage
> framework calls into the plug-in to have it create a volume as needed, but
> when the time comes to attach the volume to a hypervisor, the attach logic
> has to be smart enough to recognize it's being invoked on zone-wide storage
> (where the volume has just been created) and create, say, a storage
> repository (for XenServer) or a datastore (for VMware) to make use of the
> volume that was just created.
>
> I've been spending most of my time recently making the attach logic work
> in the agent code.
>
> Does that clear it up?
>
> Thanks!
>
>
> On Mon, Jun 3, 2013 at 12:48 PM, John Burwell  wrote:
>
>> Mike,
>>
>> Can you explain why the the storage driver is hypervisor specific?
>>
>> Thanks,
>> -John
>>
>> On Jun 3, 2013, at 1:21 PM, Mike Tutkowski 
>> wrote:
>>
>> > Yes, ultimately I would like to support all hypervisors that CloudStack
>> > supports. I think I'm just out of time for 4.2 to get KVM in.
>> >
>> > Right now this plug-in supports XenServer. Depending on what we do with
>> > regards to 4.2 feature freeze, I have it working for VMware in my
>> sandbox,
>> > as well.
>> >
>> > Also, just to be clear, this is all in regards to Disk Offerings. I
>> plan to
>> > support Compute Offerings post 4.2.
>> >
>> >
>> > On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage > >wrote:
>> >
>> >> Is there any plan on supporting KVM in the patch cycle post 4.2?
>> >>
>> >> - Original Message -
>> >> From: "Mike Tutkowski" 
>> >> To: dev@cloudstack.apache.org
>> >> Sent: Monday, June 3, 2013 10:12:32 AM
>> >> Subject: Re: [MERGE] disk_io_throttling to MASTER
>> >>
>> >> I agree on merging Wei's feature first, then mine.
>> >>
>> >> If his feature is for KVM only, then it is a non issue as I don't
>> support
>> >> KVM in 4.2.
>> >>
>> >>
>> >> On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU 
>> wrote:
>> >>
>> >>> John,
>> >>>
>> >>> For the billing, as no one works on billing now, users need to
>> calculate
>> >>> the billing by themselves. They can get the service_offering and
>> >>> disk_offering of a VMs and volumes for calculation. Of course it is
>> >> better
>> >>> to tell user the exact limitation value of individual volume, and
>> network
>> >>> rate limitation for nics as well. I can work on it later. Do you
>> think it
>> >>> is a part of I/O throttling?
>> >>>
>> >>> Sorry my misunstand the second the question.
>> >>>
>> >>> Agree with what you said about the two features.
>> >>>
>> >>> -Wei
>> >>>
>> >>>
>> >>> 2013/6/3 John Burwell 
>> >>>
>>  Wei,
>> 
>> 
>>  On Jun 3, 2013, at 2:13 AM, Wei ZHOU  wrote:
>> 
>> > Hi John, Mike
>> >
>> > I hope Mike's aswer helps you. I am trying to adding more.
>> >
>> > (1) I think billing should depend on IO statistics rather than IOPS
>> > limitation. Please review disk_io_stat if you have time.
>> >> disk_io_stat
>>  can
>> > get the IO statistics including bytes/iops read/write for an
>> >> individual
>> > virtual machine.
>> 
>>  Going by the AWS model, customers are billed more for volumes with
>>  provisioned IOPS, as well as, for those operations (
>>  http://aws.amazon.com/ebs/).  I would imagine our users would like
>> the
>>  option to employ similar cost models.  Could an operator implement
>> >> such a
>>  billing model in the current patch?
>> 
>> >
>> > (2) Do you mean IOPS runtime change? KVM supports setting IOPS/BPS
>> > limitation for a running virtual machine through command line.
>> >> However,
>> > CloudStack does not support changing the parameters of a created
>> >>> offering
>> > (computer offering or disk offering).
>> 
>>  I meant at the Java interface level.  I apologize for bein

[DISCUSS]Object_Store design: S3TemplateDownloader Implementation Issues

2013-06-03 Thread Min Chen
Hi there,

This thread is to address John's review comments on S3TemplateDownloader 
implementation. From previous thread, there are two major concerns for this 
class implementation.

1. We have used the HttpClient library in this class. For this comment, I can 
explain why I need HttpClient when downloading an object to S3. Currently, our 
download logic is like this:

-- get the object's total size and an InputStream from an HTTP URL by invoking an 
HttpClient library method.
-- invoke the S3Utils API to download the InputStream into S3 (this part is purely 
the S3 API) and get the actual object size stored on S3 once it completes.
-- compare the total object size with the actual downloaded size; if they are not 
equal, report a truncation error.

John's concern is with step 1 above. We can get rid of the HttpClient library for 
obtaining an InputStream from a URL, but I don't know how to easily get the object 
size from a URL. In a previous email, John, you mentioned that I could use the S3 
API getObjectMetaData to get the object size, but my understanding is that that 
API only applies to an object already in S3. In my flow, I need the size of an 
object that is about to be downloaded to S3 and is not yet in S3. Willing to hear 
your suggestion here.
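
For what it is worth, one way to drop the HttpClient dependency for step 1 is a 
plain HEAD request with java.net.HttpURLConnection, as in the sketch below. This 
assumes the template source answers HEAD with a Content-Length header; if it does 
not, the caller would have to fall back to counting bytes while streaming.

import java.net.HttpURLConnection;
import java.net.URL;

public class RemoteSizeProbe {

    /** Returns the remote Content-Length, or -1 if the server does not report one. */
    public static long contentLength(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try {
            conn.setRequestMethod("HEAD"); // headers only, no body transfer
            conn.connect();
            return conn.getContentLengthLong();
        } finally {
            conn.disconnect();
        }
    }
}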

2. John pointed out an issue with the current download method implementation in 
this class: I used the low-level S3 API PutObjectRequest to put an InputStream to 
S3, and this has a bug in that it cannot handle objects > 5GB. That is true; after 
reading several pieces of S3 documentation on multipart upload, I am sorry that I 
am not an S3 expert and didn't know that earlier when I implemented this method. 
To fix it, it should not take long to code based on this AWS sample 
(http://docs.aws.amazon.com/AmazonS3/latest/dev/HLTrackProgressMPUJava.html) using 
TransferManager; it just needs some testing time. IMHO, this bug should not become 
a major issue blocking the object_store branch merge; it only needs several days 
to fix, assuming that we have the extension. Even without the extension, I 
personally think this can definitely be resolved in master with a simple bug fix.
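
For reference, a minimal sketch along the lines of that AWS sample might look like 
the following; bucket and key are placeholders, and this is only an outline of the 
TransferManager-based fix, not the actual patch.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.InputStream;

public class MultipartPutSketch {

    public static void put(AmazonS3 s3, String bucket, String key,
                           InputStream data, long contentLength) throws Exception {
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(contentLength); // lets the SDK split the stream into parts

        TransferManager tm = new TransferManager(s3);
        try {
            Upload upload = tm.upload(bucket, key, data, metadata);
            upload.waitForCompletion(); // blocks until every part, even for objects > 5GB, is done
        } finally {
            tm.shutdownNow(false); // keep the caller's AmazonS3 client open
        }
    }
}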

Thanks
-min



Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread John Burwell
Mike,

It is generally odd to me that any operation in the Storage layer would
understand or care about hypervisor details.  I expect to see the Storage services
expose a set of operations that can be composed/driven by the Hypervisor
implementations to allocate space/create structures per their needs.  If we
don't invert this dependency, we are going to end up with a massive n-to-n
problem that will make the system increasingly difficult to maintain and
enhance.  Am I understanding correctly that the Xen-specific SolidFire code is
located in the CitrixResourceBase class?
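
Purely to illustrate the inversion being described (and not as a proposal for 
concrete class names), a sketch could look like this: the storage side exposes 
volume operations, and each hypervisor resource composes them, so the storage 
layer never needs to know about SRs or datastores. All names below are 
hypothetical.

public interface PrimaryStorageService {
    /** Creates a raw volume (e.g. an iSCSI LUN) and returns how to reach it. */
    VolumeHandle createVolume(long sizeInBytes, Long minIops, Long maxIops);

    void deleteVolume(VolumeHandle handle);
}

// Minimal value object for the sketch: where the new volume lives.
class VolumeHandle {
    String iqn;      // iSCSI qualified name
    String targetIp; // storage system address
}

// A hypervisor resource drives the storage operations it needs; the
// hypervisor-specific structure (a XenServer SR here) is created on this side
// of the boundary, not inside the storage layer.
class XenServerAttachFlow {
    private final PrimaryStorageService storage;

    XenServerAttachFlow(PrimaryStorageService storage) {
        this.storage = storage;
    }

    void attach(long sizeInBytes) {
        VolumeHandle handle = storage.createVolume(sizeInBytes, null, null);
        // ... build the XenServer storage repository over handle.iqn / handle.targetIp ...
    }
}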

Thanks,
-John


On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski  wrote:

> To delve into this in a bit more detail:
>
> Prior to 4.2 and aside from one setup method for XenServer, the admin had
> to first create a volume on the storage system, then go into the hypervisor
> to set up a data structure to make use of the volume (ex. a storage
> repository on XenServer or a datastore on ESX(i)). VMs and data disks then
> shared this storage system's volume.
>
> With Edison's new storage framework, storage need no longer be so static
> and you can easily create a 1:1 relationship between a storage system's
> volume and the VM's data disk (necessary for storage Quality of Service).
>
> You can now write a plug-in that is called to dynamically create and delete
> volumes as needed.
>
> The problem that the storage framework did not address is in creating and
> deleting the hypervisor-specific data structure when performing an
> attach/detach.
>
> That being the case, I've been enhancing it to do so. I've got XenServer
> worked out and submitted. I've got ESX(i) in my sandbox and can submit this
> if we extend the 4.2 freeze date.
>
> Does that help a bit? :)
>
>
> On Mon, Jun 3, 2013 at 1:03 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com
> > wrote:
>
> > Hi John,
> >
> > The storage plug-in - by itself - is hypervisor agnostic.
> >
> > The issue is with the volume-attach logic (in the agent code). The
> storage
> > framework calls into the plug-in to have it create a volume as needed,
> but
> > when the time comes to attach the volume to a hypervisor, the attach
> logic
> > has to be smart enough to recognize it's being invoked on zone-wide
> storage
> > (where the volume has just been created) and create, say, a storage
> > repository (for XenServer) or a datastore (for VMware) to make use of the
> > volume that was just created.
> >
> > I've been spending most of my time recently making the attach logic work
> > in the agent code.
> >
> > Does that clear it up?
> >
> > Thanks!
> >
> >
> > On Mon, Jun 3, 2013 at 12:48 PM, John Burwell 
> wrote:
> >
> >> Mike,
> >>
> >> Can you explain why the the storage driver is hypervisor specific?
> >>
> >> Thanks,
> >> -John
> >>
> >> On Jun 3, 2013, at 1:21 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com>
> >> wrote:
> >>
> >> > Yes, ultimately I would like to support all hypervisors that
> CloudStack
> >> > supports. I think I'm just out of time for 4.2 to get KVM in.
> >> >
> >> > Right now this plug-in supports XenServer. Depending on what we do
> with
> >> > regards to 4.2 feature freeze, I have it working for VMware in my
> >> sandbox,
> >> > as well.
> >> >
> >> > Also, just to be clear, this is all in regards to Disk Offerings. I
> >> plan to
> >> > support Compute Offerings post 4.2.
> >> >
> >> >
> >> > On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage <
> kel...@bbits.ca
> >> >wrote:
> >> >
> >> >> Is there any plan on supporting KVM in the patch cycle post 4.2?
> >> >>
> >> >> - Original Message -
> >> >> From: "Mike Tutkowski" 
> >> >> To: dev@cloudstack.apache.org
> >> >> Sent: Monday, June 3, 2013 10:12:32 AM
> >> >> Subject: Re: [MERGE] disk_io_throttling to MASTER
> >> >>
> >> >> I agree on merging Wei's feature first, then mine.
> >> >>
> >> >> If his feature is for KVM only, then it is a non issue as I don't
> >> support
> >> >> KVM in 4.2.
> >> >>
> >> >>
> >> >> On Mon, Jun 3, 2013 at 8:55 AM, Wei ZHOU 
> >> wrote:
> >> >>
> >> >>> John,
> >> >>>
> >> >>> For the billing, as no one works on billing now, users need to
> >> calculate
> >> >>> the billing by themselves. They can get the service_offering and
> >> >>> disk_offering of a VMs and volumes for calculation. Of course it is
> >> >> better
> >> >>> to tell user the exact limitation value of individual volume, and
> >> network
> >> >>> rate limitation for nics as well. I can work on it later. Do you
> >> think it
> >> >>> is a part of I/O throttling?
> >> >>>
> >> >>> Sorry my misunstand the second the question.
> >> >>>
> >> >>> Agree with what you said about the two features.
> >> >>>
> >> >>> -Wei
> >> >>>
> >> >>>
> >> >>> 2013/6/3 John Burwell 
> >> >>>
> >>  Wei,
> >> 
> >> 
> >>  On Jun 3, 2013, at 2:13 AM, Wei ZHOU 
> wrote:
> >> 
> >> > Hi John, Mike
> >> >
> >> > I hope Mike's aswer helps you. I am trying to adding more.
> >> >
> >> > (1) I think billing should depend on IO statistics

Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread Mike Tutkowski
Oh, sorry to imply the XenServer code is SolidFire specific. It is not.

The XenServer attach logic is now aware of dynamic, zone-wide storage (and
SolidFire is an implementation of this kind of storage). This kind of
storage is new to 4.2 with Edison's storage framework changes.

Edison created a new framework that supported the creation and deletion of
volumes dynamically. However, when I visited with him in Portland back in
April, we realized that it was not complete. We realized there was nothing
CloudStack could do with these volumes unless the attach logic was changed
to recognize this new type of storage and create the appropriate hypervisor
data structure.


On Mon, Jun 3, 2013 at 1:28 PM, John Burwell  wrote:

> Mike,
>
> It is generally odd to me that any operation in the Storage layer would
> understand or care about details.  I expect to see the Storage services
> expose a set of operations that can be composed/driven by the Hypervisor
> implementations to allocate space/create structures per their needs.  If we
> don't invert this dependency, we are going to end with a massive n-to-n
> problem that will make the system increasingly difficult to maintain and
> enhance.  Am I understanding that the Xen specific SolidFire code is
> located in the CitrixResourceBase class?
>
> Thanks,
> -John
>
>
> On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com
> > wrote:
>
> > To delve into this in a bit more detail:
> >
> > Prior to 4.2 and aside from one setup method for XenServer, the admin had
> > to first create a volume on the storage system, then go into the
> hypervisor
> > to set up a data structure to make use of the volume (ex. a storage
> > repository on XenServer or a datastore on ESX(i)). VMs and data disks
> then
> > shared this storage system's volume.
> >
> > With Edison's new storage framework, storage need no longer be so static
> > and you can easily create a 1:1 relationship between a storage system's
> > volume and the VM's data disk (necessary for storage Quality of Service).
> >
> > You can now write a plug-in that is called to dynamically create and
> delete
> > volumes as needed.
> >
> > The problem that the storage framework did not address is in creating and
> > deleting the hypervisor-specific data structure when performing an
> > attach/detach.
> >
> > That being the case, I've been enhancing it to do so. I've got XenServer
> > worked out and submitted. I've got ESX(i) in my sandbox and can submit
> this
> > if we extend the 4.2 freeze date.
> >
> > Does that help a bit? :)
> >
> >
> > On Mon, Jun 3, 2013 at 1:03 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com
> > > wrote:
> >
> > > Hi John,
> > >
> > > The storage plug-in - by itself - is hypervisor agnostic.
> > >
> > > The issue is with the volume-attach logic (in the agent code). The
> > storage
> > > framework calls into the plug-in to have it create a volume as needed,
> > but
> > > when the time comes to attach the volume to a hypervisor, the attach
> > logic
> > > has to be smart enough to recognize it's being invoked on zone-wide
> > storage
> > > (where the volume has just been created) and create, say, a storage
> > > repository (for XenServer) or a datastore (for VMware) to make use of
> the
> > > volume that was just created.
> > >
> > > I've been spending most of my time recently making the attach logic
> work
> > > in the agent code.
> > >
> > > Does that clear it up?
> > >
> > > Thanks!
> > >
> > >
> > > On Mon, Jun 3, 2013 at 12:48 PM, John Burwell 
> > wrote:
> > >
> > >> Mike,
> > >>
> > >> Can you explain why the the storage driver is hypervisor specific?
> > >>
> > >> Thanks,
> > >> -John
> > >>
> > >> On Jun 3, 2013, at 1:21 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com>
> > >> wrote:
> > >>
> > >> > Yes, ultimately I would like to support all hypervisors that
> > CloudStack
> > >> > supports. I think I'm just out of time for 4.2 to get KVM in.
> > >> >
> > >> > Right now this plug-in supports XenServer. Depending on what we do
> > with
> > >> > regards to 4.2 feature freeze, I have it working for VMware in my
> > >> sandbox,
> > >> > as well.
> > >> >
> > >> > Also, just to be clear, this is all in regards to Disk Offerings. I
> > >> plan to
> > >> > support Compute Offerings post 4.2.
> > >> >
> > >> >
> > >> > On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage <
> > kel...@bbits.ca
> > >> >wrote:
> > >> >
> > >> >> Is there any plan on supporting KVM in the patch cycle post 4.2?
> > >> >>
> > >> >> - Original Message -
> > >> >> From: "Mike Tutkowski" 
> > >> >> To: dev@cloudstack.apache.org
> > >> >> Sent: Monday, June 3, 2013 10:12:32 AM
> > >> >> Subject: Re: [MERGE] disk_io_throttling to MASTER
> > >> >>
> > >> >> I agree on merging Wei's feature first, then mine.
> > >> >>
> > >> >> If his feature is for KVM only, then it is a non issue as I don't
> > >> support
> > >> >> KVM in 4.2.
> > >> >>
> > >> >>
> > >> >> On Mon, Jun 3, 2013 at 8:55 AM, W

Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread Mike Tutkowski
For example, let's say another storage company wants to implement a plug-in
to leverage its Quality of Service feature. It would be dynamic, zone-wide
storage, as well. They would need only implement a storage plug-in as I've
made the necessary changes to the hypervisor-attach logic to support their
plug-in.


On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski  wrote:

> Oh, sorry to imply the XenServer code is SolidFire specific. It is not.
>
> The XenServer attach logic is now aware of dynamic, zone-wide storage (and
> SolidFire is an implementation of this kind of storage). This kind of
> storage is new to 4.2 with Edison's storage framework changes.
>
> Edison created a new framework that supported the creation and deletion of
> volumes dynamically. However, when I visited with him in Portland back in
> April, we realized that it was not complete. We realized there was nothing
> CloudStack could do with these volumes unless the attach logic was changed
> to recognize this new type of storage and create the appropriate hypervisor
> data structure.
>
>
> On Mon, Jun 3, 2013 at 1:28 PM, John Burwell  wrote:
>
>> Mike,
>>
>> It is generally odd to me that any operation in the Storage layer would
>> understand or care about details.  I expect to see the Storage services
>> expose a set of operations that can be composed/driven by the Hypervisor
>> implementations to allocate space/create structures per their needs.  If
>> we
>> don't invert this dependency, we are going to end with a massive n-to-n
>> problem that will make the system increasingly difficult to maintain and
>> enhance.  Am I understanding that the Xen specific SolidFire code is
>> located in the CitrixResourceBase class?
>>
>> Thanks,
>> -John
>>
>>
>> On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com
>> > wrote:
>>
>> > To delve into this in a bit more detail:
>> >
>> > Prior to 4.2 and aside from one setup method for XenServer, the admin
>> had
>> > to first create a volume on the storage system, then go into the
>> hypervisor
>> > to set up a data structure to make use of the volume (ex. a storage
>> > repository on XenServer or a datastore on ESX(i)). VMs and data disks
>> then
>> > shared this storage system's volume.
>> >
>> > With Edison's new storage framework, storage need no longer be so static
>> > and you can easily create a 1:1 relationship between a storage system's
>> > volume and the VM's data disk (necessary for storage Quality of
>> Service).
>> >
>> > You can now write a plug-in that is called to dynamically create and
>> delete
>> > volumes as needed.
>> >
>> > The problem that the storage framework did not address is in creating
>> and
>> > deleting the hypervisor-specific data structure when performing an
>> > attach/detach.
>> >
>> > That being the case, I've been enhancing it to do so. I've got XenServer
>> > worked out and submitted. I've got ESX(i) in my sandbox and can submit
>> this
>> > if we extend the 4.2 freeze date.
>> >
>> > Does that help a bit? :)
>> >
>> >
>> > On Mon, Jun 3, 2013 at 1:03 PM, Mike Tutkowski <
>> > mike.tutkow...@solidfire.com
>> > > wrote:
>> >
>> > > Hi John,
>> > >
>> > > The storage plug-in - by itself - is hypervisor agnostic.
>> > >
>> > > The issue is with the volume-attach logic (in the agent code). The
>> > storage
>> > > framework calls into the plug-in to have it create a volume as needed,
>> > but
>> > > when the time comes to attach the volume to a hypervisor, the attach
>> > logic
>> > > has to be smart enough to recognize it's being invoked on zone-wide
>> > storage
>> > > (where the volume has just been created) and create, say, a storage
>> > > repository (for XenServer) or a datastore (for VMware) to make use of
>> the
>> > > volume that was just created.
>> > >
>> > > I've been spending most of my time recently making the attach logic
>> work
>> > > in the agent code.
>> > >
>> > > Does that clear it up?
>> > >
>> > > Thanks!
>> > >
>> > >
>> > > On Mon, Jun 3, 2013 at 12:48 PM, John Burwell 
>> > wrote:
>> > >
>> > >> Mike,
>> > >>
>> > >> Can you explain why the the storage driver is hypervisor specific?
>> > >>
>> > >> Thanks,
>> > >> -John
>> > >>
>> > >> On Jun 3, 2013, at 1:21 PM, Mike Tutkowski <
>> > mike.tutkow...@solidfire.com>
>> > >> wrote:
>> > >>
>> > >> > Yes, ultimately I would like to support all hypervisors that
>> > CloudStack
>> > >> > supports. I think I'm just out of time for 4.2 to get KVM in.
>> > >> >
>> > >> > Right now this plug-in supports XenServer. Depending on what we do
>> > with
>> > >> > regards to 4.2 feature freeze, I have it working for VMware in my
>> > >> sandbox,
>> > >> > as well.
>> > >> >
>> > >> > Also, just to be clear, this is all in regards to Disk Offerings. I
>> > >> plan to
>> > >> > support Compute Offerings post 4.2.
>> > >> >
>> > >> >
>> > >> > On Mon, Jun 3, 2013 at 11:14 AM, Kelcey Jamison Damage <
>> > kel...@bbits.ca
>> > >> >wrote:
>> > >> >
>> > >> >> Is there any plan on supporting KV

Re: External Network Usage

2013-06-03 Thread Will Stevens
I have spent some time looking at the usage data in the database and
looking over the code.

When 'bytes_in' and 'bytes_out' are reported, they are reported for a
specific network.  Is this only for traffic between the public and the
private network?  Does private traffic affect these numbers if the traffic
does not go through the public network?  So if two VMs on the same network send
traffic between themselves on their VLAN without it being routed through
the public IP, will that traffic show up in the bytes_in and bytes_out data?

Thanks...


On Fri, May 31, 2013 at 3:08 PM, Will Stevens  wrote:

> Hey All,
> I am trying to get my head around this.  From what I understand, if I am
> implementing an external firewall I am forced to implement the
> 'ExternalNetworkResourceUsageCommand'.  I have not found a way to not
> support the collection of usage data.  Is that possible?  If so, how?
>
> My current problem is that the Palo Alto does report usage, but only per
> interface.  All of the public IPs are configured on one interface (for
> routing), so I can not determine usage 'per public ip' which is how all the
> other external networks are tracking usage.
>
> Any ideas?
>
> Thanks,
>
> Will
>


Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread John Burwell
Mike,

Reading through the code, what is the difference between the ISCSI and Dynamic 
types?  Why isn't RBD considered Dynamic?

Thanks,
-John

On Jun 3, 2013, at 3:46 PM, Mike Tutkowski  wrote:

> This new type of storage is defined in the Storage.StoragePoolType class
> (called Dynamic):
> 
> public static enum StoragePoolType {
> 
>Filesystem(false), // local directory
> 
>NetworkFilesystem(true), // NFS or CIFS
> 
>IscsiLUN(true), // shared LUN, with a clusterfs overlay
> 
>Iscsi(true), // for e.g., ZFS Comstar
> 
>ISO(false), // for iso image
> 
>LVM(false), // XenServer local LVM SR
> 
>CLVM(true),
> 
>RBD(true),
> 
>SharedMountPoint(true),
> 
>VMFS(true), // VMware VMFS storage
> 
>PreSetup(true), // for XenServer, Storage Pool is set up by
> customers.
> 
>EXT(false), // XenServer local EXT SR
> 
>OCFS2(true),
> 
>Dynamic(true); // dynamic, zone-wide storage (ex. SolidFire)
> 
> 
>boolean shared;
> 
> 
>StoragePoolType(boolean shared) {
> 
>this.shared = shared;
> 
>}
> 
> 
>public boolean isShared() {
> 
>return shared;
> 
>}
> 
>}
> 
> 
> On Mon, Jun 3, 2013 at 1:41 PM, Mike Tutkowski > wrote:
> 
>> For example, let's say another storage company wants to implement a
>> plug-in to leverage its Quality of Service feature. It would be dynamic,
>> zone-wide storage, as well. They would need only implement a storage
>> plug-in as I've made the necessary changes to the hypervisor-attach logic
>> to support their plug-in.
>> 
>> 
>> On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com> wrote:
>> 
>>> Oh, sorry to imply the XenServer code is SolidFire specific. It is not.
>>> 
>>> The XenServer attach logic is now aware of dynamic, zone-wide storage
>>> (and SolidFire is an implementation of this kind of storage). This kind of
>>> storage is new to 4.2 with Edison's storage framework changes.
>>> 
>>> Edison created a new framework that supported the creation and deletion
>>> of volumes dynamically. However, when I visited with him in Portland back
>>> in April, we realized that it was not complete. We realized there was
>>> nothing CloudStack could do with these volumes unless the attach logic was
>>> changed to recognize this new type of storage and create the appropriate
>>> hypervisor data structure.
>>> 
>>> 
>>> On Mon, Jun 3, 2013 at 1:28 PM, John Burwell  wrote:
>>> 
 Mike,
 
 It is generally odd to me that any operation in the Storage layer would
 understand or care about details.  I expect to see the Storage services
 expose a set of operations that can be composed/driven by the Hypervisor
 implementations to allocate space/create structures per their needs.  If
 we
 don't invert this dependency, we are going to end with a massive n-to-n
 problem that will make the system increasingly difficult to maintain and
 enhance.  Am I understanding that the Xen specific SolidFire code is
 located in the CitrixResourceBase class?
 
 Thanks,
 -John
 
 
 On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski <
 mike.tutkow...@solidfire.com
> wrote:
 
> To delve into this in a bit more detail:
> 
> Prior to 4.2 and aside from one setup method for XenServer, the admin
 had
> to first create a volume on the storage system, then go into the
 hypervisor
> to set up a data structure to make use of the volume (ex. a storage
> repository on XenServer or a datastore on ESX(i)). VMs and data disks
 then
> shared this storage system's volume.
> 
> With Edison's new storage framework, storage need no longer be so
 static
> and you can easily create a 1:1 relationship between a storage system's
> volume and the VM's data disk (necessary for storage Quality of
 Service).
> 
> You can now write a plug-in that is called to dynamically create and
 delete
> volumes as needed.
> 
> The problem that the storage framework did not address is in creating
 and
> deleting the hypervisor-specific data structure when performing an
> attach/detach.
> 
> That being the case, I've been enhancing it to do so. I've got
 XenServer
> worked out and submitted. I've got ESX(i) in my sandbox and can submit
 this
> if we extend the 4.2 freeze date.
> 
> Does that help a bit? :)
> 
> 
> On Mon, Jun 3, 2013 at 1:03 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com
>> wrote:
> 
>> Hi John,
>> 
>> The storage plug-in - by itself - is hypervisor agnostic.
>> 
>> The issue is with the volume-attach logic (in the agent code). The
> storage
>> framework calls into the plug-in to have it create a volume as
 needed,
> but
>> when the time comes to attach the volume to a hypervisor, t

Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread Mike Tutkowski
This new type of storage is defined in the Storage.StoragePoolType class
(called Dynamic):

public static enum StoragePoolType {

    Filesystem(false), // local directory
    NetworkFilesystem(true), // NFS or CIFS
    IscsiLUN(true), // shared LUN, with a clusterfs overlay
    Iscsi(true), // for e.g., ZFS Comstar
    ISO(false), // for iso image
    LVM(false), // XenServer local LVM SR
    CLVM(true),
    RBD(true),
    SharedMountPoint(true),
    VMFS(true), // VMware VMFS storage
    PreSetup(true), // for XenServer, Storage Pool is set up by customers.
    EXT(false), // XenServer local EXT SR
    OCFS2(true),
    Dynamic(true); // dynamic, zone-wide storage (ex. SolidFire)

    boolean shared;

    StoragePoolType(boolean shared) {
        this.shared = shared;
    }

    public boolean isShared() {
        return shared;
    }
}
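
As a purely illustrative usage note (not code from the branch): the attach path 
can key off this new constant roughly as below, with the structure-building step 
standing in for whatever creates the XenServer SR or the VMware datastore. The 
import assumes the enum lives in com.cloud.storage.Storage; everything else is 
hypothetical.

import com.cloud.storage.Storage;

public class AttachSketch {

    // Hypothetical hook standing in for the code that builds the hypervisor-side
    // structure (SR on XenServer, datastore on ESX(i)) around the new volume.
    interface HypervisorStructureBuilder {
        void buildFor(String volumePath);
    }

    public static void attach(Storage.StoragePoolType type, String volumePath,
                              HypervisorStructureBuilder builder) {
        if (type == Storage.StoragePoolType.Dynamic && type.isShared()) {
            // Zone-wide, dynamically provisioned storage: the plug-in has just created
            // the volume, so the hypervisor structure must exist before attach proceeds.
            builder.buildFor(volumePath);
        }
        // the pre-existing attach path for the other pool types continues here
    }
}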


On Mon, Jun 3, 2013 at 1:41 PM, Mike Tutkowski  wrote:

> For example, let's say another storage company wants to implement a
> plug-in to leverage its Quality of Service feature. It would be dynamic,
> zone-wide storage, as well. They would need only implement a storage
> plug-in as I've made the necessary changes to the hypervisor-attach logic
> to support their plug-in.
>
>
> On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
>> Oh, sorry to imply the XenServer code is SolidFire specific. It is not.
>>
>> The XenServer attach logic is now aware of dynamic, zone-wide storage
>> (and SolidFire is an implementation of this kind of storage). This kind of
>> storage is new to 4.2 with Edison's storage framework changes.
>>
>> Edison created a new framework that supported the creation and deletion
>> of volumes dynamically. However, when I visited with him in Portland back
>> in April, we realized that it was not complete. We realized there was
>> nothing CloudStack could do with these volumes unless the attach logic was
>> changed to recognize this new type of storage and create the appropriate
>> hypervisor data structure.
>>
>>
>> On Mon, Jun 3, 2013 at 1:28 PM, John Burwell  wrote:
>>
>>> Mike,
>>>
>>> It is generally odd to me that any operation in the Storage layer would
>>> understand or care about details.  I expect to see the Storage services
>>> expose a set of operations that can be composed/driven by the Hypervisor
>>> implementations to allocate space/create structures per their needs.  If
>>> we
>>> don't invert this dependency, we are going to end with a massive n-to-n
>>> problem that will make the system increasingly difficult to maintain and
>>> enhance.  Am I understanding that the Xen specific SolidFire code is
>>> located in the CitrixResourceBase class?
>>>
>>> Thanks,
>>> -John
>>>
>>>
>>> On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com
>>> > wrote:
>>>
>>> > To delve into this in a bit more detail:
>>> >
>>> > Prior to 4.2 and aside from one setup method for XenServer, the admin
>>> had
>>> > to first create a volume on the storage system, then go into the
>>> hypervisor
>>> > to set up a data structure to make use of the volume (ex. a storage
>>> > repository on XenServer or a datastore on ESX(i)). VMs and data disks
>>> then
>>> > shared this storage system's volume.
>>> >
>>> > With Edison's new storage framework, storage need no longer be so
>>> static
>>> > and you can easily create a 1:1 relationship between a storage system's
>>> > volume and the VM's data disk (necessary for storage Quality of
>>> Service).
>>> >
>>> > You can now write a plug-in that is called to dynamically create and
>>> delete
>>> > volumes as needed.
>>> >
>>> > The problem that the storage framework did not address is in creating
>>> and
>>> > deleting the hypervisor-specific data structure when performing an
>>> > attach/detach.
>>> >
>>> > That being the case, I've been enhancing it to do so. I've got
>>> XenServer
>>> > worked out and submitted. I've got ESX(i) in my sandbox and can submit
>>> this
>>> > if we extend the 4.2 freeze date.
>>> >
>>> > Does that help a bit? :)
>>> >
>>> >
>>> > On Mon, Jun 3, 2013 at 1:03 PM, Mike Tutkowski <
>>> > mike.tutkow...@solidfire.com
>>> > > wrote:
>>> >
>>> > > Hi John,
>>> > >
>>> > > The storage plug-in - by itself - is hypervisor agnostic.
>>> > >
>>> > > The issue is with the volume-attach logic (in the agent code). The
>>> > storage
>>> > > framework calls into the plug-in to have it create a volume as
>>> needed,
>>> > but
>>> > > when the time comes to attach the volume to a hypervisor, the attach
>>> > logic
>>> > > has to be smart enough to recognize it's being invoked on zone-wide
>>> > storage
>>> > > (where the volume has just been created) and create, say, a storage
>>> > > repository (for XenServer) or a datastore (for VMware) to make use
>>> of the
>>> > > volume that was just created.
>>> > >
>>> > > I've been spending most of my t

Re: External Network Usage

2013-06-03 Thread Wei ZHOU
Hi Will,

Comments inline:

> Is this only for traffic between the public and the private network?
yes.

> Does private traffic affect these numbers if the traffic does not go
through the public?
No.

> So if two VMs on the same network send traffic between them selves on
their vlan without it being routed through the public ip, will that traffic
show up in the bytes_in and bytes_out data?
No.

-Wei


2013/6/3 Will Stevens 

> I have spent some time looking at the usage data in the database and
> looking over the code.
>
> When 'bytes_in' and 'bytes_out' are reported, they are reported for a
> specific network.  Is this only for traffic between the public and the
> private network?  Does private traffic affect these numbers if the traffic
> does not go through the public?  So if two VMs on the same network send
> traffic between them selves on their vlan without it being routed through
> the public ip, will that traffic show up in the bytes_in and bytes_out
> data?
>
> Thanks...
>
>
> On Fri, May 31, 2013 at 3:08 PM, Will Stevens 
> wrote:
>
> > Hey All,
> > I am trying to get my head around this.  From what I understand, if I am
> > implementing an external firewall I am forced to implement the
> > 'ExternalNetworkResourceUsageCommand'.  I have not found a way to not
> > support the collection of usage data.  Is that possible?  If so, how?
> >
> > My current problem is that the Palo Alto does report usage, but only per
> > interface.  All of the public IPs are configured on one interface (for
> > routing), so I can not determine usage 'per public ip' which is how all
> the
> > other external networks are tracking usage.
> >
> > Any ideas?
> >
> > Thanks,
> >
> > Will
> >
>


Re: External Network Usage

2013-06-03 Thread Will Stevens
Thanks for the answers Wei.

Is the public traffic broken down by public IP then?  Let's say we have 3
public IPs set up on a network: the source NAT IP, a port forwarding IP and
a static NAT IP.  Would each of these IPs track its own traffic and only
the traffic which goes through that IP?  If so, would we then have three
rows in the user_statistics table for that network, one row corresponding
to the aggregation for each public IP?

Thanks


On Mon, Jun 3, 2013 at 4:18 PM, Wei ZHOU  wrote:

> Hi Will,
>
> Commencts inline:
>
> > Is this only for traffic between the public and the private network?
> yes.
>
> > Does private traffic affect these numbers if the traffic does not go
> through the public?
> No.
>
> > So if two VMs on the same network send traffic between them selves on
> their vlan without it being routed through the public ip, will that traffic
> show up in the bytes_in and bytes_out data?
> No.
>
> -Wei
>
>
> 2013/6/3 Will Stevens 
>
> > I have spent some time looking at the usage data in the database and
> > looking over the code.
> >
> > When 'bytes_in' and 'bytes_out' are reported, they are reported for a
> > specific network.  Is this only for traffic between the public and the
> > private network?  Does private traffic affect these numbers if the
> traffic
> > does not go through the public?  So if two VMs on the same network send
> > traffic between them selves on their vlan without it being routed through
> > the public ip, will that traffic show up in the bytes_in and bytes_out
> > data?
> >
> > Thanks...
> >
> >
> > On Fri, May 31, 2013 at 3:08 PM, Will Stevens 
> > wrote:
> >
> > > Hey All,
> > > I am trying to get my head around this.  From what I understand, if I
> am
> > > implementing an external firewall I am forced to implement the
> > > 'ExternalNetworkResourceUsageCommand'.  I have not found a way to not
> > > support the collection of usage data.  Is that possible?  If so, how?
> > >
> > > My current problem is that the Palo Alto does report usage, but only
> per
> > > interface.  All of the public IPs are configured on one interface (for
> > > routing), so I can not determine usage 'per public ip' which is how all
> > the
> > > other external networks are tracking usage.
> > >
> > > Any ideas?
> > >
> > > Thanks,
> > >
> > > Will
> > >
> >
>


CloudMonkey on PyPi

2013-06-03 Thread Chip Childers
Hey Rohit,

Do you think that we should remove the 4.1.0-SNAPSHOT artifacts from
PyPi?  It's actually a higher version than 4.1.0-0 I think.

-chip


[SSVM][NETWORK] Change management net to public network

2013-06-03 Thread Musayev, Ilya
I need to customize CloudStack to our environment's needs.

Since ACS at the moment does not support a management network with VLAN 
tagging, I need to use another unused network as the management VLAN.

We are an enterprise customer and at the moment have no need for a "public" 
network. All the NATting is handled by external infrastructure.

I'm looking for a pointer to where I can override the system VMs' (eth/NIC) 
network assignment mechanism and use the Public Network instead of the 
Untagged Management Network.

Any help is appreciated and would save me a lot of time searching.

Thanks
ilya


Re: [GSOC] Community Bonding Period

2013-06-03 Thread Han,Meng

Hi all,

My name is Meng Han. I am a Computer Engineering student at the University 
of Florida. I am interested in distributed computing, autonomic 
computing, the Hadoop framework, and virtualization technologies.


I will be working on the project "Improve CloudStack Support in Apache 
Whirr and Incubator-Provisionr to create Hadoop clusters". As implied 
in the title, the goal of this project is to enable Hadoop provisioning 
on CloudStack via Apache Whirr and Provisionr. I will also add a Query 
API that is compatible with Amazon Elastic MapReduce (EMR) to 
CloudStack. Through this API, all Hadoop provisioning functionality will 
be exposed, and users can reuse cloud clients written for EMR to 
create and manage Hadoop clusters on CloudStack-based clouds. The full 
proposal can be found 
here: http://www.google-melange.com/gsoc/proposal/review/google/gsoc2013/kyrameng/1


I have created my accounts on JIRA, review board, Github and wiki.

IRC nickname: meng

JIRA: https://issues.apache.org/jira/browse/CLOUDSTACK-1782

Skype name: kyrameng

Wiki:https://cwiki.apache.org/confluence/display/CLOUDSTACK/Improving+CloudStack+Support+for+Apache+Whirr+and+Incubator-provisionr+in+Hadoop+Provisioning

So excited to join you guys :O. I really appreciate this opportunity. I 
am a girl asking a lot of questions. Wish you all a productive summer!


Cheers,
Meng






Re: [GSOC] Community Bonding Period

2013-06-03 Thread Chip Childers
On Mon, Jun 03, 2013 at 04:52:38PM -0400, Han,Meng wrote:
> Hi all,
> 
> My name is Meng Han. I am a Computer Engineering student at
> University of Florida. I am interested in distributed computing,
> autonomic computing, Hadoop framework and virtualization
> techonologies.
> 
> I will be working on the project -Improve CloudStack Support in
> Apache Whirr and Incubator-provisioner to created hadoop clusters.
> As implied in the title this goal of this project is to enable
> Hadoop provisioning on CloudStack via Apache Whirr and Provisionr.
> Also I will add a Query API  that is compatible with Amazon Elastic
> MapReduce (EMR) to CloudStack. Through this API, all hadoop
> provisioning functionality will be exposed and users can reuse cloud
> clients that are written for EMR to create and manage hadoop
> clusters on CloudStack based clouds. Full proposal can be found 
> here:http://www.google-melange.com/gsoc/proposal/review/google/gsoc2013/kyrameng/1
> 
> I have created my accounts on JIRA, review board, Github and wiki.
> 
> IRC nickname: meng
> 
> JIRA: https://issues.apache.org/jira/browse/CLOUDSTACK-1782
> 
> Skype name: kyrameng
> 
> Wiki:https://cwiki.apache.org/confluence/display/CLOUDSTACK/Improving+CloudStack+Support+for+Apache+Whirr+and+Incubator-provisionr+in+Hadoop+Provisioning
> 
> So excited to join you guys :O. I really appreciate this
> opportunity. I am a girl asking a lot of questions. Wish you all a
> productive summer!
> 
> Cheers,
> Meng

Welcome Meng!

Can you let me know your Jira ID, so that I can be sure that you have
permission to assign CLOUDSTACK-1782 to yourself (and to update its
status as you progress)?


RE: [VOTE] Pushback 4.2.0 Feature Freeze

2013-06-03 Thread Kevin Kluge
+1 [ binding ]

I've been concerned that releases every four months were too aggressive for 
people to absorb given the complexity of some deployments and upgrades.  With 
the current 4.1 delay and 4.2 plan we would expect two major releases within 
two months of each other.  I'd prefer a bigger date shift for 4.2, but I see 
little appetite for that in these discussions.  So I will +1 this proposal as a 
reasonable compromise.

FWIW, I doubt we'll get many more features into 4.2 with this.  As Animesh noted, 
the feature proposal date has passed, so we have an upper bound on the 
additional changes for these four weeks.  I believe this proposal will improve 
the quality of 4.2 on its planned release date as a result.

-kevin

> -Original Message-
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Friday, May 31, 2013 8:00 AM
> To: dev@cloudstack.apache.org
> Subject: [VOTE] Pushback 4.2.0 Feature Freeze
> 
> Following our discussion on the proposal to push back the feature freeze date
> for 4.2.0 [1], we have not yet achieved a clear consensus.  Well...
> we have already defined the "project rules" for figuring out what to do.
> In our project by-laws [2], we have defined a "release plan" decision as
> follows:
> 
> > 3.4.2. Release Plan
> >
> > Defines the timetable and work items for a release. The plan also
> > nominates a Release Manager.
> >
> > A lazy majority of active committers is required for approval.
> >
> > Any active committer or PMC member may call a vote. The vote must
> > occur on a project development mailing list.
> 
> And our lazy majority is defined as:
> 
> > 3.2.2. Lazy Majority - A lazy majority vote requires 3 binding +1
> > votes and more binding +1 votes than binding -1 votes.
> 
> Our current plan is the starting point, so this VOTE is a vote to change the
> current plan.  We require a 72 hour window for this vote, so IMO we are in an
> odd position where the feature freeze date is at least extended until Tuesday 
> of
> next week.
> 
> Our current plan of record for 4.2.0 is at [3].
> 
> [1] http://markmail.org/message/vi3nsd2yo763kzua
> [2] http://s.apache.org/csbylaws
> [3]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+4.2+Rel
> ease
> 
> 
> 
> I'd like to call a VOTE on the following:
> 
> Proposal: Extend the feature freeze date for our 4.2.0 feature release from
> today (2013-05-31) to 2013-06-28.  All other dates following the feature 
> freeze
> date in the plan would be pushed out 4 weeks as well.
> 
> Please respond with one of the following:
> 
> +1 : change the plan as listed above
> +/-0 : no strong opinion, but leaning + or -
> -1 : do not change the plan
> 
> This vote will remain open until Tuesday morning US eastern time.
> 
> -chip


Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread Mike Tutkowski
As far as I know, the iSCSI type is uniquely used by XenServer when you
want to set up Primary Storage that is directly based on an iSCSI target.
This allows you to skip the step of going to the hypervisor and creating a
storage repository based on that iSCSI target, as CloudStack does that part
for you. I think this is only supported for XenServer. For all other
hypervisors, you must first go to the hypervisor and perform this step
manually.

I don't really know what RBD is.


On Mon, Jun 3, 2013 at 2:13 PM, John Burwell  wrote:

> Mike,
>
> Reading through the code, what is the difference between the ISCSI and
> Dynamic types?  Why isn't RBD considered Dynamic?
>
> Thanks,
> -John
>
> On Jun 3, 2013, at 3:46 PM, Mike Tutkowski 
> wrote:
>
> > This new type of storage is defined in the Storage.StoragePoolType class
> > (called Dynamic):
> >
> > public static enum StoragePoolType {
> >
> >Filesystem(false), // local directory
> >
> >NetworkFilesystem(true), // NFS or CIFS
> >
> >IscsiLUN(true), // shared LUN, with a clusterfs overlay
> >
> >Iscsi(true), // for e.g., ZFS Comstar
> >
> >ISO(false), // for iso image
> >
> >LVM(false), // XenServer local LVM SR
> >
> >CLVM(true),
> >
> >RBD(true),
> >
> >SharedMountPoint(true),
> >
> >VMFS(true), // VMware VMFS storage
> >
> >PreSetup(true), // for XenServer, Storage Pool is set up by
> > customers.
> >
> >EXT(false), // XenServer local EXT SR
> >
> >OCFS2(true),
> >
> >Dynamic(true); // dynamic, zone-wide storage (ex. SolidFire)
> >
> >
> >boolean shared;
> >
> >
> >StoragePoolType(boolean shared) {
> >
> >this.shared = shared;
> >
> >}
> >
> >
> >public boolean isShared() {
> >
> >return shared;
> >
> >}
> >
> >}
> >
> >
> > On Mon, Jun 3, 2013 at 1:41 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com
> >> wrote:
> >
> >> For example, let's say another storage company wants to implement a
> >> plug-in to leverage its Quality of Service feature. It would be dynamic,
> >> zone-wide storage, as well. They would need only implement a storage
> >> plug-in as I've made the necessary changes to the hypervisor-attach
> logic
> >> to support their plug-in.
> >>
> >>
> >> On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski <
> >> mike.tutkow...@solidfire.com> wrote:
> >>
> >>> Oh, sorry to imply the XenServer code is SolidFire specific. It is not.
> >>>
> >>> The XenServer attach logic is now aware of dynamic, zone-wide storage
> >>> (and SolidFire is an implementation of this kind of storage). This
> kind of
> >>> storage is new to 4.2 with Edison's storage framework changes.
> >>>
> >>> Edison created a new framework that supported the creation and deletion
> >>> of volumes dynamically. However, when I visited with him in Portland
> back
> >>> in April, we realized that it was not complete. We realized there was
> >>> nothing CloudStack could do with these volumes unless the attach logic
> was
> >>> changed to recognize this new type of storage and create the
> appropriate
> >>> hypervisor data structure.
> >>>
> >>>
> >>> On Mon, Jun 3, 2013 at 1:28 PM, John Burwell 
> wrote:
> >>>
>  Mike,
> 
>  It is generally odd to me that any operation in the Storage layer
> would
>  understand or care about details.  I expect to see the Storage
> services
>  expose a set of operations that can be composed/driven by the
> Hypervisor
>  implementations to allocate space/create structures per their needs.
>  If
>  we
>  don't invert this dependency, we are going to end with a massive
> n-to-n
>  problem that will make the system increasingly difficult to maintain
> and
>  enhance.  Am I understanding that the Xen specific SolidFire code is
>  located in the CitrixResourceBase class?
> 
>  Thanks,
>  -John
> 
> 
>  On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski <
>  mike.tutkow...@solidfire.com
> > wrote:
> 
> > To delve into this in a bit more detail:
> >
> > Prior to 4.2 and aside from one setup method for XenServer, the admin
>  had
> > to first create a volume on the storage system, then go into the
>  hypervisor
> > to set up a data structure to make use of the volume (ex. a storage
> > repository on XenServer or a datastore on ESX(i)). VMs and data disks
>  then
> > shared this storage system's volume.
> >
> > With Edison's new storage framework, storage need no longer be so
>  static
> > and you can easily create a 1:1 relationship between a storage
> system's
> > volume and the VM's data disk (necessary for storage Quality of
>  Service).
> >
> > You can now write a plug-in that is called to dynamically create and
>  delete
> > volumes as needed.
> >
> > The problem that the storage framework did not address is in creating
>  and

Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread Mike Tutkowski
Alternatively, you can use the PreSetup type for XenServer. In this case,
you must go to XenServer and set up the storage repository (which can be
based on an iSCSI target). Then you must go into CloudStack and select the
PreSetup type for Primary Storage. This is like selecting the vmfs type for
VMware.


On Mon, Jun 3, 2013 at 3:10 PM, Mike Tutkowski  wrote:

> As far as I know, the iSCSI type is uniquely used by XenServer when you
> want to set up Primary Storage that is directly based on an iSCSI target.
> This allows you to skip the step of going to the hypervisor and creating a
> storage repository based on that iSCSI target as CloudStack does that part
> for you. I think this is only supported for XenServer. For all other
> hypervisors, you must first go to the hypervisor and perform this step
> manually.
>
> I don't really know what RBD is.
>
>
> On Mon, Jun 3, 2013 at 2:13 PM, John Burwell  wrote:
>
>> Mike,
>>
>> Reading through the code, what is the difference between the ISCSI and
>> Dynamic types?  Why isn't RBD considered Dynamic?
>>
>> Thanks,
>> -John
>>
>> On Jun 3, 2013, at 3:46 PM, Mike Tutkowski 
>> wrote:
>>
>> > This new type of storage is defined in the Storage.StoragePoolType class
>> > (called Dynamic):
>> >
>> > public static enum StoragePoolType {
>> >
>> >Filesystem(false), // local directory
>> >
>> >NetworkFilesystem(true), // NFS or CIFS
>> >
>> >IscsiLUN(true), // shared LUN, with a clusterfs overlay
>> >
>> >Iscsi(true), // for e.g., ZFS Comstar
>> >
>> >ISO(false), // for iso image
>> >
>> >LVM(false), // XenServer local LVM SR
>> >
>> >CLVM(true),
>> >
>> >RBD(true),
>> >
>> >SharedMountPoint(true),
>> >
>> >VMFS(true), // VMware VMFS storage
>> >
>> >PreSetup(true), // for XenServer, Storage Pool is set up by
>> > customers.
>> >
>> >EXT(false), // XenServer local EXT SR
>> >
>> >OCFS2(true),
>> >
>> >Dynamic(true); // dynamic, zone-wide storage (ex. SolidFire)
>> >
>> >
>> >boolean shared;
>> >
>> >
>> >StoragePoolType(boolean shared) {
>> >
>> >this.shared = shared;
>> >
>> >}
>> >
>> >
>> >public boolean isShared() {
>> >
>> >return shared;
>> >
>> >}
>> >
>> >}
>> >
>> >
>> > On Mon, Jun 3, 2013 at 1:41 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com
>> >> wrote:
>> >
>> >> For example, let's say another storage company wants to implement a
>> >> plug-in to leverage its Quality of Service feature. It would be
>> dynamic,
>> >> zone-wide storage, as well. They would need only implement a storage
>> >> plug-in as I've made the necessary changes to the hypervisor-attach
>> logic
>> >> to support their plug-in.
>> >>
>> >>
>> >> On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski <
>> >> mike.tutkow...@solidfire.com> wrote:
>> >>
>> >>> Oh, sorry to imply the XenServer code is SolidFire specific. It is
>> not.
>> >>>
>> >>> The XenServer attach logic is now aware of dynamic, zone-wide storage
>> >>> (and SolidFire is an implementation of this kind of storage). This
>> kind of
>> >>> storage is new to 4.2 with Edison's storage framework changes.
>> >>>
>> >>> Edison created a new framework that supported the creation and
>> deletion
>> >>> of volumes dynamically. However, when I visited with him in Portland
>> back
>> >>> in April, we realized that it was not complete. We realized there was
>> >>> nothing CloudStack could do with these volumes unless the attach
>> logic was
>> >>> changed to recognize this new type of storage and create the
>> appropriate
>> >>> hypervisor data structure.
>> >>>
>> >>>
>> >>> On Mon, Jun 3, 2013 at 1:28 PM, John Burwell 
>> wrote:
>> >>>
>>  Mike,
>> 
>>  It is generally odd to me that any operation in the Storage layer
>> would
>>  understand or care about details.  I expect to see the Storage
>> services
>>  expose a set of operations that can be composed/driven by the
>> Hypervisor
>>  implementations to allocate space/create structures per their needs.
>>  If
>>  we
>>  don't invert this dependency, we are going to end with a massive
>> n-to-n
>>  problem that will make the system increasingly difficult to maintain
>> and
>>  enhance.  Am I understanding that the Xen specific SolidFire code is
>>  located in the CitrixResourceBase class?
>> 
>>  Thanks,
>>  -John
>> 
>> 
>>  On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski <
>>  mike.tutkow...@solidfire.com
>> > wrote:
>> 
>> > To delve into this in a bit more detail:
>> >
>> > Prior to 4.2 and aside from one setup method for XenServer, the
>> admin
>>  had
>> > to first create a volume on the storage system, then go into the
>>  hypervisor
>> > to set up a data structure to make use of the volume (ex. a storage
>> > repository on XenServer or a datastore on ESX(i)). VMs and data
>> disks
>>  the

Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread John Burwell
Mike,

The current implementation of the Dynamic type attach behavior works in terms 
of Xen iSCSI, which is why I ask about the difference.  Another way to ask the 
question -- what is the definition of a Dynamic storage pool type?

Thanks,
-John

On Jun 3, 2013, at 5:10 PM, Mike Tutkowski  wrote:

> As far as I know, the iSCSI type is uniquely used by XenServer when you
> want to set up Primary Storage that is directly based on an iSCSI target.
> This allows you to skip the step of going to the hypervisor and creating a
> storage repository based on that iSCSI target as CloudStack does that part
> for you. I think this is only supported for XenServer. For all other
> hypervisors, you must first go to the hypervisor and perform this step
> manually.
> 
> I don't really know what RBD is.
> 
> 
> On Mon, Jun 3, 2013 at 2:13 PM, John Burwell  wrote:
> 
>> Mike,
>> 
>> Reading through the code, what is the difference between the ISCSI and
>> Dynamic types?  Why isn't RBD considered Dynamic?
>> 
>> Thanks,
>> -John
>> 
>> On Jun 3, 2013, at 3:46 PM, Mike Tutkowski 
>> wrote:
>> 
>>> This new type of storage is defined in the Storage.StoragePoolType class
>>> (called Dynamic):
>>> 
>>> public static enum StoragePoolType {
>>> 
>>>   Filesystem(false), // local directory
>>> 
>>>   NetworkFilesystem(true), // NFS or CIFS
>>> 
>>>   IscsiLUN(true), // shared LUN, with a clusterfs overlay
>>> 
>>>   Iscsi(true), // for e.g., ZFS Comstar
>>> 
>>>   ISO(false), // for iso image
>>> 
>>>   LVM(false), // XenServer local LVM SR
>>> 
>>>   CLVM(true),
>>> 
>>>   RBD(true),
>>> 
>>>   SharedMountPoint(true),
>>> 
>>>   VMFS(true), // VMware VMFS storage
>>> 
>>>   PreSetup(true), // for XenServer, Storage Pool is set up by
>>> customers.
>>> 
>>>   EXT(false), // XenServer local EXT SR
>>> 
>>>   OCFS2(true),
>>> 
>>>   Dynamic(true); // dynamic, zone-wide storage (ex. SolidFire)
>>> 
>>> 
>>>   boolean shared;
>>> 
>>> 
>>>   StoragePoolType(boolean shared) {
>>> 
>>>   this.shared = shared;
>>> 
>>>   }
>>> 
>>> 
>>>   public boolean isShared() {
>>> 
>>>   return shared;
>>> 
>>>   }
>>> 
>>>   }
>>> 
>>> 
>>> On Mon, Jun 3, 2013 at 1:41 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com
 wrote:
>>> 
 For example, let's say another storage company wants to implement a
 plug-in to leverage its Quality of Service feature. It would be dynamic,
 zone-wide storage, as well. They would need only implement a storage
 plug-in as I've made the necessary changes to the hypervisor-attach
>> logic
 to support their plug-in.
 
 
 On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski <
 mike.tutkow...@solidfire.com> wrote:
 
> Oh, sorry to imply the XenServer code is SolidFire specific. It is not.
> 
> The XenServer attach logic is now aware of dynamic, zone-wide storage
> (and SolidFire is an implementation of this kind of storage). This
>> kind of
> storage is new to 4.2 with Edison's storage framework changes.
> 
> Edison created a new framework that supported the creation and deletion
> of volumes dynamically. However, when I visited with him in Portland
>> back
> in April, we realized that it was not complete. We realized there was
> nothing CloudStack could do with these volumes unless the attach logic
>> was
> changed to recognize this new type of storage and create the
>> appropriate
> hypervisor data structure.
> 
> 
> On Mon, Jun 3, 2013 at 1:28 PM, John Burwell 
>> wrote:
> 
>> Mike,
>> 
>> It is generally odd to me that any operation in the Storage layer
>> would
>> understand or care about details.  I expect to see the Storage
>> services
>> expose a set of operations that can be composed/driven by the
>> Hypervisor
>> implementations to allocate space/create structures per their needs.
>> If
>> we
>> don't invert this dependency, we are going to end with a massive
>> n-to-n
>> problem that will make the system increasingly difficult to maintain
>> and
>> enhance.  Am I understanding that the Xen specific SolidFire code is
>> located in the CitrixResourceBase class?
>> 
>> Thanks,
>> -John
>> 
>> 
>> On Mon, Jun 3, 2013 at 3:12 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com
>>> wrote:
>> 
>>> To delve into this in a bit more detail:
>>> 
>>> Prior to 4.2 and aside from one setup method for XenServer, the admin
>> had
>>> to first create a volume on the storage system, then go into the
>> hypervisor
>>> to set up a data structure to make use of the volume (ex. a storage
>>> repository on XenServer or a datastore on ESX(i)). VMs and data disks
>> then
>>> shared this storage system's volume.
>>> 
>>> With Edison's new storage framework, storage need no longer be so
>> static
>>> and you can ea

Re: [GSOC] Community Bonding Period

2013-06-03 Thread Sebastien Goasguen
Nguyen,

Could you send an email with all the information like the other guys have done?

thanks,

-Sebastien

On May 29, 2013, at 10:35 AM, Nguyen Anh Tu  wrote:

> @Sebgoa: Done! My account on every page: tuna.
> 
> Looking forward :-)
> 
> 
> 2013/5/29 Sebastien Goasguen 
> Hi Dharmesh, Meng, Ian, Nguyen and Shiva,
> 
> Congratulations again on being selected for the 2013 Google Summer of Code.
> 
> The program has started and we are now in "community bonding period". On June 
> 17th you will officially start to code.
> 
> I will mentor Dharmesh and Meng
> Abhi will mentor Ian
> Hugo will mentor Nguyen
> Kelcey will mentor Shiva
> I will act as overall coordinator.
> 
> While these are your official mentors from a Google perspective, the entire 
> CloudStack community will help you.
> 
> There are a few things to keep in mind:
> --
> -The timeline: Check [0]. Note that there are evaluations throughout the 
> program and that if progress is not satisfactory you can be dropped from the 
> program. Hopefully with terrific mentoring from us all at CloudStack this 
> will not happen and you will finish the program with flying colors.
> 
> -Email: At the Apache Software Foundation, official communication happens via 
> email, so make sure you are registered to the dev@cloudstack.apache.org (you 
> are). This is a high traffic list, so remember to setup mail filters and be 
> sure to keep the GSOC emails where you can read them, without filters you 
> will be overwhelmed and we don't want that to happen. When you email the list 
> for a GSOC specific question, just put [GSOC] at the start of the subject 
> line.  I CC you in this email but will not do it afterwards and just email 
> dev@cloudstack.apache.org
> 
> -IRC: For daily conversation and help, we use IRC. Install an IRC client and 
> join the #cloudstack and #cloudstack-dev on irc.freenode.net [1]. Make 
> yourself known and learn a few IRC tricks.
> 
> -JIRA: Our ticketing system is JIRA [2], create an account and browse JIRA, 
> you should already know where your project is described (which ticket number 
> ?). As you start working you will create tickets and subtasks that will allow 
> us to track progress. Students having to work on Mesos, Whirr and Provisionr 
> will be able to use the same account.
> 
> -Review Board [3]: This is the web interface to submit patches when you are 
> not an official Apache committer. Create an account on review board.
> 
> -Git: To manage the CloudStack source code we use git [4]. You will need to 
> become familiar with git. I strongly recommend that you create a personal 
> github [5] account. If you are not already familiar with git, check my 
> screencast [6].
> 
> -Wiki: All our developer content is on our wiki [7]. Browse it, get an 
> account and create a page about your project in the Student Project page [8].
> 
> -Website: I hope you already know our website :) [9]
> 
> -CloudStack University: To get your started and get a tour of CloudStack, you 
> can watch CloudStack University [10]
> 
> Expectations for bonding period:
> 
> *To get you on-board I would like to ask each of you to send an email 
> introducing yourself in couple sentences, describe your project (couple 
> sentences plus link to the JIRA entry and the wiki page you created), confirm 
> that you joined IRC and if you registered a nick tell us what it is and 
> finally confirm that you created an account on review board and JIRA.
> 
> *By the end of the period, I would like to see your first patch submitted. It 
> will be your GSOC proposal in docbook format contributed to a GSOC guide I 
> will create. There is no code writing involved, this will just serve as a way 
> to make sure you understand the process of submitting a patch and will be the 
> start of a great documentation of our GSOC efforts. More on that later
> 
> On behalf of everyone at CloudStack and especially your mentors (Abhi, 
> Kelcey, Hugo and myself) , welcome and let's have fun coding.
> 
> -Sebastien
> 
> 
> [0] - http://www.google-melange.com/gsoc/events/google/gsoc2013
> [1] - http://www.freenode.net
> [2] - https://issues.apache.org/jira/browse/CLOUDSTACK
> [3] - https://reviews.apache.org/dashboard/
> [4] - http://git-scm.com
> [5] - https://github.com
> [6] - 
> http://www.youtube.com/watch?v=3c5JIW4onGk&list=PLb899uhkHRoZCRE00h_9CRgUSiHEgFDbC&index=5
> [7] - https://cwiki.apache.org/CLOUDSTACK/
> [8] - https://cwiki.apache.org/CLOUDSTACK/student-projects.html
> [9] - http://cloudstack.apache.org
> [10] - http://www.youtube.com/playlist?list=PLb899uhkHRoZCRE00h_9CRgUSiHEgFDbC
> 
> 
> 
> 
> 
> 
> -- 
> N.g.U.y.e.N.A.n.H.t.U
> 



Re: [MERGE] disk_io_throttling to MASTER

2013-06-03 Thread Mike Tutkowski
These are new terms, so I should probably have defined them up front for
you. :)

Static storage: Storage that is pre-allocated (ex. an admin creates a
volume on a SAN), then a hypervisor data structure is created to consume
the storage (ex. XenServer SR), then that hypervisor data structure is
consumed by CloudStack. Disks (VDI) are later placed on this hypervisor
data structure as needed. In these cases, the attach logic assumes the
hypervisor data structure is already in place and simply attaches the VDI
on the hypervisor data structure to the VM in question.

Dynamic storage: Storage that is not pre-allocated. Instead of a pre-existing
volume, the primary storage here could be a SAN itself (not a volume on the
SAN, but the SAN). The hypervisor data structure must be created when an
attach-volume operation is performed, because these volumes have not been
hooked up to such a hypervisor data structure ahead of time by an admin. Once
the attach logic creates, say, an SR on XenServer for this volume, it attaches
the one and only VDI within the SR to the VM in question.
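
To make the distinction concrete, here is a rough, self-contained sketch of
the branch the attach logic takes (all names below are made up for
illustration -- this is not the actual CitrixResourceBase code):

// Illustrative only: how attach logic might branch on the storage pool type.
enum PoolType { PreSetup, NetworkFilesystem, Dynamic }

public class AttachSketch {

    static String attachVolume(PoolType type, String iscsiTargetIqn, String existingVdiUuid) {
        if (type == PoolType.Dynamic) {
            // Dynamic, zone-wide storage (e.g. SolidFire): no SR exists yet for
            // this volume, so create one from the volume's iSCSI target, then
            // attach the one and only VDI it contains.
            String srUuid = createSrFromIscsiTarget(iscsiTargetIqn); // hypothetical hypervisor call
            return "attached sole VDI of newly created SR " + srUuid;
        }
        // Static storage (PreSetup, NFS, ...): the SR already exists, so just
        // look up the pre-existing VDI and attach that.
        return "attached existing VDI " + existingVdiUuid;
    }

    static String createSrFromIscsiTarget(String iqn) {
        return "sr-for-" + iqn; // stand-in for the real SR creation call on XenServer
    }

    public static void main(String[] args) {
        System.out.println(attachVolume(PoolType.Dynamic, "iqn.2013-06.com.example:vol-1", null));
        System.out.println(attachVolume(PoolType.PreSetup, null, "vdi-1234"));
    }
}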


On Mon, Jun 3, 2013 at 3:13 PM, John Burwell  wrote:

> Mike,
>
> The current implementation of the Dynamic type attach behavior works in
> terms of Xen ISCSI which why I ask about the difference.  Another way to
> ask the question -- what is the definition of a Dynamic storage pool type?
>
> Thanks,
> -John
>
> On Jun 3, 2013, at 5:10 PM, Mike Tutkowski 
> wrote:
>
> > As far as I know, the iSCSI type is uniquely used by XenServer when you
> > want to set up Primary Storage that is directly based on an iSCSI target.
> > This allows you to skip the step of going to the hypervisor and creating
> a
> > storage repository based on that iSCSI target as CloudStack does that
> part
> > for you. I think this is only supported for XenServer. For all other
> > hypervisors, you must first go to the hypervisor and perform this step
> > manually.
> >
> > I don't really know what RBD is.
> >
> >
> > On Mon, Jun 3, 2013 at 2:13 PM, John Burwell  wrote:
> >
> >> Mike,
> >>
> >> Reading through the code, what is the difference between the ISCSI and
> >> Dynamic types?  Why isn't RBD considered Dynamic?
> >>
> >> Thanks,
> >> -John
> >>
> >> On Jun 3, 2013, at 3:46 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com>
> >> wrote:
> >>
> >>> This new type of storage is defined in the Storage.StoragePoolType
> class
> >>> (called Dynamic):
> >>>
> >>> public static enum StoragePoolType {
> >>>
> >>>   Filesystem(false), // local directory
> >>>
> >>>   NetworkFilesystem(true), // NFS or CIFS
> >>>
> >>>   IscsiLUN(true), // shared LUN, with a clusterfs overlay
> >>>
> >>>   Iscsi(true), // for e.g., ZFS Comstar
> >>>
> >>>   ISO(false), // for iso image
> >>>
> >>>   LVM(false), // XenServer local LVM SR
> >>>
> >>>   CLVM(true),
> >>>
> >>>   RBD(true),
> >>>
> >>>   SharedMountPoint(true),
> >>>
> >>>   VMFS(true), // VMware VMFS storage
> >>>
> >>>   PreSetup(true), // for XenServer, Storage Pool is set up by
> >>> customers.
> >>>
> >>>   EXT(false), // XenServer local EXT SR
> >>>
> >>>   OCFS2(true),
> >>>
> >>>   Dynamic(true); // dynamic, zone-wide storage (ex. SolidFire)
> >>>
> >>>
> >>>   boolean shared;
> >>>
> >>>
> >>>   StoragePoolType(boolean shared) {
> >>>
> >>>   this.shared = shared;
> >>>
> >>>   }
> >>>
> >>>
> >>>   public boolean isShared() {
> >>>
> >>>   return shared;
> >>>
> >>>   }
> >>>
> >>>   }
> >>>
> >>>
> >>> On Mon, Jun 3, 2013 at 1:41 PM, Mike Tutkowski <
> >> mike.tutkow...@solidfire.com
>  wrote:
> >>>
>  For example, let's say another storage company wants to implement a
>  plug-in to leverage its Quality of Service feature. It would be
> dynamic,
>  zone-wide storage, as well. They would need only implement a storage
>  plug-in as I've made the necessary changes to the hypervisor-attach
> >> logic
>  to support their plug-in.
> 
> 
>  On Mon, Jun 3, 2013 at 1:39 PM, Mike Tutkowski <
>  mike.tutkow...@solidfire.com> wrote:
> 
> > Oh, sorry to imply the XenServer code is SolidFire specific. It is
> not.
> >
> > The XenServer attach logic is now aware of dynamic, zone-wide storage
> > (and SolidFire is an implementation of this kind of storage). This
> >> kind of
> > storage is new to 4.2 with Edison's storage framework changes.
> >
> > Edison created a new framework that supported the creation and
> deletion
> > of volumes dynamically. However, when I visited with him in Portland
> >> back
> > in April, we realized that it was not complete. We realized there was
> > nothing CloudStack could do with these volumes unless the attach
> logic
> >> was
> > changed to recognize this new type of storage and create the
> >> appropriate
> > hypervisor data structure.
> >
> >
> > On Mon, Jun 3, 2013 at 1:28 PM, John Burwell 
> >> wrote:
> >
> >> Mike,
> >>
> >>>
