Re: [RESULT][VOTE] Apache CloudStack 4.7.0

2016-02-08 Thread Remi Bergsma
It would be great if we could work together to complete the remaining items,
including release notes, docs, website updates, etc. I haven't done these myself,
and until now someone has always stepped in and helped :-)

Thanks,
Remi



On 06/02/16 19:29, "Sebastien Goasguen"  wrote:

>
>> On Feb 5, 2016, at 3:08 AM, John Kinsella  wrote:
>> 
>> Did the announcements for 4.7/4.8 go out? I don’t see them on the mailing 
>> lists or elsewhere?
>> 
>
>
>I don’t think it went out, nor do I think there were RN for them or an update 
>to the website
>
>>> On Dec 17, 2015, at 8:37 AM, Remi Bergsma  
>>> wrote:
>>> 
>>> Hi all,
>>> 
>>> After 72 hours, the vote for CloudStack 4.7.0 [1] *passes* with 5 PMC + 1 
>>> non-PMC votes.
>>> 
>>> +1 (PMC / binding)
>>> * Wilder
>>> * Wido
>>> * Milamber
>>> * Rohit
>>> * Remi
>>> 
>>> +1 (non binding)
>>> * Boris
>>> 
>>> 0
>>> * Abhinandan
>>> * Dag
>>> * Glenn
>>> 
>>> -1
Raja (has been discussed; appears to be a local test configuration issue)
>>> 
>>> Thanks to everyone participating.
>>> 
>>> I will now prepare the release announcement to go out after 24 hours to 
>>> give the mirrors time to catch up.
>>> 
>>> [1] http://cloudstack.markmail.org/message/aahz3ajryvd7wzec
>>> 
>> 
>


[GitHub] cloudstack pull request: Emit event UUIDs on template deletion

2016-02-08 Thread ProjectMoon
Github user ProjectMoon commented on the pull request:

https://github.com/apache/cloudstack/pull/1378#issuecomment-181280984
  
License header is now in the file. Let's hope the tests succeed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request: Emit event UUIDs on template deletion

2016-02-08 Thread ProjectMoon
Github user ProjectMoon commented on the pull request:

https://github.com/apache/cloudstack/pull/1378#issuecomment-181286402
  
Apparently cloud-utils has a test failure. Will run tests locally and see 
what happens.




[GitHub] cloudstack pull request: Emit event UUIDs on template deletion

2016-02-08 Thread ProjectMoon
Github user ProjectMoon commented on the pull request:

https://github.com/apache/cloudstack/pull/1378#issuecomment-181300513
  
The tests all succeed locally. Looking at the error message, it appears to 
be some one-off thing. Will try to force them to run again.




Re: [PROPOSAL] LTS Release Cycle

2016-02-08 Thread Rene Moser
John,

Something is not clear to me about the frequency of new LTS releases and
the support time range.

You wrote in the proposal that we branch off for a new LTS version 2
times a year, but only 2 LTS versions will be actively maintained at any
time, yet each is supported for 20 months.

This is conflicting in my mind.

Does this mean we do not branch off twice _every_ year? Otherwise we would
have 3 releases within 12 months (1 Jan / 1 Jul / 1 Jan). And the support
would be only ~13 months at most, as we do not maintain 3 releases.

What am I missing?

René
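René's concern can be checked with a quick arithmetic sketch (half-open support windows are an assumption here): cutting a branch every 6 months with a 20-month lifetime puts up to ⌈20/6⌉ = 4 branches inside their lifetime at once, so a cap of 2 actively maintained releases only works if "active maintenance" covers less than the full 20 months.

```python
import math

def concurrent_branches(cut_interval_months: int = 6,
                        lifetime_months: int = 20) -> int:
    """Max LTS branches simultaneously inside their support lifetime,
    assuming one cut every `cut_interval_months` and half-open
    [cut, cut + lifetime) support windows."""
    return math.ceil(lifetime_months / cut_interval_months)

print(concurrent_branches())       # 20-month lifetime, 6-month cuts -> 4
print(concurrent_branches(6, 12))  # a ~12-month active window -> 2
```

With a 20-month lifetime, four branches overlap; only a shorter (~12-month) actively-maintained window keeps the overlap at two, which matches René's ~13-month estimate.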


On Tue, 02 Feb 2016 16:40:42 GMT, John wrote:
> All,
> 
> Based on the feedback from Ilya, Erik, and Daan, I have updated my
> original LTS proposal to clarify that LTS releases are official project
> deliverables, to cover commit traceability across branches, and to cover
> RM approval of PRs:
> 
> ## START ##
> 
> Motivation
> ==
> 
> The current monthly release cycle addresses the needs of users focused
> on deploying new functionality as quickly as possible.  It does not
> address the needs of users oriented towards stability rather than new
> functionality.  These users typically employ QA processes to comply with
> corporate policy and/or regulatory requirements.  To maintain a growing,
> thriving community, we must address the needs of both user types.
> Therefore, I propose that we overlay an LTS release cycle onto the
> monthly release cycle to address the needs of stability-oriented users
> with minimal to no impact on the monthly release cycle.  This proposed
> LTS release cycle has the following goals:
> 
>   * Prefer Stability to New Functionality: Deliver releases that only
> address defects and CVEs.  This narrowly focused change scope greatly
> reduces upgrade risk/operational impact and shortens internal QA cycles.
>   * Reliable Release Lifetimes: Embracing a time-based release strategy,
> the LTS release cycle will provide users with reliable support time
> frames.  These time frames give users a 20-month window in which to
> plan upgrades.
>   * Support Sustainability: With a defined end of support for LTS
> releases and a maximum of two (2) LTS releases under active maintenance
> at any given time, community members can better plan their commitments
> to release support activities.  We also have an agreed-upon policy for
> release end-of-life (EOL), avoiding debate about continuing work on old
> releases.
> 
> LTS releases would be official project releases.  Therefore, they would
> be subject to the same release voting requirements and available from the
> project downloads page.
> 
> Proposed Process
> 
> 
> LTS release branches will be cut twice a year, on 1 Jan and 1 July, based
> on the tag of the most recent monthly release.  The branch will be named
> <base version>_LTS and each LTS release will be versioned in the form of
> <base version>_<LTS revision>.  For example, if we cut an LTS
> branch based on 4.7.0, the branch would be named 4.7.0_LTS and the
> version of the first LTS release would be 4.7.0_0, the second would be
> 4.7.0_1, etc.  This release naming convention differentiates LTS and
> monthly releases, communicates the version on which the LTS release is
> based, and allows maintenance releases for monthly releases without
> version number contention/conflict.  Finally, like master, an LTS branch
> would always be deployable following its initial release.  While it is
> unlikely that LTS users would deploy from the branch, the quality
> discipline of this requirement will benefit the long-term stability of
> LTS releases.  Like master, all PRs targeting an LTS branch would
> require two LGTMs (one code review and one independent test), as well
> as an LGTM from the branch RM.  A combined code review/test LGTM and an
> RM LGTM would be acceptable.
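As a sketch only (not project code), the naming convention described above can be captured by a small parser:

```python
import re

def parse_lts_version(v: str) -> tuple[str, int]:
    """Split an LTS version like '4.7.0_1' into (base release, LTS revision).

    Follows the convention in the proposal: releases are named
    <base version>_<LTS revision>, and the branch itself is <base version>_LTS.
    """
    m = re.fullmatch(r"(\d+\.\d+\.\d+)_(\d+)", v)
    if not m:
        raise ValueError(f"not an LTS version: {v!r}")
    return m.group(1), int(m.group(2))

print(parse_lts_version("4.7.0_1"))  # -> ('4.7.0', 1)
```

Because the LTS revision sits after the underscore, maintenance releases of the monthly train (e.g. 4.7.1) never collide with LTS version numbers.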
> 
> The following are the types of changes that would be permitted and the
> guarantees provided to users:
> 
>   * No features or enhancements would be backported to LTS release branches.
>   * Database changes would be limited to those required to address the
> backported defect fixes.
>   * Support, throughout the entire release cycle, for the release/version
> of the following components from the release on which the LTS is based:
> * MySQL/MariaDB
> * JDK/JRE
> * Linux distributions
>   * API compatibility between all LTS revisions.  API changes would
> be limited to those required to fix defects or address security issues.
> 
> An LTS release would have a twenty (20) month lifetime from the date the
> release branch is cut.  This support period allows up to two (2) months
> of branch stabilization before initial release with a minimum of
> eighteen (18) months of availability for deployment.  The twenty (20)
> month LTS lifecycle would be divided into following support periods:
> 
>   * 0-2 months (average): Stabilization of the LTS branch with fixes
> based on defects discovered from functional, regression, endurance, and
> scalability testing.
>   * 2-14 months: backport blocker and critical priority defect fixe
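The message truncates here; the 14-20 month tail is assumed below to be security/CVE-only until EOL. With that assumption, the lifecycle periods can be sketched as:

```python
from datetime import date

def months_between(start: date, d: date) -> int:
    """Whole calendar months from `start` to `d` (simplified)."""
    return (d.year - start.year) * 12 + (d.month - start.month)

def lts_phase(cut: date, today: date) -> str:
    """Classify where an LTS branch is in its 20-month lifecycle.

    Phases follow the proposal: 0-2 months stabilization, 2-14 months
    blocker/critical fixes; the 14-20 month tail is an assumption
    (security-only), since the original message is truncated.
    """
    m = months_between(cut, today)
    if m < 0:
        return "not yet cut"
    if m < 2:
        return "stabilization"
    if m < 14:
        return "blocker/critical fixes"
    if m < 20:
        return "security-only (assumed)"
    return "EOL"

print(lts_phase(date(2016, 1, 1), date(2016, 6, 1)))  # blocker/critical fixes
```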

[GitHub] cloudstack pull request: Emit event UUIDs on template deletion

2016-02-08 Thread ProjectMoon
Github user ProjectMoon commented on the pull request:

https://github.com/apache/cloudstack/pull/1378#issuecomment-181313425
  
Jenkins failed with a false positive again, similar to another issue I had 
with a different pull request.




Re: [VMware] Problem starting virtual router on 4.6 and 4.7

2016-02-08 Thread Mike Tutkowski
Interesting – yeah, this VR seems to be stuck in the Starting state.

Not sure what to do about it.

As you noted, 4.6 and later behave like this. I can observe the VR entering
the Starting state properly on 4.5.

On Monday, February 8, 2016, Paul Angus  wrote:

> Hi Mike,
>
> I have the VR running in a 4.8 VMware advanced zone.
> I have noticed a new behaviour since 4.6 - that VMware system VMs don't
> report 'Running' until they've successfully called home.
>
>
>
>
>
>
> -Original Message-
> From: Mike Tutkowski [mailto:mike.tutkow...@solidfire.com
> ]
> Sent: Monday, February 8, 2016 3:03 AM
> To: dev@cloudstack.apache.org
> 
> Subject: Re: [VMware] Problem starting virtual router on 4.6 and 4.7
>
> Just checking in on this one again.
>
> In a Basic Zone on CS 4.9, the VR never exits the "Starting" state.
>
> Is anyone using VMware to run their system VMs and has it working in 4.9
> (or 4.8 or 4.7)?
>
> On Mon, Dec 7, 2015 at 4:27 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com
> 
> > wrote:
>
> > I just tried it on 4.5, though: works fine on that version.
> >
> > On Mon, Dec 7, 2015 at 3:26 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com
> > wrote:
> >
> >> Normally I run my system VMs on XenServer and that seems to work fine
> >> at the moment.
> >>
> >> I just had a need to run a VMware-only environment today, so came
> >> across this issue with the virtual router (on two different setups of
> mine).
> >>
> >> On Mon, Dec 7, 2015 at 3:25 PM, Mike Tutkowski <
> >> mike.tutkow...@solidfire.com
> > wrote:
> >>
> >>> Well, it does have an IP address assigned on Eth0 that falls within
> >>> my user-VM range.
> >>>
> >>> The only other IP address it states is the loopback.
> >>>
> >>> I'm not sure what's "normal."
> >>>
> >>> On Mon, Dec 7, 2015 at 3:19 PM, Erik Weber  > wrote:
> >>>
>  open the console and poke into it :-)
> 
> 
> 
>  On Mon, Dec 7, 2015 at 11:16 PM, Mike Tutkowski <
>  mike.tutkow...@solidfire.com
> > wrote:
> 
>  > Note: The CS MS is saying it can't connect to the virtual router.
>  > I
>  can
>  > ping the virtual router manually from the CS MS, but I can't SSH
>  > into
>  it
>  > (not sure if I should be able to).
>  >
>  > On Mon, Dec 7, 2015 at 3:14 PM, Mike Tutkowski <
>  > mike.tutkow...@solidfire.com
> 
>  > > wrote:
>  >
>  > > Hi,
>  > >
>  > > I am having a problem getting the virtual router to leave the
>  "Starting"
>  > > state on 4.6 and 4.7.
>  > >
>  > > I am making use of the correct system VM template in each case,
>  > > but
>  the
>  > > virtual router claims it requires an upgrade.
>  > >
>  > > This is in a Basic Zone and using local storage for the system
> VMs.
>  > >
>  > > Thoughts?
>  > >
>  > > Thanks!
>  > >
>  > > --
>  > > *Mike Tutkowski*
>  > > *Senior CloudStack Developer, SolidFire Inc.*
>  > > e: mike.tutkow...@solidfire.com
> 
>  > > o: 303.746.7302
>  > > Advancing the way the world uses the cloud
>  > > *™*
>  > >
>  >
>  >
>  >
>  > --
>  > *Mike Tutkowski*
>  > *Senior CloudStack Developer, SolidFire Inc.*
>  > e: mike.tutkow...@solidfire.com
> 
>  > o: 303.746.7302
>  > Advancing the way the world uses the cloud
>  > *™*
>  >
> 
> >>>
> >>>
> >>>

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Pierre-Luc Dion
Hi Mike,

The reason behind creating a SAN snapshot which is then exported to
secondary storage is that creating a copy of the .VHD directly would
impact uptime of the VM, as creating that copy takes a lot of time. This
is as opposed to a SAN snapshot, which is close to instantaneous and can
afterward be cloned to Secondary Storage asynchronously.

I would suspect an extracted VolumeSnapshot taken from a SAN snapshot
could have its SAN snapshot deleted, to avoid duplication and space
consumption on the Primary Storage side.


I see 3 definitions in our current discussion regarding the term snapshot
(these are not official terminology, but my own interpretation of them):

1. *Snapshot* (AKA: Storage Snapshot / Mike's definition of a snapshot):
a volume snapshot at the storage level, a point in time of your data. It
resides on the primary storage. Useful and efficient for software-side
incidents.
2. *Cloud Snapshot* (AKA: CloudStack VolumeSnapshot / cloud backup, AWS
S3 style): a point-in-time copy of the virtual disk that resides on a
different storage array than the original volume. Facilitates data
migration between clusters; in case of a primary storage incident, volume
snapshots are not impacted and can be reused.
3. *Backup*: Archival of your virtual machines' data that also validates
data integrity, provides a storage-efficient archiving method, and gives
an independent way to restore your data in case of a major
infrastructure disaster.


Regards,

PL


On Fri, Feb 5, 2016 at 1:34 PM, Mike Tutkowski  wrote:

> So, let's see if I currently follow the requirements:
>
> * Augment volume snapshots for managed storage to conditionally export data
> to NFS. The current process of taking a snapshot on the SAN is fine, but
> we'd like the option to export the data to NFS, as well.
>
> Questions:
>
> Once the data has been exported to NFS, do we keep the SAN snapshot or
> delete it?
>
> If we are deleting the SAN snapshot, then why don't we just copy the VHD
> from primary to secondary the way we do today for non-managed (i.e.
> traditional) storage? Why create a SAN snapshot in that scenario? Perhaps
> to have the SSVM mount and perform the VHD copy to secondary storage
> instead of a XenServer host?
>
> Thanks for the clarification.
>
> By the way, to me a backup is when you copy data from one storage system to
> another (regardless of features, if any, to restore the data in the
> future). A snapshot is a point-in-time view of the data of a volume and
> it's stored on the same storage system as the volume.
>
> On Fri, Feb 5, 2016 at 11:09 AM, Pierre-Luc Dion 
> wrote:
>
> > That's fun to see this discussion happening. I 100% agree with Paul's
> > points of view. VolumeSnapshots are not a backup, but I do consider them
> > a safety vest against Primary Storage failure, because failures happen
> > :-( .
> >
> > The current proposal around snapshots that reside on the primary storage
> > or snapshots that end up in the Secondary Storage is not to address any
> > kind of backup requirement, because a snapshot is not a backup, even an
> > extracted VM snapshot.
> >
> > The main idea, and again this is for managed storage:
> >
> > 1. StorageSnapshotAPI: Provide storage-side snapshot capability with fast
> > response time that supports rollback to a previous timestamp, creating a
> > new volume, and maybe creating a template.
> > This is not required to be a new API if the work is already done; I
> > think this is a different behavior from the user expectation of a
> > volume snapshot.
> > 2. VolumeSnapshotAPI: Provide the current CloudStack behavior that
> > creates an extraction of a volume into SecondaryStorage, which can be
> > reused to create a new volume on another Primary Storage. This type of
> > snapshot is a slow job since it has to copy the full volume size to the
> > Secondary Storage.
> >
> >
> > PL
> >
> >
> >
> > On Fri, Feb 5, 2016 at 12:45 PM, Syed Mushtaq 
> > wrote:
> >
> > > I think I share your view on the 'ideal world'. Backup (via volume
> > > snapshots) is a huge bottleneck in CloudStack. This is amplified
> > > especially when you have object storage as your secondary storage,
> > > because it requires two copies (one to an NFS staging area and from
> > > there to object storage). Not to mention that all these copies consume
> > > hypervisor resources. XenServer's Dom0 is also a huge bottleneck, as
> > > all the network and I/O flow through it. So our intention in proposing
> > > "Storage Snapshots" is to give a better way of achieving snapshots
> > > while still keeping the original definition of volume snapshots (i.e.
> > > upload to secondary storage).
> > >
> > > But as Erik pointed out, volume snapshots are not backups. They don't
> > > work for multi-disk LVM volume groups and dynamic disks. I am all in
> > > for a better backup solution which handles these use cases and
> > > utilizes the storage's advanced features.
> > >
> > >
> > >
> > > On Fri, Feb 5, 2016 at 12:29 PM, Paul Angus 
>

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Mike Tutkowski
To me it sounds like number two and number three are different uses for the
same "thing" (which is totally fine).

As for taking a fast SAN snapshot and exporting it asynchronously, do we
see the SSVM as performing the export?

To be backwards compatible with what we have in 4.6 and later for volume
snapshots for managed storage, I think it might be easier if we pass in a
flag that says whether or not to archive the SAN snapshot (which, I think,
is something that you suggested, as well, Pierre-Luc).

Spanish translation mangled

2016-02-08 Thread Daan Hoogland
@Milamber and other internationalisation specialists: I cannot get access
to the Spanish strings on Transifex. It seems these get mangled in the
source base, for instance ```label.traffic.type=Tipo de Tráfico``` or
```label.total.of.vm=Total de máquinas virtuales```. Can someone give me
access? Or can some Spanish translator correct the strings? Or can some
internationalization specialist give a transpose algorithm to deal with
the issue?
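If the mangling is UTF-8 text that was read back as ISO-8859-1/Latin-1 (a common failure mode with Java .properties files, which default to Latin-1 — this is an assumption about Daan's case, not a confirmed diagnosis), the "transpose algorithm" is to re-encode and re-decode:

```python
def unmangle(s: str) -> str:
    """Reverse UTF-8-decoded-as-Latin-1 mojibake (assumed failure mode)."""
    return s.encode("latin-1").decode("utf-8")

# The mojibake form of "Tráfico" under this failure mode is "TrÃ¡fico"
print(unmangle("TrÃ¡fico"))  # -> Tráfico
```

If the strings were instead double-encoded or truncated, this round-trip would raise a `UnicodeDecodeError` rather than silently corrupt further, so it is safe to try.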

Thanks,
-- 
Daan


[GitHub] cloudstack pull request: prevent RTNETLINK errors as we were itera...

2016-02-08 Thread remibergsma
GitHub user remibergsma opened a pull request:

https://github.com/apache/cloudstack/pull/1404

prevent RTNETLINK errors as we were iterating over empty list

Error seen:
  RTNETLINK answers: File exists

Integration tests are running, will post results later.
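The fix being described can be sketched as follows (a hypothetical helper, not the actual PR diff): tolerate an empty or missing route list and filter out routes that are already present before calling `ip route add`, so the kernel never answers "File exists":

```python
def routes_to_add(wanted, current):
    """Return only the routes not yet present; tolerates an empty/None list.

    Iterating blindly over `wanted` and re-adding routes that already
    exist is what produced "RTNETLINK answers: File exists".
    """
    if not wanted:          # empty list or None: nothing to do
        return []
    return [r for r in wanted if r not in current]

# Each surviving entry would then be applied with: ip route add <route>
print(routes_to_add(["10.0.0.0/8", "192.168.0.0/16"], {"10.0.0.0/8"}))
```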

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/remibergsma/cloudstack RTNETLINK-errors

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1404.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1404


commit 6769b9b7fed1ff732244dbe7e2fecfe24f008a14
Author: Remi Bergsma 
Date:   2016-02-08T15:04:54Z

prevent RTNETLINK errors as we were iterating over empty list

Error seen:
  RTNETLINK answers: File exists






Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Ian Rae
I think a service provider backup scenario is more likely to take advantage
of SAN snapshots. There are a few reasons for this. Traditional backups
involve access to the file system, and there is an expectation that this
can be done within reasonably short time frames without negatively impacting
VM performance, and that the backup orchestrator can apply various logic
and/or transformations to the data (compress, encrypt, deltas, etc.).
While it is true that one could apply a backup process to a cloud snapshot,
this would be slow and inefficient, requiring the data to be moved several
times, and there are some major bottlenecks with cloud snapshots. With a
cloud snapshot there seems to be no reasonable expectation of being able
to do differential snapshots (I think this depends on the hypervisor), and
if you do differential snapshots this will make file backups from those
snapshots even more complicated to orchestrate.

I suspect there needs to be a separate thread on how to better enable
backups as a service, as per Paul's suggestion, but it is a related
workflow and so relevant to this discussion.

Ian

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Pierre-Luc Dion
Mike,

In terms of APIs, would you prefer introducing a parameter to the existing
VolumeSnapshot API, for example extract={true|false} with a default value
of true, which would extract the snapshot into the secondary storage (the
current default behavior)? Then for a SAN snapshot that remains on the SAN
we would just set "extract=false", as opposed to creating a new
StorageSnapshot API?
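Pierre-Luc's suggestion can be sketched as a request builder (the `extract` parameter is the flag under discussion, not an existing CloudStack API field):

```python
def create_snapshot_params(volume_id: str, extract: bool = True) -> dict:
    """Build createSnapshot parameters with the proposed `extract` flag.

    extract=True  -> current behavior: copy the snapshot to secondary storage.
    extract=False -> keep the SAN snapshot on primary storage only.
    """
    return {
        "command": "createSnapshot",
        "volumeid": volume_id,
        "extract": "true" if extract else "false",
    }

print(create_snapshot_params("vol-123", extract=False))
```

Defaulting to `true` keeps backwards compatibility: existing callers that never pass the flag get today's extract-to-secondary behavior unchanged.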


Paul,

From what I'm seeing so far, we can't do a VM snapshot when using managed
storage for a VM having more than one volume, because snapshots are
performed outside of the hypervisor's awareness and asynchronously. If
someone has a way to address that, it would make things much more
attractive.




On Mon, Feb 8, 2016 at 10:57 AM, Ian Rae  wrote:

> I think a service provider backup scenario is more likely to take advantage
> of SAN snapshot. There are a few reasons for this. Traditional backups
> involve access to the file system, and there is an expectation that this
> can be done with reasonably short time frames without negatively impacting
> VM performance, and that the backup orchestrator can apply various logic
> and or transformations to the data (compress, encrypt, deltas etc...).
> While it is true that one could apply a backup process to a cloud snapshot,
> this would be slow and inefficient requiring the data to be moved several
> times and there are some major bottlenecks with cloud snapshots. With a
> cloud snapshot - there seems to be no reasonable expectation of being able
> to do differential snapshots (I think this depends on the hypervisor) and
> if you do differential snapshots this will make file backups from those
> snapshots even more complicated to orchestrate.
>
> Suspect there needs to be a different thread on how to better enable
> backups, as a service. As per Paul's suggestion, but it is a related
> workflow so relevant to this discussion.
>
> Ian
>
> On Monday, February 8, 2016, Mike Tutkowski 
> wrote:
>
> > To me it sounds like number two and number three are different uses for
> the
> > same "thing"(which is totally fine).
> >
> > As for taking a fast SAN snapshot and exporting it asynchronously, do we
> > see the SSVM as performing the export?
> >
> > To be backwards compatible with what we have in 4.6 and later for volume
> > snapshots for managed storage, I think it might be easier if we pass in a
> > flag that says whether or not to archive the SAN snapshot (which, I
> think,
> > is something that you suggested, as well, Pierre-Luc).
> >
> > On Monday, February 8, 2016, Pierre-Luc Dion  > > wrote:
> >
> > > Hi Mike,
> > >
> > > The reason behind creating a SAN snapshot which is then exported into
> > > secondary storage is that creating a copy of the .VHD directly would
> > > impact uptime of the VM, as creating that copy takes a lot of time,
> > > as opposed to a SAN snapshot, which is close to instantaneous and can
> > > afterward be cloned into Secondary Storage asynchronously.
> > >
> > > I would expect an extracted VolumeSnapshot taken from a SAN snapshot
> > > could have its SAN snapshot deleted, to avoid duplication and space
> > > consumption on the Primary Storage side.
> > >
> > >
> > > I see 3 definitions in our current discussion regarding the term
> > > "snapshot" (these are not official terminology, but my own
> > > interpretation of them):
> > >
> > > 1. *Snapshot* (AKA: Storage Snapshot / Mike's definition of a
> > > snapshot): a volume snapshot at the storage level, a point in time of
> > > your data. It resides on the primary storage. Useful and efficient for
> > > software-side incidents.
> > > 2. *Cloud Snapshot* (AKA: CloudStack VolumeSnapshot / cloud backup,
> > > AWS S3 style): a point-in-time copy of the virtual disk that resides
> > > on a different storage array than the original volume. Facilitates
> > > data migration between clusters and, in case of a primary storage
> > > incident, volume snapshots are not impacted and can be reused.
> > > 3. *Backup*: archival of your virtual machines' data that also
> > > validates data integrity, provides a storage-efficient archiving
> > > method, and an independent way to restore your data in case of a major
> > > infrastructure disaster.
> > >
> > >
> > > Regards,
> > >
> > > PL
> > >
> > >
> > > On Fri, Feb 5, 2016 at 1:34 PM, Mike Tutkowski <
> > > mike.tutkow...@solidfire.com  
> > > > wrote:
> > >
> > > > So, let's see if I currently follow the requirements:
> > > >
> > > > * Augment volume snapshots for managed storage to conditionally
> export
> > > data
> > > > to NFS. The current process of taking a snapshot on the SAN is fine,
> > but
> > > > we'd like the option to export the data to NFS, as well.
> > > >
> > > > Questions:
> > > >
> > > > Once the data has been exported to NFS, do we keep the SAN snapshot
> or
> > > > delete it?
> > > >
> > > > If we are deleting the SAN snapshot, then why don't we just copy the
> > VHD
> > > > from primary to secondary the way we do today for no

[GitHub] cloudstack pull request: CLOUDSTACK-9280: Allow system VM volumes ...

2016-02-08 Thread ProjectMoon
Github user ProjectMoon closed the pull request at:

https://github.com/apache/cloudstack/pull/1405


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request: CLOUDSTACK-9280: Allow system VM volumes ...

2016-02-08 Thread ProjectMoon
GitHub user ProjectMoon opened a pull request:

https://github.com/apache/cloudstack/pull/1406

CLOUDSTACK-9280: Allow system VM volumes to be expunged if no system VMs 
are remaining.

This pull request is our proposed fix for 
https://issues.apache.org/jira/browse/CLOUDSTACK-9280. I added a new special 
SSVM endpoint that happily accepts any command given to it. This endpoint is 
used in only a very specific scenario:

* The volume's VM is in state destroyed or expunging, but the volume still 
lingers.
* The volume's VM is a system VM (SSVM or console proxy).
* There are no secondary storage machines existing in the volume's zone.

This necessitated a small change to VolumeObject which allows it to find 
removed VMs (`findByIdIncludingRemoved`). The main part of the work is in the 
DefaultEndpointSelector.

We would like some thorough review of this PR as well as what tests to 
create/run. I'm not sure if the scope of this fix will lead to unintentional 
behavior changes in other scenarios.
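To make the narrow fallback condition above concrete, here is a rough sketch in Python. All names (`NoOpSsvmEndpoint`, `select_endpoint`, the dict shapes) are illustrative only; the actual change is Java code in CloudStack's DefaultEndpointSelector.

```python
# Rough sketch of the fallback described above. All names are illustrative;
# the real implementation is Java, in CloudStack's DefaultEndpointSelector.

class SsvmEndpoint:
    """Normal endpoint: work is carried out by a running SSVM."""
    def send(self, command):
        return {"result": True, "details": "executed on SSVM"}

class NoOpSsvmEndpoint:
    """Accepts any command and reports success without doing any work."""
    def send(self, command):
        return {"result": True, "details": "no-op: no SSVM available"}

def select_endpoint(volume, zone_ssvms):
    """Fall back to the no-op endpoint only in the narrow case from the PR:
    the volume belongs to a destroyed/expunging system VM and the zone has
    no secondary storage VMs left to handle the destruction."""
    vm = volume["vm"]
    if (vm["type"] in ("SecondaryStorageVm", "ConsoleProxy")
            and vm["state"] in ("Destroyed", "Expunging")
            and not zone_ssvms):
        return NoOpSsvmEndpoint()
    return SsvmEndpoint()

# A console proxy volume whose zone has no SSVMs left gets the no-op endpoint:
vol = {"vm": {"type": "ConsoleProxy", "state": "Destroyed"}}
print(type(select_endpoint(vol, zone_ssvms=[])).__name__)  # NoOpSsvmEndpoint
```

Any other combination (a running VM, a user VM, or a zone that still has an SSVM) keeps the normal selection path, which is why the change should not affect other scenarios.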

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greenqloud/cloudstack pr-volume-expunge-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1406.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1406


commit b5f10b0d9bbf1a6788e82ae86d02e1ba484f238e
Author: jeff 
Date:   2016-02-08T16:30:03Z

CLOUDSTACK-9280: System VM volumes can be expunged if no SSVM exists.

This commit adds a special SSVM endpoint which simply returns true for
all operations sent to it, without actually doing anything. This
allows for destroyed volumes of system VMs to be expunged when there
are no hosts (and thus no system VMs) remaining to handle the volume
destruction.






[GitHub] cloudstack pull request: CLOUDSTACK-9280: Allow system VM volumes ...

2016-02-08 Thread ProjectMoon
Github user ProjectMoon commented on the pull request:

https://github.com/apache/cloudstack/pull/1405#issuecomment-181465209
  
Wrong base branch.




[GitHub] cloudstack pull request: CLOUDSTACK-9280: Allow system VM volumes ...

2016-02-08 Thread ProjectMoon
GitHub user ProjectMoon opened a pull request:

https://github.com/apache/cloudstack/pull/1405

CLOUDSTACK-9280: Allow system VM volumes to be expunged when there are no 
system VMs remaining

This pull request is our proposed fix for 
https://issues.apache.org/jira/browse/CLOUDSTACK-9280. I added a new special 
SSVM endpoint that happily accepts any command given to it. This endpoint is 
used in only a very specific scenario:

* The volume's VM is in state destroyed or expunging, but the volume still 
lingers.
* The volume's VM is a system VM (SSVM or console proxy).
* There are no secondary storage machines existing in the volume's zone.

This necessitated a small change to VolumeObject which allows it to find 
removed VMs (`findByIdIncludingRemoved`). The main part of the work is in the 
DefaultEndpointSelector.

We would like some thorough review of this PR as well as what tests to 
create/run. I'm not sure if the scope of this fix will lead to unintentional 
behavior changes in other scenarios.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greenqloud/cloudstack pr-volume-expunge-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1405.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1405


commit 5de8cb179218be97842993d38365c3902a160328
Author: jeff 
Date:   2016-01-28T14:23:10Z

Add missing license header to ActionEventUtilsTest.

commit 7c446f038c9b26dac46b13e0362a76651e561e9d
Author: Daan Hoogland 
Date:   2016-01-29T09:16:24Z

Merge pull request #1382 from greenqloud/pr-fix-license-header

Add missing license header to ActionEventUtilsTest. The test class was 
merged without the license header. This commit fixes that problem.

Also note that the license header exists on the master branch only as a 
result of commit 8a5fc16. The commit seems to be on the master branch and the 
4.7 branch only. So there may be some conflicts when forward merging.

* pr/1382:
  Add missing license header to ActionEventUtilsTest.

Signed-off-by: Daan Hoogland 

commit b5f10b0d9bbf1a6788e82ae86d02e1ba484f238e
Author: jeff 
Date:   2016-02-08T16:30:03Z

CLOUDSTACK-9280: System VM volumes can be expunged if no SSVM exists.

This commit adds a special SSVM endpoint which simply returns true for
all operations sent to it, without actually doing anything. This
allows for destroyed volumes of system VMs to be expunged when there
are no hosts (and thus no system VMs) remaining to handle the volume
destruction.






[GitHub] cloudstack pull request: Fix Sync of template.properties in Swift

2016-02-08 Thread syed
Github user syed commented on the pull request:

https://github.com/apache/cloudstack/pull/1331#issuecomment-181507090
  
Fixed





Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Mike Tutkowski
Hi Pierre-Luc,

My recommendation would be this:

Add an "archive" flag to the current volume-snapshot API. Its default would
be "false" because that would be backward compatible with how 4.6 has
volume snapshots implemented (i.e. they stay on the SAN in 4.6, 4.7, and
4.8).

If you set archive=true, then we would perform a background migration of
the snapshot from the SAN to the secondary storage (then delete the SAN
snapshot).

That archive parameter would only be valid for managed storage.

Sound reasonable?

Also, a VM snapshot that includes disks provided by managed storage should
work.

Talk to you later,
Mike
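A minimal sketch of the backward-compatible surface proposed above. The "archive" parameter and the helper below are this thread's proposal and my own illustration, not an existing CloudStack API field:

```python
# Sketch of the proposed backward-compatible flag. The "archive" parameter
# is this thread's proposal, not an existing CloudStack API field.

def build_create_snapshot_params(volume_id, archive=False):
    """archive=False (the default) keeps the 4.6/4.7/4.8 behavior for
    managed storage: the snapshot stays on the SAN. archive=True would
    additionally trigger a background copy to secondary storage, after
    which the SAN snapshot is deleted."""
    params = {"command": "createSnapshot", "volumeid": volume_id}
    if archive:
        params["archive"] = "true"
    return params

# Existing callers see no change; the new key only appears when requested.
print(build_create_snapshot_params("vol-1"))
# {'command': 'createSnapshot', 'volumeid': 'vol-1'}
print(build_create_snapshot_params("vol-1", archive=True)["archive"])  # true
```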

On Mon, Feb 8, 2016 at 9:22 AM, Pierre-Luc Dion  wrote:

> Mike,
>
> In terms of API's, would you prefer introducing a parameter to the existing
> VolumeSnapshot, example:   extract={true|false}  with a default value of
> true  which would extract snapshot into the secondary storage, which is the
> current default behavior. Then for SAN snapshot that remain on the SAN we
> would just set "extract=false" ?  as oppose to create a new
>  StorageSnapshot API ?
>
>
> Paul,
>
> From what I'm seeing so far, we can't do a VM-snapshot when using managed
> storage for VM having more than one Volume. For the reason that snapshot
> are performed outside of the hypervisor awareness and asynchronously. If
> someone have a way to address that, it would make thinks much more
> attractive.
>
>
>
>
> On Mon, Feb 8, 2016 at 10:57 AM, Ian Rae  wrote:
>
> > I think a service provider backup scenario is more likely to take
> advantage
> > of SAN snapshot. There are a few reasons for this. Traditional backups
> > involve access to the file system, and there is an expectation that this
> > can be done with reasonably short time frames without negatively
> impacting
> > VM performance, and that the backup orchestrator can apply various logic
> > and or transformations to the data (compress, encrypt, deltas etc...).
> > While it is true that one could apply a backup process to a cloud
> snapshot,
> > this would be slow and inefficient requiring the data to be moved several
> > times and there are some major bottlenecks with cloud snapshots. With a
> > cloud snapshot - there seems to be no reasonable expectation of being
> able
> > to do differential snapshots (I think this depends on the hypervisor) and
> > if you do differential snapshots this will make file backups from those
> > snapshots even more complicated to orchestrate.
> >
> > Suspect there needs to be a different thread on how to better enable
> > backups, as a service. As per Paul's suggestion, but it is a related
> > workflow so relevant to this discussion.
> >
> > Ian
> >
> > On Monday, February 8, 2016, Mike Tutkowski <
> mike.tutkow...@solidfire.com>
> > wrote:
> >
> > > To me it sounds like number two and number three are different uses for
> > the
> > > same "thing"(which is totally fine).
> > >
> > > As for taking a fast SAN snapshot and exporting it asynchronously, do
> we
> > > see the SSVM as performing the export?
> > >
> > > To be backwards compatible with what we have in 4.6 and later for
> volume
> > > snapshots for managed storage, I think it might be easier if we pass
> in a
> > > flag that says whether or not to archive the SAN snapshot (which, I
> > think,
> > > is something that you suggested, as well, Pierre-Luc).
> > >
> > > On Monday, February 8, 2016, Pierre-Luc Dion  > > > wrote:
> > >
> > > > Hi Mike,
> > > >
> > > > The reason behind the creation of a SAN snapshot which is exported
> into
> > > > secondary storage, is because creating a copy of the .VHD directly
> > would
> > > > impact uptime of the VM as creating that copy take lots of time. Has
> > > oppose
> > > > to a SAN snapshot that is close to instantaneous which can afterward
> be
> > > > clone into Secondary Storage asynchronously.
> > > >
> > > > I would suspect an extracted VolumeSnapshot taken from a SAN snapshot
> > > could
> > > > have is SAN snapshot deleted to avoid duplica and space consumption
> on
> > > the
> > > > Primary Storage side.
> > > >
> > > >
> > > > I see 3 definitions in our current discussion regarding the term
> > snapshot
> > > > (these are not official terminology but by own interpretation of
> them):
> > > >
> > > > 1. *Snapshot* (AKA: Storage Snapshot / Mike's definition of a
> > snapshot):
> > > > it's a volume snapshot at the storage level, point in time of your
> > data.
> > > it
> > > > reside on the primary storage. Useful and efficient for software side
> > > > incident.
> > > > 2. *Cloud Snapshot *( AKA: CloudStack VolumeSnapshot/ cloud backup
> > aws-S3
> > > > style ): Point in time copy of the Virtual Disk that reside on a
> > > different
> > > > storage array then the original Volume. Facilitate data migration
> > between
> > > > clusters and, in case of primary storage incident, Volume snapshots
> are
> > > not
> > > > impacted and can be reuse.
> > > > 3. *Backup*: Archival of your Virtual-machines data that also
> validate
> > > d

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Syed Mushtaq
Hi Mike,

Adding a flag to createSnapshot was the first and most obvious thing that
came to our minds. The problems I had with it were:

1) I feel it exposes something to the end user that is internal to the
cloud.

2) We have to follow two different restore/deletion paths in the same code
path depending on where the snapshot resides, which I find kind of a bad
design.

But if exposing an archive flag to the end user is acceptable, then we can
definitely use it instead of adding the StorageSnapshot API.

Thanks,
-Syed
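The second concern above can be sketched as follows. This is only an illustration of the branching, under the assumption that a snapshot record carries its location; the names and return values are invented for the example:

```python
# Sketch of the dual code path concern: once a snapshot can live either on
# the SAN (primary) or on secondary storage, restore and delete must branch
# on its location. Names and return values are illustrative only.

def restore_snapshot(snapshot):
    if snapshot["location"] == "primary":
        # SAN-resident snapshot: near-instant, storage-side rollback
        return "san-rollback"
    if snapshot["location"] == "secondary":
        # Archived copy: slower, copied back through the hypervisor
        return "copy-back-from-secondary"
    raise ValueError("unknown snapshot location")

print(restore_snapshot({"location": "primary"}))    # san-rollback
print(restore_snapshot({"location": "secondary"}))  # copy-back-from-secondary
```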


On Mon, Feb 8, 2016 at 1:26 PM, Mike Tutkowski  wrote:

> Hi Pierre-Luc,
>
> My recommendation would be this:
>
> Add an "archive" flag to the current volume-snapshot API. Its default would
> be "false" because that would be backward compatible with how 4.6 has
> volume snapshots implemented (i.e. they stay on the SAN in 4.6, 4.7, and
> 4.8).
>
> If you set archive=true, then we would perform a background migration of
> the snapshot from the SAN to the secondary storage (then delete the SAN
> snapshot).
>
> That archive parameter would only be valid for managed storage.
>
> Sound reasonable?
>
> Also, a VM snapshot that includes disks provided by managed storage should
> work.
>
> Talk to you later,
> Mike
>
> On Mon, Feb 8, 2016 at 9:22 AM, Pierre-Luc Dion 
> wrote:
>
> > Mike,
> >
> > In terms of API's, would you prefer introducing a parameter to the
> existing
> > VolumeSnapshot, example:   extract={true|false}  with a default value of
> > true  which would extract snapshot into the secondary storage, which is
> the
> > current default behavior. Then for SAN snapshot that remain on the SAN we
> > would just set "extract=false" ?  as oppose to create a new
> >  StorageSnapshot API ?
> >
> >
> > Paul,
> >
> > From what I'm seeing so far, we can't do a VM-snapshot when using managed
> > storage for VM having more than one Volume. For the reason that snapshot
> > are performed outside of the hypervisor awareness and asynchronously. If
> > someone have a way to address that, it would make thinks much more
> > attractive.
> >
> >
> >
> >
> > On Mon, Feb 8, 2016 at 10:57 AM, Ian Rae  wrote:
> >
> > > I think a service provider backup scenario is more likely to take
> > advantage
> > > of SAN snapshot. There are a few reasons for this. Traditional backups
> > > involve access to the file system, and there is an expectation that
> this
> > > can be done with reasonably short time frames without negatively
> > impacting
> > > VM performance, and that the backup orchestrator can apply various
> logic
> > > and or transformations to the data (compress, encrypt, deltas etc...).
> > > While it is true that one could apply a backup process to a cloud
> > snapshot,
> > > this would be slow and inefficient requiring the data to be moved
> several
> > > times and there are some major bottlenecks with cloud snapshots. With a
> > > cloud snapshot - there seems to be no reasonable expectation of being
> > able
> > > to do differential snapshots (I think this depends on the hypervisor)
> and
> > > if you do differential snapshots this will make file backups from those
> > > snapshots even more complicated to orchestrate.
> > >
> > > Suspect there needs to be a different thread on how to better enable
> > > backups, as a service. As per Paul's suggestion, but it is a related
> > > workflow so relevant to this discussion.
> > >
> > > Ian
> > >
> > > On Monday, February 8, 2016, Mike Tutkowski <
> > mike.tutkow...@solidfire.com>
> > > wrote:
> > >
> > > > To me it sounds like number two and number three are different uses
> for
> > > the
> > > > same "thing"(which is totally fine).
> > > >
> > > > As for taking a fast SAN snapshot and exporting it asynchronously, do
> > we
> > > > see the SSVM as performing the export?
> > > >
> > > > To be backwards compatible with what we have in 4.6 and later for
> > volume
> > > > snapshots for managed storage, I think it might be easier if we pass
> > in a
> > > > flag that says whether or not to archive the SAN snapshot (which, I
> > > think,
> > > > is something that you suggested, as well, Pierre-Luc).
> > > >
> > > > On Monday, February 8, 2016, Pierre-Luc Dion  > > > > wrote:
> > > >
> > > > > Hi Mike,
> > > > >
> > > > > The reason behind the creation of a SAN snapshot which is exported
> > into
> > > > > secondary storage, is because creating a copy of the .VHD directly
> > > would
> > > > > impact uptime of the VM as creating that copy take lots of time.
> Has
> > > > oppose
> > > > > to a SAN snapshot that is close to instantaneous which can
> afterward
> > be
> > > > > clone into Secondary Storage asynchronously.
> > > > >
> > > > > I would suspect an extracted VolumeSnapshot taken from a SAN
> snapshot
> > > > could
> > > > > have is SAN snapshot deleted to avoid duplica and space consumption
> > on
> > > > the
> > > > > Primary Storage side.
> > > > >
> > > > >
> > > > > I see 3 definitions in our current discussion regarding the term
> > > sna

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Mike Tutkowski
It's not ideal - true, but it does allow us to be backward compatible.

If you have other ideas, though, about how to maintain backward
compatibility, I'm definitely open to hearing them.

Thanks!

On Mon, Feb 8, 2016 at 11:42 AM, Syed Mushtaq 
wrote:

> Hi Mike,
>
> Adding a flag to createSnapshot was the first and the most obvious thing
> that came to our minds. The problem that I had with this was that:
>
> 1) I feel it is exposing something to the end user that is internal to the
> cloud.
>
> 2) We have to follow two different ways of restore/deletion in the same
> code path depending on where the Snapshot resides which I find kind of a
> bad design.
>
> But if exposing a archive flag to the end user is acceptable then we can
> definitely use this instead of adding the StorageSnapshot API
>
> Thanks,
> -Syed
>
>
> On Mon, Feb 8, 2016 at 1:26 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com
> > wrote:
>
> > Hi Pierre-Luc,
> >
> > My recommendation would be this:
> >
> > Add an "archive" flag to the current volume-snapshot API. Its default
> would
> > be "false" because that would be backward compatible with how 4.6 has
> > volume snapshots implemented (i.e. they stay on the SAN in 4.6, 4.7, and
> > 4.8).
> >
> > If you set archive=true, then we would perform a background migration of
> > the snapshot from the SAN to the secondary storage (then delete the SAN
> > snapshot).
> >
> > That archive parameter would only be valid for managed storage.
> >
> > Sound reasonable?
> >
> > Also, a VM snapshot that includes disks provided by managed storage
> should
> > work.
> >
> > Talk to you later,
> > Mike
> >
> > On Mon, Feb 8, 2016 at 9:22 AM, Pierre-Luc Dion 
> > wrote:
> >
> > > Mike,
> > >
> > > In terms of API's, would you prefer introducing a parameter to the
> > existing
> > > VolumeSnapshot, example:   extract={true|false}  with a default value
> of
> > > true  which would extract snapshot into the secondary storage, which is
> > the
> > > current default behavior. Then for SAN snapshot that remain on the SAN
> we
> > > would just set "extract=false" ?  as oppose to create a new
> > >  StorageSnapshot API ?
> > >
> > >
> > > Paul,
> > >
> > > From what I'm seeing so far, we can't do a VM-snapshot when using
> managed
> > > storage for VM having more than one Volume. For the reason that
> snapshot
> > > are performed outside of the hypervisor awareness and asynchronously.
> If
> > > someone have a way to address that, it would make thinks much more
> > > attractive.
> > >
> > >
> > >
> > >
> > > On Mon, Feb 8, 2016 at 10:57 AM, Ian Rae  wrote:
> > >
> > > > I think a service provider backup scenario is more likely to take
> > > advantage
> > > > of SAN snapshot. There are a few reasons for this. Traditional
> backups
> > > > involve access to the file system, and there is an expectation that
> > this
> > > > can be done with reasonably short time frames without negatively
> > > impacting
> > > > VM performance, and that the backup orchestrator can apply various
> > logic
> > > > and or transformations to the data (compress, encrypt, deltas
> etc...).
> > > > While it is true that one could apply a backup process to a cloud
> > > snapshot,
> > > > this would be slow and inefficient requiring the data to be moved
> > several
> > > > times and there are some major bottlenecks with cloud snapshots.
> With a
> > > > cloud snapshot - there seems to be no reasonable expectation of being
> > > able
> > > > to do differential snapshots (I think this depends on the hypervisor)
> > and
> > > > if you do differential snapshots this will make file backups from
> those
> > > > snapshots even more complicated to orchestrate.
> > > >
> > > > Suspect there needs to be a different thread on how to better enable
> > > > backups, as a service. As per Paul's suggestion, but it is a related
> > > > workflow so relevant to this discussion.
> > > >
> > > > Ian
> > > >
> > > > On Monday, February 8, 2016, Mike Tutkowski <
> > > mike.tutkow...@solidfire.com>
> > > > wrote:
> > > >
> > > > > To me it sounds like number two and number three are different uses
> > for
> > > > the
> > > > > same "thing"(which is totally fine).
> > > > >
> > > > > As for taking a fast SAN snapshot and exporting it asynchronously,
> do
> > > we
> > > > > see the SSVM as performing the export?
> > > > >
> > > > > To be backwards compatible with what we have in 4.6 and later for
> > > volume
> > > > > snapshots for managed storage, I think it might be easier if we
> pass
> > > in a
> > > > > flag that says whether or not to archive the SAN snapshot (which, I
> > > > think,
> > > > > is something that you suggested, as well, Pierre-Luc).
> > > > >
> > > > > On Monday, February 8, 2016, Pierre-Luc Dion  > > > > > wrote:
> > > > >
> > > > > > Hi Mike,
> > > > > >
> > > > > > The reason behind the creation of a SAN snapshot which is
> exported
> > > into
> > > > > > secondary storage, is because creating a copy of the .VHD
> directly
> > > > would
> > > > > 

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Mike Tutkowski
Now that I re-read your e-mail, it dawned on me: The end-user doesn't care
where the snapshot is.

If that's true, then we should perhaps control this via Global Settings or
something.

On Mon, Feb 8, 2016 at 11:46 AM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> It's not ideal - true, but it does allow us to be backward compatible.
>
> If you have other ideas, though, about how to maintain backward
> compatibility, I'm definitely open to hear them.
>
> Thanks!
>
> On Mon, Feb 8, 2016 at 11:42 AM, Syed Mushtaq 
> wrote:
>
>> Hi Mike,
>>
>> Adding a flag to createSnapshot was the first and the most obvious thing
>> that came to our minds. The problem that I had with this was that:
>>
>> 1) I feel it is exposing something to the end user that is internal to the
>> cloud.
>>
>> 2) We have to follow two different ways of restore/deletion in the same
>> code path depending on where the Snapshot resides which I find kind of a
>> bad design.
>>
>> But if exposing a archive flag to the end user is acceptable then we can
>> definitely use this instead of adding the StorageSnapshot API
>>
>> Thanks,
>> -Syed
>>
>>
>> On Mon, Feb 8, 2016 at 1:26 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com
>> > wrote:
>>
>> > Hi Pierre-Luc,
>> >
>> > My recommendation would be this:
>> >
>> > Add an "archive" flag to the current volume-snapshot API. Its default
>> would
>> > be "false" because that would be backward compatible with how 4.6 has
>> > volume snapshots implemented (i.e. they stay on the SAN in 4.6, 4.7, and
>> > 4.8).
>> >
>> > If you set archive=true, then we would perform a background migration of
>> > the snapshot from the SAN to the secondary storage (then delete the SAN
>> > snapshot).
>> >
>> > That archive parameter would only be valid for managed storage.
>> >
>> > Sound reasonable?
>> >
>> > Also, a VM snapshot that includes disks provided by managed storage
>> should
>> > work.
>> >
>> > Talk to you later,
>> > Mike
>> >
>> > On Mon, Feb 8, 2016 at 9:22 AM, Pierre-Luc Dion 
>> > wrote:
>> >
>> > > Mike,
>> > >
>> > > In terms of API's, would you prefer introducing a parameter to the
>> > existing
>> > > VolumeSnapshot, example:   extract={true|false}  with a default value
>> of
>> > > true  which would extract snapshot into the secondary storage, which
>> is
>> > the
>> > > current default behavior. Then for SAN snapshot that remain on the
>> SAN we
>> > > would just set "extract=false" ?  as oppose to create a new
>> > >  StorageSnapshot API ?
>> > >
>> > >
>> > > Paul,
>> > >
>> > > From what I'm seeing so far, we can't do a VM-snapshot when using
>> managed
>> > > storage for VM having more than one Volume. For the reason that
>> snapshot
>> > > are performed outside of the hypervisor awareness and asynchronously.
>> If
>> > > someone have a way to address that, it would make thinks much more
>> > > attractive.
>> > >
>> > >
>> > >
>> > >
>> > > On Mon, Feb 8, 2016 at 10:57 AM, Ian Rae  wrote:
>> > >
>> > > > I think a service provider backup scenario is more likely to take
>> > > advantage
>> > > > of SAN snapshot. There are a few reasons for this. Traditional
>> backups
>> > > > involve access to the file system, and there is an expectation that
>> > this
>> > > > can be done with reasonably short time frames without negatively
>> > > impacting
>> > > > VM performance, and that the backup orchestrator can apply various
>> > logic
>> > > > and or transformations to the data (compress, encrypt, deltas
>> etc...).
>> > > > While it is true that one could apply a backup process to a cloud
>> > > snapshot,
>> > > > this would be slow and inefficient requiring the data to be moved
>> > several
>> > > > times and there are some major bottlenecks with cloud snapshots.
>> With a
>> > > > cloud snapshot - there seems to be no reasonable expectation of
>> being
>> > > able
>> > > > to do differential snapshots (I think this depends on the
>> hypervisor)
>> > and
>> > > > if you do differential snapshots this will make file backups from
>> those
>> > > > snapshots even more complicated to orchestrate.
>> > > >
>> > > > Suspect there needs to be a different thread on how to better enable
>> > > > backups, as a service. As per Paul's suggestion, but it is a related
>> > > > workflow so relevant to this discussion.
>> > > >
>> > > > Ian
>> > > >
>> > > > On Monday, February 8, 2016, Mike Tutkowski <
>> > > mike.tutkow...@solidfire.com>
>> > > > wrote:
>> > > >
>> > > > > To me it sounds like number two and number three are different
>> uses
>> > for
>> > > > the
>> > > > > same "thing"(which is totally fine).
>> > > > >
>> > > > > As for taking a fast SAN snapshot and exporting it
>> asynchronously, do
>> > > we
>> > > > > see the SSVM as performing the export?
>> > > > >
>> > > > > To be backwards compatible with what we have in 4.6 and later for
>> > > volume
>> > > > > snapshots for managed storage, I think it might be easier if we
>> pass
>> > > in a
>> > > > > flag that says whether or not to arc

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Will Stevens
I don't think a global setting is a good option, because we need both
behaviors to be available at the same time, so that different use cases
can pick whichever they need.

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Mon, Feb 8, 2016 at 1:48 PM, Mike Tutkowski  wrote:

> Now that I re-read your e-mail, it dawned on me: The end-user doesn't care
> where the snapshot is.
>
> If that's true, then we should perhaps control this via Global Settings or
> something.
>
> On Mon, Feb 8, 2016 at 11:46 AM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
> > It's not ideal - true, but it does allow us to be backward compatible.
> >
> > If you have other ideas, though, about how to maintain backward
> > compatibility, I'm definitely open to hear them.
> >
> > Thanks!
> >
> > On Mon, Feb 8, 2016 at 11:42 AM, Syed Mushtaq 
> > wrote:
> >
> >> Hi Mike,
> >>
> >> Adding a flag to createSnapshot was the first and the most obvious thing
> >> that came to our minds. The problem that I had with this was that:
> >>
> >> 1) I feel it is exposing something to the end user that is internal to
> the
> >> cloud.
> >>
> >> 2) We have to follow two different ways of restore/deletion in the same
> >> code path depending on where the Snapshot resides which I find kind of a
> >> bad design.
> >>
> >> But if exposing a archive flag to the end user is acceptable then we can
> >> definitely use this instead of adding the StorageSnapshot API
> >>
> >> Thanks,
> >> -Syed
> >>
> >>
> >> On Mon, Feb 8, 2016 at 1:26 PM, Mike Tutkowski <
> >> mike.tutkow...@solidfire.com
> >> > wrote:
> >>
> >> > Hi Pierre-Luc,
> >> >
> >> > My recommendation would be this:
> >> >
> >> > Add an "archive" flag to the current volume-snapshot API. Its default
> >> would
> >> > be "false" because that would be backward compatible with how 4.6 has
> >> > volume snapshots implemented (i.e. they stay on the SAN in 4.6, 4.7,
> and
> >> > 4.8).
> >> >
> >> > If you set archive=true, then we would perform a background migration
> of
> >> > the snapshot from the SAN to the secondary storage (then delete the
> SAN
> >> > snapshot).
> >> >
> >> > That archive parameter would only be valid for managed storage.
> >> >
> >> > Sound reasonable?
> >> >
> >> > Also, a VM snapshot that includes disks provided by managed storage
> >> should
> >> > work.
> >> >
> >> > Talk to you later,
> >> > Mike
> >> >
> >> > On Mon, Feb 8, 2016 at 9:22 AM, Pierre-Luc Dion 
> >> > wrote:
> >> >
> >> > > Mike,
> >> > >
> >> > > In terms of API's, would you prefer introducing a parameter to the
> >> > existing
> >> > > VolumeSnapshot, example:   extract={true|false}  with a default
> value
> >> of
> >> > > true  which would extract snapshot into the secondary storage, which
> >> is
> >> > the
> >> > > current default behavior. Then for SAN snapshot that remain on the
> >> SAN we
> >> > > would just set "extract=false" ?  as oppose to create a new
> >> > >  StorageSnapshot API ?
> >> > >
> >> > >
> >> > > Paul,
> >> > >
> >> > > From what I'm seeing so far, we can't do a VM-snapshot when using
> >> managed
> >> > > storage for VM having more than one Volume. For the reason that
> >> snapshot
> >> > > are performed outside of the hypervisor awareness and
> asynchronously.
> >> If
> >> > > someone have a way to address that, it would make thinks much more
> >> > > attractive.
> >> > >
> >> > >
> >> > >
> >> > >
> >> > > On Mon, Feb 8, 2016 at 10:57 AM, Ian Rae  wrote:
> >> > >
> >> > > > I think a service provider backup scenario is more likely to take
> >> > > advantage
> >> > > > of SAN snapshot. There are a few reasons for this. Traditional
> >> backups
> >> > > > involve access to the file system, and there is an expectation
> that
> >> > this
> >> > > > can be done with reasonably short time frames without negatively
> >> > > impacting
> >> > > > VM performance, and that the backup orchestrator can apply various
> >> > logic
> >> > > > and or transformations to the data (compress, encrypt, deltas
> >> etc...).
> >> > > > While it is true that one could apply a backup process to a cloud
> >> > > snapshot,
> >> > > > this would be slow and inefficient requiring the data to be moved
> >> > several
> >> > > > times and there are some major bottlenecks with cloud snapshots.
> >> With a
> >> > > > cloud snapshot - there seems to be no reasonable expectation of
> >> being
> >> > > able
> >> > > > to do differential snapshots (I think this depends on the
> >> hypervisor)
> >> > and
> >> > > > if you do differential snapshots this will make file backups from
> >> those
> >> > > > snapshots even more complicated to orchestrate.
> >> > > >
> >> > > > Suspect there needs to be a different thread on how to better
> enable
> >> > > > backups, as a service. As per Paul's suggestion, but it is a
> related
> >> > > > workflow so relevant to this discussion.
> >> > > >
> >> > > > Ian
> >

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Syed Mushtaq
That is what I was going to suggest, Mike: add a global setting to be
backwards compatible, and add the StorageSnapshot API for doing SAN
snapshots (while VolumeSnapshot keeps uploading to secondary storage).
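The two code paths being proposed could be sketched, as a toy model, roughly like this. Every function and variable name here is hypothetical and does not come from the real CloudStack code base, whose service layer and storage drivers are far more involved:

```python
# Toy model of the two snapshot paths under discussion; all names are
# hypothetical -- this is an illustration, not CloudStack code.
import itertools

_ids = itertools.count(1)
PRIMARY = []    # stand-in for snapshots held on the SAN (primary storage)
SECONDARY = []  # stand-in for copies on NFS secondary storage

def take_primary_snapshot(volume):
    snap = {"id": next(_ids), "volume": volume}
    PRIMARY.append(snap)
    return snap

def volume_snapshot(volume):
    """Classic path: snapshot on primary, copy to secondary, drop the SAN copy."""
    snap = take_primary_snapshot(volume)
    SECONDARY.append(dict(snap))
    PRIMARY.remove(snap)
    return {"id": snap["id"], "location": "secondary"}

def storage_snapshot(volume):
    """Proposed path: fast SAN snapshot that simply stays on primary storage."""
    snap = take_primary_snapshot(volume)
    return {"id": snap["id"], "location": "primary"}
```

The point of the split is that restore and delete for the two kinds of snapshot follow entirely separate paths, rather than branching inside one API on where the data happens to live.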

On Mon, Feb 8, 2016 at 1:48 PM, Mike Tutkowski  wrote:

> Now that I re-read your e-mail, it dawned on me: The end-user doesn't care
> where the snapshot is.
>
> If that's true, then we should perhaps control this via Global Settings or
> something.

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Ian Rae
I would hope that the default behaviour in CloudStack remains the traditional
volume snapshot moved to secondary storage. A global setting to change that
behaviour is probably OK as long as it is not the default, but in certain
cases the user would want to copy those snapshots to secondary storage in
addition to taking the snapshot on the primary storage.

On Monday, February 8, 2016, Will Stevens  wrote:

> I don't think a global setting is a good option because we need both
> functionality to be available at the same time and for different use cases
> to be able to pick which they choose.

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Mike Tutkowski
So, right now in 4.6, 4.7, and 4.8, the behavior for a managed storage
volume snapshot is for the data to remain on the SAN (not secondary
storage).

It was simply designed as a space-efficient and fast alternative to copying
data to NFS (secondary storage).

What we need to do is somehow maintain that original behavior, but augment
it with the option of backing up the data to secondary storage (and, if
doing that, then delete the SAN snapshot).
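The augmented behaviour Mike describes could be sketched, very roughly, like this. The helper names are made up for illustration, and the real proposal would run the copy as a background migration rather than inline:

```python
# Rough sketch of the proposed "archive" flag for managed storage.
# All names here are hypothetical; this is not the real CloudStack API.

SAN = set()        # ids of snapshots currently held on the SAN
SECONDARY = set()  # ids of snapshots copied to secondary storage

def create_snapshot(snapshot_id, archive=False):
    SAN.add(snapshot_id)           # fast, space-efficient SAN snapshot
    if archive:
        # In the real proposal this copy would run as a background migration.
        SECONDARY.add(snapshot_id)
        SAN.discard(snapshot_id)   # then the SAN-side snapshot is deleted
        return "secondary"
    # Default (4.6/4.7/4.8 behaviour): the snapshot stays on the SAN.
    return "primary"
```

With `archive=False` as the default, existing callers see exactly the 4.6-era behaviour, which is what makes the flag backward compatible.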

On Mon, Feb 8, 2016 at 11:55 AM, Ian Rae  wrote:

> I would hope that default behaviour in CloudStack is the traditional volume
> snapshot moved to secondary storage. A global setting to change that
> behaviour is probably ok if it is not default, but the user would want to
> in certain cases make copies of those snapshots to secondary storage in
> addition to taking the snapshot on the primary storage.

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Mike Tutkowski
Hey Will,

Who's picking the behavior? Is it the cloud provider or the end user?

Thanks

On Mon, Feb 8, 2016 at 11:52 AM, Will Stevens  wrote:

> I don't think a global setting is a good option because we need both
> functionality to be available at the same time and for different use cases
> to be able to pick which they choose.

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Will Stevens
Sorry, I missed a bit of context when I responded. The global setting would
apply only to the managed storage case, currently being called Storage
Snapshots, and would only determine whether a copy is pushed to secondary
storage, right? It would not change the behavior of Volume Snapshots, right?

I was referring to the need for Volume Snapshots and Storage Snapshots to
exist together.  Disregard my comment.  I caught up on context after I
posted.  My bad...

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Mon, Feb 8, 2016 at 2:05 PM, Mike Tutkowski  wrote:

> Hey Will,
>
> Who's picking the behavior? Is it the cloud provider or the end user?
>
> Thanks

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Mike Tutkowski
Correct, Will.

That global setting would only apply to managed storage. Non-managed
(traditional) volume snapshots are completely unaffected by this feature.

If we need to sometimes keep snapshots on the SAN and sometimes push them
to secondary storage, though, we'll need a more robust solution than a
global setting.
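One shape such a solution could take is a per-request parameter that falls back to a global default, so both behaviours coexist. A minimal sketch, with hypothetical names:

```python
# Minimal sketch: a per-snapshot "archive" parameter overriding a global
# default, so both behaviours can coexist. All names are hypothetical.

GLOBAL_ARCHIVE_DEFAULT = False  # stand-in for a CloudStack global setting

def resolve_archive(request_archive=None):
    """Per-request value wins; otherwise fall back to the global default."""
    return GLOBAL_ARCHIVE_DEFAULT if request_archive is None else request_archive
```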

On Mon, Feb 8, 2016 at 12:11 PM, Will Stevens  wrote:

> Sorry.  I missed a bit of context when I responded.  The global setting
> would be only for the managed storage case, currently being called Storage
> Snapshots, and is only to determine if a copy is pushed to secondary
> storage right?  The global setting would not change the behavior of the
> Volume Snapshots right?
>
> I was referring to the need for Volume Snapshots and Storage Snapshots to
> exist together.  Disregard my comment.  I caught up on context after I
> posted.  My bad...
>
> *Will STEVENS*
> Lead Developer
>
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
>
> On Mon, Feb 8, 2016 at 2:05 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com
> > wrote:
>
> > Hey Will,
> >
> > Who's picking the behavior? Is it the cloud provider or the end user?
> >
> > Thanks
> >
> > On Mon, Feb 8, 2016 at 11:52 AM, Will Stevens 
> > wrote:
> >
> > > I don't think a global setting is a good option because we need both
> > > functionality to be available at the same time and for different use
> > cases
> > > to be able to pick which they choose.
> > >
> > > *Will STEVENS*
> > > Lead Developer
> > >
> > > *CloudOps* *| *Cloud Solutions Experts
> > > 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> > > w cloudops.com *|* tw @CloudOps_
> > >
> > > On Mon, Feb 8, 2016 at 1:48 PM, Mike Tutkowski <
> > > mike.tutkow...@solidfire.com
> > > > wrote:
> > >
> > > > Now that I re-read your e-mail, it dawned on me: The end-user doesn't
> > > care
> > > > where the snapshot is.
> > > >
> > > > If that's true, then we should perhaps control this via Global
> Settings
> > > or
> > > > something.
> > > >
> > > > On Mon, Feb 8, 2016 at 11:46 AM, Mike Tutkowski <
> > > > mike.tutkow...@solidfire.com> wrote:
> > > >
> > > > > It's not ideal - true, but it does allow us to be backward
> > compatible.
> > > > >
> > > > > If you have other ideas, though, about how to maintain backward
> > > > > compatibility, I'm definitely open to hear them.
> > > > >
> > > > > Thanks!
> > > > >
> > > > > On Mon, Feb 8, 2016 at 11:42 AM, Syed Mushtaq <
> > syed1.mush...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > >> Hi Mike,
> > > > >>
> > > > >> Adding a flag to createSnapshot was the first and the most obvious
> > > thing
> > > > >> that came to our minds. The problem that I had with this was that:
> > > > >>
> > > > >> 1) I feel it is exposing something to the end user that is
> internal
> > to
> > > > the
> > > > >> cloud.
> > > > >>
> > > > >> 2) We have to follow two different ways of restore/deletion in the
> > > same
> > > > >> code path depending on where the Snapshot resides which I find
> kind
> > > of a
> > > > >> bad design.
> > > > >>
> > > > >> But if exposing an archive flag to the end user is acceptable then
> we
> > > can
> > > > >> definitely use this instead of adding the StorageSnapshot API
> > > > >>
> > > > >> Thanks,
> > > > >> -Syed
> > > > >>
> > > > >>
> > > > >> On Mon, Feb 8, 2016 at 1:26 PM, Mike Tutkowski <
> > > > >> mike.tutkow...@solidfire.com
> > > > >> > wrote:
> > > > >>
> > > > >> > Hi Pierre-Luc,
> > > > >> >
> > > > >> > My recommendation would be this:
> > > > >> >
> > > > >> > Add an "archive" flag to the current volume-snapshot API. Its
> > > default
> > > > >> would
> > > > >> > be "false" because that would be backward compatible with how
> 4.6
> > > has
> > > > >> > volume snapshots implemented (i.e. they stay on the SAN in 4.6,
> > 4.7,
> > > > and
> > > > >> > 4.8).
> > > > >> >
> > > > >> > If you set archive=true, then we would perform a background
> > > migration
> > > > of
> > > > >> > the snapshot from the SAN to the secondary storage (then delete
> > the
> > > > SAN
> > > > >> > snapshot).
> > > > >> >
> > > > >> > That archive parameter would only be valid for managed storage.
> > > > >> >
> > > > >> > Sound reasonable?
> > > > >> >
> > > > >> > Also, a VM snapshot that includes disks provided by managed
> > storage
> > > > >> should
> > > > >> > work.
> > > > >> >
> > > > >> > Talk to you later,
> > > > >> > Mike
> > > > >> >
> > > > >> > On Mon, Feb 8, 2016 at 9:22 AM, Pierre-Luc Dion <
> > pd...@cloudops.com
> > > >
> > > > >> > wrote:
> > > > >> >
> > > > >> > > Mike,
> > > > >> > >
> > > > >> > > In terms of API's, would you prefer introducing a parameter to
> > the
> > > > >> > existing
> > > > >> > > VolumeSnapshot, example:   extract={true|false}  with a
> default
> > > > value
> > > > >> of
> > > > >> > > true  which would extract snapshot into the secondary storage,
> > > which
> > > > >> is
> > > > >> > the
> > > > >> > > cu

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Will Stevens
A global setting would probably be fine.  How would the case be handled if
the global setting is changed?  Would it only affect the snapshots created
after the change was made?  We would also need to code defensively so that,
if the global setting changes, we don't assume all snapshots created in the
past used the same global setting value.

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Mon, Feb 8, 2016 at 2:16 PM, Mike Tutkowski  wrote:

> Correct, Will.
>
> > That Global Settings entry would only be for managed storage. Non-managed
> (traditional) volume snapshots are completely un-impacted by this feature.
>
> If we need to sometimes keep the snapshots on the SAN and sometimes push
> them to secondary storage, we'll need a more robust solution than Global
> Settings, though.

Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Mike Tutkowski
Right, we'd want to keep track of what kind of a volume snapshot was taken
and not assume all in the past were using the same Global Settings value.
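The defensive design being discussed can be sketched as follows (a minimal illustration with hypothetical names, not CloudStack's actual snapshot code): the destination is recorded on each snapshot at creation time, and later delete/restore logic dispatches on that recorded value rather than on the current value of the global setting.

```python
from enum import Enum

class SnapshotLocation(Enum):
    PRIMARY = "primary"      # kept on the SAN (managed storage)
    SECONDARY = "secondary"  # archived to secondary storage

def create_snapshot(volume_id, archive, snapshot_table):
    # Record the destination on the snapshot itself at creation time,
    # so a later change to the global setting cannot rewrite history.
    location = SnapshotLocation.SECONDARY if archive else SnapshotLocation.PRIMARY
    snapshot = {"volume_id": volume_id, "location": location}
    snapshot_table.append(snapshot)
    return snapshot

def delete_snapshot(snapshot):
    # Dispatch on the recorded location, never on the setting's current value.
    if snapshot["location"] is SnapshotLocation.PRIMARY:
        return "deleted SAN snapshot"
    return "deleted secondary-storage copy"
```

With this shape, flipping the global setting between the two create calls only changes the `archive` argument passed in; existing rows keep the location they were created with.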


Re: [Propose][New Feature] Storage Snapshots

2016-02-08 Thread Mike Tutkowski
Here's what we have for snapshots for managed storage as of 4.6, Paul:

1. VM snapshots (no proposed changes to this).

2. Volume snapshots that do not end up on secondary storage, but rather are
stored on a SAN (effectively storing snapshots on primary storage).

Pierre-Luc is saying he'd like this for snapshots for managed storage:

A. VM snapshots (no proposed changes to this).

B. Volume snapshots that export to secondary storage.

C. New: Storage snapshots that behave like 2 (above).

I like Pierre-Luc's ideas there, but the problem is backward compatibility.

Customers who were using managed storage with volume snapshots in 4.6 had
their snapshots kept on the SAN; in 4.9 their new snapshots would suddenly
be put on secondary storage (unless they explicitly changed over to the new
Storage Snapshots feature).
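The backward-compatibility concern comes down to the default value of the proposed flag. A minimal sketch (hypothetical function and return values, not the actual CloudStack API):

```python
def snapshot_destination(managed_storage, archive=False):
    # Non-managed (traditional) storage is unchanged: volume snapshots
    # always end up on secondary storage.
    if not managed_storage:
        return "secondary"
    # For managed storage, archive defaults to False so the 4.6/4.7/4.8
    # behavior (snapshots stay on the SAN) is preserved unless the caller
    # explicitly opts in to archiving to secondary storage.
    return "secondary" if archive else "san"
```

Because the default is `archive=False`, existing 4.6-era callers that never pass the flag keep getting SAN-resident snapshots.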



On Mon, Feb 8, 2016 at 12:32 PM, Paul Angus 
wrote:

> Just to make sure I'm on the same page, are we talking about
> https://issues.apache.org/jira/browse/CLOUDSTACK-9278 ?
>
> The FS reads (to me) more like 1a + the possibility to export to secondary
> storage if required?
> Have I understood correctly?
> I have seen [1a] implemented for VMware by NetApp in their beta CloudStack
> plugin (pleased I can say that without Mike beating me up now). No changes
> to the CloudStack API were required. (nb it didn't export to secondary
> storage).
>
>
>
> 1. VM Snapshot (point-in-time hypervisor-based snapshots)
> 1a. SAN-assisted VM snapshots (the point-in-time hypervisor snapshot takes
> place transparently on the SAN to avoid performance issues with disk chains)
> 2. SAN Snapshot (Storage Snapshot) - NEW
> 3. Volume Snapshot (the current old/slow transfer to secondary storage)
> 4. Backup - JUST AN IDEAL.
>
>
>
>
> Paul Angus
> VP Technology, ShapeBlue
> d: +44 203 617 0528 | s: +44 203 603 0540 | m: +44 7711 418784
> e: paul.an...@shapeblue.com | t: @cloudyangus | w: www.shapeblue.com
> a: 53 Chandos Place, Covent Garden London WC2N 4HS UK

[GitHub] cloudstack pull request: CLOUDSTACK-9252: Support configurable NFS...

2016-02-08 Thread DaanHoogland
Github user DaanHoogland commented on the pull request:

https://github.com/apache/cloudstack/pull/1361#issuecomment-181569850
  
@nvazquez I think the job doesn't clean the prior classes well.  I cleared 
the workspace. Can you push again?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request: CLOUDSTACK-9252: Support configurable NFS...

2016-02-08 Thread nvazquez
Github user nvazquez commented on the pull request:

https://github.com/apache/cloudstack/pull/1361#issuecomment-181571630
  
Thanks @DaanHoogland, I pushed again




[GitHub] cloudstack pull request: CLOUDSTACK-9252: Support configurable NFS...

2016-02-08 Thread nvazquez
Github user nvazquez commented on the pull request:

https://github.com/apache/cloudstack/pull/1361#issuecomment-181644342
  
This time it failed due to a timeout.




[GitHub] cloudstack pull request: CLOUDSTACK-9252: Support configurable NFS...

2016-02-08 Thread rafaelweingartner
Github user rafaelweingartner commented on the pull request:

https://github.com/apache/cloudstack/pull/1361#issuecomment-181646167
  
Since it timed out, what about squashing those last 3 commits into one and
crossing our fingers that Jenkins succeeds?




[GitHub] cloudstack pull request: CLOUDSTACK-9252: Support configurable NFS...

2016-02-08 Thread nvazquez
Github user nvazquez commented on the pull request:

https://github.com/apache/cloudstack/pull/1361#issuecomment-181646997
  
Sure, I'll do it




[GitHub] cloudstack pull request: CLOUDSTACK-9252: Support configurable NFS...

2016-02-08 Thread kishankavala
Github user kishankavala commented on the pull request:

https://github.com/apache/cloudstack/pull/1361#issuecomment-181715514
  
@nvazquez
Apologies for reviewing it late.
1. Since the version is fetched from image_store_details, can we send a map
with all the details for the image_store instead of just the version? This
would make the approach more generic: if more info is required for other
vendors in the future, extensive changes can be avoided.
2. The nfsVersion param and its corresponding getter/setter methods can be
moved to a base class (something like BaseImageStoreCommand).
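The first suggestion could be sketched as follows (a Python illustration of the idea with hypothetical names; the real change would live in CloudStack's Java command classes):

```python
class BaseImageStoreCommand:
    # Carries a generic details map for the image store instead of a single
    # hard-coded field, so new per-vendor keys need no command-class changes.
    def __init__(self, store_details=None):
        self._details = dict(store_details or {})

    def get_store_detail(self, key, default=None):
        return self._details.get(key, default)

class DownloadCommand(BaseImageStoreCommand):
    # Callers that only need the NFS version read it from the map.
    @property
    def nfs_version(self):
        return self.get_store_detail("nfs.version")
```

A vendor needing an extra detail later would only add a new key to the map, leaving the command classes untouched.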

