On Jul 9, 2014, at 4:38 PM, Zane Bitter <zbit...@redhat.com> wrote:
> On 08/07/14 17:17, Steven Hardy wrote:
> 
>> Regarding forcing deployers to make a one-time decision, I have a question
>> re cost (money and performance) of the Swift approach vs just hitting the
>> Heat API
>> 
>> - If folks use the Swift resource and it stores data associated with the
>>   signal in Swift, does that incur cost to the user in a public cloud
>>   scenario?
> 
> Good question. I believe the way WaitConditions work in AWS is that 
> CloudFormation sets up a pre-signed URL in a bucket that it owns. If we went 
> with that approach we would probably want some sort of quota, I imagine.

Just to clarify, are you suggesting that the swift-based signal mechanism use 
containers that Heat owns rather than ones owned by the user?

> The other approach is to set up a new container, owned by the user, every 
> time. In that case, a provider selecting this implementation would need to 
> make it clear to customers if they would be billed for a WaitCondition 
> resource. I'd prefer to avoid this scenario though (regardless of the 
> plug-point).

Why? If we won't let the user choose, then why wouldn't we let the provider 
make this choice? I don't think it's wise of us to make decisions based on what 
a theoretical operator may theoretically do. If the same theoretical provider 
were to also charge users to create a trust, would we then be concerned about 
that implementation as well? What if said provider decided to charge the user 
per resource in a stack, regardless of type? Having Heat own the container(s) 
as suggested above doesn't preclude that operator from charging the stack owner 
for those either.

While I agree that these examples are totally silly, I'm just trying to 
illustrate that we shouldn't deny an operator an option so long as it's 
understood what that option entails from a technical/usage perspective.

>> - What sort of overhead are we adding, with the signals going to swift,
>>   then in the current implementation being copied back into the heat DB[1]?
> 
> I wasn't aware we were doing that, and I'm a bit unsure about it myself. I 
> don't think it's a big overhead, though.

In the current implementation, I think the overhead is minor as well: just a 
few extra Swift API calls, which is negligible compared to the stack as a 
whole. Plus, it mitigates the above concern about potentially costly user 
containers, since they are deleted as soon as the wait condition is done.

>> It seems to me at the moment that the swift notification method is good if
>> you have significant data associated with the signals, but there are
>> advantages to the simple API signal approach I've been working on when you
>> just need a simple "one shot" low overhead way to get data back from an
>> instance.
>> 
>> FWIW, the reason I revived these patches was I found that
>> SoftwareDeployments did not meet my needs for a really simple signalling
>> mechanism when writing tempest tests:
>> 
>> https://review.openstack.org/#/c/90143/16/tempest/scenario/orchestration/test_volumes_create_from_backup.yaml
>> 
>> These tests currently use the AWS WaitCondition resources, and I wanted a
>> native alternative, without the complexity of using SoftwareDeployments
>> (which also won't work with minimal cirros images without some pretty hacky
>> workarounds[2])
> 
> Yep, I am all for this. I think that Swift is the best way when we have it, 
> but not every cloud has Swift (and the latest rumours from DefCore are that 
> it's likely to stay that way), so we need operators (& developers!) to be 
> able to plug in an alternative implementation.

Very true, but not every cloud has trusts either. Many may have trusts but 
don't deploy the EC2 extensions to Keystone, and therefore can't use the 
"native" signals either (as I understand them, anyway). The point is that 
either way we already impose requirements on any cloud you want to run Heat 
against, so I think it's in our interest to make the effort to provide choices 
with obvious trade-offs.
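
To make that concrete, here's roughly what the simple "one shot" native 
signalling would look like from a template author's point of view. This is only 
a sketch: the resource and attribute names (OS::Heat::WaitCondition, 
OS::Heat::WaitConditionHandle, curl_cli) are taken from the patches under 
review and may well change before they merge.

heat_template_version: 2013-05-23

resources:
  wait_handle:
    # Native handle: the instance signals the Heat API directly, with no
    # Swift container (or EC2-signed URL) involved.
    type: OS::Heat::WaitConditionHandle

  wait_condition:
    type: OS::Heat::WaitCondition
    properties:
      handle: {get_resource: wait_handle}
      count: 1
      timeout: 600

outputs:
  signal_command:
    description: Command the instance can run to post its signal back to Heat
    # Attribute name assumed; whatever it ends up being, it should yield
    # something like a pre-built curl command or a signed URL.
    value: {get_attr: [wait_handle, curl_cli]}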

>> I'm all for making things simple, avoiding duplication and confusion for
>> users, but I'd like to ensure that making this a one-time deployer level
>> decision definitely makes sense, vs giving users some choice over what
>> method is used.
> 
> Agree, this is an important question to ask. The downside to leaving the 
> choice to the user is that it reduces interoperability between clouds. (In 
> fact, it's unclear whether operators _would_ give users a choice, or just 
> deploy one implementation anyway.) It's not insurmountable (thanks to 
> environments), but it does add friction to the ecosystem so we have to weigh 
> up the trade-offs.

Agreed that this is an important concern, but one of mine is that no other 
resource has "selectable" back-ends. The way an operator controls this today is 
via the global environment, where they have the option to disable one or more 
of these resources or even alias one to the other. Adding a plug-point seems 
like a large change for something an operator already has the ability to deal 
with, and the level of interoperability is at least partly an operator choice 
already and out of our hands.
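
For example, an operator that only wants to expose a single implementation can 
already do something like this in the global environment today (the mapping 
below uses the existing AWS-compatible types and the proposed native ones, so 
treat the names as illustrative):

# e.g. /etc/heat/environment.d/default.yaml -- the global environment;
# the exact path depends on how the deployment is configured.
resource_registry:
  # Alias the AWS-compatible wait condition types to the native
  # implementation, so every template gets the same back-end.
  "AWS::CloudFormation::WaitCondition": "OS::Heat::WaitCondition"
  "AWS::CloudFormation::WaitConditionHandle": "OS::Heat::WaitConditionHandle"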