On 28/04/15 03:56, Steven Hardy wrote:
On Mon, Apr 27, 2015 at 06:41:52PM -0400, Zane Bitter wrote:
On 27/04/15 13:38, Steven Hardy wrote:
On Mon, Apr 27, 2015 at 04:46:20PM +0100, Steven Hardy wrote:
AFAICT there's two options:
1. Update stack.Stack so we store "now" at every transition (e.g. in
state_set)
2. Stop trying to explicitly control updated_at, and just allow the oslo
TimestampMixin to do its job and update updated_at every time the DB model
is updated.
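Option 1 might be sketched roughly as below. This is only an illustration, not Heat's actual implementation: the `state_set` name comes from the discussion above, but the class and everything else in it are assumed for the sake of the example:

```python
from datetime import datetime, timezone


class Stack:
    """Minimal sketch of option 1: record a timestamp on every state
    transition, rather than relying on the DB layer's TimestampMixin.
    Not Heat's real Stack class; names are illustrative only."""

    def __init__(self):
        self.action = None
        self.status = None
        self.updated_at = None

    def state_set(self, action, status):
        # Store "now" at every transition, so updated_at reflects the
        # stack's state changes rather than arbitrary DB model writes.
        self.action = action
        self.status = status
        self.updated_at = datetime.now(timezone.utc)


stack = Stack()
stack.state_set('UPDATE', 'IN_PROGRESS')
```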
Ok, at the risk of answering my own question, there's a third option, which
is to output an event for all stack transitions, not only resource
transitions. This appears to be the way the CFN event API works AFAICS.
My recollection was that in CFN events were always about a particular
resource. That may have been wrong, or they may have changed it. In any
event (uh, no pun intended), I think this option is preferable to options 1
& 2.
Well from the docs I've been looking at, events are also output for stacks:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-listing-event-history.html
Here we see a stack "myteststack", which generates events of ResourceType
AWS::CloudFormation::Stack, with a LogicalResourceId of "myteststack".
Huh, so it does. So the only difference is that the stack events don't
have a ResourceProperties key. Ick.
It's a bit confusing because the PhysicalResourceId doesn't match the
StackId, but I'm interpreting this as an event from the stack rather than a
resource inside the stack. Could be that it's just a bad example though.
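For what it's worth, the distinction being discussed reduces to a simple predicate: per the linked docs example, a stack-level event is one whose ResourceType is AWS::CloudFormation::Stack and which carries no ResourceProperties key. A sketch, using hypothetical event dicts shaped loosely like DescribeStackEvents output:

```python
def is_stack_event(event):
    """Heuristic from the discussion above: stack-level events have
    ResourceType AWS::CloudFormation::Stack and no ResourceProperties.
    The event dicts here are hypothetical, not real API output."""
    return (event.get('ResourceType') == 'AWS::CloudFormation::Stack'
            and 'ResourceProperties' not in event)


events = [
    {'ResourceType': 'AWS::CloudFormation::Stack',
     'LogicalResourceId': 'myteststack'},
    {'ResourceType': 'AWS::EC2::Instance',
     'LogicalResourceId': 'server',
     'ResourceProperties': '{...}'},
]
stack_events = [e for e in events if is_stack_event(e)]
```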
When we first implemented this stuff we only operated on one resource at a
time, there was no way to cancel an update, &c. It was a simpler world ;)
Yeah, true - and (with the benefit of hindsight) events are a really bad
interface for hook polling, which is what I'm currently trying to work
around.
Trying to do this has exposed how limited our event API is though, so IMO
it's worth trying to fix this for the benefit of all API consumers.
I guess the event would have a dummy OS::Heat::Stack type and then you
That's too hacky IMHO, I think we should have a more solid way of
distinguishing resource events from stack events. OS::Heat::Stack is a type
of resource already, after all. Arguably they should come from separate
endpoints, to avoid breaking clients until we get to a v2 API.
I disagree about the separate endpoint (not least because it implies hooks
will be unusable for kilo):
Looking more closely at our native event API:
http://developer.openstack.org/api-ref-orchestration-v1.html#stack-events
The path for events is:
/v1/{tenant_id}/stacks/{stack_name}/events
This, to me (historical resource-ness aside) implies events associated with
a particular stack - IMHO it's fair game to output both events associated
with the stack itself here and the resources contained by the stack.
If we were to use some other endpoint, I don't even know what we would
use, because intuitively the path above is the one which makes sense for
events associated with a stack?
I'm not saying it's the wrong place, but somehow, somewhere, it will
break some client who is not expecting it.
I'm open to using something other than OS::Heat::Stack, but that to me is
the most obvious option, which fits OK with the current resource-orientated
event API response payload - it is the resource which describes a stack
after all (and it potentially aligns with the AWS interface I mention above.)
For consistency with CloudFormation, I agree that's the obvious choice.
I withdraw my objection.
could find the most recent transition to e.g. UPDATE_IN_PROGRESS in the
events and use that as a marker so you only list results after that event?
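The marker idea suggested here might look like the following sketch. The event dicts, field names, and oldest-first ordering are all assumptions for illustration, not Heat's real data model:

```python
def events_since_last_update(events):
    """Find the most recent stack transition to UPDATE_IN_PROGRESS and
    return only the events after it. `events` is a hypothetical
    oldest-first list of event dicts."""
    marker = None
    for i, ev in enumerate(events):
        if (ev.get('resource_type') == 'OS::Heat::Stack'
                and ev.get('resource_status') == 'UPDATE_IN_PROGRESS'):
            marker = i
    if marker is None:
        return events
    return events[marker + 1:]


events = [
    {'resource_type': 'OS::Heat::Stack',
     'resource_status': 'CREATE_COMPLETE'},
    {'resource_type': 'OS::Heat::Stack',
     'resource_status': 'UPDATE_IN_PROGRESS'},
    {'resource_type': 'OS::Nova::Server',
     'resource_status': 'UPDATE_COMPLETE'},
]
recent = events_since_last_update(events)
```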
Even that is not valid in a distributed system. For convergence we're
planning to have a UUID associated with each update. We should reuse that to
connect events with particular update traversals.
There's still going to be some event (or at least a point in time) where an
API request for update-stack is received, and the stack, as a whole, moves
from a stable state (COMPLETE/FAILED) into an in-progress one though, is
there not?
I'm not really sure why distribution of the update workload will affect the
nature of that initial transition, other than that there may be multiple
passes before we reach the final transition back into a stable state (e.g.
potentially multiple updates on resources before we stop updating the stack
as a whole)?
Sorry, that was far too vague; I should have been clearer:
establishing the order of events by timestamp is not a valid strategy
in a distributed system, because time is not monotonic across nodes.
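The point can be shown concretely. Suppose two engines record causally ordered events, but the clock on the engine recording the second event runs a few seconds slow; sorting by timestamp then inverts the real order (the timestamps below are hypothetical):

```python
from datetime import datetime

# Event A causally precedes event B (A triggered B), but the engine
# that recorded B has a clock running a few seconds behind.
event_a = {'name': 'stack UPDATE_IN_PROGRESS',
           'timestamp': datetime(2015, 4, 28, 12, 0, 3)}
event_b = {'name': 'resource UPDATE_COMPLETE',
           'timestamp': datetime(2015, 4, 28, 12, 0, 0)}

# Sorting by timestamp puts B before A, contradicting causality --
# which is why a per-update UUID (a logical marker) is a sounder way
# to group events than wall-clock time.
by_time = sorted([event_a, event_b], key=lambda e: e['timestamp'])
```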
cheers,
Zane.
Anyway, https://review.openstack.org/#/c/177961/2 has been approved now -
I'm happy to follow up if you have specific suggestions on how we can
improve it.
Cheers,
Steve
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev