Hi Jay,

Awesome. I'll just add a quick note inline (and sorry for the slight delay):

On 2014/09/01 18:22, Jay Dobies wrote:
I'm trying to hash out where data will live for Tuskar (both long term
and for its Icehouse deliverables). Based on the expectations for
Icehouse (a combination of the wireframes and what's in Tuskar client's
api.py), we have the following concepts:

[snip]

= Resource Categories =
[snip]

== Count ==
In the Tuskar UI, the user selects how many of each category is desired.
This is stored in Tuskar's domain model for the category and is used when
generating the template to pass to Heat to make it happen.
Based on the latest discussions, instance count is a bit tricky, but it should be specific to a Node Profile if we care which hardware is in play.

Later, we can add the possibility to enter just a number of instances for the whole resource category and let the system decide which node profile to deploy. But I believe that is a future consideration.

These counts are what is displayed to the user in the Tuskar UI for each
category. The staging concept has been removed for Icehouse. In other
words, the wireframes that cover the "waiting to be deployed" aren't
relevant for now.
+1
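Just to illustrate the idea (all names here are hypothetical, not Tuskar's actual model or API): per-node-profile counts could feed Heat template generation along these lines, with one sized resource group per (category, profile) pair:

```python
# Hypothetical sketch only -- names and structure are illustrative,
# not the real Tuskar domain model or the real generated template.

def build_template(category_name, profile_counts):
    """Build a minimal Heat-style template dict with one resource
    group per (category, node profile) pair, sized by its count."""
    resources = {}
    for profile, count in profile_counts.items():
        resources["%s-%s" % (category_name, profile)] = {
            "type": "OS::Heat::ResourceGroup",
            "properties": {"count": count},
        }
    return {"heat_template_version": "2013-05-23",
            "resources": resources}

template = build_template("compute", {"profile-a": 3, "profile-b": 1})
```

The per-profile counts entered in the UI would then map directly onto the group sizes Heat is asked to realize.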


== Image ==
For Icehouse, each category will have one image associated with it. Last
I remember, there was discussion on whether or not we need to support
multiple images for a category, but for Icehouse we'll limit it to 1 and
deal with it later.

Metadata for each Resource Category is owned by the Tuskar API. The
images themselves are managed by Glance, with each Resource Category
keeping track of just the UUID for its image.
I think we were discussing keeping track of the image's name there.
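Either way, the shape would be something like the sketch below (hypothetical names; the Glance lookup is stubbed as a plain dict keyed by UUID rather than a real Glance client call): the category persists only the image reference and resolves the display name on demand.

```python
# Hypothetical sketch only -- not the real Tuskar model. The Glance
# service is stubbed as a dict mapping image UUID -> image name.

class ResourceCategory:
    def __init__(self, name, image_uuid):
        self.name = name
        self.image_uuid = image_uuid  # only the reference is persisted

    def image_name(self, glance_index):
        # glance_index stands in for a lookup against Glance by UUID
        return glance_index.get(self.image_uuid)

glance_index = {"abc-123": "overcloud-compute"}
cat = ResourceCategory("compute", "abc-123")
cat.image_name(glance_index)  # -> "overcloud-compute"
```

Storing only the reference keeps Glance as the single owner of image metadata, at the cost of a lookup whenever the name is displayed.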

Thanks for this great work!
-- Jarda

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev