Hi Rob,

Thank you very much for such valuable feedback, I really appreciate it. A few comments follow inline below.

On 2013/09/25 03:50, Robert Collins wrote:
A few quick notes.

Flavors need their extra attributes to be editable (e.g. architecture,
raid config etc) - particularly in the undercloud, but it's also
relevant for overcloud : If we're creating flavors for deployers, we
need to expose the full capabilities of flavors.

Secondly, if we're creating flavors for deployers, the UI should
reflect that: is it directly editing flavors, or editing the inputs to
the algorithm that creates flavors.
Regarding extra attributes:
Until now, for the POC, we were dealing only with flavor definition values and no extra attributes. Can you please be a bit more descriptive about the extra attributes, or at least point me to some documentation for them (for undercloud as well as overcloud flavors)?

Flavors in the Tuskar UI are for deployers; they are the definition consumed by the algorithm that registers flavors in Nova after a machine is provisioned.
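
Just to check that we mean the same thing, here is a rough sketch (using python-novaclient; the values and the cpu_arch key are purely illustrative, not the actual Tuskar algorithm) of what I understand as a flavor definition plus extra attributes:

    # Illustrative sketch only -- not the real Tuskar registration code.
    from novaclient.v1_1 import client as nova_client

    nova = nova_client.Client(username="admin",
                              api_key="password",
                              project_id="admin",
                              auth_url="http://undercloud:5000/v2.0")

    # The flavor definition values we already handle in the POC.
    flavor = nova.flavors.create(name="baremetal.m1",
                                 ram=4096,   # MB
                                 vcpus=2,
                                 disk=40)    # GB

    # Extra attributes would presumably live in the flavor's extra_specs,
    # e.g. the architecture, so the scheduler can match nodes to the flavor.
    flavor.set_keys({"cpu_arch": "x86_64"})

If by extra attributes you mean something other than the extra_specs, please correct me.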

We seemed to have consensus @ the sprint in Seattle that Racks aren't
really Racks : that is that Rack as currently defined is more of a
logical construct (and some clouds may have just one), vs the actual
physical 'This is a Rack in a DC' aspect. If there is a physical thing
that the current logical Rack maps to, perhaps we should use that as
the top level UI construct?
Well, in the ideal case the logical grouping of nodes represents a physical thing; that way we can operate the hardware in the most efficient way. Of course it may turn out that this is not the reality, and we will need to support that in the UI as well. But I don't think I follow the idea of the rack being the top-level UI construct. I think this depends on the point of view from which you look at the deployment. There are two ways a deployer may want to see his deployment:

1) Hardware focus. The deployer is interested in whether his hardware is running fine and everything is working correctly. In this case you are right: the top level should be the rack, and it is the rack at the moment.

2) Service focus. The deployer is interested in what service he is providing, how much capacity he has available and how much is left, in capacity planning, etc. For this purpose we have resource classes, which define what service you (as a deployer) provide to your customers/users.

The related thing is we need to expose the other things that also tend
to map failure domains - shared switches, power bars, A/C - but that
is future work I believe.
In general I don't think it is a good idea to replicate DC-monitoring applications which already exist; we would only be putting effort into duplicating them. I mean, if we can get general information about switches etc., that would be great, but I would recommend keeping a distinction between deployment management and DC monitoring.

The 'add rack' thing taking a list of MACs seems odd : a MAC address
isn't enough to deploy even a hardware inventory image to (you need
power management details + CPU arch + one-or-more MACs to configure
TFTP and PXE enough to do a detailed automatic inventory). Long term
I'd like to integrate with the management network switch, so we can
drive the whole thing automatically, but in the short term, I think we
want to drive folk to use the API for mass enrollment. What do you
think?
So for the short term we were counting on some sort of auto-discovery: with minimal input from the user we PXE-boot a minimal image, do introspection of the machine and fill in all the details for the user. But you are right that a MAC address alone isn't enough. What I think will be needed are the power management credentials (currently we support IPMI, so the MAC address or IP, the IPMI username and the IPMI password). I believe all other information can be introspected (in the short term). What do you think?
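
To make it concrete, something like this is the per-node input I have in mind, with everything else introspected (a sketch only; the field names and the endpoint are made up for illustration, not a final API):

    # Sketch of the minimal per-node data the UI would ask for before
    # auto-discovery/introspection fills in the rest. Field names and the
    # endpoint URL are illustrative only.
    import json
    import requests

    node = {
        "mac_addresses": ["52:54:00:12:34:56"],   # one or more NIC MACs
        "power": {
            "type": "ipmi",
            "address": "192.168.1.50",            # IPMI IP (or MAC)
            "user": "admin",
            "password": "secret",
        },
        # cpu_arch, memory, disks, etc. would be filled in by introspection
    }

    requests.post("http://tuskar-api:8585/v1/nodes",
                  data=json.dumps(node),
                  headers={"Content-Type": "application/json"})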

Regardless, the node list will need to deal with nodes having N MAC
addresses and management credentials, not just a management IP.
Lastly, what's the node name for? Instances [may] have names, but I don't
see any reason for nodes to have a name.
I believe the node name will be the MAC address by default (in the majority of cases). There was an idea about having the possibility to rename a node for the deployer's needs, for easier recognition. Let's imagine we have a rack with mixed hardware, each node running a different service; if we rename those few nodes with, for example, the names of the services they run (or whatever purpose they serve), then at first glance I, as a deployer, have a better overview of what my rack contains and where things are located. Do you see the use case there?

Similarly, it's a little weird that racks would have names.
Similar situation as with the nodes above.

CSV uploads stand out to me as an odd thing: JSON is the standard
serialisation format we use, does using CSV really make sense? Tied
into that is the question above - does it make sense to put bulk
enrollment in the web UI at all, or would it be better to just have
prerolled API scripts for that? Having 'upload racks' etc as a UI
element is right in the user's face, but will be used quite rarely.
I agree. As for the CSV file, I am not convinced about it either; JSON might work here. I think it depends on which format the end users are used to.

As for the UI usage, you are completely right that it's not the hottest feature and we shouldn't focus on it as a top priority. However, talking about medium deployments, consider an operator who is going to deploy 3 racks, each consisting of 16 nodes. That's 48 machines whose details need to be entered at once. With a minimum of 3 text fields per node (IPMI MAC + user + password), that is 144 fields to fill in total. I believe that bulk enrollment for nodes (not necessarily racks) will be needed.
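
Of course the UI upload could stay a thin convenience over the API; a trivial helper can turn whatever the operator already has (e.g. a spreadsheet export) into JSON for bulk enrollment. A sketch, with illustrative field names:

    # Sketch: convert a spreadsheet export (mac,ipmi_address,user,password
    # per row) into a JSON list suitable for a bulk-enrollment API call.
    import csv
    import json
    import sys

    nodes = []
    with open(sys.argv[1]) as f:
        for mac, ipmi_address, user, password in csv.reader(f):
            nodes.append({
                "mac_addresses": [mac],
                "power": {"type": "ipmi",
                          "address": ipmi_address,
                          "user": user,
                          "password": password},
            })

    print(json.dumps({"nodes": nodes}, indent=2))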

I don't follow why racks would be bound to classes: class seems to be
an aspect of a specific machine + network capacity configuration, but
Rack is a logical not physical construct, so it's immediately
decoupled from that. Perhaps it's a keep-it-simple thing, which I can
get behind - but in that case, reducing the emphasis on Rack /
renaming Rack becomes more important IMO.
As for the classes: in a class you are actually defining the service which you are going to provide. So let's say I am going to provide a compute service; I need to specify flavors and at least some SLA. The SLA would be assured by a certain vCPU performance, network bandwidth, etc. All of this happens at the level of the class. So I define, for example, an m1 class which provides 5 flavors (tiny, small, medium, large, x-large) and assures a certain vCPU performance and a minimum network throughput. To assure this performance I need to be sure the service runs on the correct type of hardware (which I specify in the class as well).

Now I have defined the service, so what I do is add resources which are going to run this type of service (assigning racks/nodes to the resource class). At this point I am able to monitor the whole capacity of the compute service I am providing: I can see the total capacity, current usage, free space, and projections of when I will run out of resources. I can do planning for the future, and if I expand my resources with new hardware, I buy a similar type of hardware and just add it to the class - no additional settings needed - a very smooth way of scaling up. In conclusion, I see a lot of advantages in classes, mostly from the service point of view.
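
To make it a bit more concrete, this is roughly the shape I picture for a resource class (purely illustrative, not the actual Tuskar model):

    # Illustrative only: a rough data shape for a "compute" resource class,
    # tying flavors (the service definition / SLA) to the hardware backing it.
    m1_class = {
        "name": "m1",
        "service_type": "compute",
        "flavors": ["m1.tiny", "m1.small", "m1.medium",
                    "m1.large", "m1.xlarge"],
        "hardware_profile": {           # what the SLA requires from the hardware
            "cpu_arch": "x86_64",
            "min_network_bandwidth_gbps": 10,
        },
        "racks": ["rack-1", "rack-2"],  # resources currently backing the class
    }

    # Capacity planning is then aggregation over the assigned racks (total vs.
    # used vCPUs, RAM, ...), and scaling up is just appending a new rack.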
HTH,
-Rob
I hope the answers helped at least somewhat.

Cheers,
-- Jarda