Hi Yongli,

Please also see my response to Yunhong. Here, I just want to add a comment 
about your local versus global argument. I took a brief look at your patches, 
and the PCI-flavor is added into the whitelist. The compute node needs to know 
these pci-flavors in order to report PCI stats based on them. Please correct me 
if I'm wrong.

Another comment is that a compute node doesn't need to consult the 
controller, but its report or registration of resources may be rejected by the 
controller due to non-existent PCI groups.

thanks,
Robert

On 1/10/14 2:11 AM, "yongli he" <yongli...@intel.com> wrote:

On 2014-01-10 00:49, Robert Li (baoli) wrote:
Hi Folks,
Hi, all

Basically, I favor the pci-flavor style and am against messing with the 
white-list. Please see my inline comments.



With John joining the IRC, we have had a couple of productive meetings so far in 
an effort to come to consensus and move forward. Thanks John for doing that, and 
I appreciate everyone's effort to make it to the daily meeting. Let's reconvene 
on Monday.

But before that, and based on our today's conversation on IRC, I'd like to say 
a few things. First of all, I think we need to agree on the terminology that 
we have been using. With the current nova PCI passthrough:

        PCI whitelist: defines all the available PCI passthrough devices on a 
compute node. pci_passthrough_whitelist=[{ 
"vendor_id":"xxxx","product_id":"xxxx"}]
        PCI Alias: criteria defined on the controller node with which requested 
PCI passthrough devices can be selected from all the PCI passthrough devices 
available in a cloud.
                Currently it has the following format: 
pci_alias={"vendor_id":"xxxx", "product_id":"xxxx", "name":"str"}

        nova flavor extra_specs: request for PCI passthrough devices can be 
specified with extra_specs in the format for 
example:"pci_passthrough:alias"="name:count"
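Taken together, the current model looks roughly like this (a sketch based on the 
formats quoted above; the vendor/product IDs and names here are made up for 
illustration):

```ini
# nova.conf on each compute node: which local devices may be passed through
pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"10ed"}]

# nova.conf on the controller: a named selector over those devices
pci_alias={"vendor_id":"8086","product_id":"10ed","name":"a1"}
```

A request then goes through flavor extra_specs, e.g. 
nova flavor-key m1.small set "pci_passthrough:alias"="a1:2" 
to ask for two devices matching alias "a1".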

As you can see, currently a PCI alias has a name and is defined on the 
controller. The implication is that when matching it against the PCI devices, 
the vendor_id and product_id have to be checked against all the available PCI 
devices until one is found. The name is only used for reference in the 
extra_specs. On the other hand, the whitelist is basically the same as the 
alias without a name.

What we have discussed so far is based on something called PCI groups (or PCI 
flavors as Yongli puts it). Without introducing other complexities, and with a 
little change of the above representation, we will have something like:

pci_passthrough_whitelist=[{ "vendor_id":"xxxx","product_id":"xxxx", 
"name":"str"}]

By doing so, we eliminated the PCI alias. And we call the "name" in above as a 
PCI group name. You can think of it as combining the definitions of the 
existing whitelist and PCI alias. And believe it or not, a PCI group is 
actually a PCI alias. However, with that change of thinking, a lot of
The white-list configuration is mostly local to a host, so only addresses 
belong there, as in John's proposal. Mixing the group into the whitelist makes 
a global concept per-host, which may be wrong.

benefits can be harvested:

         * the implementation is significantly simplified
But it is messier; refer to my new patches already sent out.
         * provisioning is simplified by eliminating the PCI alias
PCI alias provides a good way to define a globally referenceable name for PCI 
devices; we need this. The same is true of John's pci-flavor.
         * a compute node only needs to report stats with something like: PCI 
group name:count. A compute node processes all the PCI passthrough devices 
against the whitelist, and assign a PCI group based on the whitelist definition.
Simplifying this seems good, but it does not actually simplify; separating the 
local and the global is the natural simplification.
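As a sketch of that reporting scheme (the function and field names here are 
hypothetical, modeled on the whitelist format quoted earlier):

```python
from collections import Counter

def group_pci_stats(devices, whitelist):
    """Assign each discovered PCI device to a group using whitelist
    entries that carry a group "name", then report per-group counts
    the way a compute node would (e.g. {"10g-vf": 2})."""
    stats = Counter()
    for dev in devices:
        for entry in whitelist:
            if (dev["vendor_id"] == entry["vendor_id"]
                    and dev["product_id"] == entry["product_id"]):
                stats[entry["name"]] += 1
                break  # first matching whitelist entry wins
    return dict(stats)

whitelist = [{"vendor_id": "8086", "product_id": "10ed", "name": "10g-vf"}]
devices = [
    {"vendor_id": "8086", "product_id": "10ed"},
    {"vendor_id": "8086", "product_id": "10ed"},
    {"vendor_id": "1234", "product_id": "abcd"},  # not whitelisted, ignored
]
print(group_pci_stats(devices, whitelist))  # {'10g-vf': 2}
```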
         * on the controller, we may only need to define the PCI group names. 
if we use a nova api to define PCI groups (could be private or public, for 
example), one potential benefit, among other things (validation, etc),  they 
can be owned by the tenant that creates them. And thus a wholesale of PCI 
passthrough devices is also possible.
This means you would have to consult the controller to deploy your host; if we 
keep the white-list local, we simplify deployment.
         * scheduler only works with PCI group names.
         * request for PCI passthrough device is based on PCI-group
         * deployers can provision the cloud based on the PCI groups
         * Particularly for SRIOV, deployers can design SRIOV PCI groups based 
on network connectivities.

Further, to support SRIOV, we are saying that PCI group names can be used not 
only in the extra specs, but also in the --nic option and the neutron commands. 
This allows the most flexibility and functionality afforded by SRIOV.
I still feel that using alias/pci-flavor is the better solution.

Further, we are saying that we can define default PCI groups based on the PCI 
device's class.
Default grouping makes our conceptual model messier; pre-defining a global 
thing in the API and in hard code is not a good approach, so I posted a -2 for 
this.

For vnic-type (or nic-type), we are saying that it defines the link 
characteristics of the nic that is attached to a VM: a nic that's connected to 
a virtual switch, a nic that is connected to a physical switch, or a nic that 
is connected to a physical switch, but has a host macvtap device in between. 
The actual names of the choices are not important here, and can be debated.

I'm hoping that we can go over the above on Monday. But any comments are 
welcome by email.

Thanks,
Robert


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev