On 11/13/2016 8:28 AM, Matt Riedemann wrote:
On 11/12/2016 8:31 PM, Matt Fischer wrote:
It's pretty hard for me to parse the above or help more without a live
pdb shell looking at this, but I wonder if this is a Liberty vs Mitaka
difference? We're still on nova Liberty. The nova team may know more,
and/or I can figure out more once we upgrade since we may hit this same
issue. One difference is that I'm not using the metadefs stuff, but I
don't know if that is relevant or not.

On Fri, Nov 11, 2016 at 3:24 AM, Keller, Mario
<mario.kel...@cornelsen.de> wrote:

    Hello Matt,

    I found your blog post on this and tried your code, but the problem
    is that I get an error:

    “Returning exception 'RequestSpec' object has no attribute 'get' to
    caller”

    It seems that the call “image_props = spec_obj.get('request_spec',
    {})” provides an empty object.
    If I print str(spec_obj.image.__dict__), I get:

    {'_obj_checksum': u'793c47d1b98f9df93bbc09de4d155c1b', '_context':
    <nova.context.RequestContext object at 0x636eed0>,
    '_obj_container_format': u'bare', '_obj_name': u'_MY_WINDOWS1',
    '_obj_min_disk': 1, '_obj_disk_format': u'iso', '_obj_owner':
    u'cec13ed6b7bc42879cea9628dbad01dc', '_obj_status': u'active',
    'VERSION': u'1.8', '_obj_properties':

ImageMetaProps(hw_architecture=<?>,hw_auto_disk_config=<?>,hw_boot_menu=<?>,hw_cdrom_bus=<?>,hw_cpu_cores=<?>,hw_cpu_max_cores=<?>,hw_cpu_max_sockets=<?>,hw_cpu_max_threads=<?>,hw_cpu_policy=<?>,hw_cpu_realtime_mask=<?>,hw_cpu_sockets=<?>,hw_cpu_thread_policy=<?>,hw_cpu_threads=<?>,hw_device_id=<?>,hw_disk_bus='scsi',hw_disk_type='preallocated',hw_firmware_type=<?>,hw_floppy_bus=<?>,hw_ipxe_boot=<?>,hw_machine_type=<?>,hw_mem_page_size=<?>,hw_numa_cpus=<?>,hw_numa_mem=<?>,hw_numa_nodes=<?>,hw_qemu_guest_agent=<?>,hw_rng_model=<?>,hw_scsi_model='lsisas1068',hw_serial_port_count=<?>,hw_video_model=<?>,hw_video_ram=<?>,hw_vif_model='vmxnet3',hw_vif_multiqueue_enabled=<?>,hw_vm_mode=<?>,hw_watchdog_action=<?>,img_bdm_v2=<?>,img_bittorrent=<?>,img_block_device_mapping=<?>,img_cache_in_nova=<?>,img_compression_level=<?>,img_config_drive=<?>,img_hv_requested_version=<?>,img_hv_type='vmware',img_linked_clone=<?>,img_mappings=<?>,img_owner_id=<?>,img_root_device_name=<?>,img_signature=<?>,img_signature_certificate_uuid=<?>,img_signature_hash_method=<?>,img_signature_key_type=<?>,img_use_agent=<?>,img_version=<?>,os_admin_user=<?>,os_command_line=<?>,os_distro='windows9Server64Guest',os_require_quiesce=<?>,os_skip_agent_inject_files_at_boot=<?>,os_skip_agent_inject_ssh=<?>,os_type=<?>),

    '_obj_size': 281018368, '_obj_id':
    'e62da4df-318f-48dc-be26-b634e82ec4a1', '_changed_fields':
    set([u'status', u'name', u'container_format', u'created_at',
    u'disk_format', u'updated_at', u'properties', u'owner', u'min_ram',
    u'checksum', u'min_disk', u'id', u'size']), '_obj_min_ram': 1024,
    '_obj_created_at': datetime.datetime(2016, 8, 29, 8, 27, 47,
    tzinfo=<iso8601.Utc>), '_obj_updated_at': datetime.datetime(2016,
    11, 9, 13, 10, 48, tzinfo=<iso8601.Utc>)}


    Dumping the spec_obj itself, I get:

    {'_obj_instance_uuid': '22313c7f-0338-4bed-9131-900b458347d9',
    '_obj_flavor':

Flavor(created_at=2016-11-01T14:26:31Z,deleted=False,deleted_at=None,disabled=False,ephemeral_gb=0,extra_specs={},flavorid='7d5dbdd9-62f9-4824-9e5e-803c69eef223',id=23,is_public=True,memory_mb=1024,name='1vCPU_1GB-RAM_30GB-HDD',projects=<?>,root_gb=30,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1),

    '_obj_scheduler_hints': {}, '_context': <nova.context.RequestContext
    object at 0x6ae7810>, '_obj_project_id':
    u'027d9ea220bd41e88f9c55227788a863', '_obj_num_instances': 1,
    '_obj_limits':

SchedulerLimits(disk_gb=None,memory_mb=None,numa_topology=None,vcpu=None),

    '_obj_instance_group': None, '_obj_ignore_hosts': None,
    '_obj_image':

ImageMeta(checksum='793c47d1b98f9df93bbc09de4d155c1b',container_format='bare',created_at=2016-08-29T08:27:47Z,direct_url=<?>,disk_format='iso',id=e62da4df-318f-48dc-be26-b634e82ec4a1,min_disk=1,min_ram=1024,name='_MY_WINDOWS1',owner='cec13ed6b7bc42879cea9628dbad01dc',properties=ImageMetaProps,protected=<?>,size=281018368,status='active',tags=<?>,updated_at=2016-11-09T13:10:48Z,virtual_size=<?>,visibility=<?>),

    '_obj_force_hosts': None, 'VERSION': u'1.5', '_obj_force_nodes':
    None, '_obj_pci_requests':

InstancePCIRequests(instance_uuid=22313c7f-0338-4bed-9131-900b458347d9,requests=[]),

    '_obj_retry':
    SchedulerRetries(hosts=ComputeNodeList,num_attempts=1),
    '_changed_fields': set([u'instance_uuid', u'retry',
    u'num_instances', u'pci_requests', u'limits', u'availability_zone',
    u'force_nodes', u'image', u'instance_group', u'force_hosts',
    u'numa_topology', u'ignore_hosts', u'flavor', u'project_id',
    u'scheduler_hints']), '_obj_numa_topology': None,
    '_obj_availability_zone': u'CV_Inhouse_RZ2', 'config_options': {}}

    So there seems to be no request_spec present. There's an attribute
    "image" within spec_obj that has a properties attribute of type
    ImageMetaProps, which contains all the VMware-related properties
    that are defined the same way as ours, but not our self-defined
    property.

    Mario.


    From: tadow...@gmail.com [mailto:tadow...@gmail.com] On behalf of
    Matt Fischer
    Sent: Thursday, 10 November 2016 15:27
    To: Keller, Mario
    Cc: openstack-operators@lists.openstack.org
    Subject: Re: [Openstack-operators] Properties missing in Nova
    Scheduler Filter

    Mario,

    If I remember right I had a similar issue with getting image_props
    when I was doing this to pull in custom properties. Through some
    trial and error and poking around with pdb I ended up with this:

            image_props = spec_obj.get('request_spec', {}).\
                get('image', {}).get('properties', {})

    Perhaps that will help?  If not I'd recommend putting a pdb break at
    the top of host_passes and digging through the spec_obj.


    On Thu, Nov 10, 2016 at 12:05 AM, Keller, Mario
    <mario.kel...@cornelsen.de> wrote:
    Hello,

    we are trying to build our own nova scheduler filter to schedule
    machines to different compute nodes / host aggregates.
    Our setup is based on OpenStack Mitaka and we are using VMware as
    the hypervisor on three different compute nodes.

    We have created a /etc/glance/metadefs/CV_AggSelect.json file to
    define the new property "os_selectagg"

    {
        "namespace": "OS::Compute::cv-host-agg",
        "display_name": "CV-CUSTOM: Select Host Aggregate",
        "description": "Cornelsen CUSTOM: Select Host Aggregate",
        "visibility": "public",
        "protected": true,
        "resource_type_associations": [
            {
                "name": "OS::Glance::Image"
            },
            {
                "name": "OS::Nova::Aggregate"
            }
        ],
        "properties": {
            "os_selectagg": {
                "title": "selectagg",
                "description": "Cornelsen CUSTOM: Select Host Aggregate",
                "type": "string",
                "enum": [
                    "windows",
                    "linux",
                    "desktop",
                    "test1",
                    "test2"
                ],
                "default": "test2"
            }
        },
        "objects": []
    }


    Getting the details from our image and the host aggregate, we see
    that the property is set correctly:

    openstack image show e62da4df-318f-48dc-be26-b634e82ec4a1

    ...
    | properties | description='', hw_vif_model='VirtualVmxnet3', hypervisor_type='vmware', os_selectagg='windows', vmware_adaptertype='lsiLogicsas', vmware_disktype='preallocated', vmware_ostype='windows9Server64Guest' |
    ...

    We also see the property in the aggregate:

    openstack aggregate show 5

    ...
    | properties | hypervisor_type='vmware', os_selectagg='windows' |
    ...

    I have created a new simple filter in
    /usr/lib/python2.7/site-packages/nova/scheduler/filters just to see
    what properties are set for the current image and the host_state.
    The filter is also set in /etc/nova/nova.conf and is executed,
    because I'm seeing the log entries created by the filter.

    The filter only implements the "def host_passes(self, host_state,
    spec_obj)" function.

    I'm getting the image properties via "image_props =
    spec_obj.image.properties if spec_obj.image else {}", but the
    property "os_selectagg" is missing. All other properties, like
    hw_vif_model='VirtualVmxnet3', are set.

    The property is set in the host_state.aggregates list, but not in
    spec_obj.image.properties. What are we missing?
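For reference, the core comparison such a filter should make can be sketched as a plain function. host_passes_agg and both of its arguments are hypothetical names; a real filter would subclass nova's BaseHostFilter and pull these values out of host_state and spec_obj:

```python
def host_passes_agg(image_props, aggregate_metadata):
    """Sketch of the filter's core test: does the host's aggregate
    metadata advertise the os_selectagg value the image asks for?

    image_props        -- dict of image properties
                          (e.g. from spec_obj.image.properties)
    aggregate_metadata -- dict mapping metadata keys to sets of values,
                          as collected from host_state.aggregates
    """
    requested = image_props.get('os_selectagg')
    if requested is None:
        # The image expresses no preference, so any host passes.
        return True
    return requested in aggregate_metadata.get('os_selectagg', set())

# With the values from this thread:
print(host_passes_agg({'os_selectagg': 'windows'},
                      {'os_selectagg': {'windows'}}))  # True
print(host_passes_agg({'os_selectagg': 'linux'},
                      {'os_selectagg': {'windows'}}))  # False
```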

    With best regards,
    Mario Keller.



    With kind regards
    Mario Keller
    IT-Operations Engineer

    --
    Cornelsen Verlag GmbH, Mecklenburgische Straße 53, 14197 Berlin
    Tel: +49 30 897 85-8364, Fax: +49 30 897 85-97-8364
    E-Mail: mario.kel...@cornelsen.de | cornelsen.de

    AG Charlottenburg, HRB 114796 B
    Management: Dr. Anja Hagen, Joachim Herbst, Mark van Mierle
    (Chair),
    Patrick Neiss, Michael von Smolinski, Frank Thalhofer


    _______________________________________________
    OpenStack-operators mailing list
    OpenStack-operators@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators






The RequestSpec object doesn't have the DictCompat mixin, so you can't
use the get() method like a dict.

https://github.com/openstack/nova/blob/stable/mitaka/nova/objects/request_spec.py#L29
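The failure mode can be reproduced with a plain class; RequestSpecLike here is a made-up stand-in, not a nova class:

```python
class RequestSpecLike(object):
    """Hypothetical stand-in for a versioned object WITHOUT DictCompat."""
    def __init__(self, image=None):
        self.image = image

spec = RequestSpecLike()
# A dict has get(); this object does not, hence the AttributeError
# "'RequestSpec' object has no attribute 'get'" in the log above.
print(hasattr({}, 'get'))    # True
print(hasattr(spec, 'get'))  # False
```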


So you need to access the image field like an attribute, i.e.
request_spec.image. However, image is nullable and it might not be in
the RequestSpec object:

https://github.com/openstack/nova/blob/stable/mitaka/nova/objects/request_spec.py#L40


So you need to check if it's set like this:

if request_spec.obj_attr_is_set('image'):
    saucy = request_spec.image.properties.get('img_secret_sauce')

If you have oslo.versionedobjects>=0.11.0, you can make it prettier like
this:

if 'image' in request_spec:
   ...

As that release contains this change:
https://review.openstack.org/#/c/230636/
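Under the hood that change teaches versioned objects __contains__, so `in` tests whether a field has been set. A toy mimic of the behavior (FakeSpec is made up, not the real class):

```python
class FakeSpec(object):
    """Toy mimic of oslo.versionedobjects >= 0.11.0 'in' support."""
    def __init__(self, **set_fields):
        self._set = dict(set_fields)

    def __contains__(self, field):
        # Enables: if 'image' in spec
        return field in self._set

    def __getattr__(self, field):
        # Attribute access for fields that were actually set.
        try:
            return self._set[field]
        except KeyError:
            raise AttributeError(field)

spec = FakeSpec(image={'properties': {'img_secret_sauce': 'yes'}})
if 'image' in spec:
    print(spec.image['properties'].get('img_secret_sauce'))  # yes
print('retry' in spec)  # False: that field was never set
```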




Also note how the ImageMetaProps.from_dict method works:

https://github.com/openstack/nova/blob/stable/mitaka/nova/objects/image_meta.py#L509

That only stores image properties that it knows about, via the legacy
property map or the fields defined on that object. If you're running
with custom image meta properties, those won't be registered in that
object and will be filtered out. So you'd have to register those in the
object and bump the RPC version on that object and carry that as a
fork, which may or may not cause compat issues for you.
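The filtering behavior can be mimicked in a few lines; KNOWN_PROPS is an illustrative subset, not the real field list from image_meta.py:

```python
# Illustrative subset of the fields ImageMetaProps knows about;
# the real list lives in nova/objects/image_meta.py.
KNOWN_PROPS = {'hw_vif_model', 'hw_disk_bus', 'os_distro'}

def from_dict_mimic(image_properties):
    """Mimic of ImageMetaProps.from_dict: unknown keys are silently dropped."""
    return {k: v for k, v in image_properties.items() if k in KNOWN_PROPS}

props = from_dict_mimic({
    'hw_vif_model': 'VirtualVmxnet3',
    'os_selectagg': 'windows',  # custom property: not in the field list
})
print(sorted(props))  # ['hw_vif_model'] -- the custom key is gone
```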

--

Thanks,

Matt Riedemann
