Hi all!

I have a node with 8 cores (HT enabled) and 32 GB of RAM.

I am trying to limit the VMs that will run on it using scheduler filters.

I have set the following in the nova.conf file:

cpu_allocation_ratio=1.0

ram_allocation_ratio=1.0

reserved_host_memory_mb=1024

scheduler_available_filters=nova.scheduler.filters.all_filters


scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
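With cpu_allocation_ratio=1.0 I expect the CoreFilter to do roughly this arithmetic per host (a simplified sketch of my understanding, not Nova's actual code; the function name is mine):

```python
def core_filter_passes(host_vcpus, vcpus_used, requested_vcpus,
                       cpu_allocation_ratio=1.0):
    """Does the host still have vCPU capacity for the request?"""
    limit = host_vcpus * cpu_allocation_ratio
    return vcpus_used + requested_vcpus <= limit

# node02: 8 physical cores, ratio 1.0 -> 8 schedulable vCPUs
print(core_filter_passes(8, 0, 8))  # first n1.large (8 vCPUs) fits: True
print(core_filter_passes(8, 8, 1))  # a second m1.tiny (1 vCPU) should be rejected: False
```

So by my reading, once the first 8-vCPU instance lands, nothing else should fit on node02.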


I then boot a CirrOS VM with a flavor that has 8 vCPUs:

# nova flavor-list

+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name     | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny  | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 12 | n1.large | 8192      | 80   | 0         |      | 8     | 1.0         | True      |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+


# nova boot --flavor n1.large --image cirros-0.3.3 --security-group default \
    --key-name aaa-key --availability-zone nova:node02 cirrOS-K2

and it builds successfully:


# nova list

+--------------------------------------+-------------+--------+------------+-------------+----------------+
| ID                                   | Name        | Status | Task State | Power State | Networks       |
+--------------------------------------+-------------+--------+------------+-------------+----------------+
| a0beb084-73c0-428a-bb59-0604588450be | cirrOS-K2   | ACTIVE | -          | Running     | vmnet=10.0.0.2 |
+--------------------------------------+-------------+--------+------------+-------------+----------------+



Next I try to boot a second VM with the m1.tiny flavor. I was expecting this
to fail with a scheduling error, since the first VM already consumes all 8
vCPUs, but this one also builds successfully!

# nova boot --flavor m1.tiny --image cirros-0.3.3 --security-group default \
    --key-name aaa-key --availability-zone nova:node02 cirrOS-K2-2


# nova list

+--------------------------------------+-------------+--------+------------+-------------+----------------+
| ID                                   | Name        | Status | Task State | Power State | Networks       |
+--------------------------------------+-------------+--------+------------+-------------+----------------+
| a0beb084-73c0-428a-bb59-0604588450be | cirrOS-K2   | ACTIVE | -          | Running     | vmnet=10.0.0.2 |
| 32fef068-aea3-423f-afb8-b9a5f8f2e0a6 | cirrOS-K2-2 | ACTIVE | -          | Running     | vmnet=10.0.0.3 |
+--------------------------------------+-------------+--------+------------+-------------+----------------+




I can even boot a third n1.large one:



# nova boot --flavor n1.large --image cirros-0.3.3 --security-group default \
    --key-name aaa-key --availability-zone nova:node02 cirrOS-K2-3


# nova list

+--------------------------------------+-------------+--------+------------+-------------+----------------+
| ID                                   | Name        | Status | Task State | Power State | Networks       |
+--------------------------------------+-------------+--------+------------+-------------+----------------+
| a0beb084-73c0-428a-bb59-0604588450be | cirrOS-K2   | ACTIVE | -          | Running     | vmnet=10.0.0.2 |
| 32fef068-aea3-423f-afb8-b9a5f8f2e0a6 | cirrOS-K2-2 | ACTIVE | -          | Running     | vmnet=10.0.0.3 |
| 6210f7c7-f16a-4343-a181-88ede5ee0132 | cirrOS-K2-3 | ACTIVE | -          | Running     | vmnet=10.0.0.4 |
+--------------------------------------+-------------+--------+------------+-------------+----------------+





All of these are running on node02, the hypervisor on which I set the
aforementioned CPU allocation ratio etc.



# nova hypervisor-servers node02

+--------------------------------------+-------------------+---------------+---------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+---------------+---------------------+
| a0beb084-73c0-428a-bb59-0604588450be | instance-00000041 | 2             | node02              |
| 32fef068-aea3-423f-afb8-b9a5f8f2e0a6 | instance-00000042 | 2             | node02              |
| 6210f7c7-f16a-4343-a181-88ede5ee0132 | instance-00000043 | 2             | node02              |
+--------------------------------------+-------------------+---------------+---------------------+




Any ideas why the filters are being ignored?


I have restarted all services on the hypervisor node!


Should I restart any services on the controller node as well, or are the
changes picked up automatically?



Does it have anything to do with the fact that I am specifically requesting
that node through the availability zone parameter?
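My worry is that forcing the host this way (nova:node02) might bypass the filters entirely. Conceptually, what I suspect is happening (a toy sketch of my guess, not Nova's actual code):

```python
def select_hosts(all_hosts, filters, request, force_hosts=None):
    """Toy scheduler model: if specific hosts are forced, skip filtering."""
    if force_hosts:
        # forced hosts are passed straight through, filters never run
        return [h for h in all_hosts if h["name"] in force_hosts]
    return [h for h in all_hosts
            if all(f(h, request) for f in filters)]

# A filter that rejects node02 (pretend it is already full)...
node02_full = lambda host, req: host["name"] != "node02"
hosts = [{"name": "node01"}, {"name": "node02"}]

# ...is ignored when node02 is forced:
print(select_hosts(hosts, [node02_full], {}, force_hosts={"node02"}))
# ...but applied when no host is forced:
print(select_hosts(hosts, [node02_full], {}))
```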


Best regards,


George




I forgot to mention in my original e-mail that I am making these changes in
the nova.conf file on compute node02, not on the controller node!

Should I change something there as well?
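In particular, I wonder whether these options need to be in the controller's nova.conf, since that is where nova-scheduler runs. Something like this (my guess, untested):

```ini
# /etc/nova/nova.conf on the controller (read by nova-scheduler?)
[DEFAULT]
cpu_allocation_ratio=1.0
ram_allocation_ratio=1.0
reserved_host_memory_mb=1024
```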


Best regards,

George

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
