Hi again!

I've changed the settings in nova.conf on the CONTROLLER node.

I restarted the scheduler service, but I still see the same behavior!

It just ignores the filters!

Any ideas why that is happening?

All the best,

George

On Thu, 27 Nov 2014 18:39:13 -0500, Yong Feng wrote:
Hi George,

If you want to configure different resource over-commit ratios, you
could try AggregateCoreFilter and the other aggregate-related filters.
If you want to configure the over-commit ratio per node, you could
write your own filter and find a place to configure the per-node
ratios. Examples of both approaches are sketched below.
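
For the aggregate route, the usual pattern is to put the host into an
aggregate and set the ratio as aggregate metadata, which
AggregateCoreFilter reads in place of the global cpu_allocation_ratio.
Something like this (these are standard nova CLI commands; the
aggregate name here is just an example):

# nova aggregate-create overcommit-pool
# nova aggregate-add-host overcommit-pool node02
# nova aggregate-set-metadata overcommit-pool cpu_allocation_ratio=2.0

For the custom-filter route, a minimal sketch of what such a filter
could look like (the module name, class name, and per-host table are
made up for illustration; only BaseHostFilter and the host_passes
signature come from nova.scheduler.filters):

# my_filters.py -- hypothetical module on the scheduler's PYTHONPATH
from nova.scheduler import filters

# Hypothetical per-host ratios; a real filter would read these from
# nova.conf or from aggregate metadata instead of hard-coding them.
PER_HOST_CPU_RATIO = {'node02': 1.0, 'node03': 2.0}


class PerHostCoreFilter(filters.BaseHostFilter):
    """Reject hosts whose vCPU usage would exceed a per-host ratio."""

    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties.get('instance_type') or {}
        requested = instance_type.get('vcpus', 0)
        # Fall back to nova's default cpu_allocation_ratio of 16.0
        # for hosts without an explicit entry.
        ratio = PER_HOST_CPU_RATIO.get(host_state.host, 16.0)
        limit = host_state.vcpus_total * ratio
        return host_state.vcpus_used + requested <= limit

You would then point scheduler_available_filters at the module, add
PerHostCoreFilter to scheduler_default_filters, and restart the
scheduler.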

Thanks,

Yong

On Thu, Nov 27, 2014 at 6:06 PM, Dimitrakakis Georgios wrote:

Hi Yong,

I will try it and let you know.

What if I want to have different behavior per hypervisor/node?

Can I somehow achieve that?

Regards,

George

---- Yong Feng wrote ----

Hi George,

The following parameters should be configured on the nova controller
node where the nova scheduler service is running. After that, please
restart the nova scheduler service.

cpu_allocation_ratio=1.0

ram_allocation_ratio=1.0

reserved_host_memory_mb=1024

scheduler_available_filters=nova.scheduler.filters.all_filters



scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
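
(On RDO/CentOS-style packaging, restarting the scheduler is something
like the following; adjust the service name to your distribution:)

# service openstack-nova-scheduler restart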
Thanks,

Yong

On Thu, Nov 27, 2014 at 2:29 PM, Georgios Dimitrakakis wrote:

Hi all!

I have a node with 8 cores (HT enabled) and 32 GB of RAM.

I am trying to limit the number of VMs that can run on it using
scheduler filters.

I have set the following in the nova.conf file:

cpu_allocation_ratio=1.0

ram_allocation_ratio=1.0

reserved_host_memory_mb=1024

scheduler_available_filters=nova.scheduler.filters.all_filters




scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
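
For reference, this is the check I understand CoreFilter should now
apply (a sketch of the arithmetic only; depending on whether nova
counts my 8 HT-enabled cores as 8 physical cores or as 16 threads,
vcpus_total is 8 or 16):

# Expected CoreFilter arithmetic for the boot sequence shown below.
for vcpus_total in (8, 16):
    limit = vcpus_total * 1.0  # cpu_allocation_ratio = 1.0
    used = 0
    for name, requested in (("n1.large", 8), ("m1.tiny", 1), ("n1.large", 8)):
        if used + requested > limit:
            print("vcpus_total=%d: expect %s to be rejected" % (vcpus_total, name))
            break
        used += requested

Either way, at least one of the boots below should fail with "No valid
host was found", but none does.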

I then boot a CirrOS VM with a flavor that has 8 vCPUs:

# nova flavor-list



+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name     | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny  | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 12 | n1.large | 8192      | 80   | 0         |      | 8     | 1.0         | True      |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+

# nova boot --flavor n1.large --image cirros-0.3.3 \
    --security-group default --key-name aaa-key \
    --availability-zone nova:node02 cirrOS-K2

and it builds successfully.

# nova list



+--------------------------------------+-------------+--------+------------+-------------+----------------+
| ID                                   | Name        | Status | Task State | Power State | Networks       |
+--------------------------------------+-------------+--------+------------+-------------+----------------+
| a0beb084-73c0-428a-bb59-0604588450be | cirrOS-K2   | ACTIVE | -          | Running     | vmnet=10.0.0.2 |
+--------------------------------------+-------------+--------+------------+-------------+----------------+

Next I try to boot a second VM with the m1.tiny flavor, and although I
was expecting this to produce an error and the VM not to build, this
one also builds successfully!!!

# nova boot --flavor m1.tiny --image cirros-0.3.3 \
    --security-group default --key-name aaa-key \
    --availability-zone nova:node02 cirrOS-K2-2

# nova list



+--------------------------------------+-------------+--------+------------+-------------+----------------+
| ID                                   | Name        | Status | Task State | Power State | Networks       |
+--------------------------------------+-------------+--------+------------+-------------+----------------+
| a0beb084-73c0-428a-bb59-0604588450be | cirrOS-K2   | ACTIVE | -          | Running     | vmnet=10.0.0.2 |
| 32fef068-aea3-423f-afb8-b9a5f8f2e0a6 | cirrOS-K2-2 | ACTIVE | -          | Running     | vmnet=10.0.0.3 |
+--------------------------------------+-------------+--------+------------+-------------+----------------+

I can even boot a third LARGE one:

# nova boot --flavor n1.large --image cirros-0.3.3 \
    --security-group default --key-name aaa-key \
    --availability-zone nova:node02 cirrOS-K2-3

# nova list



+--------------------------------------+-------------+--------+------------+-------------+----------------+
| ID                                   | Name        | Status | Task State | Power State | Networks       |
+--------------------------------------+-------------+--------+------------+-------------+----------------+
| a0beb084-73c0-428a-bb59-0604588450be | cirrOS-K2   | ACTIVE | -          | Running     | vmnet=10.0.0.2 |
| 32fef068-aea3-423f-afb8-b9a5f8f2e0a6 | cirrOS-K2-2 | ACTIVE | -          | Running     | vmnet=10.0.0.3 |
| 6210f7c7-f16a-4343-a181-88ede5ee0132 | cirrOS-K2-3 | ACTIVE | -          | Running     | vmnet=10.0.0.4 |
+--------------------------------------+-------------+--------+------------+-------------+----------------+

All of these are running on NODE02 as the hypervisor, on which I have
set the aforementioned CPU allocation ratio etc.

# nova hypervisor-servers node02



+--------------------------------------+-------------------+---------------+---------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+---------------+---------------------+
| a0beb084-73c0-428a-bb59-0604588450be | instance-00000041 | 2             | node02              |
| 32fef068-aea3-423f-afb8-b9a5f8f2e0a6 | instance-00000042 | 2             | node02              |
| 6210f7c7-f16a-4343-a181-88ede5ee0132 | instance-00000043 | 2             | node02              |
+--------------------------------------+-------------------+---------------+---------------------+
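
For what it's worth, the capacity and usage nova tracks for the node
can be checked with something like this (hypervisor-show is a standard
nova command; the grep just trims the output to the CPU and RAM
fields):

# nova hypervisor-show node02 | grep -E "vcpus|memory_mb"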

Any ideas why it is ignored???

I have restarted all services on the hypervisor node!

Should I restart any services on the controller node as well, or are
the settings picked up automatically?

Does it have anything to do with the fact that I am specifically
requesting that node through the availability zone parameter?

Best regards,

George


_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
