Is there anyone currently working on the Neat/Gantt projects? I'd like to
contribute to them as well.
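To make the proposal below more concrete: the decision step in Henrique's flow (pull per-instance cpu_util averages from Ceilometer, then decide which instances to live-migrate off an overloaded host) could be sketched roughly as follows. The function name and watermark value are hypothetical illustrations, not an existing API; in a real deployment the measurements would come from ceilometer-client and the instance list from Nova.

```python
# Hypothetical sketch of the per-host migration decision. In practice the
# cpu_util averages would come from ceilometer-client (e.g.
# `ceilometer statistics -m cpu_util -q resource=$INSTANCE_ID`) and the
# instance list for the host from Nova.

def select_migrations(cpu_util_by_instance, host_high_watermark=80.0):
    """Return instance IDs to live-migrate off a host whose mean cpu_util
    exceeds the (user-configured) watermark, busiest instances first."""
    if not cpu_util_by_instance:
        return []
    mean = sum(cpu_util_by_instance.values()) / len(cpu_util_by_instance)
    if mean <= host_high_watermark:
        return []  # host load is within bounds; no migrations needed
    # Busiest first, so moving the top candidate relieves the most load.
    return sorted(cpu_util_by_instance,
                  key=cpu_util_by_instance.get, reverse=True)

# Example: a host averaging 90% CPU across two instances.
print(select_migrations({"vm-1": 95.0, "vm-2": 85.0}))  # ['vm-1', 'vm-2']
```

Whether the host mean or per-instance thresholds drive the decision is exactly the kind of user-configurable policy the proposal mentions.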


2014-04-11 11:37 GMT-03:00 Andrew Laski <[email protected]>:

> On 04/10/14 at 11:33pm, Oleg Gelbukh wrote:
>
>> Andrew,
>>
>> Thank you for clarification!
>>
>>
>> On Thu, Apr 10, 2014 at 3:47 PM, Andrew Laski <[email protected]>
>> wrote:
>>
>>>
>>>
>>> The scheduler as it currently exists is a placement engine. There is
>>> sufficient complexity in the scheduler with just that responsibility, so
>>> I would prefer to see anything that makes runtime decisions separated
>>> out. Perhaps it could just be another service within the scheduler
>>> project once it's broken out, but I think it will be beneficial to have
>>> a clear distinction between placement decisions and runtime monitoring.
>>>
>>
>>
>> Do you think that auto-scaling could be considered another facet of this
>> 'runtime monitoring' functionality? Currently it is a combination of Heat
>> and Ceilometer. Is it worth moving it to the hypothetical runtime
>> mobility service as well?
>>
>
> Auto-scaling is certainly a facet of runtime monitoring. But auto-scaling
> performs actions based on a set of user-defined rules and is very visible,
> while the enhancements proposed below are intended to benefit deployers
> and be invisible to users. So the set of allowable actions is very
> constrained compared to what auto-scaling can do.
> In my opinion what's being proposed doesn't fit cleanly into any existing
> service, so perhaps it could start as a standalone entity. Then once
> there's something that can be used and demoed, a proper place might
> suggest itself, or it might make sense to keep it separate.
>
>
>
>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>>
>>
>>>
>>>
>>>> --
>>>> Best regards,
>>>> Oleg Gelbukh
>>>>
>>>>
>>>> On Wed, Apr 9, 2014 at 7:47 PM, Jay Lau <[email protected]> wrote:
>>>>
>>>>> @Oleg, I'm still not sure about the target of Gantt: is it for
>>>>> initial placement policy, run-time policy, or both? Can you help
>>>>> clarify?
>>>>>
>>>>> @Henrique, not sure if you know about IBM PRS (Platform Resource
>>>>> Scheduler) [1]; we have finished the "dynamic scheduler" in our
>>>>> Icehouse version (PRS 2.2), and it has exactly the feature you
>>>>> described. We are planning a live demo of this feature at the Atlanta
>>>>> Summit. I'm also writing a document on run-time policy, which will
>>>>> cover more run-time policies for OpenStack, but it is not finished yet
>>>>> (apologies for the slow progress). The related blueprint is [2], and
>>>>> you can find some related discussion at [3].
>>>>>
>>>>> [1] http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS213-590&appname=USN
>>>>> [2] https://blueprints.launchpad.net/nova/+spec/resource-optimization-service
>>>>> [3] http://markmail.org/~jaylau/OpenStack-DRS
>>>>>
>>>>> Thanks.
>>>>>
>>>>>
>>>>> 2014-04-09 23:21 GMT+08:00 Oleg Gelbukh <[email protected]>:
>>>>>
>>>>> Henrique,
>>>>>
>>>>>
>>>>>> You should check out the Gantt project [1]; it could be exactly the
>>>>>> place to implement such features. It is a generic cross-project
>>>>>> Scheduler-as-a-Service recently forked from Nova.
>>>>>>
>>>>>> [1] https://github.com/openstack/gantt
>>>>>>
>>>>>> --
>>>>>> Best regards,
>>>>>> Oleg Gelbukh
>>>>>> Mirantis Labs
>>>>>>
>>>>>>
>>>>>> On Wed, Apr 9, 2014 at 6:41 PM, Henrique Truta <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>> Hello, everyone!
>>>>>>>
>>>>>>> I am currently a graduate student and a member of a group of
>>>>>>> contributors to OpenStack. We believe that a dynamic scheduler could
>>>>>>> improve the efficiency of an OpenStack cloud, either by rebalancing
>>>>>>> nodes to maximize performance or by minimizing the number of active
>>>>>>> hosts to reduce energy costs. Therefore, we would like to propose a
>>>>>>> dynamic scheduling mechanism for Nova. The main idea is to use
>>>>>>> Ceilometer information (e.g. RAM, CPU, disk usage) through the
>>>>>>> ceilometer-client and dynamically decide whether an instance should
>>>>>>> be live migrated.
>>>>>>> This might be done either as a Nova periodic task, executed at a
>>>>>>> configurable interval, or as a new independent project. In both
>>>>>>> cases, the current Nova scheduler will not be affected, since this
>>>>>>> new scheduler will be pluggable. We have searched and found no such
>>>>>>> initiative in the OpenStack blueprints. Outside the community, we
>>>>>>> found only a recent IBM announcement of a similar feature in one of
>>>>>>> its cloud products.
>>>>>>>
>>>>>>> A possible flow is: in the new scheduler, we periodically make a
>>>>>>> call to Nova, get the instance list from a specific host and, for
>>>>>>> each instance, make a call to the ceilometer-client (e.g.
>>>>>>> $ ceilometer statistics -m cpu_util -q resource=$INSTANCE_ID);
>>>>>>> then, according to specific parameters configured by the user, we
>>>>>>> analyze the meters and perform the appropriate migrations.
>>>>>>>
>>>>>>> Do you have any comments or suggestions?
>>>>>>>
>>>>>>> --
>>>>>>> Ítalo Henrique Costa Truta
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> OpenStack-dev mailing list
>>>>>>> [email protected]
>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>> --
>>>>> Thanks,
>>>>>
>>>>> Jay
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>
>
>



--
Ítalo Henrique Costa Truta
