On 04/23/2014 11:12 AM, Martin Mucha wrote:
Hi,
I was describing the current state, the first iteration. The need for a restart
is something that should not exist; I've removed that necessity in the meantime.
Altered flow: you allocate a MAC address for a NIC in a data center without its
own pool, and it gets registered in the global pool. Then you modify the
settings of that data center so that a new pool is created for it. All NICs for
that data center are queried from the DB, their MACs released from the global
pool and added to the data-center-scoped pool. And the other way around: when
you delete this scoped pool, all its content will be moved to the global pool.
The feature page is updated.
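The flow above can be sketched roughly like this (a minimal illustration of the described behaviour, not the actual engine code; all names here are hypothetical):

```python
# Sketch: when a data center gains its own MAC pool, the MACs of its NICs
# are released from the global pool and re-registered in the new scoped
# pool; deleting the scoped pool moves everything back.

class MacPool:
    def __init__(self):
        self.used = set()

    def register(self, mac):
        self.used.add(mac)

    def release(self, mac):
        self.used.discard(mac)


def create_scoped_pool(global_pool, nic_macs):
    """Move the given NIC MACs from the global pool into a new scoped pool."""
    scoped = MacPool()
    for mac in nic_macs:
        global_pool.release(mac)
        scoped.register(mac)
    return scoped


def delete_scoped_pool(scoped, global_pool):
    """When the scoped pool is removed, its content goes to the global pool."""
    for mac in list(scoped.used):
        scoped.release(mac)
        global_pool.register(mac)
```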
Note: *previously*, a MAC ended up in the wrong pool only after modification of
an existing data center caused an entirely new pool to be created (there was no
pool for that scope; after the modification there is). All other operations
were fine. Now all manipulation of scoped pools should be OK.
Note2: all scoped-pool handling is implemented as a strategy. If we are
unsatisfied with this implementation we can create another one and switch to
it without modifying the calling code. Also, many implementations may coexist,
and we can switch between them (at app start-up) via configuration.
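The strategy idea above could be pictured like this (a hedged sketch only; the real implementation lives in the engine, and all names here are assumptions):

```python
# Sketch: the scoped-pool behaviour is a pluggable strategy chosen once
# at application start-up from configuration; calling code never changes.

class GlobalOnlyStrategy:
    """Every request resolves to the single global pool."""
    def pool_for(self, dc_id):
        return "global"


class PerDataCenterStrategy:
    """Data centers listed in config get their own pool; others fall back."""
    def __init__(self, scoped_dcs):
        self.scoped_dcs = set(scoped_dcs)

    def pool_for(self, dc_id):
        return dc_id if dc_id in self.scoped_dcs else "global"


def build_strategy(config):
    # Chosen once at start-up; callers only ever see pool_for().
    if config["strategy"] == "per-dc":
        return PerDataCenterStrategy(config.get("scoped_dcs", []))
    return GlobalOnlyStrategy()
```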
Question: when allocating a MAC that is not one specified by the user, the
system picks an available MAC from the given MAC pool. Imagine that after some
time the MAC pool ranges change, and let's say a whole new interval of MACs is
used, not overlapping with the former one. Then all previously allocated MACs
will be present in the altered pool as if they were user-specified ones, since
they are outside the defined ranges. With a large number of such MAC addresses
this has a detrimental effect on memory usage. So if this is a real scenario,
would it be acceptable (or welcome) for you to reassign all MAC addresses that
were selected by the system? For example, on engine start / VM start.
no. you don't change mac addresses on the fly.
also, if the mac address isn't in the range of the scope, i don't see
why you need to keep it in memory at all?
iiuc, you keep in memory the unused ranges of the various mac_pools.
when a mac address is released, you need to check if it is in the range
of the relevant mac_pool for the VM (default, dc, cluster, vm_pool).
if it is, you need to return it to that mac_pool. otherwise, the
mac_pool is not relevant for this out-of-range mac address, and you just
stop using it.
remember, you have to check the released mac address against the specific
associated mac_pool, since we do (read: should[1]) allow overlapping mac
addresses (hence ranges) in different mac_pools.
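That release rule could look something like the following (my sketch of the behaviour described here, with integer MAC values and a plain dict standing in for a pool; names are assumptions):

```python
def release_mac(mac, mac_pool):
    """Return a released MAC to its pool only if it falls inside one of the
    pool's configured ranges; an out-of-range (e.g. user-specified) MAC is
    simply no longer tracked."""
    value = int(mac.replace(":", ""), 16)
    for lo, hi in mac_pool["ranges"]:
        if lo <= value <= hi:
            mac_pool["free"].add(value)
            return True
    # Not in any range: this mac_pool is not relevant for the address.
    return False
```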
so cases to consider:
- mac_pool removed --> use the relevant mac_pool (say, the default one)
for the below
- mac_pool range extended - need to check if any affected VMs have mac
  addresses in the new range, so as not to use them
- mac_pool range reduced - just need to reduce it, unrelated to current
vm's
- mac_pool range changed all-together / new mac_pool defined affecting
the VM (instead of the default one) - need to review all mac
addresses in affected vm's to check if any are in the range and
should be removed from the mac_pool ranges.
the last 3 are basically the same - on any change to a mac_pool range,
just re-calculate the ranges in it by creating sub-ranges, removing
sorted groups/ranges of already-allocated mac addresses?
[1] iirc, we have a config allowing this today for manually configured
mac addresses.
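The re-calculation in the last bullet amounts to subtracting the sorted, already-allocated addresses from the configured ranges. A minimal sketch, using integer MAC values and a hypothetical helper name:

```python
def free_subranges(ranges, allocated):
    """Split each configured (lo, hi) range into the sub-ranges left free
    after removing already-allocated addresses."""
    taken = sorted(set(allocated))
    result = []
    for lo, hi in ranges:
        start = lo
        for a in taken:
            if a < lo or a > hi:
                continue          # allocation outside this range
            if a > start:
                result.append((start, a - 1))
            start = a + 1
        if start <= hi:
            result.append((start, hi))
    return result
```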
M.
----- Original Message -----
From: "Itamar Heim" <[email protected]>
To: "Martin Mucha" <[email protected]>
Cc: [email protected], [email protected]
Sent: Tuesday, April 22, 2014 5:15:35 PM
Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC
On 04/18/2014 01:17 PM, Martin Mucha wrote:
Hi,
I'll try to describe it a little more. Let's say we've got one data center.
It's not yet configured to have its own MAC pool, so there is only one, global
pool in the system. We create a few VMs and their NICs will obtain their MACs
from this global pool, marking them as used. Next we alter the data center
definition so that it now uses its own MAC pool. From this point on two MAC
pools exist in the system, one global and one related to this data center, but
those allocated MACs are still allocated in the global pool, since data center
modification does not (yet) contain the logic to get all assigned MACs related
to this data center and reassign them in the new pool. However, after an app
restart all VmNics are read from the DB and placed into the appropriate pools.
Let's assume we've performed such a restart.
Now we realize that we actually don't want this data center to have its own
MAC pool, so we alter its definition, removing the MAC pool ranges. The pool
related to this data center will be removed and its content will be moved to a
scope above this data center -- into the global-scope pool. We know that
everything allocated in the pool to be removed is still used, but we need to
track it elsewhere, and currently there's just one option, the global pool. So,
to answer your last question: when I remove a scope, its pool is gone and its
content is moved elsewhere. Then, when a MAC is returned to the pool, the
request goes like: "give me the pool for this virtual machine, and whatever
pool it is, I'm returning this MAC to it." Clients of ScopedMacPoolManager do
not know which pool they're talking to. The decision about which pool is right
for them is made behind the scenes based on their identification ("I want the
pool for this logical network").
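That "give me the pool for this VM" resolution could be pictured like this (an illustrative sketch only; ScopedMacPoolManager's real API differs, and all names here are made up):

```python
# Sketch: callers identify themselves (by VM id) and the resolver picks
# the right pool behind the scenes, falling back to the global pool when
# no scoped pool exists for the VM's data center.

class ScopedPoolResolver:
    def __init__(self, global_pool):
        self.global_pool = global_pool
        self.dc_pools = {}          # dc_id -> pool (a list of free MACs)
        self.vm_to_dc = {}          # vm_id -> dc_id

    def pool_for_vm(self, vm_id):
        dc_id = self.vm_to_dc.get(vm_id)
        return self.dc_pools.get(dc_id, self.global_pool)

    def return_mac(self, vm_id, mac):
        # "Whatever pool it is, I'm returning this MAC to it."
        self.pool_for_vm(vm_id).append(mac)
```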
Notice that there is one "problem" in deciding which scope/pool to use. There
are places in the code which require the pool related to a given data center,
identified by GUID. For that request, only the data center scope or something
broader, like the global scope, can be returned. So even if one wants to use
one pool per logical network, requests identified by data center id can still
return only the data center scope or broader; there is no chance of returning
the pool related to a logical network (except when there is a sole logical
network in that data center).
Thanks for the suggestion of other scopes. One question: if we implement them,
would you like to just pick a *sole* non-global scope to use in your system
(like data-center-related pools ONLY plus one global, or logical-network-related
pools ONLY plus one global), or would it be (more) beneficial to have some sort
of cascading and overriding implemented? Like: "this data center uses *this*
pool, BUT except for *this* logical network, which should use *this* one
instead."
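The cascading/overriding variant amounts to a walk up the scope chain, taking the first pool defined along the way. A sketch, under the assumption that each scope may override its parent (all names hypothetical):

```python
def resolve_pool(scope_chain, pools):
    """Walk from the most specific scope (e.g. logical network) up through
    the data center to the global scope, returning the first pool defined."""
    for scope in scope_chain:
        if scope in pools:
            return pools[scope]
    return pools["global"]
```

A logical network with its own pool would then shadow its data center's pool, and a data center without one would fall through to global.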
I'll update feature page to contain these paragraphs.
I have to say I really don't like the notion of having to restart the
engine for a change done via the webadmin to apply.
also, if I understand your flow correctly, mac addresses may not go back
to the pool anyway until an engine restart, since the change will only
take effect on engine restart; only then will the available macs per
scope be re-calculated.
M.
----- Original Message -----
From: "Itamar Heim" <[email protected]>
To: "Martin Mucha" <[email protected]>, [email protected], [email protected]
Sent: Thursday, April 10, 2014 9:04:37 AM
Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC (was: new feature)
On 04/10/2014 09:59 AM, Martin Mucha wrote:
Hi,
I'd like to notify you about a new feature which allows specifying distinct MAC
pools, currently one per data center.
http://www.ovirt.org/Scoped_MacPoolManager
any comments/proposals for improvement are very welcome.
Martin.
_______________________________________________
Users mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/users
(changed title to reflect content)
When MAC ranges are specified for a given "scope" where there was no definition
previously, MACs allocated from the default pool will not be moved to the
"scoped" one until the next engine restart. The other way around: when removing
a "scoped" MAC pool definition, all MACs from that pool will be moved to the
default one.
can you please elaborate on this one?
as for potential other "scopes" - i can think of cluster, vm pool and
logical network as potential ones.
one more question - how do you know to "return" the mac address to the
correct pool on delete?