>>> I currently operate a multi-region cloud split between two geographic
>>> locations. I updated it to Pike not too long ago, but I've been
>>> running into a peculiar issue. Since the Pike release, Nova
>>> asks Keystone whether a new project exists before configuring
>>> the project's quotas. However, there doesn't seem to be any region
>>> restriction on which endpoint Nova will use to query Keystone. So,
>>> right now, if I create a new project in region one, Nova will query
>>> Keystone in region two. Because my Keystone databases are not synced
>>> in real time between the regions, the region two Keystone will reply
>>> that the new project doesn't exist, even though it exists in the
>>> region one Keystone.
> Are both keystone nodes completely separate? Do they share any information?

I share the DB information between both. In our use case, we very rarely make 
changes to Keystone (password changes, user creation, project creation) and 
there is a limited number of people who even have access to it, so I can get 
away with having my main DB in region 1 and hosting an exact copy in region 2. 
The original idea was to have a MySQL slave in region 2, but that failed, so we 
decided to manually replicate the Keystone DB whenever we make changes. This 
means I have the same users and projects in both regions, which is exactly 
what I want right now for my specific use case. Of course, that also means I 
only perform Keystone operations in region 1 and never in region 2, to prevent 
discrepancies.
>>> 
>>> Thinking that this could be a configuration error, I tried setting
>>> the region_name in keystone_authtoken, but that didn’t change much of
>>> anything. Right now I am thinking this may be a bug. Could someone
>>> confirm that this is indeed a bug and not a configuration error?
>>> 
>>> To circumvent this issue, I am considering either modifying the
>>> database by hand or trying to implement realtime replication between
>>> both Keystone databases. Would there be another solution? (beside
>>> modifying the code for the Nova check)
> A variant of this just came up as a proposal for the Forum in a couple
> weeks [0]. A separate proposal was also discussed during this week's
> keystone meeting [1], which brought up an interesting solution. We
> should be seeing a specification soon that covers the proposal in
> greater detail and includes use cases. Either way, both sound like they
> may be relevant to you.
> 
> [0] https://etherpad.openstack.org/p/YVR-edge-keystone-brainstorming
> [1]
> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-05-08-16.00.log.html#l-156

This is interesting. Unfortunately I will not be in Vancouver, but I will keep 
an eye on it in the future. I will need to find a way to solve the issue at 
hand fairly soon, though.

>> 
>> This is the specific code you're talking about:
>> 
>> https://github.com/openstack/nova/blob/stable/pike/nova/api/openstack/identity.py#L35
>> 
>> 
>> I don't see region_name as a config option for talking to keystone in
>> Pike:
>> 
>> https://docs.openstack.org/nova/pike/configuration/config.html#keystone
>> 
>> But it is in Queens:
>> 
>> https://docs.openstack.org/nova/queens/configuration/config.html#keystone
>> 
>> That was added in this change:
>> 
>> https://review.openstack.org/#/c/507693/
>> 
>> But I think what you're saying is, since you have multiple regions,
>> the project could be in any of them at any given time until they
>> synchronize, so configuring nova for a specific region probably isn't
>> going to help in this case, right?
>> 
>> Isn't this somehow resolved with keystone federation? Granted, I'm not
>> at all a keystone person, but I'd think this isn't a unique problem.
> Without knowing a whole lot about the current setup, I'm inclined to say
> it is. Keystone-to-keystone federation was developed for cases like
> this, and it's been something we've been trying to encourage in favor of
> building replication tooling outside of the database or over an API. The
> main concerns with taking a manual replication approach are that it could
> negatively impact overall performance and that keystone already assumes
> it will be in control of ID generation for most cases (replicating a
> project in RegionOne into RegionTwo will yield a different project ID,
> even though it is possible for both to have the same name).
> Additionally, there are some things that keystone doesn't expose over
> the API that would need to be replicated, like revocation events (I
> mentioned this in the etherpad linked above).

To answer the questions of both posts:

1. I was talking about the region_name parameter under keystone_authtoken. 
That is in the Pike doc you linked, but I am unaware whether it is only used 
for token generation or not. In any case, it doesn't seem to have any impact 
on the issue at hand.
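
For what it's worth, if I read the linked review correctly, the Queens-era 
option lives under a separate [keystone] section in nova.conf (distinct from 
[keystone_authtoken]). The placement below is my assumption from the Queens 
config reference, not something I have tested:

```ini
# nova.conf -- assumed section/option placement, per the Queens config docs
[keystone]
# Pin the identity endpoint Nova uses for the project-existence check
region_name = RegionOne
```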

2. My understanding of the issue is this:
        - Keystone creates the new project in region 1.
        - Nova wants to check if the project exists in keystone, so it asks 
keystone for its endpoint list.
        - Nova picks the first endpoint in the list, which happens to be the 
region 2 endpoint (my endpoint list has the endpoints of both regions since I 
manage from a single horizon/controller node).
        - Since there's no real-time replication, region 2 replies that the 
project doesn't exist, while it exists in region 1.

I may be wrong in my assumption that it picks the region 2 endpoint, but the 
fact is that it does query the region 2 Keystone when it shouldn't (I see the 
404s in the region 2 logs).
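
To illustrate the selection behaviour I suspect in point 2, here is a toy 
sketch in Python (this is not nova's or keystoneauth's actual code; the 
catalog contents, endpoint ordering, and URLs are all made up for 
illustration):

```python
# Hypothetical service catalog entries; ordering and URLs are invented.
CATALOG = [
    {"service_type": "identity", "region": "RegionTwo",
     "url": "https://keystone.region2.example.com:5000/v3"},
    {"service_type": "identity", "region": "RegionOne",
     "url": "https://keystone.region1.example.com:5000/v3"},
]


def pick_endpoint(catalog, service_type, region_name=None):
    """Return the first endpoint matching the filters.

    Without a region_name filter, the first matching endpoint wins,
    whichever region it happens to belong to.
    """
    for ep in catalog:
        if ep["service_type"] != service_type:
            continue
        if region_name is not None and ep["region"] != region_name:
            continue
        return ep["url"]
    return None


# An unfiltered lookup lands on whatever region is listed first (here, region 2):
unfiltered = pick_endpoint(CATALOG, "identity")
# Filtering by region pins the lookup to the intended Keystone:
pinned = pick_endpoint(CATALOG, "identity", region_name="RegionOne")
```

If the real lookup behaves anything like this, pinning a region on whichever 
client performs the existence check would avoid the cross-region 404s.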

3. I haven't really looked into Keystone federation yet, but wouldn't it cause 
issues if projects in the 2 regions have the same UUID?

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators