> On Sep 23, 2016, at 11:03 AM, Alexandr Porunov <alexandr.poru...@gmail.com> 
> wrote:
> 
> Hello,
> 
> I have the following nodes:
> swift_proxy1 - 192.168.0.11
> swift_proxy2 - 192.168.0.12
> keystone1 - 192.168.0.21
> keystone2 - 192.168.0.22
> 
> I would like to know whether it is possible to use two keystone servers when we 
> use "uuid" or "fernet" tokens.
> 
> With uuid tokens I can use a Galera Cluster so that both Keystone servers share 
> the same database. The problem is that I don't know what to write in the 
> endpoints. As I understand it, we can create an endpoint for only one of the 
> keystone servers, i.e.:
> 
> openstack endpoint create --region RegionOne identity public 
> http://192.168.0.11:5000/v3
> 
> openstack endpoint create --region RegionOne identity internal 
> http://192.168.0.11:5000/v3
> 
> openstack endpoint create --region RegionOne identity admin 
> http://192.168.0.11:35357/v3


You’ll need some way to balance requests across both Keystone servers and, 
similarly, across both Swift proxies.

A simple approach is to use round-robin DNS:
keystone.example.com. IN A 192.168.0.21
keystone.example.com. IN A 192.168.0.22
swift.example.com. IN A 192.168.0.11
swift.example.com. IN A 192.168.0.12

Then, add your endpoints using the A resource record:

# openstack endpoint create --region RegionOne identity public 
http://keystone.example.com:5000/v3
# openstack endpoint create --region RegionOne object-store public 
http://swift.example.com:8080/v1/AUTH_%\(tenant_id\)s

It’s quick, easy, and cheap. But you lose availability if one server goes 
offline: the DNS record will be cached for the duration of the TTL, and clients 
will keep attempting to connect to the unavailable IP. You can set a low TTL 
(<5 min) to reduce that window, at the expense of more DNS queries.
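
For example, a 5-minute TTL on the records above would look like this in a 
BIND-style zone:

keystone.example.com. 300 IN A 192.168.0.21
keystone.example.com. 300 IN A 192.168.0.22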

For better availability, you can put a third IP in front of each service and 
configure an external load balancer on it to distribute requests. The load 
balancer can run HTTP/TCP health checks and remove a server from the pool until 
it’s back online. HAProxy and Nginx are both good options to explore. Your 
endpoint then becomes a DNS A record pointing at the load balancer’s IP.
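
As a rough sketch, a minimal HAProxy configuration might look like the 
following (the 192.168.0.100 VIP is hypothetical, and only the Keystone public 
port is shown; the admin port and the Swift proxies on 8080 would follow the 
same pattern):

frontend keystone_public
    bind 192.168.0.100:5000
    mode http
    default_backend keystone_public_pool

backend keystone_public_pool
    mode http
    balance roundrobin
    # mark a server down if GET /v3 stops returning a 2xx/3xx response
    option httpchk GET /v3
    server keystone1 192.168.0.21:5000 check
    server keystone2 192.168.0.22:5000 check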


> Also, what should I use when I create the Swift endpoints? Do they have to 
> point to the proxy itself or to the keystone server?

The DNS name or IP for the Swift proxies (plus protocol, API version, and 
tenant variable of course) should be the endpoint added to keystone. Keystone 
is the authentication mechanism, and returns a catalog containing the URL for 
the proxies.
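
For completeness, the internal and admin object-store endpoints would look much 
the same (this assumes the same port 8080 and the tenant_id substitution used 
above; the admin endpoint conventionally omits the AUTH_ prefix):

# openstack endpoint create --region RegionOne object-store internal 
http://swift.example.com:8080/v1/AUTH_%\(tenant_id\)s
# openstack endpoint create --region RegionOne object-store admin 
http://swift.example.com:8080/v1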


> My aim is to connect keystone1 to proxy1 and keystone2 to proxy2, i.e.: proxy1 
> should always check tokens only against the keystone1 server and proxy2 should 
> always check tokens only against the keystone2 server. But I want to be able 
> to receive tokens from any keystone server (a user can receive a token from 
> keystone1 and be authenticated on proxy2 with that token).


You’ll need a way to distribute the fernet keys across multiple Keystone 
servers, since they’re not stored in a database the way UUID tokens are:
http://docs.openstack.org/admin-guide/keystone-fernet-token-faq.html#how-should-i-approach-key-distribution
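
Keys are created and rotated with keystone-manage on whichever node you treat 
as the “primary” for rotation (this assumes the default /etc/keystone/fernet-keys 
directory and a keystone system user/group):

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone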

Rsync is one approach, since each key is maintained in a separate file. This is 
easier with an Active/Passive Keystone configuration, though.
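
For example, a rough sketch run from the node that performs the rotation, right 
after each fernet_rotate (the destination here is keystone2 from your list; 
adjust the user and paths to your deployment):

# rsync -az --delete /etc/keystone/fernet-keys/ root@192.168.0.22:/etc/keystone/fernet-keys/
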
For Active/Active keystone configurations, you may want to mount a replicated 
block device and use a clustered filesystem for your fernet-keys directory. One 
approach might be DRBD+GFS.

https://www.drbd.org/en/use-cases/high-availability
https://www.drbd.org/en/doc/users-guide-83/s-dual-primary-mode


To have each proxy validate tokens against a specific Keystone server, you 
could configure each proxy to use /etc/hosts or a local DNS resolver that 
always returns the “local” Keystone IP, so that the proxy queries the intended 
server. That way, you can still reference “keystone.example.com” in each of 
your proxy.conf files for token validation.

Eg,

swift_proxy1:/etc/hosts:
192.168.0.21 keystone keystone.example.com

swift_proxy2:/etc/hosts:
192.168.0.22 keystone keystone.example.com
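
The corresponding fragment of each proxy’s configuration then references the 
shared name instead of an IP. This is only a sketch of the keystonemiddleware 
authtoken filter (the service project, user, and password are placeholders for 
whatever you created):

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
# keystone.example.com resolves via the /etc/hosts entries above,
# so each proxy validates tokens against its "local" Keystone
auth_uri = http://keystone.example.com:5000/v3
auth_url = http://keystone.example.com:35357/v3
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = swift
password = SWIFT_PASS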


If Keystone is active/active and providing global tokens, any Keystone server 
can validate the token (assuming correct key distribution, as above). It would 
make sense to have local Keystone affinity for token validation if you had a 
multi-region Swift cluster and network latency were high between the regions. 
I’m not clear why you would want to do this on a local network, though, since 
the IP ranges you referenced would be on the same network segment. There 
wouldn’t be any latency difference if proxy2 queries keystone1, and vice versa.


-Andrew

