I think I've created a bit of confusion, because I forgot that DVR still
does SNAT (generic NAT not tied to a Floating IP) on a central network node,
just like in the non-DVR model. The extra address that is consumed is
allocated to a FIP-specific namespace when a DVR is made responsible for supporting
But there already is a second external address, the FIP address that's doing
the NAT. Is there a double NAT? I'm a little confused.
Thanks,
Kevin
From: Robert Starmer [rob...@kumul.us]
Sent: Wednesday, January 27, 2016 3:20 PM
To: Carl Baldwin
Cc: OpenStack Operators; To
Due to scheduling conflicts and a very light agenda, there will be no
Community App Catalog IRC meeting this week.
Our next meeting is scheduled for February 4th, the agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/app-catalog
One thing on the agenda for the 2/4/2016 meeting is
You can't get rid of the "External" address, as it's used to direct return
traffic to the right router node. DVR as implemented is really just a
local NAT gateway per physical compute node. The outside of your NAT needs
to be publicly unique, so it needs its own address. Some SDN solutions
can p
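As a rough illustration of the extra address described above, you can inspect the namespaces the DVR-enabled L3 agent creates. The namespace names below are patterns, not real IDs; the UUIDs will differ on your deployment:

```shell
# On a compute node running the DVR-enabled L3 agent:
ip netns list
# Typically shows one qrouter-<router-id> namespace per router,
# plus a fip-<ext-net-id> namespace per external network.

# The fip- namespace holds the extra, publicly unique address
# used to steer return traffic back to this compute node:
ip netns exec fip-<ext-net-id> ip addr show

# On the central network node, SNAT for instances without a
# floating IP still happens in a snat-<router-id> namespace:
ip netns exec snat-<router-id> iptables -t nat -L -n
```

So there is no double NAT on a single flow: a floating IP is translated locally in the qrouter/fip namespaces on the compute node, while non-FIP traffic is translated once, centrally, in the snat namespace.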
A GlusterFS backend works great for shared Glance, and can be configured for
a bit of redundancy at the disk level (unlike non-distributed NFS,
which needs the NFS server to be present), much like the Ceph model Kevin
suggests. Is your database also resilient (e.g. some form of MySQL
replication
Congratulations Edgar!
Robert
On Wed, Jan 27, 2016 at 9:28 AM, Edgar Magana
wrote:
> Hello All,
>
> Thank you so much, Shilla and Jon, for the support and confidence. I am
> really looking forward to working with you as well.
>
> This is a great opportunity and I am very excited about it. I will d
Hello All,
Thank you so much, Shilla and Jon, for the support and confidence. I am really
looking forward to working with you as well.
This is a great opportunity and I am very excited about it. I will do my best
to provide meaningful feedback to the Foundation based on my experience as an
Operator
Ceph would work pretty well for that use case too. We've run a Ceph cluster with two
OSDs, with the replication size set to 2, to back both Cinder and Glance for HA.
Nothing complicated is needed to get it working. Less complicated than DRBD, I
think. You can then also easily scale it out as needed.
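A minimal sketch of that kind of setup. The pool names and placement-group counts here are illustrative, not taken from the original post; size them for your own cluster:

```shell
# Create pools for Glance images and Cinder volumes
# (PG counts are example values).
ceph osd pool create images 64
ceph osd pool create volumes 64

# Keep two replicas of each object, matching a two-OSD cluster.
ceph osd pool set images size 2
ceph osd pool set volumes size 2

# Optional: allow I/O to continue with a single surviving replica.
ceph osd pool set images min_size 1
ceph osd pool set volumes min_size 1
```

With `size 2`, either OSD can fail and the data remains available, which is what gives Cinder and Glance their HA here.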
Thanks,
Ke
Hi Everyone,
We have an update to the UC. Edgar Magana has been approved to be the board
representative to the User Committee and is replacing Subbu Allamaraju.
Welcome Edgar and we look forward to working with you!
Shilla
Yup, it's definitely possible. All Glance nodes will need to share the same
database as well as the same file system. Common ways of sharing the file
system are to mount /var/lib/glance/images either from NFS (like you
mentioned) or Gluster.
I've done both in the past with no issues. The usual cav
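A sketch of the shared-filesystem approach just described. The server name and export path are placeholders, assuming a plain NFS export:

```shell
# Mount the shared image store on every Glance node.
# nfs-server:/export/glance is a placeholder for your export.
mount -t nfs nfs-server:/export/glance /var/lib/glance/images

# Or make it persistent via /etc/fstab:
echo 'nfs-server:/export/glance /var/lib/glance/images nfs defaults,_netdev 0 0' >> /etc/fstab
```

All nodes must also point at the same Glance database, so the image metadata and the files stay in sync.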
Hi Slawek,
we use a shared NFS Export to save images to Glance. That enables HA in (imho)
the simplest way.
For your setup you could use something like an hourly/daily/whenever rsync job
and set the 'second' Glance node to passive/standby in the load balancer. Also
it will be possible to run
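The rsync job could look something like this. The hostname and paths are placeholders, assuming root SSH access from the active node to the standby:

```shell
# e.g. /etc/cron.hourly/glance-sync on the active node (illustrative).
# -a preserves ownership/permissions; --delete keeps the standby's
# image store an exact mirror of the active node's.
rsync -a --delete /var/lib/glance/images/ glance2:/var/lib/glance/images/
```

Note the trade-off: images uploaded between sync runs exist only on the active node until the next rsync.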
Hello,
I want to install OpenStack with at least two Glance nodes (to have HA),
but with a local filesystem as Glance storage. Is it possible to use
something like that in a setup with two Glance nodes? Maybe some of you
already have something like that?
I'm asking because AFAIK the image will be sto
> We have an image promotion process that does this for us. The command I use
> to get images from a specific tenant is:
>
> glance --os-image-api-version 1 image-list --owner=
>
> I'm sure using the v1 API will make some cringe, but I haven't found
> anything similar in the v2 API.
>
I used this