At Overstock we have a number of separate OpenStack deployments in different
facilities that are completely separated from each other, with no shared
services between them. Some of the separation is due to the kind of instances
they contain ("Dev/Test" vs "Prod", for example), but it is largely due
[top posting on this one]
Hi,
When you write "OpenStack instances", I'm assuming that you're referring
to OpenStack deployments, right?
We have different deployments based on geographic regions for
performance reasons, but certainly not by department. Each OpenStack
project is tied to a department
Hi folks,
There's a session coming up at the summit where we can discuss Ceilometer and
give/get some feedback, but I wanted to highlight some of the work we've been
doing, specifically relating to storing measurement values. As many of you have
heard/read, we're building this thing called Gnocchi.
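As a rough illustration of the idea behind storing measurement values at a fixed granularity (a toy sketch only, not Gnocchi's actual code or storage format):

```python
from collections import defaultdict

# Toy sketch of pre-aggregated time-series storage: raw measures are
# bucketed into fixed-width windows and only the aggregate is kept.
# This illustrates the general technique, not Gnocchi's implementation.
def aggregate(measures, granularity=60):
    """measures: list of (unix_timestamp, value); returns mean per bucket."""
    buckets = defaultdict(list)
    for ts, value in measures:
        buckets[ts - ts % granularity].append(value)
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

print(aggregate([(0, 1.0), (30, 3.0), (60, 5.0)]))  # {0: 2.0, 60: 5.0}
```

Storing the aggregate per bucket keeps storage bounded no matter how frequently measures arrive.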
Hi All,
We have 3 API Guidelines that are ready for a final review.
1. Metadata guidelines document
https://review.openstack.org/#/c/141229/
2. Tagging guidelines
https://review.openstack.org/#/c/155620/
3. Guidelines on using date and time format
https://review.openstack.org/#/c/159892/
If th
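Guideline 3 concerns date and time formats; a minimal sketch of what such a guideline typically standardizes on (assuming the common ISO 8601 / RFC 3339 style, since the linked review isn't reproduced here):

```python
from datetime import datetime, timezone

# Illustrative only: serializing an API timestamp in ISO 8601 form with
# an explicit UTC offset, the style date-and-time guidelines usually favor.
now = datetime(2015, 4, 22, 15, 32, 0, tzinfo=timezone.utc)
print(now.isoformat())  # 2015-04-22T15:32:00+00:00
```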
A Juno feature may help with this, utilization-based scheduling:
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling
That helps when landing the instance, but doesn't help if utilization
changes /after/ instances have landed; it could, however, help with a resize
action to relocate the
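The general idea behind the blueprint can be sketched as follows (a self-contained toy, not Nova's real weigher API; host names and utilization numbers are made up):

```python
# Utilization-based weighing in miniature: hosts reporting lower measured
# CPU utilization are preferred, so new instances land on less-loaded
# hypervisors rather than on hosts that merely have free "slots".
def weigh_hosts(hosts):
    """hosts: list of (name, cpu_utilization in [0, 1]); best first."""
    return sorted(hosts, key=lambda h: h[1])

hosts = [("hv1", 0.85), ("hv2", 0.20), ("hv3", 0.55)]
print(weigh_hosts(hosts)[0][0])  # hv2
```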
On 22/04/2015 15:32, Adam Young wrote:
It's been my understanding that many people are deploying small
OpenStack instances as a way to share the hardware owned by their
particular team, group, or department. The Keystone instance
represents ownership, and the identity of the users comes from
In addition to these factors, colocation happens to be another key source of
noise. By colocation I mean VMs doing the same or similar work running on the
same hypervisor. This happens in low-capacity situations when the scheduler
cannot enforce anti-affinity.
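A hypothetical helper for spotting such colocation from placement data (instance, group, and hypervisor names are illustrative, and this is not an OpenStack API):

```python
from collections import defaultdict

# Given (instance, group, hypervisor) placements, report groups whose
# members ended up on the same hypervisor -- the situation described
# above when anti-affinity could not be enforced.
def find_colocated(placements):
    seen = defaultdict(set)  # (group, hypervisor) -> set of instances
    for instance, group, hypervisor in placements:
        seen[(group, hypervisor)].add(instance)
    return {k: sorted(v) for k, v in seen.items() if len(v) > 1}

placements = [("vm1", "web", "hv1"), ("vm2", "web", "hv1"), ("vm3", "web", "hv2")]
print(find_colocated(placements))  # {('web', 'hv1'): ['vm1', 'vm2']}
```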
Subbu
> On Apr 22, 2015, at
(sorry for cross-post, but this is appropriate to both audiences)
Hey y'all!
For those of you who don't know, StackTach is a notification-based debugging,
monitoring, and usage tool for OpenStack.
We're happy to announce that we've recently rolled StackTach.v3 into production
at one of the Rax
This is a case for a cross-project cloud (institutional?). It costs more to run
two little clouds than one bigger one, both in terms of manpower and, in cases
like these, under-utilized resources.
#3 is interesting, though. If there is to be an OpenStack app catalog, it would
be important to be
It's been my understanding that many people are deploying small OpenStack
instances as a way to share the hardware owned by their particular team,
group, or department. The Keystone instance represents ownership, and
the identity of the users comes from a corporate LDAP server.
Is there much d
Yes, it really depends on the backing technique used. We're using SSDs and
raw images, so IO is not an issue.
But memory is more important: if you lack IO capability you're left with
slow guests. If you lack memory you're left with dead guests (hello, OOM
killer).
BTW: swap is needed not to swapin/sw
Hello operators,
We have a running installation of OpenStack with block storage. We are
currently having a problem with our storage appliance and we would like to
migrate the instances off this storage appliance. To do that, we are thinking
of creating a new availability zone with new storage appl
Cross-posting to operators@ as I think they are rather interested in the
$subject :-)
On 21/04/2015 23:42, Artom Lifshitz wrote:
Hello,
I'd like to gauge acceptance of introducing a feature that would give operators
a config option to perform real database deletes instead of soft deletes.
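To make the distinction concrete, here is a sketch of the soft-delete pattern versus a real delete (table and column names are illustrative, not Nova's actual schema):

```python
import sqlite3

# Soft delete keeps the row and flags it; a real delete removes it.
# The proposed config option would let operators choose the latter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, deleted INTEGER DEFAULT 0)")
conn.execute("INSERT INTO instances (id) VALUES (1), (2)")

# Soft delete: row 1 stays in the table, marked as deleted.
conn.execute("UPDATE instances SET deleted = 1 WHERE id = 1")
# Real delete: row 2 is actually removed from the database.
conn.execute("DELETE FROM instances WHERE id = 2")

remaining = conn.execute("SELECT id, deleted FROM instances").fetchall()
print(remaining)  # [(1, 1)]
```

With soft deletes the table grows without bound and "deleted" data lingers; real deletes avoid both at the cost of losing the audit trail.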