The thing that screwed us up at Ames back in the day was deleting misplaced
data (should that happen). Swift was basically incapable of it at the
time, and Cinder didn't even exist.
Ultimately I ended up heading in the direction of spinning up entirely
separate cloud environments for each cloud. M
I can see a huge problem with your contributing operators... all of them
are enterprise.
Enterprise needs are radically different from those of small to medium
deployers, whom OpenStack has traditionally failed to work well for.
On Tue, Jan 17, 2017 at 12:47 PM, Piet Kruithof
wrote:
> Sorry for the late r
You know how many folks are STILL running Havana OpenStack?
On Mon, Oct 31, 2016 at 2:44 PM, Andreas Jaeger wrote:
> On 10/31/2016 07:33 PM, Lutz Birkhahn wrote:
> > Hi,
> >
> > I have already manually created PDF versions of about 8 of the OpenStack
> Manuals (within about 4-6 hours including s
So project '💩' would be perfectly okay then.
On Wed, Oct 5, 2016 at 5:36 PM, Steve Martinelli
wrote:
> There are some restrictions.
>
> 1. The project name cannot be longer than 64 characters.
> 2. Within a domain, the project name is unique. So you can have project
> "foo" in the "default" dom
Food for nightmares... try to consider how you would handle IP address
mapping around a fiber ring between multiple cloud infrastructures.
On Mon, Oct 3, 2016 at 1:52 PM, Jonathan Proulx wrote:
>
> So my sense from responses so far:
>
> No one is doing unified SDN solutions across clouds and n
I think the best general way to view networking in cloud is WAN vs cloud
LAN.
There's almost always an edge routing env for your cloud environments
(whether they be by region, by policy, or by "Tim is an angry dude and you
don't touch his instances").
Everything beyond that edge is a WAN problem.
I figure if you have entity Y's workloads running on entity X's hardware...
and that's 51% or more of gross revenue... you are a public cloud.
On Mon, Sep 26, 2016 at 11:35 AM, Kenny Johnston
wrote:
> That seems like a strange definition. It doesn't incorporate the usual
> multi-tenan
I'd love to see your results on this. Very interesting stuff.
On Sep 17, 2016 1:37 AM, "Joe Topjian" wrote:
> Hi all,
>
> We're planning to deploy Murano to one of our OpenStack clouds and I'm
> debating the RabbitMQ setup.
>
> For background: the Murano agent that runs on instances requires a
I desperately want to see a failed-deployments talk at the Summit. I'd be glad
to contribute, but we'd need info on a variety of failure states.
On Sep 12, 2016 1:05 PM, "Jonathan D. Proulx" wrote:
>
> I agree this would make a very interesting OPs session.
>
> As many have pointed out it's difficul
Early days was 2 full-time and 2 part-time for a cluster size of a couple
hundred.
On Wed, Sep 7, 2016 at 6:59 PM, Kris G. Lindgren
wrote:
> Hello all,
>
>
>
> I was hoping to poll other operators to see what their average team size
> vs. deployment size is, as I am trying to use this in an interna
There are fundamental longevity questions with SSDs and tuning.
I'd be interested in hearing about that as well.
On Fri, Aug 5, 2016 at 11:23 AM, Edgar Magana
wrote:
> Tim,
>
>
>
> At Workday team we are working on that, our work is for CEPH performance.
> Basically, moving janitor and c
The v1 Helion product was a joke for deployment at scale. I still don't
know whose harebrained idea it was to use OOO there and then, but it was
harebrained at best. From my perspective the biggest issue with Helion
was insane architecture decisions like that one being made with no
adherence t
Vish the virtual machine barista?
On Thu, Jul 7, 2016 at 4:23 PM, Kruithof Jr, Pieter <
pieter.kruithof...@intel.com> wrote:
> Operators,
>
> If you have a few moments, please review the following:
>
> https://review.openstack.org/#/c/326662/14
>
> The intent of the document is to generate a grou
I'll check out giftwrap. Never heard of it, but interesting.
On Thu, Jun 23, 2016 at 7:50 PM, Xav Paice wrote:
> Can I suggest that using the tool https://github.com/openstack/giftwrap
> might make life a bunch easier?
>
> I went down a similar path with building Debs in a venv using
> dh_virt
I know from conversations that a few folks package their Python apps as
distributable virtualenvs. Spotify created dh-virtualenv for this. You
can do it pretty simply by hand.
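For the curious, here's a minimal sketch of the by-hand route (the package
name, version, and paths are just placeholders): create a venv, pip-install
the app and its dependencies into it, and ship the whole directory as one
artifact. The main gotcha is that venvs bake absolute paths into their
scripts, so you generally unpack to the same prefix you built against, which
is the sort of thing dh-virtualenv smooths over.

    import pathlib
    import shutil
    import subprocess
    import sys

    venv_dir = pathlib.Path("/opt/myapp/venv")   # placeholder install prefix
    packages = ["myapp==1.2.3"]                   # placeholder app + pin

    # 1. Create the virtualenv.
    subprocess.run([sys.executable, "-m", "venv", str(venv_dir)], check=True)

    # 2. Install the app and all of its dependencies into the venv.
    subprocess.run([str(venv_dir / "bin" / "pip"), "install", *packages],
                   check=True)

    # 3. Ship the whole environment as a single tarball; wrapping the same
    #    directory in an RPM or deb is the next step up from this.
    shutil.make_archive("myapp-venv", "gztar",
                        root_dir=str(venv_dir.parent),
                        base_dir=venv_dir.name)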
I built a toolchain for building RPMs as distributable virtualenvs, and that
works really well.
What I'd like to do is
I use thermite.
On Wed, Jun 22, 2016 at 5:26 PM, Gilles Mocellin <
gilles.mocel...@nuagelibre.org> wrote:
> Hello,
>
> While digging in nova's database, I found that many objects are not really
> deleted, but instead just marked deleted.
> In fact, it's a general behavior in other projects (cinder
+1, also SSL.
On Tue, Jun 14, 2016 at 4:58 PM, Russell Bryant wrote:
> This is the most common approach I've heard of (doing rate limiting in
> your load balancer).
>
> On Tue, Jun 14, 2016 at 12:10 PM, Kingshott, Daniel <
> daniel.kingsh...@bestbuy.com> wrote:
>
>> We use Haproxy to load balance
PCI compliance / ITAR / TS work all requires isolation. You'd need to
stand up isolated copies of the translation environment for each.
On Fri, May 6, 2016 at 11:50 AM, Jonathan Proulx wrote:
> On Fri, May 06, 2016 at 11:39:03AM -0400, Silence Dogood wrote:
> :this strikes me as a
This strikes me as a really bad idea from a security standpoint... in fact
it would violate pretty much every audit / policy requirement I am aware
of.
-matt
On Fri, May 6, 2016 at 11:32 AM, suresh kumar wrote:
> Thanks Mariano, that really helps.
>
> On Fri, May 6, 2016 at 11:16 AM, Mariano
What you should be looking for is HVM.
On Tue, May 3, 2016 at 3:20 PM, Maish Saidel-Keesing
wrote:
> I would think that the problem is that OpenStack does not really report
> back that you are using KVM - it reports that you are using QEMU.
>
> Even when in nova.conf I have configured virt_type=
+1
On Fri, Mar 4, 2016 at 12:30 PM, Matt Jarvis
wrote:
> +1
>
> On 4 March 2016 at 17:21, Robert Starmer wrote:
>
>> If fixing a typo in a document is considered a technical contribution,
>> then I think we've already cast the net far and wide. ATC as used has
>> become a name implying you're t
Cool!
On Thu, Mar 3, 2016 at 1:39 PM, Mathieu Gagné wrote:
> On 2016-03-03 12:50 PM, Silence Dogood wrote:
> > We did some early affinity work and discovered some interesting problems
> > with affinity and scheduling. =/ by default openstack used to ( may
> > still ) depl
We did some early affinity work and discovered some interesting problems
with affinity and scheduling. =/ By default OpenStack used to (may still)
deploy instances across hosts evenly.
Personally, I think this is a bad approach. Most cloud providers stack
across a couple racks at a time, filling th
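(Side note for anyone who hasn't touched the affinity bits: below is a rough
sketch of how that work is expressed against the Nova API. Credentials and
IDs are placeholders, and the client setup details vary by release. You
create a server group with an affinity or anti-affinity policy and pass it as
a scheduler hint at boot; the scheduler then packs or spreads those instances
accordingly.)

    from keystoneauth1 import loading, session
    from novaclient import client

    # Placeholder credentials/endpoints; swap in your own cloud's values.
    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(auth_url="http://keystone:5000/v3",
                                    username="demo", password="secret",
                                    project_name="demo",
                                    user_domain_name="Default",
                                    project_domain_name="Default")
    nova = client.Client("2", session=session.Session(auth=auth))

    # A server group with an affinity policy ("anti-affinity" forces spread).
    group = nova.server_groups.create(name="stick-together",
                                      policies=["affinity"])

    # Boot into the group via a scheduler hint; image/flavor IDs are
    # placeholders for real ones from your cloud.
    nova.servers.create(name="vm-1", image="IMAGE_ID", flavor="FLAVOR_ID",
                        scheduler_hints={"group": group.id})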
How about just OPS: {$Verified_Count} Physical Nodes
=D
On Thu, Mar 3, 2016 at 12:08 PM, Robert Starmer wrote:
> I setup an etherpad to try to capture this discussion:
>
> https://etherpad.openstack.org/p/OperatorRecognition
>
> R
>
> On Thu, Mar 3, 2016 at 9:04 AM, Robert Starmer wrote:
>
>>
- In-place Full Release upgrades (upgrade an entire cloud from Icehouse
to Kilo, for instance)
This tends to be the most likely scenario, with CI/CD being almost
impossible for anyone using supported OpenStack components (such as SDN /
NAS / other hardware integration pieces).
That's not to
> for Glance. They will be geared around a one box
> install at first.
>
> I'll update the site.
>
> Chris
>
> Sent from my iPhone
>
> On Mar 2, 2016, at 1:07 PM, Silence Dogood wrote:
>
> This is neat man. Any support for versioning?
>
> On Wed, Mar 2,
This is neat, man. Any support for versioning?
On Wed, Mar 2, 2016 at 3:54 PM, wrote:
> Hi all;
>
> I'm still a bit new to the world of stacking, but like many of you I have
> suffered thru the process of manual Openstack installation.
>
> I've been a developer for decades, so please excuse me f
I believe Eric Windisch did at one point run OpenStack on a Pi.
The problem is that it's got so little RAM, and no hypervisor. Also, at
least it USED to be unable to run Docker, since Docker wasn't
cross-compiled to ARM at the time.
It's a terrible target for OpenStack. NUCs on the other hand...
From a purely benchmarking aspect it makes sense. It's like a burn-in test
use case, and that's about the only way it makes sense.
On Fri, Feb 19, 2016 at 5:09 PM, Kevin Bringard (kevinbri) <
kevin...@cisco.com> wrote:
> Sorry for top posting.
>
> Just wanted to say I agree with Monty (and didn't want you to