Li Ma,
This is interesting. In general I am in favor of expanding the scope of any
read/write separation capabilities that we have. I'm not clear on what exactly
you are proposing; hopefully you can answer some of my questions inline.
The thing I had thought of immediately was detection of whether an
ur proposal.
-Mike
[1] https://review.openstack.org/#/c/93466/
On Sun, Aug 10, 2014 at 10:30 PM, Li Ma wrote:
> > not sure if I said that :). I know extremely little about galera.
>
> Hi Mike Bayer, I'm so sorry I mistook you for Mike Wilson in the last
> post. :-) Also, say sorry t
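For context on the read/write separation idea in this thread: the usual pattern is to route reads that can tolerate replication lag to a replica connection and everything else to the primary. Below is a minimal stdlib-only sketch using two sqlite3 connections to stand in for a primary/replica pair; the `get_session` helper and the `use_slave` flag echo nova's convention, but the code itself is illustrative and not from any patch referenced here.

```python
import sqlite3

# Both connections point at the same shared-cache in-memory db,
# standing in for a real primary/replica pair.
URI = "file:demo?mode=memory&cache=shared"
writer = sqlite3.connect(URI, uri=True)
reader = sqlite3.connect(URI, uri=True)

def get_session(use_slave=False):
    """Mimic nova's use_slave flag: pick the reader only when the
    caller has said stale data is acceptable."""
    return reader if use_slave else writer

writer.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)")
writer.execute("INSERT INTO instances (name) VALUES ('vm-1')")
writer.commit()

# A read that tolerates lag goes to the "replica".
rows = get_session(use_slave=True).execute(
    "SELECT name FROM instances").fetchall()
print(rows)  # [('vm-1',)]
```

The interesting design work in the real patches is deciding which call sites may safely set the flag, not the routing itself.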
reviewed and +2'd this already.
Thanks,
Mike Wilson
[1] https://review.openstack.org/#/c/103064/
[2]
http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/juno/juno-slaveification.rst
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstac
I've been thinking about this use case for a DHT-like design. I think I
want to do what other people have alluded to here and try to intercept
problematic requests like this one in some sort of "pre sending to
ring-segment" stage. In this case the "pre-stage" could decide to send this
off to a sch
I agree heartily with the availability and resiliency aspect. For me, that
is the biggest reason to consider a NoSQL backend. The other potential
performance benefits are attractive to me also.
-Mike
On Wed, Nov 20, 2013 at 9:06 AM, Soren Hansen wrote:
> 2013/11/18 Mike Spreitzer :
> > There
ould like to collaborate regardless.
>
> Amir
>
>
>
> On Nov 19, 2013, at 3:31 AM, Kanthi P wrote:
>
> Hi All,
>
> Thanks for the response!
> Amir,Mike: Is your implementation being done according to ML2 plugin
>
> Regards,
> Kanthi
>
>
>
Hotel information has been posted. Look forward to seeing you all in
February :-).
-Mike
On Mon, Nov 25, 2013 at 8:14 AM, Russell Bryant wrote:
> Greetings,
>
> Other groups have started doing mid-cycle meetups with success. I've
> received significant interest in having one for Nova. I'm no
could watch those two talks and comment.
The bugs are probably separate from the dispatch router discussion, but it
does dampen my enthusiasm a bit not knowing how to fix issues beyond scale
:-(.
-Mike Wilson
[1]
http://www.openstack.org/summit/portland-2013/session-videos/presentation/using-op
to be some
flag to disable/enable this behavior? Maybe I am oversimplifying things...
you tell me.
-Mike Wilson
On Mon, Dec 9, 2013 at 3:01 PM, Vasudevan, Swaminathan (PNB Roseville) <
swaminathan.vasude...@hp.com> wrote:
> Hi Folks,
>
> We are in the process of defining the API fo
re specifically and other members of
the team on this? I would also be happy to pitch in towards whatever
solution is decided on provided we can rescue the poor deployers :-).
-Mike Wilson
[1] https://bugs.launchpad.net/neutron/+bug/1214115
[2] https://review.open
On Mon, Mar 3, 2014 at 3:10 PM, Sergey Skripnick wrote:
>
>
>
> I can run multiple compute service in same hosts without containers.
>> Containers give you a nice isolation and another way to try a more
>> realistic scenario, but my initial goal now is to be able to simulate many
>> fake compute
Hangouts worked well at the nova mid-cycle meetup. Just make sure you have
your network situation sorted out beforehand. Bandwidth and firewalls are
what come to mind immediately.
-Mike
On Tue, Mar 11, 2014 at 9:34 AM, Tom Creighton
wrote:
> When the Designate team had their mini-summit, the
Undeleting things is an important use case in my opinion. We do this in our
environment on a regular basis. In that light I'm not sure that it would be
appropriate just to log the deletion and get rid of the row. I would like
to see it go to an archival table where it is easily restored.
-Mike
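Nova's shadow tables work roughly along these lines; below is a stdlib-only approximation of the archive/restore flow being proposed, with table and column names made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE shadow_instances (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO instances (name) VALUES ('vm-1')")

def archive(instance_id):
    # Move the row out of the hot table instead of soft-deleting it,
    # so normal queries never scan dead rows.
    conn.execute(
        "INSERT INTO shadow_instances SELECT * FROM instances WHERE id=?",
        (instance_id,))
    conn.execute("DELETE FROM instances WHERE id=?", (instance_id,))

def restore(instance_id):
    # "Undelete": copy the row back from the archival table.
    conn.execute(
        "INSERT INTO instances SELECT * FROM shadow_instances WHERE id=?",
        (instance_id,))
    conn.execute("DELETE FROM shadow_instances WHERE id=?", (instance_id,))

archive(1)
assert conn.execute("SELECT COUNT(*) FROM instances").fetchone()[0] == 0
restore(1)
print(conn.execute("SELECT name FROM instances").fetchall())  # [('vm-1',)]
```

The restore path is the part that real deployments exercise, which is why keeping the archived row whole matters more than just logging the delete.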
O
at 12:46 PM, Johannes Erdfelt wrote:
> On Tue, Mar 11, 2014, Mike Wilson wrote:
> > Undeleting things is an important use case in my opinion. We do this in
> our
> > environment on a regular basis. In that light I'm not sure that it would
> be
> > appropriate just to
The restore use case is for sure inconsistently implemented and used. I
think I agree with Boris that we should treat it as separate and just move
on with cleaning up soft delete. I imagine most deployments don't like
having most of the rows in their tables be useless, making db access slow?
That being sa
After a read-through, it seems pretty good.
+1
On Thu, Mar 13, 2014 at 1:42 PM, Boris Pavlovic wrote:
> Hi stackers,
>
> As a result of discussion:
> [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion
> (step by step)
> http://osdir.com/ml/openstack-dev/2014-03/msg00947.html
> > >
> > > So what we should think about is:
> > > 1) How to implement restoring functionally in common way (e.g.
> framework
> > > that will be in oslo)
> > > 2) Split of work of getting rid of soft deletion in steps (that I
> > > a
Hi Yatin,
I'm glad you are thinking about the drawbacks the zmq-receiver causes; I
want to give you a reason to keep the zmq-receiver and get your feedback.
The way I think about the zmq-receiver is a tiny little mini-broker that
exists separate from any other OpenStack service. As such, it's
i
ve a router in
front of them all or reroute requests, but the API set is not very large, so
it's a very doable task. That being said, in our environment we use a single
neutron-server with another standing by as backup. It's not as performant
as we'd like it to be, but it hasn't stopped us
+1 to what Chris suggested. Zombie state that doesn't affect quota, but
doesn't create more problems by trying to reuse resources that aren't
available. That way we can tell the customer that things are deleted, but
we don't need to break our cloud by screwing up future scheduling requests.
-Mike
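One way to read that suggestion (my interpretation only; the state names are invented): a terminal "zombie" state that stops counting against the tenant's quota, but whose resources are not offered back to the scheduler until cleanup is confirmed.

```python
from enum import Enum

class VMState(Enum):
    ACTIVE = "active"
    ZOMBIE = "zombie"    # user sees "deleted"; resources not yet reclaimed
    DELETED = "deleted"  # cleanup confirmed; resources reusable

def quota_usage(instances):
    # Zombies no longer count against the tenant's quota...
    return sum(1 for s in instances if s is VMState.ACTIVE)

def reclaimable(instances):
    # ...but only fully deleted instances free capacity for the scheduler.
    return sum(1 for s in instances if s is VMState.DELETED)

vms = [VMState.ACTIVE, VMState.ZOMBIE, VMState.DELETED]
print(quota_usage(vms), reclaimable(vms))  # 1 1
```

The point of splitting the two counts is exactly the trade-off above: the customer stops paying immediately, while the cloud never schedules onto capacity it cannot actually deliver.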
le search takes
me to a bunch of EE and manufacturing engineering type papers. I'll do more
research on this.
However, this does fit under performance for sure; it is not unrelated at
all. If there is a chance to incorporate this into a performance session I
think this is where it belongs.
-Mi
So, I observe a consensus here of "long migrations suck"; +1 to that.
I also observe a consensus that we need to get no-downtime schema changes
working. It seems super important. Also +1 to that.
Getting back to the original review, it got -2'd because Michael would like
to make sure that the bene
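For anyone unfamiliar with the no-downtime approach mentioned above, it is usually done expand/contract style: add the new column as nullable so old code keeps working, backfill while old and new code both run, and only later enforce constraints or drop the old column. A toy sqlite3 illustration (table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO instances (name) VALUES ('vm-1')")

# Expand: add the new column as nullable so old code keeps working.
conn.execute("ALTER TABLE instances ADD COLUMN display_name TEXT")

# Backfill (in batches on a real db) while both code versions are live.
conn.execute(
    "UPDATE instances SET display_name = name WHERE display_name IS NULL")

# Contract would come in a later release, once nothing reads the old
# column; that step is elided here.
row = conn.execute("SELECT display_name FROM instances").fetchone()
print(row)  # ('vm-1',)
```

The schema is never in a state that either the old or the new code can't run against, which is what makes the migration downtime-free.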
+1
I also have tenants asking for this :-). I'm interested to see a blueprint.
-Mike
On Tue, Oct 29, 2013 at 1:24 PM, Jay Pipes wrote:
> On 10/29/2013 02:25 PM, Justin Hammond wrote:
>
>> We have been considering this and have some notes on our concept, but we
>> haven't made a blueprint for
me more work on our end to do properly.
All that being said, I am very interested in what NoSQL DBs can do for us.
-Mike Wilson
[1] https://review.openstack.org/#/c/43151/
[2] https://blueprints.launchpad.net/nova/+spec/db-mysqldb-impl
On Mon, Nov 18, 2013 at 12:35 PM, Mike Spreitzer wrote:
Hi Kanthi,
Just to reiterate what Kyle said, we do have an internal implementation
using flows that looks very similar to security groups. Jun Park was the
guy that wrote this and is looking to get it upstreamed. I think he'll be
back in the office late next week. I'll point him to this thread whe
thi
>
>
> On Tue, Nov 19, 2013 at 1:43 AM, Mike Wilson wrote:
>
>> Hi Kanthi,
>>
>> Just to reiterate what Kyle said, we do have an internal implementation
>> using flows that looks very similar to security groups. Jun Park was the
>> guy that wrote this a
Just some added info for that talk, we are using qpid as our messaging
backend. I have no data for RabbitMQ, but our schedulers are _always_
behind on processing updates. It may be different with rabbit.
-Mike
On Tue, Jul 23, 2013 at 1:56 PM, Joe Gordon wrote:
>
> On Jul 23, 2013 3:44 PM, "Ian
Again I can only speak for qpid, but it's not really a big load on the
qpidd server itself. I think the issue is that the updates come in serially
into each scheduler that you have running. We don't process those quickly
enough for it to do any good, which is why the lookup from the db. You can
see thi
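To make the pattern concrete (all names here are mine; this is not the actual scheduler code): compute nodes persist their state to the db and also fan it out to every scheduler; each scheduler drains that update stream serially, and when it falls too far behind it reads the authoritative row straight from the db instead.

```python
from collections import deque

db = {}            # authoritative state, written by each compute node
cache = {}         # a scheduler's view, built from fanout updates
updates = deque()  # fanout messages, processed serially

def compute_report(host, state):
    db[host] = state                # compute nodes persist their state...
    updates.append((host, state))   # ...and fan it out to schedulers

def host_state(host, max_lag=10):
    if len(updates) > max_lag:
        # Too far behind to trust the cache: fall back to the db lookup.
        return db[host]
    while updates:
        h, s = updates.popleft()
        cache[h] = s
    return cache[host]

for ram in range(1024, 1044):       # burst of 20 reports
    compute_report("host1", {"free_ram": ram})

print(host_state("host1"))  # {'free_ram': 1043} via the db fallback
```

The serial drain is the bottleneck being described: each scheduler consumes the same firehose one message at a time, so the db read ends up fresher than the cache.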
that sudo doesn't do this type of thing
already. It _must_ be something that everyone wants. But #2 may be quicker
and easier to implement, my $.02.
-Mike Wilson
On Thu, Jul 25, 2013 at 2:21 PM, Joe Gordon wrote:
> Hi All,
>
> We have recently hit some performance issues with
s approach, I have really only discussed
it with Devananda van der Veen briefly, but he was extremely helpful. This
will hopefully get some more eyes on it, so yeah, fire away!
-Mike Wilson
So back at the Portland summit, Jun Park and I presented about some of
our difficulties scaling OpenStack with the Folsom release:
http://www.openstack.org/summit/portland-2013/session-videos/presentation/using-openstack-in-a-traditional-hosting-environment
.
One of the main obstacles we ran i