hopefully very small change in
drivers to support it. Technically I don't see it as an issue.
However, is it a change we'd be willing to accept? Is there any good
reason not to do this? Are there any less esoteric workflows which
might use this feature?
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
On Wed, 22 Aug 2018 at 10:47, Gorka Eguileor wrote:
>
> On 20/08, Matthew Booth wrote:
> > For those who aren't familiar with it, nova's volume-update (also
> > called swap volume by nova devs) is the nova part of the
> > implementation of cinder's live migration
ependent before doing this, at which
point the volume itself should be migratable?
If we can establish that there's an acceptable alternative to calling
volume-update directly for all use-cases we're aware of, I
On Mon, 13 Aug 2018 at 16:56, Chris Friesen wrote:
>
> On 08/13/2018 08:26 AM, Jay Pipes wrote:
> > On 08/13/2018 10:10 AM, Matthew Booth wrote:
>
> >> I suspect I've misunderstood, but I was arguing this is an anti-goal.
> >> There's no reason to do
gress if you fix the
bug. The regression would be anything user-facing which queries by
metadata key. What does that?
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
On Mon, 13 Aug 2018 at 15:27, Jay Pipes wrote:
>
> On 08/13/2018 10:10 AM, Matthew Booth wrote:
> > On Mon, 13 Aug 2018 at 14:05, Jay Pipes wrote:
> >>
> >> On 08/13/2018 06:06 AM, Matthew Booth wrote:
> >>> Thanks mriedem for answering my previous
On Mon, 13 Aug 2018 at 14:05, Jay Pipes wrote:
>
> On 08/13/2018 06:06 AM, Matthew Booth wrote:
> > Thanks mriedem for answering my previous question, and also pointing
> > out the related previous spec around just forcing all metadata to be
> > lowercase:
> >
do with Rajesh's patch?
Or should we ask Rajesh to expand his patch into a series covering
other metadata?
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
hange. Can anybody tell me if any of those
jobs ran the included functional test against a MySQL DB?
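For reference, a hedged sketch of how a functional test can opt into
running against real MySQL via oslo.db's opportunistic test machinery
(the test class and method here are illustrative, not the ones in the
change):

from oslo_db.sqlalchemy import test_base


class TestMetadataKeyCase(test_base.MySQLOpportunisticTestCase):
    def test_keys_differing_only_in_case(self):
        # Exercise key lookups under MySQL's default case-insensitive
        # collation; skipped automatically when MySQL isn't available.
        pass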
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
On 6 June 2018 at 13:55, Jay Pipes wrote:
> On 06/06/2018 07:46 AM, Matthew Booth wrote:
>>
>> TL;DR I think we need to entirely disable swap volume for multiattach
>> volumes, and this will be an api breaking change with no immediate
>> workaround.
>>
>>
er 2
Combined with a patch to nova to disallow swap_volume on any
multiattach volume, this would then be possible if inconvenient.
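To make that concrete, a minimal sketch of the kind of guard I mean (the
helper name and its placement are illustrative, not the actual patch):

from webob import exc


def _check_swap_volume_allowed(volume):
    # With other attachments live we can't safely copy data underneath
    # them, so refuse the operation outright.
    if volume.get('multiattach'):
        raise exc.HTTPBadRequest(
            explanation='swap_volume is not supported for multiattach '
                        'volumes')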
Regardless of any other changes, though, I think it's urgent that we
disable the ability to swap_volume a multiattach volume because we
don't want users to st
On 19 April 2018 at 16:46, Chris Friesen wrote:
> On 04/19/2018 08:33 AM, Jay Pipes wrote:
>>
>> On 04/19/2018 09:15 AM, Matthew Booth wrote:
>>>
>>> We've had inconsistent naming of recreate/evacuate in Nova for a long
>>> time, and it will
On 19 April 2018 at 15:33, Jay Pipes wrote:
> On 04/19/2018 09:15 AM, Matthew Booth wrote:
>>
>> We've had inconsistent naming of recreate/evacuate in Nova for a long
>> time, and it will persist in a couple of places for a while more.
>> However, I've propo
t you, please
follow the above in case it merges.
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
can support
> configdrive properly.
I was going to ask this. Even if the contents of the disk can't be
transferred in advance... how does ironic do this? There must be a
way.
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
team, but I for one would oppose this
> alternative.
>
> Cheers,
> Gorka.
>
server is rebuilt, and the volume is not
deleted. The user will still lose their data, of course, but that's implied
by the rebuild they explicitly requested. The volume id will remain the
same.
[1] I suspect this would require new functionality in cinder to
re-initialize from image.
Matt
--
Matthew Booth
Nova host Z asks cyborg for a local function Y and blocks
* Cyborg hopefully returns function Y which is already available
* If not, Cyborg reprograms a function Y, then returns it
Can anybody correct me/fill in the gaps?
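For concreteness, that flow as a sketch (every name here is hypothetical;
I don't claim this is cyborg's actual API):

def get_local_function(cyborg, host, function_type):
    # Ask cyborg for a function of this type already programmed on the
    # host, blocking until it responds.
    fn = cyborg.get_function(host, function_type)
    if fn is None:
        # Nothing suitable available: have cyborg reprogram a device with
        # the requested function, then return it.
        fn = cyborg.program_function(host, function_type)
    return fn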
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
On 31 January 2018 at 16:32, Matt Riedemann wrote:
> On 1/31/2018 7:30 AM, Matthew Booth wrote:
>
>> Could I please have some eyes on this bugfix:
>> https://review.openstack.org/#/c/462521/ . I addressed an issue raised
>> in August 2017, and it's had no negativ
Could I please have some eyes on this bugfix:
https://review.openstack.org/#/c/462521/ . I addressed an issue raised in
August 2017, and it's had no negative feedback since. It would be good to
get this one finished.
Thanks,
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
led on your compute hosts.
But to reiterate, ideally your rescue image would support cloud-init and
you would use a config disk.
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
-encrypted volume.
>
In the context of the above, I don't think this is a priority as clearly
nobody is currently doing it. There's already a bug to track the problem in
libvirt, which is linked in a code comment. Admittedly that BZ is
unnecessarily private, which I noted in revi
On 9 January 2018 at 15:28, Matthew Booth wrote:
> In summary, the patch series is here:
>
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/local-disk-serial-numbers
>
> The bottom 3 patches, which add BDM.uuid have landed. Th
t type and path independent of libvirt config
https://review.openstack.org/#/c/530786/
Don't generate fake disk_info in swap_volume
https://review.openstack.org/#/c/530787/
Local disk serial numbers for the libvirt driver
https://review.openstack.org/#/c/529380/
Thanks,
Matt
--
Matthew Booth
ers for the libvirt driver
https://review.openstack.org/529329 Pass disk_info dict to libvirt_info
https://review.openstack.org/529380 Local disk serial numbers for the
libvirt driver
Here we finally make the libvirt driver-specific changes to expose BDM uuid
as a serial number for local disks.
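As a hedged illustration of the end result (attribute names follow nova's
libvirt config objects; the uuid and paths are made up):

from nova.virt.libvirt import config as vconfig

bdm_uuid = '00000000-0000-0000-0000-000000000001'  # illustrative BDM.uuid

disk = vconfig.LibvirtConfigGuestDisk()
disk.source_type = 'file'
disk.source_path = '/var/lib/nova/instances/<instance-uuid>/disk'
disk.target_dev = 'vda'
disk.target_bus = 'virtio'
# Rendered as <serial> in the guest disk XML; the guest then sees the
# disk under /dev/disk/by-id/ keyed on this value.
disk.serial = bdm_uuid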
>> https://specs.openstack.org/openstack/nova-specs/specs/pike/
>> approved/list-instances-using-searchlight.html
>> [3] https://review.openstack.org/#/c/463618/
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
ution looks very complex for both developers and
operators, and very fragile. I think we'd do better to go with a relatively
simple solution like this one first, and only go a couple of orders of
magnitude more complex if it turns out to be absolutely essential.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
. Perhaps not updating instance.host until after finish_migration.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
a
given instance. It would be set back to NULL when the migration was
complete, and we'd at least have an opportunity to do something explicit
with migrations in an error state.
In the meantime I'm going to look for more backportable avenues to fix
this. Perhaps not updating instance.h
recall was fixing up tests which assumed
sub-second timestamp granularity which MySQL did not support at the time
(but may now).
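(For what it's worth, MySQL 5.6.4+ does support fractional seconds, but
only when a column asks for them explicitly; a sketch using SQLAlchemy's
MySQL dialect, with an illustrative model:)

from sqlalchemy import Column, Integer
from sqlalchemy.dialects.mysql import DATETIME
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Thing(Base):  # illustrative model, not one of nova's
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)
    # fsp=6 requests microsecond precision; a bare DATETIME truncates to
    # whole seconds, which is exactly what tripped those tests up.
    created_at = Column(DATETIME(fsp=6))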
IIRC the series died because we killed the fixture I was using in oslo.db
without replacement before my series finished landing. Fundamentally wasn't
nge.
If anybody would like to slog through some of the above and add a second +2
I'd be very grateful. There's plenty more in the queue after those!
Thanks,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
tter
than this one then I'm in favour. However, I think we can try this one with
limited risk and initial up-front effort.
Thanks,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
disks an instance should
have, and how large they should be. Where to put them is down to the
driver. If we're modelling this outside the driver and at least 2 drivers
are implementing it, I wonder if we shouldn't be implementing storage
policy at a higher level than the driver.
T
ly at least a couple of years.
The hypervisor is a (the?) critical component of any cloud deployment.
Objectively, it's bizarre that we expect people to deploy our brand new
code to work round things that were fixed in the hypervisor
On Fri, Sep 30, 2016 at 4:38 PM, Murray, Paul (HP Cloud) wrote:
>
> On 27/09/2016, 18:12, "Daniel P. Berrange" wrote:
>
> >On Tue, Sep 27, 2016 at 10:40:34AM -0600, Chris Friesen wrote:
> >> On 09/27/2016 10:17 AM, Matthew Booth wrote:
>
s is not implemented/supported?
>
> -Viktor
>
https://review.openstack.org/#/c/347161/ .
I raised this in the live migration sub-team meeting, and the immediate
response was understandably conservative. I think this solves more problems
than it creates, though, and it would result in Nova's libvirt driver
getting a bit smaller and a bit simp
+2 from a maintainer only requires a single
+2 from a core.
We could implement this incrementally by defining a couple of pilot
subsystem maintainer domains.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
> [1] http://paste.openstack.org/show/198298/
>
> [2] http://paste.openstack.org/show/568983/
>
em? I mashed all the links I could find seemingly related to gerrit
settings and I couldn't find anything which looked promising.
Thanks again,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
any other good way to achieve what I'm looking
for which doesn't involve maintaining my own bot list? If not, would it be
feasible to add something?
Thanks,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
rlier test patches, the patch bombs I keep dropping on gerrit will
get a
bit smaller ;)
Thanks,
Matt
[1] https://blueprints.launchpad.net/nova/+spec/libvirt-instance-storage
[2] Flat, Qcow2, Lvm, Rbd, Ploop
[3] For recent examples see stable libvirt rescue, and device tagging.
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
https://review.openstack.org/#/c/323761/2/nova/virt/libvirt/driver.py@4190
>
> --
> Regards, Markus Zoeller (markus_z)
>
>
o.
> >
> > This data structure is actually described pretty well in the comment
> block
> > at the top of libvirt/blockinfo.py. It is internal to the libvirt driver.
> > It contains:
> >
> > {
> > 'disk_bus': the default bus used by disks
'type': the device type (e.g. 'disk', 'cdrom',
'floppy', 'fs', or 'lun')
=== keys below are optional, and may not be present
'format': Used to format swap/ephemeral disks before passing to
instance (e.g. 'swap', 'ext4')
'boot_index': the 1-based boot index of the disk.
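Pulling that together, a single mapping entry looks something like this
(values are illustrative; the comment block in
nova/virt/libvirt/blockinfo.py remains the authoritative description):

disk_mapping_entry = {
    'bus': 'virtio',    # the guest bus for this disk
    'dev': 'vda',       # the device name within the guest
    'type': 'disk',     # 'disk', 'cdrom', 'floppy', 'fs', or 'lun'
    # optional keys:
    'format': 'ext4',   # format for swap/ephemeral disks
    'boot_index': '1',  # 1-based boot index of the disk
}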
the migrating instance would be
> "lost", allowing a newly-scheduled instance to claim the same resources
> (PCI devices, pinned CPUs, etc.)
>
> Chris
>
think this is the way to go. I also think we need to entirely divorce this
functionality from the image cache.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
On Tue, May 24, 2016 at 11:06 AM, John Garbutt wrote:
> On 24 May 2016 at 10:16, Matthew Booth wrote:
> > During its periodic task, ImageCacheManager does a checksum of every
> image
> > in the cache. It verifies this checksum against a previously stored
> value,
> >
dy be sad if I deleted it?
Matt
[1] Incidentally, there also seems to be a bug in this implementation, in
that it doesn't hold the lock on the image itself at any point during the
hashing process, meaning that it cannot guarantee that the image has
finished downloading yet.
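For illustration, here's roughly what hashing under the same per-image
lock as the download path might look like (the lock name and helper are
illustrative, not nova's actual code):

import hashlib

from oslo_concurrency import lockutils


def checksum_image(path, image_id):
    # Hold the per-image lock while hashing so we can't race a download
    # that hasn't finished yet.
    with lockutils.lock('nova-storage-%s' % image_id, external=True):
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(1 << 16), b''):
                h.update(chunk)
        return h.hexdigest()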
--
Matthew Booth
re currently
working on cranking through these backend by backend. I'll provide a weekly
progress update in the live migration meeting.
TL;DR Core reviewers: please review the first 5 patches listed above. There
will be cake.
Thanks,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
On Fri, Apr 29, 2016 at 2:47 AM, Eli Qiao wrote:
> hi team,
>
> Is there any requirement that all compute nodes' instance_dir should be the same?
>
Yes. This is assumed in many places, certainly in cold migration/resize.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
't provide storage locking, so we will
have to implement that for each backend.
I don't want to spend too long on the spec. The only thing worth of
discussion is the image cache, I guess.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
Thoughts?
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
ellent way to address the concerns and
> balance all of the diverse needs we have.
>
> Thank you very much!
>
>
On Tue, Jan 19, 2016 at 8:47 PM, Fox, Kevin M wrote:
> One feature I think we would like to see that could benefit from LVM is
> some kind of multidisk support with better fault tolerance
>
> For example:
> Say you have a node, and there are 20 vm's on it, and thats all the disk
> io it could
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
master and
liberty after some delays in the gate. Given the importance of the fix I
suspect that most/all downstream distributions will have already patched
(certainly Red Hat has), but it would be good to have them in upstream
stable.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
people
to continue adding storage-related features to the libvirt driver.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
I wrote this a while back, which implements 'migrate everything off this
compute host' in the most robust manner I could come up with using only the
external api:
https://gist.github.com/mdbooth/163f5fdf47ab45d7addd
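Stripped of all the polling and error handling that make the real thing
robust, the core of it is just this kind of loop (a sketch, not the gist
itself; credentials and host name are illustrative):

from novaclient import client as nova_client

nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                          'http://keystone:5000/v2.0')
# Live migrate everything off compute-1, letting the scheduler pick the
# destinations.
for server in nova.servers.list(search_opts={'host': 'compute-1',
                                             'all_tenants': 1}):
    nova.servers.live_migrate(server, host=None,
                              block_migration=False,
                              disk_over_commit=False)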
It obviously overlaps considerably with host-servers-migrate, which is
supposed t
On Tue, Nov 10, 2015 at 6:46 PM, Joshua Harlow
wrote:
> Matthew Booth wrote:
>
>> My patch to MessageHandlingServer is currently being reverted because it
>> broke Nova tests:
>>
>> https://review.openstack.org/#/c/235347/
>>
>> Specifically it causes
My patch to MessageHandlingServer is currently being reverted because it
broke Nova tests:
https://review.openstack.org/#/c/235347/
Specifically it causes a number of tests to take a very long time to
execute, which ultimately results in the total build time limit being
exceeded. This is very eas
Accidentally sent this privately.
-- Forwarded message --
From: Matthew Booth
Date: Fri, Oct 9, 2015 at 6:14 PM
Subject: Re: [openstack-dev] [nova][mistral] Automatic evacuation as a long
running task
To: "Deja, Dawid"
On Thu, Oct 8, 2015 at 12:51 PM, Deja, Dawid wr
On Fri, Sep 25, 2015 at 3:44 PM, Ihar Hrachyshka
wrote:
> Hi all,
>
> releases are approaching, so it’s the right time to start some bike
> shedding on the mailing list.
>
> Recently I got pointed out several times [1][2] that I violate our commit
> message requirement [3] for the message lines t
Hi, Roman,
Evacuated has been on my radar for a while and this post has prodded me to
take a look at the code. I think it's worth starting by explaining the
problems in the current solution. Nova client is currently responsible for
doing this evacuate. It does:
1. List all instances on the source
On 11/09/15 12:19, Sean Dague wrote:
> On 09/11/2015 05:41 AM, Matthew Booth wrote:
>> I've recently been writing a tool which uses Nova's external API. This
>> is my first time consuming this API, so it has involved a certain amount
>> of discovery. The tool is here
f I have. Has there ever been any effort to define
some contract around more fine-grained error reporting?
Thanks,
Matt
[1] Incidentally, this suggests to me that live migrate should just do
this anyway.
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
I expect there are several existing solutions to this problem, but
here's mine (attached).
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
log_merge.sh
I wrote this:
https://review.openstack.org/#/c/195983/1/tools/de-pbr.py,cm
Ideally we'd fix PBR, but this seems to be expected behaviour. Thoughts?
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A
that the commit_base
and commit_top arguments to virt_dom.blockCommit() are unvalidated.
Does python have anything like perl's taint mode? If so, it might be
worth investigating its use.
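In lieu of taint tracking, explicit validation at the call site would at
least be cheap. A sketch reusing the names from the code under discussion
(the regex and helper are mine, purely illustrative):

import re

_SAFE_ARG_RE = re.compile(r'^[\w/.-]+$')


def validated(value):
    # Reject anything that doesn't look like a plain path or identifier
    # before it reaches libvirt.
    if not _SAFE_ARG_RE.match(value):
        raise ValueError('suspicious blockCommit argument: %r' % value)
    return value

virt_dom.blockCommit(disk, validated(commit_base), validated(commit_top),
                     0, flags)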
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
need positive leadership from the PTL.
Would you like to take a position on how to improve core reviewer
throughput in the next cycle?
Thanks,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
hypervisor id for other drivers.
Matt
[1] Modulo bugs: https://review.openstack.org/#/c/159481/
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
On 25/02/15 20:18, Joe Gordon wrote:
>
>
> On Fri, Feb 20, 2015 at 3:48 AM, Matthew Booth wrote:
>
> Gary Kotton came across a doozy of a bug recently:
>
> https://bugs.launchpad.net/nova/+bug/1419785
>
> In shor
On 25/02/15 11:51, Radoslav Gerganov wrote:
> On 02/23/2015 03:18 PM, Matthew Booth wrote:
>> On 23/02/15 12:13, Gary Kotton wrote:
>>>
>>>
>>> On 2/23/15, 2:05 PM, "Matthew Booth" wrote:
>>>
>>>> On 20/02/15 11:48, Matthew B
On 23/02/15 12:13, Gary Kotton wrote:
>
>
> On 2/23/15, 2:05 PM, "Matthew Booth" wrote:
>
>> On 20/02/15 11:48, Matthew Booth wrote:
>>> Gary Kotton came across a doozy of a bug recently:
>>>
>>> https://bugs.launchpad.net/nova/+bug/1419785
On 20/02/15 11:48, Matthew Booth wrote:
> Gary Kotton came across a doozy of a bug recently:
>
> https://bugs.launchpad.net/nova/+bug/1419785
>
> In short, when you start a Nova compute, it will query the driver for
> instances and compare that against the expected host o
n is to leave the race there with a
comment in the code and the config documentation. Perhaps we can fix
this properly at some point when we get the online schema changes, but
for the moment it seems like a lot of complication for a relatively
small problem.
Do you use the global or project s
On 19/02/15 18:57, Jay Pipes wrote:
> On 02/19/2015 05:18 AM, Matthew Booth wrote:
>> Nova contains a config variable osapi_compute_unique_server_name_scope.
>> Its help text describes it pretty well:
>>
>> When set, compute API will consider duplicate hostnames inva
ver, at least, has hardcoded
assumptions that it is the only writer in certain circumstances. That
problem would have to be handled separately, perhaps at the messaging layer.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
e a cleanup job to consolidate all
instances running on a single hypervisor to have the same 'host'.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
e bugs in a
library, which were fixed. News at 11. This isn't a realistic way to
manage a large software stack.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
Thanks,
Matt
[1] Which has prompted me to get the test_db_api tests running on MySQL.
See this series if you're interested:
https://review.openstack.org/#/c/156299/
[2] For specifics, see my ramblings here:
https://review.openstack.org/#/c/141115/7/nova/db/sqlalchemy/api.py,cm
line 2547
--
Matthew Booth
shouldn't work, though. My patch won't prevent that.
Matt
>
>>
>> The main reason is that compute *nodes* are considered given by the
>> hypervisor (ie. the virt driver ran by the compute manager worker), so
>> if 2 or more hypervisors on two distinct machin
ical
I was assuming we wanted a more expedient fix, which is what I've proposed.
Matt
> Thanks
> Gary
>
> On 2/11/15, 5:31 PM, "Matthew Booth" wrote:
>
>> I just posted this:
>>
>> https://review.openstack.org/#/c/154907/
>>
>> as an alte
ng in a broken state, leaving inconsistent state in its wake
as it runs.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
On 10/02/15 18:29, Jay Pipes wrote:
> On 02/10/2015 09:47 AM, Matthew Booth wrote:
>> On 09/02/15 18:15, Jay Pipes wrote:
>>> On 02/09/2015 01:02 PM, Attila Fazekas wrote:
>>>> I do not see why not to use `FOR UPDATE` even with multi-writer or
>>>> Is the
write-intent
locks not being replicated, but that link seems to say the opposite.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
lly, I think these are good guidelines for comments on all reviews.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
safely recover. If
we want to reduce the number of transactions we're using, which we do,
the impact of this is going to dramatically increase. We should
standardise on read committed.
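Concretely, pinning the isolation level is a one-liner at engine creation
time (the URL is illustrative, and nova would do this through its oslo.db
configuration rather than literally like this):

from sqlalchemy import create_engine

engine = create_engine(
    'mysql+pymysql://nova:secret@dbhost/nova',
    # Each statement sees rows committed by other transactions as it
    # goes, rather than a snapshot taken when the transaction started.
    isolation_level='READ COMMITTED',
)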
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
we take items one by one according to
> who shouts fastest but ask people not to interrupt.
+1
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
either:
>
> I guess we can hit a lock contention even in single master.
I don't think so, but you can certainly still have real deadlocks.
They're bugs, though.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
sactions.
However, I take your point. I think retry on transaction failure is
something which would benefit from standard handling in a library.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
that we'd have to audit that code for other side-effects,
for the moment it sounds like it's safer to fail.
Matt
[1] Standard caveats apply.
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
On 04/02/15 19:04, Jay Pipes wrote:
> On 02/04/2015 12:05 PM, Sahid Orentino Ferdjaoui wrote:
>> On Wed, Feb 04, 2015 at 04:30:32PM +, Matthew Booth wrote:
>>> I've spent a few hours today reading about Galera, a clustering solution
>>> for MySQL. Galera
, but only some functions are decorated. Are
they decorated because of real deadlocks, or because of Galera lock
contention? The solutions to those 2 problems are very different! Also,
hunting deadlocks is hard enough work. Adding the possibility that they
might not even be there is just evil.
Incid
be great.
[2]
http://www.percona.com/blog/2012/11/20/understanding-multi-node-writing-conflict-metrics-in-percona-xtradb-cluster-and-galera/
[3]
http://www.percona.com/blog/2013/03/03/investigating-replication-latency-in-percona-xtradb-cluster/
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
On 30/01/15 19:06, Mike Bayer wrote:
>
>
> Matthew Booth wrote:
>
>> At some point in the near future, hopefully early in L, we're intending
>> to update Nova to use the new database transaction management in
>> oslo.db's enginefacade.
>>
t state contains transaction X.
General comments on the usefulness of slave databases, and the
desirability of making maximum use of them?
Thanks,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
l498
May I suggest that we decouple these changes by doing both? Oslo's
RequestContext object can have the enginefacade decorator applied to it,
so any project which uses it doesn't have to apply it themselves.
Meanwhile, the decorator remains part of the public api for projects not
using
WIP: https://review.openstack.org/#/c/136040/
It's obviously not using oslo.db, but I think it's sufficiently similar
to the proposal to be a fairly simple mechanical fixup. It's incomplete
and there are still test failures to look at in there, but I have a good
feeling about the ap