On Tue, Jul 04, 2017 at 10:02:46AM +0200, Greg Kurz wrote:
> > > There is some history to this. I was doing error recovery and
> > > propagation here similarly during the memory hotplug development
> > > phase, until Igor suggested that we shouldn't try to recover after
> > > we have made guest-visible changes.
> > >
> > > Refer to the "changes in v6" section in this post:
> > > https://lists.gnu.org/archive/html/qemu-ppc/2015-06/msg00296.html
> > >
> > > However, at that time we were doing memory add by the DRC index
> > > method and hence would attach and online one LMB at a time. In that
> > > method, if an intermediate attach failed, we would end up with a few
> > > LMBs already onlined by the guest. We have subsequently switched
> > > (optionally, based on dedicated_hp_event_source) to the count-indexed
> > > method of hotplug, where we attach all LMBs one by one and then
> > > request the guest to hotplug all of them at once using the
> > > count-indexed method.
> > >
> > > So it will be a bit tricky to abort in the index-based case and
> > > recover correctly in the count-indexed case.
> >
> > Looked at the code again and realized that though we started with
> > index-based LMB addition, we later switched to count-based addition.
> > Then we added support for the count-indexed type, subject to the
> > presence of a dedicated hotplug event source, while still retaining
> > support for count-based addition.
> >
> > So presently we attach all LMBs one by one and then do the onlining
> > (count-based or count-indexed) once. Hence error recovery for both
> > cases would be similar now. So I guess you should take care of undoing
> > pc_dimm_memory_plug() like Igor mentioned, and also undo the effects
> > of partially successful attaches.
>
> I've sent a v2 that adds rollback.
oh ok, somehow v2 didn't reach me at all and I saw the v2 in the archives
only now. So just noting that my above replies were sent without being
aware of v2 :)

> > >
> > > Regards,
> > > Bharata.
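As an aside, the attach-then-rollback flow discussed above (attach every LMB first, undo everything on a partial failure, and only notify the guest once at the end) can be sketched as a small standalone C simulation. This is an illustration only, not QEMU's actual code: `attach_lmb()`, `detach_lmb()`, the `dimm_plugged` flag, and the `fail_at` failure knob are all hypothetical stand-ins for pc_dimm_memory_plug()/DRC attach.

```c
#include <assert.h>
#include <stdbool.h>

#define NR_LMBS 8

static bool attached[NR_LMBS];  /* per-LMB attach state (stand-in for DRC state) */
static bool dimm_plugged;       /* stand-in for pc_dimm_memory_plug() having run */
static int fail_at = -1;        /* index at which attach_lmb() fails; -1 = never */

/* Hypothetical per-LMB attach; may fail partway through the range. */
static bool attach_lmb(int i)
{
    if (i == fail_at) {
        return false;           /* simulate an attach failure */
    }
    attached[i] = true;
    return true;
}

static void detach_lmb(int i)
{
    attached[i] = false;
}

/*
 * Two-phase add: attach all LMBs first, and signal the guest only once
 * every attach has succeeded.  On an intermediate failure, roll back the
 * already-attached LMBs and undo the memory plug, so nothing is ever
 * guest-visible on the error path.
 */
static bool add_lmbs(int nr_lmbs)
{
    int i;

    dimm_plugged = true;        /* pc_dimm_memory_plug() equivalent */

    for (i = 0; i < nr_lmbs; i++) {
        if (!attach_lmb(i)) {
            /* Undo the partially successful attaches, newest first. */
            while (--i >= 0) {
                detach_lmb(i);
            }
            dimm_plugged = false;   /* undo pc_dimm_memory_plug() */
            return false;
        }
    }

    /* Success: one guest notification for the whole range goes here. */
    return true;
}
```

The key property, matching the reasoning in the thread, is that the guest notification happens only after the loop completes, so an abort never leaves LMBs that the guest has already onlined.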