On Thu, Jul 16, 2015 at 09:26:21AM +0200, Igor Mammedov wrote:
> On Wed, 15 Jul 2015 19:32:31 +0300
> "Michael S. Tsirkin" <m...@redhat.com> wrote:
> 
> > On Wed, Jul 15, 2015 at 05:12:01PM +0200, Igor Mammedov wrote:
> > > On Thu, 9 Jul 2015 13:47:17 +0200
> > > Igor Mammedov <imamm...@redhat.com> wrote:
> > > 
> > > there is also yet another issue with vhost-user. It also has a
> > > very low limit on the number of memory regions (if I recall correctly 8),
> > > and it's possible to trigger it even without memory hotplug:
> > > one just needs to start QEMU with several -numa memdev= options
> > > to create enough memory regions to hit the limit.
> > > 
> > > a low-risk option to fix it would be increasing the limit in the
> > > vhost-user backend.
> > > 
> > > another option is disabling vhost and falling back to virtio,
> > > but I don't know enough about vhost to say whether it's possible
> > > to switch it off without losing packets the guest was sending
> > > at that moment, or whether it would work at all with vhost.
> > 
> > With vhost-user you can't fall back to virtio: it's
> > not an accelerator, it's the backend.
> > 
> > Updating the protocol to support a bigger table
> > is possible but old remotes won't be able to support it.
> 
> it looks like increasing the limit is the only option left.
> 
> it's not ideal that old remotes /with a hardcoded limit/
> might not be able to handle a bigger table, but at least
> new ones, and ones that handle the VhostUserMsg payload
> dynamically, would be able to work without crashing.
I think we need a way for hotplug to fail gracefully. As long as we
don't implement the hva trick, it's needed for old kernels with
vhost in the kernel, too.

-- 
MST
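
[Editor's note: for context on the hardcoded limit discussed above, here is a
minimal sketch of the vhost-user memory-table payload as it looked around this
time. The field names follow QEMU's hw/virtio/vhost-user.c but are reproduced
here only as an illustration, not as an authoritative copy of the sources. The
point is that the region array is a fixed-size member of the message, so a
remote that compiled in the limit of 8 cannot accept a larger table without a
protocol change.]

/* Illustrative sketch of the SET_MEM_TABLE payload; names assumed
 * to match QEMU's vhost-user implementation of this era. */
#include <stdint.h>

#define VHOST_MEMORY_MAX_NREGIONS 8   /* the hardcoded limit under discussion */

typedef struct VhostUserMemoryRegion {
    uint64_t guest_phys_addr;   /* guest physical address of the region */
    uint64_t memory_size;       /* length of the region in bytes */
    uint64_t userspace_addr;    /* HVA of the region in the QEMU process */
    uint64_t mmap_offset;       /* offset into the mmap'ed backing fd */
} VhostUserMemoryRegion;

typedef struct VhostUserMemory {
    uint32_t nregions;          /* number of valid entries that follow */
    uint32_t padding;
    /* fixed-size array: a remote with this limit compiled in cannot
     * parse a message carrying more regions */
    VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
} VhostUserMemory;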