Mike Larkin writes:

> On Sat, Jun 26, 2021 at 03:26:55PM +0200, Thomas L. wrote:
>> On Wed, 7 Apr 2021 17:00:00 -0700
>> Mike Larkin <mlar...@nested.page> wrote:
>> > Depends on the exact content that got swapped out (as we didn't handle
>> > TLB flushes correctly), so a crash was certainly a possibility.
>> > That's why I wanted to see the VMM_DEBUG output.
>> >
>> > In any case, Thomas should try -current and see if this problem is
>> > even reproducible.
>> >
>> > -ml
>>
>> I've been running -current with VMM_DEBUG since Apr 14 and the problem
>> has not reproduced, instead I see spurious stops now. Output in
>> /var/log/messages on the occasion is:
>>
>> Jun 19 03:31:16 golem vmd[95337]: vcpu_run_loop: vm 8 / vcpu 0 run ioctl failed: Invalid argument
>> Jun 19 03:31:16 golem /bsd: vcpu_run_vmx: can't read procbased ctls on exit
>> Jun 19 03:31:17 golem /bsd: vmm_free_vpid: freed VPID/ASID 8
>>
>> There's also a lot of probably unrelated messages for all the VMs:
>>
>> Jun 19 01:31:10 golem vmd[66318]: vionet_enq_rx: descriptor too small for packet data
>>
>> I realize that this is an old version, so this might be an already
>> fixed bug. I can upgrade to a newer snapshot, but the bug shows about
>> once per month, so by the time it shows it will be an old version
>> again.
>>
>> Kind regards,
>>
>> Thomas
>>
>
> you probably want a newer snap, dv@ fixed some things in this area recently.

The vmx race condition is still present in vmm(4). I'm hoping to share
the diff I've been working on that fixes the "can't read procbased ctls"
error soon, but I've been distracted by other things and have been
spending more of my time on my AMD system.
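
For context, that message comes from a VMREAD of the VM-execution
controls failing after a VM exit. Per the Intel SDM, VMREAD sets CF
(VMfailInvalid) when the CPU has no current VMCS and ZF (VMfailValid)
on other failures, so a failing read of the procbased controls on exit
generally means the vCPU's VMCS wasn't current on that CPU at that
point. Rough sketch only, not the vmm(4) code, and the wrapper name is
made up:

/*
 * Illustrative sketch, not the vmm(4) implementation: a checked
 * VMREAD wrapper.  VMREAD sets CF (VMfailInvalid) when there is no
 * current VMCS on the CPU and ZF (VMfailValid) on other failures, so
 * the caller can tell the read failed instead of getting back a stale
 * or zero value.  Only meaningful in VMX root operation.
 */
#include <stdint.h>

static inline int
vmread_checked(uint64_t field, uint64_t *value)
{
	uint8_t failed;

	__asm__ volatile("vmread %[field], %[val]; setna %[failed]"
	    : [val] "=rm" (*value), [failed] "=qm" (failed)
	    : [field] "r" (field)
	    : "cc", "memory");

	return (failed ? -1 : 0);	/* CF or ZF set -> failure */
}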

The vionet issue should definitely be resolved in -current. A lot of
work has gone into that area since April, including security fixes, so
please update.
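
As for what that vionet message means: the receive path noticed that
the writable space the guest offered in the descriptor chain was
smaller than the incoming frame, so the frame was dropped. A rough
sketch of that kind of check (illustrative only, not vmd's code; the
names are made up):

/*
 * Illustrative only, not vmd's implementation: a virtio-net rx path
 * typically compares the frame length against the writable space the
 * guest provided and drops the frame when it does not fit.
 */
#include <stddef.h>
#include <stdio.h>

struct rx_chain {
	size_t space;		/* writable bytes offered by the guest */
};

static int
rx_enqueue(struct rx_chain *chain, const void *frame, size_t len)
{
	if (len > chain->space) {
		fprintf(stderr, "descriptor too small for packet data\n");
		return (-1);	/* drop the frame */
	}
	/* copy the frame into the guest buffers and post a used entry */
	return (0);
}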

-dv
