On 10/04/19 16:09, Igor Mammedov wrote:
> On Tue, 1 Oct 2019 17:31:17 +0200
> "Laszlo Ersek" <ler...@redhat.com> wrote:

>> (It does not matter if another hotplug CPU starts the relocation in SMM
>> while the earlier one is left with *only* the RSM instruction in SMM,
>> immediately after the swap-back.)
> I don't see why it doesn't matter, let me illustrate the case
> I'm talking about.

Right, I'm sorry -- I completely forgot that the RSM instruction is what
makes the newly set SMBASE permanent!

[...]

> Safest way to deal with it would be:
> 
>   1. wait till all host CPUs are brought into SMM (with hotplug SMI handler = 
> check me in)
> 
>   2. wait till all hotplugged CPUs are put in a well-known state
>     (a host CPU should send INIT-SIPI-SIPI with a NOP vector to wake them up)
> 
>   3. a host CPU will serialize hotplugged CPUs' relocation by sending
>      unicast SMI + INIT-SIPI-SIPI NOP vector to wake up the second time
>      (with hotplug SMI handler = relocate_me)
> 
> alternatively we could skip step 3 and do following:
> 
>  hotplug_SMI_handler()
>      hp_cpu_check_in_map[apic_id] = 1;
>      /* wait till another CPU tells me to continue */
>      while (hp_cpu_check_in_map[apic_id])
>          ;
>      ...
> 
>  host_cpu_hotplug_all_cpus()
>      wait till all hotplugged CPUs are in hp_cpu_check_in_map;
>      for (i = 0; i < max_cpus; i++) {
>         if (hp_cpu_check_in_map[i]) {
>             /* relocate this CPU */
>             hp_cpu_check_in_map[i] = 0;
> 
>             tell QEMU that FW pulled the CPU in;
>             /* we can add a flag to QEMU's cpu hotplug MMIO interface
>                for FW to do it, so that the legitimate GPE handler would
>                tell the OS to online only firmware-vetted CPUs */
>         }
>      }

[...]

>> How can we discover in the hotplug initial SMI handler at 0x3_0000 that
>> the CPU executing the code should have been relocated *earlier*, and the
>> OS just didn't obey the ACPI GPE code? I think a platform reset would be
>> a proper response, but I don't know how to tell, "this CPU should not be
>> here".
> 
> one can ask QEMU about present CPUs (via the cpu hotplug interface)
> and compare with the present-CPU state in FW (the OS can hide the
> insert event there, but not the presence).
> 
> another way is to keep it pinned in the relocation SMI handler
> until another CPU tells it to continue with relocation.
> 
> In the absence of an SMI, such a CPU will continue to run wild, but
> do we really care about it?

Thanks.

It's clear that too much hangs in the air for me right now. I'll have to
get my hands dirty and experiment. I need to learn a lot about this area
of edk2 as well, and experimenting looks like one way to do that.

Let's return to these questions once we reach the following state:

- ACPI code generated by QEMU raises a broadcast SMI on VCPU hotplug
- in OVMF, a cold-plugged CPU executes platform code in a custom SMI
  handler
- in OVMF, a hot-plugged CPU executes platform code in the initial
  (default) SMI handler

Once we get to that point, we can look into further details.

I'll answer under your other email too:
<http://mid.mail-archive.com/20191004133052.20373155@redhat.com>.

Thanks
Laszlo

View/Reply Online (#48502): https://edk2.groups.io/g/devel/message/48502