Hi Andrew,
On 1/3/17 00:28, Andrew Cooper wrote:
On 31/12/2016 05:45, Suravee Suthikulpanit wrote:
[...]
+    case AVIC_INCMP_IPI_ERR_INV_TARGET:
+        dprintk(XENLOG_ERR,
+                "SVM: %s: Invalid IPI target (icr=%#08x:%08x, idx=%u)\n",
+                __func__, icrh, icrl, index);
+        break;
Shouldn't this case be emulated to raise appropriate APIC errors in the
guest's view?
Actually, I think we would need to domain_crash().
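Roughly what I have in mind for v2 (just a sketch; curr here stands for the
current vcpu, assuming this handler keeps such a local as the noaccel path
does, and domain_crash() is the usual Xen helper):

    case AVIC_INCMP_IPI_ERR_INV_TARGET:
        /*
         * An invalid IPI target means the AVIC tables have become
         * inconsistent with the vAPIC backing pages, so rather than
         * continue with stale state, log it and crash the domain.
         */
        dprintk(XENLOG_ERR,
                "SVM: %s: Invalid IPI target (icr=%#08x:%08x, idx=%u)\n",
                __func__, icrh, icrl, index);
        domain_crash(curr->domain);
        break;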
[...]
+static int avic_handle_dfr_update(struct vcpu *v)
+{
+    u32 mod;
+    struct svm_domain *d = &v->domain->arch.hvm_domain.svm;
+    u32 *dfr = avic_get_bk_page_entry(v, APIC_DFR);
+
+    if ( !dfr )
+        return -EINVAL;
+
+    mod = (*dfr >> 28) & 0xFu;
+
+    spin_lock(&d->avic_ldr_mode_lock);
+    if ( d->avic_ldr_mode != mod )
+    {
+        /*
+         * We assume that all local APICs are using the same type.
+         * If LDR mode changes, we need to flush the domain AVIC logical
My apologies... s/LDR mode/DFR mode/
+         * APIC id table.
+         */
The logical APIC ID table is per-domain, yet LDR mode is per-vcpu. How
can these coexist if we flush the table like this? How would a
multi-vcpu domain actually change its mode without this logic in Xen
corrupting the table?
My apologies... s/avic_ldr_mode/avic_dfr_mode/
The per-domain avic_dfr_mode stores the current value of the DFR. The idea
here is that if the DFR changes, we need to flush the logical APIC ID table
and update avic_dfr_mode. In a multi-vcpu domain, this is done when the
first vcpu updates its DFR register. Flushing the table invalidates all of
its entries, so IPIs that hit a stale entry are no longer accelerated and
cause a #VMEXIT instead.
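To make that concrete, the update would look roughly like this (a sketch
only: d->avic_logical_id_table stands in for however the per-domain table
page ends up being reached, and the lock/field carry the avic_dfr_mode
naming from the rename above):

    spin_lock(&d->avic_dfr_mode_lock);
    if ( d->avic_dfr_mode != mod )
    {
        /*
         * The DFR mode changed: wipe the per-domain logical APIC ID
         * table.  Cleared entries are invalid, so IPIs that hit them
         * are no longer accelerated and take the #VMEXIT path, and the
         * table is repopulated as each vcpu rewrites its LDR.
         */
        memset(d->avic_logical_id_table, 0, PAGE_SIZE);
        d->avic_dfr_mode = mod;
    }
    spin_unlock(&d->avic_dfr_mode_lock);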
[...]
+    /*
+     * Handling AVIC Fault (intercept before the access).
+     */
+    if ( !rw )
+    {
+        u32 *entry = avic_get_bk_page_entry(curr, offset);
+
+        if ( !entry )
+            return;
+
+        *entry = vlapic_read_aligned(vcpu_vlapic(curr), offset);
+    }
This part is not needed. I will remove it.
+    hvm_emulate_one_vm_event(EMUL_KIND_NORMAL, TRAP_invalid_op,
+                             HVM_DELIVER_NO_ERROR_CODE);
What is this doing here? I don't see any connection.
~Andrew
We need to emulate the instruction for the noaccel #VMEXIT fault case.
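Roughly, I intend to keep the call only on the fault path, reusing the !rw
test from the hunk above (a sketch, not the final code):

    if ( !rw )
    {
        /*
         * Fault case: the intercept fires before the access completes,
         * so emulate the instruction.  The emulation goes through the
         * vlapic handlers itself, which is why pre-filling the backing
         * page entry above is not needed.
         */
        hvm_emulate_one_vm_event(EMUL_KIND_NORMAL, TRAP_invalid_op,
                                 HVM_DELIVER_NO_ERROR_CODE);
        return;
    }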
Thanks,
Suravee
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel