On 8/9/2016 5:45 PM, Jan Beulich wrote:
> On 09.08.16 at 11:25, <yu.c.zh...@linux.intel.com> wrote:
>> On 8/9/2016 4:13 PM, Jan Beulich wrote:
>>> On 09.08.16 at 09:39, <yu.c.zh...@linux.intel.com> wrote:
>>>> On 8/9/2016 12:29 AM, Jan Beulich wrote:
>>>>> On 12.07.16 at 11:02, <yu.c.zh...@linux.intel.com> wrote:
>>>>>> @@ -5512,6 +5513,12 @@ static int hvmop_set_mem_type(
>>>>>>          if ( rc )
>>>>>>              goto out;
>>>>>> +        if ( t == p2m_ram_rw && memtype[a.hvmmem_type] == p2m_ioreq_server )
>>>>>> +            p2m->ioreq.entry_count++;
>>>>>> +
>>>>>> +        if ( t == p2m_ioreq_server && memtype[a.hvmmem_type] == p2m_ram_rw )
>>>>>> +            p2m->ioreq.entry_count--;
>>>>>> +
>>>>> These (and others below) happen, afaict, outside of any lock, which
>>>>> can't be right.
>>>> How about we make this entry_count an atomic_t and use
>>>> atomic_inc/dec instead?
>>> That's certainly one of the options, but beware of overflow.
>> Oh, thanks for the reminder, Jan. I just found that atomic_t is defined
>> as "typedef struct { int counter; } atomic_t;", which does have overflow
>> issues if entry_count is supposed to be a uint32 or uint64.
>> Now I have another thought: the entry_count is referenced in 3
>> different scenarios:
>> 1> the hvmop handlers - hvmop_set_mem_type() and
>> hvmop_map_mem_type_to_ioreq_server() - which are triggered by the
>> device model and will not be concurrent. Besides, hvmop_set_mem_type()
>> will permit the mem type change only when an ioreq server has already
>> been mapped to this type.
> You shouldn't rely on a well-behaved dm, and that's already
> leaving aside the question whether there couldn't even be well-behaved
> use cases of parallel invocations of this op.
Oh. This is a good point. Thanks. :-)
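
Just to make my earlier idea concrete - below is a minimal sketch of what
I had in mind, assuming entry_count were declared as atomic_t and using
Xen's atomic_inc()/atomic_dec() helpers; the wrapper function is
hypothetical, only for illustration, and as you point out it would still
leave the int-sized counter exposed to overflow:

    /* Sketch only: the atomic variant of the hunk quoted above.
     * Assumes p2m->ioreq.entry_count is an atomic_t (my assumption,
     * not the posted patch). */
    static void ioreq_entry_count_adjust(struct p2m_domain *p2m,
                                         p2m_type_t ot, p2m_type_t nt)
    {
        if ( ot == p2m_ram_rw && nt == p2m_ioreq_server )
            atomic_inc(&p2m->ioreq.entry_count);   /* page becomes tracked */
        else if ( ot == p2m_ioreq_server && nt == p2m_ram_rw )
            atomic_dec(&p2m->ioreq.entry_count);   /* page no longer tracked */
    }
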
>> 2> the misconfiguration handlers - resolve_misconfig() or do_recalc() -
>> which are triggered by a VM exit of the HVM guest, and this will only
>> happen after the ioreq server has already been unmapped. This means the
>> accesses to the entry_count will not be concurrent with the above hvmop
>> handlers.
> This case may be fine, but not for (just) the named reason:
> Multiple misconfig invocations can happen at the same time, but
> presumably your addition is inside the p2m-locked region (you'd
> have to double check that).
Yes, both are wrapped in the p2m-locked region. So this gives me another
thought: if we put the increment/decrement of entry_count inside
p2m_change_type_one(), which already takes p2m_lock/p2m_unlock, we could
avoid introducing another spinlock, and also sidestep the overflow concern
that comes with atomic_t (see the sketches below).
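
For the recalc side, this is roughly how I picture the decrement sitting
in resolve_misconfig()/do_recalc(), under the p2m lock the callers already
hold - the ASSERT (using the p2m_locked_by_me() helper from mm-locks.h)
and the exact placement are my assumption, not tested code:

    /* Sketch: when a stale p2m_ioreq_server entry is found after the
     * ioreq server has been unmapped, reset it to p2m_ram_rw and drop
     * the count. Safe without atomics: recalc runs p2m-locked. */
    if ( p2mt == p2m_ioreq_server )
    {
        ASSERT(p2m_locked_by_me(p2m));
        p2m->ioreq.entry_count--;
        p2mt = p2m_ram_rw;
    }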
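
And for the hvmop side, letting p2m_change_type_one() do the accounting
might look like the sketch below, on the assumption that gfn_lock() there
is the p2m lock, so a plain (non-atomic) counter suffices; the helper name
is made up:

    /* Sketch: hypothetical helper called from p2m_change_type_one(),
     * between gfn_lock() and gfn_unlock(), once the type change has
     * succeeded. No extra spinlock and no atomic_t needed, since the
     * p2m lock already serializes all updaters. */
    static void ioreq_entry_count_update(struct p2m_domain *p2m,
                                         p2m_type_t ot, p2m_type_t nt)
    {
        if ( ot == p2m_ram_rw && nt == p2m_ioreq_server )
            p2m->ioreq.entry_count++;
        else if ( ot == p2m_ioreq_server && nt == p2m_ram_rw )
            p2m->ioreq.entry_count--;
    }
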
[snip]
Thanks
Yu
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel