Currently, when called with a zero 'pages' argument, domain_adjust_tot_pages()
will pointlessly acquire and release the global 'heap_lock' even though there
is nothing to adjust.

NOTE: No caller yet calls domain_adjust_tot_pages() with a zero 'pages'
      argument, but a subsequent patch will make this possible.
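For reference, the pointless lock round-trip comes from the claim-adjustment
block of the existing function. The sketch below is a simplified, from-memory
rendering of the current shape of domain_adjust_tot_pages() in
xen/common/page_alloc.c (not the exact source); it only illustrates where
heap_lock is taken even though a zero 'pages' adjustment is a no-op:

    unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
    {
        ASSERT(spin_is_locked(&d->page_alloc_lock));
        d->tot_pages += pages;

        /* Domains with no outstanding claim already skip the lock. */
        if ( !d->outstanding_pages )
            goto out;

        /*
         * With an outstanding claim, the global heap_lock is acquired and
         * released even when 'pages' is zero and nothing needs adjusting.
         */
        spin_lock(&heap_lock);
        /* ... adjust d->outstanding_pages and outstanding_claims ... */
        spin_unlock(&heap_lock);

     out:
        return d->tot_pages;
    }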

Signed-off-by: Paul Durrant <pdurr...@amazon.com>
---
Cc: Andrew Cooper <andrew.coop...@citrix.com>
Cc: George Dunlap <george.dun...@eu.citrix.com>
Cc: Ian Jackson <ian.jack...@eu.citrix.com>
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Julien Grall <jul...@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Wei Liu <w...@xen.org>

v5:
 - Split out from the subsequent 'make MEMF_no_refcount pages safe to
   assign' patch as requested by Jan
---
 xen/common/page_alloc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 919a270587..135e15bae0 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -460,6 +460,9 @@ unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
 {
     long dom_before, dom_after, dom_claimed, sys_before, sys_after;
 
+    if ( !pages )
+        goto out;
+
     ASSERT(spin_is_locked(&d->page_alloc_lock));
     d->tot_pages += pages;
 
-- 
2.20.1

