>>> On 09.11.17 at 16:29, <yu.c.zh...@linux.intel.com> wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4844,9 +4844,10 @@ int map_pages_to_xen(
>              {
>                  unsigned long base_mfn;
>  
> -                pl1e = l2e_to_l1e(*pl2e);
>                  if ( locking )
>                      spin_lock(&map_pgdir_lock);
> +
> +                pl1e = l2e_to_l1e(*pl2e);
>                  base_mfn = l1e_get_pfn(*pl1e) & ~(L1_PAGETABLE_ENTRIES - 1);
>                  for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++, pl1e++ )
>                      if ( (l1e_get_pfn(*pl1e) != (base_mfn + i)) ||
I agree with the general observation, but there are three things I'd
like to see considered:

1) Please extend the change slightly such that the L2E
re-consolidation code matches the L3E one (i.e. latch *pl2e into
ol2e earlier and pass that one to l2e_to_l1e()); see the sketch
below. Personally I would even prefer it if the presence/absence
of blank lines matched between the two pieces of code.
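Roughly something like this (an untested sketch only, mirroring the
structure of the L3E path; ol2e is the local variable that the existing
code only latches further down):

    if ( locking )
        spin_lock(&map_pgdir_lock);

    ol2e = *pl2e;
    pl1e = l2e_to_l1e(ol2e);
    base_mfn = l1e_get_pfn(*pl1e) & ~(L1_PAGETABLE_ENTRIES - 1);
    /* ... L1 scan and the write-back/unlock/free tail as before ... */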

2) Is your change actually enough to take care of all forms of the
race you describe? In particular, isn't it necessary to re-check PSE
after having taken the lock, in case another CPU has just finished
doing the re-consolidation?
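I.e. something along these lines (again only an illustrative, untested
sketch; how exactly to bail out - goto vs. falling through - would need
working out against the surrounding code):

    if ( locking )
        spin_lock(&map_pgdir_lock);

    /* Another CPU may have re-consolidated (or zapped) the L2E meanwhile. */
    if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) ||
         (l2e_get_flags(*pl2e) & _PAGE_PSE) )
    {
        if ( locking )
            spin_unlock(&map_pgdir_lock);
        /* ... skip the re-consolidation ... */
    }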

3) What about the empty&free checks in modify_xen_mappings()?

Jan

