>>> On 15.10.15 at 18:54, wrote:
> +    rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
> +    if ( rc )
> +    {
> +        rcu_unlock_domain(cd);
> +        goto out;
> +    }
> +
> +    if ( !mem_sharing_enabled(cd) )
On Fri, Oct 9, 2015 at 7:26 AM, Andrew Cooper wrote:
> On 08/10/15 21:57, Tamas K Lengyel wrote:
> > diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> > index a95e105..4cdddb1 100644
> > --- a/xen/arch/x86/mm/mem_sharing.c
> > +++ b/xen/arch/x86/mm/mem_sharing.c
On Fri, Oct 9, 2015 at 1:51 AM, Jan Beulich wrote:
> >>> On 08.10.15 at 22:57, wrote:
> > --- a/xen/arch/x86/mm/mem_sharing.c
> > +++ b/xen/arch/x86/mm/mem_sharing.c
> > @@ -1293,6 +1293,37 @@ int relinquish_shared_pages(struct domain *d)
> >     return rc;
> > }
> >
> > +static int bulk_share(struct domain *d, struct domain *cd, unsigned long max,
Currently mem-sharing can be performed on a page-by-page basis from the control
domain. However, when completely deduplicating (cloning) a VM, this requires
at least three hypercalls per page. As the user has to loop through all pages up
to max_gpfn, this process is very slow and wasteful.

This patch introduces a bulk mem-sharing operation, so the user no longer has
to nominate and share each page separately; the looping over all pages happens
inside the hypervisor instead, significantly reducing the overhead of
deduplicating an entire domain.