>>> On 05.07.16 at 16:35, wrote:
> On Thu, Jun 23, 2016 at 4:42 PM, Tamas K Lengyel wrote:
+        if ( !atomic_read(&d->pause_count) ||
+             !atomic_read(&cd->pause_count) )
+        {
+            rcu_unlock_domain(cd);
+            rc = -EINVAL;
On Tue, Jul 5, 2016 at 8:35 AM, George Dunlap wrote:
> On Thu, Jun 23, 2016 at 4:42 PM, Tamas K Lengyel wrote:
+        if ( !atomic_read(&d->pause_count) ||
+             !atomic_read(&cd->pause_count) )
+        {
+            rcu_unlock_domain(cd);
+            rc = -EINVAL;
On Thu, Jun 23, 2016 at 4:42 PM, Tamas K Lengyel wrote:
>>> +        if ( !atomic_read(&d->pause_count) ||
>>> +             !atomic_read(&cd->pause_count) )
>>> +        {
>>> +            rcu_unlock_domain(cd);
>>> +            rc = -EINVAL;
>>> +            goto out;
>>>
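The quoted check requires both the source domain d and the client domain cd to have been paused before the bulk operation proceeds. A minimal caller-side sketch of that contract follows; xc_memshr_bulk_share() is a placeholder name for the wrapper such a series would add, not an existing libxc call, while xc_domain_pause()/xc_domain_unpause() are real libxc functions.

#include <xenctrl.h>

/* Sketch only: the control domain pauses both domains so their pause_count
 * is non-zero for the duration of the bulk-share request, matching the
 * check quoted above.  xc_memshr_bulk_share() is a hypothetical wrapper. */
static int bulk_share_paused(xc_interface *xch, uint32_t source, uint32_t client)
{
    int rc;

    xc_domain_pause(xch, source);
    xc_domain_pause(xch, client);

    rc = xc_memshr_bulk_share(xch, source, client);  /* hypothetical */

    xc_domain_unpause(xch, client);
    xc_domain_unpause(xch, source);

    return rc;
}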
On Wed, Jun 22, 2016 at 9:38 AM, George Dunlap wrote:
> On Sun, Jun 12, 2016 at 12:24 AM, Tamas K Lengyel wrote:
>> Currently mem-sharing can be performed on a page-by-page basis from the
>> control domain. However, when completely deduplicating (cloning) a VM, this
>> requires at least 3 hypercalls per page.
On Sun, Jun 12, 2016 at 12:24 AM, Tamas K Lengyel wrote:
> Currently mem-sharing can be performed on a page-by-page basis from the control
> domain. However, when completely deduplicating (cloning) a VM, this requires
> at least 3 hypercalls per page. As the user has to loop through all pages up
> to max_gpfn, this process is very slow and wasteful.
On Wed, Jun 15, 2016 at 02:14:15AM -0600, Jan Beulich wrote:
> >>> On 14.06.16 at 18:33, wrote:
> >> +        /* Check for continuation if it's not the last iteration. */
> >> +        if ( limit > ++bulk->start && hypercall_preempt_check() )
> >
> > I'm surprised the compiler didn't complain to you about the lack of parentheses.
>>> On 14.06.16 at 18:33, wrote:
>> +        /* Check for continuation if it's not the last iteration. */
>> +        if ( limit > ++bulk->start && hypercall_preempt_check() )
>
> I'm surprised the compiler didn't complain to you about the lack of parentheses.
I'm puzzled - what kind of warning would
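For reference, the precedence point at issue: relational operators bind more tightly than &&, so the condition already groups the way the comment intends, and compilers do not normally warn about this combination; adding parentheses only makes the grouping explicit. A small illustration using the names from the quoted hunk (the return-1 continuation convention is an assumption):

    /* As written in the patch: '>' binds more tightly than '&&', so the
     * increment and comparison happen first, and hypercall_preempt_check()
     * is skipped entirely on the last iteration via short-circuiting. */
    if ( limit > ++bulk->start && hypercall_preempt_check() )
        return 1;

    /* Equivalent, with the grouping spelled out:
     * if ( (limit > ++bulk->start) && hypercall_preempt_check() )
     */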
On Jun 14, 2016 10:33, "Konrad Rzeszutek Wilk" wrote:
>
> > diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> > index a522423..ba06fb0 100644
> > --- a/xen/arch/x86/mm/mem_sharing.c
> > +++ b/xen/arch/x86/mm/mem_sharing.c
> > @@ -1294,6 +1294,54 @@ int relinquish_shared_pages(struct domain *d)
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index a522423..ba06fb0 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1294,6 +1294,54 @@ int relinquish_shared_pages(struct domain *d)
>     return rc;
> }
>
> +static int bu
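The preview cuts off at the start of the new static helper. Based purely on the later discussion of bulk->start, limit and hypercall_preempt_check(), here is a hedged sketch of the shape such a helper might take; the name bulk_share, the per-gfn step share_one_gfn() and the struct mem_sharing_op_bulk layout are assumptions, not the patch's actual code.

/* Hypothetical shape of the new helper: walk the gfns of source domain d
 * and client domain cd from bulk->start up to (but not including) limit,
 * checking for preemption between iterations. */
static int bulk_share(struct domain *d, struct domain *cd,
                      unsigned long limit,
                      struct mem_sharing_op_bulk *bulk)
{
    int rc = 0;

    while ( bulk->start < limit )
    {
        rc = share_one_gfn(d, cd, bulk->start);   /* placeholder per-page step */
        if ( rc )
            break;

        /* Check for continuation if it's not the last iteration. */
        if ( (limit > ++bulk->start) && hypercall_preempt_check() )
        {
            rc = 1;   /* positive: caller should set up a continuation */
            break;
        }
    }

    return rc;
}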
>>> On 12.06.16 at 01:24, wrote:
> @@ -1468,6 +1516,79 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>         }
>         break;
>
> +    case XENMEM_sharing_op_bulk_share:
> +    {
> +        unsigned long max_sgfn, max_cgfn;
> +        struct
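This hunk also breaks off after the two locals. Their names suggest the handler compares the highest gfn of the source and client domains before walking the range; below is a hedged sketch of how the case might continue, using the existing domain_get_maximum_gpfn() helper, the assumed bulk_share() shape from the sketch above, and assumed field names for the memop argument (the pause-count check quoted earlier in the thread is omitted for brevity).

    case XENMEM_sharing_op_bulk_share:
    {
        unsigned long max_sgfn, max_cgfn;

        /* Sketch only: a whole-VM dedup only makes sense if both gfn
         * spaces cover the same range. */
        max_sgfn = domain_get_maximum_gpfn(d);
        max_cgfn = domain_get_maximum_gpfn(cd);

        if ( max_sgfn != max_cgfn || max_sgfn < mso.u.bulk.start )
        {
            rcu_unlock_domain(cd);
            rc = -EINVAL;
            goto out;
        }

        /* A positive return from the helper would signal that a hypercall
         * continuation needs to be set up before returning to the guest. */
        rc = bulk_share(d, cd, max_sgfn + 1, &mso.u.bulk);

        rcu_unlock_domain(cd);
        break;
    }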
On Sat, Jun 11, 2016 at 5:24 PM, Tamas K Lengyel wrote:
> Currently mem-sharing can be performed on a page-by-page basis from the control
> domain. However, when completely deduplicating (cloning) a VM, this requires
> at least 3 hypercalls per page. As the user has to loop through all pages up
> to max_gpfn, this process is very slow and wasteful.
Currently mem-sharing can be performed on a page-by-page basis from the control
domain. However, when completely deduplicating (cloning) a VM, this requires
at least 3 hypercalls per page. As the user has to loop through all pages up
to max_gpfn, this process is very slow and wasteful.
This patch introduces a new mem-sharing memop, XENMEM_sharing_op_bulk_share, so
that the whole deduplication can be requested with a single, continuable
hypercall.
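To make the "3 hypercalls per page" cost concrete, the existing toolstack-side flow looks roughly like the following, using today's per-page libxc calls. Error handling for unpopulated gfns is glossed over, and the function name and the way max_gpfn is obtained are illustrative, not from the patch.

#include <stdint.h>
#include <xenctrl.h>

/* With the current interface, deduplicating a clone costs three hypercalls
 * per gfn: nominate the source page, nominate the client page, then share
 * the two.  max_gpfn is assumed to have been obtained by the caller. */
static int dedup_per_page(xc_interface *xch, uint32_t source, uint32_t client,
                          unsigned long max_gpfn)
{
    unsigned long gfn;
    uint64_t shandle, chandle;
    int rc = 0;

    for ( gfn = 0; gfn <= max_gpfn; gfn++ )
    {
        rc = xc_memshr_nominate_gfn(xch, source, gfn, &shandle);
        if ( rc )
            break;
        rc = xc_memshr_nominate_gfn(xch, client, gfn, &chandle);
        if ( rc )
            break;
        rc = xc_memshr_share_gfns(xch, source, gfn, shandle,
                                  client, gfn, chandle);
        if ( rc )
            break;
    }

    return rc;
}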