Subject: Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.
On 22/02/16 17:01, Paul Durrant wrote:
>> What you did in an earlier version of this series (correct me if I'm
>> wrong) is to make a separate hypercall for memory, but still keep using
>> the same internal implementation (i.e., still having a write_dm p2m type
>> and using rangesets to determine w[...]
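For readers following along, the rangeset mechanism being debated can be illustrated with a minimal, self-contained sketch. This is not Xen's actual `rangeset` API; the names, the fixed array, and the linear search are simplifications for illustration only:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified stand-in for Xen's rangeset: a fixed array of
 * [start, end] gfn intervals, searched linearly. */
#define MAX_RANGES 8192   /* the 8k limit debated in this thread */

struct wp_range { unsigned long start, end; };

struct wp_rangeset {
    struct wp_range r[MAX_RANGES];
    size_t nr;
};

/* Returns false when the table is full, mirroring why a per-domain
 * limit matters: each tracked range consumes hypervisor memory. */
static bool wp_add(struct wp_rangeset *s, unsigned long start, unsigned long end)
{
    if (s->nr >= MAX_RANGES)
        return false;
    s->r[s->nr].start = start;
    s->r[s->nr].end = end;
    s->nr++;
    return true;
}

/* Called on a write fault: does this gfn belong to a tracked range? */
static bool wp_contains(const struct wp_rangeset *s, unsigned long gfn)
{
    for (size_t i = 0; i < s->nr; i++)
        if (gfn >= s->r[i].start && gfn <= s->r[i].end)
            return true;
    return false;
}
```

Because the GPU page-table pages in question are scattered single gfns rather than contiguous runs, thousands of one-page "ranges" accumulate, which is the scaling concern raised repeatedly below.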
>>> On 17.02.16 at 10:58, wrote:
> Thanks for the help. Let's see whether we can have some solution ready for
> 4.7. :-)

Since we now seem to all agree that a different approach is going to
be taken, I think we indeed should revert f5a32c5b8e ("x86/HVM:
differentiate IO/mem resources tracked by [...]
> From: Paul Durrant [mailto:paul.durr...@citrix.com]
> Sent: Wednesday, February 17, 2016 4:58 PM
>
> > btw does this design consider the case where multiple ioreq servers
> > may claim the same page?
>
> Yes it does, and there are currently insufficient page types to allow any more
> than a [...]
>>> On 17.02.16 at 09:58, wrote:
>> I'd envisaged that setting the HVM_emulate_0 type on a page would mean
>> nothing until an [...]
>
> For "mean nothing": what is the default policy then, if the guest happens
> to access it before any ioreq server claims it?

My thoughts were that, since no spe[...]
[snip]
> I'm afraid I have made little progress due to the distractions of trying to
> get some patches into Linux, but my thoughts are around replacing
> HVM_mmio_write_dm with something like HVM_emulate_0 (i.e. the zero-th
> example [...]
On Fri, Feb 5, 2016 at 3:13 PM, Zhiyuan Lv wrote:
>> My question is, suppose a single GTT / gpu thread / tree has 9000
>> ranges. It would be trivial for an attacker to break into the
>> operating system and *construct* such a tree, but it's entirely
>> possible that due to a combination of memory [...]
On Fri, Feb 5, 2016 at 3:44 AM, Tian, Kevin wrote:
>> > So as long as the currently-in-use GTT tree contains no more than
>> > $LIMIT ranges, you can unshadow and reshadow; this will be slow, but
>> > strictly speaking correct.
>> >
>> > What do you do if the guest driver switches to a GTT such th[...]
>>> On 05.02.16 at 10:24, wrote:
> Utilizing the default server is a backwards step. GVT-g would have to use the
> old HVM_PARAM mechanism to cause its emulator to become default. I think a
> more appropriate mechanism would be for p2m_mmio_write_dm to become something
> like 'p2m_ioreq_server_wri[...]
>>> On 05.02.16 at 04:44, wrote:
> This is why Yu mentioned earlier whether we can just set a default
> limit which is good for the majority of use cases, while extending our
> device model to drop/recreate some shadow tables when the limit
> is hit. I think this matches how today's CPU shadow pag[...]
>>> On 04.02.16 at 18:12, wrote:
> Two angles on this.
>
> First, assuming that limiting the number of ranges is what we want: I'm
> not really a fan of using HVM_PARAMs for this, but as long as it's not
> considered a public interface (i.e., it could go away or disappear and
> everything would [...]
Paul Durrant writes ("RE: [Xen-devel] [PATCH v3 3/3] tools: introduce
parameter max_wp_ram_ranges."):
> There are patches in the XenGT xen repo which add extra parameters
> into the VM config to allow libxl to provision a GVT-g instance (of
> which there are a finite number per [...]
>>> On 04.02.16 at 14:47, wrote:
>> From: Ian Jackson [mailto:ian.jack...@eu.citrix.com]
>> Sent: 04 February 2016 13:34
>> * Is it possible for libxl to somehow tell from the rest of the
>>   configuration that this larger limit should be applied?
>>
>>   AFAICT there is nothing in libxl dir[...]
On Thu, Feb 4, 2016 at 9:38 AM, Yu, Zhang wrote:
> On 2/4/2016 5:28 PM, Paul Durrant wrote:
>> I assume this means that the emulator can 'unshadow' GTTs (I guess on an
>> LRU basis) so that it can shadow new ones when the limit has been exhausted?
>> If so, how bad is performance likely to be if w[...]
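The "unshadow on an LRU basis" idea Paul describes can be sketched as follows. All names here are hypothetical and GVT-g's real shadow logic is considerably more involved; this only shows the eviction policy itself:

```c
/* Illustrative only: a tiny LRU over shadowed guest page-table pages. */
#define SHADOW_LIMIT 4

struct shadow_gtt {
    unsigned long gfn;      /* guest frame being write-protected */
    unsigned long last_use; /* logical timestamp */
    int in_use;
};

static struct shadow_gtt slots[SHADOW_LIMIT];
static unsigned long clock_tick;

/* Touch (or create) a shadow for 'gfn'; when the table is full, the
 * least recently used entry is unshadowed to make room.
 * Returns the evicted gfn, or 0 if nothing was evicted. */
static unsigned long shadow_touch(unsigned long gfn)
{
    /* already shadowed: just refresh its timestamp */
    for (int i = 0; i < SHADOW_LIMIT; i++) {
        if (slots[i].in_use && slots[i].gfn == gfn) {
            slots[i].last_use = ++clock_tick;
            return 0;
        }
    }

    /* not shadowed: pick a free slot, else the least recently used */
    int victim = 0;
    for (int i = 0; i < SHADOW_LIMIT; i++) {
        if (!slots[i].in_use) { victim = i; break; }
        if (slots[i].last_use < slots[victim].last_use)
            victim = i;
    }

    unsigned long evicted = slots[victim].in_use ? slots[victim].gfn : 0;
    slots[victim].in_use = 1;
    slots[victim].gfn = gfn;
    slots[victim].last_use = ++clock_tick;
    return evicted;   /* caller would drop write-protection on 'evicted' */
}
```

This is exactly the "slow but strictly correct" behaviour discussed above: a working set larger than the limit still functions, at the cost of repeated unshadow/reshadow cycles.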
On Thu, Feb 4, 2016 at 8:51 AM, Yu, Zhang wrote:
>> Going forward, we probably will, at some point, need to implement a
>> parallel "p2t" structure to keep track of types -- and probably will
>> whether we end up implementing 4 separate write_dm types or not (for the
>> reasons you describe).
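The "p2t" idea — a parallel structure mapping each gfn to a type once spare p2m-entry bits run out — could look roughly like a byte-per-gfn array. This is illustrative only, not an actual Xen interface; all names are invented:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative sketch of a "p2t": when spare bits in the p2m entries run
 * out, keep page-type information in a separate byte-per-gfn array. */
enum p2t_type {
    P2T_NONE = 0,
    P2T_WRITE_DM_0,   /* claimed by ioreq server 0 */
    P2T_WRITE_DM_1,
    P2T_WRITE_DM_2,
    P2T_WRITE_DM_3,
};

struct p2t {
    uint8_t *type;        /* one byte per gfn */
    unsigned long max_gfn;
};

static int p2t_init(struct p2t *t, unsigned long max_gfn)
{
    t->type = calloc(max_gfn + 1, 1);  /* all pages start as P2T_NONE */
    t->max_gfn = max_gfn;
    return t->type ? 0 : -1;
}

static void p2t_set(struct p2t *t, unsigned long gfn, enum p2t_type ty)
{
    if (gfn <= t->max_gfn)
        t->type[gfn] = (uint8_t)ty;
}

static enum p2t_type p2t_get(const struct p2t *t, unsigned long gfn)
{
    return gfn <= t->max_gfn ? (enum p2t_type)t->type[gfn] : P2T_NONE;
}
```

The trade-off against rangesets: a byte per gfn costs memory proportional to guest size regardless of how many pages are tracked, but lookup is O(1) and the per-domain range limit disappears.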
>>> On 04.02.16 at 10:38, wrote:
> So another question is, if the value of this limit really matters, will a
> lower one be more acceptable (the current 256 being not enough)?

If you've carefully read George's replies, a primary aspect is
whether we wouldn't better revert commit f5a32c5b8e
("x86/HVM: [...]
On Wed, 2016-02-03 at 17:41, George Dunlap wrote:
> But of course, since they aren't actually ranges but just gpfns,
> they're scattered randomly throughout the guest physical address
> space.

(Possibly) stupid question:

Since, AIUI, the in-guest GPU driver is XenGT-aware, could it not [...]
[snip]
> >>> Compare this to the downsides of the approach you're proposing:
> >>> 1. Using 40 bytes of hypervisor space per guest GPU pagetable page (as
> >>> opposed to using a bit in the existing p2m table)
> >>> 2. Walking down an RB tree with 8000 individual nodes [...]
On 03/02/16 18:21, George Dunlap wrote:
> 2. It's not technically difficult to extend the number of servers
> supported to something sensible, like 4 (using 4 different write_dm
> p2m types)

While technically true, spare bits in the pagetable entries are at a
premium, and steadily decreasing as In[...]
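To make the cost concrete: distinguishing four write_dm variants takes two spare bits per page-table entry. A hedged illustration of the masking arithmetic follows; the bit positions are invented for the example and do NOT match real EPT entries:

```c
#include <stdint.h>

/* Why "4 write_dm p2m types" costs two spare bits per entry.
 * Bit positions invented for illustration only. */
#define WD_SHIFT  57            /* pretend bits 57-58 are spare */
#define WD_MASK   (3ULL << WD_SHIFT)

/* Tag an entry as claimed by ioreq server 'id' (0-3). */
static uint64_t pte_set_write_dm(uint64_t pte, unsigned int id)
{
    return (pte & ~WD_MASK) | ((uint64_t)(id & 3) << WD_SHIFT);
}

static unsigned int pte_get_write_dm(uint64_t pte)
{
    return (unsigned int)((pte & WD_MASK) >> WD_SHIFT);
}
```

Andrew's objection is precisely that such spare bits are a scarce, contended resource, so consuming two of them for ioreq-server identification needs strong justification.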
On Wed, Feb 3, 2016 at 6:21 PM, George Dunlap wrote:
> I really don't understand where you're coming from on this. The
> approach you've chosen looks to me to be slower, more difficult to
> implement, and more complicated; and it's caused a lot more resistance
> trying to get this series accepted [...]
On Wed, Feb 3, 2016 at 5:41 PM, George Dunlap wrote:
> I think at some point I suggested an alternate design based on marking
> such gpfns with a special p2m type; I can't remember if that
> suggestion was actually addressed or not.

FWIW, the thread where I suggested using p2m types was in respon[...]
On Wed, Feb 3, 2016 at 3:10 PM, Paul Durrant wrote:
>> * Is it possible for libxl to somehow tell from the rest of the
>>   configuration that this larger limit should be applied?
>
> If a XenGT-enabled VM is provisioned through libxl then some larger limit is
> likely to be required. One o[...]
Paul Durrant writes ("RE: [Xen-devel] [PATCH v3 3/3] tools: introduce
parameter max_wp_ram_ranges."):
> > From: Jan Beulich [mailto:jbeul...@suse.com]
...
> > I wouldn't be happy with that (and I've said so before), since it
> > would allow all VMs this extra [...]
[snip]
> >> >> I'm getting the impression that we're moving in circles. A blanket
> >> >> limit above the 256 one for all domains is _not_ going to be
> >> >> acceptable; going to 8k will still need host admin consent. With
> >> >> your rangeset performance improvement [...]
>>> On 02.02.16 at 16:00, wrote:
> The limit of 4G is to avoid losing data in the uint64-to-uint32
> assignment. And I can accept the 8K limit for XenGT in practice.
> After all, it is vGPU page tables we are trying to trap and emulate,
> not normal page frames.
>
> And I guess the reason that [...]
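The 4G ceiling mentioned above exists because the 64-bit parameter value is eventually narrowed to a 32-bit field, and anything above UINT32_MAX would be silently truncated. A sketch of the guard (function name hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

/* Reject parameter values that would not survive the
 * uint64 -> uint32 narrowing described in the thread. */
static bool wp_ram_ranges_value_ok(uint64_t value)
{
    return value <= UINT32_MAX;
}
```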
>>> On 02.02.16 at 12:31, wrote:
> This specific issue concerns resource allocation during domain building
> and is an area which can never ever be given to a less privileged entity.

Which is because of ...? (And if so, why would we have put
XEN_DOMCTL_createdomain on the XSA-77 waiver list?)

Jan
>>> On 02.02.16 at 11:56, wrote:
> I understand your concern, and to be honest, I do not think
> this is an optimal solution. But I also have no better idea
> in mind. :(
> Another option may be: instead of opening this parameter to
> the tool stack, we use a XenGT flag, which sets the rangeset [...]
>>> On 01.02.16 at 18:05, wrote:
> Having said that, if the hypervisor maintainers are happy with a
> situation where this value is configured explicitly, and the
> configurations where a non-default value is required are expected to be
> rare, then I guess we can live with it.

Well, from the very [...]
On 2/2/2016 12:35 AM, Jan Beulich wrote:
>>>> On 01.02.16 at 17:19, wrote:
>> After a second thought, I guess one of the security concerns
>> is when some app is trying to trigger HVMOP_set_param
>> directly with some illegal values.
>
> Not sure what "directly" is supposed to mean here.

I mean with no [...]
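The concern raised here — an application issuing HVMOP_set_param directly with illegal values, bypassing any toolstack sanity checks — is why validation must live in the hypervisor as well. A hedged sketch of such a check; the names and limits are illustrative, not the actual Xen code:

```c
#include <stdint.h>

/* Even if the toolstack sanitises config values, the hypercall can be
 * issued directly, so the hypervisor must re-check. Illustrative only. */
#define DEFAULT_MAX_RANGES   256u
#define ABSOLUTE_MAX_RANGES  (1u << 31)   /* hypothetical hard ceiling */

/* Returns 0 on success, -22 (-EINVAL) for out-of-range values. */
static int check_max_wp_ram_ranges(uint64_t value)
{
    if (value == 0)
        return -22;                 /* zero would disable tracking entirely */
    if (value > ABSOLUTE_MAX_RANGES)
        return -22;                 /* reject absurd allocations outright */
    /* A real implementation might additionally require host-admin consent
     * (e.g. an XSM check) for anything above the default. */
    return 0;
}
```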
>>> On 01.02.16 at 17:19, wrote:
> After a second thought, I guess one of the security concerns
> is when some app is trying to trigger HVMOP_set_param
> directly with some illegal values.

Not sure what "directly" is supposed to mean here.

> So, we also need to validate this param in hvm_allow_set_param [...]
>>> On 01.02.16 at 16:14, wrote:
> But I still do not quite understand. :)
> If the tool stack can guarantee the validity of a parameter,
> under which circumstances will the hypervisor be threatened?

At least in disaggregated environments the hypervisor cannot
trust the (parts of the) tool stack(s) livi[...]
On 2/1/2016 7:57 PM, Wei Liu wrote:
On Fri, Jan 29, 2016 at 06:45:14PM +0800, Yu Zhang wrote:

> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 25507c7..0c19dee 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -35,6 +35,7 @@
>  #include [...]
>  #include [...]
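The hunk above is truncated by the archive. As a rough illustration of what parsing such an integer config option involves on the xl side, here is a stand-in routine; it is not the actual patch, and xl itself uses its `xlu_cfg_get_long()` helpers rather than raw `strtoull`:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative stand-in for parsing an integer config option such as
 * max_wp_ram_ranges from an xl config file value. Accepts decimal,
 * hex (0x...), or octal via base 0. */
static int parse_u64_option(const char *s, uint64_t *out)
{
    char *end;
    errno = 0;
    unsigned long long v = strtoull(s, &end, 0);
    if (errno || end == s || *end != '\0')
        return -1;                  /* not a clean number */
    *out = v;
    return 0;
}
```

Any value parsed this way would still need the hypervisor-side range check discussed elsewhere in the thread, since the toolstack cannot be the only line of defence.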
>>> On 29.01.16 at 11:45, wrote:
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c