On 1/27/2016 11:58 PM, Jan Beulich wrote:
On 27.01.16 at 16:23, wrote:
On 1/27/2016 11:12 PM, Jan Beulich wrote:
On 27.01.16 at 15:56, wrote:
On 1/27/2016 10:32 PM, Jan Beulich wrote:
On 27.01.16 at 15:13, wrote:
About the truncation issue:
I do not quite follow. Will this hurt if the value configured does
not exceed 4G? What about a type cast?
>>> On 27.01.16 at 16:23, wrote:
>
> On 1/27/2016 11:12 PM, Jan Beulich wrote:
> On 27.01.16 at 15:56, wrote:
>>> On 1/27/2016 10:32 PM, Jan Beulich wrote:
>>> On 27.01.16 at 15:13, wrote:
> About the truncation issue:
> I do not quite follow. Will this hurt if the value configured does
> not exceed 4G? What about a type cast?
On 1/27/2016 11:12 PM, Jan Beulich wrote:
On 27.01.16 at 15:56, wrote:
On 1/27/2016 10:32 PM, Jan Beulich wrote:
On 27.01.16 at 15:13, wrote:
About the truncation issue:
I do not quite follow. Will this hurt if the value configured does
not exceed 4G? What about a type cast?
A typecast
>>> On 27.01.16 at 15:56, wrote:
> On 1/27/2016 10:32 PM, Jan Beulich wrote:
> On 27.01.16 at 15:13, wrote:
>>> About the truncation issue:
>>> I do not quite follow. Will this hurt if the value configured does
>>> not exceed 4G? What about a type cast?
>>
>> A typecast would not alter be
On 1/27/2016 10:32 PM, Jan Beulich wrote:
On 27.01.16 at 15:13, wrote:
About the default value:
You are right. :) For XenGT, MAX_NR_IO_RANGES may only work under
limited conditions. Having it default to zero means XenGT users must
manually configure this option. Since we have plans to push other XenGT
tool stack parameter
>>> On 27.01.16 at 15:13, wrote:
> About the default value:
>You are right. :) For XenGT, MAX_NR_IO_RANGES may only work under
> limited conditions. Having it default to zero means XenGT users must
> manually configure this option. Since we have plans to push other XenGT
> tool stack parameter
On 1/27/2016 6:27 PM, Jan Beulich wrote:
On 27.01.16 at 08:01, wrote:
On 1/26/2016 7:00 PM, Jan Beulich wrote:
On 26.01.16 at 08:32, wrote:
On 1/22/2016 4:01 PM, Jan Beulich wrote:
On 22.01.16 at 04:20, wrote:
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -940,6 +940,10 @@ static int hvm_ioreq_server_alloc_rangesets(struct
>>> On 27.01.16 at 08:01, wrote:
>
> On 1/26/2016 7:00 PM, Jan Beulich wrote:
> On 26.01.16 at 08:32, wrote:
>>> On 1/22/2016 4:01 PM, Jan Beulich wrote:
>>> On 22.01.16 at 04:20, wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -940,6 +940,10 @@ static int hvm_ioreq_server_alloc_rangesets(struct
On 1/26/2016 7:16 PM, David Vrabel wrote:
On 22/01/16 03:20, Yu Zhang wrote:
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -962,6 +962,24 @@ FIFO-based event channel ABI support up to 131,071 event
channels.
Other guests are limited to 4095 (64-bit x86 and ARM) or 1023 (32-bit
x86).
On 1/26/2016 7:00 PM, Jan Beulich wrote:
On 26.01.16 at 08:32, wrote:
On 1/22/2016 4:01 PM, Jan Beulich wrote:
On 22.01.16 at 04:20, wrote:
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -940,6 +940,10 @@ static int hvm_ioreq_server_alloc_rangesets(struct
hvm_ioreq_server *s,
On 22/01/16 03:20, Yu Zhang wrote:
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -962,6 +962,24 @@ FIFO-based event channel ABI support up to 131,071 event
> channels.
> Other guests are limited to 4095 (64-bit x86 and ARM) or 1023 (32-bit
> x86).
>
> +=item B<max_wp_ram_ranges=N>
> +
> +Limit t
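For context, the option documented in the xl.cfg.pod.5 hunk above would presumably be set in a guest config file; a hypothetical usage line (option name taken from the thread, exact syntax assumed rather than quoted from the full patch):

```
# domU.cfg -- assumed usage of the new option
max_wp_ram_ranges = 8192
```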
>>> On 26.01.16 at 08:32, wrote:
> On 1/22/2016 4:01 PM, Jan Beulich wrote:
> On 22.01.16 at 04:20, wrote:
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -940,6 +940,10 @@ static int hvm_ioreq_server_alloc_rangesets(struct
>>> hvm_ioreq_server *s,
>>> {
>>> unsigned int i;
Thank you, Jan.
On 1/22/2016 4:01 PM, Jan Beulich wrote:
On 22.01.16 at 04:20, wrote:
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -940,6 +940,10 @@ static int hvm_ioreq_server_alloc_rangesets(struct
hvm_ioreq_server *s,
 {
     unsigned int i;
     int rc;
+    unsigned int max_wp_ram_ranges =
>>> On 22.01.16 at 04:20, wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -940,6 +940,10 @@ static int hvm_ioreq_server_alloc_rangesets(struct
> hvm_ioreq_server *s,
>  {
>      unsigned int i;
>      int rc;
> +    unsigned int max_wp_ram_ranges =
> +        ( s->domain
A new parameter, max_wp_ram_ranges, is added to set the upper limit
of write-protected ram ranges to be tracked inside one ioreq server
rangeset.
Ioreq server uses a group of rangesets to track the I/O or memory
resources to be emulated. The default limit of ranges that one rangeset
can allocate is set to MAX_NR_IO_RANGES.