On Thu, Nov 9, 2017 at 10:03 AM, Roger Pau Monné
wrote:
> On Thu, Nov 09, 2017 at 08:15:52AM -0700, Mike Reardon wrote:
> > On Thu, Nov 9, 2017 at 2:30 AM, Roger Pau Monné
> > wrote:
> >
> > > Please try to avoid top-posting.
> > >
> > > On Wed
On Thu, Nov 9, 2017 at 2:30 AM, Roger Pau Monné
wrote:
> Please try to avoid top-posting.
>
> On Wed, Nov 08, 2017 at 08:27:17PM -0700, Mike Reardon wrote:
> > So am I correct in reading this that for at least the foreseeable future
> > storage using 4k sector sizes is n
So am I correct in reading this that, for at least the foreseeable future,
storage using 4k sector sizes is not going to happen? I'm just trying to
figure out whether I need to get some different hardware.
Thank you!
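(For reference, a quick way to check what a disk actually advertises to the
host - a native-4k disk reports 4096 for both logical and physical sector
size; the device names below are examples only:

  blockdev --getss --getpbsz /dev/sdb          # raw disk
  blockdev --getss --getpbsz /dev/vg4k/lvtest  # LV carved from it
  cat /sys/block/sdb/queue/logical_block_size \
      /sys/block/sdb/queue/physical_block_size

512-emulation (512e) drives report 512/4096 and should not be affected by
the blkback limitation being discussed here.)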
On Tue, Nov 7, 2017 at 5:41 AM, Roger Pau Monné
wrote:
> On Tue, Nov 07, 2017 at 04:31:
Hello,
I had originally posted about this issue to win-pv-devel but it was
suggested this is actually an issue in blkback.
I added some additional storage to my server with some native 4k sector
size disks. The LVM volumes on that array seem to work fine when mounted
by the host, and when passed
th RT host and 1 RT guest by just having
the guest do a parallel kbuild over NFS (the guest had to be restored
afterward, was corrupted). I'm currently flogging 2 guests as well as
the host, whimper free. I'll let the lot broil for a while longer, but
at this point, smoke/flame
do this for a special purpose.
> so the behavior may well be unspecified or model-specific. Neither
> of which I'm in the position to comment on, so I can only defer to
> the Intel guys.
Thanks. :)
>
> Jan
--
Regards,
Longpeng(Mike)
…enter C1/C1E state?
2) If it won't, will it release the hardware resources shared with the
other hyper-thread?
Any suggestion would be greatly appreciated, thanks!
--
Regards,
Longpeng(Mike)
> On Tue, Dec 22, 2015 at 21:59 +0100, Mike Belopuhov wrote:
> > Hi,
> >
> > I'm trying to get grant table sub page mappings working on Xen 4.5.
> > I know there have been some changes in the trunk regarding moving src/
> > dst checks closer together, but
                 GNTST_general_error,
                 "copy dest out of bounds: %d < %d || %d > %d\n",
                 op->dest.offset, dest_off,
                 op->len, dest_len);
    }
I fail to understand what I am doing wrong in this case. Any c
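For reference, a minimal sketch of a GNTTABOP_copy into a sub-page grant,
using only the public grant-table interface. This assumes the destination
ref was created as a grant v2 sub-page entry covering [sub_off,
sub_off + sub_len) of the granted frame; the function name, header paths
and values are hypothetical, and error handling is minimal:

  #include <xen/interface/grant_table.h>  /* Linux guest header paths */
  #include <asm/xen/hypercall.h>

  static int copy_into_subpage_grant(domid_t granter, grant_ref_t gref,
                                     uint16_t sub_off, uint16_t sub_len,
                                     xen_pfn_t local_gmfn)
  {
      struct gnttab_copy op = {
          .source.u.gmfn = local_gmfn,   /* copy out of a local frame */
          .source.domid  = DOMID_SELF,
          .source.offset = 0,
          .dest.u.ref    = gref,         /* the sub-page grant */
          .dest.domid    = granter,
          /* Stay inside the granted window: offsets below sub_off or
           * lengths beyond sub_len trigger exactly the "copy dest out
           * of bounds" failure quoted above. */
          .dest.offset   = sub_off,
          .len           = sub_len,
          .flags         = GNTCOPY_dest_gref,
      };

      if (HYPERVISOR_grant_table_op(GNTTABOP_copy, &op, 1))
          return -1;                     /* the hypercall itself failed */
      return op.status == GNTST_okay ? 0 : op.status;
  }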
Hello,
I'm doing some review of the Xen source code. Does x86_emulate() (from
x86_emulate.c) execute for every guest, or only when a machine doesn't
have hardware-assisted virtualization?
Thanks,
Mike
On Wednesday, October 07, 2015 12:52:02 PM Ian Campbell wrote:
> Applied.
>
> Mike, FWIW for singleton patches it is normally ok to dispense with the 0/1
> mail and to just send the patch by itself. If there is commentary which
> doesn't belong in the commit message yo
Hi,
V3 of this patch modifies the comments on check_sharing to document the
change in the return string. This change was needed so that the error
string in check_file_sharing can include the device causing the sharing
conflict.
Thanks,
Mike
Mike Latimer (1):
tools/hotplug: Scan xenstore
once, and the major and minor numbers from every vbd are checked against the
list. If a match is found, the mode of that vbd is checked for compatibility
with the mode of the device being attached.
Signed-off-by: Mike Latimer
---
tools/hotplug/Linux/block | 89
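The single-scan idea in the patch above, roughly sketched (xenstore key
names and the physical-device format are from memory and may not match the
real block script exactly):

  base=/local/domain/0/backend/vbd
  devmm=","                      # ','-delimited "major:minor:mode" entries
  for front in $(xenstore-list "$base" 2>/dev/null); do
      for dev in $(xenstore-list "$base/$front" 2>/dev/null); do
          mm=$(xenstore-read "$base/$front/$dev/physical-device" 2>/dev/null)
          mode=$(xenstore-read "$base/$front/$dev/mode" 2>/dev/null)
          [ -n "$mm" ] && devmm="$devmm$mm:$mode,"
      done
  done
  # a write-mode attach conflicts with any existing user of the same
  # major:minor (read-only sharing is fine):
  case "$devmm" in
      *",$new_mm:w,"*) echo "sharing conflict on $new_mm" >&2 ;;
  esac

The point is that xenstore is walked once per attach instead of once per
existing vbd, which is what brings the attach times quoted later in the
thread down to about a second.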
1:11.
Finally, I added a more complete description of the problem to the patch
itself.
Thanks,
Mike
Mike Latimer (1):
tools/hotplug: Scan xenstore once when attaching shared images files
tools/hotplug/Linux/block | 76 +++
1 file changed, 50
Signed-off-by: Mike Latimer
---
tools/hotplug/Linux/block | 76
case ... in *$a*) : ;; esac
I can implement a case statement, but that seems even less clean than the
simple [[ ... ]] approach (since there is only one case we care about).
As this is a #!/bin/bash script, is [[ ... ]] okay to use, or would you prefer
to use the case statement? (If you have any other idea
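For comparison, the two forms under discussion, using the devmm list
mentioned later in the thread (purely illustrative):

  # bash-only [[ ... ]] test:
  if [[ "$devmm" == *",$d,"* ]]; then
      fatal "device $d already in use"
  fi

  # POSIX-portable case equivalent:
  case "$devmm" in
      *",$d,"*) fatal "device $d already in use" ;;
  esac

(fatal here stands in for whatever error path the block script already
uses.)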
ven though it's not being
> used by anyone else?
>
> Would it make more sense maybe to initialize devmm to ",", and then
> search for *",$d,"*?
Ah, thanks for catching that! I caught the "1:11" case, but somehow missed the
"11:1" side.
I'll address that, and your other comments and submit a V2 shortly.
-Mike
Signed-off-by: Mike Latimer
---
tools/hotplug/Linux/block | 67 +--
1 file changed, 41 insertions(+), 26 deletions(-)
diff --git a/tools/hotplug/Linux/block b/tools/hotplug/Linux/block
index 8d2ee9d..aef051c 100644
--- a/tools/hotplug/Linux/block
image file.
Thanks,
Mike
[1] http://lists.xenproject.org/archives/html/xen-devel/2015-09/msg03551.html
Mike Latimer (1):
tools/hotplug: Scan xenstore once when attaching shared images files
tools/hotplug/Linux/block | 67 +--
1 file changed, 41
Hi Ian,
On Tuesday, September 29, 2015 10:25:32 AM Ian Campbell wrote:
> On Mon, 2015-09-28 at 17:14 -0600, Mike Latimer wrote:
> > Any better options or ideas?
>
> Is part of the problem that shell is a terrible choice for this kind of
> check?
There is some truth to th
On Tue, Sep 29, 2015 at 13:23 +0100, Andrew Cooper wrote:
> On 29/09/15 13:03, Mike Belopuhov wrote:
> > On Mon, Sep 28, 2015 at 11:24 +0200, Mike Belopuhov wrote:
> >> On Fri, Sep 25, 2015 at 01:12 -0600, Jan Beulich wrote:
> >>>>>> On 22.09.15 at 16:02, wr
On Mon, Sep 28, 2015 at 11:24 +0200, Mike Belopuhov wrote:
> On Fri, Sep 25, 2015 at 01:12 -0600, Jan Beulich wrote:
> > >>> On 22.09.15 at 16:02, wrote:
> > > --- xen/include/public/arch-x86/pmu.h
> > > +++ xen/include/public/arch-x86/pmu.h
> >
> >
mains, the size of this list could still be an issue.
With the last option above in place, all of my tests showed a block attach
time of around 1 second. Without the change, I saw block attach times from 1
to 1500 seconds (with fewer than 40 domains sharing one device).
Any better options or ideas?
On Fri, Sep 25, 2015 at 01:12 -0600, Jan Beulich wrote:
> >>> On 22.09.15 at 16:02, wrote:
> > --- xen/include/public/arch-x86/pmu.h
> > +++ xen/include/public/arch-x86/pmu.h
>
> I fixed this up for you this time, but in the future please make sure
> you send patches in conventional format (appli
On Tue, Sep 22, 2015 at 09:00 -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Sep 22, 2015 at 01:42:14PM +0200, Mike Belopuhov wrote:
> > On Fri, Sep 18, 2015 at 10:13 -0400, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Sep 18, 2015 at 08:00:28AM -0400, Bor
there
was no license. It is possible to update or add additional years if major
changes have been made to the file, but it is generally not a requirement.
Signed-off-by: Mike Belopuhov
---
xen/include/public/arch-x86/pmu.h | 22 ++
xen/include/public/hvm/e820.h
On Fri, Sep 18, 2015 at 10:13 -0400, Konrad Rzeszutek Wilk wrote:
> On Fri, Sep 18, 2015 at 08:00:28AM -0400, Boris Ostrovsky wrote:
> >
> >
> > On 09/18/2015 04:44 AM, Ian Campbell wrote:
> > >On Thu, 2015-09-17 at 13:53 +0200, Mike Belopuhov wrote:
> >
Signed-off-by: Mike Belopuhov
---
xen/include/public/arch-x86/pmu.h | 22 ++
xen/include/public/hvm/e820.h | 3 ++-
xen/include/public/hvm/hvm_info_table.h | 2 ++
xen/include/public/hvm/hvm_op.h | 2 ++
xen/include/public/hvm/hvm_xs_strings.h | 2
On Thu, Sep 10, 2015 at 17:05 +0000, Lars Kurth wrote:
>
>
> On 10/09/2015 17:26, "Roger Pau Monné" wrote:
>
> >CCing Lars (the community manager).
> >
> >On 09/09/15 at 14:11, Mike Belopuhov wrote:
> >> Hi,
> >>
> >
…copyright line mentioning the Xen project or an individual contributor, like
it's done in other places in the Xen source code?
With kind regards,
Mike
> > This driver already makes use of ioremap_wc() on PIO buffers, so
> > convert it to use arch_phys_wc_add().
>
> This is probably OK, but I think you should also remove the qib_wc_pat module
> parameter.
>
> Jason
Revise based on Jason's request a
…libxl_wait_for_memory_target with a 1 second timeout -
which doesn't leave a huge amount of room for slow memory allocation. This
timeout, as well as the logic in general, should be changed to match the new
xl behavior (IMO). I expect this to really only matter when dealing with large
domains.
-Mike
On Tuesday, March 03, 2015 02:54:50 PM Mike Latimer wrote:
> Thanks for all the help and patience as we've worked through this. Ack to
> the whole series:
>
> Acked-by: Mike Latimer
I guess the more correct response is:
Reviewed-by: Mike Latimer
Tested-by: Mike Latimer
s down just the required amount. Also, domU
startup works the first time, as it correctly waits until memory is freed.
(Using dom0_mem is still a preferred option, as the ballooning delay can be
significant.)
Thanks for all the help and patience as we
libxl_set_memory_target: Balloon dom0 to free memory for domU
libxl_wait_for_free_memory: Wait for free memory for domU (max 10 seconds)
libxl_wait_for_memory_target: Wait for dom0 to finish ballooning
Decrement retry and try again
Shouldn't libxl_wait_for_memory_target be before libxl_wait_for_free_memory?
Thanks,
Mike
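A paraphrase of the loop with the reordering suggested above - this is a
sketch, not the actual xl source; the libxl calls and the
set_memory_target invocation are the ones quoted elsewhere in this thread:

  do {
      rc = libxl_domain_need_memory(ctx, &b_info, &need_memkb);
      if (rc) return rc;
      rc = libxl_get_free_memory(ctx, &free_memkb);
      if (rc) return rc;
      if (free_memkb >= need_memkb)
          return 0;                             /* enough memory freed */
      rc = libxl_set_memory_target(ctx, 0, free_memkb - need_memkb, 1, 0);
      if (rc) return rc;
      /* first let dom0 actually reach its new target ... */
      rc = libxl_wait_for_memory_target(ctx, 0, 10);
      if (rc) return rc;
      /* ... and only then check whether enough is free for the domU */
      rc = libxl_wait_for_free_memory(ctx, domid, need_memkb, 10);
      if (!rc) return 0;
      if (rc != ERROR_NOMEM) return rc;
  } while (--retries > 0);
  return ERROR_NOMEM;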
recommended approach. It just does not seem right to require that (or to
expect the first domU startup to fail without it).
Thanks,
Mike
On Friday, February 27, 2015 11:29:12 AM Mike Latimer wrote:
> On Friday, February 27, 2015 08:28:49 AM Mike Latimer wrote:
> After adding 2048aeec, dom0's target is lowered by the required amount (e.g.
> 64GB), but as dom0 cannot balloon down fast enough,
> libxl_wait_for_memory_
On Friday, February 27, 2015 08:28:49 AM Mike Latimer wrote:
> On Friday, February 27, 2015 10:52:17 AM Stefano Stabellini wrote:
> > On Thu, 26 Feb 2015, Mike Latimer wrote:
> > >libxl_set_memory_target = 1
> >
> > The new memory targe
On Friday, February 27, 2015 10:52:17 AM Stefano Stabellini wrote:
> On Thu, 26 Feb 2015, Mike Latimer wrote:
> >libxl_set_memory_target = 1
>
> The new memory target is set for dom0 successfully.
>
> >libxl_wait_for_free_memory = -5
>
> Still there i
On Thursday, February 26, 2015 01:45:16 PM Mike Latimer wrote:
> On Thursday, February 26, 2015 05:53:06 PM Stefano Stabellini wrote:
> > What is the return value of libxl_set_memory_target and
> > libxl_wait_for_free_memory in that case? Isn't it just a matter of
> >
would help, but if the timeout were insufficient (say
when dealing with very large guests), it wouldn't solve the problem.
-Mike
(Sorry for the delayed response, dealing with ENOTIME.)
On Thursday, February 26, 2015 05:47:21 PM Ian Campbell wrote:
> On Thu, 2015-02-26 at 10:38 -0700, Mike Latimer wrote:
>
> >rc = libxl_set_memory_target(ctx, 0, free_memkb - need_memkb, 1, 0);
>
> I think so. In essen
On Thursday, February 26, 2015 03:57:54 PM Ian Campbell wrote:
> On Thu, 2015-02-26 at 08:36 -0700, Mike Latimer wrote:
> > There is still one aspect of my original patch that is important. As the
> > code currently stands, the target for dom0 is set lower during each
> >
On Wednesday, February 25, 2015 02:09:50 PM Stefano Stabellini wrote:
> > Is the upshot that Mike doesn't need to do anything further with his
> > patch (i.e. can drop it)? I think so?
>
> Yes, I think so. Maybe he could help out testing the patches I am going
> t
There is still a problem with xl's freemem loop, but we can investigate that
further with slack out of the picture.
From my side:
Acked-by: Mike Latimer
Thanks,
Mike
Hi Wei,
On Friday, February 13, 2015 11:13:50 AM Wei Liu wrote:
> On Tue, Feb 10, 2015 at 02:34:27PM -0700, Mike Latimer wrote:
> > On Monday, February 09, 2015 06:27:54 PM Mike Latimer wrote:
> > > It seems that there are two approaches to resolve this:
> > > - Introd
Hi Wei,
On Friday, February 13, 2015 11:01:41 AM Wei Liu wrote:
> On Tue, Feb 10, 2015 at 09:17:23PM -0700, Mike Latimer wrote:
> > Prior to my changes, this issue would only be noticed when starting very
> > large domains - due to the loop being limited to 3 iterations. (For
> >
On Thursday, February 05, 2015 12:45:53 PM Ian Campbell wrote:
> On Mon, 2015-02-02 at 08:17 -0700, Mike Latimer wrote:
> > On Monday, February 02, 2015 02:35:39 PM Ian Campbell wrote:
> > > On Fri, 2015-01-30 at 14:01 -0700, Mike Latimer wrote:
> > > > During domai
On Monday, February 09, 2015 06:27:54 PM Mike Latimer wrote:
> While testing commit 2563bca1, I found that libxl_get_free_memory returns 0
> until there is more free memory than required for freemem-slack. This means
> that during the domain creation process, freed memory is first set
ch is the best approach (or did I miss something)?
Thanks!
Mike
On Monday, February 02, 2015 02:35:39 PM Ian Campbell wrote:
> On Fri, 2015-01-30 at 14:01 -0700, Mike Latimer wrote:
> > During domain startup, all required memory ballooning must complete
> > within a maximum window of 33 seconds (3 retries, 11 seconds of delay).
> > If no
During domain startup, all required memory ballooning must complete
within a maximum window of 33 seconds (3 retries, 11 seconds of delay).
If not, domain creation is aborted with a 'failed to free memory' error.
In order to accommodate large domains or slower hardware (which require
substantially
On Friday, January 30, 2015 01:04:00 PM Mike Latimer wrote:
> +    if (free_memkb > free_memkb_prev) {
> +        retries = MAX_RETRIES;
> +        free_memkb_prev = free_memkb;
> +    } else {
> +        retires--;
> +    }
Please ignore. Typo: "retires" should be "retries".
separate patch though.
xtl_progress looks interesting. I'll do some additional testing before I
submit a patch containing this improvement.
-Mike
Ah, sorry about that, wrong list indeed.
On Wednesday, January 28, 2015, Ian Campbell
wrote:
> Hi Mike,
>
> On Tue, 2015-01-27 at 14:47 -0700, Mike Tutkowski wrote:
>
> > Xen does not like the fact that both SRs have the same UUID (and, in
> > fact, VDIs in each
On Wednesday, January 28, 2015 01:05:25 PM Ian Campbell wrote:
> On Wed, 2015-01-21 at 22:22 -0700, Mike Latimer wrote:
>
> Sorry for the delay.
No problem! Thanks for the comments.
> > @@ -2228,7 +2230,13 @@ static int freemem(uint32_t domid,
> > libxl_doma
(and a noticeable amount of time compared to backend
snapshots, which can be almost instantaneous).
What do you recommend?
Thanks!
--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud
<http://solid
On Wednesday, January 21, 2015 10:22:53 PM Mike Latimer wrote:
> During domain startup, all required memory ballooning must complete
> within a maximum window of 33 seconds (3 retries, 11 seconds of delay).
> If not, domain creation is aborted with a 'failed to free memory' error.
le
> with the timeout, however, so I'm having second thoughts about adding
> new options.
Ok. Given Ian's comment about ballooning down being serialized, should I send
an official patch for further review?
Thanks,
Mike
, or
> to increase the period of the checks; but ultimately at some point
> someone (either xl or the human) needs to timeout and say, "This is
> never going to finish". 10s seems like a very conservative default.
Agreed. Is a better solution to increase the tim
        free_memkb_prev = free_memkb;
    } while (retries > 0);
    return ERROR_NOMEM;
--
I'm not sure if the above approach is always safe, but it works in my testing.
I'd appreciate any other thoughts you might have before I try submitting an
officia
wait if it
is progressing, might be a better approach.
Any ideas?
Thanks,
Mike
1. http://lists.xen.org/archives/html/xen-devel/2014-12/msg01443.html
ere enough to kill the machine.)
Any thoughts on handling this?
Thanks,
Mike