On Wednesday, October 07, 2015 12:52:02 PM Ian Campbell wrote:
> Applied.
>
> Mike, FWIW for singleton patches it is normally ok to dispense with the 0/1
> mail and to just send the patch by itself. If there is commentary which
> doesn't belong in the commit message you can put it below a "---" marker.
Hi,
V3 of this patch modifies the comments on check_sharing to document the
change in the return string. This change was necessary to allow the error
string in check_file_sharing to return the device causing the sharing
conflict.
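For illustration only, here is a simplified, self-contained sketch of that idea (not the actual tools/hotplug/Linux/block code; the device numbers and the 'ok' convention are made up):

  #!/bin/bash
  # Sketch: the sharing check echoes the conflicting device as part of its
  # result string, so the caller can name that device in its error message.
  check_sharing_sketch()
  {
    local devmm="$1"               # major:minor of the device being attached
    local d

    for d in 8:16 8:32 253:0       # stand-in for the devices already in use
    do
      if [ "$d" = "$devmm" ]
      then
        echo "local $d"            # previously this would have been just: echo 'local'
        return
      fi
    done
    echo 'ok'
  }

  result=$(check_sharing_sketch "8:32")
  if [ "$result" != 'ok' ]
  then
    echo "device already in use by $result" >&2
  fi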
Thanks,
Mike
Mike Latimer (1):
tools/hotplug: Scan xenstore once when attaching shared images files
once, and major and minor numbers from every vbd are checked against the list.
If a match is found, the mode of that vbd is checked for compatibility with
the mode of the device being attached.
Signed-off-by: Mike Latimer
---
tools/hotplug/Linux/block | 89
Finally, I added a more complete description of the problem to the patch
itself.
Thanks,
Mike
Mike Latimer (1):
tools/hotplug: Scan xenstore once when attaching shared images files
tools/hotplug/Linux/block | 76 +++
1 file changed, 50
once, and major and minor numbers from every vbd are checked against the list.
If a match is found, the mode of that vbd is checked for compatibility with
the mode of the device being attached.
Signed-off-by: Mike Latimer
---
tools/hotplug/Linux/block | 76
Hi again,
On Thursday, October 01, 2015 10:51:08 AM George Dunlap wrote:
> >
> > - if [ "$d" = "$devmm" ]
> > + if [[ "$devmm" == *"$d,"* ]]
>
> Style nit: using [[ instead of [. TBH I prefer [[, but it's probably
> better to be consistent with the rest of the file.
I was about to change
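For context, a self-contained illustration of the two test styles being discussed; $devmm and $d follow the quoted diff, and the sample values are made up:

  #!/bin/bash
  devmm="8:16,8:32,253:0,"   # comma-terminated list of major:minor pairs
  d="8:32"

  # POSIX [ only does exact string comparison, so this stops working once
  # $devmm holds a list rather than a single value:
  [ "$d" = "$devmm" ] && echo "exact match"

  # bash [[ supports pattern matching, hence the switch in the patch:
  [[ "$devmm" == *"$d,"* ]] && echo "$d is in the list"

  # A [ -compatible alternative would be a case statement:
  case "$devmm" in
    *"$d,"*) echo "$d is in the list (case)" ;;
  esac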
Hi George,
On Thursday, October 01, 2015 10:51:08 AM George Dunlap wrote:
> >       then
> > -        echo 'local'
> > +        echo "local $d"
> >         return
> >       fi
> >     fi
> > @@ -90,13 +107,13 @@ check_sharing()
> >     do
> >       d=$(xenstore_read_default "$base_path/$dom/
Signed-off-by: Mike Latimer
---
tools/hotplug/Linux/block | 67 +--
1 file changed, 41 insertions(+), 26 deletions(-)
diff --git a/tools/hotplug/Linux/block b/tools/hotplug/Linux/block
index 8d2ee9d..aef051c 100644
--- a/tools/hotplug/Linux/block
image file.
Thanks,
Mike
[1]http://lists.xenproject.org/archives/html/xen-devel/2015-09/msg03551.html
Mike Latimer (1):
tools/hotplug: Scan xenstore once when attaching shared images files
tools/hotplug/Linux/block | 67 +--
1 file changed, 41 insertions(+), 26 deletions(-)
Hi Ian,
On Tuesday, September 29, 2015 10:25:32 AM Ian Campbell wrote:
> On Mon, 2015-09-28 at 17:14 -0600, Mike Latimer wrote:
> > Any better options or ideas?
>
> Is part of the problem that shell is a terrible choice for this kind of
> check?
There is some truth to th
Hi,
In an environment with read-only image files being shared across domains, the
block script becomes exponentially slower with every block attached. While
this is irritating with a few domains, it becomes very problematic with
hundreds of domains.
Part of the issue was mentioned in a udev ti
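The approach being described can be sketched roughly as follows (illustrative only, not the actual block script; it assumes a Xen host where xenstore-list/xenstore-read are available, and the backend path and sample values are placeholders):

  #!/bin/bash
  # One pass over xenstore: collect "major:minor,mode" for every existing vbd.
  base_path="/local/domain/0/backend/vbd"   # assumed path for this sketch
  devmm=""
  for dom in $(xenstore-list "$base_path" 2>/dev/null)
  do
    for dev in $(xenstore-list "$base_path/$dom" 2>/dev/null)
    do
      d=$(xenstore-read "$base_path/$dom/$dev/physical-device" 2>/dev/null)
      m=$(xenstore-read "$base_path/$dom/$dev/mode" 2>/dev/null)
      devmm="$devmm$d,$m "
    done
  done

  # Each new attach is then a cheap membership test against the cached list
  # instead of another full walk of xenstore per device.
  new_dev="8:32"                            # example major:minor being attached
  if [[ "$devmm" == *"$new_dev,"* ]]
  then
    echo "$new_dev is already attached; now check mode compatibility"
  fi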
On Thursday, March 05, 2015 05:49:35 PM Ian Campbell wrote:
> On Tue, 2015-03-03 at 11:08 +, Stefano Stabellini wrote:
> > Hi all,
> >
> > this patch series fixes the freemem loop on machines with very large
> > amount of memory, where the current wait time is not enough.
> >
> > In order to
On Tuesday, March 03, 2015 02:54:50 PM Mike Latimer wrote:
> Thanks for all the help and patience as we've worked through this. Ack to
> the whole series:
>
> Acked-by: Mike Latimer
I guess the more correct response is:
Reviewed-by: Mike Latimer
Tested-by: Mike Latimer
dom0 now balloons down just the required amount. Also, domU
startup works the first time, as it correctly waits until memory is freed.
(Using dom0_mem is still a preferred option, as the ballooning delay can be
significant.)
Thanks for all the help and patience as we've worked through this.
On Monday, March 02, 2015 04:15:41 PM Stefano Stabellini wrote:
> On Mon, 2 Mar 2015, Ian Campbell wrote:
> > ? "Continue as long as progress is being made" is exactly what
> > 2563bca1154 "libxl: Wait for ballooning if free memory is increasing"
> > was trying to implement, so it certainly was the
On Monday, March 02, 2015 06:04:11 AM Jan Beulich wrote:
> > Of course users could just use dom0_mem and get down with it.
>
> I don't think we should make this a requirement for correct
> operation.
Exactly. I think from a best practices perspective, dom0_mem is still the
recommended approach.
On Friday, February 27, 2015 11:29:12 AM Mike Latimer wrote:
> On Friday, February 27, 2015 08:28:49 AM Mike Latimer wrote:
> After adding 2048aeec, dom0's target is lowered by the required amount (e.g.
> 64GB), but as dom0 cannot balloon down fast enough,
> libxl_wait_for_memory_
On Friday, February 27, 2015 08:28:49 AM Mike Latimer wrote:
> On Friday, February 27, 2015 10:52:17 AM Stefano Stabellini wrote:
> > On Thu, 26 Feb 2015, Mike Latimer wrote:
> > >libxl_set_memory_target = 1
> >
> > The new memory target is set for dom0 successfully.
On Friday, February 27, 2015 10:52:17 AM Stefano Stabellini wrote:
> On Thu, 26 Feb 2015, Mike Latimer wrote:
> >libxl_set_memory_target = 1
>
> The new memory target is set for dom0 successfully.
>
> >libxl_wait_for_free_memory = -5
>
> Still there i
On Thursday, February 26, 2015 01:45:16 PM Mike Latimer wrote:
> On Thursday, February 26, 2015 05:53:06 PM Stefano Stabellini wrote:
> > What is the return value of libxl_set_memory_target and
> > libxl_wait_for_free_memory in that case? Isn't it just a matter of
> >
On Thursday, February 26, 2015 05:53:06 PM Stefano Stabellini wrote:
> What is the return value of libxl_set_memory_target and
> libxl_wait_for_free_memory in that case? Isn't it just a matter of
> properly handling the return values?
The return from libxl_set_memory_target is 0, as the assignment w
(Sorry for the delayed response, dealing with ENOTIME.)
On Thursday, February 26, 2015 05:47:21 PM Ian Campbell wrote:
> On Thu, 2015-02-26 at 10:38 -0700, Mike Latimer wrote:
>
> >rc = libxl_set_memory_target(ctx, 0, free_memkb - need_memkb, 1, 0);
>
> I think so. In essen
On Thursday, February 26, 2015 03:57:54 PM Ian Campbell wrote:
> On Thu, 2015-02-26 at 08:36 -0700, Mike Latimer wrote:
> > There is still one aspect of my original patch that is important. As the
> > code currently stands, the target for dom0 is set lower during each
> >
On Wednesday, February 25, 2015 02:09:50 PM Stefano Stabellini wrote:
> > Is the upshot that Mike doesn't need to do anything further with his
> > patch (i.e. can drop it)? I think so?
>
> Yes, I think so. Maybe he could help out testing the patches I am going
> to write :-)
Sorry for not responding
There is still a problem with xl's freemem loop, but we can investigate that
further with slack out of the picture.
From my side:
Acked-by: Mike Latimer
Thanks,
Mike
Hi Wei,
On Friday, February 13, 2015 11:13:50 AM Wei Liu wrote:
> On Tue, Feb 10, 2015 at 02:34:27PM -0700, Mike Latimer wrote:
> > On Monday, February 09, 2015 06:27:54 PM Mike Latimer wrote:
> > > It seems that there are two approaches to resolve this:
> > > - Introd
Hi Wei,
On Friday, February 13, 2015 11:01:41 AM Wei Liu wrote:
> On Tue, Feb 10, 2015 at 09:17:23PM -0700, Mike Latimer wrote:
> > Prior to my changes, this issue would only be noticed when starting very
> > large domains - due to the loop being limited to 3 iterations. (For
On Thursday, February 05, 2015 12:45:53 PM Ian Campbell wrote:
> On Mon, 2015-02-02 at 08:17 -0700, Mike Latimer wrote:
> > On Monday, February 02, 2015 02:35:39 PM Ian Campbell wrote:
> > > On Fri, 2015-01-30 at 14:01 -0700, Mike Latimer wrote:
> > > > During domai
On Monday, February 09, 2015 06:27:54 PM Mike Latimer wrote:
> While testing commit 2563bca1, I found that libxl_get_free_memory returns 0
> until there is more free memory than required for freemem-slack. This means
> that during the domain creation process, freed memory is first set
Hi,
While testing commit 2563bca1, I found that libxl_get_free_memory returns 0
until there is more free memory than required for freemem-slack. This means
that during the domain creation process, freed memory is first set aside for
freemem-slack, then marked as truly free for consumption.
On
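In other words, memory freed by dom0 does not show up as free until freemem-slack has been covered. A tiny, self-contained illustration of that effect (names and the slack size are assumptions, not libxl code):

  #include <stdint.h>
  #include <stdio.h>

  /* Sketch of the behaviour described above: nothing is reported as "free"
   * until the freemem-slack reservation has been fully covered. */
  static uint64_t reported_free_memkb(uint64_t actual_free_memkb,
                                      uint64_t freemem_slack_kb)
  {
      if (actual_free_memkb <= freemem_slack_kb)
          return 0;
      return actual_free_memkb - freemem_slack_kb;
  }

  int main(void)
  {
      const uint64_t slack = 4ULL * 1024 * 1024;   /* assume 4 GiB of slack */
      uint64_t freed;

      /* As dom0 balloons down, the first 4 GiB freed is reported as 0 free. */
      for (freed = 0; freed <= 8ULL * 1024 * 1024; freed += 2ULL * 1024 * 1024)
          printf("freed %llu kB -> reported free %llu kB\n",
                 (unsigned long long)freed,
                 (unsigned long long)reported_free_memkb(freed, slack));
      return 0;
  }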
On Monday, February 02, 2015 02:35:39 PM Ian Campbell wrote:
> On Fri, 2015-01-30 at 14:01 -0700, Mike Latimer wrote:
> > During domain startup, all required memory ballooning must complete
> > within a maximum window of 33 seconds (3 retries, 11 seconds of delay).
> > If no
During domain startup, all required memory ballooning must complete
within a maximum window of 33 seconds (3 retries, 11 seconds of delay).
If not, domain creation is aborted with a 'failed to free memory' error.
In order to accommodate large domains or slower hardware (which require
substantially
On Friday, January 30, 2015 01:04:00 PM Mike Latimer wrote:
> +        if (free_memkb > free_memkb_prev) {
> +            retries = MAX_RETRIES;
> +            free_memkb_prev = free_memkb;
> +        } else {
> +            retires--;
> +        }
Please ignore. Typo: "retires" should be "retries".
During domain startup, all required memory ballooning must complete
within a maximum window of 33 seconds (3 retries, 11 seconds of delay).
If not, domain creation is aborted with a 'failed to free memory' error.
In order to accommodate large domains or slower hardware (which require
substantially
On Thursday, January 29, 2015 10:14:26 AM Ian Campbell wrote:
> I'm thinking it would be clearer if the comment and the condition were
> logically inverted. e.g.:
>
> /*
> * If the amount of free mem has increased on this iteration (i.e.
> * some progress has been made) then reset th
On Wednesday, January 28, 2015 01:05:25 PM Ian Campbell wrote:
> On Wed, 2015-01-21 at 22:22 -0700, Mike Latimer wrote:
>
> Sorry for the delay.
No problem! Thanks for the comments.
> > @@ -2228,7 +2230,13 @@ static int freemem(uint32_t domid,
> > libxl_doma
On Wednesday, January 21, 2015 10:22:53 PM Mike Latimer wrote:
> During domain startup, all required memory ballooning must complete
> within a maximum window of 33 seconds (3 retries, 11 seconds of delay).
> If not, domain creation is aborted with a 'failed to free memory' error.
During domain startup, all required memory ballooning must complete
within a maximum window of 33 seconds (3 retries, 11 seconds of delay).
If not, domain creation is aborted with a 'failed to free memory' error.
In order to accommodate large domains or slower hardware (which require
substantially
On Monday, January 12, 2015 05:29:25 PM George Dunlap wrote:
> When I said "10s seems very conservative", I meant, "10s should be by
> far long enough for something to happen". If you can't free up at least
> 1k in 30s, then there is certainly something very unusual with your
> system. So I was r
On Monday, January 12, 2015 12:36:01 PM George Dunlap wrote:
> I would:
> 1. Reset the retries after a successful increase
> 2. Not allow free_memkb_prev to go down.
Thanks, George. Good points, which definitely improve the situation.
> So maybe something like the following?
>
> if (free_memkb
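George's snippet is cut off above; a self-contained sketch of the logic he describes (reset the retry budget after any increase, and never let free_memkb_prev go down) might look like the following. The helper and constants are assumptions made for the sketch, not the code that was eventually committed:

  #include <stdint.h>
  #include <unistd.h>
  #include <stdio.h>

  #define MAX_RETRIES 3

  /* Assumed stand-in for libxl_get_free_memory(); here it just simulates
   * dom0 freeing 512 MiB per call so the sketch runs on its own. */
  static uint64_t get_free_memkb(void)
  {
      static uint64_t simulated;
      simulated += 512 * 1024;
      return simulated;
  }

  static int wait_for_free_memory_sketch(uint64_t need_memkb)
  {
      uint64_t free_memkb, free_memkb_prev = 0;
      int retries = MAX_RETRIES;

      while (retries > 0) {
          free_memkb = get_free_memkb();
          if (free_memkb >= need_memkb)
              return 0;                    /* enough memory has been freed */

          if (free_memkb > free_memkb_prev) {
              /* Progress was made: reset the retry budget and remember the
               * new high-water mark (free_memkb_prev never goes down). */
              retries = MAX_RETRIES;
              free_memkb_prev = free_memkb;
          } else {
              retries--;                   /* no progress: consume one retry */
          }
          sleep(1);                        /* give ballooning time to work */
      }
      return -1;                           /* no progress for MAX_RETRIES passes */
  }

  int main(void)
  {
      /* Ask for 4 GiB; the simulated dom0 frees it after a few iterations. */
      int rc = wait_for_free_memory_sketch(4ULL * 1024 * 1024);
      printf("result: %s\n", rc ? "gave up" : "freed enough memory");
      return rc ? 1 : 0;
  }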
On Wednesday, January 07, 2015 09:38:31 AM Ian Campbell wrote:
> That's exactly what I was about to suggest as I read the penultimate
> paragraph, i.e. keep waiting so long as some reasonable delta occurs on
> each iteration.
Thanks, Ian.
I wonder if there is a future-safe threshold on the amount
Hi,
In a previous post (1), I mentioned issues seen while ballooning a large
amount of memory. In the current code, the ballooning process only has 33
seconds to complete, or the xl operation (i.e. domain create) will fail. When
a lot of ballooning is required, or the host is very slow to balloon
Hi,
I've recently been testing large memory (64GB - 1TB) domains, and encountering
CPU soft lockups while dom0 is ballooning down to free memory for the domain.
The root of the issue also exposes a difference between libxl and libvirt.
When creating a domain using xl, if ballooning is enabled (