On 10/31/2016 05:00 PM, Michael R. Hines wrote:

> On 10/18/2016 05:47 AM, Peter Lieven wrote:
>> On 12.10.2016 at 23:18, Michael R. Hines wrote:
>>> Peter,
>>>
>>> Greetings from DigitalOcean. We're experiencing the same symptoms
>>> without this patch. We have, collectively, many gigabytes of
>>> un-planned-for RSS being used per-hypervisor that we would like
>>> to get rid of =).

Thank you for the response! I'll run off and test that. =)

/*
 * Michael R. Hines
 * Senior Engineer, DigitalOcean.
 */
On 12.10.2016 at 23:18, Michael R. Hines wrote:

Peter,

Greetings from DigitalOcean. We're experiencing the same symptoms without
this patch. We have, collectively, many gigabytes of un-planned-for RSS
being used per-hypervisor that we would like to get rid of =).

Without explicitly trying this patch (will do that ASAP), we immediately
not[…]
On 28.06.2016 at 16:43, Peter Lieven wrote:
> On 28.06.2016 at 14:56, Dr. David Alan Gilbert wrote:
>> * Peter Lieven (p...@kamp.de) wrote:
>>> On 28.06.2016 at 14:29, Paolo Bonzini wrote:
>>>> On 28.06.2016 at 13:37, Paolo Bonzini wrote:
>>>>> On 28/06/2016 11:01, Peter Lieven wrote:
>>>>>> I recently found that […]
----- Original Message -----
> From: "Peter Lieven"
> To: "Paolo Bonzini"
> Cc: qemu-devel@nongnu.org, kw...@redhat.com, "peter maydell"
>     , m...@redhat.com, dgilb...@redhat.com, mre...@redhat.com,
>     kra...@redhat.com
> Sent: Tuesday, June 28, 2016 2:33:02 PM
> Subject: Re: [PATCH 00/15] optim[…]
On 28/06/2016 11:01, Peter Lieven wrote:

I recently found that Qemu is using several hundred megabytes of RSS memory
more than older versions such as Qemu 2.2.0. So I started tracing memory
allocation and found two major reasons for this.

1) We changed the qemu coroutine pool to have a per-thread and a global
release pool. The chosen p[…]