Hi Rik,
Are there any more tests which I can usefully do for you?
I notice that 3.6.0-rc4 is out - are there changes from rc3 which are worth
me retesting?
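One way to check, assuming a clone of Linus's tree, is to list the commits that touched the compaction code between the two tags (v3.6-rc3 and v3.6-rc4; the paths are only a guess at where the relevant changes would land):

    # Sketch: list compaction-related changes between the two release candidates
    git log --oneline v3.6-rc3..v3.6-rc4 -- mm/compaction.c mm/page_alloc.c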
Cheers,
Richard.
Rik van Riel wrote:
> Can you get a backtrace to that _raw_spin_lock_irqsave, to see
> from where it is running into lock contention?
>
> It would be good to know whether it is isolate_freepages_block,
> yield_to, kvm_vcpu_on_spin or something else...
Hi Rik,
I got into a slow boot situation on 3
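One way to get that backtrace, assuming perf is available on the host, is a system-wide call-graph capture taken while a guest is booting slowly (the 30-second window is arbitrary):

    # Record call graphs on all CPUs for 30 seconds during the slow boot
    perf record -a -g -- sleep 30
    perf report --stdio > report.txt
    # The call chains under _raw_spin_lock_irqsave show whether it is
    # isolate_freepages_block, yield_to, kvm_vcpu_on_spin or something else.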
On 08/25/2012 01:45 PM, Richard Davies wrote:
Are you talking about these patches?
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commit;h=c67fe3752abe6ab47639e2f9b836900c3dc3da84
http://marc.info/?l=linux-mm&m=134521289221259
If so, I believe those are in 3.6.0-rc3, so I teste
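Whether a given commit is already in a tagged -rc can be checked directly, assuming a clone of Linus's tree (the hash is the one from the commit URL above):

    # Print the tags that already contain the commit
    git tag --contains c67fe3752abe6ab47639e2f9b836900c3dc3da84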
Troy Benjegerdes wrote:
> Is there a way to capture/reproduce this 'slow boot' behavior with
> a simple regression test? I'd like to know if it happens on a
> single-physical CPU socket machine, or just on dual-sockets.
Yes, definitely.
These two emails earlier in the thread give a fairly complet
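In outline the reproduction is just heavy overcommit: assuming qemu-kvm is on the path and win1.img..win3.img are placeholder Windows images, something like this starts three 36GB / 8-vcpu guests on the 128GB, 16-core host described elsewhere in the thread, and the symptom is how long they take to boot:

    # Sketch: overcommit the host with three large-memory guests
    for i in 1 2 3; do
        qemu-kvm -m 36864 -smp 8 \
                 -drive file=win$i.img,format=raw \
                 -vnc :$i &
    done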
Rik van Riel wrote:
> Richard Davies wrote:
> > Avi Kivity wrote:
> > > Richard Davies wrote:
> > > > I can trigger the slow boots without KSM and they have the same
> > > > profile, with _raw_spin_lock_irqsave and isolate_freepages_block at
> > > > the top.
> > > >
> > > > I reduced to 3x 20GB 8-c
Avi Kivity wrote:
> Richard Davies wrote:
> > Below are two 'perf top' snapshots during a slow boot, which appear to
> > me to support your idea of a spin-lock problem.
...
> >    PerfTop:   62249 irqs/sec  kernel:96.9%  exact:  0.0% [4000Hz cycles],  (all, 16 CPUs)
> > -------------------------------------------------------------------------------
Avi Kivity wrote:
> Richard Davies wrote:
> > I can trigger the slow boots without KSM and they have the same profile,
> > with _raw_spin_lock_irqsave and isolate_freepages_block at the top.
> >
> > I reduced to 3x 20GB 8-core VMs on a 128GB host (rather than 3x 40GB 8-core
> > VMs), and haven't ma
> > I've now triggered a very slow boot at 3x 36GB 8-core VMs on a 128GB host
> > (i.e. 108GB on a 128GB host).
> >
> > It has the same profile with _raw_spin_lock_irqsave and
> > isolate_freepages_block at the top.
>
> Then it's still memory starved.
>
> Please provide /proc/zoneinfo while this
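A simple way to provide that, assuming the slow boot lasts long enough to sample it repeatedly, is to snapshot /proc/zoneinfo every few seconds while the guest boots:

    # Append a timestamped copy of zoneinfo every 5 seconds during the boot
    while true; do
        date >> zoneinfo.log
        cat /proc/zoneinfo >> zoneinfo.log
        sleep 5
    done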
Rik van Riel wrote:
> Richard Davies wrote:
> > I've now triggered a very slow boot at 3x 36GB 8-core VMs on a 128GB
> > host (i.e. 108GB on a 128GB host).
> >
> > It has the same profile with _raw_spin_lock_irqsave and
> > isolate_freepages_block at the top.
>
> That's the page compaction code.
>
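If it really is compaction, the compaction counters in /proc/vmstat should climb during a slow boot; comparing two snapshots is a cheap cross-check (counter names assume CONFIG_COMPACTION is enabled, as on these hosts):

    # Compare compaction activity before and after a slow boot
    grep compact /proc/vmstat > compact.before
    # ... trigger the slow boot ...
    grep compact /proc/vmstat > compact.after
    diff compact.before compact.after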
On 08/22/2012 05:41 PM, Richard Davies wrote:
> Avi Kivity wrote:
>> Richard Davies wrote:
>> > I can trigger the slow boots without KSM and they have the same profile,
>> > with _raw_spin_lock_irqsave and isolate_freepages_block at the top.
>> >
>> > I reduced to 3x 20GB 8-core VMs on a 128GB host
On 08/22/2012 03:40 PM, Richard Davies wrote:
>
> I can trigger the slow boots without KSM and they have the same profile,
> with _raw_spin_lock_irqsave and isolate_freepages_block at the top.
>
> I reduced to 3x 20GB 8-core VMs on a 128GB host (rather than 3x 40GB 8-core
> VMs), and haven't mana
On 08/21/2012 06:21 PM, Richard Davies wrote:
> Avi Kivity wrote:
>> Richard Davies wrote:
>> > We're running host kernel 3.5.1 and qemu-kvm 1.1.1.
>> >
>> > I hadn't thought about it, but I agree this is related to cpu overcommit.
>> > The
>> > slow boots are intermittent (and infrequent) with cpu
Do you have any way to determine what CPU groups the different VMs
are running on?
If you end up in an overcommit situation where half the 'virtual'
cpus are on one AMD socket, and the other half are on a different
AMD socket, then you'll be thrashing the hypertransport link.
At Cray we were very
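One way to see where each guest's threads actually run, assuming the guest processes are named qemu-kvm (adjust if the binary is qemu-system-x86_64), is to list the last-run CPU and allowed-CPU mask of every thread and map those CPU numbers onto the two sockets:

    # PSR is the CPU each qemu thread last ran on
    for pid in $(pidof qemu-kvm); do
        echo "=== pid $pid ==="
        ps -L -o tid,psr,comm -p $pid
        grep Cpus_allowed_list /proc/$pid/status
    done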
Avi Kivity wrote:
> Richard Davies wrote:
> > We're running host kernel 3.5.1 and qemu-kvm 1.1.1.
> >
> > I hadn't thought about it, but I agree this is related to cpu overcommit. The
> > slow boots are intermittent (and infrequent) with cpu overcommit whereas I
> > don't think it occurs without cpu
On 08/20/2012 04:56 PM, Richard Davies wrote:
> We're running host kernel 3.5.1 and qemu-kvm 1.1.1.
>
> I hadn't thought about it, but I agree this is related to cpu overcommit. The
> slow boots are intermittent (and infrequent) with cpu overcommit whereas I
> don't think it occurs without cpu ove
Avi Kivity wrote:
> Richard Davies wrote:
> > Hi Avi,
> >
> > Thanks to you and several others for offering help. We will work with Avi at
> > first, but are grateful for all the other offers of help. We have a number
> > of other qemu-related projects which we'd be interested in getting done, and
Brian Jackson wrote:
> Richard Davies wrote:
> > The host in question has 128GB RAM and dual AMD Opteron 6128 (16 cores
> > total). It is running kernel 3.5.1 and qemu-kvm 1.1.1.
> >
> > In this morning's test, we have 3 guests, all booting Windows with 40GB RAM
> > and 8 cores each (we have seen s
On 08/17/2012 03:36 PM, Richard Davies wrote:
> Hi Avi,
>
> Thanks to you and several others for offering help. We will work with Avi at
> first, but are grateful for all the other offers of help. We have a number
> of other qemu-related projects which we'd be interested in getting done, and
> wil
Avi Kivity wrote:
> Richard Davies wrote:
> > The host in question has 128GB RAM and dual AMD Opteron 6128 (16 cores
> > total). It is running kernel 3.5.1 and qemu-kvm 1.1.1.
> >
> > In this morning's test, we have 3 guests, all booting Windows with 40GB RAM
> > and 8 cores each (we have seen smal
On Friday 17 August 2012 07:36:42 Richard Davies wrote:
> Hi Avi,
>
> Thanks to you and several others for offering help. We will work with Avi
> at first, but are grateful for all the other offers of help. We have a
> number of other qemu-related projects which we'd be interested in getting
> don
Hi Robert,
Robert Vineyard wrote:
> Not sure if you've tried this, but I noticed massive performance
> gains (easily booting 2-3 times as fast) by converting from RAW disk
> images to direct-mapped raw partitions and making sure that IOMMU
> support was enabled in the BIOS and in the kernel at boo
Richard,
Not sure if you've tried this, but I noticed massive performance gains
(easily booting 2-3 times as fast) by converting from RAW disk images to
direct-mapped raw partitions and making sure that IOMMU support was
enabled in the BIOS and in the kernel at boot time. The obvious downside
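For reference, a minimal sketch of that setup, assuming /dev/sdb1 is a spare partition dedicated to the guest (a placeholder) and an AMD host: confirm the IOMMU came up, then hand the partition to qemu as a raw drive instead of a file-backed image:

    # Check the IOMMU was actually enabled at boot (AMD-Vi on these Opterons)
    dmesg | grep -i -e iommu -e amd-vi
    # Give the guest the raw partition directly
    qemu-kvm -m 4096 -smp 2 -drive file=/dev/sdb1,format=raw,cache=none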
Hi Avi,
Thanks to you and several others for offering help. We will work with Avi at
first, but are grateful for all the other offers of help. We have a number
of other qemu-related projects which we'd be interested in getting done, and
will get in touch with these names (and anyone else who comes
I'd be interested in working on this.. What I'd like to propose is to write
an automated regression test harness that will reboot the host hardware, and
start booting up guest VMs and report the time-to-boot, as well as relative
performance of the running VMs.
For best results, I'd need access to
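A minimal sketch of such a harness, assuming each guest has a known IP address and that 'boot complete' can be approximated by the guest answering ping (both assumptions, not part of the proposal above):

    # Time from guest start until it answers on the network
    start=$(date +%s)
    qemu-kvm -m 36864 -smp 8 -drive file=win1.img,format=raw -vnc :1 &
    until ping -c 1 -W 1 192.168.0.101 >/dev/null 2>&1; do   # placeholder guest IP
        sleep 5
    done
    echo "time-to-boot: $(( $(date +%s) - start )) seconds"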
On Thursday 16 Aug 2012 at 11:47:27 (+0100), Richard Davies wrote:
> Hi,
>
> We run a cloud hosting provider using qemu-kvm 1.1, and are keen to find a
> contractor to track down and fix problems we have with large memory Windows
> guests booting very slowly - they can take several hours.
>
> W
On 08/16/2012 01:47 PM, Richard Davies wrote:
> Hi,
>
> We run a cloud hosting provider using qemu-kvm 1.1, and are keen to find a
> contractor to track down and fix problems we have with large memory Windows
> guests booting very slowly - they can take several hours.
>
> We previously reported t
Hi,
We run a cloud hosting provider using qemu-kvm 1.1, and are keen to find a
contractor to track down and fix problems we have with large memory Windows
guests booting very slowly - they can take several hours.
We previously reported these problems in July (copied below) and they are
still pres