On Wed, Aug 09, 2017 at 09:26:11AM +0200, Paolo Bonzini wrote:
> On 09/08/2017 03:06, Laszlo Ersek wrote:
> >> 20.14% qemu-system-x86_64 [.] render_memory_region
> >> 17.14% qemu-system-x86_64 [.] subpage_register
> >> 10.31% qemu-system-x86_64

On 09/08/2017 12:56, Laszlo Ersek wrote:
> Allow me one last question -- why (and since when) does each device have
> its own separate address space? Is that related to the virtual IOMMU?
No (though it helps there too). It's because a device that has
bus-master DMA disabled in the command register
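For context, QEMU models this with a per-device "bus master container"
region holding an alias of the bus's memory view, which is enabled or
disabled as the guest toggles the bus-master enable bit. Below is a
minimal sketch against QEMU's internal memory API; the type and function
names (ExampleDeviceDMA, example_dma_init, example_dma_command_written,
EXAMPLE_PCI_COMMAND_MASTER) are illustrative, not the actual hw/pci/pci.c
code.

/*
 * Illustrative sketch only (builds inside the QEMU tree, not standalone):
 * one way to give a device its own DMA address space.  The "bus master
 * container" is the root of the device's address space; inside it sits an
 * alias of the bus's memory view that is enabled or disabled whenever the
 * guest flips the bus-master enable bit in the PCI command register.
 */
#include "qemu/osdep.h"
#include "exec/memory.h"

#define EXAMPLE_PCI_COMMAND_MASTER 0x4   /* bus-master enable bit */

typedef struct ExampleDeviceDMA {
    MemoryRegion bm_container;  /* root of the device's address space  */
    MemoryRegion bm_alias;      /* alias of the bus/system memory view */
    AddressSpace dma_as;        /* used for the device's DMA accesses  */
} ExampleDeviceDMA;

static void example_dma_init(ExampleDeviceDMA *d, Object *owner,
                             MemoryRegion *bus_memory)
{
    memory_region_init(&d->bm_container, owner,
                       "bus master container", UINT64_MAX);
    memory_region_init_alias(&d->bm_alias, owner, "bus master",
                             bus_memory, 0,
                             memory_region_size(bus_memory));
    /* DMA stays blocked until the guest sets the bus-master bit. */
    memory_region_set_enabled(&d->bm_alias, false);
    memory_region_add_subregion(&d->bm_container, 0, &d->bm_alias);
    address_space_init(&d->dma_as, &d->bm_container, "example-dma");
}

/* Hypothetical hook invoked when the guest writes the command register. */
static void example_dma_command_written(ExampleDeviceDMA *d, uint16_t cmd)
{
    memory_region_set_enabled(&d->bm_alias,
                              cmd & EXAMPLE_PCI_COMMAND_MASTER);
}

Each address_space_init() of this kind adds another view that the memory
core has to render, which is consistent with render_memory_region()
dominating the profile above as the device count grows.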

On 9 August 2017 at 11:56, Laszlo Ersek wrote:
> Now that I look at the "info mtree" monitor output of a random VM, I see
> the following "address-space"s:
> - memory
> - I/O
> - cpu-memory
> - bunch of nameless ones, with top level regions called
> "bus master container"
> - several named "virt

On 08/09/17 12:16, Paolo Bonzini wrote:
> On 09/08/2017 12:00, Laszlo Ersek wrote:
>> On 08/09/17 09:26, Paolo Bonzini wrote:
>>> On 09/08/2017 03:06, Laszlo Ersek wrote:
> 20.14% qemu-system-x86_64 [.] render_memory_region
> 17.14% qemu-system-x86_64

On 09/08/2017 12:00, Laszlo Ersek wrote:
> On 08/09/17 09:26, Paolo Bonzini wrote:
>> On 09/08/2017 03:06, Laszlo Ersek wrote:
20.14% qemu-system-x86_64 [.] render_memory_region
17.14% qemu-system-x86_64 [.] subpage_register
10.31% qemu-system-x86_64 [.] int128_add

On 08/09/17 09:26, Paolo Bonzini wrote:
> On 09/08/2017 03:06, Laszlo Ersek wrote:
>>> 20.14% qemu-system-x86_64 [.] render_memory_region
>>> 17.14% qemu-system-x86_64 [.] subpage_register
>>> 10.31% qemu-system-x86_64 [.] int128_add
>>>

On 09/08/2017 03:06, Laszlo Ersek wrote:
>> 20.14% qemu-system-x86_64 [.] render_memory_region
>> 17.14% qemu-system-x86_64 [.] subpage_register
>> 10.31% qemu-system-x86_64 [.] int128_add
>> 7.86% qemu-system-x86_64 [

On 08/08/17 17:51, Laszlo Ersek wrote:
> On 08/08/17 12:39, Marcin Juszkiewicz wrote:
>> Anyway, beyond the things written in that comment, there is one very
>> interesting symptom that makes me think another (milder?) bottleneck
>> could be in QEMU:
>>
>> When having a large number of PCI(e) devices

On 08/08/17 12:39, Marcin Juszkiewicz wrote:
>
> A few days ago I had an issue getting PCIe hotplug working on an
> AArch64 machine. I enabled PCI hotplug in the kernel and then got hit
> by some issues.
>
> Our setup is a bunch of aarch64 servers, and we use OpenStack to
> provide access to arm64 systems

Hello
A few days ago I had an issue getting PCIe hotplug working on an AArch64
machine. I enabled PCI hotplug in the kernel and then got hit by some
issues.

Our setup is a bunch of aarch64 servers, and we use OpenStack to provide
access to arm64 systems. OpenStack uses libvirt to control VMs and
allows