Re: [PATCH] util: NUMA aware memory preallocation

2022-06-08 David Hildenbrand
On 11.05.22 11:34, Daniel P. Berrangé wrote: > On Wed, May 11, 2022 at 11:31:23AM +0200, David Hildenbrand wrote: Long story short, management application has no way of learning TIDs of allocator threads so it can't make them run NUMA aware. >>> >>> This feels like the key issue. The prea

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-12 Daniel P. Berrangé
On Thu, May 12, 2022 at 09:41:29AM +0200, Paolo Bonzini wrote: > On 5/11/22 18:54, Daniel P. Berrangé wrote: > > On Wed, May 11, 2022 at 01:07:47PM +0200, Paolo Bonzini wrote: > > > On 5/11/22 12:10, Daniel P. Berrangé wrote: > > > > I expect creating/deleting I/O threads is cheap in comparison to

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-12 Paolo Bonzini
On 5/11/22 18:54, Daniel P. Berrangé wrote: On Wed, May 11, 2022 at 01:07:47PM +0200, Paolo Bonzini wrote: On 5/11/22 12:10, Daniel P. Berrangé wrote: I expect creating/deleting I/O threads is cheap in comparison to the work done for preallocation. If libvirt is using -preconfig and object-add

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 Daniel P. Berrangé
On Wed, May 11, 2022 at 01:07:47PM +0200, Paolo Bonzini wrote: > On 5/11/22 12:10, Daniel P. Berrangé wrote: > > If all we need is NUMA affinity, not CPU affinity, then it would > > be sufficient to create 1 I/O thread per host NUMA node that the > > VM needs to use. The job running in the I/O thread can

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 David Hildenbrand
On 11.05.22 17:08, Daniel P. Berrangé wrote: > On Wed, May 11, 2022 at 03:16:55PM +0200, Michal Prívozník wrote: >> On 5/10/22 11:12, Daniel P. Berrangé wrote: >>> On Tue, May 10, 2022 at 08:55:33AM +0200, Michal Privoznik wrote: When allocating large amounts of memory the task is offloaded >>

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 Daniel P. Berrangé
On Wed, May 11, 2022 at 03:16:55PM +0200, Michal Prívozník wrote: > On 5/10/22 11:12, Daniel P. Berrangé wrote: > > On Tue, May 10, 2022 at 08:55:33AM +0200, Michal Privoznik wrote: > >> When allocating large amounts of memory the task is offloaded > >> onto threads. These threads then use various

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 David Hildenbrand
>> >> The very last case is the only one where this patch can potentially >> be beneficial. The problem is that because libvirt is in charge of >> enforcing CPU affinity, IIRC, we explicitly block QEMU from doing >> anything with CPU affinity. So AFAICT, this patch should result in >> an error fro
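A sketch of how that could surface (an assumption about the failure mode, not the patch's actual code path): when the management layer has walled off affinity changes, for example via a cgroup cpuset or a sandbox policy, the affinity syscall simply fails and QEMU has to treat it as an error.

#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Try to pin the calling thread; report failure instead of assuming the
 * management layer will allow the change. */
static int try_pin_self(const cpu_set_t *cpus)
{
    if (sched_setaffinity(0, sizeof(cpu_set_t), cpus) != 0) {
        /* e.g. EINVAL if the mask lies outside the allowed cpuset,
         * or EPERM if policy forbids changing affinity at all. */
        fprintf(stderr, "cannot set affinity: %s\n", strerror(errno));
        return -errno;
    }
    return 0;
}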

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 Michal Prívozník
On 5/10/22 11:12, Daniel P. Berrangé wrote: > On Tue, May 10, 2022 at 08:55:33AM +0200, Michal Privoznik wrote: >> When allocating large amounts of memory the task is offloaded >> onto threads. These threads then use various techniques to >> allocate the memory fully (madvise(), writing into the me
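For readers skimming the thread, a minimal sketch of the preallocation step being described (illustrative only, not QEMU's actual util/oslib-posix.c code): fault every page in, either by asking the kernel via MADV_POPULATE_WRITE or by touching one byte per page. Nothing in this step controls which CPU, and therefore which NUMA node, the worker thread runs on.

#include <stddef.h>
#include <sys/mman.h>

/* Sketch of a preallocation worker: make every page of 'area' resident.
 * Under a default (local) allocation policy, wherever this thread happens
 * to run decides which NUMA node backs the pages it touches. */
static void prealloc_range(char *area, size_t size, size_t page_size)
{
#ifdef MADV_POPULATE_WRITE
    /* Linux 5.14+: let the kernel populate the whole range for writing. */
    if (madvise(area, size, MADV_POPULATE_WRITE) == 0) {
        return;
    }
#endif
    /* Fallback: read and write back one byte per page so each page is
     * actually allocated without changing its contents. */
    for (size_t off = 0; off < size; off += page_size) {
        volatile char *p = (volatile char *)(area + off);
        *p = *p;
    }
}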

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 Paolo Bonzini
On 5/11/22 12:10, Daniel P. Berrangé wrote: If all we need is NUMA affinity, not CPU affinity, then it would be sufficient to create 1 I/O thread per host NUMA node that the VM needs to use. The job running in the I/O thread can spawn further threads and inherit the NUMA affinity. This might be more c
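A hypothetical illustration of the inheritance this proposal relies on (example code, not QEMU's iothread implementation): a thread created with pthread_create() starts with the CPU affinity mask of the thread that created it, so binding one controller thread per host NUMA node is enough to make every worker it spawns NUMA-aware. The node number below is an arbitrary example.

#define _GNU_SOURCE
#include <numa.h>          /* link with -lnuma */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    cpu_set_t set;
    /* The mask seen here was inherited from the thread that created us. */
    pthread_getaffinity_np(pthread_self(), sizeof(set), &set);
    printf("worker may run on %d CPUs\n", CPU_COUNT(&set));
    return NULL;
}

int main(void)
{
    pthread_t tid;

    if (numa_available() < 0) {
        return 1;
    }
    /* Restrict the calling thread to the CPUs of (hypothetical) node 0. */
    numa_run_on_node(0);

    /* Any thread spawned from here on inherits that restriction. */
    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, NULL);
    return 0;
}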

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 Daniel P. Berrangé
On Wed, May 11, 2022 at 12:03:24PM +0200, David Hildenbrand wrote: > On 11.05.22 11:34, Daniel P. Berrangé wrote: > > On Wed, May 11, 2022 at 11:31:23AM +0200, David Hildenbrand wrote: > Long story short, management application has no way of learning > TIDs of allocator threads so it can'

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 David Hildenbrand
On 11.05.22 11:34, Daniel P. Berrangé wrote: > On Wed, May 11, 2022 at 11:31:23AM +0200, David Hildenbrand wrote: Long story short, management application has no way of learning TIDs of allocator threads so it can't make them run NUMA aware. >>> >>> This feels like the key issue. The prea

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 Daniel P. Berrangé
On Wed, May 11, 2022 at 11:31:23AM +0200, David Hildenbrand wrote: > >> Long story short, management application has no way of learning > >> TIDs of allocator threads so it can't make them run NUMA aware. > > > > This feels like the key issue. The preallocation threads are > > invisible to libvirt

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 David Hildenbrand
>> Long story short, management application has no way of learning >> TIDs of allocator threads so it can't make them run NUMA aware. > > This feels like the key issue. The preallocation threads are > invisible to libvirt, regardless of whether we're doing coldplug > or hotplug of memory-backends.
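To make the gap concrete: from the outside, roughly all a management application could do is walk /proc/<pid>/task and pin each TID, as in the hypothetical sketch below. But it has no way to tell which of those TIDs are the short-lived allocator threads, and QEMU does not report them the way it reports vCPU and iothread TIDs over QMP, so there is nothing it can deliberately pin.

#define _GNU_SOURCE
#include <dirent.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical management-side helper, not libvirt code: pin every thread
 * of a QEMU process to the given CPU set. */
static void pin_all_threads(pid_t qemu_pid, const cpu_set_t *cpus)
{
    char path[64];
    snprintf(path, sizeof(path), "/proc/%d/task", (int)qemu_pid);

    DIR *dir = opendir(path);
    if (!dir) {
        return;
    }

    struct dirent *de;
    while ((de = readdir(dir)) != NULL) {
        pid_t tid = (pid_t)atoi(de->d_name);
        if (tid <= 0) {
            continue;   /* skip "." and ".." */
        }
        /* Which of these TIDs is a preallocation thread? No way to tell. */
        sched_setaffinity(tid, sizeof(*cpus), cpus);
    }
    closedir(dir);
}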

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 Daniel P. Berrangé
On Wed, May 11, 2022 at 09:34:07AM +0100, Dr. David Alan Gilbert wrote: > * Michal Privoznik (mpriv...@redhat.com) wrote: > > When allocating large amounts of memory the task is offloaded > > onto threads. These threads then use various techniques to > > allocate the memory fully (madvise(), writin

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 Daniel P. Berrangé
On Tue, May 10, 2022 at 08:55:33AM +0200, Michal Privoznik wrote: > When allocating large amounts of memory the task is offloaded > onto threads. These threads then use various techniques to > allocate the memory fully (madvise(), writing into the memory). > However, these threads are free to run o

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-11 Dr. David Alan Gilbert
* Michal Privoznik (mpriv...@redhat.com) wrote: > When allocating large amounts of memory the task is offloaded > onto threads. These threads then use various techniques to > allocate the memory fully (madvise(), writing into the memory). > However, these threads are free to run on any CPU, which b

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-10 Dr. David Alan Gilbert
* Daniel P. Berrangé (berra...@redhat.com) wrote: > On Tue, May 10, 2022 at 08:55:33AM +0200, Michal Privoznik wrote: > > When allocating large amounts of memory the task is offloaded > > onto threads. These threads then use various techniques to > > allocate the memory fully (madvise(), writing in

Re: [PATCH] util: NUMA aware memory preallocation

2022-05-10 Daniel P. Berrangé
On Tue, May 10, 2022 at 08:55:33AM +0200, Michal Privoznik wrote: > When allocating large amounts of memory the task is offloaded > onto threads. These threads then use various techniques to > allocate the memory fully (madvise(), writing into the memory). > However, these threads are free to run o