Re: [pve-devel] applied: [RFC pve-qemu] disable jemalloc

2023-03-13 Thread DERUMIER, Alexandre
I have done tests writing a small C program calling malloc_trim(0),

and it doesn't break/segfault with tcmalloc loaded via LD_PRELOAD.

I don't think that tcmalloc overrides this specific glibc function, but
maybe malloc_trim is just trimming the (mostly empty) glibc malloc memory.
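
For reference, here is a minimal sketch of the kind of test program I mean
(the file name, allocation sizes and compile command are only illustrative,
and it assumes libtcmalloc-minimal4 is installed):

/* trim-test.c - check that malloc_trim() keeps working when tcmalloc is
 * preloaded. Build and run, e.g.:
 *   gcc -O2 -o trim-test trim-test.c
 *   LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4 ./trim-test
 */
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* allocate and free some memory so there is something to trim */
    void *blocks[1024];
    for (int i = 0; i < 1024; i++) {
        blocks[i] = malloc(64 * 1024);
        if (blocks[i])
            memset(blocks[i], 0, 64 * 1024);
    }
    for (int i = 0; i < 1024; i++)
        free(blocks[i]);

    /* malloc_trim() is a glibc extension; with tcmalloc preloaded it should
     * at worst trim the (mostly empty) glibc arenas and return normally */
    int ret = malloc_trim(0);
    printf("malloc_trim(0) returned %d\n", ret);
    return 0;
}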


I have run 2 days of continuous fio benchmarks in a VM with tcmalloc
preloaded, and I don't have any problem.

The speed difference is really night & day: with iodepth=64 4k randread,

it's an average of 85-90k iops (with some spikes at 120k) vs
50k iops (with spikes to 60k iops).


If it's OK for you, I'll send a patch with something like:

vmid.conf
---------
memory_allocator: glibc|tcmalloc


and simply set LD_PRELOAD in the systemd unit when the VM is started

?


Le samedi 11 mars 2023 à 13:14 +, DERUMIER, Alexandre a écrit :
> Le samedi 11 mars 2023 à 10:01 +0100, Thomas Lamprecht a écrit :
> > Hi,
> > 
> > Am 10/03/2023 um 19:05 schrieb DERUMIER, Alexandre:
> > > I'm currently benching qemu again with librbd and different memory
> > > allocators.
> > > 
> > > 
> > > It seems that there is still a performance problem with the default
> > > glibc allocator: around 20-25% fewer iops and higher latency.
> > 
> > Are those numbers compared to jemalloc or tcmalloc?
> > 
> oh sorry,
> 
> tcmalloc.  (I'm getting almost the same result with jemalloc, maybe a
> little bit lower/more unstable)
> 
> 
> > Also, a key problem with allocator tuning is that it's heavily
> > dependent on the workload of each specific library (i.e., not only
> > QEMU itself but also the specific block backend library).
> > 
> > > 
> yes, it should help librbd mainly. I don't think it helps other storage types.
> 
> 
> 
> > > From my bench, I'm around 60k iops vs 80-90k iops with 4k randread.
> > > 
> > > Redhat has also noticed it.
> > > 
> > > 
> > > I know that jemalloc was buggy with the Rust lib && the PBS block
> > > driver, but have you evaluated tcmalloc?
> > 
> > Yes, for PBS once - was way worse in how it generally worked than
> > either jemalloc or default glibc IIRC, but I don't think I checked for
> > latency, as then we tracked down freed memory that the allocator did
> > not give back to the system to how they internally try to keep a pool
> > of available memory around.
> > 
> I know that jemalloc can have strange effects on memory. (ceph was
> using jemalloc some years ago with this kind of side effect, and they
> migrated to tcmalloc later)
> 
> 
> > So for latency it might be a win, but IMO not too sure if the other
> > effects it has are worth that.
> > 
> > > 
> yes, latency is my main objective, mainly for ceph synchronous writes
> with low iodepth; they are pretty slow, so a 20% improvement is really
> big.
> 
> > > Note that it's possible to load it dynamically with LD_PRELOAD,
> > > so maybe we could add an option in the vm config to enable it?
> > > 
> 
> > I'm not 100% sure if QEMU copes well with preloading it via the
> > dynlinker as is, or if we need to hard-disable malloc_trim support for
> > it then. As currently with the "system" allocator (glibc) there's
> > malloc_trim called (semi-)periodically via call_rcu_thread - and at
> > least qemu's meson build system config disables malloc_trim for
> > tcmalloc or jemalloc.
> > 
> > 
> > Or did you already test this directly on QEMU, not just rbd bench? As
> > then I'd be open to add some tuning config with an allocator
> > sub-property in there to our CFGs.
> > 
> 
> I have tried directly in qemu, with
> 
> "
>     my $run_qemu = sub {
>         PVE::Tools::run_fork sub {
> 
>             $ENV{LD_PRELOAD} = "/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4";
> 
>             PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %systemd_properties);
> 
> "
> 
> I really don't know about malloc_trim;
> the initial discussion about it is here:
> https://patchwork.ozlabs.org/project/qemu-devel/patch/1510899814-19372-1-git-send-email-yang.zh...@intel.com/
> and indeed, it's disabled when building with tcmalloc/jemalloc, but I
> don't know about dynamic loading.
> 
> But I don't have any crash or segfault.
> 
> 


Re: [pve-devel] [PATCH manager] ui: PBSEdit: cleanup iframe for paperkey

2023-03-13 Thread Aaron Lauterer




On 3/11/23 17:49, Thomas Lamprecht wrote:
> Am 10/03/2023 um 15:36 schrieb Aaron Lauterer:
>> Otherwise the iframe used to print the paperkey will remain even after
>> the encryption key window is closed.
> 
> thanks for noticing!
> 
>> Additionally, clean up before creating a new one, as otherwise we might
>> end up with multiple iframes.
> 
> having to do both seems wrong. Why not add an on-close or on-destroy
> listener on the window which handles that always correctly in a single
> place?
> 
>> Signed-off-by: Aaron Lauterer 
>> ---
>>  www/manager6/storage/PBSEdit.js | 9 -
>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/www/manager6/storage/PBSEdit.js b/www/manager6/storage/PBSEdit.js
>> index 5b6b6bb8..dbc88668 100644
>> --- a/www/manager6/storage/PBSEdit.js
>> +++ b/www/manager6/storage/PBSEdit.js
>>
>> [...]
>>
>> @@ -181,6 +187,7 @@ ${prettifiedKey}
>>  
>>  	printFrame.src = "data:text/html;base64," + btoa(html);
>>
>>  document.body.appendChild(printFrame);
>> +   return printFrame;
> 
> You could replace the whole patch with adding the following line here:
> 
> me.on('destroy', () => document.body.removeChild(printFrame));
> 
> can also apply directly with a Reported-by tag if you see nothing off
> with this, whatever you prefer?

Sure go ahead. Your approach is a lot cleaner. :)

>>  	},
>>  });
  








Re: [pve-devel] [PATCH manager 1/2] api: ceph: deprecate pools in favor of pool

2023-03-13 Thread Aaron Lauterer

Ping?





Re: [pve-devel] [PATCH-SERIES v3 storage/docs] fix #2920: add options parameter to CIFS plugin

2023-03-13 Thread Friedrich Weber

Tested-by: Friedrich Weber 

I think that would be nice to have, e.g. to set noserverino [1] or 
actimeo [2] without having to mount manually.


[1] 
https://forum.proxmox.com/threads/proxmox-backup-problem.123560/#post-537586
[2] 
https://forum.proxmox.com/threads/pve-cifs-connection-timed-out-596-communication-failure-0.123216/post-537551


On 01/03/2023 13:13, Fiona Ebner wrote:

similar to the already existing parameter for NFS.

Changes v2 -> v3:
* Rebase on current master.
* Minor style fixes.

Changes v1 -> v2:
# pve-storage (1/2)
* fixed nitpicks

# pve-docs (2/2)
* extended options explanation
* changed example option to `echo_interval=30` as second parameter


storage:

Stefan Hrdlicka (1):
   fix #2920: cifs: add options parameter

  PVE/Storage/CIFSPlugin.pm | 4 +++-
  PVE/Storage/NFSPlugin.pm  | 4 
  PVE/Storage/Plugin.pm | 6 ++
  3 files changed, 9 insertions(+), 5 deletions(-)


docs:

Stefan Hrdlicka (1):
   fix #2920: add cifs options parameter

  pve-storage-cifs.adoc | 8 
  1 file changed, 8 insertions(+)







[pve-devel] applied: [PATCH pve-firewall] Fix #4550 : host options: add nf_conntrack_helpers

2023-03-13 Thread Wolfgang Bumiller
applied, thanks





Re: [pve-devel] [PATCH manager] ui: PBSEdit: cleanup iframe for paperkey

2023-03-13 Thread Thomas Lamprecht
Am 13/03/2023 um 09:30 schrieb Aaron Lauterer:
> On 3/11/23 17:49, Thomas Lamprecht wrote:
>> Am 10/03/2023 um 15:36 schrieb Aaron Lauterer:
>>> @@ -181,6 +187,7 @@ ${prettifiedKey}
>>>     printFrame.src = "data:text/html;base64," + btoa(html);
>>>   document.body.appendChild(printFrame);
>>> +    return printFrame;
>>
>> You could replace the whole patch with adding the following line here:
>>
>> me.on('destroy', () => document.body.removeChild(printFrame));
>>
>> can also apply directly with a Reported-by tag if you see nothing off with 
>> this,
>> whatever you prefer?
> 
> Sure go ahead. Your approach is a lot cleaner. :)
> 

Ack, pushed:
https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=b380cb5814ec551588dfcbd6c5ed190f01d54029




[pve-devel] [PATCH qemu-server] qemu options: add memory_allocator

2023-03-13 Thread Alexandre Derumier
Add an optional memory_allocator property.

The default is glibc malloc; tcmalloc is available to improve the performance
of ceph librbd.

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 12 
 1 file changed, 12 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 40be44d..8de6c82 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -723,6 +723,14 @@ EODESCR
description => "List of host cores used to execute guest processes, for 
example: 0,5,8-11",
optional => 1,
 },
+memory_allocator => {
+   optional => 1,
+   type => 'string',
+   enum => [ qw(glibc tcmalloc) ],
+   default => 'glibc',
+   description => "Configure qemu process memory allocator. tcmalloc 
improve performance of ceph librbd",
+   optional => 1,
+},
 };
 
 my $cicustom_fmt = {
@@ -5909,6 +5917,10 @@ sub vm_start_nolock {
 
 my $run_qemu = sub {
PVE::Tools::run_fork sub {
+
+   $ENV{LD_PRELOAD} = "/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4"
+   if $conf->{memory_allocator} && $conf->{memory_allocator} eq 
'tcmalloc';
+
PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", 
%systemd_properties);
 
my $tpmpid;
-- 
2.30.2





[pve-devel] [PATCH qemu] add patches to fix regression with LSI SCSI controller

2023-03-13 Thread Fiona Ebner
The patch 0008-memory-prevent-dma-reentracy-issues.patch introduced a
regression for the LSI SCSI controller leading to boot failures [0],
because, in its current form, it relies on reentrancy for a particular
ram_io region.

[0]: https://forum.proxmox.com/threads/123843

Signed-off-by: Fiona Ebner 
---

Needs these two patches first (for the numbering in the series file):
https://lists.proxmox.com/pipermail/pve-devel/2023-March/056110.html

 ...isabling-re-entrancy-checking-per-MR.patch | 38 +++
 ...le-reentrancy-detection-for-script-R.patch | 33 
 debian/patches/series |  2 +
 3 files changed, 73 insertions(+)
 create mode 100644 
debian/patches/extra/0021-memory-Allow-disabling-re-entrancy-checking-per-MR.patch
 create mode 100644 
debian/patches/extra/0022-lsi53c895a-disable-reentrancy-detection-for-script-R.patch

diff --git 
a/debian/patches/extra/0021-memory-Allow-disabling-re-entrancy-checking-per-MR.patch
 
b/debian/patches/extra/0021-memory-Allow-disabling-re-entrancy-checking-per-MR.patch
new file mode 100644
index 000..3d5c267
--- /dev/null
+++ 
b/debian/patches/extra/0021-memory-Allow-disabling-re-entrancy-checking-per-MR.patch
@@ -0,0 +1,38 @@
+From  Mon Sep 17 00:00:00 2001
+From: Alexander Bulekov 
+Date: Mon, 13 Mar 2023 04:24:16 -0400
+Subject: [PATCH] memory: Allow disabling re-entrancy checking per-MR
+
+Signed-off-by: Alexander Bulekov 
+---
+ include/exec/memory.h | 3 +++
+ softmmu/memory.c  | 2 +-
+ 2 files changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/include/exec/memory.h b/include/exec/memory.h
+index 91f8a2395a..d7268d9f39 100644
+--- a/include/exec/memory.h
 b/include/exec/memory.h
+@@ -765,6 +765,9 @@ struct MemoryRegion {
+ unsigned ioeventfd_nb;
+ MemoryRegionIoeventfd *ioeventfds;
+ RamDiscardManager *rdm; /* Only for RAM */
++
++/* For devices designed to perform re-entrant IO into their own IO MRs */
++bool disable_reentrancy_guard;
+ };
+ 
+ struct IOMMUMemoryRegion {
+diff --git a/softmmu/memory.c b/softmmu/memory.c
+index 7dcb3347aa..2b46714191 100644
+--- a/softmmu/memory.c
 b/softmmu/memory.c
+@@ -544,7 +544,7 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
+ }
+ 
+ /* Do not allow more than one simultanous access to a device's IO Regions 
*/
+-if (mr->owner &&
++if (mr->owner && !mr->disable_reentrancy_guard &&
+ !mr->ram_device && !mr->ram && !mr->rom_device && !mr->readonly) {
+ dev = (DeviceState *) object_dynamic_cast(mr->owner, TYPE_DEVICE);
+ if (dev) {
diff --git 
a/debian/patches/extra/0022-lsi53c895a-disable-reentrancy-detection-for-script-R.patch
 
b/debian/patches/extra/0022-lsi53c895a-disable-reentrancy-detection-for-script-R.patch
new file mode 100644
index 000..a4ed0ee
--- /dev/null
+++ 
b/debian/patches/extra/0022-lsi53c895a-disable-reentrancy-detection-for-script-R.patch
@@ -0,0 +1,33 @@
+From  Mon Sep 17 00:00:00 2001
+From: Alexander Bulekov 
+Date: Mon, 13 Mar 2023 04:24:17 -0400
+Subject: [PATCH] lsi53c895a: disable reentrancy detection for script RAM
+
+As the code is designed to use the memory APIs to access the script ram,
+disable reentrancy checks for the pseudo-RAM ram_io MemoryRegion.
+
+In the future, ram_io may be converted from an IO to a proper RAM MemoryRegion.
+
+Reported-by: Fiona Ebner 
+Signed-off-by: Alexander Bulekov 
+---
+ hw/scsi/lsi53c895a.c | 6 ++
+ 1 file changed, 6 insertions(+)
+
+diff --git a/hw/scsi/lsi53c895a.c b/hw/scsi/lsi53c895a.c
+index 50979640c3..894b9311ac 100644
+--- a/hw/scsi/lsi53c895a.c
 b/hw/scsi/lsi53c895a.c
+@@ -2302,6 +2302,12 @@ static void lsi_scsi_realize(PCIDevice *dev, Error 
**errp)
+ memory_region_init_io(&s->io_io, OBJECT(s), &lsi_io_ops, s,
+   "lsi-io", 256);
+ 
++/*
++ * Since we use the address-space API to interact with ram_io, disable the
++ * re-entrancy guard.
++ */
++s->ram_io.disable_reentrancy_guard = true;
++
+ address_space_init(&s->pci_io_as, pci_address_space_io(dev), 
"lsi-pci-io");
+ qdev_init_gpio_out(d, &s->ext_irq, 1);
+ 
diff --git a/debian/patches/series b/debian/patches/series
index ba3279e..681b57d 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -18,6 +18,8 @@ 
extra/0017-vhost-avoid-a-potential-use-of-an-uninitialized-vari.patch
 extra/0018-chardev-char-socket-set-s-listener-NULL-in-char_sock.patch
 extra/0019-intel-iommu-fail-MAP-notifier-without-caching-mode.patch
 extra/0020-intel-iommu-fail-DEVIOTLB_UNMAP-without-dt-mode.patch
+extra/0021-memory-Allow-disabling-re-entrancy-checking-per-MR.patch
+extra/0022-lsi53c895a-disable-reentrancy-detection-for-script-R.patch
 bitmap-mirror/0001-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
 bitmap-mirror/0002-drive-mirror-add-support-for-conditional-and-always-.patch
 bitmap-mirror/0003-mirror-add-chec

[pve-devel] [PATCH qemu-server] fix: api: fix permission check for cloudinit drive update

2023-03-13 Thread Friedrich Weber
Trying to regenerate a cloudinit drive as a non-root user via the API
currently throws a Perl error, as reported in the forum [1]. This is
due to a type mismatch in the permission check, where a string is
passed but an array is expected.

[1] 
https://forum.proxmox.com/threads/regenerate-cloudinit-by-put-api-return-500.124099/

Signed-off-by: Friedrich Weber 
---
 To see if we have the same problem for other API endpoints, I ran:
grep -r "['\"]perm['\"][^[]*]" .
 in my locally checked-out repos, but found only this single occurrence.

 PVE/API2/Qemu.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 587bb22..0ea18eb 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1398,7 +1398,7 @@ __PACKAGE__->register_method({
 proxyto => 'node',
 description => "Regenerate and change cloudinit config drive.",
 permissions => {
-   check => ['perm', '/vms/{vmid}', 'VM.Config.Cloudinit'],
+   check => ['perm', '/vms/{vmid}', ['VM.Config.Cloudinit']],
 },
 parameters => {
additionalProperties => 0,
-- 
2.30.2






Re: [pve-devel] [PATCH manager 1/3] api: ceph: add endpoint to fetch config keys

2023-03-13 Thread Aaron Lauterer




On 3/11/23 18:07, Thomas Lamprecht wrote:
> Am 08/03/2023 um 13:14 schrieb Dominik Csapak:
>> high level:
>>
>> as you mentioned the path 'configkey' is not really optimal
>>
>> i recently mentioned off-list that we could clean this up on
>> the next breaking major release with a breaking api change:
>>
>> have a 'config' dir and a
>> 'file'
>> 'db'
>> and 'key' (or 'value') api endpoint inside
>> that represents the different things
>>
>> for now a possible change could be to do it in 'config'
>> but with a new parameter, though that's also not ideal
>>
>> any further ideas/suggestions @Thomas?
> 
> We could add the full
> 
> cfg/
>     raw
>     db
>     value
> 
> now already, re-mount the 'cfg/raw' one on the current 'config' (or just
> keep the code duplicated, not much gain if we remove it anyway) one and
> then drop that old 'config' one in PVE 8.0; slightly hacky but IMO not
> that much.
> 
> Might want to check what other uses of config, cfg, conf, configuration
> there are in API path's though, as ideally we keep the total unique count
> of them the same ;-)

AFAICT we basically only have "config" in the API paths according to the
API Viewer. So 'cfg' would be something new, not yet used.
I do like 'cfg' more than 'conf'. Once we dropped support for 'config', we
could wait a full major release and then move it back? Not sure but 'cfg'
is also only somewhat nice ;)






Re: [pve-devel] [PATCH manager 1/3] api: ceph: add endpoint to fetch config keys

2023-03-13 Thread Thomas Lamprecht
Am 13/03/2023 um 13:58 schrieb Aaron Lauterer:
> On 3/11/23 18:07, Thomas Lamprecht wrote:
>> We could add the full
>>
>> cfg/
>>     raw
>>     db
>>     value
>>
>> now already, re-mount the 'cfg/raw' one on the current 'config' (or just keep
>> the code duplicated, not much gain if we remove it anyway) one and then drop 
>> that
>> old 'config' one in PVE 8.0; slightly hacky but IMO not that much.
>>
>> Might want to check what other uses of config, cfg, conf, configuration 
>> there are
>> in API path's though, as ideally we keep the total unique count of them the 
>> same ;-)
> 
> AFAICT we basically only have "config" in the API paths according to the API 
> Viewer. So 'cfg' would be something new, not yet used.
> I do like 'cfg' more than 'conf'. Once we dropped support for 'config', we 
> could wait a full major release and then move it back? Not sure but 'cfg' is 
> also only somewhat nice ;)

Ok, so we need to use something new anyway; cfg works out fine for me, we
just should try to remember using that if a move like this needs to be done
for some other API subtree.

Moving it back again later just for the sake of purity seems not the best
argument, so I'd rather live with config and cfg than work through the churn
of moving back again.




[pve-devel] applied-series: [PATCH qemu 1/2] fixup patch "ide: avoid potential deadlock when draining during trim"

2023-03-13 Thread Thomas Lamprecht
Am 09/03/2023 um 14:37 schrieb Fiona Ebner:
> The patch was incomplete and (re-)introduced an issue with a potential
> failing assertion upon cancelation of the DMA request.
> 
> There is a patch on qemu-devel now[0], and it's the same as this one
> code-wise (except for comments). But the discussion is still ongoing.
> While there shouldn't be a real issue with the patch, there might be
> better approaches. The plan is to use this as a stop-gap for now and
> pick up the proper solution once it's ready.
> 
> [0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-03/msg03325.html
> 
> Signed-off-by: Fiona Ebner 
> ---
>  ...ial-deadlock-when-draining-during-tr.patch | 28 +--
>  1 file changed, 25 insertions(+), 3 deletions(-)
> 
>

applied, thanks!





[pve-devel] applied-series: [PATCH qemu] add patches to fix regression with LSI SCSI controller

2023-03-13 Thread Thomas Lamprecht
Am 13/03/2023 um 12:43 schrieb Fiona Ebner:
> The patch 0008-memory-prevent-dma-reentracy-issues.patch introduced a
> regression for the LSI SCSI controller leading to boot failures [0],
> because, in its current form, it relies on reentrancy for a particular
> ram_io region.
> 
> [0]: https://forum.proxmox.com/threads/123843
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> Needs these two patches first (for the numbering in the series file):
> https://lists.proxmox.com/pipermail/pve-devel/2023-March/056110.html
> 
>  ...isabling-re-entrancy-checking-per-MR.patch | 38 +++
>  ...le-reentrancy-detection-for-script-R.patch | 33 
>  debian/patches/series |  2 +
>  3 files changed, 73 insertions(+)
>  create mode 100644 
> debian/patches/extra/0021-memory-Allow-disabling-re-entrancy-checking-per-MR.patch
>  create mode 100644 
> debian/patches/extra/0022-lsi53c895a-disable-reentrancy-detection-for-script-R.patch
> 
>

applied, thanks!





Re: [pve-devel] [PATCH qemu-server] qemu options: add memory_allocator

2023-03-13 Thread Thomas Lamprecht
Am 13/03/2023 um 11:16 schrieb Alexandre Derumier:
> Add optional memory_allocator.
> 
> Default is glibc malloc, tcmalloc is available to improve performance
> of ceph librbd.

Looks ok besides some config/api schema details I'd like to see changed.

> 
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/QemuServer.pm | 12 
>  1 file changed, 12 insertions(+)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 40be44d..8de6c82 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -723,6 +723,14 @@ EODESCR
>   description => "List of host cores used to execute guest processes, for 
> example: 0,5,8-11",
>   optional => 1,
>  },
> +memory_allocator => {

kebab-case for new properties please, but actually I'd like to have this nested
in a property string here, e.g.:

tuning => {
    optional => 1,
    type => 'string',
    format => { # <- probably in a $tuning_format variable
        allocator => {
            # ...
        },
    },
},

> + optional => 1,
> + type => 'string',
> + enum => [ qw(glibc tcmalloc) ],

system tcmalloc

> + default => 'glibc',
> + description => "Configure qemu process memory allocator. tcmalloc 
> improve performance of ceph librbd",

I'd change this to:

description => "Override the memory allocator used in QEMU via LD_PRELOAD.",
verbose_description => "Override the memory allocator used in QEMU via LD_PRELOAD."
    ." Using tcmalloc might improve performance if ceph librbd is used."
    ." NOTE: you must install the libtcmalloc-minimal4 package first!",

> + optional => 1,

you got "optional" set to true twice

> +},
>  };
>  
>  my $cicustom_fmt = {
> @@ -5909,6 +5917,10 @@ sub vm_start_nolock {
>  
>  my $run_qemu = sub {
>   PVE::Tools::run_fork sub {
> +
> + $ENV{LD_PRELOAD} = "/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4"
> + if $conf->{memory_allocator} && $conf->{memory_allocator} eq 
> 'tcmalloc';
> +
>   PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", 
> %systemd_properties);
>  
>   my $tpmpid;






[pve-devel] [PATCH v2 qemu-server] qemu options: add tuning allocator

2023-03-13 Thread Alexandre Derumier
Add a new tuning option with an allocator property.

Available values:
- 'system' (the default), aka glibc malloc
- 'tcmalloc' (improves performance of ceph librbd)

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 35 +++
 1 file changed, 35 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 40be44d..761ab48 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -289,6 +289,19 @@ my $meta_info_fmt = {
 },
 };
 
+my $tuning_fmt = {
+allocator => {
+   type => 'string',
+   enum => [ qw(system tcmalloc) ],
+   default => 'system',
+   description => "Override the memory allocator used in QEMU via 
LD_PRELOAD.",
+   verbose_description => "Override the memory allocator used in QEMU via 
LD_PRELOAD."
+   ." Using tcmalloc might improve performance if ceph librbd is used."
+   ." NOTE: you must install the libtcmalloc-minimal4 package first!",
+   optional => 1,
+},
+};
+
 my $confdesc = {
 onboot => {
optional => 1,
@@ -723,6 +736,12 @@ EODESCR
description => "List of host cores used to execute guest processes, for 
example: 0,5,8-11",
optional => 1,
 },
+tuning=> {
+   type => 'string',
+   format => $tuning_fmt,
+   description => "Tune some special features.",
+   optional => 1,
+},
 };
 
 my $cicustom_fmt = {
@@ -2192,6 +2211,16 @@ sub parse_rng {
 return $res;
 }
 
+sub parse_tuning {
+my ($value) = @_;
+
+return if !$value;
+
+my $res = eval { parse_property_string($tuning_fmt, $value) };
+warn $@ if $@;
+return $res;
+}
+
 sub parse_meta_info {
 my ($value) = @_;
 
@@ -5909,6 +5938,12 @@ sub vm_start_nolock {
 
 my $run_qemu = sub {
PVE::Tools::run_fork sub {
+
+   my $tuning = $conf->{tuning} ? parse_tuning($conf->{tuning}) : 
undef;
+
+   $ENV{LD_PRELOAD} = "/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4"
+   if $tuning && $tuning->{allocator} eq 'tcmalloc';
+
PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", 
%systemd_properties);
 
my $tpmpid;
-- 
2.30.2





[pve-devel] [PATCH manager] ui: qemu : add tuning option

2023-03-13 Thread Alexandre Derumier
with memory allocator property

Signed-off-by: Alexandre Derumier 
---
 www/manager6/Makefile   |  1 +
 www/manager6/Utils.js   | 13 +
 www/manager6/form/TuningSelector.js | 41 +
 www/manager6/qemu/Options.js| 14 ++
 4 files changed, 69 insertions(+)
 create mode 100644 www/manager6/form/TuningSelector.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index b73b729a..c29bc87a 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -67,6 +67,7 @@ JSSRC=
\
form/StorageSelector.js \
form/TFASelector.js \
form/TokenSelector.js   \
+   form/TuningSelector.js  \
form/USBSelector.js \
form/UserSelector.js\
form/VLanField.js   \
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 7bf3955a..3c395a5c 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -545,6 +545,19 @@ Ext.define('PVE.Utils', {
return output.join(', ');
 },
 
+render_tuning: function(values) {
+   let props = PVE.Parser.parsePropertyString(values);
+   if (Ext.Object.isEmpty(props)) {
+   return Proxmox.Utils.noneText;
+   }
+
+   let output = [];
+   if (props.allocator === 'tcmalloc') {
+   output.push('Memory allocator: ' + props.allocator);
+   }
+   return output.join(', ');
+},
+
 // fixme: auto-generate this
 // for now, please keep in sync with PVE::Tools::kvmkeymaps
 kvm_keymaps: {
diff --git a/www/manager6/form/TuningSelector.js 
b/www/manager6/form/TuningSelector.js
new file mode 100644
index ..d967dc29
--- /dev/null
+++ b/www/manager6/form/TuningSelector.js
@@ -0,0 +1,41 @@
+Ext.define('PVE.form.TuningSelector', {
+extend: 'Proxmox.panel.InputPanel',
+alias: 'widget.pveTuningSelector',
+
+viewModel: {},
+
+items: [
+   {
+   xtype: 'proxmoxKVComboBox',
+   itemId: 'allocator',
+   name: 'allocator',
+   value: 'system',
+   fieldLabel: 'Memory Allocator',
+   comboItems: [
+   ['system', 'system'],
+   ['tcmalloc', 'tcmalloc'],
+   ],
+   },
+],
+
+onGetValues: function(values) {
+   var ret = {};
+
+   if (values.allocator !== "system") {
+   ret.allocator = values.allocator;
+   }
+
+   if (Ext.Object.isEmpty(ret)) {
+   return { 'delete': 'tuning' };
+   }
+   var tuning_props = PVE.Parser.printPropertyString(ret);
+   return { tuning: tuning_props };
+},
+
+setValues: function(values) {
+   if (values.tuning) {
+   var tuning = PVE.Parser.parsePropertyString(values.tuning);
+   this.callParent([tuning]);
+   }
+},
+});
diff --git a/www/manager6/qemu/Options.js b/www/manager6/qemu/Options.js
index 7b112400..53972d18 100644
--- a/www/manager6/qemu/Options.js
+++ b/www/manager6/qemu/Options.js
@@ -338,6 +338,20 @@ Ext.define('PVE.qemu.Options', {
},
} : undefined,
},
+   tuning: {
+   header: gettext('Tuning'),
+   defaultValue: false,
+   renderer: PVE.Utils.render_tuning,
+   editor: caps.vms['VM.Config.Options'] ? {
+   xtype: 'proxmoxWindowEdit',
+   subject: gettext('Tuning'),
+   onlineHelp: 'chapter_virtual_machines', // FIXME: use 
'qm_tuning' once available
+   items: {
+   xtype: 'pveTuningSelector',
+   name: 'tuning',
+   },
+   } : undefined,
+   },
hookscript: {
header: gettext('Hookscript'),
},
-- 
2.30.2

