Re: [pve-devel] Shell command and Emacs Lisp code injection in emacsclient-mail.desktop

2023-03-09 Thread Stefan Sterz
On 3/8/23 18:05, Thomas Lamprecht wrote:
> Am 08/03/2023 um 17:40 schrieb Stefan Sterz:
>> From: Daniel Tschlatscher 
>>
>> this requires a bump of the widget toolkit so the version includes the
>> necessary widgets.
>>
>> Signed-off-by: Daniel Tschlatscher 
>> Signed-off-by: Stefan Sterz 
>> ---
>>  www/manager6/Workspace.js | 8 
>>  1 file changed, 8 insertions(+)
>>
>>
> 
> applied series, huge thanks to you and Daniel again!
> 
> we might want to make auto default rather quickly ;-)

Yes, that might make sense. My intention was to not "surprise" existing
users with a potentially unwanted overhaul of the GUI. However, I'm not
sure how relevant that concern is, as it is fairly easy to switch back.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] applied-series: [PATCH manager v1 1/4] gui: create user info menu intro for selecting the theme

2023-03-09 Thread Thomas Lamprecht
Am 09/03/2023 um 09:07 schrieb Stefan Sterz:
> On 3/8/23 18:05, Thomas Lamprecht wrote:
>> we might want to make auto default rather quickly ;-)
> 
> Yes, that might make sense. My intention was to not "surprise" existing
> users with a potentially unwanted overhaul of the GUI. However, I'm not
> sure how relevant that concern is, as it is fairly easy to switch back.


IMO not a concern, as it only changes for those whose browser already tells our
web UI to prefer a dark color scheme. So while it might come as a "surprise",
I think it's safe to say that it'll be a welcome one - it'd be odd if they
configured their OS and/or browser to prefer dark mode but didn't actually want it.


ps. something funky happened with your subject (pasted by mistake?)



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] applied-series: [PATCH manager v1 1/4] gui: create user info menu intro for selecting the theme

2023-03-09 Thread Stefan Sterz
On 3/9/23 09:16, Thomas Lamprecht wrote:
> Am 09/03/2023 um 09:07 schrieb Stefan Sterz:
>> On 3/8/23 18:05, Thomas Lamprecht wrote:
>>> we might want to make auto default rather quickly ;-)
>>
>> Yes, that might make sense. My intention was to not "surprise" existing
>> users with a potentially unwanted overhaul of the GUI. However, I'm not
>> sure how relevant that concern is, as it is fairly easy to switch back.
> 
> 
> IMO not a concern, as it only changes for those whose browser already tells our
> web UI to prefer a dark color scheme. So while it might come as a "surprise",
> I think it's safe to say that it'll be a welcome one - it'd be odd if they
> configured their OS and/or browser to prefer dark mode but didn't actually want it.
> 

Sure, I'll send a follow-up then :)

> 
> ps. something funky happened with your subject (pasted by mistake?)
> 

Yeah, OK, I probably messed something up over here o.O Not sure why this
happened; I didn't touch the subject line at all.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 2/6] virtio-blk: add queues option

2023-03-09 Thread Alexandre Derumier
We already added support for this option for virtio-scsi some years ago,
but forgot to add it for virtio-blk.

Note that QEMU names the attribute "num-queues" for virtio-blk,
but "num_queues" for virtio-scsi.
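For illustration, the resulting -device arguments differ only in the
spelling of the queue parameter (a sketch with an arbitrary queue count,
based on the device strings generated by this series):

  -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,num-queues=4'
  -device 'virtio-scsi-pci,id=virtioscsi0,num_queues=4'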

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm   | 1 +
 PVE/QemuServer/Drive.pm | 1 +
 2 files changed, 2 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 40be44d..deb7faf 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1505,6 +1505,7 @@ sub print_drivedevice_full {
 	my $pciaddr = print_pci_addr("$drive_id", $bridges, $arch, $machine_type);
 	$device = "virtio-blk-pci,drive=drive-$drive_id,id=${drive_id}${pciaddr}";
$device .= ",iothread=iothread-$drive_id" if $drive->{iothread};
+   $device .= ",num-queues=$drive->{queues}" if $drive->{queues};
 } elsif ($drive->{interface} eq 'scsi') {
 
 	my ($maxdev, $controller, $controller_prefix) = scsihw_infos($conf, $drive);
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index b0e0a96..cd2823a 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -304,6 +304,7 @@ PVE::JSONSchema::register_standard_option("pve-qm-sata", $satadesc);
 my $virtio_fmt = {
 %drivedesc_base,
 %iothread_fmt,
+%queues_fmt,
 %readonly_fmt,
 };
 my $virtiodesc = {
-- 
2.30.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 3/6] cpuconfig: add get_cpu_topology helper

2023-03-09 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm   | 16 +++-
 PVE/QemuServer/CPUConfig.pm | 11 +++
 2 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index deb7faf..b49b59b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -50,7 +50,7 @@ use PVE::QemuConfig;
 use PVE::QemuServer::Helpers qw(min_version config_aware_timeout windows_version);
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CGroup;
-use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
+use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options get_cpu_topology);
 use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit drive_is_cdrom drive_is_read_only parse_drive print_drive);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory;
@@ -3818,13 +3818,7 @@ sub config_to_command {
 
 add_tpm_device($vmid, $devices, $conf);
 
-my $sockets = 1;
-$sockets = $conf->{smp} if $conf->{smp}; # old style - no longer iused
-$sockets = $conf->{sockets} if  $conf->{sockets};
-
-my $cores = $conf->{cores} || 1;
-
-my $maxcpus = $sockets * $cores;
+my ($sockets, $cores, $maxcpus) = get_cpu_topology($conf);
 
 my $vcpus = $conf->{vcpus} ? $conf->{vcpus} : $maxcpus;
 
@@ -4660,11 +4654,7 @@ sub qemu_cpu_hotplug {
 
     my $machine_type = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
 
-my $sockets = 1;
-$sockets = $conf->{smp} if $conf->{smp}; # old style - no longer iused
-$sockets = $conf->{sockets} if  $conf->{sockets};
-my $cores = $conf->{cores} || 1;
-my $maxcpus = $sockets * $cores;
+my ($sockets, $cores, $maxcpus) = get_cpu_topology($conf);
 
 $vcpus = $maxcpus if !$vcpus;
 
diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index fb0861b..826e472 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -12,6 +12,7 @@ use base qw(PVE::SectionConfig Exporter);
 our @EXPORT_OK = qw(
 print_cpu_device
 get_cpu_options
+get_cpu_topology
 );
 
 # under certain race-conditions, this module might be loaded before pve-cluster
@@ -659,6 +660,16 @@ sub get_cpu_from_running_vm {
 return $1;
 }
 
+sub get_cpu_topology {
+my ($conf) = @_;
+
+my $sockets = $conf->{sockets} || 1;
+my $cores = $conf->{cores} || 1;
+my $maxcpus = $sockets * $cores;
+
+return ($sockets, $cores, $maxcpus);
+}
+
 __PACKAGE__->register();
 __PACKAGE__->init();
 
-- 
2.30.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 1/6] add virtio-scsi && virtio-scsi-single tests

2023-03-09 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 test/cfg2cmd/simple-virtio-scsi-single.conf   | 14 
 .../simple-virtio-scsi-single.conf.cmd| 33 +++
 test/cfg2cmd/simple-virtio-scsi.conf  | 14 
 test/cfg2cmd/simple-virtio-scsi.conf.cmd  | 31 +
 4 files changed, 92 insertions(+)
 create mode 100644 test/cfg2cmd/simple-virtio-scsi-single.conf
 create mode 100644 test/cfg2cmd/simple-virtio-scsi-single.conf.cmd
 create mode 100644 test/cfg2cmd/simple-virtio-scsi.conf
 create mode 100644 test/cfg2cmd/simple-virtio-scsi.conf.cmd

diff --git a/test/cfg2cmd/simple-virtio-scsi-single.conf b/test/cfg2cmd/simple-virtio-scsi-single.conf
new file mode 100644
index 000..982702d
--- /dev/null
+++ b/test/cfg2cmd/simple-virtio-scsi-single.conf
@@ -0,0 +1,14 @@
+# TEST: Test for a basic configuration with a virtio-scsi-single IOThread disk
+# QEMU_VERSION: 5.0
+bootdisk: scsi0
+cores: 3
+ide2: none,media=cdrom
+memory: 768
+name: simple
+numa: 0
+ostype: l26
+smbios1: uuid=7b10d7af-b932-4c66-b2c3-3996152ec465
+sockets: 1
+scsihw: virtio-scsi-single
+scsi0: local:8006/vm-8006-disk-0.qcow2,discard=on,iothread=1,size=104858K
+vmgenid: c773c261-d800-4348-9f5d-167fadd53cf8
diff --git a/test/cfg2cmd/simple-virtio-scsi-single.conf.cmd b/test/cfg2cmd/simple-virtio-scsi-single.conf.cmd
new file mode 100644
index 000..374bd96
--- /dev/null
+++ b/test/cfg2cmd/simple-virtio-scsi-single.conf.cmd
@@ -0,0 +1,33 @@
+/usr/bin/kvm \
+  -id 8006 \
+  -name 'simple,debug-threads=on' \
+  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server=on,wait=off' \
+  -mon 'chardev=qmp,mode=control' \
+  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
+  -mon 'chardev=qmp-event,mode=control' \
+  -pidfile /var/run/qemu-server/8006.pid \
+  -daemonize \
+  -smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
+  -smp '3,sockets=1,cores=3,maxcpus=3' \
+  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
+  -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
+  -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
+  -m 768 \
+  -object 'iothread,id=iothread-virtioscsi0' \
+  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
+  -device 'pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5' \
+  -device 'vmgenid,guid=c773c261-d800-4348-9f5d-167fadd53cf8' \
+  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
+  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
+  -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
+  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
+  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
+  -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' \
+  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
  -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' \
  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-scsi0,discard=on,format=qcow2,cache=none,aio=native,detect-zeroes=unmap' \
  -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' \
+  -machine 'type=pc+pve0'
diff --git a/test/cfg2cmd/simple-virtio-scsi.conf b/test/cfg2cmd/simple-virtio-scsi.conf
new file mode 100644
index 000..b32a3df
--- /dev/null
+++ b/test/cfg2cmd/simple-virtio-scsi.conf
@@ -0,0 +1,14 @@
+# TEST: Test for a basic configuration with a virtio-scsi disk
+# QEMU_VERSION: 5.0
+bootdisk: scsi0
+cores: 3
+ide2: none,media=cdrom
+memory: 768
+name: simple
+numa: 0
+ostype: l26
+smbios1: uuid=7b10d7af-b932-4c66-b2c3-3996152ec465
+sockets: 1
+scsihw: virtio-scsi
+scsi0: local:8006/vm-8006-disk-0.qcow2,discard=on,size=104858K
+vmgenid: c773c261-d800-4348-9f5d-167fadd53cf8
diff --git a/test/cfg2cmd/simple-virtio-scsi.conf.cmd b/test/cfg2cmd/simple-virtio-scsi.conf.cmd
new file mode 100644
index 000..c25eed9
--- /dev/null
+++ b/test/cfg2cmd/simple-virtio-scsi.conf.cmd
@@ -0,0 +1,31 @@
+/usr/bin/kvm \
+  -id 8006 \
+  -name 'simple,debug-threads=on' \
+  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server=on,wait=off' \
+  -mon 'chardev=qmp,mode=control' \
+  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
+  -mon 'chardev=qmp-event,mode=control' \
+  -pidfile /var/run/qemu-server/8006.pid \
+  -daemonize \
+  -smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
+  -smp '3,sockets=1,cores=3,maxcpus=3' \
+  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
+  -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
+  -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
+  -m 768 \
+  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+  -device 'pci-bridge,id=

[pve-devel] [PATCH qemu-server 5/6] drive: allow minimum queues = 1

2023-03-09 Thread Alexandre Derumier
This allows the user to disable the new default multiqueue behavior by setting queues=1.
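For example, to keep a single queue on a virtio-blk disk once the new
default is active, one could set (a hypothetical sketch; VM ID and volume
name are placeholders, and the queues option comes from patch 2/6):

  qm set 100 --virtio0 local:100/vm-100-disk-0.qcow2,queues=1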

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer/Drive.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index cd2823a..546977d 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -174,7 +174,7 @@ my %queues_fmt = (
 queues => {
type => 'integer',
description => "Number of queues.",
-   minimum => 2,
+   minimum => 1,
optional => 1
 }
 );
-- 
2.30.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 6/6] add virtio-blk|scsi default multiqueue tests

2023-03-09 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 test/cfg2cmd/simple-virtio-blk-8.0.conf   | 13 
 test/cfg2cmd/simple-virtio-blk-8.0.conf.cmd   | 31 +
 .../simple-virtio-scsi-single-8.0.conf| 14 
 .../simple-virtio-scsi-single-8.0.conf.cmd| 33 +++
 4 files changed, 91 insertions(+)
 create mode 100644 test/cfg2cmd/simple-virtio-blk-8.0.conf
 create mode 100644 test/cfg2cmd/simple-virtio-blk-8.0.conf.cmd
 create mode 100644 test/cfg2cmd/simple-virtio-scsi-single-8.0.conf
 create mode 100644 test/cfg2cmd/simple-virtio-scsi-single-8.0.conf.cmd

diff --git a/test/cfg2cmd/simple-virtio-blk-8.0.conf b/test/cfg2cmd/simple-virtio-blk-8.0.conf
new file mode 100644
index 000..7f7ad57
--- /dev/null
+++ b/test/cfg2cmd/simple-virtio-blk-8.0.conf
@@ -0,0 +1,13 @@
+# TEST: Test for a basic configuration with a VirtIO Block IOThread disk
+# QEMU_VERSION: 8.0
+bootdisk: virtio0
+cores: 3
+ide2: none,media=cdrom
+memory: 768
+name: simple
+numa: 0
+ostype: l26
+smbios1: uuid=7b10d7af-b932-4c66-b2c3-3996152ec465
+sockets: 1
+virtio0: local:8006/vm-8006-disk-0.qcow2,discard=on,iothread=1,size=104858K
+vmgenid: c773c261-d800-4348-9f5d-167fadd53cf8
diff --git a/test/cfg2cmd/simple-virtio-blk-8.0.conf.cmd b/test/cfg2cmd/simple-virtio-blk-8.0.conf.cmd
new file mode 100644
index 000..0a89928
--- /dev/null
+++ b/test/cfg2cmd/simple-virtio-blk-8.0.conf.cmd
@@ -0,0 +1,31 @@
+/usr/bin/kvm \
+  -id 8006 \
+  -name 'simple,debug-threads=on' \
+  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server=on,wait=off' \
+  -mon 'chardev=qmp,mode=control' \
+  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
+  -mon 'chardev=qmp-event,mode=control' \
+  -pidfile /var/run/qemu-server/8006.pid \
+  -daemonize \
+  -smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
+  -smp '3,sockets=1,cores=3,maxcpus=3' \
+  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
+  -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
+  -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
+  -m 768 \
+  -object 'iothread,id=iothread-virtio0' \
+  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
+  -device 'vmgenid,guid=c773c261-d800-4348-9f5d-167fadd53cf8' \
+  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
+  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
+  -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
+  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
+  -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' \
+  -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' \
  -drive 'file=/var/lib/vz/images/8006/vm-8006-disk-0.qcow2,if=none,id=drive-virtio0,discard=on,format=qcow2,cache=none,aio=io_uring,detect-zeroes=unmap' \
  -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,num-queues=3,bootindex=100' \
+  -machine 'type=pc+pve0'
diff --git a/test/cfg2cmd/simple-virtio-scsi-single-8.0.conf b/test/cfg2cmd/simple-virtio-scsi-single-8.0.conf
new file mode 100644
index 000..b836b8a
--- /dev/null
+++ b/test/cfg2cmd/simple-virtio-scsi-single-8.0.conf
@@ -0,0 +1,14 @@
+# TEST: Test for a basic configuration with a virtio-scsi-single IOThread disk
+# QEMU_VERSION: 8.0
+bootdisk: scsi0
+cores: 3
+ide2: none,media=cdrom
+memory: 768
+name: simple
+numa: 0
+ostype: l26
+smbios1: uuid=7b10d7af-b932-4c66-b2c3-3996152ec465
+sockets: 1
+scsihw: virtio-scsi-single
+scsi0: local:8006/vm-8006-disk-0.qcow2,discard=on,iothread=1,size=104858K
+vmgenid: c773c261-d800-4348-9f5d-167fadd53cf8
diff --git a/test/cfg2cmd/simple-virtio-scsi-single-8.0.conf.cmd b/test/cfg2cmd/simple-virtio-scsi-single-8.0.conf.cmd
new file mode 100644
index 000..364f4be
--- /dev/null
+++ b/test/cfg2cmd/simple-virtio-scsi-single-8.0.conf.cmd
@@ -0,0 +1,33 @@
+/usr/bin/kvm \
+  -id 8006 \
+  -name 'simple,debug-threads=on' \
+  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server=on,wait=off' \
+  -mon 'chardev=qmp,mode=control' \
+  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
+  -mon 'chardev=qmp-event,mode=control' \
+  -pidfile /var/run/qemu-server/8006.pid \
+  -daemonize \
+  -smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
+  -smp '3,sockets=1,cores=3,maxcpus=3' \
+  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
+  -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
+  -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
+  -m 768 \
+  -object 'iothread,id=iothread-virtioscsi0' \
+  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+  -device 'pci-bridge,id=pci.2,

[pve-devel] [PATCH qemu-server 4/6] fix #4295: virtio-(blk|scsi): enable multiqueue by default

2023-03-09 Thread Alexandre Derumier
Set the number of queues to maxcpus for QEMU 8.0 machine versions.

Red Hat has already done this in RHV since 2021:
https://bugzilla.redhat.com/show_bug.cgi?id=1827722#c11

The -device virtio-blk,num-queues= and -device virtio-scsi,num_queues=
parameters control how many virtqueues are available to the guest.
Allocating one virtqueue per vCPU improves performance as follows:
- Interrupts are handled on the vCPU that submitted the request, avoiding IPIs.
- The I/O scheduler is automatically set to "none" by the Linux block layer.
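
As a concrete example: with the topology used by the tests in this series
(sockets=1, cores=3), get_cpu_topology() yields maxcpus=3, so a virtio-blk
disk without an explicit queues setting ends up with (see the
simple-virtio-blk-8.0 test in patch 6/6):

  -device 'virtio-blk-pci,drive=drive-virtio0,...,num-queues=3,...'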

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b49b59b..39d30e3 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1505,7 +1505,14 @@ sub print_drivedevice_full {
 	my $pciaddr = print_pci_addr("$drive_id", $bridges, $arch, $machine_type);
 	$device = "virtio-blk-pci,drive=drive-$drive_id,id=${drive_id}${pciaddr}";
$device .= ",iothread=iothread-$drive_id" if $drive->{iothread};
+
+	my $machine_version = extract_version($machine_type, kvm_user_version());
+   if (min_version($machine_version, 8, 0)) {
+   my ($sockets, $cores, $maxcpus) = get_cpu_topology($conf);
+   $drive->{queues} = $maxcpus if !$drive->{queues};
+   }
$device .= ",num-queues=$drive->{queues}" if $drive->{queues};
+
 } elsif ($drive->{interface} eq 'scsi') {
 
 	my ($maxdev, $controller, $controller_prefix) = scsihw_infos($conf, $drive);
@@ -4043,6 +4050,12 @@ sub config_to_command {
);
}
 
+   if (min_version($machine_version, 8, 0)) {
+   my ($sockets, $cores, $maxcpus) = get_cpu_topology($conf);
+   $drive->{queues} = $maxcpus if !$drive->{queues};
+
+   }
+
my $queues = '';
 	if($conf->{scsihw} && $conf->{scsihw} eq "virtio-scsi-single" && $drive->{queues}){
$queues = ",num_queues=$drive->{queues}";
@@ -4306,6 +4319,12 @@ sub vm_deviceplug {
$devicefull .= ",iothread=iothread-$deviceid";
}
 
+	my $machine_version = PVE::QemuServer::Machine::extract_version($machine_type);
+   if (min_version($machine_version, 8, 0)) {
+   my ($sockets, $cores, $maxcpus) = get_cpu_topology($conf);
+   $device->{queues} = $maxcpus if !$device->{queues};
+   }
+
if($deviceid =~ m/^virtioscsi(\d+)$/ && $device->{queues}) {
$devicefull .= ",num_queues=$device->{queues}";
}
-- 
2.30.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server 0/6] improve virtio drive multiqueues

2023-03-09 Thread Alexandre Derumier
Hi,

This patch series adds support for the virtio-blk num-queues option.
It was already implemented for virtio-scsi.

It also defaults the number of queues to maxcpus for QEMU 8.0.
Red Hat has already enabled this by default in RHV since 2021,
so it's pretty stable.
https://bugzilla.redhat.com/show_bug.cgi?id=1827722

It improves performance for fast storage like NVMe or Optane
by around 20%: my fio benchmark jumped from 200k to 240k IOPS
with a 4k block size.

I haven't seen any performance regression (running this for a month),
but users are still able to override queues and set it to 1.


Alexandre Derumier (6):
  add virtio-scsi && virtio-scsi-single tests
  virtio-blk: add queues option
  cpuconfig: add get_cpu_topology helper
  fix #4295: virtio-(blk|scsi): enable multiqueue by default
  drive: allow minimum queues = 1
  add virtio-blk|scsi default multiqueue tests

 PVE/QemuServer.pm | 36 ---
 PVE/QemuServer/CPUConfig.pm   | 11 ++
 PVE/QemuServer/Drive.pm   |  3 +-
 test/cfg2cmd/simple-virtio-blk-8.0.conf   | 13 +++
 test/cfg2cmd/simple-virtio-blk-8.0.conf.cmd   | 31 
 .../simple-virtio-scsi-single-8.0.conf| 14 
 .../simple-virtio-scsi-single-8.0.conf.cmd| 33 +
 test/cfg2cmd/simple-virtio-scsi-single.conf   | 14 
 .../simple-virtio-scsi-single.conf.cmd| 33 +
 test/cfg2cmd/simple-virtio-scsi.conf  | 14 
 test/cfg2cmd/simple-virtio-scsi.conf.cmd  | 31 
 11 files changed, 219 insertions(+), 14 deletions(-)
 create mode 100644 test/cfg2cmd/simple-virtio-blk-8.0.conf
 create mode 100644 test/cfg2cmd/simple-virtio-blk-8.0.conf.cmd
 create mode 100644 test/cfg2cmd/simple-virtio-scsi-single-8.0.conf
 create mode 100644 test/cfg2cmd/simple-virtio-scsi-single-8.0.conf.cmd
 create mode 100644 test/cfg2cmd/simple-virtio-scsi-single.conf
 create mode 100644 test/cfg2cmd/simple-virtio-scsi-single.conf.cmd
 create mode 100644 test/cfg2cmd/simple-virtio-scsi.conf
 create mode 100644 test/cfg2cmd/simple-virtio-scsi.conf.cmd

-- 
2.30.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v3 storage] api: fix get content call for volumes

2023-03-09 Thread Christian Ebner
`pvesh get /nodes/{node}/storage/{storage}/content/{volume}` failed for
several storage types, because the respective storage plugins returned
only the volume's `size` on `volume_size_info` calls, while the `format`
is also required.

This patch fixes the issue by also returning `format` and, where possible, `used`.

The issue was reported in the forum:
https://forum.proxmox.com/threads/pvesh-get-nodes-node-storage-storage-content-volume-returns-error.123747/
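
For context, a short sketch of the calling convention this patch relies on
(derived from the wantarray usage in the diffs below): in list context the
plugins now return the full tuple, while scalar context still yields just
the size.

  # list context: full information
  my ($size, $format, $used, $parent) =
      $class->volume_size_info($scfg, $storeid, $volname, $timeout);

  # scalar context: size in bytes only
  my $size = $class->volume_size_info($scfg, $storeid, $volname, $timeout);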

Signed-off-by: Christian Ebner 
---

Changes since v1:
 * Remove erroneous check for $used being set, rely on fallback to 0 if undef
 * Return `parent` for RBD and ZFS
 * Return `used` for ZFS

Changes since v2:
 * Add conditional call to `rbd du` to get `used` for RBD based volumes
 * Get `usedbydataset` instead of `used` for ZFS volumes, refactor the
   zfs_get_properties call

 Note: The file_size_info for iSCSI direct targets unfortunately does not
   return anything useful for `used` storage size, so it stayed as is.
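
 For reference, `rbd du --format json` output has roughly the following
   shape (illustrative values, not captured from a real cluster), which is
   what the new rbd_volume_du() parser expects - entries carrying a
   "snapshot" key are skipped and the "used_size" of the first non-snapshot
   image whose "name" matches the volume is returned:

     {"images":[
       {"name":"vm-100-disk-0","snapshot":"snap1","used_size":524288},
       {"name":"vm-100-disk-0","used_size":1073741824}
     ],"total_used_size":1074266112}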

 PVE/Storage/ISCSIDirectPlugin.pm |  2 +-
 PVE/Storage/RBDPlugin.pm | 44 ++--
 PVE/Storage/ZFSPoolPlugin.pm | 11 +---
 3 files changed, 50 insertions(+), 7 deletions(-)

diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
index 9777969..eb329d4 100644
--- a/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/PVE/Storage/ISCSIDirectPlugin.pm
@@ -208,7 +208,7 @@ sub volume_size_info {
 my $vollist = iscsi_ls($scfg,$storeid);
 my $info = $vollist->{$storeid}->{$volname};
 
-return $info->{size};
+return wantarray ? ($info->{size}, 'raw', 0, undef) : $info->{size};
 }
 
 sub volume_resize {
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 9047504..35b2372 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -308,6 +308,45 @@ sub rbd_volume_info {
 return $volume->@{qw(size parent format protected features)};
 }
 
+sub rbd_volume_du {
+my ($scfg, $storeid, $volname) = @_;
+
+my @options = ('du', $volname, '--format', 'json');
+my $cmd = $rbd_cmd->($scfg, $storeid, @options);
+
+my $raw = '';
+my $parser = sub { $raw .= shift };
+
+run_rbd_command($cmd, errmsg => "rbd error", errfunc => sub {}, outfunc => $parser);
+
+my $volume;
+if ($raw eq '') {
+   $volume = {};
+} elsif ($raw =~ m/^(\{.*\})$/s) { # untaint
+   $volume = JSON::decode_json($1);
+} else {
+   die "got unexpected data from rbd du: '$raw'\n";
+}
+
+if (!defined($volume->{images})) {
+   die "got no images from rbd du\n";
+}
+
+# `rbd du` returns array of images for name matching `volname`,
+# including snapshots.
+my $images = $volume->{images};
+foreach my $image (@$images) {
+   next if defined($image->{snapshot});
+   next if !defined($image->{used_size}) || !defined($image->{name});
+
+   # Return `used_size` of first volume with matching name which
+   # is not a snapshot.
+   return $image->{used_size} if $image->{name} eq $volname;
+}
+
+die "got no matching image from rbd du\n";
+}
+
 # Configuration
 
 sub type {
@@ -729,8 +768,9 @@ sub volume_size_info {
 my ($class, $scfg, $storeid, $volname, $timeout) = @_;
 
 my ($vtype, $name, $vmid) = $class->parse_volname($volname);
-my ($size, undef) = rbd_volume_info($scfg, $storeid, $name);
-return $size;
+my ($size, $parent) = rbd_volume_info($scfg, $storeid, $name);
+my $used = wantarray ? rbd_volume_du($scfg, $storeid, $name) : 0;
+return wantarray ? ($size, 'raw', $used, $parent) : $size;
 }
 
 sub volume_resize {
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 9fbd149..54dd2ae 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -446,13 +446,16 @@ sub status {
 sub volume_size_info {
 my ($class, $scfg, $storeid, $volname, $timeout) = @_;
 
-my (undef, $vname, undef, undef, undef, undef, $format) =
+my (undef, $vname, undef, $parent, undef, undef, $format) =
 $class->parse_volname($volname);
 
 my $attr = $format eq 'subvol' ? 'refquota' : 'volsize';
-my $value = $class->zfs_get_properties($scfg, $attr, "$scfg->{pool}/$vname");
-if ($value =~ /^(\d+)$/) {
-   return $1;
+my ($size, $used) = $class->zfs_get_properties($scfg, "$attr,usedbydataset", "$scfg->{pool}/$vname");
+
+$used = ($used =~ /^(\d+)$/) ? $1 : 0;
+
+if ($size =~ /^(\d+)$/) {
+   return wantarray ? ($1, $format, $used, $parent) : $1;
 }
 
 die "Could not get zfs volume size\n";
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH cluster] pvecm add: require user to navigate out of /etc/pve

2023-03-09 Thread Friedrich Weber
If `pvecm add` is issued from /etc/pve (or any subdirectory), it
prints some errors:

[...]
shell-init: error retrieving current directory: getcwd: cannot
access parent directories: Transport endpoint is not connected
[...]
successfully added node 'pve-c2' to cluster.
cannot fetch initial working directory: Transport endpoint is not
connected at /usr/share/perl5/PVE/CLI/pvecm.pm line 446.

The reason is that `pvecm add` restarts pmxcfs, which re-mounts the
fuse mount at /etc/pve, invalidating pvecm's working directory.

The error messages give the impression that something went wrong.
Indeed, the second error indicates the temporary directory is not
cleaned up. The cluster join itself actually works, though.

The issue could be fixed by chdir'ing to / in `pvecm add`. However,
the user's shell would still remain in the now-invalid /etc/pve,
potentially leading to confusing "transport endpoint not connected"
messages in future interactions.

To avoid this, require the user to chdir out of /etc/pve before
running `pvecm add`.
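
For example (hostnames are illustrative; pve-c2 is the joining node, as in
the error transcript above):

  root@pve-c2:/etc/pve# cd
  root@pve-c2:~# pvecm add pve-c1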

Signed-off-by: Friedrich Weber 
---
 data/PVE/CLI/pvecm.pm | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/data/PVE/CLI/pvecm.pm b/data/PVE/CLI/pvecm.pm
index 5ac9ed3..b0b5931 100755
--- a/data/PVE/CLI/pvecm.pm
+++ b/data/PVE/CLI/pvecm.pm
@@ -3,6 +3,7 @@ package PVE::CLI::pvecm;
 use strict;
 use warnings;
 
+use Cwd qw(getcwd);
 use File::Path;
 use File::Basename;
 use PVE::Tools qw(run_command);
@@ -348,6 +349,11 @@ __PACKAGE__->register_method ({
 code => sub {
my ($param) = @_;
 
+   # avoid "transport endpoint not connected" errors that occur if
+   # restarting pmxcfs while in fuse-mounted /etc/pve
+	die "Navigate out of $basedir before running 'pvecm add', for example by running 'cd'.\n"
+   if getcwd() =~ m!^$basedir(/.*)?$!;
+
my $nodename = PVE::INotify::nodename();
my $host = $param->{hostname};
 
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu 1/2] fixup patch "ide: avoid potential deadlock when draining during trim"

2023-03-09 Thread Fiona Ebner
The patch was incomplete and (re-)introduced an issue with a potentially
failing assertion upon cancellation of the DMA request.

There is a patch on qemu-devel now[0], and it's the same as this one
code-wise (except for comments). But the discussion is still ongoing.
While there shouldn't be a real issue with the patch, there might be
better approaches. The plan is to use this as a stop-gap for now and
pick up the proper solution once it's ready.

[0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-03/msg03325.html

Signed-off-by: Fiona Ebner 
---
 ...ial-deadlock-when-draining-during-tr.patch | 28 +--
 1 file changed, 25 insertions(+), 3 deletions(-)

diff --git a/debian/patches/extra/0011-ide-avoid-potential-deadlock-when-draining-during-tr.patch b/debian/patches/extra/0011-ide-avoid-potential-deadlock-when-draining-during-tr.patch
index 77d7eee..8ce9c79 100644
--- a/debian/patches/extra/0011-ide-avoid-potential-deadlock-when-draining-during-tr.patch
+++ b/debian/patches/extra/0011-ide-avoid-potential-deadlock-when-draining-during-tr.patch
@@ -37,15 +37,25 @@ Thus, even after moving the blk_inc_in_flight to above the
 replay_bh_schedule_event call, the invariant "ide_issue_trim_cb
 returns with an accompanying in-flight count" is still satisfied.
 
+However, the issue 7e5cdb345f fixed for canceling resurfaces, because
+ide_cancel_dma_sync assumes that it just needs to drain once. But now
+the in_flight count is not consistently > 0 during the trim operation.
+So, change it to drain until !s->bus->dma->aiocb, which means that the
+operation finished (s->bus->dma->aiocb is cleared by ide_set_inactive
+via the ide_dma_cb when the end of the transfer is reached).
+
+Discussion here:
+https://lists.nongnu.org/archive/html/qemu-devel/2023-03/msg02506.html
+
 Fixes: 7e5cdb345f ("ide: Increment BB in-flight counter for TRIM BH")
 Suggested-by: Hanna Czenczek 
 Signed-off-by: Fiona Ebner 
 ---
- hw/ide/core.c | 7 +++
- 1 file changed, 3 insertions(+), 4 deletions(-)
+ hw/ide/core.c | 12 ++--
+ 1 file changed, 6 insertions(+), 6 deletions(-)
 
 diff --git a/hw/ide/core.c b/hw/ide/core.c
-index 39afdc0006..6474522bc9 100644
+index 39afdc0006..b67c1885a8 100644
 --- a/hw/ide/core.c
 +++ b/hw/ide/core.c
 @@ -443,7 +443,7 @@ static void ide_trim_bh_cb(void *opaque)
@@ -76,3 +86,15 @@ index 39afdc0006..6474522bc9 100644
  iocb = blk_aio_get(&trim_aiocb_info, s->blk, cb, cb_opaque);
  iocb->s = s;
  iocb->bh = qemu_bh_new(ide_trim_bh_cb, iocb);
+@@ -739,8 +738,9 @@ void ide_cancel_dma_sync(IDEState *s)
+  */
+ if (s->bus->dma->aiocb) {
+ trace_ide_cancel_dma_sync_remaining();
+-blk_drain(s->blk);
+-assert(s->bus->dma->aiocb == NULL);
++while (s->bus->dma->aiocb) {
++blk_drain(s->blk);
++}
+ }
+ }
+ 
-- 
2.30.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu 2/2] add more stable fixes

2023-03-09 Thread Fiona Ebner
The patches were selected from the recent "Patch Round-up for stable
7.2.1" [0]. Those that should be relevant for our supported use-cases
(and the upcoming nvme use-case) were picked. Most of the patches
added now have not been submitted to qemu-stable before.

The follow-up for the virtio-rng-pci migration fix will break
migration between versions with the fix and without the fix when a
virtio-pci-rng(-non)-transitional device is used. Luckily Proxmox VE
only uses the virtio-pci-rng device, and this was fixed by
0006-virtio-rng-pci-fix-migration-compat-for-vectors.patch which was
applied before any public version of Proxmox VE's QEMU 7.2 package was
released.

[0]: https://lists.nongnu.org/archive/html/qemu-stable/2023-03/msg00010.html
[1]: https://bugzilla.redhat.com/show_bug.cgi?id=2162569

Signed-off-by: Fiona Ebner 
---
 ...ing-endian-conversions-for-doorbell-.patch |  67 +
 ...fix-field-corruption-in-type-4-table.patch |  50 +++
 ...ix-transitional-migration-compat-for.patch |  35 +
 ...er-hpet-Fix-expiration-time-overflow.patch |  80 +++
 ...vdpa-stop-all-svq-on-device-deletion.patch |  71 ++
 ...tential-use-of-an-uninitialized-vari.patch | 132 ++
 ...ket-set-s-listener-NULL-in-char_sock.patch |  70 ++
 ...il-MAP-notifier-without-caching-mode.patch |  41 ++
 ...-fail-DEVIOTLB_UNMAP-without-dt-mode.patch |  50 +++
 debian/patches/series |   9 ++
 10 files changed, 605 insertions(+)
 create mode 100644 debian/patches/extra/0012-hw-nvme-fix-missing-endian-conversions-for-doorbell-.patch
 create mode 100644 debian/patches/extra/0013-hw-smbios-fix-field-corruption-in-type-4-table.patch
 create mode 100644 debian/patches/extra/0014-virtio-rng-pci-fix-transitional-migration-compat-for.patch
 create mode 100644 debian/patches/extra/0015-hw-timer-hpet-Fix-expiration-time-overflow.patch
 create mode 100644 debian/patches/extra/0016-vdpa-stop-all-svq-on-device-deletion.patch
 create mode 100644 debian/patches/extra/0017-vhost-avoid-a-potential-use-of-an-uninitialized-vari.patch
 create mode 100644 debian/patches/extra/0018-chardev-char-socket-set-s-listener-NULL-in-char_sock.patch
 create mode 100644 debian/patches/extra/0019-intel-iommu-fail-MAP-notifier-without-caching-mode.patch
 create mode 100644 debian/patches/extra/0020-intel-iommu-fail-DEVIOTLB_UNMAP-without-dt-mode.patch

diff --git a/debian/patches/extra/0012-hw-nvme-fix-missing-endian-conversions-for-doorbell-.patch b/debian/patches/extra/0012-hw-nvme-fix-missing-endian-conversions-for-doorbell-.patch
new file mode 100644
index 000..aa9d0b0
--- /dev/null
+++ b/debian/patches/extra/0012-hw-nvme-fix-missing-endian-conversions-for-doorbell-.patch
@@ -0,0 +1,67 @@
+From  Mon Sep 17 00:00:00 2001
+From: Klaus Jensen 
+Date: Wed, 8 Mar 2023 19:57:12 +0300
+Subject: [PATCH] hw/nvme: fix missing endian conversions for doorbell buffers
+
+The eventidx and doorbell value are not handling endianness correctly.
+Fix this.
+
+Fixes: 3f7fe8de3d49 ("hw/nvme: Implement shadow doorbell buffer support")
+Cc: qemu-sta...@nongnu.org
+Reported-by: Guenter Roeck 
+Reviewed-by: Keith Busch 
+Signed-off-by: Klaus Jensen 
+(cherry picked from commit 2fda0726e5149e032acfa5fe442db56cd6433c4c)
+Signed-off-by: Michael Tokarev 
+Conflicts: hw/nvme/ctrl.c
+(picked up from qemu-stable mailing list)
+Signed-off-by: Fiona Ebner 
+---
+ hw/nvme/ctrl.c | 22 --
+ 1 file changed, 16 insertions(+), 6 deletions(-)
+
+diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
+index e54276dc1d..98d8e34109 100644
+--- a/hw/nvme/ctrl.c
++++ b/hw/nvme/ctrl.c
+@@ -1333,8 +1333,12 @@ static inline void nvme_blk_write(BlockBackend *blk, int64_t offset,
+ 
+ static void nvme_update_cq_head(NvmeCQueue *cq)
+ {
+-pci_dma_read(&cq->ctrl->parent_obj, cq->db_addr, &cq->head,
+-sizeof(cq->head));
++uint32_t v;
++
++pci_dma_read(&cq->ctrl->parent_obj, cq->db_addr, &v, sizeof(v));
++
++cq->head = le32_to_cpu(v);
++
+ trace_pci_nvme_shadow_doorbell_cq(cq->cqid, cq->head);
+ }
+ 
+@@ -6141,15 +6145,21 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req)
+ 
+ static void nvme_update_sq_eventidx(const NvmeSQueue *sq)
+ {
+-pci_dma_write(&sq->ctrl->parent_obj, sq->ei_addr, &sq->tail,
+-  sizeof(sq->tail));
++uint32_t v = cpu_to_le32(sq->tail);
++
++pci_dma_write(&sq->ctrl->parent_obj, sq->ei_addr, &v, sizeof(v));
++
+ trace_pci_nvme_eventidx_sq(sq->sqid, sq->tail);
+ }
+ 
+ static void nvme_update_sq_tail(NvmeSQueue *sq)
+ {
+-pci_dma_read(&sq->ctrl->parent_obj, sq->db_addr, &sq->tail,
+- sizeof(sq->tail));
++uint32_t v;
++
++pci_dma_read(&sq->ctrl->parent_obj, sq->db_addr, &v, sizeof(v));
++
++sq->tail = le32_to_cpu(v);
++
+ trace_pci_nvme_shadow_doorbell_sq(sq->sqid, sq->tail);
+ }
+ 
diff --git 
a/debian/patches/extra/0013-hw-smbios-fix-field-co

[pve-devel] [PATCH pve-firewall] Fix #4550: host options: add nf_conntrack_helpers

2023-03-09 Thread Alexandre Derumier
Kernel 6.1 removed automatic loading of conntrack helpers.
This had been deprecated for multiple years.

We simply need to add rules in PREROUTING to load these helpers.

Supported protocols:
- amanda
- ftp
- irc (ipv4 only)
- netbios-ns (ipv4 only)
- pptp (ipv4 only)
- sane
- sip
- snmp (ipv4 only)
- tftp
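
Usage would then look like this in the host firewall configuration (a
sketch, assuming the usual /etc/pve/nodes/<node>/host.fw location):

  [OPTIONS]
  nf_conntrack_helpers: ftp,tftp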

Signed-off-by: Alexandre Derumier 
---
 src/PVE/Firewall.pm | 45 -
 1 file changed, 44 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 4924d51..87e44e0 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -578,6 +578,18 @@ my $pve_fw_macros = {
 ],
 };
 
+my $pve_fw_helpers = {
+'amanda' => { proto => 'udp', dport => '10080', 'v4' => 1, 'v6' => 1 },
+'ftp' => { proto => 'tcp', dport => '21', 'v4' => 1, 'v6' => 1},
+'irc' => { proto => 'tcp', dport => '6667', 'v4' => 1 },
+'netbios-ns' => { proto => 'udp', dport => '137', 'v4' => 1 },
+'pptp' => { proto => 'tcp', dport => '1723', 'v4' => 1, },
+'sane' => { proto => 'tcp', dport => '6566', 'v4' => 1, 'v6' => 1 },
+'sip' => { proto => 'udp', dport => '5060', 'v4' => 1, 'v6' => 1 },
+'snmp' => { proto => 'udp', dport => '161', 'v4' => 1 },
+'tftp' => { proto => 'udp', dport => '69', 'v4' => 1, 'v6' => 1},
+};
+
 my $pve_fw_parsed_macros;
 my $pve_fw_macro_descr;
 my $pve_fw_macro_ipversion = {};
@@ -1125,6 +1137,19 @@ sub parse_port_name_number_or_range {
 return (scalar(@elements) > 1);
 }
 
+PVE::JSONSchema::register_format('pve-fw-conntrack-helper', \&pve_fw_verify_conntrack_helper);
+sub pve_fw_verify_conntrack_helper {
+   my ($list) = @_;
+
+   my @helpers = split(/,/, $list);
+   die "extraneous commas in list\n" if $list ne join(',', @helpers);
+   foreach my $helper (@helpers) {
+   die "unknown helper $helper" if !$pve_fw_helpers->{$helper};
+   }
+
+   return $list;
+}
+
 PVE::JSONSchema::register_format('pve-fw-sport-spec', \&pve_fw_verify_sport_spec);
 sub pve_fw_verify_sport_spec {
my ($portstr) = @_;
@@ -1344,6 +1369,13 @@ our $host_option_properties = {
default => 0,
optional => 1,
 },
+nf_conntrack_helpers => {
+   type => 'string', format => 'pve-fw-conntrack-helper',
+   description => "Enable conntrack helpers for specific protocols. ".
+	"Supported protocols: amanda, ftp, irc, netbios-ns, pptp, sane, sip, snmp, tftp",
+   default => '',
+   optional => 1,
+},
 protection_synflood => {
description => "Enable synflood protection",
type => 'boolean',
@@ -2879,6 +2911,10 @@ sub parse_hostfw_option {
 } elsif ($line =~ m/^(log_level_in|log_level_out|tcp_flags_log_level|smurf_log_level):\s*(($loglevels)\s*)?$/i) {
$opt = lc($1);
$value = $2 ? lc($3) : '';
+} elsif ($line =~ m/^(nf_conntrack_helpers):\s*(((\S+)[,]?)+)\s*$/i) {
+   $opt = lc($1);
+   $value = lc($2);
+   pve_fw_verify_conntrack_helper($value);
 } elsif ($line =~ m/^(nf_conntrack_max|nf_conntrack_tcp_timeout_established|nf_conntrack_tcp_timeout_syn_recv|protection_synflood_rate|protection_synflood_burst|protection_limit):\s*(\d+)\s*$/i) {
$opt = lc($1);
$value = int($2);
@@ -3729,6 +3765,9 @@ sub compile_iptables_raw {
 
 my $hostfw_options = $hostfw_conf->{options} || {};
 my $protection_synflood = $hostfw_options->{protection_synflood} || 0;
+my $conntrack_helpers = $hostfw_options->{nf_conntrack_helpers} || '';
+
+ruleset_create_chain($ruleset, "PVEFW-PREROUTING") if $protection_synflood != 0 || $conntrack_helpers ne '';
 
 if($protection_synflood) {
 
@@ -3739,10 +3778,14 @@ sub compile_iptables_raw {
$protection_synflood_expire = $protection_synflood_expire * 1000;
my $protection_synflood_mask = $ipversion == 4 ? 32 : 64;
 
-   ruleset_create_chain($ruleset, "PVEFW-PREROUTING");
 	ruleset_addrule($ruleset, "PVEFW-PREROUTING", "-p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -m hashlimit --hashlimit-above $protection_synflood_rate/sec --hashlimit-burst $protection_synflood_burst --hashlimit-mode srcip --hashlimit-name syn --hashlimit-htable-size 2097152 --hashlimit-srcmask $protection_synflood_mask --hashlimit-htable-expire $protection_synflood_expire", "-j DROP");
 }
 
+foreach my $conntrack_helper (split(/,/, $conntrack_helpers)) {
+   my $helper = $pve_fw_helpers->{$conntrack_helper};
+	ruleset_addrule($ruleset, "PVEFW-PREROUTING", "-p $helper->{proto} -m $helper->{proto} --dport $helper->{dport} -j CT", "--helper $conntrack_helper") if $helper && $helper->{"v$ipversion"};
+}
+
 return $ruleset;
 }
 
-- 
2.30.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel