Re: [pve-devel] [PATCH storage 1/1] iscsi: allow to configure multiple portals

2023-10-16 Thread Yuri Konotopov via pve-devel
--- Begin Message ---

On 10.10.2023 16:54, Fiona Ebner wrote:

Hi,


Hi, Fiona!

Thanks for your review! I will send an updated patch soon.



thank you for the contribution! Please prepend the commit title with
"fix #254: ".

On 16.08.23 at 13:56, ykonoto...@gnome.org wrote:

From: Yuri Konotopov 

Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=254

To accept contributions, we need your Signed-off-by trailer here and you
need to agree to the Harmony CLA (assuming you haven't sent it to us
already):
https://pve.proxmox.com/wiki/Developer_Documentation#Software_License_and_Copyright


Got it, will add Signed-off-by trailer.

As for the CLA - I signed it before sending this patch.


--
Best regards, Yuri Konotopov


--- End Message ---
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH installer] Correct DNS IP check on management interface setup

2023-10-16 Thread Filip Schauer
Signed-off-by: Filip Schauer 
---
 proxinstall | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/proxinstall b/proxinstall
index d5b2565..764187f 100755
--- a/proxinstall
+++ b/proxinstall
@@ -481,8 +481,8 @@ sub create_ipconf_view {
$text = $ipconf_entry_dns->get_text();
my ($dns_ip, $dns_ip_version) = parse_ip_address($text);
if (!defined($dns_ip) || $dns_ip_version != $ipversion) {
-   my $msg = defined($gateway_ip)
-   ? "DNS and host IP version must not differ 
(IPv$gateway_ip_version != IPv$ipversion)."
+   my $msg = defined($dns_ip)
+   ? "DNS and host IP version must not differ (IPv$dns_ip_version 
!= IPv$ipversion)."
: "DNS IP is not valid.";
Proxmox::UI::message($msg);
$ipconf_entry_dns->grab_focus();
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [RFC] towards automated integration testing

2023-10-16 Thread Stefan Hanreich



On 10/13/23 15:33, Lukas Wagner wrote:

> - Additionally, it should be easy to run these integration tests locally
>   on a developer's workstation in order to write new test cases, as well
>   as troubleshooting and debugging existing test cases. The local
>   test environment should match the one being used for automated testing
>   as closely as possible
This would also include sharing those fixture templates somewhere; do
you already have an idea of how to accomplish this? PBS sounds like a
good option for this if I'm not missing something.

> As a main mode of operation, the Systems under Test (SUTs)
> will be virtualized on top of a Proxmox VE node.
>
> This has the following benefits:
> - it is easy to create various test setups (fixtures), including but not
>   limited to single Proxmox VE nodes, clusters, Backup servers and
>   auxiliary services (e.g. an LDAP server for testing LDAP
>   authentication)
I can imagine having to set up VMs inside the test setup as well for
doing various tests. Doing this manually every time could be quite
cumbersome / hard to automate. Do you have a mechanism in mind to deploy
VMs inside the test system as well? Again, PBS could be an interesting
option for this imo.

> In theory, the test runner would also be able to drive tests on real
> hardware, but of course with some limitations (harder to have a
> predictable, reproducible environment, etc.)

Maybe utilizing Aaron's installer for setting up those test systems
could at least produce somewhat identical setups? Although it is really
hard managing systems with different storage types, network cards, ... .

I've seen GitLab using tags for runners that specify certain
capabilities of systems. Maybe we could also introduce something like
that here for different bare-metal systems? E.g. a test case specifies
it needs a system with tag `ZFS` and then you can run / skip the
respective test case on that system. Managing those tags can introduce
quite a lot of churn though, so I'm not sure if this would be a good idea.

> The test script is executed by the test runner; the test outcome is
> determined by the exit code of the script. Test scripts could be written
Are you considering capturing output as well? That would make sense when
using assertions at least, so in case of failures developers have a
starting point for debugging.

Would it make sense to allow specifying an expected exit code for tests
that actually should fail - or do you consider this something that
should be handled by the test script?



I've refrained from talking about the toml files too much since it's
probably too early to say something about that, but they look good so
far from my pov.

In general this sounds like quite the exciting feature and the RFC looks
very promising already.

Kind Regards
Stefan


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH v2 installer] fix #4869: Make management interface selection more verbose

2023-10-16 Thread Filip Schauer



On 13/10/2023 12:36, Christoph Heiss wrote:

See inline comments. Also, `cargo fmt` please :^)

On Thu, Oct 12, 2023 at 03:02:08PM +0200, Filip Schauer wrote:

[..]
diff --git a/proxinstall b/proxinstall
index d5b2565..51170cd 100755
--- a/proxinstall
+++ b/proxinstall
@@ -347,7 +347,9 @@ sub create_ipconf_view {

  my $get_device_desc = sub {
my $iface = shift;
-   return "$iface->{name} - $iface->{mac} ($iface->{driver})";
+   my $iface_state_symbol = "\N{U+25EF}";
+   $iface_state_symbol = "\N{U+1F7E2}" if ($iface->{state} eq "UP");
+   return "$iface_state_symbol $iface->{name} - $iface->{mac} 
($iface->{driver})";

^ Here we have e.g. " eno1 - 12:34:56:78:9a:bc (r8169)".


  };

  my $run_env = Proxmox::Install::RunEnv::get();
diff --git a/proxmox-tui-installer/src/main.rs b/proxmox-tui-installer/src/main.rs
index 3f01713..7e46f33 100644
--- a/proxmox-tui-installer/src/main.rs
+++ b/proxmox-tui-installer/src/main.rs
@@ -548,20 +548,28 @@ fn password_dialog(siv: &mut Cursive) -> InstallerView {
  fn network_dialog(siv: &mut Cursive) -> InstallerView {
  let state = siv.user_data::().unwrap();
  let options = &state.options.network;
-let ifnames = state.runtime_info.network.interfaces.keys();
+
+let mut management_interface_select_view = SelectView::new().popup();
+for (key, value) in state.runtime_info.network.interfaces.iter() {
+let interface_state_symbol = if value.state == "UP" {
+"\x1b[1;92m\u{25CF}"
+} else {
+"\x1b[1;31m\u{25CF}"
+};

One thing I noticed while testing the TUI installer is that setting the
color here fscks up the colors of the other entries.
Colors/effects applied using ANSI escape sequences do not reset
automatically, so this must be done manually after the unicode symbol,
using "\x1b[1;30m".

E.g. with multiple NICs, NICs that are UP are sometimes displayed
entirely in green, non-UP ones entirely in red. And scrolling through
the list changes that completely.

[ To properly display colors in Cursive, you'd normally use a
   `StyledString`, but - after quickly trying it out - seems broken for
   SelectView (not sure why, I might investigate that some time). ]

So although a bit hacky, seems to work here fine (still seems a bit
broken under VGA for me, but that is just some random artifact it seems,
as 30 is the correct color code in any case).


+
+let interface_label = format!("{0} - {1} {2}", value.name, value.mac, interface_state_symbol);

^ And here we have "eno1 - 12:34:56:78:9a:bc ".

Just for the sake of consistency they should be the same between GUI and
TUI.
I would propose "  - ", and just drop the driver part
completely in the GUI. The latter does not provide any real value IMO,
at least none I could come up with.
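
For the GTK side that could then roughly look like this (just a sketch of
the proposed format, not tested; state field and symbols taken from the
hunk above, driver part dropped):

    my $get_device_desc = sub {
        my $iface = shift;
        my $state_symbol = $iface->{state} eq "UP" ? "\N{U+1F7E2}" : "\N{U+25EF}";
        return "$state_symbol $iface->{name} - $iface->{mac}";
    };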



In the TUI installer, placing the symbol at the front changes the color
of the remaining text that comes after it. Resetting the color with
\x1b[1;30m is also not an option since it sets the text color to gray,
which makes it inconsistent with the selectable elements and hard to
read.



+management_interface_select_view.add_item(interface_label, key.to_string());
+}

  let inner = FormView::new()
  .child(
  "Management interface",
-SelectView::new()
-.popup()
-.with_all_str(ifnames.clone())
-.selected(
-ifnames
-.clone()
-.position(|ifname| ifname == &options.ifname)
-.unwrap_or_default(),
-),
+management_interface_select_view.selected(
+state.runtime_info.network.interfaces.keys()
+.clone()
+.position(|ifname| ifname == &options.ifname)
+.unwrap_or_default(),
+),
  )
  .child(
  "Hostname (FQDN)",
diff --git a/proxmox-tui-installer/src/setup.rs b/proxmox-tui-installer/src/setup.rs
index 942e319..dbff3da 100644
--- a/proxmox-tui-installer/src/setup.rs
+++ b/proxmox-tui-installer/src/setup.rs
@@ -443,6 +443,8 @@ pub struct Interface {

  pub index: usize,

+pub state: String,

This can be an enum for cleanliness, as this field is well defined. All
possible states recognized (and thus possibly emitted) by iproute2 are
defined here:
https://git.kernel.org/pub/scm/network/iproute2/iproute2.git/tree/ip/ipaddress.c?h=v6.5.0#n119


+
  pub mac: String,

  #[serde(default)]
--
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel





Re: [pve-devel] [PATCH qemu-server] Fix races with suspended VMs that can wake up

2023-10-16 Thread Fiona Ebner
I'd add a bit to the commit message, but changes look good to me:

On 13.10.23 at 15:50, Filip Schauer wrote:
> Fix races with ACPI-suspended VMs which could wake up during migration
> or during a suspend-mode backup.
> 
> Revert prevention, of ACPI-suspended VMs automatically resuming after
> migration, introduced by 7ba974a6828d. The commit introduced a potential
> problem that causes a suspended VM that wakes up during migration to
> remain paused after the migration finishes.
> 

This can be fixed once QEMU preserves the 'suspended' runstate during
migration (current patch on the qemu-devel list [0]) by checking for
the 'suspended' runstate on the target after migration.

> Furthermore the commit increased the race window during the preparation
> of a suspend-mode backup, when a suspended VM wakes up between the
> vm_is_paused check in PVE::VZDump::QemuServer::prepare and
> PVE::VZDump::QemuServer::qga_fs_freeze. This causes the code to skip
> fs-freeze even if the VM has woken up, potentially leaving the file
> system in an inconsistent state.
> 
> To prevent this, do not treat the suspended runstate as paused when
> migrating or archiving a VM.
> 

[0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-08/msg05260.html

> Signed-off-by: Filip Schauer 

Reviewed-by: Fiona Ebner 


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-storage] fix #1611: implement import of base-images for LVM-thin Storage

2023-10-16 Thread Hannes Duerr
If a base-image is to be migrated to an lvm-thin storage, a new
vm-image is allocated on the target side, then the data is written
and afterwards the image is converted to a base-image.


Signed-off-by: Hannes Duerr 
---

In the bugtracker, Wolfgang suggested two different approaches. In my
opinion this approach is the cleaner one, but please let me know what
you think.

 src/PVE/Storage/LvmThinPlugin.pm | 65 
 1 file changed, 65 insertions(+)

diff --git a/src/PVE/Storage/LvmThinPlugin.pm b/src/PVE/Storage/LvmThinPlugin.pm
index 1d2e37c..4579d47 100644
--- a/src/PVE/Storage/LvmThinPlugin.pm
+++ b/src/PVE/Storage/LvmThinPlugin.pm
@@ -383,6 +383,71 @@ sub volume_has_feature {
 return undef;
 }
 
+sub volume_import {
+my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot, $base_snapshot, $with_snapshots, $allow_rename) = @_;
+die "volume import format $format not available for $class\n"
+   if $format ne 'raw+size';
+die "cannot import volumes together with their snapshots in $class\n"
+   if $with_snapshots;
+die "cannot import an incremental stream in $class\n" if 
defined($base_snapshot);
+
+my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $file_format) =
+   $class->parse_volname($volname);
+die "cannot import format $format into a file of format $file_format\n"
+   if $file_format ne 'raw';
+
+my $vg = $scfg->{vgname};
+my $lvs = PVE::Storage::LVMPlugin::lvm_list_volumes($vg);
+if ($lvs->{$vg}->{$volname}) {
+   die "volume $vg/$volname already exists\n" if !$allow_rename;
+   warn "volume $vg/$volname already exists - importing with a different 
name\n";
+   $name = undef;
+}
+
+my ($size) = PVE::Storage::Plugin::read_common_header($fh);
+$size = int($size/1024);
+
+# Request new vm-name which is needed for the import
+if ($isBase) {
+   my $newvmname = $class->find_free_diskname($storeid, $scfg, $vmid);
+   $name = $newvmname;
+   $volname = $newvmname;
+}
+
+eval {
+   my $allocname = $class->alloc_image($storeid, $scfg, $vmid, 'raw', $name, $size);
+   my $oldname = $volname;
+   $volname = $allocname;
+   if (defined($name) && $allocname ne $oldname) {
+   die "internal error: unexpected allocated name: '$allocname' != 
'$oldname'\n";
+   }
+   my $file = $class->path($scfg, $volname, $storeid)
+   or die "internal error: failed to get path to newly allocated 
volume $volname\n";
+
+   $class->volume_import_write($fh, $file);
+};
+if (my $err = $@) {
+   my $cleanup_worker = eval { $class->free_image($storeid, $scfg, $volname, 0) };
+   warn $@ if $@;
+
+   if ($cleanup_worker) {
+   my $rpcenv = PVE::RPCEnvironment::get();
+   my $authuser = $rpcenv->get_user();
+
+   $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
+   }
+
+   die $err;
+}
+
+if ($isBase) {
+   my $newbasename = $class->create_base($storeid, $scfg, $volname);
+   $volname=$newbasename;
+}
+
+return "$storeid:$volname";
+}
+
 # used in LVMPlugin->volume_import
 sub volume_import_write {
 my ($class, $input_fh, $output_file) = @_;
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH-SERIES qemu-server] fix #4522: vncproxy: always set environment variable for ticket

2023-10-16 Thread Fiona Ebner
Since commit 2dc0eb61 ("qm: assume correct VNC setup in 'vncproxy',
disallow passwordless"), 'qm vncproxy' will just fail when the
LC_PVE_TICKET environment variable is not set. Fix the vncproxy API
call, which previously would only set the variable in the presence of the
'websocket' parameter.


Fiona Ebner (2):
  api: vncproxy: update description of websocket parameter
  fix #4522: api: vncproxy: also set environment variable for ticket
without websocket

 PVE/API2/Qemu.pm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 1/2] api: vncproxy: update description of websocket parameter

2023-10-16 Thread Fiona Ebner
Since commit 3e7567e0 ("do not use novnc wsproxy"), the websocket
upgrade is done via the HTTP server.

Signed-off-by: Fiona Ebner 
---
 PVE/API2/Qemu.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c8a87f3f..a31ddb81 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2267,7 +2267,7 @@ __PACKAGE__->register_method({
websocket => {
optional => 1,
type => 'boolean',
-   description => "starts websockify instead of vncproxy",
+   description => "Prepare for websocket upgrade.",
},
'generate-password' => {
optional => 1,
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 2/2] fix #4522: api: vncproxy: also set environment variable for ticket without websocket

2023-10-16 Thread Fiona Ebner
Since commit 2dc0eb61 ("qm: assume correct VNC setup in 'vncproxy',
disallow passwordless"), 'qm vncproxy' will just fail when the
LC_PVE_TICKET environment variable is not set. Since it is not only
required in combination with websocket, drop that conditional.

For the non-serial case, this was the last remaining effect of the
'websocket' parameter, so update the parameter description.

Signed-off-by: Fiona Ebner 
---
 PVE/API2/Qemu.pm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index a31ddb81..9877ce24 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2267,7 +2267,8 @@ __PACKAGE__->register_method({
websocket => {
optional => 1,
type => 'boolean',
-   description => "Prepare for websocket upgrade.",
+   description => "Prepare for websocket upgrade (only required 
when using "
+   ."serial terminal, otherwise upgrade is always possible).",
},
'generate-password' => {
optional => 1,
@@ -2365,7 +2366,7 @@ __PACKAGE__->register_method({
 
} else {
 
-   $ENV{LC_PVE_TICKET} = $password if $websocket; # set ticket with "qm vncproxy"
+   $ENV{LC_PVE_TICKET} = $password; # set ticket with "qm vncproxy"
 
$cmd = [@$remcmd, "/usr/sbin/qm", 'vncproxy', $vmid];
 
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [RFC] towards automated integration testing

2023-10-16 Thread Thomas Lamprecht
A few things, most of which we talked off-list already anyway.

We should eye if we can integrate existing regression testing in there
too, i.e.:

- The qemu autotest that Stefan Reiter started and Fiona still uses;
  here we should drop the in-git tracked backup that the test VM is
  restored from (replace it with something like a vmdb2 [0] managed Debian
  image that gets generated on demand), replace some hard-coded
  configs with a simple config and make it public.

[0]: https://vmdb2.liw.fi/

- The Selenium-based end-to-end tests which we also use to generate most
  screenshots (they can run headless too). Here we also need a few
  clean-ups, but not that many, and to make the repo public.

On 13/10/2023 at 15:33, Lukas Wagner wrote:
> I am currently doing the groundwork that should eventually enable us
> to write automated integration tests for our products.
> 
> Part of that endeavor will be to write a custom test runner, which will
>   - setup a specified test environment
>   - execute test cases in that environment

This should be decoupled from all else, so that I can run it on any
existing installation, bare-metal or not. This allows devs using it in
their existing setups with almost no change required.

We can then also add it easily in our existing buildbot instance
relatively easily, so it would be worth doing so even if we might
deprecate Buildbot in the future (for what little it can do, it could
be simpler).

>   - create some sort of test report

As Stefan mentioned, test-output can be good to have. Our buildbot
instance provides that, and while I don't look at them in 99% of the
builds, when I need to, it's worth *a lot*.

> 
> ## Introduction
> 
> The goal is to establish a framework that allows us to write
> automated integration tests for our products.
> These tests are intended to run in the following situations:
> - When new packages are uploaded to the staging repos (by triggering
>   a test run from repoman, or similar)

*debian repos, as we could also trigger some when git commits are
pushed, just like we do now through Buildbot. Doing so is IMO nice as it
will catch issues before a package was bumped, but is still quite a bit
simpler to implement than an "apply patch from list to git repos" thing
from the next point, but could still act as a preparation for that.

> - Later, this tests could also be run when patch series are posted to
>   our mailing lists. This requires a  mechanism to automatically
>   discover, fetch and build patches, which will be a separate,
>   follow-up project.

> 
> As a main mode of operation, the Systems under Test (SUTs)
> will be virtualized on top of a Proxmox VE node.

For the fully-automated test system this can be OK as primary mode, as
it indeed makes things like going back to an older software state much
easier.

But, if we decouple the test harness and running them from that more
automated system, we can also run the harness periodically on our
bare-metal test servers.

> ## Terminology
> - Template: A backup/VM template that can be instantiated by the test
>   runner

I.e., the base of the test host? I'd call this test-host; template is a
bit too overloaded/generic and might focus too much on the virtual test
environment.

Or is this some part that takes place in the test, i.e., a
generalization of product to test and supplementary tool/app that helps
on that test?

Hmm, could work out ok, and we should be able to specialize stuff
relatively easily later too, if wanted.

> - Test Case: Some script/executable executed by the test runner, success
>   is determined via exit code.
> - Fixture: Description of a test setup (e.g. which templates are needed,
>   additional setup steps to run, etc.)
> 
> ## Approach
> Test writers write template, fixture, test case definition in
> declarative configuration files (most likely TOML). The test case
> references a test executable/script, which performs the actual test.
> 
> The test script is executed by the test runner; the test outcome is
> determined by the exit code of the script. Test scripts could be written
> in any language, e.g. they could be Perl scripts that use the official
> `libpve-apiclient-perl` to test-drive the SUTs.
> If we notice any emerging patterns, we could write additional helper
> libs that reduce the amount of boilerplate in test scripts.
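
Just to illustrate the intended simplicity here: such a Perl test script
could be as small as the following rough sketch (PVE::APIClient::LWP comes
from libpve-apiclient-perl; the exact constructor options and path format
are from memory and would need checking, and the SUT_* variables are only
made up for the example):

    #!/usr/bin/perl
    use strict;
    use warnings;

    use PVE::APIClient::LWP;

    # connection details would be provided by the test runner/fixture
    my $conn = PVE::APIClient::LWP->new(
        host => $ENV{SUT_HOST},
        username => 'root@pam',
        password => $ENV{SUT_PASSWORD},
    );

    # trivial check against the API of the system under test
    # (path format from memory - might or might not need an 'api2/json' prefix)
    my $version = $conn->get('/version', {});
    die "no version returned\n" if !defined($version->{version});

    exit(0); # exit code 0 => test passed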
> 
> In essence, the test runner would do the following:
> - Group testcases by fixture
> - For every fixture:
> - Instantiate needed templates from their backup snapshot

Should be optional, possibly a default-on boolean option that conveys

> - Start VMs

Same.

> - Run any specified `setup-hooks` (update system, deploy packages,
> etc.)

Should be as idempotent as possible.

> - Take a snapshot, including RAM

Should be optional (as in, don't care if it cannot be done, e.g., on
bare metal).

> - For every testcase using that fixture:
> - Run testcase (execute test executable, check exit code)
> - Rollba

Re: [pve-devel] [PATCH v2 storage 1/1] fix #254: iscsi: allow to configure multiple portals

2023-10-16 Thread Dominik Csapak

hi,

a few high level comments:

first:

is the iscsi.cfg really necessary? couldn't we just lookup on demand like we do 
now?

*if* we want to cache that list, we should only do this on a per node level, 
since
there is no guarantee that this is the same for all nodes.
(e.g. use /var/run//scsi.cache)

but really i'm in favor of dropping that altogether (unless you can
provide a good argument why it's necessary) - this would remove a lot of
code from this patch

second:

i would favor an approach that uses an 'array' instead of the '-list' types.
i'm not completely sure if one can just have that as a drop in replacement
with old configs/api calls still working. if not, we could introduce
a new 'portals' api parameter/storage config that is an array which could
take precedence over the old 'portal' one. (that we could drop with the
next major release) in that case we could even automatically convert from
portal -> portals if we detect that in the api update

and a few (basic) comments inline:

On 10/15/23 20:32, ykonoto...@gnome.org wrote:

From: Yuri Konotopov 



it would be great to have a real commit message here. best would be a short 
description
of what the patch wants to achieve, and maybe some more tricky implementation 
detail
that is not obvious (most of what you wrote in the cover letter would be fine, 
sans
the changelog + mention of 'Here is v2 patch'


Signed-off-by: Yuri Konotopov 
---
  PVE/API2/Storage/Scan.pm   |   2 +-
  PVE/Storage.pm |  10 +-
  PVE/Storage/ISCSIPlugin.pm | 207 -
  3 files changed, 187 insertions(+), 32 deletions(-)

diff --git a/PVE/API2/Storage/Scan.pm b/PVE/API2/Storage/Scan.pm
index d7a8743..1f9773c 100644
--- a/PVE/API2/Storage/Scan.pm
+++ b/PVE/API2/Storage/Scan.pm
@@ -305,7 +305,7 @@ __PACKAGE__->register_method({
node => get_standard_option('pve-node'),
portal => {
description => "The iSCSI portal (IP or DNS name with optional 
port).",
-   type => 'string', format => 'pve-storage-portal-dns',
+   type => 'string', format => 'pve-storage-portal-dns-list',
},
},
  },
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index cec3996..0043507 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1428,12 +1428,14 @@ sub resolv_portal {
  sub scan_iscsi {
  my ($portal_in) = @_;
  
-my $portal;

-if (!($portal = resolv_portal($portal_in))) {
-   die "unable to parse/resolve portal address '${portal_in}'\n";
+my @portals = PVE::Tools::split_list($portal_in);


not a big deal, but we usually pack that into a reference immediately:

my $portals = [ <...>::split_list(...) ];

then you can do
for my $portal (@$portals)
or
for my $portal ($portals->@*)

and don't have to create a reference later on when passing to iscsi_discovery
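
i.e. the whole helper could then look roughly like this (untested sketch,
just restating the hunk with the reference style from above):

    sub scan_iscsi {
	my ($portal_in) = @_;

	my $portals = [ PVE::Tools::split_list($portal_in) ];
	for my $portal (@$portals) {
	    die "unable to parse/resolve portal address '${portal}'\n"
		if !resolv_portal($portal);
	}

	return PVE::Storage::ISCSIPlugin::iscsi_discovery($portals);
    }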


+for my $portal (@portals) {
+   if (!resolv_portal($portal)) {
+   die "unable to parse/resolve portal address '${portal}'\n";
+   }
  }
  
-return PVE::Storage::ISCSIPlugin::iscsi_discovery($portal);

+return PVE::Storage::ISCSIPlugin::iscsi_discovery(\@portals);
  }
  
  sub storage_default_format {

diff --git a/PVE/Storage/ISCSIPlugin.pm b/PVE/Storage/ISCSIPlugin.pm
index a79fcb0..6f4309f 100644
--- a/PVE/Storage/ISCSIPlugin.pm
+++ b/PVE/Storage/ISCSIPlugin.pm
@@ -18,6 +18,65 @@ use base qw(PVE::Storage::Plugin);
  my $ISCSIADM = '/usr/bin/iscsiadm';
  $ISCSIADM = undef if ! -X $ISCSIADM;
  
+my $iscsi_cfg = "/etc/pve/iscsi.cfg";

+
+sub read_config {
+my ($filename, $raw) = @_;
+
+my $cfg = {};
+
+return $cfg if ! -f $iscsi_cfg;
+
+my $content = PVE::Tools::file_get_contents($iscsi_cfg);
+return $cfg if !defined($content);
+
+my @lines = split /\n/, $content;
+
+my $target;
+
+for my $line (@lines) {
+   $line =~ s/#.*$//;
+   $line =~ s/^\s+//;
+   $line =~ s/^;.*$//;
+   $line =~ s/\s+$//;
+   next if !$line;
+
+   $target = $1 if $line =~ m/^\[(\S+)\]$/;
+   if (!$target) {
+   warn "no target - skip: $line\n";
+   next;
+   }
+
+   if (!defined($cfg->{$target})) {
+   $cfg->{$target} = [];
+   }
+
+   if ($line =~ m/^((?:$IPV4RE|\[$IPV6RE\]):\d+)$/) {
+   push @{$cfg->{$target}}, $1;
+   }
+}
+
+return $cfg;
+}
+
+sub write_config {
+my ($cfg) = @_;
+
+my $out = '';
+
+for my $target (sort keys %$cfg) {
+   $out .= "[$target]\n";
+   for my $portal (sort @{$cfg->{$target}}) {
+   $out .= "$portal\n";
+   }
+   $out .= "\n";
+}
+
+PVE::Tools::file_set_contents($iscsi_cfg, $out);
+
+return;
+}


didn't look too closely at read/write_config since i'm convinced that we don't
actually need this


+
  sub check_iscsi_support {
  my $noerr = shift;
  
@@ -45,11 +104,10 @@ sub iscsi_session_list {

  eval {
run_command($cmd, errmsg => 'iscsi session scan failed', outfunc => sub {
  

[pve-devel] applied-series: [PATCH-SERIES qemu-server] fix #4522: vncproxy: always set environment variable for ticket

2023-10-16 Thread Thomas Lamprecht
On 16/10/2023 at 15:12, Fiona Ebner wrote:
> Since commit 2dc0eb61 ("qm: assume correct VNC setup in 'vncproxy',
> disallow passwordless"), 'qm vncproxy' will just fail when the
> LC_PVE_TICKET environment variable is not set. Fix the vncproxy API
> call, which previously would only set the variable in the presence of the
> 'websocket' parameter.
> 
> 
> Fiona Ebner (2):
>   api: vncproxy: update description of websocket parameter
>   fix #4522: api: vncproxy: also set environment variable for ticket
> without websocket
> 
>  PVE/API2/Qemu.pm | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 


applied series, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH qemu-server] Fix races with suspended VMs that can wake up

2023-10-16 Thread Thomas Lamprecht
On 13/10/2023 at 15:50, Filip Schauer wrote:
> Fix races with ACPI-suspended VMs which could wake up during migration
> or during a suspend-mode backup.
> 
> Revert prevention, of ACPI-suspended VMs automatically resuming after
> migration, introduced by 7ba974a6828d. The commit introduced a potential
> problem that causes a suspended VM that wakes up during migration to
> remain paused after the migration finishes.
> 
> Furthermore the commit increased the race window during the preparation
> of a suspend-mode backup, when a suspended VM wakes up between the
> vm_is_paused check in PVE::VZDump::QemuServer::prepare and
> PVE::VZDump::QemuServer::qga_fs_freeze. This causes the code to skip
> fs-freeze even if the VM has woken up, potentially leaving the file
> system in an inconsistent state.
> 
> To prevent this, do not treat the suspended runstate as paused when
> migrating or archiving a VM.
> 
> Signed-off-by: Filip Schauer 
> ---
>  PVE/API2/Qemu.pm | 4 ++--
>  PVE/QemuMigrate.pm   | 4 +++-
>  PVE/QemuServer.pm| 6 +++---
>  PVE/VZDump/QemuServer.pm | 4 +++-
>  4 files changed, 11 insertions(+), 7 deletions(-)
> 
>

applied, with Fiona's R-b and extra info massaged into the
commit message, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [RFC] towards automated integration testing

2023-10-16 Thread Lukas Wagner

Thank you for the feedback!

On 10/16/23 13:20, Stefan Hanreich wrote:

On 10/13/23 15:33, Lukas Wagner wrote:


- Additionally, it should be easy to run these integration tests locally
   on a developer's workstation in order to write new test cases, as well
   as troubleshooting and debugging existing test cases. The local
   test environment should match the one being used for automated testing
   as closely as possible

This would also include sharing those fixture templates somewhere, do
you already have an idea on how to accomplish this? PBS sounds like a
good option for this if I'm not missing something.


Yes, these templates could be stored on some shared storage, e.g. a PBS
instance, or they could also be distributed via a .deb/multiple .debs
(not sure if that is a good idea, since these would become huge pretty
quickly).

It could also be a two-step process: Use one command to get the
latest test templates, restoring them from a remote backup, converting
them to a local VM template. When executing tests, the test runner
could then use linked clones, speeding up the test setup time
quite a bit.
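(E.g. restore the template once and then create a linked clone per test
run - something along the lines of `qm clone <template-vmid> <new-vmid>`
without `--full`; exact invocation from memory, to be checked.)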

All in all, these templates that can be used in test fixtures should be:
  - easily obtainable for developers, in order to have a fully
functional test setup on their workstation
  - easily updateable (e.g. installing the latest packages, so that
the setup-hook does not need to fetch a boatload of packages every
time)



As a main mode of operation, the Systems under Test (SUTs)
will be virtualized on top of a Proxmox VE node.

This has the following benefits:
- it is easy to create various test setups (fixtures), including but not
   limited to single Proxmox VE nodes, clusters, Backup servers and
   auxiliary services (e.g. an LDAP server for testing LDAP
   authentication)

I can imagine having to setup VMs inside the Test Setup as well for
doing various tests. Doing this manually every time could be quite
cumbersome / hard to automate. Do you have a mechanism in mind to deploy
VMs inside the test system as well? Again, PBS could be an interesting
option for this imo.

Several options come to mind. We could use a virtualized PBS instance
with a datastore containing the VM backup as part of the fixture.
We could use some external backup store (so the same 'source' as for the
templates themselves) - however that means that the systems under test
must have network access to that.
We could also think about using iPXE to boot test VMs, with the
boot image either being provided by some template from the fixture, or by
some external server.
Of these, the 'as part of the fixture' approaches seem a bit
nicer, as they are more self-contained.

Also, the vmdb2 thingy that Thomas mentioned might be interesting for
this - I've only glanced at it so far though.

As of now it seems that this question will not influence the design
of the test runner much, so it can probably be postponed to a later
stage.


In theory, the test runner would also be able to drive tests on real
hardware, but of course with some limitations (harder to have a
predictable, reproducible environment, etc.)


Maybe utilizing Aaron's installer for setting up those test systems
could at least produce somewhat identical setups? Although it is really
hard managing systems with different storage types, network cards, ... .


In general my biggest concern with 'bare-metal' tests - and to be precise,
that does not really have anything to do with being 'bare-metal', but
more with testing on something that is harder to roll back into
a clean state for the next test execution - is that
I'm afraid that a setup like this could become quite brittle and a
maintenance burden. At some point, a test execution might leave
something in an unclean state (e.g. due to a crashed test or missing
something during cleanup), tripping up the following test job.
As an example from personal experience: one test run
might test new packages which introduce a new flag in a configuration
file. If that flag is not cleaned up afterwards, another test job
testing other packages might fail because it now has to
deal with an 'unknown' configuration key.

Maybe ZFS snapshots could help with that, but I'm not sure how that
would work in practice (e.g. due to the kernel being stored on
the EFI partition).

The automated installer *could* certainly help here - however,
right now I don't want to extend the scope of this project too much.
There is also the question of whether the installation should be refreshed
after every single test run, increasing the test cycle time/resource
consumption quite a bit - or only when 'something' breaks.


That being said, it might also make sense to be able to run the tests
(or more likely, a subset of them, since some will inherently
require a fixture) against an arbitrary PVE instance that is under full 
control of a developer (e.g. a development VM, or, if feeling 
adventurous, the workstation itself). If this is possible, 

Re: [pve-devel] [RFC] towards automated integration testing

2023-10-16 Thread Lukas Wagner

Thanks for the summary from our discussion and the additional feedback!

On 10/16/23 15:57, Thomas Lamprecht wrote:

   - create some sort of test report


As Stefan mentioned, test-output can be good to have. Our buildbot
instance provides that, and while I don't look at them in 99% of the
builds, when I need to, it's worth *a lot*.



Agreed, test output is always valuable and will definitely be captured.


## Introduction

The goal is to establish a framework that allows us to write
automated integration tests for our products.
These tests are intended to run in the following situations:
- When new packages are uploaded to the staging repos (by triggering
   a test run from repoman, or similar)


*debian repos, as we could also trigger some when git commits are
pushed, just like we do now through Buildbot. Doing so is IMO nice as it
will catch issues before a package was bumped, but is still quite a bit
simpler to implement than an "apply patch from list to git repos" thing
from the next point, but could still act as a preparation for that.


- Later, this tests could also be run when patch series are posted to
   our mailing lists. This requires a  mechanism to automatically
   discover, fetch and build patches, which will be a separate,
   follow-up project.




As a main mode of operation, the Systems under Test (SUTs)
will be virtualized on top of a Proxmox VE node.


For the fully-automated test system this can be OK as primary mode, as
it indeed makes things like going back to an older software state much
easier.

But, if we decouple the test harness and running them from that more
automated system, we can also run the harness periodically on our
bare-metal test servers.


## Terminology
- Template: A backup/VM template that can be instantiated by the test
   runner


I.e., the base of the test host? I'd call this test-host; template is a
bit too overloaded/generic and might focus too much on the virtual test
environment.


True, 'template' is a bit overloaded.



Or is this some part that takes place in the test, i.e., a
generalization of product to test and supplementary tool/app that helps
on that test?


It was intended to be a 'general VM/CT base thingy' that can be
instantiated and managed by the test runner, so either a PVE/PBS/PMG
base installation, or some auxiliary resource, e.g. a Debian VM with
an already-set-up LDAP server.

I'll see if I can find good terms with the newly added focus on
bare-metal testing / the decoupling between environment setup and test
execution.



Is the order of test-cases guaranteed by toml parsing, or how are intra-
fixture dependencies ensured?



Good point. With rollbacks in between test cases it probably does not 
matter much, but on 'real hardware' with no rollback this could 
definitely be a concern.
A super simple thing that could just work fine is ordering test
execution by testcase-names, sorted alphabetically. Ideally you'd write
test cases that do not depend on each other in any way, and *if* you ever
find yourself in the situation where you *need* some ordering, you could
just encode the order in the test-case name by adding an integer prefix
- similar to how you would name config files in /etc/sysctl.d/*, for
instance.
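(E.g. hypothetical names like 010-prepare-storage, 020-create-vm,
030-migrate-vm - the numeric prefix alone would then define the order.)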


--
- Lukas


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH v2 storage 1/1] fix #254: iscsi: allow to configure multiple portals

2023-10-16 Thread Yuri Konotopov via pve-devel
--- Begin Message ---

On 16.10.2023 17:58, Dominik Csapak wrote:

hi,


Hi, Dominik!

Thanks for your review. I will try to address all issues.

Some comments are below.



a few high level comments:

first:

is the iscsi.cfg really necessary? couldn't we just lookup on demand 
like we do now?


Until now I thought it was. For some reason I thought that the open-iscsi
node list is not persistent between reboots. However, it can indeed be
done without a custom cache file. I will rewrite that part. Thanks!



second:

i would favor an approach that uses an 'array' instead of the '-list' 
types.
i'm not completely sure if one can just have that as a drop in 
replacement

with old configs/api calls still working. if not, we could introduce
a new 'portals' api parameter/storage config that is an array which could
take precedence over the old 'portal' one. (that we could drop with the
next major release) in that case we could even automatically convert from
portal -> portals if we detect that in the api update


I'm not familiar with Proxmox formats but will try to investigate it.




and a few (basic) comments inline:

On 10/15/23 20:32, ykonoto...@gnome.org wrote:

From: Yuri Konotopov 



it would be great to have a real commit message here. best would be a 
short description
of what the patch wants to achieve, and maybe some more tricky 
implementation detail
that is not obvious (most of what you wrote in the cover letter would 
be fine, sans

the changelog + mention of 'Here is v2 patch'


I will add it. Git's send-email interface is not something I work with often :-)





Signed-off-by: Yuri Konotopov 
---
  PVE/API2/Storage/Scan.pm   |   2 +-
  PVE/Storage.pm |  10 +-
  PVE/Storage/ISCSIPlugin.pm | 207 -
  3 files changed, 187 insertions(+), 32 deletions(-)

diff --git a/PVE/API2/Storage/Scan.pm b/PVE/API2/Storage/Scan.pm
index d7a8743..1f9773c 100644
--- a/PVE/API2/Storage/Scan.pm
+++ b/PVE/API2/Storage/Scan.pm
@@ -305,7 +305,7 @@ __PACKAGE__->register_method({
  node => get_standard_option('pve-node'),
  portal => {
  description => "The iSCSI portal (IP or DNS name with 
optional port).",

-    type => 'string', format => 'pve-storage-portal-dns',
+    type => 'string', format => 'pve-storage-portal-dns-list',
  },
  },
  },
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index cec3996..0043507 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1428,12 +1428,14 @@ sub resolv_portal {
  sub scan_iscsi {
  my ($portal_in) = @_;
  -    my $portal;
-    if (!($portal = resolv_portal($portal_in))) {
-    die "unable to parse/resolve portal address '${portal_in}'\n";
+    my @portals = PVE::Tools::split_list($portal_in);


not a big deal, but we usually pack that into a reference immediately:

my $portals = [ <...>::split_list(...) ];

then you can do
for my $portal (@$portals)
or
for my $portal ($portals->@*)

and don't have to create a reference later on when passing to 
iscsi_discovery



+    for my $portal (@portals) {
+    if (!resolv_portal($portal)) {
+    die "unable to parse/resolve portal address '${portal}'\n";
+    }
  }
  -    return PVE::Storage::ISCSIPlugin::iscsi_discovery($portal);
+    return PVE::Storage::ISCSIPlugin::iscsi_discovery(\@portals);
  }
    sub storage_default_format {
diff --git a/PVE/Storage/ISCSIPlugin.pm b/PVE/Storage/ISCSIPlugin.pm
index a79fcb0..6f4309f 100644
--- a/PVE/Storage/ISCSIPlugin.pm
+++ b/PVE/Storage/ISCSIPlugin.pm
@@ -18,6 +18,65 @@ use base qw(PVE::Storage::Plugin);
  my $ISCSIADM = '/usr/bin/iscsiadm';
  $ISCSIADM = undef if ! -X $ISCSIADM;
  +my $iscsi_cfg = "/etc/pve/iscsi.cfg";
+
+sub read_config {
+    my ($filename, $raw) = @_;
+
+    my $cfg = {};
+
+    return $cfg if ! -f $iscsi_cfg;
+
+    my $content = PVE::Tools::file_get_contents($iscsi_cfg);
+    return $cfg if !defined($content);
+
+    my @lines = split /\n/, $content;
+
+    my $target;
+
+    for my $line (@lines) {
+    $line =~ s/#.*$//;
+    $line =~ s/^\s+//;
+    $line =~ s/^;.*$//;
+    $line =~ s/\s+$//;
+    next if !$line;
+
+    $target = $1 if $line =~ m/^\[(\S+)\]$/;
+    if (!$target) {
+    warn "no target - skip: $line\n";
+    next;
+    }
+
+    if (!defined($cfg->{$target})) {
+    $cfg->{$target} = [];
+    }
+
+    if ($line =~ m/^((?:$IPV4RE|\[$IPV6RE\]):\d+)$/) {
+    push @{$cfg->{$target}}, $1;
+    }
+    }
+
+    return $cfg;
+}
+
+sub write_config {
+    my ($cfg) = @_;
+
+    my $out = '';
+
+    for my $target (sort keys %$cfg) {
+    $out .= "[$target]\n";
+    for my $portal (sort @{$cfg->{$target}}) {
+    $out .= "$portal\n";
+    }
+    $out .= "\n";
+    }
+
+    PVE::Tools::file_set_contents($iscsi_cfg, $out);
+
+    return;
+}


didn't look too closely at read/write_config since i'm conviced that 
we don't

actually need this


+
  sub check_iscsi_support {
  my $noerr = shift;
  @@ -45,11 +104,10 @@ sub iscsi_session_list {
  

[pve-devel] [PATCH widget-toolkit] dark-mode: set intentionally black icons to `$icon-color`

2023-10-16 Thread Stefan Sterz
some icons intentionally use black as their color in the light theme.
this includes the little pencil and check mark icon in the acme
overview. change their color to the regular dark-mode icon-color. for
this to work the filter inversion needed for some other icons needs to
be remove too.

Signed-off-by: Stefan Sterz 
---
 src/proxmox-dark/scss/other/_icons.scss | 9 +
 1 file changed, 9 insertions(+)

diff --git a/src/proxmox-dark/scss/other/_icons.scss 
b/src/proxmox-dark/scss/other/_icons.scss
index d4dc316..c045cf4 100644
--- a/src/proxmox-dark/scss/other/_icons.scss
+++ b/src/proxmox-dark/scss/other/_icons.scss
@@ -104,6 +104,9 @@
 }

 // pbs show task log in longest task list column
+.fa.black,
+.fa.black::after,
+.fa.black::before,
 .x-action-col-icon.fa-chevron-right::before {
   filter: none;
 }
@@ -222,6 +225,12 @@
   }
 }

+// set icon color of intentional black icons (e.g.: pencil icon for
+// quickly changing the ACME account)
+.fa.black {
+  color: $icon-color;
+}
+
 // The usage icons dynamically displaying how full a storage is
 .usage-wrapper {
   border: 1px solid $icon-color;
--
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [RFC] towards automated integration testing

2023-10-16 Thread Thomas Lamprecht
On 16/10/2023 at 17:33, Lukas Wagner wrote:
>> Or is this some part that takes place in the test, i.e., a
>> generalization of product to test and supplementary tool/app that helps
>> on that test?
> 
> It was intended to be a 'general VM/CT base thingy' that can be
> instantiated and managed by the test runner, so either a PVE/PBS/PMG
> base installation, or some auxiliary resource, e.g. a Debian VM with
> an already-set-up LDAP server.
> 
> I'll see if I can find good terms with the newly added focus on
> bare-metal testing / the decoupling between environment setup and test
> execution.

Hmm, yeah OK, having some additional info on top of "template", like
e.g. "system-template" or "app-template", could already be slightly
better then.

While these are slight details, they are IMO still important for the
overall future direction: I'd possibly split "restore" into "source-type"
and "source", where the "source-type" can be e.g. "disk-image" for a
qcow2 or the like to work on directly, or "backup-image" for your backup
restore process, or some type for bootstrap tools like debootstrap or the
VM specific vmdb2.

Also make re-use configurable, i.e., whether the app-template instance
is destroyed after some test run is done. For that, write some simple
info mapping instantiated templates to other identifiers (VMID,
IP, ...) to e.g. /var/cache/ (or some XDG_ directory to also cater to
any users running this as non-root).

Again, can be classified as details, but IMO important for the
direction this is going, and not too much work, so should be at least
on the radar.

>> Is the order of test-cases guaranteed by toml parsing, or how are intra-
>> fixture dependencies ensured?
>>
> 
> Good point. With rollbacks in between test cases it probably does not
> matter much, but on 'real hardware' with no rollback this could
> definitely be a concern.
> A super simple thing that could just work fine is ordering test
> execution by testcase-names, sorted alphabetically. Ideally you'd write
> test cases that do not depend on each other in any way, and *if* you ever
> find yourself in the situation where you *need* some ordering, you
> could just encode the order in the test-case name by adding an integer
> prefix - similar to how you would name config files in /etc/sysctl.d/*,
> for instance.


While it can be OK to leave that for later, encoding such things
in names is IMO brittle and hard to manage with more than a handful
of tests, and we'll hopefully have lots more ;-)


From the top of my head, I'd rather do some attribute-based dependency
annotation, so that a single test or a whole fixture can depend on
other single tests or a whole fixture.
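(E.g. something like a `depends = [...]` key on a test-case or fixture
definition that references other test/fixture names - purely hypothetical
syntax at this point.)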



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel