Re: [pve-devel] [PATCH v4 pve-manager 15/16] fix #4759: ceph: configure ceph-crash.service and its key

2024-03-20 Thread Fabian Grünbichler
> Max Carrara  wrote on 19.03.2024 18:41 CET:
> On Tue Mar 19, 2024 at 11:04 AM CET, Fabian Grünbichler wrote:
> > On March 5, 2024 4:07 pm, Max Carrara wrote:
> > > diff --git a/bin/pve-init-ceph-crash b/bin/pve-init-ceph-crash
> > > new file mode 100755
> > > index ..c8e9217d
> > > --- /dev/null
> > > +++ b/bin/pve-init-ceph-crash
> > > @@ -0,0 +1,129 @@
> > > +#!/usr/bin/perl
> > > +
> > > +use strict;
> > > +use warnings;
> > > +
> > > +use List::Util qw(first);
> > > +
> > > +use PVE::Ceph::Tools;
> > > +use PVE::Cluster;
> > > +use PVE::RADOS;
> > > +use PVE::RPCEnvironment;
> > > +
> > > +my $ceph_cfg_file = 'ceph.conf';
> > > +my $keyring_value = '/etc/pve/ceph/$cluster.$name.keyring';
> > > +
> > > +my $entity = 'client.crash';
> >
> > nit: this could be inlined?
> 
> Inlined as in use 'client.crash' as the key directly? Eh, I'm more a fan
> of using a variable here, as constantly spelling it out while changing a
> bunch of things gets a little painful ...
> 
> If you meant that I should put it in `try_adapt_cfg` - sure, I missed
> that that's the only `sub` in which it's being used, woops!

both would be fine for me :)

> > > +
> > > +

[..]

> > > +
> > > +sub main {
> > > +my $rpcenv = PVE::RPCEnvironment->init('cli');
> > > +
> > > +$rpcenv->init_request();
> > > +$rpcenv->set_language($ENV{LANG});
> > > +$rpcenv->set_user('root@pam');
> >
> > why do we need an rpcenv here?
> 
> Double-checked, just to be sure: `librados-perl` requires an
> `RPCEnvironment` to do some handling regarding forks -
> `RPCEnvironment::get()` will die if no env was initialized.

good to know. maybe a comment makes sense - the rpcenv is not used by anything 
else here it seems, and the fact that librados uses it because it forks itself 
and wants to clean up in case it is called in an API server process is not 
obvious ;)

you could replace those lines with a call to setup_default_cli_env that does 
all of the above and the "am I root" check in one ;)
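
for reference, a rough sketch of what that replacement could look like
(untested; assumes the setup_default_cli_env helper from PVE::RPCEnvironment):

```
sub main {
    # replaces init('cli'), init_request(), set_language($ENV{LANG}) and
    # set_user('root@pam'), and additionally dies when not run as root
    PVE::RPCEnvironment->setup_default_cli_env();

    # note: librados-perl forks and calls RPCEnvironment::get() during its
    # cleanup, so the env must exist before the first PVE::RADOS call
    ...
}
```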


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH docs] system-requirements: mention that SSDs with PLP should be used

2024-03-20 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
 pve-system-requirements.adoc | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/pve-system-requirements.adoc b/pve-system-requirements.adoc
index bc3689d..4db5358 100644
--- a/pve-system-requirements.adoc
+++ b/pve-system-requirements.adoc
@@ -49,6 +49,8 @@ Recommended System Requirements
  (BBU) or non-RAID for ZFS and Ceph. Neither ZFS nor Ceph are compatible with a
  hardware RAID controller.
 ** Shared and distributed storage is possible.
+** SSDs with Power-Loss-Protection (PLP) are recommended for good performance.
+  Using consumer SSDs is discouraged.
 
 * Redundant (Multi-)Gbit NICs, with additional NICs depending on the preferred
   storage technology and cluster setup.
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH v4 pve-manager 15/16] fix #4759: ceph: configure ceph-crash.service and its key

2024-03-20 Thread Max Carrara
On Wed Mar 20, 2024 at 9:05 AM CET, Fabian Grünbichler wrote:
> > Max Carrara  wrote on 19.03.2024 18:41 CET:
> > On Tue Mar 19, 2024 at 11:04 AM CET, Fabian Grünbichler wrote:
> > > On March 5, 2024 4:07 pm, Max Carrara wrote:
> > > > diff --git a/bin/pve-init-ceph-crash b/bin/pve-init-ceph-crash
> > > > new file mode 100755
> > > > index ..c8e9217d
> > > > --- /dev/null
> > > > +++ b/bin/pve-init-ceph-crash
> > > > @@ -0,0 +1,129 @@
> > > > +#!/usr/bin/perl
> > > > +
> > > > +use strict;
> > > > +use warnings;
> > > > +
> > > > +use List::Util qw(first);
> > > > +
> > > > +use PVE::Ceph::Tools;
> > > > +use PVE::Cluster;
> > > > +use PVE::RADOS;
> > > > +use PVE::RPCEnvironment;
> > > > +
> > > > +my $ceph_cfg_file = 'ceph.conf';
> > > > +my $keyring_value = '/etc/pve/ceph/$cluster.$name.keyring';
> > > > +
> > > > +my $entity = 'client.crash';
> > >
> > > nit: this could be inlined?
> > 
> > Inlined as in use 'client.crash' as the key directly? Eh, I'm more a fan
> > of using a variable here, as constantly spelling it out while changing a
> > bunch of things gets a little painful ...
> > 
> > If you meant that I should put it in `try_adapt_cfg` - sure, I missed
> > that that's the only `sub` in which it's being used, woops!
>
> both would be fine for me :)

ACK!

>
> > > > +
> > > > +
>
> [..]
>
> > > > +
> > > > +sub main {
> > > > +my $rpcenv = PVE::RPCEnvironment->init('cli');
> > > > +
> > > > +$rpcenv->init_request();
> > > > +$rpcenv->set_language($ENV{LANG});
> > > > +$rpcenv->set_user('root@pam');
> > >
> > > why do we need an rpcenv here?
> > 
> > Double-checked, just to be sure: `librados-perl` requires an
> > `RPCEnvironment` to do some handling regarding forks -
> > `RPCEnvironment::get()` will die if no env was initialized.
>
> good to know. maybe a comment makes sense - the rpcenv is not used by 
> anything else here it seems, and the fact that librados uses it because it 
> forks itself and wants to clean up in case it is called in an API server 
> process is not obvious ;)
>
> you could replace those lines with a call to setup_default_cli_env that does 
> all of the above and the "am I root" check in one ;)

Also ACK! Thanks again for your input!



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH docs] system-requirements: mention that SSDs with PLP should be used

2024-03-20 Thread Fiona Ebner
On 20.03.24 at 09:56, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer 
> ---
>  pve-system-requirements.adoc | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/pve-system-requirements.adoc b/pve-system-requirements.adoc
> index bc3689d..4db5358 100644
> --- a/pve-system-requirements.adoc
> +++ b/pve-system-requirements.adoc
> @@ -49,6 +49,8 @@ Recommended System Requirements
>(BBU) or non-RAID for ZFS and Ceph. Neither ZFS nor Ceph are compatible with a
>hardware RAID controller.
>  ** Shared and distributed storage is possible.
> +** SSDs with Power-Loss-Protection (PLP) are recommended for good performance.
> +  Using consumer SSDs is discouraged.
>  

Having PLP might correlate with having good performance, but it's not
the reason for good performance and good performance is not the reason
you want PLP. It's just that both things are present in many enterprise
SSDs, I'd mention that explicitly to avoid potential confusion.

>  * Redundant (Multi-)Gbit NICs, with additional NICs depending on the 
> preferred
>storage technology and cluster setup.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH v1 pve-esxi-import-tools 3/5] listvms: improve typing and add dataclasses to represent dicts

2024-03-20 Thread Lukas Wagner



On  2024-03-19 16:32, Max Carrara wrote:
> This commit replaces some of the explicitly imported types from the
> `typing` module with their inbuilt counterparts, e.g. `typing.List`
> becomes `list`. This is supported since Python 3.9 [0].
> 
> Additionally, file paths are now represented as `pathlib.Path` [1],
> which also checks whether the given string is actually a valid path
> when constructed.
> 
> Furthermore, the `dict`s with values of mixed types are now
> represented as dataclasses [2] instead, in order to make them more
> type-safe (--> allow for better linting).
> 
> Because dataclasses and `pathlib.Path`s are not JSON-serializable by
> default however, a helper function is added, which allows for more
> fine-grained control regarding how those objects are serialized.
> 
> [0]: 
> https://docs.python.org/3.9/whatsnew/3.9.html#type-hinting-generics-in-standard-collections
> [1]: https://docs.python.org/3.11/library/pathlib.html
> [2]: https://docs.python.org/3.11/library/dataclasses.html
> 
> Signed-off-by: Max Carrara 
> ---
>  listvms.py | 99 --
>  1 file changed, 73 insertions(+), 26 deletions(-)
> 
> diff --git a/listvms.py b/listvms.py
> index cc3209f..cdea95a 100755
> --- a/listvms.py
> +++ b/listvms.py
> @@ -1,26 +1,69 @@
>  #!/usr/bin/python3
>  
> +import dataclasses
>  import json
>  import ssl
>  import sys
>  
> -from typing import List, Dict, Optional, Tuple
> +from dataclasses import dataclass
> +from pathlib import Path
> +from typing import Any
>  
>  from pyVim.connect import SmartConnect, Disconnect
>  from pyVmomi import vim
>  
>  
> -def get_datacenter_of_vm(vm: vim.VirtualMachine) -> Optional[vim.Datacenter]:
> +@dataclass
> +class VmVmxInfo:
> +datastore: str
> +path: Path
> +checksum: str
> +
> +
> +@dataclass
> +class VmDiskInfo:
> +datastore: str
> +path: Path
> +capacity: int
> +
> +
> +@dataclass
> +class VmInfo:
> +config: VmVmxInfo
> +disks: list[VmDiskInfo]
> +power: str
> +
> +
> +def json_dump_helper(obj: Any) -> Any:
> +"""Converts otherwise unserializable objects to types that can be
> +serialized as JSON.
> +
> +Raises:
> +TypeError: If the conversion of the object is not supported.
> +"""
> +if dataclasses.is_dataclass(obj):
> +return dataclasses.asdict(obj)
> +
> +match obj:
> +case Path():
> +return str(obj)
> +
> +raise TypeError(
> +f"Can't make object of type {type(obj)} JSON-serializable: 
> {repr(obj)}"
> +)
> +
> +
> +def get_datacenter_of_vm(vm: vim.VirtualMachine) -> vim.Datacenter | None:
>  """Find the Datacenter object a VM belongs to."""
>  current = vm.parent
>  while current:
>  if isinstance(current, vim.Datacenter):
>  return current
>  current = current.parent
> -return None
> +return

mypy does not seem to like this change :)

listvms.py:157: error: Return value expected  [return-value]


>  
>  
> -def list_vms(service_instance: vim.ServiceInstance) -> List[vim.VirtualMachine]:
> +def list_vms(service_instance: vim.ServiceInstance) -> list[vim.VirtualMachine]:
>  """List all VMs on the ESXi/vCenter server."""
>  content = service_instance.content
>  vm_view = content.viewManager.CreateContainerView(
> @@ -32,39 +75,36 @@ def list_vms(service_instance: vim.ServiceInstance) -> List[vim.VirtualMachine]:
>  vm_view.Destroy()
>  return vms
>  
> -def parse_file_path(path):
> +def parse_file_path(path) -> tuple[str, Path]:
>  """Parse a path of the form '[datastore] file/path'"""
>  datastore_name, relative_path = path.split('] ', 1)
>  datastore_name = datastore_name.strip('[')
> -return (datastore_name, relative_path)
> +return (datastore_name, Path(relative_path))
>  
> -def get_vm_vmx_info(vm: vim.VirtualMachine) -> Dict[str, str]:
> +def get_vm_vmx_info(vm: vim.VirtualMachine) -> VmVmxInfo:
>  """Extract VMX file path and checksum from a VM object."""
>  datastore_name, relative_vmx_path = parse_file_path(vm.config.files.vmPathName)
> -return {
> -'datastore': datastore_name,
> -'path': relative_vmx_path,
> -'checksum': vm.config.vmxConfigChecksum.hex() if vm.config.vmxConfigChecksum else 'N/A'
> -}
>  
> -def get_vm_disk_info(vm: vim.VirtualMachine) -> Dict[str, int]:
> +return VmVmxInfo(
> +datastore=datastore_name,
> +path=relative_vmx_path,
> +checksum=vm.config.vmxConfigChecksum.hex() if vm.config.vmxConfigChecksum else 'N/A'
> +)
> +
> +def get_vm_disk_info(vm: vim.VirtualMachine) -> list[VmDiskInfo]:
>  disks = []
>  for device in vm.config.hardware.device:
> -if type(device).__name__ == 'vim.vm.device.VirtualDisk':
> +if isinstance(device, vim.vm.device.VirtualDisk):
>  try:
>  (datastore, path) = parse_file_path(device.backing.fileName)
>

Re: [pve-devel] [PATCH v1 pve-esxi-import-tools 4/5] listvms: add arg parser, context manager for connections, fetch helper

2024-03-20 Thread Lukas Wagner



On  2024-03-19 16:32, Max Carrara wrote:

> +def _squeeze_and_wrap(text: str) -> str:
> +"""Makes it easier to write help text using multiline strings."""
> +
> +text = text.replace("\n", " ").strip()
> +
> +# squeeze recurring spaces
> +while "  " in text:
> +text = text.replace("  ", " ")
> +

How about `" ".join(text.split())`? :)

-- 
- Lukas


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH v1 pve-esxi-import-tools 0/5] Improve listvms.py

2024-03-20 Thread Lukas Wagner
On  2024-03-19 16:32, Max Carrara wrote:
> This series adds a bunch of improvements for listvms.py, most notably
> better typing (and thus better linting support) as well as parsing
> arguments via the Python STL's `argparse` [0]. For more information,
> please see the individual patches.
> 
> All patches were additionally tested in order to ensure that the JSON
> output on successful invocations remains unchanged. This was done as
> follows:
> 
>   # on master
>   ./listvms.py $ARGS | jq > ref.json
>   # after each patch
>   ./listvms.py $ARGS | jq > output.json
>   diff -u ref.json output.json
> 
> Furthermore, I built the repo's package and installed it on my local
> system, and re-added my virtual ESXi host in the storage settings. The
> plugin worked as expected - all my VMs on the ESXi hosts showed up and
> were able to be live-imported. 
> 

Looks good to me.

Since this is type-hinted Python code, maybe the following `mypy.ini` would 
make sense?

```
[mypy]

[mypy-pyVmomi]
ignore_missing_imports = True

[mypy-pyVim.*]
ignore_missing_imports = True
```

Otherwise `mypy` complains about missing type stubs for those two modules :)
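
For context: mypy picks the ini up automatically from the working directory,
so a plain run then only surfaces genuine errors, like the one mentioned for
patch 3/5 (hypothetical invocation):

```
$ mypy listvms.py
listvms.py:157: error: Return value expected  [return-value]
```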


Tested-by: Lukas Wagner 
Reviewed-by: Lukas Wagner 

-- 
- Lukas


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH docs] system-requirements: mention that SSDs with PLP should be used

2024-03-20 Thread Aaron Lauterer




On  2024-03-20  10:30, Fiona Ebner wrote:

On 20.03.24 at 09:56, Aaron Lauterer wrote:

Signed-off-by: Aaron Lauterer 
---
  pve-system-requirements.adoc | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/pve-system-requirements.adoc b/pve-system-requirements.adoc
index bc3689d..4db5358 100644
--- a/pve-system-requirements.adoc
+++ b/pve-system-requirements.adoc
@@ -49,6 +49,8 @@ Recommended System Requirements
(BBU) or non-RAID for ZFS and Ceph. Neither ZFS nor Ceph are compatible with a
hardware RAID controller.
  ** Shared and distributed storage is possible.
+** SSDs with Power-Loss-Protection (PLP) are recommended for good performance.
+  Using consumer SSDs is discouraged.
  


Having PLP might correlate with having good performance, but it's not
the reason for good performance and good performance is not the reason
you want PLP. It's just that both things are present in many enterprise
SSDs, I'd mention that explicitly to avoid potential confusion.


When it comes to sync writes, it is definitely one reason for the good 
performance ;)

But yeah, let's think about it, what about the following?:


Enterprise grade SSDs are recommended for good performance. Checking for 
 Power-Loss-Protection (PLP) is a good way to avoid consumer grade 
SSDs. The use of consumer grade SSDs is discouraged.



Not too happy with that either, but phrasing it correctly and succinct 
is an art in itself.





  * Redundant (Multi-)Gbit NICs, with additional NICs depending on the preferred
storage technology and cluster setup.



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH docs] system-requirements: mention that SSDs with PLP should be used

2024-03-20 Thread Stefan Sterz
On Wed Mar 20, 2024 at 10:49 AM CET, Aaron Lauterer wrote:
>
>
> On  2024-03-20  10:30, Fiona Ebner wrote:
> > On 20.03.24 at 09:56, Aaron Lauterer wrote:
> >> Signed-off-by: Aaron Lauterer 
> >> ---
> >>   pve-system-requirements.adoc | 2 ++
> >>   1 file changed, 2 insertions(+)
> >>
> >> diff --git a/pve-system-requirements.adoc b/pve-system-requirements.adoc
> >> index bc3689d..4db5358 100644
> >> --- a/pve-system-requirements.adoc
> >> +++ b/pve-system-requirements.adoc
> >> @@ -49,6 +49,8 @@ Recommended System Requirements
> >> (BBU) or non-RAID for ZFS and Ceph. Neither ZFS nor Ceph are compatible with a
> >> hardware RAID controller.
> >>   ** Shared and distributed storage is possible.
> >> +** SSDs with Power-Loss-Protection (PLP) are recommended for good performance.
> >> +  Using consumer SSDs is discouraged.
> >>
> >
> > Having PLP might correlate with having good performance, but it's not
> > the reason for good performance and good performance is not the reason
> > you want PLP. It's just that both things are present in many enterprise
> > SSDs, I'd mention that explicitly to avoid potential confusion.
>
> When it comes to sync writes, it is definitely one reason for the good
> performance ;)
> But yeah, let's think about it, what about the following?:
>
>
> Enterprise grade SSDs are recommended for good performance. Checking for
>   Power-Loss-Protection (PLP) is a good way to avoid consumer grade
> SSDs. The use of consumer grade SSDs is discouraged.
>
>
> Not too happy with that either, but phrasing it correctly and succinct
> is an art in itself.
>

How about:

Enterprise SSDs with good performance are recommended.
Power-Loss-Protection (PLP) support can help identify such disks. Using
consumer SSDs is discouraged.

> >
> >>   * Redundant (Multi-)Gbit NICs, with additional NICs depending on the 
> >> preferred
> >> storage technology and cluster setup.
>
>
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH docs] system-requirements: mention that SSDs with PLP should be used

2024-03-20 Thread Fiona Ebner


On 20.03.24 at 10:49, Aaron Lauterer wrote:
> 
> 
> On  2024-03-20  10:30, Fiona Ebner wrote:
>> On 20.03.24 at 09:56, Aaron Lauterer wrote:
>>> Signed-off-by: Aaron Lauterer 
>>> ---
>>>   pve-system-requirements.adoc | 2 ++
>>>   1 file changed, 2 insertions(+)
>>>
>>> diff --git a/pve-system-requirements.adoc b/pve-system-requirements.adoc
>>> index bc3689d..4db5358 100644
>>> --- a/pve-system-requirements.adoc
>>> +++ b/pve-system-requirements.adoc
>>> @@ -49,6 +49,8 @@ Recommended System Requirements
>>>     (BBU) or non-RAID for ZFS and Ceph. Neither ZFS nor Ceph are compatible with a
>>>     hardware RAID controller.
>>>   ** Shared and distributed storage is possible.
>>> +** SSDs with Power-Loss-Protection (PLP) are recommended for good performance.
>>> +  Using consumer SSDs is discouraged.
>>>   
>>
>> Having PLP might correlate with having good performance, but it's not
>> the reason for good performance and good performance is not the reason
>> you want PLP. It's just that both things are present in many enterprise
>> SSDs, I'd mention that explicitly to avoid potential confusion.
> 
> When it comes to sync writes, it is definitely one reason for the good
> performance ;)

Oh, I see. Didn't think about that :)

> But yeah, let's think about it, what about the following?:
> 
> 
> Enterprise grade SSDs are recommended for good performance. Checking for
>  Power-Loss-Protection (PLP) is a good way to avoid consumer grade SSDs.
> The use of consumer grade SSDs is discouraged.
> 
> 
> Not too happy with that either, but phrasing it correctly and succinct
> is an art in itself.
> 

IMHO, it's still succinct enough. But you could also go for "avoid
consumer grade SSDs, whose use is discouraged."

>>
>>>   * Redundant (Multi-)Gbit NICs, with additional NICs depending on
>>> the preferred
>>>     storage technology and cluster setup.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v1 pve-esxi-import-tools 4/5] listvms: add arg parser, context manager for connections, fetch helper

2024-03-20 Thread Max Carrara
On Wed Mar 20, 2024 at 10:39 AM CET, Lukas Wagner wrote:
>
>
> On  2024-03-19 16:32, Max Carrara wrote:
>
> > +def _squeeze_and_wrap(text: str) -> str:
> > +"""Makes it easier to write help text using multiline strings."""
> > +
> > +text = text.replace("\n", " ").strip()
> > +
> > +# squeeze recurring spaces
> > +while "  " in text:
> > +text = text.replace("  ", " ")
> > +
>
> How about `" ".join(text.split())`? :)

That would indeed also work here - I tend to automatically resort to the
while-loop because `str.split()` also splits on `\n`, `\t` and some
other whitespace characters, which is what I *usually* want to avoid,
but in this case it's unnecessary.
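
To illustrate the difference with a (made-up) help string:

```
>>> text = "usage:\n  do the\t thing"
>>> " ".join(text.split())     # collapses newlines, tabs, and runs of spaces
'usage: do the thing'
>>> text.replace("\n", " ")    # only newlines are touched; the tab survives
'usage:   do the\t thing'
```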

Will use your suggestion instead in v2, thanks!



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH v1 pve-esxi-import-tools 0/5] Improve listvms.py

2024-03-20 Thread Max Carrara
On Wed Mar 20, 2024 at 10:42 AM CET, Lukas Wagner wrote:
> On  2024-03-19 16:32, Max Carrara wrote:
> > This series adds a bunch of improvements for listvms.py, most notably
> > better typing (and thus better linting support) as well as parsing
> > arguments via the Python STL's `argparse` [0]. For more information,
> > please see the individual patches.
> > 
> > All patches were additionally tested in order to ensure that the JSON
> > output on successful invocations remains unchanged. This was done as
> > follows:
> > 
> >   # on master
> >   ./listvms.py $ARGS | jq > ref.json
> >   # after each patch
> >   ./listvms.py $ARGS | jq > output.json
> >   diff -u ref.json output.json
> > 
> > Furthermore, I built the repo's package and installed it on my local
> > system, and re-added my virtual ESXi host in the storage settings. The
> > plugin worked as expected - all my VMs on the ESXi hosts showed up and
> > were able to be live-imported. 
> > 
>
> Looks good to me.
>
> Since this is type-hinted Python code, maybe the following `mypy.ini` would 
> make sense?
>
> ```
> [mypy]
>
> [mypy-pyVmomi]
> ignore_missing_imports = True
>
> [mypy-pyVim.*]
> ignore_missing_imports = True
> ```
>
> Otherwise `mypy` complains about missing type stubs for those two modules :)

I think adding this wouldn't hurt at all, so I'll include it in v2.
Thanks for the suggestion!

If only the VMware Python stuff wasn't so ... well, the way it is.

>
>
> Tested-by: Lukas Wagner 
> Reviewed-by: Lukas Wagner 

Thanks for reviewing and testing!



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH v1 pve-esxi-import-tools 3/5] listvms: improve typing and add dataclasses to represent dicts

2024-03-20 Thread Max Carrara
On Wed Mar 20, 2024 at 10:38 AM CET, Lukas Wagner wrote:
>
>
> On  2024-03-19 16:32, Max Carrara wrote:
> > This commit replaces some of the explicitly imported types from the
> > `typing` module with their inbuilt counterparts, e.g. `typing.List`
> > becomes `list`. This is supported since Python 3.9 [0].
> > 
> > Additionally, file paths are now represented as `pathlib.Path` [1],
> > which also checks whether the given string is actually a valid path
> > when constructed.
> > 
> > Furthermore, the `dict`s with values of mixed types are now
> > represented as dataclasses [2] instead, in order to make them more
> > type-safe (--> allow for better linting).
> > 
> > Because dataclasses and `pathlib.Path`s are not JSON-serializable by
> > default however, a helper function is added, which allows for more
> > fine-grained control regarding how those objects are serialized.
> > 
> > [0]: 
> > https://docs.python.org/3.9/whatsnew/3.9.html#type-hinting-generics-in-standard-collections
> > [1]: https://docs.python.org/3.11/library/pathlib.html
> > [2]: https://docs.python.org/3.11/library/dataclasses.html
> > 
> > Signed-off-by: Max Carrara 
> > ---
> >  listvms.py | 99 --
> >  1 file changed, 73 insertions(+), 26 deletions(-)
> > 
> > diff --git a/listvms.py b/listvms.py
> > index cc3209f..cdea95a 100755
> > --- a/listvms.py
> > +++ b/listvms.py
> > @@ -1,26 +1,69 @@
> >  #!/usr/bin/python3
> >  
> > +import dataclasses
> >  import json
> >  import ssl
> >  import sys
> >  
> > -from typing import List, Dict, Optional, Tuple
> > +from dataclasses import dataclass
> > +from pathlib import Path
> > +from typing import Any
> >  
> >  from pyVim.connect import SmartConnect, Disconnect
> >  from pyVmomi import vim
> >  
> >  
> > -def get_datacenter_of_vm(vm: vim.VirtualMachine) -> Optional[vim.Datacenter]:
> > +@dataclass
> > +class VmVmxInfo:
> > +datastore: str
> > +path: Path
> > +checksum: str
> > +
> > +
> > +@dataclass
> > +class VmDiskInfo:
> > +datastore: str
> > +path: Path
> > +capacity: int
> > +
> > +
> > +@dataclass
> > +class VmInfo:
> > +config: VmVmxInfo
> > +disks: list[VmDiskInfo]
> > +power: str
> > +
> > +
> > +def json_dump_helper(obj: Any) -> Any:
> > +"""Converts otherwise unserializable objects to types that can be
> > +serialized as JSON.
> > +
> > +Raises:
> > +TypeError: If the conversion of the object is not supported.
> > +"""
> > +if dataclasses.is_dataclass(obj):
> > +return dataclasses.asdict(obj)
> > +
> > +match obj:
> > +case Path():
> > +return str(obj)
> > +
> > +raise TypeError(
> > +f"Can't make object of type {type(obj)} JSON-serializable: 
> > {repr(obj)}"
> > +)
> > +
> > +
> > +def get_datacenter_of_vm(vm: vim.VirtualMachine) -> vim.Datacenter | None:
> >  """Find the Datacenter object a VM belongs to."""
> >  current = vm.parent
> >  while current:
> >  if isinstance(current, vim.Datacenter):
> >  return current
> >  current = current.parent
> > -return None
> > +return
>
> mypy does not seem to like this change :)
>
> listvms.py:157: error: Return value expected  [return-value]

Interesting! I got `rufflsp`, which doesn't seem to warn me here. Thanks
for pointing that out, will add that in v2.

>
>
> >  
> >  
> > -def list_vms(service_instance: vim.ServiceInstance) -> List[vim.VirtualMachine]:
> > +def list_vms(service_instance: vim.ServiceInstance) -> list[vim.VirtualMachine]:
> >  """List all VMs on the ESXi/vCenter server."""
> >  content = service_instance.content
> >  vm_view = content.viewManager.CreateContainerView(
> > @@ -32,39 +75,36 @@ def list_vms(service_instance: vim.ServiceInstance) -> List[vim.VirtualMachine]:
> >  vm_view.Destroy()
> >  return vms
> >  
> > -def parse_file_path(path):
> > +def parse_file_path(path) -> tuple[str, Path]:
> >  """Parse a path of the form '[datastore] file/path'"""
> >  datastore_name, relative_path = path.split('] ', 1)
> >  datastore_name = datastore_name.strip('[')
> > -return (datastore_name, relative_path)
> > +return (datastore_name, Path(relative_path))
> >  
> > -def get_vm_vmx_info(vm: vim.VirtualMachine) -> Dict[str, str]:
> > +def get_vm_vmx_info(vm: vim.VirtualMachine) -> VmVmxInfo:
> >  """Extract VMX file path and checksum from a VM object."""
> >  datastore_name, relative_vmx_path = parse_file_path(vm.config.files.vmPathName)
> > -return {
> > -'datastore': datastore_name,
> > -'path': relative_vmx_path,
> > -'checksum': vm.config.vmxConfigChecksum.hex() if vm.config.vmxConfigChecksum else 'N/A'
> > -}
> >  
> > -def get_vm_disk_info(vm: vim.VirtualMachine) -> Dict[str, int]:
> > +return VmVmxInfo(
> > +datastore=datastore_name,
> > +path=relative_vmx_path,
> > +chec

[pve-devel] vSwitch gui

2024-03-20 Thread Gilberto Ferreira via pve-devel
--- Begin Message ---
Hi folks


Sorry to use this channel to send this message, but I wonder if Proxmox is
considering some visual GUI for network devices, like VMware does.
We already know that there will be a big influx of users coming to Proxmox
from the now abandoned ESXi base.

So this GUI would be a neat feature.

Something such this project for instance:

https://github.com/nbonnand/ovs-toolbox


Thanks, and again, sorry for disturbing the list.
--- End Message ---
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] vSwitch gui

2024-03-20 Thread Gilberto Ferreira via pve-devel
--- Begin Message ---
or this one here
https://github.com/PLVision/open_vmonitor






On Wed, 20 Mar 2024 at 09:04, Gilberto Ferreira via pve-devel <
pve-devel@lists.proxmox.com> wrote:

>
>
>
> -- Forwarded message --
> From: Gilberto Ferreira 
> To: Proxmox VE development discussion 
> Cc:
> Bcc:
> Date: Wed, 20 Mar 2024 09:03:53 -0300
> Subject: vSwitch gui
> Hi folks
>
>
> Sorry to use this channel to send this message, but I wonder if Proxmox is
> considering some visual GUI for network devices, like VMware does.
> We already know that there will be a big influx of users coming to Proxmox
> from the now abandoned ESXi base.
>
> So this GUI would be a neat feature.
>
> Something such this project for instance:
>
> https://github.com/nbonnand/ovs-toolbox
>
>
> Thanks, and again, sorry for disturbing the list.
>
>
>
> -- Forwarded message --
> From: Gilberto Ferreira via pve-devel 
> To: Proxmox VE development discussion 
> Cc: Gilberto Ferreira 
> Bcc:
> Date: Wed, 20 Mar 2024 09:03:53 -0300
> Subject: [pve-devel] vSwitch gui
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
--- End Message ---
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v1 pve-esxi-import-tools 4/5] listvms: add arg parser, context manager for connections, fetch helper

2024-03-20 Thread Wolfgang Bumiller
On Tue, Mar 19, 2024 at 04:32:49PM +0100, Max Carrara wrote:
> +@contextmanager
> +def connect_to_esxi_host(args: EsxiConnectonArgs) -> vim.ServiceInstance:
> +"""Opens a connection to an ESXi host with the given username and 
> password
> +contained in the password file.
> +"""
> +ssl_context = (
> +ssl._create_unverified_context()
> +if args.skip_cert_verification
> +else None
> +)
> +
> +with open(args.password_file) as pw_file:
> +password = pw_file.read().strip()

This strips all whitespace from both sides, which is not what we want.
(Not that I particularly care whether esxi even allows spaces in
passwords at all...)
The old code specifically only stripped a single trailing *newline*,
mainly for when you edit the file with eg. vim which defaults to adding
one...
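
Roughly, a sketch of the old behaviour (assuming Python >= 3.9 for
`str.removesuffix`):

```
with open(args.password_file) as pw_file:
    password = pw_file.read()

# drop exactly one trailing newline (e.g. added by an editor),
# but keep all other whitespace that may be part of the password
password = password.removesuffix("\n")
```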


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 1/3] stop cleanup: remove unnecessary tpmstate cleanup

2024-03-20 Thread Dominik Csapak
tpmstate0 is already included in `get_vm_volumes`, and our only storage
plugin that has unmap_volume implemented is the RBDPlugin, where we call
unmap in `deactivate_volume`. So it's already unmapped by the
`deactivate_volumes` calls above.

Signed-off-by: Dominik Csapak 
---
 PVE/QemuServer.pm | 8 
 1 file changed, 8 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ef3aee20..d53e9693 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6167,14 +6167,6 @@ sub vm_stop_cleanup {
if (!$keepActive) {
my $vollist = get_vm_volumes($conf);
PVE::Storage::deactivate_volumes($storecfg, $vollist);
-
-   if (my $tpmdrive = $conf->{tpmstate0}) {
-   my $tpm = parse_drive("tpmstate0", $tpmdrive);
-	my ($storeid, $volname) = PVE::Storage::parse_volume_id($tpm->{file}, 1);
-   if ($storeid) {
-   PVE::Storage::unmap_volume($storecfg, $tpm->{file});
-   }
-   }
}
 
foreach my $ext (qw(mon qmp pid vnc qga)) {
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 2/3] migrate: call vm_stop_cleanup after stopping in phase3_cleanup

2024-03-20 Thread Dominik Csapak
we currently only call deactivate_volumes, but we actually want to call
the whole vm_stop_cleanup, since that is not invoked by the vm_stop
above (we cannot parse the config anymore) and might do other cleanups
we also want to do (like mdev cleanup).

For this to work properly we have to clone the original config at the
beginning, since we might modify the volids.

To get a better log output, add a `noerr` parameter to `vm_stop_cleanup`
that is enabled by default and decides if we `die` or `warn` on an
error. That way we can catch the error in the migrate code and
log it properly.

Signed-off-by: Dominik Csapak 
---
 PVE/QemuMigrate.pm | 12 ++--
 PVE/QemuServer.pm  | 12 ++--
 2 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index b3570770..1be12bf1 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -5,6 +5,7 @@ use warnings;
 
 use IO::File;
 use IPC::Open2;
+use Storable qw(dclone);
 use Time::HiRes qw( usleep );
 
 use PVE::Cluster;
@@ -1460,7 +1461,8 @@ sub phase3_cleanup {
 
 my $tunnel = $self->{tunnel};
 
-my $sourcevollist = PVE::QemuServer::get_vm_volumes($conf);
+# we'll need an unmodified copy of the config later for the cleanup
+my $oldconf = dclone($conf);
 
 if ($self->{volume_map} && !$self->{opts}->{remote}) {
my $target_drives = $self->{target_drive};
@@ -1591,12 +1593,10 @@ sub phase3_cleanup {
$self->{errors} = 1;
 }
 
-# always deactivate volumes - avoid lvm LVs to be active on several nodes
-eval {
-   PVE::Storage::deactivate_volumes($self->{storecfg}, $sourcevollist);
-};
+    # stop with nocheck does not do a cleanup, so do it here with the original config
+    eval { PVE::QemuServer::vm_stop_cleanup($self->{storecfg}, $vmid, $oldconf, 0) };
 if (my $err = $@) {
-   $self->log('err', $err);
+   $self->log('err', "cleanup for vm failed - $err");
$self->{errors} = 1;
 }
 
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index d53e9693..54f73093 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6160,7 +6160,9 @@ sub cleanup_pci_devices {
 }
 
 sub vm_stop_cleanup {
-my ($storecfg, $vmid, $conf, $keepActive, $apply_pending_changes) = @_;
+    my ($storecfg, $vmid, $conf, $keepActive, $apply_pending_changes, $noerr) = @_;
+
+$noerr //= 1; # defaults to warning
 
 eval {
 
@@ -6186,7 +6188,13 @@ sub vm_stop_cleanup {
 
	vmconfig_apply_pending($vmid, $conf, $storecfg) if $apply_pending_changes;
 };
-warn $@ if $@; # avoid errors - just warn
+if (my $err = $@) {
+   if ($noerr) {
+   warn $err;
+   } else {
+   die $err;
+   }
+}
 }
 
 # call only in locked context
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager 3/3] ui: adapt migration window to precondition api change

2024-03-20 Thread Dominik Csapak
we now return the 'not_allowed_nodes' also if the vm is running, when it
has mapped resources. So do those checks independently, so that the
user gets instant feedback on where those resources exist.

Signed-off-by: Dominik Csapak 
---
 www/manager6/window/Migrate.js | 24 
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index 21806d50..f20251b5 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -104,7 +104,7 @@ Ext.define('PVE.window.Migrate', {
onTargetChange: function(nodeSelector) {
	// Always display the storages of the currently seleceted migration target

this.lookup('pveDiskStorageSelector').setNodename(nodeSelector.value);
-   this.checkMigratePreconditions();
+   this.checkMigratePreconditions(true);
},
 
startMigration: function() {
@@ -214,12 +214,12 @@ Ext.define('PVE.window.Migrate', {
migration.possible = true;
}
migration.preconditions = [];
+   let target = me.lookup('pveNodeSelector').value;
+   let disallowed = migrateStats.not_allowed_nodes?.[target] ?? {};
 
if (migrateStats.allowed_nodes) {
migration.allowedNodes = migrateStats.allowed_nodes;
-   let target = me.lookup('pveNodeSelector').value;
if (target.length && 
!migrateStats.allowed_nodes.includes(target)) {
-   let disallowed = migrateStats.not_allowed_nodes[target] ?? 
{};
if (disallowed.unavailable_storages !== undefined) {
let missingStorages = 
disallowed.unavailable_storages.join(', ');
 
@@ -230,17 +230,17 @@ Ext.define('PVE.window.Migrate', {
severity: 'error',
});
}
+   }
+   }
 
-		if (disallowed['unavailable-resources'] !== undefined) {
-		    let unavailableResources = disallowed['unavailable-resources'].join(', ');
+	if (disallowed['unavailable-resources'] !== undefined) {
+	    let unavailableResources = disallowed['unavailable-resources'].join(', ');
 
-   migration.possible = false;
-   migration.preconditions.push({
-		    text: 'Mapped Resources (' + unavailableResources + ') not available on selected target. ',
-   severity: 'error',
-   });
-   }
-   }
+   migration.possible = false;
+   migration.preconditions.push({
+	    text: 'Mapped Resources (' + unavailableResources + ') not available on selected target. ',
+   severity: 'error',
+   });
}
 
let blockingResources = [];
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager 1/3] bulk migrate: improve precondition checks

2024-03-20 Thread Dominik Csapak
this now takes into account the 'not_allowed_nodes' hash we get from the
api call. With that, we can now limit the 'local_resources' check for
online vms only, as for offline guests, the 'unavailable-resources' hash
already includes mapped devices that don't exist on the target node.

This now also includes unavailable storages on target nodes.

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Nodes.pm | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index cc5ee65e..1d5c68f5 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -2241,11 +2241,23 @@ my $create_migrate_worker = sub {
	$invalidConditions .= join(', ', map { $_->{volid} } @{$preconditions->{local_disks}});
}
 
-   if (@{$preconditions->{local_resources}}) {
+   if ($online && scalar($preconditions->{local_resources}->@*)) {
$invalidConditions .= "\n  Has local resources: ";
	$invalidConditions .= join(', ', @{$preconditions->{local_resources}});
}
 
+   if (my $not_allowed_nodes = $preconditions->{not_allowed_nodes}) {
+	if (my $unavail_storages = $not_allowed_nodes->{$target}->{unavailable_storages}) {
+   $invalidConditions .= "\n  Has unavailable storages: ";
+   $invalidConditions .= join(', ', $unavail_storages->@*);
+   }
+
+	if (my $unavail_resources = $not_allowed_nodes->{$target}->{'unavailable-resources'}) {
+   $invalidConditions .= "\n  Has unavailable resources ";
+   $invalidConditions .= join(', ', $unavail_resources->@*);
+   }
+   }
+
if ($invalidConditions && $invalidConditions ne '') {
print STDERR "skip VM $vmid - precondition check failed:";
die "$invalidConditions\n";
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager 2/3] bulk migrate: include checks for live-migratable local resources

2024-03-20 Thread Dominik Csapak
those should be able to migrate even for online vms. If the mapping does
not exist on the target node, that will be caught further down anyway.

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Nodes.pm | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 1d5c68f5..7397101b 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -2241,9 +2241,18 @@ my $create_migrate_worker = sub {
	$invalidConditions .= join(', ', map { $_->{volid} } @{$preconditions->{local_disks}});
}
 
+	# for a live migration all local_resources must be marked as live-migratable
if ($online && scalar($preconditions->{local_resources}->@*)) {
-   $invalidConditions .= "\n  Has local resources: ";
-	    $invalidConditions .= join(', ', @{$preconditions->{local_resources}});
+   my $resource_not_live = [];
+   for my $resource ($preconditions->{local_resources}->@*) {
+	    next if grep $resource, $preconditions->{'mapped-with-live-migration'}->@*;
+   push $resource_not_live->@*, $resource;
+   }
+
+   if (scalar($resource_not_live->@*)) {
+   $invalidConditions .= "\n  Has local resources not marked as 
live migratable: ";
+   $invalidConditions .= join(', ', $resource_not_live->@*);
+   }
}
 
if (my $not_allowed_nodes = $preconditions->{not_allowed_nodes}) {
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 3/3] api: include not mapped resources for running vms in migrate preconditions

2024-03-20 Thread Dominik Csapak
so that we can show a proper warning in the migrate dialog and check it
in the bulk migrate precondition check

the unavailable_storages and allowed_nodes should be the same as before

Signed-off-by: Dominik Csapak 
---
not super happy with this partial approach, we probably should just
always return the 'allowed_nodes' and 'not_allowed_nodes' and change
the gui to handle the running vs not running state?

 PVE/API2/Qemu.pm | 27 +++
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 8581a529..b0f155f7 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4439,7 +4439,7 @@ __PACKAGE__->register_method({
not_allowed_nodes => {
type => 'object',
optional => 1,
-   description => "List not allowed nodes with additional 
informations, only passed if VM is offline"
+   description => "List not allowed nodes with additional 
informations",
},
local_disks => {
type => 'array',
@@ -4496,25 +4496,28 @@ __PACKAGE__->register_method({
 
    # if vm is not running, return target nodes where local storage/mapped devices are available
    # for offline migration
+   my $checked_nodes = {};
+   my $allowed_nodes = [];
if (!$res->{running}) {
-   $res->{allowed_nodes} = [];
-	my $checked_nodes = PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
+	$checked_nodes = PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
delete $checked_nodes->{$localnode};
+   }
 
-   foreach my $node (keys %$checked_nodes) {
-   my $missing_mappings = $missing_mappings_by_node->{$node};
-   if (scalar($missing_mappings->@*)) {
-	    $checked_nodes->{$node}->{'unavailable-resources'} = $missing_mappings;
-   next;
-   }
+	foreach my $node ((keys $checked_nodes->%*, keys $missing_mappings_by_node->%*)) {
+   my $missing_mappings = $missing_mappings_by_node->{$node};
+   if (scalar($missing_mappings->@*)) {
+	    $checked_nodes->{$node}->{'unavailable-resources'} = $missing_mappings;
+   next;
+   }
 
+   if (!$res->{running}) {
if (!defined($checked_nodes->{$node}->{unavailable_storages})) {
-   push @{$res->{allowed_nodes}}, $node;
+   push $allowed_nodes->@*, $node;
}
-
}
-   $res->{not_allowed_nodes} = $checked_nodes;
}
+    $res->{not_allowed_nodes} = $checked_nodes if scalar(keys($checked_nodes->%*)) || !$res->{running};
+    $res->{allowed_nodes} = $allowed_nodes if scalar($allowed_nodes->@*) || !$res->{running};
 
my $local_disks = &$check_vm_disks_local($storecfg, $vmconf, $vmid);
$res->{local_disks} = [ values %$local_disks ];;
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH docs v2] system-requirements: mention that SSDs with PLP should be used

2024-03-20 Thread Aaron Lauterer
Signed-off-by: Aaron Lauterer 
---
Thanks for the feedback. I think this variant conveys the important
information in simple English.

 pve-system-requirements.adoc | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/pve-system-requirements.adoc b/pve-system-requirements.adoc
index bc3689d..98279bb 100644
--- a/pve-system-requirements.adoc
+++ b/pve-system-requirements.adoc
@@ -49,7 +49,9 @@ Recommended System Requirements
  (BBU) or non-RAID for ZFS and Ceph. Neither ZFS nor Ceph are compatible with a
  hardware RAID controller.
 ** Shared and distributed storage is possible.
-
+** Enterprise grade SSDs are recommended for good performance. Check for power
+  loss protection (PLP) to avoid using consumer-grade SSDs, which are not
+  recommended.
 * Redundant (Multi-)Gbit NICs, with additional NICs depending on the preferred
   storage technology and cluster setup.
 
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH docs] system-requirements: mention that SSDs with PLP should be used

2024-03-20 Thread Maximiliano Sandoval
Fiona Ebner  writes:

> IMHO, it's still succinct enough. But you could also go for "avoid
> consumer grade SSDs, whose use is discouraged."

Neither consumer nor enterprise grade disks are super well defined; why
not just discourage disks without PLP?

--
Maximiliano


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager] ui: storage: esxi: check 'skip certificate verification' by default

2024-03-20 Thread Dominik Csapak
needing one less step when adding the storage, assuming most esxi
certificates are self-signed.

Signed-off-by: Dominik Csapak 
---
 www/manager6/storage/ESXIEdit.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager6/storage/ESXIEdit.js b/www/manager6/storage/ESXIEdit.js
index b105b1e3..d11232a0 100644
--- a/www/manager6/storage/ESXIEdit.js
+++ b/www/manager6/storage/ESXIEdit.js
@@ -48,7 +48,7 @@ Ext.define('PVE.storage.ESXIInputPanel', {
xtype: 'proxmoxcheckbox',
name: 'skip-cert-verification',
fieldLabel: gettext('Skip Certificate Verification'),
-   value: false,
+   value: true,
uncheckedValue: 0,
defaultValue: 0,
deleteDefaultValue: !me.isCreate,
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH storage] esxi: detect correct os type in 'other' family

2024-03-20 Thread Gabriel Goller
This patch introduces the conversion table for all possible OS Types
that are in the VMWare 'other' family and sets the pve counterpart.
Our default OS Type is 'linux', so including mappings to 'other' makes
sense.

Signed-off-by: Gabriel Goller 
---
 src/PVE/Storage/ESXiPlugin.pm |   43 +-
 src/PVE/Storage/ESXiPlugin.pm.tdy | 1216 +
 2 files changed, 1255 insertions(+), 4 deletions(-)
 create mode 100644 src/PVE/Storage/ESXiPlugin.pm.tdy

diff --git a/src/PVE/Storage/ESXiPlugin.pm b/src/PVE/Storage/ESXiPlugin.pm
index b36cea8..6e80caa 100644
--- a/src/PVE/Storage/ESXiPlugin.pm
+++ b/src/PVE/Storage/ESXiPlugin.pm
@@ -849,7 +849,7 @@ sub is_windows {
 return;
 }
 
-my %guest_types = (
+my %guest_types_windows = (
 dos => 'other',
 winNetBusiness  => 'w2k3',
 windows9=> 'win10',
@@ -888,14 +888,49 @@ my %guest_types = (
 'winXPPro-64'   => 'wxp',
 );
 
+my %guest_types_other = (
+'freeBSD11'=> 'other',
+'freeBSD11-64' => 'other',
+'freeBSD12'=> 'other',
+'freeBSD12-64' => 'other',
+'freeBSD13'=> 'other',
+'freeBSD13-64' => 'other',
+'freeBSD14'=> 'other',
+'freeBSD14-64' => 'other',
+'freeBSD'  => 'other',
+'freeBSD-64'   => 'other',
+'os2'  => 'other',
+'netware5' => 'other',
+'netware6' => 'other',
+'solaris10'=> 'solaris',
+'solaris10-64' => 'solaris',
+'solaris11-64' => 'solaris',
+'other'=> 'other',
+'other-64' => 'other',
+'openserver5'  => 'other',
+'openserver6'  => 'other',
+'unixware7'=> 'other',
+'eComStation'  => 'other',
+'eComStation2' => 'other',
+'solaris8' => 'solaris',
+'solaris9' => 'solaris',
+'vmkernel' => 'other',
+'vmkernel5'=> 'other',
+'vmkernel6'=> 'other',
+'vmkernel65'   => 'other',
+'vmkernel7'=> 'other',
+'vmkernel8'=> 'other',
+);
+
 # Best effort translation from vmware guest os type to pve.
 # Returns a tuple: `(pve-type, is_windows)`
 sub guest_type {
 my ($self) = @_;
-
 if (defined(my $guest = $self->{guestOS})) {
-   if (defined(my $known = $guest_types{$guest})) {
-   return ($known, 1);
+   if (defined(my $known_windows = $guest_types_windows{$guest})) {
+   return ($known_windows, 1);
+   } elsif (defined(my $known_other = $guest_types_other{$guest})) {
+   return ($known_other, 0);
}
# This covers all the 'Mac OS' types AFAICT
return ('other', 0) if $guest =~ /^darwin/;
diff --git a/src/PVE/Storage/ESXiPlugin.pm.tdy 
b/src/PVE/Storage/ESXiPlugin.pm.tdy
new file mode 100644
index 000..2a08986
--- /dev/null
+++ b/src/PVE/Storage/ESXiPlugin.pm.tdy
@@ -0,0 +1,1216 @@
+package PVE::Storage::ESXiPlugin;
+
+use strict;
+use warnings;
+
+use Fcntl  qw(F_GETFD F_SETFD FD_CLOEXEC);
+use JSON   qw(from_json);
+use POSIX  ();
+use File::Path qw(mkpath remove_tree);
+
+use PVE::Network;
+use PVE::Systemd;
+use PVE::Tools qw(file_get_contents file_set_contents run_command);
+
+use base qw(PVE::Storage::Plugin);
+
+my $ESXI_LIST_VMS  = '/usr/libexec/pve-esxi-import-tools/listvms.py';
+my $ESXI_FUSE_TOOL = '/usr/libexec/pve-esxi-import-tools/esxi-folder-fuse';
+my $ESXI_PRIV_DIR  = '/etc/pve/priv/import/esxi';
+
+#
+# Configuration
+#
+
+sub type {
+return 'esxi';
+}
+
+sub plugindata {
+return {
+content => [ { import => 1 },{ import => 1 } ],
+format  => [ { raw=> 1, qcow2 => 1, vmdk => 1 }, 'raw' ],
+};
+}
+
+sub properties {
+return {
+'skip-cert-verification' => {
+description =>
+'Disable TLS certificate verification, only enable on fully trusted networks!',
+type=> 'boolean',
+default => 'false',
+},
+};
+}
+
+sub options {
+return {
+nodes   => { optional => 1 },
+shared  => { optional => 1 },
+disable => { optional => 1 },
+content => { optional => 1 },
+
+# FIXME: bwlimit => { optional => 1 },
+server   => {},
+username => {},
+password => { optional => 1 },
+'skip-cert-verification' => { optional => 1 },
+};
+}
+
+sub esxi_cred_file_name {
+my ($storeid) = @_;
+return "/etc/pve/priv/storage/${storeid}.pw";
+}
+
+sub esxi_delete_credentials {
+my ($storeid) = @_;
+
+if ( my $cred_file = get_cred_file($storeid) ) {
+unlink($cred_file)
+  or warn "removing esxi credientials '$cred_file' failed: $!\n";
+}
+}
+
+sub esxi_set_credentials {
+my ( $password, $storeid ) = @_;
+
+my $cred_file = esxi_cred_file_name($storeid);
+mkdir "/etc/pve/priv/storage";
+
+PVE::Tools::file_set_contents( $cred_file, $password );
+
+return $cred_file;
+}
+
+sub get_cred_file {
+my ($storeid) = @_;
+
+my

[pve-devel] [PATCH docs] zfs: add a note about dRaid performance

2024-03-20 Thread Folke Gleumes
Based on statements from the openZFS documentation where it is described
as providing "the same level of redundancy and performance as raidz" [0].

[0] https://openzfs.github.io/openzfs-docs/Basic%20Concepts/dRAID%20Howto.html
---
 local-zfs.adoc | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/local-zfs.adoc b/local-zfs.adoc
index a1a14e4..5db94cc 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -180,6 +180,8 @@ A 'RAIDZ' of any redundancy level will approximately behave 
like a single disk
 in regard to IOPS with a lot of bandwidth. How much bandwidth depends on the
 size of the RAIDZ vdev and the redundancy level.
 
+A 'dRAID' should match the performance of an equivalent 'RAIDZ' pool.
+
 For running VMs, IOPS is the more important metric in most situations.
 
 
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH docs] zfs: add a note about dRaid performance

2024-03-20 Thread Aaron Lauterer




On  2024-03-20  16:52, Folke Gleumes wrote:

Based on statements from the openZFS documentation where it is described
as providing "the same level of redundancy and performance as raidz" [0].

[0] https://openzfs.github.io/openzfs-docs/Basic%20Concepts/dRAID%20Howto.html
---
  local-zfs.adoc | 2 ++
  1 file changed, 2 insertions(+)




applied with a small change: added 'pool' after dRaid for better
readability, thanks!



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH v4 pve-storage 06/16] cephconfig: support line-continuations in parser

2024-03-20 Thread Max Carrara
On Tue Mar 19, 2024 at 4:59 PM CET, Max Carrara wrote:
> On Tue Mar 19, 2024 at 10:37 AM CET, Fabian Grünbichler wrote:
> > On March 5, 2024 4:07 pm, Max Carrara wrote:
> > > Ceph's docs state the following [0]:
> > >> The backslash character `\` is used as the line-continuation marker
> > >> that combines the next line with the current one.
> > > 
> > > This commit implements the support of such line-continuations in our
> > > parser.
> > > 
> > > The line following a line ending with a '\' has its whitespace
> > > preserved, which matches the behaviour in Ceph's original
> > > implementation [1]. In other words, leading and trailing whitespace is
> > > not stripped from a continued line.
> >
> > it's actually a bit more complicated.. ceph only supports line
> > continuations inside values (well, in key value lines after the key ;)),
> > and only if they are unquoted..

Upon further research and confirming the behaviour via `ceph-conf`
(thanks for the tip btw!) line continuations are in fact supported in
different parts as well.

Consider the following example 'ceph.conf' file:
```
[clie\
nt]  # some comment
foo\
\
\
\
= \
bar
```

The continued `client` section header does actually get parsed by
`ceph-conf` without any issues - the trailing comment and whitespace
are also ignored.

Where it gets really interesting is the continuation right after 'foo':
Because keys are defined using `raw[]` [0], whatever is skipped by the
parser is still included in the parsed output [1].

This has the consequence that the four continued lines are in fact not
skipped and instead read as literal newline characters.

After the equals sign, the line continuation is skipped as expected.

By providing literal newlines via the shell, the above can easily be
verified:

$ ceph-conf -c ceph_cancer.conf -s client foo^M^M^M^M
bar

(The ^M is a literal newline and can usually be obtained by typing
CTRL+V, Enter in your shell.)

To make matters even worse, quoted values may in fact be *directly*
followed by continuations (`ceph-conf` fails otherwise):

```
[client]
foo = "bar"\

baz = qux
```

The above is considered "correct" because the escaped newline counts as
whitespace. If you were to put some spaces into the empty line after the
"foo" key, these would be skipped as well.

For completeness's sake, this also parses:

```
[client]
foo = "bar"\
   # some comment
baz = qux
```

However, the following is invalid:

```
[client]
foo = "bar"\
baz = qux
```

... because the parser sees:

```
[client]
foo = "bar"baz = qux
```

... which is not allowed, because a quoted value may only be followed
what the grammar defines as "empty_line" [2].

So, this doesn't really make the parsing logic regarding
line continuations any simpler:

  1. Section headers may contain line continuations
  2. Section headers may be followed by whitespace + comments (after ']')
  3. Keys are parsed "raw" and may therefore be continued
 --> Will probably just not handle this case, as there are no config
 keys that contain newline characters or anything of the sort
 - why would there be? Why would a user need this?
  4. Unquoted values may contain line continuations
  5. Quoted values may be *directly* followed by a line continuation
 character, as long as the remaining stuff is whitespace or a
 comment
  6. Bonus point: Quoted values MUST NOT *contain* line continuations,
 as they're parsed as `lexeme[]`s [3]

... so, see you in v5 ;)

[0]: 
https://git.proxmox.com/?p=ceph.git;a=blob;f=ceph/src/common/ConfUtils.cc;h=2f78fd02bf9e27467275752e6f3bca0c5e3946ce;hb=refs/heads/master#l182
[1]: 
https://www.boost.org/doc/libs/1_53_0/libs/spirit/doc/html/spirit/qi/reference/directive/raw.html
[2]: 
https://git.proxmox.com/?p=ceph.git;a=blob;f=ceph/src/common/ConfUtils.cc;h=2f78fd02bf9e27467275752e6f3bca0c5e3946ce;hb=refs/heads/master#l188
[3]: 
https://www.boost.org/doc/libs/1_53_0/libs/spirit/doc/html/spirit/qi/reference/directive/lexeme.html
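
For v5, a rough sketch of a pre-processing pass that joins continued lines
before the existing per-line parsing - note it only covers the unquoted cases
(1), (2) and (4) above, not the quoted-value subtleties (5) and (6):

```
my @joined;
for my $line (split /\n/, $raw) {
    # if the previous line ended with a backslash, strip the marker and
    # append the current line verbatim (its whitespace is preserved)
    if (@joined && $joined[-1] =~ s/\\$//) {
	$joined[-1] .= $line;
    } else {
	push @joined, $line;
    }
}
```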

>
> As mentioned in my other reply, I'll probably have to revise the whole
> parsing logic to take that into account... but thanks for being so
> thorough!
>
> >
> > > 
> > > [0]: 
> > > https://docs.ceph.com/en/reef/rados/configuration/ceph-conf/#changes-introduced-in-octopus
> > > [1]: 
> > > https://git.proxmox.com/?p=ceph.git;a=blob;f=ceph/src/common/ConfUtils.cc;h=2f78fd02bf9e27467275752e6f3bca0c5e3946ce;hb=refs/heads/master#l262
> > > 
> > > Signed-off-by: Max Carrara 
> > > ---
> > > Changes v2 --> v3:
> > >   * new
> > > Changes v3 --> v4:
> > >   * none
> > > 
> > >  src/PVE/CephConfig.pm | 28 
> > >  1 file changed, 24 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/src/PVE/CephConfig.pm b/src/PVE/CephConfig.pm
> > > index 74a92eb..80f71b0 100644
> > > --- a/src/PVE/CephConfig.pm
> > > +++ b/src/PVE/CephConfig.pm
> > > @@ -19,13 +19,33 @@ sub parse_ceph_config {
> > >  return $cfg if !defined($raw);
> > >  
> > >  my @lines = split /\n/, $raw;
> > > + 

Re: [pve-devel] [PATCH v1 pve-esxi-import-tools 4/5] listvms: add arg parser, context manager for connections, fetch helper

2024-03-20 Thread Max Carrara
On Wed Mar 20, 2024 at 1:45 PM CET, Wolfgang Bumiller wrote:
> On Tue, Mar 19, 2024 at 04:32:49PM +0100, Max Carrara wrote:
> > +@contextmanager
> > +def connect_to_esxi_host(args: EsxiConnectonArgs) -> vim.ServiceInstance:
> > +"""Opens a connection to an ESXi host with the given username and 
> > password
> > +contained in the password file.
> > +"""
> > +ssl_context = (
> > +ssl._create_unverified_context()
> > +if args.skip_cert_verification
> > +else None
> > +)
> > +
> > +with open(args.password_file) as pw_file:
> > +password = pw_file.read().strip()
>
> This strips all whitespace from both sides, which is not what we want.
> (Not that I particularly care whether esxi even allows spaces in
> passwords at all...)
> The old code specifically only stripped a single trailing *newline*,
> mainly for when you edit the file with eg. vim which defaults to adding
> one...

Oh, thanks for pointing this out - I had assumed that, well, passwords
wouldn't contain spaces ... ever.

Mea culpa; will fix in v2.



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel