[pve-devel] applied: [PATCH pve-common] fix #4299: network : disable_ipv6: fix path checking

2023-01-16 Thread Wolfgang Bumiller
Sorry for the delay, the change definitely doesn't hurt, I was just
wondering how it would happen.

It's now applied, thanks!

On Thu, Oct 20, 2022 at 12:24:29AM +0200, Alexandre Derumier wrote:
> It's possible to have a
> /proc/sys/net/ipv6/ directory
> 
> but no
> /proc/sys/net/ipv6/conf/$iface/disable_ipv6
> 
> Signed-off-by: Alexandre Derumier 
> ---
>  src/PVE/Network.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/PVE/Network.pm b/src/PVE/Network.pm
> index c468e40..9d726cd 100644
> --- a/src/PVE/Network.pm
> +++ b/src/PVE/Network.pm
> @@ -210,8 +210,8 @@ my $cond_create_bridge = sub {
>  
>  sub disable_ipv6 {
>  my ($iface) = @_;
> -return if !-d '/proc/sys/net/ipv6'; # ipv6 might be completely disabled
>  my $file = "/proc/sys/net/ipv6/conf/$iface/disable_ipv6";
> +return if !-e $file; # ipv6 might be completely disabled
>  open(my $fh, '>', $file) or die "failed to open $file for writing: $!\n";
> print {$fh} "1\n" or die "failed to disable link-local ipv6 for $iface\n";
>  close($fh);
> -- 
> 2.30.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH docs v4 4/5] added vIOMMU documentation

2023-01-16 Thread Wolfgang Bumiller
On Fri, Jan 13, 2023 at 02:31:36PM +0100, Markus Frank wrote:
> 
> 
> On 1/13/23 11:09, Wolfgang Bumiller wrote:
> > On Fri, Nov 25, 2022 at 03:08:56PM +0100, Markus Frank wrote:
> > > Signed-off-by: Markus Frank 
> > > ---
> > >   qm-pci-passthrough.adoc | 25 +
> > >   1 file changed, 25 insertions(+)
> > > 
> > > diff --git a/qm-pci-passthrough.adoc b/qm-pci-passthrough.adoc
> > > index fa6ba35..7ed4d49 100644
> > > --- a/qm-pci-passthrough.adoc
> > > +++ b/qm-pci-passthrough.adoc
> > > @@ -389,6 +389,31 @@ Example configuration with an `Intel GVT-g vGPU` (`Intel Skylake 6700k`):
> > >   With this set, {pve} automatically creates such a device on VM start, and
> > >   cleans it up again when the VM stops.
> > > +[[qm_pci_viommu]]
> > > +vIOMMU
> > > +~~
> > > +
> > > +vIOMMU enables the option to passthrough pci devices to Level-2 VMs
> > > +in Level-1 VMs via Nested Virtualisation.
> > > +
> > > +Host-Requirement: Set `intel_iommu=on` or `amd_iommu=on` depending on your
> > > +CPU.
> > 
> > And by "CPU" you mean kernel command line? ;-)
> 
> Host-Requirement: Add `intel_iommu=on` or `amd_iommu=on`
> depending on your CPU to your kernel command line.
> 
> like this?
> > 
> > > +
> > > +VM-Requirement: For both Intel and AMD CPUs you will have to set
> > > +`intel_iommu=on` as a Linux boot parameter in the vIOMMU-enabled-VM, because
> > > +Qemu implements the Intel variant.
> > 
> > ^ As mentioned, there does appear to be an amd_iommu device in the qemu
> > code, so would the amd variant work?
> > 
> > In my reply to the code patch I mentioned checking the host arch. But if
> > you say we can use intel_iommu on AMD as well, I'd say, if both work,
> > give the user a choice, otherwise we can of course just stick to the one
> > that works ;-)
> 
> intel_iommu works better on my AMD CPU than amd_iommu ;)

Can you define "better"?
My main concern is that if we don't give users the option to choose, the
only data point we have is yours ;-)
If we explicitly mention that you can use one on the other in the docs,
people can try it themselves and maybe we'll see some feedback on the
forums etc.

However, I'm fine with a patch for only the intel version for now as we
can always add an option later.

> Moreover, it adds an extra AMDVI-PCI device that uses the first PCI address:
> `kvm: -device VGA,id=vga,bus=pcie.0,addr=0x1: PCI: slot 1 function 0 not available for VGA, in use by AMDVI-PCI,id=(null)`

For that I'd say, try to add the AMDVI-PCI device manually to an
explicitly chosen slot. We need to avoid automatically added devices
like the plague, because moving them later can break live snapshots (and
windows).

> 
> I cannot find any good documentation for amd_iommu, but it also seems to
> have fewer features.

Less, or just not configurable? ;-)
I mean, if it works it works ;-)

> 
> $ qemu-system-x86_64 -device 'amd-iommu,help'
> amd-iommu options:
>   device-iotlb=<bool>    -  (default: false)
>   intremap=<OnOffAuto>   - on/off/auto (default: "auto")
>   pt=<bool>              -  (default: true)
> $ qemu-system-x86_64 -device 'intel-iommu,help'
> intel-iommu options:
>   aw-bits=<uint8>        -  (default: 39)
>   caching-mode=<bool>    -  (default: false)
>   device-iotlb=<bool>    -  (default: false)
>   dma-drain=<bool>       -  (default: true)
>   dma-translation=<bool> -  (default: true)
>   eim=<OnOffAuto>        - on/off/auto (default: "auto")
>   intremap=<OnOffAuto>   - on/off/auto (default: "auto")
>   pt=<bool>              -  (default: true)
>   snoop-control=<bool>   -  (default: false)
>   version=<uint32>       -  (default: 0)
>   x-buggy-eim=<bool>     -  (default: false)
>   x-pasid-mode=<bool>    -  (default: false)
>   x-scalable-mode=<bool> -  (default: false)





[pve-devel] [RFC manager] vzdump: exclude zfs control dirs by default

2023-01-16 Thread Fabian Grünbichler
else in the face of snapdir=visible on a ZFS-backed mountpoint/rootfs, creating
stop mode backups will fail (because automounting on access of
.zfs/snapshot/XXX fails), and restoring a suspend mode backup onto a ZFS
storage will fail (because an attempt to `mkdir /path/to/target/.zfs/snapshot/XXX`
fails - or worse, if the "zfs_admin_snapshot" module parameter is enabled, will
create an XXX snapshot for the newly-restored dataset).

the two sub directories of .zfs were chosen to decrease the chance of false
positives, since backing up or restoring the .zfs dir itself is unproblematic.

Signed-off-by: Fabian Grünbichler 
---

Notes:
see https://forum.proxmox.com/threads/restore-cannot-mkdir-permission-denied.121096

alternatively, this could also be handled in pve-container by checking for each
mountpoint and explicitly skipping .zfs only if that mountpoint is actually
backed by a ZFS storage..

if this patch is ACKed, the description of 'stdexcludes' in pve-guest-common
should probably also be updated..

 PVE/VZDump.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index a04837e7..9b9d37a8 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -542,6 +542,8 @@ sub new {
'/tmp/?*',
'/var/tmp/?*',
'/var/run/?*.pid',
+   '.zfs/snapshot',
+   '.zfs/shares',
;
 }
 
-- 
2.30.2
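To illustrate the intent of the new default excludes, here is a Python approximation. Note this is a sketch only: the real matching is done by the backup archiver's exclude globs, not by `fnmatch`, and the relative-pattern handling below (`endswith`) is a simplification:

```python
from fnmatch import fnmatch

# Approximation of the standard exclude list after this patch.
STD_EXCLUDES = [
    "/tmp/?*",
    "/var/tmp/?*",
    "/var/run/?*.pid",
    ".zfs/snapshot",  # ZFS control dirs added by this change
    ".zfs/shares",
]

def excluded(path):
    # Anchored patterns go through fnmatch; relative patterns like
    # ".zfs/snapshot" are treated as matching any path ending in them.
    return any(
        fnmatch(path, pat) or path.endswith("/" + pat)
        for pat in STD_EXCLUDES
    )
```

The two sub-directories are matched rather than `.zfs` itself, mirroring the commit message's point that backing up the `.zfs` dir itself is unproblematic.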





[pve-devel] applied: [RFC qemu-server] migration: nbd export: switch away from deprecated QMP command

2023-01-16 Thread Thomas Lamprecht
Am 02/12/2022 um 13:54 schrieb Fiona Ebner:
> The 'nbd-server-add' QMP command has been deprecated since QEMU 5.2 in
> favor of a more general 'block-export-add'.
> 
> When using 'nbd-server-add', QEMU internally converts the parameters
> and calls blk_exp_add() which is also used by 'block-export-add'. It
> does one more thing, namely calling nbd_export_set_on_eject_blk() to
> auto-remove the export from the server when the backing drive goes
> away. But that behavior is not needed in our case, stopping the NBD
> server removes the exports anyways.
> 
> It was checked with a debugger that the parameters to blk_exp_add()
> are still the same after this change. Well, the block node names are
> autogenerated and not consistent across invocations.
> 
> The alternative to using 'query-block' would be specifying a
> predictable 'node-name' for our '-drive' commandline. It's not that
> difficult for this use case, but in general one needs to be careful
> (e.g. it can't be specified for an empty CD drive, but would need to
> be set when inserting a CD later). Querying the actual 'node-name'
> seemed a bit more future-proof.
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> RFC, because I'm not sure which approach is better.

for now this works out fine, we can always switch to persistent node-names
if it turns out to have some relevant advantage.

> 
>  PVE/QemuServer.pm| 17 -
>  test/MigrationTest/QmMock.pm |  4 +++-
>  2 files changed, 19 insertions(+), 2 deletions(-)
> 
>

applied, thanks!
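For readers unfamiliar with the two QMP commands, the shape of the change looks roughly like this. These payloads are abbreviated and illustrative; the exact argument set qemu-server sends is not shown in this thread, and the node name placeholder reflects that it is looked up at runtime via 'query-block' rather than hard-coded:

```python
import json

# Deprecated since QEMU 5.2: exports a drive by its front-end device/drive id.
deprecated = {
    "execute": "nbd-server-add",
    "arguments": {"device": "drive-scsi0", "writable": True},
}

# The general replacement: exports a block node by node-name.
replacement = {
    "execute": "block-export-add",
    "arguments": {
        "type": "nbd",
        "id": "drive-scsi0",
        "node-name": "...",  # resolved via 'query-block', autogenerated names
        "writable": True,
    },
}

# QMP is JSON over a socket; both payloads serialize the same way.
wire = json.dumps(replacement)
```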





[pve-devel] applied: [PATCH pve-manager] fix #4393: ui: storage backup view: make pbs-specific columns sortable

2023-01-16 Thread Thomas Lamprecht
Am 16/12/2022 um 13:45 schrieb Stefan Hanreich:
> The sort order is analogous to how it behaves in the datastore content
> overview in pbs.
> 
> This means sorting in ascending order behaves as follows:
> 
> Verify State
> * failed
> * none
> * ok
> 
> Encryption
> * no
> * yes
> 
> For the encryption state there is theoretically a distinction between
> signed and encrypted, but as far as I can tell we do not render this
> distinction in PVE, which is why I chose to not make this distinction
> for sorting as well.
> 
> Signed-off-by: Stefan Hanreich 
> ---
>  www/manager6/Utils.js  |  7 +++
>  www/manager6/storage/BackupView.js | 12 
>  2 files changed, 19 insertions(+)
> 
> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
> index 8c118fa2..3dd287e3 100644
> --- a/www/manager6/Utils.js
> +++ b/www/manager6/Utils.js
> @@ -1963,6 +1963,13 @@ Ext.define('PVE.Utils', {
>  },
>  
>  tagCharRegex: /^[a-z0-9+_.-]+$/i,
> +
> +verificationStateOrder: {
> + 'failed': 0,
> + 'none': 1,
> + 'ok': 2,
> + '__default__': 3,
> +},

Not that your patch is really at fault here, but it made me notice again that we
really need to make Utils go away; it's a real dump for so much stuff that it's
just becoming gross..

It should be split up and stuff moved either directly to where it's actually used,
to specialized classes, even if on the smaller side, or to more generic but
sensible "aggregative" classes like one for Schema stuff (but not utils, helper,
tools anymore).

>  },
>  
>  singleton: true,
>

applied, thanks!
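The sort semantics described above (ascending: failed, none, ok, with unknown states last) can be sketched in Python; the real comparator lives in the ExtJS grid, so this is purely illustrative:

```python
# Mirrors the verificationStateOrder map from the patch:
# failed < none < ok, and anything else falls into the default bucket.
VERIFY_ORDER = {"failed": 0, "none": 1, "ok": 2}
DEFAULT_ORDER = 3  # the '__default__' bucket

def sort_by_verify_state(backups):
    """Sort backup records ascending by verification state."""
    return sorted(
        backups,
        key=lambda b: VERIFY_ORDER.get(b.get("verification"), DEFAULT_ORDER),
    )
```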





[pve-devel] [PATCH qemu-server v3] fix #4378: standardized error for ovmf files

2023-01-16 Thread Noel Ullreich
The error messages for missing OVMF_CODE and OVMF_VARS files were
inconsistent, and the error for a missing base vars file did not tell
you the expected path.

Signed-off-by: Noel Ullreich 
---
 changes from v1:
 * rebased to account for move from sub config_to_command to sub
   print_ovmf_drive_commandlines
 * left out check for existing EFI vars image in sub config_to_command
   since it was redundant

 changes from v2:
 * moved all checks to single sub get_ovmf_files

 PVE/QemuServer.pm | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index f4b15fd..b18c64e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3379,7 +3379,11 @@ sub get_ovmf_files($$$) {
$type .= '-ms' if $efidisk->{'pre-enrolled-keys'};
 }
 
-return $types->{$type}->@*;
+my ($ovmf_code, $ovmf_vars) = $types->{$type}->@*;
+die "EFI base image '$ovmf_code' not found\n" if ! -f $ovmf_code;
+die "EFI vars image '$ovmf_vars' not found\n" if ! -f $ovmf_vars;
+
+return ($ovmf_code, $ovmf_vars);
 }
 
 my $Arch2Qemu = {
@@ -3528,7 +3532,6 @@ my sub print_ovmf_drive_commandlines {
 my $d = $conf->{efidisk0} ? parse_drive('efidisk0', $conf->{efidisk0}) : undef;
 
 my ($ovmf_code, $ovmf_vars) = get_ovmf_files($arch, $d, $q35);
-die "uefi base image '$ovmf_code' not found\n" if ! -f $ovmf_code;
 
 my $var_drive_str = "if=pflash,unit=1,id=drive-efidisk0";
 if ($d) {
@@ -8076,7 +8079,6 @@ sub get_efivars_size {
 $efidisk //= $conf->{efidisk0} ? parse_drive('efidisk0', $conf->{efidisk0}) : undef;
 my $smm = PVE::QemuServer::Machine::machine_type_is_q35($conf);
 my (undef, $ovmf_vars) = get_ovmf_files($arch, $efidisk, $smm);
-die "uefi vars image '$ovmf_vars' not found\n" if ! -f $ovmf_vars;
 return -s $ovmf_vars;
 }
 
@@ -8104,7 +8106,6 @@ sub create_efidisk($$$) {
 my ($storecfg, $storeid, $vmid, $fmt, $arch, $efidisk, $smm) = @_;
 
 my (undef, $ovmf_vars) = get_ovmf_files($arch, $efidisk, $smm);
-die "EFI vars default image not found\n" if ! -f $ovmf_vars;
 
 my $vars_size_b = -s $ovmf_vars;
 my $vars_size = PVE::Tools::convert_size($vars_size_b, 'b' => 'kb');
-- 
2.30.2
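The idea of the patch, moving all existence checks into one place so every caller fails with the same wording including the expected path, can be sketched in Python. The helper name and signature here are hypothetical stand-ins for the Perl `get_ovmf_files`:

```python
import os

def check_ovmf_files(ovmf_code, ovmf_vars):
    """Single place for both firmware existence checks, so every caller
    gets a uniform error message naming the expected path (the point of
    #4378). Callers previously duplicated these checks with differing
    wording."""
    if not os.path.isfile(ovmf_code):
        raise FileNotFoundError(f"EFI base image '{ovmf_code}' not found")
    if not os.path.isfile(ovmf_vars):
        raise FileNotFoundError(f"EFI vars image '{ovmf_vars}' not found")
    return ovmf_code, ovmf_vars
```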






[pve-devel] applied-series: [PATCH manager v2 0/2] Ceph API update return schemas

2023-01-16 Thread Thomas Lamprecht
Am 23/12/2022 um 10:59 schrieb Aaron Lauterer:
> by adding more precise return definitions.
> 
> Patch 3/3 from the last version [0] is split into two patches.
> 
> The first handles a few small updates while the second one is a bit
> larger, adding more info for the cluster/ceph/metadata return schemas.
> 
> 
> 
> [0] https://lists.proxmox.com/pipermail/pve-devel/2022-December/055268.html
> 
> Aaron Lauterer (2):
>   api: ceph: update return schemas
>   api: ceph: metadata: update return schema
> 
>  PVE/API2/Ceph.pm |   7 +-
>  PVE/API2/Ceph/MON.pm |  11 ++-
>  PVE/API2/Ceph/OSD.pm |  10 +++
>  PVE/API2/Cluster/Ceph.pm | 171 ++-
>  4 files changed, 195 insertions(+), 4 deletions(-)
> 


applied both patches, thanks!





Re: [pve-devel] [PATCH common v4 1/6] VM start timeout config parameter in backend

2023-01-16 Thread Thomas Lamprecht
for anyone wanting to pick this up:

high level: it should go into pve-guest-common

Am 05/01/2023 um 11:08 schrieb Daniel Tschlatscher:
> This allows setting the 'startoptions' property string in the config.
> For now this only implements the 'timeout' parameter but should be
> rather easily extensible and allow related VM start config options
> to be also configurable here.
> 
> Signed-off-by: Daniel Tschlatscher 
> ---
> 
> Changes from v3:
> * No changes
> 
>  src/PVE/JSONSchema.pm | 38 ++
>  1 file changed, 38 insertions(+)
> 
> diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
> index 527e409..64dc01b 100644
> --- a/src/PVE/JSONSchema.pm
> +++ b/src/PVE/JSONSchema.pm
> @@ -640,6 +640,17 @@ sub pve_verify_startup_order {
>  die "unable to parse startup options\n";
>  }
>  
> +register_format('start-options', \&pve_verify_startup_options);

would prefer `guest-start-options`, as we have starts for other things (e.g.,
ceph or systemd services)

> +sub pve_verify_startup_options {
> +my ($value, $noerr) = @_;
> +
> +return $value if pve_parse_startup_options($value);
> +
> +return undef if $noerr;
> +
> +die "unable to parse vm start options\n";
> +}
> +
>  my %bwlimit_opt = (
>  optional => 1,
>  type => 'number', minimum => '0',
> @@ -748,6 +759,33 @@ PVE::JSONSchema::register_standard_option('pve-startup-order', {
>  typetext => '[[order=]\d+] [,up=\d+] [,down=\d+] ',
>  });
>  
> +sub pve_parse_startup_options {
> +my ($value) = @_;
> +
> +return undef if !$value;
> +
> +my $res = {};
> +
> +foreach my $p (split(/,/, $value)) {
> + next if $p =~ m/^\s*$/;
> +
> + if ($p =~ m/^timeout=(\d+)$/ && int($1) <= 86400) {
> + $res->{timeout} = $1;
> + } else {
> + return undef;
> + }
> +}
> +
> +return $res;
> +}
> +
> +register_standard_option('start-options', {
> +description => "Start up options for the VM. This only allows setting 
> the VM start timeout for now, which is the maximum VM startup timeout in 
> seconds. The maximum value for timeout is 86400, the minimum 0, which 
> disables the timeout completely. If timeout is unset, the timeout will either 
> be the memory of the VM in GiBs or 30, depending on which is higher. If unset 
> and hibernated, the value will at least be 300 seconds, with hugepages at 
> least 150 seconds.",

please split to multiple lines with 100cc max each

> +optional => 1,
> +type => 'string', format => 'start-options',
> +typetext => 'timeout=\d+',
> +});
> +
>  register_format('pve-tfa-secret', \&pve_verify_tfa_secret);
>  sub pve_verify_tfa_secret {
>  my ($key, $noerr) = @_;
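The property-string parsing in `pve_parse_startup_options` above can be sketched in Python for illustration (the real implementation is the quoted Perl; the integer conversion here is a minor deviation, Perl keeps the captured string):

```python
import re

# Sketch of pve_parse_startup_options: a comma-separated property string
# where only timeout=<seconds> in the range 0..86400 is currently accepted;
# any unknown or out-of-range option invalidates the whole string.
def parse_startup_options(value):
    if not value:
        return None
    res = {}
    for part in value.split(","):
        if not part.strip():
            continue  # skip empty segments, like the Perl version
        m = re.fullmatch(r"timeout=(\d+)", part)
        if m and int(m.group(1)) <= 86400:
            res["timeout"] = int(m.group(1))
        else:
            return None  # unparsable -> verification fails
    return res
```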






Re: [pve-devel] [PATCH access-control v2 2/6] added acls for Shared Filesystem Directories

2023-01-16 Thread Thomas Lamprecht
Am 23/12/2022 um 14:10 schrieb Markus Frank:
> Signed-off-by: Markus Frank 
> ---
>  src/PVE/AccessControl.pm  |  2 ++
>  src/PVE/RPCEnvironment.pm | 12 +++-
>  2 files changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm
> index a95d072..742304c 100644
> --- a/src/PVE/AccessControl.pm
> +++ b/src/PVE/AccessControl.pm
> @@ -1221,6 +1221,8 @@ sub check_path {
>   |/storage/[[:alnum:]\.\-\_]+
>   |/vms
>   |/vms/[1-9][0-9]{2,}
> + |/dirs
> + |/dirs/[[:alnum:]\.\-\_]+

I do not like this too much, iff we expose this at the ACL level I'd rather like
to use a /map// path, as we need that for Dominik's HW (PCI(e)) mappings anyway,
and I think we could reuse such a mapping ACL object path for even more things
(e.g., VMID (allocation) ranges, CPU cores (for cpu task set/pinning), ...

Besides that, note that our access model normally adds privileges based on the
top-level ACL object path, with the fitting roles - e.g., here that could be
Dirs.Audit, Dirs.Modify, Dirs.Use - but with the above it will then naturally be
something like Map.Audit, Map.Modify, Map.Use.




[pve-devel] applied: [PATCH pve-docs] fixed grammar in qm manpage

2023-01-16 Thread Thomas Lamprecht
Am 02/12/2022 um 12:52 schrieb Noel Ullreich:
> Fixed two small grammar errors in the qm manpage. Rephrased a

as we generate more than the man pages out of this: s/manpage/chapter/ 

> sentence before the fixed sentence so that it would be more legible.
> 
> Signed-off-by: Noel Ullreich 
> ---
>  qm.adoc | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
>

applied with the merge conflict against current master resolved, thanks!





[pve-devel] applied: [PATCH pve-network] frr: add prefix-list support

2023-01-16 Thread Thomas Lamprecht
Am 30/11/2022 um 16:18 schrieb Alexandre Derumier:
> parsing of prefix-list in frr.conf.local was missing
> 
> reported on forum:
> https://forum.proxmox.com/threads/using-the-proxmox-sdn-to-manage-host-connectivity-with-bgp.118553
> 
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/Network/SDN/Controllers/BgpPlugin.pm  |  2 +-
>  PVE/Network/SDN/Controllers/EvpnPlugin.pm | 20 +++
>  .../ebgp_loopback/expected_controller_config  |  3 ++-
>  3 files changed, 15 insertions(+), 10 deletions(-)
> 
>

applied, thanks!





[pve-devel] applied: [PATCH docs] cluster join: mention that storage config from cluster is inherited

2023-01-16 Thread Thomas Lamprecht
Am 30/11/2022 um 15:09 schrieb Fiona Ebner:
> and what to do about it. It's a rather common topic in forum threads.
> 
> Suggested in the community forum:
> https://forum.proxmox.com/threads/118492/post-513743
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> I just added it to the existing note, but maybe it's better to have
> two (one about existing guests, one about storage.cfg)? It still felt
> digestible like this and I wasn't entirely sure what to do about the
> sentence about the "configuration is overwritten"-sentence if opting
> to have two notes.
> 
>  pvecm.adoc | 13 -
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
>

applied, thanks!





[pve-devel] [PATCH container] fix #4460: setup: centos: create /etc/hostname if it does not exist

2023-01-16 Thread Friedrich Weber
Previously, Setup/CentOS.pm only wrote to /etc/hostname if the file
already existed. Many CT templates of Redhat-derived distros do not
contain that file, so the containers ended up without /etc/hostname.
This caused systemd-hostnamed to report the "static hostname" to be
empty. If networking is handled by NetworkManager, the empty static
hostname caused DHCP requests to be sent without the "Hostname" field,
as reported in #4460.

With this fix, Setup/CentOS.pm creates /etc/hostname if it does not
exist, so NetworkManager correctly reads the hostname and includes it in
DHCP requests.

Manually tested with the following CT templates (checking that
/etc/hostname exists and DHCP requests include the hostname):

* Distros using NetworkManager:
 - Alma Linux 9 (almalinux-9-default_20221108_amd64.tar.xz)
 - CentOS 8 (centos-8-default_20201210_amd64.tar.xz)
 - CentOS 9 Stream (centos-9-stream-default_20221109_amd64.tar.xz)
 - Rocky Linux 9 (rockylinux-9-default_20221109_amd64.tar.xz)
* Distros using network-scripts (here, DHCP requests already contained the
hostname without this fix, as network-scripts does not rely on
systemd-hostnamed):
 - Alma Linux 8 (almalinux-8-default_20210928_amd64.tar.xz)
 - CentOS 7 (centos-7-default_20190926_amd64.tar.xz)
 - CentOS 8 Stream (centos-8-stream-default_20220327_amd64.tar.xz)
 - Rocky Linux 8 (rockylinux-8-default_20210929_amd64.tar.xz)

Signed-off-by: Friedrich Weber 
---

Question: This will cause Setup/CentOS.pm to create /etc/hostname also
in already-existing containers. I don't think this should cause any
issues for users, but I'm not sure. What do you think?

 src/PVE/LXC/Setup/CentOS.pm| 5 ++---
 src/test/test-centos6-001/etc/hostname.exp | 1 +
 src/test/test-centos6-002/etc/hostname.exp | 1 +
 src/test/test-centos8-001/etc/hostname.exp | 1 +
 4 files changed, 5 insertions(+), 3 deletions(-)
 create mode 100644 src/test/test-centos6-001/etc/hostname.exp
 create mode 100644 src/test/test-centos6-002/etc/hostname.exp
 create mode 100644 src/test/test-centos8-001/etc/hostname.exp

diff --git a/src/PVE/LXC/Setup/CentOS.pm b/src/PVE/LXC/Setup/CentOS.pm
index 00fecc6..1d31cee 100644
--- a/src/PVE/LXC/Setup/CentOS.pm
+++ b/src/PVE/LXC/Setup/CentOS.pm
@@ -157,9 +157,8 @@ sub set_hostname {
 
 $self->update_etc_hosts($hostip, $oldname, $hostname, $searchdomains);
 
-if ($self->ct_file_exists($hostname_fn)) {
-   $self->ct_file_set_contents($hostname_fn, "$hostname\n");
-}
+# Always write /etc/hostname, even if it does not exist yet
+$self->ct_file_set_contents($hostname_fn, "$hostname\n");
 
 if ($self->ct_file_exists($sysconfig_network)) {
my $data = $self->ct_file_get_contents($sysconfig_network);
diff --git a/src/test/test-centos6-001/etc/hostname.exp b/src/test/test-centos6-001/etc/hostname.exp
new file mode 100644
index 000..a5bce3f
--- /dev/null
+++ b/src/test/test-centos6-001/etc/hostname.exp
@@ -0,0 +1 @@
+test1
diff --git a/src/test/test-centos6-002/etc/hostname.exp b/src/test/test-centos6-002/etc/hostname.exp
new file mode 100644
index 000..180cf83
--- /dev/null
+++ b/src/test/test-centos6-002/etc/hostname.exp
@@ -0,0 +1 @@
+test2
diff --git a/src/test/test-centos8-001/etc/hostname.exp b/src/test/test-centos8-001/etc/hostname.exp
new file mode 100644
index 000..a5bce3f
--- /dev/null
+++ b/src/test/test-centos8-001/etc/hostname.exp
@@ -0,0 +1 @@
+test1
-- 
2.30.2


