Includes the contents of /etc/pve/datacenter.cfg
in the cluster section.
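For reference, the change boils down to something like this (a sketch,
assuming the report sections are plain command lists as in
PVE/Report.pm; the exact structure may differ):

    # Sketch: pve-report sections map to lists of shell commands whose
    # output is embedded in the generated report.
    my $report_def = {
        cluster => [
            'pvecm nodes',
            'pvecm status',
            'cat /etc/pve/datacenter.cfg',  # newly included
        ],
        # ... other sections ...
    };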
Signed-off-by: Max Carrara
---
Changes from v1:
* Output of `/etc/pve/datacenter.cfg` is now in the cluster section,
as discussed[1]
[1] https://lists.proxmox.com/pipermail/pve-devel/2023-February/055715.html
PVE/Repo
Adds a field to the "meta" config property which stores the user who
created the VM.
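For illustration, a VM config could then carry something like the
following (the field name "creation-user" is hypothetical and the
values are made up; see the patch itself for the actual name):

    # Hypothetical example of the extended meta property:
    meta: creation-qemu=7.2.0,ctime=1676300000,creation-user=root@pam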
Signed-off-by: Leo Nunner
---
PVE/QemuServer.pm | 8
1 file changed, 8 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a0e16dc..28ed8e7 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/Q
Signed-off-by: Matthias Heiserer
---
Changes from v2:
make expression more compact
src/window/DiskSmart.js | 17 ++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/src/window/DiskSmart.js b/src/window/DiskSmart.js
index 3c8040b..b538ea1 100644
--- a/src/window/Dis
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer/Memory.pm | 10 +++---
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index e10c0b7..3899917 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -165,8 +
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 6 --
PVE/QemuServer/Memory.pm | 13 ++---
2 files changed, 10 insertions(+), 9 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a0e16dc..97185e1 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer
max can only be a multiple of 64GiB.
The dimm size is computed from the max memory,
so we can have 64 slots:
64GiB = 64 slots x 1GiB
128GiB = 64 slots x 2GiB
..
4TiB = 64 slots x 64GiB
Also, with NUMA, we need to share the slots between (up to 8) sockets.
64 is a multiple of 8:
64 slots = 8 sockets x 8 slots
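For illustration, that relationship boils down to something like this
(a minimal sketch, not the series' actual code; the helper name is
made up):

    # Sketch: derive the static dimm size from the configured max memory.
    # All 64 slots get the same size, so max must be a multiple of 64 GiB
    # to keep dimm sizes at whole-GiB values.
    my $MAX_SLOTS = 64;

    sub static_dimm_size {
        my ($max_mb) = @_;  # max memory in MiB
        die "max must be a multiple of 64 GiB\n"
            if $max_mb % ($MAX_SLOTS * 1024);
        return $max_mb / $MAX_SLOTS;  # dimm size in MiB
    }

    # 64 GiB -> 1 GiB dimms, 128 GiB -> 2 GiB dimms, ..., 4 TiB -> 64 GiB dimms.
    # With NUMA, the 64 slots divide evenly among up to 8 sockets.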
This patch series reworks the current memory hotplug + virtio-mem.
The memory option now has extra sub-options:
memory: [[current=]<integer>] [,max=<integer>] [,virtio=<1|0>]
ex: memory: current=1024,max=131072,virtio=1
For classic memory hotplug, when max memory is defined,
we use 64 fixed-size dimms.
The max option is a
simply use the dimm_list() returned by qemu
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer/Memory.pm | 73
1 file changed, 22 insertions(+), 51 deletions(-)
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index a13b3a1..cf1ddb9 100644
If some memory cannot be removed on a specific node,
we try to rebalance it again on the other nodes.
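The idea is roughly the following (an assumed sketch of the balancing
logic, not the patch's actual code):

    # Sketch: spread a memory unplug request over the NUMA nodes; whatever
    # cannot be removed from one node is retried on the remaining nodes.
    sub balance_unplug {
        my ($nodes, $to_remove) = @_;  # $nodes: { id => removable MiB }
        while ($to_remove > 0) {
            my @candidates = grep { $nodes->{$_} > 0 } keys %$nodes;
            last if !@candidates;  # nothing left to remove anywhere
            my $share = int($to_remove / scalar(@candidates)) || $to_remove;
            for my $id (sort @candidates) {
                my $chunk = $share < $nodes->{$id} ? $share : $nodes->{$id};
                $nodes->{$id} -= $chunk;
                $to_remove -= $chunk;
                last if $to_remove <= 0;
            }
        }
        return $to_remove;  # > 0 if the request could not be fully satisfied
    }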
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer/Memory.pm | 51 +++-
1 file changed, 35 insertions(+), 16 deletions(-)
diff --git a/PVE/QemuServer/Memory.pm b/PVE/
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.pm | 7 ++--
PVE/QemuConfig.pm | 4 +--
PVE/QemuMigrate.pm | 6 ++--
PVE/QemuServer.pm | 27 +++
PVE/QemuServer/Helpers.pm | 3 +-
PVE/QemuServer/Memory.pm | 71 ---
Signed-off-by: Alexandre Derumier
---
test/cfg2cmd/memory-max-128G.conf | 11
test/cfg2cmd/memory-max-128G.conf.cmd | 86 +++
test/cfg2cmd/memory-max-512G.conf | 11
test/cfg2cmd/memory-max-512G.conf.cmd | 58 ++
4 files changed, 166 inser
Signed-off-by: Alexandre Derumier
---
test/cfg2cmd/memory-hotplug-hugepages.conf | 12 ++
.../cfg2cmd/memory-hotplug-hugepages.conf.cmd | 62 +++
test/cfg2cmd/memory-hotplug.conf | 11 ++
test/cfg2cmd/memory-hotplug.conf.cmd | 174 ++
test/cfg2cmd/m
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer/Memory.pm | 27 +++
1 file changed, 19 insertions(+), 8 deletions(-)
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index 6fac468..e10c0b7 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Mem
4GiB of static memory is needed for DMA + boot memory, as this memory
can almost never be unplugged.
One virtio-mem PCI device is set up for each NUMA node on the pci.4 bridge.
virtio-mem uses a fixed count of 32000 blocks.
The block size is computed as (max memory - 4096) / 32000, with a minimum of
2MiB to
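Spelled out, that computation reads something like this (a sketch; the
power-of-two rounding is an assumption here, and the helper name is
made up):

    # Sketch: virtio-mem block size = (max memory - 4096 MiB static)
    # divided over 32000 blocks, with a minimum of 2 MiB.
    sub virtiomem_blocksize {
        my ($max_mb) = @_;  # max memory in MiB
        my $dynamic_mb = $max_mb - 4096;  # subtract static DMA/boot memory
        my $block_mb = $dynamic_mb / 32000;
        $block_mb = 2 if $block_mb < 2;  # enforce the 2 MiB minimum
        # round up to the next power of two (assumed; virtio-mem block
        # sizes are powers of two)
        my $pow = 2;
        $pow *= 2 while $pow < $block_mb;
        return $pow;  # block size in MiB
    }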
Signed-off-by: Alexandre Derumier
---
test/cfg2cmd/memory-virtio-hugepages-1G.conf | 12 +++
.../memory-virtio-hugepages-1G.conf.cmd | 35 +++
test/cfg2cmd/memory-virtio-max.conf | 11 ++
test/cfg2cmd/memory-virtio-max.conf.cmd | 35 +
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer/Memory.pm | 37 +++--
1 file changed, 19 insertions(+), 18 deletions(-)
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index 09fa25c..5dc2cb8 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/Qem
The current qemu_dimm_list can return any kind of memory device.
Make it more generic, with a regex filter to choose the kind of device
by id.
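Such a filter could look roughly like this (an assumed sketch; the
function name and shape are made up, built on the QMP
query-memory-devices command via mon_cmd):

    # Sketch: list memory devices whose id matches a regex, e.g.
    # qr/^dimm\d+$/ for dimms or qr/^virtiomem\d+$/ for virtio-mem devices.
    use PVE::QemuServer::Monitor qw(mon_cmd);

    sub qemu_memdevices_list {
        my ($vmid, $filter) = @_;
        my $devices = mon_cmd($vmid, 'query-memory-devices');
        my $list = {};
        for my $dev (@$devices) {
            my $id = $dev->{data}->{id};
            next if !defined($id) || $id !~ $filter;
            $list->{$id} = $dev->{data};
        }
        return $list;
    }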
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer/Memory.pm | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuServer/Mem
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer/Memory.pm | 12 +---
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index 5dc2cb8..32fbdc5 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -151,1
The default kernel vhost config has only supported 64 slots since 2015,
for performance reasons.
The original memory hotplug code predates this and used the qemu
maximum of 255 slots.
To reach the max memory (4TB), we used incremental dimm sizes.
Instead of dynamic memory sizes, use 1 static dimm size, computed
from
Verify that the defined VM max memory is not bigger than the
memory supported by the host CPU.
Add an early check in the update vm api.
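One way such a check could look (a sketch; reading the physical address
bits from /proc/cpuinfo is an assumption about the implementation, and
the function name is made up):

    # Sketch: reject a max memory setting beyond what the host CPU can
    # physically address (2^phys_bits bytes).
    sub check_host_max_mem {
        my ($max_mb) = @_;
        open(my $fh, '<', '/proc/cpuinfo') or die "unable to read cpuinfo\n";
        my $bits;
        while (my $line = <$fh>) {
            # e.g. "address sizes   : 46 bits physical, 48 bits virtual"
            if ($line =~ /^address sizes\s*:\s*(\d+)\s+bits physical/) {
                $bits = $1;
                last;
            }
        }
        close($fh);
        die "could not detect host cpu address size\n" if !defined($bits);
        my $host_max_mb = (2 ** $bits) / (1024 * 1024);
        die "max memory ($max_mb MiB) is bigger than the host cpu limit ($host_max_mb MiB)\n"
            if $max_mb > $host_max_mb;
    }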
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.pm | 32 ++--
PVE/QemuServer/Memory.pm | 19 ++-
2 files changed, 40 inserti
Signed-off-by: Christoph Heiss
---
www/manager6/Parser.js | 3 +++
www/manager6/lxc/Network.js | 13 +
2 files changed, 16 insertions(+)
diff --git a/www/manager6/Parser.js b/www/manager6/Parser.js
index 9f7b2c84..c3772d3b 100644
--- a/www/manager6/Parser.js
+++ b/www/manager6/
If this network option is set, the host-side link will be forced down.
This has the effect that the interface inside the container has
LOWERLAYERDOWN set, which basically means that the PHY is considered
down, thus effectively being "unplugged".
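On the host side, the effect boils down to something like this (a
sketch using PVE::Tools::run_command; the actual hook and the veth
naming are handled by pve-container):

    # Sketch: force the host-side veth peer down, which makes the
    # container-side interface report LOWERLAYERDOWN.
    use PVE::Tools;

    sub set_link_down {
        my ($veth) = @_;  # host-side veth name, e.g. "veth100i0"
        PVE::Tools::run_command(['/sbin/ip', 'link', 'set', 'dev', $veth, 'down']);
    }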
Also fix some trailing whitespace while touching the
This adds a `Disconnect` option for network interfaces on LXC
containers, much like it already exists for VMs. This has been requested
in #3413 [0] and seems useful, esp. considering we already support the
same thing for VMs.
One thing to note is that LXC does not seem to support the notion of
set
On Mon, Feb 13, 2023 at 02:56:59PM +0100, Christoph Heiss wrote:
> If this network option is set, the host-side link will be forced down.
> This has the effect that the interface inside the container has
> LOWERLAYERDOWN set, which basically means that the PHY is considered
> down, thus effectively