> +return $path if $is_mp =~ m/^(1|on|yes|true)$/i;
> +return undef if $is_mp =~ m/^(0|off|no|false)$/i;
Don't we have a parse_boolean() helper somewhere? If not, can we
add one? I would like to avoid multiple definitions of what we
accept for boolean values.
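For reference, a minimal sketch of what such a helper could look like (the name and its placement are assumptions; it simply reuses the exact patterns accepted above):

    # hypothetical helper, mirroring the regexes from the patch
    sub parse_boolean {
        my ($value) = @_;
        return 1 if $value =~ m/^(1|on|yes|true)$/i;
        return 0 if $value =~ m/^(0|off|no|false)$/i;
        return undef; # not a recognized boolean value
    }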
I have done a ping test on Proxmox 4 with migrate_insecure (so "cont"
doesn't apply at vm_start), without disk migration,
and I see around 60 ms of packet loss.
I will try Proxmox 5 tomorrow.
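For reproducibility, a sketch of how such a loss window can be estimated (interval and address are illustrative, not the exact commands used):

    # ping the guest every 10 ms during the migration (sub-200 ms
    # intervals need root); downtime ~= lost replies x interval
    ping -i 0.01 192.168.1.100
    # e.g. 6 lost replies at 10 ms spacing ~= 60 ms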
----- Original Message -----
From: "aderumier"
To: "pve-devel"
Sent: Thursday, 27 July 2017 16:30:37
Subject: Re: [pve-deve
On Wednesday, 26.07.2017 at 20:44 +0200, Martin Lablans wrote:
> Dear all,
>
> this patch will change the LVM storage plugin to create striped
> rather than linear logical volumes, which can multiply the
> throughput for volume groups backed by several controllers or
> network paths.
>
> The n
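As a concrete illustration of the difference (hypothetical VG, LV name and size; -i sets the stripe count, see lvcreate(1)):

    # linear allocation (current behaviour)
    lvcreate -L 32G -n vm-100-disk-1 pve
    # striped across 2 physical volumes
    lvcreate -i 2 -L 32G -n vm-100-disk-1 pve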
From: Wolfgang Bumiller
Also allowing multiple keys since with some key types and
lengths 1024 would fit quite a number of them...
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE
- changelog:
- support any storage and not only qcow2
- cloudinit drive volume is no longer generated at start.
We can now enable/disable cloudinit with:
qm set <vmid> -(ide|sata)X <storeid>:cloudinit
qm set <vmid> -delete (ide|sata)X
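For example (hypothetical VMID and storage id):

    qm set 100 -ide2 local:cloudinit
    qm set 100 -delete ide2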
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.p
Signed-off-by: Alexandre Derumier
---
Makefile | 2 ++
debian/rules | 2 ++
modules-load.conf| 1 +
nbd-modules-load.conf| 1 +
nbd-modules-options.conf | 1 +
5 files changed, 7 insertions(+)
create mode 100644 nbd-modules-load.conf
create mode 100644 nb
From: Wolfgang Bumiller
and don't map ip/gw to address/gateway in the resulting
hash.
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 110 +-
1 file changed, 59 insertions(+), 51 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/Qe
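A rough sketch of the resulting behaviour (assuming the format is parsed via PVE::JSONSchema::parse_property_string; variable names are illustrative):

    my $ipconfig = PVE::JSONSchema::parse_property_string($ipconfig_fmt, $value);
    # keys keep their option names: $ipconfig->{ip}, $ipconfig->{gw},
    # $ipconfig->{ip6}, $ipconfig->{gw6} - no rename to address/gateway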
From: Wolfgang Bumiller
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ca6f564..a00f07b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -510,19 +
From: Wolfgang Bumiller
include ipconfig in VM.Config.Network
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 4c91f6f..5f18de9 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.
use the same verification as lxc
also rename the sshkey param to sshkeys (as we can define multiple ssh keys)
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.pm | 7 +++
PVE/QemuServer.pm | 10 +-
2 files changed, 12 insertions(+), 5 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE
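A sketch of the shared validation idea (assuming the same pve-common helper the LXC side uses is available; one key per line):

    # validate all supplied public keys with the common helper
    PVE::Tools::validate_ssh_public_keys($param->{sshkeys})
        if defined($param->{sshkeys});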
From: Wolfgang Bumiller
It's being removed from LXCCreate.pm, so it won't be moved to
pve-common; qemu-server is then the only place using it.
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 1 -
1 file changed, 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index
From: Wolfgang Bumiller
*) always replace old cloudinit images
*) apply pending cloudinit changes when generating a new
image
For cloudinit we now always use vdisk_free before
vdisk_alloc in order to always replace old images, this
allows us to hotplug a new drive by setting it to
`none,media=cd
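A condensed sketch of that allocation order (storage id, volume name and size are illustrative; vdisk_free/vdisk_alloc are the existing PVE::Storage helpers):

    # drop the old image first so a fresh one always replaces it
    PVE::Storage::vdisk_free($storecfg, $old_volid) if $old_volid;
    my $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid,
        'qcow2', "vm-$vmid-cloudinit.qcow2", 4 * 1024); # size in KiB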
From: Wolfgang Bumiller
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 12
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index cd61475..f50c17a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -45,6 +45,8 @@
From: Wolfgang Bumiller
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 8e91bf2..c73039d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -516,7 +516,7 @@ my $confdes
From: Wolfgang Bumiller
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 8fb47b5..6aa0128 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6696,7 +6696,7 @@ sub gener
From: Wolfgang Bumiller
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 14 +-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index bf34efa..8e91bf2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -524,6 +524,1
When we change the IP address, the network configuration is correctly
written in the guest, but cloud-init doesn't apply it and keeps the
previous IP address.
Work around this by forcing ifdown/ifup.
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/PVE/QemuS
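A sketch of one shape such a workaround can take (assuming it is injected as a cloud-init bootcmd into the generated user-data; the snippet is illustrative, not the actual patch):

    #cloud-config
    bootcmd:
      - [ ifdown, -a ]
      - [ ifup, -a ]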
From: Wolfgang Bumiller
The config-disk is now generated into a qcow2 located on a
configured storage.
It is now also storage-managed and so live-migration and
live-snapshotting should work as they do for regular hard
drives.
Config drives are recognized by their storage name of the
form: VMID/vm
From: Wolfgang Bumiller
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 769a2b6..ca6f564 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2014,7 +2014,10 @@ s
From: Wolfgang Bumiller
* Add ipconfigX for all netX configuration options and
using ip=CIDR, gw=IP, ip6=CIDR, gw6=IP as option names
like in LXC.
* Adding explicit ip=dhcp and ip6=dhcp options.
* Removing the config-update code and instead generating
the ide3 commandline in config_to
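For example (hypothetical VMID and addresses):

    qm set 100 -ipconfig0 ip=192.168.1.10/24,gw=192.168.1.1
    qm set 100 -ipconfig1 ip=dhcp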
Now that Proxmox 5.0 has launched, maybe we can try to target cloudinit for 5.1?
changelog: rebase on latest git
From: Wolfgang Bumiller
This time we can't avoid it: nameservers are listed with
separating spaces in LXC and we want to stay consistent and
use the same format in qemu.
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/
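For example, mirroring LXC's space-separated format (hypothetical addresses; the qemu-side option name is assumed to match LXC's):

    qm set 100 -nameserver '192.168.1.2 192.168.1.3'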
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 155 --
debian/control| 1 +
2 files changed, 153 insertions(+), 3 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 1f34101..8326031 100644
--- a/PVE/QemuServe
From: Wolfgang Bumiller
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a00f07b..bf34efa 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6793,8 +6793,8 @@ sub g
Looking at the user's migration log:
Jul 24 18:12:37 start migrate command to unix:/run/qemu-server/100.migrate
Jul 24 18:12:39 migration speed: 256.00 MB/s - downtime 39 ms
It seems the VM has very little memory, as the migration takes 2 seconds
between start and end.
So maybe the usleep lowering is
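A rough sketch of the idea under discussion (the numbers and variable names are illustrative, not the actual patch):

    use Time::HiRes qw(usleep);
    # poll query-migrate more often once little RAM remains, so
    # completion is noticed (and the VMs paused/resumed) sooner
    my $usleep = 1_000_000;                        # default: 1 s
    $usleep = 100_000 if $rem && $rem < $avglstat; # near the end: 100 ms
    usleep($usleep);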
Thanks for the explanation, Fabian. (I'm always using insecure
migration, so I didn't notice this bug.)
>>when live-migrating over a unix socket, PVE 5 takes up to a few seconds
>>between completing the RAM transfer and pausing the source VM, and
>>resuming the target VM. in PVE 4, the same migration
we cannot use a rados connection before having at least one monitor,
so we have to move it down
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index eb8f14f3..f13ae9f1 100644
--- a/PVE
the following issue was reported on the forum[1] and as bug #1458[2],
moving this here for further discussion of potential fixes.
when live-migrating over a unix socket, PVE 5 takes up to a few seconds
between completing the RAM transfer and pausing the source VM, and
resuming the target VM. in PV
This should not be needed since we already call 'block-job-complete'
in qemu_drive_mirror_monitor(), and after benchmarking it neither
appears to be needed nor provides a measurable improvement when
shutting down the source.
---
PVE/QemuMigrate.pm | 5 -
1 file changed, 5 deletions(-)
diff --gi
---
PVE/QemuMigrate.pm | 2 --
1 file changed, 2 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index ac2884b..3169b7a 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -224,8 +224,6 @@ sub sync_disks {
# local volumes which have been copied
$self->{volumes}
This turns is_mountpoint more into export(5)'s `mountpoint`
property.
Given the directory storage with the properties:
    path /a/b/c
    is_mountpoint $value
$value = yes
    Same as before, /a/b/c must be mounted.
$value = no (or not set)
    Same as before, no effect.
$value = /a/b
    New: /
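Rendered as a storage.cfg entry, the example above reads (hypothetical storage id):

    dir: example
        path /a/b/c
        is_mountpoint /a/b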
applied
applied
Add column names at the top of the output; this makes it easier to
understand what each column means.
Use leading spaces on the percentage column so that it lines up.
Replace the 1/0 in the active column with the actual status
(active, inactive, disabled).
Show N/A if the storage is disabled.
Us
In the Storage/Status API call we have an 'enabled' param which had no
effect, because storage_info only ever returned enabled storages.
This also affected `pvesm status`, which uses the Storage/Status API
call.
So also push disabled storages to the info array, but only activate
and get their
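A sketch of the intended flow (variable names are illustrative):

    # include disabled storages in the result, but skip activation
    # and usage queries for them
    foreach my $storeid (keys %$ids) {
        my $scfg = $ids->{$storeid};
        $info->{$storeid} = {
            type => $scfg->{type},
            enabled => $scfg->{disable} ? 0 : 1,
        };
        next if $scfg->{disable}; # do not activate disabled storages
        # ... activate $storeid and fill in usage/status ...
    }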
LVM seems to do a sane thing by default here, and can be configured to
do what you're doing via lvm.conf AFAICT.
From lvcreate(1):
| In order to stripe across all PVs of the VG if the -i argument is
| omitted, set raid_stripe_all_devices=1 in the allocation section of
| lvm.conf (5)
And the defaul
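For completeness, the lvm.conf fragment the man page refers to would look like:

    # /etc/lvm/lvm.conf
    allocation {
        raid_stripe_all_devices = 1
    }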
It can happen that the QMP connection gets lost while mirroring a disk.
In that case the current block job gets cancelled, but the real cause
of the failure is lost, because we die() at a later step with the
generic message:
die "$job: mirroring has been cancelled\n"
example:
...
drive-scsi0: trans
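A sketch of the shape of such a fix (capturing the original error so it is not lost; names are illustrative):

    # keep the root cause instead of hiding it behind the generic text
    my $qmp_err = $@; # original QMP failure, if any
    die "$job: mirroring has been cancelled" .
        ($qmp_err ? ": $qmp_err" : "\n");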
by ensuring at least lo is up inside the chroot environment
Signed-off-by: Fabian Grünbichler
---
as reported e.g. on https://forum.proxmox.com/threads/install-error.35932/
proxinstall | 3 +++
1 file changed, 3 insertions(+)
diff --git a/proxinstall b/proxinstall
index 21d2005..afb7a01 100755
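A sketch of the shape of such a fix (the helper name and return convention follow proxinstall's style but are assumptions):

    # bring up the loopback interface inside the target chroot so
    # tools that talk to localhost work during setup
    syscmd("chroot $targetdir /bin/ip link set lo up") == 0 ||
        die "unable to bring up loopback interface\n";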