On Thu, 5 Jan 2017 16:21:38 +0100 (CET)
Dietmar Maurer wrote:
>
> If you do not want to debug yourself, please can you file a
> bug at bugzilla.proxmox.com?
>
https://bugzilla.proxmox.com/show_bug.cgi?id=1243
--
Hilsen/Regards
Michael Rasmussen
> On January 5, 2017 at 8:02 PM Michael Rasmussen wrote:
>
>
> On Thu, 5 Jan 2017 16:21:38 +0100 (CET)
> Dietmar Maurer wrote:
>
> >
> > If you do not want to debug yourself, please can you file a
> > bug at bugzilla.proxmox.com?
> >
> I will do that but I think I have nailed the problem
On Thu, 5 Jan 2017 16:21:38 +0100 (CET)
Dietmar Maurer wrote:
>
> If you do not want to debug yourself, please can you file a
> bug at bugzilla.proxmox.com?
>
I will do that but I think I have nailed the problem down to either
wrong instructions on the wiki or some kind of regression in pveproxy
> if someone has a good argument why this is a bad idea, please share it
> (or any other suggestion for this)
IMHO it is confusing to display all VMs ...
Hi all,
I just stumbled across the following:
When configuring memory for a VM, you can choose between the options 'Use fixed
size memory' and 'Automatically allocate memory within this range'.
The online help explains the ballooning feature quite nicely, but there is a
mismatch:
Under the 'Use
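For reference, and only as my own sketch of the underlying guest config (not
the online help's wording): 'Use fixed size memory' ends up as a plain memory
value in /etc/pve/qemu-server/<vmid>.conf, while 'Automatically allocate
memory within this range' additionally sets balloon as the lower bound, e.g.:

    memory: 2048
    balloon: 1024

With the fixed-size option only the memory line is present (and balloon: 0
disables ballooning entirely).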
On 01/05/2017 04:04 PM, Dietmar Maurer wrote:
> BulkStop: Why do we list already stopped guests?
I wanted to preserve the old behaviour (which included all VMs)
this is interesting for one case:
you open the bulk stop window -> someone starts a VM -> you click stop
in the old (and current)
> > You get an error after installing custom certs?
> Yes, getting the error after following the new instructions for
> installing custom certs. I had to renew my custom certs and chose the
> new instructions for doing that.
If you do not want to debug yourself, please can you file a
bug at bugzilla.proxmox.com?
applied
BulkStop: Why do we list already stopped guests?
applied, thanks!
> this patch series adds a vmid filter to the
> startall/stopall/migrateall calls of nodes
>
> and a GUI for selecting this
>
> so you can selectively start, stop and migrate guests in bulk
>
> I will also send a documentation patch later and add a help button to the
> window to explain it
applied
applied
This allows visual feedback for first-time users doing a backup.
---
change the way we reload by hiding the backup window instead of passing
around the reload() function
www/manager6/grid/BackupView.js | 7 ++-
www/manager6/window/Backup.js | 12 ++--
2 files changed, 16 insertions
Reviewed-by: Dominik Csapak
On 01/05/2017 12:23 PM, Thomas Lamprecht wrote:
On the old HA status we saw where a service was currently located;
this information was lost when we merged the resource and the status
tabs.
Add this information again.
Signed-off-by: Thomas Lamprecht
---
changes since v1:
applied
any comments?
On 12/20/2016 09:33 AM, Thomas Lamprecht wrote:
shutdown.target is active every time the node shuts down, be it
reboot, poweroff, halt or kexec.
As we want to return true only when the node powers down without a
restart afterwards, this was wrong.
Match only poweroff.target an
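Not the patch itself (which is cut off above), but a rough Perl sketch of that
kind of check, assuming the usual PVE::Tools::run_command helper and systemd's
systemctl list-jobs output:

    use PVE::Tools;

    # rough sketch (not the actual patch): report a real power-off only if a
    # poweroff.target job is queued, instead of matching the generic shutdown.target
    sub node_powers_off {
        my $poweroff_pending = 0;
        my $parser = sub {
            my ($line) = @_;
            $poweroff_pending = 1 if $line =~ m/\bpoweroff\.target\b/;
        };
        eval { PVE::Tools::run_command(['systemctl', 'list-jobs'], outfunc => $parser); };
        return $poweroff_pending;
    }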
this uses the new vmselector and the new vmid filter in the backend
to allow starting/stopping/migrating selected VMs instead of all;
by default all VMs are selected to keep the same default behaviour
Signed-off-by: Dominik Csapak
---
www/manager6/Makefile | 3 ++-
www/manager6/node/Confi
this is mostly copied from MigrateAll.js, but in a more generic way,
to allow startall and stopall to also use it
Signed-off-by: Dominik Csapak
---
www/manager6/window/BulkAction.js | 141 ++
1 file changed, 141 insertions(+)
create mode 100644 www/manager6/wind
this patch series adds a vmid filter to the
startall/stopall/migrateall calls of nodes
and a GUI for selecting this
so you can selectively start, stop and migrate guests in bulk
I will also send a documentation patch later and add a help button to the
window to explain it
Dominik Csapak (4):
this is a form field which is a grid for selecting VMs;
if nodename is given, it will restrict the VMs to the given node
you can filter the grid with the column header, and only the selected
and visible items are in the value of the field
Signed-off-by: Dominik Csapak
---
www/manager6/form/V
this is a simple filter which allows us to limit the actions to specific
vmids;
this makes it much simpler to start/stop/migrate a range of VMs
Signed-off-by: Dominik Csapak
---
PVE/API2/Nodes.pm | 33 +
1 file changed, 29 insertions(+), 4 deletions(-)
diff --git
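The diff is cut off here; just to illustrate the idea, a rough Perl sketch of
such a vmid filter (the parameter name 'vms', the helper usage and the loop
are assumptions, not the applied code):

    use PVE::Tools;

    # illustration only: parameter name 'vms' and this loop are assumptions
    my $param  = { vms => '100,101,105' };             # hypothetical API parameter
    my $vmlist = { 100 => {}, 101 => {}, 200 => {} };  # stand-in for the node's guest list

    # build a lookup of the requested vmids; an empty list means "all", as before
    my %selected = map { $_ => 1 } PVE::Tools::split_list($param->{vms});

    foreach my $vmid (sort { $a <=> $b } keys %$vmlist) {
        next if scalar(keys %selected) && !$selected{$vmid};
        # ... start/stop/migrate $vmid here ...
    }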
On the old HA status we saw where a service was currently located;
this information was lost when we merged the resource and the status
tabs.
Add this information again.
Signed-off-by: Thomas Lamprecht
---
changes since v1:
* add 'node' also to the data model
www/manager6/ha/Resources.js | 8 ++
there was still a point where we got the wrong string:
on createosd we get the devpath (/dev/cciss/c0d0),
but we need the info from get_disks, which looks in /sys/block,
where it needs to be cciss!c0d0
Signed-off-by: Dominik Csapak
---
I hope this is the final fix for this issue...
PVE/Diskmanage.pm
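Just to illustrate the mapping (not the Diskmanage.pm change itself): the
kernel exposes /dev/cciss/c0d0 under /sys/block with the slash replaced by an
exclamation mark, so the conversion is essentially:

    # illustration, not the actual Diskmanage.pm code
    my $devpath = '/dev/cciss/c0d0';
    (my $sysname = $devpath) =~ s|^/dev/||;   # cciss/c0d0
    $sysname =~ tr{/}{!};                     # cciss!c0d0, the name used under /sys/block
    print "$sysname\n";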
The drive-mirror qmp command has a timeout of 3s by default (QMPClient.pm),
shouldn't we bump it to 6s? (more than the 5s connect-timeout?)
- Original mail -
From: "Wolfgang Bumiller"
To: "pve-devel"
Sent: Thursday, 5 January 2017 10:09:28
Subject: [pve-devel] [PATCH qemu-server 3/4] drive-mir
Thanks Wolfgang.
I had already prepared a v10 with the cleanup ;)
I'm currently on holiday, so don't have too much time this week.
Yes, for NBD TLS, this needs qemu 2.8 + blockdev-add / blockdev-mirror.
Too much change for now (and blockdev is still experimental and not complete).
- Mail
Sorry, forgot to add the applied tag.
---
PVE/QemuServer.pm | 19 +--
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 0b866cd..31e30fa 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5926,17 +5926,16 @@ sub qemu_drive_mirror {
die "forkin
Applied the series with some followup patches:
* Added patches for the timeout and POSIX::_exit() changes I mentioned.
* Also added some whitespace & style cleanup patches.
Now we need to figure out whether to first add the ssh-tunnel based
encryption or go with qemu's TLS. Saw the thread on qemu-
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 31e30fa..c2fa20b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5919,7 +5919,7 @@ sub qemu_drive_mirror {
$format = "nbd";
my $unixsocke
---
PVE/API2/Qemu.pm | 4 ++--
PVE/QemuMigrate.pm | 2 +-
PVE/QemuServer.pm | 16
3 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index e48bf6d..288a9cd 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1643,7 +1643,
---
PVE/API2/Qemu.pm | 7 +++
PVE/QemuMigrate.pm | 16
PVE/QemuServer.pm | 12 ++--
3 files changed, 13 insertions(+), 22 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 0bae424..e48bf6d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -
Just for the record.
Wolfgang Bumiller (4):
cleanup: whitespaces & style
drive-mirror: warn use POSIX::_exit on exec failure
drive-mirror: bump timeout to 5s, add 30s inactivity timeout
cleanup: error messages
PVE/API2/Qemu.pm | 11 +--
PVE/QemuMigrate.pm | 16
On 2017-01-05 09:21, Dietmar Maurer wrote:
> The default configuration works for you?
I do not know exactly since I have been using custom certs since Proxmox
2.x and have kept these certs while upgrading (following the old
instructions).
> You get an error after installing custom certs?
Yes, getting the error after following the new instructions for installing
custom certs.
> I can see the certificate returned from the API2 is the default
> self-signed certificate installed with Proxmox but I have real
> certificates installed following this howto:
> https://pve.proxmox.com/wiki/HTTPS_Certificate_Configuration_(Version_4.x_and_newer)#CAs_other_than_Let.27s_Encrypt
>
> I