> On 02.12.2016 at 20:04, Michael Rasmussen wrote:
>
> On Fri, 2 Dec 2016 19:54:20 +0100
> Waschbüsch IT-Services GmbH wrote:
>
>>
>> Any ideas how that could be avoided? Like, at all. :-/
>>
> Could you try, when logged in, to run: dpkg --configure -a
That comes back empty, since, luckily, I
Hi all,
I just upgraded a current node running PVE 4.3 to the latest updates available
on the enterprise repo.
Things work ok until apt gets to:
Preparing to unpack .../proxmox-ve_4.3-72_all.deb ...
Unpacking proxmox-ve (4.3-72) over (4.3-71) ...
Preparing to unpack .../openvswitch-switch_2.6.0
applied
When trying to migrate a VM from a node with qemu-server <= 4.0-92 to
a node with qemu-server >= 4.0-93, we failed because the remote qemu-server
got no explicit 'migration_type' from the older qemu-server on the
source.
Check if migration_type is defined on an incoming migration start; if
not, set it.
Si
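A minimal sketch of that defaulting logic; the fallback value and parameter handling are assumptions for illustration, not the actual qemu-server code:

    # hypothetical sketch: default 'migration_type' when the (older)
    # source node did not send it
    my $migration_type = $param->{migration_type};
    if (!defined($migration_type)) {
        # older qemu-server only knew the ssh-tunneled mode
        $migration_type = 'secure';   # assumed previous default
        $param->{migration_type} = $migration_type;
    }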
On Fri, 2 Dec 2016 17:08:40 +0100 (CET)
Dietmar Maurer wrote:
> > Udo reported the same problem on the forum
> >
> > https://forum.proxmox.com/threads/live-migration-between-4-3-71-4-3-72.30757/
> >
>
> yes, I see.
>
Come to think of it: are we facing the same unsolved issue concerning
the failure t
Udo reported the same problem on the forum
https://forum.proxmox.com/threads/live-migration-between-4-3-71-4-3-72.30757/
- Original Message -
From: "Dominik Csapak"
To: "pve-devel"
Sent: Friday, 2 December 2016 12:52:28
Subject: Re: [pve-devel] Latest upgrade in enterprise repo
On 12/01/2016 10:51
I was able to get this to work on my test network.
Right now I believe the issue I am having might stem from using bonding
mode 6 (balance-alb). It may take some time to verify as I am quite busy
at the moment.
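For reference, a mode-6 bond in /etc/network/interfaces looks roughly like this (interface names are placeholders, not taken from the report):

    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-miimon 100
        bond-mode balance-alb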
On 11/22/2016 02:21 AM, Thomas Lamprecht wrote:
On 11/18/2016 09:12 PM, Phil Kauff
applied
On 12/02/2016 03:44 PM, Wolfgang Bumiller wrote:
Rebased and cleaned up W.Link's clone/move disk patches which have been
sitting around for way too long now.
Differences are:
* removed code which used the raw Storage::vdisk_alloc and called mkfs on
non-subvols in favor of factoring out the all
> So the question now is: is this expected behaviour and one should set
> it manually, or is it a bug that it is not set automatically?
no, this is not expected.
---
src/PVE/API2/LXC.pm | 26 +++---
1 file changed, 15 insertions(+), 11 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 38b1feb..fe611b0 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1147,6 +1147,9 @@ __PACKAGE__->register_method({
---
src/PVE/LXC.pm | 64 ++
1 file changed, 64 insertions(+)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index c61da3e..c3116b6 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1540,5 +1540,69 @@ sub userns_command {
return [];
}
---
src/PVE/API2/LXC.pm | 148
src/PVE/CLI/pct.pm | 1 +
2 files changed, 149 insertions(+)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index fe611b0..be48719 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1447,4 +
---
src/PVE/LXC.pm | 87 +-
1 file changed, 56 insertions(+), 31 deletions(-)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 810fae5..c61da3e 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1336,6 +1336,59 @@ sub destroy_disks {
Rebased and cleaned up W.Link's clone/move disk patches, which have been
sitting around for way too long now.
Differences are:
* removed code which used the raw Storage::vdisk_alloc and called mkfs on
non-subvols, in favor of factoring out the allocate+mkfs code from
create_disks() into a separa
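The shape of that refactoring, as a rough sketch (helper name, format, and mkfs choice are assumptions, not the actual patch):

    # hypothetical helper shared by create_disks() and clone/move:
    # allocate a volume, format it, and free it again on failure
    sub alloc_formatted_disk {
        my ($storecfg, $storage, $vmid, $size_kb) = @_;
        my $volid = PVE::Storage::vdisk_alloc(
            $storecfg, $storage, $vmid, 'raw', undef, $size_kb);
        eval {
            my $path = PVE::Storage::path($storecfg, $volid);
            PVE::Tools::run_command(['mkfs.ext4', $path]);
        };
        if (my $err = $@) {
            PVE::Storage::vdisk_free($storecfg, $volid);
            die $err;
        }
        return $volid;
    }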
---
`lxc-start -f file` was broken again; in other words, dab/aab/... were affected.
debian/patches/lxc-start-configfile.patch | 32 +++
debian/patches/series | 1 +
2 files changed, 33 insertions(+)
create mode 100644 debian/patches/lxc-start-configfile.patch
diff --git
This series changes the container migration so that it shows "restart mode"
instead of "online", since live migration is not implemented.
To help the users, we add a help button to the migration window.
Please apply this patch only after my pve-docs patches:
[PATCH docs 1/3] add migration subchapter
for now we have to explicitly define the
onlineHelp: 'blockid'
string, so that the parser picks it up.
In the future we should refactor that window, so that we define the
blockid when declaring the component.
Signed-off-by: Dominik Csapak
---
www/manager6/window/Migrate.js | 23 +
since online migration does not work at the moment, and with HA we already
have the restart mode
Signed-off-by: Dominik Csapak
---
www/manager6/window/Migrate.js | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/M
Hi everyone,
I do not know if this is a real bug or simply non-documented behaviour,
but if I set up a masqueraded, private bridge (e.g. with
https://pve.proxmox.com/wiki/Network_Model#Masquerading_.28NAT.29_with_iptables)
everything works as long as I do not enable firewalling for the containers
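For reference, the setup from that wiki page is essentially the following (subnet and interface names are the wiki's example values):

    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE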
On 12/01/2016 10:51 PM, Michael Rasmussen wrote:
Hi all,
I am sorry, but I have to say that the latest upgrade in the enterprise repo
is a disaster. Migration after the upgrade is not possible, and migration
before the upgrade is not possible either! Not being able to migrate
after the upgrade was expected, but no
applied
changes to v1:
* correct mode/running checks
* don't start CTs which were stopped before a restart migration
Dominik Csapak (2):
implement lxc restart migration
add restart migration to lxc api
src/PVE/API2/LXC.pm| 15 +--
src/PVE/LXC/Migrate.pm | 34 ++
this checks for the 'restart' parameter and, if given, shuts down the
container with an optionally defined timeout (default 180s), then
continues the migration like an offline one.
After finishing, we start the container on the target node.
Signed-off-by: Dominik Csapak
---
changes to v1:
* do not s
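A rough sketch of that control flow, with made-up helper names (only the logic follows the description above; this is not the actual PVE::LXC::Migrate code):

    # hypothetical sketch of restart mode migration
    if ($restart && $was_running) {
        my $timeout = $param->{timeout} // 180;   # seconds
        shutdown_container($vmid, $timeout);      # made-up helper: clean shutdown
    }
    migrate_offline($vmid, $target);              # made-up helper: same path as offline
    start_container($vmid, $target) if $restart && $was_running;  # start on target node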
this simply adds the restart flag and the optional timeout to the LXC API,
as required for the restart mode migration
Signed-off-by: Dominik Csapak
---
changes to v1:
* correct the check so that we do not cancel if -online is set, so
that an HA-enabled container still does the right thing
src/PVE/API2/LX
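The two parameters would look roughly like this in the usual JSONSchema style of PVE::API2::LXC (descriptions and defaults here are assumptions, not the actual patch):

    # hypothetical sketch of the added parameters
    restart => {
        type => 'boolean',
        description => "Use restart migration",
        optional => 1,
    },
    timeout => {
        type => 'integer',
        description => "Timeout in seconds for shutdown.",
        default => 180,
        optional => 1,
    },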
applied
I also think we should have a generic class called PVE.GuestStartupEdit:
www/manager6/window/GuestStartupEdit.js
> On December 1, 2016 at 2:16 PM Emmanuel Kasper wrote:
>
>
> this widget and its containing InputPanel can be used by both
> QEMU/LXC so we need to be able to pass the onlineHelp
instead of a mix of sectors and sizes with an implicit 512b
sector size. Switch alignment to the default instead of 1 sector
as a result.
This wastes at most 1M (the default alignment of sgdisk) for the
first partition, and should make the partitioning scheme
consistent regardless of sector size.
Signed-o
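As a sketch of the difference, size-based sgdisk calls could look like this from Perl (device, sizes, and type codes are placeholders, not the actual proxinstall values):

    # hypothetical sketch: create partitions by size, letting sgdisk
    # use its default (1M) alignment instead of hand-computed sectors
    my $dev = '/dev/sdX';   # placeholder device
    run_command(['sgdisk', '-n', '1:0:+1M',   '-t', '1:EF02', $dev]);  # BIOS boot
    run_command(['sgdisk', '-n', '2:0:+256M', '-t', '2:EF00', $dev]);  # ESP
    run_command(['sgdisk', '-n', '3:0:0', $dev]);                      # rest of disk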
Signed-off-by: Fabian Grünbichler
---
Needed for the ashift entry in the next patch
proxinstall | 14 +-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/proxinstall b/proxinstall
index b08a5d3..3b908f6 100755
--- a/proxinstall
+++ b/proxinstall
@@ -1643,13 +1643,25 @@ sub clean
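For context, ashift is the pool's sector-size exponent; the advanced ZFS options added here could be carried in a simple option table like this (keys and defaults are illustrative, not the actual patch):

    # hypothetical sketch of advanced ZFS install options
    my $zfs_opts = {
        ashift => 12,      # 2^12 = 4K physical sectors (9 for legacy 512b disks)
        compress => 'on',
        checksum => 'on',
        copies => 1,
    };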
Signed-off-by: Fabian Grünbichler
---
proxinstall | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/proxinstall b/proxinstall
index 3a474be..855c39d 100755
--- a/proxinstall
+++ b/proxinstall
@@ -2040,7 +2040,7 @@ sub create_password_view {
my $t3 = $eme->get_text;
for added clarity
Signed-off-by: Fabian Grünbichler
---
proxinstall | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/proxinstall b/proxinstall
index a73a0e3..a74468a 100755
--- a/proxinstall
+++ b/proxinstall
@@ -564,6 +564,7 @@ sub hd_size {
foreach my $hd (@$
the stacked switcher allows us to easily add advanced panels
for other setup types in the future. Also, the disk selection
grid for ZFS/BTRFS is now based on the number of detected
disks, instead of being hard-coded to 8 entries.
Signed-off-by: Fabian Grünbichler
---
proxinstall | 307
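The dynamic grid boils down to something like this (helper names are made up for illustration):

    # hypothetical sketch: one selector row per detected disk
    my @hds = @{ get_detected_disks() };              # made-up helper
    foreach my $i (0 .. $#hds) {
        add_disk_selector_row($grid, $i, $hds[$i]);   # made-up UI helper
    }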
and switch default FQDN to "pve.example.invalid" as well, in
accordance with RFC 6761 (Special-Use Domain Names)
Signed-off-by: Fabian Grünbichler
---
proxinstall | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/proxinstall b/proxinstall
index 0e08f40..3a474be 10075
Fabian Grünbichler (6):
refactor check_float to check_*
refactor disk setup, add advanced ZFS options
fix disk/partition size comments
use partition sizes for boot disk partitioning
add default invalid mail address
fix typos
proxinstall | 359 +-