Hi all,
Every time the backup scheduler runs I see this in the log for every VM
that is backed up:
Use of uninitialized value $cmd[8] in exec
at /usr/share/perl/5.14/IPC/Open3.pm line 186.
proxmox-ve-2.6.32: 3.4-163 (running kernel: 3.10.0-11-pve)
pve-manager: 3.4-11 (running version: 3.4-11/6502936
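Below is a minimal sketch of what triggers this class of warning (the command
array is an assumption, not the actual vzdump code): an undef element in the
list handed to IPC::Open3 makes the exec inside Open3.pm warn about an
uninitialized value, just like the log line above.

use strict;
use warnings;
use IPC::Open3;
use Symbol 'gensym';

my @cmd = ('ls', '-l', undef);            # hypothetical command array; one argument was never filled in
my ($in, $out, $err) = (gensym, gensym, gensym);
my $pid = open3($in, $out, $err, @cmd);   # warns from inside IPC/Open3.pm's exec
waitpid($pid, 0);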
> Is it possible that it will be implemented in the stable 4.0 final?
We will try to implement that ASAP, maybe for 4.1
Good evening,
Is it possible that it will be implemented in the stable 4.0 final?
Thanks.
Moula
UPDATE:
Today, what I do is call the qemu-img create command to create the image file
without the preallocation flag, and right after that attach the resulting image
to the VM using qm set...
It's a lot of work...
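For reference, a hedged sketch of that workaround (the VM id, storage name,
path and disk size are assumptions): create the image yourself without
preallocation, then attach the existing file with 'qm set'.

use strict;
use warnings;

my ($vmid, $size) = (100, '500G');                                  # assumed VM id and size
my $path  = "/mnt/pve/nfs-store/images/$vmid/vm-$vmid-disk-1.qcow2"; # assumed NFS storage path
my $volid = "nfs-store:$vmid/vm-$vmid-disk-1.qcow2";

system('qemu-img', 'create', '-f', 'qcow2', $path, $size) == 0
    or die "qemu-img create failed\n";
system('qm', 'set', $vmid, '-virtio0', $volid) == 0
    or die "qm set failed\n";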
2015-09-25 12:51 GMT-03:00 Gilberto Nunes :
> Please Dietmar... I think it will be good, becau
applied, thanks!
Please Dietmar... I think it would be good, because right now, when I try to create
a VM with a small disk of 32 GB (the default) over NFS, it takes so long
that I get a timeout...
The NFS share is already mounted with the soft option in storage.cfg, but it has no effect...
Thanks
2015-09-25 12:45 GMT-03:00 Dietmar Mau
Doesn't change behaviour at all, but makes the code clearer
Signed-off-by: Thomas Lamprecht
---
src/PVE/API2/HA/Groups.pm    | 6 +++---
src/PVE/API2/HA/Resources.pm | 6 +++---
src/PVE/HA/Config.pm         | 9 ++++-----
3 files changed, 10 insertions(+), 11 deletions(-)
diff --git a/src/PVE/API2
An 'action domain' lock guarantees that, for all calls using a given domain name,
the passed code is executed atomically,
regardless of whether they have a common file to read/write to.
This can be used in the ha-manager, where such behaviour is needed to avoid
parallel changes to different configs and command
This can be used to execute code on an 'action domain' basis.
E.g.: if there are actions that cannot be run simultaneously even if
they, for example, don't access a common file and are maybe also spread
across different packages, we can now secure the consistency of said
actions on an 'action domain' ba
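A rough sketch of the idea (the helper name lock_domain and the lock file
location are assumptions, not the actual ha-manager API): serialize arbitrary
code on a named 'action domain' by flock-ing one lock file per domain, so
callers that share no config file are still mutually exclusive.

use strict;
use warnings;
use Fcntl qw(:flock);

sub lock_domain {
    # hypothetical helper name and lock path, for illustration only
    my ($domain, $code) = @_;
    my $lockfile = "/var/lock/pve-domain-$domain.lck";
    open(my $fh, '>>', $lockfile) or die "cannot open $lockfile: $!\n";
    flock($fh, LOCK_EX) or die "cannot lock domain '$domain': $!\n";
    my $res = eval { $code->() };    # protected section runs atomically per domain
    my $err = $@;
    close($fh);                      # releases the lock
    die $err if $err;
    return $res;
}

# two unrelated config updates that must never run in parallel:
lock_domain('ha', sub { print "update groups.cfg\n" });
lock_domain('ha', sub { print "update resources.cfg\n" });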
would be OK for me...
> On September 25, 2015 at 5:27 PM Gilberto Nunes
> wrote:
>
>
> Somebody?
>
> 2015-09-23 20:44 GMT-03:00 Gilberto Nunes :
>
> > Hi
> >
> > Can you guys adjust qemu to do thin provisioning when the VM file
> > reaches more than 500 GB or 1 TB??
> > The reason I ask
Somebody?
2015-09-23 20:44 GMT-03:00 Gilberto Nunes :
> Hi
>
> Can you guys adjust qemu to do thin provisioning when the VM file
> reaches more than 500 GB or 1 TB??
> The reason I ask for that is that it is very hard to wait for qemu-img to finish
> creating a huge VM file, mainly when we use NFS or oth
We now perform a 'sync' after 'lxc-freeze' and before
creating the snapshot, since we now mount snapshots with
'-o noload' which skips the journal entirely.
---
src/PVE/LXC.pm | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index c01b401..
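A rough sketch of that ordering (the container id and the snapshot step are
placeholders, not the actual LXC.pm code): freeze, flush dirty data with
'sync', take the snapshot, then thaw.

use strict;
use warnings;

my $ct = 100;   # placeholder container id
system('lxc-freeze', '-n', $ct) == 0 or die "lxc-freeze failed\n";
system('sync');    # dirty data reaches the backing device before the snapshot
# ... create the storage snapshot of the container volume here ...
system('lxc-unfreeze', '-n', $ct) == 0 or die "lxc-unfreeze failed\n";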
When using block device based snapshots we cannot mount the
filesystem as it's not clean, and we also can't replay the
journal without write access (as even `-o ro` writes to
devices when replaying a journal (see the linux docs under
Documentation/filesystems/ext4.txt section 3 option 'ro')).
So we
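A minimal sketch of the resulting mount (device and mountpoint are
assumptions): 'noload' tells ext3/ext4 to skip journal recovery, so mounting
the unclean snapshot read-only never writes to the underlying device.

use strict;
use warnings;

# assumed snapshot device and temporary mountpoint
my ($dev, $mnt) = ('/dev/vg0/snap_vm-100-disk-1', '/mnt/vzsnap0');
system('mount', '-o', 'ro,noload', $dev, $mnt) == 0
    or die "mount of $dev failed\n";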
Changes: Rather than a generic mount option parameter for
LXC::mountpoint_mount, we now simply always mount snapshots with
noload.
Wolfgang Bumiller (3):
vzdump:lxc: activate the right volumes
vzdump:lxc: sync and skip journal in snapshot mode
mount snapshots with the noload option
src/PVE
---
src/PVE/VZDump/LXC.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
index 858db8f..a7fafe9 100644
--- a/src/PVE/VZDump/LXC.pm
+++ b/src/PVE/VZDump/LXC.pm
@@ -105,7 +105,6 @@ sub prepare {
$task->{hostname} = $conf->
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5c34c3a..7f72daa 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3854,7 +3854,7 @@ sub set_migration_caps {
my
it's already disabled by default,
but we want to be sure in case it changes in a later release
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7f72daa..4906f2c 100644
--- a/PVE/QemuServer.pm
+++ b/PV
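For illustration, a hedged sketch of what such an explicit setting looks like
on the QMP side (which capability the patch touches is cut off above, so
'xbzrle' here is only an assumption): 'migrate-set-capabilities' takes a list
of capability/state pairs, and listing one with state false disables it
regardless of QEMU's default.

use strict;
use warnings;
use JSON;

# 'xbzrle' is an assumed example capability, not necessarily the one patched
my @caps = ({ capability => 'xbzrle', state => JSON::false });
print to_json({ execute   => 'migrate-set-capabilities',
                arguments => { capabilities => \@caps } }, { pretty => 1 });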
I have tested xbzrle & compression.
xbzrle seems to be pretty fine, I don't have bugs with it (tested with a video
player running in the guest).
Also there is no overhead on a 10GbE network.
without xbzrle:
Sep 25 12:52:18 migration speed: 1092.27 MB/s - downtime 66 ms
with xbzrle:
Sep 25 13:30:17 migrati
When using block device based snapshots we cannot mount the
filesystem as it's not clean, and we also can't replay the
journal without write access (as even `-o ro` writes to
devices when replaying a journal (see the linux docs under
Documentation/filesystems/ext4.txt section 3 option 'ro')).
So n
---
src/PVE/LXC.pm | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index c01b401..c198eaf 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -2057,7 +2057,7 @@ my $check_mount_path = sub {
# use $rootdir = undef to just ret
---
src/PVE/VZDump/LXC.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
index 858db8f..a7fafe9 100644
--- a/src/PVE/VZDump/LXC.pm
+++ b/src/PVE/VZDump/LXC.pm
@@ -105,7 +105,6 @@ sub prepare {
$task->{hostname} = $conf->
---
Changed: Moved the deactivate call from Storage.pm to the plugins as
they are also the ones dealing with the $running parameter.
PVE/Storage/Plugin.pm | 2 ++
PVE/Storage/RBDPlugin.pm | 2 ++
PVE/Storage/SheepdogPlugin.pm | 2 ++
PVE/Storage/ZFSPoolPlugin.pm | 1 +
4 files chan
---
PVE/Storage.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index c27e9cf..e4f434a 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -200,6 +200,7 @@ sub volume_snapshot_delete {
if ($storeid) {
my $scfg = storage_config($cfg, $storeid);
It works.
The wiki was also modified by wolf.
Thanks.
> Date: Fri, 25 Sep 2015 09:45:28 +0200
> From: diet...@proxmox.com
> To: moul...@hotmail.com; pve-devel@pve.proxmox.com
> Subject: Re: [pve-devel] Following tests on PvE 4.0 beta 2
>
> Maybe you should use '/dev/sdd' instead of 'dev/
> don't know if Ubuntu will release a 4.2.1 kernel soon and if it fixes that bug.
It's already updated - I am just recompiling the kernel.
https://git.proxmox.com/?p=pve-kernel.git;a=commitdiff;h=695da5a3f06c060a44ca6ee9d92261c4ef951c37
Maybe you should use '/dev/sdd' instead of 'dev/sdd'?
> On September 25, 2015 at 9:11 AM Moula BADJI wrote:
>
>
> It works when I use the GUI but never from the command line!!!
>
> > Date: Thu, 24 Sep 2015 17:56:03 +0100
> > From: moul...@hotmail.com
> > To: pve-devel@pve.proxmox.com
> > Subject: Re:
verify_blockdev_path didn't check the result of abs_path
causing commands like `pveceph createosd bad/path` to error
with a meaningless "Use of uninitialized value" message.
---
PVE/CephTools.pm | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/PVE/CephTools.pm b/PVE/CephTo
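A sketch of the fix's shape (assumed, not the literal patch): abs_path
returns undef when the path cannot be resolved, so check the result before
using it and fail with a meaningful message.

use strict;
use warnings;
use Cwd 'abs_path';

sub verify_blockdev_path {
    # assumed shape of the fix, for illustration only
    my ($rel_path) = @_;
    my $path = abs_path($rel_path);
    die "invalid path: '$rel_path'\n" if !defined($path);
    die "not a block device: '$path'\n" if ! -b $path;
    return $path;
}

print verify_blockdev_path($ARGV[0] // '/dev/sda'), "\n";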
Ah you're missing a / in front of the `dev/sdd`.
The error message is misleading, oh and I see the wiki also misses the slash
there.
Does it work if you use `-journal_dev /dev/sdd` with the first "/" included?
> On September 25, 2015 at 9:11 AM Moula BADJI wrote:
>
>
> It work's when i use a gu
>># pveceph createosd /dev/sdb -journal_dev dev/sdd
you forgot a / in the journal_dev path!
# pveceph createosd /dev/sdb -journal_dev /dev/sdd
- Original Message -
From: "moula BADJI"
To: "pve-devel"
Sent: Thursday, September 24, 2015 18:56:03
Subject: Re: [pve-devel] Following tests on PvE 4.0 beta
It works when I use the GUI but never from the command line!!!
> Date: Thu, 24 Sep 2015 17:56:03 +0100
> From: moul...@hotmail.com
> To: pve-devel@pve.proxmox.com
> Subject: Re: [pve-devel] Following tests on PvE 4.0 beta 2
>
> I tried to create a Ceph cluster and I get the same error messages:
>
> # pvece