I'm looking at the OpenStack implementation
https://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/virt-driver-numa-placement.html
and it seems that they also check whether the host NUMA nodes exist:
"hw:numa_nodes=NN - number of NUMA nodes to expose to the guest.
The most common case will be th..."
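A minimal sketch of that host-side check, assuming a Linux host where NUMA nodes appear under /sys/devices/system/node (the helper names are hypothetical; this is not the OpenStack or PVE code):

use strict;
use warnings;

# Count host NUMA nodes from the sysfs node directories.
sub host_numa_node_count {
    my @nodes = glob('/sys/devices/system/node/node[0-9]*');
    return scalar(@nodes);
}

# Refuse a guest config that asks for more NUMA nodes than the host has.
sub check_guest_numa_nodes {
    my ($requested) = @_;
    my $available = host_numa_node_count();
    die "host has only $available NUMA node(s), guest requests $requested\n"
        if $requested > $available;
    return 1;
}

check_guest_numa_nodes(2);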
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7778fb8..2414fd8 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5605,7 +5605,7 @@ sub qemu_img_convert {
my $
Also, Wolfgang added his zeroinit filter to the qemu-img command some time
ago, so it should work:
push @$cmd, '/usr/bin/qemu-img', 'convert', '-t', 'writeback', '-p', '-n';
push @$cmd, '-s', $snapname if ($snapname && $src_format eq "qcow2");
push @$cmd, '-f', $src_format, ...
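For illustration, a rough sketch of how the destination could be wrapped with that zeroinit filter so zero clusters are not rewritten on an already-zeroed target (variable names and paths are assumptions, not the actual qemu_img_convert body):

use strict;
use warnings;

my ($src_format, $dst_format) = ('qcow2', 'raw');
my ($src_path, $dst_path) = ('/tmp/src.qcow2', '/dev/vg0/vm-100-disk-1');

my $cmd = [];
push @$cmd, '/usr/bin/qemu-img', 'convert', '-t', 'writeback', '-p', '-n';
push @$cmd, '-f', $src_format, '-O', $dst_format;
# The zeroinit: prefix comes from the PVE qemu patch; it tells the block
# layer the target already reads as zeroes, so zero writes can be skipped.
push @$cmd, $src_path, "zeroinit:$dst_path";

print join(' ', @$cmd), "\n";   # would normally be handed to run_command()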
>>Why not fix qemu-img instead?
This is strange, I am pretty sure that qemu-img was skipping zero writes:
"  '-S' indicates the consecutive number of bytes (defaults to 4k) that must\n"
"  contain only zeros for qemu-img to create a sparse image during\n"
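As a small, hypothetical example of that option (paths are made up): an explicit '-S 4k' keeps the default zero-detection granularity, while '-S 0' disables the scan and fully allocates the destination:

use strict;
use warnings;

my @cmd = ('/usr/bin/qemu-img', 'convert', '-p', '-S', '4k',
           '-O', 'qcow2', '/tmp/src.raw', '/tmp/dst.qcow2');
system(@cmd) == 0 or die "qemu-img convert failed\n";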
> So I tried it by hand using dd conv=sparse, which works fine...
> I'm no expert at this, but could it be that we could move/clone
> sparse/thin disks this way if the destination is indeed a newly created
> LV (case for move/clone)?
Why not fix qemu-img instead?
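For reference, a minimal sketch of the dd-based copy quoted above, assuming the destination LV is freshly created (device paths are hypothetical):

use strict;
use warnings;

my $src = '/dev/pve/vm-100-disk-1';   # hypothetical source thin LV
my $dst = '/dev/pve/vm-100-disk-2';   # hypothetical, newly created target LV

# conv=sparse makes dd skip writing blocks that contain only zeros,
# which keeps a freshly created (zero-backed) destination thin.
my @dd = ('dd', "if=$src", "of=$dst", 'bs=4M', 'conv=sparse');
system(@dd) == 0 or die "dd failed: $?\n";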
Hi,
If I clone or move a disk (cold, VM stopped), the disk is copied using
qemu-img convert, if I'm not mistaken. For some reason, even though
qemu-img supports sparse creation, copying from one thin LVM volume to
another, or from a QCOW2 to a thin LVM volume, the destination file
always ends up fully zeroed. W...
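One way to reproduce and measure this, sketched under assumed volume names (not an official procedure): convert into a thin LV and check its allocation with lvs afterwards:

use strict;
use warnings;

my $src = '/var/lib/vz/images/100/vm-100-disk-1.qcow2';   # assumed source image
my $dst = '/dev/pve/vm-100-disk-2';                       # assumed thin LV target

system('qemu-img', 'convert', '-p', '-O', 'raw', $src, $dst) == 0
    or die "qemu-img convert failed\n";

# A data_percent near 100% for a mostly-empty source image means the
# target was fully written despite the source being sparse.
system('lvs', '-o', 'lv_name,data_percent', 'pve/vm-100-disk-2');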
Hi,
I just set up a PoC using Proxmox 4.2 & DRBD 9, and realized afterwards
that I can't snapshot those.
I remember a discussion about someone working on patching it, but I
guess it didn't make it through... I see that DRBD cleverly uses
drbdmanage to set each thin LV as primary accordingly, woul...
On 27/07/16 at 09:45, elacu...@binovo.es wrote:
This patch is a follow-up to Alexandre's rbd patch, so that we set the
cache to unsafe during restore for performance. This is the same as is
done in qemu-img convert, which defaults to the unsafe cache mode.
In our fixed Ceph cluster this has given us a 4x boost...
>>I believe we can simply remove this line since qemu allows it and just
>>applies its default policy. Alternatively we can keep a counter and
>>apply host-nodes manually, starting over at 0 when we run out of nodes,
>>but that's no better than letting qemu do this.
Well, I don't know how auto nu...
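A small sketch of the counter approach described in the quote above (illustrative only, not the PVE implementation): map each guest NUMA node to a host node, starting over at 0 when the host nodes run out:

use strict;
use warnings;

sub assign_host_nodes {
    my ($guest_nodes, $host_node_count) = @_;
    my @mapping;
    for my $i (0 .. $guest_nodes - 1) {
        # Round-robin: wrap back to host node 0 when we run out of nodes.
        push @mapping, $i % $host_node_count;
    }
    return \@mapping;
}

# Example: 4 guest nodes on a 2-node host -> host-nodes 0,1,0,1
my $map = assign_host_nodes(4, 2);
print "guest node $_ -> host node $map->[$_]\n" for 0 .. $#$map;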
---
Allows moving away from an lxc/ subdir configuration (needed by the
second patch, otherwise Arch users have to wait for a systemd package
upgrade for the change to take effect).
src/PVE/LXC/Setup/Base.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/LXC/Setup/Bas
As lxc's archlinux config doesn't set the lxc/ tty subdir by
default either.
---
src/PVE/LXC/Setup/ArchLinux.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/PVE/LXC/Setup/ArchLinux.pm b/src/PVE/LXC/Setup/ArchLinux.pm
index 1e60fa6..e93293f 100644
--- a/src/PVE/LXC/Se
Sorry, this was on my /tmp, I didn't intend to re-send, please ignore.
On 27/07/16 at 09:45, elacu...@binovo.es wrote:
From: Eneko Lacunza
Signed-off-by: Eneko Lacunza
---
.../0054-vma-force-enable-rbd-cache-for-qmrestore.patch | 17 +
debian/patches/series
From: Eneko Lacunza
Signed-off-by: Eneko Lacunza
---
.../0054-vma-force-enable-rbd-cache-for-qmrestore.patch | 17 +
debian/patches/series | 1 +
2 files changed, 18 insertions(+)
create mode 100644 debian/patches/pve/0054-vma-force-enable-rb
From: Eneko Lacunza
Signed-off-by: Eneko Lacunza
---
debian/patches/pve/0055-vma-restore-set-cache-unsafe.patch | 14 ++
debian/patches/series | 1 +
2 files changed, 15 insertions(+)
create mode 100644 debian/patches/pve/0055-vma-restore-set-c
This patch is a follow-up to Alexandre's rbd patch, so that we set the
cache to unsafe during restore for performance. This is the same as is
done in qemu-img convert, which defaults to the unsafe cache mode.
In our fixed Ceph cluster this has given us a 4x boost (from 15MB/s to
94MB/s) with a 44% sparse 10GB backup (...
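The qemu-img analogy can be tried directly; a hedged example with made-up paths (the actual vma patch sets the cache flags inside the restore code instead):

use strict;
use warnings;

# qemu-img convert already defaults to an unsafe destination cache; passing
# '-t unsafe' just makes that explicit, for comparison with the restore path.
my @cmd = ('/usr/bin/qemu-img', 'convert', '-p', '-t', 'unsafe',
           '-O', 'raw', '/tmp/vm-100-disk-1.raw', 'rbd:rbd/vm-100-disk-1');
system(@cmd) == 0 or die "qemu-img convert failed\n";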
> On July 26, 2016 at 2:18 PM Alexandre DERUMIER wrote:
>
>
> > >>Issue #1: The above code currently does not honor our 'hostnodes' option
> > >>and breaks when trying to use them together.
>
> Also I need to check how to allocate hugepages when hostnodes is defined
> with a range like: "ho...
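An illustrative sketch only (the hostnodes range syntax and the 2 MB page size are assumptions): expand a range such as "0-1" and reserve hugepages per host node through the standard sysfs knobs:

use strict;
use warnings;

sub parse_range {
    my ($spec) = @_;
    my @nodes;
    for my $part (split /,/, $spec) {
        if ($part =~ /^(\d+)-(\d+)$/) { push @nodes, $1 .. $2; }
        elsif ($part =~ /^(\d+)$/)    { push @nodes, $1; }
        else { die "invalid hostnodes spec: $spec\n"; }
    }
    return \@nodes;
}

sub reserve_hugepages {
    my ($node, $count) = @_;
    my $path = "/sys/devices/system/node/node$node"
             . "/hugepages/hugepages-2048kB/nr_hugepages";
    open(my $fh, '>', $path) or die "cannot open $path: $!\n";
    print $fh "$count\n";
    close($fh);
}

my $nodes = parse_range("0-1");          # hypothetical hostnodes value
reserve_hugepages($_, 512) for @$nodes;  # 512 x 2MB pages per node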