Hi,
On 26/07/16 at 17:28, Alexandre DERUMIER wrote:
I wonder if this couldn't be fixed directly in rbd.c block driver
I have sent a patch, can you test?
I got it, I will be testing this morning and will report back; this really
seems the correct thing to do (fixing the rbd block "bug").
26.07.2016 19:08, Andreas Steinel wrote:
>> Why not use the qcow2 format over generic NFS? It will give you
>> snapshot-rollback features, and I don't think with much worse speed
>> than these features at the ZFS level.
> I want to have send/receive also and I use QCOW2 on top of ZFS to have
> "swit
On Mon, Jul 25, 2016 at 9:07 PM, Dmitry Petuhov wrote:
> Why not use the qcow2 format over generic NFS? It will give you
> snapshot-rollback features, and I don't think with much worse speed
> than these features at the ZFS level.
>
I want to have send/receive also and I use QCOW2 on top of ZFS to
Signed-off-by: Alexandre Derumier
---
...-rbd_cache_writethrough_until_flush-with-.patch | 29 ++
debian/patches/series | 1 +
2 files changed, 30 insertions(+)
create mode 100644
debian/patches/pve/0054-rbd-disable-rbd_cache_writethrough_until_
>>I wonder if this couldn't be fixed directly in rbd.c block driver
I have sent a patch, can you test?
----- Original Mail -----
From: "aderumier"
To: "pve-devel"
Sent: Tuesday, July 26, 2016 16:46:58
Subject: Re: [pve-devel] [PATCH] Add patch to improve qmrestore to RBD, activating writeback cache
> With openvz 7 just being released
> (https://lists.openvz.org/pipermail/announce/2016-July/000664.html), are there
> any possible plans to add openvz back into the latest proxmox versions?
No (no way). We moved to LXC a long time ago...
On 07/26/2016 04:52 PM, Alex Wacker wrote:
Hello,
With openvz 7 just being released
(https://lists.openvz.org/pipermail/announce/2016-July/000664.html), are there
any possible plans to add openvz back into the latest proxmox versions?
OpenVZ, while keeping the same name, is now not the container t
Hello,
With openvz 7 just being released
(https://lists.openvz.org/pipermail/announce/2016-July/000664.html), are there
any possible plans to add openvz back into the latest proxmox versions?
--
Alex Wacker
I wonder if this couldn't be fixed directly in rbd.c block driver
currently:
block/rbd.c
    if (flags & BDRV_O_NOCACHE) {
        rados_conf_set(s->cluster, "rbd_cache", "false");
    } else {
        rados_conf_set(s->cluster, "rbd_cache", "true");
    }
and in block.c
int bdrv_parse_cache
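For illustration, a minimal sketch of what such a fix directly in block/rbd.c
could look like (an assumption on my part, not the exact patch posted in this
thread): when writeback is requested, also disable librbd's
writethrough-until-flush default so the cache is effective even for tools that
never send a flush:

    if (flags & BDRV_O_NOCACHE) {
        rados_conf_set(s->cluster, "rbd_cache", "false");
    } else {
        rados_conf_set(s->cluster, "rbd_cache", "true");
        /* sketch: librbd defaults to rbd_cache_writethrough_until_flush=true,
         * i.e. the cache stays in writethrough mode until a first flush is
         * seen; clearing it makes writeback effective immediately for tools
         * like qmrestore or qemu-img convert that may never flush */
        rados_conf_set(s->cluster,
                       "rbd_cache_writethrough_until_flush", "false");
    }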
Hi,
I'm unsure whether it would be better to have one patch or two
independent patches.
On 26/07/16 at 16:02, Alexandre DERUMIER wrote:
Hi,
I think that qemu-img convert has the same problem
I have found a commit from 2015
https://git.greensocs.com/fkonrad/mttcg/commit/80ccf93b8
Hi,
I think that qemu-img convert has the same problem
I have found a commit from 2015
https://git.greensocs.com/fkonrad/mttcg/commit/80ccf93b884a2edab5ec62634758e942bba81b7c
By default it doesn't send a flush (cache=unsafe), and the commit adds a flush on
image closing, because some storage like s
From: Eneko Lacunza
Signed-off-by: Eneko Lacunza
---
.../0054-vma-force-enable-rbd-cache-for-qmrestore.patch | 17 +
debian/patches/series | 1 +
2 files changed, 18 insertions(+)
create mode 100644
debian/patches/pve/0054-vma-force-enable-rb
This time this is generated against the git repo.
The patch issues a "bogus" flush after opening the restore destination device
to enable the rbd cache (writeback/coalescing).
This seems to work. I also just tested it with cgmanager disabled and
using cgroup namespaces. Seems to be functioning so far.
With cgroup namespaces however, manual intervention is required for
people who use custom apparmor profiles, because they must be based on
lxc-container-default-cgns inste
applied
On Mon, Jul 25, 2016 at 08:33:29AM +0200, Wolfgang Bumiller wrote:
> call encode_rfc3548 explicitly instead as newer versions of
> the base32 package will drop this import scheme (stretch)
> ---
> One less breakage to worry about when we move to newer debian
> releases in the future.
>
>
applied
On Mon, Jul 18, 2016 at 01:47:53PM +0200, Dominik Csapak wrote:
> mostly copied from QemuServer
>
> Signed-off-by: Dominik Csapak
> ---
> src/PVE/CLI/pct.pm | 40
> 1 file changed, 40 insertions(+)
>
> diff --git a/src/PVE/CLI/pct.pm b/src/PVE/C
applied
On Mon, Jul 18, 2016 at 10:50:31AM +0200, Dominik Csapak wrote:
> we did not check if some values were hash refs in
> the verbose output.
>
> this patch adds a recursive hash print sub and uses it
>
> Signed-off-by: Dominik Csapak
> ---
> PVE/CLI/qm.pm | 29 +++-
applied
On Tue, Jul 19, 2016 at 09:17:36AM +0200, Wolfgang Bumiller wrote:
> Otherwise you need to shutdown a VM to disable protection,
> which is inconvenient for a few tasks such as for instance
> deleting an unused disk.
> ---
> This is already the case for containers btw.
>
> PVE/QemuServer.
(rebased and) applied
On Tue, Jul 12, 2016 at 12:14:01PM +0200, Emmanuel Kasper wrote:
> Both terms are rather domain specific and should not be translated.
> See http://pve.proxmox.com/pipermail/pve-devel/2016-July/021975.html
> for the problems of Monitor Host being wrongly translated
> ---
> w
applied with whitespace cleanup
On Fri, Jul 15, 2016 at 10:34:37AM +0200, Wolfgang Bumiller wrote:
> ---
> * anchored the regex
> * added regexText with an example
> * changed the empty text to PVE.Util.noneText
>
> www/manager6/dc/OptionView.js | 37 +
>
Hi Eneko,
On 07/26/2016 02:05 PM, Eneko Lacunza wrote:
Hi Thomas,
On 26/07/16 at 13:55, Thomas Lamprecht wrote:
Hi, first thanks for the contribution! Not commenting on the code
itself but we need a CLA for being able to add your contributions,
we use the Harmony CLA, a community-cente
> >>Issue #1: The above code currently does not honor our 'hostnodes' option
> >>and breaks when trying to use them together.
Also I need to check how to allocate hugepages when hostnodes is defined with a
range like "hostnodes:0-1".
>>Useless, yes, which is why I'm wondering whether this
Hi Thomas,
On 26/07/16 at 13:55, Thomas Lamprecht wrote:
Hi, first thanks for the contribution! Not commenting on the code
itself but we need a CLA for being able to add your contributions,
we use the Harmony CLA, a community-centered CLA for FOSS projects,
see
https://pve.proxmox.com/wi
Could you split this into two (or more) parts? Mixing cosmetic changes
like variable renaming and style fixes with actual changes makes it hard
to read (and is also bad for git blameability ;)). If the two issues are
easily split into their own commits, that might make sense as well.
mhmm, well
On Tue, Jul 26, 2016 at 01:35:50PM +0200, Alexandre DERUMIER wrote:
> Hi Wolfgang,
>
> I just came back from holiday.
Hope you had a good time :-)
>
>
>
> >>Issue #1: The above code currently does not honor our 'hostnodes' option
> >>and breaks when trying to use them together.
>
> mmm ind
On Tue, Jul 26, 2016 at 11:53:29AM +0200, Dominik Csapak wrote:
> this patch fixes an issue where we assembled the influxdb
> key/value pairs into the wrong measurement
>
> also, we only allowed integer fields,
> excluding all cpu, load and wait measurements
>
> this patch fixes both issues with
Hi, first thanks for the contribution! Not commenting on the code itself
but we need a CLA for being able to add your contributions,
we use the Harmony CLA, a community-centered CLA for FOSS projects,
see
https://pve.proxmox.com/wiki/Developer_Documentation#Software_License_and_Copyright
Also
>>This is how it works right now ;) - not flushing doesn't mean the system
>>won't write data; it can just do so when it thinks it is a good time.
I think this is true with filesystems (the fs will try to flush at regular intervals),
but I'm not sure when you write to a block device without doing any flush.
I
Hi Wolfgang,
I just came back from holiday.
>>Issue #1: The above code currently does not honor our 'hostnodes' option
>>and breaks when trying to use them together.
mmm indeed. I think this can be improved. I'll try to check that next week.
>>Issue #2: We create one node per *virtual* so
Hi,
I just tested this patch; it works as well as the previous one. Instead of
setting rbd_cache_writethrough_until_flush=false in devfn, it issues a bogus
flush so that Ceph activates the rbd cache.
---
Index: b/vma.c
===
--- a/vma.c
+++
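As an illustration of the idea, a hedged sketch (not the exact diff above) of
the "bogus flush" right after the restore target has been opened, assuming a
BlockDriverState *bs for the destination and the devfn string from the command
line:

    /* sketch: librbd keeps its cache in writethrough mode until it sees a
     * first flush (rbd_cache_writethrough_until_flush), so one flush right
     * after open switches it to writeback for the whole restore */
    if (strncmp(devfn, "rbd:", strlen("rbd:")) == 0) {
        int ret = bdrv_flush(bs);
        if (ret < 0) {
            g_warning("initial flush on '%s' failed: %s",
                      devfn, strerror(-ret));
        }
    }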
Hi,
On 26/07/16 at 10:32, Alexandre DERUMIER wrote:
There is no reason to flush a restored disk until just the end, really.
Issuing flushes every x MB could hurt other storages without need.
I'm curious to see host memory usage of a big local file storage restore
(100GB), with writeback
On 26/07/16 at 13:15, Dietmar Maurer wrote:
Index: b/vma.c
===
--- a/vma.c
+++ b/vma.c
@@ -328,6 +328,12 @@ static int extract_content(int argc, cha
}
+/* Force rbd cache */
+if (0 == str
> Index: b/vma.c
> ===
> --- a/vma.c
> +++ b/vma.c
> @@ -328,6 +328,12 @@ static int extract_content(int argc, cha
> }
>
>
> +/* Force rbd cache */
> +if (0 == strncmp(devfn, "rbd:", strlen("rbd:"
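For context, a hedged sketch of what this earlier devfn-based variant could
look like (illustrative only, the quoted diff is truncated above); it relies on
the rbd: filename syntax accepting key=value options which block/rbd.c forwards
to rados_conf_set:

    /* sketch: force the writeback cache for RBD restore targets by appending
     * cache options to the device string before it is opened */
    const char *open_devfn = devfn;
    char *rbd_devfn = NULL;
    if (strncmp(devfn, "rbd:", strlen("rbd:")) == 0) {
        rbd_devfn = g_strdup_printf(
            "%s:rbd_cache=true:rbd_cache_writethrough_until_flush=false",
            devfn);
        open_devfn = rbd_devfn;
    }
    /* ... open open_devfn instead of devfn, then g_free(rbd_devfn) ... */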
Currently we have the following code in hugepages_topology():
    for (my $i = 0; $i < $MAX_NUMA; $i++) {
        next if !$conf->{"numa$i"};
        my $numa = PVE::QemuServer::parse_numa($conf->{"numa$i"});
        (...)
        $hugepages_topology->{$hugepages_size}->{$i} += hugepages_nr($numa_mem
Hi all,
This is my first code contribution for Proxmox. Please correct my
wrongdoings with patch creation/code style/solution etc. :-)
This small patch adds a flag to devfn to force rbd cache (writeback
cache) activation for qmrestore, to improve performance on restore to
RBD. This follows o
this patch fixes an issue where we assembled the influxdb
key/value pairs into the wrong measurement
also, we only allowed integer fields,
excluding all cpu, load and wait measurements
this patch fixes both issues with a rewrite of the
recursive build_influxdb_payload sub
Signed-off-by: Dominik
For upstream commits 926cde5f3e4d2504ed161ed0 and
cc96677469388bad3d664793 no CVE number has been assigned yet.
Signed-off-by: Thomas Lamprecht
---
Re-added CVE-2016-2391 and CVE-2016-5126
patch 0001-vga-add-sr_vbe-register-set.patch is only moved in the series file
to match the commit order of the
Otherwise they will be included if a build machine has the respective
packages installed.
Signed-off-by: Thomas Lamprecht
---
debian/rules | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/debian/rules b/debian/rules
index 66038de..7b9b732 100755
--- a/debian/rules
+++ b/debian/rule
applied
On Fri, Jul 22, 2016 at 07:53:53AM +0200, Fabian Grünbichler wrote:
> this API call changes the config quite drastically, and as
> such should not be possible while an operation that holds a
> lock is ongoing (e.g., migration, backup, snapshot).
> ---
> PVE/API2/Qemu.pm | 2 ++
> 1 file c
applied both patches
On Wed, Jul 13, 2016 at 12:44:12PM +0200, Fabian Grünbichler wrote:
> this might otherwise lead to volumes activated on the
> source and target node, which is problematic for at least
> LVM and Ceph.
> ---
> PVE/API2/Qemu.pm | 1 +
> 1 file changed, 1 insertion(+)
>
> diff -
applied and amended the `make download`ed source archive and bump
message
On Mon, Jul 25, 2016 at 10:42:36AM +0200, Fabian Grünbichler wrote:
> ---
> Note: requires "make download" when applying
>
> ...470-KEYS-potential-uninitialized-variable.patch | 94
> ...synchronization-betwe
>>There is no reason to flush a restored disk until just the end, really.
>>Issuing flushes every x MB could hurt other storages without need.
I'm curious to see the host memory usage of a big local file storage restore
(100GB), with writeback and without any flush?
----- Original Mail -----
From: "En
Hi,
On 26/07/16 at 10:04, Alexandre DERUMIER wrote:
I think qmrestore isn't issuing any flush request (until maybe the end),
Needs to be checked! (but I think we open the restore block storage with
writeback, so I hope we send a flush)
so for ceph storage backend we should set
rbd_cache_wr
>>I think qmrestore isn't issuing any flush request (until maybe the end),
Needs to be checked! (but I think we open the restore block storage with
writeback, so I hope we send a flush)
>>so for ceph storage backend we should set
>>rbd_cache_writethrough_until_flush=false for better performance.
>>But you can try to assemble larger blocks, and write them once you get
>>an out of order block...
>>I always thought the ceph libraries do (or should do) that anyway?
>>(write combining)
librbd does this if writeback is enabled (it merges coalesced blocks).
But I'm not sure (don't rememb