Hi all,
The official end of life for Debian Wheezy has been announced for April
26, from which date maintenance will be taken over by the LTS team.
https://www.debian.org/News/2016/20160212
What does this mean for Proxmox 3.4?
--
Hilsen/Regards
Michael Rasmussen
Get my public GnuPG keys:
michae
If we get an 'EWRONG_NODE' error from the migration we have no sane
way out. If we then place the service in the started state we get the
'EWRONG_NODE' error again, and it will even place the service in
the migration state again (when it's not restricted by a group) and
thus result in an infinite starte
If a service is in the error state we get a not particularly useful log
message about every 5 seconds; this adds up rather quickly and is
not very helpful.
This changes the behaviour so that we get an initial log message
and then at most one per minute.
Signed-off-by: Thomas Lamprecht
---
src/PVE/HA/LRM.p
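As an illustration of the rate limiting described in the patch above, a minimal
sketch assuming the LRM work loop fires roughly every 5 seconds; the sub and
variable names are hypothetical, not taken from the actual patch:

#!/usr/bin/perl
use strict;
use warnings;

# remember when we last logged an error for each service (hypothetical cache)
my $last_error_log = {};

# log immediately on the first call, then at most once per minute afterwards
sub log_service_error {
    my ($sid, $msg) = @_;
    my $now = time();
    my $last = $last_error_log->{$sid} // 0;
    return if ($now - $last) < 60;
    warn "service '$sid' in error state: $msg\n";
    $last_error_log->{$sid} = $now;
}

# simulated work-loop calls arriving within the same minute: only one is logged
log_service_error('vm:100', 'migration failed') for (1..3);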
Description of the problem, imagine the following:
We get the CRM command to migrate 'vm:100' from A to B.
Now the migration fails; normally we would then be placed in the
started state on the source node A by the CRM when it processes
our result.
But if the CRM didn't process our result before
We want to give the error state priority over EWRONG_NODE as a
service may be in the error state because of EWRONG_NODE
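A rough sketch of the intended priority, with hypothetical state and field
names (not the real LRM code): the error state has to be checked before the
wrong-node check, otherwise the service would be put back into the migrate
state and loop forever.

#!/usr/bin/perl
use strict;
use warnings;

# decide what to report for a service that is registered on this node
sub check_service {
    my ($service, $local_node) = @_;

    # error state wins: the service may be in error *because* of EWRONG_NODE,
    # so reporting EWRONG_NODE again would only restart the migrate loop
    return 'error' if $service->{state} eq 'error';

    # only afterwards complain that the service runs on the wrong node
    return 'EWRONG_NODE' if $service->{node} ne $local_node;

    return 'started';
}

print check_service({ state => 'error', node => 'B' }, 'A'), "\n"; # error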
Change the error message a bit and add a possibility to not log
the error message; this will be used in a future patch to reduce
log spam.
Signed-off-by: Thomas Lamprecht
---
---
...6-2391-usb-ohci-avoid-multiple-eof-timers.patch | 40 ++
debian/patches/series | 1 +
2 files changed, 41 insertions(+)
create mode 100644 debian/patches/extra/CVE-2016-2391-usb-ohci-avoid-multiple-eof-timers.patch
diff --git a/debian/pa
The 'datachanged' event was not reloading the store with ExtJS 5,
but 'refresh' does.
According to the API description, 'refresh' seems to be what we need:
http://docs.sencha.com/extjs/5.1/5.1.0-apidocs/#!/api/Ext.data.AbstractStore-event-refresh
Also remove the deprecated readme (ExtJS 6 does not have the 'c
---
www/manager6/panel/ConfigPanel.js | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/www/manager6/panel/ConfigPanel.js b/www/manager6/panel/ConfigPanel.js
index bfa9211..94f8fbe 100644
--- a/www/manager6/panel/ConfigPanel.js
+++ b/www/manager6/panel/ConfigPanel.js
@@ -1
On 16.02.2016 at 15:50, Dmitry Petuhov wrote:
> 16.02.2016 13:20, Dietmar Maurer wrote:
>>> Storage Backend is ceph using 2x 10Gbit/s and i'm able to read from it
>>> with 500-1500MB/s. See below for an example.
>> The backup process reads 64KB blocks, and it seems this slows down ceph.
>> This i
16.02.2016 13:20, Dietmar Maurer wrote:
Storage Backend is ceph using 2x 10Gbit/s and i'm able to read from it
with 500-1500MB/s. See below for an example.
The backup process reads 64KB blocks, and it seems this slows down ceph.
This is a known behavior, but I found no solution to speed it up.
J
First, remove trailing whitespace from log messages on state changes.
This needs to touch some regression tests, but with no change in
semantics.
Second, add a missing parenthesis in the "fixup service location"
message. This needs no regression test log.expect changes.
Signed-off-by: Thomas Lamprecht
This fixes a bug introduced by commit 9da84a0, which set the wrong
hash when a disabled service got a migrate/relocate command.
We set "node => $target"; while our state machine could handle that,
we got some "uninitialized value" warnings when migrating a disabled
service to an inactive LRM. Better
Helpful when persistent logging is on (use a simple check for the
existence of /var/log/journal), as otherwise we get a timeout from
the syslog API call because journalctl by default reads everything.
Signed-off-by: Thomas Lamprecht
---
PVE/API2/Nodes.pm | 21 ++---
1 file changed, 18 insertions
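A minimal sketch of that check, assuming the fix simply bounds how far back
journalctl reads when a persistent journal is present; the sub name and the
exact journalctl options are assumptions, not necessarily what the patch uses:

#!/usr/bin/perl
use strict;
use warnings;

# build the journalctl command for the syslog API call (hypothetical helper)
sub journalctl_cmd {
    my @cmd = ('journalctl', '--no-pager');
    if (-d '/var/log/journal') {
        # persistent journal: without a limit journalctl reads everything,
        # which can take long enough to run into the API timeout
        push @cmd, '--since', 'yesterday';
    }
    return @cmd;
}

print join(' ', journalctl_cmd()), "\n";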
Signed-off-by: Thomas Lamprecht
---
src/PVE/Tools.pm | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
index 6303d20..f597b8d 100644
--- a/src/PVE/Tools.pm
+++ b/src/PVE/Tools.pm
@@ -1061,7 +1061,7 @@ sub dump_logfile {
}
sub dump_jour
If persistent logging is turned on this is needed, as journalctl reads
everything, which after a few weeks is a lot. This isn't ideal, since
for really long uptimes there is still a lot to read, but as it already
improves things it's enough for me for now.
---
debian/control | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/debian/control b/debian/control
index 1bccd45..cdb2124 100644
--- a/debian/control
+++ b/debian/control
@@ -3,7 +3,7 @@ Section: admin
Priority: optional
Maintainer: Proxmox Support Team
Build-Depends: debhe
On 16.02.2016 at 12:58, Stefan Priebe - Profihost AG wrote:
> On 16.02.2016 at 11:55, Dietmar Maurer wrote:
>>> Is it enough to just change these:
>>
>> The whole backup algorithm is based on 64KB blocksize, so it
>> is not trivial (or impossible?) to change that.
>>
>> Besides, I do not underst
From: Alen Grizonic
Signed-off-by: Alen Grizonic
---
src/PVE/Firewall.pm | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 3057d21..83421cc 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -1190,7 +1190,9 @@ my $a
On 16.02.2016 at 11:55, Dietmar Maurer wrote:
>> Is it enough to just change these:
>
> The whole backup algorithm is based on 64KB blocksize, so it
> is not trivial (or impossible?) to change that.
>
> Besides, I do not understand why reading 64KB is slow - ceph libraries
> should have/use a re
Additionally there's now a way to specify ipv6-only or
ipv4-only macros.
---
src/PVE/Firewall.pm | 30 ++
1 file changed, 26 insertions(+), 4 deletions(-)
diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index a39cf6d..3057d21 100644
--- a/src/PVE/Firewall.pm
+++
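Purely to illustrate the idea, a sketch of how a macro rule could carry an IP
version restriction so the compiler can skip it for the other address family;
the 'ipversion' key and this structure are assumptions, not necessarily what
the patch actually implements:

#!/usr/bin/perl
use strict;
use warnings;

# hypothetical macro table where single rules can be marked ipv4- or ipv6-only
my $macros = {
    'Ping' => [
        { action => 'PARAM', proto => 'icmp', dport => 'echo-request', ipversion => 4 },
        { action => 'PARAM', proto => 'icmpv6', dport => 'echo-request', ipversion => 6 },
    ],
};

# return only the rules usable for the requested IP version
sub macro_rules {
    my ($name, $ipversion) = @_;
    return grep {
        !defined($_->{ipversion}) || $_->{ipversion} == $ipversion
    } @{$macros->{$name}};
}

printf "%d rule(s) for ipv6\n", scalar(macro_rules('Ping', 6));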
applied
On Tue, 16 Feb 2016 11:55:07 +0100 (CET)
Dietmar Maurer wrote:
>
> Besides, I do not understand why reading 64KB is slow - ceph libraries
> should have/use a reasonable readahead cache to make it fast?
>
Due to the nature of the operation, that reading is considered a random
block read by ceph, so
> Is it enough to just change these:
The whole backup algorithm is based on 64KB blocksize, so it
is not trivial (or impossible?) to change that.
Besides, I do not understand why reading 64KB is slow - ceph libraries
should have/use a reasonable readahead cache to make it fast?
On 16.02.2016 at 11:20, Dietmar Maurer wrote:
>> Storage Backend is ceph using 2x 10Gbit/s and i'm able to read from it
>> with 500-1500MB/s. See below for an example.
>
> The backup process reads 64KB blocks, and it seems this slows down ceph.
> This is a known behavior, but I found no solution
Add an unusedX entry for the old value when a mpY entry is
updated. This is already done on mp deletion.
---
src/PVE/LXC.pm | 6 ++
1 file changed, 6 insertions(+)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index a737fc0..84aba83 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1352,6 +
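The idea in short (hypothetical sub name, simplified config layout): before an
mpX entry is overwritten, record the old volume under the first free unusedN
key, just as it is already done when a mount point is deleted.

#!/usr/bin/perl
use strict;
use warnings;

# store $volid under the first free unusedN key of $conf (sketch only)
sub record_unused_volume {
    my ($conf, $volid) = @_;
    for (my $i = 0; $i < 256; $i++) {
        my $key = "unused$i";
        if (!defined($conf->{$key})) {
            $conf->{$key} = $volid;
            return $key;
        }
    }
    die "unable to add unused volume - too many entries\n";
}

my $conf = { mp0 => 'local:201/vm-201-disk-1.raw' };
# mp0 is updated to a new volume, so keep the old one around as unused0
record_unused_volume($conf, $conf->{mp0});
$conf->{mp0} = 'local:201/vm-201-disk-2.raw';
print "$_: $conf->{$_}\n" for sort keys %$conf;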
> Storage Backend is ceph using 2x 10Gbit/s and i'm able to read from it
> with 500-1500MB/s. See below for an example.
The backup process reads 64KB blocks, and it seems this slows down ceph.
This is a known behavior, but I found no solution to speed it up.
---
src/PVE/Firewall.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 9806ab8..30b03c6 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -744,7 +744,9 @@ my $icmpv6_type_names = {
'echo-reply' => 1,
'router-solicitation'
---
src/PVE/Firewall.pm | 6 ++
1 file changed, 6 insertions(+)
diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 47a1aea..b227d70 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -137,6 +137,12 @@ my $pve_ipv6fw_macros = {
'Ping' => [
{ action => 'PARAM',
---
src/PVE/Firewall.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index b227d70..9806ab8 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -143,6 +143,9 @@ my $pve_ipv6fw_macros = {
{ action => 'PARAM', proto => 'icmpv6', d
---
src/PVE/Firewall.pm | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 30b03c6..a39cf6d 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -1675,11 +1675,13 @@ sub ruleset_generate_cmdstr {
if ($rule->{dp
Backport (cleanly cherry-picked) of patches for NDP and DHCPv6 macros,
spelling and numeric icmpv6 types.
Wolfgang Bumiller (4):
ipv6 neighbor discovery and solicitation macros
add DHCPv6 macro
ip6tables accepts both spellings of the word neighbor
allow numeric icmp types
src/PVE/Firewal
On 16.02.2016 at 11:02, Martin Waschbüsch wrote:
>
>> On 16.02.2016 at 10:32, Stefan Priebe - Profihost AG wrote:
>>
>> On 16.02.2016 at 09:57, Martin Waschbüsch wrote:
>>> Hi Stefan,
>>
>>>> This is PVE 3.4 running Qemu 2.4
>>>
>>> To me this looks like the compression is the limiting fac
Stefan,
> The output after 15 minutes is:
> INFO: starting new backup job: vzdump 132 --remove 0 --mode snapshot
> --storage vmbackup --node 1234
> INFO: Starting Backup of VM 132 (qemu)
> INFO: status = running
> INFO: update VM 132: -lock backup
> INFO: backup mode: snapshot
> INFO: ionice prior
If you have cgroup restrictions on your VM, the VM is backed up with
those restrictions.
2016-02-16 12:02 GMT+02:00 Martin Waschbüsch :
>
> > On 16.02.2016 at 10:32, Stefan Priebe - Profihost AG <s.pri...@profihost.ag> wrote:
> >
> > On 16.02.2016 at 09:57, Martin Waschbüsch wrote:
> >> Hi Stef
> On 16.02.2016 at 10:32, Stefan Priebe - Profihost AG wrote:
>
> On 16.02.2016 at 09:57, Martin Waschbüsch wrote:
>> Hi Stefan,
>
>>> This is PVE 3.4 running Qemu 2.4
>>
>> To me this looks like the compression is the limiting factor? What speed do
>> you get for this NFS mount when jus
On 16.02.2016 at 09:57, Martin Waschbüsch wrote:
> Hi Stefan,
>> This is PVE 3.4 running Qemu 2.4
>
> To me this looks like the compression is the limiting factor? What speed do
> you get for this NFS mount when just copying an existing file?
Which compression? There is only FS compression on
On 16.02.2016 at 09:54, Andreas Steinel wrote:
> Hi Stefan,
>
> That's really slow.
Yes
> I use a similar setup, but with ZFS and I backup 6 nodes in parallel to
> the storage and saturate the 1 GBit network connection.
Currently vzdump / qemu only uses around 100kb/s of the 10Gbit/s
connec
Currently we leave orphaned vmstate files when we restore a
backup over a VM which has snapshots with saved RAM state.
This patch deletes those files on a restore.
Signed-off-by: Dominik Csapak
---
PVE/QemuServer.pm | 12
1 file changed, 12 insertions(+)
diff --git a/PVE/QemuServ
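Roughly what such a cleanup could look like, as a sketch under assumptions:
the old config hash keeps its snapshots under 'snapshots', each with an
optional 'vmstate' volume id, and PVE::Storage::vdisk_free() removes a volume;
the helper name is hypothetical and this is not the literal patch:

#!/usr/bin/perl
use strict;
use warnings;
use PVE::Storage;

# free saved RAM state volumes referenced by the config that gets replaced
sub cleanup_orphaned_vmstates {
    my ($storecfg, $oldconf) = @_;
    foreach my $snapname (sort keys %{$oldconf->{snapshots} // {}}) {
        my $vmstate = $oldconf->{snapshots}->{$snapname}->{vmstate};
        next if !$vmstate;
        eval { PVE::Storage::vdisk_free($storecfg, $vmstate); };
        warn "could not remove vmstate '$vmstate' - $@" if $@;
    }
}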
Hi Stefan,
> On 16.02.2016 at 09:22, Stefan Priebe - Profihost AG wrote:
>
> Hi,
>
> is there any way to speed up PVE Backups?
>
> I'm trying to evaluate the optimal method doing backups but they took
> very long.
>
> I'm trying to use vzdump on top of nfs on top of btrfs using zlib
> com
Hi Stefan,
That's really slow.
I use a similar setup, but with ZFS, and I back up 6 nodes in parallel to
the storage, saturating the 1 GBit network connection.
I use LZOP on the Proxmox side as the best tradeoff between size and
online-compression speed.
On Tue, Feb 16, 2016 at 9:22 AM, Stefan Prie
Hi,
is there any way to speed up PVE Backups?
I'm trying to evaluate the optimal method for doing backups, but they
take very long.
I'm trying to use vzdump on top of nfs on top of btrfs using zlib
compression.
The target FS is totally idle but the backup is running at a very low speed.
The output